What is AWS HealthOmics MCP Server?
AWS HealthOmics MCP Server is a specialized Model Context Protocol (MCP) server that bridges AI applications and the AWS HealthOmics platform. It exposes secure, standardized endpoints so LLMs and AI coding tools can create, manage, and introspect life science workflows written in languages such as WDL, Nextflow, and CWL on AWS HealthOmics. The server streamlines the orchestration of genomics and bioinformatics pipelines, abstracting the underlying complexity behind AI-native function calls.
How to Configure
- Prerequisites:
  - Ensure you have Python (>=3.10) installed (preferably via Astral’s `uv`).
  - Set up AWS credentials (profile or environment variables) with sufficient permissions to use AWS HealthOmics (see the sanity-check sketch after this list).
- Installation:
  - Install or update the server using the following command: `uvx awslabs.aws-healthomics-mcp-server@latest`
  - Or add it to your MCP client configuration (see samples below).
- Client Configuration:
- Add an entry to your MCP client config file (e.g.,
mcp.json
,cline_mcp_settings.json
, etc.):"awslabs.aws-healthomics-mcp-server": { "command": "uvx", "args": ["awslabs.aws-healthomics-mcp-server@latest"], "env": { "AWS_PROFILE": "your-aws-profile", "AWS_REGION": "us-east-1", "FASTMCP_LOG_LEVEL": "ERROR" } }
- Add an entry to your MCP client config file (e.g.,
- Optional Parameters:
  - Customize with variables such as workflow language, pipeline directory, or concurrency settings, as described in the individual server's README.
- Container Option:
  - Build and run in Docker as:

    ```bash
    docker build -t awslabs/aws-healthomics-mcp-server ./src/aws-healthomics-mcp-server
    docker run --rm -it -e AWS_PROFILE=your-aws-profile -e AWS_REGION=us-east-1 awslabs/aws-healthomics-mcp-server:latest
    ```
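Before wiring the server into a client, it is worth confirming that the AWS profile and region you plan to reference can actually authenticate and reach the HealthOmics APIs. The following is a minimal sanity-check sketch using boto3 (independent of the server itself); the profile and region values are placeholders.

```python
# Sanity check: confirm that the AWS profile/region intended for the MCP server can
# authenticate and call AWS HealthOmics. Profile and region values are placeholders.
import boto3

session = boto3.Session(profile_name="your-aws-profile", region_name="us-east-1")

# Verify the credentials resolve to an identity.
identity = session.client("sts").get_caller_identity()
print("Authenticated as:", identity["Arn"])

# Verify the HealthOmics API is reachable with these permissions
# (AWS HealthOmics uses the "omics" service name in the SDK).
omics = session.client("omics")
runs = omics.list_runs(maxResults=1)
print("HealthOmics reachable; sample runs returned:", len(runs.get("items", [])))
```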
How to Use
- Discovery:
  - Use your AI assistant’s tool-list feature or UI (such as Cline, Cursor, or Windsurf) to find the HealthOmics tools.
- Prompting:
  - Direct your agent with prompts like:
    - "Using the AWS HealthOmics MCP Server, generate a new Nextflow workflow for RNA-seq analysis."
    - "Debug this failing pipeline execution in HealthOmics."
- Tool Invocation:
  - The LLM or agent will call functions such as `list_workflows`, `create_workflow`, `run_workflow`, or `get_run_status` as required (see the client sketch after this list).
- Execution Modes:
  - Use ‘Plan’ mode for suggestions and review, or ‘Act’ mode to let the agent execute workflow management operations directly (with manual or auto-approval for sensitive actions).
- Workflow Monitoring:
  - Request real-time or summary statuses of workflow runs; receive logs, diagnostic messages, and run metrics via agent output.
- Integration:
  - Combine HealthOmics actions with data processing or visualization by chaining tools in a single automation or agent session.
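To make the discovery and invocation steps above concrete, the sketch below drives the server directly over stdio with the official MCP Python SDK (the `mcp` package), mirroring what an MCP-enabled assistant does behind the scenes. The `list_workflows` tool name and its (empty) arguments are illustrative assumptions; in practice, rely on the names returned by the tool listing.

```python
# Minimal MCP client session against the HealthOmics server over stdio, using the
# official MCP Python SDK. Tool names/arguments passed to call_tool are illustrative;
# use the names actually reported by list_tools().
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="uvx",
    args=["awslabs.aws-healthomics-mcp-server@latest"],
    env={"AWS_PROFILE": "your-aws-profile", "AWS_REGION": "us-east-1"},
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discovery: list the tools the server exposes.
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])

            # Invocation (illustrative): request the workflow list, then hand the
            # result to local processing, reporting, or visualization code.
            result = await session.call_tool("list_workflows", arguments={})
            for block in result.content:
                print(block)

asyncio.run(main())
```

In everyday use, an MCP-enabled assistant such as Cline, Cursor, or Windsurf performs these steps automatically; the sketch is only meant to show what happens underneath.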
Key Features
- Multi-language Workflow Support: Supports generation and execution for WDL, Nextflow, and CWL workflows.
- End-to-End Workflow Management: Enables creation, submission, status monitoring, debugging, and optimization for HealthOmics pipelines from an AI-native interface.
- Secure & Compliant: Uses your AWS credentials or profiles for secure execution, supporting organizational policies and access controls.
- Integrated Diagnostics: Fetches execution logs, error reports, and run metrics for efficient debugging and optimization.
- Agent-Oriented Tooling: Exposes concise, discoverable tool functions for LLM integration—streamlining command synthesis and action approval.
- Automation-Ready: Ideal for use in batch evaluation, code assistant ‘vibe coding’ sessions, and cloud-native bioinformatics workflows.
Use Cases
- Bioinformatics Workflow Creation: Rapid generation of new genomics or proteomics analysis pipelines from high-level English instructions.
- Automated Workflow Execution: Seamless submission and monitoring of HealthOmics workflows for large-scale batch analyses.
- Pipeline Debugging: Interactive troubleshooting of failed or underperforming analytics runs with log review and remediation guidance.
- Collaborative Life Sciences Research: Coding teams or data scientists jointly iterate on reproducible data processing pipelines via conversational interfaces.
- Regulatory and Compliance Monitoring: Use AI-driven assistants to audit workflow definitions or execution traces for best practices and compliance.
- Education and Training: Teach bioinformatics workflow concepts interactively with instant feedback, code suggestions, and synthetic data.
FAQ
Q1: Can I use the AWS HealthOmics MCP Server with multiple workflow languages?
Yes, the server supports WDL, Nextflow, and CWL workflows. You can specify the workflow language in your prompt or configuration and use different tools or APIs as needed.
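For orientation, workflow language selection ultimately corresponds to the engine setting on the underlying AWS HealthOmics API. A rough boto3 sketch of registering a workflow directly is shown below; the workflow name and ZIP path are placeholders, and this is not the server's own code.

```python
# Illustrative: register a workflow in a chosen language directly against the AWS
# HealthOmics API with boto3. The MCP server exposes equivalent tooling; the workflow
# name and definition ZIP path below are placeholders.
import boto3

omics = boto3.client("omics", region_name="us-east-1")

with open("rnaseq-workflow.zip", "rb") as bundle:  # placeholder: zipped workflow sources
    definition = bundle.read()

response = omics.create_workflow(
    name="rnaseq-demo",
    engine="NEXTFLOW",          # "WDL" and "CWL" are also accepted
    definitionZip=definition,
)
print("Created workflow:", response["id"], response["status"])
```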
Q2: How do I monitor the status of my workflow runs?
List or get the status of running, completed, or failed workflows using the `list_workflow_runs` or `get_workflow_run_status` tools. You can also request logs for deeper debugging.
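These tools sit on top of the HealthOmics run APIs, so results can also be cross-checked outside the agent. A minimal boto3 polling sketch follows; the run ID is a placeholder.

```python
# Poll a HealthOmics run's status directly with boto3, e.g. to cross-check what the
# MCP tools report. The run ID is a placeholder.
import time

import boto3

omics = boto3.client("omics", region_name="us-east-1")
run_id = "1234567"  # placeholder: the ID returned when the run was started

while True:
    run = omics.get_run(id=run_id)
    status = run["status"]  # e.g. PENDING, STARTING, RUNNING, COMPLETED, FAILED
    print(f"Run {run_id}: {status}")
    if status in ("COMPLETED", "FAILED", "CANCELLED", "DELETED"):
        break
    time.sleep(30)
```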
Q3: Do I need special IAM permissions to use this server?
Yes, your AWS credentials must grant appropriate permissions for AWS HealthOmics APIs (such as executing workflows, accessing S3 buckets for input/output data, and viewing logs).
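The exact policy depends on your organization's controls, but an illustrative starting point is sketched below as a Python policy document. The action list and wildcard resources are assumptions to be tightened for least privilege, not an official policy.

```python
# Illustrative IAM policy document for an identity that drives HealthOmics through the
# MCP server. Actions and wildcard resources are assumptions -- scope them down to match
# your organization's least-privilege requirements before use.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "HealthOmicsWorkflowAccess",
            "Effect": "Allow",
            "Action": [
                "omics:CreateWorkflow",
                "omics:ListWorkflows",
                "omics:StartRun",
                "omics:GetRun",
                "omics:ListRuns",
                "omics:ListRunTasks",
                "omics:GetRunTask",
            ],
            "Resource": "*",
        },
        {
            "Sid": "WorkflowDataAndLogs",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "logs:GetLogEvents"],
            "Resource": "*",  # restrict to your buckets and log groups in practice
        },
    ],
}
print(json.dumps(policy, indent=2))
```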
Q4: Can my AI agent debug errors in failed pipeline runs?
Yes, the server allows agents to retrieve run logs, error traces, and recommendations for common failure patterns.
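HealthOmics run and engine logs are delivered to CloudWatch Logs and can also be pulled manually alongside the agent. The sketch below uses the usual HealthOmics log group and stream naming (`/aws/omics/WorkflowLog`, `run/<run-id>/engine`); verify those names in your account, and treat the run ID as a placeholder.

```python
# Pull the engine log for a HealthOmics run straight from CloudWatch Logs, e.g. while
# triaging a failure alongside the agent. Log group/stream names below follow the usual
# HealthOmics defaults -- confirm them in your account; the run ID is a placeholder.
import boto3

logs = boto3.client("logs", region_name="us-east-1")
run_id = "1234567"  # placeholder

events = logs.get_log_events(
    logGroupName="/aws/omics/WorkflowLog",
    logStreamName=f"run/{run_id}/engine",
    startFromHead=True,
)
for event in events["events"]:
    print(event["message"])
```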
Q5: Is my scientific data secure when using the MCP server?
The server operates within your secure AWS environment and does not move or expose data outside your account unless explicitly configured to do so.