Model Context Protocol (MCP) and AWS Integration Guide
Table of Contents
- Overview
- What is MCP?
- Why MCP Matters
- MCP Architecture
- Core Components
- MCP in AWS Context
- AWS Bedrock Integration
- AWS Services with MCP
- Building MCP Servers
- Use Cases
- MCP and OpenAPI: The Connection
- Best Practices
- Security Considerations
- Summary
Overview
Model Context Protocol (MCP) is an open standard introduced by Anthropic in November 2024 that standardizes how AI systems connect to external tools, data sources, and services. Think of it as "USB-C for AI" - a universal interface that enables seamless integration between AI applications and the systems they need to interact with.
Key Facts:
- Developer: Anthropic
- Released: November 25, 2024
- License: Open standard (MIT)
- Website: modelcontextprotocol.io
- Adoption: OpenAI, Google DeepMind, AWS, and others
The Problem MCP Solves:
Before MCP:
┌─────────────────────────────────────────────────────────────┐
│ AI Application needs to connect to: │
│ • Database → Custom integration code │
│ • CRM → Different custom code │
│ • File system → Another custom integration │
│ • API → Yet another custom solution │
│ │
│ Result: N × M integrations (every app × every tool) │
│ ❌ Fragmented, hard to maintain │
│ ❌ No standardization │
│ ❌ Duplicate effort │
└─────────────────────────────────────────────────────────────┘
With MCP:
┌─────────────────────────────────────────────────────────────┐
│ AI Application → MCP Protocol → Any MCP Server │
│ │
│ Result: Standardized interface │
│ ✅ Write once, use everywhere │
│ ✅ Plug-and-play integrations │
│ ✅ Community-driven ecosystem │
└─────────────────────────────────────────────────────────────┘
What is MCP?
Definition
MCP is a client-server protocol that enables AI applications to:
- Discover and use external tools
- Access data sources securely
- Maintain context across interactions
- Execute functions in external systems
The USB-C Analogy
USB-C Port MCP Protocol
│ │
├─ Universal connector ├─ Universal AI interface
├─ Works with any device ├─ Works with any AI model
├─ Standardized protocol ├─ Standardized communication
└─ Plug and play └─ Plug and play integrations
How It Works
┌─────────────────────────────────────────────────────────────┐
│ MCP COMMUNICATION FLOW │
├─────────────────────────────────────────────────────────────┤
│ │
│ 1. AI Application (Host) │
│ ↓ │
│ 2. MCP Client (embedded in host) │
│ ↓ │
│ 3. MCP Protocol (JSON-RPC 2.0) │
│ ↓ │
│ 4. MCP Server (exposes capabilities) │
│ ↓ │
│ 5. External System (database, API, file system, etc.) │
│ │
└─────────────────────────────────────────────────────────────┘
Example Interaction:
User: "What's the status of order #12345?"
1. AI Application receives query
2. MCP Client discovers available tools
3. Finds "query_database" tool on MCP Server
4. Calls tool via MCP Protocol
5. MCP Server queries database
6. Returns result to AI
7. AI generates response: "Order #12345 shipped on Jan 15"
Why MCP Matters
1. Standardization
Before MCP:
# Custom integration for each tool
def connect_to_crm():
# Custom CRM code
pass
def connect_to_database():
# Custom DB code
pass
def connect_to_api():
# Custom API code
pass
With MCP:
# Single standard interface
mcp_client.discover_tools()
mcp_client.call_tool("query_crm", params)
mcp_client.call_tool("query_database", params)
mcp_client.call_tool("call_api", params)
2. Ecosystem Growth
MCP enables:
├── Tool Marketplaces
│ └── Discover and install MCP servers
├── Shared Context
│ └── Agents can share workspaces
├── Interoperability
│ └── Any AI can use any MCP server
└── Community Innovation
└── Open source server ecosystem
3. Reduced Development Time
| Task | Without MCP | With MCP |
|---|---|---|
| Connect to new tool | Days-weeks | Minutes |
| Maintain integrations | Ongoing effort | Minimal |
| Switch AI providers | Rewrite integrations | No changes needed |
| Add new capability | Custom development | Install MCP server |
4. Security & Control
MCP provides:
✅ Standardized authentication
✅ Permission management
✅ Audit trails
✅ Secure communication
✅ Sandboxed execution
MCP Architecture
Three-Layer Architecture
┌─────────────────────────────────────────────────────────────┐
│ HOST LAYER │
│ (AI Application: Claude Desktop, IDEs, Custom Apps) │
│ │
│ • User interface │
│ • Conversation management │
│ • Embeds MCP Client │
└────────────────────────┬────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ CLIENT LAYER │
│ (MCP Client: Connection Manager) │
│ │
│ • Discovers MCP servers │
│ • Manages connections │
│ • Routes requests │
│ • Handles responses │
└────────────────────────┬────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ SERVER LAYER │
│ (MCP Servers: Capability Providers) │
│ │
│ • Expose tools │
│ • Provide resources │
│ • Offer prompt templates │
│ • Connect to external systems │
└─────────────────────────────────────────────────────────────┘
Communication Protocol
Transport Mechanisms:
STDIO (Local)
- For local processes
- Fast, low latency
- Ideal for development
HTTP/SSE (Remote)
- For remote servers
- Scalable
- Production-ready
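As a concrete illustration, here is a minimal sketch of selecting each transport with the official `mcp` Python SDK. The server command and URL are placeholders, and API names should be verified against the SDK release you install:

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from mcp.client.sse import sse_client

# STDIO: spawn a local server process (ideal for development)
stdio_params = StdioServerParameters(command="python", args=["server.py"])

async def list_tools_stdio():
    async with stdio_client(stdio_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            return await session.list_tools()

# HTTP/SSE: connect to a remote, production-grade server
async def list_tools_sse():
    async with sse_client("https://mcp.example.com/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            return await session.list_tools()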
Message Format: JSON-RPC 2.0
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "query_database",
"arguments": {
"query": "SELECT * FROM orders WHERE id = 12345"
}
}
}
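A successful reply mirrors the request id and wraps the tool output in a result object; the payload below is a representative sketch of the tools/call result shape:
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "[{\"id\": 12345, \"status\": \"shipped\"}]"
      }
    ],
    "isError": false
  }
}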
Core Components
MCP servers expose three types of capabilities:
1. Tools (Actions)
What: Functions that AI can execute to perform actions
Characteristics:
- Have side effects (create, update, delete)
- Execute operations
- Return results
Example:
# MCP Tool Definition
@mcp.tool()
def send_email(to: str, subject: str, body: str) -> str:
"""Send an email to a recipient"""
# Send email logic
return f"Email sent to {to}"
# AI can call this tool
result = mcp_client.call_tool("send_email", {
"to": "user@example.com",
"subject": "Order Confirmation",
"body": "Your order has shipped"
})
Common Tool Types:
- Database queries
- API calls
- File operations
- Email sending
- Calendar management
- Code execution
2. Resources (Data)
What: Read-only data sources that AI can query
Characteristics:
- No side effects
- Provide context
- Can be large datasets
Example:
# MCP Resource Definition
@mcp.resource("company://policies/return-policy")
def get_return_policy() -> str:
"""Get the company return policy"""
return load_policy_document()
# AI can read this resource
policy = mcp_client.read_resource("company://policies/return-policy")
Common Resource Types:
- Documentation
- Configuration files
- Database schemas
- API specifications
- Knowledge bases
- File contents
3. Prompts (Templates)
What: Pre-defined prompt templates for common tasks
Characteristics:
- Reusable instructions
- Can accept parameters
- Guide AI behavior
Example:
# MCP Prompt Definition
@mcp.prompt()
def code_review_prompt(code: str, language: str) -> str:
"""Generate a code review prompt"""
return f"""
Review this {language} code for:
- Bugs and errors
- Performance issues
- Security vulnerabilities
- Best practices
Code:
{code}
"""
# AI can use this prompt
prompt = mcp_client.get_prompt("code_review_prompt", {
"code": user_code,
"language": "python"
})
Common Prompt Types:
- Code review templates
- Analysis frameworks
- Writing guidelines
- Troubleshooting workflows
Component Comparison
| Component | Purpose | Side Effects | Use Case |
|---|---|---|---|
| Tools | Execute actions | Yes | Send email, update database |
| Resources | Provide data | No | Read documentation, get config |
| Prompts | Guide behavior | No | Code review, analysis template |
MCP in AWS Context
Why MCP Matters for AWS
AWS + MCP Benefits:
├── Standardized AWS Service Access
│ └── S3, DynamoDB, Lambda, etc. via MCP
├── Bedrock Agent Integration
│ └── MCP servers as Bedrock Agent tools
├── Simplified Multi-Service Orchestration
│ └── One protocol for all AWS services
└── Enterprise-Grade Security
└── IAM, VPC, encryption built-in
AWS MCP Ecosystem
┌─────────────────────────────────────────────────────────────┐
│ AWS MCP ECOSYSTEM │
├─────────────────────────────────────────────────────────────┤
│ │
│ AI Applications │
│ ├── Amazon Q │
│ ├── Bedrock Agents │
│ ├── Custom Apps with Bedrock │
│ └── Third-party AI tools │
│ │
│ ↓ (MCP Protocol) │
│ │
│ MCP Servers (AWS-Specific) │
│ ├── Bedrock Data Automation MCP Server │
│ ├── Bedrock AgentCore MCP Server │
│ ├── AWS Services MCP Servers │
│ │ ├── S3 MCP Server │
│ │ ├── DynamoDB MCP Server │
│ │ ├── Lambda MCP Server │
│ │ └── CloudWatch MCP Server │
│ └── Custom MCP Servers on AWS │
│ │
│ ↓ │
│ │
│ AWS Services │
│ ├── Amazon Bedrock │
│ ├── S3, DynamoDB, Lambda │
│ ├── API Gateway │
│ └── Other AWS Services │
│ │
└─────────────────────────────────────────────────────────────┘
AWS Bedrock Integration
Bedrock + MCP: Perfect Match
Amazon Bedrock and MCP complement each other perfectly:
Amazon Bedrock provides:
├── Foundation models (Claude, Titan, etc.)
├── Knowledge Bases
├── Agents
└── Guardrails
MCP provides:
├── Standardized tool access
├── External system integration
├── Context management
└── Extensibility
Together:
└── Powerful, extensible AI applications
Integration Patterns
Pattern 1: MCP Servers with Bedrock Agents
# Bedrock models can use MCP servers as tool providers
from langchain_aws import ChatBedrock
import mcp

# Initialize Bedrock model
bedrock_llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0"
)

# Connect to MCP server (illustrative client API; the official Python SDK
# uses mcp.ClientSession over a stdio or SSE transport)
mcp_client = mcp.Client("stdio", command="python", args=["server.py"])

# Discover tools from MCP server
tools = mcp_client.list_tools()
for tool in tools:
    print(f"Available: {tool.name} - {tool.description}")

# Bind the discovered tools so the model can decide which to call
llm_with_tools = bedrock_llm.bind_tools(tools)
response = llm_with_tools.invoke("Query the database for customer orders")
Pattern 2: Bedrock Knowledge Bases via MCP
# Expose Bedrock KB as MCP resource
from mcp.server import Server
import boto3
mcp_server = Server("bedrock-kb-server")
@mcp_server.resource("bedrock://kb/{kb_id}/{query}")
def get_kb_data(kb_id: str, query: str):
"""Query Bedrock Knowledge Base"""
client = boto3.client('bedrock-agent-runtime')
response = client.retrieve(
knowledgeBaseId=kb_id,
retrievalQuery={'text': query}
)
return response['retrievalResults']
Pattern 3: Amazon Q with MCP
# Amazon Q can leverage MCP servers
# Example: Whiteboard to cloud workflow
# 1. User uploads whiteboard image to Amazon Q
# 2. Q uses Bedrock Data Automation MCP server
# 3. MCP server processes image, extracts architecture
# 4. Q generates CloudFormation templates
# 5. Q deploys to AWS
# MCP Server for Bedrock Data Automation
@mcp.tool()
def extract_architecture(image_data: bytes) -> dict:
    """Extract architecture from whiteboard image"""
    # The async invoke API lives on the runtime client; parameter names
    # below are illustrative - verify against the current boto3 reference
    client = boto3.client('bedrock-data-automation-runtime')
    response = client.invoke_data_automation_async(
        inputConfiguration={'s3Uri': upload_to_s3(image_data)},  # helper assumed
        outputConfiguration={'s3Uri': 's3://output-bucket/'},
        dataAutomationProjectArn='arn:aws:...'
    )
    return parse_architecture(response)  # helper assumed
AWS-Specific MCP Servers
1. Bedrock AgentCore MCP Server
Purpose: Automate Bedrock Agent lifecycle
Capabilities:
# Tools provided by AgentCore MCP Server
tools = [
"create_agent", # Create new Bedrock agent
"update_agent", # Update agent configuration
"create_action_group", # Add action groups
"create_knowledge_base", # Create KB
"test_agent", # Test agent functionality
"deploy_agent" # Deploy to production
]
# Example usage (sketch; the role ARN is a placeholder)
@mcp.tool()
def create_bedrock_agent(name: str, instructions: str) -> str:
    """Create a new Bedrock agent"""
    client = boto3.client('bedrock-agent')
    response = client.create_agent(
        agentName=name,
        instruction=instructions,
        foundationModel='anthropic.claude-3-sonnet-20240229-v1:0',
        agentResourceRoleArn='arn:aws:iam::123456789012:role/AgentRole')
    return response['agent']['agentId']
Benefits:
- Eliminates manual agent setup
- Reduces development time
- Standardizes agent creation
2. Bedrock Data Automation MCP Server
Purpose: Process and extract data from documents
Capabilities:
# Tools for data automation
tools = [
"extract_text", # Extract text from images
"parse_documents", # Parse structured documents
"extract_tables", # Extract tables
"classify_content" # Classify document types
]
# Example
@mcp.tool()
def extract_whiteboard_architecture(image_url: str):
"""Extract architecture diagram from whiteboard"""
# Uses Bedrock Data Automation
pass
AWS Services with MCP
Amazon API Gateway MCP Proxy
Announced: December 2024
What it does: Transforms existing REST APIs into MCP-compatible endpoints
┌─────────────────────────────────────────────────────────────┐
│ API GATEWAY MCP PROXY │
├─────────────────────────────────────────────────────────────┤
│ │
│ Existing REST API │
│ GET /api/orders/{id} │
│ POST /api/orders │
│ │
│ ↓ (API Gateway MCP Proxy) │
│ │
│ MCP-Compatible Endpoints │
│ tools/call → query_order │
│ tools/call → create_order │
│ │
│ ✅ No code changes to existing API │
│ ✅ Automatic MCP schema generation │
│ ✅ AI agents can now use your API │
│ │
└─────────────────────────────────────────────────────────────┘
Configuration Example:
# API Gateway MCP Proxy configuration (illustrative sketch - the MCP
# property names are not authoritative; check current API Gateway docs)
Resources:
MCPProxy:
Type: AWS::ApiGateway::RestApi
Properties:
Name: OrdersAPIMCP
MCPEnabled: true
MCPConfiguration:
ToolMapping:
- Path: /orders/{id}
Method: GET
ToolName: query_order
Description: "Get order by ID"
- Path: /orders
Method: POST
ToolName: create_order
Description: "Create new order"
S3 MCP Server
# MCP Server for S3 operations
from mcp.server import Server
import boto3
s3_server = Server("s3-mcp-server")
s3_client = boto3.client('s3')
@s3_server.tool()
def list_buckets() -> list:
"""List all S3 buckets"""
response = s3_client.list_buckets()
return [b['Name'] for b in response['Buckets']]
@s3_server.tool()
def upload_file(bucket: str, key: str, content: str):
"""Upload file to S3"""
s3_client.put_object(
Bucket=bucket,
Key=key,
Body=content.encode()
)
return f"Uploaded to s3://{bucket}/{key}"
@s3_server.resource("s3://{bucket}/{key}")
def read_file(bucket: str, key: str) -> str:
"""Read file from S3"""
response = s3_client.get_object(Bucket=bucket, Key=key)
return response['Body'].read().decode()
DynamoDB MCP Server
# MCP Server for DynamoDB
from mcp.server import Server
import boto3
dynamodb_server = Server("dynamodb-mcp-server")
dynamodb = boto3.resource('dynamodb')
@dynamodb_server.tool()
def query_table(table_name: str, key: dict) -> dict:
"""Query DynamoDB table"""
table = dynamodb.Table(table_name)
response = table.get_item(Key=key)
return response.get('Item', {})
@dynamodb_server.tool()
def put_item(table_name: str, item: dict):
"""Put item in DynamoDB"""
table = dynamodb.Table(table_name)
table.put_item(Item=item)
return f"Item added to {table_name}"
@dynamodb_server.resource("dynamodb://{table_name}/schema")
def get_table_schema(table_name: str) -> dict:
"""Get DynamoDB table schema"""
table = dynamodb.Table(table_name)
return {
'TableName': table.table_name,
'KeySchema': table.key_schema,
'AttributeDefinitions': table.attribute_definitions
}
Lambda MCP Server
# MCP Server for Lambda invocation
from mcp.server import Server
import boto3
import json
lambda_server = Server("lambda-mcp-server")
lambda_client = boto3.client('lambda')
@lambda_server.tool()
def invoke_function(function_name: str, payload: dict) -> dict:
"""Invoke Lambda function"""
response = lambda_client.invoke(
FunctionName=function_name,
InvocationType='RequestResponse',
Payload=json.dumps(payload)
)
result = json.loads(response['Payload'].read())
return result
@lambda_server.tool()
def list_functions() -> list:
"""List all Lambda functions"""
response = lambda_client.list_functions()
return [f['FunctionName'] for f in response['Functions']]
@lambda_server.resource("lambda://{function_name}/config")
def get_function_config(function_name: str) -> dict:
"""Get Lambda function configuration"""
response = lambda_client.get_function_configuration(
FunctionName=function_name
)
return response
CloudWatch MCP Server
# MCP Server for CloudWatch Logs
from mcp.server import Server
import boto3
cloudwatch_server = Server("cloudwatch-mcp-server")
logs_client = boto3.client('logs')
@cloudwatch_server.tool()
def query_logs(log_group: str, query: str, hours: int = 1) -> list:
"""Query CloudWatch Logs"""
import time
start_time = int((time.time() - hours * 3600) * 1000)
end_time = int(time.time() * 1000)
response = logs_client.start_query(
logGroupName=log_group,
startTime=start_time,
endTime=end_time,
queryString=query
)
    query_id = response['queryId']
    # Poll until the query finishes; bail out if it fails
    while True:
        result = logs_client.get_query_results(queryId=query_id)
        if result['status'] == 'Complete':
            return result['results']
        if result['status'] in ('Failed', 'Cancelled', 'Timeout'):
            raise RuntimeError(f"Log query ended with status {result['status']}")
        time.sleep(1)
@cloudwatch_server.resource("cloudwatch://logs/{log_group}")
def get_recent_logs(log_group: str, limit: int = 100) -> list:
"""Get recent log events"""
response = logs_client.filter_log_events(
logGroupName=log_group,
limit=limit
)
return response['events']
Building MCP Servers
Quick Start: Python MCP Server
# install: pip install mcp
# (the decorator-style registration below follows the FastMCP-style API;
# adjust names to match the SDK release you install)
from mcp.server import Server
from mcp.types import Tool, Resource, TextContent
# Create server
server = Server("my-aws-server")
# Define a tool
@server.tool()
def greet(name: str) -> str:
"""Greet a user by name"""
return f"Hello, {name}!"
# Define a resource
@server.resource("config://app-settings")
def get_settings() -> dict:
"""Get application settings"""
return {
"region": "us-east-1",
"environment": "production"
}
# Define a prompt
@server.prompt()
def analysis_prompt(data: str) -> str:
"""Generate analysis prompt"""
return f"Analyze this data and provide insights:\n{data}"
# Run server
if __name__ == "__main__":
server.run()
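To verify the server works, a small client can spawn it over STDIO and call the greet tool. This is a minimal sketch using the official mcp SDK; it assumes the server above is saved as server.py:

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("greet", {"name": "Alice"})
            print(result)

asyncio.run(main())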
AWS-Integrated MCP Server
# Complete AWS MCP Server Example
from mcp.server import Server
import boto3
import json
server = Server("aws-services-server")
# AWS Clients
s3 = boto3.client('s3')
dynamodb = boto3.resource('dynamodb')
bedrock = boto3.client('bedrock-runtime')
# S3 Tools
@server.tool()
def s3_list_objects(bucket: str, prefix: str = "") -> list:
"""List objects in S3 bucket"""
response = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
return [obj['Key'] for obj in response.get('Contents', [])]
@server.tool()
def s3_read_file(bucket: str, key: str) -> str:
"""Read file from S3"""
response = s3.get_object(Bucket=bucket, Key=key)
return response['Body'].read().decode('utf-8')
# DynamoDB Tools
@server.tool()
def dynamodb_query(table: str, key_name: str, key_value: str) -> dict:
"""Query DynamoDB table"""
table_resource = dynamodb.Table(table)
response = table_resource.get_item(Key={key_name: key_value})
return response.get('Item', {})
# Bedrock Tools
@server.tool()
def bedrock_invoke(prompt: str, model_id: str = "anthropic.claude-3-sonnet-20240229-v1:0") -> str:
"""Invoke Bedrock model"""
body = json.dumps({
"anthropic_version": "bedrock-2023-05-31",
"max_tokens": 4096,
"messages": [{"role": "user", "content": prompt}]
})
response = bedrock.invoke_model(
modelId=model_id,
body=body
)
result = json.loads(response['body'].read())
return result['content'][0]['text']
# Resources
@server.resource("aws://account-info")
def get_account_info() -> dict:
"""Get AWS account information"""
sts = boto3.client('sts')
identity = sts.get_caller_identity()
return {
'account_id': identity['Account'],
'user_arn': identity['Arn']
}
# Run server
if __name__ == "__main__":
import asyncio
asyncio.run(server.run())
Deploying MCP Server on AWS
Option 1: Lambda Function
# MCP Server as Lambda function
import json
from mcp.server import Server
server = Server("lambda-mcp-server")
@server.tool()
def process_data(data: str) -> str:
"""Process data"""
return f"Processed: {data}"
def lambda_handler(event, context):
    """Lambda handler for MCP requests"""
    # Parse MCP request
    mcp_request = json.loads(event['body'])
    # Route MCP methods (list_tools/call_tool are illustrative helpers;
    # the official SDK handles this routing when run as a full server)
    if mcp_request['method'] == 'tools/list':
        return {
            'statusCode': 200,
            'body': json.dumps(server.list_tools())
        }
    elif mcp_request['method'] == 'tools/call':
        result = server.call_tool(
            mcp_request['params']['name'],
            mcp_request['params']['arguments']
        )
        return {
            'statusCode': 200,
            'body': json.dumps(result)
        }
    return {'statusCode': 400, 'body': json.dumps({'error': 'Unsupported method'})}
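Fronted by a Lambda function URL or API Gateway, the handler above can be exercised by POSTing JSON-RPC messages; a hedged sketch in which the endpoint URL is a placeholder:

import requests

MCP_ENDPOINT = "https://abc123.execute-api.us-east-1.amazonaws.com/mcp"  # placeholder

# Discover tools
resp = requests.post(MCP_ENDPOINT, json={
    "jsonrpc": "2.0", "id": 1, "method": "tools/list"})
print(resp.json())

# Call the process_data tool defined above
resp = requests.post(MCP_ENDPOINT, json={
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "process_data", "arguments": {"data": "hello"}}})
print(resp.json())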
Option 2: ECS/Fargate
# Dockerfile for MCP Server
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY server.py .
EXPOSE 8000
CMD ["python", "server.py"]
# ECS Task Definition
Resources:
MCPServerTask:
Type: AWS::ECS::TaskDefinition
Properties:
Family: mcp-server
NetworkMode: awsvpc
RequiresCompatibilities:
- FARGATE
Cpu: 256
Memory: 512
ContainerDefinitions:
- Name: mcp-server
Image: !Sub ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/mcp-server:latest
PortMappings:
- ContainerPort: 8000
Protocol: tcp
Option 3: EC2 with Auto Scaling
#!/bin/bash
# User data script for EC2
# Install dependencies
yum update -y
yum install -y python3 python3-pip
# Install MCP server
pip3 install mcp boto3
# Download server code
aws s3 cp s3://my-bucket/mcp-server.py /opt/mcp-server.py
# Run server
python3 /opt/mcp-server.py
Use Cases
1. Intelligent DevOps Assistant
Scenario: AI assistant that manages AWS infrastructure
# MCP Server for DevOps
@server.tool()
def check_service_health(service_name: str) -> dict:
"""Check health of AWS service"""
# Check CloudWatch metrics
# Check service status
# Return health report
pass
@server.tool()
def deploy_application(app_name: str, version: str):
"""Deploy application to AWS"""
# Trigger CodePipeline
# Monitor deployment
# Return status
pass
@server.tool()
def scale_service(service_name: str, desired_count: int):
"""Scale ECS service"""
# Update ECS service
# Monitor scaling
pass
# Usage with Bedrock:
# user: "Check health of my web service and scale if needed"
# ai:   check_service_health → sees high CPU → scale_service
2. Data Analysis Pipeline
Scenario: AI analyzes data from multiple AWS sources
# MCP Server for Data Analysis
@server.tool()
def query_athena(query: str) -> list:
    """Query data using Athena (sketch; the results bucket is a placeholder)"""
    import time
    athena = boto3.client('athena')
    qid = athena.start_query_execution(
        QueryString=query,
        ResultConfiguration={'OutputLocation': 's3://athena-results-bucket/'},
    )['QueryExecutionId']
    while athena.get_query_execution(QueryExecutionId=qid) \
            ['QueryExecution']['Status']['State'] in ('QUEUED', 'RUNNING'):
        time.sleep(1)
    return athena.get_query_results(QueryExecutionId=qid)['ResultSet']['Rows']
@server.tool()
def analyze_with_bedrock(data: str) -> str:
"""Analyze data using Bedrock"""
bedrock = boto3.client('bedrock-runtime')
# Invoke model for analysis
pass
@server.resource("s3://data-lake/{dataset}")
def get_dataset(dataset: str) -> dict:
"""Get dataset from S3"""
# Read from S3
# Return data
pass
# Workflow:
# user: "Analyze sales data from last quarter"
# ai:
#   1. Uses get_dataset to fetch data
#   2. Uses query_athena to aggregate
#   3. Uses analyze_with_bedrock for insights
3. Customer Support Automation
Scenario: AI handles customer queries with AWS backend
# MCP Server for Customer Support
@server.tool()
def query_customer_data(customer_id: str) -> dict:
"""Get customer information from DynamoDB"""
table = dynamodb.Table('customers')
    return table.get_item(Key={'id': customer_id}).get('Item', {})
@server.tool()
def query_order_status(order_id: str) -> dict:
"""Get order status"""
# Query orders table
pass
@server.tool()
def create_support_ticket(customer_id: str, issue: str) -> str:
"""Create support ticket"""
# Create ticket in system
pass
@server.resource("bedrock://kb/support-docs/{query}")
def get_support_docs(query: str) -> list:
"""Query support documentation from Bedrock KB"""
kb_client = boto3.client('bedrock-agent-runtime')
response = kb_client.retrieve(
knowledgeBaseId='KB123',
retrievalQuery={'text': query}
)
return response['retrievalResults']
# Interaction:
# user: "Where is my order #12345?"
# ai:
#   1. Uses query_order_status
#   2. Finds order shipped
#   3. Responds with tracking info
4. Code Generation and Deployment
Scenario: AI generates and deploys code to AWS
# MCP Server for Code Deployment
@server.tool()
def generate_lambda_function(description: str) -> str:
    """Generate Lambda function code using Bedrock"""
    bedrock = boto3.client('bedrock-runtime')
    prompt = f"Generate Python Lambda function: {description}"
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 4096,
        "messages": [{"role": "user", "content": prompt}],
    })
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0", body=body)
    return json.loads(response['body'].read())['content'][0]['text']
@server.tool()
def deploy_lambda(function_name: str, code: str):
"""Deploy Lambda function"""
lambda_client = boto3.client('lambda')
# Create/update function
# Return ARN
pass
@server.tool()
def create_api_gateway(lambda_arn: str) -> str:
"""Create API Gateway for Lambda"""
apigw = boto3.client('apigatewayv2')
# Create API
# Create integration
# Return API URL
pass
# Workflow:
# user: "Create a Lambda function that processes S3 uploads and expose it via API"
# ai:
#   1. Uses generate_lambda_function
#   2. Uses deploy_lambda
#   3. Uses create_api_gateway
#   4. Returns API endpoint
5. Multi-Account Management
Scenario: Manage resources across multiple AWS accounts
# MCP Server for Multi-Account Management
@server.tool()
def assume_role(account_id: str, role_name: str) -> dict:
"""Assume role in another account"""
sts = boto3.client('sts')
response = sts.assume_role(
RoleArn=f"arn:aws:iam::{account_id}:role/{role_name}",
RoleSessionName="MCPSession"
)
return response['Credentials']
@server.tool()
def list_resources_cross_account(accounts: list, resource_type: str) -> dict:
    """List resources across multiple accounts"""
    results = {}
    for account in accounts:
        creds = assume_role(account, 'ReadOnlyRole')
        session = boto3.Session(
            aws_access_key_id=creds['AccessKeyId'],
            aws_secret_access_key=creds['SecretAccessKey'],
            aws_session_token=creds['SessionToken'],
        )
        # Dispatch on resource_type as needed; S3 buckets shown as an example
        results[account] = [
            b['Name'] for b in session.client('s3').list_buckets()['Buckets']
        ]
    return results
@server.tool()
def deploy_cross_account(accounts: list, template: str):
"""Deploy CloudFormation across accounts"""
for account in accounts:
creds = assume_role(account, 'DeployRole')
# Deploy template
pass
MCP and OpenAPI: The Connection
Understanding the Relationship
MCP and OpenAPI serve complementary but different purposes in the API ecosystem:
┌─────────────────────────────────────────────────────────────┐
│ OpenAPI vs MCP │
├─────────────────────────────────────────────────────────────┤
│ │
│ OpenAPI (REST APIs) │
│ ├── Purpose: Human/application API documentation │
│ ├── Format: YAML/JSON specification │
│ ├── Describes: HTTP endpoints, parameters, responses │
│ ├── Consumers: Developers, API clients │
│ └── Use: Build SDKs, generate docs, validate requests │
│ │
│ MCP (AI Tool Protocol) │
│ ├── Purpose: AI agent tool integration │
│ ├── Format: JSON-RPC 2.0 protocol │
│ ├── Describes: Tools, resources, prompts │
│ ├── Consumers: AI models, LLM applications │
│ └── Use: Enable AI to discover and use tools │
│ │
│ The Bridge: OpenAPI → MCP Conversion │
│ └── Automatically generate MCP servers from OpenAPI specs │
│ │
└─────────────────────────────────────────────────────────────┘
Key Differences
| Aspect | OpenAPI | MCP |
|---|---|---|
| Target Audience | Developers | AI Agents |
| Protocol | HTTP/REST | JSON-RPC 2.0 |
| Discovery | Static documentation | Dynamic runtime discovery |
| Context | Stateless | Stateful, context-aware |
| Schema | OpenAPI 3.x | JSON Schema + MCP extensions |
| Purpose | API documentation | AI tool integration |
| Invocation | HTTP requests | Tool calls via protocol |
Why Convert OpenAPI to MCP?
The Problem:
You have existing REST APIs documented with OpenAPI
↓
AI agents need to use these APIs
↓
But AI agents work better with MCP protocol
↓
Solution: Convert OpenAPI specs to MCP servers
Benefits:
1. Reuse Existing APIs - No need to rebuild for AI
2. Automatic Tool Generation - Each endpoint becomes an MCP tool
3. Maintain Single Source of Truth - OpenAPI spec drives both
4. Faster AI Integration - Minutes instead of days
OpenAPI to MCP Conversion
Mapping Concepts
OpenAPI Concept → MCP Concept
─────────────────────────────────────────────
Endpoint (GET /users) → Tool (list_users)
Path Parameters → Tool Arguments
Request Body → Tool Arguments
Response Schema → Tool Return Type
API Description → Tool Description
Authentication → MCP Authentication
Example Conversion
OpenAPI Specification:
# openapi.yaml
openapi: 3.0.0
info:
title: User API
version: 1.0.0
paths:
/users/{userId}:
get:
summary: Get user by ID
operationId: getUser
parameters:
- name: userId
in: path
required: true
schema:
type: string
responses:
'200':
description: User found
content:
application/json:
schema:
type: object
properties:
id:
type: string
name:
type: string
email:
type: string
Generated MCP Server:
# Auto-generated from OpenAPI spec
from mcp.server import Server
import requests
server = Server("user-api-mcp")
BASE_URL = "https://api.example.com"
@server.tool()
def get_user(userId: str) -> dict:
"""
Get user by ID
Generated from: GET /users/{userId}
"""
response = requests.get(f"{BASE_URL}/users/{userId}")
response.raise_for_status()
return response.json()
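Under the hood, a generator does little more than walk the spec's paths and register one tool per operation. The loop below is a simplified sketch using the same decorator-style Server API as the examples above (request body and query parameter handling are omitted):

import yaml
import requests
from mcp.server import Server

def build_server_from_spec(spec_path: str, base_url: str) -> Server:
    """Register one MCP tool per OpenAPI operation (simplified)."""
    with open(spec_path) as f:
        spec = yaml.safe_load(f)
    server = Server(spec["info"]["title"])
    for path, operations in spec["paths"].items():
        for method, op in operations.items():
            def call_api(_path=path, _method=method, **params):
                # Substitute path parameters into the URL
                url = base_url + _path.format(**params)
                resp = requests.request(_method.upper(), url)
                resp.raise_for_status()
                return resp.json()
            call_api.__name__ = op.get("operationId", f"{method}_{path}")
            call_api.__doc__ = op.get("summary", "")
            server.tool()(call_api)
    return server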
Tools for OpenAPI to MCP Conversion
1. AutoMCP
What: Compiler that generates MCP servers from OpenAPI specs
# Install
pip install automcp
# Generate MCP server from OpenAPI spec
automcp generate openapi.yaml --output server.py
# Run generated server
python server.py
Features:
- Supports OpenAPI 2.0 and 3.0
- Automatic schema registration
- Authentication handling
- Complete server implementation
2. FastMCP
What: Python framework with OpenAPI integration
import httpx
from fastmcp import FastMCP

# Create an MCP server from an OpenAPI spec. FastMCP 2.x takes the parsed
# spec plus an HTTP client that reaches the live API (names per the FastMCP
# docs; verify against the release you install)
spec = httpx.get("https://api.example.com/openapi.json").json()
mcp = FastMCP.from_openapi(
    openapi_spec=spec,
    client=httpx.AsyncClient(base_url="https://api.example.com"),
)

# Customize if needed
@mcp.tool()
def custom_tool():
    """Add custom tools alongside OpenAPI tools"""
    pass

# Run
mcp.run()
AWS API Gateway MCP Proxy
The Game Changer: AWS API Gateway now supports automatic MCP proxy
┌─────────────────────────────────────────────────────────────┐
│ API GATEWAY MCP PROXY FLOW │
├─────────────────────────────────────────────────────────────┤
│ │
│ 1. You have existing REST API │
│ GET /api/orders/{id} │
│ POST /api/orders │
│ │
│ 2. Enable MCP Proxy in API Gateway │
│ ✅ No code changes needed │
│ ✅ Automatic schema generation │
│ │
│ 3. API Gateway exposes MCP endpoint │
│ POST /mcp │
│ (Handles MCP protocol) │
│ │
│ 4. AI agents can now use your API via MCP │
│ tools/list → Discover available tools │
│ tools/call → Execute API operations │
│ │
└─────────────────────────────────────────────────────────────┘
Configuration:
# CloudFormation template (illustrative sketch - the MCPIntegration
# resource type and its properties are not final; check current docs)
Resources:
MyRestApi:
Type: AWS::ApiGateway::RestApi
Properties:
Name: OrdersAPI
Description: Orders REST API
MCPIntegration:
Type: AWS::ApiGateway::MCPIntegration
Properties:
RestApiId: !Ref MyRestApi
MCPConfiguration:
Enabled: true
AutoGenerateTools: true
ToolNamingStrategy: operationId
AuthenticationMethod: IAM
Usage:
# AI agent using the API Gateway MCP endpoint (illustrative client API;
# the official SDK connects through an HTTP/SSE transport and ClientSession)
from langchain_aws import ChatBedrock
import mcp

# Connect to API Gateway MCP endpoint
mcp_client = mcp.Client(
    "https://api.example.com/mcp",
    auth={"type": "aws-iam"}
)

# Discover tools (generated automatically from the OpenAPI spec)
tools = mcp_client.list_tools()

# Use with Bedrock: bind the tools, then invoke
bedrock = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")
response = bedrock.bind_tools(tools).invoke("Get order #12345")
Best Practices for OpenAPI to MCP
1. Use Clear Operation IDs
# ✅ Good: Clear operation IDs become tool names
paths:
/users/{id}:
get:
operationId: getUserById # Becomes: get_user_by_id tool
# ❌ Bad: Missing operation ID
paths:
/users/{id}:
get:
# No operationId - generator creates generic name
2. Provide Detailed Descriptions
# ✅ Good: Detailed descriptions help AI understand
paths:
/orders:
post:
summary: Create a new order
description: |
Creates a new order in the system. Requires customer ID,
product IDs, and shipping address. Returns order ID and
estimated delivery date.
operationId: createOrder
Summary: OpenAPI + MCP
Key Points:
1. Complementary Standards - OpenAPI for APIs, MCP for AI tools
2. Automatic Conversion - Tools exist to generate MCP from OpenAPI
3. AWS Integration - API Gateway MCP Proxy makes it seamless
4. Reuse Existing APIs - No need to rebuild for AI
5. Best of Both Worlds - Maintain OpenAPI, get MCP for free
When to Use:
- ✅ You have existing REST APIs with OpenAPI specs
- ✅ Want to make APIs accessible to AI agents
- ✅ Need to maintain a single source of truth
- ✅ Want automatic tool generation
Tools to Use:
- AutoMCP - Open source compiler
- FastMCP - Python framework
- AWS API Gateway MCP Proxy - Zero-code solution
Best Practices
1. Security
# ✅ Good: Use IAM roles and policies
@server.tool()
def secure_operation():
# Use IAM role attached to Lambda/EC2
client = boto3.client('s3') # Uses IAM role automatically
# ❌ Bad: Hardcode credentials
@server.tool()
def insecure_operation():
client = boto3.client(
's3',
aws_access_key_id='AKIAIOSFODNN7EXAMPLE', # Never do this!
aws_secret_access_key='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
)
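When a tool genuinely needs a stored secret (for example, a third-party API key), fetch it at runtime rather than embedding it. A sketch assuming a secret named mcp/api-key exists in Secrets Manager:

# ✅ Good: resolve secrets at runtime from Secrets Manager
import json
import boto3

def get_api_key() -> str:
    """Fetch a third-party API key stored in Secrets Manager"""
    sm = boto3.client('secretsmanager')
    secret = sm.get_secret_value(SecretId='mcp/api-key')  # placeholder name
    return json.loads(secret['SecretString'])['api_key']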
2. Error Handling
# ✅ Good: Comprehensive error handling
@server.tool()
def robust_operation(bucket: str, key: str) -> dict:
"""Read from S3 with error handling"""
try:
s3 = boto3.client('s3')
response = s3.get_object(Bucket=bucket, Key=key)
return {
'success': True,
'data': response['Body'].read().decode()
}
except s3.exceptions.NoSuchKey:
return {'success': False, 'error': 'File not found'}
except s3.exceptions.NoSuchBucket:
return {'success': False, 'error': 'Bucket not found'}
except Exception as e:
return {'success': False, 'error': str(e)}
3. Performance
# ✅ Good: Cache expensive operations
from functools import lru_cache
@lru_cache(maxsize=100)
def get_table_schema(table_name: str):
"""Cached table schema lookup"""
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(table_name)
return table.key_schema
# ✅ Good: Use pagination for large results
@server.tool()
def list_all_objects(bucket: str) -> list:
"""List all S3 objects with pagination"""
s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')
all_objects = []
for page in paginator.paginate(Bucket=bucket):
all_objects.extend(page.get('Contents', []))
return all_objects
4. Logging and Monitoring
# ✅ Good: Comprehensive logging
import logging
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
@server.tool()
def monitored_operation(param: str) -> dict:
"""Operation with logging"""
logger.info(f"Starting operation with param: {param}")
try:
result = perform_operation(param)
logger.info(f"Operation successful: {result}")
return {'success': True, 'result': result}
except Exception as e:
logger.error(f"Operation failed: {str(e)}", exc_info=True)
return {'success': False, 'error': str(e)}
# Send logs to CloudWatch (requires the watchtower package)
import watchtower
cloudwatch_handler = watchtower.CloudWatchLogHandler()
logger.addHandler(cloudwatch_handler)
5. Resource Management
# ✅ Good: Use context managers
@server.tool()
def process_large_file(bucket: str, key: str):
"""Process large file efficiently"""
s3 = boto3.client('s3')
# Stream file instead of loading into memory
response = s3.get_object(Bucket=bucket, Key=key)
with response['Body'] as stream:
for line in stream.iter_lines():
process_line(line)
return "Processing complete"
# ✅ Good: Clean up resources
@server.tool()
def temporary_resource_operation():
"""Create and clean up temporary resources"""
s3 = boto3.client('s3')
temp_bucket = create_temp_bucket()
try:
# Use temporary bucket
result = perform_operation(temp_bucket)
return result
finally:
# Always clean up
delete_bucket(temp_bucket)
6. Documentation
# ✅ Good: Clear, detailed documentation
@server.tool()
def well_documented_tool(
bucket: str,
prefix: str = "",
max_keys: int = 1000
) -> list:
"""
List objects in an S3 bucket with optional filtering.
Args:
bucket: Name of the S3 bucket
prefix: Optional prefix to filter objects (default: "")
max_keys: Maximum number of keys to return (default: 1000)
Returns:
List of object keys matching the criteria
Raises:
NoSuchBucket: If the bucket doesn't exist
AccessDenied: If lacking permissions
Example:
>>> list_objects("my-bucket", prefix="logs/", max_keys=100)
['logs/2025-01-01.log', 'logs/2025-01-02.log']
"""
s3 = boto3.client('s3')
response = s3.list_objects_v2(
Bucket=bucket,
Prefix=prefix,
MaxKeys=max_keys
)
return [obj['Key'] for obj in response.get('Contents', [])]
Security Considerations
1. IAM Permissions
Principle of Least Privilege:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
}
]
}
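A policy like this can be sanity-checked with the IAM policy simulator before the server goes live; the role ARN below is a placeholder:

# Verify the role can read objects but not delete them
import boto3

iam = boto3.client('iam')
result = iam.simulate_principal_policy(
    PolicySourceArn='arn:aws:iam::123456789012:role/MCPServerRole',
    ActionNames=['s3:GetObject', 's3:DeleteObject'],
    ResourceArns=['arn:aws:s3:::my-bucket/*']
)
for r in result['EvaluationResults']:
    print(r['EvalActionName'], r['EvalDecision'])  # expect allowed / implicitDeny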
MCP Server IAM Role:
MCPServerRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service: lambda.amazonaws.com
Action: sts:AssumeRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Policies:
- PolicyName: MCPServerPolicy
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- s3:GetObject
- dynamodb:GetItem
- bedrock:InvokeModel
Resource: '*'
2. Authentication & Authorization
# Implement authentication for the MCP server (sketch: get_token_from_request,
# has_permission, SECRET_KEY, and the error types are app-specific placeholders)
from functools import wraps
import jwt
def require_auth(f):
@wraps(f)
def decorated_function(*args, **kwargs):
token = get_token_from_request()
try:
# Verify JWT token
payload = jwt.decode(token, SECRET_KEY, algorithms=['HS256'])
# Check permissions
if not has_permission(payload['user_id'], f.__name__):
raise PermissionError("Insufficient permissions")
return f(*args, **kwargs)
except jwt.InvalidTokenError:
raise AuthenticationError("Invalid token")
return decorated_function
@server.tool()
@require_auth
def sensitive_operation(data: str):
"""Protected operation requiring authentication"""
pass
3. Data Encryption
# Encrypt sensitive data
import boto3

def encrypt_data(data: str) -> bytes:
    """Encrypt data using KMS (returns raw ciphertext bytes)"""
    kms = boto3.client('kms')
    response = kms.encrypt(
        KeyId='alias/my-key',
        Plaintext=data.encode()
    )
    return response['CiphertextBlob']
def decrypt_data(encrypted_data: bytes) -> str:
"""Decrypt data using KMS"""
kms = boto3.client('kms')
response = kms.decrypt(
CiphertextBlob=encrypted_data
)
return response['Plaintext'].decode()
@server.tool()
def store_sensitive_data(data: str):
"""Store encrypted data in S3"""
encrypted = encrypt_data(data)
s3 = boto3.client('s3')
s3.put_object(
Bucket='secure-bucket',
Key='sensitive-data',
Body=encrypted,
ServerSideEncryption='aws:kms'
)
4. Network Security
# VPC Configuration for MCP Server
Resources:
MCPServerSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Security group for MCP server
VpcId: !Ref VPC
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 443
ToPort: 443
SourceSecurityGroupId: !Ref ClientSecurityGroup
SecurityGroupEgress:
- IpProtocol: tcp
FromPort: 443
ToPort: 443
DestinationPrefixListId: !Ref S3PrefixList
5. Audit Logging
# Comprehensive audit logging
import json
from datetime import datetime
@server.tool()
def audited_operation(user_id: str, action: str, resource: str):
"""Operation with audit logging"""
# Log to CloudWatch
logger.info(json.dumps({
'timestamp': datetime.utcnow().isoformat(),
'user_id': user_id,
'action': action,
'resource': resource,
'source_ip': get_source_ip()
}))
    # The boto3 calls this tool makes are recorded automatically by
    # CloudTrail when a trail is enabled, providing an independent audit record
# Perform operation
result = perform_operation(resource)
# Log result
logger.info(f"Operation completed: {result}")
return result
Summary
Key Takeaways
- MCP is the USB-C for AI - Standardizes how AI connects to external systems
- AWS + MCP = Powerful Combination - Native integration with Bedrock and AWS services
- Three Core Components - Tools (actions), Resources (data), Prompts (templates)
- Easy to Build - Simple Python/TypeScript SDKs for creating MCP servers
- Production Ready - Deploy on Lambda, ECS, or EC2 with full AWS integration
- Security First - Use IAM, encryption, and audit logging
- Growing Ecosystem - Community-driven with increasing adoption
MCP vs Traditional Integration
| Aspect | Traditional | MCP |
|---|---|---|
| Integration | Custom per tool | Standardized protocol |
| Development Time | Days-weeks | Minutes-hours |
| Maintenance | High | Low |
| Discoverability | Manual documentation | Automatic via protocol |
| Portability | Vendor lock-in | Model-agnostic |
| Ecosystem | Fragmented | Unified |
When to Use MCP
✅ Use MCP when:
- Building AI applications that need external data/tools
- You want standardized integration across multiple systems
- You need to switch between different AI models
- Building reusable AI capabilities
- You want to leverage community MCP servers
⚠️ Consider alternatives when:
- You need simple, one-off integrations
- You have real-time, ultra-low-latency requirements
- Target systems are legacy and lack API access
- Requirements are extremely high security (air-gapped)
Getting Started Checklist
□ Understand MCP architecture (Host → Client → Server)
□ Identify use case (DevOps, data analysis, support, etc.)
□ Choose deployment platform (Lambda, ECS, EC2)
□ Set up AWS credentials and IAM roles
□ Install MCP SDK (pip install mcp)
□ Build your first MCP server
□ Test with MCP client
□ Integrate with Bedrock/AI application
□ Implement security (IAM, encryption, logging)
□ Deploy to production
□ Monitor and iterate
Resources
Official Documentation:
- Model Context Protocol
- MCP Specification
- MCP Python SDK
- MCP TypeScript SDK
AWS Resources:
- AWS Blog: MCP and Amazon Bedrock
- Bedrock AgentCore MCP Server
- API Gateway MCP Proxy
Community:
- Awesome MCP Servers
- MCP Examples
- MCP Discord Community