
AWS Bedrock Prompt Management Guide

Overview

AWS Bedrock Prompt Management is a centralized service for creating, versioning, and managing prompts for foundation models. It provides a structured way to organize prompts, track changes, and deploy them across your applications.

Key Benefits:
- Centralized prompt repository
- Version control for prompts
- A/B testing with variants
- Reusable prompt templates
- Collaboration across teams
- Audit trail for changes
- Easy deployment and rollback

Understanding Prompts: A Pedagogical Guide

What is a Prompt?

A prompt is the instruction or input you give to an AI language model to get a desired output. Think of it as a conversation starter or a question you ask the AI.

Simple Analogy:
- Human conversation: "Can you help me write an email?"
- AI prompt: "Write a professional email to a customer about a delayed shipment"

The prompt is the bridge between what you want and what the AI generates.

The Purpose of Prompts

Prompts serve several critical purposes:

1. Direction

Prompts tell the AI what to do:

❌ Vague: "Tell me about dogs"
✅ Clear: "Write a 3-paragraph article about dog training techniques for puppies"

2. Context

Prompts provide background information:

Without context: "Write a product description"
With context: "Write a product description for a waterproof smartwatch 
             targeted at fitness enthusiasts who swim"

3. Constraints

Prompts set boundaries and requirements:

"Write a summary in exactly 50 words"
"Respond in JSON format"
"Use a professional tone"
"Include 3 specific examples"

4. Format

Prompts specify how the output should look:

"List the steps as:
1. First step
2. Second step
3. Third step"

Prompt Structure: The Anatomy

A well-structured prompt typically contains these components:

┌─────────────────────────────────────────┐
│  1. ROLE/PERSONA                        │
│  "You are an expert Python developer"  │
├─────────────────────────────────────────┤
│  2. CONTEXT/BACKGROUND                  │
│  "Working on a web application"        │
├─────────────────────────────────────────┤
│  3. TASK/INSTRUCTION                    │
│  "Write a function to validate emails" │
├─────────────────────────────────────────┤
│  4. CONSTRAINTS/REQUIREMENTS            │
│  "Use regex, include error handling"   │
├─────────────────────────────────────────┤
│  5. FORMAT/STYLE                        │
│  "Include docstrings and comments"     │
├─────────────────────────────────────────┤
│  6. EXAMPLES (optional)                 │
│  "Like this: def validate(email)..."   │
└─────────────────────────────────────────┘

Component Breakdown

1. Role/Persona - Who should the AI be?

# Examples:
"You are a helpful customer service agent"
"You are an expert data scientist"
"You are a creative marketing copywriter"
"Act as a senior software architect"

2. Context/Background - What's the situation?

# Examples:
"Our company sells eco-friendly products"
"This is for a mobile app with 1M users"
"The user is a beginner programmer"
"We're launching a new product next month"

3. Task/Instruction - What should the AI do?

# Examples:
"Summarize this document"
"Generate 5 product names"
"Explain how neural networks work"
"Review this code for bugs"

4. Constraints/Requirements - What are the rules?

# Examples:
"Maximum 100 words"
"Use simple language (5th grade level)"
"Must include at least 3 examples"
"Avoid technical jargon"
"Response must be in JSON format"

5. Format/Style - How should it look?

# Examples:
"Use bullet points"
"Write in a friendly, conversational tone"
"Format as a numbered list"
"Use markdown with headers"
"Professional business language"

6. Examples - Show what you want (few-shot learning)

# Examples:
"Like this example: [show example]"
"Input: X, Output: Y"
"Here are 2 examples of good responses: ..."

Prompt Format Representations

Prompts can be represented in different formats depending on the use case:

Format 1: Plain Text (Simple)

The most basic format - just text:

prompt = "Summarize this article in 3 bullet points: [article text]"

When to use:
- Simple, one-off tasks
- Quick testing
- Single-turn interactions

Format 2: Template with Variables (Reusable)

Text with placeholders that get filled in:

prompt_template = """
Translate the following text from {{source_language}} to {{target_language}}:

{{text}}

Translation:
"""

# Fill in variables
filled_prompt = prompt_template.replace('{{source_language}}', 'English')
filled_prompt = filled_prompt.replace('{{target_language}}', 'Spanish')
filled_prompt = filled_prompt.replace('{{text}}', 'Hello, how are you?')

When to use:
- Reusable prompts
- Multiple similar tasks
- Production applications

Variable notation:
- {{variable}} - Double curly braces (common)
- {variable} - Single curly braces (Python f-strings)
- $variable - Dollar sign (shell-style)
- [variable] - Square brackets
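The chained .replace() calls above work but become unwieldy as variables multiply. A small helper, sketched here assuming double-curly-brace notation, fills a template from a dictionary instead:

def fill_template(template: str, variables: dict) -> str:
    """Fill {{variable}} placeholders from a dictionary of values."""
    filled = template
    for name, value in variables.items():
        filled = filled.replace('{{' + name + '}}', str(value))
    return filled

filled_prompt = fill_template(prompt_template, {
    'source_language': 'English',
    'target_language': 'Spanish',
    'text': 'Hello, how are you?'
})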

Format 3: Structured (JSON/YAML)

Organized format with metadata:

{
  "name": "email-generator",
  "description": "Generates professional emails",
  "template": "Write an email to {{recipient}} about {{topic}}",
  "variables": [
    {
      "name": "recipient",
      "type": "string",
      "required": true,
      "description": "Email recipient name"
    },
    {
      "name": "topic",
      "type": "string",
      "required": true,
      "description": "Email subject/topic"
    }
  ],
  "parameters": {
    "temperature": 0.7,
    "max_tokens": 500
  }
}

When to use:
- Complex prompts with metadata
- Team collaboration
- Version control
- Automated systems

Format 4: Conversational (Messages)

Multi-turn conversation format:

messages = [
    {
        "role": "system",
        "content": "You are a helpful coding assistant"
    },
    {
        "role": "user",
        "content": "How do I sort a list in Python?"
    },
    {
        "role": "assistant",
        "content": "You can use the sorted() function or .sort() method..."
    },
    {
        "role": "user",
        "content": "What's the difference between them?"
    }
]

When to use:
- Chatbots
- Multi-turn conversations
- Context-aware interactions
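Each new turn simply appends to the message list, so the model always receives the full history. A minimal sketch of maintaining that history in plain Python (the assistant reply here is a placeholder string, not a real model call):

def add_turn(messages, user_text, assistant_text):
    """Append one user/assistant exchange to the conversation history."""
    messages.append({'role': 'user', 'content': user_text})
    messages.append({'role': 'assistant', 'content': assistant_text})
    return messages

history = [{'role': 'system', 'content': 'You are a helpful coding assistant'}]
add_turn(history, 'How do I sort a list in Python?',
         'Use sorted() for a new list or .sort() to sort in place.')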

Prompt Examples: From Basic to Advanced

Level 1: Basic Prompt

prompt = "Write a poem about the ocean"

Issues: Too vague, no constraints, unpredictable output

Level 2: Improved Prompt

prompt = "Write a 4-line poem about the ocean using simple language"

Better: Added length and style constraints

Level 3: Structured Prompt

prompt = """
Write a poem about the ocean.

Requirements:
- Exactly 4 lines
- Use simple, accessible language
- Include imagery of waves and sunset
- Rhyme scheme: AABB

Poem:
"""

Better: Clear structure, specific requirements

Level 4: Professional Prompt with Context

prompt = """
You are a creative poet specializing in nature poetry.

Task: Write a poem about the ocean for a children's book.

Requirements:
- Exactly 4 lines
- Simple vocabulary (ages 5-8)
- Include imagery: waves, sunset, seashells
- Rhyme scheme: AABB
- Evoke feelings of wonder and calm

Example style:
"The sun sets low, the sky turns red,
The waves roll in, it's time for bed"

Your poem:
"""

Best: Role, context, requirements, examples, clear format

Visual Representation of Prompt Flow

User Need → Prompt Design → AI Processing → Output → Evaluation
    ↓            ↓              ↓              ↓          ↓
"I need     "Write a      Model reads    "Here is   "Is this
 an email"   professional  and generates  your       what I
             email..."     response       email..."   wanted?"
                                                          ↓
                                                    If No: Refine Prompt
                                                    If Yes: Done!

The Prompt Engineering Cycle

1. DEFINE GOAL
   ↓
2. WRITE INITIAL PROMPT
   ↓
3. TEST WITH AI
   ↓
4. EVALUATE OUTPUT
   ↓
5. REFINE PROMPT ←──┐
   ↓                │
6. TEST AGAIN       │
   ↓                │
7. STILL NOT RIGHT? ┘
   ↓
8. GOOD ENOUGH? → DEPLOY

Common Prompt Patterns

Pattern 1: Instruction Pattern

[Action Verb] + [Object] + [Constraints]

Examples:
"Summarize this article in 3 sentences"
"List 5 benefits of exercise"
"Explain quantum computing to a 10-year-old"

Pattern 2: Role-Based Pattern

"You are a [role]. [Task]."

Examples:
"You are a teacher. Explain photosynthesis."
"You are a lawyer. Review this contract."
"You are a chef. Suggest a recipe using chicken and rice."

Pattern 3: Few-Shot Pattern

"Here are examples:
Example 1: [input] → [output]
Example 2: [input] → [output]

Now do this:
[new input]"

Example:
"Classify sentiment:
'I love this!' → Positive
'This is terrible' → Negative
'It's okay' → Neutral

Classify: 'Best purchase ever!'"
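The few-shot pattern is easy to assemble programmatically. A sketch (the task label and examples are illustrative) that builds the sentiment prompt above from input/output pairs:

def build_few_shot_prompt(task, examples, new_input):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    lines = [task + ':']
    for example_input, example_output in examples:
        lines.append(f"'{example_input}' → {example_output}")
    lines.append('')
    lines.append(f"Classify: '{new_input}'")
    return '\n'.join(lines)

prompt = build_few_shot_prompt(
    'Classify sentiment',
    [('I love this!', 'Positive'),
     ('This is terrible', 'Negative'),
     ("It's okay", 'Neutral')],
    'Best purchase ever!'
)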

Pattern 4: Chain-of-Thought Pattern

"Let's think step by step:
1. [First step]
2. [Second step]
3. [Conclusion]"

Example:
"Solve this problem step by step:
If a train travels 60 mph for 2 hours, how far does it go?

Let's think:
1. Speed = 60 mph
2. Time = 2 hours
3. Distance = Speed × Time
4. Distance = 60 × 2 = 120 miles"

Pattern 5: Constrained Output Pattern

"[Task]. Output must be:
- Format: [format]
- Length: [length]
- Style: [style]"

Example:
"Generate a product name. Output must be:
- Format: Single word, capitalized
- Length: 6-8 characters
- Style: Modern, tech-sounding"

Why Prompt Structure Matters

Poor Structure:

prompt = "write something about dogs"

Result: Unpredictable, may get anything from a poem to an essay to a list

Good Structure:

prompt = """
You are a veterinarian writing educational content.

Write a 200-word article about dog nutrition for pet owners.

Include:
1. Importance of balanced diet
2. Key nutrients dogs need
3. Foods to avoid
4. One practical tip

Use friendly, accessible language.

Article:
"""

Result: Consistent, predictable, meets requirements

Prompt Representation in Bedrock

In AWS Bedrock, prompts are represented as structured objects:

bedrock_prompt = {
    "name": "product-description-generator",
    "description": "Generates product descriptions",
    "variants": [
        {
            "name": "default",
            "templateType": "TEXT",
            "templateConfiguration": {
                "text": {
                    "text": """You are a product marketing expert.

Create a compelling product description for:

Product Name: {{product_name}}
Category: {{category}}
Key Features: {{features}}
Target Audience: {{audience}}

Requirements:
- Length: 100-150 words
- Tone: Enthusiastic but professional
- Include benefits, not just features
- End with a call-to-action

Description:"""
                }
            },
            "modelId": "anthropic.claude-3-sonnet-20240229-v1:0",
            "inferenceConfiguration": {
                "text": {
                    "temperature": 0.7,
                    "topP": 0.9,
                    "maxTokens": 500
                }
            }
        }
    ]
}

This structured representation includes:
- Template: The prompt text with variables
- Variables: Placeholders like {{product_name}}
- Model Configuration: Which model to use
- Inference Parameters: How the model should generate

What is Prompt Management?

Prompt Management allows you to:
- Store prompts centrally - Single source of truth
- Version prompts - Track changes over time
- Create variants - Test different prompt versions
- Share prompts - Collaborate across teams
- Deploy safely - Test before production

Use Cases:
- Managing prompts across multiple applications
- A/B testing different prompt strategies
- Maintaining prompt quality and consistency
- Collaborative prompt engineering
- Compliance and audit requirements

Key Concepts

Prompt

A reusable template with variables, model configuration, and metadata.

Version

An immutable snapshot of a prompt at a specific point in time.

Variant

Different versions of a prompt for A/B testing or experimentation.

Variables

Placeholders in prompts that get replaced with actual values at runtime.

Inference Configuration

Model parameters (temperature, topP, maxTokens) associated with a prompt.
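The configuration mirrors the inferenceConfiguration structure used in the create_prompt examples later in this guide, for example:

# Inference configuration attached to a prompt (same shape as in the create_prompt calls below)
inference_configuration = {
    'text': {
        'temperature': 0.7,   # randomness: lower values are more deterministic
        'topP': 0.9,          # nucleus sampling cutoff
        'maxTokens': 1000     # upper bound on generated tokens
    }
}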

Prerequisites

AWS Account Requirements:
- AWS account with Bedrock access
- IAM permissions for Prompt Management
- Model access enabled

Required IAM Permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:CreatePrompt",
        "bedrock:GetPrompt",
        "bedrock:UpdatePrompt",
        "bedrock:DeletePrompt",
        "bedrock:ListPrompts",
        "bedrock:CreatePromptVersion",
        "bedrock:InvokeModel"
      ],
      "Resource": "*"
    }
  ]
}

SDK Installation:

# Python
pip install boto3

# Node.js
npm install @aws-sdk/client-bedrock-agent

# AWS CLI
aws configure
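After installing the SDK and configuring credentials, a quick sanity check confirms you can reach the Prompt Management APIs (a sketch; the region and IAM permissions listed above are assumed):

import boto3

# Assumes credentials from `aws configure` and the IAM permissions shown earlier
client = boto3.client('bedrock-agent', region_name='us-east-1')
summaries = client.list_prompts().get('promptSummaries', [])
print(f"Found {len(summaries)} prompts")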

Getting Started

Option 1: Using AWS Console

  1. Navigate to Bedrock Console

    • Go to AWS Console → Amazon Bedrock → Prompt Management
    • Click "Create prompt"
  2. Configure Prompt

    • Name: customer-support-prompt
    • Description: "Prompt for customer support chatbot"
    • Model: Select foundation model
  3. Define Prompt Template

    • Add prompt text with variables
    • Configure inference parameters
    • Test the prompt
  4. Create Version

    • Save and create version
    • Deploy to applications

Option 2: Using AWS SDK (Programmatic)

Create and manage prompts using boto3 or AWS SDK.

Creating Prompts

Example 1: Simple Text Prompt

import boto3
import json

bedrock_agent = boto3.client('bedrock-agent', region_name='us-east-1')

# Create a simple prompt
response = bedrock_agent.create_prompt(
    name='summarization-prompt',
    description='Summarizes documents in bullet points',
    variants=[
        {
            'name': 'default',
            'templateType': 'TEXT',
            'templateConfiguration': {
                'text': {
                    'text': '''Summarize the following document in 3-5 bullet points:

{{document}}

Summary:'''
                }
            },
            'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
            'inferenceConfiguration': {
                'text': {
                    'temperature': 0.5,
                    'topP': 0.9,
                    'maxTokens': 500
                }
            }
        }
    ]
)

prompt_id = response['id']
prompt_arn = response['arn']

print(f"Prompt created: {prompt_id}")
print(f"ARN: {prompt_arn}")

Example 2: Prompt with Multiple Variables

# Create a prompt with multiple variables
response = bedrock_agent.create_prompt(
    name='email-generator',
    description='Generates professional emails',
    variants=[
        {
            'name': 'default',
            'templateType': 'TEXT',
            'templateConfiguration': {
                'text': {
                    'text': '''Write a professional email with the following details:

To: {{recipient_name}}
Subject: {{subject}}
Tone: {{tone}}
Key Points:
{{key_points}}

Email:'''
                }
            },
            'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
            'inferenceConfiguration': {
                'text': {
                    'temperature': 0.7,
                    'topP': 0.9,
                    'maxTokens': 1000
                }
            }
        }
    ]
)

print(f"Email generator prompt created: {response['id']}")

Example 3: Prompt with System Instructions

# Create a prompt with system message
response = bedrock_agent.create_prompt(
    name='code-reviewer',
    description='Reviews code and provides feedback',
    variants=[
        {
            'name': 'default',
            'templateType': 'TEXT',
            'templateConfiguration': {
                'text': {
                    'text': '''You are an expert code reviewer specializing in {{language}}.

Review the following code and provide:
1. Overall assessment
2. Issues found (with severity)
3. Suggestions for improvement
4. Security concerns

Code:

{{code}}


Review:'''
                }
            },
            'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
            'inferenceConfiguration': {
                'text': {
                    'temperature': 0.3,
                    'topP': 0.8,
                    'maxTokens': 2000
                }
            }
        }
    ]
)

Prompt Versions and Variants

Creating Versions

Versions are immutable snapshots of prompts:

# Create a version of a prompt
version_response = bedrock_agent.create_prompt_version(
    promptIdentifier=prompt_id,
    description='Initial production version'
)

version_number = version_response['version']
print(f"Version created: {version_number}")

Creating Variants for A/B Testing

Variants allow you to test different prompt strategies:

# Create prompt with multiple variants
response = bedrock_agent.create_prompt(
    name='product-description',
    description='Generates product descriptions',
    variants=[
        {
            'name': 'concise',
            'templateType': 'TEXT',
            'templateConfiguration': {
                'text': {
                    'text': 'Write a brief, concise product description for:\n\n{{product_name}}\n\nFeatures: {{features}}'
                }
            },
            'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
            'inferenceConfiguration': {
                'text': {
                    'temperature': 0.5,
                    'maxTokens': 200
                }
            }
        },
        {
            'name': 'detailed',
            'templateType': 'TEXT',
            'templateConfiguration': {
                'text': {
                    'text': '''Create a detailed, engaging product description for:

Product: {{product_name}}
Features: {{features}}
Target Audience: {{audience}}

Include benefits, use cases, and a compelling call-to-action.'''
                }
            },
            'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
            'inferenceConfiguration': {
                'text': {
                    'temperature': 0.7,
                    'maxTokens': 800
                }
            }
        },
        {
            'name': 'creative',
            'templateType': 'TEXT',
            'templateConfiguration': {
                'text': {
                    'text': '''Write a creative, story-driven product description for:

{{product_name}}

Features: {{features}}

Use vivid language and emotional appeal.'''
                }
            },
            'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
            'inferenceConfiguration': {
                'text': {
                    'temperature': 0.9,
                    'maxTokens': 600
                }
            }
        }
    ]
)

print(f"Prompt with 3 variants created: {response['id']}")

Updating Prompts

# Update an existing prompt
update_response = bedrock_agent.update_prompt(
    promptIdentifier=prompt_id,
    name='summarization-prompt-v2',
    description='Updated summarization prompt with better instructions',
    variants=[
        {
            'name': 'default',
            'templateType': 'TEXT',
            'templateConfiguration': {
                'text': {
                    'text': '''Analyze and summarize the following document.

Document:
{{document}}

Provide:
1. Main topic (one sentence)
2. Key points (3-5 bullet points)
3. Conclusion (one sentence)

Summary:'''
                }
            },
            'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
            'inferenceConfiguration': {
                'text': {
                    'temperature': 0.5,
                    'topP': 0.9,
                    'maxTokens': 500
                }
            }
        }
    ]
)

print("Prompt updated successfully")

Using Prompts in Applications

Method 1: Invoke Prompt Directly

import boto3
import json

bedrock_agent_runtime = boto3.client('bedrock-agent-runtime', region_name='us-east-1')

def invoke_prompt(prompt_id, variables, variant_name='default'):
    """
    Invoke a managed prompt with variables

    Args:
        prompt_id: The prompt identifier
        variables: Dictionary of variable values
        variant_name: Which variant to use

    Returns:
        Model response
    """
    response = bedrock_agent_runtime.invoke_prompt(
        promptIdentifier=prompt_id,
        promptVariant=variant_name,
        input={
            'variables': variables
        }
    )

    # Extract the response
    result = response['output']
    return result

# Example usage
result = invoke_prompt(
    prompt_id='YOUR_PROMPT_ID',
    variables={
        'document': 'This is a long document that needs to be summarized...'
    }
)

print(result)

Method 2: Invoke Specific Version

def invoke_prompt_version(prompt_id, version, variables):
    """
    Invoke a specific version of a prompt
    """
    response = bedrock_agent_runtime.invoke_prompt(
        promptIdentifier=f"{prompt_id}:{version}",
        input={
            'variables': variables
        }
    )

    return response['output']

# Use specific version
result = invoke_prompt_version(
    prompt_id='YOUR_PROMPT_ID',
    version='1',
    variables={
        'document': 'Document text...'
    }
)

Method 3: Complete Prompt Management Class

import boto3
import json
from typing import Dict, Any, List, Optional

class PromptManager:
    """
    Helper class for managing Bedrock prompts
    """

    def __init__(self, region_name='us-east-1'):
        self.agent_client = boto3.client('bedrock-agent', region_name=region_name)
        self.runtime_client = boto3.client('bedrock-agent-runtime', region_name=region_name)

    def create_prompt(
        self,
        name: str,
        description: str,
        template: str,
        model_id: str,
        variables: List[str] = None,
        temperature: float = 0.7,
        max_tokens: int = 1000
    ) -> str:
        """
        Create a new prompt

        Returns:
            Prompt ID
        """
        response = self.agent_client.create_prompt(
            name=name,
            description=description,
            variants=[
                {
                    'name': 'default',
                    'templateType': 'TEXT',
                    'templateConfiguration': {
                        'text': {
                            'text': template
                        }
                    },
                    'modelId': model_id,
                    'inferenceConfiguration': {
                        'text': {
                            'temperature': temperature,
                            'maxTokens': max_tokens
                        }
                    }
                }
            ]
        )

        return response['id']

    def get_prompt(self, prompt_id: str, version: Optional[str] = None) -> Dict:
        """
        Get prompt details
        """
        params = {'promptIdentifier': prompt_id}
        if version:
            params['promptVersion'] = version

        response = self.agent_client.get_prompt(**params)
        return response

    def list_prompts(self) -> List[Dict]:
        """
        List all prompts
        """
        response = self.agent_client.list_prompts()
        return response['promptSummaries']

    def create_version(self, prompt_id: str, description: str = None) -> str:
        """
        Create a new version of a prompt

        Returns:
            Version number
        """
        params = {'promptIdentifier': prompt_id}
        if description:
            params['description'] = description

        response = self.agent_client.create_prompt_version(**params)
        return response['version']

    def invoke(
        self,
        prompt_id: str,
        variables: Dict[str, Any],
        variant: str = 'default',
        version: Optional[str] = None
    ) -> str:
        """
        Invoke a prompt with variables

        Returns:
            Generated text
        """
        identifier = f"{prompt_id}:{version}" if version else prompt_id

        response = self.runtime_client.invoke_prompt(
            promptIdentifier=identifier,
            promptVariant=variant,
            input={
                'variables': variables
            }
        )

        return response['output']

    def test_variants(
        self,
        prompt_id: str,
        variables: Dict[str, Any],
        variants: List[str]
    ) -> Dict[str, str]:
        """
        Test multiple variants with the same input

        Returns:
            Dictionary mapping variant names to outputs
        """
        results = {}

        for variant in variants:
            try:
                output = self.invoke(prompt_id, variables, variant)
                results[variant] = output
            except Exception as e:
                results[variant] = f"Error: {str(e)}"

        return results

    def delete_prompt(self, prompt_id: str):
        """
        Delete a prompt
        """
        self.agent_client.delete_prompt(promptIdentifier=prompt_id)

# Usage Example
manager = PromptManager()

# Create a prompt
prompt_id = manager.create_prompt(
    name='blog-post-generator',
    description='Generates blog post outlines',
    template='''Create a blog post outline for:

Topic: {{topic}}
Target Audience: {{audience}}
Tone: {{tone}}

Include:
1. Catchy title
2. Introduction hook
3. 3-5 main sections with subpoints
4. Conclusion
5. Call-to-action

Outline:''',
    model_id='anthropic.claude-3-sonnet-20240229-v1:0',
    temperature=0.7,
    max_tokens=1500
)

print(f"Prompt created: {prompt_id}")

# Invoke the prompt
result = manager.invoke(
    prompt_id=prompt_id,
    variables={
        'topic': 'The Future of AI in Healthcare',
        'audience': 'Healthcare professionals',
        'tone': 'Professional and informative'
    }
)

print(f"Generated outline:\n{result}")

# Create a version
version = manager.create_version(prompt_id, 'Initial version')
print(f"Version created: {version}")

Prompt Templates

Template 1: Customer Support

customer_support_template = '''You are a helpful customer support agent for {{company_name}}.

Customer Query: {{query}}

Guidelines:
- Be friendly and professional
- Provide clear, step-by-step solutions
- If you cannot help, offer to escalate
- Always thank the customer

Response:'''
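This template could be registered as a managed prompt with the PromptManager helper defined earlier (a sketch; the prompt name, parameters, and model ID are illustrative):

support_prompt_id = manager.create_prompt(
    name='customer-support-responder',
    description='Responds to customer support queries',
    template=customer_support_template,
    model_id='anthropic.claude-3-sonnet-20240229-v1:0',
    temperature=0.5,
    max_tokens=800
)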

Template 2: Content Summarization

summarization_template = '''Summarize the following {{content_type}} for {{audience}}:

{{content}}

Requirements:
- Length: {{length}} (brief/moderate/detailed)
- Focus on: {{focus_areas}}
- Format: {{format}} (bullet points/paragraph/executive summary)

Summary:'''

Template 3: Data Extraction

extraction_template = '''Extract structured information from the following text:

Text:
{{text}}

Extract:
{{fields_to_extract}}

Return as JSON with these exact fields.

JSON:'''

Template 4: Code Generation

code_generation_template = '''Generate {{language}} code for the following requirement:

Requirement: {{requirement}}

Constraints:
{{constraints}}

Additional Context:
{{context}}

Include:
- Clean, well-commented code
- Error handling
- Example usage

Code:'''

Template 5: Translation

translation_template = '''Translate the following text from {{source_language}} to {{target_language}}:

Text:
{{text}}

Style: {{style}} (formal/casual/technical)
Preserve: {{preserve}} (tone/formatting/technical terms)

Translation:'''

Best Practices

1. Use Clear Variable Names

# ✅ Good - Clear and descriptive
variables = {
    'customer_name': 'John Doe',
    'order_number': 'ORD-12345',
    'issue_description': 'Product not delivered'
}

# ❌ Bad - Unclear
variables = {
    'var1': 'John Doe',
    'var2': 'ORD-12345',
    'var3': 'Product not delivered'
}

2. Version Control Strategy

# Semantic versioning for prompts
version_strategy = {
    'major': 'Breaking changes to variables or output format',
    'minor': 'New features or improvements',
    'patch': 'Bug fixes or minor tweaks'
}

# Example: Create versions with meaningful descriptions
v1 = manager.create_version(prompt_id, 'v1.0.0 - Initial release')
v2 = manager.create_version(prompt_id, 'v1.1.0 - Added tone parameter')
v3 = manager.create_version(prompt_id, 'v1.1.1 - Fixed formatting issue')

3. Test Before Deploying

def validate_output(output, expected):
    """Placeholder check - replace with real validation logic for your prompt's expected output."""
    return bool(output)

def test_prompt_before_deploy(prompt_id, test_cases):
    """
    Test prompt with multiple scenarios before creating version
    """
    manager = PromptManager()
    results = []

    for test_case in test_cases:
        print(f"\nTesting: {test_case['name']}")

        try:
            output = manager.invoke(
                prompt_id=prompt_id,
                variables=test_case['variables']
            )

            # Validate output
            is_valid = validate_output(output, test_case['expected'])

            results.append({
                'test_case': test_case['name'],
                'passed': is_valid,
                'output': output
            })

            print(f"✓ Passed" if is_valid else "✗ Failed")

        except Exception as e:
            results.append({
                'test_case': test_case['name'],
                'passed': False,
                'error': str(e)
            })
            print(f"✗ Error: {e}")

    # Only create version if all tests pass
    if all(r['passed'] for r in results):
        version = manager.create_version(prompt_id, 'Tested and validated')
        print(f"\n✓ All tests passed. Version {version} created.")
    else:
        print("\n✗ Some tests failed. Fix issues before creating version.")

    return results

# Test cases
test_cases = [
    {
        'name': 'Short document',
        'variables': {'document': 'Short text'},
        'expected': {'length': 'short'}
    },
    {
        'name': 'Long document',
        'variables': {'document': 'Very long text...' * 100},
        'expected': {'length': 'long'}
    }
]

test_prompt_before_deploy(prompt_id, test_cases)

4. Use Appropriate Inference Parameters

# Match parameters to use case
inference_configs = {
    'factual': {
        'temperature': 0.1,
        'topP': 0.5,
        'maxTokens': 500
    },
    'creative': {
        'temperature': 0.9,
        'topP': 0.95,
        'maxTokens': 2000
    },
    'balanced': {
        'temperature': 0.7,
        'topP': 0.9,
        'maxTokens': 1000
    }
}
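A preset can then be applied when creating a prompt. The PromptManager helper above only exposes temperature and max_tokens, so a sketch looks like this (the prompt name and template are illustrative; topP would need to be set on the variant directly):

# Pick the preset that matches the task and pass its values through
config = inference_configs['factual']

qa_prompt_id = manager.create_prompt(
    name='faq-answerer',
    description='Answers questions strictly from provided context',
    template='Answer the question using only the context below.\n\nContext:\n{{context}}\n\nQuestion: {{question}}\n\nAnswer:',
    model_id='anthropic.claude-3-sonnet-20240229-v1:0',
    temperature=config['temperature'],
    max_tokens=config['maxTokens']
)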

5. Document Your Prompts

# Include comprehensive metadata
prompt_metadata = {
    'name': 'customer-email-responder',
    'description': '''Generates professional email responses to customer inquiries.

    Use Cases:
    - Customer support
    - Sales inquiries
    - General questions

    Variables:
    - customer_name: Customer's full name
    - inquiry_type: Type of inquiry (support/sales/general)
    - inquiry_text: The customer's message
    - company_name: Your company name

    Expected Output:
    - Professional email response
    - Addresses all points in inquiry
    - Includes appropriate call-to-action

    Version History:
    - v1.0: Initial release
    - v1.1: Added personalization
    - v1.2: Improved tone consistency
    ''',
    'tags': ['customer-service', 'email', 'automation'],
    'owner': 'customer-success-team',
    'last_updated': '2024-01-15'
}

6. Monitor Prompt Performance

import time
from datetime import datetime

class PromptMonitor:
    """
    Monitor prompt performance and usage
    """

    def __init__(self):
        self.metrics = []

    def track_invocation(
        self,
        prompt_id: str,
        variant: str,
        variables: dict,
        output: str,
        execution_time: float
    ):
        """
        Track prompt invocation metrics
        """
        self.metrics.append({
            'timestamp': datetime.now().isoformat(),
            'prompt_id': prompt_id,
            'variant': variant,
            'input_length': sum(len(str(v)) for v in variables.values()),
            'output_length': len(output),
            'execution_time': execution_time,
            'success': True
        })

    def get_statistics(self, prompt_id: str = None):
        """
        Get performance statistics
        """
        filtered = self.metrics
        if prompt_id:
            filtered = [m for m in self.metrics if m['prompt_id'] == prompt_id]

        if not filtered:
            return {}

        return {
            'total_invocations': len(filtered),
            'avg_execution_time': sum(m['execution_time'] for m in filtered) / len(filtered),
            'success_rate': sum(1 for m in filtered if m['success']) / len(filtered),
            'avg_output_length': sum(m['output_length'] for m in filtered) / len(filtered)
        }

# Usage
monitor = PromptMonitor()

start_time = time.time()
output = manager.invoke(prompt_id, variables)
execution_time = time.time() - start_time

monitor.track_invocation(prompt_id, 'default', variables, output, execution_time)

# Get stats
stats = monitor.get_statistics(prompt_id)
print(f"Average execution time: {stats['avg_execution_time']:.2f}s")

Advanced Techniques

A/B Testing Framework

import random
from collections import defaultdict
from datetime import datetime
from typing import List

class PromptABTester:
    """
    A/B test different prompt variants
    """

    def __init__(self, prompt_id: str, variants: List[str]):
        self.prompt_id = prompt_id
        self.variants = variants
        self.results = defaultdict(list)
        self.manager = PromptManager()

    def run_test(self, variables: dict, user_id: str = None):
        """
        Run A/B test by randomly selecting a variant
        """
        # Select variant (can be based on user_id for consistency)
        if user_id:
            variant_index = hash(user_id) % len(self.variants)
            variant = self.variants[variant_index]
        else:
            variant = random.choice(self.variants)

        # Invoke prompt
        output = self.manager.invoke(
            self.prompt_id,
            variables,
            variant=variant
        )

        return {
            'variant': variant,
            'output': output
        }

    def record_feedback(self, variant: str, rating: int, feedback: str = None):
        """
        Record user feedback for a variant
        """
        self.results[variant].append({
            'rating': rating,
            'feedback': feedback,
            'timestamp': datetime.now()
        })

    def get_winner(self):
        """
        Determine which variant performs best
        """
        variant_scores = {}

        for variant, feedbacks in self.results.items():
            if feedbacks:
                avg_rating = sum(f['rating'] for f in feedbacks) / len(feedbacks)
                variant_scores[variant] = {
                    'avg_rating': avg_rating,
                    'sample_size': len(feedbacks)
                }

        if not variant_scores:
            return None

        winner = max(variant_scores.items(), key=lambda x: x[1]['avg_rating'])
        return {
            'variant': winner[0],
            'avg_rating': winner[1]['avg_rating'],
            'sample_size': winner[1]['sample_size']
        }

# Usage
tester = PromptABTester(
    prompt_id='product-description-prompt',
    variants=['concise', 'detailed', 'creative']
)

# Run test
result = tester.run_test(
    variables={'product_name': 'Smart Watch', 'features': 'GPS, Heart Rate, Waterproof'},
    user_id='user123'
)

print(f"Variant used: {result['variant']}")
print(f"Output: {result['output']}")

# Record feedback
tester.record_feedback(result['variant'], rating=4, feedback='Good but could be more engaging')

# After collecting enough data
winner = tester.get_winner()
print(f"Winning variant: {winner['variant']} with avg rating {winner['avg_rating']}")

Prompt Chaining

class PromptChain:
    """
    Chain multiple prompts together
    """

    def __init__(self):
        self.manager = PromptManager()
        self.chain = []

    def add_step(self, prompt_id: str, variable_mapping: dict):
        """
        Add a step to the chain

        variable_mapping: Maps output from previous step to variables for this step
        """
        self.chain.append({
            'prompt_id': prompt_id,
            'variable_mapping': variable_mapping
        })
        return self

    def execute(self, initial_variables: dict):
        """
        Execute the prompt chain
        """
        context = initial_variables.copy()
        results = []

        for step in self.chain:
            # Map variables from context
            step_variables = {}
            for var_name, source in step['variable_mapping'].items():
                if source.startswith('$'):
                    # Reference to previous output
                    step_variables[var_name] = context.get(source[1:])
                else:
                    # Direct value
                    step_variables[var_name] = source

            # Invoke prompt
            output = self.manager.invoke(
                step['prompt_id'],
                step_variables
            )

            # Store result in context
            context[f"step_{len(results)}_output"] = output
            results.append(output)

        return results

# Example: Research → Outline → Draft
chain = PromptChain()

chain.add_step(
    'research-prompt',
    {'topic': '$topic'}
).add_step(
    'outline-prompt',
    {'topic': '$topic', 'research': '$step_0_output'}
).add_step(
    'draft-prompt',
    {'outline': '$step_1_output', 'tone': '$tone'}
)

results = chain.execute({
    'topic': 'AI in Healthcare',
    'tone': 'professional'
})

print(f"Research: {results[0]}")
print(f"Outline: {results[1]}")
print(f"Draft: {results[2]}")

Dynamic Prompt Generation

def generate_prompt_dynamically(use_case: str, requirements: dict):
    """
    Generate prompts dynamically based on requirements
    """
    manager = PromptManager()

    # Base templates for different use cases
    templates = {
        'summarization': '''Summarize the following {{content_type}}:

{{content}}

Focus on: {{focus}}
Length: {{length}}

Summary:''',

        'analysis': '''Analyze the following {{data_type}}:

{{data}}

Analysis criteria:
{{criteria}}

Provide insights on:
{{insights_needed}}

Analysis:''',

        'generation': '''Generate {{output_type}} based on:

{{input}}

Requirements:
{{requirements}}

Style: {{style}}

Output:'''
    }

    template = templates.get(use_case)
    if not template:
        raise ValueError(f"Unknown use case: {use_case}")

    # Create prompt
    prompt_id = manager.create_prompt(
        name=f"{use_case}-{requirements.get('name', 'custom')}",
        description=f"Dynamically generated {use_case} prompt",
        template=template,
        model_id=requirements.get('model_id', 'anthropic.claude-3-sonnet-20240229-v1:0'),
        temperature=requirements.get('temperature', 0.7),
        max_tokens=requirements.get('max_tokens', 1000)
    )

    return prompt_id

# Usage
prompt_id = generate_prompt_dynamically(
    use_case='summarization',
    requirements={
        'name': 'article-summarizer',
        'temperature': 0.5,
        'max_tokens': 500
    }
)

AWS Code Samples and Resources

Official Documentation

  1. Bedrock Prompt Management Guide

    • URL: https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-management.html
  2. API Reference

    • URL: https://docs.aws.amazon.com/bedrock/latest/APIReference/APIOperationsAgentsforAmazon_Bedrock.html
  3. Best Practices

    • URL: https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-engineering-guidelines.html

AWS Samples Repository

git clone https://github.com/aws-samples/amazon-bedrock-samples.git
cd amazon-bedrock-samples/prompt-engineering

AWS CLI Examples

# Create a prompt
aws bedrock-agent create-prompt \
    --name "my-prompt" \
    --description "Test prompt" \
    --variants file://prompt-config.json \
    --region us-east-1

# List prompts
aws bedrock-agent list-prompts --region us-east-1

# Get prompt details
aws bedrock-agent get-prompt \
    --prompt-identifier PROMPT_ID \
    --region us-east-1

# Create prompt version
aws bedrock-agent create-prompt-version \
    --prompt-identifier PROMPT_ID \
    --description "Version 1.0" \
    --region us-east-1

# Invoke prompt
aws bedrock-agent-runtime invoke-prompt \
    --prompt-identifier PROMPT_ID \
    --input '{"variables":{"key":"value"}}' \
    --region us-east-1
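The --variants file://prompt-config.json argument expects the same variant structure used in the SDK examples above. A sketch that generates such a file (the template text and model ID are illustrative):

import json

variants = [{
    'name': 'default',
    'templateType': 'TEXT',
    'templateConfiguration': {'text': {'text': 'Summarize:\n\n{{document}}\n\nSummary:'}},
    'modelId': 'anthropic.claude-3-sonnet-20240229-v1:0',
    'inferenceConfiguration': {'text': {'temperature': 0.5, 'maxTokens': 500}}
}]

with open('prompt-config.json', 'w') as f:
    json.dump(variants, f, indent=2)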

Troubleshooting

Issue 1: Variable Not Replaced

Problem: Variables like {{variable}} appear in output

Solution:

# Ensure variable names match exactly
template = "Hello {{name}}"  # Variable is 'name'

# ✅ Correct
variables = {'name': 'John'}

# ❌ Wrong
variables = {'Name': 'John'}  # Case mismatch
variables = {'user_name': 'John'}  # Different name
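A small check before invoking catches these mismatches early (a sketch assuming double-curly-brace placeholders):

import re

def missing_variables(template: str, variables: dict) -> set:
    """Return placeholder names in the template that have no value supplied."""
    expected = set(re.findall(r'\{\{(\w+)\}\}', template))
    return expected - set(variables)

print(missing_variables("Hello {{name}}", {'Name': 'John'}))  # {'name'} - case mismatch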

Issue 2: Prompt Not Found

Problem: ResourceNotFoundException

Solution:

# Always check if prompt exists
def safe_invoke_prompt(prompt_id, variables):
    try:
        manager = PromptManager()
        prompt = manager.get_prompt(prompt_id)
        return manager.invoke(prompt_id, variables)
    except Exception as e:
        print(f"Error: {e}")
        # List available prompts
        prompts = manager.list_prompts()
        print("Available prompts:")
        for p in prompts:
            print(f"  - {p['name']} ({p['id']})")
        return None

Issue 3: Version Conflicts

Problem: Using wrong version in production

Solution:

# Use explicit version references
production_version = '1'
staging_version = '2'

# Production
result = manager.invoke(
    prompt_id=prompt_id,
    version=production_version,
    variables=variables
)

Conclusion

AWS Bedrock Prompt Management provides a robust system for organizing, versioning, and deploying prompts at scale. By centralizing prompt management, you can maintain a single source of truth for your prompts, track every change through immutable versions, A/B test variants before rollout, collaborate across teams, and roll back safely when a change does not work out.

Next Steps:
1. Create your first managed prompt
2. Set up version control workflow
3. Implement A/B testing for critical prompts
4. Monitor and optimize based on metrics

For the latest features and updates, refer to the official AWS Bedrock documentation.