Anthropic Academy Courses

Claude with Amazon Bedrock

As part of an accreditation program created for AWS, Anthropic launched a first-of-its-kind training for AWS employees. Here's the full course so you can follow along.

About this course

Course Overview

This technical course provides a comprehensive guide to integrating and deploying Claude AI models through Amazon Bedrock. Developers will learn to implement Claude's API, build production-ready applications, and leverage advanced features including tool use, retrieval augmented generation (RAG), and autonomous agents. The curriculum covers practical implementation patterns, performance optimization techniques, and real-world application development using Claude's capabilities within the AWS ecosystem.

What You'll Learn

  • Utilize Anthropic models on Amazon Bedrock for multi-turn conversations and system prompt configuration
  • Build and evaluate prompts using structured approaches
  • Design and integrate custom tools using JSON Schema for function calling and batch processing
  • Develop RAG pipelines with text chunking, embeddings, BM25 search, and contextual retrieval techniques
  • Configure and optimize Claude's advanced features including extended thinking, vision capabilities, and prompt caching
  • Leverage Claude Code for automated debugging and task execution
  • Implement Model Context Protocol (MCP) for defining tools, resources, and prompts in client applications
  • Optimize inference through streaming, temperature control, and structured data extraction
  • Build evaluation frameworks for prompts using model-based and code-based grading approaches

Prerequisites

  • Proficiency in Python programming
  • Basic understanding of AWS services and Amazon Bedrock

Who This Course Is For

  • Backend developers building AI-powered applications requiring advanced language model integration
  • ML engineers implementing production RAG systems and conversational AI pipelines
  • DevOps engineers deploying and optimizing Claude models in AWS infrastructure
  • Full-stack developers creating applications with complex tool use and agent capabilities
  • Technical architects designing scalable AI systems with retrieval, caching, and performance requirements
  • Automation engineers building autonomous agents for code generation, debugging, and task automation

Curriculum

  • Course introduction

    This course introduces essential concepts for working with Anthropic's models through Amazon Bedrock. You'll learn both the theoretical foundations of generative AI and practical implementation skills for building with Claude.

    This module introduces you to:

    • Course Structure - The complete learning journey from AI fundamentals to advanced techniques like RAG, tool use, and agents
    • Prerequisites - Key skills in Python, AWS Bedrock, and API requests needed for success
    • Learning Approach - Effective strategies for mastering content through hands-on practice and experimentation
    • Technical Setup - Essential preparations to ensure you can follow along with practical examples
  • Introduction to the course
  • Overview of Claude Models
  • Working with the API

    This module explores how to access and interact with AI models through AWS Bedrock. You'll learn the fundamental patterns for connecting your applications to cloud-based models and controlling how those models generate text.

    This module introduces you to:

    • Making API Requests - How to create properly structured requests to AWS Bedrock using the boto3 client and model IDs
    • Multi-Turn Conversations - How to maintain context across multiple exchanges by properly managing message histories
    • Response Control - How to influence model outputs using system prompts, temperature settings, and message pre-filling
    • Streaming Responses - How to implement real-time text generation for improved user experience
    • Structured Data Generation - How to extract clean, formatted data without explanatory text using stop sequences
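
    To make the request pattern above concrete, here is a minimal sketch using boto3's Converse API; the lessons may instead use invoke_model with the Anthropic Messages body, and the model ID below is only an example to swap for one enabled in your account and region.

      import boto3

      # Bedrock runtime client; the region must have the chosen Claude model enabled.
      client = boto3.client("bedrock-runtime", region_name="us-east-1")
      MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # example ID

      messages = [{"role": "user", "content": [{"text": "Summarize Amazon Bedrock in one sentence."}]}]

      response = client.converse(
          modelId=MODEL_ID,
          system=[{"text": "You are a concise technical assistant."}],  # system prompt
          messages=messages,
          inferenceConfig={"maxTokens": 300, "temperature": 0.2, "stopSequences": ["END"]},
      )

      reply = response["output"]["message"]["content"][0]["text"]
      # Append the assistant turn so the next request keeps the multi-turn context.
      messages.append({"role": "assistant", "content": [{"text": reply}]})
      print(reply)
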
  • Accessing the API
  • Making a request
  • Multi-Turn conversations
  • Chat bot exercise
  • System prompts
  • System prompt exercise
  • Temperature
  • Streaming
  • Controlling model output
  • Structured data
  • Structured data exercise
  • Quiz on working with the API
  • Prompt evaluations

    Prompt engineering helps you get the best possible output from Claude, but how do you know if your prompts are actually effective? This module focuses on measuring prompt performance through objective metrics before making improvements.

    This module introduces you to:

    • Evaluation Workflows - How to set up a systematic testing pipeline that measures prompt effectiveness with objective metrics
    • Test Dataset Creation - Techniques for generating diverse test cases that cover your prompt's expected inputs
    • Model-Based Grading - Using AI models as evaluators to assess response quality based on specified criteria
    • Code-Based Validation - Implementing programmatic checks to verify response formatting and syntax accuracy
    • Iterative Improvement - Methods for refining prompts based on evaluation results to achieve better performance
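
    As a rough illustration of the workflow above, the sketch below runs a prompt over a tiny test set and applies a code-based grader (JSON validity); run_prompt is a hypothetical stand-in for whichever Bedrock call your eval wraps.

      import json

      def code_grade(output: str) -> bool:
          # Code-based grader: pass if the output parses as JSON with an "answer" key (assumed criterion).
          try:
              return "answer" in json.loads(output)
          except json.JSONDecodeError:
              return False

      def run_eval(run_prompt, test_cases):
          # run_prompt(case) returns the model's output; the score is the fraction of passing cases.
          results = [code_grade(run_prompt(case)) for case in test_cases]
          return sum(results) / len(results)

      # Stubbed run_prompt so the pipeline itself can be exercised end to end.
      score = run_eval(lambda case: '{"answer": 42}', ["What is 6 * 7?"])
      print(f"pass rate: {score:.0%}")
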
  • Prompt evaluation
  • A typical eval workflow
  • Generating test datasets
  • Running the eval
  • Model based grading
  • Code based grading
  • Exercise on prompt evals
  • Quiz on prompt evaluations
  • Prompt engineering

    This module walks you through the process of improving prompts to get better results from language models. We'll start with a simple meal plan prompt and gradually enhance it through various techniques.

    This module introduces you to:

    • Clear and Direct Instructions - How explicit action verbs and simple language in your first line dramatically improve model responses
    • Specific Guidelines - How adding detailed requirements or step-by-step instructions guides the model toward better outputs
    • XML Structure - How wrapping content in tags helps models distinguish between different types of information
    • Example-Based Learning - How providing sample inputs and ideal outputs teaches models to handle complex formats and edge cases
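
    A hedged example of those techniques combined: a prompt that opens with a direct instruction, wraps requirements in XML tags, and includes one sample input/output pair (the tag names are illustrative, not prescribed by the course).

      prompt = """Create a one-day meal plan that satisfies the requirements below.

      <requirements>
      - Vegetarian
      - Roughly 2000 calories
      - No more than 30 minutes of cooking per meal
      </requirements>

      <example>
      <input>Vegan, 1800 calories</input>
      <output>Breakfast: oatmeal with berries. Lunch: lentil soup. Dinner: tofu stir-fry.</output>
      </example>

      Respond with only the meal plan, one line per meal."""
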
  • Prompt engineering
  • Being clear and direct
  • Being specific
  • Structure with XML tags
  • Providing examples
  • Exercise on prompting
  • Quiz on prompt engineering
  • Tool use

    In this module, we'll examine how to extend Claude's abilities using tools - functions that let the AI access external information or take actions. Tools are essential when you need Claude to interact with data beyond its training cutoff or perform specialized tasks.

    This module introduces you to:

    • Tool Fundamentals - The core architecture of tool integration and how tools enable Claude to access real-time information
    • Tool Implementation - Step-by-step process of creating, describing, and connecting tools to Claude's capabilities
    • Multi-Tool Management - Strategies for combining multiple tools efficiently, including parallelization with batch tools
    • Structured Data Extraction - Techniques for reliably extracting precisely formatted data using tool-based approaches
    • Text Editor Integration - How Claude can directly interact with files and code to provide development assistance
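
    To ground those ideas, here is a minimal sketch of a hypothetical weather tool described with JSON Schema and passed to Claude through Bedrock's InvokeModel with the Anthropic Messages body; the lessons build out the full loop of running the tool and returning its result.

      import json
      import boto3

      client = boto3.client("bedrock-runtime")

      # Hypothetical tool: the JSON Schema tells Claude what inputs the function expects.
      weather_tool = {
          "name": "get_weather",
          "description": "Return the current temperature in Celsius for a city.",
          "input_schema": {
              "type": "object",
              "properties": {"city": {"type": "string", "description": "City name"}},
              "required": ["city"],
          },
      }

      body = {
          "anthropic_version": "bedrock-2023-05-31",
          "max_tokens": 500,
          "tools": [weather_tool],
          "messages": [{"role": "user", "content": "What's the weather in Seattle?"}],
      }

      response = client.invoke_model(
          modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example ID
          body=json.dumps(body),
      )
      result = json.loads(response["body"].read())

      # If Claude decided to call the tool, the response contains a tool_use block whose
      # input matches the schema; your code runs the function and sends the result back.
      for block in result["content"]:
          if block["type"] == "tool_use":
              print(block["name"], block["input"])
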
  • Introducing tool use
  • Tool functions
  • JSON Schema for tools
  • Handling tool use responses
  • Running tool functions
  • Sending tool results
  • Multi-Turn conversations with tools
  • Adding multiple tools
  • Batch tool use
  • Structured data with tools
  • Flexible tool extraction
  • The text editor tool
  • Quiz on tool use
  • Retrieval Augmented Generation

    RAG helps LLMs access and utilize external information that isn't part of their training data. This module explores how to implement effective RAG systems to enhance AI applications with relevant document retrieval.

    This module introduces you to:

    • RAG Fundamentals - How retrieval augmented generation enhances LLMs by finding and incorporating relevant information from external documents
    • Text Chunking Strategies - Different approaches to dividing documents into manageable pieces while preserving meaning and context
    • Vector Embeddings - How semantic meaning is captured numerically to enable similarity-based document retrieval
    • Hybrid Search Implementation - Combining semantic and lexical search techniques to improve retrieval accuracy
    • Results Optimization - Advanced techniques like reranking and contextual retrieval for more relevant document selection
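
    A minimal sketch of two of those pieces, fixed-size chunking and BM25 lexical scoring, using the rank_bm25 package; the chunk size, overlap, and whitespace tokenization are simplifying assumptions, and the lessons layer embeddings, hybrid search, and reranking on top.

      from rank_bm25 import BM25Okapi

      def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
          # Naive character-based chunking with overlap; real pipelines often split on sentences or sections.
          step = size - overlap
          return [text[i:i + size] for i in range(0, len(text), step)]

      document = open("document.txt").read()  # any long reference document
      chunks = chunk_text(document)

      # BM25 scores tokenized text; whitespace splitting keeps the example short.
      bm25 = BM25Okapi([chunk.lower().split() for chunk in chunks])

      query = "how does prompt caching reduce cost"
      top_chunks = bm25.get_top_n(query.lower().split(), chunks, n=3)

      # The retrieved chunks are then placed into Claude's prompt as context for the answer.
      for chunk in top_chunks:
          print(chunk[:80], "...")
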
  • Introducing Retrieval Augmented Generation
  • Text chunking strategies
  • Text embeddings
  • The full RAG flow
  • Implementing the RAG flow
  • BM25 lexical search
  • A multi-search RAG pipeline
  • Reranking results
  • Contextual retrieval
  • Quiz on Retrieval Augmented Generation
  • Features of Claude

    This module covers several powerful features that improve Claude's performance on complex tasks. You'll learn how to leverage these capabilities to get better results while balancing tradeoffs in cost and response time.

    This module introduces you to:

    • Extended Thinking - How to enable Claude's reasoning phase for tackling complex problems with greater accuracy
    • Image Support - Techniques for effectively using images with Claude and why detailed prompting remains essential
    • Prompt Caching - How to speed up responses and reduce costs by reusing computational work across requests
    • Implementation Strategies - Practical approaches to integrate these features in your applications through code examples
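
    As a hedged sketch of prompt caching (assuming the feature is available for your chosen Claude model and region), a cache_control marker on a large, stable system block lets repeated requests reuse that prefix.

      import json
      import boto3

      client = boto3.client("bedrock-runtime")
      long_reference_text = open("style_guide.txt").read()  # large, stable context worth caching

      body = {
          "anthropic_version": "bedrock-2023-05-31",
          "max_tokens": 400,
          # The cache_control marker asks for everything up to this block to be cached,
          # so follow-up requests that reuse the same prefix are cheaper and faster.
          "system": [
              {"type": "text", "text": long_reference_text, "cache_control": {"type": "ephemeral"}},
          ],
          "messages": [{"role": "user", "content": "Does this guide allow passive voice?"}],
      }

      response = client.invoke_model(
          modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example ID; caching support varies by model
          body=json.dumps(body),
      )
      print(json.loads(response["body"].read())["content"][0]["text"])
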
  • Extended thinking
  • Image support
  • Prompt caching
  • Rules of prompt caching
  • Prompt caching in action
  • Quiz on features of Claude
  • Model Context Protocol

    Model Context Protocol (MCP) is a communication layer that provides Claude with context and tools without requiring developers to write tedious integration code. This module explores how MCP connects applications with AI models through a specialized client-server architecture.

    This module introduces you to:

    • MCP Servers and Clients - How these components work together to expose functionality and data from external services to AI models
    • Tools Implementation - How to define model-controlled functions that extend Claude's capabilities with external services
    • Resources - How to expose application-controlled data from your MCP server for UI elements and context
    • Prompts - How to create user-triggered, optimized instructions for specific workflows
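
    A minimal sketch of an MCP server exposing one tool, one resource, and one prompt, using the FastMCP helper from the Python MCP SDK; the names and data are illustrative rather than taken from the course project.

      from mcp.server.fastmcp import FastMCP

      mcp = FastMCP("docs-server")  # server name shown to connecting clients

      @mcp.tool()
      def count_words(text: str) -> int:
          """Model-controlled: Claude decides when to call this."""
          return len(text.split())

      @mcp.resource("docs://readme")
      def readme() -> str:
          """Application-controlled: the client app can surface this data in its UI or context."""
          return "Project readme contents go here."

      @mcp.prompt()
      def summarize(topic: str) -> str:
          """User-triggered: an optimized instruction exposed for a specific workflow."""
          return f"Write a three-bullet summary of {topic} for a technical audience."

      if __name__ == "__main__":
          mcp.run()  # defaults to stdio so an MCP client can spawn and connect to this server
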
  • Introducing MCP
  • MCP clients
  • Project setup
  • Defining tools with MCP
  • The server inspector
  • Implementing a client
  • Defining resources
  • Accessing resources
  • Defining prompts
  • Prompts in the client
  • MCP review
  • Quiz on Model Context Protocol
  • Agents

    This module explores how to build AI agents using Claude's capabilities. You'll see real-world applications of Claude acting as an autonomous assistant through tools like Claude Code and Computer Use.

    This module introduces you to:

    • Agent Fundamentals - How Claude can use tools to gather information and modify environments to accomplish complex tasks
    • Claude Code Implementation - How to set up, use, and extend this terminal-based coding assistant for software development
    • Parallel Development - How to leverage multiple Claude Code instances to work on different features simultaneously
    • Computer Use Feature - How Claude can interact with web interfaces to perform testing and automation tasks
    • Effective Agent Design - Core principles behind successful AI agents and when they're most appropriately deployed
  • Agents overview
  • Claude Code setup
  • Claude Code in action
  • Enhancements with MCP servers
  • Parallelizing Claude Code
  • Automated debugging
  • Computer Use
  • How Computer Use works
  • Qualities of agents
  • Final assessment
  • Final assessment quiz
  • Wrap up

    This video wraps up our journey through generative AI fundamentals and Anthropic's Claude models. We've explored everything from basic concepts to practical applications with the API.

    • Course Recap - Review of key topics including model families, API usage, parameters, and prompt engineering
    • Critical Evaluations - Why thorough testing is essential before deploying any AI solution
    • Future Directions - Emerging trends in LLM orchestration and agentic workflows for building powerful AI systems
  • Course wrap up
