
AI Provider Setup

Before you can use the AI assistant, you need to tell Load Tester which AI provider to use and give it an API key. Two minutes, tops.

Load Tester supports three providers: Anthropic (for Claude models), AWS Bedrock, and OpenAI. Pick whichever your organization already uses, or whichever you have an API key for. There is no wrong answer here.


Opening the AI Preferences

The AI provider settings live in Load Tester's preferences dialog.

On Windows:

  1. Select Window -> Preferences
  2. Expand Web Performance -> Accounts -> AI Assistant

On macOS:

  1. Select Load Tester -> Preferences
  2. Expand Web Performance -> Accounts -> AI Assistant

You'll see fields for provider, API key, model, and region (for Bedrock). Everything you need is on this one page.


Choosing a Provider

Select your provider from the Provider dropdown at the top of the preferences page.

Anthropic (Direct API)

Connects directly to Anthropic's Claude API. This is the simplest option: one API key, no infrastructure to configure.

Requirements:

  • An Anthropic API key

Available models:

  • Claude Sonnet 4.6
  • Claude Opus 4.6
  • Claude Haiku 4.5
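Load Tester handles the API call for you, but it can help to see what a direct Anthropic request looks like. This sketch assembles a minimal "ping" request against Anthropic's public Messages API; it builds the pieces without sending anything. The endpoint URL and header names are Anthropic's documented API conventions, not Load Tester internals.

```python
import json

ANTHROPIC_URL = "https://api.anthropic.com/v1/messages"  # public Messages API endpoint

def build_test_request(api_key: str, model: str) -> dict:
    """Assemble the parts of a minimal test request (nothing is sent here)."""
    return {
        "url": ANTHROPIC_URL,
        "headers": {
            "x-api-key": api_key,                  # your Anthropic API key
            "anthropic-version": "2023-06-01",     # required API version header
            "content-type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "max_tokens": 16,
            "messages": [{"role": "user", "content": "ping"}],
        }),
    }
```

An HTTP client such as `urllib.request` or `requests` would then POST `body` to `url` with those headers; a 200 response confirms the key and model are valid.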

AWS Bedrock

Accesses the same Claude models through your existing AWS account. Bedrock pulls IAM credentials from the AWS Load Generation settings you have already configured, so there is no separate API key to manage.

Requirements:

  • AWS credentials configured in Preferences -> Web Performance -> Accounts -> AWS Load Generation
  • Bedrock model access enabled in your AWS account

Region: Defaults to us-east-1. Change it if your Bedrock endpoint is in a different region.

Available models:

  • Claude Sonnet 4.6
  • Claude Opus 4.6
  • Claude Haiku 4.5

Already Using AWS for Load Generation?

If you've configured AWS credentials for cloud load testing, Bedrock is the easiest path. No new API key needed.

OpenAI

Connects to OpenAI's API for GPT models. Use this if your organization standardizes on OpenAI.

Requirements:

  • An OpenAI API key

Available models:

  • GPT-5.4
  • GPT-4.1
  • GPT-4.1-mini

Configuring Your API Key

For Anthropic and OpenAI, enter your API key in the API Key field. The field is masked, so you won't see the full key after you type it.

Key Security

Your API key is encrypted with AES-256-GCM before it's saved to disk (in ai-assistant.properties). It never leaves your machine except in requests sent directly to your chosen provider's API endpoint.
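For the curious, here is what an AES-256-GCM round trip looks like in general. This is a sketch using the third-party `cryptography` package, not Load Tester's actual implementation; how Load Tester derives its key and stores the nonce and ciphertext is not shown here.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party 'cryptography' package

def encrypt_key(key: bytes, nonce: bytes, api_key: str) -> bytes:
    """Encrypt an API key with AES-256-GCM (key: 32 bytes, nonce: 12 bytes)."""
    return AESGCM(key).encrypt(nonce, api_key.encode(), None)

def decrypt_key(key: bytes, nonce: bytes, ciphertext: bytes) -> str:
    """Decrypt; GCM authentication fails loudly if the ciphertext was tampered with."""
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode()
```

The GCM mode matters: it authenticates the ciphertext as well as encrypting it, so a corrupted or modified properties file is detected rather than silently decrypted to garbage.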

For AWS Bedrock, there's no API key field. Bedrock uses the IAM credentials from your AWS Load Generation account settings.


Selecting a Model

After choosing a provider and entering your API key, click Refresh Models to pull the current model list from your provider. This ensures you see all available models, including any released after your version of Load Tester shipped.

Select a model from the Model dropdown.
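Under the hood, "Refresh Models" amounts to fetching the provider's model list endpoint and reading out the IDs. As an illustration, OpenAI's GET /v1/models returns JSON shaped like `{"data": [{"id": ...}, ...]}`; the parsing helper below assumes that shape and is not Load Tester's own code.

```python
import json

def parse_model_ids(response_body: str) -> list[str]:
    """Pull model IDs out of an OpenAI-style GET /v1/models response body."""
    return sorted(model["id"] for model in json.loads(response_body)["data"])
```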

Which Model Should I Pick?

Claude Sonnet 4.6 is a good starting point: fast enough for interactive use, capable enough for complex analysis. If you need deeper reasoning (multi-step performance analysis, complex correlation debugging), try Claude Opus 4.6. For quick, simple questions where speed matters most, Claude Haiku 4.5 or GPT-4.1-mini keeps costs low.

On the OpenAI side, GPT-5.4 is the comparable all-around choice.


Testing the Connection

Click Test Connection to verify everything works. Load Tester sends a small test request to your provider using your API key and selected model.

If it succeeds, you see a confirmation message. You are ready to go.

If it fails, the error message tells you what went wrong:

  • "Invalid API key" - Double-check the key. Make sure there are no leading or trailing spaces. (Copy-paste loves to sneak those in.)
  • "Model not found" - Click Refresh Models to get the current list and select a valid model.
  • "Connection failed" - Check your internet connectivity and firewall rules. Some corporate firewalls block API endpoints.
  • "Access denied" (Bedrock) - Confirm your AWS credentials are configured in Accounts -> AWS Load Generation and that Bedrock model access is enabled in your AWS account.

Saving Your Settings

Click Apply and Close to save your configuration.

Once saved, AI features are available throughout Load Tester:

  • The AI panel for interactive questions and debugging
  • The MCP server for connecting external AI tools
  • AI-powered report generation for load test analysis

Troubleshooting

"Invalid API key" after pasting

Copy-paste sometimes grabs invisible whitespace characters. Clear the field completely, then paste again. If the problem persists, try typing the first few characters manually to confirm the field is accepting input.
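The invisible characters in question are usually non-breaking spaces, zero-width characters, or a byte-order mark. A cleanup pass like this sketch (not Load Tester's code) shows what stripping them entails:

```python
# Characters copy-paste commonly drags along: ordinary whitespace plus
# non-breaking space, zero-width characters, and a byte-order mark.
INVISIBLE = set(" \t\r\n\u00a0\u200b\u200c\u200d\ufeff")

def sanitize_api_key(raw: str) -> str:
    """Drop invisible characters anywhere in a pasted key."""
    return "".join(ch for ch in raw if ch not in INVISIBLE)
```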

Refresh Models returns an empty list

This usually means the API key is invalid or the provider endpoint is unreachable. Test the connection first to see the specific error.

Bedrock credentials not working

Bedrock uses the same AWS credentials as cloud load generation. Verify those credentials work by checking Accounts -> AWS Load Generation and confirming you can see your AWS instances. Also confirm that your AWS account has Bedrock model access enabled. This is a separate setting in the AWS console, not something that happens automatically.

Changed providers but old model is still selected

After switching providers, click Refresh Models to load the new provider's model list. The previous model selection doesn't carry over.

API Key Costs

AI providers charge per request based on the model you select. Opus-class and GPT-5.4 models cost more per query than Sonnet or Haiku. Check your provider's pricing page to understand the cost structure before running extended analysis sessions.
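The arithmetic is straightforward: providers quote prices per million tokens, separately for input and output. This helper sketches the calculation; the rates you plug in must come from your provider's pricing page, and the figures in the usage note below are placeholders, not real prices.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  usd_per_mtok_in: float, usd_per_mtok_out: float) -> float:
    """Rough per-request cost in US dollars from per-million-token prices."""
    return (input_tokens * usd_per_mtok_in
            + output_tokens * usd_per_mtok_out) / 1_000_000
```

For example, a request with 2,000 input tokens and 500 output tokens at placeholder rates of $3 and $15 per million tokens works out to about $0.0135 — small per request, but it adds up over a long analysis session.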

