AI Limitations & Safety¶
The AI assistant is a powerful tool for load testing, but it has real limitations. This page explains what it cannot do, why those limitations exist, and how to work within them effectively.
Key principle: The AI is an expert assistant, not a replacement for your judgment. Treat its suggestions as informed recommendations that need verification. If you would not follow the same advice from a consultant without checking it first, do not follow it from the AI either.
What the AI Cannot Do¶
Cannot Execute Actions¶
Limitation: The AI can suggest fixes but cannot modify your test case, run replays, start load tests, or make any changes to your configuration.
Why this matters: You stay in control. The AI won't accidentally break a working test case or run an unintended load test.
What this means:
- ✅ AI suggests: "Add an extractor for the session token at line 45"
- ❌ AI cannot: Actually create that extractor for you
- 🎯 You do: Create the extractor based on AI guidance
Why this design: Keeping you in the loop prevents AI errors from propagating into your test configuration. You verify each suggestion before applying it. The AI may be confident and wrong; you are the last line of defense.
Cannot Access Your Application Internals¶
Limitation: The AI only sees what Load Tester captures (HTTP traffic, response codes, headers, content). It cannot see:
- ❌ Your source code
- ❌ Database schemas or queries
- ❌ Server logs or infrastructure
- ❌ Application framework internals
- ❌ Business logic or workflows
Why this matters: The AI cannot understand HOW your application works internally. It only sees what travels over HTTP.
Example scenario:
You: "Why is the checkout page slow?"
AI can say: "The checkout page response time is 5000ms, which is 10x slower than other pages. The server is returning 200 OK but taking a long time. This suggests a server-side bottleneck."
AI cannot say: "The slow response is caused by the calculateShipping() function at line 234 in checkout.js making a synchronous database call." (AI can't see your code)
What this means: The AI identifies symptoms (slow response, high server CPU). You investigate root causes in your application code.
Cannot Predict Application Behavior¶
Limitation: The AI can analyze patterns in captured data but cannot predict how your application will behave under different conditions.
Why this matters: The AI cannot tell you definitively whether a fix will work. It can only suggest likely solutions based on common patterns.
Example scenario:
You: "If I add this extractor, will replay succeed?"
AI can say: "Based on the response content, extracting the CSRF token with a boundary extractor should fix the 422 errors you're seeing. This is a common pattern for CSRF validation failures."
AI cannot guarantee: That this fix will work in your specific application. Your CSRF implementation might be non-standard.
What this means: Test AI suggestions. Do not assume they will work without verification.
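To make the suggestion above concrete, here is a minimal sketch of what a boundary-style extraction amounts to. The marker strings and the `_csrf` field name are hypothetical; check your application's actual response content for the real boundaries before relying on any of this.

```python
def boundary_extract(text, left, right):
    """Return the substring between the first occurrence of `left`
    and the next occurrence of `right`, or None if either is absent."""
    start = text.find(left)
    if start == -1:
        return None
    start += len(left)
    end = text.find(right, start)
    if end == -1:
        return None
    return text[start:end]

# Hypothetical response body containing a CSRF token in a hidden field.
body = '<input type="hidden" name="_csrf" value="abc123xyz">'
token = boundary_extract(body, 'name="_csrf" value="', '"')
print(token)  # abc123xyz
```

If your CSRF implementation is non-standard, the boundaries will differ, which is exactly why the extracted value needs to be verified against a real replay.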
Cannot Make Architectural Decisions¶
Limitation: The AI can explain trade-offs and implications, but cannot decide the best approach for your specific requirements.
Why this matters: Your business needs, infrastructure constraints, and timeline priorities are unique. The AI does not know your priorities.
Example scenario:
You: "Should I use datasets or custom values for user credentials?"
AI can say: "Datasets give you unique credentials per virtual user, which is more realistic. Custom values are simpler, but all users share the same credentials. Trade-offs: datasets require CSV file management; custom values risk session conflicts at high user counts."
AI cannot say: "Definitely use datasets." (Depends on your test goals, data availability, infrastructure)
What this means: Use AI to understand options, but YOU decide based on your context.
Cannot Guarantee Correctness¶
Limitation: AI suggestions are based on common patterns and may not apply to your specific application.
Why this matters: Unusual implementations, custom authentication schemes, or non-standard frameworks can confuse the AI.
Example scenario:
AI suggests: "This looks like a standard OAuth Bearer token. Extract it from the Authorization header and inject it into subsequent requests."
Reality: Your application uses a CUSTOM token format that looks similar to OAuth but has different extraction rules.
What this means: Verify AI suggestions against your application's actual behavior. If a suggestion doesn't work, tell the AI what you observed and ask for alternatives.
Cannot Retain Context Across Sessions¶
Limitation: Each AI conversation starts fresh. The AI doesn't remember previous questions or analyses from past sessions.
Why this matters: If you close Load Tester and reopen it, the AI will not remember your earlier troubleshooting. Every session starts from zero.
What this means:
- ✅ Document AI suggestions that work, for your own future reference
- ✅ Expect to re-explain context if you start a new session
- ❌ Do not assume the AI remembers yesterday's conversation
Why this design: Privacy. No conversation history is stored anywhere. Your troubleshooting discussions vanish when you close the session.
Why These Limitations Exist¶
Design Philosophy: Human-in-the-Loop¶
Reason: Load testing has real consequences. A misconfigured test can produce misleading results, waste infrastructure spending, or lead to incorrect capacity planning decisions.
AI role: Provide expert guidance and suggestions
Your role: Verify, apply, and validate
Safety benefit: You catch AI errors before they impact your testing.
Privacy & Security¶
Reason: Test cases often contain sensitive data: credentials, API keys, production URLs. The MCP tools run entirely locally, which protects your data.
Trade-off: The AI cannot access external knowledge bases or consult with other systems. It only knows what Load Tester has captured.
Benefit: Your test cases, credentials, and results stay on your machine.
Technical Constraints¶
Reason: AI models have finite context windows. They can see YOUR current test case but not the entire history of all your tests.
Trade-off: The AI can't learn from your past work or remember fixes you've applied before.
Benefit: Simpler architecture, faster responses, no data retention concerns.
Safety Considerations¶
Verify Before Applying¶
Rule: Never apply AI suggestions blindly. Always verify they make sense for your application.
How to verify:
1. Understand the suggestion - If you don't understand why AI recommends something, ask for clarification
2. Check against your application - Does the suggestion align with how your app actually works?
3. Test incrementally - Apply one change at a time, test, then apply the next
4. Roll back if needed - Keep backups of working configurations
Example verification process:
AI suggests: "Extract the session cookie with name 'JSESSIONID' and inject it into all requests."
Verify:
- ✅ Check: Does your application actually use a cookie named 'JSESSIONID'? (Look in Headers View)
- ✅ Test: Create the extractor, run replay, verify it works
- ✅ Validate: Check that extracted values are being injected correctly
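Headers View is the authoritative place to do that first check, but the same idea can be sketched in plain Python. The Set-Cookie value below is a hypothetical stand-in for what your login response actually returns:

```python
from http.cookies import SimpleCookie

# Hypothetical Set-Cookie header captured from a login response.
set_cookie = "JSESSIONID=9F2A7C; Path=/; HttpOnly"

jar = SimpleCookie()
jar.load(set_cookie)  # parse the header into named cookie morsels

if "JSESSIONID" in jar:
    print("Cookie present:", jar["JSESSIONID"].value)
else:
    print("No JSESSIONID - the suggested cookie name does not match reality")
```

If the name the AI suggested never appears in any Set-Cookie header, the suggestion fails the very first verification step, and no extractor will fix that.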
Understand the Reasoning¶
Rule: If you don't understand WHY the AI recommends something, ask for explanation.
Good follow-up questions:
- "Why do you think this is a CSRF token?"
- "What pattern made you suggest a boundary extractor instead of regex?"
- "Explain how this fix addresses the 401 error"
Why this matters: Understanding the reasoning helps you:
- Spot when AI is making incorrect assumptions
- Learn patterns for future troubleshooting
- Adapt suggestions to your specific situation
Start Simple, Add Complexity¶
Rule: Begin with the simplest AI suggestion, test it, then add complexity if needed.
Example:
AI suggests: "This could be fixed by: (1) extracting the token, (2) creating a custom field assignment, or (3) using a dataset with pre-generated tokens."
Safe approach:
1. Try option 1 (simplest) first
2. If that doesn't work, try option 2
3. Only pursue option 3 if simpler approaches fail
Why this matters: Simpler solutions are easier to debug. If a complex three-part fix fails, you will not know which part is wrong.
Know When to Stop and Seek Help¶
Rule: If AI suggestions aren't working after 2-3 attempts, stop and seek human expertise.
Warning signs:
- AI keeps suggesting the same fix that doesn't work
- Suggestions contradict each other
- AI says "I'm not sure" or "This is unusual"
- You're making changes but don't understand why
What to do instead:
- Check the manual - Read the relevant section for comprehensive background
- Contact support - See Getting Support for how to gather diagnostic info
- Post in community forums - Other users may have encountered similar issues
The AI is a tool, not a complete solution. Human expertise (yours or support's) may be needed for problems outside the AI's visibility.
Common AI Failure Modes¶
1. Hallucinated Configuration¶
What happens: The AI suggests a feature or setting that does not exist in Load Tester.
Example: "Enable the 'Auto-Correlation' setting in Preferences."
Why this happens: AI models sometimes confuse features from other tools, or invent plausible-sounding settings from whole cloth.
How to detect: If you cannot find the suggested setting in Load Tester, it probably does not exist.
What to do: Tell the AI: "I don't see that setting. Are you sure it exists in Load Tester?" The AI will correct itself or suggest alternatives.
2. Overgeneralization from Pattern¶
What happens: The AI assumes your application follows a common pattern when it actually uses a custom implementation.
Example: "This is standard OAuth 2.0. Extract the Bearer token from the Authorization header."
Reality: Your app uses OAuth but with a custom token location (query parameter instead of header).
How to detect: The AI's suggestion does not match what you actually see in response content or headers.
What to do: Tell the AI what you actually observe: "The token is in a query parameter called 'auth_token', not the Authorization header."
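Correcting the AI with what you observe works because token location is ultimately just a parsing question. A minimal sketch of pulling the token from a query parameter instead of the Authorization header; the URL and the `auth_token` parameter name come from the example above and are hypothetical:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical callback URL captured in the recording.
url = "https://app.example.com/callback?auth_token=tok_5f2d&state=xyz"

# parse_qs maps each parameter name to a list of values.
params = parse_qs(urlparse(url).query)
token = params.get("auth_token", [None])[0]
print(token)  # tok_5f2d
```

Once you have told the AI where the token really lives, it can suggest the matching extractor configuration instead of the standard-pattern one.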
3. Incomplete Troubleshooting¶
What happens: The AI identifies one issue but misses others.
Example: "The 403 error is caused by a missing CSRF token."
Reality: Both the CSRF token AND a session cookie are missing.
How to detect: Fixing the AI-suggested issue doesn't resolve the problem.
What to do: After applying the fix, if errors persist, ask: "I fixed the CSRF token but still see 403 errors. What else could be wrong?"
4. Misunderstanding Context¶
What happens: The AI misinterprets what you are asking about.
Example:
You ask: "Why is the login slow?"
AI assumes: You mean "Why does the login PAGE have slow response time?"
You meant: "Why does the login TRANSACTION fail during load testing?"
How to detect: The answer does not address your actual question.
What to do: Clarify: "I meant the login transaction is failing, not that it's slow."
Working Effectively Within Limitations¶
Provide Context¶
Good prompts include:
- ✅ Specific errors: "I'm getting 403 Forbidden on transaction #12"
- ✅ What you tried: "I extracted the CSRF token but still see validation errors"
- ✅ Relevant details: "This is a React SPA using OAuth authentication"
Poor prompts:
- ❌ Too vague: "My test doesn't work"
- ❌ No details: "Help me"
Iterate and Refine¶
Pattern:
1. Ask initial question
2. Try AI suggestion
3. Report results back: "I tried that but saw [result]"
4. AI suggests next step
5. Repeat until resolved
Example conversation:
You: "Why is replay failing with 401 errors?"
AI: "Check if session cookies are being extracted and injected."
You: "I see a cookie named 'session_id' being extracted. Still get 401."
AI: "Verify the cookie is being injected into subsequent requests. Check Headers View for request #3."
You: "The cookie is there but value is different from what was extracted."
AI: "That suggests the cookie value is being regenerated. You may need to extract it from each response, not just the first one."
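The fix the AI arrives at — track the cookie across every response rather than only the first — looks roughly like this sketch. The response dictionaries and the `session_id` name are hypothetical stand-ins for what Load Tester manages for you:

```python
def latest_session_id(responses):
    """Walk responses in order, keeping the most recently issued
    session_id so later requests always carry the current value."""
    current = None
    for resp in responses:
        value = resp.get("set_cookie", {}).get("session_id")
        if value is not None:
            current = value  # server rotated the cookie: adopt the new value
    return current

# Hypothetical captured responses where the server rotates the cookie mid-session.
responses = [
    {"set_cookie": {"session_id": "first"}},
    {"set_cookie": {}},
    {"set_cookie": {"session_id": "rotated"}},
]
print(latest_session_id(responses))  # rotated
```

Extracting only once would have kept "first" and produced exactly the mismatch reported in the conversation above.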
Combine AI with Manual Resources¶
Best approach: Use AI for context-specific help, manual for conceptual learning.
Example workflow:
1. Learn concept in manual: Read Understanding Web Pages & Application State
2. Apply to your test case: Configure with ASM
3. Hit a problem: "ASM didn't detect the session token"
4. Ask AI: "Why wasn't the session token detected?" (AI analyzes YOUR test case)
5. Verify solution: Cross-check AI suggestion against manual's explanation
When AI Uncertainty is Appropriate¶
The AI will tell you when it's uncertain. This is GOOD: it means the AI knows its limits.
Appropriate uncertainty:
- ✅ "I don't have enough context to diagnose this. Can you run a replay and share the specific error?"
- ✅ "This could be either a CSRF token or a nonce. Let's check the response content to confirm."
- ✅ "I'm not familiar with this specific authentication pattern. The manual may have more details."
Inappropriate responses (should make you skeptical):
- ❌ Absolute certainty on complex issues: "This will definitely fix your problem."
- ❌ Contradicting itself across responses
- ❌ Suggesting impossible features
How to respond to uncertainty: Provide more context, run tests to gather data, or consult the manual for comprehensive explanation.
Privacy & Data Safety¶
What Data the AI Sees¶
AI has access to:
- ✅ Test case structure (pages, transactions, URLs)
- ✅ Response codes and headers
- ✅ Captured response content (HTML, JSON, XML)
- ✅ Configuration (field datasources, extractors)
- ✅ Metrics from load test results
AI does NOT have access to:
- ❌ Your file system or other applications
- ❌ External networks or the internet
- ❌ Previous conversations from past sessions
Data Retention¶
Session data: The AI analyzes data currently in Load Tester's memory. When you close Load Tester, conversation history is gone.
No telemetry: Load Tester does not send AI conversations to external servers or track your usage.
Your responsibility: If response content contains sensitive data (passwords in plaintext, API keys), the AI will see it during analysis. The MCP tools are local-only and nothing is retained, but be aware of what you are analyzing.
Summary: How to Use AI Safely¶
DO:¶
✅ Verify all AI suggestions before applying them
✅ Ask for clarification if you don't understand the reasoning
✅ Test changes incrementally (one at a time)
✅ Combine AI guidance with manual documentation
✅ Report back results so AI can refine suggestions
✅ Seek human expertise when AI suggestions aren't working
DON'T:¶
❌ Apply AI suggestions blindly without understanding them
❌ Assume AI suggestions will work without testing
❌ Trust AI to know your application's internals
❌ Expect AI to make architectural decisions for you
❌ Retry the same AI suggestion repeatedly if it's not working
❌ Ignore your own judgment in favor of AI recommendations
Related Topics¶
- AI Assistant Overview - What AI does at each stage
- Getting Started with AI - Your first AI interaction
- AI for Configuration - Debug test cases
- AI for Monitoring - Real-time load test guidance
- AI for Analysis - Performance investigation
The AI is a powerful assistant, but you are the expert on your application. Use it to accelerate troubleshooting and analysis, but always verify that suggestions make sense for your specific context. The AI can see what happened. Only you know what was supposed to happen.