Getting Started with AI¶
Once your provider is configured, the AI assistant is ready. No feature flags, no activation step. Open the panel and ask it something.
This guide walks you through your first interaction and shows you how to get the most useful answers.
Opening the AI Panel¶
The AI panel is accessible from anywhere in Load Tester.
Method 1: AI Button in Toolbar¶
- Click the AI button in the main toolbar. The AI panel opens on the right side.
Method 2: Keyboard Shortcut¶
- Windows/Linux: Press Ctrl+Shift+A
- macOS: Press ⌘⇧A
Method 3: View Menu¶
- View → AI Assistant - Opens the AI panel
The panel stays open as you work, so you can ask follow-up questions without reopening it.
Your First Question¶
Let's start with something simple to see how the AI works.
Example: Ask About Replay Errors¶
Scenario: You've recorded a test case, run ASM, and attempted a replay, but it failed with 401 errors.
1. Open the AI panel (any method above)
2. Type your question in the input field, for example: "Why is my replay failing with 401 errors?"
3. Press Enter or click Send
What Happens Next¶
The AI analyzes your current test case and responds with:
- Likely causes based on your test case structure
- What to check (session cookies, authentication tokens)
- Specific recommendations ("Check if the JSESSIONID cookie was extracted from the login response")
- Next steps to resolve the issue
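The missing-cookie failure described above can be sketched in a few lines. This is an illustrative example (the header value and cookie name are hypothetical), not Load Tester's internal API; it only shows why a session cookie that is never extracted leads to 401s on every request after login.

```python
from http.cookies import SimpleCookie

# Hypothetical Set-Cookie header captured from a recorded login response.
set_cookie_header = "JSESSIONID=ABC123XYZ; Path=/; HttpOnly"

# A cookie extractor captures the session value so later requests can reuse it.
cookie = SimpleCookie()
cookie.load(set_cookie_header)
session_id = cookie["JSESSIONID"].value

# Without this header on subsequent requests, the server answers 401.
next_request_headers = {"Cookie": f"JSESSIONID={session_id}"}
print(next_request_headers["Cookie"])  # JSESSIONID=ABC123XYZ
```

If the extractor is missing, the replay sends no `Cookie` header at all, which is exactly the symptom the AI diagnoses here.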
Key point: The AI sees your actual test case. Responses are specific to your situation, not generic "have you tried restarting?" advice.
Understanding AI Responses¶
AI responses typically include:
1. Direct Answer¶
The AI starts with a clear answer to your question:
"Your replay is failing with 401 errors because the session cookie isn't being extracted from the login response."
2. Explanation¶
Why this is happening:
"I can see the login page returns a Set-Cookie header with JSESSIONID, but there's no extractor configured to capture it. Without this cookie, subsequent requests aren't authenticated."
3. Action Items¶
What to do about it:
To fix this:
1. Select the login transaction in your test case
2. Check the Headers View for the Set-Cookie header
3. Right-click → Create Extractor → Cookie
4. Verify the JSESSIONID field appears in Fields View with a dynamic datasource
4. Context and Alternatives¶
Additional information:
"ASM usually detects JSESSIONID automatically. If it didn't, the cookie name might be non-standard. Check if your application uses a different name like 'sessionid' or 'auth_token'."
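Checking for a non-standard cookie name, as the AI suggests, amounts to scanning the captured Set-Cookie values for anything that looks like a session identifier. A minimal sketch (the name list and captured values are assumptions for illustration):

```python
# Names that commonly carry a session identifier, including non-standard ones.
LIKELY_SESSION_NAMES = {"jsessionid", "sessionid", "phpsessid", "auth_token", "session"}

def find_session_cookies(set_cookie_values):
    """Return cookie names from Set-Cookie headers that look session-like."""
    found = []
    for value in set_cookie_values:
        name = value.split("=", 1)[0].strip()
        if name.lower() in LIKELY_SESSION_NAMES:
            found.append(name)
    return found

# Hypothetical Set-Cookie values captured during recording.
captured = ["auth_token=55ae21f0; Path=/; HttpOnly", "theme=dark; Path=/"]
print(find_session_cookies(captured))  # ['auth_token']
```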
Example Prompts for Common Scenarios¶
Here are effective prompts for typical situations. Notice the pattern: the more specific you are, the more specific the AI can be.
Configuration Help¶
❌ Too vague: "Help with my test case"
✅ Better: "I need to configure login for this test case. It uses form authentication with username and password."
✅ Even better: "My test case has a login form, but I want to test with 50 different users. How do I configure multiple usernames and passwords?"
Why it works: Context lets the AI give you a specific answer instead of a tutorial.
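The "50 different users" scenario above usually boils down to a credential dataset that virtual users draw from in turn. A rough sketch of the idea, assuming a hypothetical `users.csv` with username/password columns (Load Tester's actual dataset mechanism may differ):

```python
import csv
import io
import itertools

# Hypothetical users.csv content backing a dataset-driven login field.
users_csv = "username,password\nuser1,pw1\nuser2,pw2\nuser3,pw3\n"
rows = list(csv.DictReader(io.StringIO(users_csv)))

# Each simulated user takes the next credential row, wrapping when exhausted.
credentials = itertools.cycle(rows)
first_five = [next(credentials)["username"] for _ in range(5)]
print(first_five)  # ['user1', 'user2', 'user3', 'user1', 'user2']
```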
Debugging Replay Failures¶
❌ Too vague: "Replay doesn't work"
✅ Better: "My replay is failing with 403 Forbidden errors on the checkout page"
✅ Even better: "Replay succeeds through login and browsing, but fails with 403 Forbidden when submitting the checkout form. The error view shows 'CSRF validation failed'."
Why it works: Error codes and which page fails narrow the problem.
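The "CSRF validation failed" example above has a typical cause: the replay resends the token recorded earlier instead of extracting a fresh one from the current response. A sketch of what a token extractor does (the field name, HTML, and regex are hypothetical):

```python
import re

# Hypothetical HTML from the checkout page, as returned during a replay.
checkout_html = '<input type="hidden" name="csrf_token" value="tok-9f8e7d2c">'

# The token changes every session, so the replay must capture the fresh value
# from the current response rather than resending the recorded one.
match = re.search(r'name="csrf_token"\s+value="([^"]+)"', checkout_html)
fresh_token = match.group(1) if match else None
print(fresh_token)  # tok-9f8e7d2c
```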
Understanding ASM Results¶
❌ Too vague: "What did ASM do?"
✅ Better: "Why did ASM configure the authenticity_token field as dynamic?"
✅ Even better: "ASM configured 15 hidden fields as dynamic, but I'm not sure if all of them need to be. How do I know which ones are actually required?"
Why it works: Asks about specific configuration decisions.
Load Test Monitoring¶
❌ Too vague: "Is my load test okay?"
✅ Better: "Response times increased from 200ms to 2000ms at 75 users. What's causing this?"
✅ Even better: "The load test was running smoothly until 75 users, then response times suddenly spiked from 200ms to 2000ms. Error rate is still 0%. What should I check?"
Why it works: Numbers and symptoms give the AI something concrete to work with.
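Putting a number on "suddenly spiked" before you ask can make the prompt even sharper. A quick sketch that finds the load level where latency first exceeds three times the low-load baseline (the figures and the 3x threshold are illustrative, not a Load Tester feature):

```python
# Hypothetical average response times (ms) observed at each user level.
averages_ms = {10: 200, 25: 210, 50: 230, 75: 2000, 100: 2400}

# Flag the first load level where latency exceeds 3x the low-load baseline.
baseline = averages_ms[min(averages_ms)]
knee = next(users for users, ms in sorted(averages_ms.items()) if ms > 3 * baseline)
print(knee)  # 75
```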
Performance Analysis¶
❌ Too vague: "Why is my app slow?"
✅ Better: "Which pages have the worst performance in this load test?"
✅ Even better: "Compare the response times of the product search page at 10 users vs 100 users. Why did it degrade so much?"
Why it works: Asks for specific analysis with comparison.
Tips for Effective AI Interaction¶
Be Specific About Context¶
Include:
- What you were doing (recording, replaying, load testing)
- What page/transaction has the problem
- Error codes or messages you see
- What you've already tried
Example:
I'm trying to replay a test case for a React SPA. The login works, but when
I navigate to the dashboard, I get a 401 error. I've run ASM and verified
the session cookie is extracted. What else should I check?
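One common answer to this SPA scenario: single-page apps often authenticate API calls with an Authorization Bearer token rather than the session cookie, so a working cookie extractor alone may not be enough. A hypothetical header comparison illustrates the gap (header values are made up):

```python
# What the browser sent during recording vs. what the replay currently sends.
recorded = {
    "Cookie": "JSESSIONID=ABC123",
    "Authorization": "Bearer <recorded-token>",
}
replayed = {
    "Cookie": "JSESSIONID=ABC123",  # cookie extractor works, but...
}

# Any header present at recording time but absent on replay is a suspect.
missing_headers = sorted(set(recorded) - set(replayed))
print(missing_headers)  # ['Authorization']
```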
Ask Follow-Up Questions¶
The AI remembers context within a conversation, so you can dig deeper without re-explaining everything:
Initial question: "Why is this field configured as dynamic?"
Follow-up: "Should I override it to use a dataset instead?"
Follow-up: "Show me how to configure that"
Use "Why" and "How" Questions¶
- "Why..." - Get explanations: "Why did ASM detect this as a CSRF token?"
- "How..." - Get procedures: "How do I configure OAuth authentication?"
- "What..." - Get information: "What does this error code mean?"
- "Should..." - Get recommendations: "Should I use Play Fast or regular Play for this?"
Reference Specific Elements¶
- "This page" - If you have a page selected
- "This transaction" - If you have a transaction selected
- "This field" - When asking about field configuration
- "These errors" - When the Errors View shows problems
The AI sees what you're looking at and can give context-specific answers.
What to Expect from the AI¶
What the AI Does Well¶
✅ Diagnose errors - Analyzes YOUR test case to identify problems
✅ Suggest fixes - Provides specific steps to resolve issues
✅ Explain decisions - Clarifies why ASM made certain configuration choices
✅ Compare options - Explains trade-offs between different approaches
✅ Guide workflows - Helps you know what to do next
What the AI Cannot Do¶
❌ Execute actions - Cannot modify your test case, run replays, or start load tests. You control all actions.
❌ Access your application - Cannot see your actual web application, only what Load Tester captured
❌ Read your mind - Needs context in your questions to give helpful answers
❌ Guarantee solutions - Suggestions work for common patterns but may need adjustment
When uncertain: The AI will tell you if it doesn't have enough information or if multiple solutions are possible.
Your First Real Interaction¶
Now try asking the AI something you actually need help with:
If You're Configuring a Test Case:¶
I'm configuring [feature] for this test case. It uses [authentication type
or data source]. What's the best approach?
If Replay Failed:¶
My replay is failing with [error code/message]. The errors appear
on [page name]. What's the likely cause?
If Running a Load Test:¶
My load test is at [N] users and I'm seeing [symptom]. What should
I check?
If Analyzing Results:¶
Which pages had the worst performance in this load test, and what
might explain it?
The AI will respond based on your actual context - what test case you have open, what errors you're seeing, what results you're analyzing.
Common First-Time Questions¶
"Can the AI fix my test case automatically?"¶
No. The AI tells you what to fix and why, but you make the changes. This keeps you in control and, more importantly, helps you learn the patterns so you can spot similar issues yourself.
Example flow:
1. AI: "The session cookie isn't extracted. Here's how to fix it..."
2. You: Follow the steps to create the extractor
3. AI: "Now try replaying again to verify it works"
"Will the AI remember what I asked yesterday?"¶
No. Each conversation starts fresh. The AI sees your current test case, results, and context, but has no memory of previous sessions.
If you need to reference earlier work, include the relevant context in your new question. A sentence or two is usually enough.
"Can I use the AI instead of reading the manual?"¶
The AI complements the manual; it does not replace it. Use the manual to learn concepts and workflows. Use the AI when you are staring at a specific problem in your specific test case.
The manual teaches you how load testing works. The AI tells you why this particular replay is returning 403s.
"What if the AI gives wrong advice?"¶
AI suggestions are based on common patterns and may not fit every situation. Always verify that recommendations make sense for your application before acting on them.
If advice does not work, tell the AI what happened: "I tried that but still see [problem]." Give it the new context and it will suggest alternatives. This back-and-forth is the normal workflow, not a sign that something is broken.
Next Steps¶
Now that you know how to use the AI:
- Try it with your current work - Ask about whatever you're working on right now
- Learn AI patterns for each stage:
- AI for Configuration - Debugging test case setup
- AI for Monitoring - Real-time guidance during load tests
- AI for Analysis - Investigating performance results
- Understand limitations - Limitations & Safety
Or just keep the AI panel open as you work and ask questions when you need them. That is how most people use it.
Related Topics: - AI Assistant Overview - AI for Configuration - AI for Monitoring - AI for Analysis - Limitations & Safety