Command Line Tools

Load Tester includes a command-line interface (CLI) for headless execution: running replays and load tests without the GUI. If you want to wire load testing into a CI/CD pipeline, schedule nightly regression tests, or automate anything that shouldn't require a human clicking buttons, this is the tool.

This reference covers CLI syntax, options, return codes, and common use cases.


CLI Executable

Location:

  • Windows: <install>\cli.exe
  • macOS/Linux: <install>/cli

Where <install> is:

  • Windows: C:\Program Files\Web Performance Load Tester 7.0\
  • macOS: /Applications/Web Performance Load Tester.app/Contents/Eclipse/
  • Linux: /usr/local/bin/WebPerformanceLoadTester_7.0/
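
On Linux or macOS you can avoid typing the full path by adding the install directory to PATH (a minimal sketch, assuming the default Linux location above):

# Make the cli executable available for the current shell session
export PATH="$PATH:/usr/local/bin/WebPerformanceLoadTester_7.0"
cli replay /path/to/project.wpt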


Available Commands

The CLI supports two main operations:

| Command | Purpose | Use Case |
|---------|---------|----------|
| replay | Run test case replays headlessly | Automated validation, CI/CD pre-deployment checks |
| execute | Run load tests headlessly | Scheduled load tests, performance regression testing |

Replay Command

Run test case replays without the GUI to validate test cases or check for errors.

Syntax

cli replay [options] <repository>

Options

| Option | Description | Required |
|--------|-------------|----------|
| <repository> | Path to .wpt file | ✅ Yes |
| -t <name> | Name of test case to replay | ⬜ No (replays all if omitted) |
| -w <workspace> | Workspace location | ⬜ No (defaults to ~/WebPerformance7/) |

Behavior

  • If -t is provided: Only the specified test case is replayed
  • If -t is omitted: All test cases in the repository are replayed sequentially
  • After completion: Repository is saved with replay results
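
Because the repository file is rewritten with replay results, keep a pristine copy if you need one (a minimal sketch; the backup path is illustrative):

# Replay results are written back into the repository, so copy it first
cp /path/to/project.wpt /path/to/project-backup.wpt
cli replay /path/to/project.wpt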

Return Codes

| Code | Meaning |
|------|---------|
| 0-N | Number of errors encountered during replays (0 = success, no errors) |
| -1 | Illegal arguments provided (invalid options or missing repository) |
| -2 | Software error occurred (check cli.log and contact support) |

Examples

Replay all test cases in a repository:

cli replay /path/to/project.wpt

Replay specific test case:

cli replay -t "Login and Checkout" /path/to/project.wpt

Specify custom workspace:

cli replay -w /custom/workspace -t "API Test" /path/to/project.wpt

Check exit code in bash:

cli replay /path/to/project.wpt
if [ $? -eq 0 ]; then
  echo "All replays succeeded"
else
  echo "Errors found during replay"
  exit 1
fi

Windows batch script:

cli.exe replay C:\LoadTests\project.wpt
if %ERRORLEVEL% neq 0 (
  echo Replay failed with %ERRORLEVEL% errors
  exit /b 1
)

Execute Command

Run load tests headlessly with optional report generation.

Syntax

cli execute [options] <repository>

Options

| Option | Description | Required | Example |
|--------|-------------|----------|---------|
| <repository> | Path to .wpt file | ✅ Yes | /path/to/project.wpt |
| -t <profile> | Load test profile name (as shown in the GUI under "Load Configurations") | ✅ Yes | "Peak Load Test" |
| -o <output> | Output .wpt file for results | ⬜ No | /results/test-2026-02-11.wpt |
| -r <report> | Report output path (ZIP format) | ⬜ No | /reports/test-2026-02-11.zip |
| -e <engines> | Load engines (semicolon-separated hostnames/IPs) | ⬜ No | "engine1.com;engine2.com" |
| -s <servers> | Servers to monitor (semicolon-separated hostnames/IPs) | ⬜ No | "web1.com;db1.com" |

Behavior

  • Executes the specified load test profile from the repository
  • Saves results to output file (or overwrites source repository if -o not provided)
  • Optionally generates HTML report (ZIP archive)
  • Uses specified load engines for distributed testing (or local if -e omitted)
  • Monitors specified servers during test (if -s provided)

Return Codes

| Code | Meaning |
|------|---------|
| 0 | Load test completed successfully |
| -1 | Illegal arguments (invalid options, missing profile) |
| -2 | Software error (check cli.log) |
| Other | Test-specific error codes (check logs) |
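
One scripting nuance: POSIX shells report exit statuses as unsigned 8-bit values, so -1 surfaces as 255 and -2 as 254. A sketch that branches on the documented codes:

cli execute -t "Peak Load Test" /path/to/project.wpt
rc=$?
case $rc in
  0)   echo "Load test completed successfully" ;;
  255) echo "Illegal arguments (-1): check options and profile name" ;;  # -1 wraps to 255
  254) echo "Software error (-2): see cli.log" ;;                        # -2 wraps to 254
  *)   echo "Test-specific error code $rc: check logs" ;;
esac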

Examples

Run load test with local engines:

cli execute -t "Baseline Load Test" /path/to/project.wpt

Run with cloud engines and save results:

cli execute \
  -t "Peak Load Test" \
  -e "engine1.example.com;engine2.example.com;engine3.example.com" \
  -o /results/peak-test-$(date +%Y%m%d).wpt \
  /path/to/project.wpt

Generate report and monitor servers:

cli execute \
  -t "Performance Regression" \
  -r /reports/regression-$(date +%Y%m%d).zip \
  -s "webserver.example.com;database.example.com" \
  -o /results/regression-latest.wpt \
  /path/to/project.wpt

CI/CD integration (Jenkins/GitLab):

#!/bin/bash
# Run load test as part of deployment pipeline

RESULT_FILE="/results/ci-test-${BUILD_NUMBER}.wpt"
REPORT_FILE="/reports/ci-test-${BUILD_NUMBER}.zip"

cli execute \
  -t "Smoke Test" \
  -o "$RESULT_FILE" \
  -r "$REPORT_FILE" \
  /tests/smoke-test.wpt

# Check exit code
if [ $? -ne 0 ]; then
  echo "Load test failed - blocking deployment"
  exit 1
fi

echo "Load test passed - proceeding with deployment"

Load Engine Configuration

Load engines are configured through two files in the engine installation: system.properties (network and capacity settings) and Load Engine.lax (JVM options).

system.properties

Location: <engine-install>/system.properties

Create or edit this file to control network and capacity settings.

Network Settings

| Property | Description | Default | Example |
|----------|-------------|---------|---------|
| EngineRMIAddress | IP address to accept connections on | All interfaces | 192.168.1.62 |
| RmiRegistryPort | Port for incoming controller connections | 1099 | 1099 |
| RmiEnginePort | Port for engine-controller communication | Any available | 1100 |

Example system.properties:

EngineRMIAddress=192.168.1.62
RmiRegistryPort=1099
RmiEnginePort=1100

When to set EngineRMIAddress:

  • Engine has multiple network interfaces (IPv4 + IPv6)
  • Engine is behind NAT/firewall
  • Controller can't auto-detect the engine

Firewall tip: Set RmiEnginePort to the same value as RmiRegistryPort. One port to open instead of two.
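
For example, a system.properties that funnels both connections through a single port (address and port values are illustrative):

# One inbound firewall rule covers both controller and engine traffic
EngineRMIAddress=192.168.1.62
RmiRegistryPort=1099
RmiEnginePort=1099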


Capacity Settings

| Property | Description | Default |
|----------|-------------|---------|
| EngineUserCapacity | Maximum virtual users this engine can run | 5000 |
| StartingRemoteEngineUserCapacity | Maximum VUs to start in the first ramp | 1000 |

Example system.properties:

EngineUserCapacity=10000
StartingRemoteEngineUserCapacity=2000

When to increase capacity:

  • Testing with more than 5000 concurrent users
  • Engine has sufficient memory/CPU for more VUs
  • First ramp needs more than 1000 users

Memory considerations: Each VU uses roughly 1-2MB of RAM, plus fixed JVM overhead. For 10,000 VUs, allocate at least 12-20GB to the engine.


Load Engine.lax (JVM Options)

Location: <engine-install>/Load Engine.lax

Edit this file to configure JVM options like heap size and public hostname.

Public Hostname (NAT/Firewall)

If the engine is behind NAT or a firewall with a public IP different from the engine's local IP, specify the public hostname/IP:

Add to lax.nl.java.option.additional line:

-Djava.rmi.server.hostname=engine.example.com

Complete example:

lax.nl.java.option.additional=-Djava.library.path=lib32 -Djava.rmi.server.hostname=engine.example.com

If lax.nl.java.option.additional doesn't exist:

  1. Add the line at the end of the file.
  2. Add a blank line after it.
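
The same line can also carry heap settings; a sketch combining the hostname option with the 8GB heap mentioned under Troubleshooting below (the exact option set depends on your installation):

# Illustrative only: library path, 8GB heap, and public hostname on one line
lax.nl.java.option.additional=-Djava.library.path=lib32 -Xmx8g -Djava.rmi.server.hostname=engine.example.com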


Silent Installation

Install Load Tester or Load Engine without GUI prompts for automated deployments.

Syntax

LoadEngine_<Platform>_N.N.NNNN.bin -i silent

Examples

Linux silent install:

chmod +x LoadEngine_Linux_7.0.1234.bin
./LoadEngine_Linux_7.0.1234.bin -i silent

Automated deployment script:

#!/bin/bash
# Deploy load engine to multiple servers

INSTALLER="LoadEngine_Linux_7.0.1234.bin"
SERVERS="engine1 engine2 engine3"

for server in $SERVERS; do
  echo "Deploying to $server..."
  scp $INSTALLER $server:/tmp/
  ssh $server "chmod +x /tmp/$INSTALLER && /tmp/$INSTALLER -i silent"
done

Default installation locations:

  • Linux: /usr/local/bin/WebPerformanceLoadTester_7.0/
  • Windows: C:\Program Files\Web Performance Load Tester 7.0\


Logging and Debugging

CLI execution generates log files for troubleshooting.

Log Files

| File | Location | Contents |
|------|----------|----------|
| cli.log | Current directory | CLI execution log (errors, warnings, progress) |
| diagnostic.log | Workspace config/ folder | Detailed diagnostic output (if enabled) |
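
To triage a failed run quickly, scan cli.log for errors (assuming you launched the CLI from the current directory, where the log is written):

# Show recent activity, then any error entries
tail -n 50 cli.log
grep -i error cli.log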

Enabling Debug Logging

Edit diagnostic.properties (in workspace config/ folder):

Debug.CLI=true
Debug.show_time=true

Common debug flags:

  • Debug.CLI=true - CLI-specific debugging
  • HTTP.transport=true - HTTP session/cookie debugging
  • AWS.engineStates=true - Cloud engine state changes
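
For example, a diagnostic.properties that combines CLI and HTTP debugging, using the flags listed above:

Debug.CLI=true
Debug.show_time=true
HTTP.transport=true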


Common Use Cases

CI/CD Integration

Pre-deployment smoke test:

#!/bin/bash
# Run quick smoke test before deploying new version

cli execute -t "Smoke Test" /tests/smoke.wpt
if [ $? -ne 0 ]; then
  echo "Smoke test failed - blocking deployment"
  exit 1
fi

Scheduled Performance Testing

Cron job for nightly regression test:

# Run nightly at 2 AM (a crontab entry must be a single line; backslash continuations don't work in cron)
0 2 * * * /usr/local/bin/cli execute -t "Nightly Regression" -o /results/nightly-$(date +\%Y\%m\%d).wpt -r /reports/nightly-$(date +\%Y\%m\%d).zip /tests/regression.wpt

Batch Replay Validation

Validate all test cases after Application State Management (ASM) changes:

#!/bin/bash
# Replay all test cases and report errors

cli replay /path/to/project.wpt
ERRORS=$?

if [ $ERRORS -eq 0 ]; then
  echo "All test cases replayed successfully"
else
  echo "WARNING: $ERRORS errors found during replay"
  echo "Check replay results in project.wpt"
fi

Cloud Engine Scaling Test

Test with increasing engine count:

#!/bin/bash
# Test with 1, 2, 4, 8 engines to find optimal count

ENGINES=(
  "engine1.example.com"
  "engine1.example.com;engine2.example.com"
  "engine1.example.com;engine2.example.com;engine3.example.com;engine4.example.com"
  "engine1.example.com;engine2.example.com;engine3.example.com;engine4.example.com;engine5.example.com;engine6.example.com;engine7.example.com;engine8.example.com"
)

for i in "${!ENGINES[@]}"; do
  ENGINE_COUNT=$((2**i))
  echo "Testing with $ENGINE_COUNT engine(s)..."

  cli execute \
    -t "Scaling Test" \
    -e "${ENGINES[$i]}" \
    -o "/results/scaling-${ENGINE_COUNT}engines.wpt" \
    -r "/reports/scaling-${ENGINE_COUNT}engines.zip" \
    /tests/scaling.wpt
done

Troubleshooting

"Illegal arguments" (-1 exit code)

Symptom: CLI returns -1 (reported as 255 by POSIX shells).

Possible causes:

  1. Missing required option (repository path, or -t for execute)
  2. Invalid file path (the repository doesn't exist)
  3. Test case name doesn't match (names are case-sensitive)

Solution:

  • Verify the repository path exists: ls -l /path/to/project.wpt
  • Check the test case name in the GUI: Navigator → Load Configurations → note the exact name
  • Quote names with spaces: -t "My Load Test"


"Software error" (-2 exit code)

Symptom: CLI returns -2 (reported as 254 by POSIX shells).

Cause: Internal error during execution.

Solution:

  1. Check cli.log in the current directory
  2. Enable debug logging in diagnostic.properties
  3. Re-run and capture the full log
  4. Contact support with cli.log


Load engine won't connect

Symptom: Execute command fails to connect to specified engines.

Possible causes:

  1. Engine not running (the "Load Engine started" message was never shown)
  2. Firewall blocking ports 1099/1100
  3. Incorrect hostname/IP in the -e option

Solution:

  • Verify the engine is running: SSH to the engine and confirm the console shows "Load Engine started"
  • Test connectivity: telnet engine.example.com 1099
  • Check system.properties for the correct EngineRMIAddress
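
To pre-check several engines at once, a small sketch using nc in place of telnet (hostnames are examples; 1099 is the default registry port):

# Probe the RMI registry port on each engine before starting a test
for host in engine1.example.com engine2.example.com; do
  nc -z -w 5 "$host" 1099 || echo "Cannot reach $host:1099"
done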


Out of memory during load test

Symptom: Test fails with OutOfMemoryError in logs.

Cause: Too many VUs for available memory, or heap size too small.

Solution:

  • Reduce EngineUserCapacity in system.properties
  • Increase the JVM heap in Load Engine.lax: -Xmx8g (for an 8GB heap)
  • Distribute the load across more engines


Quick Reference

Replay all test cases:

cli replay /path/to/project.wpt

Replay specific test case:

cli replay -t "Test Case Name" /path/to/project.wpt

Run load test (local):

cli execute -t "Load Profile Name" /path/to/project.wpt

Run with cloud engines + report:

cli execute \
  -t "Load Profile Name" \
  -e "engine1.com;engine2.com" \
  -r /reports/output.zip \
  -o /results/output.wpt \
  /path/to/project.wpt

Silent install:

./LoadEngine_Linux_7.0.1234.bin -i silent

Engine configuration (system.properties):

EngineRMIAddress=192.168.1.62
RmiRegistryPort=1099
RmiEnginePort=1100
EngineUserCapacity=5000