
Rhea Thadani
May 28, 2025
- What is Locust?
- Why Use Locust?
- Setting Up Locust
- Preparing Data For Testing
- Generating Test Data
- Importing Test Data
- Writing Locust Test Scripts
- Basic Structure of a Locust Test Script
- Handling Bulk Data
- Running Locust Tests
- Executing the Test Script
- Configuring Test Parameters
- Understanding Locust Reports
- Request Statistics
- Response Time Statistics
- Failures Statistics
- Charts
- Final Ratios
- Best Practices for Using Locust
- Conclusion
What is Locust?
Locust is a scalable, user-friendly load-testing tool designed for testing APIs, websites, or any system that can handle HTTP requests. Unlike traditional performance testing tools, Locust uses Python to define user behavior, making it flexible and easy to integrate with existing workflows.
Why Use Locust?
- Python-Based Scripting: Write test scripts in Python, giving you complete control over user behavior.
- Distributed Load Testing: Scale tests across multiple machines for testing large systems.
- Web-Based UI: Monitor test results in real time through a web interface.
- Extensibility: Easily integrates with other tools or systems using Python libraries.
Setting Up Locust
To get started with Locust, follow these steps:
- Install Locust using pip:
pip install locust
- To verify the installation, run the following command:
locust -V
This command should display the installed version of Locust.
Preparing Data For Testing
Effective performance testing often requires bulk data to simulate realistic scenarios, such as testing APIs that query databases or interact with application components. If your use case involves a database, ensure sufficient test data is prepared.
Generating Test Data
Create scripts to generate bulk records in a format suitable for your application, such as CSV. Python libraries such as Faker can generate realistic fake data, and the standard-library random module is one of the most commonly used options for random numeric values.
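For instance, here is a minimal sketch that writes a users.csv file with Faker (the column names and record count are assumptions, chosen to match the registration example later in this post):

import csv
from faker import Faker  # pip install faker

fake = Faker()

# Write 10,000 fake user records to users.csv.
with open('users.csv', 'w', newline='') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=['username', 'email', 'password'])
    writer.writeheader()
    for _ in range(10_000):
        writer.writerow({
            'username': fake.user_name(),
            'email': fake.email(),
            'password': fake.password(),
        })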
Importing Test Data
If the test requires data to be loaded into a database:
- Write a script to import the generated CSV records into the relevant database tables (a sketch follows this list).
- Test the import script to verify that the data loads correctly.
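As a rough illustration, assuming a SQLite database and a users table whose columns match the CSV (both the table name and schema are assumptions), the import could look like this:

import csv
import sqlite3

conn = sqlite3.connect('test.db')
conn.execute(
    "CREATE TABLE IF NOT EXISTS users (username TEXT, email TEXT, password TEXT)"
)

# Bulk-insert every CSV row in a single transaction.
with open('users.csv', newline='') as csvfile:
    reader = csv.DictReader(csvfile)
    conn.executemany(
        "INSERT INTO users (username, email, password) "
        "VALUES (:username, :email, :password)",
        reader,
    )
conn.commit()
conn.close()

A quick SELECT COUNT(*) against the table afterwards is an easy way to confirm the data loaded correctly.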
Writing Locust Test Scripts
A Locust test script defines user behavior using Python code. For instance, if you’re testing an API, the script can simulate multiple users sending requests.
Basic Structure of a Locust Test Script
A typical Locust script includes:
- User Behavior: Define tasks that represent user actions.
- Load Simulation: Specify the number of users and spawn rate.
- Data Handling: Include logic to read test data if required.
Here’s a simple example:
from locust import HttpUser, task, between

class MyUser(HttpUser):
    wait_time = between(1, 3)  # Simulates wait time between tasks

    @task
    def perform_task(self):
        self.client.get("/api/endpoint")  # Replace with your endpoint
Handling Bulk Data
When performance testing involves large volumes of data, it’s essential to simulate realistic scenarios by loading bulk data into your test scripts.
For example, if you are testing an API that processes user records, you may need a large set of user data to stress-test the system.
Suppose you’re testing a user registration system. To do so effectively, you would generate a large number of user records and simulate requests that process this data.
Assuming the user records live in a CSV file, the script reads the file when each simulated user starts and uses one record per HTTP request:
import csv
from locust import HttpUser, task, between

class UserRegistrationTest(HttpUser):
    wait_time = between(1, 3)

    def on_start(self):
        # Load the test data when this simulated user starts.
        self.users = []
        with open('users.csv', newline='') as csvfile:
            reader = csv.DictReader(csvfile)
            for row in reader:
                self.users.append(row)

    @task
    def register_user(self):
        if not self.users:
            self.stop()  # Stop this user once its data is exhausted
            return
        user = self.users.pop()
        self.client.post("/register", json={
            "username": user['username'],
            "email": user['email'],
            "password": user['password'],
        })
In this example, the on_start method reads the CSV and stores the user data in a list. The register_user task simulates the registration process by sending a POST request with one record per request, and stops the simulated user once its data runs out.
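Note that on_start runs once per simulated user, so each user reads the file and holds its own copy of the data. For very large files, one option is to read the CSV a single time at module import and let all users share it, for example:

import csv

# Loaded once when Locust imports the script; shared by all simulated users.
with open('users.csv', newline='') as csvfile:
    USERS = list(csv.DictReader(csvfile))

Each user’s tasks can then pop records from the shared USERS list instead of building their own.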
Running Locust Tests
Executing the Test Script
Once your Locust script is ready, you can execute it as follows:
- Navigate to the directory containing the script.
- Run Locust with the script file:
locust -f your_locust_script.py
- Open the Locust web interface by visiting http://localhost:8089 in your browser.
Configuring Test Parameters
In the Locust web interface:
- Number of Users: Set the total number of peak concurrent users.
- Spawn Rate: Define how many users start per second.
- Host: Specify the URL of the API endpoint to test.
- Run Time: Specify the duration of the test.
Click “Start” to begin the test and monitor the results, such as request rates, response times, and failure counts, in real time.
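If you prefer to skip the web interface, the same parameters can be supplied on the command line in Locust’s headless mode; for example (the host URL is a placeholder):

locust -f your_locust_script.py --headless -u 100 -r 10 -t 5m --host http://localhost:8000

Here -u sets the number of users, -r the spawn rate, and -t the run time.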
Understanding Locust Reports
Locust provides detailed reports that help identify system bottlenecks. Here’s how to interpret them:
Request Statistics
This section gives an overview of each API endpoint’s performance:
| Type | Name | # Requests | # Fails | Median (ms) | Average (ms) | Min (ms) | Max (ms) | Current RPS |
|---|---|---|---|---|---|---|---|---|
| POST | /api/endpoint | 500 | 0 | 55 | 63.44 | 39 | 445 | 0.2 |
Key columns include:
- # Requests: Total number of requests made to the endpoint.
- Median/Average (ms): Median and average response times.
- Current RPS: Requests per second at the time of reporting.
Response Time Statistics
This section provides response time percentiles:
| Method | Name | 50%ile (ms) | 90%ile (ms) | 99%ile (ms) | 100%ile (ms) |
|---|---|---|---|---|---|
| POST | /api/endpoint | 51 | 63 | 88 | 420 |
Interpretation:
- Percentiles give insights into response times experienced by most users (e.g., 90% of requests are completed within 63 ms).
- High 99th or 100th percentiles may indicate outliers or bottlenecks.
Failures Statistics
Failures are summarized in a dedicated section:
| Failures | Method | Name | Message |
|---|---|---|---|
| 3 | POST | /api/endpoint | Timeout Error |
Investigate errors to identify system issues, such as timeouts or bad requests.
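Beyond errors that Locust records automatically, a script can flag failures explicitly using the client’s catch_response option; a minimal sketch (the endpoint and the one-second threshold are illustrative):

from locust import HttpUser, task

class TimeoutCheck(HttpUser):
    @task
    def check_endpoint(self):
        # catch_response lets the script decide what counts as a failure.
        with self.client.get("/api/endpoint", catch_response=True) as response:
            if response.elapsed.total_seconds() > 1.0:
                response.failure("Took longer than 1 second")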
Charts
Locust generates interactive charts for:
- Requests per second: Tracks traffic patterns over time.
- Response times: Visualizes latency trends in milliseconds.
- Number of users: Shows the load profile during testing.
Final Ratios
Summarizes the traffic distribution among tasks or classes.
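These reports can also be saved as artifacts: the --csv flag writes the statistics to CSV files with the given prefix, and --html writes a standalone HTML report, e.g.:

locust -f your_locust_script.py --headless -u 100 -r 10 -t 5m --csv results --html report.html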
Best Practices for Using Locust
- Prepare Adequate Data: Ensure test data matches real-world scenarios. Generate bulk data as needed. For example, if you are testing an e-commerce platform, simulate different types of product data (e.g., electronics, clothing, books) and user actions such as browsing, searching, applying product filters, adding to cart, and checking out (see the sketch after this list).
- Modular Scripts: Keep your test scripts modular to reuse for different endpoints or workflows.
- Monitor System Resources: Observe your system’s CPU, memory, and network usage during tests to identify bottlenecks.
- Iterate and Refine: Run multiple test cycles, adjusting parameters and scenarios to cover edge cases.
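For the e-commerce example above, a modular script might express the traffic mix with task weights; a rough sketch (the endpoint paths are placeholders):

from locust import HttpUser, task, between

class ShopperUser(HttpUser):
    wait_time = between(1, 3)

    # Weights shape the traffic mix: browsing dominates, checkout is rare.
    @task(6)
    def browse_products(self):
        self.client.get("/products")

    @task(3)
    def search_products(self):
        self.client.get("/search", params={"q": "laptop"})

    @task(1)
    def checkout(self):
        self.client.post("/checkout")

The resulting mix shows up in the Final Ratios section of the report, making it easy to confirm the simulated traffic matches the intended distribution.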
Conclusion
Locust is a versatile tool for performance testing that can simulate user behavior, measure system capabilities, and uncover potential bottlenecks. By preparing appropriate test data and crafting detailed Locust scripts, you can effectively validate your system’s performance under load. Whether you’re testing APIs, databases, or workflows, Locust makes the process streamlined and efficient.