vegeta is an HTTP load testing tool written in Go. It is driven from the command line and can generate summaries and charts for analyzing results.
## Basic Usage
For detailed usage, refer to the README. vegeta can be used in two ways: interactively on the command line, or imported as a library into a Go program. I installed vegeta with brew and drove the whole test from a Go script; see the code snippet below for details. Instead of importing vegeta as a library, I used exec.Command() to execute the command-line tool.
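A minimal sketch of that approach, assuming vegeta is on the PATH (the target URL and output path below are the ones used later in this post):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run the full shell pipeline via `sh -c` so the
	// echo | vegeta attack | tee | vegeta report plumbing stays intact.
	pipeline := `echo "POST http://172.17.1.36:30898/hello" | ` +
		`vegeta attack -rate=2000 -duration=1s -body=query.json -timeout=0 -name=hello | ` +
		`tee ./result/results-null.bin | vegeta report`
	out, err := exec.Command("sh", "-c", pipeline).CombinedOutput()
	if err != nil {
		fmt.Println("attack failed:", err)
	}
	fmt.Println(string(out)) // the text report printed by vegeta report
}
```

Delegating the pipes to `sh -c` is simpler than wiring stdin/stdout between several exec.Command calls by hand.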
Specific vegeta commands used:
- attack: calls the API under test. rate is the number of requests per second, duration is how long to keep sending requests, body is the request body for a POST, name names this attack (optional, but needed later to tell merged results apart), tee writes the raw binary results to a file (like a RAW photo, the .bin can be converted into many other formats later), and report prints a report on the command line.
echo "POST http://172.17.1.36:30898/hello" | vegeta attack -rate=2000 -duration=1s -body=query.json -timeout=0 -name=hello | tee ./result/results-null.bin | vegeta report
- Generate a report file: this renders the raw binary file as a txt report (the -type flag also supports other formats, such as json).
vegeta report -type=text results-query.bin > report.txt
- Generate analysis charts: the plot command produces an interactive chart. When merging multiple results into one chart, be sure to add the -name parameter to the first (attack) command so each series is labeled.
vegeta plot results-null.bin results-sleep.bin results-query.bin > plot-all.html
## Test Scenarios
Two REST services are implemented, in Java and in Go respectively, to test three APIs with a single replica and with five replicas in k8s (a sketch of the handlers follows the list). The APIs are:
- Direct return
- Return after sleeping for 2 seconds
- Query 10,000 documents from Elasticsearch, each about 500 bytes
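For orientation, here is a minimal sketch of what the three handlers could look like on the Go side; the paths match the targets used below, while the listen port and the Elasticsearch call are placeholders, not the actual implementation:

```go
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	// Direct return: respond immediately.
	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})
	// Return after sleeping for 2 seconds.
	http.HandleFunc("/sleep", func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(2 * time.Second)
		w.Write([]byte("done"))
	})
	// Query Elasticsearch: the real handler fetches 10,000 documents
	// of about 500 bytes each and writes them back.
	http.HandleFunc("/es", func(w http.ResponseWriter, r *http.Request) {
		w.Write(queryES())
	})
	log.Fatal(http.ListenAndServe(":8080", nil)) // port is an assumption
}

// queryES is a stub standing in for the Elasticsearch client call.
func queryES() []byte {
	return []byte("{}")
}
```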
Elasticsearch cluster information
- Number of nodes: 2
- Version: 7.4.1
- CPU: 8 × Intel(R) Xeon(R) Silver 4114 @ 2.20GHz (1 core each)
- Memory: 31G
Kubernetes information
- Version: 1.16.15
- Number of nodes: 7 (1 master, 6 workers)
Both deployments are constrained to schedule onto the same node (e.g. with a nodeSelector), and each is allocated 2G of memory.
## Test Steps
After building the Go script, create a configuration file config.ini in the same directory:
[address]
null = "POST http://172.17.1.36:30898/hello"
sleep = "POST http://172.17.1.36:30898/sleep"
query = "POST http://172.17.1.36:30898/es"
[param]
rate = 2000
Here the address section lists the HTTP method and URL of each API to call, and rate is the number of requests per second (each attack in this test lasts only 1 second).
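The script just needs to read these values and loop over the targets. A sketch of that step, assuming the go-ini library (gopkg.in/ini.v1) is used for parsing:

```go
package main

import (
	"fmt"

	"gopkg.in/ini.v1"
)

func main() {
	cfg, err := ini.Load("config.ini")
	if err != nil {
		panic(err)
	}
	rate := cfg.Section("param").Key("rate").String()
	// Every key under [address] is one target, e.g. null = "POST http://...".
	for _, key := range cfg.Section("address").Keys() {
		name, target := key.Name(), key.String()
		fmt.Printf("attacking %s at rate=%s: %s\n", name, rate, target)
		// ...pipe target into `vegeta attack -rate=... -name=<name>` and
		// write results-<name>.bin, as in the exec.Command sketch above.
	}
}
```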
Execute the script and wait...
## Test Results
Three files are generated for each API:
- results-*.bin: the raw results file
- report-*.txt: the report file
- plot-*.html: the chart for that individual result
First, the Java service:
- Empty interface
Requests [total, rate, throughput] 2000, 2004.21, 1996.25
Duration [total, attack, wait] 1.002s, 997.9ms, 3.98ms
Latencies [min, mean, 50, 90, 95, 99, max] 2.114ms, 28.492ms, 11.283ms, 77.584ms, 90.305ms, 111.482ms, 150.836ms
Bytes In [total, mean] 10000, 5.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:2000
Field explanations
Requests:
- total represents the total number of requests, which is 2000.
- rate represents the number of requests per second, which is 2004.21.
- throughput represents the number of requests successfully completed per second, which is 1996.25 (≈ 2000 successful requests / ~1.002s of total duration).
Duration:
- total represents the total test time, which is 1.002 seconds.
- attack represents the time actually spent sending requests, which is 997.9 milliseconds.
- wait represents the time spent waiting for the last responses after the final request was sent (total = attack + wait), which is 3.98 milliseconds.
Latencies:
- min represents the minimum response time
- mean represents the average response time
- 50, 90, 95, 99 are latency percentiles: the value at the 90th percentile, for example, is the response time within which 90% of requests completed.
- max represents the maximum response time
Bytes In:
- total represents the total number of bytes received for all requests
- mean represents the average number of bytes received per request
Bytes Out:
- total represents the total number of bytes sent for all requests
- mean represents the average number of bytes sent per request
Success:
- ratio represents the percentage of successful requests out of the total requests
Status Codes:
- code:count lists each returned status code and how many times it occurred; here all 2000 requests returned 200.
When I merged the three results into one chart, the response time climbed roughly linearly, and the query latency eventually grew to over 3 minutes.
- Go
The sleep and empty interfaces are quite stable, and the query interface's response time is much better than Java's.
Next, the results after scaling both services to five replicas.
- Java
The sleep interface is at least more stable now, in that its latency no longer climbs linearly. The query interface's response time is still somewhat unreasonable, though.
- Go
The response time for the query interface did not improve significantly.
## Conclusion
For the query interface, the bottleneck is probably on the Elasticsearch side, so I won't test it further for now.
From the charts, it is clear that Go has a slightly better concurrent processing capability.
There are still many vegeta features I haven't explored. Compared with JMeter, the command-line interface is less user-friendly, but the results it generates are quite intuitive.