AD Performance Workshop

30 minute(s)

Performance Guide

AD Performance Testing

This is the 2025 Performance Testing Workshop resource for the AD quarterly and accompanies that workshop. Not all resources (such as the API) may still be available if you follow it at a later date.

You are welcome to try to follow this guide on your own, but there is no guarantee that everything will still work.

The Target

api.bob-productions.dev

The goal of this workshop is to run a performance test against the API. The API comprises three simple features: a GET call, a PUT call and a POST call.

At the end of this guide we will have a performance test that will measure all 3 of these endpoints, giving us statistics on each one.

During the AD quarterly the stats of the server hosting the API will be visible, so we can keep an eye on them to see how a server behaves under load and which statistics to look out for.

Installing the Tools

There are many performance testing tools available.

For the sake of the workshop and this guide the tool we will use is k6.

K6

You can install k6 here if you are using Windows.

Follow the installation instructions and add k6 to the system path if prompted.
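
If you prefer installing from the command line, k6 can also be installed with a package manager. On Windows that is typically something like the following (exact package source may vary with your setup):

winget install k6 --source winget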

Some Technical Gubbins

k6 tests are written in JavaScript, but the execution engine itself is written in Go. This means you don't need Node installed, as k6 provides its own runtime, but it also has some downsides: you can't simply install packages with npm and import them into k6; you would need to recompile k6 with them.

k6 ships with a large set of supported modules, so you should be able to do everything you need it to do. As general advice, sticking to the k6 docs is usually a good idea.
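
If you do need something beyond the built-in modules, k6 can import JavaScript modules directly over HTTPS, for example from the jslib.k6.io collection, without npm being involved. A minimal sketch (the module version in the URL is an assumption, check jslib for the current one):

import { uuidv4 } from "https://jslib.k6.io/k6-utils/1.4.0/index.js";

export default function () {
  // generate a random id without needing npm or Node
  console.log(uuidv4());
}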

Verify K6 Installation

Before writing a test, make sure k6 is installed and available on your path.

You can do this with:

k6 version

If a version number came up, you're good to go!

First Performance Test

You can view the Swagger doc for the API here: Swagger. Let's set up a first test for the GET endpoint of the API, which has three resources available.

Let's create a K6 file with a call to the GET endpoint of the above API:

import http from "k6/http";
import { sleep, check } from "k6";

export const options = {
  vus: 3,          // 3 virtual users running concurrently
  duration: "30s", // for 30 seconds
};

export default function () {
  const res = http.get("https://api.bob-productions.dev/items/1");
  check(res, { "status is 200": (r) => r.status === 200 });
  sleep(1); // each virtual user pauses for a second between iterations
}

Save this locally as a JavaScript file.

Running the test

Running the test is as simple as executing the run command, k6 run, followed by your script name. With a script named script.js, the following will run the test:

k6 run script.js

You can configure how much traffic k6 generates by modifying the options object at the top of the script. There are many settings available; I would suggest looking at k6-options for how to configure the tests.
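
For example, instead of a fixed number of VUs for a fixed duration, the options object can describe ramping stages. A minimal sketch (the durations and targets here are illustrative, not part of the workshop):

export const options = {
  stages: [
    { duration: "30s", target: 10 }, // ramp up to 10 virtual users
    { duration: "1m", target: 10 },  // hold at 10 virtual users
    { duration: "30s", target: 0 },  // ramp back down to 0
  ],
};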

Adding more to the test

Let's add a couple of new calls to the test, a POST and a PUT, and run it again.

import http from "k6/http";
import { sleep, check } from "k6";

export const options = {
  vus: 3,
  duration: "30s",
};

export default function () {
  // create a new item via POST
  let postRes = http.post(
    "https://api.bob-productions.dev/items",
    JSON.stringify({
      name: "NewItem",
      value: "SecondItem",
    }),
    {
      headers: { "Content-Type": "application/json" },
    },
  );
  check(postRes, { "POST status is 200": (r) => r.status === 200 });
  let body = postRes.json();
  console.log(body);

  // fetch the item we just created, using the id returned by the POST
  let getRes = http.get(`https://api.bob-productions.dev/items/${body.id}`);
  check(getRes, { "GET status is 200": (r) => r.status === 200 });
  console.log(getRes);

  // update the same item via PUT
  let putRes = http.put(
    `https://api.bob-productions.dev/items/${body.id}`,
    JSON.stringify({
      name: "UpdatedItem",
    }),
    {
      headers: { "Content-Type": "application/json" },
    },
  );
  check(putRes, { "PUT status is 204": (r) => r.status === 204 });
  console.log(putRes);

  sleep(1);
}

Give that a run.

You may have noticed that the HTTP metrics are gathered together and summarised as one. To measure the GET, POST and PUT endpoints individually, let's use one of k6's custom metric types: Trends.

k6 Trends

You can add some Trends to your test; let's add those in:

import http from "k6/http";
import { sleep, check } from "k6";
import { Trend } from "k6/metrics";

export const options = {
  vus: 1,
  duration: "10s",
};

const getTrend = new Trend("GET_Items");
const putTrend = new Trend("PUT_Items");
const postTrend = new Trend("POST_Items");

export default function () {
  let postRes = http.post(
    "https://api.bob-productions.dev/items",
    JSON.stringify({
      name: "NewItem",
      value: "SecondItem",
    }),
    {
      headers: { "Content-Type": "application/json" },
    },
  );
  console.log(postRes);

  check(postRes, { "POST status is 200": (r) => r.status === 200 });
  let body = postRes.json();
  console.log(body);

  postTrend.add(postRes.timings.duration);

  let getRes = http.get(`https://api.bob-productions.dev/items/${body.id}`);
  check(getRes, { "GET status is 200": (r) => r.status === 200 });
  console.log(getRes);

  getTrend.add(getRes.timings.duration);

  let putRes = http.put(
    `https://api.bob-productions.dev/items/${body.id}`,
    JSON.stringify({
      name: "UpdatedItem",
    }),
    {
      headers: { "Content-Type": "application/json" },
    },
  );
  check(putRes, { "PUT status is 204": (r) => r.status === 204 });
  console.log(putRes);

  putTrend.add(putRes.timings.duration);

  sleep(1);
}

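Run that and GET_Items, POST_Items and PUT_Items will appear as separate metrics in the summary. As an aside, k6 also supports tagging requests, which lets you filter results or set thresholds per endpoint without defining custom metrics. A minimal sketch (the tag name and threshold are illustrative, not part of the workshop script):

// tag the request so its metrics can be filtered or thresholded separately
let res = http.get("https://api.bob-productions.dev/items/1", {
  tags: { name: "GETItems" },
});

// e.g. in options: thresholds: { "http_req_duration{name:GETItems}": ["p(95)<500"] }
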
Analyzing the Stats: Client

k6 will provide us with quite a lot of detailed statistics, and interpreting them requires a little maths:

Averages

The main reason we measure averages is that network traffic can be a bit random. Sometimes things are slow due to network jitter, so we rely on averages over a long period of time to get an accurate gauge of performance.

Percentiles

A percentile is a statistical term: the p-th percentile of a series of measurements is the value that p% of the measurements fall at or below.

In performance testing we are generally concerned with the average (across all requests) and the higher percentiles: 80, 90, 95 and 99. The 95th percentile, for example, is the response time that 95% of requests beat, so it tells you how the slowest 5% of requests behaved.

We also measure the max and the min, but in performance testing these often fall victim to slow starts, where the software is still warming up the first time it is used.
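
By default k6 reports avg, min, med, max, p(90) and p(95) for each trend metric. If you want other percentiles in the summary, the summaryTrendStats option controls which columns are shown. A small sketch (adding p(99) is just an example):

export const options = {
  // report the 99th percentile alongside the usual columns
  summaryTrendStats: ["avg", "min", "med", "max", "p(90)", "p(95)", "p(99)"],
};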

Analyzing the Stats: Server

Analyzing statistics and inferring meaning from them is where the value of performance testing lies. A lot of performance testing work is done by thinking about what the statistics are telling you.

However, k6 can only measure response times from the client side, so how do we measure the server?

Most performance tests aim to measure some key metrics server side; a non-exhaustive list includes:

  • Request Throughput
  • Network Latency
  • CPU Utilization
  • Memory Utilization
  • Disk I/O Rate

These are machine-level statistics that we need to capture, and we will want to see them for the full duration of the test.

The best place to get these statistics depends on where the API is hosted. Most things are developed on the cloud these days, so your cloud provider should expose them, though some providers require a bit of additional setup.

You should also measure non-2xx response codes, i.e. failed requests, though these can usually be interpreted via the server logs.
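
On the client side, k6 already tracks failed requests in its built-in http_req_failed metric, and you can make the run fail if the error rate gets too high. A minimal sketch (the 1% limit is an arbitrary illustration):

export const options = {
  thresholds: {
    // fail the test run (non-zero exit code) if more than 1% of requests fail
    http_req_failed: ["rate<0.01"],
  },
};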

No replacement for common sense

Measuring stats is one thing, but when dealing with potential performance issues it is often logical ordering, repeated work (DRY, especially if network calls are involved) and configuration that are the cause. Vertical scaling (just giving it more power) will work in a good number of cases, but has limited returns if there are logical problems in the software. A lot of this comes from experience, but I have a general rule: if something is difficult to understand or is complex, it probably has something wrong with it.

Wrap-up

This guide does not contain the server-side walkthrough.

I will add that later!