.. _run-a-load-test:

Run a load test using Zelt
==========================

This guide will take you through:

- Setting up Kubernetes
- Defining your load test
- Using Zelt to deploy Locust
- Running your load test

Install Zelt
------------

To get started, first install Zelt from PyPI:

.. code:: bash

   pip install zelt

Install minikube
----------------

For the purpose of this guide we will use a local instance of Kubernetes called
minikube_. Note that this setup will not allow you to generate any significant load.
We recommend using a hosted Kubernetes solution when you want to execute large-scale
load tests.

If you already have a hosted Kubernetes, e.g. EKS_, then skip this section.

#. Download and install minikube_
#. Enable Ingress:

   .. code:: bash

      minikube addons enable ingress

#. Configure a hostname for minikube:

   .. code:: bash

      echo `minikube ip` zelt.minikube | sudo tee -a /etc/hosts

Create a scenario
-----------------

In order to run a load test, we need to create a load test scenario.

If you already have a locustfile_ representing a load test scenario, then skip this
section.

#. Choose your target, i.e. the application you want to test
#. Follow `this guide`_ to create a HAR file for the scenario
#. Save the HAR file locally; we will refer to it as ``PATH_TO_HAR_FILE`` in this guide

Determine deployment type
-------------------------

Zelt can deploy Locust in either combined_ (standalone) or distributed_ mode. Which
mode to use depends on how much load you wish to generate and the hardware available
to you.

In combined mode, Locust can generate ~500 users per CPU core. This is equivalent to
running Locust locally.

In distributed mode, Locust uses workers to generate the load. Therefore, the number
of users can be calculated as the number of workers you deploy multiplied by 500. For
example, if you wanted to have 1000 users, you would deploy Locust in distributed mode
with 2 workers. Keep in mind that since the number of users is CPU-bound, the hardware
of each worker will affect the amount of load generated.

If using an AWS-hosted Kubernetes, take a look at `this table`_ for a detailed
breakdown of how much load can be generated by different `instance types`_.

Prepare manifests
-----------------

Kubernetes uses manifest files to define what to deploy. `Example manifest files`_ for
both combined and distributed deployment are available in Zelt's source code.

For the purpose of this guide, we will use the distributed manifests. These consist of:

- ``controller-deployment.yaml`` that defines the Locust controller
- ``worker-deployment.yaml`` that defines the Locust worker(s)
- ``namespace.yaml`` that defines the Kubernetes namespace_ to deploy to
- ``service.yaml`` that defines the Kubernetes service_ to create
- ``ingress.yaml`` that defines the Kubernetes ingress_ to create

.. note::

   Currently Zelt's support is limited to the above Kubernetes resource types, as well
   as `custom resources`_. Please note that Zelt will deploy a custom resource only if
   its `custom resource definition`_ is already deployed in the cluster, and only if
   the scope of the resource is limited to a namespace.

.. TODO: Create a page detailing each manifest
.. For more detailed information, please refer to :ref:`manifests`.

#. `Download them`_ and save them locally; we will refer to their location as
   ``PATH_TO_MANIFESTS`` in this guide.

**N.B.** The hostname specified when configuring minikube must match that defined in
the ``ingress.yaml`` file.
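As a quick sanity check (assuming the ``zelt.minikube`` hostname used earlier in this
guide; adjust if yours differs), you can compare the hostname referenced by the Ingress
manifest with the entry added to ``/etc/hosts``:

.. code:: bash

   # Hostname referenced by the Ingress manifest
   grep host: PATH_TO_MANIFESTS/ingress.yaml

   # Hostname that was mapped to the minikube IP earlier in this guide
   grep zelt.minikube /etc/hosts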
Deploy to Kubernetes
--------------------

Once you have a load test scenario, you can use Zelt to deploy it to Kubernetes.

First, ensure you are logged in to the correct cluster, e.g.:

.. code:: bash

   kubectl config use-context minikube

**From HAR**

Zelt will use Transformer_ to convert your HAR file to a locustfile before deploying
it to Kubernetes.

.. code:: bash

   zelt from-har PATH_TO_HAR_FILE --manifests PATH_TO_MANIFESTS

**From locustfile**

If you already have a locustfile, then run the following command instead:

.. code:: bash

   zelt from-locustfile PATH_TO_LOCUSTFILE --manifests PATH_TO_MANIFESTS

Both of these commands will:

#. Create a Namespace called ``zelt`` (subsequent items will be created there)
#. Deploy 1 Locust controller and 2 workers
#. Create a Service for communication between the controller and workers
#. Expose the Locust UI at ``http://zelt.minikube`` using Ingress

Run the load test
-----------------

In order to actually run the load test, we will use the Locust dashboard:

#. In your browser, navigate to ``http://zelt.minikube``
#. Enter the desired number of users to simulate
#. Enter the desired ramp-up speed
#. Click ``Start swarming``

Refer to `Locust's documentation`_ for more information on how to run/stop/report
your load test.

Rescale your deployment
-----------------------

You can use Zelt to increase or decrease the number of Locust workers that are
available to generate load without needing to redeploy. For example, to reduce the
number of workers to 1, simply run:

.. code:: bash

   zelt rescale 1 -m PATH_TO_MANIFESTS

**N.B.** If a load test is currently running, then increasing the number of worker
pods will not immediately increase the amount of load being generated. The load test
must be restarted through the Locust UI. Decreasing the number of workers *will*
decrease the amount of load being generated *immediately*.

Delete your deployment
----------------------

Once your load test has completed, you can use Zelt to delete the Locust deployment
from Kubernetes.

.. code:: bash

   zelt delete -m PATH_TO_MANIFESTS

**N.B.** Make sure you have downloaded your Locust reports if you want them before
doing this or they will be deleted!
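If you want to confirm the teardown (assuming the default ``zelt`` namespace created
by the deployment commands above), you can list what remains in that namespace with
plain ``kubectl``. Depending on whether the namespace itself was removed, this should
report either no resources found or that the namespace does not exist:

.. code:: bash

   # List anything left in the namespace used by this guide
   kubectl get all --namespace zelt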
.. _minikube: https://kubernetes.io/docs/setup/minikube/
.. _EKS: https://aws.amazon.com/eks/
.. _locustfile: https://docs.locust.io/en/stable/writing-a-locustfile.html
.. _`this guide`: https://transformer.readthedocs.io/en/latest/Creating-HAR-files.html
.. _combined: https://docs.locust.io/en/stable/quickstart.html#start-locust
.. _distributed: https://docs.locust.io/en/stable/running-locust-distributed.html
.. _`instance types`: https://aws.amazon.com/ec2/instance-types/
.. _`this table`: https://github.com/zalando-incubator/docker-locust#capacity-of-docker-locust-in-aws
.. _`Example manifest files`: https://github.com/zalando-incubator/zelt/tree/master/examples/manifests
.. _namespace: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
.. _service: https://kubernetes.io/docs/concepts/services-networking/service/
.. _ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
.. _`Download them`: https://github.com/zalando-incubator/zelt/tree/master/examples/manifests/combined
.. _Transformer: https://github.com/zalando-incubator/Transformer
.. _`Locust's documentation`: https://docs.locust.io/en/stable/what-is-locust.html
.. _`custom resources`: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
.. _`custom resource definition`: https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/