
Lambdas are flexible: they can be written in several different languages, from the ever-present JavaScript to Go or even custom runtimes. But which one is best for computational tasks? In this blog post, we test the runtime differences between Java, NodeJs and Python. The source code for this blog post can be found in our GitHub repository.
The Setup
We deploy the lambdas using a CloudFormation template. For this, we create an S3 bucket that stores the templates; the test results are also uploaded to this bucket. To get meaningful output, we additionally create a dashboard together with the lambdas. The dashboard shows the key numbers we want, extracted from the execution logs of the lambdas.
The tests are executed from an EC2 instance on which we run Gatling. The instance requires a security group that allows us to connect to it, so we can set up and start the Gatling tests. The instance, as well as the security group, are bundled in a separate template.
The lambdas run with 1GB of RAM and a timeout of one minute. The EC2 instance is of the t2.micro type.
We use Java 8, NodeJs 8.10 and Python 2.7 as runtime environments for the lambdas.
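To give an idea of what each lambda looks like, here is a rough Python sketch (the handler name, the event shape and the helper function are illustrative assumptions, not the exact code from the repository):

import json

def handler(event, context):
    # API Gateway proxy integration: we assume the target value arrives
    # in the request body as JSON, e.g. {"value": 3000}
    body = json.loads(event["body"])
    result = compute(int(body["value"]))
    return {
        "statusCode": 200,
        "body": json.dumps({"result": result}),
    }

def compute(target):
    # Placeholder for the CPU-bound work; the actual oscillation
    # algorithm is described in "The Algorithm" section below
    total = 0
    for i in range(target):
        total = total + i
    return total

The Java and NodeJs lambdas mirror this structure, so all three runtimes perform identical work.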
The Result
Since we already use Gatling to invoke the calls, we can also use its reporting to compare the results. Although this means we measure the network delay as well, it still gives a general idea of how long things take. For a closer look, we inspect the CloudWatch logs via the dashboard.
The API Gateway also seems to add a lot to the cold start time, making the Gatling reports unsuitable for measuring it. In one instance, for example, the longest Java request took 2.1s according to Gatling, while the CloudWatch logs show that it only took 974ms. Other than that, the network seems to add 10-20ms of delay.
The target value for the runs determines the computational effort required. The runtime grows quadratically with it; see the algorithm section at the end. We do not care about the total duration of the task, only about the relative durations between the different languages. Very short runtimes indicate that the computational effort was too low, so we have to make sure there is enough work to get meaningful results.
Numbers
Of the three contestants, Python performs worst by far. Using 100 requests at a one-second pace and a fairly large target value of 3000, we get the following results:
As shown, the Python execution takes extremely long compared to Java and JavaScript. A likely explanation is that Python simply does not perform well in the scenario we chose. To confirm this, we run the test again, this time with a target value of 100.
As suspected, Python performs much better this time. To be safe, we perform a third run with 1000 requests, but this time with a target value of only 2500 for a faster runtime.
Nope, no change, same result.
Another surprising outcome is that the available memory has a large impact on the cold start time. We had to increase the memory to 1GB to get meaningful results, even though we only use about five integer variables and do not store any data. This is why the setup described at the beginning of this post already uses 1GB.
Conclusion
Java is a good candidate for high-throughput serverless functions, while NodeJs benefits from a very fast cold start, making it ideal for functions invoked at irregular intervals. Python's performance is suboptimal when it comes to CPU-intensive calculations.
How it works
Try it out in the Cloud
Prerequisites:
- A UNIX shell
- AWS CLI
- Git
- SSH
To run everything on your own AWS account, you need the above tools set up. After cloning the repository (git clone $link), navigate into the folder and run the deploy script:
./deploy.sh 3000 100 1
This creates the complete setup required to run the tests: it starts an EC2 instance, runs the Gatling tests and downloads the results once they are done.
At the end, the EC2 instance is shut down again; the lambdas and their API Gateway, however, remain in your AWS account. They do not accrue any additional costs, and you can delete them easily via their CloudFormation stack. You do not need to delete the stack to rerun the tests.
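If you do want to clean up, a single AWS CLI call removes the stack (the stack name below is a placeholder; use the name your deployment created):

aws cloudformation delete-stack --stack-name <your-stack-name>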
The parameters stand for a target value of 3000, 100 requests per endpoint, and one request per second. For a better understanding of what the target value means, see the algorithm section below.
Run it locally
Prerequisites:
- Docker
- Gatling
- AWS SAM
You can also check out the lambdas locally and run the Gatling test against them.
To start up the lambdas and their API, simply execute
sam local start-api -t sam_template.yaml
You can now send events to the lambdas. Here is an example using the provided payload:
curl localhost:3000/nodejs -d "@event.json"
To execute the Gatling tests, simply run
./run_gatling.sh http://host.docker.internal:3000 5000 100 2
The parameters are the same as for the deployment script, except for the first one, which is the address of the API. Since Gatling runs in a Docker container, we cannot use localhost and have to use the actual IP or the “host.docker.internal” placeholder.
The Algorithm
We need a function that creates measurable runtime, but also ensures that we do not merely measure compiler optimisations. We want to make sure that each implementation does exactly the same thing, regardless of language. What we need is a simple, independent way to create hot air.
For this purpose, I chose a simple algorithm that does nothing but integer additions, where each step depends on the previous result to prevent loop unrolling and similar optimisations. We tell the algorithm which end value we want to reach, and it then oscillates between 1, -1, 2, -2, … until it reaches that value. Since we change the current value in increments of 1, the runtime grows quadratically with the target value.
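A minimal Python sketch of this could look as follows (the names are mine, and the actual implementations in the repository may differ in detail):

def oscillate(target):
    # Oscillate the current value between 1, -1, 2, -2, ... in steps of 1
    # until the target value is reached. Every addition depends on the
    # previous result, which defeats loop unrolling.
    current = 0
    direction = 1  # +1 while climbing, -1 while descending
    bound = 1      # next turning point: 1, -1, 2, -2, 3, ...
    steps = 0
    while current != target:
        current += direction
        steps += 1
        if current == bound:
            direction = -direction
            bound = -bound if bound > 0 else -bound + 1
    return steps

Reaching a target value of n this way takes n * (2n - 1) additions, e.g. 19,900 steps for a target of 100, which is where the quadratic growth comes from.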
Here’s a fancy graph showing the value development during a run to 100:

A lot of effort to count to 100, but an easy way to ensure we only do integer math with easily portable code.
Bernhard Bonigl, Software Engineer at viesure