At its re:Invent conference in Las Vegas last week, AWS described how it now uses Rust by default for certain types of project, having found it much faster than Kotlin (a JVM language) or Go, thanks in part to avoiding garbage collection.
Garbage collection is a technique used by many programming languages to relieve programmers from the burden of manually allocating and freeing memory – but which also imposes an overhead that is particularly noticeable in large-scale distributed applications.
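Rust takes a different approach: deallocation is tied to ownership, so memory is freed at a statically known point rather than during a collector pause. A minimal sketch (the `Buffer` type and sizes here are invented for illustration):

```rust
// Rust has no garbage collector: each heap allocation is freed at a
// statically known point, when its owning value goes out of scope.
struct Buffer(Vec<u8>);

impl Drop for Buffer {
    // Runs deterministically at the end of the owning scope,
    // not at some later garbage-collection pause.
    fn drop(&mut self) {
        println!("freeing {} bytes", self.0.len());
    }
}

fn allocate_and_use() -> usize {
    let b = Buffer(vec![0u8; 1024]);
    b.0.len()
} // `b` is dropped here; its heap memory is returned immediately.

fn main() {
    let n = allocate_and_use();
    println!("used a {n}-byte buffer, already freed");
}
```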
A J Stuyvenberg, a staff engineer at Datadog, and Darko Mesaros, principal developer advocate at AWS, presented a session on Rust at AWS. Mesaros kicked off with the story of Aurora DSQL, a distributed and mostly PostgreSQL-compatible database, which was rewritten from Kotlin to Rust to solve performance issues. That journey was described earlier this year by AWS CTO Werner Vogels. “The [Rust] code was 10x faster than our carefully tuned Kotlin implementation – despite no attempt to make it faster,” said Vogels.
Mesaros said Rust is now the default for all data plane projects at AWS. The cloud giant also uses Rust for Firecracker, its microVM (micro virtual machine) technology, and for Bottlerocket, its Linux-based container operating system.
Developers might not be surprised that Rust is so much more performant than Kotlin, but the comparison with Go is less obvious, since Go also compiles to native code. Stuyvenberg spoke in detail about Datadog’s experience with its observability agent, written in Go, when ported to run as an extension on AWS Lambda in order to monitor Lambda functions.
Using Go for this on Lambda “got us out of the door quickly but had some problems,” said Stuyvenberg. In particular, the cold start time or initialization duration was between 700 and 800 milliseconds, which was “an incredible overhead for people to stomach when they are moving to serverless and they want observability … it was untenable.”
Migrating the extension to Rust reduced that overhead to 80 milliseconds, said Stuyvenberg.

Datadog is now in the process of migrating more of its agent code to Rust, even when not running on Lambda. The issue with Go, said Stuyvenberg, is that observability code tends to require many small memory allocations in order to collect and save millions of data points. The Go code “spends 30 percent of its time in garbage collection,” he said. The overall result was that the Rust code was almost three times faster, and could handle many times as many data points per second as the Go equivalent.
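The allocation pressure Stuyvenberg describes can be sidestepped in idiomatic Rust by reusing a buffer across iterations, so the hot loop performs no per-batch heap allocations at all. A minimal sketch with invented data – not Datadog’s actual agent code:

```rust
// Sketch: aggregating batches of data points while reusing a single
// scratch buffer. `clear()` keeps the Vec's capacity, so the loop
// touches the allocator at most once, up front.
fn sum_batches(batches: &[Vec<f64>]) -> f64 {
    let mut scratch: Vec<f64> = Vec::with_capacity(1024);
    let mut total = 0.0;
    for batch in batches {
        scratch.clear();                  // keeps capacity; no reallocation
        scratch.extend_from_slice(batch); // reuses the same backing memory
        total += scratch.iter().sum::<f64>();
    }
    total
}

fn main() {
    let batches = vec![vec![1.0, 2.0], vec![3.0, 4.0, 5.0]];
    println!("sum of all data points: {}", sum_batches(&batches));
}
```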
It is possible to optimize Go code by reducing use of the garbage collector, via a technique called off-heap memory management. The problem, said Stuyvenberg, is that such code becomes an “unmaintainable mess” because it is not the way Go was designed to be written. Hence, “idiomatic Rust code yields the performance of what carefully optimized Go code does,” he said.
Tips for using Rust? The two speakers recommended configuring clippy warnings – clippy being a Rust linter – so that builds with warnings fail. They also recommended not using the unwrap() and expect() error handling functions other than for debugging, and removing them from production code. When using Rust with Lambda, they suggested using the Lambda emulator for debugging, which is especially useful for code built on the tokio networking library: the emulator gives access to the tokio console, a debugging and profiling tool, whereas Lambda itself does not.
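In practice, the clippy tip amounts to running `cargo clippy -- -D warnings` (or the equivalent lint configuration), which turns every warning into a build failure. The unwrap() advice boils down to returning a Result and letting callers decide. A minimal sketch (the `parse_port` function is invented for the example):

```rust
use std::num::ParseIntError;

// Instead of `unwrap()`, which panics on failure, return a Result and
// let the caller propagate the error with `?` or match on it.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    // Debug-only style, to be removed for production:
    // let port = "8080".parse::<u16>().unwrap(); // panics on bad input

    // Production style: handle the error explicitly.
    match parse_port("8080") {
        Ok(p) => println!("listening on port {p}"),
        Err(e) => eprintln!("refusing to start: {e}"),
    }
}
```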
Support for Rust functions in Lambda was made generally available last month, though it has been working well for some time. There is no dedicated Lambda runtime for Rust: because Rust compiles to native code, functions run on an OS-only runtime, typically Amazon Linux 2023.
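A Rust Lambda function is an ordinary binary built against AWS’s open source Rust runtime client. As a rough sketch, a project’s Cargo.toml might look like the following (the package name is hypothetical and the version numbers are illustrative – check crates.io for current releases):

```toml
[package]
name = "my-function"   # hypothetical project name
version = "0.1.0"
edition = "2021"

[dependencies]
# AWS's Rust runtime client for Lambda (awslabs/aws-lambda-rust-runtime)
lambda_runtime = "0.13"
serde = { version = "1", features = ["derive"] }
tokio = { version = "1", features = ["macros"] }
```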
The full presentation is at https://www.youtube.com/watch?v=buBBQ5mXAi8.
