Datadog on Profiling in Production

An episode by Julien Danjou and Kirk Kaiser
Datadog

January 28, 2022, 04:30 PM


About this episode

Depending on your chosen programming language and stack, you may have never used a profiler in production. The very idea of running a profiler against a production web service may seem unrealistic because of the overhead involved. After all, aren’t profilers extremely expensive to run?

Despite that reputation, many programming languages have profilers built to run in production. Seeing how your application behaves in production is critical to understanding how it performs in the real world.

In this episode of Datadog On, we’ll learn how Datadog created a production-ready profiler for Python using statistical sampling. We’ll explore the history of production profilers in other languages, and see how languages like Java have a rich history of profiling in production.
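To make the idea concrete, here is a minimal sketch of statistical sampling in Python: a loop periodically captures the call stacks of all running threads and counts how often each function appears, so hot functions show up in proportionally more samples. This is only an illustration of the general technique, not Datadog's actual implementation; the names `sample_stacks` and `busy` are invented for this example.

```python
import collections
import sys
import threading
import time

def sample_stacks(interval=0.01, duration=0.2):
    """Periodically snapshot the call stacks of all threads and count
    how often each function appears. CPU-hot functions accumulate more
    samples; overhead is bounded by the sampling interval."""
    counts = collections.Counter()
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        # sys._current_frames() returns the current frame of every thread.
        for frame in sys._current_frames().values():
            # Walk the stack so callers are credited too.
            while frame is not None:
                code = frame.f_code
                counts[(code.co_name, code.co_filename)] += 1
                frame = frame.f_back
        time.sleep(interval)  # sleep between samples instead of tracing every call
    return counts

def busy():
    # A CPU-bound workload the sampler should observe.
    end = time.monotonic() + 0.3
    while time.monotonic() < end:
        sum(i * i for i in range(1000))

worker = threading.Thread(target=busy)
worker.start()
samples = sample_stacks()
worker.join()

for (name, _), count in samples.most_common(5):
    print(name, count)
```

Because it only wakes up a few times per second rather than instrumenting every function call, a sampler like this keeps overhead low enough for production, at the cost of statistical rather than exact measurements.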

We’ll also see how profilers can be used to solve tricky memory leaks, save on cloud costs with more efficient CPU usage, and help you deploy better, more robust software to end users.