18.03.2026

Python 3.15 JIT: 12% Faster and Back on Track


Python's experimental JIT compiler has been a topic of debate since its introduction in Python 3.13. Critics pointed out that it often produced no speedups or even slowdowns. Now, with Python 3.15 alpha, the JIT is finally delivering real performance improvements.

The Numbers

The latest benchmarks show impressive gains:

  • macOS AArch64: 11-12% faster than the tail-calling interpreter
  • x86_64 Linux: 5-6% faster than the standard interpreter
  • Range: from a 20% slowdown to an over-100% speedup, depending on workload

These are geometric means across the benchmark suite. Individual workloads vary significantly, but the overall trend is clearly positive.
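Because benchmark results are ratios, they are averaged with a geometric mean rather than an arithmetic one: a 2x speedup and a 2x slowdown should cancel out, not average to 1.25x. A minimal sketch, using hypothetical per-benchmark speedup ratios (the real suite's numbers are not reproduced here):

```python
import math

def geometric_mean(speedups):
    # Geometric mean: the appropriate average for multiplicative ratios,
    # computed as exp(mean(log(x))) for numerical stability.
    return math.exp(sum(math.log(s) for s in speedups) / len(speedups))

# Hypothetical per-benchmark speedup ratios (1.0 = no change)
ratios = [0.80, 1.05, 1.10, 1.25, 2.00]
print(f"{geometric_mean(ratios):.3f}")
```

Note how a single 2x outlier does not dominate the result the way it would in an arithmetic mean.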

What Changed

Trace Recording Rewrite

The JIT frontend was completely rewritten to use a tracing approach. This increased JIT code coverage by 50%. More code getting JIT-compiled means more opportunities for optimization.

The key innovation was a "dual dispatch" mechanism. Instead of doubling the interpreter size with tracing versions of every instruction, the new approach uses a single tracing instruction and two dispatch tables. This keeps the base interpreter fast while enabling efficient trace recording.
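The shape of the idea can be sketched with a toy interpreter. This is a conceptual illustration in Python, not CPython's actual C implementation: a second dispatch table routes every opcode through one generic tracing shim that records the instruction, then calls the ordinary handler, so no opcode needs a hand-written tracing clone.

```python
def interp(program, tracing=False):
    stack, trace = [], []

    # Ordinary handlers, written once.
    handlers = {
        "PUSH": lambda arg: stack.append(arg),
        "ADD":  lambda arg: stack.append(stack.pop() + stack.pop()),
    }

    def trace_shim(op, arg):
        trace.append(op)   # record the instruction into the trace
        handlers[op](arg)  # then dispatch to the ordinary handler

    # Two dispatch tables over the same handlers: only one extra piece of
    # machinery (the shim), not a tracing copy of every instruction.
    normal_table = handlers
    tracing_table = {op: (lambda a, op=op: trace_shim(op, a)) for op in handlers}

    table = tracing_table if tracing else normal_table
    for op, arg in program:
        table[op](arg)
    return stack, trace

prog = [("PUSH", 2), ("PUSH", 3), ("ADD", None)]
print(interp(prog, tracing=True))  # ([5], ['PUSH', 'PUSH', 'ADD'])
```

The normal table pays no tracing cost at all, which is the point: the base interpreter stays fast, and tracing is switched on only while a trace is being recorded.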

Reference Count Elimination

Python's reference counting has always been a performance bottleneck. Each decrement operation includes a branch to check if the object should be deallocated.

The team discovered that eliminating these branches, even when the decrement itself remains, provides significant speedups. A single branch per Python instruction adds up across millions of operations.
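The real change lives in CPython's C internals, but the distinction can be sketched conceptually in Python: a branchy decrement tests for zero on every call, while a branchless one skips the test, which is safe only when the compiler has proven another reference keeps the object alive.

```python
class Obj:
    # Stand-in for a refcounted object (hypothetical, for illustration only).
    def __init__(self):
        self.refcount = 1
        self.freed = False

def decref_branchy(o):
    o.refcount -= 1
    if o.refcount == 0:  # this branch runs on *every* decrement
        o.freed = True

def decref_branchless(o):
    # No zero check: valid only when analysis guarantees the count
    # cannot reach zero here, so no deallocation can be missed.
    o.refcount -= 1

o = Obj()
o.refcount = 3
decref_branchless(o)
decref_branchy(o)
print(o.refcount, o.freed)  # 1 False
```

Removing one predictable-but-present branch per decrement sounds trivial, but multiplied across millions of instructions it becomes measurable.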

Community-Driven Development

After the Faster CPython team lost its main sponsor in 2025, the project pivoted to community stewardship. This turned out to be a strength.

The team broke down JIT optimization into manageable tasks that contributors without JIT experience could tackle. Converting interpreter instructions to JIT-friendly forms became a collaborative effort with 11 contributors.

Quick Reference

Check whether your Python build includes the JIT (Python 3.14+ exposes the sys._jit namespace; sys.flags has no JIT field):

import sys
print(sys._jit.is_available())  # was this build compiled with JIT support?
print(sys._jit.is_enabled())    # is the JIT enabled in this process?

Enable JIT compilation:

# Set environment variable before running
PYTHON_JIT=1 python your_script.py

Build Python with JIT support:

./configure --enable-experimental-jit
make

What This Means for DevOps

For SRE and DevOps engineers running Python workloads at scale, this matters:

Automation scripts: Long-running Python scripts for monitoring, deployment, and orchestration will see incremental speedups without code changes.

Data processing: ETL pipelines and log processing tools written in Python benefit from JIT optimization on hot code paths.

Cost savings: 5-12% faster execution can translate directly into reduced compute costs for Python-heavy workloads.

No action required: The JIT is opt-in and backward compatible. Existing Python code works unchanged.
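A quick way to A/B test a workload is to run the same script under PYTHON_JIT=0 and PYTHON_JIT=1 and compare wall-clock time. This is a minimal sketch, not a rigorous benchmark harness; the script path passed to timed_run is a placeholder for your own workload, and on builds without JIT support the variable is simply ignored.

```python
import os
import subprocess
import sys
import time

def timed_run(jit: str, script: str) -> float:
    """Run `script` with PYTHON_JIT set to `jit` and return elapsed seconds."""
    env = dict(os.environ, PYTHON_JIT=jit)
    start = time.perf_counter()
    subprocess.run([sys.executable, script], env=env, check=True)
    return time.perf_counter() - start

# Usage (with your own script in place of "workload.py"):
#   off = timed_run("0", "workload.py")
#   on  = timed_run("1", "workload.py")
#   print(f"JIT off: {off:.2f}s  on: {on:.2f}s  ratio: {off / on:.2f}x")
```

For meaningful numbers, run each configuration several times and compare medians; single runs are noisy, especially for short scripts dominated by interpreter startup.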

Current Limitations

The JIT is not yet production-ready:

  • Free-threading support is still in development
  • Some workloads see slowdowns rather than speedups
  • Memory usage increases due to JIT compilation overhead
  • ARM64 Linux shows smaller gains than macOS

The team is targeting full free-threading support and further optimizations for Python 3.15 and 3.16.

Following the Progress

Track JIT performance in real-time at doesjitgobrrr.com. The site shows daily benchmark runs comparing JIT-enabled Python against the standard interpreter.

The transformation from "barely any speedup" to "consistently faster" happened over about six months of focused community effort. It demonstrates that even complex compiler projects can progress through incremental, well-organized contributions.

Conclusion

Python 3.15's JIT compiler represents a turning point for Python performance. While not yet suitable for production, the trajectory is clear. DevOps engineers should start testing their workloads with JIT enabled to identify which tools benefit most.

The combination of trace recording, reference count optimization, and community collaboration has put Python's JIT back on track. The goal of a 10% faster JIT by Python 3.16 now seems achievable.


Want to monitor your Python workloads and infrastructure with AI-powered automation? Check out Akmatori, the open-source AI agent platform built for SRE teams.

Automate incident response and prevent on-call burnout with AI-driven agents!