Can Rust make Python faster?

I have the following Python script that calculates Fibonacci numbers:

import time

def fibonacci(n: int) -> int:
    if n <= 1:
        return n

    return fibonacci(n-1) + fibonacci(n-2)

def start():
    start = round(time.time() * 1000)
    for i in range(0, 35):
        print(fibonacci(i), end=" ")  # loop body restored: compute and print each value
    end = round(time.time() * 1000)
    print(" ")
    print("Time: " + str(end - start))

start()


The execution time for this script is 2737ms.

I run the same script from Rust by embedding the interpreter with PyO3:

use std::time::Instant;
use pyo3::prelude::*;
use pyo3::types::PyTuple;

fn main() {
    let now = Instant::now();

    let python_code = std::fs::read_to_string("").expect("Failed to read file...");
    Python::with_gil(|py| {
        // Load the script as a module and grab its `start` function
        // (file and module names left as empty placeholders).
        let fun: Py<PyAny> = PyModule::from_code(py, &python_code, "", "")
            .unwrap()
            .getattr("start")
            .unwrap()
            .into();
        // Call start() with an empty argument tuple.
        fun.call1(py, PyTuple::empty(py)).unwrap();
    });

    let elapsed = now.elapsed();
    println!("Elapsed: {:?}", elapsed.as_millis());
}

The times are:
Time: 2716 -- Output in Python script.
Elapsed: 2736 -- Elapsed time when Rust finished.

I was wondering if Rust or PyO3 has a cool feature or something to make Python scripts run faster than native Python?

Yes, there is an ABI.


Could you elaborate a little bit?

Generally the practice is called inter-op as in Rust Python Inter-op. The most thorough, but not necessarily the most up-to-date, reference I know of is here. A closely related topic is broached frequently on Discourse related to FFI.


I saw PyPy was supported in PyO3 but it looks like you can only run Rust code in PyPy, not run PyPy in Rust. All the Python-running functions have this:

This function is unavailable under PyPy because PyPy cannot be embedded in Rust (or any other software). Support for this is tracked on the PyPy issue tracker.

If you don't need to send information or if stdin/stdout is enough, you could call PyPy as a command, assuming it's installed.
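To make the stdin/stdout idea concrete, here is a minimal sketch of that pattern. It is shown in Python with `sys.executable` standing in for the `pypy` binary (so it runs anywhere); from Rust the same thing is done with `std::process::Command`. The child script and its line protocol are hypothetical:

```python
import subprocess
import sys

# Hypothetical child script: reads one line of comma-separated closes from
# stdin and prints a trading signal on stdout.
child_script = """
import sys
closes = [float(x) for x in sys.stdin.readline().split(",")]
print("BUY" if closes[-1] > closes[0] else "SELL")
"""

result = subprocess.run(
    [sys.executable, "-c", child_script],  # swap in "pypy" when it is installed
    input="1.0,2.0,3.0",
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # → BUY
```

The trade-off is process-spawn overhead on every call, so this only pays off when the script itself runs long enough for PyPy's JIT to matter.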


The context of the application is very simple trading algorithms. In Rust I open a stream to the Binance API using a websocket, and every time new candlestick data is received the Python code gets executed in Rust.

Simple example of Python code:

import pandas as pd
import numpy as np

def moving_average(data, window):
    return data['close'].rolling(window=window).mean()

def generate_signal(data, short_window, long_window):
    """
    Generate buy/sell signal based on moving average crossover.
    Buy signal (1): short moving average crosses above long moving average.
    Sell signal (-1): short moving average crosses below long moving average.
    No action (0): no crossover.
    """
    short_mavg = moving_average(data, short_window)
    long_mavg = moving_average(data, long_window)

    if short_mavg.iloc[-1] > long_mavg.iloc[-1] and short_mavg.iloc[-2] <= long_mavg.iloc[-2]:
        return 1  # Buy signal
    elif short_mavg.iloc[-1] < long_mavg.iloc[-1] and short_mavg.iloc[-2] >= long_mavg.iloc[-2]:
        return -1  # Sell signal
    else:
        return 0  # No action

# Parameters
short_window = 40
long_window = 100

# Assume `data` is the DataFrame with candlestick data passed to the function
def process_new_data(data):
    signal = generate_signal(data, short_window, long_window)
    if signal == 1:
        print("Buy Signal")
        # Implement buy logic here
    elif signal == -1:
        print("Sell Signal")
        # Implement sell logic here
    else:
        print("No Action")

# Example usage
# data = ... # data is provided by the external application
# process_new_data(data)

Should I find a way to increase performance or let my code be as is?

The usual workflow is:

  • Find out what code is slowest
  • Rewrite it in Rust
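A minimal sketch of the first step using the standard-library `cProfile`, run here against the fibonacci example from earlier in the thread (any representative workload works):

```python
import cProfile
import io
import pstats

def fibonacci(n: int) -> int:
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

# Profile a representative workload to see where the time actually goes.
profiler = cProfile.Profile()
profiler.enable()
fibonacci(20)
profiler.disable()

# Print the five most expensive entries by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

Whatever dominates that report is the candidate for a Rust rewrite.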

But if all your calculations are in pandas/numpy, then making the python faster isn't going to help much anyway. You might be able to improve things a little bit by removing python entirely and rewriting it in something like polars, but if you want maximum performance you'll need to write rust code that's custom-made for your data.

Also remember to run with --release.


Thanks for the reply. The point of the app is that users can upload different Python files with different trading algorithms to track the performance.

So the code needs to be interpreted and dynamic.
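For reference, the dynamic part looks much the same on either side of the FFI boundary; PyO3's `PyModule::from_code` is essentially the embedded equivalent of this pure-Python sketch (the uploaded source and its `process_new_data` entry point are hypothetical):

```python
# Minimal sketch of loading an uploaded strategy file at runtime, assuming
# each upload defines a process_new_data(data) entry point.
user_source = """
def process_new_data(data):
    return "Buy" if data[-1] > data[0] else "Sell"
"""

namespace = {}
exec(compile(user_source, "<uploaded_strategy>", "exec"), namespace)

# Look up the entry point by name and call it with fresh candle data.
signal = namespace["process_new_data"]([1, 2, 3])
print(signal)  # → Buy
```

Note that running arbitrary uploaded code this way executes it with full privileges, so uploads from untrusted users need sandboxing at the process or container level.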

If there are certain common actions that take a lot of time, you can write rust functions for them, make them available in python, and encourage users to use those when possible.

But other than that, CPython is already a fast, general-purpose Python interpreter. There's not much you can do if you want to accept arbitrary code.


I'll share everything I know about Python as it relates to the code you've shared. The two libraries you are importing are (over-generalizing) possibly the most optimized libraries in Python. Pandas is modeled on R data frames, which are also backed by C, and both numpy and pandas make extensive use of lowered C function calls. Python was designed to interoperate closely with C, and that's exactly how those libraries use it. Numerous benchmarks compare Rust to numpy for data processing, and the results depend strongly on exploiting the unique advantages Rust has over Python, such as data ingestion and munging; they are largely unaffected by the calls into numpy itself, which realistically cannot be optimized further without wildly unsafe array operations.
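The point about lowered C calls is easy to see directly. A rough sketch comparing a pure-Python accumulation loop against numpy's vectorized sum over the same million elements (absolute timings vary by machine; only the ratio matters):

```python
import time
import numpy as np

values = np.random.rand(1_000_000)

# Pure-Python loop: each iteration boxes a float and dispatches bytecode.
t0 = time.perf_counter()
loop_total = 0.0
for v in values:
    loop_total += v
loop_time = time.perf_counter() - t0

# Vectorized sum: one call into numpy's compiled C loop.
t0 = time.perf_counter()
vec_total = float(values.sum())
vec_time = time.perf_counter() - t0

print(f"python loop: {loop_time:.4f}s  numpy sum: {vec_time:.4f}s")
```

Rewriting the `values.sum()` side in Rust buys little; it is the Python-loop side of a program that a Rust rewrite can eliminate.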