Rust vs C++: which is better for building high-frequency trading software from scratch?

Between Rust and C++, which is better for building low-latency software from scratch that runs heavy models and executes trades at high speed?

What do you mean by that?

2 Likes

I currently think that any from-scratch system is better off being written in Rust if you are thinking long term; it wins on the dev tooling alone, in my humble opinion (just kidding, there are actually many more reasons why Rust would be better). But it depends on whether you can actually find enough Rust experts to work on the project. C++ is the current standard for low-latency trading systems (I'm not sure whether you mean ML or mathematical models, as @firebits.io has already pointed out), and it will be easier to find experienced people to build things in it. If you need something like QuantLib, there's no equivalent in Rust. But if you are making EVERYTHING from scratch, Rust is definitely the better choice.

EDIT: Also, concurrency and cache-friendly code is a VERY niche area in C++, so it's not as if there will be many more people available to work on it compared to Rust either.

3 Likes

What do you mean by "at high speed"? Microseconds? Nanoseconds?

Processing speed, while it depends on the complexity of the algorithm used, is usually not the main problem. Data transfer over the network is a bottleneck; so are protocol parsing, data deserialization, etc.
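To make the point concrete, here is a minimal sketch (the message format, sizes, and iteration counts are invented for illustration) comparing the cost of deserializing a market-data message against the trivial arithmetic done on the parsed values:

```python
# Rough illustration: parsing a message vs. computing on the parsed values.
# The JSON order-book message below is a made-up example.
import json
import time

msg = json.dumps({"symbol": "XYZ",
                  "bids": [[100.0 + i * 0.01, 10] for i in range(500)]})

t0 = time.perf_counter()
for _ in range(1000):
    book = json.loads(msg)  # deserialization of the raw message
t_parse = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(1000):
    # trivial "processing": midpoint of first and last bid price
    mid = (book["bids"][0][0] + book["bids"][-1][0]) / 2
t_compute = time.perf_counter() - t0

print(f"parse: {t_parse:.4f}s  compute: {t_compute:.4f}s")
```

On typical hardware the parsing loop dominates by orders of magnitude, which is why serious low-latency systems obsess over wire formats and parsers rather than the arithmetic.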

One should use FPGAs for high frequency, in my opinion; anything slower than that... I don't know if it can even be called high frequency nowadays.

I have a Python-based quantitative trading model that uses batch learning, from data collection through mathematical feature engineering to machine learning algorithms. On my current setup it takes about 1 minute 45 seconds to train and produce outputs. The model runs entirely on CPU. I want to build a software system that can speed up the training, inference, and trade execution, ideally completing everything within 500 to 750 milliseconds, without changing the model's outputs, or with minimal code adaptation that does not affect results. I do not have prior experience building such software, so I am asking whether it is possible to create a system like this and, if so, which programming language, Rust or C++, would be better suited to deliver this performance reliably on a CPU-based laptop.

Milliseconds are okay for my use case. At least between 12:59 and 13:00 it already completes the training, prediction, and trade execution, so the timing works for me.

It's ML models.

Oh, that's a completely different thing. You should profile and optimize the Python code first. You should be able to optimize it heavily using vectorized operations (NumPy, pandas). If you are hitting a bottleneck, especially in the data-processing stages, you can also try Polars (written in Rust but available as a Python package), which I have often heard outperforms pandas. Try Numba's JIT, and if that still doesn't work, try PyO3 bindings for interop with Rust. It is highly unlikely that you will need a full rewrite in a low-level language.
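As a minimal sketch of the "vectorize first" advice (the data, window size, and rolling-mean feature are made up for illustration), here is the same feature computed with an explicit Python loop and with NumPy array operations; the outputs are identical but the runtimes differ dramatically:

```python
# Same rolling-mean feature two ways: Python loop vs. NumPy vectorization.
import numpy as np

def rolling_mean_loop(prices, window):
    """Rolling mean via an explicit Python loop (slow)."""
    out = []
    for i in range(window - 1, len(prices)):
        out.append(sum(prices[i - window + 1 : i + 1]) / window)
    return np.array(out)

def rolling_mean_vectorized(prices, window):
    """Same computation via a cumulative sum: one pass, no Python-level loop."""
    c = np.cumsum(np.insert(prices, 0, 0.0))
    return (c[window:] - c[:-window]) / window

prices = np.random.default_rng(0).random(10_000)
a = rolling_mean_loop(prices, 20)
b = rolling_mean_vectorized(prices, 20)
assert np.allclose(a, b)  # identical results, very different runtimes
```

The same pattern applies to most feature-engineering code: replacing per-row Python loops with array operations (or a Polars expression pipeline) is usually the biggest single win, well before reaching for Numba or a Rust rewrite.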

3 Likes

Thanks a lot for the detailed explanation. I’ll give these approaches a try and see how much I can optimize the model.

3 Likes

500 milliseconds isn't "high frequency". That's ages in HFT time. High-frequency trading runs in under a millisecond, often in microseconds. Half a second is enough to run your models in Java or C#, with several garbage collection passes and network calls.

1 Like

The only way to determine the answer is to implement the algorithm in both languages and test, to get empirical evidence.