Idiomatic logging for a game server?

How do you do idiomatic logging for a game server? Is this the point where we have to convert structs to rows of tables and insert them into a ClickHouse DB?

Up until this point, println! has served me fine. The problem is that with a multiplayer game server, we have println! output from multiple users interleaved. I don't know if this is overkill, but it seems like throwing all of this into ClickHouse and using SQL is the way to go.

However, I'm new to this and am interested in hearing how others deal with similar logging issues (multiple real-time users).


That seems like the problem tracing's spans solve?

I mean interleaved between DIFFERENT players, so we get a log of things like:

p0 logs in
p1 logs in
p2 logs in
p3 logs in
p0 does blah
p1 does blah
p2 does blah

but when debugging, we often want to pick one user and get all logs related only to that user

Am I misunderstanding what tracing/span does ?

a span serves this purpose perfectly: you enter a span each time a player logs in, record any info you need (such as player id, client ip, etc.) as fields on that span, and then record all events within that span. when the player disconnects, you exit the span.
when replaying the trace, you can filter based on arbitrary predicates (such as span id, target, level, etc.), including custom fields you recorded for that span.


A span represents an operation of some sort. Spans often map to a region of code, but they don't have to. Spans can have data attached to them and can be nested. Events[1] are associated with the currently active spans, and can also have data attached to them in addition to the data attached to the active spans.

A simple way to model your problem with tracing would be to have a game span with some sort of ID attached to it, and to attach a user ID to all of the events that apply to a specific user:

use tracing::Level;

fn main() {
    // Install the default tracing-subscriber formatter so the output below appears.
    tracing_subscriber::fmt().init();

    // Construct and enter a span manually.
    let span = tracing::span!(Level::INFO, "game", id = 12);
    let _guard = span.enter();

    startup();

    tracing::info!(player = 2, action = "foo");
    tracing::info!(player = 0, action = "bar");
    tracing::info!(player = 3, action = "baz");
}

// Or let the proc macro construct the span for a function for you
#[tracing::instrument]
fn startup() {
    for i in 0..4 {
        tracing::info!(player = i, "Log in succeeded");
    }

    tracing::info!("Start up completed");
}

Using the default tracing-subscriber formatter, that gets you:

2023-04-23T19:14:16.973787Z  INFO game{id=12}:startup: main: Log in succeeded player=0
2023-04-23T19:14:16.973964Z  INFO game{id=12}:startup: main: Log in succeeded player=1
2023-04-23T19:14:16.974102Z  INFO game{id=12}:startup: main: Log in succeeded player=2
2023-04-23T19:14:16.974179Z  INFO game{id=12}:startup: main: Log in succeeded player=3
2023-04-23T19:14:16.974247Z  INFO game{id=12}:startup: main: Start up completed
2023-04-23T19:14:16.974379Z  INFO game{id=12}: main: player=2 action="foo"
2023-04-23T19:14:16.974479Z  INFO game{id=12}: main: player=0 action="bar"
2023-04-23T19:14:16.974560Z  INFO game{id=12}: main: player=3 action="baz"

If you're consistent about how you specify IDs and such, it's pretty easy to search through the log file for what you're looking for. There's also a built-in JSON formatter:

{"timestamp":"2023-04-23T19:16:17.092361Z","level":"INFO","fields":{"message":"Log in succeeded","player":0},"target":"main","span":{"name":"startup"},"spans":[{"id":12,"name":"game"},{"name":"startup"}]}
{"timestamp":"2023-04-23T19:16:17.092647Z","level":"INFO","fields":{"message":"Log in succeeded","player":1},"target":"main","span":{"name":"startup"},"spans":[{"id":12,"name":"game"},{"name":"startup"}]}
{"timestamp":"2023-04-23T19:16:17.092801Z","level":"INFO","fields":{"message":"Log in succeeded","player":2},"target":"main","span":{"name":"startup"},"spans":[{"id":12,"name":"game"},{"name":"startup"}]}
{"timestamp":"2023-04-23T19:16:17.092892Z","level":"INFO","fields":{"message":"Log in succeeded","player":3},"target":"main","span":{"name":"startup"},"spans":[{"id":12,"name":"game"},{"name":"startup"}]}
{"timestamp":"2023-04-23T19:16:17.092977Z","level":"INFO","fields":{"message":"Start up completed"},"target":"main","span":{"name":"startup"},"spans":[{"id":12,"name":"game"},{"name":"startup"}]}

That's obviously much harder to read directly, but it gives you a lot of flexibility in searching for logs programmatically.
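As a minimal sketch of that programmatic searching (the `events_for_player` helper is hypothetical, not part of tracing), assuming the JSON-lines output above has been read into a string, you could pull out a single player's events like this:

```rust
// Hypothetical helper for illustration: filter JSON-formatted log lines down
// to a single player's events by matching on the serialized `"player":N` field.
// A real tool should parse each line with a JSON parser instead of substring
// matching (which would, for example, also match player 10 against player 1).
fn events_for_player(log: &str, player: u32) -> Vec<&str> {
    let needle = format!("\"player\":{}", player);
    log.lines().filter(|line| line.contains(&needle)).collect()
}

fn main() {
    let log = concat!(
        "{\"fields\":{\"message\":\"Log in succeeded\",\"player\":0}}\n",
        "{\"fields\":{\"message\":\"Log in succeeded\",\"player\":1}}\n",
        "{\"fields\":{\"message\":\"did something\",\"player\":0}}",
    );
    // Print only player 0's events.
    for line in events_for_player(log, 0) {
        println!("{line}");
    }
}
```

In practice you'd do the same thing with a real JSON parser (serde_json, jq, or a log viewer) so the filter applies to the actual field rather than its serialized text.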

  1. basically log messages


tracing is a general concept. more often than not, you want to store structured traces (not just formatted text messages) somewhere, which can then be replayed or analyzed as needed.

it is not just a concept tied to the tracing crate. for instance, there are crates for zipkin, loki, etc., and conveniently, there are crates connecting tracing with them.

the tracing-subscriber facility employs a very flexible layered architecture, allowing users to do complex analysis; feel free to utilize the tracing facility however you like to suit your needs. just as an example, using the builtin EnvFilter, you can do something like this:

instrument your code (probably a library crate) to add spans and events:

// assume your server implementation will spawn an async task for each client.
// this example uses the `Instrument` extension trait to instrument a Future;
// you can use the `#[instrument]` attribute on an async function too.
use tracing::Instrument;

// a client connected
let client_span = tracing::span!(tracing::Level::INFO, "client", id = client_id);
tokio::spawn(
    async {
        // handle client activity
        tracing::trace!("this is a trace level message");
        tracing::debug!("this is a debug level message");
    }
    .instrument(client_span),
);

consume the trace from the main crate:

use tracing_subscriber::prelude::*;

// the fmt layer turns the structured data into text, useful for humans to read,
// but not efficient for storage and post-processing; the EnvFilter layer filters
// spans and events based on directives given via an environment variable
// ("RUST_LOG" by default). you can add your own fancy layer, or write a full
// Subscriber if necessary.
tracing_subscriber::registry()
    .with(tracing_subscriber::fmt::layer())
    .with(tracing_subscriber::EnvFilter::from_default_env())
    .init();

to show all events inside the span "client" with id = 42, run with:

env RUST_LOG="[client{id=42}]" cargo run

or, for level DEBUG and above within the span "client", with everything else defaulting to INFO:

env RUST_LOG="info,[client]=debug" cargo run

remember to escape the environment variable using your shell's syntax

@semicoleon @nerditation:

If not "insert into Clickhouse + SQL queries", how are you reading / analyzing the logs? Are you dumping to txt files and using grep ?

well, the answer is, as usual, it depends.

there are two questions to ask: a) how to store the log entries, and b) how to process them.

to store the data, you can send it to some logging service (OpenTelemetry, for example), store it in a local database, or just save it as text files. as long as it preserves the information you need, all is good.

once the data is there, how you process it is really up to you. depending on your task, grep over text files might be a perfectly valid option, but you could also do very complicated data mining on it; after all, it's just data. if you store the data using some cloud service, you'll get very good analysis tools from the service provider.

btw, the EnvFilter in my previous comment is just an example of how you can filter the events. you can find a more involved example, which changes the filter at runtime on demand, in this video; the demo starts around 8:20.

I think I finally got this working / get the hang of this now. Thanks!
