yes, it is up to the binary crate to decide whether and how to consume the traces, and to provide a subscriber to do the work.
at runtime, disabled spans and events are not free, but they are very cheap: typically one atomic load followed by a conditional branch. unless the subscriber's interest is very dynamic, branch prediction makes the cost mostly negligible. if you are still concerned, there are compile-time knobs to turn off certain traces based on the max level.
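for example, the static max-level filters are cargo features on the tracing crate itself; a sketch of what that looks like in Cargo.toml (the version number here is just illustrative):

```toml
[dependencies]
# `max_level_info` compiles out `debug!` and `trace!` entirely;
# `release_max_level_warn` keeps only `warn!`/`error!` in release builds.
tracing = { version = "0.1", features = ["max_level_info", "release_max_level_warn"] }
```

events below the enabled level compile down to no-ops, so they cost nothing at runtime, but note the filtering is global for the whole dependency graph.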
yes, but you need to combine it with tracing_subscriber::FmtSubscriber. the appender implements io::Write; it is the FmtSubscriber which implements tracing::Subscriber and consumes the events, not the appender itself. see the example code in the documentation
if you don't need special handling of the events (such as a custom filter), you don't need to implement your own layer or subscriber. you can just configure the fmt layer with a combined writer, which writes both to stdout and to a rotating log file appender. example code from the documentation:
```rust
use tracing_subscriber::fmt::writer::MakeWriterExt;

// Log all events to a rolling log file.
let logfile = tracing_appender::rolling::hourly("/logs", "myapp-logs");

// Log `INFO` and above to stdout.
let stdout = std::io::stdout.with_max_level(tracing::Level::INFO);

tracing_subscriber::fmt()
    // Combine the stdout and log file `MakeWriter`s into one
    // `MakeWriter` that writes to both
    .with_writer(stdout.and(logfile))
    .init();
```
alternatively, you can configure two separate instances of the fmt layer with different formatters, something like this:
```rust
use tracing_subscriber::*;
use tracing_subscriber::prelude::*;

registry()
    // first layer for console output: pretty formatter plus a level filter
    .with(
        fmt::layer()
            .pretty()
            .with_filter(filter::LevelFilter::from(tracing::Level::INFO)),
    )
    // second layer for the log file appender: json formatter, no filter
    .with(
        fmt::layer()
            .json()
            .with_writer(tracing_appender::rolling::hourly("/some/directory", "prefix.log")),
    )
    .init();
```
it has a specification, but the official statement is that it is "a framework and API". in its terms, services like zipkin or loki are called observability back-end vendors: you instrument your code to generate "signals", and you use "exporters" to send those signals to the observability backends. I don't like jargon like this, but it is what it is. see their documentation for details