@jchimene has a point. If you want to use copious logs to find bugs and perf problems, then it helps if the logs are structured, so that they can be parsed reliably. For instance, having ENTER/EXIT (and EXN for languages that raise exceptions) log-lines with known structure (like class/method-names) can allow you to reconstruct partial call-trees, and if they contain timestamps of adequate granularity, you can use that to find performance bottlenecks. Similarly, if you log session-id information and maybe thread-id info, you can use that to "sessionize your logs".
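To make that concrete, here's a minimal sketch of what such structured ENTER/EXIT lines might look like. The field names and layout (`session=`, `Class::method`) are made up for illustration, not any standard format:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical line format, just to illustrate the idea: epoch timestamp
// with microsecond granularity, event tag, session-id, then class::method.
fn log_line(event: &str, class: &str, method: &str, session_id: u64) -> String {
    let ts = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
    format!(
        "{}.{:06} {} session={} {}::{}",
        ts.as_secs(), ts.subsec_micros(), event, session_id, class, method
    )
}

fn main() {
    // A matched ENTER/EXIT pair with timestamps lets a post-processor
    // rebuild partial call-trees and compute per-call elapsed time.
    println!("{}", log_line("ENTER", "Cart", "checkout", 42));
    println!("{}", log_line("EXIT", "Cart", "checkout", 42));
}
```

Because every field sits in a fixed position, a post-processor can parse these lines with a trivial split instead of a fragile regex.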
There's a general statement here: if you add data to your log-lines in parseable format, you can use them to do detective work in your running code. There are entire companies (Splunk, probably lots of others) that use logs to help you diagnose problems.
In my experience, loading logs into an SQL DB isn't very useful unless the analysis you want to do requires that sort of query capability. That is to say: first figure out what analysis you want to do, then organize your log-data to support it. But before you even get there, you need to know what kind of logging you should be producing, to drive those analyses.
BTW, I've been a big fan of Google's glog. I see that there's a really, really minimal first attempt at implementing something like it for Rust. Log-lines that you can enable at runtime are really powerful -- they're already compiled in, and you can turn 'em on at program startup, or even afterwards based on interactive commands -- perhaps via a web-server interface. Really, really powerful. Key to this is making the runtime cost of a log-line that is not enabled as low as possible.
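For flavor, here's a tiny glog-inspired sketch of that "cheap when disabled" property (the names `VLOG_LEVEL`, `enabled`, and `vlog!` are invented for this sketch, not the actual Rust crate's API):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Global verbosity level, flippable at runtime (startup flag, admin
// command, web endpoint, ...). Starts at 0: everything suppressed.
static VLOG_LEVEL: AtomicUsize = AtomicUsize::new(0);

fn enabled(lvl: usize) -> bool {
    // A disabled log-line costs one relaxed atomic load and a compare.
    lvl <= VLOG_LEVEL.load(Ordering::Relaxed)
}

// glog-style vlog!: the format arguments are only evaluated when the
// level is enabled, so disabled lines stay nearly free.
macro_rules! vlog {
    ($lvl:expr, $($arg:tt)*) => {
        if enabled($lvl) {
            println!($($arg)*);
        }
    };
}

fn main() {
    vlog!(1, "suppressed: verbosity is still 0");
    VLOG_LEVEL.store(2, Ordering::Relaxed); // e.g. toggled interactively
    vlog!(1, "printed: verbosity is now 2");
}
```

The important bit is that the level check happens *before* any formatting, so a dormant log-line never pays the cost of building its message.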
I'm not trying to tell you not to use an SQL database. Just observing that sometimes it's the right tool, and sometimes it isn't. For instance, in your example, you're assuming
index (k, time)
What I'm trying to say is that you've already presupposed the access paths for your data. Which is the same as saying that you're building a custom data-store for the kind of problem you're trying to solve.
By contrast, what most people with massive logs do is build data warehouses, which are explicitly designed to allow many kinds of queries, albeit less efficiently than a data-store aimed at one class of queries.
In any case, there's one really important thing: from your program, don't write your data into anything other than logfiles. At most, something like Kafka. Then postprocess the logfile to load into whatever datastore you use for analytics. B/c the last thing you want (and I know this from experience) is for your program to hang, b/c your datastore can't keep up. Boy howdy, that's fun.
ha! OK, point taken. In that case, can I just suggest that you please push your logs into logfiles, and then convert to SQL? B/c really, you don't want to live thru a "my app broke b/c my logging solution sucked". Seriously, life's too short for that.
But also: sure, once you have raw log data, post-processing can be of all sorts, and really, the sky's the limit. So an SQL db? Sure. But all sorts of other stuff is also useful and interesting.
This is my fault for not stating all this up front. Here is my logging problem:
I am building a webapp with multiple (50+) webworkers. web_sys::console::log_1 is no longer cutting it. I want to log to sqlite3 in real time so that I can query it in real time ... at the Chrome dev tools console.
Right now, I open up the Chrome dev tools console, there are console logs from 50 web-worker threads, and I have no idea what is going on.
I want all those events stuffed into sqlite3 so that in the chrome dev tools console, I can type things like 'select * from ... where ...' and get just the events I want.
If you have parallel work from 50+ "threads", I suspect you will want to log some sort of session-id, so you can sessionize your logs. And also, I think you'll find that logging directly into sqlite won't work well unless you also configure those indices. But configuring those indices means logging isn't cheap anymore, b/c every insert incurs B-tree maintenance. So perhaps you might want to instead log to a file, and write a program that slurps the file into sqlite, and monitors the file for appends, slurping those appends over time? That way, you aren't blocking your program on DB inserts.
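The "monitor for appends" part is simpler than it sounds: remember the byte offset you've already processed, and on each poll read only the newly appended complete lines. A std-only sketch (the function name is mine, and the sqlite INSERT step is left out -- a real version would batch the returned lines into one transaction per poll):

```rust
use std::fs::File;
use std::io::{BufRead, BufReader, Seek, SeekFrom};

// Poll-based log tailer: seeks to the last processed offset and returns
// only the complete lines appended since the previous call.
fn read_new_lines(path: &str, offset: &mut u64) -> std::io::Result<Vec<String>> {
    let mut file = File::open(path)?;
    file.seek(SeekFrom::Start(*offset))?;
    let mut reader = BufReader::new(file);
    let mut lines = Vec::new();
    let mut buf = String::new();
    loop {
        buf.clear();
        let n = reader.read_line(&mut buf)?;
        if n == 0 || !buf.ends_with('\n') {
            break; // EOF, or a half-written line: pick it up on the next poll
        }
        *offset += n as u64; // only advance past complete lines
        lines.push(buf.trim_end().to_string());
    }
    Ok(lines)
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("applog.demo");
    let path = path.to_str().unwrap().to_string();
    std::fs::write(&path, "line one\nline two\n")?;
    let mut offset = 0u64;
    println!("{:?}", read_new_lines(&path, &mut offset)?); // both lines
    std::fs::write(&path, "line one\nline two\nline three\n")?;
    println!("{:?}", read_new_lines(&path, &mut offset)?); // only the new line
    Ok(())
}
```

Because the tailer never advances past a line without a trailing newline, it won't hand a half-written record to the sqlite loader.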
The more-general version of this is to put a log-saving mechanism like Kafka in-between, but sure, that might be overkill. A logfile is the simple version of that.