What's everyone working on this week (27/2017)?

New week, new Rust! What are you folks up to?

I will try to finish a basic scripting engine for a simple OpenGL app. The basic app is already created.

https://github.com/MickDuprez/Game-Scripting-Mastery-Rust/tree/master/gsm-console2

1 Like

The libz blitz for same-file is underway, so I'll be working on that this week.
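For anyone who hasn't used it, same-file answers the question "do these two paths refer to the same underlying file?", which is handy for things like detecting symlink cycles. A minimal usage sketch (the paths are only examples):

```rust
// Minimal same-file usage: check whether two path spellings resolve to
// the same underlying file. The paths here are just examples.
use same_file::is_same_file;

fn main() -> std::io::Result<()> {
    if is_same_file("Cargo.toml", "./Cargo.toml")? {
        println!("same file");
    }
    Ok(())
}
```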

I spent most of last week iterating on some API refactorings for the Date type in elastic_types. Adding date math exposed the fact that I was using chrono::DateTime both as an intermediary value when converting between date formats and as an Elasticsearch-formattable date. I've tried to separate things a bit by adding a dedicated DateValue type that doesn't carry any additional formatting semantics with it. Hopefully that doesn't just make things more confusing. At least an accidental format change will now be a compile-time error instead of a runtime error during indexing or searching.
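Roughly, the split looks like this (a simplified sketch of the idea with hypothetical names, not the actual elastic_types code):

```rust
// Sketch only: a bare DateValue with no format attached, and a Date<F>
// that carries its Elasticsearch format as a type parameter, so changing
// the format is a type change rather than a silent runtime difference.
use chrono::{DateTime, Utc};
use std::marker::PhantomData;

/// A point in time with no Elasticsearch formatting semantics attached.
pub struct DateValue(DateTime<Utc>);

/// An Elasticsearch date format, e.g. epoch_millis.
pub trait DateFormat {
    fn format(value: &DateValue) -> String;
}

/// A date value tagged at the type level with the format it is
/// indexed and searched with.
pub struct Date<F: DateFormat> {
    value: DateValue,
    _format: PhantomData<F>,
}

impl<F: DateFormat> Date<F> {
    pub fn new(value: DateValue) -> Self {
        Date { value, _format: PhantomData }
    }

    pub fn to_wire(&self) -> String {
        F::format(&self.value)
    }
}

pub struct EpochMillis;

impl DateFormat for EpochMillis {
    fn format(value: &DateValue) -> String {
        value.0.timestamp_millis().to_string()
    }
}
```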

I'll be spending some more time polishing the APIs of the various elastic crates to try to make them easier to use.

2 Likes

Hopefully I'll get the API ready to render the first scene by parsing a .pbrt file instead of hardcoding the scene:

https://github.com/wahn/rs_pbrt/wiki/Release-Notes

I will extend the parser and the API based on new test scenes ... so I can focus on the renderer again and start implementing global illumination (GI). But the first results with GI are probably several weeks or months away ...
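The format itself is friendly to a hand-written parser: a .pbrt scene is mostly whitespace-separated directives with quoted strings and bracketed parameter lists. Just to show the flavour, here is a tiny first-pass tokenizer sketch in plain std Rust (not the code rs_pbrt actually uses, and the scene path is only an example):

```rust
// Rough first-pass tokenizer for a .pbrt-style scene file: strips '#'
// comments, splits on whitespace, and keeps quoted strings together.
use std::fs;

fn tokenize(src: &str) -> Vec<String> {
    let mut tokens = Vec::new();
    for line in src.lines() {
        // Everything after '#' on a line is a comment.
        let mut rest = line.split('#').next().unwrap_or("");
        loop {
            rest = rest.trim_start();
            if rest.is_empty() {
                break;
            }
            if rest.starts_with('"') {
                // Quoted string token, e.g. "perspective" or "float fov".
                match rest[1..].find('"') {
                    Some(end) => {
                        tokens.push(rest[..end + 2].to_string());
                        rest = &rest[end + 2..];
                    }
                    None => break, // unterminated string: give up on this line
                }
            } else {
                let end = rest.find(char::is_whitespace).unwrap_or(rest.len());
                tokens.push(rest[..end].to_string());
                rest = &rest[end..];
            }
        }
    }
    tokens
}

fn main() -> std::io::Result<()> {
    let src = fs::read_to_string("scenes/cornell_box.pbrt")?; // example path
    println!("read {} tokens", tokenize(&src).len());
    Ok(())
}
```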

2 Likes

Found some time to continue my experiments with procfs sampling. From a toy microbenchmark targeting a very simple file (/proc/uptime), it seems that the minimal kernel overhead of reading a /proc file and doing nothing with its contents (which rustc's optimizer is very likely to take unfair advantage of) would be around 650ns.

Adding minimal parsing and data recording on top of that gets me to ~800ns, so about 150ns more. This means that, as I expected, optimizing the parsing is not worthwhile for such a simple file: the overhead is completely dominated by the intrinsic inefficiency of the procfs kernel API, which imposes the full cost of a file read syscall plus that of converting data to text and back. Gotta love the UNIX way of life.
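For reference, a single sample in this microbenchmark boils down to something like this (a simplified sketch, not the actual benchmark code; a real sampler would reuse its buffer and file handle instead of allocating every time):

```rust
// One /proc/uptime sample: read the pseudo-file and parse its two
// fields, uptime and aggregate idle time across all CPUs, in seconds.
use std::fs::File;
use std::io::Read;

fn sample_uptime() -> std::io::Result<(f64, f64)> {
    let mut file = File::open("/proc/uptime")?;
    let mut buf = String::new();
    file.read_to_string(&mut buf)?;

    // Typical contents: "12345.67 45678.90\n".
    let mut fields = buf
        .split_whitespace()
        .map(|field| field.parse::<f64>().unwrap_or(0.0));
    let uptime = fields.next().unwrap_or(0.0);
    let idle = fields.next().unwrap_or(0.0);
    Ok((uptime, idle))
}

fn main() {
    println!("{:?}", sample_uptime());
}
```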

But this still opens up interesting possibilities: if I manage to stick to such a ~µs sampling overhead on more interesting procfs files like /proc/stat, I could achieve an acceptable ~1% sampling overhead when sampling at 1 kHz, which would be a very interesting result. That would enable much more precise system-wide performance studies than typical system monitors allow, without (yet) needing to drop down to perf_events, which on most modern Linux distros requires root access or special kernel parameter tuning for system-wide studies.

My next experiment will thus target /proc/stat, after a small detour through /proc/version so that I'm ready for the parts that depend on the kernel version.
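The number of counters on /proc/stat's aggregate "cpu" line grows with the kernel version (newer kernels append iowait, irq, softirq, steal, guest, guest_nice), which is exactly why the /proc/version detour comes first. A minimal parsing sketch of that line, just to show the shape of the data rather than my final code:

```rust
// Parse the aggregate "cpu" line of /proc/stat into per-state tick
// counters; whatever fields the running kernel provides are kept.
use std::fs;

fn cpu_ticks() -> std::io::Result<Vec<u64>> {
    let stat = fs::read_to_string("/proc/stat")?;
    let ticks = stat
        .lines()
        .find(|line| line.starts_with("cpu "))
        .unwrap_or("")
        .split_whitespace()
        .skip(1) // skip the "cpu" label itself
        .filter_map(|field| field.parse::<u64>().ok())
        .collect();
    Ok(ticks)
}

fn main() {
    println!("{:?}", cpu_ticks());
}
```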

Curious to see what the future of this project will bring!

1 Like

Mostly vacation, but also a little bit of work on uom (type-safe, zero-cost dimensional analysis), where I'm working out how to implement conversions for thermodynamic temperature.
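The tricky part is that temperature conversions need an offset as well as a scale factor (K = °C + 273.15 = (°F + 459.67) × 5/9), unlike the purely multiplicative conversions most other quantities use. A toy illustration of the math involved, not uom's actual types:

```rust
// Thermodynamic temperature conversions are affine rather than purely
// multiplicative: kelvin = value * coefficient + constant.
struct TemperatureUnit {
    coefficient: f64, // scale factor into kelvin
    constant: f64,    // additive offset into kelvin
}

const CELSIUS: TemperatureUnit = TemperatureUnit {
    coefficient: 1.0,
    constant: 273.15,
};

const FAHRENHEIT: TemperatureUnit = TemperatureUnit {
    coefficient: 5.0 / 9.0,
    constant: 459.67 * 5.0 / 9.0,
};

fn to_kelvin(value: f64, unit: &TemperatureUnit) -> f64 {
    value * unit.coefficient + unit.constant
}

fn main() {
    assert_eq!(to_kelvin(0.0, &CELSIUS), 273.15);
    // 32 °F and 0 °C are the same temperature, 273.15 K.
    assert!((to_kelvin(32.0, &FAHRENHEIT) - 273.15).abs() < 1e-9);
}
```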

Still working on my compact set data types. I'm getting some pretty good results, and am starting to think about maps, beginning with benchmarking existing types to see if they are suitable.

I published my first crate on crates.io (https://crates.io/crates/priority-queue) and I'm now trying to improve it and bring it up to a good level of quality.
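For anyone curious, it's a priority queue whose items are keyed by hash and equality and can have their priority changed after insertion, unlike std's BinaryHeap. Roughly, usage looks like this:

```rust
use priority_queue::PriorityQueue;

fn main() {
    let mut pq = PriorityQueue::new();
    pq.push("write docs", 2);
    pq.push("fix bug", 5);
    pq.push("add benchmarks", 3);

    // An existing item's priority can be updated in place.
    pq.change_priority("write docs", 10);

    // Items come out highest-priority first.
    while let Some((task, priority)) = pq.pop() {
        println!("{} (priority {})", task, priority);
    }
}
```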

1 Like

I'm working on a new configuration language, because I guess there aren't enough of them already :smiley:

1 Like

Laying the groundwork for porting Maud to the new procedural macro API :tada: