New week, new Rust! What are you folks up to?
I've done some refactoring in elastic
this week. I've started working on the sniffing connection pool again.
I've also opened up an evaluation for the semver
crate and will start poring over the API shortly. If you've got any feedback on semver
it'd be great to hear from you there!
Contributed to a new forum written in Rust, mainly using spongedown as the content format. Forum demo
Still working on uom
(type-safe zero-cost dimensional analysis). I've been tearing everything apart to change how the library-defined marker traits are used, and I just started putting the pieces back together yesterday. If I can get everything working again, I'll have better code with fewer types that is also a lot more powerful.
I have started a new icecc-rs crate (and the corresponding libicecc-sys) which wraps libicecc
because I felt like making an attempt at writing a console-based IceCC monitor in Rust. This involved writing a plain C API over libicecc
's C++ one, and I more or less have something that should work, but right now it doesn't. So the next thing is to do some debugging on that. I might do some bugfixes to the Popsicle IceCC environment tarball creator as well.
Last week I published a blog post about the Conference Room: First Results for the Rust version of PBRT. I'm quite happy with the current state of the renderer, and the parser (using the beta version of pest 1.0) is fast and reliable now. A real bottleneck for more complex scenes seems to be building the acceleration structure, a Bounding Volume Hierarchy (BVH). This happens after the parser finishes and before rendering (and therefore multi-threading) starts. This week I already fixed the bug mentioned in the blog post. I'm not sure if I can attack the BVH problem this week because I will be on holiday until October, but I might release another version and update the Wiki and Readme file before leaving.
If anybody is interested in figuring out where the bottleneck comes from, let me know. Basically you would compile the crate and all example files via 'cargo test --release' and try to render one of the more complex scenes (after 'gunzip'-ing it):
> ./target/release/examples/pest_test -i assets/scenes/conference_room.pbrt
...
WorldEnd
DEBUG: rr_threshold = 1
...
Once you see the lines above, the BVH gets built (single-threaded), and once the multi-threading starts (watch the CPUs, e.g. with htop), the BVH is done and rendering starts. This takes several minutes, whereas the C++ version is pretty fast. I assume it's because it's doing this recursively and the C++ version manages memory itself via a class 'MemoryArena
':
> rg -tcpp "BVHBuildNode \*recursiveBuild" -A 3
accelerators/bvh.h
71: BVHBuildNode *recursiveBuild(
72- MemoryArena &arena, std::vector<BVHPrimitiveInfo> &primitiveInfo,
73- int start, int end, int *totalNodes,
74- std::vector<std::shared_ptr<Primitive>> &orderedPrims);
I will probably create an issue ticket for this problem.
Just got a PR accepted in the bytesize crate, so its next release will use u64 as a byte counter, rather than the ABI-dependent usize. There are still 32-bit CPUs in the wild, as well as some crazy people who will use 32-bit pointers on 64-bit architectures, and for those the former max representable value of (2^32 - 1) bytes ≈ 4 GiB was not enough to represent the full breadth of modern storage.
Otherwise, I've been busy for a couple of weeks dealing with the task overload of the new work year, which slowed down my progress on procfs sampling. The separation of parsing and storage is still slowly progressing when I find time, I'm starting to be happy with the interface between them, and early performance results are highly encouraging (small or even negative overhead). However, rewriting all the tests that are broken by the interface changes sure is a big pain.
Getting the bugs out of imag plus some more little features and then the 0.4.0 release will be ready. No due date though.
(And secretly I'm working on packaging imag for my local machine, so I can start using it in private, even though I tell people it isn't that usable yet, because I'm not yet confident enough to tell people they can use it.)
After 3 months of working on supporting libraries for this project, I have a working program called abrute. abrute is AES file brute-force decryption software that utilizes all system cores to run faster.
The projects I built to make this work are digits and base_custom. digits is a character sequencer which treats all characters as numbers and implements some basic mathematics via linked-list logic. base_custom is a pure mapper between base 10 and any character set you define (useful for base64, base62, or whatever).
The code for abrute is very rough looking for now, whereas the two other projects look quite nice. The project won't stop here; more to come.
I published my first nom-based parser. It parses ICE candidates, a subset of SDP used to exchange network information in systems that need NAT traversal like WebRTC.
I also added FFI bindings, so the crate can be used from C and compatible languages:
https://github.com/dbrgn/candidateparser
Next step would be to explore how the library can be used from Android and iOS.
I'm working on a little tool for synchronizing version numbers in Rust projects. I noticed that I forget to update the version numbers in my README files when I make a release... You know, the version number where you show how others should use your crate:
[dependencies]
foobar = "1.2.3"
So I wrote a small crate that makes it easy to write an integration test that will check this: version-sync. You use it by making an integration test with
#[macro_use]
extern crate version_sync;
#[test]
fn test_readme_deps() {
assert_markdown_deps_updated!("README.md");
}
I have just released version 0.3.0 which lets you exclude certain TOML code blocks from your README file.
Please check it out and let me know what you think