What's everyone working on this week (18/2020)?

New week, new Rust! What are you folks up to?

I'm working on refactoring my somewhat messy code to make it possible to build a game out of this:

[image: low-poly hexagonal terrain render]
Unfortunately it's becoming a bit of a chore to access things across items in my code. I would have loved to use Amethyst, but unfortunately there's minimal to no Android support.

8 Likes

I released a new version 0.2.0 of yew_prism and a new version 0.3.0 of yew_styles, adapted yew-parcel-template to the new version 0.15.0 of yew.rs, and I will start working on a form component for yew_styles.

I'm working on the flood-fill algorithm for flo_curves. I've had it able to do concave areas for some time, but I want it to be able to fit a path to an area of any shape. I've got a lot of planned features for FlowBetween that need this to work: the paint bucket tool is the most obvious, but this algorithm can also automatically create clipping paths or turn sketchy outlines made of many paths into a single smooth path.

My test file shows the progress on the algorithm so far, which has gained the ability to raycast its way around corners:

[image: the algorithm raycasting its way around corners]

plus an amusing failure (which looks fine until you inspect the points that were generated):

[image: the failure case, an overly complicated path]

(Here I used a line-line intersection algorithm instead of a line-ray intersection: the algorithm went around and around, because the rays were effectively unit-length segments, but it eventually converged on an amazingly complicated path.)
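For the curious, the difference looks something like this (a minimal sketch with hypothetical names, not flo_curves' actual API):

    // Intersect a ray (origin + t*dir, t >= 0) with a finite edge (a, b).
    // Additionally clamping t to [0, 1] turns this into the buggy
    // segment-segment test described above.
    #[derive(Clone, Copy)]
    struct Point { x: f64, y: f64 }

    fn ray_hits_edge(origin: Point, dir: Point, a: Point, b: Point) -> Option<Point> {
        let edge = Point { x: b.x - a.x, y: b.y - a.y };
        let denom = dir.x * edge.y - dir.y * edge.x;
        if denom.abs() < 1e-12 {
            return None; // ray is parallel to the edge
        }
        let diff = Point { x: a.x - origin.x, y: a.y - origin.y };
        let t = (diff.x * edge.y - diff.y * edge.x) / denom; // distance along the ray
        let u = (diff.x * dir.y - diff.y * dir.x) / denom;   // position along the edge
        if t >= 0.0 && (0.0..=1.0).contains(&u) {
            Some(Point { x: origin.x + t * dir.x, y: origin.y + t * dir.y })
        } else {
            None
        }
    }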

I've still got some problems to solve this week: the algorithm has a slight tendency to pick a new point that's outside the area it's supposed to be filling, and there's a related issue where points close to a corner are spaced too far apart.

1 Like

Working on presentation material about adding WASM support to Amethyst, which will be presented (virtually) next Monday at the local Rust meetup.

I'm working on a distributed onion routing network for securely mixing (anonymizing) cryptocurrency transactions. (like in Tor, but the nodes exchange transactions instead of requests) (repo / explanations)

At the same time, I'm developing a forensics tool which tries to break these onions and generates statistics and charts (with the plotters crate) to evaluate how secure such a network is. (repo)

This is pretty minor, but I've been wanting some numerical operations to be available for use in constants. That can be a bit tricky until control flow is stabilized within const fn, so it required brushing off my bit-twiddling skills, which are very rusty (but not very Rusty :wink:). Obviously these aren't particularly efficient, and I dare say someone else could do a better job.

Example playground link.
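To give a flavour of the approach, here's a branch-free absolute value along the same lines (a sketch of the general technique, not the actual playground code):

    // Branch-free abs for i32, usable in const contexts without `if`:
    // an arithmetic right shift copies the sign bit into every bit.
    const fn const_abs(x: i32) -> i32 {
        let mask = x >> 31;            // 0 when x >= 0, -1 (all ones) when x < 0
        (x ^ mask).wrapping_sub(mask)  // two's-complement negation only when negative
    }

    const ANSWER: i32 = const_abs(-42); // == 42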

Now that I've extracted a lot of logic from mdbook-linkcheck I'm putting together a bit of a write-up on how my new linkcheck crate works.

Looks beautiful! :heart_decoration:

1 Like

@OptimisticPeach I'd be keen to hear more about how you developed that image and plan to make it interactive.

So things like how you did 3D graphics on Android, how that affects the code flow, and the algorithms used to generate the different "biomes" (ocean, beach, mountain, etc.). I'm guessing you used Perlin noise to generate heights for every point on a uniform grid, then built a mesh by triangulating adjacent grid points and assigned a biome based on the average height (ocean: height < 10, beach: height < 15, mountain: height > 100, etc.)?

1 Like

Sure!

  • 3D graphics on Android are done in a weird way in this case; I'm using a locally modified copy of piston/piston2d-opengl_graphics that allows me to hook into the OpenGL context. I originally did this for a few reasons:

    • I was planning on making a 2D game.
    • I later planned on moving to 3D, but I was still too uncomfortable with OpenGL to actually factor it out. I will probably factor it out since it's basically glorified gl bindings now.

    There are definitely a few things to keep in mind with phones/graphics, and how android changes your experience:

    • You won't get backtraces for the most part; the best you'll get are panic locations (which are enough for me to go off of, since I'm used to println debugging by now). Android stack traces are broken for the most part, and I've kind of given up on trying to get them to work. My application runs smoother in release anyway, and I've gotten used to it, but you may be inclined to look into it.
    • Weird driver bugs can happen, even on different phones with the same model or emulators with the same OS.
    • You don't actually have OpenGL! You have OpenGL ES, which is missing a few things. For the most part, attaching opengles to the end of a Google search will lead you to the relevant ES pages, but for the things which aren't very well documented, finding information will be a bit trickier.
    • A few tips when writing shaders for ES (put together in the sketch after these points):
      • OpenGL ES 3.2 uses #version 320 es at the top of the file.
      • OpenGL ES fragment shaders require a precision declaration for floats (precision lowp/mediump/highp float), since hardware might not support higher precision or it might just be too slow.
      • Shaders written in OpenGL ES use in/out modifiers instead of the varyings seen in some articles; those appear to be outdated.
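      Putting those three tips together, a minimal ES fragment shader might look like this (a sketch, not one of my actual shaders):

        // Embedded as a Rust string constant, ready to hand to the GL bindings.
        const FRAGMENT_SHADER: &str = r#"
            #version 320 es
            precision mediump float;   // ES fragment shaders require a float precision

            in vec4 v_colour;          // `in`/`out` rather than the outdated `varying`
            out vec4 frag_colour;

            void main() {
                frag_colour = v_colour;
            }
        "#;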
  • I used winit to initialize the window (which I then use to initialize the gl context), and I run it on the piston event loop. I put all of this code into a package since it also deals with the ugly parts of android. A main one is handling special android events (from android_glue, which I'll talk more about below) and passing both those and regular events on to my actual implementation. Another ugly thing this code deals with is window focus: swapping buffers while the window isn't yours breaks egl, so it just loops waiting for a GainedFocus event, and if the receiver returns an Err (meaning the user has ended the application), it kills the app, as sketched below.
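    Roughly, that focus loop looks like this (hypothetical names, not the actual project code):

    use std::sync::mpsc::Receiver;

    enum AndroidEvent { GainedFocus, LostFocus, Other }

    // Block until the window regains focus; bail out if the channel has
    // closed, which here means the user has ended the application.
    fn wait_for_focus(events: &Receiver<AndroidEvent>) -> Result<(), ()> {
        loop {
            match events.recv() {
                Ok(AndroidEvent::GainedFocus) => return Ok(()), // safe to swap buffers again
                Ok(_) => continue,                              // ignore other events while unfocused
                Err(_) => return Err(()),                       // sender gone: kill the app
            }
        }
    }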

  • android_glue has changed substantially in the 5-6 months I've been working on this project. I am using the version from back then since it still exposes events and I want to focus on developing and not updating my dependencies right now. My local copy of it is this commit.

  • For the terrain, I generate a hexagonal grid. Here's a desmos graph which explains how I do that. I then generate simplex noise values (from the noise crate) for each of the points. Since these points are generated on a unit hexagonal grid, I need to downscale by a factor of four to make the sampling smooth. Then I decide on the colour based on this code:

    // Sample the noise field; the sample points are divided down so that
    // neighbouring grid points produce smoothly varying values.
    let noise = noise.get([point[0] as f64 / 6.0, point[1] as f64 / 6.0]);
    let y = noise as f32 * 10.0; // the noise value becomes the vertex height

    // Scale the RGB channels of a colour, leaving alpha untouched.
    fn mul_arr(mut arr: [u8; 4], by: f32) -> [u8; 4] {
        arr[0] = (arr[0] as f32 * by).min(255.0) as u8;
        arr[1] = (arr[1] as f32 * by).min(255.0) as u8;
        arr[2] = (arr[2] as f32 * by).min(255.0) as u8;
        arr
    }

    const DARK_SAND: [u8; 4] = [230, 168, 62, 255];
    const SAND: [u8; 4] = [245, 216, 86, 255];
    const GRASS: [u8; 4] = [150, 199, 46, 255];
    const SNOW: [u8; 4] = [204, 221, 255, 255];

    // Pick a biome colour by height band, then jitter its brightness
    // by a random factor in [0.9, 1.1).
    let random = rng.gen::<f32>() * 0.2 + 0.9;
    let colour = if noise <= -0.03 {
        mul_arr(DARK_SAND, random)
    } else if noise <= 0.1 {
        mul_arr(SAND, random)
    } else if noise <= 0.3 {
        mul_arr(GRASS, random)
    } else {
        mul_arr(SNOW, random)
    };
    

    This adds some randomness to the colours to make them distinguishable. I then calculate the normals which, paired with the positions and colours, I ship off to a shader that does a basic lighting calculation based on the dot product of the face normal and the normalized direction from the vertex towards the light. Since I didn't understand most explanations online, I developed it from scratch using desmos, and it seems to correlate with the online material now, which I can finally understand.
    A very important thing to note about the low-poly effect I've done here is that you must duplicate faces. If you don't, you'll end up interpolating, so the normals must be calculated per face and applied to copies of all three vertices (see the sketch below). Your other option, if in some fantasy world you've got a very, very fast GPU with a tiny amount of GPU RAM, is to use fragment math to recover the original values from the barycentric interpolation the GPU does automatically. Here's a desmos graph which shows roughly how barycentrics work. I used that graph to figure out how to work backwards from what the GPU does, but unfortunately the results were slow and imprecise, leading to weird lines.
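    In code, the duplication looks roughly like this (a sketch of the general technique; these helpers are hypothetical, not my actual mesh code):

    fn sub(a: [f32; 3], b: [f32; 3]) -> [f32; 3] {
        [a[0] - b[0], a[1] - b[1], a[2] - b[2]]
    }

    fn cross(a: [f32; 3], b: [f32; 3]) -> [f32; 3] {
        [a[1] * b[2] - a[2] * b[1],
         a[2] * b[0] - a[0] * b[2],
         a[0] * b[1] - a[1] * b[0]]
    }

    fn normalize(v: [f32; 3]) -> [f32; 3] {
        let len = (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]).sqrt();
        [v[0] / len, v[1] / len, v[2] / len]
    }

    // Expand an indexed mesh into flat-shaded triangles: each face gets one
    // normal, applied to fresh copies of its three vertices, so the GPU has
    // nothing to interpolate between.
    fn flat_shade(positions: &[[f32; 3]], faces: &[[usize; 3]]) -> Vec<([f32; 3], [f32; 3])> {
        let mut vertices = Vec::with_capacity(faces.len() * 3);
        for &[i0, i1, i2] in faces {
            let (p0, p1, p2) = (positions[i0], positions[i1], positions[i2]);
            let normal = normalize(cross(sub(p1, p0), sub(p2, p0)));
            vertices.push((p0, normal));
            vertices.push((p1, normal));
            vertices.push((p2, normal));
        }
        vertices
    }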

  • For the water, I did the same thing as for the terrain (except the terrain's hexagon generation subdivides the hexagon's triangles, while this one doesn't), but instead of precalculating the normals and colours, I do that in the shader. I use simplex noise paired with some clever tricks to make it move smoothly over time: the sample points are rotated in space and sampled over time to get a consistent, uniform change, or "velocity", for the water (see the sketch below). I'd be lying if I said I didn't follow a tutorial to grasp the concepts. However, the tutorial does square low-poly water, while this is hexagonal, so there were some challenges in getting it as optimized as I'd like; mainly, I had to apply scale factors to make the hexagon's unit points land on whole numbers, so that I could round and cast to i16s and i8s. It's the least impressive part of my code code-wise, but it's the most impressive visually, in my opinion. It also does reflections, which I implemented using the method described in the tutorial. However, since user-defined clipping planes aren't available on OpenGL ES, I used this paper to produce code which clips the vertices below the "water". It was a challenge to understand where I was going wrong with the implementation of the oblique near plane, given that I lack formal training in linear algebra and was just going off of what I've learned from writing shaders and reading online.
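    The time-axis trick, in sketch form (the idea only; `noise3` stands in for any 3D noise function, such as simplex noise from the noise crate, and the scale factors are made up):

    // Treat time as a third noise axis: sampling a 3D field at (x, z, t)
    // gives every vertex a smooth, consistent motion instead of unrelated
    // per-frame values.
    fn water_height(noise3: impl Fn(f64, f64, f64) -> f64, x: f64, z: f64, time: f64) -> f32 {
        (noise3(x / 6.0, z / 6.0, time * 0.25) * 0.5) as f32
    }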

It already is; I've got grid selection implemented by tracing a ray from the camera through the scene and finding its intersection with the plane. The key realization is that you need to use the hexagonal mesh's position in the original square unit mesh's coordinate system as the key to a map, with the value being the point in the terrain; being able to transform to and from either coordinate system is important.
Here's a video showing how the selection works.
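The ray-plane part, roughly (a sketch with hypothetical names, assuming picking happens against a flat reference plane at y = 0):

    // Intersect a camera ray with the plane y = 0. The hit point can then be
    // transformed into the square grid's coordinate system and used as the
    // map key described above.
    fn ray_plane_y0(origin: [f32; 3], dir: [f32; 3]) -> Option<[f32; 3]> {
        if dir[1].abs() < 1e-6 {
            return None; // ray is parallel to the plane
        }
        let t = -origin[1] / dir[1]; // solve origin.y + t * dir.y = 0
        if t < 0.0 {
            return None; // the plane is behind the camera
        }
        Some([origin[0] + t * dir[0], 0.0, origin[2] + t * dir[2]])
    }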

Anyway, I didn't plan on making this reply that long. I apologize for the length of the post.

I hope this helps!

2 Likes
