On Rust goals in 2018 and beyond

I would like to repost 0b_0101_001_1010's comment from a Reddit thread, as unfortunately the author does not have an account here. I fully agree with this well-articulated opinion and think it's worth discussing more widely:

During the last year I have gotten the impression that Rust is only targeting a niche of a niche, that is, the tiny number of developers writing high-performance HTTP servers.

Most developers doing web/HTTP stuff are using JavaScript, Go, Ruby, Python, Java, ... and they do not care enough about performance to use Rust. How do I know? Because those that do are already using C and C++, and that's a tiny fraction of the group above.

For this web/HTTP-dev audience that doesn't care that much about performance, Rust is not better than the languages they are already using, so why would they use it? Even though they might like Rust, they do not need it.

Then why aren't we attracting C and C++ developers? Well, we are. But we are only attracting a tiny fraction of them, because to them Rust is that nice language that only has libraries for doing web-related stuff (servo/http), and most C and C++ developers don't do anything web-related. Those that do use a different language for that, or use C and C++ because they need to do something more than just web (like numerics, graphics, data science, GUIs...).

In my opinion, the goal for 2017: "Rust should be a good language for writing high-performance async servers" was a mistake. That's a tiny audience, and a very high-level goal for a very low-level language.

The goal for 2018 should be "C developers should be able to use Rust for all the tasks that they use C for", and the goals for 2019-202N should be the same for C++.

That is, in 2018, Rust should run everywhere that C can run. We need at least a GCC backend, and way better would be to have a C89-backend. We need alloca, inline assembly, SIMD, CUDA, etc. Cargo should be able to easily compile C code, so that people can port C libraries from Makefiles and autoconf to Cargo, and Cargo should be able to easily spit out Makefiles.

From 2019-202N on, the primary goal should be to allow C++ developers to use Rust. Right now, one cannot integrate Rust into a C++ code-base at all. Using the C FFI works if you are writing C++ and Rust code like it's 1989. If you are writing modern, generics-heavy C++ code, using a Rust library of any quality feels like using an antiquated C library. If there is no reason for your library to be generic, then that's great, but the best Rust libraries are generics-heavy (e.g. think serde). The image that C++ devs have of Rust is important, and that image depends on how Rust looks from the C++ side. Also, Rust does not have "abstraction parity" with C++: no variadics, function traits, ATCs, type-level consts, virtual enums, ... Procedural macros look like the C preprocessor when compared against the C++2a compile-time typed AST manipulation in clang.

Also, people who use C++ even for web/HTTP applications typically use C++ for something else. Whatever that something else is (data science, scientific computing, geometry, games, finance, CAD, banking systems, control software for robots, machine learning...), number crunching is probably a big part of it, and honestly, C++ number-crunching libraries (Eigen3, GLM, CGAL, VTK, PCL, ITK, VDB, ...) have better APIs than the Rust alternatives, even though we sell Rust as a "high-level"-looking language. We should be ashamed of how horrible ndarray code looks when compared with Eigen3 code (and IMO, whoever mentions typenum is part of the problem). Why should any C++ dev switch to Rust when one cannot write even the lowest level of the stack in Rust without constantly feeling that C++ would allow writing much nicer code? And then there are GUIs. C++ interfaces perfectly with native GUIs on all platforms, and on top of that it has Qt.

These C++ libraries are not perfect, but the Rust alternatives are worse, and for any C++ dev to seriously consider using or integrating Rust, either interoperability needs to get way better, or Rust needs to offer way better libraries than C++ (because if they are just equally good, why bother?). You might argue that if these libraries are important to me then I should go and write them, which would be a valid point if it were even possible to write them in Rust (most of them use type-level consts to make their APIs nicer).

This was a long rant, but IMO we should focus on attracting C developers (this is low-hanging fruit) and C++ developers, because these developers are actually interested in Rust, and they do need it. But for them to be able to use it, it is not enough for Rust to be equally good; Rust must be much better than what they are actually using. For some reason, we are actually telling them that they can switch to Rust, but IMO if you are doing any kind of number crunching in C++ there probably aren't many reasons to switch to Rust. Fearless concurrency is useless if you cannot do any work because the library you need to do the work doesn't exist or is a pain to use.

This post is not by any means trying to undermine the efforts of the Rust team or the 2017 roadmap; rather, it tries to open a discussion about the directions that could be most beneficial for Rust and to gather ideas and feedback for the 2018 roadmap.

UPD: Reddit thread.

24 Likes

I've skimmed through it, but let me read it again in more detail. Great post!

I keep saying, Rust is missing opportunities by focusing on one specific use case. 95% of this 'language engine' would be useful to me if a few percent of changes were made to unlock it, making it better than C or C++ for 100% of the use cases. Rust does have an opportunity because there are a few ways in which its syntax objectively has more potential; it's cleaner, having started with lambdas, type inference, tuples, and slices.

But with the current philosophy I have to hold back, continue with C++, and consider waiting for JAI (which targets my use case 100%).

Rust could easily take the space JAI is targeted at (and it has a head start with tooling), without compromising what it does... just by adding options that the "high-performance HTTP server people" don't enable.

The above talks about 'number crunching' a lot, which does turn up in games, but games tend (IMO) to mix that with more complex logic too (instead of just crunching big arrays). I was originally drawn to Rust by the 'do notation' (which got removed) and immutable-by-default (which should eliminate the need for restrict). The idea was that it would make parallel iterators look as natural as 'for loops' (and similarly the sigils, which made the 'modern C++' ideas look a lot nicer, also got removed).

Great talk on the rationale behind JAI.

My own posts on various issues are here:
https://www.reddit.com/user/dobkeratops/?count=25&after=t1_dl7lm7t

https://www.reddit.com/r/rust/comments/6qv2s5/rust_not_so_great_for_codec_implementing/dl0owty/

https://www.reddit.com/r/rust/comments/6p3i0m/what_kinds_of_projects_are_being_written_in_rust/dknrcfd/

In my opinion, the goal for 2017: “Rust should be a good language for writing high-performance async servers” was a mistake. That’s a tiny audience, and a very high-level goal for a very low-level language.

It should be possible to handle more use cases without compromising that.

other features
    A
    |          WHAT I NEED
    |             /
    |           X   <- would be Rust with sigils, global inference, unsafe opt, etc.
    |     C++                 
    |            
    |            +Rust ~2013 with sigils,do
    |            +Rust today
    |
    +- - - - - - - - - - - - ->
                  some features

Although that diagram makes it look like 'C++ is closer', it's vastly more work to modify C++ toward the ideal than to modify Rust.

I don't think that focusing on servers was a bad idea. These days large systems are built from services that mostly talk to each other via HTTP. If you want to integrate Rust into a larger multi-language stack, you'll either launch it from a shell (which is a bit slow and limiting) or you'll have to talk to it over the network.

And the effort helped evolve async primitives, which may be useful not just for network I/O, but also for async computation. If I could have fearless concurrency with a sprinkling of await here and there, that would be wonderful!

I'm only disappointed that Rust doesn't have a decent story for OOM handling. There seems to be a popular opinion that it's an unachievable goal, a lost cause, which looks pretty bad from my C-programmer perspective.

The rest seems on point. I can't wait for SIMD to land. I would love alloca, or even just C99 VLA.

15 Likes

I think it's important to question and make doubly and triply sure we're going in the right direction. There could potentially be users, like you're saying, that we aren't currently reaching with the current approach.

There's a little backstory about why high-performance async servers were a focus. The 2016 Rust survey pointed at Servers and Web Dev as one of the top fields among users, as well as strong interest in things like async I/O.

There was also a 2016 commercial users survey, with a strong leaning toward async I/O and networking.

There very well could be a "dark matter" set of users that we didn't reach with either survey, and if so, it'd be great to figure out how to reach them.

31 Likes

I have personally put a lot of work into some of the features that this particular user cares about - generic associated types & const generics, for example. But I think there is a very serious misconception that underlies this post, and I want to respond to that, rather than prognosticate about individual features.

This post makes this claim:

In my opinion, the goal for 2017: “Rust should be a good language for writing high-performance async servers” was a mistake.

This is not an accurate characterization of the 2017 roadmap, which contains eight vision statements and two areas of exploration. It is a reasonable restatement of just one of the roadmap's vision statements. In contrast, both of the areas of exploration (which were originally a more specific ninth vision statement before the roadmap RFC was amended) are relevant to this user's goals:

  • Usage in resource-constrained environments
  • Integration with other languages

Perhaps these have not made enough progress in the execution of the 2017 roadmap, but they're clearly on the roadmap, so an absence of intent could not be the problem here.

The reason we have a roadmap and not a single "goal" for the year, as this post would have us do (indeed, for indefinitely many years), is that we have a community with many diverging interests. I've given the (oversimplified) explanation to people that Rust has users coming into it from three directions - users from C/C++, who have high standards for control; users from Haskell/Scala, who have high standards for abstraction; users from Python/Ruby/JavaScript, who have high standards for ergonomics.

What's great about this is that we have high standards for everything. But one of the biggest challenges I experience, as a member of two of the subteams, is balancing the needs of various users against each other to try to reach the best decision; I know that when planning how the entire project should prioritize its goals, the core team has an even greater challenge.

The claim in this post (with pretty specious reasoning) that people writing network services do not have a use for Rust is not borne out by the data we have collected or by the composition of our active contributor community. It's very easy, as an individual contributor or user, to imagine that what Rust really needs to do is better meet the needs of users just like me, but I do not think that is a productive, global perspective on the challenges facing the language.

51 Likes

I have to disagree that the only reason for a web developer to consider Rust is performance. Seeing how many problems are null pointer dereferences or otherwise type-related issues that should be detected at compile time, I would argue that safety is probably the biggest point. And it's not only lazy application developers' code suffering from such problems, but also infrastructure-critical systems written by highly skilled engineers, like Apache ZooKeeper. We recently ran into an issue leading to outages caused by an NPE in ZooKeeper's node cleanup code.
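To make that concrete: in Rust the "node may already be gone" case is a compile-time obligation rather than a runtime NPE. A minimal, entirely made-up sketch (the function and paths are hypothetical):

```rust
// Hypothetical example: Option forces the caller to handle the missing-node
// case before the code compiles, instead of blowing up with an NPE at runtime.
fn cleanup_node(node_path: Option<&str>) {
    match node_path {
        Some(path) => println!("cleaning up {}", path),
        None => println!("node already removed, nothing to do"),
    }
}

fn main() {
    cleanup_node(Some("/app/locks/worker-1"));
    cleanup_node(None);
}
```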

I have no numbers to back up my feeling, but I got the impression that C++ developers doing some sort of number crunching are an extremely niche case. Or maybe those people just don't go to meetups/conferences, rarely post online, don't talk about their job in real life, and are never hired using public job ads.

3 Likes

Everyone talks about the fact that C and C++ are in need of retirement and replacement.
Rust aspires to be a suitable replacement; as such it must consider the C and C++ niches, not just 'what 95% of users do'. Otherwise C and C++ live on for that last 5%.

Seeing how many problems are null pointer dereferences or otherwise type-related issues that should be detected at compile time

I think there are way more languages that can eliminate nulls than there are languages that can run without a GC and do low-level, high-performance software. As someone whose domain is constrained to C++ for 'real work', I've always been jealous of just how much choice you have elsewhere.

1 Like

That probably depends on where you are looking. C++ programmers generally don't go to webdev or appdev conferences.

2 Likes

I agree with parts of that post, and most of the feature requests are things we want, but as woboats said, there were items on the roadmap that address these issues. Web servers were never the only or the primary goal of the roadmap. They were the only application domain with a vision statement, which I believe is because that's the only domain we knew already had strong user interest and that we thought we could feasibly become competitive in within a year (imo that seems overly ambitious now, but it does seem like we could have async/await in 2018 and then become a very attractive choice).

Feature parity with C is not something we could feasibly do in a year. I do believe we should add all the useful features of C into Rust in some form or another, but I also think a huge part of Rust's appeal is that it's not simply copy-pasting features from other languages, but trying very hard to get the useful functionality with minimal compromises to all of its other design goals. Properly discussing every C feature that Rust is currently missing, deciding whether we want it at all, and whether we can learn anything from C and do it even better, is simply going to take a very long time.

But a lot of those conversations did happen this year! And a lot of features not necessarily on the roadmap but relevant to the list in that rant also made design or implementation progress. Proposals to stabilise SIMD intrinsics were thoroughly discussed and seem near consensus. Const generics has had a viable RFC posted, which would unblock a lot of those "number crunching" applications. CTFE (compile time function evaluation) is getting close as Miri is on the verge of being merged into the compiler. More platforms are supported now than they were at the beginning of the year. Parts of macros 2.0 are already in the nightly compiler.
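To give a flavour of why that matters for the "number crunching" side, here's a sketch of the kind of API the const generics RFC would enable (the syntax follows the proposal; nothing here compiles today, and the types are made up for illustration):

```rust
// Sketch only: fixed-size matrices expressed with const generics instead of
// typenum, roughly as the const generics RFC proposes.
struct Matrix<const R: usize, const C: usize> {
    data: [[f32; C]; R],
}

impl<const R: usize, const C: usize> Matrix<R, C> {
    fn zeros() -> Self {
        Matrix { data: [[0.0; C]; R] }
    }
}

fn main() {
    // The dimensions are part of the type and checked at compile time.
    let m: Matrix<3, 4> = Matrix::zeros();
    let _corner = m.data[2][3];
}
```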

I assume that user has the perception they do because almost nothing from that last paragraph has made it into a stable compiler yet. I strongly believe we have gotten closer to shipping each of those things, and some might ship by the end of the year, but they haven't yet. There are a lot of reasons for this, but imo they boil down to 1) designing programming languages is really hard, especially one with as many diverse and superficially contradictory goals as Rust. 2) Rust is still a very young language. It's actually mind-blowing how useful it is already, considering how many decades most of the competing languages have had to develop since their 1.0s. 3) I think there's currently a bit of a log jam design-wise where many, many things are blocked on "core megafeatures" like specialisation, const generics, CTFE, macros 2.0, etc, though we are getting a lot closer to shipping useful subsets of these megafeatures than we were at the start of 2017.

The part of the post I agree with is that one of our future roadmap goals should be to make embedded development feasible on stable. Not because I think Rust's primary audience should be C veterans (that should be one of many audiences), but because embedded is a domain where we could potentially be far better than the competition, yet some of the critical feature gaps (like inline asm) seem stuck in unstable limbo and probably require a concentrated lang team push to get them moving forward again.

I also think "breaking the log jam" and shipping some of the megafeatures should be a 2018 goal, unless a bunch of them do ship in December.

16 Likes

I have no numbers to back up my feeling, but I got the impression that C++ developers doing some sort of number crunching are an extremely niche case. Or maybe those people just don't go to meetups/conferences, rarely post online, don't talk about their job in real life, and are never hired using public job ads.

I don't have statistics either, but I know a good number of researchers and PhD students in the fundamental and applied sciences who don't call themselves developers, yet spend most of their time on their number-crunching simulations written in C++. You don't see them at programming language conferences because they are at the conferences relevant to their fields of research, and C++ is just the fast and handy programming language that they heard of during their studies and that everyone uses in the lab. They do talk about their jobs in real life, but of the high-level physics rather than of the technicalities of their simulations.

8 Likes

They do talk about their jobs in real life, but of the high-level physics rather than of the technicalities of their simulations.

I really wish these technicalities were more actively discussed and there were more opportunities for people outside of their domains to contribute to those efforts.

3 Likes

So, a couple of thoughts.

First and foremost, I do see why it seems like Rust has been focusing a lot on web stuff. I myself have kinda gotten tired of all the talk about Tokio, and felt that it had way too much focus at the beginning of this year.

But really, taking a step back, it's actually not that much of a focus. A subset of the community (maintainers of the web stack + Alex and others) are working on this, and they've been extremely vocal about the progress. That's about it?

If you look at the roadmap, it's just one of the main points, and there has been significant effort put into each of those points. Bear in mind that these goals are all derived from the survey results.

The main difference is that Tokio gets talked about a lot because it's a user facing set of APIs that needs constant feedback, whereas the other improvements tend to fly under the radar.


Here's the way I perceive the focus on web. I say this as a part-systems programmer who has been doing Rust since ~2013.

Rust ca 2013/2014 was a systems language. It still is. But back then it had a huge focus on only systems stuff. Nobody really took it seriously for places where systems was not already king.

After 2015/1.0 there was a shift. What happened was that a lot of non-systems programmers discovered Rust and realized that it was their gateway into systems programming. This led to a huge influx of folks from all over the programming community into Rust. "Rust taught me systems programming" is a common refrain I heard around then.

This also led to a lot of not-typically-systems stuff being done in Rust (like web stuff).

Except the language wasn't built to handle that, and it showed.

What you're seeing is the language catching up to handle a lot of use cases that were previously not considered necessary by the designers. It's not that Rust is a better web language than a systems language; Rust has always been a much better systems language than a web language, and it's now catching up on the web side. The state of Rust wrt web programming is worse: look at the missing libraries for the web and compare them with your list of missing systems features -- the missing web stuff is far more basic.


(servo/http)

I mentioned this on Reddit, but Servo really doesn't count as an "oh, it's 'just' web" application. A web browser in Rust is one of the strongest arguments out there that Rust is a valid C++ replacement.

We should be ashamed of how horrible ndarray code looks when compared with Eigen3 code

You mention ndarray, but there has been significant work put towards const generics this year. There's a whole overhaul of the trait system (Chalk) that it's blocked on IIRC, but a lot of work is being put into that too.

You also mentioned ATCs, and they're in the same bucket. Type-level consts are stabilizing IIRC.

All of this is happening. Folks talk about it less, but it's happening. The language team is putting lots of effort into Chalk and it seems to be a major focus, maybe as large as Tokio was.

Cargo should be able to easily compile C code, so that people can port C libraries from Makefiles and autoconf to Cargo, and Cargo should be able to easily spit out Makefiles.

I'm not really sure what more can be done here. In general folks just have Cargo call the Makefile or use the gcc crate, and both have decent ergonomics.

The core problem here is that each C project has its own flavor of Makefile. There's no high-level declarative syntax for Cargo to deal with; it's just steps, and ultimately make is the best tool for dealing with it.

We need at least a GCC backend, and way better would be to have a C89-backend.

I don't think so. Platforms that LLVM can't handle are way more niche than "web". I'd like these backends, but I don't think they're necessary for Rust to be a good C++ replacement.

We need alloca, inline assembly, SIMD, CUDA, etc

SIMD is happening. I think Rust has CUDA bindings?
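If the current proposals go through, using the vendor intrinsics should look roughly like this (the std::arch path and feature story are still being worked out, so treat this as a sketch of the proposal rather than a settled API):

```rust
// Sketch: adding two f32x4 vectors with x86 SSE intrinsics, assuming the
// proposed std::arch layout; intrinsic names mirror the C vendor intrinsics.
#[cfg(target_arch = "x86_64")]
fn add4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    use std::arch::x86_64::*;
    unsafe {
        let va = _mm_loadu_ps(a.as_ptr());
        let vb = _mm_loadu_ps(b.as_ptr());
        let mut out = [0.0f32; 4];
        _mm_storeu_ps(out.as_mut_ptr(), _mm_add_ps(va, vb));
        out
    }
}
```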

Inline asm is hard and I wish we had something better here. The core problem is that the clobber/etc syntax can't be stabilized since it's not stable in LLVM. GCC and clang just expose it as extensions and folks hope they never change; we have stronger stability guarantees. I've been considering pushing for just stabilizing the syntax as-is.
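To make concrete what "clobber/etc syntax" means: this is roughly the current nightly asm!, and the constraint strings ("=r", "0", "cc") get handed to LLVM more or less verbatim, which is exactly the part that's hard to commit to forever:

```rust
#![feature(asm)]

// A sketch of the LLVM-flavored nightly syntax (x86 only): outputs, inputs,
// and clobbers are separated by colons and use LLVM constraint codes.
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
fn add(a: i32, b: i32) -> i32 {
    let c: i32;
    unsafe {
        asm!("add $2, $0"
             : "=r"(c)        // output: any register
             : "0"(a), "r"(b) // inputs: `a` reuses the output register
             : "cc"           // clobbers: condition codes
             );
    }
    c
}
```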

11 Likes

This is an interesting idea! Although my knowledge of inline assembly is very limited, I think the number of things you can actually express in inline assembly is also extremely limited. I think it would be feasible to just come up with a syntax that is later translated into whatever syntax/data structures the lowest layers want. It doesn't hurt if the syntax looks like something well known...

The problem is that the register syntax for clobbers and such is nontrivial; as is the question of whether LLVM will always continue to support it in that form.

What's LLVM's (and GCC's, for that matter) stance on stabilizing inline asm? Are they in limbo as well? At some point it'll become pseudo-stable by virtue of users being pissed if they (finally) decide to change it; they would need a great reason to do it at that point :slight_smile:

2 Likes

For a good overview of inline asm, I highly recommend Rust Cologne, June: Florian Zeitz - Inline Assembly - YouTube

3 Likes

With some effort you can do interesting things with inline asm in Rust, like using macros to dynamically generate symbol names at compile time that you can link to in a final binary, where "some effort" is mainly "trying to get the LLVM backend to not crash".

Even in C with some fairly straightforward assembly (no macros), getting constraints wrong can hit LLVM assertions.

As someone who spends time integrating Rust with autotools projects, I'm really interested in Cargo calling Makefiles. However, my google-fu is failing me. Can Cargo conveniently run arbitrary scripts, or is this something special for Makefiles?

1 Like

With build.rs you can do whatever you want in the pre-build stage.

1 Like

Cargo used to be able to run any shell script (build = "make"), but that's been removed in favour of build.rs. You can still write a few lines around Command::new("make") to run make from build.rs.
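For reference, such a build.rs is only a few lines; the directory and library names below are placeholders:

```rust
// build.rs: a minimal sketch that shells out to make, then tells Cargo where
// to find the resulting static library. Directory and library names are made up.
use std::process::Command;

fn main() {
    let status = Command::new("make")
        .arg("-C")
        .arg("native") // hypothetical directory containing the Makefile
        .status()
        .expect("failed to run make");
    assert!(status.success(), "make exited with an error");

    println!("cargo:rustc-link-search=native=native");
    println!("cargo:rustc-link-lib=static=foo");
}
```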

I've been down that path, but I ended up switching to the Rust-native gcc-rs crate to compile instead. It automatically supports cross-compilation and other Cargo options, and it works on Windows/MSVC without requiring users to install a GNU toolchain. For dependencies I can use *-sys crates instead of reinventing my own detection in configure (and they get downloaded automatically by Cargo, instead of needing a system-specific mechanism and a pit of despair on Windows).
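For comparison, the gcc-rs side of this is about as short (the file and library names below are placeholders, and the exact builder API differs a bit between crate versions):

```rust
// build.rs: a rough sketch using the gcc crate to compile a C file into a
// static library that Cargo links automatically. Names are made up.
extern crate gcc;

fn main() {
    gcc::Config::new()
        .file("src/native/foo.c")
        .include("src/native")
        .compile("libfoo.a");
}
```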

1 Like