C++ Core Guidelines

#10

The way I understood it, it defaults to the intersection of the two (the return’s lifetime is valid only while all the inputs’ lifetimes are). In Rust I think it would be equivalent to defaulting to:

fn foo<'a>(x: &'a T, y: &'a T) -> &'a T

There was also at least one ad hoc case: if a function takes references to two strings, and one of those references is const, then the return’s lifetime is the lifetime of the non-const string (because it is assumed that the const string is just a pattern to search for in the other string).
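In Rust terms, that ad hoc rule falls out naturally from the lifetime annotations: only the searched string constrains the returned reference, while the pattern gets its own anonymous lifetime. A minimal sketch (the names are mine, not from the proposal):

```rust
// Hypothetical sketch: `needle` (the "const" pattern) has its own
// anonymous lifetime, so only `haystack` constrains the returned
// slice -- mirroring the "lifetime-const" rule described above.
fn find_first<'h>(haystack: &'h str, needle: &str) -> Option<&'h str> {
    haystack
        .find(needle)
        .map(|i| &haystack[i..i + needle.len()])
}
```

The returned slice stays valid as long as the haystack lives, even after the needle is dropped.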

1 Like
#11

I have to say, that glimpse into Rust of yore makes me appreciate more where we ended up! :smile:

Thank you, I missed that! Nice to know my concern was both valid and already addressed. I should re-read the whole thing more carefully. That’s section 10 “Lifetime-const” in the lifetimes pdf for others who are curious.

Sutter explicitly said in his talk that he designed this lifetime system without looking at other languages, to help ensure a fresh and unbiased perspective. That’s fine as a starting point, but now I hope they’ll take a deep comparative look at other languages to see what lessons they have missed.

#12

My first impression from the first 30 minutes: it’s as if these guys just came out of a cryo chamber and tried to reinvent Rust as a C++ lib plus a style checker. It would’ve worked if C++ weren’t a language of unsound defaults.

3 Likes
TWiR quote of the week
#13

What bothered me is how quickly they dismissed the idea that a new language could have any success. Rust does more things better than C++ than just lifetimes and ownership.
Just look at the things Rust shipped without.

3 Likes
#14

@hoodie

That’s because we have gigatons of code in C++. They’re right that we can’t just drop it. And they’re right that there have been multiple attempts to overthrow C++. They just don’t acknowledge that some of them were actually successful:

  • Java occupied heavyweight enterprise server-side code, and even some games (Minecraft :))
  • C# occupied parts of Windows (and also games, hello Unity)
  • JavaScript assaults small servers via Node.js
  • Go assaults Node.js and many other niches; some folks are even discussing games written in Go (which could be another success)

Because, let’s be honest, C++ sucks at ergonomics and clean code. That’s why it was displaced in many areas where being 3–5 times slower matters less than debugging 200-line template substitution errors.

The place where C++ still shines is performance-critical code: games, browser engines, core OS features, etc. Even compilers are mostly written in other languages (or are bootstrapped).

I’ll spare your time by not repeating all of C++’s flaws for the umpteenth time :smile:

4 Likes
#15

I’d like to nominate your post to be quote of the week.

#16

Nominated: TWiR quote of the week

#17

You’re welcome :smile:

#18

The problem is, you can’t make the gigatons of existing code any better by imposing guidelines on new code. The old code will eventually have to be rewritten, refactored, or replaced. That raises the question: will guideline-compliant C++ still be better than code written in a genuinely more modern language?

1 Like
#19

Unfortunately, the problem is twofold.
C++ interfaces only with C++, so to reuse existing libs we’ll need a certain amount of wrappers.

Though I don’t think that C++ can become that much better, simply because of its multiple inherent flaws. Stroustrup & co. provide a smooth transition, but no way to isolate newly written code from legacy flaws.

Each evolutionary addition increases complexity. My humble opinion is that Rust doesn’t have sky-high complexity; it just has some complexity in details not familiar to C++ programmers.

Not to boast, just for context: I’ve been coding in C++ in production for about 7 years, and had about 5 years of student experience with it before that. And to me Rust is far simpler than C++, simply because it’s much, much more consistent with itself and isn’t composed of a bunch of square wheels. At least for now.

1 Like
#20

It’s a bit off-topic, but I have been wondering recently if there is a tool (like SWIG?) which can

  • take a header file for a C++ class and generate a Rust struct/impl definition and FFI bindings.
  • take a Rust struct with traits and generate a C++ class definition for it.

I guess such a tool would make a huge impact on Rust adoption.

3 Likes
#21

That would be wonderful, except C++ class layouts are not standardized and are thus completely implementation-defined. Classes without virtual methods can be wrapped purely by code generation.
But what to do with vtables? With multiple inheritance? With virtual inheritance? For example, member-function-pointer sizes on MSVC vary depending on the inheritance type.
The only solution I can think of is to:
a) generate compilable C++ that wraps C++ method calls as free functions (i.e. functions with an explicit this parameter, with the object passed by pointer), and
b) wrap that C interface with Rust externs.

By the way, we still have templates - and I have no idea what to do with them.

#22

The problem is that C++ does not have a standard ABI; each compiler implements its own, with varying degrees of stability promises. So you need the C++ side to expose a C ABI if you want this to be foolproof.

1 Like
#23

Yeah. The reason everyone does FFI through a C ABI is not that C is awesome – it just has a stable and reasonably simple ABI to work with.
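A tiny illustration of what “FFI through a C ABI” looks like from the Rust side, using `abs` from the C standard library (which is linked by default):

```rust
// Declare a C-ABI function; the C ABI is the stable contract both
// sides agree on, regardless of which compiler produced the other side.
extern "C" {
    fn abs(input: i32) -> i32; // from libc
}

fn c_abs(x: i32) -> i32 {
    // Calling across an FFI boundary is unsafe: the compiler cannot
    // verify that the declared signature matches the real symbol.
    unsafe { abs(x) }
}
```

The same pattern works for any C++ library, as long as it exposes its functionality through `extern "C"` shims.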

#24

To some extent, this could be solved if we tightly integrated with a C++ compiler. For example, we could have the clang frontend parse the headers and then query it about the precise layout of the data structures it is going to generate.

I’ve posted some ideas here the other day. If you think of something else, please add to that discussion.

#25

Personally, I don’t see any other way for now. And template→generic transformation isn’t possible in the general case, because, as you know, templates are duck-typed and have multiple ways of specialization: not only via template arguments, but also via SFINAE.
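For contrast, Rust generics require the duck-typed capability to be spelled out as a trait bound up front, which is exactly why a mechanical template→generic translation breaks down. A rough sketch:

```rust
use std::ops::Add;

// The C++ template `template<class T> T twice(T x) { return x + x; }`
// compiles for any T that happens to support `+`, checked only at
// instantiation time. In Rust the same capability must be declared
// explicitly as a trait bound, checked at definition time:
fn twice<T: Add<Output = T> + Copy>(x: T) -> T {
    x + x
}
```

A translator would have to infer those bounds from arbitrary template bodies (and from SFINAE tricks), which is undecidable in general.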

#26

Sorry for resurrecting this thread, but I just started reading https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md after finishing this tutorial. I am amazed by how much overlap there is between Rust concepts and the guidelines put forward there. I would say that if you follow these guidelines, you should have safe C++ code. The problem, of course, is that the compiler doesn’t enforce them; you have to use an additional tool, which can only give warnings.

That said, as a young programmer (born around the turn of the century), I am sympathetic towards C/C++. Imagine yourself in 1972. Imagine how awesome it must have been to be able to write an OS in a high-level language. Even if we now think C is the worst language ever because of memory and security bugs, it must have been the thing back then. Remember that most computers weren’t networked, so the occasional system crash wasn’t as bad, and security vulnerabilities weren’t even on the radar. And I bet all those OSes and other low-level programs written in assembly before that crashed even more.

Then came C++ in 1985. I don’t know about you, but I don’t want to write a modern browser or OS in C. I like the higher level of abstraction that C++ provides. Still, it was not memory safe, and security bugs were easy to introduce. Then again, this was before the Internet was widespread, so I believe security vulnerabilities weren’t as much of an issue as they are today. C++ is also most definitely safer than C.

Then came the 90s and Java with its garbage collector. It was wonderful if you could afford the performance hit, but if you needed speed, you still had to use C++ or, if that wasn’t fast enough, C. Thus C++ carved out a niche for itself. This was also the time of the advent of the Internet, and security bugs suddenly became much more of an issue. But you needed fast code for your OS, so you still programmed in C or C++.

In the first decade of the 21st century, multithreading became mainstream. C and C++ (and all the other languages) had to adapt, but they botched it. Race conditions, I believe, were only discovered after language support for multithreading was added. By this time, the computing landscape had changed dramatically from 1972, when C first came out. There is no way the designers could have foreseen all that. The result was a language with more and more pitfalls, because it wasn’t designed with the new developments in mind. Clearly a new direction was needed. The problem was, if you needed fast software, there was no “new direction” for you to take. C and C++ were still the only games in town.

Enter Rust. A new language developed with the hindsight of all the failures of C and C++ in the face of new technology and, more crucially, knowledge of what those new technologies are. The development of a new language was almost unavoidable. The need arose to address the use-after-free, double-free, buffer over-read, buffer overflow, dangling pointer/reference, null-pointer dereference, use-of-uninitialized-data, data-race, etc. nightmares.

Do I say that C++ is a bad language? No, I think it was very good for its time. Do I believe that it is past its prime? Maybe. But it isn’t dead yet. Do I believe that Rust could and should replace it as the language in which new high-performance code is written? Absolutely! But maybe the most important lesson we should learn from this anecdote is that while we believe Rust will still be used 40 years from now, we shouldn’t be so shortsighted as to close our eyes to and turn our backs on new languages developed to address problems arising from new technology we cannot foresee. We should realize that Rust will not necessarily live forever, just like C and C++ will not live forever. New hardware and tech will mandate new languages. May we as a community be open-minded enough to recognize those new languages when they start to make their appearance, and not cling to our beloved Rust like C and C++ programmers sadly do today.

PS sorry for the long rant.

1 Like
#27

An interesting rant. Thanks for posting it.

As a not so young programmer who wrote his first program in 1963, I have one, minor correction. We knew about race conditions much earlier than the 2000s. In the mid 1980s, I managed to generate plenty of them when using various dialects of parallel FORTRAN.

1 Like
#28

Would it be correct to say that race conditions weren’t as much of an issue back then because parallel programming was a very niche thing?

#29

Race conditions were a problem for anybody trying to take advantage of shared memory multiprocessors. My work was in scientific programming, but people working with commercial software likely faced similar problems.