D adds a borrow checker, called the "OB system" (for ownership/borrowing).

If you read the documentation for OB, you'll see that using lambdas in that way isn't checked by D's borrow checker. If you reduce the code as below, it compiles fine, and the pointer "p" can be used and returned after it has been freed.

@live int* test1() @safe
{
    import core.stdc.stdlib : free, malloc;

    // minor note here: we have to create a lambda for this
    // because @trusted can only be applied to D functions,
    // not code blocks
    auto p = () @trusted { return cast(int*) malloc(4); }();

    *p = 0;

    () @trusted { free(p); }();

    *p = 10; // "ok"

    return p; // "ok"
}

You've provided the perfect example of how this can be easily misused with no warning or error message given. And at first glance the code looks okay, because it is validated as both @safe and @live code! This is exactly the kind of situation @safe and @live are supposed to prevent, yet it becomes even more dangerous because neither does its job properly.


As far as I understand it, @trusted is like unsafe in Rust, i.e. you have to know what you're doing.
So why would you expect a compiler error if you explicitly use @trusted?


That's the point: there's no way to correctly use D's borrow checker.


Maybe what's missing is just some additional abstractions with a safe interface that hide away the explicit malloc/free calls?

It's difficult to comment effectively on this right now, because there are known bugs in the current implementation: see e.g. https://forum.dlang.org/post/ra7buo$312i$1@digitalmars.com. So I wouldn't be confident that the failure you observe here isn't just another example of that same bug, rather than a fundamental problem in the design.

More generally, I'm really not sure that mixing up different @system actions to obscure things from the borrow checker is that much of a gotcha. Once you start casting from pointers to value types and back again, there is no reasonable way you can expect any safety mechanism to catch that. A more normal use-after-free will be caught, even in @system code, and if it isn't, that's surely a bug in the current implementation rather than in the broader design.

But such an example also ignores the expected long-term use case: @safe-by-default code, @live checks applying to more than just pointers, and (as @troplin points out) safe interfaces via which to access memory allocation. Stuff like the casts in your example will be caught in that scenario.

Your examples are useful to expose bugs and limitations of the early stage implementation, but there's no basis to suggest that the fundamental idea is flawed given how provisional that implementation is right now.


We could arguably have been more focused about it, but these higher-level language comparisons are relevant to the OB feature and how it compares to Rust's borrowing and lifetime checks. For example:

  • Rust's restrictive approach to what the developer can do should at least theoretically make it easier to reason about and design features like a borrow checker

  • D's permissive-by-default settings open up some nice potential ways to use borrow-checked code: write core system/stdlib functionality under very locked-down @live restrictions that assume worst-case scenarios (as Rust does), while leaving the user the option to write higher-level app code in a simpler style depending on their use case -- all while writing in the same fundamental language

The open question here is of course how readily it will be possible to implement borrow-checking of comparable power to Rust's in a language that isn't as locked down in terms of what the developer can do. But my intuition is that the long-term direction of @live is to allow the creation of a subset of functionality which can and will always be checked that rigorously, so you can just put @live: at the top of a module and, hey presto, everything in there will be as strictly checked as Rust is.

Calling into that subset from less restrictive code would then be akin to something like calling into Rust routines from, say, a Python app -- but even then with considerably more safety and correctness checking (not to mention performance), and all the benefits that come from working in the same language.



Swift and C++ are both trying to add some sort of an ownership/lifetime/memory safety system. So far, I haven't seen any significant (positive) results.

In general, my experience with retrofitted language features is that they end up being clunky and painful to use, and as a result, they don't gain traction.

I am also kind of missing the point of adding the ownership/borrowing model to other languages. We already have Rust – why reinvent the wheel instead of making Rust better?



Let's view the software stack from bootloader to OS kernel to OS components to applications as a bucket full of water. Ideally we would like the bucket not to have any holes in it, else the water leaks out. Or perhaps contaminants get in.

What the "permissive-by-default" argument above says to me is that we ensure there are no holes in the bottom of the bucket, but we will tolerate the possibility of holes higher up the sides.

Personally I don't find that reassuring. My systems rely on everything, from top to bottom. If any of that fails it's not good news.


Not sure myself.

C++ has to have it because C++ has to have everything :slight_smile:

Arguably if you can make a language millions of people use and know well safer that is a good thing.

Arguably Rust is very complicated and hard to use. If there were a simpler way to achieve what Rust achieves that might attract more users. I suspect it's not even possible to do that though.

I don't think it's a bad thing that D dabbles with this technology. Nobody uses it much so it's a good laboratory. Whatever the outcome it will be another data point indicating what can be done and perhaps how to do it.


At least partly because Rust is less nice to work with when one isn't working on a problem that requires the extreme strictness that Rust is designed for. (If, say, I'm writing a single-threaded app that's going to run on a system with plenty of memory.)

Of course one could write core functionality in strictly-checked Rust and use another more friendly language to write the stuff that doesn't need to be so strict. But that opens up all sorts of problems in terms of defining cross-language APIs, testing that things work across the language barrier, and so on.

So, it's much nicer if one can offer a single language that allows strict lockdown in the parts of the codebase that need it, and fewer restrictions in the parts where it's not needed. That also makes it much easier to introduce these stricter checks to codebases already written in these languages (some of which are very large).

But in any case, languages like to evolve. Given that Rust has genuinely innovated in proving that it's possible to have such rigorous compile-time memory management, it's natural that other languages want to adapt that innovation. Consider it the compliment it is :slight_smile:


Fair enough…

That's indeed extremely arguable. At best it's subjective. So far I find Rust the nicest-to-work-with language I've ever used. Not having had to dig into a debugger literally for years is worth every single "annoying" recompilation.

You have to understand the basics of computing for it to make sense, sure – but I always found that to be a bad argument against the language. It's at most an argument against making it the first teaching language one learns. But after the initial learning phase, I'm on the side that understanding the things Rust requires me to understand is pretty much necessary if I want to call myself a professional.


To me the issue is not the complexity of learning the Rust language: it's the need to alter development methodology. Rust mandates a change in programming mindset, demanding attention to details that historically most programmers have ignored until after their programs were deployed and failed in the field.

Many Rustaceans have reported on this forum that their coding practices in other languages have changed as a result of their learning to code effectively in Rust, and that as a consequence their code in those other languages requires less debugging.


Well you're going to need at least 2 holes at the top of your bucket to fit the handle, so ... :slight_smile:

But if we're going to talk metaphorically: obviously you don't want holes below the expected maximum waterline of a bucket. But would you apply the same design considerations to, say, the metal basket you put your shopping in? There it's useful that it's full of holes because it reduces the weight, and the holes rarely matter given what you're putting in it.

Comparable metaphors: should you use the same equipment and safety regulations for your local swimming pool as you would for deep-sea compression dives? Do you want to armour all the parts of your fighter plane the same, given that every kilo of unnecessary weight reduces speed and manoeuvrability?

It's very important, when writing code, that one has as few distractions as possible from what one's problem actually is. Meeting safety requirements that actually aren't needed for one's use-case takes time and mental effort away from the kinds of correctness that really matter, and can lead to inefficient solutions.

Having a solid ownership and borrowing system is really powerful -- but it's much more powerful if it's a tool that one can use when it is needed, rather than a gatekeeper that won't let you write your program if you don't follow its required approach.


I'm not sure how to read that. Most computers I have programmed have instructions and data in memory. The instructions are really simple, you can copy things around, do arithmetic and logical operations on numbers, jump around in the instructions, make decisions and jump around accordingly.

Boiling it all down we have sequence, selection and iteration.

High level languages layer a lot of conceptual baggage on top of those basics. Block structure, functions, types, generics, classes, etc, etc. Rust adds more with lifetimes and macros etc.

Now I don't disagree with you. I like Rust, it fits my "values", and I will persist with it. However, despite generally understanding what it is doing, I have a hard time reading all those angle brackets, tick marks, pipes, iterators and the functional style we often see here. In one line of code there can be a lot of abstract concepts crammed into a few cryptic symbols. I may never be able to do anything useful with macros!

To bring this back on topic, I wonder what the impact of adding OB to a language like D is. Does it start to become as cryptic as Rust? Is this just unavoidable?

Having ownership and borrowing implemented in a language that they already use means:

  • Not having to learn a bunch of new incidental stuff. Learning Rust doesn't just mean learning O/B. It also means learning syntax, and the system of modules and privacy, and the Cargo CLI (or an IDE frontend to it), and new names for everything (what Java calls String and StringBuffer, Rust calls str and String, respectively), and acculturation to differing norms (if we start talking about which norms are better, then this thread will probably wind up locked, but suffice to say that it is a barrier to entry). Rust does try to avoid pointlessly differing from other languages, but it's obviously impossible to be perfectly consistent with D, C++, Java, JavaScript, OCaml, Ruby, Python, and Haskell all at the same time, because they aren't perfectly consistent with each other. D with @live can be perfectly consistent with GC-ed D, only forcing you to learn ownership and borrowing and nothing else.

  • If someone does want to learn Rust, then it'll be easier. Those who do find it easy always seem to describe O/B as a formalized version of principles they already followed, so it was already familiar to them.


It's fair to comment on it as it is. It isn't a bug, it's a known limitation; again, see the documentation for OB. That's also part of the problem here: bugs and intended behavior are almost synonymous for D's borrow checker. Can that change in the future? Sure it can, but speculating about what the future might bring isn't reality. There's no design documentation; there wasn't even a proper DIP for this. The DIP for OB basically just disallowed passing the same variable twice to a function; it didn't go into detail about the actual implementation or the ultimate goal. As far as I know, there isn't anything concrete other than the flabby implementation.

It doesn't. Even if you have a @safe implementation, it would need to be used in both @safe and @live code, otherwise it wouldn't be memory safe. So you end up with something that can be used in @safe without @live but relies on @live to be memory safe: an interface labelled @safe, which is supposed to mean it is memory safe, that isn't actually memory safe unless it is also used exclusively with @live.

There's no point discussing this further, I guess, if you are just going to dismiss every legitimate criticism by stating that it could be a bug, or that it could be fixed by some magical interface that doesn't exist.

I didn't say it wasn't.

It's completely clear from the release notes that this is a highly provisional feature that is vastly incomplete, and that its introduction at this point is primarily to allow exploration of a broader design that the author has in mind.

Would it be nice if there were systematic design documents laying out the theoretical justification for the approach, with a complete plan for how it should work? Sure. But I'm not sure how much it really makes sense to criticize something for being exactly as provisional, buggy, and incomplete as it openly advertises itself to be.

It's not that it isn't legitimate criticism; it's that it's not really constructive to go pointing out that this feature is buggy, provisional, and incomplete, when it openly states as much about itself. Where there are bugs or observable limitations, and you can create a nice minimal example that isn't among the documented known issues, the only constructive thing to do is to report those to the people who can do something about it.

If what's bothering you is that it has been introduced into the language in this state, fair enough. But lots of D features evolved in a similar way, as I recall: early D2 was sometimes a scary place to be, whereas by contrast the bugginess or incompleteness of this feature will only bite you if you actually try to use it (which clearly no one should for now, except for experimentation and testing).