Biggest opportunity for a new programming model

the biggest opportunity for a new programming model is extracting the majority of the code from an application and moving it into the infrastructure instead. The second biggest opportunity is for the remaining code—what people refer to as the business logic, the essence of the program—to be portable and secure.

https://blog.colinbreck.com/predicting-the-future-of-distributed-systems/

It's a very interesting perspective the author shares in the blog post. Out of interest: are there actually any Rust projects aiming to move the majority of code "into the infrastructure"?

I think in Scala, Akka tried to move the bulk of the networking code into the framework. I am not fully convinced that actors are the ultimate answer as a new distributed programming model; there still seems to be room for improvement.

I find Rust, with its overall sensible design around memory safety and relatively easy serialization (serde), well suited to explore alternative distributed programming models, so I'm wondering what people in the field think about the idea of moving more code down the stack, i.e. out of the actual application?

If only we knew what the author actually and precisely meant by this, that'd be wonderful.

Until then, it's just a stack of big words.

4 Likes

Judging from the paragraphs following the one OP has cited, I think the idea is to decouple your business logic so much from IO that you can move it freely between environments, i.e., having a set of standard, high-level APIs provided by your infrastructure that do all the stuff like serving HTTP traffic, persisting data, and logging:

Instead of embedding HTTP servers and logging libraries and database clients and all the rest into an application binary, if this code can move down into the infrastructure, then these resources can be isolated, secured, monitored, scaled, inventoried, and patched independently from application code, very similar to how monitoring, upgrading, securing, and patching servers underneath a Kubernetes cluster is transparent to the application developer today.[21] If the business logic can be described and executed like this, then it also becomes possible to move code between environments, like between the cloud and the IoT edge, or between service providers.[22]

The author believes WASM can help with this:

WebAssembly may help with this. WebAssembly offers a secure way to run portable code, and the WebAssembly Component Model could be the basis of a standard set of interfaces that more than one platform can provide.[23]


Fluvio maybe? There is also a lot of exciting stuff happening around WASM that is being built in Rust and sounds related to the author's objectives, like wasmer, for example.

2 Likes

I agree with the author, but I also think the issue is that moving it into the infrastructure is highly domain- and architecture-dependent.

2 Likes

In my experience, leaning too heavily on "infrastructure" (by which they usually mean cloud-vendor-specific stuff) often leads to a worse development experience, higher latencies, longer deployment times, vendor lock-in, and needing even more cloud infrastructure to get visibility into all the moving parts.

If you have an abstraction layer that makes, for example, blob storage work across vendors and with local files: that can work. But then you end up giving up some of the functionality that isn't portable.

It can still be worth it if you're dealing with huge scale or global distribution. But people underestimate how far you can scale on a single or a handful of machines.

And it's a cost issue too. E.g. at work our on-prem GPUs cost us a fraction of what AWS is charging. That's including labor costs, the utilization factor, and the on-prem storage to feed the GPUs.

9 Likes

Valid point. I am actually working a lot on abstraction layers that remove the distinction between cloud, Docker, and local environments and, as you correctly pointed out, for economic reasons.
It's a pity, though, that you can't easily write a library that does the infra abstraction for you so you can focus on your application.

I actually wrote an example project for the Fluvio project about six months ago, and I get what they are trying to accomplish; it remains to be seen how the in-stream message processing works out in practice. AFAIK, it is one of the more interesting solutions out there, but not quite there yet. I like the idea though.

1 Like

The old LAMP stack (Linux, Apache, MariaDB, PHP) achieves a lot of that "decoupling" for a web framework. (Really it's the CGI script model I might be thinking of.)

The "build the world on every request" model does have some benefits.

For system software I don't see anything that would achieve that.

2 Likes

I will note, for OP's benefit, that this observation is old enough to have been part of the design of COBOL, whose facilities for I/O are intended to separate program logic - "do this to these records, then do that to those records" - from the minutiae of programming tape drives or magnetic storage. The idea is broadly sound, and there will always be new ways to implement it, but it's not a novel or revolutionary idea. It's an old and well-tested one.

2 Likes

In practice you will probably want to own the infrastructure too, in the sense of being the source of exactly how your code maps onto the infrastructure provider. And then you're back to programming again, just now it's probably in some crappy JSON file.

Doing proper IaaS can be worth it (not needing to worry about memory leaks or OS updates or that sort of thing is so nice!), but you should know what you're getting into first.

2 Likes

True, and I didn't even want to imply there is anything new. Rather, I just asked what the Rust ecosystem is doing in this direction, in the sense of finding new ways to decouple business logic from IO.

It is close to my current work, as I deal a lot with custom abstractions to make my business applications largely context-agnostic. As many pointed out here, separating business logic from IO cannot easily be generalized due to custom requirements, and that is definitely true.

At the end of the day, I ended up implementing three fairly timeless principles:

  1. Universal contextual autoconfiguration.
    Everything that needs a configuration uses a ConfigManager, which ensures each config is adapted to the detected context. That way, your app or service flies through dev / test / staging and prod without config errors.

  2. Custom service or app templates.
    Because each org has its own set of best practices and guidelines, we just encoded them in a standard template. More recently, these templates managed to achieve just the stated idea of handling all IO, so that the dev just implements the actual business logic. And, because the template uses the autoconfig, config is also taken care of, within the boundaries that the app or service was correctly specified by the author.

  3. Integration Management Service (IMS).
    This came after a lot of painful experience, but it is a classic inversion of control: just write an integration that registers itself with the Integration Management Service, and whatever app or service needs an integration first contacts the IMS to test if it is there and if it is the right version, and only then pulls the details of how to connect to that integration. I am not gonna lie, this has made debugging and maintenance a lot simpler, and I wish I had done it way earlier.

Are there different ways to do similar things? For sure. Can this be stuffed into a domain-specific language or custom programming model? Possibly, if you want to. Does this generalize enough to be widely applicable? I don't think so.

The crux with new programming paradigms really is how well they generalize; separating business logic from IO has been tried many times before (think actors), but very few approaches generalize enough to become adopted. Async and LINQ are notable examples, and it seems to me that is only the case because they generalize well enough.

With systemd you can have socket activation, and it can start isolated (~container) processes.

Sounds like you want some sort of enterprise application framework like Java Spring with a whole bunch of pluggable providers and so on. That's quite a big glueball. And also, well, a specific flavor of corporate.

If you build some embedded controller for a drone you don't need an AWS-secrets-client. If you build a multi-threaded jpeg decoder library you don't need a block storage abstraction, you take &[u8] or Read + Clone traits. If you write a game you don't need to auto-discover which cloud database you have to talk to. All of them might need some configuration and abstraction, but usually in a way that's covered by a few crates and traits. Well, I guess game engines come close, but they basically have somewhat different concerns than business software.

I haven't looked for or heard of any, so I guess they currently exist inside of corps where churning out lots of similar things is essential to their business.

Valid point.
Indeed, I am building a cloud-native microservice system, and there you do need a bit of overhead.
Not like Spring, far from it, but you do need something to manage your stuff.

IMHO Spring came into existence mostly because Java EE was too cumbersome to work with and, luckily, in Rust we don't have that. Rather, there is a crate for everything, and how you wire things together is up to you.

Also, you are driving home the point that whatever abstraction you end up with really reflects your specific requirements.

1 Like