I'm going to assume that for a long time no machine intelligence is going to be smarter than an actual bunch of educated, experienced, motivated humans.
So effectively we already have that "end game", except we are simulating it with human programmers rather than some hypothetical machine. They are equivalent, right?
So somebody has some notion of some requirements in their head. They write it down, provide a bunch of diagrams describing it, generally a specification. Typically they are customers, clients, my boss, whoever. They know nothing about Rust or any other language, they don't see or care about any intermediate form of the solution. They don't have to concern themselves with any inherent complexity of things. They just get a machine code executable back.
Well, as you have probably noticed, the likelihood that such people get back what they actually want, something that works reliably, performantly, etc., is almost zero.
To fix that they have to provide very detailed requirements, and they have to do a lot of back-and-forth iteration with that "coding black box", in this case a bunch of humans, to pin down what they really want. They have to provide very detailed tests of the behaviour they want. They end up having to concern themselves with the inherent complexity of things after all.
It's not clear to me that replacing humans with machines in that "coding black box" is going to save those people anything or produce a meaningfully different result.
Well, apart from the fact that the machine never sleeps and likely does not cost as much to run as a bunch of human programmers. But logically nothing is gained.
Ultimately, the code humans write is the detailed expression of the requirements. It's not clear to me that making that non-human-readable is a benefit to the clients. I can see how it would have a lot of downsides.
Anyway, in short, the "end game" of a "code producing black box" as you describe it already exists, except it's humans in the box, not transistors, and look what difficulty we have with it now!