I don't find that explanation particularly satisfying.
Certainly I don't understand relativity beyond a bit of Special Relativity, and I wouldn't expect most people to grasp even that or, as you say, to need to.
However, once you can demonstrate or otherwise convince people of the finite (and fixed) maximum speed of light, they are likely to start realizing there is a problem: their simple notions of space, time, speed, and so on may not be sufficient in some situations, even if they don't have the wherewithal to think about it any further.
Oddly enough I recall this happening to me when I was about 9 years old, when my father happened to mention that nothing can travel faster than light.
What, then, is the analog of the speed of light in physics that compels us to have a concept of a "memory model" in computing?
After extensive research (Wikipedia) I found my answer:
a memory model describes the interactions of threads through memory and their shared use of the data.
It's all about the interaction between compiler optimizations, caches, and so on, and what visibly happens in memory. That is, "visible" as in observed by other threads or by hardware looking at the memory you are using.
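To make that a bit more concrete, here is a minimal sketch in Java (the class and field names are made up for illustration) of the kind of visibility problem a memory model has to pin down. Without the guarantees the model provides (for example, declaring `ready` as `volatile`), the reader thread may never observe the writer's update at all, because the compiler and hardware are free to keep the value in a register or cache.

```java
// A minimal sketch (hypothetical names) of the visibility problem.
// With no happens-before relationship, the reader may never see the
// writer's update: the JIT may hoist the read of `ready` out of the
// loop, or the value may sit unflushed in a store buffer or cache.
public class VisibilityDemo {
    static boolean ready = false;   // note: deliberately not volatile
    static int data = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) {
                // May spin forever: nothing forces a reload of `ready`.
            }
            System.out.println("data = " + data); // could even print 0
        });
        reader.start();

        data = 42;
        ready = true;   // making `ready` volatile would safely publish both writes
        reader.join();
    }
}
```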
Oddly enough this has strange parallels to the relativity issues caused by the finite speed of light. Different observers can see events happening in different orders or perhaps can't see them happening at all.
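As a rough illustration of that "different observers" point, here is another small sketch (again, the names are just for illustration). With plain, non-volatile fields, the Java memory model allows the two reader threads to disagree about which write became visible first.

```java
// A sketch of "different observers can see events in different orders".
// With plain fields, an outcome where reader1 sees x=1, y=0 while
// reader2 sees y=1, x=0 is permitted: the two writes reached the two
// readers in different orders.
public class ReorderingDemo {
    static int x = 0, y = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread writerX = new Thread(() -> x = 1);
        Thread writerY = new Thread(() -> y = 1);

        Thread reader1 = new Thread(() -> {
            int r1 = x, r2 = y;
            System.out.println("reader1 saw x=" + r1 + ", y=" + r2);
        });
        Thread reader2 = new Thread(() -> {
            int r1 = y, r2 = x;
            System.out.println("reader2 saw y=" + r1 + ", x=" + r2);
        });

        writerX.start(); writerY.start(); reader1.start(); reader2.start();
        writerX.join(); writerY.join(); reader1.join(); reader2.join();
    }
}
```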
I also start to realize why I had not heard of this until recently. It seems it is not something that people realized even needed formalizing until fairly recently.