I'd like to make a low-rez impostor of a large area of a big-world game. Like this one, from GTA V:
That's the easy case, and can be made from a height map and a top view picture.
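The easy case can be sketched in a few lines. This is a minimal illustration, not a production mesher: assuming a square heightmap array and a matching top-down texture of the same area, the impostor is just a regular grid of vertices with UVs mapped straight from the image.

```python
import numpy as np

def heightmap_to_mesh(heights, cell_size=1.0):
    """Turn an (n, n) heightmap into a grid mesh.

    Returns (vertices, uvs, triangles). The UVs span 0..1 across
    the grid, so a top-down picture of the same area can be draped
    directly over the mesh as its texture.
    """
    n = heights.shape[0]
    ys, xs = np.mgrid[0:n, 0:n]
    verts = np.stack([xs * cell_size, ys * cell_size, heights],
                     axis=-1).reshape(-1, 3)
    uvs = np.stack([xs / (n - 1), ys / (n - 1)], axis=-1).reshape(-1, 2)
    tris = []
    for y in range(n - 1):
        for x in range(n - 1):
            i = y * n + x          # two triangles per grid cell
            tris.append([i, i + 1, i + n])
            tris.append([i + 1, i + n + 1, i + n])
    return verts, uvs, np.array(tris)
```

A real pipeline would also decimate the grid, since a flat plain doesn't need one quad per heightmap sample.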
This one is harder. Vertical building walls need more than a top-down image.
This is how GTA V lets you see so far. Beyond about 300 meters, you're seeing these low-rez models. This is 10-year-old technology, so it was probably hand-built, at least partially, back in 2013. Since I'm doing a client for a metaverse system with user-created content, this has to be automatic: generate new models daily or weekly, as a batch job.
Both Google Earth and the new Microsoft Flight Simulator do this from real-world data. (Looking closely at Google Earth, I get the feeling that Manhattan has been hand-tweaked to get really nice building setbacks, but nobody bothered for Cleveland.) There's photogrammetry technology that can do this.
I'd like to do this by photogrammetry. Generate an elevation map from above. Render flat pictures from the game, maybe with depth from the Z-buffer: at least straight down, plus some oblique views. A steep angle, 15 or 20 degrees off vertical, from each of the four cardinal directions should provide enough information to texture the vertical walls. From that, generate a textured 3D mesh.
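The core step in using those renders is turning each Z-buffer image back into world-space points that can be fused into a mesh. Here's a hedged sketch of that unprojection, assuming a simple pinhole camera, depths already linearized to eye space, and a known 4x4 camera-to-world transform (which we have, since we placed the camera):

```python
import numpy as np

def unproject_depth(depth, fov_y_deg, cam_to_world):
    """Convert a depth image to a (h, w, 3) grid of world-space points.

    depth        -- (h, w) eye-space depths (assumed already linearized;
                    raw Z-buffer values are nonlinear and would need
                    converting first)
    fov_y_deg    -- vertical field of view of the game camera
    cam_to_world -- 4x4 camera-to-world transform, known exactly
                    because we placed the camera ourselves
    """
    h, w = depth.shape
    # Focal length in pixels from the vertical field of view.
    f = (h / 2.0) / np.tan(np.radians(fov_y_deg) / 2.0)
    ys, xs = np.mgrid[0:h, 0:w]
    # Camera-space convention here: +x right, +y down, +z forward.
    x_cam = (xs - w / 2.0) / f * depth
    y_cam = (ys - h / 2.0) / f * depth
    pts = np.stack([x_cam, y_cam, depth, np.ones_like(depth)], axis=-1)
    pts_world = pts.reshape(-1, 4) @ cam_to_world.T
    return pts_world[:, :3].reshape(h, w, 3)
```

With the straight-down pass this yields the elevation map directly; the oblique passes contribute points (and texture samples) on the vertical walls that the top view can't see.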
This is easier than doing it with real-world images, since in a game environment you know exactly where the camera is. With real-world imagery there's a "camera pose estimation" phase, where you take the images and work backwards to the camera locations.
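Because the pose is known, the whole estimation phase collapses to writing the matrix down. A small sketch, under the same camera convention as above (+x right, +y down, +z forward, z-up world); the function name and argument layout are my own, not from any particular engine:

```python
import numpy as np

def camera_to_world(position, target, up=(0.0, 0.0, 1.0)):
    """Build a 4x4 camera-to-world matrix from a known camera
    position and look-at target, in a z-up world.

    In a game we can write this down directly for each capture;
    real-world photogrammetry has to recover it from the images.
    Degenerate if looking exactly along `up`.
    """
    pos = np.asarray(position, dtype=float)
    forward = np.asarray(target, dtype=float) - pos
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, np.asarray(up, dtype=float))
    right /= np.linalg.norm(right)
    down = np.cross(forward, right)
    m = np.eye(4)
    m[:3, 0] = right    # camera x axis in world coordinates
    m[:3, 1] = down     # camera y axis (image down)
    m[:3, 2] = forward  # camera z axis (view direction)
    m[:3, 3] = pos
    return m
```

One such matrix per capture direction (straight down plus the four oblique views) is all the "calibration" the batch job needs.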