In the first Unreal Engine, used around 1998–2000 in games like Klingon Honor Guard and Deus Ex, mirrors, windows and screens were implemented using zone portals, which didn’t need to be adjacent to what you wanted to see. A portal just rendered a camera image (a dynamic actor viewpoint) onto a texture, by basically connecting a rectangle in zone A to another rectangle in zone B: if you looked into the rectangle in zone A, you actually looked out of the rectangle in zone B. Zones A and B could even be the same zone, just seen from another viewpoint.
For mirrors, the camera image of the rectangle in zone B was just X-flipped. There was no additional rendering effort; in fact, using those zone portals even increased performance, because you could cull out whatever was not visible through them.
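For reference, the mirror case boils down to a single matrix trick in a forward renderer: reflect the camera about the mirror plane and draw the scene again, clipped to the mirror rectangle. Below is a minimal C++ sketch of that step; the Mat4 layout and all function names are illustrative, not taken from UE1 or any particular engine.

```cpp
// Minimal sketch of a forward-rendered mirror: build a reflection matrix for the
// mirror plane and render the scene a second time with the camera's view matrix
// pre-multiplied by it. Types and names here are illustrative only.
#include <array>

using Mat4 = std::array<float, 16>;   // row-major, index = row*4 + col

// Reflection about the plane nx*X + ny*Y + nz*Z + d = 0 (normal must be unit length).
Mat4 makeReflection(float nx, float ny, float nz, float d)
{
    Mat4 r{};  // zero-initialized
    r[0]  = 1.0f - 2.0f*nx*nx; r[1]  =       -2.0f*nx*ny; r[2]  =       -2.0f*nx*nz; r[3]  = -2.0f*nx*d;
    r[4]  =       -2.0f*ny*nx; r[5]  = 1.0f - 2.0f*ny*ny; r[6]  =       -2.0f*ny*nz; r[7]  = -2.0f*ny*d;
    r[8]  =       -2.0f*nz*nx; r[9]  =       -2.0f*nz*ny; r[10] = 1.0f - 2.0f*nz*nz; r[11] = -2.0f*nz*d;
    r[15] = 1.0f;
    return r;
}

Mat4 mul(const Mat4& a, const Mat4& b)
{
    Mat4 out{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                out[row*4 + col] += a[row*4 + k] * b[k*4 + col];
    return out;
}

// The scene is simply drawn again with this view matrix (and with triangle
// winding flipped, since a reflection reverses handedness).
Mat4 mirroredView(const Mat4& view, float nx, float ny, float nz, float d)
{
    return mul(view, makeReflection(nx, ny, nz, d));
}
```

A general zone portal works the same way, except the reflection is replaced by the rigid transform that maps the rectangle in zone A onto the rectangle in zone B, and visibility culling starts from the portal's frustum, which is why it could even improve performance.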
Please excuse that my knowledge is still at that level in this regard, which is why I need clarification on why it seems so terribly difficult in SC to create a mirror, or a camera screen into another room, when the tech was already there in 2000. I do understand that we now have far more polygons to render than back then, but the graphics hardware has also gotten better.
It has, unfortunately, become a lot harder to do things like that in a modern engine, which is most likely why we saw a die-off of working mirrors in games. The issue is that, back then, everything was forward-rendered, so all the information needed to draw an object would be provided at the time the object was being drawn, and by the time a pixel had been shaded it looked the way it ought to look.
So one thing that has changed is that a lot of engines, including ours, use a deferred renderer, where polygons write out a set of material properties and a depth value, and a later pass combines that with a set of lights. In order for that to work, there’s a presumption that there’s a straightforward calculation that converts any pixel coordinate and depth into a 3D position. Adding mirrors or portals is theoretically possible, but you’d need extra data stored in every pixel to tell you which transformation to apply, amongst other things.
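To make that presumption concrete, here is roughly what the lighting pass does for every pixel, written as plain C++ rather than shader code; all names are illustrative, and it assumes a row-major matrix and a 0..1 depth range. The key point is that a single inverse view-projection matrix is valid for the whole screen, which is exactly what a mirror or portal region would need to override per pixel.

```cpp
// Minimal sketch of the position-reconstruction step a deferred lighting pass
// relies on: pixel coordinate + stored depth -> world-space position, using one
// inverse view-projection matrix for every pixel on screen.
#include <array>

struct Vec4 { float x, y, z, w; };
struct Mat4 { std::array<float, 16> m; };   // row-major, index = row*4 + col

// Multiply a row-major 4x4 matrix with a column vector.
Vec4 mul(const Mat4& a, const Vec4& v)
{
    auto dot = [&](int r) {
        return a.m[r*4+0]*v.x + a.m[r*4+1]*v.y + a.m[r*4+2]*v.z + a.m[r*4+3]*v.w;
    };
    return { dot(0), dot(1), dot(2), dot(3) };
}

// Reconstruct a world-space position from a pixel centre and the depth stored
// in the G-buffer. This is the "straightforward calculation" the deferred pass
// assumes holds for every pixel.
Vec4 reconstructWorldPos(float px, float py,            // pixel centre
                         float depth,                   // depth-buffer value, 0..1
                         float screenW, float screenH,
                         const Mat4& invViewProj)       // inverse of view * projection
{
    // Pixel -> normalized device coordinates (-1..1).
    float ndcX = (px / screenW) * 2.0f - 1.0f;
    float ndcY = 1.0f - (py / screenH) * 2.0f;          // flip Y for typical conventions
    Vec4 clip { ndcX, ndcY, depth, 1.0f };

    // Unproject and divide by w.
    Vec4 world = mul(invViewProj, clip);
    float invW = 1.0f / world.w;
    return { world.x * invW, world.y * invW, world.z * invW, 1.0f };
}
```

Pixels inside a mirror or portal would need a different invViewProj from the rest of the screen, which is the extra per-pixel data mentioned above.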
The other big problem is data that’s set up beforehand, with similar built-in assumptions. Before we shade the scene, we construct a 2D grid of what lights affect what 8×8 tiles of the screen, again assuming that it’s a single space in front of the camera, as well as a low-res volumetric texture containing fog information. In theory I could see us adding portal-IDs to every light, for some added cost, but the fog really can’t just be masked out of the mirror area as it’s so much lower resolution.
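As a rough illustration of the first structure mentioned above, here is what a per-tile light list looks like conceptually, using the 8×8 tile size from the post; everything else (types, names, the screen-space bounds test) is an assumption for the sketch, not the engine's actual implementation.

```cpp
// Minimal sketch of a per-tile light list: the screen is cut into 8x8-pixel
// tiles and each tile records which lights can touch it. The structure assumes
// every pixel belongs to one camera view of one space, which is exactly what a
// mirror or portal region would violate. All types and names are illustrative.
#include <algorithm>
#include <cstdint>
#include <vector>

struct ScreenRect { int minX, minY, maxX, maxY; };  // light's screen-space bounds, in pixels

struct TileGrid {
    static constexpr int kTileSize = 8;
    int tilesX = 0, tilesY = 0;
    std::vector<std::vector<uint32_t>> lightsPerTile;   // light indices per tile

    TileGrid(int screenW, int screenH)
        : tilesX((screenW + kTileSize - 1) / kTileSize),
          tilesY((screenH + kTileSize - 1) / kTileSize),
          lightsPerTile(static_cast<size_t>(tilesX) * tilesY) {}

    // Register one light against every tile its screen-space bounds overlap.
    void addLight(uint32_t lightIndex, const ScreenRect& r)
    {
        int tx0 = std::max(r.minX / kTileSize, 0);
        int ty0 = std::max(r.minY / kTileSize, 0);
        int tx1 = std::min(r.maxX / kTileSize, tilesX - 1);
        int ty1 = std::min(r.maxY / kTileSize, tilesY - 1);
        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                lightsPerTile[static_cast<size_t>(ty) * tilesX + tx].push_back(lightIndex);
    }
};
```

Making this portal-aware along the lines suggested above would mean storing a portal ID with each light entry so the shading pass could reject lights belonging to a different space, which is the added cost mentioned; the volumetric fog texture has no such per-pixel escape hatch, since its resolution is far coarser than the mirror region.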
There are probably other problems I’ve forgotten to go along with these, but those are the ones that jump to mind.