A long time ago (Aug 3, 2017) there was an episode of Around the Verse highlighting a lack of realism that, when the episode went live, seemed fairly close to being implemented. The tech, referred to as “secondary viewports”, was supposed to render realtime reflections onto windows, mirrors, etc. It was also a key part of the holographic communications system for one of the mission givers.
Hi, sorry for taking a while to get back to this. I typed out a big response and then, I guess, didn’t click publish. I’ll try to answer point by point.
1. What is the status of the system? It’s up and running! It’s being used all over, in places you may not have realised, such as mini-previews and comms calls in mobiglass, holograms out in the world like the big soft drink advert in Area 18, and (I think) the enemy ship views in the HUD.
2. Is it still the plan for reflections? It’s important to stress that this isn’t a general-purpose solution for adding high-quality reflections everywhere. Whenever we set up one of these views, we have to decide what style of rendering it will use and which features will be enabled, based on how long it will take to render and how much permanent GPU memory it will need. That makes it a good match for a mirror in an enclosed bathroom, where we know there are only a handful of lights and you can’t, for example, see out a window to a planet’s atmosphere. The same mirror on a player-controlled object would be a performance landmine: looking at it in the wrong circumstances could halve your framerate. For the same reason, we can’t just spawn them on every shiny surface.
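To make the trade-off concrete, here is a minimal sketch (plain Python, with invented names; nothing here is the actual engine’s API) of the kind of per-viewport budget a renderer might track: each secondary view carries its own feature flags and a permanent render-target allocation.

```python
from dataclasses import dataclass

@dataclass
class SecondaryViewport:
    """Hypothetical per-view configuration, for illustration only."""
    width: int
    height: int
    volumetric_fog: bool = False       # expensive feature, off by default
    planet_atmosphere: bool = False    # ditto

    def color_buffer_bytes(self) -> int:
        # RGBA8 color target only; a real viewport would also need
        # depth buffers, HDR targets, and so on.
        return self.width * self.height * 4

# An enclosed bathroom mirror: small, with heavy features disabled.
mirror = SecondaryViewport(512, 512)
print(mirror.color_buffer_bytes())  # 1048576 bytes = 1 MiB of permanent GPU memory
```

The point of a structure like this is that the cost is decided when the view is authored, not discovered at runtime when a player happens to look at the wrong surface.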
3. Is there any update on the timeline? There are a few small technical issues that currently stop it from being used as a mirror in the simplest way. For example, because of the way polygons are culled, mirroring an object also shows you the back faces of the mesh, effectively turning it inside out. None of these is likely to be a huge problem, but there are probably a lot of little snags like that hiding in different systems.
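The back-face problem above comes from winding order: a mirror transform has negative determinant, so triangles that were counter-clockwise on screen become clockwise, and a renderer that culls by winding starts culling the wrong side. A tiny sketch of the effect (plain Python, not engine code):

```python
def winding(a, b, c):
    # Signed area of a screen-space triangle:
    # positive = counter-clockwise, negative = clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def mirror_x(p):
    # Reflect across the vertical axis (a transform with determinant -1).
    return (-p[0], p[1])

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))   # counter-clockwise triangle
flipped = tuple(mirror_x(p) for p in tri)

print(winding(*tri))      # 1.0  -> front-facing under a CCW convention
print(winding(*flipped))  # -1.0 -> now reads as back-facing, so it gets
                          #         culled unless the cull mode is flipped
                          #         for the mirrored view
```

The usual fix (in APIs like OpenGL, via `glFrontFace`) is simply to swap the front-face convention while rendering the mirrored view, which is straightforward in itself; the snag is wiring that through every system that assumes one fixed convention.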
4. Will secondary viewports be available on the HUD, to assist in hangar landings? See my answer to (2) for why we’re not keen to add general-purpose camera views to things that can fly around to arbitrary places. However, that’s not to say that no view like this could possibly work. The graphics team has generally argued for these kinds of features to be presented as a kind of “scanner view”, which could have a visually appealing non-photorealistic look and give us the freedom to leave out major performance sinks that you don’t actually need, or that would actively interfere with landing. For instance, volumetric fog has major performance and memory costs, but landing in fog is probably harder anyway, so why not have a scanner that doesn’t see fog at all?
Again, sorry for asking a question and then vanishing for a month, hope this goes some way to answering your questions.