
Physically Based Rendering



So, if you've been in the Discord, reading posts on here, or following my workblog at all, you've no doubt seen a lot of talk about Physically Based Rendering.

There are plenty of primers and explanations on what exactly that is, so I won't go into those details here, but I do want to cover why it's been a challenge to get working.

Physically Based Rendering/Lighting

Obviously, the change to the lighting/materials system is the core of it. Utilizing math that's more accurate, but more importantly, more consistent, is the crux of the whole thing. The good news on this end is that this part has actually been working for a good while now. We've had branches that take in PBR-style inputs - Albedo, Metalness, Roughness, etc. - from PBR workflow tools like Substance for a while now.

We even have a utility behavior that takes the separated channel information for metalness, roughness and AO and compiles it down into a single 'Composite' texture map for efficiency's sake.
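To make the idea concrete, here's a minimal sketch of that channel-packing step. The channel order (metalness in R, roughness in G, AO in B) and the function names are my assumptions for illustration, not the engine's actual layout or API:

```python
# Hypothetical sketch of the 'Composite' map idea: pack three single-channel
# maps into the R, G and B channels of one texture. Images are represented
# here as plain lists of rows of floats for simplicity.

def pack_composite(metal, rough, ao):
    """Zip three greyscale images into one image of (R, G, B) triples."""
    composite = []
    for m_row, r_row, a_row in zip(metal, rough, ao):
        composite.append([(m, r, a) for m, r, a in zip(m_row, r_row, a_row)])
    return composite

def unpack_composite(composite):
    """Recover the three channels again (what the shader/loader side would do)."""
    metal = [[px[0] for px in row] for row in composite]
    rough = [[px[1] for px in row] for row in composite]
    ao    = [[px[2] for px in row] for row in composite]
    return metal, rough, ao
```

The win is purely practical: one texture fetch in the shader instead of three, and fewer assets to juggle on disk.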

So the general material changes and root behavior are actually in good shape. The wrinkles come from the parts that follow.

Image Based Lighting

Ultimately, all the changes to the materials handling just get us consistent art. They don't really do anything to drastically improve the quality of the rendering itself. If we keep utilizing outdated shading models, the results still won't match up to the content tools, nor look as accurate as possible. To make that happen, we needed to change up the actual lighting/shading logic itself.

The go-to on this end is known as Image Based Lighting. It's named as such because we heavily utilize captured images in order to inform the lighting information of the scene. Unlike the classical way of lighting, where you have the direct light from the sun or a light source, and then an all-encompassing general "ambient" color value for what the indirect/shadow color would be, we utilize processed images - either statically set, or baked from the scene itself - to ensure accurate, local information we can plug into the lighting system.

The mainstay of this is Reflection Probes. They're objects you place into the map and then either provide with a static cubemap, or have do a baked capture (where the probe renders the scene from its center point into a cubemap, which is then saved to disk). Once we have our cubemap, we prefilter it to get the averaged lighting information for both specular and irradiance. Specular is pure reflectivity information - if the material is a fully smooth metal surface, it'll reflect the environment like a mirror, so it'll display the sharpest mip from the prefiltered cubemap data. If it's a rougher material, we use a lower mip, which has less, blurrier info, so you lose those crisp details and it goes from "mirror" to "somewhat shiny surface".
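That roughness-to-mip lookup can be sketched very simply. A straight linear mapping like the one below is an assumption for illustration; real prefilter pipelines tend to use calibrated curves:

```python
# Illustrative only: pick which mip of the prefiltered specular cubemap a
# material should sample, based on its roughness. Roughness 0 gives the
# sharpest mip (mirror); roughness 1 gives the blurriest.

def specular_mip(roughness, mip_count):
    """Linearly map roughness in [0, 1] onto the cubemap's mip chain."""
    roughness = min(max(roughness, 0.0), 1.0)  # clamp bad inputs
    return roughness * (mip_count - 1)
```

So a polished metal floor samples mip 0 and gets crisp reflections, while a rough concrete wall samples near the end of the chain and only gets a soft environmental tint.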

Irradiance is the averaged lighting information over an area, and is utilized to inform the ambient lighting information at a given point based on the cubemap. This allows for FAR more accurate ambient/indirect info than the single flat color from before.
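A toy version of what the irradiance prefilter computes: for a given surface normal, average the environment's colors, weighted by how squarely each direction faces the normal (the cosine term). The discrete sample list here is a simplifying assumption; a real prefilter integrates over the whole hemisphere:

```python
# Hypothetical irradiance sketch: cosine-weighted average of environment
# samples around a surface normal. Directions are assumed to be unit vectors.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def irradiance(normal, samples):
    """samples: list of (direction, color) pairs from the cubemap."""
    total = [0.0, 0.0, 0.0]
    weight = 0.0
    for direction, color in samples:
        w = max(0.0, dot(normal, direction))  # back-facing directions contribute nothing
        total = [t + w * c for t, c in zip(total, color)]
        weight += w
    return [t / weight for t in total] if weight > 0 else total
```

The point is that a surface facing a bright sky patch picks up that sky's color as its ambient term, rather than the one flat ambient value everything used to share.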

Building off of probes, we have the Skylight, which is a special type of probe that covers basically everything. It lets you have a single probe cover an entire outdoor area instead of needing hundreds and hundreds of regular probes.

The problems we've had stem from these.

There are a lot of different ways to calculate the IBL behavior: some more accurate but slower, some less accurate but faster, and some dependent on other behaviors. It's compounded by the fact that probes, by design, don't cast shadows the way lights do, meaning they much more readily clip into areas they shouldn't. And because skylights are all-covering, they trample on areas you'd like to be dark or more locally reflective/lit, like caves or indoor areas.

All of this together means we've spent the majority of our time trying to find behaviors and tricks to minimize or mitigate the negatives while maximizing our flexibility. Of course, these techniques aren't simple or easy, and that's why it's taken as long as it has. In fairness, most engines seem to just embrace these innate limitations and leave it to the developer to work around them, but honestly it bugged us, so we wanted to take a stab at resolving some of these things.

But, in the interest of getting stuff out, we're going to spread those improvements and fixes out. For now, some of that stuff - like zonal clipping so probes don't bleed through, or smart blending so local probes always take priority over the all-encompassing skylight - will have to wait so we can get the bulk of the PBR work in. Nothing stops us from looping back and fixing up the odd bits in the following months.
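For a rough idea of what that "smart blending" could mean, here's a hypothetical weighting scheme (not the engine's actual one): local probes contribute based on how deep the shaded point sits inside their influence radius, and the skylight only fills whatever influence is left over:

```python
# Sketch of local-probes-over-skylight blending. All weights and falloffs
# here are assumptions for illustration.

def probe_weight(dist, radius):
    """1.0 at the probe's center, fading linearly to 0.0 at its radius."""
    return max(0.0, 1.0 - dist / radius)

def blend_weights(local_probes):
    """local_probes: list of (distance, radius) pairs for the shaded point.

    Returns (per-probe weights, skylight weight).
    """
    weights = [probe_weight(d, r) for d, r in local_probes]
    total = sum(weights)
    if total > 1.0:
        # Fully covered by local probes: normalize, skylight contributes nothing.
        return [w / total for w in weights], 0.0
    # Partially covered: skylight fills the remainder.
    return weights, 1.0 - total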

There's one last major sticking point that's proven to be a problem for the past two weeks, and unlike the neat improvements listed above, it isn't something we can so readily toss by the wayside.

Screen Space Reflections

The problem with probes is that, because they're cubemaps, they're not fast to bake. Even with the optimizations we've got so far, a 32-pixel-resolution cubemap - accounting for baking, processing and saving - takes about 30 milliseconds on my machine (which is admittedly pretty average). That's not a long time; in fact, when you hit bake it tends to be 'blink and you miss it' unless you're doing a number at once.

But it's also very much not "do updates in realtime" either.

Which means that dynamic objects - while they receive lighting information from the probes no problem - cannot appear in reflections. So a perfectly mirrored floor will reflect the environment just fine, but the player himself won't show up in it, and as you can expect, that tends to look really weird.

So, the cheap solution (because while it's approaching very fast, the mainstream isn't ready for raytracing just yet ;) ) is to do Screen Space Reflections.

It's notionally similar to Screen Space Ambient Occlusion, where we take a pixel and sample around it to find whether nearby surfaces would occlude some light. With SSR, we take a pixel, compute the reflection angle between it and the eye, and check whether there's another pixel in view we can use as reflection information. So in our mirrored-floor example, if the player object is standing there, our SSR post effect can shoot a ray from the eye to the floor pixel, calculate the reflection angle, trace the ray until it hits the player, and add that to our reflection info.
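Stripped to its bones, that trace looks something like the sketch below: step through the depth buffer pixel by pixel, and stop at the first pixel the ray passes behind. The 2D grid, step sizes and hit test are toy assumptions; a real implementation marches in projected clip space with a proper depth-thickness check:

```python
# Heavily simplified screen-space raymarch. depth_buffer is a 2D grid of
# stored scene depths; the ray starts at a screen position and advances in
# both screen space (dx, dy) and depth (depth_step) each iteration.

def ssr_march(depth_buffer, start, direction, start_depth, depth_step, max_steps=64):
    """Return the (x, y) of the first pixel the ray goes behind, or None."""
    x, y = start
    dx, dy = direction
    ray_depth = start_depth
    for _ in range(max_steps):
        x, y = x + dx, y + dy
        ray_depth += depth_step
        px, py = int(round(x)), int(round(y))
        if not (0 <= py < len(depth_buffer) and 0 <= px < len(depth_buffer[0])):
            return None  # ray left the screen: no reflection data available
        if ray_depth >= depth_buffer[py][px]:
            return (px, py)  # ray passed behind this surface: reflect its color
    return None
```

This also shows SSR's built-in limitation: anything off-screen (including the ray leaving the viewport) simply can't be reflected, which is why it complements the probes rather than replacing them.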

Suddenly, dynamic objects will reflect in the scene just like the static environment baked into the probes.

The main hangup is that the math for this is kinda weird. To do it efficiently, you basically have to raymarch between pixels, but in a sort of projected 2D way, because we're operating in screen space, not the full game world. This is the part that's been throwing off @Azaezel as he's been crunching on it. But we'll get it cracked, and it'll plug the last major hole in the probe/reflectivity concept.

So what's TODO then?

Thankfully, not much for the short term.

While the milestone is running a bit late, mainly due to this stuff, the expectation is that the bulk of the PBR work should be PR-able this week, with any bits still giving us problems rolled in progressively along with the other September milestone changes.

I'm currently working on breaking the bulk of the already-proven PBR/probe stuff down into PR-able chunks, so we'll start seeing those go up this week to begin testing for merge-in. We'll then be able to focus on the SSR work to lock that bad boy in and complete the core PBR implementation.

Looking forward to the future, we have a list of things we'll be implementing to improve flexibility, stability, or just neat features.


  • Reflection Zones - Allow management and control of a group of probes at once, as well as depth-clipping behavior to prevent probes inside the zone from 'bleeding out' into spaces they shouldn't be rendering in.
  • GBuffer baking with dynamic lighting - Instead of baking the full scene, we bake JUST the GBuffer, which we then feed in to the probe render code. The GBuffer generation is the most expensive part of our render pipeline, so that removes most of the overhead for dynamic-lighting probes. This will allow the probes to update any lighting information they have in the scene, so stuff like point or spotlights, or changes to time of day will see updated IBL information. This will only work for static geometry information, so stuff like players still won't contribute to the lighting at all.
  • Cubemap Array rendering - Tying into zones, we'll pack all related probes into a cubemap array and render them en masse in the shader, drastically reducing the render overhead of the probes.
  • Multi-pass baking - As-is, probes only bake the scene once. Going forward, we'll make it support multiple bakes which means that any irradiance information will propagate around, yielding better indirect lighting information for the IBL.
  • Contextual bakes - Also related to zones, but allow adding 'tags' to bake information, so you can swap out active probe cubemaps based on situations. An example would be if it's night time, you could flag the "night" bake group and all probes with it will use that cubemap instead of the daytime one, etc.
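The contextual-bakes item in that list is basically a lookup table. A hypothetical sketch, with all names and structure assumed for illustration:

```python
# Hypothetical 'contextual bakes' selection: each probe stores one cubemap
# per tag, and activating a tag group swaps which bake every probe uses.

def select_bakes(probes, active_tag, default_tag="day"):
    """probes: list of dicts mapping tag -> cubemap id.

    Returns the active cubemap for each probe, falling back to the default
    bake when a probe has no capture for the active tag.
    """
    return [bakes.get(active_tag, bakes.get(default_tag)) for bakes in probes]
```

So flipping the scene to the "night" group would swap every probe that has a night capture, while probes without one keep showing their daytime bake.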


Some of those are higher priority; some are just neat utility features that aren't must-haves. But that hopefully gives you an idea of what the deal with PBR currently is and what the roadmap is for it in the future.
