
PBR: Principles, Practice, and Prepwork


Recommended Posts

How it started:

http://www.garagegames.com/community/forums/viewthread/136389

First set of prepwork

http://www.garagegames.com/community/forums/viewthread/137917

Contributing crew:

Andrew Mac

Tim Newel

Haladrin

Timmy

Anis

Luis

MangoFusion


Principles:

Compared with more traditional rendering workflows, the general high-level philosophy of PBR is a more accurate representation of how light bounces around a given scene, which yields both more consistent results and a better set of tools for producing scene-based results. http://en.wikipedia.org/wiki/Cornell_box would be one sample test case thrown around.


Workflow differences:

At the end of the day, from an artist's perspective, the art pipeline largely differs in terms of specular inputs: http://www.marmoset.co/toolbag/learn/pbr-practice (jump to inputs)

For folks already in production and used to the old system, the conversion process, at time of writing and using the current design, can best be summed up as:

1) The red channel of the specular map is used to manipulate how blurry a reflection is (roughness).* **

2) Strip out the AO mask you're used to baking into a diffuse texture, and put that in the green channel.* **

3) Take the alpha mask you're used to using to modulate cubemapping via 2 diffuse layers, and chuck it into blue instead for metalness.* **

http://i.imgur.com/WMjMzPz.jpg (http://i.imgur.com/etSly9A.jpg )

* Note: for the deferred transitional branch, we're sticking to a dot-product approximation of colored specular that turns it into a single-channel result for roughness lookup.

** Alternatively, tech was developed to allow swapping channel I/O around: simply do not feed the material a specular map, and instead feed it at least a metalness and a roughness map, specifying channels: http://i.imgur.com/TpzSLpc.png


Present status:

I'll have more on this later, but to quickly address this end of things for folks:

 

Cool, I seriously thought I missed something that was 'complete'. If you are not tracking it as being 'good to go' then I am all about waiting. PBR (physically based rendering) in a practical sense would be a game changer for T3D. I know of and am following the current work. Just thought I might have missed something. To be honest, I get the PBR thing, but at the SAME time, whoever is doing it NEEDS to make MASSIVE changes to the material tools as well (since PBR uses 'special' mats).


Oh yeah, I get the deferred shading v. deferred lighting but, in the end... isn't it just six of one and half a dozen of the other? Just asking since I STOPPED following once CryEngine 2 came out. I see where one v. the other can make a difference but... I don't see any (mathematical) difference in performance.

 

PBR isn't. deferred_complete refers to the last submission series, where it was broken up and needed clarifications on how to resolve the PRs, since they conflicted when broken into steps (it has since been expanded to include additional fixes found along the way. Edit: and also kept up to date with development on a semi-daily basis so folks can see what they'd end up with were it rolled in). There's some prepwork in there already, like the 'specular' map being shunted over to a scalar derived from RGB, with a reserved for metalness as an alternate to the cubemapping layer requirement in stock, and the like.


PBR proper simplifies that down to the g and a channels of the 'specular' map (roughness and metalness respectively), since those are the highest-fidelity channels for DXT5s. (It also gives us an additional channel to play with for AO, once (if) we get to that point.) It also changes the specular power and strength entries over to roughness and metalness internally and in-editor, and adds automatically applied IBL in the form of a levelinfo fallback and envVolumes (areas taking a different IBL entry). That last may be entirely superseded by the LPV end that Mac and Jeff® are focusing on, depending on how that pans out. Will try and do up a fuller report some time this week, but in the meantime, layout and thinking can best be summed up with the internal commit docs: https://github.com/GarageGames/Torque3D/pull/867


Do note that while the backwards-compatible end of things was intended for inclusion in 3.7, too many other things have cropped up, so it'll likely be going up for a third review post-3.7, once the opengl/linux system is nailed down, and likely after the repo structure has been reworked (which makes sense to me at least, since there's quite a few alterations to game/core, game/shaders, and a few in game/tools that need to accompany the source-side alterations).

Edited by Azaezel

Sorry if this has been asked before; I didn't look through the whole previous thread. It sounds like you are going with the metal/roughness Disney method? What kind of performance hit are you seeing when moving to PBR?


I saw a mention in the previous thread about making the PBR a runtime option. This didn't make sense to me, since everything I've looked at implies that PBR requires entirely different art assets, created in a whole different mindset. I don't see how it could be something you could just switch on and off. Am I missing something? Apologies if this was already discussed.


This seems like a great way to improve the reusability of art assets, and make art easier to create in general. Looking forward to playing around with it!


The numbers aren't anything I'd write home about at present:

(All shots taken at highest settings for maximum load.)

stock:

http://i.imgur.com/ltJBBN4.jpg

deferred:

http://i.imgur.com/awl8lGW.jpg

pbr:

http://i.imgur.com/NY144Zu.jpg


There's a few more places we can make gains, like ditching the shader-inserted linearization methodology for a hardware-driven sRGB loading solution (that end is pending an opengl-side equivalent). Ditching the general specular-color-to-single-range converter left in to ease folks' transitions will also take some load off.
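For reference, the shader-inserted linearization being discussed boils down to the standard sRGB transfer function, which hardware sRGB texture formats apply for free at sample time. A small sketch of the per-channel math (the function name is illustrative):

```cpp
// Standard sRGB-to-linear transfer function (piecewise, per IEC 61966-2-1).
// A hardware sRGB texture format does this at sample time; a shader-side
// fallback has to compute it per texel, which is the cost mentioned above.
#include <cmath>

float srgbToLinear(float c) // c in [0, 1]
{
    return (c <= 0.04045f) ? c / 12.92f
                           : std::pow((c + 0.055f) / 1.055f, 2.4f);
}
```
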


Additionally, to make some headroom for the calculation load, we revisited how shadows are calculated and, instead of the stock re-render every frame, went with a 2-phase solution:

http://i.imgur.com/4KKD7iV.jpg

+

http://i.imgur.com/O6vsHZX.jpg / http://i.imgur.com/9Z4kMxq.png

How that works is pretty simple: tag a given material as a dynamic shadow caster or not, and it'll use a given light's static or dynamic listing for how often to refresh that asset's shadow. (At time of writing, for backwards compatibility, everything still defaults to rendering using the dynamic rate. 8 ms was picked to ensure it's pretty much always going to be refreshed per frame, while you'll note for the pointlight, static refreshes at a rate of 1/4 sec by default, but is tweakable if needed.)
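The refresh policy above can be sketched as a pair of independent timers per light: one at the 8 ms dynamic rate and one at the 250 ms static default. This is a minimal illustration under those assumptions, not the engine's actual classes:

```cpp
// Hypothetical sketch of the two-phase shadow refresh described above: each
// light keeps separate static and dynamic shadow maps, refreshed on
// independent timers. Names and defaults are illustrative.
#include <cassert>
#include <cstdint>

struct ShadowMapPair
{
    uint32_t dynamicRefreshMs = 8;   // effectively every frame
    uint32_t staticRefreshMs  = 250; // quarter second, tweakable per light
    uint32_t lastDynamicMs = 0, lastStaticMs = 0;

    // Returns true when the map for this caster class is due for a re-render.
    bool shouldRefresh(bool dynamicCaster, uint32_t nowMs)
    {
        uint32_t& last = dynamicCaster ? lastDynamicMs : lastStaticMs;
        uint32_t  rate = dynamicCaster ? dynamicRefreshMs : staticRefreshMs;
        if (nowMs - last >= rate)
        {
            last = nowMs;
            return true;
        }
        return false;
    }
};
```
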


On a switch... while technologically possible, from a pragmatic perspective, the experiments we attempted to run with that... the kindest thing I can say is it turned into a hot mess real quick. Rewiring the gbuffer, lighting shaders, shader features... yeah. Not happening without someone absolutely dedicated to that end, so we can actually get the rest of this stuff into a truly usable state.


Wait, so there's already a PBR build?



The numbers look fine; there's a lot of odd things about the deferred shading compared to deferred lighting, though. Like, all the rendered terrain cells are override cells, and there is in general a higher polycount (although fewer drawcalls?) :P Also the level doesn't look like it's exactly the same :P


The higher polycount/fewer drawcalls bit is due to the pair of shadowmaps cycling at different rates (in that scene the same, since scattersky is configured for dynamic shadow motion). And yeah, linearization is used to get a result that more closely matches what art tools spit out for textures. (That, and dear god, the contortions you'd have to make to the PBR math otherwise.)


You have made awesome progress Az, well done :mrgreen: :mrgreen:


Slightly off topic, but one of the biggest things I have seen that is really holding T3D back is just how CPU-bound it is. Using T3D now on a decent-size level, it quickly becomes very apparent how underutilized the GPU is. It needs multithreading.

The higher polycount/fewer drawcalls bit is due to the pair of shadowmaps cycling at different rates (in that scene the same, since scattersky is configured for dynamic shadow motion). And yeah, linearization is used to get a result that more closely matches what art tools spit out for textures. (That, and dear god, the contortions you'd have to make to the PBR math otherwise.)

 

The static/dynamic shadowmap mixing will take up more memory because of both maps, but the total rendered polys should be roughly the same, since stock renders all the shadows all the time. Unless things are being shadow-casted more than once, there should be no difference in polys with the static/dynamic shadow system.


I have no idea why deferred shading is reporting more polys in those screenshots. The reflection on the water seems very different, and would have a major impact on polys if it were not working correctly or rendering different distances than stock. Additionally, if you're using dynamic cubemaps on anything for PBR in that scene, you're gonna tank your performance and increase your polys significantly.


The static/dynamic shadowmap mixing will take up more memory because of both maps, but the total rendered polys should be roughly the same, since stock renders all the shadows all the time. Unless things are being shadow-casted more than once, there should be no difference in polys with the static/dynamic shadow system.


I have no idea why deferred shading is reporting more polys in those screenshots. The reflection on the water seems very different, and would have a major impact on polys if it were not working correctly or rendering different distances than stock. Additionally, if you're using dynamic cubemaps on anything for PBR in that scene, you're gonna tank your performance and increase your polys significantly.

 

To catch non-IRC folks up:

You're right, polycount doesn't make sense from that context after all. Gonna have to look into that.


On the PBR IBL/dynamic end of things, the current system is using a 5-stage mechanism:

If it's attached to a sceneobject, it attempts to tag a material with the dynamiccubemap shader feature.

If such a material is tagged, it checks to see if the object has been assigned a ReflectorDesc. If so, it generates a cubemap from the POV of that object.

If the object doesn't contain one, it checks to see if the object is within an envVolume, and uses the cubemap assigned there. (The thinking at design time being: spit out a cube from the center of a volume and pass it along. Not yet implemented, so pure prefab lookup atm.)

If it's not within a volume, it looks to a (static) entry in levelinfo.

If any of the above is true, it'll run through a modified reflectcube shadergen feature using roughness as mip level (specularmap.g, stored in matinfo.b in the gbuffer) and metalness as a diffuse to diffuse*cubemap lerp value (specularmap.a, stored in matinfo.a in the gbuffer). (Really, that should be modified to a proper metalness equation.)

(code is at https://github.com/Azaezel/Torque3D/blob/PBR/Engine/source/shaderGen/HLSL/shaderFeatureHLSL.cpp#L1879 and https://github.com/Azaezel/Torque3D/blob/PBR/Engine/source/shaderGen/HLSL/shaderFeatureHLSL.cpp#L1927 respectively)

May or may not pursue that end further in light of the LPV work being done.
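The fallback chain above can be sketched roughly as follows. The types and names here are placeholders for illustration, not the actual engine classes; see the linked shaderFeatureHLSL.cpp for the real shadergen side:

```cpp
// Illustrative sketch of the cubemap selection fallback described above:
// per-object ReflectorDesc -> enclosing envVolume -> levelinfo static entry.
// An invalid result means the material gets no IBL cubemap.
struct CubemapRef
{
    int id = -1;
    bool valid() const { return id >= 0; }
};

struct SceneObjectInfo
{
    CubemapRef reflectorDescCubemap; // generated from the object's POV, if assigned
    CubemapRef envVolumeCubemap;     // from an enclosing envVolume, if any
};

struct LevelInfoData
{
    CubemapRef staticCubemap;        // level-wide (static) fallback
};

CubemapRef pickIblCubemap(const SceneObjectInfo& obj, const LevelInfoData& level)
{
    if (obj.reflectorDescCubemap.valid()) return obj.reflectorDescCubemap;
    if (obj.envVolumeCubemap.valid())     return obj.envVolumeCubemap;
    return level.staticCubemap;
    // Shader side, per the post: roughness picks the cubemap mip level, and
    // metalness drives a lerp(diffuse, diffuse * cubemap, metalness).
}
```
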


Alright. So. Design contention time.

It would seem at least a couple folks are unhappy with gba entries, and would prefer a different setup. 2 quick options, and a second pair that'll take more research.


Options in order of probable time-to-implement:

Option 1: stick with a combined packed map, shift it over to RGB (say, roughness, ao, metalness)

Option 2: add a series of fallbacks to the above, meaning you'd have a setup that looks like: combined map, then roughness map or slider bar, then metalness map or slider bar, then AO map (or nothing there; it'd just go to a default)

Option 3: runtime compile a combined packed map.

Option 4: write a tool for packing


Weigh in. You'll be stuck with it for a bit.


Option 1 sucks for artists, as they'd have to "compile" a combined map every time they wanted to make a slight change to the maps used. Also, option 4 is not really an option; it's a supplement to option 1, since the tool would spit out the RGB map to go with that option.


Option 2 is the most flexible, but it makes the shaders and material features uglier, since you have to account for two possible input methods: either the combined map or the separate maps. The cleanest and fastest way to handle this would be to combine your 3 optional maps into 1 combined map... but that just leads to...


Option 3, which I believe to be the best option. Artists supply roughness, metalness, and AO as separate maps and change/tweak them as they please, and then during the material loading phase we generate a single combined map to upload to the GPU by grabbing the specific channels from each of the 3 maps. This is really quite trivial as long as the maps are the same resolution (which they should be).


I would also prefer option 3.


Manually combining those maps is complicated, since not all programs support that, especially not the free and/or open source ones like GIMP.

This option is also very ugly for the reason andrewmac mentioned: you have to extract, recombine, and re-export every time you want to make a slight change. And the worst case would be an artist on a full project with hundreds or thousands of layers being told: you have to manually redo them all. That is simply not an option.


Option 2 is also not good, since it adds more options and more confusion, and those options either go unused or should not be used, because they are uglier than the real deal, similar to the specular button thing we have at the moment. Better to motivate artists to do it right from the beginning.


Option 4 would also be acceptable for me, since I would not have to worry about combining and exporting them right; I can just make my maps and, when done, run the tool. This option may save disk space, but still leaves the issue of having to recombine everything if you make a small change to one of the layers.


Option 3 is my vote as well. The less time spent compiling technical stuff, the more time available for creating assets and art. Also, automating as much as possible limits human error. If all we need to know are naming conventions, format/file types, and paths to where we upload the art, then that'd improve the pipeline a lot. As mentioned before, I can work however we need to -- but auto-compiling these maps would be great :)


Frontend-wise, odds are 2 and 3 will end up looking the same, barring any particular organizational UI layout, since you'll still need to feed the interface file locations (not real big on mandating stuff like 'you must suffix this file with _r, _ao, and _m or it won't know what to do'). Really, the only difference there would be whether an additional step is taken to go from 3 interchangeable entries to 3 mutually dependent ones.

/butts back out. so users can talk.


edit: ah ken speel

Edited by Azaezel

Option 1: stick with a combined packed map, shift it over to RGB (say, roughness, ao, metalness)

Option 2: add a series of fallbacks to the above, meaning you'd have a setup that looks like: combined map, then roughness map or slider bar, then metalness map or slider bar, then AO map (or nothing there; it'd just go to a default)

Option 3: runtime compile a combined packed map.

 

It only compiles once? I.e., does it act like collada files? So we could remove the source files and it would still have the cached version. Also, how will the format be controlled? As in, what format will it be compiling to and from, and do you want to limit these maps to that format alone? So in short, would it be taking TIFFs as the input? Could it take a Photoshop, Krita, or GIMP document with named layer folders and auto-separate those out into the correct channels? Would it only export out something like a DXT5 DDS?


Some artists' and companies' workflows use a master image file to work from, with each portion of the material texture under its own folder in that image file. Having to export each folder as a separate file, if done more than a few times, wastes time. Yep, we can create scripts/actions to help with this in the art programs, but that's almost as much time as making a script to do option 1.


In any case, if three wins out, could it be set up so that all master images being combined can be chosen from outside the materials folder (or even the engine directory)? To keep things clean, and to avoid sending redundant information when using SVN, the images linked to the combined compiled version should not have to be in the same directory. Use local directory information with the material folder as the master path, rather than paths specific to one computer, so the link won't break as easily if one of the images is edited on another machine.

 

Option 4: write a tool for packing

Kinda redundant with 3.

