Everything posted by andrewmac
-
I wonder if you get a notification for me quoting you on something you didn't actually say.
-
Yeah, the only references to it are through random posts on the old forums plus the link in the topic of the IRC channel. I told Lukas he should make a blog post announcing it, but he said that's up to the steering committee... so... c'mon, steering committee!
-
I think I did a decent job explaining it in my original post about Deferred Shading: http://www.garagegames.com/community/forums/viewthread/137917
-
I'll be honest, I don't really care. As long as there's a formatting popup window to give me hints on how to format, BBCode or Markdown makes no difference to me. With that said, I should mention that the "BBCode" link at the bottom should target _blank (or _new, whichever it is) rather than using the current window and potentially losing what you've typed in your post so far.
-
Crappy but valiant attempt at the Cornell box: http://i.imgur.com/VWhFH5n.png http://i.imgur.com/oRBOAui.png
-
http://i.imgur.com/1g7hrYs.jpg
-
I resolved my volume texture issues, so it's no longer corrupting when changing screen resolution. I also added the ability to lock/unlock volume textures for updating, so I'm no longer creating a new volume texture for each update, which makes the whole thing a lot easier to work with and test. I added the ability for the propagation algorithm to flip/flop between two propagation grids, so you can run the propagation algorithm as many times as you want. I can't stress this enough: the propagation algorithm is just a simple bleed and not physically accurate at all. My next step is to make the propagation geometry-aware and add inverse-square falloff.

Now, some better screenshots of the whole setup. First, this is what my testing area looks like lit by the sun: http://i.imgur.com/blfQ3h0.png

Here are the lighting conditions I'm testing in: two point lights (red and green) off to the right-hand side, both with cubemap shadows on (which have bugs; note the banding in the light. That's not me, that's stock. They give the best occlusion, so I chose them): http://i.imgur.com/sLNinNG.png

That's what it looks like in stock. Now, here's what it looks like after a single propagation: http://i.imgur.com/a3ewWBj.png

And here it is after 3 propagations: http://i.imgur.com/R5c6HR9.png
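The flip/flop between two propagation grids can be sketched roughly like this. This is a hypothetical CPU-side sketch, not the actual Torque code: single-channel intensity instead of RGB, and a naive six-neighbor bleed to match the "simple bleed, not physically accurate" caveat above.

```cpp
#include <vector>
#include <algorithm>
#include <cstddef>

// Hypothetical flat 3D grid of light intensities (size^3 cells).
struct Grid {
    int size;
    std::vector<float> cells;
    Grid(int s) : size(s), cells(static_cast<std::size_t>(s) * s * s, 0.0f) {}
    float& at(int x, int y, int z) { return cells[(z * size + y) * size + x]; }
    // Out-of-bounds reads return 0 so edge cells just bleed into nothing.
    float get(int x, int y, int z) const {
        if (x < 0 || y < 0 || z < 0 || x >= size || y >= size || z >= size) return 0.0f;
        return cells[(z * size + y) * size + x];
    }
};

// One "bleed" pass: each cell picks up a fraction of its 6 neighbors.
// Not geometry-aware and no inverse-square falloff, as noted above.
void propagate(const Grid& src, Grid& dst, float bleed = 0.15f) {
    for (int z = 0; z < src.size; ++z)
        for (int y = 0; y < src.size; ++y)
            for (int x = 0; x < src.size; ++x) {
                float neighbors = src.get(x - 1, y, z) + src.get(x + 1, y, z)
                                + src.get(x, y - 1, z) + src.get(x, y + 1, z)
                                + src.get(x, y, z - 1) + src.get(x, y, z + 1);
                dst.at(x, y, z) = src.get(x, y, z) + bleed * neighbors;
            }
}

// Run N passes by flip/flopping the two grids; returns the final grid.
Grid runPropagation(Grid a, int passes) {
    Grid b(a.size);
    for (int i = 0; i < passes; ++i) {
        propagate(a, b);
        std::swap(a.cells, b.cells); // flip/flop: result becomes next pass's source
    }
    return a;
}
```

Because each pass reads one grid and writes the other, light can only travel one cell per pass, which is why running more propagations (as in the 1-vs-3 screenshots) spreads the light further.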
-
@Timmy I can't really dedicate a lot of time to my experiments in Torque anymore, so I have to keep them very focused and short-lived. What you see so far is a total of maybe 10 hours of programming. I was hoping to have it working well in only a few days, but it's taken a bit longer to work out some bugs with 3D textures in Torque that I didn't see coming. Luckily, JeffR took interest and is now contributing to the project. It was pretty much his idea anyway: when the VXGI demos came out, he was saying it would be interesting to try to implement it as an offline lighting solution in Torque, and as hardware (and Torque) got better, it could be converted to an online solution. This is the first step towards those dreams, I suppose. It was also partially inspired by the Godot engine. I never looked at the code, but I strongly suspect their GI baking solution is an offline LPV implementation.

@buckmaster All of the heavy calculations are done during editing, so even if it were so bad as to take 20 minutes to calculate, the final result can be stored in a binary format if needed. The actual application of the lighting results is pretty light: it's a single full-screen postfx that samples the depth buffer, determines the worldspace position, does a simple subtraction and division to determine UV coordinates, samples a 3D texture, and blends the result into the light buffer.

The heaviest cost of actually displaying it will come from the memory usage of the volumes. If you were using lots of high-resolution volumes in a level, the memory usage could get up there. I've seen people cover all of Sponza with a single 256x256x256 volume, so 256x256x256 x 4 bytes (RGBA8) = ~67 MB of video RAM. Not too bad, I don't think. It also depends on the layout of your level. For instance, with something like this: http://i.imgur.com/eBswYia.jpg it might not be the best decision to use a single volume to cover the entire level; that's a lot of unused area. It might make more sense to use multiple volumes. I'll likely have the final shader support up to 4 volumes in one shot (arbitrarily chosen; maybe 8, maybe 16, we'll see), calculate a visibility score for all the volumes in the level on the CPU, and choose the 4 with the highest score to sample in the shader.
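The memory estimate and the CPU-side "pick the top 4 volumes by visibility score" idea above can be sketched like so. Helper names and the `VolumeScore` struct are hypothetical, not from the actual codebase.

```cpp
#include <vector>
#include <algorithm>
#include <cstddef>

// Back-of-envelope memory use for an RGBA8 volume texture: dim^3 texels
// at 4 bytes each (one byte per channel).
std::size_t volumeBytes(int dim) {
    return static_cast<std::size_t>(dim) * dim * dim * 4;
}

// Hypothetical per-volume record; "score" is the CPU-computed visibility score.
struct VolumeScore { int id; float score; };

// Pick the N highest-scoring volumes to bind in the shader.
std::vector<VolumeScore> pickTopVolumes(std::vector<VolumeScore> all, std::size_t n) {
    n = std::min(n, all.size());
    // partial_sort is enough here: only the first n entries need to be ordered.
    std::partial_sort(all.begin(), all.begin() + n, all.end(),
        [](const VolumeScore& a, const VolumeScore& b) { return a.score > b.score; });
    all.resize(n);
    return all;
}
```

For the Sponza example, `volumeBytes(256)` gives 67,108,864 bytes, the ~67 MB figure quoted above.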
-
Now, post screenshots of your new grass in Torque :P I'll do a little research and see what I can find for a grass shader.
-
This might be the first time the world has ever answered back!
-
Mobile seems to work well. When I'm not logged in, though, the theme is the default blue one; when I log in, it changes to the red one. The only problem I spotted in the red theme is that it says "t3dforums.lukasj.org" at the top, and it obstructs the Torque3D logo.
-
Content - Level design, models, animations, physics, etc.
Rendering - Materials, textures, lighting, postfx.
C++ - Expanding and utilizing the engine via C++.
TorqueScript - Scripting questions, discussions, etc.
-
What is this, 1996? Where are the YouTube tags?
-
Here's a video of geometry being voxelized: pM7gs99NmTU
-
I've been interested in light propagation volumes since first seeing them in a Crytek presentation. I think it's a simple and novel idea for decent real-time global illumination. Unfortunately, at the moment, Torque3D is still using DirectX 9, so it lacks two major features I would need for real-time LPVs: rendering to a 3D texture, and compute shaders. It CAN be done without them, but it's messy. What I decided to do in the meantime is try to implement an offline version of them: completely calculated on the CPU, and static.

How does it work? You place an OfflineLPV volume around an area in the level. It steps through the area of the volume and tests for static geometry, producing a voxelized version of the area inside the volume. Next, it detects the lights in the scene and injects them into the grid. Finally, it propagates the light outward through the grid. What is essentially a postfx is run after the lighting pass; it takes the worldspace position of each pixel and checks if it falls within the volume. If it's in the volume, it pulls the color from that spot in the cube and blends it into the lighting buffer.

Here's an example of an area being lit entirely by a cube filled with testing values (point sampling on for debugging): http://i.imgur.com/e7jzyhz.png

I haven't had a chance to put a lot of time into the propagation algorithm yet, but here's a screenshot with some simple tracing: http://i.imgur.com/VJfFk4D.png There's a single green point light offscreen to the right that is directly lighting the convex shape on the left-hand side, bouncing off, and illuminating the backside of the convex shape to the right.

Will post more screenshots and updates as I work on it. As usual, my code is on my GitHub: https://github.com/andr3wmac/Torque3D/tree/offlineLPV
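The "falls within the volume" check and the mapping from a pixel's worldspace position into 3D texture coordinates can be sketched like this. This is a hypothetical CPU-side illustration with made-up names (`LPVVolume`, `worldToVolumeUVW`); in the engine this math runs in the postfx shader.

```cpp
// Hypothetical sketch: world position -> normalized 3D texture coordinate
// inside an axis-aligned LPV volume. A subtraction and a division, as described.
struct Vec3 { float x, y, z; };

struct LPVVolume {
    Vec3 minCorner; // world-space bounds of the OfflineLPV volume
    Vec3 maxCorner;
};

// Returns true and writes uvw in [0,1]^3 if pos falls inside the volume;
// a false result means the pixel is left untouched by the postfx.
bool worldToVolumeUVW(const LPVVolume& v, Vec3 pos, Vec3& uvw) {
    Vec3 extent = { v.maxCorner.x - v.minCorner.x,
                    v.maxCorner.y - v.minCorner.y,
                    v.maxCorner.z - v.minCorner.z };
    uvw = { (pos.x - v.minCorner.x) / extent.x,
            (pos.y - v.minCorner.y) / extent.y,
            (pos.z - v.minCorner.z) / extent.z };
    return uvw.x >= 0.0f && uvw.x <= 1.0f &&
           uvw.y >= 0.0f && uvw.y <= 1.0f &&
           uvw.z >= 0.0f && uvw.z <= 1.0f;
}
```

In the shader the resulting uvw would be fed straight into a 3D texture sample, and the fetched color blended into the lighting buffer.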
-
http://i.imgur.com/QVQkgSh.png
-
DO YE NEED APPROVAL?
-
Came for dragons, was disappointed.
