  • Work Blog Updates

    • By JeffR in Torque 3D
      So, on today's installment of "random rambles about development things"
      But for real, it's a good time to do a new workblog and keep people in the loop for those not in the discord, or those that aren't spending every day in it  
      So, what's on the ol' discussion stuffs today? 
      Well, for the big one, the main Feature target for 4.1: Components. 
Or, more specifically, DOCs. What does DOCs mean? Well... 
      Directors, Objects, Components 
You may have heard us discuss Entity-Components or Entity-Component-Systems (EC and ECS, respectively). For a brief refresher on the concepts, here's a simple breakdown: 
Entity-Components as a paradigm can be described simply as having an Entity object, and then Component objects that both contain data and implement the logic for it. As in, if you have a component to render a shape, the component not only holds the info for what shape to render, but also the logic to render the shape. This is how most other engines do components. 
      The reason for that is pretty simple. It's robust, and it's easy to work with. It's not the most efficient system, but it's pretty hard to screw up. You slap a component onto an object, set the properties on the component, and then the component does the thing. 
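For the code-minded, here's a minimal sketch of that EC shape in C++ - illustrative names only, not actual engine classes:

```cpp
#include <memory>
#include <string>
#include <vector>

// EC style: a component owns both its data AND the logic that uses it.
struct Component {
    virtual ~Component() = default;
    virtual void update() = 0;
};

struct RenderMeshComponent : Component {
    std::string meshFile;           // the data: what shape to render
    void update() override {
        // ...and the logic: issue the draw call for meshFile right here
    }
};

// The entity is a full object that owns its (heap-allocated) components.
struct Entity {
    std::vector<std::unique_ptr<Component>> components;
    void update() {
        for (auto& c : components)
            c->update();            // each component just does its thing
    }
};
```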
When I did the main previous implementation of components, this was also the system I went with. The MAIN problem with this approach is that any given component is kinda...chonky. You also have a lot of bloat on the Entity object in most cases. And with all the bits that have to cross-communicate to ensure dependencies work (you've gotta have collisions for physics to work properly, for example), the order of events (collisions are calculated, then physics), and any deeper engine system dependencies, it can spiral quite a lot. 
      Beyond that, it's also very difficult to thread any of the component's workloads because everything cross-communicates in order to work. You can't easily punt a physics component into a thread if other threads need to talk to the same collision component or entity it uses, etc. 
So, advancements in the theory of component implementations led to ECS: Entity-Component-Systems. 
Now, the confusing use of "Systems" aside, the main differentiator from EC is that Components now ONLY contain data. They don't implement any logic whatsoever. Likewise, ALL the burden of functionality is moved off of Entities. In a 'pure' ECS implementation, an Entity is nothing more than an ID for Components and Systems to reference. Instead, Systems implement all functionality logic. If you have a physics component, there's a PhysicsSystem that implements the actual logic for it. 
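Sketched the same way (again, purely illustrative names), a 'pure' ECS flips that around:

```cpp
#include <cstdint>
#include <vector>

using EntityId = std::uint32_t;     // an entity is nothing more than an ID

// Components are plain data, no logic whatsoever.
struct PhysicsComponent {
    EntityId owner;
    float velocity[3];
};

// The System owns all the logic, and iterates components stored in one
// dense, tightly packed array - which is where the cache and threading
// wins discussed below come from.
struct PhysicsSystem {
    std::vector<PhysicsComponent> components;

    void update(float dt) {
        for (auto& c : components)
            c.velocity[1] -= 9.81f * dt;   // e.g. apply gravity to everyone
    }
};
```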
This is certainly more complex to implement. In fact, very few engines or games use ECS. Unity's new DOTS approach is based on ECS, and a few games like Overwatch have utilized it. But the innate complexity of the approach, and how abstracted the data and implementation are, means it's far less common. 
      So why use it? 
Because it is MUCH easier to be cache-coherent and to thread things. For the non-coders out there, cache-coherency is the idea of keeping all the memory a given chunk of code in the engine uses bunched together. Think of it like studying: rather than getting a book, reading a paragraph, walking back to the shelf, putting that book away, getting a new book, reading the next paragraph, and so on - which would be very, very inefficient - you just get all the books you need up front and can quickly reference between them. 
In practice, memory in the computer works similarly. So if you can cram all the data you need to work on into the same blob of memory, performance is improved SIGNIFICANTLY. But it's not very 'human friendly', which is why you get stuff like ECS. All components of a type can be crammed into a dense set of data, so when a System goes to implement logic, you've got all the relevant components in a tight blob of memory, and the whole thing can be processed without having to go "get another book", as it were. 
In addition to this, the data being more detached from the implementing logic (and better managed in memory) makes it much easier to implement the logic across multiple threads. This allows the machine to crunch a lot of objects in parallel - which is especially good on modern CPUs that sometimes have dozens of threads. 
But there's a good number of downsides to this approach as well. It is, as said, not very human friendly. Implementing new components and their associated systems is not how most code is written, so it can be difficult to work with. It also requires much more tracking of when things are added or removed, and when things should run. Dependencies are still a burden the components and systems have to track in order to know when to do things, and scripting it is very, very difficult. 
      All those and a bunch of other smaller inconveniences make it generally a pretty poor paradigm to work with in something as complex as a game engine. There's a LOT of ECS implementations out on the internet. But they're more academic than practical because of the inherent limitations of the approach. Cramming it into a game engine while still making it easy to work with from a scripter, designer or artist's perspective is pretty hard. 
And both of these approaches have various limitations from a networking perspective. It's very difficult to have the server and client safely agree on the data the client has without trafficking a ton of data, which is bad for net performance. 
So, between what I learned from implementing an EC-style deal in the first pass of components, and a lot of tests and research into ECS, I settled on the conclusion that both approaches just kinda aren't ideal. 
      So I did some work and fashioned up a - as far as I can tell - novel components implementation for Torque3D. 
      Directors, Objects, Components, natch. 
So, what's the deal then? Well, per the name, there's 3 main components (heh) to the model, which we'll cover here: 
      Directors 
      So what’s a director? Well, in practice a Director is a simple class that ‘directs’ when and where updates to components happen, hence the name. The idea is that we want to move the burden of when and why updates happen off the objects and components. 
At its core, a Director is in charge of doing a particular thing, generally updating a specific component or set of components. Like, say, when we want our RenderMesh component to draw. The Director has a specific timing to it (aka Rendering) that the rest of the engine can invoke through the DirectorManager, which is pretty much just a simple container class. 
When we want anything with the Rendering timing to kick off, we tell that DirectorManager to run an update on said timing. And in turn, any Director with that timing is told to do its work. Simple enough. 
      So in our example of the RenderMesh components, the RenderMeshDirector has the Rendering timing, the engine, when it goes to draw objects, can tell the DirectorManager to run the Rendering timing, and our RenderMeshDirector gets told to update. When this happens, the Director loops over valid RenderMesh components and directs them to do their work. 
      And thus, our RenderMesh components have drawn their meshes. 
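To make that flow concrete, here's a rough sketch of the shape of the idea (these are not the actual engine classes, just the pattern as described):

```cpp
#include <vector>

enum class Timing { Physics, Rendering /*, ...as many as you need */ };

struct RenderMeshComponent {
    void render() { /* draw the mesh */ }
};

// A Director owns the "when and why" for one kind of component.
struct Director {
    virtual ~Director() = default;
    virtual Timing timing() const = 0;
    virtual void update() = 0;
};

struct RenderMeshDirector : Director {
    // The dependency-validated list of components this director manages.
    std::vector<RenderMeshComponent*> validComponents;

    Timing timing() const override { return Timing::Rendering; }
    void update() override {
        for (auto* c : validComponents)
            c->render();             // direct them to do their work
    }
};

// Pretty much just a simple container class, as described.
struct DirectorManager {
    std::vector<Director*> directors;

    void runTiming(Timing t) {
        for (auto* d : directors)
            if (d->timing() == t)
                d->update();         // any director with this timing fires
    }
};
```

With that shape, the engine's draw pass boils down to something like manager.runTiming(Timing::Rendering).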
      Now this sounds like a lot of work compared to just looping over the objects or components directly, but there’s a bunch of benefits to this. 
For example, as noted with EC and ECS implementations, one of the biggest tangle-up points is dependency management. Normally components have to track what dependencies they have and whether they're fulfilled, and if not, the component is not enabled. Any time a new component is added to its owner, the component is in charge of validating its dependencies. 
      This is important, certainly, but it also can lead to a lot of complexity, spiraling dependency chains, and code bulk on the components themselves. So instead, we move that to the director. 
      Because ultimately, the director is in charge of a set of components, like our RenderMesh components, we can track which ones are valid in the Director. If an Object adds a new RenderMesh component, it naturally associates to our RenderMeshDirector, and it now knows if it’s valid or not. The component itself doesn’t have to care in the slightest. 
      This keeps the component code leaner and cleaner, so it’s easier to maintain. 
It also means we can much more explicitly control the timing and sequencing of when things run in the engine. I used the example of the "Rendering" timing before, but it's powered by a simple enum, so you have as many entries as you can cram into an enum for when to kick off updates. You can update just physics things, or just rendering things, or specifically objects that have client inputs because they're controlled. 
      This gives a much more comprehensible order of operations about when and where stuff is executed in the engine, making it easier to track and debug when stuff kicks off. 
      Additionally, because the director has an explicit list of components it’s in charge of, and we are specifically working on that list of components at a specific time in the execution of the engine’s loop, it means that we have MUCH more control over the memory in play for the engine. 
      This ties back to the aforementioned cache coherency. We can keep a list of components, like RenderMesh components, and that list can be much more easily just shoved into memory as a straight shot, minimizing how much the CPU needs to jump around. The Director works on THESE objects, so the CPU can have all the data on hand. 
      It also means that, between the more tightly bound memory and the express execution timing, we can much more safely handle when things are threadable. Which is a big thing for game engines. 
Even major engines are still predominantly single threaded. So when you're busting out your brand new CPU with 36 cores, most of them are sitting around doing nothing. With Directors controlling the memory and execution, we can spool up a bunch of tasks in the threadpool and split the workload across those cores/threads that aren't doing anything, allowing the regular workloads to be processed way faster. And this should, in theory, scale well with object counts. 
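In sketch form - assuming a naive std::thread split rather than whatever the engine's actual threadpool does - a director can chunk its list across idle cores:

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct PhysicsComponent {
    void tick(float dt) { /* integrate motion */ }
};

// Safe to parallelize because the director owns the whole densely packed
// list and this runs at one explicit point in the engine loop, so nothing
// else is touching these components right now.
void updateInParallel(std::vector<PhysicsComponent>& comps, float dt) {
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (comps.size() + workers - 1) / workers;

    std::vector<std::thread> pool;
    for (std::size_t start = 0; start < comps.size(); start += chunk) {
        std::size_t end = std::min(start + chunk, comps.size());
        pool.emplace_back([&comps, start, end, dt] {
            for (std::size_t i = start; i < end; ++i)
                comps[i].tick(dt);
        });
    }
    for (auto& t : pool) t.join();
}
```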
      So yeah, Directors are kinda the MVP of the system, what with keeping a tight wrangle on memory, streamlining execution of parts of the engine, standardizing a lot of bits, and also making the engine significantly more threadable than before. 
      So, you know. A little bit of a thing there. Which takes us to the next bit of our paradigm: 
      Objects 
      Compared to everything that directors do and are, Objects are pretty simple in the end. These are your entities that you slap components onto. Unlike a full-fat ECS implementation, Entities are ultimately still full objects. 
There's a good reason for that, of course. The big one is that T3D has a scripting language, and that's super useful. So an Entity can't just be an ID that exists in the void, because we've gotta have an object for the scripts to work through. 
      Additionally, T3D also has a very good networking system, and not maximizing that is just dumb. And rather than having each component be manually replicated, or ghosted to clients, we exploit the way T3D does networking streams and packing to go through our Entities. 
Specifically, Entities keep a list of components they own, and if a component is marked to be networked, the Entity keeps a separate list for that. When a networking event happens, such as a component being added, removed, or updated, the Entity itself is flagged for networking action. 
Since the Entity is ghosted to clients as normal, we can then piggyback on the Entity's network updates. Each component that's networked has its own mask bits for granularity - we only need to update what actually changes - and this is packed into the Entity's network update. 
This means we can fully network any number of components with only one ghost per Entity, and the updates we DO traffic to the client are as lean as possible. This keeps the traffic as thin as physically possible without giving up the very solid networking that T3D offers. 
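Sketched out (with stand-in types - T3D's real packUpdate/BitStream plumbing has more to it than this), the piggybacking looks roughly like:

```cpp
#include <cstdint>
#include <vector>

struct BitStream {                       // stand-in for T3D's BitStream
    void writeFlag(bool) {}
    void writeInt(std::uint32_t, int /*bitCount*/) {}
};

struct NetworkedComponent {
    std::uint32_t dirtyMask = 0;         // per-component granularity bits

    void packDelta(BitStream& bs) {
        bs.writeInt(dirtyMask, 32);
        // ...write only the fields flagged in dirtyMask...
        dirtyMask = 0;                   // clean again until the next change
    }
};

// One ghost per Entity: component deltas ride along on the entity's update.
struct Entity {
    std::vector<NetworkedComponent*> networkedComponents;

    void packUpdate(BitStream& bs) {
        for (auto* c : networkedComponents) {
            bool changed = (c->dirtyMask != 0);
            bs.writeFlag(changed);       // one cheap bit to skip the unchanged
            if (changed)
                c->packDelta(bs);        // only traffic what actually moved
        }
    }
};
```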
      And lastly, we have the mainstay of any component system(duh): 
      Components 
In DOCs, Components behave very similarly to the previously mentioned EC paradigm: they hold data and implement the functionality for that data. So a RenderMesh component holds what mesh we want to render, and also does the logic to render the mesh. 
The main difference, as covered in the section on Directors, is that components are stripped down to JUST the data and its implementing logic; all the surrounding boilerplate, along with deciding when components kick off their logic, is largely standardized up into the Directors. 
      This means that implementing new components can be relatively easy, as you have the basic data/implement setup, along with any networking pass-through logic as noted in the Objects section, and then a companion Director to manage when the whole shindig activates. 
      All in all, while a bit more complex than standard game object classes, you get a lot of flexibility and ability to quickly slap stuff together without completely shifting to a new conceptual paradigm like a pure ECS implementation. 
      It also keeps networking lean, keeps scripting on the table, but also opens up the door to massively thread workloads in the engine. 
      So more flexibility, cleaner structure, more performance, and without compromising the good bits the engine already offers. 
      Not too shabby a deal, eh? 😉
      Now, this update’s already quite a long one, pretty technical and tragically limited on pictures, so I’ll do a follow up post next weekend going into the front-end usage(which is realistically where most people will work with DOCs) as well as other development stuff going on or planned. 
      So I’ll see you all then! 
      -JeffR
    • By JeffR in Torque 3D
Mid-October workblog time! (Which should've been last month, but chasing down bugs - memleaks and straight-up crashes - that I wanted fixed before posting caused delays, so...whoops!)
      So, how's it going everyone? Time for fanciful development news.
First, let's go over the work that has happened since the last workblog:
76 pull requests were merged in, containing over 164 changes ranging from bugfixes to improvements to additions.
      Notable examples include:
- Updated SDL to latest
- Steve passed along a fix to correct render issues for the ribbon particles
- Preference settings (which will get integrated into the options menu soon) for forcing static object fadeout at a distance, as well as limiting the number of local lights renderable at a time and whether they fade out over distance. These can potentially help a lot in very object-dense scenes with lots of small clutter that doesn't need to render at a distance
- Some better default values for local lights, and cleanup of unneeded fields
- Fixing of gamepad inputs
- Various shadergen cleanups
- A whole metric butt-ton of fixes, improvements and QoL changes for the asset workflow
- Ability to better control script queue ordering between modules
- A crossplat option for 'open in browser', which could see a lot of use in jumping to documentation
- Improvements to baking of meshes
- Populating a level's utilized assets into its dependencies list, to ensure everything preloads as expected instead of trying to do it at the last second, which could cause scripts to execute during object creation and lead to variable corruption
- Settings files are now properly sorted (a small change, but it keeps the layout of the settings.xml and asset import config files consistent, making changes easier to catch up on)
- Re-implemented SIS files for the importer, so there can be special-case overrides for any given file type
- Fixed the resource change detection for TSStatics, so if a shape file is changed, it auto-updates the objects in the scene
- Fixed several potential memleaks and one confirmed one that could balloon memory usage pretty substantially over time
- Misc improvements to asset import pipeline stuff, such as suffix parsing improvements
- Shuffled some gui profiles into core to better standardize them
- Created a number of macros to wrapper the defining and setup/usage of assets (image assets for now, others later), so you don't have to define a bunch of supporting stuff in a given class over and over. Basically a convenience and code-templating thing
- GL and GCC compilation fixes
- Integrated the old MeshRoad Profile editor, so you can have more control over the shape of the meshroad
- Added the guiRenderTargetViz control, which lets you specify any given render target and display it in a GUI control. Minimal direct use currently, but useful in debug operations, and in the future it could drastically simplify picture-in-picture displays, multi-view GUIs and the like
- Fixed a pretty gnarly memleak, so we're memory stable again
- Lukas got his C# project caught up to the current BaseGame, so we can better test the cinterface changes (and soon people can play around with all that too)

For some of the bigger changes worth going into more detail:
      First, Mars contributed some very important improvements to window and resolution handling.
This adds a Borderless window mode, as well as the ability to set in the options which display (if more than one is detected) the game window should be on. There's also better handling of what screen resolution should be in play based on window mode (i.e., Borderless is always the desktop resolution), along with ensuring display options apply properly.
I added handling to disable options fields when they're non-applicable, to avoid confusion.
Secondly, an update to PostFX organization, integration, and editing behavior.
      All stock PostFXs and their shaders are now safely tucked into the core/PostFX module. Easier to find, easier to edit, and the shaders aren't in a completely different module.
      The loader logic was tweaked slightly as well, so that the levelAsset has a field which indicates what the postFX file is. This requires less manual logic for 'lemme go dig around for a posteffectpreset file' and should be a bit more reliable.
Additionally, the PostFX editor saw some fairly big updates, both in how it's accessed and how it works.
      You can now either edit the default postFX preset, or edit the current Scene's preset, as seen here:

      The editor now better integrates into the PostFXs themselves, as well as auto-applying changes as they happen which is much, much, much better for dialing in how they impact a scene.

Importantly, you'll note that the list of PostFXs displayed in the editor there looks kinda...lean.
      This is because it now only displays active PostFXs rather than the entire registered library, which should help cut down on confusion about what PostFXs are actually active and impacting the scene.
Adding one is as simple as pressing the green plus button at the top, then picking which to add from the list that auto-populates with all registered PostFXs:

      And then selecting from the list on the left to edit a given postFX:

      Removal is as easy as selecting the PostFX in question in the list and pressing the red minus button.
You may also note a bit of change in which PostFXs are there. Most are the same as always, but a few tweaks happened to make things a little more consistent and interop-friendly, such as moving the LUT color correction logic out of HDR and into its own PostFX. I also added a simple Sharpen PostFX, and integrated a Chromatic Aberration PostFX.
      We also recently shifted over to utilize 'ORM' formatting for the PBR image maps.
In order to better integrate with tools - and because GLTF assumes ORM, and it's the closest thing to an industry standard for the PBR map - we're shuffling things around internally to work as ORM as well. What's ORM? It stands for (Ambient) Occlusion, Roughness, Metalness. It's basically the PBR image maps arranged in a specific channel order. GLTF and a handful of engines assume that order, making it the closest thing to a standard, so to keep things simple, we're doing the reorg.
Likewise, we were operating with Smoothness being the front-facing value instead of Roughness. This is - similar to the PBR channel order - sorta an 'eh, whatever works' situation, but Roughness is a bit more standard, so we're going ahead and making that the front-facing standard as well.
      Internally the math assumed roughness anyways, so it isn't a huge change principally.
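In code terms, both the channel convention and the smoothness flip are trivial - a quick sketch:

```cpp
// ORM convention: one RGB texture where
//   R = (Ambient) Occlusion, G = Roughness, B = Metalness.
struct ORMTexel {
    float r, g, b;
};

float occlusion(const ORMTexel& t) { return t.r; }
float roughness(const ORMTexel& t) { return t.g; }
float metalness(const ORMTexel& t) { return t.b; }

// Porting smoothness-authored content: the two values are just inverses.
float roughnessFromSmoothness(float smoothness) { return 1.0f - smoothness; }
```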
Touching on the above, we'd noted some oddball behavior when adjusting roughness and metalness values, so with Timmy's fantastical eye for that stuff, he was able to spot some problem areas in the logic and our BRDF image. The changes he passed along result in much, much better behavior across the full roughness/metalness ranges, and thus materials look much more correct. Huge props there.
I had also mentioned some crashes and stuff. There was a bug that snuck in with the probes where baking them could cause crashes. It hit some hardware more consistently than others, and was a right pain to track down. In the end, though, I put up a PR with a big update to the probe handling. Before, you could have up to 50 active probes, and they would render all in one big go. It worked, and it guaranteed consistent performance, but a lot of scenes don't utilize anywhere near that many probes in your line of sight.
      So I shifted the logic to where you have registered probes, and active probes.
You can have up to 250 registered probes in the scene now, which is quite a lot for anything other than a big open-world deal (and you can always unregister them selectively as needed), and an adjustable number of active probes. The default is 8, but it can technically go all the way up to the whole 250 (though that's not recommended for performance reasons).
One of the big advantages, beyond the basic performance win of not needing to actively render as many probes at once, is that we can now lean on culling to ignore probes you can't even see - which just compounds the performance gains - and be smarter about which ones to bother with. It calculates and picks the best probes based on the camera's position, and I'll be adding an 'influence' value for more artistic control, so certain probes can be marked as more important than others.
All of this together means it selects the best probes and renders only those, up to the set per-frame limit (which, again, is adjustable for a given game's needs), yielding the same results in terms of blending between probes but with much smarter, more targeted selection of which ones to render - and improved performance.
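The selection step itself boils down to scoring and sorting. A simplified sketch - the real logic presumably also folds in culling and, later, that influence value:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct Probe {
    Vec3 position;
};

static float distSq(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Pick the best 'activeLimit' probes (default 8) out of the registered set
// (up to 250), here scored purely by distance to the camera.
std::vector<const Probe*> selectActiveProbes(const std::vector<Probe>& registered,
                                             const Vec3& camPos,
                                             std::size_t activeLimit = 8) {
    std::vector<const Probe*> best;
    for (const auto& p : registered)
        best.push_back(&p);

    std::sort(best.begin(), best.end(), [&](const Probe* a, const Probe* b) {
        return distSq(a->position, camPos) < distSq(b->position, camPos);
    });

    if (best.size() > activeLimit)
        best.resize(activeLimit);
    return best;                     // only these get rendered this frame
}
```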
We also finally fixed it so that metallic surfaces correctly render in the baked cubemaps for probes, instead of the flat black they were before, which should yield more accurate reflections.
While the above probe reorg and crash chase-down sorta dominated the last 2 weeks (crashes are a pretty big deal, so it was important to get them out of the way), we can now get back on track getting the asset integration with the game classes sorted out. I mentioned the utility macros added for image assets; those are now utilized by the materials class as well.
      This has streamlined quite a bit of code and makes everything Just Work, and between it and the asset usage in TSStatic, we're feeling pretty confident in finally moving forward and adding asset integration for the remaining classes. This is probably the last big obstacle for 4.0, especially as the general asset pipeline is proving to be fairly stable at this point, and PBR is getting a good workout in several projects and what minor issues crop up are getting plugged quickly. The issues list I've got keeps creeping downward, and things are looking quite nice 🙂 
      For a parting bit, one thing we also added in recently was a Prototyping module. It has a number of color-coded materials designed to be used for certain surface/object types and be visually distinctive so as to help understand the map/design space without needing to worry about the fine texture detail. Additionally, a number of primitive objects like a cube, cylinder, etc are in there.
      However, we also needed a size reference stand-in, and I thought, who better for that than our very own Kork-chan?

      Definitely won't be as long for the next workblog, and I imagine we'll have a number of modules to try, art to ogle, and a new test build up quite soon, so stay posted!
      -JeffR
    • By JeffR in Torque 3D
      Hey everyone!
So, been a while since the last workblog, but the good news is, I didn't quite cross the one-year mark!
      So that’s… good.
Anywho. Predictably, there's a frankly preposterous amount of stuff to go over, so let's get going and dig on in!
       
      What’s new?
A lot, as said. But for specifics: since the last workblog, we've rolled in over 150 pull requests. These range from bugfixes, library updates and QoL improvements all the way up to converting pretty much every class that touches a content file to using the assets system. We even got the site updated, as I'm sure you've all noticed.
Let's cover a few of the specifics and big ones.
       
      New Site
So, if you're reading this, then you're at least slightly aware that the site underwent some changes relatively recently. We were using a git-based pages system before which, while lightweight and open for anyone to contribute to, had fairly suboptimal turnaround time on changes, and because the only way to make any changes at all was via PRs, it disincentivized people from contributing content.
      Pushing tutorials or updating documentation on the site had enough of a hurdle that no one really wanted to bother.
      So we did a lot of digging and landed on a fully integrated CMS platform. This allowed the forums to be integrated into the main site and allowed us to easily manage the content that was on the site itself.
      Other features like broad site search and a few handy additional boons like being able to post micro-communities for games all tie together to make a better centralized point of community engagement.
And as noted prior, it drastically lowers the bar for people pitching in to update the documentation if they have at least the Contributor designation - someone trusted within the community to help out and who knows their business.
As we're in the home stretch on 4.0 (as I'll get into more below), I'm starting to be able to split my time from that as we wait on rounds of testing, and begin filling out the documentation section.
The plan is to consolidate all the disparate docs from the various sites Torque's found itself on - from the old GG docs to the old TDN to the wiki and so on - and fold it all back here in the docs section.
      This way we have a concrete point people KNOW they can find what they’re looking for, instead of having to guess what random site the info they need is on.
      Major, MAJOR props to Tony and Lukas for helping get this thing where it is now. It's definitely a big deal!
       
      PBR
      It’s been a bit since I’ve covered PBR stuffs, but honestly not TOO much has changed since I’d last covered it, so this can be fairly quick.
The main point to note is that we spent a lot of time refining and stabilizing the math, so you can have pretty high confidence that whatever is exported from your art tools like Blender or Substance - if they use the standard Metalness/Roughness PBR formatting - will look Tools Correct. Also, as noted previously, we'd shifted to utilizing the more industry-standard ORM configuration map, which further emphasizes the Tools-Correct approach.
Of course, you can still use sliders or individual maps to fill in the PBR properties too. That hasn't gone anywhere.

        
Other work included cleaning up and stabilizing probes and IBL, and improving their performance by adding some nice convenience functionality for controlling active probe counts and the like.
      All in all, PBR should give stable, quick results that look quite nice.
       
      Assets
      Alright, now we’re getting into the meat of things. So, assets. I’ve talked about them a good bit before, so if you’ve been following my workblogs, you’ve got at least a general notion of them.
      Suffice it to say, they’ve gotten a LOT of work done in the past months.
So, as of this moment, outside of a single-digit number of exceptions, every class that ingests a normal "content" file type - that is, Images, Shapes, Materials - now utilizes assets.
      While a lot of work to get right, we did everything we could to ensure that the process is stable and doesn’t impact the functionality of the classes or objects themselves.
      Part of that is we made some… liberal use of macros to fill in a lot of duplicated and common code pertaining to these content file types.
Whereas before, each class would have to implement the entire file ingest, load and bind logic each time - which, as with all duplicated and repeated code, leads to points of possible failure - they now use a common set of macros which have been aggressively and thoroughly tested, making everything a bit leaner and less prone to issues sneaking in.
      In this shot, we see an example of defining our asset macro in the class header file:

      And here's how you use the asset macros to set up the init persist fields to expose them to script:

      There's even macros for the packing and unpacking of the network data:

And finally, an important bit: it doesn't just generate function info or anything, it actually configures the internal variables and the like off the parameters you pass in. This allows everything to be VERY consistent and predictable, making usage in the classes, once you actually want to utilize the data, much more comprehensible, like so:
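(In place of the screenshots, here's a rough illustration of the pattern. These macro names and parameters are invented for the example - check the actual headers for the real ones:)

```cpp
// Hypothetical sketch of the idea, NOT the real T3D asset macros.
// One line in the class declaration expands into the member variables
// and accessor that every image-asset-using class would otherwise
// have to hand-write.
#define DECLARE_IMAGE_ASSET_SKETCH(name)                                   \
    const char* m##name##AssetId = "";   /* resolved asset id        */    \
    const char* m##name##File    = "";   /* legacy filename fallback */    \
    const char* get##name() const {                                        \
        /* prefer the asset; fall back to the legacy file field */         \
        return m##name##AssetId[0] ? m##name##AssetId : m##name##File;     \
    }

class GuiBitmapCtrlSketch {
public:
    DECLARE_IMAGE_ASSET_SKETCH(Bitmap)   // one line per content field
};
```

The persist-field and network pack/unpack macros shown in the other shots follow the same idea: declare once, get the standardized plumbing everywhere.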
       
This means that pretty much every class has a very similar content-file flow, which allows us to standardize object fields more (less guessing whether it was shapeName, or shapeFile, or shape, etc). This also standardizes accessor functions.
      Want the gui’s bitmap?
      %bitmap = %gui.getBitmap();  
      And you can be sure it’ll get it, no questions asked.
Part of these macros is retaining the legacy fields too, so if you load up a project and some stuff doesn't get processed successfully via the Project Importer (we'll get to that shortly), the old fields are held onto so you can retry, or debug it.
      So those accessor functions are smart enough to figure out where your bitmap data is and return it back.
Using an asset? Returns the asset info. Still using the legacy bitmap filename field? Returns that instead.
      This should alleviate a good chunk of pain for existing projects that are looking to port up into 4.0 without utterly snapping all your stuff over its knee.
      Ultimately, this all ends out with a standardized, consistent content flow without having to juggle different means, forms and fields depending on the object and class, which is nice.
       
      Asset Browser
      So, obviously, if we’re gunna be mainlining assets now, the asset workflow needs to be up to snuff as well. First up, we’ve got the Asset Browser. I’ve shown and talked about this thing before, but now it’s the primary access point for all content in your game. 
Previously, I'd gone over the improvements to searches in the AB, with stuff like complex searches and collection sets. This is obviously all still in, but I've added some further tweaks to improve reliability. A search isn't much help if it doesn't find what you need it to.
      Additionally, I’d covered how you could do stuff like drag-n-drop datablocks to spawn the vehicle or the like. This too has been improved and expanded on, so that pretty much anything that is generally spawnable via the Library tab can be spawned via the Asset Browser. The plan here is to ultimately replace the Library tab with the AB as the centralized one-stop-shop for all your content needs.
      To that end, most of the ‘Creator’ entries have been brought over to the AB as well, so spawning lights, probes, precipitation, skies and so on can be done via the AB too.

      (Of course, the current class icons are a bit low res and need some sprucing)
Beyond that, improvements to the tooltips when you hover over an asset - presenting where the asset is located in the file structure and other meta info - should help you track down those assets you knew you had but forgot where you put them. We'll also be looking into some 'Go To Asset' navigation options/actions to further facilitate tracking down these lost sheep, er, assets.

Now, if anyone threw some big images or a lot of shapes at the AB previously (Catographer, for example, uses like a dozen 4k images for material atlases), you'd find there'd be a load hitch every time it tried to populate the asset previews in the AB.
Obviously, this is a minor annoyance that, repeated enough times (which it would be, now that the AB is the primary content nexus), becomes grating fast and hurts workflow.
So I added a system where it'll take preview images and create a scaled-down version, ensuring that no preview is so large as to cause hitching, and allowing fast loading of directories even if they have a crazy number of assets visible.

Whenever the AB loads, it'll take the source content - a shape or image - and generate a scaled-down preview image at a given size, such as a 512x512 preview.
      This keeps them reasonably sized, which keeps loading in the AB fast, but doesn’t hurt the quick-glance recognition of what you’re looking for either.
Additionally, I added logic to compare the file-modified timestamps whenever the AB is opened, so if an asset's file saw changes newer than the preview image, it'll regenerate it. This ensures that if you recently re-exported an image or a model, the previews will always show the correct version.
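That regeneration check is just a timestamp comparison. A sketch using std::filesystem (the engine naturally goes through its own file layer):

```cpp
#include <filesystem>

namespace fs = std::filesystem;

// Regenerate the scaled-down preview only when the source content
// (shape or image) is newer than the cached preview image.
bool previewNeedsRegen(const fs::path& sourceFile, const fs::path& previewFile) {
    if (!fs::exists(previewFile))
        return true;                 // never generated yet
    return fs::last_write_time(sourceFile) > fs::last_write_time(previewFile);
}
```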
       
      Asset Importing
      So with everything actually for-realsies using assets now, we want to ensure the importing process is as sexy-smooth as possible, so this saw improvements as well. Various sanity checks, safeguards and issue-resolution methods were added to the import configs.
For example, a new Issue Resolution step, "FolderPrefix", was added for the event that an importing asset happens to have the same name as an existing asset - like, say, both have "grass" for the AssetName.
In this case, with the FolderPrefix resolution, it'll find the folder the asset's file is in and prefix the asset name with it.
So if our conflicting importing file is in the 'Foliage' folder, it'd resolve the conflict by turning the asset name into Foliage_grass. It's also smart enough to try and find non-conflicting folder names in case THAT was already taken too.
Additionally, options for force-adding type suffixes onto the asset name help prevent collisions too. Like, say we had a 'Player' model that has a Player material and uses the Player image for the diffuse.
These 3 would normally conflict, all being named Player.
With the options on to add the type suffixes, though, they become Player_shape, Player_mat and Player_image.
This prevents collisions AND makes it easier to search for/spot at a glance which-is-which.
      So several birds with one stone there!
And remember, this doesn't affect the actual file the asset uses, just the assetName the engine builds for the AssetId - so you don't have to worry about it doing spooky rename voodoo on your png file such that you can't find it again later.
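Put together, the resolution logic amounts to a couple of fallbacks - a hedged sketch of the idea, not the importer's actual code:

```cpp
#include <string>
#include <unordered_set>

// Resolve an AssetName collision: try the FolderPrefix first, then lean
// on a type suffix, mirroring the options described above.
std::string resolveAssetName(const std::string& assetName,
                             const std::string& folderName,   // e.g. "Foliage"
                             const std::string& typeSuffix,   // e.g. "image"
                             const std::unordered_set<std::string>& existing) {
    if (existing.count(assetName) == 0)
        return assetName;                          // "grass": no conflict

    std::string prefixed = folderName + "_" + assetName;
    if (existing.count(prefixed) == 0)
        return prefixed;                           // "Foliage_grass"

    return assetName + "_" + typeSuffix;           // "grass_image"
}
```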
       
      Asset Loading/Error Tracking
A big addition that Az pushed for was better tracking of the explicit load status of an asset. Before, you effectively had two states for an asset: it either loaded, or it didn't. Obviously, depending on what's going on, this could be wildly unhelpful. For example, if the asset definition loaded OK but something was wrong with the associated file (let's say the model file was corrupted), we can report the error while still continuing to load the asset, presenting a fallback shape instead.

This avoids the old situation where, if a model failed to load on a TSStatic, the TSStatic just wholesale didn't load, disappearing from your level - and at some point later down the line you'd realize a bunch of your objects are missing.
      Beyond that, naturally having more specific loading error codes lets you pin down exactly what’s wrong. From bad file references, to invalid formats, being able to note the problem is the first step to fixing it, which can save on development time without the engine just throwing its hands up and failing to load everything.
       
      Modules
      Also important to assets are the modules as well. Per previous updates, we’ve been refining how modules handle loading of scripts that may overlap or override via the new queueExec() function.
      We’ve been shoring up problems that’d come up and dialing it in more, but overall modules haven’t seen any big functionality shifts, which is good, as that means even with people wrenching on it, it’s proven to be rather stable.
       
      Legacy Project Importer
So this has been a big one. When we were implementing the conversion process for existing projects to work with 4.0 (since obviously there's a number of people with existing projects who would be keen to just update to latest and get the sweet new benefits), we originally tried a sort of "inline" import process.
As part of the above-mentioned asset macros, we integrated the original filename fields so that, when a file was fed in, they would run the Asset Importer in place, get the results, and update the field definitions.
      And in a lot of cases, this worked!
The problem was, there were also a lot of cases where we started seeing some really weird and funky behavior: timing issues, order-of-operations shenanigans, and needing to close and open things multiple times to process everything.
      The sexy, seamless system unfortunately didn’t play out quite as smoothly as the theory indicated.
      So, I opted to pivot on it and build it as a dedicated importer process. There were several advantages to this approach. Sure, we lost the ‘pure automagic’ of the inline approach, but you could start up a new 4.0 project, open the importer, point at your old game, and it’d just import everything in and process it into a valid module for you.
      So maybe not PURE automagic, but like, 56.5% automagic.
So I did the initial pass of it, and with Az helping crunch a lot of the logical refinements, we got it to a spot where we could get others throwing their projects or old content kits at it. While there are still some bugs to be fully chased down yet, the results are looking really good, and it should make it MUCH less painful for anyone and everyone with an existing Torque game (in theory all the way back to TGE games, though that's not a dedicated focus at the moment) to get rolled over to 4.0 in an afternoon.
      Nice, How’s It Work?
      As mentioned, the process involves you pointing at your old torque project, and getting a nice, 4.0 compliant module in your new 4.0 project as a result.
      But lets get into the specifics of it.
      This:

      Is the Project Importer. It’s formatted as a classical wizard utility that walks one through the process in pretty plain terms.
It's pretty straightforward, but we can do a quick overview of how it works here. If you continue past the welcome page, you select what version you're importing from (though currently everything is pre-4.0, this gives us the option to have per-version import rules later, for people upgrading from 4.0 to 4.1 and so on). It then prompts you about what content you're going to process with the importer, as well as its location, as seen here:

Which one you pick dictates some supplemental behavior. As you can see, your options are:
- A folder that is inside the current project already - useful for re-importing something if required
- A folder that is outside the current project - this is the normal option, and what you'd pick to select your old project's main directory
- The Core and Tools folders - this is primarily for getting the BaseGame template up to speed, but if you've got some custom tools or the like copied into place already, you could run that
Normally, you would select the Outside folder option. Then you click the Locate button and navigate to the project you want to import. Once you've done that, you can continue.
It'll next prompt you for what the new module being created will be called. This defaults to the name of whatever folder you pointed to, but it can be named anything, like 'MyCoolGame'.
Continuing on will see the files copied into the new module, then processed. It'll run all content files (shapes, images, etc) through the Asset Importer, and then it'll process all editable/code files looking for legacy fields.
      If it finds one, it’ll then convert that field into the associated asset field, and find the new asset associated to the filepath it had originally. So you go from something like this:
Bitmap = "./background.png";
      To:
bitmapAsset = "MyCoolGame:background_image";
It'll only process lines and objects that can actually successfully convert, so if a file doesn't import and generate an asset, the legacy line will stay as-is. You can re-run the importer on the existing module if need be, but tests have so far proven it's generally a one-and-done process.
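Under the hood, that conversion is essentially a find-and-map pass over each legacy field. Roughly (illustrative only - the lookup helper here is hypothetical, and casing normalization is elided):

```cpp
#include <optional>
#include <string>

// Hypothetical lookup: original file path -> asset name the importer made.
// Returns empty if that file never produced an asset.
std::string lookupImportedAsset(const std::string& /*filePath*/) {
    return "background_image";       // stubbed for the sketch
}

// Convert one legacy field assignment into its asset-field equivalent,
// or bail so the legacy line stays as-is.
std::optional<std::string> convertLegacyField(const std::string& fieldName,  // "bitmap"
                                              const std::string& filePath,
                                              const std::string& moduleName) {
    std::string assetName = lookupImportedAsset(filePath);
    if (assetName.empty())
        return std::nullopt;         // no asset generated: don't touch the line

    // Bitmap = "./background.png";  ->  bitmapAsset = "MyCoolGame:background_image";
    return fieldName + "Asset = \"" + moduleName + ":" + assetName + "\";";
}
```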
      Once the import process has completed, the only real thing left is to integrate in the scripts to the module. You’d crack open the module script file, and then slap in your exec calls for the client, server, etc in the appropriate place and there ya go. Your project’s pretty well converted up into a 4.0 module!
For example, here we can see the FPSGameplay content imported in as a module:

Naturally, we'll be fully fleshing out the process and details of this in the documentation, but those are the broad strokes. Currently it's part of the tools suite, but the main plan is for it to be part of the new Project Manager, so you can create a project, launch the LPI, and convert an old project into the 4.0 one, all in rapid succession in the PM.
       
      Terrain Improvements
Up next, a quick go-over of the terrain updates. No major overhauls here, but we did bring in Lukas' improvements to the terrain material blending, as well as toggles for whether it should utilize the height data in the terrain materials when blending or not, as seen here:

This naturally allows nicer-looking terrain material blending, while the toggle gives more control if it fits your style.
      Additionally, Lukas did some nice wizardry to streamline the rendering of the terrain a lot, utilizing texture arrays to remove a LOT of the drawcalls and render overhead that came with the older way we rendered terrain. So looks better and runs better. Not too shabby 😉
       
      TScript Conversion
This one has been discussed for quite a while, but we went ahead and pulled the trigger on it a little while back. Namely, shifting the default TorqueScript extension from 'cs' to 'tscript'. There are a number of reasons for this, the big one being that a lot of tools treat cs as a C# extension, which can have some weird results when dealing with code IDEs.
      Another reason tying into that was the expansion of the cinterface and support for more script languages Lukas spearheaded, namely, C#.
With that soon to become a fully viable route for people's T3D games, dealing with confusing extension mix-n-matches - especially with project generation and asset/module installs not being able to tell a TorqueScript and a C# version apart - all led to locking in the default extension swap.
That said, we know people liked having that cs extension, so while the default may have changed, we added a new project setting for CMake, as well as a global var that sets the extension used. So if you want to stick with cs, you can just change that in your project settings and presto!

And to better facilitate the extension swap, we went in and adjusted a number of filename handlings to check for both the default and the 'other options'. This means an exec() call can just take the path and name, leaving off the file extension, and the engine'll figure it out. Additionally, we added the isScriptFile() console function to specifically check valid script extensions if you're trying to test whether a filename is a valid script file, rather than bruteforcing your way through with isFile().
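The extension-agnostic lookup is about what you'd expect - a sketch assuming it simply probes the known script extensions in order:

```cpp
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

// exec("scripts/client/init") -> try the configured default extension
// first, then the other known script extensions.
std::string resolveScriptPath(const std::string& base) {
    const char* extensions[] = { ".tscript", ".cs" };   // default first
    for (const char* ext : extensions) {
        std::string candidate = base + ext;
        if (fs::exists(candidate))
            return candidate;
    }
    return "";    // not found; the caller reports the error
}
```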
Further, if you want to go ahead and use the tscript extension and are importing an old project, fear not, because the Project Importer will check what the extension global variable is and update any scripts to utilize that extension instead (including filename references in files, in cases where it doesn't just drop the extension as unneeded entirely).
      So while it is a change and I know some of you may not be fond of it, we’ve done everything possible to make it painless to swap, or just keep to the classical if you so desire. 🙂
       
      TorqueScript Interpreter Update
While not merged in juuuuust yet, we're in the final testing pass for some big updates to the TS Interpreter (lovingly dubbed 'TSNeo'). Hutch has been doing some solid work, not only simplifying the interpreter - making it easier to maintain and expand - but also landing some biiiig performance gains. How big? Let's have a gander.
He drafted up a number of tests to check the perf of various things, from crunching math right in TS, to handling objects and variables, to invoking up into the engine via console methods. All metrics are from his i9-9900K @ 4.8GHz. All numbers are in milliseconds.
      So here’s stock:

      As you can see, some stuff isn’t too bad, but function calls in particular are really painful.
      Now let’s look at the TSNeo benchmarks:

      As we can see, biiiiiiig gains in basically everything except the string pressure tests. Some benchmarks in fact have an order of magnitude improvement. And even though the string pressure tests show a dip in performance, it’s pretty slight.
      So suffice it to say, big, big gains here.
      Beyond that, he fixed several bugs that were lurking around, in particular with the telnet debugger(which was causing some weird behaviors in Torsion), as well as added a pretty large list of unit tests. So any future changes to the interpreter can come easier because we can validate the crap out of it with standardized testing.
      Mind, part of the improvements do come with the consequence of some relatively minor changes to how the interpreter handles things.
For example, you can't have local variables in the global scope anymore. Aka, if you have a %localVar, it HAS to be within a function or namespace. 
      The good news is, if something is unsupported full stop by the interpreter, it’ll error and provide a specific message including the offending file, so correcting these problems is pretty simple.
      I’m also planning to try and make the Project Importer catch as many of these as possible, so *in theory* it should be a seamless transition unless you got REAL freaky with your torquescript. And in the end, I think the performance gains are so good it’s worth it.
       
      Project Manager
      I’ve been working on this bit by bit for a while now, so most of you are aware that there’ll be a new PM for 4.0 to work with. I’ll be doing a follow-up workblog soon with more deets about it, but I had tossed it out to those on the discord for an initial peek and slap around and got a lot of excellent feedback on its shortcomings.
      So with all the real big changes out of the way and shifting into the polish-up phase for the 4.0 release, I can get some grind time on this very soon and get the first real Release Candidate build out to work in conjunction with the upcoming 4.0 RC. 
       
      And Now for Some Other Bits
So that's it for the big standalone update bits, but there's definitely a lot of other smaller things to note still, so let's go over some of those, shall we?
      Base UI Updates
      A number of small fixes went in, but an important one was further improvements and fixes to handling of the window state, like with borderless mode. These options should be quite robust and stable to work with.
      Multiplatform Fixes
Ragora, Az, HiGuy, Hutch and TRON all got some really good grind time recently to fix a number of lingering issues on our non-Windows platforms. Between a litany of file handling fixes, compiler shenanigan fixes (GCC, as ever, being very strict) and crash fixes, the multiplat situation is coming along nicely.
      Hutch even confirmed that 4.0’ll run on Apple’s new M1 chips, so we needn’t worry about people upgrading their machines causing problems on the mac side of things, which is exciting 🙂
      Library Updates
      As part of the chasedown for multiplat shenanigans, we’ve made sure that OpenAL, SDL and TinyXML are up to date as well.
      Zip Loading
      Mars helped a lot in ensuring that the zip handling was ironed out and brought back up to working. This means you can package your modules or games into zips again and it won’t get all freaky-deaky when trying to load or save stuff.
This also opens up some options for the future, such as collapsing assets (and their associated files) into singular archive *.asset files or the like, keeping file counts down and potentially reducing disk footprints, which is pretty cool.
      So there we are!
      With all the big stuff settled, or in final testing for roll-in, the plan for the next month or so is bug chasing, polishing and shoring up the new Project Manager, and just generally refining 4.0 to make it locked and prepped.
      So keep your eyes out and ears to the ground for the RC builds so we can make sure 4.0 is good to go sooner rather than later, because we’ve all definitely been waiting long enough 😛
      Until next time!
      -JeffR
       