
Atomic Walrus

Everything posted by Atomic Walrus

  1. Thanks for doing all this and releasing it! This is 100% functional in 2021 with the Index + controllers, still using SDK build 1.0.17. I tried throwing 1.16.8 in, but enough has changed that I'll probably need to upgrade a few builds at a time, and for the moment this is working perfectly.

     Only one proper technical issue so far: the LOD system doesn't function in VR (take a look at the vehicle in the empty terrain map). Will investigate. At the moment I just have it hacked to always use max LOD.

     --- TL;DR portion below, discussing gameplay implementation details for VR in T3D ---

     Right now you can reach parity with No Man's Sky by simply using the seated mode. This is more than sufficient for vehicle cockpits, and it's workable for a player character if you don't mind that it's not proper roomscale (you can walk away from your Player body). Like in NMS, you can just tell the player they've moved too far from their character body and do an auto-recenter. If you're concerned about the player putting their head through geometry, you can run a raycast every tick between the character's head and the VR HMD.

     That, plus putting the ShapeBaseImage on a tracked controller, is sufficient for a basic singleplayer experience as long as you don't have too many narrow doorways. The player never sees their own body; it's just used for collision volume and taking damage, so they won't care that the hands and gun don't line up with the rest of the body.

     I think for a basic crossplay multiplayer experience, getting the Player to line up its aim angle with the held VR gun should be sufficient. Limited, because it eliminates all the fun VR social interface of gestures and head tracking, but functional for putting VR and flat players into the same world. Your avatar will do odd stuff if you throw the gun around, but nothing more odd than what you can do now by spinning with your mouse.

     I'm also running your GuiOnObject2 implementation for VR GUIs. I'm working on a hybrid of laser pointer and touchscreen, where you can point and click the trigger from a distance, but at very close range it will register as an auto-click like a touchscreen. It needs basic collision detection (preventing the hand from going through the screen) to work the way I really want it to, where you would be able to touch and swipe to move a slider. I'll share whatever I come up with on this if it's any good.
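     The "too far from the character body" check above can be sketched standalone like this (illustrative names only, not the actual T3D or OpenVR API; the 0.75 m radius is an assumed tuning value):

     ```cpp
     #include <cmath>
     #include <cstdio>

     // Minimal sketch of the NMS-style auto-recenter decision: compare the
     // HMD's lateral position against the Player body and recenter when the
     // user has physically walked too far away.
     struct Vec3 { float x, y, z; };

     static float horizontalDist(const Vec3& a, const Vec3& b)
     {
         // Ignore height: only lateral drift should trigger a recenter.
         float dx = a.x - b.x;
         float dy = a.y - b.y;
         return std::sqrt(dx * dx + dy * dy);
     }

     // True when the HMD has drifted beyond the allowed radius from the
     // character body and an auto-recenter should be performed.
     static bool needsRecenter(const Vec3& hmdPos, const Vec3& bodyPos,
                               float maxRadius)
     {
         return horizontalDist(hmdPos, bodyPos) > maxRadius;
     }

     int main()
     {
         Vec3 body{0.f, 0.f, 0.f};
         Vec3 hmdNear{0.3f, 0.2f, 1.7f}; // leaning a little: fine
         Vec3 hmdFar{1.5f, 0.1f, 1.7f};  // walked away: recenter
         std::printf("%d %d\n", needsRecenter(hmdNear, body, 0.75f),
                                needsRecenter(hmdFar, body, 0.75f));
         return 0;
     }
     ```

     The same distance test could run each tick alongside the head-to-HMD raycast mentioned above; the raycast handles geometry clipping while this handles drift.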
  2. This reply is only like half a year old, which is like 3 days under modern time rules. Or is that 3 decades? Well, either way, if it helps someone...

     I'm going to presume you're seeing correction packets (writePacketData/readPacketData), which are what's actually causing the stutter. The default turret implementation doesn't properly account for the client-predictive/server-authoritative model that control objects work under. In my experience, it's the deeply integrated client-server system of Torque that most people bounce off of, but that's also one of its biggest strengths if you're actually making a multiplayer game.

     I was certain I'd posted about this at some point, and indeed, if you journey back to the old GarageGames forums, here is my fix: http://www.garagegames.com/community/forums/viewthread/130721

     What this does is create a "render" version of the rotation variable and use that to determine render orientation. This version can be safely modified by client-only code for interpolating between ticks, without altering the actual simulation variable for turret rotation (interpolation doesn't occur on the server, so interpolating the simulation rotation would put client and server out of sync). It follows the same model as every scene object having a separate simulation and render transform to store position/rotation.
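     The separate simulation/render rotation idea can be sketched standalone like this (illustrative names, not the actual Turret code; the delta convention matches T3D's interpolateTick, where delta runs from 1.0 at the previous tick down to 0.0 at the current tick):

     ```cpp
     #include <cstdio>

     // The simulation value only changes in processTick; the render value is
     // recomputed every frame from the previous and current tick values and is
     // never fed back into the simulation, so client and server stay in sync.
     struct TurretRot
     {
         float mRot       = 0.f; // authoritative simulation rotation
         float mRotPrev   = 0.f; // rotation at the previous tick
         float mRenderRot = 0.f; // client-only, used for drawing

         void processTick(float newRot)
         {
             mRotPrev = mRot;   // remember last tick before advancing
             mRot     = newRot; // state both client and server agree on
         }

         // delta = 1.0 at the previous tick, 0.0 at the current tick.
         void interpolateTick(float delta)
         {
             mRenderRot = mRot + (mRotPrev - mRot) * delta;
         }
     };

     int main()
     {
         TurretRot t;
         t.processTick(10.f);     // tick 1: rotate to 10 degrees
         t.processTick(20.f);     // tick 2: rotate to 20 degrees
         t.interpolateTick(0.5f); // render a frame halfway between ticks
         std::printf("%.1f %.1f\n", t.mRot, t.mRenderRot);
         return 0;
     }
     ```

     The render rotation lands halfway between the two tick values while the simulation rotation is untouched, which is exactly why modifying it client-side can't trigger a correction.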
  3. What happens when you fire from 3rd person? My hypothesis is that the server doesn't actually use the 1st person model, and so bases its muzzle vector on the weapon's position in your 3rd person model animation. If that's true, then the vectors should line up when you fire in 3rd person. If they still don't, then I'm wrong and it's... something else, and I have no clue.
  4. I can't remember which version added TSStatics working as mountable props, but it definitely functions in stock 3.10. Since they don't have any game functionality, they're a good low-overhead choice for mounting a visual prop to a vehicle or whatnot. For most objects, the following code pasted into the end of processTick and advanceTime is sufficient to get them to behave correctly when mounted:

     ```cpp
     if (isMounted())
     {
        MatrixF mat;
        mMount.object->getMountTransform( mMount.node, mMount.xfm, &mat );
        Parent::setTransform(mat);
     }
     ```

     ShapeBase also had this functionality added in a recent version -- try mounting a Cheetah to another Cheetah in stock 3.10!
  5. Mounting was moved to SceneObject quite a while back, so basically everything in the scene can be mounted to anything else. Mount points in the models (mount0, mount1, etc.) are still used, but not required; for example, an object mounted to a mount slot that doesn't have a defined mount point will simply mount at the origin (center). You can use the offset coordinates in the mountObject command to put the mounted shape anywhere you want.

     Keep in mind that while all SceneObjects can mount, not every object has a defined behavior when it mounts; mounting is just a theoretical association, and the mounted object must use that association to place itself at the mount point. Take a look at how TSStatics handle mounting, as they are the simplest functional mountable object. Open up TSStatic.cpp and search for "ismounted" (no quotes); you will find very basic code for making a mounted object actually visibly stick to its mount point. Depending on your object's inheritance, you may need to copy these bits of code into the relevant functions. If your object has a physics simulation, either stock or external, you'll probably want to pause it while the object is mounted.
  6. Because of this line in Player::castRay:

     ```cpp
     mShapeInstance->castRayEA(start, end, info, 0, mDataBlock->HBIndex);
     ```

     the hitboxes must always be in the highest (first) detail level -- detail zero, passed in the call above just before the hitbox index. If you wanted to do something like define a specific detail size as the hitbox detail, you would have to loop through the model's details (for (U32 i = 0; i < mShape->details.size(); i++)), testing each detail's size until you found the one you wanted to use, then pass that detail level's index instead of zero. If you didn't want to run that loop on every raycast, you could search for and store the hitbox detail level index during preload. Alternatively, I used to just make the boxes invisible with a material.
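     The "find the hitbox detail once at preload" idea can be sketched standalone like this (assumed types, not the actual TSShape code; "size" stands in for the detail's nominal size field, and the assumption that the hitbox detail is authored with a known negative size is mine, since negative sizes are how T3D marks non-rendering details):

     ```cpp
     #include <cstdio>
     #include <vector>

     // Search the model's detail list for a designated hitbox detail level and
     // return its index, so castRay can pass it instead of hard-coding zero.
     struct Detail { float size; };

     static int findHitboxDetail(const std::vector<Detail>& details,
                                 float hitboxSize)
     {
         for (size_t i = 0; i < details.size(); ++i)
             if (details[i].size == hitboxSize)
                 return static_cast<int>(i);
         return 0; // fall back to the highest detail, like the stock code
     }

     int main()
     {
         // Typical layout: render LODs first, then special negative-size details.
         std::vector<Detail> details = { {128.f}, {64.f}, {32.f}, {-1.f}, {-2.f} };
         int hitboxIndex = findHitboxDetail(details, -2.f);
         std::printf("%d\n", hitboxIndex); // cache this once at preload
         return 0;
     }
     ```

     Running the search once during preload and storing the result means every subsequent raycast is just an array index, same cost as the hard-coded zero.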
  7. Stick this at the end of mathTypes.cpp (for some reason all the math-util console function hooks are here, so you won't need to add any headers):

     ```cpp
     DefineConsoleFunction( VectorGetMatrixFromUpVector, TransformF, ( VectorF vec ),,
        "@Create a matrix from the up vector.\n\n"
        "@param VectorF (x,y,z) up vector.\n"
        "@return TransformF.\n"
        "@ingroup Vectors" )
     {
        MatrixF outMat;
        MathUtils::getMatrixFromUpVector(vec, &outMat);
        return outMat;
     }
     ```

     The data that comes back will be in the TransformF format, which means words 0-2 are position and words 3-6 are rotation (axis-angle). In script you can do something like this:

     ```
     %mat = VectorGetMatrixFromUpVector(%normal);
     %outTrans = %object.getPosition() SPC getWords(%mat, 3, 6);
     %object.setTransform(%outTrans);
     ```

     All of your placed objects will have 0 rotation around the normal (because the normal vector obviously contains no info about that rotation), like the mines. If you want to rotate about the normal, it should be a simple matrix multiplication like this:

     ```cpp
     void WorldEditorSelection::rotate(const EulerF &rot)
     {
        for( iterator iter = begin(); iter != end(); ++ iter )
        {
           SceneObject* object = dynamic_cast< SceneObject* >( *iter );
           if( !object )
              continue;

           MatrixF mat = object->getTransform();
           MatrixF transform(rot);
           mat.mul(transform);
           object->setTransform(mat);
        }
     }
     ```

     But again, you need to expose some math helpers to script (or just attach a function like this to SceneObject and expose THAT to script). The script-end handling of object rotation isn't very good in general. I'd like to have Euler angles exposed (alongside the current axis-angle interface) and some rotation functions that can do both of the relative rotations the editor handles (world and local yaw/pitch/roll).
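     For anyone curious what getMatrixFromUpVector actually has to do, the underlying math can be sketched standalone (this is an illustrative re-implementation, not the engine code; Torque is Z-up, so the given normal becomes the Z column of the basis):

     ```cpp
     #include <cassert>
     #include <cmath>
     #include <cstdio>

     // Build an orthonormal basis whose up (Z) axis is the given normal.
     struct Vec3 { float x, y, z; };

     static Vec3 cross(const Vec3& a, const Vec3& b)
     {
         return { a.y * b.z - a.z * b.y,
                  a.z * b.x - a.x * b.z,
                  a.x * b.y - a.y * b.x };
     }
     static float dot(const Vec3& a, const Vec3& b)
     {
         return a.x * b.x + a.y * b.y + a.z * b.z;
     }
     static Vec3 normalize(const Vec3& v)
     {
         float len = std::sqrt(dot(v, v));
         return { v.x / len, v.y / len, v.z / len };
     }

     // Produces the right (X) and forward (Y) columns to pair with up (Z).
     static void basisFromUp(Vec3 up, Vec3& right, Vec3& forward)
     {
         up = normalize(up);
         // Pick a world axis not parallel to 'up' to cross against.
         Vec3 ref = (std::fabs(up.z) < 0.99f) ? Vec3{0.f, 0.f, 1.f}
                                              : Vec3{1.f, 0.f, 0.f};
         right   = normalize(cross(ref, up)); // perpendicular to up
         forward = cross(up, right);          // completes right-handed basis
     }

     int main()
     {
         Vec3 up = {0.f, 1.f, 1.f}, right, forward;
         basisFromUp(up, right, forward);
         // All three axes must be mutually perpendicular unit vectors.
         assert(std::fabs(dot(right, forward)) < 1e-5f);
         assert(std::fabs(dot(right, normalize(up))) < 1e-5f);
         std::printf("right = (%.2f, %.2f, %.2f)\n", right.x, right.y, right.z);
         return 0;
     }
     ```

     The arbitrary reference-axis pick is also why the result has no defined rotation about the normal: any vector perpendicular to the normal is an equally valid "right."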
  8. Just a heads up on the existing energy system: it's part of the client-predictive networking and was intended to be used mainly for energy related to movement (or other client-predictable events). Since it's part of writePacketData, it will:

     - Only be updated to controlling clients (during a correction);
     - Trigger correction packets if modified by a server-initiated event (as opposed to a client-predictable event initiated by a "move");
     + Allow movement to be energy-dependent without worrying about being out of sync with the server. For example, it would be network-safe to scale player movement speed by current energy level.

     It's not the best choice if you're doing script-based abilities that drain energy, want a pool for weapons (not client-predicted in stock), shield energy that drains when you take damage, etc. It will work, but the correction packets will cause movement stuttering and waste bandwidth (sending the whole correction packet when all the client needed was 5 bits).
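     A toy sketch of why server-initiated drains force corrections (not engine code; names and values are illustrative): the client can only predict changes driven by its own moves, so anything the server applies outside the move path makes the predicted state diverge, and that mismatch is what triggers a full correction packet.

     ```cpp
     #include <cstdio>

     // Client and server both run the same move-driven energy logic; only the
     // server can apply out-of-band drains (scripted abilities, damage, etc.).
     struct PlayerEnergy
     {
         float energy = 100.f;
         void applyMove(float moveCost) { energy -= moveCost; } // predictable
     };

     int main()
     {
         PlayerEnergy client, server;

         // Both sides process the same move: prediction matches, no correction.
         client.applyMove(5.f);
         server.applyMove(5.f);
         bool correction1 = (client.energy != server.energy);

         // Server-only event (e.g. a scripted ability drain): the client has
         // no move to predict it from, so the states now disagree.
         server.energy -= 20.f;
         bool correction2 = (client.energy != server.energy);

         std::printf("%d %d\n", correction1, correction2);
         return 0;
     }
     ```

     The first comparison stays in sync and the second doesn't, which is the whole trade-off above: move-driven energy is free, server-driven energy costs you a correction packet every time.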