The most pressing task at the moment is setting up the input manager to handle both buffered and unbuffered input more easily. Currently a simple system is used: the input manager interfaces with the back-end library (currently OIS) and exposes a registerCommandListener() method. For each command we want to bind input to, there is a specific subclass of AbstractCommand passed to the input manager along with the key it should be bound to (which gets mapped from our own representation to what OIS uses). A simple group of associative arrays handles mapping buffered input back to the registered command objects, which have the task of making sure that whatever is supposed to happen on said input event actually does. A prime example from the prototype: movement commands are registered for mouse input and the W, A, S, D keys; when the AbstractCommand::handleInput() method is called, they tell PlayerEntity that it should move in the appropriate way. In plain C instead of C++, I would just have used a pointer to a structure holding all the interesting data about the binding plus a pointer to a callback function. Same basic concept, only my prototype takes advantage of C++, which is what I'm stuck with for the moment 8=). I have no real love of C++, beyond not having to deal with yet another implementation of data structure XYZ. (BSD Unix has at least provided macros for the most basic data structures in sys/queue.h for many years; now if only a new C standard would do likewise.)
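To make that a little more concrete, here is a minimal sketch of the idea. The exact registerCommandListener() signature, the KeyCode enum, the MoveForwardCommand subclass, and the InputManager class body are all stand-ins I've made up for illustration; only the general shape (command subclasses keyed by input code in an associative array) reflects what the prototype actually does.

```cpp
#include <map>
#include <memory>

// Hypothetical engine-side key code; the real prototype maps its own
// representation onto whatever OIS uses.
enum class KeyCode { W, A, S, D /* ... */ };

// Base class as described in the post: handleInput() makes sure whatever is
// supposed to happen on the bound input event actually does.
class AbstractCommand {
public:
    virtual ~AbstractCommand() = default;
    virtual void handleInput() = 0;
};

// Illustrative subclass: would tell PlayerEntity to move forward.
class MoveForwardCommand : public AbstractCommand {
public:
    void handleInput() override {
        // playerEntity->moveForward();  // pseudo-call into PlayerEntity
    }
};

// Minimal stand-in for the registration side: an associative array from key
// code to command object, consulted when buffered input comes back in.
class InputManager {
public:
    void registerCommandListener(KeyCode key, std::unique_ptr<AbstractCommand> cmd) {
        bindings_[key] = std::move(cmd);
    }

    // Called from the buffered-input callback (e.g. an OIS key listener).
    void onKeyPressed(KeyCode key) {
        auto it = bindings_.find(key);
        if (it != bindings_.end())
            it->second->handleInput();
    }

private:
    std::map<KeyCode, std::unique_ptr<AbstractCommand>> bindings_;
};
```

Usage would then be a one-liner per binding, e.g. `input.registerCommandListener(KeyCode::W, std::make_unique<MoveForwardCommand>());` — which is really just the C "struct plus callback pointer" idea dressed up in classes.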
Soon I'll need to start modeling "Stick Man" as a placeholder; it'll likely be an extremely simple humanoid model, about as complex as a wooden artist's model. Just something that can pass as a human figure during the prototype's tests lol. An arm, for example, will likely be a sphere for the shoulder and elbow, joined by beams, and terminating in some similar representation of a hand.
The thing that interests me most in terms of 3D entities is the possibilities of what can be done. Whatever the performance and code difficulty involved, an idea that I would like to test is to create the humanoid model and a 'door' model. The door would define two points of interest (presumably bones), one running along the hinge (to calculate how it should swing/pull open) and one through the knob. Then establish suitable references to the model's hand and the door knob, and specify that the model's hand (and obviously, any other bones that need to come along for the ride) should be moved to a position close to the door knob and back again. That way if, say, our model had to open doors, we could show it happening (kind of) and wouldn't need separate pre-done animations for whether the model is crouched or standing, let alone having to go through the trouble of putting doorknobs in the exact same position on every door ^_^. Likewise there is the possibility of taking a weapon model (aka "Stick Gun") and defining points of interest such as a muzzle, foregrip point, stock, and handgrip, then specifying that the humanoid model's hands should be positioned at the foregrip and handgrip points, with a gripping animation of some sort applied; a rough sketch of the idea follows below. I am more interested in exploring what is possible without popping a cork than in creating something that looks and acts proper in that regard.
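To sketch what I mean by "points of interest", here is a rough C++ outline. None of this is engine code; the names (PointOfInterest, stepHandTowardPoi) are made up for illustration, and the world-position lookups are assumed to come from the scene graph. A real version would need inverse kinematics so the elbow and shoulder follow along, which is exactly the "bones that come along for the ride" problem.

```cpp
#include <map>
#include <string>

// Bare-bones vector type for the sketch.
struct Vec3 { float x, y, z; };

// A named point of interest on a model: which bone it rides on, plus a local
// offset from that bone (e.g. "knob" on the door, "handgrip" on the stick gun).
struct PointOfInterest {
    std::string boneName;
    Vec3        localOffset;
};

// Hypothetical per-model table of points of interest, keyed by name.
using PoiTable = std::map<std::string, PointOfInterest>;

// Linear interpolation helper used to slide a bone toward a target over time.
inline Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// Sketch of the "reach for the door knob" step: each frame, nudge the hand
// bone's world position toward the point of interest's world position.
Vec3 stepHandTowardPoi(const Vec3& handWorldPos,
                       const Vec3& poiWorldPos,
                       float blendFactor /* 0..1 per frame */) {
    return lerp(handWorldPos, poiWorldPos, blendFactor);
}
```

The appeal is that the same routine works whether the model is crouched or standing, and whether the knob is high or low on the door, since everything is resolved from the points of interest at run time rather than baked into separate animations.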
The weapons implementation is as yet undecided; current calculations for the damage portion of the ballistics have been promising. The algorithm and test-weapon specifications have yielded suitable figures when used within proper situations. Basically, everything is tuned around the concept of an ideal and a maximum range that the weapon should be effective at; results showed that the various example test-weapon specs gave the desired behaviours until a round began moving past its maximum effective range. The downside, obviously, is that if one wants to take a short-range weapon and use it like a sniper rifle, expect to have to empty a crap load of ammo or learn how to target the vitals! The main point at which things get sticky is hit detection and trajectory. The tests for damage/penetration power were based on segmenting the bullet's travel path and scaling accordingly. The idea that comes to my mind as the obvious first attempt is to apply a simple hitscan method: fire a beam from the muzzle to whatever it hits, like a laser gun (I hate games that do hitscan). Slice this path into smaller sectors and then mutilate its forward trajectory, in effect giving a way to implement bullet drop if desired. For removing the "perfect aim", I reckon it would be easy enough to apply a jitter to offset the bullet's hit point according to what the weapon should be capable of doing.
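Since the actual falloff curve and tuning values are still undecided, take the following only as one possible shape for such a scheme: a WeaponSpec with ideal and maximum ranges, a linear damage falloff between them (the linearity is purely my assumption here), and a crude distance-scaled jitter for the "perfect aim" problem.

```cpp
#include <algorithm>
#include <cstdlib>

// Illustrative weapon spec: full damage out to idealRange, tapering off until
// maxRange, past which the round is no longer effective.
struct WeaponSpec {
    float baseDamage;
    float idealRange;   // metres
    float maxRange;     // metres
    float maxJitter;    // metres of hit-point offset at maxRange
};

// Scale damage by how far the round has travelled along the (segmented) path.
float damageAtRange(const WeaponSpec& w, float range) {
    if (range <= w.idealRange) return w.baseDamage;
    if (range >= w.maxRange)   return 0.0f;  // or some small residual value
    float t = (range - w.idealRange) / (w.maxRange - w.idealRange);
    return w.baseDamage * (1.0f - t);        // linear falloff, an assumption
}

// Crude jitter: offset the hit point by a random amount that grows with
// distance, capped by the weapon's maxJitter.
float hitOffsetAtRange(const WeaponSpec& w, float range) {
    float spread = w.maxJitter * std::min(range / w.maxRange, 1.0f);
    float r = static_cast<float>(std::rand()) / RAND_MAX;  // 0..1
    return (r * 2.0f - 1.0f) * spread;                     // -spread..+spread
}
```

With numbers like these, a short-range test weapon pushed out toward its maximum range both loses damage and sprays wide, which matches the "empty a crap load of ammo or learn to target the vitals" behaviour described above.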
Games are not really my area of expertise programming-wise, because I'm more accustomed to building tools and combining front/back-ends. It is, however, an interesting thing to tinker with; and I am building a prototype to explore random ideas, not a polished product.