Skeletal Animation
At the start of the project I was in charge of skeletal animation, implementing all the functionality for skeletons and skinning.
First, I wrote the importer so that skeletons could be saved into our own mesh file format and used within the engine. This also meant handling the instancing of meshes and skeletons differently: a mesh can be loaded once and instanced every time it is used, but skeletons must be generated separately for every mesh instance, or all instances would move identically.
Once skeletons were properly imported, loaded and generated within meshes, I moved on to skinning. Although we had previously done all skinning on the CPU, this time we decided to do it on the GPU with shaders because of the CPU cost. I made a first version of a skinning shader that passed bone transformations as 4x4 matrices in a texture buffer and calculated all vertex positions on the GPU. The shader was later improved to pass only 3x4 matrices, and a further upgrade to dual quaternion skinning was planned; all the research for it was done, but in the end that optimization was deemed unnecessary and there were higher-priority matters to attend to, so dual quaternions were not put in.
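As a rough illustration, the math behind this can be sketched on the CPU as a minimal linear blend skinning routine; all names here are hypothetical, and in the engine this runs in the vertex shader with the 3x4 matrices fetched from a texture buffer.

```cpp
#include <array>

// A 3x4 bone matrix: rotation/scale in the left 3x3, translation in the
// last column. The bottom row (0 0 0 1) is implicit, which is why 3x4 is
// enough and saves a quarter of the upload bandwidth versus 4x4.
struct Mat3x4 {
    float m[3][4];
    // Transform a position (w = 1 implied).
    std::array<float, 3> transform(const std::array<float, 3>& p) const {
        std::array<float, 3> r{};
        for (int i = 0; i < 3; ++i)
            r[i] = m[i][0] * p[0] + m[i][1] * p[1] + m[i][2] * p[2] + m[i][3];
        return r;
    }
};

// Linear blend skinning: each vertex stores up to four bone indices and
// weights; the skinned position is the weighted sum of the vertex
// transformed by each influencing bone's skinning matrix.
std::array<float, 3> skinVertex(const std::array<float, 3>& pos,
                                const std::array<int, 4>& boneIds,
                                const std::array<float, 4>& weights,
                                const Mat3x4* palette) {
    std::array<float, 3> out{0.f, 0.f, 0.f};
    for (int i = 0; i < 4; ++i) {
        if (weights[i] <= 0.f) continue;
        std::array<float, 3> p = palette[boneIds[i]].transform(pos);
        for (int j = 0; j < 3; ++j) out[j] += weights[i] * p[j];
    }
    return out;
}
```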
To finish this part I added save and load functionality for meshes with skeletons, so that they could easily be used within scenes in the engine's editor.
AI Structure
After being in charge of skeletons and skinning I joined the AI department, which up to that point had been severely understaffed. Until then the whole AI had been written by a single person who did not have enough time to reach the vertical slice deadline, so it was severely lacking: too little time, too few programmers, and a bug-filled scripting system. The AI was rushed, unfinished, crammed into a single script, and did not provide any solid base to expand upon. We (the newly formed AI team) decided to change that.
First, after some discussion within the group, I made a base structure for the AI together with two other programmers (Ferran Martin and Eric Sola). The structure is similar to a behavior tree in that it works with finite actions: when no action (or the idle action) is taking place, the enemy decides which action to do next. It consists of a base class that manages the actions taking place (calling their updates, handling interruptions and so on), a child class that handles all the behavior shared by enemies, and finally a grandchild class per character that implements its specific decision making.
Later, I had to expand the structure twice: first to make behaviors and animations match, and then to support the final boss' AI, which does not function like a normal enemy.
The first problem, and this was one of the biggest setbacks we had in the AI department, was that animations worked in two separate states: animations with the weapons sheathed and animations with the weapons drawn. To solve this I expanded the structure so that the enemy class manages two states, combat and out of combat, and the individual classes make their decisions within those states: combat decisions and out-of-combat ones.
The second problem was, as mentioned before, implementing the boss AI. To do that I changed the AI structure so that only functionality specific to normal enemies lives in the enemy class, while functionality the boss also needs lives in the base class, since the boss does not behave like a normal enemy at all. This was how we had envisioned the structure from the start, but it was not until we had a boss and could really put the structure to the test that many of the existing bugs surfaced and I could address and fix them.
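The three-level hierarchy described above can be sketched roughly as follows; all class and method names are assumptions for illustration, not the actual project code.

```cpp
#include <memory>

// An action runs until it finishes or is interrupted; when no action (or
// the idle action) is active, the character decides what to do next.
struct Action {
    virtual ~Action() = default;
    virtual void update() {}
    virtual bool finished() const { return true; }
};

// Base class: owns the current action, calls its update, and asks for a
// new decision when the action ends. Functionality the boss also needs
// lives here rather than in the enemy class.
class AICharacter {
public:
    virtual ~AICharacter() = default;
    void update() {
        if (!current || current->finished())
            current = decide();            // pick the next action
        if (current) current->update();
    }
    const Action* currentAction() const { return current.get(); }
protected:
    virtual std::unique_ptr<Action> decide() = 0;
    std::unique_ptr<Action> current;
};

// Child class: behavior shared by normal enemies; the combat /
// out-of-combat state split lives at this level.
class Enemy : public AICharacter {
protected:
    bool inCombat = false;
};

// Grandchild class: character-specific decision making.
class SwordEnemy : public Enemy {
protected:
    std::unique_ptr<Action> decide() override {
        // Real logic would choose between chase, attack, disengage, etc.,
        // depending on inCombat.
        return std::make_unique<Action>();
    }
};
```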
Enemy Movement
With a solid structure that lets us expand on it and work in parallel within the team, the only remaining parts of the AI were the actions and each character's decision making. Most notably, I was in charge of movement, which we decided to implement with steering behaviors instead of constant speeds so that the enemies would feel more realistic. For the same reason I also used blend clips to smooth out starting and stopping movement.
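The kind of steering involved can be sketched as a minimal arrive behavior; the names and the simple proportional braking rule are assumptions, not the project's exact implementation.

```cpp
#include <cmath>

struct Vec2 { float x = 0.f, y = 0.f; };

// 2D seek-and-arrive steering: instead of moving at a constant speed, the
// agent heads toward the target at full speed and slows down inside a
// braking radius, which makes starts and stops look more natural.
Vec2 arrive(const Vec2& pos, const Vec2& target,
            float maxSpeed, float slowRadius) {
    Vec2 toTarget{target.x - pos.x, target.y - pos.y};
    float dist = std::sqrt(toTarget.x * toTarget.x + toTarget.y * toTarget.y);
    if (dist < 1e-5f) return {0.f, 0.f};   // already at the target
    // Full speed outside the slow radius, proportional braking inside it.
    float speed = (dist > slowRadius) ? maxSpeed : maxSpeed * dist / slowRadius;
    return {toTarget.x / dist * speed, toTarget.y / dist * speed};
}
```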
Movement has been expanded and tweaked intensively. Most notably, I added sideways movement in combat for all enemies and the boss, making combat faster and the AIs feel a little smarter. Its complexity has risen nonstop throughout development with the needs of the enemies and the boss, and this area has been one of, if not the, biggest headaches in the department.
Other Actions and Individual Decision Making
Besides movement, I have also implemented entirely, or helped implement and debug, many other actions.
These actions are:
- Facing Player
- Basic Attack
- Engage
- Disengage
- Enemy Sight
- Chasing Player
I also implemented the decision making for the sword enemy and the boss, and took part in all the other decision-making implementations thanks to my knowledge of the structure.
Automatic Builds
Later, I also worked together with an engine programmer (Elliot Jimenez) to generate builds automatically. He programmed a whole batch mode, and I made use of it through a YAML configuration file with AppVeyor.
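A stripped-down sketch of what such an appveyor.yml could look like; the image, executable name, batch flag and artifact path are all assumptions for illustration.

```yaml
# Hypothetical sketch: run the engine's batch mode on each push and
# publish the resulting build as a downloadable artifact.
image: Visual Studio 2017
build_script:
  - cmd: Engine.exe -batch make_build.script   # batch-mode entry point (assumed name)
artifacts:
  - path: Build.zip
    name: GameBuild
```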
Boss
Finally, with development in alpha, I joined the boss team to implement the boss. For the boss I implemented the decision making, the basic attack, the movement (using the structure mentioned before) and the charge attack.
The decision making has two phases: in the second phase the charge attack is added to the pool of possible attacks, and if the player gets far enough away the boss charges at them.
The charge attack was, in my opinion, the most complex task in the boss' development. To make it I had to implement a way to push the player back and move them, as well as the charge itself: crossing the map, checking whether the player is hit, and pushing them in the logical direction.
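One way to pick that "logical direction" can be sketched as below: push the player mostly along the charge, deflected toward whichever side of the boss they are on so they end up out of its path. The exact rule and all names are assumptions, not the project's actual code.

```cpp
struct Vec3 { float x = 0.f, y = 0.f, z = 0.f; };

// Compute an (unnormalized) push direction for a player hit by the charge.
// chargeDir is the boss' charge direction in the XZ plane; bossToPlayer
// points from the boss to the player.
Vec3 pushDirection(const Vec3& chargeDir, const Vec3& bossToPlayer) {
    // The sign of the cross product's Y component tells which side of the
    // charge line the player is on.
    float side = chargeDir.z * bossToPlayer.x - chargeDir.x * bossToPlayer.z;
    // Perpendicular to the charge, pointing toward the player's side.
    Vec3 lateral = (side >= 0.f)
        ? Vec3{ chargeDir.z, 0.f, -chargeDir.x }
        : Vec3{ -chargeDir.z, 0.f, chargeDir.x };
    // Push mostly forward, partly sideways, so the player clears the path.
    return Vec3{ chargeDir.x + 0.5f * lateral.x, 0.f,
                 chargeDir.z + 0.5f * lateral.z };
}
```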