Aesir Interactive is an independent development company with an emphasis on high-quality video games. Their ambitious and motivated team is located in Munich, Germany. With passion and devotion, they pursue one common goal: developing games with long-lasting gaming fun and long-term life cycles.
Aesir Interactive creates diversified and demanding games by combining gaming technologies with industry applications and standards. Their game POLICE SIMULATOR: PATROL OFFICERS includes a dynamic traffic system that organically creates traffic flow and car accidents, as well as emergency situations that can randomly pop up during a shift as a patrol officer. In the open world of Brighton, players can choose different neighborhoods for patrols, making sure to keep them safe. They can patrol through three unique districts with several neighborhoods each, every one with a distinct flair, from the high-rises of Downtown to the historical buildings of Brickston.
Additionally, the team at Aesir Interactive has set out to create procedurally generated non-playable characters (NPCs) using Reallusion’s Character Creator, Headshot, and SkinGen plugins, in combination with Blender and Unreal Engine.
The Citizens of Brighton – Procedural NPCs in Police Simulator: Patrol Officers
Q: Hello Aesir Interactive and welcome to our feature stories. How has Character Creator helped your production of Police Simulator? Can you share your typical creation process?
Character Creator (CC) has been a huge help because it allows us to create realistic characters easily and speeds up our pipeline. A notable benefit is the extensive library of assets on the Reallusion marketplace, which we use for some of our haircuts and outfits. These assets also come already skinned, which reduces the necessary amount of weight painting to next to nothing.
Since we are dealing with a large open world in Police Simulator, we have to create the NPCs in a modular way which allows for recombination of the different parts. This means that every NPC consists of different parts for each body zone. The most important ones are upper and lower body clothing and faces. In the following we are going to focus on how we create the faces.
Police Simulator is set in a big city, which requires the development team to create many different versions of characters.
We start in Character Creator and adjust the height of each character to match reference images we created beforehand. The rest of the body is ignored, since it isn’t used in the rest of the process. We use the Headshot plugin for Character Creator to get a good base face model and texture. Based on that, we then tweak the facial shape further until we achieve the desired look.
We have to be careful with the placement of the mouth and ears, however, so they will work for certain actions or attachments in the game. An example would be a DUI check, for which the NPC has to blow into an officer’s device to determine the alcohol level in the NPC’s blood. For that, we need the mouth position to be exact, otherwise the device might clip into the nose or chin.
Once we are finished with the facial shape, we start adjusting the texture. For this, the Character Creator SkinGen plugin helps to add some finer details. We focus on smaller details such as pores, freckles, etc., and don’t add huge or easily recognizable features, because the face would stand out too much.
After we are happy with the character’s face, we use Character Creator’s convert-to-game base tool and export it. Character Creator automatically exports a bunch of maps which need to be packed together to work in our pipeline.
For that we run a batch script that sets up the textures correctly. For example, the Occlusion, Roughness, and Metallic maps are merged into a single ORM texture, and all sub-textures for the head (skin, eyes, teeth) are packed together into a single map.
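The channel-packing part of such a batch script can be sketched in a few lines (a simplified sketch only: maps are flat pixel lists here, and the real script’s image loading and saving are not shown; all values are illustrative):

```python
# Merge three grayscale maps into a single ORM texture: occlusion in R,
# roughness in G, metallic in B. Pixels are flat lists of 0-255 values.
def pack_orm(occlusion, roughness, metallic):
    """Zip three grayscale pixel lists into (R, G, B) ORM tuples."""
    assert len(occlusion) == len(roughness) == len(metallic)
    return list(zip(occlusion, roughness, metallic))

# tiny 2x2 example maps
occlusion = [255, 200, 180, 255]
roughness = [128, 128, 90, 40]
metallic = [0, 0, 255, 0]
orm = pack_orm(occlusion, roughness, metallic)  # [(255, 128, 0), ...]
```

In production, the same per-pixel idea would be applied to full-resolution image data.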
The next step in our pipeline is Blender. We import the character from CC and run through a set of actions which we simplified with a custom add-on.
For example, we need an edge loop around the neck to be the same for every NPC to have no visible seams between body and neck in-game. This is transformed automatically with our add-on. Another part that needed to be adjusted is the UV layout. The facial UVs needed to be repacked to fit the merged texture maps. We also needed to add a second UV channel and transform the UV islands to the correct parts of an ID map to set the NPC’s body part.
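The second-UV-channel step can be illustrated with a small sketch (the 4x4 ID-map cell layout and the part ID used below are assumptions for illustration, not the project’s actual layout):

```python
# Move a module's UV islands into the ID-map cell for its body part.
GRID = 4  # assumed 4x4 ID-map layout

def uv_to_id_cell(uvs, part_id, grid=GRID):
    """Scale and offset (u, v) pairs into the unit cell of the given ID."""
    cell = 1.0 / grid
    offset_u = (part_id % grid) * cell
    offset_v = (part_id // grid) * cell
    return [(offset_u + u * cell, offset_v + v * cell) for u, v in uvs]

# a triangle's UVs, remapped into the (hypothetical) cell for ID 5
uvs = [(0.0, 0.0), (1.0, 1.0), (0.5, 0.25)]
remapped = uv_to_id_cell(uvs, part_id=5)
```

The add-on runs this kind of transform over every UV island of the module so the material can later read the ID back from the second channel.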
Apart from the mesh work in Blender, we also adjust the albedo texture a bit to de-light the NPC’s skin and make sure it works under every lighting condition.
After finishing the steps in Blender and importing the parts into Unreal Engine (UE4), the individual parts have to be set up there. The body part type (arms, head, hair, clothing) has to be specified, along with many additional values that can define gender, ethnicity, and other custom tags. This allows us to assign certain rules to the NPCs (e.g. colored hair appears much less frequently than natural hair colors, to prevent several unique styles from appearing at the same time). Once this is set up, the body part can be found in-game.
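Such frequency rules can be sketched as a weighted random pick (the part names, tags, and weights below are made up for illustration, not the project’s actual data):

```python
import random

# Each body part carries tags and a weight; rare variants such as colored
# hair get a low weight so they appear much less frequently.
PARTS = [
    {"name": "hair_brown", "tags": {"natural"}, "weight": 10},
    {"name": "hair_black", "tags": {"natural"}, "weight": 10},
    {"name": "hair_green", "tags": {"colored"}, "weight": 1},
]

def pick_part(parts, rng):
    """Weighted random choice of one body part."""
    return rng.choices(parts, weights=[p["weight"] for p in parts], k=1)[0]

rng = random.Random(42)  # seeded for reproducibility
picks = [pick_part(PARTS, rng)["name"] for _ in range(1000)]
colored_ratio = picks.count("hair_green") / len(picks)  # roughly 1 in 21
```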
We use the same creation technique for the officers as for the NPCs. Before implementing it in the engine, we need to add patches with the Brighton Police branding. Their facial textures have to be treated with special care as well. This is something that we are looking forward to being more easily achievable in Character Creator 4.
Q: How did the team at Aesir Interactive animate the mouth movements?
Right now we use the volume of the voice lines and map it onto the rotation of the jawbone and the Affricate, Pucker, and Open morph targets. We are, however, limited by the number of morph targets we can use on our NPCs. After recent changes to the engine, we are considering using proper spectral analysis on the dialogue audio to drive the morph targets.
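The volume-to-jaw mapping can be sketched roughly like this (the maximum jaw angle and the linear response are assumptions for illustration; the shipped values and the mapping onto the individual morph targets are not shown):

```python
import math

# Map the RMS volume of a short audio window onto a jaw opening angle.
MAX_JAW_DEG = 15.0  # assumed maximum jaw rotation, not the shipped value

def window_rms(samples):
    """Root-mean-square level of a window of [-1, 1] audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def jaw_rotation(samples, max_deg=MAX_JAW_DEG):
    """Clamp the window level to [0, 1] and scale it to a jaw angle."""
    return min(window_rms(samples), 1.0) * max_deg

silence = [0.0] * 64
shouting = [1.0, -1.0] * 32  # full-scale square wave, RMS = 1.0
```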
Q: How did you render the NPCs (non-playable characters) inside Unreal Engine?
Unreal Engine issues roughly four draw calls per mesh section per skeletal mesh in a single frame. The exact number varies with the material and shadow settings, but under normal circumstances four is what we see. When rendering potentially hundreds of NPCs, this quickly amounts to thousands of draw calls if you are not careful.
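The back-of-the-envelope math looks like this (the module and NPC counts are hypothetical examples, not measured figures from the game):

```python
# Back-of-the-envelope draw call estimate: roughly four draw calls per
# material section per skeletal mesh per frame.
DRAW_CALLS_PER_SECTION = 4

def npc_draw_calls(npc_count, sections_per_npc):
    """Estimated draw calls for a crowd of modular NPCs."""
    return npc_count * sections_per_npc * DRAW_CALLS_PER_SECTION

# hypothetical crowd: 6 separate modules vs. a merged 2-section mesh
unmerged = npc_draw_calls(200, 6)  # 4800 draw calls
merged = npc_draw_calls(200, 2)    # 1600 draw calls
```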
Luckily, UE4 provides a neat utility in its C++ code that allows you to merge multiple skeletal meshes into a single one at runtime. You can read about it and alternative approaches in the official documentation. The provided source code for the skeletal mesh merge does not compile anymore – if it ever did – but it is not too complicated to fix.
The code blocks the game thread by default to wait for a render command fence. This, alongside the huge amount of data that needs to be copied for the merging, causes tons of hitching when generating NPCs at runtime. By removing the render command fence and using the async task graph we moved the actual mesh merging to a separate thread to fix these hitches.
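The threading idea, independent of UE4’s actual task graph API, can be sketched like this (a simplified sketch only: `merge_meshes` is a stand-in for the expensive copy/merge work, and the real implementation differs in detail):

```python
import queue
import threading

# A worker thread performs the expensive merge while the game thread keeps
# running; results are handed back through a queue instead of blocking on
# a render command fence.
results = queue.Queue()

def merge_meshes(modules):
    # placeholder for the costly copy/merge of skeletal mesh data
    return "+".join(modules)

def merge_async(modules):
    thread = threading.Thread(target=lambda: results.put(merge_meshes(modules)))
    thread.start()
    return thread

thread = merge_async(["head", "torso", "legs"])
thread.join()  # a game would poll the queue each frame instead of joining
merged_mesh = results.get_nowait()
```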
We then merge all the modules that we randomly generate by a set of rules into a single mesh that has 2 material sections on LOD0, one for the hair that has a masked material, and one for the rest that has an opaque material. On LOD1+ we use only a single mesh section that uses an opaque material.
In order to shade the mesh with the correct textures, we have to resolve the required texture objects at runtime. We use the second UV channel to detect which ID is used and select the appropriate set of textures accordingly. If we had to do it again, we would use a UDIM workflow to save on memory consumption.
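A minimal sketch of that ID lookup, assuming a hypothetical 4x4 ID-map layout and made-up texture set names:

```python
# The second UV channel's position selects the texture set to shade with.
GRID = 4  # assumed 4x4 ID-map layout
TEXTURE_SETS = {0: "skin", 1: "eyes", 2: "teeth", 5: "head"}  # assumed IDs

def id_from_uv(u, v, grid=GRID):
    """Map a UV position in [0, 1) to the ID of its grid cell."""
    col = min(int(u * grid), grid - 1)
    row = min(int(v * grid), grid - 1)
    return row * grid + col

def resolve_texture_set(u, v):
    """Pick the texture set for a surface point from its second-channel UV."""
    return TEXTURE_SETS.get(id_from_uv(u, v), "default")
```

In the material, the same selection happens per pixel in shader code rather than in a dictionary lookup.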
As you can see, we need to pass a lot of textures into the material. To stay within the limit of 16 texture samplers allowed by the shader model, we apply a little trick when sampling the textures. While looking through the generated shader code, we found that when you pass a texture object into a custom node, you actually pass the texture AND its sampler into the function. So what if you simply don’t use all of them? Under the assumption that all textures use the exact same sampler settings, we only use the sampler of a single texture object. As it turns out, this leads to the shader compiler optimizing away all the other samplers and allows us to use more than 16 textures in our material!
Keep in mind: when implementing custom sampling for non-linear color or normal textures, you will have to convert the data to sRGB and unpack the normals manually. This is better than just using shared samplers, of course. We also make use of the ID mapping to use different shading models for the areas by means of the “From Material Function” shading model. Hair, Head and Skin use the subsurface shading model while the clothes use the default lit surface shading model.
Despite its many benefits, the skeletal mesh merge brought many challenges with it, too. In no particular order:
- Meshes are merged in their bind pose, which means all merged meshes need to fit together perfectly when in their respective bind poses. This is why we had to create our Blender tooling to ensure that all the modules fit together.
- Morph targets need to be remapped and merged for any module after the first. We only needed them on the head, so we did not have to remap the others anyway.
- Morph targets of the first mesh need to be copied to the merged mesh for each new mesh. This takes up a LOT of memory if you consider that Character Creator 3 characters have more than 80 morph targets. We ended up stripping all but a few of them to save on memory. We recently did an engine mod that shares morph target data between meshes so that we do not have to copy them, but we did not yet find the time to adjust our animations to the new capabilities.
- LODs are merged as well, which means generated LODs must still fit together at their seams to not have holes. Enabling the Lock Mesh Edges checkbox for the reduction settings helped resolve most of the issues.
- Modules do not keep their own skeletons (as opposed to the master pose approach), which causes issues for modules that require bones in different locations than others. This is the case for different heads/faces where the mouth and eye sockets are in different positions. We worked around this by using the head and its skeleton as the base for each NPC. It still creates slight issues with accessories such as sunglasses.
- When playing animations, the skin sometimes clips through the clothes due to differences in topology between the clothing and the skin. We applied a depth offset in the material to make sure that the clothes always render on top of the skin.
We have some rough ideas about using CPU skinning to deform the modules before merging in order to resolve some of the above constraints, but we have not implemented this yet. The render pipeline for the NPCs is optimized towards minimizing CPU and GPU load at the cost of memory. This tradeoff has, in turn, caused issues on older hardware. We are currently working on a fallback that uses multiple skeletal mesh components with the head as their master pose component. However, for 75 NPCs this costs an additional 10ms in CPU time.
Still, we consider our merged-mesh pipeline a successful part of our project and will continue to build on this tech stack in the future. Other projects are already building on top of this pipeline in UE5 and also seeing great results.
If you also want to be part of our future projects and endeavors, we are hiring! – https://aesir-interactive.com/vacancy/
Follow Aesir Interactive: