In this five-part Master Class series, “Realtime Digital Double with Character Creator and Texturing.xyz”, Sefki Ibrahim shares his tips and tricks on how to take advantage of Character Creator and Texturing.xyz multi-channel face textures to create a real-time character (Ed Harris), and then animate Ed’s face and body easily in Unreal Engine.
Sefki Ibrahim is a freelance character artist who specializes in photorealistic digital human work.
He is well versed in working with Texturing.xyz material maps and was named Texturing.xyz’s “2019 Artist of the Year”.
Part #2: Sculpting and Utilizing Multi-channel Face Maps
Section 1: Sculpting
1. First, I opened the CC3+ male base into the scene and roughly matched Ed Harris’ proportions using the facial morph sliders. Mainly, I estimated the eye, nose and mouth dimensions here, as this will all be properly sculpted once imported into ZBrush.
2. To bring this into ZBrush, I first selected the base in the Scenes tab, along with the eyes and teeth, and then pressed the GoZ button on the top taskbar. You’ll immediately be presented with a window, as shown below. In this case, I selected the eyelashes and tear ducts, which ZBrush will import as separate subtools ‒ something I found more comfortable to work with through my experimentation.
3. Here I began to sculpt the likeness, using this time to establish good proportions and anatomy while paying close attention to my references at all times. Due to time constraints, I brought in some secondary detail from a previous project to establish it quickly, then reworked it later after adding the displacement. I subdivided up to level 5 (subdividing this high is optional). This model will be used for projection later once I’ve textured the skin, so there is still work to do, but my main goal at this point was just to get a sense of the actor.
4. Next, I selected the eyelash subtool and modified the positioning of the cards to better sit on the newly shaped eyelids. The same goes for the tear ducts, eyes and teeth.
5. The model was then imported back into Character Creator using the All button above the Subtools. When you return to Character Creator, make sure to check ‘Adjust Bone to Fit Morph’, ‘Automatically Generate and Re-Align Tear Line’ and ‘Automatically Generate and Re-Align Eye Occlusion’, and to select the pose you originally exported with.
The import is simply the base mesh, so no secondary information shows through just yet. We’ll bring all of that information in after the texturing process is complete.
A note: if you wish to go back into ZBrush to make changes, you can! Make the edits to your sculpt, press the All button again and make sure to select ‘Adjust Bone to Fit Morph’.
Section 2: Texturing
In Character Creator, I added a closed-eye blendshape and a surprised blendshape using the Edit Facial option. These will be useful for texturing to make sure that I get a clear projection on the eyes and the inner lips. I exported these out of Character Creator using the settings below and then imported these blendshapes into my ZBrush scene on separate layers.
Before exporting, I polygrouped by UVs and separated the head, eye and mouth cavities from the body, since I’m only concentrating on texturing the head. Then I exported the facial expressions out of ZBrush at subdivision level 3 for texturing. From experience with Texturing XYZ maps, it’s desirable to texture on a detailed base to ensure your projection aligns with the sculpted features.
Utilising the Multi-channel Maps
The Texturing XYZ multi-channel map that I decided to use for this project was “Male 40s multi-channel Face #70”. The skin texture contains a lot of colour variation and wrinkles, so I thought it would be great for capturing Harris’ 60-plus-year-old face.
The process of wrapping the multi-channel texture on our closed-eye model is as follows:
1. First, check the dimensions of the multi-channel map; in this case, it’s 9824×6190.
2. I imported the closed-eyes model into Maya and created a plane; in the Attribute Editor, I set the plane’s dimensions to 1/100th of the image’s pixel dimensions. In this example, my image was 9824×6190, so I set the plane to 98.24×61.90. The number of subdivisions is up to you ‒ the denser the plane, the longer it will take to wrap. Somewhere between 100 and 200 should suffice.
3. Make sure to turn normalisation off, and check the UV editor as you do this. The plane should encompass the entire UV tile.
4. Finally, scale and translate the plane so that it roughly matches the model’s features. Applying the albedo texture to the plane here will help with this.
5. Export the plane out and then import both the model and plane inside of Wrap3. In Wrap, the node network is as follows. With the multi-channel albedo texture connected to the plane geometry, I joined the two models to a Select Point Pairs node. This part of the process involves picking corresponding points from model to plane, like the image below. Some areas will not wrap cleanly such as the ears, but do not fret ‒ these artefacts will be taken care of in ZBrush.
6. The wrapped plane is then imported into the ZBrush scene and manipulated to match my sculpt. The aim here is to match the colour of the plane geometry to the sculpt. So I have the texture map applied to the plane in ZBrush. With the ears, for example, I force the plane geometry into the correct position using the colour as an indicator — Mask areas of the ear, project and smooth. To ensure a clean projection, I subdivided the plane geometry up to level 3.
7. I regularly checked the diffuse in flat view mode to check there was no significant distortion in the map. Once I had covered the entire plane geometry and verified that the albedo map was clean, I exported the plane geometry (at level 3) out of ZBrush and into a new Wrap scene. Below is a before (left) and after (right). Notice the areas such as the laugh line where I repositioned the texture according to the sculpt.
8. In Wrap, I imported the closed eyes model and the newly projected plane and attached an image node to the plane geometry. I then added a transfer texture node with the dimensions set to 8192×8192. Subsequently, I added an extrapolate texture node and transferred all three maps (diffuse, displacement and utility).
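The plane-scaling arithmetic from step 2 above is simple enough to sketch as a tiny helper. This is just an illustration of the convention used in this workflow (divide pixel dimensions by 100); the function name is mine, not from any tool:

```python
# Sketch of the Maya plane sizing used in step 2: the plane's dimensions
# are the texture's pixel dimensions divided by 100.

def plane_dimensions(width_px, height_px, divisor=100.0):
    """Return (width, height) for the Maya plane, in scene units."""
    return width_px / divisor, height_px / divisor

# "Male 40s multi-channel Face #70" is 9824x6190 pixels:
w, h = plane_dimensions(9824, 6190)
print(w, h)  # 98.24 61.9
```

Matching the plane's aspect ratio to the map this way keeps the projected texture from stretching when you wrap it.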
Here are the exported maps.
Cleaning the Maps in Mari
I imported the closed eye model into a new scene, and I proceeded to create three channels. I set each of these channels to 8-bit and 8K. Subsequently, I imported each of the maps into their respective channels and began the clean-up process. It went a little like this:
1. Diffuse – I first exported the diffuse map from Character Creator (CC diffuse) to use as a fill layer. This diffuse map was imported into Mari and positioned below the Texturing XYZ diffuse map. I then added a mask to the Texturing XYZ layer and painted out all of the redundant areas, like the scalp, neck and cavities. Subsequently, I colour-corrected the layers, so they blended as seamlessly as possible.
I used the open-mouth blendshape to ensure there were no artefacts or visible UV seams in awkward areas such as the inside of the mouth.
2. Displacement – The displacement requires a 50% grey colour for removing unwanted information. Below you can see a before and after. I made sure to remove the eyebrows and eyelashes here. The best way to do this is to project the original texture map: selecting the region just below the lashes (or just above the eyebrows) to paint out the hair. I used the same technique for the diffuse and utility map.
3. Utility – The utility map follows the same methodology as above, with the only exception being that I used black to paint out everything but the immediate facial region. Whether you paint out the ears is optional.
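The masked layering in the diffuse step is a standard composite: the mask decides where the Texturing XYZ map shows and where the CC fill layer shows through. A minimal NumPy sketch of the math Mari performs (the array names are illustrative, not from the project):

```python
import numpy as np

def masked_composite(top, bottom, mask):
    """Blend the top layer (e.g. Texturing XYZ diffuse) over the bottom
    layer (e.g. CC diffuse) using a 0..1 mask.

    mask = 1 keeps the top layer; mask = 0 reveals the bottom layer
    (the painted-out areas, e.g. scalp, neck and cavities).
    """
    if mask.ndim == top.ndim - 1:          # broadcast a greyscale mask over RGB
        mask = mask[..., None]
    return top * mask + bottom * (1.0 - mask)

# Tiny 1x2 RGB example: the second pixel is fully masked out.
top = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])
bottom = np.array([[[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])
mask = np.array([[1.0, 0.0]])
print(masked_composite(top, bottom, mask))
```

The colour-correction step in Mari then makes the two sources match where the mask feathers between them.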
Now to export. For the displacement and utility maps, I added a copy channel layer in Mari, exported each of the Red, Green and Blue channels, and saved them as follows: Displacement_R_Primary, Displacement_G_Tertiary and Displacement_B_Micro; and Utility_R, Utility_G and Utility_B. This operation of splitting a map into its RGB components can also be carried out in Photoshop.
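The RGB split can also be scripted. A minimal NumPy sketch of the idea ‒ the suffixes follow the naming in the text, and the tiny array stands in for the loaded 8K map:

```python
import numpy as np

# Channel-name suffixes used in this workflow for the displacement map.
DISPLACEMENT_SUFFIXES = ("R_Primary", "G_Tertiary", "B_Micro")

def split_channels(image, base, suffixes):
    """Split an HxWx3 map into named single-channel greyscale maps."""
    return {f"{base}_{suffix}": image[..., i]
            for i, suffix in enumerate(suffixes)}

# Stand-in for the displacement map (tiny array for illustration):
disp = np.zeros((2, 2, 3))
disp[..., 0] = 0.5  # red (primary) channel
maps = split_channels(disp, "Displacement", DISPLACEMENT_SUFFIXES)
print(sorted(maps))
```

Each entry in the resulting dict would then be saved out as its own greyscale file.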
There are many ways to go about creating a roughness map, and I feel it’s always a topic of discussion amongst character artists. The method I’m demonstrating here is just one way, feel free to experiment on your own.
Texturing XYZ regards the Utility-B map as a ‘fake specular’ map, which I thought would be useful as a base for the roughness. So, I imported the Utility-B map and inverted it. I also brought in the Displacement R map and inverted it. The displacement map was then layered on top of the utility map and set to multiply. Afterwards, I added a mask for the displacement layer where I painted out areas of the displacement to leave me with the map below. I made sure to have darker regions in the most specular areas of the face, such as the nose, forehead, cheeks and lips.
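The roughness base described above amounts to inverting both maps and multiplying them. A rough NumPy sketch, assuming maps normalised to 0..1 (the painted mask step is omitted):

```python
import numpy as np

def roughness_base(utility_b, displacement_r):
    """Invert the Utility-B ('fake specular') map and the Displacement-R
    map, then multiply them.

    Bright (specular) regions of the utility map come out dark, i.e. low
    roughness ‒ matching the darker values wanted on the nose, forehead,
    cheeks and lips.
    """
    return (1.0 - utility_b) * (1.0 - displacement_r)

# A bright (specular) pixel vs a dull one:
util = np.array([0.9, 0.2])
disp = np.array([0.5, 0.5])
print(roughness_base(util, disp))
```

The multiply by the inverted displacement is what breaks up the roughness with pore-level variation.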
I didn’t concern myself too much with the look of the roughness map simply because Character Creator allows you to further alter the roughness of each region of the face with great control via sliders.
Applying the Displacement
Back in Character Creator, I selected the GoZ button to export the model to a new ZBrush scene. This scene is dedicated to texturing the head in high resolution. When you press GoZ, the same window will appear; however, this time choose the settings shown below.
This setting will allow us to work on the head separately from the body to apply the displacement map on a very dense mesh. Once in ZBrush, I subdivided the head geometry to level 7 or 16M polys. Remember when I said we would be projecting our sculpt later on? Well, this is that time.
I imported my level-5 model (your highest subdivision level) and projected it onto this new head base. The best way to approach this is to start at the lowest subdivision of the new head and select ‘Project All’. Then step up a subdivision level and project once more. Repeat this until their polycounts match. You shouldn’t encounter any geometry errors or ‘spikes’, since the heads align; however, it’s good to double-check, particularly around the mouth corners.
On the left is the initial import, subdivided up to level 7 and on the right is after projecting my sculpt.
Now, I created a new layer and imported the displacement-R map as an alpha. In the displacement map section, select the map and set the intensity to something low ‒ the values I used for R, G and B were 0.2, 0.1 and 0.03, respectively. Double-check that you are happy with the intensity and press Apply DispMap. Repeat this process on a new layer with displacement-G, making sure to turn off any other displacement layers as you do.
Once you have added all of these displacement layers, you can fine-tune them by sliding the intensity of the layer until you get a look you like.
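Conceptually, the stacked displacement layers add up to a weighted sum of the three channels. A rough NumPy sketch of that idea, using the intensities from the text ‒ ZBrush does this per layer internally, and the 0.5 re-centring assumes 50% grey means no displacement:

```python
import numpy as np

def combine_displacement(r, g, b, weights=(0.2, 0.1, 0.03)):
    """Weighted sum of the primary/tertiary/micro displacement channels.

    Each channel is assumed centred on 0.5 (50% grey = no displacement),
    so it is re-centred before weighting. Tweaking a weight is the
    equivalent of sliding a layer's intensity in ZBrush.
    """
    wr, wg, wb = weights
    return wr * (r - 0.5) + wg * (g - 0.5) + wb * (b - 0.5)

# One neutral pixel, one strongly displaced pixel:
r = np.array([0.5, 1.0])
g = np.array([0.5, 0.5])
b = np.array([0.5, 0.0])
print(combine_displacement(r, g, b))
```

This is why the 50% grey clean-up in Mari matters: any stray value away from mid-grey becomes unwanted surface displacement.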
I then continued to work on the face with many more hours of proportion changes and subtle detailing until I eventually felt like I had produced a strong likeness of the actor. Creating an accurate portrait doesn’t happen overnight…well, for most of us. There were many iterations that I tested in Character Creator and even in Arnold to ensure I was getting the best likeness possible. My advice: don’t get stuck in ZBrush the whole time!
Then the normal map was exported out at 4K using the settings below.
The thing to note here is that you can press All to update the face again in Character Creator.
In case you’re confused at this point about why you have two ZBrush scenes: the first scene was necessary for the initial block-out, where you establish the main proportions and the positioning of the eyes, teeth, etc. In this new scene, we can refine the face and push the likeness as much as possible. You don’t have to revisit scene 1 from this point on. It takes some time to get used to the back and forth between Character Creator and ZBrush; I recommend experimenting with the workflow.
PART #1 – Project Overview
PART #3 – Hair Card Generation
PART #4 – Dynamic Texture Editing with Character Creator SkinGen
PART #5 – Animating in iClone for Unreal Engine