
Learn how to create fully rigged digital doubles from a single photo with the new Headshot Plug-in!
Reallusion’s Headshot Plug-in for Character Creator is a unique approach to real-time character head modelling, combining automatic AI-based photo alignment with morph-based manual alignment and texture re-projection. Used in tandem with an image editor such as Photoshop, it can produce highly realistic, animation-ready models in a fraction of the time needed to build rigged models from scratch with conventional sculpting, scans or photogrammetry. You can download a free trial here.
In this article, Mike describes his workflow for using Character Creator and Headshot to produce a high-quality animated model from a single photo. Compressed from three hours of continuous footage, the video is not a tutorial, but it demonstrates a professional’s approach to high-quality model creation, with many tips for Headshot artists along the way.
The Better the Photo, the Better the Model
A good photo for use with Headshot is taken face-on to the subject, evenly lit, and with the subject’s face in a neutral, closed-mouth expression – when using Headshot’s Pro Mode, this will always provide a better starting point for more detailed work. For me, even when working on challenging subjects like this, the first step is always to reduce head tilt and perform some initial de-lighting in Photoshop.
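The tilt reduction and de-lighting are done in Photoshop in the video, but if you prefer to script this kind of preparation, here is a rough Python/Pillow sketch of the same idea. The tilt angle, blur radius and the divide-by-blurred-luminance de-lighting trick are placeholder choices for illustration, not the exact Photoshop steps used in the video.

```python
# Rough photo preparation before loading into Headshot: straighten the head
# and flatten uneven lighting. A minimal sketch, not the author's exact steps.
import numpy as np
from PIL import Image, ImageFilter

def prepare_photo(path, tilt_degrees=3.0, blur_radius=80):
    photo = Image.open(path).convert("RGB")

    # 1. Reduce head tilt: rotate the whole photo by an estimated angle.
    photo = photo.rotate(tilt_degrees, resample=Image.BICUBIC, expand=True)

    # 2. Crude de-lighting: estimate low-frequency shading with a heavy blur
    #    of the luminance, then divide it out so the skin is more evenly lit.
    rgb = np.asarray(photo).astype(np.float32) / 255.0
    luma = rgb.mean(axis=2)
    shading = np.asarray(
        Image.fromarray((luma * 255).astype(np.uint8)).filter(
            ImageFilter.GaussianBlur(blur_radius)
        )
    ).astype(np.float32)[..., None] / 255.0
    flattened = np.clip(rgb * (shading.mean() / np.maximum(shading, 1e-3)), 0, 1)

    return Image.fromarray((flattened * 255).astype(np.uint8))

prepare_photo("subject.jpg").save("subject_prepped.jpg")
```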

The Automatically Generated Model is Just the Start
Once the photo is loaded into Headshot and an appropriate skin and body type is selected, pressing the Generate button creates a character with the photo mapped to the face, and with the model’s features aligned more or less accurately depending on how well the software has picked them up. I’ll be using Headshot’s Image Matching approach to improve the alignment further, but first I can make some quick texture improvements by switching the surrounding skin type and adding some procedural hair and stubble mapping.

Using Image Matching
Next, I activate Headshot’s Image Matching Tool, which brings up an overlay of the original photo based on the software’s estimation of the original camera orientation and settings. I’m using the default here, but you can change the perspective via the lens setting if needed. The control I use most is the opacity slider, which I adjust regularly to compare the original photo with the model as I work.
And with the overlay suitably semi-transparent, I go to Character Creator’s Modify Panel and use the Headshot morphs to improve the alignment of the model to the photo. I approach this much like sketching – making larger changes first, to the skull, ears, jaw and facial features.

Understanding Re-Projection
Improving the alignment with the morphs inevitably changes the relationship between the photo texture on the face and the photo itself, since the texture is stretched and compressed by the morphs. To rectify this, I use Headshot’s Re-Project Photo function, which creates a new photo texture based on the current camera and the current shape of the model.
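For readers curious about what re-projection amounts to, here is a purely illustrative Python sketch. It is not Headshot’s actual implementation: it just shows the general idea of projecting each texel of the face texture through an estimated pinhole camera and sampling the photo there. The inputs (texel positions on the current mesh, camera intrinsics and pose) are assumed to be given.

```python
# Conceptual illustration of photo re-projection onto a morphed head mesh.
import numpy as np

def reproject(photo, texel_positions, K, R, t):
    """
    photo           : (H, W, 3) float array, the source photograph.
    texel_positions : (Th, Tw, 3) world-space position of the mesh surface
                      under each texel of the face texture (assumed given).
    K, R, t         : estimated camera intrinsics (3x3), rotation (3x3) and
                      translation (3,) -- treated here as known inputs.
    Returns a (Th, Tw, 3) re-projected face texture.
    """
    H, W, _ = photo.shape
    P = texel_positions.reshape(-1, 3)

    # World -> camera -> image plane (pinhole projection).
    cam = P @ R.T + t
    uv = cam @ K.T
    uv = uv[:, :2] / np.maximum(uv[:, 2:3], 1e-6)

    # Nearest-neighbour sampling of the photo (bilinear would look better).
    x = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    y = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    return photo[y, x].reshape(texel_positions.shape)
```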

Repeat and Introduce Depth
The process of using morphs followed by re-projecting the photo onto the model is key to producing better models, and it’s something I do repeatedly, in multiple passes – moving from general morph adjustments to finer detail. I keep going until I’m reasonably happy with the frontal alignment, and then start work on the depth. In this case I don’t have any side reference for the subject, so I’m doing this freehand, simply using the depth morphs to create a shape which appears more natural. Once I’ve started on the depth, I combine it with further frontal alignment and re-projection as I proceed.

You can of course make cartoons and caricatures with Headshot (type in extreme morph values if needed), but my goal here is to produce a model which remains true to the original photo while also working well in the round, under virtual lighting. So as I continue to work on the model, I’m also looking closely at the mesh and textures to see where further improvements can be made.
Editing the Photo Diffuse and Blend Mask
Apart from editing the photo before generating the model – and as long as you’re happy with the current projection and don’t intend to re-project again – you can also modify the Photo Diffuse and the Blend Mask via the Headshot UI, by sending them to an external image editor and using the Update Skin Texture button to blend in the edited textures. Here, I’ve customized the outer boundary of the mask to include more skin from the original photo, and reduced the shadow under the nose on the Photo Diffuse map.
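Conceptually, this blending step is a masked blend of the edited Photo Diffuse over the underlying skin texture, with the Blend Mask deciding how much of the photo wins at each pixel. Here is a small illustrative sketch of that kind of blend; the file names and the simple linear formula are assumptions for illustration, not Headshot’s internal behaviour.

```python
# Masked blend of an edited photo diffuse over a base skin texture.
# All textures are assumed to share the same resolution.
import numpy as np
from PIL import Image

def load_rgb(path):
    return np.asarray(Image.open(path).convert("RGB")).astype(np.float32) / 255.0

base  = load_rgb("base_skin_diffuse.png")      # generic/procedural skin texture
photo = load_rgb("photo_diffuse_edited.png")   # edited Photo Diffuse
mask  = np.asarray(Image.open("blend_mask_edited.png").convert("L")).astype(np.float32) / 255.0
mask  = mask[..., None]                        # white = use photo, black = keep base

blended = photo * mask + base * (1.0 - mask)
Image.fromarray((np.clip(blended, 0, 1) * 255).astype(np.uint8)).save("skin_diffuse_blended.png")
```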

Baked Texture Editing
Editing the textures via the Headshot UI means you can still change procedural elements and blending. However, now that I’m happy with the blend, I move on to editing the final baked materials via the character’s Modify/Material Panel. Here, I’m working on the final diffuse texture over in Photoshop, using a combination of cloning and painting to reduce the remaining discontinuity between the photo-mapped face and the surrounding generic texture.

Creating Unique Normals
Headshot doesn’t (yet…) create unique normals based on the photo texture, so as well as editing the Base Diffuse, I also edit the Base Normal map: here I generate two normal maps from the diffuse in Photoshop (one for the eyebrows and the other inverted for the facial contours) and blend them with the procedural normal.
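Generating a normal map from a diffuse texture is commonly done by treating luminance as a rough height field and converting its gradients into normals, which can then be blended with the existing map. The sketch below shows that general approach; the strength value, file names and the UDN-style blend are placeholders rather than the exact Photoshop filters used in the video.

```python
# Approximate "normal map from diffuse" plus blending with the procedural normal.
import numpy as np
from PIL import Image

def normal_from_height(gray, strength=2.0, invert=False):
    """gray: (H, W) height field in [0, 1] -> packed (H, W, 3) normal map in [0, 1]."""
    if invert:
        gray = 1.0 - gray
    gy, gx = np.gradient(gray)                        # slopes of the height field
    n = np.dstack([-gx * strength, -gy * strength, np.ones_like(gray)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n * 0.5 + 0.5                              # pack [-1, 1] -> [0, 1]

def blend_normals(a, b):
    """UDN-style blend of two packed normal maps: sum XY slopes, keep Z, renormalize."""
    na, nb = a * 2.0 - 1.0, b * 2.0 - 1.0
    n = np.dstack([na[..., 0] + nb[..., 0], na[..., 1] + nb[..., 1], na[..., 2]])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n * 0.5 + 0.5

diffuse    = np.asarray(Image.open("base_diffuse.png").convert("L")).astype(np.float32) / 255.0
procedural = np.asarray(Image.open("base_normal.png").convert("RGB")).astype(np.float32) / 255.0

# In the video, two generated normals (one inverted, masked to different regions)
# are combined in Photoshop; a single generated map is blended here for brevity.
detail = normal_from_height(diffuse, strength=2.0)
result = blend_normals(procedural, detail)
Image.fromarray((np.clip(result, 0, 1) * 255).astype(np.uint8)).save("base_normal_blended.png")
```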

Polishing the Model
Now that the primary fixes are complete, it’s time to start really polishing the model – which for me means vertex editing to address remaining problem areas and improve interpolation where needed, as well as further morphing and texture tweaks.

Adapting Stock Models
The model is also getting to the stage where I can start thinking about hair and clothing, as well as preparing for animation. So here, I’ve taken one of Character Creator’s stock hair models, adjusted the textures and now I’m using Edit Mesh to form the stock hair into a shape which is more like the original subject’s hair.

Adjusting Teeth for Animation
I’ve also adjusted the teeth for more natural scale and position, and I can see how this will look during animation using the Modify/Motion/Expression Panel to test various facial expressions.

Neck Wrinkles
One of the biggest remaining issues is at the neck, where the textures are clearly too smooth in comparison to those on the character’s face. Here I’ve added some wrinkles by painting the Body Diffuse and generating a matching Normal over in Photoshop, which helps reduce the discontinuity.

Dressing the Model
After more passes of morphing, mesh editing and texture editing, I dress the model with stock clothing from Character Creator’s content library.

Animating the Character
After a few further vertex tweaks, the character is ready to be animated in iClone, Unreal Engine, Unity or elsewhere. Here is an example of iClone animation.


You can download a free trial of the Headshot Plug-in for Character Creator, and learn more about how to make good use of the tool from Reallusion’s Tutorials.

Mike Sherwood (aka 3Dtest)
Artist and specialist in realistic 3D human modeling and animation