
WarLord’s Workshop – Film School 101: Shooting Dialog

Dialog is a common element in visual storytelling, and while its basics are easy enough, its execution can be tricky for beginners, and even experienced animators, without a little research on the subject. Just as you need video editing skills, you'll also need to set up cameras properly for dialog. 

While we can shoot standard dialog with one camera, it can be a bit boring unless the conversation is well-written, well-animated, and riveting.  

In live action, filmmakers have physical cameras; in animation, we have software-based equivalents with a good amount of control over internal attributes. Using the iClone camera system is very simple, as most of us learn right away. Using cameras effectively takes a little more time, and maybe we can cut down some of that learning curve for you. 

Using cameras in general can fill volumes of books and more videos than there are viewers to watch them. With this in mind, we will stick to the basics of shooting dialog. You'll learn the basic rule of shooting dialog, and I'll point you to resources on when to bend and break that rule. 

Yes… I know some of you have a real dislike of rules and you didn’t get into this to be told that rules exist. The great thing is that you can ignore them and move along on your animation journey if you so desire. As an Indie we all like to chart our own path but keep in mind… we need an audience to play to.  

Whether it’s for financial or artistic/creative reasons we need to make sure we don’t lose our connection to the viewers by losing audience focus with bad camera angles. There is already enough to keep up with in the animation game, so many opportunities to drop the ball, that we need to shore up what we can control when we have the opportunity. 


The rule states that the camera should stay on one side of an imaginary line between two characters so that each character always appears to be facing the same direction, regardless of where the camera is positioned. 


In other words, don't change the direction of view in a two-person dialog for either character. The characters always need to face the same direction on the screen regardless of camera positioning. Otherwise, it can confuse the viewer if one character, or even both, suddenly faces the opposite direction at different points during the dialog.  

There is a lot more to it than this, so I recommend you search out and study the rules of dialog if you aren’t already familiar with them. Keep in mind that as the creator or producer you ALWAYS know what is going on, but the audience hasn’t read the script.  

In filmmaking, the 180-degree rule is a cinematography principle that establishes spatial relationships between on-screen characters. The rule states that the camera should stay on one side of an imaginary line between two characters so that each character always appears to be facing the same direction, regardless of where the camera is positioned. When you keep your camera on one side of this imaginary line, you preserve the left/right relationship of your characters and help the audience maintain a sense of visual consistency. This means that no matter what type of shot you use, the viewer still knows where everyone in the scene is located. 
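The geometry of the rule is easy to check. Here is a minimal sketch (not iClone code; the function name and coordinates are made up for illustration) of how you could test whether two camera positions sit on the same side of the line of action, using a top-down 2D cross product:

```python
# Minimal sketch of the 180-degree rule geometry (not iClone code; the
# function name and coordinates are hypothetical). Positions are
# top-down 2D points (x, y).

def side_of_line(char_a, char_b, camera):
    """Sign of the cross product of (B - A) with (camera - A).

    Cameras that return the same sign sit on the same side of the
    imaginary line of action, so screen direction is preserved when
    cutting between them.
    """
    ax, ay = char_a
    bx, by = char_b
    cx, cy = camera
    cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return 1 if cross > 0 else (-1 if cross < 0 else 0)

# Two characters facing each other and three candidate cameras:
a, b = (0.0, 0.0), (4.0, 0.0)
cam_1 = (1.0, 3.0)   # safe side
cam_2 = (3.0, 2.5)   # same side as cam_1 -> safe to cut between them
cam_3 = (2.0, -3.0)  # opposite side -> this cut would "cross the line"

print(side_of_line(a, b, cam_1) == side_of_line(a, b, cam_2))  # True
print(side_of_line(a, b, cam_1) == side_of_line(a, b, cam_3))  # False
```

Any camera with the same sign is inside the safe area, which is why the diagrams below show a whole green region rather than a few fixed positions.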


We all know that rules are made to be broken but I would tread lightly in doing so with a vital conversation. What may seem like a cool shot to you may be confusing to the audience. Interest could wane if the viewer cannot keep up with the onscreen conversation because they are too busy questioning the jarring change of direction from an inconsistent camera angle. 

This rule seems very straightforward, and you shouldn't overthink it, but it can be a hard concept to grasp or convey when you first start to implement it. It's also very easy to misunderstand what the safe area of the rule is and how it affects camera usage. Diagrams representing this rule tend to be simplified because it is a simple concept.  


If, like the image below, you see a camera on each side that doesn’t mean this is the way an “over the shoulder” shot must be made. It’s just the extreme edges of the safe camera area. Cameras can go anywhere in the green area and maintain the continuity of the conversation. This is a very simplified example. 

In the image above you can see the female and male dialog actors and the blue line that bisects them marking the Safe Camera Area on one side and the No Camera Area on the other side. This image is an extreme example of an “over-the-shoulder” shot. 

While you can see some of the available camera positions, don't interpret these as the only positions you should use. Focus more on being on the proper side of the bisecting line when setting up the individual camera shots. Once you get a pairing you like, it's advisable to stick with it. Don't get fancy for the sake of fancy. Let the cameras tell the story with consistency so the viewer doesn't miss any important plot points. 

Below is an excerpt of my initial research (including ChatGPT and Bing Chat) and it checks out as a solid list of reasons why you should pay attention to this rule. 

The 180-degree rule is a fundamental guideline in filmmaking that helps maintain spatial continuity and visual coherence in a scene. Here's a summary of why it's important: 

– Maintains spatial consistency: The 180-degree rule ensures that the spatial relationships between characters and objects in a scene remain consistent. This is crucial for the audience’s understanding of the scene’s geography and the characters’ positions within it. 

– Preserves screen direction: By adhering to the 180-degree rule, filmmakers maintain the same screen direction for characters and objects throughout a scene. This consistency helps viewers follow the action and maintain a sense of visual orientation. 

– Enhances continuity: Maintaining the 180-degree rule contributes to visual continuity, making it easier for viewers to connect individual shots and understand the flow of the narrative. This consistency is vital in avoiding confusion and distractions. 

– Creates a natural look: When you break the 180-degree rule, it can result in jarring or disorienting shifts in perspective. Sticking to the rule helps maintain a more natural and fluid visual experience for the audience. 

– Facilitates editing: Following the 180-degree rule simplifies the editing process by providing a range of shots that cut together smoothly. This allows for seamless transitions between different camera angles and perspectives. 


“Jumping the line” refers to throwing all this out the window and crossing the bisecting line of the scene. It's not always bad, either. 

In indie filmmaking, there will always be those who jump the line, break the rule, and come out on top. I’m not an expert here by any means so I will give you some more references to check out so you can make up your own mind.  


While we all have our own ideas as to how a shot should be framed or presented, we do need to keep in mind what the industry has trained the audience to expect. It might be wise to not step outside of the norm unless there is a compelling reason to do so from a storyline viewpoint. 

After all, the story is the most important part of all this, and we don’t need to be distracting the audience without good reason to do so. Rules are there for consistency and are not intended to stifle creativity but to aid in getting that creative story across to the audience. 


Sight Unsound – https://www.youtube.com/watch?v=yTaPI3nsH88 Crossing the Line. How and Why it’s Done. 

StudioBinder – https://www.youtube.com/watch?v=iW0bKUfvH2c The 180 Degree Rule in Film (and How to Break the Line) 

Camber Film School https://www.youtube.com/watch?v=e-QbgIpK3zM Breaking the 180 Degree Rule for BETTER Storytelling – Crossing the 180° Line Examples in Movies  

Jesse Trible – https://www.youtube.com/watch?v=jmCpFbZ20Fg – The 180° Rule (And How to Break It) 

Garrett Sammons – https://www.youtube.com/watch?v=7H1EtLBhB9k&t=226s – DON’T CROSS THE LINE!!! | 180 Degree Rule Explained 


Masterclass – https://www.masterclass.com/articles/understanding-the-180-degree-rule-in-cinematography 

Indie Film Hustle – https://indiefilmhustle.com/180-degree-rule/ – What The Heck Is The 180 Degree Rule? – Definition And Examples 

Adobe – https://www.adobe.com/creativecloud/video/discover/what-is-the-180-degree-rule.html – Channel the 180-degree rule for compelling cinematography. 


MD McCallum – WarLord

Digital Artist MD “Mike” McCallum, aka WarLord, is a longtime iClone user. Having authored free tutorials for iClone in its early years and been selected to write the iClone Beginners Guide for Packt Publishing in 2011, he was fortunate enough to meet and exchange tricks and tips with users from all over the world and loves to share this information with other users. He has authored hundreds of articles on iClone and digital art in general while reviewing some of the most popular software and hardware in the world. He has been published in many of the leading 3D online and print magazines while staying true to his biggest passion, 3D animation. For more information, click here.

WarLord’s Workshop Beginners Guide: The Timeline Simplified

A timeline is perhaps the most important basic concept that an animator must grasp. The timeline is what allows animations to happen, what allows action to take place. In most cases a timeline is demonstrated with the simple act of moving a primitive like a box across the screen. While that adequately shows what a timeline does in its most basic form… it's not much fun. 

With that in mind, we are going to replace the primitive with an aircraft. Something that will make this exercise a little more interesting to animate. 


I remember the first time I had to deal with a timeline. The concept eluded me at first because until then everything I did was static digital art, mainly for illustration and web graphics. I had gone from Corel Paint to Photoshop, Windows was a beast that required backups if you worked with it, and the growing demand from the web meant moving on to audio and video with apps like RealPlayer and this new kid on the block, Flash. 

If you were involved with web graphics or site development in the early days of the internet, Flash was a real boon to your workflow. This was particularly so in the mid-1990s, when 33.6 kbps was a common connection speed, before any type of consumer broadband was even on the horizon. Security issues weren't on the minds of many web developers either. We were just trying to get meaningful, interactive websites deployed, which took a lot of time and expense.  

I knew Flash was vector graphics and was capable of animation and that was the extent of it all. The only “timelines” I was familiar with back then were in video editing but for some reason, the connection wasn’t made to what a timeline really meant. 

If you are reading this and you have no idea what the timeline is then you are not alone, as a lot of us have been there, but it is a concept you must eventually grasp.  

The timeline is quite possibly the single most important concept/tool that must be understood to be an animator. You don't have to use curves, even though they greatly improve animation. You don't have to blend motions or use reach targets, but you do have to understand the timeline. 


What is a keyframe? From StudioBinder

A keyframe, also written as “key frame,” is something that defines the starting and/or ending point of any smooth transition. That something can be a drawing in animation or a particular frame of a shot when dealing with film or video. Any shot, animated or live action, is broken down into individual frames. You can think of keyframes as the most important frames of a shot that set the parameters for the other frames and indicate the changes that will occur throughout as transitions.  

You won’t really have to be concerned with a keyframe or what they are during this exercise, but it is a term you need to be aware of and become familiar with during your animation journey. 

The timeline is quite possibly the single most important concept/tool that must be understood to be an animator.


For this scenario, I used the free S-14 Jet from Sketchfab (“S-14 Fighter Jet (High Poly)” (https://skfb.ly/owLXP) by Kamran Mughal, licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/)), as it comes configured for flight with the landing gear retracted and has a nice cockpit too.   

I loaded a Pilot character into the cockpit in a sitting position and placed the hands near the joystick and throttle on each side.  Any iClone humanoid character will work, just remember to link the character to the aircraft.  

Method one… move the jet.


Step 1: Load the aircraft. Set the aircraft shadow to “Receive Only” in the Scene Manager. 

Step 2: Load, link, and position the pilot within the aircraft. Set the Pilot Shadow to “Receive Only”. 

Step 3: Load a sky from CONTENT->SET->SKY in the Content Manager. I used “Cloudy” sky. 

Step 4: Set the Z axis on the sky to something like –10000 to lower the sky and fill the viewport with clouds. This may vary depending on the sky you chose. 


Step 5: Select the PREVIEW camera and pull back until you see the aircraft and a lot of sky.  

Step 6: Use the Zoom tool to zoom out and the Pan tool to move the Aircraft to the left of the viewport.  

Step 7: Move the Play head slider to the last frame or click on the end button.  

Step 8: Use the move tool to move the aircraft towards the opposite side of the screen until the nose of the aircraft is almost off-screen. 

Return the play head to the start and play the animation. You will notice two things immediately: it takes a while for the aircraft to get going, and it moves very slowly across the screen. 

And… congratulations, as you have just created your first keyframes with the beginning and ending frames of the aircraft movement. iClone takes care of everything in between, hence the term tween (or tweener), which is not commonly used in iClone terminology but is a general animation term you may run across. 

The first problem in this exercise is the default curve that starts and stops each motion. If you don’t know what a curve is then just follow along as you need to be a little more familiar with animation before we go into curves. Curves are not a beginner’s topic for most new users but the sooner you understand and properly use curves the more polished your animations will be.  


Step 1: With the plane selected, open the timeline by pressing the Show Timeline button in the play area. Hold down the Alt key and scroll the middle mouse wheel until the complete timeline is visible.   

Step 2: Right-click on the last keyframe (small green symbol on the right side of the timeline) and select Transition Curve.  

Step 3: Change the transition curve from Default to Linear. The aircraft should now be moving at a constant speed between keyframes. 
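The difference between the Default and Linear curves can be sketched in a few lines. This is not iClone's actual curve math, just an illustration using a standard smoothstep ease to show why the default curve makes the aircraft start and stop slowly:

```python
# Illustrative only -- not iClone's curve implementation. Compares a
# linear transition curve with a smoothstep-style ease-in/ease-out.

def linear(t):
    return t

def ease_in_out(t):
    # Classic smoothstep: slow start, fast middle, slow stop.
    return t * t * (3 - 2 * t)

def position(start, end, t, curve):
    """Interpolate between two keyframe values at normalized time t (0..1)."""
    return start + (end - start) * curve(t)

start_x, end_x = 0.0, 1000.0  # hypothetical keyframe values
for t in (0.0, 0.1, 0.5, 0.9, 1.0):
    # Early on (t = 0.1), the eased aircraft has covered only about
    # 28 units versus 100 for the linear curve -- the slow start you
    # saw on playback.
    print(t, position(start_x, end_x, t, linear),
          position(start_x, end_x, t, ease_in_out))
```

Both curves start and end at the same keyframe values; only the distribution of movement between them changes, which is exactly what switching the transition curve does.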


To speed up the aircraft as it goes across the screen you need to select the last keyframe on the right (in the timeline with the aircraft selected) and move it towards the left to about halfway. You have now doubled the speed of the aircraft.  Keep moving this keyframe to the left to increase speed.  

You should now see the relationship between the aircraft and the timeline as it gives the animator an opportunity to mark the beginning and end of this animation (the fly across), its duration, and speed. 
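The keyframe-dragging trick works because the distance is fixed while the duration shrinks. A quick back-of-the-envelope sketch (all numbers are hypothetical):

```python
# Hypothetical numbers: the same on-screen distance covered in fewer
# frames means a proportionally faster aircraft.

def speed(distance_units, frame_count, fps=30):
    """Average speed in scene units per second for a keyframed move."""
    return distance_units / (frame_count / fps)

distance = 3000.0                # units the aircraft travels across the screen
full = speed(distance, 300)      # last keyframe at frame 300 (10 s at 30 fps)
halved = speed(distance, 150)    # last keyframe dragged to frame 150 (5 s)

print(full, halved, halved / full)  # halving the duration doubles the speed
```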


If you want the aircraft to move from offscreen left to offscreen right, then all you need to do is move the play head to the first keyframe (frame 0 in this case), and with the aircraft selected, move it to the left until it is far enough offscreen not to be seen. 

Do the same thing on the last keyframe of the animation and move the aircraft offscreen to the right. You must be on the first frame and last frame as instructed above for this to work. It uses those frames as a starting and ending point and then creates the frames in between.  

NOTE: To increase the distance the aircraft flies you will need to zoom out far enough to have the space needed for the longer distance. 

How did this involve the timeline? I used it to time the movement from frame 0 to the last frame, which captures the movement of the aircraft.  

A video tutorial covering both methods described in this article.


For one plane it does not really matter if you move the aircraft or the sky. If, however, you have several aircraft then it becomes much easier to move the sky (one object) versus multiple aircraft objects.  

This is my preferred method of making this type of shot with multiple objects. This also allows for easy animation of the various aircraft bobbing up and down or moving about. It cannot be done with a standard sky but instead, it needs to be a prop so it can be moved and keyframed. 

Method two… move the sky.


I used Props->Props->3D Space->Dome as the sky.  

  • Increase the Dome scale to 2000 
  • Add a cloudy sky to the Dome diffuse channel or Drag and Drop it directly on the Dome. 
  • Add Water, I used Water 8 in the Still Normal folder. 
  • Set the Water to –100 Height and 270 Direction (so it will flow opposite the direction of the aircraft), and set Water Size to 1000. 
  • Set Wave Size to around 335 to increase size. Set Wave Speed to around 100. 
  • While at Frame 1 move the Sky down to the water or its edge just a little below the surface.  
  • At Frame 1 select the sky and move it all the way to the left while still being able to grab the gizmo to move it. 
  • Go to the last frame and move the sky to the right side of the screen, making sure not to go too far or the curve of the dome may show. 
  • We now need to select the last keyframe (last frame), right-click, then choose Transition Curve and select Linear. You will see a short preview. Now the sky should move smoothly from the first to the last frame. 
  • If you duplicate the Aircraft, make sure to select the Aircraft and the Pilot. Relink each pilot to its appropriate aircraft if you get a popup message regarding strange behavior from links. 

As you can see, it doesn't take that much for a simple animation of moving an object or objects across the screen, up and down, or in any direction for that matter. You would follow basically the same steps for launching a rocket, changing the object's path to angle across the screen or go straight up for a launch. 

This applies to props; walking characters are another issue, as they depend on matching distance and speed to avoid foot-sliding. If, however, you want to fly a superhero across the screen, then the same principles would apply.  


The timeline will hold the keys to the animation kingdom and the more you understand what it can do, the more complex your animations can be. What you have seen here is just a starting point. And, in this case, particularly, utilizing the manual and Reallusion resources will limit frustration while you grow as an animator. 

MD McCallum – WarLord

AccuFACE Revolutionizes Facial Tracking from Live and Recorded Video

Frustrated with the reliance on expensive hardware for facial mocap production? Why not opt for pre-recorded videos for facial tracking? Maybe the technology wasn’t quite there in the past, or it wasn’t accessible to most. But now, the moment has come to put professional actors and video content at the forefront! See how a brand new technology can offer top-tier facial mocap and make collaborative production a reality.

Reallusion is thrilled to introduce iClone AccuFACE, an AI-powered facial tracker that delivers high-quality facial animation, accurately in real-time. AccuFACE can seamlessly capture facial expressions from webcams, GoPros, or existing video files. By integrating Nvidia RTX GPU-accelerated AI technology, AccuFACE revolutionizes facial tracking by eliminating the need for costly hardware or subscription fees, making it accessible to designers of all levels. With the synergy of iClone and Character Creator, crafting new IP characters for professional facial animation is now within reach.


AccuFACE is capable of capturing facial expressions from both webcams and conventional video files.

  • Live Webcam: Compatible with standalone USB web cameras or laptops equipped with integrated webcams, AccuFACE offers optimal tracking resolution of 720p at 30 frames per second. Recommended webcam specs >
  • Recorded Videos: Rotate videos to handle mobile recordings or footage from mounted helmet cameras.
  • Remote Collaboration: AccuFACE lets production studios seamlessly collaborate with professional actors or voice talents worldwide, liberating animation productivity from equipment dependencies and the restrictions of time or location.


Optimize Expressive Range

Calibration for personalized subtle expressions and precise brow movements.

  • LIVE MODE Calibration: Tailor the character’s facial settings to match the live performer with an easy-to-follow guided process, and receive immediate animation feedback for thorough refinement.
  • VIDEO MODE Calibration: When obtaining the ideal calibration poses from a single video source proves challenging, AccuFACE offers the flexibility to search for calibration frames across multiple video files.


AccuFACE offers a comprehensive toolkit of real-time filters that address common facial mocap challenges and enhance tracking quality, bringing artists several steps closer to achieving flawless animation.

  • Strength Balancing: Adapt regional strength adjustments to the actor’s individuality and balance expression intensity. Watch video >
  • Smooth Filter: Eliminate undesirable spasms from the tracking data as a result of exceeding tracking angles, excessive hair coverage, or uneven lighting and shadowing. Watch video >
  • Denoise: Remove residual noise from the tracking data for cleaner, more stable animation. Watch video >
  • Anti-Interference: Cross-region interference and cross-triggering can lead to muddied expressions. AccuFACE interference cancellation can reduce unwanted head, brow, and mouth movements. Watch video >


AccuFACE captures synchronized audio data and facial animation, enabling the rapid generation of voice-synced animations using data from the webcam and microphone.


Timecode-based full-frame animation recording, independent of computer performance. Capture up to 60 fps of clean facial animation data without frame drops.


Incorporate subtle lip and tongue animation through integrated AccuLIPS technology. Fine-tune visemes and lip shapes for flawless talking or singing animations.


By cleverly bypassing voice-to-text recognition, AccuLIPS can extend its capabilities to other languages beyond English. Lip detection and accuracy can be further improved with additional text scripts while free tools are readily available to convert non-English audio files into their romanized counterparts.

Learn more about AccuFACE-related production:

KeenTools FaceBuilder for Blender and Character Creator 4 – Create 3D Characters from Multiple Photos with Headshot 2


This is a short guide on how to make unique fully dressed 3D characters with the powerful integration of FaceBuilder for Blender, Character Creator 4, and Headshot 2. Follow these quick steps to give your model a look-alike appearance using just a few photos or video snapshots.

Create a digital head using photos in FaceBuilder for Blender

FaceBuilder for Blender lets you make fully-textured 3D heads based on photos. Inside Blender, go to the FaceBuilder tab, click on Create New Head and load images using the Add Images button. Check Allow facial expressions and Lock neck movement.

Reference photos for creating a head model with FaceBuilder

Creating heads with FaceBuilder is semi-automatic. Select the top image on the list and click Align Face. FaceBuilder will detect the face in the image and estimate the position of the mesh in 3D. The next step is to line up the mesh with the face manually. Drag the existing pins or add new ones to accurately match the facial features. Pay special attention to eyes, brows, nose and mouth. This will help you get a better texture in the end. Repeat the same operations for the remaining images.

Creating a fully textured digital copy of the head with FaceBuilder for Blender

The next step is projecting texture from the images onto the geometry. Select the maxface UV type in the Texture tab, then go to Advanced and check Equalize brightness and Equalize colour to smooth the texture and make it more even. Then click on Create Texture and hit OK. The digital copy of the head is now ready for export to Character Creator. Go to the Export tab and hit the Character Creator button. FaceBuilder will run the software and transfer the 3D head model and the texture right into it.

Important! You need to have the latest versions of Character Creator and Headshot plug-in installed on your computer for this operation. Visit the official Reallusion website to download the software.

Exporting the model to Character Creator and refining it

Generate the body in Character Creator

Character Creator offers extensive settings to give your 3D character a unique appearance. We’ll stick to a very basic setup to demonstrate a quick path to creating a character.

Export from FaceBuilder to Character Creator gets you to the final stage of generating a head, called REFINE. Here you can explore the mesh in the viewport and adjust anywhere it doesn’t quite match up using a number of brush tools. For example, click on the Project button and go over those areas on the mesh that didn’t fill out right versus the FaceBuilder profile. You can speed up this process using the Symmetry Edit option, which will mirror your adjustments to the other side of the face. As soon as you like the way your head looks, click on ATTACH TO BODY.

In the popup first select a body type for your character and also Mask type, which is basically how much of the original texture you want to use. You can go with No Mask to use the full head at this point and adjust that later. Click on Generate to finally put the head and the body together.

Customise your 3D character

Here are some effective steps on customising your 3D character to make it look like the person in the original photos.

Go to the Content tab to set up the overall appearance of your character. For example, you can choose a hairstyle in the Hair tab or dress up your model in the Clothes tab. You can also add and remove various parts of your model in the Scene tab. The Motion button lets you quickly and easily select a pose for your character.

Customizing your character in Character Creator

Control your character likeness with the Headshot 2 plug-in

The Headshot 2 plug-in lets you tweak every single element of your head model inside Character Creator. Go to the Headshot tab, scroll down to the bottom, and activate Auto Update to have all your changes updated automatically. Inside this tab, you will find some very useful texture settings.


You can select Skin Type from the number of presets and also change skin colour using the colour picker. You can also localise the texture imported from FaceBuilder by specifying Face Mask in the Skin Settings section.

A little tip. Try different options to see which mask works best for your head, paying attention to ears, cheeks, and forehead. Likewise, you can improve texture around the nose and mouth by choosing a Mouth and Nostril Mask. Selecting an Eyelid Mask may help you fix issues like eyelashes baked in the texture.


You can select Eye Color at the very top of the Texture Adjustment section. But there’s a lot more you can do with the eyes in the Morphs tab on the Modify panel. Go to Actor Parts and search for Eyeball Iris Scale for example to adjust that value. Back in the Headshot tab, you can go to Shape Adjustment, check Activate Sculpts Morph, click on the Eyes button and then adjust their position simply by dragging them right in the viewport. You can then rotate the head side-on, zoom in on the eye and adjust its depth.


Go to the Scene tab, select Hair Base, then go to the Texture tab on the Modify panel, select Hair in the Material List, then scroll down to Shader Settings. Here in Strand Color you can change the colour of all main hair parts. If the hairstyle contains more elements, you can adjust their colour in the same way by selecting them in the Scene tab.

A little tip. You can change the scale and size of all body parts in the Morphs tab on the Modify panel. Use search for specific elements, for example, Head Scale, Neck Scale, and Shoulder Top Scale. You can also go to the Attribute tab, make your model open the mouth with the Display Mouth button, and then back in the Morphs tab, search for elements like upper and lower teeth to change their height and scale.

Export to Blender

Go to File > Export > FBX and select the Clothed Character option. In the export menu select Blender as destination, then select Mesh in the FBX Options, uncheck Embed Textures, and then click on Export and save it to disk as model.fbx.

To get full control over your CC4 character we recommend installing the Character Creator add-on. Go to Add-ons in Blender Preferences and activate the Rigify option to be able to use the local rig. Download CC/iC Tools from GitHub. After you install it, you’ll get a new tab named CC/iC Pipeline on your side panel. Go to that tab, click on Import Character, navigate to the model.fbx file, and hit Import.

Explore Character Creator Add-on settings

The Character Creator add-on gives you access to many useful settings. You can go to the Scene tab and select a lighting scheme, like CC3 for example. This will make the materials inside your model visible in the viewport. Another thing you will surely need for your model is a rig. Go to the Rigging and Animation tab and click on the Rigify button. This way you get control over all elements of your model inside Blender.

A little tip. If you plan to export your model further on to other 3D environments, it makes sense to optimise the materials applied to your model. Go to the Character Build Settings tab, select Basic, and then click Rebuild Basic Materials. This will create a simplified version of the materials for your model.

Final results in Blender

Animate your model

Back in Blender, on the CC/iC Pipeline panel, click on Import Animations, select motion.fbx, and hit Import. Then go to the Rigging and Animation tab, select the animation you’ve imported in Source Armature and also Source Action, and then hit Preview Retarget. Preview the imported animation by hitting Play.

The quickest way to animate your model is to use animation export from Character Creator. Inside Character Creator, click on the Motion button and select a specific animation. You can preview it by hitting the Play button right below the viewport. If this is the way you want your character to move in Blender, go to File > Export > FBX > Clothed Character. Select Blender as a destination once again, and then choose Motion in the FBX Options. You will also need to activate the Current Animation and All checkboxes, and also deselect First Frame in Bind-Pose. Now hit Export and save it as motion.fbx.

For more info, check out the KeenTools dedicated webpage on FaceBuilder x Character Creator 4 integration.

WarLord’s Workshop – Beginners Guide: Soft Cloth & Volumetric Lighting

One of the most important skills you learn as an animator is lighting, and this cannot be overstated. It is a make-or-break factor: good lighting can make a so-so story look great, while poor lighting can make a great story look bad. You can never watch too many lighting tutorials, as different authors offer different tricks, tips, and viewpoints that, when combined, may lead to great lighting techniques that set us apart from the crowd. 


This article is an introduction to how Volumetric Lighting, combined with Soft Cloth physics and character interaction, adds depth to a scene, though like all eye candy it can be overplayed. I will be combining a few concepts: 

  • Animating a character with a simple motion. 
  • Using that motion to have the character interact with and pull open window curtains. 
  • Using dummies attached to the hands to make it easier to move the curtains with physics. 
  • Using a volumetric spotlight to silhouette a character standing outside the windows looking in. 
Short Demonstration of Soft Cloth and Volumetric Lighting


The character I used is Camilla, one of the characters that comes with iClone, and the motion, Open Curtains, can be found in the MOTION->HUMAN FEMALE->PERFORM folder.  Both of these should be available unless you haven’t downloaded the extra free resources that come with iClone. Look under the FREE RESOURCES folder in Smart Gallery.  


A woman stands in front of a set of closed curtains covering a large picture window in a rural living room. There is a strange blue light coming from the window. She grabs the curtains and opens them just enough to look out, only to see the silhouette of a strange-looking creature looking back through the window just a few feet away. 

The room is dimly lit so the rays of the blue light pass through the window silhouetting the creature in front of the window. 


This is a very simple setup with two characters, the woman and the creature on each side of a living room window. The drapes are Soft Cloth props that can be found in PROPS->PHYSICS PROPS->CLOTH TEMPLATE PIN-> Pin4_4Points. I duplicated the cloth for the other curtain and left their settings at default.  

There are multiple light sources within the room including a point light that is near the character so I could keep the overall lighting down but still be able to see the character. I also increased the IBL lighting for this article so we could see what was going on. Experiment with different IBL lighting images and strengths to enhance the image visually without blowing out the more subtle lighting.  

The Hand Dummies 

The hand dummies are not necessary as the character comes set up and ready to interact with the curtain via collision dummies. This, however, requires a more pinpoint placement of the character to get the curtains to move properly. The dummies have more contact area, but they can be resized too. 

These dummies are blocks found in PROPS->3D BLOCKS->BOX_001, resized to better fit the hands. They can also be resized independently if needed, as was the case here: the right-hand dummy was longer than the left-hand dummy to get the desired effect on the curtains. Since the blocks are invisible when Set as Dummy, they will not render. Be sure to ATTACH the dummies to the hands.

A better view of the Hand Dummies with Physics turned on and set to Kinematic.

If you look closely enough at different camera angles, you will see that there is a bit of space between the character’s arm/hand and the curtain. The lighting of this scene will hide that. It could be tweaked, if necessary, by decreasing the dummy size and the collision dummies in the arms and hands of the character. 

Activate the physics box. Leave the physics settings at default and set the mode to Kinematic. This is very important, as it holds the dummies in place while maintaining the collision physics necessary for this animation to work. 


This is the easy part since we are only using one, supplied motion. No tweaking will be necessary unless you just want to do so. Place the character near the center of the Soft Cloth curtains. Next drag and drop the Open Curtains motion onto the character. 

The final movement and interaction between character and curtains.

If she runs into the wall or past the curtains, don’t panic, just adjust her starting position until you get the expected results. This will take a bit of back and forth while you find the optimal position that will keep the character from running into the wall while interacting with the curtains. 


While we can adjust the properties of the curtains, the interaction itself is controlled by the motion clip added earlier. The physical contact between the character and the curtain can be controlled by the size of the attached dummies. 

If you need to, make the hand dummies large enough to see easily while you work out what is going on; then scale them down to decrease the gap between the character and the curtain cloth if needed. 


The spotlight is used to project the lighting through the window. Its distance and height are flexible: direct or close placement yields more light, while placing it above and angling it down yields a better silhouette and will not wash out the scene with too much light. 

With the spotlight selected, check the Volumetric Light box about halfway down the Modify menu. Also check the Unlink Light Intensity box so the slider becomes available, and then you can experiment with different looks. 


In this example, I used an alien model that I rigged through Character Creator 4 which I then animated with an idle motion using the motion puppet. This was “motion for effect” so I just needed movement so it wouldn’t look like a statue planted in front of the window. 

The creature was placed directly in front of the window with the spotlight up and behind. Experiment with the placement of both.

Run the animation, which, since it involves Soft Cloth, is also a physics simulation. The difference is that the character is animated and the curtains are simulated. Tweak, position, reposition, experiment, relight, or whatever at this point, as the scene is now ready to render. 


Volumetric lighting can be overplayed, so keep it dramatic when it needs to be, like in this test scene, and more subtle at other times. While lighting is very important, you do not want it to detract from the purpose of the scene, which is to tell a story. 

A bright, ray-filled scene is cool, but it shouldn't be in every scene. A more judicious use of volumetric lighting will go a long way towards a more cinematic result. 

Before I close, I'd like to add that the first time I saw this volumetric light and curtain effect in iClone was in a masterful test created by iClone pioneer Mark Pleasant of Small Wonder Studios. The test was brilliantly executed and, at the time, stunning, as volumetric lighting was new to iClone. Any tutorial by Mark is worth watching, as he is an experienced cinematographer and an excellent animator.  

MD McCallum - WarLord

Digital Artist MD “Mike” McCallum, aka WarLord, is a longtime iClone user. Having authored free tutorials for iClone in its early years and selected to write the iClone Beginners Guide from Packt Publishing in 2011, he was fortunate enough to meet and exchange tricks and tips with users from all over the world and loves to share this information with other users. He has authored hundreds of articles on iClone and digital art in general while reviewing some of the most popular software and hardware in the world. He has been published in many of the leading 3D online and print magazines while staying true to his biggest passion, 3D animation. For more information click here

Iconic Super Mario Princess Peach Made With Character Creator and Blender

Ping Chi Kuo (Big Tree)

Hello everyone, I am Ping Chi Kuo, often referred to as “Big Tree”. Presently, I hold the position of Art Director and Manager at Chinesegamer International Corp (CIC). It is with great privilege that I take this opportunity to impart my insights and experiences garnered from working in the 3D character art industry.

During my academic journey, I developed a strong passion for the realm of 3D art. I’m convinced that 3D art serves as a boundless canvas for infusing creativity and imagination into visual creations, thereby offering unique and immersive experiences to viewers. For this reason, I remain dedicated to expanding my expertise across various aspects of production, encompassing everything from crafting 3D models and materials to perfecting animation and artistic design. Today, I am very pleased to introduce how I made this Princess Peach character starting from the CC base model after discovering the Character Creator to Blender workflow.

My Experience with iClone and Character Creator

A few years ago, our company embarked on a journey to harness the power of PBR shader technologies for our new game. Our team eagerly embraced the challenge, even though the creation of a PBR character demanded significantly more time and effort than our conventional approaches. In my role overseeing both art direction and management, I keenly felt the pressure from all angles, necessitating a delicate balancing act.

Through a stroke of fortune, we stumbled upon Character Creator as a potential panacea for our ordeal. This software not only slashed production time by half but also enabled us to uphold exemplary quality standards. With Character Creator, we were able to craft dozens of distinctive and captivating characters, effortlessly achieving the majority of our objectives.

During that period, the capabilities of Character Creator were already impressive. We could swiftly generate a diverse range of martial arts characters while effortlessly fulfilling the need for an extensive array of weapons, apparel, and accessories. These components could be seamlessly combined and customized to achieve various visual styles. This streamlined approach not only facilitated the creation of fresh characters but also enabled us to meet stylistic demands swiftly and easily. Notably, our production efficiency experienced a substantial boost when it came to the creation of NPCs (non-player characters), which simply required the skillful combination of distinct elements.

Stylized Characters Made with Character Creator

As I continue my search for tools that can enhance our content creation pipeline, I’ve noticed that beyond Character Creator’s powerful versatility, Reallusion offers a range of other tools that greatly improve efficiency. Drawing from my own experiences, integrating characters into iClone while infusing them with visual appeal solely through imagination and creativity is truly unparalleled. 

Notably, Character Creator excels in developing realistic characters and handling stylistic ones, significantly expanding the scope for introducing stylistic variety. A notable example from last year involves my use of Character Creator 3 to craft an adorable adaptation of Super Mario. If this catches your interest, feel free to explore the link below — just remember to enable English subtitles if Chinese isn’t your forte!

Making Princess Peach with Character Creator and Blender

The advent of Character Creator 4 (CC4) has ushered in a new level of simplicity in crafting stylized 3D characters, to the extent that virtually anyone can accomplish it with ease. On the back of this opportunity, I’m thrilled to present my entire creation process of another iconic character in the Mario universe: Princess Peach! The workflow for bringing Miss Toadstool to life can be summarized into the following key steps. (Please turn on the CC subtitles and select auto-translate for English subtitles)

Export the base mesh proxy from Character Creator 4 into Blender for shape modeling

Design Princess Peach’s hairstyle in Blender, and turn curve lines into hair meshes

Create outfits and accessories using Blender's powerful mesh editing tools, and make their corresponding texture UVs.

Assign PBR materials for the outfit via Blender's material node tools.

Send the custom character shape, hairstyle, and outfit design, along with skin weights and material settings, to CC4, where materials can be further adjusted and skin weights refined. Apply some motions to test the physics settings for her skirt, and export to Blender to adjust the test motions.

Use the talking animations in CC4 to refine the details of the facial expressions.

Use the Reach tool in iClone 8 to create seamless interactive motions between Princess and Mario.
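The curves-to-hair-meshes step above can also be done from Blender's Python console rather than the menus. A minimal sketch, assuming Blender's `bpy` API is available; outside Blender it degrades to a no-op:

```python
def curves_to_meshes():
    """Convert the selected curve objects into meshes
    (equivalent to Object > Convert > Mesh on the selection).

    Returns the number of curves converted; outside Blender,
    where bpy is unavailable, it returns 0.
    """
    try:
        import bpy  # only available inside Blender
    except ImportError:
        return 0
    curves = [o for o in bpy.context.selected_objects if o.type == 'CURVE']
    if not curves:
        return 0
    # The convert operator acts on the whole selection, with one
    # of the curves made the active object.
    bpy.context.view_layer.objects.active = curves[0]
    bpy.ops.object.convert(target='MESH')
    return len(curves)
```

Converting the hair curves to meshes this way keeps them selected, so you can immediately join or UV-unwrap the result.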

Final Thoughts

Integrating Reallusion tools into my Character Creator to Blender workflow has consistently brought me joy and satisfaction at every turn of the process. Their solutions are powerful and versatile, and they provide a stimulating environment for me to express my creativity.

Regarding CIC, the inclusion of Reallusion tools marks a revolutionary turning point in our 3D content creation process. They not only enable us to match our rapid turnaround pace but also uphold our morale and creative enthusiasm. By simplifying intricate processes and injecting excitement into routine tasks, Reallusion software empowers us to embrace more substantial challenges and attain new heights.

Ultimately, I firmly believe that technological advancements bring forth even more opportunities. Through the utilization of newer and more advanced tools and techniques, we can broaden the horizons of our creativity, delivering rich and stunning gameplay experiences like never before.

Learn More

Realistic digital human made with Character Creator and Texturing XYZ

Making Realistic Emotive Characters using VFace & Headshot 2

Reallusion, the proud creator of iClone, Character Creator, and corresponding ZBrush plugins unveils the remarkable synergy between VFace and Headshot 2.0, demonstrating how they harmoniously converge to craft an impeccably rigged character, brimming with personality and versatility.

Traditionally, transforming static character sculpts into animatable figures demanded substantial time and effort from character artists, a process that also necessitated extensive training, knowledge, and skill to execute proficiently. These challenges are alleviated with the arrival of Headshot 2.0, which offers a quick and seamless pathway for converting a 3D mesh into an animatable 3D head. Continue reading to discover how this groundbreaking workflow accelerates production schedules, navigates around technical intricacies, and grants you the freedom to channel your energies into the creative aspects of your project.

Preparation of Source Assets

For this demonstration, we will be using the “Mykhaylo #115” VFace asset.

VFace asset “Mykhaylo #115”

In ZBrush, open the VFace file and export the "XYZ_head" SubTool.

For a better user experience, we’ll also need to export a low-poly version of the VFace for use with Headshot 2.0. Here, I am reducing the model to SDiv3, which lets me strip away some of the taxing details while keeping the contours of the mesh intact. Next, we’ll need to export a high-poly of the VFace as a reference model for the baking process by upping the model to SDiv6.

*Opting for greater subdivisions will result in more intricate details within the baked texture maps. However, it’s important to note that this choice will significantly extend the time required to bake the textures. Additionally, bear in mind that the ultimate resolution is constrained by the chosen texture resolution.

Using Headshot 2

Importing the Mesh

Drag the SDiv3 mesh into the Character Creator viewport, and then access the material texture channels (‘Modify > Textures’). From there, simply drag and drop the texture files into their respective channel slots.

We now proceed to open the Headshot 2 panel in order to utilize the newly introduced ‘Mesh to CC’ feature. To do this, simply click on the ‘MESH’ button located near the upper part of the interface and then follow the instructions provided below (indicated in red in the following illustration). Once you have properly configured the mesh accordingly, click the ‘Start Head Generation’ button to begin the procedure.

Deploying Alignment Points

In the initial phase, the left viewport displays the CC base model, adorned with multiple marker points, each labeled with a unique identification number. On the right side, the previously imported source model is displayed. For the model-fitting procedure, HeadShot 2 will analyze the alignment points between these two models.

To ensure precise outcomes, it is essential to apply corresponding alignment points on the VFace model, aligning them with those on the CC base model on the left. Utilizing the auto-deployment feature is advisable, allowing HeadShot 2 to determine the initial placement of the 24 foundational points. Subsequently, manual adjustments can be made to swiftly establish the matching feature landmarks.

Next, manual point deployment will be needed to complete other features like the ears, neck, etc.

Model Selection

In stage two of the process, you can use 'Effective Area' to discard areas of the source mesh that shouldn't be factored into the model fitting. Essentially, we are allowed to hide mesh faces at this step and let HeadShot 2 compensate for the missing parts with auto-generated geometry. This capacity to intelligently fill in absent portions of the geometry proves invaluable when dealing with fractured or flawed source meshes.

There are three preset options for 'Effective Area' in HeadShot 2, each tailored to a different scenario, which allow for the swift concealment of geometry intended for removal. Given that we began with an intact source mesh that had no discernible flaws, we will opt for the comprehensive area selection located at the far right.

Using the Projection Brush

At this stage of the process, HeadShot 2 has already transformed the VFace source model into a CC model. Our next step involves utilizing the 3D paint brush located in the right panel to enhance the appearance of the new CC model, ensuring a closer resemblance to the original VFace character. Simultaneously, we will address any mesh imperfections and problematic edge loops that may be present.

Usually, the mesh surrounding the eyes requires adjustments, similar to the current situation where the alignment of the eyelids with the VFace model is not optimal. It is advisable to deactivate the ‘Keep Border’ and ‘Projection’ settings to facilitate geometry adjustments without being overly constrained by the underlying topology. Subsequently, we employ the ‘Move’ brush to nudge the eyelids into proper shapes and positions. After the corrections have been made, remember to reactivate the ‘Keep Border’ and ‘Projection’ options.

Similar techniques can also be applied to address other areas of imperfection. In instances where the contours of facial features significantly diverge from the source model, the ‘Projection’ brush comes in handy to nudge the mesh back to likeness.

Optimize face topology for the best facial animation

Achieving enhanced character expressions in facial animations relies on ensuring that the primary edge loops closely mirror the structure of the CC3+ base topology. As a result, it becomes imperative to prioritize the accuracy of these primary edge loops. To gain insight into the desired form of proper edge loops, refer to the official topology guide available on Reallusion’s website.

With the finer details and corrections now addressed, we can advance to the actual character production phase. In the case of this VFace model, I will activate the ‘Keep Neck Shape’ option to ensure that the generated head seamlessly integrates with the subsequent base body, all while preserving the authentic neck shape of the source model.

Character Creation

In the ‘Texture Bake Options’ section, I opted for ‘From Source Mesh’ since the source model includes its own diffuse texture. As for the ‘Normal’ setting, I selected ‘From High Poly Mesh’ and designated the high-poly geometry exported earlier from ZBrush. In terms of the geometry, I chose the ‘Male’ preset for body type, and ‘No Mask,’ as the original textures adequately fulfill all the requirements. To finish the process, I clicked on the ‘Generate’ button to kick off the creation of the complete character. Please note that Character Creator morph sliders can also be used to adjust the body shape.

In the following illustration, the result speaks for itself: The HeadShot generated character on the left, and the source VFace model on the right.

Polishing the Model

Refining the Textures

To rectify color imperfections around the mouth, we can open the texture maps in Photoshop for small adjustments.

Repositioning the Eyes and Teeth

Morph sliders are also available for readjusting the positioning of the eyes and teeth.

Modifying Facial Expressions

While HeadShot 2 offers comprehensive facial expression data, there might be instances where these expressions display minor imperfections. In such cases, utilizing the ‘Facial Profile Editor’ is recommended to rectify and refine the facial expressions.

The two most frequently encountered expression morphs that require adjustment are closed eyes and an open mouth. The steps to address these issues are outlined below:

  1. Locate the specific morph slider requiring correction within the ‘Facial Profile Editor’.
  2. Utilize GoZ to export the affected morph shape to ZBrush, where you can rectify it using a selection of ZBrush’s brushes.
  3. Upon completing the necessary corrections, bring the refined morph shape back into Character Creator to finalize the adjustments.

Applying Different Styles

Another significant advantage of utilizing Character Creator is the speed with which you can don different attires, hairstyles, and accessories. Simply access the ‘Content Manager,’ search for the hairstyles and clothing that appeal to you, and effortlessly apply the assets by either double-clicking on them or dragging them onto the avatar.

Character Performance & Unreal Render

Once the character is dressed and prepared, we can start to make it perform. Through the utilization of Unreal Live Link, we can transfer the character to Unreal for look dev. This allows us to apply animations to the character within iClone and observe the same outcomes in the Unreal environment.

For this character, we employed Live Link to capture facial performances and utilized timeline tools for making minor refinements to the animation.

In Closing

Texturing XYZ’s VFace provides 3D artists with the opportunity to operate from meticulously detailed scanned models featuring highly intricate diffuse textures. By synergizing the capabilities of VFace with the advancements offered by HeadShot 2, we are able to rapidly craft a fully developed 3D character that approaches lifelike realism. We appreciate your dedication in reading this article to its conclusion and trust that you’ve garnered valuable insights for your upcoming artistic pursuits.

Explore the VFace content pack on the Reallusion website

Headshot 2.0: AI 3D Head Generator

Know more:

Character Creator: 3D Character Design Software

FREE for 30 days & get 2,000+ ready-made assets:

3D Character Generator Free Download | Character Creator

BAD TO THE BONE Tutorial – Making 2D and 3D dance together with Cartoon Animator and iClone

Tom Jantol

Tom Jantol is a filmmaker who has been a professional animator for more than 15 years.

He first started with game engines as part of the Machinima movement, then further delved into software such as iClone, Cartoon Animator for 3D and 2D animation, Unreal and Unity game engines, and After Effects for post-production.

The main purpose of making this video was to attempt a new breed of tutorial: a movie that is its own "making of", a hybrid of an animated story and a tutorial about making that story at the same time. Or, to put it simply, to bring "behind the scenes" to the front of the scene.

To make it happen, I first needed a 3D environment into which I could import animated 2D materials and make them behave as 3D materials.

iClone is a perfect tool for the 3D part of those needs, and Cartoon Animator for the 2D. Together, they merge into the best of all dimensions, one that doesn't have a number because it is infinite: a dimension limited only by imagination.

The 3D environment of the retro-futuristic laboratory was made in parts, so I could move walls and objects for easier camera management. 

The main object is a corkboard, an animatable plate that will hold inner tutorial graphics. 

The main character, the skeleton, had to be made in a couple of different versions: one as a 3D model, plus several 2D renderings of the same model for use in Cartoon Animator.

Renderings made in iClone, of course.

In Cartoon Animator, the skeleton is rigged as a G3 human character. That job was done in a couple of minutes, thanks to the dummy character included in Cartoon Animator. It was enough to bring that dummy into Photoshop and replace the dummy parts with skeleton parts.

And… with that, preparations for a movie are complete. All that’s left is to combine footage from two software, as seamlessly as possible.

That was much easier to achieve than it sounds, mainly because of the excellent MotionLive plug-in. With MotionLive, dance animation made in 3D iClone can be assigned to the 2D skeleton, so motion is consistent between the two dimensions. 

One of the main features of the movie is the use of the extremely useful Spring Bones option in Cartoon Animator. 

For some scenes, I animated just the hands in Cartoon Animator with a spring effect and then attached that 2D sequence to the 3D iClone skeleton, after hiding the skeleton's "real" hands.

Again, much easier than it sounds. For importing 2D videos into iClone I used popVideo, a very efficient tool for making videos with an alpha channel. The path can't be much simpler: render the video of the hands with a transparent background in Cartoon Animator, then assign that video as a texture to a plane in iClone's 3D environment. That plane is then a 3D object and, as such, can be additionally animated, attached to other objects, etc. 

That way we can achieve the best of both worlds.

This is, actually, all.

With careful planning of camera moves and the timing of imported 2D animation, both programs become one tool. If we upgrade that tool with our creative minds, the possibilities are endless.

Follow Tom Jantol:


WarLord’s Workshop: Hands-On Review Business Crowd Set Vol 1

One of the major flaws of a lot of animated scenes is the lack of people present in the scene. While this is improving, there are still a lot of good scenes out there that have fewer than a half dozen actors in them, with some only having one or two characters. While this works for some setups, it does not work for creating a busy office, hotel lobby, restaurant, or other public place.  

A hotel lobby may be mostly empty at certain times of the day and night, but not all the time. In a lot of cases, it comes down to a lack of resources to make a crowd, even a small one. Crowd solutions are not cheap, and low-poly characters, while becoming more plentiful, are still not widely available. 

It’s difficult to give a sense of a busy office or lobby without a lot of people, but one thing to remember in animation is that a few can look like a lot in the right situation with the right character assets. That being said, it still takes upwards of twenty to thirty or more characters to make a convincing outdoor scene in a busy public area. 

The demonstration video of this pack is at the bottom of this article including details on the project used for review testing.

This can be different in an office-type situation, or part of a hotel lobby like the main desk or elevator area. They don’t need a huge crowd, but they do need people. Even if you are only looking at a half dozen to a dozen people, you are still talking about creating or acquiring significant resources, so paying attention to the versatility of those resources becomes very important. 

If you lay out some hard-earned dollars on ten to twelve or more characters, then they need to be easy to duplicate, with colors changeable from hair and skin down to clothing and shoes, so you can use them in a lot of situations going forward, not just the project at hand. 

Reallusion has provided just such a pack with its Business Crowd Set. This set is not only low poly with pro-grade clothing and 3D scans but also contains some very important maps like an RGB mask for skin, hair, and teeth and a Color ID Mask for outfits, accessories, and eyes.  

RGB and COLOR ID Masks

As you can see from the image above, these maps cover specific areas like the skin, clothing, and other parts of the character. There is one hitch, though: you must drag and drop a tool, provided by Reallusion, onto the character to access the Substance texture layers. It's a simple drag and drop, and then, by the digital magic of the Reallusion engineers, you have sliders available within the texture menu that were not visible previously. 
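To make the idea of an RGB mask concrete: each color channel of the mask isolates one editable region, so a tool (or a script) can tell which region a pixel belongs to just by its strongest channel. Here is a minimal illustration in plain Python; the exact channel-to-region mapping (R = skin, G = hair, B = teeth) is an assumption for demonstration, not Reallusion's documented layout:

```python
def mask_region(r, g, b, threshold=128):
    """Return which region an RGB-mask pixel selects.

    Assumed demo mapping: R = skin, G = hair, B = teeth.
    Pixels with no channel above the threshold belong to no region.
    """
    regions = {"r": "skin", "g": "hair", "b": "teeth"}
    channel, value = max(zip("rgb", (r, g, b)), key=lambda cv: cv[1])
    return regions[channel] if value >= threshold else None

# A pure-red mask pixel selects the skin region.
print(mask_region(255, 0, 0))  # skin
```

The same lookup is effectively what the Substance sliders do for you: a tweak to "skin" only touches pixels the red channel of the mask selects.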


The drag-and-drop tool will activate the Substance tool, allowing us to alter the newly available sliders for Color, Roughness, Metallic, Ambient Occlusion, Glow, and Mask. 

This modifier is supplied by Reallusion and must be dragged and dropped onto the character directly; otherwise, you will not see the sliders necessary to make easy edits like color changes. If iClone is up to date, this modifier should already be in the Substance folder of the materials section. 

Material Modifier


This is a great feature of these actorSCAN characters. The Substance tool can be a time saver in diversifying the characters into a much larger crowd with different color clothing and skin tones. If, like this pack, you have fifteen characters, you can drag and drop the material modifier onto the first set of characters and then duplicate them for additional crowd or background characters, quickly creating a larger crowd than just the initial set. 


Initially, you notice the Substance section of the Modify Material Menu is blank and inaccessible. After activation with the modifier, the section will become active, and the sliders will be available after checking the appropriate boxes to make them visible. 

Drag and Drop PBR Material Modifier – Necessary for Quick Changes


Clicking the available checkboxes will bring up the RGB Mask and Color ID Mask making their subsections available, and you can click your way down through the Tweak RGB and Tweak Color ID menu. 

Experimentation is highly encouraged in this area. If you are not sure what each slider does, then use them, as you can always reload the character if it gets away from you. Experimenting is part of 3D animation. No matter how many tools we are given, there will always be something not covered by an available tool or method that you must solve by other means.  

It can also lead to undocumented methods never envisioned by the development team which might even lead to further development of the discovery. Such was the case many years ago when I first brought destructible props into iClone after physics was introduced. They would run, for a bit, then crash. Before long a Reallusion dev had stabilized the procedure and Reallusion issued a patch. 

This is just one example of how Reallusion pays attention to its user base. Don’t get “married” to a few or a certain set of sliders in iClone or Character Creator. When time allows, experiment, play around, and push iClone to the limits of what can and cannot be done. 

Drilling Down to the Sliders


I like the Hue slider as it is a time-saving tool, just like the Adjust Color popup, giving you many color variations in a short time. While there are input boxes, it is quicker to run through the sliders just to see their effects. 

Using the Hue Slider for Quick Color Changes


In this case, I just wanted an older-looking man, and extras like these are usually not in close-ups. A simple desaturation and lightening of the hair would do the trick quickly and painlessly and, in this case, would pass in a close-up as well.  

Easy Graying of the Hair with Saturation and Luminosity
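In HSL terms, graying hair is just pulling saturation toward zero while nudging lightness up. A minimal sketch with Python's standard `colorsys` module (the `gray_hair` helper and its default amounts are invented for illustration, not iClone's actual slider math):

```python
import colorsys

def gray_hair(rgb, desaturate=0.85, lighten=0.35):
    """Mimic the Saturation/Luminosity sliders: pull saturation toward 0
    and push lightness toward 1 to turn a hair color gray.
    """
    h, l, s = colorsys.rgb_to_hls(*rgb)
    s *= (1.0 - desaturate)       # drop most of the saturation
    l += (1.0 - l) * lighten      # raise lightness partway toward white
    return colorsys.hls_to_rgb(h, l, s)

# A warm brown hair color pushed toward a slightly warm neutral gray
brown = (0.35, 0.22, 0.12)
gray = gray_hair(brown)
```

Starting from a warm brown, the result lands near a neutral gray with a hint of warmth left in it, which is what tends to read as naturally grayed hair.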

It only takes a few sessions with the modifier sliders to see what they can do and how much work they can save you in building up a business crowd, or even a bar crowd for an after-work scene. 


The test scene was set up in a high-poly design studio environment with simple HDR, Tone, and IBL lighting. The project had 25 avatars that hold up from close-up on out with good results in a final render. This, of course, depends on the lighting, as with any render. 

Project Stats: 

Project Stats

As you can see, the scene contains 25 avatars and 149 props as the major assets. I built this scene on an older 2017 Intel i7 HP Omen instead of my newer XPS i9 to see how it would handle the load.  
It was a pleasant scene build, with no thought given to the fact that it was a five-plus-year-old HP system. It did start to react a little sluggishly towards the end but could have held more avatars if needed.  
I worked in the minimal view mode when it started to drag a bit and had no problem working with the test scene in either minimal or full mode. 
This type of scene would have been very hard to work with just a few years ago but the old Omen breezed through building out the scene, animating, and testing the characters.

To be honest, my Intel i9 is not much quicker to respond than the older Intel i7, and this scene presented no issues; I could work in full mode at any time. 

TIP: You can use Depth of Field (DOF) to blur characters into the background to further disguise the usage of duplicated characters. 
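iClone computes the blur for you, but the reason distant extras smear more than near ones is plain thin-lens optics. As background, here is a standard blur-circle sketch (a generic formula, not Reallusion's implementation; the function name and the sample numbers are made up):

```python
def blur_circle_mm(f_mm, f_number, focus_m, subject_m):
    """Thin-lens blur-circle (circle of confusion) diameter in mm.

    f_mm: focal length in mm, f_number: aperture (f/N),
    focus_m: focused distance in meters, subject_m: subject distance in meters.
    Returns 0 for a subject exactly on the focus plane.
    """
    s1 = focus_m * 1000.0    # focus distance in mm
    s2 = subject_m * 1000.0  # subject distance in mm
    return (f_mm * f_mm * abs(s2 - s1)) / (f_number * s2 * (s1 - f_mm))

# With a 50mm lens at f/2.8 focused at 2 m, an extra at 8 m blurs
# far more than one near the focus plane at 2.5 m
near = blur_circle_mm(50, 2.8, 2.0, 2.5)
far = blur_circle_mm(50, 2.8, 2.0, 8.0)
```

The extra at 8 m produces a blur circle several times larger than the one at 2.5 m, which is exactly what hides duplicated faces in the background.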

Demonstration Video of Characters in Use.


These characters are lower poly than most standard characters, so they won't tax the engine as much, and their poly counts are far lower than many competing character solutions. While a high-poly character makes for a great still render, fitting several low-poly characters into the same poly budget is a real plus for animated renders. 

Oh, by the way, I got this far and haven't even mentioned another great feature: you can edit these characters in Character Creator. Since they are actorSCAN characters, the morph sliders are not available, but other tools like Optimize and Decimate, Edit Mesh, and Adjust Bones and Proportion are. 

The next time you need a business crowd, the Business–Crowd Set – Vol. 1 can go a long way towards filling out a conference room, staff meeting or even a small auditorium. 

MD McCallum – WarLord

Digital Artist MD “Mike” McCallum, aka WarLord, is a longtime iClone user. Having authored free tutorials for iClone in its early years and been selected to write the iClone Beginners Guide for Packt Publishing in 2011, he was fortunate enough to meet and exchange tricks and tips with users from all over the world, and he loves to share this information with other users. He has authored hundreds of articles on iClone and digital art in general while reviewing some of the most popular software and hardware in the world. He has been published in many of the leading 3D online and print magazines while staying true to his biggest passion, 3D animation. For more information click here