Crowd Sim and Motion Director Editor for iClone 8.4 Launch as Free Upgrades

Crowd simulation continues to be a cornerstone of films, games, architectural visualization, traffic analysis, construction, and engineering. Its significance extends further into AI training, driving simulations, and digital twin development, making the discipline an increasingly valuable component of cutting-edge industries.

In response to the soaring demand for crowd simulation, iClone 8.4 has undergone a monumental transformation, now boasting an integrated crowd simulation engine available as a free upgrade. This remarkable leap forward democratizes access, empowering small to mid-sized studios to harness its power affordably. The fruition of two years of relentless development, this update brings forth streamlined functionalities and flawless asset integration. The Crowd Sim technology marks Reallusion’s pioneering step into the world of simulations, with upcoming releases in 2024 set to elevate the experience further with a focus on environment interaction and motion planning.

Know more: iClone Crowd Sim Official Page | Getting Started Tutorials

Last but not least, this update introduces the Motion Director (MD) Editor, providing an infinite range of options to customize and interact with MD toolsets. This allows you to precisely steer characters around the scene and trigger motion sequences exactly as desired.

Know more: Motion Director Official Page | MD Editor Tutorials

Feature Highlights

Complete Toolchain

Most existing crowd simulation solutions are plugins for off-the-shelf software such as 3ds Max or Maya. These fragmented solutions commonly lack readily available assets for immediate production and are frequently priced beyond the means of smaller studios. In contrast, Reallusion has developed a comprehensive character production and crowd simulation toolchain that is both practical and affordable for real-time simulation.
This toolchain includes a spectrum of tools ranging from 3D character creation (Character Creator) and character animation (iClone) to an extensive library of motion and character assets (Actorcore), complemented by an asset developer ecosystem comprising the Content Store and Marketplace. The integrated suite empowers users to design custom characters tailored to specific thematic needs and seamlessly integrate personalized animations, including complex crowd behavior.

Environment Setup

Designating the traversable area on an imported 3D environment is mostly automatic. Your scene can comprise multiple elevations connected by ramps and stairs. iClone seamlessly detects accessible areas and promptly updates the NavMesh whenever new objects are introduced. It integrates navigable inclines and distinguishes between passable and obstructed zones. Live NavMesh updates can dynamically adjust to changes, such as the movement of automatic sliding doors. Learn more about NavMesh: Tutorial | Manual.

Walkway for Foot Traffic

Establish footpaths with simple mouse clicks, modify road curvature and width by dragging control points, and automatically identify surfaces conducive to movement. Seamlessly close paths to create continuous loops, connect individual paths to craft passages or intersections, and fine-tune the probability of selecting specific routes. Learn more about WalkWay: Tutorial | Manual.

Flexible Character Spawning

Choose the target location for spawning actors, whether it’s a 3D model, a user-defined volume, a walkway, or the entire NavMesh. Have the flexibility to specify the crowd size, formation, density, facing direction, and observe their actions upon placement in the scene. Male, female, elderly, or child actors can automatically adopt appropriate animation styles, with adjustable probability ratios among idle, walking, jogging, and other performances. Learn more about Spawning Actors: Tutorial | Manual.
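The adjustable probability ratios among idle, walking, jogging, and other performances can be pictured with a small generic sketch. This is purely illustrative (not Reallusion's engine code); the state names and weights below are invented for the example.

```python
import random

# Hypothetical performance states and selection weights, mirroring the
# adjustable probability ratios described above. Only the ratios matter.
PERFORMANCES = ["idle", "walk", "jog"]
WEIGHTS = [0.5, 0.35, 0.15]

def pick_performances(n_actors, seed=42):
    """Assign each spawned actor a performance, chosen by weighted ratio."""
    rng = random.Random(seed)  # fixed seed keeps the demo reproducible
    return [rng.choices(PERFORMANCES, weights=WEIGHTS, k=1)[0]
            for _ in range(n_actors)]

crowd = pick_performances(1000)
print({p: crowd.count(p) for p in PERFORMANCES})  # roughly 500 / 350 / 150
```

With 1,000 actors, the observed counts land close to the configured ratios, which is exactly the behavior a spawning system exposes through such sliders.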

Smart Avoidance and Flocking

Incorporating natural crowd dynamics, iClone’s crowd simulation lets characters maintain distancing and formations while circumnavigating obstacles, irrespective of their movement styles and speeds. Through the use of agent radius, detection times, and wait-and-move cadences, the simulation can minimize backtracking and unravel pedestrian bottlenecks. Learn more about Avoidance: Tutorial >
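To see how an agent radius keeps crowd members apart, here is a toy 2D separation step in the style of classic steering behaviors. This is a generic textbook sketch, not iClone's actual avoidance algorithm; all names and numbers are invented.

```python
import math

# Desired personal space for each agent, in scene units (invented value).
AGENT_RADIUS = 1.0

def separation_step(positions, strength=0.1):
    """Nudge each agent away from any neighbor closer than AGENT_RADIUS."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        push_x = push_y = 0.0
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            dx, dy = x - ox, y - oy
            dist = math.hypot(dx, dy)
            if 0 < dist < AGENT_RADIUS:
                # Push harder the closer the neighbor is.
                scale = (AGENT_RADIUS - dist) / dist
                push_x += dx * scale
                push_y += dy * scale
        new_positions.append((x + strength * push_x, y + strength * push_y))
    return new_positions

# Two agents crowded together, one far away and unaffected.
agents = [(0.0, 0.0), (0.5, 0.0), (5.0, 5.0)]
agents = separation_step(agents)
```

After one step, the two crowded agents drift apart while the distant one stays put; a real engine would blend this separation force with goal-seeking and obstacle avoidance each frame.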

Directable Characters 

Easily generate realistic characters with intricate facial expressions, various accessories, and diverse animation styles, ideal for populating crowd scenes at any distance. Users have the flexibility to manage characters collectively or individually, employing keyboards, game controllers, or mouse-click waypoints to direct their locomotion. Learn more about Crowd Generation: Tutorial >

Ambient Animation with ActorGroups

ActorGroups are equipped with pre-defined, loopable animations for standing, sitting, and talking, enriched by an array of accessories, facial expressions, and genuine social interactions. They enhance scenes by introducing randomized actors, materials, and motions, or by crafting bespoke ActorGroups, further enlivening the overall experience. Learn more about ActorGroup: Applying Preset Actor Group – Tutorial | Manual. Applying Random Actor Group – Tutorial | Manual.

Alternate Performance and Versatility

Enjoy the flexibility of converting characters into Lite actors when system resources are constrained or when optimizing for heavy crowd simulations. Choose to revert Lite actors back to Edit mode for precise control over their movements using Motion Director along with a diverse range of input controllers.

Motion Director Editor

Motion Matching has been integrated for smooth navigation control, enabling easy addition of new idle and perform triggers to Motion Director via a drag-and-drop interface. Motion Mixer lets users direct upper-body motions while the legs are controlled by standing, walking, or jogging iMD states. Moreover, the Mixer features specific sub-mixer actions, enabling actors to perform tasks like making a call, sending a message, or simply glancing at a handheld device. Know more: Motion Director Official Page | Tutorial | Manual

Customization and Expansion 

Access the ActorCore roster to enlist scanned actors and their matching mocap loops. Use Character Creator to design modular actors, decimate the mesh for massive crowd simulations, or auto-rig with AccuRIG. Spawned crowds can be randomized based on user-defined actor and motion lists that match them with predefined accessories and material options. 

Pipeline and Render

Effortlessly export animated characters to your preferred 3D tool or game engine. Native Auto Setup plugins are available at no cost, offering pre-configured model and material setups for Blender, Unreal Engine, and Unity. Moreover, Live Link supports synchronization with the Unreal Engine and NVIDIA Omniverse, enabling a two-way workflow. Upcoming support is also in the works for 3ds Max and Maya.

Startup Assets

iClone includes 8 crowd characters, each featuring 5 texture options. To enhance your crowd simulation experience, consider obtaining the two essential packs for additional crowd extras. For a broader selection of 3D actors offering loopable ambient motions created from authentic mocap data, explore the vast offerings available in ActorCore.

Modern People Sampler

31 scanned actors are available with full-body rigging, encompassing various genders and ages. These individuals offer a rich array of animation possibilities using Crowd Generation and the dynamic spawning features of ActorGroups. Each character is equipped with 5 texture options, allowing for both random and specific visual variations, enhancing the diversity and realism within your animations. Know more about Modern People Sampler >

Social Actions

“Social Actions” is an ambient crowd animation pack designed to utilize the dynamic spawning system for ActorGroups. Effortlessly bring scenes to life with individuals standing, sitting, engaging in conversation, or naturally interacting with close ones. Comprising 90 mocap animation loops for endless background animation, each detailed with authentic facial expressions and digitized in MotionPlus format, this pack is compatible with any project. Know more about Social Actions >

Know more about iClone Crowd Sim related features:

WarLord’s Workshop – Beginners Guide: Creating a Snowman in CC4 with Primitives

This Christmas, like many that came before, will be the first chance for many new iClone/Character Creator users to make something special for the holiday season. Whether it be for family and friends, a business, or the web in general, Reallusion and Character Creator 4 make it easy to create characters like an animated snowman (or rather, a snowman you can animate) to spread some Christmas cheer and extend your skills at the same time.

The best part is we will use primitives that come with CC4 to create the snowman. What we will be doing specifically:

  • Drag, drop, rescale, and place primitives, balls (spheres), and cylinders to create a simple snowman.
  • Merge the various props into one prop.
  • Use AccuRig to rig the snowman for animation (and mask out bones we don’t need).
  • Test the Snowman character in iClone.


This exercise is a great starting point for new animators and an easy, quick solution for seasoned pros. AccuRig can take a mesh that is similar to human form and give it humanoid movement. It can be anthropomorphic animals or any group of props that AccuRig can read and skin as a character. This also opens up iClone’s entire motion library for easy animation.

As shown below, drag and drop the prop, primitive Ball_000, found in Props-> 3D Blocks, then duplicate the ball and move it up the vertical axis until it is in the approximate range of the torso. With the newly created torso still selected, duplicate it and bring it up to the area of the head while rescaling it down to the size of the head you need.

The start of our snowman. Three basic balls (Spheres) for the body.

Now search for a cylinder in the same 3D Blocks folder. In this case, I used Cylinder_004. Rotate it to 90 degrees and place it where you think the arms would be.

A cylinder was added for the arms and runs through the body mesh.

PLEASE NOTE: There is a video tutorial at the end of this overview.


Now we are ready to Merge all the primitives into one object. This is done to prepare the mesh to work with AccuRig.

If you are concerned about the lack of a face at this point, don’t be: there is no reason to include those props in the skinning process, as they will function as attached accessories. It’s just a simple process of adding more balls (spheres) textured black for the eyes and mouth, while adding a cone of your choice for the nose with the appropriate color texturing.


Now comes the easy part… using AccuRig to turn our pile of props into a snowman character we can animate. While we are in AccuRig, we will also use masking to turn off the bones from the waist down, since this snowman doesn’t have legs.

With the newly merged mesh selected press the AccuRig button to take us into rigging. We’ll move around some of the markers but that is a simple enough task to complete.

Once in AccuRig press the “Create Guides” button and let it do its magic.

Arranging the Guides (dots) with the mesh.

After arranging the guides accordingly, we set the Number of Fingers to zero since our snowman has no fingers. You could add two more fingers with scaled cylinders and skin it as three-fingered, but we’re keeping this simple for the purpose of this tutorial.

Click Generate Skeleton to continue.

Set the number of fingers and Generate Skeleton

Once the skeleton appears, select the Pelvis bone; it should turn blue when selected. Next, turn on the Mask by clicking the On button in the Mask section. This will mask out the pelvis and the bones below it, so they do not twist or distort the bottom of the snowman when motions are applied.

This is a very powerful and useful feature of AccuRig, making it possible to skin many different types of characters that may not have the standard number of limbs or may have lost a limb.

Click the Bind Skin button and the skinning process will begin. It may take a while on some models but, in my experience, it generally finishes in less than a minute. Once it’s finished you will see a multi-colored skeleton indicating the skinning process is complete. From here you can close AccuRig, and the newly skinned snowman character will be in the Character Creator 4 workspace.

The multi-colored bones show that the character is rigged with bottom bones masked out.

Below you will see the snowman character after transfer to iClone. As referenced in the image, you can see where the arm and shoulder bend away from each other with little deformation to the body. This is due to the placement of one marker on the upper arm and another marker straddling the edge of the body mesh and arm.


The rigged character in iClone 8.

What you see above is the snowman mesh using the same Ball (sphere) primitives we used for the body parts, scaled down and textured black for the eyes and mouth. I then duplicated one of the eyes and scaled it down to use for the mouth. From there I duplicated and arranged the stones into a smiling mouth, though it could just as easily be a neutral or frowning mouth.

In fact, you could animate the mouth by creating a neutral or frown with the mouth primitives then moving down the timeline and arranging the mouth primitives into a smile. While that is beyond the scope of this tutorial it is not that difficult. Experimentation is highly encouraged.

The nose is a cone primitive scaled long and narrow. The top hat is three cylinders scaled to the various shapes of the brim, hat, and headband.


There is a possibility that the skinning might miss a few vertices in the back of the bottom sphere as that is a long way from the bone. This did happen to me at least once during testing.

Occasionally a bone will be weighted improperly, particularly in extreme situations like this large, round body sphere.

If this happens you may have to go into Skin Weights and adjust the CC Base Hip with the smooth tool. While this isn’t the best way, it’s the easiest for beginners until you grasp more of what skinning is.

Using the smooth tool in skin weights to smooth out the mesh deformation.

You can also just hide this portion of the snowman from the camera view when any deformation occurs. The deformation is due to the weighting of the CC Base Hip, and this is another area where I highly recommend you save a copy and experiment with correcting the error should it happen to you.

While our snowman isn’t a sophisticated character by any means, it is a great starting point if you have no idea what Character Creator 4 can do outside of human characters. For a school project, it can be pencils for arms and legs, an eraser for the body, and so forth. Want a rock golem? Find some rock props and put them into a humanoid, bipedal shape so AccuRig can skin it for you.

Just keep in mind that your characters are NOT limited to the base meshes you find in Character Creator. You are only limited by the objects you have on hand and your imagination.

Merry Christmas!!!

MD McCallum – WarLord

Digital Artist MD “Mike” McCallum, aka WarLord, is a longtime iClone user. Having authored free tutorials for iClone in its early years and selected to write the iClone Beginners Guide from Packt Publishing in 2011, he was fortunate enough to meet and exchange tricks and tips with users from all over the world and loves to share this information with other users. He has authored hundreds of articles on iClone and digital art in general while reviewing some of the most popular software and hardware in the world. He has been published in many of the leading 3D online and print magazines while staying true to his biggest passion, 3D animation. For more information click here.

Professional Actors Stress Test AccuFACE Facial Mocap

To showcase the new iClone mocap profile AccuFACE’s role within a real production environment and stress test its capabilities before release, Reallusion has proudly invited seasoned performers Lucas Gilbertson and Carol-Anne Day to rigorously explore the AccuFACE facial mocap technology.

Both Lucas and Carol-Anne boast over two decades of experience as professional voice actors. Lucas has made his mark on renowned franchises like the Mega Man series and Sonic the Hedgehog. Carol-Anne Day is recognized for her extensive voice acting for anime and video games produced by Viz Media, Bandai Entertainment, Capcom, Geneon, and Bandai-Namco. Since 2020, they have ventured into game and animation creation, utilizing iClone, Character Creator, and Unreal Engine. Their first-hand experiences have helped them foster leading expertise in the field of facial capture technology.

In a bid to unravel the full potential of AccuFACE, they set about portraying cartoon characters, orc monsters, and lifelike digital humans. Their experiments involve testing scripts of varying inflections — ranging from subtle to intense — to gauge AccuFACE’s ability to deliver dramatic, natural, and nuanced performances.

Here is the timeline for a quick overview of the AccuFACE test:
00:00 Project background
00:53 How we plan to test it (with beard and nose ring)
01:16 Quick test without calibration
01:52 After calibration
02:00 Impersonate a vicious orc or cartoon character
02:29 Combining facial editing for stylized details
03:00 Tongue twister and dialog test
04:00 Comparing to iPhone facial tracking
04:42 Individual brow tracking with AccuFACE
05:22 Proper lighting requirements for AccuFACE
05:36 An extreme performance test (with good eyeball and nose tracking)

Know more about AccuFACE related production:
Try AccuFACE facial mocap free for 30 Days
Easily produce professional animations with iClone
Automatically generate lip-sync animation with AccuLIPS

Animated 3D Events for the Metaverse, Easier than ever

ActorCore & iClone integrated with EEVEE real-time in Blender

In the burgeoning realm of metaverses and the emergence of virtual events on these platforms, we find ourselves delving into immersive experiences, surrounded by avatars who are interconnected through virtual reality. However, the key to designing successful events lies in the integration of genuine human connections within these virtual spaces, necessitating a sturdy and lifelike framework to deliver impactful virtual content. This is precisely where ActorCore comes into play—an indispensable resource for procuring top-tier 3D assets, including lifelike human models, animations, motion capture capabilities, and real-time rendering, all conveniently accessible via the web.

It’s the boundless prospects offered by this remarkable tool that have captured my attention. Coupled with iClone 8, it provides unparalleled flexibility and simplicity in seamlessly incorporating crowd animations and staging for the web metaverse.

In the following sections, I will share my endeavors in Blender, where I’ve worked on integrating realistic individuals into a 3D environment simulating a party, achieving exceptional levels of visualization and detail.

So, let’s embark on this journey and delve into the process!

About the Artist

I am José Ripalda, a Video Game Designer, and Metaverse Artist, serving as the Co-Director of Marco Virtual MX. Our dedicated creative team focuses on delivering top-notch experiences within virtual events and the metaverse. Leveraging Blender as my primary tool, I craft diverse thematic environments tailored for web performance.

At Marco Virtual MX, our business model revolves around providing premium digital services for advertising purposes. Among our offerings are comprehensive 3D environment designs for the metaverse, video games, and various visual media. Moreover, our specialization lies in seamlessly integrating 3D content into the web using WebGL. We also excel in developing bespoke interactive systems for clients seeking personalized digital spaces. Furthermore, our expertise extends to crafting digital replicas of individuals, achieved through scanning or meticulous modeling. These services collectively define our role as a creative team dedicated to harnessing 3D technology.

Behind the Scenes

This tutorial will cover the entire process, commencing with ActorCore. It will elucidate the diverse possibilities of visualizing actors or 3D assets directly on the web, showcasing their animations and accessories. Subsequently, we’ll proceed to acquire these assets, integrating them seamlessly into our library within iClone 8. This integration marks the pivotal step in assembling multiple actors within a virtual space, which could be perceived as a stage or a setting where the events will unfold.

Throughout our exploration of iClone 8, we’ll delve into various tools, including Motion Layer, a comprehensive system designed to facilitate seamless actor control. Additionally, we’ll discover the functionality of the Timeline, which enables the amalgamation of motions, along with other remarkable features tailored to animate with meticulous attention to detail.

Moreover, as part of our supplementary learning, we’ll delve into the CC Auto Setup for Blender. This tool significantly streamlines the process of importing animated actors from iClone to Blender, enhancing the efficiency of our workflow.

ActorCore Animated 3D Characters

Initially, my aim in orchestrating virtual events revolved around sourcing lifelike performances from ActorCore’s expansive repertoire. First, I sought actors in the midst of skillful dancing or lively conversation, followed by a DJ responsible for overseeing the DJ booth. Lastly, I enlisted waitstaff to curate a personalized and intimate ambiance for the event.

Following this introduction, a comprehensive breakdown of the ActorCore interface showcases the array of options available for discovering and previewing actors. This interface allows users to manipulate various visual elements, including lighting and selected animations.

Through this web interface, we have the capability to designate selected actors as “FAVORITES” for the party. Leveraging ActorCore’s extensive library, I effortlessly located all necessary components characterized by their realistic appearance, PBR shading, and readiness for animation through the Motion Animation library. This vast selection ensured that everything I required was readily available.

After preparing our selected actors, the next step involves downloading them for integration with Character Creator or iClone. This seamless integration capability enables us to effortlessly incorporate these actors into various Reallusion systems, game engines, and 3D applications—specifically, Blender.

An important aspect to emphasize here is the actors’ optimization for seamless integration into the metaverse. Each actor is tailored with an intermediate polygon count ranging between 15K-20K, flexibly accommodating events hosting multiple actors without significantly impacting performance. Furthermore, iClone 8 offers a streamlined method to import these actors directly from ActorCore. Its robust and efficient system ensures a swift and solid process, requiring just a few clicks for acquisition and integration.

iClone Advanced Motion Editing

Incorporating animations is as effortless as a simple drag-and-drop action onto the desired avatar, offering a visual interface that expedites the process without the need for extensive, time-consuming steps. This intuitive system not only enables users to seamlessly adjust the position and orientation of the actor but also ensures steady foot-and-floor contact. Initiating the animation setup with an established position facilitates better control, allowing users to define precisely where the actor will commence the animation sequence from the outset.

After grasping the process of integrating actors into iClone 8, the subsequent crucial step for our event is to incorporate the apartment stage. I highly suggest utilizing FBX files (including textures) to ensure seamless importation into iClone 8 without encountering any issues.

Once the stage is set, the process becomes straightforward—simply select and drag our actors to designated areas, such as the DJ section, which is prepped with mixing equipment. This setup allows us to effortlessly test out the Motion Layer tool for smooth execution.

The Motion Layer represents an innovative system seamlessly integrated into iClone 8, offering meticulous control over every aspect of our actor’s performance. This includes configuring individual finger movements, manipulating hand gestures such as opening and closing, editing various body parts, transitioning between IK and FK seamlessly, and accessing a comprehensive panel for selecting specialized gizmos.

In addition to working with the DJ, I managed to implement specific modifications for two guests engaging in a handshake. Motion Layer makes it easy to adjust hand positions. Moreover, within this tool, we explore a practical method enabling actors to maintain eye contact while conversing. Under the ‘LOOK AT’ menu, options like Look at Camera or Pick Target can significantly enhance the realism of our scenes by facilitating natural gaze and interaction between characters.

Furthermore, beyond the guest actors, it’s important to integrate featured actors. In a recent scenario where I required waiters, I found ActorCore particularly useful. Specifically, the “CATERING SERVICE STAFF” set offers a diverse range of actor types ideally suited to serve as waitstaff for our virtual party.

Once all our actors are prepared and animated using the tools showcased in the video, a crucial step involves transferring the actors and animations to Blender (the program chosen for creating this virtual scene). In iClone 8, I opted to export all avatars in FBX format, selecting the “BLENDER” preset and configuring a comprehensive array of animation and other parameters as depicted in the following image.

Stunning Rendering Results in Blender

In under 5 minutes, I efficiently grouped each avatar, seamlessly integrated textures, animations, and more. While this process is undeniably efficient, I’d like to delve into some aesthetic nuances I implemented during the creation of the apartment in Blender.

Blender offers unparalleled flexibility and practicality in crafting virtual environments, allowing for effortless texturing, lighting manipulation, and the creation of an extensive camera system with easy control. In my case, aiming for a realistic yet swiftly generated rendering, I utilized techniques like “Irradiance Volume” and “Reflection Cubemap,” both highly effective within EEVEE. These, combined with effects such as bloom and ambient occlusion, strike a fine balance between achieving good quality visuals and optimizing the overall realism of the scene.

Once our actors are primed, the Blender add-on called CC Auto Setup for Blender comes into play. This add-on allows for direct selection of the FBX files exported from iClone 8, simplifying the import process into Blender. Once imported, each avatar will be neatly separated with its corresponding ARMATURE and integrated animation. This practical feature significantly hastens various processes involved in animating the party scene.

The camera system integrated within Blender offers remarkable flexibility in capturing stunning shots that vividly depict the virtual event, providing a clear preview of its appearance within the metaverse. Regarding the equipment used for this work, it comprised a single workstation equipped with an i9 12900KF processor and an RTX 3080 graphics card. I’m delighted to have explored these fantastic tools offered by Reallusion, and I eagerly anticipate continuing to share and learn from experiences with such excellent and practical platforms. Thank you for joining in this exploration of innovative possibilities!


ActorCore and iClone complement each other perfectly, offering a seamless integration of animations and lifelike characters ideal for virtual events or VR metaverse settings. The addition of CC Auto Setup for Blender further enhances this workflow, swiftly importing renderable, animated characters into Blender, eliminating the complexities of additional material settings. This intuitive process doesn’t require specialized technical expertise, making it accessible for all. I wholeheartedly endorse this workflow for anyone looking to create immersive VR metaverse productions, as it simplifies the process while ensuring impressive results.

More info about Marco Virtual MX

WarLord’s Beginners Guide: Creating a Simple Scene with AI, iClone 8 & CC4 

In this installment of the Beginners Guide, I am going to demonstrate how easy it is to incorporate AI tools into your workflow so you can leverage both the timesaving aspect of AI and the creative tools of iClone and Character Creator 4 (CC4) to make a simple but effective scene. 

I’ll be focusing on iClone and CC4 while making use of the free-tier AI services relevant to this type of work.

Specifically, I’ll use: 

– Leonardo AI (free tier) to generate a headshot image.  

– Bing Create (free) for the Headshot 2 image. 

– ChatGPT 3 (free tier) for dialog and scene-setting ideas. 

– Eleven Labs (free tier) AI-powered voice generation for the character.  

Below is an image of what we will end up with: a pleasant and hopefully somewhat charming female host doing a podcast in a 3D cartoon environment of a bedroom. The headshot (the AI-generated image) and the voice are important for adding qualities like charm to a character. I picked Eleven Labs for its ability to make a voice emote instead of delivering a monotone or rhythmic recitation. Leonardo AI is an excellent image generator for my tastes. There are many of these AI tools to choose from.

I have created two short YouTube tutorials, found near the end of this article, that cover both character creation and scene creation along with using AI-generated dialog. They also show how to enlarge the mesh for big cartoon-type eyes in Character Creator 4.

Below is the setup view with the background turned off. 

As you can see in the image above the scene is extremely simple when you leave out the AI-generated background image. The “desk” is the Door prop that comes with iClone and I didn’t bother with a chair either. It would be simple enough to add one if you desire to do so. You could also dress up the desktop a little more with other objects as I did in a later render with the headshot image on the desk.  

You could also put a plant prop in the foreground at the side of the desk to add more 3D depth to the scene as it goes into the 2D background. A proper combination of the background image and props in the foreground or intermediate space can help blend the 2D background into the scene, eliminating some of the contrast between the flat image and the 3D objects.


The first thing I did was go over to ChatGPT 3 at chat.openai.com and ask it to write a dialog for “a fan talking to the camera about how much she loves Taylor Swift”. I received the following response:

I didn’t follow the script at all in terms of direction because that wasn’t the creative direction I was looking for. I wanted to keep this a simple scene with a simple character, so I used the Podcast concept from the bedroom. The dialog was all I was interested in.  


For this, I went over to ElevenLabs.io and used the free tier (requires signup) to create the dialog. Like most tasks of this nature, it took longer to decide on a voice from the voice library than it did to copy and paste the text into the dialog box, process, and download the audio file. 

At first, I did a sentence or two of dialog with the voice I had selected to find out how it interprets the text and how pauses and other formatting worked. It was much easier than expected.  Commas and the “…” created different timing and pauses in the audio. Using two blank lines between paragraphs seemed to work to keep the paragraphs from running together when spoken. 

Now, of course, this is where I pushed the envelope too far in terms of how much of the script dialog I sent into the speech generator at a time. After a couple of small tests with just a line or two, I proceeded to dump all of the remaining text into the voice generator.

It worked, but there were some pronunciation errors that would require correcting the text and running it through the AI again. This burns up credits in an extremely wasteful manner, as you only get 10,000 characters on the free tier. Not words, characters. If I had done it a sentence or two at a time, I wouldn’t have burned so many credits.

In this case, it didn’t matter as I wasn’t going to use it all anyway in the final render.  I just needed enough to do a good demonstration. 

Getting the dialog from ElevenLabs into iClone was easy. It took only a matter of minutes to copy, paste, process, download, import into iClone, and have the entire script’s audio lip-synced and loaded into the character. I also copied the dialog to the clipboard so I could paste it into AccuLips alongside the audio file for near-perfect lip-syncing with no intervention on my part. I’m not doing anything advanced, and I think you’ll be surprised at the final product.

TIP: When generating audio from text, think about how the text is written out. If you want a character to say iClone, write it out as “eye Clone”. “AI” is another stumper for generated speech: if you leave it as AI, you’ll probably get an “aye”, so write it out as “A I” with a space in the middle. You may have to phonetically spell some words to get the desired results.
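If you batch a lot of script text, these phonetic substitutions are easy to automate before pasting into the voice generator. Here is a minimal Python sketch; the term list and function name are my own illustrations, not part of ElevenLabs or any tool mentioned here:

```python
import re

# Hypothetical substitution list -- these terms and phonetic spellings are
# example entries, not an official pronunciation dictionary.
PRONUNCIATION_FIXES = {
    "iClone": "eye Clone",
    "AI": "A I",
}

def prepare_tts_text(text: str) -> str:
    """Swap hard-to-pronounce terms for phonetic spellings before TTS."""
    for term, phonetic in PRONUNCIATION_FIXES.items():
        # \b word boundaries keep "AI" from matching inside words like "AIR".
        text = re.sub(rf"\b{re.escape(term)}\b", phonetic, text)
    return text

print(prepare_tts_text("Welcome to iClone, powered by AI."))
# → Welcome to eye Clone, powered by A I.
```

Running your whole script through a filter like this once, before generation, also helps avoid wasting free-tier characters on re-runs.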


I couldn’t get the face image I wanted out of Leonardo, but it was a snap getting the background image of the bedroom. Feel free to use the image in your own projects, though it is low resolution to keep this article from taking up too many resources.


I used Bing Create to generate the character head image to use with Headshot 2 (or Headshot 1). I tried Leonardo and got some great-looking images but few that faced forward like I needed. So, I cruised over to Bing Create and used the prompt: 

Portrait of a bald, Pacific Islander cartoon girl with no hair, facing the camera, white background 

I got several good images from Bing and was actually very pleased with the outcome. I had already spent a frustrating time with Leonardo without the desired results, so when Bing Create immediately started returning usable ones, I was surprised, to say the least.

Bing Create generated head image. In Windows, right-click it and choose “Save Image As” to download.

It didn’t take me long to choose a good headshot image, and I was off to CC4 to create the character. I already knew there was going to be a problem with the large eyes in the image, but I decided to “brute force” my way through that by pushing the eye and eyeball sliders way past the intended maximum. I love this feature of the sliders: if you drag to the end and it doesn’t do enough, just type a number in the input box and press Enter. If 100 or –100 isn’t enough, then type in 200 or –200.

TIP: The input boxes do not stop at the end of the slider. You can manually enter 200 (or any number) in the input box even if the slider only goes to 100. Pushing intended use past its boundaries can be very powerful, and experienced users know how helpful it can be.


The AI work was very enjoyable, and I looked forward to putting it all together, but I still had to make the character, and it needed some personality. The Bing Create image certainly helped in that department.

I needed a stylized female body to further enhance the cartoon effect. You could load the free Alika character that comes with iClone and CC4, but I chose to load a neutral base and then use the Alika slider in the slider section to morph the body to a more cartoonish shape. I also used the Alika head morph. 


Next up was jumping over to Headshot 2 to wrap the head image onto the character. Since I’m talking basics here, I only used the Image mode with the Auto feature and a little bit of sculpting around the eyes. The Auto feature also created some hair, which I discarded so I could use the Alika cartoonish hair instead.

Now we see some freaky things happening, like the strangely shaped head (compared to the image) and really weird eyes, because of the huge eye size in the AI-generated image compared to the mesh. As I said, though, we are going to use brute force to enlarge the mesh eyes and eyeballs to cover up the eyes on the Headshot image.

TIP: If the remnants of the eye colors around the edges and the bottom of the eye bother you, take the diffuse map into something like Photoshop and clone over the lower two-thirds of the painted eye, leaving only the top; or clone over all of it if you want a clean look.

I then dressed the character in a top, jeans, and tennis shoes, changing the original colors to a shade of pink where appropriate.


As I showed you in the first and second images, the scene itself is very simple. Use a prop for a desktop, add a microphone or whatever you want to place in the foreground, add your AI-generated background image, and the scene is set. All that’s left is lighting, which is personal preference, but I would keep it bright and airy for this type of scene.

Creating a Toon Girl Character in Character Creator 4

Creating the scene, animating the character including using AI-generated dialog.

MD McCallum – WarLord

Digital Artist MD “Mike” McCallum, aka WarLord, is a longtime iClone user. Having authored free tutorials for iClone in its early years and been selected to write the iClone Beginners Guide from Packt Publishing in 2011, he was fortunate enough to meet and exchange tricks and tips with users from all over the world and loves to share this information with other users. He has authored hundreds of articles on iClone and digital art in general while reviewing some of the most popular software and hardware in the world. He has been published in many of the leading 3D online and print magazines while staying true to his biggest passion, 3D animation. For more information click here.

Anyone can Create Professional Animations with Cartoon Animator 5.2 Motion Pilot

Veteran Cartoon Animator user and developer Garry Pye has been working with Cartoon Animator (CTA) for over a decade and has made the journey from casual animator to 2D Community Manager for Reallusion. Now working alongside his 15-year-old daughter Maddison, they sat down together to showcase just how easy the new Cartoon Animator 5.2 Motion Pilot makes 2D animation.

With a layered scene set up in Stage Mode, Garry and Maddison demonstrate the tools and features of Motion Pilot and how anyone can now produce smooth, professional results in real time, in minutes.

Cartoon Animator has long been the easiest and fastest way to create 2D animation. With Cartoon Animator you can produce quality projects in no time, regardless of your skill level in animation. Cartoon Animator makes animating easy for everyone, and today we’re going to show you how.

With the latest update to Cartoon Animator 5.2, we see the introduction of Motion Pilot, which gives you total control over the movement of your actors and props, controlled in real time using mouse movements, with variable parameters set by you. 

Motion Pilot gets rid of the need for hours of keyframing, because now you can control your characters like puppets using your mouse while you record the motion. Motion Pilot is more like acting than animating. It makes everything easier and faster, but more importantly, it makes animating so much fun for people of all ages and skill levels. Without any previous knowledge or experience of 2D animation, anyone can use Motion Pilot to create smooth results in minutes.

In the screenshot above we have a scene set up in Stage Mode. The scene was brought in as a Photoshop PSD file with its layers maintained, so we can move our actors around inside the scene, both in front of and behind objects, and then move our camera about to create a sense of depth.

You can instantly switch to 3D view to see where all your characters and prop elements sit on the stage, as shown below.

To introduce an actor to the scene, simply drag and drop one from the Content Manager. By adding any one of hundreds of premade motions to our actor, we could have our scuba diver swim instantly, without the need to keyframe his movement or draw individual frames. Cartoon Animator’s premade motions do all the hard work and give perfect results every time.

With Motion Pilot, you can now make the scuba diver automatically swim as you pilot him around the screen using your mouse. In Motion Pilot, there is a section called Motion. Check the Blend Motion box, then drag and drop your swim motion onto the text box. If you only want him to swim forwards, scroll up to Face Cursor, and turn that off so he doesn’t flip direction as he swims.

Position your scuba diver where he will begin your animation. Before recording, you can rehearse your motion: click Preview, move your mouse over the scuba diver, hit the space bar, and watch him come to life and swim, following your mouse movement in real time.

You can change the speed at which an actor moves using the Speed slider. 

Once you have rehearsed your animation, do the same again, only this time hit Record to capture all of your character’s animation in real time. Moving your character around feels more like playing than animating, as you become the actor and put your performance into their motions.

From frame zero, hit Play on the timeline and watch your new animation play back. What would have taken hours of tedious keyframing and animation skill has been completed in mere seconds thanks to Motion Pilot, producing the smoothest result possible.

Even after you have completed many frames of animation, Cartoon Animator allows you to rewind through your project and add new animation or even alter the animation you have created so far. 

The octopus actor in our project has Spring Bones built into his body rig, which you can apply yourself to any CTA actor. Spring Bones allow character parts like limbs, hair and accessories to wobble and bounce about in real time automatically, with no need for keyframing from the animator. Spring Bones dramatically reduce the amount of time it takes to produce 2D animation and at the same time provide realistic and smooth motion results.

Instead of adding a motion to the octopus like we did with the scuba diver, this time we were able to be the puppeteer and move him about while recording his movements in real time. After recording the motion of the octopus, we could play back our animation to see the octopus and scuba diver swimming together.

Animation has never been so easy or fun! You can control any character or prop using Motion Pilot, moving them about like a puppet and simply recording it to create an animation in real time.

We were able to pilot the fish in our scene using the same real-time puppetry applied to the octopus. In addition, using a Motion Pilot feature called Flock Settings, we could make duplicates of the fish and have them follow the leader. Flock Settings give you the option to select multiple items as children that follow a parent with preset time delays and in various patterns. This is the ideal way to create animations of groups, like swarming insects or random swirling leaves blowing about in the wind. Again, the type of animation that would otherwise require a high degree of skill and a large amount of time to execute can be done in seconds.

Even though Motion Pilot makes animation fast, fun and easy, it is still an incredibly powerful tool that gives you total control over your animation. For instance, we saw that Maddison and I could animate the Octopus in real time just by moving the mouse around. But even after the motion is recorded, we can still go back and add more character animation to our actors. 

Using Cartoon Animator’s 360 Head function, we were able to make the octopus turn its head in the direction it’s swimming. We could keyframe the rotation of the actor’s head and even add facial animation like blinking and lip-syncing to voiceover tracks.

Spring Bones were easily added to the flat PNG fish props to make them move in a more dynamic way, as shown in the image below. Then, using the Wave function of Motion Pilot, we were able to have the fish move automatically, controlled by parameters we set. For instance, by adjusting the Y-axis of the Move Amplitude, the fish will now bob up and down in the water on its own, and changing the Loop Duration determines how fast or slow this motion is. You can even tell Cartoon Animator whether you want the fish’s movement to be constant or random.

As well as puppeting your actors in real time, you can create a specific path that your actors can follow, so you have total control over their movement. After plotting a path with mouse clicks, you can link your actor to the path, which it will follow as the animation plays, and you can determine how long it takes for the actor to progress along that path. In the image below, the red line is the path that has been manually drawn for the fish to follow. 

Your actors don’t have to just move side to side and up and down either. In Cartoon Animator you can build your scenes so they have camera depth. All of the elements in your scene are layered, from close to camera, to all the way off in the distance. This means you can have your actors move in front of and behind objects and control this during live recording with Motion Pilot by adjusting the Z-axis. We were even able to use the Scale function to make actors become larger as they move closer to camera, and smaller as they move further away. 

There are lots of different ways you can control your actors with Motion Pilot. Changing the parameters for how they move is fast and easy, giving instant results that you can preview before recording your final movie.

Finally, you can decide whether you want your actor to move in time with your mouse or, using the Lazy Mouse function, have the actor follow along at a slower, smoother pace.

You are not limited to just the actors in your scene being controlled by Motion Pilot. You can control other scene elements as well, to help bring your scene to life in a dynamic way. When we set up our scene and added the sponge as a static prop, we were able to add our own custom bones to the sponge to give it a simple skeleton. Then we turned some of those bones into Spring Bones, giving them flexibility. Using Motion Pilot, we were able to set the rotation on the prop so it gently moves side to side, as if affected by the water current, and it does this automatically as our scene plays. 

By making the Tween frames Random, the motion looks more natural, and then we can set the Loop duration to make the sponge gently sway about during our scene, while the rest of the action takes place around it. 

You can apply this style of automated motion to any item in your scene, meaning everything can move in its own unique way. We could even apply the wave motion to the water current prop at the top, creating the illusion of the water constantly moving. By applying Spring Bones to this prop, it appears to flex and wobble about when we applied a simple Wave motion to it, helping to bring our scene to life. 

Getting the exact look you want is as easy as changing the values in the menu, for instant results. You can make them as realistic or as exaggerated as you want! 

Maddison and I had so much fun working together on this Motion Pilot project. Anyone of any animation skill level, whether advanced, amateur, or with zero experience, can produce quality animation instantly with CTA5. You don’t have to do anything but have fun, because Cartoon Animator and Motion Pilot do all the work for you, making the process feel more like playing than animating.

Learn about Cartoon Animator:

Discover CTA Motion Pilot:

Follow Garry Pye:

Winner Tips & Interview: Astrovique’s “Valiant Seven”

The Winner Tips & Interview series covers practical workflows and techniques shared by winners from the “2023 Animation At Work Contest”. To let users see the full spectrum of the Cartoon Animator pipeline, we are introducing projects that received attention and credit from the community. Let’s now take a look at “Valiant Seven” to see how Astrovique worked her magic with Reallusion Cartoon Animator (CTA).

About Astrovique

Greetings! My name is Astrovique. I’m a full-time illustrator and hobbyist animator. I often create fan art to exhibit and sell at conventions in Australia such as SMASH!, Oz Comic Con, and Supanova. On the sidelines, I also have a webcomic that I’m working on known as Valiant Seven and oftentimes find myself animating characters from my series. I’m also a massive memester and would just about animate anything that I find myself laughing at and sharing that laughter with others.

How I Discovered Cartoon Animator and why I used it

I stumbled upon this software through the Animation at Work competition. About a month after the competition had already started, I saw a sponsored ad on Instagram and thought, “Why not give it a shot?” I’m always eager to learn new software and see what I can create. I selected Cartoon Animator 5 as one of my production tools due to its free-form deformation tool and spring dynamics. I was intrigued by the idea of picking up a new program on the fly, and it turned out to be a user-friendly choice with educational resources readily available to guide me through the process. I particularly appreciated the character rig tool, which allowed me to easily manipulate my character’s limbs and keyframe them, while the applied spring dynamics responded to the movement. The deformation tool proved invaluable for creating quick and simple motions for my characters.

Why did I choose this topic?

I was drawn to the Comics and Art Animation category because I have an existing webtoon series that I wanted to explore further in various digital media, including illustrations and animation. This contest presented an opportunity to create a short animated trailer for my series, and I am satisfied with the result.

My Creation Process

Step 1 – Storyboard

To begin, I crafted a rough storyboard outlining the composition and timing of each scene in sync with the trailer’s music. Underneath each rough sketch were general notes on how it would be timed to the beat of the trailer music.

Step 2 – Drawing Unique Characters

After finalizing the composition, I worked on detailed character drawings, ensuring that various parts like hair, head, eyes, and clothing were on separate layers for animation. I dedicated extra time to make the final shot in the trailer as detailed as possible, given the limited timeframe that I had.

Step 3 – Animation

I imported each scene into Cartoon Animator 5, utilizing different layers that needed animation. I used the free deform tool and character rig tool to manipulate and rig them in the program. Additionally, I applied spring dynamics to smaller details, such as strands of hair, to respond to specific keyframes rather than animating them manually.

Step 4 – VFX Composition

Once all the scenes were animated, I exported them as a PNG sequence and imported them into After Effects. There, I edited each shot to match the music, and I used the puppet tool to animate complex layers and create unique distortion effects during scene transitions.

Interview with Astrovique

Q: What do you think makes a winning entry to competitions like this? How does someone make their entry stand out in the crowd and really grab the judges’ attention?

A: Honestly, having something that is uniquely yours and fits whatever category you’ve entered is the best way to go about a competition like this. On top of that, remember to have fun with what you create! I participated in this competition because I wanted to create an exciting trailer featuring my characters in action scenes, all while learning new software and enhancing my drawing and editing skills.

Q: Some of you are quite new to Cartoon Animator, while some of you would now be considered veterans of the software. For those of you who are new to Cartoon Animator, what surprised you most about working with this 2D animation software?

A: I’ve only been using Cartoon Animator 5 for about a month, so I’m relatively new to the program. However, I managed to pick up a lot in a short time thanks to its user-friendliness. Despite being a new user, I found it straightforward to learn, with abundant resources and tutorials available. The most exciting feature I discovered was the ability to capture mouth movements for lip-syncing and body animations for 2D characters, which I think is a fantastic addition.

Q: Did you find the process of creating characters and content for your entry easy and enjoyable? And how did you find the experience of animating your project with Cartoon Animator? Were there any specific challenges you had to overcome? And if so, how did you overcome them?

A: Creating characters was relatively straightforward since they were already established. My experience with Cartoon Animator has been a positive one! The learning curve isn’t too steep, and anyone can pick it up. My main challenge was finding time to work on the animation alongside my regular job and daily life while learning new software. Additionally, creating initial storyboards and coming up with compelling scene compositions for events that hadn’t occurred in my webtoon were specific challenges. I wish I could have added more details to my drawings, but I’m still happy with the end results.

Q: As a Cartoon Animator user, if you had to choose only one tool or feature of CTA that is your absolute favorite, what would it be?

A: If I had to pick one feature out of them all I would choose the deformation tool. A very simple and straightforward tool to use that allows you to capture movement quickly and easily.

Q: What is your best piece of advice to other Cartoon Animator users who might be considering entering contests like this in the future?

A: Familiarize yourself with the program as much as you can and have characters or illustrations ready beforehand so you don’t rush towards the end. Have a clear plan of what style you want for your animation, the music you’re going to use, and the voice acting you’ll need in your animation, so you are able to schedule everything efficiently and maintain consistent quality in your work.

Q: Now that this year’s contest is finalized, what does the year ahead hold for your work? Are there specific projects you are working on? How does Cartoon Animator fit into your future plans?

A: Some plans I have for the future are continuing my webcomic on the sidelines, drawing more original illustrations and animating some of these illustrations!

Follow Astrovique

Instagram | https://www.instagram.com/astrovique/

YouTube | https://www.youtube.com/@astrovique

TikTok | https://www.youtube.com/@astrovique

WarLord’s Workshop Review: A Look at CC4 ZBrush Face Tools

ZBrush has long been recognized as a powerhouse of 3D sculpting and painting with innovative tools, techniques, and a user interface that is very unconventional but very usable. Once you’ve spent some time with ZBrush you can begin to appreciate its value to the 3D community.

ZBrush can sculpt the finest details into a mesh, from an aging, realistic character with weathered, leathery skin to detailed armor or finely engraved jewelry. It has long been recognized as a 3D tool that punches way above its weight class in terms of pure artistic “enablement”, if that is a word.

While Character Creator 4 does indeed create 3D mesh characters, it does so from base or original meshes, reducing the need to rely on pure artistic sculpting or technical rigging. It does not produce character meshes from scratch, which is the domain of tools like ZBrush and can often require advanced skills. On the other hand, ZBrush is not known for its character rigging, which is one of the strong suits of Character Creator 4.

Put them both together and you have basically supercharged Character Creator 4.


Face Tools is the kind of GoZ functionality we’ve always wanted. It packs a lot into the round trip between Character Creator 4 and ZBrush, made with the click of a button. The Face Tools control panel is packed with one-click morphs for testing and editing, along with detail and diffuse functionality. The diffuse feature alone is a great improvement, and that is not counting the rest of the features in the Face Tools control panel.

Left and Middle: Face Tools Menu (in ZBrush). Right: A few of the layers that import from CC4.


Face Tools installation is a very simple process. Use the Reallusion Hub to install the plug-in, and you should see a Face Tools button (with a character loaded) on the Modify menu. Do your thing in Character Creator 4 with the sliders to define your character, add SkinGen, and the like if desired. Once your character is ready to go into ZBrush for heavy-duty work, press the Face Tools button.

The first time you press that button, it will check whether the GoZ functionality is installed and, if not, install it. It will also ask which version of ZBrush to use if you have more than one installed.

That’s it. It worked flawlessly for me, and there was no need to manually copy GoZ files like many users had to do with the original GoZ feature.


The tool relies heavily on the Layers function in ZBrush to keep track of changes within the mesh. When you go to Layers, you will see that several layers are already there. Since Layers is being used, you need to use an existing layer or create a new one to make changes. The Face Tools control panel image above shows just a few of the available layers that transfer over to ZBrush.


Just to see what Face Tools was about, I loaded a base mesh and then jumped over to an online AI image generator to create a facial image of an older Native American male. I used Leonardo AI for this particular image, and it really made the character pop without a lot of interaction from me.

I sent the mesh over to ZBrush with every option checked that I could find or that made sense, as I hadn’t read the manual yet; I wanted to see how far I could get just by using it. There is very little learning curve with this tool if you are proficient with ZBrush and CC4. By proficient, I mean able to follow instructional steps, as you don’t have to be a power user.

As I said earlier, layers rule the day: you need to create a new layer or use an existing one to make changes in ZBrush. This is nothing new to seasoned ZBrush users. Layers are powerful and record your changes.

At this stage, I can add more detail to wrinkles or add more skin texturing if desired, enhancing what is already there or cutting in new wrinkles that match the AI-generated head image. I never ran into a snag along the way. It was one of the few times the experience was frustration-free.

Left: CC4 Character, Middle: CC4 Character with Detail, Right: CC4 Character Diffuse Polypainting

I will list Kai’s quick-start tutorial in the resources section at the bottom of this article. When I said I didn’t read anything, that didn’t mean I didn’t WATCH anything. So with Kai as my guide, I flew through the process like I knew what I was doing, and that is a plus when you can do that on your first run with any new plugin or tool. I made the clothing and accessories in Marvelous Designer.


Since the early days of Character Creator, I have created several toon sliders and some extreme reshaping sliders to morph characters like the Toon Wolf I’m using in this example. I loaded a generic base in Character Creator 4 and applied the slider to its full extent. The slider was based on reshaping the existing base mesh at the original low poly count for GoZ. It might have been created as far back as CC1, since it seems I’ve always had these morphs around to test with.


I sent all the different maps, like diffuse and normal, over to Photoshop to remove things like the mouth, nose, and ear sections so they wouldn’t interfere with the anthropomorphic wolf skin. If you don’t remove these details, they can show up on the character in the wrong places or leave a badly distorted human ear where the original ear was located on the base mesh.

Removing some of the skin features through the various channels in Photoshop. I used the clone tool to clone clean areas over the features being removed.

You can use the image editor of your choice for the same task. In this example, I didn’t change the texture until the end of the process.

TIP: If you still see two black nostrils after editing the maps in the various channels then you need to change the shader from Human Skin to PBR and the black nostrils will disappear.


Next up was sending the head over to ZBrush with the Face Tools plugin so I could correct some problems with the teeth sticking through the lips due to the extreme reshaping and low poly count. The Face Tools menu in ZBrush makes it easy to quickly select a morph like Mouth Smile or Mouth Frown to check for teeth penetrating the skin.

This is Face Tools in its basic usage. We could also create more wrinkles and/or modify existing ones while adding other details like skin textures, wounds, scrapes, and bruises if needed, and much more.


As I went through the morph buttons in the Face Tools menu, I used the Move tool with horizontal symmetry turned on to pull the skin over teeth or gums when they poked through.

Selecting a morph in the Face Tools panel and using the move tools to bring the lips out over the teeth and gums.

This requires using the aforementioned layers in ZBrush to make edits. You can edit the wrinkles, make other modifications, or add more detail to the skin surfaces, as long as you do not alter the polycount. As with any GoZ function, you cannot increase or decrease the polycount, because it has to match the model it will be transferred to.

TIP: When I needed to do body shaping and head shaping, I did the body first with the regular GoZ function then used Face Tools to Create a new head mesh instead of Relinking to the original GoZ mesh.


The next step was changing the diffuse map from a human skin tone to a tone more suitable for a toon wolf. To do this, I turned on the Diffuse channel in the Face Tools ZBrush menu, then went to Polypaint -> Paint from Texture (in ZBrush) to transfer the existing skin texture, which I could then alter or paint over.

I turned off the Diffuse button in the ZBrush Face Tools palette and noticed the remaining original coloring was now painted on the head. I selected the ZBrush material I was going to use and got to work. For the purpose of this review, I kept it very simple.

Simple two-material texture painting.

After transferring back to iClone, I dressed the Toon Wolf in clothing similar to what a jazz musician might wear, completing the look of a laid-back, saxophone-playing anthropomorphic wolf, his nose and mouth scrunched up as he plays with no penetration from the teeth or gums. I dragged and dropped in a 2D AI-generated background of a stage to dress things up a bit.

What used to be a tedious character to work with, because of the extreme shapes forced onto the low-poly base mesh, had become a pleasure, with no penetration problems even when using the wrinkle system heavily.

Saxophone model “Saxophone” (https://skfb.ly/oo97B) by Matt Caddie is licensed under Creative Commons Attribution (http://creativecommons.org/licenses/by/4.0/). Beard from the Beard and Brows Builder pack.

I’ve been using ZBrush for several years and consider myself a typical user, not a power user by any means, and I found Face Tools simple to use, easy to grasp, and somewhat intuitive, which is not a word generally associated with ZBrush. This GoZ function is much more powerful than the standard one, and the ability to paint in ZBrush and transfer to iClone without mapping will be a time saver, eliminating some of the buzzkill that kind of housekeeping work brings with it.


If you use ZBrush with iClone then Face Tools is worth a look. Take the trial for a test drive and decide for yourself.

MD McCallum - WarLord

Digital Artist MD “Mike” McCallum, aka WarLord, is a longtime iClone user. Having authored free tutorials for iClone in its early years and selected to write the iClone Beginners Guide from Packt Publishing in 2011, he was fortunate enough to meet and exchange tricks and tips with users from all over the world and loves to share this information with other users. He has authored hundreds of articles on iClone and digital art in general while reviewing some of the most popular software and hardware in the world. He has been published in many of the leading 3D online and print magazines while staying true to his biggest passion, 3D animation. For more information click here.

Winner Tips & Interview: Nandor Toth’s “Bringing To Life”

The Winner Tips & Interview series covers practical workflows and techniques shared by winners of the “2023 Animation At Work Contest”. To let users see the full spectrum of the Cartoon Animator pipeline, we are introducing projects that received attention and credit from the community. Let’s now take a look at “Bringing To Life” to see how Nandor Toth worked his magic with Reallusion Cartoon Animator (CTA).

About Nandor Toth

Hi, I’m Nandor Toth, and I am from Hungary. I used to study veterinary medicine in Budapest, and treating dogs, cats, and horses was my daily life. But a decade ago, I was lucky to switch to another interest, which is animation. I love my job, and I am grateful to have a job that is also my hobby and where I can express my creativity. I think about my work almost all the time, even when I’m walking, swimming, or mowing the lawn.

My main strengths are creativity, a sense of beauty, drawing, and creating cartoon characters. I have worked with major clients including BOSCH, Liqui Moly, and RTL. My main areas of work include motion graphics, explainer videos, visual effects, cartoon animations, logo animation, mascot animation, opener videos, and whiteboard animation. I like challenges, and I wanted to challenge myself in the competition, as I had never participated in one before. The “Business & Commercial” category seemed the closest to my background, as I have created animations for previous clients that required a similar mindset.

How I Discovered Cartoon Animator

I’ve been working in animation for 13 years, and I’ve been improving my skills all the time. Every day, I think about which field of animation would let me express myself better, and how. I first found Cartoon Animator on YouTube, in a video comparing it with two other programs; I liked Cartoon Animator 5 more, and I installed it on my computer in March 2023.

When animating, I think the ease of adding movements and modifying them is really important, and making professional-looking movements is also vital for my work. Besides Cartoon Animator, I finished the project with the help of other software, such as Procreate, Illustrator, After Effects, and Audacity. Though it took me some time to finish, I am glad to share how I made it with Cartoon Animator.

My Creation Process

Step 1 – Ideation and Script Writing

I immediately started brainstorming when the competition kicked off. My goal was to showcase the software’s user-friendliness and my own skills. The core storyline came to mind before any specific scene: a cute superhero doing magic on a laptop. Then came the challenge: someone had to introduce the superhero character. This gave me the idea of a cartoon artist who draws him but can’t animate him; instead, Cartoon Animator comes into the picture, which he receives as a gift from his lover. Then the ideas started coming like an avalanche; I had so many that I later had to cut some things from the script.

Step 2 – Drawing Unique Characters

After several days of scripting, I began drawing the characters on my iPad using Procreate. I first sketched their simplified skeleton in blue, followed by a black outline sketch, and finished with detailed lines and coloring.

Step 3 – Vectorization

The next step involved vectorization. I opened the character template file and imported the JPGs exported from Procreate into an Illustrator file, reducing the opacity of the imported JPGs to 20–30%. I made the lines of the template file invisible, leaving the enclosing layers empty, and drew the body parts there. I then positioned the bones and exported everything to an SVG file.

Step 4 – Animation

I launched Cartoon Animator 5, imported the background graphics and characters, and placed the background elements in 3D space. I adjusted the characters one by one in Composer Mode. For the facial animation setup, I made the necessary modifications, followed by 360 Head Creator adjustments. I recolored the sprite hands in SVG Color Adjustment. In Spring Editor, I removed hair and some clothing elements. Then, I proceeded with character animation.

Step 5: Adding Effects

In the next phase, I added effects to enhance the video’s impact. Most of these were free downloadable effects, while some were part of the standard package. To achieve the subtle movements of the iPad in the stock video, I used Motion Tracking to analyze the footage and attach the animated cartoon characters from CTA5 to the Null.

Interview with Nandor

Q: What do you think makes a winning entry to competitions like this? How does someone make their entry stand out in the crowd and really grab the judges’ attention?

A: For an animation to be a winner in the “Business & Commercial” category, I think it has to meet several factors (I’ve ranked them in order of importance, with the most important ones at the top):
-Be special and unique: different from the rest.
-Present the story in a creative way.
-Convey the clearest possible message with the simplest possible graphics.
-It helps if the main character is likable to the majority of the audience (especially if he is cute).
-Be pleasing to the eye, with no distracting graphic elements (e.g. too much vibration, flickering, or flashing).
-It can be spectacular, but be careful with that; spectacle is like spice: if you put in too much, people run away screaming 🙂

Q: Some of you are quite new to Cartoon Animator, while some of you would now be considered veterans of the software. For those of you who are new to Cartoon Animator, what surprised you most about working with this 2D animation software?

A: As I only started using it in March 2023, I consider myself a newcomer to CTA. I have never come across animation software this good at moving characters in such a professional way, while letting me save movements, add new movements made by others so easily, and even edit them back and forth. For me, this is probably the most important benefit of the program. There are many other advantages, but the one I mentioned is one that I use for all my animations.

Q: Did you find the process of creating characters and content for your entry easy and enjoyable? And how did you find the experience of animating your project with Cartoon Animator? Were there any specific challenges you had to overcome? And if so, how did you overcome them?

A: Yes, I found the job relatively easy compared to the end result. In other similar software, it would have been much harder and slower. A challenge for me was the scene where the superhero faces us and points the magic wand at us. As the deadline approached, I didn’t have time to make the character in another view, so I took advantage of the 360 head rotation I had set up beforehand: I turned the head toward us and made a version of the magic wand facing us, while the half-turned body is hidden by the laptop monitor in front of the character 🙂

Q: As a Cartoon Animator user, if you had to choose only one tool or feature of CTA that is your absolute favorite, what would it be?

A: 360 Head Creator, Spring Editor and the latest Motion Pilot (end result looks amazing!)

Q: What is your best piece of advice to other Cartoon Animator users who might be considering entering contests like this in the future?

A: First, learn to use this wonderful software at least at a basic level. Then unleash your creativity and free your mind. Don’t strain to come up with a story: go for a walk, a bike ride, or, as I do, swim a lot, because that’s when the best ideas come 🙂

Q: Now that this year’s contest is finalized, what does the year ahead hold for your work? Are there specific projects you are working on? How does Cartoon Animator fit into your future plans?

A: I want to learn more about the software and will use this knowledge to make “Demo Reel” videos. Videos that I will be proud of. But what I will be MOST PROUD of in my whole life is winning the first prize in the “Animation@Work Contest 2023 Business & Commercial” category! Thank you so much!

Follow Nandor Toth

Website | https://SuperAnimation.net

LinkedIn | https://www.linkedin.com/in/n%C3%A1ndor-t%C3%B3th-2285a128b/