
The Easy Way to Create a Populated Park Scene

Learn how to generate an animated pedestrian scene using ActorCore, iClone & Omniverse

Pekka Varis is a cinematographer and 3D animator from Finland, and an official NVIDIA Omniverse Ambassador. He is also the CEO of an Omniverse-based startup called Cineshare.

In this tutorial, Pekka Varis shares his ways of enlivening large-scale exterior settings by filling a 3D park with animated people using iClone and Omniverse Machinima with ActorCore 3D characters and mocap animations.

3D-Ready Content on ActorCore Asset Store

Hi, my name is Pekka. I started this journey with Brownstone City (a free asset from Omniverse), then I actively browsed the ActorCore website to download all kinds of character and motion packs to fill a busy city street with kids, workers, police officers, and citizens.

The ActorCore asset store is the secret to my success with 3D works. I find the motion library super inspirational and useful for all sorts of projects and genres like fantasy, commercials, and even dramas! I begin by building my storyboards based on stock motions, then I fine-tune them if needed. In addition, the lightweight content provided for 3D people and mocap animations is extremely important, even more so because the characters come fully rigged from head to toe!

ActorCore interactive 3D people for crowds usage.

Import to iClone for Animation

After selectively exporting the crude street geometry of Brownstone City from Omniverse to iClone in FBX format, I deployed all the 3D characters and dropped their idle motions, then applied some paired motions, e.g., people having a conversation.

The Digital Souls content pack is crafted to meet real-world needs, especially for jacks-of-all-trades like myself, or what I like to call “modern-day Da Vincis”, who don’t have ample time to manually animate facial expressions. Digital Souls has distilled the process down to simple drag-and-drops! I can easily apply believable facial animations to all digital actors, with subtle and natural facial performances that can truly engage the audience.

The Pedestrian Actions content pack is ideal for populating any 3D scene with busy streets that call for a variety of natural, autonomous background characters. With fourteen walking styles and eleven idle poses, Pedestrian Actions is a genius concept and I had fun directing all the city people around the park.

Pedestrian Actions used inside iClone.

Walking Around with Motion Director

With idle motions setting the mood and the city as the staging area, I began to direct all the pedestrians while making sure they don’t cross paths and crash into one another. The entire process was a game-like experience for me and, in the end, I had over fifty people walking about. 

Using Alt+click in Motion Director gave me all I needed to move characters around, a method I applied to every pedestrian, one by one, until the streets were abuzz with activity. I enjoyed building the look and feel of my city, which resembled a hodgepodge of ’80s fashion mixed with present-day aesthetics. Finally, exporting the scene in USD format from iClone took a bit of time, so naturally, I fetched another cup of tea.

Using Motion Director to navigate characters in the scene.

Prop & Camera Movements inside Omniverse

In Omniverse Machinima, I animated my camera and added free props found in NVIDIA’s massive asset store. Following the path of the camera, I focused on decorating parts of the street that needed some extra characters to keep the entire scene reasonably light. As a finishing step, I added flowing water and rendered everything in Path Traced mode which gave the movie the final touch that I wanted.

Decorating the scene with props inside Omniverse.

Best 3D Content and Tools for Crowd Simulation

I have been making videos all my life starting from twelve years old—back in the days of Commodore 64. Lights, camerawork, and timing are my strongest skills, but now with tools like iClone and Omniverse Machinima, I can perform miracles!

In a nutshell, the combination of iClone and Omniverse offers a powerful, fast, and intuitive way to create large-scale outdoor renders. Combining characters, natural motions from ActorCore’s asset store, and content from Pedestrian Actions is a powerful way to “colorize” and customize your cityscapes.

For free ActorCore 3D characters and mocap animations, visit here.

For a free trial version of iClone, visit here.

For more info about Pekka Varis and Cineshare, visit here.

iClone 8 User Tutorial : Using Lens Flares to enhance real-time animation visuals

Final scene render by Benjamin Sokomba Dazhi
Benjamin Sokomba Dazhi – Professional iClone Animator

Hi, I’m Benjamin Sokomba Dazhi and I want to welcome you to this iClone 8 (iC8) tutorial. As you all know, I am an animator and my main 3D tool is iClone. I’m so excited to bring you this next tutorial.

This time around, we will again be talking about how to enhance the visuals of your iClone 8 projects for a captivating and alluring finish.

Specifically, we are going to talk about lens flares: how to optimize your scene and make it look aesthetically appealing using them.


Winner Tips & Tricks Interview: The Making of LuckyPlanet’s “The Secret of Figs”

The “Winner Tips & Tricks” series covers practical workflows and techniques shared by winners of the “2022 Animation At Work Contest”. To let users see the full spectrum of the Cartoon Animator pipeline, we are introducing projects that received attention and credit from the community. Let’s now take a look at LuckyPlanet’s “The Secret of Figs” and see how he works his magic with Reallusion Cartoon Animator (CTA).

About LuckyPlanet

Hi, I’m LuckyOne and LuckyPlanet is my YouTube channel. LuckyOne is an alias I use along with a voxel rabbit as my avatar. In the past, I worked as a 3D animator for over seven years before starting this project. I have created several short films that won awards and nominations at animation festivals like Annecy, SIGGRAPH, and the Hiroshima Animation Fest. Unexpectedly, my career prospects took a turn for the worse when I lost 20% of my vision due to eye surgery. That was reason enough to put me off using 3D software, because the interfaces became too complicated and difficult for me to read. I was still able to work as a producer and director for a while; however, I very much missed creating my own shows.

After a long hiatus from animation, I began searching for alternative 2D software with a clean UI that is simple and easy to use. I happened to find Cartoon Animator 4 (CTA4), and it took me only a week to start my first animation project after watching numerous informative tutorials on YouTube.

Although it’s never easy to gather funding for an animation project, I stepped into the world of NFTs in 2021 with a project called “LuckyPlanet”. It had a clear roadmap for financing an animated series and creating an intellectual property business. Cardano—being one of the most eco-friendly blockchains—had a community that welcomed me with open arms. Thanks to them, I finally had enough funding to launch the shows on my own with twenty episodes on the way.

Why choose this entry topic? 

Dr. Tanthai Prasertkul and Linina Phuttitarn, good friends of mine, have produced thousands of hours of science podcasts. We believe it’s possible to create a fancy animated series with these amazing stories. “Scientific thinking” is what we want to convey through our show. Being rational, curious, open-minded, and eager to learn, are the basic skill sets that can help us overcome many of the challenges of the modern world.

Why choose Cartoon Animator?

Pre-rigged characters and facial animations are, in my opinion, the two crucial areas that CTA dominates. As a 3D animator, I can confidently say that these aforementioned facets take a lot of time in production. CTA makes the turnaround twice as fast with results that can compete with other software. User-friendliness and large, clear user interfaces are a must for me, and CTA provides it in spades.

How I did it with CTA

Step 1: Script and voice acting

Since each episode of the original podcast is longer than an hour, they needed to be cut into two-to-five-minute shorts (so production wouldn’t take years to complete). This was done by removing several jokes and extraneous pieces of information, then rewriting the entire script to add cohesion. After the script was finalized, we forwarded a copy to our talented voice actors.

Step 2: Character Creation & Rigging

I collaborated with a friend who is a designer to finalize the main characters’ looks. We iterated on the designs until we had the look and feel that fit the story well and played nicely with the animation style. Thanks to the premade rigs in CTA4, we were able to take two character templates from the library and replace them with our designs in Photoshop.

Step 3: Character Customization & Animation

360 Head Creator is a very powerful tool in my arsenal. It took a week to set up all the facial expressions and enable head turns, but it made animating the characters much easier and faster. With all the options out there, especially with the motion library at hand, I chose the straightforward process of animating each bone one by one because I wanted to build up a collection of custom motions. By continuing down this path, I should have enough movements to create the entire series within four to five episodes.

Step 4: Scene Creation, Composition & Camera Settings

There isn’t much camera work here since the video is focused on narration. In addition, there are only a few layers applied in Adobe After Effects.

Step 5: Composition in After Effects

The characters are composed of basic flat colors. Adding several additional layers in Adobe After Effects can give them more volume and let them stand out from the background.

I sincerely hope you enjoyed our show and would appreciate it if you could subscribe to our YouTube channel and leave some comments!

Follow LuckyPlanet

Twitter | https://twitter.com/LuckyPlanet_NFT

YouTube | https://youtube.com/luckyplanet

Linktree | https://linktr.ee/luckyplanet

Pitch & Produce | Remnants: Post Apocalyptic Short with iClone, Character Creator & Unreal Engine 5

This story is featured in 80 Level

Stanislav Petruk

My name is Stanislav (but I prefer Stan). I am a self-taught artist working in the game-dev industry. I started as a motion designer in a small Siberian town and eventually moved to a bigger city chasing my goal to make games. The first job I landed was a startup mobile game, and I think it was a perfect project that helped me make the transition from video production to a real-time environment. My next job was at a company called Sperasoft, a big outsourcing company that has co-developed a lot of great AAA games. I started as a VFX artist and worked on several big titles there – WWE Immortals (mobile), Mortal Kombat (mobile), Agents of Mayhem, Overkill’s The Walking Dead, and Saints Row. It was a great time and it helped me grow a lot as a professional artist.

After over 4 years at Sperasoft, it was time to move on, and in 2019 I moved to Poland. I spent almost 2 years working at Techland Warsaw. In 2021 I moved once again, this time to Stockholm, and currently work at Avalanche Studios as a senior VFX artist.

In iClone 8 I was able to do all the mocap clean-up previously done in MotionBuilder, but without the complexity or higher cost. Xsens’s inertial sensors capture movement well but have no idea where they are actually located in 3D space.

Stan Petruk – Senior VFX Artist at Avalanche Studios

About the Remnants Project

Besides making games, I love films, and parallel to my real-time VFX career I am trying to develop myself as a filmmaker. I attended several filmmaking courses; the most recent one was a short online course from Vancouver Film School. I am also learning from books and trying to break down the films I watch. But it was all theory, and at some point I realized that I needed to get my hands dirty and make my first film. That is when I started to develop an idea for a short film with a simple story. I had also explored a similar theme in my previous project, a VFX contest entry from two years ago, but it was not a film yet.

The production of Remnants started maybe a year ago, but in the beginning it was mostly research and development. I took a pack of sticky notes and started to break the project down into small elements: first breaking it into the story-structure acts, adding events in between, and making a list of what software and techniques I had to learn to make the project come to life. Everything was glued to the wall, and visually it looked like there was no way I could make it alone with the skills I had. At the same time, I didn’t want the production to turn into just a technical presentation; my goal was to make a film, and the main focus had to be exactly that: film direction.

So, I started to dig into how I can optimize my work. The real-time approach in all possible ways was something obvious. Merging my two passions – gamedev and films – into one was the core idea to get production going.

Discovering the Reallusion Tools

I was very concerned about the characters in my film: they are very important, they must look alive, and it must be possible to empathize with them. So I googled how I could make a character without being a character artist, and after some research, Reallusion products seemed like a perfect solution. They are simple to use, and with some basic knowledge of CG you can get an awesome result.

Reallusion software works very well with other programs, especially with Unreal Engine. All the file formats I needed are supported, and export presets are there, too. You just use it as part of a solid pipeline. It simply does the job and helps save a massive amount of time.

The creation process in Character Creator is as simple as it can be: you just pull the sliders and get the shapes you want. But to get the right result, you must understand exactly what you need. So, I held a virtual film casting: I found an AI that generated a bunch of photos and just chose the ones I liked best. After that, maybe an hour per person and you are ready for the next production stages.
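Slider-based shaping like this is typically driven by morph targets (blendshapes): each slider scales a set of per-vertex offsets that is added to the base mesh. A minimal Python sketch of the general idea follows; the names and data are hypothetical illustrations, not Character Creator's actual format.

```python
# Illustrative sketch of slider-driven morph targets (blendshapes).
# Not Character Creator's real data model; names here are made up.

def apply_morphs(base_verts, morph_targets, slider_values):
    """Blend morph-target deltas into the base mesh.

    base_verts: list of (x, y, z) vertex positions
    morph_targets: dict name -> list of (dx, dy, dz) deltas, one per vertex
    slider_values: dict name -> slider weight in [0, 1]
    """
    result = []
    for i, (x, y, z) in enumerate(base_verts):
        # Each slider adds its weighted per-vertex offset.
        for name, deltas in morph_targets.items():
            w = slider_values.get(name, 0.0)
            dx, dy, dz = deltas[i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        result.append((x, y, z))
    return result

# One vertex with a hypothetical "jaw_width" morph at half strength:
base = [(1.0, 0.0, 0.0)]
morphs = {"jaw_width": [(2.0, 0.0, 0.0)]}
print(apply_morphs(base, morphs, {"jaw_width": 0.5}))  # [(2.0, 0.0, 0.0)]
```

Dragging a slider just changes one weight, which is why iterating on a look is so fast compared to resculpting.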

Clothes and Hair

I did not want the clothes on the characters to look like a standard game asset, where they are a part of the skeleton. So, I decided to use Marvelous Designer and simulate the clothes as a separate element. The pipeline was the following:

  • Export an animated character from Unreal Engine
  • Import it into Marvelous Designer
  • Create a 3D model of clothes in Marvelous Designer
  • Add UV and textures
  • Simulate the clothes on top of the animated character
  • Export as alembic
  • Import into Unreal Engine as a 3ds Max preset with skeleton

Eventually, I used a mix of simulated clothes and 3D model skinning to save some time.

Hair was a tricky part, mostly because I wanted to use UE hair and fur. A lot of hair is also added to the character’s clothes, there are reasons for it – it looks great and hides imperfections of simulated cloth. Here is the pipeline:

  • Export character (or clothes) from Unreal
  • Add particle hair in Blender, which has a lot of great tools for styling hair
  • Export hair as an alembic file (only the first frame, not a whole simulation)
  • (It is an extra stage which is necessary only if the fur in UE is not binding to mesh correctly, I used it only for clothes). Export the first frame of the mesh as alembic and import as a skeletal mesh in UE
  • Import fur into Unreal Engine
  • Create hair binding (use additionally exported mesh as a source skeletal mesh if required)
  • Add the groom component to a mesh

The real problem with the hair is that it does not always sort correctly: if, for example, there is a transparent particle in front of the fur, the fur will be rendered on top anyway. Because of that, I had to remove or tweak a lot of the smoke and fog.

Creating Animation in iClone

Before making the final animations, I made the whole film as a simplified cinematic. I used a mannequin in UE and just moved it around the scene, with all the cameras in place as well. Thus, I had a very good plan: I knew what animations I needed and, roughly, how long every movement should last. In the final version a lot was changed, but the core idea came from the initial cinematic. For the motion capture I used Xsens, which is really great but requires a bit of cleanup and tweaking anyway.

In iClone I was able to fix most of such issues and also fix some of the mistakes I made during the pre-production stage. The most complicated scenes were the ones with interaction involved. Especially because I had to be both characters at the same time. Such things required a lot of manual fixing, e.g., there is a scene where one character takes a cigarette from the other, or throws and catches a potato, all of that originally was not moving correctly. I didn’t have access to finger tracking so it was made in iClone.

For facial mocap I used an iPhone that also required a bit of tweaking later, e.g., chewing the potato was done manually as tracking was not able to understand the recorded movement. I also used a bridge to export characters to Unreal and it is very simple and smooth, with just a click of a button. The animations were imported manually because I needed to clean up and fix the movements before using them in production.

VFX used in Remnants

All of the effects are animated flipbooks, they are mostly fire and smoke. It was the simplest part for me because I am a VFX artist. The pipeline is standard, make a simulation in Houdini, render it as a flipbook, and make a particle effect in Niagara.
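For readers unfamiliar with flipbooks: the rendered simulation frames are packed into a texture atlas, and the particle shader picks one sub-image per frame of playback. Niagara's sub-UV handling does this internally; the sketch below is a general illustration of the frame-to-UV arithmetic, not Niagara's actual code.

```python
# Illustrative flipbook (texture-atlas) frame lookup, as used for
# baked fire/smoke effects. A general sketch, not Niagara's API.

def flipbook_frame_uv(frame, cols, rows):
    """Return (u, v, u_size, v_size) of sub-image `frame` in a cols x rows
    atlas, counting frames left-to-right, top-to-bottom (v = 0 at the top)."""
    frame %= cols * rows          # loop the animation past the last frame
    col = frame % cols
    row = frame // cols
    return (col / cols, row / rows, 1.0 / cols, 1.0 / rows)

# Frame 5 of an 8x8 smoke flipbook sits in row 0, column 5:
print(flipbook_frame_uv(5, 8, 8))  # (0.625, 0.0, 0.125, 0.125)
```

Because the animation is pre-rendered in Houdini, playback cost at runtime is just this UV offset per particle, which is what makes flipbooks so cheap.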


The production started quite a long time ago (over a year ago), mostly because in the beginning it was research and development for me. To put that in perspective, the last five minutes of the film were made in maybe a couple of months. Once I had developed a good pipeline, finished all the preproduction, and had a good plan, production started to go very fast. You basically have a scene set up; you add the lights, drop characters into the scene, and roll the camera. I really loved this approach, because it actually simulates film production and shifts most of your attention to the actual filmmaking.
I think with a traditional approach I would most likely never have made it alone. Now, with all the knowledge I have, I have started a new short animated film with a slightly more complicated story. Currently it is in preproduction, but soon I will be able to share more information about it.

Follow Stan




Want accurate auto-rigging for characters? Try AccuRIG

This article is featured on Creativebloq

Free application for quickly transforming static models into moving characters.

Ever lament the substantial time and energy spent on rigging your favorite characters? Do you desire a one-click solution to streamline the entire rigging process? Look no further – a brand-new solution is in town, namely Reallusion’s AccuRIG.

Presented by the makers of iClone and Character Creator, AccuRIG has direct access to character content on Sketchfab and 3D motions on ActorCore. Not only does it convert static models into rigged and animated characters, AccuRIG also automates tedious and repetitive tasks – all for free!

AccuRIG: A better alternative to Mixamo

Easy steps for character rigging.

As a free auto-rig tool, AccuRIG is available for download from the ActorCore online store. The program is designed for fast, easy, and accurate character rigging. Whether you have models in A-, T-, or scan poses, with low, high, or multiple meshes, you can complete the rigging in five steps.

You can export the rigged FBX file directly to any major 3D tool such as Unreal Engine, Blender, Unity, and C4D, or upload it to ActorCore where you can try out thousands of 3D motions.

Body and hand rigging can be manually refined.

What is amazing about this application is that it also allows you to manually refine the body and hand rigging – with quite impressive results. With manual joint definition, rigging can be customized to match models with different shapes, looks, and scales. The skin weight is optimized for all joints, from the neck, shoulders, elbows, and knees, to detailed finger joints. The volumes for the head, body, and accessories are well-preserved using the AccuRIG rigging technology.
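The skin weights described here feed the standard linear blend skinning deformation, in which each vertex moves with a weighted mix of its influencing joints. A minimal Python sketch of the idea follows; joint transforms are simplified to pure translations for brevity (real rigs use full 4x4 matrices), and this is a general illustration, not AccuRIG's implementation.

```python
# Illustrative linear blend skinning with translation-only joint
# transforms. Real skinning uses full 4x4 joint matrices.

def skin_vertex(vertex, joint_offsets, weights):
    """Deform one vertex by a weighted blend of joint transforms.

    vertex: (x, y, z) rest position
    joint_offsets: dict joint name -> (dx, dy, dz) translation
    weights: dict joint name -> influence weight (normalized below)
    """
    total = sum(weights.values())
    x, y, z = vertex
    out = [0.0, 0.0, 0.0]
    for joint, w in weights.items():
        wn = w / total            # normalize so weights sum to 1
        dx, dy, dz = joint_offsets[joint]
        out[0] += wn * (x + dx)
        out[1] += wn * (y + dy)
        out[2] += wn * (z + dz)
    return tuple(out)

# An elbow vertex influenced equally by two joints: it lands halfway
# between where each joint alone would place it.
offsets = {"upperarm": (0.0, 0.0, 0.0), "forearm": (0.0, 2.0, 0.0)}
print(skin_vertex((1.0, 0.0, 0.0), offsets,
                  {"upperarm": 1.0, "forearm": 1.0}))  # (1.0, 1.0, 0.0)
```

Good auto-rigging is largely about choosing these per-vertex weights well, which is why smooth weight falloff around the neck, shoulders, elbows, knees, and fingers preserves volume during animation.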

Use Pose Offset to correct the avatar posture.

Pose offset is another excellent feature in AccuRIG. It helps correct pose problems such as arm penetration or leg distance when applying animation data to characters with different body scales. It can also generate stylized performances for cartoonish figures. 

Enliven unrigged models on Sketchfab

Search ‘AccuRIG’ in Sketchfab and find compatible models to choose (Image credit: Sketchfab)

Online asset stores, such as Sketchfab, offer hundreds of character models for download, paid or free. Many of these assets require further rigging, and the time and effort to complete the rigging process can be a real obstacle for users.

Without the lengthy rigging process, users can take advantage of the large content library in Sketchfab, use AccuRIG to turn unrigged models into animated avatars and bring them to production, quickly and easily.  

When searching for ‘AccuRIG’ on Sketchfab, users can find a selection of models designed by different artists that are ready for use. There is also access to recommended 3D characters directly from the tool, making it very convenient for users to pick and choose desired models from Sketchfab.

Value-added for model artists 

Character artists can benefit from AccuRIG simply because it is free, fast, and offers accurate results. No more hours of manual rigging to make the static creation perfectly fitted for animations. Artists can display their newly rigged models in different poses and increase the asset value in 3D stores like Sketchfab. 

“All that can be said about AccuRig is that it’s definitely a game changer and definitely a big upgrade from similar tools like Mixamo,” says Roumen Filipov, senior 3D character artist at Chief Rebel. “The easy and fast workflow combined with a high-quality result is surely a must for character artists wanting to take their models to the next level.” 

Thousands of production-ready mocap animations

ActorCore asset store with a variety of 3D motions.

Can’t find suitable motions or poses for your newly rigged character? Just upload it from AccuRIG to the ActorCore online asset store. Explore over 1,700 well-themed, professionally produced 3D motions in the real-time interactive viewport. Download and retarget character animations from ActorCore to use with all major 3D tools for game, film, arch-viz, and interactive projects.

Download and try it yourself

AccuRIG offers many powerful functions, similar to those found in Maya Quick Rig or Blender Auto-Rig Pro. But AccuRIG is TOTALLY FREE, making it an option that is head and shoulders above the competition. 

You can download AccuRIG from the ActorCore website and try it with complimentary Sketchfab models to experience this new 3D production tool. Tutorials are available for getting started with AccuRIG. Visit the Reallusion Courses website and try them for yourself.

Learn from Winners: K-Style Game Character Created by an Undergraduate Student

Go Eun Kim

Hello, my name is Go Eun Kim, a college student majoring in computer engineering, and I am from South Korea.

Since my hobby was playing games and drawing pictures, I naturally became interested in a job related to my hobby and had a dream of becoming a 3D modeler who can implement realistic characters in games. However, graphic classes in college were not what I expected, and I felt the need to study 3D modeling separately to ensure a clear path for my career. So I decided to take a year off from college in 2021.

That year, I started studying character modeling through a graphics academy. At first, I studied using 3ds Max, and gradually learned how to model characters using programs such as ZBrush, Substance Painter, and Unreal Engine 4. And by participating in a contest held in Korea, I learned how to easily create high-quality characters using Character Creator 3. Learning various programs was not an easy process, but the visible results are satisfying and fun for me. I was also able to concentrate on studying 3D modeling with passion, because I always get excited when I imagine people playing game characters that I have made. The year I spent studying 3D modeling was very valuable. This year, since I am back attending college, I am studying my major and 3D modeling together.

Part I. Winner Entry Workflow

Step 1. Character Concept

I found various references to make characters that would appear in the game and decided to make characters with the concept of “Adventurer”. 

Character Creator 3 provides various types of modeling, and its sliders can be used to modify the body shape and appearance, so the desired look can be achieved easily and with high quality. It also provides texture maps such as base colors, normals, etc., which greatly shortened the working time.

Step 2: Additional Software

Take the model from Character Creator into ZBrush and make a simple clothing shape to see if it matches the character. Then make the low-poly and high-poly meshes using 3ds Max and ZBrush (or Marvelous Designer if necessary) and bake the normal map.

Take the baked normal map to Substance Painter, apply it to the model, and create a texture map to complete the clothes.

Additionally, the face texture map provided by Character Creator 3 can be modified in Photoshop and Substance Painter to change the skin details and makeup.

For the hair, use hair cards in 3ds Max to create a hairstyle that fits the character. Then make a hair texture using ZBrush’s FiberMesh and apply it to the hair cards.

Step 3: Pose and Lighting

Move the character’s bones and pose according to the character’s concept in 3ds Max. Adjust the skin value using the weight tool to make it look smooth and not awkward. 

Then export it and take it to Unreal Engine 4 (UE4). Apply all the texture maps created in step 2 to the material node in UE4.

For the background, I use a solid-color or gradient plane to create a lighting effect, or add various brightness levels and colors to a set downloaded free of charge from Unreal Engine, to make the character look its prettiest and “coolest”. By applying lighting effects to the character, you can create various moods.

Step 4: Animation with iClone

Apply facial expression changes to the animations provided by iClone, export the result, and take it to UE4. As in step 3, add a lighting effect to the animation and add camera movement to make it look more dynamic. Then take a screenshot and record a video to complete the work.

In Conclusion

After going through the steps above, all the work is done and the character is completed. Character Creator 3 makes the process of modeling the basic appearance easy and fast, helping to reduce working time and making the work fun. Thank you for viewing my workflow.

Part II. Feature story

Q : Hi Go Eun, thank you very much for sharing the workflow with us. First of all, congratulations on winning the contest!

There are two scenes of your entry Adventurer: one in the spaceship and the other in the canyon. Can you elaborate more about the stories behind this Adventurer?

What’s the fun part and the biggest challenge of creating this character?

Hello, thank you for congratulating me on winning the award! I was worried about choosing a background that would make the character of “Adventurer” stand out. To maintain the concept, I thought I shouldn’t bring the character to any background. So I chose two backgrounds because I wanted to give the character a feeling of exploring a place. It’s like an NPC waiting for a user on a hunting ground or a dungeon in an RPG game.

The biggest challenge in creating the character was the clothes-modeling process, because it was my first time making clothes using Marvelous Designer; still, I think it was a good experience. The greatest fun in creating characters is the process of using Substance Painter. I think adding color and detail to a low-poly mesh using textures is the most fun part of the work, because it feels like instilling vitality into the character.

Q : How did you learn Character Creator (CC)? In your experience, what are the merits and drawbacks of using CC to design characters? Have you ever tried the GoZ function to sculpt your characters in ZBrush and CC at the same time?

What are your favorite features of CC 4 / iClone 8?

I got to know CC through a 3D character production contest hosted in Korea, and I used the GoZ function by making the character’s appearance in CC, modifying it in ZBrush, and bringing it back to CC to check. I think the advantage of CC is that it not only provides realistic and high-quality character models but also various textures and facial expressions, which saves working time.

However, I think it was regrettable that I had to install and use various software to use the animation function. And among the newly added features of CC 4 / iClone 8, I am looking forward to using the physics and animation player functions for my new project.

Q : Because of your great passion for creating games, you have even decided to pause one year from college and learned 3D modeling by yourself. Do you have any role models whose works keep motivating you to pursue this goal?

As you’ve returned to the campus this year to continue your studies in Computer Engineering, I wonder if you see any potential to combine your passions with your major studies? Or say, has the knowledge of Computer Engineering ever helped you to create something different from the past year?

I don’t have a role model, but I always try to do my best in a given task because I believe that if I do my best, I can be the best. And the project I did my best is the motivation for the next project. Currently, my major is different from studying 3D modeling, but I continue to study them together because I think that knowledge of computer science can help me communicate with other departments when I produce games later.

Q : The Mechanic Girl has a similar style to that of the Adventurer, but with more interesting accessories from her hairstyle and weapons to her bag. Do you usually design unique props for your characters? Could you share more of your artistic ideas on this Mechanic Girl?

How did you achieve your character design and animation by using Character Creator, ZBrush, 3ds Max, iClone, and Unreal Engine? In your opinion, what are the merits and shortcomings of using iClone Unreal Live Link?

Mechanic Girl was my first project created from concept art of a character drawn by someone else. The design looked unique, so I thought it would be fun, and I chose this character because there were many things I could try, such as skirts, weapons, and accessories. I modeled this character’s body and appearance using 3ds Max and ZBrush, so it took a relatively long time and there are some awkward parts, but I was able to learn how to make a body mesh and a human face; I think it helped me understand character modeling.

And I planted the bones in 3ds Max and rotated them directly to make the character take the attack pose. I then modified the face model and created a facial expression that matches the attack posture to complete the animation of the character. I think the advantage of iClone Unreal Live Link is that it works with iClone and Unreal Engine in real time without having to export and import FBX.

Q : Please share with us three things (e.g., website, books, podcast, music, games, sports, etc.) that inspire you most when you start a new project.

When starting a new project, the things that inspire me most are games, ArtStation, and animated movies. I think this inspiration comes from the desire to take on a challenge, starting with the curiosity of “how did you make this?” whenever I see something I have never made myself.

For more of Kim’s work, check out her ArtStation page.

Made with Cartoon Animator: The 2022 Animation at Work Contest Winners!

After two months, the 2022 ‘Animation At Work’ Contest for Cartoon Animator (CTA) has come to a close. Reallusion would like to thank all of its wonderful sponsors, including XPPEN, Affinity, Magix, and ASIFA. Overall, the event received 169 global entries, each submitted with a work-in-progress video, across five categories: Business & Commercial, Comics & Art, Education, Vertical Shorts, and VLOG & VTuber.

A total of 26 winners from different backgrounds, and with different levels of skill and experience, were selected. The entries also covered a wide array of styles and topics, proving that Cartoon Animator is streamlined for creative diversity. Several newcomers using CTA for the first time were even able to combine their newly acquired techniques with other software pipelines for amazing submissions. Click here to see the full list of winners with their respective judges’ comments.

Business & Commercial Animation — 1st Prize

Songrea Travels TVC by Onome Egba (Nigeria)

Onome Egba and his thoughts:

“Cartoon Animator excels at simplifying a lot of complex processes naturally associated with character animation. The rigging tools and animation presets are especially good at helping you get your characters moving in no time. Real-time playback is also something you’ll quickly take for granted when using the software. No ramp previews, pre-render, or 1/4 resolutions are needed to actually see what you’re working on which really helps with iterations.”

Judges’ comments:

Congratulations to Onome Egba! This is a delicate 2D production that all judges immediately pictured seeing as a TV or YouTube ad. The voice acting, characters, and body motions were all well balanced, and we also loved the scene and camera changes. Well done!

Comics & Art Animation — 1st Prize

Godly Princess by Eon De Bruin (South Africa)

Eon De Bruin and his thoughts:

“I have been working with Cartoon Animator 4 since 2017 and it has opened up a whole new world of possibilities for me. Today I have my own streaming platform where I create numerous animation shows for kids, and CTA4 is my software of choice because it is so easy to work with, fast to create animations, and integrates well with Clip Studio Paint. I don’t think I will be able to create as many animated shows for my platform if I were to use another software. I love Cartoon Animator!”

Judges’ comments:

Our seasoned CTA user Eon De Bruin impressed us again with his entry and WIP Video. He took a chance to have the 2D princess interact with 3D people – and it all paid off! Congratulations Eon!

Education Animation — 1st Prize

The Secret of Figs by LuckyPlanet (Thailand)

LuckyPlanet and his thoughts:

“Cartoon Animator allows me to produce my animated series more quickly and easily. These animation tools make me feel like an actor myself.”

Judges’ comments:

A 10 out of 10 winning entry for the Education category! We were very impressed with the final quality, and we cannot wait for more educational Cartoon Animator videos created by LuckyPlanet!

Vertical Shorts — Winners

Guard Duty by Jeremy Fisher (Canada)

The Sisters AR by Kirill Klochkov (Russia)

VLOG & VTuber — Winner

Approaching “Global Warming” What measures can you take? by Hiromi Yamamoto (Japan)

Student Animation — Winner

The U.S. Declaration Of Independence by Andre Luiz Siqueira Alencar (Brazil)

As the contest ended, Reallusion was also pleased to announce the coming of Cartoon Animator 5. The forthcoming release strengthens the groundwork of animation production while elevating Cartoon Animator’s unique techniques. In the new version, secondary animations are automatically created with simple keys, squash-and-stretch motions can be deformed freely, and vector graphics are supported for infinite resolution. Take a sneak peek at what’s inside Cartoon Animator 5!

Pitch & Produce | Raise Ravens: Reanimating loved ones in virtual reality with Character Creator

Raqi Syed – Senior Lecturer | School of Design Innovation – Programme Director, ICT Graduate School | Victoria University of Wellington

Raqi Syed

Raqi Syed is an artist, visual effects designer, researcher, and lecturer at Victoria University of Wellington. She is the co-director of MINIMUM MASS.

Her practice and teaching focus on the materiality of light, hybrid forms of documentary and fiction storytelling, and using media archeology and software studies methodologies to better understand contemporary practices in visual effects.

Raqi has worked as a visual effects artist on numerous feature films. In 2020, her VR work was exhibited at the Tribeca, Cannes, Annecy, and Venice International Film Festivals. She is a 2018 Sundance and Turner Fellow, and a 2020 Ucross Fellow. In 2017, the Los Angeles Times named Raqi to a list of 100 people who can help solve Hollywood’s diversity problem. She holds an MFA from the USC School of Cinematic Arts and an MA from the VUW Institute of Modern Letters.

Raqi Syed created an animated docu-memoir (Raise Ravens) in which she digitally resurrected her deceased father in virtual reality with the help of Character Creator, in order to lay his ghost, and generations of family hauntings, to rest.

“I’ve been using Reallusion software for 5 years. I find the tools flexible and responsive to the different needs of storytelling in both my own work and teaching students the principles of character-centric visual effects.”

Raqi Syed – Artist, Visual Effects Designer, Researcher, Lecturer

Ballerina: One Architect’s CGI Fantasy made with Cinema 4D, Character Creator and iClone

This article is featured on Fox Renderfarm

Project “Ballerina” is a 30-second full-CG animation, Kay John Yim’s first personal project to feature an animated photorealistic CG character staged within a grand Baroque rotunda lounge. The lead ballerina was mainly made with Character Creator, animated in iClone, and rendered with Redshift and Cinema4D.

Kay John Yim is a Chartered Architect at Spink Partners based in London. He has worked on a wide range of projects across the UK, Hong Kong, Saudi Arabia and Qatar, including property development and landscape design. His work has been featured on Maxon, Artstation, CG Record and 80.LV.

Yim’s growing passion for crafting unbuilt architecture with technology has gradually driven him to take on the role of a CGI artist, delivering visuals that serve not only as client presentations but also as a means of communication among the design and construction team. Since the 2021 COVID lockdown, he has challenged himself to take courses in CG disciplines beyond architecture, and has since won more than a dozen CG competitions.

This making-of tutorial article is a short version of “Ballerina: A CGI Fantasy”, written by Kay John Yim. For the full version, please visit Fox Renderfarm News Center.

The Making of Ballerina

The animation is a representation of my inner struggles in all artistic pursuits, both metaphorically and literally.

Ballet, an art form widely known for its stringent standards of beauty and high susceptibility to public and self-criticism, is a metaphor for my daily professional and artistic practice. As an architect by day, I work on architectural visualizations, where every detail is scrutinized by my colleagues, senior architects, and clients. As an artist by night, I work on personal CG projects, for which I do hundreds or even thousands of iterations to get the perfect compositions and color schemes. No matter how proficient I become in my professional and artistic skills, the inner struggle never fades away.

Through months of trial and error, I have since learned a lot about efficient character animation and rendering. This article is an intermediate guide for indie artists like myself who want to take their CG art to the next level.

The guide is divided into 4 main parts:

  • The Architecture
  • The Character
  • The Animation
  • Rendering

The software I used includes:

  • Rhino
  • Moment of Inspiration 4 (MOI)
  • Cinema4D (C4D)
  • Redshift (RS)
  • Character Creator 3 (CC3)
  • iClone
  • ZBrush & ZWrap
  • XNormal
  • Marvelous Designer 11 (MD)
  • Houdini


The Architecture

My primary software for architectural modeling is Rhino.

There are many different ways to approach architectural modeling. Rhino’s main advantage over more popular DCCs like Cinema4D (C4D) or Houdini is its ability to handle very detailed curves in large quantities.

As an architect, every model I built started with a curve, usually in the shape of a wall, cornice, or skirting section, swept along another curve from a plan. Rhino’s command list might seem overwhelming at first, but I almost exclusively used a dozen commands to turn curves into 3D geometry:

  • Rebuild
  • Trim
  • Blend
  • Sweep
  • Extrude
  • Sweep 2 Rails
  • Flow Along Surface
  • Surface from Network of Curves

The key to architectural modeling is to always use references wherever possible. I always have PureRef open at the bottom-right corner of my screen to make sure I model in the correct proportions and scale. My references usually include actual photos and architectural drawings.

For this particular project, I used the Amalienburg hunting lodge in Munich as my primary reference for the architecture.

PureRef board for the project

While the architecture consisted of 3 parts – the rotunda, the hallway, and the end wall – they were essentially the same module. Hence I initially modeled one wall module, consisting of a mirror and a window, then duplicated and bent it along a circle to get the walls of the rotunda.

Rhino modeling always begins with curves

Wall module duplicated and bent along a curve

The module was reused for both the hallway and the end wall to save time and (rendering) memory. Having built up a library of architectural profiles and ornaments over the past year, I was able to reuse and recycle them when modeling the architecture.

Ornament modeling can be a daunting task, but with a couple of ornaments modeled, I simply duplicated and rearranged them geometrically to get unique shapes.

Rhino ornament placement

All the objects within Rhino were then assigned to different layers by material; this made material assignment a lot easier later on in C4D.

Assigning objects to layers by material

Note :

The best way to get familiar with Rhino navigation is to model small-scale objects; Simply Rhino has a great beginner’s series on modeling a teapot in Rhino. For anyone in a pinch, there are pre-built ornaments for purchase on 3D model stores like Textures.com; some ornament manufacturers also have free models available for download on Sketchfab and 3dsky.

Exporting from Rhino to C4D

Rhino is primarily a NURBS (Non-Uniform Rational B-Splines) modeler; although NURBS models are very accurate in representing curve and surface data, most render engines and DCCs do not support NURBS.

For this reason I exported the NURBS and meshes to .3dm and .FBX respectively, and used Moment of Inspiration (MOI) to convert the NURBS model to a mesh.

MOI has the best NURBS-to-quad-mesh conversion (better than Rhino or any other DCC) – it always gives a clean mesh that can then be easily edited or UV-mapped for rendering.

Exporting from MOI

Importing into C4D

Importing the FBX file into C4D was relatively straightforward, but there were a couple of things I paid attention to, notably the import settings, model orientation, and file units, listed below in order of operation:

  1. open up a new project in C4D (project unit in cm);
  2. merge the FBX;
  3. check “Geometry” and “Material” in the merge panel;
  4. change the imported geometry’s orientation (P) by -90 degrees in the Y-axis;
  5. use the script “AT Group All Materials” to automatically organize Rhino materials into different groups.

Importing FBX exported from MOI

Importing FBX exported directly from Rhino
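The material grouping in step 5 can be sketched in plain Python. This is a hypothetical stand-in for what a script like “AT Group All Materials” automates, not the script itself; the naming convention (a layer prefix before an underscore) is an assumption for illustration.

```python
from collections import defaultdict

def group_materials_by_layer(material_names):
    """Group imported material names by their layer prefix.

    Assumes Rhino-style names such as "Walls_Plaster" or "Trim_Gold";
    grouping by the prefix before the first underscore mirrors what a
    helper script would automate inside C4D.
    """
    groups = defaultdict(list)
    for name in material_names:
        prefix = name.split("_", 1)[0]
        groups[prefix].append(name)
    return dict(groups)

# Hypothetical material names for illustration:
mats = ["Walls_Plaster", "Walls_Paint", "Trim_Gold", "Floor_Parquet"]
print(group_materials_by_layer(mats))  # groups: Walls (2), Trim (1), Floor (1)
```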

I modeled half of the architecture in Rhino and then mirrored it as an instance in C4D, since everything is symmetrical.

C4D instance & mirroring

The floor (Versailles parquet tiles) was modeled using the photo-texturing method most widely touted by CG artist Ian Hubert. I applied a Versailles parquet tile photo as a texture on a plane, then sliced up the plane with the Knife tool to get reflection-roughness variations along the tile grouts. This allowed me to add subtle color and dirt variations with Curvature in Redshift.

The floor tile was then placed under a Cloner to be duplicated and spanned over the entire floor.

Cloning floor tiles

Note :

C4D and Rhino use different Y and Z orientations, hence an FBX exported directly from Rhino has to be rotated in C4D.

Architectural Shading (Cinema4D + Redshift)

Since I had grouped all the meshes by material in advance, assigning materials was as simple as dragging and dropping onto the material groups, using cubic or tri-planar mapping. I used Textures.com, Greyscalegorilla’s EMC material pack, and Quixel Megascans as the base materials for all my shaders.

For ACES to work correctly within Redshift, every texture has to be manually assigned to the correct color space in the RS Texture node; generally, diffuse/albedo maps belong to “sRGB”, and the rest (roughness, displacement, normal maps) belong to “Raw”. My architectural shaders were mostly a 50/50 mix of a photo texture and a “dirt” texture to give an extra hint of realism.
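The color-space rule of thumb above can be captured in a small helper. This is a plain-Python sketch of the decision, not Redshift’s API; the map-type names are assumptions for illustration.

```python
def rs_color_space(map_type: str) -> str:
    """Pick the RS Texture node color space for a texture map.

    Rule of thumb: color data (diffuse/albedo) is "sRGB", scalar data
    (roughness, displacement, normal, etc.) is "Raw".
    """
    color_data = {"diffuse", "albedo", "basecolor"}
    return "sRGB" if map_type.lower() in color_data else "Raw"

for m in ("Albedo", "Roughness", "Displacement", "Normal"):
    print(m, "->", rs_color_space(m))  # Albedo -> sRGB; the rest -> Raw
```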


The Character

The base character was created in Character Creator 3 (CC3) with the Ultimate Morphs and SkinGen plugins – both of which were very artist-friendly, with self-explanatory parameters.

Ultimate Morphs provided precise slider controls over every bone and muscle size of the character, while SkinGen gave a wide range of presets for skin color, skin texture detail, and makeup. I also used CC3’s Hair Builder to apply a game-ready hair mesh to my character.

CC3 morphing & Hair Builder

Face Texturing

The face was one of the most important parts of the CG character and required extra attention. The best workflow I found for adding photorealistic detail was the “Killer workflow” using Texturing XYZ’s VFace model and ZWrap.

VFACE is a collection of state-of-the-art photogrammetry-based human head models produced by Texturing XYZ; every VFACE comes with 16K photoscanned skin textures, displacement maps, and utility maps. ZWrap is a ZBrush plugin that automatically fits a pre-existing topology to a custom model.

The “Killer workflow” essentially matches the VFACE mesh shape to the CC3 head model; once the two mesh shapes were matched up, I was able to bake all the VFACE details down to the CC3 head model.

My adaptation of the “Killer workflow” can be broken down as follows:

  1. export the T-posed character from CC3 to C4D;
  2. delete all polygons except the head of the CC3 character;
  3. export both the CC3 head model and the VFACE model to ZBrush;
  4. use the Move/Smooth brushes to maneuver the VFACE model to fit as closely as possible to the CC3 head model;
  5. launch ZWrap, then click and match as many points as possible, notably around the nose, eyes, mouth, and ears;
  6. let ZWrap process the matched-up points;
  7. ZWrap should then output a VFACE model that perfectly matches the CC3 head model;
  8. feed both models into XNormal and bake the VFACE textures onto the CC3 head model.

Matching points of the VFACE (left) & CC3 head (right) in ZWrap

Note :

The full “Killer Workflow” tutorial is on Texturing XYZ’s official YouTube channel: VFace – Getting Started with Amy Ash. I recommend saving the matching points in ZWrap before processing. I also recommend baking all the VFACE maps individually in XNormal, as they are very high-res and could crash XNormal when baked in batch.

Skin Shading (Cinema4D + Redshift)

Once I had the XYZ texture maps ready, I exported the rest of the character texture maps from CC3. I then imported the character into C4D and converted all the materials to Redshift materials.

At the time of writing, Redshift unfortunately did not yet support random-walk SSS (a very realistic and physically accurate subsurface scattering model found in other renderers like Arnold), so rendering skin required a lot more tweaking.

The 3 levels of subsurface scattering were driven by a single diffuse material with different “Color Correct” settings. The head shader was a mix of the CC3 and VFACE textures; the VFACE multichannel displacement was blended with the “microskin” CC3 displacement map.

Character look-dev

A “Redshift Object” was applied to the character to enable displacement – only then would the VFACE displacements show up in render.

Close-up render of the character

Hair Shading

Having experimented with grooming in C4D Ornatrix, Maya XGen, and Houdini, I decided that using the baked hair mesh from CC3 for project “Ballerina” was leaps and bounds more efficient down the line. I used a Redshift “glass” material with the CC3 hair texture maps fed into the “reflection” and “refraction” color slots, as hair (in real life) reacts to light like tiny glass tubes.

Note :

For anyone interested in taking the CC3 hair to the next level of realism, CGcircuit has a great vellum tutorial dedicated to hair generation and simulation.

Early test of CC3 mesh hair to hair geometry conversion in Houdini


Character Animation: iClone

I then exported the CC3 character to iClone for animation. I considered a couple of ways to approach realistic character animation, including:

  1. using off-the-shelf mocap data (Mixamo, Reallusion ActorCore);
  2. commissioning a mocap studio for bespoke mocap animation;
  3. using a mocap suit (e.g. Rokoko or Xsens) for custom mocap animation;
  4. old-school keyframing.

Having experimented with various off-the-shelf mocap data, I found Mixamo mocaps to be way too generic, with most looking very robotic; Reallusion ActorCore had some very realistic motions, but I could not find exactly what I needed for the project. With no budget and (my) very specific character motion requirements, options 2 and 3 were out of the picture. This left me with old-school keyframing.

First I screen-captured videos of ballet performances and laid them out frame by frame in PureRef. I then overlaid the PureRef reference (at half opacity) over iClone and adjusted every character joint to match my reference using “Edit Motion Layer”.

Pose 1

Pose 2

The animated characters were then exported to Alembic files.

Final character animation

Note :

While my final concept depicted ballerinas in slow motion, my original idea was actually to keyframe a 20-second ballet dance, which I very quickly realized to be a bad idea for a number of reasons:

  1. in slow motion a lot of frames could be interpolated, but real-time motion involved a lot of unique frames and hence required much more tweaking;
  2. more unique frames subsequently meant more rendering problems (flickering, tessellation issues, etc.).

Early test render of my original idea.

Considering this was my first character animation project, I concluded that a slow-motion style sequence was the way to go – 2 unique poses with 160 frames of motion each.
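The interpolation trade-off above can be illustrated with a toy example: two key poses with the in-between frames computed by linear interpolation. Real DCCs interpolate with splines and quaternions rather than a plain lerp, and the joint names and angle values here are made up.

```python
def lerp_pose(pose_a, pose_b, t):
    """Linearly interpolate joint rotations (in degrees) between two poses.

    A toy stand-in for the in-between frames a slow-motion shot relies on;
    every frame between the keys is derived, not hand-tweaked.
    """
    return {joint: (1 - t) * a + t * pose_b[joint]
            for joint, a in pose_a.items()}

# Two hypothetical key poses, with 160 frames of motion between them:
pose_1 = {"knee": 10.0, "elbow": 45.0}
pose_2 = {"knee": 90.0, "elbow": 0.0}
frames = [lerp_pose(pose_1, pose_2, i / 159) for i in range(160)]
```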

Garment Simulation

Cloth simulation was by far the most challenging part of the project. The two major cloth simulation/solvers that I considered were Marvelous Designer (MD) and Houdini Vellum.

While Houdini Vellum was much more versatile and reliable than Marvelous Designer, I personally found it way too slow and therefore impractical without a farm (one frame of cloth simulation could take up to 3 minutes in Houdini Vellum vs. 30 seconds in Marvelous Designer on a Threadripper PRO 3955WX with 128 GB of RAM).

Cloth simulation in MD, while generally a lot quicker to set up than Houdini Vellum, was not as straightforward as I imagined. Simulated garments in MD always came with some form of glitch, including cloth jittering, piercing through the character, or outright dislocations. Below are some of the settings I tweaked to minimize glitches:

  1. using “Tack” to attach parts of the garment to the character;
  2. increasing cloth “Density” and “Air Damping” to prevent the garment from moving too fast and subsequently out of place;
  3. simulating parts of the garment in isolation – though not physically accurate, this allowed me to iterate and debug a lot quicker.

I also reduced “Gravity” in addition to the above tweaks to achieve a slow-motion look.

MD simulation
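The direction of these tweaks can be summarized as a settings sketch. The parameter names and values below are illustrative, not actual Marvelous Designer defaults.

```python
# Hypothetical Marvelous Designer-style fabric settings, illustrating the
# direction of each tweak described above (all values are made up):
default_fabric = {"density_g_cm2": 20, "air_damping": 1.0, "gravity": -9800}

slow_motion_fabric = dict(
    default_fabric,
    density_g_cm2=60,   # heavier cloth moves less violently
    air_damping=5.0,    # damps fast, jittery motion
    gravity=-2000,      # reduced gravity for a floaty, slow-motion look
)

print(slow_motion_fabric)
```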

Note :

The official Marvelous Designer YouTube channel has a lot of garment modeling livestreams, which I found to be the most helpful resource for learning MD. Alternatively, there are a lot of readily available 3D garments online (notably on Marvelous Designer’s official site and the ArtStation Marketplace) which I used as a basis for many of my projects.

MD is extremely prone to crashing, and there is also a bug in both MD10 and MD11 that prevents saving of simulated garments 90% of the time, so always export the simulated garment as Alembic files rather than relying on MD to save the simulation.

Simulation Clean-up

After dozens of simulations, I imported the MD-exported Alembic files into Houdini, where I did a lot of manual clean-up, including:

  1. manually fixing cloth-character collisions with “Soft Transform”;
  2. reducing simulation glitches with “Attribute Blur”;
  3. blending together the best simulations from different Alembic files with “Time Blend”.

Cleaning up simulated cloth in Houdini with “Soft Transform”

The cleaned-up cloth simulation was then exported as Alembic to C4D.

Alternative to Garment Simulation

For anyone frustrated by impractical Houdini Vellum simulation times and MD glitches, an alternative is to literally attach the garment to the character’s skin in CC3 – a technique most commonly found in game production.

Attaching garment to character in CC3

Note :

See Reallusion’s official guide for creating game-ready garments here.

Garment Baking and Shading

Once I was done with the cloth simulation in MD and the clean-up in Houdini, I imported the Alembic file into C4D. MD Alembic files always show up in C4D as a single Alembic object without any selection sets, which makes material assignment impossible.

This is where C4D baking came into play – a process I used to convert the Alembic file into a C4D object with PLA (Point Level Animation):

  1. drag the alembic object into C4D timeline;
  2. go to “Functions”;
  3. “Bake Objects”;
  4. check “PLA”;
  5. then bake.

Going through the steps above, I got a baked-down C4D object on which I could easily select polygons and assign multiple materials using selection sets. I then exported an OBJ file from MD with materials, imported it into C4D, and dragged the selection sets directly onto the baked-down garment object. This eliminated the need to manually reassign materials in C4D.
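The selection-set trick can be sketched as a mapping from named polygon sets to materials. This pure-Python example mirrors the idea, not C4D’s API; all names are hypothetical.

```python
def assign_by_selection_set(selection_sets, materials):
    """Assign a material to each polygon via named selection sets,
    mirroring how dropping the OBJ's selection sets onto the baked
    garment restores per-polygon material assignments."""
    assignment = {}
    for set_name, polygons in selection_sets.items():
        for poly in polygons:
            assignment[poly] = materials[set_name]
    return assignment

# Hypothetical tutu garment: bodice vs. skirt polygon indices
sets = {"Bodice": [0, 1, 2], "Skirt": [3, 4]}
mats = {"Bodice": "Linen_RS", "Skirt": "Sequin_CarShader"}
print(assign_by_selection_set(sets, mats))
```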

I used a blend of linen texture maps (from Quixel Megascans Bridge) and the Redshift Car Shader to emulate the sequin fabric (think “blink”) found in a lot of professional ballet tutu dresses.

Close-up render of the fabric material


Note :

Do not use AO or Curvature nodes for the simulated garment materials (or any animated objects), as they could produce glitches in the final renders.


Lighting & Environment

Although I tried to keep my lighting as minimal as possible, project “Ballerina” inevitably required a lot of tinkering due to the nighttime setting.

The nighttime HDRI did not provide sufficient ambient light to the interior space, and the chandelier bulbs were way too dim to be the primary light source. Ultimately I placed an invisible spot light under the center chandelier and added a fake light that affected only the architectural ornaments. The fake light provided an extra level of bounce that gave just the right amount of illumination without ruining the moody atmosphere.

I also added a “Redshift Environment”, controlled along the Z axis and multiplied with “Maxon Noise”, to give more depth to the scene. Exterior-wise, I scattered 2 variations of dogwood trees with the C4D “Matrix” object in the surrounding area; they were lit from the ground up to add extra depth. In summary, the lighting of the scene included:

  1. dome light (nighttime HDRI) x 1
  2. chandelier mesh lights x 3
  3. spot light (center) x 1
  4. exterior area lights x 4
  5. fake light positioned under the chandelier (affecting architectural ornaments only)

RS lights

Note :

The trees were generated with SpeedTree. Lighting takes a lot of consistent practice to master; apart from my daily CG practice, I spent a lot of time watching B-rolls and breakdowns of movies – for instance, I took a lot of inspiration from Roger Deakins’ lighting and cinematography, as well as Wes Anderson’s frame composition and color combinations.

Camera Movements

All my camera movements were very subtle. They included dolly, camera-roll, and panning shots, all driven with Greyscalegorilla’s C4D plugin Signal. I personally prefer Signal for its non-destructive nature, but old-school keyframing would work just fine for similar camera movements.

Draft Renders

Once I had the character animations, cloth simulations, and camera movements ready, I began doing low-res test renders to make sure I would not get any surprises during the final renders. This included:

  1. flipbook (OpenGL) renders to ensure the timing of the animations was optimal;
  2. low-res, low-sample full-sequence renders to ensure there were no glitches;
  3. full-res (2K) high-sample still renders with AOVs (diffuse, reflection, refraction, volume) to check what contributed to any prevalent noise;
  4. submitting test renders to Fox Renderfarm to ensure the final renders matched my local renders.

This process lasted over 2 months, with iteration after iteration of renders and corrections.

Close-up shot I

Close-up shot II

Final shot

Final Renders & Denoising

I used a relatively high-sample render setting for the final renders, as interior scenes in Redshift were generally prone to noise.

RS final render settings

I also had motion blur and bokeh turned on for the final renders – in general, motion blur and bokeh look better (more physically accurate) in-render than when added in compositing.

Half of the final 2K sequence was rendered on a local workstation, while the rest was rendered on Fox Renderfarm, totaling about 6,840 hours of render time on dual RTX 3090 machines. I used Neat Video to denoise the final shot, whereas the close-up shots were denoised with Altus Single (in Redshift).

Note :

Always turn “Random Noise Pattern” off under Redshift “Unified Sampling” when using “Altus Single” for denoising.

Redshift Rendering GI Trick

Redshift’s GI Irradiance Cache calculation can be quite costly; my final renders, for instance, averaged 5 minutes of GI Irradiance Caching time per frame.

V-Ray has an option in its IR/LC settings named “use camera path”, designed specifically for scenes where the camera moves through a still scene; once enabled, V-Ray calculates only one frame of GI cache for the entire sequence. Borrowing a page from V-Ray, I used the following motion blur settings to calculate the first frame of the Irradiance Cache:

RS rendering GI trick motion blur setting

That one Irradiance Cache was then used to render the entire sequence. Two shots of the project were rendered using a single GI cache, resulting in roughly 10% faster render times overall.
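The saving is easy to estimate: reusing one cache skips the per-frame GI pass for every frame after the first. The 5-minute GI time comes from the figures above; the per-frame render time is an assumed value for illustration.

```python
def gi_cache_savings(frames, gi_minutes_per_frame=5, render_minutes_per_frame=45):
    """Estimate minutes saved by reusing one Irradiance Cache for a shot.

    Without the trick, every frame pays the GI caching cost; with it,
    only the first frame does. The 45-minute render time per frame is
    an assumed figure, not from the article.
    """
    without_trick = frames * (gi_minutes_per_frame + render_minutes_per_frame)
    with_trick = gi_minutes_per_frame + frames * render_minutes_per_frame
    return without_trick - with_trick  # minutes of GI time saved

# For a hypothetical 160-frame shot:
print(gi_cache_savings(160))  # (160 - 1) * 5 = 795 minutes saved
```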

Note :

The GI trick only applies to shots with very little motion; when applied to the 2 close-up shots of project “Ballerina”, for example, I got light patches and ghosting on the character’s skin.


Having spent months on the project, I have gained an appreciation for traditional character animators – I never realized the amount of effort involved in crafting character animation, nor the subtlety of detail required to bring convincing CG characters to life.

Though I would not consider myself a character artist, I personally think character animation is really powerful in making CG environments relatable, and it will therefore remain an essential part of my personal CG pursuits moving forward.

Learn more :

• Kay John Yim’s personal site https://johnyim.com/

• Kay John Yim’s ArtStation https://www.artstation.com/johnyim

• Character Creator https://www.reallusion.com/character-creator/download.html

• iClone https://www.reallusion.com/iclone/download.html

• Reallusion https://www.reallusion.com/