The “Winner Tips & Tricks” series covers practical workflows and techniques shared by winners of the “2022 Animation At Work Contest”. To let users see the full spectrum of the Cartoon Animator pipeline, we are introducing projects that received attention and credit from the community. Let’s now take a look at LuckyPlanet’s “Songrea Travels” and see how he works his magic with Reallusion Cartoon Animator (CTA).
About Raknum Animation
Hi, I’m Onome Egba and I’m a writer and director. I have a background in psychology and have also spent some time working in advertising. I’ve worked on both corporate and personal projects in various capacities including as a producer, director, and animator on ads, music videos, and short films—one of which earned me an African Movie Academy Award (AMAA) nomination for best animation.
As founder and creative director at Raknum Animation Studios, I see the company as an outlet to enable me and my team to find interesting ways to engage with art and creativity. As a relatively young and evolving hybrid art form, I’ve loved the creative potential animation offers. It’s also been particularly exciting to find new ways to leverage innovations in software and technology to make processes simpler and more streamlined; because, though animation can be highly expressive and offer near-infinite possibilities, it can also get very tedious.
Why we made Songrea Travels and why we chose Cartoon Animator
We were exploring new tools and solutions for 2D character rigging for a project—where speed and consistency were paramount—when we found Cartoon Animator and the Animation at Work contest. As an iClone user, I’ve always known about CTA, but having only recently started to dabble in more 2D-centric workflows, I never really paid it much attention. The contest created an opportunity to force ourselves to put Cartoon Animator to the test: to see how fast we could learn it and, more importantly, how well it would integrate into our workflow. Also, finding out about the contest well past its announcement meant we were left with only about seven weeks to execute the project, which seemed a good creative challenge for our small team. In the end, we lived up to our billing and we’re more than proud of what we pulled off.
Our workflow for this project was pretty straightforward. We like to work in a way that lets us perform concurrent tasks, with multiple things going on at the same time. That way, the backgrounds can inspire the character art, music can inspire the writing, and everything piggybacks off everything else until we organically find a common ground everyone’s excited about. With that approach, the home-versus-vacation idea came pretty early on, and the somber acoustic guitar tune soon became a guiding light for the project. That is, until we started editing and got completely tired of the song. Hence why this animation breakdown is set to the most non-acoustic-guitar-sounding tune we could find.
How I did it with CTA
Step 1: Ideation
Once we had the home and vacation theme, we started looking for places that would be cool to explore visually. We went back and forth between centering it completely on the home or on the vacation before deciding to combine the two. By this time, our art director, Funto Coker, was already exploring background art styles, and we were assessing the locations that seemed to connect best. Soon we had a script and a blueprint for everything else.
Step 2: World Building & Layout
Backgrounds were completed before the characters because that’s what we started with. So we looked at the space, set up the shot, and then added the characters. Another thing we looked for in the composition, background art, and camera layout was opportunities to utilize the z-axis, adding depth and visual interest to the shot with a parallax effect. Parallax also helps the audience get a feel for the scale of the scene, which serves as a storytelling element in itself. For domestic scenes, we kept it intimate and flat, then pushed for more depth as we went outside.
Step 3: Character Rigging & Animation
Once the characters were done, we brought them into CTA 4. With the limbs separated in the PSD file, simple drag-and-drops connected the bones, and in a few minutes we had a character ready for animation. I also personally liked being able to have a custom GUI for each character—it just added personality to the animation process. Most of the animation was done using the Motion Key Editor. Using keyframes and transition curves, we were able to complete the animation in no time, which was great because, by this point, we were fast approaching the final days of the contest.
Step 4: Clean up & Post Production
We finally get to the most fun part of the process, as the heavy lifting has already been done. We have an animation that’s working cohesively and telling a comprehensive story. At this point, we’re focused on supplementing that story by amplifying the background art and further integrating the character animation into the background so they blend as one. We also do some basic relighting to add a bit more contrast and depth. We were also concerned with directing the viewer’s eyes, which entails darkening and slightly blurring elements that might distract the audience from where they are supposed to be focusing. The scene over the temples of Myanmar was especially exciting; it’s a perfect example of an already beautiful scene being further enhanced through compositing.
In the end, it was a fun exercise and we’re glad we took it on. Given the sheer volume of inspiring work submitted to the contest, we feel very honored to have been awarded first prize in the “Business and Commercial” category. I hope this overview helped; we look forward to the amazing things being created by everyone in this community.
The Vambies are an avatar collection developed by Vambie Inc. and Taiyaki Studios. A quirky cast of vampire/zombie hybrids, they are being released into the world on the Reallusion marketplace and are appearing throughout social media.
To celebrate the launch, Reallusion and Taiyaki Studios will be running a competition: who can create the best video using a Vambie?
Make your own social-media-inspired Vambie video, post it to your channel, and make sure to tag #vambies to be part of the competition. To make sure your video is not missed, you can also email us or post your video in our Discord channel: https://discord.gg/taiyaki
Reallusion will be offering one free iClone license to the winner of the competition!
The winner will be announced on the 2nd of January 2023.
The Vambie story
Taiyaki Studios’ mascot and metaverse tour guide ‘Mr Yaki’ travels to meet Dan, the creator of the Vambies, who explains to him the origins of the Vambies:
This video was created using Reallusion’s iClone and the Unreal Engine. For a behind the scenes look at how Taiyaki Studios uses Reallusion software to produce its videos, check out their latest ‘making of’ video:
Who are they for?
The Vambies are a community-driven animated show living on social media. Anyone interested in character animation and creating fun content is welcome to download a Vambie and begin producing videos: https://marketplace.reallusion.com/iclone/search/vambies with more Vambies to come soon! Creators can use Vambies for free and monetize their creations with no royalties.
We have a dedicated team of experts ready to help in our discord channel https://discord.gg/mWWS84gS and it’s also a place to share your work and ideas and even collaborate with other Vambie creators.
About Vambie Inc.
Dan was born a vampire but accidentally bit a zombie and found himself turned into a hybrid. Shunned by both the vampire and zombie communities, he founded Vambie Inc. to manufacture clones of himself – the Vambies – and spread them into the world.
About Taiyaki Studios
Taiyaki Studios builds avatars for vtubers and virtual influencers. The future of media is social-first, powered by virtual characters, and supercharged by fan-generated content. We’re here for it.
Kay John Yim is a Chartered Architect at Spink Partners based in London. He has worked on a wide range of projects across the UK, Hong Kong, Saudi Arabia and Qatar, including property development and landscape design. His work has been featured on Maxon, Artstation, CG Record and 80.LV.
Yim’s growing passion for crafting unbuilt architecture with technology has gradually driven him to take on the role of a CGI artist, delivering visuals that not only serve as client presentations but also as a means of communication within the design and construction team. Since the 2021 COVID lockdown, he has challenged himself to take courses in CG disciplines beyond architecture, and has since won more than a dozen CG competitions.
Part I. ARTWORK
Q : Hi John, welcome to our Feature Story series. First of all, congratulations on all the art and architectural visualization awards you’ve won and on being elected into the Hall of Fame on CG Boost and VWArtclub in 2022!
Could you share with us the art concepts behind ‘Kagura’ and ‘Ballerina’ ? What are their similarities and what kind of message would you like to convey with ‘Kagura’?
Thank you so much for having me, it is such an honor to be featured again in Reallusion Magazine!
Both projects ‘Ballerina’ and ‘Kagura’ are representations of myself; they are metaphors for the inner conflicts and struggles in my artistic pursuit. As both an architect and a CGI artist, I am constantly struggling between creating art for mass appeal as opposed to simply creating a well-composed image that I love. In ‘Ballerina‘ I combined ballet with Baroque architecture, knowing full well that glamorous ballet poses and architectural style would draw the most attention.
Ballet, an art form widely known to have stringent standards of beauty and highly susceptible to public and self-criticism, is the metaphor of my artistic practice, particularly in gaining online traction through social media. No matter how proficient I become in my skills, the struggle never fades away as I feel like I am always competing against every other artist for attention.
‘Kagura‘, on the other hand, embodied my enthusiasm for Japanese culture and aesthetics. The project concept is a fantasized version of a Shinto ritual ceremonial dance in Japan. Traditionally, the dancer herself turns into a god during the performance – here depicted as the dancer’s ballerina tutu dress transforming into a hakama as she dances on the floating stage, purifying spirits of nature. The transformation sequence is a literal “reveal” of my inner conflict, as I have come to terms with it and accepted the fact that creating art could simply be an act of self-indulgence.
Q : Thank you for sharing such an uneasy yet beautiful struggle with us. The transformational moment really caught my eye—such a magic moment as the ballerina finally meets her true self!
Can you share more about the process of creating such a moment? Did you confront any difficulties during the process?
The animation can be broken down into three parts: the character, the cloth simulation, and the transformation. The character animation was based on one mocap data found on Reallusion Marketplace, which I modified in iClone to get the specific gestures and slow-motion look that I envisioned.
For the cloth simulation, I used a combination of Marvelous Designer (MD) and iClone/Character Creator. MD gave very realistic results but was fiddly and time-consuming for simulating multi-layered clothing; iClone and CC cloth physics were essentially real-time but lacked realism for complex clothes. For these reasons I prepared two sets of garments in MD (tutu dress & hakama) and grouped them into two categories: skin-tight garments and loose garments. The skin-tight garments (tutu dress leotard & hakama inner layer) required less detail and were animated in iClone; the loose garments (tutu dress skirt & rest of the hakama) were simulated in MD for maximum detail. The transformation of the tutu dress into the hakama was primarily driven by “PolyFX” within Cinema4D.
Even though the animation was fairly simple—technically speaking—it was extremely challenging to reach a rhythm and an aesthetic that flowed naturally with the character’s movement. I ended up spending over two months just iterating over the ten seconds of animation.
Q : Is the final result close to what you envisaged? What could be done better next time?
The final result is quite close to what I envisaged, although I initially planned to include a zoom-in shot but ultimately had to give it up due to PC spec constraints. The model got extremely heavy early on and I spent a lot of time simply waiting for viewport feedback—in retrospect, I could have optimized the model a lot more and kept it as simple as possible until the final render.
Part II. CHARACTER CREATION
Q : As mentioned in many interviews, you’re heavily influenced by Japanese culture; for instance, the scene of ‘Kagura’ is set at a Japanese ryokan that inspired the high-grossing 2020 anime film “Demon Slayer: Kimetsu no Yaiba”. Interestingly enough, the characters you have designed lean more toward realistic digital humans.
I wonder if you’ve ever considered creating more Japanese anime-like characters, or characters from games such as “Final Fantasy XIII”, which are hyper-realistic yet anime-styled.
What are the pros and cons of using Character Creator to produce your characters ?
I have considered creating Japanese anime-like characters—in fact, Final Fantasy XIII inspired me to learn 3D. However, as architecture has become an inseparable part of my life throughout the past decade, I came to appreciate the beauty in subtle proportions, lighting, materials, and details found in photorealistic CGI, which ultimately led me to the path I have chosen.
Character Creator empowered architects like me to create CG characters without the professional knowledge of a character artist. As characters convey the scale and purpose of a space, Character Creator allows me to add narrative to my architectural renders, making them more relatable to the viewers.
Q : In terms of stylized characters, I’m quite curious about your thoughts on Disney-style characters such as the Toy Story tetralogy. Has this type of cartoon-style aesthetics ever influenced your character creations?
Like many other artists, I am also inspired by Disney animations, particularly the lighting and the color combinations, but to a lesser extent Disney-style characters. I find partially stylized characters in 3D—notably the combination of realistic materials and exaggerated proportions—not as immersive as fully stylized characters like those found in the recent Netflix series Arcane.
The unique combination of painterly textures and 3D models in Arcane looks nothing like any other Disney or stylized character I have ever seen. I am sure it would be a lot of fun and a challenge to design a fully stylized character myself, but there is still so much to explore in the world of photorealism that I have no plans to publish any stylized artwork yet.
Part III. COMPOSITION
Q : Wabi-sabi aesthetics is the core philosophy permeating Japanese art and lifestyle, emphasizing asymmetry, simplicity, and modesty. However, John’s pieces always reveal a glamorous world in which exquisite characters are situated in splendid surroundings.
I can’t help but wonder how this worldview influences you when you start creating something new.
I was heavily exposed to Japanese culture growing up in Hong Kong, and although I appreciate wabi-sabi aesthetics I see it more as a philosophy and a work ideal—in every artwork I create, I aim to bring out a sense of serene melancholy from the viewers, longing for more.
Take ‘Forfeited Souls: The Unfinished Chapels of Batalha‘ as an example: every element within the composition was doused in mystery—the giant cat appearing from the sky, the large standing statues, and the glowing flowers—everything was woven together into an incomplete narrative. Although I did have a story in mind while creating ‘Forfeited Souls’, I never described it explicitly, leaving it to the viewers’ imagination.
Q : Since you’re familiar with Cinema4D and Redshift to build up environments, could you elaborate on the artistic concept behind the winning entry ‘Ark Muse’ which eventually got featured in Maxon Redshift’s new 2022 official demo reel ?
How do you create such a convincing fantasy by combining C4D with Redshift and Character Creator? Did Wabi-sabi aesthetics inspire you in any way?
‘Ark Muse‘ was created under a very tight deadline for Clint Jones’ INFINITE JOURNEYS Challenge; the project concept is one’s desire to go back in time, ultimately manifested while asleep (note the clock on the table). Unlike a lot of my other works, ‘Ark Muse‘ depicts a fantasy land where elements of different eras collide. The CG character is the essential ingredient in creating a “suspension of disbelief”—an important anchor that grounds the viewers in a chaotic, dreamy world.
I used Character Creator in combination with Marvelous Designer to create the character and placed her in the foreground of the composition. Character Creator allowed me to iterate on character poses very quickly, and thus to make design decisions much faster. The skin texture maps created with Character Creator’s SkinGen plugin, used in combination with Redshift, added a lot of subtle detail and made the composition much more tangible.
Similar to ‘Forfeited Souls‘, nothing in ‘Ark Muse‘ is explicitly stated; it invites the viewers to ponder and question the fantasy land, leaving them longing for more.
Part IV. ARCHITECT, ARCHVIZ & CG ART
Q : In the Renderbus CG Webinar, you shared four tips to CG enthusiasts from your industry background as a RIBA chartered architect.
I wonder how these personal CG arts projects have influenced your architectural career in the past two-plus years, whether positively or negatively?
My personal projects have definitely helped me progress with my career as an architect, especially in upping my work efficiency and making design decisions, and I stand by the four tips that I give to all CG enthusiasts.
Four tips from Renderbus CG Webinar 24: ▪ Iterate objectively. ▪ CG is not a lab experiment. ▪ Don’t reinvent the wheel. ▪ Meet deadlines.
The one piece of advice that I have been pondering a lot lately is “not reinventing the wheel”. I have built up a library of 3D assets that I could reuse to realize ideas much more quickly. This spared me a lot of repetitive modeling time that I could then spend elsewhere, for instance, learning character animation.
Q : ArchViz is a relatively new field in the AEC industry. As an “Architect by day, CGI Artist by night”, how do you see the development of ArchViz and Architecture industries for indie artists and pro studios in the next five years?
With the rapid advancement in real-time rendering and AI software, ArchViz has a much lower entry barrier than before.
Real-time rendering software like Unreal Engine 5 and D5 Render is very promising for delivering ArchViz of decent quality; its instant visual feedback means much quicker turnarounds. On the other hand, AI software like Midjourney and Disco Diffusion is capable of generating images with lighting and composition pleasing to the eye—both of which used to take ArchViz artists a lot of time to iterate on. Though the aforementioned programs still lack the features to be reliably used on a daily basis, I can definitely see architects, including myself, producing decent renders much quicker and hence communicating with clients much more efficiently in the next five years.
Q : So far, instead of using 3D characters, architecture firms have tended to use 2D characters to decorate their architectural mockups.
In your opinion, how will 3D character animation be applied to the ArchViz and architectural industries in the near future ?
I think the majority of ArchViz work feels very distant from the general public due to the lack of convincing CG people and crowds. With realistic CG characters more readily available through software like Character Creator 4 and iClone, this will definitely help bridge the communication gap between architects and clients.
Part V. PIPELINE
Q : Being a self-taught and diligent artist, John has learned more than thirty software packages to create CG artwork.
Can you elaborate on your experience of not fixating on just one or two programs? Instead, you seem to be trying new ones all the time. Isn’t that time-consuming, or has it opened up new possibilities in your CG art?
Learning new software obviously takes time, but with software advancing so quickly in recent years, one is more likely to lose time, in the long run, fixating on outdated software and workflows.
Learning Houdini for instance, allowed me to look at 3D from a completely different perspective; I have since transitioned to a procedural workflow as opposed to a destructive workflow, which eliminated a lot of repetitive tasks that I used to do on a daily basis. I could not imagine completing projects ‘Ballerina’ or ‘Kagura’ without Houdini’s procedural workflow, particularly in cleaning up cloth simulations.
Q : ‘Kagura’ is probably your first work using Character Creator 4 (CC4) and iClone 8 (iC8). How did the transition between the software upgrades impact your work?
Did you confront any hindrances? If so, how did you solve them? Finally, what are your favorite features of CC4 and iC8?
The transition from CC3 to CC4 and iC7 to iC8 was relatively smooth; I appreciate the lack of a GUI overhaul, which made transitioning much easier. My favorite features of CC4 are the timeline integration and the ability to mirror poses by body part. The timeline integration eliminated the need to export CC characters to iClone for previewing animations, and the mirroring function gave me more flexibility while posing my characters.
My favorite feature of iClone 8 is the integration of 3DXChange, which streamlined my workflow of importing non-CC characters and mocap animations for use within iClone.
Part VI. ADVICE
Q : Architecture is a demanding profession. How do you gather new ideas beyond the daily routine, especially ideas not related to architecture?
For example, your piece ‘The Magician: Golden Gallery’ was recently highlighted in ArtStation Fashion Week.
A lot of the time my inspiration outside of architecture comes from movies. ‘The Magician: Golden Gallery‘ was indirectly inspired by the Disney movie “Cruella”. The elaborate costume designs really caught my eye and sparked my interest in fashion design. They motivated me to learn garment creation in Marvelous Designer, which I did by studying sewing patterns and reading fashion magazines.
Q : As you addressed in the webinar, the best way to learn is to pick up one subject that interests you most and then dive in.
Could you describe your experiences of learning 3D modeling, rendering, and creating clothes in Marvelous Designer? Who are your role models for learning each topic?
Similar to learning Marvelous Designer, I think the best way to learn modeling and rendering is just to work on personal projects one is interested in and search for solutions online when encountering a hurdle.
I do not have a particular role model in CG, but I picked up advice from a mentor and a senior architect that I greatly respect, which is to work consistently as opposed to cramming for deadlines. I took the advice to heart and learned something new every day, consistently over the past two years.
Q : For people who are interested in the ArchViz industry, from your point of view, what are the best three learning resources to start with ?
I think official (software) documentation is the most underrated resource for learning any sort of 3D software; I personally learned to use Redshift render mostly from reading its official documentation.
Apart from official learning resources, I always recommend Ian Hubert’s Patreon and Hugo Guerra’s YouTube channel for anyone interested in ArchViz or simply in creating beautiful renders in general. Both of the aforementioned channels teach 3D and compositing in a software-agnostic manner that applies to any toolset.
Q : Please share with us one quote that has influenced you a lot to this day.
‘Kagura’ is by far the most challenging personal project I have ever done since I had little to no experience in motion graphics or character animation half a year ago. I learned along the way as I worked on projects ‘Kagura’ and ‘Ballerina’ all through trial and error, rendering out iteration after iteration throughout the past 6 months.
With Reallusion and Fox Renderfarm’s support, I eventually brought ‘Kagura’ to life, and this has been the most rewarding project since I began my CGI journey. For any self-taught CG artist out there like myself, who is constantly struggling to up their quality and skill set, I would like to share a quote by American novelist Anne Lamott—the quote originally refers to writing but it deeply resonated with me as an artist:
Creating art is like driving a car at night. “You can see only as far as your headlights, but you can make the whole trip that way.” You don’t have to see where you’re going, you don’t have to see your destination or everything you will pass along the way. You just have to see two or three feet ahead of you.
Greetings, my name is Peter Alexander and I specialize in Character Creator related workflows. In this video, I’m going to detail the process of adding physics to an article of clothing on my Piccolo character—specifically, his cape. I’ll also show you a few other tricks along the way. This feature tutorial leverages the CC4 Blender Pipeline Tool (installed in CC4) and CC/iC Blender Tools (installed in Blender) for Character Creator and Blender.
This character’s outfit consists of several layers of clothing, and proper layering is important when utilizing various functions in Character Creator, such as the Conform and Auto Hide Mesh features. However, physics is currently not impacted by these layers.
By default, if you have no clothing items, your first imported clothing item will start at the layer-one slot; after that, it’s two, three, and so on (the clothing layer menu allows you to rearrange these layers). Originally the layers were limited to twenty or so; now the limit is 252, so there’s plenty of room for adding clothes—the exceptions being the Gloves and Footwear slots, which are still limited to two layers.
I created some of these items in ZBrush, and by default, the same material name is applied to all items imported through GoZ. However, Blender is strict about not letting materials or items share the same name; for this reason, you must change the names of the materials before importing them into Blender.
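To make that prep step concrete, here is a minimal sketch in plain Python (illustrative only — not the Blender or GoZ API; the function name is hypothetical) of the suffix-based renaming you would apply to the duplicate GoZ material names before import:

```python
def make_unique_names(names):
    """Append numeric suffixes so every material name is unique,
    mirroring the manual renaming done before the Blender import."""
    seen = {}     # material name -> how many duplicates renamed so far
    unique = []
    for name in names:
        if name not in seen:
            seen[name] = 0
            unique.append(name)          # first occurrence keeps its name
        else:
            seen[name] += 1
            unique.append(f"{name}_{seen[name]:02d}")  # e.g. "Cape_01"
    return unique
```

For example, three items that all imported as "Cape" would become "Cape", "Cape_01", and "Cape_02", so each material can be tracked individually once the character is in Blender.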
Cape Weight Mapping
For the most part, I’ve weighted the cape to the Spine02 bone, which is just below the neck. I find this works best for capes, but your settings may vary. I would probably not weight the cape heavily to the legs or arms, as these bones move too often in a way that wouldn’t affect an actual cape.
Since we are using physics, I want the cape to move and follow the character, but not be impacted by a lot of movement beyond the torso.
Using the edit and sculpting tools, I will fix the various clipping issues that are apparent upon import. Note that the cape is not draped, as that can be handled by physics. Also note there are no normal maps on this cape; in my experience, clothing folds faked with normal maps tend to look odd when actual physics are applied, so I avoid applying cloth-fold normal maps to an item such as a cape.
Mesh Smoothing and Normals
For whatever reason, I find most imported items need to have Smooth Shading applied. After this, in Edit mode, you can recalculate the normals by average. This is not strictly necessary, but it saves you from having to deal with miscalculated normals in Character Creator.
Adding and Painting a Physics Weight Map
A clothing weight map can be applied through the Cloth Settings area of the addon. When Add Weight Map is initiated, you can paint the map as you would a regular texture. Remember that White adds physics, and Black negates physics. A black weight map will have no physics, but adding black in areas can serve to pin your garment.
I found that using the Gradient tool was the best method of painting a weight map for a cape, though getting the right area to initiate the gradient can take some trial and error. When you are done weight painting, click on Done Weight Painting and save your map.
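The gradient tool is essentially assigning each vertex a weight based on where it sits between the pinned edge and the free-hanging hem. As a rough illustration (plain Python, hypothetical function name, not the actual add-on code), the per-vertex logic looks something like this, where 0 is black (pinned, no physics) and 1 is white (full physics):

```python
def gradient_weight(y, y_pinned, y_free):
    """Linear gradient between the pinned edge (weight 0, painted black)
    and the free-hanging edge (weight 1, painted white), clamped to [0, 1]."""
    if y_free == y_pinned:
        return 1.0  # degenerate gradient: treat everything as free
    t = (y - y_pinned) / (y_free - y_pinned)
    return max(0.0, min(1.0, t))
```

For a cape pinned at the shoulders (say height 1.0) with the hem at 0.0, the shoulders get weight 0, the hem gets weight 1, and vertices above the pinned line clamp to 0 — which is the behavior you are aiming for when dragging the gradient down the cape.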
Exporting & Reimporting into Character Creator
Export the character through the CC/iC Blender Tools (installed in Blender), and import it into Character Creator. Hidden mesh settings will be lost in this process, so they must be set up again.
After the character has been reimported, you can see that the weight map has been automatically loaded. If you want to adjust the weight map further, you can play with the brightness and contrast settings.
Using the Animation Player, I can see that I need to add some collision shapes to the main character to prevent clipping with the cape. Collision shapes, based on my understanding, lessen the resources necessary to calculate physics; if collisions were calculated directly between meshes with a high polygon density, many systems would stutter.
Adding Collision Objects
There’s currently no physics on the rest of the clothing; therefore, I’m setting up these colliders so that the cape will not clip with those items either.
Colliders can be scaled on the X-, Y-, and Z-axis, rotated, repositioned, as well as changed from capsule to sphere shape.
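To see why such primitive colliders are so cheap, consider what a capsule test actually costs per cloth vertex: one closest-point-on-segment computation and a distance comparison. Here is a rough sketch in plain Python (illustrative only, not Character Creator's implementation):

```python
def point_hits_capsule(p, a, b, radius):
    """True if point p lies inside a capsule whose axis is the segment a-b.
    One closest-point-on-segment test per cloth vertex is why primitive
    colliders are far cheaper than mesh-vs-mesh collision checks."""
    ab = [b[i] - a[i] for i in range(3)]      # capsule axis vector
    ap = [p[i] - a[i] for i in range(3)]      # vertex relative to axis start
    denom = sum(c * c for c in ab)            # squared axis length
    # Parameter t of the closest point on the segment, clamped to [0, 1]
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    dist_sq = sum((p[i] - closest[i]) ** 2 for i in range(3))
    return dist_sq <= radius * radius
```

For an upper-arm capsule running from the shoulder to the elbow, the simulation only needs this handful of arithmetic operations per cape vertex per frame, instead of testing against every triangle of the arm mesh.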
With the colliders set up, I’m going to try another physics test. The new Animation Player in Character Creator 4 is very helpful in troubleshooting physics before a character gets sent over to iClone for more detailed animation.
If you’re still having trouble at this stage, I would play with the physics settings a bit and be patient with the results. I found that the cape shrank a bit during some simulations, so I adjusted the Tether Limit. Also, adjusting the Dampening setting can add or decrease air resistance. There is a breakdown of these properties listed in the iClone online manual.
While this demo covered some relatively common knowledge, I hope there were some tips along the way that were beneficial to your workflow. The CC4 Blender Pipeline Tool (installed in CC4) and CC/iC Blender Tools (installed in Blender) add-ons make the process much more streamlined, permitting an easier path for optimizing clothing and physics settings.
Exporting and importing content between Character Creator and Blender has never been easier or more accessible to the average user, and it keeps getting better with each update to these wonderful tools.
I finished my studies in architecture and urban design in 2002. While I was still studying, back in 1998, I was invited by the university to start assisting teachers with their classes. Ever since, I have been working part-time and freelancing in the archviz industry.
In 2004, I opened my studio, FX Animation Studios, catering to the local market with TV ads and PSAs. It’s what we’ve been doing to this day. Our studio is very small: it’s mostly me and my wife, plus a third member who handles all the administrative work a company requires. I’m the one behind most of the stories and all the 3D animation, editing, and more. My wife mostly handles modelling, story, and sound, and is starting to get into 2D animation.
In 2011, “Os Pestinhas”, aka the troublemakers, were born. It’s a project comprising shorts, comic book stories, and a feature film project. Over the years we were able to make, all self-funded, two award-winning shorts, over 15 PSAs, and a few comic book stories under the same name. “Os Pestinhas” follows a group of three kids who aim to educate and learn from their surroundings and adventures.
Our dream is to make the very first 3D animated feature film, and that’s why “Kibwe” was born. After almost eight years of working on the feature film project on and off, we are excited to have the support of Reallusion and Epic Games, as well as our partners at Ekaya Productions, who will handle the sound for the film.
In the making of the film “Os Pestinhas” we are heavily utilizing the power of real-time creation and animation tools like Character Creator, iClone, Blender and Unreal Engine to get this project over the finish line.
“The Reallusion set of tools – iClone 8 and Character Creator 4, have allowed me to really tell my stories faster, and the way I want them to be told!”
Living up to the motto of “Enlivening Any Character”, Character Creator 4.1 (CC4.1) aims to overcome the obstacle of rigging characters for animation. Aided by the built-in technology of AccuRIG, CC4.1 can turn any static model into an animation-ready character with cross-platform support in minutes. Furthermore, enhanced subdivision export combined with the flexibility to generate levels of detail (LODs) empower both pro and indie studios to enhance their characters toward hyper-realistic digital actors for film or optimize for massively multiplayer online games without compromising real-time performance.
See the latest update in CC4.1:
1. Advanced AccuRIG Turns Any Static 3D Model into a Live Character
Powered by AccuRIG, Character Creator can auto-rig humanoid models as well as handle complicated multi-mesh structures with cutting-edge features. Hard surfaces can be isolated from skin weights to retain their rigidity, while painted skin weights can be pruned and tweaked, regardless of complexity.
For non-rigged characters, you can choose the desired items to undergo a simple three-step auto-rig process, with each step being configurable and reversible. Game developers can apply the same bone definition profile to other game avatars of similar body scales, which is a massive time-saver with the elimination of repetitive steps.
Rig Selected Mesh & Auto-Attach Hard Surface Items
Besides accuracy, the customizability and flexibility of Advanced AccuRIG in Character Creator 4 are what distinguish it from the free AccuRIG tool in ActorCore. Items can be designated as rigging targets, and rigid items such as helmets and armor are auto-attached to the closest bones as accessories to prevent mesh distortion.
Re-rigging, Repurposing & Reduction
Rigged humanoid characters and static meshes can be transformed into CC avatars while retaining original facial bones and animations. Polycount and bone count can then be effortlessly reduced for re-rigged characters.
Easy Setup for Any Type of Character: Masking Unused Bones
Even if your character isn’t whole, like a damaged Greek statue or an amputee, AccuRIG can still ensure proper animation for the rest of the body by masking away the unused bones. Masking is also suitable for partial body movement and characters with imperfect T- or A-poses. >> Learn more
2. Character Scalability – Easily up/downgrade character level (LOD) to suit all application needs
Optimized and decimated characters can be attained with one simple click for one-man teams and AAA production studios alike. Character Creator maximizes usability for each character you make while outfitting them for film production, game design, VR/AR applications, and massive crowd simulations.
Filled to the brim with intuitive one-click solutions, Character Creator can easily upscale a character for film production and extreme closeups. By contrast, the same character can be downgraded to comply with ActorBuild, LOD1, or LOD2 standards that are suitable for AEC industries, crowd simulations, and mobile games — wherever optimization is of the utmost importance.
By upgrading and downgrading character quality with Character Creator, artists can generate multiple levels of detail and have complete control over the revamping process, including bone count, polycount, texture size, and facial details. >> Learn more
3. The World’s Only Subdivision Export for Animated Characters
Exquisite quality for film production & all render engines
The capacity to subdivide a low poly real-time character model is crucial for close-up shots and high-resolution image/video output. Subdivided animation-ready characters are compatible with all render engines and provide salient improvements to visual quality.
Character Creator’s Subdivision Export is a solid step in becoming a universal character system by removing obstacles to FBX and USD interoperability. Consequently, artists can rest assured that their rigged quad-based characters render without a hitch and that facial morphs perform in the most exquisite manner across different applications.
CC Avatars and any imported low-res, quad-based characters can benefit from Subdivision Export, including realistic and stylized characters. The outline and shading of accessories, clothes, and props are optimized as well. Subdivision Export keeps CC avatars nimble for live performances while retaining hi-res details suitable for final production, from cinema to TV commercials.
With FBX/USD format and LiveLink, Subdivision Export greatly elevates the quality of all types of real-time characters for professional high-resolution rendering. >> Learn more
Greetings, this is Peter Alexander. In this tutorial I’m going to demonstrate Reallusion’s free auto-rig tool, AccuRIG. There have been many videos exploring this great tool, so I hope I can provide a new angle by exploring these unique characters, each with their own attributes that are useful for demonstrating some of AccuRIG’s potential.
Character One: Non-Human, Stylized Monkey
First up is this stylized monkey character which I purchased through Artstation from an artist named Ali Farsangi. It’s a charming character which I chose because it is very stylized and cartoonish. It served as a nice challenge for AccuRIG, as it’s not a typical human character. The same applies to the other character types, but the monkey has its own set of challenges. For example, the fingers were too close together which confused the rigging algorithm when it came time to calculate that aspect of the rig. I had to manually increase the space between the monkey’s fingers, allowing it to succeed. Also, as with many cartoon characters, this one had fewer fingers. Luckily, AccuRIG can accommodate hands with zero to five fingers.
As with most characters, the AccuRIG algorithm does a good job of estimating where the bones should go on this monkey character.
Hovering Joint Guides
A feature I really like is that when you hover over a joint, it shows you the ideal position compared to a human model. AccuRIG struggled to calculate the position of the fingers, but thanks to the feature mentioned, it was easy to move the joints to the correct positions.
The Thumb Direction Guide
The thumb has an additional guide which helps alleviate issues with the thumb seen in other rigging tools.
Any adjustments to the joint nodes on one hand can be applied to the other hand using the Mirror Function.
AccuRIG provides a selection of animations for previewing the rigging of your models and, if needed, adjusting them using the Pose Offset feature. Although most characters will have a high degree of compatibility with ActorCore animations, offsetting a pose can allow for a better flow of movement for highly exaggerated characters.
Uploading to ActorCore
Uploading to ActorCore provides you access to the full animation library, limited only by what you’ve purchased. From here you can preview animations and download them for use in a 3D application of your choice. This character probably has a childlike personality, so I chose one of the Kids animation packs.
Exporting to Blender
I’m a Blender user, so I’ll be using that format to demonstrate the very useful Rigify integration provided by the Character Creator addon.
Using Rigify through the Character Creator Add-on
By importing through this addon, you can add Rigify controls to your character, vastly increasing the animation integration with Blender.
Expanding on AccuRIG and Rigify
Many times when I import a character for animation work, I’ll switch to the Matcap shader, which is significantly less laggy than the other shaders (it’s more of a habit of mine than a necessity).
AccuRIG does not currently rig the eyes of characters, but Rigify does support those controls. For those with slightly more experience in rigging, you can easily build on AccuRIG’s base setup to create a very capable rig with full eye controls.
Here I am just making sure that smooth shading is applied, then I average out the normals. I do this with most imported characters and items in Blender.
Using IK and FK Controls
Rigify supports both IK and FK controls, which can be found under the Item menu. From here, you can also toggle controls that you don’t need.
In addition, you can import any animation you’ve downloaded from ActorCore to preview, adjust, and apply to your AccuRIG character.
Character Two: High Polygon Werecat with Huge Claws and Exaggerated Limbs
Next up is this stylized werecat model that I purchased through Artstation from an artist named Anton Rabarskyi.
Dealing with Anatomy and Stylization with AccuRIG
Like the monkey, this character is highly stylized but in a different direction. He has exaggerated muscles, limbs and is only partially humanoid. The fingers have claws, and the finger joints differ from humans. The feet have an extended hind-leg aspect to them. Finally, the model has a high polygon count. For this reason, it serves as a good problem-solving demonstration for AccuRIG.
The only real issue I had was some minor distortion due to the stylized nature of the head and neck, but I made some adjustments and didn’t run into any problems after the second run through this process.
I did find the fingers more difficult to deal with due to the claws, and my impression is that this character has limited finger joints compared to a human. Overall I found that AccuRIG was able to accommodate this character very well. It just took a few seconds longer to process due to the much higher polygon count.
Even the fingers, which I was concerned about, seemed fine.
After uploading to ActorCore, I briefly tested some animations to inspect his movements, then I selected the “Grumpy Claw” animation, which fit the character well.
Character Scaling Issues Using Rigify
As with the stylized monkey, I was able to import and apply Rigify and the selected animations with no difficulty. One thing I will point out is that if your character is not scaled similarly to characters you’d find in Character Creator and ActorCore, Rigify may have difficulty assigning controls properly. For example, if your character is scaled to just a few millimeters, or to hundreds of meters, its controls will likely be out of place. So if you find that your Rigify controls do not fit your character, this is likely the reason.
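A quick sanity check before rigging is to measure the model's height and compute a uniform correction factor toward real-world human scale. The sketch below is plain Python working on raw vertex tuples, not the Blender API; the 1.7 m target and all names are illustrative assumptions:

```python
# Sketch: compute a uniform scale factor that brings a model to roughly
# human height before rigging. Rigify-style control placement assumes
# real-world units; the target height here is an illustrative assumption.

TARGET_HEIGHT_M = 1.7

def scale_factor(vertices, target=TARGET_HEIGHT_M):
    """Uniform factor that brings the model's Z extent to `target` meters."""
    zs = [v[2] for v in vertices]
    height = max(zs) - min(zs)
    if height == 0:
        raise ValueError("degenerate model: zero height")
    return target / height

# A 4 mm tall mesh, e.g. one imported with the wrong unit scale:
tiny_mesh = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.004)]
factor = scale_factor(tiny_mesh)
scaled = [(x * factor, y * factor, z * factor) for x, y, z in tiny_mesh]
```

In Blender itself, you would then apply the scale on the object so the rig is generated against the corrected dimensions.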
Character Three: Very Low Polygon Demon Lord
Finally, I tried AccuRIG with a model purchased from Sketchfab, a 2,000+ polygon Demon Lord, from the artist Bitgem. While this model was already appropriately rigged, I wanted to see how AccuRIG would handle a very low polygon character — like the ones you would find in a lightweight mobile game.
I switched the hand rigging to four digits, after which, AccuRIG correctly detected the fingers with only minor adjustments needed.
Reassigning Character Vertices and Weights in Blender
After uploading the character to the ActorCore server, I assigned the character a “spell summoning” animation and downloaded the files for use in Blender. Once in Blender, the Rigify controls were allocated to the rig and worked as expected. The only issue I ran into was that the lower teeth were not identified as part of the face. However, I assigned the vertices surrounding the teeth to the group associated with the head bone, which fixed the issue.
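The fix amounts to reassigning a handful of stray vertices to the head bone's vertex group by proximity. The sketch below models that logic in plain Python (the data layout is a stand-in, not the bpy API, and all names and distances are illustrative):

```python
# Sketch of the teeth fix: reassign stray vertices (the lower teeth) to
# the head bone's vertex group by proximity. Plain-Python stand-in for
# what was done by hand in Blender's weight tools; names are illustrative.

def reassign_to_head(vertex_groups, positions, head_center, radius):
    """Move any vertex within `radius` of the head center into the 'head' group."""
    for idx, pos in positions.items():
        dist = sum((a - b) ** 2 for a, b in zip(pos, head_center)) ** 0.5
        if dist <= radius and vertex_groups.get(idx) != "head":
            vertex_groups[idx] = "head"   # give full weight to the head bone
    return vertex_groups

groups = {0: "head", 1: "none", 2: "spine"}   # vertex 1 = a lower tooth
positions = {0: (0.0, 0.0, 1.8), 1: (0.0, 0.05, 1.72), 2: (0.0, 0.0, 1.0)}
fixed = reassign_to_head(groups, positions, head_center=(0.0, 0.0, 1.75), radius=0.15)
```

Only the vertex near the head center is moved into the head group; distant vertices keep their original assignments.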
In conclusion, I have found AccuRIG to be a wonderful free tool. All three characters were Rigify-ready in Blender after being rigged by AccuRIG, saving me lots of time. Although these characters were quite different in shape and size, AccuRIG handled them easily, and the rigged results were impressive. And since the rig is compatible with the mocap animations on ActorCore, it serves as a great way to bring non-standard, animated characters into game engines like Unity or Unreal, or into third-party 3D applications like Blender or Maya.
And with that, I’ll bring this demo to a close. I hope there was something you learned or found useful. I would encourage everyone to check out AccuRIG and see for yourself how powerful it is.
Level Up 2D Animation with Spring Physics, FFD Exaggeration, and Vector Graphics
Cartoon Animator 5 (CTA 5) launches with several crucial features introduced to level the playing field for amateur and professional artists alike. In addition to overhauling rudimentary animations with the secondary motion from Spring physics, free-form deformation (FFD) makes cartoonish anticipation and exaggeration accessible to any aspiring animator. CTA 5 also supports vector animation giving rise to boosted render quality with high-res output and a highly-acclaimed SVG pipeline.
Cartoon Animator is a 2D animation software designed for ease of entry and productivity. It can turn static images into animated characters, drive facial expressions with facial mocap animation, generate lip-sync animation from audio, create 3D parallax scenes, and produce 2D visual effects. Artists can access content resources and wield a comprehensive Photoshop/vector pipeline to rapidly customize sprite characters and create interesting content. While implementing the most innovative workflow by synergizing industry-leading applications, 3D animation resources, and motion capture devices, Cartoon Animator brings the ultimate freedom for high-quality production.
Whether vector or bitmap, any image can be imported, rigged, and animated in Cartoon Animator.
Automated secondary motion is applicable to any object and extremely easy to work with.
3D Head Creator transforms 2D art into 3D styled characters with 0 to 360° head turns.
Motion Link connects iClone with CTA 2D characters to stream and convert 3D motion to 2D animation.
Few things are more rewarding than watching your static 2D artwork suddenly turn into living and breathing animations. CTA’s autonomous Spring dynamics puts you in creative control without fussing over complex physics and follow-through motion. These flashy physics effects are perfectly fit to energize character and prop animation.
Spring bones bring secondary motion to characters so that they jiggle as they move.
An object can have multiple Spring settings set up with different Spring groups with distinctive behavior.
Spring physics reacts to all types of movement including simple transform keys, mouse-driven facial puppets, and live performances made by facial tracking or motion capture.
Ten Spring presets of different material types make it easy to find the most suitable starting point for Spring animation.
Spring attributes include bounciness, speed, gravity, and angle limitations for squash and stretch characteristics.
A set of 14 samples and templates designed after common items work in tandem with manual rigging.
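The follow-through behavior that Spring groups automate can be pictured with a one-dimensional toy model: a "spring bone" tip chases its parent, overshoots, and settles after the parent stops. The sketch below is plain Python with illustrative constants, not CTA's actual solver:

```python
# Toy model of Spring-bone secondary motion: a tip point lags behind its
# parent, overshoots when the parent stops, then settles. One-dimensional
# sketch with illustrative constants -- not CTA's actual physics engine.

def spring_follow(parent_path, stiffness=0.2, damping=0.7):
    """Tip position per frame, chasing the parent with springy lag."""
    tip, vel = parent_path[0], 0.0
    out = []
    for target in parent_path:
        vel = vel * damping + (target - tip) * stiffness
        tip += vel
        out.append(tip)
    return out

# Parent jumps from 0 to 1 and holds; the tip overshoots, then settles near 1.
path = [0.0] * 5 + [1.0] * 45
tip = spring_follow(path)
```

The overshoot past the parent's resting position is exactly the follow-through that makes capes, hair, and antennae feel alive without hand-keying.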
FFD can easily exaggerate motion by simply moving the bounded lattice points. With 109 FFD presets and customizable timeline keys, designers can create cartoonish effects like squash and stretch, anticipation, and exaggeration. FFD presets can be dragged and dropped onto static objects to make them come alive.
By adding FFD to existing animated 2D characters, designers can accentuate different parts of the animated drawing by moving lattice points and adjusting intensity levels.
FFD keys can also be exaggerated by using transition curves in the timeline or deployed with pre-made FFD templates to instantly enrich existing animation.
36 squash and stretch presets in the FFD Editor can give a cartoonish style to any type of animation.
The asset library is updated with free G3 Human motion files containing adjustable FFDs.
Multiple FFDs can be additively stacked to fine-tune exaggerations and deform with precision.
FFDs can be dragged and dropped onto photos and vector graphics to make them come alive and emote.
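At its core, FFD maps every point of an image through a lattice: move a few lattice points and everything inside follows. A minimal bilinear version with a 2x2 lattice is sketched below in plain Python; CTA's lattices are denser and keyframable, and all names here are illustrative:

```python
# Minimal free-form deformation sketch: a 2x2 lattice whose corner points
# bilinearly deform any 2D point inside the unit square. CTA's FFD lattices
# are denser, but the principle -- move a few lattice points, the whole
# image follows -- is the same. All names are illustrative.

def ffd(point, corners):
    """Bilinear FFD; corners = [bottom-left, bottom-right, top-left, top-right]."""
    u, v = point                        # coordinates in [0, 1]
    bl, br, tl, tr = corners
    bottom = [(1 - u) * bl[i] + u * br[i] for i in (0, 1)]
    top = [(1 - u) * tl[i] + u * tr[i] for i in (0, 1)]
    return tuple((1 - v) * bottom[i] + v * top[i] for i in (0, 1))

# An identity lattice leaves points alone; raising the top corners
# "stretches" everything near the top of the image upward.
rest = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
stretched = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.5), (1.0, 1.5)]
p = (0.5, 1.0)
```

With the `rest` lattice, `ffd(p, rest)` returns `p` unchanged; with `stretched`, the same point is carried upward, which is the essence of a squash-and-stretch key.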
Vector graphics support lets users import widely available SVG vector assets to CTA5; whether they are downloaded from stock image sites, or custom created with Illustrator, CorelDRAW, or other compatible tools. Unlike raster images that allocate file size to store pixel data, you can zoom in and out of vector graphics and still enjoy a picture quality that is both crisp and sharp. There is no better way to build a large scene than with vector graphics that can be scrolled, zoomed, and navigated throughout.
Vector Grouping Tool gives designers the power to define color combinations and opacity settings, letting users set color options for any vector object or sprite character and create various looks in a snap.
Adjustable coloring for entire groups without breaking the cohesion of custom color schemes.
Color grouping and opacity settings can swap clothing styles and even adjust cuts of clothing.
Any visual style can be realized with the aid of spline curves, line styles, gradient fills, and vector layers.
New tools make it easy to animate sketches, line art, fashion designs, and photo-realistic images.
Vector graphics of various formats can be converted to SVG and imported into CTA for a variety of applications.
Color Grouping Tool automatically bands together elements and defines associated vector groups for effortless color change.
Smart Content Manager
Smart Content Manager lets users one-click install purchased assets and free designer resources. Artists can easily search and browse personal works, upload, and download items to and from their online inventory.
Based out of Los Angeles, California, Taiyaki Studios builds avatars and avatar collections optimized for TikTok and YouTube content creation. They help to build a community of virtual creators across all genres and platforms by educating, collaborating, and honoring their hard work. Whether you’re using Unreal Engine, Unity, or have never touched 3D at all, Taiyaki Studios can help you learn and grow in the world of virtual production and audience building.
Taiyaki Studios also partners with select creators to help them build and design custom avatars and universes with their staff of highly skilled 3D artists, animators and tech wizards.
In July of 2022, Cory Williams, Unreal Engine Technical Artist at Taiyaki Studios, animated one of his favourite childhood toys, He-Man, in an animated short film using photogrammetry, clever rigging, motion capture, and Unreal Engine.
His first video proved to be a wondrous success with audiences, who were amazed at the smooth animated results delivered by a renowned plastic figurine.
Both Mr. Yaki and Virtual He-Man were created by Cory, with voice acting, cinematography, and animation all performed by him using an Xsens MVN Link motion capture suit, Manus Prime II gloves, an iPhone X (Apple ARKit), iClone 8, and Unreal Engine 5 (UE5).
After the introduction of Reallusion’s free AccuRIG tool, Cory decided to one-up himself by introducing a second well-known character – but this time using a much faster and easier process.
In the end, He-Man battles his long-time nemesis Skeletor in an epic dance-off that was possible thanks to Blender, AccuRIG, Character Creator, iClone, and Unreal Engine 5. Enjoy!
Now, the making of He-Man VS Skeletor might sound like a daunting task to many… and rightfully so. But Cory has been able to distill the work into an easy-to-follow process that involves a collection of new and powerful tools.
His first step was to take many pictures of his characters with an iPhone, in a process known as photogrammetry. But Cory was able to enhance that step by purchasing a portable scanner, the Revopoint MINI, which he used to scan his Xsens character as a test prior to scanning Skeletor.
Next, Cory used his new secret weapon, ActorCore’s free AccuRIG tool, to import the scanned FBX character and automatically rig it for full-body and finger animation. This process takes about 15 minutes and works with all kinds of static poses and well-known character rigs. The result can be exported as FBX, and you can even test for and correct mesh deformations caused by motion stretching.
Then Cory used Character Creator 4 to import his newly AccuRIG-rigged character and to test and add custom dance animations captured with his mocap suit, including individual finger tests, which AccuRIG fully supports. Inside Character Creator you can even set up specific characterizations for non-standard characters, allowing them to work with any motion you capture through your mocap suit.
Once the characters are ready, Cory starts production by recording all the voices and animations for each character. Amazingly, he does this all by himself, which gives him a mental picture of each character’s gestures, nuances, and performances.
When the custom animations are ready, they are brought into iClone 8 along with the custom-rigged characters. There is a reason iClone is used instead of going straight into Unreal Engine: offset motions are much easier to correct in iClone than in Unreal.
iClone is especially useful when you want a high-quality feel in your performances and need to make small edits, such as refining hand gestures within specific timeframes.
For example, you can easily adjust any facial expression or lip-sync on Skeletor (if you are not using a face mocap device). Or, if the Mr. Yaki character needs to look up because he is missing his mark on He-Man’s face, iClone lets you correct this by editing the specific motion tracks for the face, eyes, hands, and fingers.
Finally, Cory brings everything into Unreal Engine to set up his shots, creating a level sequence and synchronizing his audio waveforms, positions, animations, cameras, lighting, and any special effects. Done!