3D Short Film Kiha with iClone, ActorCore and Unreal Engine

Behind the Scenes of ‘KIHA’ – with iClone and Unreal Engine

Dom Fred – Director / Producer / Editor / 3D Animator

Dom Fred

Dom began his career in the television advertising industry, directing commercials with emerging digital and 3D production tools. Over the past 20 years, he has also worked on music videos and documentaries, catering to a global clientele.

Since his childhood, Dom has been a martial artist, mastering various disciplines such as Taekwondo, Aikido, and Kung-Fu. His exceptional Taekwondo skills, both in Olympic-style competition and in spectacular demonstration work, have highlighted his talent for stunt work and acrobatics from a young age.

In 2007, Dom established his own company, DOM ANIMATION STUDIO, specializing in production and post-production. With a unique touch, he incorporates 3D characters into real environments, integrates special effects (SFX), and creates martial arts and acrobatics action sequences for commercials.

In 2010, Dom directed a short action film utilizing camera tracking, motion capture, and 3D compositing techniques. Two years later, he produced and directed his first independent short films, which received several awards in Los Angeles, Texas, Las Vegas, and New York for Best Action Sequences, Best Action Director, and Best Short Film.

Introduction: Unveiling the Vision

“KIHA,” a captivating science fiction short film, stands as a testament to the boundless creativity and technical prowess of director and Maya 3D animator Dom Fred. In this quick behind-the-scenes look, the magic unfolds through the seamless integration of iClone, where every detail comes to life with precision and finesse.

Character Creation: From Concept to Reality

The journey of “KIHA” begins with the intricate design of its characters, meticulously crafted using the powerful tools within Character Creator. Here, Dom Fred finds a canvas to breathe life into his vision, ensuring that each character embodies the essence of the story with unparalleled detail and authenticity.

Motion Capture: Breathing Life into the Narrative

Enter ActorCore, a treasure trove of high-quality 3D animations that infuse “KIHA” with a sense of realism and authenticity. With a diverse array of motion captures at his disposal, Dom Fred navigates the nuances of performance, seamlessly integrating them into the fabric of his narrative to evoke emotions and captivate audiences.

Mastering Animation: The Power of iClone 8

Central to the fluidity and technical precision of “KIHA” lies the formidable power of iClone. Dom Fred harnesses the powerful iClone 3D motion editing tools at his disposal to imbue his characters with lifelike movements and reactions, ensuring that every frame resonates with depth and emotion. With precise control over animation keys, Dom navigates the intricacies of action scenes with finesse, sculpting moments of tension and intrigue that keep audiences on the edge of their seats.

Real-Time Production: Bridging the Gap with Unreal Engine

As the narrative of “KIHA” unfolds, the seamless integration of iClone and Unreal Engine emerges as a pivotal asset in the director’s arsenal. Through iClone to Unreal LIVE LINK, Dom Fred navigates the dynamic landscape of real-time production, fine-tuning the placement of characters, settings, and camera movements with unparalleled precision. This synergy between iClone and Unreal Engine empowers the director to sculpt his vision with unrivaled flexibility, ensuring that every frame resonates with cinematic brilliance.

The Grand Finale: Lighting, Composition, and Rendering

As the pieces of “KIHA” fall into place, the stage is set for the grand finale on Unreal Engine. Here, Dom Fred meticulously orchestrates the composition, lighting, and rendering, elevating the visual splendor of his creation to new heights. With Adobe Premiere Pro 2024 serving as the final canvas for post-production, Dom Fred weaves together the threads of his narrative with precision and finesse, culminating in a cinematic masterpiece that transcends boundaries and captivates the imagination.

Conclusion: A Vision Transformed

In the realm of digital filmmaking, “KIHA” stands as a beacon of innovation and creativity, a testament to the limitless possibilities of technology and imagination. Through the collaborative synergy of iClone, Character Creator, and Unreal Engine, director Dom Fred breathes life into his vision, transforming dreams into reality with every frame. As “KIHA” takes flight, it invites audiences on a mesmerizing journey through the depths of the human experience, where the lines between reality and fantasy blur, and the essence of storytelling shines bright.

Follow Dom Fred:

LinkedIn:
https://www.linkedin.com/in/freddy-lounana-41015991/
https://www.linkedin.com/in/dom-fred-films-81904194/

YouTube:
https://www.youtube.com/channel/UCaEVZPzkWUsg9Wti9Cao4Vw

Facebook:
https://www.facebook.com/profile.php?id=100063571358642

Vimeo:
https://vimeo.com/823861782

What is Good Topology?

Mesh topology. Probably the most important yet least utilized and discussed topic among those new to 3D modeling and animation. Why? Because it can be hard to master and implement for many users. This is one of those skills most of us can learn but not all of us will be good at it. Many will go about their 3D journey without considering good topology and its effect on the movement of the mesh.

Clean, good topology means the underlying mesh of the model is evenly distributed with consideration given to areas that bend a lot like the eyes, shoulders, and elbows. In some cases, these areas will consist of denser mesh such as armpits or near the mouth.

Even though I have been in 3D for decades, topology was one of those areas that I knew about from a working perspective but had to research to be able to explain it better. This article depends heavily on that research to explain some facts about mesh topology that you may not have considered before.

Exploring Good Topology: Understanding Different Types of Topology

TRIANGLES (TRIS)

In a 3D mesh, triangles are the fundamental building blocks that make up the surface of a 3D object. A 3D mesh is a collection of vertices, edges, and faces that define the shape of an object in three-dimensional space. Triangles are the simplest type of polygon, consisting of three vertices and three edges.

Triangles make up many models, particularly legacy models. While tris can build just about anything, they can also collide and collapse, causing the mesh to buckle unnaturally or stretch vertices in a way that is noticeable because the texture stretches with them.

Key points about triangles in meshes:

  • Basic Unit: A triangle is the smallest unit of a 3D mesh. It is defined by three vertices and three edges.
  • Rendering Primitives: Triangles are commonly used to render primitives in graphics pipelines. Many rendering engines and hardware are optimized for processing and rasterizing triangles.
  • Simplicity and Efficiency: Triangles are computationally efficient and simple to work with, both in terms of mathematical calculations and rendering algorithms.
  • Interpolation: Triangles allow for easy interpolation of attributes such as color, texture coordinates, and Normals across their surfaces. This is crucial for achieving smooth shading and realistic rendering effects.
  • Deformation and Animation: Triangles play a key role in character animation and deformation. They define how the surface of a 3D model deforms as it moves, providing flexibility for realistic animations.
  • Smoothing and Normals: Normals (vectors perpendicular to the surface) are often calculated at each vertex of a triangle to achieve smooth shading. This helps simulate lighting effects and create a more realistic appearance (see the sketch below).
Triangles (Tris) based game character.
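
To make the Normals point above concrete, here is a minimal Python sketch (assuming NumPy is installed) that computes the unit normal of a single triangle from its three vertices; the example points are hypothetical.

```python
# Minimal sketch: the face normal of one triangle via the cross product.
# Example vertices are hypothetical; any three non-collinear points work.
import numpy as np

def triangle_normal(a, b, c):
    """Return the unit-length normal of triangle (a, b, c)."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    n = np.cross(b - a, c - a)      # perpendicular to the triangle's plane
    return n / np.linalg.norm(n)    # normalize to unit length

print(triangle_normal([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # [0. 0. 1.]
```

Per-vertex normals for smooth shading are then typically averages of the face normals of all triangles sharing that vertex.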

QUADS

A quad is a polygon with four sides (edges) and four vertices. Quads are often used in 3D meshes to represent planar surfaces or faces of 3D objects. They are an alternative to triangles. Quads have some advantages and are commonly used in certain modeling scenarios.

Key points about quads in 3D meshes:

  • Planar Surfaces: Quads are well-suited for representing planar surfaces, as they form a flat, four-sided shape.
  • Modeling Convenience: During the modeling process, especially when creating surfaces that align with a grid or require a regular structure, quads can be more convenient and easier to work with than triangles.
  • Edge Loops: Quads are often used to create smooth edge loops in character modeling. Edge loops are sequences of connected edges that follow the natural contours of a character, helping with deformation during animation.
  • Deformation and Animation: Quads can deform more predictably than triangles in certain situations, especially when modeling characters or objects that need to bend or deform smoothly.
  • Subdivision Surfaces: Quads are commonly used in subdivision surface modeling. Subdivision surfaces involve iteratively subdividing the faces of a mesh to create smoother surfaces, and quads play a key role in this process.
  • Smoother Shading: In some cases, quads can contribute to smoother shading when rendering, especially if the geometry is planar and the lighting conditions are appropriate.
  • UV Mapping: Quads can simplify UV mapping, the process of applying 2D textures to 3D surfaces. UV mapping is often easier with quads compared to triangles.
Quad Topology ZBrush Demo Soldier
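
A side note on how quads reach the screen: GPUs rasterize triangles, so quad meshes are triangulated at render time. Here is a minimal sketch of the idea, with hypothetical vertex indices.

```python
# Minimal sketch: splitting a quad into two triangles along one diagonal.
# Real engines may pick the diagonal more carefully (e.g. the shorter one)
# to reduce shading artifacts on non-planar quads.
def quad_to_tris(quad):
    """Split quad (v0, v1, v2, v3) into two triangles sharing diagonal v0-v2."""
    v0, v1, v2, v3 = quad
    return [(v0, v1, v2), (v0, v2, v3)]

print(quad_to_tris((10, 11, 12, 13)))  # [(10, 11, 12), (10, 12, 13)]
```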

DECIMATED (OPTIMIZED) MESH

Decimation involves reducing the number of polygons in a 3D model while attempting to preserve its overall shape and appearance. The goal is to simplify the geometry by removing unnecessary detail, which can be useful for optimizing the model’s performance in terms of rendering, animation, or real-time applications.

The process of decimation typically involves the following steps:

  • Polygon Reduction: Decimation algorithms analyze the geometry of the mesh and selectively remove vertices, edges, and faces while attempting to retain the essential features of the model.
  • Preservation of Important Details: Advanced decimation algorithms aim to preserve critical details and features of the original model, such as edges, contours, and surface characteristics, to maintain the model’s visual fidelity as much as possible.
  • Simplification for Performance: The primary motivation for decimating a mesh is often to improve performance in real-time applications or when dealing with large and complex scenes. By reducing the polygon count, the model becomes less computationally intensive to render or animate.
  • UV Mapping Considerations: Some decimation tools take into account the UV mapping of the original model to ensure that texture coordinates are preserved, minimizing the need for retexturing after decimation.
  • Adjustable Parameters: Decimation tools often provide adjustable parameters that allow the user to control the level of simplification, balancing between reducing polygon count and preserving details.
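
For readers who want to experiment with decimation outside a GUI, here is a minimal sketch using the open-source Open3D library; the file names are hypothetical, and tools like CC4's Optimize and Decimate expose the same idea through adjustable parameters.

```python
# Minimal sketch: polygon reduction with quadric-error decimation.
# Assumes the open3d package is installed and "model.obj" exists on disk.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("model.obj")
print("before:", len(mesh.triangles), "triangles")

# Collapse edges while trying to preserve the overall shape.
lod = mesh.simplify_quadric_decimation(target_number_of_triangles=7000)
print("after:", len(lod.triangles), "triangles")

o3d.io.write_triangle_mesh("model_lod1.obj", lod)
```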

RE-MESHING VERSUS RE-TOPOLOGY

The primary goal of re-meshing is to alter the overall geometry and structure of a mesh, often by changing the distribution of polygons or altering the topology. Re-topology involves creating a new surface for an existing 3D model, often with the goal of improving the topology for better deformation, animation, or UV mapping.

WHY DO WE SAY CHARACTER CREATOR 4 HAS GOOD TOPOLOGY?

  • Clean and Efficient Geometry:
    Character Creator 4 mesh topology has clean, efficient geometry with as few polygons as necessary to accurately represent the character’s shape, avoiding unnecessary detail in areas that don’t require it.
  • Evenly Distributed Geometry:
    Character Creator 4 mesh also relies heavily on evenly distributed geometry that maintains a consistent polygon density across the model to ensure smooth deformations during animation.
  • Consistent Normals:
    The consistent Normals of the CC4 character mesh avoid shading artifacts and other lighting issues.
  • Quads (Four-sided polygons):
    Character Creator’s four-sided quad topology allows for more predictable deformations than triangles and is generally easier to work with during the modeling process.
  • Topology for Animation:
    The CC4 mesh also takes into consideration the intended use of the model which in this case is smooth bipedal animation. CC4 topology deforms well during rigging and movement. Joints and areas of deformation have sufficient geometry to bend naturally without buckling or collapsing.
  • Edge Flow:
    CC4 character meshes also maintain a logical and consistent flow of edges that follows the natural contours of the object. Good edge flow is also essential for animation and deformation.
  • UV Mapping Considerations:
    The UV Mapping in Character Creator 4 meshes prevents seams from showing up in highly visible areas of the character. Normal maps are used for finer details to optimize performance.
  • Efficient Use of Detail:
    CC4 mesh focuses on adding detail where it matters most, such as the face or areas that will be prominently featured.

Character Creator 4 mesh topology checks off all the important points regarding quality character meshes so you can create a character with the confidence it can withstand the intense scrutiny of a camera closeup.

Learn more: CC3+ Base Model

CLOSING

Before I close, I want to also point out that using tris does not, by itself, mean bad topology. If they don’t collide or bunch up, tris are fine. So are less-than-optimal models that may not have perfect quads and contain mixed tris, as long as the polygons are distributed evenly enough to provide good vertex manipulation for animation and posing.

I repeat… whatever topology you use, even distribution is key. We are not seeking perfection but a workable mesh that won’t twist, collapse, or distort, allowing the animator to spend more time on the details of animation and less on problematic mesh.

And… as usual, Character Creator has your back when it comes to characters. CC4 takes care of topology by providing clean base meshes and industry-standard optimization and decimation. From smooth, front-line characters to highly decimated background crowd actors, CC4 does the work, while you do the storytelling.

MD McCallum – WarLord

Digital Artist MD “Mike” McCallum, aka WarLord, is a longtime iClone user. Having authored free tutorials for iClone in its early years and having been selected to write the iClone Beginners Guide for Packt Publishing in 2011, he was fortunate enough to meet and exchange tricks and tips with users from all over the world, and he loves to share this information with other users. He has authored hundreds of articles on iClone and digital art in general while reviewing some of the most popular software and hardware in the world. He has been published in many of the leading 3D online and print magazines while staying true to his biggest passion, 3D animation. For more information click here.

Stylized Lipsync Animation using iClone AccuLIPS & Face Puppet

Peter Haynes

Peter Haynes is a filmmaker based in New Zealand and a prior winner of the Epic MegaGrants award. He takes time out of his busy schedule to generously provide a quick tutorial on his process for enhancing lip-sync animation using Reallusion iClone. This article demonstrates his mastery of iClone AccuLIPS in the creation of expert lip-sync animation.

Effective ways to make stylized lip-sync

I believe that my mini-guide will prove useful, as existing resources focus on AccuLIPS and Face Puppet separately without much integration. I aim to emphasize the effective combination of these tools to achieve a more dynamic and dramatic character performance, moving beyond simple lip-syncing techniques.

iClone is just really simple. The pipeline between Character Creator 4, iClone 8, and Unreal is pretty seamless once you’ve done it a few times. 

iClone pipeline is pretty seamless for character animation

My workflow for lip-sync animation

1. Before I do anything else I like to enhance the AccuLIPS clip by going into the lip options and setting the smoothness to 1 and the strength to 1.2. I find this just tends to make the movements look a bit more natural.

Setting the smoothness and the strength for lipsync animation
Setting the talking style for lipsync animation

2. I then like to set what I call my “base emotion” for the clip, which can be done with an animation from the Digital Soul pack of pre-animated facial expressions, or by capturing my own expression from my iPhone via Motion LIVE for facial mocap.

Digital Soul pack of pre-animated facial expressions

3. After that, I make another pass, often with face keys, to enhance and fine-tune particular moments in the dialogue.

4. Then I’ll make several more passes with Face Puppet, adding more emotion at key points and generally moving the mouth in a more expressive way.

With iClone Face Puppet to add more facial expressions at key points

5. Finally, I add some head motion with Face Puppet, and then whatever body motion the character requires for the scene.

Learn More

Visit Peter Haynes: https://www.youtube.com/@EpicallyCasual/featured

iClone Lip-sync Animation: https://www.reallusion.com/iclone/lipsync-animation.html

Try iClone 30 days free for Lip-sync Animation: https://www.reallusion.com/iclone/download.html

The Making of RIOT 3D Motion Capture Series for Action Films

Hollywood-caliber mocap at the fingertips: ActorCore blazes the stage for mass protest productions

Orlando-based Monkey Chow Animation Studios presents Riot, released on ActorCore, the leading online 3D content store. As a highly anticipated theme, Riot is a motion capture content pack for drama and action scenes where crowds express anger, engage in disorderly conduct, and take part in intense confrontations.

Riot Brings a Spotlight to a Gripping Discourse

Embodying the spirit of Selma, Born on the 4th of July, The Hunger Games, and recent global flare-ups marked by widespread social and political unrest, Riot is a dynamic motion capture pack comprising 61 tumultuous sequences. Tipping the intensity scale are depictions of public protests and clashes with authorities, ranging from peaceful marches with placards to confrontations with law enforcement clad in riot gear. 

Riot encompasses a spectrum of actions, including hoisting signs, police presence, and engaging in physical altercations such as shoving and throwing debris. This diverse array of motions enables crowds of characters to effectively convey outrage, offering a level of speed and convenience that minimizes production time and costs compared to live-action shoots featuring a crowd of extras within an on-site set-piece.

Get Behind the Scenes to Witness the Dedication

Monkey Chow Animation Studios is a leading motion capture company headquartered in Orlando, Florida, with a catalog of notable motion packs, including  Run for Your Life and Bank Heist. Each animation within these packs is meticulously captured by skilled stunt performers in state-of-the-art facilities, showcasing a combination of expertise and creativity. Explore the behind-the-scenes footage to witness the dedication and innovation that goes into each production.

Monkey Chow Motion Capture Pulls Out All the Stops

In motion capture, it is crucial for cameras to capture as many of the performer’s bodysuit markers as possible. Consequently, the production team had to create specialized props for mocap to minimize occlusion. In one instance, a stand-in shield was crafted to replicate a police shield, featuring only the contact points required by the actors. This puts the emphasis on the performers’ imagination, as rendered objects may appear larger than they do on stage.

Riot needs to cover all the moves, from a peaceful protest to an all-out street war between demonstrators and police.  We put together a team of experienced stunt professionals, and they helped us block out a lot of the fight scenes.

Jeff Scheetz, Monkey Chow Animation Studios

Fast Crowd Generation Presented by ActorCore and iClone

Select from a diverse collection of over 400 fully-rigged, animatable 3D characters available in the ActorCore asset store to build an extensive crowd as ambient extras. Alternatively, users can easily “clone” their animated bystanders within iClone, forming a surge of angry citizens marching, waving, and shouting.

This pack is ideal for generating seamless crowd scenarios, as all walking motions are loopable. When integrated with iClone 8 Crowd Simulation, users can effortlessly spawn characters with diverse protester animations, rapidly assembling a sizable public demonstration scene. Merge motions, such as those found in Run for Your Life,  to create chaotic moments, facilitating the swift creation of a dramatic riot scene for use in games and films.

iClone Advances Motion Editing

The Riot motion pack encompasses a range of intense interactions, including throwing, pushing, fighting, making arrests, and using police shields to disperse crowds. Each of these actions demands adept handling of objects or involves human-to-human interactions made possible with iClone, the user-friendly software for easy editing and refining of animations. For enthusiasts, Reallusion offers complimentary mocap editing courses to assist users in mastering the creation of realistic interactions.

Immediate Riot Motions on Demand

Users have the flexibility to utilize these 61 motions independently or seamlessly integrate them with other motions using iClone’s advanced motion editing features. All motions come with a 100% royalty-free license and can be exported in FBX or BVH formats for integration into real-time game engines such as Unity, Unreal, CryEngine, Game Maker, and more. Why wait? Explore Riot motion pack now in the ActorCore 3D Store!

Creating Game Characters from ANY Mesh! – Neriverse

Welcome to Neriverso, where the captivating world of video games meets the expertise of Neri Neto, a seasoned technology journalist and skilled game programmer. Through Neriverso, Neri shares his profound insights and boundless passion for everything he adores.

As a dedicated game programmer, Neri has lent his talents to numerous indie projects, crafting engaging mechanics and contributing to the development of captivating titles. Embrace the Neriverse and embark on a journey fueled by Neri Neto’s unwavering enthusiasm and commitment to his craft and community.

In his latest video, Neri reviews Character Creator’s AI plugin Headshot 2.0, which helps game developers create advanced real-time 3D game characters from photos and 3D models.

With Auto and Pro modes, Headshot offers precise model fitting, texture baking, and full-body animation capabilities: one-click generation of low-res virtual heads with 3D hair, as well as high-resolution texture processing with extensive morph options and advanced tools for refinement.

Follow Neriverse:

YouTube:
https://youtu.be/m7pA17C0kvU

Instagram:
https://www.instagram.com/reel/C29wyEvOzU2

TikTok:
https://www.tiktok.com/@neriverso/video/7332207491628928262

Creating Genre-Based Actor Groups

The new Crowd Sim tool in iClone 8.4 has some powerful features and can create a crowd in minutes or less, but it is not the only timesaving, scene-enhancing tool that was recently released. I must admit that I, too, put this particular tool on the back burner until I could get more comfortable with the new features.

That was a mistake.

I had no idea just how much time custom Actor Groups can save while dragging, dropping, and randomizing them into a scene. The best part is that, being custom, these groups fit your needs, and over time you can build a library of custom Actor Groups for different situations like sporting events, concerts, street scenes, block parties, and military actions.

My first dive into Actor Groups was to watch Kai’s short, info-packed tutorial, Crowd Sim: Customizing Actor Groups. This video packs a lot of information into less than six minutes and covers how to create all three types of Actor Groups.

OPTIMIZE & DECIMATE CC4

When I first created a base, I was impressed with the process but wondered what I would do with it, until I understood it is just what it says… a base. It’s a starting point that can be used to create Presets that can hold multiple bases (via a drop-down menu) in one preset. Since this is an item I intend to use many times in a scene, I need to create variations of my soldier character.

Base Infantry Soldier variations via Character Creator 4 sliders before optimize and decimate.

I used my infantry soldier from the Marketplace to create three Levels of Detail for use in crowd generation. The base soldier, stripped of equipment like the harness, backpack, and front pouches, came in at just under 60K polygons. I created five variations of the original 60K character that differed in face, height, weight, and ethnicity, so the soldiers wouldn’t all be the same size or share the same appearance.

I left the battle rifle attached to the character and sent it all through the OPTIMIZE AND DECIMATE menu, choosing the 7K-polygon LOD 1 and 800-polygon LOD 2 for the tests. This gives me the option of using the standard character up close to the camera, LOD 1 at mid-range, and LOD 2 in the distance. The battle rifle was optimized along with the characters.
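
The distance-based usage just described can be expressed as a tiny rule. This Python sketch is only illustrative, with hypothetical distance thresholds that would depend on your scene scale and camera; it is not an iClone or CC4 setting.

```python
# Minimal sketch: choosing which LOD to place based on camera distance.
# Thresholds and names are hypothetical, not iClone/CC4 parameters.
def pick_lod(distance_m: float) -> str:
    if distance_m < 10.0:
        return "base_60k"   # full-detail soldier near the camera
    if distance_m < 40.0:
        return "lod1_7k"    # mid-range
    return "lod2_800"       # distant crowd filler

for d in (5, 25, 120):
    print(f"{d} m -> {pick_lod(d)}")
```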

CREATING THE BASE

The actual process of creating a base is very simple. Under the CREATE menu, you will find the CREATE ACTOR GROUP down at the bottom. Select this to open the Actor Group Settings popup which is the interface to create or reopen a Base, Random, or Preset actor group. Under the BASE selection, this popup automatically populates with the actors, motions, and props you drag into the workspace.

TIP: Make sure the Actor Group Settings popup is open or you will find yourself creating a scene instead of an Actor Group and will have to start over.

I’m embarrassed to say that I created a group or two, or at least thought I had when I actually hadn’t, because I had not opened the Actor Group Settings to record the group. After this little hiccup, I was creating gun crews, gun pits, and groups of soldiers.

All of them were saved as bases and then used in the creation of military genre Presets. The same can be done for sci-fi, fantasy, contemporary, and others that can, over time, build a huge library of drop-in crowds with control over their randomization.

NOTE: The video tutorial at the end of the article goes into detail about creating base, random, and preset genre-based groups.

Base Group

PRESET

This is where custom Actor Groups really show off their muscle. We can select previously created Actor Group Bases, including multiple bases as mentioned earlier, to deploy and randomize these custom groups. In my example, I have various Random groups and Base groups like a crewed Howitzer emplacement with security and manned sandbag barriers.

All that is required is to drop the preset into the workspace, select what you are deploying from the drop-down menu that pops up, and watch the magic happen. Below you will see the pop-up Actor Group Settings that create the Preset actor group by including the groups along with a pool of actors.

Preset using previously created Base Groups with an actor pool.

RANDOM

Great for random groupings of characters and props. The popup menu uses an Actor Pool and a Motion Pool to draw from when randomizing during deployment. The Group Structure is where you drag in characters and props.

Save the random group. It’s not all that flashy at this point because you must deploy the random group to the workspace before you see how it allows you to randomize items like characters and motions. When you deploy the actor group, iClone will provide a pop-up dialog for options.

Random Group
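
Conceptually, deploying a Random group is like sampling from the two pools. This small Python sketch, with made-up actor and motion names, mirrors what the randomizer does for you on deployment.

```python
# Minimal sketch of Random-group deployment: each spawned actor gets a
# random pick from the Actor Pool and the Motion Pool. Names are hypothetical.
import random

actor_pool = ["soldier_a", "soldier_b", "soldier_c"]
motion_pool = ["idle", "talk", "kneel", "scan_horizon"]

for _ in range(6):
    print(random.choice(actor_pool), "->", random.choice(motion_pool))
```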

TUTORIAL

I have provided the following tutorial that goes into much more detail about how to create each group and how to modify existing groups to have variations for additional base actor groups. This can all be done very quickly too. Once you get all your base groups created it goes even faster. What you see in the tutorial is far from all the things you can do with Actor Groups so don’t limit using them to things like standing, talking crowds.

CLOSING

In closing, I can say that I have only scratched the surface of what we will be able to do with custom actor groups. I vastly underestimated them when I first viewed Kai’s tutorial, but it didn’t take me long to realize how powerful it is to include props in the mix, along with accessories prepared through the Character Creator 4 Optimize and Decimate tool.

With volume and NavMesh crowds, walkway crowds, and actor groups we can make crowds for almost any situation, and we can save the crowds and the actor groups for future use.

While I’ve used a military example in this article, the same can be done for science fiction, sports, and other purposes. If it’s winter you can create skaters and a frozen pond, or in summer a youth sports league scene like a baseball dugout with players and coaches.

It will be very interesting in the future to see what the talented iClone user base will do with Actor Groups and other new tools like the Crowd Sim and Distribute Props tool.

If you haven’t looked into customizing Actor Groups, then you are missing out on a versatile and time-saving tool that provides the needed assets, large or small scale, for your next masterpiece.

MD McCallum – WarLord

Digital Artist MD “Mike” McCallum, aka WarLord, is a longtime iClone user. Having authored free tutorials for iClone in its early years and having been selected to write the iClone Beginners Guide for Packt Publishing in 2011, he was fortunate enough to meet and exchange tricks and tips with users from all over the world, and he loves to share this information with other users. He has authored hundreds of articles on iClone and digital art in general while reviewing some of the most popular software and hardware in the world. He has been published in many of the leading 3D online and print magazines while staying true to his biggest passion, 3D animation. For more information click here.

PavMike CC to ZBrush Workflow Tutorial (3/3): Posing & Animation

Introduction – Final Steps for 3D Printing or Animating Your ZBrush Characters

We’ve discussed getting a fully adorned character into Character Creator, but what if you want to 3D print a pose of your ZBrush character, or control the poses and facial expressions / lip sync of your character? In this article we’ll cover both topics, giving you even more fun and easy options to fully utilize your character creations.

If you want to pose out a character that doesn’t have CC body topology, check out this ZBrush and CC Posing workflow!

Part 1. High-res Posing for 3D Print from ZBrush

GoZ ReLink Files

We’ve already set up a naked body file that we can GoZ back-and-forth between ZBrush and Character Creator. We’ll start from that point, so open your ZBrush project file (.zpr) and your corresponding Character Creator file (.ccProject), then press the GoZ All button in ZBrush to relink your files. The dialog box in CC should default to Update; choose “Current Pose” and hit the Update button.

GoZ from CC to ZBrush to ReLink
Update Options for the ReLinked body in Character Creator

Setting Up Poseable Gear in ZBrush

Append your gear (clothing, accessories, weapons, etc…) as subtools to your body files in ZBrush. Make sure each gear subtool has a lower subdivision level. Remember, there’s a 300k poly / 600k tri limit in CC. Of course that doesn’t mean your ZBrush files have to be under 300k polys – for example, your helmet might be 10 million polygons, and that’s fine, as long as it’s 10 million polygons at SDiv 6 (for example) with a significantly lower polycount at SDiv 1. The reason for this is that when we GoZ, it sends our Subdivision Level 1 asset to Character Creator, where we can skin and animate our meshes. When we’re done, we can GoZ from Character Creator back to ZBrush, which will update our SDiv 1 subtool positions while all the details from our SDiv 6 for those subtools remain intact and available.

Body (green) and gear (red) subtools
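
To see why the SDiv 1 cage can stay light while SDiv 6 carries millions of polygons: each subdivision level roughly quadruples the quad count. A quick back-of-the-envelope Python sketch, using a hypothetical 10,000-quad base:

```python
# Minimal sketch: quad growth under subdivision (roughly x4 per level).
# A ~10k-quad SDiv 1 cage reaches ~10 million quads by SDiv 6.
def quads_at_level(sdiv1_quads: int, level: int) -> int:
    return sdiv1_quads * 4 ** (level - 1)

for lvl in range(1, 7):
    print(f"SDiv {lvl}: {quads_at_level(10_000, lvl):,} quads")
```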

GoZ Gear from ZBrush to Character Creator

Once you’ve got all your gear subtools set up, GoZ All from ZBrush to CC. In CC, the GoZ Options box will default to Custom and will make some basic selections for you. For your synced body files, it’ll be set to Update, and for the other objects, Create Cloth will be selected by default. Just like the previous article covering game-res accessories and clothing, for assets that are influenced by multiple joints (cloth-type items), you can keep the Action as Create Cloth. For more rigid accessories that can be attached to one joint, choose the Create Acc option. Don’t worry about getting this selection exactly right — you can always convert a clothing item to an accessory and vice versa in CC later if you need to.

Update Options for the ReLinked body and new gear from ZBrush in Character Creator

Setting up Clothing and Accessories

Again, we’ll utilize the same process we did with the game-res article. For accessories, select them, move them into place if necessary, attach them to the appropriate bone in the Attribute tab > Attach section, then set the pivot point. For clothing items, the weights were already transferred for us, so we can leave those alone. Also the same as the game-res process, if you have any clothing objects that need physics simulation, make sure they have UVs and a corresponding weight map to plug in, turn on rigid-body and soft-cloth simulation in the CC menu bar, turn on physics for that item, and finally modify any appropriate capsules for your character body that interact with the cloth asset. Remember to go to Edit > Project Settings and check Bake Animation under the Simulation section, so you can scrub the animation timeline later and still have your cloth react naturally.

Setting up Accessory assets
Setting up Physics for clothing items that need it, like a loin cloth

Applying Animation

Select the “CC3_Base_Plus” group in the Scene tab, and double click any animation in your Content tab to apply that to your character. In our case, you can see we’ve applied the “Saloon Door” animation. Play the animation in the Animation Player all the way through, setting it from Realtime to By Frame to allow our cloth to simulate appropriately. When it gets to the end of the animation, you can then scrub in the timeline to the pose that you want.

Applying an animation to our character

Modifying the Body Pose

While the pose from the animation is pretty good, we can fine-tune it to our exact needs. With your character selected, go to the Motion tab, and click on the Edit Pose button. There, you can use the dummy, in either IK or FK mode (see the tabs at the top of the Edit Pose dialog box) to move and rotate your joints as you see fit. Remember you can also navigate to the Poses and/or Gesture section of your Content tab and apply different body poses or hand positions to your character as well.

Edit Pose button
Edit Pose and Gestures

Modifying Facial Expressions

Let’s change the mood of our character! Close your Edit Pose window, and back in the Motion tab, click on the Edit Facial button. In the Muscle tab, you can click and drag on the different areas of the face to change your character’s expression; in the Expressions tab, you can apply an expression from the list of dropdown options; and in the Modify tab, you can use sliders to make exact changes to your character’s different face shapes. Use one or a combination of these to change your character’s face. When you’re done, simply close the Edit Facial window.

Edit Facial button
Edit Facial Muscle, Expression, and Modify tabs
D4_Angry facial expression applied to our character

Sending the Posed Assets Back to ZBrush

If you don’t need your pose applied to your subtools in ZBrush as layers (to toggle between different poses), simply select all the character assets you want to send back in the Scene tab, and hit the GoZ button in the Character Creator menu bar. The GoZ settings should default to Relink, and because our original mesh had the “Eyelash” and “Tear Ducts” separated from the base body, we’ll check those on. Make sure you choose Current Pose, then hit the GoZ button. This will update the SDiv1 vertex positions in ZBrush to match the CC assets, and your higher subdivision details will follow. To see all the highest subdivision levels, go to Tool > Subtool and click the All High button.

Select and GoZ

If you DO want to apply your pose as a layer, open the ZBrush Pose Tools plugin in ZBrush, click the Record New Pose button, then GoZ your character assets over from CC. When the changes have been updated in ZBrush, click Save New Record and name it. Do this as many times as you’d like, storing a library of poses as you go.

  • Note – if you didn’t separate out any Body meshes from our earlier step (eyelashes, tear ducts), you can use the Character Creator > Plugins > ZBrush Pose Link menu to send a pose over and have it recorded automatically on the ZBrush side!
  • For an in-depth explanation and examples of Pose Tools functionality, please visit this article!
Our high res model, posed in ZBrush!

Part 2. Bring Your ZBrush Character to Life with Facial & Body Animation and Mocap

Exporting a Character from CC4 to iClone

We’ll need to go into iClone in order to apply and modify mocap data. Getting our character from Character Creator to iClone is easy – simply go to File > Export > Send to iClone > All!

Sending our Character to iClone

Import Mocap Animation into iClone

I recorded some mocap clips in Rokoko that contain body and hand motion capture data, including actions that needed to make contact with the body (belly, head, clapping). You could make your own mocap and import it into iClone, or download animation files from the internet as well. All you need to do to get it into iClone is to drag your mocap file onto your character (you should see a yellow bounding box show up on your character as you do this). Import settings should pop up — for Motion Profile choose Motionbuilder, and for Motion T-Pose, navigate to your mocap file and select it, choose 30 FPS, then hit Convert All. Now you can press play to play back the animation.

Editing Your Motion Capture – Global Corrections

With the character selected, right-click on the animation in the Motion row of the timeline and choose Motion Modifier (or click the Motion Modifier button in the timeline menu bar). Press the Preview button, then press the spacebar to play your animation. Tune the sliders to taste as you watch the effect on your character in real time. In my case, the mocap head was facing down the entire clip, so I added some “Head Back” offset to make the animation more natural. Hit OK to apply your changes.

Motion Modifier
Default animation vs Head Back Motion Modifier

Fixing Penetration Issues

Since my proportions aren’t the same as our virtual character’s, my mocap might have issues, especially in scenarios where I need precise hand placement (since our arms are different relative lengths).

Goblin proportions vs human proportions
Edit Motion Layer

In the Modify panel, go to the Animation tab, and click on Edit Motion Layer. In the puppet window, select the IK controls you want to change to avoid interpenetration (like the hand position). Scrub a bit before the body part clips through the body and press the Set Key button. This will be the transition point from mocap data to our modified animation positions. Scrub to where the clipping occurs (where the hand is inside of the body, for example). Move and rotate the hand control so there is no longer interpenetration, and hit Set Key to add a keyframe. Scrub the timeline to a bit after the clipping fix, and hit the Reset button; this will transition from our modified pose back to the original mocap data. Do this for all the problem areas of your animation.

Original pose
Modified Motion Layer pose with no hand penetration

Installing and Launching AccuFACE

Although our mocap file didn’t have facial animation, we can still add it. To get started, install the AccuFACE plugin for iClone 8 from the Reallusion Hub, then launch the AccuFace application. We’re going to use the camera workflow in this example, but you can also bring in video and capture from that! In AccuFACE, select the CAMERA tab, and choose your camera in the Source dropdown menu. Since my camera is on a tripod, I’ll choose Static Cam for Tracking Mode.

Installing AccuFace
AccuFace Camera mode

Calibration

Click the Calibrate Facial Capture button and follow the directions to match your face to the expression descriptions – look at the camera with a neutral face, hit the Set Expression button, do a Brow Raise with your face, hit Set Expression, etc… until you’ve calibrated all four expressions. Close this window when you’re done.

Calibrating our face

Connecting to Motion Live

In iClone, select your character model and go to Plugins > Motion Live > Motion Live. Under Gear List, click the + button in the ‘Facial’ row, and select AccuFACE. Type the IP address shown in your AccuFACE application into Motion LIVE > Gear List > Connection, and click the hollow green circle to activate AccuFACE, making the circle solid! Next, click on the hazard sign under Character List > Face and select AccuFace.

Selecting AccuFACE for our characters face

Click the Preview button and press the Spacebar to preview your own expressions on the virtual character, or click the Record button and press the Spacebar to start recording facial capture to your body motion capture!

Using our face to add facial animation to our virtual character!

Read More

PavMike’s CC to ZBrush Workflow Tutorial (1/3): Base to Texturing

PavMike’s CC to ZBrush Workflow Tutorial (2/3): Cloth & Accessory

See More Tutorials in “ZBrush Master Class”

Expert Tips: Making the most of AccuFace iClone motion capture

Expert tips: Making the most of AccuFace Facial Mocap

John Blalock

John’s background and education are in the traditional visual effects pipeline, mainly as a Compositor. Over the years, he has worked at visual effects studios such as Rhythm & Hues, and at the game company Sony PlayStation.

While he enjoyed those studio experiences, he found a greater love in passing along what he had learned. He has taught in the college education sector in Southern California for over 12 years, including at the Academy of Art University.

He has been entranced by their possibilities and integration with other tools, such as iClone and Character Creator, ever since. John is co-founder of the YouTube channel AsArt and also runs a tutorial-based channel, JohnnyHowTo. He would love for you to stop by and join him on his adventures!

My mocap workflow with iClone AccuFace

One great feature of AccuFace is that it is integrated into iClone’s Motion LIVE system. This makes the interface intuitive and easy to use. Aside from wanting to spend some time understanding the considerations of video-based motion capture as a whole, everything felt like “business as usual”. Compared to other stand-alone capture systems I’ve used, this greatly simplifies the entire process to offer the type of creativity that brings the desired outcome in far less time.

Since iClone processes everything and applies it directly to the character, potentially incorporating face and body mocap simultaneously, you won’t encounter any of the pipeline concerns that might arise with other solutions. I used almost all the capture hardware I already had on hand: a webcam for video, an LED light to even out shadows, and a mini-tripod to keep the camera head-level. These items are relatively low-cost, so even if newly purchased, they won’t break the bank. However, there are some overall considerations, which I’ll share below.

Lighting matters

For AccuFace, I consistently used an LED light to ensure even lighting and minimize shadows, irrespective of the camera used — except when it comes to using something like Razer Kiyo, which has a built-in ring light. Shadows on the face can potentially impact tracking accuracy or consistency, so it’s essential to work toward eliminating them. Providing the best “source material” (i.e., capture environment) is crucial, and proper lighting setup is a simple step we can take.

Beyond lighting, you’ll want the camera to be aligned directly with the face so that it can capture everything at a natural angle. If the camera is positioned too high or too low, it may not accurately capture details such as mouth movements or eye blinks in their proper proportion. If I’m using a static camera, I would typically attach it to a mini-tripod to elevate it. I often find myself utilizing the Reduce Tracking Interference > Head option in AccuFace, as otherwise, my brow movements tend to affect the entire head movement.

AccuFace Panel

On the iClone end, there wasn’t much new to consider, as I had previously used the iPhone Motion LIVE profile, and the interfaces are quite similar. To use the module, you activate it within Motion LIVE, assign it to the character, and then access some tabs where you can filter the incoming data if needed. Before you start tweaking settings like Strength, Denoise, and Smooth, I recommend performing a sample capture to see how things look. If the lighting and angle in the video are solid, you might not need to adjust anything else – but rest assured, those tabs are there for you if the need arises.

Tips on live-capture mocap for AccuFace

Most of what I mentioned above will apply to both live and pre-recorded video capture, with a few caveats here and there. When dealing with live capture, it becomes crucial to ensure that everything is set up ideally because, well… it’s being live-captured! The easy solution here is to conduct some trial runs before you record “for real” and observe how everything looks when applied to the character.

I found myself adjusting the strength of morphs several times and making other miscellaneous tweaks to optimize results. Since it’s easy to use a webcam with AccuFace, many of us might prefer to just “leave it where it is” on the monitor, but typically that angle is too high. Having the angle and lighting locked down will save you from a lot of manual fixing later on — or having to start all over again — so it’s definitely worth the setup time.

I also found that accessing my webcam’s advanced settings was very useful, although the location of this option may vary depending on the webcam model. For me, unchecking the “low light compensation” setting improved video capture. While this setting enhances video clarity in low-light situations, it tends to make the footage appear more “smeared” in a way I typically associate with webcam footage. If the lighting is sufficient, this setting shouldn’t be necessary, which underscores the importance of using an LED light or a similar solution.

The Audio for Viseme option is tremendously helpful and, at least for me, is something I will always use with dialogue. This is because our capture is integrated with iClone AccuLips for lipsync animation. It provides more defined mouth movements for speech, and its effect on the animation can always be dialed down in iClone if you feel it’s too pronounced. It’s also important to take the time to perform the Calibrate Facial Capture poses: the enhanced results are more than worth the 10 to 30 seconds it will take you to go through them!

Tips on using pre-recorded video with AccuFace

I appreciate that we can use pre-recorded video in AccuFace, as it opens up plenty of options for us. In one respect, it can be a little trickier to “lock down” the settings, as you might not know what the footage looks like until you view it on the PC after the fact. The tiny displays on many cameras can make it challenging to accurately assess how things look, and depending on the lighting in the shooting location, you might not even trust the preview display.

Phone-based video review works better since you have its larger screen, but ideally (in almost all cases), it would be useful to have something like a laptop on hand, where you can load some sample clips and see how they look on a larger display. This practice is pretty common for film shoots as well, so I don’t really consider it a limitation as much as a “best-practice” precaution.

As an example of checking settings: when I used a GoPro for helmet-mounted captures, I loaded a couple of sample clips onto the PC before committing to “real” takes. It was a bit of a pain since I had to maneuver the card out of the helmet rig, but it turned out to be worth the effort. The initial footage was a bit too dark and grainy/noisy, which might have worked fine but was less than ideal for feature tracking.

After making a few adjustments on the camera, things looked much better, and I could continue with a greater sense of confidence in the expected results. As a final but important consideration: don’t forget to record your poses for the Calibration step! It can be fairly easy to forget, but it’s worth doing before you start recording “for real”.

Helmet mocap comparison: Live Face vs AccuFace + GoPro

I’ve observed discussions among iClone users regarding their preferences between iPhone LIVE Face and AccuFace solutions. When it comes to helmet camera options, the choice depends on what you have access to. I personally use a relatively ancient GoPro Hero Silver, which, being ultra-lightweight, makes it an excellent candidate for a helmet-mounted camera. This camera is employed in various professional helmet setups.

An iPhone equipped with a TrueDepth sensor is also a viable option, as seen in the projects of companies like Epic Games. Both devices can deliver great results, but I find the “GoPro attached to a helmet” solution much more comfortable than having an iPhone strapped and suspended in front of my head. The weight of an iPhone becomes noticeable, and I found myself adjusting my performance to compensate — not ideal. Using a lightweight camera with AccuFace alleviates this issue and provides a significant advantage for AccuFace.

On the flip side, I see two main upsides to using iPhone Live Face over camera recording. At least in my scenario, one is synchronization. Unless you have a lightweight “action camera” that can serve as a wireless webcam, you’re going to be stuck trying to figure out where the body mocap lines up with the face performance afterwards.

The other main consideration is lighting, and since TrueDepth-enabled iPhones use a depth sensor, they aren’t affected as much by lighting changes. The AccuFace solution here is a camera-mounted light, which is possible but requires extra effort to get implemented (and can add more weight). I hope what I’ve written above gives you some ideas and insights to enjoy the new tools!

Learn More

About iClone AccuFace Mocap Profile

About iClone Facial Animation

Make believable Lipsync Animation

PavMike CC to ZBrush Workflow Tutorial (2/3): Cloth & Accessory

Create Accessories in ZBrush for Your Animated Characters

Intro – Create Accessories in ZBrush for Your Animated Characters

We can’t leave our character naked, so let’s talk about accessories and clothing. Figure out what kinds of things your character needs — horns, loincloths, pants, boots, wraps, weapons, etc… and we’ll discuss in the following sections how to create them in ZBrush, texture them, get them into Character Creator, and attach them to your character.

High res clothing and accessories in ZBrush

Part 1- Create Cloth and Accessory in ZBrush

Modeling High Res Clothing & Accessories

The in-depth creation techniques for the clothing and accessories shown here go beyond the scope of this article. However, please watch this livestream for more information on how to create high-res assets in ZBrush for use in Character Creator.

Export High & Low Meshes

Once you have your high-res meshes modeled, export them as an .fbx, with “<object_name>_high” as the subtool name, so you can bake later in Substance 3D Painter using namespaces. Create a corresponding low-res for each high-res asset, and again, export your low-res as an .fbx. Don’t forget to UV your assets, and name each object with “<object_name>_low” to match the naming of your high-res .fbx file. Keep in mind there’s a 300k quads (600k tris) limit per character in Character Creator — that’s a lot of triangles, so you shouldn’t have to go crazy with optimization, but optimize it enough to keep your character under that poly budget!

Corresponding _high and _low meshes
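
Because Substance 3D Painter matches bake pairs by name, a mismatched suffix silently breaks the bake. Here is a minimal Python sketch, with hypothetical asset names, of checking that every _low mesh has a _high partner before export:

```python
# Minimal sketch: verify _low/_high naming pairs before baking by name.
# The mesh names below are hypothetical examples.
low_meshes = {"helmet_low", "boots_low", "loincloth_low"}
high_meshes = {"helmet_high", "boots_high", "loincloth_high"}

for name in sorted(low_meshes):
    base = name.removesuffix("_low")
    ok = f"{base}_high" in high_meshes
    print(f"{name}: {'ok' if ok else 'MISSING matching _high mesh'}")
```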

Part 2- Texturing Armor, Clothing, & Accessories in Substance Painter

Baking Accessories and Clothing

Just like we did with our body, we’ll launch Substance 3D Painter, go to File > New, select your “_low.fbx”, then go to the Texture Set Settings tab and click Bake Mesh Maps. Load up your high-res mesh, remember to change your Material ID to “Vertex Color” so we can bake our polypaint to selectable masks, and press the Bake selected textures button. When your objects are done baking, press Return to painting mode.

Baking _high to _low

Texturing Accessories and Clothes

Just like we did with our body, drag and drop materials from the material library, create your own materials using fill layers, and mix, match, and blend materials together using masks. Add generators like dirt and edge wear to do some visual storytelling, and splash on some blood spatter and mud. The goal is to make your accessories and clothing look like they’re made of the correct type of material, and worn and “lived in” enough to convince your audience this is a real character.

Texturing our accessories and clothing

Exporting Textures

This is pretty simple — go to File > Export textures, choose “PBR Metallic Roughness” for your Output template, make your Output Directory wherever makes sense for your textures to be exported to, and hit the Export button.

Part 3- Importing Clothing & Accessories from ZBrush to CC

Importing Clothes & Accessories into Character Creator

Now that we have our clothing and accessories textured, let’s get them into Character Creator. In CC, go to Create > Accessory, and import your low-res file. When it asks if you’d like to merge all, choose Cancel. You’ll see the accessories don’t fit the character perfectly; select them all in the Scene tab, press W on your keyboard to bring up the gizmo, and move your accessories onto your character so they’re placed appropriately.

Moving imported accessories into place

Converting Accessories into Clothing

We’re going to skip objects that could be bound to a single joint, like armor, pouches, backpacks, and weapons for now. Instead, select an accessory mesh that will bend with multiple joints across the body, for example: shirts, pants, or boots. Press the Transfer Skin Weights button in the Modify panel Attributes tab.

Transfer Skin Weights
Choosing the skin template

Choose the appropriate template for the type of asset. If it’s a shirt or pants, choose “Default”; if it’s gloves or boots, choose the appropriate template. Do this for the rest of your clothing type assets.

Binding Accessories to the Correct Joints

After we’ve converted the appropriate objects to clothing, all we have left are objects that can be “rigid bound”. In other words, objects like helmets or shoulder pads that can rotate along with a single joint, NOT deform across multiple joints. You don’t want your metal helmet bending and stretching when your character’s head moves. For the remaining accessories (again, objects attached to just one joint), select the one you want to modify, then scroll down toward the bottom of the Attributes tab and use Pick Parent and/or Attach to: to update the bones associated with your accessory.

Attaching Accessories to joints and setting pivots

Setting Your Accessory’s Pivot Point

After attaching, continue scrolling down the Attributes tab and use the Quick Set panel to quickly set your accessory’s pivot, and/or use the Edit Pivot button to fine-tune pivot placement for your accessory.

Materials

In Substance Painter, we exported our textures using the PBR metallic/roughness preset, giving us base color, metallic, normal, and roughness textures. In Character Creator, select an accessory, go to the Material tab, switch the Shader Type to PBR, and double click the preview swatches to load in all of your textures. For your base color, you may have to increase the Strength of the Base Color from 80 to 100, and for your normal map, you may have to Flip the Y channel to get it to appear correctly.

Plugging in textures to our materials
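
If you would rather fix the texture itself than rely on CC's Flip Y toggle, inverting the green channel does the same job. A minimal Python sketch, assuming Pillow is installed and using a hypothetical file name:

```python
# Minimal sketch: flip the Y (green) channel of a normal map, converting
# between OpenGL- and DirectX-style conventions. File names are hypothetical.
from PIL import Image, ImageChops

img = Image.open("armor_normal.png").convert("RGB")
r, g, b = img.split()
flipped = Image.merge("RGB", (r, ImageChops.invert(g), b))
flipped.save("armor_normal_flipY.png")
```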

Cloth Simulation

If you have clothing that needs to swish and flip around while the character moves, we’ll need to do some extra steps. Select your cloth object, and in the Menu Bar, turn on Rigid Body Simulation and Soft Cloth Simulation. Then, go to the Physics tab, and turn on Activate Physics. While still in the Physics tab, plug in a weight map, where the black areas of the map are parts of your mesh you want skinned to the body weights, and the white areas are where you want physics to take over in regards to how your mesh reacts to movement.

Turning on Rigid Body and Soft Cloth simulation and activating physics for our loincloth
The black areas of your weight map are skinned, white are physics driven
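
Weight maps are just grayscale textures, so you can author a simple one procedurally. A minimal Pillow sketch (hypothetical file name) that paints a vertical gradient: black at the top where the cloth stays skinned, white at the hem where physics takes over.

```python
# Minimal sketch: a vertical-gradient weight map. Black rows (top) stay
# skinned to the body; white rows (bottom) become fully physics-driven.
from PIL import Image

size = 512
weight_map = Image.new("L", (size, size))
for y in range(size):
    value = int(255 * y / (size - 1))             # 0 at top -> 255 at bottom
    weight_map.paste(value, (0, y, size, y + 1))  # fill the whole row
weight_map.save("loincloth_weightmap.png")
```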

Collision Capsules

The physics-driven parts of our meshes won’t actually collide with the body, accessory, or clothing assets in Character Creator. Instead, they will collide with invisible capsules. To adjust these, turn off your cloth object, select your “CC3_Base_Plus” group in your Scene tab, and press the Collision Shape button. In the Character Collision Shape Editor, make sure Collision On is checked, then click on the various parts of the dummy to modify, delete, or add capsules to your character body.

In the case of the loincloth, you’ll see we only need capsules turned on for the thighs, so we just need to activate those capsules, and modify them so they’re representative of our character’s thigh volumes.

Accessing Collision Shapes
Deactivating capsules we don’t need, and modifying the ones we do

Testing Your Physics

In the Animation Player, click on the motion button and choose a walk or dance animation, where you can see your now physics-enhanced asset interacting realistically with your character. To ensure the cloth is being simulated correctly, you can switch the Animation Player from Realtime to By Frame.

Testing our cloth with an animation
Caching our cloth simulation

If you want to scrub the simulation, go to Edit > Project Settings, and enable Bake Animation. That way, when you do your initial playthrough of your character’s animation, it will cache the physics movement, allowing you to scrub your animation with the cloth behaving as expected.

Saving Clothing and Accessories

It’s often useful to re-use clothing and accessories for other characters. In CC, select the asset you want to save (shirt, shoes, etc…), navigate to the Custom tab in the Content browser, and go to the folder where you expect the asset to be located (Accessory > Head, for example, or Cloth > Pants). Then simply hit the Save button. The resulting dialog box might choose the correct asset type, but if not, simply choose it from the dropdown menu. Do this for all the clothing and accessories for your character that you might want to share later.

Saving our asset as a reusable custom asset
Naming and ensuring asset location

About The Author

Michael Pavlovich earned a Bachelor’s degree in Computer Animation from RSAD in 2005. Initially, he contributed to the development of environment and character art for popular video games such as Madden and NCAA Football. Later, he relocated to Austin to join Daybreak Games, where he worked on the creation of art assets for DC Universe Online.

Presently, Michael holds the position of Director of Character, Weapon, and Vehicle Art at Certain Affinity. His expertise lies in implementing iterative pipelines for Certain Affinity artists, helping develop renowned video game franchises, including Halo, Call of Duty, and DOOM. To stay updated on his latest tutorial projects, you can visit Michael’s YouTube or ArtStation page.

Read More

PavMike’s CC to ZBrush Workflow Tutorial (1/3): Base to Texturing

PavMike CC to ZBrush Workflow Tutorial (3/3): Posing & Animation

See More Tutorials in “ZBrush Master Class”