Ballerina: One Architect’s CGI Fantasy made with Cinema 4D, Character Creator and iClone

This article is featured on Fox Renderfarm

Project “Ballerina” is a 30-second full CG animation, Kay John Yim’s first personal project to feature an animated photorealistic CG character staged within a grand Baroque rotunda lounge. The lead ballerina is mainly made with Character Creator, animated in iClone, and rendered with Redshift and Cinema4D.

Kay John Yim is a Chartered Architect at Spink Partners based in London. He has worked on a wide range of projects across the UK, Hong Kong, Saudi Arabia and Qatar, including property development and landscape design. His work has been featured on Maxon, Artstation, CG Record and 80.LV.

Yim’s growing passion for crafting unbuilt architecture with technology has gradually driven him to take on the role of a CGI artist, delivering visuals that serve not only as client presentations but also as a means of communication within the design and construction team. Since the 2021 COVID lockdown, he has challenged himself to take courses in CG disciplines beyond architecture, and has since won more than a dozen CG competitions.

This making-of tutorial article is a short version of “Ballerina: A CGI Fantasy”, written by Kay John Yim. For the full version, please visit Fox Renderfarm News Center.

The Making of Ballerina

The animation is a representation of my inner struggles in all artistic pursuits, both metaphorically and literally.

Ballet, an art form widely known for its stringent standards of beauty and high susceptibility to public and self-criticism, is a metaphor for my daily professional and artistic practice. As an architect by day, I work on architectural visualizations where every detail is scrutinized by my colleagues, senior architects and clients. As an artist by night, I work on personal CG projects, for which I iterate hundreds or even thousands of times to get the perfect composition and color scheme. No matter how proficient I become in my professional and artistic skills, the inner struggle never fades away.

Through months of trial and error, I have since learned a lot about efficient character animation and rendering. This article is an intermediate guide for indie artists like myself who want to take their CG art to the next level.

The guide is divided into 4 main parts:

  • The Architecture
  • The Character
  • The Animation
  • Rendering

The software I used includes:

  • Rhino
  • Moment of Inspiration 4 (MOI)
  • Cinema4D (C4D)
  • Redshift (RS)
  • Character Creator 3 (CC3)
  • iClone
  • ZBrush & ZWrap
  • XNormal
  • Marvelous Designer 11 (MD)
  • Houdini

1. THE ARCHITECTURE

My primary software for architectural modeling is Rhino.

There are many different ways to approach architectural modeling. Rhino’s main advantage over more popular DCCs like Cinema4D (C4D) or Houdini is its ability to handle very detailed curves in large quantities.

As an architect, every model I build starts with a curve, usually in the shape of a wall section, cornice or skirting profile, swept along another curve of a plan. Rhino’s command list might seem overwhelming at first, but I almost exclusively used a dozen of them to turn curves into 3D geometry (a scripted sketch of the sweep workflow follows the list):

  • Rebuild
  • Trim
  • Blend
  • Sweep
  • Extrude
  • Sweep 2 Rails
  • Flow Along Surface
  • Surface from Network of Curves
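
To make the sweep step concrete, here is a minimal sketch in Rhino’s Python editor, assuming a plan (rail) curve and a profile curve already exist in the scene; the prompts and variable names are illustrative only:

```python
# Minimal sketch of the curve-to-geometry workflow in Rhino Python.
# Assumes a plan curve and a profile curve already exist in the scene.
import rhinoscriptsyntax as rs

# Pick the plan (rail) curve and the wall/cornice profile curve.
rail = rs.GetObject("Select plan curve (rail)", rs.filter.curve)
profile = rs.GetObject("Select profile curve", rs.filter.curve)

if rail and profile:
    # Sweep the profile along the plan curve: the scripted
    # equivalent of Rhino's Sweep1 command.
    swept = rs.AddSweep1(rail, [profile])
    if swept:
        print("Created {} swept surface(s)".format(len(swept)))
```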

The key to architectural modeling is to always use reference wherever possible. I always have PureRef open in the bottom right corner of my screen to make sure I model in correct proportions and scale. My references usually include actual photos and architectural drawings.

For this particular project I used the Amalienburg Hunting Lounge in Munich as my primary reference for the architecture.

PureRef board for the project

While the architecture consisted of 3 parts – the rotunda, the hallway and the end wall – they were essentially the same module. Hence I initially modeled one wall module consisting of a mirror and a window, then duplicated and bent it along a circle to get the walls of the rotunda (a scripted sketch of the duplication step follows below).

Rhino modeling always begins with curves

Wall module duplicated and bent along a curve
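
As a hedged illustration of the duplication step, the following Rhino Python sketch polar-arrays a wall module around an assumed rotunda center; the module count and center point are assumptions, and the actual project bent the module along a curve rather than simply rotating copies:

```python
# Polar-array a wall module around the rotunda center.
# The center point and module count are illustrative assumptions.
import rhinoscriptsyntax as rs

module = rs.GetObject("Select wall module")
center = [0, 0, 0]   # assumed rotunda center at the origin
count = 16           # assumed number of wall modules

if module:
    for i in range(1, count):
        # Rotate a copy of the module around the vertical (Z) axis.
        rs.RotateObject(module, center, (360.0 / count) * i,
                        axis=[0, 0, 1], copy=True)
```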

The module was reused for both the hallway and the end wall to save time and (rendering) memory. Having built up a library of architectural profiles and ornaments over the past year, I was able to reuse and recycle profiles and ornaments for the modeling of the architecture.

Ornament modeling can be a daunting task, but with a couple of ornaments modeled, I simply duplicated and rearranged them geometrically to create unique shapes.

Rhino ornament placement

All the objects within Rhino were then assigned to different layers by material; this made material assignment a lot easier later on in C4D.

Assigning objects to layers by material

Note :

The best way to get familiar with Rhino navigation is to model small-scale objects. Simply Rhino has a great beginner’s series on modeling a teapot in Rhino. For anyone in a pinch, there are pre-built ornaments for purchase on 3D model stores like Textures.com; some ornament manufacturers also have free models available for download on Sketchfab and 3dsky.

Exporting from Rhino to C4D

Rhino is primarily a NURBS (Non-Uniform Rational B-Splines) modeler; although NURBS models are very accurate in representing curve and surface data, most render engines and DCCs do not support NURBS.

For this reason I exported the NURBS and meshes to .3dm and .FBX respectively, and used Moment of Inspiration (MOI) to convert the NURBS model to a mesh.

MOI has the best NURBS-to-quad-mesh conversion (better than Rhino or any other DCC I have tried) – it always produces a clean mesh that can easily be edited or UV-mapped for rendering.

Exporting from MOI

Importing into C4D

Importing the FBX file into C4D was relatively straightforward, but there were a couple of things I paid attention to, notably the import settings, the model orientation and the file unit, listed below in order of operation (a scripted sketch follows the list):

  1. open up a new project in C4D (project unit in cm);
  2. merge FBX;
  3. check “Geometry” and “Material” in the merge panel;
  4. change the imported geometry’s orientation (P) by -90 degrees in the Y-axis;
  5. use script “AT Group All Materials” to automatically organize Rhino materials into different groups.
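
A minimal C4D Python sketch of steps 2 through 4, assuming a hypothetical file path and that the merged FBX root lands at the top of the Object Manager:

```python
# Merge an FBX into the active C4D document and fix the axis
# mismatch. The file path is hypothetical; the -90 degree pitch
# matches step 4 above.
import math
import c4d

doc = c4d.documents.GetActiveDocument()
path = "C:/project/ballerina_architecture.fbx"  # hypothetical path

# Steps 2-3: merge the FBX with geometry and materials.
c4d.documents.MergeDocument(
    doc, path,
    c4d.SCENEFILTER_OBJECTS | c4d.SCENEFILTER_MATERIALS)

# Step 4: rotate the imported root by -90 degrees in pitch (P).
# Assumes the merged root is the first object in the Object Manager.
obj = doc.GetFirstObject()
if obj:
    obj.SetRelRot(c4d.Vector(0, math.radians(-90), 0))

c4d.EventAdd()  # refresh the viewport
```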

Importing FBX exported from MOI

Importing FBX exported directly from Rhino

Since everything is symmetrical, I modeled half of the architecture in Rhino and then mirrored it as an instance in C4D.

C4D instance & mirroring

The floor (Versailles parquet tiles) was modeled with the photo-texturing method most widely touted by CG artist Ian Hubert: I applied a Versailles parquet tile photo as a texture on a plane, then sliced up the plane with the Knife tool to get reflection-roughness variations along the tile grouts. This also allowed me to add subtle color and dirt variations with Curvature in Redshift.

The floor tile was then placed under a Cloner to be duplicated and spanned over the entire floor (a plain-Python stand-in is sketched below).

Cloning floor tiles
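
For illustration, here is a plain-Python stand-in for the Cloner setup that grid-copies the sliced tile across the floor; the object name, tile size and counts are assumptions, and the actual project used a MoGraph Cloner:

```python
# Grid-copy a floor tile across the floor in C4D Python.
# A stand-in for the MoGraph Cloner used in the actual project.
import c4d

doc = c4d.documents.GetActiveDocument()
tile = doc.SearchObject("FloorTile")   # hypothetical object name
TILE_SIZE = 400.0                      # assumed tile width in cm

if tile:
    for x in range(10):
        for z in range(10):
            copy = tile.GetClone()
            copy.SetAbsPos(c4d.Vector(x * TILE_SIZE, 0, z * TILE_SIZE))
            doc.InsertObject(copy)
    c4d.EventAdd()
```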

Note :

C4D and Rhino use different Y and Z orientations, hence an FBX exported directly from Rhino has to be rotated in C4D.

Architectural Shading (Cinema4D + Redshift)

Since I had grouped all the meshes by material in advance, assigning materials was as simple as dragging and dropping them onto the material groups with cubic or tri-planar mapping. I used Textures.com, Greyscalegorilla’s EMC material pack and Quixel Megascans as base materials for all my shaders.

For ACES to work correctly within Redshift, every texture has to be manually assigned the correct color space in the RS Texture Node; generally, diffuse/albedo maps belong to “sRGB” while the rest (roughness, displacement, normal maps) belong to “Raw”. My architectural shaders were mostly a 50/50 mix of photo texture and “dirt” texture to give an extra hint of realism.
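
The sRGB/Raw rule above can be expressed as a small helper; the naming is purely illustrative, since the actual assignment happens manually in the RS Texture Node:

```python
# Express the ACES rule: diffuse/albedo maps are tagged sRGB,
# everything else (data maps) is tagged Raw.
def rs_color_space(texture_filename):
    """Return the color space a texture should use under ACES."""
    name = texture_filename.lower()
    srgb_hints = ("diffuse", "albedo", "basecolor", "base_color")
    if any(hint in name for hint in srgb_hints):
        return "sRGB"
    # Roughness, displacement, normal and utility maps carry data,
    # not color, so they stay linear.
    return "Raw"

assert rs_color_space("marble_albedo.png") == "sRGB"
assert rs_color_space("marble_roughness.png") == "Raw"
```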

2. THE CHARACTER

The base character was created in Character Creator 3 (CC3) with the Ultimate Morphs and SkinGen plugins – both of which are very artist-friendly, with self-explanatory parameters.

Ultimate Morphs provided precise slider control over the size of every bone and muscle of the character, while SkinGen offered a wide range of presets for skin color, skin texture detail and makeup. I also used CC3’s Hair Builder to apply a game-ready hair mesh to my character.

CC3 morphing & Hair Builder

Face Texturing

The face was one of the most important parts of the CG character and required extra attention. The best workflow I found for adding photorealistic detail was the “Killer workflow” using Texturing XYZ’s VFACE model and ZWrap.

VFACE is a collection of state-of-the-art photogrammetry-based human head models produced by Texturing XYZ; every VFACE comes with 16K photoscanned skin textures, displacement and utility maps. ZWrap is a ZBrush plugin that automatically fits a pre-existing topology to a custom model.

The “Killer workflow” essentially matches the VFACE mesh shape to the CC3 head model; once the two mesh shapes were matched up, I was able to bake all the VFACE details down to the CC3 head model.

My adaptation of the “Killer workflow” breaks down as follows:

  1. export T-posed character from CC3 to C4D;
  2. delete all polygons except the head of the CC3 character;
  3. export both CC3 head model and VFACE model to ZBrush;
  4. use the Move/Smooth brushes to maneuver the VFACE model to fit as closely as possible to the CC3 head model;
  5. launch ZWrap, then click and match as many points as possible, notably around the nose, eyes, mouth and ears;
  6. let ZWrap process the matched-up points;
  7. ZWrap should then output a VFACE model that matches the CC3 head model perfectly;
  8. feed both models into XNormal and bake the VFACE textures onto the CC3 head model.

Matching points of VFACE (left) & CC3 head (right) in ZWrap

Note :

The full “Killer Workflow” tutorial is on Texturing XYZ’s official YouTube channel: VFace – Getting started with Amy Ash. I recommend saving the matching points in ZWrap before processing. I also recommend baking the VFACE maps individually in XNormal, as they are very high-res and can crash XNormal when baked in batch.

Skin Shading (Cinema4D + Redshift)

Once I had the XYZ texture maps ready, I then exported the rest of the character texture maps from CC3. After that, I imported the character into C4D, and converted all the materials to Redshift materials.

At the time of writing, Redshift unfortunately did not yet support random walk SSS (a very realistic, physically accurate subsurface scattering model found in other renderers like Arnold), and hence required a lot more tweaking when it came to rendering skin.

The 3 levels of subsurface scattering were driven by a single diffuse texture with different “Color Correct” settings. The head shader was a mix of both the CC3 and VFACE textures; the VFACE multichannel displacement was blended with the “microskin” CC3 displacement map.

Character look-dev

A “Redshift Object” was applied to the character to enable displacement – only then would the VFACE displacement show up in renders.

Close-up render of the character

Hair Shading

Having experimented with grooming in C4D Ornatrix, Maya XGen and Houdini, I decided that using the baked hair mesh from CC3 for project “Ballerina” was leaps and bounds more efficient down the line. I used a Redshift “glass” material with the CC3 hair texture maps fed into the “reflection” and “refraction” color slots, as real-life hair reacts to light like tiny glass tubes.

Note :

For anyone interested in taking the CC3 hair to the next level of realism, CGcircuit has a great vellum tutorial dedicated to hair generation and simulation.

Early test of CC3 mesh hair to hair geometry conversion in Houdini

3. THE ANIMATION

Character Animation : iClone

I then exported the CC3 character to iClone for animation. I considered a couple of approaches to realistic character animation:

  1. using off-the-shelf mocap data (Mixamo, Reallusion ActorCore);
  2. commissioning a mocap studio to do bespoke mocap animation;
  3. using a mocap suit (e.g. Rokoko or Xsens) for custom mocap animation;
  4. old-school keyframing.

Having experimented with various off-the-shelf mocap data, I found Mixamo mocap to be way too generic, most of it looking very robotic; Reallusion ActorCore had some very realistic motions, but I could not find exactly what I needed for the project. With no budget and (my) very specific character motion requirements, options 2 and 3 were out of the picture. This left me with old-school keyframing.

First I screen-captured videos of ballet performances and laid them out frame by frame in PureRef. I then overlaid the PureRef reference (at half opacity) over iClone and adjusted every character joint to match my reference using “Edit Motion Layer”.

Pose 1

Pose 2

The animated characters were then exported to Alembic files.

Final character animation
Note :

While my final project concept depicted ballerinas in slow motion, my original idea was actually to keyframe a 20-second ballet dance, which I very quickly realized was a bad idea for a number of reasons:

  1. in slow motion a lot of frames can be interpolated, whereas real-time motion involves a lot of unique frames and hence requires a lot more tweaking;
  2. more unique frames subsequently meant more rendering problems (flickering, tessellation issues, etc.).

Early test render of my original idea.

Considering this was my first character animation project, I settled on a slow-motion style sequence instead – 2 unique poses with 160 frames of motion each.

Garment Simulation

Cloth simulation was by far the most challenging part of the project. The two major cloth solvers I considered were Marvelous Designer (MD) and Houdini Vellum.

While Houdini Vellum is much more versatile and reliable than Marvelous Designer, I personally found it way too slow, and therefore impractical without a farm (one frame of cloth simulation could take up to 3 minutes in Houdini Vellum versus 30 seconds in Marvelous Designer on a Threadripper PRO 3955WX with 128 GB of RAM).

Cloth simulation in MD, while generally a lot quicker to set up than Houdini Vellum, was not as straightforward as I imagined. Simulated garments in MD always came with some form of glitch, including cloth jittering, piercing through the character, or complete dislocations. Below are some of the settings I tweaked to minimize glitches:

  1. using “Tack” to attach parts of the garment to the character;
  2. increasing cloth “Density” and “Air Damping” to prevent the garment from moving too fast and ending up out of place;
  3. simulating parts of the garment in isolation – though not physically accurate, this allowed me to iterate and debug a lot quicker.

I also reduced “Gravity” in addition to the above tweaks to achieve a slow-motion look.

MD simulation

Note :

The official Marvelous Designer YouTube channel has a lot of garment modeling livestreams, which I find to be the most helpful resource for learning MD. Alternatively, there are a lot of ready-made 3D garments available online (notably on Marvelous Designer’s official site and the ArtStation Marketplace) which I have used as a basis for a lot of my projects.

MD is extremely prone to crashing, and there is also a bug in both MD10 and MD11 that prevents saving of simulated garments 90% of the time – so always export the simulated garment as Alembic files rather than relying on MD to save the simulation.

Simulation Clean-up

After dozens of simulations, I imported the MD-exported Alembic files into Houdini, where I did a lot of manual clean-up (a scripted sketch follows the list below), including:

  1. manually fixing collided cloth and character with “Soft Transform”;
  2. reducing simulation glitches with “Attribute Blur”;
  3. blending together preferable simulations from different Alembic files with “Time Blend”.
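
A hedged Houdini Python sketch of that clean-up chain, wiring an Alembic import through Attribute Blur, Soft Transform and Time Blend SOPs; the file path and node names are placeholders, and in practice the parameter values were dialed in manually per shot:

```python
# Build the garment clean-up SOP chain described above.
# File path and node names are placeholders.
import hou

geo = hou.node("/obj").createNode("geo", "garment_cleanup")

abc = geo.createNode("alembic", "md_import")
abc.parm("fileName").set("$HIP/sim/garment_v01.abc")  # placeholder path

blur = geo.createNode("attribblur", "reduce_glitches")
blur.setInput(0, abc)

soft = geo.createNode("softxform", "fix_collisions")
soft.setInput(0, blur)

blend = geo.createNode("timeblend", "blend_takes")
blend.setInput(0, soft)

blend.setDisplayFlag(True)
geo.layoutChildren()
```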

Cleaning up simulated cloth in Houdini with “Soft Transform”

The cleaned-up cloth simulation was then exported as Alembic to C4D.

Alternative to Garment Simulation

For anyone frustrated by impractical Houdini Vellum simulation times and MD glitches, an alternative is to literally attach the garment to the character’s skin in CC3 – a technique most commonly found in game production.

Attaching garment to character in CC3

Note :

See Reallusion’s official guide to creating game-ready garments here.

Garment Baking and Shading

Once I was done with the cloth simulation in MD and clean-up in Houdini, I imported the Alembic file into C4D. MD Alembic files always show up in C4D as a single Alembic object without any selection sets, which makes material assignment impossible.

This is where C4D baking came into play – a process I used to convert the Alembic file into a C4D object with PLA (Point Level Animation):

  1. drag the Alembic object into the C4D timeline;
  2. go to “Functions”;
  3. “Bake Objects”;
  4. check “PLA”;
  5. then bake.

Going through the steps above, I got a baked-down C4D object on which I could easily select polygons and assign multiple materials using selection sets. I then exported an OBJ file from MD with materials, imported it into C4D, and dragged the selection sets directly onto the baked-down garment object. This eliminated the need to manually reassign materials in C4D (a scripted sketch follows).
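
A hedged C4D Python sketch of that selection-set transfer, cloning the polygon-selection and texture tags from the OBJ import onto the baked garment; the object names are hypothetical, and this only works because both meshes share the same topology:

```python
# Copy polygon-selection and texture tags from the OBJ import
# (which kept MD's materials) onto the baked PLA garment object.
import c4d

doc = c4d.documents.GetActiveDocument()
source = doc.SearchObject("garment_obj")    # hypothetical OBJ import
target = doc.SearchObject("garment_baked")  # hypothetical baked object

if source and target:
    for tag in source.GetTags():
        # Cloned selection tags reference polygon indices, so this
        # assumes identical topology between the two meshes.
        if tag.GetType() in (c4d.Tpolygonselection, c4d.Ttexture):
            target.InsertTag(tag.GetClone())
    c4d.EventAdd()
```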

I used a blend of linen texture maps (from Quixel Megascans Bridge) and the Redshift Car Shader to emulate the sequin fabric (think “blink”) found in a lot of professional ballet tutu dresses.

Close-up render of the fabric material

WARNING :

Do not use AO or Curvature nodes for simulated garment materials (or any animated object), as they can produce glitches in final renders.

4. RENDERING

Lighting & Environment

Although I tried to keep my lighting as minimal as possible, project “Ballerina” inevitably required a lot of tinkering due to the nighttime setting.

The nighttime HDRI did not provide sufficient ambient light for the interior space, and the chandelier bulbs were way too dim to act as the primary light source. Ultimately I placed an invisible spot light under the center chandelier, plus a fake light that affected only the architectural ornaments. The fake light provided an extra level of bounce light that gave just the right amount of illumination without ruining the moody atmosphere.

I also added a “Redshift Environment”, controlled along the Z-axis and multiplied with “Maxon Noise”, to give more depth to the scene. Exterior-wise, I scattered 2 variations of dogwood trees with C4D “Matrix” in the surrounding area; they were lit from the ground up to give extra depth. In summary, the lighting of the scene included:

  1. dome light (nighttime HDRI) x 1
  2. chandelier mesh lights x 3
  3. spot light (center) x 1
  4. exterior area lights x 4
  5. fake area light positioned under the chandelier (affecting architectural ornaments only)

RS lights

Note :

The trees were generated with SpeedTree. Lighting takes a lot of consistent practice to master; apart from my daily CG practice, I spent a lot of time watching b-rolls and breakdowns of movies – for instance, I took a lot of inspiration from Roger Deakins’ lighting and cinematography, as well as Wes Anderson’s frame composition and color combinations.

Camera Movements

All my camera movements were very subtle – dolly, camera roll and panning shots, all driven with Greyscalegorilla’s C4D plugin Signal. I personally prefer Signal for its non-destructive nature, but old-school keyframing would work just fine for similar camera movements.

Draft Renders

Once I had the character animations, cloth simulations and camera movements ready, I began doing low-res test renders to make sure there would be no surprises during the final renders. This included:

  1. flipbook (OpenGL) renders to ensure the timing of the animations was optimal;
  2. low-res, low-sample full-sequence renders to ensure there were no glitches;
  3. full-res (2K) high-sample still renders with AOVs (diffuse, reflection, refraction, volume) to check what contributed to any prevalent noise;
  4. submitting test renders to Fox Renderfarm to ensure the final renders matched my local renders.

This process lasted over 2 months, with iteration after iteration of renders and corrections.

Close-up shot I
Close-up shot II
Final shot

Final Renders & Denoising

I used relatively high-sample render settings for the final renders, as interior scenes in Redshift are generally prone to noise.

RS final render settings

I also had motion blur and bokeh turned on for the final renders – in general, motion blur and bokeh look better (more physically accurate) in-render than when added in compositing.

Half of the final 2K sequence was rendered on a local workstation, while the rest was rendered on Fox Renderfarm, totaling about 6,840 hours of render time on dual RTX 3090 machines. I used Neat Video to denoise the final shot, whereas the close-up shots were denoised with Altus Single (in Redshift).

Note :

Always turn “Random Noise Pattern” off under Redshift’s “Unified Sampling” when using “Altus Single” for denoising.

Redshift Rendering GI Trick

Redshift’s GI Irradiance Cache calculation can be quite costly; my final renders, for instance, averaged 5 minutes of GI Irradiance Caching time per frame.

Vray has an option in its IR/LC settings named “use camera path”, designed specifically for scenes where the camera moves through a still scene; once enabled, Vray only calculates one frame of GI cache for the entire sequence. Borrowing a page from Vray, I used the following motion blur settings to calculate the first frame of Irradiance Cache:

RS rendering GI trick motion blur setting

That one Irradiance Cache was then used to render the entire sequence. Two shots of the project were rendered with a single GI cache, resulting in roughly 10% faster render times overall.

Note :

The GI trick only applies to shots with very little motion; when I applied it to the 2 close-up shots of project “Ballerina”, for example, I got light patches and ghosting on the character’s skin.

Conclusion

Having spent months working on the project, I have gained an appreciation for traditional character animators – I never realized the amount of effort involved in crafting character animation, nor the subtlety of detail required to bring convincing CG characters to life.

Though I would not consider myself a character artist, I personally think character animation is really powerful in making CG environments relatable, and it will therefore remain an essential part of my personal CG pursuits moving forward.

Learn more:

• Kay John Yim’s personal site https://johnyim.com/

• Kay John Yim’s ArtStation https://www.artstation.com/johnyim

• Character Creator https://www.reallusion.com/character-creator/download.html

• iClone https://www.reallusion.com/iclone/download.html

• Reallusion https://www.reallusion.com/
