Kay John Yim is a Chartered Architect at Spink Partners based in London. He has worked on a wide range of projects across the UK, Hong Kong, Saudi Arabia and Qatar, including property development and landscape design. His work has been featured on Maxon, Artstation, CG Record and 80.LV.
Yim’s growing passion for crafting unbuilt architecture with technology has gradually driven him to take on the role of a CGI artist, delivering visuals that serve not only as client presentations but also as a means of communication among the design and construction team. Since the 2021 COVID lockdown, he has challenged himself to take courses in CG disciplines beyond architecture, and has since won more than a dozen CG competitions.
“Masquerade” tells a story of a girl with a concealed identity who enters an empty theater filled with statues. As the statues come to life, a mysterious gentleman joins her and they waltz to the rhythm of enchanted flames, lost in a world of magic and fantasy.
This project was my debut short CG film; it allowed me to develop my skills in storyboarding while working with AccuRIG and motion capture data to create intricate character animations. It was also an opportunity to refine my lighting and rendering techniques. I would like to express my sincere gratitude to Reallusion and GarageFarm for their sponsorship; their support made this project possible.
The software and workflow I used throughout every stage of the project can be summarized as follows:
Ⅰ. ARCHITECTURAL MODELING
▪ Rhino > MOI > Cinema 4D > Redshift
Ⅱ. CHARACTER CREATION
▪ Character Creator 4 (CC4) > iClone 8 (iC8) > Houdini > Marvelous Designer > Houdini
Ⅲ. AccuRIG (STATUES ANIMATION)
▪ Cinema 4D > Houdini > CC4 AccuRIG > iC8.1 > Cinema 4D > Embergen > Houdini (retime) > Cinema 4D > Redshift
Ⅳ. CHARACTER ANIMATION
▪ iC8.1 > Embergen > Houdini (retime) > Cinema 4D > Redshift
Ⅴ. CLOTH SIMULATION
▪ Marvelous Designer > Houdini > Cinema 4D (C4D)
Ⅵ. ASSEMBLY, LOOK-DEV & SHOT FRAMING
▪ iClone > C4D
Ⅶ. RENDERING & POST-PRODUCTION
▪ Redshift > Neat Video (denoise) > After Effects (Heatwave) > Premiere (Magic Bullet Looks) > RIFE-App (interpolation)
1. ARCHITECTURAL MODELING

The scene consists of two spaces, the hallway and the theater, designed and modeled in reference to the Louvre Palace and the Palais Garnier theater respectively. While they are not complicated spaces in terms of architecture, the ornaments are highly detailed and could easily have resulted in extremely heavy models that would have caused render times to skyrocket. Because of this, optimization became a core tenet throughout the making of “Masquerade”.
The following iconic buildings served as references throughout the modeling process.
- Palais Garnier Theater (from 3:18)
- Louvre Palace Dining Room & Hallway (from 08:00)
Using my past projects “Ballerina”, “The Magician”, and “The Gallery of the Great Battles” as foundations, I recycled many of the ornaments that I had modeled for the Amalienburg hunting lodge in Munich. With the main architectural space blocked out, I then switched between Rhino and C4D, using the following Rhino commands and C4D functions for modeling:
- Flow (Rhino): Conform objects to a specified curve, useful for accurately bending ornament meshes to architraves, bullnoses, etc.
- Flow Along Surface (Rhino): Conform objects from one surface to another, useful for accurately transferring ornaments on flat surfaces to curved counterparts, e.g., balcony balustrade.
- Spline Wrap (C4D): Similar to Flow in Rhino but procedural; it cannot handle meshes as heavy as Flow can, but it is extremely versatile for design changes.
- Volume Builder + Volume Mesher + Quad Remesher (C4D): Useful for retopologizing heavy meshes that require close-up detail while maintaining their 3D silhouettes.
- Mirror Instance (C4D): I set my scene origin at (0,0,0), placed everything symmetrically under a null, instanced, then scaled the null’s X by -1, which mirrored it procedurally.
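The mirror-instance trick boils down to a scale of -1 on X about the scene origin. A minimal sketch in plain Python (not C4D-specific; the point coordinates are hypothetical) of what that transform does to symmetric geometry:

```python
# Illustrative sketch (plain Python, not C4D-specific): mirroring a
# symmetric arrangement by scaling X by -1 about the origin.
# The point coordinates below are hypothetical placeholders.

def mirror_x(points):
    """Mirror a list of (x, y, z) points across the YZ plane at the origin."""
    return [(-x, y, z) for (x, y, z) in points]

# One half of a symmetric colonnade, modeled on the +X side:
right_half = [(2.0, 0.0, 1.0), (4.0, 0.0, 1.0), (6.0, 0.0, 1.0)]

# The instanced null scaled by -1 on X yields the matching left half:
left_half = mirror_x(right_half)
print(left_half)  # [(-2.0, 0.0, 1.0), (-4.0, 0.0, 1.0), (-6.0, 0.0, 1.0)]
```

Because the mirror is an instance rather than a copy, any change to the original half propagates to the mirrored half automatically.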
While I reused many of the ornaments I had previously modeled in ZBrush and retopologized in C4D, I also used 3D ornaments from textures.com. I recycled meshes and used “Render Instances” throughout the project wherever possible to constrain file size.
2. CHARACTER CREATION
The female character I used for this project was originally created in Character Creator for a short called “Ballerina”. I have since updated her using the integrated SkinGen library in CC4.1 and added an extended facial profile for animation in iClone 8.1.
My CG character workflow sped up drastically when I began to integrate CC4 with Redshift’s Random Walk Subsurface Scattering. Not only did CC4’s integrated SkinGen features allow for faster procedural texturing, but my overall workflow was also simplified into a series of drag-and-drop processes, with textures automatically exported alongside the FBX files.
The male lead ended up being a modified version of a default character in CC4.1. While the character creation process had become a lot easier in CC4, making convincing characters remained out of reach for me. I used the following websites as my go-to references while adjusting character bone structures, facial silhouettes, and skin textures.
- This site generates a non-existing person every time the page is refreshed.
- This site provides a series of high-res portrait photos.
After creating the male and female leads, I exported both characters to iClone 8.1 for animation.
3. AccuRIG in CHARACTER CREATOR 4
The statues played a huge role in setting the tone for the scene; in fact, the final theater model and shot framing were designed around the movement of the statues.
AccuRIG in CC 4.1 lets inexperienced artists rig their models and turn them into animatable characters. For AccuRIG to work its magic, I had to first remove the skirt on the ballerina statue that would have interfered with the rigging procedure. I then reposed the statue into a t-pose using a combination of Cinema 4D (C4D) and Houdini.
For the animation, I used Edit Motion Layer to create three varied movements for use in my scene. I then exported the animated statues into Houdini, removed all of the body parts except for the sleeves, and imported them into Embergen for pyrotechnic simulation. Exporting the pyro VDBs from Embergen became an overnight process much like final rendering.
At the time of this writing, Embergen (v0.7.5.8) did not support VDB export at 24 fps, which would have been the preferred frame rate for most of my projects, including “Masquerade”. Since I eventually uploaded the project to GarageFarm for final rendering, it was equally important to optimize file sizes along with render times: exported VDB files can easily take up dozens of gigabytes of space (approximately 150 GB in total for the whole project). To reduce the total file size, I used Houdini’s Retime node to adjust the timing of the VDBs from 60 or 30 fps down to 24 fps, which reduced the total file size by roughly 25%.
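The savings from retiming are straightforward frame arithmetic: each cached frame is a separate VDB file on disk, so dropping from 60 or 30 fps to 24 fps shrinks the file count proportionally. A back-of-envelope sketch (plain Python; the 10-second sim duration is a hypothetical example, and the actual retiming was done with Houdini’s Retime node):

```python
# Back-of-envelope sketch of the savings from retiming cached VDB
# sequences to 24 fps. The sim duration is hypothetical; each frame
# of a pyro cache is one VDB file on disk.

def frames_at(fps, seconds):
    """Number of cached frames for a sim of the given duration."""
    return int(fps * seconds)

def retimed_count(src_fps, dst_fps, n_frames):
    """Number of frames after retiming a cache from src_fps to dst_fps."""
    return int(n_frames * dst_fps / src_fps)

sim_seconds = 10
for src_fps in (60, 30):
    original = frames_at(src_fps, sim_seconds)
    retimed = retimed_count(src_fps, 24, original)
    saving = 1 - retimed / original
    print(f"{src_fps} fps: {original} -> {retimed} frames ({saving:.0%} fewer files)")
```

A 30 fps cache retimed to 24 fps drops 20% of its frames, and a 60 fps cache drops 60%; the blended saving across a project depends on the mix of source frame rates.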
I then imported the animated Alembic files, FBXs, and simulated pyro VDBs into C4D. I used Constraints to bind the skirt back onto the FBX skeleton’s pelvis, which would transform according to the animation. Finally, I duplicated the statues into an array and offset every statue’s animation slightly to create a gradual rhythm to the animation.
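The staggered-array idea can be sketched as a simple per-index time shift: each duplicate plays the same clip, offset by a fixed number of frames. A minimal plain-Python illustration (the statue count, stagger, and clip length are hypothetical):

```python
# Minimal sketch (plain Python) of staggering animation playback across
# an array of duplicates: each statue plays the same clip, shifted by a
# fixed number of frames per index. All numbers here are hypothetical.

def stagger_offsets(n_statues, frames_per_step):
    """Frame offset applied to each statue's animation clip."""
    return [i * frames_per_step for i in range(n_statues)]

def local_frame(global_frame, offset, clip_length):
    """Frame of the clip a statue shows at a given scene frame (looped)."""
    return (global_frame - offset) % clip_length

offsets = stagger_offsets(6, 12)   # 6 statues, 12-frame stagger
print(offsets)  # [0, 12, 24, 36, 48, 60]
print(local_frame(100, 24, 96))  # third statue (offset 24) at scene frame 100 -> clip frame 76
```

The small, constant offset per index is what produces the wave-like, gradual rhythm across the array rather than perfectly synchronized motion.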
4. CHARACTER ANIMATION
The character animations were created using a mix of Xsens motion capture data and ActorCore premade motions. For the entrance sequence, I first applied a looping “Cat Walk” cycle from the ActorCore motion library to the female lead. This gave me the foundation for animating the girl’s entry into the theater hall. I then animated her transformation by matching the pace of the walk cycle and using iClone 8.1’s Motion Correction feature to compensate for foot sliding. I continued to use Edit Motion Layer to add the head turn amid her cat walk and used features from “Digital Soul” to apply facial animations.
With the entrance sequence animation complete, I exported the animated character as an Alembic file to Houdini, where I removed intersecting body parts, particularly the arms, which would have interfered with the cloth simulation in Marvelous Designer (MD).
For the second half of the animation, I motion captured real-life actors in Xsens suits doing a “Cinderella” waltz and transitioned the animation data to a slow dance motion from Reallusion’s “Studio Mcap Series: Motion for Lovers”.
With the Xsens raw data imported into iClone 8.1, I used “Digital Soul” to apply facial animations and locked the eyes of my actors on one another. I used Reach Target to correct the glitchy parts of the interactions and Edit Motion Layer to readjust the intersecting body parts wherever necessary.
5. CLOTH SIMULATION
While C4D and Houdini have both made significant improvements in cloth simulation lately, I still find that Marvelous Designer offers a level of control, speed, and quality that surpasses every other cloth simulation package I have tried. While realistic most of the time, MD’s simulations rarely give a perfect result on the first pass. I generally tinker with the Friction settings and Simulation Quality to address the minor glitches I encounter along the way, then export several versions of the simulation into Houdini and use Blendshape to blend them together.
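Blending several simulation caches with a Blendshape amounts to a per-vertex linear interpolation between the results. A toy sketch in plain Python (the vertex data is hypothetical, purely to illustrate the mix):

```python
# Conceptual sketch of blending two cloth-simulation caches the way a
# Blendshape mixes them: a per-vertex linear interpolation between the
# two results. The vertex data here is hypothetical toy data.

def blend(points_a, points_b, weight):
    """Linearly interpolate matching vertices; weight 0 -> A, 1 -> B."""
    return [
        tuple(a + weight * (b - a) for a, b in zip(pa, pb))
        for pa, pb in zip(points_a, points_b)
    ]

sim_a = [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]   # e.g. a stable but stiff result
sim_b = [(0.0, 0.8, 0.2), (1.0, 0.9, 0.1)]   # e.g. a looser, glitchier result

# A 50/50 mix keeps the stability of A with some of the drape of B:
print(blend(sim_a, sim_b, 0.5))
```

This only works if the caches share identical topology and point order, which is the case when they come from the same MD garment simulated with different settings.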
To add some mystery to the protagonist, I had her wear a cloak at the opening sequence, which gradually vanishes and reveals the character’s 3D face and dress. This was created using C4D’s PolyFX with an improved setup carried over from a previous project of mine called “Kagura”.
6. ASSEMBLY, LOOK-DEV, AND SHOT FRAMING
Returning to the entrance sequence, I used iClone’s camera in combination with C4D’s Camera Morph to precisely control the tracking camera. First, I set up a camera in iClone 8.1 and aimed it toward the female lead’s eyes, creating the illusion of the character looking into the camera and breaking the fourth wall. I then exported the camera as an FBX file for use in C4D, positioned a total of four cameras around the entrance, and used Camera Morph to smoothly transition among them.
With the character in place, I played the sequence repeatedly and added three extra cameras along her walking path, with two cameras eventually passing her and switching their focus toward the statues in the theater. The final Camera Morph is controlled and timed with Greyscalegorilla’s “Signal” plugin, which allowed me to use a curve (rather than keys) to control the timing of the camera transitions. For the dance sequence, I used Align to Spline with a circle spline centered on the characters to create a consistent and controlled framing of the characters, as if they were filmed with actual cameras attached to dolly tracks.
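Driving the morph with a curve instead of linear keys means the blend weight between cameras follows an eased function of time. A rough plain-Python sketch (the camera positions are hypothetical, and a smoothstep stands in for Signal’s user-drawn curve):

```python
# Sketch of curve-driven camera blending in the spirit of Camera Morph +
# Signal: an ease curve, rather than linear keyframes, drives the
# interpolation between two camera positions. All values are hypothetical.

def smoothstep(t):
    """Ease-in/ease-out curve over t in [0, 1]."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def morph(cam_a, cam_b, t):
    """Blend two camera positions with an eased weight."""
    w = smoothstep(t)
    return tuple(a + w * (b - a) for a, b in zip(cam_a, cam_b))

cam_1 = (0.0, 2.0, -10.0)
cam_2 = (5.0, 2.0, -6.0)
print(morph(cam_1, cam_2, 0.5))   # midpoint: (2.5, 2.0, -8.0)
```

The eased weight starts and ends with zero velocity, which is what makes the camera hand-offs feel motivated rather than mechanical.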
I kept the lighting setup as simple as possible, using the RS Sun and chandeliers as the primary sources of light and pyro VDBs as the secondary source of illumination. I added an RS Environment driven by a Maxon Noise shader to add an additional layer of atmosphere to the final render. I also kept the materials simple, using only five materials in total to maintain a distinguishable color palette.
Since version 3.5.06, Redshift’s updated Random Walk SSS has provided more realistic SSS models without sacrificing render speed. It simplifies the setup of skin materials and produces better results under a variety of lighting conditions. Prior to this update, Redshift’s ray-traced SSS required multiple texture layers and manual adjustments to create realistic skin materials, which was a time-consuming process that required constant adjustments on animation sequences with significant light changes. While Arnold Renderer has offered Random Walk SSS for some time, Redshift’s implementation has made it much more efficient and practical for use in animation.
For the Subsurface settings of the characters’ skin materials, I used the Skin Diffuse map from CC 4.1 as the color and set Radius to a salmon color (similar to the color of one’s dermis when viewed under direct illumination). I set Scale to 0.1 to represent the thickness of the skin, and used Random Walk mode with Include Mode set to “All Objects”.
As for the look of the fire, I aimed for a more fantastical and smokeless appearance. To ensure that the colors and movements of the fire would match the lighting and color palette of the rest of the animation, I dedicated a lot of time at the start of the project to testing different render and pyro simulation settings.
At the time of writing, there was a persistent NVIDIA driver issue (a VRAM memory allocation bug) that consistently caused Redshift to crash on long sequence renders. This issue was widely discussed on Redshift’s Facebook group and official forum. Some 3D artists found that downgrading their NVIDIA drivers to 462.59 worked well, but the only fix that worked for me was to disable half of my GPUs for rendering (two out of four in my case).
The scene for “Masquerade” was one of the heaviest among all the animation projects I have done: the architectural model alone totalled 4 GB, while the entire scene (including VDBs, textures, characters, and cloth simulations) totalled over 500 GB. To optimize render times, I converted all static objects, including Matrix objects, into Redshift proxies by groups, for example the ceiling as one proxy and the walls as another. This drastically reduced the loading times for geometry during final renders. While I used to shy away from using Redshift proxies for animations due to render farm limitations, I have had the pleasure of using GarageFarm, which fully supports Redshift proxies as long as the proxy material properties are set to “Materials from Object” or “Materials from Scene (Match name and prefix)”.
Aside from its support for Redshift proxies, GarageFarm offered one of the best experiences I have had with a render farm. The 24/7 support from the GarageFarm team was invaluable; they were always available to answer any questions I had. One of the standout features was the flexibility it offered for rendering: the ability to render single images in strips, plus three priority levels for render jobs, allowed me to easily manage my budget and render times. I would highly recommend GarageFarm to anyone looking for a user-friendly and reliable render farm for their CGI animation projects.
For the final rendering, every shot was saved as its own project, which included the following:
- Static geometry as RS Proxies (architecture and furniture)
- Matrix objects as RS Proxies (flower petals scattered across the ground and in the air)
- Characters and animated garments in Alembic format
- Simulated pyro in VDB format
I experimented with multiple techniques for further optimization and found the following four to yield the most drastic reductions in render times:
- Deleting everything outside the camera: I created unique scenes for every shot such that each scene only contained what was visible to the camera. This significantly reduced my final render times by up to 80%, especially for the statue close-up shots.
- Turning off motion blur for Matrix/Cloner objects with RS Object tags
- Keeping render samples low. I kept mine at the default, with Automatic Sampling set to a threshold of 0.03 and denoising with Neat Video in post-processing
- Rendering only every other frame and using “RIFE-App” to interpolate frames.
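Technique 4 is simple arithmetic: rendering every other frame roughly halves the frames sent to the farm, and the interpolator synthesizes the missing in-betweens. A minimal plain-Python sketch (the shot range is hypothetical):

```python
# Rough sketch of technique 4: render every other frame, then let an
# interpolator (RIFE-style) synthesize the in-between frames. The shot
# length below is hypothetical.

def frames_to_render(first, last, step=2):
    """Frames actually sent to the farm when rendering every `step` frames."""
    return list(range(first, last + 1, step))

def interpolated_total(rendered):
    """Total frames after inserting one synthesized frame between each pair."""
    return len(rendered) + (len(rendered) - 1)

shot = frames_to_render(1, 9)
print(shot)                      # [1, 3, 5, 7, 9]
print(interpolated_total(shot))  # 9 frames after interpolation
```

The cost saving is close to 50% of the farm bill per shot, which is why it is worth test-rendering even though interpolation does not hold up on every kind of motion.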
Technique 3 resulted in relatively noisy renders, which I imported as image sequences into Premiere Pro and denoised with Neat Video 5. I left most of the Neat Video settings at their default, “automatic” values, but it is important to right-click on the Premiere Pro viewport and make sure Playback Resolution is set to “Full”; this ensures that Neat Video samples the final renders at full resolution. The Neat Video interface is easy to navigate, but I recommend checking out the official tutorials to get the most out of the software. Technique 4 worked for the first half of this particular project but did not work for some of my other projects, so make sure to do test renders before proceeding with the final renders.
For the final touches, I used Red Giant Universe Heatwave in After Effects to add heat distortions to the fire. I also used Magic Bullet Looks in Premiere for color correcting, adding chromatic aberration, and film grain. The resulting effects were reminiscent of movies filmed with detuned 70s lenses, adding a layer of nostalgia and enigma to the final aesthetics.
In conclusion, the creation of “Masquerade” was a challenging but rewarding journey of self-learning and artistic exploration. The process of bringing this character animation project to life allowed me to develop my skills in storyboarding, character design, character animation, and rendering.
This project was created amid the rise of text-to-image artificial intelligence (AI) systems, which have the potential to disrupt the field of visual arts and threaten the livelihood of artists. While these technologies offer new possibilities for automation and efficiency, they also raise important questions about the role of creativity and human expression in an increasingly automated world. In the face of these challenges, I believe that it is more important than ever to celebrate and support the work of artists and creators.
By sharing my own process and insights through this article, I hope to inspire others to pursue their own artistic endeavors and to value the unique perspectives and talents of human artists. Despite the advances of AI, we must remember that it is through the passion and dedication of artists like ourselves that the world can continue to be enriched by the beauty and complexity of human creativity.
Learn more :
• Kay John Yim’s personal site https://johnyim.com/
• Kay John Yim’s ArtStation https://www.artstation.com/johnyim
• Character Creator https://www.reallusion.com/character-creator/download.html