
Project “Kagura” is a one-minute full-CG animation and the sequel to Kay John Yim’s “Ballerina” project, made with Character Creator, animated in iClone, and rendered with Redshift in Cinema 4D.

Kay John Yim is a Chartered Architect at Spink Partners based in London. He has worked on a wide range of projects across the UK, Hong Kong, Saudi Arabia and Qatar, spanning property development and landscape design. His work has been featured on Maxon, ArtStation, CG Record and 80.LV.
Yim’s growing passion for crafting unbuilt architecture with technology has gradually driven him to take on the role of a CGI artist, delivering visuals that serve not only as client presentations but also as a means of communication within the design and construction team. Since the 2021 COVID lockdown, he has challenged himself to take courses in CG disciplines beyond architecture, and has since won more than a dozen CG competitions.
The project’s concept centers on a fantasized version of Kagura (神楽), a type of ceremonial Shinto ritual dance in Japan. According to tradition, a Kagura dancer turns into a god during the performance; this is depicted as the dancer’s ballerina tutu dress transforming into a hakama as she dances on the floating stage, signifying the purifying spirits of nature.
This article focuses primarily on shots three and four of project “Kagura”, where Yim details his design and technical process across the four main aspects below:
- The Architecture
- The Animation
- The Transformation
- Rendering
“Kagura” was made with the following software:
- Rhino
- Moment of Inspiration (MOI)
- Cinema 4D (C4D)
- Redshift (RS)
- Character Creator (CC)
- iClone
- Marvelous Designer 11 (MD)
- Houdini
This making-of tutorial article is a short version of “The Making of ‘Kagura’, A Photorealistic CG Animation”, written by Kay John Yim. For the full version, please visit Fox Renderfarm News Center.




1. THE ARCHITECTURE
The architecture was loosely based on Ookawaso Hotel’s lobby in Fukushima Prefecture, Japan.

It was probably one of the most challenging interior spaces I have ever modeled, for the following reasons:
- Most photographs available online focus on the floating stage and thus show very little of the surrounding space.
- With no access to architectural drawings, I had to eyeball all the measurements from photographs.
- The space does not conform to a single orthogonal grid; for instance, the stairs and the 1F walkway do not align with the columns.
I first gauged the size of the space from the balustrade height: as a rule of thumb, balustrades are about 1.1 meters tall, though this varies slightly between interior and exterior spaces and with each country’s building regulations.
By estimation, the distance between the columns is about 7.7 meters.
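To make that kind of eyeballing concrete, here is a minimal sketch of the scale-bar arithmetic, assuming the 1.1-meter balustrade as the reference; the pixel values are hypothetical placeholders rather than measurements from the actual reference photos, and the method ignores perspective, so it only works for elements roughly in the same plane as the reference.

```python
# Rough photo-based estimation: treat a known real-world dimension (the
# balustrade) as a scale bar and convert other pixel measurements to meters.
# All pixel values below are hypothetical placeholders.

BALUSTRADE_HEIGHT_M = 1.1   # rule-of-thumb balustrade height

def estimate_length_m(target_px, reference_px, reference_m=BALUSTRADE_HEIGHT_M):
    """Scale a pixel measurement into meters using a known reference."""
    return target_px / reference_px * reference_m

balustrade_px = 120.0       # balustrade height measured in the image
column_spacing_px = 840.0   # distance between two columns in the image

print(round(estimate_length_m(column_spacing_px, balustrade_px), 1))   # ~7.7 m
```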

Looking at the orientation of the floating stage and the columns, I assumed that the space was designed on two sets of grids: a construction grid aligned with the columns (which structurally hold up the space) and a secondary grid set diagonally to it (which serves only as a design grid).
I drew up the construction grid uniformly (7.7 x 7.7 meters) and placed the columns accordingly. I then drew diagonal lines over the construction grid to obtain the secondary grid, which gave me a starting point for the floating stage as well as the 1F walkway.
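As a minimal illustration of that two-grid logic (plain Python rather than Rhino scripting, with an arbitrary grid extent), the construction grid carries the columns, while the cell centers added by the diagonals make up the secondary grid:

```python
SPACING = 7.7   # estimated construction grid spacing in meters
BAYS = 4        # number of bays in each direction (arbitrary for this sketch)

# Construction grid: one column at every grid intersection.
columns = [(i * SPACING, j * SPACING)
           for i in range(BAYS + 1) for j in range(BAYS + 1)]

# Secondary (design) grid: drawing both diagonals across each cell adds a
# node at every cell center; together with the original intersections these
# form the 45-degree grid used to set out the floating stage and 1F walkway.
cell_centers = [((i + 0.5) * SPACING, (j + 0.5) * SPACING)
                for i in range(BAYS) for j in range(BAYS)]
secondary_nodes = columns + cell_centers

print(len(columns), len(secondary_nodes))   # 25 construction nodes, 41 in total
```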

A large portion of the architectural elements then instantly fell into place according to the grids I drew up.
That said, the modeling process was not exactly straightforward. Given the lack of references (especially for the corner details), I spent most of the time redesigning and tweaking wall panel sizes and wall stud positions to reach proportions that were aesthetically pleasing.


I then exported the Rhino model as .3dm, opened it in MOI and exported it again as FBX. Doing so gave me clean quad meshes that I could easily edit and UV-map in C4D.

While the majority of the space took less than a week to model, I spent an additional month solely on fine-tuning the details, tweaking the lighting, and framing a composition that I was satisfied with.


2. THE ANIMATION
2-1 Character Animation made with Character Creator and iClone
The character animation was created with Character Creator (CC) and iClone, based on a mocap animation available on the Reallusion Marketplace.
I kept my animation workflow as simple as possible; in fact, I exclusively used the “Set Speed” and “Edit Motion Layer” functions in iClone to reach the final character animation. First, I imported my CC character into iClone, applied the mocap animation onto the character via drag-and-drop, and altered the speed with “Set Speed” to create a slow-motion effect.

*Note: Please see my previous article for CG Character creation: Ballerina: A CGI Fantasy made with Character Creator and iClone.
Altering the speed, however, exaggerated a lot of movements that looked distracting; hence, I played the character animation on loop and deleted keyframes that I found unnecessary. I then used “Edit Motion Layer” to lift the arms and modify the finger positions.

2-2 Garment Preparation
Once I had a decent character animation, I moved on to Marvelous Designer (MD) and Character Creator to prepare the garments for animation and simulation.
Cloth simulation in Marvelous Designer is extremely finicky: multiple layers of clothing placed too close together cause a lot of jittering, which can take a seemingly endless number of simulation passes to resolve. For that reason, I separated the two sets of Marvelous Designer garments (ballet tutu and hakama) into two categories: skin-tight versus loose garments.
The skin-tight garments would be animated in Character Creator and iClone, a technique most commonly used in game production. This technique excels in speed but falls short of MD in capturing loose garment details. The skin-tight garments in this project were the ballet tutu leotard and the hakama’s inner layer.

The remaining loose garments would be simulated in MD (the ballet tutu skirt and the hakama’s outer layers).

2-3 Skintight Garment Animation with Character Creator and iClone
My garment preparation in CC was as follows:
- Export garment from MD to FBX as T-pose.
- Import FBX into CC by “Create Accessories”.
- Assign “Skin Weight”.
- Export to iClone.
The skin-tight garment would then be automatically applied to the animated character in iClone.

2-4 Loose Garment Simulation with Marvelous Designer and iClone
With multiple layers of clothing, MD generally simulates garments better on the CPU than on the GPU. Having separated the tutu leotard from the tutu skirt in this particular case, however, I found that GPU simulation actually gave a cleaner and faster result than the CPU alone.

For the hakama I wanted to create a calm but otherworldly aesthetic, so I reduced “Gravity” under “Simulation Settings” to 0 and upped the “Air Damping” to 5. This resulted in constantly floating sleeves and a clear silhouette throughout the entire animation.

With all the garments animated and simulated, I exported all of them as separate Alembic files. The Character was exported as an animated FBX from iClone.
2-5 Post-Simulation Clean-up in Houdini
Garments simulated in MD can sometimes end up with too much detail, or with polygons that have messy connectivity. The former I personally found distracting, and the latter would cause problems down the line in C4D when used in combination with “Cloth Surface”.
I imported the Alembic files into Houdini and used “Attribute Blur” to smooth out the garment, eliminating extra wrinkles.
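Conceptually, that smoothing pass is just an iterative neighbor average over the point positions. The sketch below reproduces the idea in plain Python on a toy point strip; it is not the actual Houdini network, and the blend factor and iteration count are placeholder values.

```python
# Laplacian-style smoothing of point positions, the core idea behind an
# attribute-blur pass. Toy data; in production this ran on the Alembic
# garment geometry inside Houdini.

def blur_positions(points, neighbors, iterations=10, blend=0.5):
    """points: list of (x, y, z); neighbors: list of neighbor-index lists."""
    pts = [list(p) for p in points]
    for _ in range(iterations):
        new_pts = []
        for i, p in enumerate(pts):
            if not neighbors[i]:
                new_pts.append(p[:])
                continue
            # Average position of the connected neighbors.
            avg = [sum(pts[n][k] for n in neighbors[i]) / len(neighbors[i])
                   for k in range(3)]
            # Blend towards that average: softens wrinkles without
            # collapsing the overall garment shape.
            new_pts.append([p[k] + blend * (avg[k] - p[k]) for k in range(3)])
        pts = new_pts
    return pts

# Tiny example: a zig-zag strip of points flattens out after blurring.
pts = [(0, 0, 0), (1, 1, 0), (2, 0, 0), (3, 1, 0), (4, 0, 0)]
nbrs = [[1], [0, 2], [1, 3], [2, 4], [3]]
print([round(p[1], 3) for p in blur_positions(pts, nbrs)])
```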
3. THE TRANSFORMATION
3-1 Setting up the Camera
Having imported the character FBX and all the Alembic files into Cinema 4D (C4D), I then moved on to setting up my camera based on the character animation. This prevented me from spending extra time on details that would not be visible in the final shot.
I used “PSR” under “Constraint” to bind the camera’s height to the character’s neck position; doing so stabilized the camera and avoided distracting movements.
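In effect, the position part of that constraint makes the camera’s height track the neck joint frame by frame. A rough stand-in for the behavior is sketched below in plain Python; the easing factor is my own assumption rather than part of the actual C4D setup, which used the PSR constraint directly.

```python
# Rough stand-in for binding the camera's height to the character's neck:
# each frame the camera Y is eased towards the neck Y, which keeps the
# framing stable against the dancer's vertical movement.

def follow_height(camera_y, neck_y_per_frame, easing=0.2):
    """Return the camera's Y value for each frame, eased towards the neck height."""
    out = []
    for neck_y in neck_y_per_frame:
        camera_y += easing * (neck_y - camera_y)
        out.append(camera_y)
    return out

# Hypothetical neck heights over a few frames, in meters:
neck_heights = [1.45, 1.48, 1.52, 1.50, 1.47]
print([round(y, 3) for y in follow_height(1.45, neck_heights)])
```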
3-2 Tutu Dress to Hakama
The transformation of the tutu dress into the hakama was driven by a combination of “PolyFX” objects and animated fields within C4D.

C4D’s “PolyFX” breaks objects down into their individual polygons; any MoGraph effectors assigned thereafter will then affect the object on a per-polygon basis rather than as a whole.
I assigned a “PolyFX”, a “Random Effector”, a “Plain Effector” and a “Spherical Field” to each of the following parts:
- Tutu leotard
- Tutu skirt
- Hakama sleeve
- Hakama top (outer layer)
- Hakama top (inner layer)
- Hakama bottom
Each “Spherical Field” was then bound to the character’s pelvis joint. With the Spherical Fields bound to the character, I animated their sizes and tweaked the timing so that the different garment parts gradually scaled down or up by their polygon divisions. For the specific steps, please see the detailed guide in the full version.
*Note: When in doubt, press Shift-C and type the name of the MoGraph object or function you are looking for; I use Shift-C all the time in C4D.
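Stripped of the MoGraph specifics, the setup boils down to this: every polygon gets a scale factor based on its distance from a sphere that follows the pelvis, and animating the sphere’s radius sweeps the effect across the garment. Below is a minimal sketch of that logic in plain Python; the falloff width and sample values are placeholders rather than the actual project settings, and for the hakama parts the factor would simply be inverted so they scale up instead of down.

```python
import math

def polygon_scale(poly_center, pelvis_pos, radius, falloff=0.3):
    """Scale factor for one polygon: 0 inside the spherical field, 1 outside,
    with a soft falloff band in between (Plain Effector-style)."""
    d = math.dist(poly_center, pelvis_pos)
    if d <= radius:
        return 0.0
    if d >= radius + falloff:
        return 1.0
    return (d - radius) / falloff

# Keyframing the field radius drives the transformation: polygons closest to
# the pelvis shrink away first, then the effect sweeps outwards.
pelvis = (0.0, 1.0, 0.0)                          # field bound to the pelvis joint
poly_centers = [(0.0, 1.2, 0.0), (0.4, 1.0, 0.0), (0.9, 0.8, 0.3)]
for radius in (0.0, 0.3, 0.6, 1.0):               # animated radius values
    print(radius, [round(polygon_scale(c, pelvis, radius), 2)
                   for c in poly_centers])
```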

3-3 Tutu Skirt to Butterfly Hakama
In addition to the garment transformation driven by the “PolyFX” setups, I added an extra layer of animation with a “Cloner” of animated butterflies; this created the illusion that the tutu skirt disintegrated into a swarm of butterflies and flew away.
I used an animated butterfly created by Travis David, cloned onto the simulated tutu skirt and driven in scale by a “Plain Effector” so that the butterflies appear and disappear in step with the “PolyFX” animation.

For the final rendering, I added “Cloth Surface” and “Subdivision” to each garment part to break the polygons into even smaller pieces; this resulted in the illusion of the tutu dress disintegrating and subsequently reintegrating into the hakama.
Technically speaking, it was a relatively simple animation; the most challenging parts were the timing and developing an aesthetic that flowed naturally with the character’s movement. The ten seconds of transformation alone took me more than two months to reach the final version; I was constantly adjusting the Spherical Fields’ animation through the plugin “Signal”, rendering the viewport sequence, then tweaking and re-rendering over and over again.
“Cloth Surface” and “Subdivision” are computationally expensive—each viewport frame took at least two minutes to process, totalling about ten minutes per viewport sequence render.


4. RENDERING
4-1 Texturing
I kept my texturing workflow fairly simple: apart from the characters, I used Megascans materials and foliage in the final renders.
4-2 Redshift Limitations & Workarounds
Though Redshift is my favorite offline renderer for its unmatched rendering speed, it has a few limitations regarding Motion Blur and Cloner/Matrix that I had to work around in preparation for the final rendering.
“Motion Blur”, or “Deformation Blur” to be specific, contributes to the realism of CG animation. However, Redshift has a known limitation of automatically disabling “Deformation Blur” on “PolyFX” objects, which would cause glitches (objects looking as if they pass through each other) in the final render if “Deformation Blur” were turned on globally. While keeping global “Deformation Blur” on, I therefore added a Redshift Object tag to every character and garment object and unchecked “Deformation Blur” on those RS Object tags.
On the other hand, while “Cloner” and “Matrix” both serve the same purpose of cloning objects, they differ in viewport feedback and rendering speed. “Cloner” has the advantage of being WYSIWYG in the viewport, whereas with “Matrix” you have to render out the frame to see the final result.
Rendering-wise, “Matrix” is rendered by Redshift much more efficiently than “Cloner”; taking Shot 4 for instance, the final render took three hours per frame using exclusively “Cloner”, as opposed to 2.5 hours using exclusively “Matrix”. Hence, I used “Cloner” while working on the shot composition and used “Swap Cloner/Matrix” to replace every “Cloner” with a “Matrix” for the final render.


4-3 Redshift Environment
I used a Redshift Environment to give all the shots an atmospheric and mysterious look; it also helped convey the depth of the scene, especially in a busy composition like Shot 4.
The Redshift Environment’s Volume Material was driven in height by two “Null”s; a fake Spot Light directly above the dancing character and two Area Lights from below the stage also contributed to the volumetric effect.
4-4 Redshift Proxies
Having finalized the look of the shots, I exported as many objects as possible as Redshift Proxies for rendering efficiency. I used “Redshift Proxy Exporter” to batch-export objects, which saved me a lot of time, especially when exporting foliage. With everything replaced by Redshift Proxies, my final render time per frame went from 2.5 hours down to two hours.
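For a rough sense of what those per-frame savings add up to over a shot, here is the arithmetic with a hypothetical frame count (the actual shot lengths are not stated here):

```python
# Per-frame render times quoted above, in hours.
setups = {
    "Cloner only":         3.0,
    "Matrix":              2.5,
    "Matrix + RS proxies": 2.0,
}

frames = 300   # hypothetical, e.g. ~10 s at 30 fps; the real frame count is not stated

for label, hours_per_frame in setups.items():
    print(f"{label:<22}{hours_per_frame * frames:7.0f} render hours")
```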
Conclusion
“Kagura” is by far the most challenging personal project I have ever done. I learned along the way as I worked on “Kagura” and “Ballerina”, all through trial and error, rendering out iteration after iteration over the past six months.
With Reallusion and Fox Renderfarm’s support, I eventually brought “Kagura” to life, and it has been the most rewarding project since I began my CGI journey.
Learn more :
• Kay John Yim’s personal site https://johnyim.com/
• Kay John Yim’s ArtStation https://www.artstation.com/johnyim
• Character Creator https://www.reallusion.com/character-creator/download.html