FMP: Reality illusion

Having reached the final stage, we finally have a finished animated film. Throughout this process, we have learned together how to develop storyboards from scratch, create stylized character textures, add realistic hair simulations to characters, and integrate motion capture into the animation pipeline. Although we encountered some issues in the rendering phase, through continuous trial and error we eventually achieved a relatively complete animation. To enhance the image quality, I ultimately chose Topaz Video Enhance AI to upscale and denoise the entire video, bringing it to a quality suitable for playback on the academy’s big screen.

Week_10: Animation sound effects and background music generation

In this animation, most of the sound effects revolve around the characters. These include the rustling of clothes, breathing, hands patting the bed, phone alarms, tapping on the phone, touching clothes, footsteps on the ground, walking sounds, plates being placed on the table, sitting down, dragging chairs, waving hands, eating crisps, pillow impacts, door pushing, wind, heavily sitting on the ground, doors slamming, men’s roars, twisting doorknobs, cups sliding on the table, gasping, unlocking doors, and closing doors, among others. These sounds help integrate the characters’ actions and the environment into a cohesive whole. I primarily voiced the sounds made by the animated characters, including speech and breathing.

Additionally, I incorporated the sound of tinnitus to represent the somatic symptoms of the protagonist’s depression and to reflect the sudden noises that depression sufferers might experience in real life. I also added a phone ringtone, hoping to see whether viewers instinctively reach for their phones upon hearing it (isn’t that just how our phones trap us?). I then found the remaining sounds in online sources and my personal sound-effect library, adjusting each of them gradually. Adding effects to the sounds made them fit my animation better. Building a personal material library like this speeds up animation production and has no real downside.

For background music, I preferred ethereal, rhythmic, and slightly creepy tunes to match the events that follow the character forgetting to take their medication. Near the end, I needed a piece that feels relaxing yet subtly tense, hinting that the story isn’t over and that our protagonist may not have left their “room.” Concerned about copyright issues, I chose the SUNO-AI music generation platform to create the music I needed. Although I had to generate around 40 background tracks to get one perfect match, the result was free and cleared for commercial use, so the trade-off was acceptable. Using the platform is similar to using AI image generation: simply input relevant prompts, such as instrument types or music genres, and it quickly generates suitable background music.

Week_09: Video color correction and frame extraction processing

This week, I mainly worked on color grading and frame extraction for the animation. Let me introduce these parts of the work separately and highlight some important points to keep in mind:

Regarding color grading, the most crucial point is to keep the video’s color tones consistent, preventing any jarring changes that might break immersion. Next, it’s essential to establish a primary color tone for the entire animation. For this, I applied a LUT (a look-up table: a preset color-grading transform) across the video to achieve uniform coloring. A single shared LUT restores the colors and gives the whole video a unified tone.
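For anyone grading outside an editor, the same idea can be batch-applied with ffmpeg’s lut3d filter. Below is a minimal sketch, not my actual grading setup: it assumes ffmpeg is on the PATH, and the folder names and room_grade.cube are hypothetical stand-ins for your own files.

```python
# Batch-apply one LUT to every rendered shot so all clips share a base tone.
# Hypothetical paths: renders/ holds the raw clips, graded/ gets the output.
import pathlib
import subprocess

LUT = "room_grade.cube"  # the shared look-up table (hypothetical file)

out_dir = pathlib.Path("graded")
out_dir.mkdir(exist_ok=True)

for shot in sorted(pathlib.Path("renders").glob("*.mp4")):
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(shot),
         "-vf", f"lut3d={LUT}",          # same LUT on every clip
         "-c:a", "copy", str(out_dir / shot.name)],
        check=True,
    )
```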

[Image: before / after comparison of the color-graded shot]

As for frame extraction, I hope this technique creates an unreal, detached feeling in my animation, making viewers feel oppressed and unsettled, which is exactly what I want to showcase: a somatic representation of depression. Through it I hope not only to observe myself, but also to observe those who are living through these struggles and to feel their pain. To achieve this effect, I used the frame extraction tool in After Effects (AE) so that the animation holds each image for three frames (playing on threes), while key shots hold each image for two frames.
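Outside of AE, the same hold effect can be reproduced procedurally. Here is a minimal sketch using OpenCV, offered as an illustration rather than my actual AE setup; the file names are hypothetical:

```python
# step_print.py: sample every `hold`-th frame and repeat it, so the clip
# keeps its length and frame rate but plays "on threes" (or twos).
import cv2

def step_print(src_path: str, dst_path: str, hold: int = 3) -> None:
    src = cv2.VideoCapture(src_path)
    fps = src.get(cv2.CAP_PROP_FPS)
    size = (int(src.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(src.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    dst = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    held, i = None, 0
    while True:
        ok, frame = src.read()
        if not ok:
            break
        if i % hold == 0:   # pick up a new image every `hold` frames
            held = frame
        dst.write(held)     # otherwise repeat the held image
        i += 1
    src.release()
    dst.release()

step_print("shot_010.mp4", "shot_010_on3s.mp4", hold=3)  # hypothetical names
```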

Week_08: Start using the render farm

This week, the primary task was to individually inspect each animation shot and upload all of them to the render farm. I encountered many issues, so let me explain each one and provide my solutions:

First, I added character outlines during rendering, but this effect conflicted with the built-in motion blur in MAYA-Arnold, causing the outlines to disappear. So I disabled motion blur and decided to add the frame-holding effect after rendering instead. This approach suits the story’s intention and, based on my tests, also masks the unrealistic stretching and collision artifacts in the character’s clothing simulation.
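For reference, disabling Arnold’s motion blur before submission is a one-line scene change. A minimal sketch for Maya’s Script Editor, assuming MtoA exposes the toggle as motion_blur_enable on defaultArnoldRenderOptions, as recent versions do:

```python
# Turn off Arnold motion blur so the toon outlines stay intact,
# then print the value back as a sanity check before saving.
import maya.cmds as cmds

if not cmds.objExists("defaultArnoldRenderOptions"):
    cmds.error("Arnold render options not found; is the MtoA plugin loaded?")

cmds.setAttr("defaultArnoldRenderOptions.motion_blur_enable", 0)
print("Motion blur enabled:",
      cmds.getAttr("defaultArnoldRenderOptions.motion_blur_enable"))
```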

Second, while using the render farm, there were instances where the animation was skipped during rendering, effectively stopping the process. After investigating, I discovered that an invalid texture group in the original scene model was causing MAYA2024 on the LCC computers to fail to read that specific texture. Hence, it’s crucial to check thoroughly whenever MAYA reports a texture-related error.
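Since then, I run a quick path check before every submission. A minimal sketch of that check, run from Maya’s Script Editor; it only covers plain file nodes (UDIM-token paths would need extra handling):

```python
# List every file-texture node whose image path is missing on disk,
# which is exactly the kind of broken texture that stalled the farm.
import os
import maya.cmds as cmds

missing = []
for node in cmds.ls(type="file"):
    path = cmds.getAttr(node + ".fileTextureName")
    if path and not os.path.exists(path):
        missing.append((node, path))

for node, path in missing:
    print(f"Broken texture: {node} -> {path}")
print(f"{len(missing)} broken texture path(s) found.")
```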

Third, despite no apparent issues in the scene inspection, the animation still wouldn’t render. Upon further examination, I found that some shots had been used to test the Xgen hair simulation, and simply deleting the groom left behind unresolved nodes that MAYA2024 couldn’t process. I recommend using MAYA’s built-in scene cleanup tool to delete unnecessary nodes in the scene, which resolved the rendering issue.
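If you prefer scripting the cleanup, the part that mattered here (removing unknown nodes and stale plugin requirements) looks roughly like this; a sketch for Maya’s Script Editor, equivalent in spirit to the cleanup options I used:

```python
# Delete `unknown` nodes left behind by the removed Xgen test, then drop
# the stale plugin requirements so MAYA2024 stops tripping over them.
import maya.cmds as cmds

unknown_nodes = cmds.ls(type="unknown") or []
for node in unknown_nodes:
    if cmds.objExists(node):
        cmds.lockNode(node, lock=False)  # unknown nodes are often locked
        cmds.delete(node)

for plugin in cmds.unknownPlugin(query=True, list=True) or []:
    cmds.unknownPlugin(plugin, remove=True)

print(f"Removed {len(unknown_nodes)} unknown node(s).")
```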

In summary, while I faced more issues than these, I successfully rendered the entire project. Tackling problems head-on and researching solutions always leads to finding a way through. Here, I present some of the current rendering clips from the animation, using the first shot as an example:

Week_07: Model scene construction and lighting production

Before this week, we completed the fabric simulation for the characters with mostly good results. However, the material response and elasticity of the clothing still look strange, creating a disconnect between the clothes and the character animation, and I need to research solutions to this issue. Now, let’s start building the scene:

The character’s room is part of a scene pack I purchased from the UE marketplace, and it fits the story’s vision for the bedroom, hallway, and kitchen. But how do we export UE scene files and import them into MAYA? If we export directly in FBX format, we have to relink the materials manually in MAYA. To obtain a scene model with pre-linked textures more efficiently, we can download the glTF export plugin from the UE marketplace. The glTF format opens in Blender with its texture channels intact. The next step is to export the scene from Blender to MAYA in FBX format, and you’ll find the scene textures arrive intact.
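The Blender relay step can even run headless. A minimal sketch, launched with `blender --background --python relay.py`; the paths are hypothetical, and the glTF importer and FBX exporter both ship with Blender:

```python
# Import the glTF exported from UE, then re-export as FBX with the
# textures copied and embedded so MAYA receives pre-linked materials.
import bpy

bpy.ops.wm.read_homefile(use_empty=True)                 # start clean
bpy.ops.import_scene.gltf(filepath="/exports/bedroom_set.glb")

bpy.ops.export_scene.fbx(
    filepath="/exports/bedroom_set.fbx",
    path_mode="COPY",       # copy texture files next to the FBX
    embed_textures=True,    # and pack them into the file
)
```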

Regarding lighting, we use a full MAYA workflow. For rendering quality, I chose the Arnold renderer, which I’ve used for a long time and have rarely encountered unsolvable issues with. In the lighting setup phase, I divided the areas needing lighting into the character’s bedroom, hallway, kitchen, and living room. Pre-setting the lights in the scene means that when we reference the scene into the animation file later, we won’t need to light the characters one by one. In the subsequent process, we only need to pay attention to the reflections and lighting effects between the characters and the environment.
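Because the lights live in the set file, bringing them into a shot is a single reference call. A minimal sketch, with a hypothetical file name and namespace:

```python
# Reference the pre-lit set into the current shot file; the lights and
# their settings come along, so no per-shot lighting is needed.
import maya.cmds as cmds

cmds.file(
    "scenes/bedroom_set_lit.mb",  # hypothetical path to the lit set
    reference=True,
    namespace="set",
)
```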

Week_06: Character costume making

In this section, let’s discuss how to create a character’s pajamas in Marvelous Designer. I chose a nice-looking pajama set as my template. Essentially, pajamas break down into a loose shirt with the first button on the chest left unbuttoned. First, we import the character’s T-pose into MD as a sizing template for the clothing. Then, we create appropriate fabric panels based on the character’s body shape. Because earlier rigging issues deform my character’s shoulders, that area needs room to stretch, so I widened and heightened the shoulder panels of the fabric. This adjustment ensures there won’t be excessive stretching issues during animation.

Next, I discovered that MD has a feature for directly creating buttons on the fabric. However, I chose to use fake stitching to button up the shirt because using MD’s built-in buttons often led to garment breakdown during simulation. After multiple tests, I finalized this method.

After completing the character’s shirt and pants, we can try importing the character animation into MD in ABC format. Before importing, ensure at least 100 frames of blank space before the original animation, allowing the character to blend from the T-pose into the performance. This step greatly reduces the chance of sudden cloth distortions and saves re-dressing the character for every animation file.
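If the performance was keyed starting at frame 0, the padding can be added in MAYA before exporting the ABC. A minimal sketch, assuming the character’s animated controls are selected first; the 100-frame value matches the padding described above:

```python
# Shift every key on the selected controls 100 frames later, leaving a
# run-up in which the character can blend from T-pose into the action.
import maya.cmds as cmds

PAD = 100
ctrls = cmds.ls(selection=True)
if not ctrls:
    cmds.error("Select the character's animation controls first.")

cmds.keyframe(ctrls, edit=True, relative=True, timeChange=PAD)

# Extend the timeline so the shifted keys stay in view.
end = cmds.playbackOptions(query=True, maxTime=True)
cmds.playbackOptions(maxTime=end + PAD)
```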

Week_05: Preliminary completion of animation

This week, we gradually completed the retargeting of the motion capture data and began recording animation references based on real movements, modifying the existing animation accordingly. Since the motion capture data lacks hand detail, I need to complete this part of the animation in MAYA. However, due to a mistake I made during rigging, the extra rotation values on the character’s finger and elbow joints were never frozen, causing gimbal-lock issues throughout the animation. MAYA’s built-in curve filters can fix this, but doing so costs significant time on a 3700-frame animation.
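For anyone hitting the same flips: the built-in fix I mean is MAYA’s Euler filter, applied per affected control rather than across everything. A minimal sketch, assuming the flipping controls are selected:

```python
# Run the Euler filter on the animation curves of the selected controls
# to resolve 360-degree rotation flips caused by gimbal lock.
import maya.cmds as cmds

ctrls = cmds.ls(selection=True)
curves = cmds.keyframe(ctrls, query=True, name=True) or []

if curves:
    cmds.filterCurve(curves, filter="euler")
print(f"Filtered {len(curves)} animation curve(s).")
```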

Another point to note is that our motion capture was performed without props, so after retargeting the data to the character model, we need to create appropriately sized boxes based on our scene models. This ensures that each action can seamlessly integrate with the displayed scene in later stages.

At this step, we continue adjusting our animation and camera shots. As animators, we only need to ensure the animation reads correctly within the shot. However, the software often lags or crashes when selecting character controllers or loading plugins. In such cases, I recommend using MAYA’s built-in scene cleanup function to delete history and empty nodes within the scene, reducing the software’s load. I have always used MAYA2018, so some of these issues might be version-related. If possible, I would try a newer version of MAYA, but for now we’ll continue with 2018.

Returning to animation production, I find that motion capture serves as a more intuitive animation reference. We can’t use the motion capture data directly; it needs modification to conform to animation principles. Using motion capture data as a tool to observe action timing and rhythm is an excellent method. In this project, I mainly combined traditional and motion capture techniques to quickly produce the needed animation. In later stages, I will also need to make changes to certain animation segments, knowing the current version isn’t the final one. We still need to add facial expressions.

Week_04: Animation starts

This week, we focused on creating animation based on our storyboard. However, after completing some shots, I realized the animation production cycle was insufficient, so I decided to use motion capture to enhance my animation project. This approach ensures faster completion of the entire animation. Therefore, the production process is now divided into two tasks: continuing to refine my hand-keyed animation and using the school’s motion capture lab to record the remaining animation.

After obtaining the motion capture data, we need to retarget it to our characters. This is why I rig my models with the ADV plugin: its animation retargeting tools are very comprehensive, and we can easily bake the entire motion capture take onto our models. However, due to differences in skeleton proportions, naming conventions, and bone counts, the motion capture data won’t perfectly match our characters. This requires secondary modification of the animation, giving it more accurate motion trajectories that adhere to fundamental animation principles and adding follow-through to the character’s limb movements.
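Once the retarget is driving the rig, the bake itself is one command. A minimal sketch, where the `*_ctrl` naming pattern and the frame range are hypothetical placeholders for your own rig:

```python
# Bake the retargeted motion onto the control curves so the mocap can be
# edited afterwards like ordinary hand-keyed animation.
import maya.cmds as cmds

controls = cmds.ls("*_ctrl", type="transform")  # hypothetical naming pattern
if not controls:
    cmds.error("No controls matched; adjust the name pattern for your rig.")

cmds.bakeResults(
    controls,
    time=(1, 3700),     # the full animation range (example values)
    simulation=True,    # evaluate frame by frame for accurate results
    sampleBy=1,
)
```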

At this stage, I chose to roughly apply all motion capture data to the character models first and then work on the character’s facial expressions. This way, we can see the initial effect of the entire animation more quickly, usually taking about two weeks. So, if I don’t complete the entire animation this week, I will upload my animation video by next week at the latest.

Week_03: Character real hair production

This week, let’s try creating realistic hair for the characters. I recommend using MAYA’s built-in Xgen feature to generate hair. However, if needed, I can also share other hair creation and simulation plugins suitable for a full MAYA workflow.

Returning to the process, in this step we need to select a region on the character model where we want to grow hair. It’s important not to generate hair directly on the character’s original skin; the skin’s stretching and wrinkling during animation will cause the hair to scatter, so it’s better to duplicate the region as a separate scalp mesh.

When using the Xgen feature, remember to turn off the auto-refresh option to help keep the computer stable (nobody likes software crashes, right?). Here, I tested creating the character’s eyebrows to see if the model supports Xgen hair generation. After completing this, we can try some facial expressions to check if the hair effect meets our needs. I chose to keep this version of the changes.

Next, we move on to creating the character’s hairstyle. Here we can take a quicker route: fit a pre-made hair model to the character and, using MAYA’s curve extraction tool, pull the hair curves from it, saving us from shaping the hairstyle from scratch. Time is tight and the workload heavy.

Then, we use Xgen’s curves-to-guides conversion to turn the extracted curves into Xgen guides, allowing us to reach a realistic hair simulation more quickly. We need to fine-tune the shape of the guides to ensure the hair doesn’t intersect the scalp. Special attention is required for the sides and back of the head, as these areas are more prone to colliding with the body and clothing during animation, causing visual glitches.

Let’s finish today’s post with an emoji.

Week_02: Character model and texture making

This week, I started creating the character models. I purchased a suitable male base model from the model creator’s official website, which I can then modify and extend in ZBrush; this is an efficient method. I highly recommend that animators consciously collect animation-ready models in their spare time, so you don’t have to spend too much time hunting for models when a project starts.

However, in this animation project, I wanted the character model to look like me (since this project is largely a reflection of my own life). So I had to modify a standard male model to get the model I needed. It’s actually quite simple and doesn’t even require ZBrush: MAYA’s built-in modeling and sculpting tools can achieve similar results. This way, we can complete the whole process in one piece of software, making project management more convenient.

Next, I need to create the character’s texture maps. Typically, when making animation, we export the model directly in FBX format without paying attention to the character’s pose. Here, I recommend importing the character into Substance Painter in a T-pose. If we want higher-quality models with better detail, we need to bake the model in the software to obtain all the texture information, and if a character is baked in an A-pose, occlusion shadows may appear between the arms and the torso because of their proximity.

After that, the main task is to paint the character’s facial textures. For areas where the skin is stretched, use a deeper red color and blend some cool tones under the skin with a mask to represent the veins.

Then let’s relink the map back into the model and try rendering it in MAYA.
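For that relinking step, here is a minimal sketch from Maya’s Script Editor; the shader name and texture path are hypothetical, and the same pattern repeats for the roughness and normal maps:

```python
# Create a file-texture node, point it at the Substance Painter export,
# and plug it into the character's Arnold surface shader.
import maya.cmds as cmds

shader = "character_skin_mat"  # an existing aiStandardSurface (hypothetical)
tex = cmds.shadingNode("file", asTexture=True, isColorManaged=True)
cmds.setAttr(tex + ".fileTextureName",
             "textures/character_BaseColor.png", type="string")

cmds.connectAttr(tex + ".outColor", shader + ".baseColor", force=True)
```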