We have reached the final stage and now have the finished animation. Along the way, we learned how to build storyboards from scratch, create stylized character textures, add realistic hair simulation to the characters, and fold motion capture into the animation pipeline. Although we ran into problems during rendering, continued trial and error eventually produced a reasonably complete film. To improve the image quality, I used Topaz Video Enhance AI to upscale the resolution and denoise the entire video, leaving it at a quality suitable for playback on the academy's big screen.
In this animation, most of the sound effects revolve around the characters: the rustle of clothing, breathing, hands patting the bed, the phone alarm, taps on the phone screen, clothes being handled, footsteps and walking, plates set down on the table, sitting down, chairs being dragged, waving hands, crisps being eaten, pillow impacts, doors being pushed, wind, a heavy drop to the floor, doors slamming, a man's roar, doorknobs being twisted, cups sliding across the table, gasping, and doors being unlocked and closed, among others. These sounds bind the characters' actions and the environment into a cohesive whole. I voiced most of the sounds the characters themselves make, including speech and breathing.
Additionally, I layered in a tinnitus sound to represent the protagonist's somatic experience of depression and to echo the sudden noises that people with depression may experience in real life. I also added a phone ringtone, curious whether viewers would instinctively reach for their own phones on hearing it (isn't that exactly how our phones trap us?). The remaining sounds I sourced online and from my personal sound-effect library, then adjusted them one by one; adding effects to the sounds made them sit better in the animation. Building a personal sound library is well worth it: it speeds up production with essentially no downside.
For the background music, I wanted ethereal, rhythmic, slightly unsettling tracks to match the events that follow the character forgetting to take their medication. Near the end, I needed a piece that feels relaxed yet subtly tense, hinting that the story isn't over and that our protagonist may never have left their "room." To avoid copyright issues, I used the SUNO AI music-generation platform to create the music I needed. I had to generate roughly 40 tracks to get one perfect match, but since the output was free and usable commercially, that was acceptable. Using the platform is much like AI image generation: you enter relevant prompts, such as instrument types or genres, and it quickly produces candidate background music.
This week, I mainly worked on color grading and frame extraction for the animation. Let me introduce these parts of the work separately and highlight some important points to keep in mind:
For color grading, the most important thing is to keep the video's color tones consistent, so that no jarring shift breaks the viewer's immersion. The next step is to establish a primary tone for the whole animation. To do this, I applied a LUT (a look-up table: a preset color-grading transform) across the video, which restores the colors and gives every shot a unified tone.
[Figure: color grading comparison, before and after applying the LUT]
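For anyone curious what a LUT actually does under the hood, here is a minimal Python sketch of the idea: a 3D LUT is a cube of replacement colors indexed by the input RGB values. The file names are hypothetical, this uses nearest-neighbour lookup for brevity (real grading tools interpolate trilinearly between cube entries), and it assumes a standard `.cube` file:

```python
# Minimal sketch of applying a 3D LUT (.cube) to a single frame.
# File names are placeholders; grading software interpolates, we snap.
import numpy as np
from PIL import Image

def load_cube_lut(path):
    """Parse a .cube file into an (N, N, N, 3) array (red varies fastest)."""
    size, rows = 0, []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts or parts[0].startswith("#"):
                continue
            if parts[0] == "LUT_3D_SIZE":
                size = int(parts[1])
            elif len(parts) == 3:
                try:
                    rows.append([float(v) for v in parts])
                except ValueError:
                    pass  # skip non-numeric keywords such as TITLE
    return np.array(rows).reshape(size, size, size, 3), size

def apply_lut(img, lut, size):
    """Nearest-neighbour lookup: map each pixel to its LUT cube entry."""
    rgb = np.asarray(img, dtype=np.float32) / 255.0
    idx = np.clip(np.rint(rgb * (size - 1)).astype(int), 0, size - 1)
    out = lut[idx[..., 2], idx[..., 1], idx[..., 0]]  # .cube order: [B, G, R]
    return Image.fromarray((out * 255).astype(np.uint8))

frame = Image.open("shot01_f0001.png").convert("RGB")   # hypothetical frame
lut, size = load_cube_lut("room_tone.cube")              # hypothetical LUT
apply_lut(frame, lut, size).save("shot01_f0001_graded.png")
```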
As for frame extraction, I hoped the technique would give the animation an unreal, detached feeling, leaving viewers with a sense of oppression and unnaturalness; this is what I want to show, a somatic representation of depression. I want viewers not only to observe me but also to observe those living through these struggles and to feel their pain. To achieve the effect, I used frame extraction in After Effects (AE) so that the animation holds each image for three frames, while key shots hold each image for two.
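The same "on threes" hold can be reproduced outside AE. Here is a small Python sketch, assuming a folder of sequentially numbered rendered frames (the paths and hold count are illustrative): it keeps every third source frame and repeats it three times, so the sequence length is unchanged but motion steps instead of flowing.

```python
# Sketch: rebuild an image sequence "on threes" (each kept frame held 3x).
# SRC/DST paths are placeholders for a rendered PNG sequence.
import shutil
from pathlib import Path

SRC = Path("renders/shot01")        # hypothetical source frames
DST = Path("renders/shot01_on3s")   # hypothetical output folder
HOLD = 3                            # 3 output frames per kept source frame

DST.mkdir(parents=True, exist_ok=True)
frames = sorted(SRC.glob("*.png"))
out_index = 1
for frame in frames[::HOLD]:        # keep every third source frame...
    for _ in range(HOLD):           # ...and hold it for three output frames
        shutil.copy(frame, DST / f"frame.{out_index:04d}.png")
        out_index += 1
```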
This week, the main task was to inspect every animation shot individually and upload them all to the render farm. I ran into many issues, so let me walk through each one and its solution:
First, I added character outlines at render time, but this effect conflicted with the built-in motion blur in Maya's Arnold renderer, causing the outlines to disappear. So I disabled motion blur and decided to add the frame-extraction effect after rendering instead. This suits the story's intention, and in my tests the stepped playback also masks the unrealistic jitter caused by excessive elasticity and collisions in the characters' clothing simulation.
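For reference, the motion blur switch can also be flipped with a couple of lines of `maya.cmds` before submitting to the farm. This is a sketch based on the MtoA render options node; verify the attribute name against your MtoA version:

```python
# Sketch: disable Arnold's global motion blur so render-time outlines survive.
import maya.cmds as cmds

if cmds.objExists("defaultArnoldRenderOptions"):
    cmds.setAttr("defaultArnoldRenderOptions.motion_blur_enable", 0)
    print("Arnold motion blur disabled for this scene.")
```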
Second, while using the render farm, some shots were skipped entirely, which effectively halted the process. After investigating, I discovered that an invalid texture group in the original scene model caused Maya 2024 on the LCC computers to fail to read that specific texture. So whenever Maya reports a texture-related error, check it thoroughly.
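A quick check like the following sketch would have caught the bad texture before submission: it walks every file texture node and flags paths that don't resolve on disk (tokenised paths such as `<UDIM>` will also be flagged, so treat the output as a list of candidates to inspect):

```python
# Sketch: list file texture nodes whose image paths are missing on disk.
import os
import maya.cmds as cmds

for node in cmds.ls(type="file"):
    path = cmds.getAttr(node + ".fileTextureName")
    if not path or not os.path.exists(path):
        print(f"Check texture: {node} -> {path!r}")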
Third, some shots still refused to render even though the scene inspection showed no obvious problems. On closer examination, I found those shots had been used to test the XGen hair simulation, and simply deleting the groom left behind unresolved nodes that Maya 2024 couldn't process. I recommend Maya's built-in scene cleanup (Optimize Scene Size) to delete the unnecessary nodes; this resolved the rendering issue.
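The same cleanup can be scripted. This sketch mirrors part of what Optimize Scene Size does, deleting unknown nodes and dropping requirements for plugins that are no longer loaded; run it on a copy of the scene first:

```python
# Sketch: remove unknown nodes and stale plugin requirements left behind
# after deleting a test feature (e.g. an XGen groom).
import maya.cmds as cmds

unknown = cmds.ls(type=["unknown", "unknownDag"]) or []
if unknown:
    cmds.lockNode(unknown, lock=False)  # some leftover nodes arrive locked
    cmds.delete(unknown)

# Drop requirements for plugins the scene references but no longer uses
for plugin in cmds.unknownPlugin(query=True, list=True) or []:
    cmds.unknownPlugin(plugin, remove=True)
```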
In summary, I faced more issues than these, but I successfully rendered the entire project. Tackle problems head-on, research them, and a way through always appears. Here are some of the current rendered clips from the animation, using the first shot as an example:
Before this week, we completed the cloth simulation for the characters. The simulation itself ran well, but the material and elasticity settings made the clothing look strange, creating a disconnect between the clothes and the character animation; I still need to research how to avoid this. Now, on to building the scene:
The character's room comes from a scene pack I purchased on the UE marketplace, and it fits the story's vision for the bedroom, hallway, and kitchen. But how do we get UE scene files into Maya? Exporting directly as FBX means manually relinking every material in Maya. To get a scene model with its textures already linked, download the glTF export plugin from the UE marketplace instead. The glTF format, which Blender can read, carries the texture channels with it. The final step is to export the scene from Blender as FBX for Maya, and you'll find the scene textures arrive intact (see the sketch below).
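The Blender leg of this pipeline can be scripted rather than clicked through. This is a sketch with placeholder file paths; run it in Blender's scripting tab or via `blender --background --python`:

```python
# Sketch: import the glTF exported from UE, re-export as FBX for Maya,
# bundling the textures so Maya relinks them on import.
import bpy

bpy.ops.import_scene.gltf(filepath="bedroom_scene.gltf")  # hypothetical path
bpy.ops.export_scene.fbx(
    filepath="bedroom_scene.fbx",  # hypothetical path
    path_mode="COPY",              # copy texture files alongside the export
    embed_textures=True,           # embed them in the FBX itself
)
```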
For lighting, we use a full Maya workflow. For rendering quality, I chose the Arnold renderer, which I've used for a long time and with which I've rarely hit an unsolvable problem. In the lighting setup phase, I divided the areas that need light into the character's bedroom, hallway, kitchen, and living room. Pre-setting the lights in the scene means that when we later reference the scene into each animation file, we don't have to relight the characters one by one; from then on we only need to watch the reflections and lighting interplay between the characters and the environment.
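The referencing step itself is one call in `maya.cmds`. A sketch, with a hypothetical scene path and namespace:

```python
# Sketch: reference the pre-lit environment scene into a shot file,
# so the lights travel with it and names don't clash with the characters.
import maya.cmds as cmds

cmds.file(
    "scenes/room_lit.ma",  # hypothetical pre-lit environment scene
    reference=True,
    namespace="ENV",       # keeps environment/light names in their own space
)
```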