Week Eight: Identifying issues within the Blocking phase of animation

After completing the modifications to the character’s center of gravity, it’s time to make some finer adjustments to our animated character’s poses. At this stage, we must pay closer attention to the character’s performance in front of the camera, applying appropriate animation techniques so the animation flows smoothly and leans closer to a ‘true animation’ feel.

Once the character’s center of gravity adjustments are confirmed to be accurate, the twelve principles of animation become especially important (though they matter throughout production).

Poses are indeed vital! Animators must therefore craft each pose meticulously so that it reads clearly and lands with impact, while also matching the character’s dynamics and the rhythm of the animation.

Poses are crucial! Pay attention to the character’s left hand here. The original animation placed the left hand too close to the body, hiding the entire hand within the torso and destroying the hand’s silhouette. By turning the hand slightly outward and reshaping the animation curves, we make the hand’s movement more readable and the pose more appealing, since this opens up clear negative space in the silhouette.

In this section, we notice that the original action’s weight shift isn’t very distinct, or feels artificial. Introducing a larger change in the center of gravity, making the character’s step forward more ‘resolute,’ enhances the animation’s effect. The animation curves also become cleaner and better defined. (Keeping a constant eye on the curves throughout production really is the right approach!)
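The point about curve shaping can be made concrete with a small sketch. This is an illustrative example in plain Python, not tied to Maya or any DCC tool; the function names are my own.

```python
# Illustrative sketch: how reshaping the interpolation between two
# keyframes changes the feel of a motion. Function names are invented
# for this example; they are not part of any animation package.

def lerp(a, b, t):
    """Linear interpolation: constant speed, mechanical feel."""
    return a + (b - a) * t

def ease_in_out(a, b, t):
    """Smoothstep-style ease: slow out of the first pose, slow into
    the next - closer to how a weighted body accelerates."""
    s = t * t * (3.0 - 2.0 * t)  # smoothstep: 3t^2 - 2t^3
    return a + (b - a) * s

# Interpolate a hip translation from 0 to 10 units over 5 steps.
for i in range(6):
    t = i / 5.0
    print(f"t={t:.1f}  linear={lerp(0, 10, t):5.2f}  eased={ease_in_out(0, 10, t):5.2f}")
```

The eased version starts and ends slowly, which is the basic shape a polished animation curve tends toward before hand-adjusting tangents.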

Week 7: Creation of a Resolume Arena Project 

This week, we ventured into creating a Resolume Arena project. Through exploration, I discovered that it is an intriguing process that combines audio and video into impressive visual effects. Here are some fundamental steps I’ve summarized for creating a Resolume Arena project:

Familiarization with the Interface

The interface of Resolume Arena comprises multiple sections, including the menu bar, control tools, and thumbnails, among others. Each thumbnail represents an independent video clip. At this stage, you’ll notice a resemblance to Photoshop: you simply place existing effects on different layers and blend them together.

Triggering Clips

Beneath the menu bar is a horizontally arranged set of bars: these are my video clips. Clicking a thumbnail starts playback of that clip. However, it’s important to note that these clips are by default synchronized to the composition’s BPM (beats per minute).
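The BPM sync is easy to reason about with a little arithmetic. A minimal sketch, assuming a 30 fps output (the function name and fps value are my own, not Resolume’s):

```python
def beats_to_frames(bpm, beats, fps=30):
    """Return the frame count spanning `beats` beats at `bpm`,
    e.g. for lining a clip's length up with the composition tempo."""
    seconds_per_beat = 60.0 / bpm
    return beats * seconds_per_beat * fps

# At 120 BPM one beat lasts 0.5 s, i.e. 15 frames at 30 fps.
print(beats_to_frames(120, 1))   # 15.0
print(beats_to_frames(120, 4))   # 60.0  (one 4-beat bar)
```

This is why a clip triggered on the beat stays locked to the music: its playback is quantized to these beat boundaries rather than free-running.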

Mixing Clips

As previously mentioned, the interface is user-friendly. Each horizontal bar represents an independent layer, allowing me to switch between different thumbnails on the same layer. I also tried triggering clips on another layer, blending the two together.

Adding Effects

On the right side of the interface, there are several tabs including “Archive,” “Composition,” “Effects,” and “Sources.” I click on the “Effects” tab, select an effect, such as Bendoscope, and drag it to the appropriate position under the composition bar.

Adjusting Effects

The effect immediately alters the output video. Most effects have additional control parameters; for instance, Bendoscope has a slider that controls the number of divisions for the warping effect. I can adjust these parameters to achieve the desired outcome.

Week Seven: Let’s make progress by identifying the errors in our animations. 

At first glance, nothing may seem amiss. But trust me: the striding motion I’ve implemented here inadvertently makes the character resemble a lady struggling to take a step rather than a gentleman striding urgently off to war.

Therefore, it’s crucial to note that when we use character controllers for animating walk cycles, the master controller is designed to handle the overall rotation of the stride and the vertical movement of the body. The hip controller, on the other hand, mainly uses its Y-axis (vertical rotation) to support the stride. This was the oversight I made here.

When executing the walk cycle, the movements need not be so large in amplitude. As the leading foot comes down, the stride should lean toward the side of the lifted foot.

So, for the waist: it rotates toward the side of the front foot, reaching its maximum angle there.

For the center of gravity sub-controller: We can use it to adjust the overall tilt of the body.
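The hip behavior described above can be roughed in as a cyclic curve before hand-polishing. A minimal sketch in plain Python; the 8-degree amplitude is an assumed value, not a rule:

```python
import math

def hip_rotation_y(phase, max_angle=8.0):
    """Hip Y-rotation over one walk cycle.

    phase: 0..1 through the cycle; max_angle: degrees (assumed value).
    The hips rotate toward whichever foot is forward, peaking as that
    foot plants, so a sine keyed to the stride phase is a reasonable
    first pass before refining the curve by hand.
    """
    return max_angle * math.sin(2.0 * math.pi * phase)

for i in range(5):
    phase = i / 4.0
    print(f"phase {phase:.2f}: hip Y = {hip_rotation_y(phase):+.1f} deg")
```

The maxima at phase 0.25 and 0.75 correspond to the two foot plants, matching the note that the waist reaches its maximum angle toward the front foot.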

Action Choices

A good acting choice can elevate our animation, so we can depart from the reference to find poses that are more specific to the character. Here, we can see that the modified S-curve and the bent arms make the character’s silhouette more pronounced and visually appealing.

Week 6: Commence animation BLOCKING

Last week, we captured motion references that align with the dialogue and character actions. It’s now time to attempt the conversion of these into animation data. 

The primary focus should be on the stride and shoulder movements, while also ensuring the accurate emotional portrayal of the animated character.

Therefore, I have selected some BLOCKING passes of the roar that I consider very effective.

In this instance, we can observe the character extending their head forward, first dropping the chin down, then compressing it upward before opening again to complete one cycle of the mouth opening and closing, accompanied by a significant head movement. Concurrently, the head swings along a C-shaped arc that follows the tangent of the character’s face.

When the character bellows forcefully, the facial features first compress, then exaggerate into a wide-open position, with eyebrows raised and the mouth opening incredibly wide. The character’s head shakes slightly left and right during the roar, conveying a sense of intense effort. This detail is worth noting.

The character feels fear and lowers their head. Here, one can observe the character’s shoulders rising, with the entire body engaging in the motion.

Week 6: Artifact Project Model Rigging

We discovered that the motion capture data we had prepared could not be linked to the models we purchased, Apollo and Artemis. Our preliminary assessment suggests that the model creator may have frozen the bone transformations during the modeling process. However, directly unfreezing them could result in the loss of the model rigging. After careful consideration, we have decided to proceed with re-rigging the models.

Due to project time constraints, I will demonstrate a simplified rigging operation, utilizing MIXAMO’s auto-rigging feature to complete the body rigging, and employing the MAYA plugin ADV to finalize the facial rigging of the character.

During the MIXAMO rigging phase, we need to upload our model to the official website. After correctly setting the skeletal points, we can await the automatic completion of the model’s rigging.

After acquiring the model with body rigging, we need to upload the file into MAYA and activate our ADV plugin. Utilizing the MOTION CAPTURE feature, we convert the MIXAMO-based skeleton to an ADV skeleton, enabling us to leverage this universal skeleton to redirect our motion capture data. Moreover, to utilize UE’s facial technology, we must employ the ADV facial rigging system to create operations compatible with Apple’s ARKIT. Below is the operation video I recorded:

Following this, we can proceed with our project using the characters based on the ADV skeleton.
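Since the facial rig must be compatible with Apple’s ARKit for UE’s facial pipeline, the core of that step is matching the rig’s blendshape names to ARKit’s identifiers. A minimal sketch of such a mapping: the rig-side names on the left are made up for illustration, while the names on the right are real ARKit blend shape locations.

```python
# Hypothetical name map from a rig's facial blendshapes to ARKit's
# identifiers. Left-hand names are invented; right-hand names are
# actual ARKit (ARFaceAnchor) blend shape locations.
RIG_TO_ARKIT = {
    "mouth_open": "jawOpen",
    "blink_L":    "eyeBlinkLeft",
    "blink_R":    "eyeBlinkRight",
    "brow_up":    "browInnerUp",
    "smile_L":    "mouthSmileLeft",
    "smile_R":    "mouthSmileRight",
}

def retarget_weights(rig_weights):
    """Rename per-frame blendshape weights so an ARKit-driven face
    pipeline (e.g. UE via Live Link Face) can consume them."""
    return {RIG_TO_ARKIT[k]: w for k, w in rig_weights.items() if k in RIG_TO_ARKIT}

print(retarget_weights({"mouth_open": 0.8, "blink_L": 1.0}))
# {'jawOpen': 0.8, 'eyeBlinkLeft': 1.0}
```

ADV handles this wiring internally; the sketch only illustrates why the ARKit-compatible naming matters for the data to flow through.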

Week 5: Establishing Animation References and Key POSES

I have re-shot the video reference and have determined the primary Position of Significant Stance (POSS).

In selecting the POSS, I want the character, during the first line, to be in a state of shock and confusion, conveying the situation to the person opposite them. The second line carries a certain degree of anger. The final line is delivered with extreme anger and a hint of madness.

video reference

During the video reference shoot, my primary focus was to portray the three emotions mentioned above, incorporating a forward step to complete the transition between each emotional state. With each step the character takes, the emotional evolution intensifies, culminating in extreme anger. However, it seems that the shooting angle I used differs slightly from the angle in my reference.

Week 4: Final revisions for the BODY MACHINE animation

When there is excessive variation in hand movements during a downward swing, we need to make meticulous adjustments to the motion curve to ensure the continuity and precision of the action. By doing so, we can elevate the professional standard of the animation, making it appear more fluid and realistic.

In the final stages of the animation, adding some hang time effects can enhance the visual impact. This involves initiating the movement with the main body parts first, followed by the limbs. This technique not only adds to the fun and dynamism of the animation but also creates a visual effect of the limbs being dragged by the body. Such treatment makes the animation more vivid, realistic, and engaging for the audience.
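The “limbs dragged by the body” effect boils down to offsetting the follower’s keys behind the leader’s. A minimal sketch, assuming a simple per-frame list of values (the function name and two-frame lag are my own choices):

```python
def delayed_follow(leader_keys, lag_frames=2):
    """Offset a follower's keys behind the leader to fake drag:
    the limb reuses the body's value from `lag_frames` earlier,
    clamping at the first frame."""
    return [leader_keys[max(0, i - lag_frames)] for i in range(len(leader_keys))]

body = [0, 2, 5, 9, 12, 14, 15]          # body translation per frame
arm = delayed_follow(body, lag_frames=2)  # arm trails the body
print(arm)  # [0, 0, 0, 2, 5, 9, 12]
```

In practice you would shift keys in the Graph Editor rather than compute lists, but the principle is the same: the limb’s curve is the body’s curve, displaced later in time.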

In the end, we add the scene behind the character along with ambient lighting. Here is the final modified animation.

Week Five: Attempting to Create Video Motion Capture Data

This week, I learned and utilized a niche 3D animation software called CASCADEUR. It significantly speeds up animation production for novice animators: you only need to set a few simple poses to obtain a complete animation. The trade-off is that this step offers little manual control. For someone accustomed to creating animations in the MAYA interface, learning new software feels like a life-or-death challenge, but it’s a necessary skill to acquire, isn’t it?

Since we require motion capture data files, we need not concern ourselves with which character drives our animation. Here, I can directly use the official models provided. In this case, I am using the ‘CASCY’ character from the official model library.

In this step, we need to import the video that requires motion capture. I have imported a dance video as a resource that my character will need for subsequent projects. Next, we select the model’s skeleton and the video to be motion-captured within the project, then locate the MOCAP option on the far right of the interface. After this, we simply wait for the motion capture to complete. 

The entire process appears swift, but there are many precautions to consider. For instance, the person in the video should not wear overly loose clothing, and unlike true green-screen motion capture, video motion capture cannot account for movements hidden behind the body, producing bizarre computational results. Moreover, to prevent the character model from intersecting with the ground, the camera must remain stationary during filming. (By the time you read this, I will have recalculated seven times; every precaution mentioned here is an answer obtained through repeated attempts.)

Finally, let me show you a short clip of the final capture data:

Week Four: Character Costume Design and the Creation of 2.5D Textures for Models

 Regarding the Artifact project, Shihong and I plan to collaborate on an animation. My part involves a male character dressed in black, dancing in the dark, continually connecting with the female character through dance.

This week, I began attempting to create the costume needed by the male character in Artifact. After completing the costume mood board, I started searching for similar attire in Marvelous Designer. A cloak gives the character a sense of mystery, while the black fabric, by delineating the character, allows the audience to focus more on the character’s movements.

Displayed here are the character costumes created during this period. The second image shows an initial draft that I decided to abandon. Although it bears the semblance of ancient Asian attire, it does not align with the overall character design.

 Regarding the creation of 2.5D materials for the character, my concern is that using the original textures might not integrate well with the UE scene Shihong found, potentially disrupting the audience’s immersion. Therefore, I opted to remake the model’s textures and gradually learn how to use 2.5D materials.

Here are some details of my research process:

We need to use a plugin called Pencil+ 4. Simply dragging the installation package into MAYA makes it usable. However, this means the entire file can only be processed through the MAYA workflow, which I still need to discuss with Shihong. Once the plugin is installed, we can assign a dedicated Pencil+ 4 material to the target model, which makes organizing and building the 2.5D materials easier.

In the color bar located below, you can adjust the highlights, bright areas, color transitions, and shadows of our model. By configuring these settings, we can obtain a material that closely resembles skin tone. Using the same method, we can also set appropriate colors for the hair model. Some may notice that the model’s outline appears two-dimensional, akin to a drawn stroke. This is the convenience offered by this plugin: we can select the model object and give it an independent outline stroke, allowing for finer adjustments to the model’s overall stroke effect. Here, we need to right-click on the plugin to open the first window and choose to add ‘Pencil+ 4 Line.’
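Those color zones are the essence of cel (toon) shading: continuous lighting is quantized into a few flat bands. A minimal sketch of the idea in plain Python; the thresholds are assumed values, analogous to stops on a toon color ramp, and are not Pencil+ 4 settings.

```python
def cel_shade(intensity, bands=(0.25, 0.6, 0.9)):
    """Quantize a 0-1 lighting intensity into discrete tone zones:
    shadow, color transition, bright area, highlight. Threshold
    values here are illustrative assumptions."""
    zones = ["shadow", "transition", "bright", "highlight"]
    for i, threshold in enumerate(bands):
        if intensity < threshold:
            return zones[i]
    return zones[-1]

for v in (0.1, 0.4, 0.7, 0.95):
    print(v, "->", cel_shade(v))
```

Adjusting the plugin’s color bar is effectively moving these thresholds and choosing the flat color each band receives.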

By doing so, we unlock the stroke-related operation window. Clicking the plus sign under ‘LINE SET’ adds a line effect to the renderer, while the ‘OBJECTS’ option in the middle adds the models we want the stroke effect applied to. Consequently, we can create a character with clean strokes and a two-dimensional appearance.