Week 6: Commencing Animation BLOCKING

Last week, we captured motion references aligned with the dialogue and character actions. Now it is time to convert these into animation data.

The primary focus is on the stride and shoulder movements, while also ensuring the character’s emotions read accurately.

I have therefore selected a few reference clips with roars that I consider especially effective for BLOCKING.

In this clip, we can observe the character extending their head forward: the chin first drops, then compresses upward, then opens again, completing one cycle of the mouth opening and closing accompanied by a significant head movement. At the same time, the head swings through a C-shaped arc that follows the character’s facial tangent line.
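To check whether my own blocking reproduces that C-shaped arc, an editable motion trail in MAYA is handy. A minimal sketch, assuming a hypothetical head control named head_CTL:

```python
# Draw an editable motion trail for the head control so the arc of the
# swing can be inspected against the reference footage.
import maya.cmds as cmds

cmds.snapshot("head_CTL", motionTrail=True, increment=1,
              startTime=1, endTime=48)
```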

When the character bellows forcefully, the facial features first compress, then exaggerate into a wide-open position: the eyebrows rise and the mouth opens improbably wide. The character’s head also shakes slightly left and right during the roar, conveying a sense of intense effort. This detail is worth noting.

When the character feels fear and lowers their head, we can see the shoulders rise, with the entire body engaging in the motion.
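Since blocking is this week’s focus, here is a minimal sketch of how the key poses can be laid down in MAYA with stepped tangents, so each pose holds flat until the next key and the timing reads clearly. The controller names and frame numbers are hypothetical:

```python
# Key the main controls at each pose frame, then force stepped
# out-tangents so poses hold without interpolation (classic blocking).
import maya.cmds as cmds

pose_frames = [1, 12, 24, 30]  # frames of the roar's key poses
controls = ["head_CTL", "jaw_CTL", "shoulder_L_CTL", "shoulder_R_CTL"]

for ctrl in controls:
    for frame in pose_frames:
        cmds.setKeyframe(ctrl, time=frame)
    cmds.keyTangent(ctrl, time=(pose_frames[0], pose_frames[-1]),
                    outTangentType="step")
```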

Week 6: Artifact Project Model Rigging

We discovered that the motion capture data we had prepared could not be linked to the models we purchased, Apollo and Artemis. Our preliminary assessment suggests that the model creator may have frozen the bone transformations during the modeling process. However, directly unfreezing them could result in the loss of the model rigging. After careful consideration, we have decided to proceed with re-rigging the models.

Due to project time constraints, I will demonstrate a simplified rigging workflow: MIXAMO’s auto-rigging feature completes the body rigging, and the MAYA plugin ADV finalizes the character’s facial rigging.

During the MIXAMO rigging phase, we upload our model to the official website. After placing the skeleton markers correctly, we simply wait for the model’s rigging to complete automatically.

After acquiring the model with body rigging, we import the file into MAYA and activate the ADV plugin. Using the MOTION CAPTURE feature, we convert the MIXAMO-based skeleton to an ADV skeleton, which lets us use this universal skeleton to retarget our motion capture data. Moreover, to use UE’s facial technology, we must employ the ADV facial rigging system to create a setup compatible with Apple’s ARKIT. Below is the operation video I recorded:

Following this, we can proceed with our project using the characters based on the ADV skeleton.
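For completeness, the MAYA-side import step shown in the video can also be scripted. A minimal sketch, assuming the fbxmaya plugin that ships with MAYA and a hypothetical path to the MIXAMO export:

```python
# Load Maya's FBX plugin, then import the auto-rigged Mixamo FBX.
import maya.cmds as cmds
import maya.mel as mel

cmds.loadPlugin("fbxmaya", quiet=True)
mel.eval('FBXImport -f "C:/project/artifact/apollo_mixamo.fbx"')
```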

Week 5: Establishing Animation References and Key POSES

I have re-shot the video reference and determined the primary Positions of Significant Stance (POSS).

In selecting the POSS, I want the character to be in a state of shock and confusion while speaking the first sentence, conveying the situation to the person opposite them. The second sentence carries a certain degree of anger, and the final sentence is delivered with extreme anger and a hint of madness.

video reference

During the video reference shoot, my primary focus was to portray the three emotions mentioned above, incorporating a forward step to complete the transition between each emotional state. With each step the character takes, the emotional evolution intensifies, culminating in extreme anger. However, it seems that the shooting angle I used differs slightly from the angle in my reference.

Week 4: Final revisions for the BODY MECHANICS animation

When the hand movement varies excessively during a downward swing, we need to make meticulous adjustments to the motion curve to keep the action continuous and precise. Doing so raises the professional standard of the animation, making it appear more fluid and realistic.
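One hedged way to do that cleanup in MAYA, assuming a hypothetical wrist control: run the Euler filter to remove rotation flips, then simplify the curves so fewer, cleaner keys carry the swing.

```python
# Smooth a noisy wrist swing: Euler-filter the rotation curves, then
# reduce the key count within small time/value tolerances.
import maya.cmds as cmds

ctrl = "wrist_R_CTL"
curves = cmds.keyframe(ctrl, query=True, name=True) or []
cmds.filterCurve(*curves)  # default filter resolves Euler flips
for axis in "XYZ":
    cmds.simplify(ctrl, attribute="rotate" + axis,
                  timeTolerance=0.05, valueTolerance=0.01)
```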

In the final stages of the animation, adding some follow-through (‘hang time’) effects can enhance the visual impact. This involves initiating the movement with the main body parts first, followed by the limbs, as sketched below. The technique not only adds fun and dynamism but also creates the visual effect of the limbs being dragged by the body, making the animation more vivid, realistic, and engaging for the audience.
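A minimal sketch of that offset in MAYA terms, with hypothetical controller names: shifting every key on the limb controls a couple of frames later than the torso makes the limbs read as dragged.

```python
# Delay the limb keys by two frames relative to the torso so the body
# leads and the limbs follow through.
import maya.cmds as cmds

for ctrl in ["arm_L_CTL", "arm_R_CTL", "head_CTL"]:
    cmds.keyframe(ctrl, edit=True, relative=True, timeChange=2)
```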

Finally, we add the scene behind the character along with ambient lighting. Here is the final revised animation.

Week Five: Attempting to Create Video Motion Capture Data

This week, I learned and used a niche 3D animation software called CASCADEUR. It significantly speeds up animation production for novice animators: you only need to set a few simple poses to obtain a complete animation. However, this step offers little manual control. For someone accustomed to creating animations in the MAYA interface, learning new software feels like a life-or-death challenge, but it’s a necessary skill to acquire, isn’t it?

Since we require motion capture data files, we need not concern ourselves with which character drives our animation. Here, I can directly use the official models provided. In this case, I am using the ‘CASCY’ character from the official model library.

In this step, we need to import the video that requires motion capture. I have imported a dance video as a resource that my character will need for subsequent projects. Next, we select the model’s skeleton and the video to be motion-captured within the project, then locate the MOCAP option on the far right of the interface. After this, we simply wait for the motion capture to complete. 

The entire process looks swift, but there are many precautions to consider. For instance, the person in the video should not wear overly loose clothing, and unlike marker-based capture in a mocap studio, video motion capture cannot account for movements hidden behind the body, which produces bizarre computational outcomes. Moreover, to prevent the character model from intersecting with the ground, the camera must remain stationary while filming. (By the time you read this, I have recalculated seven times; each precaution mentioned here is an answer obtained through repeated attempts.)

Finally, let me show a short clip of the final capture data:

Week Four: Character Costume Design and the Creation of 2.5D Textures for Models

Regarding the Artifact project, Shihong and I plan to collaborate on an animation. My part involves a male character dressed in black, dancing in the dark, continually connecting with the female character through dance.

This week, I began attempting to create the costume needed by the male character in Artifact. After completing the costume mood board, I started searching for similar attire in Marvelous Designer. A cloak gives the character a sense of mystery, while the black fabric, by delineating the character, allows the audience to focus more on the character’s movements.

Displayed here are the character costumes created during this period. The second image shows an initial draft that I decided to abandon. Although it bears the semblance of ancient Asian attire, it does not align with the overall character design.

Regarding the creation of 2.5D materials for the character, my concern is that using the original textures might not integrate well with the UE scene Shihong found, potentially disrupting the audience’s immersion. Therefore, I opted to remake the model’s textures and gradually learn how to use 2.5D materials.

Here are some details of my research process:

We need a plugin called Pencil+ 4; simply dragging the installation package into MAYA makes it usable. However, this means the entire file can only be processed through the MAYA workflow, which I still need to discuss with Shihong. Once the plugin is installed, we can assign a dedicated material (a ‘material sphere’) to the target model, which makes organizing and creating 2.5D materials easier.
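The assignment itself is ordinary MAYA shading-group work. A minimal sketch, with a standard blinn standing in for the Pencil+ 4 material node and a hypothetical mesh name:

```python
# Create a dedicated shader and shading group, then assign the target
# mesh to it; in practice the blinn would be the Pencil+ 4 material.
import maya.cmds as cmds

shader = cmds.shadingNode("blinn", asShader=True, name="skin_toon_MAT")
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
               name=shader + "SG")
cmds.connectAttr(shader + ".outColor", sg + ".surfaceShader", force=True)
cmds.sets("hero_body_geo", edit=True, forceElement=sg)
```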

In the color bar located below, you can adjust the highlights, bright areas, color transitions, and shadows of the model. By configuring these settings, we can obtain a material that closely resembles skin tones. Using the same method, we can set appropriate colors for the hair model. Some may notice that the model’s outline appears two-dimensional, akin to a stroke. This is the convenience this plugin offers: we can select the model object and give it an independent outline stroke, allowing finer adjustments to the model’s overall stroke effect. Here, we need to right-click on the plugin to open the first window and choose to add ‘Pencil+ 4 Line.’

By doing so, we unlock the stroke operation window. Clicking the plus sign under ‘LINE SET’ adds a line effect to the renderer, while the ‘OBJECTS’ field in the middle is used to add the models we want the stroke effect applied to. With this, we can create a character with simple strokes and a two-dimensional appearance.

Week 3: Distribution of SPACE for Character’s Center of Gravity in 3D Animation

After addressing the issue of the velocity of the center of gravity movement, it’s time to resolve the spatial issue of the character’s center of gravity.

During the movement of the center of gravity, greater attention needs to be paid to spatial distribution, as the original spacing of the center of gravity is too arbitrary. We can use the Tween Machine plugin to readjust it, as sketched below.
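Tween Machine’s core idea can be sketched with plain maya.cmds: key a breakdown that sits a chosen percentage of the way between the surrounding keys. The controller name is hypothetical:

```python
# Set a breakdown key at the current frame whose value is `bias` of the
# way from the previous key's value toward the next key's value.
import maya.cmds as cmds

def tween(ctrl, attr, bias=0.33):
    now = cmds.currentTime(query=True)
    prev_t = cmds.findKeyframe(ctrl, attribute=attr, which="previous")
    next_t = cmds.findKeyframe(ctrl, attribute=attr, which="next")
    prev_v = cmds.keyframe(ctrl, attribute=attr, time=(prev_t, prev_t),
                           query=True, valueChange=True)[0]
    next_v = cmds.keyframe(ctrl, attribute=attr, time=(next_t, next_t),
                           query=True, valueChange=True)[0]
    cmds.setKeyframe(ctrl, attribute=attr, time=now,
                     value=prev_v + (next_v - prev_v) * bias)

tween("hips_CTL", "translateY", bias=0.25)  # favor the earlier pose
```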

Here, I scrapped the initial, ordinary starting action and introduced a preparatory (anticipation) action to set up the character’s subsequent on-the-spot stagger, thereby enhancing the fun of the animation.

Additionally, I incorporated more curvilinear motion into the movement of the center of gravity.

Week 3: Controlling UE with a Smartphone

This week, we delved into the use of LIVELINK technology within the UE software, where we created a virtual camera within the program and linked it to a smartphone. This setup allows us to use the smartphone to capture objects within the virtual scene. 

To achieve this, we need to activate the following plugins: Virtual Camera, Take Recorder, Live Link, Apple ARKit, and Remote Session.

For the rendering part of this project, the default settings are used, with the frame buffer pixel format set to 8-bit RGBA. Subsequently, within the project settings, locate the UDP Messaging section. In the Unicast Endpoint field, enter your computer’s IP address followed by ‘:0’; under Static Endpoints, add the smartphone’s IP address.
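Reconstructed as a DefaultEngine.ini excerpt (the IP addresses and the static-endpoint port here are placeholders, not values from our project):

```ini
[/Script/UdpMessaging.UdpMessagingSettings]
EnableTransport=True
; the computer's IP address with port 0
UnicastEndpoint=192.168.1.10:0
; the smartphone's IP address (port is a placeholder)
StaticEndpoints=192.168.1.20:6666
```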

Lastly, we create a virtual camera and open the Live Link VCAM app on the Apple smartphone to control the UE5 camera. Note that if the software and smartphone still cannot connect, try disabling the computer’s firewall.

Week Two: Revise the animation. POSS, POSS, or still POSS

I must admit, when I animate too quickly, I subconsciously forget many of the fundamental principles of animation. As the illustration shows, when the character takes a large stride, I neglected the use of curves. Incorporating ease-out into the stepping motion makes the action more forceful and swift.

When the character grabs the stick to perform a pendulum swing, the motion should also behave like a pendulum: the trajectory should ease in, accelerate, and then ease out, as sketched below.
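In MAYA tangent terms, that pendulum profile can be sketched like this, with a hypothetical control and frame numbers: flat tangents at the swing’s extremes give the eases, while the passing key stays spline so the arc accelerates through the bottom.

```python
# Flat tangents at the extremes ease in/out; spline at the passing
# position keeps the middle of the swing fast.
import maya.cmds as cmds

ctrl = "stick_CTL"
for extreme in (10, 30):              # the two ends of the swing
    cmds.keyTangent(ctrl, time=(extreme, extreme),
                    inTangentType="flat", outTangentType="flat")
cmds.keyTangent(ctrl, time=(20, 20),  # the fast passing position
                inTangentType="spline", outTangentType="spline")
```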

In the animation production process, greater attention must be paid to the character’s silhouette. Additionally, the character should not be completely side-facing the camera.

Week Two: Let’s Try Motion Capture

This week, we ventured into the motion capture lab to experiment with mocap technology. All of the capture data was ultimately brought together in the SHOGUN LIVE software. First, we ensured that all cameras were connected to the computer and performed spatial calibration according to the software guide, confirming that sensors within the capture area were correctly identified and configured.

Character Setup: In the software, we created or imported character models and performed the necessary rigging and settings.

Motion Capture: The process of using specialized equipment to record human movements, so these actions can be converted into digital information.

Data Processing: We adjusted and improved the captured motion data using software tools to ensure the accuracy and naturalness of the movements.

Data Export: The optimized motion data was saved and transferred, making it usable by other software, such as animation or game engines.

Thus, motion capture is about recording and optimizing human movements, and then transferring the data into other software for further use.