Final Artifact_Personal
Final Artifact_Collaboration
Thanks to the dedicated efforts of Shihong and myself, we managed to render the project in the final week. Along the way we ran into numerous technical challenges and made some adjustments to how the project performs. Here, I present the video:
Additionally, we have uploaded segments of our creation to YouTube, accompanied by various background music tracks.
During this semester, we embarked on an exploration of experimental animation and investigated various methods that could expedite the project production process. In terms of animation, we employed video motion capture technology to acquire our dance animations (after all, we truly cannot dance, yet we were eager to create animations of this genre). Apart from the dance animations, all were keyframe animations. Both methods are broadly similar in their production approach; however, video motion capture aids animators in achieving the desired results more swiftly (though the output quality may not be as high).
In this project, I attempted to integrate UE’s virtual camera technology. However, much like the ‘three-render-two’ (3D-rendered-as-2D) technique we ended up not using, it proved quite difficult to control and tended to hurt the final output quality. I hope to use it more effectively in future projects.
During this week, we began repairing our motion capture data and producing the specific animation sequences the narrative requires. It is important to acknowledge that data derived from video motion capture is inherently flawed, so it has to be imported into MAYA for refinement: recalibrating footstep placement and correcting distorted joints. Because every frame of the capture carries a key, we first identify the footsteps that should stay planted on the ground, then prune the erroneous frames extensively. Some frames bear no relation to our reference footage at all, so those ranges are deleted entirely and re-animated by hand; a small sketch of the scripted side of this cleanup follows below.
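To speed up the repetitive parts of this cleanup, I tried scripting it in MAYA. The sketch below is only a minimal outline: it assumes the baked mocap joints are currently selected, and the frame numbers are placeholders, since the exact range to clear and re-animate has to be read off your own reference footage.

```python
# Minimal Maya cleanup sketch (run in the Script Editor, Python tab).
# Assumes the baked mocap joints are selected; the frame range is a placeholder.
import maya.cmds as cmds

joints = cmds.ls(selection=True, type="joint") or []

# 1) Run an Euler filter on the rotation curves to fix flipped/distorted joints.
rot_curves = cmds.keyframe(joints, attribute=("rotateX", "rotateY", "rotateZ"),
                           query=True, name=True) or []
if rot_curves:
    cmds.filterCurve(rot_curves, filter="euler")

# 2) Clear keys in a span that does not match the reference video,
#    so that span can be re-animated by hand (placeholder range).
BAD_START, BAD_END = 240, 310
cmds.cutKey(joints, time=(BAD_START, BAD_END), clear=True)
```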
Today, we began exporting our Resolume Arena project. The output setup separates the composition into slices of different dimensions, which makes it easy to project graphics onto different screens.
Detailed Operation:
Projecting the Resolume Arena project file onto a physical wall surface through a projector is an invaluable technique, particularly for VJ artists and live performers. Let’s examine the step-by-step process to achieve this. Initially, we must ensure that our Resolume Arena project file is fully prepared, with all media content imported into the project. This is an essential prerequisite before commencing.
Subsequently, we need to configure the projector. This involves establishing the necessary hardware connections and verifying its operational status, which may include power connections and signal inputs. Next, connect the projector to the computer running Resolume Arena via HDMI or another compatible connection method. Consequently, our computer can transmit the image signal to the projector.
Within Resolume Arena, navigate to the Output menu, select Display Setup, and confirm that your projector is recognized as an output device. During this process, we may need to adjust the screen resolution and position to align with our projection area.
The most critical step was to use Resolume Arena’s mapping tools to fit the video content to the actual wall. This may include angle (keystone) correction, size adjustment, and so on, so that the projected image lines up exactly with the wall.
This week, we ventured into creating a Resolume Arena project. Through exploration, I discovered that it is an intriguing process that allows for the amalgamation of audio and video, culminating in impressive visual effects. Here are some fundamental steps I’ve summarized for creating a Resolume Arena project:
Familiarization with the Interface
The interface of Resolume Arena comprises multiple sections, including the menu bar, control tools, and thumbnails, among others. Each thumbnail represents an independent video clip. At this stage you’ll notice a resemblance to Photoshop: you simply place existing effects on different layers and blend them together.
Triggering Clips
Beneath the menu bar, I observed a horizontally arranged set of bars—these are my video clips. By clicking on a thumbnail, I initiate the playback of a clip. However, it’s important to note that these clips are by default synchronized with the BPM (Beats Per Minute).
Mixing Clips
As previously mentioned, the entire interface is user-friendly. Each horizontal bar represents an independent layer, allowing me to switch between different thumbnails on the same layer. I attempted to click on clips from another layer, blending them together.
Adding Effects
On the right side of the interface, there are several tabs including “Archive,” “Composition,” “Effects,” and “Sources.” I click on the “Effects” tab, select an effect, such as Bendoscope, and drag it to the appropriate position under the composition bar.
Adjusting Effects
The effect immediately alters the output video. Most effects have additional control parameters; for instance, Bendoscope has a slider that controls the number of divisions for the warping effect. I can adjust these parameters to achieve the desired outcome.
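Beyond clicking thumbnails, clips can also be triggered from outside the interface over OSC, which Resolume Arena supports once OSC input is enabled in its preferences. Below is a minimal python-osc sketch; the port number and the layer/clip address scheme are assumptions based on Arena’s documented defaults, so confirm them in your own OSC preferences before relying on this.

```python
# Minimal sketch: trigger a Resolume Arena clip over OSC.
# Assumes OSC input is enabled in Arena (Preferences > OSC) and listening on
# port 7000, and that the address layout matches Arena's documented scheme.
from pythonosc.udp_client import SimpleUDPClient

ARENA_IP = "127.0.0.1"   # machine running Resolume Arena
OSC_PORT = 7000          # check the OSC input port in Preferences > OSC

client = SimpleUDPClient(ARENA_IP, OSC_PORT)

def connect_clip(layer: int, clip: int) -> None:
    """Trigger ("connect") the clip in the given layer and column (1-based)."""
    client.send_message(f"/composition/layers/{layer}/clips/{clip}/connect", 1)

if __name__ == "__main__":
    connect_clip(1, 1)   # play the first clip on layer 1
```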
We discovered that the motion capture data we had prepared could not be linked to the models we purchased, Apollo and Artemis. Our preliminary assessment suggests that the model creator may have frozen the bone transformations during the modeling process. However, directly unfreezing them could result in the loss of the model rigging. After careful consideration, we have decided to proceed with re-rigging the models.
Due to project time constraints, I will demonstrate a simplified rigging operation, utilizing MIXAMO’s auto-rigging feature to complete the body rigging, and employing the MAYA plugin ADV to finalize the facial rigging of the character.
During the MIXAMO rigging phase, we need to upload our model to the official website. After correctly setting the skeletal points, we can await the automatic completion of the model’s rigging.
After acquiring the model with body rigging, we need to import the file into MAYA and activate our ADV plugin. Using the MOTION CAPTURE feature, we convert the MIXAMO-based skeleton to an ADV skeleton, which lets us use this universal skeleton to retarget our motion capture data. Moreover, to use UE’s facial technology, we must employ the ADV facial rigging system to build a setup compatible with Apple’s ARKIT. Below is the operation video I recorded:
Following this, we can proceed with our project using the characters based on the ADV skeleton.
This week, I learned and used a niche 3D animation package called CASCADEUR. It significantly reduces animation production time for novice animators: you only need to set a few simple poses to obtain a complete animation. However, the in-between result is largely out of your control. For someone accustomed to animating in the MAYA interface, learning new software feels like a life-or-death challenge, but it’s a necessary skill to acquire, isn’t it?
Since we require motion capture data files, we need not concern ourselves with which character drives our animation. Here, I can directly use the official models provided. In this case, I am using the ‘CASCY’ character from the official model library.
In this step, we need to import the video that requires motion capture. I have imported a dance video as a resource that my character will need for subsequent projects. Next, we select the model’s skeleton and the video to be motion-captured within the project, then locate the MOCAP option on the far right of the interface. After this, we simply wait for the motion capture to complete.
The entire process looks quick, but there are many precautions to consider. For instance, the character in the video should not wear overly loose clothing, and unlike true green-screen motion capture, video motion capture cannot account for movements hidden behind the body, which produces bizarre computational results. Moreover, to prevent the character model from intersecting with the ground, the camera must be kept stationary during filming. (By the time you read this, I have re-run the solve seven times; every point of caution mentioned here is an answer obtained through repeated attempts.)
Finally, let me show you a short clip of the final capture data:
Regarding the Artifact project, Shihong and I plan to collaborate on an animation. My part involves a male character dressed in black, dancing in the dark, continually connecting with the female character through dance.
This week, I began attempting to create the costume needed by the male character in Artifact. After completing the costume mood board, I started searching for similar attire in Marvelous Designer. A cloak gives the character a sense of mystery, while the black fabric, by delineating the character, allows the audience to focus more on the character’s movements.
Displayed here are the character costumes created during this period. The second image shows an initial draft that I decided to abandon. Although it bears the semblance of ancient Asian attire, it does not align with the overall character design.
Regarding the creation of 2.5D materials for the character, my concern is that using the original textures might not integrate well with the UE scene Shihong found, potentially disrupting the audience’s immersion. Therefore, I opted to remake the model’s textures and gradually learn how to use 2.5D materials.
Here are some details of my research process:
We need to use a plugin called Pencil+ 4. Simply dragging the installation package into MAYA makes it usable. However, this means that the entire file can only be processed using the MAYA workflow, which I still need to discuss with Shihong. Once the plugin is installed, we can assign a unique material sphere to the target model, which facilitates the organization and creation of 2.5D materials.
In the color bar located below, you can adjust the highlights, bright areas, color transitions, and shadows of our model. By configuring these settings, we can obtain a material sphere that closely resembles skin color. Using the same method, we can also set the appropriate colors for the hair model. Some may notice that the model’s outline appears to be two-dimensional, akin to a stroke. This is the convenience offered by this plugin; we can select the model object and give it an independent outline stroke, allowing for finer adjustments to the model’s overall stroke effect. Here, we need to right-click on the plugin to open the first window and choose to add ‘Pencil+4 Line.’
By doing so, we unlock the operation window related to strokes. Clicking the plus sign under ‘LINE SET’ allows us to add a line effect to the system renderer, while the ‘OBJECTS’ option in the middle is used to add the models that we want to apply the stroke effect to. Consequently, we can create a character with simple strokes and a two-dimensional appearance.
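As a side note, the step of assigning a unique material sphere to the target model can also be scripted rather than done through the Hypershade. The sketch below uses a plain lambert purely as a stand-in, because I have not verified Pencil+ 4’s exact material node type; swap in the plugin’s node where noted.

```python
# Minimal sketch: give the selected meshes their own shader and shading group.
# A plain lambert stands in for the Pencil+ 4 material node, whose exact
# node type I have not verified.
import maya.cmds as cmds

def assign_unique_material(name="skinToon", node_type="lambert"):
    shader = cmds.shadingNode(node_type, asShader=True, name=name)
    sg = cmds.sets(renderable=True, noSurfaceShader=True,
                   empty=True, name=name + "SG")
    cmds.connectAttr(shader + ".outColor", sg + ".surfaceShader", force=True)

    meshes = cmds.ls(selection=True, dag=True, type="mesh", noIntermediate=True)
    if meshes:
        cmds.sets(meshes, edit=True, forceElement=sg)
    return shader

assign_unique_material("skinToon")
```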
This week, we delved into the use of LIVELINK technology within the UE software, where we created a virtual camera within the program and linked it to a smartphone. This setup allows us to use the smartphone to capture objects within the virtual scene.
To achieve this, we need to activate the following plugins: Virtual Camera, Take Recorder, Live Link, Apple ARKit, and Remote Session.
For the rendering part of this project, the default settings are used, with the frame buffer pixel format set to 8-bit RGBA. Next, within the project settings, locate the UDP Messaging section and its unicast endpoint. For the unicast endpoint, enter the computer’s IP address (followed by ‘:0’ in my setup), and then add the smartphone’s IP address as a static endpoint.
Lastly, we need to create a virtual camera and open the Apple smartphone LiveLink Vcam to control the UE5 camera. However, it is important to note that if the software and smartphone remain unable to connect, the user should try disabling the computer’s firewall to establish a connection.
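For reference, the UDP Messaging values described above end up in the project’s DefaultEngine.ini. The snippet below is only an illustration with placeholder addresses (192.168.1.10 for the computer, 192.168.1.23 for the phone); the section and key names are what I would expect for UE5’s UDP Messaging plugin, so double-check them against your own config.

```ini
; Illustrative DefaultEngine.ini excerpt with placeholder addresses.
; 192.168.1.10 = computer running UE5, 192.168.1.23 = iPhone running Live Link Vcam.
[/Script/UdpMessaging.UdpMessagingSettings]
EnableTransport=True
UnicastEndpoint=192.168.1.10:0
+StaticEndpoints=192.168.1.23:6666
```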
This week, we ventured into the motion capture lab to experiment with mocap technology. All of the motion capture data was ultimately brought together in the SHOGUN LIVE software. First, we made sure all cameras were connected to the computer and performed spatial calibration according to the software guide, ensuring that sensors within the capture area were correctly identified and configured.
Character Setup: In the software, we created or imported character models and performed the necessary rigging and settings.
Motion Capture: The process of using specialized equipment to record human movements, so these actions can be converted into digital information.
Data Processing: We adjusted and improved the captured motion data using software tools to ensure the accuracy and naturalness of the movements.
Data Export: The optimized motion data was saved and transferred, making it usable by other software such as animation packages or game engines (a small import sketch follows at the end of this post).
Thus, motion capture is about recording and optimizing human movements, and then transferring the data into other software for further use.
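On the export step specifically, the resulting FBX can be pulled into MAYA with a couple of lines instead of going through the import dialog each time. The file path below is a placeholder for wherever the Shogun export was saved.

```python
# Minimal sketch: import an exported mocap FBX into Maya for cleanup.
# The file path is a placeholder for wherever the Shogun export was saved.
import maya.cmds as cmds
import maya.mel as mel

cmds.loadPlugin("fbxmaya", quiet=True)   # make sure the FBX plugin is loaded
mel.eval('FBXImport -f "D:/mocap/session01_take03.fbx"')
```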