Week 12

Quick fixes based on feedback:

Dr. Kennedy was mostly pleased with the overall results I showed him last week. However, he did point out a couple of places that could be refined:


Eyes position:

Around frame 120, the eyes would look better if they were moved slightly more toward the center. This would make it clearer that Miri is "looking straight forward" as she should be focused directly on Fred in this moment. Also, pushing the eyelids higher would help convey the surprised/shocked expression in the scene.

When I was adjusting the eye placement, I focused too much on balancing the amount of eye white on each side, which led to the eyes being slightly off-center compared to the reference footage. 
Before

After

Similarly, around frame 136, the eye placement was too high; it should sit closer to the center.
Before

After


Floaty feet:

Between frames 22-26, where the center of gravity shifts from the left leg to the right, the feet seem to float a bit, especially during frame 24. Even though Dr. Kennedy mentioned it wasn’t noticeable from the SHOTCAM, it’s still best to fix it regardless.

Dr. Kennedy pointed out that in a work environment, we never know what angle the director might ultimately choose, and something that’s not noticeable in the original SHOTCAM could become a big issue from a different camera angle. So, it’s always a good idea to double-check and fix any problems we spot from multiple angles. (Totally agree!)


Fixing the issue was quick and easy. I just adjusted the timing of the left foot by shifting its keys a few frames later, so it lifts right after the right foot is fully in contact with the floor.
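The fix above is just a timing offset on the foot's keys. As a minimal sketch outside Maya (the frame numbers and values here are made up for illustration, not taken from the shot), the idea looks like this:

```python
# Sketch of the timing fix: delay the left foot's lift keys so the lift
# begins only after the right foot's contact frame. All frame numbers
# and values below are hypothetical.

def shift_keys(keys, offset):
    """Return a copy of a {frame: value} key dict with every frame moved by offset."""
    return {frame + offset: value for frame, value in keys.items()}

# Hypothetical left-foot translateY keys: the foot starts lifting at frame 22.
left_foot_ty = {22: 0.0, 24: 2.5, 26: 0.0}

right_foot_contact_frame = 24  # frame where the right foot is fully planted

# Delay the lift so it begins one frame after the right foot plants.
first_lift_frame = min(left_foot_ty)
offset = (right_foot_contact_frame + 1) - first_lift_frame
fixed = shift_keys(left_foot_ty, offset)

print(fixed)  # {25: 0.0, 27: 2.5, 29: 0.0}
```

In Maya the same thing is a relative time shift on the selected foot keys; the sketch just shows why the offset is computed from the contact frame.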


Lip Balm Fix:

I’m used to animating objects being passed from one hand to another using two controllers, one constrained to the left hand, the other to the right, and then just switching the constraint weight to complete the handoff. I was planning to set up a support controller for the lip balm to manage the constraint switching. But while exploring Animbot, I came across the 'Copy & Paste Xform Relationship' function, which is designed for animating the holding and releasing of objects, so I figured I’d give it a try.
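The two-controller setup I'm used to boils down to the prop following a weighted blend of two parents, with the weights flipping at the handoff frame. A plain-Python sketch of that idea (not actual Maya or Animbot code; names, positions, and the handoff frame are made up):

```python
# Sketch of constraint-weight switching for a prop handoff: the prop's
# position is a weighted blend of the two hand positions, and the
# (left, right) weights step-switch at the handoff frame.

def handoff_weights(frame, handoff_frame):
    """Step-switch the (left, right) constraint weights at the handoff."""
    return (1.0, 0.0) if frame < handoff_frame else (0.0, 1.0)

def prop_position(frame, left_hand_pos, right_hand_pos, handoff_frame):
    """Blend the two hand positions by the current constraint weights."""
    w_left, w_right = handoff_weights(frame, handoff_frame)
    return tuple(
        w_left * l + w_right * r
        for l, r in zip(left_hand_pos, right_hand_pos)
    )

# Hypothetical example: before frame 65 the lip balm sticks to the left
# hand; from frame 65 on, it follows the right hand.
left, right = (0.0, 10.0, 0.0), (5.0, 12.0, 1.0)
print(prop_position(64, left, right, handoff_frame=65))  # (0.0, 10.0, 0.0)
print(prop_position(65, left, right, handoff_frame=65))  # (5.0, 12.0, 1.0)
```

The step switch is why the weights must flip on a frame where both hands are at the same spot, otherwise the prop pops; that's also the part Animbot's Xform function ended up handling differently for me.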

I found that using "Paste Xform Relationship Next Frame" works best most of the time. It gives me more flexibility in adjusting the minor positions and rotations of the lip balm, which matters since there's quite a lot of fidgeting and passing movement in the performance.

Animating the lip balm being passed from one hand to the other ended up being pretty time-consuming. The basic premise was straightforward, but it was tricky to get everything to feel just right, especially between frames 63-69 and 158-185. There wasn’t much useful information in the reference footage during those parts, so I had to rely mostly on the finger gestures and retrace the movements myself.

I also reenacted the gestures with a real lip balm to get a better sense of how it might’ve moved in the reference footage. This was super helpful; about 80% of the animation ended up being based on my reenactments.


After finishing the lip balm animation, I can confidently say frames 154-186 took the longest to refine. The influence of both hands on the lip balm at the same time was quite challenging to animate at first. I used "Paste Xform Relationship Next Frame" to get a rough pass of the timing, then slowly refined the position and rotation frame by frame.


Reflection:

Fixing the eyes and feet was really quick; I managed to get it done in about an hour. It’s nice to see how much more confident I’ve become with cleaning up mocap data. I can now easily spot which controller is causing the problem and use my experience to fix it quickly.

After 12 weeks of motion capture classes, I still find the process fun and rewarding. I’m especially happy I got assigned this particular frame range for the Miri character performance. She’s so expressive in both her face and body, which made it challenging to get everything feeling just right, but that made the end result even more satisfying.

Using Faceware Analyzer was pretty confusing at first. Some tracker points were difficult to keep consistent due to lighting and large movements, especially around the mouth. But I’m glad I didn’t rush through the tracking because the extra attention definitely paid off in the retargeting stage.

That said, I wasn’t completely satisfied with Faceware Retargeter. Occasionally, the controllers would pick up random movements, even when the tracker data seemed stable in Analyzer. The eye retargeting, in particular, was noticeably worse compared to other parts of the face, and I couldn’t quite figure out why. On the bright side, the mouth, which I expected to be the worst, actually came out about 80% right after the first retargeting, which was a nice surprise given how much trouble I had with the eyes and brows.

Refining the retargeted data was a bit repetitive, but still enjoyable, especially with Animbot’s help. It made cleaning up frames and smoothing out curves much easier. I’m also really glad I didn’t skip the 'Copy & Paste Xform Relationship' function. It worked well for animating the continuous movement of a prop. Of course, it’s not a magical one-click solution for my case; I still had to manually adjust most frames. But it’s definitely a technique I’ll be adding to my animation skill set.

Overall, this semester’s motion capture class was way more informative and fun than the last one, where we focused on body mocap and VFX. I found refining retargeted data to be the most rewarding part. It’s a great way to improve my animation skills in Maya and an excellent practice for nailing animation timing in general.
