Week 12
Quick fixes based on feedback:
Eye position:
Around frame 120, the eyes would look better if they were moved slightly more toward the center. This would make it clearer that Miri is "looking straight forward" as she should be focused directly on Fred in this moment. Also, pushing the eyelids higher would help convey the surprised/shocked expression in the scene.
| After |
Similarly, around frame 136, the eyes were placed too far up; they should sit closer to the center instead.
| Before |
| After |
Floaty feet:
Between frames 22-26, where the center of gravity shifts from the left leg to the right, the feet seem to float a bit, especially during frame 24. Even though Dr. Kennedy mentioned it wasn’t noticeable from the SHOTCAM, it’s still best to fix it regardless.
Dr. Kennedy pointed out that in a work environment, we never know what angle the director might ultimately choose, and something that’s not noticeable in the original SHOTCAM could become a big issue from a different camera angle. So, it’s always a good idea to double-check and fix any problems we spot from multiple angles. (Totally agree!)
Lip Balm Fix:
I found that using "Paste Xform Relationship Next Frame" works best most of the time. It gives me more flexibility in adjusting minor positions and rotations of the lip balm, since the performance includes quite a lot of fidgeting and passing movements.
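The core idea behind pasting an xform relationship is: capture the prop's transform relative to the hand on one frame, then reapply that same offset on the next frame so the prop follows the hand. As a conceptual sketch only (this is not Maya's or Animbot's actual implementation, and the function names here are hypothetical), it could look like this with plain 4x4 matrices:

```python
# Conceptual sketch of "paste xform relationship": hold the prop's
# offset relative to the hand from a source frame and reapply it on
# the next frame. Matrices are 4x4 row-major nested lists.
# All names are hypothetical illustrations, not a real Maya/Animbot API.

def mat_mul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rigid_inverse(m):
    """Invert a rigid (rotation + translation) 4x4 matrix."""
    r = [[m[j][i] for j in range(3)] for i in range(3)]  # transposed rotation
    t = [-sum(r[i][j] * m[j][3] for j in range(3)) for i in range(3)]
    return [r[0] + [t[0]], r[1] + [t[1]], r[2] + [t[2]], [0, 0, 0, 1]]

def paste_relative(hand_src, prop_src, hand_dst):
    """Return the prop's new world transform on the destination frame,
    keeping its source-frame offset relative to the hand."""
    offset = mat_mul(rigid_inverse(hand_src), prop_src)  # prop in hand space
    return mat_mul(hand_dst, offset)                     # back to world space
```

For example, if the prop sits at (1, 2, 3) relative to a hand at the origin, and the hand moves to (5, 0, 0) on the next frame, `paste_relative` places the prop at (6, 2, 3), preserving the grip.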
I also reenacted the gestures with a real lip balm to get a better sense of how it might have moved in the reference footage. That turned out to be super helpful; about 80% of the animation ended up being based on my reenactments.
Fixing the eyes and feet was really quick; I managed to get it done in about an hour. It's nice to see how much more confident I've become with cleaning up mocap data. I can now easily spot which controller is causing the problem and use my experience to fix it quickly.
After 12 weeks of motion capture classes, I still find the process fun and rewarding. I’m especially happy I got assigned this particular frame range for the Miri character performance. She’s so expressive in both her face and body, which made it challenging to get everything feeling just right, but that made the end result even more satisfying.
Using Faceware Analyzer was pretty confusing at first. Some tracker points were difficult to keep consistent due to lighting and large movements, especially around the mouth. But I’m glad I didn’t rush through the tracking because the extra attention definitely paid off in the retargeting stage.
That said, I wasn’t completely satisfied with Faceware Retargeter. Occasionally, the controllers would pick up random movements even when the tracker data seemed stable in Analyzer. The eye retargeting, in particular, was noticeably worse than other parts of the face, and I couldn’t quite figure out why. On the bright side, the mouth, which I expected to be the worst, actually came out about 80% right after the first retargeting pass, which was a nice surprise given how much trouble I had with the eyes and brows.
Refining the retargeted data was a bit repetitive, but still enjoyable, especially with Animbot’s help. It made cleaning up frames and smoothing out curves much easier. I’m also really glad I didn’t skip the 'Copy & Paste Xform Relationship' function. It worked well for animating the continuous movement of a prop. Of course, it’s not a magical one-click solution in my case; I still had to manually adjust most frames, but it’s definitely a technique I’ll be adding to my animation skill set.
Overall, this semester’s motion capture class was way more informative and fun than the last one, where we focused on body mocap and VFX. I found refining retargeted data to be the most rewarding part. It’s a great way to improve my animation skills in Maya and an excellent practice for nailing animation timing in general.