Sunday, May 14, 2017

Character Simulation

We had two classes discussing character simulation, but honestly I think this field seems more applicable to robotics than to animation. In my opinion, there are two main reasons for this. First, the simulation computation is still complicated, and simulated characters still move unnaturally. The usual objective of character simulation is to make the character decide which motion to take given the current environment or state, and this somewhat results in stiffness in the characters' movement; they look more like robots than living characters to me. In the case of animation, I think enriching the characters with vividness matters more than rigid mechanical correctness. The second reason is that character simulation usually needs multiple techniques anyway, like IK, state machines, and often motion capture, which is itself quite mature and can be combined with the other techniques to generate satisfying animation results. So I didn't really understand why we need so much research effort on this method, when the same task can be accomplished more easily another way.

Nonetheless, I'm still quite interested in the research involving machine learning, where the character's movement can improve over time, as long as the samples provide accurate information during simulation. The results from that paper were quite impressive.

Rendering

The semester has ended, but I still want to go back and revisit the discussions that I couldn't write up in time. Before the discussion of VR technology, we had a class about rendering technologies, which I'm not familiar with. Fellow students from animation backgrounds mention rendering all the time, but all I knew was that it is the process that decides the color and light intensity of each pixel when projecting a 3D model onto a 2D image, so that it can be displayed on the screen.

It seems that the computational cost of rendering is very expensive, or there wouldn't be a term like "render farm", and I have heard from fellow students that it can take several hours to render a single frame. They were rendering an animation project, but that speed is definitely not practical for the game industry, where rendering has to be real time. The first paper we talked about was a simplification designed for games. The BRDF (Bidirectional Reflectance Distribution Function), which describes how much light is reflected toward different directions, was approximated with Linearly Transformed Cosines (LTC): a simple cosine lobe warped by a linear transformation. Ubisoft has implemented this method in its product, and the results look satisfying, although there are no shadows. Honestly, I hadn't even noticed that.
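Out of curiosity, I tried to write down the LTC idea as I understood it: take a clamped cosine lobe and warp it with a 3x3 matrix M. This is just a minimal sketch of the math; the identity matrix below is a placeholder, not a fitted BRDF table:

```python
import numpy as np

def clamped_cosine(w):
    # Original distribution D_o: a cosine lobe around +Z, zero below the horizon.
    return max(w[2], 0.0) / np.pi

def ltc_density(w, M):
    # Density of the linearly transformed distribution at unit direction w:
    # D(w) = D_o(normalize(M^-1 w)) * |det(M^-1)| / ||M^-1 w||^3
    Minv = np.linalg.inv(M)
    wo = Minv @ w
    length = np.linalg.norm(wo)
    return clamped_cosine(wo / length) * abs(np.linalg.det(Minv)) / length**3

# Example: the identity matrix leaves the cosine lobe unchanged.
w = np.array([0.0, 0.0, 1.0])
print(ltc_density(w, np.eye(3)))  # ~ 1/pi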


There are also many efforts to reduce rendering costs in movie production. One used a sorting method to speed up the ray tracing process. Although the concept is very simple, it is surprisingly effective.
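I don't remember the exact scheme from the paper, but the flavor, as I understood it, is to batch rays and sort them so that rays starting near each other and pointing the same way get traced together, which should be friendlier to the cache. The sort key below is purely my own illustrative stand-in:

```python
import numpy as np

def coherence_key(origin, direction, cell=1.0):
    # A crude sort key: quantized origin cell plus the dominant axis of the
    # direction, so rays starting near each other and pointing the same way
    # end up adjacent in the traversal order.
    cx, cy, cz = (int(c // cell) for c in origin)
    axis = int(np.argmax(np.abs(direction)))
    return (cx, cy, cz, axis)

def sort_rays(rays):
    # rays: list of (origin, direction) pairs as numpy arrays
    return sorted(rays, key=lambda r: coherence_key(r[0], r[1]))
```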
We also looked through some special techniques for rendering scratches, cloth, and sand. They all look so impressive. I'd like to look into them more when I have time.

VR! Always!

As I'm majoring in entertainment technology, I was massively exposed to VR platforms last semester, so I know the magic that VR can perform, and I also know the limitations that current VR devices have. Even so, the topics we discussed in the Tech Animation classes were still new to me.

The first topic was the device itself. I had never carefully considered the working principles of VR devices. One research project claimed that the sickness caused by VR devices mainly comes from the stereo system of the eyes, so the researchers found a way to reduce the disparity between the left and right eyes. The smaller difference reduces sickness while maintaining recognizable depth. However, as I mentioned in class, the sickness I experienced mainly came from the mismatch of velocity between the virtual world and the physical world. Nancy mentioned the acceleration added to 3D theme park rides to reduce motion sickness. Actually, I feel zero sickness on those rides. So I'm a bit curious: does the mismatch of acceleration not matter at all?
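To make the idea concrete for myself, here's a toy pinhole stereo model (my own sketch, not the paper's actual method): disparity grows with the interpupillary distance and shrinks with depth, so scaling it down softens the stereo strain while preserving the depth ordering:

```python
def screen_disparity(ipd, focal, depth, reduction=0.6):
    # Simple pinhole stereo model: disparity = IPD * focal / depth.
    # 'reduction' < 1 shrinks the left/right difference while keeping
    # the relative depth ordering intact.
    return reduction * ipd * focal / depth

# A nearer point still yields a larger disparity than a farther one:
print(screen_disparity(0.064, 800, 1.0))  # near
print(screen_disparity(0.064, 800, 5.0))  # far
```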

Another interesting research project found a way to map paths in the virtual world into a much smaller physical space. It uses some tricks to guide the user to turn in certain directions, so the user feels like they are exploring a very large area, while the paths actually reuse the physical space repeatedly.
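If I had to guess at the mechanism (this is my own sketch, not the paper's implementation), it would be something like applying a small gain to the user's head rotation, so the virtual heading drifts away from the physical one and the physical path curls back into the room:

```python
def redirect_yaw(virtual_yaw, physical_yaw_delta, gain=1.15):
    # Amplify (or dampen) physical rotation slightly; if the gain stays
    # below the user's detection threshold, the extra turn goes unnoticed
    # while the physical walking path bends back into the tracked space.
    return virtual_yaw + gain * physical_yaw_delta
```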



I found this interesting because it is indeed a practical problem. If you look at the VR games on the market, very few of them deal with the physical movements of the player. Given the limited physical space, how do you fit in the whole game world? To me, this remains unsolved, given the limitations of this research: the ratio of step length in the game to step length in the real world is not resolved, so the player cannot run, or even walk, naturally.

Another interesting research project makes use of a treadmill, so the player can actually walk and run in the game. There is also a belt tied around the player's waist to pull them backwards. This simulates the component of gravity that resists a person walking up a slope. They have also researched a wind system that redirects air to blow from in front of the player to simulate a real natural environment.
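The belt idea makes sense if you work out the slope force: walking up an incline of angle theta, the component of gravity along the walking direction is m * g * sin(theta), which is what the belt would need to pull with. A quick sanity check:

```python
import math

def slope_pull_force(mass_kg, slope_deg):
    # Component of gravity along the slope: F = m * g * sin(theta)
    g = 9.81
    return mass_kg * g * math.sin(math.radians(slope_deg))

print(slope_pull_force(70, 10))  # ~119 N for a 70 kg person on a 10-degree slope
```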

The last research project I want to mention is the one that uses sound waves to produce pressure on the hands. Although there are a lot of limitations, like the space limit and the intensity limit, I think this is a brave and valuable direction to pursue. I feel that the bottleneck of VR development is on the hardware side, and the other types of physical feedback are just as important as the visual feedback, which seems to get all the attention.

I think it was a fruitful discussion, and I personally am still very interested in the future of VR technology.

Thursday, April 27, 2017

Crowd Simulation

It seems like I haven't updated the blog for a long time again... There was just too much to do! But I enjoyed every Tech Animation class as always. I have also taken some notes, so I'll try to catch up on my thoughts from previous classes.

So for this class and the last, the topic was mainly crowd simulation. I found it quite intuitive to solve the problem using fluid simulation. However, the crowd flow seems too "smooth": the simulated people walk around the crowd coming from the opposite direction, forming a large swirl. This is not what happens in real life; people are usually too lazy to make such a large detour. From my observation, people usually slow down and lean to the side to make way, so small swirls should form instead. This is why I quite appreciate the research on shoulder motion when people move through a crowd. It really makes the simulation feel more natural.
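To make my observation concrete, here's a toy avoidance rule of the kind I have in mind (entirely my own sketch, not any paper's model): when someone approaches head-on within a short range, slow down and sidestep instead of arcing widely around:

```python
import numpy as np

def avoid_step(pos, vel, other_pos, other_vel, dt=0.1):
    # Toy rule in 2D: if someone is approaching head-on within a short range,
    # slow down and lean sideways a little instead of detouring widely.
    to_other = other_pos - pos
    dist = np.linalg.norm(to_other)
    head_on = (dist < 2.0 and np.dot(vel, other_vel) < 0
               and np.dot(vel, to_other) > 0)
    if head_on:
        side = np.array([-vel[1], vel[0]])   # perpendicular to our heading
        side /= np.linalg.norm(side) + 1e-9
        vel = 0.5 * vel + 0.3 * side         # slow down, step aside
    return pos + vel * dt, vel
```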

There are also many data-driven approaches. A state machine and a database are used to decide a character's next action. Interactions with both objects (stairs, discontinuous roads, etc., with AI techniques applied) and other characters (different gestures and movements within the crowd) are enabled. I feel that the interaction with objects is quite satisfying; this is somewhat like the robotic character simulation discussed previously (hopefully I'll write up those discussions soon), and it seems even better than the robotic approach. However, the interaction between people seems weird to me, especially given that the data was based on behaviors within an ant crowd. I can't see the meaning of the interactions in the simulation, which makes sense, because the way ants interact with each other is surely meaningless to human beings.

The discussions made me think of a game called Watch Dogs, in which every character has a predefined routine, including all kinds of interactions with objects and other characters. The player can follow any one of them to see what happens next to that character. It must have cost a lot to design so many complicated interactions. What if one day crowd simulation and AI technologies are so advanced that they can do the design work automatically? It would be much easier to make more good games!


Sunday, March 5, 2017

Skinning - Wait, now?!

It has been quite a long time since we discussed skinning techniques, but I still want to revisit some of the points I found interesting.

The basic idea of skinning is easy to understand: just assign the vertices of the "skin" to the corresponding bones, so the skin moves together with its bone. It becomes interesting when the "candy wrapper" artifact appears: since connected bones can rotate in different directions, the skin at the joint twists and collapses because its vertices are assigned to different bones. The visual result can be improved by assigning weights across several bones for each vertex. The non-linear method called Dual Quaternion Blending (DQB) can also solve the problem. This method is very fast, although it can misbehave when strange motions are involved, like rotations that wrap around past 180 degrees. That can be solved by breaking the transformation into phases.
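For my own notes, the basic linear blend skinning formula is v' = sum_i(w_i * T_i * v). A minimal numpy sketch (assuming each bone matrix is already composed with the inverse of its rest transform):

```python
import numpy as np

def linear_blend_skin(vertex, bone_transforms, weights):
    # vertex: 3-vector in the rest pose; bone_transforms: list of 4x4 matrices
    # (each already composed with the inverse of the bone's rest transform);
    # weights: per-bone weights for this vertex, summing to 1.
    v = np.append(vertex, 1.0)  # homogeneous coordinates
    blended = sum(w * (T @ v) for w, T in zip(weights, bone_transforms))
    return blended[:3]
```

Averaging the transformed positions linearly like this is exactly what collapses the skin at a twisting joint, which is where DQB comes in.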

There are also issues with bulging artifacts, and a "cage" method was introduced by one research project to guide muscle bulging, sliding, and twisting. I found this interesting because the cage is an old technique originally introduced for skinning, and it's surprising to see modern modifications built on historic methods.

Some research on general deformers was also discussed in class. I was left wondering why my artist friends still hate skinning so much because of the massive manual work involved in the process. Maybe the good methods just haven't made their way into prevalent commercial tools yet.

Cloth Simulation

Last week we discussed some cloth simulation techniques. We watched several demo videos of research results published over the past decade. They look amazing! But I also noticed that the simulated cloth often seems too light, and when there is movement, the cloth tends to oscillate at the edges before recovering to its original state. That's why the result of the knitted cloth simulation paper was so impressive: the scarf model seems to have real mass and volume. Of course, this method takes more resources to render. At the same time, I found the force between the yarns too strong, so that each knot seems to puff up into a round shape, especially at the edges. In reality, there is not only repulsion but also attraction between yarns; I guess the subtle force relations are hard to balance.

This is somewhat like what another student commented during class: the wrinkle simulation in one demo seems unnatural because the cloth completely returns to its original state after wrinkling. Some left-over creases might make it better. It was mentioned that there are also methods that simulate cloth based on texture mapping. I hope I get a chance to look into that.
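The edge oscillation I noticed is, I suspect, a damping issue: in the classic mass-spring cloth model, each spring force carries a damping term, and too little of it lets the edges ring. A toy version of one spring's force (coefficients made up by me):

```python
import numpy as np

def spring_force(xa, xb, va, vb, rest_len, ks=100.0, kd=1.0):
    # Hookean spring with damping between particles a and b:
    # stiffness pulls toward the rest length, damping resists the
    # relative velocity along the spring direction.
    d = xb - xa
    length = np.linalg.norm(d)
    dhat = d / length
    stretch = length - rest_len
    rel_vel = np.dot(vb - va, dhat)
    return (ks * stretch + kd * rel_vel) * dhat  # force on particle a
```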

Another research project I found interesting precomputes a simulation for each state and applies the same simulation to repeating states. It is clever and fits game development needs, but the decision tree for transitions between states expands quickly, and the problem becomes more complicated.

Given all the challenges, cloth simulation still seems like an open space to explore. I hope I can gain more insights from the hands-on practice in the second mini project.

Sunday, February 26, 2017

Inverse Kinematics

Sorry for the late post! I was overwhelmed by the flood of information from the classes introducing IK.

Different Jacobian methods were introduced, together with the advantages and disadvantages of each. I found it interesting that the extra term in the Damped Pseudoinverse Method not only enables a trade-off between oscillation and convergence rate, but also provides a nullspace approach to prioritize movements.
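As I understand it, the update is delta_theta = J^T (J J^T + lambda^2 I)^-1 e, where the damping factor lambda trades oscillation against convergence speed. A minimal numpy sketch:

```python
import numpy as np

def dls_step(J, error, damping=0.1):
    # Damped least squares IK step:
    # dtheta = J^T (J J^T + lambda^2 I)^-1 e
    # Larger damping suppresses oscillation near singularities but slows
    # convergence; smaller damping converges faster but can blow up.
    m = J.shape[0]
    JJt = J @ J.T
    return J.T @ np.linalg.solve(JJt + damping**2 * np.eye(m), error)
```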

There are also the CCD and FABRIK methods. I think these two look friendlier, because they work directly on joint positions in coordinate space. They are not only easier to understand and implement, but also faster. Although their quality might not be enough in some cases, they should be very useful in the game industry, where efficiency matters more than absolute quality, and I really appreciate their beauty of simplicity.
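FABRIK in particular is short enough to sketch from memory. This is a simplified version (fixed iteration budget, no joint constraints):

```python
import numpy as np

def fabrik(joints, target, iterations=10, tol=1e-3):
    # joints: list of 3D points from root to end effector; bone lengths fixed.
    joints = [np.asarray(j, float) for j in joints]
    target = np.asarray(target, float)
    lengths = [np.linalg.norm(joints[i + 1] - joints[i])
               for i in range(len(joints) - 1)]
    root = joints[0].copy()
    for _ in range(iterations):
        # Backward pass: pin the end effector to the target, walk to the root.
        joints[-1] = target.copy()
        for i in range(len(joints) - 2, -1, -1):
            d = joints[i] - joints[i + 1]
            joints[i] = joints[i + 1] + lengths[i] * d / np.linalg.norm(d)
        # Forward pass: pin the root back in place, walk to the end effector.
        joints[0] = root.copy()
        for i in range(len(joints) - 1):
            d = joints[i + 1] - joints[i]
            joints[i + 1] = joints[i] + lengths[i] * d / np.linalg.norm(d)
        if np.linalg.norm(joints[-1] - target) < tol:
            break
    return joints
```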

Simulation!

I'm so excited that our course has finally moved on to simulation! We used a 1-hour-20-minute class to cover most of the basics, from the fundamental Euler method for particle simulation to handling rigid bodies and resolving collisions. For the family of Euler methods, the basic steps are easy to understand: we know the particle's current position x at time t; velocity is the derivative of position with respect to time; from the forces in the environment we get the acceleration a via f = ma; with the updated velocity we can then calculate the position at the next time step t + h. The recommended RK4 method (fourth-order Runge-Kutta) is like an advanced version of Euler's midpoint method: it evaluates the derivative at more points per step, and being fourth-order, it is more accurate because the truncation error is much smaller.
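To keep myself honest, here's the basic explicit Euler step next to RK4 for a particle under gravity, with position and velocity stacked into one state:

```python
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])

def deriv(state):
    # state = [position, velocity]; its derivative is [velocity, acceleration].
    pos, vel = state
    return np.array([vel, GRAVITY])

def euler_step(state, h):
    # x_{t+1} = x_t + h * f(x_t): evaluate the derivative once, at the start.
    return state + h * deriv(state)

def rk4_step(state, h):
    # Four derivative evaluations, combined with 1/6, 2/6, 2/6, 1/6 weights;
    # fourth-order accurate, so the truncation error shrinks much faster with h.
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * h * k1)
    k3 = deriv(state + 0.5 * h * k2)
    k4 = deriv(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([[0.0, 10.0, 0.0],   # position
                  [3.0,  0.0, 0.0]])  # velocity
state = rk4_step(state, 0.01)
```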

For rigid bodies, rotation and angular velocity are added to the state. It surprised me once again that quaternions can always come to the rescue for rotation problems.
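The quaternion trick, as I noted it down: the orientation q is updated from the angular velocity omega via q_dot = 0.5 * (0, omega) * q, then renormalized so drift never turns it into a non-rotation. A small sketch:

```python
import numpy as np

def quat_mul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ])

def integrate_orientation(q, omega, h):
    # q_dot = 0.5 * (0, omega) * q; one explicit Euler step, then renormalize.
    omega_q = np.array([0.0, *omega])
    q = q + h * 0.5 * quat_mul(omega_q, q)
    return q / np.linalg.norm(q)
```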

Another recommended method is the Implicit Euler Method. I didn't quite understand it during class, but I later figured out that it evaluates the velocity function at the end of the time step: assume the particle has already reached its position after the step, and evaluate the derivative there such that stepping backwards would return the particle to its original position. This is used to handle stiff systems, where a large spring constant k is present, so a very small step h would otherwise be needed, and the explicit Euler method would give an unstable solution. This was explained clearly in David Baraff's course notes.
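Working it out for myself on the 1-D spring x'' = -(k/m) x, the implicit step can even be solved in closed form: substituting x1 = x0 + h * v1 into v1 = v0 - h * (k/m) * x1 gives a tiny update that stays stable even for a large k:

```python
def implicit_euler_spring(x, v, h, k_over_m):
    # Backward Euler for x' = v, v' = -(k/m) x.
    # Solving v1 = v0 - h*(k/m)*(x0 + h*v1) for v1:
    v_next = (v - h * k_over_m * x) / (1.0 + h * h * k_over_m)
    x_next = x + h * v_next
    return x_next, v_next

# Even a stiff spring (huge k) with a big step stays bounded:
x, v = 1.0, 0.0
for _ in range(100):
    x, v = implicit_euler_spring(x, v, h=0.1, k_over_m=10000.0)
print(x, v)  # decays toward the rest position instead of exploding
```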

I appreciate the pertinent course materials, and I look forward to knowing more details of simulation technology!

Sunday, February 5, 2017

Motion Capture Lab

Last Monday, we visited the Motion Capture Lab at CMU. It was the first time I had seen a real-time demo of motion capture.

It was just amazing to watch the instantly rendered model move as the researcher moved. I also looked at the markers closely. They are just spheres made of plastic; nothing fancy. But they come in surprisingly many sizes. At first, I wondered how the system distinguishes markers of different sizes from such a long distance when they are used at the same time. I found the answer later: the system doesn't need to distinguish different markers. As long as it recognizes a marker, it can capture the motion of that point. In general, larger markers are preferred, because they are easier to spot from any direction. The smaller markers are used only in special cases for sophisticated motions, like hands or facial expressions. In those cases, special preparation and processing might be needed, so it's likely that markers with large size differences are not used at the same time. Well, I might need some citations here to back up my assumption.

I also picked up some fun facts about motion capture on this trip. I didn't know that static objects can be motion captured too; it is actually preferred to capture the objects in the scene to determine positions for the models later. Sometimes the captured motion is totally different from what we see in the final animation. For example, the motions of a dog and of infants in an animation were all captured from adult human actors. Also, additional props are used to invigorate the motions, for instance, tying a ball to an elastic string to enhance its bounciness.

The mocap lab is also cooperating with a robotics lab to capture human motions for programming robot motions. I think that while this is a good way to make the movements more natural, i.e., more human-like, it might not be as effective as expected. Human body movement seems natural because the movements arise naturally from the body's structure; the same motion can hardly look natural on a robot with a different body structure. Also, the weight distribution of a robot is very different from a human's, so maintaining balance with human motion embedded is challenging.

In a nutshell, it was a fun experience to visit the lab. I hope I will get a chance to try it out for my project in the near future.


Sunday, January 29, 2017

Embark on the Journey of Technical Animation!

It's been two weeks since the semester began, and I'm excited that there is so much to learn!

It was surprising but motivating that Professor Eftychios D. Sifakis came for a talk just after one intro class. I really appreciated the 45-minute wrap-up session right before the talk; it gave me an idea of how the calculations behind simulation work. Interestingly, the day before the class, a friend told me about a course he took as an undergrad about distributing work from the CPU to the GPU, because the GPU has many small processing units that can work in parallel. I've also taken a course where we built a pipeline for a simple CPU, which took so much of my energy that semester. So I could feel the pain of the researchers who work hard to improve computational performance. It is indeed a challenging task, and I was also amazed by the diversity of topics to discuss in this Tech Animation course.

During the talk, although things stopped making sense once Professor Sifakis began to explain the formulas, I was impressed by the research demos during the introduction, especially the one about virtual surgery. It was said that the simulation showing the surgery steps takes only several hours to compute, while it would take an animator a year to produce the same animation. This is especially important for surgery simulation, because every second counts. Although some classmates mentioned that it still needs some future work, I think it's a great achievement.

In the next class, we discussed traditional and modern animation techniques. In my opinion, there is really no superior technique. Although this course focuses on the technologies supporting animation production, we have to bear in mind that animations are not here to showcase high-end technology. They are art pieces with their own purposes to serve, not restricted to entertainment, education, or medicine. Different situations call for different techniques. As engineers, we are used to defining a better technique as faster, more efficient, requiring fewer manual operations, and so on. That is simply not the case here. Even "more natural" is not a standard for a better animation technique, because sometimes artists want to embody unnaturalness in their work.

I'm looking forward to learning more to support the artists' masterpieces!