Sunday, May 14, 2017

Character Simulation

We had two classes discussing character simulation, but honestly this field seems more applicable to robotics than to animation. In my opinion, there are mainly two reasons for this. Firstly, the simulation computation is still complicated, and simulated characters still move unnaturally. The objective of character simulation is usually to make the character decide which motion to take given the current environment or state, and this somewhat results in the stiffness of the characters' movement: they look more like robots than live characters to me. In animation, I think enriching the characters with vividness matters more than getting the rigid mechanics right. The second reason is that character simulation usually needs multiple techniques, like IK, state machines, and often motion capture, which is itself quite mature and can be combined with other techniques to generate satisfying animation results. So I didn't really understand why we need so much effort to research this method, when the task can be accomplished more easily another way.

Nonetheless, I'm still quite interested in the research involving machine learning, where the character's movement can improve over time, as long as the samples provide accurate information during simulation. The results from that paper were quite impressive.

Rendering

The semester has ended, but I still want to go back and revisit the discussions that I couldn't write up in time. Before the discussion of VR technology, we had a class on rendering technologies, which I'm not familiar with. Many fellow students from animation backgrounds mention rendering all the time, but all I knew was that it is the technology that decides the color and light intensity of each pixel when projecting a 3D model to a 2D image, so that it can be displayed on the screen.

It seems that the computation cost for rendering is very expensive, or there wouldn't be a term like "render farm", and I have heard from fellow students that it can take several hours just to render one frame. They were rendering for an animation project, but that rendering speed is definitely not practical for the game industry, where everything has to be real time. The first paper we talked about was a simplification designed for games. The BRDF (Bidirectional Reflectance Distribution Function), which describes how much light is reflected toward different viewing directions, was approximated with Linearly Transformed Cosines (LTC): cosine lobes reshaped by a linear transform. Ubisoft has implemented this method in its products, and the results look satisfying, even though there are no shadows. Actually, I didn't even notice that at first.
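
To make sure I understood the core trick, here is a minimal sketch of the LTC evaluation as I read it: a clamped cosine lobe is reshaped by a 3x3 linear transform, and the change-of-variables Jacobian keeps the distribution normalized. The identity matrix below is just a stand-in for the fitted per-roughness transform, which a real renderer would look up from a precomputed table.

```python
import numpy as np

def clamped_cosine(w):
    # The original lobe: max(cos, 0) / pi over the upper hemisphere.
    return max(w[2], 0.0) / np.pi

def ltc_eval(w, M_inv):
    # Evaluate a Linearly Transformed Cosine lobe in unit direction w.
    wo = M_inv @ w
    length = np.linalg.norm(wo)
    # Jacobian of the change of variables keeps the lobe normalized.
    jacobian = abs(np.linalg.det(M_inv)) / length**3
    return clamped_cosine(wo / length) * jacobian

# With the identity transform this reduces to the plain diffuse cosine lobe.
print(ltc_eval(np.array([0.0, 0.0, 1.0]), np.eye(3)))  # ~ 1/pi
```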


There are also many efforts to reduce rendering computation costs in movie production. One used a sorting method to speed up the ray tracing process. Although the concept is very simple, it is surprisingly effective.
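
As far as I understand it, the idea is something like the toy sketch below: give each ray a coarse key from its origin cell and direction octant, then trace rays with equal keys back-to-back so they touch the same geometry and memory. This is just my own illustration of the general principle, not the exact scheme from the paper.

```python
import numpy as np

def sort_rays(origins, directions, cell_size=1.0):
    # Coarse key per ray: quantized origin cell plus direction octant.
    # Rays sharing a key tend to traverse the same BVH nodes, so tracing
    # them consecutively improves cache coherence.
    cells = np.floor(origins / cell_size).astype(np.int64)
    octant = (directions > 0) @ np.array([4, 2, 1])      # 8 direction bins
    keys = [tuple(c) + (o,) for c, o in zip(cells, octant)]
    order = sorted(range(len(keys)), key=lambda i: keys[i])
    # 'order' also lets you scatter results back to the original ray slots.
    return origins[order], directions[order], order
```
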
We also looked through some specialized techniques for rendering scratches, cloth, and sand. They all look impressive. I'd like to look into them more when I have time.

VR! Always!

As I'm majoring in entertainment technology, I was heavily exposed to VR platforms last semester, so I know the magic that VR can perform, and I also know the limitations that current VR devices have. Even so, the topics we discussed in Tech Animation class were still new to me.

The first topic was the device itself. I had never considered carefully how VR devices actually work. One piece of research claimed that the sickness caused by VR devices mainly comes from the stereo vision system of the eyes, so the researchers found a way to reduce the disparity between the left and right eyes. This results in a smaller difference between the two images, reducing sickness while maintaining recognizable depth. However, as I mentioned in class, the sickness I experienced mainly came from the mismatch between velocity in the virtual world and in the physical world. Nancy mentioned the acceleration added to those 3D theme park rides to reduce motion sickness, and actually I feel zero sickness in those rides. So I'm a bit curious: does the mismatch of acceleration not matter at all?
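
I don't know the exact mechanism the paper uses, but the simplest reading I can come up with is rendering the two views with a reduced eye separation, something like this sketch (the disparity_scale knob is my own hypothetical parameter):

```python
import numpy as np

def stereo_eye_offsets(ipd_m=0.063, disparity_scale=0.6):
    # disparity_scale = 1.0 means the user's true interpupillary distance;
    # values below 1 shrink the left/right image difference (less strain,
    # hopefully less sickness) while keeping some stereo depth cue.
    half = 0.5 * ipd_m * disparity_scale
    return np.array([-half, 0.0, 0.0]), np.array([+half, 0.0, 0.0])
```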

Another interesting research project found a way to map paths in the virtual world to a much smaller physical space. It uses some tricks to guide the user to turn in certain directions, so the user feels like they are exploring a very large area, while the paths actually reuse the same physical space over and over.
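
The classic version of this trick is "redirected walking": amplify the user's own head rotation a little and inject a slow extra rotation below the detection threshold, so the physical path curls back into the room while the virtual path keeps going straight. A minimal sketch, assuming the standard gain formulation rather than this specific paper's method:

```python
def redirected_yaw(user_yaw_delta, rotation_gain=1.15,
                   injected_rate=0.1, dt=1/60):
    # user_yaw_delta: how much the user physically turned this frame (rad).
    # rotation_gain slightly amplifies real turns; injected_rate (rad/s)
    # adds an imperceptible drift that steers the user back into the room.
    return user_yaw_delta * rotation_gain + injected_rate * dt
```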



I found this interesting, because this is indeed a practical problem. If you look at the VR games on the market, very few of them deal with the physical movement of the player. How do you fit the whole game world into a limited physical space? To me this remains unsolved, given the limitations of this research: the ratio of step length in the game to step length in the real world is not resolved, so the player cannot run, or even walk, naturally.

Another interesting research project makes use of a treadmill, so the player can actually walk and run in the game. A belt tied around the player's waist pulls backwards, to simulate the component of gravity along the slope when a person walks uphill. They have also researched a wind system that redirects air to blow at the player from the front to simulate a real outdoor environment.
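
The belt force is easy to sanity-check: the component of gravity along a slope is F = m * g * sin(theta), so pulling backwards with that force on a flat treadmill should feel roughly like walking uphill. A quick back-of-envelope calculation:

```python
import math

def slope_pull_force(mass_kg, slope_deg, g=9.81):
    # Gravity component along the slope: F = m * g * sin(theta).
    return mass_kg * g * math.sin(math.radians(slope_deg))

print(slope_pull_force(70, 10))  # ~119 N for a 70 kg person on a 10-degree slope
```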

The last research project I want to mention is the one that uses sound waves to produce pressure on the hands. Although there are a lot of limitations, like the working space and the intensity of the force, I think this is a brave and valuable direction to go. I feel that the bottleneck of VR development is on the hardware side, and other kinds of physical feedback are just as important as the visual feedback, which seems to get most of the attention.

I think it was a fruitful discussion, and I personally am still very interested in the future of VR technology.

Thursday, April 27, 2017

Crowd Simulation

It seems like I haven't updated the blog for a long time again... There was just too much to do! But I enjoyed every Tech Animation class as always. I have also taken some notes, so I'll try to write up my thoughts from previous classes.

So for this class and the last one, the topic was mainly crowd simulation. It felt quite intuitive to solve the problem using fluid simulation. However, the crowd flow seems too "smooth": the simulated people walk around the crowd coming from the opposite direction, forming one large swirl. This is not what happens in real life. People are usually too lazy to make such a large detour; from my observation, people usually slow down and lean to the side to make way, so small swirls should form instead. This is why I quite appreciate the research on shoulder motion when people move through a crowd. It really makes the simulation feel more natural.
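
Just to pin down what I mean by "slow down and lean to the side", here is a toy local rule, entirely my own illustration and not any published model: when another agent approaches head-on, reduce speed and add a small sideways offset instead of planning a wide detour.

```python
import numpy as np

def adjust_velocity(pos, vel, others_pos, others_vel, radius=3.0):
    new_vel = vel.copy()
    for p, v in zip(others_pos, others_vel):
        offset = p - pos
        dist = np.linalg.norm(offset)
        # Head-on if the other agent is close and moving against us.
        if dist < radius and np.dot(vel, v) < 0:
            new_vel = new_vel * 0.6                       # slow down
            side = np.array([-offset[1], offset[0]]) / max(dist, 1e-6)
            new_vel = new_vel + 0.3 * side                # lean aside
    return new_vel
```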

There are also many data-driven approaches. State machines and databases are used to decide a character's next action. Interactions with objects (stairs, discontinuous roads, etc., with AI techniques applied) and with other characters (different gestures and movements within the crowd) are both enabled. I feel that the interaction with objects is quite satisfying; it is somewhat like the robotic character simulation discussed previously (hopefully I'll write up those discussions soon), and it seems even better than the robotic approach. However, the interaction between people looks weird to me, especially given that the data was based on behaviors within an ant crowd. I can't see the meaning of the interactions in the simulation, which makes sense, because the way ants interact with each other is surely meaningless for human beings.

The discussions made me think of a game called Watch Dogs, in which every character has a predefined routine that includes all kinds of interactions with objects and other characters. The player can follow any one of them to see what happens to those characters next. It must cost a lot to design so many complicated interactions. What if, one day, crowd simulation and AI technologies are so advanced that they can do that design work automatically? It would be much easier to make good games!


Sunday, March 5, 2017

Skinning - Wait, now?!

It has been quite a long time since we discussed skinning techniques, but I still want to revisit some of the points that I found interesting.

The basic idea of skinning is easy to understand: just assign the vertices of the "skin" to the corresponding bones, so the "skin" moves together with its bone. It becomes interesting when the "candy wrapper" artifact appears. Since connected bones may rotate in different directions, the skin at the joint twists and collapses, because its vertices are blended between different bones. The visual result can be improved by assigning weights for multiple bones to each vertex. A non-linear method called Dual Quaternion Blending (DQB) can also solve the problem. This method is very fast, although it can fail when strange motions are involved, like a joint wrapping all the way around. That can be solved by breaking the transformation into phases.
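
The collapse is easy to reproduce with a few lines of standard linear blend skinning: averaging two rotations linearly pulls the blended vertex toward the bone axis. A minimal sketch:

```python
import numpy as np

def linear_blend_skinning(rest_pos, weights, bone_mats):
    # Standard LBS: blend each bone's transform of the rest-pose vertex
    # by its skinning weight. Linearly averaging rotation matrices is
    # exactly what pinches the skin at a twisted joint.
    rest_h = np.append(rest_pos, 1.0)                 # homogeneous coords
    out = np.zeros(3)
    for w, M in zip(weights, bone_mats):
        out += w * (M @ rest_h)[:3]
    return out

# A vertex weighted 50/50 between a fixed bone and one twisted 180 degrees
# about the x-axis collapses onto the axis -- the candy wrapper.
twist = np.diag([1.0, -1.0, -1.0, 1.0])
print(linear_blend_skinning(np.array([0.0, 1.0, 0.0]),
                            [0.5, 0.5], [np.eye(4), twist]))  # -> [0, 0, 0]
```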

There are also issues with bulging artifacts, and one research project introduced a "cage" method to guide muscle bulging, sliding and twisting. I found it interesting because the "cage" was an old technique originally introduced for skinning, and it's surprising to see modern modifications built on historic methods.

Some research on general deformers was also discussed in class. It made me wonder why my artist friends still hate skinning so much because of the massive manual work involved in the process. Maybe the good methods just aren't well developed enough for widespread commercial use.

Cloth Simulation

Last week we discussed some cloth simulation techniques. We watched several demo videos of research results published over the past decade. They look amazing! But I also noticed that the simulated cloth seems too light, and when movement is involved, the cloth tends to oscillate at the edges for too long before recovering to its original state. That's why the result of the knitted cloth simulation paper was so impressive: the scarf model seems to have real mass and volume. Of course, this method takes more resources to compute. At the same time, I found the repulsive force between yarns too strong, so that each knot seems to puff up into a round shape, especially at the edges. In reality there is not only repulsion but also attraction between yarns; I guess the subtle force balance is hard to get right. This is somewhat like what another student commented during class: the wrinkle simulation in one demo seems unnatural because the cloth completely returns to its original state after wrinkling. Some left-over creases might make it look better. It was mentioned that there are also methods that simulate cloth based on texture mapping. I hope I get a chance to look into that.
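
For reference, the baseline that most of those demos build on is a simple mass-spring model; the light, over-oscillating look comes partly from how stiffness and damping are tuned in this kind of integrator. A textbook sketch (Verlet integration with distance constraints, not the paper's yarn-level method):

```python
import numpy as np

def step(pos, prev_pos, springs, rest_len, dt=1/60,
         stiffness=0.8, damping=0.02):
    # pos, prev_pos: (N, 3) particle positions at the current/previous step.
    # springs: list of (i, j) index pairs; rest_len: their rest lengths.
    vel = (pos - prev_pos) * (1.0 - damping)
    new_pos = pos + vel + np.array([0.0, -9.81, 0.0]) * dt * dt
    for (i, j), L in zip(springs, rest_len):
        d = new_pos[j] - new_pos[i]
        dist = np.linalg.norm(d)
        corr = stiffness * 0.5 * (dist - L) / max(dist, 1e-9) * d
        new_pos[i] += corr                # pull both endpoints back
        new_pos[j] -= corr                # toward the rest length
    return new_pos, pos                   # pos becomes prev_pos next step
```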

Another piece of research that I found interesting precomputes the simulation for each animation state and reuses the same result whenever a state repeats. It is clever and fits game development needs, but as the decision tree of transitions between states expands, the problem becomes more complicated.
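
In spirit it is just memoization of the simulation keyed by animation state; a hypothetical sketch (the simulate callback and the (state, frame) key are my own stand-ins):

```python
cloth_cache = {}

def get_cloth(state, frame, simulate):
    # Reuse the precomputed cloth shape whenever a state repeats;
    # only simulate states that have never been seen before.
    key = (state, frame)
    if key not in cloth_cache:
        cloth_cache[key] = simulate(state, frame)   # offline precompute
    return cloth_cache[key]
```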

Given all these challenges, cloth simulation still seems like an open space to explore. I hope to gain more insight from the hands-on practice in the second mini project.

Sunday, February 26, 2017

Inverse Kinematics

Sorry for the late post! I was overwhelmed by the amount of information from the classes introducing IK.

Different Jacobian methods were introduced, together with the advantages and disadvantages of each. I found it interesting that the extra term in the Damped Pseudoinverse Method not only enables a balance between oscillation and convergence rate, but also provides a nullspace approach to prioritize movements.
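
To keep the formula straight for myself: the damped least squares step is dtheta = J^T (J J^T + lambda^2 I)^-1 e, and the nullspace projector I - J+J lets a secondary motion run without disturbing the end-effector task. A minimal numpy sketch (with damping the projection is only approximate, but that is the standard trick):

```python
import numpy as np

def dls_step(J, err, damping=0.1, secondary=None):
    # Damped pseudoinverse step: dtheta = J^T (J J^T + lambda^2 I)^-1 err.
    # The damping term trades convergence speed for stability near
    # singularities -- the oscillation/convergence balance mentioned above.
    m, n = J.shape
    JJt_damped = J @ J.T + (damping ** 2) * np.eye(m)
    J_pinv = J.T @ np.linalg.inv(JJt_damped)
    dtheta = J_pinv @ err
    if secondary is not None:
        # Nullspace projection: lower-priority motion (e.g. a preferred
        # posture) that barely disturbs the end-effector task.
        dtheta += (np.eye(n) - J_pinv @ J) @ secondary
    return dtheta
```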

There are also the CCD and FABRIK methods. I think these two look friendlier, because they work on joint positions in Cartesian space directly. They are not only easier to understand and implement, but also faster. Although their accuracy might not be enough in some cases, they should be very useful in the game industry, where efficiency matters more than absolute quality, and I really appreciate their beauty of simplicity.
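
FABRIK in particular fits in a dozen lines, which is a big part of that beauty. A sketch under my own assumptions (a single unconstrained chain with a reachable target):

```python
import numpy as np

def fabrik(joints, target, lengths, iters=10, tol=1e-3):
    # FABRIK works purely on joint positions: drag the chain from the end
    # effector to the target (backward pass), then from the root back to
    # its anchored position (forward pass), restoring bone lengths each time.
    joints = [np.asarray(j, dtype=float) for j in joints]
    root = joints[0].copy()
    target = np.asarray(target, dtype=float)
    for _ in range(iters):
        joints[-1] = target.copy()                      # backward pass
        for i in range(len(joints) - 2, -1, -1):
            d = joints[i] - joints[i + 1]
            joints[i] = joints[i + 1] + d / np.linalg.norm(d) * lengths[i]
        joints[0] = root.copy()                         # forward pass
        for i in range(1, len(joints)):
            d = joints[i] - joints[i - 1]
            joints[i] = joints[i - 1] + d / np.linalg.norm(d) * lengths[i - 1]
        if np.linalg.norm(joints[-1] - target) < tol:
            break
    return joints
```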