Sunday, May 14, 2017

Character Simulation

We had two classes discussing character simulation, but I actually think this field seems more applicable to robotics. In my opinion, there are two main reasons for this. First, the simulation computation is still complicated, and the simulated characters still move unnaturally. The objective of character simulation is usually to make the character decide which motion to take given the current environment or state, and this somewhat results in stiff movement; to me the characters look more like robots than living characters. In the case of animation, I think enriching characters with vividness is more important than rigid mechanical correctness. The second reason is that character simulation usually needs multiple techniques, like IK, state machines, and often motion capture, which is itself quite mature and can already be combined with other techniques to generate satisfying animation results. So I didn't really understand why we need so much effort to research this method, when the task can be accomplished more easily another way.

Nonetheless, I'm still quite interested in the research involving machine learning, where the character movement can improve over time as long as the samples provide accurate information during simulation. The results from that paper were quite impressive.

Rendering

The semester has ended, but I still want to go back and revisit the discussions that I couldn't write up in time. Before the discussion of VR technology, we had a class on rendering technologies, which I'm not familiar with. Many fellow students from animation backgrounds mention rendering very often, but all I know is that it is the process that decides the color and light intensity of each pixel when a 3D model is projected to a 2D image, so that it can be displayed on the screen.
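As far as I understand it, the textbook way to write this down is the rendering equation: the light leaving a point toward the camera is whatever the surface emits plus all the incoming light, weighted by the surface's BRDF and the incident angle. This is just the standard form, not something specific to the papers we discussed:

$$ L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,(n \cdot \omega_i)\, d\omega_i $$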

It seems that the computation cost of rendering is very expensive, or there wouldn't be a term like "render farm", and I have heard from fellow students that it can take several hours just to render one frame. They were rendering for an animation project, but that rendering speed is definitely not practical for the game industry, where it has to be real time. The first paper that we talked about was a simplification designed for games. The BRDF (Bidirectional Reflectance Distribution Function), which decides how much of the incoming light is reflected toward different viewing directions, was approximated by LTCs (Linearly Transformed Cosines): clamped cosine lobes warped by a linear transform, which can be integrated over polygonal area lights in closed form. Ubisoft has implemented this method in its product, and the results look satisfying, although the area lights cast no shadows. Honestly, I didn't really notice that.
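To make sure I understood the idea, here is a rough Python sketch of the core trick as I understand it (my own toy version, not the paper's code): a clamped-cosine lobe can be integrated over a polygonal light analytically, and LTC warps the polygon by the inverse of the linear transform so the same closed-form integral can stand in for a real BRDF lobe. I'm leaving out the horizon clipping and the fitted matrix lookup that the real method needs.

```python
import numpy as np

def integrate_clamped_cosine(verts):
    """Analytic form factor of a polygon under a clamped-cosine lobe
    (Lambert's edge-integral formula). Vertices are in shading space,
    with the surface normal along +z; winding order matters for the sign."""
    total = 0.0
    n = len(verts)
    for i in range(n):
        v1 = verts[i] / np.linalg.norm(verts[i])
        v2 = verts[(i + 1) % n] / np.linalg.norm(verts[(i + 1) % n])
        theta = np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))
        edge_normal = np.cross(v1, v2)
        edge_normal /= np.linalg.norm(edge_normal)
        total += theta * edge_normal[2]   # project onto the surface normal (+z)
    return max(total / (2.0 * np.pi), 0.0)

def ltc_polygon_shading(M_inv, light_verts):
    """Simplified LTC: warp the light polygon by the inverse of the linear
    transform that shapes the cosine into a BRDF-like lobe, then reuse the
    closed-form cosine integral above. Real implementations also clip the
    warped polygon to the upper hemisphere."""
    warped = [M_inv @ v for v in light_verts]
    return integrate_clamped_cosine(warped)

# Example: a unit square light one unit above the shading point;
# the identity transform corresponds to a plain diffuse surface.
square = [np.array([-0.5, -0.5, 1.0]), np.array([0.5, -0.5, 1.0]),
          np.array([0.5, 0.5, 1.0]), np.array([-0.5, 0.5, 1.0])]
print(ltc_polygon_shading(np.eye(3), square))
```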

There are also many efforts to reduce rendering computation costs in movie production. One paper used a sorting method to speed up the ray tracing process. Although the concept is simple, it is surprisingly effective.
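I don't remember the exact scheme the paper used, but roughly the idea (my own toy illustration, with a made-up sort key) is to batch rays that start near each other and point in similar directions, so a batch touches similar geometry and textures and the memory accesses become coherent:

```python
import numpy as np

def coherence_key(origin, direction, cell_size=1.0):
    """Sort key for a ray: the grid cell its origin falls in, plus a 3-bit
    octant code from the signs of its direction. Rays that share a key tend
    to traverse similar parts of the scene."""
    cell = tuple(np.floor(origin / cell_size).astype(int))
    octant = (int(direction[0] > 0) << 2) | (int(direction[1] > 0) << 1) | int(direction[2] > 0)
    return (cell, octant)

def sort_rays(origins, directions, cell_size=1.0):
    """Return ray indices reordered so coherent rays are traced back to back."""
    keys = [coherence_key(o, d, cell_size) for o, d in zip(origins, directions)]
    return sorted(range(len(keys)), key=lambda i: keys[i])
```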
We also looked through some specialized techniques for rendering scratches, cloth, and sand. They all look impressive. I'd like to look into them more when I have time.

VR! Always!

As I'm majoring in entertainment technology, I was massively exposed to VR platforms last semester, so I know the magic that VR can perform, and I also know the limitations that current VR devices have. Even so, the topics we discussed in the Tech Animation classes were still new to me.

The first topic was about the device itself. I had never carefully considered the working principles of VR devices. One piece of research claimed that the sickness caused by VR devices mainly comes from the stereo disparity between the two eyes, so the researchers found a way to reduce the disparity between the left-eye and right-eye images. The smaller difference reduces sickness while maintaining recognizable depth. However, as I mentioned in class, the sickness I experience mainly comes from the mismatch between velocity in the virtual world and in the physical world. Nancy mentioned the acceleration added to those 3D theme park rides to reduce motion sickness. Actually, I feel zero sickness on those rides. So I'm a bit curious: does the mismatch in acceleration not matter at all?
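Out of curiosity, I wrote down the naive version of what "reducing disparity" would mean for a stereo camera rig. The actual research almost certainly does something smarter (for example, remapping disparity per object), so the single scale factor here is just my own illustration:

```python
def stereo_camera_offsets(ipd_m=0.064, disparity_scale=0.7):
    """Left/right virtual camera offsets along the local x axis.
    disparity_scale = 1.0 reproduces the user's real eye separation;
    values below 1.0 shrink the difference between the two rendered images,
    which is the 'less disparity, less sickness, still some depth' idea."""
    half = 0.5 * ipd_m * disparity_scale
    return (-half, 0.0, 0.0), (+half, 0.0, 0.0)

left_eye, right_eye = stereo_camera_offsets()
```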

Another interesting piece of research found a way to map paths in the virtual world onto a much smaller physical space. It uses some tricks to guide the user to turn in certain directions, so the user feels like they are exploring a very large area, while the paths actually reuse the physical space over and over.

I found this interesting, because it is a genuinely practical problem. If you look at the VR games on the market, very few of them deal with the physical movement of the player. Given a limited physical space, how do you fit in the whole game world? To me, this still remains unsolved, given the limitations of this research: the ratio between step length in the game and in the real world is not addressed, so the player cannot run, or even walk, naturally.
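My rough understanding of the trick, written as code (a toy version with made-up gain values, not the paper's actual algorithm): the virtual camera rotates a bit more (or less) than the head actually rotates, and translates a bit more than the body actually moves, so the real-world path curls back into the tracked space while the virtual path keeps going.

```python
def redirected_update(virtual_yaw, virtual_pos, head_yaw_delta, head_pos_delta,
                      rotation_gain=1.2, translation_gain=1.1):
    """One tracking update with redirection gains applied.
    rotation_gain > 1: the world turns faster than the head, nudging the user
                       to physically turn back toward the centre of the room.
    translation_gain > 1: one real step maps to a slightly longer virtual step.
    Gains that are too large become noticeable and can themselves cause sickness."""
    new_yaw = virtual_yaw + rotation_gain * head_yaw_delta
    new_pos = [p + translation_gain * d for p, d in zip(virtual_pos, head_pos_delta)]
    return new_yaw, new_pos
```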

Another interesting piece of research makes use of a treadmill, so the player can actually walk and run in the game. A belt tied around the player's waist pulls them backwards, to simulate the component of gravity along the slope that a person feels when walking uphill. They have also researched a wind system that redirects air to blow from in front of the player, to simulate a real outdoor environment.
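The belt made more sense to me once I worked out the number it would have to produce; this is just my own back-of-the-envelope check, assuming the belt only needs to reproduce the along-slope component of gravity:

```python
import math

def slope_pull_force(mass_kg, slope_deg, g=9.81):
    """Backward force needed to mimic walking up a slope:
    the component of gravity along the incline, m * g * sin(theta)."""
    return mass_kg * g * math.sin(math.radians(slope_deg))

# A 70 kg player on a 10-degree virtual slope:
# 70 * 9.81 * sin(10 deg) is roughly 119 N of backward pull at the waist.
print(slope_pull_force(70, 10))
```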

The last piece of research I want to mention is the one that uses sound waves to produce pressure on the hands. Although there are a lot of limitations, like the working space and the achievable intensity, I think this is a brave and valuable direction to go. I feel that the bottleneck of VR development is on the hardware side, and the other types of physical feedback are as important as visual feedback, which seems to get most of the attention.

I think it was a fruitful discussion, and I personally am still very interested in the future of VR technology.