View-Dependent Adaptive Cloth Simulation with Buckling Compensation
Published in IEEE Transactions on Visualization and Computer Graphics (TVCG), October 2015.
Authors: Woojong Koh, Rahul Narain, and James F. O’Brien.
Abstract #
This paper presents a novel method for view-dependent cloth simulation using dynamically adaptive mesh refinement and coarsening.
Simulating highly detailed cloth is computationally expensive. However, much of that detail is often imperceptible to the viewer depending on the camera’s position. Given a prescribed camera motion, our method adjusts the criteria controlling the mesh refinement to account for both visibility and apparent size within the camera’s view frustum.
To prevent objectionable dynamic artifacts, such as visible “popping” when the mesh resolution changes, the system employs anticipative refinement and smoothed coarsening. This approach preserves the high-fidelity appearance of detailed cloth throughout the animation while avoiding the wasted computational effort of simulating fine folds that are too small, too distant, or occluded to be seen.
The computational savings realized by this method scale significantly as scene complexity grows, producing a 2x speed-up for a single character and more than a 4x speed-up for scenes involving a small group.
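The core idea described above—gating mesh refinement on visibility and apparent on-screen size—can be sketched as follows. This is an illustrative approximation under a simple pinhole-camera model, not the paper's actual criteria; the function names and threshold value are hypothetical.

```python
def apparent_size(world_size, distance, focal_length=1.0):
    """Approximate on-screen size of a feature of a given world-space size
    at a given distance from a pinhole camera (illustrative model only)."""
    return world_size * focal_length / max(distance, 1e-6)

def should_refine(world_size, distance, visible, screen_threshold=0.002):
    """Refine only where detail would be visible and large enough on screen.
    Regions that are occluded or outside the view frustum stay coarse."""
    if not visible:
        return False
    return apparent_size(world_size, distance) > screen_threshold

# A nearby visible fold warrants refinement; the same fold far away
# or occluded does not.
print(should_refine(0.01, 1.0, True))    # near and visible -> True
print(should_refine(0.01, 50.0, True))   # too distant -> False
print(should_refine(0.01, 1.0, False))   # occluded -> False
```

In this toy version the threshold plays the role of the view-dependent sizing criterion: as the camera moves, the same fold crosses the threshold and triggers refinement or coarsening, which is why the paper's anticipative refinement and smoothed coarsening are needed to hide those transitions.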
Video Demonstration #
Resources & Links #
- Paper: Preprint PDF | IEEE DL (Archive)
- Project Page: UC Berkeley Graphics Lab (Archive)
- Video Mirrors: YouTube | Vimeo