One reconfigurable fabric enabling multi-mode VR interaction (VIDEO)
Caption
The supplementary video demonstrates how a single FTHP operates in multiple states. A user first performs representative gestures on the fabric in its flat state, then folds or transforms it into predefined shapes to enable additional 3D manipulation inputs. The fabric's multi-channel triboelectric signals are decoded by a convolutional neural network (CNN) to recognize interaction commands and drive the VR interaction tasks shown in the demonstration.
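The decoding pipeline described above (multi-channel signal window in, gesture class out) can be sketched as a minimal 1D CNN forward pass. This is an illustrative assumption, not the authors' published architecture: the channel count, window length, class count, and layer sizes are all hypothetical placeholders.

```python
import numpy as np

# Hypothetical sketch of the pipeline in the caption:
# multi-channel triboelectric window -> 1D CNN -> gesture probabilities.
# All sizes below are assumptions for illustration only.
rng = np.random.default_rng(0)

N_CHANNELS = 8    # assumed number of fabric sensing channels
N_SAMPLES = 128   # assumed samples per gesture window
N_CLASSES = 6     # assumed number of interaction commands

def conv1d(x, w, b):
    """Valid 1D convolution: x (C_in, T), w (C_out, C_in, K), b (C_out,)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    y = np.zeros((c_out, t_out))
    for o in range(c_out):
        for t in range(t_out):
            y[o, t] = np.sum(w[o] * x[:, t:t + k]) + b[o]
    return y

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(signal, params):
    """Forward pass: conv -> ReLU -> global average pool -> linear -> softmax."""
    h = relu(conv1d(signal, params["w1"], params["b1"]))
    pooled = h.mean(axis=1)                       # global average pooling
    logits = params["w2"] @ pooled + params["b2"]
    return softmax(logits)

# Untrained, randomly initialised parameters, just to exercise the shapes.
params = {
    "w1": rng.normal(0.0, 0.1, (16, N_CHANNELS, 9)),
    "b1": np.zeros(16),
    "w2": rng.normal(0.0, 0.1, (N_CLASSES, 16)),
    "b2": np.zeros(N_CLASSES),
}

window = rng.normal(0.0, 1.0, (N_CHANNELS, N_SAMPLES))  # one signal window
probs = classify(window, params)
print(probs.shape, float(probs.sum()))
```

In practice the network would be trained on labelled gesture windows; this sketch only shows how a per-channel window maps to a probability distribution over commands.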
Credit
©Science China Press
Usage Restrictions
Use with credit.
License
Original content