C01-1 Augmenting Material Perception by Human-Machine Integrated Vision
The goal of this project is to dramatically improve human material (Shitsukan) recognition ability by integrating machine vision with human vision. Human vision predicts and recognizes rich material information from limited sensory stimuli based on an internal world model. Machine vision, on the other hand, can capture light propagation with much higher spatio-temporal and wavelength resolution than human vision, and can precisely decompose the light field emitted from real objects into information that represents their materials. In this research project, we approach the generation of “deep Shitsukan perception” by converting the material information captured by machine vision into a form that humans can perceive and presenting it in their visual field so as to update their world model. Specifically, we have developed a wearable device (light-modulation glasses) that directly manipulates the light field emitted from a real object just before it reaches the human eye, selectively emphasizing or suppressing material information. The device is constructed as near-eye optics capable of spatio-temporal and wavelength filtering of the incoming light rays, combined with synchronized computational illumination. In addition, we will investigate whether the light-modulation glasses can improve people’s ability to recognize materials.
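The following is only a conceptual sketch of the kind of selective emphasis described above, not the project's actual optics or device pipeline: it assumes hypothetical synthetic radiance maps, a hypothetical highlight threshold, and a simple gain, and illustrates how a component of the incoming light synchronized with a modulated source (a strong gloss/material cue) could be isolated by frame differencing and then boosted before being re-presented to the observer.

```python
import numpy as np

# Illustrative sketch only: isolate the light contributed by a synchronized,
# modulated source by differencing "source on" / "source off" captures, then
# selectively boost the highlight-like part of that component.

rng = np.random.default_rng(0)
H, W = 64, 64

# Synthetic stand-ins for two co-registered radiance maps of the same scene.
ambient = rng.uniform(0.1, 0.3, size=(H, W))   # ambient illumination only
specular = np.zeros((H, W))
specular[20:28, 30:38] = 0.4                   # glossy highlight lit by the source
frame_off = ambient                            # modulated source switched off
frame_on = ambient + 0.2 + specular            # source adds diffuse + specular light

# Temporal filtering in its simplest form: the difference between synchronized
# frames is the component due to the controlled illumination.
source_component = frame_on - frame_off

# Selective emphasis of the material cue: boost only the bright, highlight-like
# part of the source component before recombining with the ambient image.
gain = 2.0
highlight_mask = source_component > 0.4
emphasised = np.where(highlight_mask, gain * source_component, source_component)

# Recombined radiance as it might be re-presented to the observer.
output = frame_off + emphasised

print("highlight contrast before:",
      float(frame_on[highlight_mask].mean() - frame_on[~highlight_mask].mean()))
print("highlight contrast after: ",
      float(output[highlight_mask].mean() - output[~highlight_mask].mean()))
```

Running the sketch shows the highlight-to-background contrast roughly doubling after emphasis, which is the kind of effect the light-modulation glasses aim to produce optically rather than computationally.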