Grant-in-Aid for Transformative Research Areas (A) Analysis and synthesis of deep SHITSUKAN information in the real world


D01-8 Deep Lighting: Data-Driven Active Light Fields


Takahiro OKABE Kyushu Institute of Technology

Recently, machine learning (ML)-based approaches have achieved some success in visual SHITSUKAN recognition tasks such as material classification, BRDF estimation, and the analysis of photometric phenomena. However, conventional ML-based approaches optimize only the processing of input images in a data-driven manner. To recognize deep visual SHITSUKAN, we should observe an object of interest not statically but dynamically; in other words, we should change the viewpoints from which the object is observed and the light sources under which it is illuminated so that its visual SHITSUKAN can be recognized more easily. In this study, we work on the design of illumination environments (light fields) for visual SHITSUKAN recognition, which is a part of so-called observation planning. Specifically, we simultaneously optimize both the light fields used for capturing input images and the processing of those images in a data-driven manner. With this new methodology, termed Deep Lighting, we aim to achieve deep visual SHITSUKAN recognition beyond the human visual system and existing methods.
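The joint optimization described above can be sketched in code. This is a minimal illustration, not the project's actual method: it assumes that, by the linearity of light transport, an image captured under an illumination pattern w is the weighted sum of basis images captured under the individual light sources. The illumination weights w and a simple linear classifier are then learned jointly by gradient descent on a synthetic two-class "material" recognition task; all data shapes, class signatures, and hyperparameters here are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic setup: each sample has K basis images (one per
# light source), each with P pixels. By linearity of light transport, the
# image under an illumination pattern w is sum_k w[k] * basis[:, k, :].
K, P, N = 8, 16, 200                      # lights, pixels, samples
labels = rng.integers(0, 2, N)            # two "material" classes
sig = rng.normal(0.0, 1.0, (2, K, P))     # class-dependent light responses
basis = sig[labels] + 0.3 * rng.normal(0.0, 1.0, (N, K, P))  # plus noise

w = np.full(K, 1.0 / K)                   # learnable lighting, start uniform
v = rng.normal(0.0, 0.1, P)               # learnable linear classifier
b = 0.0
lr = 0.2
y = labels * 2.0 - 1.0                    # {-1, +1} targets

for step in range(800):
    img = np.einsum('k,nkp->np', w, basis)    # render images under pattern w
    score = img @ v + b
    g = -y / (1.0 + np.exp(y * score))        # logistic loss gradient dL/dscore
    # Backpropagate jointly into the classifier AND the illumination pattern.
    grad_v = img.T @ g / N
    grad_w = np.einsum('n,nkp,p->k', g, basis, v) / N
    grad_b = g.mean()
    v -= lr * grad_v
    w -= lr * grad_w
    b -= lr * grad_b

img = np.einsum('k,nkp->np', w, basis)
acc = float(((img @ v + b > 0) == (y > 0)).mean())
print(f"training accuracy: {acc:.2f}")
```

In a real system the learned pattern w would be realized physically (e.g. by a programmable LED array or projector) at capture time, so the optimized illumination does part of the computation optically before any image processing runs.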