Grant-in-Aid for Transformative Research Areas (A) Analysis and synthesis of deep SHITSUKAN information in the real world


B01-4 Unified understanding of deep SHITSUKAN recognition from visual, auditory, tactile, and linguistic information


Shin’ya Nishida Kyoto University

Shitsukan information about real-world things and events is transmitted to humans as physical light and vibration, processed by multiple sensory modalities such as vision, hearing, and touch, and partly converted into language. This group analyzes the entire deep Shitsukan processing of humans from a unified perspective. Experts in vision, hearing, touch, and language have come together to conduct research using a variety of methods, including psychophysics, computer graphics, haptic engineering, and onomatopoeic Kansei engineering. Our focus is on the mechanisms by which real Shitsukan is recognized from multisensory information, and on the similarities and unique features of Shitsukan information processing across sensory modalities.

Specifically, we are investigating the characteristics of the brain's world models based on the perceptual consistency of multiple attributes including Shitsukan; elucidating the psychological and physiological factors underlying the subjective reality conveyed by real-world objects; analyzing the human mechanisms for recognizing the Shitsukan of natural sounds through comparison with artificial neural networks; and developing technologies that reproduce the varied tactile sensations of real-world objects. In addition, continuing from our previous project, “Innovative Shitsukan Science and Technologies,” we are expanding a multimodal Shitsukan database of real objects and developing Shitsukan engineering for the social implementation of Shitsukan research.

Co-Investigator

Hiroyuki Kajimoto The University of Electro-Communications
Takuya Koumura NTT Communication Science Laboratories
Takahiko Horiuchi Graduate School of Engineering, Chiba University
Maki Sakamoto The University of Electro-Communications