{"id":1141,"date":"2024-07-04T14:22:28","date_gmt":"2024-07-04T05:22:28","guid":{"rendered":"https:\/\/shitsukan.jp\/deep\/?page_id=1141"},"modified":"2024-07-04T14:33:05","modified_gmt":"2024-07-04T05:33:05","slug":"2023%e5%b9%b4%e5%ba%a6-%e7%a0%94%e7%a9%b6%e6%88%90%e6%9e%9c","status":"publish","type":"page","link":"https:\/\/shitsukan.jp\/deep\/?page_id=1141","title":{"rendered":"2023\u5e74\u5ea6 \u7814\u7a76\u6210\u679c"},"content":{"rendered":"\n<ul><li>Liu S., Suganuma M., Okatani T. Symmetry-aware Neural Architecture for Embodied Visual Navigation. International Journal of Computer Vision, 132, 4, 1091-1107, 2023.<\/li>\n<li>Thannarot Kunlamai, Tatsuro Yamane, Masanori Suganuma, Pang-Jo Chun, Takayuki Okatani. Improving visual question answering for bridge inspection by pre-training with external data of image\u2013text pairs. Computer-Aided Civil and Infrastructure Engineering, 39, 3, 345-361, 2023.<\/li>\n<li>Jie Zhang, Masanori Suganuma, Takayuki Okatani. That&#8217;s BAD: blind anomaly detection by implicit local feature clustering. Machine Vision and Applications, 35, 2, 31, 2023.<\/li>\n<li>Zhijie Wang, Xing Liu, Masanori Suganuma, Takayuki Okatani. Unsupervised domain adaptation for semantic segmentation via cross-region alignment. Computer Vision and Image Understanding, 234, 103743, 2023.<\/li>\n<li>Hori Y, Nagai Y, Hori Y, Oyama K, Mimura K, Hirabayashi T, Inoue K, Fujinaga M, Zhang MR, Takada M, Higuchi M, Minamimoto T. Multimodal imaging for validation and optimization of ion channel-based chemogenetics in nonhuman primates. J Neurosci, 43(39):6619\u20136627, 2023.<\/li>\n<li>Oyama K, Nagai Y, Minamimoto T. Targeted delivery of chemogenetic adeno-associated viral vectors to cortical sulcus regions in macaque monkeys by handheld injections. Bio Protocol, 23(13):e4897, 2023.<\/li>\n<li>Kang HJ, Minamimoto T, Wess J, Roth BL. Chemogenetics for cell-type specific modulation of signaling and neuronal activity. 
Nature Reviews Methods Primers, 3, 1-17, 2023.<\/li>\n<li>Hori Y, Mimura K, Nagai Y, Hori Y, Kumata K, Zhang MR, Suhara T, Higuchi M, Minamimoto T. Reduced serotonergic transmission alters sensitivity to cost and reward via 5-HT1A and 5-HT1B receptors in monkeys. PLoS Biology, 22(1): e3002445, 2023.<\/li>\n<li>Amita H, Koyano KW, Kunimatsu J. Neuronal mechanisms underlying face recognition in non-human primates. Japanese Psychological Research, 2023.<\/li>\n<li>Saito H, Masaoka A, Komatsu H. Local field potential (LFP) responses associated with perceptual filling-in at blind spot in macaque primary visual cortex. J Neurophysiol, 130(6): 1464\u20131479, 2023.<\/li>\n<li>Shiotani K, Tanisumi Y, Osako Y, Murata K, Hirokawa J, Sakurai Y, Manabe H. An intra-oral flavor detection task in freely moving mice. iScience, 27(2):108924, 2023.<\/li>\n<li>Taniyama, Y., Suzuki, Y., Kondo, T., Minami, T., Nakauchi, S. Pupil dilation is driven by perceptions of naturalness of color composition in paintings. Psychology of Aesthetics, Creativity, and the Arts, 2023.<\/li>\n<li>Imura, T., Shirai, N., Kondo, T., Nakauchi, S. Children\u2019s preferences of the colour composition of art paintings. Infant and Child Development, e2450, 2023.<\/li>\n<li>Miyamoto, K., Taniyama, Y., Hine, K., Nakauchi, S. Congruency of color\u2013sound crossmodal correspondence interacts with color and sound discrimination depending on color category. i-Perception, 14(4): 1-32, 2023.<\/li>\n<li>Morihara K, Ota S, Kakinuma K, Kawakami N, Higashiyama Y, Kanno S, Tanaka F, Suzuki K. Buccofacial apraxia in primary progressive aphasia. Cortex, 158: 61-70, 2023.<\/li>\n<li>Osawa S, Suzuki K, Asano E, Ukishiro K, Agari D, Kakinuma K, Kochi R, Jin K, Nakasato N, Tominaga T. Causal involvement of medial inferior frontal gyrus of non-dominant hemisphere in higher order auditory perception: A single case study. 
Cortex, 163: 57-65, 2023.<\/li>\n<li>Kawakami N, Kanno S, Ota S, Morihara K, Ogawa N, Suzuki K. Auditory phonological identification impairment in primary progressive aphasia. Cortex, 168: 130-142, 2023.<\/li>\n<li>Takuya Koumura, Hiroki Terashima and Shigeto Furukawa. Human-like Modulation Sensitivity Emerging through Optimization to Natural Sound Recognition. Journal of Neuroscience, 43(21), 3876-3894, 2023.<\/li>\n<li>Pei-Yin Chen, Chien-Chung Chen, Shin&#8217;ya Nishida. Coarse-to-fine interaction on perceived depth in compound grating. Journal of Vision, 23(9), 2023.<\/li>\n<li>Yung-Hao Yang, Taiki Fukiage, Zitang Sun, Shin&#8217;ya Nishida. Psychophysical measurement of perceived motion flow of naturalistic scenes. iScience, 26, 108307, 2023.<\/li>\n<li>Zitang Sun, Zhengbo Luo, Shin&#8217;ya Nishida. Decoupled spatiotemporal adaptive fusion network for self-supervised motion estimation. Neurocomputing, 534, 133-146, 2023.<\/li>\n<li>Kuroki S, Nishida S. Touch cannot attentionally select signals based on feature binding. IEEE Transactions on Haptics, 2023.<\/li>\n<li>Takumi Hamazaki, Miku Kaneda, Seitaro Kaneko, Hiroyuki Kajimoto. Chemical Approach to the Thermal Grill Illusion. IEEE Access, vol. 12, pp. 29385&#8211;29396, 2024.<\/li>\n<li>\u771f\u934b \u5149\u5e0c, \u725b\u5c71 \u594e\u609f, \u9ad9\u6a4b \u54f2\u53f2, \u68b6\u672c \u88d5\u4e4b. \u53cd\u767a\u3059\u308b\u78c1\u77f3\u306e\u6f0f\u308c\u78c1\u675f\u3092\u5229\u7528\u3057\u305f\u30a6\u30a7\u30a2\u30e9\u30d6\u30eb\u89e6\u899a\u63d0\u793a\u7d20\u5b50. \u65e5\u672c\u30d0\u30fc\u30c1\u30e3\u30eb\u30ea\u30a2\u30ea\u30c6\u30a3\u5b66\u4f1a\u8ad6\u6587\u8a8c, Vol.28, No.4, p.361-370, 2023.<\/li>\n<li>Midori Tanaka, Tsubasa Ando and Takahiko Horiuchi. Automatic MTF Conversion between Different Characteristics Caused by Imaging Devices. Journal of Imaging, Vol.10, No.2, 49, 2023.<\/li>\n<li>Takeuchi M, Kusuyama H, Iwai D, Sato K. 
Projection Mapping under Environmental Lighting by Replacing Room Lights with Heterogeneous Projectors. IEEE Transactions on Visualization and Computer Graphics, 30(5):2151-2161, 2023.<\/li>\n<li>Zhong S, Punpongsanon P, Iwai D, Sato K. Topology Optimization with Text-Guided Stylization. Structural and Multidisciplinary Optimization, 66:256, 2023.<\/li>\n<li>Erel Y, Iwai D, Bermano A. Neural Projection Mapping Using Reflectance Fields. IEEE Transactions on Visualization and Computer Graphics, 29(11):4339-4349, 2023.<\/li>\n<li>Zhong S, Punpongsanon P, Iwai D, Sato K. Estimation of Fused-Filament-Fabrication Structural Vibro-Acoustic Performance by Modal Impact Sound. Computers &amp; Graphics, 115:137-147, 2023.<\/li>\n<li>Ueda F, Kageyama Y, Iwai D, Sato K. Focal Surface Projection: Extending Projector Depth-of-Field Using a Phase-Only Spatial Light Modulator. Journal of the Society for Information Display, 31(11):651-656, 2023.<\/li>\n<li>Aoki H, Tochimoto T, Hiroi Y, Itoh Y. Towards Co-operative Beaming Displays: Dual Steering Projectors for Extended Projection Volume and Head Orientation Range. IEEE Transactions on Visualization and Computer Graphics, 30(5):2309-2318, 2023.<\/li>\n<li>Hiroi Y, Hiraki T, Itoh Y. StainedSweeper: Compact, Variable-Intensity Light-Attenuation Display with Sweeping Tunable Retarders. IEEE Transactions on Visualization and Computer Graphics, 30(5):2682-2692, 2023.<\/li>\n<li>Hiroi Y, Watanabe A, Mikawa Y, Itoh Y. Low-Latency Beaming Display: Implementation of Wearable, 133 \u03bcs Motion-to-Photon Latency Near-eye Display. IEEE Transactions on Visualization and Computer Graphics, 29 (11):4761-4771, 2023.<\/li>\n<li>Masahiko Yasui, Ryota Iwataki, Masatoshi Ishikawa, and Yoshihiro Watanabe. Projection Mapping with a Brightly Lit Surrounding Using a Mixed Light Field Approach. IEEE Transactions on Visualization and Computer Graphics, Vol. 30, No. 5, pp. 
2217-2227, 2023.<\/li>\n<li>Gen Ohara, Daiki Kikuchi, Masashi Konyo, Satoshi Tadokoro. Stereohaptic Vibration: Out-of-Body Localization of Virtual Vibration Source through Multiple Vibrotactile Stimuli on the Forearms. IEEE Transactions on Haptics, vol. 17, no. 1, pp. 86-91, 2023.<\/li>\n<li>Isoyama Takuto, Kidani Shunsuke, Unoki Masashi. Computational models of auditory sensation important for sound quality on basis of either gammatone or gammachirp auditory filterbank. Applied Acoustics, vol. 218, p. 109914, 2023.<\/li>\n<li>Yasuji Ota and Masashi Unoki. Anomalous sound detection for industrial machines using acoustical features related to timbral metrics. IEEE Access, vol. 31, pp. 2534-2547, 2023.<\/li>\n<li>Khalid Zaman, Melike Sah, Cem Direkoglu, Masashi Unoki. A Survey of Audio Classification using Deep Learning. IEEE Access, vol. 11, pp. 106620-106649, 2023.<\/li>\n<li>\u5ca1\u672c\u4f91\u6c70\u90ce, \u5929\u91ce\u654f\u4e4b. \u7167\u660e\u74b0\u5883\u306e\u5909\u52d5\u306b\u5bfe\u3057\u3066\u9811\u5f37\u306a\u898b\u304b\u3051\u306e\u5236\u5fa1. \u65e5\u672c\u30d0\u30fc\u30c1\u30e3\u30eb\u30ea\u30a2\u30ea\u30c6\u30a3\u5b66\u4f1a\u8ad6\u6587\u8a8c, Vol.28, No.3, 2023, pp.271-279, 2023.<\/li>\n<li>\u677e\u672c \u4f91\u5927, \u5929\u91ce \u654f\u4e4b. \u30e9\u30a4\u30c8\u30d5\u30a3\u30fc\u30eb\u30c9\u30d5\u30a3\u30fc\u30c9\u30d0\u30c3\u30af\u3092\u7528\u3044\u305f\u5149\u6ca2\u7269\u4f53\u306b\u5bfe\u3059\u308b\u8996\u70b9\u4f9d\u5b58\u306e\u5149\u6ca2\u611f\u5f37\u8abf. \u65e5\u672c\u30d0\u30fc\u30c1\u30e3\u30eb\u30ea\u30a2\u30ea\u30c6\u30a3\u5b66\u4f1a\u8ad6\u6587\u8a8c, Vol.28, No.3, 2023, pp.255-262, 2023.<\/li>\n<li>Mirai Azechi, Shogo Okamoto. Bumps and Dents are Not Perceptually Opposite When Exploring With Lateral Force Cues. IEEE Transactions on Haptics, 17(1), 52-57, 2023.<\/li>\n<li>Sayaka YAMADA, Shogo OKAMOTO, Yumeka OGURA, Yuki KOSUGE. Optimal design of a haptic popping toy using response surface. 
Journal of Advanced Mechanical Design, Systems, and Manufacturing, 18(2), 23-00413, 2023.<\/li>\n<li>Ichiro Kuriki, Kazuki Sato, Satoshi Shioiri. The reality of head mounted display (HMD) environment tested via lightness perception. Journal of Imaging, 10(2): 36, 2023.<\/li>\n<li>He Y, Sato H, Phuangsuwan C, Rattanakasamsuk K, Mizokami Y. Relationship between brightness perception and skin color influenced by experimental method. Color Research &amp; Application, pp. 1-14 (Early view), 2023.<\/li>\n<li>Naoki Ishida, Tomoyo Isoguchi Shiramatsu, Tomoyuki Kubota, Dai Akita, Hirokazu Takahashi. Quantification of information processing capacity in living brain as physical reservoir. Applied Physics Letters, 122, 233702, 2023.<\/li>\n<li>Shinichi Kumagai, Tomoyo Isoguchi Shiramatsu, Akane Matsumura, Yohei Ishishita, Kenji Ibayashi, Yoshiyuki Onuki, Kensuke Kawai, Hirokazu Takahashi. Frequency-specific modulation of oscillatory activity in the rat auditory cortex by vagus nerve stimulation. Brain Stimulation, 16, 1476-1485, 2023.<\/li>\n<li>Nakai S, Kitanishi T, Mizuseki K. Distinct manifold encoding of navigational information in the subiculum and hippocampus. Science Advances, 10:eadi4471, 2023.<\/li>\n<li>Zhou, T., Kawasaki, K., Suzuki, T., Hasegawa, I., Roe, A. W. &amp; Tanigawa, H. Mapping information flow between the inferotemporal and prefrontal cortices via neural oscillations in memory retrieval and maintenance. Cell Rep., 42, 113169, 2023.<\/li>\n<li>Munenori Ono, Tetsufumi Ito, Sachiko Yamaki, Yoshie Hori, Qing Zhou, Xirun Zhao, Shinji Muramoto, Ryo Yamamoto, Takafumi Furuyama, Hiromi Sakata-Haga, Toshihisa Hatta, Tsuyoshi Hamaguchi, Nobuo Kato. Spatiotemporal development of the neuronal accumulation of amyloid precursor protein and the amyloid plaque formation in the brain of 3xTg-AD mice. Heliyon, e28821, 2023.<\/li>\n<li>Tatsuya Oikawa, Kento Nomura, Toshimitsu Hara, Kowa Koida. 
A Fine-Scale and Minimally Invasive Marking Method for Use with Conventional Tungsten Microelectrodes. eNeuro, 10(9), 2023.<\/li>\n<li>\u9bc9\u7530 \u5b5d\u548c. \u932f\u8996\u306e\u79d1\u5b66\u3068\u305d\u306e\u793e\u4f1a\u5b9f\u88c5\u3078\u306e\u5c55\u958b \u540c\u6642\u8272\u5bfe\u6bd4\u306e\u8272\u76f8\u7279\u6027\u3068\u7a7a\u9593\u7279\u6027. \u5149\u5b66 (Japanese Journal of Optics), 53(1), pp. 23-26, 2023.<\/li>\n<li>Masahiro Kawatani, Kayo Horio, Mahito Ohkuma, Wan-Ru Li, and Takayuki Yamashita. Interareal synaptic inputs underlying whisking-related activity in the primary somatosensory barrel cortex. The Journal of Neuroscience, 44: e1148232023, 2023.<\/li>\n<li>Shinji Kubota, Chika Sasaki, Satomi Kikuta, Junichiro Yoshida, Sho Ito, Hiroaki Gomi, Tomomichi Oya, Kazuhiko Seki. Modulation of somatosensory signal transmission in the primate cuneate nucleus during voluntary hand movement. Cell Reports, 43(3), 113884, 2023.<\/li>\n<li>Pearl JE, Matsumoto N, Hayashi K, Matsuda K, Miura K, Nagai Y, Miyakawa N, Minamimoto T, Saunders RC, Sugase-Miyamoto Y, Richmond BJ, Eldridge MAG. Neural correlates of category learning in monkey inferior temporal cortex. bioRxiv, 2023.<\/li>\n<li>Mitsuru Tanaka, Keishiro Arima, Haruna Ide, Mariko Koshi, Naoto Ohno, Miho Imamura and Toshiro Matsui. Application of graphite carbon black assisted-laser desorption ionization-mass spectrometry for soy sauce product discrimination. Bioscience, Biotechnology, and Biochemistry, 2023.<\/li>\n<li>Yamamoto K, Iwai D, Tani I, Sato K. A Monocular Projector-Camera System using Modular Architecture. 
IEEE Transactions on Visualization and Computer Graphics, 29(12):5586-5592, 2023.<\/li>\n<li>Takahiro Miura, Naoyuki Okochi, Junya Suzuki, Tohru Ifukube. Binaural Listening with Head Rotation Helps Persons with Blindness Perceive Narrow Obstacles. International Journal of Environmental Research and Public Health, 20(8):5573, 2023.<\/li>\n<li>Takahiro Miura, Ken-ichiro Yabu. Narrative review of assistive technologies and sensory substitution in people with visual and hearing impairment. Psychologia, 2023.<\/li>\n<li>Tetsushi Nonaka, Arsen Abdulali, Chapa Sirithunge, Kieran Gilday, Fumiya Iida. Soft robotic tactile perception of softer objects based on learning of spatiotemporal pressure patterns. 6th IEEE-RAS International Conference on Soft Robotics (RoboSoft 2023), 2023.<\/li>\n<\/ul>\n\n\n\n<ul><li>Ichikawa T, Fukao Y, Nonaka S, Nobuhara S, Nishino K. Fresnel Microfacet BRDF: Unification of Polari-Radiometric Surface-Body Reflection. Conference on Computer Vision and Pattern Recognition CVPR\u201923, Vancouver, Canada, 2023.<\/li> \n<li>Li Z, Jiang H, Cao M, Zheng Y. Polarized Color Image Denoising. Conference on Computer Vision and Pattern Recognition CVPR\u201923, Vancouver, Canada, 2023.<\/li> \n<li>Zhan Y, Nobuhara S, Nishino K, Zheng Y. NeRFrac: Neural Radiance Fields through Refractive Surface. IEEE International Conference on Computer Vision (ICCV), Paris, France, 2023.<\/li> \n<li>Nakamura S, Kawanishi Y, Nobuhara S, Nishino K. DeePoint: Visual Pointing Recognition and Direction Estimation. IEEE International Conference on Computer Vision (ICCV), Paris, France, 2023.<\/li> \n<li>Niu M, Zhong Z, Zheng Y. NIR-assisted Video Enhancement via Unpaired 24-hour Data. IEEE International Conference on Computer Vision (ICCV), Paris, France, 2023.<\/li> \n<li>Ji X, Wang Z, Satoh S, Zheng Y. 
Single Image Deblurring with Row-dependent Blur Magnitude. IEEE International Conference on Computer Vision (ICCV), Paris, France, 2023.<\/li> \n<li>Jie Zhang, Masanori Suganuma, Takayuki Okatani. Contextual Affinity Distillation for Image Anomaly Detection. IEEE\/CVF Winter Conference on Applications of Computer Vision, Waikoloa, USA, 2023.<\/li> \n<li>Xiangyong Lu, Masanori Suganuma, Takayuki Okatani. SBCFormer: Lightweight Network Capable of Full-size ImageNet Classification at 1 FPS on Single Board Computers. IEEE\/CVF Winter Conference on Applications of Computer Vision, Waikoloa, USA, 2023.<\/li> \n<li>Liang Xu, Han Zou, Takayuki Okatani. How Do Label Errors Affect Thin Crack Detection by DNNs. IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, Canada, 2023.<\/li> \n<li>Han Zou, Liang Xu, Takayuki Okatani. Geometry Enhanced Reference-based Image Super-resolution. IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, Canada, 2023.<\/li> \n<li>Qian Ye, Masanori Suganuma, Takayuki Okatani. Accurate Single-Image Defocus Deblurring Based on Improved Integration with Defocus Map Estimation. IEEE International Conference on Image Processing (ICIP), Kuala Lumpur, Malaysia, 2023.<\/li> \n<li>Jie Zhang, Masanori Suganuma, Takayuki Okatani. Network Pruning and Fine-tuning for Few-shot Industrial Image Anomaly Detection. IEEE 21st International Conference on Industrial Informatics (INDIN), Lemgo, Germany, 2023.<\/li> \n<li>Yuichi Kamata, Moyuru Yamada, Takayuki Okatani. Self-Modularized Transformer: Learn to Modularize Networks for Systematic Generalization. International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Lisbon, Portugal, 2023.<\/li> \n<li>Toshimichi Aota, Lloyd Teh Tzer Tong, Takayuki Okatani. Zero-shot versus Many-shot: Unsupervised Texture Anomaly Detection. 
IEEE\/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, USA, 2023.<\/li> \n<li>Tamura, H., Nakauchi, S., Minami, T. Pupillary responses to perceived glossiness and attractiveness. European Conference on Visual Perception 2023, Paphos, Cyprus, 2023.<\/li> \n<li>Sun Zitang, Yung-Hao Yang, Shin\u2019ya Nishida. Modeling of Human Motion Perception Mechanism: A Simulation based on Deep Neural Network and Attention Transformer. Vision Sciences Society, St Pete Beach, Florida, USA, 2023.<\/li> \n<li>Yen-Ju Chen, Shin\u2019ya Nishida. Temporal limits of visual segmentation based on temporal asynchrony in luminance, color, motion direction, and their mixtures. Vision Sciences Society, St Pete Beach, Florida, USA, 2023.<\/li> \n<li>Taiki Fukiage, Shin&#8217;ya Nishida. Local image statistics can account for the perceived naturalness of image contrast. Vision Sciences Society, St Pete Beach, Florida, USA, 2023.<\/li> \n<li>Sun Zitang, Yen-Ju Chen, Yung-Hao Yang, Shin\u2019ya Nishida. A Comparative Analysis of Visual Motion Perception: Computer Vision Models versus Human Abilities. Conference on Cognitive Computational Neuroscience, Oxford, UK, 2023.<\/li> \n<li>Yui Suga, Masahiro Miyakami, Izumi Mizoguchi, Hiroyuki Kajimoto. 3D Shape Presentation by Combination of Force Feedback and Electro-tactile Stimulation. IEEE World Haptics Conference 2023, Delft, Netherlands, 2023.<\/li> \n<li>Soma Kato, Yui Suga, Masahiro Miyakami, Izumi Mizoguchi, Hiroyuki Kajimoto. Presentation of Tracing Sensation to Fingertip Using a Rotating Disk. IEEE World Haptics Conference 2023, Delft, Netherlands, 2023.<\/li> \n<li>Mizuki Hamaguchi, Takumi Hamazaki, Miku Kaneda, Seitaro Kaneko, Hiroyuki Kajimoto. Temperature change measurement of skin-to-skin contact. IEEE World Haptics Conference 2023, Delft, Netherlands, 2023.<\/li> \n<li>Yui Suga, Izumi Mizoguchi, Hiroyuki Kajimoto. Presentation of Finger-size Shapes by Combining Force Feedback and Electro-tactile Stimulation. 
IEEE Virtual Reality 2024, Florida, 2023.<\/li> \n<li>Kota Iwase, Momoka Matsufuji, Midori Tanaka and Takahiko Horiuchi. Visual Perception Experiment and Analysis of Metamorphosis for Object Appearance by Change-Detection Task. 15th Congress of the International Colour Association, Chiang Rai, Thailand, 2023.<\/li> \n<li>Toshinori Oba, Midori Tanaka and Takahiko Horiuchi. Analysis of Harmony between Coloured Light and Fragrance by the Left-Right Orbitofrontal Area. 15th Congress of the International Colour Association, Chiang Rai, Thailand, 2023.<\/li> \n<li>Taiga Baba, Midori Tanaka, Shoko Imaizumi and Takahiko Horiuchi. Perceptual Collation Method for Colour Halftone Images based on Visual Saliency. 15th Congress of the International Colour Association, Chiang Rai, Thailand, 2023.<\/li> \n<li>Takahiko Horiuchi. Physical and perceptual worlds of material appearance. 15th Congress of the International Colour Association, Chiang Rai, Thailand, 2023.<\/li> \n<li>Takuya Koumura, Hiroki Terashima, Shigeto Furukawa. Comparison of Neural Networks Trained for Multi-Source and Single-Source Sound Recognition: Towards Modeling Auditory Mechanisms of Multi-Source Sound Recognition. The 33rd Annual Conference of the Japanese Neural Network Society, Tokyo, 2023.<\/li> \n<li>Takuya Koumura, Hiroki Terashima, Shigeto Furukawa. Simulating Psychophysical Modulation Masking Experiments in an Artificial Neural Network Trained for Sound Classification. 47th Annual MidWinter Meeting, Association for Research in Otolaryngology, Anaheim, CA, USA, 2023.<\/li> \n<li>Kitagishi T, Hiroi Y, Watanabe Y, Itoh Y, Rekimoto J. Telextiles: End-to-end Remote Transmission of Fabric Tactile Sensation. ACM Symposium on User Interface Software and Technology (UIST) 2023, San Francisco, California, USA, 2023.<\/li> \n<li>Aoki H, Hiroi Y, Itoh Y, Rekimoto J. Retinal Homing Display: Head-Tracking Auto-stereoscopic Retinal Projection Display. 
ACM Symposium on Virtual Reality Software and Technology (VRST) 2023, Christchurch (New Zealand), 2023.<\/li> \n<li>Hao-Lun Peng, Shin&#8217;ya Nishida, and Yoshihiro Watanabe. Studying User Perceptible Misalignment in Simulated Dynamic Facial Projection Mapping. IEEE International Symposium on Mixed and Augmented Reality, University of New South Wales, Australia, 2023.<\/li> \n<li>Soran Nakagawa and Yoshihiro Watanabe. High-Frame-Rate Projection with Thousands of Frames Per Second Based on the Multi-Bit Superimposition Method. IEEE International Symposium on Mixed and Augmented Reality, University of New South Wales, Australia, 2023.<\/li> \n<li>Gen Ohara. Stereohaptic Vibration: Out-of-Body Localization of Virtual Vibration Source through Multiple Vibrotactile Stimuli on the Forearms. IEEE World Haptics Conference 2023, Delft, Netherlands, 2023.<\/li> \n<li>\u5927\u539f \u7384. \u5916\u754c\u3092\u8868\u73fe\u3059\u308b\u7acb\u4f53\u632f\u52d5\u30c7\u30a3\u30b9\u30d7\u30ec\u30a4 \u7b2c 6 \u5831\uff1a\u8155\u8f2a\u578b\u30c7\u30d0\u30a4\u30b9\u3092\u7528\u3044\u305f\u5b9a\u4f4d\u3068\u7acb\u4f53\u97f3\u97ff\u3068\u306e\u76f8\u4e92\u4f5c\u7528. \u65e5\u672c\u6a5f\u68b0\u5b66\u4f1a \u30ed\u30dc\u30c6\u30a3\u30af\u30b9\u30fb\u30e1\u30ab\u30c8\u30ed\u30cb\u30af\u30b9 \u8b1b\u6f14\u4f1a 2023, \u540d\u53e4\u5c4b\u5e02, 2023.<\/li> \n<li>\u5927\u539f \u7384. \u5916\u754c\u3092\u8868\u73fe\u3059\u308b\u7acb\u4f53\u632f\u52d5\u30c7\u30a3\u30b9\u30d7\u30ec\u30a4 \u7b2c7\u5831\uff1a\u30d5\u30a1\u30f3\u30c8\u30e0\u30bb\u30f3\u30bb\u30fc\u30b7\u30e7\u30f3\u3092\u751f\u8d77\u3059\u308b\u523a\u6fc0\u6761\u4ef6\u306e\u691c\u8a0e. \u7b2c28\u56de\u65e5\u672c\u30d0\u30fc\u30c1\u30e3\u30eb\u30ea\u30a2\u30ea\u30c6\u30a3\u5b66\u4f1a\u5927\u4f1a, \u516b\u738b\u5b50\u5e02, 2023.<\/li> \n<li>\u5927\u539f \u7384. 
\u5916\u754c\u3092\u8868\u73fe\u3059\u308b\u7acb\u4f53\u632f\u52d5\u30c7\u30a3\u30b9\u30d7\u30ec\u30a4 \u7b2c 8 \u5831:\u88dc\u52a9\u632f\u52d5\u306b\u3088\u308b\u7acb\u4f53\u632f\u52d5\u306e\u5b9a\u4f4d\u6027\u306e\u6539\u5584\u624b\u6cd5\u306e\u63d0\u6848. \u7b2c24\u56de\u8a08\u6e2c\u81ea\u52d5\u5236\u5fa1\u5b66\u4f1a\u30b7\u30b9\u30c6\u30e0\u30a4\u30f3\u30c6\u30b0\u30ec\u30fc\u30b7\u30e7\u30f3\u90e8\u9580\u8b1b\u6f14\u4f1a, \u65b0\u6f5f\u5e02, 2023.<\/li> \n<li>Juro Hosoi, Du Jin, Yuki Ban and Shin\u2019ichi Warisawa. FurAir: Non-contact Presentation of Soft Fur Texture by Pseudo-haptics and Mid-air Ultrasound Haptic Feedback. SIGGRAPH Asia 2023 Emerging Technologies, Sydney, NSW, Australia, 2023.<\/li> \n<li>\u6797\u90c1\u7f8e, \u6e21\u8fba\u54f2\u967d. \u7740\u8863\u5bb9\u6613\u5316\u306e\u305f\u3081\u306e\u525b\u6027\u30fb\u6469\u64e6\u53ef\u5909\u30a2\u30af\u30c1\u30e5\u30a8\u30fc\u30bf\u306e\u958b\u767a. \u7b2c29\u56de\u30ed\u30dc\u30c6\u30a3\u30af\u30b9\u30b7\u30f3\u30dd\u30b8\u30a2, \u6c96\u7e04\u770c\u540d\u8b77\u5e02 \u30ab\u30cc\u30c1\u30e3\u30ea\u30be\u30fc\u30c8 \u30ab\u30cc\u30c1\u30e3\u30d9\u30a4\u30db\u30c6\u30eb\uff06\u30f4\u30a3\u30e9\u30ba, 2023.<\/li> \n<li>Cheng Haowei, Mawalim Candy Olivia, Li Kai, Wang Lijun, Unoki Masashi. Analysis of Spectro-Temporal Modulation Representation for Deep-Fake Speech Detection. APSIPA ASC 2023, Taipei, 2023.<\/li> \n<li>Li Kai, Tran Dung Kim, Lu Xugang, Akagi Masato, Unoki Masashi. Data-driven Non-uniform Filterbanks Based on F-ratio for Machine Anomalous Sound Detection. EUSIPCO2023, Helsinki, Finland, 2023.<\/li> \n<li>\u78ef\u5c71\u62d3\u90fd, \u6728\u8c37\u4fca\u4ecb, \u9d5c\u6728\u7950\u53f2. \u8074\u899a\u30d5\u30a3\u30eb\u30bf\u30d0\u30f3\u30af\u3092\u7528\u3044\u305f\u6642\u5909\u52d5\u97f3\u306e\u30e9\u30a6\u30c9\u30cd\u30b9\u8a08\u7b97\u6cd5\u306e\u69cb\u7bc9. 
\u65e5\u672c\u97f3\u97ff\u5b66\u4f1a2023\u5e74\u5ea6\u79cb\u5b63\u7814\u7a76\u767a\u8868\u4f1a, \u540d\u53e4\u5c4b\u5927\u5b66, 2023.<\/li> \n<li>\u5927\u7530\u606d\u58eb, \u9d5c\u6728\u7950\u53f2. \u7523\u696d\u6a5f\u5668\u306e\u7570\u5e38\u97f3\u691c\u77e5\u306b\u5411\u3051\u305f\u97f3\u8272\u95a2\u9023\u7279\u5fb4\u91cf\u306e\u691c\u8a0e. \u65e5\u672c\u97f3\u97ff\u5b66\u4f1a2023\u5e74\u5ea6\u79cb\u5b63\u7814\u7a76\u767a\u8868\u4f1a, \u540d\u53e4\u5c4b\u5927\u5b66, 2023.<\/li> \n<li>\u7530\u4e2d\u3000\u5145. \u5473\u899a\u30fb\u55c5\u899a\u6210\u5206\u306e\u30c7\u30b8\u30bf\u30eb\u5316\u306e\u305f\u3081\u306e\u65b0\u305f\u306a\u98df\u54c1\u5206\u6790\u6280\u8853. \u7b2c7\u56de\u3000\u8cea\u611f\u306e\u3064\u3069\u3044, \u30a2\u30af\u30ea\u30a8\u3072\u3081\u3058\u3000\u591a\u76ee\u7684\u30db\u30fc\u30eb, 2023.<\/li> \n<li>\u7530\u4e2d\u3000\u5145. \u98df\u54c1\u54c1\u8cea\u304a\u3088\u3073\u5065\u5eb7\u6a5f\u80fd\u306e\u8a55\u4fa1\u30fb\u4e88\u6e2c\u3092\u53ef\u80fd\u306b\u3059\u308b\u30d5\u30fc\u30c9\u30df\u30af\u30b9\u6280\u8853\u306e\u78ba\u7acb. \u300c\u4e5d\u5dde\u5927\u5b66\u5b66\u8853\u7814\u7a76\u90fd\u5e02\u300d\u30bb\u30df\u30ca\u30fcin \u6771\u4eac 2023, \u30db\u30c6\u30eb\u96c5\u53d9\u5712, 2023.<\/li> \n<li>\u7530\u4e2d\u3000\u5145. \u98df\u54c1\u6210\u5206\u306e\u9ad8\u611f\u5ea6\u53ef\u8996\u5316\u6280\u8853\u3068\u5473\u30fb\u9999\u308a\u6210\u5206\u30c7\u30b8\u30bf\u30eb\u5316\u6280\u8853\u306e\u69cb\u7bc9\u3078\u306e\u8a66\u307f. \u65e5\u672c\u9999\u6599\u5354\u4f1a \u5b66\u8853\u8b1b\u6f14\u4f1a , \u30a8\u30c3\u30b5\u30e0\u795e\u7530\uff08\u6771\u4eac\uff09, 2023.<\/li> \n<li>\u7530\u4e2d\u3000\u5145. \u5473\u3068\u306b\u304a\u3044\u306e\u540c\u6642\u5206\u6790\u306b\u3088\u308b\u98a8\u5473\u6210\u5206\u60c5\u5831\u306e\u4e00\u5143\u5316 . \u98df\u54c1\u958b\u767a\u5c552023 \u30bb\u30df\u30ca\u30fc, \u6771\u4eac\u30d3\u30c3\u30b0\u30b5\u30a4\u30c8, 2023.<\/li> \n<li>\u7530\u4e2d\u3000\u5145. 
\u98df\u54c1\u6210\u5206\u306e\u9ad8\u5ea6\u53ef\u8996\u5316\u6280\u8853\u306e\u958b\u767a\u3068\u98df\u6a5f\u80fd\u30fb\u54c1\u8cea\u8a55\u4fa1\u3078\u306e\u5fdc\u7528. \u65e5\u672c\u98df\u54c1\u5206\u6790\u5b66\u4f1a\u3000\u4ee4\u548c5\u5e74\u5ea6\u5b66\u8853\u96c6\u4f1a, \u6771\u6d0b\u5927\u5b66\u767d\u5c71\u30ad\u30e3\u30f3\u30d1\u30b9, 2023.<\/li> \n<li>\u5289 \u5353\u975e\u3001\u6709\u99ac\u7d99\u58eb\u90ce\u3001\u6851\u539f\u3000\u6dbc\u3001\u897f\u6728\u3000\u76f4\u5df3\u3001\u7530\u4e2d\u3000\u5145\u3001\u677e\u4e95\u5229\u90ce. Graphite sheet-assisted laser desorption ionization-mass spectrometry for simple and rapid detection of small organic compounds. \u65e5\u672c\u5206\u6790\u5316\u5b66\u4f1a\u3000\u7b2c72\u5e74\u4f1a, \u718a\u672c\u57ce\u30db\u30fc\u30eb, 2023.<\/li> \n<li>\u4e95\u624b\u6674\u83dc\u3001\u6709\u99ac\u7d99\u58eb\u90ce, \u570b\u6b66\u53cb\u91cc, \u5927\u91ce\u76f4\u571f, \u4eca\u6751\u7f8e\u7a42, \u677e\u4e95\u5229\u90ce\u3001\u7530\u4e2d\u3000\u5145. \u5473\u30fb\u306b\u304a\u3044\u6210\u5206\u3092\u4e00\u6589\u691c\u51fa\u53ef\u80fd\u306a\u65b0\u898f\u30b0\u30e9\u30d5\u30a1\u30a4\u30c8\u30ab\u30fc\u30dc\u30f3\u30d6\u30e9\u30c3\u30af\u652f\u63f4-LDI-MS\u6cd5\u306e\u91a4\u6cb9\u54c1\u8cea\u8a55\u4fa1\u3078\u306e\u5fdc\u7528. \u65e5\u672c\u98df\u54c1\u79d1\u5b66\u5de5\u5b66\u4f1a \u7b2c70\u56de\u8a18\u5ff5\u5927\u4f1a, \u4eac\u90fd\u5973\u5b50\u5927\u5b66, 2023.<\/li> \n<li>\u7530\u4e2d\u3000\u5145. \u5206\u6790\u5316\u5b66\u7684\u30a2\u30d7\u30ed\u30fc\u30c1\u3067\u98df\u306e\u6a5f\u80fd\u3068\u54c1\u8cea\u306b\u8feb\u308b\u3000\u301c\u65b0\u305f\u306a\u98df\u79d1\u5b66\u7814\u7a76\u3092\u76ee\u6307\u3057\u3066\u301c. \u4ee4\u548c\uff15\u5e74\u5ea6\u65e5\u672c\u8fb2\u82b8\u5316\u5b66\u4f1a\u897f\u65e5\u672c\u652f\u90e8\u4f8b\u4f1a, \u4e5d\u5dde\u5927\u5b66\u533b\u5b66\u90e8\u767e\u5e74\u8b1b\u5802, 2023.<\/li> \n<li>\u5c71\u7530\u60a0\u7a00, \u5ddd\u539f\u50da, \u5ca1\u90e8\u5b5d\u5f18. 
\u30e9\u30a4\u30c8\u30c8\u30e9\u30f3\u30b9\u30dd\u30fc\u30c8\u7372\u5f97\u306e\u305f\u3081\u306e\u7b26\u53f7\u5316\u3068\u5fa9\u53f7\u306e\u540c\u6642\u6700\u9069\u5316. \u7b2c26\u56de\u753b\u50cf\u306e\u8a8d\u8b58\u30fb\u7406\u89e3\u30b7\u30f3\u30dd\u30b8\u30a6\u30e0 (MIRU2023), \u9759\u5ca1, 2023.<\/li> \n<li>\u4e0a\u7530\u5b87\u8d77, \u5ddd\u539f\u50da, \u5ca1\u90e8\u5b5d\u5f18. \u76f4\u63a5\u30fb\u5927\u57df\u6210\u5206\u3078\u306e\u5206\u89e3\u306e\u305f\u3081\u306e\u64ae\u5f71\u6761\u4ef6\u3068\u5206\u89e3\u51e6\u7406\u306e\u540c\u6642\u6700\u9069\u5316. \u7b2c22\u56de\u60c5\u5831\u79d1\u5b66\u6280\u8853\u30d5\u30a9\u30fc\u30e9\u30e0 (FIT2023), \u5927\u962a, 2023.<\/li> \n<li>\u5e73\u5c3e\u5bff\u5e0c, \u5ddd\u539f\u50da, \u5ca1\u90e8\u5b5d\u5f18. \u4efb\u610f\u65b9\u5411\u5149\u6e90\u4e0b\u753b\u50cf\u751f\u6210\u306e\u305f\u3081\u306e\u7167\u660e\u74b0\u5883\u3068\u88dc\u9593\u51e6\u7406\u306e\u540c\u6642\u6700\u9069\u5316. \u7b2c22\u56de\u60c5\u5831\u79d1\u5b66\u6280\u8853\u30d5\u30a9\u30fc\u30e9\u30e0 (FIT2023), \u5927\u962a, 2023.<\/li> \n<li>Takaoki Ueda, Ryo Kawahara, and Takahiro Okabe. Learning projection patterns for direct-global separation. The 19th International Conference on Computer Vision Theory and Applications (VISAPP2024), Rome, Italy, 2023.<\/li> \n<li>Toshiki Hirao, Ryo Kawahara, and Takahiro Okabe. Using extended light sources for relighting from a small number of images. The 19th International Conference on Computer Vision Theory and Applications (VISAPP2024), Rome, Italy, 2023.<\/li> \n<li>\u6cb3\u91ce\u5275\u7950, \u5e73\u5c3e\u5bff\u5e0c, \u4e0a\u7530\u5b87\u8d77, \u5ddd\u539f\u50da, \u5ca1\u90e8\u5b5d\u5f18. \u30e9\u30a4\u30c8\u30c8\u30e9\u30f3\u30b9\u30dd\u30fc\u30c8\u7372\u5f97\u306e\u305f\u3081\u306e\u7b26\u53f7\u5316\u7167\u660e\u30fb\u9732\u5149\u30d1\u30bf\u30f3\u3068\u5fa9\u53f7\u51e6\u7406\u306e\u540c\u6642\u5b66\u7fd2. 
\u60c5\u5831\u51e6\u7406\u5b66\u4f1a \u7b2c86\u56de\u5168\u56fd\u5927\u4f1a, \u795e\u5948\u5ddd, 2023.<\/li> \n<li>\u677e\u85e4\u61b2\u543e, \u77f3\u6797, \u5ddd\u539f\u50da, \u5ca1\u90e8\u5b5d\u5f18. \u4efb\u610f\u8996\u70b9\u306b\u304a\u3051\u308b\u76f4\u63a5\u30fb\u5927\u57df\u6210\u5206\u5206\u96e2\u3068\u8cea\u611f\u7de8\u96c6. \u60c5\u5831\u51e6\u7406\u5b66\u4f1a \u7b2c86\u56de\u5168\u56fd\u5927\u4f1a, \u795e\u5948\u5ddd, 2023.<\/li> \n<li>\u77f3\u6797, \u677e\u85e4\u61b2\u543e, \u5ddd\u539f\u50da, \u5ca1\u90e8\u5b5d\u5f18. \u4efb\u610f\u8996\u70b9\u30fb\u4efb\u610f\u5149\u6e90\u8272\u306b\u304a\u3051\u308b\u86cd\u5149\u7269\u4f53\u306e\u753b\u50cf\u751f\u6210. \u60c5\u5831\u51e6\u7406\u5b66\u4f1a \u7b2c86\u56de\u5168\u56fd\u5927\u4f1a, \u795e\u5948\u5ddd, 2023.<\/li> \n<li>\u718a\u8c37\u6dbc\u5e73, \u4e0a\u7530\u5b87\u8d77, \u5e73\u5c3e\u5bff\u5e0c, \u5ddd\u539f\u50da, \u5ca1\u90e8\u5b5d\u5f18. \u86cd\u5149\u7269\u8cea\u691c\u51fa\u306e\u305f\u3081\u306e\u7167\u660e\u30b9\u30da\u30af\u30c8\u30eb\u3068\u9818\u57df\u5206\u5272\u51e6\u7406\u306e\u540c\u6642\u5b66\u7fd2. \u60c5\u5831\u51e6\u7406\u5b66\u4f1a \u7b2c86\u56de\u5168\u56fd\u5927\u4f1a, \u795e\u5948\u5ddd, 2023.<\/li> \n<li>Yutaro Abe, Asaki Kawaguchi, and Shogo Okamoto. Minimal impact of tangible objects on body ownership transfer in immersive virtual reality. International Symposium on Affective Science and Engineering, Online, 2023.<\/li> \n<li>Giryeon KIM, Shogo OKAMOTO and Hisataka MARUYAMA. Response Surface of Softness Perceived via Frictional Tactile Stimuli on Flat Touch-display. International Symposium on Affective Science and Engineering, Online, 2023.<\/li> \n<li>Hiroharu Natsume and Shogo Okamoto. Prediction of dynamic preference by using temporal dominance of sensations data. International Symposium on Affective Science and Engineering, Online, 2023.<\/li> \n<li>Yuta GOTO and Shogo OKAMOTO. Relaxation effects of auricular vibration stimuli synchronized with music. 
International Symposium on Affective Science and Engineering, Online, 2023.<\/li> \n<li>Yuta GOTO and Shogo OKAMOTO. Stroking stimuli to ear emotionally affect musical and non-musical sounds in a different way. IEEE\/SICE International Symposium on System Integration, Vietnam, 2023.<\/li> \n<li>Hongbo Wang and Shogo Okamoto. GFrictional Planes are Felt Harder through a Force Display Device. IEEE Global Conference on Consumer Electronics, Nara, 2023.<\/li> \n<li>Hongbo Wang and Shogo Okamoto. Kinetic friction deters softness judgment during sliding motion. IEEE Global Conference on Consumer Electronics, Nara, 2023.<\/li> \n<li>Tzu-Hsuan Hsia and Shogo Okamoto. Three-dimensional localization of a finger in water by using human body antenna signals. IEEE Global Conference on Consumer Electronics, Nara, 2023.<\/li> \n<li>Mirai Azechi and Shogo Okamoto. Easy-to-Recognize Bump Shapes using only Lateral Force Cues for Real and Virtual Surfaces. IEEE World Haptics Conference, Delft, Netherlands, 2023.<\/li> \n<li>Yiruna, Takei T., Matsumomo M, Kunimatsu J.. Roles of the supplementary motor area in the voluntary control of breathing\u2019. The 46th Annual Meeting of the Japanese Neuroscience Society, \u4ed9\u53f0, 2023.<\/li> \n<li>\u6728\u6751\u6ec9\u8f14\u30fb\u6e05\u5ddd\u5b8f\u6681\u30fb\u6817\u6728\u4e00\u90ce. \u753b\u50cf\u306e\u4e0d\u81ea\u7136\u3055\u3092\u6c7a\u5b9a\u3059\u308b\u5f69\u5ea6\u9650\u754c\u306b\u95a2\u3059\u308b\u7814\u7a76\uff0e. \u65e5\u672c\u8996\u899a\u5b66\u4f1a2024\u51ac\u5b63\u5927\u4f1a, \u5de5\u5b66\u9662\u5927\u5b66\uff08\u6771\u4eac\u30fb\u65b0\u5bbf\uff09, 2023.<\/li> \n<li>\u6e9d\u4e0a \u967d\u5b50, \u6797 \u79c0\u7f8e, \u4f50\u85e4 \u5f18\u7f8e. \u808c\u306e\u8272\u7d20\u306b\u3088\u308b\u9854\u8272\u306e\u6bb5\u968e\u7684\u5909\u5316\u304c\u8868\u60c5\u77e5\u899a\u306b\u4e0e\u3048\u308b\u5f71\u97ff. 
\u65e5\u672c\u8272\u5f69\u5b66\u4f1a\u8272\u899a\u7814\u7a76\u4f1a \u4ee4\u548c\uff15\u5e74\u5ea6\u7814\u7a76\u767a\u8868\u4f1a, \u5343\u8449\u5927\u5b66\uff08\u5343\u8449\u770c\u30fb\u5343\u8449\u5e02\uff09, 2023.<\/li> \n<li>\u6e9d\u4e0a \u967d\u5b50. \u9854\u306e\u8272\u30fb\u8cea\u611f\u306f\u3069\u306e\u3088\u3046\u306b\u8a8d\u8b58\u3055\u308c\u308b\u304b\uff1f. \u7b2c4\u56de\u56fd\u969b\u5316\u7ca7\u7642\u6cd5\u533b\u5b66\u4f1a2023\uff08\u30b7\u30f3\u30dd\u30b8\u30a6\u30e0 \u8133\u6a5f\u80fd\u304b\u3089\u306e\u6a5f\u80fd\u6027\u8a55\u4fa1\uff08\u9999\u7ca7\uff06\u5316\u7ca7\uff09\uff09, \u30aa\u30f3\u30e9\u30a4\u30f3, 2023.<\/li> \n<li>\u5ca9\ufa11 \u62d3\u771f, \u4f50\u85e4 \u5f18\u7f8e, \u6e9d\u4e0a \u967d\u5b50. \u30aa\u30f3\u30e9\u30a4\u30f3\u4f1a\u8b70\u74b0\u5883\u306b\u304a\u3051\u308b\u9854\u9762\u7167\u660e\u3068\u80cc\u666f\u7167\u660e\u304c\u9854\u306e\u898b\u3048\u306b\u4e0e\u3048\u308b\u5f71\u97ff. \u65e5\u672c\u8996\u899a\u5b66\u4f1a2024\u5e74\u51ac\u5b63\u5927\u4f1a, \u5de5\u5b66\u9662\u5927\u5b66\uff08\u6771\u4eac\u90fd\u30fb\u65b0\u5bbf\u533a\uff09, 2023.<\/li> \n<li>\u9053\u4e0b \u6dbc, \u5c71\u7530 \u771f\u5e0c\u5b50, \u5e73\u5c3e \u8cb4\u5927, \u4f50\u85e4 \u5f18\u7f8e, \u6e9d\u4e0a \u967d\u5b50. \u8868\u60c5\u30ab\u30c6\u30b4\u30ea\u8a8d\u8b58\u306b\u5bfe\u3059\u308b\u808c\u306e\u30d8\u30e2\u30b0\u30ed\u30d3\u30f3\u306b\u3088\u308b\u8272\u5909\u5316\u306e\u5f71\u97ff. \u65e5\u672c\u8996\u899a\u5b66\u4f1a2024\u5e74\u51ac\u5b63\u5927\u4f1a, \u5de5\u5b66\u9662\u5927\u5b66\uff08\u6771\u4eac\u90fd\u30fb\u65b0\u5bbf\u533a\uff09, 2023.<\/li> \n<li>Mizokami Y. Color and brightness perception of facial skin. Colour and Vision Science and Imaging Forum (CVIF 2023), Hangzhou (China), 2023.<\/li> \n<li>Iwasaki T, Sato H, Mizokami Y. Influence of Lighting Spectra on Facial Appearance in Static Images, Movies, and Real Environment. 
The 15th Congress of the International Colour Association 2023, Chiang Rai (Thailand), 2023.<\/li> \n<li>He Y, Michishita R, Sato H, Mizokami Y. Influence of Skin Color Change on Facial Expression Recognition Among Asian Observers. The 15th Congress of the International Colour Association 2023, Chiang Rai (Thailand), 2023.<\/li> \n<li>Mizokami Y. Facial appearance assessment. APPAMAT\/IS&amp;T 2023 International Workshop on Material Appearance, Paris (France), 2023.<\/li> \n<li>He Y, Michishita R, Sato H, Mizokami Y. Effect of Skin Color Change on Facial Expression Recognition: A Study of Japanese and Thai Observers. Satellite International Conference on Advanced Imaging 2023, \u5343\u8449\u5927\u5b66\uff08\u5343\u8449\u770c\u30fb\u5343\u8449\u5e02\uff09, 2023.<\/li> \n<li>Kitano T, Sato H, Mizokami Y. Face Color Perception for Melanin and Hemoglobin Changes. Satellite International Conference on Advanced Imaging 2023, \u5343\u8449\u5927\u5b66\uff08\u5343\u8449\u770c\u30fb\u5343\u8449\u5e02\uff09, 2023.<\/li> \n<li>\u9ed2\u6fa4 \u99ff, \u4f50\u85e4 \u5f18\u7f8e, \u6e9d\u4e0a \u967d\u5b50. \u808c\u306e\u8272\u5f01\u5225\u306b\u304a\u3051\u308b\u8272\u899a\u591a\u69d8\u6027\u306e\u5f71\u97ff. \u65e5\u672c\u8272\u5f69\u5b66\u4f1a\u7b2c54\u56de\u5168\u56fd\u5927\u4f1a [\u6771\u4eac]\u201923, \u6771\u4eac\u9020\u5f62\u5927\u5b66\uff08\u6771\u4eac\u90fd\u30fb\u516b\u738b\u5b50\u5e02\uff09, 2023.<\/li> \n<li>\u5ca9\ufa11 \u62d3\u771f, \u4f50\u85e4 \u5f18\u7f8e, \u6e9d\u4e0a \u967d\u5b50. \u30aa\u30f3\u30e9\u30a4\u30f3\u4f1a\u8b70\u306b\u304a\u3051\u308b\u7167\u660e\u3068\u9854\u5370\u8c61\u306e\u95a2\u4fc2\uff1a\u9759\u6b62\u753b\u3068 \u52d5\u753b\u306e\u6bd4\u8f03. \u65e5\u672c\u8272\u5f69\u5b66\u4f1a\u7b2c54\u56de\u5168\u56fd\u5927\u4f1a [\u6771\u4eac]\u201923, \u6771\u4eac\u9020\u5f62\u5927\u5b66\uff08\u6771\u4eac\u90fd\u30fb\u516b\u738b\u5b50\u5e02\uff09, 2023.<\/li> \n<li>\u6e9d\u4e0a \u967d\u5b50. 
\u8996\u899a\u5fc3\u7406\u304b\u3089\u8003\u3048\u308b\u808c\u306e\u8272\u3068\u304a\u5316\u7ca7. \u7b2c48\u56de\u65e5\u672c\u9999\u7ca7\u54c1\u5b66\u4f1a \u30b7\u30f3\u30dd\u30b8\u30a6\u30e0\u3000\u76ae\u819a\u8272\u30a2\u30c3\u30d7\u30c7\u30fc\u30c8\uff1b\u76ee\u304b\u3089\u9c57\u306e\u304a\u8a71, \u6709\u697d\u753a\u671d\u65e5\u30db\u30fc\u30eb\uff08\u6771\u4eac\u90fd\u30fb\u5343\u4ee3\u7530\u533a\uff09, 2023.<\/li> \n<li>He Y, Hiromi Y. Sato, Mizokami Y. Impact of judgment criteria on the hue-dependent brightness perception of face. The 23rd annual meeting of the Vision Sciences Society, St. Pete Bearch (USA), 2023.<\/li> \n<li>Tanaka M, Amari S, Horiuchi T. Modelling of Perceptual Gloss under Mixed Lighting Conditions. 30th Quadrennial Session of the CIE, Ljubljana, Slovenia, 2023.<\/li> \n<li>Ando T, Tanaka M, Horiuchi T. Experimental Study on Reducing Wavelength Dependency of Spatial Resolution Characteristics of a Digital Camera by MTF Correction. Electronic Imaging, San Francisco, USA, 2023.<\/li> \n<li>Hexin Xu, Amit Yaron, Tomoyo Isoguchi Shiramatsu, Hirokazu Takahashi. Common Mechanism Underlying Multimodal Integration. 2023 15th Biomedical Engineering International Conference (BMEiCON), \u6771\u4eac, 2023.<\/li> \n<li>Karin Oshima, Tomoyo Isoguchi Shiramatsu, Hirokazu Takahashi. The effect of 4-weeks exposure to music on social bonding between rats. 2023 45th Annual International Conference of the IEEE Engineering in Medicine &amp; Biology Society (EMBC), \u30b7\u30c9\u30cb\u30fc, 2023.<\/li> \n<li>\u79cb\u7530 \u5927\uff0c\u77f3\u7530 \u76f4\u6a39\uff0c\u767d\u677e (\u78ef\u53e3) \u77e5\u4e16\uff0c\u9ad8\u6a4b \u5b8f\u77e5. \u60c5\u5831\u51e6\u7406\u5bb9\u91cf\u304b\u3089\u898b\u305f\u97f3\u697d\u306e\u6642\u7cfb\u5217\u69cb\u9020. 
\u65e5\u672c\u97f3\u97ff\u5b66\u4f1a\u8074\u899a\u7814\u7a76\u4f1a, \u718a\u672c, 2023.<\/li> \n<li>\u9ad8\u6728 \u6c38\u9060\uff0c\u5927\u5cf6 \u679c\u6797\uff0c\u767d\u677e (\u78ef\u53e3) \u77e5\u4e16\uff0c\u9ad8\u6a4b \u5b8f\u77e5. \u97f3\u306e\u500b\u5225\u63d0\u793a\u30b7\u30b9\u30c6\u30e0\u3092\u7528\u3044\u305f \u30e9\u30c3\u30c8\u306e\u30b3\u30df\u30e5\u30cb\u30b1\u30fc\u30b7\u30e7\u30f3\u97f3\u58f0\u306e\u691c\u8a0e. \u65e5\u672c\u97f3\u97ff\u5b66\u4f1a\u8074\u899a\u7814\u7a76\u4f1a, \u718a\u672c, 2023.<\/li> \n<li>Hexin XU , Amit YARON , Tomoyo Isoguchi SHIRAMATSU, Hirokazu TAKAHASHI. Individual Difference in the Perception of Multimodal Illusions. \u65e5\u672c\u97f3\u97ff\u5b66\u4f1a\u8074\u899a\u7814\u7a76\u4f1a, \u718a\u672c, 2023.<\/li> \n<li>\u5c71\u6728 \u5d1a\u592a\u90ce\uff0c\u767d\u677e\uff08\u78ef\u53e3\uff09\u77e5\u4e16\uff0c\u68ee\u5ddd \u52dd\u7530\uff0c\u6c60\u8c37 \u88d5\u4e8c\uff0c\u9ad8\u6a4b \u5b8f\u77e5. \u9855\u8457\u306a\u523a\u6fc0\u306b\u5bfe\u3059\u308b\u5074\u5750\u6838\u30c9\u30fc\u30d1\u30df\u30f3\u306e\u5fdc\u7b54\u7279\u6027. \u96fb\u6c17\u5b66\u4f1a\u3000\u533b\u7528\u30fb\u751f\u4f53\u5de5\u5b66\u7814\u7a76\u4f1a, \u6771\u4eac, 2023.<\/li> \n<li>\u661f\u91ce \u6709\u4f50\uff0c\u5927\u5cf6 \u679c\u6797\uff0c\u9ad8\u6728 \u6c38\u9060\uff0c\u767d\u677e (\u78ef\u53e3) \u77e5\u4e16\uff0c\u9ad8\u6a4b \u5b8f\u77e5. \u81ea\u7531\u884c\u52d5\u30e9\u30c3\u30c8\u3092\u5bfe\u8c61\u3068\u3057\u305f\u30ef\u30a4\u30e4\u30ec\u30b9\u77b3\u5b54\u5f84\u8a08\u6e2c\u30b7\u30b9\u30c6\u30e0. \u96fb\u6c17\u5b66\u4f1a\u3000\u533b\u7528\u30fb\u751f\u4f53\u5de5\u5b66\u7814\u7a76\u4f1a, \u6771\u4eac, 2023.<\/li> \n<li>\u6d66 \u5f69\u4eba\uff0c\u5c71\u6728 \u5d1a\u592a\u90ce\uff0c\u767d\u677e\uff08\u78ef\u53e3\uff09\u77e5\u4e16\uff0c\u9ad8\u6a4b \u5b8f\u77e5. \u52d5\u7269\u30e2\u30c7\u30eb\u306e\u4e88\u6e2c\u884c\u52d5\u3092\u4fc3\u3059\u30ec\u30d0\u30fc\u5f15\u304d\u8ab2\u984c\u306e\u958b\u767a. 
\u96fb\u6c17\u5b66\u4f1a\u3000\u533b\u7528\u30fb\u751f\u4f53\u5de5\u5b66\u7814\u7a76\u4f1a, \u6771\u4eac, 2023.<\/li> \n<li>Nakai S, Kitanishi T, Mizuseki K.. Distinct manifold encoding of diverse navigational information in the subiculum and hippocampus.. Neuroscience 2023, Washington, D.C., 2023.<\/li> \n<li>\u4e2d\u4e95\u69d9\u4e5f\u3001\u5317\u897f\u5353\u78e8\u3001\u6c34\u95a2\u5065\u53f8. Distinct manifold encoding of diverse navigational information in the subiculum and hippocampus during awake and sleep.. \u7b2c46\u56de\u65e5\u672c\u795e\u7d4c\u79d1\u5b66\u5927\u4f1a, \u4ed9\u53f0, 2023.<\/li> \n<li>\u5317\u897f\u5353\u78e8. \u5927\u898f\u6a21\u8a08\u6e2c\u3067\u6d77\u99ac\u306e\u60c5\u5831\u51e6\u7406\u306b\u8feb\u308b. \u7b2c\u4e00\u56de\u795e\u7d4c\u5316\u5b66\u306e\u82e5\u624bKYOUEN, \u6771\u4eac, 2023.<\/li> \n<li>\u5e73\u6fa4\u4f51\u5553\u3001\u5ca1\u672c\u96c5\u5b50\u3001\u767d\u9808\u672a\u9999\u3001\u5965\u6751\u4fca\u6a39\u3001\u5ca9\u5d0e\u7559\u7f8e\u3001\u9ec4\u7530\u80b2\u5b8f\u3001\u897f\u672c\u4f38\u5fd7\u3001\u6771\u539f\u548c\u6210. \u4e73\u5150\u7279\u5fb4\u7684\u306a\u4f53\u81ed\u6210\u5206\u306b\u3088\u308b\u7d4c\u7523\u5973\u6027\u306e\u30aa\u30ad\u30b7\u30c8\u30b7\u30f3\u5206\u6ccc\u3078\u306e\u5f71\u97ff . \u65e5\u672c\u8d64\u3061\u3083\u3093\u5b66\u4f1a\u7b2c23\u56de\u5b66\u8853\u96c6\u4f1a, \u5927\u962a\u5927\u5b66, 2023.<\/li> \n<li>\u5ca1\u672c\u96c5\u5b50. \u7279\u5225\u8b1b\u6f14\uff1a\u8133\u8a08\u6e2c\u3092\u7528\u3044\u305f\u30d2\u30c8\u306e\u5302\u3044\u77e5\u899a\u306e\u7814\u7a76 . \u7b2c36\u56de\u306b\u304a\u3044\u30fb\u304b\u304a\u308a\u74b0\u5883\u5b66\u4f1a, \u9759\u5ca1\u770c\u5bcc\u58eb\u898b\u5e02, 2023.<\/li> \n<li>Masako Okamoto . Temporal Dynamics of Neural Odor Representations Revealed by\u3000Multivariate Pattern Analysis on Scalp-Recorded EE. 
\u65e5\u672c\u5473\u3068\u5302\u5b66\u4f1a\u7b2c57\u56de\u5927\u4f1a Young Scientists Symposium, \u6771\u4eac\u90fd, 2023.<\/li> \n<li>Mugihiko Kato , Toshiki Okumura , Kazushige Touhara , Masako Okamoto. Neural odor representation at early latency determines behavioral odor discrimination performance. Neuroscience2023, Washington Convention Center in Washington, D.C.\u3000USA, 2023.<\/li> \n<li>\u5ca1\u672c\u96c5\u5b50\u3001\u5965\u6751\u4fca\u6a39\u3001\u9ec4\u7530\u80b2\u5b8f\u3001\u6a2a\u4e95\u60c7\u3001\u4e2d\u4e95\u667a\u4e5f\u3001\u897f\u672c\u4f38\u5fd7\u3001\u6771\u539f\u548c\u6210. \u8a00\u8449\u30e9\u30d9\u30eb\u304c\u9999\u308a\u77e5\u899a\u3068\u4e00\u6b21\u55c5\u899a\u91ce\u306e\u6d3b\u52d5\u306b\u53ca\u307c\u3059\u5f71\u97ff . \u65e5\u672c\u8fb2\u82b8\u5316\u5b66\u4f1a2024\u5e74\u5ea6\u5927\u4f1a \u5275\u7acb100\u5468\u5e74\u8a18\u5ff5\u5927\u4f1a, \u6771\u4eac\u90fd, 2023.<\/li> \n<li>Takehiro Nagai, Hiroaki Kiyokawa, Stephen Palmisano, &amp; Juno Kim. Towards a psychophysically plausible simulation of translucent appearance. SIGGRAPH Asia 2023, Sydney, Australia, 2023.<\/li> \n<li>Mizuki Takanashi, &amp; Takehiro Nagai. Image features involved in translucency enhancement by chromaticity information. 2023 OSA Fall Vision Meeting, Seattle, USA, 2023.<\/li> \n<li>Suzuha Horiuchi, &amp; Takehiro Nagai. Spillover effects of color discrimination training on color category boundaries and color appearance. 2023 OSA Fall Vision Meeting, Seattle, USA, 2023.<\/li> \n<li>\u4e2d\u5cf6\u5065\u592a\uff0c\u6c38\u4e95\u5cb3\u5927. \u7269\u4f53\u753b\u50cf\u304b\u3089\u751f\u3058\u308b\u8cea\u611f\u30fb\u611f\u6027\u306e\u8133\u6ce2\u306b\u3088\u308b\u5224\u5225\u306e\u8a66\u307f. \u65e5\u672c\u8996\u899a\u5b66\u4f1a2023\u5e74\u590f\u5b63\u5927\u4f1a, \u5fb3\u5cf6, 2023.<\/li> \n<li>\u82b1\u7530\u90c1\u6597\uff0c\u6c38\u4e95\u5cb3\u5927. 
\u81ea\u7136\u753b\u50cf\u306b\u304a\u3051\u308b\u8272\u5f69\u306e\u7a00\u6709\u6027\u3068\u62bd\u8c61\u753b\u306e\u8272\u5f69\u9078\u597d\u306e\u95a2\u4fc2. \u65e5\u672c\u8272\u5f69\u5b66\u4f1a\u7b2c54\u56de\u5168\u56fd\u5927\u4f1a, \u6771\u4eac, 2023.<\/li> \n<li>\u6728\u8c37\u4fca\u4ecb\uff0c\u78ef\u5c71\u62d3\u4eba\uff0c\u9d5c\u6728\u7950\u53f2. \u8b21\u66f2\u306e\u826f\u3055\u306b\u5bc4\u4e0e\u3059\u308b\u30b9\u30da\u30af\u30c8\u30eb\u30fb\u6642\u9593\u5909\u8abf\u60c5\u5831\u306e\u691c\u8a0e. \u65e5\u672c\u97f3\u97ff\u5b66\u4f1a2023\u5e74\u79cb\u5b63\u7814\u7a76\u767a\u8868\u4f1a, \u540d\u53e4\u5c4b, 2023.<\/li> \n<li>\u6728\u8c37\u4fca\u4ecb\uff0c\u7267\u52dd\u5f18\uff0c \u9957\u5ead\u7d75\u91cc\u5b50\uff0c\u4e2d\u5185\u8302\u6a39. \u80fd\u697d\u5e2b\u3068\u30aa\u30da\u30e9\u6b4c\u624b\u306b\u3088\u308b\u6b4c\u5531\u306e\u7a7a\u9593\u653e\u5c04\u7279\u6027. \u65e5\u672c\u97f3\u97ff\u5b66\u4f1a2024\u5e74\u6625\u5b63\u7814\u7a76\u767a\u8868\u4f1a, \u6771\u4eac, 2023.<\/li> \n<li>\u53ca\u5ddd \u9054\u4e5f, \u9bc9\u7530 \u5b5d\u548c . \u30cb\u30db\u30f3\u30b6\u30eb\u5916\u5074\u819d\u72b6\u4f53\u306b\u304a\u3051\u308b\u9752\u8272\u5fdc\u7b54\u7d30\u80de\u306eK\u5c64\u5c40\u5728. \u7b2c46\u56de\u65e5\u672c\u795e\u7d4c\u79d1\u5b66\u5927\u4f1a , \u4ed9\u53f0\u56fd\u969b\u4f1a\u8b70\u5834, 2023.<\/li> \n<li>Kowa Koida. Diurnal variations in luminance and chromatic contrast sensitivity. Optica Fall Vision Meeting, University of Washington, Seattle, WA, US , 2023.<\/li> \n<li>\u9bc9\u7530\u5b5d\u548c. \u8f1d\u5ea6\u3068\u8272\u30b3\u30f3\u30c8\u30e9\u30b9\u30c8\u611f\u5ea6\u306e\u65e5\u5185\u5909\u52d5\u306e\u9055\u3044. 
\u7b2c26\u56de \u8996\u899a\u79d1\u5b66\u30d5\u30a9\u30fc\u30e9\u30e02023 \u7814\u7a76\u4f1a, \u6c96\u7e04\u79d1\u5b66\u6280\u8853\u5927\u5b66\u9662\u5927\u5b66\uff08OIST\uff09, 2023.<\/li> \n<li>\u85e4\u4e95\u4fca\u8f14, \u6edd\u4e4b\u5f25, \u5409\u7530\u6e29\u767b, \u897f\u5ddd\u83dc\u6708, \u5c0f\u68ee\u653f\u55e3, \u690d\u7530\u4e00\u535a, \u52a0\u85e4\u90a6\u4eba, \u539f\u6b66\u53f2, \u718a\u5d0e\u535a\u4e00, \u5bfa\u7530\u548c\u61b2.. \u76ee\u7389\u713c\u304d\u306e\u8cea\u611f\u3092\u6c7a\u5b9a\u3065\u3051\u308b\u6f5c\u5728\u7a7a\u9593\u4e0a\u306e\u52b9\u7528\u95a2\u6570\u3092\u7528\u3044\u305f\u81ea\u9589\u30b9\u30da\u30af\u30c8\u30e9\u30e0\u75c7\u8005\u3068\u5b9a\u578b\u767a\u9054\u8005\u306e\u611f\u899a\u51e6\u7406\u7279\u6027\u306e\u6bd4\u8f03.. \u7b2c\uff13\uff18\u56de\u4eba\u5de5\u77e5\u80fd\u5b66\u4f1a\u5168\u56fd\u5927\u4f1a., \u6d5c\u677e\u5e02, 2023.<\/li> \n<li>\u5869\u8c37 \u548c\u57fa, \u8c37\u9685 \u52c7\u592a, \u6751\u7530 \u822a\u5fd7, \u5927\u8feb \u512a\u771f, \u5927\u8cab \u670b\u54c9, \u9ad8\u5bae \u6e09\u543e, \u5ee3\u5ddd \u7d14\u4e5f, \u6afb\u4e95 \u82b3\u96c4, \u771e\u90e8 \u5bdb\u4e4b. \u55c5\u899a\u3068\u5473\u899a\u306e\u591a\u611f\u899a\u7d71\u5408\u306b\u3088\u308b\u98a8\u5473\u611f\u899a\u306e\u89e3\u660e. \u7b2c46\u56de \u65e5\u672c\u795e\u7d4c\u79d1\u5b66\u5927\u4f1a\u3000 , \u4ed9\u53f0, 2023.<\/li> \n<li>\u6c60\u6238 \u512a\u5e0c, \u6751\u7530 \u822a\u5fd7, \u9818 \u5bb6\u5d07, \u5869\u8c37 \u548c\u57fa, \u771e\u90e8 \u5bdb\u4e4b, \u9ed2\u7530 \u4e00\u6a39, \u5409\u6751 \u4ec1\u5fd7, \u6df1\u6fa4 \u6709\u543e. \u30c1\u30e7\u30b3\u30ec\u30fc\u30c8\u6442\u98df\u306b\u95a2\u9023\u3057\u305f\u30e9\u30c3\u30c8\u8d85\u97f3\u6ce2\u767a\u58f0\u30b5\u30d6\u30bf\u30a4\u30d7\u306e\u6a5f\u68b0\u5b66\u7fd2\u306b\u3088\u308b\u5206\u985e. 
2023\u5e74\u5ea6 \u65e5\u672c\u5473\u3068\u5302\u5b66\u4f1a, \u6771\u4eac, 2023.<\/li> \n<li>Koshi Murata, Yuki Ikedo, Takashi Ryoke, Kazuki Shiotani, Hiroyuki Manabe, Kazuki Kuroda, Hitoshi Yoshimura, Yugo Fukazawa. Identification of subtypes of ultrasonic vocalizations associated with chocolate eating in rats using machine learning  . 52st Society for Neuroscience Annual Meeting , Washington, D.C., 2023.<\/li> \n<li>\u5869\u8c37 \u548c\u57fa, \u8c37\u9685 \u52c7\u592a, \u6751\u7530 \u822a\u5fd7, \u5927\u8feb \u512a\u771f, \u5927\u8cab \u670b\u54c9, \u9ad8\u5bae \u6e09\u543e, \u5ee3\u5ddd \u7d14\u4e5f, \u6afb\u4e95 \u82b3\u96c4, \u771e\u90e8 \u5bdb\u4e4b . \u3052\u3063\u6b6f\u985e\u306b\u304a\u3051\u308b\u98df\u3092\u8c4a\u304b\u306b\u3059\u308b\u98a8\u5473\u77e5\u899a\u8ab2\u984c\u306e\u958b\u767a. \u98df\u6b32\u30fb\u98df\u55dc\u597d\u3092\u5f62\u6210\u3059\u308b\u611f\u899a\u30fb\u5185\u5206\u6ccc\u30fb\u795e\u7d4c\u57fa\u76e4\u7814\u7a76\u4f1a 2023, \u5ca1\u5d0e, 2023.<\/li> \n<li>\u6df1\u6fa4\u6709\u543e, \u6c60\u6238\u512a\u5e0c, \u9818\u5bb6\u5d07, \u5869\u8c37\u548c\u57fa, \u771e\u90e8\u5bdb\u4e4b, \u9ed2\u7530\u4e00\u6a39, \u5409\u6751\u4ec1\u5fd7, \u6751\u7530\u822a\u5fd7 . \u30c1\u30e7\u30b3\u30ec\u30fc\u30c8\u6442\u98df\u306b\u95a2\u9023\u3057\u305f\u30e9\u30c3\u30c8\u8d85\u97f3\u6ce2\u767a\u58f0\u30b5\u30d6\u30bf\u30a4\u30d7\u306e\u6a5f\u68b0\u5b66\u7fd2\u306b\u3088\u308b\u5206\u985e. \u7b2c129\u56de\u65e5\u672c\u89e3\u5256\u5b66\u4f1a\u7dcf\u4f1a\u30fb\u5168\u56fd\u5b66\u8853\u96c6\u4f1a , \u6c96\u7e04, 2023.<\/li> \n<li>Masaharu Yasuda. Neuronal representation of emotion in the monkey parabrachial nucleus. Society for Neuroscience, Washington DC, 2023.<\/li> \n<li>Harpreet Sareen, Yibo Fu, Nour Boulahcen, and Yasuaki Kakehi. BubbleTex: Designing Heterogenous Wettable Areas for Carbonation Bubble Patterns on Surfaces. Conference on Human Factors in Computing Systems (CHI \u201923), \u30c9\u30a4\u30c4, 2023.<\/li> \n<li>\u5e73\u6728 \u525b\u53f2. 
Laser HaPouch: A Haptic Display Utilizing Selective Activation of Laser-powered Liquid-to-gas Phase Change Actuator Arrays. IEEE WorldHaptics 2023, \u30aa\u30e9\u30f3\u30c0, 2023.<\/li> \n<li>Tetsushi Nonaka. Soft Robotics and Embodied Intelligence. 6th IEEE-RAS International Conference on Soft Robotics (RoboSoft 2023), \u30b7\u30f3\u30ac\u30dd\u30fc\u30eb, 2023.<\/li> \n<li>Tetsushi Nonaka, Arsen Abdulali, Chapa Sirithunge, Kieran Gilday, Fumiya Iida. Soft robotic tactile perception of softer objects based on learning of spatiotemporal pressure patterns. 6th IEEE-RAS International Conference on Soft Robotics (RoboSoft 2023), \u30b7\u30f3\u30ac\u30dd\u30fc\u30eb, 2023.<\/li> \n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Liu S.; Suganuma M.; Okatani T. Symmetry-aware Neural Architecture for Embodied Visual Navigation. Internation [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":187,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":[],"_links":{"self":[{"href":"https:\/\/shitsukan.jp\/deep\/index.php?rest_route=\/wp\/v2\/pages\/1141"}],"collection":[{"href":"https:\/\/shitsukan.jp\/deep\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/shitsukan.jp\/deep\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/shitsukan.jp\/deep\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/shitsukan.jp\/deep\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1141"}],"version-history":[{"count":9,"href":"https:\/\/shitsukan.jp\/deep\/index.php?rest_route=\/wp\/v2\/pages\/1141\/revisions"}],"predecessor-version":[{"id":1153,"href":"https:\/\/shitsukan.jp\/deep\/index.php?rest_route=\/wp\/v2\/pages\/1141\/revisions\/1153"}],"up":[{"embeddable":true,"href":"https:\/\/shitsukan.jp\/deep\/index.php?rest_route=\/wp\/v2\/pages\/187"}],"wp:attachment":[{"href":"https:\/\/shitsukan.jp\/deep\/index.php?rest_route=%2Fwp%2Fv2
%2Fmedia&parent=1141"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}