{"id":49,"date":"2021-03-31T03:25:09","date_gmt":"2021-03-30T18:25:09","guid":{"rendered":"http:\/\/shitsukan.jp\/deep\/en\/?page_id=49"},"modified":"2021-06-01T02:38:56","modified_gmt":"2021-05-31T17:38:56","slug":"a01-1","status":"publish","type":"page","link":"http:\/\/shitsukan.jp\/deep\/en\/?page_id=49","title":{"rendered":"A01-1  Computational Visual Perception of Tangible and Intangible Deep Shitsukan"},"content":{"rendered":"<div id=\"block-profile\">\n  <figure id=\"profile-fig\">\n    <img class=\"profile-img\" src=\"http:\/\/shitsukan.jp\/deep\/en\/wordpress\/wp-content\/uploads\/2021\/03\/a01-1.png\" \/>\n  <\/figure>\n  <div id=\"profile-text\">\n    <span id=\"profile-name\">Ko Nishino<\/span>\n    <span id=\"profile-affiliation\">School of Informatics, Kyoto University<\/span>\n  <\/div>\n<\/div>\n\n\n\n<p>Our goal is to establish the computational foundations for extracting, from visual information, intrinsic properties of real-world objects and scenes that exhibit their hard to verbalize but characteristic looks and feels, i.e., &#8220;Shitsukan.&#8221; By deriving computer vision methods that can recognize tangible and intangible shitsukan, we aim to gain insights into their perceptual mechanisms. For tangible shitsukan, we focus on estimating apparent attributes that encode physical properties of objects and scenes that are likely relevant to their shitsukan, including weight, size, softness, and condition, just from sight. For intangible shitsukan, we seek to uncover and quantify, from visual information, attributes of real-world environments that inform how to act in them. In particular, we consider objects, people, and the 3D space in which they are situated as key components of an environment, and derive methods that systematically estimate intrinsic semantic and contextual information that likely aids action planning and decision making of autonomous agents in them. 
[Figure: project overview illustration, http://shitsukan.jp/deep/en/wordpress/wp-content/uploads/2021/04/vision_llust4-西野恒-1200x444.png]

Co-Investigators

Shohei Nobuhara
Graduate School of Informatics, Kyoto University

Yinqiang Zheng
The University of Tokyo