


{"id":1206,"date":"2017-11-24T15:47:55","date_gmt":"2017-11-24T14:47:55","guid":{"rendered":"https:\/\/www-intuidoc.irisa.fr\/?page_id=1206"},"modified":"2023-01-12T19:23:29","modified_gmt":"2023-01-12T18:23:29","slug":"teaching-air-analyse-interpretation-et-reconnaissance-des-gestes-2d-touch-et-3d","status":"publish","type":"page","link":"https:\/\/www-intuidoc.irisa.fr\/en\/enseignement\/teaching-air-analyse-interpretation-et-reconnaissance-des-gestes-2d-touch-et-3d\/","title":{"rendered":"AIR &#8211; Analysis, Interpretation and Recognition of 2D (touch) and 3D Gestures for New Man-Machine Interactions"},"content":{"rendered":"<p><\/p>\n<h2>Description<\/h2>\n<section>With the development of touch screen and motion capture technology, new human-computer interaction gains in popularity in the recent years:\u00a0 human-machine interactions are evolving. Several methods of artificial intelligence have been designed to take advantage of the new interaction potential offered by 2D and 3D action gestures. These gestural controls allow the user to execute many actions simply by doing 2D or 3D Gestures. Recognition of human actions (2D and 3D action gestures) has recently become an active research topic in Artificial Intelligence, Computer Vision, Pattern Recognition and Man-Machine Interaction.In this course, we address this emerging scientific topic: Analysis, Interpretation and Recognition of 2D (touch) and 3D Gestures for new Man-Machine Interactions. Technically, an action is a sequence generated by a human subject during the performance of a task. Action recognition deals with the process of labelling such motion sequence with respect to the depicted motions. The course will expose the specificity of the motion capture and modelisation as well as the recognition process of these two kind of actions (2D and 3D action gestures) but also the potential convergence of the scientific approaches used for each of them. 
We also want to address some notions of user-centered design, user needs, acceptability and user testing, to illustrate the importance of considering the user when developing such new human-computer interactions.

Key-words

2D gestures, 3D gestures, classification, recognition, analysis, human-machine interaction, computer vision, pattern recognition, man-machine interaction

Prerequisite

None

Content

- Signal acquisition, pre-processing and normalization
  - Motion-capture (MoCap) systems that extract 3D joint positions using markers and a high-precision camera array
  - Microsoft Kinect and Leap Motion sensors: the Shotton algorithm greatly eases the extraction of 3D joint positions
  - Pen-based and multi-touch capture on touch screens: smartphones, tablet PCs and tangible surfaces that support the simultaneous participation of multiple users
  - Morphology-normalization pre-processing (see the pre-processing sketch after this outline)
  - Joint-trajectory modelling
- Feature extraction
  - 2D and 3D feature extraction
  - Sub-stroke representation
  - Temporal, shape and motion relations between sub-strokes
- Artificial intelligence for 2D and 3D action recognition
  - Eager and lazy recognition
  - Skeleton-based human action recognition
  - Several recognition and machine-learning approaches:
    1. Graph modelling, matching and embedding algorithms
    2. Dynamic Time Warping (DTW) (see the DTW sketch after this outline)
    3. Hidden Markov Models (HMM)
    4. Support Vector Machines (SVM)
    5. Neural Networks (NN)
    6. Reject option…
- 2D and 3D segmentation and action detection
  - Direct manipulation and indirect commands
  - Early detection of an action in an unsegmented stream
  - Temporal segmentation methods
  - Sliding-window approach (see the detection sketch after this outline)
- Human-centered design (ISO 9241-210) and test protocol
  - The goal of the user-centered design process is to obtain a product that is functional, operational and satisfying for the user, by applying human factors, ergonomics, and usability knowledge and techniques
  - Test protocols
  - Data analysis
- Examples and demos
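The sketches below illustrate three of the techniques named in the outline. First, a minimal sketch of 2D stroke pre-processing: resampling a pen or touch trajectory to a fixed number of points and normalising its scale and position so that gestures drawn at different sizes or screen locations become comparable. The helper names (`resample`, `normalise`), the default point count (64) and the unit bounding box are illustrative assumptions, not values prescribed by the course.

```python
# Sketch of 2D stroke pre-processing: arc-length resampling followed by
# scale/translation normalisation. Assumes a stroke is a list of (x, y) points.

def path_length(points):
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

def resample(points, n=64):
    """Resample a stroke to n points spaced evenly along its arc length."""
    step = path_length(points) / (n - 1)
    out, acc = [points[0]], 0.0
    pts = list(points)
    i = 1
    while i < len(pts):
        (x1, y1), (x2, y2) = pts[i - 1], pts[i]
        d = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
            out.append(q)
            pts.insert(i, q)  # keep measuring from the inserted point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out

def normalise(points):
    """Translate the stroke to the origin and scale it into a unit box."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    s = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / s, (y - min(ys)) / s) for x, y in points]

if __name__ == "__main__":
    stroke = [(10, 10), (60, 12), (110, 15), (160, 20)]  # a raw touch swipe
    print(normalise(resample(stroke, n=8)))
```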
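Next, a minimal sketch of Dynamic Time Warping (DTW) used for lazy, template-based recognition: an unknown gesture is labelled with the class of the nearest stored template under the DTW distance. The gesture representation (a list of (x, y) points) and the toy templates are assumptions made for illustration, not the course's reference implementation.

```python
# DTW distance between two 2D trajectories plus nearest-neighbour classification.

def dtw_distance(seq_a, seq_b):
    """Return the DTW distance between two sequences of 2D points."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    # cost[i][j] = best accumulated cost to align seq_a[:i] with seq_b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            (xa, ya), (xb, yb) = seq_a[i - 1], seq_b[j - 1]
            d = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5  # Euclidean point distance
            cost[i][j] = d + min(cost[i - 1][j - 1],  # match
                                 cost[i - 1][j],      # insertion
                                 cost[i][j - 1])      # deletion
    return cost[n][m]

def classify(gesture, templates):
    """Lazy (nearest-neighbour) classification against labelled templates."""
    return min(templates, key=lambda t: dtw_distance(gesture, t["points"]))["label"]

if __name__ == "__main__":
    templates = [
        {"label": "swipe_right", "points": [(0, 0), (1, 0), (2, 0), (3, 0)]},
        {"label": "swipe_down",  "points": [(0, 0), (0, 1), (0, 2), (0, 3)]},
    ]
    unknown = [(0, 0), (1, 0.1), (2, -0.1), (3, 0)]  # noisy horizontal swipe
    print(classify(unknown, templates))  # -> swipe_right
```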
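Finally, a minimal sketch of the sliding-window approach to detecting actions in an unsegmented stream: fixed-length windows are slid over the input with a given hop, each window is scored by a classifier, and a detection is reported when the score exceeds a threshold. The window length, hop size and scoring function here are placeholders; a real system would plug in one of the classifiers listed above (DTW, HMM, SVM, neural network, ...).

```python
# Sliding-window action detection over an unsegmented stream of frames.

def sliding_window_detection(stream, window_size, hop, score_fn, threshold):
    """Yield (start_index, label, score) for windows whose score exceeds threshold."""
    for start in range(0, len(stream) - window_size + 1, hop):
        window = stream[start:start + window_size]
        label, score = score_fn(window)
        if score >= threshold:
            yield start, label, score

if __name__ == "__main__":
    # Toy 1-D "stream": a burst of motion energy between two idle phases.
    stream = [0.0] * 10 + [1.0] * 8 + [0.0] * 10

    def mean_energy(window):
        # Hypothetical scorer: average motion energy, always labelled "action".
        return "action", sum(window) / len(window)

    for start, label, score in sliding_window_detection(
            stream, window_size=5, hop=2, score_fn=mean_energy, threshold=0.6):
        print(f"detected '{label}' at frame {start} (score {score:.2f})")
```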
Acquired skills

A comprehensive view of the processing chain, from signal acquisition through pre-processing, classification and interpretation to user feedback.
The link between pattern-recognition issues and human-machine interaction.
The link between 2D and 3D gesture-recognition approaches.

Teachers

Eric Anquetil (coordinator), Richard Kulpa, Nathalie Girard

Organization 2022-2023

Room:
- Guernesey

Dates (dd/mm/yyyy):
1. Signal acquisition, pre-processing and normalization (PDF: https://www-intuidoc.irisa.fr/files/2021/12/AIR_M2_SIF-2021-RK.pdf):
   - 15/11/2022 (16h15 – 18h15; B. 02B, E209)
   - 22/11/2022 (10h15 – 12h15)
   - 24/11/2022 (16h15 – 18h15)
2. 2D and 3D segmentation and action detection (PDF: https://www-intuidoc.irisa.fr/files/2021/12/AIR_M2_SIF-2021-EA.pdf):
   - 22/11/2022 (16h15 – 18h15)
   - 29/11/2022 (16h15 – 18h15)
   - 06/12/2022 (16h15 – 18h15)
3. Human-centered design (ISO 9241-210) and test protocol (PDF: https://www-intuidoc.irisa.fr/files/2023/01/Cours_AIR_20222023_Test_AD_NA.pdf):
   - 08/12/2022 (16h15 – 18h15)
   - 09/01/2023 (14h – 16h), replacing 13/12/2022 (10h15 – 12h15)
   - 12/01/2023 (16h15 – 18h15), replacing 15/12/2022 (16h15 – 18h15)

Evaluations:

This module is evaluated by two tests:

- Written exam:
  - Duration: 1h30
  - Date: 19/01/2023 (16h15 – 18h15; B. 12D, i50)
  - Exam 2017-2018: https://www-intuidoc.irisa.fr/enseignement/teaching-air-analyse-interpretation-et-reconnaissance-des-gestes-2d-touch-et-3d/exam-air-2017_final/
- Homework:
  - The work consists of reading a paper from the literature, assigned by the teachers, finding two or three other papers on the same subject, and writing a synthesis that will be presented during the defense.
  - The homework is carried out and defended in pairs or groups of three.
  - Duration: 15-20 minutes of presentation and 5-10 minutes of questions.
  - Date: 17/01/2023 (10h15 – 12h15; B. 12D Guernesey)
Homework papers 2022-2023:
- Signal acquisition, pre-processing and normalization:
  - Interpretable 3D Human Action Analysis With Temporal Convolutional Networks. Tae Soo Kim, Austin Reiter; IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 20-28. https://arxiv.org/pdf/1704.04516v1.pdf
- 2D and 3D segmentation and action detection:
  - Skeleton-Contrastive 3D Action Representation Learning. Fida Mohammad Thoker, Hazel Doughty, Cees G. M. Snoek; MM '21: Proceedings of the 29th ACM International Conference on Multimedia, October 2021, pp. 1655-1663. https://arxiv.org/pdf/2108.03656.pdf
  - OadTR: Online Action Detection with Transformers. Wang, X., Zhang, S., Qing, Z., Shao, Y., Zuo, Z., Gao, C., & Sang, N.; ICCV 2021, pp. 7565-7575. https://openaccess.thecvf.com/content/ICCV2021/papers/Wang_OadTR_Online_Action_Detection_With_Transformers_ICCV_2021_paper.pdf
- Human-centered design (ISO 9241-210) and test protocol:
  - A Procedure for Developing Intuitive and Ergonomic Gesture Interfaces for HCI. Nielsen M., Störring M., Moeslund T.B., Granum E. (2004). In: Camurri A., Volpe G. (eds) Gesture-Based Communication in Human-Computer Interaction. GW 2003. Lecture Notes in Computer Science, vol 2915. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24598-8_38