
Keynote Speakers

 1. Professor Antoine Doucet

(L3i laboratory of the University of La Rochelle)

Antoine Doucet has been a tenured full Professor in computer science at the L3i laboratory of the University of La Rochelle since 2014. Also director of the ICT department at the University of Science and Technology of Hanoi, he leads in La Rochelle the research group on document analysis, digital contents and images (about 50 people). He is the coordinator of the H2020 project NewsEye, running until 2022 and focused on augmenting access to historical newspapers across domains and languages, and he further leads the effort on semantic enrichment for low-resourced languages in the H2020 project Embeddia. His main research interests lie in information retrieval, natural language processing, (text) data mining and artificial intelligence. The central focus of his work is the development of methods that scale to very large document collections and use as few external resources as possible, so that they apply to documents of any type written in any language, from news articles to social networks, and from digitized manuscripts to digitally-born documents. Antoine Doucet holds a PhD in computer science from the University of Helsinki (Finland), obtained in 2005, and a French research supervision habilitation (HDR), obtained in 2012.

 

Talk:  Robust and multilingual analysis of digitised documents - a use case with historical newspapers

Many documents can only be accessed through digitisation. This is particularly the case for historical and handwritten documents, but also for many digitally-born documents that were turned into images for various reasons (e.g., a file conversion, or the use of an analog form to insert a handwritten signature or to send by post). Analyzing the textual content of such digitised documents requires a phase of conversion from the captured image to a textual representation, key parts of which are optical character recognition (OCR) and layout analysis. The resulting text and structure are often imperfect, to an extent that is notably correlated with the quality of the initial medium (which may be stained, folded, aged, etc.) and with the quality of the image taken from it. I will present recent advances in AI and natural language processing that enable this type of corpus to be analyzed in a way that is robust to digitisation errors. For example, I will show how, in the H2020 NewsEye project, we achieved state-of-the-art results for the cross-lingual recognition and disambiguation of named entities (names of people, places, and organizations) in large corpora of historical newspapers written in four languages between 1850 and 1950. This type of result paves the way for large-scale analysis of digitised documents, notably across linguistic borders.

 2. Professor Inseop Na

(National Program of Excellence in Software Center and Major of AI-Convergence)

He is a professor at the National Program of Excellence in Software Center and the Major of AI-Convergence. He also serves as a member of the Artificial Intelligence Subcommittee of the Korea Federation of Science and Technology and as a director of the Korea Internet Information Society. He serves as an evaluator on many evaluation committees for public and research institutions of the Korean government, including the National Research Foundation of Korea, the Institute of Information & Communications Technology Planning & Evaluation, the National IT Industry Promotion Agency, the Korea Institute of S&T Evaluation and Planning, the Korea Evaluation Institute of Industrial Technology, the Korea Internet & Security Agency, and the Homeland Safety Management Agency. He chaired the evaluation committee for the Korean artificial intelligence learning data construction project in 2020 and 2021, and served as an in-depth interviewer for the Korea Presidential Science Scholarship (information part) in 2020 and 2021. His research interests center around visual intelligence; object detection, segmentation, tracking, and understanding; human activity understanding; and emotion recognition. He serves as an editorial manager for various SCI journals, including Pattern Recognition, IEEE Transactions on Cybernetics, IEEE Computational Intelligence Magazine, Artificial Intelligence in Medicine, and Transactions on Internet and Information Systems.

 

Talk:  An introduction to emotion datasets and trends in emotion recognition research, toward machines that think for themselves

The term "robot" was first used by the Czechoslovak playwright Karel Čapek in his 1920 play 'R.U.R. (Rossum's Universal Robots)' to describe a machine with a human-like appearance. Later, in 1950, Alan Mathison Turing asked "Can machines think?" in his paper 'Computing Machinery and Intelligence', published in the journal Mind. This began the inquiry into whether machines could have feelings.

So far, many researchers have analyzed patterns of human facial expressions, behaviors, and voices, or identified brainwave and heartbeat patterns from sensors such as EEG and ECG, in order to give AI independent emotions. Emotion recognition research is shifting from the recognition of a single emotion to the recognition of complex emotions, and is also progressing from the recognition of general emotions to the recognition of emotions usable in specific fields.

Many public emotion datasets are being built for artificial intelligence training, covering variations in race, age, gender, etc. In parallel, many artificial intelligence techniques for emotion recognition are continuously being developed alongside advances in deep learning methods.

In this presentation, I will give an overview of the emotion datasets and related technologies released so far in the field of emotion recognition, and discuss what kind of research is needed for machines to have emotions of their own in the future.