
Keynote Speakers

 1. Professor François Brémond

 

(STARS – INRIA – Sophia Antipolis)

 

François Brémond is a Research Director at Inria Sophia Antipolis-Méditerranée, where he created the STARS team in 2012. He has pioneered the combination of Artificial Intelligence, Machine Learning and Computer Vision for Video Understanding since 1993, both at Sophia Antipolis and at USC (University of Southern California), Los Angeles. In 1997 he obtained his PhD degree in video understanding and pursued this work at USC on the interpretation of videos taken from UAVs (Unmanned Aerial Vehicles). In 2000, recruited as a researcher at Inria, he modeled human behavior for Scene Understanding: perception, multi-sensor fusion, spatio-temporal reasoning and activity recognition. He is a co-founder of Keeneo, Ekinnox and Neosensys, three companies in intelligent video monitoring and business intelligence. He also co-founded the CoBTek team of Nice University in January 2012 with Prof. P. Robert from Nice Hospital, studying behavioral disorders in older adults suffering from dementia. He is author or co-author of more than 250 scientific papers on video understanding published in international journals or conferences, and has (co-)supervised 20 PhD theses.

More information is available at: http://www-sop.inria.fr/members/Francois.Bremond/

 

Talk: Action Recognition for People Monitoring 

 

In this talk, we will discuss how Video Analytics can be applied to human monitoring using a video stream as input. Existing work has focused either on simple activities in real-life scenarios, or on the recognition of more complex (in terms of visual variability) activities in hand-clipped videos with well-defined temporal boundaries. We still lack methods that can retrieve multiple instances of complex human activities in a continuous, untrimmed video stream in real-world settings. Therefore, we will first review a few existing activity recognition/detection algorithms. Then, we will present several novel techniques for the recognition and detection of ADLs (Activities of Daily Living) from 2D video cameras. We will illustrate the proposed activity monitoring approaches through several home care application datasets: Toyota SmartHome, NTU-RGB+D, Charades and Northwestern UCLA. We will end the talk by presenting some results on home care applications.
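As a toy illustration of the untrimmed-video setting described above (my own sketch, not the methods presented in the talk), the snippet below slides a fixed-length temporal window over a long frame sequence and keeps confident windows as detected activity instances; the clip classifier is a placeholder standing in for a trained spatio-temporal network, and the labels, window length and threshold are arbitrary assumptions.

```python
# Minimal sketch of sliding-window activity detection on an untrimmed video.
import numpy as np

ACTIVITIES = ["drink", "cook", "watch_tv"]          # hypothetical ADL labels

def classify_clip(clip: np.ndarray) -> np.ndarray:
    """Placeholder clip classifier; a real system would use a trained
    spatio-temporal network (e.g. a 3D CNN or video transformer)."""
    rng = np.random.default_rng(int(clip.sum()) % 2**32)
    scores = rng.random(len(ACTIVITIES))
    return scores / scores.sum()

def detect_activities(frames: np.ndarray, win=64, stride=16, thresh=0.6):
    """Slide a fixed-length window over the frame sequence and report
    (start, end, label, score) for confident windows."""
    detections = []
    for start in range(0, len(frames) - win + 1, stride):
        scores = classify_clip(frames[start:start + win])
        best = int(np.argmax(scores))
        if scores[best] >= thresh:
            detections.append((start, start + win, ACTIVITIES[best], float(scores[best])))
    return detections

# Toy usage: 10 s of 30 fps "video" as random frames (T, H, W, C).
video = np.random.rand(300, 112, 112, 3)
for s, e, label, score in detect_activities(video):
    print(f"frames {s}-{e}: {label} ({score:.2f})")
```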

 

 

 2. Professor Kanghyun Jo

 

(School of Electrical Engineering, University of Ulsan, Korea)

 

Kanghyun Jo, Professor and Faculty Dean, School of Electrical Engineering, University of Ulsan, Korea.

Kang-Hyun Jo (Senior Member, IEEE) received the Ph.D. degree in computer-controlled machinery from Osaka University, Osaka, Japan, in 1997. He joined the School of Electrical Engineering, University of Ulsan, Ulsan, South Korea, where he currently serves as the Faculty Dean.

His research interests include computer vision, robotics, autonomous vehicles, and ambient intelligence. He has served as a Director or AdCom Member of the Institute of Control, Robotics and Systems (where he is currently a Vice-President and Fellow Member) and of the Society of Instrument and Control Engineers, and within the IEEE IES he served as Chair of the Technical Committee on Human Factors, an AdCom Member, and the Secretary until 2019.

He has also been involved in organizing many international conferences, such as the International Workshop on Frontiers of Computer Vision, the International Conference on Intelligent Computation, the International Conference on Industrial Technology, the International Conference on Human System Interactions, and the Annual Conference of the IEEE Industrial Electronics Society. He is also an Editorial Board Member of international journals such as the International Journal of Control, Automation, and Systems. He has published more than 200 peer-reviewed technical papers; his recent work has appeared in top-tier journals such as IEEE Transactions on Industrial Informatics (TII, IF: 11.648) and IEEE Transactions on Industrial Electronics (TIE, IF: 8.236).

 

Talk: Drone Imagery for Artificial Intelligence Service

 

In this talk, I will discuss how drone images can serve AI services, using as an example how drone image datasets were established for AI services under Korean national grant programs. As the project manager (PM) of the drone-imagery program in 2020, I will share the program outline and the scope of the images from the autonomous drone's viewpoint. General deep learning approaches render bounding boxes around objects in detection and classification tasks, and I will show how such models work. Preparing data for AI researchers is also an interesting issue, so examples of processing approaches and tools will be presented. Finally, I will explain the AI data archives, including drone images, in AIHub (aihub.or.kr, currently in Korean), along with other valuable data that will be released continuously. Some AI service examples will also be explored and demonstrated in the talk.
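As a rough illustration of the bounding-box annotations such drone-image datasets provide (an assumed format for illustration only, not the actual AIHub schema), the sketch below converts a hypothetical pixel-space annotation record into the normalized box targets many detectors are trained on.

```python
# Minimal sketch: hypothetical drone-image annotation -> detector training targets.
import json

# Hypothetical annotation record for one drone image (not a real AIHub record).
record = {
    "image": "drone_0001.jpg",
    "width": 1920,
    "height": 1080,
    "objects": [
        {"label": "car",    "bbox": [412, 300, 520, 365]},    # x1, y1, x2, y2 in pixels
        {"label": "person", "bbox": [1024, 610, 1060, 690]},
    ],
}

def to_normalized_cxcywh(obj, width, height):
    """Convert a pixel-space box to normalized center-x, center-y, w, h,
    the target format used by many detection models (e.g. YOLO-style)."""
    x1, y1, x2, y2 = obj["bbox"]
    return {
        "label": obj["label"],
        "cx": (x1 + x2) / 2 / width,
        "cy": (y1 + y2) / 2 / height,
        "w":  (x2 - x1) / width,
        "h":  (y2 - y1) / height,
    }

targets = [to_normalized_cxcywh(o, record["width"], record["height"])
           for o in record["objects"]]
print(json.dumps(targets, indent=2))
```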

 

 

 3. Professor Hiroo Sekiya

 

(Graduate School of Engineering, Chiba University, Chiba, Japan)

 

Hiroo Sekiya received the B.E., M.E., and Ph.D. degrees in Electrical Engineering from Keio University, Yokohama, Japan, in 1996, 1998, and 2001, respectively. Since April 2001, he has been with Chiba University, Chiba, Japan, where he is now a Professor at the Graduate School of Engineering. He is also an Honorary Professor of Xiangtan University, China, and a Specially Appointed Professor of the Nagasaki Institute of Applied Science, Japan. From February 2008 to February 2010, he was a visiting scholar in Electrical Engineering at Wright State University, Ohio, USA. His research interests include wireless power transfer systems, high-frequency tuned power amplifiers, resonant converters, nonlinear phenomena in electrical circuits, communication protocol designs, and digital signal processing for wireless communications, speech, and image processing.

 

He has authored 135 journal papers, 232 conference papers, and 22 patents. He won the 2008 Funai Information and Science Award for Young Scientist, the 2008 Hiroshi Ando Memorial Young Engineering Award, the Ericsson Young Scientist Award 2008, the 2019 Best Paper Award of the IEICE Transactions, and the Best Paper Awards of ICRERA 2021 and ICUFN 2010.

Dr. Sekiya has served on the IEEE CASS Board of Governors (2020-2022), as Vice-Chair of the IEEE PELS Japan Joint Chapter (2022-2023), as an Associate Editor of IET Circuits, Devices & Systems, as an Editor of NOLTA, IEICE, and as an Associate Editor of the International Journal of Renewable Energy Research. Additionally, he has served as an Associate Editor of IEEE TCAS-II (2017-2019), Editor-in-Chief of IEICE Communications Express (2018-2019), Regional Editor of IET Circuits, Devices & Systems (2018-2021), and Secretary of the IEEE CASS Japan Joint Chapter (2016-2017).

 

 

Talk: Wireless Brain-Inspired Computing (WiBIC)

 

 

In this talk, I would like to introduce a new information-processing platform that merges an IoT network with a spiking neural network. Each IoT device works not only as an information collector but also as a neuron of the spiking neural network; this means the IoT devices also function as edge-computing devices. In addition, the IoT devices are connected by wireless communications, which carry the spike signals. Concretely, an FPGA implementation of the device is shown. The selection of the wireless communication scheme is important for achieving high-level information processing; therefore, a new communication mechanism is adopted for WiBIC.
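To make the idea concrete (this is my own minimal sketch, not the FPGA implementation shown in the talk), the snippet below models each IoT node as a leaky integrate-and-fire neuron whose firing is "broadcast" to a neighboring node over an idealized, lossless wireless link; the thresholds, leak factors and ring topology are arbitrary assumptions.

```python
# Minimal sketch of IoT nodes acting as spiking neurons over a wireless link.
import random

class IoTNeuronNode:
    def __init__(self, name, threshold=1.0, leak=0.9):
        self.name = name
        self.potential = 0.0
        self.threshold = threshold   # firing threshold
        self.leak = leak             # per-step membrane leak factor
        self.inbox = 0.0             # weighted spikes received over "wireless"

    def receive_spike(self, weight):
        self.inbox += weight

    def step(self, sensor_input):
        # Integrate local sensor input plus spikes heard over the air.
        self.potential = self.leak * self.potential + sensor_input + self.inbox
        self.inbox = 0.0
        if self.potential >= self.threshold:
            self.potential = 0.0     # reset after firing
            return True              # emit a spike (one wireless packet)
        return False

# Three nodes wired into a ring over an idealized lossless wireless channel.
nodes = [IoTNeuronNode(f"node{i}") for i in range(3)]
for t in range(20):
    for i, node in enumerate(nodes):
        if node.step(sensor_input=random.uniform(0.0, 0.4)):
            print(f"t={t}: {node.name} fires")
            nodes[(i + 1) % 3].receive_spike(weight=0.6)   # next hop hears the spike
```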