
Keynote Speakers

 1. Professor Kang-Hyun Jo 

(University of Ulsan, Korea)

Kang-Hyun Jo (Senior Member, IEEE) received the Ph.D. degree in computer-controlled machinery from Osaka University, Japan, in 1997. After a year with ETRI as a Postdoctoral Research Fellow, he joined the School of Electrical Engineering, University of Ulsan, Ulsan, South Korea, where he is currently the Faculty Dean. His current research interests include computer vision, robotics, autonomous vehicles, and ambient intelligence. He has served as Director, Vice-President, or AdCom Member of the Institute of Control, Robotics, and Systems (ICROS) and the Society of Instrument and Control Engineers (SICE), as Chair of the IEEE Industrial Electronics Society (IEEE IES) Technical Committee on Human Factors, and as IEEE IES Secretary until 2019. He has served as an Editorial Board Member of international journals such as the International Journal of Control, Automation, and Systems and Transactions on Computational Collective Intelligence. He has also been involved in organizing many international conferences, such as the International Workshop on Frontiers of Computer Vision, the International Conference on Intelligent Computation, the International Conference on Industrial Technology, the International Conference on Human System Interactions, the International Symposium on Industrial Electronics (ISIE), and the Annual Conference of the IEEE Industrial Electronics Society (IECON). He founded the IEEE IES technically cosponsored International Workshop on Intelligent Systems (IWIS) and has managed the conference series every year as an organizing chair.

Talk: Vision Based AI and DX Service

Object detection and understanding in imagery is an important topic in the computer vision field, and has been widely applied in traffic analysis and control, rescue systems, smart agriculture, and other domains. However, many challenges remain in developing and optimizing such applications because of object density, multi-scale objects, and motion blur. This talk will focus on analyzing and assessing several drone imagery datasets and their challenges for AI and DX services. It will also cover related research and recent projects, especially the digital twin systems developed by the team of the Intelligent Systems Laboratory (ISLab), Department of Electrical, Electronic and Computer Engineering, University of Ulsan, Ulsan, Korea.

 

 2. Professor Ngai-Man (Man) Cheung

(Singapore University of Technology and Design (SUTD), Singapore)

Ngai-Man (Man) Cheung is an Associate Pillar Head and Associate Professor at the Singapore University of Technology and Design (SUTD). He received his Ph.D. degree in Electrical Engineering from the University of Southern California (USC), Los Angeles, CA. His Ph.D. research focused on image and video coding, and the work was supported in part by NASA-JPL. He was a postdoctoral researcher with the Image, Video and Multimedia Systems group at Stanford University, Stanford, CA. He was a core team member of the National Research Foundation (NRF) Foundational Research Capabilities Team for AI, and an AI Advisor to the Smart Nation and Digital Government Office (SNDGO) in Singapore. His research has resulted in more than 100 papers and 14 U.S. patents granted, with several pending. Two of his inventions have been licensed to companies, and one of his research results has led to a SUTD spinoff on AI for healthcare. His research has also been featured in the National Artificial Intelligence Strategy. He has received several research recognitions, including Best Paper Finalist at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019 and Finalist for the Super AI Leader (SAIL) Award at the World AI Conference (WAIC) 2019 in Shanghai, China. His research interests are signal and image processing, computer vision, and AI.

Talk: Model Inversion in Deep Neural Networks

Given a machine learning model trained on a private dataset, to what extent can an adversary reconstruct the private training samples by exploiting access to the trained model? This problem, known as Model Inversion (MI), raises significant privacy concerns and poses a critical threat to the security of machine learning systems. As such models are increasingly deployed in applications involving sensitive data (e.g., facial recognition, speaker identification, medical diagnosis, security), it is important to understand the risks associated with unauthorized reconstruction of private training samples. In this talk, I will discuss our work on MI attacks [1, 2], MI defenses [3], and MI-resilient architecture designs [4] to shed light on this critical privacy threat in modern deep neural networks.

[1] NB Nguyen, K Chandrasegaran, M Abdollahzadeh, NM Cheung. Re-thinking Model Inversion Attacks Against Deep Neural Networks. CVPR-2023.

[2] NB Nguyen, K Chandrasegaran, M Abdollahzadeh, NM Cheung. Label-Only Model Inversion Attacks via Knowledge Transfer. NeurIPS-2023.

[3] ST Ho, KJ Hao, K Chandrasegaran, NB Nguyen, NM Cheung. Model Inversion Robustness: Can Transfer Learning Help? CVPR-2024.

[4] JH Koh, ST Ho, NB Nguyen, NM Cheung. On the Vulnerability of Skip Connections to Model Inversion Attacks. ECCV-2024.
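For readers unfamiliar with the threat model, the core idea behind a basic model-inversion attack can be sketched in a few lines: starting from noise, the adversary performs gradient ascent on the *input* to maximize the model's confidence in a target class. The toy linear softmax classifier, random weights, and step sizes below are illustrative assumptions for this sketch, not the attacks of [1, 2]:

```python
import numpy as np

# Hypothetical "trained" linear classifier: logits = W @ x + b
# (a stand-in for the private model the adversary can query).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # 3 classes, 4 input features
b = np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

def invert(target_class, steps=200, lr=0.5):
    """Reconstruct a representative input for `target_class` by
    gradient ascent on the class probability w.r.t. the input."""
    x = rng.normal(size=4)    # start from random noise
    for _ in range(steps):
        p = softmax(W @ x + b)
        # For a linear model, d log p[c] / dx = W[c] - sum_k p[k] * W[k]
        grad = W[target_class] - p @ W
        x += lr * grad
    return x, softmax(W @ x + b)[target_class]

x_rec, conf = invert(target_class=1)
print(conf)  # confidence in the target class grows toward 1
```

With a real deep network the same loop runs through autograd and is regularized (e.g., with image priors), which is where the hard research questions discussed in the talk begin.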

 3. Professor Hiêp Q. Luong

(Department of Telecommunications and Information Processing, Ghent University, Ghent, Belgium)

Hiêp Q. Luong received his Ph.D. degree in Computer Science Engineering from Ghent University, Belgium. He is currently a Professor in the Image Processing and Interpretation (IPI) research group — an embedded IMEC research group — and leads the Unmanned Aerial Vehicles (UAV) Research Centre at Ghent University, which is a close interdisciplinary collaboration between the faculties of Engineering and Architecture, Bioscience Engineering and Sciences. His research interests include image and real-time video processing for a variety of applications, such as high dynamic range (HDR) imaging, (bio)medical imaging, depth and multi-view processing (including Light Detection and Ranging (LiDAR) and Time-of-Flight (ToF) cameras), hyperspectral imaging, GPU processing, markerless 3D body tracking, and multi-sensor fusion for UAV and augmented reality (AR) applications.

Talk: Sensor Fusion Applications

This keynote will provide an overview of recent research on multi-modal sensor fusion, with a focus on visual and image-based sensing. Sensor fusion allows us to combine complementary information from different sensors, improving robustness and enabling new capabilities in complex environments. After introducing several common fusion strategies, we will present examples in various applications such as autonomous driving, mobile mapping and drone-based remote sensing. We will also discuss the challenges of deploying such systems in real-world scenarios, including issues related to data alignment, calibration, and computational efficiency.
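As one concrete illustration of the kind of fusion strategy the abstract mentions, here is a minimal sketch of inverse-variance weighting, which combines two noisy measurements of the same quantity so that the more reliable sensor dominates. The sensor names and noise figures are hypothetical, not taken from the talk:

```python
def fuse(z1, var1, z2, var2):
    """Fuse two measurements of one quantity by inverse-variance
    weighting; the less noisy sensor gets the larger weight."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)   # fused estimate is less noisy than either input
    return fused, fused_var

# e.g. a LiDAR range reading (low noise) and a camera-derived depth (higher noise)
z, v = fuse(10.2, 0.01, 10.8, 0.09)
print(round(z, 3), round(v, 4))  # estimate stays close to the low-noise sensor
```

The same weighting idea underlies Kalman-filter-style fusion, where the variances are tracked over time; real systems add exactly the alignment and calibration steps the abstract highlights as deployment challenges.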