This lab currently works on the following three topics.

Assistive Technology for Disabled/Older People

We explore how computer vision can enhance robotic systems and wearable devices that serve disabled and older people, not only for safety but also to improve their quality of life and their engagement in society. We start by helping blind people "see" the attitude and emotion of the people they are talking with, using wearable cameras. Future research will also cover other practical needs of different groups of people.
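As an illustration only (not the lab's actual system), the sketch below shows what a wearable-camera pipeline for this task could look like: detect faces in each frame and pass the crops to an emotion classifier. The function classify_emotion is a hypothetical placeholder for any trained expression model, and camera index 0 is an assumption.

```python
# Minimal sketch (illustration only): detect faces in frames from a wearable
# camera and hand the face crops to an emotion classifier.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face_bgr):
    # Hypothetical stub: a real system would run a trained expression model here.
    return "neutral"

cap = cv2.VideoCapture(0)  # assumed: the wearable camera appears as device 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        emotion = classify_emotion(frame[y:y + h, x:x + w])
        # An assistive device would relay this result to the user, e.g. by speech.
        print(f"face at ({x}, {y}): {emotion}")
cap.release()
```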

First-Person Vision with Wearable Cameras and Mobile Computing

First-person vision lets computers and robots see what we see, from exactly the same viewpoint and potentially over the same time span, and may therefore be a better way to understand human vision, interest, intention, and behavior. We use wearable cameras and lightweight mobile computing devices (e.g. smartphones) to capture and process the data and to communicate with other resources. This research is expected to improve solutions to many traditional computer vision problems, including segmentation, detection, tracking, and recognition, and to motivate many new applications.
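To make the capture-locally, process-lightly, communicate-with-other-resources idea concrete, here is a minimal sketch under stated assumptions: frames are downscaled and run through a lightweight pedestrian detector on the device, and the detections are posted to a remote server. The server URL is hypothetical, and camera index 0 is assumed; this is not the lab's implementation.

```python
# Minimal sketch (assumptions noted): capture egocentric video on a mobile-class
# device, run a lightweight pedestrian detector locally, and send the results to
# a remote resource.
import cv2
import requests

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

SERVER = "http://example.org/egocentric/detections"  # hypothetical endpoint

cap = cv2.VideoCapture(0)  # assumed: the wearable camera appears as device 0
frame_id = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    small = cv2.resize(frame, None, fx=0.5, fy=0.5)   # keep on-device cost low
    boxes, _ = hog.detectMultiScale(small, winStride=(8, 8))
    payload = {"frame": frame_id,
               "people": [[int(v) for v in box] for box in boxes]}
    try:
        requests.post(SERVER, json=payload, timeout=1.0)  # offload to other resources
    except requests.RequestException:
        pass  # keep capturing even when the network is unavailable
    frame_id += 1
cap.release()
```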

Scalable Visual Recognition for Dealing with Large Amounts of Data

In the big data era, we can share and access large amounts of data and connect countless sensors and devices. For computer vision, the central problem of visual recognition is accordingly moving from small-scale, restricted data to large-scale, real-world settings. We work on a representative task, large-scale cross-camera person re-identification (many cameras, many people), to support real large-scale video surveillance systems. At the same time, we also study more general problems such as image categorization and object recognition in order to investigate generic scalable visual recognition models. This research will also benefit our work on first-person vision.
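For readers unfamiliar with the task, cross-camera person re-identification can be viewed, at its simplest, as nearest-neighbour search over appearance embeddings gathered from many cameras. The sketch below uses random vectors in place of real embeddings (which would come from some feature extractor, e.g. a CNN); the gallery size, embedding dimension, and camera labels are hypothetical.

```python
# Minimal sketch (illustration only): person re-identification as ranking
# gallery detections from many cameras by cosine similarity to a query.
import numpy as np

def cosine_similarity(query, gallery):
    # query: (d,) embedding; gallery: (n, d) embeddings -> (n,) similarities
    query = query / np.linalg.norm(query)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return gallery @ query

rng = np.random.default_rng(0)
gallery = rng.normal(size=(10000, 256))              # detections from many cameras
gallery_ids = [("cam%02d" % (i % 50), i) for i in range(10000)]
query = rng.normal(size=256)                         # a person seen in a new camera

scores = cosine_similarity(query, gallery)
top = np.argsort(-scores)[:5]                        # ranked candidate matches
for rank, idx in enumerate(top, 1):
    cam, det = gallery_ids[idx]
    print(f"rank {rank}: camera {cam}, detection {det}, score {scores[idx]:.3f}")
```

At large scale, the research challenge is both learning embeddings that stay discriminative across camera viewpoints and making this search efficient over very large galleries.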
