The Computer Vision and Image Processing Laboratory (CVIP Lab) has active research in four broad areas: Computer Vision (smart systems and autonomous robotics); Biomedical Imaging (computer-aided diagnosis for early detection of colorectal and lung cancers, and image-guided interventions); Biometrics (facial information modeling, face recognition at a distance, and modeling human engagement); and Dental Imaging (reconstruction of the oral cavity and image-guided dental interventions). Under these four focus areas, the CVIP Lab has secured federal and industrial funding and has graduated and trained over 70 PhD and MS/MEng students, scores of undergraduates, and over a dozen postdocs and research scientists. Below we describe three currently funded projects.
The purpose of this project is to improve student learning through the automated capture of non-verbal cues of engagement: how can we use students' expressions of engagement, based on non-verbal signs such as facial expressions, body and eye movements, physiological reactions, and posture, to enhance learning? The project goals are: a) establishment of a robust network of non-obtrusive and non-invasive sensors in mid-size classes to enable real-time extraction of facial and vital signs, which are integrated and displayed on instructors' dashboards; b) identification of robust descriptors for modeling the emotional and behavioral components of engagement using data collected by the sensor networks; c) gathering meaningful data for subsequent work on emotional, behavioral, and cognitive metrics of engagement; and d) exploring the effectiveness of artificial intelligence and machine learning for delivering education and training in STEM subjects. Fig. 1 shows data collection for behavioral and emotional engagement: i) eye gaze; ii) head movement; iii) hand movement; and iv) facial expressions. Videos captured by webcams are used to extract facial information and eye gaze; video captured by a wall camera is used to track hand and body movement. A 3D reconstruction shows the students' heads and gazes relative to the teacher's board.
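To make the head-movement component concrete, the following is a minimal sketch of how a head's orientation relative to the camera (and hence, given a calibrated room, relative to the teacher's board) can be estimated from a single webcam frame. It assumes 2D facial landmarks have already been detected by an off-the-shelf landmark detector; the generic 3D face model, the approximate camera intrinsics, and the example pixel coordinates are illustrative assumptions, not the lab's actual pipeline.

```python
# Head-pose sketch: estimate where a student's head is pointing from
# 2D facial landmarks in one webcam frame. Landmark detection itself
# is assumed; the pixel coordinates below are hypothetical placeholders.
import cv2
import numpy as np

# Generic 3D face model (mm): nose tip, chin, eye corners, mouth corners.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye, left corner
    (225.0, 170.0, -135.0),    # right eye, right corner
    (-150.0, -150.0, -125.0),  # mouth, left corner
    (150.0, -150.0, -125.0),   # mouth, right corner
], dtype=np.float64)

def head_pose(image_points, frame_size):
    """Return rotation/translation of the head in camera coordinates."""
    h, w = frame_size
    # Approximate intrinsics: focal length ~ image width, principal point at center.
    camera_matrix = np.array([[w, 0, w / 2],
                              [0, w, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    return ok, rvec, tvec

# Hypothetical landmark pixel coordinates for one 640x480 frame.
pts = np.array([(320, 240), (325, 360), (250, 200),
                (390, 200), (280, 300), (360, 300)], dtype=np.float64)
ok, rvec, tvec = head_pose(pts, (480, 640))
if ok:
    R, _ = cv2.Rodrigues(rvec)            # rotation matrix
    forward = R @ np.array([0, 0, 1.0])   # head's forward axis in camera frame
    print("forward axis:", forward)
```

Comparing this forward axis against the known position of the board gives a simple attention cue per student per frame.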
Fig. 1. Layout of the students' engagement research at the CVIP Lab. The setup enabled data collection under an approved IRB protocol with signed consent forms. The data were used to test emotional and behavioral engagement classifiers based on convolutional neural networks (CNNs) trained with deep learning. The 3D reconstruction, shown at lower right, places the students' heads and gazes relative to the teacher's board, which provides a cue to how well students are attending to the lecture.
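As a sketch of what such an engagement classifier might look like, the small CNN below classifies cropped face images into engagement categories. The input size, the three-class label set, and the architecture are illustrative assumptions; the lab's actual network and training setup are not specified here.

```python
# Minimal engagement-classifier sketch, assuming cropped grayscale face
# images (48x48) and hypothetical labels {engaged, neutral, disengaged}.
import torch
import torch.nn as nn

class EngagementCNN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 24 -> 12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = EngagementCNN()
faces = torch.randn(8, 1, 48, 48)  # a batch of face crops
logits = model(faces)              # (8, 3) class scores
print(logits.argmax(dim=1))        # predicted engagement class per face
```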
This research has been supported by NSF Award #1900456.
This project aims to build a front-end visualization system for Computed Tomography Colonography (CTC). The technology uses abdominal CT scans of prepped patients to create a 3D model of the human colon and enable visual inspection of the luminal surface, in the same manner as performed physically on prepped patients by gastroenterologists using optical colonoscopy (OC). CTC is non-invasive and can serve as a prelude to OC when warranted, thus enabling large-scale screening for colorectal cancer while minimizing healthcare costs. Our purpose is to create an entirely model-based CTC system, which would enable expert CTC radiologists, as well as AI-based methods, to examine the luminal surface to detect and classify colonic polyps for early detection of colorectal cancer. Fig. 2 shows the layout of the CTC system.
Fig. 2. CTC systems: (a) typical setup; (b) an R&D platform developed by the investigators in collaboration with Kentucky Imaging Technologies.
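As an illustration of the model-building step behind such a system, the sketch below segments the air-filled (insufflated) colon lumen from a CT volume and extracts a triangle mesh of the luminal surface for rendering and fly-through. The file name, HU threshold, and voxel spacing are assumptions; a production pipeline would also remove outside-body air, separate bowel loops, and handle residual fluid.

```python
# Sketch: segment the insufflated colon lumen from abdominal CT and
# extract the luminal surface mesh. Inputs are illustrative assumptions.
import numpy as np
from scipy import ndimage
from skimage import measure

ct = np.load("abdominal_ct_hu.npy")  # 3D array of Hounsfield units (assumed)
spacing = (1.0, 0.7, 0.7)            # z, y, x voxel size in mm (assumed)

# 1) Threshold air: the insufflated lumen is near -1000 HU.
air = ct < -800

# 2) Keep the largest connected air region as the colon (a real pipeline
#    also excludes air outside the body and separates bowel loops).
labels, n = ndimage.label(air)
sizes = ndimage.sum(air, labels, index=range(1, n + 1))
colon = labels == (int(np.argmax(sizes)) + 1)

# 3) Marching cubes yields the luminal surface mesh for rendering.
verts, faces, normals, _ = measure.marching_cubes(
    colon.astype(np.float32), level=0.5, spacing=spacing)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```

The resulting mesh is what a front-end viewer (or an AI polyp detector) navigates, in place of the physical inspection done during optical colonoscopy.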
This research has been supported by NSF Award #1602333.
Optical impressions, which are generated using an intra-oral scanner (IOS), allow the dentist to capture 3D models of the patient's dental arches. Unlike conventional impressions, which have always been unwelcome to patients because of the trays, impression materials, and poured plaster casts they require, optical impressions save time and space. Today's intra-oral scanners are highly developed, but they are expensive, their handpieces are bulky, and they require professional expertise to operate. Moreover, for long-span restorations, and in particular for full arches, IOSs do not yet appear to be sufficiently accurate. We are working on a new IOS system that works purely on optical sensors. The system is designed to compete with other IOS scanners on cost, accuracy, and ease of use. Simulation results (Fig. 3) for the proposed design show promising trueness compared with state-of-the-art scanners.
Fig. 3. Accuracy of 3D reconstruction using the CVIP Lab optical probe.
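As a sketch of how trueness can be quantified, the snippet below measures per-point deviations between an optical-impression point cloud and a ground-truth reference scan. The file names are placeholders, and rigid alignment of the two clouds (e.g., via ICP) is assumed to have been done beforehand.

```python
# Trueness sketch: nearest-neighbor deviation from each scanned point to
# an aligned ground-truth reference surface. Inputs are assumed files.
import numpy as np
from scipy.spatial import cKDTree

scan = np.load("ios_scan_points.npy")        # Nx3 points from the optical probe (assumed)
reference = np.load("reference_points.npy")  # Mx3 ground-truth points (assumed)

tree = cKDTree(reference)
dists, _ = tree.query(scan)  # nearest-neighbor distance per scan point

print(f"mean abs deviation: {dists.mean():.3f} mm")
print(f"RMS deviation:      {np.sqrt((dists ** 2).mean()):.3f} mm")
print(f"95th percentile:    {np.percentile(dists, 95):.3f} mm")
```

Summary statistics like these are the usual basis for comparing a new scanner's trueness against state-of-the-art devices.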