Adel S. Elmaghraby, an IEEE Life Senior Member, is the Speed School Director of Industrial Research and Innovation, Winnia Professor of CSE, and former chairman of the Computer Engineering and Computer Science Department at the University of Louisville. He has also held appointments at Carnegie Mellon's Software Engineering Institute and the University of Wisconsin-Madison, and has advised over 60 master's graduates and 24 doctoral graduates. His research and publications span intelligent systems, neural networks, cybersecurity, visualization, and simulation. The IEEE Computer Society has recognized his work with multiple awards, including a Golden Core membership.
- Ph.D. in Electrical Engineering, University of Wisconsin-Madison, 1982
- M.S. in Electrical Engineering, University of Wisconsin-Madison, 1978
- B.S. in Electrical Engineering - Computer Science and Automatic Control Systems, Alexandria University, 1973
Pressure injuries are a major concern in many nations. These wounds result from prolonged pressure on the skin and occur mainly among elderly and disabled patients. Although invasive methods are the most common way to retrieve quantitative information, they cause significant pain and discomfort to patients and may also increase the risk of infection. Developing non-intrusive methods for the assessment of pressure injuries would therefore provide a highly useful tool for caregivers and a relief for patients. Traditional methods rely on findings retrieved solely from 2D images; bypassing the 3D information carried by the deep and irregular shape of these wounds thus leads to biased measurements. In this paper, we propose an end-to-end system that uses a single 2D image and a 3D mesh of the pressure injury, acquired with the Structure Sensor, and outputs all the necessary findings: external segmentation of the wound as well as its real-world measurements (depth, area, volume, major axis, and minor axis). More specifically, a first block composed of a Mask R-CNN model uses the 2D image to segment the external boundaries of the wound. A second block then matches the 2D and 3D views, segments the wound in the 3D mesh using that segmentation output, and generates the aforementioned real-world measurements. Experimental results show that the proposed framework not only outputs refined segmentations with 87% precision but also retrieves reliable measurements, which can be used for medical assessment and healing evaluation of pressure injuries.
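The abstract does not detail how the real-world measurements are computed from the segmented 3D mesh, but the geometric quantities it lists can be sketched with standard mesh operations. The following is a minimal illustration, not the paper's actual pipeline: all function names are hypothetical, and it assumes the wound region has already been isolated as a set of mesh vertices and triangle faces in metric units. Surface area comes from summing per-triangle areas, major/minor axes from a PCA of the wound points, and depth from the maximum distance below a reference skin plane.

```python
import numpy as np

def mesh_surface_area(vertices, faces):
    """Total area of a triangle mesh: sum of 0.5 * |AB x AC| over faces."""
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    ab = v[f[:, 1]] - v[f[:, 0]]
    ac = v[f[:, 2]] - v[f[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(ab, ac), axis=1).sum()

def wound_axes(points):
    """Major and minor axis lengths via PCA (SVD) of the wound points.

    The extents along the first two principal directions approximate the
    major and minor axes of the wound's outline.
    """
    p = np.asarray(points, dtype=float)
    centered = p - p.mean(axis=0)
    # Rows of vt are principal directions, sorted by decreasing variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt.T
    extents = proj.max(axis=0) - proj.min(axis=0)
    return float(extents[0]), float(extents[1])

def wound_depth(points, plane_point, plane_normal):
    """Depth: maximum distance of wound points below the skin plane,
    where the plane normal points away from the body."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    signed = (np.asarray(points, dtype=float) - plane_point) @ n
    return float(max(0.0, -signed.min()))
```

In the system described above these computations would run on the wound sub-mesh produced by projecting the Mask R-CNN segmentation onto the 3D mesh; volume estimation additionally needs a cap surface over the wound opening, which is omitted here.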