Machine Automation Perception and Learning (MAPLE)
Algorithms to model surgical systems, environments, and our interactions with both.
Director: Jie Ying Wu
https://vu-maple-lab.github.io/
One key project my lab is currently working on is 3D reconstruction from endoscope videos. While skilled surgeons can mentally map endoscope videos onto the patient's anatomy, machines currently cannot localize surgical instruments within it. My lab seeks to develop novel algorithms that use multi-modal fusion to better model surgical scenes and localize surgical instruments within them. First, we draw on computer vision techniques to detect feature points in the video, which allow us to triangulate the anatomy while simultaneously estimating the camera's motion. Next, a preoperative CT scan gives us an estimate of the anatomy's geometry. This is not the same as the intraoperative presentation, as the anatomy may have deformed, so we develop models to estimate the deformation. Additionally, the camera may be held by a robot, which has its own estimate of the camera's motion. As surgical robots are generally compliant, we model the errors in the robot's forward kinematics. Lastly, we develop algorithms that combine these sources of information to produce the best estimate of the surgical scene. This geometric reconstruction enables downstream tasks such as surgical robot automation.
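As a rough illustration of the first step, triangulating a feature point seen in two camera views is classically done with the linear (DLT) method. The sketch below is a minimal, self-contained example of that standard technique, not the lab's actual pipeline; the function name and setup are illustrative.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices for the two views.
    x1, x2: 2D image coordinates of the matched feature in each view.
    """
    # Each view contributes two rows to the homogeneous system A X = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize to Euclidean coordinates
```

In a full system this runs over many matched features at once while the camera poses themselves are refined, but the two-view case captures the geometry.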
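The final fusion step can be sketched in its simplest form: combining independent estimates of the same quantity (say, a camera-motion increment from vision and one from robot kinematics) by inverse-variance weighting, which yields the minimum-variance linear combination. Real pose fusion operates on SE(3) with a filter or graph optimizer; this scalar-per-axis version, with hypothetical names, only illustrates the weighting idea.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Fuse independent estimates by inverse-variance weighting.

    estimates: list of same-shaped arrays (e.g. motion increments per source).
    variances: one scalar variance per source; lower variance = higher weight.
    Returns the fused estimate and its (reduced) variance.
    """
    inv_var = 1.0 / np.asarray(variances, dtype=float)
    weights = inv_var / inv_var.sum()
    fused = sum(w * np.asarray(e, dtype=float)
                for w, e in zip(weights, estimates))
    fused_var = 1.0 / inv_var.sum()  # fused variance is below every input's
    return fused, fused_var
```

For example, fusing a noisy vision estimate with a stiffer kinematics estimate automatically leans toward whichever source is more certain at that moment.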