Image-based tracking of the C-arm remains a critical and challenging problem for many clinical applications, owing to the device's widespread use in computer-assisted procedures that rely on accurate tracking for subsequent planning, registration, and reconstruction tasks. In this thesis, I present several approaches to improving current C-arm tracking methods and devices for intra-operative procedures.
The first approach presents a novel two-dimensional fiducial comprising a set of coplanar conics, together with an improved single-image pose estimation algorithm that addresses segmentation errors using a mathematical equilibration approach. Simulation results show that the proposed algorithm reduces the mean rotation and translation errors by factors of 4 and 1.75, respectively. Experiments on real data, obtained by imaging a precisely machined model of three coplanar ellipses, yield pose estimates in good agreement with those of a ground-truth optical tracker. This two-dimensional fiducial can easily be placed under the patient, allowing a wide field of view for the motion of the C-arm.
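The conic-based formulation itself is beyond the scope of an abstract, but its geometric core — recovering a single-image pose from a planar fiducial — can be sketched. The following is a simplified, point-based stand-in (not the thesis algorithm): it estimates the world-to-image homography with the Direct Linear Transform and decomposes it into rotation and translation given the camera intrinsics. All function names and the synthetic calibration matrix below are illustrative assumptions.

```python
import numpy as np

def homography_dlt(pts_world, pts_img):
    """Estimate the homography H mapping coplanar world points (z = 0)
    to image points, via the Direct Linear Transform."""
    A = []
    for (X, Y), (u, v) in zip(pts_world, pts_img):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)     # null vector of A, reshaped to 3x3

def pose_from_homography(H, K):
    """Decompose H ~ K [r1 r2 t] into a rotation R and translation t,
    assuming the fiducial plane lies in front of the camera (t_z > 0)."""
    M = np.linalg.inv(K) @ H
    M /= np.linalg.norm(M[:, 0])    # fix the arbitrary projective scale
    if M[2, 2] < 0:                 # resolve the DLT sign ambiguity
        M = -M
    r1, r2, t = M[:, 0], M[:, 1], M[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)     # project onto SO(3) for orthonormality
    return U @ Vt, t
```

In the thesis setting, the segmented conics would replace the point correspondences used here; the homography-decomposition step is where segmentation noise propagates into the pose, which is what the equilibration approach targets.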
The second approach applies learning-based techniques to two-view geometry. A demonstrative algorithm simultaneously tackles the matching and segmentation of features extracted from pairs of acquired images. The corrected features can then be used to retrieve the epipolar geometry, which can ultimately provide pose parameters using a one-dimensional fiducial. I formulate the problem of match refinement for epipolar geometry estimation in a reinforcement-learning framework. Experiments demonstrate the ability to both reject false matches and fix small localization errors in the segmentation of true noisy matches in a minimal number of steps.
The third approach is a feasibility study of a method that entirely eliminates tracking fiducials. It relies only on preoperative data to initialize a point-based model, which is subsequently used to iteratively estimate the pose and the structure of the point-like intraoperative implants using three to six images simultaneously. This method is tested in the framework of prostate brachytherapy, in which preoperative data, including planned 3-D locations for a large number of point-like implants called seeds, is usually available. Simultaneous estimation of the C-arm pose for each image and localization of the seeds is studied in a simulation environment. Results indicate mean reconstruction errors below 1.2 mm for noisy plans of 84 seeds or fewer, attained when the mean 3-D error introduced into the plan by added Gaussian noise is less than 3.2 mm.
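One half of such an alternating pose-and-structure scheme — updating each seed's 3-D position given current pose estimates for the views — reduces to multi-view linear triangulation. The sketch below shows only that structure-update step under the assumption that the per-image projection matrices are already known; it is an illustrative component, not the thesis's full iterative algorithm.

```python
import numpy as np

def triangulate(projs, obs):
    """Linear (DLT) triangulation of one point-like seed observed in
    several views.  projs: list of 3x4 projection matrices;
    obs: the (u, v) image observation in each corresponding view."""
    A = []
    for P, (u, v) in zip(projs, obs):
        # Each view contributes two linear constraints on the
        # homogeneous 3-D position X: u*(P3.X) = P1.X, v*(P3.X) = P2.X
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize
```

In the full scheme, this step would alternate with a pose update (re-estimating each C-arm pose from the current seed positions), with the preoperative plan supplying the initialization and the seed-to-observation correspondences.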
Maria S. Ayad received the B.Sc. degree in Electronics and Communications from the Faculty of Engineering, Cairo University, in 2001. She also earned a diploma in Networks from the Information Technology Institute in Cairo (iTi) in 2002 and an M.S.E. in Computer Science from Johns Hopkins University in 2009. She was inducted into the Upsilon Pi Epsilon (UPE) honor society in 2008. She received the Abel Wolman Fellowship from the Whiting School of Engineering in 2006 and a National Science Foundation Graduate Research Fellowship in 2008.
Her research focuses on pose estimation, reconstruction, and structure-from-motion estimation for image-guided medical procedures and computer-assisted surgery. Her 2009 paper received the Best Student Paper Award in the Visualization, Image-Guided Procedures, and Modeling track of the 2009 SPIE Medical Imaging conference.
She has been working as an electrical patent examiner at the United States Patent and Trademark Office since 2013.