Image-guided minimally invasive surgery has been credited with a variety of benefits for patients and surgical teams, such as shorter recovery times, a reduced rate and severity of postsurgical complications, higher patient acceptance, and better cost-efficiency. Recently, robot-assisted image-guided minimally invasive surgery has been on the rise. Current commercially available systems focus on reducing the burden on the surgeon by motorizing tiring manual tasks and by reducing or eliminating exposure to ionizing radiation. In addition, high-precision motorized stages improve the reproducibility and accuracy of the procedures. However, almost all robotic systems available today are mere teleoperators or holding-and-aiming assistants: they operate at a low level of autonomy and under constant supervision. Significant scientific challenges must be addressed before robotic systems can perform some tasks independently and under loose supervision. In the project AIMRobot, we will combine methods from artificial intelligence and multi-sensor fusion to determine, in real time and in deforming, dynamic environments, the relative position of a surgical tool with respect to the relevant target anatomy inside the patient's body, together with its uncertainty. This will lay the groundwork for the next generation of autonomous robotic surgery systems.
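To make the idea of fusing sensor readings with explicit uncertainty concrete, the following is a minimal, purely illustrative sketch (not the project's actual method): two independent estimates of a tool-tip position, each with a per-axis variance, are combined by inverse-variance weighting, which is the one-dimensional special case of a Kalman update. The sensor pairing and all numbers are hypothetical.

```python
# Illustrative only: inverse-variance fusion of two hypothetical position
# estimates of a surgical tool tip (e.g., from an optical tracker and an
# ultrasound-based localizer), each reported with a per-axis variance.

def fuse(est_a, var_a, est_b, var_b):
    """Fuse two per-axis measurements; returns (fused_estimate, fused_variance)."""
    fused_est, fused_var = [], []
    for xa, va, xb, vb in zip(est_a, var_a, est_b, var_b):
        w = vb / (va + vb)                      # weight toward the lower-variance sensor
        fused_est.append(w * xa + (1.0 - w) * xb)
        fused_var.append(va * vb / (va + vb))   # fused variance <= min(va, vb)
    return fused_est, fused_var

# Hypothetical tool-tip positions (millimetres) and per-axis variances.
pos, var = fuse([10.0, 5.0, 2.0], [4.0, 4.0, 4.0],
                [11.0, 5.2, 1.8], [1.0, 1.0, 1.0])
```

Note that the fused variance is always smaller than either input variance, which is exactly why combining complementary sensors can tighten the localization uncertainty that a downstream autonomous controller must respect.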