Ph.D. candidate: Sara El Hadji

Ph.D. period abroad: AI in Imaging and Neuroscience Lab, University of California, Los Angeles (UCLA)

Supervisor: Fabien Scalzo


Reconstruction of perfusion maps for a dynamic visualisation of the brain vascular tree, exploiting both previously extracted contrast-enhancement and morphological information. The proposed method uses morphological topology to regularise the output probability map obtained from an unsupervised classification.
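As an illustration of the regularisation idea, the toy sketch below (not the published method; function names, the threshold and the structuring-element size are all invented) suppresses probability-map responses that lack the connected, vessel-like morphological support expected of the vascular tree:

```python
import numpy as np
from scipy import ndimage

def regularise_prob_map(prob, thresh=0.5, structure_size=3):
    """Zero out probability responses without connected morphological
    support (illustrative sketch, assumed parameters)."""
    binary = prob > thresh
    structure = np.ones((structure_size, structure_size), dtype=bool)
    # Morphological opening removes speckle smaller than the structure.
    support = ndimage.binary_opening(binary, structure=structure)
    out = prob.copy()
    out[~support] = 0.0  # suppress unsupported responses
    return out

# Toy map: a 5-pixel-wide "vessel" band plus one isolated noisy pixel.
prob = np.zeros((20, 20))
prob[8:13, :] = 0.9      # vessel-like band
prob[2, 2] = 0.95        # isolated false positive
reg = regularise_prob_map(prob)
print(reg[2, 2], reg[10, 10])  # → 0.0 0.9 (noise suppressed, vessel kept)
```

The opening step is a stand-in for the richer topological regularisation described above; any structure thinner than the structuring element is treated as noise.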


Ph.D. candidate: Marco Vidotto

Ph.D. period abroad: Tribology Group, Imperial College, London

Supervisor: Daniele Dini


Implementation of a brain-like geometrical model to estimate the effective diffusivity and tortuosity of the extracellular space. Acquisition of Scanning Electron Microscope (SEM) images of human brain samples.
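The effective-diffusivity and tortuosity estimation can be illustrated with a minimal Monte Carlo random walk on a 2-D lattice with randomly placed obstacles — a deliberately simplified stand-in for the brain-like geometrical model (lattice size, walker counts and the obstacle fraction are all invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def effective_diffusivity(obstacle_fraction, n_walkers=2000, n_steps=400, size=64):
    """Estimate D_eff from the mean squared displacement of random
    walkers on a lattice with random obstacles (illustrative sketch)."""
    obstacles = rng.random((size, size)) < obstacle_fraction
    pos = np.full((n_walkers, 2), size // 2)
    disp = np.zeros((n_walkers, 2))
    steps = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    for _ in range(n_steps):
        move = steps[rng.integers(0, 4, n_walkers)]
        new = (pos + move) % size                # periodic boundaries
        blocked = obstacles[new[:, 0], new[:, 1]]
        pos = np.where(blocked[:, None], pos, new)    # reject blocked moves
        disp += np.where(blocked[:, None], 0, move)
    msd = (disp ** 2).sum(axis=1).mean()
    return msd / (4 * n_steps)                   # D = MSD / (4 t) in 2-D

d_free = effective_diffusivity(0.0)   # free medium: D ≈ 0.25 lattice units
d_eff = effective_diffusivity(0.3)    # obstructed medium: lower D
tortuosity = np.sqrt(d_free / d_eff)  # λ = sqrt(D_free / D_eff) > 1
print(d_free, d_eff, tortuosity)
```

Obstacles hinder the walk, so the mean squared displacement grows more slowly and the tortuosity comes out greater than one, mirroring how the extracellular geometry hinders diffusion.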


Ph.D. candidate: Alberto Favaro

Ph.D. period abroad: MiM, Mechatronics in Medicine, Imperial College, London

Supervisors: Ferdinando Rodriguez y Baena, Stefano Galvan, Riccardo Secoli


Development of an automatic obstacle segmentation and tracking system based on stereo imaging for in-vitro analysis of the EDEN2020 steerable catheter, used to test the pre-operative and intra-operative planning software and the catheter control system.


Ph.D. candidate: Hirenkumar Nakawala

Ph.D. period abroad: MediCIS group (Models of Surgical and Interventional Competencies), Faculty of Medicine, University of Rennes 1, Rennes, France

Supervisors: Bernard Gibaud, Pierre Jannin


Implementation of inductive logic programming and knowledge representation techniques for automatic surgical workflow analysis in robot-assisted surgery.


Ph.D. candidate: Nima Enayati

Ph.D. period abroad: Collaborative Haptics and Robotics in Medicine Laboratory (CHARM LAB), Stanford University, Department of Mechanical Engineering

Supervisor: Allison Okamura


Implementation of trainee progress assessment through machine learning methods in surgical robotics training, and study of adaptive haptic assistance methods to enhance training procedures.


Ph.D. candidate: Veronica Penza

Ph.D. period abroad: Surgical Robot Vision Group, Centre for Medical Image Computing, Department of Computer Science, University College London (UCL)

Supervisor: Danail Stoyanov


A Long Term Safety Area Tracking (LT-SAT) framework was developed to robustly track soft-tissue areas (Safety Areas, SA) that need to be preserved from injury during surgery. The framework combines a correlation-based tracker with a tracking-by-detection approach in order to be robust against failures caused by: (i) partial occlusion, (ii) total occlusion, and (iii) sudden endoscopic camera movements. A Bayesian inference approach, based on online context information, was used to detect failures of the tracker. A Model Update Strategy (MUpS) was proposed to improve the re-detection of the SA after failures, taking into account changes in its appearance due to contact with instruments or image noise.
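The Bayesian failure check can be illustrated with a toy model (not the published one: the Gaussian likelihoods, their means and the prior are all assumed). The correlation score of the tracker is modelled under two hypotheses, "tracking ok" and "failed", and Bayes' rule gives the failure posterior that would trigger re-detection:

```python
import numpy as np

def failure_posterior(score, prior_fail=0.1):
    """Toy Bayesian failure check: P(failed | correlation score).
    Likelihoods are assumed Gaussian (illustrative values only)."""
    def gauss(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    l_fail = gauss(score, mu=0.2, sigma=0.15)  # low scores expected on failure
    l_ok = gauss(score, mu=0.8, sigma=0.15)    # high scores while tracking
    num = l_fail * prior_fail
    return num / (num + l_ok * (1 - prior_fail))

p_ok = failure_posterior(0.85)    # high score: tracking trusted
p_fail = failure_posterior(0.15)  # low score: trigger SA re-detection
print(p_ok, p_fail)
```

In the actual framework the decision also draws on online context information; this sketch only shows the inference step on a single score.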


Ph.D. candidate: Jacopo Buzzi

Ph.D. period abroad: Ben Gurion University of the Negev – Biomedical Robotics Lab

Supervisor: Ilana Nisky


The use of teleoperated robotic systems is spreading widely across multiple fields, ranging from telemedicine to the handling of radioactive materials. In teleoperation, the user directly interacts with a robot that acts as a master device to control a slave manipulator; the master device translates the user's hand gestures into movements of the slave end-effector, which interacts with the environment. Since the user's movements can be downscaled to increase precision, the effects of hand tremor can be compensated and the user can operate in a safe and comfortable position.

Teleoperation represents a complex motor control task: users need to undergo an intensive phase of training to exploit the full potential of the highly dexterous robots and tools that are currently used. The effects of master devices on human motor control strategies are yet to be fully explored. This complexity derives from the high redundancy that characterizes the human arm, which translates into multiple possible joint configurations achieving the same hand movement. Previous works analysed joint variability using the uncontrolled manifold analysis, showing how expert teleoperators were able to exploit the arm redundancy to reduce hand movement variability more efficiently than novices. Experts were also able to achieve better performance in teleoperation than in free-hand operation, while novices showed the opposite.

In this work, we propose the uncontrolled manifold analysis as a way of highlighting the differences that different master devices induce in the motor control strategies adopted during the execution of three virtual teleoperation tasks, performed with a parallel-link and a serial-link master device. In the three planar tasks, users were asked to accurately follow a half-cloverleaf path, to perform eight radial reaching movements starting from a central resting position, and to follow the same half-cloverleaf path while also orienting the cylinder-shaped tool end-effector to stay tangent to the trajectory. The tasks were used to test the users' precision, their accuracy during fast movements, and their dexterity in orienting the tool.

The arm and thorax motions of eight right-handed users were acquired using optoelectronic and electromagnetic tracking devices. The arm kinematics were then reconstructed using a user-specific model in OpenSim. The results will be used to characterise the differences between the two types of master devices and to understand if and how different types of tasks elicit different ways of exploiting the arm redundancy.
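The uncontrolled manifold decomposition can be sketched for a deliberately reduced case: a planar two-link arm whose task variable is only the hand's x-position, so one joint-space direction (the null space of the task Jacobian) leaves the task unaffected. Link lengths, the mean posture and the noise levels below are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)
L1, L2 = 0.3, 0.25            # link lengths (m), assumed values
q0 = np.array([0.6, 0.9])     # mean joint configuration (rad), assumed

def jac_x(q):
    # d(hand x)/d(joint angles) for a planar two-link arm
    return np.array([-L1 * np.sin(q[0]) - L2 * np.sin(q[0] + q[1]),
                     -L2 * np.sin(q[0] + q[1])])

J = jac_x(q0)
norm = np.hypot(J[0], J[1])
null = np.array([-J[1], J[0]]) / norm   # spans null(J): no effect on hand x
orth = J / norm                          # task-relevant direction

# Simulated trial-to-trial joint deviations: mostly "goal-equivalent"
# noise along the null space, little noise in the task direction.
dq = (rng.normal(0, 0.05, (200, 1)) * null
      + rng.normal(0, 0.01, (200, 1)) * orth)

# UCM decomposition: variance within vs orthogonal to the manifold
v_ucm = np.mean((dq @ null) ** 2)   # variability not affecting the hand
v_orth = np.mean((dq @ orth) ** 2)  # variability affecting the hand
print(v_ucm, v_orth)  # v_ucm >> v_orth signals a hand-stabilising synergy
```

A ratio v_ucm/v_orth well above one is the signature of redundancy exploitation described above; the study applies the same decomposition to the full, user-specific arm model.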

This study is supported by the SMARTsurg project (grant #H2020-ICT-2016-1-732515), by the Israeli Science Foundation (grant #823/15), and by the Helmsley Charitable Trust through the ABC Robotics Initiative at BGU.


Ph.D. candidate: Sara Moccia

Ph.D. period abroad: German Cancer Research Centre

Supervisor: Prof. Dr. Lena Maier-Hein



Surgical Data Science (SDS) has recently emerged as a new scientific field that aims at improving the quality of interventional healthcare. In laparoscopy, some of the major opportunities brought by SDS for surgical outcome improvement are surgical decision support and context awareness. Indeed, although laparoscopy is considered a well-established technique, with advantages such as reduced patient pain, recovery time and risk of infection, it remains complex from the surgical standpoint, mainly due to the limited visual feedback. Image-guided identification of tissues can significantly enhance the surgeon's perception of the surgical scene and facilitate the decision-making process. Robustness and reliability of the classification are of crucial importance from the SDS perspective, since possible errors in anatomical manipulation could strongly and negatively affect the intervention outcome.

This research project focuses on developing an uncertainty-aware organ classification method for Surgical Data Science, with application in multispectral laparoscopy. Multispectral imaging generally produces images with tens or hundreds of channels (vs only 3 for RGB), each of which corresponds to the reflection of light within a certain wavelength band. Reflectance can be exploited together with textural features to classify abdominal organs with Support Vector Machines. For visual representation, RGB images are reconstructed from 3 multispectral channels (centered on red, green and blue) and shown in Fig. 1a,d. A measure of confidence in the classification (Fig. 1b,e) is included, and only confident regions in the image are used for labeling organs (Fig. 1e,f), further increasing the classification accuracy.
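A minimal sketch of the uncertainty-aware labelling step, using synthetic reflectance spectra in place of real multispectral data (the channel count, class statistics and confidence threshold are all invented):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic 8-channel "reflectance spectra" for two organ classes --
# illustrative stand-ins for real multispectral measurements.
n, channels = 200, 8
organ_a = rng.normal(0.3, 0.05, (n, channels))
organ_b = rng.normal(0.6, 0.05, (n, channels))
X = np.vstack([organ_a, organ_b])
y = np.array([0] * n + [1] * n)

clf = SVC(probability=True).fit(X, y)

# Test pixels: first half clearly organ A, second half ambiguous spectra.
X_test = np.vstack([rng.normal(0.3, 0.05, (50, channels)),
                    rng.normal(0.45, 0.05, (50, channels))])
proba = clf.predict_proba(X_test)

# Uncertainty-aware labelling: label only where the class probability
# exceeds a confidence threshold; everywhere else, abstain (-1).
confident = proba.max(axis=1) > 0.9
labels = np.where(confident, proba.argmax(axis=1), -1)
print(confident[:50].mean(), confident[50:].mean())
```

Restricting the labels to confident pixels is what raises the accuracy of the retained classifications, at the cost of leaving ambiguous regions unlabelled.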

