Surgical robots rely on robust and efficient computer vision algorithms to intervene in real time. With clear and accurate datasets, surgeons can precisely move surgical tools relative to the deforming soft tissue.
However, training and testing such algorithms, especially those based on deep learning, requires large endoscopic datasets. Obtaining these datasets is a challenging task, as it requires expensive hardware, ethical approval, patient consent, and access to hospitals.
VisionBlender is a synthetic dataset generator built specifically to assist robotic surgery research. By adding a user interface to Blender, it allows users to generate realistic video sequences with ground-truth maps of depth, disparity, segmentation masks, surface normals, optical flow, object pose, and camera parameters.
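Ground-truth depth maps paired with camera parameters are enough to recover scene geometry directly. As a minimal sketch (the pinhole back-projection below is a standard formulation, not VisionBlender's own API, and the array shapes and units are assumptions for illustration):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into camera-space 3D points
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # shape (h, w, 3)

# Toy example: a flat surface 5 cm from the camera (hypothetical intrinsics).
depth = np.full((480, 640), 0.05)
points = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

With a depth map like this and the matching segmentation mask, one could extract a point cloud of only the tissue surface, which is the kind of ground truth that is expensive to capture in a real operating room.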
In a presentation at a workshop of the Medical Image Computing and Computer Assisted Intervention 2020 conference (MICCAI 2020), the researchers not only showed examples of endoscopic data generated with the tool, but also demonstrated a potential application in which the generated data was used to train and evaluate state-of-the-art 3D reconstruction algorithms.
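Evaluating a reconstruction algorithm against synthetic ground truth typically reduces to comparing a predicted depth map with the rendered one. A hedged sketch of such a comparison (the metric and the `valid_mask` idea are common practice, not a procedure taken from the paper):

```python
import numpy as np

def depth_rmse(pred, gt, valid_mask=None):
    """Root-mean-square error between predicted and ground-truth depth maps,
    optionally restricted to valid (e.g. non-background) pixels."""
    if valid_mask is None:
        valid_mask = np.ones_like(gt, dtype=bool)
    diff = pred[valid_mask] - gt[valid_mask]
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy example: a prediction with a uniform 1 cm error everywhere.
gt = np.full((4, 4), 0.05)
pred = gt + 0.01
error = depth_rmse(pred, gt)
```

Because every pixel of the synthetic ground truth is exact, such metrics can be computed densely, without the sparse or noisy references that real endoscopic evaluations must rely on.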
By making it possible to generate realistic endoscopic datasets efficiently, VisionBlender promises an exciting step forward for robotic surgery.
VisionBlender is an open-source project; more information can be found on VisionBlender's GitHub page.