1
Trute RJ, Alijani A, Erden MS. Visual cues of soft-tissue behaviour in minimal-invasive and robotic surgery. J Robot Surg 2024;18:401. PMID: 39508918; PMCID: PMC11543711; DOI: 10.1007/s11701-024-02150-y.
Abstract
Minimal-invasive surgery (MIS) and robotic surgery (RS) offer multiple advantages over open surgery (Vajsbaher et al. in Cogn Syst Res 64:08, 2020). However, the lack of haptic feedback is still a limitation. Surgeons learn to adapt to this lack of haptic feedback by using visual cues to make judgements about tissue deformation. Experienced robotic surgeons use the visual interpretation of tissue as a surrogate for tactile feedback. The aim of this review is to identify the visual cues that are consciously or unconsciously used by expert surgeons to manipulate soft tissue safely during MIS and RS. We conducted a comprehensive literature review covering papers on visual cue identification and its application in education, as well as on skill assessment and surgeon performance measurement with respect to visual feedback. To visualise our results, we provide an overview of the state of the art in the form of a matrix across identified research features, in which papers are clustered and grouped comparatively. The clustering of the papers showed explicitly that state-of-the-art research does not specifically study the direct effects of visual cues on tissue manipulation, or training for that purpose, but concentrates instead on tissue identification. We identified a gap in the literature on the use of visual cues in educational design solutions that aid the training of soft-tissue manipulation in MIS and RS. There appears to be a need for RS education to make visual cue identification more accessible and to set it in the context of manipulation tasks.
Affiliation(s)
- Robin Julia Trute
- School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, UK
- Edinburgh Centre for Robotics, Edinburgh, UK
- Mustafa Suphi Erden
- School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, UK
- Edinburgh Centre for Robotics, Edinburgh, UK
2
Asgari M, Magerand L, Manfredi L. A review on model-based and model-free approaches to control soft actuators and their potentials in colonoscopy. Front Robot AI 2023;10:1236706. PMID: 38023589; PMCID: PMC10665478; DOI: 10.3389/frobt.2023.1236706.
Abstract
Colorectal cancer (CRC) is the third most common cancer worldwide and responsible for approximately 1 million deaths annually. Early screening is essential to increase the chances of survival, and it can also reduce the cost of treatment for healthcare centres. Colonoscopy is the gold standard for CRC screening and treatment, but it has several drawbacks, including difficulty in manoeuvring the device, patient discomfort, and high cost. Soft endorobots, small and compliant devices that can reduce the force exerted on the colonic wall, offer a potential solution to these issues. However, controlling these soft robots is challenging due to their deformable materials and the limitations of mathematical models. In this review, we discuss model-free and model-based approaches for controlling soft robots that can potentially be applied to endorobots for colonoscopy. We highlight the importance of selecting appropriate control methods based on various parameters, such as sensor and actuator solutions. This review aims to contribute to the development of smart control strategies for soft endorobots that can enhance the effectiveness and safety of robotics in colonoscopy. These strategies can be defined based on the available information about the robot and its surrounding environment, the control demands, the impact of the mechanical design, and characterization data obtained from calibration.
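As background to the model-free approaches this review surveys: one widely used model-free technique is to estimate the robot's task Jacobian online from observed input/output motion rather than from a mechanical model, e.g. with a Broyden rank-one update. The sketch below is a generic illustration of that idea on a toy linear system, not code from the paper:

```python
import numpy as np

def broyden_update(J, dx, dq, damping=1e-6):
    """Rank-one Broyden update of an estimated task Jacobian.

    J  : current Jacobian estimate (m x n)
    dx : observed change in task-space output (m,)
    dq : applied change in actuator inputs (n,)
    """
    denom = dq @ dq + damping          # guard against vanishing steps
    return J + np.outer(dx - J @ dq, dq) / denom

# Toy example: recover an unknown constant 2x2 map from input/output pairs,
# the way a model-free controller refines its Jacobian during operation.
rng = np.random.default_rng(0)
J_true = np.array([[1.0, 0.5], [-0.3, 2.0]])
J_est = np.eye(2)                      # crude initial guess
for _ in range(200):
    dq = rng.normal(size=2) * 0.1      # small exploratory actuation
    dx = J_true @ dq                   # what a sensor would observe
    J_est = broyden_update(J_est, dx, dq)

print(np.allclose(J_est, J_true, atol=1e-2))   # estimate has converged
```

For a soft endorobot the true map is configuration-dependent and nonlinear, which is exactly why such estimates must be updated continuously rather than computed once.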
Affiliation(s)
- Motahareh Asgari
- Division of Imaging Science and Technology, School of Medicine, University of Dundee, Dundee, United Kingdom
- Ludovic Magerand
- Division of Computing, School of Science and Engineering, University of Dundee, Dundee, United Kingdom
- Luigi Manfredi
- Division of Imaging Science and Technology, School of Medicine, University of Dundee, Dundee, United Kingdom
3
Ciocan RA, Graur F, Ciocan A, Cismaru CA, Pintilie SR, Berindan-Neagoe I, Hajjar NA, Gherman CD. Robot-Guided Ultrasonography in Surgical Interventions. Diagnostics (Basel) 2023;13:2456. PMID: 37510199; PMCID: PMC10378616; DOI: 10.3390/diagnostics13142456.
Abstract
INTRODUCTION: The introduction of robot-guided procedures in surgical techniques has increased the accuracy and control of resections. Surgery has evolved as a technique since the development of laparoscopy, which added visualisation of the peritoneal cavity from a different perspective. Multi-armed robots combined with real-time intraoperative imaging devices bring important improvements in manoeuvrability and dexterity in certain surgical fields. MATERIALS AND METHODS: The present study synthesises the development of imaging techniques in robotic surgery over the last ten years, with a focus on ultrasonography in abdominal surgical interventions. RESULTS: All studies involved abdominal surgery. Out of the seven studies, two were performed as clinical trials. The other five were performed on organs or simulators and attempted to develop a hybrid surgical technique combining ultrasonography and robotic surgery. Most studies aimed to identify both blood vessels and nerve structures through this combined technique (surgery and imaging). CONCLUSIONS: Ultrasonography is often used in minimally invasive surgical techniques. It adds visualisation of blood vessels, correct identification of tumour margins, and localisation of surgical instruments in the tissue. The development of ultrasound technology from 2D to 3D and 4D has improved minimally invasive and robotic surgical techniques, and it should be studied further to bring surgery to a higher level.
Affiliation(s)
- Răzvan Alexandru Ciocan
- Department of Surgery-Practical Abilities, "Iuliu Hațieganu" University of Medicine and Pharmacy Cluj-Napoca, Marinescu Street, No. 23, 400337 Cluj-Napoca, Romania
- Florin Graur
- Department of Surgery, "Iuliu Hațieganu" University of Medicine and Pharmacy Cluj-Napoca, Croitorilor Street, No. 19-21, 400162 Cluj-Napoca, Romania
- Andra Ciocan
- Department of Surgery, "Iuliu Hațieganu" University of Medicine and Pharmacy Cluj-Napoca, Croitorilor Street, No. 19-21, 400162 Cluj-Napoca, Romania
- Cosmin Andrei Cismaru
- Research Center for Functional Genomics, Biomedicine and Translational Medicine, "Iuliu Hațieganu" University of Medicine and Pharmacy Cluj-Napoca, Victor Babeș Street, No. 8, 400347 Cluj-Napoca, Romania
- Sebastian Romeo Pintilie
- "Iuliu Hațieganu" University of Medicine and Pharmacy Cluj-Napoca, Victor Babeș Street, No. 8, 400347 Cluj-Napoca, Romania
- Ioana Berindan-Neagoe
- Research Center for Functional Genomics, Biomedicine and Translational Medicine, "Iuliu Hațieganu" University of Medicine and Pharmacy Cluj-Napoca, Victor Babeș Street, No. 8, 400347 Cluj-Napoca, Romania
- Nadim Al Hajjar
- Department of Surgery, "Iuliu Hațieganu" University of Medicine and Pharmacy Cluj-Napoca, Croitorilor Street, No. 19-21, 400162 Cluj-Napoca, Romania
- Claudia Diana Gherman
- Department of Surgery-Practical Abilities, "Iuliu Hațieganu" University of Medicine and Pharmacy Cluj-Napoca, Marinescu Street, No. 23, 400337 Cluj-Napoca, Romania
4
Marchionna L, Pugliese G, Martini M, Angarano S, Salvetti F, Chiaberge M. Deep Instance Segmentation and Visual Servoing to Play Jenga with a Cost-Effective Robotic System. Sensors (Basel) 2023;23:752. PMID: 36679543; PMCID: PMC9866192; DOI: 10.3390/s23020752.
Abstract
The game of Jenga is a benchmark used for developing innovative manipulation solutions for complex tasks. Indeed, it encourages the study of novel robotics methods to successfully extract blocks from a tower. A Jenga game involves many traits of complex industrial and surgical manipulation tasks, requiring a multi-step strategy, the combination of visual and tactile data, and highly precise motion of a robotic arm to perform a single block extraction. In this work, we propose a novel, cost-effective architecture for playing Jenga with e.DO, a 6-DOF anthropomorphic manipulator manufactured by Comau, a standard depth camera, and an inexpensive monodirectional force sensor. Our solution focuses on a vision-based control strategy to accurately align the end-effector with the desired block, enabling block extraction by pushing. To this aim, we trained an instance segmentation deep learning model on a custom synthetic dataset to segment each piece of the Jenga tower, allowing for visual tracking of the desired block's pose during the motion of the manipulator. We integrated the vision-based strategy with a 1D force sensor to detect whether the block could be safely removed by identifying a force threshold value. Our experiments show that our low-cost solution allows e.DO to precisely reach removable blocks and perform up to 14 consecutive extractions.
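The force-based safety check described in the abstract (continue pushing only while the 1D force reading stays below a threshold) reduces to a simple decision rule over a window of sensor readings. The sketch below is illustrative; the threshold value and sensor interface are assumptions, not the paper's experimentally identified parameters:

```python
def is_block_removable(force_samples, threshold_n=2.0):
    """Decide from 1D force readings whether a push can safely continue.

    force_samples : recent force-sensor readings during the push, in newtons
    threshold_n   : illustrative force limit; a real system would identify
                    this value experimentally, as the paper describes.
    """
    peak = max(force_samples)
    # A load-bearing block resists the push with a rising force profile,
    # while a removable block yields with little resistance.
    return peak < threshold_n

# A loose block produces low readings; a load-bearing one pushes back hard.
print(is_block_removable([0.1, 0.3, 0.4]))   # True  -> keep pushing
print(is_block_removable([0.2, 1.5, 2.6]))   # False -> abort extraction
```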
Affiliation(s)
- Luca Marchionna
- Department of Electronics and Telecommunications (DET), Politecnico di Torino, 10129 Torino, Italy
- Giulio Pugliese
- Department of Electronics and Telecommunications (DET), Politecnico di Torino, 10129 Torino, Italy
- Mauro Martini
- Department of Electronics and Telecommunications (DET), Politecnico di Torino, 10129 Torino, Italy
- PIC4SeR Interdepartmental Centre for Service Robotics, 10129 Torino, Italy
- Simone Angarano
- Department of Electronics and Telecommunications (DET), Politecnico di Torino, 10129 Torino, Italy
- PIC4SeR Interdepartmental Centre for Service Robotics, 10129 Torino, Italy
- Francesco Salvetti
- Department of Electronics and Telecommunications (DET), Politecnico di Torino, 10129 Torino, Italy
- PIC4SeR Interdepartmental Centre for Service Robotics, 10129 Torino, Italy
- Marcello Chiaberge
- Department of Electronics and Telecommunications (DET), Politecnico di Torino, 10129 Torino, Italy
- PIC4SeR Interdepartmental Centre for Service Robotics, 10129 Torino, Italy
5
Li W, Ng WY, Zhang X, Huang Y, Li Y, Song C, Chiu PWY, Li Z. A Kinematic Modeling and Control Scheme for Different Robotic Endoscopes: A Rudimentary Research Prototype. IEEE Robot Autom Lett 2022. DOI: 10.1109/lra.2022.3186758.
Affiliation(s)
- Weibing Li
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Wing Yin Ng
- Department of Surgery and the Chow Yuk Ho Technology Centre for Innovative Medicine, The Chinese University of Hong Kong, Hong Kong, China
- Xue Zhang
- Department of Surgery and the Chow Yuk Ho Technology Centre for Innovative Medicine, The Chinese University of Hong Kong, Hong Kong, China
- Yisen Huang
- Department of Surgery and the Chow Yuk Ho Technology Centre for Innovative Medicine, The Chinese University of Hong Kong, Hong Kong, China
- Yehui Li
- Department of Surgery and the Chow Yuk Ho Technology Centre for Innovative Medicine, The Chinese University of Hong Kong, Hong Kong, China
- Chengzhi Song
- Shenzhen Cornerstone Technology Co., Ltd., Shenzhen, China
- Philip Wai-Yan Chiu
- Department of Surgery and the Chow Yuk Ho Technology Centre for Innovative Medicine, The Chinese University of Hong Kong, Hong Kong, China
- Zheng Li
- Department of Surgery, Chow Yuk Ho Technology Centre for Innovative Medicine, Li Ka Shing Institute of Health Science, and Multi-Scale Medical Robotics Centre, The Chinese University of Hong Kong, Hong Kong, China
6
Planning and visual-servoing for robotic manipulators in ROS. International Journal of Intelligent Robotics and Applications 2022. DOI: 10.1007/s41315-022-00253-z.
Abstract
This article presents a path planning strategy based on a probabilistic roadmap (PRM) and visual servo control (visual servoing) that allows a Motoman HP20D industrial robot to move from an initial position to a random final position in the presence of fixed obstacles. The process begins with an application of the PRM algorithm to take the robot from its initial position to a point in space with a free line of sight to the target; visual servoing is then applied to bring the robot to the desired position, where an image captured by a camera located at the robot's end effector matches a reference image located on the upper surface of a rectangular prismatic object. Algorithms and experiments were developed in simulation; specifically, the visual servo control, which includes the dynamic model of the robot and an image sensor subject to realistic lighting, was developed in the robot operating system (ROS) environment.
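For context, a minimal PRM of the kind this abstract describes has three steps: sample random configurations, connect each sample to collision-free nearest neighbours, and search the resulting graph. The sketch below is a generic 2D toy under assumed geometry (a wall with a gap), not the authors' Motoman HP20D implementation:

```python
import math, random, heapq

def prm_path(start, goal, edge_free, n_samples=300, k=8, seed=1):
    """Probabilistic roadmap in the 2D unit square (illustrative sketch).

    start, goal : (x, y) tuples; edge_free(a, b) reports whether the
    straight segment between two points is collision-free.
    """
    random.seed(seed)
    nodes = [start, goal] + [(random.random(), random.random())
                             for _ in range(n_samples)]
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # Learning phase: connect each node to its k nearest free neighbours.
    adj = {i: [] for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        near = sorted(range(len(nodes)), key=lambda j: dist(p, nodes[j]))[1:k + 1]
        for j in near:
            if edge_free(p, nodes[j]):
                adj[i].append((dist(p, nodes[j]), j))
                adj[j].append((dist(p, nodes[j]), i))
    # Query phase: Dijkstra from start (index 0) to goal (index 1).
    best, pq = {0: 0.0}, [(0.0, 0, [0])]
    while pq:
        d, i, trail = heapq.heappop(pq)
        if i == 1:
            return [nodes[n] for n in trail]
        for w, j in adj[i]:
            if d + w < best.get(j, float("inf")):
                best[j] = d + w
                heapq.heappush(pq, (d + w, j, trail + [j]))
    return None   # roadmap did not connect start and goal

# Obstacle: a vertical wall at x = 0.5, passable only through a gap.
def edge_free(a, b):
    if (a[0] - 0.5) * (b[0] - 0.5) < 0:        # segment crosses the wall
        t = (0.5 - a[0]) / (b[0] - a[0])
        y = a[1] + t * (b[1] - a[1])
        return 0.4 < y < 0.6                   # only the gap is free
    return True

path = prm_path((0.1, 0.1), (0.9, 0.9), edge_free)
print(path is not None)
```

In the paper's pipeline, the PRM only needs to reach a configuration with a free line of sight to the target; the final approach is handed over to visual servoing.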
7
Alami H, Lehoux P, Shaw SE, Papoutsi C, Rybczynska-Bunt S, Fortin JP. Virtual Care and the Inverse Care Law: Implications for Policy, Practice, Research, Public and Patients. Int J Environ Res Public Health 2022;19:10591. PMID: 36078313; PMCID: PMC9518297; DOI: 10.3390/ijerph191710591.
Abstract
Virtual care spread rapidly at the outbreak of the COVID-19 pandemic. Restricting in-person contact contributed to reducing the spread of infection and saved lives. However, the benefits of virtual care were not evenly distributed within and across social groups, and existing inequalities became exacerbated for those unable to fully access, or benefit from, virtual services. This perspective paper discusses the extent to which challenges in virtual care access and use in the context of COVID-19 follow the Inverse Care Law, which stipulates that the availability and quality of health care is inversely proportional to the level of population health needs. We highlight the inequalities affecting some disadvantaged populations' access to, and use of, public and private virtual care, and contrast this with a utopian vision of technology as the "solution to everything". In public and universal health systems, the Inverse Care Law may manifest itself in access issues, capacity, and/or lack of perceived benefit in using digital technologies, as well as in data poverty. For commercial "direct-to-consumer" services, all of the above may be encouraged via a consumerist (i.e., profit-oriented) approach, limited and episodic services, or the use of low-direct-cost platforms.
With virtual care rapidly growing, we set out ways forward for policy, practice, and research to ensure that virtual care benefits everyone, which include: (1) paying more attention to the "capabilities" supporting access to and use of virtual care; (2) considering digital technologies a basic human right that should be taken into account automatically, not only in health policies but also in social policies; (3) taking more seriously the impact of the digital economy on equity, notably through greater state involvement in co-constructing "public health value" through innovation; and (4) reconsidering the dominant digital innovation research paradigm to better recognize the contexts, factors, and conditions that influence access to and use of virtual care by different groups.
Affiliation(s)
- Hassane Alami
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG, UK
- Pascale Lehoux
- Center for Public Health Research and Department of Health Management, Evaluation and Policy, University of Montreal, Montreal, QC H3C 3J7, Canada
- Sara E. Shaw
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG, UK
- Chrysanthi Papoutsi
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG, UK
- Sarah Rybczynska-Bunt
- Community and Primary Care Research Group, Faculty of Health, Plymouth University, Plymouth PL6 8BX, UK
- Jean-Paul Fortin
- VITAM Research Centre on Sustainable Health, Faculty of Medicine, Laval University, Quebec, QC G1J 2G1, Canada
8
A Natural Language Interface for an Autonomous Camera Control System on the da Vinci Surgical Robot. Robotics 2022. DOI: 10.3390/robotics11020040.
Abstract
Positioning a camera during laparoscopic and robotic procedures is challenging and essential for successful operations. During surgery, if the camera view is not optimal, surgery becomes more complex and potentially error-prone. To address this need, we have developed a voice interface to an autonomous camera system that can trigger behavioral changes and be more of a partner to the surgeon. Like a human operator, the camera can take cues from the surgeon to help create optimized surgical camera views. It has the advantage of nominal behavior that is helpful in most general cases, and its natural language interface makes it dynamically customizable and on-demand, permitting control of the camera at a higher level of abstraction. This paper presents the implementation details and usability of a voice-activated autonomous camera system. A voice activation test on a limited set of practiced key phrases was performed using both online and offline voice recognition systems. The results show an average recognition accuracy greater than 94% for the online system and 86% for the offline system. However, the response time of the online system was greater than 1.5 s, whereas that of the offline system was 0.6 s. This work is a step towards cooperative surgical robots that will effectively partner with human operators to enable more robust surgeries. A video link of the system in operation is provided in this paper.
9
Huber M, Mitchell JB, Henry R, Ourselin S, Vercauteren T, Bergeles C. Homography-based Visual Servoing with Remote Center of Motion for Semi-autonomous Robotic Endoscope Manipulation. Int Symp Med Robot 2021;220:1-7. PMID: 39351396; PMCID: PMC7616652; DOI: 10.1109/ismr48346.2021.9661563.
Abstract
The dominant visual servoing approaches in Minimally Invasive Surgery (MIS) follow single points or adapt the endoscope's field of view based on the surgical tools' distance. These methods rely on point positions with respect to the camera frame to infer a control policy. Deviating from the dominant methods, we formulate a robotic controller that allows for image-based visual servoing requiring neither explicit tool and camera positions nor any explicit image depth information. The proposed method relies on homography-based image registration, which changes the automation paradigm from a point-centric towards a surgical-scene-centric approach, while simultaneously respecting a programmable Remote Center of Motion (RCM). Our approach allows a surgeon to build a graph of desired views from which, once built, views can be manually selected and automatically servoed to, irrespective of changes in the robot-patient frame transformation. We evaluate our method on an abdominal phantom and provide an open-source ROS MoveIt integration for use with any serial manipulator. A video is provided.
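Schematically, a homography-based controller regulates the homography between the current and desired views toward the identity, with no explicit depth required. The sketch below shows only an idealized error definition and its closed-loop decay under a proportional law; it is a generic illustration, not the paper's RCM-constrained controller:

```python
import numpy as np

def homography_error(H):
    """Task error that vanishes when current and desired views coincide,
    i.e. when the homography equals the identity (illustrative form;
    the paper's exact error formulation may differ)."""
    H = H / H[2, 2]                    # fix the projective scale
    return (H - np.eye(3)).ravel()

# Idealized closed-loop behaviour a proportional servo law targets:
# the homography is driven exponentially toward the identity.
H = np.array([[1.10,  0.02,  0.05],
              [-0.03, 0.95, -0.04],
              [0.001, 0.00,  1.00]])   # small initial view misalignment
gain = 0.5
for _ in range(20):
    H = H + gain * (np.eye(3) - H)     # error shrinks by (1 - gain) per step

print(np.linalg.norm(homography_error(H)) < 1e-4)   # True: views aligned
```

Because the error is defined purely on the image registration result, no tool or camera positions in a world frame enter the loop, which is the point the abstract emphasizes.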
Affiliation(s)
- Martin Huber
- School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
- John Bason Mitchell
- School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
- Department of Medical Physics and Biomedical Engineering, Faculty of Engineering Sciences, University College London, London, United Kingdom
- Ross Henry
- School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
- Sébastien Ourselin
- School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
- Tom Vercauteren
- School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
- Christos Bergeles
- School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
10
Vision-Based Framework of Single Master Dual Slave Semi-Autonomous Surgical Robot System. Ing Rech Biomed 2021. DOI: 10.1016/j.irbm.2020.06.005.
11
Cherubini A, Navarro-Alarcon D. Sensor-Based Control for Collaborative Robots: Fundamentals, Challenges, and Opportunities. Front Neurorobot 2021;14:576846. PMID: 33488375; PMCID: PMC7817623; DOI: 10.3389/fnbot.2020.576846.
Abstract
The objective of this paper is to present a systematic review of existing sensor-based control methodologies for applications that involve direct interaction between humans and robots, in the form of either physical collaboration or safe coexistence. To this end, we first introduce the basic formulation of the sensor-servo problem and then present its most common approaches: vision-based, touch-based, audio-based, and distance-based control. Afterwards, we discuss and formalize the methods that integrate heterogeneous sensors at the control level. The surveyed body of literature is classified according to various factors, such as sensor type, sensor integration method, and application domain. Finally, we discuss open problems, potential applications, and future research directions.
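For the vision-based case, the basic sensor-servo formulation is classically written as v = -λL⁺e, where L stacks the interaction matrices of the tracked image features and e is the feature error. The sketch below implements that textbook law for normalized point features; it is generic background, not code from this survey:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point
    at (x, y) with depth Z, in the classical IBVS form: it maps the 6-DOF
    camera velocity (vx, vy, vz, wx, wy, wz) to the point's image motion."""
    return np.array([
        [-1 / Z,      0, x / Z,      x * y, -(1 + x**2),  y],
        [     0, -1 / Z, y / Z, 1 + y**2,       -x * y,  -x],
    ])

def ibvs_velocity(points, desired, Z, gain=0.5):
    """Classical image-based visual servoing law: v = -gain * pinv(L) @ e."""
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in points])
    e = (np.asarray(points) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ e

# Sanity check: zero image error commands zero camera velocity.
feats = [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1)]
v = ibvs_velocity(feats, feats, Z=1.0)
print(np.allclose(v, 0))   # True
```

The touch-, audio-, and distance-based approaches the review covers follow the same servo structure with a different sensor Jacobian in place of L.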
Affiliation(s)
- David Navarro-Alarcon
- Department of Mechanical Engineering, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
12
Li W, Chiu PWY, Li Z. An Accelerated Finite-Time Convergent Neural Network for Visual Servoing of a Flexible Surgical Endoscope With Physical and RCM Constraints. IEEE Trans Neural Netw Learn Syst 2020;31:5272-5284. PMID: 32011270; DOI: 10.1109/tnnls.2020.2965553.
Abstract
This article designs and analyzes a recurrent neural network (RNN) for the visual servoing of a flexible surgical endoscope. The flexible surgical endoscope is based on a commercially available UR5 robot with a flexible endoscope attached as an end-effector. Most existing visual servo control frameworks for robotic endoscopes or robot arms consider neither the physical limits of the robot nor the remote center of motion (RCM) constraints (i.e., the fulcrum effect). To tackle this issue, this article first conducts kinematic modeling of the flexible robotic endoscope to achieve automation by visual servo control. The kinematic modeling results in a quadratic programming (QP) framework with physical limits and RCM constraints involved, making the UR5 robot applicable to the surgical field. To solve the QP problem and accomplish the visual task, an RNN activated by a sign-bi-power activation function (AF) is proposed. The motivation for using the sign-bi-power AF is to enable the RNN to exhibit accelerated finite-time convergence, which is preferred in time-critical applications. Theoretically, the finite-time convergence of the RNN is rigorously proved using Lyapunov theory. Compared with previous AFs applied to RNNs, theoretical analysis shows that the RNN activated by the sign-bi-power AF delivers an accelerated convergence speed. Comparative validations are performed, showing that the proposed finite-time convergent neural network effectively achieves visual servoing of the flexible endoscope with physical limits and RCM constraints handled simultaneously.
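The sign-bi-power AF is commonly written in the finite-time-convergence literature as a blend of a sub-linear and a super-linear power of the argument. The sketch below uses one standard form with exponent r and its reciprocal; the article's exact variant may differ in scaling:

```python
import numpy as np

def sign_bi_power(x, r=0.5):
    """Sign-bi-power activation, one standard form (0 < r < 1):
    0.5 * (|x|^r + |x|^(1/r)) * sign(x).

    The |x|^r term dominates near the origin and keeps the drive large
    even for tiny errors, which is what yields finite-time (rather than
    merely exponential) convergence of the RNN error dynamics."""
    x = np.asarray(x, dtype=float)
    return 0.5 * (np.abs(x) ** r + np.abs(x) ** (1 / r)) * np.sign(x)

# Near the origin the activation exceeds a linear one: for e = 0.01,
# 0.5 * (0.1 + 0.0001) ~ 0.05, five times the linear drive.
e = 0.01
print(sign_bi_power(e) > e)   # True
```

The |x|^(1/r) term plays the complementary role far from the origin, accelerating the transient when the initial error is large.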
13
Bihain F, Klein M, Nomine-Criqui C, Brunaud L. Robotic adrenalectomy in patients with pheochromocytoma: a systematic review. Gland Surg 2020;9:844-848. PMID: 32775278; DOI: 10.21037/gs-2019-ra-05.
Abstract
Pheochromocytomas (PHEOs) are neural crest cell tumors producing catecholamines. PHEOs need to be diagnosed early and managed adequately. Adrenalectomy is the gold standard treatment for this type of tumor. There has been major improvement in surgical technology over the past several years with the development of laparoscopic and robotic systems. We conducted a review of the literature to evaluate the robotic approach to adrenalectomy for patients with PHEO.
Affiliation(s)
- Florence Bihain
- Département de Chirurgie Viscérale, Métabolique et Cancérologique (CVMC), Unité multidisciplinaire de chirurgie métabolique, endocrinienne et thyroïdienne (UMET), CHRU Brabois, Université de Lorraine, Nancy, France
- Marc Klein
- Service d'Endocrinologie, Diabétologie et Nutrition, Unité multidisciplinaire de chirurgie métabolique, endocrinienne et thyroïdienne (UMET), CHRU Brabois, Université de Lorraine, Nancy, France
- Claire Nomine-Criqui
- Département de Chirurgie Viscérale, Métabolique et Cancérologique (CVMC), Unité multidisciplinaire de chirurgie métabolique, endocrinienne et thyroïdienne (UMET), CHRU Brabois, Université de Lorraine, Nancy, France
- Laurent Brunaud
- Département de Chirurgie Viscérale, Métabolique et Cancérologique (CVMC), Unité multidisciplinaire de chirurgie métabolique, endocrinienne et thyroïdienne (UMET), CHRU Brabois, Université de Lorraine, Nancy, France
14
Yu L, Wang P, Yan Y, Xia Y, Cao W. MASSD: Multi-scale attention single shot detector for surgical instruments. Comput Biol Med 2020;123:103867. PMID: 32658787; DOI: 10.1016/j.compbiomed.2020.103867.
Abstract
Surgical instrument detection is a significant task in computer-aided minimally invasive surgery for providing real-time feedback to physicians, evaluating surgical skills, and developing training plans for surgeons. In this study, a multi-scale attention single shot detector is designed for surgical instruments. In the field of object detection, accurate detection of small objects is always a challenging task. We propose an innovative feature fusion technique aimed at small surgical instrument detection. First, an attention map is created from high-level features and applied to the low-level features to enrich their semantic information. The original and processed features are then fused by skip connection. Finally, multi-scale feature maps are created to predict from the fused features. The experiments on the ATLAS Dione dataset yielded a detection time of 0.066 s per frame and a mean average precision of 90.08%. Our proposed feature fusion module obtains more semantic information for low-level features and significantly enhances the performance of small surgical instrument detection.
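The described fusion step (an attention map derived from high-level features gating the low-level features, plus a skip connection) can be sketched schematically. The shapes, nearest-neighbour upsampling, and mean-then-sigmoid attention below are simplified assumptions for illustration, not the MASSD architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_fuse(low, high):
    """Gate low-level features with an attention map built from
    high-level features, then add the original features back (skip).

    low  : low-level feature map, shape (C, H, W)
    high : high-level feature map, shape (C, H/2, W/2)
    """
    # Nearest-neighbour upsample the high-level map to the low-level size.
    up = high.repeat(2, axis=1).repeat(2, axis=2)
    # Collapse channels into a single spatial attention map in (0, 1).
    att = sigmoid(up.mean(axis=0, keepdims=True))    # shape (1, H, W)
    # Attention-weighted features plus the skip connection.
    return low * att + low

low = np.ones((4, 8, 8))       # semantically poor but spatially fine
high = np.zeros((4, 4, 4))     # semantically rich but coarse
out = attention_fuse(low, high)
print(out.shape)               # (4, 8, 8)
print(float(out[0, 0, 0]))     # gate sigmoid(0)=0.5 -> 1*0.5 + 1 = 1.5
```

The skip connection guarantees the low-level detail survives even where the attention gate is near zero, which matters precisely for the small instruments the paper targets.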
Affiliation(s)
- Lingtao Yu
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China.
- Pengcheng Wang
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China.
- Yusheng Yan
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China.
- Yongqiang Xia
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China.
- Wei Cao
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, China.
15
Gonzalez EA, Bell MAL. GPU implementation of photoacoustic short-lag spatial coherence imaging for improved image-guided interventions. J Biomed Opt 2020;25:1-19. PMID: 32713168; PMCID: PMC7381831; DOI: 10.1117/1.jbo.25.7.077002.
Abstract
SIGNIFICANCE Photoacoustic-based visual servoing is a promising technique for surgical tool tip tracking and automated visualization of photoacoustic targets during interventional procedures. However, one outstanding challenge has been the reliability of obtaining segmentations using low-energy light sources that operate within existing laser safety limits. AIM We developed the first known graphical processing unit (GPU)-based real-time implementation of short-lag spatial coherence (SLSC) beamforming for photoacoustic imaging and applied this real-time algorithm to improve signal segmentation during photoacoustic-based visual servoing with low-energy lasers. APPROACH A 1-mm-core-diameter optical fiber was inserted into ex vivo bovine tissue. Photoacoustic-based visual servoing was implemented as the fiber was manually displaced by a translation stage, which provided ground truth measurements of the fiber displacement. GPU-SLSC results were compared with a central processing unit (CPU)-SLSC approach and an amplitude-based delay-and-sum (DAS) beamforming approach. Performance was additionally evaluated with in vivo cardiac data. RESULTS The GPU-SLSC implementation achieved frame rates up to 41.2 Hz, representing a factor of 348 speedup when compared with offline CPU-SLSC. In addition, GPU-SLSC successfully recovered low-energy signals (i.e., ≤268 μJ) with mean ± standard deviation of signal-to-noise ratios of 11.2 ± 2.4 (compared with 3.5 ± 0.8 with conventional DAS beamforming). When energies were lower than the safety limit for skin (i.e., 394.6 μJ for 900-nm wavelength laser light), the median and interquartile range (IQR) of visual servoing tracking errors obtained with GPU-SLSC were 0.64 and 0.52 mm, respectively (which were lower than the median and IQR obtained with DAS by 1.39 and 8.45 mm, respectively). GPU-SLSC additionally reduced the percentage of failed segmentations when applied to in vivo cardiac data. 
CONCLUSIONS Results are promising for the use of low-energy, miniaturized lasers to perform GPU-SLSC photoacoustic-based visual servoing in the operating room with laser pulse repetition frequencies as high as 41.2 Hz.
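The SLSC beamformer accelerated in this study has a compact definition: the normalized spatial correlation of delayed channel data, averaged over element pairs and summed over short receive lags. Below is a minimal NumPy sketch of that per-pixel computation, not the authors' GPU code; the array shapes and lag normalization are illustrative assumptions.

```python
import numpy as np

def slsc_pixel(channel_data, max_lag):
    """Short-lag spatial coherence (SLSC) value for one pixel: normalized
    correlation between receive channels, averaged over channel pairs and
    summed over lags 1..max_lag.

    channel_data: (n_channels, n_samples) array of time-delayed channel data.
    """
    n_ch = channel_data.shape[0]
    total = 0.0
    for m in range(1, max_lag + 1):
        corr = 0.0
        for i in range(n_ch - m):
            a, b = channel_data[i], channel_data[i + m]
            denom = np.sqrt(np.dot(a, a) * np.dot(b, b))
            if denom > 0:
                corr += np.dot(a, b) / denom
        total += corr / (n_ch - m)  # average over pairs at this lag
    return total
```

A perfectly coherent wavefront (identical data on every channel) scores `max_lag`, while uncorrelated noise scores near zero, which is why coherence-based SLSC can recover low-energy photoacoustic signals that amplitude-based DAS buries in noise.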
Affiliation(s)
- Eduardo A. Gonzalez
- Johns Hopkins University, School of Medicine, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Muyinatu A. Lediju Bell
- Johns Hopkins University, School of Medicine, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Whiting School of Engineering, Department of Electrical and Computer Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Whiting School of Engineering, Department of Computer Science, Baltimore, Maryland, United States
16
Zhang J, Gao X. Object extraction via deep learning-based marker-free tracking framework of surgical instruments for laparoscope-holder robots. Int J Comput Assist Radiol Surg 2020; 15:1335-1345. [DOI: 10.1007/s11548-020-02214-y]
17
Wang X, Fang G, Wang K, Xie X, Lee KH, Ho JDL, Tang WL, Lam J, Kwok KW. Eye-in-Hand Visual Servoing Enhanced With Sparse Strain Measurement for Soft Continuum Robots. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.2969953]
18
Sun Y, Pan B, Fu Y, Niu G. Visual-based autonomous field of view control of laparoscope with safety-RCM constraints for semi-autonomous surgery. Int J Med Robot 2020; 16:e2079. [PMID: 31953893 DOI: 10.1002/rcs.2079]
Abstract
PURPOSE During robotic surgery, the surgeon does not have timely, direct control of the laparoscopic field of view. Autonomous laparoscope control can provide an appropriate surgical field of view and raise the level of intelligence of the surgical robot system. METHODS This study explores an autonomous laparoscope control framework for semi-autonomous surgery. We propose a novel concept that integrates two forms of Remote Center of Motion (RCM) constraint: a safety-RCM model that copes with collision conditions, and a modified image Jacobian matrix that realizes the RCM constraint. Together, the two models constitute a dual-RCM constraint for the robot's laparoscope arm. The algorithm is validated in two experiments. RESULTS The experimental results show that the RCM position error is reduced under the dual-RCM constraint. Owing to the safety-RCM model, autonomous control of the surgical field of view is maintained even in the event of a collision. CONCLUSION Autonomous laparoscope control can be realized with the safety-RCM and dual-RCM algorithms.
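The modified image Jacobian at the core of this framework builds on the standard eye-in-hand interaction-matrix formulation. The sketch below shows that generic building block, classic image-based visual servoing for one point feature, and does not reproduce the paper's RCM modification, which would further restrict the computed twist to motions feasible through the trocar pivot.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classic interaction (image Jacobian) matrix for one normalized image
    point (x, y) at depth Z: maps the 6-DoF camera twist
    [vx, vy, vz, wx, wy, wz] to the feature's image velocity."""
    return np.array([
        [-1 / Z, 0.0, x / Z, x * y, -(1 + x * x), y],
        [0.0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def ibvs_step(x, y, x_des, y_des, Z, gain=0.5):
    """One image-based visual-servoing step: the camera twist that drives
    the feature toward its desired image position."""
    L = interaction_matrix(x, y, Z)
    err = np.array([x_des - x, y_des - y])
    # Least-squares (pseudoinverse) resolution of the 2-equation,
    # 6-unknown system; an RCM constraint would be imposed on this twist.
    return gain * np.linalg.pinv(L) @ err
```

Applying the returned twist moves the feature along the image error direction at a rate set by `gain`, the usual exponential-decay behaviour of IBVS.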
Affiliation(s)
- Yanwen Sun
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Bo Pan
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Yili Fu
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
- Guojun Niu
- Faculty of Mechanical Engineering & Automation, Zhejiang Sci-Tech University, Hangzhou, China
19
Adaptive Fusion-Based Autonomous Laparoscope Control for Semi-Autonomous Surgery. J Med Syst 2019; 44:4. [PMID: 31760504 DOI: 10.1007/s10916-019-1460-9]
Abstract
The purpose of this paper is to develop an autonomous laparoscope control algorithm based on an adaptive fusion kinematics method for semi-autonomous surgery, focusing on the problem of autonomous field-of-view control for a surgical robot system. A novel autonomous tracking algorithm is proposed. To achieve more robust tracking, an adaptive fusion kinematics method based on fuzzy logic is introduced; the method adaptively associates the kinematic information of the surgical robot system with the laparoscope image information. The proposed methods are implemented on the laparoscopic minimally invasive surgical robot system developed in our laboratory. Two experiments were carried out. The results indicate that accurate autonomous field-of-view control is achieved with the addition of laparoscope information, laparoscope motion frequency is reduced, and the methods avoid continuous laparoscope motion and ensure a stable field of view. The proposed methods improve the level of intelligence of the surgical robot system.
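The fusion idea, weighting the image-based estimate against the robot's kinematic estimate by a fuzzy confidence, can be sketched with a single linear membership function. This is a loose stand-in: the paper's fuzzy rule base is richer, and the residual scale here is an illustrative assumption, not a value from the paper.

```python
def fuse_estimates(kinematic_est, visual_est, visual_residual, residual_max=10.0):
    """Blend kinematic and image-based instrument-position estimates.

    A single triangular membership stands in for a fuzzy rule base:
    confidence in the visual estimate decays linearly with its tracking
    residual (residual_max is an illustrative scale).
    """
    w = max(0.0, 1.0 - visual_residual / residual_max)  # visual confidence in [0, 1]
    return w * visual_est + (1.0 - w) * kinematic_est
```

When the visual tracker is confident (small residual) the image estimate dominates; as tracking degrades the controller falls back smoothly on kinematics, which is what makes the fused field-of-view control stable.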
20
Sun Y, Pan B, Fu Y, Cao F. Development of a novel intelligent laparoscope system for semi-automatic minimally invasive surgery. Int J Med Robot 2019; 16:e2049. [PMID: 31677231 DOI: 10.1002/rcs.2049]
Abstract
BACKGROUND An intelligent surgical robot is of great significance for alleviating surgeon fatigue. In a minimally invasive surgical robot system, adding intelligent control to the laparoscope is both feasible and valuable. METHODS The depth-independent image Jacobian matrix was modified to make it suitable for the laparoscope trocar constraint. We propose a method for intelligent, autonomous adjustment of the surgeon's field of view that tracks the surgical instruments and predicts their motion trajectories. RESULTS Experimental results show that the proposed method tracks the surgical instruments and adjusts the surgical field of view autonomously. In case of occlusion, the motion trajectory of the surgical instruments can be predicted. CONCLUSION The intelligent laparoscope system raises the level of intelligence of the surgical robot system. By providing "a third hand" for the surgeon, it is a substantial improvement for semi-autonomous surgical robot systems.
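The occlusion-handling step, predicting the instrument trajectory when the tool disappears from view, can be illustrated with constant-velocity dead reckoning. This is a simplification for illustration; the paper's predictor is not specified at this level of detail.

```python
class InstrumentPredictor:
    """Constant-velocity dead-reckoning for an instrument tip in image
    coordinates: while the tip is visible, estimate its per-frame velocity;
    when occluded, extrapolate the last observed trajectory."""

    def __init__(self):
        self.pos = None
        self.vel = (0.0, 0.0)

    def update(self, observation):
        """observation: (u, v) pixel position of the tip, or None if occluded.
        Returns the current (possibly predicted) position."""
        if observation is None:
            if self.pos is not None:  # occluded: extrapolate along last velocity
                self.pos = (self.pos[0] + self.vel[0],
                            self.pos[1] + self.vel[1])
        else:
            if self.pos is not None:  # visible: refresh velocity estimate
                self.vel = (observation[0] - self.pos[0],
                            observation[1] - self.pos[1])
            self.pos = observation
        return self.pos
```

The predicted position can keep feeding the field-of-view controller across short occlusions, so the camera does not freeze or jump when the segmentation briefly fails.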
Affiliation(s)
- Yanwen Sun
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
21
Antico M, Sasazawa F, Wu L, Jaiprakash A, Roberts J, Crawford R, Pandey AK, Fontanarosa D. Ultrasound guidance in minimally invasive robotic procedures. Med Image Anal 2019; 54:149-167. [DOI: 10.1016/j.media.2019.01.002]
22
Abstract
Robotic platforms are taking their place in the operating room because they provide greater stability and accuracy during surgery. Although most of these platforms are teleoperated, much current research aims to design collaborative platforms. The objective is to reduce the surgeon's workload through the automation of secondary or auxiliary tasks, which would benefit both surgeons and patients by facilitating the surgery and reducing the operation time. One of the most important secondary tasks is endoscopic camera guidance, whose automation would allow the surgeon to concentrate on handling the surgical instruments. This paper proposes a novel autonomous camera guidance approach for laparoscopic surgery. It is based on learning from demonstration (LfD), which has proven able to transfer knowledge from humans to robots by means of multiple expert showings. The proposed approach has been validated on an experimental surgical robotic platform performing peg transfer, a typical task used to train human skills in laparoscopic surgery. The results show that camera guidance can easily be trained by a surgeon for a particular task and later reproduced autonomously in a manner similar to that of a human operator. The results therefore demonstrate that learning from demonstration is a suitable method for autonomous camera guidance in collaborative surgical robotic platforms.
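As a toy illustration of the LfD idea, mapping the instrument state seen in demonstrations to the camera pose the expert chose, a nearest-neighbour lookup suffices. The platform's actual learner would generalize far better than this; the state and pose encodings below are assumptions for illustration only.

```python
import numpy as np

def learn_camera_policy(demos):
    """demos: list of (instrument_state, camera_pose) pairs recorded while an
    expert guided the camera. Returns a nearest-neighbour policy: for a new
    instrument state, reuse the camera pose from the closest demonstration."""
    states = np.array([s for s, _ in demos], dtype=float)
    poses = np.array([p for _, p in demos], dtype=float)

    def policy(instrument_state):
        # Euclidean distance to every demonstrated state; pick the closest.
        d = np.linalg.norm(states - np.asarray(instrument_state, dtype=float),
                           axis=1)
        return poses[int(np.argmin(d))]

    return policy
```

A practical system would replace the lookup with a smooth regression model (e.g. a Gaussian mixture over demonstrations) so the camera moves continuously rather than snapping between demonstrated poses.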
23
Image Based High-Level Control System Design for Steering and Controlling of an Active Capsule Endoscope. J Intell Robot Syst 2018. [DOI: 10.1007/s10846-018-0956-8]
24
Marmol A, Peynot T, Eriksson A, Jaiprakash A, Roberts J, Crawford R. Evaluation of Keypoint Detectors and Descriptors in Arthroscopic Images for Feature-Based Matching Applications. IEEE Robot Autom Lett 2017. [DOI: 10.1109/lra.2017.2714150]
25
Wang Z, Lee SC, Zhong F, Navarro-Alarcon D, Liu YH, Deguet A, Kazanzides P, Taylor RH. Image-Based Trajectory Tracking Control of 4-DoF Laparoscopic Instruments Using a Rotation Distinguishing Marker. IEEE Robot Autom Lett 2017. [DOI: 10.1109/lra.2017.2676350]
26
The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017; 37:66-90. [DOI: 10.1016/j.media.2017.01.007]
27
Yeung BPM, Chiu PWY. Application of robotics in gastrointestinal endoscopy: A review. World J Gastroenterol 2016; 22:1811-1825. [PMID: 26855540 PMCID: PMC4724612 DOI: 10.3748/wjg.v22.i5.1811]
Abstract
Multiple robotic flexible endoscope platforms have been developed through cross-specialty collaboration between engineers and medical doctors. However, a significant number of these platforms have been developed for the natural orifice transluminal endoscopic surgery paradigm. An increasing amount of evidence suggests that the focus of development should instead be placed on advanced endolumenal procedures such as endoscopic submucosal dissection. A thorough literature analysis was performed to assess the current status of robotic flexible endoscopic platforms designed for advanced endolumenal procedures. Current efforts are mainly focused on robotic locomotion and robotic instrument control. In the future, advances in actuation and servoing technology, optical analysis, augmented reality and wireless power transmission will no doubt further advance the field of robotic endoscopy. Globally, health systems have become increasingly budget conscious; widespread acceptance of robotic endoscopy will depend on careful design to ensure the delivery of a cost-effective service.
28
Ellis RD, Munaco AJ, Reisner LA, Klein MD, Composto AM, Pandya AK, King BW. Task analysis of laparoscopic camera control schemes. Int J Med Robot 2015; 12:576-584. [PMID: 26648563 DOI: 10.1002/rcs.1716]
Abstract
BACKGROUND Minimally invasive surgeries rely on laparoscopic camera views to guide the procedure. Traditionally, an expert surgical assistant operates the camera. In some cases, a robotic system is used to help position the camera, but the surgeon is required to direct all movements of the system. Some prior research has focused on developing automated robotic camera control systems, but that work has been limited to rudimentary control schemes due to a lack of understanding of how the camera should be moved for different surgical tasks. METHODS This research used task analysis with a sample of eight expert surgeons to discover and document several salient methods of camera control and their related task contexts. RESULTS Desired camera placements and behaviours were established for two common surgical subtasks (suturing and knot tying). CONCLUSION The results can be used to develop better robotic control algorithms that will be more responsive to surgeons' needs. Copyright © 2015 John Wiley & Sons, Ltd.
Affiliation(s)
- R Darin Ellis
- Department of Industrial and Systems Engineering, Wayne State University, Detroit, MI, USA
- Anthony J Munaco
- Department of Pediatric Surgery, Children's Hospital of Michigan, Detroit, MI, USA
- Luke A Reisner
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI, USA
- Michael D Klein
- Department of Pediatric Surgery, Children's Hospital of Michigan, Detroit, MI, USA
- Anthony M Composto
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI, USA
- Abhilash K Pandya
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI, USA
- Brady W King
- Department of Pediatric Surgery, Children's Hospital of Michigan, Detroit, MI, USA
29
Yang L, Wang J, Ando T, Kubota A, Yamashita H, Sakuma I, Chiba T, Kobayashi E. Towards scene adaptive image correspondence for placental vasculature mosaic in computer assisted fetoscopic procedures. Int J Med Robot 2015; 12:375-86. [PMID: 26443691 DOI: 10.1002/rcs.1700]
Abstract
BACKGROUND Visualization of the vast placental vasculature is crucial in fetoscopic laser photocoagulation for twin-to-twin transfusion syndrome treatment. However, vasculature mosaic is challenging due to the fluctuating imaging conditions during fetoscopic surgery. METHOD A scene adaptive feature-based approach for image correspondence in free-hand endoscopic placental video is proposed. It contributes towards existing techniques by introducing a failure detection method based on statistical attributes of the feature distribution, and an updating mechanism that self-tunes parameters to recover from registration failures. RESULTS Validations on endoscopic image sequences of a phantom and a monkey placenta are carried out to demonstrate mismatch recovery. In two 100-frame sequences, automatic self-tuned results improved by 8% compared with manual experience-based tuning and a slight 2.5% deterioration against exhaustive tuning (gold standard). CONCLUSION This scene-adaptive image correspondence approach, which is not restricted to a set of generalized parameters, is suitable for applications associated with dynamically changing imaging conditions. Copyright © 2015 John Wiley & Sons, Ltd.
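The failure-detection idea, flagging a registration mismatch from statistical attributes of the feature distribution, can be sketched as a dispersion test on the displacement vectors of matched keypoints. The paper's actual statistics and self-tuning mechanism are not reproduced here; `max_spread` and `min_matches` are illustrative parameters.

```python
import numpy as np

def registration_failed(matches, max_spread=5.0, min_matches=8):
    """Flag a likely registration failure from match-displacement statistics.

    matches: (N, 4) array of [x1, y1, x2, y2] matched keypoint pairs between
    consecutive frames. Failure is declared when there are too few matches,
    or when the displacement field is too dispersed to represent a single
    smooth inter-frame motion.
    """
    matches = np.asarray(matches, dtype=float)
    if len(matches) < min_matches:
        return True
    disp = matches[:, 2:] - matches[:, :2]      # per-match displacement vectors
    spread = disp.std(axis=0).max()             # worst per-axis dispersion
    return bool(spread > max_spread)
```

In a self-tuning loop like the paper's, a detected failure would trigger re-running the matcher with relaxed detector/matcher parameters until the displacement statistics become consistent again.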
Affiliation(s)
- Liangjing Yang
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Junchen Wang
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Takehiro Ando
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Akihiro Kubota
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Hiromasa Yamashita
- Clinical Research Center, National Center for Child Health and Development, Tokyo, Japan
- Ichiro Sakuma
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
- Toshio Chiba
- Clinical Research Center, National Center for Child Health and Development, Tokyo, Japan
- Etsuko Kobayashi
- Graduate School of Engineering, The University of Tokyo, Tokyo, Japan
30

31
Azizian M, Najmaei N, Khoshnam M, Patel R. Visual servoing in medical robotics: a survey. Part II: tomographic imaging modalities--techniques and applications. Int J Med Robot 2014; 11:67-79. [PMID: 24623371 DOI: 10.1002/rcs.1575]
Abstract
BACKGROUND Intraoperative application of tomographic imaging techniques provides a means of visual servoing for objects beneath the surface of organs. METHODS The focus of this survey is on therapeutic and diagnostic medical applications where tomographic imaging is used in visual servoing. To this end, a comprehensive search of the electronic databases was completed for the period 2000-2013. RESULTS Existing techniques and products are categorized and studied, based on the imaging modality and their medical applications. This part complements Part I of the survey, which covers visual servoing techniques using endoscopic imaging and direct vision. CONCLUSION The main challenges in using visual servoing based on tomographic images have been identified. 'Supervised automation of medical robotics' is found to be a major trend in this field and ultrasound is the most commonly used tomographic modality for visual servoing.