1. Bronowicki K, Antoniuk-Majchrzak J, Malesza I, Możarowski W, Szymborska A, Pachuta B, Walenta T, Jasica W, Stanuch M, Skalski A, Raciborska A. An attempt to evaluate the use of mixed reality in surgically treated pediatric oncology patients. NPJ Digit Med 2025; 8:262. PMID: 40346298; PMCID: PMC12064705; DOI: 10.1038/s41746-025-01638-7.
Abstract
Mixed reality (MR) technology is increasingly used in surgical procedures, particularly in pediatric oncological surgery. The CarnaLife Holo system (MedApp S.A., Poland) converts medical imaging data into interactive 3D holograms for preoperative planning and intraoperative use. This study presents a preliminary evaluation of MR's impact on surgical procedure (SP) duration and hospitalization (H) time. A retrospective analysis of patients treated between 2023 and 2024 compared outcomes of surgeries performed with (n = 9) and without MR. Diagnoses included pulmonary metastases, sacrococcygeal tumor, clavicle tumor, aneurysmal bone cyst, soft tissue tumors, and femoral and chest wall tumors. SP duration in the MR group was generally comparable to conventional methods, with hospitalization times remaining within typical ranges. Although a slight increase in procedure time was observed in a few cases, MR did not significantly prolong SP or H. MR appears to be a promising tool in pediatric oncological surgery; further research on larger cohorts is warranted.
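For a sense of how such a small two-group duration comparison is typically tested, here is a minimal sketch using a nonparametric test. The durations are invented placeholders, not data from the study, and the abstract does not state which test the authors used.

```python
# Hypothetical sketch: comparing procedure durations between an MR-assisted
# group (n = 9) and a conventional group with a Mann-Whitney U test, a common
# choice for small samples. All numbers below are invented placeholders.
from scipy import stats

mr_minutes = [95, 120, 180, 75, 140, 110, 160, 85, 130]    # hypothetical MR group
conventional_minutes = [90, 115, 170, 80, 150, 100, 125]   # hypothetical controls

u_stat, p_value = stats.mannwhitneyu(mr_minutes, conventional_minutes,
                                     alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")
# A p-value above 0.05 would be consistent with the report that MR did not
# significantly prolong procedure time.
```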
Affiliation(s)
- Krzysztof Bronowicki: Department of Oncology and Surgical Oncology for Children and Youth, Institute of Mother and Child, Warsaw, Poland
- Justyna Antoniuk-Majchrzak: Department of Oncology and Surgical Oncology for Children and Youth, Institute of Mother and Child, Warsaw, Poland
- Iwona Malesza: Department of Oncology and Surgical Oncology for Children and Youth, Institute of Mother and Child, Warsaw, Poland
- Wiktor Możarowski: Department of Artificial Intelligence and Innovation Medical Technology, Institute of Mother and Child, Warsaw, Poland
- Agnieszka Szymborska: Department of Oncology and Surgical Oncology for Children and Youth, Institute of Mother and Child, Warsaw, Poland
- Bartosz Pachuta: Department of Oncology and Surgical Oncology for Children and Youth, Institute of Mother and Child, Warsaw, Poland
- Tomasz Walenta: Department of Oncology and Surgical Oncology for Children and Youth, Institute of Mother and Child, Warsaw, Poland
- Wojciech Jasica: Department of Oncology and Surgical Oncology for Children and Youth, Institute of Mother and Child, Warsaw, Poland
- Andrzej Skalski: MedApp S.A., Krakow, Poland; Department of Measurement and Electronics, AGH University of Krakow, Krakow, Poland
- Anna Raciborska: Department of Oncology and Surgical Oncology for Children and Youth, Institute of Mother and Child, Warsaw, Poland
2. Tortora M, Luppi A, Pacchiano F, Marisei M, Grassi F, Werner H, Kitamura FC, Tortora F, Caranci F, Ferraciolli SF. Current applications and future perspectives of extended reality in radiology. La Radiologia Medica 2025. PMID: 40153208; DOI: 10.1007/s11547-025-02001-2.
Abstract
Extended reality (XR) technologies, including virtual reality (VR), augmented reality (AR), and mixed reality (MR), hold transformative potential for radiology. This review examines the current applications, benefits, limitations, and future prospects of XR in radiology, with a focus on education, diagnostics, interventional procedures, and patient interaction. A comprehensive literature search of PubMed, Scopus, and Web of Science databases identified relevant publications from 1992 to 2024. Key studies were selected for detailed discussion. XR technologies enhance radiology education by offering immersive learning experiences that improve the proficiency and confidence of professionals. In diagnostics, XR improves the accuracy and efficiency of ultrasound and CT imaging and aids in precise patient positioning. For interventional radiology, XR provides valuable tools for training and real-time procedural planning, leading to better patient outcomes. Additionally, XR improves patient-doctor interactions, reducing anxiety and enhancing the consent process. Despite challenges such as high costs, technical limitations, and the need for extensive clinical validation, the potential benefits of XR underscore its value as a significant tool in radiology. Addressing these challenges will be essential for the widespread adoption and integration of XR in radiology, ensuring its potential benefits are fully realized. This review highlights the transformative impact of XR technologies on radiology, emphasizing the need for further research and development to harness their full capabilities and improve patient care.
Affiliation(s)
- Mario Tortora: Department of Advanced Biomedical Sciences, University "Federico II", Naples, Italy
- Andre Luppi: Department of Radiology, Massachusetts General Hospital, Boston, MA, USA; Pediatric Imaging Research Center and Cardiac Imaging Research Center, Massachusetts General Hospital, Boston, MA, USA
- Francesco Pacchiano: Department of Precision Medicine, University of Campania "L. Vanvitelli", Caserta, Italy
- Mariagrazia Marisei: Department of Advanced Biomedical Sciences, University "Federico II", Naples, Italy
- Francesca Grassi: Department of Precision Medicine, University of Campania "L. Vanvitelli", Caserta, Italy
- Heron Werner: Department of Fetal Medicine, Biodesign Laboratory DASA/PUC, Rio de Janeiro Pontifical Catholic University, Rio de Janeiro, Brazil
- Fabio Tortora: Department of Advanced Biomedical Sciences, University "Federico II", Naples, Italy
- Ferdinando Caranci: Department of Precision Medicine, University of Campania "L. Vanvitelli", Caserta, Italy
- Suely Fazio Ferraciolli: Department of Radiology, Massachusetts General Hospital, Boston, MA, USA; Pediatric Imaging Research Center and Cardiac Imaging Research Center, Massachusetts General Hospital, Boston, MA, USA
3. Doornbos MCJ, Peek JJ, Maat APWM, Ruurda JP, De Backer P, Cornelissen BMW, Mahtab EAF, Sadeghi AH, Kluin J. Augmented Reality Implementation in Minimally Invasive Surgery for Future Application in Pulmonary Surgery: A Systematic Review. Surg Innov 2024; 31:646-658. PMID: 39370802; PMCID: PMC11475712; DOI: 10.1177/15533506241290412.
Abstract
OBJECTIVE This systematic review investigates Augmented Reality (AR) systems used in minimally invasive surgery of deformable organs, focusing on initial registration, dynamic tracking, and visualization. The objective is to acquire a comprehensive understanding of the knowledge, applications, and challenges associated with current AR techniques, aiming to leverage these insights for developing a dedicated AR pulmonary Video- or Robotic-Assisted Thoracic Surgery (VATS/RATS) workflow. METHODS A systematic search was conducted within Embase, Medline (Ovid) and Web of Science on April 16, 2024, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The search focused on intraoperative AR applications and intraoperative navigational purposes for deformable organs. Quality assessment was performed, and studies were categorized according to initial registration and dynamic tracking methods. RESULTS 33 articles were included, of which one involved pulmonary surgery. Studies used both manual and (semi-)automatic registration methods, established through anatomical landmark-based, fiducial-based, or surface-based techniques. Diverse outcome measures were considered, including surgical outcomes and registration accuracy. The majority of studies that reached a registration accuracy below 5 mm applied surface-based registration. CONCLUSIONS AR can potentially aid surgeons with real-time navigation and decision-making during anatomically complex minimally invasive procedures. Future research for pulmonary applications should focus on exploring surface-based registration methods, given their non-invasive, marker-less nature and promising accuracy. Additionally, vascular-labeling-based methods are worth exploring, given the importance and relative stability of broncho-vascular anatomy in pulmonary VATS/RATS. Assessing the clinical feasibility of these approaches is crucial, particularly concerning registration accuracy and potential impact on surgical outcomes.
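As an illustration of the surface-based registration the review singles out, here is a minimal sketch using Open3D's ICP. The file names, correspondence threshold, and point-to-plane choice are assumptions for illustration, not details from any reviewed study.

```python
import numpy as np
import open3d as o3d

# Preoperative model surface and intraoperative surface scan (placeholder files)
preop = o3d.io.read_point_cloud("preop_model_surface.ply")
intraop = o3d.io.read_point_cloud("intraop_surface_scan.ply")
preop.estimate_normals()  # point-to-plane ICP needs normals on the target

threshold = 5.0  # max correspondence distance (mm), echoing the 5 mm benchmark
init = np.eye(4)  # coarse initial alignment, e.g. from anatomical landmarks
result = o3d.pipelines.registration.registration_icp(
    intraop, preop, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
print("fitness:", result.fitness, "inlier RMSE (mm):", result.inlier_rmse)
```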
Affiliation(s)
- Marie-Claire J. Doornbos: Department of Cardiothoracic Surgery, Thoraxcenter, Erasmus MC, Rotterdam, The Netherlands; Educational Program Technical Medicine, Leiden University Medical Center, Delft University of Technology & Erasmus University Medical Center Rotterdam, Leiden, The Netherlands
- Jette J. Peek: Department of Cardiothoracic Surgery, Thoraxcenter, Erasmus MC, Rotterdam, The Netherlands
- Jelle P. Ruurda: Department of Surgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Edris A. F. Mahtab: Department of Cardiothoracic Surgery, Thoraxcenter, Erasmus MC, Rotterdam, The Netherlands; Department of Cardiothoracic Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Amir H. Sadeghi: Department of Cardiothoracic Surgery, Thoraxcenter, Erasmus MC, Rotterdam, The Netherlands; Department of Cardiothoracic Surgery, University Medical Center Utrecht, The Netherlands
- Jolanda Kluin: Department of Cardiothoracic Surgery, Thoraxcenter, Erasmus MC, Rotterdam, The Netherlands
4. Shukla A, Chaudhary R, Nayyar N. Role of artificial intelligence in gastrointestinal surgery. Artif Intell Cancer 2024; 5. DOI: 10.35713/aic.v5.i2.97317.
Abstract
Artificial intelligence is evolving rapidly, and its application in the medical field is growing day by day. Artificial intelligence is also valuable in gastrointestinal diseases: calculating various scoring systems, evaluating radiological images, assisting preoperatively and intraoperatively, processing pathological slides, prognosticating, and assessing treatment response. The field has a promising future and can influence many management algorithms. In this minireview, we aimed to outline the basics of artificial intelligence, the role it may play in gastrointestinal surgeries and malignancies, and its limitations.
Affiliation(s)
- Ankit Shukla: Department of Surgery, Dr Rajendra Prasad Government Medical College, Kangra 176001, Himachal Pradesh, India
- Rajesh Chaudhary: Department of Renal Transplantation, Dr Rajendra Prasad Government Medical College, Kangra 176001, India
- Nishant Nayyar: Department of Radiology, Dr Rajendra Prasad Government Medical College, Kangra 176001, Himachal Pradesh, India
5. Ryan ML, Knod JL, Pandya SR. Creation of Three-dimensional Anatomic Models in Pediatric Surgical Patients Using Cross-sectional Imaging: A Demonstration of Low-cost Methods and Applications. J Pediatr Surg 2024; 59:426-431. PMID: 37981543; DOI: 10.1016/j.jpedsurg.2023.10.053.
Abstract
BACKGROUND Pediatric surgery patients often present with complex congenital anomalies or other conditions requiring a deep understanding of their intricate anatomy. Commercial applications and services exist for the conversion of cross-sectional imaging data into three-dimensional (3D) models for education and preoperative planning. However, the associated costs and lack of familiarity may discourage their use in centers with limited resources. The purpose of this report is to present a low-cost, reproducible method for generating 3D images to visualize patient anatomy. METHODS De-identified DICOM files were obtained from the hospital PACS system in preparation for assorted pediatric surgical procedures. Using open-source visualization software, variations in anatomic structures were examined using volume rendering and segmentation techniques. Images were further refined using available editing tools or artificial intelligence-assisted software extensions. RESULTS Using the described techniques, we were able to obtain excellent visualization of the desired structures and associated anatomic variations. Once structures were selected and modeled in 3D (segmentation), they could be exported in one of several 3D object file formats. These could then be retained for 3D printing, visualization in virtual reality, or as an anatomic reference during the perioperative period. Models may also be imported into commercial gaming engines for rendering under optimal lighting conditions and with enhanced detail. CONCLUSION Pediatric surgeons are frequently tasked with the treatment of patients with complex and rare anomalies. Visualization and preoperative planning can be assisted by advanced imaging software at minimal to no cost, thereby facilitating enhanced understanding of these conditions in resource-limited environments. LEVEL OF EVIDENCE V, Case Series, Description of Technique.
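A minimal sketch of this kind of open-source pipeline (CT series in, STL mesh out) follows. SimpleITK, scikit-image, and trimesh are assumed stand-ins for the unnamed tools, and the threshold and paths are placeholders.

```python
# Sketch: de-identified DICOM series -> thresholded surface -> exportable STL.
import SimpleITK as sitk
from skimage import measure
import trimesh

reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames("ct_dicom_dir"))  # placeholder dir
image = reader.Execute()
volume = sitk.GetArrayFromImage(image)  # (z, y, x) voxel array in HU

# Simple threshold segmentation, e.g. ~200 HU for contrast-filled vessels/bone
verts, faces, _, _ = measure.marching_cubes(volume, level=200,
                                            spacing=image.GetSpacing()[::-1])
trimesh.Trimesh(vertices=verts, faces=faces).export("model.stl")  # print or view in VR
```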
Affiliation(s)
- Mark L Ryan: Division of Pediatric Surgery, Department of Surgery, Children's Medical Center Dallas/University of Texas Southwestern Medical Center, Dallas, TX, USA
- Jennifer Leslie Knod: Department of Surgery and Pediatrics, Connecticut Children's Medical Center, University of Connecticut School of Medicine, Hartford, CT, USA
- Samir R Pandya: Division of Pediatric Surgery, Department of Surgery, Children's Medical Center Dallas/University of Texas Southwestern Medical Center, Dallas, TX, USA
6. Taleb A, Guigou C, Leclerc S, Lalande A, Bozorg Grayeli A. Image-to-Patient Registration in Computer-Assisted Surgery of Head and Neck: State-of-the-Art, Perspectives, and Challenges. J Clin Med 2023; 12:5398. PMID: 37629441; PMCID: PMC10455300; DOI: 10.3390/jcm12165398.
Abstract
Today, image-guided systems play a significant role in improving the outcome of diagnostic and therapeutic interventions. They provide crucial anatomical information during the procedure to decrease the size and extent of the approach, to reduce intraoperative complications, and to increase accuracy, repeatability, and safety. Image-to-patient registration is the first step in image-guided procedures. It establishes a correspondence between the patient's preoperative imaging and the intraoperative data. When it comes to the head-and-neck region, the presence of many sensitive structures, such as the central nervous system and the neurosensory organs, requires millimetric precision. This review evaluates the characteristics and performance of the different registration methods used in the operating room for the head-and-neck region, from the perspectives of accuracy, invasiveness, and processing time. Our work led to the conclusion that invasive marker-based methods are still considered the gold standard of image-to-patient registration. Surface-based methods are recommended for faster procedures and are applied on surface tissues, especially around the eyes. In the near future, computer vision technology is expected to enhance these systems by reducing human error and cognitive load in the operating room.
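For reference, the paired-point (marker-based) registration the review calls the gold standard reduces to a least-squares rigid fit between corresponding fiducials. A minimal SVD-based (Kabsch) sketch with made-up coordinates:

```python
import numpy as np

def rigid_register(src, dst):
    """Return R (3x3) and t (3,) mapping src fiducials onto dst fiducials."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Invented fiducials: image space vs. tracker space (90-degree z-rotation + shift)
image_fids = np.array([[10., 0, 0], [0, 10, 0], [0, 0, 10], [10, 10, 0]])
tracker_fids = image_fids @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.]]).T + [5, 2, -3]

R, t = rigid_register(image_fids, tracker_fids)
fre = np.linalg.norm(image_fids @ R.T + t - tracker_fids, axis=1).mean()
print("fiducial registration error (mm):", round(fre, 6))  # ~0 for exact data
```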
Affiliation(s)
- Ali Taleb: Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Caroline Guigou: Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France; Otolaryngology Department, University Hospital of Dijon, 21000 Dijon, France
- Sarah Leclerc: Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France
- Alain Lalande: Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France; Medical Imaging Department, University Hospital of Dijon, 21000 Dijon, France
- Alexis Bozorg Grayeli: Team IFTIM, Institute of Molecular Chemistry of University of Burgundy (ICMUB UMR CNRS 6302), Univ. Bourgogne Franche-Comté, 21000 Dijon, France; Otolaryngology Department, University Hospital of Dijon, 21000 Dijon, France
7. Chiou SY, Liu LS, Lee CW, Kim DH, Al-masni MA, Liu HL, Wei KC, Yan JL, Chen PY. Augmented Reality Surgical Navigation System Integrated with Deep Learning. Bioengineering (Basel) 2023; 10:617. PMID: 37237687; PMCID: PMC10215407; DOI: 10.3390/bioengineering10050617.
Abstract
Most current surgical navigation methods rely on optical navigators with images displayed on an external screen. However, minimizing distractions during surgery is critical, and the spatial information displayed in this arrangement is non-intuitive. Previous studies have proposed combining optical navigation systems with augmented reality (AR) to provide surgeons with intuitive imaging during surgery, through the use of planar and three-dimensional imagery. However, these studies have mainly focused on visual aids and have paid relatively little attention to real surgical guidance aids. Moreover, the use of augmented reality reduces system stability and accuracy, and optical navigation systems are costly. Therefore, this paper proposes an augmented reality surgical navigation system based on image positioning that achieves the desired advantages of low cost, high stability, and high accuracy. This system also provides intuitive guidance for the surgical target point, entry point, and trajectory. Once the surgeon uses the navigation stick to indicate the position of the surgical entry point, the connection between the surgical target and the surgical entry point is immediately displayed on the AR device (tablet or HoloLens glasses), and a dynamic auxiliary line is shown to assist with incision angle and depth. Clinical trials were conducted for external ventricular drain (EVD) surgery, and surgeons confirmed the system's overall benefit. A "virtual object automatic scanning" method is proposed to achieve a high accuracy of 1 ± 0.1 mm for the AR-based system. Furthermore, a deep learning-based U-Net segmentation network is incorporated to enable automatic identification of the hydrocephalus location by the system. The system achieves improved recognition accuracy, sensitivity, and specificity of 99.93%, 93.85%, and 95.73%, respectively, representing a significant improvement over previous studies.
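The guidance geometry described above (target point, entry point, and a dynamic auxiliary line for angle and depth) can be sketched in a few lines; the coordinates below are placeholders, not values from the system.

```python
import numpy as np

target = np.array([12.0, -40.0, 85.0])     # surgical target in tracker frame (mm)
entry = np.array([10.0, -35.0, 20.0])      # entry point marked with the navigation stick
tool_dir = np.array([0.05, -0.08, 0.99])   # current scalpel axis from tracking

planned = target - entry
depth_mm = np.linalg.norm(planned)         # required insertion depth along the line
planned_unit = planned / depth_mm
tool_unit = tool_dir / np.linalg.norm(tool_dir)

# Angular deviation between the planned trajectory and the current tool axis
angle_deg = np.degrees(np.arccos(np.clip(planned_unit @ tool_unit, -1.0, 1.0)))
print(f"insertion depth: {depth_mm:.1f} mm, angular deviation: {angle_deg:.1f} deg")
```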
Affiliation(s)
- Shin-Yan Chiou: Department of Electrical Engineering, College of Engineering, Chang Gung University, Kwei-Shan, Taoyuan 333, Taiwan; Department of Nuclear Medicine, Linkou Chang Gung Memorial Hospital, Taoyuan 333, Taiwan; Department of Neurosurgery, Keelung Chang Gung Memorial Hospital, Keelung 204, Taiwan
- Li-Sheng Liu: Department of Electrical Engineering, College of Engineering, Chang Gung University, Kwei-Shan, Taoyuan 333, Taiwan; Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seodaemun-gu, Seoul 03722, Republic of Korea
- Chia-Wei Lee: Department of Electrical Engineering, College of Engineering, Chang Gung University, Kwei-Shan, Taoyuan 333, Taiwan
- Dong-Hyun Kim: Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seodaemun-gu, Seoul 03722, Republic of Korea
- Mohammed A. Al-masni: Department of Artificial Intelligence, College of Software & Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Hao-Li Liu: Department of Electrical Engineering, National Taiwan University, Taipei 106, Taiwan
- Kuo-Chen Wei: New Taipei City Tucheng Hospital, Tao-Yuan, Tucheng, New Taipei City 236, Taiwan
- Jiun-Lin Yan: Department of Neurosurgery, Keelung Chang Gung Memorial Hospital, Keelung 204, Taiwan
- Pin-Yuan Chen: Department of Neurosurgery, Keelung Chang Gung Memorial Hospital, Keelung 204, Taiwan
8. Hayashi Y, Misawa K, Mori K. Database-driven patient-specific registration error compensation method for image-guided laparoscopic surgery. Int J Comput Assist Radiol Surg 2023; 18:63-69. PMID: 36534226; DOI: 10.1007/s11548-022-02804-y.
Abstract
PURPOSE A surgical navigation system helps surgeons understand anatomical structures in the operative field during surgery. Patient-to-image registration, which aligns coordinate systems between the CT volume and a positional tracker, is vital for accurate surgical navigation. Although a point-based rigid registration method using fiducials on the body surface is often utilized for laparoscopic surgery navigation, precise registration is difficult due to factors such as soft tissue deformation. We propose a method that compensates the transformation matrix computed from fiducials on the body surface, based on the analysis of positional information in a database. METHODS We built our database by measuring the positional information of the fiducials and the guidance targets in both the CT volume and positional tracker coordinate systems across previous surgeries. We computed two transformation matrices: one using only the fiducials and one using only the guidance targets, for every case in the database. We then calculated the difference between the two transformation matrices for each case. The compensation transformation matrix was computed by averaging these difference matrices. In this step, we selected cases from the database based on the similarity of the fiducials and the configuration of the guidance targets. RESULTS We evaluated our proposed method using 20 datasets acquired during laparoscopic gastrectomy for gastric cancer. The locations of blood vessels were used as guidance targets for computing the target registration error. The mean target registration error decreased significantly, from 33.0 mm before compensation to 17.1 mm after. CONCLUSION This paper describes a registration error compensation method using a database for image-guided laparoscopic surgery. Since the proposed method reduces registration error without additional intraoperative measurements, it increases the accuracy of surgical navigation for laparoscopic surgery.
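A minimal sketch of the compensation idea as the abstract describes it: average the difference between the fiducial-based and target-based transforms over selected database cases, then apply that mean correction to a new patient's fiducial-based registration. The quaternion-mean rotation averaging is one plausible implementation choice, not necessarily the authors' exact one.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def mean_difference_transform(T_fid_list, T_tgt_list):
    """Average of T_tgt @ inv(T_fid) over selected database cases (4x4 poses)."""
    diffs = [T_tgt @ np.linalg.inv(T_fid)
             for T_fid, T_tgt in zip(T_fid_list, T_tgt_list)]
    rot_mean = R.from_matrix([D[:3, :3] for D in diffs]).mean().as_matrix()
    trans_mean = np.mean([D[:3, 3] for D in diffs], axis=0)
    T_comp = np.eye(4)
    T_comp[:3, :3], T_comp[:3, 3] = rot_mean, trans_mean
    return T_comp

# Usage on a new patient (all matrices hypothetical):
# T_corrected = mean_difference_transform(db_fid, db_tgt) @ T_fiducial_new
```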
Affiliation(s)
- Yuichiro Hayashi: Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
- Kazunari Misawa: Department of Gastroenterological Surgery, Aichi Cancer Center Hospital, 1-1 Kanokoden, Chikusa-ku, Nagoya 464-8681, Japan
- Kensaku Mori: Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan; Research Center for Medical Bigdata, National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo 101-8430, Japan
9. Chandelon K, Sharifian R, Marchand S, Khaddad A, Bourdel N, Mottet N, Bernhard JC, Bartoli A. Kidney tracking for live augmented reality in stereoscopic mini-invasive partial nephrectomy. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022. DOI: 10.1080/21681163.2022.2157750.
Affiliation(s)
- Kilian Chandelon: Institut Pascal, Clermont-Ferrand University Hospital, Clermont-Ferrand, France; SurgAR - Surgical Augmented Reality, Clermont-Ferrand, France
- Rasoul Sharifian: Institut Pascal, Clermont-Ferrand University Hospital, Clermont-Ferrand, France
- Salomé Marchand: Department of Urology, Hôpital Nord, Saint-Etienne University Hospital, Saint-Etienne, France
- Abderrahmane Khaddad: Department of Urology, Hôpital Pellegrin, Bordeaux University Hospital, Bordeaux, France
- Nicolas Bourdel: Institut Pascal, Clermont-Ferrand University Hospital, Clermont-Ferrand, France; SurgAR - Surgical Augmented Reality, Clermont-Ferrand, France; Department of Obstetrics and Gynecology, Clermont-Ferrand University Hospital, Clermont-Ferrand, France
- Nicolas Mottet: Department of Urology, Hôpital Nord, Saint-Etienne University Hospital, Saint-Etienne, France
- Adrien Bartoli: Institut Pascal, Clermont-Ferrand University Hospital, Clermont-Ferrand, France; SurgAR - Surgical Augmented Reality, Clermont-Ferrand, France; Department of Clinical Research and Innovation, Clermont-Ferrand University Hospital, Clermont-Ferrand, France
10. Chiou SY, Zhang ZY, Liu HL, Yan JL, Wei KC, Chen PY. Augmented Reality Surgical Navigation System for External Ventricular Drain. Healthcare (Basel) 2022; 10:1815. PMID: 36292263; PMCID: PMC9601392; DOI: 10.3390/healthcare10101815.
Abstract
Augmented reality surgery systems are playing an increasing role in the operating room, but applying such systems to neurosurgery presents particular challenges. In addition to using augmented reality technology to display the surgical target in 3D in real time, the application must also display the scalpel entry point and scalpel orientation, with accurate superposition on the patient. To improve the intuitiveness, efficiency, and accuracy of external ventricular drain (EVD) surgery, this paper proposes an augmented reality surgical navigation system which accurately superimposes the surgical target, scalpel entry point, and scalpel direction on a patient's head and displays these data on a tablet. The accuracy of the optical measurement system (NDI Polaris Vicra) was first independently tested, and then complemented by the design of functions to help the surgeon quickly identify the surgical target and determine the preferred entry point. A tablet PC was used to display the superimposed images of the surgical target, entry point, and scalpel on top of the patient, allowing for correct scalpel orientation. The patient's computed tomography data, in Digital Imaging and Communications in Medicine (DICOM) format, were used to create a phantom and its associated AR model. This model was then imported into the application, which was executed on the tablet. In the preoperative phase, the technician first spent 5–7 min superimposing the virtual image of the head and the scalpel. The surgeon then took 2 min to identify the intended target and entry point on the tablet, which then dynamically displayed the superimposed image of the head, target, entry point, and scalpel (including the scalpel tip and orientation). Multiple experiments were successfully conducted on the phantom, along with six practical trials of clinical neurosurgical EVD. In the 2D-plane-superposition model, the optical measurement system (NDI Polaris Vicra) provided highly accurate visualization (2.01 ± 1.12 mm). In hospital-based clinical trials, the average technician preparation time was 6 min, while the surgeon required an average of 3.5 min to set the target and entry-point positions and accurately overlay the orientation with an NDI surgical stick. In the preparation phase, the average time required for the DICOM-formatted image processing and program import was 120 ± 30 min. The accuracy of the designed augmented reality optical surgical navigation system met clinical requirements and can provide a visual and intuitive guide for neurosurgeons. The surgeon can use the tablet application to obtain real-time DICOM-formatted images of the patient, change the position of the surgical entry point, and instantly obtain an updated surgical path and surgical angle. The proposed design can be used as the basis for various augmented reality brain surgery navigation systems in the future.
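The overlay step implied above, drawing the tracked target and entry point on the tablet image, comes down to a pinhole projection; the intrinsics and tracker-to-camera pose below are hypothetical calibration results, not values from the paper.

```python
import numpy as np

K = np.array([[1400.0, 0, 960], [0, 1400, 540], [0, 0, 1]])  # camera intrinsics (px)
R_tc = np.eye(3)                      # tracker -> camera rotation (from calibration)
t_tc = np.array([0.0, 0.0, 400.0])    # tracker -> camera translation (mm)

def project(point_tracker_mm):
    """Project a tracked 3D point (mm) to pixel coordinates on the tablet image."""
    p_cam = R_tc @ point_tracker_mm + t_tc
    u, v, w = K @ p_cam
    return np.array([u / w, v / w])

target_px = project(np.array([12.0, -40.0, 85.0]))   # placeholder target
entry_px = project(np.array([10.0, -35.0, 20.0]))    # placeholder entry point
print("draw guidance line from", entry_px.round(1), "to", target_px.round(1))
```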
Affiliation(s)
- Shin-Yan Chiou: Department of Electrical Engineering, College of Engineering, Chang Gung University, Kwei-Shan, Taoyuan 333, Taiwan; Department of Nuclear Medicine, Linkou Chang Gung Memorial Hospital, Taoyuan 333, Taiwan; Department of Neurosurgery, Keelung Chang Gung Memorial Hospital, Keelung 204, Taiwan
- Zhi-Yue Zhang: Department of Electrical Engineering, College of Engineering, Chang Gung University, Kwei-Shan, Taoyuan 333, Taiwan
- Hao-Li Liu: Department of Electrical Engineering, National Taiwan University, Taipei 106, Taiwan
- Jiun-Lin Yan: Department of Neurosurgery, Keelung Chang Gung Memorial Hospital, Keelung 204, Taiwan
- Kuo-Chen Wei: Department of Neurosurgery, New Taipei City TuCheng Hospital, New Taipei City 236, Taiwan
- Pin-Yuan Chen: Department of Neurosurgery, Keelung Chang Gung Memorial Hospital, Keelung 204, Taiwan; School of Medicine, Chang Gung University, Kwei-Shan, Taoyuan 333, Taiwan; Correspondence: Tel. +886-2-2431-3131
11. Preliminary study for developing a navigation system for gastric cancer surgery using artificial intelligence. Surg Today 2022; 52:1753-1758. PMID: 35511359; DOI: 10.1007/s00595-022-02508-5.
Abstract
PURPOSE We are attempting to develop a navigation system for safe and effective peripancreatic lymphadenectomy in gastric cancer surgery. As a preliminary study, we examined whether the peripancreatic dissection line could be learned by a machine learning model (MLM). METHODS Among the 41 patients with gastric cancer who underwent radical gastrectomy between April 2019 and January 2020, we selected 6 in whom the pancreatic contour was relatively easy to trace. The pancreatic contour was annotated by a trainer surgeon in 1242 images captured from the video recordings. The MLM was trained using the annotated images from five of the six patients. The pancreatic contour was then segmented by the trained MLM using images from the remaining patient. The same procedure was repeated for all six combinations. RESULTS The median maximum intersection over union of each image was 0.708, which was higher than the threshold (0.5). However, the pancreatic contour was misidentified in parts where fatty tissue or thin vessels overlaid the pancreas in some cases. CONCLUSION The contour of the pancreas could be traced relatively well by the trained MLM. Further investigation and training of the system are needed to develop a practical navigation system.
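A minimal sketch of the evaluation metric used here, intersection over union (IoU) against the 0.5 threshold, with toy masks standing in for the model output and the surgeon's annotation:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two boolean segmentation masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 0.0

pred = np.zeros((64, 64), bool);  pred[10:40, 10:40] = True   # toy prediction
truth = np.zeros((64, 64), bool); truth[15:45, 15:45] = True  # toy annotation
score = iou(pred, truth)
print(f"IoU = {score:.3f}, passes 0.5 threshold: {score > 0.5}")
```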
12. Ghosh NK, Kumar A. Colorectal cancer: Artificial intelligence and its role in surgical decision making. Artif Intell Gastroenterol 2022; 3:36-45. DOI: 10.35712/aig.v3.i2.36.
13. Privitera L, Paraboschi I, Dixit D, Arthurs OJ, Giuliani S. Image-guided surgery and novel intraoperative devices for enhanced visualisation in general and paediatric surgery: a review. Innov Surg Sci 2021; 6:161-172. PMID: 35937852; PMCID: PMC9294338; DOI: 10.1515/iss-2021-0028.
Abstract
Fluorescence-guided surgery, augmented reality, and intraoperative imaging devices are rapidly pervading the field of surgical interventions, equipping the surgeon with powerful tools capable of enhancing the visualisation of normal and pathological anatomical structures. In the adult population, there is a wide range of possibilities for using these novel technologies and devices to guide surgical procedures and minimally invasive surgeries. Their applications have also been growing in the field of paediatric surgery, where detailed visualisation of small anatomical structures could reduce procedure time, minimise surgical complications, and ultimately improve the outcome of surgery. This review aims to illustrate the mechanisms underlying these innovations and their main applications in the clinical setting.
Affiliation(s)
- Laura Privitera: Wellcome/EPSRC Centre for Interventional & Surgical Sciences, London, UK; Developmental Biology and Cancer Programme, UCL Great Ormond Street Institute of Child Health, London, UK
- Irene Paraboschi: Wellcome/EPSRC Centre for Interventional & Surgical Sciences, London, UK; Developmental Biology and Cancer Programme, UCL Great Ormond Street Institute of Child Health, London, UK
- Divyansh Dixit: Faculty of Medicine, University of Southampton, Southampton, UK
- Owen J Arthurs: Department of Clinical Radiology, NHS Foundation Trust, Great Ormond Street Hospital for Children, London, UK; NIHR GOSH Biomedical Research Centre, NHS Foundation Trust, UCL Great Ormond Street Institute of Child Health, London, UK
- Stefano Giuliani: Wellcome/EPSRC Centre for Interventional & Surgical Sciences, London, UK; Developmental Biology and Cancer Programme, UCL Great Ormond Street Institute of Child Health, London, UK; Department of Specialist Neonatal and Paediatric Surgery, NHS Foundation Trust, Great Ormond Street Hospital for Children, London, UK
14. Adballah M, Espinel Y, Calvet L, Pereira B, Le Roy B, Bartoli A, Buc E. Augmented reality in laparoscopic liver resection evaluated on an ex-vivo animal model with pseudo-tumours. Surg Endosc 2021; 36:833-843. PMID: 34734305; DOI: 10.1007/s00464-021-08798-z.
Abstract
BACKGROUND The aim of this study was to assess the performance of our augmented reality (AR) software (Hepataug) during laparoscopic resection of liver tumours and compare it to standard ultrasonography (US). MATERIALS AND METHODS Ninety pseudo-tumours ranging from 10 to 20 mm were created in sheep cadaveric livers by injection of alginate. CT scans were then performed and 3D models reconstructed using a medical image segmentation software (MITK). The livers were placed in a pelvi-trainer on an inclined plane, approximately perpendicular to the laparoscope. The aim was to obtain free resection margins, as close as possible to 1 cm. Laparoscopic resection was performed using US alone (n = 30, US group), AR alone (n = 30, AR group), and both US and AR (n = 30, ARUS group). R0 resection, maximal margins, minimal margins and mean margins were assessed after histopathologic examination, adjusted for tumour depth and for a zone-wise liver difficulty level. RESULTS The minimal margins did not differ between the three groups (8.8, 8.0 and 6.9 mm in the US, AR and ARUS groups, respectively). The maximal margins were larger in the US group than in the AR and ARUS groups after adjustment for depth and zone difficulty (21 vs. 18 mm, p = 0.001 and 21 vs. 19.5 mm, p = 0.037, respectively). The mean margins, which reflect the variability of the measurements, were larger in the US group than in the ARUS group after adjustment for depth and zone difficulty (15.2 vs. 12.8 mm, p < 0.001). When considering only the most difficult zone (difficulty 3), there were more R1/R2 resections in the US group than in the AR + ARUS groups (50% vs. 21%, p = 0.019). CONCLUSION Laparoscopic liver resection using AR seems to provide more accurate resection margins with less variability than the gold-standard US navigation, particularly in difficult-to-access liver zones with deep tumours.
Affiliation(s)
- Mourad Adballah: Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000 Clermont-Ferrand, France; Department of Digestive and Hepatobiliary Surgery, University Hospital Clermont-Ferrand, 1 Place Lucie et Raymond Aubrac, 63003 Clermont-Ferrand Cedex, France
- Yamid Espinel: Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000 Clermont-Ferrand, France
- Lilian Calvet: Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000 Clermont-Ferrand, France; Biostatistics Department (DRCI), University Hospital Clermont-Ferrand, 63000 Clermont-Ferrand, France
- Bruno Pereira: Biostatistics Department (DRCI), University Hospital Clermont-Ferrand, 63000 Clermont-Ferrand, France
- Bertrand Le Roy: Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000 Clermont-Ferrand, France; Department of Digestive and Oncologic Surgery, University Hospital Nord St-Etienne, Avenue Albert Raimond, 42270 Saint-Priest en Jarez, France
- Adrien Bartoli: Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000 Clermont-Ferrand, France; Biostatistics Department (DRCI), University Hospital Clermont-Ferrand, 63000 Clermont-Ferrand, France
- Emmanuel Buc: Institut Pascal, UMR6602, Endoscopy and Computer Vision Group, Faculté de Médecine, Bâtiment 3C, 28 place Henri Dunant, 63000 Clermont-Ferrand, France; Department of Digestive and Hepatobiliary Surgery, University Hospital Clermont-Ferrand, 1 Place Lucie et Raymond Aubrac, 63003 Clermont-Ferrand Cedex, France
15. Solanki SL, Pandrowala S, Nayak A, Bhandare M, Ambulkar RP, Shrikhande SV. Artificial intelligence in perioperative management of major gastrointestinal surgeries. World J Gastroenterol 2021; 27:2758-2770. PMID: 34135552; PMCID: PMC8173379; DOI: 10.3748/wjg.v27.i21.2758.
Abstract
Artificial intelligence (AI), the intelligence demonstrated by machines, is built on algorithms and learning methods such as reinforcement learning. The purpose of this review was to summarize the concepts, scope, applications, and limitations of AI in major gastrointestinal surgery. This is a narrative review of the available literature on the key capabilities of AI, intended to help anesthesiologists, surgeons, and other physicians understand and critically evaluate ongoing and new AI applications in perioperative management. AI uses large databases ("big data") to formulate algorithms. Analysis of other data based on these algorithms can help in early diagnosis, accurate risk assessment, intraoperative management, automated drug delivery, and prediction of anesthetic and surgical complications and postoperative outcomes, and can thus lead to effective perioperative management as well as reduced treatment costs. Perioperative physicians, anesthesiologists, and surgeons are well positioned to help integrate AI into modern surgical practice. We all need to partner and collaborate with data scientists to collect and analyze data across all phases of perioperative care to provide clinical scenarios and context. Careful implementation and use of AI, along with real-time human interpretation, will revolutionize perioperative care, and is the way forward for the future perioperative management of major surgery.
Affiliation(s)
- Sohan Lal Solanki: Department of Anesthesiology, Critical Care and Pain, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai 400012, Maharashtra, India
- Saneya Pandrowala: Gastro-Intestinal Services, Department of Surgical Oncology, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai 400012, Maharashtra, India
- Abhirup Nayak: Department of Anesthesiology, Critical Care and Pain, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai 400012, Maharashtra, India
- Manish Bhandare: Gastro-Intestinal Services, Department of Surgical Oncology, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai 400012, Maharashtra, India
- Reshma P Ambulkar: Department of Anesthesiology, Critical Care and Pain, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai 400012, Maharashtra, India
- Shailesh V Shrikhande: Gastro-Intestinal Services, Department of Surgical Oncology, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai 400012, Maharashtra, India
16. Hartwig R, Ostler D, Feußner H, Berlet M, Yu K, Rosenthal JC, Wilhelm D. COMPASS: localization in laparoscopic visceral surgery. Current Directions in Biomedical Engineering 2020. DOI: 10.1515/cdbme-2020-0013.
Abstract
Tracking of surgical instruments is an essential step towards modernizing the surgical workflow with a comprehensive surgical landscape guidance system (COMPASS). Real-time tracking of the laparoscopic camera used in minimally invasive surgery is required for applications in surgical workflow documentation, machine learning, image localization, and intraoperative visualization. In our approach, an inertial measurement unit (IMU) assists tool tracking in situations when no line-of-sight is available for infrared (IR)-based tracking of the laparoscopic camera. The novelty of this approach lies in a localization method adjusted for laparoscopic visceral surgery, particularly when the line-of-sight is lost. It is based on IMU tracking and on the position of the trocar entry point, which acts as the remote center of motion (RCM) and reduces the degrees of freedom. We developed a method to tackle localization and a real-time tool for position and orientation estimation. The main error sources are identified and evaluated in a test scenario, which reveals that for small changes in penetration length (e.g., pivoting), the IMU's accuracy determines the error.
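A minimal sketch of the RCM-based localization idea: with the trocar entry point fixed, an IMU orientation plus the insertion length determines the camera tip position. The frames, quaternion, and lengths below are placeholders, not values from the paper.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

rcm = np.array([100.0, 50.0, 0.0])           # trocar entry point in room frame (mm)
shaft_axis_body = np.array([0.0, 0.0, 1.0])  # laparoscope shaft axis in IMU frame

def tip_position(imu_quat_xyzw, insertion_len_mm):
    """Tip = RCM + rotated shaft axis * penetration length beyond the trocar."""
    axis_world = R.from_quat(imu_quat_xyzw).apply(shaft_axis_body)
    return rcm + insertion_len_mm * axis_world

print(tip_position([0.0, 0.259, 0.0, 0.966], insertion_len_mm=120.0))
# As the abstract notes, for small pivoting motions the residual error is
# dominated by the IMU orientation accuracy.
```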
Affiliation(s)
- Regine Hartwig: Research Group MITI, Technical University of Munich, Munich, Germany
- Daniel Ostler: Research Group MITI, Technical University of Munich, Munich, Germany
- Hubertus Feußner: Research Group MITI, Technical University of Munich, Munich, Germany
- Maximilian Berlet: Research Group MITI, Technical University of Munich, Munich, Germany
- Kevin Yu: Research Group MITI, Technical University of Munich, Munich, Germany
- Dirk Wilhelm: Research Group MITI, Technical University of Munich, Munich, Germany
17. Elsayed M, Kadom N, Ghobadi C, Strauss B, Al Dandan O, Aggarwal A, Anzai Y, Griffith B, Lazarow F, Straus CM, Safdar NM. Virtual and augmented reality: potential applications in radiology. Acta Radiol 2020; 61:1258-1265. PMID: 31928346; DOI: 10.1177/0284185119897362.
Abstract
The modern-day radiologist must be adept at image interpretation, and the one who most successfully leverages new technologies may provide the highest value to patients, clinicians, and trainees. Applications of virtual reality (VR) and augmented reality (AR) have the potential to revolutionize how imaging information is applied in clinical practice and how radiologists practice. This review provides an overview of VR and AR and highlights their current applications, future developments, and the limitations hindering adoption.
Affiliation(s)
- Mohammad Elsayed: Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Nadja Kadom: Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Comeron Ghobadi: Department of Radiology, The University of Chicago Pritzker School of Medicine, IL, USA
- Benjamin Strauss: Department of Radiology, The University of Chicago Pritzker School of Medicine, IL, USA
- Omran Al Dandan: Department of Radiology, Imam Abdulrahman Bin Faisal University College of Medicine, Dammam, Eastern Province, Saudi Arabia
- Abhimanyu Aggarwal: Department of Radiology, Eastern Virginia Medical School, Norfolk, VA, USA
- Yoshimi Anzai: Department of Radiology and Imaging Sciences, University of Utah School of Medicine, Salt Lake City, Utah, USA
- Brent Griffith: Department of Radiology, Henry Ford Health System, Detroit, MI, USA
- Frances Lazarow: Department of Radiology, Eastern Virginia Medical School, Norfolk, VA, USA
- Christopher M Straus: Department of Radiology, The University of Chicago Pritzker School of Medicine, IL, USA
- Nabile M Safdar: Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
18. Singh T, Alsadoon A, Prasad P, Alsadoon OH, Venkata HS, Alrubaie A. A novel enhanced hybrid recursive algorithm: Image processing based augmented reality for gallbladder and uterus visualisation. Egyptian Informatics Journal 2020. DOI: 10.1016/j.eij.2019.11.003.
19. Luo H, Yin D, Zhang S, Xiao D, He B, Meng F, Zhang Y, Cai W, He S, Zhang W, Hu Q, Guo H, Liang S, Zhou S, Liu S, Sun L, Guo X, Fang C, Liu L, Jia F. Augmented reality navigation for liver resection with a stereoscopic laparoscope. Comput Methods Programs Biomed 2020; 187:105099. PMID: 31601442; DOI: 10.1016/j.cmpb.2019.105099.
Abstract
OBJECTIVE Understanding the three-dimensional (3D) spatial position and orientation of vessels and tumor(s) is vital in laparoscopic liver resection procedures. Augmented reality (AR) techniques can help surgeons see the patient's internal anatomy in conjunction with laparoscopic video images. METHOD In this paper, we present an AR-assisted navigation system for liver resection based on a rigid stereoscopic laparoscope. The stereo image pairs from the laparoscope are used by an unsupervised convolutional neural network (CNN) framework to estimate depth and generate an intraoperative 3D liver surface. Meanwhile, 3D models of the patient's surgical field are segmented from preoperative CT images using a V-Net architecture for volumetric image data in an end-to-end predictive style. A globally optimal iterative closest point (Go-ICP) algorithm is adopted to register the pre- and intraoperative models into a unified coordinate space; then, the preoperative 3D models are superimposed on the live laparoscopic images to provide the surgeon with detailed information about the subsurface of the patient's anatomy, including tumors, their resection margins and vessels. RESULTS The proposed navigation system was tested on four ex vivo porcine livers in the laboratory and in five in vivo porcine experiments in the operating theatre to validate its accuracy. The ex vivo and in vivo reprojection errors (RPE) were 6.04 ± 1.85 mm and 8.73 ± 2.43 mm, respectively. CONCLUSION AND SIGNIFICANCE Both the qualitative and quantitative results indicate that our AR-assisted navigation system shows promise and has the potential to be highly useful in clinical practice.
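The pipeline's first stage, depth from the stereo laparoscope, can be approximated with classical semi-global matching in place of the paper's unsupervised CNN; a sketch under that substitution follows, with placeholder images and a stand-in reprojection matrix.

```python
import cv2
import numpy as np

left = cv2.imread("laparoscope_left.png", cv2.IMREAD_GRAYSCALE)    # placeholder
right = cv2.imread("laparoscope_right.png", cv2.IMREAD_GRAYSCALE)  # placeholder

# Semi-global block matching as a classical substitute for the unsupervised CNN
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # sub-pixel units

Q = np.eye(4, dtype=np.float32)  # stands in for the rig's rectification matrix
points_3d = cv2.reprojectImageTo3D(disparity, Q)  # intraoperative surface for Go-ICP
```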
Affiliation(s)
- Huoling Luo: Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Dalong Yin: Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China; Department of Hepatobiliary Surgery, Shengli Hospital Affiliated to University of Science and Technology of China, Hefei, China
- Shugeng Zhang: Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China; Department of Hepatobiliary Surgery, Shengli Hospital Affiliated to University of Science and Technology of China, Hefei, China
- Deqiang Xiao: Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Baochun He: Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Fanzheng Meng: Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Yanfang Zhang: Department of Interventional Radiology, Shenzhen People's Hospital, Shenzhen, China
- Wei Cai: Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Shenghao He: Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Wenyu Zhang: Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Qingmao Hu: Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Hongrui Guo: Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Shuhang Liang: Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Shuo Zhou: Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Shuxun Liu: Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Linmao Sun: Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Xiao Guo: Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
- Chihua Fang: Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Lianxin Liu: Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China; Department of Hepatobiliary Surgery, Shengli Hospital Affiliated to University of Science and Technology of China, Hefei, China
- Fucang Jia: Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
20. Singh P, Alsadoon A, Prasad P, Venkata HS, Ali RS, Haddad S, Alrubaie A. A novel augmented reality to visualize the hidden organs and internal structure in surgeries. Int J Med Robot 2020; 16:e2055. DOI: 10.1002/rcs.2055.
Affiliation(s)
- P. Singh: School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- Abeer Alsadoon: School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- P.W.C. Prasad: School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- Rasha S. Ali: Department of Computer Techniques Engineering, AL Nisour University College, Baghdad, Iraq
- Sami Haddad: Department of Oral and Maxillofacial Services, Greater Western Sydney Area Health Services, New South Wales, Australia; Department of Oral and Maxillofacial Services, Central Coast Area Health, Gosford, New South Wales, Australia
- Ahmad Alrubaie: Faculty of Medicine, University of New South Wales, Sydney, New South Wales, Australia
21. Shen J, Zemiti N, Taoum C, Aiche G, Dillenseger JL, Rouanet P, Poignet P. Transrectal ultrasound image-based real-time augmented reality guidance in robot-assisted laparoscopic rectal surgery: a proof-of-concept study. Int J Comput Assist Radiol Surg 2019; 15:531-543. DOI: 10.1007/s11548-019-02100-2.
22. Yanagi Y, Yoshimaru K, Matsuura T, Shibui Y, Kohashi K, Takahashi Y, Obata S, Sozaki R, Izaki T, Taguchi T. The outcome of real-time evaluation of biliary flow using near-infrared fluorescence cholangiography with indocyanine green in biliary atresia surgery. J Pediatr Surg 2019; 54:2574-2578. PMID: 31575415; DOI: 10.1016/j.jpedsurg.2019.08.029.
Abstract
BACKGROUND Indocyanine green (ICG) fluorescence imaging is a promising tool for intraoperative decision-making. The aim of this study was to evaluate the utility of near-infrared fluorescence cholangiography (NIR-FCG) with ICG in primary surgery for biliary atresia (BA). METHODS We performed NIR-FCG with ICG in 10 BA patients and observed the fluorescence of their hilar micro-bile ducts and hilar exudate in order to assess the appropriate level at which to dissect the hilar fibrous cone. We compared the jaundice outcome of the 10 patients managed with NIR-FCG (Group A) to that of 35 historical patients in whom NIR-FCG had not been used (Group B). RESULTS The mean age of the patients was 74.8 days. The classification of BA was type I in two cases and type III in eight cases. NIR-FCG visualized the hilar micro-bile ducts, and the incidence of positive fluorescence was 80%. The rate of postoperative normalization of hyperbilirubinemia in Group A was significantly higher than that in Group B (1.0 vs. 0.65, p < 0.05). CONCLUSION NIR-FCG provided important objective information about the biliary structures during surgery for BA. Although the number of cases was small, our results suggest that NIR-FCG may be useful for improving the outcome of primary surgery for BA. TYPE OF STUDY Study of Diagnostic Test. LEVEL OF EVIDENCE Level III.
Affiliation(s)
- Yusuke Yanagi: Department of Pediatric Surgery, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Koichiro Yoshimaru: Department of Pediatric Surgery, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Toshiharu Matsuura: Department of Pediatric Surgery, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Yuichi Shibui: Anatomic Pathology, Pathological Sciences, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Kenichi Kohashi: Anatomic Pathology, Pathological Sciences, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Yoshiaki Takahashi: Department of Pediatric Surgery, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Satoshi Obata: Department of Pediatric Surgery, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Ryota Sozaki: Department of Pediatric Surgery, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Tomoko Izaki: Department of Pediatric Surgery, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Tomoaki Taguchi: Department of Pediatric Surgery, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
|
23
|
Ma C, Chen G, Zhang X, Ning G, Liao H. Moving-Tolerant Augmented Reality Surgical Navigation System Using Autostereoscopic Three-Dimensional Image Overlay. IEEE J Biomed Health Inform 2019; 23:2483-2493. [DOI: 10.1109/jbhi.2018.2885378] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
24
|
Oda M, Roth HR, Kitasaka T, Misawa K, Fujiwara M, Mori K. Abdominal artery segmentation method from CT volumes using fully convolutional neural network. Int J Comput Assist Radiol Surg 2019; 14:2069-2081. [PMID: 31493112 DOI: 10.1007/s11548-019-02062-5] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2018] [Accepted: 08/27/2019] [Indexed: 10/26/2022]
Abstract
PURPOSE: The purpose of this paper is to present a fully automated abdominal artery segmentation method from a CT volume. Three-dimensional (3D) blood vessel structure information is important for diagnosis and treatment. Information about blood vessels (including arteries) can be used in patient-specific surgical planning and intra-operative navigation. Since blood vessels have large inter-patient variations in branching patterns and positions, a patient-specific blood vessel segmentation method is necessary. Even though deep learning-based segmentation methods achieve good accuracy on large organs, small structures such as blood vessels are not well segmented. We propose a deep learning-based abdominal artery segmentation method from a CT volume. Because arteries are among the small structures that are difficult to segment, we introduce an original training sample generation method and a three-plane segmentation approach to improve segmentation accuracy. METHOD: Our proposed method segments abdominal arteries from an abdominal CT volume with a fully convolutional network (FCN). To segment small arteries, we employ a 2D patch-based segmentation method and an area imbalance reduced training patch generation (AIRTPG) method. AIRTPG adjusts the imbalance between the number of patches containing artery regions and the number without them. These methods improved the segmentation accuracy of small artery regions. Furthermore, we introduced a three-plane segmentation approach to obtain clear 3D segmentation results from 2D patch-based processes. In the three-plane approach, we performed three segmentation processes using patches generated on the axial, coronal, and sagittal planes and combined the results into a 3D segmentation result. RESULTS: The evaluation of the proposed method on 20 abdominal CT volumes shows that the average F-measure, precision, and recall rates were 87.1%, 85.8%, and 88.4%, respectively. This result outperformed our previous automated FCN-based segmentation method. Our method offers competitive performance compared with previous blood vessel segmentation methods for 3D volumes. CONCLUSIONS: We developed an abdominal artery segmentation method using an FCN. The 2D patch-based and AIRTPG methods effectively segmented the artery regions. In addition, the three-plane approach generated good 3D segmentation results.
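The three-plane approach (segment on axial, coronal, and sagittal slices, then combine) can be sketched independently of the trained FCN; a minimal numpy sketch that fuses per-plane probability volumes by averaging, which is one plausible combination rule since the abstract does not specify the exact one:

import numpy as np

def fuse_three_plane(prob_axial, prob_coronal, prob_sagittal, threshold=0.5):
    """Average per-plane artery probability volumes and threshold.

    Each input is a float array of shape (Z, Y, X) holding the probability
    of 'artery' predicted from slices of one orientation.
    """
    fused = (prob_axial + prob_coronal + prob_sagittal) / 3.0
    return fused >= threshold  # boolean 3D segmentation mask

# Usage with random stand-in volumes (real inputs would come from the FCN):
z, y, x = 64, 128, 128
rng = np.random.default_rng(0)
mask = fuse_three_plane(rng.random((z, y, x)),
                        rng.random((z, y, x)),
                        rng.random((z, y, x)))
print(mask.shape, mask.mean())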
Affiliation(s)
- Masahiro Oda: Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, Japan
- Holger R Roth: Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, Japan
- Takayuki Kitasaka: School of Information Science, Aichi Institute of Technology, 1247 Yachigusa, Yakusa-cho, Toyota, Aichi, Japan
- Kazunari Misawa: Aichi Cancer Center Hospital, 1-1 Kanokoden, Chikusa-ku, Nagoya, Aichi, Japan
- Michitaka Fujiwara: Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, Japan
- Kensaku Mori: Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, Japan; Research Center for Medical Bigdata, National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, Japan
|
25
|
Evaluating the impact of image guidance in the surgical setting: a systematic review. Surg Endosc 2019; 33:2785-2793. [PMID: 31168704 PMCID: PMC6684543 DOI: 10.1007/s00464-019-06876-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2018] [Accepted: 05/28/2019] [Indexed: 12/02/2022]
Abstract
Background Image guidance has been clinically available for over 20 years. Although research increasingly has a translational emphasis, overall the clinical uptake of image guidance systems in surgery remains low. The objective of this review was to establish the metrics used to report on the impact of surgical image guidance systems used in a clinical setting. Methods A systematic review of the literature was carried out on all relevant publications between January 2000 and April 2016. Ovid MEDLINE and Embase databases were searched using a title strategy. Reported outcome metrics were grouped into clinically relevant domains and subsequent sub-categories for analysis. Results In total, 232 publications were eligible for inclusion. Analysis showed that clinical outcomes and system interaction were consistently reported. However, metrics focusing on surgeon, patient, and economic impact were reported less often. No increase in the quality of reporting was observed over the study period, in association with study design, or in surgical specialties that had been using image guidance for longer. Conclusions Publications reporting on the clinical use of image guidance systems evaluate traditional surgical outcomes and neglect important human and economic factors, which are pertinent to the uptake, diffusion, and sustainability of image-guided surgery. A framework is proposed to assist researchers in providing comprehensive evaluation metrics, which should also be considered in the design phase. Use of these would help demonstrate impact in the clinical setting, leading to increased clinical integration of image guidance systems. Electronic supplementary material The online version of this article (10.1007/s00464-019-06876-x) contains supplementary material, which is available to authorized users.
|
26
|
Speers AD, Ma B, Jarnagin WR, Himidan S, Simpson AL, Wildes RP. Fast and accurate vision-based stereo reconstruction and motion estimation for image-guided liver surgery. Healthc Technol Lett 2018; 5:208-214. [PMID: 30464852 PMCID: PMC6222177 DOI: 10.1049/htl.2018.5071] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2018] [Accepted: 08/20/2018] [Indexed: 11/25/2022] Open
Abstract
Image-guided liver surgery aims to enhance the precision of resection and ablation by providing fast localisation of tumours and adjacent complex vasculature to improve oncologic outcome. This Letter presents a novel end-to-end solution for fast stereo reconstruction and motion estimation that demonstrates high accuracy with phantom and clinical data. The authors’ computationally efficient coarse-to-fine (CTF) stereo approach facilitates liver imaging by accounting for low texture regions, enabling precise three-dimensional (3D) boundary recovery through the use of adaptive windows and utilising a robust 3D motion estimator to reject spurious data. To the best of their knowledge, theirs is the only adaptive CTF matching approach to reconstruction and motion estimation that registers time series of reconstructions to a single key frame for registration to a volumetric computed tomography scan. The system is evaluated empirically in controlled laboratory experiments with a liver phantom and motorised stages for precise quantitative evaluation. Additional evaluation is provided through testing with patient data during liver resection.
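For the stereo step, a generic baseline is semi-global block matching followed by the pinhole depth relation Z = f * B / d; the authors' adaptive coarse-to-fine matcher is custom, so the OpenCV sketch below is only a stand-in, with file names and calibration values assumed:

import cv2
import numpy as np

# Rectified stereo pair (assumed file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point x16

f_px, baseline_mm = 700.0, 5.0            # assumed focal length (px) and baseline (mm)
valid = disparity > 0
depth_mm = np.zeros_like(disparity)
depth_mm[valid] = f_px * baseline_mm / disparity[valid]  # Z = f * B / d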
Affiliation(s)
- Andrew D Speers: Department of Electrical Engineering and Computer Science, York University, Toronto, ON, Canada
- Burton Ma: Department of Electrical Engineering and Computer Science, York University, Toronto, ON, Canada
- William R Jarnagin: Hepatopancreatobiliary Service, Department of Surgery, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Sharifa Himidan: Department of Surgery, The Hospital for Sick Children, Toronto, ON, Canada; Department of Surgery, University of Toronto, Toronto, ON, Canada
- Amber L Simpson: Hepatopancreatobiliary Service, Department of Surgery, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Richard P Wildes: Department of Electrical Engineering and Computer Science, York University, Toronto, ON, Canada
|
27
|
The usefulness of 3D-CT simulation and intraoperative navigation in pediatric minimally invasive surgery. JOURNAL OF PEDIATRIC SURGERY CASE REPORTS 2018. [DOI: 10.1016/j.epsc.2018.07.009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023] Open
|
28
|
Lau LW, Liu X, Plishker W, Sharma K, Shekhar R, Kane TD. Laparoscopic Liver Resection with Augmented Reality: A Preclinical Experience. J Laparoendosc Adv Surg Tech A 2018; 29:88-93. [PMID: 30192172 DOI: 10.1089/lap.2018.0183] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
INTRODUCTION Intraoperative imaging, such as ultrasound, provides subsurface anatomical information not seen by standard laparoscopy. Currently, information from the two modalities may only be integrated in the surgeon's mind, an often distracting and inefficient task. The desire to improve intraoperative efficiency has guided the development of a novel, augmented reality (AR) laparoscopic system that integrates, in real time, laparoscopic ultrasound (LUS) images with the laparoscopic video. This study shows the initial application of this system for laparoscopic hepatic wedge resection in a porcine model. MATERIALS AND METHODS The AR system consists of a standard laparoscopy setup, LUS scanner, electromagnetic tracking system, and a laptop computer for image fusion. Two liver lesions created in a 40-kg swine by radiofrequency ablation (RFA) were resected using the novel AR system and under standard laparoscopy. RESULTS Anatomical details from the LUS were successfully fused with the laparoscopic video in real time and presented on a single screen for the surgeons. The RFA lesions created were 2.5 and 1 cm in diameter. The 2.5 cm lesion was resected under AR guidance, taking about 7 minutes until completion, while the 1 cm lesion required 3 minutes using standard laparoscopy and ultrasound. Resection margins of both lesions grossly showed noncoagulated liver parenchyma, indicating a negative-margin resection. CONCLUSIONS The use of our AR system in laparoscopic hepatic wedge resection in a swine provided real-time integration of ultrasound image with standard laparoscopy. With more experience and testing, this system can be used for other laparoscopic procedures.
Affiliation(s)
- Lung W Lau: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington, District of Columbia; Department of Surgery, University Hospitals Cleveland Medical Center, Cleveland, Ohio
- Xinyang Liu: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington, District of Columbia
- Karun Sharma: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington, District of Columbia
- Raj Shekhar: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington, District of Columbia; IGI Technologies, Inc., College Park, Maryland
- Timothy D Kane: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington, District of Columbia
|
29
|
Vávra P, Roman J, Zonča P, Ihnát P, Němec M, Kumar J, Habib N, El-Gendi A. Recent Development of Augmented Reality in Surgery: A Review. JOURNAL OF HEALTHCARE ENGINEERING 2017; 2017:4574172. [PMID: 29065604 PMCID: PMC5585624 DOI: 10.1155/2017/4574172] [Citation(s) in RCA: 161] [Impact Index Per Article: 20.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/12/2017] [Accepted: 07/03/2017] [Indexed: 12/11/2022]
Abstract
INTRODUCTION The development of augmented reality devices allows physicians to incorporate data visualization into diagnostic and treatment procedures to improve work efficiency and safety, reduce costs, and enhance surgical training. However, awareness of the possibilities of augmented reality is generally low. This review evaluates whether augmented reality can presently improve the results of surgical procedures. METHODS We performed a review of the available literature dating from 2010 to November 2016 by searching PubMed and Scopus using the terms "augmented reality" and "surgery." RESULTS The initial search yielded 808 studies. After removing duplicates and including only journal articles, a total of 417 studies were identified. By reading the abstracts, 91 relevant studies were chosen for inclusion, and 11 further references were gathered by cross-referencing. A total of 102 studies were included in this review. CONCLUSIONS The present literature suggests an increasing interest among surgeons in employing augmented reality in surgery, leading to improved safety and efficacy of surgical procedures. Many studies showed that the performance of newly devised augmented reality systems is comparable to traditional techniques. However, several problems need to be addressed before augmented reality is implemented in routine practice.
Affiliation(s)
- P. Vávra: Department of Surgery, University Hospital Ostrava, 17. Listopadu 1790, 708 52 Ostrava, Czech Republic
- J. Roman: Faculty of Medicine, University of Ostrava, Syllabova 19, 703 00 Ostrava, Czech Republic
- P. Zonča: Department of Surgery, University Hospital Ostrava, 17. Listopadu 1790, 708 52 Ostrava, Czech Republic
- P. Ihnát: Department of Surgery, University Hospital Ostrava, 17. Listopadu 1790, 708 52 Ostrava, Czech Republic; Faculty of Medicine, University of Ostrava, Syllabova 19, 703 00 Ostrava, Czech Republic
- M. Němec: Faculty of Electrical Engineering and Computer Science, Technical University of Ostrava, 17. Listopadu 15/2172, 708 33 Ostrava, Czech Republic
- J. Kumar: Department of Surgery & Cancer, Faculty of Medicine, Imperial College London, South Kensington Campus, London SW7 2AZ, UK
- N. Habib: Department of Surgery & Cancer, Faculty of Medicine, Imperial College London, South Kensington Campus, London SW7 2AZ, UK
- A. El-Gendi: Department of Surgery, Faculty of Medicine, Alexandria University, Chamblion Street, El Azareeta, Alexandria Governorate, Egypt
|
30
|
Chu Y, Yang J, Ma S, Ai D, Li W, Song H, Li L, Chen D, Chen L, Wang Y. Registration and fusion quantification of augmented reality based nasal endoscopic surgery. Med Image Anal 2017; 42:241-256. [PMID: 28881251 DOI: 10.1016/j.media.2017.08.003] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2016] [Revised: 06/10/2017] [Accepted: 08/02/2017] [Indexed: 11/24/2022]
Abstract
This paper quantifies the registration and fusion display errors of augmented reality-based nasal endoscopic surgery (ARNES). We comparatively investigated the spatial calibration process for front-end endoscopy and redefined the accuracy level of a calibrated endoscope by using a calibration tool with improved structural reliability. We also studied how registration accuracy depended on the number and distribution of the deployed fiducial points (FPs) for positioning and on the measured registration time. A physically integrated ARNES prototype was custom-configured for performance evaluation in skull base tumor resection surgery with an innovative approach of dynamic endoscopic vision expansion. As advised by surgical experts in otolaryngology, we proposed a hierarchical rendering scheme to properly adapt the fused images to the required visual sensation. By constraining the rendered sight to a known depth and radius, the visual focus of the surgeon can be directed only to the anticipated critical anatomy and vessel structures to avoid misguidance. Furthermore, error analysis was conducted to examine the feasibility of hybrid optical tracking based on point clouds, which was proposed in our previous work as an in-surgery registration solution. Measured results indicated that the target registration error of ARNES can be reduced to 0.77 ± 0.07 mm. For initial registration, our results suggest that a trade-off for a new minimal registration time can be reached when a distribution of five FPs is considered. For in-surgery registration, our findings reveal that the intrinsic registration error is a major cause of performance loss. Rigid-model and cadaver experiments confirmed that the scene integration and display fluency of ARNES are smooth, as further demonstrated in three clinical trials that confirmed its practicality.
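The point-based registration underlying the reported target registration error follows the standard rigid least-squares solution via SVD (the Kabsch method); a minimal numpy sketch, not the paper's hybrid optical-tracking pipeline, with fiducial coordinates assumed:

import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src fiducials onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Five fiducial pairs (assumed coordinates, mm): dst is src shifted by a known offset.
rng = np.random.default_rng(1)
src = rng.random((5, 3)) * 100.0
shift = np.array([5.0, -2.0, 1.0])
dst = src + shift

R, t = rigid_register(src, dst)
target = np.array([10.0, 20.0, 30.0])                      # a point not used as a fiducial
tre = np.linalg.norm((R @ target + t) - (target + shift))  # target registration error
print(f"TRE = {tre:.6f} mm")                               # ~0 for this noise-free example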
Affiliation(s)
- Yakui Chu: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Jian Yang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Shaodong Ma: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Wenjie Li: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Hong Song: School of Software, Beijing Institute of Technology, Beijing 100081, China
- Liang Li: Department of Otolaryngology-Head and Neck Surgery, Chinese PLA General Hospital, Beijing 100853, China
- Duanduan Chen: School of Life Science, Beijing Institute of Technology, Beijing 100081, China
- Lei Chen: Department of Otolaryngology-Head and Neck Surgery, Chinese PLA General Hospital, Beijing 100853, China
- Yongtian Wang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
|
31
|
Kong SH, Haouchine N, Soares R, Klymchenko A, Andreiuk B, Marques B, Shabat G, Piechaud T, Diana M, Cotin S, Marescaux J. Robust augmented reality registration method for localization of solid organs' tumors using CT-derived virtual biomechanical model and fluorescent fiducials. Surg Endosc 2017; 31:2863-2871. [PMID: 27796600 DOI: 10.1007/s00464-016-5297-8] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2016] [Accepted: 10/14/2016] [Indexed: 12/11/2022]
Abstract
BACKGROUND Augmented reality (AR) is the fusion of computer-generated and real-time images. AR can be used in surgery as a navigation tool, by creating a patient-specific virtual model through 3D software manipulation of DICOM imaging (e.g., a CT scan). The virtual model can be superimposed onto real-time images, enabling transparent visualization of internal anatomy and accurate localization of tumors. However, the 3D model is rigid and does not take into account the deformation of inner structures. We present a concept of automated AR registration, while the organs undergo deformation during surgical manipulation, based on finite element modeling (FEM) coupled with optical imaging of fluorescent surface fiducials. METHODS Two 10 × 1 mm wires (pseudo-tumors) and six 10 × 0.9 mm fluorescent fiducials were placed in ex vivo porcine kidneys (n = 10). Biomechanical FEM-based models were generated from CT scans. The kidneys were deformed, and the shape changes were identified by tracking the fiducials using a near-infrared optical system. The changes were registered automatically with the virtual model, which was deformed accordingly. The accuracy of the predicted pseudo-tumor locations was evaluated with a CT scan in the deformed state (ground truth). In vivo, fluorescent fiducials were inserted under ultrasound guidance in the kidney of one pig, followed by a CT scan. The FEM-based virtual model was superimposed on the laparoscopic images by automatic registration of the fiducials. RESULTS Biomechanical models were successfully generated and accurately superimposed on the optical images. The mean measured distance between the tumor estimated by biomechanical propagation and the scanned tumor (ground truth) was 0.84 ± 0.42 mm. All fiducials were successfully placed in the in vivo kidney and were well visualized in near-infrared mode, enabling accurate automatic registration of the virtual model on the laparoscopic images. CONCLUSIONS Our preliminary experiments showed the potential of a biomechanical model with fluorescent fiducials to propagate the deformation of solid organs' surfaces to their inner structures, including tumors, with good accuracy and automated, robust tracking.
Affiliation(s)
- Seong-Ho Kong: IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France; Department of Surgery, Seoul National University Hospital, Seoul, Korea
- Nazim Haouchine: Institut national de recherche en informatique et en automatique (INRIA) Mimesis, Strasbourg, France
- Renato Soares: IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France
- Andrey Klymchenko: Biophotonic and Pharmacology Lab, UMR 7213 CNRS, Pharmacological Faculty, University of Strasbourg, Strasbourg, France
- Bohdan Andreiuk: Biophotonic and Pharmacology Lab, UMR 7213 CNRS, Pharmacological Faculty, University of Strasbourg, Strasbourg, France
- Bruno Marques: Institut national de recherche en informatique et en automatique (INRIA) Mimesis, Strasbourg, France
- Galyna Shabat: IRCAD, Research Institute against Cancer of the Digestive System, 1, Place de l'Hôpital, 67091, Strasbourg, France
- Michele Diana: IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France; IRCAD, Research Institute against Cancer of the Digestive System, 1, Place de l'Hôpital, 67091, Strasbourg, France
- Stéphane Cotin: Institut national de recherche en informatique et en automatique (INRIA) Mimesis, Strasbourg, France
- Jacques Marescaux: IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France; IRCAD, Research Institute against Cancer of the Digestive System, 1, Place de l'Hôpital, 67091, Strasbourg, France
|
32
|
Kawanaka H, Akahoshi T, Nagao Y, Kinjo N, Yoshida D, Matsumoto Y, Harimoto N, Itoh S, Yoshizumi T, Maehara Y. Customization of laparoscopic gastric devascularization and splenectomy for gastric varices based on CT vascular anatomy. Surg Endosc 2017. [PMID: 28639036 DOI: 10.1007/s00464-017-5646-2] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
BACKGROUND Laparoscopic gastric devascularization (Lap GDS) and splenectomy (SPL) for gastric varices are technically challenging because of highly developed collateral vessels and a bleeding tendency. We investigated the feasibility of customizing Lap GDS and SPL based on CT vascular anatomy. METHODS We analyzed 61 cirrhotic patients with gastric varices who underwent Lap GDS and SPL between 2006 and 2014. Lap GDS was customized according to the afferent feeding veins (left gastric vein (LGV) and/or posterior gastric vein (PGV)/short gastric vein (SGV)) and efferent drainage veins (gastrorenal shunt and/or gastrophrenic shunt, or numerous retroperitoneal veins) based on CT imaging. RESULTS Thirty-four patients with efferent drainage veins suitable for balloon-occluded retrograde transvenous obliteration (B-RTO) underwent B-RTO instead of surgical GDS, with subsequent Lap SPL. Among the 27 patients with gastric varices unsuitable for B-RTO, 15 patients with PGV/SGV underwent Lap GDS of the greater curvature and SPL, and 12 patients with LGV or LGV/PGV/SGV underwent Lap GDS of the greater and lesser curvature and SPL. The mean operation time was 294 min, and the mean blood loss was 198 g. There was no mortality or severe morbidity. Gastric varices were eradicated in all 61 patients, with no bleeding or recurrence during a mean follow-up of 55.9 months. The cumulative 3-, 5-, and 7-year survival rates were 92, 82, and 64%, respectively. CONCLUSIONS Lap GDS and SPL customized according to CT vascular anatomy are a safe and effective treatment for gastric varices.
Affiliation(s)
- Hirofumi Kawanaka: Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan; Clinical Research Institute and Department of Surgery, National Beppu Medical Center, 1473 Uchikamado, Beppu, 874-0011, Japan
- Tomohiko Akahoshi: Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Yoshihiro Nagao: Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Nao Kinjo: Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Daisuke Yoshida: Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Yoshihiro Matsumoto: Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Norifumi Harimoto: Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Shinji Itoh: Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Tomoharu Yoshizumi: Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
- Yoshihiko Maehara: Department of Surgery and Science, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
|
33
|
Nishi M, Kanaji S, Otake Y, Harada H, Yamamoto M, Oshikiri T, Nakamura T, Suzuki S, Suzuki Y, Hiasa Y, Sato Y, Kakeji Y. Quantitative comparison of operative skill using 2- and 3-dimensional monitors during laparoscopic phantom tasks. Surgery 2017; 161:1334-1340. [DOI: 10.1016/j.surg.2016.08.060] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2016] [Revised: 08/11/2016] [Accepted: 08/16/2016] [Indexed: 11/16/2022]
|
34
|
Oshiro Y, Ohkohchi N. Three-Dimensional Liver Surgery Simulation: Computer-Assisted Surgical Planning with Three-Dimensional Simulation Software and Three-Dimensional Printing. Tissue Eng Part A 2017; 23:474-480. [PMID: 28343411 DOI: 10.1089/ten.tea.2016.0528] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
Abstract
To perform accurate hepatectomy without injury, it is necessary to understand the anatomical relationships among the branches of Glisson's sheath, the hepatic veins, and the tumor. In Japan, three-dimensional (3D) preoperative simulation for liver surgery is becoming increasingly common, and liver 3D modeling and 3D hepatectomy simulation with 3D analysis software have been covered by universal healthcare insurance since 2012. Herein, we review the history of virtual hepatectomy using computer-assisted surgery (CAS) and our research to date, and we discuss the future prospects of CAS. We have used the SYNAPSE VINCENT medical imaging system (Fujifilm Medical, Tokyo, Japan) for 3D visualization and virtual resection of the liver since 2010. We developed a novel fusion imaging technique combining 3D computed tomography (CT) with magnetic resonance imaging (MRI). The fusion image enables us to easily visualize the anatomic relationships among the hepatic arteries, portal veins, bile duct, and tumor in the hepatic hilum. In 2013, we developed an original software package, called Liversim, that enables real-time deformation of the liver using physical simulation, and a randomized controlled trial has recently been conducted to evaluate the use of Liversim and SYNAPSE VINCENT for preoperative simulation and planning. Furthermore, we developed a novel hollow 3D-printed liver model whose surface is covered with frames. This model is useful for safe liver resection, offers better visibility, and costs one-third as much to produce as a previous model. Preoperative simulation and navigation with CAS in liver resection are expected to aid in planning and performing surgery and in surgical education. Thus, a novel CAS system will contribute not only to the performance of reliable hepatectomy but also to surgical education.
Affiliation(s)
- Yukio Oshiro: Division of Gastroenterological and Hepatobiliary Surgery and Organ Transplantation, Department of Surgery, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan
- Nobuhiro Ohkohchi: Division of Gastroenterological and Hepatobiliary Surgery and Organ Transplantation, Department of Surgery, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan
|
35
|
The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017; 37:66-90. [DOI: 10.1016/j.media.2017.01.007] [Citation(s) in RCA: 183] [Impact Index Per Article: 22.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2016] [Revised: 01/16/2017] [Accepted: 01/23/2017] [Indexed: 12/27/2022]
|
36
|
Van der Jeught S, Dirckx JJJ. Real-time structured light-based otoscopy for quantitative measurement of eardrum deformation. JOURNAL OF BIOMEDICAL OPTICS 2017; 22:16008. [PMID: 28301636 DOI: 10.1117/1.jbo.22.1.016008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/17/2016] [Accepted: 12/20/2016] [Indexed: 06/06/2023]
Abstract
An otological profilometry device based on real-time structured light triangulation is presented. A clinical otoscope head is mounted onto a custom-handheld unit containing both a small digital light projector and a high-speed digital camera. Digital fringe patterns are projected onto the eardrum surface and are recorded at a rate of 120 unique frames per second. The relative angle between projection and camera axes causes the projected patterns to appear deformed by the eardrum shape, allowing its full-field three-dimensional (3-D) surface map to be reconstructed. By combining hardware triggering between projector and camera with a dedicated parallel processing pipeline, the proposed system is capable of acquiring a live stream of point clouds of over 300,000 data points per frame at a rate of 40 Hz. Real-time eardrum profilometry adds an additional dimension of depth to the standard two-dimensional otoscopy image and provides a noninvasive tool to enhance the qualitative depth perception of the clinical operator with quantitative 3-D data. Visualization of the eardrum from different perspectives can improve the diagnosis of existing and the detection of impending middle ear pathology. The capability of the device to detect small middle ear pressure changes by monitoring eardrum deformation in real time is demonstrated.
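The depth recovery in such structured-light systems reduces to classic active triangulation; for a baseline $b$ between projector and camera and ray angles $\alpha$ and $\beta$ measured from the baseline, the law of sines gives the standard relation (a textbook formula, not quoted from the paper):

$$Z = \frac{b \,\sin\alpha \,\sin\beta}{\sin(\alpha + \beta)}$$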
Affiliation(s)
- Sam Van der Jeught: University of Antwerp, Department of Physics, Laboratory of Biomedical Physics, Groenenborgerlaan 171, B-2020 Antwerp, Belgium
- Joris J J Dirckx: University of Antwerp, Department of Physics, Laboratory of Biomedical Physics, Groenenborgerlaan 171, B-2020 Antwerp, Belgium
|
37
|
Robust and Accurate Algorithm for Wearable Stereoscopic Augmented Reality with Three Indistinguishable Markers. ELECTRONICS 2016. [DOI: 10.3390/electronics5030059] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
|
38
|
Hayashi Y, Misawa K, Oda M, Hawkes DJ, Mori K. Clinical application of a surgical navigation system based on virtual laparoscopy in laparoscopic gastrectomy for gastric cancer. Int J Comput Assist Radiol Surg 2016; 11:827-36. [PMID: 26429785 DOI: 10.1007/s11548-015-1293-z] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2015] [Accepted: 09/09/2015] [Indexed: 01/25/2023]
Abstract
PURPOSE Knowledge of the specific anatomical information of a patient is important when planning and undertaking laparoscopic surgery due to the restricted field of view and lack of tactile feedback compared to open surgery. To assist this type of surgery, we have developed a surgical navigation system that presents the patient's anatomical information synchronized with the laparoscope position. This paper presents the surgical navigation system and its clinical application to laparoscopic gastrectomy for gastric cancer. METHODS The proposed surgical navigation system generates virtual laparoscopic views corresponding to the laparoscope position recorded with a three-dimensional (3D) positional tracker. The virtual laparoscopic views are generated from preoperative CT images. A point-based registration aligns coordinate systems between the patient's anatomy and image coordinates. The proposed navigation system is able to display the virtual laparoscopic views using the registration result during surgery. RESULTS We performed surgical navigation during laparoscopic gastrectomy in 23 cases. The navigation system was able to present the virtual laparoscopic views in synchronization with the laparoscopic position. The fiducial registration error was calculated in all 23 cases, and the average was 14.0 mm (range 6.1-29.8). CONCLUSION The proposed surgical navigation system can provide CT-derived patient anatomy aligned to the laparoscopic view in real time during surgery. This system enables accurate identification of vascular anatomy as a guide to vessel clamping prior to total or partial gastrectomy.
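The fiducial registration error (FRE) reported here is conventionally the root-mean-square residual of the point-based alignment: with fiducials $p_i$ mapped by the estimated rigid transform $(R, t)$ onto their counterparts $q_i$ (the standard definition, stated here for context):

$$\mathrm{FRE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \lVert R\,p_i + t - q_i \rVert^2}$$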
Affiliation(s)
- Yuichiro Hayashi: Information & Communications, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
- Kazunari Misawa: Department of Gastroenterological Surgery, Aichi Cancer Center Hospital, 1-1 Kanokoden, Chikusa-ku, Nagoya, 464-8681, Japan
- Masahiro Oda: Graduate School of Information Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8603, Japan
- David J Hawkes: Information Technology Center, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan; Centre for Medical Image Computing, University College London, Gower Street, London, WC1E 6BT, UK
- Kensaku Mori: Information & Communications, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan; Graduate School of Information Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8603, Japan
|
39
|
Hayashi Y, Misawa K, Hawkes DJ, Mori K. Progressive internal landmark registration for surgical navigation in laparoscopic gastrectomy for gastric cancer. Int J Comput Assist Radiol Surg 2016; 11:837-45. [PMID: 26811079 DOI: 10.1007/s11548-015-1346-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2015] [Accepted: 12/24/2015] [Indexed: 10/22/2022]
Abstract
PURPOSE A surgical navigation system supports the comprehension of anatomical information during surgery. Patient-to-image registration is the alignment process between the CT volume and patient coordinate systems. Achieving accurate registration in the surgical navigation of laparoscopic surgery is very challenging due to soft tissue deformation. This paper presents a new patient-to-image registration method based on internal anatomical landmarks for improving registration accuracy in the surgical navigation of laparoscopic gastrectomy for gastric cancer. METHODS Our proposed registration method progressively utilizes internal anatomical landmarks. In laparoscopic gastrectomy for gastric cancer, the surgeon cuts the blood vessels around the stomach. The positions of the cut vessels are sequentially used as fiducials for registration during surgery. The proposed method uses a weighted point-based registration method to compute the transformation matrix from fiducials both on the body surface and on the blood vessels. When a blood vessel is cut during surgery, the proposed progressive registration method measures the cut vessel's position and computes a transformation matrix with the cut vessel added as a fiducial. RESULTS We applied our proposed progressive registration method using the positional information of the blood vessels acquired during laparoscopic gastrectomy in 20 cases. We evaluated it using the target registration error at four blood vessels. The average target registration error at the four blood vessels was 12.6 mm, ranging from 2.1 to 32.9 mm. CONCLUSION Since the proposed progressive registration reduces registration error, it is very useful for the surgical navigation of laparoscopic gastrectomy and might increase its accuracy.
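Mixing body-surface fiducials with progressively added cut-vessel fiducials amounts to a weighted variant of the SVD registration sketched earlier; a minimal numpy sketch in which the weight values are hypothetical, since the abstract does not state how fiducials are weighted:

import numpy as np

def weighted_rigid_register(src, dst, w):
    """Weighted least-squares rigid transform; w holds one weight per fiducial."""
    w = w / w.sum()
    src_c = (w[:, None] * src).sum(axis=0)        # weighted centroids
    dst_c = (w[:, None] * dst).sum(axis=0)
    H = (src - src_c).T @ (w[:, None] * (dst - dst_c))
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

# Surface fiducials get a low weight, a newly cut vessel a higher one (assumed values):
src = np.random.default_rng(2).random((6, 3)) * 100.0
dst = src + np.array([3.0, 1.0, -2.0])
weights = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 3.0])
R, t = weighted_rigid_register(src, dst, weights)
print(np.round(t, 3))                             # recovers the known offset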
Affiliation(s)
- Yuichiro Hayashi: Information & Communications, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
- Kazunari Misawa: Department of Gastroenterological Surgery, Aichi Cancer Center Hospital, 1-1 Kanokoden, Chikusa-ku, Nagoya, 464-8681, Japan
- David J Hawkes: Information Technology Center, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan; Centre for Medical Image Computing, University College London, Gower Street, London, WC1E 6BT, UK
- Kensaku Mori: Information & Communications, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
|
40
|
Robust intraoperative registration with fluorescent markers for computer-assisted laparoscopy [Robuste intraoperative Registrierung mit fluoreszierenden Markern für die computergestützte Laparoskopie; in German]. INFORMATIK AKTUELL 2016. [DOI: 10.1007/978-3-662-49465-3_15] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
|
41
|
Souzaki R, Kinoshita Y, Ieiri S, Kawakubo N, Obata S, Jimbo T, Koga Y, Hashizume M, Taguchi T. Preoperative surgical simulation of laparoscopic adrenalectomy for neuroblastoma using a three-dimensional printed model based on preoperative CT images. J Pediatr Surg 2015; 50:2112-5. [PMID: 26440294 DOI: 10.1016/j.jpedsurg.2015.08.037] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/15/2015] [Accepted: 08/24/2015] [Indexed: 01/17/2023]
Abstract
BACKGROUND Three-dimensional (3D) printed models based on computed tomography (CT) images facilitate the visualization of complex structures and are useful for understanding the surgical anatomy preoperatively. We developed a preoperative surgical simulation method using a 3D-printed model based on CT images obtained prior to laparoscopic adrenalectomy for adrenal neuroblastoma (NB). MATERIALS AND METHODS The multi-detector CT images were transferred to a 3D workstation, and 3D volume data were obtained by reconstructing the sections. A model was made with a 3D printer using acrylic ultraviolet-curable resin. The adrenal tumor, kidney, renal vein and artery, inferior vena cava, aorta, and outer body were fabricated. The pneumoperitoneum, insertion of trocars, and laparoscopic view could all be reproduced in this model. We used this model in three cases of adrenal NB. RESULTS We used this model to discuss the port layout before the operation and to simulate the laparoscopic view and the range of forceps movement. All three NBs were completely resected without any surgical complications. CONCLUSIONS Surgical simulation using 3D-printed models based on preoperative CT images for adrenal NB was very useful for understanding the patient's surgical anatomy and for planning the surgical procedures, especially for determining the optimal port layout.
Affiliation(s)
- Ryota Souzaki: Department of Pediatric Surgery, Faculty of Medical Sciences, Kyushu University; Department of Advance Medicine and Innovative Technology, Kyushu University Hospital
- Yoshiaki Kinoshita: Department of Pediatric Surgery, Faculty of Medical Sciences, Kyushu University
- Satoshi Ieiri: Department of Pediatric Surgery, Faculty of Medical Sciences, Kyushu University
- Naonori Kawakubo: Department of Pediatric Surgery, Faculty of Medical Sciences, Kyushu University
- Satoshi Obata: Department of Pediatric Surgery, Faculty of Medical Sciences, Kyushu University
- Takahiro Jimbo: Department of Pediatric Surgery, Faculty of Medical Sciences, Kyushu University
- Yuhki Koga: Department of Pediatrics, Graduate School of Medical Sciences, Kyushu University
- Makoto Hashizume: Department of Advance Medicine and Innovative Technology, Kyushu University Hospital
- Tomoaki Taguchi: Department of Pediatric Surgery, Faculty of Medical Sciences, Kyushu University
|
42
|
Souzaki R, Kinoshita Y, Ieiri S, Hayashida M, Koga Y, Shirabe K, Hara T, Maehara Y, Hashizume M, Taguchi T. Three-dimensional liver model based on preoperative CT images as a tool to assist in surgical planning for hepatoblastoma in a child. Pediatr Surg Int 2015; 31:593-6. [PMID: 25895074 DOI: 10.1007/s00383-015-3709-9] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 04/07/2015] [Indexed: 12/27/2022]
Abstract
The patient was a 3-year-old female diagnosed with PRETEXT IV hepatoblastoma (HB). Although the tumor shrank after neoadjuvant chemotherapy, the HB was still located at the porta hepatis. The patient successfully underwent extended left lobectomy after surgical simulation using a three-dimensional (3D) printed liver model based on preoperative CT.
Affiliation(s)
- Ryota Souzaki: Department of Pediatric Surgery, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, 812-8582, Japan
|
43
|
Okamoto T, Onda S, Yasuda J, Yanaga K, Suzuki N, Hattori A. Navigation surgery using an augmented reality for pancreatectomy. Dig Surg 2015; 32:117-23. [PMID: 25766302 DOI: 10.1159/000371860] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/30/2014] [Accepted: 12/31/2014] [Indexed: 12/18/2022]
Abstract
AIM The aim of this study was to evaluate the utility of navigation surgery using augmented reality technology (AR-based NS) for pancreatectomy. METHODS The 3D reconstructed images from CT were created by segmentation. The initial registration was performed using an optical location sensor. The reconstructed images were superimposed onto the real organs in the monitor display. Of the 19 patients who underwent hepatobiliary and pancreatic surgery using AR-based NS, the accuracy, visualization ability, and utility of our system were assessed in the five cases of pancreatectomy. RESULTS The position of each organ in the surface-rendered image corresponded almost exactly to that of the actual organ. Reference to the displayed image allowed for safe dissection while preserving the adjacent vessels and organs. The locations of the lesions and the resection line on the targeted organ were overlaid on the operating field. The initial mean registration error was improved to approximately 5 mm by our refinements. However, several problems such as registration accuracy, portability, and cost still remain. CONCLUSION AR-based NS contributed to accurate and effective surgical resection in pancreatectomy. The pancreas appears to be a suitable organ for further investigation. This technology promises to improve surgical quality, training, and education.
Affiliation(s)
- Tomoyoshi Okamoto: Department of Surgery, The Jikei University Daisan Hospital, Tokyo, Japan
|
44
|
Selka F, Nicolau S, Agnus V, Bessaid A, Marescaux J, Soler L. Context-specific selection of algorithms for recursive feature tracking in endoscopic image using a new methodology. Comput Med Imaging Graph 2015; 40:49-61. [PMID: 25542640 DOI: 10.1016/j.compmedimag.2014.11.012] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2013] [Revised: 09/16/2014] [Accepted: 11/20/2014] [Indexed: 10/24/2022]
Abstract
In minimally invasive surgery, the tracking of deformable tissue is a critical component of image-guided applications. Deformation of the tissue can be recovered by tracking features using tissue surface information (texture, color, etc.). Recent work in this field has shown success in acquiring tissue motion. However, the performance evaluation of detection and tracking algorithms on such images is still difficult and not standardized, mainly due to the lack of ground truth for real data. Moreover, no quantitative work has been undertaken to evaluate the benefit of a pre-processing step based on image filtering, which can improve feature tracking robustness and avoid the need for supplementary outlier-removal techniques. In this paper, we propose a methodology to validate detection and feature tracking algorithms, using a trick based on forward-backward tracking that provides artificial ground truth data. We describe a clear and complete methodology to evaluate and compare different detection and tracking algorithms. In addition, we extend our framework with a strategy to identify the best combinations from a set of detector, tracker, and pre-processing algorithms, according to live intra-operative data. Experiments were performed on in vivo datasets and show that pre-processing can have a strong influence on tracking performance and that our strategy for finding the best combinations is relevant for a reasonable computation cost.
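The forward-backward trick evaluates a tracker against itself: track features from frame A to frame B, track the results back to A, and use the round-trip distance as an error proxy. A minimal OpenCV sketch with pyramidal Lucas-Kanade, one possible detector/tracker pair rather than the paper's full benchmark (file names and the 1 px gate are assumptions):

import cv2
import numpy as np

# Consecutive endoscopic frames (assumed file names).
frame_a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
frame_b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

# Detect features in frame A, track A -> B, then track the results back B -> A.
p0 = cv2.goodFeaturesToTrack(frame_a, maxCorners=200, qualityLevel=0.01, minDistance=7)
p1, st_fwd, _ = cv2.calcOpticalFlowPyrLK(frame_a, frame_b, p0, None)
p0_back, st_bwd, _ = cv2.calcOpticalFlowPyrLK(frame_b, frame_a, p1, None)

# Forward-backward error: round-trip displacement per feature.
fb_error = np.linalg.norm(p0 - p0_back, axis=2).ravel()
ok = (st_fwd.ravel() == 1) & (st_bwd.ravel() == 1) & (fb_error < 1.0)  # 1 px gate (assumed)
print(f"kept {ok.sum()} / {len(ok)} tracks, median FB error {np.median(fb_error):.2f} px")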
Affiliation(s)
- F Selka: Biomedical Engineering Laboratory, Sciences Engineering Faculty, Abou Bekr Belkaid University, Tlemcen, Algeria; Research Institute against Digestive Cancer, IRCAD, 1 place de l'Hopital, Strasbourg, France
- S Nicolau: Research Institute against Digestive Cancer, IRCAD, 1 place de l'Hopital, Strasbourg, France
- V Agnus: Research Institute against Digestive Cancer, IRCAD, 1 place de l'Hopital, Strasbourg, France
- A Bessaid: Biomedical Engineering Laboratory, Sciences Engineering Faculty, Abou Bekr Belkaid University, Tlemcen, Algeria
- J Marescaux: Research Institute against Digestive Cancer, IRCAD, 1 place de l'Hopital, Strasbourg, France; IHU, 1 place de l'Hopital, Strasbourg, France
- L Soler: Research Institute against Digestive Cancer, IRCAD, 1 place de l'Hopital, Strasbourg, France; IHU, 1 place de l'Hopital, Strasbourg, France
45
|
Kenngott HG, Wagner M, Nickel F, Wekerle AL, Preukschas A, Apitz M, Schulte T, Rempel R, Mietkowski P, Wagner F, Termer A, Müller-Stich BP. Computer-assisted abdominal surgery: new technologies. Langenbecks Arch Surg 2015; 400:273-81. [PMID: 25701196 DOI: 10.1007/s00423-015-1289-8] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2015] [Accepted: 02/09/2015] [Indexed: 12/16/2022]
Abstract
BACKGROUND Computer-assisted surgery is a broad field of technologies with the potential to enable surgeons to improve the efficiency and efficacy of diagnosis, treatment, and clinical management. PURPOSE This review provides an overview of the most important new technologies and their applications. METHODS A MEDLINE database search was performed, revealing a total of 1702 references. All references were considered for information on six main topics, namely image guidance and navigation, robot-assisted surgery, human-machine interfaces, surgical processes and clinical pathways, computer-assisted surgical training, and clinical decision support. Further references were obtained by cross-referencing the bibliography cited in each work. Based on their respective fields of expertise, the authors chose 64 publications relevant to the purpose of this review. CONCLUSION Computer-assisted systems are increasingly used not only in experimental studies but also in clinical studies. Although computer-assisted abdominal surgery is still in its infancy, the number of studies is constantly increasing, and clinical studies are starting to show the benefits of computers used not only as tools for documentation and accounting but also for directly assisting surgeons during the diagnosis and treatment of patients. Further developments in the field of clinical decision support even have the potential to cause a paradigm shift in how patients are diagnosed and treated.
Affiliation(s)
- H G Kenngott: Department of General, Abdominal and Transplant Surgery, Ruprecht-Karls-University, Heidelberg, Germany
|
46
|
Pessaux P, Diana M, Soler L, Piardi T, Mutter D, Marescaux J. Robotic duodenopancreatectomy assisted with augmented reality and real-time fluorescence guidance. Surg Endosc 2014; 28:2493-2498. [PMID: 24609700 DOI: 10.1007/s00464-014-3465-2] [Citation(s) in RCA: 42] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2013] [Accepted: 01/24/2014] [Indexed: 02/06/2023]
Abstract
BACKGROUND The minimally invasive surgeon cannot use the sense of touch to orient surgical resection by identifying important structures (vessels, tumors, etc.) through manual palpation. Robotic research has provided technology to facilitate laparoscopic surgery; however, robotics has yet to solve the lack of tactile feedback inherent to keyhole surgery. Misinterpretation of the vascular supply and tumor location may increase the risk of intraoperative bleeding and result in imprecise dissection with positive resection margins. METHODS Augmented reality (AR) is the fusion of synthetic computer-generated images (a three-dimensional virtual model) obtained from preoperative medical imaging with real-time patient images, with the aim of visualizing unapparent anatomical details. RESULTS In this article, we review the most common modalities used to achieve surgical navigation through AR, along with a report of a case of robotic duodenopancreatectomy using AR guidance complemented by fluorescence guidance. CONCLUSIONS The presentation of this complex, high-technology case of robotic duodenopancreatectomy, and the overview of the current technology that has made it possible to use AR in the operating room, highlight the need for further evolution and the windows of opportunity to create a new paradigm in surgical practice.
Affiliation(s)
- Patrick Pessaux: Hepato-Biliary and Pancreatic Surgical Unit, General, Digestive and Endocrine Surgery, IRCAD, IHU MixSurg, Institute for Minimally Invasive Image-Guided Surgery, University of Strasbourg, 1 place de l'Hôpital, 67091, Strasbourg, France
|
47
|
Okamoto T, Onda S, Yanaga K, Suzuki N, Hattori A. Clinical application of navigation surgery using augmented reality in the abdominal field. Surg Today 2014; 45:397-406. [PMID: 24898629 DOI: 10.1007/s00595-014-0946-9] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2013] [Accepted: 01/23/2014] [Indexed: 12/20/2022]
Abstract
This article presents general principles and recent advancements in the clinical application of augmented reality-based navigation surgery (AR-based NS) for abdominal procedures and includes a description of our clinical trial and its outcomes. Moreover, current problems and future prospects are discussed. The development of AR-based NS in the abdomen is delayed compared with other fields because of intraoperative organ deformation and the existence of established modalities. Although there are only a few reports on the clinical use of AR-based NS in digestive surgery, sophisticated technologies in urology have often been reported. The rapid spread of video- and robot-assisted surgery, however, calls for this technology. We have worked to develop a system of AR-based NS for hepatobiliary and pancreatic surgery, and we developed a short rigid scope that enables surgeons to obtain a 3D view. We recently focused on pancreatic surgery, because intraoperative organ shifting there is minimal. The position of each organ in the overlaid image corresponded closely to that of the actual organ, with a mean registration error of about 5 mm. The intraoperative information generated by this system provided useful navigation. However, AR-based NS still has several problems to overcome, such as organ deformation, evaluation of utility, portability, and cost.
Collapse
Affiliation(s)
- Tomoyoshi Okamoto
- Department of Surgery, The Jikei University Daisan Hospital, 4-11-1 Izumihoncho, Komae-shi, Tokyo, Japan.
Collapse
|
48
|
Kang X, Azizian M, Wilson E, Wu K, Martin AD, Kane TD, Peters CA, Cleary K, Shekhar R. Stereoscopic augmented reality for laparoscopic surgery. Surg Endosc 2014; 28:2227-35. [PMID: 24488352 DOI: 10.1007/s00464-014-3433-x] [Citation(s) in RCA: 51] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2013] [Accepted: 01/10/2014] [Indexed: 02/08/2023]
Abstract
BACKGROUND Conventional laparoscopes provide a flat representation of the three-dimensional (3D) operating field and cannot visualize internal structures located beneath visible organ surfaces. Computed tomography (CT) and magnetic resonance (MR) images are difficult to fuse in real time with laparoscopic views owing to the deformable nature of soft-tissue organs. Utilizing emerging camera technology, we have developed a real-time stereoscopic augmented-reality (AR) system for laparoscopic surgery that merges live laparoscopic ultrasound (LUS) with stereoscopic video. The system creates two new visual cues: (1) perception of true depth, with improved understanding of the 3D spatial relationships among anatomical structures, and (2) visualization of critical internal structures alongside a more comprehensive view of the operating field. METHODS The stereoscopic AR system was designed for near-term clinical translation, with seamless integration into the existing surgical workflow. It is composed of a stereoscopic vision system, an LUS system, and an optical tracker. Specialized software processes streams of imaging data from the tracked devices and registers them in real time. The resulting two ultrasound-augmented video streams (one for the left eye and one for the right) give a live stereoscopic AR view of the operating field. We conducted a series of stereoscopic AR interrogations of the liver, gallbladder, biliary tree, and kidneys in two swine. RESULTS The preclinical studies demonstrated the feasibility of the stereoscopic AR system during in vivo procedures. Major internal structures could be easily identified. The system exhibited no perceptible latency and acceptable image-to-video registration accuracy. CONCLUSIONS We present the first in vivo use of a complete system with stereoscopic AR visualization capability. This new capability introduces new visual cues and enhances visualization of the surgical anatomy. The system shows promise for improving the precision and expanding the capacity of minimally invasive laparoscopic surgery.
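The registration step at the heart of such a system is a chain of rigid transforms: the optical tracker reports the pose of the ultrasound probe and of the stereo camera, so any ultrasound pixel can be mapped into each camera's frame. The sketch below illustrates this chaining under stated assumptions; the calibration matrices and pixel scale are hypothetical placeholders, not the actual system's values.

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and 3-vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def us_pixel_to_camera(px_uv, scale_mm, T_probe_image, T_tracker_probe, T_cam_tracker):
    """Map a 2D ultrasound pixel (u, v) to 3D camera-frame coordinates (mm):
    p_cam = T_cam_tracker @ T_tracker_probe @ T_probe_image @ p_image."""
    p_image = np.array([px_uv[0] * scale_mm, px_uv[1] * scale_mm, 0.0, 1.0])
    return (T_cam_tracker @ T_tracker_probe @ T_probe_image @ p_image)[:3]

# Hypothetical identity calibrations; probe placed 50 mm along the camera axis
T_pi = hom(np.eye(3), np.zeros(3))           # ultrasound image -> probe body (calibration)
T_tp = hom(np.eye(3), np.array([0, 0, 50]))  # probe body -> tracker (tracked pose)
T_ct = hom(np.eye(3), np.zeros(3))           # tracker -> camera (tracked pose)
print(us_pixel_to_camera((128, 64), 0.2, T_pi, T_tp, T_ct))  # [25.6 12.8 50. ]
```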
Collapse
Affiliation(s)
- Xin Kang
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Medical Center, 111 Michigan Avenue NW, Washington, DC, 20010, USA.
Collapse
|
49
|
Souzaki R, Ieiri S, Uemura M, Ohuchida K, Tomikawa M, Kinoshita Y, Koga Y, Suminoe A, Kohashi K, Oda Y, Hara T, Hashizume M, Taguchi T. An augmented reality navigation system for pediatric oncologic surgery based on preoperative CT and MRI images. J Pediatr Surg 2013; 48:2479-83. [PMID: 24314190 DOI: 10.1016/j.jpedsurg.2013.08.025] [Citation(s) in RCA: 43] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/25/2013] [Accepted: 08/26/2013] [Indexed: 10/26/2022]
Abstract
PURPOSE In pediatric endoscopic surgery, the limited view and lack of tactile sensation restrict the surgeon's abilities. Moreover, in pediatric oncology, it is sometimes difficult to detect and resect tumors because of the adhesion and degeneration of tumors treated with multimodal therapy. We developed an augmented reality (AR) navigation system based on preoperative CT and MRI images for use in endoscopic surgery for pediatric tumors. METHODS Patients preoperatively underwent either CT or MRI with body surface markers in place. During surgery, we used an optical tracking system to register the 3D images reconstructed from the CT or MRI data with the body surface markers. For AR visualization, the reconstructed 3D images were superimposed onto captured live images. Six patients underwent surgery using this system. RESULTS The median age of the patients was 3.5 years. Two of the six patients underwent laparoscopic surgery, two underwent thoracoscopic surgery, and two underwent laparotomy with this system. The indications for surgery were local recurrence of a Wilms tumor (one case), metastasis of rhabdomyosarcoma (one case), undifferentiated sarcoma (one case), bronchogenic cyst (two cases), and hepatoblastoma (one case). The average tumor size was 22.0 ± 14.2 mm. Four patients had been treated with chemotherapy and three with radiotherapy before surgery, and four patients underwent reoperation. All six tumors were located using the AR navigation system and successfully resected without complications. CONCLUSIONS The AR navigation system is very useful for locating tumors during pediatric surgery, especially endoscopic surgery.
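The marker-based registration described in the methods is conventionally solved as a paired-point rigid alignment. Whether the authors used exactly this algorithm is an assumption, but the Kabsch/Horn least-squares solution sketched below is the standard approach for aligning preoperative image coordinates with tracker-digitized body-surface markers; the marker coordinates here are hypothetical.

```python
import numpy as np

def register_rigid(src, dst):
    """Find R, t minimizing sum ||R @ src_i + t - dst_i||^2 over Nx3 point pairs."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

# Hypothetical markers: CT-space coordinates vs. tracker-digitized positions (mm)
ct = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]], float)
patient = ct + np.array([5.0, -3.0, 2.0])        # pure translation for the demo
R, t = register_rigid(ct, patient)
print(np.round(t, 1))                            # recovers [ 5. -3.  2.]
```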
Collapse
Affiliation(s)
- Ryota Souzaki
- Department of Pediatric Surgery, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan; Department of Advanced Medicine and Innovative Technology, Kyushu University Hospital, Fukuoka, Japan.
Collapse
|
50
|
Real-time image guidance in laparoscopic liver surgery: first clinical experience with a guidance system based on intraoperative CT imaging. Surg Endosc 2013; 28:933-40. [PMID: 24178862 DOI: 10.1007/s00464-013-3249-0] [Citation(s) in RCA: 71] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2013] [Accepted: 10/04/2013] [Indexed: 01/11/2023]
Abstract
BACKGROUND Laparoscopic liver surgery is particularly challenging owing to restricted access, risk of bleeding, and lack of haptic feedback. Navigation systems have the potential to improve information on the exact position of intrahepatic tumors and thus facilitate oncological resection. This study aims to evaluate the feasibility of a commercially available augmented reality (AR) guidance system employing intraoperative robotic C-arm cone-beam computed tomography (CBCT) for laparoscopic liver surgery. METHODS A human liver-like phantom with 16 target fiducials was used to evaluate the Syngo iPilot® AR system. Subsequently, the system was used for the laparoscopic resection of a hepatocellular carcinoma in segment 7 of a 50-year-old male patient. RESULTS In the phantom experiment, the AR system showed a mean target registration error of 0.96 ± 0.52 mm, with a maximum error of 2.49 mm. The patient successfully underwent the operation and showed no postoperative complications. CONCLUSION The use of intraoperative CBCT and AR for laparoscopic liver resection is feasible and could be considered an option for complex cases in future liver surgery.
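For context, accuracy figures like those above (mean 0.96 ± 0.52 mm, maximum 2.49 mm over 16 fiducials) are conventionally obtained by computing the per-fiducial target registration error and summarizing it. A minimal sketch with randomly generated stand-in coordinates, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
true_pos = rng.uniform(0, 100, size=(16, 3))             # phantom ground truth (mm)
overlaid = true_pos + rng.normal(0, 0.6, size=(16, 3))   # AR-indicated positions (mm)

# Per-fiducial target registration error, then summary statistics
tre = np.linalg.norm(overlaid - true_pos, axis=1)
print(f"TRE: {tre.mean():.2f} ± {tre.std(ddof=1):.2f} mm, max {tre.max():.2f} mm")
```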
Collapse
|