1
Liu Z, Gu J, Zhang Y, Bai L, Guan Y, Ni B, Zhang H, Aimaiti M, Zhang P, Shen X, Wang S, Yue B, Xia X, Zhang Z. Total laparoscopy-assisted total gastrectomy with Da Vinci robotic system conducted by robotic enhanced neurocomputing joint intelligence gastrointestinal surgery hub (RENJI-GISH): a preliminary clinical study and case report. World J Surg Oncol 2025; 23:86. PMID: 40087678; PMCID: PMC11907975; DOI: 10.1186/s12957-025-03735-z.
Abstract
BACKGROUND On August 7, 2024, the inaugural total laparoscopy-assisted total gastrectomy with the Da Vinci robotic system was performed in the Department of Gastrointestinal Surgery of Renji Hospital, Shanghai Jiao Tong University School of Medicine. The procedure, conducted by RENJI-GISH, employed a Da Vinci robotic system in conjunction with the Vision Pro and the SonoScape medical electronic endoscopy system. Such an approach has not previously been documented in the field of gastric cancer surgery. The objective of this study is to investigate the safety, feasibility, and surgical effect of the first total laparoscopy-assisted total gastrectomy with the Da Vinci robotic system conducted by the Robotic Enhanced Neurocomputing Joint Intelligence Gastrointestinal Surgery Hub (RENJI-GISH). CASE PRESENTATION A 71-year-old male patient was admitted to the hospital with a six-month history of nausea and vomiting. A malignant gastric tumor was identified on gastroscopic examination, and adenocarcinoma of the gastric cardia was diagnosed by gastroscopy and pathology. There were clear indications for surgical intervention, and no contraindications were identified. On August 7, 2024, the patient underwent a robot-assisted total laparoscopic gastrectomy under general anesthesia. During the procedure, the Vision Pro and the electronic endoscopy system played a pivotal role in accurately identifying the location of the gastric lesions, confirming the resection margin of the tumor, and ensuring the safety of the anastomosis. Intraoperative blood loss was 20 mL, and the operative time was 180 min. The patient passed flatus on the third postoperative day, was transitioned to a liquid diet on the fourth postoperative day, and was discharged on the seventh postoperative day without complications. The postoperative pathology report indicated that the lymph node dissection was complete (0/32) and that no evidence of malignancy was identified in the upper or lower resection margins. CONCLUSIONS This case illustrates the safety and feasibility of total laparoscopy-assisted total gastrectomy with the Da Vinci robotic system performed by the robotic enhanced neurocomputing joint intelligence gastrointestinal surgery hub (RENJI-GISH). Further investigation is necessary to fully elucidate the potential of this approach.
Affiliation(s)
- Zihang Liu, Jiayi Gu, Yeqian Zhang, Long Bai, Yujing Guan, Bo Ni, Haoyu Zhang, Muerzhate Aimaiti, Puhua Zhang, Xiaoyao Shen, Shuchang Wang, Ben Yue, Xiang Xia, Zizhen Zhang: Department of Gastrointestinal Surgery, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
2
Popov V, Mateju N, Jeske C, Lewis KO. Metaverse-based simulation: a scoping review of charting medical education over the last two decades in the lens of the 'marvelous medical education machine'. Ann Med 2024; 56:2424450. PMID: 39535116; PMCID: PMC11562026; DOI: 10.1080/07853890.2024.2424450.
Abstract
BACKGROUND Over the past two decades, the use of metaverse-enhanced simulations in medical education has advanced significantly. These simulations offer immersive environments and technologies, such as augmented reality, virtual reality, and artificial intelligence, that have the potential to revolutionize medical training by providing realistic, hands-on experiences in diagnosing and treating patients, practicing surgical procedures, and enhancing clinical decision-making skills. This scoping review aimed to examine the evolution of simulation technology and the emergence of metaverse applications in medical professionals' training, guided by Friedman's three dimensions of medical education: physical space, time, and content, along with an additional dimension of assessment. METHODS We examined the related literature in six major databases: PubMed, EMBASE, CINAHL, Scopus, Web of Science, and ERIC. A total of 173 publications were selected for the final review and analysis. We thematically analyzed these studies by combining Friedman's three-dimensional framework with assessment. RESULTS Our scoping review showed that metaverse technologies, such as virtual reality simulation and online learning modules, have enabled medical education to extend beyond physical classrooms and clinical sites by facilitating remote training. In terms of the time dimension, simulation technologies have made partial but meaningful progress in supplementing traditional time-dependent curricula, helping to shorten learning curves and improve knowledge retention. As for the content dimension, high-quality simulation and metaverse content require alignment with learning objectives, interactivity, and deliberate practice, developmentally integrated from basic to advanced skills. With respect to the assessment dimension, learning analytics and automated metrics from metaverse-enabled simulation systems have enhanced competency evaluation and formative feedback mechanisms. However, their integration into high-stakes testing is limited, and qualitative feedback and human observation remain crucial. CONCLUSION Our study provides an updated perspective on the achievements and limitations of using simulation to transform medical education, offering insights that can inform development priorities and research directions for human-centered, ethical metaverse applications that enhance healthcare professional training.
Affiliation(s)
- Vitaliy Popov, Natalie Mateju, Caris Jeske: Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, MI, USA
- Kadriye O. Lewis: Children's Mercy Kansas City, Department of Pediatrics, UMKC School of Medicine, Kansas City, MO, USA
3
Minamimura K, Aoki Y, Kaneya Y, Matsumoto S, Arai H, Kakinuma D, Oshiro Y, Kawano Y, Watanabe M, Nakamura Y, Suzuki H, Yoshida H. Current Status of Robotic Hepatobiliary and Pancreatic Surgery. J Nippon Med Sch 2024; 91:10-19. PMID: 38233127; DOI: 10.1272/jnms.jnms.2024_91-109.
Abstract
Laparoscopic surgery is performed worldwide and has clear economic and social benefits in terms of patient recovery time. It is used for most gastrointestinal surgical procedures, but laparoscopic surgery for more complex procedures in the esophageal, hepatobiliary, and pancreatic regions remains challenging. Minimally invasive surgery that results in accurate tumor dissection is vital in surgical oncology, and development of surgical systems and instruments plays a key role in assisting surgeons to achieve this. A notable advance in the latter half of the 1990s was the da Vinci Surgical System, which involves master-slave surgical support robots. Featuring high-resolution three-dimensional (3D) imaging with magnification capabilities and forceps with multi-joint function, anti-shake function, and motion scaling, the system compensates for the drawbacks of conventional laparoscopic surgery. It is expected to be particularly useful in the field of hepato-biliary-pancreatic surgery, which requires delicate reconstruction involving complex liver anatomy with diverse vascular and biliary systems and anastomosis of the biliary tract, pancreas, and intestines. The learning curve is said to be short, and it is hoped that robotic surgery will be standardized in the near future. There is also a need for a standardized robotic surgery training system for young surgeons that can later be adapted to a wider range of surgeries. This systematic review describes trends and future prospects for robotic surgery in the hepatobiliary-pancreatic region.
Affiliation(s)
- Yuto Aoki, Youhei Kaneya, Hiroki Arai, Daisuke Kakinuma, Yukio Oshiro, Yoichi Kawano, Hideyuki Suzuki: Department of Surgery, Nippon Medical School Chiba Hokusoh Hospital
4
Mahmud M, Sari DCR, Sari D, Arfian N, Zucha MA. The application of augmented reality for improving clinical skills: a scoping review. Korean J Med Educ 2024; 36:65-79. PMID: 38462243; PMCID: PMC10925804; DOI: 10.3946/kjme.2024.285.
Abstract
Augmented reality technology has developed rapidly in recent years and has been applied in many fields, including medical education, where it has the potential to improve students' knowledge and skills. This scoping review primarily aims to elaborate on current studies of the implementation of augmented reality in advancing clinical skills. The review was conducted by searching electronic databases, including PubMed, Embase, and Web of Science, in June 2022 for articles focusing on the use of augmented reality for improving clinical skills. The Rayyan website was used to screen the articles against the inclusion criterion, namely the application of augmented reality as a learning method in medical education. A total of 37 articles met the inclusion criteria. These publications suggested that using augmented reality could improve clinical skills; laparoscopic surgery skills and ophthalmology were the most studied topics. The research methods applied in the articles fall into two main categories: randomized controlled trials (RCTs) (29.3%) and non-RCTs (70.3%). Augmented reality has the potential to be integrated into medical education, particularly to boost clinical skills. Given the limited number of databases searched, however, further studies on the implementation of augmented reality as a method to enhance skills in medical education are needed.
Affiliation(s)
- Mahmud Mahmud, Djayanti Sari: Department of Anesthesiology & Intensive Care Therapy, Sardjito General Hospital, Faculty of Medicine, Public Health and Nursing, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Dwi Cahyani Ratna Sari, Nur Arfian: Department of Anatomy, Faculty of Medicine, Public Health and Nursing, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Muhammad Ary Zucha: Department of Obstetrics and Gynecology, Sardjito General Hospital, Faculty of Medicine, Public Health and Nursing, Universitas Gadjah Mada, Yogyakarta, Indonesia
5
Zhao G, Chen X, Zhu M, Liu Y, Wang Y. Exploring the application and future outlook of artificial intelligence in pancreatic cancer. Front Oncol 2024; 14:1345810. PMID: 38450187; PMCID: PMC10915754; DOI: 10.3389/fonc.2024.1345810.
Abstract
Pancreatic cancer, an exceptionally malignant tumor of the digestive system, presents a challenge due to its lack of typical early symptoms and its highly invasive nature. The majority of pancreatic cancer patients are diagnosed when curative surgical resection is no longer possible, resulting in a poor overall prognosis. In recent years, the rapid progress of artificial intelligence (AI) in the medical field has led to the extensive utilization of machine learning and deep learning as the prevailing approaches. Various models based on AI technology have been employed in the early screening, diagnosis, treatment, and prognostic prediction of pancreatic cancer patients. Furthermore, three-dimensional visualization and augmented reality navigation techniques have also found their way into pancreatic cancer surgery. This article provides a concise summary of the current state of AI technology in pancreatic cancer and offers a promising outlook for its future applications.
Affiliation(s)
- Guohua Zhao, Yue Wang: Department of General Surgery, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Liaoning, China
- Xi Chen, Mengying Zhu: Department of General Surgery, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Liaoning, China; Department of Clinical Integration of Traditional Chinese and Western Medicine, Liaoning University of Traditional Chinese Medicine, Liaoning, China
- Yang Liu: Department of Ophthalmology, First Hospital of China Medical University, Liaoning, China
6
Pulumati A, Algarin YA, Jaalouk D, Hirsch M, Nouri K. Exploring the potential role for extended reality in Mohs micrographic surgery. Arch Dermatol Res 2024; 316:67. PMID: 38194123; DOI: 10.1007/s00403-023-02804-1.
Abstract
Mohs micrographic surgery (MMS) is a cornerstone of dermatological practice. Virtual reality (VR) and augmented reality (AR) technologies, initially used for entertainment, have entered healthcare, offering real-time data overlaying a surgeon's view. This paper explores potential applications of VR and AR in MMS, emphasizing their advantages and limitations, and aims to identify research gaps to facilitate innovation in dermatological surgery. We conducted a PubMed search using the following queries: "augmented reality" OR "virtual reality" AND "Mohs", and "augmented reality" OR "virtual reality" AND "surgery". Inclusion criteria were peer-reviewed articles in English discussing these technologies in medical settings. We excluded non-peer-reviewed sources, non-English articles, and those not addressing these technologies in a medical context. VR alleviates patient anxiety and enhances patient satisfaction while also serving as an educational tool; it further aids physicians by providing realistic surgical simulations. AR, on the other hand, assists in real-time lesion analysis, optimizing incision planning, and refining margin control during surgery. Both technologies offer remote guidance for trainee residents, enabling real-time learning and oversight and facilitating synchronous teleconsultations. These technologies may transform dermatologic surgery, making it more accessible and efficient, but further research is needed to validate their effectiveness, address potential challenges, and optimize seamless integration. In summary, AR and VR enhance real-world environments with digital data, offering real-time surgical guidance and medical insights. By exploring the potential integration of these technologies in MMS, our study identifies avenues for further research to thoroughly understand how these technologies could redefine dermatologic surgery, elevating precision, surgical outcomes, and patient experiences.
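Editor's note: for readers who want to reproduce a boolean literature query like the one above programmatically, the sketch below runs it against the public NCBI E-utilities API. The endpoint and parameters are standard E-utilities usage; the grouping parentheses are an assumption about the intended precedence of the authors' query.

```python
import requests

# NCBI E-utilities esearch endpoint (public; no API key needed for light use).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Parenthesized form of the boolean query reported in the abstract;
# the grouping is our assumption about the intended precedence.
query = '("augmented reality" OR "virtual reality") AND ("Mohs" OR "surgery")'

params = {
    "db": "pubmed",
    "term": query,
    "retmode": "json",
    "retmax": 20,  # number of PMIDs to return
}

resp = requests.get(ESEARCH, params=params, timeout=30)
resp.raise_for_status()
result = resp.json()["esearchresult"]

print("total hits:", result["count"])
print("first PMIDs:", result["idlist"])
```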
Affiliation(s)
- Anika Pulumati: University of Missouri-Kansas City School of Medicine, Kansas City, MO, USA; Department of Dermatology and Cutaneous Surgery, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Dana Jaalouk: Florida State University College of Medicine, Tallahassee, FL, USA
- Melanie Hirsch: University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
- Keyvan Nouri: Department of Dermatology and Cutaneous Surgery, University of Miami Leonard M. Miller School of Medicine, Miami, FL, USA
7
Nawaz FA, Mottershead R, Farooq R, Hryniewicki J, Kaldasch M, El Idrissi BJ, Tariq H, Ahmed W. Integrating Metaverse in Psychiatry for Adolescent Care and Treatment (IMPACT). Digit Health 2024; 10:20552076241297055. PMID: 39544922; PMCID: PMC11561995; DOI: 10.1177/20552076241297055.
Abstract
The integration of the metaverse in healthcare has been evolving, encompassing areas such as mental health interventions, neurological treatments, physical therapy, rehabilitation, medical education, and surgical procedure assistance. For the adolescent population, growing up in the digital era and witnessing the interaction of technology with daily life have made digitalization second nature. Despite the potential of this technology for advancing adolescent mental health care and treatment, there is a notable gap in research and development. This commentary article therefore aims to elucidate the current landscape of emerging technologies for adolescent mental healthcare in the metaverse, identify potential challenges with its implementation in this growing population, and provide recommendations to overcome these obstacles.
Affiliation(s)
- Faisal A. Nawaz, Rihab Farooq: Al Amal Psychiatric Hospital, Emirates Health Services, Dubai, United Arab Emirates
- Richard Mottershead: Faculty of Nursing, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
- Hanaa Tariq: Jinnah Sindh Medical University, Karachi, Pakistan
- Waleed Ahmed: Maudsley Health, Abu Dhabi, United Arab Emirates
8
Sadiq Z, Laswi I, Raoof A. The Effectiveness of OsiriX and the Anatomage Virtual Dissection Table in Enhancing Neuroanatomy and Neuroradiology Teaching. Adv Med Educ Pract 2023; 14:1037-1043. PMID: 37772090; PMCID: PMC10522455; DOI: 10.2147/amep.s418576.
Abstract
Introduction In recent years, with the advent of technology in medical education, teaching methodology has shifted toward heavy use of online learning modalities. This has especially been the case for anatomy and radiology courses, since they require students to visualize structures of the human body. Several studies have indicated that Anatomage and OsiriX can be effective at enhancing students' learning experiences in anatomy and radiology. Purpose The aim of this study is to assess the effectiveness of online case-based learning modules in teaching medical students about the anatomy and radiology of different types of brain tumors. Methods Two online case-based learning modules were designed using the Anatomage Table and the OsiriX DICOM viewer, each consisting of a clinical case and CT and MRI images. We recruited 36 fourth-year medical students who completed two 10-question quizzes (one on glioblastoma multiforme and one on pituitary adenomas). Participants were randomly assigned either to a study group that completed both modules prior to taking the quizzes or to a control group that took the quizzes without access to the modules. The performance of both groups was compared to assess the effectiveness of the modules. Participants in the study group also completed a feedback survey to assess the quality and convenience of using the modules. Results Students who used the case-based learning modules performed significantly better than those who did not (Quiz 1: mean = 6.56 vs 3.28, p<0.01; Quiz 2: mean = 6.67 vs 3.06, p<0.01). Students who completed the modules would like to see similar modules used in teaching anatomy and radiology in the future (64%). They found them easy to navigate (72%), useful in teaching anatomy and radiology (72%), and helpful in improving understanding of anatomical and radiological clinical correlations (77%). Conclusion Online case-based learning modules created using Anatomage and OsiriX can be used effectively to teach medical students about the anatomy and radiology of different types of brain tumors.
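Editor's note: the two-group quiz comparison reported above is the kind of analysis typically run with an independent-samples t-test. The sketch below uses made-up quiz scores (the paper reports only group means), so the numbers are placeholders, not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 10-point quiz scores for two groups of 18 students each;
# means roughly mimic the reported 6.56 vs 3.28, but the data are synthetic.
study_group = np.clip(rng.normal(6.6, 1.5, 18).round(), 0, 10)
control_group = np.clip(rng.normal(3.3, 1.5, 18).round(), 0, 10)

# Welch's t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(study_group, control_group, equal_var=False)

print(f"study mean   = {study_group.mean():.2f}")
print(f"control mean = {control_group.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```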
Affiliation(s)
- Ibrahim Laswi, Ameed Raoof: Weill Cornell Medicine-Qatar, Education City, Qatar Foundation, Doha, Qatar
9
Shuai H, Duan X, Wu T. Comparison of perioperative, oncologic, and functional outcomes between 3D and 2D laparoscopic radical prostatectomy: a systematic review and meta-analysis. Front Oncol 2023; 13:1249683. PMID: 37795432; PMCID: PMC10546177; DOI: 10.3389/fonc.2023.1249683.
Abstract
Objectives Literature on experience with 3D laparoscopy in prostatectomy remains scant, which may be related to the rise of robot-assisted laparoscopic surgery. This study aimed to perform a systematic review and meta-analysis to evaluate the perioperative, functional, and oncologic outcomes of 3D versus 2D laparoscopic radical prostatectomy (LRP). Methods We systematically searched the PubMed, Embase, and Cochrane Library databases for studies that compared perioperative, functional, or oncologic outcomes of 3D and 2D LRP. The Newcastle-Ottawa Scale (NOS) tool and Jadad scale were used to assess the risk of bias in the included studies. Review Manager 5.3 was used for the meta-analysis. Results Seven studies with a total of 542 patients were included in the analysis; two were RCTs. There was no difference between groups in terms of preoperative characteristics. Anastomosis time, hospital stay, and overall complication rates were similar between the 3D and 2D groups. However, operative time (mean difference [MD] -36.96; 95% confidence interval [CI] -59.25 to -14.67; p = 0.001), blood loss (MD -83.5; 95% CI -123.05 to -43.94; p < 0.0001), and days of drainage (MD -1.48; 95% CI -2.29 to -0.67; p = 0.0003) were lower with 3D LRP. 2D and 3D LRP showed similar positive surgical margin (PSM) and biochemical recurrence (BCR) rates at 3, 6, and 12 months postoperatively. Additionally, there were no significant differences in continence and potency recovery rates between the two groups, except for a higher continence rate with 3D LRP at 3 months. Conclusion Current evidence shows that 3D LRP offers favorable outcomes compared with 2D LRP in terms of operative time, blood loss, days of drainage, and early continence. However, there is no conclusive evidence that 3D LRP is advantageous in terms of oncologic and other functional outcomes (except for the continence rate at 3 months). Systematic review registration The study has been registered on the International Prospective Register of Systematic Reviews (PROSPERO: CRD42023426403).
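Editor's note: to make the pooled mean-difference figures above concrete, the sketch below implements textbook fixed-effect inverse-variance pooling. The per-study values are invented placeholders, not the studies analyzed in this review.

```python
import math

# Hypothetical per-study mean differences in operative time (minutes)
# with their standard errors; placeholders, not the review's actual data.
studies = [
    (-30.0, 12.0),
    (-45.0, 15.0),
    (-35.0, 10.0),
]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2.
weights = [1.0 / se**2 for _, se in studies]
pooled_md = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval from the normal approximation.
lo, hi = pooled_md - 1.96 * pooled_se, pooled_md + 1.96 * pooled_se
print(f"pooled MD = {pooled_md:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```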
Affiliation(s)
- Hui Shuai, Tao Wu: Department of Urology, Affiliated Hospital of North Sichuan Medical College, Nanchong, Sichuan, China
- Xi Duan: Department of Dermatology, Affiliated Hospital of North Sichuan Medical College, Nanchong, Sichuan, China
10
Abstract
INTRODUCTION During an operation, augmented reality (AR) enables surgeons to enrich their view of the operating field with digital imagery, particularly as regards tumors and anatomical structures. While in some specialties this type of technology is routinely utilized, in liver surgery its applications remain limited because of the complexity of modeling organ deformation in real time. At present, numerous teams are attempting to find a solution applicable to current practice, the objective being to overcome the difficulties of intraoperative navigation in an opaque organ. OBJECTIVE To identify, itemize, and analyze series reporting AR techniques tested in liver surgery, with the aims of establishing a state of the art and providing indications of perspectives for the future. METHODS In compliance with the PRISMA guidelines and using the PubMed, Embase, and Cochrane databases, we identified English-language articles published between January 2020 and January 2022 corresponding to the following keywords: augmented reality, hepatic surgery, liver, and hepatectomy. RESULTS Initially, 102 titles, studies, and summaries were preselected. Twenty-eight that met the inclusion criteria were included, reporting on 183 patients operated on with the help of AR by laparotomy (n=31) or laparoscopy (n=152). Several techniques of acquisition and visualization were reported. Anatomical precision was the main assessment criterion in 19 articles, with values ranging from 3 mm to 14 mm, followed by time of acquisition and clinical feasibility. CONCLUSION While several AR technologies are presently being developed, their clinical applications have remained limited due to insufficient anatomical precision. That said, numerous teams are currently working toward their optimization, and it is highly likely that in the short term the application of AR in liver surgery will become more frequent and effective. As for its clinical impact, notably in oncology, it remains to be assessed.
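Editor's note: "anatomical precision" figures like those quoted above are typically computed as a target registration error, i.e., the mean distance between corresponding landmarks after registration. The sketch below shows that computation on made-up landmark coordinates; it is a generic illustration, not this review's methodology.

```python
import numpy as np

# Hypothetical 3D landmark positions (mm): where the AR overlay places
# each anatomical target vs. where it truly lies. Placeholder values.
overlay_pts = np.array([[10.0, 22.0, 5.0],
                        [40.5, 18.2, 9.1],
                        [25.3, 30.7, 2.4]])
true_pts = np.array([[12.0, 21.0, 5.5],
                     [38.9, 19.0, 10.0],
                     [26.1, 33.5, 3.0]])

# Target registration error: Euclidean distance per landmark, then the mean.
errors = np.linalg.norm(overlay_pts - true_pts, axis=1)
print("per-landmark error (mm):", np.round(errors, 2))
print(f"mean TRE = {errors.mean():.2f} mm +/- {errors.std(ddof=1):.2f}")
```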
Affiliation(s)
- B Acidi: Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France
- M Ghallab: Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, Villejuif, France; Augmented Operating Room Innovation Chair (BOPA), France
- S Cotin: Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France
- E Vibert: Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, Villejuif, France; Augmented Operating Room Innovation Chair (BOPA), France; DHU Hepatinov, 94800 Villejuif, France; Inserm, Paris-Saclay University, UMRS 1193, Pathogenesis and treatment of liver diseases; FHU Hepatinov, 94800 Villejuif, France
- N Golse: Department of Surgery, AP-HP hôpital Paul-Brousse, Hepato-Biliary Center, Villejuif, France; Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France; DHU Hepatinov, 94800 Villejuif, France; Inserm, Paris-Saclay University, UMRS 1193, Pathogenesis and treatment of liver diseases; FHU Hepatinov, 94800 Villejuif, France
11
Balla A, Sartori A, Botteri E, Podda M, Ortenzi M, Silecchia G, Guerrieri M, Agresta F. Augmented reality (AR) in minimally invasive surgery (MIS) training: where are we now in Italy? The Italian Society of Endoscopic Surgery (SICE) ARMIS survey. Updates Surg 2023; 75:85-93. PMID: 36131182; DOI: 10.1007/s13304-022-01383-6.
Abstract
Minimally invasive surgery (MIS) is a widespread approach in general surgery. Computer guidance software, such as augmented reality (AR), virtual reality (VR), and mixed reality (MR), has been proposed to help surgeons during MIS. This study aims to report the current knowledge and diffusion of these technologies in surgical training in Italy. A web-based survey was developed under the aegis of the Italian Society of Endoscopic Surgery (SICE). The answers of 217 medical doctors were analyzed. Participants were surgeons (138, 63.6%) and residents in surgery (79, 36.4%). The mean self-rated knowledge of the role of VR, AR, and MR in surgery was 4.9 ± 2.4 (range 1-10). Most participants (122, 56.2%) did not have experience with any of the proposed technologies. However, despite the lack of experience in this field, the answers about how the technologies function were correct in most cases. Most participants answered that VR, AR, and MR should be used more frequently for teaching and training and during clinical activity (170, 80.3%) and that such technologies would make a significant contribution, especially to training (183, 84.3%) and didactics (156, 71.9%). Finally, the main limitations to the diffusion of these technologies were insufficient knowledge (182, 83.9%) and costs (175, 80.6%). Based on the present study, the knowledge and dissemination of these technologies in Italy are still limited. Further studies are required to establish the usefulness of AR, VR, and MR in surgical training.
Affiliation(s)
- Andrea Balla: UOC of General and Minimally Invasive Surgery, Hospital "San Paolo", Largo Donatori del Sangue 1, 00053 Civitavecchia (Rome), Italy
- Alberto Sartori: Department of General Surgery, Ospedale di Montebelluna, Via Palmiro Togliatti 16, 31044 Montebelluna, Treviso, Italy
- Emanuele Botteri: General Surgery, ASST Spedali Civili di Brescia PO Montichiari, Via Boccalera 3, 25018 Montichiari, Brescia, Italy
- Mauro Podda: Department of Surgical Science, University of Cagliari, Cagliari, Italy
- Monica Ortenzi, Mario Guerrieri: Department of General Surgery, Università Politecnica delle Marche, Piazza Roma 22, 60121 Ancona, Italy
- Gianfranco Silecchia: Department of Medical-Surgical Sciences and Biotechnologies, Faculty of Pharmacy and Medicine, "La Sapienza" University of Rome-Polo Pontino, Bariatric Centre of Excellence IFSO-EC, Rome, Italy
- Ferdinando Agresta: Department of General Surgery, AULSS2 Trevigiana del Veneto, Hospital of Vittorio Veneto, Vittorio Veneto, Treviso, Italy
12
Chen X, Sakai D, Fukuoka H, Shirai R, Ebina K, Shibuya S, Sase K, Tsujita T, Abe T, Oka K, Konno A. Basic Experiments Toward Mixed Reality Dynamic Navigation for Laparoscopic Surgery. J Robot Mechatron 2022. DOI: 10.20965/jrm.2022.p1253.
Abstract
Laparoscopic surgery is a minimally invasive procedure that is performed by viewing endoscopic camera images. However, the limited field of view of endoscopic cameras makes laparoscopic surgery difficult. To provide more visual information during laparoscopic surgeries, augmented reality (AR) surgical navigation systems have been developed that visualize the positional relationship between the surgical field and organs based on preoperative medical images of a patient. However, because earlier studies relied on preoperative medical images, the navigation became inaccurate as surgery progressed and the organs were displaced and deformed. To solve this problem, we propose a mixed reality (MR) surgical navigation system in which surgical instruments are tracked by a motion capture (Mocap) system, contact between the instruments and organs is evaluated, and the deformation of the organ caused by the contact is simulated and visualized. This paper describes a method for the numerical calculation of the deformation of a soft body. The basic technology of MR and projection mapping for MR surgical navigation is then presented. The accuracy of the simulated and visualized deformations is evaluated through basic experiments using a soft rectangular cuboid object.
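Editor's note: as a toy illustration of the kind of numerical soft-body computation described above (the paper's actual solver is not reproduced here), the sketch below advances a one-dimensional mass-spring chain step by step with explicit Euler integration; all constants are arbitrary.

```python
import numpy as np

# Toy 1D mass-spring chain standing in for soft tissue; the first node is
# anchored and a constant "instrument" force pushes on the last node.
n = 5              # number of nodes
k = 50.0           # spring stiffness (N/m)
c = 0.5            # damping coefficient
m = 0.1            # node mass (kg)
rest = 0.02        # rest length between nodes (m)
dt = 1e-3          # time step (s)

x = np.arange(n) * rest      # positions
v = np.zeros(n)              # velocities
f_ext = np.zeros(n)
f_ext[-1] = -0.2             # contact force from the instrument (N)

for _ in range(2000):        # 2 seconds of simulated time
    f = f_ext - c * v
    # Spring forces between neighboring nodes.
    stretch = np.diff(x) - rest
    f[:-1] += k * stretch    # pull/push from the right neighbor
    f[1:] -= k * stretch     # reaction on the left neighbor
    v += dt * f / m
    v[0] = 0.0               # first node stays anchored
    x += dt * v

print("displaced node positions (m):", np.round(x, 4))
```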
13
Minimally invasive and invasive liver surgery based on augmented reality training: a review of the literature. J Robot Surg 2022; 17:753-763. DOI: 10.1007/s11701-022-01499-2.
14
Chiou SY, Zhang ZY, Liu HL, Yan JL, Wei KC, Chen PY. Augmented Reality Surgical Navigation System for External Ventricular Drain. Healthcare (Basel) 2022; 10:1815. PMID: 36292263; PMCID: PMC9601392; DOI: 10.3390/healthcare10101815.
Abstract
Augmented reality surgery systems are playing an increasing role in the operating room, but applying such systems to neurosurgery presents particular challenges. In addition to using augmented reality technology to display the surgical target position in 3D in real time, the application must also display the scalpel entry point and scalpel orientation, accurately superimposed on the patient. To improve the intuitiveness, efficiency, and accuracy of external ventricular drain (EVD) surgery, this paper proposes an augmented reality surgical navigation system that accurately superimposes the surgical target position, scalpel entry point, and scalpel direction on a patient's head and displays these data on a tablet. The accuracy of the optical measurement system (NDI Polaris Vicra) was first independently tested and then complemented by the design of functions to help the surgeon quickly identify the surgical target position and determine the preferred entry point. A tablet PC was used to display the superimposed images of the surgical target, entry point, and scalpel on top of the patient, allowing for correct scalpel orientation. Digital Imaging and Communications in Medicine (DICOM) results from the patient's computed tomography were used to create a phantom and its associated AR model. This model was then imported into the application, which was executed on the tablet. In the preoperative phase, the technician first spent 5-7 min superimposing the virtual image of the head and the scalpel. The surgeon then took 2 min to identify the intended target position and entry point on the tablet, which then dynamically displayed the superimposed image of the head, target position, entry point, and scalpel (including the scalpel tip and scalpel orientation). Multiple experiments were successfully conducted on the phantom, along with six practical trials of clinical neurosurgical EVD. In the 2D-plane-superposition model, the optical measurement system (NDI Polaris Vicra) provided highly accurate visualization (2.01 ± 1.12 mm). In hospital-based clinical trials, the average technician preparation time was 6 min, while the surgeon required an average of 3.5 min to set the target and entry-point positions and accurately overlay the orientation with an NDI surgical stick. In the preparation phase, the average time required for DICOM-formatted image processing and program import was 120 ± 30 min. The accuracy of the designed augmented reality optical surgical navigation system met clinical requirements and can provide a visual and intuitive guide for neurosurgeons. The surgeon can use the tablet application to obtain real-time DICOM-formatted images of the patient, change the position of the surgical entry point, and instantly obtain an updated surgical path and surgical angle. The proposed design can serve as the basis for various augmented reality brain surgery navigation systems in the future.
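Editor's note: superimposing a virtual model on the patient, as described above, hinges on estimating a rigid transform between corresponding point sets. A minimal sketch of the standard SVD-based (Kabsch) solution on made-up fiducial coordinates follows; it is a generic illustration, not the paper's implementation.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)           # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # proper rotation (det = +1)
    t = dst_c - R @ src_c
    return R, t

# Hypothetical fiducials: model coordinates vs. tracked patient coordinates.
model = np.array([[0.0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]])
theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
patient = model @ R_true.T + np.array([10.0, -5.0, 2.0])

R, t = rigid_register(model, patient)
residual = np.linalg.norm(model @ R.T + t - patient, axis=1).mean()
print(f"mean fiducial registration error = {residual:.2e} mm")
```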
Affiliation(s)
- Shin-Yan Chiou: Department of Electrical Engineering, College of Engineering, Chang Gung University, Kwei-Shan, Taoyuan 333, Taiwan; Department of Nuclear Medicine, Linkou Chang Gung Memorial Hospital, Taoyuan 333, Taiwan; Department of Neurosurgery, Keelung Chang Gung Memorial Hospital, Keelung 204, Taiwan
- Zhi-Yue Zhang: Department of Electrical Engineering, College of Engineering, Chang Gung University, Kwei-Shan, Taoyuan 333, Taiwan
- Hao-Li Liu: Department of Electrical Engineering, National Taiwan University, Taipei 106, Taiwan
- Jiun-Lin Yan: Department of Neurosurgery, Keelung Chang Gung Memorial Hospital, Keelung 204, Taiwan
- Kuo-Chen Wei: Department of Neurosurgery, New Taipei City TuCheng Hospital, New Taipei City 236, Taiwan
- Pin-Yuan Chen: Department of Neurosurgery, Keelung Chang Gung Memorial Hospital, Keelung 204, Taiwan; School of Medicine, Chang Gung University, Kwei-Shan, Taoyuan 333, Taiwan
15
Krause K, Schumacher LY, Sachdeva UM. Advances in Imaging to Aid Segmentectomy for Lung Cancer. Surg Oncol Clin N Am 2022; 31:595-608. DOI: 10.1016/j.soc.2022.06.003.
16
Huber T, Huettl F, Hanke LI, Vradelis L, Heinrich S, Hansen C, Boedecker C, Lang H. Leberchirurgie 4.0 - OP-Planung, Volumetrie, Navigation und Virtuelle Realität [Liver surgery 4.0 - operation planning, volumetry, navigation and virtual reality]. Zentralbl Chir 2022; 147:361-368. DOI: 10.1055/a-1844-0549.
Abstract
Optimized conservative treatment, improved imaging, and advances in operative technique have markedly changed the operative spectrum and the benchmark for resectability in liver surgery over recent decades. Thanks to numerous technical developments, in particular three-dimensional segmentation, preoperative planning and intraoperative orientation, especially during complex procedures, can now be facilitated while taking the patient-specific anatomy into account. New technologies such as 3D printing and virtual and augmented reality offer additional ways of visualizing individual anatomy. Various intraoperative navigation options aim to make preoperative planning available in the operating room and thereby increase patient safety. This review provides an overview of the current state of the available technologies and an outlook on the operating room of the future.
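Editor's note: volumetry is one of the planning steps named above. As a generic aside (not taken from this article), a liver volume is commonly derived from a binary segmentation mask as voxel count times voxel volume; the sketch below illustrates this with invented array and spacing values.

```python
import numpy as np

# Hypothetical binary liver segmentation from a CT series:
# a 3D array where 1 marks liver voxels, plus the voxel spacing in mm.
rng = np.random.default_rng(42)
mask = (rng.random((64, 128, 128)) > 0.9).astype(np.uint8)  # stand-in mask
spacing_mm = (2.5, 0.8, 0.8)  # (slice thickness, row, column)

# Volume = number of segmented voxels * volume of one voxel.
voxel_volume_ml = np.prod(spacing_mm) / 1000.0  # mm^3 -> mL
liver_volume_ml = mask.sum() * voxel_volume_ml
print(f"segmented volume ~= {liver_volume_ml:.0f} mL")
```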
Affiliation(s)
- Tobias Huber, Florentine Huettl, Laura Isabel Hanke, Lukas Vradelis, Stefan Heinrich, Christian Boedecker, Hauke Lang: Klinik für Allgemein-, Viszeral- und Transplantationschirurgie, Universitätsmedizin Mainz, Mainz, Germany
- Christian Hansen: Fakultät für Informatik, Otto von Guericke Universität Magdeburg, Magdeburg, Germany
17
Soriero D, Batistotti P, Malinaric R, Pertile D, Massobrio A, Epis L, Sperotto B, Penza V, Mattos LS, Sartini M, Cristina ML, Nencioni A, Scabini S. Efficacy of High-Resolution Preoperative 3D Reconstructions for Lesion Localization in Oncological Colorectal Surgery - First Pilot Study. Healthcare (Basel) 2022; 10:900. PMID: 35628036; PMCID: PMC9141148; DOI: 10.3390/healthcare10050900.
Abstract
When planning an operation, surgeons usually rely on traditional 2D imaging. Moreover, colonic neoplastic lesions are not always easy to locate macroscopically, even during surgery. A 3D virtual model may allow surgeons to localize lesions more precisely and to better visualize the anatomy. In this study, we primarily analyzed and discussed the clinical impact of using such 3D models in colorectal surgery. This is a monocentric prospective observational pilot study that included 14 consecutive patients who presented with colorectal lesions and an indication for surgical therapy. A staging computed tomography (CT)/magnetic resonance imaging (MRI) scan and a colonoscopy were performed for each patient, and the information gained from them was used to produce a 3D rendering. The 2D images were shown to the surgeon performing the operation, while the 3D reconstructions were shown to a second surgeon. Both of them had to locate the lesion and describe which procedure they would have performed; we then compared their answers with one another and with the intraoperative and histopathological findings. Lesion localization based on the 3D models was accurate in 100% of cases, in contrast to conventional 2D CT scans, which could not detect the lesion in two patients (in these cases, lesion localization was based on colonoscopy). The 3D model reconstruction allowed excellent concordance between the estimated and the actual location of the lesion, allowing the surgeon to correctly plan the procedure with excellent results. Larger clinical studies are certainly required.
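Editor's note: surface models like the 3D renderings described above are commonly extracted from CT volumes with the marching cubes algorithm. The sketch below shows the generic scikit-image call on a synthetic volume; it is not the reconstruction pipeline the authors used.

```python
import numpy as np
from skimage import measure

# Synthetic stand-in for a segmented CT volume: a solid sphere of "tissue".
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(np.float32)

# Marching cubes extracts a triangle mesh at the given iso-level;
# spacing carries the CT voxel size (mm) into mesh coordinates.
verts, faces, normals, values = measure.marching_cubes(
    volume, level=0.5, spacing=(1.0, 0.7, 0.7)
)

print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
# verts/faces can then be exported (e.g., as STL) for rendering or 3D printing.
```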
Affiliation(s)
- Domenico Soriero, Rafaela Malinaric, Davide Pertile, Andrea Massobrio, Lorenzo Epis, Beatrice Sperotto, Stefano Scabini: General and Oncologic Surgery, IRCCS Ospedale Policlinico San Martino, 16132 Genoa, Italy
- Paola Batistotti: Department of Integrated Surgical and Diagnostic Sciences, University of Genoa, 16132 Genoa, Italy
- Rafaela Malinaric: also Urological Clinical Unit, San Martino Hospital, 16132 Genoa, Italy
- Veronica Penza, Leonardo S. Mattos: Biomedical Robotics Lab, Department of Advanced Robotics, Istituto Italiano di Tecnologia, 16163 Genoa, Italy
- Marina Sartini, Maria Luisa Cristina: Department of Health Sciences, University of Genoa, Via Pastore 1, 16132 Genoa, Italy; Operating Unit Hospital Hygiene, Galliera Hospital, Mura delle Cappuccine 14, 16128 Genoa, Italy
- Alessio Nencioni: Section of Geriatrics, Department of Internal Medicine and Medical Specialties (DIMI), University of Genoa, 16132 Genoa, Italy; Gerontology and Geriatrics, IRCCS Ospedale Policlinico San Martino, 16132 Genoa, Italy
18
Christou CD, Tsoulfas G. Role of three-dimensional printing and artificial intelligence in the management of hepatocellular carcinoma: Challenges and opportunities. World J Gastrointest Oncol 2022; 14:765-793. PMID: 35582107; PMCID: PMC9048537; DOI: 10.4251/wjgo.v14.i4.765.
Abstract
Hepatocellular carcinoma (HCC) constitutes the fifth most frequent malignancy worldwide and the third most frequent cause of cancer-related deaths. Currently, treatment selection is based on the stage of the disease. Emerging fields such as three-dimensional (3D) printing, 3D bioprinting, artificial intelligence (AI), and machine learning (ML) could lead to evidence-based, individualized management of HCC. In this review, we comprehensively report the current applications of 3D printing, 3D bioprinting, and AI/ML-based models in HCC management; we outline the significant challenges to the broad use of these novel technologies in the clinical setting with the goal of identifying means to overcome them, and finally, we discuss the opportunities that arise from these applications. Notably, regarding 3D printing and bioprinting-related challenges, we elaborate on cost and cost-effectiveness, cell sourcing, cell viability, safety, accessibility, regulation, and legal and ethical concerns. Similarly, regarding AI/ML-related challenges, we elaborate on intellectual property, liability, intrinsic biases, data protection, cybersecurity, ethical challenges, and transparency. Our findings show that AI and 3D printing applications in HCC management and healthcare, in general, are steadily expanding; thus, these technologies will be integrated into the clinical setting sooner or later. Therefore, we believe that physicians need to become familiar with these technologies and prepare to engage with them constructively.
Affiliation(s)
- Chrysanthos D Christou, Georgios Tsoulfas: Department of Transplantation Surgery, Hippokration General Hospital, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki 54622, Greece
19
Application of Augmented Reality Navigation in Treatment of Fibrous Dysplasia. J Craniofac Surg 2021; 33:1317-1321. PMID: 34873103; DOI: 10.1097/scs.0000000000008391.
Abstract
OBJECTIVE To reduce the possibility of accidental injury to neurovascular structures and other important tissues, this study conducted preoperative design and intraoperative guidance for fibrous dysplasia using augmented reality technology. METHODS Five patients with fibrous dysplasia were selected; each underwent a three-dimensional (3D) computed tomography (CT) scan, from which a 3D model was reconstructed. To implement the navigation plan comprehensively, a guide plate (composed of a card groove, a connector, and a fixator) was designed and 3D printed. Three-dimensional software was used to unify the coordinates of the surgical plan and the guide plate, and the relative position was fixed. The virtual-to-real registration was then completed on the physical model. Pattern recognition technology was used to identify predefined markers in the video images before the operation. Finally, the registration results were superimposed onto the surgical field of view to guide and prompt the surgeons. RESULTS In this study, augmented reality-based navigation was used in the surgical treatment of five patients with fibrous dysplasia. The 3D navigation information was displayed in real time in the operative field. The operations were accurate, and the postoperative outcomes were good. CONCLUSIONS This paper introduces an effective visual navigation method for the surgical treatment of fibrous dysplasia. The augmented reality-based navigation system achieves individualized, precise treatment by displaying 3D navigation directly in the surgical field. It is an effective auxiliary method for future research on craniofacial surgery.
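Editor's note: for readers unfamiliar with marker-based registration of the kind described above, the sketch below shows a generic OpenCV ArUco detection and pose-estimation loop. The file name, camera intrinsics, and marker size are hypothetical; this is not the authors' system.

```python
import cv2
import numpy as np

# Generic ArUco marker detection + pose estimation (needs opencv-contrib;
# the ArucoDetector class is the OpenCV >= 4.7 API).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("operative_field.png")  # hypothetical video frame
if frame is None:
    raise SystemExit("no input frame found")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = detector.detectMarkers(gray)

# Hypothetical camera intrinsics; in practice these come from calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

half = 0.03 / 2  # assumed marker side length: 3 cm
obj_pts = np.array([[-half,  half, 0], [half,  half, 0],
                    [half, -half, 0], [-half, -half, 0]], dtype=np.float32)

if ids is not None:
    for c in corners:
        # Marker pose in camera coordinates; this transform is what lets
        # virtual navigation content stay anchored to the surgical field.
        ok, rvec, tvec = cv2.solvePnP(obj_pts, c.reshape(4, 2), K, dist)
        if ok:
            print("marker translation (m):", tvec.ravel())
```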
20
Chytas D, Nikolaou VS. Mixed reality for visualization of orthopedic surgical anatomy. World J Orthop 2021; 12:727-731. PMID: 34754828; PMCID: PMC8554346; DOI: 10.5312/wjo.v12.i10.727.
Abstract
In the modern era, preoperative planning is substantially facilitated by artificial reality technologies, which permit a better understanding of patient anatomy, thus increasing the safety and accuracy of surgical interventions. In the field of orthopedic surgery, the increase in safety and accuracy improves treatment quality and orthopedic patient outcomes. Artificial reality technologies, which include virtual reality (VR), augmented reality (AR), and mixed reality (MR), use digital images obtained from computed tomography or magnetic resonance imaging. VR replaces the user's physical environment with one that is computer generated. AR and MR have been defined as technologies that permit the fusing of the physical with the virtual environment, enabling the user to interact with both physical and virtual objects. MR has been defined as a technology that, in contrast to AR, enables users to visualize the depth and perspective of the virtual models. We aimed to shed light on the role that MR can play in the visualization of orthopedic surgical anatomy. The literature suggests that MR could be a valuable tool in orthopedic surgeon's hands for visualization of the anatomy. However, we remark that confusion exists in the literature concerning the characteristics of MR. Thus, a more clear description of MR is needed in orthopedic research, so that the potential of this technology can be more deeply understood.
Affiliation(s)
- Dimitrios Chytas: Department of Physiotherapy, University of Peloponnese, Sparta 23100, Greece
- Vasileios S Nikolaou: 2nd Department of Orthopedics, National and Kapodistrian University of Athens, Athens 15124, Greece
21
Christou CD, Tsoulfas G. Challenges and opportunities in the application of artificial intelligence in gastroenterology and hepatology. World J Gastroenterol 2021; 27:6191-6223. PMID: 34712027; PMCID: PMC8515803; DOI: 10.3748/wjg.v27.i37.6191.
Abstract
Artificial intelligence (AI) is an umbrella term used to describe a cluster of interrelated fields. Machine learning (ML) refers to a model that learns from past data to predict future data. Medicine and particularly gastroenterology and hepatology, are data-rich fields with extensive data repositories, and therefore fruitful ground for AI/ML-based software applications. In this study, we comprehensively review the current applications of AI/ML-based models in these fields and the opportunities that arise from their application. Specifically, we refer to the applications of AI/ML-based models in prevention, diagnosis, management, and prognosis of gastrointestinal bleeding, inflammatory bowel diseases, gastrointestinal premalignant and malignant lesions, other nonmalignant gastrointestinal lesions and diseases, hepatitis B and C infection, chronic liver diseases, hepatocellular carcinoma, cholangiocarcinoma, and primary sclerosing cholangitis. At the same time, we identify the major challenges that restrain the widespread use of these models in healthcare in an effort to explore ways to overcome them. Notably, we elaborate on the concerns regarding intrinsic biases, data protection, cybersecurity, intellectual property, liability, ethical challenges, and transparency. Even at a slower pace than anticipated, AI is infiltrating the healthcare industry. AI in healthcare will become a reality, and every physician will have to engage with it by necessity.
Affiliation(s)
- Chrysanthos D Christou: Organ Transplant Unit, Hippokration General Hospital, Aristotle University of Thessaloniki, Thessaloniki 54622, Greece
- Georgios Tsoulfas: Organ Transplant Unit, Hippokration General Hospital, Aristotle University of Thessaloniki, Thessaloniki 54622, Greece

22
Wahba R, Thomas MN, Bunck AC, Bruns CJ, Stippel DL. Clinical use of augmented reality, mixed reality, three-dimensional-navigation and artificial intelligence in liver surgery. Artif Intell Gastroenterol 2021; 2:94-104. [DOI: 10.35712/aig.v2.i4.94] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/09/2021] [Revised: 07/10/2021] [Accepted: 08/27/2021] [Indexed: 02/06/2023] Open
Abstract
A precise knowledge of intra-parenchymal vascular and biliary architecture, and of the location of lesions in relation to this complex anatomy, is indispensable for liver surgery. Virtual three-dimensional (3D) reconstruction models built from computed tomography/magnetic resonance imaging scans of the liver can therefore be helpful for visualization. Augmented reality, mixed reality and 3D navigation can transfer such 3D image data directly into the operating theater to support the surgeon. This review examines the literature on the clinical and intraoperative use of these image guidance techniques in liver surgery and gives the reader the opportunity to learn about them. Augmented reality and mixed reality have been shown to be feasible in open and minimally invasive liver surgery. 3D navigation facilitated the targeting of intraparenchymal lesions. The existing data are limited to small cohorts and to descriptions of technical details, e.g., the accordance between the virtual 3D model and the real liver anatomy. Randomized controlled trials reporting clinical or oncological outcomes are not available. To date, there is no intraoperative application of artificial intelligence in liver surgery. The usability of all these sophisticated image guidance tools has still not reached the degree of immersion that would be necessary for widespread use in the daily surgical routine. Although many challenges remain, augmented reality, mixed reality, 3D navigation and artificial intelligence are emerging fields in hepato-biliary surgery.
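As a rough sketch of the visualization pipeline such reviews describe (not the authors' implementation), the snippet below extracts a triangle mesh from a segmented volume with marching cubes; the synthetic sphere stands in for a segmented liver volume.

```python
# Minimal sketch: threshold-based marching cubes with scikit-image.
# The synthetic sphere stands in for a segmented organ; in practice the
# volume and voxel spacing would come from the patient's CT series.
import numpy as np
from skimage.measure import marching_cubes

# Synthetic "CT" volume: a bright sphere inside a dark background.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2).astype(np.float32)

# Extract a triangle mesh at the 0.5 iso-surface; spacing would normally be
# the DICOM voxel size so the mesh is expressed in millimetres.
verts, faces, normals, values = marching_cubes(volume, level=0.5, spacing=(1.0, 1.0, 1.0))
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
```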
Affiliation(s)
- Roger Wahba: Department of General, Visceral, Cancer and Transplantation Surgery, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne 50937, Germany
- Michael N Thomas: Department of General, Visceral, Cancer and Transplantation Surgery, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne 50937, Germany
- Alexander C Bunck: Department of Diagnostic and Interventional Radiology, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne 50937, Germany
- Christiane J Bruns: Department of General, Visceral, Cancer and Transplantation Surgery, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne 50937, Germany
- Dirk L Stippel: Department of General, Visceral, Cancer and Transplantation Surgery, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne 50937, Germany

23
Schneider C, Allam M, Stoyanov D, Hawkes DJ, Gurusamy K, Davidson BR. Performance of image guided navigation in laparoscopic liver surgery - A systematic review. Surg Oncol 2021; 38:101637. [PMID: 34358880 DOI: 10.1016/j.suronc.2021.101637] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2021] [Revised: 07/04/2021] [Accepted: 07/24/2021] [Indexed: 02/07/2023]
Abstract
BACKGROUND Compared to open surgery, minimally invasive liver resection has improved short-term outcomes. It is, however, technically more challenging. Navigated image guidance systems (IGS) are being developed to overcome these challenges. The aim of this systematic review is to provide an overview of their current capabilities and limitations. METHODS Medline, Embase and Cochrane databases were searched using free-text terms and corresponding controlled vocabulary. Titles and abstracts of retrieved articles were screened for inclusion criteria. Due to the heterogeneity of the retrieved data it was not possible to conduct a meta-analysis; therefore, results are presented in tabulated and narrative format. RESULTS Out of 2,015 retrieved articles, 17 pre-clinical and 33 clinical papers met the inclusion criteria. Data from 24 articles that reported on accuracy indicate that in recent years navigation accuracy has been in the range of 8-15 mm. Due to discrepancies in evaluation methods it is difficult to compare accuracy metrics between different systems. Surgeon feedback suggests that current state-of-the-art IGS may be useful as a supplementary navigation tool, especially for small liver lesions that are difficult to locate. They are, however, not able to reliably localise all relevant anatomical structures. Only one article investigated the impact of IGS on clinical outcomes. CONCLUSIONS Further improvements in navigation accuracy are needed to enable reliable visualisation of tumour margins with the precision required for oncological resections. To enhance comparability between different IGS it is crucial to reach a consensus on the assessment of navigation accuracy as a minimum reporting standard.
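Since navigation accuracy is the review's central metric, a minimal sketch of how it is commonly quantified may help: a rigid transform is fitted to paired fiducial points by the Kabsch/SVD method, and the target registration error (TRE) is then measured at a point not used in the fit. All point values below are synthetic.

```python
# Synthetic fiducials; a known "ground-truth" transform lets us measure TRE exactly.
import numpy as np

def rigid_fit(src, dst):
    """Least-squares R, t such that dst ~ R @ src + t (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, c_dst - R @ c_src

rng = np.random.default_rng(1)
a = np.radians(10.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
t_true = np.array([5.0, -3.0, 10.0])

fiducials = rng.uniform(0, 100, size=(6, 3))                     # image space (mm)
measured = fiducials @ R_true.T + t_true + rng.normal(scale=0.5, size=(6, 3))

R, t = rigid_fit(fiducials, measured)
target = np.array([50.0, 50.0, 50.0])                            # e.g., a tumour centre
tre = np.linalg.norm((R @ target + t) - (R_true @ target + t_true))
print(f"target registration error: {tre:.2f} mm")
```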
Affiliation(s)
- C Schneider: Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
- M Allam: Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK; General Surgery Department, Tanta University, Egypt
- D Stoyanov: Department of Computer Science, University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK
- D J Hawkes: Centre for Medical Image Computing (CMIC), University College London, London, UK; Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- K Gurusamy: Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
- B R Davidson: Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK

24
A narrative review on endopancreatic interventions: an innovative access to the pancreas. Journal of Pancreatology 2021. [DOI: 10.1097/jp9.0000000000000069] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open

25
Zhang W, Zhu W, Yang J, Xiang N, Zeng N, Hu H, Jia F, Fang C. Augmented Reality Navigation for Stereoscopic Laparoscopic Anatomical Hepatectomy of Primary Liver Cancer: Preliminary Experience. Front Oncol 2021; 11:663236. [PMID: 33842378 PMCID: PMC8027474 DOI: 10.3389/fonc.2021.663236] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Accepted: 03/11/2021] [Indexed: 12/17/2022] Open
Abstract
BACKGROUND Accurate determination of intrahepatic anatomy remains challenging for laparoscopic anatomical hepatectomy (LAH). Laparoscopic augmented reality navigation (LARN) is expected to facilitate LAH of primary liver cancer (PLC) by identifying the exact location of tumors and vessels. This study aimed to evaluate the safety and effectiveness of our independently developed LARN system in LAH of PLC. METHODS From May 2018 to July 2020, the study included 85 PLC patients who underwent three-dimensional (3D) LAH. According to whether LARN was performed during the operation, the patients were divided into the intraoperative navigation (IN) group and the non-intraoperative navigation (NIN) group. We compared preoperative data, perioperative results and postoperative complications between the two groups, and introduce our preliminary experience with this novel technology in LAH. RESULTS There were 44 and 41 PLC patients in the IN group and the NIN group, respectively. No significant differences were found in preoperative characteristics or in any of the resection-related complications between the two groups (all P > 0.05). Compared with the NIN group, the IN group had significantly less operative bleeding (P = 0.002), lower delta Hb% (P = 0.039), a lower blood transfusion rate (P < 0.001), and a reduced postoperative hospital stay (P = 0.003). For the IN group, the successful fusion of the simulated surgical planning with the operative scene helped to determine the extent of resection. CONCLUSIONS LARN contributed to the identification of important anatomical structures during LAH of PLC. It reduced vascular injury and accelerated postoperative recovery, showing potential application prospects in liver surgery.
Affiliation(s)
- Weiqi Zhang: Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Wen Zhu: Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Jian Yang: Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Nan Xiang: Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Ning Zeng: Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Haoyu Hu: Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Fucang Jia: Research Laboratory for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Chihua Fang: Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China

26
Chen J, Wan Z, Zhang J, Li W, Chen Y, Li Y, Duan Y. Medical image segmentation and reconstruction of prostate tumor based on 3D AlexNet. Comput Methods Programs Biomed 2021; 200:105878. [PMID: 33308904 DOI: 10.1016/j.cmpb.2020.105878] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/24/2020] [Accepted: 11/22/2020] [Indexed: 06/12/2023]
Abstract
BACKGROUND Prostate cancer is a disease with a high incidence of tumors in men. Due to the long incubation time and insidious course, early diagnosis is difficult, and imaging diagnosis in particular is challenging. In actual clinical practice, manual segmentation by medical experts is mainly used, which is time-consuming and labor-intensive and relies heavily on the experience and ability of the experts. Rapid, accurate and repeatable segmentation of the prostate area therefore remains a challenging problem worth addressing with automated segmentation based on the 3D AlexNet network. METHOD Taking medical images of prostate cancer as the entry point, three-dimensional data are introduced into a deep learning convolutional neural network. This paper proposes a 3D AlexNet method for the automatic segmentation of prostate cancer magnetic resonance images and compares its performance with the general-purpose networks ResNet-50 and Inception-V4. RESULTS Based on training samples of magnetic resonance images from 500 prostate cancer patients, a 3D AlexNet with a simple structure and excellent performance was established through adaptive improvement of the classic AlexNet. The accuracy was 0.921, the specificity 0.896, the sensitivity 0.902, and the area under the receiver operating characteristic curve (AUC) 0.964. The mean absolute distance (MAD) between the segmentation result and the medical experts' gold standard was 0.356 mm, the Hausdorff distance (HD) 1.024 mm, and the Dice similarity coefficient 0.9768. CONCLUSION The improved 3D AlexNet can automatically complete the structured segmentation of prostate magnetic resonance images. Compared with traditional segmentation methods and other deep segmentation methods, the 3D AlexNet network is superior in training time, parameter count and network performance evaluation, which demonstrates the effectiveness of this method.
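For readers unfamiliar with the metrics quoted above, the following sketch computes the Dice similarity coefficient and the Hausdorff distance on toy 3D binary masks; it is illustrative only and not the paper's evaluation code.

```python
# Sketch of the two overlap/distance metrics reported above, computed on toy
# 3D binary masks with NumPy/SciPy.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

pred = np.zeros((32, 32, 32), bool); pred[8:20, 8:20, 8:20] = True   # model output
gold = np.zeros((32, 32, 32), bool); gold[9:21, 9:21, 9:21] = True   # expert label

# Dice similarity coefficient: 2|A and B| / (|A| + |B|)
dice = 2.0 * np.logical_and(pred, gold).sum() / (pred.sum() + gold.sum())

# Symmetric Hausdorff distance between the two voxel point sets (in voxels;
# multiply by the voxel spacing to obtain millimetres).
p, g = np.argwhere(pred), np.argwhere(gold)
hd = max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
print(f"Dice = {dice:.4f}, Hausdorff = {hd:.2f} voxels")
```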
Affiliation(s)
- Jun Chen: Department of Urology, The Second Affiliated Hospital of Zhejiang Chinese Medical University, No.318 Chaowang Road, Gongshu District, Hangzhou 310005, China
- Zhechao Wan: Department of Urology, Zhuji Central Hospital, No.98 Zhugong Road, Jiyang Street, Zhuji City, 311800, Zhejiang Province, China
- Jiacheng Zhang: The 2nd Clinical Medical College, Zhejiang Chinese Medical University, 548 Bin Wen Road, Hangzhou 310053, China
- Wenhua Li: Department of Radiology, Xinhua Hospital affiliated to Shanghai Jiao Tong University School of Medicine, 1665 Kong Jiang Road, Shanghai 200092, China
- Yanbing Chen: Computer Application Technology, School of Applied Sciences, Macao Polytechnic Institute, Macao SAR 999078, China
- Yuebing Li: Department of Anaesthesiology, The Second Affiliated Hospital of Zhejiang Chinese Medical University, No.318 Chaowang Road, Gongshu District, Hangzhou 310005, China
- Yue Duan: Department of Urology, The Second Affiliated Hospital of Zhejiang Chinese Medical University, No.318 Chaowang Road, Gongshu District, Hangzhou 310005, China

27
Miyata A, Arita J, Kawaguchi Y, Hasegawa K, Kokudo N. Simulation and navigation liver surgery: an update after 2,000 virtual hepatectomies. Glob Health Med 2020; 2:298-305. [PMID: 33330824 PMCID: PMC7731191 DOI: 10.35772/ghm.2020.01045] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Revised: 07/28/2020] [Accepted: 07/31/2020] [Indexed: 04/24/2023]
Abstract
The advent of preoperative 3-dimensional (3D) simulation software has made a variety of unprecedented surgical simulations possible. Since 2004, we have performed more than 2,000 preoperative simulations at the University of Tokyo Hospital, and they have enabled us to obtain a great deal of information, such as the detailed shape of liver segments, the precise volume of each segment, and the volume of hepatic venous drainage areas. As a result, we have been able to perform more aggressive and complicated surgery safely. The next step is to create a navigation system that will accurately reproduce the preoperative plan. Real-time virtual sonography (RVS) is a navigation system that provides fusion images of ultrasonography and reconstructed computed tomography images or magnetic resonance images. The RVS system facilitates the surgeon's interpretation of ultrasound images and the detection of tumors that are difficult to find by ultrasound alone. In the near future, surgical navigation systems may evolve to the point where they will be able to inform surgeons intraoperatively in real time not only about intrahepatic structures, such as vessels and tumors, but also about the portal territory, hepatic vein drainage areas, and resection lines that have been planned preoperatively.
Affiliation(s)
- Akinori Miyata: Hepato-Biliary-Pancreatic Surgery Division, Artificial Organ and Transplantation Division, Department of Surgery, Graduate School of Medicine, The University of Tokyo, Japan
- Junichi Arita: Hepato-Biliary-Pancreatic Surgery Division, Artificial Organ and Transplantation Division, Department of Surgery, Graduate School of Medicine, The University of Tokyo, Japan
- Yoshikuni Kawaguchi: Hepato-Biliary-Pancreatic Surgery Division, Artificial Organ and Transplantation Division, Department of Surgery, Graduate School of Medicine, The University of Tokyo, Japan
- Kiyoshi Hasegawa (corresponding author): Hepato-Biliary-Pancreatic Surgery Division, Artificial Organ and Transplantation Division, Department of Surgery, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Norihiro Kokudo: Hepato-Biliary-Pancreatic Surgery Division, Department of Surgery, National Center for Global Health and Medicine, Tokyo, Japan

28
Abbas AE. Commentary: Stereothoracoscopic Lobectomy. One More Step Toward Surgical Augmented Reality. Semin Thorac Cardiovasc Surg 2020; 32:1097-1098. [PMID: 32846230 DOI: 10.1053/j.semtcvs.2020.07.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2020] [Accepted: 07/18/2020] [Indexed: 11/11/2022]
Affiliation(s)
- Abbas E Abbas: Division of Thoracic Surgery, Department of Thoracic Medicine and Surgery, Temple University Hospital and Fox Chase Comprehensive Cancer Center, Lewis Katz School of Medicine, Philadelphia, Pennsylvania

29
Kosieradzki M, Lisik W, Gierwiało R, Sitnik R. Applicability of Augmented Reality in an Organ Transplantation. Ann Transplant 2020; 25:e923597. [PMID: 32732862 PMCID: PMC7418780 DOI: 10.12659/aot.923597] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Accepted: 02/20/2020] [Indexed: 12/20/2022] Open
Abstract
Augmented reality (AR) delivers virtual information or some of its elements to the real world. This technology, which has been used primarily for entertainment and military applications, has vigorously entered medicine, especially radiology and surgery, yet has never been used in organ transplantation. AR could be useful in training transplant surgeons, promoting organ donation, graft retrieval and allocation, microscopic diagnosis of rejection, and treatment of complications and post-transplantation neoplasms. The availability of AR display tools such as smartphone screens and head-mounted goggles, the accessibility of software for automated image segmentation and 3-dimensional reconstruction, and algorithms allowing registration make augmented reality an attractive tool for surgery, including transplantation. The shortage of hospital IT specialists and insufficient investment from medical equipment manufacturers in the development of AR technology remain the most significant obstacles to its broader application.
Affiliation(s)
- Maciej Kosieradzki: Department of General and Transplantation Surgery, The Medical University of Warsaw, Warsaw, Poland
- Wojciech Lisik: Department of General and Transplantation Surgery, The Medical University of Warsaw, Warsaw, Poland
- Radosław Gierwiało: Virtual Reality Techniques Division, Institute of Micromechanics and Photonics, Faculty of Mechatronics, Warsaw University of Technology, Warsaw, Poland
- Robert Sitnik: Virtual Reality Techniques Division, Institute of Micromechanics and Photonics, Faculty of Mechatronics, Warsaw University of Technology, Warsaw, Poland

30
Qian L, Wu JY, DiMaio SP, Navab N, Kazanzides P. A Review of Augmented Reality in Robotic-Assisted Surgery. IEEE Trans Med Robot Bionics 2020. [DOI: 10.1109/tmrb.2019.2957061] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]

31
Pérez-Pachón L, Poyade M, Lowe T, Gröning F. Image Overlay Surgery Based on Augmented Reality: A Systematic Review. Adv Exp Med Biol 2020; 1260:175-195. [PMID: 33211313 DOI: 10.1007/978-3-030-47483-6_10] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Augmented Reality (AR) applied to surgical guidance is gaining relevance in clinical practice. AR-based image overlay surgery (i.e. the accurate overlay of patient-specific virtual images onto the body surface) helps surgeons to transfer image data produced during the planning of the surgery (e.g. the correct resection margins of tissue flaps) to the operating room, thus increasing accuracy and reducing surgery times. We systematically reviewed 76 studies published between 2004 and August 2018 to explore which existing tracking and registration methods and technologies allow healthcare professionals and researchers to develop and implement these systems in-house. Most studies used non-invasive markers to automatically track a patient's position, as well as customised algorithms, tracking libraries or software development kits (SDKs) to compute the registration between patient-specific 3D models and the patient's body surface. Few studies combined the use of holographic headsets, SDKs and user-friendly game engines, and described portable and wearable systems that combine tracking, registration, hands-free navigation and direct visibility of the surgical site. Most accuracy tests included a low number of subjects and/or measurements and did not normally explore how these systems affect surgery times and success rates. We highlight the need for more procedure-specific experiments with a sufficient number of subjects and measurements and including data about surgical outcomes and patients' recovery. Validation of systems combining the use of holographic headsets, SDKs and game engines is especially interesting as this approach facilitates an easy development of mobile AR applications and thus the implementation of AR-based image overlay surgery in clinical practice.
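The registration step these overlay systems share can be sketched with OpenCV: given 3D marker points and their tracked 2D image locations, the camera pose is estimated with solvePnP and virtual geometry is projected into the camera view. The intrinsics and point data below are made up for illustration and are not taken from any of the reviewed systems.

```python
# Hedged sketch of marker-based AR overlay registration with OpenCV.
# Camera intrinsics, marker geometry and pixel locations are all invented.
import numpy as np
import cv2

object_pts = np.array([[0, 0, 0], [50, 0, 0], [50, 50, 0], [0, 50, 0]], np.float32)  # marker corners (mm)
image_pts = np.array([[320, 240], [420, 238], [424, 342], [318, 344]], np.float32)   # tracked pixels
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], np.float32)                  # intrinsics
dist = np.zeros(5, np.float32)                                                        # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)  # camera pose
# Project a virtual point (e.g., a planned resection-margin vertex) for overlay.
margin_pt = np.array([[25.0, 25.0, -10.0]], np.float32)
pixels, _ = cv2.projectPoints(margin_pt, rvec, tvec, K, dist)
print("overlay pixel:", pixels.ravel())
```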
Affiliation(s)
- Laura Pérez-Pachón: School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK
- Matthieu Poyade: School of Simulation and Visualisation, Glasgow School of Art, Glasgow, UK
- Terry Lowe: School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK; Head and Neck Oncology Unit, Aberdeen Royal Infirmary (NHS Grampian), Aberdeen, UK
- Flora Gröning: School of Medicine, Medical Sciences and Nutrition, University of Aberdeen, Aberdeen, UK

32
Yasuda J, Okamoto T, Onda S, Fujioka S, Yanaga K, Suzuki N, Hattori A. Application of image-guided navigation system for laparoscopic hepatobiliary surgery. Asian J Endosc Surg 2020; 13:39-45. [PMID: 30945434 DOI: 10.1111/ases.12696] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/01/2018] [Revised: 01/10/2019] [Accepted: 01/16/2019] [Indexed: 01/16/2023]
Abstract
BACKGROUND To improve operative safety, preoperative simulation has become routine practice in hepatobiliary and pancreatic (HBP) surgery. The use of intraoperative ultrasonography (IOUS) is essential in HBP surgery, but its use is limited in laparoscopic surgery (LS), for which a new intraoperative system is needed. We have been developing an image-guided navigation system (IG-NS) for open HBP surgery since 2006 and have now applied our system to LS. The aim of this study is to evaluate the results of the clinical application of IG-NS in LS. MATERIALS AND METHODS Eight patients underwent LS using IG-NS; LS consisted of cholecystectomy and hepatectomy in four patients each. After registration, the 3D models were superimposed on the surgical field, and we performed LS while observing the navigation image. Moreover, we developed a support system for these operations. RESULTS The average registration error was 8.8 mm for LS. Repeated registration was effective against organ deformation and improved the precision of IG-NS. By using various countermeasures, identification of the tumor's position and setting of the resection line became easy. CONCLUSION As IG-NS provided real-time, detailed and intuitive information, this intraoperative assist system may be an effective tool in LS.
Affiliation(s)
- Jungo Yasuda: Department of Surgery, The Jikei University School of Medicine, Tokyo, Japan
- Shinji Onda: Department of Surgery, The Jikei University School of Medicine, Tokyo, Japan
- Shuuichi Fujioka: Department of Surgery, The Jikei University School of Medicine, Tokyo, Japan
- Katsuhiko Yanaga: Department of Surgery, The Jikei University School of Medicine, Tokyo, Japan
- Naoki Suzuki: Institute for High Dimensional Medical Imaging, The Jikei University School of Medicine, Tokyo, Japan
- Asaki Hattori: Institute for High Dimensional Medical Imaging, The Jikei University School of Medicine, Tokyo, Japan

33
Uchida T, Sadahiro M. Minimally invasive cardiac surgery using three-dimensional computed tomography image projection. Chirurgia (Bucur) 2019. [DOI: 10.23736/s0394-9508.18.04893-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]

34
Bailer R, Martin RC. The effectiveness of using 3D reconstruction software for surgery to augment surgical education. Am J Surg 2019; 218:1016-1021. [DOI: 10.1016/j.amjsurg.2019.07.045] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2018] [Revised: 07/24/2019] [Accepted: 07/30/2019] [Indexed: 12/11/2022]

35
Wu X, Liu R, Xu S, Yang C, Yang S, Shao Z, Li S, Ye Z. Feasibility of mixed reality-based intraoperative three-dimensional image-guided navigation for atlanto-axial pedicle screw placement. Proc Inst Mech Eng H 2019; 233:1310-1317. [PMID: 31617820 DOI: 10.1177/0954411919881255] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
This study aimed to evaluate the safety and accuracy of mixed reality-based intraoperative three-dimensional navigated pedicle screws in three-dimensional printed models of the fractured upper cervical spine. A total of 27 cervical models from patients with upper cervical spine fractures formed the study group. All C1 and C2 pedicle screws were inserted under a mixed reality-based intraoperative three-dimensional image-guided navigation system. The accuracy and safety of the pedicle screw placement were evaluated on the basis of postoperative computerized tomography scans. A total of 108 pedicle screws were properly inserted into the cervical three-dimensional models under mixed reality-based navigation, including 54 C1 pedicle screws and 54 C2 pedicle screws. Analysis of the dimensional parameters of each pedicle at the C1/C2 level showed no statistically significant differences between the ideal and the actual entry points, inclined angles, and tailed angles. No screw was misplaced outside the pedicle of the three-dimensional printed model, and no ionizing X-ray radiation was used during screw placement under navigation. It is easy and safe to place C1/C2 pedicle screws under mixed reality surgical navigation, which is feasible for upper cervical spine fractures and improves the safety and accuracy of C1/C2 pedicle screw insertion.
Affiliation(s)
- Xinghuo Wu: Department of Orthopaedic Surgery, Wuhan Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Rong Liu: Department of Orthopaedic Surgery, Puren Hospital of Wuhan, Wuhan University of Science and Technology, Wuhan, China
- Song Xu: Department of Orthopaedic Surgery, Wuhan Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Cao Yang: Department of Orthopaedic Surgery, Wuhan Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Shuhua Yang: Department of Orthopaedic Surgery, Wuhan Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zengwu Shao: Department of Orthopaedic Surgery, Wuhan Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Suyun Li: Department of Orthopaedic Surgery, Wuhan Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhewei Ye: Department of Orthopaedic Surgery, Wuhan Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China

36
Azizi Koutenaei B, Fotouhi J, Alambeigi F, Wilson E, Guler O, Oetgen M, Cleary K, Navab N. Radiation-free methods for navigated screw placement in slipped capital femoral epiphysis surgery. Int J Comput Assist Radiol Surg 2019; 14:2199-2210. [PMID: 31321601 DOI: 10.1007/s11548-019-02026-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2018] [Accepted: 07/03/2019] [Indexed: 10/26/2022]
Abstract
PURPOSE For orthopedic procedures, surgeons utilize intra-operative medical images such as fluoroscopy to plan screw placement and accurately position the guide wire along the intended trajectory. The number of fluoroscopic images needed depends on the complexity of the case and the skill of the surgeon. Since more fluoroscopic images lead to more exposure and a higher radiation dose for both surgeon and patient, a solution that decreases the number of fluoroscopic images would be an improvement in clinical care. METHODS This article describes and compares three novel navigation methods and techniques for screw placement using an attachable inertial measurement unit (IMU) device or a robotic arm. These methods provide projection and visualization of the surgical tool trajectory during the slipped capital femoral epiphysis procedure. RESULTS These techniques resulted in faster and more efficient preoperative calibration and setup compared to other intra-operative navigation systems in our phantom study. We conducted an experiment using 120 model bones to measure the accuracy of the methods. CONCLUSION In conclusion, these approaches have the potential to improve the accuracy of surgical tool navigation and decrease the number of required X-ray images without any change in the clinical workflow. The results also show a 65% decrease in total error compared to the conventional manual approach.
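The paper's IMU-based method is not reproduced here, but the underlying idea of radiation-free orientation tracking can be sketched with a standard complementary filter that fuses the integrated gyroscope rate with the accelerometer's gravity-referenced tilt. The simulated data, sampling rate and filter gain below are all assumptions for illustration.

```python
# Illustrative complementary-filter sketch (not the authors' algorithm):
# the gyroscope integrates accurately over short horizons but drifts, while
# the accelerometer gives a drift-free but noisy tilt; blending the two
# yields a stable trajectory-angle estimate without X-ray imaging.
import numpy as np

def complementary_filter(gyro_rate, accel, dt=0.01, alpha=0.98):
    """gyro_rate: angular-rate samples (rad/s); accel: (ax, az) gravity components."""
    angle = 0.0
    for w, (ax, az) in zip(gyro_rate, accel):
        gyro_angle = angle + w * dt            # integrate the gyro (drifts)
        accel_angle = np.arctan2(ax, az)       # tilt from gravity (noisy)
        angle = alpha * gyro_angle + (1 - alpha) * accel_angle
    return angle

# Simulated slow rotation of a tool to 30 degrees, with sensor noise.
rng = np.random.default_rng(2)
true = np.linspace(0, np.radians(30), 500)
gyro = np.gradient(true, 0.01) + rng.normal(scale=0.02, size=500)
acc = np.c_[np.sin(true), np.cos(true)] + rng.normal(scale=0.05, size=(500, 2))
print(f"estimated angle: {np.degrees(complementary_filter(gyro, acc)):.1f} deg")
```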
Affiliation(s)
- Bamshad Azizi Koutenaei: Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich (TUM), Munich, Germany; Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington, DC, USA
- Javad Fotouhi: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Farshid Alambeigi: Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Mathew Oetgen: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington, DC, USA
- Kevin Cleary: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington, DC, USA
- Nassir Navab: Chair for Computer Aided Medical Procedures and Augmented Reality, Department of Informatics, Technical University of Munich (TUM), Munich, Germany; Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA

37
颜 野, 夏 海, 李 旭, 何 为, 朱 学, 张 智, 肖 春, 刘 余, 黄 华, 何 良, 卢 剑. [Application of U-shaped convolutional neural network in auto segmentation and reconstruction of 3D prostate model in laparoscopic prostatectomy navigation]. Beijing Da Xue Xue Bao Yi Xue Ban 2019; 51:596-601. [PMID: 31209437 PMCID: PMC7439022 DOI: 10.19723/j.issn.1671-167x.2019.03.033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 03/18/2019] [Indexed: 11/20/2022]
Abstract
OBJECTIVE To investigate the efficacy of intraoperative cognitive navigation in laparoscopic radical prostatectomy using 3D prostatic models created by a U-shaped convolutional neural network (U-net) and reconstructed on the Medical Image Interaction Tool Kit (MITK) platform. METHODS A total of 5,000 manually annotated prostate cancer magnetic resonance (MR) images were used to train a modified U-net, yielding a clinically demand-oriented, stable and efficient fully convolutional neural network. The MR images were cropped and segmented automatically using the modified U-net, and the segmentation data were automatically reconstructed on the MITK platform according to our own protocols. The model data were exported in STL format, and the prostate models were displayed on an Android tablet during the operation to support cognitive navigation. RESULTS Based on the original U-net architecture, we established a modified U-net from a 201-case MR imaging training set. The network performance was tested and compared with human segmentations and other segmentation networks on a fixed test data set. Automatic segmentation of multiple structures (prostate, prostate tumors, seminal vesicles, rectum, neurovascular bundles and the dorsal venous complex) was successfully achieved. Secondary automatic 3D reconstruction was carried out on the MITK platform. During surgery, 3D models of the prostatic area were displayed on an Android tablet, and cognitive navigation was successfully achieved. Intraoperative organ visualization demonstrated the structural relationships among the key structures in great detail, and the degree of tumor invasion could be visualized directly. CONCLUSION The modified U-net achieved automatic segmentation of the important structures of the prostatic area. Secondary 3D model reconstruction and display provided intraoperative visualization of the vital structures of the prostatic area, helping surgeons achieve cognitive fusion navigation. The application of these techniques could reduce positive surgical margin rates and may improve the efficacy and oncological outcomes of laparoscopic prostatectomy.
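As a reference point for the architecture being modified, here is a minimal 2D U-Net sketch in PyTorch showing the encoder-decoder-with-skip-connections pattern; the depth and channel sizes are arbitrary and are not the paper's configuration.

```python
# Minimal U-Net sketch: two encoder stages, a bottleneck, two decoder stages,
# with skip connections concatenated at each decoder level.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)                 # 64 skip + 64 upsampled
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)    # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

logits = MiniUNet()(torch.randn(1, 1, 128, 128))   # one MR slice
print(logits.shape)                                # torch.Size([1, 2, 128, 128])
```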
Affiliation(s)
- 野 颜: Department of Urology, Peking University Third Hospital, Beijing 100191, China
- 海缀 夏: Department of Urology, Peking University Third Hospital, Beijing 100191, China
- 旭升 李: Institute of Electronic and Information, Tongji University, Shanghai 400047, China
- 为 何: Department of Radiology, Peking University Third Hospital, Beijing 100191, China
- 学华 朱: Department of Urology, Peking University Third Hospital, Beijing 100191, China
- 智荧 张: Department of Urology, Peking University Third Hospital, Beijing 100191, China
- 春雷 肖: Department of Urology, Peking University Third Hospital, Beijing 100191, China
- 余庆 刘: Department of Urology, Peking University Third Hospital, Beijing 100191, China
- 华 黄: School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
- 良华 何: Institute of Electronic and Information, Tongji University, Shanghai 400047, China
- 剑 卢: Department of Urology, Peking University Third Hospital, Beijing 100191, China

38
Zhang ZY, Duan WC, Chen RK, Zhang FJ, Yu B, Zhan YB, Li K, Zhao HB, Sun T, Ji YC, Bai YH, Wang YM, Zhou JQ, Liu XZ. Preliminary application of mixed reality in neurosurgery: Development and evaluation of a new intraoperative procedure. J Clin Neurosci 2019; 67:234-238. [PMID: 31221576 DOI: 10.1016/j.jocn.2019.05.038] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2018] [Revised: 04/20/2019] [Accepted: 05/21/2019] [Indexed: 11/25/2022]
Abstract
During neurological surgery, neurosurgeons have to transform two-dimensional (2D) sectional images into three-dimensional (3D) structures at the cognitive level. The complexity of the intracranial structures increases the difficulty and risk of neurosurgery. Mixed reality (MR) applications reduce the obstacles in the transformation from 2D images to 3D visualization of the anatomical structures of the central nervous system. In this study, holographic images were established by MR using computed tomography (CT), computed tomography angiography (CTA) and magnetic resonance imaging (MRI) data of patients. The surgeon's field of vision was superimposed with the 3D model of the patient's intracranial structures displayed on the mixed reality head-mounted display (MR-HMD). The neurosurgeons practiced and evaluated the feasibility of this technique in neurosurgical cases. We developed segmentation image masks and texture mappings covering brain tissue, intracranial vessels, nerves, tumors, and their relative positions using MR technologies. The results showed that the three-dimensional imaging remained stable in the operating room, with no significant flutter or blur, and the neurosurgeons' feedback on the comfort of the equipment and the practicality of the technology was satisfactory. In conclusion, MR technology can holographically construct a 3D digital model of a patient's lesions and improve the anatomical perception of neurosurgeons during craniotomy. The feasibility of the MR-HMD application in neurosurgery is confirmed.
Affiliation(s)
- Zhen-Yu Zhang, Wen-Chao Duan, Ruo-Kun Chen, Feng-Jiang Zhang, Bin Yu, Yun-Bo Zhan, Ke Li, Hai-Biao Zhao, Tao Sun, Yu-Chen Ji, Ya-Hui Bai, Yan-Min Wang, Jin-Qiao Zhou, Xian-Zhi Liu: Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Jian She Dong Road 1, Zhengzhou, Henan 450000, China

39
Abstract
BACKGROUND One of the main challenges for modern surgery is the effective use of the many available imaging modalities and diagnostic methods. Augmented reality systems can be used in the future to blend patient and planning information into the view of surgeons, which can improve the efficiency and safety of interventions. OBJECTIVE In this article we present five visualization methods for integrating augmented reality displays into medical procedures and explain their advantages and disadvantages. MATERIAL AND METHODS Based on an extensive literature review, the various existing approaches for integrating augmented reality displays into medical procedures are divided into five categories, and the most important research results for each approach are presented. RESULTS A large number of mixed and augmented reality solutions for medical interventions have been developed as research prototypes; however, only very few systems have been tested on patients. CONCLUSION In order to integrate mixed and augmented reality displays into medical practice, highly specialized solutions need to be developed. Such systems must comply with requirements regarding accuracy, fidelity, ergonomics and seamless integration into the surgical workflow.
Affiliation(s)
- Ulrich Eck: Lehrstuhl für Informatikanwendungen in der Medizin, Technische Universität München, Boltzmannstr. 3, 85748, Garching bei München, Germany
- Alexander Winkler: Lehrstuhl für Informatikanwendungen in der Medizin, Technische Universität München, Boltzmannstr. 3, 85748, Garching bei München, Germany

40
Kawashima K, Kanno T, Tadano K. Robots in laparoscopic surgery: current and future status. BMC Biomed Eng 2019; 1:12. [PMID: 32903302 PMCID: PMC7422514 DOI: 10.1186/s42490-019-0012-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2019] [Accepted: 04/25/2019] [Indexed: 02/07/2023] Open
Abstract
In this paper, we focus on robots used for laparoscopic surgery, one of the most active areas in the research and development of surgical robots. We introduce research and development on laparoscope-holder robots, master-slave robots and hand-held robotic forceps, and then discuss future directions for surgical robots. On the hardware side, snake-like flexible mechanisms for single-port access surgery (SPA) and NOTES (Natural Orifice Transluminal Endoscopic Surgery) and applications of soft robotics are being actively pursued. On the software side, research such as the automation of surgical procedures using machine learning is one of the hot topics.

41
Kim HJ, Choi GS, Park JS, Park SY, Cho SH, Seo AN, Yoon GS. S122: impact of fluorescence and 3D images to completeness of lateral pelvic node dissection. Surg Endosc 2019; 34:469-476. [PMID: 31139999 DOI: 10.1007/s00464-019-06830-x] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2018] [Accepted: 05/14/2019] [Indexed: 12/12/2022]
Abstract
BACKGROUND Lateral pelvic lymph node dissection (LPND) is a technically demanding procedure. Consequently, there is a possibility of incomplete dissection of lateral pelvic lymph nodes (LPNs). We aimed to identify metastatic LPNs intraoperatively in real-time under dual guidance of fluorescence imaging and 3D lymphovascular reconstruction, and then to remove them completely. METHODS Rectal cancer patients who were scheduled to undergo LPND after preoperative chemoradiotherapy (CRT) were prospectively enrolled. We traced changes in suspected metastatic LPNs during preoperative CRT and defined them as index LPNs on post-CRT imaging studies. For fluorescence imaging, indocyanine green (ICG) at a dose of 2.5 mg was injected transanally around the tumor before the operation. For 3D reconstruction images, each patient underwent preoperative axial CT scan with contrast (0.6 mm slice thickness). These images were then manipulated with OsiriX. Index LPNs and essential structures in the pelvic sidewall, such as the obturator nerve, were reconstructed with abdominal arteries from 3D volume rendering. All surgical procedures were performed via laparoscopic or robotic approach. RESULTS From March to July 2017, ten rectal cancer patients underwent total mesorectal excision with LPND after preoperative CRT under dual image guidance. Bilateral LPND was performed in five patients. All index LPNs among ICG-bearing lymph nodes were clearly identified intraoperatively by matching with their corresponding 3D images. Pathologic LPN metastasis was confirmed in four patients (40.0%) and in five of the 15 dissected pelvic sidewalls (33.0%). All metastatic LPNs were identified among index LPNs. Four (80.0%) of the five metastatic LPNs were located in the internal iliac area. CONCLUSION Index LPNs among ICG-bearing lymph nodes in pelvic sidewall were clearly identified and completely removed by matching with their corresponding 3D reconstruction images. Further studies and long-term oncologic outcomes are required to determine the real impact of dual image guidance in LPND.
Affiliation(s)
- Hye Jin Kim: Colorectal Cancer Center, Kyungpook National University Chilgok Hospital, School of Medicine, Kyungpook National University, 807 Hogukro, Buk-gu, Daegu, 41404, South Korea
- Gyu-Seog Choi: Colorectal Cancer Center, Kyungpook National University Chilgok Hospital, School of Medicine, Kyungpook National University, 807 Hogukro, Buk-gu, Daegu, 41404, South Korea
- Jun Seok Park: Colorectal Cancer Center, Kyungpook National University Chilgok Hospital, School of Medicine, Kyungpook National University, 807 Hogukro, Buk-gu, Daegu, 41404, South Korea
- Soo Yeun Park: Colorectal Cancer Center, Kyungpook National University Chilgok Hospital, School of Medicine, Kyungpook National University, 807 Hogukro, Buk-gu, Daegu, 41404, South Korea
- Seung Hyun Cho: Department of Radiology, Kyungpook National University Chilgok Hospital, School of Medicine, Kyungpook National University, Daegu, South Korea
- An Na Seo: Department of Pathology, Kyungpook National University Chilgok Hospital, School of Medicine, Kyungpook National University, Daegu, South Korea
- Ghuil Suk Yoon: Department of Pathology, Kyungpook National University Chilgok Hospital, School of Medicine, Kyungpook National University, Daegu, South Korea

42
Wang J, Shen Y, Yang S. A practical marker-less image registration method for augmented reality oral and maxillofacial surgery. Int J Comput Assist Radiol Surg 2019; 14:763-773. [PMID: 30825070 DOI: 10.1007/s11548-019-01921-5] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2018] [Accepted: 01/29/2019] [Indexed: 12/12/2022]
Abstract
BACKGROUND Image registration lies at the core of augmented reality (AR), aligning the virtual scene with reality. In AR surgical navigation, the performance of image registration is vital to the surgical outcome. METHODS This paper presents a practical marker-less image registration method for AR-guided oral and maxillofacial surgery, in which a virtual scene is generated and mixed with reality to guide surgical operation or provide surgical outcome visualization in the manner of a video see-through overlay. An intraoral 3D scanner is employed to acquire the patient's teeth shape model intraoperatively. The shape model is then registered with a custom-made stereo camera system using a novel 3D stereo matching algorithm, and with the patient's CT-derived 3D model using an iterative closest point scheme. By leveraging the intraoral 3D scanner, the CT space and the stereo camera space are associated so that surrounding anatomical models and virtual implants can be overlaid on the camera's view to achieve AR surgical navigation. RESULTS Jaw phantom experiments were performed to evaluate the target registration error of the overlay, which yielded an average error of less than 0.50 mm with a time cost of less than 0.5 s. A volunteer trial was also conducted to show clinical feasibility. CONCLUSIONS The proposed registration method does not rely on any external fiducial markers attached to the patient. It performs automatically so as to maintain a correct AR scene, overcoming the misalignment difficulty caused by the patient's movement. Therefore, it is noninvasive and practical in oral and maxillofacial surgery.
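The iterative closest point (ICP) scheme mentioned above can be sketched as follows: each scanned surface point is matched to its nearest CT-model point, and a rigid update is computed by SVD. The data are synthetic, and a practical implementation would add outlier rejection and convergence checks.

```python
# Minimal ICP sketch: nearest-neighbour matching with a k-d tree, then a
# rigid (SVD) update per iteration; synthetic stand-ins for scan/CT points.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=30):
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        nn = dst[tree.query(cur)[1]]                   # closest-point matches
        cs, cn = cur.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((cur - cs).T @ (nn - cn))
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T                            # incremental rotation
        dt = cn - dR @ cs
        cur = cur @ dR.T + dt                          # apply the increment
        R, t = dR @ R, dR @ t + dt                     # accumulate transform
    return R, t

rng = np.random.default_rng(3)
model = rng.uniform(0, 30, size=(400, 3))              # CT-derived surface points
a = np.radians(5)
Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
scan = model @ Rz.T + np.array([2.0, -1.0, 0.5])       # intraoral-scan points
R, t = icp(scan, model)
print("rotation recovery error:", np.abs(R @ Rz - np.eye(3)).max())  # small if converged
```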
Affiliation(s)
- Junchen Wang: School of Mechanical Engineering and Automation, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100086, China
- Yu Shen: School of Mechanical Engineering and Automation, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100086, China
- Shuo Yang: Stomatological Hospital, Southern Medical University, Guangzhou, China

43
Masuoka Y, Morikawa H, Kawai T, Nakagohri T. Use of Smartphone-Based Head-Mounted Display Devices to View a Three-Dimensional Dissection Model in a Virtual Reality Environment: Pilot Questionnaire Study. JMIR Med Educ 2019; 5:e11921. [PMID: 31344673 PMCID: PMC6682296 DOI: 10.2196/11921] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/01/2018] [Revised: 12/12/2018] [Accepted: 12/30/2018] [Indexed: 06/10/2023]
Abstract
BACKGROUND Virtual reality (VR) technology has started to gain attention as a form of surgical support in medical settings. Likewise, the widespread use of smartphones has resulted in the development of various medical applications; for example, Google Cardboard, which can be used to build simple head-mounted displays (HMDs). However, because of the absence of observed and reported outcomes of the use of three-dimensional (3D) organ models in relevant environments, we have yet to determine the effects of or issues with the use of such VR technology. OBJECTIVE The aim of this paper was to study the issues that arise while observing a 3D model of an organ that is created based on an actual surgical case through the use of a smartphone-based simple HMD. Upon completion, we evaluated and gathered feedback on the performance and usability of the simple observation environment we had created. METHODS We downloaded our data to a smartphone (Galaxy S6; Samsung, Seoul, Korea) and created a simple HMD system using Google Cardboard (Google). A total of 17 medical students performed 2 experiments: an observation conducted by a single observer and another one carried out by multiple observers using a simple HMD. Afterward, they assessed the results by responding to a questionnaire survey. RESULTS We received a largely favorable response in the evaluation of the dissection model, but also a low score because of visually induced motion sickness and eye fatigue. In an introspective report on simultaneous observations made by multiple observers, positive opinions indicated clear image quality and shared understanding, but displeasure caused by visually induced motion sickness, eye fatigue, and hardware problems was also expressed. CONCLUSIONS We established a simple system that enables multiple persons to observe a 3D model. Although the observation conducted by multiple observers was successful, problems likely arose because of poor smartphone performance. Therefore, smartphone performance improvement may be a key factor in establishing a low-cost and user-friendly 3D observation environment.
Affiliation(s)
- Yoshihito Masuoka: Department of Surgery, Tokai University School of Medicine, Kanagawa, Japan
- Takashi Kawai: Faculty of Science and Engineering, Waseda University, Tokyo, Japan
- Toshio Nakagohri: Department of Surgery, Tokai University School of Medicine, Kanagawa, Japan

44
Quero G, Lapergola A, Soler L, Shahbaz M, Hostettler A, Collins T, Marescaux J, Mutter D, Diana M, Pessaux P. Virtual and Augmented Reality in Oncologic Liver Surgery. Surg Oncol Clin N Am 2019; 28:31-44. [PMID: 30414680 DOI: 10.1016/j.soc.2018.08.002] [Citation(s) in RCA: 50] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Virtual reality (VR) and augmented reality (AR) in complex surgery are evolving technologies enabling improved preoperative planning and intraoperative navigation. The basis of these technologies is a computer-based generation of a patient-specific 3-dimensional model from Digital Imaging and Communications in Medicine (DICOM) data. This article provides a state-of-the-art overview of the clinical use of this technology, with a specific focus on hepatic surgery. Although VR and AR are still at an evolving stage with only some clinical application today, these technologies have the potential to become a key factor in improving preoperative and intraoperative decision making.
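The common first step such systems share, assembling a patient-specific volume from DICOM data, can be sketched with pydicom as below; the directory path is a placeholder, and slice ordering by ImagePositionPatient assumes a standard axial series.

```python
# Hedged sketch: stack a DICOM series into a 3D NumPy volume with pydicom.
# "/path/to/ct_series" is a hypothetical placeholder directory.
import numpy as np
import pydicom
from pathlib import Path

def load_ct_volume(series_dir):
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    # Order slices along the patient z-axis.
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    volume = np.stack([ds.pixel_array.astype(np.float32) for ds in slices])
    # Convert raw values to Hounsfield units where rescale tags are present.
    slope = float(getattr(slices[0], "RescaleSlope", 1.0))
    intercept = float(getattr(slices[0], "RescaleIntercept", 0.0))
    return volume * slope + intercept

volume = load_ct_volume("/path/to/ct_series")
print(volume.shape, volume.min(), volume.max())
```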
Collapse
Affiliation(s)
- Giuseppe Quero: IHU-Strasbourg, Institute of Image-Guided Surgery, 1 Place de l'Hôpital, Strasbourg 67091, France
- Alfonso Lapergola: IRCAD, Research Institute Against Cancer of the Digestive System, 1 Place de l'Hôpital, Strasbourg 67091, France
- Luc Soler: IRCAD, Research Institute Against Cancer of the Digestive System, 1 Place de l'Hôpital, Strasbourg 67091, France
- Muhammad Shahbaz: IHU-Strasbourg, Institute of Image-Guided Surgery, 1 Place de l'Hôpital, Strasbourg 67091, France
- Alexandre Hostettler: IRCAD, Research Institute Against Cancer of the Digestive System, 1 Place de l'Hôpital, Strasbourg 67091, France
- Toby Collins: IRCAD, Research Institute Against Cancer of the Digestive System, 1 Place de l'Hôpital, Strasbourg 67091, France
- Jacques Marescaux: IHU-Strasbourg, Institute of Image-Guided Surgery; IRCAD, Research Institute Against Cancer of the Digestive System, 1 Place de l'Hôpital, Strasbourg 67091, France
- Didier Mutter: Department of General, Digestive and Endocrine Surgery, University Hospital of Strasbourg, 1 Place de l'Hôpital, Strasbourg 67091, France
- Michele Diana: IHU-Strasbourg, Institute of Image-Guided Surgery; IRCAD, Research Institute Against Cancer of the Digestive System; Department of General, Digestive and Endocrine Surgery, University Hospital of Strasbourg, 1 Place de l'Hôpital, Strasbourg 67091, France
- Patrick Pessaux: IHU-Strasbourg, Institute of Image-Guided Surgery; IRCAD, Research Institute Against Cancer of the Digestive System; Department of General, Digestive and Endocrine Surgery, University Hospital of Strasbourg, 1 Place de l'Hôpital, Strasbourg 67091, France
45. Paydarfar JA, Wu X, Halter RJ. Initial experience with image-guided surgical navigation in transoral surgery. Head Neck 2018; 41:E1-E10. [PMID: 30556235] [DOI: 10.1002/hed.25380] [Citation(s) in RCA: 5]
Abstract
BACKGROUND Surgical navigation using image guidance may improve the safety and efficacy of transoral surgery (TOS); however, preoperative imaging cannot be accurately registered to the intraoperative state because of deformations caused by placement of the laryngoscope or retractor. This proof-of-concept study explores the feasibility and registration accuracy of surgical navigation for TOS using intraoperative imaging. METHODS Four patients undergoing TOS were recruited. Suspension laryngoscopy was performed with a CT-compatible laryngoscope. An intraoperative contrast-enhanced CT scan was obtained and registered to fiducials placed on the neck, face, and laryngoscope. RESULTS All patients were successfully scanned and registered. Registration accuracy within the pharynx and larynx was 1 mm or less. Target registration was confirmed by localizing endoscopic and surface structures to the CT images. Successful tracking was performed in all 4 patients. CONCLUSION Although a high level of registration accuracy can be achieved for surgical navigation during TOS by using intraoperative imaging, significant limitations of the existing technology were identified. These limitations, as well as areas for future investigation, are discussed.
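The registration step reported here is, at its core, a point-based rigid alignment of fiducial positions in the CT volume with the same fiducials measured by the tracking system. A minimal sketch of that computation using the closed-form SVD (Kabsch) solution follows; the coordinates are synthetic, not data from the study's navigation platform:

    # Sketch: closed-form rigid registration of tracked fiducials to CT fiducials
    # (Kabsch/SVD method), plus fiducial and target registration error.
    import numpy as np

    def rigid_fit(src, dst):
        """Least-squares rotation R and translation t with dst ~ R @ src + t."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
        R = Vt.T @ D @ U.T
        t = dst_c - R @ src_c
        return R, t

    # Synthetic example: fiducials in tracker space vs. CT space (millimetres).
    ct_fids = np.array([[0, 0, 0], [50, 0, 0], [0, 40, 0], [0, 0, 30]], float)
    true_R, true_t = np.eye(3), np.array([5.0, -2.0, 10.0])
    tracker_fids = ct_fids @ true_R.T + true_t + np.random.normal(0, 0.3, ct_fids.shape)

    R, t = rigid_fit(tracker_fids, ct_fids)
    fre = np.linalg.norm(tracker_fids @ R.T + t - ct_fids, axis=1)
    print("fiducial registration error (mm):", fre.round(2))

    # Target registration error at a point away from the fiducials, e.g. a tumour.
    target_ct = np.array([20.0, 15.0, 10.0])
    target_tracked = true_R @ target_ct + true_t
    print("TRE (mm):", np.linalg.norm(R @ target_tracked + t - target_ct).round(3))

Note that fiducial error understates error at distant targets, which is why studies such as this one verify registration against anatomic landmarks as well.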
Affiliation(s)
- Joseph A Paydarfar: Section of Otolaryngology, Audiology, and Maxillofacial Surgery, Department of Surgery, Dartmouth-Hitchcock Medical Center, Geisel School of Medicine, Lebanon, New Hampshire; Thayer School of Engineering at Dartmouth, Hanover, New Hampshire
- Xiaotian Wu: Thayer School of Engineering at Dartmouth, Hanover, New Hampshire
- Ryan J Halter: Thayer School of Engineering at Dartmouth, Hanover, New Hampshire; Dartmouth College Geisel School of Medicine, Department of Surgery, Hanover, New Hampshire
46. Chu Y, Li X, Yang X, Ai D, Huang Y, Song H, Jiang Y, Wang Y, Chen X, Yang J. Perception enhancement using importance-driven hybrid rendering for augmented reality based endoscopic surgical navigation. Biomed Opt Express 2018; 9:5205-5226. [PMID: 30460123] [PMCID: PMC6238941] [DOI: 10.1364/boe.9.005205] [Citation(s) in RCA: 7]
Abstract
Misleading depth perception can greatly affect the correct identification of complex structures in image-guided surgery. In this study, we propose a novel importance-driven hybrid rendering method to enhance perception for navigated endoscopic surgery. First, volume structures are enhanced using gradient-based shading to reduce the color information in low-priority regions and improve the distinction between complicated structures. Second, an importance sorting method based on order-independent transparency rendering is introduced to intensify the perception of multiple surfaces. Third, volume data are adaptively truncated and emphasized with respect to the perspective orientation and the illustration of critical information, extending the viewing range. Experimental results show that, by combining volume and surface rendering, our method can effectively improve the depth distinction of multiple objects in both simulated and clinical scenes. Our importance-driven surface rendering method demonstrates improved average performance, with statistical significance, as rated by 15 participants (five clinicians and ten non-clinicians) on a five-point Likert scale. Furthermore, the average frame rate of hybrid rendering with thin-layer sectioning reaches 42 fps. Because the hybrid rendering process is fully automatic, it can be used in real-time surgical navigation to improve rendering efficiency and information validity.
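The first step, gradient-based shading, can be illustrated compactly: voxels with weak gradients (homogeneous, low-priority regions) are desaturated and made more transparent, while strong boundaries keep their color and opacity. The following NumPy sketch is a simplified stand-in for the paper's GPU renderer, with a synthetic volume and an arbitrary transfer function:

    # Sketch: gradient-magnitude weighting for volume shading.
    # Low-gradient (homogeneous) voxels lose colour and opacity; boundaries keep both.
    import numpy as np

    rng = np.random.default_rng(0)
    volume = rng.random((64, 64, 64)).astype(np.float32)  # stand-in for CT intensities

    gz, gy, gx = np.gradient(volume)
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    w = grad_mag / (grad_mag.max() + 1e-8)       # importance weight in [0, 1]

    base_opacity = 0.4
    opacity = base_opacity * w                   # de-emphasise flat regions

    # Desaturate colour toward grey in low-priority regions (per-voxel RGB).
    base_rgb = np.stack([volume, 0.5 * volume, 1.0 - volume], axis=-1)
    grey = base_rgb.mean(axis=-1, keepdims=True)
    shaded_rgb = grey + w[..., None] * (base_rgb - grey)
    print("opacity range:", opacity.min().round(3), "-", opacity.max().round(3))

In the paper's full method this weighting is combined with order-independent transparency and view-dependent truncation, which this sketch does not attempt to reproduce.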
Affiliation(s)
- Yakui Chu: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Xu Li: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Xilin Yang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Yong Huang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Hong Song: School of Software, Beijing Institute of Technology, Beijing 100081, China
- Yurong Jiang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Yongtian Wang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Xiaohong Chen: Department of Otolaryngology, Head and Neck Surgery, Beijing Tongren Hospital, Beijing 100730, China (co-corresponding author)
- Jian Yang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China (co-corresponding author)
47. Usefulness of the 3D virtual visualization surgical planning simulation and 3D model for endoscopic endonasal transsphenoidal surgery of pituitary adenoma: Technical report and review of literature. Interdiscip Neurosurg 2018. [DOI: 10.1016/j.inat.2018.02.002] [Citation(s) in RCA: 9]
48. Baste JM, Soldea V, Lachkar S, Rinieri P, Sarsam M, Bottet B, Peillon C. Development of a precision multimodal surgical navigation system for lung robotic segmentectomy. J Thorac Dis 2018; 10:S1195-S1204. [PMID: 29785294] [DOI: 10.21037/jtd.2018.01.32] [Citation(s) in RCA: 18]
Abstract
Minimally invasive sublobar anatomical resection is becoming increasingly popular for managing early lung lesions. Robotic-assisted thoracic surgery (RATS) is unique among minimally invasive techniques in that it can integrate multiple streams of information, including advanced imaging, in an immersive experience at the robotic console. Our aim was to describe three-dimensional (3D) imaging throughout the surgical procedure, from preoperative planning to intraoperative assistance and complementary investigations such as radial endobronchial ultrasound (R-EBUS) and virtual bronchoscopy for pleural dye marking. All cases were operated on using the Da Vinci System™. Modeling was provided by Visible Patient™ (Strasbourg, France). Image integration in the operative field was achieved using the TilePro™ multi-display input of the Da Vinci console. Our experience was based on 114 robotic segmentectomies performed between January 2012 and October 2017. The clinical value of 3D imaging integration was evaluated in a 2014 pilot study. We have progressively reached the conclusion that the use of such an anatomic model improves the safety and reliability of procedures. The multimodal system including 3D imaging has been used in more than 40 patients so far and has demonstrated perfect operative anatomic accuracy. We are currently developing an original virtual reality experience by exploring 3D imaging models at the robotic console. The act of operating is being transformed: the surgeon now oversees a complex system that improves decision making.
Affiliation(s)
- Jean Marc Baste: Department of General and Thoracic Surgery, Rouen University Hospital, Rouen, France
- Valentin Soldea: Department of General and Thoracic Surgery, Rouen University Hospital, Rouen, France; Department of Pathology, University of Medicine and Pharmacy, Targu Mures, Romania
- Samy Lachkar: Department of Pulmonology and CIC-CRB 1404, Rouen University Hospital, Rouen, France
- Philippe Rinieri: Department of General and Thoracic Surgery, Rouen University Hospital, Rouen, France
- Mathieu Sarsam: Department of General and Thoracic Surgery, Rouen University Hospital, Rouen, France
- Benjamin Bottet: Department of General and Thoracic Surgery, Rouen University Hospital, Rouen, France
- Christophe Peillon: Department of General and Thoracic Surgery, Rouen University Hospital, Rouen, France
49. Augmented reality technology for preoperative planning and intraoperative navigation during hepatobiliary surgery: A review of current methods. Hepatobiliary Pancreat Dis Int 2018; 17:101-112. [PMID: 29567047] [DOI: 10.1016/j.hbpd.2018.02.002] [Citation(s) in RCA: 67]
Abstract
BACKGROUND Augmented reality (AR) technology is used to reconstruct three-dimensional (3D) images of hepatic and biliary structures from computed tomography and magnetic resonance imaging data, and to superimpose the virtual images onto a view of the surgical field. In liver surgery, these superimposed virtual images help the surgeon visualize intrahepatic structures, operate precisely, and improve clinical outcomes. DATA SOURCES The keywords "augmented reality", "liver", "laparoscopic" and "hepatectomy" were used to search publications in the PubMed database. The primary sources were peer-reviewed journal articles published up to December 2016. Additional articles were identified by manually searching the references of key articles. RESULTS AR technology mainly comprises 3D reconstruction, display, registration, and tracking techniques, and has recently been adopted gradually for liver surgery, including laparoscopy and laparotomy, with video-based AR-assisted laparoscopic resection as the main technical application. By applying AR technology, blood vessels and tumor structures in the liver can be displayed during surgery, permitting precise navigation during complex procedures. Liver deformation and registration errors during surgery are the main factors limiting the application of AR technology. CONCLUSIONS With recent advances, AR technologies have the potential to improve hepatobiliary surgical procedures. However, additional clinical studies are required to evaluate AR as a tool for reducing postoperative morbidity and mortality and for improving long-term clinical outcomes. Future research is needed on fusing multiple imaging modalities, improving biomechanical liver modeling, and enhancing image data processing and tracking to increase the accuracy of current AR methods.
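The superimposition step this review describes amounts to projecting the registered 3D model into the camera frame using the camera intrinsics and the pose recovered by registration. A hedged sketch with OpenCV follows; the intrinsics, pose, and model points are placeholders rather than values from any cited system:

    # Sketch: overlay registered 3D model points onto a camera frame.
    # Camera intrinsics, pose, and vertices below are illustrative placeholders.
    import numpy as np
    import cv2

    frame = np.zeros((480, 640, 3), dtype=np.uint8)              # stand-in video frame
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
    dist = np.zeros(5)                                           # assume no lens distortion

    # Pose of the model in camera coordinates, e.g. from fiducial registration.
    rvec = np.array([0.1, -0.2, 0.05])                           # Rodrigues rotation
    tvec = np.array([0.0, 0.0, 300.0])                           # 300 mm in front of camera

    # A few vertices of the registered vessel/tumour model, in millimetres.
    model_pts = np.array([[0, 0, 0], [20, 0, 0], [0, 20, 0], [0, 0, 20]], np.float64)

    img_pts, _ = cv2.projectPoints(model_pts, rvec, tvec, K, dist)
    for (u, v) in img_pts.reshape(-1, 2):
        if 0 <= u < frame.shape[1] and 0 <= v < frame.shape[0]:
            cv2.circle(frame, (int(u), int(v)), 4, (0, 255, 0), -1)  # draw overlay
    cv2.imwrite("overlay.png", frame)

The hard problems the review identifies, organ deformation and registration error, show up in this step as a wrong rvec/tvec, which is why biomechanical modeling and tracking are active research areas.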
50.