Alternative solutions for transoral robotic surgery (TORS)






  • Contents



  • Micro-technologies and systems for robot-assisted laser phonomicrosurgery (μRALP): a new microrobot prototype driven by an endoscope for laser treatment of pathologies of the vocal folds



  • The Flex® Robotic System for transoral surgery




Micro-technologies and systems for robot-assisted laser phonomicrosurgery (μRALP): a new microrobot prototype driven by an endoscope for laser treatment of pathologies of the vocal folds



L. Tavernier
N. Andreff
T. Ortmaier
G. Peretti
L.S. Mattos

The history of transoral robotic surgery (TORS) began just over 10 years ago with the first animal model in 2005 [ ], followed by first-in-man procedures in 2006 [ , ]. Across the world, one medical robot, the da Vinci® system (Intuitive Surgical Inc., California, USA), is used for almost all transoral surgeries. Since January 2010, it has been approved by the US Food and Drug Administration (FDA) for “surgical procedures restricted to benign and malignant tumors classified as T1 and T2, […] for adult and pediatric use (except for transoral otolaryngology surgical procedures)” [ ].


Since its first application in ENT, the use of robots has steadily increased worldwide. TORS was developed as an alternative treatment option to bypass the limits of traditional approaches, and studies have shown safety at least equivalent to that of traditional surgery [ , , ]. Despite these advantages, however, many publications have pointed out its technical and financial limitations, and several groups have undertaken the development of dedicated tools, including for pathologies of the vocal folds [ , ].


On the basis of these same findings, a European consortium of 5 institutions [ ] in 3 countries (Italy, France and Germany) was funded by the European Union under the 7th Framework Program for Research and Technological Development (led by the Commission for Research and Innovation) ( http://ec.europa.eu/research/fp7/index_en.cfm ). The resulting μRALP project ( http://www.microralp.eu/ ) proposes a different conception of surgical robotics: rather than starting from a “universal” robot that is then adapted to a given surgical practice, it starts from a specific clinical need for which a dedicated tool is created. The need here is microscopic phonosurgery with its laser tool, whose limitations concern the accessibility and visibility of the surgical site, as well as the remote control and accuracy of the aiming system; to address them, we propose an endoscopic surgical system assisted by a micro-robot. An important economic feature of the project is that each technological solution developed and described below can also be used independently in future applications.


A new three-dimensional vision system with augmented reality


Leibniz Universität Hannover (Leibniz University of Hannover): A. Schoob, D. Kundrat, L.A. Kahrs, T. Ortmaier


Unlike the conventional transoral laser microsurgery (TLM) approach, in which the surgical site is viewed through a stereo microscope, the μRALP endoscope is equipped with stereo vision and provides three-dimensional perception when inserted through the patient’s mouth and positioned close to the vocal folds (Figure 9.1A). μRALP’s stereo vision system provides computable depth information for more precise and safer tissue manipulation and laser cutting. As described in the following sections, stereoscopic imaging facilitates vision-guided intraoperative planning within the μRALP workflow. The augmented reality system is based on a trifocal arrangement of the integrated stereo vision and the surgical laser unit (Figure 9.1B).
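To make the role of stereo vision concrete, the sketch below shows how a rectified stereo pair yields metric depth from pixel disparity. It is a minimal illustration, not the μRALP reconstruction pipeline; the focal length and baseline are hypothetical values for a miniature chip-on-tip module.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Metric depth for a rectified stereo pair: depth = f * B / d."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)      # unmatched pixels -> no depth
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_mm / disparity_px[valid]
    return depth

# Hypothetical calibration for a miniature chip-on-tip stereo module.
focal_px, baseline_mm = 420.0, 4.2                  # focal length (px), baseline (mm)
disparity = np.array([[60.0, 75.0], [90.0, 0.0]])   # disparity map (px); 0 = unmatched
print(depth_from_disparity(disparity, focal_px, baseline_mm))
```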




Figure 9.1


μRALP scenario with (A) the laser endoscope inserted into the larynx.

The multifunctional tip (B) is positioned close to the vocal fold.


Interactive laser focus positioning


Due to the limited depth-of-field of the fixed-focus laser integrated in the endoscopic tip, a constant distance to the tissue has to be set after inserting the endoscope into the larynx. To assist precise positioning of the endoscope and its field of view, surface information is acquired by image-based reconstruction of the vocal fold tissue, as shown in Figure 9.2A. Detailed descriptions of the methods are given by Schoob et al. [ ]. In combination with registration to the integrated laser scanning unit, the area of intersection between the tissue surface and the laser workspace can be highlighted in the live surgical view [ ]. Registration also enables transfer of image-based incision planning to three-dimensional laser cutting, with a maximum deviation of only 0.2 mm between planned and executed laser incisions.
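As a rough illustration of this plan transfer (not the authors' implementation), the sketch below back-projects planned incision pixels to 3D using reconstructed depth and maps them into the laser-scanner frame through an assumed rigid camera-to-laser calibration; the intrinsic matrix, rotation and translation values are placeholders.

```python
import numpy as np

def backproject(points_px, depth_mm, K):
    """Back-project pixel coordinates with known depth into 3D camera coordinates."""
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])   # homogeneous pixels
    rays = (np.linalg.inv(K) @ pts.T).T                          # normalized viewing rays
    return rays * depth_mm[:, None]                              # scale rays by depth

def camera_to_laser(points_cam, R, t):
    """Apply a calibrated rigid camera-to-laser-scanner transform."""
    return points_cam @ R.T + t

# Hypothetical intrinsics K and camera-to-laser transform (R, t).
K = np.array([[420.0, 0.0, 160.0],
              [0.0, 420.0, 120.0],
              [0.0,   0.0,   1.0]])
R, t = np.eye(3), np.array([2.0, 0.0, -5.0])                     # mm

plan_px = np.array([[150.0, 110.0], [170.0, 118.0]])             # planned incision pixels
plan_depth = np.array([21.5, 22.0])                              # mm, from stereo reconstruction
print(camera_to_laser(backproject(plan_px, plan_depth, K), R, t))
```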




Figure 9.2


Stereo imaging integrated in the endoscopic tip facilitates (A) three-dimensional reconstruction of the vocal fold region of interest and (B) color-coded overlay of the laser focal range on the live surgical view, shown for in vivo image data (top row) and human cadaver images acquired with the μRALP endoscope (bottom row).

(Source: A. Schoob et al.)


Experimental studies have shown that color-coding the laser focal distance provides visual feedback to the surgeon to position the endoscopic system with submillimeter accuracy in just a few seconds [ ]. In detail, the laser depth-of-field, which is characterized by the beam waist ( Figure 9.1B ), is represented by a color gradient ranging from red (near) to blue (far), and optimal focusing is highlighted in green. The lateral border of the color-coding indicates the maximum scanning range of the laser. Figure 9.2B illustrates color-coding applied to in-vivo data acquired with a commercial stereo endoscope (VSii®, Visionsense Ltd., Israel) and a sequence obtained with the μRALP endoscope (chip-on-the-tip camera modules MO-BS0804P®, MISUMI Electronics Corp., Taiwan) in human cadaver trials. Results demonstrated that an increase in color-coded distance easily reveals misalignment to the laser focal range; i.e. surface regions where incisions are expected to be unfocused and thus inefficient. In summary, the endoscope can be inserted through the patient’s mouth and positioned at the correct distance to the lesion on the vocal folds.
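A minimal sketch of such a color-coding scheme is given below, assuming a hypothetical 20 mm focal distance and a ±2 mm usable depth-of-field; the actual μRALP gradient and thresholds may differ.

```python
import numpy as np

def focus_colormap(depth_mm, focus_mm, dof_mm):
    """Color-code distance to the laser focal plane: red = too near,
    green = within the depth-of-field, blue = too far (simplified gradient)."""
    err = np.clip((depth_mm - focus_mm) / dof_mm, -1.0, 1.0)   # -1 near ... +1 far
    rgb = np.zeros(depth_mm.shape + (3,))
    rgb[..., 0] = np.clip(-err, 0.0, 1.0)                      # red grows when too near
    rgb[..., 2] = np.clip(err, 0.0, 1.0)                       # blue grows when too far
    rgb[..., 1] = 1.0 - np.abs(err)                            # green peaks at perfect focus
    return rgb

# Hypothetical focal distance of 20 mm with a +/- 2 mm usable depth-of-field.
depth_map = np.array([[18.0, 19.5], [20.0, 23.0]])
print(np.round(focus_colormap(depth_map, focus_mm=20.0, dof_mm=2.0), 2))
```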


Motion compensation during planning


Subsequent to positioning the endoscopic tip, the surgeon plans an incision using the stylus-and-tablet interface. To keep the plan consistent over time on vocal folds undergoing deformation, image-based non-rigid motion estimation is implemented. The target area containing the lesion is represented by a deformation model and tracked in the stereo view [ ]. A detailed description of the most advanced methods regarding this topic was published by Schoob et al. [ ]. As shown in Figure 9.3A, the target region is represented by a thin plate spline-based mesh model able to capture soft tissue motion and deformation, as induced by endoscopic movements or by tissue exposure with surgical grasping forceps. As a result, an incision line can be planned inside this region and adapted to the underlying motion (Figure 9.3B). Optical triangulation of the corresponding points in the two views gives the three-dimensional motion vector of the tracked surface area. Finally, the path of a planned incision line can be followed on the deforming tissue by the integrated laser scanning unit. Experimental results on ex-vivo image data showed a tracking accuracy of 0.83 ± 0.61 mm, with an update rate of 30 frames per second [ ].
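The following sketch illustrates the idea of thin plate spline-based plan deformation using SciPy's RBF interpolator: tracked control points before and after deformation define a warp that is then applied to the planned incision line. The control points and displacements are invented for illustration; this is not the tracking pipeline of Schoob et al.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def warp_plan(ctrl_ref, ctrl_cur, plan_ref):
    """Warp a planned incision polyline from the reference frame to the current
    frame with a thin plate spline fitted to tracked control points."""
    tps = RBFInterpolator(ctrl_ref, ctrl_cur, kernel="thin_plate_spline")
    return tps(plan_ref)

# Hypothetical tracked mesh nodes (pixels) before and after tissue deformation.
ctrl_ref = np.array([[0, 0], [40, 0], [0, 40], [40, 40], [20, 20]], dtype=float)
ctrl_cur = ctrl_ref + np.array([[1, 2], [0, 3], [2, 1], [1, 1], [3, 2]], dtype=float)

# Incision line drawn in the reference frame, warped to the current frame.
plan_ref = np.stack([np.linspace(5, 35, 7), np.full(7, 20.0)], axis=1)
print(warp_plan(ctrl_ref, ctrl_cur, plan_ref))
```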




Figure 9.3


Soft tissue motion tracking between consecutive image frames with (A) mesh-based deformation modeling and (B) an incision line virtually defined in the tracked region.

(Source: A. Schoob et al.)


New surgeon-robot interfaces with dynamic planning


Istituto Italiano di Tecnologia (Italian Institute of Technology): L.S. Mattos, N. Deshpande, B. Davies, J. Ortiz, L. Fichera, E. Olivieri, G. Barresi, D. Pardo, F. Mora, A. Laborai


The teleoperation control console


Intraoperative control of the μRALP surgical system is performed entirely through the surgeon-robot interface. This is a teleoperation control console specifically designed to place the surgeon in an ergonomic position and to offer intuitive control over all system components. Its set-up is based on an open-frame structure that does not obstruct communication and interaction between the surgeon and the operating room (OR) staff. In addition, its compact cart structure can be easily rolled in and out of the OR and can be placed in the vicinity of the patient, thus facilitating direct surgeon-patient interaction.


During the operation, the surgical site is visualized through μRALP’s Virtual Microscope interface [ ]. This is an immersive stereoscopic display specially configured to offer an improved visualization experience compared to the surgical microscope, which is the standard visualization equipment surgeons are trained with and accustomed to using for delicate microsurgery. Stereo images captured from μRALP’s endoscope cameras are processed and displayed in the system in real time, allowing relevant information and augmented reality features to be added directly onto the surgeon’s field of view. Examples of the use of such capabilities include dynamic planning of laser incision lines with graphic overlay, as shown in Figure 9.4 .




Figure 9.4


Examples of μRALP’s dynamic planning and automatic laser control capabilities based on augmented reality features.

The functionalities include: (A) incision line planning; (B) ablation area planning; and (C) definition of operative regions for the laser (i.e. safe and forbidden areas).

(Source: L.S. Mattos et al.)


The surgical laser is controlled using a graphics tablet and stylus device, which is a highly intuitive interface able to significantly improve laser-beam control, aim precision, and the overall feasibility of laser microsurgery systems [ , ]. In the μRALP system, the tablet interface is used for several functions: 1) real-time laser aim control; 2) incision planning; 3) ablation planning; and 4) definition of operative regions (safe and forbidden areas for laser operation).


Real-time laser aim control involves directly mapping inputs received through the tablet interface to the motion controller of the laser-steering micro-robot. In this operating mode, the tablet interface works similarly to a computer mouse: the laser spot instantly follows the movement of the stylus.
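A minimal sketch of such an absolute position-to-position mapping is shown below; the tablet resolution and mirror tilt range are hypothetical, and the real system involves calibration between tablet, image and scanner coordinates.

```python
import numpy as np

def stylus_to_mirror(stylus_xy, tablet_size, mirror_range_deg):
    """Map absolute stylus coordinates to micromirror tilt commands (degrees),
    so the laser spot follows the stylus tip."""
    norm = np.asarray(stylus_xy, dtype=float) / np.asarray(tablet_size, dtype=float) - 0.5
    return norm * np.asarray(mirror_range_deg, dtype=float)   # centered tilt per axis

# Hypothetical tablet resolution (counts) and a +/- 10 degree tilt range per mirror axis.
print(stylus_to_mirror([16200, 6750], tablet_size=(21600, 13500), mirror_range_deg=(20.0, 20.0)))
```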


In incision planning mode, on the other hand, the stylus is used to precisely draw incision lines over the region of interest; these are displayed through graphics overlays added to the live video stream. Once planning is completed and confirmed by the surgeon, the desired laser trajectory is sent to the micro-robot controller for high-precision autonomous execution. The surgeon can stop the planning process or reprogram the system by simply pressing a button on the stylus. This makes the process highly intuitive and dynamic.
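One simple way to turn a hand-drawn polyline into a trajectory suitable for autonomous execution is constant arc-length resampling, sketched below; the 0.5 mm waypoint spacing is an arbitrary illustrative value, not a μRALP parameter.

```python
import numpy as np

def resample_polyline(points, step_mm):
    """Resample a hand-drawn polyline at a constant arc-length step so the
    laser scanner receives evenly spaced waypoints."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)   # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])             # cumulative arc length
    targets = np.arange(0.0, s[-1] + 1e-9, step_mm)
    x = np.interp(targets, s, points[:, 0])
    y = np.interp(targets, s, points[:, 1])
    return np.stack([x, y], axis=1)

# Hypothetical stylus samples (mm, in the laser workspace) and a 0.5 mm waypoint spacing.
drawn = [[0.0, 0.0], [1.2, 0.4], [2.0, 1.5], [3.1, 1.6]]
print(resample_polyline(drawn, step_mm=0.5))
```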


Ablation planning is performed in a very similar way. The stylus is used to precisely draw the perimeter of the area to be ablated, which is marked using graphic overlays. Once this planning is completed and approved by the surgeon, the system computes the optimal laser trajectory for total coverage and ablation of the defined region. This trajectory is then sent to the micro-robot controller, which executes the defined trajectory by high-speed scanning. Once again, re-planning is simple and dynamic: a click on the stylus button cancels the previous plan and allows a new programming cycle to be started.
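A common way to compute such a coverage trajectory is a boustrophedon (back-and-forth) raster clipped to the drawn perimeter. The sketch below illustrates this under an assumed scan pitch, without claiming it matches the optimal-trajectory computation used in μRALP.

```python
import numpy as np

def point_in_polygon(pts, poly):
    """Even-odd ray-casting test for an (N, 2) polygon over an array of points."""
    x, y = pts[:, 0], pts[:, 1]
    inside = np.zeros(len(pts), dtype=bool)
    x0, y0 = poly[-1]
    for x1, y1 in poly:
        cond = (y0 > y) != (y1 > y)
        with np.errstate(divide="ignore", invalid="ignore"):
            x_cross = (x1 - x0) * (y - y0) / (y1 - y0) + x0
        inside ^= cond & (x < x_cross)
        x0, y0 = x1, y1
    return inside

def raster_coverage(poly, pitch_mm):
    """Boustrophedon scan covering the interior of the drawn ablation perimeter."""
    poly = np.asarray(poly, dtype=float)
    xs = np.arange(poly[:, 0].min(), poly[:, 0].max() + pitch_mm, pitch_mm)
    ys = np.arange(poly[:, 1].min(), poly[:, 1].max() + pitch_mm, pitch_mm)
    path = []
    for i, y in enumerate(ys):
        row = np.stack([xs, np.full_like(xs, y)], axis=1)
        row = row[point_in_polygon(row, poly)]
        path.extend(row if i % 2 == 0 else row[::-1])   # alternate scan direction per row
    return np.array(path)

# Hypothetical ablation perimeter (mm) and a 0.25 mm scan pitch.
perimeter = [[0, 0], [4, 0], [4, 3], [0, 3]]
print(len(raster_coverage(perimeter, pitch_mm=0.25)), "waypoints")
```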


Cognitive controllers for incision and ablation depth control


μRALP’s dynamic planning system also allows precise control over the depth of laser incision and ablation. This is done using built-in cognitive controllers, with both assistive and autonomous depth control modes.


The assistive laser incision-depth controller is an online supervisor and alert system able to provide real-time feedback on the depth of the incision being performed. This feedback is provided graphically through the Virtual Microscope (Figure 9.5). Additionally, haptic and/or tactile feedback can be used to convey this information to the user, as demonstrated in [ ] and [ ]. In all cases, the controller continuously monitors the surgical laser parameters and activation durations and uses this information as input to cognitive models of laser-tissue interaction. The output of these models is a real-time estimate of laser incision depth, accurate to within approximately 100 μm [ ]. This feedback complements the range of information provided to the surgeon, leading to significantly improved laser incision-depth control [ ].
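The cited publications describe the cognitive models in detail; as a stand-in, the sketch below accumulates an incision-depth estimate from laser power and activation time with a single hypothetical efficiency coefficient, purely to illustrate the idea of online estimation from exposure history.

```python
class IncisionDepthEstimator:
    """Running estimate of laser incision depth from exposure history.

    A deliberately simple stand-in for the cognitive laser-tissue interaction
    model: depth is assumed to grow with delivered energy, with an efficiency
    coefficient that would be fitted on ex-vivo data (hypothetical value here).
    """

    def __init__(self, mm_per_joule=0.05):
        self.mm_per_joule = mm_per_joule   # hypothetical calibration constant
        self.depth_mm = 0.0

    def update(self, power_w, duration_s):
        """Accumulate depth for one laser activation of given power and duration."""
        self.depth_mm += self.mm_per_joule * power_w * duration_s
        return self.depth_mm

est = IncisionDepthEstimator()
for _ in range(4):                         # four 100 ms exposures at 3 W
    print(f"estimated depth: {est.update(power_w=3.0, duration_s=0.1):.2f} mm")
```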




Figure 9.5


Online estimation of laser incision depth.

A widget is added to the surgeon’s field of view to provide real-time feedback on the depth of the laser incision being performed.

(Source: L.S. Mattos et al.)


The autonomous incision-depth controller executes incisions to a predefined depth, based on cognitive laser-tissue interaction models similar to those described above. In this case however, the surgeon first plans the desired incision line and depth using the μRALP interface, and then the robotic system executes the plan autonomously. This is done based on feed-forward control with cognitive models that map the desired incision characteristics to the laser activation parameters; e.g. power and scanning speed. The result is precise incision with 100 μm depth accuracy [ ], a level of precision previously unimaginable in soft-tissue surgery.
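A feed-forward plan of this kind can be sketched by inverting a simple depth model: assuming the depth gained per pass scales with the energy deposited per unit length of the cut (power divided by scan speed), the number of passes needed for a target depth follows directly. The coefficient below is hypothetical, not a fitted μRALP model.

```python
import math

def plan_laser_passes(target_depth_mm, power_w, scan_speed_mm_s, ablation_coeff_mm2_per_j=0.2):
    """Feed-forward sketch: depth per pass is assumed proportional to energy per
    unit cut length (power / scan speed); the coefficient (mm^2 per joule) is
    hypothetical and would be identified experimentally."""
    depth_per_pass_mm = ablation_coeff_mm2_per_j * power_w / scan_speed_mm_s
    passes = math.ceil(target_depth_mm / depth_per_pass_mm)
    return passes, depth_per_pass_mm

passes, per_pass = plan_laser_passes(target_depth_mm=1.0, power_w=3.0, scan_speed_mm_s=10.0)
print(f"{passes} passes, about {per_pass:.2f} mm per pass")
```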


The final μRALP depth control functionality is an automatic ablation-depth controller. This is an extension of the incision-depth controller, able to receive both the desired ablation area and the desired depth as inputs. Based on this information and internal models, the system computes the appropriate laser activation parameters and automatically executes the ablation process. Here, ablations accurate to within about 170 μm are achieved by automatic superimposition of precisely controlled incisions, as illustrated in Figure 9.6 [ ].




Figure 9.6


Transverse plane of an ablation crater achieved by automatically controlled superimposition of laser incisions.

(Source: L.S. Mattos et al.)


Safety supervision


Università degli Studi di Genova (University of Genoa): G. Peretti, L. Guastini; Istituto Italiano di Tecnologia (Italian Institute of Technology): L.S. Mattos, N. Deshpande


The μRALP surgical system offers several levels of safety supervision to minimize risks associated with the robotic system and with the surgical procedures themselves. The first level consists of a hardware watchdog that monitors key system components and software modules, triggering alarms and emergency states if any fault is detected. This system is directly connected, for example, to the surgical laser interlock circuit, which immediately disables the device in case of fault.


Operationally, safety supervision is available through augmented reality features that define operating regions for the laser in the surgical area. Safe and forbidden areas can be dynamically defined by the surgeon intraoperatively, following a process similar to the one used for planning ablation areas (see Figure 9.4). This is a simple and intuitive process that only requires drawing the perimeter of the region to be marked with the stylus and tablet interface. When safe regions are defined, the surgical laser can only be activated while its aiming point lies inside one of them; if the aiming point enters a forbidden region, the system automatically disables laser activation.
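The gating logic can be sketched as polygon containment tests on the laser aiming point, as below; the polygons and the use of Matplotlib's Path class are illustrative choices, not the μRALP safety implementation.

```python
from matplotlib.path import Path

class LaserRegionSupervisor:
    """Gate laser activation against surgeon-drawn safe and forbidden regions.

    Regions are polygons in the image/workspace plane; this is a minimal sketch
    of the behaviour described in the text, not the actual safety module."""

    def __init__(self, safe_polygons, forbidden_polygons):
        self.safe = [Path(p) for p in safe_polygons]
        self.forbidden = [Path(p) for p in forbidden_polygons]

    def laser_enabled(self, aim_xy):
        if any(p.contains_point(aim_xy) for p in self.forbidden):
            return False                                   # forbidden regions take priority
        return any(p.contains_point(aim_xy) for p in self.safe)

supervisor = LaserRegionSupervisor(
    safe_polygons=[[(0, 0), (10, 0), (10, 8), (0, 8)]],        # hypothetical safe rectangle (mm)
    forbidden_polygons=[[(4, 3), (6, 3), (6, 5), (4, 5)]])     # hypothetical region to protect
print(supervisor.laser_enabled((2.0, 2.0)), supervisor.laser_enabled((5.0, 4.0)))
```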


Additional operational safety is provided by a cognitive system of incision-depth supervision as previously described. In this case however, instead of providing depth feedback, the system simply disables the laser when the real-time incision-depth estimate reaches a surgeon-defined threshold. This system keeps the surgeon in direct control of laser activation while increasing safety when underlying tissue layers have to be protected or preserved.
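The corresponding depth interlock reduces to comparing the running depth estimate against the surgeon-defined threshold, as in this minimal sketch (the threshold value is hypothetical):

```python
class DepthInterlock:
    """Disable the laser once the running incision-depth estimate reaches a
    surgeon-defined threshold (a sketch of the supervision behaviour above)."""

    def __init__(self, threshold_mm):
        self.threshold_mm = threshold_mm

    def laser_enabled(self, estimated_depth_mm):
        return estimated_depth_mm < self.threshold_mm

interlock = DepthInterlock(threshold_mm=1.0)
for depth in (0.4, 0.8, 1.05):             # estimates from a depth model as sketched earlier
    print(depth, interlock.laser_enabled(depth))
```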


Microrobot


Franche-Comté Électronique Mécanique Thermique et Optique, Université de Franche-Comté (FEMTO, University of Franche-Comté, Automatic Control and Micro-Mechatronic Systems Department): B. Tamadazte, K. Rabenorosoa, M. Rakotondrabe, S. Lescano, N. Andreff


Micromechatronic design


Mechatronic, or more precisely micromechatronic, design represents a challenge in the development of robotic surgery systems, for various reasons. For example, instruments need to be dexterous, miniature and intuitive, and able to function well in confined spaces, especially in minimally invasive applications. The efficacy of a mechatronic (robotic) surgery system is naturally measured by the clinical benefit it provides: for the patient (accuracy, reduced bleeding, less scarring and postoperative trauma, quick recovery, etc.) and for the surgeon (dexterity, ergonomics, feasibility of tasks impossible in manual surgery, etc.). Minimally invasive or non-invasive robot-assisted surgery has become a reference standard in both developed and developing countries, creating a need for ever more miniaturized systems with high dexterity. A whole new discipline has thus emerged to meet these clinical and technological demands: medical and surgical microrobotics.


In developing the μRALP system, intended to be miniature and intuitive, we aimed at a microrobotic device to control laser scanning of the vocal folds (Figure 9.7). The micromechatronic device is integrated in the patient end of the robotic endoscopic system [ ], and can thus be seen as the active component of the robotic system as a whole. The most difficult point is no doubt the actuated mirror controlling the orientation of the laser beam aimed at the target (here, the vocal folds). Various designs have been studied, developed and tested on bench models and human cadavers. The one adopted for the prototype was a passive silicon mirror actuated by two millimeter-sized piezoelectric linear motors. The mirror design is based on micro-electro-mechanical system (MEMS) technology, fabricated in a cleanroom using dedicated microfabrication processes. The device reproduces the architecture of parallel robots (parallel kinematics), widely used in industry at larger scales. The combined mirror and piezoelectric motors provide 2 degrees of freedom in rotation, giving 2 translations of the laser spot on the target. Prerequisites for vocal-fold laser surgery include the size, working distance and workspace of the laser scanning system. In summary, to scan the entire surface of the vocal folds (about 20 × 20 mm on average in adults) at a distance of some 20 mm between the tip of the robot and the vocal folds, the MEMS mirror requires at least 20° of rotation.
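As a back-of-the-envelope check of this workspace requirement, the sketch below computes the mirror rotation range under a simplified normal-incidence geometry in which the reflected beam deflects by twice the mirror angle; the real folded optical path in the endoscope tip differs, but the result is of the same order as the rotation quoted above.

```python
import math

def mirror_rotation_range_deg(half_field_mm, standoff_mm):
    """Total mirror rotation needed per axis to scan +/- half_field_mm at the
    given standoff, assuming normal incidence and a 2x beam-deflection rule."""
    beam_half_angle = math.atan(half_field_mm / standoff_mm)   # beam must deflect +/- this angle
    mirror_half_angle = beam_half_angle / 2.0                  # mirror tilts half the beam deflection
    return math.degrees(2.0 * mirror_half_angle)               # total rotation range per axis

# 20 x 20 mm vocal-fold field at a 20 mm standoff (values from the text).
print(f"total mirror rotation per axis: {mirror_rotation_range_deg(10.0, 20.0):.1f} deg")
```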




Figure 9.7


Diagram of the global μRALP concept.


Finally, we developed a scanning system: the microrobot shown in Figure 9.8. The silicon mirror has a thin gold coating to enhance reflection: gold has been shown to reflect almost 100% of the energy of a surgical laser beam at wavelengths above 0.7 μm, which covers the two lasers most widely used in surgery (λCO2 = 9.4 μm and λNd:YAG = 1.06 μm). Micromirror design and manufacturing details were described in [ , ].

