Surgical robotics: safety, legal, ethical and economic aspects






Contents

  • Safety and normalization in surgical robotics

  • Economic assessment of transoral robotic-assisted surgery

  • Law, ethics, robots and surgery




Safety and normalization in surgical robotics



B. Lombard

Despite their complexity, ultra-specialization and intrinsically invasive nature, legislation worldwide considers surgical robots as medical devices at the same level as bandages or contact lenses. Regulations largely remain to be established, harmonized and internationalized; having so recently been introduced in the world of surgery, the norms, safety requirements and even certification of these devices are as yet based more on industrial than medical criteria. There is moreover great confusion in the biomedical engineering community regarding the exact meaning of existing norms and rules, how they are to be applied, and the extent to which they apply to the manufacturer, the institution and the surgeon respectively [ ].


Two basic principles govern safety in surgical robots: the application must benefit the patient compared with the validated reference treatment, and an effective risk-minimization and risk-management strategy for the inevitable residual risks must be drawn up and integrated into every aspect of robot design: hardware, software, manufacture, user training, etc.


General safety design strategies





  • Fault-tree analysis is an example of a safety design strategy, widely used in industry. It consists in a graphic representation, with logical symbols, of the conditions that have to coincide to trigger a failure event [ , ] (a minimal computational sketch follows this list).



  • Event-tree analysis is another tool, which chronologically combines all possible events and their normal or failure-inducing consequences [ ]. Each stage of the tree includes an action to be taken or a recommendation to avoid risk onset. The mathematical probability of the occurrence of any failure (or success) can also be computed for various conditioning situations [ ]. The advantages of these industrial tools, widely used in the military, land transport and aerospace industries, are that risks are formally and graphically represented, allowing both computer analysis of large amounts of data and a human check of their consistency.



  • Fault-tolerance algorithms are used to design failsafe industrial and medical systems, in which each error that would lead to a fail state is managed so as to minimize its consequences [ ]. For instance, a power failure in a robotized surgical arm would normally let it fall onto the patient. Minimizing the weight of the arm and fitting each joint with brakes that must be powered to release motion are simple yet effective ways to manage the risk of power failure. This example shows that the design of the robot has a major impact on its safety. However, adding brakes adds complexity and increases the total weight of the robot, creating new risks. For this reason, the generally agreed “golden rule” is that the simpler the design, the lower the cumulative risk [ ].
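To make the tree idea computable, the minimal Python sketch below evaluates a small hypothetical fault tree; every event name and probability is invented for illustration and describes no real system, and the basic events are assumed independent.

# Minimal fault-tree evaluation. Basic events carry failure
# probabilities and are assumed independent; gates combine them.

def and_gate(*probs):
    # AND gate: the output fails only if every input condition coincides.
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(*probs):
    # OR gate: the output fails if at least one input condition occurs.
    p_none = 1.0
    for x in probs:
        p_none *= 1.0 - x
    return 1.0 - p_none

# Hypothetical basic events (illustrative probabilities per procedure):
P_POWER_LOSS = 1e-4     # mains power failure
P_BATTERY_DEAD = 1e-3   # back-up battery unavailable
P_SENSOR_FAULT = 1e-5   # joint sensor out of range
P_SOFTWARE_BUG = 1e-6   # unhandled software exception

# Top event "uncontrolled arm motion": total power loss (mains AND
# back-up battery) OR a sensor or software fault slipping through.
p_no_power = and_gate(P_POWER_LOSS, P_BATTERY_DEAD)
p_top = or_gate(p_no_power, P_SENSOR_FAULT, P_SOFTWARE_BUG)
print(f"P(top event) = {p_top:.2e}")  # about 1.1e-05

Note how the back-up battery turns what would otherwise be a single OR input of 10^-4 into an AND combination of 10^-7: this is the quantitative side of the redundancy principle discussed below.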



This aspect applies also to the software controlling the robot. The greater the number of sophisticated routines, the greater the risk of encountering a condition leading to fatal failure. A single division by zero, or the computation of the tangent of 90°, may crash a computer entirely if the possibility has not been anticipated by the software designer. Although the source code of a robotized system is presumed to be closely supervised by multiple developers, some bugs may be very difficult to detect when the kinematics of the robot involve complex computations (see chapter 4) with mathematical singularities and zero truncation.1

1. The zero of a computer running in floating-point mode is not a mathematical zero but rather the smallest value it can represent: e.g. dividing 1 by 10−16 could return a small number or generate a fatal error, depending on the computer.
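As a toy illustration (the function names and the tolerance value are invented), the Python sketch below contrasts an unguarded computation near the tan(90°) singularity with a guarded version that converts the hazard into a managed error:

import math

def unguarded_speed(angle_deg):
    # Near 90° this silently returns a huge, meaningless number:
    # radians(90) is not exactly pi/2, so tan() never actually fails.
    return math.tan(math.radians(angle_deg))

def guarded_speed(angle_deg, tol=1e-6):
    # Guarded version: detect proximity to the singularity and raise
    # a managed error the controller can handle (e.g. freeze the arm).
    if abs(math.cos(math.radians(angle_deg))) < tol:
        raise ValueError(f"kinematic singularity near {angle_deg} degrees")
    return math.tan(math.radians(angle_deg))

print(unguarded_speed(90.0))      # about 1.6e16, silently wrong
try:
    guarded_speed(90.0)
except ValueError as err:
    print("managed failure:", err)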


The best method to check these systems is to perform long-running tests under laboratory conditions, over large numbers of cycles and in situations as varied as possible [ ].
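The sketch below is a toy version of such a test campaign; the routine under test and all parameters are invented for illustration. It drives the control code through many randomized cycles with a fixed seed, so any failure is reproducible, and it records unmanaged exceptions instead of stopping at the first one:

import math, random, traceback

def control_step(angle_deg, tol=1e-6):
    # Stand-in for the routine under test (same guard as the sketch above).
    if abs(math.cos(math.radians(angle_deg))) < tol:
        raise ValueError("kinematic singularity")
    return math.tan(math.radians(angle_deg))

def run_stress_test(fn, n_cycles=100_000, seed=42):
    # Drive fn through many randomized cycles. Managed errors are
    # expected; anything else is recorded as a genuine defect rather
    # than aborting the whole campaign.
    rng = random.Random(seed)   # fixed seed keeps failures reproducible
    defects = []
    for i in range(n_cycles):
        angle = rng.uniform(-180.0, 180.0)   # randomized joint command
        try:
            fn(angle)
        except ValueError:
            pass  # the guard did its job: a managed, expected error
        except Exception:
            defects.append((i, angle, traceback.format_exc()))
    return defects

defects = run_stress_test(control_step)
print(f"{len(defects)} unmanaged defects in 100000 cycles")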




  • Redundancy design, widely used in aerospace, consists in adding critical safety components in parallel [ ]: for instance, adding sensors at different stages and cross-checking their values; using a back-up battery power supply; or building an architecture from multiple computers running separately in parallel (a combined sketch follows this list).



  • Dead-man switches are buttons or pedals that the user must actively hold down for the system to be able to perform an action. An example is the infrared switch located at the forehead rest of the da Vinci® master console (see chapter 6).



  • Frequent checking and servicing: as robots have moving parts, wear is unavoidable. Frequent checks, calibration controls, systematic replacement of any crucial wear-prone component, and use of cut-off timers are usual policy in robotics.



  • Log files are important tools whose incorporation should be mandatory in any surgical robot, just as black boxes are in commercial aviation. Not only can they shed light on the context of a failure, they also allow data to be accumulated so as to improve the state of knowledge of the whole community.
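The sketch below combines three of these ideas, redundant sensor cross-checking, a dead-man condition and systematic logging, in one simplified Python control cycle; all names, tolerances and the log format are invented for illustration and do not describe the da Vinci® architecture.

import logging

logging.basicConfig(filename="robot_log.txt", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

MAX_DISAGREEMENT_DEG = 0.5  # illustrative tolerance between redundant sensors

def read_joint_angle(encoder_deg, potentiometer_deg):
    # Redundancy: cross-check two independent sensors on the same joint;
    # if they disagree beyond tolerance, trust neither value.
    if abs(encoder_deg - potentiometer_deg) > MAX_DISAGREEMENT_DEG:
        logging.error("sensor disagreement: encoder=%.3f pot=%.3f",
                      encoder_deg, potentiometer_deg)
        raise RuntimeError("joint sensors disagree")
    return (encoder_deg + potentiometer_deg) / 2.0

def control_cycle(deadman_pressed, encoder_deg, potentiometer_deg):
    # Dead-man condition: no motion is commanded unless the surgeon
    # actively holds the switch (modeled here as a simple boolean).
    if not deadman_pressed:
        logging.info("dead-man switch released: motion inhibited")
        return None
    angle = read_joint_angle(encoder_deg, potentiometer_deg)
    logging.info("commanding motion at joint angle %.3f degrees", angle)
    return angle

# One healthy cycle and one faulty cycle, both recorded in the log file:
control_cycle(True, 30.00, 30.10)
try:
    control_cycle(True, 30.00, 45.00)  # sensors disagree: managed stop
except RuntimeError as err:
    print("safe stop:", err)

Raising a managed error on disagreement, rather than averaging blindly, mirrors the da Vinci® encoder/potentiometer cross-checks reported in Table 14.2.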



European directives applicable to surgical robotics


CE mark


European directives 93/42/EEC and 2007/47/EC define four classes of medical devices:




  • class I: low level of risk (usually, no contact with the patient);



  • class IIa: moderate level of risk (passive system in contact with the patient, but non-invasively);



  • class IIb: significant level of risk (active or invasive system);



  • class III: critical level of risk (e.g. system using ionizing radiation).



Unlike general-purpose software, medical software components are themselves considered entities requiring a CE mark. The IEC 62304 standard defines their classification:




  • class A: no injury or damage to health is possible;



  • class B: no serious injury is possible;



  • class C: serious injury or death possible.



From these basic definitions, it may be difficult to decide precisely to which class a surgical robot should belong. In Europe, the decision to classify a medical device in one or another category is taken by the manufacturer, who should be able to explain, if asked to do so by an official overseer, why and on the basis of what evidence the classification was made.


Officially, prototypes are not industrial products and CE marking should not apply, since this label attests that the commercially released product has followed a traceable pathway from design stage to marketing and that its manufacture was under the control of a constant quality assurance process [ , ]. However, prototype development should meet similar requirements, in particular a highly detailed risk analysis, including risk management.


Norms involved in medical robotics


Current applicable norms center on electrical safety: IEC 60601-2, regarding the acceptable ceiling for power supply leakage; IEC 62304, regarding the proper management of medical software development and its traceability; the IP (Ingress Protection) rating, part of IEC 60529, regarding a system’s level of immunity to penetration by solids, dust or liquids; and ISO 10218-1:2006, regarding the safety requirements for industrial robots, including the definition of safe areas.


Risk management at the surgeon’s level


Some publications on safety in robotized surgical procedures [ , ] have shown that, although manufacturers, users and authorities regard safety in surgical robotics as an important issue, and although surgeon training programs are well organized, none of these programs effectively trains surgeons to deal with failures or adverse events. This deserves emphasis, since it diverges from policy in commercial aviation, for instance, where training includes safety-critical situations and the training software takes any significant critical situation into account [ ]. However, some independent work has moved toward the same level of risk-management training: Alemzadeh et al. [ ] searched the publicly available FDA MAUDE (Manufacturer and User Facility Device Experience) database for officially reported events involving the da Vinci® system. From these data, the authors distinguished events due to robot malfunction from those imputable to human error (Table 14.1).



Table 14.1
Examples of reported events for the da Vinci® system (from the MAUDE database).

Device and instrument malfunctions:

  • Master tool manipulator malfunctions
  • Patient-side manipulator failures
  • Unintended operation of instruments (e.g. uncontrolled movements, power on/off)
  • Video/imaging problems at the surgeon’s console
  • Recoverable and non-recoverable system errors
  • Burns and holes in tip cover accessories, leading to electrical arcing, sparking, or charring of instruments
  • Broken parts of instruments falling into patients

Inadequate operational practices:

  • Inadequate handling of emergency situations
  • Lack of training with specific system features
  • Inadequate troubleshooting of technical problems
  • Inadequate system/instrument checks before procedure
  • Incorrect port placements
  • Incorrect electro-cautery settings or cable connections
  • Inadequate manipulation of robot master controls
  • Inadequate hand and foot coordination by main surgeon
  • Incorrect manipulation or exchange of instruments



Table 14.2 gives some examples of malfunctions, their report number in the MAUDE database, and their origin and consequences.



Table 14.2
Examples of reported malfunctions for the da Vinci® system (from the MAUDE database).

Report no. 1006071 (2008)
  Summary: recurring system errors #201 and #264, even after multiple restarts; the errors were due to voltage tracking faults and put the system into a recoverable safe state.
  Malfunction type: master tool manipulators.
  Procedure outcome: converted after 2 hours.

Report no. 3283230 (2013)
  Summary: master tool manipulator arm was sluggish and could not control the robotic arms; system error #22580 due to out-of-range hardware voltage level; multiple system restarts did not resolve the issue.
  Malfunction type: manipulators.
  Procedure outcome: aborted post-anesthesia.

Report no. 3093014 (2013)
  Summary: recurring error #23000, even after emergency power-off and restart; the error was raised because the angular positions of one or more robotic joints on a manipulator, as measured by the primary sensor (encoder) and the secondary sensor (potentiometer), were out of range or in disagreement.
  Malfunction type: joint sensors (potentiometer or encoder).
  Procedure outcome: aborted post-anesthesia and post-incision.

Report no. 3620041 (2014)
  Summary: non-recoverable error #23013 on patient-side manipulator; multiple system restarts failed to recover from the error.
  Malfunction type: joint sensors (potentiometer or encoder).
  Procedure outcome: converted to open surgery.

Report no. 921167 (2007)
  Summary: patient-side manipulator dropped suddenly; scissors instrument bumped into the uterus.
  Malfunction type: surgeon removed his/her hands from the master manipulators before removing his/her head from the console viewer (keeping the head in the console viewer keeps the robot engaged).
  Procedure outcome: pierced patient’s uterus.

Report no. 1961862 (2010)
  Summary: instrument toggled to guided tool change mode, moved slightly forward, and bumped into the colon.
  Procedure outcome: injury to patient’s colon.
