Contents
Classes of surgical robots
Human-robot input interfaces
Haptic and force feedback
Linking robots to navigation systems
Virtual fixtures
Giving robots surgical intelligence
This chapter aims to complete the basic knowledge provided in chapter 4 by analyzing the state of the art of surgical robot control methods.
Classes of surgical robots
To date, five classes of surgical robots can be distinguished, although most of them are still experimental.
Master-slave manipulators
These currently constitute the main class of surgical robot. The surgeon drives a master control made of electromechanical arms. The surgeon’s manipulations are transformed into position/orientation values by encoders and sent to software running the inverse kinematics of the electromechanical arms. The surgeon’s gesture is therefore converted into a numerical pose, usually a matrix combining a rotation and Cartesian coordinates.
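As a rough illustration, not drawn from any particular system, such a pose is often written as a 4 × 4 homogeneous transform combining a rotation matrix and a translation vector. The Python/NumPy sketch below builds one from roll/pitch/yaw angles and a Cartesian position; the axis convention and values are illustrative assumptions.

```python
import numpy as np

def pose_matrix(roll, pitch, yaw, xyz):
    """Build a 4x4 homogeneous pose from roll/pitch/yaw angles (rad)
    and a Cartesian position. The axis convention here is illustrative."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    # Z-Y-X (yaw, then pitch, then roll) rotation composition
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # orientation of the surgeon's handle
    T[:3, 3] = xyz             # Cartesian position of the handle
    return T

# Example: handle tilted 10 degrees about its x axis, held 5 cm along x.
master_pose = pose_matrix(np.radians(10.0), 0.0, 0.0, [0.05, 0.0, 0.0])
```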
These numerical data are then processed before being sent to the slave, a set of manipulators that replicate the surgeon’s hand motions (Figure 5.1). The slave can be built with more arms than the surgeon has, typically one to four.
The da Vinci® system from Intuitive Surgical Corp., described in chapter 6, is a typical example of a telemanipulator. Strictly speaking, telemanipulators are not robots, since they cannot be programmed to perform a surgical task independently but only replicate the surgeon’s hand motions. Their main goal is to improve the surgeon’s dexterity, both by using specific instruments (with specifically designed degrees of freedom [DOF]) fixed to the slave arms and by scaling and filtering the surgeon’s hand motion, to allow micromovements and to reduce tremor respectively.
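To make the scaling idea concrete, here is a minimal sketch with an assumed 5:1 scale factor (an illustrative value, not a figure from the source): only the incremental master displacement is scaled, so a 5 mm hand motion produces a 1 mm instrument motion.

```python
import numpy as np

SCALE = 0.2  # hypothetical 5:1 motion scaling (master -> slave)

def scale_master_increment(prev_master_pos, new_master_pos, slave_pos, scale=SCALE):
    """Apply Cartesian motion scaling over one control cycle.

    Only the incremental master displacement is scaled, so the mapping
    keeps working across clutched repositioning of the master handles.
    """
    delta = np.asarray(new_master_pos) - np.asarray(prev_master_pos)
    return np.asarray(slave_pos) + scale * delta

# A 5 mm hand motion along x moves the instrument tip only 1 mm.
tip = scale_master_increment([0, 0, 0], [0.005, 0, 0], slave_pos=[0, 0, 0])
```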
Additional features such as haptic control can be implemented on them, but most systems rely mainly on the surgeon’s visuomotor loop, namely eye-hand coordination. For this reason, a good imaging system with high resolution and frame rate is mandatory to provide good immersion of the surgeon in the operating field [ ]. In turn, good immersion significantly improves the surgeon’s performance [ ], which is an added advantage of telemanipulators. Another significant advantage is the possibility of mentoring, with a skilled surgeon and a surgeon in training working on two networked master control stations, both equally immersed in the same operating field.
Originally, these systems were developed for telesurgery, an application that has since taken a back seat but is still pursued by some groups [ , ]. Apart from legal issues and the difficulty of keeping the surgical team consistent, telesurgery raises the problem of the transmission time from the master control to the slave and back to the surgeon. Time delay progressively degrades surgical performance and task completion time by introducing latency into the visual control loop of the surgeon’s brain. A significant decay is seen when this time lag exceeds 100 ms. When the latency exceeds 300 ms, surgery becomes difficult, and at 700 ms only very few surgeons are able to complete a surgical task [ , ]. Because of the high-resolution image stream normally used to give the surgeon a realistic immersion in the operating field, hardware compression codecs are mandatory to minimize latency. Telesurgery experiments on the RAVEN system in various situations (including undersea) have shown a mean transmission delay of around 100 ms even when the distance between master and slave was as great as 6,000 miles [ ].
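A back-of-the-envelope check, assuming propagation in optical fibre at roughly two-thirds of the speed of light (illustrative figures, not values from the cited experiments), shows why a delay of around 100 ms is plausible at that distance:

```python
# Round-trip propagation delay over ~6,000 miles of optical fibre.
DISTANCE_KM = 6000 * 1.609          # ~9,654 km one way
FIBRE_SPEED_KM_S = 2.0e5            # light in fibre: ~200,000 km/s

one_way_ms = DISTANCE_KM / FIBRE_SPEED_KM_S * 1000   # ~48 ms
round_trip_ms = 2 * one_way_ms                        # ~97 ms
# Codec, routing and rendering delays come on top of this physical floor,
# which is why hardware compression is needed to stay near the 100 ms mark.
print(f"one-way ~ {one_way_ms:.0f} ms, round trip ~ {round_trip_ms:.0f} ms")
```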
Telesurgery could thus connect a group of highly skilled surgeons to patients with difficult cases, without the surgeons needing to travel to another hospital. Highly trained surgeons could telementor local surgeons operating on patients in their own facilities.
Automated machines
These are machine tools dedicated to surgery, which operate by following a list of coordinates to access and possibly drill, in the same way as industrial machines. The only working example is the ProBot system developed at Imperial College London [ ]. A cutter is controlled by an algorithm running through a list of coordinates of the prostatic tissue to be resected. An ultrasound probe linked to the robot provides direct control. The surgeon simply checks the correct execution of the program via a urethroscope.
In ENT surgery, automated machines may be useful to perform rapid and accurate mastoidectomies or Draf III frontal drill-outs. However, this concept requires perfect patient immobilization (with a Mayfield head clamp, for example), very accurate registration of the robot in the patient coordinate space, and safety tests. Whereas any industrial machine tool undergoes extensive unit testing before production starts, the same cannot apply to surgery, where the “production” is a single piece with no possible unit test.
Collaborative robots
Also called cobots, collaborative robots can be directly and easily handled by the surgeon him/herself (Figure 5.2). They can work both as automated machines for short sequences and in cooperation with the surgeon, with any degree of compliance. This class of manipulators relies on the ability to be moved from the distal end back through the kinematic chain, a property known as backdrivability. Such systems are not simple to design, as the actuators used in manipulators are usually mechanically geared to gain force/torque at the expense of motion speed (see previous chapter), giving the whole manipulator very low compliance. Special actuators and gear trains are required to let these robots be moved effortlessly by the surgeon.
Until now, two types of collaborative manipulators have been described [ , ]:
●	parallel cobots, which copy and follow the human movement and add their own force or accuracy to it. Some of these can have the same kinematic structure as a human body part (arm, leg or hand) and are usually called exoskeletons 1;
●	serial cobots, which form a serial kinematic chain in which human and robot have complementary motions.

1. An example of an exoskeleton is the Hercule-V3® developed by the French company RB3D with the French Army. It helps a soldier to carry a heavier payload and run faster (see www.rb3d.com).
These systems are gaining increasing interest for industrial applications as they combine the different and complementary performances of human beings and robots. However, they have not stimulated significant interest for surgical applications and until now, their introduction in the field of surgery has been limited to the Surgicobot project [ ] and the SurgiMotion/SurgiDelta system (see chapter 10 ). Both are primarily parallel cobots.
We firmly believe that these deserve greater attention in the near future as they have the potential to combine the accuracy of electromechanical devices with the procedural skills of surgeons.
In vivo microrobots
These have been studied for heart surgery (HeartLander, dedicated to myocardial stem cell injection without sternotomy through a single 15 mm port [ ]) and to control the movement of a micro-endoscope in the gastro-intestinal tract [ ].
Hand-held surgical robots
These robots, which are a conceptual extension of cobots, are designed to be held directly by the surgeon [ ]. Here, the surgeon’s hand acts as the first stage of a manipulator system, the manually held hardware refining the motion at a submillimeter level. Some systems are designed to improve surgical agility and include additional DOF or extended rotation around an axis, increasing the surgeon’s reach [ ]. They are mainly intended for laparoscopic surgery (Figure 5.3).
Another kind of machine in this class is designed to improve surgical accuracy by removing the physiological hand tremor present in any human movement, typically in an 8–15 Hz frequency band with an amplitude on the order of 50 μm along each principal motion axis [ ]. These devices have the advantage of keeping the surgeon at the best possible level of control, near the patient, simply removing tremor by band-pass or low-pass filtering (Figure 5.4).
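As an offline illustration of such filtering, the sketch below (in Python, assuming a 250 Hz motion sensor and the SciPy library; both are assumptions, not details from the source) suppresses the 8–15 Hz tremor band from a one-dimensional position trace. A real hand-held device would need a causal, low-latency or adaptive implementation rather than the zero-phase filter used here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0  # assumed sampling rate of the hand-motion sensor, Hz

def remove_tremor(position, fs=FS, band=(8.0, 15.0), order=4):
    """Suppress the 8-15 Hz physiological tremor band from a 1-D
    position trace while leaving slower, voluntary motion intact."""
    nyq = 0.5 * fs
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="bandstop")
    # Zero-phase filtering (offline only) avoids adding delay to the motion.
    return filtfilt(b, a, position)

# Synthetic example: slow voluntary motion plus a 10 Hz, ~50 um tremor.
t = np.arange(0.0, 2.0, 1.0 / FS)
voluntary = 5e-3 * np.sin(2 * np.pi * 0.5 * t)    # 5 mm sweep at 0.5 Hz
tremor = 50e-6 * np.sin(2 * np.pi * 10.0 * t)     # 50 um tremor at 10 Hz
filtered = remove_tremor(voluntary + tremor)
```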
NOTES-robots
We should mention the emerging concept of NOTES (natural orifice transluminal endoscopic surgery), in which minimally invasive surgery of the peritoneal cavity is performed through natural orifices without any external incision. The aim is to avoid any visible scarring by operating through the mouth, anus, bladder or vagina, with an internal incision to reach intraperitoneal targets. It mainly relies on flexible endoscopy (Figure 5.5) [ ]. Details of this type of system are beyond the scope of this chapter, but extensive work has been done to robotize this type of surgery [ , ].
Human-robot input interfaces
Telemanipulators, and to a lesser extent comanipulators, require special human-machine interfaces to be operated properly. While the most common input system is a master control consisting of handles that copy the surgeon’s hand movements (see chapter 6), significant work has been published with the aim of keeping the surgeon at the patient’s side rather than isolated at a remote station [ ], while still providing a high level of interaction with the machine. Several studies have shown that this complex field brings together ergonomics, safety analysis, psychology (mental models) and immersive science. The aim of some groups is to define a new robot control paradigm in which the surgeon’s skills can be directly transmitted to the robot [ , ].
Voice commands were introduced early for this purpose, although there were major concerns about their ability to work safely in the very noisy environment of a surgical theater, where the noise floor is around 45 dB and can occasionally rise to 75 dB 2, for instance when a surgical instrument is being unpacked. For this reason, voice commands are often used as a command selector combined with a foot pedal for validation purposes (see the sketch below) [ , ]. However, the extensive work carried out in the automotive industry to resolve this issue suggests that accurate results will soon be achieved thanks to beam-focusing microphones and sophisticated voice recognition algorithms.

2. Values measured with a decibel meter by the author.

Infrared gaze-trackers have been proposed to control endoscopic vision, although current developments are limited to laparoscopic surgery [ ]. Although less accurate, a headset with passive infrared reflectors worn by the surgeon and acting as a computer mouse is a much simpler option when limited control is sufficient [ ].
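Returning to the voice-plus-pedal pattern, the sketch below (command and method names are hypothetical, not those of any commercial system) shows the arm-and-confirm logic: a recognized voice command only selects an action, and nothing is executed until the foot pedal validates it.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VoicePedalController:
    """Voice command selects an action; the foot pedal validates it."""
    allowed: set = field(default_factory=lambda: {"zoom_in", "zoom_out",
                                                  "move_left", "move_right"})
    pending: Optional[str] = None

    def on_voice(self, command: str) -> None:
        # A recognized command is only armed, never executed directly,
        # so a misrecognition in a noisy theater cannot trigger motion.
        self.pending = command if command in self.allowed else None

    def on_pedal_press(self) -> Optional[str]:
        # The pedal press is the explicit validation step.
        command, self.pending = self.pending, None
        return command

ctrl = VoicePedalController()
ctrl.on_voice("zoom_in")           # armed, nothing happens yet
executed = ctrl.on_pedal_press()   # -> "zoom_in", now actually executed
```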
Surprisingly, some products from the gaming industry (a huge market that stimulates very creative innovation) have been successfully introduced into the operating theatre as input controls. One example is the Kinect® camera, equipped with 3D sensors able to accurately detect hand gestures, which allows surgical interaction with a machine without any sterilization issues [ , ].
Haptic and force feedback
Haptic or touch sense is, together with warmth, cold and pain, one of the four somesthetic sensations described by Mountcastle [ ]. It is also the most difficult to define accurately, owing to its multimodal nature. Tactile sensations can be roughly categorized into simple stimuli, such as touch detection or vibration, and complex epicritic stimuli including texture, spatial orientation, size, shape, sharpness, form and motion. Temporal cadence is also known to significantly influence the integration of tactile sensations [ ]. How the human cortex transforms these perceptions into a consistent knowledge pattern is still poorly understood [ ]. However, highly trained manual workers, surgeons among them, clearly develop a very specific ability to use haptic sensations to refine their knowledge of the world they work in and to guide their movements [ ].

This point is of particular importance for surgeons, who can usually rely only on vision and touch to control the progress of their work. When a robotized device is interposed between surgeon and patient without any mechanical linkage between them, the operator’s haptic sense is lost and only virtual means can be used to inform him or her. This is responsible for a decrease in surgical performance [ ]. Developing a solution requires deciding which kind of information should be sent back to the surgeon’s hand: the force/torque arising when the slave instrument comes into contact with tissue, the vibrations possibly generated by such contact, or the transmitted pulsations of an adjacent vessel? To date, the question remains open, and the terms haptic feedback and force feedback are used interchangeably to designate any technology that aims to provide the master control with mechanically based sensations.
Technically, implementing haptics on a robotic system raises many issues [ ]. Since forces and torques can each act along three axes, 6-DOF force sensors should be used [ ], making the instrument heavier and more cumbersome, a real problem for a minimally invasive instrument. These sensors usually need to be properly aligned and scaled with respect to the geometry of the instrument on which they are mounted, a complex process when significant accuracy is required. Finally, matching their data with the relevant patient anatomy requires an additional registration process to link the robot kinematics to the surgical world.
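As an illustration of the alignment issue, the following sketch (with an assumed sensor-to-tip rotation and offset, not taken from any specific instrument) re-expresses a force/torque reading from the sensor frame at the instrument tip, which is the point that must ultimately be registered to the patient’s anatomy.

```python
import numpy as np

def wrench_at_tip(force_s, torque_s, R_ts, p_ts):
    """Re-express a force/torque measured in the sensor frame at the tool tip.

    R_ts : 3x3 rotation from sensor frame to tip frame
    p_ts : position of the sensor origin expressed in the tip frame (m)
    """
    f_t = R_ts @ np.asarray(force_s)
    # The torque gains a lever-arm term because force and moment are
    # transported to a different reference point.
    tau_t = R_ts @ np.asarray(torque_s) + np.cross(p_ts, f_t)
    return f_t, tau_t

# Hypothetical geometry: sensor mounted 12 cm behind the tip, axes aligned.
f, tau = wrench_at_tip([0.5, 0.0, 0.0], [0.0, 0.0, 0.0],
                       np.eye(3), p_ts=[0.0, 0.0, -0.12])
```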
Only a few surgical robots use a force sensor for haptic purposes, despite the availability of extremely sensitive, wide dynamic range sensors such as the Nano-17 from ATI Inc. (Figure 5.6), which is able to accurately detect the forces applied to the endosteal membrane while performing a cochleostomy [ , ].