9th Summer School on Surgical Robotics


Registration for the 9th Summer School on Surgical Robotics (SSSR-2019) is now open (registration deadline: July 26th, 2019).

The School will be held in Montpellier, France, from 23rd to 28th September 2019, and is open to Master's students, PhD students, post-docs and participants from industry.

All information can be found on the official website: http://www.lirmm.fr/sssr-2019/

Working on translational activities in surgical robotics in the LIRMM office located in the new medical school of Montpellier, France.

Robotics enables surgery to be less invasive and/or enhances the performance of the surgeon. In minimally invasive surgery (MIS), for instance, robotics can improve the dexterity of conventional instruments, which is restricted by the insertion ports, by adding intra-cavity degrees of freedom. It can also provide the surgeon with augmented visual and haptic inputs. In open surgery, robotics makes it possible to use pre-operative and intra-operative image data in real time to improve precision and reproducibility when cutting, drilling or milling bones, and to accurately locate and remove tumours. In both cases, as in other surgical specialities, robotics allows the surgeon to perform more precise, reproducible and dextrous motions. It is also a promising way to minimize fatigue and to limit exposure to radiation. For the patient, robotic surgery may result in lower risk, pain and discomfort, as well as a shorter recovery time. These benefits explain the increasing research efforts made all over the world since the early 1990s.

Surgical robotics requires great skill in many engineering fields, as the integration of robots in the operating room is technically difficult. It raises new problems such as safety, human-machine cooperation, real-time sensing and processing, mechanical design, and force- and vision-based control. However, it is very promising as a means to improve conventional surgical procedures, for example in neurosurgery and orthopaedics, as well as to provide innovation in micro-surgery, image-guided therapy, MIS and Natural Orifice Transluminal Endoscopic Surgery (NOTES).

LIRMM at the Montpellier Faculty of Medicine 2, France

The highly interdisciplinary nature of surgical robotics requires close cooperation between medical staff and researchers in mechanics, computer science, control and electrical engineering. This cooperation has resulted in many prototypes for a wide variety of surgical procedures. A few robotic systems are already available commercially and have entered the operating room, notably in neurosurgery, orthopaedics and MIS.

Depending on the application, surgical robotics draws more or less deeply on the following fields:

  • multi-modal information processing;
  • modelling of rigid and deformable anatomical parts;
  • pre-surgical planning and simulation of robotic surgery;
  • design and control of guiding systems that assist the surgeon's gestures.

During the Summer School, these fields will be addressed by surgeons and researchers working in leading hospitals and labs. They will be complemented by engineers who will give insight into practical integration problems. The courses are aimed at PhD students, post-docs and researchers already involved in the area or interested in the new challenges of this emerging field interconnecting technology and surgery. A basic background in mechanical engineering, computer science, control and electrical engineering is recommended.


karma chameleon (gripper)

source: Festo


The chameleon is able to catch a variety of insects by shooting its tongue over its prey and securely enclosing it. The FlexShapeGripper uses this principle to grip the widest range of objects in a form-fitting manner. Using its elastic silicone cap, it can even pick up several objects in a single gripping process and put them down together, without the need for manual conversion.


The gripper consists of a double-acting cylinder, one chamber of which is filled with compressed air while the second is permanently filled with water. This second chamber is fitted with an elastic silicone moulding, which corresponds to the chameleon's tongue. The volume of the two chambers is designed so that the deformation of the silicone part is compensated. The piston, which tightly separates the two chambers from each other, is fastened by a thin rod to the inside of the silicone cap.
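To make that volume compensation concrete, here is a minimal back-of-the-envelope sketch in Python. The piston bore and stroke are invented for illustration, not Festo figures; the point is simply that, with the water chamber treated as incompressible, the volume swept by the rising piston must equal the volume by which the silicone cap is pulled inwards.

```python
# Illustrative volume balance for the water-filled chamber, assumed incompressible.
# The piston bore and stroke are made-up numbers, not Festo specifications.
import math

piston_diameter_m = 0.04   # hypothetical piston bore
piston_stroke_m = 0.03     # hypothetical stroke when the upper chamber is vented

piston_area_m2 = math.pi * (piston_diameter_m / 2) ** 2
swept_volume_m3 = piston_area_m2 * piston_stroke_m

# The silicone cap must be able to deform inwards by at least this volume,
# otherwise the incompressible water would block the piston.
print(f"Cap must accommodate about {swept_volume_m3 * 1e6:.1f} cm^3 of deformation")
```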


During the gripping procedure, a handling system guides the gripper across the object so that it touches the article with its silicone cap. The top pressurised chamber is then vented. The piston moves upwards by means of a spring support and the water-filled silicone part pulls itself inwards. Simultaneously, the handling system guides the gripper further across the object. In doing so, the silicone cap wraps itself around the object to be gripped, which can be of any shape, resulting in a tight form fit. The elastic silicone allows a precise adaptation to a wide range of different geometries. The high static friction of the material generates a strong holding force.

Once it has been put into operation, the gripper is able to perform various tasks. This functional integration is one way in which systems and components could in future adapt themselves to various products and scenarios. The project also shows how Festo acquires new findings from nature for its core business of automation. But the aims of the Bionic Learning Network are not only about learning from nature: identifying good ideas and fostering them also plays a major part. The FlexShapeGripper came about through a cooperation with the Oslo and Akershus University College of Applied Sciences and is an outstanding example of close collaboration beyond company borders.

 

Stunt Robots

source: this website

At a time when digital modelling has made such progress that it has become possible to have deceased actors "act" in a film, Disney is exploring an approach with an old-school animatronic flavour: humanoid stunt robots. Presented in a demonstration video released by the website TechCrunch, the project consists, more precisely, of creating machines that perform aerial acrobatics, swan dives, somersaults and twists, with striking, even unsettling, realism, especially for stunt performers.

Forget the jerky movements and rigid postures that give the machine away. Walt Disney Imagineering, the studio's research and development division, has designed a robot capable not only of mimicking a human but also of correcting its movements in mid-flight, for example by curling into a ball on the way down so as to land in piles of cardboard boxes, or by swinging from a rope before propelling itself into the air with its legs thrown forward.

Called Stuntronics (a contraction of stunt and electronics, just as animatronics came from animation and electronics), the robot is equipped with accelerometers and gyroscopes, sensors used to measure the machine's acceleration and angular position, as well as laser rangefinders to measure distances and computer-vision technology. It is in fact a direct continuation of another project, called Stickman, which tested all of these techniques on a rough prototype of a stunt robot consisting of three articulated bars. In principle, the project is intended for Disney's theme parks, where relying on virtual characters is by definition impossible. But, given the result, it is entirely conceivable that these robots will appear in a future action-film shoot.
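As a purely illustrative sketch of what such on-board sensing can be used for, here is a small Python example in the classic complementary-filter style of gyro/accelerometer fusion, plus a toy rule that decides when to tuck based on a laser range reading. None of this is Disney's actual Stuntronics code; the sensor rates, thresholds and function names are assumptions.

```python
# Toy example only: complementary-filter attitude estimate plus a simple
# "time to tuck" rule. Not Disney's implementation; every number here is assumed.
import math

ALPHA = 0.98   # weight on the integrated gyro estimate
DT = 0.005     # assumed 200 Hz sensor update rate

def update_pitch(pitch_rad, gyro_pitch_rate_rad_s, accel_x, accel_z):
    """Fuse gyro integration with the accelerometer's gravity direction."""
    gyro_estimate = pitch_rad + gyro_pitch_rate_rad_s * DT
    accel_estimate = math.atan2(accel_x, accel_z)   # pitch implied by gravity
    return ALPHA * gyro_estimate + (1.0 - ALPHA) * accel_estimate

def should_tuck(range_to_ground_m, vertical_velocity_m_s):
    """Curl into a ball when estimated time to impact drops below 0.3 s (arbitrary)."""
    if vertical_velocity_m_s >= 0:          # still rising, no need to tuck yet
        return False
    return range_to_ground_m / -vertical_velocity_m_s < 0.3

# Example call with made-up sensor values.
pitch = update_pitch(0.0, gyro_pitch_rate_rad_s=0.5, accel_x=0.8, accel_z=9.7)
print(pitch, should_tuck(range_to_ground_m=1.2, vertical_velocity_m_s=-5.0))
```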

 

Leachy or Reachy?

As part of a research project, Pollen Robotics and INCIA created Reachy in 2017, a bio-inspired robotic arm with 7 degrees of freedom and the size and mobility of an adult human arm. Reachy is intended to be a research and experimentation platform for exploring, for example, new interactions or the problems of control in high-dimensional spaces. Open source, 3D-printed and modular, it is designed to adapt easily to different experimental setups!

Today Reachy is available in a new version which includes:

  • a completely redesigned mechanical structure allowing smooth and precise movements,
  • forward and inverse kinematics (a minimal illustration follows this list),
  • the possibility of adding a hand made by OpenBionics,
  • a left-arm version called Leachy.
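To give an idea of what the forward-kinematics item means in practice, here is a minimal planar two-link sketch in Python. It is only a stand-in: Reachy itself has 7 degrees of freedom and its own open-source software, and the link lengths and joint names below are invented for the example.

```python
# Minimal forward kinematics for a planar two-link stand-in arm.
# Link lengths are invented; this is not Reachy's real kinematic model.
import math

UPPER_ARM_M = 0.28   # assumed upper-arm length
FOREARM_M = 0.25     # assumed forearm length

def forward_kinematics(shoulder_rad, elbow_rad):
    """Return the (x, y) wrist position in the shoulder frame."""
    x = (UPPER_ARM_M * math.cos(shoulder_rad)
         + FOREARM_M * math.cos(shoulder_rad + elbow_rad))
    y = (UPPER_ARM_M * math.sin(shoulder_rad)
         + FOREARM_M * math.sin(shoulder_rad + elbow_rad))
    return x, y

# Example: shoulder at 30 degrees, elbow at 45 degrees.
print(forward_kinematics(math.radians(30), math.radians(45)))
```

Inverse kinematics goes the other way, recovering joint angles from a desired wrist position; with 7 degrees of freedom there are infinitely many solutions, which is exactly the kind of redundancy a platform like Reachy lets you experiment with.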

Don't miss the official Pollen Robotics website 🙂


3-D scanning with water

source: this website

A global team of computer scientists and engineers has developed an innovative technique for 3D shape reconstruction. This new approach to 3D shape acquisition is based on Archimedes' well-known fluid-displacement principle and turns surface reconstruction into a volumetric problem. Most notably, their method accurately reconstructs even hidden parts of an object that typical 3D laser scanners are not able to capture.

Traditional 3D shape acquisition or reconstruction methods are based on optical devices, most commonly laser scanners and cameras, that successfully sample the visible shape surface. But this common approach tends to be noisy and incomplete. Most devices can only scan what is visible to them, and hidden parts of an object remain inaccessible to the scanner's line of sight. For instance, a typical laser scanner cannot accurately capture the belly or underside of an elephant statue, which is hidden from its line of sight.

The team's dip transform reconstructs complex 3D shapes using liquid, computing the volume of a 3D object rather than its surface. Following this method, a more complete acquisition of an object, including hidden details, can be reconstructed in 3D. Liquid has no line of sight; it can penetrate cavities and hidden parts, and it treats transparent and glossy materials identically to opaque ones, thus bypassing the visibility and optical limitations of optical and laser-based scanning devices.

The research, "Dip Transform for 3D Shape Reconstruction", is authored by a team from Tel-Aviv University, Shandong University, Ben-Gurion University and the University of British Columbia. They implemented a low-cost 3D dipping apparatus: objects in a water tank were dipped via a robotic arm. By dipping an object in the liquid along an axis, they were able to measure the displacement of the liquid volume and turn that into a series of thin volume slices of the shape. By repeatedly dipping the object in the water at various angles, the researchers were able to capture the geometry of the given object, including the parts that would normally have been hidden from a laser or optical 3D scanner.
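The core bookkeeping behind those volume slices is simple enough to sketch. Below is a toy Python illustration, not the authors' code: cumulative displacement readings taken at successive dip depths are differenced to give the volume of each thin slice along the dip axis; repeating this along many axes yields the tomography-like data the paper reconstructs from. The readings are invented numbers.

```python
# Toy illustration of the dip-transform bookkeeping: successive differences of
# cumulative displacement readings give per-slice volumes along one dip axis.
# The readings below are invented; this is not the authors' reconstruction code.

def volume_slices(cumulative_ml):
    """Per-slice volumes (ml) from cumulative displacement readings (ml)."""
    return [deep - shallow for shallow, deep in zip(cumulative_ml, cumulative_ml[1:])]

# Cumulative displaced volume read after each additional 5 mm of dipping.
readings_ml = [0.0, 2.1, 5.8, 10.4, 13.0, 13.9]
print(volume_slices(readings_ml))   # volume of each 5 mm slice along this axis
```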

The team's dip transform technique is related to computed tomography, an imaging method that uses optical systems for accurate scanning or to produce detailed pictures. However, the challenge with this more traditional method is that tomography-based devices are bulky and expensive and can only be used in a safe, customized environment. The team's approach is both safe and inexpensive, and a much more appealing alternative for generating a complete shape at low computational cost using an innovative data collection method.

In the study, they demonstrated the new technique on 3D shapes with a range of complexity, including a hand balled up into a fist, a mother and child hugging, and a DNA double helix. Their results show that the dip reconstructions are nearly as accurate as the original 3D models, paving the way to a new world of non-optical 3D shape acquisition techniques.

the Jellyfishbot is brilliant!

source: official website

The Jellyfishbot is a small tele-operated depollution robot. It collects floating macro-waste as well as hydrocarbons (surface pollution).

  • Dimensions: L = 70 cm, W = 70 cm, H = 54 cm
  • Weight: about 16 kg
  • Propulsion: 3 electric motors (including 1 transverse)
  • Endurance: 7 to 8 hours (two 22 Ah batteries)
  • Max speed: about 6.5 km/h
  • Area treated: 1000 m²/h (at an average speed of 2 km/h; see the quick check below)
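As a quick sanity check on that coverage figure (the collection-mouth width is not given above, so this is just arithmetic, not a manufacturer's number): covering 1000 m² per hour at 2 km/h implies an effective sweep width of about 0.5 m, which is plausible for a 70 cm-wide robot.

```python
# Quick arithmetic check on the coverage figure quoted above.
# Implied sweep width = area rate / forward speed; not a manufacturer's spec.
speed_m_per_h = 2_000            # 2 km/h average working speed
coverage_m2_per_h = 1_000        # quoted treated area per hour
effective_width_m = coverage_m2_per_h / speed_m_per_h
print(f"Implied effective sweep width: {effective_width_m:.2f} m")   # ~0.50 m
```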

Bravo Nicolas! 🙂

 

what’s up, Handle?


Handle is a research robot that stands 6.5 ft tall (about 2 meters), travels at 9 mph (about 14.5 km/h) and jumps 4 feet vertically (about 1.2 m). It uses electric power to operate both electric and hydraulic actuators, with a range of about 15 miles (about 24 km) on one battery charge. Handle uses many of the same dynamics, balance and mobile manipulation principles found in the quadruped and biped robots built by Boston Dynamics, but with only about 10 actuated joints, it is significantly less complex. Wheels are efficient on flat surfaces while legs can go almost anywhere: by combining wheels and legs, Handle can have the best of both worlds.