Bioprinting for Regenerative Medicine

credits: wscs.com

following up on my previous post (more than six years ago!)

Anthony Atala is a pediatric surgeon and urologist who directs the Wake Forest Institute for Regenerative Medicine (WFIRM) in North Carolina. Together with 400 colleagues, and over work spanning more than three decades, he has successfully implanted into human patients a variety of tissues regenerated from the patients’ own cells. Dr. Atala talked to 3DPrint.com about ways to translate the science of regenerative medicine into clinical therapy, the importance of adopting new technologies, and some of the challenges involved.

“Back in the ’90s we created by hand, even without using the printer, bladders, skin, cartilage, urethras, muscle and vaginal organs, and later implanted them successfully in patients. The printer automated what we were already doing and scaled it up, making some of the processes easier. Still, the technology has its own challenges. With handmade constructs you have more control as you are creating the tissue, but with a printed structure everything has to be built in before it is created, so you have to have the whole plan and information ready to go once you push that ‘start’ button”.

The WFIRM is working to grow tissues and organs and develop healing cell therapies for more than 40 different areas of the body, from kidney and trachea to cartilage and skin. Dr. Atala and his team of scientists were the first in the world to implant lab-grown tissues and organs into patients: they began most of their research in 1990 and implanted the first structures at the end of that decade, using a 3D printer to build a synthetic scaffold of a human bladder, which they then coated with cells taken from their patients. New research at WFIRM shows innovative wound healing through the use of a bedside 3D skin printer.

“Today, we continue to develop replacement tissues and organs, and are also working to speed up the availability of these treatments to patients. The ultimate goal is to create tissues for patients. Part of that is taking a very small piece of the patient’s tissue from the organ that we are trying to reconstruct, like muscle or blood vessels, expanding the cells outside of the body, and then using them to create the organ or structure along with a scaffold or a hydrogel, which is the glue that holds the cells together. We have been doing this for quite some time with patients, and 16 years ago we realized that we needed to scale up the technology and automate it to work with thousands of patients at a time, so we started thinking about 3D printers, and began using a typical desktop inkjet printer which was modified in-house to print cells into a 3D shape”.

The living cells were placed in the wells of the ink cartridge and the printer was programmed to print them in a certain order. That printer is now part of the permanent collection of the National Museum of Health and Medicine. According to Dr. Atala, all the printers at WFIRM continue to be built in-house specifically to create tissues, so they are highly specialized and able to deposit cells without damaging the tissue as it gets printed. Inside the institute, more than 400 scientists in the fields of biomedical and chemical engineering, cell and molecular biology, biochemistry, pharmacology, physiology, materials science, nanotechnology, genomics, proteomics, surgery and medicine work to develop some of the most advanced functional organs for their patients. At WFIRM they are focusing on personalized medicine, whereby the scientists use sample tissue from the patient they are treating, grow it, and implant it back to avoid rejection.

Dr. Atala notes that “these technologies get tested extensively before they are implanted into a patient”, and that “it could take years or even decades of research and investigation before going from the experimental phase to the actual trial in humans. Our goal for the coming decade is to keep implanting tissues in patients; however, the most important thing for us is that we temper people’s expectations, because these tissues come out very slowly and they come out one at a time, so we don’t give false hope and we provide the technology to patients who really need it. Working with over 40 different tissues and organs means that about 10 applications of these technologies are already in patients. The research we have done helps us categorize tissues by order of complexity, so we know that flat structures (like skin) are the least complex; tubular structures (such as blood vessels) have the second level of complexity; and hollow non-tubular organs, including the bladder or stomach, have the third level of complexity because the architecture of the cells is more intricate. Finally, the most complex organs are solid ones, like the heart, the liver and the kidneys, which require many more cells per centimeter”.
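For readers who like things explicit, the complexity ladder Dr. Atala describes can be captured in a tiny lookup table. The sketch below is purely illustrative: the tiers and example tissues come from the quote above, and nothing else is implied about WFIRM’s actual processes.

```python
# Hypothetical sketch of the four tissue-complexity tiers described above,
# ordered from least to most complex. Example tissues are illustrative only.
TISSUE_COMPLEXITY = {
    1: {"class": "flat structures", "examples": ["skin"]},
    2: {"class": "tubular structures", "examples": ["blood vessels", "urethra"]},
    3: {"class": "hollow non-tubular organs", "examples": ["bladder", "stomach"]},
    4: {"class": "solid organs", "examples": ["heart", "liver", "kidney"]},
}

def complexity_of(tissue: str) -> int:
    """Return the complexity tier (1-4) for a known example tissue."""
    for tier, info in TISSUE_COMPLEXITY.items():
        if tissue in info["examples"]:
            return tier
    raise KeyError(f"unknown tissue: {tissue}")

print(complexity_of("bladder"))  # -> 3
```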

the Iceberg of ignorance

credits: Frank Zijlstra

Consultant Sidney Yoshida presented this study, called ‘The Iceberg of Ignorance’, in 1989. In it, Yoshida revealed what he had observed in the work and leadership habits of the Japanese car-parts manufacturer Calsonic.

How much do we really know about what is going on within our organisation? Is it even possible to know everything? And if we do not know everything, how do we know whether we are resolving the right challenges? If executives are continuously impatient and do not ask how they can help, will problems really be resolved in a quality way?

Can engagement within an organisation help to resolve challenges at every layer? And what about the silos within the organisation? Are we fixing a problem in one part of the organisation while creating another somewhere else?

Technology opportunities are arriving faster and faster, and increasingly we have to be ready to make changes and adapt. How are you and your teams adopting change?

Titanic: the iceberg had been ‘lying in wait’ for 100,000 years

creativity is free and 3D-printable

source: https://i.materialise.com/en

In line with their mission to create a better and healthier world, Materialise designed a hands-free 3D-printed door opener. It is intended to make the unavoidable daily task of opening and closing doors hands-free and ultimately decrease the spread of germs like the coronavirus.

The design file is free for anyone to download on Materialise’s official website, making it possible to 3D print it locally at factories around the world. On the Materialise online shop it is also possible to order a pack of four, with screws included.

You can reduce the spread of germs during your daily tasks easily just by fastening the openers to your door handles. Help do your part to minimize risky contact and make a positive change!

In order to make this solution available to as many people as possible, Materialise is introducing additional designs, including openers that fit door handles of various shapes and sizes as well as options that are smaller and therefore more affordable to print. Materialise’s Design and Engineering team is continuing to work on more variations, so check back regularly to find more models.

elements of AI

credits: Reaktor

Recently I completed the free online course Elements of AI, provided by Reaktor and the University of Helsinki. It is a very interesting opportunity for those who would like to learn the basics of AI and machine learning. Combining theory with practical exercises, the course can be completed at the reader’s own pace. Strongly recommended!

Robotic cane could improve walking stability

source and credits: TheRobotReport

By adding electronics and computation technology to a simple cane that has been around since ancient times, a team of researchers at Columbia Engineering has transformed it into a 21st-century robotic device that can provide light-touch walking assistance to the elderly and others with impaired mobility. A team led by Sunil Agrawal, professor of mechanical engineering and of rehabilitation and regenerative medicine at Columbia Engineering, has demonstrated, for the first time, the benefit of using an autonomous robot that “walks” alongside a person to provide light-touch support, much as one might lightly touch a companion’s arm or sleeve to maintain balance while walking. Their study has been published in IEEE Robotics and Automation Letters.

“Often, elderly people benefit from light hand-holding for support,” explained Agrawal, who is also a member of Columbia University’s Data Science Institute. “We have developed a robotic cane attached to a mobile robot that automatically tracks a walking person and moves alongside,” he continued. “The subjects walk on a mat instrumented with sensors while the mat records step length and walking rhythm, essentially the space and time parameters of walking, so that we can analyze a person’s gait and the effects of light touch on it.”
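As a rough, hypothetical illustration of the “space and time parameters of walking” that such an instrumented mat can provide, the following Python sketch computes step length, cadence and step width from made-up heel-strike data. This is not the study’s actual pipeline; the numbers and variable names are invented for the example.

```python
import numpy as np

# Hypothetical footstep data: (x, y) positions in metres and contact times in
# seconds for successive heel strikes, as an instrumented mat might report them.
steps_xy = np.array([[0.00, 0.10], [0.62, -0.09], [1.25, 0.11], [1.86, -0.10]])
times_s  = np.array([0.00, 0.55, 1.10, 1.66])

# Step length: distance between consecutive heel strikes (space parameter).
step_lengths = np.linalg.norm(np.diff(steps_xy, axis=0), axis=1)

# Cadence / walking rhythm: steps per minute from the time between heel strikes.
cadence_spm = 60.0 / np.diff(times_s).mean()

# Step width: lateral offset between consecutive foot placements,
# a rough proxy for the base of support discussed later in this article.
step_widths = np.abs(np.diff(steps_xy[:, 1]))

print(f"mean step length: {step_lengths.mean():.2f} m")
print(f"cadence: {cadence_spm:.1f} steps/min")
print(f"mean step width: {step_widths.mean():.2f} m")
```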

The light-touch robotic cane, called CANINE, acts as a cane-like mobile assistant. The device improves the individual’s proprioception, or self-awareness in space, during walking, which in turn improves stability and balance. “This is a novel approach to providing assistance and feedback for individuals as they navigate their environment,” said Joel Stein, Simon Baruch Professor of Physical Medicine and Rehabilitation and chair of the department of rehabilitation and regenerative medicine at Columbia University Irving Medical Center, who co-authored the study with Agrawal. “This strategy has potential applications for a variety of conditions, especially individuals with gait disorders.”

To test this new device, the team fitted 12 healthy young people with virtual reality glasses that created a visual environment that shakes around the user – both side-to-side and forward-backward – to unbalance their walking gait. The subjects each walked 10 laps on the instrumented mat, both with and without the robotic cane, in conditions that tested walking with these visual perturbations. In all virtual environments, having the light-touch support of the robotic cane caused all subjects to narrow their strides. The narrower strides, which represent a decrease in the base of support and a smaller oscillation of the center of mass, indicate an increase in gait stability due to the light-touch contact.

“The next phase in our research will be to test this device on elderly individuals and those with balance and gait deficits to study how the robotic cane can improve their gait,” said Agrawal, who directs the Robotics and Rehabilitation (ROAR) Laboratory. “In addition, we will conduct new experiments with healthy individuals, where we will perturb their head-neck motion in addition to their vision to simulate vestibular deficits in people.”

While mobility impairments affect 4% of people aged 18 to 49, this number rises to 35% of those aged 75 to 80 years, diminishing self-sufficiency, independence, and quality of life. By 2050, it is estimated that there will be only five young people for every old person, as compared with seven or eight today. “We will need other avenues of support for an aging population,” Agrawal noted. “This is one technology that has the potential to fill the gap in care fairly inexpensively.”

 

9th Summer School on Surgical Robotics


The registration for the 9th Summer School on Surgical Robotics (SSSR-2019) is now open (registration deadline: July 26th, 2019).

The School will be held in Montpellier, France, from 23rd to 28th September 2019, and is open to Master students, PhD students, post-docs and participants from industry.

All information can be found on the official website: http://www.lirmm.fr/sssr-2019/

Working on translational activities in surgical robotics inside the LIRMM office located in the new medical school of Montpellier, France.

Robotics enables surgery to be less invasive and/or enhances the performance of the surgeon. In minimally invasive surgery (MIS), for instance, robotics can improve the dexterity of conventional instruments, which is restricted by the insertion ports, by adding intra-cavity degrees of freedom. It can also provide the surgeon with augmented visual and haptic inputs. In open surgery, robotics makes it possible to use pre-operative and intra-operative image data in real time to improve precision and reproducibility when cutting, drilling or milling bones, and to accurately locate and remove tumours. In both cases, as in other surgical specialities, robotics allows the surgeon to perform more precise, reproducible and dextrous motion. It is also a promising way to minimize fatigue and to limit exposure to radiation. For the patient, robotic surgery may result in lower risk, less pain and discomfort, and a shorter recovery time. These benefits explain the increasing research efforts made all over the world since the early ’90s.
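One standard building block behind “using pre-operative image data to improve precision” is rigid point-set registration: aligning landmark points segmented from a pre-operative scan with the same landmarks measured in the robot’s coordinate frame. Below is a minimal, generic sketch of the classic SVD-based (Kabsch) solution; it is not tied to any particular system discussed at the Summer School, and the toy data are invented.

```python
import numpy as np

def rigid_registration(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q.

    P, Q: (N, 3) arrays of corresponding fiducial points, e.g. P from a
    pre-operative CT and Q measured in the robot's coordinate frame.
    Classic SVD-based (Kabsch) solution.
    """
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Toy check: recover a known rotation and translation from noiseless points.
rng = np.random.default_rng(0)
P = rng.normal(size=(6, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([10.0, -2.0, 5.0])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_registration(P, Q)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```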

Surgical robotics requires great skills in many engineering fields, as the integration of robots in the operating room is technically difficult. It raises new problems such as safety, man-machine cooperation, real-time sensing and processing, mechanical design, and force- and vision-based control. However, it is very promising as a means of improving conventional surgical procedures, for example in neurosurgery and orthopaedics, as well as providing innovation in micro-surgery, image-guided therapy, MIS and Natural Orifice Transluminal Endoscopic Surgery (NOTES).

LIRMM at the Montpellier Faculty of Medicine 2, France

The highly interdisciplinary nature of surgical robotics requires close cooperation between medical staff and researchers in mechanics, computer science, control and electrical engineering. This cooperation has resulted in many prototypes for a wide variety of surgical procedures. A few robotic systems are already available commercially and have entered the operating room, notably in neurosurgery, orthopaedics and MIS.

Depending on the application, surgical robotics gets more or less deeply into the following fields:

  • multi-modal information processing;
  • modelling of rigid and deformable anatomical parts;
  • pre-surgical planning and simulation of robotic surgery;
  • design and control of guiding systems to assist the surgeon’s gestures.

During the Summer School, these fields will be addressed by surgeons and researchers working in leading hospitals and labs, complemented by engineers who will give insight into practical integration problems. The courses are aimed at PhD students, post-docs and researchers already involved in the area, or interested in the new challenges of an emerging field that interconnects technology and surgery. A basic background in mechanical, computer science, control and electrical engineering is recommended.

Deep Neural Networks & the Nature of the Universe

source: MIT Technology Review

In the last couple of years, deep learning techniques have transformed the world of artificial intelligence. One by one, the abilities and techniques that humans once imagined were uniquely our own have begun to fall to the onslaught of ever more powerful machines. Deep neural networks are now better than humans at tasks such as face recognition and object recognition. They’ve mastered the ancient game of Go and thrashed the best human players. But there is a problem. There is no mathematical reason why networks arranged in layers should be so good at these challenges. Mathematicians are flummoxed. Despite the huge success of deep neural networks, nobody is quite sure how they achieve their success.

Today that changes thanks to the work of Henry Lin at Harvard University and Max Tegmark at MIT. These guys say the reason why mathematicians have been so embarrassed is that the answer depends on the nature of the universe. In other words, the answer lies in the regime of physics rather than mathematics.

First, let’s set up the problem using the example of classifying a megabit grayscale image to determine whether it shows a cat or a dog. Such an image consists of a million pixels that can each take one of 256 grayscale values. So in theory, there can be 256^1,000,000 possible images, and for each one it is necessary to compute whether it shows a cat or dog. And yet neural networks, with merely thousands or millions of parameters, somehow manage this classification task with ease. In the language of mathematics, neural networks work by approximating complex mathematical functions with simpler ones. When it comes to classifying images of cats and dogs, the neural network must implement a function that takes as an input a million grayscale pixels and outputs the probability distribution of what it might represent. The problem is that there are orders of magnitude more mathematical functions than possible networks to approximate them. And yet deep neural networks somehow get the right answer.
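As a back-of-the-envelope check on those numbers, here is a sketch with an arbitrary, hypothetical network size: the space of possible megabit grayscale images is astronomically large, yet even a crude fully connected network mapping such an image to cat/dog probabilities needs only a few million parameters.

```python
import math
import numpy as np

# How big is the input space? 256 grey levels on each of one million pixels.
digits = int(1_000_000 * math.log10(256)) + 1
print(f"256^1,000,000 has roughly {digits:,} decimal digits")   # ~2.4 million digits

# A tiny, hypothetical MLP: 1,000,000 inputs -> 8 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.01, (8, 1_000_000)), np.zeros(8)
W2, b2 = rng.normal(0, 0.01, (2, 8)), np.zeros(2)
print(f"trainable parameters: {W1.size + b1.size + W2.size + b2.size:,}")  # ~8 million

def forward(pixels):
    """Map one image (a million grey values in [0, 1]) to (cat, dog) probabilities."""
    h = np.maximum(0.0, W1 @ pixels + b1)            # ReLU hidden layer
    logits = W2 @ h + b2
    p = np.exp(logits - logits.max())                # numerically stable softmax
    return p / p.sum()

image = rng.integers(0, 256, 1_000_000) / 255.0      # one random "image"
print(forward(image))
```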

Now Lin and Tegmark say they’ve worked out why. The answer is that the universe is governed by a tiny subset of all possible functions. In other words, when the laws of physics are written down mathematically, they can all be described by functions that have a remarkable set of simple properties. So deep neural networks don’t have to approximate any possible mathematical function, only a tiny subset of them.

To put this in perspective, consider the order of a polynomial function, which is the size of its highest exponent. So a quadratic equation like y = x^2 has order 2, the equation y = x^24 has order 24, and so on. Obviously, the number of orders is infinite and yet only a tiny subset of polynomials appear in the laws of physics. “For reasons that are still not fully understood, our universe can be accurately described by polynomial Hamiltonians of low order,” say Lin and Tegmark. Typically, the polynomials that describe laws of physics have orders ranging from 2 to 4.
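A hedged toy illustration of what “low order” buys you: data generated by a quadratic law (here a harmonic-oscillator-style potential, chosen arbitrarily for the example) is fitted exactly by a degree-2 polynomial, so nothing in the enormous space of higher-order polynomials is ever needed.

```python
import numpy as np

# Toy data from a low-order "law of physics": V(x) = 0.5 * k * x^2 (order 2).
k = 3.0
x = np.linspace(-2.0, 2.0, 50)
V = 0.5 * k * x**2

# Fit polynomials of increasing order; order 2 already captures it exactly.
for order in (1, 2, 4, 8):
    coeffs = np.polyfit(x, V, order)
    max_err = np.abs(np.polyval(coeffs, x) - V).max()
    print(f"order {order}: max fit error = {max_err:.1e}")
```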

The laws of physics have other important properties. For example, they are usually symmetrical when it comes to rotation and translation. Rotate a cat or dog through 360 degrees and it looks the same; translate it by 10 meters or 100 meters or a kilometer and it will look the same. That also simplifies the task of approximating the process of cat or dog recognition. These properties mean that neural networks do not need to approximate an infinitude of possible mathematical functions but only a tiny subset of the simplest ones.

There is another property of the universe that neural networks exploit. This is the hierarchy of its structure. “Elementary particles form atoms which in turn form molecules, cells, organisms, planets, solar systems, galaxies, etc.,” say Lin and Tegmark. And complex structures are often formed through a sequence of simpler steps. This is why the structure of neural networks is important too: the layers in these networks can approximate each step in the causal sequence. Lin and Tegmark give the example of the cosmic microwave background radiation, the echo of the Big Bang that permeates the universe. In recent years, various spacecraft have mapped this radiation in ever higher resolution. And of course, physicists have puzzled over why these maps take the form they do.


Tegmark and Lin point out that whatever the reason, it is undoubtedly the result of a causal hierarchy. “A set of cosmological parameters (the density of dark matter, etc.) determines the power spectrum of density fluctuations in our universe, which in turn determines the pattern of cosmic microwave background radiation reaching us from our early universe, which gets combined with foreground radio noise from our galaxy to produce the frequency-dependent sky maps that are recorded by a satellite-based telescope,” they say. Each of these causal layers contains progressively more data. There are only a handful of cosmological parameters but the maps and the noise they contain are made up of billions of numbers. The goal of physics is to analyze the big numbers in a way that reveals the smaller ones. And when phenomena have this hierarchical structure, neural networks make the process of analyzing it significantly easier. 

“We have shown that the success of deep and cheap learning depends not only on mathematics but also on physics, which favors certain classes of exceptionally simple probability distributions that deep learning is uniquely suited to model,” conclude Lin and Tegmark. That’s interesting and important work with significant implications. Artificial neural networks are famously based on biological ones. So not only do Lin and Tegmark’s ideas explain why deep learning machines work so well, they also explain why human brains can make sense of the universe. Evolution has somehow settled on a brain structure that is ideally suited to teasing apart the complexity of the universe.

This work opens the way for significant progress in artificial intelligence. Now that we finally understand why deep neural networks work so well, mathematicians can get to work exploring the specific mathematical properties that allow them to perform so well. “Strengthening the analytic understanding of deep learning may suggest ways of improving it,” say Lin and Tegmark. Deep learning has taken giant strides in recent years. With this improved understanding, the rate of advancement is bound to accelerate.

REFERENCE PUBLICATION

karma chameleon (gripper)

source: Festo


The chameleon is able to catch a variety of different insects by putting its tongue over the respective prey and securely enclosing it. The FlexShapeGripper uses this principle to grip the widest range of objects in a form-fitting manner. Using its elastic silicone cap, it can even pick up several objects in a single gripping process and put them down together, without the need for a manual conversion.


The gripper consists of a double-acting cylinder, one chamber of which is filled with compressed air while the second is permanently filled with water. This second chamber is fitted with an elastic silicone moulding, which corresponds to the chameleon’s tongue. The volume of the two chambers is designed so that the deformation of the silicone part is compensated. The piston, which tightly separates the two chambers from each other, is fastened to the inside of the silicone cap by a thin rod.


During the gripping procedure, a handling system guides the gripper across the object so that it touches the article with its silicone cap. The top pressurised chamber is then vented. The piston moves upwards by means of a spring support and the water-filled silicone part pulls itself inwards. Simultaneously, the handling system guides the gripper further across the object. In doing so, the silicone cap wraps itself around the object to be gripped, which can be of any shape, resulting in a tight form fit. The elastic silicone allows a precise adaptation to a wide range of different geometries. The high static friction of the material generates a strong holding force.
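Read as a control sequence, the gripping procedure above boils down to a handful of ordered states. The sketch below is purely hypothetical: the state names are mine, invented for illustration, and do not reflect Festo’s actual control software.

```python
# Hypothetical sketch of the FlexShapeGripper actuation sequence described above.
from enum import Enum, auto

class GripperState(Enum):
    APPROACH = auto()   # handling system guides the silicone cap onto the object
    VENT = auto()       # top pressurised chamber is vented
    WRAP = auto()       # piston retracts, silicone cap pulls inwards and wraps
    HOLD = auto()       # static friction of the silicone holds the object
    RELEASE = auto()    # chamber re-pressurised, cap pushes the object free

def grip_cycle():
    """Yield the actuation steps of one grip-and-release cycle, in order."""
    yield GripperState.APPROACH
    yield GripperState.VENT
    yield GripperState.WRAP
    yield GripperState.HOLD
    yield GripperState.RELEASE

for state in grip_cycle():
    print(state.name)
```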

Once it has been put into operation, the gripper is able to perform various tasks. This functional integration is a possible way in which systems and components can, in the future, adapt themselves to various products and scenarios. The project also shows how Festo derives new findings from nature for its core business of automation. But the aims of the Bionic Learning Network include not only learning from nature; identifying good ideas and fostering them also plays a major part. The FlexShapeGripper came about through a cooperation with the Oslo and Akershus University College of Applied Sciences and is an outstanding example of close collaboration beyond company borders.

 

10 Things To Never Apologize For Again

source: Jessica Hagy‘s post on Forbes

“I’m so sorry, but—” is the introductory phrase of doom. Apologizing when you haven’t made any mistakes makes you look weak and easy to dismiss, not polite. Still want to say sorry? Then just don’t say it in these 10 situations.


1. Don’t apologize for taking up space.

You’re three-dimensional in many powerful ways.


2. Don’t apologize for not being omniscient.

If you really were psychic, you’d be out spending your lottery winnings already.

3. Don’t apologize for manifesting in a human form.


You require food and sleep, and you have regular biological functions. This is not being high-maintenance. This is being alive.

4. Don’t apologize for being intimidatingly talented.


Do you detect a wee bit (or a kilo-ton) of jealousy? Good. You’re doing something more than right.

5. Don’t apologize for not joining the cult du jour.


If you don’t believe in the life-changing magic of the brand synergy matrix (or whatever the slide-show is selling), you’re more aware than you realize.

6. Don’t apologize for being bound by the laws of time and space.


Need to be in three places at once? Actually, no, you don’t.

7. Don’t apologize for not assisting the more-than-able.


Get your own stupid coffee, Chad.

8. Don’t apologize for being unimpressed by mediocrity.


Work that gets praised gets repeated. Stop clapping for things you don’t ever want to see again.

9. Don’t apologize for trusting your gut.


Don’t walk down the dark creepy alley or into that closed-door meeting with the predator, okay?

10. Don’t apologize for standing up for people you care about. 


Because you’re tired of hearing them apologize for doing everything right.