Robotic cane could improve walking stability

source and credits: TheRobotReport

By adding electronics and computational technology to the simple cane, a device that has been around since ancient times, a team of researchers at Columbia Engineering has transformed it into a 21st-century robotic device that can provide light-touch walking assistance to older adults and others with impaired mobility. The team, led by Sunil Agrawal, professor of mechanical engineering and of rehabilitation and regenerative medicine at Columbia Engineering, has demonstrated for the first time the benefit of using an autonomous robot that “walks” alongside a person to provide light-touch support, much as one might lightly touch a companion’s arm or sleeve to maintain balance while walking. The study has been published in IEEE Robotics and Automation Letters.

“Often, elderly people benefit from light hand-holding for support,” explained Agrawal, who is also a member of Columbia University’s Data Science Institute. “We have developed a robotic cane attached to a mobile robot that automatically tracks a walking person and moves alongside,” he continued. “The subjects walk on a mat instrumented with sensors while the mat records step length and walking rhythm, essentially the space and time parameters of walking, so that we can analyze a person’s gait and the effects of light touch on it.”
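The “space and time parameters of walking” Agrawal mentions can be sketched with a short computation. The data and the `gait_parameters` helper below are purely illustrative, not the team’s software:

```python
# Hypothetical sketch of the "space and time parameters of walking":
# step length and cadence derived from instrumented-mat foot strikes.

def gait_parameters(foot_strikes):
    """Mean step length (m) and cadence (steps/min) from
    (time_s, position_m) foot-strike events."""
    times = [t for t, _ in foot_strikes]
    positions = [p for _, p in foot_strikes]
    step_lengths = [b - a for a, b in zip(positions, positions[1:])]
    mean_step = sum(step_lengths) / len(step_lengths)
    cadence = (len(foot_strikes) - 1) / (times[-1] - times[0]) * 60.0
    return mean_step, cadence

# Four strikes, 0.6 m apart, one every 0.5 s:
strikes = [(0.0, 0.0), (0.5, 0.6), (1.0, 1.2), (1.5, 1.8)]
mean_step, cadence = gait_parameters(strikes)
print(mean_step, cadence)  # ~0.6 m steps at 120 steps/min
```

Comparing these two numbers with and without the cane is the kind of analysis the study describes.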

The light-touch robotic cane, called CANINE, acts as a cane-like mobile assistant. The device improves the individual’s proprioception, or self-awareness in space, during walking, which in turn improves stability and balance. “This is a novel approach to providing assistance and feedback for individuals as they navigate their environment,” said Joel Stein, Simon Baruch Professor of Physical Medicine and Rehabilitation and chair of the department of rehabilitation and regenerative medicine at Columbia University Irving Medical Center, who co-authored the study with Agrawal. “This strategy has potential applications for a variety of conditions, especially individuals with gait disorders.”

To test this new device, the team fitted 12 healthy young people with virtual reality glasses that created a visual environment that shakes around the user – both side-to-side and forward-backward – to unbalance their walking gait. The subjects each walked 10 laps on the instrumented mat, both with and without the robotic cane, in conditions that tested walking with these visual perturbations. In all virtual environments, having the light-touch support of the robotic cane caused all subjects to narrow their strides. The narrower strides, which represent a decrease in the base of support and a smaller oscillation of the center of mass, indicate an increase in gait stability due to the light-touch contact.

“The next phase in our research will be to test this device on elderly individuals and those with balance and gait deficits to study how the robotic cane can improve their gait,” said Agrawal, who directs the Robotics and Rehabilitation (ROAR) Laboratory. “In addition, we will conduct new experiments with healthy individuals, where we will perturb their head-neck motion in addition to their vision to simulate vestibular deficits in people.”

While mobility impairments affect 4% of people aged 18 to 49, this number rises to 35% of those aged 75 to 80 years, diminishing self-sufficiency, independence, and quality of life. By 2050, it is estimated that there will be only five young people for every old person, as compared with seven or eight today. “We will need other avenues of support for an aging population,” Agrawal noted. “This is one technology that has the potential to fill the gap in care fairly inexpensively.”

 

Deep Neural Networks & the Nature of the Universe

source: MIT Technology Review

In the last couple of years, deep learning techniques have transformed the world of artificial intelligence. One by one, the abilities and techniques that humans once imagined were uniquely our own have begun to fall to the onslaught of ever more powerful machines. Deep neural networks are now better than humans at tasks such as face recognition and object recognition. They’ve mastered the ancient game of Go and thrashed the best human players. But there is a problem. There is no mathematical reason why networks arranged in layers should be so good at these challenges. Mathematicians are flummoxed. Despite the huge success of deep neural networks, nobody is quite sure how they achieve their success.

Today that changes thanks to the work of Henry Lin at Harvard University and Max Tegmark at MIT. They say the reason mathematicians have been so flummoxed is that the answer depends on the nature of the universe. In other words, the answer lies in the regime of physics rather than mathematics.

First, let’s set up the problem using the example of classifying a megabit grayscale image to determine whether it shows a cat or a dog. Such an image consists of a million pixels that can each take one of 256 grayscale values. So in theory, there can be 256^1,000,000 possible images, and for each one it is necessary to compute whether it shows a cat or dog. And yet neural networks, with merely thousands or millions of parameters, somehow manage this classification task with ease. In the language of mathematics, neural networks work by approximating complex mathematical functions with simpler ones. When it comes to classifying images of cats and dogs, the neural network must implement a function that takes as an input a million grayscale pixels and outputs the probability distribution of what it might represent. The problem is that there are orders of magnitude more mathematical functions than possible networks to approximate them. And yet deep neural networks somehow get the right answer.
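The scale of this counting argument is easy to check with a back-of-the-envelope computation (a sketch assuming the one-megapixel, 256-level image described above):

```python
import math

# How large is the space of possible inputs, compared with the space of
# models? Count the decimal digits of 256^1,000,000 via logarithms.

num_pixels = 1_000_000
gray_levels = 256

# log10(256^1,000,000) = 1,000,000 * log10(256)
digits = num_pixels * math.log10(gray_levels)
print(f"256^1,000,000 has about {round(digits):,} decimal digits")

# A large modern network, by contrast, might have ~10^8 parameters: a
# number with only 9 digits. The function space dwarfs the model space.
```

The printed digit count, roughly 2.4 million, is what makes the success of small networks so surprising.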

Now Lin and Tegmark say they’ve worked out why. The answer is that the universe is governed by a tiny subset of all possible functions. In other words, when the laws of physics are written down mathematically, they can all be described by functions that have a remarkable set of simple properties. So deep neural networks don’t have to approximate any possible mathematical function, only a tiny subset of them.

To put this in perspective, consider the order of a polynomial function, which is the size of its highest exponent. So a quadratic equation like y = x^2 has order 2, the equation y = x^24 has order 24, and so on. Obviously, the number of orders is infinite and yet only a tiny subset of polynomials appear in the laws of physics. “For reasons that are still not fully understood, our universe can be accurately described by polynomial Hamiltonians of low order,” say Lin and Tegmark. Typically, the polynomials that describe laws of physics have orders ranging from 2 to 4.
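The notion of order can be made concrete with coefficient lists (a minimal illustrative sketch; `order` is a hypothetical helper, not from the paper):

```python
# coeffs[k] holds the coefficient of x^k, so the order of the
# polynomial is the index of its last nonzero coefficient.

def order(coeffs):
    """Degree (order) of the polynomial sum(coeffs[k] * x**k)."""
    return max(k for k, c in enumerate(coeffs) if c != 0)

print(order([0, 0, 1]))        # y = x^2   -> order 2
print(order([0] * 24 + [1]))   # y = x^24  -> order 24

# A typical low-order physics Hamiltonian: the harmonic oscillator
# H = p^2/(2m) + (k/2) x^2 is order 2 in both x and p.
```

Physics keeps reusing the low-order cases, which is exactly Lin and Tegmark’s point.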

The laws of physics have other important properties. For example, they are usually symmetrical when it comes to rotation and translation. Rotate a cat or dog through 360 degrees and it looks the same; translate it by 10 meters or 100 meters or a kilometer and it will look the same. That also simplifies the task of approximating the process of cat or dog recognition. These properties mean that neural networks do not need to approximate an infinitude of possible mathematical functions but only a tiny subset of the simplest ones.
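The translation symmetry above can be seen in a toy one-dimensional convolution, the operation convolutional layers are built on (hypothetical data; `conv1d` is an illustrative helper):

```python
# Translation equivariance: shifting the input merely shifts the
# output, so a single filter detects a feature wherever it appears.

def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation over plain Python lists."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

edge = [1, -1]                         # tiny "edge detector" filter
feature_at_2 = [0, 0, 5, 0, 0, 0, 0]
feature_at_4 = [0, 0, 0, 0, 5, 0, 0]   # same feature, shifted by 2

r1 = conv1d(feature_at_2, edge)
r2 = conv1d(feature_at_4, edge)
print(r1)  # [0, -5, 5, 0, 0, 0]
print(r2)  # [0, 0, 0, -5, 5, 0]  -- same response, shifted by 2
```

Because one filter serves every position, the network never has to learn the same feature separately at each location.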

There is another property of the universe that neural networks exploit. This is the hierarchy of its structure. “Elementary particles form atoms which in turn form molecules, cells, organisms, planets, solar systems, galaxies, etc.,” say Lin and Tegmark. And complex structures are often formed through a sequence of simpler steps. This is why the structure of neural networks is important too: the layers in these networks can approximate each step in the causal sequence. Lin and Tegmark give the example of the cosmic microwave background radiation, the echo of the Big Bang that permeates the universe. In recent years, various spacecraft have mapped this radiation in ever higher resolution. And of course, physicists have puzzled over why these maps take the form they do.

Tegmark and Lin point out that whatever the reason, it is undoubtedly the result of a causal hierarchy. “A set of cosmological parameters (the density of dark matter, etc.) determines the power spectrum of density fluctuations in our universe, which in turn determines the pattern of cosmic microwave background radiation reaching us from our early universe, which gets combined with foreground radio noise from our galaxy to produce the frequency-dependent sky maps that are recorded by a satellite-based telescope,” they say. Each of these causal layers contains progressively more data. There are only a handful of cosmological parameters but the maps and the noise they contain are made up of billions of numbers. The goal of physics is to analyze the big numbers in a way that reveals the smaller ones. And when phenomena have this hierarchical structure, neural networks make the process of analyzing it significantly easier. 

“We have shown that the success of deep and cheap learning depends not only on mathematics but also on physics, which favors certain classes of exceptionally simple probability distributions that deep learning is uniquely suited to model,” conclude Lin and Tegmark. That’s interesting and important work with significant implications. Artificial neural networks are famously based on biological ones. So not only do Lin and Tegmark’s ideas explain why deep learning machines work so well, they also explain why human brains can make sense of the universe. Evolution has somehow settled on a brain structure that is ideally suited to teasing apart the complexity of the universe.

This work opens the way for significant progress in artificial intelligence. Now that we finally understand why deep neural networks work so well, mathematicians can get to work exploring the specific mathematical properties that allow them to perform so well. “Strengthening the analytic understanding of deep learning may suggest ways of improving it,” say Lin and Tegmark. Deep learning has taken giant strides in recent years. With this improved understanding, the rate of advancement is bound to accelerate.

10 Things To Never Apologize For Again

source: Jessica Hagy’s post on Forbes

“I’m so sorry, but—” is the introductory phrase of doom. Apologizing when you haven’t made any mistakes makes you look weak and easy to dismiss, not polite. Still want to say sorry? Then just don’t say it in these 10 situations.

1. Don’t apologize for taking up space.

You’re three-dimensional in many powerful ways.

2. Don’t apologize for not being omniscient.

If you really were psychic, you’d be out spending your lottery winnings already.

3. Don’t apologize for manifesting in a human form.

You require food and sleep, and you have regular biological functions. This is not being high-maintenance. This is being alive.

4. Don’t apologize for being intimidatingly talented.

Do you detect a wee bit (or a kilo-ton) of jealousy? Good. You’re doing something more than right.

5. Don’t apologize for not joining the cult du jour.

If you don’t believe in the life-changing magic of the brand synergy matrix (or whatever the slide-show is selling), you’re more aware than you realize.

6. Don’t apologize for being bound by the laws of time and space.

Need to be in three places at once? Actually, no, you don’t.

7. Don’t apologize for not assisting the more-than-able.

Get your own stupid coffee, Chad.

8. Don’t apologize for being unimpressed by mediocrity.

Work that gets praised gets repeated. Stop clapping for things you don’t ever want to see again.

9. Don’t apologize for trusting your gut.

Don’t walk down the dark creepy alley or into that closed-door meeting with the predator, okay?

10. Don’t apologize for standing up for people you care about. 

Because you’re tired of hearing them apologize for doing everything right.

Stunt Robots

source: this site

At a time when digital modeling has made such progress that it has become possible to have deceased actors “perform” in a film, Disney is exploring an avenue with an old-school animatronic flavor: humanoid stunt robots. Presented in a demonstration video released by the site TechCrunch, the project consists, more precisely, of creating machines that execute acrobatic maneuvers, swan dives, somersaults, and twists, with striking, even unsettling, realism, especially for stunt performers.

Forget, then, the jerky movements and rigid postures that give the machine away. Walt Disney Imagineering, the studio’s research and development division, has designed a robot capable not only of mimicking a human but also of correcting its movements in mid-flight, for example by tucking into a ball on the way down to land in piles of cardboard boxes, or by swinging from a rope and then launching itself into the air, legs first.

Called Stuntronics (a contraction of stunt and electronics, just as animatronics came from animation and electronics), the robot is equipped with accelerometers and gyroscopes, sensors that measure the machine’s acceleration and angular position, as well as laser rangefinders for measuring distances and computer-vision technology. The machine is in fact the direct continuation of an earlier project, called Stickman, which tested all of these techniques on a rough draft of a stunt robot consisting of three articulated bars. For now, the project is intended for Disney’s theme parks, where by definition it is impossible to rely on virtual characters. But given the results, it is entirely conceivable that these robots will appear in a future action-film shoot.
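A rough toy model (an assumption-laden sketch, not Disney’s control code) shows why inertial sensors suffice once the robot is airborne: in free flight the trajectory is ballistic, so the launch state fixes the time to landing, and a mid-air tuck can be scheduled before touchdown.

```python
import math

# Ballistic flight-time estimate from launch conditions, the kind of
# quantity an onboard accelerometer/gyroscope package could supply.

G = 9.81  # gravitational acceleration, m/s^2

def time_to_landing(v_up, height):
    """Flight time for a launch with upward speed v_up (m/s) from
    `height` meters above the landing surface: the positive root of
    height + v_up*t - 0.5*G*t^2 = 0."""
    return (v_up + math.sqrt(v_up ** 2 + 2 * G * height)) / G

t = time_to_landing(v_up=6.0, height=2.0)
print(f"airborne for {t:.2f} s; cue the tuck well before then")
```

In practice the real system fuses many sensors continuously rather than predicting once at launch, but the ballistic constraint is what makes mid-air correction tractable at all.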

 

A new organ discovered in the human body

source: Ansa, 29/03/2018

A revolution is on its way in anatomy with the discovery of a new organ, one of the largest in the human body: it is called the interstitium, and it is distributed throughout the organism, under the skin and in the tissues lining the digestive tract, the lungs, the blood vessels, and the muscles. It is formed by interconnected fluid-filled cavities supported by fibers of collagen and elastin. It acts as a genuine shock absorber, but its presence could also explain many biological phenomena, such as the spread of tumors, skin aging, degenerative inflammatory diseases, and even the mechanism of action of acupuncture. This is indicated by a study published in the journal Scientific Reports by New York University and the Mount Sinai Beth Israel Medical Centre.

Labeled for decades as mere connective tissue, the interstitium had remained invisible in its complexity because of the methods used to examine it under the microscope, which made it appear, wrongly, dense and compact. Its true nature was instead observed for the first time thanks to a new technique of confocal laser endomicroscopy, which makes it possible to view living tissue under the microscope directly inside the body, without having to excise it and then fix it on a slide. Used on cancer patients who were due to undergo surgery to remove the pancreas and bile duct, the technique revealed the real structure of the interstitium, which was then also recognized in all the other parts of the body subject to continuous movement and pressure. In light of its complexity, the interstitium has thus “earned” its promotion to organ status.

“This discovery has the potential to drive major advances in medicine, including the possibility of using the sampling of interstitial fluid as a powerful diagnostic tool,” explains Neil Theise, professor of pathology at New York University. The continuous movement of this fluid could explain why tumors that invade the interstitium spread more quickly through the body: drained by the lymphatic system, this network of interconnected cavities is the source from which lymph arises, vital to the functioning of the immune cells that generate inflammation. Moreover, the cells that live in these spaces and the collagen fibers that support them change over the years and could contribute to the formation of wrinkles, the stiffening of joints, and the progression of inflammatory diseases linked to sclerosis and fibrosis. Finally, the protein lattice that supports the interstitium could generate electrical currents as it bends, following the movement of organs and muscles, and for this reason could play a role in acupuncture techniques.

Hunova arrives, a robot for rehabilitation

source: Ansa.it

New technologies and Industry 4.0 are extending their applications into the healthcare sector with a robot for the rehabilitation of patients with neurological and spinal disabilities. The robot is called hunova; it was born from patents of the Istituto Italiano di Tecnologia (IIT) and is produced and marketed worldwide by Movendo Technology, the first made-in-Italy medical company active in rehabilitation robotics (50% Dompé, 43% founders and inventors Simone Ungaro, Carlo Sanfilippo, and Jody Saglia, 7% IIT).

hunova integrates mechatronics, electronics, sensors, and software: 4 motors, 2 force/torque sensors, an inertial sensor, more than 100 meters of cable, an electronic brain, 1 interface, and 4 electronic control boards. Its artificial intelligence, or control center, combines big data, advanced human-machine interaction algorithms, and a network of sensors, while remaining extremely simple to use for operator and patient alike. What distinguishes hunova is the objective detection and measurement of the patient’s biomechanical parameters and the high level of robotic assistance and intervention, which facilitates and guides the person undergoing rehabilitation, stimulating them with protocols delivered in the form of games (interactive video games). Its therapeutic applications in neurology cover the after-effects of ischemic stroke with or without hemiplegia, neurodegenerative diseases, Parkinson’s disease, and multiple sclerosis, as well as orthopedics, geriatrics, and sports medicine.
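As a toy example of the objective biomechanical measurement described above (hypothetical data and helper, not Movendo’s software): postural sway during a balance task can be quantified as the RMS excursion of the center of pressure recorded by force/torque sensors.

```python
import math

# Sway metric: RMS distance of centre-of-pressure (CoP) samples from
# their mean position, a standard summary of standing balance.

def sway_rms(cop_positions):
    """RMS distance (m) of (x, y) CoP samples from their centroid."""
    n = len(cop_positions)
    mx = sum(x for x, _ in cop_positions) / n
    my = sum(y for _, y in cop_positions) / n
    return math.sqrt(sum((x - mx) ** 2 + (y - my) ** 2
                         for x, y in cop_positions) / n)

# Four hypothetical CoP samples, in meters:
samples = [(0.00, 0.01), (0.01, -0.01), (-0.01, 0.00), (0.00, 0.00)]
print(f"sway RMS: {sway_rms(samples):.4f} m")
```

A lower RMS after a course of training is the kind of objective, repeatable number such a platform can report.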

At the moment, 28 robots are in operation, including 2 in the United States and 1 each in Germany and Greece. The spinal unit of the Niguarda hospital in Milan, directed by Michele Spinelli, and the Villa Beretta Functional Recovery and Rehabilitation Center (Lecco), directed by Franco Molteni (Valduce Hospital, Como), are implementing the use of the robot.
