The robotic skin incorporates a new type of processing system based on synaptic transistors.
A team of researchers led by an Indian-origin engineer in the United Kingdom has created a robotic skin capable of feeling “pain.” According to the researchers, the discovery paves the way for a new generation of intelligent robots with human-like sensitivity, and marks a significant step forward in the development of artificial intelligence and large-scale neuromorphic printed e-skin capable of responding appropriately to stimuli. The university team built the robotic skin using a new type of processing system based on synaptic transistors, which mimics the brain’s neural pathways in order to learn. The result is a robotic hand equipped with smart skin that can learn to respond to external stimuli.
We all learn early in life to respond appropriately to unexpected stimuli such as pain in order to avoid injuring ourselves again. Of course, developing this new type of robotic skin did not involve inflicting pain in the traditional sense; “pain” is simply shorthand for the process of learning from external stimuli. Through this process, the researchers created an electronic skin capable of distributed learning at the hardware level, which does not need to send messages back and forth to a central processor before taking action. By reducing the amount of computation required, this significantly speeds up the skin’s response to touch.
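The idea of distributed learning at the hardware level can be illustrated with a minimal sketch: each skin patch applies and adapts its own response rule locally, with no round-trip to a central processor. The class, thresholds, and learning rule below are purely illustrative assumptions, not the researchers' actual circuit design.

```python
# Hypothetical sketch of "distributed learning" at the sensor level:
# each skin patch adapts its own response threshold in place, so no
# messages travel to a central processor before the patch reacts.

class SkinPatch:
    def __init__(self, threshold=0.5, learning_rate=0.1):
        self.threshold = threshold        # stimulus level that triggers a reflex
        self.learning_rate = learning_rate

    def sense(self, stimulus):
        """React locally; after a strong ("painful") stimulus, lower the
        threshold so the patch responds faster the next time."""
        triggered = stimulus >= self.threshold
        if triggered:
            # Adapt in place: become more sensitive after strong stimuli
            self.threshold -= self.learning_rate * (stimulus - self.threshold)
        return triggered

patch = SkinPatch()
print(patch.sense(0.9))        # strong stimulus triggers a local reflex
print(patch.threshold < 0.5)   # and the patch has adapted its threshold
```

The point of the sketch is architectural: the decision and the learning both happen where the stimulus arrives, which is what lets the real skin avoid the latency of central processing.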
The researchers describe how they built their prototype computational e-skin and how it advances the state of the art in touch-sensitive robotics. The robotic skin is described as the latest breakthrough in flexible, stretchable printed surfaces. In the future, this research could form the basis for more advanced robotic skin, enabling robots that explore and interact with the world in new ways, or prosthetic limbs with near-human levels of touch sensitivity.
The advancement builds on scientists’ decades-long effort to create artificial skin with touch sensitivity. A tried-and-true method is to spread an array of contact or pressure sensors across the surface of the robotic skin to detect when it comes into contact with an object; data from the sensors is then transmitted to a computer for processing and interpretation. The sensors typically generate a large amount of data, which can take time to process and respond to, introducing delays that could reduce the skin’s effectiveness in real-world tasks.
The university team’s robotic skin draws on the human peripheral nervous system and how it interprets signals from the skin to reduce latency and power consumption. When human skin receives an input, the peripheral nervous system begins processing it at the point of contact, reducing it to only the most important information before sending it to the brain. This reduction in sensory data makes more efficient use of the communication channels to the brain, which can then respond almost instantly so the body reacts appropriately.
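The data-reduction idea described above can be sketched as a simple send-on-delta filter: instead of forwarding every raw reading to a central "brain", only readings that change significantly are passed on. The function, threshold, and sample values below are made-up illustrations, not the team's actual signal pipeline.

```python
# Hypothetical illustration of peripheral-style data reduction: forward a
# reading only when it differs from the last forwarded value by more than
# `delta`, condensing the raw stream to its most important changes.

def reduce_events(readings, delta=0.2):
    """Return (index, value) pairs for readings worth transmitting."""
    events = []
    last = None
    for t, value in enumerate(readings):
        if last is None or abs(value - last) > delta:
            events.append((t, value))
            last = value
    return events

raw = [0.0, 0.01, 0.02, 0.5, 0.52, 0.9, 0.91, 0.1]
print(reduce_events(raw))  # 8 raw readings condensed to 4 events
```

Filtering at the point of contact like this is what frees the communication channel: the central processor only ever sees the handful of readings that actually matter.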
The researchers printed a grid of 168 synaptic transistors made from zinc-oxide nanowires directly onto a flexible plastic surface to create a robotic skin capable of a computationally efficient, synapse-like response. The synaptic transistors were then linked to a skin sensor located over the palm of a fully articulated, human-shaped robotic hand. When the sensor is touched, it registers a change in its electrical resistance: a small change corresponds to a light touch, and a larger change corresponds to a harder touch. This input is intended to mimic how sensory neurons work in the human body.
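The resistance-to-touch mapping can be sketched in a few lines: the relative change from a baseline resistance is bucketed into touch intensities. The baseline value and thresholds here are invented for illustration; the article does not give the sensor's actual characteristics.

```python
# Hypothetical mapping from a sensor's resistance change to touch
# intensity: a small relative change reads as a light touch, a large
# one as a hard touch. All numbers are illustrative assumptions.

BASELINE_OHMS = 1000.0

def classify_touch(resistance_ohms, light=0.05, hard=0.3):
    """Classify a touch from the relative change in resistance."""
    change = abs(resistance_ohms - BASELINE_OHMS) / BASELINE_OHMS
    if change < light:
        return "no touch"
    if change < hard:
        return "light touch"
    return "hard touch"

print(classify_touch(1010.0))  # ~1% change  -> "no touch"
print(classify_touch(1150.0))  # ~15% change -> "light touch"
print(classify_touch(1500.0))  # ~50% change -> "hard touch"
```

This graded response, a bigger physical change producing a stronger signal, is the property the sensor borrows from biological sensory neurons.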