Researchers at the National University of Singapore (NUS), who are members of the Intel Neuromorphic Research Community, have begun showcasing new findings that combine neuromorphic processing with touch and vision sensing for robotics. Most robots today lack any sense of touch, and the team stresses how important this capability is. Neuromorphic processors are well suited to the task, and for this kind of sensory workload they can outperform traditional computing architectures.
Why is this important?
One of the main reasons neuromorphic computing is a big deal is that it can open up new applications in robotics. Right now, most robots rely on visual processing alone, which comes with significant limitations. By adding a technology like the artificial skin developed at NUS, robots could detect touch faster than the human nervous system does.
A robotic arm fitted with such artificial skin would be able to identify and adapt to changes in the products it handles. It could prevent slipping by adjusting its grip to the product's size, which would also mean less damage on the production line, making the production process faster and safer.
In other cases, such as surgical tasks, robots could adapt further and perform very precise incisions by adjusting to each patient's requirements. These are only a few ideas, but they clearly show the unique capabilities that neuromorphic sensors and artificial skin can provide.
Of course, for the artificial skin to work properly, you also need a chip that draws conclusions from the sensory data and feeds that information to the robot. Making that sensory processing fast enough is the main challenge right now. NUS has a working version of such a chip, but the team is still experimenting and improving it.
How did this all start?
Initially, the NUS team set out to explore how neuromorphic technology could be used to process sensory data, relying on Intel's Loihi neuromorphic research chip. In a first experiment, a robotic hand with artificial skin read Braille with around 92% accuracy, while taking 20% less time than a conventional von Neumann processor. Building on that success, the team went on to improve the robot's perception capabilities, focusing on combining touch and vision data.
Also interesting: Biomimetics, When Robotics Imitate Nature – Robot See, Robot Do
They tasked the robot with classifying opaque containers based on how much liquid they held inside, using the sensors to test the perception abilities of a robot equipped with artificial skin. The results were very encouraging, showing the vast potential of this technology to improve robot capabilities.
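To get an intuition for how combining two senses can beat either one alone, here is a minimal, purely illustrative sketch of fusing per-class confidence scores from a vision model and a touch model. This is not the NUS/Intel method (their system runs spiking networks on Loihi); the function names, class labels, and the simple weighted-average fusion are all assumptions made for illustration.

```python
# Hypothetical sketch: fusing touch and vision classifier outputs.
# All names and the fusion rule (weighted average) are illustrative
# assumptions, not the actual NUS/Intel implementation.

def fuse_predictions(vision_probs, touch_probs, touch_weight=0.5):
    """Combine per-class probabilities from two modalities by a
    weighted average, then renormalize so they sum to 1."""
    if len(vision_probs) != len(touch_probs):
        raise ValueError("both modalities must score the same classes")
    w = touch_weight
    fused = [(1 - w) * v + w * t for v, t in zip(vision_probs, touch_probs)]
    total = sum(fused)
    return [p / total for p in fused]

def classify(vision_probs, touch_probs):
    """Return the index of the most likely class after fusion."""
    fused = fuse_predictions(vision_probs, touch_probs)
    return max(range(len(fused)), key=fused.__getitem__)

# Classes: 0 = empty, 1 = half full, 2 = full (hypothetical labels).
# Vision alone cannot tell the contents of an opaque container apart,
# but touch (e.g. weight and sloshing) breaks the tie.
vision = [0.5, 0.5, 0.0]   # vision is ambiguous between empty and half full
touch = [0.1, 0.8, 0.1]    # touch strongly suggests half full
print(classify(vision, touch))  # → 1 ("half full")
```

The takeaway the sketch is meant to convey: when one modality is ambiguous, even a crude combination with a second modality can resolve the classification, which is the intuition behind pairing artificial skin with cameras.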
The technology is still in its early days, but it has a lot of potential, and it clearly shows the real value it can bring to the table. We are confident it will keep improving, and many manufacturing companies stand to benefit from it.
YouTube: Combining Vision and Touch in Robotics Using Intel Neuromorphic Computing
YouTube: National University of Singapore Uses Robotic Hand with Intel Tech to Read Braille
Photo credit: All images shown are owned by the National University of Singapore and were provided by Intel as part of a press kit.
Source: Intel press release