RadarCat Can Recognize Objects Based on Radar Signals


When thinking about an immersive augmented reality workspace, product designers and engineers have overlooked a really useful capability: What if systems could identify objects and their properties in real time? A team of computer scientists from the St Andrews Computer Human Interaction research group (SACHI), including Hui-Shyong Yeo, is working on exactly such a project right now, and the results already look great.

Their solution is called “RadarCat”, which stands for Radar Categorization for Input and Interaction. It lets systems analyze the radar signals reflected by objects and recognize those objects again through machine learning. The team launched the project’s info website on the 7th of October this year.
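The article doesn’t detail the team’s full pipeline, but conceptually this is supervised classification: record labeled radar scans, extract feature vectors, train a classifier, then match new scans against the learned categories. Here’s a minimal sketch in Python with scikit-learn; the fake_scan feature extraction and the material list are hypothetical stand-ins for the Soli sensor’s actual output.

```python
# Minimal sketch of the RadarCat idea: supervised classification of
# radar signal features. Only the train/recognize split mirrors the
# concept; the features themselves are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

NUM_FEATURES = 64
MATERIALS = ["air", "empty_glass", "glass_of_water", "steel", "human_hand"]

def fake_scan(material_id: int) -> np.ndarray:
    """Hypothetical stand-in: real features would come from the radar signal."""
    base = np.linspace(0.0, 1.0, NUM_FEATURES) * (material_id + 1)
    return base + rng.normal(scale=0.1, size=NUM_FEATURES)

# Build a small labeled training set: 100 scans per material.
X = np.array([fake_scan(m) for m in range(len(MATERIALS)) for _ in range(100)])
y = np.array([m for m in range(len(MATERIALS)) for _ in range(100)])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Train a classifier on the labeled scans, then score it on held-out data.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")

# At runtime, a fresh scan is matched back to a learned category.
prediction = clf.predict([fake_scan(2)])[0]
print("Recognized:", MATERIALS[prediction])
```

The real system obviously deals with far noisier, higher-dimensional signals, but the train-once, recognize-at-runtime structure is the same idea.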

The sensor was originally provided by Google ATAP (Advanced Technology and Projects) as part of their Project Soli alpha developer kit program. SACHI’s research, however, took it in a slightly different, and truly interesting, direction.

With RadarCat, machines can now identify all kinds of objects. It can distinguish air from a glass and even tell an empty glass from one filled with water. It recognizes whether something is part of a human body and, if so, which body part is being scanned. It can even tell a green surface from a red one.

Image: RadarCat diagram

Okay, this is all pretty interesting for science’s sake, but what are the possible applications? What technologies could be enhanced by this? For instance, I think it could be used to make Industry 4.0 a little safer by making sure that a mighty robot arm only squishes and bends metal pieces and never humans.

Or think of advanced input mechanisms for future media editing and creation software. For instance, you could use a physical color palette to select the color you want to paint with digitally. The system could possibly “comprehend” the picked color and then translate it into a hex code for you, as sketched below.
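The hex translation step itself is straightforward once the system has resolved the scanned swatch to an RGB triple. Here’s a quick sketch; the picked value is a made-up stand-in for whatever the palette scan would actually return.

```python
def rgb_to_hex(r: int, g: int, b: int) -> str:
    """Convert an 8-bit RGB triple to a CSS-style hex code."""
    for channel in (r, g, b):
        if not 0 <= channel <= 255:
            raise ValueError("channels must be in 0..255")
    return f"#{r:02x}{g:02x}{b:02x}"

# Hypothetical: the sensing stack resolves a palette scan to an RGB triple.
picked = (46, 139, 87)  # e.g., a "sea green" swatch
print(rgb_to_hex(*picked))  # -> #2e8b57
```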

On the project’s website, Professor Aaron Quigley, Chair of Human Computer Interaction at the University of St Andrews, states, “Beyond human-computer interaction, we can also envisage a wide range of potential applications ranging from navigation and world knowledge to industrial or laboratory process control”.

It could be used to enhance special education or maybe even support disaster recovery operations by scanning collapsed buildings for buried people. There are a lot of applications where this research could significantly improve how we work with technology. It’s a project you should keep your eyes on. Make sure you check out the video below! What do you think?


YouTube: RadarCat: Radar Categorization for Input & Interaction using Soli [UIST2016]

Story pitched by news scout Pupu Liang. Thanks for that!

Photo credit: SACHI Research
Editorial notice: We received interesting feedback on this article on the Imzy platform. The user Twinning_Ivy commented that she thought some of the examples above wouldn’t be applicable or reasonable. This solution is at an early stage of development, and in this overview we didn’t want to limit the possible applications too much. Thank you for making the effort to share your thoughts with us.
