Seattle, US, May 10 — Microsoft hosted its Build 2017 event from May 10–12 in Seattle. Many of the projects and talks were interesting, but this article concentrates on the workplace safety demonstration.
How to improve workplace safety?
In its concept video (embedded below), Microsoft presents several scenarios in which technology available today could be used to improve workplace safety in high-risk environments. The focus is on industries such as manufacturing, logistics, construction, and health care.
At its core, letting systems help people stay safe in the workplace relies on sensors, such as cameras, and the ability to understand what the images mean. This can be achieved by teaching machines what they are seeing and how to react to that input.
This doesn’t need artificial intelligence, nor does it require machine learning; simple programming can be enough. One example: cameras spot a barrel tipping over and leaking toxic liquid. Given that visual input, the system could recognize a spill scenario and inform the right personnel about the location of the spill.
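The spill scenario described above is essentially a rule-based event router: a labeled camera event comes in, and an alert goes out to whoever is responsible. A minimal sketch, assuming a hypothetical upstream component has already classified the event (the `CameraEvent` fields and the `ROUTING` table are invented for illustration, not Microsoft's actual system):

```python
from dataclasses import dataclass


@dataclass
class CameraEvent:
    camera_id: str
    label: str      # classification produced upstream, e.g. "spill"
    location: str


# Hypothetical routing table: which role handles which event type.
ROUTING = {
    "spill": "hazmat_team",
    "fall": "first_aid",
}


def route_alert(event: CameraEvent) -> str:
    """Build a plain-text alert addressed to the responsible personnel;
    unknown event types fall back to the site supervisor."""
    recipient = ROUTING.get(event.label, "site_supervisor")
    return (f"ALERT to {recipient}: {event.label} detected at "
            f"{event.location} (camera {event.camera_id})")


print(route_alert(CameraEvent("cam-07", "spill", "warehouse aisle 3")))
# ALERT to hazmat_team: spill detected at warehouse aisle 3 (camera cam-07)
```

The point of the lookup-table design is exactly what the article argues: once the event is labeled, dispatching the alert is plain programming, not machine learning.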
In a later second phase, that same programming could perhaps dispatch maintenance robots to clean up the spill, but for now, informing a foreman would already help prevent serious risk to the workers’ safety.
With an asset management system that records equipment data and its real-time location, you could save time, and therefore money, at construction sites. In many cases, expensive gear is shared among many workers who carry it around. Perhaps one of the workers is looking for a special tool; today he or she would need to use the radio or search the floors to find whoever has it. Besides attaching a location tracker directly to each asset, this could also be done with a network of cameras that understand what they see and know where a tool is simply by spotting it.
In a digital environment, that worker could instead consult a knowledge base to learn more about the required task, check where the tool currently is, and go there directly to grab it, without radio chatter or wandering around. That could save a lot of time when putting up a building on a tight schedule. Will the workers like it? That is a whole different question.
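The asset-tracking idea boils down to a registry of "last seen" sightings that cameras or tags keep up to date and workers can query. A minimal in-memory sketch, with all names (`AssetTracker`, `report_sighting`, `locate`) invented for illustration:

```python
from datetime import datetime, timezone


class AssetTracker:
    """Hypothetical in-memory asset registry: cameras or location
    tags report sightings, workers query the last known location."""

    def __init__(self):
        self._last_seen = {}  # asset_id -> (location, timestamp)

    def report_sighting(self, asset_id: str, location: str) -> None:
        """Record where an asset was last observed."""
        self._last_seen[asset_id] = (location, datetime.now(timezone.utc))

    def locate(self, asset_id: str):
        """Return the last known location, or None if never seen."""
        entry = self._last_seen.get(asset_id)
        return entry[0] if entry else None


tracker = AssetTracker()
tracker.report_sighting("laser-level-01", "floor 4, east wing")
print(tracker.locate("laser-level-01"))  # floor 4, east wing
```

A production system would persist this and expire stale sightings, but the worker-facing query is just a key lookup either way.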
This kind of visual understanding by machines and systems can also be used in a health care setting such as a hospital. Microsoft showcases an example of a patient walking too much after a critical surgery. If this goes unnoticed by staff, the patient could suffer pain or, in the worst case, collapse.
If a camera could assess the medical status of that patient, the system could notify staff of the person’s location and condition and give them the fastest route to an available wheelchair, so the patient can be brought back to their room safely.
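Finding "the fastest route to an available wheelchair" is a shortest-path problem over the floor plan. A minimal sketch using breadth-first search, assuming an unweighted adjacency map of rooms and corridors (the `FLOOR_PLAN` layout and wheelchair locations are made up for illustration):

```python
from collections import deque

# Hypothetical hospital floor graph: location -> adjacent locations.
FLOOR_PLAN = {
    "room-12": ["hall-a"],
    "hall-a": ["room-12", "hall-b", "storage-1"],
    "hall-b": ["hall-a", "room-15"],
    "storage-1": ["hall-a"],
    "room-15": ["hall-b"],
}

WHEELCHAIR_LOCATIONS = {"storage-1"}


def route_to_nearest_wheelchair(start: str):
    """Breadth-first search: shortest path (fewest hops) from the
    patient's location to any spot holding an available wheelchair."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] in WHEELCHAIR_LOCATIONS:
            return path
        for neighbor in FLOOR_PLAN.get(path[-1], []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no wheelchair reachable


print(route_to_nearest_wheelchair("room-15"))
# ['room-15', 'hall-b', 'hall-a', 'storage-1']
```

With real corridor distances you would swap BFS for Dijkstra's algorithm, but the structure of the problem stays the same.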
These concepts would not require much technical innovation. Most of the building blocks are already available or could be polished for service delivery within a year. The core is a network of cameras connected to systems that also include communication and collaboration software. Hardware endpoints wouldn’t necessarily need to be Windows-powered; if you break everything down to basics, you could even push information to people via simple text messages (SMS).
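The SMS fallback has one practical wrinkle: a standard single SMS segment holds 160 characters (with the basic GSM-7 alphabet), so alerts should be trimmed to fit. A small sketch of that formatting step (the delivery itself, via a carrier gateway or messaging API, is out of scope; the function name is invented):

```python
def fit_sms(message: str, limit: int = 160) -> str:
    """Trim an alert so it fits a single standard 160-character SMS
    segment; longer texts would otherwise split into multiple parts."""
    if len(message) <= limit:
        return message
    return message[:limit - 3] + "..."


alert = "ALERT: spill detected at warehouse aisle 3. Hazmat team requested."
print(fit_sms(alert))  # fits as-is, printed unchanged
```

Keeping critical alerts inside one segment also avoids the risk of a multipart message arriving out of order or incomplete.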
Comments on YouTube criticize the concept harshly, invoking “Big Brother is watching you.” Yet it is up to each person to decide how much privacy they would give up for an improvement in workplace safety. The machine won’t mind you picking your nose, but it might end up saving people’s lives.
YouTube: Build 2017: Workplace Safety Demonstration
Story pitched by news scout Pupu Liang.
Thanks for that!
Photo credit: Microsoft
Source: Microsoft Build 2017 / YouTube