
An Innovative AI-Powered Computer Vision and Gesture Recognition System

How does the brain interpret what we see, and how can computers be made to mimic that ability? Those are the questions computer vision seeks to answer. Today, many artificial intelligence (AI) technologies depend on computer vision. These systems rely on neural networks that must process large amounts of data in very little time. Many AI-powered computer vision systems are already on the market, used in high-precision surgical robots, health monitoring equipment and gaming systems; the Google Cloud Vision API is a well-known example. But engineers want to go beyond these applications: they want AI-powered systems to recognize human gestures as a complement to their visual capabilities. That is why gesture recognition has become a hot topic in computer vision and pattern recognition.


The drive to create AI systems that recognize hand gestures came from the need to develop computer systems and devices that can help people who communicate using sign language. Early systems used neural networks to classify signs in images captured by smartphone cameras and convert them into text; they combined computer vision with image processing. AI systems have since grown far more advanced and precise than those humble beginnings. Today, many projects seek to improve on these vision-only recognition systems by integrating input from wearable sensors, an approach known as data fusion.

Data fusion is the process of integrating multiple data sources into a computer system, making it more reliable and accurate than it would be with a single source. AI-powered computer vision systems incorporate data fusion through wearable sensors that recreate the skin's sensory ability, especially its somatosensory function. This has enabled computer systems to recognize a wide variety of objects in their environment, increasing their functionality and usefulness. But challenges still hamper the precision of these systems and the growth of the field. One is that data from wearable sensors is often of low quality, because the sensors produced so far are bulky and sometimes make poor contact with the user. Recognition also degrades when objects are visually blocked or the lighting is poor. A further problem that has troubled engineers is how to merge the visual and sensory signals efficiently; inefficient integration leads to slower response times in gesture recognition systems.
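As a minimal illustration of the idea (not the NTU method; the sensor names and numbers below are made up), inverse-variance weighting is a textbook way to fuse two noisy readings of the same quantity so that the combined estimate is more reliable than either source alone:

```python
# Two independent estimates of the same quantity (say, a finger joint
# angle): one from a camera pipeline, one from a wearable strain sensor.
# Each comes with a variance describing how noisy that source is.
camera_estimate, camera_var = 42.0, 4.0   # degrees; noisy in dim light
sensor_estimate, sensor_var = 45.0, 1.0   # degrees; noisy with poor contact

# Inverse-variance weighting: trust each source in proportion to its
# reliability. The fused variance is lower than either input variance,
# which is the formal sense in which fusion improves accuracy.
w_cam = 1.0 / camera_var
w_sen = 1.0 / sensor_var
fused = (w_cam * camera_estimate + w_sen * sensor_estimate) / (w_cam + w_sen)
fused_var = 1.0 / (w_cam + w_sen)

print(f"fused: {fused:.2f} deg (variance {fused_var:.2f} vs 4.0 and 1.0)")
```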

In an innovative approach that is said to solve many of these challenges, a team of researchers at Nanyang Technological University, Singapore (NTU Singapore) has created an AI data fusion system inspired by nature. The system uses skin-like stretchable sensors made from single-walled carbon nanotubes, and its AI approach closely mimics the way the brain handles the skin's signals and human vision together.

How the NTU artificial intelligence gesture recognition system works

The NTU bio-inspired AI system combines three neural networks:

1. a convolutional neural network for early visual processing,
2. a multilayer neural network for early somatosensory information processing, and
3. a sparse neural network that fuses the visual and somatosensory information.

Combining these three networks lets the gesture recognition system process visual and somatosensory information more accurately and efficiently than existing systems.
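The sketch below is a rough PyTorch rendering of that three-network layout. The layer sizes, input shapes and the L1 penalty standing in for the sparse fusion network are assumptions made for illustration, not the published architecture:

```python
import torch
import torch.nn as nn

class BioInspiredFusionNet(nn.Module):
    """Illustrative layout of the three-network idea: a CNN for early
    visual processing, an MLP for early somatosensory processing, and a
    fusion head over the combined features. Sizes are assumptions."""

    def __init__(self, n_gestures: int = 10, n_sensor_channels: int = 8):
        super().__init__()
        # 1. Convolutional branch: early visual feature extraction.
        self.visual = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 32 * 4 * 4 = 512
        )
        # 2. Multilayer branch: early somatosensory (strain-sensor) features.
        self.somatosensory = nn.Sequential(
            nn.Linear(n_sensor_channels, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        # 3. Fusion head over the concatenated features. An L1 weight
        # penalty (see below) stands in for the paper's sparse network.
        self.fusion = nn.Sequential(
            nn.Linear(512 + 64, 128), nn.ReLU(),
            nn.Linear(128, n_gestures),
        )

    def forward(self, image: torch.Tensor, sensors: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.visual(image), self.somatosensory(sensors)], dim=1)
        return self.fusion(fused)

model = BioInspiredFusionNet()
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 8))  # (2, 10)

# Example sparsity term to add to the classification loss during training:
l1_penalty = sum(p.abs().sum() for p in model.fusion.parameters())
```

The structural point is that each modality gets its own early-processing branch, and fusion happens on features rather than on final decisions.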

The lead author of the study, Professor Chen Xiaodong of the School of Materials Science and Engineering at NTU, says the system is unique because it draws its inspiration from nature, mimicking the somatosensory–visual fusion hierarchy that already exists in the human brain. According to him, no other system in the gesture recognition field has taken this approach.

What makes the system particularly accurate in data collection is that the stretchable skin sensors attach comfortably to the skin. This makes data collection more accurate and delivers the higher-quality signals vital for high-precision recognition systems.

The researchers have published their study in the scientific journal “Nature Electronics”.

High accuracy even in poor environmental conditions

As a proof of concept, the bio-inspired AI system was tested with a robot that was guided through a maze using hand gestures. The system guided the robot through the maze with zero errors, compared with six recognition errors made by a visual-only recognition system, clear evidence that the bio-inspired approach is more accurate and efficient.

The system was also tested under noise and unfavorable lighting. Even in these conditions it maintained its high accuracy; tested in the dark, it still achieved a recognition accuracy of over 96.7%.

The authors of the study attribute the success of their bio-inspired AI system to its ability to let the visual and somatosensory information interact and complement each other at an early stage, before any complex interpretation is carried out. This allows the system to collect coherent information with low data redundancy, low perceptual ambiguity and better accuracy.
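To see why that early interaction matters, the toy snippet below contrasts decision-level (late) fusion with the feature-level (early) fusion described above; every number and the stand-in classifiers are hypothetical:

```python
# Toy contrast between late and early fusion. All values and the
# classifiers are made up; this only illustrates the principle.

def late_fusion(vision_scores, sensor_scores):
    """Interpret each modality on its own, then average the verdicts.
    Whatever one branch discarded can no longer help the other."""
    return [(v + s) / 2 for v, s in zip(vision_scores, sensor_scores)]

def early_fusion(vision_features, sensor_features, joint_classifier):
    """Let the features meet before any interpretation, so one
    modality can resolve ambiguity in the other."""
    return joint_classifier(vision_features + sensor_features)

# A dim image leaves vision undecided between two gestures, while the
# strain sensors clearly favour the second one.
print(late_fusion([0.5, 0.5], [0.2, 0.8]))                 # -> [0.35, 0.65]
print(early_fusion([0.5, 0.5], [0.2, 0.8],
                   lambda f: [f[0] * f[2], f[1] * f[3]]))  # -> [0.1, 0.4]
```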

Promise of better things to come

This innovative study shows promise for the future. It suggests we are one step closer to a world where we can efficiently control our environment with a gesture. The applications that could be built on such a technology are endless, promising a vast range of opportunities in industry, from remote robot control in smart workplaces to exoskeletons for the elderly.

The NTU team now aims to use their system to build virtual reality (VR) and augmented reality (AR) systems, since it is best suited to areas where high-precision recognition and control are required, such as the entertainment and gaming industries.

Material for this post was taken from a press release from Nanyang Technological University, Singapore.
