
An Innovative AI-Powered Computer Vision and Gesture Recognition System

How does the brain interpret what we see, and how can computers be made to mimic the way the human brain handles sight? Those are the questions that computer vision seeks to answer. Today, many technologies use computer vision in artificial intelligence (AI). These AI systems rely on neural networks that must process large amounts of data in a very short time. Many AI-powered computer vision systems are already on the market, in high-precision surgical robots, health monitoring equipment, and gaming systems. Heard of the Google Cloud Vision API? That is one example. But engineers want to go beyond these applications. They want AI-powered computer systems to recognize human gestures as a complement to their visual capabilities. That is why gesture recognition has become a hot topic in computer vision and pattern recognition.

Image: an artificial-intelligence computer vision and gesture recognition system.

The drive to create AI systems that recognize hand gestures grew out of the need for computer systems and devices that can help people who communicate using sign language. Early systems used neural networks to classify signs in images captured by smartphone cameras and convert those pictures to text; they combined computer vision with image processing. AI systems have since grown far more advanced and precise than those humble beginnings. Today, many systems seek to improve on visual-only recognition by integrating input from wearable sensors. This approach is known as data fusion.

Data fusion is the process of integrating multiple data sources into a computer system, making the system more reliable and accurate than if the data came from a single source. AI-powered computer vision systems incorporate data fusion using wearable sensors that recreate the skin's sensory ability, especially its somatosensory functionality. This has enabled computer systems to recognize a wide variety of objects in their environment, increasing their functionality and usefulness. But challenges still hamper the precision and growth of these systems. One is that the quality of data from wearable sensors is low, because the wearable sensors produced so far are bulky and sometimes make poor contact with the user. Also, when objects are visually blocked or the lighting is poor, the ability of these AI-powered systems is reduced. Another problem that has troubled engineers is how to efficiently merge the data coming from the visual and the sensory signals; inefficient merging leads to slower response times in gesture recognition systems.
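To see why fusing two data sources can beat either one alone, consider a minimal Python sketch of inverse-variance weighting, one textbook way to combine two noisy estimates of the same quantity. The "camera" and "strain" names and the noise levels below are illustrative assumptions, not measurements from any system discussed in this post.

    # Toy illustration: fusing two noisy sensors with inverse-variance
    # weighting gives a lower-variance estimate than either sensor alone.
    import numpy as np

    rng = np.random.default_rng(0)
    true_angle = 30.0  # the quantity both sensors try to measure

    camera = true_angle + rng.normal(0.0, 4.0, 10_000)  # vision: std 4.0 (assumed)
    strain = true_angle + rng.normal(0.0, 2.0, 10_000)  # skin sensor: std 2.0 (assumed)

    # Weight each source by the inverse of its noise variance.
    w_cam, w_str = 1 / 4.0**2, 1 / 2.0**2
    fused = (w_cam * camera + w_str * strain) / (w_cam + w_str)

    for name, est in [("camera", camera), ("strain", strain), ("fused", fused)]:
        print(f"{name:6s} error std: {(est - true_angle).std():.2f}")
    # The fused error std comes out near 1.79, below both 4.0 and 2.0.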

In an innovative approach said to solve many of these challenges, a team of researchers at Nanyang Technological University, Singapore (NTU, Singapore) has created an AI data fusion system that draws its inspiration from nature. The system uses skin-like stretchable sensors made from single-walled carbon nanotubes, and its AI approach closely mimics the way the skin's signals and human vision are handled together in the brain.

How the NTU artificial intelligence gesture recognition system works

The NTU bio-inspired AI system is based on the combination of three neural networks:

  1. a convolutional neural network for early visual processing,

  2. a multilayer neural network for early somatosensory information processing, and

  3. a sparse neural network that fuses the visual and the somatosensory information together.

Combining these three neural networks lets the gesture recognition system process visual and somatosensory information more accurately and efficiently than existing systems.
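As a rough sketch of what such a three-network combination could look like in code, here is a minimal PyTorch model with a convolutional branch for vision, a multilayer branch for the stretchable-sensor signals, and a final fusion layer. The class name, layer sizes, 64x64 input, five sensor channels, and ten-gesture output are all hypothetical choices for illustration; this is not the NTU authors' published architecture.

    # A minimal sketch of the three-network idea, under assumed shapes.
    import torch
    import torch.nn as nn

    class BioInspiredFusionNet(nn.Module):  # hypothetical name
        def __init__(self, num_gestures=10, sensor_channels=5):
            super().__init__()
            # 1. Convolutional branch: early visual processing.
            self.visual = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 64x64 -> 32x32
                nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32x32 -> 16x16
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
                nn.Flatten(),                                           # 32*4*4 = 512
                nn.Linear(512, 64),
            )
            # 2. Multilayer branch: early somatosensory (strain-sensor) processing.
            self.somatosensory = nn.Sequential(
                nn.Linear(sensor_channels, 64), nn.ReLU(),
                nn.Linear(64, 64),
            )
            # 3. Fusion layer combining both streams; sparsity would be
            #    encouraged at training time, e.g. with an L1 penalty.
            self.fusion = nn.Linear(128, num_gestures)

        def forward(self, image, strain):
            fused = torch.cat([self.visual(image), self.somatosensory(strain)], dim=1)
            return self.fusion(fused)

    model = BioInspiredFusionNet()
    logits = model(torch.randn(1, 3, 64, 64), torch.randn(1, 5))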

The lead author of the study, Professor Chen Xiaodong of the School of Materials Science and Engineering at NTU, says the system is unique because it draws its inspiration from nature and mimics the somatosensory-visual fusion hierarchy that already exists in the human brain. According to him, no other system in the gesture recognition field has taken this approach.

What makes this system particularly accurate in data collection is that the stretchable skin sensors attach comfortably to the skin. This not only makes data collection more accurate but also delivers a higher-quality signal, which is vital for high-precision recognition systems.

The researchers have published their study in the scientific journal Nature Electronics.

High accuracy even in poor environmental conditions

As a proof of concept, the bio-inspired AI system was tested using a robot that was guided through a maze by hand gestures. The AI system guided the robot through the maze with zero errors, compared with the six recognition errors made by another, visual-only recognition system. It seems evident that this bio-inspired AI system is more accurate and efficient.

The system was also tested under noisy and unfavorable lighting conditions, and even then it maintained its high accuracy. When tested in the dark, it still worked efficiently, with a recognition accuracy of over 96.7%.

The authors say the success of their bio-inspired AI system lies in letting the visual and somatosensory information interact with and complement each other at an early stage, before any complex interpretation is carried out. This allows the system to collect coherent information with low data redundancy, low perceptual ambiguity, and better accuracy.
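The structural difference between fusing early and fusing late can be sketched in a few lines of Python. The toy feature vectors and random linear readouts below are purely illustrative stand-ins for whatever processing a real system would perform.

    # Sketch: early fusion combines modality features *before* interpretation;
    # late fusion interprets each modality separately and merges afterwards.
    import numpy as np

    rng = np.random.default_rng(1)
    visual_feat = rng.normal(size=8)  # toy low-level visual features (assumed)
    touch_feat = rng.normal(size=8)   # toy low-level somatosensory features (assumed)

    # Early fusion: one joint readout sees both feature sets at once,
    # so it can exploit correlations between the modalities.
    w_joint = rng.normal(size=16)
    early_score = w_joint @ np.concatenate([visual_feat, touch_feat])

    # Late fusion: each modality is read out alone, then the independent
    # decisions are averaged; cross-modal correlations are lost by then.
    w_vis, w_tch = rng.normal(size=8), rng.normal(size=8)
    late_score = 0.5 * (w_vis @ visual_feat) + 0.5 * (w_tch @ touch_feat)

    print(early_score, late_score)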

Promise of better things to come

This innovative study shows promise for the future. It suggests we are one step closer to a world where we could efficiently control our environment with a gesture. The applications that could be built on such a technology are endless, and it promises to create a vast number of opportunities in industry, from remote robot control in smart workplaces to exoskeletons for the elderly.

The NTU team aims to use its system to build virtual reality (VR) and augmented reality (AR) systems, because the system is most useful where high-precision recognition and control are required, such as in the entertainment and gaming industries.

Material for this post was taken from a press release by the Nanyang Technological University, Singapore.

A Look Into The Future Of Wireless Brain-Computer Interfaces

A brain-computer interface (BCI), or what some call a mind-machine interface, is a system that connects a human brain, through wires or wirelessly, to a machine in order to pick up signals from the brain, transmit them to a computer, and, through a bidirectional flow of information, allow the brain to control the machine and the machine to send signals back to the brain.

The idea of a mind-machine interface was popular in the 1970s, but it was not until the 1990s that prosthetic devices attached to the brain appeared viable. One concept behind these mind-machine devices is to capture the electrical activity of the brain through electroencephalography (EEG) and transmit it to a machine, which then generates signals capable of influencing the functioning of the brain. Professor Jacques Vidal of the University of California, Los Angeles (UCLA) is credited with inventing the first BCI machine. The concept has been applied in neuroprosthetics, the use of artificial devices to replace impaired nervous-system and brain functions.
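To give a flavor of the signal path just described, here is a toy Python sketch that extracts a band-power feature from a simulated EEG window and maps it to a command. The 250 Hz sampling rate, the 8-12 Hz alpha band, and the decision threshold are all illustrative assumptions, not parameters of any actual BCI device.

    # Toy EEG-driven loop: acquire a window, compute band power, decode a command.
    import numpy as np

    FS = 250      # sampling rate in Hz (assumed)
    WINDOW_S = 2  # analysis window in seconds

    def band_power(signal, fs, low, high):
        """Mean spectral power of `signal` between `low` and `high` Hz."""
        freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
        psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
        mask = (freqs >= low) & (freqs <= high)
        return psd[mask].mean()

    def decode_command(eeg_window):
        """Map alpha-band (8-12 Hz) power to a binary command."""
        alpha = band_power(eeg_window, FS, 8, 12)
        return "rest" if alpha > 1.0 else "move"  # threshold is arbitrary here

    # Simulated 2-second window: a strong 10 Hz alpha rhythm plus noise.
    t = np.arange(FS * WINDOW_S) / FS
    eeg = 2 * np.sin(2 * np.pi * 10 * t) + np.random.randn(len(t))
    print(decode_command(eeg))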

Thanks to this inventive and innovative approach, many people who have lost their vision, motor movement, or other bodily functions, as in paralysis, can live more normal lives using these machines. A breakthrough in BCI devices and neuroprosthetics came in 2009, when Alex Blainey, an independent researcher living in the UK, controlled a 5-axis robot arm using the Emotiv EPOC. Such devices could help even someone who has lost the use of their spinal cord through disease or injury to regain full movement.



One drawback, though, was that these devices not only required wires but could also generate heat in the brains of recipients. A wired BCI device limits the movement of the person in whose brain it is implanted; they are not free to move at ease. And a BCI device that dissipates too much energy could harm the person carrying it.

Now these challenges could become a thing of the past, as research suggests that neural implants in the brain could work wirelessly while consuming just one-tenth of the power of existing devices.

A team of electrical engineers and neuroscientists from Stanford University created a breakthrough device that could give recipients a wider range of movement without exposing them to harm from the heat generated by the implant. To test it, they ran their experiment on three nonhuman primates and one human participant in a clinical trial. While the subjects performed several movements by communicating with the computer using their brains, the researchers took extensive measurements. The results validated the researchers' hypothesis that a wireless, lower-power neuroprosthesis is possible and commercially viable.

Only time will tell when a device will be built that actually achieves the goal of this research: a mind-machine device that is both safe and wireless. The future of wireless brain-computer interfaces is within reach.

4 tips for learning and memory recall

Every day we interact with different people, learn different things, and encounter different situations. Hours, days, or weeks after such an encounter, can you properly recall what happened?

Here are some useful suggestions about memory and recalling events.

  1. Our memory keeps the gist and glosses over the details.

     When I was working for a bank, I used to take the company bus. At the end of the first day, it struck me that the company buses were all the same model and the same color. So how did I make out the bus for my route? The drivers understood one truth: people want the gist of a matter and would rather gloss over the details. So the buses were parked in the same spot at the same time every day.

     If they had not done that, I would have had to take the pain, and an inconvenient one, of recalling license plates, drivers' faces, bumps on the bodywork, et cetera.

     Could you make out these faces one hour hence?
     Credit: Wikipedia.org

     Faced with everyday items, our memory is poor. But given specific details, we can easily recall those items.

  2. Our memory is much poorer than we imagine.

     Close your eyes for one second. Can you recall all the items that were in front of you? Not zillions of them; can you recall even fifty? Most people cannot. Hours after reading an email, one forgets what its subject was, especially if one never replied to it. This week a company wrote to tell me that my annual subscription had been renewed and extended for free, and I sent a "thank you" message. If the company newsletter had stopped arriving, I would surely have recalled the subscription and re-subscribed.

     So never trust your memory. Make it a habit to jot down important details.

  3. Increased exposure does not guarantee memory recall.

     Increased exposure to a matter or subject increases familiarity, but it does not determine future recall. When I was a bachelor living alone, I used to have a friend write out the recipe for a favorite African dish for me. I never stored that recipe in my memory, and I could not recall it today if you asked me.

  4. Picking out the details that attract you is better than trying to learn everything.

     For effective learning, students and teachers need an idea of how the parts of a subject connect. For easier recall, students should concentrate on the easy parts of a subject, the areas that attract them most, before moving on to the difficult zones. It is the same with recalling information: start from your zone of confidence about a subject if you want to remember its details later. In an argument you had with someone, what really piqued you? Make a note of that; it could be the only thing you are able to recall weeks or months afterward.
