
Is There Life On Venus? Or Are We Alone?

Recently, it was reported in the media that phosphine has been detected in the acidic clouds of the planet Venus. Scientists regard phosphine as a potential biosignature, because on a rocky planet like ours no known non-biological process readily produces it in quantity. The discovery raises obvious questions: are we alone, and how solid is this evidence? Only further investigation will tell. As Carl Sagan, the famous astronomer and astrobiologist, put it: “extraordinary claims require extraordinary evidence.” That certainly applies here.

[Image: alien life on Venus]

What type of planet is Venus?

Venus is the second planet from the Sun in our solar system. As you would expect, it is very hot: surface temperatures reach about 900 degrees Fahrenheit, hot enough to melt lead. Thick clouds of sulfuric acid swirl around the planet, so concentrated that they fall outside the everyday pH scale we use on Earth. That is why Venus is often described as a hellscape. It is difficult to imagine life as we know it surviving there.

Despite that, astronomers have voiced the possibility in the past. Carl Sagan, an astronomer, and Harold Morowitz, a biologist, once proposed that microbes might exist in the acidic clouds swirling around the planet. Probes sent to Venus to investigate quickly succumbed to the heat and pressure, so scientists have concentrated their search for life on scanning its clouds for signs of microbes.

What do we know about the gas, phosphine?

The phosphine detected in the clouds of Venus is a toxic and explosive molecule with a lingering odor of garlic and dead fish. It was found at altitudes where the temperature is close to that on Earth, and the amount detected was tiny; the researchers describe it as “finding some tablespoonfuls in an Olympic size swimming pool.” Yet even that amount is enough to pique our curiosity, because of how the gas is made here on Earth.

Phosphine on Earth is made in either of two ways: as a natural byproduct of life, or artificially, for fumigants and other chemicals. As a byproduct of life, it is made by oxygen-hating microbes that live in places like swamps and marshes. Because no known non-biological process on a rocky planet readily produces it, scientists treat the gas as a potential biosignature. With that reputation, finding phosphine on Venus raises a possibility: could there be alien life on Venus, even if it is restricted to microbes drifting in the clouds?

Wise to be cautious

The data collected on the presence of phosphine in Venus's atmosphere is not substantial enough for astrobiologists to be certain that there is alien life on the planet, although one could say the potential is there; just potential. The gas could be coming from something other than life. An international team of researchers set out to simulate possible non-biological sources, modeling scenarios such as lightning strikes and meteors bombarding the clouds to see whether they could produce the observed amount of phosphine, but they came up short. In that sense the detection is extraordinary: if nothing else can explain it, then alien life could be the answer. But considering the nature of Venus, a harsh place for life, it would take an extremely acid-tolerant microbe to be living in those clouds.

That is why scientists are not yet saying there is alien life. The astronomy community has gone down this path before, proclaiming alien life only to be disappointed. So they would rather be cautiously optimistic than get ahead of the evidence. There are also details of the research that still need to be explored.

First, other researchers need to independently confirm that the detected gas really is phosphine. Venus's clouds contain abundant sulfur dioxide, which could influence the readings. Follow-up observations of the Venusian atmosphere will be needed to confirm the detection.

If the gas really is confirmed to be phosphine, the next step for researchers is to determine its source. This is essential before any hypothesis can be drawn; it would be foolhardy to rush to conclusions and declare the source is biology. Other possibilities have to be explored and ruled out. If scientists eventually agree that the source is biological, then Venus would have to be explored further. Missions would be sent to discover where in the clouds the microbes could exist and whether they might lead us to other areas of Venus. Such microbe-hunting missions would have to be carefully planned to avoid contaminating the Venusian clouds.

As it is, the data gathered so far cannot answer these questions, so we will have to wait and see what future research turns up. Finding life on another planet would help us understand our place in the universe: what it means to be alive, what conditions prompt life, and how we can sustain it. There is a good chance that Venus will be a planet of intense interest to astronomers and astrobiologists as they explore the recent phosphine findings.

But right now, we don't have a definite answer to the question: “Are we alone in the solar system?” We might never have one. But exploring the possibilities will open up new vistas of knowledge and expand our ability to solve some of the pressing challenges of planet Earth.

The video below is an interesting news commentary on this discovery. Enjoy it.


 

First Tracking Device Using Vibration and AI to Track 17 Home Appliances

As things stand, tracking the appliances in your home means installing a separate tracker for each one. What if you had 10 or 20 appliances? That would be quite an expense. Recently, researchers at Cornell University developed a single device that can track about 17 home appliances at once, using vibration sensing combined with a deep learning network. With this device you no longer need to worry about forgetting wet clothes in the washing machine, leaving food in the microwave, or overlooking a dripping faucet. It promises to make your home smart in a cost-effective way.

[Image: technology, vibration, AI]

Vibration analysis has several uses in industry, especially in detecting anomalies in machinery, but this is the first use of vibrations to track home appliances that I have found. The device, called VibroSense, uses a laser to capture the subtle vibrations carried by walls, floors and ceilings, and feeds them to a deep learning network that models the vibrometer data to create a unique signature for each appliance. Researchers are getting closer to their dream of making our homes not only smarter, but more efficient and integrated.

But can it detect appliance usage across a whole house, you may ask? A house contains many appliances, and the vibrations they emit can overlap. The researchers have a solution. To detect appliances throughout the house, and not just in a single room, they split the task into two stages: first, detect all the vibrations in the house using the laser Doppler vibrometer; second, differentiate the vibrations of multiple appliances, even similar ones, by identifying the path each vibration has traveled from room to room.

The deep learning network built into the device learns two kinds of features: path signatures, which identify different activities, and the distinctive noise patterns the vibrations acquire as they travel through the house.

To test its accuracy, the device was evaluated in five houses, where it identified the vibrations of 17 different appliances with 96% accuracy. Among the appliances it could identify were dripping faucets, an exhaust fan, an electric kettle, a refrigerator, and a range hood. Once trained, VibroSense could also identify five stages of appliance usage with 97% accuracy.

Cheng Zhang, assistant professor of information science at Cornell University and director of Cornell's SciFi Lab, said the device is recommended for single-family houses, because installed in an apartment building it could pick up activity in neighboring units; a significant privacy risk, one must say.

A smart device with immense benefits

When computers can recognize the activities going on in the home, the dream of the smart home moves closer to reality. Such systems ease the interaction between humans and computers, enabling interfaces that benefit everyone. That is what this tracking device does: it uses computing to understand human needs and behaviors. Previously we would have needed a separate device for each appliance; this one covers them all. “Our system is the first that can monitor devices across different floors, in different rooms, using one single device,” Zhang said.

I feel elated on discovering this device. No more waiting around for food to finish cooking in the microwave; with this device I could be watching TV while it watches the food on my behalf. There are many things we could use this for, and I think the innovation would be very beneficial to the average American.

But one concern about VibroSense is privacy. I wouldn't want my neighbor to know when I am in the bathroom, when I have the TV on, or that I am not in the house, yet that is exactly the kind of information the device could reveal.

When asked about privacy, Zhang said: “It would definitely require collaboration between researchers, industry practitioners and government to make sure this was used for the right purposes.” I hope that cooperation does come.

The device could also support sustainability and energy conservation by helping households monitor their energy usage and reduce consumption. It could even be used to estimate electricity and water usage, since it detects both the occurrence of an event and exactly how long the event lasted. This is the kind of energy-saving insight homeowners badly need. This is great!

Thinking about the benefits of a device like this in a typical home, I was so impressed by its potential that I decided the innovation deserved a place in my solvingit? blog. So, this is a thumbs up to Cheng Zhang and his team at Cornell.

The material for this post was based on the paper: “VibroSense: Recognizing Home Activities by Deep Learning Subtle Vibrations on an Interior Surface of a House from a Single Point Using Laser Doppler Vibrometry.” Cheng Zhang was senior author of the paper. The paper was published in Proceedings of the Association for Computing Machinery on Interactive, Mobile, Wearable and Ubiquitous Technologies and will be presented at the ACM International Joint Conference on Pervasive and Ubiquitous Computing, which will be held virtually Sept. 12-17.

Object Oriented Programming (OOP) In Python: Classes and Objects Part 1

Computer scientists are continually refining how they write programs, and over the years they have adopted several methodologies for doing so. Object Oriented Programming (OOP) is a popular methodology and the one Python relies on. Other programming languages that rely on OOP include Java and C++.

[Image: OOP in Python, classes and objects]

Object oriented programming, as the name implies, is organized around objects as its central concept, together with related ideas such as classes, inheritance, abstraction and encapsulation. This is a wide departure from functional programming, which is organized around functions. In OOP, data and the operations on that data are bundled together inside objects.

Object oriented programming has become popular because it brings programming closer to real life, to things people can relate to, rather than to mathematical functions that are often the province of professional scientists and mathematicians.

We will start by describing how Python implements classes and objects before relating them to the other OOP features in Python.

Python Class in OOP.

A class is a blueprint for defining the data and behavior of objects. It lets us group similar objects under one definition. For example, if you have two dogs, one called “James” and another called “Bingo”, both are objects with similar behavior and properties, so we could create a single Dog class for them.

When we create a Python class, we bundle together the data and behavior of an object and define them in one place. Creating a new Python class therefore creates a new type of object, from which new instances of that type can be made. For example, if we create a Dog class from the example above, new instances named ‘James’ and ‘Bingo’ can then be made. Each class instance carries the attributes defined by the class to maintain its state, and it can use the methods defined by the class to modify that state.

It is through the class definition that we can implement the other features of object oriented programming in Python, such as inheritance with multiple child classes, and child classes that override or share the names of methods defined by a parent class. Note that there is no limit on the amount or kind of data that objects derived from a class can contain. Like modules, classes in Python are dynamic: they are created at runtime and can be modified after creation.

The following is the syntax of a python class definition:

    
class ClassName:

    statement 1
    statement 2
    

To define a Python class you use the keyword class, followed by the name of the class and a colon. The indented block after the colon contains the statements that make up the class body, such as the attributes and methods the class defines for its objects.

Like a function definition, a Python class definition must be executed before it has any effect. The moment you call a class, you create an object; this is called class instantiation. You can define and instantiate a class this way:

    
# class definition
class Dog:

    def method1(self):
        print("This method belongs to the Dog class")

# class execution. Creates an object
james = Dog()
james.method1()

When a class definition is executed, a new namespace is created containing all the attributes and methods of the class; when the class is then called, a new object is created that has access to that namespace.

Most of the time, when creating objects you will define the instantiation special method, __init__(). This special method sets up the attributes that object instances need when they are created. When it exists, invoking the class to create an object automatically calls __init__() and executes any statements it contains.

For example, let’s take a class Animal that specifies that whenever an Animal object is created, it has to be given a name that would be bound to the object. We could write the code with the __init__() special method this way.

    
# class definition
class Animal:
    
    def __init__(self, name):
        ''' name is a string '''
        self.name = name

# class execution. Creates an object
james = Animal('James')

With the code above, any Animal object that is created must be supplied a name. Here we gave the animal the name James. That name is bound to the Python object james throughout its lifetime and is unique to it.

Python classes also contain data attributes and methods. Data attributes come in two kinds: attributes defined for the class as a whole and shared by all its objects (called class variables), and attributes specific to each instance of the class (called instance variables). I will show how class variables are distinguished from instance variables in a later section. Note that the name attribute of our Animal class above is an instance variable, because it belongs to each specific object of the class.

Python class methods are the operations performed by the objects of the class. A method is a function that “belongs to” an object. We add self as the first parameter when defining methods in a class; it refers to the instance on which the method is called, and Python supplies it automatically. The name self is a convention rather than an enforced keyword, although many editors will flag an error if you leave it out. Let us illustrate with an example of an animal that walks and talks below.
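The walking-and-talking example was embedded from an external snippet in the original post and is not reproduced here. The following is a minimal sketch of what such a class could look like; the method bodies are my assumptions, not the original code.

# class definition
class Animal:

    def __init__(self, name):
        ''' name is a string '''
        self.name = name

    def walk(self):
        # behavior: report that this animal is walking
        print(self.name + ' is walking')

    def talk(self):
        # behavior: report that this animal is talking
        print(self.name + ' is talking')

# class execution. Creates an object and calls its methods
james = Animal('James')
james.walk()
james.talk()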

Notice that every method of the class has self as its first parameter. The walk and talk methods define what each Animal object can do.

Python Objects and OOP.

Objects are the concrete entities of a program; they are the realization of classes, which are the logical entities. An object can be anything: a book, a student, an animal. Objects are defined by three important characteristics: (a) an identity, a piece of information that can be used to identify the object, e.g. a name or number; (b) properties, the attributes of the object; and (c) behavior, the operations performed on the object or the functions it can perform, e.g. a student can write, a car can move, an animal can walk.

Objects of a Python class support two kinds of operations: attribute reference and instantiation. I outlined both in the embedded code above, but will bring them out again for emphasis.

In Python, the standard syntax for attribute references is dot notation. In the code above, we refer to the walk and talk methods as objectname.walk() and objectname.talk(), and inside those methods we refer to the name data attribute as self.name. Valid attribute names are all the names that were in the class's namespace when the object was created, following from the class definition.

Class instantiation, which creates a Python object, uses function notation. We denoted it above with the code james = Animal('James'), where Animal refers to the Animal class being used like a function to create the object james. The instantiation call may take arguments or none, depending on how you defined your __init__() method. In our __init__() method above, we specified that a name must be supplied when an object is created.

Python Class and instance variables in OOP.

As we said above, class variables hold data shared by all instances of a class, while instance variables hold data specific and unique to each instance. If you want an attribute to be a class variable, define it in the class body outside the __init__() method; if you want it to be an instance variable, initialize it on self inside the __init__() special method. Let's illustrate this with the example below.
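The example the next paragraph describes was embedded externally in the original post. Here is a minimal sketch reconstructed from that description (a type class variable, a name instance variable, and two objects james and frog):

# class definition
class Animal:

    # class variable: defined outside __init__, shared by all objects
    type = 'animal'

    def __init__(self, name):
        # instance variable: defined inside __init__, unique to each object
        self.name = name

# class execution. Creates two objects
james = Animal('James')
frog = Animal('Frog')

print(james.type, frog.type)   # same value for both objects
print(james.name, frog.name)   # different value for each object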

From the example above you can see that we created two Animal objects, james and frog. In the class definition we defined the type attribute outside the __init__() method, so when we access it on both objects we get the same value. We defined the name attribute inside the __init__() method, so when we access it we get a different value for each object. Always keep this difference between class variables and instance variables in mind so your classes and objects behave the way you expect.

Data Hiding in Python.

Other object oriented languages such as Java let you hide data so that it cannot be accessed from outside the class, making that data private to the class. Python's designers chose not to enforce data hiding in the language; they wanted everything in Python to be transparent. Still, there are occasions where you want a measure of data hiding, and you can get it by prefixing an attribute name with a double underscore, __. When you do this, the attribute can no longer be accessed directly from outside the class.

For example, let’s make the type attribute hidden in the Animal class.
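The original embedded snippet is not reproduced here; a minimal sketch of the idea, continuing the Animal class from above, might be:

# class definition
class Animal:

    # the double underscore prefix hides this class variable
    __type = 'animal'

    def __init__(self, name):
        self.name = name

james = Animal('James')
print(james.name)     # works as usual
print(james.__type)   # raises AttributeError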

You can see now that referencing the type attribute from outside the class raises an AttributeError. But there is a workaround: if you write objectname._Classname__attributename, you can still get at the attribute, so nothing is truly hidden in Python. Let's show this with an example; take note of the name-mangled reference in the sketch below.
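Again, the embedded example is missing from the original post; the sketch below illustrates the name-mangling workaround using the same hypothetical Animal class as above.

james = Animal('James')

# direct access fails from outside the class
# print(james.__type)          # AttributeError

# the name-mangled form works
print(james._Animal__type)     # prints 'animal'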

It is worth understanding how Python implements OOP in depth, because when you work in Python you do not only use the built-in types; you also create your own. I hope this post helped you along that line. Subsequent posts will cover other OOP concepts, such as class inheritance and polymorphism in Python, that build on this knowledge. I hope you enjoy them too.

Happy pythoning.

Sustainable Game Boy That Runs Forever With No Batteries

Have you ever wished for a device whose battery never runs out, or wanted to put an end to the sustainability problems caused by batteries ending up in landfills? That possibility may soon be a reality thanks to a proof of concept developed by engineers at Northwestern University and the Delft University of Technology (TU Delft) in the Netherlands. They have built a handheld, sustainable game device that runs without batteries, relying on solar energy and the user's own key presses.

[Image: sustainable Game Boy]

Battery-free intermittent computing has long been a goal for researchers in the technology industry. A sustainable device like this could spell the end of the costly and environmentally hazardous batteries that power electronic devices such as handheld games and then end up in landfills. The device draws energy from the Sun and from the user pressing keys on the gamepad.

“It’s the first battery-free interactive sustainable device that harvests energy from user actions,” said Northwestern’s Josiah Hester, who co-led the research. “When you press a button, the device converts that energy into something that powers your gaming.”

On September 15, 2020 the team will present their sustainable game device virtually at the UbiComp 2020 conference. They promise that this is not a toy but the real thing.

So how does the device function? It is an energy-aware gaming platform (ENGAGE) built with precisely the size and form factor of the original Game Boy. Around the screen sits a set of solar panels that capture and convert sunlight into power, and a second source of energy comes from the user's button presses. An important design choice is that the device emulates the original Game Boy processor. Emulation consumes a lot of computational power, but it has the advantage that any retro game can be played straight from its original cartridge.

Power switching posed a challenge: as the device switches between energy sources it can momentarily lose power. The engineers addressed this by making the device both energy aware and energy efficient, so that the interruptions become inconsequential. They also developed a new technique for storing the system state in non-volatile memory, keeping the overhead of power failures minimal and letting the system restore its previous state when power returns. As a result, the device needs no ‘save’ button: it is state aware and resumes precisely where it stopped, even if the player was in the middle of an action.

On days when the sun shone strongly, or when button presses were steady, interruptions were short enough for players to ignore. The engineers have not yet reached their goal of fully uninterrupted play, but they are happy about one fact: this proof of concept shows that sustainable, environmentally conscious devices that do not use hazardous batteries are possible in the near future.

“Sustainable gaming will become a reality, and we made a major step in that direction — by getting rid of the battery completely,” said TU Delft’s Przemyslaw Pawelczak, who co-led the research with Hester. “With our platform, we want to make a statement that it is possible to make a sustainable gaming system that brings fun and joy to the user.”

“Our work is the antithesis of the Internet of Things, which has many devices with batteries in them,” Hester said. “Those batteries eventually end up in the garbage. If they aren’t fully discharged, they can become hazardous. They are hard to recycle. We want to build devices that are more sustainable and can last for decades.”

You can watch Hester describing this sustainable device in the video below:


 

First Pain-Sensing Electronic Skin that Reacts Like Human Skin

Imagine that you touch a hot stove. How do you perceive that the stove is hot and that you should withdraw your hand? In other words, how did you feel the pain? Doctors tell us that when the skin comes in contact with a hot object, sensory receptors pass the information to nerve fibers in the skin, which relay it through the spinal cord and brainstem to the brain, where it is registered and processed; the brain then signals the withdrawal and the pain is perceived. All of this happens in a fraction of a second. Can we mimic this process with technology?

[Image: electronic skin]

Scientists at RMIT University in Australia have shown that they can mimic the skin's pain-reception process with an electronic skin. They built a prototype device that replicates the way human skin perceives pain and gathers information from its environment. In tests, the electronic skin reacted nearly instantly, close to the instant feedback we get from our own skin. That is just wonderful.

The team did not stop there. They also built stretchable electronic devices that complement the prototype's pain reception by sensing temperature and pressure, and they integrated all of these functions into the prototype electronic skin, so that it can perceive not only pain but also temperature and pressure.

Lead researcher Professor Madhu Bhaskaran, co-leader of the Functional Materials and Microsystems group at RMIT, said the electronic skin was designed to behave like human skin.

How the electronic skin works

The electronic skin builds on three earlier devices and patents produced by the team. These were:

1. A stretchable electronic device that was transparent and unbreakable. It was made of silicon and could be worn on the skin.

2. Temperature-reactive coatings which are thinner than human hair and could react to changes in the temperature of the surroundings. The coatings were also transformable in the presence of heat.

3. A brain-mimicking electronic device that works as the brain does in using long-term memory to recall and retain previous information.

In the electronic skin prototype, the pressure sensor combines the stretchable electronics with the brain-mimicking memory cells, the heat sensor combines the temperature-reactive coatings with the memory cells, and the pain sensor combines all three technologies in one.

PhD researcher Md Ataur Rahman said the memory cells in each prototype are responsible for triggering a response when the pressure, heat or pain reaches a set threshold. He hailed the result as the first electronic somatosensory device able to replicate the complex neural mechanisms that carry information from the skin to the brain and back, interpreting what the skin's receptors sense from the environment. Unlike previous skin receptors that concentrated only on pain, he said, this prototype is the first of its kind to react to real mechanical pressure, temperature and pain at the same time and produce the correct response.

And it can distinguish between different thresholds of pain, temperature and pressure.

“It means our artificial skin knows the difference between gently touching a pin with your finger or accidentally stabbing yourself with it – a critical distinction that has never been achieved before electronically,” he said.

A preview of good things to come

According to Bhaskaran: “It's a critical step forward in the future development of the sophisticated feedback systems that we need to deliver truly smart prosthetics and intelligent robotics.”

Imagine a prosthetic leg that can feel real pain, pressure and temperature, or a robot that can distinguish different stimuli; imagine a future where human creativity meets the demands of Mother Nature. Bhaskaran called the pain-sensing prototype a significant advance towards next-generation biomedical technologies and intelligent robotics. We cannot wait for people who have lost limbs to have prosthetics that feel real, and skin grafts that feel like the real thing rather than artificial skin.

The benefits of this technology are enormous. That is why I decided to include it in my solvingit? blog.

The research was supported by the Australian Research Council and undertaken at RMIT’s state-of-the-art Micro Nano Research Facility for micro/nano-fabrication and device prototyping.

The paper, “Artificial Somatosensors: Feedback Receptors for Electronic Skins”, produced in collaboration with the National Institute of Cardiovascular Diseases (Bangladesh), is published in Advanced Intelligent Systems (DOI: 10.1002/aisy.202000094).

Breakthrough 3D Printing Of Heart For Treating Aortic Stenosis

Aortic valve stenosis is a condition in which a narrowed aortic valve fails to open properly, obstructing the flow of blood from the heart into the aorta. It is one of the most common cardiovascular conditions in the elderly, affecting about 2.7 million adults over the age of 75 in North America. If doctors judge the condition to be severe, they may carry out a minimally invasive heart procedure to replace the valve, called transcatheter aortic valve replacement (TAVR). The procedure is less invasive than open heart surgery, but it still carries risks, including bleeding, stroke, heart attack and even death, so doctors take every care to reduce them.

[Image: 3D printing of a heart model]

In a new paper published in Science Advances, a peer-reviewed journal of the American Association for the Advancement of Science (AAAS), researchers from the University of Minnesota and their collaborators describe a new technique for 3D printing lifelike models of the aortic valve and its surrounding structures, models that mimic the look and feel of the real valve. These 3D-printed models could help reduce the risks for doctors carrying out a TAVR procedure on a patient.

Specifically, they 3D printed a model of the aortic root, the section of the aorta closest to and attached to the heart. The aortic root includes the aortic valve, which is prone to stenosis in the elderly, along with the openings of the coronary arteries. The left ventricle muscle and the ascending aorta, which lie close to the aortic root, are also included in the model.

The models include specialized 3D-printed soft sensor arrays built into the structure, and the printing process is customized for each patient. The authors believe doctors around the world will use such organ models to improve outcomes for patients undergoing invasive procedures for aortic stenosis.

Before a model is produced, CT scans of the patient's aortic root are taken so that the print matches the exact shape of the patient's organ. Specialized silicone-based inks are then used for the printing itself, to match the feel of the patient's heart. These inks had to be developed for the purpose: commercial 3D printers can reproduce the shape of the heart's structures, but not the feel of its soft tissues. Heart tissue used to test the printers was obtained from the University of Minnesota's Visible Heart Laboratory, and the researchers found that the specialized printers produced the models they wanted, models that mimic both the shape and the feel of the aortic valve.

To watch a video of how the 3D printers work, I encourage you to play the video below. You would find it interesting.


The researchers are happy with what they have achieved.

“Our goal with these 3D-printed models is to reduce medical risks and complications by providing patient-specific tools to help doctors understand the exact anatomical structure and mechanical properties of the specific patient’s heart,” said Michael McAlpine, a University of Minnesota mechanical engineering professor and senior researcher on the study. “Physicians can test and try the valve implants before the actual procedure. The models can also help patients better understand their own anatomy and the procedure itself.”

These models will surely help physicians practice their catheterization procedures before working on the real heart. Physicians will be able to test the size and placement of the catheter device for a particular patient before carrying out the actual procedure, reducing the risks involved. The integrated sensors fitted into the 3D models also provide electronic pressure feedback, guiding physicians in selecting the optimal position of the catheter as it is placed into the patient's aorta.

But the researchers do not think these are the only use cases for their findings or the models. They aim to go beyond that.

“As our 3D-printing techniques continue to improve and we discover new ways to integrate electronics to mimic organ function, the models themselves may be used as artificial replacement organs,” said McAlpine, who holds the Kuhrmeyer Family Chair Professorship in the University of Minnesota Department of Mechanical Engineering. “Someday maybe these ‘bionic’ organs can be as good as or better than their biological counterparts.”

I think these are laudable, futuristic goals. If they achieve their ambition, McAlpine and his team will have solved a problem that gives sleepless nights to many physicians who have to operate on elderly patients with weak aortic valves.

Because this is an innovative solution to a challenging problem, I decided to include it in my blog. I hope you enjoyed reading about the achievements of McAlpine and his colleagues, and I hope they go beyond giving physicians 3D models to making models that can replace weak natural organs.

In addition to McAlpine, the team included University of Minnesota researchers Ghazaleh Haghiashtiani, co-first author and a recent mechanical engineering Ph.D. graduate who now works at Seagate; Kaiyan Qiu, another co-first author and a former mechanical engineering postdoctoral researcher who is now an assistant professor at Washington State University; Jorge D. Zhingre Sanchez, a former biomedical engineering Ph.D. student who worked in the University of Minnesota’s Visible Heart Laboratories who is now a senior R&D engineer at Medtronic; Zachary J. Fuenning, a mechanical engineering graduate student; Paul A. Iaizzo, a professor of surgery in the Medical School and founding director of the U of M Visible Heart Laboratories; Priya Nair, senior scientist at Medtronic; and Sarah E. Ahlberg, director of research & technology at Medtronic.

This research was funded by Medtronic, the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health, and the Minnesota Discovery, Research, and InnoVation Economy (MnDRIVE) Initiative through the State of Minnesota. Additional support was provided by University of Minnesota Interdisciplinary Doctoral Fellowship and Doctoral Dissertation Fellowship awarded to Ghazaleh Haghiashtiani.

You can read the full research paper, entitled "3D printed patient-specific aortic root models with internal sensors for minimally invasive applications," at the Science Advances website.

First Walking Microscopic Robots (Nanobots) To Change The World

It has been said many times that the future of nanoscale technology and nanobots is immense, and researchers keep expanding it. Recently, in a first of its kind, a Cornell University-led collaboration manufactured the first microscopic robots that can walk. The details read like a plot from a science fiction story.

[Image: microscopic robots, or nanorobots]

The collaboration is led by Itai Cohen, professor of physics, Paul McEuen, the John A. Newman Professor of Physical Science – both in the College of Arts and Sciences – and their former postdoctoral researcher Marc Miskin, who is now an assistant professor at the University of Pennsylvania. The engineers are not new to producing nanoscale creations. To their name they already have a microscopic nanoscale sensor along with graphene-based origami machines.

The microscopic robots are made with semiconductor components that allow them to be controlled, and made to walk, with electronic signals. Each robot has a brain, a torso and legs, and is about 5 microns thick, 40 microns wide, and 40 to 70 microns long (a micron is one millionth of a metre). The torso and brain were the easy part: they are simple circuits built from silicon photovoltaics. The legs were the real innovation; they consist of four electrochemical actuators.

According to McEuen, the technology for the brains and the torso already existed, so they had no problem with it except for the legs. “But the legs did not exist before,” McEuen said. “There were no small, electrically activatable actuators that you could use. So we had to invent those and then combine them with the electronics.”

The legs are made of strips of platinum deposited by atomic layer deposition and lithography, each strip only a few dozen atoms thick and capped with a layer of titanium. So how do the legs walk? By applying a positive charge to the platinum. When this is done, negative ions from the surrounding solution adsorb onto the platinum surface and neutralize the charge, which makes the platinum expand and the strips bend. Because the strips are ultrathin, they bend without breaking. To give three-dimensional motion control, rigid polymer panels were patterned on top of the strips; gaps in the panels let the legs flex like knees or ankles, so the legs move in a controlled manner.

A paper describing this technology titled: “Electronically integrated, mass-manufactured, microscopic robots,” has been published in the August 26 edition of Nature.

The future applications of this technology are immense. Since these electronically controlled microscopic robots are about the size of a paramecium, more sophisticated versions could one day be inserted into the human body to carry out tasks such as clearing clogged veins and arteries or probing the human brain. This first production run will also serve as a template for more complex versions in the future. The initial microscopic robot is a simple machine, but imagine how sophisticated and computationally capable it could become once fitted with more complicated electronics and onboard computers. Furthermore, producing the robots does not take much time or many resources, because they are silicon-based and the fabrication technology already exists, so mass-produced robots like these could be used in technology and medicine to the benefit of the human race. The economics alone make the benefits immense.

“Controlling a tiny robot is maybe as close as you can come to shrinking yourself down. I think machines like these are going to take us into all kinds of amazing worlds that are too small to see,” said Miskin, the study’s lead author.

The frontiers of nanobot technology are expanding by the day. With mass-produced robots like these on the market, I see solutions in the offing for various medical and technological challenges. This is an innovative nanobot.

Material for this post was taken from the Cornell University Website.

Light Trapping Nano-Antennas That Could Change The Application Of Technology

Travelling at 186,000 miles per second, light is extremely fast; even Superman cannot travel at the speed of light. We have long known how to control the direction of light by passing it through a refractive medium. But is it possible to trap light in a medium and change its direction, much as sound can be trapped in an echo chamber? Until now that possibility was theoretical, but new research has shown it could be practical. Since light carries information and underlies so many applications, the ability to control, trap or redirect it could have many uses in science and technology.

[Image: outline of the light-trapping device]

In a recent paper published in Nature Nanotechnology, Stanford scientists working in the lab of Jennifer Dionne, an associate professor of materials science and engineering at Stanford University, demonstrated an approach to manipulating light that can dramatically slow it down and redirect it at will. The researchers structured silicon chips into fine nanoscale bars that trap light; the trapped light can later be released or redirected.

One challenge the researchers faced was that the silicon structures are effectively transparent boxes. Light can be trapped in a box, but it is hard to do when the light is free to enter and leave at will, as it is in a transparent box.

Another challenge lay in manufacturing the resonators. Each resonator consists of an extremely thin silicon layer atop a wafer of transparent sapphire. The silicon traps light very effectively and efficiently, and it was preferred because it has low absorption in the near-infrared, the part of the spectrum the scientists were interested in; that region is difficult to visualize because of inherent noise, but it has useful applications in the military and the technology industry. A nano-antenna was then constructed on this structure using an electron-microscope pen. The difficulty in etching the pattern is that any imperfection makes it hard for the antenna to direct light, since the sapphire layer underneath is transparent.

The experiment would fail if the silicon box allowed light to leak out, so leakage had to be ruled out. Designing the structure on a computer was the easy part; the difficulty lay in manufacturing it, because of its nanoscale structure. Eventually the researchers settled on a trade-off: a design that gave good light-trapping performance while remaining feasible with existing manufacturing methods.

The usefulness of the application

The researchers have tinkered with the design of the device over the years, trying to achieve high quality factors, believing the approach could have important ramifications for the technology industry if made practical. The quality factor is a measure of the resonance behavior involved in trapping light; in this case it is proportional to the lifetime of the trapped light.

According to the researchers, the device demonstrated quality factors close to 2,500, roughly two orders of magnitude (about 100 times) higher than for previous comparable devices, so the experiment can be called very successful.

According to Jennifer Dionne, achieving such a high quality factor puts the device in a strong position to become practical in many technology applications, including quantum computing, virtual and augmented reality, light-based Wi-Fi, and the detection of viruses such as SARS-CoV-2.

One example application is biosensing. A biosensor is an analytical device for detecting biomolecules that combines a biological component with a physicochemical one. A single molecule is so small that it is essentially invisible, but if light is passed over the molecule hundreds or even thousands of times, the chance of producing a detectable scattering effect increases, making the molecule discernible.

Dionne's lab is working on applying the device to the detection of Covid-19 antigens and of the antibodies the body produces against them. Antigens are molecules produced by viruses that trigger an immune response, while antibodies are proteins the immune system produces in response to antigens. The ability to detect a single virus, or very low concentrations of many antibodies, comes from the light-molecule interaction the device creates. The nanoresonators are designed to work independently, so each antenna can detect a different type of antibody simultaneously.

The areas of application of this technology are immense. Only time will tell what becomes possible once other scientists start experimenting with this discovery. I think this innovation is a game changer.

Material for this post was taken from the Stanford University website.

An Innovative AI-powered Computer Vision And Gesture Recognition System

How does the brain interpret what we see, and how can computers be made to mimic the way the human brain handles sight? That is the question computer vision technologies seek to answer. Today, many technologies use computer vision in artificial intelligence (AI). These systems rely on neural networks and must process large amounts of data in a very short time. Many AI-powered computer vision systems are already on the market, used in high-precision surgical robots, health monitoring equipment and gaming systems; Google Cloud Vision API is one well-known example. But engineers want to go beyond these applications: they want AI-powered systems to recognize human gestures as well, complementing their visual capabilities. That is why gesture recognition has become a hot topic in computer vision and pattern recognition.

[Image: artificial intelligence computer vision and gesture recognition system]

The drive to create AI systems that recognize hand gestures came from the need to build computer systems and devices that can help people who communicate using sign language. Early systems used neural networks to classify signs from images captured by smartphone cameras, converting pictures into text; they combined computer vision with image processing. AI systems have since grown more advanced and more precise. Today, many systems seek to improve on visual-only recognition by integrating input from wearable sensors, an approach known as data fusion.

Data fusion is the process of integrating multiple data sources into a computer system, making the system more reliable and accurate than if the data came from a single source. AI-powered computer vision systems incorporate data fusion using wearable sensors that recreate the skin's sensory ability, especially its somatosensory function. This has allowed computer systems to recognize a wide variety of objects in their environment, increasing their functionality and usefulness. But several challenges still hamper precision and progress. One is that data from wearable sensors is often of low quality, because existing wearable sensors are bulky and sometimes make poor contact with the user. Another is that when objects are visually blocked or the lighting is poor, the performance of these AI systems drops. Engineers have also struggled to merge the visual and sensory signals efficiently; the result is inefficient use of information and slower response times in gesture recognition systems.

In an innovative approach that is said to solve many of these challenges, a team of researchers at the Nanyang Technological University, Singapore (NTU, Singapore), have created an AI data fusion system that drew its inspiration from nature. This system uses skin-like stretchable sensors made from single-walled carbon nanotubes. This is an AI approach that closely mimics the way the skin’s signals and human vision are handled together in the brain.

How the NTU artificial intelligence gesture recognition system works

The NTU bio-inspired AI system combines three neural network approaches: 1. a convolutional neural network for early visual processing, 2. a multi-layer neural network for early somatosensory information processing, and 3. a sparse neural network that fuses the visual and somatosensory information together.

Combining these three neural networks allows the gesture recognition system to process visual and somatosensory information more accurately and more efficiently than existing systems.
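The paper's actual architecture is not reproduced in the post. As a rough illustration of the bimodal fusion idea only, the sketch below (in PyTorch) uses a small CNN branch for camera frames, an MLP branch for wearable sensor readings, and a dense fusion classifier in place of the sparse fusion network; all layer sizes and names are my assumptions, not the NTU design.

import torch
import torch.nn as nn

class GestureFusionNet(nn.Module):
    """Toy bimodal network: a CNN branch for images, an MLP branch for
    strain-sensor readings, and a classifier over the fused features."""

    def __init__(self, num_gestures=10, sensor_channels=5):
        super().__init__()
        # visual branch: small convolutional feature extractor
        self.visual = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 32 features
        )
        # somatosensory branch: multi-layer perceptron over sensor readings
        self.somatosensory = nn.Sequential(
            nn.Linear(sensor_channels, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
        )
        # fusion classifier over the concatenated features
        self.fusion = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_gestures),
        )

    def forward(self, image, sensor):
        v = self.visual(image)            # (batch, 32)
        s = self.somatosensory(sensor)    # (batch, 32)
        return self.fusion(torch.cat([v, s], dim=1))

# example forward pass with random data
model = GestureFusionNet()
logits = model(torch.randn(4, 3, 64, 64), torch.randn(4, 5))
print(logits.shape)  # torch.Size([4, 10])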

The lead author of the study, Professor Chen Xiaodong of the School of Materials Science and Engineering at NTU, says the system is unique because it draws its inspiration from nature, mimicking the somatosensory-visual fusion hierarchy that already exists in the human brain. According to him, no other system in the gesture recognition field has taken this approach.

What makes the system particularly accurate is that the stretchable skin sensors attach comfortably to the skin, which not only improves data collection but also delivers a higher-quality signal, vital for high-precision recognition.

The researchers have published their study in the scientific journal Nature Electronics.

High accuracy even in poor environmental conditions

As a proof of concept, the bio-inspired AI system was tested on a robot controlled through hand gestures and guided through a maze. The system guided the robot through the maze with zero errors, compared with six recognition errors for a visual-only recognition system, which suggests the bio-inspired approach is both more accurate and more efficient.

It was also tested under noise and unfavorable lighting conditions, and even then it maintained its high accuracy. When tested in the dark, it still achieved a recognition accuracy of over 96.7%.

The authors say the success of their bio-inspired AI system lies in its ability to combine and cross-check the visual and somatosensory information at an early stage, before any complex interpretation is carried out. This lets the system collect coherent information with low data redundancy and low perceptual ambiguity, and with better accuracy.

Promise of better things to come

This innovative study shows promise for the future. It brings us one step closer to a world where we can control our environment with a gesture. The applications that could be built on such technology are endless, and it promises to create many opportunities in industry, from remote robot control in smart workplaces to exoskeletons for the elderly.

The NTU team aim to use their system to build virtual reality (VR) and augmented reality (AR) systems, because it is especially suited to areas where high-precision recognition and control are required, such as the entertainment and gaming industries.

Material for this post was taken from a press release by the Nanyang Technological University, Singapore.

Do Face Masks Really Protect Against Covid-19 As Claimed?

Public health officials have campaigned hard to convince us that wearing a face mask helps prevent the spread of Covid-19, the latest pandemic. But is this true? That was the question on the mind of Duke physician Eric Westman, a champion of mask wearing. He wanted to be sure he was recommending the right prevention technique to people and businesses, so he decided to carry out a proof-of-concept study: a study that tests whether a technique or method is really as effective as claimed, used in science as a testing phase to verify the feasibility of a method, technique or idea. When scientists carry out an investigation, they often start with an idea like this.

[Image: face masks like these are good against pandemics]

Doctor Westman was working with a non-profit that needed to provide face masks to people, and he was skeptical of the claims mask suppliers were making about how effective their masks were against a pandemic like Covid-19. So he went to Martin Fischer, Ph.D., a chemist and physicist at the university, and asked him to test various face masks. The tests covered surgical masks used in medical settings and cloth face masks, as well as bandanas and neck fleeces that people claim can prevent the spread of Covid-19.

Fischer’s line of work usually involves exploring the mechanisms behind optical contrast in molecular imaging studies. Intrigued by the doctor’s challenge, he set out to help. For the study he used materials that are readily available and can easily be bought online: a box, a laser, a lens, and a cell phone camera.
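To give a flavor of how the footage from such a cheap rig can be analysed, here is a minimal sketch in Python using OpenCV (an illustration of the general approach, not the authors' analysis code; the file name and brightness threshold are assumptions). Each video frame of the laser-lit box is thresholded, and the bright spots, which correspond to droplets crossing the beam, are counted.

# Minimal droplet-counting sketch (illustrative only; not the study's code).
import cv2

cap = cv2.VideoCapture("mask_test.mp4")   # hypothetical recording of the laser sheet
total_droplets = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Droplets crossing the laser beam show up as small bright spots.
    _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    total_droplets += len(contours)

cap.release()
print(f"Approximate droplet detections across all frames: {total_droplets}")

Comparing this count with and without a mask on the speaker gives a rough measure of how well that mask blocks droplets.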

The study proved positive: it showed that face masks are effective at reducing the spread of Covid-19. The results were recently published in the journal “Science Advances”. Despite being a low-cost technique, it demonstrated that face masks block droplets released from the mouth during speaking, sneezing, or coughing from being transmitted from one person to another. The researchers could see droplets being expelled whenever people spoke, coughed, or sneezed, and they confirmed that not all masks are equally effective at blocking them; some face coverings performed considerably better than others.

So how did the masks compare? The team ran the proof-of-concept test on a range of masks and compared their effectiveness. The N95 masks used in medical settings performed best. Surgical masks and masks made of polypropylene were also highly effective at blocking droplets. Cotton masks let some droplets through but still provided good coverage, eliminating many of the droplets released during normal speech. Bandanas and neck fleeces, however, were so ineffective at blocking droplets that they should not be used or recommended as face coverings at all.

When the physicist was asked whether this was the final word on the subject, he said no: more rigorous studies are needed, since this was only a demonstration of how the effectiveness of various face masks can be measured. The study was meant to show businesses that they can run such tests themselves before investing in any particular mask or face covering.

When asked about the benefits of the study, Westman, who initiated it, said that many people and businesses had been asking him how they could test the new face masks arriving on the market. He wanted to show that businesses can carry out these tests themselves with very simple materials; the parts are easily purchased online, and the team published the method to help others do the same.

As they hoped, they showed that the face coverings being promoted by public health officials are indeed effective at reducing the transmission of respiratory droplets from one person to another.

Although this is only a proof of concept and not a rigorous testing technique, one can confidently recommend face masks to individuals and businesses because they really do help limit the spread of Covid-19. My advice to everyone is to stay safe, wear a face mask, and help stop the spread of Covid-19. We will see an end to this pandemic soon; in the meantime, please keep up social distancing and regular hand washing.

Material for this post was provided by the dukehealth.org website.

Why Shaving Blades Become Useless After Cutting Human Hair

For a long time, scientists have been fascinated by one puzzle concerning blades. Razor blades are made of stainless steel, honed to a razor-sharp edge and often coated with diamond-like carbon to strengthen them further, yet a material roughly 50 times softer than the blade, human hair, can render them useless over time. From a logical point of view, this should not happen.


Intrigued by this problem, engineers at MIT’s Department of Materials Science and Engineering set out to find an answer. These engineers spend their days exploring the microstructure of materials in order to design new materials with exceptional damage resistance. The lead researcher, Gianluca Roscioli, an MIT graduate student, got the idea for the project while shaving.

After noticing that his blades dulled over time, he decided to image them after each shave. He took these images with a scanning electron microscope (SEM), scanning the blade’s edge to track how it wore down. What he found showed that the process is far more complex than simple gradual wear. There was very little wear or rounding at the edge; instead, chips were forming in certain regions of the razor’s edge. This led him to ask: under what conditions does this chipping take place, and what does it take for a hardened blade to fail against a material as soft as human hair?

To answer these questions conclusively, he built an apparatus designed to fit inside an SEM and used it to cut hair samples taken from himself and his colleagues while observing the blade’s edge. The team found several conditions that can cause the edge of a blade to chip, and as the chipping proceeds over time, it dulls the blade. The conditions depend on the blade’s microstructure: if the blade is heterogeneous, that is, its microscopic structure is not uniform, it is more prone to chipping. The cutting angle also matters; cutting at right angles to the hair caused less damage than cutting at shallower angles. Finally, defects in the steel’s microstructure helped initiate cracks at the blade’s edge. Chipping was most prominent when a hair met the blade at a weak point in its heterogeneous structure.

These conditions illustrate a mechanism that is well known in engineering: stress intensification. This is the amplification of an applied stress around flaws such as microcracks in a material’s structure. Once an initial microcrack has formed, the material’s heterogeneous structure allows it to grow easily into a chip. So even though the blade may be fifty times harder than the hair it is cutting, heterogeneity concentrates the stress at weak points and lets cracks keep growing.
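For a sense of why a tiny flaw matters so much, fracture mechanics offers a standard textbook relation (general background, not a formula taken from the MIT paper). For an applied stress \sigma acting on a crack of length a, the stress intensity at the crack tip is

K_I = Y\,\sigma\sqrt{\pi a}

where Y is a dimensionless geometry factor of order one. The crack grows, and eventually becomes a chip, once K_I reaches the steel’s fracture toughness; because K_I rises with the square root of the crack length, even the modest force of a bending hair can keep driving a crack that started at a microscopic weak point in the edge.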

The implications of this discovery are immense. It could save money for the average user of shaving blades by offering clues on how a blade’s edge can be preserved, and it gives manufacturers the opportunity to make better blades and cutting tools from more homogeneous materials.

The engineers have already taken their discovery one step further. They have filed a provisional patent on a process for manipulating steel into a more homogeneous form, in the hope of using it to build longer-lasting, more chip-resistant blades.

Material for this post was taken from the MIT news website.
