
Is There Life On Venus? Or Are We Alone?

Recently, media outlets reported that phosphine has been detected in the acidic clouds of the planet Venus. According to scientists, phosphine is a potential biosignature: on rocky planets, it is thought to be produced mainly by living organisms. The discovery raises obvious questions: are we alone? How substantial is this evidence? Only time and further research will tell. As Carl Sagan, the famous astronomer and astrobiologist, put it: “extraordinary claims require extraordinary evidence.” And that certainly applies here.

alien life on venus

 

What type of planet is Venus?

Venus is the second planet from the sun in our solar system. As you would expect, temperatures there are very high: the surface can reach about 900 degrees Fahrenheit, hot enough to melt lead. Heavy clouds of sulfuric acid swirl around the planet, so corrosive that they barely register on the pH scale we use on Earth. That is why Venus is often called a hellscape. It is difficult to imagine life as we know it surviving there.

Despite that fact, astronomers have raised the possibility before. The astronomer Carl Sagan and the biologist Harold Morowitz once proposed that microbes could exist in those acidic clouds, swirling around the planet. Probes sent to check out the planet were, true to type, destroyed by the harsh conditions on descent. So scientists have since concentrated their search for life on browsing the clouds for microbial life.

What do we know about the gas, phosphine?

The phosphine detected in the clouds of Venus is a toxic, explosive molecule with a lingering odor of garlic and dead fish. It was found at altitudes where temperatures are close to those on Earth. The amount detected was tiny; the researchers describe it as like finding a few tablespoonfuls in an Olympic-size swimming pool. Yet even that amount is enough to pique our curiosity, because of how the gas is made here on Earth.
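The pool analogy is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, assuming an abundance of roughly 20 parts per billion (a figure quoted in press coverage of the detection, not taken from this post) and standard Olympic pool dimensions:

```python
# Rough sanity check of the "tablespoonfuls in an Olympic pool" analogy.
# Assumption: phosphine abundance of ~20 parts per billion (ppb), the
# figure widely quoted in coverage of the detection.
OLYMPIC_POOL_LITERS = 2_500_000      # 50 m x 25 m x 2 m
ABUNDANCE_PPB = 20                   # assumed abundance
TABLESPOON_LITERS = 0.0147868        # one US tablespoon

phosphine_liters = OLYMPIC_POOL_LITERS * ABUNDANCE_PPB * 1e-9
tablespoons = phosphine_liters / TABLESPOON_LITERS
print(f"{phosphine_liters:.3f} L ≈ {tablespoons:.1f} tablespoons")
```

A few tablespoons in a pool's worth of atmosphere is a vanishingly small signal, which is part of why independent confirmation matters so much.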

Phosphine on Earth is made in one of two ways: as a natural byproduct of life, or artificially, for fumigants and other biochemicals. As a byproduct of life, it is made by oxygen-averse (anaerobic) microbes that live in swamps and marshes. Because of this association with life, scientists have called the gas a potential biosignature. So, with the reputation phosphine has earned, finding it on Venus raises a possibility: could there be alien life there, even if it is restricted to microbes drifting in the clouds?

Wise to be cautious

The data collected so far on phosphine at Venus is not substantial enough to make astrobiologists certain that there is alien life there; one can only say the potential exists. The gas could be coming from something other than life. An international team of researchers set out to simulate non-biological explanations, modeling scenarios such as lightning strikes and meteors bombarding the clouds to see whether they could produce the observed amount of phosphine, but they came up short. In that sense the detection is extraordinary: if nothing else can explain it, alien life could be the answer. Still, considering how harsh Venus is, it would take an exceptionally acid-loving microbe to survive in those clouds.

That is why scientists are not claiming alien life yet. The astronomy community has gone down this path before, proclaiming evidence of alien life only to be disappointed. They would rather be cautiously optimistic than overreach. There are also details of the research that still need to be explored.

First, other researchers need to verify the claim that the gas really is phosphine. The Venusian clouds contain sulfur dioxide, which could influence the readings. Follow-up observations of the Venusian atmosphere will also be needed to confirm the detection.

If the gas is confirmed to be phosphine, the next step for researchers is to determine its source. This is essential before a hypothesis can be drawn; it would be foolhardy to jump to conclusions at an early stage and declare the source biological. Other possibilities have to be explored and ruled out. If scientists do eventually agree that the source is biology, then Venus would have to be explored directly. Missions would be sent to discover where in the clouds the microbes might exist and whether they could lead us to other regions of the planet. Such microbe-hunting missions would have to be carefully planned to avoid contaminating the Venusian clouds.

As it stands, the data gathered so far cannot answer these questions, so we will have to wait and see what future research turns up. Finding life on another planet would help us understand our place in the universe: what it means to be alive, what conditions prompt life, and how we might extend it. Venus may well become a planet of intense interest to astronomers and astrobiologists as they follow up on the phosphine findings.

But right now, we don’t have a definite answer to the question: “Are we alone in the solar system?” We might never have one. But exploring the possibilities will open up new vistas of knowledge and expand our ability to solve some of the pressing challenges of planet Earth.

The video below is an interesting news commentary on this discovery. Enjoy it.


 

First Tracking Device Using Vibration and AI to Track 17 Home Appliances

As things stand, tracking each appliance in your home means installing a separate sensor on each one. If you have 10 or 20 appliances, that adds up to quite an expense. Recently, researchers at Cornell University developed a single device that can track about 17 home appliances at once, using vibration sensing combined with a deep learning network. With it, you no longer need to worry about forgetting wet clothes in the washing machine, leaving food in the microwave, or a dripping faucet left running. The device promises to make your home smart in a cost-effective way.

technology vibration ai

 

Vibration analysis has long been used in industry, especially for detecting anomalies in machinery, but this is the first use I have found of vibrations for tracking home appliances. The device, called VibroSense, uses a laser Doppler vibrometer to capture the subtle vibrations transmitted through walls, floors and ceilings, and feeds the signal into a deep learning network that models the vibrometer data to create a unique signature for each appliance. Researchers are getting closer to their dream of making our homes not only smarter, but more efficient and integrated.

But can it detect appliance usage across a whole house, you may ask? A house contains many appliances, and the vibrations they emit can overlap. The researchers have a solution to that problem. To detect appliances throughout the house, not just in a single room, they split the task in two: first, the device detects all vibrations in the house using the laser Doppler vibrometer; second, it differentiates the vibrations of multiple appliances, even similar ones, by identifying the path each vibration has traveled from room to room.

The deep learning network in the device learns two kinds of features: path signatures, which identify different activities, and the distinctive noise each vibration picks up as it travels through the house.
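To make the two-feature idea concrete, here is a hypothetical sketch: a toy nearest-neighbor matcher stands in for the deep network, and each stored template pairs a path label with a few frequency-band energies. All names and numbers are invented for illustration; VibroSense itself works quite differently.

```python
import math

# Hypothetical sketch: classify an appliance from a vibration "signature",
# here a room-to-room path label plus a tuple of frequency-band energies.
# A nearest-neighbor match stands in for VibroSense's deep network.

TEMPLATES = {
    ("kitchen->livingroom", (0.9, 0.2, 0.1)): "microwave",
    ("basement->livingroom", (0.3, 0.8, 0.4)): "washing machine",
    ("bathroom->livingroom", (0.1, 0.1, 0.7)): "dripping faucet",
}

def classify(path, energies):
    """Return the template appliance whose signature is closest."""
    best, best_dist = None, math.inf
    for (tpl_path, tpl_energies), name in TEMPLATES.items():
        if tpl_path != path:          # the path signature must match first
            continue
        dist = math.dist(energies, tpl_energies)
        if dist < best_dist:
            best, best_dist = name, dist
    return best

print(classify("kitchen->livingroom", (0.85, 0.25, 0.05)))  # microwave
```

Matching on the path first, then on the waveform, mirrors the two-stage idea: it lets two similar-sounding appliances in different rooms be told apart.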

To test its accuracy, the device was deployed in five houses, where it identified the vibrations of 17 different appliances with 96% accuracy. Among the appliances it could identify were a dripping faucet, an exhaust fan, an electric kettle, a refrigerator, and a range hood. Once trained, VibroSense could also identify five stages of appliance usage with 97% accuracy.

Cheng Zhang, assistant professor of information science at Cornell University and director of Cornell’s SciFi Lab, said the device is recommended for single-family houses: installed in a multi-unit building, it could pick up activities going on in neighboring homes. A significant privacy risk, one must say.

A smart device with immense benefits

When computers can recognize the activities going on in a home, the dream of the smart home moves closer to reality. Such systems ease the interaction between humans and computers, enabling interfaces that benefit everyone. That is what this tracking device does: it uses computing to understand human needs and behaviors. Where we formerly needed a separate sensor for each appliance, a single device now covers them all. “Our system is the first that can monitor devices across different floors, in different rooms, using one single device,” Zhang said.

I felt elated on discovering this device. No more waiting by the microwave for my food to cook; with this device, I could be watching TV while it watches the food on my behalf. There are a lot of things we could use it for, and I think this innovation would be very beneficial to the average American.

But one concern about VibroSense is privacy. I wouldn’t want my neighbor to know when I am in the bathroom, or that the TV is on, or that I am not home. Yet that is exactly the kind of information the device can reveal.

When asked about the issue of privacy, Zhang said: “It would definitely require collaboration between researchers, industry practitioners and government to make sure this was used for the right purposes.” I hope that cooperation does come.

The device could also enable sustainability and energy conservation in the home, helping households monitor their energy usage and reduce consumption. Because it can detect both that an event occurred and exactly how long it lasted, it could even be used to estimate electricity and water usage. That is precisely the kind of energy-saving insight homeowners need. This is great!

Thinking about what a device like this could do in a typical home, I was so wowed by its potential that I decided the innovation deserved a place on my solvingit? blog. So, this is a thumbs up to Cheng Zhang and his team at Cornell.

The material for this post was based on the paper “VibroSense: Recognizing Home Activities by Deep Learning Subtle Vibrations on an Interior Surface of a House from a Single Point Using Laser Doppler Vibrometry,” with Cheng Zhang as senior author. The paper was published in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies and will be presented at the ACM International Joint Conference on Pervasive and Ubiquitous Computing, held virtually Sept. 12-17.

Sustainable Game Boy That Runs Forever With No Batteries

Have you ever wished for a device whose battery never runs out, or wanted to put an end to the sustainability problems caused by batteries ending up in landfills? That possibility may soon be a reality, thanks to a proof of concept developed by engineers at Northwestern University and Delft University of Technology (TU Delft) in the Netherlands. They have built a handheld game device that runs without batteries, relying on solar energy and the user’s own key presses.

sustainable gameboy

 

Battery-free intermittent computing has long been a challenge for researchers in the technology industry. With this sustainable device, we may soon see an end to the costly and environmentally hazardous batteries that power electronic devices like interactive games, only to end up in landfills. The device harvests energy from the sun and from the user’s presses on the gamepad keys.

“It’s the first battery-free interactive sustainable device that harvests energy from user actions,” said Northwestern’s Josiah Hester, who co-led the research. “When you press a button, the device converts that energy into something that powers your gaming.”

On September 15, 2020, this team of engineers will present their sustainable game device virtually at the UbiComp 2020 conference. They promise that this is not a toy but the real thing.

So one may ask: how does this device work? The energy-aware gaming platform (ENGAGE) has precisely the size and form factor of the original Game Boy. Its screen is framed by a set of solar panels that convert sunlight into power, and the user’s button presses supply a second source of energy. An important design choice is that the device emulates the original Game Boy processor. While emulation costs a lot of computational power, it has the advantage that any retro game can be played straight from its original cartridge.

Power switching posed a challenge: as the device switches from one energy source to the other, it can lose power. The engineers overcame this by making the device both energy-aware and energy-efficient, so that the brief power failures become inconsequential. They also developed a new technique for storing the system state in non-volatile memory, keeping the overhead of power failures minimal and letting the system restore its previous state when power returns. As a result, the ‘save’ button found on other devices does not exist here: the system is state-aware and resumes the game from precisely where it stopped, even if the player was mid-action.
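The checkpoint-and-restore idea can be sketched in a few lines. This is a hypothetical illustration of the general technique, not the team’s implementation; a JSON file stands in for non-volatile memory, and the real system checkpoints far more efficiently.

```python
import json, os

# Hypothetical sketch of intermittent computing via checkpointing.
# A JSON file stands in for non-volatile memory.

NVM_FILE = "state.json"

def checkpoint(state):
    """Persist the game state so a power failure loses nothing."""
    with open(NVM_FILE, "w") as f:
        json.dump(state, f)

def restore():
    """Resume from the last checkpoint, or start fresh."""
    if os.path.exists(NVM_FILE):
        with open(NVM_FILE) as f:
            return json.load(f)
    return {"level": 1, "score": 0}

state = restore()
state["score"] += 10          # ... game logic runs ...
checkpoint(state)             # a power failure now loses nothing
print(restore())
```

Checkpointing just before power is lost is what makes an explicit ‘save’ button unnecessary: restoring the state is indistinguishable from never having lost power.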

The team found that on days when the sun shone brightly, or when key-clicking was steady, interruptions were short enough for players to ignore. The engineers have not yet reached their goal of fully uninterrupted play, but they are happy about one fact: this proof of concept shows that sustainable, environmentally conscious devices without hazardous batteries are possible in the near future.

“Sustainable gaming will become a reality, and we made a major step in that direction — by getting rid of the battery completely,” said TU Delft’s Przemyslaw Pawelczak, who co-led the research with Hester. “With our platform, we want to make a statement that it is possible to make a sustainable gaming system that brings fun and joy to the user.”

“Our work is the antithesis of the Internet of Things, which has many devices with batteries in them,” Hester said. “Those batteries eventually end up in the garbage. If they aren’t fully discharged, they can become hazardous. They are hard to recycle. We want to build devices that are more sustainable and can last for decades.”

You can watch Hester describing this sustainable device in the video below:


 

First Pain-Sensing Electronic Skin that Reacts Like Human Skin

Imagine touching a hot stove: how do you perceive that the stove is hot and that you should withdraw your hand? In other words, how do you feel the pain? Doctors tell us that when the skin comes in contact with a hot object, sensory receptors pass the information to nerve fibers in the skin, which relay it via the spinal cord and brainstem to the brain, where it is registered and processed. The brain then signals the skin that it has touched something hot, and the pain is perceived. All of this happens in a fraction of a second. Can humans mimic this process with technology?

electronic skin

 

Scientists at RMIT University in Australia have shown that they can mimic the skin’s pain-reception process with an electronic skin. They built a prototype device that replicates the way human skin perceives pain and gathers information from its environment. In tests, the electronic skin reacted nearly instantly, close to the feedback speed of real human skin. That is just wonderful.

The team did not stop there; they went further. They also built stretchable electronic devices that complement the prototype’s pain reception by sensing temperature and pressure. By integrating all of these functions into the prototype electronic skin, they made it able to perceive not only pain but temperature and pressure as well.

Lead researcher Professor Madhu Bhaskaran, co-leader of the Functional Materials and Microsystems group at RMIT, said the electronic skin was designed to behave like human skin.

How the electronic skin works

The electronic skin builds on three previous devices and patents produced by the team. These were:

1. A stretchable electronic device that is transparent and unbreakable, made with silicone and wearable on the skin.

2. Temperature-reactive coatings, thinner than a human hair, that respond to changes in the surrounding temperature and transform in the presence of heat.

3. A brain-mimicking electronic device that, like the brain, uses long-term memory to retain and recall previous information.

In the electronic skin prototype, the pressure sensor combines the stretchable electronics with the brain-mimicking memory cells; the heat sensor combines the temperature-reactive coatings with the memory cells; and the pain sensor combines all three technologies into one.

PhD researcher Md Ataur Rahman said the memory cells in each prototype trigger a response when the pressure, heat or pain crosses a set threshold. He hailed the device as the first electronic somatosensor able to replicate the complex neural mechanisms of carrying information from the skin to the brain and back, interpreting what the skin’s receptors sense in the environment. Unlike previous skin sensors, which focused only on pain, he said, this prototype is the first of its kind to react to real mechanical pressure, temperature and pain at the same time and provide the correct response to each.

And it comes with the ability to distinguish between different thresholds of pain, temperature and pressure.

“It means our artificial skin knows the difference between gently touching a pin with your finger or accidentally stabbing yourself with it – a critical distinction that has never been achieved before electronically,” he said.
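The pin example maps naturally onto threshold logic. Here is a hypothetical sketch of that behavior; the units and threshold values are invented for illustration, not taken from the RMIT device:

```python
# Hypothetical sketch of threshold-triggered sensing, loosely mirroring
# the memory-cell behavior described above. Thresholds are illustrative.

THRESHOLDS = {"pressure": 50.0, "heat": 45.0}  # kPa, °C (invented units)

def respond(stimulus, value):
    """Fire a pain signal only when the stimulus crosses its threshold."""
    if value < THRESHOLDS[stimulus]:
        return "no response"
    return f"pain signal: {stimulus} at {value} exceeds threshold"

print(respond("pressure", 5.0))    # gentle touch of a pin
print(respond("pressure", 80.0))   # accidental stab
```

The key property is the same one Rahman describes: identical stimuli on either side of the threshold produce qualitatively different responses, not just stronger or weaker ones.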

A preview of good things to come

According to Bhaskaran: “It’s a critical step forward in the future development of the sophisticated feedback systems that we need to deliver truly smart prosthetics and intelligent robotics.”

Yes, imagine a prosthetic leg that can feel real pain, pressure and temperature, or a robot that can distinguish between different stimuli; imagine a future where human creativity meets the demands of Mother Nature. Bhaskaran called the pain-sensing prototype a significant advance towards next-generation biomedical technologies and intelligent robotics. We cannot wait for people who have lost limbs to have prosthetics that feel real, and skin grafts that feel like the real thing rather than artificial skin.

The benefits of this technology are enormous. That is why I decided to include it in my solvingit? blog.

The research was supported by the Australian Research Council and undertaken at RMIT’s state-of-the-art Micro Nano Research Facility for micro/nano-fabrication and device prototyping.

The paper, ‘Artificial Somatosensors: Feedback receptors for electronic skins’, in collaboration with the National Institute of Cardiovascular Diseases (Bangladesh), is published in Advanced Intelligent Systems (DOI: 10.1002/aisy.202000094).

Breakthrough 3D Printing Of Heart For Treating Aortic Stenosis

Aortic valve stenosis occurs when a narrowed aortic valve fails to open properly, obstructing the pumping of blood from the heart into the aorta. It is one of the most common cardiovascular conditions in the elderly, affecting about 2.7 million adults over the age of 75 in North America. When doctors judge the condition severe, they may perform a minimally invasive heart procedure to replace the valve, called transcatheter aortic valve replacement (TAVR). Though less invasive than open-heart surgery to repair the damaged valve, the catheterization procedure is not without risks, which include bleeding, stroke, heart attack and even death. That is why it is important that doctors take every care to reduce them.

3D printing of heart

In a new paper published in Science Advances, a peer-reviewed journal of the American Association for the Advancement of Science (AAAS), researchers from the University of Minnesota and their collaborators present a new technique for 3D printing lifelike models of the aortic valve and its surrounding structures, models that mimic the look and feel of the real organ. These printed models could help reduce the risks for doctors carrying out a TAVR procedure on a patient.

Specifically, they 3D printed a model of the aortic root, the section of the aorta closest to and attached to the heart. The aortic root includes the aortic valve, which is prone to stenosis in the elderly, along with the openings of the coronary arteries. The model also includes the nearby left ventricle muscle and the ascending aorta.

The models include specialized soft sensor arrays 3D printed directly into the structure, and the printing process is customized for each patient. The authors believe doctors around the world could use such organ models to improve outcomes for patients undergoing invasive procedures to treat aortic stenosis.

Before a model is produced, CT scans of the patient’s aortic root are taken so that the print mimics the exact shape of the patient’s organ. Specialized silicone-based inks then do the actual printing, matching the exact feel of the patient’s heart. These inks had to be developed specially, because commercial printers on the market can print 3D shapes but cannot reproduce the soft feel of real heart tissue. The heart tissue used to test the 3D printers was obtained from the University of Minnesota’s Visible Heart Laboratory, and the researchers found that the specialized printers produced exactly what they wanted: models that mimic both the shape and the feel of the aortic valve.

To watch a video of how the 3D printers work, I encourage you to play the video below. You would find it interesting.


The researchers are happy with what they have achieved.

“Our goal with these 3D-printed models is to reduce medical risks and complications by providing patient-specific tools to help doctors understand the exact anatomical structure and mechanical properties of the specific patient’s heart,” said Michael McAlpine, a University of Minnesota mechanical engineering professor and senior researcher on the study. “Physicians can test and try the valve implants before the actual procedure. The models can also help patients better understand their own anatomy and the procedure itself.”

These models will surely help physicians practice their catheterization procedures before operating on the real heart. Physicians will be able to rehearse the sizing and placement of the catheter device on a patient-specific model before carrying out the real procedure, thereby reducing the risks involved. A particular benefit of the integrated sensors fitted into the 3D models is that they give physicians electronic pressure feedback, guiding them in determining and selecting the optimal position of the catheter as it is placed into the patient’s aorta.
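One plausible way to use such feedback is to compare candidate catheter positions and pick the one with the lowest worst-case pressure on the surrounding tissue. A hypothetical sketch; the readings and the selection rule are invented for illustration, not taken from the Minnesota study:

```python
# Hypothetical sketch: compare candidate catheter positions using
# peak pressure readings from an embedded sensor array.
# All values are invented (arbitrary units).

candidate_readings = {
    "position_a": [1.2, 3.8, 2.1],
    "position_b": [1.0, 1.4, 1.1],
    "position_c": [2.5, 2.9, 3.3],
}

def best_position(readings):
    """Choose the position whose worst-case sensor pressure is lowest."""
    return min(readings, key=lambda pos: max(readings[pos]))

print(best_position(candidate_readings))  # position_b
```

Minimizing the worst-case reading, rather than the average, reflects the clinical intuition that a single point of excessive pressure is what causes damage.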

But the researchers do not think these are the only use cases for their findings or the models. They aim to go beyond that.

“As our 3D-printing techniques continue to improve and we discover new ways to integrate electronics to mimic organ function, the models themselves may be used as artificial replacement organs,” said McAlpine, who holds the Kuhrmeyer Family Chair Professorship in the University of Minnesota Department of Mechanical Engineering. “Someday maybe these ‘bionic’ organs can be as good as or better than their biological counterparts.”

I think these are laudable futuristic goals. If they could achieve their ambition, then McAlpine would be solving a problem that gives sleepless nights to many physicians who have to operate on elderly patients with weak aortic valves.

Because this is an innovative solution to a challenging problem, I decided to include it in my blog. I hope you enjoyed reading about the achievements of McAlpine and his colleagues, and I hope they go beyond giving physicians 3D models to making models that can replace weak natural organs.

In addition to McAlpine, the team included University of Minnesota researchers Ghazaleh Haghiashtiani, co-first author and a recent mechanical engineering Ph.D. graduate who now works at Seagate; Kaiyan Qiu, another co-first author and a former mechanical engineering postdoctoral researcher who is now an assistant professor at Washington State University; Jorge D. Zhingre Sanchez, a former biomedical engineering Ph.D. student who worked in the University of Minnesota’s Visible Heart Laboratories who is now a senior R&D engineer at Medtronic; Zachary J. Fuenning, a mechanical engineering graduate student; Paul A. Iaizzo, a professor of surgery in the Medical School and founding director of the U of M Visible Heart Laboratories; Priya Nair, senior scientist at Medtronic; and Sarah E. Ahlberg, director of research & technology at Medtronic.

This research was funded by Medtronic, the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health, and the Minnesota Discovery, Research, and InnoVation Economy (MnDRIVE) Initiative through the State of Minnesota. Additional support was provided by University of Minnesota Interdisciplinary Doctoral Fellowship and Doctoral Dissertation Fellowship awarded to Ghazaleh Haghiashtiani.

You can read the full research paper, entitled "3D printed patient-specific aortic root models with internal sensors for minimally invasive applications," at the Science Advances website.

First Walking Microscopic Robots (Nanobots) To Change The World

It has been said many times that the future of nanoscale technology and nanobots is immense, and researchers continue to expand it each day. Recently, in a first of its kind, a Cornell University-led collaboration manufactured the first microscopic robots that can walk. The details read like a plot from a science fiction story.

microscopic robots or nanorobots

 

The collaboration is led by Itai Cohen, professor of physics, Paul McEuen, the John A. Newman Professor of Physical Science, both in the College of Arts and Sciences, and their former postdoctoral researcher Marc Miskin, now an assistant professor at the University of Pennsylvania. The engineers are no strangers to nanoscale creations: to their name they already have a microscopic nanoscale sensor and graphene-based origami machines.

The microscopic robots are built from semiconductor components that allow them to be controlled, and made to walk, with electronic signals. Each robot has a brain, a torso and legs. They are 5 microns thick, 40 microns wide, and 40 to 70 microns long; a micron is one millionth of a metre. The torso and brain were the easy part, consisting of simple circuits made from silicon photovoltaics. The legs, however, were completely new: each robot has four, built from electrochemical actuators.

According to McEuen, the technology for the brains and torsos already existed; the legs were the problem. “But the legs did not exist before,” McEuen said. “There were no small, electrically activatable actuators that you could use. So we had to invent those and then combine them with the electronics.”

The legs are strips of platinum, deposited by atomic layer deposition and lithography, just a few dozen atoms thick and capped with layers of titanium. So how do the legs walk? By applying a positive charge to the platinum: negative ions from the surrounding solution adsorb onto the surface and neutralize the charge, causing the platinum to expand and the strip to bend. Because the strips are ultrathin, they bend without breaking. For three-dimensional motion control, rigid polymer panels are patterned on top of the strips; gaps in the panels let the legs flex like knees or ankles, so the generated motion can be controlled.
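The actuation principle suggests a simple control loop: alternately charge and release pairs of legs. A hypothetical sketch follows; the gait pattern and bookkeeping are invented for illustration, and the Cornell paper’s actual control scheme may differ.

```python
# Hypothetical walking-gait sketch: alternately charging two pairs of
# platinum-strip legs. Charging bends a pair; releasing relaxes it.

def step_cycle(cycles):
    """Alternate front/back leg pairs; each full cycle is one step."""
    log = []
    for _ in range(cycles):
        log.append(("front_pair", "charged"))   # front legs bend forward
        log.append(("back_pair", "charged"))    # back legs bend, body shifts
        log.append(("front_pair", "released"))  # front legs relax
        log.append(("back_pair", "released"))   # back legs relax
    return log

events = step_cycle(3)
print(len(events))  # 12 actuation events for 3 steps
```

The point of the sketch is only that walking reduces to a timed sequence of charge/release signals, which is exactly what makes the robots electronically controllable.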

A paper describing this technology titled: “Electronically integrated, mass-manufactured, microscopic robots,” has been published in the August 26 edition of Nature.

The future applications of this technology are immense. The electronically controlled robots are about the size of a paramecium, so one day, when they are more sophisticated, they could be inserted into the human body to carry out tasks like clearing clogged veins and arteries, or even analyzing the human brain. This first production also provides a template for more complex versions to come. The initial microscopic robot is a simple machine, but imagine how sophisticated and computationally complex it could become once fitted with advanced electronics and onboard computers. Furthermore, producing the robots does not take much time or money, because they are silicon-based and the fabrication technology already exists. So we could see mass-produced robots like these used in technology and medicine to the benefit of the human race; the economics alone make the benefits immense.

“Controlling a tiny robot is maybe as close as you can come to shrinking yourself down. I think machines like these are going to take us into all kinds of amazing worlds that are too small to see,” said Miskin, the study’s lead author.

The frontiers of nanobot technology are expanding by the day. With mass-produced robots like these on the market, I see a solution in the offing for various medical and technological challenges. This is an innovative nanobot.

Material for this post was taken from the Cornell University Website.

Light Trapping Nano-Antennas That Could Change The Application Of Technology

Travelling at about 186,000 miles per second, light is extremely fast. Even Superman, the fastest creature on Earth, cannot travel at the speed of light. Humans have shown many times that they can control the direction of light by passing it through a refractive medium. But is it possible to trap light in a medium and change its direction, much as you can trap sound in an echo chamber? Until now that possibility was theoretical, but new research has shown that it could be practical. Since light is used for information exchange and so many other applications, the ability to control light, trap it, or even change its direction could have numerous applications in science and technology.

outline from light trapping device
 

In a recent paper published in Nature Nanotechnology, Stanford scientists working in the lab of Jennifer Dionne, an associate professor of materials science and engineering at Stanford University, demonstrated an approach to manipulating light that can significantly slow it down and also change its direction at will. The researchers structured silicon chips into fine nanoscale bars, and these bars were used to trap light. The trapped light could later be released or redirected.

One challenge the researchers faced was that the silicon chips behave like transparent boxes. Light can be trapped in a box, but it is not easy to do when the box is transparent and light is free to enter and leave at will.

Another challenge the researchers faced was manufacturing the resonators. The resonators consist of a silicon layer atop a wafer of transparent sapphire. The silicon layer is extremely thin, yet it traps light very effectively and efficiently. It was chosen because it has low absorption in the near-infrared spectrum, the region of light the scientists were interested in. This region is very difficult to work in due to inherent noise, but it has useful applications in the military and technology industries. Underneath the silicon sits the transparent sapphire wafer. A nano-antenna was then constructed on this sapphire using an electron microscope “pen”. The difficulty in etching the pattern is that any imperfection will make it hard for the antenna to direct light, because the sapphire layer is transparent.

The experiment would fail if the silicon box allowed any light to leak out, so leakage had to be ruled out entirely. Designing the structure on a computer was the easy part; the researchers discovered that the real difficulty lay in manufacturing the system, because of its nanoscale structure. Eventually they settled on a trade-off: a design that gave good light-trapping performance but could still be built with existing manufacturing methods.

The usefulness of the application

Over the years, the researchers tinkered with the design of the device in pursuit of high quality factors, believing the application could have important ramifications in the technology industry if made practical. A quality factor is a measure describing the resonance behavior involved in trapping light; in this case it is proportional to the lifetime of the trapped light.

According to the researchers, the quality factor demonstrated by the device was close to 2,500. Compared with similar devices, the experiment was very successful: that figure is two orders of magnitude, roughly 100 times, higher than previous devices achieved.
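To get a feel for what a quality factor of 2,500 means, it can be converted into a photon lifetime, since the quality factor is proportional to how long the light survives in the resonator. The wavelength of 1.5 µm below is a representative near-infrared assumption, as the article does not give the exact operating wavelength.

```python
import math

# Photon lifetime from quality factor: tau = Q / omega_0, where
# omega_0 = 2*pi*c / wavelength is the resonance angular frequency.
C = 3.0e8            # speed of light, m/s
WAVELENGTH = 1.5e-6  # assumed near-infrared wavelength, m
Q = 2500             # quality factor reported for the device

omega_0 = 2 * math.pi * C / WAVELENGTH  # resonance frequency, rad/s
tau = Q / omega_0                       # photon lifetime, seconds

print(f"photon lifetime: {tau * 1e12:.2f} ps")  # roughly 2 picoseconds
```

A couple of picoseconds sounds short, but at near-infrared frequencies it corresponds to thousands of oscillations of the light field inside the resonator.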

According to Jennifer Dionne at Stanford University, by achieving a high quality factor the team has positioned the device well for practical use in many technology applications. These include quantum computing, virtual and augmented reality, light-based Wi-Fi, and the detection of viruses such as SARS-CoV-2.

An example of how this technology could be applied is biosensing. A biosensor is an analytical device for detecting biomolecules that combines a biological component with a physicochemical one. A single molecule is so small that it is essentially invisible, but if trapped light is passed over the molecule hundreds or even thousands of times, the chances of producing a detectable scattering effect increase, making the molecule discernible.
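The multi-pass advantage can be illustrated with a simple probability estimate: if each pass has a small chance of producing a detectable scattering event, repeated passes compound that chance. The per-pass probability and pass count below are made-up illustrative numbers, not measurements from the study.

```python
# If each pass of trapped light over a molecule has a small probability
# p of producing a detectable scattering event, then after N independent
# passes the chance of at least one event is 1 - (1 - p)**N.
p_single = 1e-4  # assumed per-pass detection probability
n_passes = 2500  # assumed number of passes of the trapped light

p_total = 1 - (1 - p_single) ** n_passes
print(f"single pass: {p_single:.4%}, after {n_passes} passes: {p_total:.1%}")
```

Under these assumed numbers, a one-in-ten-thousand chance per pass grows to better than one in five overall, which is the essence of why trapping the light makes a single molecule detectable.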

According to Jennifer Dionne, her lab is working on applying the light-trapping device to the detection of Covid-19 antigens and the antibodies produced by the body. Antigens are molecules produced by viruses that trigger an immune response, while antibodies are proteins produced by the immune system in response to those antigens. The ability to detect a single virus, or very low concentrations of many different antibodies, comes from the light–molecule interaction created by the device. The nanoresonators are designed to work independently, so that each antenna can detect a different type of antibody simultaneously.

The areas of application of this technology are immense. Only time will tell what becomes possible once other scientists start experimenting with what was discovered. I think this innovation is a game changer.

Material for this post was taken from the Stanford University website.
