
An Innovative AI-powered Computer Vision And Gesture Recognition System

How does the brain interpret what we see, and how can computers be made to mimic the natural workings of the human brain when it comes to sight? That is the question that computer vision technologies seek to answer. Today, many technologies use computer vision in artificial intelligence, or AI. These AI systems rely on neural networks and have to process a large amount of data in a very short space of time. Many AI-powered computer vision systems have been introduced into the market, and they are being used in high-precision surgical robots, in health monitoring equipment, and in gaming systems. Heard of the Google Cloud Vision API? That is one example. But engineers want to go beyond these computer vision applications. They want AI-powered computer systems to recognize human gestures so as to complement their visual capabilities. That is why gesture recognition technology has become a hot topic in computer vision and pattern recognition.

artificial intelligence computer vision and gesture recognition system

The drive to create AI systems that recognize hand gestures came from the need to develop computer systems and devices that can help people who communicate using sign language. Early systems used neural networks to classify signs from images captured by smartphone cameras and convert the pictures into text; they combined computer vision with image processing. But AI systems have grown more advanced and more precise since those humble beginnings. Today, many systems seek to improve on visual-only AI recognition by integrating input from wearable sensors. This approach is known as data fusion.

Data fusion is the process of integrating multiple data sources into a computer system, making the system more reliable and accurate than if the data came from a single source. AI-powered computer vision systems incorporate data fusion using wearable sensors that recreate the skin's sensory ability, especially its somatosensory functionality. This has enabled computer systems to recognize a wide variety of objects in their environment, increasing their functionality and usefulness. But there are still challenges that hamper the precision and growth of these systems. One is that the quality of data from wearable sensors is low, because the wearable sensors produced so far are bulky and sometimes have poor contact with the user. Also, when objects are visually blocked or the lighting is poor, the ability of these AI-powered systems is reduced. Another area that has been troubling engineers is how to efficiently merge the data coming from the visual and the sensory signals. The result has been inefficiently combined information and slower response times for gesture recognition systems.

In an innovative approach that is said to solve many of these challenges, a team of researchers at Nanyang Technological University, Singapore (NTU Singapore) has created an AI data fusion system that draws its inspiration from nature. The system uses skin-like stretchable sensors made from single-walled carbon nanotubes, and its AI approach closely mimics the way the skin's signals and human vision are handled together in the brain.

How the NTU artificial intelligence gesture recognition system works

The NTU bio-inspired AI system is based on the combination of three neural network approaches: 1. a convolutional neural network used for early visual processing, 2. a multi-layer neural network used for early somatosensory information processing, and 3. a sparse neural network that fuses the visual and the somatosensory information together.

Combining these three neural networks makes it possible for the gesture recognition system to process visual and somatosensory information more accurately and efficiently than existing systems.

The lead author of the study, Professor Chen Xiaodong from the School of Materials Science and Engineering at NTU, says that the system is unique because it draws its inspiration from nature and mimics the somatosensory-visual fusion hierarchy that already exists in the human brain. According to him, no other system in the gesture recognition field has taken this approach.

What makes this system particularly accurate in data collection is that the stretchable skin sensors used by the researchers attach comfortably to the skin. This makes the data collection process not only more accurate but also able to deliver a higher-quality signal, which is vital for high-precision recognition systems.

The researchers have published their study in the scientific journal "Nature Electronics".

High accuracy even in poor environmental conditions

As a proof of concept the bio-inspired AI system was tested using a robot that was controlled through hand gestures and then the robot was guided through a maze. It was discovered that the AI system was able to guide the robot through the maze with zero errors, compared to the six recognition errors from another visual recognition system. It then seems evident that this bio-inspired AI system is more accurate and efficient.

It was also tested under noisy and unfavorable lighting conditions. Even under these unfavorable conditions, the bio-inspired AI system maintained its high accuracy. When it was tested in the dark, it still worked efficiently, with a recognition accuracy of 96.7%.

The authors of the study said that the success of their bio-inspired AI system lies in its ability to combine and cross-check the visual and somatosensory information at an early stage, before any complex interpretation is carried out. This makes it possible for the system to collect coherent information with low data redundancy, low perceptual ambiguity, and better accuracy.

Promise of better things to come

This innovative study shows promise for the future. It suggests that humans are one step closer to a world where we could efficiently control our environment through a gesture. The applications that could be built on such a technology are endless, and it promises to create a vast amount of opportunity in industry. Examples include remote robot control in smart workplaces and exoskeletons for the elderly.

The NTU team is aiming to use its system to build virtual reality (VR) and augmented reality (AR) systems, because the system is most useful in areas where high-precision recognition and control are required, such as the entertainment and gaming industries.

Material for this post was taken from a press release by the Nanyang Technological University, Singapore.

Python List And Sequence Comparisons And Sorting Based On Lexicographic Orders

According to the documentation, comparing sequences is done based on lexicographic order. That is just a way of saying that comparisons between sequences of letters are based on dictionary order, while comparisons between sequences of integers are based on their position on the number line. Comparisons can be made using the less than operator, <, the greater than operator, >, or the equal to operator, ==. It gets really interesting when you are dealing with sequences that mix letters and numbers. These comparisons, and many others, are what we will be discussing in this post. We will also show that the python list sort method and the python sorted function are based on comparisons.

Colorful drinks sorted like python lists

Note that these comparisons are Booleans. That means, they give you True or False when these items are compared.

Let us compare two lists in python and see how comparison works on sequences. When objects are compared for ordering, they must be of compatible types; if they are not, python will raise a TypeError (an equality comparison between unrelated types simply returns False).

  1. When the two python sequences are of the same length and type.
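
    A minimal example along these lines (the exact lists are illustrative) shows the idea:

    n = [1, 2, 3]
    m = [1, 2, 4]
    print(n < m)    # True, because 3 < 4 at index 2
    print(n > m)    # False
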
    The code above compares n and m, which are python sequences of numbers. You can see that they differ only in their last items. I used this example to show you that when python compares two sequences of the same type, each index is compared to the corresponding index until a mismatch is found, and then, based on the lexicographic order, one is found to be greater than, less than, or equal to the other. In the code above, n was less than m because index 2 in n, which is 3, is less than index 2 in m, which is 4. Indices start from 0.

  2. When the two python sequences contain items of different types.
    When the two sequences being compared contain items of different types, python will raise a TypeError. Note the code below.

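    Something like the following (the lists are illustrative) triggers the error:

    n = [1, 2, 'three']
    m = [1, 2, 3]
    print(n < m)    # TypeError: '<' not supported between instances of 'str' and 'int'
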
    When run, the above code raises a TypeError because string and integer types cannot be compared.

  3. When the two sequences are the same length and contain the same items of the same type.
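
    For instance (again, the lists are only illustrative):

    n = [1, 2, 3]
    m = [1, 2, 3]
    print(n == m)    # True: every index compares equal
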
    When you run the code, you will see that they compare equal. Python walks through the sequences and compares them index by index; in this case, all the items compare equal. But what if one list is shorter than the other and all the items compare equal? What does python decide? See the code below.

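    A sketch of that case:

    n = [1, 2]
    m = [1, 2, 3]
    print(n < m)    # True: n runs out first, so the shorter list is the lesser
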
    When the code above is run, you will see that python treats the shorter of the two sequences as the lesser one when they compare equal, index for index. In effect, the shorter sequence is a prefix of the longer one, so it is considered the smaller of the two.

I have used python lists in these examples, but you can use any sequence like a python string, tuple, or range.

Comparison of user defined objects

Can we take this notion of comparison to user defined objects? Yes, of course, provided your user-defined object has the appropriate comparison methods. In other words, provided it implements the __lt__(), __gt__(), or __eq__() special methods, you are good to go. Here is an example of how comparison could be done on user defined objects.

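A sketch of such a class (the class name Length comes from the discussion below; the attribute name value is just illustrative):

class Length:
    """A user-defined object that supports comparisons."""
    def __init__(self, value):
        self.value = value

    def __lt__(self, other):
        return self.value < other.value

    def __gt__(self, other):
        return self.value > other.value

    def __eq__(self, other):
        return self.value == other.value


first = Length(5)
second = Length(12)
print(first < second)     # True
print(first > second)     # False
print(first == second)    # False
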
When you run the code above, you can see that objects of the Length class can compare themselves even though they are not sequences.

This ability to overload native methods and python operators gives a programmer great power, and that power comes with enormous responsibility. One such power is the ability to use the concept of comparisons to carry out sorting. Python has two functions for that: the built-in sorted function and the list.sort method that comes with the python list class. Both work based on comparison to sort the items in sequences. We will be using the built-in sorted function since it is generic.

The python sorted function

The sorted function creates a new sorted list from any iterable. By default, it sorts based on lexicographic order and in ascending fashion. Take the following code for example.

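A simple example with an illustrative list of fruits:

fruits = ['mango', 'pawpaw', 'lettuce', 'orange', 'banana']
print(sorted(fruits))    # ['banana', 'lettuce', 'mango', 'orange', 'pawpaw']
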
When you run it, it sorts the list of fruits in dictionary or lexicographic order. The underlying mechanism at work is a comparison of each of the fruit items. That is why you could change the order of the sort. The sorted function has a reverse keyword argument that you can use to do that. By default, reverse is False but you can switch it to True to sort in reverse lexicographic order. Let’s do it.

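Using the same illustrative list:

fruits = ['mango', 'pawpaw', 'lettuce', 'orange', 'banana']
print(sorted(fruits, reverse=True))    # ['pawpaw', 'orange', 'mango', 'lettuce', 'banana']
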
After running the above, you can see that when I set the reverse argument to True, it sorted the items in the fruits list in reverse order.

There is also another keyword argument that is useful when you have items in tuples or a nested list and you want to specify which field to sort the items by. For example, if we have a list of tuples comprising names and ages, how do we sort the list so that the ages take precedence over the names in the sorting order? This is best done using the key keyword argument of the sorted function. In the code below, I will use a lambda function to specify what the key should be. Lambda functions are anonymous functions. The lambda function will sort, or compare, the items in the python list based on their ages.

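A sketch with an illustrative list of (name, age) tuples:

students = [('Rose', 25), ('Michael', 32), ('David', 20), ('Daniel', 32)]
print(sorted(students, key=lambda x: x[1]))
# [('David', 20), ('Rose', 25), ('Michael', 32), ('Daniel', 32)]
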
As you can see, ‘David’ who is 20 years old comes first in the list, followed by ‘Rose’ who is 25, then by the two other students, ‘Michael’ and ‘Daniel’, who are both 32. But there is a problem: the sorting is not yet complete. If Daniel and Michael are both 32 and compare equal on age, then naturally we should expect Daniel to come before Michael in the sorted list. So let's add one more power to our key. This time, we will tell the key argument to compare first by age and, if the ages are equal, to compare by name. The code below shows how it is done. The only difference from the code above is that I added x[0] to the expression in the lambda function; for each item in the list, x[0] is the name while x[1] is the age. Because the key combines the two fields, I cast the age to a string so it can be joined with the name.

Here is the code.
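
One way to write it, consistent with the description above (the tuple key noted in the comment is a more robust alternative):

students = [('Rose', 25), ('Michael', 32), ('David', 20), ('Daniel', 32)]
print(sorted(students, key=lambda x: str(x[1]) + x[0]))
# [('David', 20), ('Rose', 25), ('Daniel', 32), ('Michael', 32)]
# A key of lambda x: (x[1], x[0]) also works and does not rely on the ages having the same number of digits.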

We now have a well sorted list where ‘Daniel’ comes before ‘Michael’.

Let’s take this a bit further and give more power to sort any object, not just custom data structures like sequences. We could extend this power to our custom Length class that we described earlier. Let us be able to sort any sequence that has Length objects.

This is somewhat simple because I have already given Length objects the power to compare themselves. Remember, sorting depends on comparison. So, having this power, we can do sorting on length objects.

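A sketch, building on the illustrative Length class from earlier:

class Length:
    def __init__(self, value):
        self.value = value

    def __lt__(self, other):
        return self.value < other.value

    def __gt__(self, other):
        return self.value > other.value

    def __eq__(self, other):
        return self.value == other.value

    def __str__(self):
        return f'Length({self.value})'


lengths = [Length(12), Length(5), Length(30)]
for item in sorted(lengths):
    print(item)    # Length(5), then Length(12), then Length(30)
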
The only additions to the code above for the Length class are the __str__() special method, which gives us the ability to print out the values of the objects, and the call to the sorted function.

So, I encourage you to use this power with responsibility. Python gives you lots of power to do all sorts of things with your objects, even to compare and sort to your desire.

Do Face Masks Really Protect Against Covid-19 As Claimed?

Public health officials have run a protracted campaign to convince us that wearing a face mask prevents the spread of Covid-19, the latest pandemic. But is this true? That was the question on the mind of a Duke physician, Eric Westman, who was a champion for people putting on masks. He wanted to be sure he was recommending the right prevention technique to people and businesses, so he decided to carry out a proof-of-concept study. A proof-of-concept study aims to test whether a technique or method is really as effective as claimed; it is also used in science as a testing phase to verify the feasibility of a method, a technique, or even an idea. When scientists carry out an investigation, they often start with an idea like this one.

face masks like this are good against pandemics

Doctor Westman was working with a non-profit and needed to provide face masks to people. He was also skeptical about the claims that mask providers were making about how effective their masks were against a pandemic like Covid-19, so he went to a chemist and physicist at the university, Martin Fischer, Ph.D., and asked him to carry out a test on various face masks. The tests covered both surgical masks used in medical settings and cloth face masks. They also tested bandanas and neck fleeces, which some people claim can prevent the spread of Covid-19.

Fischer’s line of work usually involves exploring the mechanisms involved in optical contrast while doing molecular imaging studies. He was intrigued by the doctor’s challenge so he set out to help him. For the study he used materials that were freely available; something that can easily be bought online. These materials include a box, a laser, a lens, and a cell phone camera.

The study proved positive: it showed that face masks are effective at preventing the spread of Covid-19. The researchers recently published their study in the journal "Science Advances". Despite being a low cost technique, it helped to prove that face masks stop droplets coming out of the mouth while speaking, sneezing, or coughing from being transmitted from one person to another. They reported that while carrying out the study they could see that when people speak to each other, cough, or sneeze, droplets are passed from one person to the other. They also confirmed that not all masks are equally effective at preventing the spread of droplets; some face coverings performed better than others.

So how did the masks compare? They tried the proof-of-concept setup on various masks and compared their effectiveness. They found that the best masks were the N95 masks used in medical settings. They also found that surgical masks and masks made of polypropylene were highly effective at blocking droplets. Face masks made from cotton allowed some droplets through but still provided good coverage, eliminating many of the droplets produced when people speak. Overall, it was shown that bandanas and neck fleeces should not be used or recommended as face coverings, because they were ineffective at blocking the spread of droplets.

When the physicist was asked if this was the final word on the subject, he replied in the negative: more studies need to be carried out, because this is just a demonstration of how the effectiveness of various face masks can be measured. The study was done to help businesses see that they can carry out these tests themselves before investing in any type of face mask or face covering.

When asked on the benefits of the study, Westman, who was inspired to start it, said that many people and businesses have been asking him about how they could test all these face masks that were new in the market. So he decided to show that businesses could carry out the tests themselves with very simple materials. He said that the parts for the testing were easily purchased online and they were putting out this information to help others.

As they hoped, they have shown that various face coverings that were being promoted by public health officials were indeed effective in preventing the transmission of molecular droplets from one person to the other.

Although this is just a proof-of-concept and not a rigorous testing technique, one can confidently recommend the use of face masks to individuals and businesses because they really work in preventing the spread of covid-19. My advice to everyone is to stay safe, wear a face mask, and help stop the spread of covid-19. We will see an end to this soonest. Please, carry out social distancing and regular hand washing to prevent the spread of this current pandemic.

Material for this post was provided by the dukehealth.org website.

Application Of The Built-in Python Enumerate Function

python enumerate function

How many times have you wanted to loop through a list or tuple while keeping count of the iterations, and ended up doing it with a for-loop over indices? For beginners, I think the answer is many times. But that is not pythonic. For example, I often notice this code among python programmers who are not aware of the built-in python enumerate function: an antipattern consisting of a range over the length of a list, using the running index to reach back into the list.


fruits = ['mango', 'pawpaw', 'lettuce', 'orange', 'banana']
for i in range(len(fruits)):
    print(i, fruits[i])

Please, if you are doing this, stop: it is not pythonic, it makes your code harder to read, and it leaves you vulnerable to typing errors.

To prevent code like this, python has a nice function that does it elegantly called the python enumerate function.

The python enumerate function

The syntax of the python enumerate function is enumerate(iterable, start=0). The function accepts an iterable as its first positional argument and a start value for the counter. The default start value is 0, but you can specify a starting value of your choice. When you enumerate an iterable, it gives you an enumerate object. The benefit of such an enumerate object is that it is an iterator that yields a counter together with each item, so you can consume it with a for loop. We will get to that later. But let's see how it works with an example of enumerate.

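For example, with an illustrative list of fruits:

fruits = ['mango', 'pawpaw', 'lettuce', 'orange', 'banana']
print(enumerate(fruits))    # <enumerate object at 0x...>
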
If you run the above code, you will find that the python enumerate function gives an enumerate object. Now, let’s loop over the enumerate object.

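A sketch, reusing the same illustrative list:

fruits = ['mango', 'pawpaw', 'lettuce', 'orange', 'banana']
for counter, fruit in enumerate(fruits):
    print(counter, fruit)    # 0 mango, 1 pawpaw, 2 lettuce, ...
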
In the above code, I used the default start of 0 and you can see that when the counter was printed, the counter started from 0. We can tweak that feature to get a starting counter of our choice. Now for some examples of enumerate function.

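For instance, starting the counter at 1 instead of 0:

fruits = ['mango', 'pawpaw', 'lettuce', 'orange', 'banana']
for counter, fruit in enumerate(fruits, start=1):
    print(counter, fruit)    # 1 mango, 2 pawpaw, 3 lettuce, ...
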
So, you can see how powerful this little known feature in the python programming language is. The enumerate function gives all iterables an advantage the python dictionaries already have, which is an index notation that is compact and reliable.

So, the next question is: what can we do with the python enumerate function?

The simple answer is a lot.

Application of the python enumerate function

  1. Add a counter to a list or iterable.

    Just as in the example I gave above, there are lots of times we want to add a counter to a list or tuple, especially if it is a large list or tuple. This handy function makes it possible. You can use the counter as a key to get an item in the iterator that you want to use. For example, you have an extremely long list of cars and you want to be able to know them by number. You could use enumerate to give them a count value starting from any number and then retrieve each car based on the counter.

    For example:

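    The list of cars below is only illustrative:

    cars = ['bmw', 'toyota', 'honda', 'ford', 'tesla']
    for number, car in enumerate(cars, start=1):
        print(number, car)    # 1 bmw, 2 toyota, 3 honda, ...
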
    What if we wanted to know the first, or third car?
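
    One way to answer that is to build a dict from the enumerate object (just one possible approach):

    cars = ['bmw', 'toyota', 'honda', 'ford', 'tesla']    # same illustrative list as above
    numbered_cars = dict(enumerate(cars, start=1))
    print(numbered_cars[1])    # bmw
    print(numbered_cars[3])    # honda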

  2. Convert to a list or tuple.

    We could convert the python enumerate object which is an iterator to a list or a tuple and use the handy functions of python lists or tuples. It is so easy to do. Just use the enumerate object as the argument to the list or tuple and it is easily done. When you have the enumerate object as a list, you can then use python list functions on them instantly.

    Here are some code examples.

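    A sketch along those lines (the cars list is illustrative):

    cars = ['bmw', 'toyota', 'honda', 'ford', 'tesla']
    cars_list = list(enumerate(cars, start=1))
    print(cars_list)         # [(1, 'bmw'), (2, 'toyota'), (3, 'honda'), (4, 'ford'), (5, 'tesla')]
    print(cars_list[-1])     # (5, 'tesla') -- the last item
    print(len(cars_list))    # 5 -- the number of cars
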
    You can see from the above that I used the index of the cars_list list to get the last item in the list of cars, and then used the len function of the list to find out the number of cars in the list.

You can read about the rationale for the enumerate function from PEP 279 and its documentation at python.org.

Python Functions That Add Items To Python Lists And Their Applications

In an earlier post, I discussed how to remove items from python lists. Today, we will expand on the concept of lists and python list functions by discussing how to add items to python lists. In doing this, we are going to use two functions: python's append to list function and python's extend list function. The two do the same job of adding objects to a list, but they are not twins: they add very different objects and use very different concepts.

python add to list

First we will start by discussing the python append to list function.

Python append to list function.

The syntax for this function is list.append(x). What it says is that you are going to be adding an item, x, to the end of the list. When you call the python append list function and give it an argument, x, where x is an object, it just adds x to the end of the list. Any object can be added to the end of the list, even sequences, but they are added just as one item.

Let us give examples of the python append to list function.

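A sketch with an illustrative list:

fruits = ['mango', 'orange']
fruits.append('banana')
print(fruits)    # ['mango', 'orange', 'banana']

fruits.append(['apple', 'pawpaw'])    # a whole list is appended as ONE item
print(fruits)         # ['mango', 'orange', 'banana', ['apple', 'pawpaw']]
print(len(fruits))    # 4
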
You can see from running the code above that no matter the length of the object being appended or the nature of the object, it is treated as a single item. So, whenever you want to add to a list and you want to treat that object as a single item in the list, you should use the python append to list function.

Python extend list function.

The syntax of the python extend list function is list.extend(iterable). The function takes an iterable, iterates through each of the items in the iterable, and adds them to the list. It mutates the original list based on the number of items in the argument; in effect, the extend function acts like a concatenation onto the original list. Therefore, you could say that while the append function increases the length of the list by 1, the extend function increases it by the number of items in the iterable.

A picture is worth a thousand words. So, let’s illustrate the concept using examples.

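Using the same illustrative objects as with append:

fruits = ['mango', 'orange']
fruits.extend('banana')    # a string is an iterable, so it is added character by character
print(fruits)    # ['mango', 'orange', 'b', 'a', 'n', 'a', 'n', 'a']

fruits = ['mango', 'orange']
fruits.extend(['apple', 'pawpaw'])    # each item of the list is added separately
print(fruits)         # ['mango', 'orange', 'apple', 'pawpaw']
print(len(fruits))    # 4
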
I used the same examples for both the python append to list and python extend list functions just to help you better understand their functionality. You could see that for the cases, the python extend list function gives lists of longer length.

These two python list functions are not the only way you can add items to a python list. There are also overloaded operators we could use.

Using overloaded operators to add to python lists.

Operator overloading or function overloading is when the same built-in operator like + or * have different behaviors for different objects or classes. You might have noticed this earlier in your python journey. For example, adding integers, 2 + 3, gives a different behavior for the + operator from adding strings, ‘mango’ + ‘banana’.

We will discuss how the + and += operators are used to add iterables to lists. They are both semantically similar to the python extend list function. When using +, the second operand must be another list; += accepts any iterable. Otherwise, python will raise a TypeError.

Here is some code to show how these overloaded operators work.

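A sketch (the lists are illustrative):

fruits = ['mango', 'orange']
more = ['apple', 'pawpaw']

print(fruits + more)    # ['mango', 'orange', 'apple', 'pawpaw'] -- a new list

fruits += ('banana',)   # += accepts any iterable, here a tuple
print(fruits)           # ['mango', 'orange', 'banana']
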
So you now have at your arsenal methods to add items to python lists.

Happy pythoning.

To cap it all, note that the amortized complexity of the python append to list function is O(1), i.e., constant, while that of python extend list is O(k), where k is the length of the iterable that is being used to extend the original list.

From Carbon Dioxide To Liquid Fuel With New, Cheaper, Efficient, And Innovative Technique

Human activity has been impacting the environment in negative ways. The greenhouse effect, which produces climate change by trapping the sun's energy in the atmosphere, is caused by extra carbon dioxide in the atmosphere that is not removed by the photosynthesis of green plants. This extra carbon dioxide causes global warming. Climate change mitigation, meaning actions and strategies to reduce the magnitude and rate of global warming, is now very popular. Several approaches have been proposed to remove this extra carbon dioxide, a process called carbon sequestration. Today, we will focus on a new innovation that not only promises to help the environment but is also commercially viable.

Scientists at the U.S. Department of Energy's (DOE) Argonne National Laboratory, in collaboration with Northern Illinois University, have undertaken research that realized a way not only to remove carbon dioxide from the environment, but also to break it down and use it to manufacture ethanol.

Carbon dioxide to fuel

The discovery involves using catalysts, specifically electrocatalysts under low voltage. A catalyst is a substance that increases the rate of a chemical reaction without being consumed in the reaction itself, and electrocatalysts are catalysts that function at electrode surfaces or may be the electrode itself. The electrocatalyst used by the researchers was copper, atomically dispersed on a carbon-powder support. This copper was used to break down trapped carbon dioxide and water molecules, which were then selectively reassembled into ethanol under an external electric field. When the efficiency was measured, the electrocatalytic selectivity of the process was found to be 90 percent, much better than existing techniques for converting carbon dioxide to ethanol. Furthermore, it was found to operate stably at low voltages over extended periods of time. The researchers also say the cost of the process is reasonable.

So one may ask: why convert carbon dioxide and water to ethanol? Because ethanol is widely used in the U.S. It is blended into most gasoline and is an ingredient in many personal care and household products. Industries also need ethanol to manufacture a host of products that provide benefits to other industries and to people.

This is not the first time though that carbon dioxide will be converted into ethanol. But this method is more efficient and more cost-effective. Furthermore, the researchers say it is more stable than other previous methods. According to Tao Xu, a professor in physical chemistry and nanotechnology from Northern Illinois University, this process would open the doors to technology that would convert carbon dioxide electrocatalytically not only to ethanol, but to a vast array of other industrial chemicals.

So what are the benefits of removing carbon dioxide from the environment? The benefits are immense. Reusing carbon dioxide to manufacture ethanol would provide raw materials for industries making use of this ethanol at a cheaper cost. It reduces the increase in global temperatures. Presently, the world is working to make sure global temperatures do not exceed the two degrees Celsius mark. This approach will contribute its share. Also, greenhouse gases are being removed from the environment, helping to slow down climate change. Greta Thunberg, the Swedish teenage climate activist, would be happy to promote this technique. Also, since this approach has an efficient and reasonable cost, it will confer a lot of benefit to fossil fuel industries and alcohol fermentation plants who emit a lot of carbon dioxide annually into the atmosphere. They could derive some revenue by converting that carbon dioxide into ethanol. Furthermore, lots of jobs and career options would be created in the process in the U.S and around the world if this efficient technique is implemented.

The research has been so successful that the researchers are in collaborative talks with industry to start producing ethanol. According to Di-Jia Liu, a senior chemist in Argonne's Chemical Sciences and Engineering division and one of the authors, they plan to collaborate with industry to advance this promising technology. There are also plans to produce several other catalysts.

Material for this post was taken from Argonne National Laboratory press release.

From Zero To Hero In Handling Python Exceptions

No code is perfect. No matter how expert you are at programming, once in a while an error will occur. For beginners, this is even more common. One class of errors that beginners encounter is syntax errors. These are errors that occur because the programmer did not follow one of the rules of the language. In whatever IDE you are working with, syntax errors are usually highlighted so that the programmer can learn from them. We will not be talking about syntax errors in this post. Rather, our focus will be on another type of error that occurs in programming: exceptions.

python exceptions

Exceptions are errors that are detected during the execution of the program. They are different from syntax errors. They disrupt the normal operation of the program and, while not necessarily fatal, tend to make the program produce results the programmer did not intend. In python, there are standard ways of handling exceptions, and we will discuss those ways.

All exceptions in python have a type, and that type derives from a parent class, the BaseException class. If you go to the python documentation page on exceptions, you will realize that there are so many documented exceptions in python and ways to handle them. You can take a look at the various ones. But we will be discussing on only a representative few.

As we have said before, when code contains lines that would cause unusual execution in the program, such as asking for a division by zero, an exception will be raised. When an exception is raised and not handled, python stops execution of the program. Therefore, it is in our best interest to catch and handle exceptions if we want our programs to run to the end.

Let’s look at some examples of common exceptions you must have already encountered in your programs: ZeroDivisionError and IndexError.

A ZeroDivisionError is raised when the denominator of a division or modulus operation is zero. Because you cannot divide by zero, this is an error in the code that has to be handled, otherwise python will stop execution of the program and print out a stack traceback report, indicating where the exception happened and the type of the exception. Let's take an example to illustrate this. Just run the code below.

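A sketch of such a snippet (the list of denominators is illustrative):

denominators = [5, 4, 2, 0, 10]
for d in denominators:
    print(20 / d)    # raises ZeroDivisionError when d is 0
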
The code has a list of denominators we want to use to divide the numerator, 20. It runs well until the denominator becomes 0 and we run into an exception. Because the exception is not handled, python stops execution of the program and prints out a stack traceback of the exception so the programmer can understand where it occurred.

Now, let’s talk about the IndexError exception. IndexError exceptions are raised when a sequence subscript is out of range. Remember that the index for a sequence starts from zero to length of the sequence minus 1, so if we call for an index that is out of that range, an IndexError exception is raised. Here is an example.

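For instance, with an illustrative list of three fruits:

fruits = ['mango', 'orange', 'banana']
print(fruits[3])    # IndexError: list index out of range
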
When we called for the fourth fruit, it raised an error which was not handled and the program had to stop execution. Notice the message in the stack traceback. It gives the type of exception and what caused the exception to happen.

How to handle exceptions in python.

When you notice that a line of code might result in an error, you can handle it using that try…except…else…finally blocks. I will describe each of them in turn. But here is how they would be arranged in code.


try:
    # place relevant code that you expect
    # will result in an error here
except ExceptionErrorToCatch:
    # place code to handle the exception here
else:
    # place code for what to do after 
    # the code runs successfully at 
    # try block
finally:
    # place clean up code here 

The try block:

The try block is where you place the code that can raise an exception. More than one type of exception can occur at the try block but they can all be handled at the except block area.

The except block:

The except block is where you place the code that will handle the exception. After the except statement, you state the exception error that you want to handle or catch. It could be a ZeroDivisionError, IndexError, AttributeError, etc. You could decide to handle each exception separately, which means you can write more than one except statement; but if you decide to use the same code to handle all the exceptions coming from the try statement, then you list the exceptions you are catching one after the other in a single except statement, separated by commas inside parentheses. For example, if your code could result in a ValueError or ZeroDivisionError, you could handle them in the following ways:


try:
    m = int(input()) #expect ValueError here
    n = 20/m         #Expect ZeroDivisionError here
except (ZeroDivisionError, ValueError):
    print('Exception handled here')    

You can see the both exceptions I was expecting were placed in a tuple. You use code like this to handle more than one exception at the same time. But if you want to handle each exception separately, you could use the code below:


try:
    m = int(input()) #expect ValueError here
    n = 20/m         #Expect ZeroDivisionError here
except ZeroDivisionError:
    print('ZeroDivisionError exception handled here')
except ValueError:
    print('ValueError exception handled here')        

Some people use a generic Exception clause when handling exceptions and I would encourage you not to do so. When handling exceptions be specific as to the type of exception you want to handle.

Note: when the statements in the try block are run, if no exception occurs, then the except block is not triggered, instead it is skipped. And that is when you come to the else block.

The else block:

In case you want to run some code only if no exception was raised in the try statement, you would write that code in an else block. The else block executes when no exception was raised in the try statement. But if an exception is raised, python uses the except blocks to try to handle the exception. Here is some code that illustrates the use of the try…except…else statements. It will ask you to input a number of your choice when you run it.

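A sketch of such a snippet (the prompt and messages are illustrative):

try:
    m = int(input('Enter a number: '))
    n = 20 / m
except ZeroDivisionError:
    print('You cannot divide by zero.')
except ValueError:
    print('That was not a number.')
else:
    print('20 divided by your number is', n)
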
Try running the above code with the following scenarios to see how the try…except…else statements work. When the program asks for your input: 1. Enter a zero to see how the exception will be handled. 2. Enter something that is not a number, like the string 'mmj', to see how it handles the exception where something that is not a number is entered. 3. Enter a valid number to see what is printed out. You will notice that the code in the else block is executed when a valid number is entered.

Finally, there is a fourth block we have not discussed, the finally block.

The finally block.

The finally block is an optional block. It is used to clean up resources after the try block and other blocks have been run. No matter what happens in the code, whether an exception is raised or not, the finally block is executed last before the program leaves the exception handling area. Note that if an exception occurs in the try block and is not handled, python will re-raise the exception after it has run the finally block. An exception can also occur during execution of the except clause; it too will be re-raised after the statements in the finally block have executed.

Let’s demonstrate an example of how finally block can be used.

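For instance, extending the illustrative snippet above with a finally block:

try:
    m = int(input('Enter a number: '))
    n = 20 / m
except ZeroDivisionError:
    print('You cannot divide by zero.')
except ValueError:
    print('That was not a number.')
else:
    print('20 divided by your number is', n)
finally:
    print('This line runs whether or not an exception was raised.')
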
Try to run the scenarios again that I gave in the else paragraph to see how the finally block will be run. It is executed every time, whether an exception is raised or not. Whenever you are using up resources, or opening and writing to files, you use the finally block to do clean up of those resources.

The Arguments of an Exception:

Sometimes, we might want to retrieve the additional information that comes with an exception. That is when we bind the exception to a variable when specifying the exception(s) in the except block. You state this variable after the name of the exception, using the keyword as. In the code below, I used as to separate the exception name from the variable, and I then referenced that variable in the exception handling code.
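
A sketch (the list and messages are illustrative):

fruits = ['mango', 'orange', 'banana']
try:
    print(fruits[3])
except IndexError as err:
    print('Something went wrong:', err)    # err carries the message 'list index out of range'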

Raising an Exception

What if, as a programmer, you want to raise an exception when a condition is breached. For example, instead of waiting for python to detect the exception in the division by zero case above, you want to check if the user inputted zero yourself. You can raise an exception to handle that error yourself. You raise an exception by calling the raise statement like this: raise ExceptionName(argument). The argument could be optional.

Here is an example:

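A sketch laid out so that the line numbers referred to below match up (the messages are illustrative):

m = int(input('Enter a number: '))
try:
    if m == 0:
        raise ZeroDivisionError('You entered zero!')
except ZeroDivisionError as err:
    print('Exception handled:', err)
else:
    print(20 / m)
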
In line 3, I checked on the condition that the user entered 0 and if that is the case, I raised an exception which was handled by the except block in lines 5 and 6. Notice that I supplied an argument to the exception when I raised it and that argument was retrieved when the exception was handled for informative debugging. Try running the code above using two scenarios: 1. Enter 0 on input to see how it will run. 2. Enter a number on input to see how it skips the conditional in the try statement without raising an exception and goes to the else block. Please, don’t enter a non-number; I'm not handling that exception.

So, you have all you need to work with exceptions. I wish you get creative and start experimenting with them. In case you want further information on exceptions and how to handle them, you can reference the python documentation page on exceptions here.

5 Interesting Ways To Remove Items From A List.

In python, a list is an ordered sequence of elements that is mutable, or changeable. Each element in a list is called an item. Although the types of items in a list can vary, most times you will find that the items in a list are of the same type. You denote a list by having the items enclosed between []. The items in a list could be mutable or immutable data types like strings, integers, floats, or even lists themselves in a nested list structure.

python list

Lists have several functions that could be used to operate on the items like adding items to a list, reversing the items in a list, and sorting the items in a list according to defined keys or order. In this post, we will be discussing five ways you can use to remove an item or items from a list.

The different ways are divided into two groups: removing a single item from the list and when you want to remove more than one item in a single operation.

Removing a single item.

In this section I will mention two ways you can use just to remove a single item. They are designed for just that.

list.remove(x):

The remove function comes with the list class in python. It removes the first occurrence of the item x from the list, even if the item occurs more than once, and it does not return anything.

Now for some code to demonstrate it:

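A sketch (the list is illustrative, with 'mango' appearing twice):

fruits = ['mango', 'orange', 'banana', 'mango', 'apple']
fruits.remove('mango')
print(fruits)    # ['orange', 'banana', 'mango', 'apple'] -- only the first 'mango' is gone
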
You can see that I asked the remove function to remove ‘mango’ from the list of fruits and it removed the first occurrence of the fruit, leaving the second behind. If there were no item like mango in the list, the remove function would have raised a ValueError.

list.pop([i]):

This function gives you the ability to remove an item by index. You can remove only one item at a time, and with the pop method you do not remove by value but by index in the list. And remember, the indices of lists in python start from 0. In the syntax above, the item at index i is removed from the list. If you do not specify an index, pop removes the last item in the list, thereby acting as a stack, i.e., last in, first out. The function also returns the item you popped, giving you the added ability to reuse the item that was removed from the list.

Here is some code on the fruits list to demonstrate that.

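A sketch on an illustrative fruits list:

fruits = ['orange', 'banana', 'mango', 'apple']
last = fruits.pop()       # no index: removes and returns the last item
print(last)               # apple
second = fruits.pop(1)    # removes and returns the item at index 1
print(second)             # banana
print(fruits)             # ['orange', 'mango']
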
You could experiment with removing items from a list you created yourself using any of the two functions above.

Now we will go on to methods that gives you the ability to remove more than one item in a single operation.

Ways to remove more than one item from a list.

These methods can remove a single item or more than one item in a single operation. I have catalogued three ways that this can be done, using both functions and programming logic.

del object_name[index]:

The del operator can be used to remove any object in python. It can act on lists and other types in python. Unlike the earlier two functions, the del operator can be used to remove a single item from a list or a slice of items provided they exist consecutively. It does not return anything. When using del on a list, you must specify the list index of the item you want to remove or the slice you want to remove.

Here is some code illustrating the operation of the del operator.

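A sketch (the list is illustrative; the last line is expected to fail):

fruits = ['orange', 'banana', 'mango', 'apple', 'pawpaw']
del fruits[0]        # remove a single item by index
print(fruits)        # ['banana', 'mango', 'apple', 'pawpaw']
del fruits[1:3]      # remove a slice of consecutive items
print(fruits)        # ['banana', 'pawpaw']
del fruits           # delete the whole list object
print(fruits)        # NameError: name 'fruits' is not defined
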
Notice that when I completely deleted the fruits list and tried to reference it again, it gave a NameError. Fruits list does not exist any longer after applying del operator to the object. So, you can see that the del operator is a powerful tool. Use it at your discretion.

Using slicing logic to delete items:

We can use the slice operator, just as we did with the del operator above, to remove specific items from a list. I find this cumbersome sometimes, but I need to add it here. A slice is denoted as list_name[start:stop:step]. The stop and step are optional, and step defaults to 1. Note that indexing with a single value gives you back an item, not a list, but we need a list back in this operation, so we will be specifying both the start and the stop.

Here is some code.

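A sketch of two ways to use slices for this (the list and indices are illustrative):

fruits = ['orange', 'banana', 'mango', 'apple', 'pawpaw']
fruits = fruits[1:4]    # keep only indices 1 to 3; everything else is removed
print(fruits)           # ['banana', 'mango', 'apple']

fruits[0:2] = []        # slice assignment: removes the items at indices 0 and 1
print(fruits)           # ['apple']
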
Now, let’s move on to the last method for removing more than one item from a list. It involves using programming logic and list comprehension.

Programming logic method:

In this method, which I use often, you just move through the list and filter out the item you don’t need. That's all. I use list comprehension to do it, but you can also use for loop if you don’t understand list comprehension.

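A sketch that filters out every 'mango' (the list and the filtered value are illustrative):

fruits = ['orange', 'mango', 'banana', 'mango', 'apple']
fruits = [fruit for fruit in fruits if fruit != 'mango']
print(fruits)    # ['orange', 'banana', 'apple']
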
For those of you who don’t understand list comprehension, here is the same logic using a for loop. But list comprehension is more preferred because it looks more beautiful.

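The same filtering written with a for loop:

fruits = ['orange', 'mango', 'banana', 'mango', 'apple']
filtered = []
for fruit in fruits:
    if fruit != 'mango':
        filtered.append(fruit)
print(filtered)    # ['orange', 'banana', 'apple']
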
The for loop took four lines while list comprehension took only one line. Grrrh!

That’s all folks. I hope I have given you ideas on how you can remove items from your lists.

Happy pythoning.

The Secrets Of Random Numbers In Python – The Number Guessing Game Part 2

Yesterday, we explained in depth some of the functions that are included in the random module in python. Today, we will be applying that knowledge to build a simple guessing game. This is how the game goes.

The computer will think of a random number between 1 and 20 inclusive and ask the player to guess it. The player has 6 tries to guess it right. For each guess, the computer will tell the player if the guess is too high, too low, or the correct guess. After 6 tries, if the player has not correctly guessed the number, the game will end.

python random numbers

Interesting, not so? Well, here is the code for the game.

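A sketch of the game, laid out so that the line numbers in the walkthrough below line up with it (the exact messages are illustrative):

import random

number = random.randint(1, 20)
name = input('Hello! What is your name? ')
print(f'Well, {name}, I am thinking of a number between 1 and 20.')
num_guesses = 0
guessed_right = False

while num_guesses < 6:
    print(f'You have made {num_guesses} guesses so far. Take a guess.')
    try:
        guess = int(input())
    except ValueError:
        print('That is not a whole number.')
        num_guesses += 1
    else:
        if guess < 1 or guess > 20:
            print('Please guess a number between 1 and 20.')
            num_guesses += 1
        elif guess < number:
            print('Your guess is too low.')
            num_guesses += 1
        elif guess > number:
            print('Your guess is too high.')
            num_guesses += 1
        else:
            guessed_right = True
            break

if guessed_right:
    print(f'Good job, {name}! You guessed my number in {num_guesses + 1} guesses.')
else:
    print(f'Better luck next time, {name}. The number I was thinking of was {number}.')
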
Now, let’s explain the code line by line so you understand what is going on.

Line 1: To use the random number functions, we have to import the random module. So, that is what this does.

Line 3: using the random.randint function with arguments 1, 20, we are asking the computer to generate a random number between 1 and 20. Simple. I believe you understood this line.

Lines 4-7: The computer asks the player to input his or her name, initializes num_guesses, the counter we use to calculate how many guesses the player has made so far, and sets the guessed_right switch to False to indicate that the player hasn’t made the right guess yet.

Line 9: Using a while loop, we wait for the player to make at most 6 guesses until he guesses the number right or fails to guess it right. Now, here comes the interesting parts after this line.

Lines 10-28: This is the main logic of the game and it occurs inside the while loop. First the computer informs the player of how many guesses have been made so far and asks him or her to make a guess. We use a try-except-else clause to catch the guess entered. If the player enters a guess that is not a number, such as a float or string, that is an error and the program catches it as an error and penalizes the player for the error by increasing the number of guesses made by 1. But if the player enters a number, it skips through the try-except block and goes to the else block which starts at line 16. From here on we are checking to see what number the player entered. The first conditional checks for whether the number is within the acceptable range. If it is not, the program asks the player to enter a number within the acceptable range and penalizes him by increasing the guess count by 1. The next elif blocks checks if the number, which is within the range for acceptable numbers, is too low or too high. If either of these, the number of guesses count is increased by 1. But if these two are skipped that means the player guessed the right number and the guessed_right switch is set to True. The program then breaks out of the while loop.

Elegant! Not to mention beautiful.

Lines 30-33: This is the cleaning up code. Here after the loop exits the program checks whether the player guessed right or not. If he or she guessed right, the program prints a congratulatory message and tells him how many guesses he took. But if the player guessed wrongly, the program encourages the player to try again and then tells the player what the number was.

You could write your own code that implements the functions in the random module. It is a really useful module. I wish you success in doing so.

You can download the script for the game here if you want to run it on your own machine.

The Secrets Of Random Numbers In Python Part 1

python random numbers

When people think of random numbers, they think of something disorganized that cannot be predicted. That is what I thought until I entered the world of random numbers in python. You see, the random numbers python produces are not truly arbitrary; the name is deceptive. They are based on what are called seeds. A seed can be provided by the programmer, and if one is not provided, python uses the system time or a source of randomness supplied by the operating system.

If you decide to seed the random number generator yourself rather than allow the program to initialize it on its own, you call the random.seed function, supplying it with a seed of your choice. The seed is usually an int, although strings and bytes are also accepted. This could be done using code like this:

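A sketch (the seed value 30 is an arbitrary choice):

import random

random.seed(30)
print(random.random())

random.seed(30)          # re-seeding with the same value...
print(random.random())   # ...prints exactly the same number again
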
As you can see from running the above code, where we gave the random number generator the same seed twice, the same output was produced both times. That is why the name is deceptive. The numbers appear truly random in practice only because the programmer usually does not provide a seed and lets the generator seed itself.

By the way, the disclaimer from the makers of the python programming language is that the functions in the random module should not be used in cryptography.

Now let’s take a ride on some of the functions included in the random module.

The functions are divided into functions that manipulate integers, sequences, and real-valued distributions. We will concentrate on the first two in this article.

What are the functions that manipulate integers.

The two functions in the random module that manipulate integers are random.randint and random.randrange. They both do the same thing: they return an integer from a given range. Their syntax is random.randint(a, b) and random.randrange(start, stop[, step]). The difference between them is that random.randint includes the second argument, b, in the range of numbers to be considered, while random.randrange stops just before stop. Also, note that you can step through random.randrange using the step argument; for example, you could leave out some numbers in the range, just as you can with the built-in range function.

Overall, both of them serves the same function: give a random integer from a given range and conditions.

If you run the code below, you will be sure to get a random integer between 1 and 20 inclusive.
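
For example:

import random

print(random.randint(1, 20))    # an integer from 1 to 20, both ends included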

What are the functions that manipulate sequences.

The functions for manipulating sequences are four in number and they sometimes overlap. I will describe each of them and give code you can run.

random.choice(sequence):

This function returns a random element from the sequence, provided the sequence has items in it; otherwise it will raise an IndexError. I think this is simple enough. Now, the code.

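A sketch with an illustrative list of fruits:

import random

fruits = ['mango', 'pawpaw', 'banana', 'apple', 'orange']
print(random.choice(fruits))
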
Each time you run the code above, it will give you a chosen fruit. There are repeats too. So, don’t be surprised because the list of fruits is not exhaustive.

random.choices(population, weights=None, *, cum_weights=None, k=1):

This function returns a list of size k (the default is 1, note) drawn from the population, which could be a sequence or a range. If you do not want to apply any particular probability to the items that will be returned in the list, you leave the weights and cum_weights keyword arguments alone. But if you want to give some weight, or assign probabilities, to what can be chosen at random, you have to provide a value for weights or for the cumulative weights, cum_weights. Using the list from above, let us give a higher probability to apple and orange using the keyword argument weights=[5, 5, 5, 15, 15]. I will also tell the code to choose just two items from the list of fruits at random.

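Using the same illustrative list, with apple and orange as the last two items so that the higher weights fall on them:

import random

fruits = ['mango', 'pawpaw', 'banana', 'apple', 'orange']
print(random.choices(fruits, weights=[5, 5, 5, 15, 15], k=2))
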
If you run the code, you will see that you will have a high probability of having orange and apple in the returned list rather than the other fruits. One point to note is that weights cannot be negative.

random.shuffle(x[, random]):

This function takes as argument a sequence, x, and shuffles the items in place. You can provide an optional random function that could be used to shuffle the items. I think this is well explained using code. I will provide code that does not use the random function and one that does.

This code does not use the random function.

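A sketch, reusing the illustrative fruits list:

import random

fruits = ['mango', 'pawpaw', 'banana', 'apple', 'orange']
random.shuffle(fruits)
print(fruits)    # the same items, in a new random order on each run
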
Each time you run it, it randomly rearranges the items in the list.

Now this one uses a random function. I want you to run it more than one time and notice the arrangement of the items.

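A sketch of that variant (note that this optional random argument was deprecated in Python 3.9 and removed in 3.11, so it only runs on older versions):

import random

def rand_function():
    return 0.09    # any fixed float between 0.0 and 1.0

fruits = ['mango', 'pawpaw', 'banana', 'apple', 'orange']
random.shuffle(fruits, rand_function)
print(fruits)    # the same shuffled order every time it runs
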
Did you notice that the rand_function acted like a seed for the random.shuffle function? Each time the code runs, the list ends up in the same order. Very cool! You can play with it. Note that the random argument can be any function that returns a floating point number between 0.0 and 1.0. I used 0.09.

Now for the last function that manipulates sequences.

random.sample(population, k):

This function is used for random sampling without replacement. It returns a list of size k from the population. It is similar to the choices function but without the weights and without replacement. In random.choices, the returned list can have repeated items but not in random.sample. Now for some code.

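A sketch with the same illustrative list:

import random

fruits = ['mango', 'pawpaw', 'banana', 'apple', 'orange']
print(random.sample(fruits, k=3))    # three distinct fruits, no repeats
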
Run the above code repeatedly and see. The sample size, k, should not be larger than the population size otherwise python will raise a ValueError.

You can use this function to generate a random list of integers from a very large sample size using the range function as the population argument using code like this: random.sample(range(1000000), 100). Very cool and very efficient.

So, that is the secret to the random function. Now that you have learned what it is and how to use it, why don’t we play with it a little.

Tomorrow, I am going to post a little game that uses the random function. So, make sure to watch out for the game.

Happy pythoning.

7 Ways To Keep Yourself Motivated While Programming

Programming to be honest is not easy. It can be hard especially when you are stuck in a problem, or even when you have a bug and you don’t know a way out of it. I once wrote a project that had 400 lines of code and eventually there was a bug in it. I had to use bisection search to go through each of the lines of code to find out the bug. That one hour of work was very demotivating. You really need to be motivated to be a programmer, even while programming in python which is one of the easiest languages around.

People often ask me: “Nnaemeka David, what do you do to be motivated? Sometimes, it gets hard and frustrating?” So, I decided to write this post on some of the things one can do to get motivated while programming.

programming motivation

  1. Be disciplined.

    According to some dictionaries, discipline is the practice of obeying rules or a code of behavior. This skill is essential in programming because if you are not disciplined, you will run into a lot of obstacles in your programming. Discipline frees up your mind to think about solutions, rather than just rushing to put lines of code into an editor. Discipline makes you program defensively: you plan ahead for obstacles. It also involves understanding the conventions of python as a programming language and following them. They keep you motivated. The style guide for python code, sometimes called PEP 8, is a good resource for instilling discipline.

    When I learned programming from Professor Eric Grimm at MIT, (online course), he kept emphasizing on doing tests and setting yourself up for defensive programming before you begin writing code. I have seen the wisdom of that as time goes by. It helps me to debug; and bugs are something you will find all the time.

    If you are not disciplined, you will find yourself looking for a needle in a haystack. That is a very frustrating endeavor.

  2. Start with the bare minimum.

    Beginners to Python often ask: Can I learn Python in 3 months? What do I do to become a data scientist in the shortest possible time? I always tell anyone who asks to go with the flow. Don't force it. If you force it and try to learn everything in the shortest time, you will be disappointed. Start with the bare minimum. Increase your aptitude as time passes and your confidence grows. Otherwise, when three months are up and you discover you have not even scratched the surface of Python programming, you will lose motivation. It is similar to working out on a treadmill.

    On a treadmill, you don't start all at once with 3 hours at a go. First, you start with five-minute intervals and take breaks. Then, as your body gets used to the routine, you increase the time you spend. So it is with programming in Python or any other language. My advice to anyone who wants to take a fast-paced approach is that they might be setting themselves up for disappointment. Rather, take it slowly, one step at a time. Don't constrain yourself to a time limit; regularity is the secret to success.

  3. Try the Pomodoro technique.

    The Pomodoro technique is a time management method that encourages you to work with the time you have rather than against it. For example, say you can spare 30 minutes for programming: you spend that 30 minutes and set a timer to alert you when the time is up. When it is up, you take a break, and when you can spare another 30 minutes you continue again, setting the timer. Many programmers have confessed that the Pomodoro technique helps them to be productive. It can also help you.

    As programmers, we spend a lot of time in front of the computer, and burnout can easily set in, causing you to lose motivation. With the Pomodoro technique you are far less likely to burn out, because you are pacing yourself and taking breaks that help you get refreshed. The technique also helps boost your concentration and focus. There are plenty of apps that implement it, like the Focus Booster app that runs on both Windows and Mac.

    A programming friend of mine on a forum told me that he used the Pomodoro technique to learn 3 languages in 2 months. You can try it out yourself.

  4. Set goals.

    There is a well-worn saying that if you fail to plan, you are planning to fail. I have found that setting specific goals keeps me motivated while programming. But you don't want to set goals you cannot achieve, like going from zero to hero in Python in one month; you would be setting yourself up for failure. Set SMART goals, that is, goals that are specific, measurable, achievable, relevant, and time-bound. After you have achieved one goal, congratulate yourself and move on to the next. Do this and you will find Python programming, or any other programming task, very enjoyable and fun.

    When I started programming, one goal I set was to do one challenge on hackerrank.com every day. I succeeded in completing the 30 Days of Code challenge and went on to other challenges. On some days, I would really feel in high spirits, congratulating myself for moving on to the next level. Try it out yourself by signing up at hackerrank.com and taking a challenge. If you are a new programmer, take the basic challenge. There are lots of languages to choose from on the website.

  5. Projects, projects and lots of projects.

    To boost your confidence and set yourself up for the industry, you should never underestimate the value of doing projects. My advice is to do projects, more projects and lots of projects. Sign up at GitHub and look for projects to collaborate on. There are even projects on github.com that accept beginners; you could sign up for those if you are new to Python programming.

    If you don't have ideas on what projects would fit your level, just Google it. You will find all sorts of projects to choose from, from beginner to expert.

    If you keep going from tutorial to tutorial on your programming journey, you will be disappointed in record time. You need to practice what you learn. That is the secret to staying motivated.

  6. Love what you are doing.

    You must be passionate about programming, in Python or any other language, to survive in this industry. Without passion and love for coding, you will be disappointed in no time. Programming involves lots of screen time, and sometimes it steals time from your relationships. You are setting yourself up for failure if you are doing it only for the money, because you will encounter a lot of obstacles along the way.

    Love programming. Write a line of code every day. To stay motivated, you must love doing this, and if you have read this post this far, I believe you love coding and want to improve. So congratulations.

  7. Start teaching others about programming.

    Teaching is a way of imparting knowledge to others, and while you are teaching others, you are also teaching yourself. You are improving your own ability to use and learn Python, and gaining valuable benefits in the process.

    Teaching others also builds soft skills. By teaching, you gain communication and presentation skills, and as you successfully teach others, you grow in confidence with the Python programming language and gain leadership skills. According to glassdoor.com, a job search site, these skills are in high demand in the programming industry.

    If you cannot teach a class, you can participate in forums to help others. Forums like python-forum.io, freecodecamp.org, and stackoverflow.com can give you plenty of opportunities to teach.

I wish you success in your programming career.

Why Shaving Blades Become Useless After Cutting Human Hair

For a long time, scientists have been fascinated by one problem concerning blades. Blades are made of stainless steel, honed to a razor-sharp edge, and often coated with diamond-like carbon to strengthen them further; yet a material about 50 times softer than the blade, such as a human hair, can render it useless over time. From a logical point of view, this should not be the case.

 

Intrigued by this problem, engineers at MIT's Department of Materials Science and Engineering have come up with an explanation. These engineers spend their days exploring the microstructure of materials in order to design and make new materials with exceptional damage resistance. The lead researcher, Gianluca Roscioli, an MIT graduate student, came up with the idea while shaving his own hair.

After noticing that his blades tended to dull over time, he decided to take images of a blade after each shave. He took these images with a scanning electron microscope (SEM), scanning the blade's edge to track how it wore down over time. What he discovered showed that the process is much more complex than simple wear. He noticed very little wear or rounding of the edge; instead, he realized that chips were forming around certain regions of the razor's edge. This led him to ask: under what conditions does this chipping take place, and what does it take for a hardened blade to fail after shaving a material as soft as human hair?

To answer these questions conclusively, he built an apparatus designed to fit inside the SEM and used it to shave hair samples from himself and his colleagues. The team found that certain conditions cause the edge of a blade to chip, and as the chipping proceeds over time, the blade gets dull. The conditions depend on the blade's microstructure. If the blade's microstructure is heterogeneous, that is, not uniform, the blade is more prone to chipping. The angle of the cut was also found to be significant: shaving at right angles was better than at lower angles. Finally, defects in the steel's microstructure played a role in initiating cracks on the blade's edge. Chipping was most prominent when the hair met the blade at a weak point in the blade's heterogeneous structure.

These conditions illustrate a mechanism that is well known in engineering: stress intensification. The stress applied to a material is intensified where its structure contains microcracks. Once an initial microcrack has formed, the material's heterogeneous structure allows the crack to grow easily into a chip. So even though the blade might be fifty times stronger than what it is cutting, its heterogeneity can concentrate the stress, causing cracks to grow.

The implications of this discovery are immense. It could save money for the average user of shaving blades by offering clues on how a blade's edge can be preserved, and it gives manufacturers the opportunity to make better blades and cutting tools from more homogeneous materials.

The engineers have already taken their discovery one step further. They have filed a provisional patent on a process to manipulate steel into a more homogeneous form, with the hope of using it to build longer-lasting, more chip-resistant blades.

Material for this post was taken from the MIT news website.

A Microscopic View Of Python’s Lookbehind and Lookahead Regex Assertions

Any discussion of regular expressions, or regex, is not complete without taking note of the lookaround assertions. A lookaround assertion in regex checks whether a given pattern exists before or after the current position in the string. Note that when doing lookarounds, the text examined by the lookaround is not consumed and the current position in the string does not change.

We will be looking at four types of lookaround assertions in regex today: the positive lookbehind, the negative lookbehind, the positive lookahead, and the negative lookahead.

Python Regex with lookahead and lookbehind assertions
 

The Positive and Negative lookbehind assertions.

In lookbehind assertions, we are only looking at what precedes the current position in the string we want to match. The pattern in the lookbehind assertion does not participate in the match, or as it is said, is not consumed in the match; it only helps in asserting that the match is true. A lookbehind can be positive or negative. In a positive lookbehind assertion, we assert that the pattern is present immediately before the current position. In a negative lookbehind assertion, we assert that the pattern is not present immediately before the current position.

The syntax for a positive lookbehind assertion is (?<=foo): a match is found only if foo immediately precedes the current position, that is, foo ends exactly where the match begins.

Let’s illustrate this with an example involving currency figures. If we have a string like ‘USD100’ and we only want to match 100, we can assert that USD must come before the number with this pattern: (?<=USD)\d{3}, which matches exactly three digits preceded by the string USD.

The syntax for the negative lookbehind assertion is (?<!foo), which matches at the current position only if foo does not come immediately before it. Continuing with the string ‘USD100’, to say that EURO must not come before the digits we could use: (?<!EURO)\d{3}. I believe by now you understand what the pattern represents.
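Here is a quick illustration in Python; the sample string is my own:

    import re

    text = 'USD100 EURO200'

    # positive lookbehind: three digits preceded by 'USD'
    print(re.findall(r'(?<=USD)\d{3}', text))    # ['100']

    # negative lookbehind: three digits not preceded by 'EURO'
    print(re.findall(r'(?<!EURO)\d{3}', text))   # ['100'] -- the digits after EURO produce no match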

Now, we will go on to the second set of lookaround assertions: the positive lookahead and the negative lookahead.

The lookahead and negative lookahead assertions.

The lookahead assertions are just the opposite of the lookbehind assertions. They look for the presence or absence of a pattern ahead of the current position in the string.

The positive lookahead assertion looks for the existence of the specified pattern from the current position onwards. Its syntax is (?=foo): from the current position in the string, we look ahead to check that foo exists. Let’s take our 'USD100' string again. If we want to check that the number comes after the currency code, we could use the following pattern: \w{3}(?=\d{3}). The 100 is not consumed in the match; only the USD is matched, and we merely assert that 100 comes right after it.

The negative lookahead assertion is the opposite of the positive lookahead. If included in a pattern, it asserts that the pattern in the assertion does not come after the current position in the string. For example, if we have the string 'USD100' and we want to make sure it is not 'USD200', we could use the following pattern: \w{3}(?!200). It matches three word characters only if they are not literally followed by 200.
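And a similar sketch for the lookaheads, again with my own sample strings:

    import re

    # positive lookahead: three word characters followed by three digits
    print(re.match(r'\w{3}(?=\d{3})', 'USD100').group())   # 'USD' -- the digits are not consumed

    # negative lookahead: match only if '200' does not follow
    # re.match anchors at the start of the string, so the engine cannot slide over to 'SD2'
    print(re.match(r'\w{3}(?!200)', 'USD100').group())     # 'USD'
    print(re.match(r'\w{3}(?!200)', 'USD200'))             # None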

So, that is what we can take away from the lookaround assertions in Python. Now, let’s use our knowledge to solve a problem.

Assume you are given the string rabcdeefgyYhFjkIoomnpOeorteeeeet, and you want to match every substring that contains 2 or more vowels, on the condition that each substring must lie between two consonants and must contain only vowels. How do you go about it?

If you look at the question, it involves both lookbehind and lookahead assertions: a consonant must lie before the vowels (lookbehind) and a consonant must also lie after the vowels (lookahead). When you understand this, your work is nearly done. Then we must define what it means to be a consonant: any letter that does not lie within the set of vowels, [aeiou]. We will be doing a case-insensitive match, so we have to set the ignore-case flag for the regex search, re.I.

Here is how the code is written:
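Below is one way to write it; my sketch uses [^aeiou] to stand for a consonant, which is safe here because the test string contains only letters:

    import re

    s = 'rabcdeefgyYhFjkIoomnpOeorteeeeet'

    # lookbehind: a consonant must come immediately before the vowel run
    # lookahead:  a consonant must come immediately after it
    # [aeiou]{2,} matches two or more consecutive vowels; re.I ignores case
    pattern = r'(?<=[^aeiou])[aeiou]{2,}(?=[^aeiou])'

    print(re.findall(pattern, s, re.I))   # ['ee', 'Ioo', 'Oeo', 'eeeee']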

There is nothing new in the code. I have already explained most of the code in another blog post, entitled: The Big Advantage of Understanding Python Regex Methods.

You can download the script here if you want to take a deeper look at it.

I hope you have a nice day. If you want to keep receiving python updates like this, just subscribe to the blog using your email.

A Game Changing, Innovative, Microscale E-waste Recycling Strategy For The Environment And Manufacturing.

A typical recycling process works like this: waste is collected, sorted, cleaned and processed, the processed waste is used to manufacture more of the same product, and the cycle is closed when consumers buy these products. The process depends on the collected waste being made of the same material; that is what makes it possible to manufacture new products from it. Yet that is not the typical case for e-waste.

E-waste, that is, electronic products that are no longer working, are unwanted, or are close to the end of their useful lives, is usually made of heterogeneous materials which cannot be readily separated. Recycling it and putting it back into the cycle therefore does not seem commercially attractive to the average manufacturer. Yet this waste has to be recycled, because if it is put back into the environment, which is usually the case, the toxic materials it contains can poison our soil, water, air, and wildlife.

 

To solve this problem, researchers have developed a selective, small-scale microrecycling strategy which can convert old electronic parts like printed circuit boards and monitors into a new type of strong metal coating. The researchers, Veena Sahajwalla and Rumana Hossain, based their work on the copper and silica that are common components of electronic devices. They realized that these materials could be extracted from e-waste and combined at high temperatures, up to 2,732 F, generating silicon carbide nanowires that can then be processed further into a durable new hybrid material ideal for protecting metal surfaces.

This technique is innovative and a game changer. It could reduce the amount of e-waste that ends up in landfills and make it profitable for recycling plants to take on larger volumes of e-waste. Consider that a typical electronic device, like a laptop or a TV screen, contains lots of potentially valuable substances that could be used to modify the performance of other materials or to manufacture new, reliable ones. That is what this innovation makes possible. Also, the process, which the researchers have called material microsurgery, could be used to recover large amounts of copper annually for use in electronics, industrial equipment, transportation, and consumer products.

Imagine the number of jobs that could be created in the recycling industry if e-waste were taken out of the ecosystem daily, and the benefits to our environment of not having to dispose of these devices in landfills, where their contents could percolate back into our water cycle or food.

This material microsurgery technique could also be used to create durable new hybrid materials that protect metal surfaces. Yes, and it has been tested. In laboratory experiments, the hybrid material fixed to steel remained firmly entrenched: when the steel was struck with a nanoscale indenter, the hybrid layer did not detach but held firm, showing no signs of cracking or chipping. It was also seen to increase the hardness of the steel by about 125%.

The potential benefits of this small-scale microrecycling strategy are very high. Thanks to the innovation of these two researchers, we could have a cleaner environment free from e-waste, with fewer electronic products disposed of improperly.

I included this innovation in my blog because of its high potential benefit to mankind. I think this is a problem well solved and worthy of acclaim.

Materials for this post were taken from a press release by the American Chemical Society, ACS.

Coding Python Functions Converting To Any Base That Fits Your Fancy

Sometimes while doing maths, we are faced with converting numbers from one base to another. Python is good for occasions like this; that is why I love programming in Python. In this post, I will describe two ways you can convert any number in base 10 to any other base.

 

First, the more than one line way.

We will use simple mathematical logic to do the base conversion. In maths, when you convert a number from base 10 to another base, you repeatedly divide the number, and then each successive quotient, by the base, keeping the remainders, until the quotient reaches zero. You then reverse the list of remainders and you have your number in the new base. For example, 13 in base 2: 13 ÷ 2 = 6 remainder 1, 6 ÷ 2 = 3 remainder 0, 3 ÷ 2 = 1 remainder 1, 1 ÷ 2 = 0 remainder 1; reading the remainders backwards gives 1101. That is exactly what we are going to do here, using two built-in functions.

The first function is divmod. Its syntax is divmod(numerator, denominator): the denominator divides the numerator, and the function returns a tuple consisting of the quotient and the remainder. That’s it. We will use it to repeatedly divide the number and then its quotient, checking that the quotient has not yet reached zero, while collecting the remainders in a list, just like you do in maths. When the quotient gets to zero, the division ends.

The second function we will be using is reversed. Its syntax is reversed(sequence). As the name implies, reversed takes a sequence and returns an iterator over its items in reverse order. So simple.
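A quick look at the two helpers, with illustrative values:

    print(divmod(17, 5))                  # (3, 2): quotient 3, remainder 2
    print(list(reversed([1, 2, 3, 4])))   # [4, 3, 2, 1] -- reversed returns an iterator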

Now that we have our two handy functions, we are ready to write code that will convert any number to any base. I will call the function we will use for this, converter.

Here is the code:
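Below is a minimal version of the converter function built from the steps above (divmod in a loop, each remainder stored as a string, then the list reversed and joined), followed by a couple of illustrative calls:

    def converter(number, base):
        remainder_digits = []
        while number > 0:
            number, remainder = divmod(number, base)   # the quotient feeds the next round
            remainder_digits.append(str(remainder))    # keep each remainder as a string
        return ''.join(reversed(remainder_digits))     # line 6: reverse and join into one string

    print(converter(10, 2))    # '1010'
    print(converter(255, 8))   # '377'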

I want you to note that while appending each remainder to the remainder_digits list, I first converted it to a string. If I did not, the ‘’.join statement in line 6 would not be able to combine them, because join only works on strings; that join is what casts the reversed list back into a single string.

You can download the script for the code above from here.

That’s the logical way to go about it. Now let me show you how to do it with numpy.

The one-liner with numpy.

Numpy is fundamental for scientific and mathematical computing in python. As you know we are dealing with mathematical things here; numpy has a handy function for handling it. But I would add that numpy is somewhat of an overkill. The simple method I introduced above for programming it in python is enough. But in case you want something that can do it in one line and faster, then you can use numpy.

First, to use numpy it has to be installed on your machine. You can install it with the command pip install numpy; if it is already installed, you are good to go. Next, you need to import it into your script with the statement import numpy.

We will be using the numpy.base_repr function for this conversion. Its signature is numpy.base_repr(number, base=2, padding=0). The required argument is the number you want to convert. The base keyword argument is the base you want to convert to, with a default of 2. The padding keyword argument is the number of leading zeros to pad the result with; its default of 0 means no padding. In case you need a refresher on positional and keyword arguments, you can see this earlier post on keyword and positional arguments.

Now that we have everything set up, let us see the one line of code that can do this for us.
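For instance (the values here are just illustrative):

    import numpy

    # convert 255 to base 16, and 10 to base 2 with four zeros of left padding
    print(numpy.base_repr(255, base=16))            # 'FF'
    print(numpy.base_repr(10, base=2, padding=4))   # '00001010'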

You can download the script for the code above from here.

That’s it. Numpy is just another, easier way of doing it, with more functionality. Numpy is beautiful. Note, though, that numpy.base_repr can only handle bases from 2 up to 36, so if you need a base higher than 36, you should use the first converter function instead.

If you want to keep receiving python posts like this, just subscribe to this blog using your email. Happy pythoning.
