
Constructing An XML Parser In Python

XML, or Extensible Markup Language, is a markup language that defines a set of rules for encoding documents so that they are both human-readable and machine-readable. The World Wide Web Consortium has published a set of standards that define XML; you can reference the specifications here. Although XML was initially designed for documents, it is now used for many other types of media and data.


A well-formed XML document includes, among other things, elements and attributes. An element, as in HTML, is a logical document component that begins with a start-tag and ends with an end-tag. A start-tag is written <tag_name> and an end-tag </tag_name>. An empty tag combines both and is written <tag_name />. An element can also carry attributes within its start-tag or empty tag. Attributes are name-value pairs, and each name can have only one value. An example of an element with an attribute is <subtitle lang='en'>, where the subtitle element has the lang attribute with the value 'en'. At the top of the XML document is a root element, which is the entry point into the document.

Now, in our code that parses XML we will only be dealing with elements and attributes.

Python ships with an API for parsing XML. The module that implements the API is xml.etree.ElementTree, so you have to import this module into your Python file to use it.

What the xml.etree.ElementTree module contains

This module provides a simple API for parsing and creating XML in Python. Although it is robust, it is not secure against maliciously constructed data, so take note. Among its several classes, our parsing activity will concentrate on two: ElementTree, which represents the whole XML document as a tree, and Element, which represents a single node in that tree.

To import an XML document you could import it from a file or pass it as a string.

To import it from a file use the following code:

    
import xml.etree.ElementTree as etree
tree = etree.parse('data.xml')
root = tree.getroot()

while to get it directly from a variable as a string use the following code:

    
import xml.etree.ElementTree as etree
xml = 'data as string'
root = etree.fromstring(xml)

The root variable above refers to the root element in the XML document.

The ElementTree constructor

We will be using the ElementTree constructor to get to the root of our XML document, so it is worth mentioning here. Its signature is xml.etree.ElementTree.ElementTree(element=None, file=None). The constructor accepts either an element that serves as the root or a file containing the XML document, and it returns the XML document as a tree that can be interacted with.

One interesting method of this class is getroot(). When you call it on an ElementTree object, it returns the root element of the XML document. We will use the root element as our doorway into the document, so take note of this method because we will be using it in the parsing code below.

That’s all we need from ElementTree class. The next class we will need is the Element class.

Objects of the Element class

This class defines the Element interface. Its constructor is xml.etree.ElementTree.Element(tag, attrib={}, **extra). We will not be creating any elements ourselves, just using their attributes and methods, but you can see from the constructor definition that an XML element has two parts: a tag and a dictionary of attributes. Every element in the XML document is an object of this class.

Some interesting attributes and methods we will be using from this class are:

a. Element.attrib: This returns a dictionary representing the attributes of the element, i.e. the name-value pairs of attributes on that element (or node, as some call it) in the XML document.

b. Element.iter(tag=None): This is the iterator for an element. It iterates recursively over the element's children in depth-first order, yielding all descendants, including the children of its children. You can filter the results by passing a tag argument specifying the tag you are interested in. If you do not want the children recursively but only the first-level children of an element, use the next method below.

c. list(element): Casting an element to a list returns a list of its children, first level only. This replaces the former Element.getchildren() method, which is now deprecated.
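To see these three features at work, here is a small sketch on a toy document (the document and its values are illustrative, not the feed we parse later):

```python
import xml.etree.ElementTree as etree

doc = "<feed><title>SolvingIt?</title><subtitle lang='en'>Programming</subtitle></feed>"
root = etree.fromstring(doc)

print(root.attrib)                      # {} - the feed element has no attributes
print(root.find('subtitle').attrib)     # {'lang': 'en'}
print([el.tag for el in root.iter()])   # ['feed', 'title', 'subtitle'] - recursive
print([el.tag for el in list(root)])    # ['title', 'subtitle'] - first level only
```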

So, I believe you now have a simple introduction into some of the features of the xml.etree.ElementTree module. Now, let’s implement this knowledge by parsing some XML documents.

The XML document we are going to parse is a feed for a blog. The XML document is given below:

    
<feed xml:lang='en'>
        <title>SolvingIt?</title>
        <subtitle lang='en'>
               Programming and Technology Solutions
                     </subtitle>
        <link rel='alternate' type='text/html' 
         href='https://emekadavid-solvingit.blogspot.com' />
        <updated>2020-09-12T12:00:00</updated>
        <entry>
            <author>
                <name>Michael Odogwu</name>
                <uri>
                https://emekadavid-solvingit.blogspot.com
                </uri>
            </author>
        </entry>
    </feed>   

You can reference this document in the code while reading the code. You can see that the XML document has elements or nodes and the root tag is named feed. The elements also have attributes.

The first task we are going to do is that we are going to find the score of the XML document. The score of the XML document is the sum of the score of each element. For any element, the score is equal to the number of attributes that it has.

The second task is to find the maximum depth of the XML document. That is, given an XML document, we need to find the maximum level of nesting in it.

So, here is the code that prints out the score and maximum depth of the XML document above. I want you to run the code and compare the result with what you would have calculated yourself. Then, after running the code, the next section is an explanation of relevant points in the code along with a link to download the script if you want to take an in-depth look at it.
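The full script was offered as a download rather than embedded inline. Based on the walkthrough that follows, a sketch of it would look roughly like this; note that the line numbers cited below refer to the original xmlparser.py, which this sketch only approximates:

```python
import xml.etree.ElementTree as etree

def get_attr_number(node):
    # sum the number of attributes over the node and all of its descendants
    total = 0
    for i in node.iter():
        total += len(i.attrib)
    return total

maxdepth = 0
def depth(elem, level):
    # recursively track the deepest level of nesting; the root counts as 0
    global maxdepth
    level += 1
    if level > maxdepth:
        maxdepth = level
    for child in list(elem):
        depth(child, level)

xml_doc = """<feed xml:lang='en'>
    <title>SolvingIt?</title>
    <subtitle lang='en'>Programming and Technology Solutions</subtitle>
    <link rel='alternate' type='text/html'
     href='https://emekadavid-solvingit.blogspot.com' />
    <updated>2020-09-12T12:00:00</updated>
    <entry>
        <author>
            <name>Michael Odogwu</name>
            <uri>https://emekadavid-solvingit.blogspot.com</uri>
        </author>
    </entry>
</feed>"""

tree = etree.ElementTree(etree.fromstring(xml_doc))
root = tree.getroot()

print(get_attr_number(root))  # 5: one attribute on feed, one on subtitle, three on link
depth(root, -1)
print(maxdepth)               # 3: feed(0) -> entry(1) -> author(2) -> name(3)
```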

Now, for an explanation of the relevant sections of the code. I will use the lines in the code above to explain it.

Line 1: We import the module, xml.etree.ElementTree and name it etree.

Lines 23-35: The XML document.

Line 36, 37: Using the fromstring method of the module, we import the xml document and pass it to the ElementTree constructor which then constructs a tree of the document. Then from the tree created we get the root element (or node) so that we can parse the document starting from the root element.

Line 39: We pass the root element to our function, get_attr_number, that calculates the score of the XML document.

Lines 3-8: What the get_attr_number function does is that it takes the root element or node and recursively iterates through it using node.iter() to get all the children, even the nested children. For each child element, it calculates the score for that child by finding out the length of the attribute dictionary in it, len(i.attrib) and then adds this score to the total score. It then returns the total score as the total variable.

Next is to find the maximum depth. In the XML tree, we take the root element, feed, to be a depth of 0. Take note.

Lines 41,42: Here the depth function is called, passing it the root element of the tree and the default level is noted as -1. Then maxdepth, a global variable, is printed out after the depth function has finished execution. I now describe the depth function.

Lines 12-20: When this function is called, it increases the level count by 1 and checks to see if the level is greater than the maxdepth variable in order to update maxdepth. Then for each node or element, if that element has children, list(elem), it calls the function, depth, recursively.

You can download the above code here, xmlparser.py.

Now, I believe you understand how the code works. I want you to be creative. Think of use cases of how you can use this module with other XML functions like creating XML documents, or writing out your own XML documents and parsing them in the manner done above. You can also check out another parser I wrote, this time an HTML parser.

Happy pythoning.

Validating Credit Cards With Python Regex

We have been exploring python regex in previous posts because it is an interesting area of programming. It gives me an adrenaline surge when I get a pattern right. I usually use regex101.com to confirm my patterns in python regex.

Because I have touched extensively on the syntax of python regex in previous posts, like on the syntax of python regex and the methods that are used in python regex, I will go straight to describing today’s task.

In today’s coding challenge, we want to validate credit card numbers. We know credit cards are 16 digits long, but what goes into their creation? For a credit card number to be valid, it needs to have the following characteristics:

1. It must start with 4, 5, or 6 (assuming that is the requirement for our bank MZ. Other banks have different requirements but we can scale)

2. It must contain exactly 16 digits.

3. It must only consist of digits, i.e, 0 – 9.

4. It may have digits in groups of 4 separated by a single hyphen, '-'.

5. It must not use any other separator except the hyphen.

6. It must not have 4 or more consecutive repeated digits.

Now, that list of requirements is long. Yes, it is long. But we can scale to the requirements. Here is code that does just that. I would like you to read the code and then run it to see the output. Later, I will explain each of the regex patterns and what the code is doing.
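The script itself was offered as a download rather than embedded inline. Here is a sketch reconstructed from the explanation that follows: the two patterns are the ones discussed below, while the validate helper and the sample card numbers are my own illustrative additions.

```python
import re

valid_structure = r"[456]\d{3}(-?\d{4}){3}$"
no_four_repeats = r"((\d)-?(?!(-?\2){3})){16}"
patterns = (valid_structure, no_four_repeats)

def validate(card):
    # a card is valid only if it satisfies both patterns
    return "Valid" if all(re.match(p, card) for p in patterns) else "Invalid"

cards = ["4123456789123456", "5123-4567-8912-3456",
         "61234-567-8912-3456", "4424444424442444"]
for card in cards:
    print(card, validate(card))
# 4123456789123456 Valid
# 5123-4567-8912-3456 Valid
# 61234-567-8912-3456 Invalid  (wrong grouping)
# 4424444424442444 Invalid     (run of four or more repeated digits)
```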

Now that you have read it and run it, I sure hope you understand the code. Not to worry, I am here to walk you through the lines of the code. But first, let me explain some relevant details in regex that will help you understand the code. That is, the python regex meta characters that would be of help.

? This is called the optional match. It tells python to match 0 or 1 repetitions of the preceding regular expression.
{m} This tells python to match exactly m repetitions of the preceding regular expression.
[abc] Used to indicate a set of characters. Any character within the set is matched. This example matches a, b, or c.
(…) This matches whatever regular expression is inside the parentheses and indicates the start and end of a group. The contents of a group can be retrieved after the match has been performed and can be matched again later in the string with the \number special sequence.
\number Matches the contents of the group with this number. Group numbering starts from 1.
(?!…) Matches if … does not match next. This is the negative lookahead assertion. For example, Isaac(?!Asimov) will match 'Isaac' only if it is not followed by 'Asimov'.

Now that I have explained all the relevant meta characters you need to understand the code, let’s go through the code, starting first with the patterns for the match.

On lines 4 and 5, you can see that I wrote two patterns we will be using to do the matches.

Line 4, the valid_structure pattern: r"[456]\d{3}(-?\d{4}){3}$". First, it opens with a set of characters, [456]: the pattern must begin with a 4, 5, or 6, per our credit card requirements. This should be followed by exactly three digits (\d matches a digit). After that comes a group consisting of an optional hyphen and exactly four digits, and this group must be matched three times. Counting the digits together, that adds up to 16. So the valid_structure pattern satisfies nearly all the requirements, except the rule that there should be no 4 consecutive repeats of any digit.

That is where the no_four_repeats pattern comes in, on line 5. The pattern is r"((\d)-?(?!(-?\2){3})){16}". Let's go through it. First there is an outer grouping; this is group 1. Inside it, the digit at the start of the pattern is grouped; it becomes group 2. The pattern says a digit may be followed by an optional hyphen. Then comes a negative lookahead assertion: the next three digits, each optionally preceded by a hyphen, must not all equal the digit just matched (back-referenced as \2). In other words, if the same digit repeats four times in a row, hyphens or not, there is no match. Finally, the whole digit group must occur exactly 16 times, covering all 16 digits. If you want a refresher on negative lookahead assertions, you can check this blog post on assertions in regular expressions.

The rest of the function following from the patterns is self-explanatory. We pack the patterns into a tuple on line 6. From lines 8 to 12 we search for a match based on the patterns for each credit card number in the list that was passed.

I hope you did find the code interesting. It was beautiful matching credit card numbers. I hope to bring you more like this if I find any challenge that seems interesting.

If you would like to receive instant updates when I post new blog articles, subscribe to my blog by email.

Happy pythoning.

First Tracking Device using Vibration, AI to track 17 Home appliances

As things stand today, to track each appliance in your home you would need to install a separate tracker for each one. Now, what if you had 10, 20 or so appliances? That would be quite an expense, would it not? But recently, researchers at Cornell University developed a single device that can track about 17 home appliances at the same time, using vibration combined with a deep learning network. With this device, you no longer need to worry about forgetting to take wet clothes out of the washing machine, leaving food in the microwave, or failing to turn off a dripping faucet. This device promises to make your home smart in a cost-effective way.


Vibration analysis has several uses in industry, especially in detecting anomalies in machinery, but this is the first use case for tracking home appliances using vibrations that I have found. This device, called Vibrosense, uses lasers to capture the subtle vibrations that are emitted by walls, floors and ceilings and then incorporates this received vibration with a deep learning network that is used to model the data being processed by the vibrometer in order to create a unique signature for each appliance. I tell you, researchers are getting closer to their dream of making our homes not only smarter, but more efficient and integrated.

But can it detect appliance usage across a whole house, you may ask? There are many appliances in a house, and the vibrations they emit can overlap. That's right, and the researchers have a solution to that problem. To detect different appliances across the house, and not just in a single room, they divided the task of the tracking device into two parts: first, detect all the vibrations in the house using the laser Doppler vibrometer, and second, differentiate the vibrations from multiple appliances, even similar ones, by identifying the path the vibrations have traveled from room to room.

The deep learning network incorporated in the device learns from two cues: path signatures, used to identify different activities, and the distinctive noises the vibrations make as they travel through the house.

To test its accuracy, the tracking device was tested across five houses, and it was able to identify the vibrations from 17 different appliances with 96% accuracy. Some of the appliances it could identify were dripping faucets, an exhaust fan, an electric kettle, a refrigerator, and a range hood. Also, once trained, VibroSense could identify five stages of appliance usage with 97% accuracy.

Cheng Zhang, assistant professor of information science at Cornell University and director of Cornell's SciFi Lab, speaking about the device, VibroSense, said it is recommended for use in single-family houses, because when installed in buildings it could pick up the activities going on in neighboring homes. A big privacy risk, one must say.

A smart device with immense benefits

When computers can recognize the activities going on in the home, our dream of the smart home moves closer to reality. Such computers ease the interaction between humans and machines, enabling human-computer interfaces that are a win for everyone. That is what this tracking device does. One advantage of this device is that it leverages computers to understand human needs and behaviors. Formerly, we needed a separate device for each appliance or need; this device consolidates all of that into one. “Our system is the first that can monitor devices across different floors, in different rooms, using one single device,” Zhang said.

I feel elated on discovering this device. No more waiting around for my food to cook in the microwave: with this device, I could be watching TV while it watches the food on my behalf. There are a lot of things we could use this for. I think this innovation could be very beneficial to the average American.

But one concern about VibroSense is privacy. I wouldn't want my neighbor to know when I am in the bathroom, or that I have the TV on, or that I am not in the house. But that is the kind of information the device could give away.

When asked on the issue of privacy, Zhang said: “It would definitely require collaboration between researchers, industry practitioners and government to make sure this was used for the right purposes.” I hope that cooperation does come.

The device could even help enable sustainability and energy conservation in the home by helping households monitor their energy usage and reduce consumption. It could also be used to estimate electricity and water usage rates, since it can detect both the occurrence of an event and the exact period over which the event took place. This is the kind of energy-saving insight homeowners badly need. This is great!

I was thinking about the benefits of a device like this in a typical home and was so wowed by its potential that I decided this innovation needed a place in my solvingit? blog. So, this is a thumbs up to Cheng Zhang and his team at Cornell.

The material for this post was based on the paper: “VibroSense: Recognizing Home Activities by Deep Learning Subtle Vibrations on an Interior Surface of a House from a Single Point Using Laser Doppler Vibrometry.” Cheng Zhang was senior author of the paper. The paper was published in Proceedings of the Association for Computing Machinery on Interactive, Mobile, Wearable and Ubiquitous Technologies and will be presented at the ACM International Joint Conference on Pervasive and Ubiquitous Computing, which will be held virtually Sept. 12-17.

Techniques for packing and unpacking in python

When I first started learning Python, one of the cool things I learned was packing and unpacking of sequences like tuples. I fell in love with them so much that I ended up using them all the time. At the time, they were a feature I had seen only in Python and JavaScript.


Packing and unpacking, also called destructuring (i.e., the ability to extract multiple values from data stored in objects), has many techniques that I have discovered over the years. I will highlight some in this post.

1. Packing and unpacking of sequences

As a convenient way to assign variables with a single line of code, automatic packing is an often used technique that can serve you well. It usually is applied to sequences like tuples, lists, and ranges. For a refresher on sequences and how they are a subset of iterables, see this post.

For example, we could automatically pack and unpack this list into three variables, x, y, and z.
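For instance, with a hypothetical three-item list (the values are illustrative):

```python
coordinates = [10, 20, 30]   # an illustrative list
x, y, z = coordinates        # unpacked into three variables in one assignment
print(x, y, z)               # 10 20 30
```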

We could do a similar thing with a tuple, unpacking the tuple and assigning its values.
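A sketch of the tuple case (again with illustrative values):

```python
point = (4, 5)        # an illustrative tuple
px, py = point        # the tuple's values are assigned in order
print(px, py)         # 4 5
```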

Most often where programmers apply this automatic packing and unpacking technique is when they want to return multiple values from a function. Take this function for quotients and remainder that returns a tuple.
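A sketch of such a function (the name and values are illustrative):

```python
def quotient_remainder(a, b):
    q = a // b
    r = a % b
    return q, r          # packed into a tuple automatically

quot, rem = quotient_remainder(17, 5)   # unpacked on return
print(quot, rem)   # 3 2
```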

You can see that when the call returns, the output, a tuple, is automatically packed into two variables. You see this often in code, and it is a cool Python feature.

The packing feature has its dual companion, the unpacking feature for sequences. I highlighted one above but let me show you another with unpacking in tuples.
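A minimal sketch, unpacking a range into a tuple of variables:

```python
a, b, c = range(3)   # the range is unpacked into the three variables
print(a, b, c)       # 0 1 2
```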

See how python nicely unpacks the range and assigns it immediately to the variables on the left.

One thing you need to note when packing and unpacking with sequences is that the number of variables on the left should match the number of items you want to unpack or pack on the right.

You do not only pack and unpack on the right side of the assignment operator. You can also pack and unpack on the left. Take this example.
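A sketch of that example (the values are illustrative):

```python
a, *d = (1, 2, 3, 4)
print(a)   # 1
print(d)   # [2, 3, 4] - the remaining items are packed into a list
```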

I first assigned the first element to the variable a and then packed the remaining items of the tuple into d. Just use your creativity.

You can unpack in a for loop. Remember, for loops need an iterable. That is why unpacking can be deployed there also. Take this list of tuples and see how one can unpack the tuples in a for loop.
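A sketch with an illustrative list of tuples:

```python
points = [(1, 2), (3, 4), (5, 6)]   # an illustrative list of tuples
for x, y in points:                  # each tuple is unpacked per iteration
    print(x, y)
```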

This technique of unpacking in a for loop is what you are doing when you call the dictionary.items() method with a syntax like this:
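For example, with a hypothetical dictionary:

```python
grades = {'math': 'A', 'physics': 'B'}   # an illustrative dictionary
for key, value in grades.items():        # each (key, value) tuple is unpacked
    print(key, value)
```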

This technique of packing and unpacking has given rise to the tradition of simultaneous assignment in Python, which replaces the need for a temporary variable when swapping the values of two variables. For example, the old way of swapping went like this:

    
temp = y
y = x 
x = temp

This has come to be replaced with the more convenient and readable simultaneous assignment syntax that goes like this:

x, y = y, x

Don’t you just love the way python is evolving? I just adore it.

Now, packing and unpacking is not restricted to sequences. It extends to the arena of function arguments and function calls.

2. Packing and unpacking in functions

There are several techniques for packing and unpacking in functions and I will highlight some of the common ones.

a. Python argument tuple packing and unpacking

To carry out a packing of the arguments that are passed to the function, you just precede the formal parameter with an asterisk, *. Then any value or object that is passed to the function will be packed into a tuple.

Here is an example.
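A minimal sketch (the function name and values are illustrative):

```python
def average(*args):
    # args is a tuple of all the positional arguments passed in
    return sum(args) / len(args)

print(average(1, 2, 3))   # 2.0
print(average(10, 20))    # 15.0
```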

Because this argument packing can accept multiple types, you would be wise to assert the types you desire in your function.

Python argument tuple packing has its counterpart, python argument tuple unpacking. This is when on the calling side of the function you precede the object with an asterisk. The object would usually be a sequence or iterable. Then when the object is passed to the function it is unpacked.

Take this example:
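A minimal sketch (names and values are illustrative):

```python
def add(a, b, c):
    return a + b + c

numbers = (1, 2, 3)
print(add(*numbers))   # 6 - the tuple is unpacked into a, b, c at the call
```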

Note that the number of formal parameters should equal the number of items in the object when it is unpacked.

But who says we cannot do unpacking and packing at the same time? Yes, you can do both in a single function call.
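A sketch of both at once (illustrative names and values): the sequence is unpacked at the call site and immediately repacked into a tuple inside the function.

```python
def multiply(*args):          # packs whatever arrives into a tuple
    result = 1
    for n in args:
        result *= n
    return result

values = [2, 3, 4]
print(multiply(*values))      # 24 - unpacked at the call, repacked inside
```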

Let’s go to another data structure that is not a sequence – a dictionary.

b. Python argument dictionary packing and unpacking.

Just as you can pack and unpack sequences, you can do the same with dictionaries, but with dictionaries you use a double asterisk, **, to indicate that you are dealing with key=value pairs. Reading the Python documentation (or pylint output, if your IDE has pylint installed), you will often find formal parameters specified as **kwargs. The **kwargs formal parameter just says that the keyword arguments passed in will be packed into a dictionary.

Let’s take an example of a common case.
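A minimal sketch (the function name and keywords are illustrative):

```python
def collect(**kwargs):
    # kwargs packs every keyword argument into a dictionary
    return kwargs

info = collect(language='python', topic='packing')
print(info)   # {'language': 'python', 'topic': 'packing'}
```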

You can pass any number of keyword arguments in the call, and the function will pack all of them into a dictionary.

Now let's illustrate argument dictionary unpacking. This occurs at the opposite end, the function call, when the object passed is preceded by a double asterisk. It tells the function that the object, a dictionary, should be unpacked.

One final example.
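A minimal sketch of it, matching the f(**names) call explained next (the function f and the grade values come from the original explanation):

```python
def f(x, y, z):
    return x + y + z

names = {'x': 'A', 'y': 'A+', 'z': 'C'}
print(f(**names))   # AA+C - the dictionary is unpacked into keyword arguments
```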

f(**names) is just shorthand for the explicit keyword arguments f(x='A', y='A+', z='C').

There are so many techniques which have sprung up from the principle of packing and unpacking iterables. If you have some you want to share which I didn’t mention, make a comment below and I will be sure to update this post.

Thanks. Happy pythoning.

Complete Methods For Python List Copy

After my post on python shallow and deep copy, a reader asked me: you can also copy a list with list slicing, and it is fast; which should I use?

Well, I decided to dedicate a post on the numerous ways you can copy a python list and then evaluate their timing to find out which is the fastest in order to give a concise answer.


So, here are the different methods one after the other.

1. The built-in python list.copy method

This method is built in for lists. Because it is built in, like most things built into Python, it should be fast. I just love using this method whenever I need to copy a list. But the timing will tell us which method is most efficient.

An example of how you can use it to copy a list is:
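A minimal sketch (the names are illustrative):

```python
names = ['Ada', 'Grace', 'Alan']
names2 = names.copy()        # an independent shallow copy
names2.append('Guido')
print(names)    # ['Ada', 'Grace', 'Alan'] - the original is untouched
print(names2)   # ['Ada', 'Grace', 'Alan', 'Guido']
```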

As you can see from the code, names2 was independent of names after copying, so it gives the desired behavior. But I need to tell you a caveat: list.copy() does a shallow copy; it cannot recursively copy nested lists. Too bad.

2. Slicing the entire list

Yes, I said it again. When you slice the entire list, you eventually copy everything into another object. This method is so cool. The syntax is listname[:]. That’s all you need to do to copy.

Let’s try this with an example.
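A minimal sketch (illustrative values):

```python
names = ['Ada', 'Grace', 'Alan']
names2 = names[:]            # slicing the whole list copies it
names[0] = 'Edsger'          # mutate the original
print(names2)   # ['Ada', 'Grace', 'Alan'] - unaffected by the change
```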

Yes, it is extremely convenient. It worked just as we expected, producing an independent list even when the original was changed. Like the first method, slicing to copy a list is also a shallow copy. Too bad.

3. Using the built-in list constructor, list()

This is just like creating a list from another list. The syntax is list(originallist). It returns a new object, a list.

Here is an example.
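A minimal sketch (illustrative values):

```python
names = ['Ada', 'Grace', 'Alan']
names2 = list(names)         # a new list built from the original
print(names2 == names)       # True  - same contents
print(names2 is names)       # False - different objects
```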

4. Use the generic shallow copy method.

For the generic shallow copy method, you need to import the copy module: import copy. Then call the module's copy function on the original list: copy.copy(originallist). I covered this in the post on python shallow copy and deep copy; you can reference it for a refresher.

Here is an example.
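A minimal sketch (illustrative values):

```python
import copy

names = ['Ada', 'Grace', 'Alan']
names2 = copy.copy(names)    # generic shallow copy
names2.append('Guido')
print(names)    # ['Ada', 'Grace', 'Alan'] - the original is untouched
```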

So, as we expected, the returned list, names2, was independent of the original list, names. But as the name says, it does a shallow copy: it cannot copy recursively. Where we have a nested list, it does not copy deep down but keeps references to the nested items.

5. The generic deep copy method

This is the last method, and the one I use whenever I have lists in my custom classes and need to copy them. It copies deep down, even to the items of a nested list. It is also a function of the copy module. You can read all about it in the link I gave above.

Let’s do an example.
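A minimal sketch on a flat list first (illustrative values):

```python
import copy

names = ['Ada', 'Grace', 'Alan']
names2 = copy.deepcopy(names)
names2[0] = 'Edsger'         # mutate the copy
print(names)    # ['Ada', 'Grace', 'Alan'] - the original is untouched
```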

I really need to do one more example with this method, to show that it copies deep down even to nested lists.
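A sketch with a nested list (illustrative values):

```python
import copy

matrix = [[1, 2], [3, 4]]
matrix2 = copy.deepcopy(matrix)
matrix[0][0] = 99             # mutate a nested item in the original
print(matrix2)  # [[1, 2], [3, 4]] - the deep copy did not change
```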

As you can see from the above nested list, when we change one of the nested items in the original list, the copy did not reflect that change to show that it was not copying by reference but copying deep down to the values.

Now that you are familiar with all the different ways to copy a list, which is the most time efficient?

First, I have to tell you that if you have a nested list, the only method you can use is the deep copy method. It is the only one that copies everything in a nested list without leaving any references.

Now, for the other types of lists, that is, lists that are not nested, all the methods can be used so we will now try to find out which is more time efficient by timing their processes.

Which method is more time efficient?

To test it out, you have to run the code below and see for yourself.
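Here is a sketch of such a timing harness using timeit (the list size and repeat count are arbitrary choices of mine; actual numbers will vary from machine to machine):

```python
import timeit

setup = "names = list(range(100)); import copy"
methods = {
    'list.copy()':   'names.copy()',
    'slice [:]':     'names[:]',
    'list()':        'list(names)',
    'copy.copy':     'copy.copy(names)',
    'copy.deepcopy': 'copy.deepcopy(names)',
}
for label, stmt in methods.items():
    # time each copying statement over many repetitions
    t = timeit.timeit(stmt, setup=setup, number=10_000)
    print(f'{label:14} {t:.4f}s')
```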

You will notice that the built-in list.copy method is faster than the other methods of copying a list. That's why I love using functions or methods that are built in specifically for a data type or data structure. List slicing comes in a close second place, although I would not want to use list slicing on a very large list.

That’s it. I hope you did enjoy this post. Watch out for other interesting posts. Just update via your email and they will surely land right in your inbox.

Happy pythoning.

Visualizing ‘Regression To The Mean’ In Python

Let’s take a philosophical bent to our programming and consider something related to research. I decided to consider regression to the mean because I have found that topic fascinating.


What is regression to the mean?

Regression to the mean, sometimes called reversion toward the mean, is the phenomenon in which, if a sample point of a random variable is extreme or close to an outlier, a future point is likely to be closer to the mean on further measurement. Note that the variable under measure has to be random for this effect to play out and be significant.

Sir Francis Galton first described this phenomenon while studying hereditary stature, in his paper "Regression towards mediocrity in hereditary stature." He observed that parents who were taller than the community average tended to have children who were shorter than themselves, closer to the community average height.

Since then, this phenomenon has been described in other fields of life where randomness or luck is also a factor.

For example, if a business has a highly profitable quarter, it is likely not to do as well in the following quarter. If one medical trial suggests that a particular drug or treatment outperforms all others for a condition, then in a second trial the outperforming treatment is likely to perform closer to the mean.

But regression to the mean should not be confused with the gambler's fallacy, which holds that if an event has occurred more frequently than normal in the past, it is less likely to happen in the future, even when the events are independent and the past does not determine the future.

I was thinking about regression to the mean while coding some challenge that involved tossing heads and tails along with calculating their probability, so I decided to add a post on this phenomenon.

This is the gist of what we are looking for in the code. Suppose we flip a coin a set number of times and find the average of those flips. Then we aggregate the flips over several trials. For each trial, we look for averages that were extreme, and check whether the average of the flips after that extreme regressed toward the mean. Note that the mean of a fair coin flip is 0.5, because the probability of heads is ½ and the probability of tails is also ½.

So after collecting the extremes along with the trials that come after them, we will want to see whether those following trials were regressing towards the mean or not. We do this visually by plotting a graph of the extremes and the trials after the extremes.

So, here is the complete code. I will explain the graph that accompanies the code after you run it and then provide a detailed explanation of the code by lines.

After you run the above code, you will get a graph that looks like that below.

regression to mean python


We drew a line across the 0.5 mark on the y-axis to show when points cross the average line. From the graph you can see that on several occasions, when there is an extreme above or below the average line, the next trial results in a flip fraction that moved towards the mean line, except for one occasion when it did not. So, what is happening here? Because the coin flip is a random event, it has the tendency to exhibit this phenomenon.

Now, let me explain the code I used to draw the visuals. There are two functions here, one that acts as the coin flip function and the other to collect the extremes and subsequent trials.

First, the code for the coin flip.

    
import random

def flip(num_flips):
    ''' assumes num_flips a positive int '''
    heads = 0
    for _ in range(num_flips):
        if random.choice(('H', 'T')) == 'H':
            heads += 1
    return heads/num_flips

The function, flip, takes as argument the number of times the coin should be tossed. For each flip, which is done randomly, it finds out whether the outcome was a head or a tail. If it is a head, it increments the heads counter, and finally it returns the fraction of flips that came up heads.

Then the next function, regress_to_mean.

    
import matplotlib.pyplot as plt

def regress_to_mean(num_flips, num_trials):
    # get fractions of heads for each trial of num_flips
    frac_heads = []
    for _ in range(num_trials):
        frac_heads.append(flip(num_flips))
    # find trials with extreme results and for each
    # store it and the next trial
    extremes, next_trial = [], []
    for i in range(len(frac_heads) - 1):
        if frac_heads[i] < 0.33 or frac_heads[i] > 0.66:
            extremes.append(frac_heads[i])
            next_trial.append(frac_heads[i+1])
    # plot results
    plt.plot(extremes, 'ko', label='Extremes')
    plt.plot(next_trial, 'k^', label='Next Trial')
    plt.axhline(0.5)
    plt.ylim(0, 1)
    plt.xlim(-1, len(extremes) + 1)
    plt.xlabel('Extremes example and next trial')
    plt.ylabel('Fraction Heads')
    plt.title('Regression to the mean')
    plt.legend(loc='best')
    plt.savefig('regressmean.png')
    plt.show()

This function is the heart of the code. It flips the coin a set number of times for a set number of trials, accumulating the average for each trial in a list. Then it finds out which of the averages is an extreme or outlier. When it finds an outlier, it adds it to the extremes list and adds the following trial to the next_trial list. Finally, we use matplotlib to draw the visuals. The visual is a plot of the extremes and next_trial figures, with a horizontal line marking the average so the viewer can better see which direction the next trial is expected to move when there is an extreme.
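If you want a quick numeric check of the same idea without the plotting, a sketch like the one below works. The 20 flips per trial, 500 trials, and the seed are my own choices, not taken from the code above; I use fewer flips per trial than the plotted version so that extremes actually occur.

```python
import random

def flip(num_flips):
    '''Fraction of heads in num_flips flips of a fair coin.'''
    heads = sum(random.choice(('H', 'T')) == 'H' for _ in range(num_flips))
    return heads / num_flips

random.seed(0)  # seed chosen only to make the run repeatable
frac_heads = [flip(20) for _ in range(500)]

# collect each extreme trial together with the trial after it
extremes, next_trial = [], []
for i in range(len(frac_heads) - 1):
    if frac_heads[i] < 0.33 or frac_heads[i] > 0.66:
        extremes.append(frac_heads[i])
        next_trial.append(frac_heads[i + 1])

# on average, the trial after an extreme sits closer to the 0.5 mean
avg_extreme = sum(abs(x - 0.5) for x in extremes) / len(extremes)
avg_next = sum(abs(x - 0.5) for x in next_trial) / len(next_trial)
print(round(avg_extreme, 3), round(avg_next, 3))
```

The average distance from 0.5 of the trials following an extreme should come out smaller than that of the extremes themselves, which is regression to the mean in a nutshell.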

I hope you enjoyed the code. You can run it on your machine or download it to study it, regress_to_mean.py.

Thanks for your time. I hope you do leave a comment.

Happy pythoning.

Python Shallow Copy And Deep Copy

Sometimes while programming, in order to prevent side effects when we want to change an object, we need to create a copy of that object and mutate the copy so that we can later use the original. Python provides methods that we can use to do this. In this post, I will describe the shallow copy and deep copy methods of python that you can effectively use to copy objects, even recursively.

python shallow copy and deep copy

 

Many programmers think that the assignment operator makes a copy of an object. It is really deceptive. When you write code like this:

object2 = object1

You are not copying but aliasing. That is, object2 gets a reference to the same object that object1 refers to. Aliasing can seem intuitive to use, but the caveat is that if you mutate the object through any one of the aliases, every name referencing that object sees the change. Let’s take an example.

You can see that I aliased second_names to names in line 2 so that they both reference the same object. When I appended a name to second_names, the change was reflected in names because they are both referencing the same object.
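The embedded snippet isn’t shown here, so here is a small sketch of that aliasing behavior; the names in the list are my own reconstruction of the example.

```python
names = ['Mary', 'John']
second_names = names         # aliasing: both variables refer to one list
second_names.append('Rose')  # mutate through the alias
print(names)                 # ['Mary', 'John', 'Rose'] -- the original changed too
```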

Sometimes, we don’t want this behavior. We want the fact that when we have made a copy, we have made a copy that is independent of the original. That is where python’s shallow copy and deep copy operations come in. To make them work, we need to import the copy module: import copy.

How Python shallow copy works with examples

The syntax for python shallow copy is copy.copy(x). The x argument is the original object you want to copy. I need to state here that copying really matters only for mutable objects; for immutable objects, copy.copy may simply return the same object, since there is no need to duplicate something that cannot change.

Let’s take an example of how python shallow copy works on a list.

You can see that the copy, second_names, remained unchanged even after we added items to the original.
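Since the embedded example isn’t shown here, this sketch reconstructs it; the particular names are my own.

```python
import copy

names = ['Mary', 'John']
second_names = copy.copy(names)  # a new, independent list object
names.append('Rose')             # mutate only the original
print(second_names)              # ['Mary', 'John']
print(names)                     # ['Mary', 'John', 'Rose']
```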

Let’s take an example on how python shallow copy works on a dictionary.

You can see in the dictionary also that the python shallow copy function operates on a dictionary as we expected.
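A sketch of the same behavior on a dictionary could look like this; again the keys and values are illustrative, not the post’s original data.

```python
import copy

grades = {'Mary': 'A', 'John': 'B'}
grades_copy = copy.copy(grades)  # shallow copy of the dictionary
grades['Rose'] = 'C'             # add a key to the original only
print(grades_copy)               # {'Mary': 'A', 'John': 'B'}
```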

You can also do copy on sets; they are mutable iterables. If you want a refresher on iterables, you can check this post.

There is one weakness of python shallow copy. As the name implies, it does not copy deep down; it copies only the items at the surface. If a list or dictionary has nested items, the copy will reference them, as in the aliasing operation, rather than copy them.

Let’s use an example to show this.

Now, you can see that we changed Rose’s grade in the original from a ‘C’ to an ‘A’, but the change was reflected in the copy. Too bad! That is behavior we don’t want. This is because python shallow copy does not go deep down, that is, it does not copy recursively. We need another type of copy to make both lists or dictionaries independent. That is where python deep copy comes in.
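The embedded example isn’t shown here, but a minimal sketch along those lines, with Rose’s grade reconstructed from the description, is:

```python
import copy

students = [['Mary', 'B'], ['Rose', 'C']]
students_copy = copy.copy(students)  # copies only the outer list
students[1][1] = 'A'                 # change Rose's grade in the original
print(students_copy[1])              # ['Rose', 'A'] -- the copy changed too!
```

The outer lists are distinct objects, but both still share the same inner lists, which is exactly the problem described above.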

How python deep copy works

Python deep copy will create a new object from the original object and recursively add all the objects found in the original to the new object, rather than passing them by reference. That’s cool. It makes copies of nested objects effective.

The syntax for deep copy is copy.deepcopy(x[, memo]). The x argument is the original object to be copied, while memo is a dictionary that keeps a tab on what has already been copied. The memo is very useful for avoiding recursive copying loops and for keeping deep copy from copying too much. I find myself using the memo often when I am implementing deep copy in my custom classes.

Now, let’s take an example of python deep copy on a list, a nested list precisely, and see how it performs.

You can see now that the original nested list was changed without affecting the copy.
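The embedded example isn’t shown here; a sketch of the same experiment with deep copy, reusing the hypothetical students list from earlier, is:

```python
import copy

students = [['Mary', 'B'], ['Rose', 'C']]
students_copy = copy.deepcopy(students)  # inner lists are copied too
students[1][1] = 'A'                     # change the original only
print(students_copy[1])                  # ['Rose', 'C'] -- the copy is unaffected
```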

That goes to show you the power of python as a programming language.

We can take this concept further by showing how to implement shallow copy and deep copy in python using custom classes. All you really need to do is implement two special methods in your classes for whichever you need. If you need to use python shallow copy in your class, implement the __copy__() special method and if you need to use python deep copy, just implement the __deepcopy__() special method.

Let’s show this by examples.

In the code above we defined a Student class, with each student having a name, grade, and dept. Then we defined a Faculty class that aggregates a list of students. In the Faculty class we implemented the __deepcopy__() special method in order to be able to recursively copy the list of students. Finally, in the driver code, lines 25 to 37, we created objects of the classes and then copied the faculty object to a new faculty object to see how it runs, printing out the students in the new faculty object.
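The embedded classes aren’t shown here, so here is a condensed sketch of the idea; the attribute names follow the description above, but the exact fields and driver are my own reconstruction.

```python
import copy

class Student:
    def __init__(self, name, grade, dept):
        self.name, self.grade, self.dept = name, grade, dept

class Faculty:
    def __init__(self, students):
        self.students = students  # a list of Student objects

    def __deepcopy__(self, memo):
        # deep-copy the student list; memo guards against copying loops
        return Faculty(copy.deepcopy(self.students, memo))

faculty = Faculty([Student('Mary', 'A', 'Physics')])
new_faculty = copy.deepcopy(faculty)
print(new_faculty.students[0] is faculty.students[0])  # False -- fully independent
```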

That’s cool. Just love this code. I hope you enjoyed yourself. I would love to receive feedback as comments.

Happy pythoning.

Python Print() Function: How it works

One of the ubiquitous and most often used functions in python is the python print function. We use it for realizing the output of our codes and even for debugging. So, it is pertinent that we understand how it works.

python print

 

In its essential form, what the python print function does is take given objects, convert them to strings, and write the result to standard output, that is, the screen. It can even send the output to a file.

The python print syntax

The python print function, despite its wide-ranging uses, has a simple syntax: print(*objects, sep=' ', end='\n', file=sys.stdout, flush=False). I will be explaining each of the arguments in this post. So, just take note of the syntax.

Usually, when you want to print something to the screen, you provide the python print function with one or several object arguments. If you don’t specify other parameters, the function prints each of the arguments to the screen separated by a single space and then moves to a new line after all the arguments are printed. Let’s illustrate this with an example and explain how it relates to the syntax.

When you run the code above, you will see that it nicely prints out each of the objects passed to the python print function. Here is what happened. I passed it 5 objects and it printed the five objects, each separated by a space. The separation by a space comes from the sep parameter in the syntax above; sep means separator, and by default its value is a single space. Notice that I cast one of the objects to a string before printing it out. This is a trick to make the period adhere to the value of the string. Very cool. We can change the value of the separator; I will highlight that in the separator section below.
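The embedded snippet isn’t shown here, so this sketch illustrates the same points; the five objects are my own, with the last one cast to a string so the period sticks to it.

```python
version = 3
# five objects, separated by the default single-space sep
print('We', 'are', 'learning', 'python', str(version) + '.')
# prints: We are learning python 3.
```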

Now, what happens if we print without passing an object? Let me give an example following from the one above.

You can see that I repeated the earlier code, but on line five I wrote a print statement without giving it any argument or object. If you look at the output on the screen, you will see a blank line there. Yes, without any argument, the python print function just looks at the end parameter, and since the default is a newline, ‘\n’, it prints an empty line.
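A minimal sketch of that behavior, with my own surrounding lines:

```python
print('first line')
print()              # no arguments: only the default end='\n' is written
print('third line')  # so a blank line appears between the two
```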

Now let’s see how we can customize the working of the python print function using the keyword parameters outlined in the syntax.

Customizing python print with the sep keyword

The sep keyword separates each of the objects passed to the python print function based on its value. The default is a single space character. That means if you use the default, as outlined above, each of the objects will be separated by a space when printed.

What if we want another separator, say a colon, :, to separate each of the objects to be printed? Here is code that could do it.

If you look at the output on the screen, you can see that each of the objects passed to the python print function now has a colon separator between them.
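The embedded snippet isn’t shown here; a sketch of custom separators, with my own example values, looks like this:

```python
print('red', 'green', 'blue', sep=':')  # prints: red:green:blue
print(2024, 6, 15, sep='-')             # prints: 2024-6-15
```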

You could create any separator of your imagination. Most times when I have specific ways to print an output it could call for my customizing the separator.

Customizing python print with the end keyword.

The end keyword is another parameter that we can use to customize the python print function. As I highlighted above, when I called print without objects, the default for the end keyword is a newline, ‘\n’, which starts a new line after printing the objects. That means python print adds a newline to each line. Most times when I want python print without newlines, that is, with subsequent print calls continuing on the same line, I customize the end keyword. You just replace the default with, say, a space character, ‘ ‘, so that subsequent output continues on one single line.

For example, you have code you want printed in the same line. Here is the code that could do it.

You can see that by customizing the end parameter to a space, I have made all the print calls write their objects to the same line without a newline.
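The embedded snippet isn’t shown here; a sketch of the technique, with my own words, is:

```python
for word in ('all', 'on', 'one', 'line'):
    print(word, end=' ')  # a space instead of a newline after each object
print()                   # finally move to the next line
# output: all on one line
```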

How to print to file using file keyword

Usually, when you call the python print function, it prints to standard output, that is, the screen. That is the default. You can make it print to a file instead by passing a file object as the value of the file parameter; the file object must be writable. For details on how to open, read, and write files, see this blog post.

Now, let’s take an example. This time instead of printing to the screen we will be printing to a file or writing to a file. Here is the code:

    
text = ('I feel cool using python.'
        '\nIt is the best programming language')
with open('new_file.txt', 'w') as source_file:
    print(text, file=source_file)

You can run it on your machine. When you do, rather than sending the text message to your screen, it will print to the file new_file.txt. If new_file.txt doesn’t exist, it will be created.

One thing to note about file objects passed to the file keyword: you cannot use binary mode file objects. This is because the python print function converts all its objects to the str class (strings) before passing them to the file. So, note this; if you want to write to binary mode file objects, use the write methods that are built in for file objects.

You must really be feeling empowered with all the cool features in python print function. I am. You can subscribe to my blog or leave a comment below. I feel happy when I believe I have made an impact.

Happy pythoning.

Simulating A Random Walk In Python

Deterministic processes are what every programmer is familiar with when starting out on their journey. In fact, most beginner books on programming will teach you deterministic processes. These are processes where, for a given input, you always get the same output. But when you get into industry, you find that stochastic processes are often the norm when finding solutions to problems. Stochastic processes can give different results for the same input.

random walk python

 

In this post, I will be simulating a stochastic process, a drunkard’s walk, which is an example of a random walk.

Python random walks are interesting simulation models for the following reasons:

  1. They are widely used in industry and interesting to study.
  2. They show us how simulation works in practice and can be used to demonstrate how to structure abstract data types.
  3. They usually involve producing plots which are interesting and as they say, a picture is worth a thousand words.

So, let’s go to the simulation exercise. It is interesting to find out how much distance a drunk would have covered from his starting position if he takes a number of steps within a given space of time. Would he have moved farther after that time, or would he still be close to the origin? Where would the drunk be? Simulating such questions gives us a general idea of the drunk’s position. We’ll imagine that for each movement, the drunk can take one step in the north, south, east, or west direction. That means he has four choices for each step.

To model the drunk’s walk after some time, we will be using three classes representing objects that define his position relative to the origin: Location, Field, and Drunk classes.

The Location class defines his location relative to the origin. We could write code for the class this way:

    
class Location(object):

    def __init__(self, x, y):
        ''' x and y are numbers '''
        self.x, self.y = x, y

    def move(self, delta_x, delta_y):
        ''' delta_x and delta_y are numbers '''
        return Location(self.x + delta_x, self.y + delta_y)

    def get_x(self):
        return self.x

    def get_y(self):
        return self.y

    def dist_from(self, other):
        ox, oy = other.x, other.y
        x_dist, y_dist = self.x - ox, self.y - oy
        return (x_dist**2 + y_dist**2)**0.5

    def __str__(self):
        return '<' + str(self.x) + ', ' + str(self.y) + '>'

Each location has an x and y coordinate representing positions on the x and y axes. When the drunk moves and changes his location, we return a new Location object to signify this. Using the Location class, we can also calculate the distance of the drunk from another location, most often the origin.

The second class we need to define is the Field class. This class will allow us to add multiple drunks to the same location. It is a mapping of drunks to their locations. Code could be written this way for it:

    
class Field(object):

    def __init__(self):
        self.drunks = {}

    def add_drunk(self, drunk, loc):
        if drunk in self.drunks:
            raise ValueError('Duplicate drunk')
        else:
            self.drunks[drunk] = loc

    def move_drunk(self, drunk):
        if drunk not in self.drunks:
            raise ValueError('Drunk not in field')
        x_dist, y_dist = drunk.take_step()
        current_location = self.drunks[drunk]
        # use move method of Location to get new location
        self.drunks[drunk] = current_location.move(x_dist, y_dist)

    def get_loc(self, drunk):
        if drunk not in self.drunks:
            raise ValueError('Drunk not in field')
        return self.drunks[drunk]            

As you can see, the Field class is a mapping of drunks to locations. When we move a drunk, his location reflects this move and we take note of the current location. Also, we can use this class to find out the location of any drunk.

The last class of interest is the Drunk class. The Drunk class embodies all the drunks we will be playing with. It is a common class or parent class as all other drunks will inherit from this class.

    
import random

class Drunk(object):

    def __init__(self, name=None):
        ''' Assumes name is a string '''
        self.name = name

    def __str__(self):
        if self.name is not None:
            return self.name
        return 'Anonymous'

What the Drunk class does is give identity to each drunk object or subclass.

Now, we will create a drunk with our expected way of movement: that is take one step each time in the north, south, east, or west direction. We will call this drunk class, UsualDrunk. Here is the definition of the class.

    
class UsualDrunk(Drunk):

    def take_step(self):
        step_choices = [(0,1), (0, -1), (1,0), (-1,0)]
        return random.choice(step_choices)

The UsualDrunk class inherits from the Drunk class and the only method it defines is the random step it can take. From the take_step method you can see that it can only move one step to the east, west, north or south, and this in a randomized fashion.

So, now that we have our classes, let us try to answer the question: where will the drunk be after taking a number of steps in a random fashion? Say 10 steps, or 100, or 1000? Normally, we would expect that as the number of steps increases, the distance from the origin should increase. But this might not be the case, because you know how drunks walk: haphazardly. Some drunks can even retrace their steps back to where they started and go nowhere!

So, for our simulation, we will write code that makes use of these classes and run the code on the drunk taking a number of steps with different trials for each step. We are using different trials in order to balance out the randomized walk and get a mean of distances.

Here is the code:

When you run it, one fact stands out: the mean distance from the origin increases as the number of steps increases. That is the hypothesis we started with.

The pertinent new driver code is the following:

    
def walk(f, d, num_steps):
    '''Assumes: f a field, d a drunk in f, 
    and num_steps an int >= 0.
    Moves d num_steps times; returns the distance between
    the final location and the location at the start 
    of the walk.'''
    start = f.get_loc(d)
    for _ in range(num_steps):
        f.move_drunk(d)
    return start.dist_from(f.get_loc(d))

The walk function returns the distance from the starting location to the final location for a single trial, after the drunk takes the defined number of steps.

    
def sim_walks(num_steps, num_trials, d_class):
    '''Assumes num_steps an int >= 0, num_trials an int > 0,
    d_class a subclass of Drunk. 
    Simulates num_trials walks of num_steps steps each. 
    Returns a list of the final distance for each trial'''
    homer = d_class()
    origin = Location(0,0)
    distance = []
    for _ in range(num_trials):
        f = Field()
        f.add_drunk(homer, origin)
        distance.append(round(walk(f, homer, num_steps), 1))
    return distance

The sim_walks function (simulated walks) differs from the walk function in one aspect: it runs all the different trials for a specific number of steps. Say, for a 10-step walk, we did 100 trials so as to get the mean. So sim_walks returns a list of the distances for the trials. This is so that we can take the mean distance for each number of steps, since we are randomizing the walk.

And finally, the drunk_test function.

    
def drunk_test(walk_lengths, num_trials, d_class):
    '''Assumes walk_lengths a sequence of ints >= 0
    num_trials an int > 0, d_class a subclass of Drunk
    for each number of steps in walk_lengths, runs sim_walk
    with num_trials walks and prints results '''
    for num_steps in walk_lengths:
        distances = sim_walks(num_steps, num_trials, d_class)
        print(d_class.__name__, 'random walk of ', num_steps, 'steps')
        print('Mean:', round(sum(distances)/len(distances), 4))
        print('Max:', max(distances), 'Min:', min(distances))

This serves as the test of our code. For each number of steps, it prints out the mean across the trials, along with the max and min distances for those trials.
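If you just want the shape of the experiment, here is a compact, self-contained rewrite with the classes collapsed into one function; the step counts, trial count, and seed are my own choices, not the post’s.

```python
import random

def walk_distance(num_steps):
    '''Distance from the origin after num_steps random unit steps.'''
    x = y = 0
    for _ in range(num_steps):
        dx, dy = random.choice(((0, 1), (0, -1), (1, 0), (-1, 0)))
        x, y = x + dx, y + dy
    return (x * x + y * y) ** 0.5

random.seed(17)  # only to make the run repeatable
means = {}
for num_steps in (10, 100, 1000):
    distances = [walk_distance(num_steps) for _ in range(100)]
    means[num_steps] = sum(distances) / len(distances)
    print(num_steps, 'steps -> mean distance', round(means[num_steps], 1))
```

Running it shows the mean distance growing with the number of steps, which is the same conclusion drunk_test prints.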

You could download the above code here, random_walk.py.

But a picture is worth a thousand words. Let us use a plotted graph to illustrate the variation in the number of steps to the distance from the origin.

drunkards walk python

 

You can see from the graph above that when the drunk takes ten steps for each of the 100 trials, he stays closer to the origin than when he takes 100 or 1000 steps. The drunk seems determined to walk farther away if given the opportunity to take more steps. Drunks really mean to get home, it seems! A graph of the number of steps against the mean distance shows that this is truly the case: the more steps he is given the opportunity to take, the farther he ends up from where he started. The graph below shows that information.

drunkards walk python


The axes in the graph have been converted to logarithmic scales to clearly show the straight-line relationship between the number of steps and the mean distance from the starting point. To see how the code for the plotted graphs was written, you can download it here, random_walk_mpl.py.

Now, our simulation has dwelt on a drunk walking the way we expect: for each step one unit towards the east, west, north, or south.

What if we make the drunkard’s walk somewhat biased by skewing it a little? That would involve creating different drunks with different steps and comparing them to our usual drunk.

A biased random walk simulation.

Let’s imagine a drunkard who hates the cold and moves twice as fast in a southward direction. We could make him a subclass of Drunk class and change his way of movement in the class, calling him ColdDrunk.

This could be his class definition:

    
class ColdDrunk(Drunk):
    def take_step(self):
        step_choices = [(0.0, 1.0), (0.0, -2.0), 
                       (1.0, 0.0), (-1.0, 0.0)]
        return random.choice(step_choices)

You can see that whenever he moves southwards, along the y-axis, he takes a step of two units.

Now let’s also add another hypothetical drunk that moves only in the east-west direction. He really moves with the sun, or is phototropic. We could define his class, EWDrunk, in the following way:

    
class EWDrunk(Drunk):
    def take_step(self):
        step_choices = [(1.0, 0.0), (-1.0, 0.0)]
        return random.choice(step_choices)

So, we have all our drunks ready. Now let’s write code that will run them and compare their mean distances for various number of steps.
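The original comparison code (which plots the results) isn’t shown here, so below is a self-contained sketch of the same comparison with the classes collapsed into a table of step choices; the 1000 steps, 50 trials, and seed are my own choices, and it prints the means instead of plotting them.

```python
import random

STEP_CHOICES = {
    'UsualDrunk': ((0, 1), (0, -1), (1, 0), (-1, 0)),
    'ColdDrunk':  ((0.0, 1.0), (0.0, -2.0), (1.0, 0.0), (-1.0, 0.0)),
    'EWDrunk':    ((1.0, 0.0), (-1.0, 0.0)),
}

def mean_distance(choices, num_steps, num_trials):
    '''Mean distance from the origin over num_trials random walks.'''
    total = 0.0
    for _ in range(num_trials):
        x = y = 0.0
        for _ in range(num_steps):
            dx, dy = random.choice(choices)
            x, y = x + dx, y + dy
        total += (x * x + y * y) ** 0.5
    return total / num_trials

random.seed(17)  # only to make the run repeatable
results = {name: mean_distance(choices, 1000, 50)
           for name, choices in STEP_CHOICES.items()}
for name, mean in results.items():
    print(name, round(mean, 1))
```

The ColdDrunk’s southward bias should show up as a far larger mean distance than the other two drunks.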

If you run the code above you will get a plotted graph that shows number of steps against mean distance from the origin for the three drunks. You will get a graph that looks like the following:

drunkards walk python


You will notice that for both the UsualDrunk, whom we highlighted earlier, and the east-west drunk, EWDrunk, the mean distance varies little as the number of steps increases, compared to the south-loving, or north-hating, drunk, ColdDrunk. That means the ColdDrunk is moving away faster than the other drunks. This is not surprising, given that whenever he moves south he covers twice the distance, so on average his movement carries him farther from the origin than the other two.

We could take this conjecture further and build a scatter plot of each drunk’s location at every step, but I think the point has already been made: simulating a random walk can give us insights into a model and can confirm or deny a hypothesis.

If you would like a copy of the code for the three drunks, you can download it here, random_walk_biased.py.

That’s it folks. I hope you enjoyed this post. I really enjoyed coding it. It was fun.

It also shows the insight that simulating and plotting the behavior of a set of classes can give a programmer.

Happy pythoning.

Object Oriented Programming (OOP) in Python: Polymorphism Part 3

Polymorphism is a concept you will often find yourself implementing in code, especially in python. Polymorphism essentially means taking different forms. For example, a function implementing polymorphism might accept arguments of different types, not just one specific type.

oop in python polymorphism

 

There are different ways to show polymorphism in python. I will enumerate some here.

1. Python polymorphism with functions and objects.

In this implementation of polymorphism, a function can take objects belonging to different classes and carry out operations on them without any regard for the classes. One common example of this is the python len function, or what is called the python length function. You can read my blog post on the python length function here.

Here is an example of how the python length function implements polymorphism.

You can see from the above code that the python len function can take a str object as argument as well as a list object as argument. These two objects belong to different classes.
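The embedded snippet isn’t shown here; a minimal sketch of len operating on objects of different classes is:

```python
print(len('python'))   # 6 -- works on a str
print(len([1, 2, 3]))  # 3 -- works on a list
print(len({'a': 1}))   # 1 -- works on a dict too
```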

Now, not only can built-in functions exhibit polymorphism; we can also use it in our custom classes.

I believe you have followed through with the earlier posts on python OOP, so the example classes are self-explanatory. If not, you can get the links at the end of this post. I will just dwell on the print_talk function. You can see that we invoked the function in the for loop after creating the dog and person objects. Each time print_talk is invoked, a different object is passed to it, and it successfully invokes that object’s talk method. Different objects, same function, different behavior.
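The embedded classes aren’t shown here, so this is a sketch of the print_talk idea; the class bodies and messages are my own reconstruction.

```python
class Dog:
    def __init__(self, name):
        self.name = name

    def talk(self):
        return f'{self.name} says woof'

class Person:
    def __init__(self, name):
        self.name = name

    def talk(self):
        return f'{self.name} says hello'

def print_talk(obj):
    # works for any object that has a talk() method, whatever its class
    print(obj.talk())

for obj in (Dog('Bingo'), Person('Michael')):
    print_talk(obj)
# prints:
# Bingo says woof
# Michael says hello
```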

2. Polymorphism with inheritance

Polymorphism with inheritance has been discussed in the python OOP inheritance post. I will just briefly highlight it here. It involves a method of the child class overriding a method of the parent class. The code below shows it, but let me point it out again: the method of the child class has the same name and arguments as the method of the parent class, but the implementation of the method is different.

    
class Animal:
    
    type = 'Animal'

    def __init__(self, name):
        ''' name is a string '''
        self.name = name

    def talk(self):
        print(f'My name is {self.name} and I can talk')

    def walk(self):
        print(f'My name is {self.name} and I can walk')        

class Dog(Animal):

    legs = 4

    def __init__(self, name, bark):
        Animal.__init__(self, name)
        self.bark = bark 

    def talk(self):
        print(f'My name is {self.name} and '
              f'I can bark: "{self.bark}"')

    def walk(self):
        print(f'My name is {self.name} and '
              f'I walk with {self.legs} legs')

From the code above, you can see that the Dog class inherits from the Animal class. The Dog class overrides the talk and walk methods of the Animal class and implements them differently. That is what polymorphism in inheritance is all about.

As your skill with python increases, you will find yourself implementing polymorphism in a lot of instances. It is a feature of python that is commonplace.

I hope you enjoyed my three part series on how OOP is implemented in python. The first part was on OOP in python classes and objects, the second part was on OOP in python class inheritance and this is the third part. I hope you have learned something new about python and will be determined to implement these features in your coding.

Happy pythoning.

Object Oriented Programming (OOP) in Python: Inheritance. Part 2

Inheritance as a concept refers to the ability of python child classes to acquire the data attributes and methods of python parent classes. Inheritance occurs many times in programming. We might find two objects that are related, but one of them has functionality that is specialized. What we can do is take the common attributes and functions and put them in one class, put the special attributes and functions in another class, and have the class with the special functions inherit the common ones.

oop python inheritance

 

For example, take persons and dogs. We know that all persons and dogs are animals with names and they can walk, but dogs bark and persons speak words. So the commonality here is being animals. We can create an animal class for the common features and then create separate dog and person classes for the special features.

The python class which inherits from another python class is called the python child class or derived class while the class from which other classes inherit is called the python parent class or base class.

The syntax for a python class inheritance from a parent class is as stated below:

    
class ChildClassName(ParentClassName):

    statement 1

    statement 2

Let’s take an example from the dog and person objects above. We could create a class for dog objects and another class for person objects and then make the dog and person objects inherit from the animal class since that is their common features. The code could run like the one below:
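The embedded code isn’t shown here, so this is a sketch along the lines described below; the specific names, barks, and words are my own.

```python
class Animal:
    def __init__(self, name):
        ''' name is a string '''
        self.name = name

    def talk(self):
        print(f'My name is {self.name} and I can talk')

class Dog(Animal):
    legs = 4

    def __init__(self, name, bark):
        Animal.__init__(self, name)  # parent constructor sets the name
        self.bark = bark

    def talk(self):
        print(f'My name is {self.name} and I can bark: "{self.bark}"')

class Person(Animal):
    legs = 2

    def __init__(self, name, words):
        Animal.__init__(self, name)
        self.words = words

    def talk(self):
        print(f'My name is {self.name} and I can say: "{self.words}"')

bingo = Dog('Bingo', 'Woof')
michael = Person('Michael', 'Hello')
bingo.talk()
michael.talk()
```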

You can see from the code below that the python child classes, Dog and Person, call the python parent class constructor, Animal.__init__() to initialize their names because name is a common feature for both of them. But Dog and Person have special features like bark and words that they use to talk differently. Also, notice that there is a class variable, legs, for Dog and Person that have different values. This is to emphasize their different legs and these class variables apply to all their objects.

When a derived class definition is executed, it is executed the same way as the parent class and when the python child class is constructed through the __init__() method, the parent class is also remembered. When attribute references are being resolved, the parent class will also be included in the hierarchy of classes to check for such attributes. Just like the self.name calls we had for Dog and Person objects above. When the attribute is not found in the child class, it searches for it in the parent class. The same process occurs for method references. Notice that when I called bingo.talk() and Michael.talk() above, the program first searched the class definitions of the objects to find if there was a talk method defined therein. If there were none, it would have gone on to search the parent classes until the specified method is found.

Another python class inheritance feature you have to notice is that the python child classes in this example override the methods of the python parent class. This is allowed by python’s class inheritance mechanism. In the example above, the Animal class has talk and walk methods, but the child classes implement their own talk and walk methods, overriding those of the parent class.

You can access the python parent class attributes and functions in a python child class by calling ParentClassName.attribute or ParentClassName.function(self). Try it out yourself in code and see. That is what python class inheritance is all about: child classes own the artefacts of their parents.

Also, another feature you have to notice is that an overriding method may not want simply to replace the parent method; it may want to extend the parent method by adding more functionality to what the parent can do. Let me give an example of how python extends methods in inheritance.
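The Worker and CEO classes described below are not shown in full here, so this is a sketch of how they might look; the worker’s name and company values are assumptions:

```python
class Worker:
    def __init__(self, name, company):
        self.name = name
        self.company = company

    def works(self):
        print(self.name, "works at", self.company)


class CEO(Worker):
    def works(self):
        Worker.works(self)  # run the parent's version first...
        # ...then extend it with CEO-specific behaviour
        print(self.name, "also owns", self.company)


boss = CEO("Ada", "Acme")  # hypothetical name and company
boss.works()
```

In modern python, calling super().works() inside CEO.works() achieves the same extension without naming the parent class explicitly.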

You can see that the Worker class defines a worker by his name and the company he works in. The CEO class inherits from the Worker class. But in its works() method, the CEO class first calls the works() method of Worker using Worker.works(self) and then extends it by printing out that the CEO also owns the company. So, you can see how one can extend a method in a child class: you call the method of the parent and then add further functionality.

Python functions used to check python class inheritance.

Python has two built-in functions that you can use to check for python class inheritance in your objects. They are isinstance() and issubclass().

isinstance(): The syntax is isinstance(object, classinfo) and it checks whether object is an instance of classinfo. That is, it checks the type of the object. It returns True if object is an instance of classinfo and False otherwise. Examine the code below and run it to see how it works for our inheriting classes.
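Assuming the Animal, Dog and Person classes are defined as in the earlier sketch (reduced here to the bare minimum), the checks could look like this:

```python
class Animal:
    def __init__(self, name):
        self.name = name

class Dog(Animal):
    pass

class Person(Animal):
    pass

bingo = Dog("Bingo")

print(isinstance(bingo, Dog))     # True: bingo was created from Dog
print(isinstance(bingo, Animal))  # True: every Dog is also an Animal
print(isinstance(bingo, Person))  # False: Dog inherits nothing from Person
```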

You will notice that since Dog is a child class of Animal, a dog object is also an instance of Animal, that is, of type Animal. So, a dog object is an instance (type) of Dog and also an instance (type) of Animal. But a dog is not an instance of Person because Dog does not inherit anything from the Person class. Please, note these differences.

issubclass(): This function is used to check for python class inheritance. The syntax is issubclass(class, classinfo), where class and classinfo are classes. The function returns True if class is a subclass of classinfo and False otherwise. Let’s use our example classes to check for inheritance.
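With the same Animal, Dog and Person hierarchy, the subclass checks could run like this:

```python
class Animal:
    pass

class Dog(Animal):
    pass

class Person(Animal):
    pass

print(issubclass(Dog, Animal))     # True
print(issubclass(Person, Animal))  # True
print(issubclass(Dog, Person))     # False: no inheritance relationship
print(issubclass(Dog, Dog))        # True: a class counts as a subclass of itself
```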

You will notice that since the Dog and Person classes inherit from the Animal class, they are subclasses of the Animal class, but Dog is not a subclass of Person (nor Person of Dog) because there is no inheritance relationship between them.

Types of python class inheritance: multilevel inheritance and multiple inheritance.

Python multilevel inheritance: Here classes inherit at multiple separate levels: C inherits from B and B inherits from A. For example, consider a case where Person inherits from Animal and Student inherits from Person. If you use the isinstance() and issubclass() functions, you can see that Student, by this multilevel class inheritance mechanism, acquires the data attributes and methods of Animal. Let’s show this with an example.
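A sketch of such a chain could look like this; the school attribute and the particular names are assumptions added for illustration:

```python
class Animal:
    def __init__(self, name):
        self.name = name

class Person(Animal):
    def talk(self):
        print(self.name, "says: Hello!")

class Student(Person):
    def __init__(self, name, school):
        Person.__init__(self, name)  # reaches Animal.__init__ through Person
        self.school = school         # illustrative extra attribute

mary = Student("Mary", "Springfield High")  # hypothetical values
print(mary.name)                 # set by Animal.__init__, two levels up
print(isinstance(mary, Animal))  # True
```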

You can see that Student has a name attribute inherited from Animal even though it directly inherits from Person; this is because, in the inheritance tree, it also inherits from Animal. We show this when we call the isinstance() function at the last line.

This is an example of multilevel inheritance.

Python multiple inheritance: In multiple inheritance, a python child class can inherit from more than one python parent class. The syntax for multiple inheritance is:

    
class ChildClass(ParentClass1, ParentClass2, etc):
    statement 1
    statement 2
    

The way python works is, roughly, depth-first and left-to-right: when an object of the child class looks up an attribute, python first searches the child class’s own namespace; if the attribute is not found there, it searches ParentClass1 and all the parent classes of ParentClass1, then ParentClass2, and so on in the order they are written. (Strictly, python uses the C3 linearization, also called the method resolution order, which follows this left-to-right rule while guaranteeing that each class in the hierarchy is searched only once.)
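The lookup order can be seen directly in code. The class names below are assumptions made up for illustration; the __mro__ attribute prints the exact order python will search:

```python
class Swimmer:
    def move(self):
        return "swims"

class Runner:
    def move(self):
        return "runs"

class Triathlete(Swimmer, Runner):
    pass  # inherits move from both parents

t = Triathlete()
print(t.move())  # "swims": Swimmer is listed first, so its move() wins
print([c.__name__ for c in Triathlete.__mro__])
# ['Triathlete', 'Swimmer', 'Runner', 'object']
```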

Benefits of using python class Inheritance.

Inheritance as an OOP concept has many advantages that it gives to the programmer.

1. It makes the programmer write less code and avoid repeating himself. Any code written in the parent class is automatically available to the child classes.

2. It also makes for more structured code. When code is divided into classes, the structure of the software is better as each class represents a separate functionality.

3. The code is more scalable: new behaviour can be added by creating new child classes without modifying existing ones.

You can check out my other posts about OOP concepts in python like that about classes and objects, as well as that on python polymorphism.

Happy pythoning.
