Death Swamp

Recently a friend sent me this. I recognized it instantly, although I never knew that it had a name.

There is a management technique called “death swamp” (or “death bog” or “fly paper”). It works this way. Occasionally some young fire-eater comes up with an idea to Do Something. The bureaucracy can’t simply reject his idea because then they’d have to give an explanation for why his idea was rejected. So they pat him on the back, agree that his idea is a good one, and encourage him to pursue it. In fact they think so highly of his idea that they helpfully volunteer information about How We Get Things Done Around Here. They provide a sheaf of forms and advice on how to get the ball rolling.

The young and inexperienced fire-eater happily starts down the road in the direction that has been pointed out to him. In short order he finds himself in a swamp of procedures and paperwork so thick that he is completely bogged down and making no progress. Eventually he gives up.

The next time he comes up with an idea, he is given the same forms again. This time, seeing the forms, he realizes his mistake. He politely accepts the forms and walks away. Around the corner, he throws the forms in the trash and gives up on his idea. Because by then he knows that the only way to escape the swamp is not to enter it in the first place.

enum in Python

Recently I was reading a post by Eli Bendersky (one of my favorite bloggers) and I ran across a sentence in which Eli says “It’s a shame Python still doesn’t have a functional enum type, isn’t it?”

The comment startled me because I had always thought that it was obvious how to do enums in Python, and that it was obvious that you don’t need any special language features to do it. Eli’s comment made me think that I might need to do a reality-check on my sense of what was and was not obvious about enums in Python.

So I googled around a bit and found that there are a lot of different ideas about how to do enums in Python. I found a very large set of suggestions on StackOverflow here and here and here. There is a short set of suggestions on Python Examples. The ActiveState Python Cookbook has a long recipe, and PEP 354 is a short proposal (that has been rejected). Surprisingly, I found only a couple of posts that suggested what had seemed to me to be THE obvious solution. The clearest was by snakile on StackOverflow.

Anyway, to end the suspense, the answer that seemed to me so obvious was this. An enum is an enumerated data type. An enumerated data type is a type, and a type is a class.

class           Color : pass
class Red      (Color): pass
class Yellow   (Color): pass
class Blue     (Color): pass

Which allows you to do things like this.

class Toy: pass

myToy = Toy()

myToy.color = "blue"  # note we assign a string, not an enum

if isinstance(myToy.color, type) and issubclass(myToy.color, Color):
    pass
else:
    print("My toy has no color!!!")    # produces:  My toy has no color!!!

myToy.color = Blue   # note we use an enum

print("myToy.color is", myToy.color.__name__)  # produces: myToy.color is Blue
print("myToy.color is", myToy.color)           # produces: myToy.color is <class '__main__.Blue'>

if myToy.color is Blue:
    myToy.color = Red

if myToy.color is Red:
    print("my toy is red")   # produces: my toy is red
else:
    print("I don't know what color my toy is.")

So that’s what I came up with.

But with so many intelligent people all trying to answer the same question, and coming up with such a wide array of different answers, I had to fall back and ask myself a few questions.

  • Why am I seeing so many different answers to what seems like a simple question?
  • Is there one right answer? If so, what is it?
  • What is the best — or most widely-used, or most pythonic — way to do enums in Python?
  • Is the question really as simple as it seems?

For me, the jury is still out on most of these questions, but until they return with a verdict I have come up with two thoughts on the subject.

First, I think that many programmers come to Python with backgrounds in other languages — C or C++, Java, etc. Their experiences with other languages shape their conceptions of what an enum — an enumerated data type — is. And when they ask “How can I do enums in Python?” they’re asking a question like the question that sparked the longest thread of answers on StackOverflow:

I’m mainly a C# developer, but I’m currently working on a project in Python. What’s the best way to implement the equivalent of an enum [i.e. a C# enum] in Python?

So naturally, the question “How can I implement in Python the equivalent of the kind of enums that I’m familiar with in language X?” has at least as many answers as there are values of X.

My second thought is somewhat related to the first.

Python developers believe in duck typing. So a Python developer’s first instinct is not to ask you:

What do you mean by “enum”?

A Python developer’s first instinct is to ask you:

What kinds of things do you think an “enum” should be able to do?
What kinds of things do you think you should be able to do with an “enum”?

And I think that different developers probably have very different ideas about what one should be able to do with an “enum”. Naturally, that leads them to propose different ways of implementing enums in Python.

As a simple example, consider the question — Should you be able to sort enums?

My personal inclination is to say that — in the most conceptually pure sense of “enum” — the concept of sorting enums makes no sense. And my suggestion for implementing enums in Python reflects this. Suppose you implement a “Color” enum using the technique that I’ve proposed, and then try to sort enums.

# how do enumerated values sort?
colors = [Red, Yellow, Blue]
colors.sort()
for color in colors:
    print(color.__name__)

What you get is this:

Traceback (most recent call last):
  File "C:/Users/ferg_s/pydev/enumerated_data_types/edt.py", line 32, in <module>
    colors.sort()
TypeError: unorderable types: type() < type()

So that suits me just fine.

But I can easily imagine someone (myself?) working with an enum for, say, Weekdays (Sunday, Monday, Tuesday… Saturday). And I think it might be reasonable in that situation to want to be able to sort Weekdays and to do greater than and less than comparisons on them.

So if we’re talking duck typing, I’m happy with enums/ducks that are motionless and silent. My only requirement is that they be different from everything else and different from each other. But I can easily imagine situations where one might reasonably need/want/prefer ducks that can form a conga line, dance, and sing a few bars. And for those situations, you obviously need more elaborate implementations of enums.
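
For the conga-line case, here is one sketch of how the class-based approach could be extended, assuming we are willing to give each enum class an explicit order attribute and a small metaclass (both names are invented for illustration):

class OrderedEnumMeta(type):
    # lets the enum classes themselves be compared and sorted
    def __lt__(cls, other):
        return cls.order < other.order

class Weekday(metaclass=OrderedEnumMeta):
    order = 0

class Sunday(Weekday):    order = 1
class Monday(Weekday):    order = 2
class Tuesday(Weekday):   order = 3
class Saturday(Weekday):  order = 7

days = [Saturday, Tuesday, Sunday]
days.sort()                   # sort() needs only __lt__
for day in days:
    print(day.__name__)       # produces: Sunday, Tuesday, Saturday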

So, with these thoughts in mind, I’m inclined to think that there is no single, best way to implement an enum in Python. The concept of an enum is flexible enough to cover a variety of implementations offering a variety of features.

Python Decorators

In August 2009, I wrote a post titled Introduction to Python Decorators. It was an attempt to explain Python decorators in a way that I (and I hoped, others) could grok.

Recently I had occasion to re-read that post. It wasn’t a pleasant experience — it was pretty clear to me that the attempt had failed.

That failure — and two other things — have prompted me to try again.

  • Matt Harrison has published an excellent e-book Guide to: Learning Python Decorators.
  • I now have a theory about why most explanations of decorators (mine included) fail, and some ideas about how better to structure an introduction to decorators.

There is an old saying to the effect that “Every stick has two ends, one by which it may be picked up, and one by which it may not.” I believe that most explanations of decorators fail because they pick up the stick by the wrong end.

In this post I will show you what the wrong end of the stick looks like, and point out why I think it is wrong. And I will show you what I think the right end of the stick looks like.

 

The wrong way to explain decorators

Most explanations of Python decorators start with an example of a function to be decorated, like this:

def aFunction():
    print("inside aFunction")

and then add a decoration line, which starts with an @ sign:

@myDecorator
def aFunction():
    print("inside aFunction")

At this point, the author of the introduction often defines a decorator as the line of code that begins with the “@”. (In my older post, I called such lines “annotation” lines. I now prefer the term “decoration” line.)

For instance, in 2008 Bruce Eckel wrote on his Artima blog

A function decorator is applied to a function definition by placing it on the line before that function definition begins.

and in 2004, Phillip Eby wrote in an article in Dr. Dobb’s Journal

Decorators may appear before any function definition…. You can even stack multiple decorators on the same function definition, one per line.

Now there are two things wrong with this approach to explaining decorators. The first is that the explanation begins in the wrong place. It starts with an example of a function to be decorated and a decoration line, when it should begin with the decorator itself. The explanation should end, not start, with the decorated function and the decoration line. The decoration line is, after all, merely syntactic sugar — it is not at all an essential element in the concept of a decorator.

The second is that the term “decorator” is used incorrectly (or ambiguously) to refer both to the decorator and to the decoration line. For example, in his Dr. Dobb’s Journal article, after using the term “decorator” to refer to the decoration line, Phillip Eby goes on to define a “decorator” as a callable object.

But before you can do that, you first need to have some decorators to stack. A decorator is a callable object (like a function) that accepts one argument—the function being decorated.

So… it would seem that a decorator is both a callable object (like a function) and a single line of code that can appear before the line of code that begins a function definition. This is sort of like saying that an “address” is both a building (or apartment) at a specific location and a set of lines (written in pencil or ink) on the front of a mailing envelope. The ambiguity may be almost invisible to someone familiar with decorators, but it is very confusing for a reader who is trying to learn about decorators from the ground up.

 

The right way to explain decorators

So how should we explain decorators?

Well, we start with the decorator, not the function to be decorated.

One
We start with the basic notion of a function — a function is something that generates a value based on the values of its arguments.

Two
We note that in Python, functions are first-class objects, so they can be passed around like other values (strings, integers, objects, etc.).

Three
We note that because functions are first-class objects in Python, we can write functions that both (a) accept function objects as argument values, and (b) return function objects as return values. For example, here is a function foobar that accepts a function object original_function as an argument and returns a function object new_function as a result.

def foobar(original_function):

    # make a new function
    def new_function():
        pass   # some code

    return new_function

Four
We define “decorator”.

A decorator is a function (such as foobar in the above example) that takes a function object as an argument, and returns a function object as a return value.

So there we have it — the definition of a decorator. Anything else that we say about decorators is a refinement of, or an expansion of, or an addition to, this definition of a decorator.

Five
We show what the internals of a decorator look like. Specifically, we show different ways that a decorator can use the original_function in the creation of the new_function. Here is a simple example.

def verbose(original_function):

    # make a new function that prints a message when original_function starts and finishes
    def new_function(*args, **kwargs):
        print("Entering", original_function.__name__)
        result = original_function(*args, **kwargs)
        print("Exiting ", original_function.__name__)
        return result    # pass original_function's return value through

    return new_function

Six
We show how to invoke a decorator — how we can pass into a decorator one function object (its input) and get back from it a different function object (its output). In the following example, we pass the widget_func function object to the verbose decorator, and we get back a new function object to which we assign the name talkative_widget_func.

def widget_func():
    pass   # some code

talkative_widget_func = verbose(widget_func)

Seven
We point out that decorators are often used to add features to the original_function. Or more precisely, decorators are often used to create a new_function that does roughly what original_function does, but also does things in addition to what original_function does.

And we note that the output of a decorator is typically used to replace the original function that we passed in to the decorator as an argument. A typical use of decorators looks like this. (Note the change to line 4 from the previous example.)

def widget_func():
    pass   # some code

widget_func = verbose(widget_func)

So for all practical purposes, in a typical use of a decorator we pass a function (widget_func) through a decorator (verbose) and get back an enhanced (or souped-up, or “decorated”) version of the function.

Eight
We introduce Python’s “decoration syntax” that uses the “@” to create decoration lines. This feature is basically syntactic sugar that makes it possible to re-write our last example this way:

@verbose
def widget_func():
    pass   # some code

The result of this example is exactly the same as the previous example — after it executes, we have a widget_func that has all of the functionality of the original widget_func, plus the functionality that was added by the verbose decorator.

Note that in this way of explaining decorators, the “@” and decoration syntax is one of the last things that we introduce, not one of the first.

And we absolutely do not refer to line 1 as a “decorator”. We might refer to line 1 as, say, a “decorator invocation line” or a “decoration line” or simply a “decoration”… whatever. But line 1 is not a “decorator”.

Line 1 is a line of code. A decorator is a function — a different animal altogether.

 

Nine
Once we’ve nailed down these basics, there are a few advanced features to be covered.

  • We explain that a decorator need not be a function (it can be any sort of callable, e.g. a class).
  • We explain how decorators can be nested within other decorators.
  • We explain how decoration lines can be “stacked”. A better way to put it would be: we explain how decorators can be “chained”.
  • We explain how additional arguments can be passed to decorators, and how decorators can use them. (A short sketch of these last two items follows this list.)
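
To give the flavor of the last two items, here is a minimal sketch. The names shout and repeat are invented for illustration; note that stacked decoration lines are applied bottom-up, and that a “decorator with arguments” is really a function that returns a decorator:

def shout(original_function):
    # decorator: upper-case whatever the decorated function returns
    def new_function(*args, **kwargs):
        return original_function(*args, **kwargs).upper()
    return new_function

def repeat(times):
    # not itself a decorator -- it builds and returns a decorator around 'times'
    def decorator(original_function):
        def new_function(*args, **kwargs):
            return original_function(*args, **kwargs) * times
        return new_function
    return decorator

@repeat(3)     # applied second
@shout         # applied first
def greet():
    return "hi "

print(greet())   # produces: HI HI HI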

Ten — A decorators cookbook

The material that we’ve covered up to this point is what any basic introduction to Python decorators would cover. But a Python programmer needs something more in order to be productive with decorators. He (or she) needs a catalog of recipes, patterns, examples, and commentary that describes / shows / explains when and how decorators can be used to accomplish specific tasks. (Ideally, such a catalog would also include examples and warnings about decorator gotchas and anti-patterns.) Such a catalog might be called “Python Decorator Cookbook” or perhaps “Python Decorator Patterns”.



So that’s it. I’ve described what I think is wrong (well, let’s say suboptimal) about most introductions to decorators. And I’ve sketched out what I think is a better way to structure an introduction to decorators.

Now I can explain why I like Matt Harrison’s e-book Guide to: Learning Python Decorators. Matt’s introduction is structured in the way that I think an introduction to decorators should be structured. It picks up the stick by the proper end.

The first two-thirds of the Guide hardly talk about decorators at all. Instead, Matt begins with a thorough discussion of how Python functions work. By the time the discussion gets to decorators, we have been given a strong understanding of the internal mechanics of functions. And since most decorators are functions (remember our definition of decorator), at that point it is relatively easy for Matt to explain the internal mechanics of decorators.

Which is just as it should be.


Revised 2012-11-26 — replaced the word “annotation” with “decoration”, following terminology ideas discussed in the comments.



Unicode – the basics

An introduction to the basics of Unicode, distilled from several earlier posts. In the interests of presenting the big picture, I have painted with a broad brush — large areas are summarized; nits are not picked; hairs are not split; wind resistance is ignored.

Unicode = one character set, plus several encodings

Unicode is actually not one thing, but two separate and distinct things. The first is a character set and the second is a set of encodings.

  • The first — the idea of a character set — has absolutely nothing to do with computers.
  • The second — the idea of encodings for the Unicode character set — has everything to do with computers.

Character sets

The idea of a character set has nothing to do with computers. So let’s suppose that you’re a British linguist living in, say, 1750. The British Empire is expanding and Europeans are discovering many new languages, both living and dead. You’ve known about Chinese characters for a long time, and you’ve just discovered Sumerian cuneiform characters from the Middle East and Sanskrit characters from India.

Trying to deal with this huge mass of different characters, you get a brilliant idea — you will make a numbered list of every character in every language that ever existed.

You start your list with your own familiar set of English characters — the upper- and lower-case letters, the numeric digits, and the various punctuation marks like period (full stop), comma, exclamation mark, and so on. And the space character, of course.

01 a
02 b
03 c
...
26 z
27 A
28 B
...
52 Z
53 0
54 1
55 2
...
62 9
63 (space)
64 ? (question mark)
65 , (comma)
... and so on ...

Then you add the Spanish, French and German characters with tildes, accents, and umlauts. You add characters from other living languages — Greek, Japanese, Chinese, Korean, Sanskrit, Arabic, Hebrew, and so on. You add characters from dead alphabets — Assyrian cuneiform — and so on, until finally you have a very long list of characters.

  • What you have created — a numbered list of characters — is known as a character set.
  • The numbers in the list — the numeric identifiers of the characters in the character set — are called code points.
  • And because your list is meant to include every character that ever existed, you call your character set the Universal Character Set.

Congratulations! You’ve just invented (something similar to) the first half of Unicode — the Universal Character Set or UCS.

Encodings

Now suppose you jump into your time machine and zip forward to the present. Everybody is using computers. You have a brilliant idea. You will devise a way for computers to handle UCS.

You know that computers think in ones and zeros — bits — and collections of 8 bits — bytes. So you look at the biggest number in your UCS and ask yourself: How many bytes will I need to store a number that big? The answer you come up with is 4 bytes, 32 bits. So you decide on a simple and straight-forward digital implementation of UCS — each number will be stored in 4 bytes. That is, you choose a fixed-length encoding in which every UCS character (code point) can be represented, or encoded, in exactly 4 bytes, or 32 bits.

In short, you devise the Unicode UCS-4 (Universal Character Set, 4 bytes) encoding, aka UTF-32 (Unicode Transformation Format, 32 bits).
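
Python 3 can demonstrate this encoding directly (a small sketch using the standard utf-32-be codec, which is big-endian UTF-32 without a byte order mark):

data = "Ab".encode("utf-32-be")   # big-endian UTF-32, no byte order mark
print(len(data))                  # produces: 8 -- exactly 4 bytes per character
print(data)                       # produces: b'\x00\x00\x00A\x00\x00\x00b'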

UTF-8 and variable-length encodings

UCS-4 is simple and straight-forward… but inefficient. Computers send a lot of strings back and forth, and many of those strings use only ASCII characters — characters from the old ASCII character set. One byte — eight bits — is more than enough to store such characters. It is grossly inefficient to use 4 bytes to store an ASCII character.

The key to the solution is to remember that a code point is nothing but a number (an integer). It may be a short number or a long number, but it is only a number. We need just one byte to store the shorter numbers of the Universal Character Set, and we need more bytes only when the numbers get longer. So the solution to our problem is a variable-length encoding.

Specifically, Unicode’s UTF-8 (Unicode Transformation Format, 8 bit) is a variable-length encoding in which each UCS code point is encoded using 1, 2, 3, or 4 bytes, as necessary.

In UTF-8, if the first bit of a byte is a “0”, then the remaining 7 bits of the byte contain one of the 128 original 7-bit ASCII characters. If the first bit of the byte is a “1”, then the byte is part of a multi-byte sequence, and its other bits carry other information, such as (in the first byte of the sequence) the total number of bytes — 2, or 3, or 4 bytes — that are being used to represent the code point. (For a quick overview of how this works at the bit level, see How does UTF-8 “variable-width encoding” work?)
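
A quick way to see those marker bits from Python 3 (the character é, code point U+00E9, is just an arbitrary two-byte example):

for byte in "é".encode("utf-8"):
    print(format(byte, "08b"))

# produces:
# 11000011   -- leading byte: "110" announces a 2-byte sequence
# 10101001   -- continuation byte: always starts with "10"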

Just use UTF-8

UTF-8 is a great technology, which is why it has become the de facto standard for encoding Unicode text, and is the most widely-used text encoding in the world. Text strings that use only ASCII characters can be encoded in UTF-8 using only one byte per character, which is very efficient. And if characters — Chinese or Japanese characters, for instance — require multiple bytes, well, UTF-8 can do that, too.
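
Here is a small Python 3 sketch of that variable-length behavior (the characters are arbitrary examples):

for ch in ["A", "é", "€", "𝄞"]:
    print(ch, "U+%04X" % ord(ch), "->", len(ch.encode("utf-8")), "bytes")

# produces:
# A U+0041 -> 1 bytes
# é U+00E9 -> 2 bytes
# € U+20AC -> 3 bytes
# 𝄞 U+1D11E -> 4 bytes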

Byte Order Mark

Unicode multi-byte encodings such as UTF-16 and UTF-32 store UCS code points (integers) in multi-byte chunks — 2-byte chunks in the case of UTF-16 and 4-byte chunks in the case of UTF-32. (Strictly speaking, UTF-16 is not fixed-length: code points beyond the first 65,536 are stored as pairs of 2-byte chunks called surrogate pairs. But every chunk is still more than one byte, which is what matters here.)

Unfortunately, different computer architectures — basically, different processor chips — use different techniques for storing such multi-byte integers. In “little-endian” computers, the “little” (least significant) byte of a multi-byte integer is stored leftmost. “Big-endian” computers do the reverse; the “big” (most significant) byte is stored leftmost.

  • Intel computers are little-endian.
  • Motorola computers are big-endian.
  • Microsoft Windows was designed around a little-endian architecture — it runs only on little-endian computers or computers running in little-endian mode — which is why Intel hardware and Microsoft software fit together like hand and glove.

Differences in endian-ness can create data-exchange issues between computers. Specifically, the possibility of differences in endian-ness means that if two computers need to exchange a string of text data, and that string is encoded in a multi-byte Unicode encoding such as UTF-16 or UTF-32, the string should begin with a Byte Order Mark (or BOM) — a special character at the beginning of the string that indicates the endian-ness of the string.

Strings encoded in UTF-8 don’t require a BOM, so the BOM is basically a non-issue for programmers who use only UTF-8.
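
A short Python 3 sketch makes the BOM visible (the exact byte order shown is what you would see on a little-endian machine):

text = "hi"
print(text.encode("utf-16"))      # produces: b'\xff\xfeh\x00i\x00' -- starts with the BOM
print(text.encode("utf-16-le"))   # produces: b'h\x00i\x00' -- explicit order, no BOM
print(text.encode("utf-8"))       # produces: b'hi' -- no BOM needed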


Resources

Python’s magic methods

Here are some links to documentation of Python’s magic methods, aka special methods, aka “dunder” (double underscore) methods.

There are also a few other Python features that are sometimes characterized as “magic”.
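
As a quick taste of what magic methods do, here is a minimal sketch (the Deck class is invented for illustration). Defining __len__ and __getitem__ is enough to make Python’s built-in len() and square-bracket indexing work on your own objects:

class Deck:
    def __init__(self):
        self.cards = list(range(52))
    def __len__(self):          # called by the built-in len()
        return len(self.cards)
    def __getitem__(self, i):   # called by the indexing syntax deck[i]
        return self.cards[i]

deck = Deck()
print(len(deck), deck[51])      # produces: 52 51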

I’m sure there are other useful Web pages about magic methods that I haven’t found. If you know of one (and feel like sharing it) note that you can code HTML tags into a WordPress comment, like this, and they will show up properly formatted:

I found a useful discussion of magic methods at
<a href="http://www.somebodys_web_site.com/magic-methods">www.somebodys_web_site.com/magic-methods</a>

 

Gotcha — Mutable default arguments


Note: examples are coded in Python 2.x, but the basic point of the post applies to all versions of Python.

There’s a Python gotcha that bites everybody as they learn Python. In fact, I think it was Tim Peters who suggested that every programmer gets caught by it exactly two times. It is called the mutable defaults trap. Programmers are usually bitten by the mutable defaults trap when coding class methods, but I’d like to begin by explaining it in functions, and then move on to talk about class methods.

Mutable defaults for function arguments

The gotcha occurs when you are coding default values for the arguments to a function or a method. Here is an example for a function named foobar:

def foobar(arg_string = "abc", arg_list = []):
    ...

Here’s what most beginning Python programmers believe will happen when foobar is called without any arguments:

A new string object containing “abc” will be created and bound to the “arg_string” variable name. A new, empty list object will be created and bound to the “arg_list” variable name. In short, if the arguments are omitted by the caller, foobar will always get “abc” and [] in its arguments.

This, however, is not what will happen. Here’s why.

The objects that provide the default values are not created at the time that foobar is called. They are created at the time that the statement that defines the function is executed. (See the discussion at Default arguments in Python: two easy blunders: “Expressions in default arguments are calculated when the function is defined, not when it’s called.”)

If foobar, for example, is contained in a module named foo_module, then the statement that defines foobar will probably be executed at the time when foo_module is imported.

When the def statement that creates foobar is executed:

  • A new function object is created, bound to the name foobar, and stored in the namespace of foo_module.
  • Within the foobar function object, for each argument with a default value, an object is created to hold the default value. In the case of foobar, a string object containing “abc” is created as the default for the arg_string argument, and an empty list object is created as the default for the arg_list argument.

After that, whenever foobar is called without arguments, arg_string will be bound to the default string object, and arg_list will be bound to the default list object. In such a case, arg_string will always be “abc”, but arg_list may or may not be an empty list. Here’s why.

There is a crucial difference between a string object and a list object. A string object is immutable, whereas a list object is mutable. That means that the default for arg_string can never be changed, but the default for arg_list can be changed.

Let’s see how the default for arg_list can be changed. Here is a program. It invokes foobar four times. Each time that foobar is invoked it displays the values of the arguments that it receives, then adds something to each of the arguments.

def foobar(arg_string="abc", arg_list = []): 
    print arg_string, arg_list 
    arg_string = arg_string + "xyz" 
    arg_list.append("F")

for i in range(4): 
    foobar()

The output of this program is:

abc [] 
abc ['F'] 
abc ['F', 'F'] 
abc ['F', 'F', 'F']

As you can see, the first time through, the arguments have exactly the defaults that we expect. On the second and all subsequent passes, the arg_string value remains unchanged — just what we would expect from an immutable object. The line

arg_string = arg_string + "xyz"

creates a new object — the string “abcxyz” — and binds the name “arg_string” to that new object, but it doesn’t change the default object for the arg_string argument.

But the case is quite different with arg_list, whose value is a list — a mutable object. On each pass, we append a member to the list, and the list grows. On the fourth invocation of foobar — that is, after three earlier invocations — arg_list contains three members.

The Solution
This behavior is not a wart in the Python language. It really is a feature, not a bug. There are times when you really do want to use mutable default arguments. One thing they can do (for example) is retain a list of results from previous invocations, something that might be very handy.
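
Here is a sketch of that idea: a cache of previously computed results that persists across calls, precisely because the default dictionary is created only once (the function fib and its _cache argument are invented for illustration).

def fib(n, _cache={}):
    # _cache is created once, at def time, so results survive between calls
    if n not in _cache:
        _cache[n] = n if n < 2 else fib(n - 1) + fib(n - 2)
    return _cache[n]

print(fib(30))   # produces: 832040 -- later calls are served from _cache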

But for most programmers — especially beginning Pythonistas — this behavior is a gotcha. So for most cases we adopt the following rules.

  1. Never use a mutable object — that is: a list, a dictionary, or a class instance — as the default value of an argument.
  2. Ignore rule 1 only if you really, really, REALLY know what you’re doing.

So… we plan always to follow rule #1. Now, the question is how to do it… how to code foobar in order to get the behavior that we want.

Fortunately, the solution is straightforward. The mutable objects used as defaults are replaced by None, and then the arguments are tested for None.

def foobar(arg_string="abc", arg_list = None): 
    if arg_list is None: arg_list = [] 
    ...

Another solution that you will sometimes see is this:

def foobar(arg_string="abc", arg_list=None): 
    arg_list = arg_list or [] 
    ...

This solution, however, is not equivalent to the first, and should be avoided. The expression arg_list or [] replaces not just None but any false value — so if the caller passes in an empty list, expecting the function to modify it in place, the function silently swaps in a brand-new list instead. See Learning Python p. 123 for a discussion of the differences. Thanks to Lloyd Kvam for pointing this out to me.
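
To see the difference, suppose the caller passes in an empty list that it expects the function to modify in place (a sketch):

def foobar(arg_string="abc", arg_list=None):
    arg_list = arg_list or []   # an empty list is false, so it gets replaced!
    arg_list.append("F")

my_list = []
foobar(arg_list=my_list)
print(my_list)   # produces: [] -- the caller's list was silently discarded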

And of course, in some situations the best solution is simply not to supply a default for the argument.

Mutable defaults for method arguments

Now let’s look at how the mutable arguments gotcha presents itself when a class method is given a mutable default for one of its arguments. Here is a complete program.

# (1) define a class for company employees 
class Employee:
    def __init__ (self, arg_name, arg_dependents=[]): 
        # an employee has two attributes: a name, and a list of his dependents 
        self.name = arg_name 
        self.dependents = arg_dependents
    
    def addDependent(self, arg_name): 
        # an employee can add a dependent by getting married or having a baby 
        self.dependents.append(arg_name)
    
    def show(self): 
        print
        print "My name is.......: ", self.name 
        print "My dependents are: ", str(self.dependents)
#--------------------------------------------------- 
#   main routine -- hire employees for the company 
#---------------------------------------------------

# (2) hire a married employee, with dependents 
joe = Employee("Joe Smith", ["Sarah Smith", "Suzy Smith"])

# (3) hire a couple of unmarried employees, without dependents 
mike = Employee("Michael Nesmith") 
barb = Employee("Barbara Bush")

# (4) mike gets married and acquires a dependent 
mike.addDependent("Nancy Nesmith")

# (5) now have our employees tell us about themselves 
joe.show() 
mike.show() 
barb.show()

Let’s look at what happens when this program is run.

  1. First, the code that defines the Employee class is run.
  2. Then we hire Joe. Joe has two dependents, so that fact is recorded at the time that the joe object is created.
  3. Next we hire Mike and Barb.
  4. Then Mike acquires a dependent.
  5. Finally, the last three statements of the program ask each employee to tell us about himself.

Here is the result.

My name is.......:  Joe Smith 
My dependents are:  ['Sarah Smith', 'Suzy Smith']

My name is.......:  Michael Nesmith 
My dependents are:  ['Nancy Nesmith']

My name is.......:  Barbara Bush 
My dependents are:  ['Nancy Nesmith']

Joe is just fine. But somehow, when Mike acquired Nancy as his dependent, Barb also acquired Nancy as a dependent. This of course is wrong. And we’re now in a position to understand what is causing the program to behave this way.

When the code that defines the Employee class is run, objects for the class definition, the method definitions, and the default values for each argument are created. The constructor has an argument arg_dependents whose default value is an empty list, so an empty list object is created and attached to the __init__ method as the default value for arg_dependents.

When we hire Joe, he already has a list of dependents, which is passed in to the Employee constructor — so the arg_dependents attribute does not use the default empty list object.

Next we hire Mike and Barb. Since they have no dependents, the default value for arg_dependents is used. Remember — this is the empty list object that was created when the code that defined the Employee class was run. So in both cases, the empty list is bound to the arg_dependents argument, and then — again in both cases — it is bound to the self.dependents attribute. The result is that after Mike and Barb are hired, the self.dependents attribute of both Mike and Barb point to the same object — the default empty list object.

When Michael gets married, and Nancy Nesmith is added to his self.dependents list, Barb also acquires Nancy as a dependent, because Barb’s self.dependents variable name is bound to the same list object as Mike’s self.dependents variable name.

So this is what happens when mutable objects are used as defaults for arguments in class methods. If the defaults are used when the method is called, different class instances end up sharing references to the same object.
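
The cure is the same None-sentinel idiom that we used for functions. Here is a sketch of a corrected constructor:

class Employee:
    def __init__(self, arg_name, arg_dependents=None):
        # use None as the default, and create a fresh list for each employee
        if arg_dependents is None:
            arg_dependents = []
        self.name = arg_name
        self.dependents = arg_dependents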

And that is why you should never, never, NEVER use a list or a dictionary as a default value for an argument to a class method. Unless, of course, you really, really, REALLY know what you’re doing.