New and Improved

Coming changes to unittest in Python 2.7 & 3.2

Be stubborn, obey the goat!

The Pycon Testing Goat.


This article started life as a presentation at PyCon 2010. You can watch a video of the presentation:

Since that presentation lots of features have been added to unittest in Python 2.7 and unittest2.

This article also introduces a backport of the new features in unittest to work with Python 2.4, 2.5 & 2.6:

For a more general introduction to unittest see: Introduction to testing with unittest.

There are now ports of unittest2 for both Python 2.3 and Python 3. The Python 2.3 distribution is linked to from the unittest2 PyPI page. The Python 3 distribution is available from:


unittest is the Python standard library testing framework. It is sometimes known as PyUnit and has a rich heritage as part of the xUnit family of testing libraries.

Python has the best testing infrastructure available of any of the major programming languages, and by virtue of being included in the standard library unittest is the most widely used Python testing framework.

unittest has languished whilst other Python testing frameworks have innovated. Some of the best innovations have made their way into unittest which has had quite a renovation. In Python 2.7 and 3.2 a whole bunch of improvements to unittest will arrive.

This article will go through the major changes, like the new assert methods, test discovery and the load_tests protocol, and also explain how they can be used with earlier versions of Python.

unittest is changing

Change you can believe in.

The new features are documented in the Python 2.7 development documentation. Look for "New in 2.7" or "Changed in 2.7" for the new and changed features.

An important thing to note is that this is evolution not revolution, backwards compatibility is important. In particular innovations are being brought in from other test frameworks, including test frameworks from large projects like Zope, Twisted and Bazaar, where these changes have already proved themselves useful.

New Assert Methods

The point of assertion methods in unittest is to provide useful messages on failure and to provide ready-made methods for common assertions. Many of these were contributed by Google or are in common use in other unittest extensions.

  • assertGreater / assertLess / assertGreaterEqual / assertLessEqual
  • assertRegexpMatches(text, regexp) - verifies that regexp search matches text
  • assertNotRegexpMatches(text, regexp)
  • assertIn(value, sequence) / assertNotIn - assert membership in a container
  • assertIs(first, second) / assertIsNot - assert identity
  • assertIsNone / assertIsNotNone

And even more...

  • assertIsInstance / assertNotIsInstance
  • assertDictContainsSubset(subset, full) - Tests whether the key/value pairs in dictionary full are a superset of those in subset.
  • assertSequenceEqual(actual, expected) - ignores type of container but checks members are the same
  • assertItemsEqual(actual, expected) - ignores order, equivalent of assertEqual(sorted(first), sorted(second)), but it also works with unorderable types

It should be obvious what all of these do; for more details refer to the friendly manual.
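A minimal runnable sketch exercising a few of the new asserts (the test class and values here are invented for illustration):

```python
import unittest

class NewAssertsDemo(unittest.TestCase):
    def test_new_asserts(self):
        self.assertGreater(3, 2)           # clearer failure message than assertTrue(3 > 2)
        self.assertIn('ell', 'hello')      # membership in a container
        x = y = object()
        self.assertIs(x, y)                # identity, not just equality
        self.assertIsNone(None)
        self.assertIsInstance('abc', str)

# run the test programmatically and check it passes
result = unittest.TestResult()
NewAssertsDemo('test_new_asserts').run(result)
print(result.wasSuccessful())
```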

As well as the new methods a delta keyword argument has been added to the assertAlmostEqual / assertNotAlmostEqual methods. I really like this change because the default implementation of assertAlmostEqual is never (almost) useful to me. By default these methods round to a specified number of decimal places. When you use the delta keyword the assertion is that the difference between the two values you provide is less than (or equal to) the delta value. This permits them to be used with non-numeric values:

import datetime

delta = datetime.timedelta(seconds=10)
first_timestamp = datetime.datetime.now()
second_timestamp = datetime.datetime.now()

self.assertAlmostEqual(first_timestamp, second_timestamp, delta=delta)
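A runnable version of the same idea, with invented concrete timestamps four seconds apart:

```python
import datetime
import unittest

class DeltaDemo(unittest.TestCase):
    def test_timestamps_close(self):
        first_timestamp = datetime.datetime(2010, 2, 19, 12, 0, 0)
        second_timestamp = datetime.datetime(2010, 2, 19, 12, 0, 4)
        # passes because the timestamps differ by less than ten seconds
        self.assertAlmostEqual(first_timestamp, second_timestamp,
                               delta=datetime.timedelta(seconds=10))

result = unittest.TestResult()
DeltaDemo('test_timestamps_close').run(result)
print(result.wasSuccessful())
```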


Sometimes you have to throw things away...

unittest used to have lots of ways of spelling the same methods. The duplicates have now been deprecated (but not removed).

  • assert_ -> use assertTrue instead
  • fail* -> use assert* instead
  • assertEquals -> assertEqual is the one true way

New assertion methods don't have a fail... alias as well. If you preferred the fail* variant, tough luck.

Not all the 'deprecated' methods issue a PendingDeprecationWarning when used. assertEquals and assert_ are too widely used for official deprecations, but they're deprecated in the documentation. In the next version of the documentation the deprecated methods will be expunged and relegated to a 'deprecated methods' section.

Methods that have deprecation warnings are:

failUnlessEqual, failIfEqual, failUnlessAlmostEqual, failIfAlmostEqual, failUnless, failUnlessRaises, failIf

Type Specific Equality Functions

More important new assert methods are the type specific ones. These provide useful failure messages when comparing specific types.

  • assertMultiLineEqual - uses difflib, default for comparing unicode strings
  • assertSetEqual - default for comparing sets
  • assertDictEqual - you get the idea
  • assertListEqual
  • assertTupleEqual

The nice thing about these new assert methods is that assertEqual automatically delegates to them when you compare two objects of the same type.

Adding New Type Specific Functions

  • addTypeEqualityFunc(type, function)

Functions added will be used by default for comparing the specified type. For example, if you wanted to hook up assertMultiLineEqual for comparing byte strings as well as unicode strings you could do:

self.addTypeEqualityFunc(str, self.assertMultiLineEqual)

addTypeEqualityFunc is useful for comparing custom types, either for teaching assertEqual how to compare objects that don't define equality themselves, or more likely for presenting useful diagnostic error messages when a comparison fails.

Note that functions you hook up are only used when the exact type matches, it does not use isinstance. This is because there is no guarantee that sensible error messages can be constructed for subclasses of the registered types.
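A sketch with a hypothetical Money class (not from the article) that defines no equality of its own, so assertEqual would otherwise fall back to identity comparison:

```python
import unittest

class Money(object):
    # deliberately defines no __eq__, so plain == is identity comparison
    def __init__(self, amount, currency):
        self.amount = amount
        self.currency = currency

class MoneyTest(unittest.TestCase):
    def setUp(self):
        # teach assertEqual how to compare (exactly-typed) Money instances
        self.addTypeEqualityFunc(Money, self.assertMoneyEqual)

    def assertMoneyEqual(self, first, second, msg=None):
        if (first.amount, first.currency) != (second.amount, second.currency):
            raise self.failureException(
                msg or '%s %s != %s %s' % (first.amount, first.currency,
                                           second.amount, second.currency))

    def test_money(self):
        self.assertEqual(Money(10, 'GBP'), Money(10, 'GBP'))

result = unittest.TestResult()
MoneyTest('test_money').run(result)
print(result.wasSuccessful())
```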


assertRaises

The changes to assertRaises are one of my favourite improvements. There is a new assertion method, assertRaisesRegexp, and both methods can be used as context managers with the with statement. If you keep a reference to the context manager you can access the exception object after the assertion. This is useful for making further asserts on it, for example to test an error code:

# as context manager
with self.assertRaises(TypeError):
    add(2, '3')

# test the message with a regex
msg_re = "^You shouldn't Foo a Bar$"
with self.assertRaisesRegexp(FooBarError, msg_re):
    foo(bar)  # code expected to raise FooBarError

# access the exception object
with self.assertRaises(TypeError) as cm:
    add(2, '3')

exception = cm.exception
self.assertEqual(exception.error_code, 3)
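A runnable sketch; DivisionError and divide are invented here purely to have an exception that carries an error_code attribute:

```python
import unittest

class DivisionError(Exception):
    def __init__(self, message, error_code):
        Exception.__init__(self, message)
        self.error_code = error_code

def divide(a, b):
    if b == 0:
        raise DivisionError('cannot divide by zero', error_code=3)
    return a / b

class RaisesDemo(unittest.TestCase):
    def test_error_code(self):
        with self.assertRaises(DivisionError) as cm:
            divide(1, 0)
        # the caught exception is available on the context manager
        self.assertEqual(cm.exception.error_code, 3)

result = unittest.TestResult()
RaisesDemo('test_error_code').run(result)
print(result.wasSuccessful())
```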

Command Line Behaviour

python -m unittest test_module1 test_module2
python -m unittest test_module1.suite_name
python -m unittest test_module.TestClass
python -m unittest test_module.TestClass.test_method

The unittest module can be used from the command line to run tests from modules, suites, classes or even individual test methods. In earlier versions it was only possible to run individual test methods and not modules or classes.

If you are running tests for a whole test module and you define a load_tests function, then this function will be called to create the TestSuite for the module. This is the load_tests protocol.

You can run tests with more detail (higher verbosity) by passing in the -v flag:

python -m unittest -v test_module

For a list of all the command line options:

python -m unittest -h

There are also new verbosity and exit arguments to the main() function. Previously main() would always sys.exit() after running tests, making it not very useful to call programmatically. The new parameters make it possible to control this:

>>> from unittest import main
>>> main(module='test_module', verbosity=2,
...      exit=False)

Passing in verbosity=2 is the equivalent of the -v command line option.

failfast, catch and buffer command line options

There are three more command line options for both standard test running and test discovery. These command line options are also available as parameters to the unittest.main() function.

  • -f / --failfast

    Stop the test run on the first error or failure.

  • -c / --catch

    Control-c during the test run waits for the current test to end and then reports all the results so far. A second control-c raises the normal KeyboardInterrupt exception.

    There are a set of functions implementing this feature available to test framework writers wishing to support this control-c handling. See Signal Handling in the development documentation.

  • -b / --buffer

    The standard out and standard error streams are buffered during the test run. Output during a passing test is discarded. Output is echoed normally on test fail or error and is added to the failure messages.

The command line can also be used for test discovery, for running all of the tests in a project or just a subset.

Test Discovery

Test discovery has been missing from unittest for a long time, forcing everyone to write their own test discovery / collection system.

python -m unittest discover
-v, --verbose Verbose output
-f, --failfast Stop on first fail or error
-c, --catch Catch ctrl-C and display results so far
-b, --buffer Buffer stdout and stderr during tests
-s directory Directory to start discovery ('.' default)
-p pattern Pattern to match test files ('test*.py' default)
-t directory Top level directory of project (default to start directory)

The options can also be passed in as positional arguments. The following two command lines are equivalent:

python -m unittest discover -s project_directory -p '*'
python -m unittest discover project_directory '*'

There are a few rules for test discovery to work; these may be relaxed in the future. For test discovery all test modules must be importable from the top level directory of the project.

Test discovery also supports using dotted package names instead of paths. For example:

python -m unittest discover package.test

There is an implementation of just the test discovery (well, plus load_tests) to work with standard unittest. The discover module:

pip install discover
python -m discover


load_tests

If a test module defines a load_tests function it will be called to create the test suite for the module.

This example loads tests from two specific TestCases:

def load_tests(loader, tests, pattern):
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(TestCase1))
    suite.addTests(loader.loadTestsFromTestCase(TestCase2))
    return suite

The tests argument is the standard set of tests that would be loaded from the module by default, as a TestSuite. If you just want to add extra tests you can call addTests on it. pattern is only used in the case of test packages loaded during test discovery; it allows the load_tests function to continue (and customize) test discovery into the package. In normal test modules pattern will be None.
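A minimal sketch of the "just add extra tests" variant; ExtraTest is an invented placeholder, and the last two lines simulate the call the loader would make:

```python
import unittest

class ExtraTest(unittest.TestCase):
    def test_extra(self):
        self.assertTrue(True)

def load_tests(loader, tests, pattern):
    # tests already contains the standard tests for this module;
    # append some more and hand the suite back
    tests.addTests(loader.loadTestsFromTestCase(ExtraTest))
    return tests

# simulate what the loader does when it finds load_tests
suite = load_tests(unittest.TestLoader(), unittest.TestSuite(), None)
print(suite.countTestCases())
```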

Cleanup Functions with addCleanup

This is an extremely powerful new feature for improving test readability, and it can make tearDown obsolete! Push clean-up functions onto a stack at any point, including in setUp, in tearDown or inside other clean-up functions, and they are guaranteed to be run when the test ends, most recently added first (LIFO).

def test_method(self):
    temp_dir = tempfile.mkdtemp()
    self.addCleanup(shutil.rmtree, temp_dir)

No need for nested try: ... finally: blocks in tests to clean up resources.

The full signature for addCleanup is: self.addCleanup(function, *args, **kwargs). Any additional positional or keyword arguments will be passed into the clean-up function when it is called.

If setUp() fails, meaning that tearDown() is not called, then any cleanup functions added will still be called. Exceptions raised inside cleanup functions will cause the test to report an error, but all cleanup functions will still run.

If you want to manually clear out the cleanup stack you can call doCleanups().

Test Skipping

Decorators that work as class or method decorators for conditionally or unconditionally skipping tests:

@skip("skip this test")
def test_method(self):
    ...

@skipIf(sys.version_info < (2, 5), "requires Python >= 2.5")
def test_method(self):
    ...

@skipUnless(sys.version_info < (2, 5), "only runs on Python < 2.5")
def test_method(self):
    ...

More Skipping

def test_method(self):
    self.skipTest("skip, skippety skip")

def test_method(self):
    raise SkipTest("whoops, time to skip")

@expectedFailure
def test_that_fails(self):
    self.fail('this *should* fail')

Ok, so expectedFailure isn't for skipping tests. You use it for tests that are known to fail currently. If you fix the problem, so the test starts to pass, then it will be reported as an unexpected success. This will remind you to go back and remove the expectedFailure decorator.
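A small runnable sketch; the deliberately failing test is recorded under expectedFailures rather than failures:

```python
import unittest

class ExpectedFailureDemo(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_bug(self):
        self.assertEqual(1, 2)   # fails today, by design

result = unittest.TestResult()
ExpectedFailureDemo('test_known_bug').run(result)
# the failure is recorded as expected, not as an ordinary failure
print(len(result.expectedFailures), len(result.failures))
```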

Skipped tests appear in the report as 'skipped (s)', so the number of tests run will always be the same even when skipping.
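A runnable sketch showing that a skipped test is still counted as run and its reason is recorded on the result:

```python
import unittest

class SkipDemo(unittest.TestCase):
    @unittest.skip('demonstrating skipping')
    def test_nothing(self):
        self.fail('never runs')

result = unittest.TestResult()
SkipDemo('test_nothing').run(result)
# the test counts towards testsRun and its skip reason is recorded
print(result.testsRun, result.skipped[0][1])
```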

As class decorator

If you skip an entire class then all tests in that class will be skipped.

# Python 2.6 and later (class decorator syntax)
@skipIf(sys.platform == 'win32', 'does not run on Windows')
class SomeTest(TestCase):
    ...

# pre-2.6 Python (no class decorators)
class SomeTest(TestCase):
    ...
SomeTest = skipIf(sys.platform == 'win32', 'does not run on Windows')(SomeTest)

Class and Module Level Fixtures

You can now define class and module level fixtures; these are versions of setUp and tearDown that are run once per class or module.

Class and module level fixtures are implemented in TestSuite. When the test suite encounters a test from a new class then tearDownClass from the previous class (if there is one) is called, followed by setUpClass from the new class.

Similarly if a test is from a different module from the previous test then tearDownModule from the previous module is run, followed by setUpModule from the new module.

After all the tests in the suite have run the final tearDownClass and tearDownModule are run.

setUpClass and tearDownClass

These must be implemented as class methods.

import unittest

class Test(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls._connection = createExpensiveConnectionObject()

    @classmethod
    def tearDownClass(cls):
        cls._connection.destroy()
If you want the setUpClass and tearDownClass on base classes called then you must call up to them yourself. The implementations in TestCase are empty.

If an exception is raised during a setUpClass then the tests in the class are not run and the tearDownClass is not run. Skipped classes will not have setUpClass or tearDownClass run.

setUpModule and tearDownModule

These should be implemented as functions.

def setUpModule():
    ...

def tearDownModule():
    ...

If an exception is raised in a setUpModule then none of the tests in the module will be run and the tearDownModule will not be run.

The Details

The default ordering of tests created by the unittest test loaders is to group all tests from the same modules and classes together. This will lead to setUpClass / setUpModule (etc) being called exactly once per class and module. If you randomize the order so that tests from different modules and classes are adjacent to each other then these shared fixture functions may be called multiple times.

If there are any exceptions raised during one of these functions / methods then the test is reported as an error. Because there is no corresponding test instance an _ErrorHolder object (that has the same interface as a TestCase) is created to represent the error. If you are just using the standard unittest test runner then this detail doesn't matter, but if you are a framework author it may be relevant.


Note that shared fixtures do not play well with features like test parallelization and they also break test isolation. They should be used with care.

A setUpModule or setUpClass that raises a SkipTest exception will be reported as skipped instead of as an error.

Minor Changes

There are a host of other minor changes, some of them steps towards making unittest more extensible. For full details on these see the documentation:

  • unittest is now a package instead of a module
  • Better messages with the longMessage class attribute
  • TestResult: startTestRun and stopTestRun
  • TextTestResult is now public and TextTestRunner takes a resultclass argument for providing a custom result class (you used to have to subclass TextTestRunner and override _makeResult)
  • TextTestResult adds the test name to the test description even if you provide a docstring

setuptools test command

Included in unittest2 is a test collector compatible with the setuptools test command. This allows you to run:

python setup.py test

and have all your tests run. They will be run with a standard unittest test runner, so a few features (like expected failures and skips) don't work fully, but most features do. If you have setuptools or distribute installed you can see it in action with the unittest2 test suite.

To use it specify test_suite = 'unittest2.collector' in your setup.py. This starts test discovery with the default parameters from the directory containing setup.py, so it is perhaps most useful as an example (see the unittest2/collector.py module).

The unittest2 Package

To use the new features with earlier versions of Python:

pip install unittest2

Replace import unittest with import unittest2. An alternative pattern for conditionally using unittest2 where it is available is:

try:
    import unittest2 as unittest
except ImportError:
    import unittest

python -m unittest ... works in Python 2.7 even though unittest is a package. In Python 2.4-2.6 this doesn't work (packages can't be executed with -m).

The unittest2 command line functionality is provided by the unit2 script.

Classes in unittest2 derive from the equivalent classes in unittest, so it should be possible to use the unittest2 test running infrastructure without having to switch all your tests to using unittest2 immediately. Similarly you can use the new assert methods on unittest2.TestCase with the standard unittest test running infrastructure. Not all of the new features in unittest2 will work with the standard unittest test loaders and runners however.

There is also the discover module if all you want is test discovery: python -m discover (same command line options).

The Future

The big issue with unittest is extensibility. This is being addressed in an experimental "plugins branch" of unittest2, which is being used as the basis of a new version of nose:



  • No, unittest2 won't support functions as tests out of the box.
  • No, the assert methods on TestCase won't be broken out into separate functions.
  • No, the APIs won't be made PEP8 compliant.


The signal handling (the -c command line option) should work on all platforms with CPython. It doesn't work correctly on Jython or IronPython, which have missing or incomplete implementations of the signal module.

There is a feature in unittest2 that isn't in Python 2.7 (yet), the removeHandler function. This can either be used as a standalone function or as a decorator for test methods that use the signal.SIGINT handler and so can't be executed with the unittest2 control-c handling in place:

@removeHandler
def test_signal_handling(self):
    ...


Last edited Tue Aug 2 00:51:34 2011.