Unit testing Tornado handlers with pyDoubles
A unit test is a small, automated piece of code written by a developer. Its purpose is to check that a small piece of code works as expected, in isolation.
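For instance, a minimal unit test using Python's built-in unittest module might look like this (the `add` function is a made-up example, not code from this post):

```python
import unittest


def add(a, b):
    # Tiny function under test; a hypothetical example for illustration.
    return a + b


class AddTest(unittest.TestCase):

    def test_adds_two_numbers(self):
        # Checks one small piece of code in isolation: no network, no database.
        self.assertEqual(5, add(2, 3))
```

It runs in milliseconds and needs nothing from the outside world, which is exactly the point.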
Tests are first!
```python
# -*- coding: utf-8 -*-
import httplib

import tweepy
from tornado import web, testing
from hamcrest import *

URL = u'/my_mentions'
CONSUMER_KEY = "xxx"
CONSUMER_SECRET = "xxx"
ACCESS_TOKEN = "xxx"
ACCESS_TOKEN_SECRET = "xxx"


class TestMentionsHandler(testing.AsyncHTTPTestCase):

    def test_get(self):
        # GET /my_mentions
        response = self.fetch(URL)
        assert_that(response.code, is_(httplib.OK))

    def setUp(self):
        auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
        auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
        self.api = tweepy.API(auth)
        super(TestMentionsHandler, self).setUp()

    def get_app(self):
        return web.Application([
            web.url(URL, MentionsHandler, dict(api=self.api))
        ])
```
Tornado provides a couple of classes to help us write tests. This test case inherits from AsyncHTTPTestCase, so we have to implement the get_app method. That method is called from setUp, which runs before each test. Since we are building the web.Application object ourselves, we can use dependency injection through the web.url call, passing the handler's dependencies as its third parameter, in this case a tweepy.API object.
So this test launches a GET request to the /my_mentions URL. That URL is mapped to MentionsHandler, a web.RequestHandler; inside its get method we can retrieve mentions through the tweepy.API object passed in the web.url call and do stuff with them.
This test is small and automated, buuut it is far from being a nice unit test. Perhaps it could be used as a Walking Skeleton, but it is not a unit test.
Unit Tests are FIRST!
I mean FIRST: Fast, Isolated, Repeatable, Self-validating and Timely!
First: this test takes about 3.5 seconds on my machine! That is really slow, considering that a nice velocity is about 100 tests per second. And second: this test is not repeatable, because it needs an internet connection to run. And what if our API provider is down?
So, how can we make our test faster and repeatable? We have several choices, but I would use a technique like test doubles. And since we are writing our code in Python, we have a nice framework that makes it easy: pyDoubles :)
This is an example:
```python
# -*- coding: utf-8 -*-
import httplib

import tweepy
from tornado import web, testing, escape
from pyDoubles.framework import *
from hamcrest import *

URL = u'/my_mentions'


class TestMentionsHandler(testing.AsyncHTTPTestCase):

    def test_get(self):
        when(self.api.mentions).then_return([self._status()])
        response = self.fetch(URL)
        assert_that(response.code, is_(httplib.OK))

    def _status(self):
        return tweepy.Status.parse(tweepy.API(None), escape.json_decode(
            # ... JSON that mimics Twitter response
        ))

    def setUp(self):
        self.api = stub(tweepy.API(None))
        super(TestMentionsHandler, self).setUp()

    def get_app(self):
        return web.Application([
            web.url(URL, MentionsHandler, dict(api=self.api))
        ])
```
Now this test takes about 0.01 seconds! It is fast, and it is repeatable too: I can run it in every environment: QA, production, my laptop… But there is still something strange in this test: what happens if tweepy.API's behaviour changes in the next version?
Please, remember: Avoid mocking types you can’t change!
- Are you going to change third-party code, even if you have its source code?
- Are you sure the behaviour you are mocking is the same as the external library's?
This is (IMHO) the first D in TDD. The tests are driving your development; if you listen to your tests, they are shouting at you that there is some weird stuff in your code. This is the difference between merely writing tests first and letting the tests guide your code! So, what is this test telling us?
This test is telling us that we are passing an instance of tweepy.API freely around the code. What about cohesion and coupling? How many places do I have to check if the interface of the tweepy.API object changes? What if I switch from tweepy to another library that uses the Tornado ioloop?
It feels natural that we need an abstraction, some kind of MentionsProvider, which simplifies the tweepy.API interface. The only thing we need is mentions: not friends, followers or direct messages… Then our code becomes decoupled from tweepy.API, and we can write another provider, for identi.ca for example, and interchange them if needed. Anyway, that is a topic for another post :)
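To make the idea concrete, here is a minimal sketch of what such an abstraction could look like. The post does not define MentionsProvider, so everything here (the class, the fake API, the method names) is a hypothetical illustration:

```python
class MentionsProvider(object):
    """Hypothetical abstraction that narrows the wide tweepy.API
    interface down to the one thing this application needs: mentions."""

    def __init__(self, api):
        # Any object with a mentions() method will do; the provider is the
        # only place that knows about the concrete client.
        self._api = api

    def mentions(self):
        # The rest of the code sees plain strings, never tweepy types.
        return [status.text for status in self._api.mentions()]


class FakeTwitterAPI(object):
    """Hand-rolled stand-in used here instead of the real tweepy.API;
    an identi.ca client would just be another class with mentions()."""

    def __init__(self, statuses):
        self._statuses = statuses

    def mentions(self):
        return self._statuses
```

Now tests can stub MentionsProvider, a type we own and can change, instead of mocking a third-party library.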
I started using pyDoubles several months ago. I quickly fell in love with it, and I replaced the mocking library I had been using in all my Python projects, including the professional ones. In fact, pyDoubles is my favourite library for writing test doubles (not only mocks!) when I'm working with Python.
I love Test Driven Development; it feels so natural to me that at this point I can't imagine myself writing code without writing its tests first. It would be like imagining a developer working without an SCM.
As a happy TDD practitioner, I'm pretty used to rereading my tests looking for examples, clarifications, corner cases… And, as PEP-8 states, code is read much more often than it is written, so I try to emphasize code legibility. I've found that I can read pyDoubles expectations and asserts from left to right, like a sentence (or like Hamcrest and pyHamcrest statements). That's awesome! And of course, I can use my pyHamcrest matchers to match objects with pyDoubles. What a handy integration!
I like the Arrange-Act-Assert pattern. It's a really simple notation and lets me see at a glance the different steps in my tests. When I see that pattern, I know a test is not trying to check several things at once, which helps keep my tests isolated. I'm able to use this pattern because pyDoubles doesn't rely on (IMHO) unnatural metaphors.
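As a reminder of what the pattern looks like, here is a tiny annotated test; the Greeter class is a made-up example, not from this post:

```python
import unittest


class Greeter(object):
    # Hypothetical class under test, used only to illustrate the pattern.
    def greet(self, name):
        return 'Hello, %s!' % name


class GreeterTest(unittest.TestCase):

    def test_greets_by_name(self):
        # Arrange: build the object under test and its collaborators.
        greeter = Greeter()
        # Act: exercise exactly one behaviour.
        message = greeter.greet('Alice')
        # Assert: a single logical verification, so the test checks one thing.
        self.assertEqual('Hello, Alice!', message)
```

Three visually separated phases, one behaviour per test.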
I love the distinction between the three types of test doubles: stub, spy and mock. It lets me write more expressive tests, because I can refine the kind of interaction between an object and its neighbours.
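To make the distinction concrete, here are hand-rolled sketches of the three kinds of doubles. These are not the pyDoubles API (the framework builds its doubles for you); they only show what each kind is responsible for, with made-up Clock and Mailer collaborators:

```python
class StubClock(object):
    """Stub: feeds the test canned answers; nothing is verified afterwards."""

    def now(self):
        return '12:00'


class SpyMailer(object):
    """Spy: records what happened, so the test can assert on it afterwards."""

    def __init__(self):
        self.sent = []

    def send(self, message):
        self.sent.append(message)


class MockMailer(object):
    """Mock: knows up front which calls it expects, and fails if they
    do not all happen exactly as declared."""

    def __init__(self, expected_messages):
        self._expected = list(expected_messages)

    def send(self, message):
        assert self._expected and self._expected[0] == message, \
            'unexpected call: send(%r)' % message
        self._expected.pop(0)

    def verify(self):
        assert not self._expected, 'expected calls were never made'
```

Stubs shape the world the test runs in, spies let you ask questions afterwards, and mocks state expectations up front.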
Finally, I think pyDoubles is an opinionated piece of software, and if you share that opinion it is a pleasure to work with. For example, other Python mocking libraries give you the "feature" of patching objects; pyDoubles doesn't. It forces you to think a bit more about your design and the SOLID principles, which IMHO is one of its best features.