Monday, October 17, 2011

Sometimes you need the big guns

I'm a fan of using libraries to create test doubles. Having said that, I don't use them all of the time, and even when I do, my primary use is a stub and sometimes a spy.

In C++ I have yet to try any libraries; I tend to hand-roll my test doubles. There are tools, I just haven't taken the time to try them. I can say the same about Objective-C - though given the id type, the libraries there start with the basic support needed to do a great job.
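To make the stub/spy distinction concrete, here is what a minimal hand-rolled pair might look like (in C#, to match the examples that follow); the IRateSource interface and the rate-lookup scenario are invented purely for illustration.

    using System.Collections.Generic;

    // Hypothetical collaborator that the code under test depends on.
    public interface IRateSource
    {
        decimal RateFor(string vehicleClass);
    }

    // Stub: feeds a canned answer in, so the test controls indirect input.
    public class StubRateSource : IRateSource
    {
        private readonly decimal rate;
        public StubRateSource(decimal rate) { this.rate = rate; }
        public decimal RateFor(string vehicleClass) { return rate; }
    }

    // Spy: records what it was asked, so the test can verify indirect output.
    public class SpyRateSource : IRateSource
    {
        public readonly List<string> RequestedClasses = new List<string>();
        public decimal RateFor(string vehicleClass)
        {
            RequestedClasses.Add(vehicleClass);
            return 0m;
        }
    }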

In Java I prefer Mockito, while in C# I prefer Moq. These are lightweight tools that give you nice/loose mocks (my preferred place to start). There are more powerful tools, to be sure.
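Before getting to those, here is what the lightweight case looks like with Moq; the mock is loose by default, so unstubbed members return default values rather than throwing. The Pricer class is hypothetical and reuses the IRateSource interface from the sketch above.

    using Moq;
    using NUnit.Framework;

    public class Pricer
    {
        private readonly IRateSource rates;
        public Pricer(IRateSource rates) { this.rates = rates; }
        public decimal DailyRate(string vehicleClass) { return rates.RateFor(vehicleClass); }
    }

    [TestFixture]
    public class PricerTests
    {
        [Test]
        public void UsesTheRateFromTheRateSource()
        {
            // Loose mock: anything not set up just returns a default value.
            var rateSource = new Mock<IRateSource>();
            rateSource.Setup(r => r.RateFor("compact")).Returns(42.50m);

            var pricer = new Pricer(rateSource.Object);

            Assert.AreEqual(42.50m, pricer.DailyRate("compact"));
        }
    }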

Among the more powerful tools: in Java there's JMockit, and in .NET there's a commercial product called Isolator.Net. These tools allow you to do things that verge on black magic. Do you want to override a static method? No problem. How about a sealed class? No problem.
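Here is a sketch of the static-method case, written against my recollection of Isolator's arrange-act-assert syntax; treat the namespace, the Isolated attribute, and the Isolate calls as assumptions to verify against the Typemock documentation, and TaxTable is an invented class.

    using NUnit.Framework;
    using TypeMock.ArrangeActAssert;   // namespace from memory; check the Typemock docs

    public static class TaxTable
    {
        // Stand-in for a method that, in the real system, needs an initialized environment.
        public static decimal RateFor(string region)
        {
            return 0m; // pretend this actually hits a database
        }
    }

    [TestFixture]
    public class TaxTableTests
    {
        [Test, Isolated]
        public void CanRedirectAStaticCall()
        {
            // Profiler/IL-level interception: the static method is replaced per test.
            Isolate.WhenCalled(() => TaxTable.RateFor("WA")).WillReturn(0.095m);

            Assert.AreEqual(0.095m, TaxTable.RateFor("WA"));
        }
    }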

In both Java and C#, these four libraries use either proxies or dynamic byte code (IL) injection. In a strong sense, these tools are highly stylized forms of Aspect Oriented Programming. What the latter two tools allow you to do is change the language semantics, on the fly, per test. That is exceedingly cool to me.

Unfortunately, I have a rule that guides me regarding cool things. If I genuinely think something is cool, then it's probably too much to use in general. That rule serves me well. However, like all rules, there are times when it does not apply.

Years ago at Hertz I used Aspect Oriented Programming to solve a problem. The original design, which I was a part of creating, used a decorator and did not scale well, but two to three years in we didn't want to change it. Had we started from scratch, a reflective object walker would have made more sense. Rather than rip out a deeply embedded design, I created a simple aspect that gave us the same semantics as the decorator and obviated the need to create error-prone, mostly pass-through classes. The result is still in use years later and it's pretty much maintenance free.

Last week, I came across a case where using these more powerful test-doubling tools seems to make sense. I was working with a client that has an API I'll call X. They have a number of tests they call unit tests, which are actually functional and integration tests rolled into one. Those tests are somewhat fragile, but they do add value. They could improve those tests by making a clear distinction between functional and integration tests, and I hope they will.

We decided to give a go at writing unit tests, or at least tests that were much closer to a unit orientation than an integrated or functional orientation. The first example was generally successful. The developer had already identified the place of change and, more importantly, generally knew the required changes. The underlying object was a bit big and hard to construct, so we captured a good example of an object in a file (moving slightly away from unit orientation) and used it to continue the test.
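The mechanics of that capture-to-a-file move were roughly the following; XmlSerializer is just one way to do it, and the type and file names in the usage comment are placeholders rather than the client's actual names.

    using System.IO;
    using System.Xml.Serialization;

    public static class Fixtures
    {
        // Load a previously captured example object from a checked-in file,
        // trading a little unit purity for a lot less setup code.
        public static T Load<T>(string path)
        {
            var serializer = new XmlSerializer(typeof(T));
            using (var stream = File.OpenRead(path))
            {
                return (T)serializer.Deserialize(stream);
            }
        }
    }

    // In a test:
    //   var example = Fixtures.Load<SomeBigType>(@"TestData\captured-example.xml");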

Previously, for a similar test, they would have needed to initialize "the application" in the form of a Project and several other such things. Now we had a stand-alone test that worked in memory with very little setup required.

We then tried it on another part of the system, one more tightly coupled to the X API. The object we needed to create was complex and required an initialized environment to create. Furthermore, the objects upon which it depended were sealed, and the type itself was sealed as well.

We first tried to use reflection to call private constructors, but the underlying type proved a bit too complex and API X is highly coupled. Ultimately, we backed up a touch and did something similar to the previous example: extract an object and put it in a file. Even so, reading this file, unlike in the first example, still required a fully-initialized environment.
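For reference, the reflection attempt amounted to a small helper along these lines; it works for straightforward types with non-public constructors, but here the constructor dragged in too much of API X.

    using System;
    using System.Reflection;

    public static class Hidden
    {
        // Create an instance of a type whose constructors are not public.
        public static T New<T>(params object[] args)
        {
            return (T)Activator.CreateInstance(
                typeof(T),
                BindingFlags.Instance | BindingFlags.NonPublic,
                null,      // default binder
                args,
                null);     // default culture
        }
    }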

While it might be possible to introduce interfaces and then write all code to depend on those interfaces, essentially using an adapter for every sealed class, the signal-to-noise ratio seemed low. The option we chose was to minimally initialize the system. However, this will require either running from a particular directory or doing some (necessary anyway) work to make the execution environment quite a bit less coupled to directories on the file system.
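For completeness, the interface-plus-adapter option would look roughly like this, with one such pair per sealed dependency; X.ProjectInfo is a stand-in for one of the sealed API X types and its members are invented.

    namespace X
    {
        // Stand-in for one of the sealed API types; the real one is not ours to change.
        public sealed class ProjectInfo
        {
            internal ProjectInfo() { }                      // non-public construction
            public string Name { get { return "real"; } }
            public string RootDirectory { get { return @"C:\real\project"; } }
        }
    }

    // Interface we own; production code depends on this, tests stub or mock it.
    public interface IProjectInfo
    {
        string Name { get; }
        string RootDirectory { get; }
    }

    // Mostly pass-through adapter over the sealed type.
    public sealed class ProjectInfoAdapter : IProjectInfo
    {
        private readonly X.ProjectInfo inner;
        public ProjectInfoAdapter(X.ProjectInfo inner) { this.inner = inner; }
        public string Name { get { return inner.Name; } }
        public string RootDirectory { get { return inner.RootDirectory; } }
    }

Multiply that mostly pass-through wrapper by every sealed class in play and the noise adds up quickly.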

I think a more powerful tool, like Isolator.Net, makes sense in this case. It still seems a bit too cool for my general use, but sometimes those handy rules are the problem and not the tool.
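Concretely, that would mean faking the sealed X types directly, with no adapters and no environment to initialize. As before, the Isolate calls are my recollection of the API, and this reuses the X.ProjectInfo stand-in and the using directives from the sketches above; in a real test the fake would be handed to the code under test rather than asserted on directly.

    [TestFixture]
    public class PathResolutionTests
    {
        [Test, Isolated]
        public void WorksAgainstAFakedSealedDependency()
        {
            // Fake the sealed type directly: no interface extraction, no initialized environment.
            var project = Isolate.Fake.Instance<X.ProjectInfo>();
            Isolate.WhenCalled(() => project.RootDirectory).WillReturn(@"C:\fake\project");

            Assert.AreEqual(@"C:\fake\project", project.RootDirectory);
        }
    }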
