Thursday, December 27, 2007

Respect your unit tests

Our team just completed a major release of our software. It's been three months since the last release. That's a very long time compared to our usual pace of two to three weeks, so we expected some issues after deployment. Had we shown more respect for our unit tests, we would have had fewer.

One of the problems we had was related to some external libraries we were using, which required our models to conform to the JavaBeans spec (having both a getX() and a setX(X x) method). In some cases, getX() was the only meaningful method, and setX(X x) was there only as a placeholder to satisfy the API. If the setX(X x) had a nice comment explaining its purpose in life, we were safe. But sometimes we had the stub setX(X x) method without any explanation of why it was there. A quick reference check would show that it was not called by any of our code, suggesting it could be removed.
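
To make that concrete, here is a minimal sketch of the kind of bean I mean; the names are made up, not from our actual code:

import java.math.BigDecimal;

// Hypothetical model bean; Order and total are illustrative names.
public class Order {

    private BigDecimal total;

    public BigDecimal getTotal() {
        return total;
    }

    // Placeholder required by the external library, which expects full
    // JavaBeans getter/setter pairs and finds this method reflectively.
    // No code of ours calls it, so a reference check comes up empty --
    // but removing it breaks the library at runtime.
    public void setTotal(BigDecimal total) {
        this.total = total;
    }
}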

That's just what happened in this case. A developer noticed the dead code, asked around whether it was used, got consensus that it wasn't, and proceeded to remove it. He ran the unit tests, fixed a couple of broken tests, and checked in his changes.

He removed unused code, and then had to fix some unit tests.

And when we released the code to production, we had problems.

The fact that the unit tests failed and needed fixing should have been the indication that the code was used, even though it was not explicitly referenced by any of our own code: the external library found it reflectively, so no reference check would show a caller. The fact that we had production problems confirmed it. The unit tests were in place to prevent this error. But the developer bypassed that warning.

The unfortunate fact is that it is all too easy to make a mistake like this. But it's these mistakes that we need to file away as experience, so that we get better at what we do. Take care in writing unit tests, so that they are meaningful and accurate. And then take greater care when modifying them. When in doubt, trust that the unit tests are right. Grab a pair; get a second opinion; get a third opinion. Let the unit tests serve as one of your safety nets when making changes to code. And respect your unit tests. They don't lie.

Tuesday, December 4, 2007

Back to the basics of Refactoring

I ran into a couple of separate bugs over the last couple of days that were caused by refactoring. How can that be, you ask? I asked myself the same thing! By definition, refactoring changes the structure of the code without changing the behavior. Yet, in both cases, the behavior did change. Let's see if there is a way to prevent this from happening, and keep to the true nature of refactoring.

Refactoring defined

Martin Fowler defines refactoring like this:

Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior. Its heart is a series of small behavior preserving transformations.

The key phrase I see is "small behavior preserving transformations." A refactoring is a small change, and it does not change the behavior.

Preserving Behavior with Unit Tests

The best way to ensure that you are preserving the behavior is to have a suite of fast and accurate unit tests. Yes, I am asking a lot here.

The unit tests need to be fast, so that developers do not feel bogged down by having to run them. When they are fast, developers can run all unit tests many times throughout their development cycles.

The unit tests need to be accurate, in that they specify the intended external behavior of a module (or class, or method, or whatever your "unit" is), yet remain flexible enough that they do not specify the internal structure. This is part science and part art, and is well beyond the scope of this discussion. But the goal is to specify behavior, not implementation.
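
As a rough illustration of the difference, here is a behavior-specifying test sketched in JUnit; the class, method, and numbers are all hypothetical. It pins down what the unit does, not how:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PriceCalculatorTest {

    // Hypothetical unit under test.
    static class PriceCalculator {
        double priceFor(double amount) {
            // 10% discount on amounts of 100.0 or more.
            return amount >= 100.0 ? amount * 0.9 : amount;
        }
    }

    // Specifies behavior: this input produces this output. It says
    // nothing about how the discount is computed, so the internals
    // can be restructured freely without breaking the test.
    @Test
    public void appliesTenPercentDiscountAtOneHundred() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.priceFor(100.0), 0.001);
    }
}

A test that instead asserted which internal helper was called, or poked at private fields, would fail on a pure refactoring even though the behavior was preserved.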

So why did these refactorings go south?

We all know the theories of modern development (everything is tested, pair programming, the simplest thing that works), yet it can be hard to put them into practice.

One of the refactorings I encountered was very subtle in how it broke down. The developer ran all unit tests, and they passed. Then he made the refactoring. Then he ran all the tests again, and they passed again. So he was done, and went on to the next task. However, the tests passed because the code in question was not covered by any tests.

So let me restate my thesis. The best way to ensure that you are preserving the behavior is to have a suite of fast, accurate, and complete unit tests. There, that should do it.

When unit tests are not enough

The other refactoring was a bigger change. The goal was to put the soft into the software: we wanted to take some data that was hardcoded into the application and move it to the database, so that it became configurable. The problem was that we had three scenarios, but two of them shared the same hardcoded implementation. So when we went to a configurable implementation, we lost one of the scenarios.
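
Here is a sketch of the shape of that mistake; all of the names are hypothetical, not our actual code:

// Hypothetical sketch of the refactoring that lost a scenario.
enum Scenario { A, B, C }

interface RateRepository {
    Double findRate(Scenario scenario); // e.g. backed by a DB table
}

class Rates {

    private final RateRepository repository;

    Rates(RateRepository repository) {
        this.repository = repository;
    }

    // Before: scenarios A and B silently shared one hardcoded value,
    // so it was easy to see them as a single case.
    double hardcodedRateFor(Scenario scenario) {
        switch (scenario) {
            case A:
            case B:  return 0.05; // the shared constant hides that B exists
            case C:  return 0.07;
            default: throw new IllegalArgumentException("Unknown: " + scenario);
        }
    }

    // After: rates come from configuration. If the data migration only
    // created a row for A (because A and B looked like one case), then
    // scenario B is now broken -- and no test of this method in
    // isolation will notice.
    double configuredRateFor(Scenario scenario) {
        Double rate = repository.findRate(scenario);
        if (rate == null) {
            throw new IllegalStateException("No rate configured for " + scenario);
        }
        return rate;
    }
}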

This problem could also have been avoided by tests, though maybe not what is considered a "unit" test. To catch this problem as soon as it happened, a broader suite of tests would be needed: integration tests, functional tests. The key, though, is that these tests should be broad enough to cover the scope of the refactoring, fast enough to be run by the developer, and complete enough to have tested (and exposed) all scenarios.

One last time. The best way to ensure that you are preserving the behavior of your code while refactoring is to have a fast, accurate, complete, and broad suite of tests. Fast enough to run often, accurate enough to specify the behavior, complete enough to test what you are changing, and broad enough to cover your code base at different levels of granularity.

Conclusion

Refactoring improves the code base. Refactoring simplifies the code, makes it easier to comprehend, and easier to change. No arguments there. The key, however, is that refactoring preserves behavior, and the only way to ensure that behavior is preserved is to have a broad and accurate suite of tests around the code.

Stated another way, the end user should not be able to tell immediately that you have completed a refactoring. Instead, they should be somewhat impressed when you quickly deliver the next feature that they request.

Thursday, November 29, 2007

A quick review of debugging Struts applications

I had worked on a Struts web application a couple of years ago, and within our team I am still considered the "expert" on that application. So yesterday, when something wasn't working correctly, a teammate approached me and asked for help. As I walked her through what was going on, I made it a point NOT to rely on any knowledge of the application (after all, it has been two years and several maintenance programmers since I worked on it). Here's what we did:

  • Based on the URL, track down the action mapping
  • Look in the struts-config.xml file and find the JSP that is rendered
  • Examine the JSP and see where the data is coming from; identify the form object that holds the data
  • Back in struts-config.xml, find the action that does the work (see the sketch after this list)
  • In the action, look at how the form is populated
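
For illustration, here is a hypothetical struts-config.xml fragment showing the kind of trail we followed; the paths and class names are made up, not from the actual application:

<!-- Hypothetical struts-config.xml fragment. -->
<form-beans>
  <!-- The form object that holds the data shown on the JSP. -->
  <form-bean name="customerForm"
             type="com.example.web.CustomerForm"/>
</form-beans>

<action-mappings>
  <!-- A URL like /editCustomer.do maps to this action. -->
  <action path="/editCustomer"
          type="com.example.web.EditCustomerAction"
          name="customerForm"
          scope="request">
    <!-- The forward names the JSP that renders the data. -->
    <forward name="success" path="/WEB-INF/jsp/editCustomer.jsp"/>
  </action>
</action-mappings>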

In finding and resolving the problem, I didn't have to use any tacit knowledge of the application. Instead, we just ran the application, identified the problem, and tracked it back to the code that was causing it.

Tuesday, November 20, 2007

A quick and easy way to minimize java.lang.NullPointerExceptions

In Java, we often see code that compares a particular value to some known constant. Often it is written like this:

someObject.getSomeValue().equals("SomeConstant");

This works OK, assuming that someObject is not null, and as long as you are sure that getSomeValue() will never return null.

If you aren't so sure, or if you just want to develop a good habit that will minimize the number of NullPointerExceptions you run into, you can write the same comparison this way:

"SomeConstant".equals(someObject.getSomeValue());

You are ensuring that this comparison will never throw the dreaded java.lang.NullPointerException, because equals() is now invoked on your constant value, which can never be null. (If someObject itself might be null, you still need a separate check.) And you are improving your own productivity, because you and your teammates will spend less time tracking down and fixing NullPointerExceptions.
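
To see the difference in action, here is a minimal, self-contained sketch; the class and values are hypothetical:

// Hypothetical example; SomeObject and its values are made up.
public class NullSafeComparison {

    static class SomeObject {
        private final String someValue;

        SomeObject(String someValue) {
            this.someValue = someValue;
        }

        String getSomeValue() {
            return someValue;
        }
    }

    public static void main(String[] args) {
        SomeObject someObject = new SomeObject(null);

        // Constant-first: prints "false" instead of throwing, because
        // "SomeConstant" can never be null.
        System.out.println("SomeConstant".equals(someObject.getSomeValue()));

        // Value-first: throws java.lang.NullPointerException, because
        // equals() is invoked on the null returned by getSomeValue().
        System.out.println(someObject.getSomeValue().equals("SomeConstant"));
    }
}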

Friday, November 16, 2007

Glimmer to ECLIPSE RubyOnRails?

When I hear about Ruby, the first thought that comes to mind is Ruby on Rails and Web 2.0 applications. I would never have made the association from Ruby to desktop applications. Until now. About a year ago, it was suggested that JRuby and SWT might be a viable combination for Ruby on the desktop. After all, SWT is the performant, native desktop library available from Eclipse, and Ruby gives you many productivity advantages. There was even a SWeeTgui project at the time, though it doesn't seem like there was much traction. Fast-forward one year, and we now have Glimmer: "a JRuby DSL that enables easy and efficient authoring of user-interfaces". What advantages are there with Glimmer? Here's what I see:
  • A compact API that allows Ruby developers to write native desktop applications
  • A clean wrapper around the SWT libraries that takes a minimalist approach, exposing the most important features and applying smart defaults everywhere
  • An API that is based on Ruby's programming paradigms, not Java's.
  • The ability to implement complex SWT desktop applications with only 25% of the code.

That last point is what won me over: being able to write the same functionality with just a quarter of the code (and time). I've been developing Java applications for nearly a decade now, and using SWT for two years, and I feel very comfortable with both.

When I first saw Glimmer, I didn't believe that I needed it for my Java SWT applications, because I know Java, and I know SWT. But as we discussed the merits of this API, and I saw a demonstration of some complicated user interfaces, I got a "glimmer" in my eye. I could see a lot of productivity benefits here.

Take a look for yourself, and consider it for your next desktop project.

Friday, November 2, 2007

A REAL Onsite Customer

I'm currently working on a project that is using an agile process to manage development. We use most of the XP practices, but are missing what I would consider the most important one: an onsite customer. Though our customer is a manager, and the final decision-maker, she doesn't participate directly in the daily development activities. Instead, she sends a proxy.

This works out great about 75% of the time, but doesn't work so well the other 25%, when we need a tough question answered quickly. And that often introduces long waits. Here are a couple of examples.

Last Friday, the manager had her proxy call a meeting with the developers to discuss a feature we had just completed. She wanted to improve the flow of the feature, and make sure that it was as easy as possible for the users. We got together with the proxy, brainstormed, offered ideas, and estimated the different steps. He then went back to her, she had some other ideas and some questions. So he came back to us and...

All this going back and forth was costing precious time. We are still answering questions and going over ideas, though we never actually meet with our customer. There are several problems with this approach: information is lost and intent is misunderstood at each link in the chain, and delays are introduced. And this happens in both directions. The proxy is an intelligent guy, but all the intelligence in the world doesn't help here. It's the nature of communication, like the game we played in grade school, where we each whispered something to the next person, and what came out of the chain was not what went in. Extra energy is expended going back and forth. It would be much more convenient, and direct, to talk to the decision-maker herself.

Contrast this with what happened yesterday. We were discussing a feature that the developers thought must be included, but the manager thought wasn't necessary. Of course, she sent the proxy to discuss this with us. However, we didn't feel she had a strong argument, and she didn't think we had a strong argument. By chance, one of our teammates saw her walking by, and we asked her to explain her point. Then we explained ours. We were quickly able to come up with some options that would meet her requirements and our own. I walked out of that meeting energized and excited about our application. We were going to deliver a valuable feature that met the needs of our users and our developers, and we were able to get to that point very quickly.

All modern books preach the need for an onsite customer. This experience of two extremes solidified the idea in my mind. Your team can definitely work more effectively with an onsite customer. Not a proxy, nor a proxy to the proxy, but the real decision-maker, in the trenches with your team, day in and day out.

Wednesday, October 31, 2007

RTFM

Yesterday, a colleague sent out an email to the group, asking for help on a certain topic. Over IM, I asked if he was doing this for a particular scenario, which he was. So I suggested he use a feature of the tool we are using. He said he wasn't aware of that feature, so I said "RTFM" (read the frickin' manual).

This didn't go over very well with him. His first reaction was to lecture me never to do something like this again. I didn't need a lecture, but I apologized, trying to quell an escalating situation. Later, after he calmed down, he explained that this triggered memories of bad relationships with past colleagues, and that he didn't want our working relationship to go down the same path. Fine, I can accept that. But this situation got me thinking about two things: why I said RTFM to begin with, and how I could have said the same thing in a gentler manner.

As a consultant, I know a lot is expected of me by the client. There is a reason they are paying more for my services than they pay their employees, and I have an obligation to keep up my end of the deal. I expect the same of the other consultants on the project. In this case, I expected my colleague to know the tools we are working with, and to have at least gone through the manuals and become familiar with the concepts. Staff employees often expect to be sent to training. As consultants, we need to go out of our way to quickly learn as much as we can about the tools and technologies we are using. We also need to be able to pass that knowledge on to our teammates, whether they are consultants or employees. It's by this transfer of knowledge that the entire team gets better, and the entire project goes smoother.

Which brings me to my next thought: how could I have been more effective in that communication? After thinking about it, I should have said something along these lines:

You can use this feature. You should be able to find information about this in the manual. Take a look, and let me know if you have any questions.

It would have gotten the point across much better. It also would have been in line with the kind of response my colleague expected. Even though I don't have any particular title, I am often perceived as a leader on our team. To fulfill my part of being a leader, I should have used the gentler statement to get my point across.

The work we do executing projects and developing software is difficult enough. We don't need to make it any more difficult by creating problems among our colleagues. As team leaders, whether by title or by perception, we need to excel at increasing the knowledge of our entire team (and organization), and at working towards project success.