I had what I would consider a nearly perfect programming day today. It really started yesterday before I left, when Jim (tech lead) reviewed an upcoming task with me. I was in the middle of another task, so I suggested we pick up in the morning.
The API team was already processing a product file, and we needed to adapt that process so that the web team could consume the data as well. The next day, Jim and I reviewed the current implementation of the process and the goals of the updates. The process took a file, filtered and translated it, and created a new file. We wanted to extend that process to create a second file for the web team.
Our BA came by to discuss the details of the fields and the formats we would be getting, as well as how we wanted to filter the records. We all agreed to an approach. Our BA would do the write-up for the sake of sharing the understanding with QA and other team members. We also agreed not to wait for a detailed write-up before starting development.
I was going to work with Matt, who had written the initial process. When Matt arrived, he asked what we needed to do. I gave him an overview of what I understood. He immediately knew where to jump into the code and start. We were working at a station that had not been used for this project before, so first we synced with source control. Then we ran the test suite. After installing a couple of gem dependencies, the tests passed.
Matt showed me the classes involved in the process and gave me a detailed walk-through of how the input was filtered and transformed. This took about 10 minutes. We then talked about what parts were common and could be re-used, and what would change to support the creation of a second file. At that point, instead of digging in, we decided to refactor the current implementation. This would allow us to more easily add the code to create the second file.
This process consisted of two classes: a Processor and a Line. The Processor looped through the source file, passed each line to a Line instance to be filtered and transformed, and wrote the output file. The Line class was responsible for the actual filtering and transforming.
We spent a few minutes creating a superclass, and moving common processing up into the superclass. We ran all tests, and they passed. We didn't change any tests along the way. We only changed the implementation by introducing a superclass. This was pure refactoring. We were ready for an initial local commit.
We spent a few minutes trying different names for the superclass and the subclass against the current implementation. Initially, the Line class was called SocLine, named after the file that was being filtered and transformed. We tried Line for the superclass, but that was too generic. We finally went with ProductFileLine for the superclass, representing the source of the data, and ApiLine for the subclass.
In hindsight, these classes' responsibilities were filtering and transforming; names that reflected those responsibilities might have been better still.
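With the names we settled on, the refactored shape looked roughly like this (the parsing details and the api-side rules are invented here for illustration):

```ruby
# Common parsing lives in the superclass; each subclass supplies its
# own filtering and transformation.
class ProductFileLine
  attr_reader :fields

  def initialize(raw)
    @fields = raw.chomp.split("|").map(&:strip)   # assumed delimiter
  end
end

class ApiLine < ProductFileLine
  # assumed api-side filtering rule
  def keep?
    fields[1] == "ACTIVE"
  end

  # assumed api-side output format
  def transform
    fields.values_at(0, 2).join(",")
  end
end
```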
Next, we looked at the code that was looping through the input file, filtering and transforming, and creating the output file. We added acceptance tests to check that a second file was being created. We modified the loop to create a second file, and our tests passed.
Now we wanted to implement the new filtering and transformation for the web file. This we decided would be better served through unit tests. First we wrote the specs for the new filtering rules, and then the code to get those to pass. Then we wrote the specs to transform each line to the correct format.
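The web-side rules lived in their own line class; here is a sketch with invented rules (the real filters and formats came from the BA's write-up):

```ruby
# Hypothetical web-side line class; the filtering rule and output
# format below are assumptions for illustration only.
class WebLine
  def initialize(raw)
    @fields = raw.chomp.split("|").map(&:strip)
  end

  # assumed web filtering rule: require a non-empty description column
  def keep?
    !@fields[2].to_s.empty?
  end

  # assumed web output format: tab-separated
  def transform
    @fields.join("\t")
  end
end
```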
At this point, we were done with the happy path for this story. What remained were a couple of specific transformations, and any final cleanup. We also wanted to look at an example input file for any special data considerations. We continued after lunch.
We transformed several columns, changing their data types from text flags to booleans. The new specs failed as expected; we updated the code to make them pass, and continued.
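The flag-to-boolean change amounts to something like this (the "Y"/"N" flag values are an assumption about the source data):

```ruby
# Convert a text flag column to a boolean; treats anything other
# than an assumed "Y" flag as false.
def to_boolean(flag)
  flag.to_s.strip.upcase == "Y"
end
```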
We reviewed the order of the data we were exporting, and arranged the fields in a more logical order. We also reviewed the BA's write-up (which was done at this point) for anything that we may have missed. We identified a small item, fixed it, and continued.
At this point, we squashed our local commits down to one, pushed our changes up to source control, and called it a day.
The obvious point is that the day flew by, and it was incredibly fun. But there is more to it than just fun.
Working with Jim and our BA, we were able to start development quickly with a minimal amount of ceremony. I was able to start developing while the BA was detailing the story, and at the end of the day we were able to do a final check before moving the story to QA. We went with as light a process as we could, and we worked in parallel once we had a good common understanding.
We never strayed too far from all tests passing. We kept our changes small. We worked in small increments, and committed (locally) whenever we were at a stable point. Though there was no need today, had we needed to go back and take a different approach, it would have been as easy as reverting a commit or two.
The code that we started with was already well-factored. This made it easy to make our changes.
There were tests in place when we started, so that refactoring was safe. When we finished refactoring, we were quite confident that we hadn't broken anything.
Finally, we were pairing. Both Matt and I were at the keyboard, and taking turns driving and thinking throughout the episode.
PS: Towards a stronger design
The extension to the design that we did today was sufficient. We could have taken it further by having our Processor class follow the Open-Closed Principle. That way, its core functionality would not need to change when we need to add another output file; we would simply add a new Line class with its own filters and transformations.