MEF and the Personal Spike

February 12, 2009

A week or two ago I attended the Open Space Coding day arranged by Alan Dean, and held at the Conchango offices. The Alt.Net community is very good at getting together and talking about code, software, and how it should all be done, but the focus of this meeting was to get on and write something!

The format was much like an open space conference: the first thing we did was suggest things we’d like to look into, experiment and play with. There was a morning and an afternoon session; in the morning I went to one on static reflection, a very interesting technique of using lambda syntax to analyse code as a traversable tree of expressions. All very good. Unfortunately for me, this was the moment my Windows 2008 Server VM on my MacBook decided to bomb out on me. And bomb out it did. Blue Screen of Death before Windows even got a chance to get its boot on. How embarrassing. There was me thinking I could finally be one of the cool kids, with my shiny white MacBook with aftermarket RAM and HD upgrades just so I could run Windows in a VM, and it all fell apart.

What I took away from the static reflection session is that it made me a whole lot more comfortable with reflection in general. Up until now I had always treated it like a leper of hackery, unjustly so, but this experience changed that. It did appear to me that there was space here for a good library to make traversal of the expression tree a lot more intuitive. Let me know if you know of any, or if I missed something that makes it all easier than it looks.
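For anyone who hasn't seen the technique, here's a minimal sketch of the idea (the names are my own, not from the session): a lambda passed as an `Expression<Func<...>>` isn't compiled to a delegate, it arrives as a tree you can walk, so you can pull a member name out of code rather than a magic string.

```csharp
using System;
using System.Linq.Expressions;

public class Person
{
    public string Name { get; set; }
}

public static class StaticReflection
{
    // Walk the expression tree of a lambda like `p => p.Name`
    // and pull out the member name - no magic strings required.
    public static string MemberName<T, TResult>(Expression<Func<T, TResult>> expression)
    {
        var member = expression.Body as MemberExpression;
        if (member == null)
            throw new ArgumentException("Expected a simple member access, e.g. p => p.Name");
        return member.Member.Name;
    }
}

// Usage:
// StaticReflection.MemberName((Person p) => p.Name);  // "Name"
```

Rename the property and the compiler tells you every caller that needs updating, which is exactly what string-based reflection can't do.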

The second session I attended was on MEF. As we were all new to the format (or at least I thought we were), it got off to a bit of a slow start. We did that thing where you go into a room prepped and briefed not to expect anyone to lead, and then stare at the person who seems to know more than anyone else until they stand up and start presenting. Andrew Clancy, a Conchango employee, admitted to having played with MEF, and so showed us all very basically what it’s all about.

Once we had been briefed we paired up and made our own toy examples. I think it was a good thing that Andy’s example was out of date and the version of MEF we all downloaded was completely different. It forced us all to learn a bit better how it all worked. Mike and I quickly came up with a toy example that involved contract killers.

We quickly cut two DLLs, each containing one type of contract killer. MEF made it unbelievably easy to export these implementations of IContractKiller: just attribute them up and they’re ready for consumption. The contract itself lived in its own library, and finally we had a console app to tie it all together and offer up some victims to be killed (in various ways).

MEF allowed us to simply load all the DLLs in a directory and, in an IoC-like way, make them available to plug into IContractKiller-shaped holes (properties tagged with corresponding attributes). It also rather neatly allowed us to get all the implementations in an IEnumerable for us to use.
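The shape of our toy looked something like the sketch below. The API moved around a lot between the previews we were using, so this follows the shape MEF eventually settled on (`System.ComponentModel.Composition`); the class names are made up to match our example.

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// The shared contract, compiled into its own library.
public interface IContractKiller
{
    void Kill(string victim);
}

// Lives in one of the plugin DLLs - the attribute alone makes it discoverable.
[Export(typeof(IContractKiller))]
public class Poisoner : IContractKiller
{
    public void Kill(string victim)
    {
        Console.WriteLine(victim + " was poisoned.");
    }
}

// The console app that ties it all together.
public class Agency
{
    // MEF fills this IContractKiller-shaped hole with every export it finds.
    [ImportMany]
    public IEnumerable<IContractKiller> Killers { get; set; }

    public void Compose(string pluginDirectory)
    {
        var catalog = new DirectoryCatalog(pluginDirectory);
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this); // satisfies the [ImportMany] above
    }
}
```

Drop another DLL with another `[Export]` into the directory and it shows up in `Killers` on the next composition, with no registration code anywhere.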

Now here’s what I really learnt that day. I knew about MEF when it was released and have seen many a blog post about people using it (seen, not read). But only when I actually played with it did it start clicking: how I could leverage it in my work, where it might be appropriate to alleviate some problem we were having, and also where it wouldn’t be so handy. It’s very easy to get carried away with a new toy. Scott Cowan, who also attended the session, described this to me as a personal spike.

Take an hour or two, some time-boxed period, branch your code or knock up a simple harness, and just play with something you’ve not used before. Something you’re interested in learning. With no goal other than to learn a little more about what it is you’re playing with. As described in Pragmatic Thinking and Learning by Andy Hunt, learning by play really does work. It may seem obvious, but reading about something really is totally different. Andy explains that it’s because play uses a different part of your brain, and that neglected part seems to have a hidden ability: to percolate and strike you in the downtime. Let your mind wander a little and it often does go and find something interesting and useful!

Woo, my first post written and published in one sitting. I hope it doesn’t show (much).

The object epiphany

February 2, 2009

My friend Mike Wagg recently had what I’m calling the Object Epiphany that all Ruby devs have at some point, the one that makes them fall in love with the language. Now I can’t claim to have had this great moment yet; I haven’t given myself enough time to play in the language to achieve Ruby nirvana.

The reason I’ve decided to blog about it is because it does tend to lead to some kind of OO rebellion, whereby devs cry from the rooftops that all OO languages before Ruby (or some other dynamic language) weren’t truly OO. That, in fact, class-based languages are of the false gods and that we should all come into the light that is duck typing. I may have got a little carried away there. Sorry.

I want to record here my current viewpoint and opinion so that, in time, I can come back and see what a fool I had been. Hopefully that comes soon after I’ve made a million on my latest Rails app. Or indeed you can do just that right now if you’ve already got there. You lucky so-and-so.

So it goes a little something like this. Ruby emphasises the messaging side of objects; it gives you complete freedom to look at the polymorphism side of OO all by itself. This is great. It’s called duck typing. You send a message to (or call a method on) an object, and if it can respond, it does so. Awesome. I’m no longer tied down by the compiler and its evil desire to know all about everything before it lets you ‘compile’, a completely unnecessary step in a world where I’m a rock star programmer. Sheet, if I wanted to I could open up that there object (not class) and add to it. This must be OO, I just referred to something as an object and not a class!

You then start to think we can do away with inheritance completely! For me this is where I get a little confused as to why it’s such a revelation. I’ve never thought of inheritance as the primary mechanism for polymorphism (I mean, why would they be distinct OO concepts if that were so?). The idea that one would use inheritance in order to achieve some polymorphism, and for that reason alone, seems a little odd. I’ve always thought inheritance was for that old chestnut, code reuse. I’ve even heard that some folk don’t like inheritance when it’s used for just that reason! Yes, I know to prefer composition over inheritance, it’s more flexible and so on. But it’s all too easy to start arguing away inheritance entirely by glossing over something I think is quite integral to OO.

OO’s primary benefit, in my opinion, has always been that it’s just easier to map a real-world domain problem into computer code with it. My little brain has a better chance of understanding what a computer is doing if it’s expressed to me in groupings of stuff (logic and data) that I can map to something in the real world. Further to this, inheritance just fits this model of thinking. If I go about building a dog, and then I have to build a cat, and I see that they both work in the same way for some task (I don’t know, chewing), I’m going to throw that stuff into something they both are; let’s go with Animal. Yeah, sure, I could make an Animal mixin that gives anything the ability to chew. I don’t disagree that that’s a potential course of action, and it may well have its benefits. It’s still a bit easier to ‘get’, though, when inheritance links the two. The idea that dealing with object blueprints (Classes), rather than just objects, somehow isn’t OO seems a little baby-with-the-bathwater. I still get to think in objects, and alright, I don’t get to monkey patch them, but really, when was the last time a monkey rocked up and gave you the ability to quack like a duck? Not that that wouldn’t be cool if I was somehow trapped in a pond.

So, inheritance is for my thinking, and for code reuse; I don’t think it’s for anything else. You get polymorphism for free, but it’s not there for it. When I hear or read that interfaces are in Java / C# for the purpose of multiple inheritance, I’m a little bit sick in my throat. Every time. Where was the code I inherited, or even the frikin data? Nowhere. The only thing you could say I inherited was my public interface contract, and even then just potentially a small part of it. No, interfaces are for polymorphism, and that alone. I appear to have gotten a little confident in my rant. Interfaces allow you to send messages to any objects that can handle them; they let you take a heterogeneous collection of objects that share some common interface and play with them as though they were the same, allowing each to specialise how it behaves for that contract. All fairly nifty stuff.
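To make the point concrete, here's a toy C# sketch (names invented to keep the duck joke going): nothing is inherited except the contract, and each implementation specialises its behaviour.

```csharp
using System;
using System.Collections.Generic;

public interface IQuacker
{
    void Quack();
}

public class Duck : IQuacker
{
    public void Quack() { Console.WriteLine("Quack!"); }
}

public class Robot : IQuacker
{
    public void Quack() { Console.WriteLine("Playing quack.wav"); }
}

public static class Pond
{
    // A mixed bag of objects that share nothing but the contract;
    // each one specialises how it honours it.
    public static void QuackAll(IEnumerable<IQuacker> quackers)
    {
        foreach (var quacker in quackers)
            quacker.Quack();
    }
}

// Usage:
// Pond.QuackAll(new IQuacker[] { new Duck(), new Robot() });
```

No code or data flowed from anywhere to Duck or Robot; the interface exists purely so the pond can treat them the same.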

Why do we have to put up with C-like languages and their need to know everything? Where did it come from? From what I can tell, and this has nothing to do with any research, just a feeling, it’s because these languages were made by hardcore computer scientists. They had to deal with just 1K of RAM, or had fresh memories of punch card pains. I think what it gets us is performance and maybe stability. In the day of 3.2GHz x 4 cores on my personal computer, that probably means not so much, but if Twitter’s uptime is anything to go by, I’d say it’s still got a little bit left in it. (NB: I don’t think that’s all it gives us, I just want to keep to less than 1k words.)

Keep in mind Ruby is a language that throws all languages together to allow anyone to join in. How much of this was achieved by accident I don’t know. I suspect much of it comes from Python, and Ruby has just made it accessible to us foolish C/C++/Java/C# newbies by pretending it likes classist design. It’s a rockstar language made by a rockstar for rockstars (the dude’s name is Matz, with a ‘z’ and everything)*

I can’t wait for my object epiphany.

*the z thing may have been made up by me I’m not sure

Evil Burndown Charts

January 26, 2009

This post is in response to Tim Ross’s post, ‘Are Burndowns Evil?’

Tim asks how useful a burndown chart is, especially in the case of a new team with a new or unknown code base. A burndown chart (with an ideal line) immediately gives a new team ‘schedule pressure’. Some may say that pressure instantly kills creativity and induces a ‘get it done’ mentality. I’d like to disagree, just a little bit. I think what little pressure a burndown gives at the start of the iteration can be a good thing. Every team needs a little hustle. Indeed, in the case of a new team, after a few days of a flat-line burndown this could well start having its negative effects, but that is down to the team. Should the team’s first course of action be to abandon all quality, that for me is a people problem. A fresh new team with no idea of its velocity should be made aware that the burndown, for them, is very much an indicator. I think this indicator is always a good thing. It tells us early if we’ve overcommitted, and vice versa. It can inform your decisions on changing and new requirements.

When a burndown is used by project managers or anyone else to see how much work is done, you’ve got a small problem. As Tim notes, that just plain ain’t true. If you’re strict and award your points only when a task is ‘Done Done’ (and not half the points for half the work), you could misinterpret the burndown as saying no one’s done any work. Again, it’s indicating how much of the work (maybe a better way of putting it would be business value) you committed to has been delivered, and perhaps even is ready to ship.

Breakthroughs, insights, and epic refactorings: you can consider these to be big chunks of work that go toward improving the code base while potentially not adding any immediate business value. Now let me stress here that these types of changes go a long way toward a maintainable code base. No one likes working with a big ball of mud, and it’s not just a selfish emphasis. From a purely business-focused view, allowing the time for these kinds of things early and often plain old means you get more for your money in the long run. Code changes get cheaper, and code changes. Period.

Burndowns don’t reflect the value a team gets from such activities very well, and indeed this could lead to a vicious circle of teams never getting to implement these insights, which would be plain wrong. I think the key thing here is to note that the burndown actually allows you to make a slightly more informed decision as to when to take on these tasks. Some would say later is always the wrong time to do them, that they would get neglected and forgotten. Now, I’m sure small, easily lost refactorings should come into your normal workflow and therefore your velocity, so I’m not including them in this. The larger pieces may well need to wait; perhaps in future iterations they can be thought about, planned for, and assigned points. Somewhat of a technical story, but when it’s presented with its value weighted and, dare I say it, costed, it’s something far more powerful to present to the people that pay the bills. The automatic notion that these things should be done as soon as they’re discovered is almost a form of premature optimisation, in my opinion.

To summarise: indeed, a burndown chart’s use can get lost for a new team, and it shouldn’t be the focus of morning stand-ups. It’s an information radiator, not an information altar. Use it effectively, and don’t let it make you feel so bad so easily!

If you’d like to know what good a burndown chart is for a team that has a velocity, search the Internet.

Sustainable pace

January 2, 2009

One of the great things about Agile software development that I’ve noticed is sustainable pace. Planning poker, point-based estimation, and velocity are all great tools for finding a team’s sustainable pace. Knowing a team’s velocity, and consistent estimating by the entire team, allows a team to commit to as much work as it is (increasingly) likely to complete, and to complete consistently.

As well as properly managing expectations, a sustainable pace has the benefit of ensuring the quality of a team’s work is consistent. When the team is working at such a rate, there is no crunch time and no working late. As developers, working beyond our capacity can result in long-lasting problems. Besides the whole no-life-outside-of-work thing, it also affects the quality of our work, which always results in cost later in the life-cycle, be it in maintainability or in bugs that come out of last-minute fixes.

So I sound pretty sold on this whole idea, don’t I? Well I am, and I’m almost proud that one of my close friends always leaves the office at ‘hometime’. I taught him that both he and his work would be better off than if he stayed behind to finish that last bit! Well, this all bit me in the butt recently, and I’ve noticed it has definitely killed off a great part of the job: the Programmer Hi-Five.

So some six weeks ago I was handed a project that was ambitious to say the least. I was asked to take our biggest product, which made the most money and was closely watched for compliance, and rebuild it. No pressure. Being such an important product, it naturally followed that it was the most hideous piece of ill-maintained code we had. Given said compliance issues, we were also not allowed to deliver functionality incrementally, as it was all required to stay compliant. [see curse of the rewrite]

All of this was to be delivered by one of the hardest deadlines we had ever had. As a business there’s an awful lot of buy-in to the whole agile thing, which is something I’m quite proud of, but this wasn’t very ‘incremental’. So with all of that in mind, I agreed to do the project. As much as I love what we do and how consistently we do it, I work best under unreasonable pressure. I don’t understand any other kind of deadline.

Oh and did I mention it was with a fresh team that hadn’t worked together before? All good fun.

Given a mighty fine team and even some license to rework our QA process, we worked ourselves hard. We often stayed late, sometimes because we were in the groove (the best kind of working late) and sometimes just to make sure we stayed on course (the worst). We even worked for a few hours on the weekend. Once.

I’m getting mighty long-winded, so I’m going to try and summarise the rest:

Good things:

  • Pushing ourselves that hard meant we achieved a lot in a short space of time.
  • We had to work hard to ensure quality throughout, but that has certainly brought new insight into how we can improve our QA process going forward.
  • The team gelled very quickly. I don’t think we’re done, but I certainly witnessed us passing through forming and storming in an iteration.
  • Programmer Hi-Five. A whole lot of satisfaction when something came together. I don’t know why this doesn’t happen more often in the normal course of a project. My hands have never stung so bad.

Bad things:

  • From outside of the team, and maybe even from within the team at times, we appeared to not be in control. Almost firefighting. We weren’t (always), but I’m sure as a team we’d prefer to give the impression we knew what we were doing at all times.
  • We don’t have a velocity. We worked too hard to be able to find one, so in the upcoming ‘normal’ iterations I have no idea how much work the team is capable of at a sustainable pace.
  • Left behind. The amount of work we had to do meant that more junior members of the team were often left to fend for themselves.
  • Cool down period. We need a cool down period, luckily this all happened at the end of the year. The only time for it if you ask me. Hopefully it won’t be difficult to ramp up after this.

So I think we should throw out sustainable pace once in a while. (I’m thinking once every one or two years). Just to remind ourselves of the fun we had as hackers, and why it’s so much better as developers. The Edge of Chaos is fun.

In an effort to get our teams understanding ‘Done Done’, we’ve started looking at acceptance testing. Over the last 18 months or so we’ve moved from a traditional waterfall process to a full-on Agile one; our final hurdle is the QA process. We still have a very waterfall-esque QA approach.

We’ve done lots in terms of getting QA involved early on and making environments available for the different stages of testing. But we still have a major problem whereby all our QA tends to get done once the developers yell ‘FINISHED!’.

The problem is, we developers tend to lie. Not knowingly, more out of a desire to move on to something more interesting. So even though we may be making our burndown chart look a little neater, we’re in fact delaying the inevitable. QA returns our shiny code to us, complete with bugs we never thought possible.

So recently we’ve started looking at FitNesse, in an effort to bolster our QA efforts and also to shine more light on the QA process. If we get more people looking at it, we can surely all help to improve it by offering new tests and finding ways to improve how the existing tests work.

I’ve known about FitNesse for quite some time, since CITCON made it to London, my first open space conference. However, even then the general consensus was that it was great but quickly turned into a beast to maintain. This problem still remains; at the recent conference Gojko himself said: it ain’t great, but it’s all we’ve got.

Having waited too long for someone to come up with a better alternative (whilst keeping an eye on J Miller), we’ve started to give FitNesse a go. I’m doing my best to ensure we don’t fall into the pitfalls people have fallen into before. My main gripe would have been the need to build fixtures. I’ve heard many a nightmare about fixtures replicating ‘value’ already given by unit testing, fixtures being brittle, and fixtures requiring as much effort as production code to keep going. These are all things I’m trying to keep us from.

So recently my new team showed me our second good acceptance test (the first, referred to in the title that inspired this post, was by another team which I used to experiment with). The approach I’m taking is to use the FIT approach against the parts of the system that have to handle all the ‘scenarios’. We’re following a flavour of SOA that I like (not full-on SOA, that still makes me uncomfortable). We’re also cleaning up some of the more terrible examples of misplaced responsibility we have. This gives us an opportunity to move domain logic from the ASP.NET front end (urgh) to the places it belongs.

The SOA gives us a nice potential for isolation. That means we’re able to get our examples from the business (or indeed from the beast that we’re rewriting) and feed them in, table-like, to check the code we wrote was the right code! One thing I really don’t want to get hung up on is driving Selenium or other slow front-end tests from FitNesse. That would just give us the illusion of acceptance testing, when all we’d really be doing is moving our old fragile, brittle, slow Selenium tests into a fancy wiki that no one would read.

But I’ll come back to that later. One decision which we’ve made, and you might disagree with this, is to tie the fixtures directly beneath the WCF contract stuff we have for the service. This means a fixture talks to a purer, less noisy version of what the service offers. So far this has allowed our tests to read quite clearly in terms of what makes up a table. It has also hidden quite a bit of the fluff we use on all our services, which would give our fixtures another reason to be brittle.
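For the curious, a fixture in this style looks roughly like the sketch below. Every name here is invented for illustration (our real domain isn't loan quotes); the pattern is FIT's classic ColumnFixture: public fields for the table's input columns, a public method for the expected-output column, and the fixture calling the service class itself rather than the WCF-hosted endpoint.

```csharp
using fit; // the FIT library for .NET

// Hypothetical fixture sitting directly beneath the WCF contract.
// One wiki table row = one test case.
public class QuoteFixture : ColumnFixture
{
    // Inputs: one public field per table column.
    public decimal LoanAmount;
    public int TermInMonths;

    // Output: FIT calls this for the "monthly payment?" column and
    // compares the result with the expected value in that cell.
    public decimal MonthlyPayment()
    {
        // Talk to the service class itself, beneath the WCF plumbing.
        var service = new QuoteService(); // made-up service class
        return service.Quote(LoanAmount, TermInMonths);
    }
}
```

Because the fixture binds to the plain service class, it stays thin: no channel factories, no bindings, no serialization fluff to break when the WCF configuration changes.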

It does, however, mean that we wouldn’t pick up any problems with the WCF layer we put on top. I’m going with this being an OK trade-off, considering how good it felt to see 400 test cases passing and 100 failing. I never thought I’d enjoy seeing red tests so much!

Methinks I need to be more specific (and more frequent!) in my blogging; maybe if I don’t let posts sit in drafts for two weeks they’ll be better. Hopefully there’s more on acceptance testing to follow!

Estimating tasks with points

September 17, 2008

Following on from my entry on the altnetuk session on Agile Process, I’m just going to lay out the different types of estimating with points that I know of. I’m not sure why, but at that session it felt like someone had put an end to points-for-complexity estimating and forgotten to let me know!

Pure time based estimates

You’ll be wrong. The time it takes dev A to do a task will be different to the time it takes dev B. You will run into problems with the task and it’ll totally throw all your estimates out (I may be exaggerating).

Your manager will not understand why you can’t do 7.5 hours’ worth of work in a day. There is no room for ramp-up, context switching, or any of those minutes you don’t think of.

Burndown charts also tend to look like complete failures. Every hour of your day is accounted for, but you don’t actually spend every hour of your day on storied tasks, so those other hours show up on a burndown chart as slacking. When this happens, it’s not reasonable for the team to commit to less in the next iteration, as that implies they’re not spending their time doing any work. This totally ruins the point of committing to work in a planning meeting, if there is little or no intention of completing it all. It also makes it difficult to ball-park future iterations.

Time to points

The team eventually manages to estimate tasks without thinking about the time they take. It’s easy to figure out your capacity or velocity, easy to manage when team size fluctuates (illness, holiday, expansion), and easier to sell to management. The downside is there’s no room for improvement: how do you increase your velocity without making time? It’s possible, but only by admitting that you’re not estimating with time any more.

Pure points

Difficult to start a project with a new team. I handle this by assuming we’re unstoppable and committing to all the stories we have, then using the number of points we do complete as a baseline for the following iterations.

Pure points motivate developers quickly. Increasing velocity this way is far easier to motivate developers with: the idea that you’re getting better at doing work is much nicer than the idea that your old estimates were slack. New developers also fall into line more quickly in planning poker; they stop influencing the estimate with how quick they think they are (maybe quicker than some of the team, maybe slower) and go straight to how complicated the task is. Points always help when you get estimating discrepancies.

Pure points also more clearly separate one team’s estimating technique and accuracy from another’s. With a per-team velocity and meaning for points, it’s more difficult for management to unfairly compare two teams’ estimates and use them as a flawed performance indicator.

Agile session Alt.Net London

September 17, 2008

So to start the conference I chose to take part in a session about agile development and processes, mainly because I wanted a refresher on some of the points and to see if I could contribute. It was nice to see plenty of people there who were new to Agile, as I was at my first open conference, learning the basics of the process and making a good start. I was also interested to see how people had developed from their initial attempts at introducing Agile to their shops.

At one point there was a lot of talk about how we estimate, and I was surprised to see that everyone had assumed estimating in hours and half days: assigning some time to points and then estimating with points. I wonder what happened to estimating with points representing complexity. At uSwitch, when we introduced planning meetings and planning poker, we started with 5 points to a day. This was mainly to get developers used to the idea of estimating in points and as a team. Over time, in teams where this method has remained, they have naturally loosened the tie between points and the number of hours they represent when estimating. They still use the rule to define how much capacity they have in a given iteration: 5 points × devs × 10 days = capacity, so a team of four has 200 points to commit against.

For me and my teams, we’ve moved to the next logical stage: entirely decoupling estimation points from time and tying them to the complexity of a task. Has this gone out of fashion and no one told me?

I think I’ll break this off into a separate post on estimating.

Crystal and full project breadth

Ian Cooper shared with the group the method, or agile flavour, he’s using with his current team. It’s called Crystal and sounds interesting. It sounds like the emphasis of Crystal is continuous improvement and customisation of the process to your team’s and project’s needs. A quick bout of googling tells me that what follows doesn’t have much directly to do with Crystal. From what I’ve been able to glean (and it’s not much), Crystal is primarily focused on improving the methodology frequently, aiming to minimise its weight.

The main point Ian brought up that caught my attention was the move from pure priority-ordered prioritisation to a more whole-system, or breadth, approach with incremental refinement. Purely working off stories by priority apparently leads to systems where a lot of work and attention is spent on the apparently high-priority sections, while features and functions the client deems a lesser priority are essentially neglected, typically tacked on at the end of the project. This can lead to seemingly unfinished projects (my conclusion).

The three-thirds approach that Ian described seems to encompass the entire system with increasing levels of ‘focus’. The first phase is the ‘Walking Skeleton’: just enough of the entire system to get something going from a technical perspective. This might take up most of the ‘framework’ stuff that one has to do to get a project going. Many people likened this to an iteration zero, which doesn’t deliver much (or any) business value but is necessary to build something you can demonstrate.

The next phase delivers the ‘First Business Value’: taking the walking skeleton and adding enough to actually add to what the business has, making a contribution.

Here is where my notes get a little woolly, and I wish I had written this post while I was in the room! The final phase adds all the other features, improves on the initial business value, works on feedback from the client on the second phase, and perhaps includes changes to requirements.

From the weakness of those last two paragraphs, I think I’m definitely going to do a little more research, so check back for a bit more bulk in later posts.

This notion of increasing the focus with subsequent iterations really did intrigue me, and I’m definitely going to see how I can apply it to our next project.