The object epiphany

February 2, 2009

My friend Mike Wagg recently had what I’m calling the object epiphany that all Ruby devs have at some point, the one that makes them fall in love with the language. Now I can’t claim to have had this great moment yet; I haven’t given myself any time to play in the language to achieve Ruby nirvana.

The reason I’ve decided to blog about it is that it does tend to lead to some kind of OO rebellion, whereby devs cry from the rooftops that all OO languages before Ruby (or some other dynamic language) weren’t truly OO. That, in fact, class-based languages are of the false gods and that we should all come into the light that is duck typing. I may have got a little carried away with that there. Sorry.

I want to record my current viewpoint and opinion here so that, in time, I can come back and look at what a fool I had been. Hopefully that comes soon after I’ve made a million on my latest Rails app. Or indeed you can do just that right now if you’ve already got there. You lucky so-and-so, you.

So it goes a little something like this. Ruby emphasises the messaging side of objects; it gives you complete freedom to look at the polymorphism side of OO all by itself. This is great. It’s called duck typing. You send a message to (or call a method on) an object, and if it can respond, it does so. Awesome, I’m no longer tied down by the compiler and its evil desire to know all about all before it allows you to ‘compile’, a completely unnecessary step in a world where I’m a rock star programmer. Sheet, if I wanted to I could open up that there object (not class) and add to it. This must be OO, I just referred to something as an object and not a class!
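
To make that a bit more concrete, here’s a minimal sketch of the idea; the Duck, Tourist, and pond names are entirely my own invention:

```ruby
# Duck typing: no shared superclass, no declared interface. If an
# object responds to the message, Ruby is happy.
class Duck
  def quack
    "Quack!"
  end
end

class Tourist
  def quack
    "Er... quack?"
  end
end

[Duck.new, Tourist.new].each { |thing| puts thing.quack }

# And I really can open up a single object (not its class) and add to it:
pond = Object.new
def pond.quack
  "The pond can quack now too, apparently."
end
puts pond.quack
```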

You then start to think: we can do away with inheritance completely! For me this is where I get a little confused as to why this is such a revelation. I’ve never thought of inheritance as the primary mechanism for polymorphism (I mean, why would they be distinct OO concepts if that were so?). The idea that one would use inheritance in order to achieve some polymorphism, and for that reason alone, seems a little odd. I’ve always thought inheritance was for that old chestnut, code reuse. I’ve even heard that some folk don’t like inheritance when it’s used for just that reason! Yes, I know to prefer composition over inheritance; it’s more flexible and so on. It’s all too easy to start arguing away inheritance entirely by overlooking something I think is quite integral to OO.
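
For what it’s worth, a rough sketch of that composition-is-more-flexible argument, with Car and engine names that are purely my own illustration:

```ruby
# Composition: a Car *has an* engine rather than *being* one, so the
# behaviour can be swapped without rearranging any class hierarchy.
class PetrolEngine
  def start
    "vroom"
  end
end

class ElectricEngine
  def start
    "hum"
  end
end

class Car
  def initialize(engine = PetrolEngine.new)
    @engine = engine
  end

  def start
    @engine.start
  end
end

puts Car.new.start                      # => vroom
puts Car.new(ElectricEngine.new).start  # => hum
```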

OO’s primary benefit, in my opinion, has always been that it’s just easier to map a real-world domain problem into computer code with it. My little brain has a better chance of understanding what a computer is doing if it’s expressed to me in groupings of stuff (logic and data) that I have a chance of mapping to something in the real world. Further to this, inheritance just fits this model of thinking. If I go about building a dog, and then I have to build a cat, and I see that they both work in the same way for some task (I don’t know, chewing), I’m going to throw that there stuff into something they both are; let’s go with animals. Yeah, sure, I could make an Animal mixin that gives anything the ability to chew. I don’t disagree that that’s a potential course of action, and it may well have its benefits. It is still a bit easier to ‘get’, though, when inheritance links the two. The idea that OO is for dealing with objects, and that dealing with object blueprints (classes) isn’t OO, seems a little baby + bathwater. I still get to think in objects, and alright, I don’t get to monkey patch them, but really, when was the last time a monkey rocked up and gave you the ability to quack like a duck? Not that it wouldn’t be cool if I were somehow trapped in a pond.
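
A rough sketch of the two options I’m on about; Animal, Chewing, and friends are just my names for it:

```ruby
# Option one: pull the shared chewing up into something they both are.
class Animal
  def chew
    "chomp chomp"
  end
end

class Dog < Animal; end
class Cat < Animal; end

# Option two: a mixin that gives *anything* the ability to chew,
# animal or not.
module Chewing
  def chew
    "chomp chomp"
  end
end

class PaperShredder
  include Chewing
end

puts Dog.new.chew           # => chomp chomp
puts PaperShredder.new.chew # => chomp chomp
```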

So, inheritance is for my thinking, and for code reuse; I don’t think it’s for anything else. You get polymorphism for free, but it’s not there for it. When I hear or read that interfaces are in Java / C# for the purpose of multiple inheritance, I’m a little bit sick in my throat. Every time. Where was the code I inherited, or even the frikin data? There wasn’t any. The only thing you could say I inherited was my public interface contract, and even then just potentially a small part of it. No, interfaces are for polymorphism, and that alone. I appear to have gotten a little confident in my rant. Interfaces allow you to send a message to any object that can handle it; they let you take a heterogeneous collection of objects that share some common interface and play with them as though they were the same, while allowing them to specialise how they behave for that contract. All fairly nifty stuff.
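
Ruby hasn’t got an interface keyword, so the nearest sketch I can offer is a module standing in for the contract, with classes specialising it; Shape, Circle, and Square are my own made-up names:

```ruby
# A module playing the part of a Java-style interface: it declares the
# contract but carries no data and no real implementation.
module Shape
  def area
    raise NotImplementedError, "#{self.class} needs to implement area"
  end
end

class Circle
  include Shape

  def initialize(radius)
    @radius = radius
  end

  def area
    3.14159 * @radius ** 2
  end
end

class Square
  include Shape

  def initialize(side)
    @side = side
  end

  def area
    @side * @side
  end
end

# A heterogeneous collection, played with as though they were the same:
[Circle.new(2), Square.new(3)].each { |shape| puts shape.area }
```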

Why do we have to put up with C-like languages and their need to know everything? Where did it come from? From what I can tell, and this has nothing to do with any research, just a feeling, it’s because these languages were made by hardcore computer scientists. They had to deal with just 1K of RAM, or had fresh memories of punch card pains. I think what it gets us is performance and maybe stability. In the day of 3.2GHz x 4 cores on my personal computer, that probably means not so much, but if Twitter’s uptime is anything to go by, I’d say it’s still got a little bit left in it. (NB: I don’t think that’s all it gives us, I just want to keep to less than 1k words.)

Keep in mind Ruby is a language that throws all languages together to allow anyone to join in. How much of this is achieved by accident I don’t know. I suspect much of it comes from Python, and Ruby has just made it accessible to us foolish C/C++/Java/C# newbies by pretending it likes classist design. It’s a rock star language made by a rock star for rock stars (the dude’s name is Matz with a ‘z’ and everything).*

I can’t wait for my object epiphany.

*The ‘z’ thing may have been made up by me, I’m not sure.