
Everyone should learn programming

I had no intention of writing about this meme because I'm biased by the fact that we're building a platform to allow programmers to teach programming, but then Jeff Atwood wrote his "Please don't learn to code" post. I'm sitting at home, grumpy because I'm ill, so this was all the excuse I needed to get this off my chest.

Frankly, I don't care if Mayor Bloomberg should learn to code, mostly because I don't live in New York. For me, what's important is the fact that he will never learn to code non-trivial programs by going about it in this manner. Much of the hype on the internet misleads people into assuming that by writing a code snippet a week, they're going to become programmers. That's like assuming if you solve a Sudoku problem once a week for a year, you will become a mathematician.

Learning programming is not easy. It's easier than many other disciplines because you can learn through experimentation, but to be a good programmer, you need to make the effort to understand how computers work.

The less you understand, the less effective you are as a programmer.

You see, any non-trivial software program depends on dozens or hundreds of abstractions. For the non-programmers reading this post, an abstraction basically hides a complex mechanism or concept behind a simple interface. A real-world example of an abstraction would be the remote control of a television - you don't need to know what goes on in the circuitry of the TV to switch channels; you know that hitting that button makes it happen, and all of the underlying complexity is "abstracted away" from you, the user. This is ok, because you are not in the business of designing and building TVs.
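For the programmers in the audience, here's the same idea as a minimal Java sketch - the names and methods are made up purely for illustration. The public method is the button on the remote; the private methods are the circuitry:

```java
// Hypothetical names, purely for illustration.
public class Television {

    // The "button on the remote": a simple public interface.
    public void switchChannel(int channel) {
        tuneReceiver(channel);  // the complex mechanism stays hidden below
        renderPicture();
    }

    // The "circuitry": callers never see or care about these details.
    private void tuneReceiver(int channel) {
        System.out.println("Tuning receiver to channel " + channel);
    }

    private void renderPicture() {
        System.out.println("Decoding and displaying the signal");
    }

    public static void main(String[] args) {
        new Television().switchChannel(7);  // all a caller ever does
    }
}
```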

You can certainly write many trivial programs without looking under the hood, but for bigger pieces of software, some understanding of how all the abstractions in your system work is essential. Memory management, threading, storage and scheduling are just some of the areas you need to be familiar with before you can understand why your program behaves in a certain way. Once you get into production software, you need to understand networks, packets, routing - I could go on like this for five minutes. Without understanding all of this, the behaviour of your program is a black box to you. When something goes wrong - which it will - you will have no idea why it happened. And you will not be able to fix it.

Getting the basics sorted would easily take six to twelve months of serious study. Without this, you're just messing around, building "Hello world" programs that provide gratification to you and little else.

Long story short, there's nothing stopping anyone from becoming a kick-ass programmer, but remember, programming isn't an exception to the 10,000 hour rule.




Evolutionary design of interfaces and the IClass pattern

My previous post on the IClass pattern - where, for example, class Tree has an interface ITree and Tree is the only class implementing ITree - sparked a small debate in the comments section on whether creating an interface for every class
  • affects coupling (I don't think it makes a difference)
  • makes it easier to design in an evolutionary manner (I think it actually runs counter to principles of evolutionary design)
A significant number of my readers disagreed, so I started to respond in a comment - but it grew so big I figured I might as well make a post of it. I'm looking forward to feedback so I can refine my understanding and correct any misconceptions I may have.
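For anyone who missed the earlier post, this is the shape of the pattern under discussion - a minimal sketch using the Tree/ITree names from that post (grow() and Forest are placeholders I've invented):

```java
// The IClass pattern: one interface, exactly one class implementing it.
interface ITree {
    void grow();  // grow() is a made-up placeholder method
}

class Tree implements ITree {  // the one and only implementor
    @Override
    public void grow() {
        System.out.println("growing");
    }
}

public class Forest {
    // Clients hold ITree references, but only a Tree can ever sit behind them.
    private final ITree tree = new Tree();

    public static void main(String[] args) {
        new Forest().tree.grow();
    }
}
```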

BTW, here are the exceptions I'd mentioned where creating an interface per class makes sense:
  • you are developing a framework where you may have just one class, but third parties may wish to implement functionality in the future in ways you can't predict
  • you are forced to do so by a framework you're using like an IoC framework or certain mocking frameworks
  • and a new one from refactoring.com: several clients use the same subset of a class' interface
Basically, for the rest of this post I'm talking about OO design in isolation, without limitations imposed from outside. So...

If there is a 1:1 mapping between a class and an interface, then using the interface instead of the class in no way increases or decreases the level of OO coupling. Indeed, you try to decrease coupling for a reason, right? It's so you can polymorphically swap classes in and out at runtime without disturbing higher layers of abstraction. When there is no polymorphism (just one class), what benefit do we get?
Only if there are two or more classes implementing that interface, or multiple clients using the same subset of a class's interface, does creating and using the interface instead of the implementing classes make a difference to the way you'd write and think about your code.

When you define an interface for a single class, you're guessing at the ways in which higher layers of abstraction will see that class in the future, without actually knowing what behaviour it will share with its sibling classes, because the siblings don't exist yet. The idea is to wait until a sibling exists, then extract an interface based on the way the two classes are being seen by higher layers. That is how I evolve interfaces at the moment.

If there is ever any need to introduce a second class which does the same things but in a different way, then I identify the common bits and run the 'extract interface' refactoring. This creates an interface signature based on how I'm actually using the interface, as opposed to tempting me to predict how it may be used in the future.
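To make that concrete, here's a sketch with invented names. Imagine XmlReportWriter lived alone for a while, used directly with no interface; only when JsonReportWriter appeared did 'extract interface' produce ReportWriter, with a signature taken from how clients were already using the original class:

```java
// After the refactoring: the interface's signature came from actual usage,
// not from a guess made back when XmlReportWriter was the only class.
interface ReportWriter {
    void write(String report);
}

class XmlReportWriter implements ReportWriter {   // the original lone class
    public void write(String report) {
        System.out.println("<report>" + report + "</report>");
    }
}

class JsonReportWriter implements ReportWriter {  // the sibling that justified the interface
    public void write(String report) {
        System.out.println("{\"report\": \"" + report + "\"}");
    }
}

class Scheduler {
    private final ReportWriter writer;  // clients now see only the abstraction

    Scheduler(ReportWriter writer) {
        this.writer = writer;
    }

    void run() {
        writer.write("nightly totals");
    }

    public static void main(String[] args) {
        new Scheduler(new JsonReportWriter()).run();  // implementations swap freely
    }
}
```

The abstraction appears exactly when the second implementation does - nothing about it had to be guessed up-front.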

Otherwise, I cannot think of a single instance where I would use the class any differently from the interface in my code. Of course the interface makes stubbing easier, but that's a limitation of Java and is easily solved by using a good mocking framework. The test is to see if you need interfaces in other languages in the same sort of situation. Does the lack of interfaces in Ruby increase coupling? Not really. Then why should it make a difference in Java?
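For instance, Mockito will happily stub a concrete class. A minimal sketch, assuming a hypothetical Tree class with a non-final height() method:

```java
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class TreeClientTest {

    @Test
    public void stubsAConcreteClassDirectly() {
        // No ITree in sight: Mockito subclasses Tree at runtime.
        // (Works as long as the class and method aren't final.)
        Tree tree = mock(Tree.class);
        when(tree.height()).thenReturn(40);

        assertTrue(tree.height() > 30);
    }
}
```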

Can someone give me an example where creating an interface for a class (with a 1:1 mapping) makes a difference to the way you write your code, without any external influences (including external frameworks)?

'Programming to interfaces' strikes again

Why oh why do people create interfaces which are implemented only by a single class? There's no polymorphism there that necessitates the interface. What value can there possibly be in writing that superfluous code?
For the umpteenth (well, OK, second) time, 'Programming to interfaces' does not mean 'create one interface for every concrete class'.

Extract an interface if you have two or more classes doing the same 'things', but with differences in the implementation, or if you are designing a framework and want to define a contract for future extension. Not otherwise.
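That framework case is worth a quick illustration (all names invented for the example) - you may ship only one implementation, but the interface is an explicit contract for extensions you'll never see:

```java
// A framework extension point: the contract itself is the deliverable,
// even though the framework ships just one implementation today.
interface StorageProvider {
    byte[] load(String key);

    void save(String key, byte[] data);
}

// The single built-in implementation...
class FileStorageProvider implements StorageProvider {
    public byte[] load(String key) {
        // read from disk (elided)
        return new byte[0];
    }

    public void save(String key, byte[] data) {
        // write to disk (elided)
    }
}
// ...while third parties remain free to plug in S3, a database, whatever -
// extensions the framework author can't predict. That's why this interface
// earns its keep despite today's 1:1 mapping.
```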

Static or dynamic, use whatever works best

All this hype and noise about static vs. dynamic is starting to get to me. You know what? As a developer, I don't care if a language is static or dynamic. I want to write code to get things done quickly, efficiently and in an aesthetically pleasing manner. The last also translates to 'very maintainable', by the way.

Today, Ruby does that for me. Yeah, so it's dynamic. So is Python. Why am I not using Python? Mostly because I started out with Ruby and haven't needed anything faster, so I never really looked at Python seriously (I'd originally claimed here that Python lacks first class function objects - thanks, Paul, for pointing out that it does have them). I love my blocks and closures. I can do everything in Ruby that I can do in Python, but it'll run slightly slower - that's a trade-off I'm prepared to make in my current context. I'm also very, very productive in Ruby as opposed to Java - I can create elegant code without having to create a bunch of interfaces first. Programming to an interface is not the same as creating a blessed Java interface - jeez, why the constant misunderstanding? Just because you create an interface first in Java doesn't mean you have a better design. In fact, I'd argue that you should extract your interfaces in an evolutionary manner, not design them up-front.

However, I'm fairly convinced that Ruby doesn't work on teams of more than 10 people, unless all the developers are absolutely superb. Even then, you still need good documentation in the form of both rdocs and unit tests to ensure that code quality doesn't drop. If I'm on a large project, I'd probably vote against Ruby. I'd want to eliminate trivial errors by having a compiled, statically typed language. If it gave me blocks and closures, I'd jump at it. Maybe something like Scala. But here's the point - I will use whatever helps me get my job done well given the current context.

An important thing to remember is that all the programming 'swords' we use cut two ways. Take Java - it was designed to reduce developer error. It's fairly successful at that, but is consequently a singularly ugly and verbose language, and the libraries aren't very intuitive. I have seen more crap code in Java than you'd believe possible, and half the enterprisey OSS projects out there are an attempt to make development in Java half-way productive. And without a decent refactoring IDE like IntelliJ IDEA or Eclipse, you've already lost the battle. But hey, it's pretty damn fast, and that HotSpot JVM is really cool.

Ruby, on the other hand, was designed for the programmer. It has a superb set of libraries which are arguably more intuitive than Java's. Ruby gives a disciplined developer the power to work miracles. Your code will be readable and elegant without being verbose. Everything's an object, and you get blocks, closures, continuations, first class function objects, message based method invocation, meta-programming...
But you know what? It's kinda on the slow side. And all that freedom in the hands of a novice is a sure recipe for disaster. Class behaviour hacked at runtime to fix bugs. No unit tests. Lots of magic in method_missing. Duck-typing misused causing type errors at runtime. Need I go on?

What I'm getting at is that there is no silver bullet - everything has its place. Your average 70,000-employee outsourcer trying to do Ruby projects on a large scale is probably a bad idea, because they can't always guarantee developer quality. Java, on the other hand, combined with CMM, ISO and three other certifications, may help them achieve some minimum level of code quality.
On the other hand, take something like Mingle or Slideshare, both running on Rails. I know for a fact that Mingle has a very small team (< 10 including devs, analysts etc.) and yet managed a 1.0 release in about 9 months. Slideshare, on the other hand, supposedly started out with Ruby but I believe has now moved to Erlang for performance reasons.

Deciding which language to use should be based on factors like the delivery timeline, the experience level of the team, the level of discipline in development (TDD, CI) and the performance requirements - not on whether you like interfaces and believe that you can't use them in a dynamic language.

So the moral is this. Get. The. Job. Done. And use what it takes to do it quickly and well in that context. Period.