Hacknot has published a contrarian paper on Python, Ruby, and other dynamic languages called “Invasion Of The Dynamic Language Weenies”. It’s an interesting read; its central premise is that dynamic language partisans (like me) haven’t empirically proven any of the benefits they tout. I think that’s a fair criticism, albeit one the article takes to a ridiculous extreme. What I find odd is that he spends so much time pointing out the lack of empirical evidence, and the reliance on biased and/or anecdotal evidence, on the pro-dynamic-language side of the debate, but then turns around and makes exactly the same mistakes on the con side of the argument:
Actually, I had trouble imagining how it might be true even within limited domains. Though I’ve principally used Java for the last 10 years or so, and C/C++ for five years preceding that, I have a basic familiarity with Python, having written a few utilities here and there with it. Reflecting on those experiences I could see no basis for such a startling claim to code brevity.
In other words, “I can’t imagine it and haven’t experienced it, therefore it must be false…”
Can dynamic typing result in a reduction in code volume of 80 to 90 percent? It seems highly improbable to me. Suppose I had a piece of Java code and went through it removing all the type qualifications. Would that reduce the code volume by 80 to 90 percent? No, I don’t think so. Maybe there are other omissions that dynamic typing would make possible, such as obviating adapter-style classes and some interfaces. But even including such omissions, claiming an 80 to 90 percent code saving seems a bit rich.
He dismisses actual studies comparing static and dynamic languages as overly biased, then relies on a “mental exercise” to prove his own side of the argument.
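The “remove all the type qualifications” thought experiment is easy to make concrete. Here is a minimal, hypothetical sketch (the function and the measured percentage are my own illustration, from neither article): strip the type annotations from a small function and measure how much text actually disappears.

```python
# Rough illustration of the "remove the type qualifications" thought
# experiment: the same function with and without type annotations.
# The saving from dropping annotations alone is modest -- which is
# consistent with Hacknot's point that type names by themselves don't
# account for an 80-90 percent reduction, whatever else might.

annotated = "def add(a: int, b: int) -> int:\n    return a + b\n"
plain = "def add(a, b):\n    return a + b\n"

saving = 1 - len(plain) / len(annotated)
print(f"{saving:.0%}")  # well under 80-90 percent
```

Of course, the pro-dynamic-language claim is usually about omitted boilerplate (adapter classes, interfaces, declarations), not just deleted type names, so this toy measurement cuts both ways.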
On the whole, no matter how hard I tried, I simply could not convince myself that dynamic typing and runtime code modification, either individually or in concert, could produce a code base that was “10 to 20 percent” the size of its static equivalent. Either the quote was inaccurate, had been taken out of context, or Deibel was engaging in some fairly outrageous marketing hyperbolae.
Again, “I can’t imagine it and haven’t experienced it, therefore it must be false…”
I recall doing maintenance and extension work on a system written in C containing approximately one million lines of code. To build that entire system from scratch took several hours. But that didn’t mean that every test/debug cycle we performed was punctuated by that same delay. There were large parts of the code base that we were not involved in changing, and they were built into libraries once at the beginning of the project, and thereafter we just linked against them. The net result was that in each test/debug cycle we were compiling only a small fraction of the code base, then linking against the pre-built libraries. This took seconds, not hours. It’s part of basic build management when working with a static language that you only recompile what you have to. The DL weenies would like you to believe that a burdensome compile/link time is an unavoidable downside of using static languages when in fact, it is not.
And again: “This wasn’t true on one system I worked on, therefore it is universally untrue…”
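To be fair, the incremental-build workflow he describes is real enough, even if the generalization isn’t. Its core is a timestamp check: recompile only sources newer than their object files, and link everything else from pre-built artifacts. A minimal sketch of that check (file names and the helper are hypothetical, not from either article):

```python
import os

def needs_rebuild(source: str, obj: str) -> bool:
    """True if the object file is missing or older than its source --
    the same rule a build tool like make applies to each target."""
    if not os.path.exists(obj):
        return True
    return os.path.getmtime(source) > os.path.getmtime(obj)
```

In a typical test/debug cycle, this check fails for all but a handful of files, which is why recompilation takes seconds rather than hours.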
I could go on, but you get the idea. So here’s my challenge to Hacknot: you seem awfully certain about things. Why don’t you back up your side of the argument with some hard empirical data of your own?
Posted in Dynamic Languages