Apple’s Take on Application Programming Environments?

This is a follow-up to yesterday’s post on Application Programming Environments. As I thought more about what I wrote, it brought to the forefront some questions I’ve been wondering about in this area. Specifically, does Apple have a comprehensive strategy/philosophy/outlook on the issue of native code vs. JIT code, or on Objective-C vs. dynamic languages? If so, could someone please explain it to me (and no, I’m not being facetious – I really want to know).

But First, A Mini-rant on Objective C

I don’t like Objective C. Never have. There. I said it. Flame away!

Why don’t I like it? First, let me say what I do like: I think the runtime is amazing in the way it combines dynamic behavior with native speed. I think Cocoa is fantastic. Toll-free bridging is brilliant. But all this discussion of bridging between native languages like C and Objective C and dynamic languages like Ruby and Python has really crystallized for me what I dislike about Objective C: the constant need to do a cognitive shift back and forth between dynamic Smalltalk goodness and static C minutiae. Here’s an example of what I’m talking about, courtesy of Wil Shipley:


- (id)_realItemForOpaqueItem:(id)findOpaqueItem outlineRowIndex:(int *)outlineRowIndex
    items:(NSArray *)items;
{
  unsigned int itemIndex;
  for (itemIndex = 0;
       itemIndex < [items count] && *outlineRowIndex < [self numberOfRows];
       itemIndex++, (*outlineRowIndex)++) {
    id realItem = [items objectAtIndex:itemIndex];
    id opaqueItem = [self itemAtRow:*outlineRowIndex];
    if (opaqueItem == findOpaqueItem)
      return realItem;
    if ([self isItemExpanded:opaqueItem]) {
      realItem = [self _realItemForOpaqueItem:findOpaqueItem outlineRowIndex:outlineRowIndex
          items:[realItem valueForKeyPath:[[self _treeController] childrenKeyPath]]];
      if (realItem)
        return realItem;
    }
  }
}

This mix of id and C-style pointer dereferencing just sets my teeth on edge, and after years of using “scripting” languages I’m not super-fond of all those square brackets either. The equivalent code in Ruby would be both more readable and more succinct. I am, of course, aware that this is a subjective aesthetic judgment. If you don’t mind this style of programming, I’m happy for you. But for me it was enough to keep me out of the Cocoa/Mac OS X programming universe – kinda ironic since I once worked in MacDTS and knew more about Mac internals than most folks.
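
To show what I mean, here’s a rough sketch of that traversal in Ruby. It is not a line-for-line port of Shipley’s method – the outline is modeled as plain hashes, and every name here (`real_item_for_opaque_item`, the `:children` and `:expanded` keys, the `state` hash standing in for the `outlineRowIndex` in/out parameter) is invented for illustration, not taken from any real bridge API:

```ruby
# Toy stand-in for the outline traversal above. The "outline view" is
# just a flat array of opaque row objects plus a tree of real items;
# state[:row] plays the role of the outlineRowIndex in/out parameter.
def real_item_for_opaque_item(find_opaque, state, items)
  items.each do |real_item|
    break if state[:row] >= state[:opaque_rows].length
    opaque = state[:opaque_rows][state[:row]]
    state[:row] += 1
    # identity comparison, like the == pointer test in the original
    return real_item if opaque.equal?(find_opaque)
    if opaque[:expanded]
      found = real_item_for_opaque_item(find_opaque, state,
                                        real_item[:children] || [])
      return found if found
    end
  end
  nil
end

# Demo: "a" is expanded with children "b" and "c", so the flattened
# opaque rows come out in the order [oa, ob, oc, od].
b = { name: "b" }
c = { name: "c" }
a = { name: "a", children: [b, c] }
d = { name: "d" }
oa = { expanded: true }
ob = { expanded: false }
oc = { expanded: false }
od = { expanded: false }
rows = [oa, ob, oc, od]

found = real_item_for_opaque_item(oc, { row: 0, opaque_rows: rows }, [a, d])
```

No id vs. int * distinction, no pointer dereferencing, no square brackets: the dynamic parts and the plumbing read the same way, which is exactly the cognitive shift I’m complaining about.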

Apple’s Investments in JITs and Dynamic Languages

So despite my inability to overcome my aversion to Objective C, I still try to keep up with all things Mac OS X. It is clear that Apple has a strong commitment to Objective C. They’re making lots of improvements in Objective C 2.0. But they’re also making some interesting technology investments around JITs, dynamic languages, and so forth:

  • Core Image has a nifty JIT compiler that takes GLSL code and dynamically compiles it for the target GPU or CPU. Cool.

  • OpenGL in Leopard uses LLVM to dynamically recompile graphics code on the fly for systems that don’t do hardware vertex shading. This code replaces a previously existing JIT compiler (who knew!).

  • Apple is doing LLVM work targeting the ARM processor. Is this for the iPhone or something else? Beats me. But using LLVM would allow Apple to build a higher-level, Java-style VM on top of it targeting Objective C, or it would let Apple dynamically recompile programs for new processors and architectures (kinda like Rosetta on steroids): no more fat binaries. Or it could be something else entirely.

  • Apple is shipping Java with OS X but doesn’t really put a lot of investment into it. They killed the Cocoa-Java bridge. Can’t say I miss it.

  • They ship Python, Ruby, and Perl with Mac OS X. New bridge technology makes these quite usable with Cocoa, which is cool, but it isn’t clear whether these bridge technologies will be included with Leopard.

On the other hand, you have the current situation with fat binaries: PowerPC, x86, x64, and maybe even ARM binaries. This is obviously not a scalable solution when you start talking about optimizing for many different configurations of CPU cores, GPUs, etc.

Microsoft’s Strategy

By comparison, Microsoft’s strategy is obscenely simple: .NET everywhere, and dynamic languages are welcome to come along for the ride. IronPython and its kin are all supported on .NET. LINQ adds more dynamic behavior to C# and VB.NET. The .NET Compact Framework is part of the Windows Mobile story. WPF/E will someday have a mini-CLR that runs on Macs and maybe elsewhere. Microsoft has buried the hatchet with Novell (unfortunately they seem to have buried the hatchet in the back of the Linux community, but that’s another topic), so the Mono team no longer needs to worry about patent litigation from Microsoft. Heck, they even want Apple to support Cocoa#.

A Plea to the Lazyweb

What I want to know is this: exactly where does Apple see the future of JIT compilers and dynamic languages? Will Apple maintain a slavish devotion to natively compiled Objective C, or do they plan to start moving more and more to dynamic language development as so many others are doing? I’d love to hear what someone from Apple has to say, of course, but welcome contributions from other industry watchers.

One Last Tangent

As an update to my earlier piece, see this interview with Mark Hamburg about the use of Lua in Lightroom:

So what we do with Lua is essentially all of the application logic from running the UI to managing what we actually do in the database. Pretty much every piece of code in the app that could be described as making decisions or implementing features is in Lua until you get down to the raw processing, which is in C++. The database engine is in C; the interface to the OS is in C++ and Objective C as appropriate to platform. But most of the actually interesting material in the app beyond the core database code (which is SQLite) and the raw processing code (which is essentially Adobe Camera Raw) is all in Lua.

[Update 02-17-2007: Corrected some editing errors and typos.]

~ by Andrew Shebanow on 16Feb07.

6 Responses to “Apple’s Take on Application Programming Environments?”

  1. I don’t think Shipley’s code should be quoted in any instance. Especially when it has a severe lack of error checking and dereferences items for no reason.

    Secondly, how are fat binaries not scalable? IIRC, on Tiger 31 separate archs were supported (but that was a limitation of the dev tools generating them, not the OS reading them).

    [Andrew says] On fat binaries: simply that the idea of having to compile & link 2 or 3 times (which is the story today) sucks as a developer. Doing it 4 times, or 5 times, or whatever is even worse. Then there is the problem of increased application size – Flash Player for Mac is now twice as big a download as it used to be.

    As for Shipley’s code, I chose it because it had the mixture of C and Smalltalk-isms I think are typical in Objective C, not because it was or wasn’t a sterling example of best practices. Besides, it was too long for my purposes even without the error checking.

  2. Interesting piece. I particularly agree with your thoughts on Objective-C.

    When I first came across it I loved the dynamism for constructing user interfaces. It was the first time I’d come across a real separation between UI definition and code (nib and Objective-C file) in a desktop environment. It felt leagues ahead of Microsoft’s code generation approach (which it’s nice to see them moving away from with XAML).

    Two things I dislike about the current Objective-C:

    1) Memory management. The reference counting method is little better than C-style malloc and free. And it suffers from the same ambiguity in terms of who is actually responsible for doing things like incrementing the ref count. Sure, there’s a convention, but conventions go wrong.

    2) The messaging syntax. Maybe I’m being picky but I really find all the square brackets an eyesore.

    It’s interesting what you say about Microsoft’s approach being simpler. In some ways it is, but it feels to me like C# is the first citizen of .NET and all the others are just kinda along for the ride.

    [Andrew says] I completely agree that C# is the first citizen of .NET, but that doesn’t really make Microsoft’s strategy any more complicated imho.

  3. Right on! I last used to develop desktop applications about ten years ago, on Windows, when the technology was a lot different to today. Since then I have been a 100% server-side UNIX-based developer and have moved to OS X.

    I had a reason to try some client app development again last year and while I appreciated Objective C and Cocoa, it just felt like stepping back in time compared to the advances in server-side development.

    Sadly, the Cocoa bindings for most scripting languages are incomplete and reasonably unstable (I’d contribute, but it’s not my field). I’ve also looked at Microsoft’s situation, and had a play with things like SharpDevelop, Delphi and a trial of MSVS 2005 and developing GUI apps on Windows still seems streets ahead of OS X in terms of choice.

    I guess if you can get ‘into’ Objective C then OS X gives you the best of every world.. but if you want language choice, it’s poor. I’d rather knock up a Web app to run locally!

  4. On fat binaries: simply that the idea of having to compile & link 2 or 3 times (which is the story today) sucks as a developer. Doing it 4 times, or 5 times, or whatever is even worse. Then there is the problem of increased application size – Flash Player for Mac is now twice as big a download as it used to be.

    This isn’t an inherent problem in fat binaries. It’s an issue with the suckiness of Apple’s implementation of ld. If you’re compiling things for Windows x64, you’re still going to have to compile and link more than once.

    The complaints you state are common to any 32-bit/64-bit hybrid system, but you don’t mention the benefits, such as smaller overall file sizes (localizations and resources like images don’t have to be duplicated) and a much, much better user experience, like not having to worry whether a binary is for one architecture or another.

    As for Shipley’s code, I chose it because it had the mixture of C and Smalltalk-isms I think are typical in Objective C, not because it was or wasn’t a sterling example of best practices.

    The example shown is also an evil hack. His example isn’t typical because most people using both would never dereference pointers like that. If you remove the dereferences and add error checking, it’d be far more typical.

  5. I’m with you. I seriously hope Apple is putting substantial resources into a post-ObjC world, because otherwise they’re going to fall further and further out of favor with developers outside the core NeXTies. Anyway, I’m in the same boat – the last software I wrote for the Mac (besides Java stuff that just runs) was in Metrowerks’ PowerPlant. I’ll probably re-adopt the platform once they’ve moved beyond Objective-C.

    Apple embracing and extending C# and Mono would be pretty nifty and would go a LONG way towards bringing more developers to the platform, and probably dramatically increase the quality of the tools at the same time.

  6. Peter Cooper: I like your turn of phrase even though I’ll bet it was unintentional. You characterize Cocoa bindings as “reasonably unstable.”

    What would it take for you to consider them “UNreasonably unstable?” It also sounds like a state of mind.

    James:

    I love Objective-C and the square brackets. I’m sure someone may be able to prove this definitively, but for every pair of square brackets you use in ObjC, you would have had a pair of parentheses in C, C++, C# and Java.

    One thing I really like about ObjC is the labels on the methods. For example,

    + (NSColor *)colorWithCalibratedRed:(float)red green:(float)green blue:(float)blue alpha:(float)alpha

    When using Xcode, you are guided through each labeled argument, so you can confidently fill in each one correctly. This is clearly superior to the standard argument list of a C/C++/C#/Java function or method. While you can argue that the author of a method or function MAY supply descriptive dummy parameters, you are at the mercy of framework suppliers as to whether you have to refer to documentation excessively. The practice in the Cocoa world is well established as excellent. It is not REQUIRED to label ObjC methods, but I have yet to see an example of one that is not.

    [Andrew says] You raise some good points. I don’t agree completely about the argument passing stuff – I didn’t mention it in my original post, but the use of C casting operators to declare types is another one of those annoying cognitive shifts I dislike.

Comments are closed.

 