Wednesday, February 24, 2010

The 0-Guard

Last week I was reading an excellent article by Bill Simmons on the "true" trade value of NBA players. While talking about likely rookie-of-the-year Tyreke Evans, Simmons dubbed him a 0-guard. I thought that this was quite clever, and several other players immediately came to mind. Simmons went on to list three other players as 0-guards: Gilbert Arenas, Dwyane Wade, and Brandon Roy. I thought of some others. Here are what I would call the defining characteristics of the 0-guard:

1.) They must have the ball in their hands. These are players who are generally useless on offense without the ball. They are not good spot-up shooters. They aren't going to move well without the ball to get open or get teammates open. They generally look lost in a half-court offense if the ball is not in their hands at all times.
2.) They are not good passers. Now many 0-guards rack up a lot of assists. However, that is just because the ball is always in their hands. The only way they pass the ball is if the defense is all over them and teammates are wide open, hence the assists. However, they don't always do this very effectively. For example, in his best years Arenas averaged 6 assists/game, but also 3.5 turnovers/game, while averaging 21 shots/game and 10 free throw attempts/game. Last year, Dwyane Wade averaged 7.5 assists/game, but that came with 5 turnovers/game, while taking 22 shots and 10 free throws per game.
3.) They can get their shot -- easily. This kind of goes without saying. All of these guys are effective at creating for themselves/getting open off the dribble, around screens, etc. If they couldn't do this, they would not be playing at all. Often they are spectacularly good at this -- Dwyane Wade, for example.

These are the three obvious 0-guard characteristics. However, there are some other more subtle shared characteristics.

4.) They are poor rebounders. These guys want to get out in transition, so they do not hit the defensive glass, and they make no attempt to box anyone out.
5.) They are defensive liabilities. Sort of a continuation of #4. All of their energy goes into scoring; not much is left for defense. Now some guys may average a lot of steals, but that is a notoriously misleading statistic when it comes to defense.
6.) Point guard size. Maybe this helps to explain #5 and #4, but these guys generally don't have the size to match up against shooting guards. This does not necessarily imply being short -- Wade is 6-4 and Evans is 6-6.

Given the above, I don't think Brandon Roy belongs on the list. For his career he has averaged 5 assists but only 2 turnovers per game, while taking 16 shots and 6 free throw attempts per game. That does not sound like a guy who has to have the ball in hand -- or maybe I'm wrong and he's just not a very good 0-guard. I think Simmons put him on the list because Roy demands the ball in his hands at the end of games.

There are some other obvious guys who belong on this list. Starting locally, Monta Ellis is definitely a 0-guard. This year he has averaged 5 assists / 4 turnovers per game, while taking 22 shots and 6 free throw attempts per game. This is definitely Arenas/Wade country. Brandon Jennings is borderline. So is Derrick Rose, but in both cases these guys are young enough to "grow out of it." Rodney Stuckey is a 0-guard, but not a very good one. Jason Terry also qualifies, but he doesn't realize it. He thinks he can spot-up and shoot, but he can't. 

Finally, I think Bill Simmons got it wrong when he said that Gilbert Arenas invented the 0-guard position. To me, the quintessential and original 0-guard was Allen Iverson. He had a season (01-02) where he averaged 5.5 assists / 4.0 turnovers to go with 28 shots and 10 free throw attempts per game! Now late in his career, he did become a better passer, but was still more of a 0-guard than a point guard.

Thursday, February 18, 2010

eBay Mobile for Android

I have been working on mobile at eBay since the fall of 2008. Almost immediately upon joining the team, I started running my mouth about how we needed to create an Android application. At the time, the G1 had just come out, and, let's be honest, it was definitely a cut below the iPhone -- the standard that all phones are measured against whether they like it or not. The development environment was attractive, but we had to be pragmatic about things. eBay Mobile for iPhone was a big hit, and we had tons of ideas for improving it. So Android had to wait. Yesterday, the wait was over and eBay Mobile for Android became available for download from the Android Market. It requires Android 1.6, and we have not rolled it out in all of the countries that our iPhone app is available in, so depending on your device and your location, it may not yet show up in the Market. If you're on 1.6, just be patient, hopefully we'll be in your country soon. If you're on 1.0/1.1/1.5... well, your carrier needs to upgrade you to 1.6 :-)

The app was the result of a lot of hard work by some very talented engineers. If you have an Android phone, then of course I am hoping you will go try it out. One of the cool features of the Android platform that we took advantage of is integration with the system-level QuickSearch. This makes it possible for popular search terms on eBay to be automatically suggested to you as you type in the QuickSearch box on the home screen of your phone. However, this is one of those curious features in Android. It is relatively easy to do (though making it fast, for something dynamic like popular search terms on eBay, can be tricky -- our engineers did an awesome job on this), but it is not something that you can "enable" by default. What I mean is that any application can become a provider for the QuickSearch box, but applications cannot get their suggestions to automatically appear. Instead the user must manually add the provider, and this is not the most obvious thing to do. To give you an idea, I did a short video that shows you how to do this:

See what I mean? It's hard to expect that most users will figure this out on their own. Maybe we could put some kind of message in the app instructing them how to do this, but it is not something you can really describe succinctly. Anyways, as you see in the video, once you add eBay to your list of search providers, the results from eBay will appear only at the bottom of the list of suggestions. Once again, it would be very easy for a user to not even know that they are there. If they scroll down and find them, and do this a few times, then supposedly Android is smart and starts putting the eBay results closer to the top of the list...

I think this is something that Android could definitely improve on, since I think it is a great idea to allow other apps to hook into the "global" search, if you will. I can definitely imagine the Facebook app and various Twitter apps doing this too. However, one could understand if apps did not do this, given how difficult it is for an app's contributions to surface in the search suggestions.
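For developers curious about the provider side of this, it starts with a searchable configuration like the rough sketch below. The authority value is made up for illustration, and the ContentProvider that actually serves the suggestions (plus its manifest declaration) is omitted:

```xml
<!-- res/xml/searchable.xml: a hedged sketch of a searchable configuration
     that makes an app eligible for the system QuickSearch box. The
     authority value is hypothetical; it must match a ContentProvider
     declared in the manifest that serves the actual suggestions. -->
<searchable xmlns:android="http://schemas.android.com/apk/res/android"
    android:label="@string/app_name"
    android:hint="@string/search_hint"
    android:searchSuggestAuthority="com.example.suggestions"
    android:searchSuggestIntentAction="android.intent.action.VIEW"
    android:includeInGlobalSearch="true" />
```

Note that includeInGlobalSearch only makes the app eligible; as described above, the user still has to add the provider manually in the phone's search settings before any suggestions appear.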

Wednesday, February 17, 2010

Nexus One vs. iPhone

I recently purchased a Nexus One, primarily for Android development. I needed a device for developing Android 2.0+ on. The 2.0 SDK has been out for almost three months now, and yet it is still not available for my old developer phone, the Android Dev Phone One. I had gotten pessimistic about HTC making Android 2.1 (since apparently 2.0 is being skipped) for the ADP1, which is essentially the G1, so it was time to buy a new phone for development. I decided to spend some quality time with the phone, i.e. as a user, not a developer, and share my thoughts.

First, a caveat... I plopped my AT&T SIM card into the N1 before turning it on. That worked just fine -- with some known limitations. By working just fine, I mean that the N1 recognized the SIM card and configured itself for AT&T's APN without me doing anything. This was not the case with the ADP1, which was actually a pain to get working initially. You see, every Android phone that I have used wants you to sign in to Google before doing anything else. This can be a real problem if it cannot use the SIM card to access a data network. Anyways, it was no problem at all for the N1. However, as Google has pointed out, the N1 is not able to access AT&T's 3G network. It does not have the right radio for this. I have no idea why this is the case, but I will say this. Let's just imagine that I loved the N1 so much that I decided I wanted it to be my everyday phone instead of my iPhone. Even if that were the case, there is no way I would do this, since I would have to switch service to T-Mobile in order to access a 3G network on the N1, and for me (as well as most iPhone users) that would mean breaking my contract with AT&T. On the other hand, if Google/HTC had only put the right radio in the N1, so that it could access AT&T's 3G network, then that would not be an issue. I bring all of this up now because it does make it more difficult to compare the two phones. If you have one phone that is on 3G and another on EDGE, you will prefer the 3G one unless the EDGE phone is just way better in every other way.

Ok, so the good news is that the N1 is definitely a phone that I could live with. It is fast, I mean really fast. Navigating between applications, application execution, startup, etc. are all very fast on this phone. I would say it is noticeably faster than my iPhone 3GS. In fact, I did some mini-benchmarking between the eBay Mobile app for the iPhone and our new eBay Mobile app for Android, running on the 3GS and N1 respectively. I put both phones on the same Wi-Fi network for this test. The N1 app was definitely faster than the 3GS app. Now the code is different, as our Android app takes advantage of some of the Android features to speed up some common things in the app. Still, the Android app is running in an interpreted environment, the Dalvik VM, not as native code like the iPhone app. So even though it was definitely an apples-to-oranges comparison, I still found it relevant. Our app on the ADP1 (Android 1.6) was much slower.

The phone is also aesthetically pleasing. It is a pleasure to use. It looks nice. It is incredibly thin. In fact, the iPhone feels very chunky in my hand, after using the N1. Using the phone was very intuitive, but of course I have been using Android phones for over a year now, so I am certainly used to the OS. As a phone, it is on par with the 3GS, which is to say it is ok, nothing to brag about. The soft keyboard is easy to use for me, probably because I am used to using an iPhone. The word suggestions on Android are really nice, and it took me no time to start using them aggressively, thus saving myself a lot of typing.

My favorite part about the phone is the email. We use MS Exchange at work, and I am a long-time GMail user for my personal email. With the N1, I could set up both to be in push mode. On my iPhone, I can only set up one or the other to be in push mode. So on my iPhone, I have to check my mail to know if I have new GMail, but I immediately know about it on the N1. Also, I think email for both of my accounts looks better on the N1.

However, this brings me to a negative for the N1. It will connect to our MS Exchange server with no problems and can sync both email and contacts from Exchange, but not calendar. I have been told that this should be possible, but all of the solutions I have seen involve either 3rd-party software or a workaround like syncing my Exchange calendar to a Google calendar first, and then syncing the Google calendar to the N1. I don't want workarounds, I just want something that works easily, so this is a huge negative for me. When I am at work, I often go from one meeting to another, and it is invaluable for me to be able to look at my phone to know when/where my next meeting is.

Speaking of apps, the Android Market is growing rapidly. The Facebook app for Android is very nice, and its use of the OS to allow you to integrate your friends list with your address book on the N1 is excellent. There are a lot of Twitter apps for the N1; I personally liked Swift. There are a lot of other quality apps out there as well. Here is a handy equivalency table of apps between iPhone/Android from Alex Payne. Still, this is definitely an area where the iPhone has a huge advantage. For me, the advantage is most pronounced for games. There are so many more games on the iPhone, with more well-known titles (often backed by big companies like EA.) Also, the games seem to be much higher quality. The number of titles will likely improve over time; hopefully the quality of the games will too. Perhaps this is an area where the Dalvik VM gets in the way, though you can certainly go native for games, potentially reusing code developed for the iPhone.

A couple more things, both good and bad... The browser on the N1 is excellent, definitely on par with the iPhone's. That is both from a user perspective and a developer perspective. Having multi-touch in the browser makes a huge difference for users. Adopting HTML 5 standards makes a big difference for developers. I never saw any pages that rendered noticeably differently on the N1 than they did on my 3GS. Usually it was the same, only faster. Overall the browser, like most other things, tends to look better on the N1 than the 3GS simply because the screen is so much better... On the downside, the camera was a little more inconsistent for me on the N1. There have been times where I took pictures on it that came out completely out of focus. They certainly did not look out of focus when I took the picture. Maybe this is bad luck. The N1 has a 5MP camera, but I would not say that rates it above the 3GS. Don't get fooled by the megapixel myth. I think pictures on both cameras were of similar quality -- except when the N1 would get things out of focus. The flash on the N1 seems useful, however.

So there it is. If I lost both of my phones and AT&T cancelled my contract, it would actually be a tough decision between the 3GS and the N1. I would go with the 3GS because of the calendar issue and its big advantage in the number of apps/games available for it. The calendar issue seems like an easy issue to address, and the apps/games discrepancy is becoming less pronounced. It's not like the Android Market needs to have parity with the App Store, at least not in terms of quantity. Also, my issues are far from universal. So I could definitely see a lot of people picking the N1 over the 3GS.

Sunday, February 14, 2010

The Anti-Netbook

For Christmas this year, I bought my kids (ages 4 and 5) their first computer: an Acer Revo. I call this the anti-netbook, but it actually shares a lot of things with a netbook. First is its processor, an Intel Atom clocked at 1.6 GHz. This helps to make for an incredibly small form factor, especially for a desktop computer. It has a 160 GB hard drive, but no optical drive.

When I bought the computer, I had exactly one reason for it. I wanted my kids to play educational games on it. That was all. The Revo came with Windows XP installed on it, and this was perfect. I needed Windows on there to get the largest number of potential games for it. Windows 7 seems to be much less of a resource hog than Vista, but I am still wagering that XP comes with less overhead.

I had another somewhat unusual plan for the Revo. It is completely isolated. No network access, not even local intranet. This is why I call it the anti-netbook. I want no Internet on it, I only want software that I personally installed to ever run on it. It is like a miniature Battlestar Galactica. Now I know there are a lot of websites with great games for kids, but I don't need them. I can find plenty of shrink-wrapped software to buy at incredibly cheap prices. However, this did lead me to some problems with the computer.

The first problem was how to install all of this software, since I had no optical drive. I was prepared for this. I simply copied the software to a flash drive and plugged that into one of the four USB ports on the Revo. This is where I hit a snag. Many of these titles require the original CD to be inserted into an optical drive in order to run the game. I suppose this is an anti-piracy tactic, but it really caused me problems. For a couple of the games, I could leave the flash drive plugged in and they were happy. This did not work for most of the games.

So I bought an external optical drive that could be connected via USB. This worked perfectly well, but it does mean that to change games, one generally has to change discs in the external drive. This is not something that I trust my kids to do yet... I had one more nasty surprise in store for me. Many, actually most, games published these days require the game to connect to the Internet, either to install or to play or both. There was a PBS Kids game we bought that was like this, and it was complete fail when we tried to install it on the anti-netbook. So I had to make sure to only buy games that did not explicitly list an Internet connection as a hardware requirement, which generally meant buying a lot of older games. Still, I found plenty of such games to choose from.

At some point I will want my kids to be able to do more with a computer. They will need a word processor. Of course I would like to teach them both to program. I would like to get them something like Mathematica/MATLAB. And of course by then, they will need a computer that is connected to the Internet. When that day arrives, I will get them a Mac -- but probably not an iPad. I will teach them the old school ways!

Friday, February 12, 2010

The Truth About Europe

Earlier this week I read yet another fascinating post by Peter-Paul Koch. In this post, @ppk takes on developers who think that developing for mobile just means developing for the iPhone. He points out that the iPhone has a small market share, in terms of number of devices being bought or in use. He tackles the obvious counter-argument that the iPhone dominates in terms of mobile Internet usage in the US. In particular he points out that the US does not matter, since Asia and Europe account for a much higher percentage of mobile users worldwide. He is completely right, but if you are a mobile web developer, these facts are completely irrelevant.

It does not matter how many mobile web users there are in North America, Asia, or Europe. This has no bearing on how you design your site or what kind of technologies you choose to adopt. All that matters are the users of your site and what kind of devices they use. There can be 100x more smartphones in Japan than in Spain, but if all of your users are in Spain, then you should not care about Symbian's market share in Japan.

For a lot of sites, it is very easy to figure out where their users are because they are not localized. If your site is focused on Brazil, then it's a good bet that most of your users are in Brazil. For sites that are international, things get trickier. This is a problem that I have had to deal with at eBay. Fortunately my job was easier because we already had a mobile site that was set up almost ten years ago. It is designed to support all devices, from old school WAP devices all the way up to iPhones and Droids. It has a presence in all of the countries where eBay has a presence. So there I had a lot of data to use where I could say "this % of our users are using iPhones, this % are using Blackberries" etc.

Most people do not have this luxury. If you do not have a specialized site for mobile, you can still look at your server logs and figure out what mobile devices are being used to access your site. However, you should be cautious with this data. If you do not have a specialized site, chances are your main site does not work properly on a lot of devices (especially true if you use a lot of JavaScript.) So chances are your users with those devices will not use your site from their devices. However, chances are that your site does work properly on an iPhone, even if it is far from optimized for it. So iPhone users may be over-represented in your data. You might want to fall back to data like: X% of users are from country A, Y% from country B, and the device breakdowns in A and B are... Now if your site relies on Flash...
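As a toy illustration of that kind of log analysis, here is a sketch in JavaScript that buckets user-agent strings into rough device families. The patterns are simplistic assumptions for illustration, not a real device database (which is what you would actually want):

```javascript
// Hedged sketch: bucket raw user-agent strings from server logs into
// coarse device families. Real device detection needs a proper device
// database; these regexes are illustrative only. Order matters: the
// iPhone and Android checks must run before the generic desktop check.
function classifyUserAgent(ua) {
  if (/iPhone|iPod/.test(ua)) return 'iphone';
  if (/Android/.test(ua)) return 'android';
  if (/BlackBerry/.test(ua)) return 'blackberry';
  if (/Windows|Macintosh|X11/.test(ua)) return 'desktop';
  return 'other';
}

// Tally a list of user-agent strings into counts per device family.
function deviceBreakdown(uas) {
  var counts = {};
  for (var i = 0; i < uas.length; i++) {
    var family = classifyUserAgent(uas[i]);
    counts[family] = (counts[family] || 0) + 1;
  }
  return counts;
}
```

From the counts, the percentage shares per country fall out directly once you also group the log lines by geography.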

Let's get back to @ppk's post and the iPhone for a moment. He makes a nice analogy to IE6 back in the day. When doing analysis on mobile eBay traffic and our strategy for the future, I actually made a similar analogy. There is one huge difference. The iPhone's browser, Mobile Safari, has been a front-runner in adopting HTML 5 standards. If you are optimizing for the iPhone, there is a good chance that you are in fact using HTML 5 standards. In other words, optimizing for the iPhone does not mean adopting a plethora of technologies that are proprietary to Mobile Safari. It's just the opposite in fact. This is the fundamental difference between Mobile Safari today and IE6 circa 2002. That is not to say that there aren't some technologies that are unique to Mobile Safari. There are. However, those are the exception, not the norm. Most of the code that you could write for Mobile Safari that would not work on other mobile browsers is actually built on standardized APIs that other browsers simply have not yet implemented. So yes, you could be excluding users -- this is where you need to figure out what devices your users are using -- but you are not putting up an iPhone-only wall. You are putting up a wall, but it is a "modern device that supports web standards" wall.

Wednesday, February 10, 2010

On Open Floor Plans

Way back in 2006, I was working for a startup, originally called Sharefare and later renamed to Ludi Labs. When I first joined Sharefare, we had an old office space in Los Altos. It was a "traditional" office space, divided up into offices, with only two large open areas. One was the front reception, and the other was a break room area. While there, we had around ten engineers. We would put 2-3 engineers per office. It worked pretty well.

After I had been at Sharefare a few months, we received another round of funding, but it came with a catch. Our board wanted us to hire a "real" CEO, since our CEO/founder was a very technical guy, not an MBA type at all. Our CEO came from Under Armour, you know the sports apparel maker. He was an east coast guy, who I'm sure we paid a ton to relocate to the Bay Area. When he arrived, we were running out of space in our little old office in Los Altos. So he decided that one of his first priorities would be getting us a new office. He had a vision for the new office -- the floor of the Wall Street stock exchange. He wanted an open floor plan that would encourage collaboration and keep everyone energized. So we moved into a brand new office (we were the first tenants) with a completely wide open space.

Now I always attributed this hubris to our CEO being an east coast guy who had never really been around engineers. You see, engineers do not work well in hectic, loud environments like the floor of the Wall Street stock exchange. A very large part of their job is thinking, and people running around making noise does not help an engineer think. Lots of other folks have written about what kind of environment does work well for engineers, so I won't go too much into it. I'm just here to tell you what does not work.

The open floor plan is complete fail. The most obvious problem is the noise. First, most companies have some non-engineers who work for them. Some of these non-engineers really need to do a lot of talking to do their job. They need to be on the phone ordering office supplies, or talking to potential business partners, or screening new hires. These people constantly generate noise. Put that in an open floor plan, and there is always noise. This is a constant drain on other folks, like engineers and designers, whose work requires them to think a lot.

The next problem is that engineers and designers do indeed need to talk sometimes as well. Sometimes this is talking about work stuff, but there is also a lot of non-work stuff that engineers talk about. Maybe they are debating closures in Java or the merits of Python vs. Ruby. You want engineers who bullshit about stuff like that. Or maybe they are just talking about the Super Bowl or Lost. There is value in this too (y'know, that whole "team building" thing.) Anyways, when you are in an open environment where noise is already a problem, it really discourages these kinds of engineering conversations. If you are lucky enough to need to talk to a fellow engineer when it is relatively quiet in the office, you may be reluctant to engage him/her because you want to relish the peace. If instead it is during a typically noisy time, you will ask yourself whether the topic is really worth talking about if it means shouting over the din. Instead of encouraging collaboration, the open floor plan does the opposite.

Back to Sharefare... It did not take long for everyone to realize that the open office was not working. First, we tried to put the engineers and non-engineers at opposite ends of the office. We had a handful of real offices that were intended for execs. Instead they were kept open, so that people could use them for longer phone conversations. I personally spent a lot of time in those rooms, doing phone interviews with potential new hires. Finally, the CEO came clean and admitted that the whole thing had been a bad decision on his part. He bought everyone Bose noise canceling headphones, supposedly out of his own pocket. This was his high point of popularity with engineering. Now when you walked into the office, everyone constantly had the Bose headphones on. I think this also hurt collaboration. I know I am personally less likely to try to start a conversation with somebody with headphones on.

Anyways, that was our failed experiment. It seems like the idea has been picked up more and more in the Valley. A lot of companies are opting for this configuration. I think some do it as a cost-cutting measure. You can spend a lot less on furniture with an open floor plan, and you can pack a lot more people into a given space. Others claim that it is all about encouraging collaboration. Again, my experience is that it will do the opposite. Still others do it because they want to break away from the traditional cubicle environment. To those folks I would say: "function over form." An ugly office full of productive engineers is better than a stylish office full of miserable engineers.

Friday, February 05, 2010

It's 2010, where's my HTML5?

This morning, one of my co-workers related to me that there had been mention of HTML 5 on an NPR broadcast that he heard. He joked that this morning there were probably lots of advertisers asking their designers about using HTML 5 for their next whiz-bang ad campaign. So obviously we are still in the early part of a hype cycle on HTML 5. So what's the state of the technology and where is it heading?

In terms of pure technology, there is a lot of notable progress. Google did the right thing recently and axed Gears. They had some nice innovations in Gears that have since been rolled up into the HTML 5 standard. Now it makes perfect sense to abandon their proprietary implementation, and embrace the standard. Gears is important because it is (or was) part of Chrome. Thus there are essentially five browsers that now implement a significant subset of HTML 5: IE8, Firefox 3.5, Safari 4, Chrome 4, Opera 10. According to the latest browser share numbers, that means that 53% of users out there are using a browser that supports a lot of cool features, like local storage and selectors. That's more than half! It's tempting to extrapolate from that, but don't get too carried away.

The #1 browser of the HTML 5 browsers is IE8 -- which does not support a lot of the features that the other browsers support, like canvas, the video tag, and web workers. For those would-be HTML 5 advertisers out there, you had better scale back your expectations. For desktop browsers, a lot of the most compelling HTML 5 capabilities are far from being widely supported. How long until IE9 comes out? How much more of HTML 5 will it support? How long until there is high adoption of it? That's too many questions to be comfortable with. Ask me again this time next year.
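This uneven support is why you detect features instead of sniffing browser versions. Here is a sketch of the kind of check involved; purely so the logic can be exercised outside a browser, it takes the window and document objects as parameters (an assumption for illustration), whereas in a real page you would just pass window and document:

```javascript
// Hedged sketch: detect a few HTML 5 features rather than checking the
// browser's name or version. Takes win/doc as parameters so the logic
// can run outside a browser; in a page, call it with window/document.
function detectHtml5Features(win, doc) {
  var canvas = doc.createElement('canvas');
  var video = doc.createElement('video');
  return {
    localStorage: typeof win.localStorage !== 'undefined',
    canvas: !!canvas.getContext,       // canvas elements expose getContext
    video: !!video.canPlayType,        // video elements expose canPlayType
    webWorkers: typeof win.Worker !== 'undefined'
  };
}
```

In IE8 this would report localStorage as available but canvas, video, and webWorkers as missing, which is exactly the split described above.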

However, it's fun to pretend sometimes. Let's pretend that IE 9 did come out soon and that it supported everything supported by Firefox and Chrome. Let's even pretend that Microsoft was able to awaken some of their old monopolist mojo and somehow force people to upgrade. So we have a world full of canvas. Will the next great annoying ad use this? If you read my last post on Flash hate, you probably realize the answer is no. As Terry pointed out in the comments there, the Flash authoring tools really empower designers over developers. So before we see an explosion of canvas magic, there will have to be huge strides made in tooling. Now what about video? Even in our dream world, there is the issue known as codecs. There is no common video format that will run on both Firefox and Chrome right now, and who knows what Microsoft will do here (WMV FTW!) So we are going to need some technical consolidation here. Let's summarize all of the things that we need to see happen to open up the golden age of HTML 5.

1.) New version of Internet Explorer that supports much more of the HTML 5 specification.
2.) This new version of IE needs to become the dominant flavor of IE.
3.) Outstanding tooling developed to enable designers to leverage HTML 5 capabilities.
4.) Standardization on video codecs by browser vendors.

Don't get too bummed out about all of this. I actually think there is real progress happening on #1 and #3. #4 probably has to wait for #1 to happen. #2 is the most problematic. Since we're dreaming, maybe if IE 9 came out with H.264 support, Google could dramatically drop support for anything else on YouTube, and thus nudge all IE users to upgrade?
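To make #4 concrete, here is a hedged sketch of what serving HTML 5 video without a common codec looks like in markup: the same clip encoded twice, with a Flash fallback for browsers that support neither. The file names are made up for illustration:

```html
<!-- Hedged sketch of the codec problem: the same clip must be encoded
     and served twice (H.264 and Ogg Theora), with Flash as the fallback
     for browsers that play neither. File names are hypothetical. -->
<video controls width="480">
  <source src="clip.mp4" type="video/mp4" />  <!-- H.264 -->
  <source src="clip.ogv" type="video/ogg" />  <!-- Ogg Theora -->
  <object data="player.swf" type="application/x-shockwave-flash">
    <param name="movie" value="player.swf" />
  </object>
</video>
```

A single agreed-upon codec would collapse this to one source element and one encode, which is the consolidation #4 is asking for.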

If you want to drink the HTML 5 Kool-Aid, and this is bumming you out, I'm sorry. You might feel better to know that the HTML 5 situation is significantly better on mobile devices. According to AdMob, 81% of mobile internet usage in North America was either on iPhones or Android devices. Now, most of those Android devices are running Android 1.5 or 1.6, whose browsers (can we call it Mobile Chrome please?) still use Gears -- not HTML 5. Let's hope that most of those are upgraded (by their carriers) to Android 2.1 soon, and then we're in business. We've got all kinds of HTML 5 goodness available to 80%+ of the users out there.

The bad news is that the mobile market is fundamentally different from the desktop (though that may be changing.) Here mobile websites compete directly against native applications. Desktop users rarely face the question: should I use an app or a website to do XYZ? Normally XYZ dictates one or the other. So for a company, there may be a lot of cool stuff you can do with HTML 5 in a mobile browser, but there are even more cool things you can do in a native app. Not only are the native apps more powerful, they are generally easier to build because of superior tooling and cohesive architecture. Maybe JavaScript vs. Objective-C vs. Java is a matter of taste, but HTML 5 has nothing even resembling the application models presented by Cocoa and Android. Instead it has the DOM APIs in JavaScript. Now throw in the differences in tooling, and you can see why for many companies it is cheaper and easier to build an iPhone app and an Android app instead of building one HTML 5 app. That's not even considering the hypothesis that the "app store" model is more appealing to end users than the browser model.

Wednesday, February 03, 2010

The Flash Haters

The iPad announcement had lots of subtleties to it. One of them was that the iPad browser, presumably Safari, does not include a Flash plugin. This has led to a lot of people from Adobe getting upset. What has amazed me through all of this is that there is even more hate for Flash than ever before. As somebody who has done a fair amount of Flash development in my career, this has really struck me as odd. Why all of the hate for Flash?

In some ways, I can actually understand why Apple would hate Flash. The Flash plugin on OSX is nowhere near as good as the Flash plugin on Windows. Now Adobe will counter that they spend a disproportionate amount of money on Flash for OSX, i.e. they spend more money per OSX user than they do per Windows user. So what? There are a lot of other technologies out there that work equally well on both Windows and OSX. Just focusing on web technologies, Firefox and Chrome on the Mac are very comparable to their Windows versions. So is the Java plugin. Some other products, like Opera and Skype, tend to trail in terms of features on OSX, but not in terms of quality. If these other companies can do this, why can't Adobe? It's obviously either lack of ability or prioritization. My guess is the latter, and hence you must expect hate from Mac users. And from Apple. So it should not be so surprising that Flash is left off the iPad, iPhone, etc. If Adobe really wanted to get Flash on those devices, an obvious first step would be to make Flash on OSX as good as Flash on Windows. That is something that Adobe can do all by themselves. Instead, it seems like they would rather publicly bemoan their exclusion from the iPad. Now, is there a guarantee that if they improved Flash on OSX, Apple would care? No, of course not. However, there is no way they will get on those devices without this first step, IMO.

Ok, so Mac user hate for Flash is understandable. Along the same lines, I would guess that Linux user hate is also understandable. That is, if there were actually any Linux users out there. I digress. I also see a lot of hate from developers. What about that? That one is not so easy for me to understand. As a web developer, Flash has been opening doorways for me for a long time. Need to cache 30 KB of data on the client? For a long time this was impossible unless you used Flash (or maybe Java). Need to make a cross-domain call? That is still generally impossible from the browser, unless you use Flash. Need to show 2D/3D graphs and visualizations? Again, still generally impossible, except through Flash. Oh, and video... The only way to show a video that plays in all of the top five web browsers (IE8, IE7, Firefox 3.5, IE6, Firefox 3.0) is to use Flash.

But, but, but, what about HTML 5? Many people think it will kill/replace/maim Flash. Apparently this list includes a certain Steve Jobs. As I have said before, I hate web standards. I mean, I love them, too. But I hate them mostly. Standards are the opposite of innovation, and perhaps its enemy. However, they are the engineer's best friend. Flash has provided a lot of innovation that HTML 5 will simply copy and standardize. That is the best possible function of standards. However, the story is not so simple. Some things, like local storage, are already supported by IE8, Firefox 3.5, and Safari 4. This is like Flash's SharedObjects, though slightly less powerful (but good enough). Not everything is so rosy; canvas and video playback immediately come to mind. Canvas is a particularly hot topic. You can do some amazing things with it, but it requires payment in blood. I cannot imagine what it would be like to program Farmville in JavaScript using canvas. Maybe somebody will come up with tools for that. In fact, I would be surprised if Adobe did not do this. Of course, we still need to hope and pray that canvas is supported in IE9...
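To make the local storage comparison concrete, here is a minimal sketch of using the HTML 5 localStorage API the way you would have used a Flash SharedObject. The `makeCache` helper is hypothetical (not from any library); the storage object is injected so the same code works with anything implementing getItem/setItem.

```javascript
// Hedged sketch: caching structured data on the client via the
// HTML 5 Web Storage API, the standardized analog of Flash's
// SharedObject. In a browser you would pass window.localStorage.
function makeCache(storage){
    return {
        save: function(key, obj){
            // localStorage only stores strings, so serialize to JSON
            storage.setItem(key, JSON.stringify(obj));
        },
        load: function(key){
            var raw = storage.getItem(key);
            // getItem returns null for missing keys
            return raw === null ? null : JSON.parse(raw);
        }
    };
}
```

In the browser you would simply write `var cache = makeCache(window.localStorage);` and get persistence across page loads, with no plugin required.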

So back to the original point. Why do developers hate Flash? It certainly enables them to do more stuff in web applications. Even if that list of extra features is being shrunk by the HTML 5 monster, it's still there. So shouldn't they love it? Well, you would think so, but they don't. Here are some guesses as to why:

1. Expensive tools -- Most of the tools used by a lot of developers are free. The Flash authoring tools are far from it. That can be annoying. You have to throw down a lot of money just to learn something. That's not cool.
2. Bad tools -- At least from a developer perspective. The Flash authoring tools have historically been aimed at designers. That changed with Flex Builder/Flash Builder, but the damage was done. Also, those tools were aimed more at application development, which largely seemed like a waste of Flash.
3. Back to the Mac  -- A lot of developers use Macs, and we all know about Flash on OSX...
4. Accessibility -- Unfortunately this doesn't mean much to most developers. But it should. I must admit, that a year ago I probably wouldn't have listed this. However, as I have learned more, I have learned how Flash is such a huge problem for accessibility. This is actually one area where HTML 5 really shines, with the inclusion of the ARIA standards.
5. Design envy -- This kind of goes back to #2. Flash has had all the awesome capabilities for years, but they have largely gone untapped in web applications. Maybe by playing to designers all of those years, Flash has earned enemies. Imagine a developer being asked to scope some radical web feature. He gives a typically large scope for it. His manager turns around and gets a designer, a freakin' designer, to do it in half the time.
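The accessibility point (#4) deserves a concrete illustration. A hedged sketch of the idea behind ARIA: a custom JavaScript widget declares its role and state through attributes that screen readers understand, something a Flash movie cannot easily expose. `describeSlider` is a hypothetical helper, not a real API; it just returns the attribute map you would apply to the widget's root element.

```javascript
// Hedged sketch: the ARIA attributes a hand-rolled slider widget
// would set so assistive technology can announce its role and value.
function describeSlider(value, min, max){
    return {
        role: "slider",                    // what the widget is
        "aria-valuemin": String(min),      // range lower bound
        "aria-valuemax": String(max),      // range upper bound
        "aria-valuenow": String(value)     // current value
    };
}
```

In a browser you would loop over this map and call `setAttribute` on the element, updating `aria-valuenow` whenever the user drags the slider.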

What have I missed? Why do web developers hate Flash so much?

Friday, January 29, 2010

The iPad: A Brave New World

Has any piece of technology ever been more anticipated than the iPad? Has anything created as many extreme reactions? Has anything generated as much immediate ridicule?

Anyways, there are a lot of opinions out there, including some very good ones. I really like Alex Payne's take on the iPad. His post led me to Daniel Tenner's analysis, which is probably the closest thing I have seen to my own opinion. While we are at it, I think Timothy B. Lee's case against the iPad is definitely worth a read. Adobe's Mike Chambers has a very predictable reaction, but it is still worth a read as well. But who cares about those guys; you're reading this because you want my brilliant insights, right?

I think the iPad is Apple's boldest move since the 80's. Back then, they liberally borrowed a lot of existing ideas and added some of their own twists to reinvent personal computing. Computing was no longer a command-line driven process; there was no need for Unix or DOS. The iPad is equally ambitious.

Over the last two years, Apple has had a chance to experiment with a new kind of computing. The iPhone is accurately described as a very capable personal computer that fits in your pocket. With it, Apple changed the way users interact with their computers. There was no more mouse, yet the keyboard was rarely needed. There was very little physical media to interact with, i.e. no floppy disks, CDs, or DVDs -- only a very occasional physical link to a desktop/laptop. Apple gave software developers a much more limited environment than they had ever had to deal with. Compared to desktop computer software, iPhone software is amazingly limited in how it can interact with the system (the iPhone). The distribution model was also controlled in a way rarely seen before: Apple had to approve your applications before they could be sold to users.

Yet even in this most restricted of environments, software developers have flourished. Why? As Joe Hewitt wisely points out, many of these restrictions really are for the best. I can install any app from the App Store, and I don't have to worry about it harming my iPhone. This is in stark contrast to desktop software, particularly the ubiquitous world of Windows. 

But wait, it gets worse -- at least if you're a software developer. The Draconian distribution machine that is the App Store and its approval process is a key part of this peace of mind given to iPhone users. It also makes their lives much easier. Just go to the App Store, and you can find whatever kind of software you are looking for. Or just browse the games, or maybe the lifestyle or social media apps, and you can find something interesting. Reviews are built right into the system. It could get better, and Apple is working on this with their Genius system, but it beats the heck out of anything else out there.

The iPad takes this new model of personal computing and packages it into a personal computer. The iPad does not invent a new category. Just like the iPhone, it is going after an existing huge category, only this time it is the desktop/laptop category. Yes, this happens to be a category that Apple already plays in and is having a lot of success in. That is part of why this is so bold. The iPad is not going after the statistically insignificant category known as netbooks. Apple does not aim so low. Instead it is going after every person who owns a laptop.

Perhaps you think I am overstating, that Apple is not as bold as I claim. There were four new apps that Apple unveiled along with the iPad. One of those is iBooks, and I won't bother too much with that. The other three were Pages, Numbers, and Keynote. This is Apple's office suite. These are competitors of Microsoft's Word, Excel, and PowerPoint. Now, you can read many Office documents on an iPhone, but it is largely a read-only experience. These are not read-only programs; these are content creation programs. If you think about the history of Apple, could there be anything more symbolic than shipping a word processing application with their new computing platform?

I bought my wife a laptop a couple of years ago. It was essential for her to have Office on it. Word processors and spreadsheets are the meat and potatoes of desktop software. Of course she uses photo management and media management software too. All of these things are available on the iPad. The truth is she could easily replace her laptop with an iPad. It would be easier for her to carry around, have better battery life, and the 3G would work even if she was visiting her mother (who does not have wi-fi.)

Think about all of the people whose job involves a computer. For most people, could they not do their job on an iPad? Wouldn't it even be easier in many cases? Of course it would be more convenient, again because of the size, battery life, and always being connected.

Now there will definitely be some people who cannot use an iPad for their job. I am one of those people. Maybe you could write code on it ok, but compile/debug? Don't think so. Designers who do intense things with Illustrator, Photoshop, etc. will need more than an iPad, too. Ditto for scientists, engineers, etc. There are more groups as well, but it's still probably a minority. For all of those people who have to do a lot of typing? Well there's a dock for that. You get the idea.

Apple is offering a new model for personal computing. They are attempting to disrupt an entire industry, once again. The iPad is not a big iPod Touch or a big iPhone. It is not Apple's attempt to go after Amazon's Kindle. It is an attempt to dominate in a way that even Microsoft never got close to. Apple is going to every person who owns a computer and saying "look there's another, better, easier way." I think it's an all-or-nothing bet. The iPad is either a world changer, or a total failure. Which one will it be? 

Wednesday, January 20, 2010

JVMOne

This morning, my co-worker Jason Swartz had the great idea of a conference focusing on JVM languages. This seemed like a particularly good idea, given the uncertainty surrounding JavaOne. Personally, I think the JavaOne powers-that-be have done a good job showcasing other languages, especially the last two years. Anyways, I joked that we could probably host it at our north campus, since it has proper facilities and regularly hosts medium-sized conferences, and that we just needed the support of folks from the Groovy, JRuby, Scala, and Clojure communities. A lot of folks seemed to like the idea, and had some great feedback/questions. So let me phrase some of these questions in the context of "a JavaOne-ish replacement, focusing on alternative languages on the JVM, but ultimately driven by the community". Ok, here are some of the questions.

1.) What about Java?
2.) What about languages other than Groovy, JRuby, Scala, and Clojure?
3.) What about first class technologies with their roots in Java, like Hadoop, Cassandra, etc.?

Certainly I have opinions on some of these, but obviously any kind of effort like this requires huge participation from the developer community. So what do you think? What other questions need to be asked? If there is enough interest, I will definitely try to organize it. So please leave comments below!

Tuesday, January 19, 2010

Stop the IE6 Hate

For a long time now, it has been common for developers to go on and on about how they hate IE6 and how users need to be discouraged from using it. It's gotten old. It's trite. It's time to move on.

IE6 has become a non-factor for most sites. It trails far behind IE7 and IE8, not to mention Firefox. And of course, its numbers are always shrinking. I know that at a certain popular website that I happen to work for, we no longer test for IE6. It's been that way for us for 6+ months. It's over.

So now that it's over, there are some things that I want to see. First, I want to see all of those JavaScript libraries that everyone uses, drop their IE6 code branches. Shouldn't jQuery, Prototype, ExtJS, Dojo, etc. all be able to shave off a huge amount of their code? Shouldn't these libraries now progress even faster since they don't have to worry about IE6? Similarly, GWT and similar libraries should benefit as well. Websites should be able to remove some CSS hacks as well, though not all of them... 

Next, I want to see the same passionate disdain directed towards IE7. Now, IE7 is not the bug-ridden beast that IE6 was. Instead, IE7 is basically a properly patched IE6. As a developer, you would definitely prefer to deal with IE7. It works differently from all other browsers, but it is relatively consistent in the way it works. You could not say this about IE6. However, IE8 is much closer to the rest of the world. If IE7 were to drop off into obscurity like IE6 has, then imagine the benefits. There would be far less need to write things in a different way for IE. There would be far less need for conditional CSS.
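As a reminder of what "writing things in a different way for IE" looks like, here is a hedged sketch of the classic cross-browser event shim of that era: older IE versions only provide attachEvent, while the other browsers use the standard addEventListener, so libraries feature-detect and branch.

```javascript
// Hedged sketch of the era's ubiquitous addEvent shim. Feature
// detection (not user-agent sniffing) picks the right event API.
function addEvent(el, type, handler){
    if (el.addEventListener){
        // standards path: Firefox, Safari, Chrome, Opera
        el.addEventListener(type, handler, false);
    } else if (el.attachEvent){
        // legacy IE path: note the required "on" prefix
        el.attachEvent("on" + type, handler);
    }
    return el;
}
```

Every major JavaScript library carries some variant of this branch; each legacy IE version that fades away is one more branch that can eventually be deleted.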

So stop hating on IE6. Move on. Start taking advantage of a world where IE6 is irrelevant.  And if you still want to hate, start hating on IE7. With a passion.

Monday, January 18, 2010

Strikethrough Android

There are a lot of people working on mobile applications these days. Some of them are experienced mobile developers. Maybe they were doing J2ME or BREW apps in the past, and now they are probably happy to be doing iPhone or Android development. However, I would wager that a majority of current iPhone and Android developers were not doing mobile a few years ago. Maybe they were doing desktop development, or maybe they were doing web development. For these folks, it can be very surprising how tricky it is to do some (seemingly) simple things. Case in point: strikethrough text.

If you ever need to do this in Android, here is how you do it, or at least a relatively simple way I found to do it. If you just search the Android documentation, you will find this reference page. As you can see for static text, it really is quite simple. It is very similar to doing it in HTML.
<resources>
    <string name="strike_one"><strike>Strike one!</strike></string>
</resources>
Nice! Android rocks! But what if your text is dynamic? Well, if you keep reading the documentation referenced above, you get into the world of Spannable. This sounds good at first, because it sounds like it's again inspired by HTML. If you look at the sample, though, it is not so simple. Still, it's not too hard, maybe just a little awkward. But wait, there's more. The class most commonly used for text, TextView, is not a Spannable. Instead, you must use an EditText, which is a subclass of TextView. This is a class that you would not normally use for normal text on the screen; instead you would use it for a text input field.
It turns out there is an easy way to strikethrough a TextView after all. Well, easy if you don't mind using some bit twiddling:
TextView someLabel = (TextView) findViewById(R.id.some_label);
someLabel.setText(someDynamicString);
someLabel.setPaintFlags(someLabel.getPaintFlags() | Paint.STRIKE_THRU_TEXT_FLAG);
What is going on here? For painting text, there are several bit flags for doing things like bold, italics, and, yes, strikethrough. To enable strikethrough, you need to flip the bit that corresponds to this flag. The easiest way to do this is to bitwise-or the current flags with a constant that has only the strikethrough bit set. Any other flags that are already set are unaffected. If the strikethrough flag happens to be already set, this is a no-op; if it is not set, it flips on, and you get a nice strikethrough effect.
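The bit-flag pattern itself is language-agnostic, so here is a hedged sketch of it in JavaScript (the flag values are illustrative, not Android's actual constants): each style is one bit, OR turns a flag on without touching the others, and AND tests whether it is set.

```javascript
// Hedged sketch of the generic bit-flag technique. Each flag
// occupies its own bit, so flags can be combined in one integer.
var BOLD_FLAG = 1;        // binary 001
var ITALIC_FLAG = 2;      // binary 010
var STRIKE_THRU_FLAG = 4; // binary 100

function enableStrikeThru(flags){
    // bitwise-or: sets the strikethrough bit, leaves others alone;
    // a no-op if the bit was already set
    return flags | STRIKE_THRU_FLAG;
}

function hasStrikeThru(flags){
    // bitwise-and isolates the strikethrough bit for testing
    return (flags & STRIKE_THRU_FLAG) !== 0;
}
```

This is exactly what the Android snippet above does with `getPaintFlags() | Paint.STRIKE_THRU_TEXT_FLAG`, just spelled out step by step.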

Saturday, December 19, 2009

Music of the Decade

In alphabetical order (more or less) ...

Favorite Songs

Rehab -- Amy Winehouse
The Rising -- Bruce Springsteen
Yellow -- Coldplay
The Scientist -- Coldplay
Clocks -- Coldplay
Stan -- Eminem
Bring Me To Life -- Evanescence
Where'd You Go -- Fort Minor
Crazy -- Gnarls Barkley
Feel Good Inc. -- Gorillaz
Izzo (H.O.V.A.) -- Jay-Z
D.O.A. (Death of Auto-Tune) -- Jay-Z
Gold Digger -- Kanye West
Gone -- Kanye West
On Call -- Kings of Leon
Dance With My Father -- Luther Vandross
Paper Planes -- MIA
Float On -- Modest Mouse
Hey Ya! -- Outkast
Young Folks -- Peter Bjorn and John
Dani California -- Red Hot Chili Peppers
Killing the Blues -- Robert Plant & Alison Krauss
So Says I -- The Shins
Lazy Eye -- Silversun Pickups
City of Blinding Lights -- U2
Beverly Hills -- Weezer
The Joker & The Thief -- Wolfmother
Seven Nation Army -- The White Stripes
In Da Club -- 50 Cent

Favorite Albums

Neon Bible -- Arcade Fire
The Rising -- Bruce Springsteen
Parachutes -- Coldplay
A Rush of Blood to the Head -- Coldplay
Viva La Vida -- Coldplay
The Crane Wife -- The Decemberists
American Idiot -- Green Day
The Black Album -- Jay-Z
The Blueprint -- Jay-Z
Late Registration -- Kanye West
Boxer -- The National
The Slip -- Nine Inch Nails
Year Zero -- Nine Inch Nails
Speakerboxxx/The Love Below -- Outkast
Backspacer -- Pearl Jam
Wolfgang Amadeus Phoenix -- Phoenix
In Rainbows -- Radiohead
Kid A -- Radiohead
Stadium Arcadium -- Red Hot Chili Peppers
Carnavas -- Silversun Pickups
Gimme Fiction -- Spoon
Is This It -- The Strokes
Toxicity -- System of a Down
All That You Can't Leave Behind -- U2
No Line on the Horizon -- U2
Vampire Weekend -- Vampire Weekend
Elephant -- The White Stripes

Tuesday, November 24, 2009

Web Developers Are Stupid and Arrogant

The past couple of days have seen an amusing rant by PPK, which then turned into a retraction and call to action. The original rant included a condemnation of iPhone developers as being stupid and arrogant. Others have adequately refuted PPK, so I won't bother with any of that. His post made me realize that it is in fact web developers who are most commonly guilty of stupidity and arrogance. Here's what I mean.

It's easy to look at the world today and say that web applications have won. This is web developer arrogance. Stupidity is to think that web applications have won because web applications are superior to desktop applications. Smarter, but probably still arrogant developers would point to web applications as disruptive technologies. This involves admitting that web applications are inferior, but good enough, and present enough other "cheaper" advantages to compensate for their inferiority.

To understand why the "web apps have won" claim is dubious, consider some history. There are definitely a lot of awesome web applications out there. Many of them were created back in the mid/late 90's. The "features" of these applications were the key to their success, not the user interface. These days, most of these web applications offer APIs/web services/RESTful interfaces/whatever you want to call them. In many cases it is possible to build desktop applications that tap into the same features as these web applications. However, this was certainly not the case 10-15 years ago.

So if APIs make it possible to build desktop apps that offer the same features as popular web applications, why haven't people switched? The first most obvious answer is inertia. If you are used to accessing Google or Amazon on the web, that's probably the way that you will always use it. Something else to consider is that for many web applications, it does not make sense for them to offer all of their features through APIs because it hurts their core business. This is most obviously true for advertising based companies like Google, Yahoo, and Facebook. Their web applications not only provide very useful features to end users, but they serve ads that make money for the companies. If all of their users switched to using desktop applications that only offered the features with no ads, then the companies would lose their revenue streams. Their business is connected at the hip with their UI, so it is in their best interest to make sure people use their UI -- which is a web application.

However, there are other very successful web applications whose main revenue does not come from ads. Their business is distinct from their UI. E-commerce companies like Amazon and my employer, eBay, are obvious examples. For example, eBay offers trading APIs that provide almost all of the trading features of eBay. This is particularly true for selling on eBay. This makes sense, as eBay does not need a seller to use the eBay UI to sell something; the UI is not what makes money for eBay. As a result, around 50% of all items for sale on eBay come through 3rd party applications built on top of the eBay trading APIs. The vast majority of these (especially the popular ones) are desktop applications. Give people a choice, and a lot of them choose desktop applications.

For another interesting example, just look at Twitter. This is a company that came into existence after web APIs had become the norm. So Twitter has provided a comprehensive set of APIs since early in its existence. Further, they have not pursued an advertising model that would marry their web-based UI to any revenue streams. So they have kept their APIs in sync with their features. For example, they recently added list and retweet features to their site, and added them to the APIs at the same time. As a result, there are a huge number of desktop applications for accessing Twitter. Indeed, Twitter says that 80% of their traffic is from APIs -- either desktop or mobile applications. For most Twitter users, there have always been feature-equivalent desktop alternatives to Twitter's web-based UI, so many users chose desktop applications over the web.

Finally, let's look at one more example: existing desktop apps. There has been an incredible amount of money spent on creating web applications that provide similar functionality to traditional desktop applications: email, word processing, etc. Heck, Google has spent a lot in this space just by itself. These are useful applications, but it is rare for people to choose them over their desktop equivalents. In most cases these apps try to go the disruptive route, i.e. don't try to be as good, just good enough and cheaper. They have had little success so far. Of course, inertia is a valid argument here, too. The one case where there has been success is GMail. In my opinion its success is not because people like its web UI over a desktop UI, or even that the web UI is "good enough" and cheaper. No, its success is because it has offered innovative features over other web and desktop based alternatives: fast search, stars/labels, threaded conversations, etc. Even given all of that, many people still choose to use desktop clients to access their GMail (I'm definitely not one of them). Anyways, once again it's the features, not the UI.

I am not going to sit here and claim that desktop wins over web all the time. I'm not arrogant or stupid enough to make such claims. However, be wary of claims that web apps win over desktop apps. One could argue that with the preponderance of APIs (especially spurred on by mobile apps) and the popping of the advertising-based web 2.0 bubble, the future will hold even more opportunities for desktop alternatives to web applications. Maybe web applications have jumped the shark. So don't put up with web developers who insist that web applications have won (especially if they try to extrapolate this flawed argument to the mobile world). They can go on and on about technology, standards, interoperability, etc. Just remind them that it's the users who matter, and when given a choice, the users do not always choose web applications. Time to polish off your MFC and Cocoa skills!

Saturday, November 21, 2009

Passing on the Duby

Last week I had a fun conversation with Charles Nutter. It was about "java.next" and in particular, Charlie's requirements for a true replacement for Java. He stated his requirements slightly before that conversation, so let me recap:

1.) Pluggable compiler
2.) No runtime library
3.) Statically typed, but with inference
4.) Clean syntax
5.) Closures

I love this list. It is a short list, but just imagine if Java 7 was even close to this. Anyways, while I like the list as a whole, I strongly objected to #2 -- No runtime library. Now, I can certainly understand Charlie's objection to a runtime library. It is a lot of baggage to drag around. All of the JVM langs have this issue. JRuby for example, has an 8 MB jar. Groovy has a 4 MB jar. Clojure has two jars totaling 4.5 MB. Scala is also around 4 MB. These become a real issue if you want to use any of these languages for something like Android development. I've written about this issue as well.

However, there is a major problem with this requirement. The building blocks of any programming language include primitives and built-in types. Those built-in types are part of the runtime library of the language. So if your JVM-based language cannot have a runtime library, then you will have to make do with primitives and Java's built-in types. Why is this so bad? Java's types (mostly) make sense in the context of Java, its design principles, and its syntax. I don't think they would make sense in the hypothetical java.next language described above. The easiest example of this is collection classes. Don't you want a list that can make use of closures (requirement #5 above) for things like map, reduce, filter, etc.? Similarly, if you have static typing, you probably have generics, and wouldn't you like some of your collections to be covariant? You can go even further and start talking about immutability and persistent data structures, and all of the benefits these things bring to concurrent programming. This is just collections (though obviously collections are quite fundamental to any language), but similar arguments apply to things like IO, threading, XML processing, even graphics (I'd like my buttons to work with closures for event handling, thank you very much).
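To see what closure-aware collections buy you, here is a hedged sketch in JavaScript (chosen for brevity; a java.next list type would want the same operations built in). The function name and data are illustrative only.

```javascript
// Hedged sketch: map/filter/reduce with closures replace
// hand-written loops and mutable accumulator variables.
function sumOfEvenSquares(numbers){
    return numbers
        .filter(function(n){ return n % 2 === 0; }) // keep the evens
        .map(function(n){ return n * n; })          // square each one
        .reduce(function(acc, n){ return acc + n; }, 0); // add them up
}
```

Without closures on the built-in list type, each of those three lines becomes its own for-loop with temporary state, and higher-order libraries cannot assume the operations exist.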

One argument against this is that you can just include the runtime library of your choice. Pick your own list, hash map, thread pool, and file handler implementations. This is what I'd like to call the C++ solution -- only worse. At least in C++ you can generally count on the STL being available. What is so bad about this is that it really limits the higher-order libraries that can be built. Java, Ruby, and Python all have a plethora of higher-order libraries that have been built and are widely used. These libraries make extensive use of the built-in types of those languages. Imagine building ORMs like Hibernate or ActiveRecord if you did not have built-in types (especially collection classes) that were consistent with the language. You could not do it. If all you could rely on was the Java built-in types, then at best your libraries would have a very Java-ish feel to them, and doesn't that defeat the purpose?

Charlie gave an alternative to this -- leverage the compiler plugins. Now, it is certainly true that with a pluggable compiler, you could do things like add a map method to the java.util.List interface, and to all of the JDK classes that implement this interface. It can be done. However, if you are going to build higher-order libraries on top of this language, then you need to be able to count on these enhancements being present. In other words, the compiler plugin needs to be a required part of the compiler. Fine. Now, what was it that we were trying to avoid? Extra runtime baggage, right? Well, if you have a compiler that is going to enhance a large number of the classes in the JDK, hello baggage. Maybe it won't be as much baggage as a 4 MB runtime library, but it might not be a lot less, either. Also, it raises the potential problem of interoperability with existing Java libraries. If I import a Hibernate jar that gives me a java.util.List, that list is not going to be enhanced by my compiler -- because it wasn't compiled with my javac.next, it was compiled with javac.old. What's going to happen here?

Now there is an obvious happy way to deal with the runtime library question: rt.jar. If java.next were Java, and the super-revamped collections, IO, threading, etc. classes were part of the JDK, then there would be no need for a separate runtime library, since it would be included with the JDK. However, given the incredibly uncertain state of Java 7, not to mention the long-term prospects of Java, does this seem remotely possible to anyone? I don't think so. I think the heir apparent to Java cannot be Java, and because of that, it's gotta have a runtime library of its own.

Monday, November 16, 2009

Cross Browser Geolocation

You might have heard that Joe Hewitt has given Apple the finger, and is moving on (or more accurately, moving back) to mobile web development. You can do a lot on mobile web browsers -- more than you can do on desktop browsers. This can seem counterintuitive at first. However, flavors of WebKit have a huge share of mobile web use, and they support a lot of features that you can't count on in the IE-dominated world of desktop browsers. One of those coveted features is geolocation. However, geolocation support is far from standardized, even among the high-end WebKit based browsers.

Geolocation on Mobile Safari is nice, or more accurately it is "standardized." It follows the W3C standard. All you have to do is access the navigator.geolocation object. Here is a super simple example:
function load(){
    var gps = navigator.geolocation;
    if (gps){
        gps.getCurrentPosition(successHandler);
    }
}

function successHandler(position){
  var lat = position.coords.latitude;
  var long = position.coords.longitude;
  doSomethingUseful(lat, long);
}
Pretty easy, huh? This will work on Mobile Safari running on iPhone OS 3+. As mentioned, it is standard. It will also work on newer versions of Firefox, which presumably includes Mobile Firefox, a.k.a. Fennec. However, that is the limit of the portability of this code. It will not work in Android's browser.
Android's flavor of WebKit does not support the standard. However, it does support geolocation via Google Gears. That's right: instead of supporting the open standard, it implements the same feature using a proprietary technology. Shocking, I know. So if you are going to run JS that accesses geolocation on both the iPhone and one of the zillion Android phones out there, you will need to include the gears_init.js bootstrap file. Then you can re-write the above like this:
function load(){
    var gps = navigator.geolocation;
    if (gps){
        gps.getCurrentPosition(successHandler);
    } else {
        if (window.google && google.gears){
            gps = google.gears.factory.create('beta.geolocation');
            gps.getCurrentPosition(function (position){
               successHandler({coords : position});
            }) ;
        }
    }
}

function successHandler(position){
  var lat = position.coords.latitude;
  var long = position.coords.longitude;
  doSomethingUseful(lat, long);
}
So this is pretty close to the standard, but not quite. The Gears variant also supports the getCurrentPosition API. However, the object it passes to the success callback is different: it is a flatter structure, with the coordinates as top-level properties rather than nested under coords. So in the example above, we wrap it inside another object so that we can reuse the same handler.
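To make the shape difference concrete, here is a minimal sketch of that wrapping as a standalone helper. Note that normalizePosition is a name I made up for illustration; it is not part of either API:

```javascript
// Hypothetical helper: make a Gears-style position (flat latitude/longitude
// fields) look like a W3C Geolocation position (fields nested under coords).
function normalizePosition(position) {
    // W3C positions already nest coordinates under a coords property.
    if (position.coords) {
        return position;
    }
    // Gears positions carry latitude/longitude at the top level, so wrap them.
    return { coords: position };
}

// A W3C-style position passes through untouched...
var w3c = normalizePosition({ coords: { latitude: 37.4, longitude: -122.0 } });
// ...while a Gears-style position gets wrapped into the same shape.
var gears = normalizePosition({ latitude: 37.4, longitude: -122.0 });
```

With a helper like this, a single successHandler can read position.coords.latitude no matter which browser produced the position.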
I haven't tried the webOS browser or the Nokia WebKit variant yet. Here's hoping they are close to the standard used by Safari.

Tuesday, November 10, 2009

Clojure Primes Shootout

Back in May, I was working on my JavaOne talk on JVM language performance. The first comparison was a prime number sieve. If you look back at the slides, you might notice that I did not include a Clojure implementation. There were a couple of reasons for this. First, I was dumb. Not only dumb, but especially dumb when it came to Clojure. I had recently learned it -- mostly on an airplane to Israel earlier that month. Second, realizing that I was dumb, I was at least smart enough to reach out to the Clojure community for help. I got some great help on the other algorithms, but the prime sieve was unsatisfactory. The implementations I got were so different from the algorithms used for the other languages that it did not seem like a fair comparison. I have been meaning to go back and come up with a satisfactory implementation. And before you ask, I know about this blazingly fast one:
(defn lazy-primes []
  (letfn [(enqueue [sieve n step]
            (let [m (+ n step)]
              (if (sieve m)
                (recur sieve m step)
                (assoc sieve m step))))
          (next-sieve [sieve candidate]
            (if-let [step (sieve candidate)]
              (-> sieve
                (dissoc candidate)
                (enqueue candidate step))
              (enqueue sieve candidate (+ candidate candidate))))
          (next-primes [sieve candidate]
            (if (sieve candidate)
              (recur (next-sieve sieve candidate) (+ candidate 2))
              (cons candidate
                (lazy-seq (next-primes (next-sieve sieve candidate)
                            (+ candidate 2))))))]
    (cons 2 (lazy-seq (next-primes {} 3)))))
and this one that is very close to what I wanted:
(defn sieve [n]
  (let [n (int n)]
    "Returns a list of all primes from 2 to n"
    (let [root (int (Math/round (Math/floor (Math/sqrt n))))]
      (loop [i (int 3)
             a (int-array n)
             result (list 2)]
        (if (>= i n)
          (reverse result)
          (recur (+ i (int 2))
                 (if (< i root)
                   (loop [arr a
                          inc (+ i i)
                          j (* i i)]
                     (if (>= j n)
                       arr
                       (recur (do (aset arr j (int 1)) arr)
                              inc
                              (+ j inc))))
                   a)
                 (if (zero? (aget a i))
                   (conj result i)
                   result)))))))
and of course the one from contrib:
(defvar primes
  (concat 
   [2 3 5 7]
   (lazy-seq
    (let [primes-from
   (fn primes-from [n [f & r]]
     (if (some #(zero? (rem n %))
        (take-while #(<= (* % %) n) primes))
       (recur (+ n f) r)
       (lazy-seq (cons n (primes-from (+ n f) r)))))
   wheel (cycle [2 4 2 4 6 2 6 4 2 4 6 6 2 6  4  2
   6 4 6 8 4 2 4 2 4 8 6 4 6 2  4  6
   2 6 6 4 2 4 6 2 6 4 2 4 2 10 2 10])]
      (primes-from 11 wheel))))
  "Lazy sequence of all the prime numbers.")
Instead I came up with my own bad one...
(defn primes [max-num]
  (loop [numbers (range 2 max-num)
         primes []
         p 2]
    (if (= (count numbers) 1)
      (conj primes (first numbers))
      (recur (filter #(not= (mod % p) 0) numbers)
             (conj primes p)
             (first (rest numbers))))))
Why is this so bad? Well it is horribly inefficient. It creates a list of integers up to the max-num that is the only input to the function. It then loops, each time removing all multiples of each prime -- just like a sieve is supposed to. However, most sieves take a prime p and remove 2p, 3p, 4p, etc. (often optimized to 3p, 5p, 7p, since all the even numbers are removed at the beginning.) This does the same thing, but it uses a filter to do it. So it goes through each member of the list to check if it is divisible by p. So it does way too many calculations. Next, it is constantly generating a new list of numbers to pass back into the loop. Maybe Clojure does some cleverness to make this not as memory inefficient as it sounds, but I don't think so.
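For contrast, here is a minimal sketch of the classic index-based sieve described above -- in plain JavaScript rather than Clojure, and not one of the benchmarked implementations. Each prime strikes out its multiples directly by index, with no filtering pass over the whole list:

```javascript
// Classic Sieve of Eratosthenes: for each prime p, strike out its multiples
// (starting at p*p, stepping by p) directly, instead of scanning the entire
// remaining list with a divisibility filter.
function sievePrimes(maxNum) {
    var composite = new Array(maxNum).fill(false);
    var primes = [];
    for (var p = 2; p < maxNum; p++) {
        if (composite[p]) continue;   // already struck out, so not prime
        primes.push(p);
        for (var m = p * p; m < maxNum; m += p) {
            composite[m] = true;      // each strike is a single O(1) array write
        }
    }
    return primes;
}
```

The key difference is that striking multiples touches each composite only a handful of times, while the filter version re-examines every surviving number on every pass.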
So what is it that I did not like about the many other examples out there or suggested by the community? Most of them used a lazy sequence. That's great and very efficient. However, it's not a concept that is easy to implement in other, less functional languages (Go Clojure!) On the other hand, the Rich Hickey example is much closer to what I wanted to do. However, it uses Java arrays. There is no duplication of the list and the non-primes are eliminated efficiently, but using Java arrays just seems non-idiomatic, to say the least. The other examples I did for JavaOne all used variable-length lists instead of arrays.
Anyways, I am satisfied to at least have a seemingly equivalent, idiomatic Clojure implementation. I looked at optimizing it with type hints. I actually had to use a lot of them (4) to get a 10% decrease in execution time. Anyways, now that this is out here, others will improve it -- that shouldn't be too hard to do!

Tuesday, November 03, 2009

Persistent Data Structures

A few weeks ago I blogged about concurrency patterns and their relative tradeoffs. There were some really poor comments from Clojure fans -- the kind of abusive language and trolling that reminded me of Slashdot in its heyday. In fact, for the first time in a very long time, I had to actually delete comments... A lot of people were confused by the fact that I used Clojure as a representative for software transactional memory, and thought that I was talking about concurrency in Clojure. Anyways, I was flattered that Stuart Halloway commented on my blog. He said I needed to watch Rich Hickey's talk on persistent data structures. So I did. I wanted to embed it on my blog, but it didn't look like InfoQ supports that. So I stole his slides instead.
A little after all of this, I learned that persistent data structures were coming to Scala. So a lot of folks seem to think that these are pretty important. So why is that exactly? It was time to do some homework...

You can read some basic info on Wikipedia. You can go to the original source, Phil Bagwell's paper. The latter is very important. What's amazing is that these are relatively new ideas, i.e. the science behind this is less than ten years old. Anyways, I would say that the important thing about persistent data structures is that they are data structures designed for versioning. You never delete or for that matter change anything in place, you simply create new versions. Thus it is always possible to "rollback" to a previous version. Now you can see how this is a key ingredient for software transactional memory.

Going back to the original ideas... Bagwell's data structures -- his VLists and, more directly, his array mapped tries -- are the foundation for the persistent vectors and persistent hashtables in Clojure. These form the foundation of Clojure's Multi-Version Concurrency Control, which in turn is the basis of its STM.

These are not just persistent data structures, they are a very efficient implementation of the persistent data structure concept. Each incremental version of a list shares common data with the previous version. Of course you still pay a price for getting the built-in versioning, but this is minimized to some degree.
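As a toy illustration of that sharing (nothing like Clojure's actual trie machinery -- just the simplest possible persistent structure), consider a cons-style list in JavaScript, where each "new version" is one new node pointing at the old list:

```javascript
// Toy persistent list: "adding" an element builds a new head node that points
// at the old list. Every prior version remains intact, and the tail is
// physically shared between versions rather than copied.
function cons(head, tail) {
    return { head: head, tail: tail };
}

var v1 = cons(1, null);   // version 1: (1)
var v2 = cons(2, v1);     // version 2: (2 1) -- v1 is untouched
var v3 = cons(3, v2);     // version 3: (3 2 1)

// "Rolling back" is just holding on to an old reference: v1 and v2 are still
// valid, complete lists, and v3.tail is literally the same object as v2.
```

The price of a new version here is a single allocation, which is the efficiency point above: incremental versions share structure instead of duplicating it.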

Scala fans should also be interested in persistent data structures. They are coming to Scala, which is supposed to be available in beta by the end of this month. The aforementioned Bagwell did his work at EPFL -- the same EPFL that is the epicenter of Scala. Indeed, Bagwell himself has contributed to improving the implementations of VLists in Scala. With VLists in tow, Scala STM is just around the corner.

Tuesday, October 27, 2009

Minimalism? Programming? Huh?

Last week, I was at Strange Loop, and it was a great conference. It inspired one blog post that managed to piss some Clojure people off. So here is another. This time I'm going after the closing keynote, by Alex Payne. Here are his slides, including detailed notes.
So where to begin... Even though I am going to harsh on this, I will give it very high marks. Why? Because it was thought provoking. If something makes you think long and hard about how and why you do your job, that is a good thing. I worry about people who do not question their assumptions on a regular basis.
Ok, so now on to what you've been waiting for -- the harshness. I hate software engineering analogies. Well, not all, but most. I hate when people try to romanticize about what they do by making a far-fetched analogy to something more glamorous. If you're a programmer, you're not a rock star. You're not an artist. You're not a musician. You're not even a scientist. Sorry. Despite having the word "architect" in my official job title, I would add that you're not an architect either.


You're a programmer. At best, you're an engineer, and that is really stretching it at times. So obviously if I find it ridiculous to compare programming to things like creating art or making music, it seems even more ridiculous to compare the output of programming to art or music.

That being said, I enjoy programming. It is not just a job. I also take pride in my work. I like to show off my code as much as the next guy.

From this perspective, I can definitely appreciate Alex's thoughts on minimalism. Or perhaps more generally, associating some subjective qualities with programming. If an analogy to art or construction or whatever helps one express those subjective qualities, then fine. Just don't forget who you are, and maybe even try to find some pride in just that.

Now I should really end this post here, on kind of a feel-good note. But I won't. Instead, I want to touch on some of the *ahem* deeper thoughts that Alex's talk provoked. I was reminded of a paper I read several years ago that tried to explain two obviously overly general (and probably not politically correct) things: why Americans are good at software but not cars, while the Japanese are good at cars, but not software. I tried to find a link to this paper, and was quite sure that I had saved it in Delicious. However, I could not find it, and I'm convinced that Delicious lost it. Maybe it fell out of cache...

Anyways, the crux of the argument is that Americans are good at getting something done quickly, even though the quality may be poor. So we are good innovators, but terrible craftsmen. This is a good fit for software, but obviously not cars.

If you accept this idea, and it has its merits, then in a world of low-quality, rapidly written code, is there any room for subjective qualities? Why should you take any pride in your work at all if its value is directly proportional to how quickly you can produce it, throw it away, and move on to something else? To some degree, doesn't the software world's affection for agile development codify these almost nihilistic ideals? Perhaps the propensity to compare programming to art is simply an act of desperation, caused by the realization that you'd be better off giving up on good design and investing in duct tape instead.

Anyways, this is a close-to-home topic for me. My day job often involves telling developers the "right" way to do something, and the more unsavory flip-side of this, telling developers when they've done things the "wrong" way. You don't make a lot of friends, but I'm an INTJ so it works for me. Every once in a while, I work with a programmer who writes beautiful code. Yeah, I said it. Any such programmer knows that they write good code, even if they don't like to talk about it. When you write good code, you constantly see bad code around you, and you can't help but get a big ego. I always compliment the good code, and it always surprises the arrogant pricks. They think I'm blown away and have never seen such good code. In reality, I just feel sorry for them.

Sunday, October 25, 2009

Social Technology Fail

This is the kind of posting that needs a disclaimer. I'm going to talk a little about recent changes at Facebook and Twitter, but strictly from a technology perspective. It goes without saying that I have no idea what I'm talking about. I am fortunate enough to be acquaintances with several engineers at both companies, and I have a college classmate (and fellow Pageboy) who seems to be a pretty important dude at Facebook, but I have no extra knowledge of these companies' technology than anybody else. So just to repeat: I have no idea what I'm talking about. You should really stop reading.

Since you are still reading, then I will assume that you too enjoy being an armchair architect. Since my day job is as an architect at eBay, I tell myself that exercises such as this make me better at my job. Heh heh. Let's start with Facebook.

For several months now, I've noticed an interesting phenomenon at Facebook. My news feed would often have big gaps in it. I have about 200 friends on Facebook, and I'd say that around 70% of these friends are active, and probably 20-25% are very active Facebook users. So at any time I could look at my feed, and there would be dozens of posts per hour. However, if I scrolled back around 3-4 hours, I would usually find a gap of say 4-6 hours of no posts. The first time I ever noticed this, it was in the morning. So I thought that this gap must have been normal -- people were asleep. Indeed, most of my friends are in the United States. However, I started noticing this more and more often, and not always in the morning. It could be the middle of the day or late at night, and I would still see the same thing: big gaps. So what was going on?

Well here's where the "I don't know what I'm talking about" becomes important. Facebook has been very happy to talk about their architecture, so that has given me speculation ammo. It is well known that Facebook has probably the biggest memcached installation in the world, with many terabytes of RAM dedicated to caching. Facebook has written about how they have even used memcached as a way to synchronize databases. It sure sounds a lot like memcached has evolved into something of a write-through cache. When you post something to Facebook, the web application that you interact with only sends your post to the cache.

Now obviously reads are coming from cache, that's usually the primary use case for memcached. Now I don't know if the web app can read from either memcached and a data store (either a MySQL DB, or maybe Cassandra?) or if Facebook has gone for transparency here too, and augmented memcached to have read-through cache semantics as well. Here's where I am going to speculate wildly. If you sent all your writes to a cache, would you ever try to read from anything other than the cache? I mean, it would be nice to only be aware of the cache -- both from a code complexity perspective and from a performance perspective as well. It sure seems like this is the route that Facebook has taken. The problem is that not all of your data can fit in cache, even when your cache is multiple terabytes in size. Even if your cache was highly normalized data (which would be an interesting setup, to say the least) a huge site like Facebook is not going to squeeze all of their data into RAM. So if your "system of record" is something that cannot fit all of your data... inevitably some data will be effectively "lost." News feed gaps anyone?

Maybe this would just be another useless musing -- an oddity that I noticed that maybe few other people would, along with a harebrained explanation. However, just this week Facebook got a lot of attention for their latest "redesign" of their home application. Now we have the News Feed vs. the Live Feed. The News Feed is supposed to be the most relevant posts, i.e. incomplete by design. Now again, if your app can only access cache, and you can't store all of your data in cache, what do you do? Try to put the most "relevant" data in cache, i.e. pick the best data to keep in there. Hence the new News Feed. The fact that a lot of users have complained about this isn't that big of a deal. When you have a very popular application, any changes you make are going to upset a lot of people. However, you have to wonder whether this time they are making a change not because they think it improves their product and will benefit users overall, but because it is a consequence of technology decisions. Insert cart-before-horse reference here...

Facebook has a great (and well deserved) reputation in the technology world. I'm probably nuts for calling them out. A much easier target for criticism is Twitter. I was lucky enough to be part of their beta for lists. Now lists are a great idea, in my opinion. Lots of people have written about this. However, the implementation has been lacking to say the least. Here is a very typical attempt to use this feature, as seen through the eyes of Firebug:

It took me five attempts to add a user to a list. Like I said, this has been very typical in my experience. I've probably added 100+ users to lists, so I've got the data points to back up my statement. What the hell is going on? Let's look at one of these errors:

Ah, a 503 Service Unavailable response... So it's a temporary problem. In fact look at the response body:

I love the HTML tab in Firebug... So this is the classic fail whale response. However, I'm only getting this on list requests. Well, at the very least, I'm only consistently getting this on list requests. If the main Twitter site was giving users the fail whale at an 80% clip... In this case, I can't say exactly what is going on. I could try to make something up (experiments with a non-relational database?)
However, this is much more disturbing to me than what's going on at Facebook. I don't get how you can release a feature, even in beta, that is this buggy. Since its release, Twitter has reported a jump in errors. I will speculate and say that this is related to lists. It would not be surprising for a feature with this many errors to spill over and affect other features. If your app server is taking 6-10 seconds to send back (error) responses, then it is going to be able to handle a lot fewer requests overall. So not only is this feature buggy, but maybe it is making the whole site buggier.
Now, I know what we (eBay) would do if this was happening: We'd wire-off the feature, i.e. disable it until we had fixed what was going wrong. Twitter on the other hand...

Huh? You've got a very buggy feature, so you're going to roll it out to more users? This just boggles my mind. I cannot come up with a rationale for something like this. I guess we can assume that Twitter has the problem figured out -- they just haven't been able to release the fix for whatever reason. Even if that was the case, shouldn't you roll out the fix and make sure that it works and nothing else pops up before increasing usage? Like I said, I just can't figure this one out...