Wednesday, September 28, 2011

The Perfect Android

Yesterday Android developer advocate Reto Meier asked what developers loved/hated about application frameworks. It got me thinking: if I could change anything about Android development, what would it be? I mentioned some of these things in my comment on Reto's post, but I'll go into a little more detail here.

Less XML. There are a lot of places where XML is really unnecessary in Android. Top among those things is having to declare Activities, and to a lesser degree Services and BroadcastReceivers. I don't know about your code, but the vast majority of the Activity declarations that I've ever made simply stated the class and maybe some options about config changes and screen orientation. Does this have to be in the manifest XML? I'd prefer to not have to declare Activities in XML and still be able to navigate to them without my app crashing. I'd also like to be able to handle most of their options in Java. This could be as annotations, or maybe some kind of getConfiguration() method that you override, or maybe just as APIs to invoke during onCreate. I think I could live with any of those things. Now if you need to do something more complicated like put an IntentFilter, then that can be in XML. In fact I'm OK with XML always being allowed, just optional for the simpler cases that make up the majority of the Activities that developers write. You could apply similar logic to Services. If you have a Service that you only communicate with by binding to it and it runs in the same process as your main application, then it seems like you should not need any XML for it.
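To make that concrete, here is a purely hypothetical sketch of what annotation-driven Activity configuration could look like. Nothing like @ActivityConfig exists in the Android SDK; the annotation is defined here only to illustrate the idea.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import android.app.Activity;

// Hypothetical annotation that the framework could read instead of a manifest entry.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface ActivityConfig {
    String screenOrientation() default "unspecified";
    String[] configChanges() default {};
}

// An Activity declaring its own configuration, with no <activity> element in the manifest.
@ActivityConfig(screenOrientation = "portrait",
        configChanges = { "orientation", "keyboardHidden" })
class DetailsActivity extends Activity {
}

The same information could just as easily come from an overridden getConfiguration() method or from calls made in onCreate; the annotation is just one of the options mentioned above.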

Functional programming. If you've read my blog for very long then this is probably no surprise. This probably requires closures to come to Java, but it seems like we are pretty close to that happening. I think that once that syntax is settled on, then Android should not only get this language feature, but the app framework should be dramatically upgraded to take advantage of it. Technically some of this can be smoothed over by the compiler allowing closures to substitute for single-method interfaces. I'd rather be more explicit about it. And by the way, I don't see why Android should have to wait for Java 8 to be finished by Oracle. Android needs to maintain language and bytecode compatibility, but that doesn't mean it has to use somebody else's javac...
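To illustrate, here is the kind of single-method-interface boilerplate that closures could collapse. This is ordinary Android code today; the closure syntax shown in the comment is pure speculation, since no syntax had been settled on.

import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.Toast;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Button button = new Button(this);
        setContentView(button);
        // Today: a single-method interface implemented with an anonymous inner class.
        button.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                Toast.makeText(MainActivity.this, "clicked", Toast.LENGTH_SHORT).show();
            }
        });
        // With closures, the same wiring might collapse to one line.
        // Speculative syntax, not valid Java today:
        // button.setOnClickListener(v -> Toast.makeText(this, "clicked", Toast.LENGTH_SHORT).show());
    }
}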

No more Parcelables. Yeah I know that Parcelables are a faster serialization/deserialization solution. Cool. In most cases this requires a developer to write a lot of mindless code. On one hand tooling can help with this, but why not just have the compiler handle it? Then we could get rid of Parcelable. If a class implements good ol' Serializable, then the compiler could easily generate the Parcelable code that tooling could generate, but that is currently written by annoyed developers. Of course it's cool to let developers override this if they want to do something clever. If a class has fields that are not Serializable, then the compiler could generate a warning or even an error.
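For reference, this is the kind of hand-written boilerplate a compiler could just as easily generate. Contact is a made-up example class, but writeToParcel and CREATOR are exactly what developers write by hand today.

import android.os.Parcel;
import android.os.Parcelable;

public class Contact implements Parcelable {
    private final String name;
    private final int age;

    public Contact(String name, int age) {
        this.name = name;
        this.age = age;
    }

    @Override
    public int describeContents() {
        return 0;
    }

    // Mindless field-by-field marshalling that a compiler could derive automatically.
    @Override
    public void writeToParcel(Parcel dest, int flags) {
        dest.writeString(name);
        dest.writeInt(age);
    }

    // More boilerplate: the mirror-image unmarshalling code.
    public static final Parcelable.Creator<Contact> CREATOR =
            new Parcelable.Creator<Contact>() {
        @Override
        public Contact createFromParcel(Parcel in) {
            return new Contact(in.readString(), in.readInt());
        }

        @Override
        public Contact[] newArray(int size) {
            return new Contact[size];
        }
    };
}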

MVC. Now this one is a lot more controversial for me. I'm not a big believer in MVC, so I've always liked that Android wasn't either. However I think that a lot of developers would have an easier time if Android followed a more pure MVC pattern. I think that's one of the ways that Cocoa Touch is easier for new developers to pick up. I've had more than one new developer ask me if an Activity was Android's version of a controller. It is not. But maybe it should be? There's room for things beyond MVC as well. With ICS, Fragments should become the norm for app developers. Developers will have to decide for themselves about the best way to coordinate and communicate fragment-to-fragment, fragment-to-activity, etc. Direct method invocation? Shared memory? Messages/handlers? Event listeners? That's a lot of choices, and too many choices make life especially difficult for newbie programmers. Provide a "standard" mechanism, while not ruling out the other choices so more experienced developers can still do things the way that they think is best. The same issue exists for activity-to-activity communication.
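As an example of what a "standard" mechanism might look like, here is a minimal sketch of the event-listener approach to fragment-to-activity communication. The fragment and listener names are hypothetical; only the pattern itself is the point.

import android.app.Activity;
import android.app.Fragment;

public class ItemListFragment extends Fragment {

    // Callback that the host Activity implements, putting it in the "controller" role.
    public interface OnItemSelectedListener {
        void onItemSelected(long itemId);
    }

    private OnItemSelectedListener listener;

    @Override
    public void onAttach(Activity activity) {
        super.onAttach(activity);
        // Fails fast if the host Activity forgot to implement the callback.
        listener = (OnItemSelectedListener) activity;
    }

    // Called from wherever the fragment detects a selection, e.g. a list item click.
    private void notifySelection(long itemId) {
        if (listener != null) {
            listener.onItemSelected(itemId);
        }
    }
}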

Better memory management. I'd love to have gobs of RAM and an amazing GC. In the absence of those, just do me some favors: make sure all my soft/weak references are GC'd before an OOM. Always fire Application.onLowMemory before an OOM, and give me 20ms to react and prevent the OOM. Kill all non-visible/foreground processes and re-allocate their memory before an OOM. An OOM is a crash and is the worst possible thing for users. I know lots of snooty engineers will look down their noses and say "your app needs to use less memory", but that kind of attitude makes Android seem like a ghetto to end users.
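As a small illustration, here is what reacting to that callback can look like. The bitmap cache is a made-up example, and today there is no guarantee this runs before an OutOfMemoryError, which is exactly the complaint.

import java.util.HashMap;
import java.util.Map;

import android.app.Application;
import android.graphics.Bitmap;

public class MyApplication extends Application {
    // Hypothetical in-memory cache of things that can be recreated later.
    private final Map<String, Bitmap> bitmapCache = new HashMap<String, Bitmap>();

    @Override
    public void onLowMemory() {
        super.onLowMemory();
        // Drop everything that can be rebuilt, in the hope of avoiding an OOM.
        bitmapCache.clear();
    }
}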

No more sync. This is one of those features in Android that sounds great on paper but is more trouble than it is worth at this point. By adding your app to sync, you let the system schedule a time for your app to synchronize with your server. Sounds great. However your app won't be the only one getting busy at that time; that's part of the point of sync. Worst of all, if the user decides to launch some other app while the sync is in progress, then that app has a good chance of being hosed. Tons of CPU & I/O will be hogged by the sync and that app will be sluggish to say the least and may even crash. All of this because of sync. What's more, there is no real need for sync anymore. With C2DM, a server can send an event to the phone to let your app know that it needs to sync state with the server. Your server can decide how often this needs to happen and can make sure there are no unnecessary syncs.

Friday, September 02, 2011

Offshoring

Wikipedia describes offshoring as "the relocation of a business process from one country to another." There once was a time in Silicon Valley that offshoring was a dirty word. I was here (in the Valley) at the time and I remember it well. Let me take you back a few years in time and describe what I've seen of offshoring over the years.

I think there is still a general fear and bitterness associated with offshoring in the United States. It was much the same when I first heard about it in the Valley around 2002. It had all started several years before, but it was in the midst of the Valley's worst recession that it started to take full effect. The big companies of the Valley were under tremendous pressure to cut costs. They had offshored things like call centers, and so it only made sense to move up the value chain. There was plenty of programming talent to be found in India and China. Soon it wasn't just big companies, but even startups. It wasn't uncommon to hear about a startup where the founder was non-technical and shipped the programming tasks to a firm in India or China.

For me personally, I heard a lot about this kind of offshoring, but it did not affect me until 2004. That was when I was working for Vitria Technology. Vitria had been a shining star in the dot com era, and made a lot of money selling the enterprise integration software known as BusinessWare. However by 2004, Vitria's glory days were long past. They had been searching for their second hit for years, but were still being buoyed by recurring revenue from BusinessWare. I joined to work on one of their new ideas, what was eventually known as Resolution Accelerator or RA for short. I worked on a small team of engineers at Vitria's office in Sunnyvale. We developed RA, but our QA was in India. Vitria had moved its QA functions to India the year before we built RA.

RA turned out to be a success. We had lots of paying customers in healthcare and telecom. RA was built on top of BusinessWare, as that was part of our strategy. However a lot of the technical leadership behind BusinessWare felt that the RA team had built a lot of reusable features that belonged in BusinessWare. So even as we were selling RA to happy customers, we started a new project to migrate many of RA's key components to BusinessWare and then rebuild RA on top of this new version of BusinessWare as it was being developed at the same time. Great idea, right?

I was working with the BusinessWare team who were essentially re-inventing my wheels but within the mammoth piece of software that was BusinessWare. However this team turned out to be a lot different than the team that I worked on that built RA. There were no developers. There were multiple architects who were assigned to BusinessWare, as it was the key piece of software in the company. Then there was a tech lead from the BusinessWare team. However this tech lead did not code. Instead he wrote specs that were then implemented by a development team in China and then tested by our QA team in India. This was the model they had adopted for BusinessWare for a couple of years already. I was essentially a consultant to the tech lead and was not supposed to interact with the developers in China -- even though they were largely writing code that was functionally equivalent to code I had already written. Meanwhile I was the tech lead of RA and we were now supposed to "graduate" to the same development process as BusinessWare. So now I had a development team in China who I was supposed to direct. My job had "evolved" into writing a spec for developers in China and coordinating with a QA team in India. Sunday - Thursday you could find me on the phone from 9 - 11 PM every night. Good times.

Meanwhile all of the developers on the RA team took off. One went to Google and worked on GMail. One went to Symantec. Finally I could take no more and split for a startup… That startup burned through cash like crazy, and went out of business a year later. All I've got is this patent to show for it, along with the valuable experience of writing my own programming language. In 2007 I went to work for eBay and ran into offshoring at a much larger scale.

Development organizations typically consisted of engineers in San Jose plus teams from our Shanghai office. Plus they were heavily augmented with contractors from two firms based in India. The contractors would have a tech lead in San Jose who interacted with a tech lead from eBay. In addition there would be a development manager who "owned" the application and who was usually the manager of eBay's tech lead on the project. The tech lead's job was similar to what it had been at Vitria. He was in charge of a spec and coordinated with the contractors plus engineers in San Jose and Shanghai. As you might guess, most of the actual coding was being done either in Shanghai or by contractors in India. The tech lead usually didn't interact with contractors in India though; instead they worked with the contractors' tech lead/liaison. Finally, in addition to the development manager for the application, there might also be an architect -- if the application was important enough. The tech lead worked with the architect on the design that would become the spec that would be sent off to India and China. The tech lead would also interact with a data architect to design database schemas and an operations architect to design the physical deployment and requirements of the application. The point is that the tech lead had almost no chance of actually coding, and this was just accepted and understood.

Just before I left eBay, things started to change a bit. In particular eBay brought in some new blood from Microsoft of all places, who took over the search organization. This was the organization that trail blazed offshoring at eBay. It had been a search VP who had "opened" our Shanghai office, and most of the engineers and QA there were part of the search organization. New management decided that this was not working and sought to move all search engineering to San Jose. I'm not sure how the Indian contractors would play in this new vision, but it sounded like they would be squeezed out too. The Shanghai engineers were unilaterally "reorged" to another organization in the company (an organization that had already been gutted and whose VP was being pushed out, but that's another long story.)

Ok, so what's the point of all of this? I'm not sure to be honest. From my perspective, I was very dissatisfied with how we used offshoring at Vitria. When I joined eBay we were using it in a similar way. If anything, it seemed the eBay way was even more inefficient. There was an unwritten rule at eBay that if a San Jose tech lead estimated that a technical task would take N days, then you would need to set aside 2*N days if the development was being done in China and 2.5*N days if the development was being done by contractors in India. This might seem harsh or much worse, but certainly a big part of it was the operational overhead of offshoring. Further this system was judged a failure by the largest organization at eBay.

At the same time it would be foolish to say that offshoring has been a failure. There is a lot of awesome software development being done via offshoring. I'm not so sure about the practice of "design it in America, develop it offshore" when it comes to software. At the very least I have not seen this work well in person. Then again perhaps the problem is simply one of tech leads who don't code and that just happened to coincide with my offshoring experiences. Whatever the case, one thing is for certain. There was palpable fear about offshoring a decade ago, and it turned out to be a false fear.

Monday, August 15, 2011

Android Fragmentation: The Screen Myth

As someone who develops Android apps for a living and who has worked on two Android apps that have been downloaded 10M+ each, I know about fragmentation. I can tell you all about device differences for cameras, address books, and accelerometers. However when most pundits talk about fragmentation, they usually rely on exactly one example: screen sizes. I'm here to tell you that this is a myth.

[Chart: Android screen sizes and densities, August 2011]
If you look at the Screen Sizes and Densities page from the Android developers site, you will see a chart that looks something like the one to the right. There are several slices to this pie, but two of them are much larger than the rest. One is "Normal hdpi" at 74.5% (as of August 2011) and the other is "Normal mdpi" at 16.9%. What does that mean? It means that 91.4% of all devices have a "Normal" sized screen, so roughly 3.5 - 4.3". The hdpi ones have a higher resolution of course. So for those devices you will want higher quality images and what not.

Of course for anything like this, it is natural to compare things to the iPhone. On the iPhone all devices have a 3.5" screen. However you once again have the situation with different resolutions between the iPhone 3G/3GS and iPhone 4. So similar to Android, you will probably want to include higher resolution artwork to take advantage of the iPhone 4's higher resolution screen. However as a developer you don't get much of an advantage with the higher resolution screen since the overall size is the same. It's not like you're going to make your buttons the same number of pixels and thus fit more buttons on the screen, etc.
No, wait a minute, there is a difference between 91.4% and 100%. A good chunk of that difference is because of tablets, with their "Xlarge mdpi" screens. You can expect that segment to continue to grow in the future. But again, this is a similar situation to iOS. An iPad has a larger screen than an iPhone. If you care about this, you either make a separate app for the iPad, or you incorporate the alternate layouts within your existing app and make a "universal" app. This is the same deal on Android.

To really make a fair comparison, you would have to exclude the xlarge screens on Android. Then the percentage of phones that fall in the combined Normal mdpi/hdpi bucket is even higher. It's still not 100% but it is pretty close. My point is that if somebody wants to talk to you about Android fragmentation and they use screen size as an example, some healthy skepticism is appropriate.

Sunday, July 24, 2011

What's a MacBook Air good for?

At the beginning of this year, I bought a MacBook Air. I bought a maxed out one, with a 13" screen, 2.13 GHz CPU, 4 GB RAM, and a 256 GB SSD. This past week Apple refreshed the MacBook Air line and I've seen a lot of people asking the question "Could I use a MacBook Air for ___?" So here's what all I use it for, along with a comparison to my work computer, a maxed out 15" MacBook Pro.

  • Web browsing. I run Chrome on it and it screams. Actually the MBA set an expectation for me about how ChromeOS and in particular how my Samsung Chromebook should perform. I basically expected the Chromebook to perform exactly like the Chrome browser on my MBA, which is awesome. I was very disappointed, as the Chromebook is nowhere close. Anyways, browsing on the MBA is fantastic. I can notice a slight difference in performance on super-JS heavy sites, with my MBP being slightly smoother. I think a non-engineer would have difficulty spotting these differences.
  • Word processing. I'm not just talking about rudimentary word processing either. When I got the MBA, I had a couple of chapters and appendices that I was still working on for Android in Practice. I used the MBA to write/complete these chapters. This involved using Microsoft Word along with the Manning Publications template for Word. The chapters were usually in the 30-50 page range, and often with tons of formatting (code, sidebars, etc.) and large graphics. I cannot tell any difference between my MBP and MBA for these massive Word docs.
  • Programming. Speaking of AIP, part of finishing it meant writing code for those chapters. I've run pretty much all of the code from AIP on my MBA with no problems at all. For smaller apps like I have in AIP, there is no appreciable difference between my MBA and MBP. Building a small app is an I/O bound task, and the SSD on the MBA shines. Now I have also built Bump on my MBA, and there is a very noticeable difference between it and my MBP. There are two major phases to the Bump build. The first is the native compilation, which is single-threaded. You can definitely notice a major CPU speed difference here, even though the raw clock speeds of the two machines are close. The second phase is the Scala/Java compilation phase. This is multi-threaded, and the four cores on my MBP obviously trump the two cores on the MBA. Still, I would say the MBA compares favorably to the late-2008 era MacBook Pro that I used to use for work.
  • Photography. I used to use Aperture on my old MBP. On the MBA I decided to give Adobe Lightroom a try. So it's not an apples-to-apples comparison. However, Lightroom on my MBA blows away Aperture on my old MBP. I haven't tried either on my current MBP. Obviously the SSD makes a huge difference here. Nonetheless, post-processing on the MBA is super smooth. When it comes to my modest photography, Lightroom presents no problem for my MBA.
  • Watching video. I haven't watched too much video on my MBA, mostly just streaming Netflix. It is super smooth, though it will cause the machine's rarely used fan to kick-in (as will intense compilations, like building Bump.) Next week I am going on a cruise to Mexico, and I will probably buy/rent a few videos from iTunes to watch on the cruise. Based on previous experiences, I expect this to be super smooth and beautiful, though I have considered taking the MBP just for its bigger, high resolution screen.
  • Presentations. I've done a couple of talks using the MBA. For creating content in Keynote, I do sometimes notice a little sluggishness on the MBA that is not present on my MBP. For playback, it seems very smooth. However, I did have an animation glitch during one presentation. This never happened when previewing just on my MBA's screen, it only happened on the projected screen. Nobody seemed to notice during the presentation, except for me of course.

So there you go. What's a MacBook Air good for? Pretty much everything. If you are a professional developer who compiles software, then I think your money is best spent buying the most powerful computer available. This has always been true. Maybe this is also the case for professional photographers/videographers as well. However my MBA is way more than enough for most folks. Finally keep in mind that the new MBAs introduced this week are a good bit faster than my MBA, at least in terms of CPU speed.

Friday, July 22, 2011

Android JSON Bug

Today I was working on some code that needed to invoke a JavaScript function on a web page that was being displayed in a WebView within an Android application. As part of that function invocation a JSON object is passed in. It's actually a pretty complex JSON object. Of course it must be passed in simply as a string and then parsed by the JavaScript to an object. Android includes the JSON "reference implementation", so naturally I wanted to use this instead of relying on some 3rd party JSON library that would fatten up my APK. The standard way to do this with the reference implementation is to create an instance of org.json.JSONObject and use its toString method. You can create an empty instance and programmatically build up a JSON data structure, or you can give it a Java Map to build from. I chose to go the latter route.
When my web page choked, I wasn't too surprised. I'm never surprised when code I write has bugs initially. I added a log statement to show the data being passed in and was surprised to see that it looked like this:
{"a": "{b={c=d}}", "e":"f"}
This was not correct. This should have looked like this:
{"a": "{"b":"{"c":"d"}"}", "e":"f"}
To create the JSONObject, I passed in a HashMap whose keys were all strings but whose values were either strings or another HashMap. So to create the above structure there would be code like this:
HashMap<String,Object> top = new HashMap<String,Object>();
HashMap<String,Object> a = new HashMap<String,Object>();
HashMap<String,Object> b = new HashMap<String,Object>();
b.put("c", "d");
a.put("b", b);
a.put("e", "f");
top.put("a",a);
It seemed that the JSONObject code was flattening out my objects and producing incorrect JSON (WTF equal signs!) as a result. I put together a quick workaround to recursively replace any HashMap values with JSONObjects like this:
// Workaround: recursively wrap nested HashMaps in JSONObjects so that
// JSONObject.toString() emits proper nested JSON instead of HashMap.toString() output.
JSONObject jsonIfy(HashMap<String,Object> map){
	HashMap<String,Object> fixed = new HashMap<String,Object>();
	for (String key : map.keySet()){
		Object value = map.get(key);
		if (value instanceof Map){
			value = jsonIfy((HashMap<String,Object>) value);
		}
		fixed.put(key,value);
	}
	return new JSONObject(fixed);
}
This worked perfectly.
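For completeness, using the workaround on the maps built above looks something like this; the exact key ordering in the output string may vary.

// Wrap the nested maps before handing the structure to the WebView.
String json = jsonIfy(top).toString();
// Produces proper nested JSON, e.g. {"a":{"b":{"c":"d"}},"e":"f"}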

Saturday, July 16, 2011

ChromeOS and Photography

So far I've been pretty disappointed with the Samsung Chromebook that Google gave me for attending I/O this year. That sounds pretty ungrateful for a free computer, but I am intrigued by the idea of ChromeOS. I'd like to see it work, just as a technical achievement. I think that perhaps the Chromebook that I have falls short because of its underpowered hardware, but perhaps its shortcomings are inherent to the OS.
Anyways, obviously the main idea with ChromeOS is that people spend all of their time on the web. However, one of the other common tasks that people do is use their computers to share digital photographs that they took with their camera. This might seem like a hard thing for ChromeOS to handle, but Google claimed that they had this figured out. So I decided to give it a spin. Here are the tools that I used.
Chromebook and Camera
I had taken a few dozen pictures at a Giants baseball game I went to last weekend with family. I took the SD card from my camera (a modest Nikon D3000) and plopped it into the SD card slot on the Chromebook. ChromeOS immediately opened a new tab with a file browser that allowed me to navigate and view my pictures on the SD card. Very nice!
Viewing SD card contents
You could preview and even delete photos. The preview was a little slow to load. You could select photos and start an upload to ... Picasa of course. The upload takes place in the background with a little notification window letting you know about the progress. Again, very nice. Once the pics are uploaded, browsing/previewing is much smoother. I assume this is because you are browsing downsized versions of the pics, whereas the file manager on ChromeOS has you browsing through the full versions.
Pictures uploaded to Picasa
One of the things that I often do with my photos is edit them. On my MacBook Air, I use Adobe Lightroom. I didn't expect to have something as sophisticated as Lightroom, but I did expect to be able to do simple things like rotate and crop. I would also expect red-eye removal to be available, since this is a pretty common need. Anyways, the editing tool on the Picasa website is Picnik. I've used it before, and it is great. However, it had significant problems loading on ChromeOS:
Loading Picnik
Still loading...
Whoops!
Restart your browser? Umm...
I thought that maybe this memory error was because of the size of the photo (2.5 MB). That would imply that Picnik is purely client-side? I don't think so. I would assume that the full size photo was on the server, in which case the memory problem is purely from the Picnik tools loading and has nothing to do with the size of the picture. Either way, I don't think this photo is too much above average. Megapixels are cheap these days.

So I couldn't edit the pics once uploaded to Picasa. I actually tried just using Picnik directly, not through Picasa, but it had the same problem. The nice thing is that any web app that allows you to upload pictures from your computer works great with ChromeOS. Here's Facebook for example:
Upload to Facebook
You could essentially upload from your SD card to Facebook this way. I would think that if a tool like Picnik worked, you could edit and then upload. You could probably sell a Chrome app that did that (it should allow tagging of your Facebook friends too) and make a few bucks, assuming Chromebooks started selling.
I suppose a pretty common use case will be to simply upload all of the pics off of your SD card to Facebook. It seems like ChromeOS handles that pretty well. Put in your SD card, open up Facebook, and start uploading. If you use Picasa and Google+, it is even a little simpler. Editing seems to be a problem right now. Much like the general performance of the Chromebook, it might be purely a function of subpar hardware. Maybe if Samsung put more memory in it, then Picnik wouldn't have choked? Hopefully the shortcomings can be addressed through software, since I don't think Google can update the RAM on my Chromebook.

Note: The "screenshots" were taken with my Nexus S. Why? Because the screenshot tool for ChromeOS didn't work for me. It works great on Chrome browser, but it would only capture part of the screen on ChromeOS and then freeze up. Sigh.

Thursday, July 14, 2011

Spotify vs. Rdio

Today was the much anticipated US launch of Spotify. I've been using Rdio for several months, and really love it. Not surprisingly, there are a lot of posts out there comparing the two services. That's all good stuff to read. Here's my summary:

  • Spotify has more music. If you stick to stuff from the last 10 years and that you can find on iTunes, then you probably won't find much difference. But going further back or by going "more obscure", you will notice the differences.
  • Spotify has a desktop app, but it is an iTunes clone. The Rdio desktop app is more of a wrapper on their website. So big advantage to Spotify, despite being an ugly iTunes clone. Also, Spotify will play the music files already on your computer, so it tries to be a replacement for iTunes.
  • The Rdio mobile app is way better, at least on Android. Subjectively, it has a cleaner design and is easier to use. The Spotify mobile app looks like I designed it. Objectively, the Rdio app's sync is far superior. Spotify requires your mobile device to be on the same wifi network as your computer that is running Spotify. On Rdio, you can easily send any song, album, playlist, etc. to all of your mobile devices with no requirements except that you've got an Internet connection.

Tuesday, July 12, 2011

Netflix Disappointment

Remember when Netflix started making us wait two weeks to get new releases? It was supposed to be a tradeoff to get more streaming content from Hollywood. Now here we are with higher prices than ever, less streaming content, and still no new releases on DVD.

Why can't we have an Rdio or Spotify for movies and TV? At some price, this should be possible, right? I know that is what I want, and I don't think I'm the only one. How much would you pay for it? I think my max price is somewhere north of $35/month, assuming I can use it on all of my devices. Doesn't it seem like there's a very profitable business out there for this kind of service at that price? I'm guessing backwards thinking in Hollywood is to blame here, along with continued paranoia over piracy. At least HBO seems to be getting this right.

Monday, July 04, 2011

Google+ and Hitting the Reset Button

Reset your social network

So you might have heard this week that there's this new social networking thing (*yawn*) called Google+. That's right, it's from Google. So it's gonna suck, right? I was skeptical at first, as were many others. After all, nobody roots for The Big Company that clones popular products made by smaller companies, and Google has had a well-known poor track record in this space. But after a few days of using Google+, I'm a believer -- in its potential. Here's why.

Google+ is a chance to hit the reset button on social networking. For me, when I first started using Facebook it was mostly to catch up with old college classmates. Two big events happened later. Facebook was opened up to everybody, and Twitter showed up. When Facebook was opened up to everyone, I pretty much accepted friend requests from anyone I knew at all. I still didn't use Facebook that much. Meanwhile I really fell in love with Twitter when it showed up. On there I connected mostly with people in the same or related professions as me. Most of my tweets were around things that had to do with my job (programming, technology.)

Meanwhile on Facebook, I had more and more family members join. Suddenly Facebook became a great place to share family related things, especially pictures. Then I hooked up my Twitter account to Facebook. Now I could just post things to Twitter, and they would show up on Facebook. Then I would occasionally post pictures on Facebook as well. However, most of my tweets were geeky stuff. I did have some co-workers and college classmates who found such things interesting, but more and more most of my friends on Facebook (lots of family and high school classmates) found this useless. Eventually I cut off the Twitter-Facebook connection.

My story here is certainly different from a lot of folks, but I imagine there are a lot of similarities too. Google+ seems to offer the possibility to do things over and get it right this time. The key is its grouping feature, Circles. You have to put people in Circles, so you are motivated to organize your friends. This is important. Facebook and Twitter have both had similar features for a while, and they just aren't used. Twitter's lists aren't really comparable since you still send tweets to everyone. Facebook's groups are more comparable, so why aren't they used?

All your privacies are belong to us

First and foremost, I don't think Facebook really wants anyone to use them. They have a pretty strong history of trying to decrease privacy on their network. Obviously Facebook benefits if everything posted on their network can be searched and viewed by others on Facebook. It seems like one of those features that they added because some users wanted it, but it did not benefit Facebook The Business. Within a couple of days of Google+'s debut, reports came out of a Facebook engineer easily hacking together the same interface to use with Facebook groups. So clearly it would have been pretty easy for Facebook to make groups easy for users to use to organize their friends and incorporate groups into content published on Facebook, but instead Facebook chose not to do this.

This raises the question of why the heck Google+ is doing it. If I had to guess, I doubt that Google really wants to do this either. However, this is one of many places where Google+ feels like something designed around the strengths and weaknesses of its competition, Facebook and Twitter. Privacy was an obvious weakness of Facebook and so Google+ takes advantage of that. It's the kind of thing you do to get market share, whereas Facebook has been doing just the opposite because they are trying to monetize existing users and content.

Resistance is futile

Privacy is not the only place where Google+ feels like a product that has been cleverly designed around its competition. In fact it reminds me a lot of embrace, extend, extinguish era Microsoft. I think they have realized that they don't necessarily have to come up with something that Facebook and Twitter don't do at all, they can just do a lot of the same things and do them a little better. Some other examples of this are viewing pictures and allowing rich markup in status updates. So they make a slightly better product, then they play their own monopoly card by putting the G+ bar on top of all Google pages, including search results and GMail...

Anyways, going back to privacy... The creation of Circles is only half the battle. The other half is picking which of your Circles to publish to. G+ has made this easy to do, and it is a feature that I want to use. However, I don't know if others will do the same. Right now it still seems like most posts that I read are public. This may change as more people start to use G+, but maybe not.

If it doesn't change, then G+ seems like it will be more of a competitor to Twitter than to Facebook. It already has a lot of similarities, since it uses an asymmetric friendship model like Twitter. I definitely noticed a drop in tweets by those I follow on Twitter since G+ came out. If people don't use the privacy features, then the most it could become is a better Twitter. There have been other better Twitters before, so I don't know if that is good enough. Features like hangouts (group video chat) and huddles (group messaging) seem like they could appeal to Facebook users, but it's hard to say right now. For me, the kind of folks who I use Facebook to communicate with, but would not use Twitter to communicate with, have not even heard of Google+. Yet.

 

Wednesday, June 29, 2011

Web Apps, Mobile Web Apps, Native Apps, ... Desktop Apps?

There's never any shortage of people debating mobile web apps vs. native apps. I don't want to waste your time with yet another rehash of this debate. Personally I sum it up as a choice between user experience and cost. But obviously I am biased -- I build native apps and even wrote a book on how to do it. So you shouldn't believe anything I have to say about this topic, it must be wrong. Anyways, one of the interesting lines of reasoning that often comes up in this debate is how web apps replaced desktop apps. I want to examine this a little bit closer.

I'm actually writing this blog post from a Chromebook that I got for attending the Google I/O conference last month. This device is perhaps the logical conclusion of the web apps replacing desktop apps axiom. However, I have a problem with that axiom. It is often based on the emergence of popular websites like Yahoo, Google, Amazon, and eBay. The argument is that these apps were web based and the fact that they ran on servers that could be rapidly updated is a key to why they succeeded. The long update cycle of desktop software would have made it impossible for these apps to be anywhere but in a browser.

There is some truth in this, but it's misleading. The most important factor in the success of those apps was that their data was in the cloud. They brought information and interactions that could not exist simply as part of a non-connected desktop app. They had business models that were not based on people paying to install the software. These were the keys. The fact that end users interacted through a web browser was secondary. It's pretty much the same story for newer super popular web apps like Facebook and Twitter.

Going back to the world of mobile for a minute... One of the things that mobile has taught us is that users don't care so much about how they interact with your "web-based" application. Hence the popularity of native apps for web giants like Amazon and Facebook. In fact, some might even argue that users prefer to use native apps to interact with traditional web properties. I won't argue that, but I would definitely disagree with any claims that users prefer to use a browser.

Anyways, the point is that the notion that web apps replaced desktop apps is dubious. In fact, if you look at places where web apps have tried to exactly replace desktop apps, such as word processing, they have had limited success. Currently we see a movement to replace music players like iTunes with web apps. These web apps have some distinct advantages, but it is not at all clear that they will prove popular with users. Apple has taken the approach of adding more network based features (store and download your music from their servers) to iTunes instead of going with a web app -- at least for now.

Connected desktop apps have a lot to offer. I often find myself using desktop apps like Twitter, Evernote, Sparrow (for GMail), and Mars Edit (for Blogger.) They provide a better experience than their browser based cousins. Apple's Mac App Store has definitely made such apps more popular on the Mac platform, as they have made it easier to discover, purchase, and update such apps. Speaking of updates, these apps update frequently, usually at least once a month. Again, I think that our expectations have adjusted because of mobile apps.

So will desktop apps make a comeback? Are mobile web apps doomed? I don't know. I think it is very unclear. It's not a given that a computer like this Chromebook that I'm using is "the future." We can haz network without a browser.

Saturday, June 18, 2011

This is the Year of Music

Last month I wrote a bit about cloud music. Since then Apple got in the game with iTunes in the iCloud. I'm not a fan of it because you have to manage what songs are downloaded to your devices before you can listen to them. Of course having to download a song before you can listen to it is not surprising from a company that sells hardware priced by storage capacity. If you didn't have to download to use your media, why would you spend an extra $100 on the 32gb iWhatever instead of the 16gb version? Still you gotta give Apple props on the price point. I am optimistic that competition around price and features between Apple, Google, and Amazon will be very beneficial for consumers.

Anyways, while the music-in-the-cloud market is obviously a very hot market being contested by three of the biggest technology companies in the world, there is a lot more interesting innovation around music going on in technology. This blog post is about two of my favorite music startups/services: Rdio and Turntable.fm. There's tons of tech press on each of these companies, so I won't go into that. I'm just going to talk about why I like them.

Rdio launched in late 2010, and I've been a paying customer since January 2011. It's conceptually similar to subscription services like fake Napster and Rhapsody. What I like about it is how great it is to use on mobile devices. I have it on my Nexus S, on my Nexus One that I listen to when I run/work-out, and on the iPod Touch that I have plugged in to my car. Now you might be thinking, how can I use this on an iPod in my car with no internet connection? Well you can mark songs/albums/playlists as songs to sync to your devices and be usable with no connection. So I can use the Rdio Mac app, mark some songs, and then just connect my iPod or Nexus One to a wifi network, and the songs get synced. Then I can listen to them anytime. I regularly read reviews of new music on Metacritic, listen to some of the music I find there on Rdio, and then sync it to my devices. Then I can listen to it while running or driving to work.

Speaking of discovering new music, that is the sweet spot of Turntable. It's pretty new, I only heard about it earlier this month and started using it this week. The idea of being a DJ for the room may sound lame at first -- and maybe it is -- but it is also awesome. However I will warn you to resist being a DJ. You will be drawn in and waste massive amounts of time. It's completely unnecessary too. Like I said, Turntable is awesome for music discovery. Go to a room that sounds interesting (avoid the Coding Soundtrack unless you really enjoy electronica/dance AND don't mind witnessing pissing contests measured by playing obscure remixes) and enjoy. There's a good chance that you will hear familiar music, but an even better chance you will hear something you've never heard before. The DJs try hard to impress, and you win because of it. It's like Pandora, but better, with no ads, and ... you always have the option to enter the chat or even take your turn at DJ'ing.

Sunday, June 12, 2011

I hate your smartphone

When people talk about smartphones, they often mean iPhones and Android phones. Sure there are still Blackberries out there, I think I once saw a guy using a webOS phone, and Microsoft has 89,000 employees who have to use Windows Phone 7, but it's mostly an Android and iPhone world right now. If you have a phone running Android or iOS, life is pretty good. You've probably got a really powerful phone, with a huge number of apps and games to choose from. You've also got a web browser that's more advanced than the one used by most desktop computer users. So given all of those things to be happy about, why is it that smartphone owners are so hostile to each other?

Can't we all get along?

iPhone users love to look down their noses and make derisive comments about Android users. Not to be outdone, Android users never miss an opportunity to mock iPhone users. There is an obvious parallel here to Mac vs. Windows, but I think it's actually much nastier than Mac vs. Windows ever was. Here's a list of my theories (in no particular order) on why there is such animosity.

  1. It's so easy to do. The truth is that both iOS and Android are deeply flawed. Going back to the Mac vs. Windows comparison, think about how much less mature these smartphone OS's are compared to Windows and OSX. So if you have any inclination to make fun of either OS, it's embarrassingly easy to do. This is where things really differ from Mac vs. Windows. There's not much to debate there. If you wanted to say "Mac sucks", there is a short list of things that you can point to. Not true for iOS or Android.
  2. Social networking. In this day and age of social networks, your "friends" shove what they are doing in your face constantly. Smartphone apps make heavy use of this, as a way to spread virally. But what happens when there is an impedance mismatch caused by apps available on one OS but not the other? The folks with the unsupported OS get a little annoyed, and the other folks get an artificial feeling of 133tness.
  3. It starts at the top. Apple and Google both frequently take shots at each other. They do it at developer conferences. They do it when reporting quarterly earnings. It doesn't stop there. Motorola, Verizon, T-Mobile and others have all taken shots at Apple in commercials seen by millions.
  4. Big decisions. You only have one smartphone (unless you are weird like me) and in the US, you usually buy it on a two-year contract. So you have to pick just one, and then you are stuck with it for two years. Thus if anybody is gonna suggest that you made a bad decision, most people will defend their decisions vehemently.
  5. The Apple effect. So this really is straight out of Mac vs. Windows. Apple wants their products to seem elite and luxurious. They want the owners to feel like they have purchased far superior products, and feel that they (the users) are superior because of this. It's brilliant marketing, and it totally works. The air of superiority hangs over any Apple product owners, but especially iPhone users. So naturally any non-iPhone users are going to react negatively to the haute attitudes of iPhone users.

Friday, June 10, 2011

Rallying the Base

This week was WWDC 2011. Last year I was lucky enough to attend what appears to be the final stevenote. This year I followed along online, like much of Silicon Valley. There are a lot of reasons why so many of us who work in the tech world pay such close attention to the WWDC keynote. This is the place where Apple typically unveils innovative hardware and software. However, this year's event reminded me of another watershed moment in recent US history: John McCain choosing Sarah Palin as his VP candidate back in 2008.

Rallying the Base

These two events were similar because they were both examples of rallying the base. In 2008, McCain decided against trying to appeal to moderate Republicans/Democrats/independents who were either inclined to vote for Obama or undecided. Instead he went with Palin, a candidate who did not appeal to those people. The idea was to appeal to the most conservative elements of the Republican party and get those people to vote instead of staying home for whatever reason. Obviously this did not work.

So how was WWDC 2011 a rallying of the base tactic? Apple did not try to introduce new software or hardware that would get non-iPhone owners to go out and buy an iPhone or more strongly consider buying an iPhone the next time they were in the market for a new phone. Instead they did their best to make sure that current iPhone owners continued to buy iPhones. The strategy was two-fold.

First, they addressed the places where they were weak and others were strong. Now let's be honest here, by others we are talking about Android/Google. There were a couple of glaring problems with iOS 4. First was notifications, so Apple essentially adopted Android's model here. Second was the dependency on iTunes the desktop software application. They introduced wireless sync and their iCloud initiatives to address this weakness. Apple did not break any new ground in any of these areas, they simply removed some obvious reasons for people to buy an Android device over an iPhone.

Phase two of rallying the base was to increase lock-in. If you are an iPhone user, you already experience lock-in. Buying an Android phone means losing all of your apps and games. Depending on what you use for email, calendar, etc. you might also lose that data too. Of course this is true to some degree about any smartphone platform. However, with the expansion of the Android Market (I've seen many projections that it will be bigger than the App Store soon), pretty much every app or game you have on your iPhone can be found on Android. Further, there's a good chance that it will be free on Android, even if you had to pay for it on the iPhone. Further, with the popularity of web based email, especially GMail, you probably would not lose any emails, calendar events, etc. So the lock-in was not as high as Apple needed it to be. Enter iCloud and iMessaging.

As many have noted, iCloud/iMessaging does not offer anything that you could not get from 3rd party software. Syncing your docs, photos, email, calendar, etc. is something that many of us already do, and that includes iPhone users. Further many folks already have IM clients that do everything that iMessaging does. The big difference is that all of those existing solutions are not tied to iOS or OSX. Thus they are cross-platform (no lock-in) but that also means that you have to add this software to your devices. It's very nice for users to not have to worry about installing Dropbox, Evernote, or eBuddy. But the obvious win for Apple here is the lock-in. If you start relying on Pages for writing docs and sync'ing them across devices, you are going to be very reluctant to buy anything other than an iPhone (and a Mac for that matter.) If you get used to using iMessaging to chat with your other iPhone toting friends, same thing.

Apple is keeping the cost of using all of their new offerings very low. It's a classic loss leader strategy. It's ok if iCloud and iMessaging both lose a lot of money for Apple. If they can just lock-in existing iPhone users, they can continue to make huge profits. In that scenario, it's ok for Google/Android to have N times the number of users as Apple. The Apple users won't be going anywhere, and they spend a lot of money. Seems like a smart strategy by Apple. It should work out much better than it did for the Republican party in 2008.

Saturday, June 04, 2011

The Concurrency Myth

For nearly a decade now technology pundits have been talking about the end of Moore's Law. Just this week, The Economist ran an article about how programmers are starting to learn functional programming languages to make use of the multi-core processors that have become the norm. Indeed inventors of some of these newer languages like Rich Hickey (Clojure) and Martin Odersky (Scala) love to talk about how their languages give developers a much better chance of dealing with the complexity of concurrent programming that is needed to take advantage of multi-core CPUs. Earlier this week I was at the Scala Days conference and got to hear Odersky's keynote. Pretty much the second half of his keynote was on this topic. The message is being repeated over and over to developers: you have to write concurrent code, and you don't know how to do it very well. Is this really true, or is it just propaganda?

There is no doubt that the computers that we buy are now multi-core. Clock speeds on these computers have stopped going up. I am writing this blog post on a MacBook Air with a dual-core CPU running at 2.13 GHz. Five years ago I had a laptop with a 2.4 GHz processor. I'm not disputing that multi-core CPUs are the norm now, and I'm not going to hold my breath for a 4 GHz CPU. But what about this claim that it is imperative for developers to learn concurrent programming because of this shift in processors? First let's talk about which developers. I am only going to talk about application developers. What I mean are developers who are writing software that is directly used by people. Well maybe I'll talk about other types of developers later, but I will at least start off with application developers. Why? I think most developers fall into this category, and I think these are the developers that are often the target of the "concurrency now!" arguments. It also allows me to take a top-down approach to this subject.

What kind of software do you use? Since you are reading this blog, I'm going to guess that you use a lot of web software. Indeed a lot of application developers can be more precisely categorized as web developers. Let's start with these guys. Do they need to learn concurrent programming? I think the answer is "no, not really." If you are building a web application, you are not going to do a lot of concurrent programming. It's hard to imagine a web application where one HTTP request comes in and a dozen threads (or processes, whatever) are spawned. Now I do think that event-driven programming like you see in node.js will become more and more common. It certainly breaks the assumption of a 1-1 mapping between request and thread, but it most certainly does not ask/require/suggest that the application developer deal with any kind of concurrency.

The advancements in multi-core processors have definitely helped web applications. Commodity app servers can handle more and more simultaneous requests. When it comes to scaling up on a web application, Moore's Law has not missed a beat. However it has not required all of those PHP, Java, Python, Ruby web developers to learn anything about concurrency. Now I will admit that such apps will occasionally do something that requires a background thread, etc. However this has always been the case, and it is the exception to the type of programming usually needed by app developers. You may have one little section of code that does something concurrent, and it will be tricky. But this has nothing to do with multi-core CPUs.

Modern web applications are not just server apps though. They have a lot of client-side code as well, and that means JavaScript. The only formal concurrency model in JavaScript is Web Workers. This is a standard that has not yet been implemented by all browsers, so it has not seen much traction yet. It's hard to say if it will become a critical tool for JS development. Of course one of the most essential APIs in JS is XMLHttpRequest. This does indeed involve multiple threads, but again this is not exposed to the application developer.

Now one can argue that in the case of both server side and client side web technologies, there is a lot of concurrency going on but it is managed by infrastructure (web servers and browsers). This is true, but again this has always been the case. It has nothing to do with multi-core CPUs, and the most widely used web servers and browsers are written in languages like C++ and Java.

So is it fair to conclude that if you are building web applications, then you can safely ignore multi-core CPU rants? Can you ignore the Rich Hickeys and Martin Oderskys of the world? Can you just stick to your PHP and JavaScript? Yeah, I think so.

Now web applications are certainly not the only kind of applications out there. There are desktop applications and mobile applications. This kind of client development has always involved concurrency. Client app developers are constantly having to manage multiple threads in order to keep the user interface responsive. Again this is nothing new. This has nothing to do with multi-core CPUs. It wasn't like app developers used to do everything in a single thread, but now that multi-core CPUs have arrived, you need to start figuring out how to manage multiple threads (or actors or agents or whatever.) Now perhaps functional programming can be used by these kind of application developers. I think there are a lot of interesting possibilities here. However, I don't think the Hickeys and Oderskys of the world have really been going after developers writing desktop and mobile applications.

So if you are a desktop or mobile application developer, should you be thinking about multi-core CPUs and functional programming? I think you should be thinking about it at least a little. Chances are you already deal with this stuff pretty effectively, but that doesn't mean there's no room for improvement. This is especially true if language/runtime designers started thinking more about your use cases.

I said I was only going to talk about application developers, but I lied. There is another type of computing that is becoming more and more common, and that is distributed computing. Or is it called cloud computing? I can't keep up. The point is that there are a lot of software systems that run across a cluster of computers. Clearly this is concurrent programming, so bust out the functional programming or your head will explode, right? Well maybe not. Distributed computing does not involve the kind of shared mutable state that functional programming can protect you from. Distributed map/reduce systems like Hadoop manage shared state complexity despite being written in Java. That is not to say that distributed systems cannot benefit from languages like Scala, it's just that the benefit is not necessarily the concurrent problems/functional programming that are often the selling points of these languages. I will say that Erlang/OTP and Scala/Akka do have a lot to offer distributed systems, but these frameworks address different problems than the multi-core concurrency.

It might sound like I am an imperative-programming-loving curmudgeon, but I actually really like Scala and Clojure, as well as other functional languages like Haskell. It's just that I'm not sure that the sales pitch being used for these languages is accurate/honest. I do think the concurrency/functional programming angle could have payoffs in the land of mobile computing (desktop too, but there's not much future there.) After all, tablets have already gone multi-core and there are already a handful of multi-core smartphones. But these languages have a lot of work to do there, since there are already framework features and common patterns for dealing with concurrency in mobile. Event driven programming for web development (or the server in client/server in general) is the other interesting place, but functional languages have more to offer framework writers than application developers in that arena. My friend David Pollak recently wrote about how the current crop of functional languages can hope for no more than to be niche languages like Eiffel. I think that he might be right, but not just because functional programming has a steep learning curve. If all they can offer is to solve the concurrency problem, then that might not be enough of a problem for these languages to matter.

Friday, June 03, 2011

Local data exchange with NFC and Bluetooth

One of the exciting technologies being shown off at Google's I/O conference this year was near field communication or NFC. It certainly got my interest, so I attended an excellent talk on NFC. Here's a video of the talk:

One of the things mentioned in the talk was that you did not want to use NFC for any kind of long running, verbose communication. Its range was too short and its transfer speed was too slow. Bluetooth was the way to go for such data exchange, so what you really wanted to do was an NFC-to-Bluetooth handoff. It was even mentioned that the popular Fruit Ninja game did this, or would do this in the future. Earlier this week at Bump we had our second hackathon. I decided that local communication using NFC and Bluetooth would make for an interesting hackathon project. So based on what I had learned from the I/O presentation, the examples in the Android SDK, and a tip from Roman Nurik, here's some code on how to do the NFC-to-Bluetooth handoff to set up a peer-to-peer connection between two phones and exchange data between them.
We'll start with the NFC pieces. You want the phone to do two things. First, it needs to broadcast an NFC "tag". This tag can have whatever information you want in it. In this case we will have it send all of the information needed to set up a Bluetooth connection: the Bluetooth MAC address for our phone plus a UUID for our app's connection. You can add more stuff to the tag as well, but these two parts are sufficient. Technically you could do without the UUID, but you'll want this in case other apps are using a similar technique. Here is some simplified code for generating an NFC text record:
public static NdefRecord newTextRecord(String msg) {
    byte[] langBytes = Locale.ENGLISH.getLanguage().getBytes(
            Charset.forName("US-ASCII"));
    byte[] textBytes = msg.getBytes(Charset.forName("UTF-8"));
    // The first byte of an RTD_TEXT payload is a status byte; here it just
    // holds the length of the language code ("en").
    char status = (char) langBytes.length;
    byte[] data = new byte[1 + langBytes.length + textBytes.length];
    data[0] = (byte) status;
    // Payload layout: status byte, language code, then the UTF-8 text.
    System.arraycopy(langBytes, 0, data, 1, langBytes.length);
    System.arraycopy(textBytes, 0, data, 1 + langBytes.length,
            textBytes.length);
    return new NdefRecord(NdefRecord.TNF_WELL_KNOWN, NdefRecord.RTD_TEXT,
            new byte[0], data);
}
This code only handles English/ASCII characters. Take a look at the Android samples for a more generic approach. Next we need to get the Bluetooth MAC address to pass in to the above function. That is simply: BluetoothAdapter.getDefaultAdapter().getAddress(). Now we can create the text record to broadcast using NFC. To do this, you need to be inside an Android Activity:
@Override
public void onResume() {
    super.onResume();
    NfcAdapter adapter = NfcAdapter.getDefaultAdapter(this);
    // code to generate a string called msg with the MAC address, UUID, etc.
    NdefMessage message = new NdefMessage(new NdefRecord[] { newTextRecord(msg) });
    adapter.enableForegroundNdefPush(this, message);
    // more code to come later
}
In this code there is a String called msg whose construction I didn't show. It holds the Bluetooth MAC address, the UUID for your app, plus whatever else you want to include in the NFC broadcast.
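Here's a minimal sketch of how msg might be built. The delimiter-separated format, the buildHandoffMessage name, and the app UUID constant are my own illustrative choices, not anything prescribed by the NFC or Bluetooth APIs:
private static final UUID MY_APP_UUID =
        UUID.fromString("8ce255c0-200a-11e0-ac64-0800200c9a66"); // hypothetical app-specific UUID

private String buildHandoffMessage() {
    // MAC address, app UUID, and a timestamp, separated by "|"
    String mac = BluetoothAdapter.getDefaultAdapter().getAddress();
    long timestamp = System.currentTimeMillis();
    return mac + "|" + MY_APP_UUID.toString() + "|" + timestamp;
}
Now when your app loads, it will use NFC to broadcast the info needed for the Bluetooth handoff. The app needs to not only broadcast this information, but also listen for it: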
@Override
public void onResume() {
    // see above code
    PendingIntent pendingIntent = PendingIntent.getActivity(this, 0, new Intent(this,
            getClass()).addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP), 0);
    IntentFilter ndef = new IntentFilter(NfcAdapter.ACTION_NDEF_DISCOVERED);
    try {
        ndef.addDataType("*/*");
    } catch (MalformedMimeTypeException e) {
        throw new RuntimeException("fail", e);
    }
    IntentFilter[] filters = new IntentFilter[] { ndef, };
    String[][] techLists = new String[][] { new String[] { NfcF.class.getName() } };
    adapter.enableForegroundDispatch(this, pendingIntent, filters, techLists);
}
This code configures an NFC listener using an IntentFilter and a type of NFC tag (there are many.) It uses a PendingIntent that points back to this same Activity. So when an NFC tag that matches our criteria (based on the IntentFilter and tag type) is discovered, an Intent will be fired and routed to our Activity (because that's the Activity we put in the PendingIntent.) Now we just need to override the onNewIntent method of our Activity, since that is what will be invoked when an NFC tag is encountered:
@Override
public void onNewIntent(Intent intent) {
    NdefMessage[] messages = getNdefMessages(intent);
    for (NdefMessage message : messages) {
        for (NdefRecord record : message.getRecords()) {
            String msg = parse(record);
            startBluetooth(msg);
        }
    }
}
public static String parse(NdefRecord record) {
    try {
        byte[] payload = record.getPayload();
        int languageCodeLength = payload[0] & 0077; // lower 6 bits of the status byte hold the language code length
        String text = new String(payload, languageCodeLength + 1,
                payload.length - languageCodeLength - 1, "UTF-8");
        return text;
    } catch (UnsupportedEncodingException e) {
        // should never happen unless we get a malformed tag.
        throw new IllegalArgumentException(e);
    }
}
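The getNdefMessages helper used in onNewIntent above isn't shown in this post; a minimal version, along the lines of the Android NFC sample, might simply pull the messages out of the intent's extras:
private NdefMessage[] getNdefMessages(Intent intent) {
    // NFC foreground dispatch delivers the tag's messages as a Parcelable array extra.
    Parcelable[] rawMsgs = intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
    if (rawMsgs == null) {
        return new NdefMessage[0];
    }
    NdefMessage[] messages = new NdefMessage[rawMsgs.length];
    for (int i = 0; i < rawMsgs.length; i++) {
        messages[i] = (NdefMessage) rawMsgs[i];
    }
    return messages;
}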
For our example, there should only be one NdefMessage received, and it should have exactly one NdefRecord, the text record we created earlier. Once we get the message from the NFC tag, it's time to start the Bluetooth connection. Bluetooth uses sockets and requires one device to act as a server while the other acts as a client. So if we have two devices setting up a peer-to-peer Bluetooth connection, which one is the server and which is the client? There are a lot of ways to make this decision. What I did was have both phones include a timestamp as part of the NFC tag they broadcast. If a phone saw that its timestamp was smaller than the other's, then it became the server. At this point you will want to spawn a thread to establish the connection.
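Here's a sketch of that decision, assuming the "MAC|UUID|timestamp" format from my earlier sketch (myTimestamp and handler would be fields on your Activity; this is illustrative, not the code from the hackathon project):
String[] parts = msg.split("\\|");
String remoteMac = parts[0];
UUID uuid = UUID.fromString(parts[1]);
long remoteTimestamp = Long.parseLong(parts[2]);

BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
if (myTimestamp < remoteTimestamp) {
    // Our timestamp is smaller, so we take the server role.
    new ServerThread(adapter, adapter.getAddress(), uuid, handler).start();
} else {
    new ClientThread(adapter, remoteMac, uuid, handler).start();
}
Here's the Thread I used for the server: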
public class ServerThread extends Thread {

    private final BluetoothAdapter adapter;
    private final String macAddress;
    private final UUID uuid;
    private final Handler handler;

    public ServerThread(BluetoothAdapter adapter, String macAddress, UUID uuid,
            Handler handler) {
        this.adapter = adapter;
        this.macAddress = macAddress;
        this.uuid = uuid;
        this.handler = handler;
    }

    @Override
    public void run() {
        try {
            BluetoothServerSocket server = adapter
                    .listenUsingInsecureRfcommWithServiceRecord(macAddress,
                            uuid);
            adapter.cancelDiscovery();
            BluetoothSocket socket = server.accept();
            server.close();
            CommThread comm = new CommThread(socket, handler);
            comm.start();
        } catch (IOException e) {
            // Ignored here for brevity; a real app should surface or log the failure.
        }
    }
    
}
This Thread uses the device's BluetoothAdapter to open up an RFCOMM socket. Once you start listening, you'll want to immediately turn off Bluetooth discovery. This will allow the other device to connect much quicker. The server.accept() call will block until another device connects (which is why this can't be in the UI thread.) Here's the client thread that will run on the other device:
public class ClientThread extends Thread {

    private final BluetoothAdapter adapter;
    private final String macAddress;
    private final UUID uuid;
    private final Handler handler;

    public ClientThread(BluetoothAdapter adapter, String macAddress, UUID uuid,
            Handler handler) {
        super();
        this.adapter = adapter;
        this.macAddress = macAddress;
        this.uuid = uuid;
        this.handler = handler;
    }

    @Override
    public void run() {
        BluetoothDevice remote = adapter.getRemoteDevice(macAddress);
        try {
            BluetoothSocket socket = remote
                    .createInsecureRfcommSocketToServiceRecord(uuid);
            adapter.cancelDiscovery();
            socket.connect();
            CommThread comm = new CommThread(socket, handler);
            comm.start();
        } catch (IOException e) {
            // Ignored here for brevity; a real app should surface or log the failure.
        }
    }

}
On the client thread, you find the other device by using its MAC address (the one broadcast over NFC, not the client's own.) Then you connect to it using the shared UUID. On both client and server, we started another thread for the actual communication. From here on out this is just normal socket communication. You can write data on one end of the socket, and read it from the other.
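The CommThread used by both of the threads above isn't shown in the post. Here's one minimal way it could look, assuming a line-oriented text protocol and a Handler on the UI thread (the class name matches the code above, but the body is my own sketch):
public class CommThread extends Thread {

    private final BluetoothSocket socket;
    private final Handler handler;

    public CommThread(BluetoothSocket socket, Handler handler) {
        this.socket = socket;
        this.handler = handler;
    }

    @Override
    public void run() {
        try {
            // Read whatever the other side writes and hand each line to the UI thread.
            BufferedReader in = new BufferedReader(new InputStreamReader(
                    socket.getInputStream(), "UTF-8"));
            String line;
            while ((line = in.readLine()) != null) {
                handler.obtainMessage(0, line).sendToTarget();
            }
        } catch (IOException e) {
            // Connection closed or dropped.
        } finally {
            try {
                socket.close();
            } catch (IOException ignored) {}
        }
    }
}
To send data the other way, you could hold on to socket.getOutputStream() and write to it from the UI thread (or yet another thread), since the socket is bidirectional.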

Monday, May 23, 2011

Not all mobile apps are great to use

I've played ESPN fantasy sports for many years. ESPN also creates one of my favorite mobile apps, their ScoreCenter app (though it could be sooo much better.) However their fantasy baseball app is one of the most frustrating apps out there. It provides access to your fantasy baseball teams. The other way to access your teams is through the website. The website is what sets your expectation of course. When you login to the website, you are first presented with a list of your teams. Once you choose a team, you are shown your team's stats for the day:
This not only sets one's expectations for interacting with your fantasy baseball team, but it really is quite useful. You want to see how your team is doing today when you login. We should expect something similar from the corresponding mobile app. Instead you get this:
This is from the Android app, but the iPhone one is almost identical (which is also a sad state of things.) You start off by having to pick your team. However, when you pick your team, you are presented with the season stats for all of your players. That's not what you want. To get how your team did today, you have to open up this crazy selector, then change the option whose default value is "Season" (we only see the value, not the name of the property) to "Day." This brings us back to the crazy selector, where you select Done and then you finally get today's stats for your team.

What's even worse is that this is the 2011 version of the app. There was a (different) app for 2010. To get the 2011 one, you had to buy it. The 2010 app had the same frustrating UI. Last year at WWDC I talked about this to the developer of the iPhone app (which is exactly as bad as the Android one.) I told them about how this sucks. So not only did the 2010 app not get fixed, but they didn't fix it for the 2011 app either, which was a "new" app that required one to buy it all over again.

Tuesday, May 17, 2011

We All Love Music

Last week at Google I/O, big G finally delivered on Google Music. Well, sort of. As many have pointed out, it's taken a long time to get Music out the door, despite it being announced a year ago. What is most interesting is that it comes after Amazon made a similar offering with its Cloud Drive/Player service. I have now used both services, along with their Android apps, quite a bit. So I thought I'd share my experiences, in no particular order...


  • Uploading 4000 songs takes a long time. That's about how many songs I have on my MBP, and it comes out to a little more than 50 GB. I was one of the lucky attendees of I/O, so not only do I have access to Google Music, but it is currently free. Amazon gives you 5 GB, and 20 GB free if you buy an MP3 album. I did the latter. However, 20 GB is not enough space for me, so I have not uploaded a lot of music to Amazon. I have uploaded it all to Google Music. It took many days, and it tends to wreak havoc on WiFi networks (which should be the subject of a future blog post/rant.)
  • The Android players are good, but both have room for improvement. Google Music has an instant mix feature, similar to the Genius feature in iTunes. I would say that it is better than Genius for several reasons. First, it seems to do fine with "non-standard" songs. I mean stuff like Girl Talk, or remixes and live versions of popular (or not) songs. Genius fails on this consistently, maybe because these are songs (or in the case of Girl Talk, artists) that are not in iTunes? Genius also fails for newer music. Google Music seems to do fine in this situation too. The Cloud Player does not have this feature, and that is a shame. However, it does have an equalizer. This is something that Google Music lacks, and that is a shame. I generally find that mobile devices (and mobile headphones, if you will) especially need equalization. The Amazon EQ is not that great though, as it only has a list of presets (Jazz, Rock, etc.)
  • I don't like listening to music in the browser. For desktop computers, both of these services have you open a browser and listen to music that way. I'd say that Google's is a little better, but they both seem clunky. The Amazon one does not have the equalizer that their Android app has. The sound on the Google one also seems a little better, which is counter-intuitive. It is my impression that Google may downsample your music during playback, based on bandwidth, whereas Amazon plays your music back as-is. Anyways, neither sounds as good as iTunes. Of course they aren't the ginormous mess that iTunes is either.
  • Google Music works better over crappier networks. It seems to do fine over EDGE, even though I *think* I can hear a difference in sound quality. This could be psychosomatic. On the other hand, Amazon has a lot more noticeable pauses.
  • Google Music seems to manage metadata better, both metadata about songs and about collections (albums, playlists, etc.) However, I have heard other users complain about this.
I am generally pleased with both services. Since I was able to upload all of my music to Google for free, I have used it more. However, it has convinced me to upload more of my music to Amazon, and to consider paying for it. That would cost me $100, though, since I have more than 50 GB of music.

So far I have only uploaded music from my laptop. I have about 80 GB of music on my desktop computer, though this is pretty much a superset of what is on my laptop. I am going to start the Google Music uploader on it too. Hopefully it will not upload duplicate copies of the songs the two machines have in common, and will only upload the roughly 30 GB that is not already present. If it handles that well, and I have all of my music on their servers, it will be very tempting to pay for this service once the beta holiday ends.

These cloud based services have made me wish that I could have similar access in my car. I have an old iPod Touch (8 GB) hooked up in my glove box currently. It would be nice to have 10x capacity, but at what cost? Not to mention that the car interface (I have a large Sony head unit with a touchscreen interface) leaves a lot to be desired. That only gets worse with 10x data to deal with.

Tuesday, April 19, 2011

Thoughts on Windows Phone 7

A few weeks ago, I was contacted by a friend from Microsoft about going to a get-together that Microsoft was having. The reason for the meetup was Windows Phone 7. It was going to be at Alexander's Steakhouse, which is one of the best places to eat in the Bay Area -- at least in my carnivorous opinion. So I said that I was interested, even though I had done no development for WP7. He took that as an opportunity to hook me up with an LG Quantum running the latest and greatest version of WP7 and a Programming Windows Phone 7 book. I figured that since MS was being so kind to invite me to such a nice place, I should at least kick the tires on WP7. Here's my take.

First, here's my impression of WP7 and the LG Quantum. I will give MS props for trying out some new user experience patterns. However, I am not a fan. I do not like the left-to-right organization of apps. Often I find myself scrolling down a page trying to read something and accidentally scrolling to the next page on the right. This would just be a minor inconvenience if scrolling back to what I was interested in was quick and easy, but unfortunately the app starts loading another screen (the page on the right) and when I go back to the left one, it has to reload. Not only does this take awhile, but it almost always loses my context of what I was reading on the left page. It really sucks.

The other side effect of this way of organizing screens in an application is that each screen feels narrow. Worse, most screens seem to use a small font. This seems to be true of all of the apps, from the built-in ones by Microsoft to the big-name ones like Facebook, Netflix, Foursquare, and Twitter. I have a much harder time reading text in these apps than I do in their Android or iPhone cousins. I think WP7 is to blame here: these apps are consistent with the platform, and that's why they make me squint.

That last paragraph is clearly influenced by my experience with writing code for WP7. However, I would say that the development situation on WP7 is generally pretty good. The worst part is the "getting started". For me this involved installing Windows 7 on my MBP. Fortunately I already had Parallels on there, but not Win7. Once I had Win7 running under Parallels, I followed the WP7 getting started guide. I was a little annoyed to see this involved a three-step download/install process. First download the WP7 developer tools. Then install the WP7 developer tools January 2011 update. And then install the WP7 developer tools fix! Really? Why is this not all one download? Why does that third download (the "tools fix") even exist?

I've always thought that the Android tools installation process left a lot to be desired. But at least they had a reasonable excuse that IDE support was optional. So you could just install the SDK, but you had additional steps if you wanted to use either Eclipse or IDEA (and you got options here, which is nice.) Things are much better on iOS, but there are a lot fewer options there. WP7 is the worst of all worlds, as there are no options, but you have three installs including a "fix". Oh wait, there are options, but they are for using Visual Basic, which also requires that you use the "Pro" version of the tools. I'm not a big fan of an up-sell when it comes to developer tools. Do you want me to build for your platform? Make it easy and make it free until I want to publish something.

So there's my big gripe with the tools. Otherwise they are very good. If you've ever used Visual Studio, you know that it has a lot of nice features. The book mentioned earlier is a good companion too. Of course when you tie development so closely to a tool, then a book gets old fast, but for now it works. I wrote a neat little program that downloaded and parsed some XML and then used data binding to show the results in a list. Tapping on items in the list could bring up more information or open an external browser link. It was sweet and the tools made the development go by fast. I think this (along with the tightly integrated designer tools) may be WP7's biggest asset. There is no doubt in my mind that if you want to build some very "standard" looking apps, you can do it much faster on WP7 than on Android or iOS. It's not even close.

However the tale of the tools does not have a happy ending. I got my little app to run on the emulator, and I was pleased that the emulator loaded quickly and was responsive. However I don't trust emulators (and you should not either), so I was eager to run the app on the LG Quantum. However when I tried, I got an error saying "Zune software is not installed." Lucky for me that developing for WP7 was just a hobby, so I could laugh at this WTF error.

I have one last critique of WP7, and actually this came from Matt Trunnell. He's a bad-ass WP7/Silverlight developer who works for Netflix and wrote their WP7 app. We met at the MS event I mentioned at the beginning, and he shared some of his perspectives on WP7 with me. I stated that I thought that MS seemed to be trying to "out Apple, Apple". What I meant was that they had followed Apple's lead on most issues, from how the OS ran, to what developers could do, etc. However, Matt made me realize that I was wrong about this. He pointed out that MS had really done a lot of "triangulation" where they had tried to find middle ground between how Apple and Google had done things. An example of this is the multitasking coming to WP7 this fall. MS adopted the pseudo-multitasking (fast app switching) of the iPhone, but also included "Live Agents" that can run in the background, but only in a limited way that is beyond the developer's control.

I think I have become immersed in the Android world so much that Microsoft's triangulation seemed like being very Apple-ish to me. But Matt was right on. MS has very consistently tried to "learn" from Apple and Google and come up with a best-of-both worlds approach. There are some very good things about this kind of strategy (you *should* learn from your competitors) but ultimately it is frustrating. Everything always seems second-rate.

Tuesday, April 05, 2011

Fragmentation? What fragmentation?

Fragmentation is a fun word to use in the mobile space. The devotees of Apple and the iPhone delight in the term because it makes Android aficionados cringe. Heck even the executives at Apple use the term at every opportunity. Some of the folks on the Android team deny it exists. However in a recently published study, most Android developers say it is a real problem. So what's the deal?

As with so many other contentious things in this world, a lot comes down to how you define something. If you write an Android application, you will have to test it on multiple devices and account for their differences, so that is what I will use as my definition of fragmentation. By that definition, Android obviously suffers from fragmentation. Here is a simple example of it:

[Screenshot: the app running on an HTC Thunderbolt]

This is a screenshot of an app running on the HTC Thunderbolt. According to some folks, it is selling quite well. What's the point of the screenshot? Well take a look at the text field right below the "Your phone number" prompt. Here is the code for it:

<EditText
	android:layout_height="wrap_content"
	android:layout_width="fill_parent"
	android:id="@+id/entry_phone"
	android:textStyle="bold"
	android:inputType="phone"
	android:imeOptions="actionNext"
/>

See the problem? WTF is up with those zeros in the field? They clearly should not be there. This was the only phone I could find that produced this problem, but I will have to add code just to deal with this annoyance. This is fragmentation. Code has to be tested on many devices, and you may have to put in some device-specific things to get the app to work the same.
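Just to illustrate what such a device-specific workaround might look like, here's a purely hypothetical sketch. It assumes the bogus zeros come from some device-supplied prefill, and the model string check is an assumption that would need to be verified against the actual device:
EditText phone = (EditText) findViewById(R.id.entry_phone);
if ("HTC".equalsIgnoreCase(Build.MANUFACTURER)
        && Build.MODEL.toLowerCase().contains("adr6400")) { // Thunderbolt model string is an assumption
    phone.setText(""); // clear whatever the device pre-filled
}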

Now don't be fooled, fragmentation exists on all mobile platforms. Yes, even iOS. There is less of it, but it is not insignificant. Fragmentation is quite prominent on almost all application platforms. Don't get any web developer started on IE-specific CSS, for example.

So is it a huge problem on Android? My answer to that question is irrelevant, as the perception that it is a problem is all that matters. I think this perception is completely reasonable. Many Android developers were programming for the iPhone in the past, and were used to testing on a single iPhone and nothing else. Thus Android seems arduous.

Friday, March 18, 2011

My Last Day at eBay

This is The End

Today is my last day at eBay. I have been there for nearly four years and it has been a remarkable experience. When I joined eBay, I had come from a startup called Ludi Labs. One thing I realized while at Ludi Labs was that "scalability" was largely a hypothetical term to me. I thought I understood the keys to making an application function at web scale, but unfortunately it was very theoretical knowledge. None of the startups I had worked at had ever really hit that kind of scale. Hence my interest in working at a large scale web company, like eBay.

Four years later and I realize how very limited my theoretical knowledge of scalability really was. Sure I understood the basic principles, but there are definitely many subtleties involved in implementing these principles. I won't now claim to be a scalability expert, but I have certainly learned some real world lessons.

When I first joined eBay, I worked with the infrastructure team on eBay's homegrown presentation technology. A lot of this technology was based on what works at scale: high volume web pages, working in dozens of different languages, running on servers spread across multiple data centers, and built by hundreds of engineers around the globe. I went on to help many of the different teams at eBay adopt and optimize their use of eBay's presentation technology. As such I got opportunities to work on many of the major pages of eBay: home page, search results, view item, my eBay, just to name a few. Often my experience was limited to helping the team solve some site speed issues, but I still got a chance to understand how each of these highly complex web pages could function at massive scale.

Of course the last couple of years of my career at eBay was dominated by mobile. It's been an exciting time, as our mobile team saw massive growth both in terms of users and their usage (measured using the best possible metric: money) as well as in terms of the team behind the technology. It's had some great moments as well, such as seeing eBay named by Fast Company as the #2 most innovative company in mobile.

This leads to the obvious question of why I am leaving eBay. The future for mobile at eBay is very bright. However it's one of those cases where the change is not about where I'm coming from, but about where I'm going. In this case that's Bump Technologies. I'm very excited about the opportunity at Bump. They already have a great product that they've built in just two years. However, I think that two years from now the current product will look very limited in comparison. It's going to be an exciting time.