Wednesday, June 29, 2011

Web Apps, Mobile Web Apps, Native Apps, ... Desktop Apps?

There's never any shortage of people debating mobile web apps vs. native apps. I don't want to waste your time with yet another rehash of this debate. Personally I sum it up as a choice between user experience and cost. But obviously I am biased -- I build native apps and even wrote a book on how to do it. So you shouldn't believe anything I have to say about this topic; it must be wrong. Anyways, one of the interesting lines of reasoning that often comes up in this debate is how web apps replaced desktop apps. I want to examine this a little more closely.

I'm actually writing this blog post from a Chromebook that I got for attending the Google I/O conference last month. This device is perhaps the logical conclusion of the web apps replacing desktop apps axiom. However, I have a problem with that axiom. It is often based on the emergence of popular websites like Yahoo, Google, Amazon, and eBay. The argument is that these apps were web based and the fact that they ran on servers that could be rapidly updated is a key to why they succeeded. The long update cycle of desktop software would have made it impossible for these apps to be anywhere but in a browser.

There is some truth in this, but it's misleading. The most important factor in the success of those apps was that their data was in the cloud. They brought information and interactions that could not exist simply as part of a non-connected desktop app. They had business models that were not based on people paying to install the software. These were the keys. The fact that end users interacted through a web browser was secondary. It's pretty much the same story for newer super popular web apps like Facebook and Twitter.

Going back to the world of mobile for a minute... One of the things that mobile has taught us is that users don't care so much about how they interact with your "web-based" application. Hence the popularity of native apps for web giants like Amazon and Facebook. In fact, some might even argue that users prefer to use native apps to interact with traditional web properties. I won't argue that, but I would definitely disagree with any claims that users prefer to use a browser.

Anyways, the point is that the notion that web apps replaced desktop apps is dubious. In fact, in the places where web apps have tried to replace desktop apps outright, such as word processing, they have had limited success. Currently we see a movement to replace music players like iTunes with web apps. These web apps have some distinct advantages, but it is not at all clear that they will prove popular with users. Apple has taken the approach of adding more network based features (store and download your music from their servers) to iTunes instead of going with a web app -- at least for now.

Connected desktop apps have a lot to offer. I often find myself using desktop apps like Twitter, Evernote, Sparrow (for GMail), and Mars Edit (for Blogger.) They provide a better experience than their browser based cousins. Apple's Mac App Store has definitely made such apps more popular on the Mac platform, as they have made it easier to discover, purchase, and update such apps. Speaking of updates, these apps update frequently, usually at least once a month. Again, I think that our expectations have adjusted because of mobile apps.

So will desktop apps make a comeback? Are mobile web apps doomed? I don't know. I think it is very unclear. It's not a given that a computer like this Chromebook that I'm using is "the future." We can haz network without a browser.

Saturday, June 18, 2011

This is the Year of Music

Last month I wrote a bit about cloud music. Since then Apple got in the game with iTunes in the Cloud. I'm not a fan of it because you have to manage what songs are downloaded to your devices before you can listen to them. Of course having to download a song before you can listen to it is not surprising from a company that sells hardware priced by storage capacity. If you didn't have to download to use your media, why would you spend an extra $100 on the 32gb iWhatever instead of the 16gb version? Still you gotta give Apple props on the price point. I am optimistic that competition around price and features between Apple, Google, and Amazon will be very beneficial for consumers.

Anyways, while the music-in-the-cloud market is obviously a very hot market being contested by three of the biggest technology companies in the world, there is a lot more interesting innovation around music going on in technology. This blog post is about two of my favorite music startups/services: Rdio and Turntable.fm. There's tons of tech press on each of these companies, so I won't go into that. I'm just going to talk about why I like them.

Rdio launched in late 2010, and I've been a paying customer since January 2011. It's conceptually similar to subscription services like fake Napster and Rhapsody. What I like about it is how great it is to use on mobile devices. I have it on my Nexus S, on my Nexus One that I listen to when I run/work-out, and on the iPod Touch that I have plugged in to my car. Now you might be thinking: how can I use this on an iPod in my car with no internet connection? Well you can mark songs/albums/playlists to sync to your devices and be usable with no connection. So I can use the Rdio Mac app, mark some songs, and then just connect my iPod or Nexus One to a wifi network, and the songs get synced. Then I can listen to them anytime. I regularly read reviews of new music on Metacritic, listen to some of the music I find there on Rdio, and then sync it to my devices. Then I can listen to it while running or driving to work.

Speaking of discovering new music, that is the sweet spot of Turntable. It's pretty new, I only heard about it earlier this month and started using it this week. The idea of being a DJ for the room may sound lame at first -- and maybe it is -- but it is also awesome. However I will warn you to resist being a DJ. You will be drawn in and waste massive amounts of time. It's completely unnecessary too. Like I said, Turntable is awesome for music discovery. Go to a room that sounds interesting (avoid the Coding Soundtrack unless you really enjoy electronica/dance AND don't mind witnessing pissing contests measured by playing obscure remixes) and enjoy. There's a good chance that you will hear familiar music, but an even better chance you will hear something you've never heard before. The DJs try hard to impress, and you win because of it. It's like Pandora, but better, with no ads, and ... you always have the option to enter the chat or even take your turn at DJ'ing.

Sunday, June 12, 2011

I hate your smartphone

When people talk about smartphones, they often mean iPhones and Android phones. Sure there are still Blackberries out there, I think I once saw a guy using a webOS phone, and Microsoft has 89,000 employees who have to use Windows Phone 7, but it's mostly an Android and iPhone world right now. If you have a phone running Android or iOS, life is pretty good. You've probably got a really powerful phone, with a huge number of apps and games to choose from. You've also got a web browser that's more advanced than the one used by most desktop computer users. So given all of those things to be happy about, why is it that smartphone owners are so hostile to each other?

Can't we all get along?

iPhone users love to look down their noses and make derisive comments about Android users. Not to be outdone, Android users never miss an opportunity to mock iPhone users. There is an obvious parallel here to Mac vs. Windows, but I think it's actually much nastier than Mac vs. Windows ever was. Here's a list of my theories (in no particular order) on why there is such animosity.

  1. It's so easy to do. The truth is that both iOS and Android are deeply flawed. Going back to the Mac vs. Windows comparison, think about how much less mature these smartphone OS's are compared to Windows and OSX. So if you have any inclination to make fun of either OS, it's embarrassingly easy to do. This is where things really differ from Mac vs. Windows. There's not much to debate there. If you wanted to say "Mac sucks", there is a short list of things that you can point to. Not true for iOS or Android.
  2. Social networking. In this day and age of social networks, your "friends" shove what they are doing in your face constantly. Smartphone apps make heavy use of this, as a way to spread virally. But what happens when there is an impedance mismatch caused by apps available on one OS but not the other? The folks with the unsupported OS get a little annoyed, and the other folks get an artificial feeling of l33tness.
  3. It starts at the top. Apple and Google both frequently take shots at each other. They do it at developer conferences. They do it when reporting quarterly earnings. It doesn't stop there. Motorola, Verizon, T-Mobile and others have all taken shots at Apple in commercials seen by millions.
  4. Big decisions. You only have one smartphone (unless you are weird like me) and in the US, you usually buy it on a two-year contract. So you have to pick just one, and then you are stuck with it for two years. Thus if anybody suggests that you made a bad decision, you will most likely defend it vehemently.
  5. The Apple effect. So this really is straight out of Mac vs. Windows. Apple wants their products to seem elite and luxurious. They want the owners to feel like they have purchased far superior products, and to feel that they (the users) are superior because of this. It's brilliant marketing, and it totally works. The air of superiority hangs over any Apple product owner, but especially iPhone users. So naturally any non-iPhone users are going to react negatively to the haute attitudes of iPhone users.

Friday, June 10, 2011

Rallying the Base

This week was WWDC 2011. Last year I was lucky enough to attend what appears to be the final stevenote. This year I followed along online, like much of Silicon Valley. There are a lot of reasons why so many of us who work in the tech world pay such close attention to the WWDC keynote. This is the place where Apple typically unveils innovative hardware and software. However, this year's event reminded me of another watershed moment in recent US history: John McCain choosing Sarah Palin as his VP candidate back in 2008.

These two events were similar because they were both examples of rallying the base. In 2008, McCain decided against trying to appeal to moderate Republicans, Democrats, and independents who were either inclined to vote for Obama or undecided. Instead he went with Palin, a candidate who did not appeal to those people. The idea was to appeal to the most conservative elements of the Republican party and get those people to vote instead of staying home for whatever reason. Obviously this did not work.

So how was WWDC 2011 a rallying of the base tactic? Apple did not try to introduce new software or hardware that would get non-iPhone owners to go out and buy an iPhone, or more strongly consider buying an iPhone the next time they were in the market for a new phone. Instead they did their best to make sure that current iPhone owners continued to buy iPhones. The strategy was two-fold.

First, they needed to shore up the places where they were weak and others were strong. Now let's be honest here, by others we are talking about Android/Google. There were a couple of glaring problems with iOS 4. First was notifications, so Apple essentially adopted Android's model here. Second was the dependency on iTunes, the desktop software application. They introduced wireless sync and their iCloud initiatives to address this weakness. Apple did not break any new ground in any of these areas, they simply removed some obvious reasons for people to buy an Android device over an iPhone.

Phase two of rallying the base was to increase lock-in. If you are an iPhone user, you already experience lock-in. Buying an Android phone means losing all of your apps and games. Depending on what you use for email, calendar, etc. you might also lose that data too. Of course this is true to some degree about any smartphone platform. However, with the expansion of the Android Market (I've seen many projections that it will be bigger than the App Store soon), pretty much every app or game you have on your iPhone can be found on Android. Further, there's a good chance that it will be free on Android, even if you had to pay for it on the iPhone. Further, with the popularity of web based email, especially GMail, you probably would not lose any emails, calendar events, etc. So the lock-in was not as high as Apple needed it to be. Enter iCloud and iMessaging.

As many have noted, iCloud/iMessaging does not offer anything that you could not get from 3rd party software. Syncing your docs, photos, email, calendar, etc. is something that many of us already do, and that includes iPhone users. Further many folks already have IM clients that do everything that iMessaging does. The big difference is that all of those existing solutions are not tied to iOS or OSX. Thus they are cross-platform (no lock-in) but that also means that you have to add this software to your devices. It's very nice for users to not have to worry about installing Dropbox, Evernote, or eBuddy. But the obvious win for Apple here is the lock-in. If you start relying on Pages for writing docs and sync'ing them across devices, you are going to be very reluctant to buy anything other than an iPhone (and a Mac for that matter.) If you get used to using iMessaging to chat with your other iPhone toting friends, same thing.

Apple is keeping the cost of using all of their new offerings very low. It's a classic loss leader strategy. It's ok if iCloud and iMessaging both lose a lot of money for Apple. If they can just lock-in existing iPhone users, they can continue to make huge profits. In that scenario, it's ok for Google/Android to have N times the number of users as Apple. The Apple users won't be going anywhere, and they spend a lot of money. Seems like a smart strategy by Apple. It should work out much better than it did for the Republican party in 2008.

Saturday, June 04, 2011

The Concurrency Myth

For nearly a decade now technology pundits have been talking about the end of Moore's Law. Just this week, The Economist ran an article about how programmers are starting to learn functional programming languages to make use of the multi-core processors that have become the norm. Indeed inventors of some of these newer languages like Rich Hickey (Clojure) and Martin Odersky (Scala) love to talk about how their languages give developers a much better chance of dealing with the complexity of concurrent programming that is needed to take advantage of multi-core CPUs. Earlier this week I was at the Scala Days conference and got to hear Odersky's keynote. Pretty much the second half of his keynote was on this topic. The message is being repeated over and over to developers: you have to write concurrent code, and you don't know how to do it very well. Is this really true, or is it just propaganda?

There is no doubt that the computers that we buy are now multi-core. Clock speeds on these computers have stopped going up. I am writing this blog post on a MacBook Air with a dual-core CPU running at 2.13 GHz. Five years ago I had a laptop with a 2.4 GHz processor. I'm not disputing that multi-core CPUs are the norm now, and I'm not going to hold my breath for a 4 GHz CPU. But what about this claim that it is imperative for developers to learn concurrent programming because of this shift in processors? First let's talk about which developers. I am only going to talk about application developers. What I mean are developers who are writing software that is directly used by people. Well maybe I'll talk about other types of developers later, but I will at least start off with application developers. Why? I think most developers fall into this category, and I think these are the developers that are often the target of the "concurrency now!" arguments. It also allows me to take a top-down approach to this subject.

What kind of software do you use? Since you are reading this blog, I'm going to guess that you use a lot of web software. Indeed a lot of application developers can be more precisely categorized as web developers. Let's start with these guys. Do they need to learn concurrent programming? I think the answer is "no, not really." If you are building a web application, you are not going to do a lot of concurrent programming. It's hard to imagine a web application where one HTTP request comes in and a dozen threads (or processes, whatever) are spawned. Now I do think that event-driven programming like you see in node.js will become more and more common. It certainly breaks the assumption of a 1-1 mapping between request and thread, but it most certainly does not ask/require/suggest that the application developer deal with any kind of concurrency.
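To make the event-driven point concrete, the model node.js popularized can be sketched in a few lines of plain Java. This is a toy illustration of the idea, not anything from node.js itself: a single thread pulls queued handlers off a queue and runs them one at a time, so the code inside the handlers never needs locks or extra threads.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class EventLoop {
    private final Queue<Runnable> tasks = new ArrayDeque<Runnable>();

    // handlers are queued and run one at a time on a single thread,
    // so the code inside them never deals with locks or shared state
    public void post(Runnable task) {
        tasks.add(task);
    }

    public void run() {
        while (!tasks.isEmpty()) {
            tasks.poll().run();
        }
    }

    public static void main(String[] args) {
        EventLoop loop = new EventLoop();
        final StringBuilder log = new StringBuilder();
        loop.post(new Runnable() {
            public void run() { log.append("request 1 handled; "); }
        });
        loop.post(new Runnable() {
            public void run() { log.append("request 2 handled; "); }
        });
        loop.run();
        System.out.println(log); // both "requests" ran sequentially on one thread
    }
}
```

A real event loop would block waiting for I/O events instead of draining a queue and exiting, but the application-facing contract is the same: you hand it callbacks, and concurrency stays in the infrastructure.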

The advancements in multi-core processors have definitely helped web applications. Commodity app servers can handle more and more simultaneous requests. When it comes to scaling up on a web application, Moore's Law has not missed a beat. However it has not required all of those PHP, Java, Python, Ruby web developers to learn anything about concurrency. Now I will admit that such apps will occasionally do something that requires a background thread, etc. However this has always been the case, and it is the exception to the type of programming usually needed by app developers. You may have one little section of code that does something concurrent, and it will be tricky. But this has nothing to do with multi-core CPUs.

Modern web applications are not just server apps though. They have a lot of client-side code as well, and that means JavaScript. The only formal concurrency model in JavaScript is Web Workers. This is a standard that has not yet been implemented by all browsers, so it has not seen much traction yet. It's hard to say if it will become a critical tool for JS development. Of course one of the most essential APIs in JS is XMLHttpRequest. This does indeed involve multiple threads, but again this is not exposed to the application developer.

Now one can argue that in the case of both server side and client side web technologies, there is a lot of concurrency going on but it is managed by infrastructure (web servers and browsers). This is true, but again this has always been the case. It has nothing to do with multi-core CPUs, and the most widely used web servers and browsers are written in languages like C++ and Java.

So is it fair to conclude that if you are building web applications, then you can safely ignore multi-core CPU rants? Can you ignore the Rich Hickeys and Martin Oderskys of the world? Can you just stick to your PHP and JavaScript? Yeah, I think so.

Now web applications are certainly not the only kind of applications out there. There are desktop applications and mobile applications. This kind of client development has always involved concurrency. Client app developers are constantly having to manage multiple threads in order to keep the user interface responsive. Again this is nothing new. This has nothing to do with multi-core CPUs. It's not as if app developers used to do everything in a single thread, and only now that multi-core CPUs have arrived do they need to start figuring out how to manage multiple threads (or actors or agents or whatever.) Now perhaps functional programming can be used by these kinds of application developers. I think there are a lot of interesting possibilities here. However, I don't think the Hickeys and Oderskys of the world have really been going after developers writing desktop and mobile applications.
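The pattern those client developers have always used can be sketched in plain Java. The class and method names here are invented for illustration, not taken from any particular UI toolkit: one thread stands in for the UI/event thread, and slow work runs on workers so the UI never blocks.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ResponsiveUi {
    interface Callback { void onResult(String result); }

    // one thread stands in for the UI/event thread
    private final ExecutorService uiThread = Executors.newSingleThreadExecutor();
    // worker threads do the slow work so the UI stays responsive
    private final ExecutorService workers = Executors.newFixedThreadPool(2);

    public void loadData(final Callback cb) {
        workers.submit(new Runnable() {
            public void run() {
                // pretend this is a network call or disk read
                final String result = "fetched data";
                // marshal the result back to the thread that owns the UI
                uiThread.submit(new Runnable() {
                    public void run() { cb.onResult(result); }
                });
            }
        });
    }

    public void shutdown() throws InterruptedException {
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);
        uiThread.shutdown();
        uiThread.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Android's Handler, Swing's invokeLater, and the rest all boil down to this shape: do the work elsewhere, deliver the result to the single thread allowed to touch the UI.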

So if you are a desktop or mobile application developer, should you be thinking about multi-core CPUs and functional programming? I think you should be thinking about it at least a little. Chances are you already deal with this stuff pretty effectively, but that doesn't mean there's no room for improvement. This is especially true if language/runtime designers started thinking more about your use cases.

I said I was only going to talk about application developers, but I lied. There is another type of computing that is becoming more and more common, and that is distributed computing. Or is it called cloud computing? I can't keep up. The point is that there are a lot of software systems that run across a cluster of computers. Clearly this is concurrent programming, so bust out the functional programming or your head will explode, right? Well maybe not. Distributed computing does not involve the kind of shared mutable state that functional programming can protect you from. Distributed map/reduce systems like Hadoop manage shared state complexity despite being written in Java. That is not to say that distributed systems cannot benefit from languages like Scala, it's just that the benefit is not necessarily the concurrent problems/functional programming that are often the selling points of these languages. I will say that Erlang/OTP and Scala/Akka do have a lot to offer distributed systems, but these frameworks address different problems than the multi-core concurrency.
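The map/reduce point can be seen in miniature in plain Java rather than Hadoop (all names here are invented for illustration): each worker counts words in its own slice using purely local state, and only a final single-threaded merge combines the results. There is no shared mutable state to protect, and no functional language required.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MiniMapReduce {
    // map phase: count words in one slice, touching only local state
    static Map<String, Integer> countSlice(List<String> words) {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (String w : words) {
            Integer c = counts.get(w);
            counts.put(w, c == null ? 1 : c + 1);
        }
        return counts;
    }

    public static Map<String, Integer> wordCount(List<List<String>> slices)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(slices.size());
        List<Future<Map<String, Integer>>> futures =
                new ArrayList<Future<Map<String, Integer>>>();
        for (final List<String> slice : slices) {
            futures.add(pool.submit(new Callable<Map<String, Integer>>() {
                public Map<String, Integer> call() { return countSlice(slice); }
            }));
        }
        // reduce phase: merge the independent per-slice results on one thread
        Map<String, Integer> total = new HashMap<String, Integer>();
        for (Future<Map<String, Integer>> f : futures) {
            for (Map.Entry<String, Integer> e : f.get().entrySet()) {
                Integer c = total.get(e.getKey());
                total.put(e.getKey(), (c == null ? 0 : c) + e.getValue());
            }
        }
        pool.shutdown();
        return total;
    }
}
```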

It might sound like I am an imperative-programming-loving curmudgeon, but I actually really like Scala and Clojure, as well as other functional languages like Haskell. It's just that I'm not sure that the sales pitch being used for these languages is accurate/honest. I do think the concurrency/functional programming angle could have payoffs in the land of mobile computing (desktop too, but there's not much future there.) After all, tablets have already gone multi-core and there are already a handful of multi-core smartphones. But these languages have a lot of work to do there, since there are already framework features and common patterns for dealing with concurrency in mobile. Event driven programming for web development (or the server in client/server in general) is the other interesting place, but functional languages have more to offer framework writers than application developers in that arena. My friend David Pollak recently wrote about how the current crop of functional languages can hope for no more than to be niche languages like Eiffel. I think that he might be right, but not just because functional programming has a steep learning curve. If all they can offer is to solve the concurrency problem, then that might not be enough of a problem for these languages to matter.

Friday, June 03, 2011

Local data exchange with NFC and Bluetooth

One of the exciting technologies being shown off at Google's I/O conference this year was near field communication or NFC. It certainly got my interest, so I attended an excellent talk on NFC. Here's a video of the talk:

One of the things mentioned in the talk was that you did not want to use NFC for any kind of long running, verbose communication. Its range was too short and its transfer speed was too slow. Bluetooth was the way to go for such data exchange, so what you really wanted to do was an NFC-to-Bluetooth handoff. It was even mentioned that the popular Fruit Ninja game did this, or would do this in the future. Earlier this week at Bump we had our second hackathon. I decided that local communication using NFC and Bluetooth would make for an interesting hackathon project. So based on what I had learned from the I/O presentation, the examples in the Android SDK, and a tip from Roman Nurik, here's some code on how to do the NFC-to-Bluetooth handoff to set up a peer-to-peer connection between two phones and exchange data between them.
We'll start with the NFC pieces. You want the phone to do two things. First, it needs to broadcast an NFC "tag". This tag can have whatever information you want in it. In this case we will have it send all of the information needed to setup a Bluetooth connection: the Bluetooth MAC address for our phone plus a UUID for our app's connection. You can add more stuff to the tag as well, but these two parts are sufficient. Technically you could do without the UUID, but you'll want this in case other apps are using a similar technique. Here is some simplified code for generating an NFC text record:
public static NdefRecord newTextRecord(String msg) {
    byte[] langBytes = Locale.ENGLISH.getLanguage().getBytes(
            Charset.forName("US-ASCII"));
    byte[] textBytes = msg.getBytes(Charset.forName("UTF-8"));
    char status = (char) langBytes.length;
    byte[] data = new byte[1 + langBytes.length + textBytes.length];
    data[0] = (byte) status;
    System.arraycopy(langBytes, 0, data, 1, langBytes.length);
    System.arraycopy(textBytes, 0, data, 1 + langBytes.length,
            textBytes.length);
    return new NdefRecord(NdefRecord.TNF_WELL_KNOWN, NdefRecord.RTD_TEXT,
            new byte[0], data);
}
This code only handles English/ASCII characters. Take a look at the Android samples for a more generic approach. Next we need to get the Bluetooth MAC address to pass in to the above function. That is simply: BluetoothAdapter.getDefaultAdapter().getAddress(). Now we can create the text record to broadcast using NFC. To do this, you need to be inside an Android Activity:
@Override
public void onResume() {
    super.onResume();
    NfcAdapter adapter = NfcAdapter.getDefaultAdapter(this);
    // code to generate a string called msg with the MAC address, UUID, etc.
    NdefMessage message = new NdefMessage(new NdefRecord[] { newTextRecord(msg) });
    adapter.enableForegroundNdefPush(this, message);
    // more code to come later
}
In this code there is a String called msg that I didn't show how it was generated. It would have the Bluetooth MAC address, as well as the UUID for your app, plus whatever else you want to include in the NFC broadcast. Now when your app loads, it will use NFC to broadcast the info needed for the Bluetooth handoff. The app needs to not only broadcast this, but also listen for this information as well:
@Override
public void onResume() {
    // see above code
    PendingIntent pi = PendingIntent.getActivity(this, 0, new Intent(this,
            getClass()).addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP), 0);
    IntentFilter ndef = new IntentFilter(NfcAdapter.ACTION_NDEF_DISCOVERED);
    try {
        ndef.addDataType("*/*");
    } catch (MalformedMimeTypeException e) {
        throw new RuntimeException("fail", e);
    }
    IntentFilter[] filters = new IntentFilter[] { ndef, };
    String[][] techLists = new String[][] { new String[] { NfcF.class.getName() } };
    adapter.enableForegroundDispatch(this, pi, filters, techLists);
}
This code configures an NFC listener using an IntentFilter and a type of NFC tag (there are many.) It uses a PendingIntent for this same Activity. So when an NFC tag that matches our criteria (based on the IntentFilter and tag type) is found, an Intent will be fired and routed to our Activity (because that's the Activity we put in the PendingIntent.) Now we just need to override the onNewIntent method of our Activity, since that is what will be invoked when an NFC tag is encountered:
@Override
public void onNewIntent(Intent intent) {
    NdefMessage[] messages = getNdefMessages(intent);
    for (NdefMessage message : messages) {
        for (NdefRecord record : message.getRecords()) {
            String msg = parse(record);
            startBluetooth(msg);
        }
    }
}
public static String parse(NdefRecord record) {
    try {
        byte[] payload = record.getPayload();
        int languageCodeLength = payload[0] & 0077;
        String text = new String(payload, languageCodeLength + 1,
                payload.length - languageCodeLength - 1, "UTF-8");
        return text;
    } catch (UnsupportedEncodingException e) {
        // should never happen unless we get a malformed tag.
        throw new IllegalArgumentException(e);
    }
}
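Stripped of the Android classes, you can see the payload layout that newTextRecord and parse agree on: one status byte holding the language-code length, then the language code, then the UTF-8 text. Here is that layout restated as plain Java (the class and method names are hypothetical, but the byte handling mirrors the two functions above):

```java
import java.io.UnsupportedEncodingException;

public class TextRecordLayout {
    // encode the payload the same way newTextRecord does
    static byte[] encode(String lang, String text)
            throws UnsupportedEncodingException {
        byte[] langBytes = lang.getBytes("US-ASCII");
        byte[] textBytes = text.getBytes("UTF-8");
        byte[] data = new byte[1 + langBytes.length + textBytes.length];
        data[0] = (byte) langBytes.length; // status byte: language code length
        System.arraycopy(langBytes, 0, data, 1, langBytes.length);
        System.arraycopy(textBytes, 0, data, 1 + langBytes.length,
                textBytes.length);
        return data;
    }

    // decode it the same way parse does
    static String decode(byte[] payload) throws UnsupportedEncodingException {
        // 0x3F is the same mask as the octal 0077 used in parse()
        int languageCodeLength = payload[0] & 0x3F;
        return new String(payload, languageCodeLength + 1,
                payload.length - languageCodeLength - 1, "UTF-8");
    }
}
```

Round-tripping a string through encode and decode is an easy way to convince yourself the offsets are right before putting the bytes into an NdefRecord.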
For our example, there should only be one NdefMessage received, and it should have exactly one NdefRecord, the text record we created earlier. Once we get the message from the NFC tag, it's time to start the Bluetooth connection. Bluetooth uses sockets and requires one device to act as a server while the other acts as a client. So if we have two devices setting up a peer-to-peer Bluetooth connection, which one is the server and which is the client? There are a lot of ways to make this decision. What I did was have both phones include a timestamp as part of the NFC tag they broadcast. If a phone saw that its timestamp was smaller than the other's, then it became the server. At this point you will want to spawn a thread to establish the connection. Here's the Thread I used:
public class ServerThread extends Thread {

    private final BluetoothAdapter adapter;
    private final String macAddress;
    private final UUID uuid;
    private final Handler handler;

    public ServerThread(BluetoothAdapter adapter, String macAddress, UUID uuid,
            Handler handler) {
        this.adapter = adapter;
        this.macAddress = macAddress;
        this.uuid = uuid;
        this.handler = handler;
    }

    @Override
    public void run() {
        try {
            BluetoothServerSocket server = adapter
                    .listenUsingInsecureRfcommWithServiceRecord(macAddress,
                            uuid);
            adapter.cancelDiscovery();
            BluetoothSocket socket = server.accept();
            server.close();
            CommThread comm = new CommThread(socket, handler);
            comm.start();
        } catch (IOException e) {}
    }
    
}
This Thread uses the device's BluetoothAdapter to open up an RFCOMM socket. Once you start listening, you'll want to immediately turn off Bluetooth discovery. This will allow the other device to connect much quicker. The server.accept() call will block until another devices connects (which is why this can't be in the UI thread.) Here's the client thread that will run on the other device:
public class ClientThread extends Thread {

    private final BluetoothAdapter adapter;
    private final String macAddress;
    private final UUID uuid;
    private final Handler handler;

    public ClientThread(BluetoothAdapter adapter, String macAddress, UUID uuid,
            Handler handler) {
        super();
        this.adapter = adapter;
        this.macAddress = macAddress;
        this.uuid = uuid;
        this.handler = handler;
    }

    @Override
    public void run() {
        BluetoothDevice remote = adapter.getRemoteDevice(macAddress);
        try {
            BluetoothSocket socket = remote
                    .createInsecureRfcommSocketToServiceRecord(uuid);
            adapter.cancelDiscovery();
            socket.connect();
            CommThread comm = new CommThread(socket, handler);
            comm.start();
        } catch (IOException e) {}
    }

}
On the client thread, you find the other device by using its MAC address (not the client's.) Then you connect to it using the shared UUID. On both client and server, we started another thread for communication. From here on out this is just normal socket communication. You can write data on one end of the socket, and read it from the other.
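The CommThread class isn't shown above, so here is a hedged sketch of the idea. It uses plain TCP sockets on localhost instead of BluetoothSocket, and returns the received line instead of posting it to an Android Handler; all of the names are invented for illustration. But once each side has a connected socket, the read/write logic is exactly what a Bluetooth CommThread would do:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class CommDemo {
    // once either side has a connected socket, communication is symmetric:
    // write lines on the output stream, read lines from the input stream
    static void send(Socket socket, String msg) throws IOException {
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        out.println(msg);
    }

    static String receive(Socket socket) throws IOException {
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));
        return in.readLine();
    }

    // round-trip a message through a loopback "server" and "client"
    public static String roundTrip(String msg) throws Exception {
        final ServerSocket server = new ServerSocket(0); // any free port
        Thread serverThread = new Thread() {
            public void run() {
                try {
                    // blocks until a client connects, like
                    // BluetoothServerSocket.accept() in ServerThread above
                    Socket s = server.accept();
                    send(s, receive(s)); // echo back whatever arrives
                    s.close();
                } catch (IOException ignored) {}
            }
        };
        serverThread.start();
        Socket client = new Socket("127.0.0.1", server.getLocalPort());
        send(client, msg);
        String reply = receive(client);
        client.close();
        serverThread.join();
        server.close();
        return reply;
    }
}
```

In the real app you would hold on to the reader and writer instead of rebuilding them per message, and post each received message to the UI via the Handler that ServerThread and ClientThread were given.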