As the first wave of images flows onto the internet featuring the elite and lucky few who have gotten their Explorer Editions of Google Glass, much of the discourse over Glass has focused on the intrusiveness and awkwardness of wearing the product. Is privacy an issue if you don’t know whether people are recording? Is it really going to be socially advisable to wear this ridiculous piece of electronics above your eye? These are appropriate questions to ask, but they also sidestep the larger question of what Glass is and why you would want it. Has Google ever explained why they think this is a viable product that people should want? Is that reason compelling or even accurate?
It turns out that they have. In all of Google’s marketing material and in speeches from their top executives, one theme has definitely emerged as their selling point for Glass. It isn’t necessarily a smartphone replacement but rather a new way to experience and capture the world without the distractions that come with looking down at a phone. By putting the screen at the top of your line of sight, along with a camera, Google believes that you can experience the world with fewer distractions. Check your text messages without looking down. Record what you’re seeing without fishing for your phone.
A central tenet of Google’s belief that Glass will be less distracting seems to be rooted in the idea that reducing friction between you and your device is the secret to being less distracted. On the face of it (pun fully intended), this seems to make sense, but a few moments of historical reflection show that their messaging is incredibly misguided.
If taking more effort to check your notifications equals more distraction, we should be able to look a little further back in our technological ancestry to test this hypothesis. Prior to the smartphone revolution, the PC was the primary mode of computing. Everything that came in from and went out to the Internet flowed through that box. Originally, it was a stationary computer that sat in one room of the house. Later, it turned into a mobile device that could be transported throughout the house and even outside it. If requiring more effort to check your messages equals more distraction, one would expect the desktop computer to be the ultimate form of distraction: you had to actually go into a specific room to take care of your computing needs. Sure, there was a chorus of parents who complained about their children being on the computer too long, but these complaints were hardly in the “everyone is always distracted” vein. When laptops became more prominent, we started to see the first examples of people in the living room and kitchen being distracted by having a screen in front of them while others were trying to talk or watch TV.
The shift from desktops to laptops wasn’t especially momentous in terms of distractions. The Internet was still growing into a dominant force in our lives and the change in form factor was not especially big. The real dramatic change came with the smartphone revolution (read: iPhone revolution in 2007). All of a sudden, people had access to the Internet everywhere. It was always within reach, just a pocket or purse away. Now, maybe my memory is faulty, but I don’t remember the rash of articles and complaints about constant distractions prior to the ubiquity of iPhones and Android devices. Having a smartphone always accessible was absolutely responsible for creating a culture of people standing in line, absorbed in their own world.
In other words, the lack of friction between you and your device actually increases the level of distraction to everyday life. A computer that’s out of sight does nothing to draw you in and distract you. A smartphone is far worse in that regard, but leaving it in your pocket or bag and not checking it is a clear sign that you’re engaged with the world around you. How does one keep the information flow from Glass at arm’s length? It’s right in front of your face all the time. How can you easily guarantee the person you’re speaking to that you won’t be distracted the second something comes in?
There may be plenty of reasons to use Google Glass that have nothing to do with the issue of distractions. Some of the camera features seem compelling. Perhaps navigation will prove to be easier as well. What is very clear, however, is that the claim that Glass will be less distracting, less disruptive to your life, is anything but true. If Glass is the next paradigm of computing, it will continue down the path of technology becoming more intrusive, pervasive and, yes, distracting. Google’s messaging could not be more off-base or disingenuous in this respect.
Apple CEO Steve Jobs died today at age 56. In many ways, it is the ultimate absurdity of modern American life that the passing of the head of the world’s most valuable corporation would be mourned by millions in the way that a friend would be. Respect for a business icon is to be anticipated, but the sense of personal loss at the death of one seems almost inappropriate and dismissive of the true personal relationships we all have. In the age of the internet and 15 minutes of pop-culture fame, the lionization and personalization of celebrities has reached unparalleled levels. We feel as if we know people we would otherwise never even have access to. It seems all too predictable that the passing of the man who introduced the personal computer and set in motion the increasingly pervasive culture of connectivity would impact people, at least initially, in a manner similar to losing a real friend.
The fact that anyone who never met Jobs would feel this impact is a testament to his uniqueness as an executive icon. The man who co-founded Apple at age 21 was never cut from the same cloth as an oil executive from Exxon Mobil, the company Apple passed a few months ago to establish itself as the world’s most valuable company. Though Steve eventually grew older than the company he kept in Cupertino, CA, he maintained a youthful presence that belied the fact that he had already fought one battle with cancer. There’s no question that Jobs’s personal charisma contributed immensely to the public perception of him as a pied piper who could not only lead people into a new age but was willing to explain why things were the way they were. His role as not just the company’s top executive but also its top spokesman endeared him to millions.
Beyond his presence and charisma, however, lies the true connection that Jobs made with the public. That connection was built on Apple’s products, almost all of which were designed solely to serve users. Apple’s focus on consumer products is unparalleled in the computer industry, but the importance of the products goes beyond the scope of a single industry. Jobs helped define eras and revolutions that ended up meaning something to individuals. Can you get more personal than the Apple II and the Macintosh, the leaders of the “personal computer” revolution? It turns out that you can. The iPod became a constant companion for millions over the last decade, a device that carried users’ intimate music libraries and playlists of the moment. Apple’s first revolutionary handheld device was not only an electronic reflection of one’s taste, but a precursor to the most personal product that exists in consumer electronics today.
The introduction of the iPhone in 2007 was more than just an entrance into the phone market. While smartphones had existed before, the iPhone was a realization of the potential of a device that could not only keep you connected, but knew everything about you. Every major smartphone today looks like an iPhone. What is an iPhone to its owner? It’s a phone and a media player, but it’s also a computer in your pocket. It can connect you with people by voice or by text. It’s built around your email, your calendar, your contacts and your photos. It can take your pictures and your video and it allows you to share them with your friends on your chosen network. It knows who you are and where you are. It can access your money and it can answer almost any question instantly. And more and more every day, it is with you at all times.
You can’t get more personal than that. So while it may seem odd to mourn the death of an ex-CEO and founder of what is today a giant corporation, the usual rules don’t apply here. Most of us never knew Steve Jobs and can’t fully appreciate the life of this great, but flawed, individual. Nevertheless, his vision and drive led directly to the products we live with and the connected world that we take for granted. Combine that with a public persona that was as engaging as any we’ve ever seen and the loss of Steve Jobs hangs heavy on many of us. Odd as it may seem, for millions who never knew Steve Jobs, this is a personal loss.
If months ever felt like years, this is one of those times: it was only in June that Steven Sinofsky, President of Microsoft’s Windows Division, gave a first look at Windows 8 at the D9 Conference. What we saw back then was only a snapshot of Windows 8, and what we heard was only a broad, intentionally vague discussion of it. Microsoft promised that much more would be answered at their BUILD conference in September. They didn’t let us down this week, as Sinofsky took developers through a very long keynote highlighting more features of Windows 8 and the ways to develop for it. As if that and another few days of sessions weren’t enough, Microsoft released a developer preview to the public for anyone to download. We took our own turn with it in our walkthrough of Windows 8 on a laptop. With a few days to reflect on our time with the OS and what Microsoft has been willing to tell us, it’s time to turn our attention to the broader questions of how this new direction may play out for Microsoft.
After that initial look in June, we speculated that Windows 8 threatens to actually do harm to the three elements of the Windows ecosystem. In our editorial, we separated out traditional desktops, tablets and phones as the areas that could each be negatively affected in their own way. Much of the piece was speculative by necessity, because Microsoft wasn’t willing to tell us everything at the time. So now that we know more, how are we doing?
We identified two missteps in Microsoft’s strategy: 1) the Metro interface is not useful to the traditional desktop user or business and is actually inferior, and 2) the desktop experience is not being improved in any meaningful way.
1) After using the Metro Start screen and the associated Metro apps, I’m now more convinced than ever that this is not a useful desktop interface. It’s visually appealing but significantly less productive, and it won’t be of much use to businesses or individuals for work-related tasks. It’s clearer than ever that while Metro works with a mouse and keyboard, it’s absolutely built for touch.
2) The Windows Desktop has been deprecated in Windows 8. Even if Microsoft allows OEMs to default to the traditional desktop for enterprise machines, Windows 8 offers almost nothing of value that we’ve seen so far. If BUILD is any indication, Microsoft is entirely focused on Metro, and the lack of changes in the desktop reflects that. The most noticeable change is the inclusion of the Microsoft Office ribbon UI in Windows Explorer, something that’s being met with derision. Now, enterprise customers will have to look at this and decide: is it worth it to upgrade from Windows 7 to Windows 8? The answer is a clear no, and Microsoft has another Vista on its hands. Remember, there are countless businesses and government entities still using Windows XP.
We criticized that 1) Windows 8 is not the lightweight, simple OS that consumers want for tablets, and 2) the application languages available to developers pale in comparison to the native development available for the iPad.
1) One of the first things Sinofsky addressed in his keynote was the fear that Windows 8 is going to be too heavy for low-power machines. To prove his point, he pulled out the old ThinkPad he’d used to demo Windows 7, showing that Windows 8 hasn’t ballooned into an unmanageable size. The only problem is that nobody cares how well it runs on an Intel CPU. The real action in the tablet space is on ARM. Intel keeps promising lower power consumption, but at this point they have to actually prove it and have failed to do so. Microsoft won’t really demonstrate anything running properly on ARM, so they have failed to put anyone’s mind at ease there.
Even worse is the experience available to consumers. So far we’ve seen every indication that consumers will not be able to immerse themselves completely in the Metro environment. In every demonstration, including Microsoft’s, people would get thrown into the traditional desktop by necessity. After using it myself and comparing my experiences to other reviewers’, I’m even more disappointed that escaping the desktop seems next to impossible. Some have argued that ARM tablets won’t have the desktop enabled, but the message has been very convoluted, and Microsoft is not helping at all in that department. At this point, it seems a stretch to say that the desktop won’t be available on ARM. If that were the case, Microsoft would have said so, and in this case it’s subtraction by addition.
We said that Microsoft is making a mistake by keeping separate its existing phone OS and planned tablet OS.
This is actually bearing itself out, but in a different and worse way than we originally thought. For starters, Silverlight is the dominant development framework for apps on Windows Phone, and it is not in Microsoft’s plans at all moving forward. Not giving developers the opportunity to build for both phones and tablets with Silverlight will drive people away from it at a time when Windows Phone is struggling mightily to gain any type of traction. Making Metro a familiar experience for millions of users could help Windows Phone in the long term, but that won’t happen for well over a year, which, as we noted, could be too late for Windows Phone.
But it gets even wilder. There have been a number of reports that Microsoft plans to eventually make Windows 8 the OS for its phones, with a unified code base. The amount of effort that would go into that is enormous. It might also cause problems with existing apps and again force developers to change course. And all of that puts aside the question of why they would put something as large as Windows on their phones. It harkens back to the problems they had keeping Windows Mobile fresh as a phone OS because of their need for a unified platform that ran on everything. If Microsoft does this, it couldn’t possibly happen for another year. In another year, if Windows Phone has any momentum, this could hurt it, and hurt it unnecessarily. It might be a decent long-term play, but timing is crucial in the phone market right now, and time is running out for Windows Phone.
One more thing
It’s worth mentioning that I think the new Windows Metro environment looks great and feels fresh. The question isn’t whether it’s appealing. The question is whether the way Microsoft is executing will be effective for its business and ecosystem as a whole. There are a number of market forces threatening Microsoft that are beyond its ability to control. There will be time to evaluate those factors, but for now it’s enough to ask whether they are doing everything they should at such a critical time.
After their BUILD conference unveiling of Windows 8 on Tuesday, Microsoft released a developer preview of Windows 8 for public download. After a little finagling, I managed to get it up and running on a Dell Studio 17 that has served me well over the years. This particular machine runs a 2GHz Core 2 Duo processor with 4GB of RAM. In every respect, it’s a boring and fairly outdated computer. The screen is 17 inches, but for some reason this install wouldn’t let me bump the resolution above 1220 x 768. As a result, everything looked a little stretched and worse than it needed to, but the new Windows 8 Metro interface was still a pleasure to look at and use.
After a day of tablet demonstrations online, it was interesting to use the new Windows on a traditional laptop. It’s hard to get a full picture at this early stage, but it’s clear that the Metro interface is built for touch. On a traditional computer, it’s visually appealing but fairly cumbersome to use. Aggravating this is Microsoft’s insistence that it fall back to the traditional Windows desktop for certain tasks and apps. There will be much more to say over the next year, but for impressions and a walkthrough after only 15 minutes of playing with the system, check out the videos below!
It has to stop. No, really. It has to stop. What could I be referring to? The mammoth, monster, gigantic phones. Over the last two years, we’ve seen an explosion of smartphone screen sizes that is unquestionably out of control. Two years ago, the largest screen of note on the market was the 3.5” LCD of the iPhone 3G. The most high-end Android phone was the HTC Hero with a 3.2” screen. Yes, the HTC HD2 was a 4.3” Windows Mobile monster on T-Mobile, but nobody cared about that. More on that in a moment. Just months later, the Motorola Droid would hit the market and change the game for Android and the entire phone landscape. It had a 3.7” screen, but that wasn’t considered all that important at the time; it was just a slightly larger screen.
It was 2010 that changed everything. In March, HTC and Sprint introduced the HTC Evo 4G. The Evo was the HD2’s screen and most of its design repackaged as an Android superphone. It was an interesting jump, to say the least. Hardly anything 4” or above had even been announced, but all of a sudden Sprint was dropping this mammoth phone on the public. Did it have more to do with the fact that HTC had already made this screen and resolution for the HD2? Probably. Why was the HD2 so big anyway?
As Steve Jobs so clearly put it in his D8 appearance last year, when you throw out the stylus and intend for your phone to be operated by a finger, you have to throw out your UI and rebuild it (i.e., bigger) for the precision of a finger. Microsoft was late in doing this, which is why OEMs like HTC had to figure out a way to make a Windows Mobile phone work with a capacitive screen. The answer? Make it absurdly big so that people can actually hit all those tiny close boxes and small elements in Windows Mobile. The result was the last major Windows Mobile 6.5 phone to be released: the HTC HD2.
That’s all well and good for Windows Mobile, but the byproduct of that experiment was that the concept of a bigger screen fit very well with the Android arms race that was just taking off. Build a bigger phone and you may stand out more in the store. Sure, it will be a bit large for most pockets and ultimately uncomfortable for most hands, but that’s all stuff the consumer has to worry about after the purchase. The larger screen size had an even bigger benefit in that it masked some of the failings of the Android operating system at the time. In Android 2.1, the accuracy of the screen digitizer and software was often lacking, and all of the available keyboards were terrible. Make the screen and all the targets bigger, and it will be easier to type. For people with large hands, the iPhone was never comfortable to type on, and a number of them felt more at home with these larger screens.
Of course, when the Evo came out, it was matched spec-wise by current devices like the Nexus One and the Droid Incredible, both of which sported 3.7” screens. After all, isn’t Android all about choice? Well, say goodbye to that concept. Samsung’s premier line for 2010 was the Galaxy S. The 4” Super AMOLED screen on the Galaxy S was considered by many to be a sweet spot of sorts. Unfortunately, the device manufacturers couldn’t have cared less about customers who found 3.7–4.0” to be ideal. The Droid X came out in July 2010 with a 4.3” screen and was even taller than the Evo overall. But the real madness didn’t begin until this year.
For starters, there hasn’t been a single premier device released this year smaller than 4”. The Droid Incredible 2 was a fine device that was bumped up to 4” from its previous generation, but it is hardly the latest and greatest in smartphone technology. No, the premier lines this year are the Samsung Galaxy S II, the Motorola Photon, the Droid Charge, the HTC Evo 3D, the HTC Sensation and the Droid Bionic (still to come). All of those phones are 4.3”. What was considered giant and niche last year is now the standard high-end phone. When Samsung released their oddly-positioned Infuse on AT&T with its 4.5” screen in the spring, everyone laughed at them taking it to the extreme. No longer. The T-Mobile and Sprint versions of the Galaxy S II are sporting the same 4.5” screen. The rumored Nexus/Droid Prime is also expected to have a 4.5” screen. That’s right, the next flagship “Google experience” phone is going to be 4.5”.
And it’s only getting madder. HTC just announced a 4.7” Windows Phone, and Samsung just introduced its absurd 5.3” Galaxy Note. While some people who have handled these devices claim that the edge-to-edge screens and thinness make them easier to handle than you might think, don’t be fooled. That’s still 4.7” of real estate that you have to interact with. Last I checked, notifications in Android and Windows Phone were both handled at the very top of the screen. Of course, for Samsung this isn’t as much of a problem, since they’re including what everyone wants: a stylus.
The madness has to stop. It seems obvious by now that Apple is going to introduce a slightly larger-screened iPhone in a month. What you won’t see is them stepping anywhere near 4.3” or 4.5”. And yet, something tells me their phone will still run really fast and have great battery life. If you want that on an Android device, it looks like it’s time to start working on hand stretches and stylus technique.
MG Siegler says he’s seen and used the Amazon tablet. According to Siegler, the tablet will be Kindle-branded, 7” and running Android, but not the Google-sanctioned kind. As we detailed in our 2-part series on Google and Open Source, Google has compatibility standards and other requirements that Android manufacturers must meet. For those whom Google chooses to turn away, or those who choose to go in a different direction, every version of Android through Android 2.3 has been open-sourced for anyone to take and manipulate for their own needs. In April, we laid out how the open-source nature of Android provides an opportunity for a major player to fork the OS and make it something of their own. We specifically named Amazon as the company ready to take Google’s loose ball and run with it.
Apparently, that’s what’s going on here. Amazon has taken Android from somewhere around version 2.2 and created something of its own. To chart its own path forward, it’s almost a given that they’ll have put considerable work into fundamental parts of the OS beyond the kernel. Make no mistake about it: Google will not be sanctioning this product or working with Amazon, because this device has the potential to be significantly disruptive to the Android ecosystem.
Amazon has already laid down the tracks for anyone hoping to make Android something of their own. Their App Store for Android devices has proven to be popular and compelling despite controversy with some of the participating developers. Already, the App Store has many of the most compelling Android apps available, diminishing the need for the Google-controlled Android Market. While everyone anticipated this becoming a core element of an Amazon Android tablet, it’s already proven to be an alternative for those who don’t want to play by Google’s rules. Witness the excommunicated HTC Merge and its release on smaller carriers with the Amazon App Store. Perhaps more importantly, witness Fusion Garage’s new Grid 10 and Grid 4 devices. The Grid OS devices are entirely forked versions of Android that carry the Amazon App Store as their alternative.
Admittedly, the Grid 10 is probably not going anywhere commercially, but that’s entirely our original point about Amazon being the company of choice. Only a company with the clout and money of Amazon can take Android and make it a compelling alternative, not just for consumers but for developers. We originally predicted that Nokia could do something similar, but they obviously ended up going in a different direction. Amazon, on the other hand, has very little need to create an ecosystem designed around selling hardware as its main profit generator. Amazon’s core business has always been selling physical and digital goods, and that’s exactly what it sounds like this new Kindle will be geared towards.
Which leads to another amusing facet of this new development. Barnes & Noble makes the closest competitor to this product, a completely customized version of Android on a 7” screen that is locked down to serve as a conduit for their content. You can bet Amazon will learn from B&N’s mistakes by doing a better job of locking down the Kindle to prevent easy hacking. You can also bet that Amazon will not be allowing other bookstores on its device. While companies like Sony complained months ago about Apple’s new App Store rules regarding in-app purchases, Amazon remained relatively quiet through the process. No doubt they don’t want to find themselves in a Daily Show-style hypocritical stance of demonizing Apple’s rules while not allowing any competitors in their store at all. Google isn’t the only company that’s capable of using Android in a way that’s anything but “open.”
It will be fun to see how Google chooses to address the Amazon threat if the new Kindle does take off. The prospect of the best-selling Android tablet being one controlled by Amazon, with no Google services built in, is a threat Mountain View won’t ignore. How they choose to spin it, most likely in the name of “openness,” will be the main attraction.
And you thought Google buying Motorola Mobility was the biggest story of the week? Not to be upstaged, HP announced today that it is effectively killing its WebOS devices and exploring options to sell or spin off its Personal Systems Group. This is gigantic news, because the Personal Systems Group is the part of HP that makes PCs. Yes, the world’s largest PC manufacturer (by volume) is now looking to stop making PCs. That alone is a story worthy of serious investigation, but frankly, by other sites. The other aspect of all of this is that HP is now in the position of trying to decide what to do with WebOS. What we know is that HP is not going to be making any more WebOS products and is pulling the Veer, Pre 3 and TouchPad. WebOS’s fate lies either in enterprise use or in licensing to a third party that doesn’t care about ecosystems.
Less than four weeks ago, we wrote that it may be time to count HP and WebOS out.
That’s it. It’s done. WebOS is now dead. No, don’t be distracted by the reports and the insistence that it’s in fact not dead. Think somebody is going to license it from HP? Tack on a year or more for a device to ship. Think someone is going to buy WebOS and start cranking out killer phones and tablets? Tack on a year or more for a device to ship.
Yes, we all loved WebOS in our own way.
But it’s time to move on.
This morning, Google announced that it is purchasing Motorola Mobility for $12.5 billion. Google’s largest purchase to date, Motorola brings a portfolio of leading Android devices and a hardware legacy that stretches back almost a century. More importantly, Motorola owns upwards of 12,500 patents and 7,500 patent applications. There’s no question that this purchase has more to do with patent litigation than with creating great Android devices. Google essentially acknowledged this in its press conference this afternoon, calling the acquisition an opportunity to “supercharge the Android ecosystem.” In other words, Google is going to use Motorola’s patents to defend itself and Android licensees against the barrage of attacks from Apple, Microsoft and Oracle.
So much will be written about the patents involved in the coming days. The most compelling aspect of this purchase is how it affects phones and the swirling battle over patents and licensing connected to Android. Another question is how effectively Google will manage the difficult task of making hardware that competes with its licensees. In the meantime, there are some other interesting aspects of this deal that are likely to be overlooked in the crazed furor over mobile.
1. Motorola Mobility makes DVRs
Few people realize that in the Motorola, Inc. split on Jan. 4 of this year, the division that spun off wasn’t just the mobile phone division. Motorola Mobility also includes the video solutions division that is responsible for making the DVRs found in so many American households today. Most people don’t identify their DVR as a Motorola product because the cable provider’s logo typically outshines the bare Motorola logo on the box. All the same, Motorola is one of the two leading manufacturers of the DVR boxes that cable companies use.
So why would Google care? Maybe nobody remembers, but Google tried to launch Google TV as a major platform in 2010 only to have content companies push back. Google has not exactly tried hard to inspire confidence in the platform, but most observers believe they will try to revamp the platform for a future push. The first products were the Logitech Revue and a Sony TV with Google TV built in. While the Revue was too expensive, the Sony had a chance of success before companies started to block Google TV from accessing their sites. The entire experience also suffered from feeling far too much like using a computer on your TV, something most consumers don’t want.
For Google TV to have success moving forward, it will have to find a way to be part of cable companies’ offerings. One way to move that forward would be to use the DVR box as a bargaining chip with cable companies. If Google can not only use Motorola’s TV expertise in the product, but try to find a less aggressive way to work with cable providers and ISPs, they may have a way to make Google TV more relevant.
One thing that may inhibit this from happening is Google’s insistence that they intend to run Motorola as a separate company while taking control of its patents. If they maintain that course, any use of Motorola’s DVR business won’t happen. Google may soon learn, however, that a company that is losing money is less useful than a company with assets that can be integrated.
2. Motorola holds patents that are part of the H.264 patent pool
The battle over Android and patents is not the only battle Google is fighting. Google is also heavily invested in a fight to make its own video standard, WebM, the standard video codec for HTML5 video. In 2010, Google announced that it had acquired On2 Technologies, maker of the VP8 video codec, and open-sourced the codec as WebM. Google’s intention was to usurp the position of H.264 as the industry standard for video encoding. By making it free and open source, Google hoped that open-source browsers such as Firefox and Opera would adopt WebM as their plugin-free option for HTML5 video. Mobilified wrote about the issues surrounding WebM back in January.
The problem is that H.264 has a long history in the industry and has always had titans behind it. Furthermore, large companies like Apple and Microsoft have always resisted using VP8-based code in their products for fear of patent litigation. Their stance has always been that it infringes upon existing patents and is too risky to use. H.264, on the other hand, has the backing of a number of licensees through the MPEG LA. As soon as Google announced WebM, the MPEG LA began gathering interest from their partners to take on WebM.
In July, MPEG LA appeared to have 12 companies on board holding patents believed to be essential to the VP8 standard, the first step toward establishing WebM’s infringement. MPEG LA would not release those companies’ names, but it was assumed that Motorola was on that list, as they were a major contributor to the H.264 patent pool. It would appear, then, that you can strike Motorola from the list of companies willing to contribute to the lawsuit. While this may not matter in the long run for WebM’s future, it is certainly something that will have to be considered, since there is little doubt Motorola held relevant video patents in addition to their phone patents.
Motorola has been around for a long, long time. It would be a shame if this deal only focused on patents and Android phones and didn’t involve using the various parts of Motorola’s business to deliver something interesting to consumers.
Ask a Captivate owner on AT&T what their phone is and they might scramble for the name before coming up with it. Ask an Epic 4G owner on Sprint what their phone is and they will say it’s the Epic and has an epic keyboard. Ask a Fascinate owner on Verizon what their phone is and they’ll tell you they’re not sure but would be happy to Bing it for you. But ask a Galaxy S owner in Europe what their phone is and they’ll give you the answer anyone could have supplied for the above — it’s a Samsung Galaxy S!
At least, that’s how the story is supposed to go. And in reality, Samsung did a great job last year pulling off something extremely rare in American smartphone launches. They managed to launch the same basic phone on all four carriers in the U.S. at roughly the same time. Sure, you could disqualify the Epic on Sprint because they stuck a keyboard and a WiMax radio on there, but the basics of the phones were the same: a 4-inch Super AMOLED screen, a 1GHz Hummingbird processor and the TouchWiz UI. Although each carrier got in there and messed around to make the phones all look and perform a little differently, there was a synergy across the line that gave the phones an identity.
That identity was a serious win for Samsung. While the stylings of TouchWiz might not appeal to many people (and have landed Samsung in the middle of a trade dress dispute with Apple), the Galaxy S line was a giant success last year. For an OEM that tends to bend over backward for carriers, Samsung did a better job than usual of putting Galaxy S branding on the phones and maintaining the core of the phone. As each new phone rolled out, there was little doubt that it would perform well and meet expectations, even if Verizon chose to mess around with the firmware and replace Google services with those from Microsoft.
A number of people were hoping that Samsung would take that success and go further this year. With the introduction of the Galaxy S II, maybe Samsung would take that phone to U.S. carriers with more clout than before and insist they not make too many customizations. That was always a pipe dream for a company that makes so many variations; insisting on a single model simply isn’t in their DNA. The Galaxy S II line has yet to be formally announced in the U.S., but some details are leaking out that are sure to disappoint many. The diversity (fragmentation?) of the line is worse than last year!
For starters, Samsung confirmed months ago that the Galaxy S II distributed worldwide would have some variations inside. The Galaxy S II is the first phone to showcase Samsung’s own dual-core Exynos processor. Like the Hummingbird processor from last year, the Exynos is a huge part of the story with the Galaxy S II. By all accounts, it seems to be the king of mobile chips, with the possible exception of Apple’s A5 processor. It’s one reason the phone has blown the doors off the benchmarks it’s been tested against and a big part of why it was reviewed so highly by a number of sites upon its European release. Performance matters, and the Exynos chip is a huge part of this. In March, however, ITProPortal reported that Samsung confirmed they are going to ship some Galaxy S II units with Nvidia’s Tegra 2 processors. They intend to minimize confusion by separating the regions the different models are sent to, but that does little for establishing one identity for a handset that carries the same name.
For Samsung, it makes sense from a production standpoint. The Galaxy S sold so well last year that they have to hedge their bets in case they can’t produce enough chips to fill all of the demand for the new phone. Still, if the “canonical” Galaxy S II gets reviewed with certain performance, should Samsung be able to misrepresent its product with an entirely different and inferior processor? This isn’t the first time Samsung has done this. After 3G versions of the original Galaxy Tab failed to make much impact, Samsung released the WiFi-only version in the U.S. at the seemingly low price of $350. What they didn’t bother telling anyone, including retail partners such as Amazon, was that they swapped out the Hummingbird processor for an inferior Texas Instruments OMAP chip. How this is considered acceptable practice is beyond us.
So while we all knew that U.S. carriers would release different variations of the Galaxy S II, it’s a bit surprising that they’re going farther than anyone realized. First of all, it appears that Samsung is going to further mess around with yet another processor type in T-Mobile’s Hercules. The Hercules has now been reported by multiple sources to use a Qualcomm chipset, meaning that Samsung is going to be supporting all three major mobile processor types in their “single” Galaxy S II line. Further confounding everyone is the increasing evidence that the Hercules and the Verizon version, possibly named the Stratosphere, will sport 4.5″ Super AMOLED Plus screens. That’s even larger than the 4.3″ Super AMOLED Plus display on both the European Galaxy S II and the rumored Sprint Epic Touch 4G. Putting aside the question of just how absurd an 800 x 480 resolution looks on a 4.5″ screen, this is just more variation that could impact consistency.
Since this is the U.S., what would a Samsung line be without differences in the radio and internet technologies? Verizon’s version is rumored to have LTE onboard, and Sprint’s Epic Touch 4G will run on WiMax just like last year’s Epic. Even if the Verizon version doesn’t have LTE, a Korean variation has popped up with LTE 800 MHz radio and a 1.5 GHz Qualcomm chipset. Exactly how many variations do they plan on making?
So, let’s recap: Three processor types, two different screen sizes, three different radio technologies, handfuls of frequencies, different hardware chassis, and probably different amounts of RAM. For all of the success Samsung had with the Galaxy S last year, they had a number of problems with screens, radio performance, GPS and other areas. It’s hard to believe that some of that didn’t come from the unnecessary variation within the line. With the expanding differences in the Galaxy S II this year, it’s reasonable to expect a slew of issues as the different phones roll out across the globe and in the U.S. If that happens, Samsung may be glad that they diluted their brand with so many different names and models.
In a world where PC and tablet makers are finding it difficult to compete with Apple on price, it seems natural that the industry would turn to a gimmick to try to find its way in. No, Apple is not undercutting traditional PCs with sub-$500 notebooks, but they are setting a new standard in design that they are forcing the PC competition to chase after. The Macbook Air is on fire right now, becoming Apple’s best-selling Mac in less than a year on the market (we’ll forget about the relatively modest first generation for now). With the new models unveiled last month, the Macbook Air is clearly positioned to become the main, and possibly only, Macbook design sooner rather than later. In response to consumer reaction to the Air, Intel detailed its design for “ultrabooks” in late May. These designs are meant to be taken by PC manufacturers to create cheaper, appealing alternatives to the Macbook Air.
One of the most interesting developments of this design and its roadmap comes from Paul Thurrott on the weekly TWiT podcast Windows Weekly and on his Supersite for Windows. It appears that Intel is encouraging manufacturers to put processing power behind the screen of these ultrabooks, allowing them to create detachable designs that let the user pop off the screen for a ready-to-go tablet. This is not a new concept, as similar ideas float around CES year after year now, making Louderbacks out of people, but this is the first time that we’re seeing any type of major company push towards this form factor. None of these designs have worked before because there was always the issue of what OS the tablet would run after you detached it from the keyboard. Would it be Windows 7? Would it switch to Android? Would it be a terrible custom Linux distro like on Lenovo’s U1 Hybrid? Hardware limitations like battery life have killed these products, but the software limitations have been worse. There hasn’t been one cohesive product to date.
Going into 2012, however, the story is going to get far more interesting. The reason? Windows 8. Windows 8 is a big story if only because it’s Microsoft’s attempt to enter the tablet market by recasting the PC as a tablet OS. Their vision going forward is that Windows 8 will have a touch-first shell and launcher that can also run traditional Windows applications with touch input. As we detailed in our Windows 8 strategic critique, this concept is flawed at its core as a tablet strategy and may even end up hurting their PC innovations. A detachable screen, however, makes a lot of sense for these types of machines. The user interface doesn’t have to change when you remove the keyboard. You can have Windows with a keyboard and all of the usefulness of a mouse, but the product that you carry around as a tablet will not be an unusable brick like the Windows 7 tablets we see today. Since Windows 8 is in many ways built for touch, this detachable touchscreen will be just fine on its own.
At first glance, this is a very appealing concept. Rather than have two devices, why not just have one that can act as the ideal device for any scenario? Type away and use your power applications with the keyboard, then settle down on the couch with just the screen and its HTML5 apps. This might be an upgrade over existing laptops that serve one purpose, but is it really going to be that appealing to most people? The real problem with this new ultrabook idea is that it still doesn’t address the main issues surrounding Windows 8. We laid out the reasons that Windows 8 might be a misstep for Microsoft’s platforms back at the first Windows 8 unveiling. There are still a number of questions to be answered, hopefully at Microsoft’s BUILD conference in September. Unfortunately, from where we stand right now, these ultrabooks actually represent everything that’s wrong with Microsoft’s Windows 8 plans. Their do-everything approach to Windows is flawed at the core and bound to be exacerbated by these new products. The balance between cost and features is going to be that much more difficult to strike.
Without going too much into the territory we covered before about user experience, there are a number of reasons this form factor idea highlights the worst decisions for Windows 8. For starters, aren’t these new ultrabooks supposed to be competitive with the Macbook Air on price? If companies can’t compete right now in materials and design, how are they supposed to compete when they start throwing touchscreens on these computers? Will the design stay competitive if they have to be concerned with cramming processing power and batteries behind the screen?
Battery life is actually one of the main concerns with this detachable tablet idea. Full Windows, with all of its backgrounding and other processing needs, requires a lot of power. It’s one of the main reasons that no Windows tablet has been successful to date. The move to ARM processors might help this, but the two most prominent modern mobile platforms, iOS and Android, handle memory and processing much more efficiently than Windows. To compensate, either the form factor or the battery life will suffer. There’s no way around that.
Today, one of the biggest costs in an iPad or any other tablet is the screen itself. A high-quality touchscreen is a necessity for a tablet. It has to look excellent because you’re holding it close to you. It also has to be accurate and responsive because it’s the only thing you can interact with. If a quality screen is going to be more necessary on an ultrabook/PC than ever before, it’s just one more area where they won’t be able to cut corners in price. Either the ultrabook has a disappointing screen, and therefore a disappointing tablet experience, or it costs more. And there we have another pain point for OEMs trying to do everything with one device.
In so many ways, the ultrabook represents the difference between how Apple and Microsoft approach the tablet. Apple designed a new OS from the ground up to suit what they thought was a more accessible type of device. Microsoft refuses to view tablets as distinct from the personal computer and modified Windows to better fit their concept. Intel’s hopes for the ultrabook may be in keeping with Microsoft’s plans, but they aren’t necessarily the way to create the best product. Ultimately, price and other sacrifices will take it out of the market that consumers have settled into for both iPads and Macbook Airs. Typically, a more expensive and less satisfying device tends to not be that successful in the marketplace.
If the tidal wave of crowds outside Best Buy and Staples stores this weekend hasn’t tipped you off, the HP Touchpad is on sale for $100 off this weekend. This discount brings the 16 GB model down to $399 and the 32 GB model down to $499. As if that wasn’t enough to send the people to the streets, Staples appears to have a $100 coupon available for the Touchpad that can be used in conjunction with the sale to bring the price down even more. At $299, the Touchpad is almost a justifiable buy at this point, and HP might manage to actually move some products, something no iPad competitor has been able to do up to this point.
While this is great for most consumers, it is unfortunately one final insult to some of the most faithful WebOS users. You may remember that early this year HP confirmed that previous WebOS phones, the original Pre, Pre Plus and Pixi Plus, would not be receiving an update to WebOS 2.0. The lack of support for phones that had been on sale so recently struck many as a slap in the face. If that wasn’t enough, it appears extremely unlikely that Sprint will be receiving the Pre 3 later this summer, meaning that the largest group of WebOS users is left further out in the cold unless they change carriers.
To be fair, HP has to make its business decisions independent of its relatively small user base, but the decision clearly struck some people as wrong, including the WebOS team itself. In an interview with Engadget and in a blog post, HP expressed their desire to “make it right” with those early supporters of the platform. While they never officially confirmed how they would follow through, HP recently gave Pre and Pre Plus owners a $50 mail-in rebate on the 32 GB Touchpad. That was a fairly weak offering, particularly since it targeted the 32 GB model, but it was something.
So is this weekend’s firesale. One month after the Touchpad launch and days after the promised software update, HP is now discounting the Touchpad far beyond what its gesture to previous Palm customers offered. If anything, the $50 rebate may have gotten people off the fence to buy a Touchpad, only for them to find out that they could have had one for substantially less. A 32 GB Touchpad could theoretically sell for $150 less today, and the total buy-in cost of going with a 16 GB model would be a whopping $250 less than what Pre owners might have paid for a Touchpad in July.
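Those dollar deltas can be checked with quick arithmetic. The sketch below assumes the figures used in this article ($499/$599 list prices, the $100 weekend discount, the stackable $100 Staples coupon, and the earlier $50 Palm-owner rebate); it is illustrative only, not authoritative pricing data.

```python
# Sanity check of the Touchpad pricing gap described above.
# All prices are assumptions taken from the figures in this article.
LIST_16GB = 499       # launch list price, 16 GB Touchpad
LIST_32GB = 599       # launch list price, 32 GB Touchpad
SALE_DISCOUNT = 100   # this weekend's storewide discount
STAPLES_COUPON = 100  # stackable Staples coupon
PALM_REBATE = 50      # earlier mail-in rebate for Pre/Pre Plus owners (32 GB only)

# What a loyal Pre owner might have paid for a 32 GB Touchpad in July.
rebate_price_32gb = LIST_32GB - PALM_REBATE                  # 549

# This weekend's effective prices with the sale and coupon stacked.
firesale_32gb = LIST_32GB - SALE_DISCOUNT - STAPLES_COUPON   # 399
firesale_16gb = LIST_16GB - SALE_DISCOUNT - STAPLES_COUPON   # 299

print(rebate_price_32gb - firesale_32gb)  # 150: the "$150 less" for 32 GB
print(rebate_price_32gb - firesale_16gb)  # 250: the "$250 less" 16 GB buy-in
```

In other words, a Pre owner who took HP up on its $549 rebate offer paid $150 more than this weekend's 32 GB price and $250 more than the 16 GB buy-in.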
Again, HP can make whatever decisions they feel are in their best interests, but to pretend that they ever made a real attempt at helping out their most ardent supporters is pretty unbelievable at this point. This is the wrong way to make things right.
For anyone who hasn’t listened to it yet, last week’s This American Life was a breathtaking look at the broken system of software patents. Entitled When Patents Attack, the episode educated consumers about how the business of software patents has not only begun to stifle innovation but is becoming a business unto itself. A large portion of the program is devoted to an exposé of sorts about the company Intellectual Ventures, a patent holding company that presents itself as a service for inventors but in practice has become a major litigating force in the technology space. If you haven’t listened to it already, follow the link above and enjoy one of the more accessible introductions to a matter that ends up affecting us more than most people realize.
It’s worth mentioning that the hosts of the show made clear that large companies like Google and Apple are now faced with the necessity of applying for and purchasing patents that could help them in future cases. On the heels of last week, Google’s senior vice president David Drummond penned the blog post When Patents Attack Android, blasting the consortium of companies that outbid Google for the Nortel patent portfolio last month. While everyone is frustrated with the current state of patent litigation in this country, Google is blaming the wrong party by attempting to call out Apple, Microsoft, RIM and everyone else in the group. The truth is, I can’t do a better job of explaining this and pointing out the hypocrisy than John Gruber of Daring Fireball. His article is short and cuts right to the matter. Go have a read.
With the launch of the Touchpad having come and gone a year after HP’s purchase of Palm and WebOS, we’re now at a point where it might be fair to assess the current state of WebOS and its future. HP released the Veer months ago to not so much of a bad reception as a non-reception. The phone was too small to make any impact whatsoever on the market, leaving everyone to hope that the Touchpad and the Pre 3 would be the breakout devices for WebOS. Right now, WebOS’s marketshare sits somewhere between 1% and 3% depending on what sources and numbers you use. Mobilified, like other sites, has been fairly positive about both the quality of WebOS and its future in the mobile space. Nearing the end of July 2011, however, it may be time to revisit that view.
It’s hard to believe that WebOS has now been in the marketplace for over two years. To be fair, those two years have been tumultuous and Palm/HP has faced a number of challenges to delivering a competitive product into the marketplace. The original Pre was lacking in both hardware quality and software refinement. Palm didn’t move fast enough in iterating its products for it to make any impact before they were bought by HP. The transition to HP presumably slowed things down, as they managed to only release the Pre 2 (which might as well have been called the Pre Plus Plus) and the disappointing Veer before the launch of the Touchpad. Now, once again, reviews have been largely negative with a palpable sense of disappointment. In fact, disappointment seems to be a trend when it comes to WebOS, going back all the way to the beginning.
Consider these reviews of the original Palm Pre, way back in summer 2009:
Jason Chen for Gizmodo:
The software is agile, smart and capable. The hardware, on the other hand, is a liability. If Palm can get someone else to design and build their hardware—someone who has hands and can feel what a phone is like when physically used, that phone might just be one of the best phones on the market…
…Impressive start to an OS that should form the base of some quality phones in the future
Hardware quality is lacking, and feels flimsy and plasticky compared to the G1, G2 and the iPhone.
Joshua Topolsky, writing for Engadget:
…Generally speaking, the Pre’s UI makes sense and makes it easy to get things done rather quickly and painlessly. It is an impressive beast, though a beast nonetheless — and that means taming will be in order. We saw plenty of little glitches: messages that wouldn’t pop up (or go away), transitions that hung for a bit, and we definitely had a crash or two. In particular, it seems like Palm still needs to work on memory management — we noticed the device getting a little laggy after a day of heavier use, so we’re thinking not every process is being killed completely.
Keeping us hopeful about these issues is the way in which Palm plans to address them. According to the company, updates for the phone will be made OTA as necessary, which means they’ll be able to put out fires quickly, and respond to customer needs with greater agility than a lot of their competition. We have a feeling we’ll see a handful of fixes just after launch based on our conversations….
…There’s also no guarantee of developer support with this phone. As we mention earlier, Palm needs to stoke those fires or the Pre will quickly be cemented as a tiny island in a large sea. We think the platform looks very promising, but with no big push (yet) to put a great SDK into dev’s hands, and no existing user base for those apps, it’s hard to feel assurance that the software will come.
Last week, I had an unexpected encounter at a Best Buy with a retail representative whose job it was to demo and answer questions about the Samsung Galaxy Tab 10.1. Ordinarily, I wouldn’t really be near the Honeycomb tablets in Best Buy, but I’d wandered in to play with the nearby HP Touchpad. Not being one of the privileged few to receive a unit for an early review, I was dying to get my hands on the only tablet I’ve actually looked forward to this year. Like the early reviewers, I found the Touchpad a pleasure to use but hampered by unimaginative, boring hardware and software sluggishness that has yet to be resolved. In the end, though, it wasn’t the Touchpad that left the most lasting impression from this trip. Rather, it was the conversation I had with a hired gun for Samsung.
Just steps away from the dedicated Touchpad display sat a row of various Android tablets. As I swiped, flipped and otherwise investigated the Touchpad, what appeared to be a Best Buy employee struck up numerous conversations about the Samsung Galaxy Tab 10.1. While the word “iPad” came up numerous times, the employee was giving a fairly hard pitch for the Samsung. Even though there was a row of Android tablets, this representative was clearly only interested in talking about the Galaxy Tab. Given that the Galaxy Tab is the undisputed champion of the Android tablet world, I can’t blame him, but it’s also not hard to spot someone dedicated to selling just one product.
After I finished up with the Touchpad, I moved over to the lonely HTC Flyer sitting next to the Galaxy Tab (side note: Best Buy, if you plan on selling any Flyers, you might want to have its only distinguishing feature, the pen, available for people to use). As I played with it, the rep (who we’ll call James) finished with the customer they were talking to and asked if I had any questions. I said no, I probably don’t, but I am curious about how you’re pitching the Samsung to customers like the ones I’d seen. To a regular consumer, how do you sell them on it? In the process of asking the question, I also inquired for my own benefit about who exactly his employer was.
For the first time in years, Nokia owned the tech news this week as they unveiled their first, and possibly last, MeeGo phone, the Nokia N9, on Monday. This phone, based on Nokia’s MeeGo partnership with Intel, will likely receive no support from the developer community and will probably exist as a one-off project by Nokia, thereby fulfilling Nokia’s obligations to Intel and MeeGo before they transition entirely to Windows Phone. The MeeGo implementation on the N9 is very attractive and shows a number of good ideas about how tasks on a smartphone should be managed. Given the low expectations for MeeGo, the N9 surpassed what even its most ardent supporters were hoping for.
In addition to excitement about the software, reviewers everywhere were excited to see the direction Nokia is going with hardware. Could this be an indication of what the first Nokia Windows Phones will look like? The answer is an unequivocal yes, as yesterday Nokia showed off a prototype of the same basic hardware running Windows Phone Mango, codenamed “Sea Ray.” This controlled leak set off another firestorm of questions about what Nokia was doing and what message it’s trying to convey. In the end, it’s not difficult to see what Nokia is trying to say, but the typical distraction of a good demo is causing some serious over-reaction by the tech press.
Every so often, devices that aren’t set to hit the market are shown off in controlled environments to the awe of the press. In 2011, it’s not difficult to create a basic UI that looks attractive. It’s similarly easy to get demo code to run well for the few tasks you’re showing off. What’s disappointing is that so many people seemed taken with the software to the point that they questioned if Nokia was making the right decision to move to Windows Phone. Even the typically skeptical and savvy Vlad Savov was so overwhelmed that he wrote an editorial opining that Nokia may have jumped ship too early. If Nokia’s own OS is this good, then why be dependent on Microsoft by using Windows Phone?
The truth is, we have no idea how good this is. We saw only basic tasks and apps. We don’t know how it runs in the real world. We don’t know how the live multitasking affects battery life. We don’t know if it requires more power to have a screen that can be woken by a double tap. Worst of all, we don’t know what type and quality of SDK Nokia will give to developers. Is it really competitive with the SDKs of iOS or Windows Phone? These are all questions that might have served the press better when they saw the Palm Pre for the first time. There are a million unanswered questions but only a few demo videos that look good. Nokia would be crazy to go with MeeGo at this point, when high-quality mobile OSes like Windows Phone and WebOS are struggling to stay relevant in a dualistic iOS and Android world.
Way back in December, we predicted that MeeGo could still become a force in mobile. That prediction was, to put it mildly, a little bit off. The mobile world is moving too fast for an unfinished platform at this point. Nokia and Microsoft are both dependent on making an impact with Windows Phone soon or not at all. Nokia knows that MeeGo is a dead end in the phone space, which is why they had no compunction about throwing the N9 under the bus by leaking the “Sea Ray.” They may as well be saying, “So you liked the N9 hardware, huh? Well, here’s a camera key and an OS that we’ll actually support!” They may have gone about it in an odd way this week, but that’s beside the point. In the end, they had to release the N9, but that doesn’t mean they’re not going to make perfectly clear that Windows Phone is their focus going forward.
…and they should.
This week at the All Things Digital D9 conference, Microsoft’s Steven Sinofsky showed off the next version of Windows, codenamed Windows 8. The fresh new UI that they demonstrated is intended to be the new default screen when users log into a Windows 8 machine. The design is very much in line with Microsoft’s “Metro” interface, something that debuted in desktop apps and became the default experience on the Zune HD and now Windows Phone 7. Instead of the traditional Windows desktop, what users now see is a series of tiles laid out horizontally. These tiles can represent anything from applications to specific files to websites. In this sense, the utility of the new Start screen is very similar to Windows Phone 7, where the tiles define the experience and are the primary way to move in and out of tasks.
Make no mistake about it, this new look and feel is the biggest change to Microsoft’s OS since Windows 95. The ideas and design behind it are absolute upgrades that make Windows feel truly modern. The most obvious thing about the new interface is that it’s clearly built for touch and for use on tablets. Unfortunately, Microsoft is taking a unified approach that demands every Windows tablet run a full version of Windows with this new interface layered on top. This strategy is fundamentally flawed from a technical perspective, but what is far worse is that it is strategically harmful to the core ecosystems Microsoft is trying to build and maintain. With this one strategic move, Microsoft is stunting meaningful progress in three product categories: desktops, tablets and smartphones.
With this new interface and approach, Microsoft is essentially halting meaningful progress on the desktop Windows experience. There’s no question that Windows 7 was a commercial and critical success. It was a commercial success by default because Windows has such an enormous installed user base, and it was a critical success because it picked up the pieces from Windows Vista and turned out to be a solidly improved experience. Yet other than a few key UI tweaks, most notably the improved taskbar, Windows 7 was little more than a polished version of the disappointing Windows Vista. At the end of the day, the user experience hasn’t really changed since Windows 95, and industry observers have been calling for something exciting for some time. It’s a shame that Microsoft has chosen this direction for Windows 8, because it’s actually a misstep in two ways.
Just as Google seems distracted by the tablet in improving Android, Microsoft seems overwhelmed by the storm of criticism for not putting out a competitive touch-based operating system. Instead of delivering this new Start menu and touch-based UI in the form of a new tablet OS, Microsoft is currently pitching the new interface as the default that each new system will ship with. I say “currently” because I expect that will change as the launch gets closer and enterprise customers and OEMs complain. The problem with this approach is that the new UI, though attractive, seems very unappealing to use on a traditional computer. There’s hardly any advantage offered by the tile-based Start screen, and the large icons are derivative of touch-based systems that don’t have the advantage of a cursor or keyboard. On a traditional desktop, the new Start screen is wasted space that offers no real utility. What is the purpose of live tiles if you can’t see them all at once the way you could with a desktop widget or gadget? Why have an information-rich lock screen when someone is already sitting down at the computer to do something, rather than pulling a phone out of a pocket for a quick glance?
Of course, Microsoft hasn’t fully committed to this new interface and offers a way to quickly return to the traditional Windows desktop. That would be all well and good if Windows 7 were a perfect desktop OS that didn’t need improvements in its own right. What they showed at D9 was a desktop environment that looks identical to Windows 7 with no notable improvements. Microsoft is certain to make some changes before Windows 8 rolls out, but if we’re being honest, they aren’t likely to be revelatory. The Windows team is clearly focused on moving Windows towards this new paradigm and porting Windows to ARM processors. It’s unlikely that they’re going to have the time or manpower to radically change the desktop, and that’s unfortunate given that Windows still has major problems with window management, inconsistent installers, 3rd-party application consistency and a myriad of other issues that Mac OS X seems to have figured out. This focus on making Windows fit a tablet UI is going to have the effect of slowing down meaningful progress on Windows.
Not that this is going to have any effect on Microsoft’s dominance in the desktop marketplace. They still hold an insurmountable lead and no matter how fast the Mac is growing, it’s unlikely to impact Microsoft’s bottom line in a significant way. This will affect Microsoft’s hardware partners more than anything, since they will continue to be uncompetitive in the $1,000+ space and increasingly feel the pressure on the low end by the growing number of tablet-like devices. Sadly, Microsoft is not offering much to help them out on the tablet front either.
After years of waiting, the world will finally get Microsoft’s answer to the iPad in 2012. It’s not a lightweight operating system like iOS or Android. It’s not an expansion of Windows Phone. No, Microsoft’s answer is to deliver a full Windows PC with a touch-friendly layer on top. The PR coming out of Microsoft insists that this is not just a skin or shell, but if you can touch one button to launch into the full Windows desktop, it’s a skin. If you can push an icon for Microsoft Office and immediately launch into a traditional Windows app, complete with scroll bars, close buttons and the Start Menu in the corner, it’s a skin. Microsoft’s answer to the iPad is a PC with the same baggage underneath that has killed the battery life and performance of every Windows tablet that’s ever been shipped. The difference, it seems, is this new Start menu and the types of applications that can be created for the new environment.
What does this all boil down to? Microsoft is shipping a tablet OS that is not actually optimized for tablets because it runs the entire, bloated Windows OS. This puts tremendous pressure on the hardware and will most likely result in unfavorable comparisons when sized up against the available offerings from Apple and Google. Meanwhile, the ability to run Windows will result in a disjointed experience where applications don’t look right side by side. Just look at the picture at the top of this article. And to top it all off, this insistence on putting a shell over the existing Windows experience leads to a limited development framework for the tablet that is incapable of creating rich applications. If you were a Windows developer and knew that every Windows tablet would have both environments, which environment would you code for? You would use the traditional tools from Microsoft that have given rise to the most dominant software platform on the planet. After all, they’re not letting you create native touch-based apps.
Lost in all of this excitement is Windows Phone, a genuinely exciting offering from Microsoft that is attractive and headed for true competitiveness once the Mango update hits in the fall. Windows Phone shares so much with Windows 8 in terms of design, but is clearly the odd man out of these three categories. Consider that Apple chose to make iOS the singular framework for both the iPhone and the iPad. They did this for two simple reasons: the iPad is a touch-based mobile device, just like the iPhone, and this is now a game of ecosystems. Having major synergy between the iPhone and iPad helps iOS developers and helps Apple maintain developer support. Microsoft absolutely could have delivered tablets based on the Windows Phone OS. Instead, they’ve decided to align their tablets with desktop Windows, further stranding Windows Phone in its battle to gain market relevance.
Why is this a mistake? The tablet market has yet to be defined as a true productivity successor to the desktop. Right now, the most important and fierce battle in computing is in the mobile space. At a time when Microsoft is falling too far behind in phones, they’ve chosen not to help the ecosystem. If Windows Phone is not successful, it will severely hurt Microsoft’s future success, regardless of how well “Windows” does in the short term. If lightweight operating systems like iOS, Android or WebOS are in everyone’s pockets 5 years from now, it will be much harder for Microsoft to make the case that their Windows tablets are essential just because they run legacy apps. Microsoft will end up regretting practically ceding the phone market and not doing everything it can to help it.
Supporting a dying legacy
Given that Windows Phone runs on ARM, it’s particularly odd that Microsoft chose to take this approach. While legacy apps will run on their Intel-based tablets, Microsoft has already announced that legacy apps will not work on the version of Windows 8 that will be ported to ARM. If they are going to have tablets on the market that don’t run legacy apps, then why bother insisting that desktop Windows is in every installation of Windows? They are essentially insisting on legacy app support and all the downsides that come with it, even though many of these devices won’t run these apps because they’ll be built on ARM architectures. It shows a disturbing lack of confidence on Microsoft’s part and an inability to see beyond the scope of Windows as a significant revenue stream.
Whether it’s to protect their current business model or because of a lack of technical ability, Microsoft is putting a clear stake in the ground and signaling how they view tablets and the next generation of Windows. Microsoft is hoping that even though they are coming to the tablet market late they will be able to sell their approach by offering a full computing experience underneath the consumer-friendly tablet layer. For some people, this will prove to be a selling point, and there’s no question that the Metro UI that they’ve chosen is attractive and inviting. What’s going to happen, though, is that enterprise customers will be turned off by the new UI, consumers will be underwhelmed by the encroaching presence of desktop Windows and Windows Phone fans will be left languishing in an unsupported ecosystem.
It’s impossible to write this without drawing a comparison to Apple, who will announce the new versions of both Mac OS X and iOS tomorrow. Apple’s iPhone and iPad ecosystem is thriving and they are not swinging too far to the extreme in bringing iOS elements to their desktop OS. While Lion carries over some familiar elements of iOS, Apple is still clearly differentiating the two types of experiences, and rightly so. Microsoft can’t afford to lose in mobile, yet they’re doing everything they can to be bogged down by their devotion to Windows.
***Addendum: It bears mentioning that this article was outlined in TextEdit on a Mac, which was then put into Dropbox, started in Pages on the iPhone, emailed via Gmail to myself, downloaded on a MacBook Air, entered into WordPress and then finished in WordPress on an iMac. While that’s an unusual workflow and perhaps not ideal, it’s worth noting that the process didn’t come close to touching the Microsoft ecosystem. This is the challenge they face going forward in mobile, where the lock-in to one platform doesn’t exist.
Boy, did Google ever have a lot to announce this week at Google I/O! Every year, Google drops a ton of announcements and demos for eager developers and journalists to look at, and this year was no different. Between the two major keynotes Tuesday and Wednesday, Google announced enough new products and services to take up a week of discussion and debate. You can find a full list of announcements on any major tech site, including this collection of links by Engadget. Any list is going to include the following:
- Android 3.1 Honeycomb demo
- Android “Ice Cream Sandwich” announcement (unification of phone, tablets, and TV) for Q4 2011
- Google Music
- Google Video rentals
- Android partnership with OEMs and carriers to deliver more timely updates
- Android Open Accessory standard
- Android@Home standards for embedding Android
- New Chrome OS Chromebooks by Acer and Samsung
- Netflix and offline access for Chrome OS
- Chrome Web Store improvements (including Angry Birds)
There are some interesting stories hidden within these major announcements. One of the most interesting points that’s gone undiscussed is that the next major version of Android, “Ice Cream Sandwich,” is targeted for Q4 2011. That was definitely not a date set in stone, as the engineer giving the demo said they were “aiming for Q4 2011.” This is a perfectly reasonable timeline, except when you consider that they’ve essentially admitted that Honeycomb will not run on phones and they will not be open-sourcing it. While Google cleans up the code and finishes Honeycomb, phones are on a distinctly different track that may not see any major update for the rest of the year. That doesn’t mean that Android on phones won’t be updated. After all, the Nexus One and Nexus S are both now receiving updates to Android 2.3.4. None of these releases will be terribly feature-packed, though, and you have to wonder why Google isn’t pushing harder to improve Android on phones.
By the time Ice Cream Sandwich likely hits, it will have been about a year since the release of Android 2.3 Gingerbread. A yearly cycle is nothing to sneeze at, of course, since that is what Apple has been on for years now. Apple, in fact, will even stretch past a year by a few months, as the rumors point to a fall launch of iOS 5. Still, Gingerbread is far from a polished phone operating system, and Google has shown the ability to iterate quickly in the past. So what is different with this year’s cycle? In short, Honeycomb and the rush to push out competitive tablets.
Google has shown the ability to have multiple projects running simultaneously. Look no further than this year’s Google I/O for evidence of this, as Google held separate events on back-to-back days to showcase Android and Chrome OS, respectively. When it comes to one product category, however, Google does not have unlimited resources. The Android team can only spread itself so thin as it works on a unified solution for tablets, phones and TVs. Considering the importance of mobile and the fiercely competitive smartphone market, you would think that they would concentrate on improving their phones. Instead, the Android team is now responsible for two potentially failed projects that may serve as distractions: Google TV and Android tablets.
While Google TV has been largely a bust and has the wrong approach to the living room, it needs support from the Android teams as Google soldiers on. Furthermore, it remains to be seen if there is anything called the tablet market for Google to compete in. Right now, it’s an iPad market, and Honeycomb has been a major misstep in the direction of Android. There’s almost nothing compelling about a Honeycomb tablet and Apple’s positioning of the iPad has made it extremely hard to make a compelling case for a competing tablet right now. If Google’s original purpose with Android was to ensure people would continue to use Google Search on phones and use their own services, how necessary are tablets anyway? Is Google competing with tablets simply because Apple built one and they feel they need to chase after the same market? What’s the end goal?
No matter what the motivation is, Honeycomb is now a learning project, as Google is not going to open-source the code and is not going to release it on phones. Without question, Google is spending considerable time just to try and make it work on phones. The question is, “why bother?” What is the killer function of these Honeycomb tablets that requires a major effort to move it to phones? And can it possibly be worth the trouble of integrating the OS, since maintaining Android’s lead in the phone market is more important? The answer is probably no, meaning that no matter how flashy Ice Cream Sandwich looks at the end of this year, you’ll still have to ask yourself how much better it could have been.
This morning, the world saw the public demonstration of Android apps running on the Blackberry Playbook. RIM had previously announced that they would be supporting this functionality, but this was the first chance to show off the process in public. Mobilified expressed concern over this approach in the past, and today’s demonstration has done nothing to change that stance. What’s odd is the level of excitement that overlooks some key problems with both the implementation and the strategy as a whole. This is shaping up to be a costly misstep.
RIM has clearly been building this on the fly over the last couple months, and the fact that they could show anything at this point was somewhat impressive. In fact, the people running the demo seemed well aware of this, as their scripted interplay seemed designed to push back on some of the concerns that had already been raised. I admit, I was surprised that they managed to get individual Android apps to show up in the application list as if they were native apps. I wouldn’t use the word impressed, since that was probably the easiest task. What would have impressed me would have been a deeper level of integration that is missing from the Android implementation. Allow me to explain.
The demonstrators kept stressing that the Android apps are integrated, but I didn’t see that at all. Other than getting them all into the general-purpose app list on the Playbook, the apps live in their own world on the Playbook. Remember, RIM has told us time and time again that the advantage of the Playbook is the UI and the fact that QNX is a powerful, real-time environment that allows for true multitasking. While running Android apps, they only exist in the Android app player and not as separate applications running in the WebOS-like “cards” metaphor. What that means on a practical level is that every time you select an Android app from the app list, it replaces the one that was already running in the app player. There is no such thing as true multitasking when it comes to the Android apps. What’s unclear from the demo is what exactly happens to the state of the app when multiple Android apps are selected in succession. Is the state frozen? Do they keep running in the background? How does the Playbook manage the battery life if they do?
From an aesthetic perspective, you’ve now added a separate layer of interactivity to the apps. The hands-down best thing about the Playbook is moving around the multitasking interface, but none of that exists for Android apps. To return to any app other than the last one used, you have to bring up the menu, select the app list, hit the app and dive into it, breaking you away from the flow that RIM worked so hard to steal from WebOS. Yes, it seems to functionally work, but functionality is not what is selling the iPad right now. What’s selling the iPad is the clean, simple experience, something that RIM at least had a shot at offering in its own way.
Furthermore, they made a point to talk about how easy it is to move an Android app over to the Blackberry App World. Presumably, this will entice Android app developers to submit their apps to RIM, but at what cost? The cost will be to native development on QNX. If RIM thought it would be hard to attract developers to work on QNX before, just wait until everyone realizes they can just use their pre-existing Android app and that will be “good enough.” In the end, it hurts RIM’s platform.
What’s worse, it hurts the opportunity to write native apps that could end up being superior to Android apps. Are there any cross-platform apps that don’t look worse on Android than they do on iOS? Just look at these side-by-side comparisons on Android Gripes. There’s no question that the comparatively weak Android Market, weak SDK and need to support multiple resolutions and hardware has hurt the quality of Android apps, yet this is the catalog that RIM thought it absolutely had to support. If blowing up phone apps, designed partially for portrait use, and putting them in landscape on something you call a tablet is a good idea, then RIM has really knocked this one out of the park. If it’s not, then this smacks of desperation and a lack of confidence in their own platform.
Yesterday, in the middle of what is shaping up to be another lackluster Blackberry World, RIM announced in a press release that they would be opening up support for Blackberry Enterprise Server to the competing platforms of Apple’s iOS and Google’s Android. Dubbed Blackberry Enterprise Solution, the product will be a web-based console that will allow iOS and Android devices to utilize RIM’s industry-leading information and security management. Practically speaking, this will allow IT departments that have been slow to move away from RIM’s services to incorporate those services into iPhones, Android phones and tablets. This means email, first and foremost, but also includes managing, activating and distributing software to devices over the air. In the years when these services were exclusive to RIM’s own hardware, they were one of the major reasons why Blackberry devices themselves sold so well.
Times change, however, and it’s no secret that the Blackberry is losing ground fast to iOS and Android. The Blackberry Playbook has done nothing to break this fall, and RIM itself has acknowledged that they face major challenges going forward with the aging Blackberry OS. There’s been no indication when RIM’s new QNX operating system will make it to phones, and it may be too late by the time that happens. RIM may or may not be moving away from the hardware business as a central focus, but in the meantime the explosion of demand for other devices is definitely putting pressure on them to solidify their place in the enterprise. Yesterday’s announcement is the first step towards doing exactly that.
As with any announcement that integrates companies and services, there are winners and losers. The following is for your tally sheet at home:
RIM Enterprise services: Of course, RIM would never have taken this route if they didn’t think it was necessary to help their enterprise division. It may not be enough to stop the bleeding entirely, as more desirable handsets enter the market with different server options, but it does help maintain their position as much as possible in the short term.
Apple/iOS: While the iPhone and iPad have taken great strides in the last few years working their way into enterprise, there are markets that Apple never would have come close to touching without this kind of credible security support.
Google/Android: Google stands to gain from this move even more than Apple in the sense that Android lacks many of the same security protocols that the iPhone supports and is currently not even considered in the enterprise. This could be a bit of a turning point for Google, though it has to make you wonder how this plays against their stance that Chrome OS is the best “thin client” solution.
Smartphone OEMs: Chained to Google and Android, manufacturers such as HTC, Motorola and Samsung lacked any way of breaking into enterprise without the leadership of Google. In reality, it’s RIM that will be bringing them into the fold through their own Android support.
RIM Hardware division: As much upside as there may be for RIM’s enterprise divisions, there has to be just as much downside for RIM’s hardware. RIM can no longer go to companies and tout their own devices as the only way to access their services. This is just the first in a long line of transitions away from hardware as the focal point of RIM’s business.
HP/WebOS: As if HP didn’t have a hard enough time trying to build a compelling case for its upcoming WebOS products, RIM just took away one of their primary strategies. It’s no secret that RIM’s aging Blackberry platform left a void in the business world that a few companies, HP included, would try to capture. With RIM allowing iOS and Android integration, HP has just gone from competing against the Blackberry to competing with iOS and Android.
Microsoft/Windows Phone: Microsoft’s revamped Windows Phone already lags behind iOS in enterprise options and support. This isn’t going to help.
Microsoft/Enterprise Support: One of the only companies currently competing with RIM in this space from a services perspective, Microsoft now has less of an opportunity to gain back clients from RIM.
***Addendum: Microsoft CEO Steve Ballmer took the stage this morning to announce a new partnership with RIM that will bring Bing services to the core of the Blackberry OS. While you can feel for Microsoft trying everything it can to make up ground against Google, this seems like more wasted money in the context of this article. As RIM moves away from their hardware and platform and focuses on their services, Microsoft is investing in the dying side of RIM’s business. It’s almost hard to watch.
If you read our 2-part series on Google and Android’s relationship with open source, you’re well aware that while Google has open-sourced the code of each version of Android through Gingerbread, major obstacles exist for any company that wants to take that code and use it in a big way without remaining beholden to Google. The disadvantages of building an Android product without Google apps and early access to new versions of the platform are too great for any single manufacturer to overcome in the fast-moving and competitive mobile landscape. For this reason, Google has never been afraid of the consequences of releasing Android to the wild. Making it “open” and free has allowed them to get it on as many devices as possible. The most influential, proprietary chess piece that Google holds is the Android Market, the main access point for delivering mainstream Android apps. It’s on this front that Google must feel threatened by the first company with the size and clout to make Android something of their own. It’s no secret that Amazon is working on an Android tablet, but what could be most interesting will be the departure from a reliance on Google and an ownership of their own version of Android.
To be precise, Amazon will not be the first company to try and build a compelling tablet ecosystem of their own. They will, however, be the first to make a major impact. Barnes & Noble intrigued everyone last year when they released the Nook Color, an evolution of their Nook E-Reader. Marketing it as an e-reader with an LCD screen and some additional capabilities didn’t stop everyone from viewing it as a shockingly capable and affordable Android tablet. At a time when there were no 10-inch Android tablets, the 7-inch Nook Color immediately became a hacker’s dream as it was easily rooted and joined the stable of devices supported by Cyanogen and other modders. For a $250 tablet, priced to be almost a loss-leader and gateway to Barnes & Noble’s e-bookstore, the Nook Color ran stock Android 2.2 Froyo surprisingly well. This interest couldn’t have escaped B&N’s notice and likely added to their existing motivation to evolve the Nook Color into a full development platform, complete with a Nook-specific app store. This week’s announcement of the expansion of the Nook’s capabilities is interesting, but ultimately B&N doesn’t have the complete media presence that Amazon has. The company is still clearly limiting the number of apps it will host and expanding cautiously as if to stress that this is still an e-reader first and foremost.
While B&N works to expand its media offerings to accommodate the potential of its tablet already on the market, Amazon appears to be building towards the release of a tablet that will accommodate the breadth of its media offerings. Almost overnight, Amazon seems to have built a media empire that is in virtually every area important to consumers. They have a proven system for delivering e-books. They sell and stream music. They sell and stream video. They make buying physical goods simple and painless. This type of media and sales presence puts Amazon on a competitive level with Apple that nobody else has attained. That alone would make Amazon a prime candidate to release a tablet that could be a credible alternative to the iPad.
Amazon, however, is already one step ahead of that position. If the Android Market is the essential app that enslaves manufacturers to Google, then the Amazon App Store is Moses to lead them out of Egypt. Amazon launched their App Store in a very intelligent manner, allowing it on existing Android phones and drawing in developers who have failed to make much money on the Android Market. The response to the Amazon App Store has been very positive, to the point that phones are already launching with it pre-installed (see Cellular South’s HTC Merge). The main obstacle for Amazon on phones today is that the App Store is not integrated as tightly as the Android Market usually is. The key will be Amazon using its app store as the focal point for downloading apps on their own devices.
With a credible app store on their own tablet, the only remaining disadvantage for Amazon would be the lack of early access to new builds of Android. Gmail, while useful, can easily give way to a native email client that supports all types of email with a minimum of blowback for Amazon. Would Amazon be harmed by a lack of early access to Honeycomb or its successors? That’s unlikely, since Amazon doesn’t need Honeycomb. Truth be told, Gingerbread is far more stable at this point, and well suited to a small tablet experience. While Honeycomb struggles with a confusing UI and a lack of a useful SDK, Gingerbread fully supports the existing app ecosystem and can serve as a launching point for a new fork of Android. The foundation of Android is all Amazon needs at this point. They’ve no doubt been working hard to build their own layer on top to better suit promoting their content over anything else.
If an Amazon tablet based on Gingerbread and built by Samsung hit the market with the Amazon App Store and tie-ins to Amazon MP3, Cloud Player, Cloud Drive, Kindle and all of Amazon’s retail offerings, how compelling would it be? Would it need further support from Google to be successful? Would it need any more development beyond what Amazon itself chooses to put in? Probably not, since Amazon is the only company with the ability to deliver a media experience on par with or possibly beyond what Apple offers. This isn’t a challenge that any other company could take on, which is why Google can still maintain control over its supposedly open platform. Contrary to what Google would have you believe, not just anyone can take Android and use it effectively. Amazon may be the only player out there big enough to realize the potential of Android’s open source existence.
After a few weeks of people debating and waiting with bated breath, it appears that the issue is settled on whether or not the T-Mobile G2x is a quad-band phone capable of supporting AT&T 3G. The issue at hand was that T-Mobile’s website specifically listed the G2x as supporting the frequency bands (850/1900) that would enable it to work on most North American carriers’ 3G networks, i.e. AT&T in the U.S. This would mean that users could unlock the phone and use it on AT&T (rare network interoperability in the U.S.), and that the phone would be “future proofed” for the day when T-Mobile is absorbed into AT&T and the T-Mobile 3G bands are repurposed. Immediately after the phone went on sale, however, users reported that unlocked devices with AT&T SIMs were not connecting to AT&T and that the phone may not be quad-band after all. Diligent investigation by Engadget Mobile’s new senior editor Myriam Joire has now surfaced the following statement from T-Mobile, confirming that the phone does not support AT&T 3G:
The T-Mobile G2x fact sheet, attached [PDF link], contains accurate information. The T-Mobile website is incorrect and we’re working to correct it. The G2x supports 850/900/1800/1900 MHz for 2G/GPRS only, and supports 3G/4G UMTS/HSPA+ bands I and IV. The G2x does not support AT&T’s 3G bands. This banding is hardware based.
This comes as a disappointing resolution for many consumers who were excited about the G2x. What’s interesting to ask is why this is disappointing. Why was everyone excited about this solid but relatively ordinary Android phone? Outside of a non-offensive, understated design, this phone has/had only a few distinguishing features:
- It runs stock Android 2.2 Froyo
- It has a particularly good camera at 8 MP and 1080p video capture that actually works
- It is one of the first phones in the U.S. with a dual-core Tegra 2 chipset
- It would operate fully on both T-Mobile and AT&T (now disproven)
A great camera and dual-core performance are both compelling features, but they are hardly unique or earth-shattering. The first and last reasons, however, highlight two sad realities of the Android landscape and wireless options, respectively:
1. Stock Android is still rare and considered valuable.
As we noted in our anticipation of the Nexus S last year, very few phones actually hit the market running stock Android. Instead, almost all Android phones are loaded down with OEM and carrier skins, user experiences, and other software. The reason the Nexus line exists, in fact, is so Google can ensure that they have at least one handset running unadulterated Android on the market. Today, no phone on Verizon runs stock Android. Until the Nexus S 4G hits Sprint, they have only a few low-end phones that run a modified version of stock, designed to push their Sprint ID service. AT&T does not officially support any stock Android handset, while T-Mobile supports the Nexus S and now the G2x.
These oft-maligned Android skins are clearly not the preferred choice of consumers, enthusiasts, and hackers. As a result, the very idea of a stock handset is still rare and valued by consumers who don’t care for the hardware of Samsung’s Nexus S. Even the fact that the G2x runs Froyo, a version behind the current Gingerbread, is not enough to diminish excitement over this phone. That says a lot about the value of these Android skins.
2. U.S. phones remain completely locked to carriers for the time being.
The very idea that consumers can take a phone from one carrier and use it on another is completely foreign to the U.S. Unlike in Europe, where almost all of the carriers operate the same technologies and frequencies, the U.S. carrier lineup is fragmented. Verizon runs CDMA and LTE, Sprint runs CDMA and WiMax, AT&T runs GSM on a few frequencies and T-Mobile runs GSM on separate frequencies. The difference in frequencies is the reason that the iPhone does not work on T-Mobile 3G. Furthermore, while Verizon and Sprint both use CDMA, the carriers maintain tight control over phone usage on their networks, making it nearly unheard of to use a phone from one carrier on another. This is the very issue that Nilay Patel and Chris Ziegler explored in yesterday’s online debate over the AT&T/T-Mobile merger.
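The fragmentation described above reduces to a simple set-overlap check: a phone gets 3G service on a carrier only if its radio hardware supports at least one of that carrier's 3G bands. The sketch below is illustrative rather than an authoritative band database; the G2x band list comes from T-Mobile's quoted statement (UMTS bands I and IV), while the simplified carrier band sets are assumptions for the example.

```python
# Toy sketch of 3G band compatibility. Band sets are simplified and
# illustrative, not an authoritative frequency database.

CARRIER_3G_BANDS = {
    "AT&T":     {"UMTS II (1900)", "UMTS V (850)"},
    "T-Mobile": {"UMTS IV (AWS 1700/2100)"},
}

def has_3g(phone_bands: set, carrier: str) -> bool:
    """True if the phone shares at least one 3G band with the carrier."""
    return bool(phone_bands & CARRIER_3G_BANDS[carrier])

# Per T-Mobile's statement, the G2x hardware supports UMTS bands I and IV.
g2x_bands = {"UMTS I (2100)", "UMTS IV (AWS 1700/2100)"}

print(has_3g(g2x_bands, "T-Mobile"))  # True  -- band IV overlaps
print(has_3g(g2x_bands, "AT&T"))      # False -- no overlap, so 2G/EDGE only
```

The same overlap test explains why an unlocked AT&T iPhone of this era only got 2G on T-Mobile: it lacked the AWS band.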
As long as this situation continues, consumers will experience pressure from carriers on the hardware that they own. It’s the exact opposite of true competition. The excitement over the G2x is an example of a rare bit of independence for the person knowledgeable enough to use a phone on their carrier of choice.
Sadly, none of this will come to fruition, as the G2x turns out to be functionally locked to T-Mobile, failing to provide that choice or a stock Android experience on AT&T.
In Part 1 of this series, we looked at Google’s relationship to the open source community and how it uses open source as a political tool to present itself as “open” vs. Apple’s “closed.” The recent firestorm of reports that seem to indicate that Google is tightening the reins of Android actually prompted Andy Rubin to respond to the criticism. Unsurprisingly, the response did little to silence the critics, since Rubin did not directly address the specific allegations made in the various reports, leading to a number of counter-responses, none better than the analysis from Ars Technica.
With that aside, let’s look at what the more recent Businessweek report actually says. The report cites a number of anonymous sources who claim that Google is tightening the reins of Android, hoping to enforce a level of quality and consistency from manufacturers. The key element of the story has to do with Google choosing to use the elements of Android that it controls (Gmail, Google Maps, Android Market, etc.) as leverage against OEMs to comply with Google’s wishes. The fact that Android is open source does absolutely nothing to change this scenario, as the proprietary elements of Android are not in any way “open” to anyone. From the beginning, there have been allegations that Google uses these proprietary elements as their only way of enforcing any kind of quality control on Android, since not having Gmail or Market access is a nearly devastating blow to any Android handset. In this sense, the Businessweek article is not alleging anything that hasn’t been said before.
What is new, however, is the earlier revelation in Businessweek that Google is waiting to open-source Honeycomb until they can clean up the code and make it ready for phones. This different approach forces Google to admit a few realities that one could have guessed. First of all, in order to get a Honeycomb device on the market, they took shortcuts to tweak it to the point where it would run on the Motorola Xoom. In this rushed process, the software came out rough around the edges. Early reviews of the Xoom have confirmed this, with many frustrating bugs and missing features hurting the experience of using the disappointing tablet. Second, Google isn’t even sure the same software can run on phones, demonstrating that the team didn’t consider an integrated phone and tablet solution while rushing out the software. There remains a major question as to whether or not Honeycomb will ever be available for phones, but it’s looking more and more likely that the next version of Android, “Ice Cream” or “Ice Cream Sandwich,” will be the version intended to bring some of the Honeycomb enhancements to phones.
These two stories from Businessweek are distinct and raise different issues. While the public has conflated them into an indication that Google is “locking down” Android, the truth of the matter is that very little has changed. What these stories do show, however, is that there are a few ways in which Google’s version of their open source participation and the advantages of open source have not been true from the beginning.
Open until Closed
While Google is quick to talk about Android as open source, they’re a lot slower to talk about how some very important elements are proprietary and not open source. When consumers think about a typical Android phone, they think of a few key elements. These are Google-branded applications like Gmail and the Android Market that are not part of the open-source license of Android. It is perfectly reasonable for Google not to allow those apps on just any device, since they bear the Google name. What’s often overlooked, however, is that these apps are extremely important and considered table stakes for a quality Android device today. Their value is underscored by the fact that Google has to approve individual devices before the manufacturer can install them. It’s in this approval that Google acts more like a powerful entity granting licenses than as a benevolent software distributor.
Of these apps, the two that were previously mentioned, Gmail and the Android Market, are the most valuable. Gmail is probably the most highly regarded by journalists and those on the cutting edge because it is a native implementation of all of the much-beloved features of Gmail. Gmail, in fact, may be the single most compelling reason to use an Android phone, as email in general is so central and Google nailed the implementation in the official Android app. Because of this appeal, it’s also incredibly important for manufacturers to get approval for Gmail, since almost everyone who uses an Android phone will have a Google account and it’s an expected feature.
Even more important and expected, however, is the Android Market. When Apple created the App Store for the iPhone and made it the sole place to search for and install apps, they set a standard for the industry that required centralized places to install software. While Google took a far more lenient approach in its app approval process with the Android Market, it was essentially providing a similar store. Google also made the decision to allow users to download and install apps from other sources, but the truth of the matter is that only enthusiasts would do this more than a few times. For years then, the Android Market has been the only credible place to take advantage of the hundreds of thousands of apps on the platform. For any company to not provide access to the Market on their device was tantamount to signing a commercial death warrant.
This, of course, leads to the real crux of the matter. While Google claims to be open and uses it as justification for the decisions they make, the truth of the matter is that partners still need Google’s approval to make anything meaningful. This approval goes even further, however, when you consider the importance of early access to the newest versions of Android. Early access is key to manufacturers, regardless of whether or not Android is eventually open-sourced. That’s the real story out of the Businessweek article, and it’s right in the first paragraph:
No more partnerships formed outside of Google’s purview. From now on, companies hoping to receive early access to Google’s most up-to-date software will need approval of their plans. And they will seek that approval from Andy Rubin, the head of Google’s Android group.
If that sounds contrary to the ideals of open source, that’s because it is. The idea that a powerful company holds partnerships with other powerful companies and offers only them certain software and meaningful access is not true to the ideals of open source. The biggest threat hanging over anyone outside of Google’s blessing is the loss of early access to the next version of Android, a difference of six months or more. Anyone trying to make the case that a gap that large is unimportant is being willfully ignorant of the speed at which the mobile space is moving. What’s more, Andy Rubin did absolutely nothing to dispute this claim in his response. He didn’t even address this most important allegation.
It’s worth taking a second and noting that this approach by Google is not inherently bad. It makes complete sense for their company and helps push their services first and foremost. What’s spurred this renewed emphasis on review by Google is most likely the introduction of the Amazon App Store and the Verizon-Microsoft deal to make Bing the default search engine on many of the Verizon Android phones. All of a sudden, companies are finding ways to use Android to their advantage instead of Google’s. In a sense, Google is simply using the tools it has to protect its own interests.
This approach may not be inherently bad, but what is bad is Google even bothering to claim that they are in any way more “open” in a broad sense. As we discussed in Part 1, open-sourcing software doesn’t make you benevolent, particularly when you don’t open-source the most valuable elements of every project. Somehow, Google has managed to do to mobile OS’s the same thing it’s done with countless web services. It’s commoditized the basic functions of a smartphone OS and given them away in order to sell more ads. This is a good business model, but the next time you hear Andy Rubin say “open” fifty times in an interview, evaluate his posture with this in mind. When Vic Gundotra stands up at Google I/O next month and delivers another revisionist history that paints Google as liberators from the control of Apple, consider that Google is reviewing handsets and evaluating partnerships based on manufacturers’ willingness to appease Google. The open source superiority tends not to look the same without all the chrome.
The Myth of the Hive Mind in Consumer Electronics
While nothing tangible has really changed in Google’s approach, this renewed focus on the things it does control is likely to turn out well for them. Since Google develops Android in-house, there’s no appreciable benefit to them falling over themselves to appease the open source community. While it’s fair to take issue with Google’s pretense of openness, it’s sad to see the ideologically-based criticisms of the open source diehards. The open source purists have never had a successful home in consumer electronics, as Google is clearly demonstrating with its recent decisions.
The “other” part of this media-conflated puzzle is the Businessweek report that Google is not open-sourcing Honeycomb for a while, perhaps waiting until the next version of Android to do so. The reason is two-fold: 1) Honeycomb was not designed for phones and 2) Google and Motorola barely got Honeycomb running on the Xoom. Both reasons are major indictments of open source. Honeycomb was not designed for phones, and Google knows that if they release the code someone will put it on a phone and the experience will be terrible. In open source, you can’t prevent that from happening, meaning that consumers can be fed a poor experience that wasn’t intended for the software, strictly because of the hands-off nature of open source.
Just as sad is the fact that Honeycomb in its current state is little more than a collection of sloppy code running on the Xoom. Rubin admitted in that Businessweek article that they took shortcuts in order to allow the Xoom to ship on time. If Android is open source, how has that in any way helped prepare it for the next evolution of mobile? Google isn’t lacking for engineers, yet a significantly smaller team at Apple is able to ship completely finished software on the iPad in less time. Every review of the Xoom discusses the buggy and unfinished software. Honeycomb may be a new thing for Google, but this is a familiar tune that has been around since the introduction of Android. Open source doesn’t seem to have given back to Android nearly enough to make it stable.
The consumer electronics space, and the mobile landscape in particular, is littered with examples of failed open-source platforms. The open-sourcing of Symbian proved to be a huge mistake that’s already been corrected. The stillbirth of MeeGo is just as disturbing, particularly since Nokia gave up on it nearly as fast as they partnered with Intel. Pointing out that Linux, an open-source OS kernel, serves as the foundation of Android and other successful platforms does no good in light of the reality that all of those platforms have proprietary, tightly-controlled layers on top.
The open source community may complain that Google has not contributed to open source in full, but Google could just as easily turn that around and admit how little it’s received from the community.
By now, everyone is done losing their minds over the multiple revelations of the last few weeks that seem to indicate that Google will not be as “open” as they’ve claimed to be up to this point. One significant article, appearing in Businessweek, gave the impression that Google was significantly tightening the reins over the future direction of Android. It remains to be seen just how much control Google intends to exert over Android, but the more interesting question is how they intend to go about exerting that control. In theory, since Android is open source, they shouldn’t have the ability to rein in any unwanted use of their platform. The truth is that Google has always skirted the line of open source and in doing so has taken the liberty of criticizing Apple for doing the very things that they appear ready to do themselves. The issue isn’t a simple one, but it is becoming clearer by the day as multiple stress points have collided in the last few weeks.
“It’s Complicated” with Google and Open Source
To begin, we should back up a little bit to recognize exactly what Google’s relationship with the open source community has been up to this point. Contrary to the popular belief that Google is completely open, Google has had a contentious relationship with open source diehards for a long time. The open source purists would have you believe that nothing is actually open source unless it is developed alongside a community that has full access at every point of development. Nightly builds, placed on software repositories and available to anyone, are a staple of open-source projects. To call one’s software open-source is to accept that people are going to take the work at some point and do something with it that you didn’t intend and probably don’t want. The trade-off is that by making source code available at all times, you’re also receiving the collective aid of the online community. Successful open-source projects include various versions of desktop Linux, particularly Debian, as well as OpenOffice, Firefox, WordPress and WebKit, the browser engine that Apple open-sourced in 2005.
There are, of course, varying degrees to which an entity might choose to make their source code available to anyone online. There is no open source regulatory agency that runs around enforcing the release of code in a regular and timely manner. For this reason, a company like Google has often been able to adopt the moniker of open source and take credit for their openness while not completely abiding by the accepted rules of the open source community. Google is famous and infamous for open-sourcing most of their products. Famous because they are a giant company that offers popular services, but infamous because they choose to develop nearly everything in-house and then release the source code after they’ve already launched their service and gained traction. To some people, this is just a for-profit corporation doing what it can to help the community. For others, this is a for-profit corporation using the open source community as a sort of indemnification from responsibility. If Google eventually gives away its code, it can maintain the political posture that it can’t be held responsible for leveraging its monopolistic position against new industries. Since they “give away” their code, anyone can theoretically benefit from their work. In practice, however, Google tends to move into whatever services they choose and dominate quickly, making the open sourcing of their code an irrelevant action. In other words, many people view it as an empty gesture that only serves to foster goodwill.
Android’s Programming and Mission
This has never been more true for Google than with Android. Google purchased Android in 2005 and worked to reform its ultimate goals to better benefit Google. Put simply, while Android was originally designed as a mobile OS to be sold through licenses or services, like most other mobile OS’s of the time, Google saw Android as an opportunity to ensure that Google Search, essentially their only profitable service, would continue to be dominant in the emerging mobile space. In order to secure that future, Google needed a way to get Android, with search front and center on it, on as many handsets as possible. Just like with Google Maps, Google Calendar, Gmail and virtually every one of their web services, the best way to make that happen was to make it free. In fact, free wasn’t even enough, because the mobile market was already occupied by established manufacturers with their own goals, relationships with carriers and rivalries with each other. In order to make the greatest possible impact, Google would need to make the source code available to be manipulated and customized by manufacturers. In 2007, Google did just that with the introduction of the Open Handset Alliance, a consortium of industry players who would promote open standards in the wireless industry. The OHA unveiled Android at the same time, giving the impression that the OHA would be controlling Android in the future.
This expectation proved to be false, as Google not only developed Android entirely in-house but even worked with HTC to make the first Android handset, the G1. The G1 even had Google branding on it, beginning a long trend of Google putting their own name on handsets that introduce a new version of Android. To what degree the Open Handset Alliance does anything is not known, as they don’t appear to have had any hand in developing Android or in producing anything else. In a sense, this was another tactic by Google to position Android as not locked-down by one company. As Apple’s iPhone began gaining significant mindshare, Google knew that the appeal of Android was in the idea that it was an open platform, unlike the curated, closed iPhone ecosystem. You could view this as the beginning of significant, inauthentic posturing by Google that their open-source OS was in some way open to consumers. Just as the OHA and open source communities had no control over Android until significantly after releases, so too did consumers lack any real choice or options with their Android handsets. Virtually every Android phone from the G1 to handsets today goes through a similar development cycle: Google develops a version of Android, a manufacturer either works with Google or takes the source code when it’s available, a handset is made and negotiated with carriers, and then the phone is sold through the carrier to the consumer. At no point along this path does the consumer have real control over the device, and nobody other than a small percentage of hacker-types is going to modify the software to suit their purposes. In such cases when they do, companies like Motorola have no problem voiding the warranty of the handset and not offering support. In this respect, modifying an Android phone is no different than modifying an iPhone, except that someone who jailbreaks an iPhone can always restore the phone by plugging into iTunes.
Yet nobody would ever get this impression by listening to Google or people who don’t understand the industry well. Andy Rubin, the founder of Android and its director at Google, is famous for saying the word “open” as many times as possible when people criticize the modifications to Android that have provided poor experiences. He seems unwilling to admit that this “openness” only exists for OEMs and carriers, not for actual consumers. After Steve Jobs jumped on the Apple earnings call in October last year and criticized the fragmented nature of Android, Rubin tweeted the directions for extracting the Android source code as “the definition of open.” We’ll revisit that bravado at a later point, but it’s worth pointing out that that type of response is a clear indication that Rubin, and Google by proxy, is not really focused on what matters to consumers. No consumer is going to go to the Android repository, extract the source code, and build their own Android phone. Just saying “open” seems to work, though. In the FCC’s net neutrality order, they actually said that they didn’t impose restrictions on wireless networks because of “recent moves towards openness, including the introduction of open operating systems like Android.” The confusion woven by Google seems to have worked out just fine.
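For context, the “directions” Rubin tweeted were simply the standard recipe for checking out and building the Android Open Source Project. As widely quoted at the time, the tweet read approximately as follows (the repository URL reflects the kernel.org hosting of that era and has since changed, so treat this as a historical sketch, not current build instructions):

```shell
# Rubin's tweeted "definition of open" (October 2010), lightly reformatted:
# make a working directory, fetch the AOSP manifest, sync the source tree, build
mkdir android
cd android
repo init -u git://android.git.kernel.org/platform/manifest.git
repo sync
make
```

Note that `repo` is Google’s own wrapper around git, and a full `repo sync` pulls down gigabytes of source. Which rather proves the point: this is a workflow for OEMs and hobbyist ROM builders, not something any ordinary consumer would ever run.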
Meanwhile, Google has stayed true to its other practices in open source. They’ve developed Android internally, worked with a single manufacturer on each release of the OS, then open-sourced the code months later. For consumers, this has proven to be a frustrating process, as OEMs haven’t gotten access to new versions until months after each new version is unveiled. This was particularly evident this last year, as Google introduced Android 2.2 “Froyo” in May, but no phone shipped with it until August, the same time updates came to phones other than the Google Nexus One. In August, phones were still shipping with versions 1.6 and 2.1. The Galaxy S line from Samsung, which shipped from July to September with 2.1, has only seen updates to 2.2 in the last two months, a sad fact since Google unveiled Android 2.3 “Gingerbread” in December with the Nexus S. Open-sourcing the code has not led to timely or meaningful releases from most manufacturers, yet Google has treated the very fact that they open-source as evidence that they are on the consumers’ side.
As more and more OEM skins began fragmenting the Android experience, the one trump card people believed Google held was to withhold access to the Android Market and Google apps such as Gmail. Yet Google has proven to be surprisingly indiscriminate about which devices it allows those apps on. Low-end phones with terrible skins and tablets that weren’t given the Google blessing (the Samsung Galaxy Tab) receive the apps, giving the impression that Google really doesn’t care what devices run Android or how poor the experience is. All that’s mattered up to this point is getting Google search on as many phones as possible, and the strategy has worked to dramatically boost Google search from mobile devices.
Taking the High Road?
So where did Google and Android stand in March 2011? Google is a giant search and advertising company using Android to boost use of its search on mobile devices. Google releases each version of Android as open source, both to drive adoption by virtually every manufacturer and to gain goodwill as a benevolent company. Google is not restricting how Android is used in any meaningful way, weighing Android and search adoption over ensuring a good consumer experience. Google is also, very importantly, using its position as an open source advocate as a way to jab Apple for being comparatively draconian. None of the examples given here square with Google’s pretense of taking the moral high road in developing its OS.
So while we have demonstrated the hypocrisy and incongruence of Google’s positions up to this point, a number of developments in the last few weeks have placed the question of Google’s moral leadership up for public debate. Truth be told, Google’s position going forward is no more incriminating than what they’ve done up to this point. It does, however, shine more light on the process and myths of the superiority of open source. We’ll examine these issues in Part 2.
Ladies and gentlemen, RIM just announced that they are going to allow Android apps to run on the Blackberry Playbook. Let that soak in for a moment. Your favorite Android tablet apps, every single one of them, will be able to run in emulation on the Playbook. All of you out there who are dying to get your hands on a Playbook and make it your own, you, along with all of those many Android tablet owners, will be able to run those Android apps. You won’t have to go out and buy a Xoom or a Galaxy Tab to buy into that ecosystem. RIM is providing a way for you to have that Playbook AND the apps you really want. It’s like the best of both worlds, assuming that you use a Blackberry as your main phone and will have it in order to use the email and calendar clients.
Or, if you’re like just about everyone else out there, this is a completely meaningless development that is worth noting not for its own merits but as an indicator of just how sad a state RIM finds itself in. For months now, people have been wondering if RIM would do something like this. Since Android is built on Java, it’s always been theoretically possible to run Android apps in an “Alien Dalvik” layer that runs apps in virtualization. Running the apps in virtualization is not as unsatisfactory as it might sound, since Android apps already run inside a virtual machine and are essentially “virtualized” to begin with. The advantage of allowing the Playbook to run Android apps is that there would theoretically be an ecosystem of 200,000 existing Android apps that could be easily ported over and submitted to the Blackberry App World. There are myriad problems with this plan, however, from the perspectives of Android developers and even RIM itself.
For starters, the opening of this article is actually a bit misleading. You can’t currently port Android tablet apps over, because the guidelines outlined by RIM only mention Android 2.3 apps. Android 3.0 Honeycomb is the first true tablet OS from Google and includes its own unique SDK for apps. The experiment of putting Android 2.2 (designed for a phone) on a tablet was already tried by Samsung, resulting in a lackluster Galaxy Tab and a complete about-face as soon as they had Honeycomb with which to begin building their Galaxy Tab 10.1. Ordinary Android apps don’t work well on a tablet, even on a diminutive 7-inch tablet like the Playbook. So right out of the gate, you have the entire purpose of using Android apps called into question. It seems certain that RIM will eventually allow Honeycomb apps to work on the Playbook, but that time hasn’t yet come, and the entire point of this strategy was to have useful apps at launch.
What happens, then, when RIM does build in the tools to use Android 3.0 apps on the Playbook? Won’t that fix all the problems that were just mentioned? That all depends on whether or not there are going to be many useful Honeycomb apps. The Motorola Xoom just launched and has barely any apps or developer support behind it. If development for Android tablets follows the same path that development for Android phones took, developers won’t take it seriously until a significant amount of hardware sells. Right now, there isn’t a single Android tablet that looks to be remotely competitive with the iPad. So where does that leave the state of tablet-specific Android apps? In the starting blocks, just like the Playbook itself.
Furthermore, as RIM adds yet another way to build apps that the Playbook supports, what is the clear message in terms of what frameworks developers should use to write Playbook apps? They’re allowing people to write in Adobe Air and sort of get by with Android apps. Where is the incentive to write natively for the Playbook? It doesn’t exist, and probably for a more fundamental reason than just having too many options. The tools that RIM is providing for their new QNX framework must not be that compelling if they’ve already lost confidence in their ability to convince developers that it’s an environment worth investing in.
And finally, the reason this is a bad idea is the thing that should have been considered first. What kind of experience is this really going to be for consumers? Playbook owners, however few there might be, will be greeted with a complete mashup of Blackberry Java, Adobe Air, and Android apps, none of which will be particularly strong on their own merit. And since the Playbook does actually do real multitasking, how is a virtualized app, designed for a different OS and hardware, going to perform and affect battery life? Battery life has already been identified as a weak spot for the Playbook before they can even get it to market, but RIM doesn’t seem concerned that these Android apps will likely tax the system in ways that true native development wouldn’t. How can anyone respect this device if RIM can’t show any true ownership of it even at launch?
The reality is that nobody can. The Playbook absolutely demos well and has shown tons of potential in its interface, but like so many other products, the experience of actually using it over time will likely tell the true story. Since RIM went out of their way to steal so much from Palm’s WebOS, it seems only fitting that after an initial flurry of hype, this device is also likely to trip out of the starting blocks and watch the competition run out of sight.
A while back, I wrote an article detailing the various ways I use my iPad. The intent of the article was to illustrate that while a tablet is not currently a “necessary” device, the appeal is so strong that I, like other iPad owners, am using it more and more. This is evident in the sheer number of apps that I use on a daily or regular basis, as I listed all of them in the article. As I spend time on a three-week trip, I find myself blown away by how many apps I regularly use on my iPhone as well. While I’ve always maintained that the core elements of an operating system (browser, touch response, reliability, general appearance) help establish the quality of a smartphone OS, there’s no doubt that we live in an app world that influences how we value certain platforms.
More and more, the iPhone’s simplistic interface seems to be a well-placed bet on the eventual importance of mobile apps. The layouts of Android and Windows Phone 7 both seem confused in their approach, likely because they both hit the market after the iPhone had gained significant mindshare. In order to differentiate themselves, they added layers of abstraction and obstruction that don’t make it any easier to get to your apps. Adding multiple drawers and places to put apps (Android), or forcing them into arbitrary hubs (WP7), makes it harder, not easier, to get to your apps. Only WebOS has a comparable method, since they treat space in the OS differently and put all apps into an easily-accessible launcher. That’s not without its problems, however, because if WebOS had a less-embarrassing catalog, their users might find the launcher an inadequate way to navigate their apps.
To those who question whether or not Apple had plans for the App Store all along, I would point to how perfectly suited the iPhone homescreen is for dropping apps onto it. While they might have said at the iPhone introduction that they were committed to web apps, they had a perfect solution ready for developers within 8 months of the iPhone launch. To go along with this “new” approach to native app development, the iPhone just happened to be the perfect console for launching these apps. It had to have been planned all along, and it was planned pretty well, at that.
This idea of the iPhone and iOS devices as “app consoles” isn’t mine, of course. It’s been written about extensively by John Gruber on DaringFireball, and by Watts Martin and Marco Arment. The point is underscored, however, in a mere list of frequently-used apps on my iPhone, just like the list months ago for my iPad: