Our first look at Supergirl is here, and the reaction seems to have been decidedly mixed. On the one hand you have people who think it looks like fun, the special effects are pretty good, and the lead seems very appealing. On the other hand, a lot of people are disappointed or even angry that the show seems to be a light rom-com. Take a look at the trailer below and see for yourself. Then follow me below the fold for my thoughts.
The Apple Watch has been fully unveiled with a launch date and price point. Lots of pixels are being spilled about whether it’s going to be successful, whether we should buy it, and whether wearables really are the future of technology. As I said last time I talked about the Apple Watch, it’s a device that I can see the utility of and that I certainly want. Just not now.
One of the big pieces of news coming out of the show yesterday was the price range: $349 for the basic model up to $10,000 for the limited-edition 18k gold model. Understandably, that bigger number has gotten a lot of attention. The gold Apple Watch Edition is clearly a product for the very wealthy, who wouldn't think much of dropping a year's worth of car payments on a status symbol.
Except the thing is, the Apple Watch Edition is essentially a piece of jewelry. And people usually buy jewelry as an investment or as a family heirloom that's going to last decades or even generations. So how does that work for a piece of consumer electronics that will eventually grow obsolete?
It’s not that the Apple Watch will necessarily have a yearly lifecycle like other iOS devices do. By this time next year, the Apple Watch will still keep time, count how many steps you take, and talk to your phone. It won’t have whatever new features the Apple Watch 2 has, but it will still be viable.
But what about five years from now? Will an Apple Watch you buy today still be able to sync up and work properly with an iPhone 9? Is there going to be some way for Apple to swap out the innards and update the software? Or is the Apple Watch going to end up much like the iPhone 3G: ultimately disposable? I don't know that the public is going to be convinced that a watch is something they can replace every two years. It makes more sense to me that Apple will instead introduce new Watches to appeal to different segments (maybe a kids' version, or one designed specifically for medical professionals).
These are all issues that I’m sure Apple has considered. It will be interesting to see what their solution is.
It began as a seemingly spontaneous uprising, startling the establishment and gathering a popularity that nobody expected. Its participants were fueled by a righteous sense of justice and communicated with each other using social media in ways that observers hadn’t anticipated. For a while, they were all the talk of the town, and there were pundits in the media who openly speculated that this movement would lead to a permanent change. And then, slowly but surely, the momentum died down, the press started paying less attention, and the participants dispersed except for a dedicated core group. A few years later, hardly anybody talks about it, and no actual change has come out of all the noise and fury.
This was the story of Occupy Wall Street, and a few years from now, it will be the story of GamerGate. Not too long ago, the front pages of many traditional newspapers as well as blogs and social media were posting new stories about GamerGate on a daily basis. Now it’s already pretty clearly dead, having accomplished nothing that anybody can discern. The unofficial motto of the movement, “It’s about ethics in videogame journalism,” is now more likely to be used as a sarcastic punchline than a sincere wish. To understand what happened, I think it will be useful to look at how Occupy Wall Street started with a similar bang and then slowly collapsed in upon itself.
Remember Google Glass? Remember the hype, the backlash, and all the jokes about how nobody looks good while wearing them? Whatever happened to those things anyway?
According to this Reuters article, Google Glass has essentially faded away, with major developers dropping support and no launch date in sight. It may continue to be available in some form, but it's looking more and more likely that a mass consumer launch will never happen.
Many pundits have noted that the problem with Google Glass was the way it was launched. It was run more like an experiment, released to a very limited audience with plans to expand later. Without a clear launch date, though, the positive publicity never materialized, and Google Glass essentially withered on the vine.
The thing is, this is Google's normal way of doing business, and it's not necessarily a bad way to go. I remember when Gmail was invitation-only, and it grew into a smashing success even before it was attached to other Google services like calendar and productivity apps. But for every Gmail there has been a Google Plus or a Google Wave: products which started out in limited release and never managed to get off the ground (yes, I'm aware that Google Plus still exists, and this blog even links to my Google Plus page. But does anybody really regard it as a major player?).
Contrast the Google way with how most other companies launch their products: they announce the new product, give a firm launch date, and then start marketing it with commercials and social media. Apple is a prime example of this model, but other companies, including Samsung and Nintendo, do the same thing. With this approach you have to hope that the product actually works and does everything promised, and you don't get the benefit of testing under real-world conditions the way Google does with its products. But it also means you can control the information that gets out and spin things positively before the launch, rather than having all the flaws sitting out for the world to see.
The other thing that killed Google Glass was that smartwatches solved some of the problems it was designed to address. One of the draws of Google Glass was supposed to be the ability to look up information on the internet without having to pull out your phone all the time. Smartwatches offer the same functionality without looking irreparably goofy, and they are available now. In that sense, the Apple Watch will be competition for Google Glass early next year. The one reason to get Google Glass now would be if it delivered on the promise of overlaying information on the world (such as translating signs written in foreign languages). But that capability seems very far away. And so Google Glass has faded away to an ignoble end.
It makes me wonder if Google might not be better served with a little more opacity. Did Google Glass really have to be announced to the world and tested out in the open? From where I sit, there’s no reason why Google Glass couldn’t have been tested and developed internally much the same way as the next iPhone or Galaxy Note is. Apple has surely had a lot of failed ideas and products which didn’t pan out. The difference is we don’t know about them.
I came across this article in the New York Times talking about the generation divide in Silicon Valley. It’s very well-written, and I recommend reading the whole thing. There was one part that was a bit tangential to the author’s main point, but it’s what really got me thinking:
There’s a glass-half-full way of looking at this, of course: Tech hasn’t been pedestrianized — it’s been democratized. The doors to start-up-dom have been thrown wide open. At Harvard, enrollment in the introductory computer-science course, CS50, has soared. Last semester, 39 percent of the students in the class were women, and 73 percent had never coded before. These statistics are trumpeted as a sign of computer science’s broadening appeal and, indeed, in the last couple of years the class has become something of a cult and a rite of passage that culminates in the CS50 fair, where students demo their final projects and wear T-shirts reading “I Took CS50.”
I myself have taken an introductory computer science course (at my school, it was simply CS-001). I am not a programmer or anything close to a computer engineer. But here I am running my own website which contains formatting and networking features you could only have dreamed of in the late 90s (I remember what it was like to code HTML manually). This stuff is no longer hard.
Writing software is no longer as obscure as it used to be. Don't get me wrong: it still takes focused study and education. But contrary to what you might think, most computer programming these days consists of taking components or lines of code other people have written, tweaking them to fit your purpose, and then putting them together. And we've now gotten to the point where those off-the-shelf components are incredibly versatile, powerful, and easy to use. There are still companies out there building brand new things from scratch. But most development houses are building an app for a restaurant that wants to create smart menus, or a system for hospitals to track their patients. Most developers now spend their time thinking about exactly what problem they want to solve or what feature they want to implement, and not so much about exactly which piece of code they use to do it.
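To make that concrete, here is a minimal sketch (in Python, with menu data and function names invented purely for illustration) of the kind of "smart menu" logic a development house might build for that hypothetical restaurant. Notice that all the genuinely hard parts, JSON parsing and date handling, are off-the-shelf components from the standard library; the developer's work is just wiring them to the customer's problem.

```python
import json
from datetime import date

# Invented sample data for a hypothetical restaurant's menu.
# Each item lists the weekdays on which it is offered.
MENU_JSON = """
{
  "items": [
    {"name": "Pancakes", "price": 7.50, "days": ["Sat", "Sun"]},
    {"name": "Burger",   "price": 9.00, "days": ["Mon", "Tue", "Wed", "Thu", "Fri"]},
    {"name": "Soup",     "price": 4.25, "days": ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]}
  ]
}
"""

def todays_menu(menu_json, today=None):
    """Return the names of items offered on a weekday ('Mon'..'Sun').

    Defaults to the current day. The parsing and date logic are both
    handled by stdlib components; this function only glues them together.
    """
    day = today or date.today().strftime("%a")
    menu = json.loads(menu_json)
    return [item["name"] for item in menu["items"] if day in item["days"]]

print(todays_menu(MENU_JSON, "Sat"))  # ['Pancakes', 'Soup']
```

The point isn't this particular toy: it's that nothing here required writing a parser or a calendar from scratch, which is exactly the shift described above.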
It's similar in many ways to how journalism is more concerned with the ideas and stories you tell than with whether you should have used the subjunctive mood in the third paragraph.
App development is not quite as easy as writing prose, but it’s getting closer. And while that’s a good thing overall, it’s worth thinking about what that will mean in the next few years.
Many of the startups that get the most buzz these days rely on building a network—people use them because other people use them. The most obvious examples are the ones that have a social component such as Pinterest, Instagram, and Spotify. But there are others such as Venmo, a system for paying your friends when you don’t have cash on hand. It only works if your friends also have Venmo.
What this means is that many of the startups coming out of Silicon Valley are more about social engineering than about solving a technical problem. The technology behind Venmo is nothing groundbreaking. The real difficulty is in convincing enough people to use it, in much the same way that modern journalism has to convince enough people to read it.
And much like journalism, a lot of services coming out of Silicon Valley give away their product for free, which means they rely on advertising for their revenue. I don't think this is sustainable for journalism in the long run, and I don't think it is for software development either.
So what to do? The main difference for me is that people care about the quality of the software they use much more than they care about the quality of the journalism they read (sad but true). This should mean that they are willing to pay for quality. And maybe the next bold step for a Silicon Valley startup is to dare to charge for their service.