
Bringing balance to the force: CarrierIQ

There’s a severe lack of balance in this whole conversation surrounding CarrierIQ. The fact that CarrierIQ logs keystrokes makes the whole issue so terrifyingly intrusive that it’s difficult to look at the broad picture objectively.

Working in telecom isn’t much different from working in application development. Consider this common scenario:

> A user’s application crashes. They immediately call your customer service and spew vitriol, claiming the application has crashed ten times in the last week and lost all their data. ZOMG!

> Because you’re a seasoned — albeit somewhat cynical — developer, you include a crash reporter component that sends you detailed application usage and crash information. You pull up the customer’s records and see the application has only crashed three times in the two years the customer has owned it. The log of their data file size shows the file size has only ever gone up, and the report of data file integrity that runs every time the app boots reports no issues reading the file. Ever.
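Such a crash reporter can be surprisingly small. Here’s a hypothetical sketch in Python (the log path and record fields are invented for illustration; a real reporter would upload these records to a server rather than keep them locally):

```python
import json
import sys
import time
import traceback

CRASH_LOG = "crash_log.jsonl"  # hypothetical local log file


def report_crash(exc_type, exc_value, exc_tb):
    """Append a crash record, then fall through to the default handler."""
    record = {
        "timestamp": time.time(),
        "exception": exc_type.__name__,
        "message": str(exc_value),
        "stack": traceback.format_tb(exc_tb),
    }
    with open(CRASH_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    # Still show the traceback to the user as normal.
    sys.__excepthook__(exc_type, exc_value, exc_tb)


# Install the hook once at application startup.
sys.excepthook = report_crash
```

With a log like this, “it crashed ten times this week” becomes a query, not an argument.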

Wow, this must be the worst customer ever, right? And what’s up with this developer spying on his users? What a sociopath, right?

Probably not. This is more typical than we’d like to admit, but what drives users to such hyperbole when reporting issues? Tech support practices teach users this behavior. To understand how, you need to know a few things.

The first is that, to your customer, the technology is a “black box”. To quote Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.” When a customer calls a support line, there is a good chance the CSR is going to have them go through some basic steps they’ve already tried. Unfortunately, you can’t just take the customer’s word for it, because customers are liars…

Whoops. See what just happened there? That’s the second thing you need to understand: common tech support methodologies tell us to distrust what the user says.

Thus begins the cycle of distrust. A certain percentage of customers will lie to save time. Another set will lie because they *think* they know what’s causing the problem, but they lack the depth of subject matter knowledge to even understand why they’re wrong. The technology is magic to them, so “restart the application/device” might as well be “say hocus-pocus three times.”

This issue runs even deeper because many customers really *do not* want to call your support line. They really don’t. Who wants to feel distrusted? They learn from every support experience, and will often take the basic troubleshooting steps themselves. They’ll tell you this, but as we know, a certain percentage will lie, and you have no way of knowing which customers those are, so you must treat all customers as liars.

You spin me right round
baby right round
like a record baby…

*Ah-hem…* Sorry.

In a tech support conversation, the customer very quickly feels distrusted, and as we know from a rather infamous psychological experiment [1], people who feel distrusted will act in a way worthy of said feeling. Because of this deep-rooted, dysfunctional relationship with our customers, we develop solutions that circumvent the issue entirely and gather the data directly. Pay no mind to the man behind the curtain, and all that. You see, the customer relationship problems are the same in telecom as they are in application development, web or otherwise.

This raises several questions:

Why ask them if they recently rebooted the device when the technology can tell us so accurately?

Why ask a customer how many dropped calls they experienced if we can simply look at a log?

Why not have a look at where the user was when they reported poor call quality, so we can correlate it to our tower location database?

Why trust that a particular setting is configured correctly when we can inspect the condition of the device?

*Why rely on a user’s assertion that they typed the URL correctly when we can just look at their keystrokes?*

Whoa, hold on a minute.

Let me back up a moment and be clear about something. I am not advocating that the data collection performed by CarrierIQ is “OK”. It’s also not entirely clear whether carriers can actually see your keystrokes, but they are logged on some devices. I am playing devil’s advocate here. I hope the scenario I’ve presented feels familiar to you. The technical groups at the carriers want you to have a positive experience, and that is what drives them to collect data.

I know more than a few people who work in technical departments at AT&T. They don’t live in a mountain-side complex plotting schemes for world domination. They really don’t. They’re normal people like you and me, and they care that people think their service sucks. As we’ve all experienced, management doesn’t always give them the resources they need to fix the root cause. As is typical in service-related enterprises, they focus on *fulfilling* failure demand [2], rather than restructuring their organization to reduce it.

This (excessive failure demand) is what drives the market for tools like CarrierIQ. I would be very surprised if the genesis of CarrierIQ was the marketing department, but the conundrum we face is that data collected for troubleshooting is like a trifecta of meth-heroin-cocaine to marketers. The same data you’d need to build a robust support mechanism where the user does zero troubleshooting could be used to lead thousands of marketers right off a cliff. It’s too powerful an attraction. No firewall can withstand the gravity of “the bottom line”.

So take a step back for a moment and re-evaluate the CarrierIQ situation. Should there be more transparency? Yes, definitely, but let’s not turn this into Salem 1692. These tools are incredibly valuable for carriers from a tech support perspective. They can’t go away entirely, but we do need better transparency and regulation of how the data is used.

Comments welcome on the [Hacker News item](http://news.ycombinator.com/item?id=329997).

1 – I’m referring to the [Stanford Prison Experiment](http://www.prisonexp.org/) conducted by Philip G. Zimbardo in 1971.

2 – [Failure demand](http://leanandkanban.wordpress.com/2009/05/24/failure-demand/): a work product that does not meet the customer’s needs and generates additional work. It is opposed to value demand, which is the customer wanting something new.

Google WebM: Who will think of the users?

A quote:

bq. That is all well and good for Google, but what does that mean for me, the guy who just wants to lay on his sofa and watch cute kittens? At this point, pretty much nothing.

This is a short excerpt from an otherwise “well balanced article”:http://blog.andrewhubbs.com/?p=87 explaining the players, roles, and technologies involved in Google’s decision to remove H.264 support from their HTML5 video tag implementation in Chrome. The sentiment expressed is that it doesn’t matter much to us mere mortals. That couldn’t be further from the truth. If Google is successful in pushing WebM as the standard means of encoding video on the web, it will render millions of devices obsolete, impacting the millions of consumers who own those devices. How?

In many articles on this topic, you’ll find passing mention of something called “hardware decoders”. Since my goal is to explain what Google’s actions mean to the Average Joe, I’m going to go through the trouble of backing up a bit and explaining a few things about video, and how it is played back on various devices.

All this talk about video codecs, what does it mean? A codec (short for coder/decoder) can be thought of as a process definition. Say you had a letter that you wanted to send to a friend, but the post office charged based on the length of the letter. You have two choices: either you shorten your message, or you find a method by which you can reduce the number of characters required to communicate the same information. Expressing the same information using fewer characters is something computer scientists call compression. In addition to compressing the message to save on costs, you’d want to make sure the letter was written in a language the recipient understands. And what about the envelope? You need to make sure the recipient can easily open it and get at what’s inside. That sounds like a silly requirement, but it’s relevant when you look at the details. You could think of all these details together as a “codec” for writing and delivering a letter.
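The “fewer characters, same information” idea can be made concrete with the simplest compression scheme there is: run-length encoding. This toy Python sketch is nothing like what real video codecs do, but it shows the principle:

```python
import re


def rle_encode(text: str) -> str:
    """Collapse each run of repeated characters into a count+character pair."""
    out = []
    i = 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1
        out.append(f"{j - i}{text[i]}")
        i = j
    return "".join(out)


def rle_decode(encoded: str) -> str:
    """Reverse the encoding: expand each count+character pair."""
    return "".join(ch * int(n) for n, ch in re.findall(r"(\d+)(\D)", encoded))


message = "aaaaabbbbbbbbcc"           # 15 characters
packed = rle_encode(message)          # "5a8b2c" -- 6 characters
assert rle_decode(packed) == message  # same information, fewer characters
```

Video codecs exploit far subtler redundancy (between neighboring pixels and between consecutive frames), but the goal is the same: say the same thing in fewer bits.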

I’ve lumped codec together with file format here, which is technically incorrect, but trivial for understanding this issue from an end-user perspective.

So how does this relate to web video? All of the seemingly inane details expressed above are the type of things that computer scientists think about when they design a video file format. Interestingly, codec is just one aspect of a video format. I won’t go in to the others, but it’s worth understanding that the problem is very complex and covers many different areas of knowledge. For the moment, let’s look at the compression part.

Inside your computer is a very, very powerful microprocessor called a CPU. Your CPU is capable of computing solutions to a very wide variety of problems. Because of this, we call it a general purpose microprocessor. It is possible, however, to build a kind of CPU that is optimized to perform a very specific task. In the various articles written about Google’s WebM decision, you’ll find mention of an “H.264 hardware decoder”. What does that mean?

H.264 hardware decoder: a specialized microprocessor that is purpose-built to decode the target codec.

Examples of H.264 hardware decoders:

* The video card in your computer probably has one
* The iPhone has one
* The iPod has one
* Most Android phones have one
* If your TV can play video from an SD card or computer, it has one
* If your digital camera shoots video, it probably has one
* Your digital camcorder probably records in AVCHD (incorporates H.264)
* Virtually every video production suite on the market can utilize an H.264 hardware encoder-decoder

So what does an H.264 hardware decoder do for you? In short, it allows you to watch high-resolution video while using far less battery than decoding on your device’s CPU would. Sitting at your desk, you’d think this wouldn’t matter, but playing back a 1080p video encoded with H.264 can push even a modern processor to 80–90% utilization. That means the loud fan in your computer is going to turn on and make noise while you’re trying to watch your movie. On laptops, the consequence is even more severe: you can lose hours of battery life by not using an H.264 hardware decoder. On mobile devices, it’s game over. Your phone doesn’t have a powerful dual-core CPU. It has a tiny mobile CPU that simply doesn’t have the horsepower to decode high-resolution video on the fly. You’ll be stuck with lower-resolution, less-compressed (and therefore larger) video that requires less computing power to decode.

Feel that pit developing in your stomach? Yeah, I’m right there with you.

Let’s look at some numbers:

* 50 million iPhones [1]
* 450,000 iPads [1]
* 220 million iPods (as of Sept 2009) [2]
* 8.5 million Android phones (as of Feb 2010) [3]
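A quick back-of-the-envelope check of those figures (counts in millions, taken straight from the sources cited above):

```python
# Device counts in millions, from the sources cited in the article.
devices_millions = {
    "iPhones": 50.0,
    "iPads": 0.45,
    "iPods": 220.0,
    "Android phones": 8.5,
}

total = sum(devices_millions.values())
print(f"~{total:.2f} million devices")  # prints ~278.95 million
```

278.95 million: hence “close to 280 million” below (note the iPod figure includes some models without video playback, so treat it as an upper bound).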

That’s close to 280 million devices with H.264 hardware support, and I haven’t scratched the surface. There are no televisions on that list. Remember CES and all the hype over Android tablets? None of them have WebM hardware decoders. On every one of these devices, the cost of WebM video playback will be:

* Greatly reduced battery life
* Larger file-sizes (less compression will be required for smooth playback)
* Lower resolution

We’re talking about rolling back every major milestone met by mobile device manufacturers in the last three years, and millions of devices rendered obsolete for video encoded in WebM. What happens if Google goes WebM-only for YouTube? Right now, Apple supports H.264 exclusively on their mobile devices. Why is that? Because Apple considers user experience its first priority. Even if Apple were to implement WebM on their mobile devices, the consequence would be jittery video playback that sucked your battery dry in no time. That’s not a good user experience.

So, what does this mean for the Average Joe? If Google is successful, it means that your user experience will be significantly degraded on any device you own that contains H.264 hardware but no WebM hardware. Have a look at the specs for your phone, portable media player, television, and home theater media devices. Any of them that rely on H.264 hardware are at risk of becoming obsolete.

1 – “TechCrunch”:http://techcrunch.com/2010/04/08/apple-has-sold-450000-ipads-50-million-iphones-to-date/
2 – “World of Apple”:http://news.worldofapple.com/category/world-of-apple-events/
3 – “Numberof.net”:http://www.numberof.net/number-of-android-phones-sold/