Thursday, October 8, 2009

THE PLEDGE

My good friend and colleague Per John Thorenfeldt just concocted this pledge for telcos that provide services to third-party over-the-top content providers:



THE PLEDGE


I will never again call a customer a "CONTENT PROVIDER"

I will never again call a customer a "3rd PARTY"

I will never again call a customer an "OVER-THE-TOP PLAYER"

They are CUSTOMERS or potential customers, period!


I will never again speak of innovation THROUGH customers

On occasion we have the privilege of innovating WITH them

Our job is to stimulate innovation BY them by offering tools and materials

And hopefully innovating a bit ourselves


(Creative Commons Attribution-Noncommercial-Share Alike 3.0 Norway License)

8-)

Wednesday, September 30, 2009

What should we do? Development methodologies. Just a few thoughts.

Every few years I spend some time thinking about development methodologies. This last year I've been very inspired by Bill Buxton's book Sketching User Experiences. The idea that design is both a specialist task and a collaborative task is one I subscribe to. The very practical tip of keeping sketches looking like sketches is also one I have found useful: when something looks too polished, it is easy to get the wrong kind of feedback. When something looks unfinished it solicits input, and one of the things Buxton shows is how designers use this knowledge to add pencil-drawn lines to computer-drawn models, just to make them look more sketchy (but still very nice :-).

We applied this very idea to a portal design project that started a little under a year ago. It's not in production yet, *cross-fingers*, but the first six months got every bit of contribution from the stakeholders that we could wish for. I'm quite willing to give a good chunk of the credit for that to the inspiration from Buxton's book (hmm, perhaps it's time to read that book again).

Today a colleague of mine pointed me to the "guimags" website. They promote a way of developing user interfaces using magnetic GUI elements on a whiteboard (combined with whiteboard markers). They have even written a book about it called "The Unplugged". I believe this is an interesting addition to a toolkit for interaction design. It's not a substitute for mockup and prototyping tools like Axure, Balsamiq or Napkee, but it lets the number of throwaway prototypes go way up without adding much, if anything, to total project cost, so that alone means it's probably a neat idea. (Also, in the spirit of full disclosure, I'm getting a free ebook for having blogged about this :-)





Friday, July 3, 2009

Hm, Drucker was a smartie.

http://www.versaggi.net/ecommerce/articles/drucker-inforevolt.htm

Twitter was originally intended for SMS


Sort of a followup to my previous post. I browsed for Twitter on Wikipedia and found this little gem: originally Twitter was intended to be "a service that used SMS to tell small groups what an individual was doing".

Hehe.

The times they are a-changing

Just heard from a friend of mine working in the "mobile content" industry. They are essentially fed up with all the silliness operators are making them put up with, so they have recently decided that they will simply stop using SMSes for content delivery and instead use Internet protocols, possibly some chat protocol.

This is rich: SMS being dropped as a messaging protocol on phones due to operator silliness. Whoever thought that operators "own" messaging on phones needs to have their expectations adjusted. As my recently deceased friend Erik Naggum said: «If this is not what you expected, please alter your expectations.»

Wednesday, June 24, 2009

Erik Naggum died

My longtime friend Erik Naggum just died. I won't say much about him here although there is a lot to be said about him, but he was a near and dear friend that I was in contact with on an almost daily basis. I will miss him very much for the rest of my life.

Google voice etc.: The plot thickens

Ars Technica is reporting that Google Voice looks good, that Google has acquired about a million phone numbers, and that they are supporting number portability. No surprises there, but certainly interesting.

Google now has a bunch of tools to facilitate personal communications and interactions: Talk, Voice, Docs, Mail, Picasa, Orkut, etc. Google Wave is not yet a product, but in many ways it is the most interesting of the lot. One can only wonder what comes next.

All of the products I mention above will probably consolidate somewhat through incremental improvements, but there are obvious synergies to be tapped here that require more than slow consolidation. The main question is therefore not whether Google will tap into these, but when and how. My guess is that there will be no "big bang" unification project, but that the services will converge over time. The end result, however, will look like this:

  • It will cover all the bases: For most users there will be (almost) no need to escape from the Google universe, whatever their communications and cooperation needs are.
  • It will be tremendously open for third-party innovation: This will enhance the point made above, but will do so by enabling differentiation to be performed by third-party developers rather than Google itself. Very cost-effective, very useful, and something Google already has ample experience with.
If I were working for a telco I would be afraid. Oops :-)

Wednesday, June 17, 2009

More Grails woes


Ok. This is just a short status report.

In Grails 1.1 (the current version as of June 17, 2009) the following does not work:

  • Cascading deletes.
  • Automatic firing of beforeDelete events.

Consequently, anyone depending on these mechanisms for maintaining database consistency will not have a consistent database after any delete that should have involved cascading deletes or the automatic firing of beforeDelete events.

The workaround I've used is:

  • Hand-code cascading of deletes into the beforeDelete method in the classes where cascading is supposed to be applied.
  • Hand-firing beforeDelete before every invocation of delete.

I won't tell you how long it took to figure these ones out. Suffice it to say it was far too long.
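To make the workaround concrete, here is a sketch of how it can be wired up in a Grails 1.1 domain model. The Author/Book classes are hypothetical stand-ins, not the actual classes from my project:

```groovy
class Book {
    static belongsTo = [author: Author]
    String title

    def beforeDelete() {
        // whatever cleanup Grails should have fired automatically
    }
}

class Author {
    static hasMany = [books: Book]
    String name

    // Hand-coded cascade: Grails 1.1 will not do this for us.
    def beforeDelete() {
        Book.findAllByAuthor(this).each { book ->
            book.beforeDelete()   // hand-fire the event ...
            book.delete()         // ... then delete the child
        }
    }
}

// And at every call site, hand-fire before deleting:
//   author.beforeDelete()
//   author.delete(flush: true)
```

The call-site discipline is the ugly part: forget a single hand-fired beforeDelete() and the database is inconsistent again.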


Tuesday, June 16, 2009

Google Wave

This guy has got the most important point about Wave, and he has a cool demo to show it ;)

Saturday, June 6, 2009

Perhaps Google Wave will form a good habitat for intelligent agents?

Google Wave is a significant innovation. See the stuff on the Google pages for an intro and prepare to be impressed; I won't repeat it here.

It just struck me that perhaps Google Wave will enable the growth of a bunch of interesting intelligent agents. These are of course not new, and the initial Wave team has certainly gone out of their way to make it easy for robots to participate in conversations. Intelligent agents on the web had a boom in the late nineties but sort of faded from public view after that. Pattie Maes's research group at the MIT Media Lab did a bunch of interesting stuff; I recommend reading through their publications.

One of the things that is striking when reading these articles is the number of hoops that needed to be jumped through in order to get anything working: proxy servers, strange layouts, genetic programming. And even then the performance was interesting but not altogether impressive, and certainly not ready for prime time in the form of mass-market products.

This might change now. If Wave is launched, as I hope it is, most of the "hygiene factors" necessary for a good user experience will be taken care of in a properly Googlish, Marissa Mayer-ish pedantic manner. The Wave team has already taken care of many, but not all, of the things an intelligent agent needs in order to thrive. This means that it might very well become common practice to include agents in conversations, and that means that agent writers will finally have a big consumer market to offer their wares to. This will take place in two distinct ways:

Semi-covertly: like, for instance, a spell checker that is quietly but consistently checking all input for spelling mistakes. Many other types of covert agents can be imagined: looking up words and phrases and trying to find related texts in books, articles etc. If a covert agent does this work, it will be added as some kind of decoration to the actual content: in separate layout boxes, as links on the side of the main content (like Google does sponsored links today) etc. The main difference being that you are in charge of which agents are doing the decorating, not Google.

Overtly: agents that are more or less real subjects. They can present input somewhat like humans in various roles: research assistant ("I found sixty-nine references to this compound in the standard journals, but none of them referring to toxicity in larvae, however ..." (etc.)), and other roles as well that I can imagine (boss, mother, social network surveillance agent etc. :-) This is of course a minefield for user interaction design, but that doesn't really matter: you see, this kind of behavior simply won't happen if it's not wanted. Agents are invited to conversations, not imposed on them. Furthermore, agents are simple to add, and since the substrate on which they act is already quite rich, work on the agents can be concentrated on adding interesting substance. When some interesting agents are produced, they can start breeding and mixing, perhaps metaphorically and perhaps almost literally through genetic programming. I believe this could be the start of some really interesting times. I just can't wait to get my hands on Wave.

Please please please Google let me have access soon? ;)

About complementarities, competition and open source

Perhaps the greatest joy of working in a multidisciplinary research department is that you get to rub shoulders with great practitioners from other disciplines. By accident or design, I don't know, my research group, which works on open innovation and open source, shares office space with a group of hardcore economist wonks that spend their time making economic models to rationalize pricing decisions and how to relate to regulatory concerns. Despite the themes they are working on, they are bright and pleasant people, and from time to time I learn a few really interesting things from them; I hope that goes the other way too.

In the interest of full disclosure: I am an unabashed Internet fanboy ;) I believe that Internet players will eat the telco industry alive if given the opportunity. I am not the only one with this point of view in the telco industry, but it is fair to say that this view is not presently dominant. This is not the time nor the place to elaborate, but it is something to keep in mind when reading the text below.

Over the last six months there has been some discussion back and forth between our groups about the role of open source, and software platforms in general, in various situations. I don't know if we have landed on a consensus or not, but some really interesting points have come up that I would like to share with you:

It makes business sense to encourage lower prices on complementary services.

Complementary goods are pairs of goods for which lowering the price of one will increase the sales of both. Eggs and bacon, ham and cheese, phones and phone subscriptions are examples. Complementarity also extends to services. If you sell bacon it is very much in your interest that eggs are as cheap as possible, preferably free, since that will increase the sales of eggs and bacon. Of course, if eggs are totally free, or if bacon is very expensive, eggs may become a substitute for bacon (at least as a protein source), so which goods are actually complements is very much a post hoc classification of observed behavior, not a law written on stone tablets. These things may change over time.

This is all very fine and compelling as an abstract explanation, but it also perfectly explains why free and open source has been a good move commercially in some cases. Case in point: Sun Microsystems decided to give Java away for free. Arguably this saved Sun Microsystems from oblivion. Sun was, and still is, earning revenue mostly from the sale of server hardware. At the time when Java was made available for free, Sun was facing fierce competition from Microsoft Windows/Intel ("Wintel") based software/hardware; market share was declining and something needed to be done. What was done was to kickstart an ecosystem of free (as in gratis) software that served as an alternative to the Wintel platform. Software development tools are complementary to hardware, not substitutes for it. By dropping the price of basic development tools very low, Sun allowed developers to develop software that worked for Wintel, for Sun, and in a browser too, which was certainly better for Sun than having developers choose one platform only, and then presumably choosing something other than Sun.

Complements can turn into substitutes

As indicated in the egg and bacon example above, there are situations where eggs can become substitutes for bacon. This is certainly not the only situation where this can happen; the telecom industry in which I work is full of examples of complements that have turned, or are slowly turning, into substitutes. My favorite example is location. Phone companies have always had some idea about where their phones are located. This is true for fixed-line phones but also for mobile phones. This means that phone companies have always believed that one of their unique assets is knowing where their customers are. This was certainly true when GPS was introduced. The first GPS receivers I saw in the eighties were huge beasts intended for ship navigation. I believe it is fair to say that they complemented phone-based location :-) At that time most phones were based on landlines. The fact that you knew where your ships or airplanes were in some cases made it necessary to make a phone call, and the location of the caller might be useful in some way, but not to any great extent. In no way did the refrigerator-sized GPS boxes substitute for phone-based location. Then mobile phones appeared and that changed the game somewhat. If you wanted to report where you were, it was possible to get the position from the phone company, or you could get it from what was then becoming smaller-sized GPS devices. In order to get your position from the phone company you had to enter into an agreement with them, pay for a subscription to a location service and then pay per lookup. In addition you had to grapple with less than well-defined interfaces over strange communications protocols. It is fully understandable that GPS's well-defined NMEA interface became the positioning technology of choice for solution developers. As time has progressed, the size of GPS receivers has decreased.
Today the SiRFstarIII chipsets have unheard-of sensitivity (-159 dBm), and without an antenna a receiver is the size of an adult's fingernail. GPS receivers are cheap, and they will only get cheaper. They are almost a checklist item in today's phones, and soon they will be in basically every phone sold, at least in the Western world. Nobody in their right mind is using telco-based location any more. It is just not worth the effort when GPS-enabled devices are ubiquitous. What was once a weak complement has now become a direct substitute, but it is actually even worse than that: the ecosystem of solutions surrounding location-based services is almost exclusively based on GPS, so even in the cases where telecom positioning is a possible substitute for GPS, the telcos are not in a position to sell their services, because the prospective buyers are just not interested.

So, a weak complement became a strong substitute. I asked my economist neighbours about this: was this something they had any appropriate theory for? They couldn't think of anything. I am not discouraged :-) I am sure there is relevant theory out there, and when I hear about it I will incorporate it into this story.

What other complements can become competitors for telcos and what can be done about it?

In the same way that location was lost to GPS, what other presumably unique assets can we lose? Oh, let me name the ways :-) VoIP is an obvious one; SMS/MMS is being substituted by instant messaging (AIM, MSN, XMPP, ...). In fact, it is hard to think of a single basic product offered by telcos that cannot be directly substituted by something or someone. This seems obvious, but it is not really helpful, since it forces all participants into a Bertrand-style competition (a.k.a. "race to the bottom") where prices end up being dictated by the marginal cost of production. Another place to look is further up in the service stacks, where the customer relationship is being formed and developed. There are many interesting opportunities there; perhaps the most interesting ones are based on social networking? What if telcos were able both to compete directly with the likes of Facebook, Twitter and Google for the attention and contexts of the users, and to create products that mesh seamlessly into the services these other guys are offering, while at the same time sending some revenue to the telcos? It seems to me (and I have to admit this is something I heard from someone else that I unfortunately can't credit properly at this time :-( ) that this is a prime area where telcos can actually gain from participating in an open source effort: identify some of the central tasks that the social networking sites need to do in order to retain their business, define that as an area where the cost of using telco assets instead should be very low, possibly free, invest in free or open source infrastructure to make it happen, and be able both to defend income from existing services and to learn more about how to compete on equal terms with the social networking people for the users' attention and money. It's the same type of play Sun did with Java, and it might just save us from being swallowed by the Internet players ;)

Saturday, May 30, 2009

Dumping DDLs directly from the commandline for Grails apps. Brilliant!

Burt Beckwith's little script is brilliant. (This is the kind of brilliance that goes unrecognized by those who haven't spent a few hours banging their heads against the problem this script solves. There are lots of issues like that in programming.)

Tuesday, May 26, 2009

I don't like databases

From my favorite irc channel:
rmz don't like databases
rmz: they are nasty
rmz: data should be stored in RAM, ergo sum
lmiw: Amen!
Nuff said.

Wednesday, May 6, 2009

Note to self: How to do integration testing

When writing integration tests in Grails: first construct the data objects, then save them (and get error messages when consistency rules are violated), then finally construct connections and test integrations.
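As a sketch, with hypothetical Person and Team domain classes, the ordering looks like this:

```groovy
class TeamIntegrationTests extends GroovyTestCase {
    void testAddMember() {
        // 1. Construct the data objects first.
        def person = new Person(name: "Ada")
        def team = new Team(name: "Research")

        // 2. Save them; a failed save() surfaces violated
        //    consistency rules as readable errors right here.
        assert person.save(flush: true), person.errors
        assert team.save(flush: true), team.errors

        // 3. Only then construct connections and test the integration.
        team.addToMembers(person)
        assert team.save(flush: true), team.errors
        assert Team.get(team.id).members.contains(person)
    }
}
```

Saving before connecting means a validation error shows up as a failed save with errors attached, instead of as an inscrutable failure three steps later.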

Just a practical note to all Grails developers

As you all know, testing is good for you. But in Grails it is not just good for you: you absolutely, positively need to add tests for everything, and you need to do it up front. No exceptions accepted, and severe punishment meted out to those who fail to follow this commandment.

Why? Well, Grails, wonderful as it is, probably has the least intelligible error messages in the known universe when running in production mode. Usually (or at least usually in my case) they consist of a mile-long stack trace containing references to null pointers and bad servlet contexts and other things that will give you exactly zero clue about what you have done wrong.

The unit and integration tests, on the other hand, will give you intelligible error messages, so for that reason alone you should write tests as if it were your religious duty to do so, which it probably should be anyway, so just do it.

Monday, March 30, 2009

Network investments

Just a short notice this time. From time to time I hear an argument that can be roughly paraphrased like this:
Building broadband network infrastructure is expensive, and unless infrastructure owners have some way of getting their investment back, the growth of the Internet will soon stop, so we had better find some way of letting the infrastructure builders get more money or we'll all suffer in dialup hell.
... or some version thereof. I won't argue against the fact that network hardware is expensive, because it is. However, I do want to challenge the definition of who the "infrastructure owners" are, and through that the conclusion that investment in network infrastructure is doomed to halt. The reasoning goes something like this:

  • The first premise is that network infrastructure is no longer limited to routers and wires. P2P networks and other types of overlay networks are becoming an increasingly important part of the net's future.
  • In addition to, and in part also as a substitute for, wires and routers, the overlay networks use storage and processing in their infrastructure. A big local disk can make the need for a fat pipe less pressing. Not only that, but a big disk in the neighbourhood makes the need for a fat pipe out of the neighbourhood less pressing.
  • This means that investments in P2P hardware will not only make an individual's service better; they will also make the service of the community, and of the internet as a whole, better.
  • Add to this that storage is dropping in cost quicker than wires (and to some extent routers), and we see that there is an increasing incentive for end users to invest in cooperative overlay networks.
The conclusion of this line of reasoning is that the network will continue to evolve. Investment will shift gradually (but probably never completely) from wires and routers purchased as a service from an internet service provider, to processing and storage infrastructure owned and operated by end users, and used by users local (by some network-distance metric) to each other.

I just needed to write that down, because I believe it is important ;)



Wednesday, February 4, 2009

Skynet infrastructure is arriving on schedule ;)

IBM is apparently planning to deliver a 20 petaflops supercomputer in 2011. Now, twenty petaflops is 20 * 10^15 = 2 * 10^16 operations per second. This amount of computational power might be in the ballpark for emulating a human brain: Matt Bamberger (who I found from a random Google search; I don't know much about him) estimates that you need about 20 petaflops. Singularity guru Ray Kurzweil estimates [1, p. 71] that you need about 10^16 calculations per second. Kurzweil estimates that 10^16 operations per second will be reached "early in the next decade" (sometime after 2010). This means that Kurzweil's estimate of when the (assumed) necessary computational power becomes available is right on track. There might still be the small matter of programming the thing to actually emulate a human, but within a decade or so it seems that we can have a proper hardware substrate for a Skynet implementation if we so desire ;)

[1] "The Singularity Is Near: When Humans Transcend Biology", Ray Kurzweil, 2005 (2006 printing). Duckworth Publishers.

Wednesday, January 28, 2009

Reading Google's tea leaves

Google isn't like other companies, at least not in every way (more below ;). One difference is that they usually do not announce their strategic intents very clearly, by normal standards. Interpreting their intent is a bit like reading tea leaves: you mix present-world facts, history and guesses about the personalities of the people involved, and hope to get an interpretation. The sources I look at daily are:
In addition I read other things, and talk to people. The thing to keep in mind is that Google is a very technical company: in order to understand what they are doing, one must understand a lot of technical details and use them to make educated guesses about the intent behind the interest in those issues. I guess the average journalist or business analyst does not have these skills, hence Google's reputation for being very tight-lipped about their plans. I don't find them to be. I am very seldom (much) surprised by what they eventually announce. The thing is that Google does say a lot. Metaphorically, they leave a lot of dots around in clear sight, but they leave it to us observers to connect those dots to form coherent pictures. Of course, it helps if you can recognize what a dot is, and if you can read the technical literature to make educated guesses about what types of lines the particular dots tend to be endpoints for :-)

Right now I'm interested in Google the phone company, so I'm interpreting tea leaves every day to see what I can divine about the subject. Over the last few weeks a few things have popped up indicating that Google is preparing a massive onslaught on the telecommunications industry.

The facts:

Google already has the Google mail, Google Talk and Grand Central services established.

The Android mobile platform is really hot. I got my hands on a G1 developer handset last week, and apart from a few issues with battery lifetime, which my contacts at the G1's manufacturer HTC assure me will not be present in production models, it is hot. There are a few warts and wrinkles, but this is normal for all new phone models, and there is a sequence of upgrades planned (the first one called "Cupcake"), so I'm not really worried about this. Just about every major handset manufacturer except Nokia has announced that they are going to produce Android handsets. The apps in Google's app store are, as expected, a mix of great and bogus, but there are a lot of great little apps there. I predict that Android will be a winner. In some ways the situation is similar to the PC market when Microsoft arrived with MS-DOS: finally the coupling of handset hardware and software seems to loosen up, which in turn opens up for greater specialization in the industry, which in turn leads to better and less expensive products. To be sure, Nokia is the largest manufacturer (a billion handsets or so sold in 2008), but that doesn't matter in this picture: if most of the competent developers in this world create interesting and useful stuff for Android, Nokia will be reduced to insignificance within five years (give or take a few years).

Then there is the case of Google's reductions and changed recruiting policy. They are closing down a bunch of engineering offices, firing temporary workers and axing projects. I won't discuss whether this is a smart thing to do or not, but I will point out that the relatively anonymous Grand Central service does not seem to be affected by the reductions. It is a little like the dog that didn't bark in the Sherlock Holmes story: what doesn't happen can have greater significance than what does, and I think it does in this case.

Finally there is the rumor that Google is again considering buying Skype. Skype has turned out to be a bad fit with eBay's other businesses, but I agree with eBay's assessment that it is a great standalone business. However, it would be an even better business combined with Google's offerings:

Grand Central is a great telephony product with superb configurability and call filtering, but it is currently only available in North America, still in beta, and has a limited user base. Skype is a great telephony product with worldwide presence; it is well out of beta and mixes traditional and IP telephony in a superb manner, but has very limited configurability and call filtering options. This is perfect complementarity if I ever saw it.

The chat and video conferencing options are more or less equivalent to what is available in Google Talk. There is some room for technical consolidation, and certainly no showstoppers when contemplating this. Complementarity, but not super-compelling.

Skype has recently announced a fairly decent mobile offering both for Java-based phones and for Android. Google has a thoroughbred mobile effort. I'd say this is another perfect match. (Update: they didn't buy Skype, but bought Gizmo5 instead; cute move :-)

In conclusion

In sum, from a functional perspective, an acquisition of Skype could be of great benefit to both Google and Skype, and would be a long step towards creating a worldwide, comprehensive phone service integrated with Google's infrastructure.

Google's growth in advertising revenue is declining, which means that Google is maturing as a company. Maturing companies as a rule see decreasing returns on their main product lines, but the market still imposes an imperative of growth. One of the few sectors that is both a match for Google's core values and capabilities, and that has the kind of numbers that would make Google interested, is the telecommunications industry. The telecom industry is dominated by huge, mature companies that to a large extent depend on yesterday's technologies to deliver the day before yesterday's products (voice telephony on synchronous digital networks, to be specific ;). I'd say this is also a good match for Google, with its superb engineering capabilities, highly efficient internal infrastructure, and an existing revenue base that will not be cannibalized by entering into telecommunications.

The way I read the tea leaves is that Google is at present pointing its guns at the telecom industry. They are doing so in broad daylight, so everyone who looks can see; they are just not announcing it in clear text. The dots are visible, but we have to draw the pointing gun ourselves.




Monday, January 12, 2009

Google, the phone company

It's about time we start taking Google seriously as a phone company. I'll walk quickly through three facets of their current offerings, and then think aloud about what this means.

Today (January 2009) Google has three major phone-related products that I know of:

  1. Android. A software architecture for mobile phones. Google doesn't actually sell any Google phones, but the Android architecture makes it -really- simple to write decent software for mobile phones. Nobody has actually done that before them.
  2. Grand Central. When signing up for Grand Central (currently in private beta) you get a phone number. When someone calls that number, you decide what to do with the incoming call. You can route it to another phone, pick it up on the web, route it to a voice mailbox, or listen in on the voice mailbox and pick up the call (like you can do on an ordinary old-fashioned answering machine). All of this is administered through the web. So far there seems to be little integration between Grand Central and the rest of the Google suite (Talk, Docs, Mail etc.), but it's a fair guess that this will happen at some stage. One particularly interesting possibility is that configurations and data for the service are made accessible through Google's data APIs, similarly to the way Blogger, Docs and other services have been made available. As I understand it, Grand Central as it operates today routes phone calls to the location within GC's network where it is least expensive to terminate the call. If the call turns out to be a local call, so much the better. This is a North American phone-centric solution to the termination cost problem that will otherwise bedevil anyone trying to enter the telephony market. More about this later.
  3. Google Talk. Google Talk has had voice telephony for a long time, and video was introduced in 2008. Google Talk has never had any kind of direct connectivity to licensed telephony with phone numbers. The only people you can talk to are other Google Talk users.

Now, let's see what could happen if we start playing with these components.

Fully web configurable telephony
Actually, this is reality already. Any phone with an internet connection and a decent browser (like the ones Android or the iPhone have) can access the configuration menus of Grand Central, and Grand Central can of course route phone calls to the phone, so there you have it: a mobile phone with all the best telephony features from the last twenty years, with a decent user interface (no more %&/()#!!! DTMF commands to remember).

Receiving phone calls into GTalk
This is of course a no-brainer, and in all likelihood something that is already in the works in some lab somewhere. Technically it is just a matter of adding an adapter for GTalk's Jingle interface and hooking it into the internals of Grand Central: when it rings in GC, it rings in GT. Simple as that. Of course it is not quite as simple as that, because just doing this will make a new product:

A "SkypeIn" clone ("GTalkIn"?)
Google Talk with incoming calls to real phone numbers is effectively equivalent to SkypeIn. Of course, the filtering and routing in Grand Central far surpasses what is available in Skype, but that is only to be expected ;) Another interesting feature of this option is that it effectively removes termination costs for handling incoming calls. Since GTalk uses the Internet, phone traffic is priced as best-effort internet traffic. In many cases this is much less expensive than having to pay termination costs to another operator, in addition to the transfer cost of reaching that operator in the first place.

But wait, there is more:

Android GTalk Client
If a GTalk client is introduced to Android, an Android user will have several options for receiving incoming calls:
  • Receive them directly to the phone number associated with the phone (as an ordinary mobile phone does today).
  • Receive them to an ordinary listed phone number (as Grand Central allows today) and route that conversation to the mobile phone either as an ordinary phone conversation, or as a GTalk conversation.
  • Choosing which type of conversation to use can be done manually through Grand Central's menus, as today, or ...
  • ... automatically selecting GTalk if a sufficiently fast wifi network is available, or a 3G data plan with sufficient bandwidth and a low enough price is available, or ...
  • ... selecting ordinary mobile phone termination if GTalk termination is unavailable, or ...
  • ... selecting a fixed line phone if there is some rule that says "if I can see my home wifi network, that means that I am home, but that means that I want my phone calls to be routed to the fixed line phone"
  • If the connection quality for GTalk falls below some threshold, the conversation can be passed back to the phone network, and when internet connectivity is reestablished, the conversation is passed back.
etc. The pattern emerging here is that the real-time voice traffic will go through whatever channel is the most opportune at the moment, but the signaling and configuration traffic ("signaling" in telco parlance) will almost exclusively be sent over the Internet.
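The selection rules above could be sketched as a simple priority function. Everything here is hypothetical: the function name, the bandwidth threshold, and the channel labels are mine for illustration, not anything Google has published.

```python
# Hypothetical sketch of the channel-selection rules described above.
# All names, thresholds, and channel labels are illustrative, not a real API.

def select_channel(wifi_bandwidth_kbps, has_3g_data_plan, data_plan_price_ok,
                   home_wifi_visible, gtalk_reachable):
    """Pick the most opportune channel for an incoming call."""
    if home_wifi_visible:
        # "If I can see my home wifi network, I am home" -> fixed-line phone.
        return "fixed-line"
    if gtalk_reachable and wifi_bandwidth_kbps >= 128:
        return "gtalk-over-wifi"
    if gtalk_reachable and has_3g_data_plan and data_plan_price_ok:
        return "gtalk-over-3g"
    # Fall back to ordinary mobile phone termination.
    return "mobile-phone"
```

Note that all of this decision logic lives in signaling, which travels over the Internet regardless of which channel carries the actual voice traffic.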

Competition on features
If the competition between voice operators (fixed and mobile) no longer centers on cost, then the features available will become important. Google is well positioned to offer a wide spectrum of services for their phone products (voicemail, text recognition, mailbox integration etc.), and this in itself might be an important factor in attracting customers from their competitors.

Termination cleverness
Apart from running a physical network, a major cost component of being a telecom operator is the combination of transit and termination costs. Transit costs are the costs of line capacity between operators. Termination costs are the costs you pay to terminate a call in someone else's network. So if I call you, my phone company has to pay for the use of the physical line connecting your and my telco, and in addition my telco has to pay your telco termination costs. If the call happens within the same company, no termination costs need to be paid. The combination of termination and transit costs is really important for the cost structure of a telco, so all kinds of games are played with technologies and regulatory bodies in order to minimize these costs.
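A toy calculation makes the structure plain. The prices below are invented for illustration; the point is only that an off-net call carries both cost components while an on-net call carries neither.

```python
# Toy model of per-minute call cost from the calling telco's perspective.
# All prices are invented for illustration.

def call_cost_per_minute(same_network, transit_cost, termination_cost):
    """Transit pays for the inter-operator line; termination is paid to
    the receiving telco. Both apply only when the call leaves our network."""
    if same_network:
        return 0.0  # on-net: no transit, no termination
    return transit_cost + termination_cost

# Off-net call: pay both components (here 0.2 + 1.0 US cents per minute).
off_net = call_cost_per_minute(False, transit_cost=0.002, termination_cost=0.01)
on_net = call_cost_per_minute(True, transit_cost=0.002, termination_cost=0.01)
```

This is why keeping calls on-net (or on the Internet) matters so much: every off-net minute leaks money to other operators.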

In the US, mobile-to-mobile (m2m) termination is usually based on bill-and-keep, meaning that mobile operators don't bill each other for calls going between them; fixed-to-mobile (f2m) and mobile-to-fixed (m2f) calls are symmetrically priced, usually equal to the termination cost of the fixed-line incumbents (which is very cheap, on the order of one US cent). So in the US termination is not really a big cost, but long-distance transit is. In Europe local termination represents the dominant cost, but transit is usually quite inexpensive. European termination costs are (mostly) regulated by the EU, which tries to regulate in the direction of symmetric pricing for fixed lines, mostly symmetric pricing between mobile operators within a country, and wildly varying asymmetric pricing both between mobile and fixed operators and between mobile operators in different countries.

This also means that Google has a few cards of its own to play: the vast amount of network fiber they have bought and rented around the world can be used in any way they like. If they choose, they can route phone conversations through it. In the US they can use local (and inexpensive) termination to forward calls to ordinary phones, and their own backbone fiber to handle transit. In Europe they can still use the Internet for transit, but because of higher termination costs there doesn't seem to be a single move that can put them in a winning position with regard to local termination. There are several moves that can help them though:
  • They can become a local VoIP operator in several countries. VoIP operators are regulated as fixed-line operators, meaning that they enjoy low-cost symmetric pricing to other fixed-line operators within a country. Calls routed through GC can then appear to come from a VoIP operator, and calls going into local VoIP phone numbers can be routed to wherever GC wishes them to be routed. If the amount of incoming calls plus whatever advertising revenue Google can get from these calls exceeds the costs of outgoing VoIP calls, this will generate positive revenue.
  • They might get away with becoming a Mobile Virtual Network Operator (MVNO), and play the mostly symmetrical pricing game between the mobile operators. I don't know how they can get away with being both a VoIP operator and an MVNO, and be allowed to route traffic between these through the Internet, but if they can, it might be possible to save some cost there too. Perhaps.

If you use Grand Central and the number you are being called on (your GC number) is "located" (terminated) somewhere other than where you are, several things happen. First the call is terminated at Google, and Google is paid a termination fee for terminating the call. Then they have to forward the call to you, and if they only used the phone network, that would probably mean paying a termination fee to reach you where you are. But this is Google; they don't have to use the phone network. They route the call through the Internet to a phone interface physically close to you and then call you from that number, with very low termination costs. If you are in Ireland you will be called from a number in Ireland; if in New York, you will be called from a number in New York. The conversations themselves are moved around the world by Google, who does it as cheaply as or cheaper than anyone else, meaning that it will be very hard indeed to undercut the prices we can expect Google to offer, once they start offering international phone calls.

Your GoogleID, not your phone# is your primary identity
When you open the box with an Android phone, the first thing you do after sticking in your SIM card is to enter your Gmail account. Immediately your GTalk contact list appears in the phone and you are off. Now, normally GTalk contact lists don't contain phone numbers, but this is sure to change quickly, and once entered, why bother with phone numbers any more? What you want to reach is your contact, not the phone number, right? This of course opens up the possibility of avoiding the phone operators entirely to reach your contacts. If you are both on GTalk, then use GTalk; if not, use the phone networks. You'll always reach your contact, you'll reach them in the least expensive way, and if you can't reach them by phone you can either send a mail or a voicemail.
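The fallback logic described above is trivial to sketch. The contact structure and channel names here are hypothetical; the idea is just "try channels in order of decreasing preference and increasing cost".

```python
# Sketch of the "reach the contact, not the number" fallback described above.
# The contact dictionary keys and channel names are hypothetical.

def reach(contact):
    """Try channels in order: GTalk first (cheapest), then the phone
    network, then an asynchronous message as the last resort."""
    if contact.get("gtalk_online"):
        return "gtalk"
    if contact.get("phone_number"):
        return "phone"
    # Can't reach them live: leave an asynchronous message instead.
    return "email-or-voicemail"
```

The caller never sees a phone number at all; the identity is the contact, and the number is just one possible route to it.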

Interfacing with VOIP through SIP
An interesting issue was pointed out to me by Ruben O. at open-voip.com; interfacing with other voip operators offers a bunch of opportunities for Google.
  • Today VoIP operators typically use traditional telcos for interconnect, even if this is technically not necessary. According to Ruben the reason is that the VoIP exchange infrastructure is very poorly developed. Google is large enough to change this if they wish. They could become a hub for SIP-based VoIP infrastructure and possibly even start to collect termination fees from other VoIP operators.
  • Even if Google has no wish to become a generic hub, a SIP interface has other advantages. One of them is that it is quite simple to interface with both Microsoft's and Cisco's unified communications solutions, as well as IMS telephony (if and when that ever becomes a reality :-). This would mean Google could empower organizations to interconnect with Google's users (who are susceptible to Google's ads) at costs that may be lower than if they connect through other types of telephony systems with more traditional pricing structures.

All kinds of voice mashups
Once a data API is available, all kinds of mashups are possible, and some of them will be created. This is the same type of game BT is playing with Ribbit and its API offerings, as well as what Vodafone is trying to do on Betavine. Whatever good ideas are discovered elsewhere, you can be certain that Google telephony will be able to do similar things. In fact, when I first heard about Grand Central, my first thought was "wow, this is almost just like Ribbit, without Adobe software and without scripting ability". Google has a good chance of being a first choice among those innovators that already have an audience using Google's services. Integration into CRM and sales support systems is sure to come (similar to the Salesforce integration of Ribbit), and possibly even social networking sites.

Advertising
I don't know how, but I am sure there will be some way to earn advertising money on voice, and that whatever the solution for this is, Google will implement it.


I am sure there is more, and I'll try to update this post when I find it, but this is all I have for now ;)



Friday, January 9, 2009

More grails fun and games

In an app I'm writing, I have a field declared "byte[] data" in a class called MediaContent. Just for grins and chuckles I inadvertently set this field to be public. After fixing a lot of other things, I then started getting weird error messages about failed attempts to access the static method MediaContent.getData(). This confused me a lot until I discovered that if a field X is public, then Grails will assume that getX is a static method.

This is counterintuitive and it just cost me half an hour *fume* ;)

Grails upgrade fun and games

I'm using Grails in some of my projects, and yesterday I tried to upgrade from 1.0.3 to 1.1-beta, and boy, that was tons of fun. In essence everything crashed. Now things work, so I'm happy to report the procedures necessary to make them work:

Before upgrading, do this:
  • Delete any plugins you don't want to take with you into the upgrade.
  • If you are upgrading the acegi plugin you MUST DO THIS BEFORE YOU UPGRADE GRAILS, so just do that (instructions found here).
  • Possibly (I don't know) you also need to upgrade any other plugins before upgrading grails.
Then modify your path to point to the new 1.1-beta binary and run the ordinary upgrade command from the command line. That didn't work either. *sigh* So in short: avoid 1.1-beta2, use 1.0.4 instead. It at least seems to be working.

If you don't follow these instructions there will be all kinds of incomprehensible error messages, and the upgrade will fail. *sigh*

Thursday, January 8, 2009

Has OpenID lost its mojo?



The Ostatic website has an article titled
"OpenID Gets Explained, Maligned, and Dropped" that reports that support for OpenID is levelling out, and that some who have tried it are less than enthusiastic about it. It also points out that the necessity of making sites like openidexplained is a symptom that OpenID isn't as simple to understand and use as one could wish.

I've been studying and using OpenID for about a year now, so I think I'm qualified to have some opinions on this issue ;)

First off, I believe OpenID is great, but there are some issues:

  1. Maturity.
    The libraries supporting OpenID are not very mature. Not in the sense that they don't work (the ones I've tried do work as advertised), but in the sense that they don't support all the use cases that are probably necessary for success. The ones I'm thinking about are:

    • Actual transfer of additional data (age, nickname etc.) from the identity providers. The standard supports this and it is clearly useful; it just isn't supported in practice.
    • The ability, when logging in, to choose which local account you wish to associate with the OpenID identifier you use.
    • The ability to let a user identity at a site (an identity consumer) be associated with multiple OpenID identities. This is useful because it allows you to be less dependent on a single OpenID provider for access to services. Again, there is nothing in the standards that prevents this; it's just not implemented yet.

    These issues reflect the simple fact that the web is not yet used to a separation between identity providers and identity consumers, and there are several details that need to be fixed before the experience becomes flawless ;) I still believe this can happen, and it might even happen with OpenID as a carrier.

  2. Lack of support for management of trust relationships. This is imho a much more serious issue. If you run a site, how do you decide which ID providers to trust? Example: I run a newspaper and require that everyone allowed to comment on articles must be above the age of 18. Even if an identity provider provides an age, as the OpenID standard opens for, why should I trust that datum? The solution to this problem is not covered by the OpenID standard, and probably can't be, since it involves a trust relationship between the identity provider and the consumer. Unless a way is found to make it simple to enter into this type of relationship, nobody with a need for trustworthy identities will have a strong incentive to use OpenID; since the trust relationship will be bilateral anyway, there probably won't be many of them, and then the benefit of using a standardized protocol won't be that big. I believe this issue needs to be addressed. There are many ways in which that could be done: standard trust contracts could be produced to simplify the production of bilateral trust agreements. Alternatively, a "trust network" could be established, letting some set of authorities authorize identity providers (for instance, a national tax authority could authorize the identity providers that are allowed to identify people when filing their income statements). It is completely unrealistic to believe that there will ever be a single source of all authority; a tree with multiple roots, or possibly a web of trust, are more viable models. However, this -needs-to-be-in-place- for a separation of identity provision and consumption to be successful for sites that put any value on actual identities.
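The multi-identity association mentioned in point 1 is really just a many-to-one mapping from OpenID identifiers to one local account. A minimal sketch (all structures hypothetical, not any existing OpenID library's API):

```python
# Sketch of the many-to-one mapping discussed above: one local account
# associated with several OpenID identifiers. Structures are hypothetical.

accounts = {}  # local_user -> set of OpenID identifiers

def associate(local_user, openid_url):
    """Link an additional OpenID identifier to an existing local account."""
    accounts.setdefault(local_user, set()).add(openid_url)

def local_user_for(openid_url):
    """Find which local account (if any) an OpenID login maps to, so the
    user can log in with any of their associated providers."""
    for user, ids in accounts.items():
        if openid_url in ids:
            return user
    return None

# One user, two providers: losing access to one provider doesn't lock
# the user out of the site.
associate("alice", "https://alice.example.com/")
associate("alice", "https://alice.provider2.example/")
```

Nothing in the OpenID protocol prevents this; the missing piece is simply consumer-side library and UI support for managing the mapping.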

To summarize: I do believe that the basic ideas behind OpenID are great, and OpenID could very well be the best identity protocol available yet. However, there is still something unfamiliar about the whole concept of separating identity provision and consumption, not just into different technical servers, but into possibly totally different organizations. The issues related to this need to be discovered and acknowledged, and fixes need to be made and disseminated, and unfortunately this will take some time.

Tuesday, January 6, 2009

The future of TV

First a disclaimer: a lot has been said and written about the future of TV, and what I'm about to say is probably not very new at all. That said, I feel a need to formulate clearly what I believe is the future of TV, and why. This blog is not the place to be complete with references, annotations etc., so I'll just give some facts and some of the future, as I see them.

Some facts

TV and computer technologies are converging technically. This is nearly a no-brainer, but I'll just list a few facts to hammer it in:
  • New PCs and monitors (TV and otherwise) can in general be plugged into each other using standard cables, usually HDMI. This blurs the market separations and lowers costs both for TVs and computers.
  • Set-top boxes and media players are computers and have been for a while. They look at digital signal streams and present them on a screen; they are in general not user-programmable, but inside they are just computers with video cards.
  • DVRs are now mainstream technology, but in essence they are just computers with disks and video cards. Some are even user-programmable, like Mythbox.
  • Newer TV boxes like the Popcorn Hour look and feel like just another TV box you buy at the electronics store, but they play all sorts of video formats, can talk to file servers and even contain a P2P client for downloading content from the net.
In sum I think it's fair to say that the technical differences between TVs and computers are mostly gone; what is left is to find out how the usage patterns from TV and computers will merge. This will happen all by itself, driven by users, once the cost of the enabling devices and services is low enough. I believe it is happening as we speak.

TV programming is becoming universally available. The trend I'm observing is that all content ever made becomes available more or less for free in some form or way. Not necessarily with high fidelity, but from some source, with some kind of degraded or modified service quality. YouTube is one source, iTunes another, more or less legal P2P networks a third, but significantly, more and more content producers are putting some or all of their works online legally (BBC, Comedy Central, NRK, just to mention three). I heard a story today about some kids who wished to see a football game but didn't want to pay for it. They searched the net, found some site in the Middle East that made the game available, and then streamed from that site while chatting with their buddies about the game on MSN. A catchphrase for this phenomenon could be "The content is already out there" :-)

The Internet is becoming an increasingly important channel for TV distribution. Streaming TV over the Internet is a way for traditional TV channels to reach audiences abroad and at times when the broadcast schedule isn't synchronized with viewers' schedules. It also broadens the usage of TV material in derived works (blogs etc.).


The future 

The internet is morphing into a partly user-financed content delivery network. When you buy an internet router, you are in effect adding a network to the existing internet. In fact, any box you add to your network becomes part of the internet. When you decide to participate in a P2P network, you are in addition adding caching capacity as well as network bandwidth to the network. In fact, you are adding exactly the kind of resource that Akamai is selling to traditional content distributors, but you are adding resources in a way that explicitly improves the distribution of the kind of content you like. I'll spell this out clearly: it is your preferences as a consumer that decide what kind of network you are adding resources to. This is directly the opposite of the traditional broadcast model, where it is the broadcaster that decides where to invest in network resources. Also, it means that the investment decision for the consumer is divided in two: how much to invest, and how to split that investment between raw bandwidth and storage. In general the cost development for storage is better than for bandwidth: performance per dollar increases faster year by year for storage than for bandwidth. Over time this gives an ever increasing incentive to find new ways of using more storage and less network.
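To see how a growth-rate difference compounds over time, here is a toy calculation. The 40% and 25% yearly improvement rates are invented for illustration; only the gap between them matters for the argument.

```python
# Toy compounding of performance-per-dollar for storage vs bandwidth.
# The 40%/year and 25%/year growth rates are invented for illustration.

def perf_per_dollar(initial, yearly_growth, years):
    """Compound improvement in performance per dollar over some years."""
    return initial * (1 + yearly_growth) ** years

storage = perf_per_dollar(1.0, 0.40, 5)    # ~5.4x after five years
bandwidth = perf_per_dollar(1.0, 0.25, 5)  # ~3.1x after five years
advantage = storage / bandwidth            # storage pulls further ahead yearly
```

Even a modest difference in yearly growth rates makes storage-heavy designs (caching, prefetching, P2P replication) progressively more attractive than bandwidth-heavy ones.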

If you have a single broadband connection, then all of your content obviously has to flow through it. If at some point you get more than one broadband connection (for instance ADSL and cable, and perhaps even fiber), then both switching traffic between these networks and gaming pricing policies become useful (i.e. prefer a slower network if the faster network is dangerously close to passing some barrier that makes it more expensive to use). Using wireless networks between homes is also an option that ties into this "multiple networks per household" world, but I won't explore it in any detail here.

In general we don't know much about what is happening. The direct indicators are poor. Unless we go into a representative selection of homes and observe what is actually being consumed, we probably won't know. Internet traffic is to some extent possible to monitor using deep packet inspection (DPI), but unless that is actually done, and the results interpreted by someone who has some idea what to look for and how to interpret it, neither publishers nor network operators will have the foggiest clue about how consumption patterns are changing. Encrypted P2P (the default) can possibly be identified as P2P traffic by DPI, but exactly which content is being distributed is not possible to determine by inspecting packets as they flow over the wire.

If you want to know your viewers, have them contact you. Direct observation is good, so if you create some media content, your best way of knowing who is consuming it is probably to have them contact you. This can be done by clicking on links, sending mail, whatever. I believe that this fact alone will lead to the emergence of business models based on user interaction with media content rather than passive consumption. Since most content imho will be available to anyone, passive consumption will essentially be unbillable and untraceable, so business models based on mandatory billing or tracing of consumption are unviable.

That's all for now, but I'll revise this entry as I think of more.