Friday, January 9, 2009

More Grails fun and games

In an app I'm writing, I have a field declared as "byte[] data" in a class called MediaContent. Now, just for grins and chuckles, I inadvertently set this field to be public. After fixing a lot of other things, I then started getting weird error messages about failed attempts to access the static method MediaContent.getData(). This confused me a lot until I discovered that if a field X is public, Grails will assume that getX is a static method.
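To make this concrete, here is a minimal Groovy sketch. Only the class name MediaContent and the field name data come from my app; the second class and the comments are just my attempt at illustrating what I think happened:

    // Minimal sketch of the property-vs-field distinction in Groovy/Grails.
    // Only the class name MediaContent and the field 'data' are real; the rest is illustrative.
    class MediaContent {
        byte[] data            // no modifier: a Groovy property, so getData()/setData()
                               // instance accessors are generated for Grails to use
    }

    class MediaContentWithPublicField {
        public byte[] data     // an explicit 'public' turns this into a plain field rather
                               // than a property, so no getData() accessor is generated --
                               // which seems to be where my strange errors came from
    }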

This is counterintuitive and it just cost me half an hour *fume* ;)

Grails upgrade fun and games

I'm using Grails in some of my projects, and yesterday I tried to upgrade from 1.0.3 to 1.1-beta, and boy, that was tons of fun. In essence everything crashed. Now things work, so I'm happy to report the procedures necessary to make them work:

Before upgrading, do this:
  • Delete any plugins you don't want to take with you into the upgrade.
  • If you are upgrading the acegi plugin you MUST DO THIS BEFORE YOU UPGRADE GRAILS, so just do that (instructions found here).
  • Possibly (I don't know) you also need to upgrade any other plugins before upgrading Grails.
Then modify your path to point to the new 1.1-beta binary and run the ordinary upgrade command from the command line. In my case even that didn't work. *sigh* So in short: avoid 1.1-beta 2 and use 1.0.4 instead. It at least seems to be working.

If you don't follow these instructions there will be all kinds of incomprehensible error messages, and the upgrade will fail. *sigh*

Thursday, January 8, 2009

Has OpenID lost its mojo?



The Ostatic website has an article titled "OpenID Gets Explained, Maligned, and Dropped" that reports that support for OpenID is levelling out, and that some who have tried it are less than enthusiastic about it. It also points out that the necessity of making sites like openidexplained is a symptom that OpenID isn't as simple to understand and use as one could wish.

I've been studying and using OpenID for about a year now, so I think I'm qualified to have some opinions on this issue ;)

First off, I believe OpenID is great, but there are some issues:

  1. Maturity.
    The libraries supporting OpenID are not very mature. Not in the sense that they don't work (the ones I've tried do work as advertised), but in the sense that they don't support all the use cases that are probably necessary for success. The ones I'm thinking about are:

    • Actual transfer of additional data (age, nickname etc.) from the identity providers. The standard supports this and it is clearly useful, it just isn't supported in practice.
    • When logging in, the ability to choose which local account you wish to associate with the OpenID identifier you use.
    • The ability to let a user identity at a site (an identity consumer) be associated with multiple OpenID identities. This is useful because it allows you to be less dependent on a single OpenID provider for access to services. Again, there is nothing in the standards that prevents this, it's just not implemented yet (I've sketched what I mean in the code after this list).

    These issues reflect the simple fact that the web is not yet used to a separation between identity providers and identity consumers, and there are several details that need to be fixed before the experience becomes flawless ;) I still believe this can happen, it might even happen with OpenID as a carrier.

  2. Lack of support for management of trust relationships. This is imho a much more serious issue. If you run a site, how do you decide which ID providers to trust? Example: say I run a newspaper and want everyone who is allowed to comment on articles to be over the age of 18. Even if an identity provider supplies an age, as the OpenID standard allows for, why should I trust that datum? The solution to this problem is not covered by the OpenID standard, and probably can't be, since it involves a trust relationship between the identity provider and the consumer. Unless a way is found to make it simple to enter into this type of relationship, nobody with a need for trustworthy identities will have a strong incentive to use OpenID; since the trust relationships will be bilateral anyway, there probably won't be many of them, and then the benefit of using a standardized protocol won't be that big. I believe this issue needs to be addressed, and there are many ways it could be done: standard trust contracts could be produced to simplify the drafting of bilateral trust agreements. Alternatively, a "trust network" could be established, letting some set of authorities authorize identity providers (for instance, a national tax authority could authorize the identity providers that are allowed to identify people filing their income statements). It is completely unrealistic to believe that there will ever be a single source of all authority; a tree with multiple roots, or possibly a web of trust, are more viable models. However, this *needs* to be in place for a separation of identity provision and consumption to be successful for sites that put any value on actual identities.
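To illustrate both points, here is a hypothetical Grails-style sketch. None of these class names come from any actual plugin, and the whitelist is deliberately naive; that naivety is exactly the part that doesn't scale without trust contracts or a trust network:

    // Hypothetical GORM domain model: one local user account associated with several
    // OpenID identifiers, so losing one provider doesn't lock you out of the site.
    class User {
        String username
        static hasMany = [openIds: OpenIdIdentity]
    }

    class OpenIdIdentity {
        String identifier                // e.g. "https://alice.some-provider.example/"
        static belongsTo = [user: User]
        static constraints = {
            identifier(unique: true)     // each OpenID identifier maps to exactly one local user
        }
    }

    // Sketch of the trust problem from point 2: even if a provider asserts an age,
    // the site still has to decide which providers it believes. Here that decision
    // is a hard-coded whitelist, which is precisely what doesn't work at scale.
    class TrustPolicy {
        static final Set<String> TRUSTED_PROVIDERS =
            ['https://trusted-provider.example.org/'] as Set

        static boolean acceptAgeClaim(String providerUrl, Integer claimedAge) {
            providerUrl in TRUSTED_PROVIDERS && claimedAge != null && claimedAge >= 18
        }
    }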

To summarize: I do believe that the basic ideas behind OpenID are great, and OpenID could very well be the best identity protocol available yet. However, there is still something unfamiliar about the whole concept of separating identity provision and consumption, not just into different technical servers, but into possibly totally different organizations. The issues related to this need to be discovered and acknowledged, fixes need to be made and disseminated, and unfortunately this will take some time.

Tuesday, January 6, 2009

The future of TV

First a disclaimer: a lot has been said and written about the future of TV, and what I'm about to say is probably not very new at all. That said, I feel a need to formulate clearly what I believe is the future of TV, and why. This blog is not the place to be complete with references, annotations, etc., so I'll just give some facts and some predictions, as I see them.

Some facts

TV and computer technologies are converging technically. This is nearly a no-brainer, but I'll just list a few facts to hammer it in:
  • New PCs and monitors (TV and otherwise) can in general be plugged into each other using standard cables, usually HDMI. This blurs the market separations and lowers costs both for TVs and computers.
  • Set-top boxes and media players are computers, and have been for a while. They look at digital signal streams and present them on a screen; they are in general not user-programmable, but inside they are just computers with video cards.
  • DVRs are now mainstream technology, but in essence they are just computers with disks and video cards. Some, like Mythbox, are even user-programmable.
  • Newer TV boxes like the Popcorn Hour look and feel like just another TV box you buy at the electronics store, but they play all sorts of video formats, can talk to file servers and even contain a p2p client for downloading content from the net.
In sum I think it's fair to say that the technical differences between TVs and computers are mostly gone; what is left is to find out how the usage patterns from TV and computers will merge. This will happen all by itself, driven by users, once the cost of the enabling devices and services is low enough. I believe it is happening as we speak.

TV programming is becoming universally available. The trend I'm observing is that all content ever made becomes available, more or less for free, in some form or other. Not necessarily with high fidelity, but from some source, with some kind of degraded or modified service quality. YouTube is one source, iTunes another, more or less legal p2p networks a third, but significantly, more and more content producers are putting some or all of their works online legally (BBC, Comedy Central, NRK, just to mention three). I heard a story today about some kids who wished to see a football game but didn't want to pay for it. They searched the net, found a site in the Middle East that made the game available online, and streamed it from there while chatting with their buddies about the game on MSN. A catchphrase for this phenomenon could be "The content is already out there" :-)

The internet is becoming an increasingly important channel for TV distribution. Streaming TV over the internet is a way for traditional TV channels to reach audiences abroad, and at times when the broadcast schedule isn't synchronized with viewers' schedules. It also broadens the usage of TV material in derived works (blogs etc.).


The future 

The internet is morphing into a partly user-financed content delivery network. When you buy an internet router, you are in effect adding a network to the existing internet. In fact, any box you add to your network becomes part of the internet. When you decide to participate in a p2p network, you are in addition adding caching capacity as well as network bandwidth to the network. In fact, you are adding exactly the kind of resource that Akamai is selling to traditional content distributors, but you are adding it in a way that explicitly improves the distribution of the kind of content you like. I'll spell this out clearly: it is your preferences as a consumer that decide what kind of network you are adding resources to. This is the direct opposite of the traditional broadcast model, where it is the broadcaster that decides where to invest in network resources. Also, it means that the investment decision for the consumer is divided in two: how much to invest, and how to split that investment between raw bandwidth and storage. In general the cost development for storage is better than for bandwidth: the yearly increase in performance per dollar is higher for storage than for bandwidth. Over time this gives an ever-increasing incentive to find new ways of using more storage and less network.
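To make the storage-versus-bandwidth point concrete, here is a toy Groovy calculation; the growth rates are invented purely for illustration, not measured:

    // Toy illustration of the storage-versus-bandwidth argument.
    // The yearly growth rates below are invented, not real market data.
    double storageGrowth   = 1.40    // assumed improvement in storage per dollar, per year
    double bandwidthGrowth = 1.25    // assumed improvement in bandwidth per dollar, per year

    double storage = 1.0
    double bandwidth = 1.0
    for (year in 1..5) {
        storage   *= storageGrowth
        bandwidth *= bandwidthGrowth
        printf("year %d: storage/dollar x%.2f, bandwidth/dollar x%.2f, ratio %.2f%n",
               year, storage, bandwidth, storage / bandwidth)
    }
    // If both trends hold, the ratio keeps growing: every year it pays a little more
    // to cache or store content locally instead of fetching it over the network again.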

If you have a single broadband connection, then all of your content obviously has to flow through it. If at some point you get more than one broadband connection (for instance ADSL and cable, and perhaps even fiber), then both switching traffic between these networks and gaming pricing policies become useful (i.e. preferring a slower network if the faster network is dangerously close to passing some barrier that makes it more expensive to use). Using wireless networks between homes is also an option that ties into this "multiple networks per household" world, but I won't explore it in any detail here.
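As a sketch of what such a link-selection rule could look like, here is a hypothetical Groovy snippet; all names, caps and speeds are invented:

    // Hypothetical link-selection rule for a household with more than one broadband
    // connection: prefer the fastest link that still has headroom under its pricing cap.
    class Link {
        String name
        double usedGB
        double capGB
        double speedMbps
        double headroom() { capGB - usedGB }
    }

    Link pick(List<Link> links, double safetyMarginGB = 5) {
        def safe = links.findAll { it.headroom() > safetyMarginGB }
        (safe ?: links).max { it.speedMbps }    // if every link is near its cap, just take the fastest
    }

    def links = [new Link(name: 'cable', usedGB: 96, capGB: 100, speedMbps: 50),
                 new Link(name: 'adsl',  usedGB: 10, capGB: 100, speedMbps: 8)]
    println pick(links).name    // -> 'adsl', because the cable link is close to its cap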

In general we don't know much about what is happening. The direct indicators are poor. Unless we go into a representative selection of homes and observe what is actually being consumed, we probably won't know. Internet traffic is to some extent possible to monitor using deep packet inspection (DPI), but unless that is actually done, and the results are interpreted by someone who has some idea of what to look for, neither publishers nor network operators will have the foggiest clue about how consumption patterns are changing. Encrypted p2p traffic (the default) can possibly be classified as p2p by DPI, but exactly which content is being distributed is not possible to determine by inspecting packets as they flow over the wire.

If you want to know your viewers, have them contact you. Direct observation is good, so if you create media content, your best way of knowing who is consuming it is probably to have them contact you. This can be done by clicking on links, sending mail, whatever. I believe that this fact alone will lead to the emergence of business models based on user interaction with media content rather than passive consumption. Since most content will imho be available to anyone, passive consumption will essentially be unbillable and untraceable, so business models based on mandatory billing or tracing of consumption are not viable.

That's all for now, but I'll revise this entry as I think of more.