March 2010


As a comms professional I’m loving the sparring that’s taking place between the Labour and Conservative parties over the issue of cutting the budget deficit.

You have to take your hat off to the Labour comms team for turning a Tory strength – less Government spending – into a weakness in the minds of voters, a strategy that has seen the opposition’s double-digit lead cut to less than three points in some polls.

Today we’ve seen the Labour team do a political 180 on the messaging front in response to the Tory promise to partially overturn the rise in National Insurance set to come into force in 2011. They’ve gone from warning that Tory cuts would damage the economic recovery to trying to convince the public that the Tories are promising tax cuts that will divert funds away from cutting the deficit.

Personally, I feel this goes back somewhat on what’s been a very successful anti-Tory message and allows the Tories to get their ‘tax less, spend less’ creed back onto the media agenda. However, it goes to show how agile the parties have become in reacting quickly to policy changes from either side of the House.

At Edelman’s recent Budget 2010 Breakfast Briefing, the Executive Editor of The Times, Daniel Finkelstein, gave a great overview of how Labour managed to cut through with the general public by making them worried about what they could lose under the Tories’ proposed cost-cutting strategy. I dare say Tory HQ is working on a way to get voters to appreciate what they could save under the same set of policies. It’s all fascinating stuff.

I wonder what lessons, if any, PR professionals can take from this issue and apply to their day-to-day work? A big takeaway for me is that nothing is sacred. It was a bold move for Labour to focus on deconstructing such a core pillar of Conservative messaging – one that has served the party well even over the last 13 years in opposition – yet it’s clearly paid off. Perhaps Election 2010 will provide a case study in how to win the messaging battle around a modern election? Whatever the result, you can safely say that the comms teams will have had a major role in deciding who’s in No.10 Downing Street this summer.

So, after months of deliberating and speculation, Murdoch has today come out and announced that access to The Times and Sunday Times’ online content will be charged as of June – with daily access for £1 and weekly for £2.

New websites will be launched in May for each title – replacing the existing Times Online portal – and it sounds like there’ll be limited free access to entice readers once the paid-content sites launch.

That a major publication has come out with a paywall is hardly a surprise, but it is certainly high risk. The reception has been mixed – certainly, if you simply consider the newspaper content on its own, it seems slightly extravagant to charge for what is currently free; that idea’s not going to get very far in Dragons’ Den.

However, the Times has quietly upped the ante with its acquisition of digital partners and content providers, and it could be these added ‘membership’-style benefits that tip the balance between simply paying for news content and being part of an entertainment hub – the Times is certainly already associated with quality culture and cultural insight.

The major issue this development throws up, however, is that Murdoch, despite being something of a publishing leviathan, doesn’t own everything online, and if other newspaper publishers go down the same route, the market will become incredibly siloed and will surely disenfranchise the customer. (We’ll not mention the problems that maintaining access / subscriptions to every newspaper would throw up for a PR agency.)

If key sites have to be individually paid for, it’s going to get very expensive very quickly for a consumer – and micro-payments will only work if there’s a unified platform to base the content on (rather like a pre-paid Oyster card network for news), which of course doesn’t exist.

We’ve seen the problems similar approaches have encountered in the mobile space with the walled-garden approach – surely content owners will have learnt the lessons here?

The internet has democratised information and made knowledge ‘free’ to an extent – does it really need to be boxed back up again?

@wonky_donky

In 2006, a group of academics from the University of Southampton’s School of Electronics and Computer Science (ECS) and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) announced an ambitious plan to launch a new branch of science – Web Science.

These were no ordinary academics. Among the elite group were Sir Tim Berners-Lee (the creator of the World Wide Web), Professor Dame Wendy Hall and Professor Nigel Shadbolt of ECS.

This group perceived a need to better understand the nature of the Web, to engineer its future and to ensure its social benefit. The rationale was that the Web is a construct unrivalled in human history, and the scale of its impact and the rate of its adoption are unparalleled.

They recognised a great opportunity to study the Web through a new academic discipline, as well as an obligation to do so.

Web Science came into being in October 2006 with the launch of the Web Science Research Initiative (WSRI), which was later renamed the Web Science Trust (WST). Edelman was proud to assist in the launch of WSRI, which achieved widespread international media coverage at the time.

Such a huge undertaking as establishing a new branch of science was bound to take some time, but the WST has been hard at work these past few years, putting the concept of Web Science on the academic map, developing a curriculum, establishing partnerships, growing a support base and engaging in initial research projects.

Two weeks ago, I attended an event at the Royal Society in London designed to explain Web Science to prospective students at an undergraduate level. It was quite exciting to see just how far Web Science has come in a few short years. The auditorium was full and it was clear there was real interest in the subject, with a lively discussion following presentations from academics and industry representatives.

Unlike Computer Science, Web Science will take an interdisciplinary approach. To this end, participation from students with an interest in human-centred disciplines such as psychology and anthropology will be as important as that from more obviously aligned areas such as mathematics and physics.

Yesterday, the evolution of Web Science took a great leap forward with the formation of the Institute for Web Science, announced by Prime Minister Gordon Brown during a speech in London. Significantly, it will be backed with £30 million of funding from the UK Government.

The Institute for Web Science will be led by Sir Tim Berners-Lee and Professor Nigel Shadbolt and will be jointly run by the University of Southampton and Oxford University.

According to a statement issued by the University of Southampton, the Institute for Web Science will be designed to make the UK the hub of international research into the next generation of web and internet technologies and their commercialisation.

Web innovation is viewed as an important issue by the government at present, and it has also been consulting with Sir Tim Berners-Lee and Professor Nigel Shadbolt on the issue of unlocking government data.

Recently, the government launched a new site called data.gov.uk. The site is a repository of government data and seeks to provide access for interested companies, organisations and developers, with the aim of encouraging the development of new businesses. It is hoped this will generate tax revenue greater than could have been realised by selling that data for commercial use.
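Purely as an illustrative sketch of the kind of reuse the site is meant to encourage – the dataset URL and the ‘region’ column below are placeholders of my own, not a real data.gov.uk endpoint – a developer might pull a published CSV into a few lines of Python and start exploring it:

```python
# Illustrative sketch only: fetch a CSV dataset published on an open-data portal
# and run a quick aggregation. The URL and the 'region' column are placeholders,
# not a real data.gov.uk dataset.
import csv
import io
import urllib.request
from collections import Counter

DATASET_URL = "https://data.gov.uk/example/road-accidents-2009.csv"  # placeholder

with urllib.request.urlopen(DATASET_URL) as response:
    text = response.read().decode("utf-8")

rows = list(csv.DictReader(io.StringIO(text)))
print(f"{len(rows)} records downloaded")

# Count records per region and show the five largest
by_region = Counter(row["region"] for row in rows)
for region, count in by_region.most_common(5):
    print(region, count)
```

It is exactly this kind of low-friction access that makes it plausible for small developers to build new services, and new businesses, on top of public data.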

In my opinion, both the funding of the Institute for Web Science and the launch of data.gov.uk are outstanding ideas that could put the UK at the centre of Web innovation. At a time when government funding cuts are a central theme on both sides of politics, it’s good to see that a long-term view is also being taken.

@andyrobertson

 

The dotcom turns 25 today.  It’s hard to imagine it’s been around that long, and it’s amazing to think that it predates the advent of HTML and the World Wide Web.  I can’t help thinking that dotcom is another example of how innovation in the tech sector can sometimes generate success far beyond the dreams of its creators.

In 1985, when .com was created, the Internet was in its infancy and was largely administered by the U.S. Government – or the Department of Defense (DoD) to be precise. Later, in 1991, the National Science Foundation (NSF) assumed responsibility for its administration.

Dotcom was designated as an Internet domain for non-military purposes, and both the DoD and the NSF contracted its operation to a third party – Network Solutions.

Monetisation of .com commenced in 1995 – a full ten years after its creation – when Network Solutions was granted permission to charge an annual renewal fee for dotcom domains.

Later, Network Solutions was acquired by VeriSign, at a time that roughly coincided with the creation of the World Wide Web and the mass uptake of the Internet by businesses and private citizens, so it was VeriSign that really hit the jackpot.

According to a BBC report today, .com registrations grew from a handful at the outset, to one million in 1997 and finally to the present level of 668,000 monthly registrations.  That’s a nice little earner for VeriSign.

Today the domain name system is overseen by ICANN of course.  In 2005, a new contract between ICANN and VeriSign was signed, under which VeriSign was granted the right of presumptive renewal.  In essence, this means VeriSign has an almost automatic right of renewal on the contract in any future review, virtually guaranteeing its cash cow.  This was quite a contentious issue at the time, with rival registry operators vying for a piece of the action.

Dotcom will no doubt remain the most prominent domain for some time to come.  However, changes are on the way that will result in competition and possibly lower registrations.

ICANN is currently administering the introduction of new generic Top-Level Domains (gTLDs) via a protracted process that should conclude towards the end of the year. When that happens, there will be more consumer choice, which may over time dilute the dominance of dotcom.

Another factor that is set to have an impact on the development of the DNS in coming years is the introduction of Internationalised Domain Names (IDNs).  Until recently, it wasn’t possible to interact fully with the DNS in a non-Latin based language.  That finally changed this year with the introduction of the first IDNs.
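To illustrate how this works under the hood (a minimal sketch of my own, not something from the announcement itself): an IDN is mapped to an ASCII-compatible ‘xn--’ Punycode form that the existing DNS infrastructure can carry unchanged, which is what made the change possible without rebuilding the DNS. Python’s built-in idna codec shows the round trip:

```python
# Minimal illustration: an Internationalised Domain Name is stored in the DNS
# as an ASCII-compatible "xn--" (Punycode) label, so existing resolvers need
# no changes. Uses only Python's built-in idna codec.

unicode_name = "bücher.example"  # a name containing a non-Latin character

# Encode to the ASCII-compatible form actually carried by the DNS
ascii_name = unicode_name.encode("idna").decode("ascii")
print(ascii_name)  # xn--bcher-kva.example

# Decode back to the readable Unicode form shown to users
print(ascii_name.encode("ascii").decode("idna"))  # bücher.example
```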

From a commercial perspective, the real power and potential of IDNs may be realised when they are combined with new gTLDs in markets where dotcom doesn’t necessarily dominate, such as Asia and the Middle East. In those regions, we may see the emergence of .com equivalents that generate significant income for their operators and limit VeriSign’s potential for growth outside the English-speaking world.

http://twitter.com/andyrobertson

A wonderful video by @tomscott is currently doing the rounds following his excellent presentation at Ignite London 2 – follow the link below. It’s about five minutes long, so watch it while you chow down your lunch; it’s absolutely superb.

Mob (a near-future science fiction story) by Tom Scott from hurryonhome on Vimeo.

Having recently finished reading the mind-bendingly brilliant ‘Amusing Ourselves to Death’ by Neil Postman, this seems an aptly Huxleyan sci-fi story, taking much of Postman’s thoughts on the impact of TV and entertainment on society to a new social media level…

A brilliant story – and unnervingly feasible too.

@wonky_donky

It used to be said that an Englishman’s home is his castle, and from a privacy point of view it certainly was.

A great deal has been written on the nature of privacy in the social media age recently but the scale of the change was brought home to me by the tragedy of Ashleigh Hall, who was murdered after meeting up with a ‘friend’ she had met through her Facebook account.

The Facebook page showed Peter Chapman as a teenager when in reality he was a 35-year-old registered sex offender. As the Daily Mail headline across half the front page asked: ‘Who’s Your Child Talking to on Facebook Tonight?’

The sheer openness of social media is at stake. As stated, a home used to be a castle in the late 20th century: electronic family life took place within closed channels; the telephone was fixed and family-regulated; television was a joint activity involving parental guidance; and, if anything, the most social form of content was music.

The level of interaction with the outside world introduced by the world wide web was unimaginable.  Today, as the web celebrates its sixteenth birthday, we don’t appear to have developed a full understanding of what this new form of privacy means.

It is easy to dismiss the Daily Mail and the threats from the new order of privacy being ushered in by the widespread adoption of social media, but its impact is profound. One reaction may be to see if we can re-engineer the old world of privacy.

Yet this is an option that could be self-defeating, as the need to educate and create new behaviours is clearly at the heart of safe behaviour in a social media society. To ignore this and pretend children are not going to access social media and networking sites would be to deny them this protection. Yet even with all the education in the world, mistakes can happen.

So, should society regulate to create greater protection? Should it be illegal to present an image of oneself that is patently false?

A truly adequate response requires an understanding of what privacy means in this new world, and the creation of social systems that help prepare and guide people from many angles.

The social media industry itself must face this challenge head on, in conjunction with government, education and consumer groups; otherwise the arguments for regulation will take root.

@Naked_Pheasant

NOTE: interesting that since this was drafted, further developments have meant the Mail has had to come out and apologise for the Facebook accusations (brilliantly summarised by @ruskin147 here)

A month or so ago I attended a really interesting session in the hallowed halls of Westminster. Hosted by Tom Watson MP, Taking Games Seriously sought to prompt a discussion “on the place of video games and virtual worlds in modern society – the lessons we might learn from them, their dangers, and why the public debate needs to move beyond breathless accusations about violent, screen-addicted young people.”

There was an element of preaching to the choir about the event. The panel and audience were wholeheartedly pro-gaming. It was more Middle Earth than Middle England. No one was there to condemn games as a modern curse that will blind the youth of today. So it wasn’t exactly a balanced debate around the perceptions of an industry that now outsells Hollywood. However, it did get me thinking…

Both Tom Chatfield and Sam Leith raised interesting points around the language of video games. Tom spoke about the need for a new vocabulary. Games, as he put it, have had existing terms re-appropriated for them; games are routinely referred to and reviewed by the same standards as a movie – but this does the unique dynamic of a game a severe injustice. We simply can’t talk about the plot of the latest Coen Brothers film with the same words we’d use to describe the fragmented, multi-layered, individual experience of three hours spent in WoW (World of Warcraft for the less geeky of you). Sam then touched on the need to have a formal conversation about genre; in his view, WoW is more akin to the architecture of a medieval cathedral than to any other cultural artefact. But Little Big Planet, again, couldn’t be described using the same words.

So. Interesting stuff. It struck me that when talking about games, we also need to address what ‘playing’ can mean. You play a board game, but no one expects Sunday Times Culture to write about Scrabble. Alex Fleetwood and Hide and Seek have created some incredible real-world experiences that are literally play-full. So if playing is so much fun, then why is there a need for games to be taken so seriously? Well, being a paid-up member of the pro-games choir, I’d say it’s because not only is there real artistry involved in the development of a game, but games are changing how we interact online and in the real world. They are having a societal effect. They’re also changing our perceptions of narrative – influencing the wider arts world in terms of multi-faceted storytelling. For those reasons alone we should be talking about games seriously.

As PRs we can learn something from games – how to build multi-layered campaigns that talk to many people on many levels, for example. But as PRs talking about Digital Entertainment and the promotion of games, perhaps we also need to build stories that directly challenge perceptions rather than rely on old, traditional tactics. This is easier said than done, of course. Many years ago I tried to pitch a feature looking at how World in Conflict, a game that sees Russia invade America and kick-start a nuclear war, resonated with contemporary Anglo-Russian relations (it was just after Litvinenko). Obviously this pitch fell on deaf ears. But as the media becomes more accustomed to games in the mainstream, so must we embrace new, serious conversations around them.

As I sat in Westminster, talking about games, my mind wandered up the river to the National Theatre. Drama is another form of play, of course. So what distinguishes King Lear from Lemmings? Again, genre can play a role in terms of perception. For every Oedipus there is a pantomime dame (though arguably, if Oedipus had been attracted to a man in drag, it could have ended very differently). Some plays are taken more seriously than others, and the same can be said of video games. However, unlike video games – whether we’re talking about Widow Twanky or Mistress Quickly – all forms of theatre have the power to hold a mirror up to society. In panto it is pop culture, in Shakespeare it is politics. It’s this fundamental difference that I think prevents video games being embraced by the cultural elite.

Not enough games (if any – though I’d happily be proved wrong) directly comment or offer a new perspective on society. Though the recent Chime – the first game from OneBigGame (a Live Aid for video games) – does show that gaming can engage in real-world issues, even if only indirectly. [DISCLAIMER: Xbox is a JCPR Edelman client]. So perhaps, at the end of the day, games won’t be taken seriously until they start saying something serious. Tom Watson joked that the Houses of Parliament would be an appropriate setting for an MMO – coteries of players, machinations and deceit framed within a Gothic landscape. With a disenfranchised electorate and an election looming, it’s not such a bad idea.

A frightening precedent was set in Italy last week which, outside of the country itself, seemed to have minimal impact in the press, but which could undermine the ‘net as we know it, at least as far as content hosting and delivery is concerned. The initial incident in question occurred in 2006, when an autistic child was bullied at school and a video of the incident was put up on Google Video – thankfully the video was taken down shortly after Google was alerted by the Italian police. Google worked with the police to help uncover the person responsible for posting the video, and the perpetrator and several schoolmates in the video were subsequently charged.

But last week the Italian courts convicted three Google executives (who received suspended sentences) of failing to comply with Italian privacy codes. Now, bear in mind that none of the executives appeared in the video, condoned the content, or even knew of it until being made aware by the police – and once that happened, they worked with the police to secure convictions for those responsible.

What this conviction raises is the prospect of platform owners being held responsible for the content created and put on their platforms by users – the implications for the likes of YouTube, Twitter, MySpace and Facebook are massive. Essentially, this means that hosts can no longer disclaim responsibility for content, and can thus be held responsible for illegal or reprehensible content whacked up on their sites.

How can this work in the long term, especially if it is replicated outside of Italy – how can a host network possibly monitor everything posted online without hugely degrading the quality of service, and without massive investment in resources, if everything has to be checked rigorously?

More to the point, if hosting now includes responsibility, where does the line get drawn, given what might offend one person may be acceptable to another? I personally swear like an Australian in some of what I put online, but if this offends someone, should my content be taken down – and what makes their morals ‘better’ and more acceptable than my slightly sweary ones? Who is the judge in all this?

This conviction throws up some very worrying precedents not just for the internet hosting companies and content distributors, but for everyone associated with content creation as well.

@wonky_donky

Who knew the future of the farming industry existed online?

Love it or hate it, when you log on to Facebook you probably can’t help but notice that some of your ‘friends’ are keen for you to build a farm with them or herd some cattle. But rather than this being a semi-idyllic request to escape the rat race and do something outdoorsy, it is of course part of the latest breed of online games – Farmville. Dismissed by many as an annoyance on an increasingly cluttered social networking site, it cannot go unnoticed that this has now become a pretty big deal in terms of highlighting the serious business of casual gaming.

It soon becomes evident quite how big a deal this really is when you take a look at the statistics: 118m installs, 75m monthly players and 27m daily players (Farmville, 2009). Farmville was created by Zynga, a company founded less than three years ago but which is now the number one gaming company on the web. The company has a catalogue of games, with an average of 65 million people a day playing one of them – when you consider that this is more than the population of France, the true scale of the business opportunity becomes apparent, even more so when you consider the amount of money people are spending on such games. In the UK last year there were more than 13 million ‘casual gamers’, with 2.4 million of these going on to spend money on these games, equating to £280m, or £117 per person.

With this level of interest, the market has quite rightly been heralded as an area of growth, capitalizing on the fact that most people now want to include some level of socialization into everything they do online. The fact that users can share, recommend, help and compete with their friends when playing is perhaps what has led to the impressive levels of interest. It also heralds a new attitude to gaming – the peak time of use for Farmville is between 8am and 9am, highlighting how users are dipping in and out of gaming, perhaps much like checking their emails in the morning. So with huge demand and money to be made, where is casual gaming set to go?

The founders of Flickr are due to launch ‘Glitch’, a 2D platform game which they hope will take online gaming to the masses – although that is perhaps what Farmville has already done. Similarly, EA recently bought Playfish in an attempt to catch up in the casual gaming market – a move which has been a success for them, but one which shows how smaller developers can be on an equal playing field with the big boys, something which is less easy in the rest of the gaming world. In terms of building on current successes, Zynga has suggested that the next wave of growth will be centered around personalization of content – a common answer to most questions in the digital space, but one which makes perfect sense. With over 5 million members in the anti-Farmville Facebook group, there is clearly space to improve in terms of avoiding being seen as spam. Personalizing the content is something which will help sustain interest and increase loyalty, which in turn should see the money being spent continue to rise. This is already being done by Zynga to some extent in terms of advertising, but the interesting thing will be to see how this transfers to the actual content. Watch this space.

@AJGriffiths

We’ve just come out of another Mobile World Congress in Spain—now the largest and most influential mobile/telecoms trade show in the world. Mobile continues to grow in strategic importance and is at an interesting crossroads where digital content, the Internet, faster mobile networks, better phones, and heightened consumer awareness/engagement are coming together to fuel some very dramatic growth.

MWC has grown incredibly quickly over the last few years, from being a show for mobile carriers and phone manufacturers to now including vendors from across computing, Web, music, film, television, infrastructure, retail, and countless other sectors.  Mobile’s intersection with the Internet is probably its most interesting chapter yet:  Fittingly, Google CEO Eric Schmidt gave the main keynote at MWC this year (a true first for the show) and spent much of it assuring the old guard of the mobile industry that Google could co-exist with them without stealing all their revenue—a pretty telling message and a portent of pretty serious change in what’s traditionally been a very conservative industry.  These are interesting times for those of us following this sector.

Below is a recap of some of the major trends/themes we observed at MWC.  We also created a short (informal) video diary from the show, available above.  Those of you interested in mobile/telecoms please come back to us with feedback, additional input, observations, etc—we’re keen to keep up the discussion around mobile/telecoms – we also encourage you to follow the Telecom team on Twitter via @Edel_Telecom.

Key learnings from MWC

Cautious optimism:  Following a rather gloomy MWC last year, this year’s show was marked by business optimism.  There’s recognition that while the global markets felt the pain of the recession, the mobile industry for the most part sidestepped the crisis, particularly in APAC and the US.  For many countries, 2009 was something of a banner year for new mobile data services (e.g. mobile email, Twitter, location services, surfing the Web) —the gold standard of progress for the mobile industry— with some markets exceeding 90% subscription penetration of these services.  On the down side, some operators (particularly in Europe) are still extricating themselves from years of stagnation, and there is awareness that carriers in general are struggling to convert network traffic gains to meaningful revenue and need to up their game.

Telecoms as a force for good:  Beyond the exhibition stands, there was much discussion this year around the economic and social impact that mobility is having on the developing world, and what the industry can do to accelerate that.  The Mobile Money Working Group (an initiative promoting the use of mobile for micro-financing in poor countries) made lots of noise at the show this year, endorsed by a number of high-profile celebrities.   Vodafone announced the world’s cheapest smartphone, part of a growing effort to make mobility accessible in countries where many people’s first-ever experience of the Internet  is happening via mobile phones rather than PCs.  Various education initiatives such as 1GOAL are banking on mobile advertising and messaging to drive global support and revenue generation.  Meanwhile, Vodafone’s CEO used his keynote speech to remind the world of the telecoms industry as an engine of growth, and to push governments to ease regulation of the industry (telecoms remains one of the most heavily regulated industries in the world).       

New kids on the block:  Some of the biggest brands at the show this year were from outside Europe, North America or developed markets—testament to how the mobile industry is growing, diversifying and globalizing.  These vendors had some of the biggest booths of the show.  They span infrastructure (Huawei, ZTE), service provision (China Telecom, TurkCell), hardware (HTC) and any sector within telecoms you can name.  What’s interesting is that these guys are not just selling big in their native markets; they are also pushing plenty of fresh thinking and innovation in all sectors of the industry, from hardware design (e.g. HTC) through to new services such as mobile marketing and mobile payments (e.g. Turkcell, Bharti Airtel).  Some of them are famously encroaching on the turf of established Western incumbents (and provoking lots of mergers), as in the case of the Chinese infrastructure vendors versus the Ericssons and Alcatel-Lucents of the world.

Mobilizing the ‘cloud’: This year’s show was all about the mobile Internet (again).  For years, the mobile industry has talked of the ‘mobile Internet’ as a version of the Internet specifically adapted to work on phones with small screens and poor Web browsers, but this year it was all back to the Internet (the one) and getting people onto it from mobile phones.  Pretty timely, as some analysts are estimating that mobile broadband connections multiplied by over 150% in the last year, fuelled by better, cheaper phones with functional browsers (the ‘iPhone effect’).  Some of the more daring speakers at MWC talked of a tantalizing future of mobile content usage driven directly out of the Web, as is portended for PCs in the ‘cloud computing’ context.  Fittingly, Google’s Eric Schmidt used his show keynote to make the point that operators are not just ‘dumb pipes’ and have a role to play in facilitating and regulating mobile services.  This was understandable, if not entirely predictable: mobile operators generally view Google with suspicion, arguing that it takes advantage of their network assets (in which they have invested heavily) with so-called ‘over-the-top’ (OTT) services, where operators bear the network costs but don’t see any of Google’s revenue.  These conversations are likely to develop in Google’s favor as value shifts towards content and applications.

Apps, apps, apps and more apps: One message at the show was clear this year: the mobile industry is going through a major transition from being voice-centric to data-centric, from consumers spending most of their time talking on the phone to spending a significant portion of it (now upwards of 80% in some markets) on new services driven by data and online content.  Mobile applications (think of the iPhone App Store) are reaching the mainstream, no longer the remit of the early adopter or tech geek.  Facebook and social media are starting to dominate mobile use in some countries.  MWC was awash with talk of apps, ranging from mobile healthcare apps to presence apps linking to social media features.

So it’s no surprise that this was the show of the ‘mobile app’.  From their respective corners, all major mobile vendors want to get a slice of the application action.  Attention now shifts to the world’s huge pool of application/content developers and the tools and revenue models that the mobile industry needs to bring them to the table and incentivize them.  This is one of the elusive last frontiers of the mobile industry: everyone seems to want to support developers, but everyone supports them differently.  Despite the success of the iPhone (and increasingly Android) in building cottage industries around apps, nobody has solved the question of how all these cooperative players manage to combine their efforts to create something stable, easily supported and capable of generating scalable revenue.  That discussion continues.  This year, for the first time ever, MWC hosted a separate conference dedicated to applications (aptly named ‘App Planet’) that brought mobile vendors together with app developers in a series of high-intensity workshops and networking sessions.  It was at times a shambolic event that will surely improve in its next iterations, but a symbolically important one.  It’s also telling that it was sponsored by some of the large mobile operators, who for years have been perceived as being too dictatorial with developers and too greedy with content revenues.

The platform wars: So one of the many big questions of the show was: who are the big facilitators of the application boom—the operators?  The handset makers?  The software vendors?  In a flurry of news on day one of the show, some of the major operators and phone companies banded together to announce an initiative called the Wholesale Application Community—an effort to try to get themselves back in the market for providing apps for use in mobiles.  But the announcement quickly met its sceptics and couldn’t take the spotlight away from Apple (still the app provider extraordinaire, owning 95% of the mobile app market) or the various software and platform vendors who made some of the splashiest news at the show this year:  Microsoft made an extraordinary return-from-the-dead announcement with the unveiling of Windows Phone 7; Google’s Android, which is starting to drive wide adoption in emerging markets such as China and is widely credited for reviving Motorola’s handset business, found its way into some of the sexiest hardware announced at the show by companies like HTC; Samsung shouted from the rooftops about its new Bada platform, which promised seamless video and one-stop social media integration; and Nokia and Intel banded together to announce yet another oddly-named mobile platform.

Skott Ahn, the head of LG’s handset unit, speculated that mobile software platforms will consolidate to only three within a few years.  Many doubt consolidation will be that aggressive—but certainly nobody believes the big mobile operators are going to account for any of these platform ecosystems.  A study by Nielsen timed to coincide with MWC found that over 65% of consumers will, within the next six months, use applications tailored to their personal preferences and location/context.  But the catch is that consumers don’t expect to get these services from their carrier—they expect to get them from the likes of Google and Facebook.

Bandwidth: It made for some big headlines this year: wireless networks are running out of capacity.  The FCC chairman called it “a looming spectrum crisis.”  Nokia Siemens Networks CEO Rajeev Suri warned of a ‘mobile data tsunami’ on its way.  Operators, who only five years ago were scrambling to find customers for their new networks (in which they had invested billions), now find themselves struggling to catch up with network demand driven by the iPhone and applications craze.  Even as operators add more spectrum, pervasive flat rates for data services mean that the added bandwidth doesn’t necessarily translate to extra revenue, forcing operators to find ways to use networks more efficiently.  Mobile operators are struggling to make a business case out of mobile broadband when the acceleration in data traffic volume is outstripping the pace of mobile data revenue growth.  So much of the talk at MWC this year was about how to resolve this crisis.  There was much discussion of how traditional transmission technologies like fiber and microwave will benefit from technologies such as optical wireless to provide backhaul capacity that is economical as well as future-proof.  Vendors discussed the pros and cons of emerging network technologies such as HSPA+ and LTE.

The Gadgets:  At its heart, MWC is still about phones and gadgets.  Hardware announcements still set the tone for the show.  This year was the year of the smartphone: gone are the days of talking of smartphones as a niche product at the high end of the market—the smartphone is now mainstream.  The ‘iPhone effect’ has had a huge impact on phone design and functionality in recent years, causing a flurry of innovation and an increase in computing power and functionality.  Hardware is now also inextricably bound to software and applications.  This year’s buzz at the show was around ‘integrated apps’—meaning that all of your personal ‘feeds’ (email, social media updates, messaging, photos, etc.) are accessible easily and intuitively through icons directly from your phone’s home screen, without the need to open and close applications.  Samsung and HTC made plenty of noise around this trend.  A few of the manufacturers showed off phones and notebooks fitted for advanced networks.  Touch-screens continue to make news, and the economics of the hardware market mean that new phone/device categories (different types for different users) are born all the time and almost any brand out there can make and brand its own phone.

Thanks to all and see you at next year’s show.

@kbossi
