There’s a lot of talk about ‘the Outerweb’ right now. It’s a great term and a useful way of looking at what augmented reality, or “AR”, might mean, but I think it goes beyond this, and the clever people at TrendOne have encapsulated it as “the explosion of the internet into the real world.”

They go beyond a definition that refers to technology (mobile devices, in-windshield displays, etc.) that can overlay information from the Web on top of objects in the real world, and look at how connections are occurring between devices, data, video and social networks.

They highlight a contact lens technology that lets you visualise, in a quite unsettling way, the social networks another person may be accessing on their mobile device. This is not just pointing your phone at a building and getting an overlay of information about it, but rich levels of data links and connections between the digital and real worlds. This idea of an Outernet is, in truth, speculative right now. Indeed, the idea of a web of things and the principles of the semantic web have been talked about for a long time without really happening.

Yet I feel there are a number of developments converging that will make the Outernet more likely. Firstly, the widespread deployment of the next-generation protocol IPv6 is a major enabler, particularly its ability to support the mass connection of mobile devices.
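The scale of that enabler is easy to see with a little arithmetic. A quick Python sketch (the population figure is a rough assumption for illustration):

```python
# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
ipv4_total = 2 ** 32    # roughly 4.3 billion addresses in total
ipv6_total = 2 ** 128   # roughly 3.4 x 10^38 addresses

# Addresses available per person, assuming ~7 billion people.
per_person = ipv6_total // 7_000_000_000

print(f"IPv4 total: {ipv4_total:,}")
print(f"IPv6 addresses per person: {per_person:.2e}")
```

With that many addresses to hand, every phone, screen and sensor can have its own permanent identity on the network, which is exactly what mass device-to-device connection requires.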

Secondly, the prevalence of video images, again largely via mobile computing devices, will make the connection of images with data highly desirable.

Lastly, and perhaps the biggest driver, is the spread of location-based services and social networks such as Foursquare that connect to locations, and the hyper-local connections that are emerging in many urban areas. Added together, these trends could well make the Outernet the next big thing.

So the term “Outerweb” means the extension of information outside the normal confines of broadband networks and into the real world, mainly via the screens of wirelessly connected mobile devices. The Outernet is this idea taken further, to include the connections and networks between computing devices within this mashed-up world, in many ways shadowing the difference between the World Wide Web and the Internet itself.

So what’s your call: Outerweb or Outernet?

Social media is gaining a greater foothold in the lives of older Americans.

According to a Pew Internet & American Life Project survey, social networking use among internet users ages 50 and older has nearly doubled from 22% to 42% over the past year. Half (47%) of internet users ages 50-64 and one in four (26%) users age 65 and older use social networking sites.

Most users of social networking tools are between 18 and 29 but this growth in older users shows that different segments of the population are getting involved. As we try to reach different social segments, looking at usage profiles across age groups can help us to better target audiences.

“While email may be falling out of favour with today’s teenagers, older adults still rely on it heavily as an essential tool for their daily communications. Overall, 92% of those ages 50-64 and 89% of those ages 65 and older send or read email and more than half of each group exchanges email messages on a typical day. Online news gathering also ranks highly in the daily media habits of older adults; 76% of internet users ages 50-64 get news online, and 42% do so on a typical day. Among internet users ages 65 and older, 62% look for news online and 34% do so on a typical day.”

While overall interest in social networking is growing amongst older users, this doesn’t necessarily translate into larger percentages using all social networking tools. According to the survey, one in 10 (11%) online adults ages 50-64 and one in 20 (5%) online adults ages 65 and older now say they use Twitter or another service to share updates about themselves or see updates about others.

In order to reach the right people with the right message via social media, it is important to look at what segments of the population are involved in social networking and what online tools are most applicable to each segment. As the survey tells us, the number of older internet users getting involved with social networking is growing rapidly, but their activities online are still largely dominated by other things.


The US is the traditional home of the internet, but it doesn’t crack the top 20 in terms of internet speeds.

According to Akamai’s quarterly State of the Internet report, China and the US account for almost 40% of all IP addresses in the world, but both rank well behind South Korea’s 11.7 Mbps average connection speed.

Average internet speeds are down around the world due to the more widespread adoption of mobile broadband, but the US is lagging behind both developed and emerging economies.


While the $7.2 billion US broadband stimulus package has focused on connecting all of its citizens, the next phase must be to ensure that connectivity speeds can cope with e-healthcare, e-government and other bandwidth-intensive applications. Lagging bandwidth speeds might be a question of geography, but there are ICT-oriented governments, like South Korea’s, that are already seeing the benefits. Even as the $7.2 billion gets spent, there is no guarantee that the US will move up the rankings as emerging economies ramp up development. The US is behind now, and might well be in the future.



Akamai gathers data from its global network to track internet trends across geographies.

The dotcom turns 25 today.  It’s hard to imagine it’s been around that long, and it’s amazing to think that it predates the advent of HTML and the World Wide Web.  I can’t help thinking that dotcom is another example of how innovation in the tech sector can sometimes generate success far beyond the dreams of its creators.

In 1985, when .com was created, the Internet was in its infancy and was largely administered by the U.S. Government – or the Department of Defense (DoD) to be precise.  Later, in 1991, the National Science Foundation (NSF) assumed responsibility for its administration.

Dotcom was designated as an Internet domain for non-military purposes, and both the DoD and the NSF contracted its operation to a third party – Network Solutions.

Monetisation of .com commenced in 1995 – a full ten years after its creation – when Network Solutions was granted permission to charge an annual renewal fee for dotcom domains.

Later, Network Solutions was acquired by VeriSign, at a time that roughly coincided with the creation of the World Wide Web and the mass uptake of the Internet by businesses and private citizens, so it was VeriSign who really hit the jackpot.

According to a BBC report today, .com registrations grew from a handful at the outset, to one million in 1997 and finally to the present level of 668,000 monthly registrations.  That’s a nice little earner for VeriSign.

Today the domain name system is overseen by ICANN of course.  In 2005, a new contract between ICANN and VeriSign was signed, under which VeriSign was granted the right of presumptive renewal.  In essence, this means VeriSign has an almost automatic right of renewal on the contract in any future review, virtually guaranteeing its cash cow.  This was quite a contentious issue at the time, with rival registry operators vying for a piece of the action.

Dotcom will no doubt remain the most prominent domain for some time to come.  However, changes are on the way that will result in competition and possibly lower registrations.

ICANN is currently administering the introduction of new generic Top Level Domains via a protracted process that should conclude towards the end of the year.  When that happens, there will be more consumer choice, which may over time dilute the dominance of dotcom.

Another factor that is set to have an impact on the development of the DNS in coming years is the introduction of Internationalised Domain Names (IDNs).  Until recently, it wasn’t possible to interact fully with the DNS in a non-Latin based language.  That finally changed this year with the introduction of the first IDNs.

From a commercial perspective, the real power and potential of IDNs may be realised when they are combined with new gTLDs in markets where dotcom doesn’t necessarily dominate, such as Asia and the Middle East.  In those regions, we may see the emergence of .com equivalents that generate significant income for the operators and limit VeriSign’s potential for growth outside the English speaking world.

A recent study directed by David Nicholas at University College London showed that young people are losing their ability to concentrate because of the internet – as if, on hitting adolescence, they didn’t already have enough to deal with.


The study showed that adolescents are losing the ability to read and write long texts because the internet is changing the way they think compared with older volunteers. Over a three-year period, researchers asked hundreds of 12 to 18 year olds to answer a series of questions by surfing the internet.


The results were interesting, if predictable.  The majority of adolescents viewed half the number of web pages viewed by the older volunteers.  According to Professor Nicholas, 40% of those who participated in the study did not consult more than three web pages from the thousands available, and rarely viewed a site more than twice.

However, people who were educated before the age of the internet tended to go back to previously viewed pages more often and dig deeper into the details instead of jumping from one page to another.


It seems there is evidence that the internet is changing the way young people think because it encourages users to view multiple sources of information instead of a single traditional source such as a book.  Based on the study, this new “associative” thinking has left the young unable to read and write at length because their minds are being rewired by the web. This sits alongside the long-suggested divide between ‘digital natives’ and ‘digital immigrants’.

I recently went back to school and got my MBA.  While researching various projects I couldn’t help but think how lucky I was to now have the internet.  Anyone who remembers the Dewey Decimal system and reading book after book to source materials for a 20-page term paper understands my excitement.  I can imagine that the internet is lowering our youth’s attention spans, but novels such as the Harry Potter or Twilight series have had phenomenal success.  They are long texts, and teenagers have no problem flying through them. How does this sit with a study that suggests attention spans are shrinking?

What do you think?  Is the internet changing the way we think?

Marisa Mittelstaedt

There’s an interesting story on the BBC website today about cheating in school exams, specifically the use of technology in both aiding and catching cheaters.

For me, the companies selling the products, the schools buying them and the exam watchdog Ofqual have got this all wrong.


Technology presents us with an opportunity to change our antiquated, almost draconian exam system which puts a premium on a person’s memory and wrist stamina as opposed to their ability to find, process and place into context information relevant to a subject.

For one thing, a clampdown on ‘technology cheaters’ smacks of hypocrisy and sheer bloody-mindedness. In an age of rampant digital piracy, how can you expect kids to take a moral stand on not secretly using the Internet in an exam, when everyone is ripping music, films and software from the Internet? The music and film industries have accepted this and adapted their business models. Education resolutely refuses to budge.

Secondly, technology has irreversibly changed the way in which we all communicate and interact. To try to remove a reference tool such as the Internet from an exam situation is so counterintuitive it actually offends even an 11-year-old’s intelligence.

And thirdly, unless you’re sitting a technical or scientific exam where defined answers can be copied without going through a necessary process – therefore demanding a certain amount of isolation – the written exam is just about the worst form of test imaginable.

Coursework has already been scaled back due to issues around quality, so surely we should be embracing new technology as a way of finding new ways to test pupils and push them to the limits of their mental ability? Instead we seem to be demonising children for doing what now comes naturally when they’re tasked with answering questions.

All of the main political parties in the UK are banging on about information superhighways and the importance of getting families on-line, so why are schools getting all Stasi on our children at a time when, arguably, the Internet could be of the most benefit to them? It makes no sense.

What’s needed is a root and branch reassessment of the written exam with the express intention of putting the Internet and other technologies into the examination hall as a tool, not threatening pupils with expulsion if they try to Google Pharmaceutical Society of Great Britain v Boots Cash Chemists Ltd [1953] for their law GCSE.

Of course there is a great temptation to cheat and copy verbatim answers that are already on the Web, but we have developed sophisticated methods of authenticating and filtering information on the Internet. It’s hardly a technological stretch to check papers against text already on the Net.
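To show just how modest a stretch it is, here is a minimal sketch of matching an answer against existing text using overlapping word n-grams. This is my own illustration of the general technique, not any exam board’s actual system:

```python
def ngrams(text, n=5):
    """Return the set of n-word shingles in a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(answer, source, n=5):
    """Fraction of the answer's n-grams that also appear in the source."""
    a, s = ngrams(answer, n), ngrams(source, n)
    return len(a & s) / len(a) if a else 0.0
```

An answer copied verbatim scores 1.0 against its source, while original prose scores near 0.0; real plagiarism checkers add indexing and fuzzier matching, but the core idea is no more than this.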

And besides, we should be setting exam questions that can only be answered by original research and thought. We should play to the strengths of controlled, time-limited exam conditions, not view them as strict rules of governance within which careers can be decided depending on how individual pupils perform.

The bottom line for me is that unless you’re planning a career as a magician or a card shark, memory tests are no measure of intelligence. It’s about time our education authorities realised that, rather than coming down hard on the Internet generation.



You may have heard or read about two seemingly rather dull announcements from the UK government relating to the Met Office and Ordnance Survey data in the last day or so. What has actually been announced, though, is quite interesting – and perhaps even revolutionary.

The government has decided to make data from these organisations (or at least a certain amount of it) freely available to the public, hoping to encourage entrepreneurs to develop new businesses through the inventive use of this data. The expectation is that this will generate tax revenue greater than could have been realised by selling that data for commercial use.

This is a very interesting move on the part of the government that could result in the creation of a wide range of new businesses.

But surely this is a bit too forward-thinking for a government that is most likely approaching the end of its days? Well, yes it is. The idea was actually seeded by Sir Tim Berners-Lee, creator of the web, and Professor Nigel Shadbolt from the University of Southampton. Both were recently appointed as government advisers on technology.

For Sir Tim and Professor Shadbolt, the real motivation here will be to encourage the growth of the semantic web, which has been long talked about but painfully slow in realisation.

Essentially, the semantic web is an ongoing effort to make the web more “intelligent” by allowing it to "understand" and satisfy user requests (including requests from machines) to a greater degree. At the heart of the semantic web is linked data, and because much of the data held by the Met Office and Ordnance Survey can be classified as linked data, it is essentially semantic-web ready, making it ideally suited to the purpose of encouraging the next stage in the Internet’s evolution.
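The core idea behind linked data is simply facts expressed as subject-predicate-object triples that machines can query. A minimal sketch in Python, using made-up identifiers rather than any real Met Office or Ordnance Survey vocabulary:

```python
# Linked data expresses facts as subject-predicate-object triples.
# These URIs are illustrative only, not real published identifiers.
triples = [
    ("ex:London",      "ex:hasForecast", "ex:Forecast123"),
    ("ex:Forecast123", "ex:maxTempC",    "18"),
    ("ex:London",      "ex:locatedIn",   "ex:England"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return every triple matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Everything we know about London:
print(query(triples, subject="ex:London"))
```

Because the forecast links back to the place it describes, a query can hop from location to forecast to temperature without anyone pre-building that join, which is precisely what makes government datasets so attractive as raw material for new services.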

This article from the FT provides more detail on the government’s announcement and is worth reading:

A Star Wars example of how semantic web works – taken from the excellent folk at