
MY INTERNET COLUMN

"Connected" is the journal of Connect, the section of the UK trade union Prospect which covers managers and professionals in communications [click here]. I contribute a column on aspects of the Internet and the text of all these articles, with relevant hyperlinks, are filed here. If you would like to comment on any of them e-mail me.

Dec 2002/Jan 2003 Who Is The Internet For?
February 2003 What Is Your Child Doing On The Net?
March 2003 Who Controls The Internet?
April 2003 Damn That Scam And Spam
May 2003 Welcome To The Blogger
June 2003 E-mail: Convenience Or Curse?
September 2003 Where Do Computers Go To Die?
October 2003 How To Net A Mate
November 2003 3G Or Not 3G?: That Is The Question
December 2003 VoIP: The Small Acronym With Big Implications
Jan/Feb 2004 Are They Playing Tag With Your Liberty?
March 2004 E-commerce Means Clicks And Mortar
April/May 2004 Extremism On The Net
June 2004 Has Net Growth Stalled?
July/Aug 2004 Media Literacy In The Age Of The Internet
September 2004 How To Unstitch The Web
October 2004 The Battle Of The Browsers
November 2004 Can The Internet Survive?
Jan/Feb 2005 The Wave And The Web
March 2005 The Big Switchover
April/May 2005 E-government Rules, OK?
June 2005 The Hitchhiker's Guide To Television
July/Aug 2005 The New World Of Social Media
September 2005 Next Generation Networks
October 2005 The Barbarians Are At The Gate
November 2005 The Deepening Of The Digital Divide
Jan/Feb 2006 The Personalisation Of New Media
March 2006 The Serious World Of Gaming
April/May 2006 The Invisible Heart Of The Communications Revolution
June 2006 Who Is Connected To The Net?
September 2006 What Is Internet Neutrality?
October 2006 What Do Consumers Really Want?
November 2006 Web 2.0: Hype, Hope Or Here?
Jan/Feb 2007 How You Became The Web
March 2007 The Network Of The Future
April/May 2007 The Changing Picture Of Television
June 2007 What Is It With Being Connected?
July/Aug 2007 Could The Net Fall Over?
September 2007 More Digital Divides Open Up
Oct/Nov 2007 Is Wikipedia The Best Site On The Web?
December 2007 Are You Safe On-line?
Jan/Feb 2008 Is Your Broadband Up To Speed?
March 2008 The New Plan For The 21CN
April/May 2008 Our Fragmenting Television Picture
June 2008 Who Is Bringing Us NGA?
July/Aug 2008 Should We Be Worried By Behavioural Targeting?
September 2008 What's Next For The Net?
Oct/Nov 2008 The Mobile Revolution
December 2008 The Internet Of Things
Jan/Feb 2009 The Challenge Of Digital Inclusion
March 2009 Do We Need A New Internet?
April/May 2009 How Will NGA Be Delivered?
June 2009 Does The Internet Improve Lives?
July/Aug 2009 How The Smartphone Changed The World
September 2009 How Do We Solve The Copyright Conundrum?
Oct/Nov 2009 What Should Ofcom Do?
December 2009 At Last: Action On Digital Inclusion
February 2010 Taking Faster Broadband Further
March 2010 The Data Deluge
May 2010 Should The Net Be Neutral?
July/Aug 2010 More Pictures On A New Canvas
September 2010 The Wind To Cloud Computing
Oct/Nov 2010 What Now For Digital Britain?
December 2010 How The Web Works
Feb/March 2011 The Political Power Of Social Media
April 2011 How To Control Your Online Persona
June 2011 Regulating Content In A Converged World
August 2011 Is The Net Changing Your Brain?
Oct/Nov 2011 The Future Of Digital Radio
December 2011 What Sort Of Net Do We Want?
Feb/March 2012 Whose News Is It Anyway?
April/May 2012 Casting The Net Wider
June/July 2012 Big Moves On Small Payments
Oct/Nov 2012 Breathing New Life Into The Comms Review
December 2012 What Can't You Say on The Net?


In the first of a new series of columns, Roger Darlington - Strategy Adviser at the Communication Workers Union and Chair of the Internet Watch Foundation - asks:

WHO IS THE INTERNET FOR?

The Internet started life in 1969 as the ARPAnet - the network of the Advanced Research Projects Agency of the US Department of Defense. It was constructed as a distributed system that would survive a Soviet nuclear attack. [For a history of the Internet click here]

Subsequently the network was extended to American universities and then other academic institutions around the globe. The World Wide Web - arguably the most useful feature of the Net - was invented in 1989 by Tim Berners-Lee while he was working at CERN, the European Centre for Nuclear Research. [For further information on his development of the Web click here]

So is the Internet really for scientists and academics? Should the Internet be a 'free' space with no regulation of content and no contamination by commercial interests?

The Internet has now become a mass medium and, as such, has to be subject to some laws and regulations. There are now as many children using the Net as adults and they need some guidance and protection. These are themes to which we will return in future columns.

Furthermore I see nothing wrong in people making money out of the Net. The likes of Amazon [click here] and eBay [click here] are terrific services and they deserve to make a profit - although I wish the companies concerned would recognise trade unions.

The Internet started in North America and Europe and these two regions of the world still account for almost two-thirds of all users world-wide. However, I am about to visit India and Nepal on holiday and Internet penetration in these countries is only around 0.7% and 0.2% respectively. [For statistics on Internet penetration throughout the world click here]

How are AIDS victims in South Africa supposed to obtain independent information and access to on-line support groups when Internet access in that country is limited to just 7%? How are Muslim women in Nigeria able to have a cross-community and cross-national dialogue about the sharia or female circumcision when a mere 0.1% of homes there are on-line?

So is the Net really for the economically developed countries? I believe that the Net has at least as much to offer to Swaziland as Switzerland and we have to find imaginative ways of cross-subsidising the costs of Internet access on a global basis.

Since the Internet started in the USA and originally focussed on the academic community, the language of choice has always been English and even now some 70-80% of Web content is still in English. Does that mean that the Internet is really for English readers?

I recently saw the James Bond film "Die Another Day" and read a novel by the Brazilian writer Paulo Coelho. The Warner Brothers' site for the 007 movie is in six languages [click here], but Coelho's site manages to deploy 14 [click here].

English will always be the premier language on the Internet because it is now the global language, but we should be promoting more multi-language sites and more powerful and effective on-line translation services [for Babel Fish Translation click here].

So - who is the Internet for? In my view, it should be for everyone, regardless of occupation, nation or language, regardless of age, class, or income. This challenge raises a host of issues, some of which will be examined in future columns.

For now, let us start at home.

The British trade union movement owns thousands of national, regional and branch offices located throughout the length and breadth of the nation. In the case of some unions like the CWU, a network of learning centres is being developed.

All of these offices are stuffed full of both Internet-enabled PCs and people who know how to use them. Suppose we threw open these facilities in the evenings and at weekends to run low-cost training courses on the use of the Internet, including effective use of e-mail and searching on the Web.

We could show them on-line information sources that would empower and enthuse them, and community groups and newsgroups that would transform their lives.

Part of the training might be use of an interactive program explaining the role and work of unions and part of the package might be introductory membership to the relevant union.

We have six million union members in the UK. If each member approached their partner, children and parents, we could potentially reach out to around 30 million citizens, even before we threw the doors open to local communities. Now wouldn't that be something?


Our Internet columnist Roger Darlington - Chair of the Internet Watch Foundation for the last three years - poses the disturbing question:

WHAT IS YOUR CHILD DOING ON THE NET?

Most readers have children or have friends with children. The Internet is the first technology where children frequently know more than the adults supposedly supervising them, so just how safe are these youngsters?

Of course, for the overwhelming majority of the time that they are on the Net, children are going to have a wonderfully fun experience - contacting existing friends through e-mail, making new friends in chat rooms, playing games on-line, finding educational web sites to help with homework, and much more.

However, allowing young children to surf the Net without guidance or supervision is equivalent to letting them freely browse in a large book store that has sections providing hard porn magazines, race hate propaganda and all sorts of bizarre and disturbing material. Allowing them to use chat rooms or instant messaging without protection is similar to letting them visit parks on their own where there are adult strangers interested in meeting children.

You would not expose your child to such risks in the real world, but you may not be so aware of the dangers - and the action to take - in cyberspace.

The first set of dangers arises from the enormously diverse range of content on the Web.

Children may find offensive content, such as adult pornography or racist propaganda, or receive unwanted spam promoting pornography or scams. Depending on the age, maturity, and cultural background of the child, other material - such as sites celebrating the bizarre or murderous or promoting religious cults or pro-bulimia views - may cause upset or even fear [for examples click here].

What can you do?

Links:
Home Office advice on safe surfing click here
Kidsmart safety site click here

More serious problems can occur with the use of chat rooms by children. Paedophiles deliberately target children in some chat rooms and 'groom' them over a period of time to obtain personal information and then physical access to the child with a view to physical abuse.

Girls in their early teens are particularly vulnerable to the idea of meeting a 'friend' whom they believe from on-line chat to be a boy of a similar age.

What advice should you give to your child? Tell them:

Links:
Internet Crime Forum report "Chatwise, Streetwise" click here
Home Office advice on chat rooms click here
Chat Danger site click here

Finally, on a more positive note, many children have gone beyond being mere consumers to become creators of their own web sites, either alone or in conjunction with friends at their school or a community group.

The organisation Childnet International runs a global competition each year to identify and make awards to some of the best such sites and many of the sites they identify are truly outstanding. Maybe you could help your youngster to make a start in this direction?

Link: Childnet International awards click here


Our Internet columnist Roger Darlington asks a question which must have crossed the mind of many Net users:

WHO CONTROLS THE INTERNET?

We are frequently told that the Internet is a new kind of communications medium that is not - and cannot - be controlled by anyone, whether individuals, corporations or governments.

This is, of course, nonsense. Somebody has to be running the Internet, otherwise it would not be possible for some 600M users (at the last count) to communicate almost instantaneously with every country of the world at every second of the day.

But certainly control of the Net is a much more complex and complicated matter than in the case of a newspaper or magazine or a radio or television station.

In fact, there are three main bodies that currently control the global Internet:

Of course, hardly any Internet users have actually heard of these bodies and only a tiny, tiny fraction has any chance of influencing them. Overwhelmingly they are made up of representatives of powerful corporations, mostly American-owned.

In 2000, ICANN held direct elections for almost half its board of directors, theoretically allowing anyone in the world with an e-mail address to vote. The election was a sham and it has now abandoned this method of choosing some of its directors.

In the November/December 2002 issue of "Foreign Affairs" magazine [click here], Zoe Baird - President of the Markle Foundation - wrote: "International institutions engaged in Internet governance will have to confront three significant challenges if they are to achieve legitimacy: increasing participation by developing countries, providing access to non-profit organisations and ensuring democratic accountability".

She is absolutely right and international trade unions should be allying with other civil society organisations to make this a reality.

For the individual Internet user, however, this is all pretty esoteric stuff.

At a more practical level, where do you go if you find child pornography or race hate material on the Net, if you want advice on how to block children's access to pornographic content, if your credit card details have been misused on-line, if you have been defamed on a web site, or if your copyrighted article or music has been used without permission?

In the UK, the only relevant body is the Internet Watch Foundation [click here] and that only has the narrow remit of criminal content. For about 18 months, Home Office officials and industry representatives have been discussing the idea of some kind of 'one stop shop' for Internet consumer problems but, at a time when Ministers are fond of emphasizing that Internet time is shorter than chronological time, progress is glacially slow.

The Internet raises major social, economic and ultimately ethical issues: How can we make the Net accessible to all, regardless of income or disability? How can we extend broadband to all parts of the country? How much free speech should be allowed when individuals can be defamed and ethnic groups can be insulted? What is the impact on children of using chat rooms and on-line gaming?

We have nowhere to debate and discuss these issues. In China - where I have spoken on Internet regulation - they take these matters very seriously and have formed the Internet Society of China [click here]. But this is a country where democratic values are at best embryonic and we ought to be able to do much better.

In short: we need to democratise existing global institutions to empower civil society and we need to create new institutions to empower local consumers. Preferably before the number of Internet users hits one billion.

Link: ITU Workshop on Internet Governance click here


Our Internet columnist Roger Darlington shares your anger:

DAMN THAT SCAM AND SPAM

Have you ever had an e-mail that:

If you have never had such an e-mail, you are not connected to the Internet. Such communications are scam or spam or both. They are the bane of Internet users' lives and they threaten to slow down the Net and render it less effective.

As far as scams are concerned, the golden rule is, if something seems too good to be true, it is almost certainly a ruse. Recipients of such e-mails should never respond, even to express anger or opposition. This simply indicates that one's e-mail address is valid and active which, in itself, is useful information to criminals.

The West African Organised Crime Section (WAOCS) of the National Criminal Intelligence Service (NCIS) [click here] leads the opposition to the so-called '419 scam', named after the section of the Nigerian penal code that prohibits it, but individuals receiving approaches should report the details to the fraud squad of their local police force.

Spam is unsolicited e-mail (the colloquial term spam comes from a well-known Monty Python sketch). Most spam is not of itself illegal - although some of it might become illegal in some jurisdictions quite soon. The problem with spam is that, because it is so cheap for the spammer, there is so much of it - slowing down the Internet - and much of it is unwanted, time-wasting and even offensive (especially where the promotion of pornography is concerned).

It is estimated that spam now accounts for almost 10% of all e-mail in the UK, almost 40% of e-mail in the USA, and some 80% of e-mail received by Hotmail accounts. Worldwide some 10 billion spam e-mails are sent every day.

Many Internet users wonder how their address has been acquired by those issuing such e-mail. In fact, there are all sorts of technical means and software to search out or create e-mail addresses, even before we ourselves reveal that address by issuing e-mail or accessing web sites. This is particularly a problem for someone using a well-known e-mail facility like Hotmail or someone who has used the same e-mail address for some time. Therefore one option for minimising this type of mail - although it is certainly not a convenient one - is to change one's e-mail address.

In both Europe and the USA, there is such growing concern about the volume and nature of spam that there are now serious efforts to make it illegal to send such communications without either the option to opt out or a requirement to opt in to receiving such material (similar to arrangements that we have for the off-line equivalent which is direct mail, often known as junk mail).

In May 2002, the European Parliament voted to protect European Internet users from spam by adopting a directive, making it illegal to send unsolicited e-mail, text message or other similar advertisements to individuals with whom companies do not have a pre-existing business relationship [for analysis of the EU Directive click here]. Meanwhile, here in the UK, the Advertising Standards Authority [click here] has decided that in Britain bulk e-mail should either require the permission of the recipient or be clearly marked as 'unsolicited', although it is unclear how these requirements will be enforced and, in any event, most spam originates from the USA.

Since all legal measures will take time and will only be partially effective, there are some technical measures which one can take to minimise (but not stop totally) this problem. One can express concern to one's Internet service provider (ISP) and ask what arrangements the ISP has in place to block or minimise spam. Furthermore one can purchase anti-spam software or, at no cost, adjust the appropriate settings in Microsoft Outlook. Microsoft's own answer to spam is that we pay to send e-mail - but hopefully that will be a non-starter.
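For the curious, the sketch below shows - in Python, and purely as an illustration rather than a description of how any particular product works - the kind of rule-based filtering that most anti-spam tools perform: score each incoming message against a list of known spam markers and set it aside if the score is too high. The marker list and threshold are invented for the example.

    # Illustrative only: a naive rule-based spam filter of the kind
    # built into e-mail clients and anti-spam packages.
    SPAM_MARKERS = ["lottery", "viagra", "urgent business proposal",
                    "million dollars"]   # invented example markers

    def looks_like_spam(subject, body, threshold=2):
        # Count how many known markers appear in the message and
        # flag it as spam if the count reaches the threshold.
        text = (subject + " " + body).lower()
        score = sum(marker in text for marker in SPAM_MARKERS)
        return score >= threshold

    # looks_like_spam("URGENT business proposal",
    #                 "Claim your lottery prize of a million dollars")
    # returns True; an ordinary work message scores 0 and passes through.

Real filters are more sophisticated - they weigh sender reputation and message structure as well as words - but the principle of scoring against a threshold is the same.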

Links:
Fuller discussion of scams click here
Fuller discussion of spam click here


Our Internet columnist Roger Darlington introduces you to the newest hot feature of the Net:

WELCOME TO THE BLOGGER

Currently one of the 'hottest' new features of the Internet is the weblog or blog. This is like an electronic diary, but differs from a paper diary in two major respects. First, it is accessible in real time all the time by every Internet user in the world. Second, it can be linked to any other information on the Net.

When I first started my personal Web site almost four years ago, I thought that soon most Internet users would have their own site. After all, one of the coolest features of the Net is that you can be your own publisher at negligible cost with a potential world-wide readership in the hundreds of millions.

But it hasn't happened. Very few Web users have their own Web site. Why not?

There are technical obstacles - it takes time to learn HTML (hypertext markup language) if you want total control, although packages like Dreamweaver make creation of material very easy. Also few people have the enthusiasm or the stamina to ensure that a site is regularly up-dated and indexed with interesting material.

Blogging - the technique of running a blog - overcomes these problems.

Blogs use standard software which is downloadable free of charge. This software makes the publication of text as simple as typing a Word document. Content-wise all that is required is a few sentences every few days, linked to material one has already accessed or seen on the Web.

It is no surprise therefore that blogs are blossoming. They are easy to create and compulsive to read.

The most commonly used blogging software is called Blogger [click here], but other options include Movable Type [click here], Radio Userland [click here] and pMachine [click here].

The term 'weblog' was first used in 1997. Then, in 1999, a San Francisco-based company called Pyra Labs released Blogger, free software that soon became ubiquitous among the blogging community. This enables anyone to build their own blog, provided they have a title, a user name and a password.

The dreadful events of 11 September 2001 led many Net users to want to express their feelings, and blogs were an obvious forum. Then, in February 2003, Google gave the 'official' stamp of approval to blogging by buying up Pyra.

Whereas the first Gulf War was the breakthrough of real-time satellite television (like CNN), the second Gulf War was the breakthrough of the Internet as a medium for both receiving and commenting upon breaking news - and the reason is blogging.

Some of the media representatives embedded in the coalition forces ran their own blogs, although one of CNN's correspondents, Kevin Sites, was told by the company to stop this practice. Some citizens 'on the ground' offered their experiences and views to the world, the most famous for a time being a Baghdad man who used the codename Salam Pax [click here].

In the world of bloggers (called the "blogosphere"), this practice is known as "warblogging". One of the best examples is The Command Post [click here] which, within a week, brought 120 correspondents together into a 'collective weblog'.

Like newsgroups (which use the Usenet system) or community groups (which are hosted by companies like AOL and MSN), weblogs have created communities on the Net. People with similar interests link to one another, make comments on each other's material, and use each other's news feeds.

The exchange of news feed items uses a standard XML (extensible markup language) format called Really Simple Syndication (RSS). This is a format for exchanging news items on news sites and weblogs in a speedy and simple manner.

There are now thousands of RSS news feeds and, if one pulls them all together, one has a new kind of search engine - one that provides a real-time insight into the content of tens of thousands of weblogs. One example is called Feedster.
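For those who like to see the machinery, here is a minimal sketch in Python of what such a service does with an RSS feed. The feed is a toy example embedded as a string - a real aggregator would fetch it over the web - and the titles and links are invented; it shows how little is needed to pull the latest items out of a weblog.

    import xml.etree.ElementTree as ET

    # A toy RSS 2.0 feed of the kind a weblog publishes; the titles
    # and links are invented for this example.
    FEED = """<?xml version="1.0"?>
    <rss version="2.0"><channel>
    <title>An example weblog</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
    </channel></rss>"""

    root = ET.fromstring(FEED)
    for item in root.iter("item"):   # each <item> element is one news entry
        print(item.findtext("title"), "-", item.findtext("link"))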

Nobody knows how many blogs are out there now, but currently the best estimates start at around 750,000. This number is starting to include some politicians. In the USA, the infamous Senator Gary Hart has one, while here in Britain the Labour MP Tom Watson has started one: click here.

Starting to use blogs is like starting to use the Web. You probably want to have some sort of portal and you want one or two favourite locations from which you can then follow the links. Look for the blog of someone you know or someone with a similar professional or personal interest.

As a portal to the world of blogs, I recommend a British site about the digital media called mbites: click here

For an initial favourite, let me recommend the blog of a Dutch trade union official and Internet enthusiast who is a friend of mine, Oskar van Rijswijk: click here

Be warned - I'm even thinking of starting a blog myself [I have now done so: click here].


Our Internet columnist Roger Darlington finds that not everyone loves the Net.

E-MAIL: CONVENIENCE OR CURSE?


Like most readers, I have used e-mail for years and regard it as an integral part of my life. However, I was recently on a group trip to St Petersburg [for an account click here] and most of the 60+ year olds on the tour could not understand why anyone would want to use something as strange and impersonal as e-mail when one could send a letter or make a telephone call.

This made me ponder. Perhaps we should pull back a little and remind ourselves of the many benefits of e-mail, while acknowledging some of the undoubted problems with its use.

So, why is e-mail so useful?

For all these advantages, if we are honest, e-mail does have some drawbacks.

We all have to find our own ways of balancing these advantages and disadvantages. Above all, it is important not to become a slave to e-mail and to set aside allocated times for dealing with it.

Organisations too have to work out how to maintain a sense of balance. Some companies are now designating one day a week when colleagues should talk rather than e-mail.

Already the mobile equivalent of e-mail - texting - is massive. Many people who would never bother to send a postcard from their holiday destination are happy to text away from the beach or the bar.

The arrival of 3G mobile [for the UK's first such service click here] means that Net connection on the move is going to become easier, faster, and (eventually) cheaper. Then you will have e-mail everywhere all the time - unless you control it.

For all the problems of e-mail, I would not be without it. It has made the world smaller and friendlier.

However, I would like us to make it less prosaic and a little cheerier. I am not looking for more use of emoticons [for the longest list of emoticons in the world click here]. But, as a start, we could make the subject title wittier, so that people actually want to open the e-mail and do so with a smile. Give it a try.

Link: A Beginner's Guide To Effective E-Mail click here


We all use computers, but probably few of us have paused to ask the question posed by our monthly columnist Roger Darlington:

WHERE DO COMPUTERS GO TO DIE?

When I was a boy, I saw one of those black and white films on television in which explorers in darkest Africa discover a secret valley where aged elephants go to die. Of course, this provided an enormous stock of valuable ivory and so the explorers saw the location as a kind of 'El Dorado'.

Staying with this image, one can view old or broken computers as a kind of white elephant. So, where do PCs go to die? This is not an academic question. According to Gartner Dataquest, an American research firm, the world computer industry shipped its one billionth PC in 2002 and another billion are expected to be built in the next six years.

The problem of surplus PCs is part of a bigger issue described by the European Commission as Waste Electrical and Electronic Equipment (amusingly abbreviated as WEEE), known less pedantically in the USA as e-waste, and dubbed by some environmentalists as techno trash. The relentless advance of technology and our consumerist wish for the latest gizmo mean more and more obsolescence and waste.

The UK alone produces one million tonnes a year and this is set to double by 2010. White goods - things like microwaves - contribute 43% of this figure, while IT - including computers - is the next largest component at 39%. Consumer electronics - predominantly televisions - is next on the list at 8%.

So where does all this material go and what is done with it?

Much of it goes into landfills, further spoiling and even endangering our environment. At its worst, it is shipped out to impoverished communities in countries like China, India, Pakistan, Vietnam and Singapore. Here it is broken up to recover component materials of steel, aluminium, copper, plastic and gold.

This is hazardous work. Wires are burned in the open air, creating toxic fumes, to free the metals from their plastic surrounds; computer monitors are broken up by hand to extract tiny amounts of copper; circuit boards are melted over coal grills to release valuable chips but also toxic vapours; leftover plastics are either burned, creating piles of contaminated ash, or dumped into rivers or canals, polluting the water. The people who do this dirty and dangerous work typically receive less than £1 a day for it.

European nations have now signed a total ban on toxic waste exports, although some European waste still seems to be sent to India and Pakistan. But the USA refuses to sign the ban.

Meanwhile the European Parliament has adopted a WEEE Directive and a related measure called "the restriction of the use of certain hazardous substances in electrical and electronic equipment" (RoHS). These measures ban untreated e-waste from landfills, ban most hazardous materials from electronic goods, set recovery and recycling targets for e-waste and, most crucially, shift the onus of waste disposal to the producers of these goods in a process called Individual Producer Responsibility (IPR).

So the focus is very much on recovery and recycling, but ultimately we might see a shift to redesign, making greater use of more resource-friendly and reusable components.

However, it would be good to think that at least some of those PCs no longer wanted in the rich West could be refurbished and redistributed to less privileged communities in the developing world.

World Computer Exchange (WCE) is an educational non-profit organisation in the USA focused on helping the world's poorest youth to bridge the disturbing global divides in information, technology and understanding. WCE does this by keeping donated Pentiums, Power Macs and laptops out of landfills and giving them new life connecting youth to the Internet in Africa, Asia, and Latin America.

In more localised schemes, it is possible to redistribute computers unwanted by businesses to local schools or community centres.

Of course, a really radical approach to the problem of computer obsolescence would be to rethink totally the role of the PC, making it a relatively 'dumb' terminal - requiring less frequent replacement - by putting much more of the intelligence and power in the network. This was the model envisaged by Larry Ellison, the maverick head of Oracle, when he talked of the 'network computer' in the mid 1990s. But it will never happen.

Links:
"The High Tech Trashing Of Asia" click here
The current version of the European Directive on WEEE click here
The current version of the European Directive on RoHS click here
World Computer Exchange click here
"Just Say No To E-Waste" click here
Tools For Schools click here
Computer Aid International click here
US Computer Takeback Campaign click here
Electronics Recycler's Pledge of True Stewardship click here
Cartridges4Charity click here


Whether you want a mate or a date, you would be amazed who you can meet on-line, writes our Internet columnist Roger Darlington

HOW TO NET A MATE

I recently took out to lunch in London a young American woman called Emily who - together with her partner - was making her first trip outside the United States. Although I had never met her, I already knew more about Emily than about most people I've known for decades and appreciated that, for instance, we shared strong criticisms of the Bush administration.

How come? Diligent readers of this column may remember a piece I wrote for the May issue of "Review" concerning the growing phenomenon of blogging [click here]. At the end of that piece, I said that I was even thinking of starting a blog myself - and I did. As a result, I have been in contact with other bloggers around the world, including Emily who runs a fascinating blog from Portland in Oregon.

Several of my friends have used the Net to seek meaningful relationships and, in at least one case, the impact has been life transforming.

Peter (at this point, I switch to fictitious names) is a professional man in his late 20s. His technical skills are stronger than his chat-up lines but, in any event, his job takes him all around the country and he has neither the time nor the opportunity to meet many young, single women. So he joined an on-line dating agency.

Mary is a single attractive woman in her early 40s who has had several relationships. She leads a busy life career-wise and certainly has no desire to cruise bars looking for guys who just might share her interests. She has used several Internet dating agencies and met many interesting men as a result.

Paul is in his late 30s and has recently come out of a painful divorce. The idea of dating again was, in many respects, an uncomfortable one. He recognised that he had a fair bit of emotional baggage and that meeting someone who understood him and shared his interests was not going to be easy.

He joined an Internet agency and eventually struck up an on-line friendship with a woman living at the other end of the country whom he would never have met in the physical world. They got to know each other well through the Internet and phone calls before actually meeting and they now live together at what was her place.

Peter, Paul and Mary (I know!) are just the tip of the iceberg when it comes to on-line friendships and dating. All sorts of specialist web sites are now being set up.

For instance, if you are a young Muslim who accepts the notion of arranged marriages but does not want to leave the choice entirely to your parents, there are web sites focused precisely on this situation. Such sites enable customers to specify what branch of Islam and what cultural beliefs and practices any potential partner should follow.

Obviously, there are advantages and disadvantages to finding a friend or date on-line.

The main benefits are access to a far wider range of possibilities than is possible by going to parties or bars and the opportunity to establish in advance whether you share important interests.

The principal disadvantage is the risk of emotional hurt if the person one meets is not like the image they created on-line - plus the (remote) possibility that the person may be threatening or abusive.

So, the normal rules apply. Spend some time learning about someone on-line before meeting in the 'real' world and, if in any doubt, meet in a public place and even take along a friend.

Children are much less inhibited than adults in making friends on-line but, of course, they are much more vulnerable too. Many children use chat rooms and sadly some paedophiles enter chat rooms, adopt a false persona, and seek to win children's friendships through a process known as 'grooming'.

So, if you are a parent with a child who uses the Internet, the rules you must insist upon are:

Links:
Dating Direct click here
Face Party click here
Make Friends Online click here
Love @ Lycos click here
Muslim Marriages click here


Our Internet columnist Roger Darlington tackles a question that would have challenged Hamlet.

3G OR NOT 3G? - THAT IS THE QUESTION

For a few weeks in the spring of 2000, some of the brightest business brains in Britain went bonkers. The result was that, between them, five companies paid a total of £22.5B in an auction for third generation (3G) mobile licences.

The Chancellor of the Exchequer was delighted and promptly used this unexpected munificence to reduce the national debt. However, ever since then, analysts have wondered how it would be possible for these services to generate the sort of revenues and profits that would make such investments remotely credible.

Jump forward three or so years and where are we?

Only one of the networks - the appropriately named 3 [click here] - is operating and it is struggling to reach its proclaimed target of a million customers by the year end. Many of its customers have been attracted by its ultra-competitive prices for basic voice services rather than by its more advanced features.

Meanwhile the Dutch telecoms group KPN, one of 3's shareholders, is being sued by the network's main shareholder Hutchison Whampoa in a dispute over the Dutch company's unwillingness to provide further funding.

None of the other four 3G licence holders has announced firm launch plans. Meanwhile two of them - mmo2 and T-Mobile [click here] - have substantially written down the value of their investment.

So, what is the future for 3G?

I recently visited 3's offices in central London and got to play with an NEC e606 clam-shell 3G phone [click here]. The technology is, of course, in its infancy, but it is already clear that an exciting range of new services is on the way.

One service - using a version of GPS - tells you where you are or where you want to be and where to find the nearest cash point, cinema or whatever. Using the same service, you can send a friend a map and directions for a meeting or social event.

3G phones can act as a picture phone or a video phone and you can send the picture or the video to relatives or friends. You can download short video clips of weather information, news reports, sports events or music. There are full-colour high-resolution games.

What I really look forward to - and it will come - is broadband access to the Web while on the move.

But the 3G licence holders do face some formidable hurdles.

The first set is technical. 3 has struggled with battery and handset problems. Roll-out of nationwide coverage will be costly and slow because the network requires many more masts than existing mobile networks. Then there are international problems, since three different technical standards are operating worldwide.

The next set of problems revolves around services which can compete in some respects with 3G but at lower cost. Using GPRS technology (known as 2.5G), existing mobile operators can provide similar services - such as shorter, lower-resolution video clips - at more competitive prices. Then, if one is not on the move, increasingly Wi-Fi can provide a fast and economically priced service.

Above all, 3G does not yet have compelling content, still less a killer application. But more attractive content is coming: news, star interviews, sports clips, music clips, video wallpaper, comedy clips, video trailers. Expect to see next the offer of adult pornography protected by a PIN code system, followed by controversy because some under-18s find a way round this system.

Furthermore it is not possible to predict how people will use a new technology (who would have forecast the attraction of texting?) and no doubt exciting new applications will evolve. Location-based services and the delivery of public services may be very successful.

In short then, 3G networks have a mountain to climb before those who invested in them will see any sort of decent return, but in time 3G will probably achieve a ubiquity and utility greater than most can currently imagine.

Many new technical developments have been the subject of early cynicism or ridicule, but historically the gap between mockery and mass take-up has often proved to be astonishingly quick.

So 3G does have a future, but it will take some years and strong nerves before it succeeds. Whether all five players will survive is another matter, with the first in the field (3) probably the most likely to be taken over.


New year, new technology, new challenge - our Internet columnist Roger Darlington explains all.

VoIP - THE SMALL ACRONYM WITH BIG IMPLICATIONS

Ever since the Internet 'took off' as a data network - for sending e-mail and browsing web sites - companies have been exploring the option of putting voice traffic onto the Net or other networks deploying the same technical specifications. Since the Internet uses particular protocols (known as Transmission Control Protocol/Internet Protocol or TCP/IP), this development is called Voice over Internet Protocol (VoIP).

VoIP has been around for years, but it is now starting to have a major impact on the thinking and planning of telcos like BT - and for good reason.

For business customers, IP telephony has in the past been plagued by end users' doubts over line quality and by concerns about relying on one network (instead of deploying dedicated voice and data networks). However, these problems are now diminishing, as a result of improved technology and greater investment in in-house systems. The major remaining problem is the much greater cost of handsets.

For most domestic customers, VoIP currently means using a lightweight headset with a PC to make and receive calls at nil cost, even if these calls - as is usually the case - are international. However, it is already possible to make VoIP calls over a traditional analogue telephone provided one has a special adaptor connected to an ADSL or cable broadband service.

In future, residential customers will obtain VoIP packaged as a standard item with their broadband provision.

In a number of countries, VoIP has already taken off and Britain cannot be far behind. Examples include:

A report by the Yankee Group, released in October 2003, states that 83% of European operators surveyed by the group are expected to be offering VoIP services within two to three years. The main reasons given are greater cost-effectiveness, the ability to bundle voice and data services, and the provision of more compelling broadband services.

The switch to VoIP will further diminish the core revenues of traditional telcos like BT - already being hit by a combination of ferocious competition, excess capacity and tough regulation. VoIP customers will not pay by calls made, but instead pay a flat-rate charge for unlimited calls along the current model for broadband Internet.

Consequently the whole basis on which tariff structures have traditionally rested could be swept away. Already there are tariff options which are not based on numbers of calls (such as BT Together). Within five years, telco customers will not buy lines or calls at all, but packages and bandwidth.
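To illustrate that shift with deliberately invented figures: under per-call billing, the bill rises with every minute used; under the flat-rate model it does not, however heavy the usage. A short Python sketch:

    # All figures are invented purely to illustrate the two tariff models.
    minutes_used = 480            # eight hours of calls in a month
    price_per_minute = 0.03       # assumed per-minute charge in pounds
    flat_rate = 9.99              # assumed monthly flat-rate charge in pounds

    per_call_bill = minutes_used * price_per_minute
    print(f"Per-call billing: £{per_call_bill:.2f}")   # £14.40
    print(f"Flat-rate billing: £{flat_rate:.2f}")      # £9.99, however much is used

Under the flat-rate model, the heavier the customer's usage, the better the deal - which is precisely why it undermines tariff structures built on call volumes.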

This will have enormous implications for how telcos market their products and account for their revenues. It will also affect regulatory controls, since market share will no longer be measurable by call volumes.

At this early stage, we can only speculate about the impact on staffing, but it seems likely that over time VoIP will have a significant impact on staff numbers, skill levels, and workplace locations.

For Connect members, the move to VoIP has to be seen in the context of BT's moves to what it calls "the 21st century network" and what is more generically known throughout telcos as "next generation networks". BT aims to cut operating expenditure on its network by 30-40% in five years, representing an annual saving of some £1B by 2008.

Key aspects of this new network have still to be decided, notably where the intelligence should reside and how open the network should be. The main purposes of the new network, however, are to reduce the complexity and rigidity of the system so that new services can be provided quickly and flexibly. A good example will be the provision of broadband on demand or what is now being called 'liquid broadband'.

The 21st century network will have fewer exchanges and buildings and lower levels of network staffing. Therefore there will be staffing reductions and relocation and reskilling of many of the staff who remain. It will represent the largest technology/staffing challenge for the company, its staff and its unions since the days of the introduction of System X exchanges and the Customer Services System (CSS).

Links:
My guide to VoIP click here
Wikipedia page on VoIP click here
OECD paper on VoIP click here
Federal Communications Commission Panel on VoIP click here
Vonage - The Broadband Phone Company click here
MediaRing VoIP service click here


Secretly and silently a new technology is set to transform our lives. Our Internet columnist Roger Darlington asks:

ARE THEY PLAYING TAG WITH YOUR LIBERTY?

Very few people have heard of it and virtually no one has actually seen it. The technical name is Radio Frequency Identification (RFID), but think of it as an electronic tag.

It is a tiny micro-chip. It measures less than a third of a millimetre wide - little bigger than a grain of sand. It contains a microscopic antenna - invisible to the naked eye - that broadcasts via very low-power radio. It beams information in the form of a 96-bit identity code to reader devices located up to 10 metres away.

It is set to replace the ubiquitous bar code and, in some ways, it is similar to it. The main differences are that it can be read at a distance and it can be encrypted, protected, and written to as well as read from.

It will begin - indeed it has started - with pallets and batches of products. A big boost was given last year when Wal-Mart [click here], the world's largest retailer, insisted that its top 100 suppliers use RFID tags by 2005. Even bigger news was the announcement by the US Department of Defense that it would require the same from its suppliers.

Meanwhile there is a shop called the Future Store, owned by the Metro Group, in Rheinberg, Germany, which is already completely based around RFID through the extended supply chain, linking to in-store customer relations management (CRM) and loyalty programmes, and allowing trolleys full of goods to be simply swiped past a check-out.

The next stage is to tag all consumer products and goods. Another major push came with the formation of the Auto ID Centre, a consortium of 100 global companies and five of the world's leading research centres, which has launched the electronic product code (EPC), the successor to the bar code [for details click here].

The EPC system will permit every product in the world to have a unique number. Via the radio transmission of these devices to readers connected to networks, they can all be linked to the Internet in what some people are already calling "the Internet of things".
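The scale of that numbering space is worth pausing on: a 96-bit code allows 2 to the power of 96 distinct identities, comfortably enough to give a unique number to every manufactured object on the planet. The short Python sketch below makes the arithmetic concrete; the four-field split is a simplified illustration of how such a code can be partitioned among manufacturers and product lines, not the exact EPC specification.

    # A 96-bit code space: roughly 7.9 x 10**28 distinct identities.
    TOTAL_BITS = 96
    print(f"Distinct codes: 2**{TOTAL_BITS} = {2 ** TOTAL_BITS:,}")

    # Illustrative partition of the 96 bits (simplified, not the exact
    # EPC layout): header, manager number, object class, serial number.
    header, manager, object_class, serial = 8, 28, 24, 36
    assert header + manager + object_class + serial == TOTAL_BITS
    print(f"Serial numbers available per object class: {2 ** serial:,}")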

But why stop at things? Animals with a high financial value, like cows, and animals with a high emotional attachment, like cats, are already being tagged.

Will it end with animals? Certain classes of criminals are already being tagged. What about tagging children so that, if they are lost or abducted, they can be easily found? What about tagging babies with details of blood group, allergies, or congenital problems? As the baby becomes a child and then an adult, one could add data on all inoculations and major illnesses.

Does this sound crazy? Well, a US firm is looking for banks willing to ask their customers to have RFID chips implanted under their skin as a replacement for debit and credit cards [for further information click here].

How quickly will things happen? The current consensus is that it will take around a decade for tagging to reach the level of individual products. I think it will be faster.

There are massive advantages, not all of which can be currently foreseen.

Manufacturers and retailers will be able to revolutionise supply chain management, significantly reducing costs and ensuring constant availability of goods, both benefits which can be passed on to the consumer. The customer will not have to queue at check-out desks because the RFID reader will scan all purchases instantly. If a food product is found to be contaminated, its entire history will be immediately available.

Once at home, tags could highlight when the 'eat by' date of food has passed or indicate if a medicine has contra-indications or if a plug has the incorrect fuse. When the technology develops a little, the smart fridge could order replacement products and the smart washing machine could determine the correct temperature for the wash.

There are an enormous number of applications in the fields of crime and security. London's new bus and tube tickets already work this way [for details click here]. The European Central Bank is working on a project to embed RFID into every Euro note by 2005. What about tagging every gun and rifle?

Of course, as with any new technology, there are dangers - in this case, many threats to privacy.

At its most fearsome, RFID systems could trigger CCTV cameras to record customers buying particular goods (this actually happened in a trial involving Gillette razor blades at a Tesco store in Cambridge).

Such privacy concerns have prompted the establishment of an alliance of 30 European and American libertarian organisations to issue a statement calling for a voluntary moratorium on the use of RFID until a formal technology assessment process involving all stakeholders takes place.

Some chance. It's already moving too fast - and it's coming soon to a store near you.

Links:
Association for Automatic Identification and Data Capture Technologies click here
RFID Journal click here
Home Office Chipping of Goods Initiative click here
No Tags UK campaign click here
"Chips with everything" by Mary O'Hara click here


Thought e-commerce had flopped? Think again, urges our Internet columnist Roger Darlington.

E-COMMERCE MEANS CLICKS AND MORTAR

Remember the late 1990s? The media was full of stories of e-commerce taking over the world and spotty young entrepreneurs were becoming paper millionaires by opening a dotcom company with the vaguest outline of a business plan.

Inevitably it all went horribly wrong with the stock market crash of April 2000. Since then, we've heard very little about e-commerce - so what's going on?

According to IMRG [click here], the trade body for on-line retailing, UK consumer sales increased by around 80% in 2003, taking the total to £14 billion. This may still be a small proportion of the total retail market, but Net sales are growing at something like 10 times the retail market as a whole. Already Royal Mail believes that on-line sales have overtaken mail order catalogues in its share of the total retail market.

British success stories include such very different businesses as the travel operation lastminute.com [click here], the gadget company firebox.com [click here], and the bra business amplebosom.com [click here].

At the global level, we are seeing some major dotcom companies actually making money. Both Amazon [click here] and Ask Jeeves [click here] have announced their first full-year profits. Google [click here] is planning a stock market flotation.

BBC 2 television recently ran a feature in the Money Programme series entitled "Dotcoms Bounce Back". So, why is it different now, compared to the crazy days of the dotcom boom?

On the supply side, companies - and crucially their bankers and investors - are now much more tough-minded and realistic about their prospects and plans. On the demand side, we have twice as many people on-line and many are now on broadband.

The other interesting development is that most of the successful companies are not pure on-line operations, but extensions of businesses with real assets and a proven record in the physical marketplace - a case of 'clicks and mortar'.

Here in Britain, only four of the top 20 Internet retailers are pure e-commerce: Amazon [click here], eBay [click here], Kelkoo [click here] and CD Wow! [click here]. The rest are well-established retailers like Tesco [click here], Next [click here] and Argos [click here].

In many respects, Amazon is an example of e-commerce at its best.

When I access the web site, because I have already made purchases from the site, it welcomes me by name and makes recommendations as to the books, CDs and DVDs that I might like to purchase based on the tastes revealed by previous transactions. Since the site has a record of my credit card details and my delivery address, I have the option of making a further purchase quite literally with one click.

But not all e-commerce operations are this slick and not all Net users are so familiar with such transactions. The key factors determining the growth of e-commerce can be categorised as the 'four Cs'.

First, connectivity. It is self-evident that you cannot engage in e-commerce unless you have a connection to the Internet and, in spite of recent growth, half the population is still not on the Net and penetration seems to have stalled.

Second, cost. It is essential that consumers can surf at leisure, so that they can compare and contrast e-commerce offerings and take time to choose best value products and services, so broadband prices need to fall still further.

Third, cash. By definition, e-commerce has to involve some form of payment and this is primarily through the use of credit cards, but there are some consumer concerns about the privacy of data and credit card scams, so we need to develop various new forms of e-cash including smart cards.

Fourth, confidence. If e-commerce is to thrive, consumers should not fear ordering goods and services because they are concerned whether they will receive exactly what they want and whether they will be able to easily return defective or unwanted items, so we need effective and trusted best practice schemes among e-retailers.

Unquestionably, the main factor holding back consumer e-commerce is concern about security.

IMRG has established a scheme called Internet Shopping Is Safe or ISIS. This scheme carries out an annual audit of security and service on the sites of its members. The Office of Fair Trading has a section of its web site devoted to shopping from home [click here] and the Department of Trade & Industry has launched a special section of its web site to advise consumers on e-shopping.

Meanwhile most of e-commerce is totally unknown to and unseen by consumers because it is business-to-business (B2B) as opposed to business-to-consumer (B2C). On some estimates, B2B accounts for around 80% of all e-commerce. Such operations link retailers, manufacturers and suppliers, strengthening the supply chain and reducing costs to the ultimate benefit of consumers.


This issue, our Internet columnist Roger Darlington considers a subject most of us would rather not think about too much:

EXTREMISM ON THE NET

As regular readers of this column will be aware, I am a passionate enthusiast for the Internet. But I have always been well aware that the Net has a dark side.

My concerns were brought into sharp focus when I was recently interviewed for ITN in my capacity as Chair of the Internet Watch Foundation [click here]. My interviewer was Sue Barnett, the sister of Jane Longhurst who was murdered by Graham Coutts [for information on the case click here]. During the trial of Coutts, the court heard how he had repeatedly accessed web sites depicting violent sex and how elements of his actions mirrored what he had seen on-line.

The issue of violent sexual images on the Net is simply the most high profile of a range of deeply offensive material which includes political fascism, skinhead fascism, white power, white supremacy, militia groups, race hate, anti-Semitism, Holocaust denial, world conspiracy, religious cults, Islamic militancy, virulent anti-homosexuality, pro-anorexia/bulimia, virulent anti-abortionism, violent propagation of animal rights, sports hooliganism, violent political activism, bombmaking information, and suicide assistance - to name a few.

Of course, some of these categories merge into one another or overlap. For instance, many white supremacy sites endorse conspiracy theories and many Islamic militancy sites are anti-Semitic.

Now these views have always been held and propagated, but with 'old' media - such as pamphlets, books, newspapers, radio and television - outlets are limited and usually come at a price, while publishers, editors and regulators exercise a web of control. In the case of the Internet, anyone can publish any view at any time with virtually nil cost and no controls whatsoever.

It is as easy for a white supremacist to put up a web site as for a multi-national company and the same audience of Net users - approaching half a billion world-wide - has the same private access, literally at the click of a mouse. In this new kind of virtual environment, the extremist has a voice of a power and reach that he could never hope for in the physical world.

What is to be done?

First, we have to accept that the Internet will never be controlled like radio and television and, in many respects, this is a strength of the medium. It empowers citizens and democratises our world.

However, governments and legislatures around the globe need to review the relevance and adequacy of laws devised before the Net was even imagined. UK laws on obscenity and race hate, for example, may need modernising to make them more appropriate to the world of the web.

We need to understand though that, wherever the law draws the line, it is likely to be different in different countries and we are dealing with a worldwide network. Furthermore, wherever the line is drawn, there will always be plenty of material on the legal side of that line that is grossly offensive to many and potentially harmful to children.

Therefore, when grossly offensive material is brought to their attention, hosting companies should ask themselves whether they really want to host such material. Understandably such companies do not want to appear to be acting as moral guardians or censors.

However, where the creation of such material involves abuse or harm or where viewing such material may well encourage or incite the viewer to commit harm, there is an obligation on the company to think very hard about its responsibility.

Ultimately, though, all Internet users - and especially parents, teachers and those with responsibility for children and other vulnerable groups - need to accept a measure of responsibility for the use of the Net by those in their charge.

Rating of web sites and use of filtering software certainly have a role to play, but are far from perfect tools. Parents and teachers need to have clear understandings with children about acceptable and responsible use of the Internet and to monitor that use closely and sensitively.

There has been too much moral panic about the Internet, whipped up often by populist newspapers that know little about the web and are mainly interested in selling more copies of their paper. What we need is more informed and balanced debate about what is technically possible and politically acceptable and what role each of the parties - including you and me - has to play.

Links:
"Sex On The Net" click here
"Extremism On The Net" click here


We act as if everyone is busy surfing the web, but half of Britons are still not connected. Our Internet columnist Roger Darlington wonders why and floats an innovative idea.

HAS NET GROWTH STALLED?

Of course, Internet subscriptions and broadband take-up continue to grow - but not as fast as they should. There are now 14.5 million Internet subscriptions in the UK, which represents 50% of all households. Broadband accounts for 3.5 million subscriptions, about a quarter of the Internet total.

What seems to be happening is that existing Internet users are up-grading to broadband, but the total of Internet users is only edging up slowly, as we appear to reach some kind of plateau.

A similar thing seems to be happening in the USA where the number of Internet users is also increasing only slowly now, although the plateau there is higher at around 60-65% and around four out of ten of those have broadband.

Another interesting point is that those who switch to broadband frequently do so in order to do exactly what they have previously done with a narrowband connection, but with the convenience of an 'always-on' connection and a flat-rate subscription. Put another way, broadband users are not using the extra bandwidth for services which require that additional speed.

All of which leads us to two related questions. Why aren't people connecting to the Internet in much greater numbers and, once connected, why aren't they using the extra bandwidth which is now available?

No doubt cost is a factor, especially for lower income groups, and prices need to fall still further. The new communications regulator Ofcom will be looking at the pricing of BT's wholesale broadband product and at the options for competitive wholesale offerings through local loop unbundling.

Certainly we need more compelling broadband services and content. BT has made some useful moves recently with the launch of Broadband Voice and Rich Media and other players need to rise to the challenge.

But I suspect that more fundamentally there are two related answers to the questions on slow take up and low-scale usage: lack of confidence and lack of knowledge.

There is a whole generation of consumers - broadly anyone over 40 - for whom computing and the Internet are strange, even frightening, phenomena, unless their work has brought them into contact with these technologies.

These consumers are not comfortable with fitting a modem and connecting to an Internet service provider, they worry about the risk of viruses and spam, and they are terrified at the idea of the PC crashing.

They have no real idea how to make focused use of a search engine, they know very little about how to create and organise favourites, and they know nothing about downloading software or MP3 files. They might be excited about the idea of having a simple web site or weblog but would have no clue about how to start.

Now, of course, many people - typically those under 40 - have used the technology at school, college, university and work. Far from fearing the technology, they love it and delight at searching out new sites and trying out new applications.

So, is there a low-cost, user-friendly way of connecting these two constituencies to enable Internauts to help out Internoughts?

We could have a national web site where those who need help and those who can offer it can register and then a postcode search facility would enable people to find each other. At the local level, churches, residents' associations, community groups, and newspapers could act as 'clearing houses' to put those who need support in touch with those who can provide it.

How would it work?

Albert, a retired postal worker of 64, is visited by Jason, a 19 year old media studies student at the local college. Jason sets up Albert's Internet connection and installs a firewall and anti-virus protection. He calls round for an hour each weekend for the next couple of months to answer Albert's queries and show him how to make best use of his e-mail and the web.

When Albert gets into trouble, he telephones or e-mails - or texts - Jason who immediately offers practical advice and reassurance. Albert and Jason find that they enjoy each other's company, do other things together, and introduce some of their friends.

The scheme - provisionally called NetAid - would benefit from national branding and resources, but essentially would be a local, volunteer-driven initiative. Any takers?


Getting connected to the Net is only the start, argues our Internet columnist Roger Darlington. We want users who are informed and critical or, put another way, media literate.

MEDIA LITERACY IN THE AGE OF THE INTERNET

The new communications regulator Ofcom has no fewer than 263 statutory duties as a result of the Communications Act 2003. One of the least known, but most important, is a specific duty - set out in Section 11 of the Act - to promote media literacy.

The notion of media literacy is an extension to the audiovisual world of the traditional idea of literacy with which we are so familiar in the written world. Originally media literacy concerned radio, television, video and cinema, but Ofcom - in spite of having no responsibility for regulating Internet content - rightly appreciates that the Internet cannot be left out of any initiatives which are taken in this field.

There are many reasons why media literacy must embrace the Internet:

But what exactly is media literacy? And what does it mean specifically for Internet users?

Ofcom's consultation document on media literacy gives a succinct but comprehensive definition, suggesting that "media literacy is a range of skills including the ability to access, analyse, evaluate and produce communications in a variety of forms".

The four elements of the definition are expressed as "the ability to operate the technology to find what you are looking for, to understand that material, to have an opinion about it and where necessary to respond to it".

In the particular context of the Internet, these four elements raise the following issues:

This is only a brief run-through of some of the issues which are raised by media literacy in the age of the Internet, but it is sufficient to make clear that the Net has to be part of any meaningful and comprehensive media literacy programme.

In fact, the specific proposals in the Ofcom document are limited in number, although quite ambitious in scope.

First, there will be an Ofcom programme of research to determine consumer knowledge and needs. This should include what users appreciate about the dangers of illegal and harmful content on the Internet and what they know about the tools available to minimise access to such material.

Second, Ofcom will promote what it calls "connecting, partnering and signposting" to direct people to advice and guidance concerning the new communications technologies. As far as the Net is concerned, this should include direction on how to deal with spam, scams, and viruses as well as problematic content, the danger of chat rooms, and difficulties with e-commerce operations.

Third, Ofcom wants to encourage a common content labelling scheme for electronic audiovisual material delivered across all platforms. A system for labelling Internet content already exists [see the ICRA site click here] and the relationship between Internet content and other audiovisual material must be part of the debate.

It is a substantial agenda but, at its best, media literacy means an efficient worker, an informed consumer, an active citizen, and a protected child.

Links:
Ofcom's strategy on media literacy click here
"What Is Media Literacy?" by Sonia Livingstone click here
"The Changing Nature And Uses Of Media Literacy" by Sonia Livingstone click here
"Assessing The Media Literacy Of UK adults" by Sonia Livingstone with Nancy Thumim click here


As the Web is weaved ever wider and deeper, it becomes more and more necessary to know where to go and how to search. Our Internet columnist Roger Darlington offers some tips and techniques.

HOW TO UNSTITCH THE WEB

In 1994, there were merely 3,000 web sites. Twelve months later, the number of sites had climbed to 25,000. By 2000, it was 7 million. Today the figure is something like 10 million.

Some web sites are literally one page. But one I use a lot contains 32,000 articles. A large number of sites contain rubbish. Many which are not rubbish are nevertheless seriously out of date.

So how can we make sense of this huge information network and find the material we want? The answer is two-fold: creating a comprehensive set of 'favourite' sites and learning a few good search techniques. Many Connect members will be very familiar with such tools but might want to advise and assist less Net-savvy family and friends.

The starting point for any list of 'favourites' should be a couple of good news sites. Personally I think that the BBC [click here] and the "Guardian" [click here] are the best general news sites in the world. Google News [click here] is useful because it aggregates stories from many different sources.

Weblogs (or blogs) are frequently great sites for news because they are often focused on a particular interest, are usually up-dated very regularly, and display the most recent material at the top of the site.

For instance, Connect members could do worse than check out my blog CommsWatch [click here] which carries news of telecommunications, broadcasting and the Internet. Two other good blogs on communications issues are OfcomWatch [click here] and that of my Ofcom Consumer Panel colleague Azeem Azhar [click here].

Any set of 'favourites' should include a weather site such as that of the BBC [click here] and a couple of travel sites such as National Rail Enquiries [click here] plus a finder or locational site such as Streetmap [click here].

There are now over 100 competing directory enquiry services, but I do not understand why anyone on the Net would want to use them when one can access BT's on-line directory service [click here].

It is a good idea to have 'favourites' which enable you to locate your Member of Parliament, such as TheyWorkForYou [click here] and various Government Departments and your local council, such as Directgov [click here].

Obviously you will want some 'favourites' which reflect your hobbies or interests. For instance, I am a big film fan and there is nothing better for me than the Internet Movie Database [click here]. My favourite television programme is "The West Wing" and I can check out episode synopses and background information on a fan site [click here].

The last type of 'favourite' which is an absolute must-have is some sort of on-line encyclopaedia and, for me, there is little to beat Wikipedia [click here].

Most large web sites - including my own [click here] - have a search facility and certainly, when exploring the web as a whole, you will need a good search engine.

Currently the best search engine is Google [click here], but this may not always be the case and it is rumoured that Blinkx [click here] is one to watch. It is worth studying advice on the Google site about the basics of searching and how to conduct an advanced search. This advice includes the following points:

A final tip: when you have a favourite site or find a site through searching, check out the links from that site which will often take you to material of similar nature and provenance.
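For the technically curious, the searching advice can be made concrete in a few lines of Python. The sketch below is purely illustrative: the operators shown (quoted phrases, a minus sign to exclude a term, and site: to restrict results to one domain) are standard Google fare, but the search terms themselves are invented.

    # Build an advanced Google query from standard operators.
    from urllib.parse import urlencode

    query = '"media literacy" -consultation site:ofcom.org.uk'
    url = "http://www.google.com/search?" + urlencode({"q": query})
    print(url)  # paste the result into a browser to run the search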

Links:
The basics of Google search click here
Advanced search on Google click here


Thought that the browser war was long over? Well, serious skirmishes continue. Our Internet columnist Roger Darlington reports from the battlefront.

THE BATTLE OF THE BROWSERS

After e-mail, the most used feature of the Internet is unquestionably the web. But no surfer would be able to make sense of the web without a browser, the software that converts masses of complex code into text and graphics.
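For the technically curious, here is a toy illustration in Python of that core job - fetching a page and stripping away the HTML markup to leave readable text. It is a deliberately simplified sketch, not how any real browser is engineered.

    # Fetch a page and extract its plain text - a browser in miniature.
    from urllib.request import urlopen
    from html.parser import HTMLParser

    class TextExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.chunks = []

        def handle_data(self, data):
            # Called for each run of text found between tags.
            text = data.strip()
            if text:
                self.chunks.append(text)

    page = urlopen("http://example.com/").read().decode("utf-8")
    extractor = TextExtractor()
    extractor.feed(page)
    print(" ".join(extractor.chunks))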

For most people, this is not an issue because they simply use the browser which comes bundled into their operating system and, since the vast majority of PC owners use Microsoft Windows, that means that they use Microsoft's browser Internet Explorer (IE).

Yet it was not always so. The first mass-market web browser was Mosaic, developed by Marc Andreessen, who went on to co-found Netscape and launch Netscape Navigator in late 1994; it is still around. However, Bill Gates quickly realised the importance of the web, paid out $2M to buy browser code from a company called Spyglass and subsequently launched Internet Explorer.

IE was bundled into Microsoft's Windows 95 and war commenced. In 1997, Netscape Navigator was the clear leader with 72% of the browser market, compared to Internet Explorer 3's 18%. But, towards the end of that year, there came IE 4. This was much better than Navigator and, by bundling it into Windows 98, Microsoft dealt a killer blow to its rival. So, within a couple of years, Gates had won a crushing victory over Netscape.

One of the most damaging features of the browser war was that it weakened compliance with standards developed by the World Wide Web Consortium (W3C).

For years now, IE's dominance has been near total with around 96% of web surfers using the Microsoft product. Besides the lack of competition and choice, there have been two main problems with this hegemony.

First, Microsoft essentially stopped developing its browser. With only one major up-grade since 1999, IE is now technologically behind its rivals with significantly fewer features.

Second, Explorer is a security nightmare with plenty of flaws that can be, and have been, exploited by hackers and virus writers. Since the browser is so ubiquitous, these flaws literally have worldwide consequences, with viruses infecting millions of PCs in a matter of days or even hours.

Microsoft has promised that in 2006 it will launch a major up-grade of its Windows operating system code-named "Longhorn". The company claims that this will deliver "major improvements in user productivity, important new capabilities for software developers, and significant advancements in security, deployment and reliability".

Meanwhile, the battle is not totally over and there is a rumble in the hills. During the summer, it transpired that - according to figures from US analysts WebSideStory - Microsoft's market share fell by a percentage point from 95.73% to 94.73% as rivals started to eat into Gates' lead. This might yet mark a significant turning of the tide.

The competitors include Mozilla's Firefox, Apple's Safari and Opera. According to Mozilla, downloads of its Firefox browser have hit 200,000 per day and I have recently become one of its newest users.

Firefox is open source, completely free, and easy to install. It also has several advantages over Explorer.

It is much more secure and keeps your computer safe from malicious spyware by not loading harmful ActiveX controls. A comprehensive set of privacy tools keeps your online activity your business. It also stops those utterly infuriating pop-up advertisements.

It provides a function known as tabbed browsing, which enables you to view more than one web page in a single window. You can open links in the background, so that they are ready for viewing when you are ready to read them.

Another feature called Live Bookmarks is popular with people like me who read weblogs. This is a new technology that lets you view RSS (Really Simple Syndication) news and blog headlines in the bookmarks toolbar or bookmarks menu.
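Roughly speaking, this is what Live Bookmarks does behind the scenes each time it refreshes. The Python sketch below uses the third-party feedparser library to do the same job; the feed address is illustrative and may change.

    # Read the latest headlines from an RSS feed - what an RSS
    # reader such as Live Bookmarks does on each refresh.
    import feedparser  # pip install feedparser

    feed = feedparser.parse("http://news.bbc.co.uk/rss/newsonline_uk_edition/front_page/rss.xml")
    print(feed.feed.title)            # the channel's title
    for entry in feed.entries[:5]:    # the five most recent items
        print(entry.title, "->", entry.link)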

It may be that the battle of the browsers is about to get bloodier. The Net is buzzing with rumours that search engine specialist Google is now working on a web browser. Several weblogs have put together a series of developments which suggest that the search engine is developing new web tools, while one US newspaper has reported that Google has poached former Microsoft workers who created early versions of the Internet Explorer browser.

The war is not yet over …

Links:
The browser war click here
Internet Explorer click here
Netscape Navigator click here
Mozilla Firefox click here


Our Internet columnist Roger Darlington is known for 'thinking the unthinkable', but maybe even he has gone too far in asking:

CAN THE INTERNET SURVIVE?

This was the title of a seminar that I attended recently. The event was held at the London Business School and organised by the Oxford Internet Institute [click here]. The main speaker was Professor David Farber of Carnegie Mellon University in the United States. So - in spite of the seemingly alarmist title - the discussion was a serious affair.

At one level, the answer to the question is obvious: 'The Internet has to survive because it is now so critical to so many governments, companies, organisations and individuals'. At another level, though, the answer is very different: 'The Internet is now being used for so many more purposes by so many more people than for which it was designed that it cannot continue to exist in its present form for much longer'.

Above all, the Internet is just so vulnerable to attack and to failure - as a result of viruses, worms, hackers and spammers.

For most people, the major use of the Net is e-mail. But the growth of spam e-mail has resulted in something like 80% of all e-mail now being unsolicited and unwanted. The volume of spam is literally slowing down the Net and it is forcing many people to rethink their use of this incredibly useful and versatile facility.

At the corporate level, companies are suffering 'denial of service' attacks (when web sites are overwhelmed by traffic from multiple locations) and 'phishing' (when web sites are mocked up to look like genuine commercial sites) plus straightforward hacking and theft. But the Net was never designed to locate the original source of data or a message, so tracking down offenders is often very difficult.

Even where there is no malevolent intent, we have problems. The Net was never intended to operate with the reliability of a public telephone network, yet increasingly it is being used for services where reliability is critical such as telephony itself, airline ticketing systems, and - much more seriously - critical infrastructures such as the electricity network or national security systems.

To understand how all this has come about, we need to start at the beginning. The Internet was first developed and used by a small number of technical and academic individuals (mostly Americans) who knew and trusted each other. Therefore the sociology of the Net is one of an open community - free for anyone to enter and free for anyone to do anything.

These early pioneers had few thoughts about security or commerciality and never envisaged that some users would set out deliberately to frustrate and undermine the experience of others. Therefore the open architecture of the Net makes it a wild frontier for spammers, scammers and hackers, for copyright abusers, for child pornographers, and many more.

The whole philosophy of the Net is to enable data packets to get through by one route or another. The network does not know, still less care, about the content or the importance or even the source of the data.

Vint Cerf - often described as 'the Father of the Internet' - has been reported as commenting: "I think we're still in the Stone Age when it comes to serious networking".

Of course, technically it would be perfectly possible to bring about a fundamental re-engineering of the architecture of the Internet. After all, companies regularly do this with their private communications networks and telecommunications companies periodically do this with their public networks (for instance, the current move to Internet Protocol networks, such as BT's 21st Century Network).

However, there are two huge problems here.

First, nobody owns or manages or regulates the Internet. So, even when one has a new, improved feature - such as IP version 6 which is a decade old - there is no means to enforce or require global implementation.

Second, the companies that invest in the physical infrastructure of the Net are not generally the ones that make money from the Net. The latter tend to be service companies like eBay or Amazon. So there is little incentive for infrastructure companies to make large-scale, high-risk investments.

As a result, there have been no fundamental changes to the core technologies of the Internet in a decade or more. So, how are we going to get out of this mess? Well, that will have to be the subject of another article.


Over the Christmas/New Year break, we were all horrified by the tsunamis in south-east Asia. Our Internet columnist Roger Darlington examines the role of the Net in responding to such an unprecedented disaster.

THE WAVE AND THE WEB

The Boxing Day earthquake off Indonesia and the resultant tsunamis across the Indian Ocean resulted in the greatest humanitarian disaster of our lifetimes. The Internet is the most dramatic technological development of our lifetimes. How has one responded to the other?

There are three answers to this question.

In the hours between the shift in the tectonic plates and the slamming of the tsunamis into coastal communities in 11 countries, there was a total failure to use the technology at our command. In the days and weeks following the catastrophe, the Net provided an extraordinary means of communicating the true scale of the horror and of mobilising unprecedented funding and resources to the stricken communities. In the months and years ahead, we will have to see whether the Net can be used to maintain a focus on what is happening and what is needed.

The tsunamis took several hours to reach most of the shorelines where they caused the damage, yet no one was warned that the waves were coming. Two days after the event, I wrote on my weblog [click here]: "I understand that a sensory system as sophisticated as the one operating in the Pacific Ocean is expensive, but why were there not phone calls and e-mails to the local and national officials of the countries about to be hit and why were radio and television warnings not issued?"

I am still amazed and angered that the relevant authorities did not use all the means of modern communications, including the Net, to warn the threatened communities.

In the first few hours and days after the tsunamis hit, the traditional media could not keep up with the pace of developments as the death toll rose hourly and the true nature of the havoc became more apparent. Only news web sites like the BBC could begin to track what was happening [click here].

Within days, bloggers explained to web users how the Richter scale is constructed [click here] and how a tsunami is created [click here]. Then people who had witnessed the events posted dramatic photographs and video clips of the waves coming inland and the destruction that was caused. In the most remote or damaged regions, Net access was not possible, but observers used mobile phones and SMS messages to send information to friends who then put the news on web sites or weblogs.

The newly created South-East Asia Earthquake and Tsunami Blog quickly became an enormous source of information and news, with sections to post offers of help and requests for help in each of the affected countries [click here]. The Wikipedia site immediately set up a special section to provide detailed information on all aspects of the tsunamis and their aftermath [click here].

In a matter of days, hundreds of others created dedicated spaces on the web. For instance, LabourStart - which reports trade union news around the world - opened a special section to report trade union responses to the disaster [click here].

As the scale of the disaster became clearer, people everywhere wanted to help in some way and the most obvious was to donate money. The British public responded with unparalleled generosity as they contributed to the Disaster Emergency Committee [click here] and other charities, a process made easier by the opportunity to donate on-line. At its peak, the DEC was taking around £1 million an hour, helped considerably by its web site.

All this led the "Guardian" in a leader to refer to the Internet as "an angel of deliverance" [click here].

Then, a couple of days later, there came the news that a hoaxer in Lincolnshire had been charged with e-mailing the relatives of British people lost in the tsunamis to announce that they were dead [click here]. Also, around the world, there were a number of Internet scams seeking to exploit people's willingness to contribute money to the tsunami victims [click here].

What of the future? We know, from sad experience, that the traditional media will soon move its attention and coverage away from south-east Asia to the latest media 'hot spot'. It will be left to web sites and weblogs to maintain the attention on and the pressure for reconstruction efforts that will take years to restore lives, communities and economies.

In short: the events of the last few weeks have underlined that the Net is now so much a part of our lives that its use, abuse and non-use will be a feature of every human activity and event.

Links:
British Foreign & Commonwealth Office click here
British Department for International Development click here
United Nations Reliefweb click here
Telecoms Sans Frontieres click here
Blogs from Thailand click here
Blogs from Malaysia click here


Something momentous is happening to television. Our Internet columnist Roger Darlington warns that it's coming soon to an aerial near you.

THE BIG SWITCHOVER

Since its launch in Britain in 1998, digital television has grown faster than almost any other electronic household good or service and currently some 56% of households obtain digital TV through satellite, cable or terrestrial means.

There are real benefits from going digital: better signal quality, a much broader range of programming, new interactive features, and - possibly - a way of accessing the Internet without buying an expensive PC.

However, not everyone lives in a part of the country that is cabled, not everyone wants to subscribe to satellite, and Freeview - the digital terrestrial service backed by the BBC - can only be received by 73% of homes for technical reasons.

The only way to make digital terrestrial available to virtually all homes in the country is to boost the signal; this can only be done if more spectrum is made available; and more spectrum can only be released by switch off of the existing analogue signal.

Once analogue is switched off, not only does digital TV become accessible to all, but also the new spectrum that becomes available could generate significant income for the Treasury and stimulate the development of new communications services.

So, in September 1999, the Government announced that, at some point, the whole country would go digital in a process characterised - depending on your point of view - as switch off (of analogue), switch on (of digital) or (more neutrally) switchover.

In fact, it will not happen all at once. For complex technical reasons, the process will be carried out television region by television region.

The likely order (but not the timing) has already been announced. The indicative switchover order - subject to final Government decision - is as follows:

It will be a massive and immensely complicated operation. We currently have 1,154 transmitters in a variety of locations - all of which will need to be converted.

There are about 25 million television households which contain around 75 million television sets - set-top boxes will need to be installed, some aerials will need to be adjusted, and viewers will need to master a new type of remote control and an electronic programme guide (EPG).

Not everyone wants to go digital - they are content with their existing range of programmes and do not want the expense and trouble of switching just to access (as they would see it) a lot more of the same stale material.

Then there are particular groups who will be especially vulnerable in this exercise and the Ofcom Consumer Panel - of which I am a member - has produced a special report for the Secretary of State Tessa Jowell on how best to support these most vulnerable consumers. Our proposals would cost between £250M and £400M.

But the so-called 'refuseniks' and the vulnerable are not going to have a choice. Switchover is going to happen and the Government will have to sell the case and massage the political sensitivities.

In a practical sense, the whole exercise will be managed by a cross-industry group called SwitchCo, headed by broadcaster Barry Cox.

Berlin has already gone digital, but it is only a city, not an entire country. Some countries - like Italy and Spain - have already announced a deadline for conversion, but in reality they are far behind the UK. We are likely to be the first country to make the switchover.

So, when will it happen? The Government has not yet decided but, once the date is determined, it will take around two years to plan and another four years to execute. A good guess is that the first regions will convert in 2008 and the whole process will be complete by 2012.

A lot of organisations - notably electronics manufacturers - are pushing the Government to announce a firm timetable for switchover. They point out that currently consumers are still buying analogue televisions at twice the rate they are purchasing the set-top boxes to make them digital.

This will not change much until consumers believe that switchover will actually take place and know when it will occur. At that point, a massive public awareness programme will need to be launched. But all this will not happen until we get the General Election out of the way.

Links:
DTI/DCMS web site on Digital Television Project click here
"Driving Digital Switchover" - a report from Ofcom to the Secretary of State (April 2004) click here
"Persuasion Or Complusion? Consumers And Analogue Switchoff" - a report from the Consumer Expert Group to the Broadcasting Minister (October 2004) click here
"DTI/DCMS Cost/Benefit Analysis Of Digital Switchover" (February 2005) click here


As the nation awaits a general election, our Internet correspondent Roger Darlington looks at the new electronic relationship between government and citizen.

E-GOVERNMENT RULES, OK?

A quiet revolution is taking place in government departments and council offices around the country: one service after another is going on-line. It has taken five years and several billion pounds, but the intention is that by the end of the year all national and local government services will be available electronically.

In practice, 96% of central government services and 98% of the services offered by our 468 councils should meet the target. The aim is to improve services and save costs.

One example of service improvement comes from East Riding council in Yorkshire [click here] which has cut the time taken to process assessments for home care from 24 days to one day. An indication of the financial benefits that might be expected is found in the London borough of Westminster [click here] which expects net savings of £2.86 million in 2007/08.

The first local authority to claim to have all its services e-enabled was the borough of Tameside in Greater Manchester [click here] which hit the target two years ago. Another pioneer was Bracknell Forest in Berkshire [click here] whose leader attended a recent ntl-sponsored breakfast seminar on e-government that I co-chaired.

In preparation for this event, I checked out the web site of my own council which is the London Borough of Brent [click here]. I was impressed: it is available in four non-English languages, it is speech enabled, 100 forms are available on-line, 50 services can be accessed on-line, and there are links to 2,500 other useful sites.

However, e-government has the potential either to confuse or to simplify. On the one hand, the number of e-government sites is now enormous - about 4,000 in the UK. On the other hand, one of the other benefits of e-government is that it enables government to be more 'joined up'.

For instance, all five councils in Dorset are taking down their individual web sites and are now using one county-wide site [click here]. London has 32 boroughs and many other bodies responsible for public services, but soon we can look forward to a portal from London Connects which pulls all these together.

Even more useful is a portal run by the Cabinet Office's E-government Unit which is designed to be a first port of call to all levels of government [click here].

Credit should be given to our government for driving this process. In the latest United Nations survey of e-government [click here], the UK was beaten only by the USA and Denmark in a league table of 143 countries' "e-government readiness".

A hint of where we might go comes from the experience of Fairfax County in Virginia, USA [click here]. This is reckoned to be the most e-enabled authority in the United States and it receives over a million visitors a month. It provides a brilliant range of services from on-line payment of local taxes to its own local television station. There are even sections for parents, teens and kids.

However, we are only at the start of putting government services on-line. The challenges for the future are developing new services and new ways of relating with citizens and enabling and encouraging citizens to make full use of these services and propose new ones. An ntl-sponsored survey has found low awareness of such services.

E-government is very difficult to promote with citizens because most people's interaction with government is infrequent. Therefore they need to be offered incentives: quicker service, cheaper service, new services, and relevant and up-to-date information and advice.

The ideal would be the kind of personalised service that Amazon offers to the site's established customers so that, following a registration process, citizens could be directed to those services or that information most relevant to their needs and circumstances.

Also currently the e-government agenda is all about delivery of services. In the future, it needs to encompass empowerment and participation, so that citizens have more interaction with their elected representatives, more involvement in local issues, and more control over local government.

Of course, all this is irrelevant if one does not have Internet access. Over 40% of UK homes still do not have even a narrowband connection and those citizens who would benefit most from e-government - especially poorer and older citizens - are precisely those groups least likely to be connected. So e-government makes the digital divide an even more relevant and urgent issue.


Increasingly you need help navigating your way around the growing choice of TV channels, as our Internet columnist Roger Darlington explains.

THE HITCHHIKER'S GUIDE TO TELEVISION

Many readers will remember when you could count the number of available television channels on one hand. However, these days more than half of us have digital television - whether delivered terrestrially or by cable or satellite - so we have access to many hundreds of channels.

As Ronald Reagan might have put it: "You ain't seen nothing yet". The arrival of high-capacity Personal Video Recorders (PVRs) and Internet Protocol Television (IPTV) means that both off-line and on-line you will have access to an unbelievable volume and range of television and other video material.

How will you cope? Never fear. The electronic programme guide (EPG) is already here and it is set to become of increasing importance in how we access visual material.

Over half of homes in the UK already make use of an EPG because it comes with their access to digital television. Most users think that the EPG they know is the only one and have no idea of how it is likely to develop.

The most familiar EPG is that supplied by Sky [click here] to its satellite television subscribers, but cable operators Telewest [click here] and ntl [click here] have their own EPGs, the video on demand (VoD) company HomeChoice [click here] has its own, and there are already several others on the market especially for those using digital terrestrial set-top boxes (such as those made by Thomson and NetGem [click here]).

In the cut-throat competitive world of television, all channels want to be displayed on all guides and they want to be in a position and prominence that encourages viewer access to their programming. Ofcom regulates the way in which channels are displayed on all these broadcast EPGs, but it does not have the power to regulate on-line guides (such as DigiGuide [click here]).

In practice, all the guides place the five public service broadcasting channels - BBC1, BBC2, ITV1, Channel 4 (S4C in Wales) and Five - in the first five slots. Usually other channels are organised thematically, so that for instance all the film, sports or music channels tend to be together, but this arrangement does not necessarily suit all viewers.

The functionality of EPGs and the range of programme information to which they provide access will both increase. Indeed, instead of information simply on the current and next programme, many EPGs already offer schedules for the next week or fortnight.

Instead of simply a sentence or two describing content, you will have an indication of the frequency and nature of strong language, violence or sexual imagery to assist your control of the viewing of your children. This process will be assisted by the increasing use of meta tags on programmes, so that parents can more easily block access by their children to programmes which they judge unsuitable.

Increasingly we will see a 'personalisation' of television access. Viewers will modify their EPG to arrange the channels in the order they want (a bit like 'favourites' or 'bookmarks' in the Web world), and the PVR will learn the series we watch and the programme types we like, automatically recording this material without us having to set each individual programme.

As the technologies develop, the EPG and the PVR will become increasingly useful to those with hearing and/or visual impairments.

So far, we have been talking about conventional television programmes accessed over a conventional television. However, digital terrestrial television (DTT) tuners are now being built into personal computers (PCs), turning the PC into both a television and a PVR.

When you can access programmes and video clips from around the world and record and store months of viewing, you will need to be able to search the Web and your hard drive for the material that you want to view, in the same way that you now search for sites on the Web and documents on your PC. The EPG will then be a searcher as well as a guide.

Of course, these technical developments will totally transform the world of broadcasting. So-called linear viewing and the notion of the 9 pm watershed will become almost irrelevant, as viewers can watch what they want when they want.

The strong regulation of broadcasting will collide brutally with the non-regulation of the Internet, as regulators and parents seek to enforce the taste and decency standards with which they are so familiar on television to the visual material streaming over the Net.

Links:
Ofcom consultation on the regulation of EPGs click here
Ofcom Code of Practice on EPGs click here
Blinkx TV video search click here


The Net provides the most accessible medium in the history of humankind for both consumers and creators. Our Internet columnist Roger Darlington explains the implications.

THE NEW WORLD OF SOCIAL MEDIA

Before Johannes Gutenberg invented the printing press in 1438, there were only about 30,000 books throughout the whole of Europe, nearly all Bibles or biblical commentary. By 1500, there were more than 9 million books on all sorts of topics.

Five centuries later, the world is undergoing a digital media revolution every bit - sorry about the pun - as profound. This is because we now have technologies and networks that allow us easily and cheaply to access vast volumes of information and, just as significantly, to create our own material for our own audiences.

This latter phenomenon has been dubbed 'social media'. This social media - as opposed to conventional media - is the use of digital technology and digital networks to enable consumers to create their own media content and experiences.

Before the advent of the Internet, it was not easy to publish a book or a pamphlet or even to have a letter published in a newspaper. Furthermore editors and publishers shaped the material and how it was presented.

In the realm of social media, the user becomes the contributor, the editor and the publisher combined. A member of the digerati called Doc Searls has put it this way: "Social media is an example of the demand-side supplying itself".

Examples of social media include:

This is just the start. Already it is possible for anyone with minimal resources to open a radio station on the Net (see, for instance, Radio LabourStart [click here]). Soon one will be able to set up one's own web-based television channel (see, for example, Narrowstep [click here]).

Of course, as with any new social development, there are some problems. First, one of the benefits of conventional media is that the editing and publishing processes ensure a degree of reliability as to the information in the newspaper or book. On the Net anyone can publish anything, however spurious or unsubstantiated. So, when accessing the Net, we need to be media literate and use critical thinking skills.

Second, there are important privacy issues. Someone may be happy to debate politics at a dinner party, but they may not want their views subsequently reported to the world via a blog. A parent may be very proud of his new baby and regularly post news reports and photos, but that child may be less than thrilled in years to come to find intimate details of their life available to everyone. So we need to exercise discretion.

Third, there can be copyright issues. A lot of social media involves reworking other people's material - commenting on, quoting from, parodying or remixing text, pictures and sound. But maybe McDonald's does not want its golden arches to be associated with Islamic fundamentalism. So be careful.

Fourth and most importantly, there is a real risk of harm - whether it is a false allegation that a teacher has abused a child or that a footballer has taken bribes, or a web site or newsgroup that targets staff who work in abortion clinics or research establishments experimenting on animals, or that promotes anorexia or even suicide. Especially now that broadcasting and the Internet are converging, we need a debate about what controls on Internet content are necessary or possible.

One thing is for sure: in this age of social media, we inhabit a whole new world - more open, more democratic, more confessional - that provides power but also confers responsibility.


So far the communications revolution has been provided over networks that were essentially designed for plain old telephone service - but that is set to change big time, as our Internet columnist Roger Darlington explains.

NEXT GENERATION NETWORKS

For more than a century, the public switched telephone network (PSTN) has used circuit switching. The alternative is packet switching and traditionally this has been used for private data networks connecting computers and for the public Internet. In the late 1990s, worldwide the volume of data traffic overtook that of voice and increasingly carriers are now looking to integrate all their services onto what are called next generation networks.

There is no agreed definition of next generation networks but, at the heart of the concept, there is the integration of existing separate voice and data networks into a much simpler and more flexible network using packet switching and IP protocols. This will enable voice, text and visual messages to be carried on the same network and for each type of message to be responded to in any of these formats on that network.
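To see what 'voice becomes just another kind of data' means in practice, consider this toy Python sketch. On a packet network, a second of telephone-quality speech is simply a stream of small datagrams which the network forwards without knowing or caring that they carry a voice. The address, port and packet sizes are invented for illustration.

    # Send one second of 'voice' as a series of small UDP datagrams.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP: no circuit is set up
    voice_second = bytes(8000)                  # 1 s of 8 kHz, 8-bit audio (silence here)
    for i in range(0, len(voice_second), 160):  # 160 bytes = 20 ms of speech per packet
        chunk = voice_second[i:i + 160]
        sock.sendto(chunk, ("192.0.2.1", 5004))  # documentation-only address

Each 20 ms chunk is routed independently; it is the terminals at each end, not the network, that reassemble the packets into speech.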

This is sometimes characterised as a move from the existing model of a smart network and dumb terminals to a new model of a dumb network and smart terminals. In fact, the NGN will be far from dumb, but it is true that, compared to the existing PSTN, there will be much more intelligence in terminals.

BT calls its next generation network the 21st Century Network (21CN) [click here] and, in terms of scale and speed, it is a world leader in the roll-out of such a new network. It will integrate onto one network what are currently 16 separate network platforms each supporting different services. As well as substantially reducing operating costs and improving quality of service, BT's 21CN will allow future products and services to be built much quicker and - since open standards are involved - by new as well as traditional players.

Announced in summer 2004, the 21CN project is envisaged as a five-year programme. Mass migration of customers onto the new network will start in 2006, with the majority of customers expected to be on the network in 2008.

If Britain is a world leader in developing a new network at the core, it is lagging seriously in developments in the local loop known as next generation access.

Through a variety of digital subscriber line (DSL) technologies - notably ADSL [click here] - we are squeezing more and more capacity out of those 34 million traditional copper pairs. BT's standard broadband speed is now rising to 2 megabits per second (Mbps), while companies like UKOnline and Bulldog are offering 8 Mbps. ADSL2+ [click here] offers the prospect of 24 Mbps.
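To put those headline speeds into perspective, a little arithmetic helps: line speeds are quoted in megabits, a byte is 8 bits, and real-world throughput is always somewhat lower than the headline figure. The illustrative Python sketch below estimates how long a CD-sized 700 megabyte file would take to download.

    # Rough download times for a 700 MB file at various line speeds.
    file_megabits = 700 * 8                  # 700 MB expressed in megabits
    for mbps in (0.5, 2, 8, 24):             # basic, standard, fast, ADSL2+
        minutes = file_megabits / mbps / 60
        print(f"{mbps:>4} Mbps -> about {minutes:.0f} minutes")

At 2 Mbps that is roughly 47 minutes; at 24 Mbps, under 4 minutes.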

However, in other countries, they are deploying optical fibre in the local loop and supplying broadband speeds that make the UK offerings look pedestrian. Different countries are using different options, such as fibre to the curb (FTTC), fibre to the building (FTTB), and fibre to the home (FTTH) [click here].

For instance, in Japan at the end of 2003, new FTTH connections overtook every other type of new connection and over 3M customers now have a 24 Mbps service. South Korea makes substantial use of FTTB and is aiming to supply 100 Mbps to 5M customers by 2007.

Even Italy, in the form of FastWeb (a venture between the Milan gas & electricity company and a group of private investors) [click here], has approaching 200,000 fibre customers. The British communications regulator Ofcom is currently considering the regulatory options for next generation access, but meanwhile there is minimal investment in fibre and our industrial competitors are racing ahead.

The other technology which will increasingly transform local access to the Net and other services is radio.

Wi-Fi (Wireless Fidelity) [click here] provides wireless connectivity in an office or a home for computers (or other devices) within around 150-300 feet of a base station.

WiMax (World Inter-operability for Microwave Access) [click here] is a wireless metropolitan area network (MAN) technology with a bandwidth of around 75 megabits per second across a distance of about 30 miles.

Altogether these new networks and new technologies will ensure that, in a matter of years, you will be able to access the Net and other services from virtually anywhere at speeds that will make your scalp tingle.


When you connect to the Net, you invite a weird array of cyber-strangers into your virtual home. Our Internet columnist Roger Darlington looks at what can be done.

THE BARBARIANS ARE AT THE GATE

Too many people think that the personal computer is a 'plug and play' device like a radio or television or game station. It is not. When the PC is connected to the Internet, it is open to possible access by anyone of the one billion Net users worldwide and some of them are evil characters.

The threats include spam, scams, viruses, worms, phishing, identity fraud, distributed denial of services attacks, botnets or zombie PCs, and a range of spyware. These threats are sometimes collectively called malware which is short for malicious software.

One of the first Internet columns I wrote for Connect was on the problem of spam [click here]. Then (April 2003), it was estimated that spam accounted for almost 10% of all e-mail in the UK.

About two-thirds of all e-mail is now spam. In the UK, it accounted for 12.4 billion items last year. At a conference I attended recently, Richard Cox, Chief Information Officer at Spamhaus [click here], declared: "We are at war".

Now that the United States has taken more action, the UK is seen as a good location by spammers. Our legislation and its enforcement are regarded as weak.

For consumers, spam is exceedingly annoying and a major source of scams. Many consumers still know little about what to do about spam. They tend to be either paranoid or complacent about the security of their PC.

For industry, spam is not simply a productivity issue, but more seriously a security issue. Spam is the vehicle for all sorts of malware including viruses and Trojans.

One line of attack is legislation. Some tightening of the Computer Misuse Act is necessary and penalties need to be heavier, but generally legislation takes too long to enact, it is always behind developments, and penalties are low compared to the benefits to the spammer.

Another, more useful, approach is user education and awareness.

The Internet Watch Foundation - which I chair - has a useful FAQ section for general user issues.

The site ITsafe [click here] aims to provide both home users and small businesses with advice in plain English on protecting computers, mobile phones and other devices from malicious attack. Soon, a range of partners will launch an ambitious campaign called Get Safe Online [click here].

By setting up an information-rich web site and resources, Get Safe Online aims to become the UK's recognised source on online security and protection. This will be backed up by a high profile awareness campaign to communicate the Get Safe Online message to the citizens of the UK.

However, if legislation is a blunt tool and consumers are struggling to understand what needs to be done, ultimately the most effective line of attack is technical.

At the conference I attended, Mark Sunner, Chief Technology Officer at MessageLabs [click here], argued that the further back in the system one deals with spam the better, because the volumes are greater and detection is easier; ISPs could then use the sort of blocking mechanisms that many companies use.

One approach might be the development of the network computer, so that all the blocking can be done and the protection provided by the network operator as is the case in the corporate sector.

More fundamentally, the massive vulnerabilities of Microsoft software, deployed by the overwhelming majority of PC users, must be addressed and corrected by the company, or there will be a shift to the open source operating system Linux and open source browsers like Mozilla Firefox (which I use).

The architecture of the Internet itself needs to be considered by bodies like the Internet Engineering Task Force (IETF) [click here]. The Net was simply not designed for its present uses and it is altogether just too vulnerable to attack and even to failure.

If we do not collectively solve the problem of malware, two things will happen.

Those who are already on the Net - especially corporates - will increasingly find ways to avoid the public Internet by using private networks, virtual networks, or networks overlaying the general Net. This will be expensive and inconvenient.

Those who are not yet on the Net will be frightened to do so. Remember that 30% of Americans and 40% of Britons are still not connected at home.


Thought that the digital divide was dissolving? Think again, urges our Internet columnist Roger Darlington.

THE DEEPENING OF THE DIGITAL DIVIDE

In truth, there are many digital divides – between those who have no Internet connection at all, those with dial-up narrowband, those with always-on broadband, those with faster than ‘basic’ broadband (512 kbps), those with the confidence to conduct targeted searches and economic transactions, those with the interest and ability to create content such as web sites or weblogs.

Of course, the most important divide at this stage is between those with home access to the Internet and those without. This is an issue that has slipped down the political agenda because politicians have tended to concentrate on broadband access rather than overall take-up.

Most of the movement in the market is narrowband users up-grading to broadband and broadband users up-grading to faster speeds. The overall number of Net-connected homes in the UK is only edging up slowly and is still only around 60-65%.

This means that very roughly we have a situation in which a third of homes have narrowband, a third have broadband, and a third have ‘noband’.

Why does this matter?

First, because to gain easy access to all local and national government services, to be able to use vast, worldwide sources of information, and to benefit from cheaper goods and services, one has to be on-line. In the information society, it is simply not possible to be an empowered consumer or citizen without having access to the Net.

Second, because the sections of the community that are least likely to be connected – the older and the poorer – are precisely those people who use government services most and need access to cheaper products and services. To leave a third of homes off-line is to increase social isolation and poverty and to diminish our society.

It is partly an issue of cost and complexity. Prices of computers are now falling, but PCs still need to be cheaper and simpler.

It is partly an issue of confidence. People need more support in combating threats like spam and scams and so an initiative like ‘Get Safe Online’ is really welcome.

Above all, though, it is an issue of support. BT, in partnership with others, is developing ideas such as Internet Rangers [click here], the Circuit Rider model and the Everybody Online project [click here].

In fact, there are many successful but small-scale training schemes run by organisations like Citizens Online [click here] and SustainIT [click here], but they need financial and logistical backing if we are to achieve scalability.

In my view, all this too will not be enough. Many new Net users need practical and sustained help – in their own homes not just in training centres – to set up the PC, connect to the Net, and learn how to surf, search and shop.

A year and a half ago, I highlighted the problem of the digital divide to Connect members in my June 2004 column that was titled: “Has Net growth stalled?” [click here]. In that column, I proposed something I called NetAid.

This would be a locally-driven, volunteer-based initiative to pair up young, Net-savvy enthusiasts with those – especially older and poorer citizens – who want to connect to the Net for the first time.

Local schools and colleges plus church, business and community groups could link with organisations with real experience of volunteering on the ground to match those who are familiar with and enthusiastic about the technology with those who know nothing but want to learn.

Assistance would be provided in the home both in the initial set-up process and in regular follow-up sessions as problems were addressed and confidence was created. Once the new user was sufficiently proficient, further support could be on-line or over the phone.

Although NetAid would be very much a local affair, it would benefit from some national branding and promotion, with a web site and off-line resources like leaflets in local libraries and citizens’ advice bureaux. Public funding for trials and pump-priming would be very helpful too.

Since I first floated this idea, I have spoken to volunteer organisations like Community Service Volunteers [click here] and Timebank [click here] as well as Help The Aged [click here] and I am hopeful that something will eventually get off the ground.

It cannot come too soon, for it could start to break down the basic digital divide in a cheap and cheerful way and give us valuable experience for the similar exercise of digital switchover from 2008-2012.

Link:
Alliance For Digital Inclusion click here


Feeling overwhelmed by the digital deluge? Well, it's going to get much worse but help is at hand, explains our Internet columnist Roger Darlington.

THE PERSONALISATION OF NEW MEDIA

Most of us already receive far too much e-mail and most of it is spam. But a third of UK homes are not even on the Net and worldwide five-sixths of the population is not on-line. More contacts for you – and for the spammers. So the volume of e-mail will continue to explode.

The web already seems gargantuan. It is – around 75 million web sites. But literally every day tens of thousands of new web sites, new weblogs, and new message boards come online and content is often up-dated almost minute by minute.

Two-thirds of UK homes already have access to digital television with a choice of hundreds of TV channels. An increasing number of us have a digital radio giving us access to many dozens of radio stations. But, as broadcasting and the Net converge, you will be able to access over your broadband connection literally thousands and thousands of TV and radio channels worldwide, not to mention a vast volume of video and audio files.

How can you possibly manage this electronic blizzard? In a word, the answer is 'personalisation'.

This means that you, your electronic communities, and the organisations with which you interact will shape what and when and how you access digital media. There are three main ways this will happen.

First, you decide what you will access.

This will involve 'pull' devices, like a clear and comprehensive organisation of 'favourites' or 'bookmarks' for the web sites you find most useful or appealing, and cleverer use of ever more sophisticated search engines. It will involve 'push' techniques, like the use of RSS (Really Simple Syndication) feeds from your favourite news web sites and weblogs, so that you know whenever a new story is posted.
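
For the technically curious, the 'push' of RSS is really a program polling feeds on your behalf. Here is a minimal sketch in Python, assuming the third-party feedparser library; the feed address and the polling arrangement are purely illustrative.

    import feedparser  # third-party library: pip install feedparser

    FEEDS = [
        "http://example.com/news/rss.xml",  # illustrative feed address only
    ]

    seen = set()  # links already reported to the user

    def check_feeds():
        """Fetch each feed and report any stories not seen before."""
        for url in FEEDS:
            feed = feedparser.parse(url)
            for entry in feed.entries:
                if entry.link not in seen:
                    seen.add(entry.link)
                    print("New story:", entry.title, "-", entry.link)

    check_feeds()  # a feed reader would run this every half hour or so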

Using a personal video recorder (PVR) such as Sky+ [click here] will enable your favourite TV programmes to be recorded automatically and, in the future, even to suggest new programmes you might like to watch. Even your e-mail can be managed, going beyond spam filters that 'learn' what is really spam to new systems (such as SNARF from Microsoft [click here]) that rate 'proper' e-mail according to the importance you are likely to attach to it.
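
As an illustration of how a filter can 'learn', here is a minimal sketch of word-frequency scoring – a simplified cousin of the statistical techniques real filters use, not any product's actual method; the training messages are invented.

    from collections import Counter

    spam_words = Counter()  # words seen in messages the user marked as spam
    ham_words = Counter()   # words seen in legitimate mail

    def train(text, is_spam):
        """Learn from a message the user has classified by hand."""
        (spam_words if is_spam else ham_words).update(text.lower().split())

    def spam_score(text):
        """Average, per word, how spam-like the message looks (0 to 1)."""
        words = text.lower().split()
        score = 0.0
        for word in words:
            s = spam_words[word] + 1  # add-one smoothing so that words
            h = ham_words[word] + 1   # never seen before stay neutral
            score += s / (s + h)
        return score / max(len(words), 1)

    train("win a free prize now", True)
    train("agenda for monday's meeting", False)
    print(spam_score("free prize inside"))  # nearer 1 suggests spam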

Second, your e-friends and like-minded Net users can guide what you access.

You can use what are increasingly called 'social networks' to keep in touch with old school friends or people who share the same interests and be guided by them as to information that will be useful or sites that should be visited. Indeed this can become a very active process by, for instance, labelling your photographs on a site like Flickr [click here] or tagging a web site using the del.icio.us system [click here].
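
Under the hood, tagging is a very simple data structure: an index from each free-form label to every item that carries it. Here is a minimal sketch of that shared-tagging idea (the URLs and tags are invented, and this is not how any particular site is actually built).

    from collections import defaultdict

    tag_index = defaultdict(set)  # tag -> set of items carrying that tag

    def add_tags(item, tags):
        """Record a user's free-form tags against an item."""
        for tag in tags:
            tag_index[tag.lower()].add(item)

    add_tags("http://example.com/broadband-guide", ["broadband", "uk"])
    add_tags("http://example.com/wifi-howto", ["wireless", "broadband"])

    # Everything anyone has tagged 'broadband', however they found it:
    print(sorted(tag_index["broadband"]))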

Third, organisations will highlight content for you.

Passive techniques like the use of 'cookies' and active input of data by you will enable organisations with whom you deal to know who you are and what you are likely to be most interested in. The Amazon site [click here] already welcomes me by name, recommends books, CDs and DVDs based on my past purchases, and gives me the option of ordering through 'one click' because the site already knows my credit card details and my delivery address.
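
Here is a minimal sketch of the cookie half of that mechanism, using Python's standard http.cookies module; the identifier and the purchase records are invented for illustration.

    from http.cookies import SimpleCookie

    # On a later visit, the browser sends back the cookie the site set
    # earlier, in the Cookie: header of the HTTP request.
    raw_header = "user_id=rd42"  # invented identifier for illustration

    cookie = SimpleCookie()
    cookie.load(raw_header)
    user_id = cookie["user_id"].value

    # The site matches that identifier against its own records ...
    purchases = {"rd42": ["The West Wing: Season 1", "The West Wing: Season 2"]}

    # ... and personalises the page for the returning visitor.
    if user_id in purchases:
        print("Welcome back!")
        print("You bought:", ", ".join(purchases[user_id]))
        print("You might like: The West Wing: Season 3")  # naive suggestion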

This kind of relationship allows organisations to target promotions and products in the knowledge that you are probably going to be quite interested in them. Amazon knows that I will purchase each new DVD set of “The West Wing”.

These kinds of techniques need not be confined to relationships with commercial organisations.

Imagine if your trade union organisation could use your personal and professional profile to personalise your use of the web site so that it automatically displayed the latest information which is most relevant to your workplace or work or gender. Imagine if your local authority web site could display immediately the rubbish collection times or planning applications for your street or your entitlement to certain benefits or services.

The advantages of these various techniques are obvious: they enable us to combat information overload and to make the most effective use of our time and attention.

However, there are some problems with such personalisation techniques.

There will be concerns about privacy in relation to personal data, but the evidence is that consumers are willing to strike a bargain with companies: some extra information in return for better content or service.

Also there is a danger that we will only engage electronically with like-minded people, we will only seek information and opinions which reinforce our prejudices, and we will lose the serendipity of discovering a new interest or a new insight.

The answer is to combine several techniques: use personalisation techniques to manage e-mail and speed up use of e-commerce sites; use trusted sources of general information such as (in my case) the BBC [click here] and “Guardian” [click here] web sites and selected blogs; and regularly open ourselves to new material by reading a quality daily newspaper and some good magazines and deliberately surfing for new radio stations, television channels and web sites.

It's in your hands ...


Thought that digital games were just for young geeks having fun? Games are bigger and more serious than you ever imagined, writes our Internet columnist Roger Darlington.

THE SERIOUS WORLD OF GAMING

Video games are big, big business. Worldwide the market is worth $25 billion (£14.5 billion). We are now on the sixth round of the games console war, with Microsoft's new Xbox 360 [click here], Sony's Playstation 3 [click here] and Nintendo's Revolution representing the state of the art.

In the UK alone, the Entertainment and Leisure Software Publishers Association has announced record sales in 2005 of interactive entertainment software across all formats totalling £1.35 billion.

This is a world with a language of its own and a plethora of acronyms such as role playing game (RPG), first person shooter (FPS), real time strategy (RTS) and massively multiplayer online role playing game (MMORPG).

There are lots of blogs devoted to video games such as the gamesblog [click here] on the “Guardian” site and ludology.org [click here] which is run by Gonzalo Frasca whom I recently met at a conference in Hungary. You can even do a degree in computer games technology at places like the University of Abertay Dundee [click here].

If you thought that most games were not for you, you are probably right. Just before Christmas, Keith Stuart, in a “Guardian” column called “Gamesblog”, wrote: “The festive chart is a nightmarish, gut-splattered collision of zombies, street yobs, robotic assassins and killer cowboys”.

More recently, Aleks Krotoski observed in the same column: “To date, few games have adequately captured the poignancy and the turmoil of real-life emotions. Sorrow, joy and anger are notably absent from interactive experiences”.

However, some really interesting things are happening to the gaming market.

First, according to a study released by Nielsen Entertainment’s Interactive Group, the gamer demographic in the USA is expanding beyond its core young male audience to include more women and older adults, and video games in general are becoming far more pervasive as the medium approaches mass-market status.

Not all gamers are spotty teenage boys. The study found that nearly 40% of American gamers are female (there is a site called womengamers.com [click here]) and nearly a quarter of gamers are over the age of 40 (another site is called theoldergamers.com [click here]).

In the UK, in the 16-35 age group, 42% of men and 22% of women play games regularly.

Second, online gaming is becoming massive, especially as broadband becomes more generally available. In South Korea – broadband capital of the world – gaming on the Net is simply huge and some people play it professionally.

The most popular MMORPG is called World of Warcraft [click here], which has a user base of over 5 million people, including over 2.5M in North America and 1.5M in China. The postmodern world of Azeroth is a complex one with 60 character levels, and the game manual runs to 200 pages.

However, for a monthly subscription of £7.50, you can spend an unlimited time in a fantasy world that can easily become addictive.

Third, games are moving beyond the sphere of 'mere' entertainment.

The most cost-effective recruitment tool currently used by the US Army is a war game called America's Army [click here] which is available as a free download from a web site. Some 29 million people have taken a copy, there are over 5 million active users, and 1.34 billion missions have been played.

Games are increasingly being used as tools to provide education or instruction, to advertise products in innovative ways (so-called advergaming), or even to engage people in politics.

Here in the UK, a project undertaken by Nesta Futurelab and game-maker Electronic Arts – called Teaching with Games – conducted a poll which found that a third of teachers are using computer games in the classroom and a majority believe they improve pupils' skills and knowledge [click here].

In the world of advergaming, instant-win promotions and contests requiring some level of consumer participation are increasingly popular. The company ePrize LLC [click here] has clients including IBM, General Motors and “The New York Times”.

As examples of the political use of games, Howard Dean was the first US presidential candidate to deploy a video game on his web site [click here] and more recently the technique was used in a Uruguayan election campaign. The web site newsgaming.com [click here] uses games and simulations to analyse, debate, and comment on major international news.

In short, you may not think of yourself as a gamer, but gaming is going to become a mainstream feature of the digital world – if it is not already.


Our Internet columnist Roger Darlington lifts the lid on the least known part of the work of the regulator Ofcom.

THE INVISIBLE HEART OF THE COMMUNICATIONS REVOLUTION

It has no weight, shape or colour; you cannot see, feel or smell it; you cannot create it but it can be endlessly reused; it is immensely useful and hugely valuable. What can it be? It is, of course, the radio frequency spectrum.

In some respects, spectrum is like land: both are limited natural resources which input into a huge array of services; both have areas where demand exceeds supply and vice versa; both suffer from the problem of competing and incompatible uses in some areas; both have complex planning frameworks.

However, frequency has some extra complications: radio emissions spill over into neighbouring frequencies; radio emissions spread over national boundaries; and the value of spectrum is related to standardisation.

The uses of spectrum are immense. However, some 30% of frequency is taken up by defence needs. Around 28% is used for fixed, mobile and satellite communications. Aeronautical and maritime travel uses another 14%. Broadcasting takes up 13%.

Some indications of the value of spectrum are that it is estimated to contribute at least £24 billion a year to Gross Domestic Product (GDP), while the auctions for 3G mobile licences raised a total of £22.5 billion.

Considerable technological developments and the rapid growth of communications needs are making demand for spectrum greater and greater. This is especially true in the section of the radio frequency spectrum known as the “sweet spot”, which lies between 300 MHz and 3 GHz.

All spectrum use involves a trade-off between range and bandwidth: lower frequencies travel further but carry less data, while higher frequencies carry more data over shorter distances. The so-called “sweet spot” combines good range with significant bandwidth.
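
The range half of that trade-off can be seen in the standard free-space path loss formula: over a given distance, the loss grows by 20 dB for every tenfold increase in frequency. A small sketch (the distance and frequencies are chosen purely for illustration):

    import math

    def free_space_path_loss_db(distance_km, frequency_mhz):
        """Standard free-space path loss in decibels."""
        return (20 * math.log10(distance_km)
                + 20 * math.log10(frequency_mhz)
                + 32.44)

    # The same 5 km path costs ever more signal as frequency rises:
    for f_mhz in (300, 3000, 30000):
        print(f"{f_mhz:>6} MHz: {free_space_path_loss_db(5, f_mhz):.1f} dB")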

Now everyone knows that Ofcom is the UK regulator for broadcasting and telecommunications. But many forget that it also has responsibility for spectrum and that more Ofcom staff work in this area than in any other.

Traditionally spectrum has been allocated to competing uses on a command and control model with the regulator effectively making all the decisions. This is how 90% of spectrum is allocated in the UK today.

Some parts of the spectrum – around 9% at present – are licence-exempt. This means that the regulator sets some rules but there is no licensing.

A third approach is to use market mechanisms to allocate spectrum. Essentially this means that available spectrum goes to the highest bidder and that previously allocated spectrum can be traded and used for new purposes or by new players.

The market approach was advocated in a review conducted by Professor Martin Cave; it has been adopted by Ofcom and is increasingly governing its decisions on spectrum; and it has the support of the Government, with the Chancellor of the Exchequer mentioning in the Budget that sale of spectrum is intended to raise billions for the Treasury.

At first sight, market mechanisms seem a sensible way of deciding between competing demands for a scarce resource like spectrum. However, some of the complications soon become apparent when one considers what is often called the digital dividend.

This is the allocation of the spectrum that will be released when the whole of the UK has switched from analogue to digital broadcasting – a process that will take place from 2008-2012.

When digital switchover has occurred, some 14 UHF channels in Bands IV and V will become available. How should these be used?

This is what, in Ofcom jargon, is known as the Digital Dividend Review (DDR). This is already under way with Ofcom currently analysing all the technical, economic and policy issues.

The regulator will publish a consultation document in late 2006 and make a policy statement in spring 2007.

In the meantime, some will argue that the available spectrum should be allocated to mobile broadcasting. Current 2.5G and 3G services can handle video clips through present cellular systems but, if users want to watch programmes or films on their mobiles, a broadcast method of delivery will be necessary and spectrum will need to be given to that.

Others will put the case for the spectrum from switchover to go to the provision of high definition television (HDTV). Cable and satellite operators can provide HDTV without too much problem but, if we want HDTV to be available throughout the country in terrestrial form (such as Freeview), then additional channels will be needed.

So we see that spectrum is not such an arcane issue after all.

Links:
Wikipedia guide to spectrum click here
Ofcom's digital dividend review click here


A lot of media comment on the Internet seems to assume that almost everyone is now connected – but is this true? Our Internet columnist Roger Darlington looks at the demographics of the UK Net population.

WHO IS CONNECTED TO THE NET?

In the summer of 2005, the communications regulator Ofcom commissioned a massive programme of research as the first-ever large-scale media literacy audit of the UK. Over 3,200 adults (aged 16+) and over 1,500 children (aged 8-15) were interviewed.

In the spring of 2006, the findings of this audit were published in a series of six reports covering adults as a whole, the nations and English regions, minority ethnic groups, those under 65 with a disability, older people (65+) and children (and their parents).

The six reports total 400 pages. They represent an immensely rich source of data that will hopefully lead to some useful policy discussions.

What do these reports tell us about who is using the Net in this country?

At the time of the survey, 66% of adults had a PC at home and 57% had access to the Internet at home. Consumers living in rural areas, while no more likely to have a PC, were significantly more likely to have access to the Net than average.

Adults living in Scotland and Northern Ireland were significantly less likely than the UK average to own a PC or have Internet at home. There are also indications of lower PC ownership in Wales and significantly lower than average penetration of the Net in this nation.

However, adults living in London, South East and East of England were significantly more likely to own a PC and higher than average Internet take-up was also apparent in London and the South East.

In contrast to the take-up trends of mobile phones, the take-up of the Internet was affected more by socio-economic group than by age.

Taking 45 as the cut-off age, while older consumers (47%) were significantly less likely than younger consumers (66%) to have Net access at home, the differences were less marked than between social groups ABC1 (72%) and C2DE (40%).

Some 6% of adults across the UK said they were likely to get Internet access in the next 12 months and 13% were unsure. People living in the South West of England were the most likely to be unsure, with those in Yorkshire and the Humber the least likely. Overall, 19% of adults were voluntarily excluded from taking up the Internet (that is, they could afford it but did not want it). This figure was highest in Scotland at 24%.

Turning now to the situation regarding children, Ofcom's audit found that nearly two-thirds (64%) of children aged 8-15 have access to the Internet at home, and this is more common for 12-15s than for 8-11s (67% compared to 61%).

Access to the Internet at home is significantly below this UK average for children in Northern Ireland, at 38%. Access is also significantly lower for children from minority ethnic groups (51%) and those in low income households (42%).

A third of children aged 8-15 and well over half of all children in low income households have no access to the Net at home.

The reason that low income households do not connect to the Net is not primarily economic. Such households typically spend more on digital television than it would cost to buy a cheap PC and an Internet subscription.

It is more a matter of culture: such households are headed by parents who often are less supportive of their children’s education than many middle-class households and such parents themselves are not naturally attracted to an overwhelmingly text-based medium.

Recently the Government confirmed that it is setting up a cross-departmental team dedicated to improving access to technology for socially excluded groups.

The main focus of the Digital Inclusion Team will be to improve IT literacy among adults and in schools, as well as helping the socially excluded to access technology via mobile phones and enter the digital television world.

It will also look at the feasibility of extending access within the community. There are currently 6,000 UK Online centres across the UK, with 3,000 in deprived wards.

Following the Cabinet reshuffle after the local elections, the Minister responsible for the Digital Inclusion Team will now be Pat McFadden. He will need to work hard to eliminate the class divide in Internet take-up.

Links:
Ofcom report on media literacy amongst adults click here
Ofcom report on media literacy amongst the nations and regions click here
Ofcom report on media literacy amongst older people click here
Ofcom report on media literacy amongst disabled people click here
Ofcom report on media literacy amongst adults from minority ethnic groups click here
Ofcom report on media literacy amongst children click here


Our Internet columnist Roger Darlington explores an argument that is raging fiercely in the United States but has hardly been mentioned so far here in Britain.

WHAT IS INTERNET NEUTRALITY?

Internet neutrality - or 'net neut' as some (American) media call it – sounds cosy and comfy, like motherhood or apple pie, so how is it possible that the subject has provoked huge political debates in the American Congress and bitter commercial debates among the media giants?

Writing in the “Observer” newspaper, John Naughton has said: “The case for net neutrality is abstract, sophisticated and long term”. A feature on The Register website referred to “the occasionally surreal and often hysterical debate around net neutrality on US blogs and discussion forums”.

So let's start by seeking a simple understanding of what the term means and then establish what the debate is all about.

The basic issue is whether the telcos and cable companies who build and run the infrastructure of the Internet – the pipes, switches and routers over which all Net traffic passes – should be allowed to charge content suppliers to carry their content on a differential basis according to the speed and/or quality of service they want to offer consumers.

Those who support Net neutrality argue that a basic axiom of the Internet is that all services should be treated equally. They insist that the fact that historically the Net has been 'neutral' towards applications – that it did not privilege one application over another – is the reason for the staggering creativity of the Net, with a plethora of new services being developed and offered in such a short period of time.

They argue that, in the absence of Net neutrality, access providers will erect electronic toll booths that inspect all data packets and apply different charges to each, which would reduce creativity and innovation and lead to the development of specialised networks optimised for specific applications.
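
To make the argument concrete, here is a toy sketch of the kind of class-based packet scheduling the campaigners fear: packets are sent in an operator-chosen pecking order rather than first-come-first-served. The traffic classes and priorities are invented for illustration.

    import heapq

    # An operator-chosen pecking order (invented for illustration).
    # A 'neutral' network would use one first-come-first-served queue.
    PRIORITY = {"voip": 0, "video": 1, "web": 2, "p2p": 3}

    queue = []    # (priority, arrival order, packet) tuples
    arrivals = 0  # tie-breaker preserving order within a class

    def enqueue(packet, traffic_class):
        global arrivals
        heapq.heappush(queue, (PRIORITY[traffic_class], arrivals, packet))
        arrivals += 1

    enqueue("file-sharing chunk", "p2p")
    enqueue("voice frame", "voip")
    enqueue("web page", "web")

    while queue:
        _, _, packet = heapq.heappop(queue)
        print("sending:", packet)  # voice frame, web page, file-sharing chunk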

This is the position of many content providers such as Google, Yahoo! and Microsoft, as well as many Net luminaries such as the inventor of the World Wide Web, Tim Berners-Lee.

Caroline Fredrickson, writing for the web site CNET News, argued: “The Internet did not get where it is by letting gatekeepers determine what information reaches its destination or slowing the information from competitors. If the Internet is to be preserved as a forum for speech and innovation, Congress must reinstate the requirements of net neutrality”.

The opponents of net neutrality insist that the principle may have worked when Net users were using low bandwidth applications such as e-mail and blogging, but that now there is increasing use of high bandwidth applications like BitTorrent and requirements for high quality of service with applications like Voice over IP.

These organisations argue that this bandwidth has to be paid for somehow and, if they are forced to continue with a 'neutral' approach, the whole Net will suffer as traffic slows down and investments in new infrastructure are held back.

Opponents of net neutrality insist that the idea that content would be blocked by differential pricing is nonsensical scare-mongering because consumers have become used to comprehensive access to Net content and services and would desert any company that tried to limit access to content.

This is the position of the telcos like AT&T and the cable providers.

Richard Bennett, an American engineer who played a role in the design of the Internet, told an interviewer: “In the name of opening up the Internet for big content, the 'Net Neutrality' bills [in the US Congress] criminalize good network management and business practices. Why can't we have more than one class of service on the Internet?”

So who has got it right?

David Tansley, a technology specialist at the UK consulting firm Deloitte, has said: “The question is, how do you marshal a finite resource where demand exceeds supply? Do you just keep adding lanes to the motorway or do you look for a way of disincentivising some of the motorists?

Unless you can defy the laws of physics, you have to consider a congestion charge. I think BT will do the same as the American telecoms companies.”

Of course, if the likes of Google or Skype do not pay for the necessary network investments, who will? The poor end user of course.

Therefore it seems to me that this is not an issue to be resolved by legislators or regulators, but one best left to the market and customers.

Links:
John Naughton's column in the "Observer" click here
Caroline Fredrickson's piece on CNET News click here
Interview with Richard Bennett on The Register click here


Are you confused over the proliferation of offerings in the communications marketplace? Well, you're not alone. Our Internet columnist Roger Darlington poses the question:

WHAT DO CONSUMERS REALLY WANT?

I obtain my fixed line service from BT and my mobile service from O2 because these companies recognise trade unions. My Internet service provider is Pipex (it was a recommendation) and I take my digital television from Sky (I love the Sky+ PVR).

However, this approach is being challenged by an increasing number of providers in the communications marketplace. They want customers to take bundled services.

This might be double-play (such as fixed and mobile from the same company) or triple-play (such as fixed telephony, cable television and Internet from the same provider) or even quadruple play – or as Virgin boss Richard Branson calls it, foreplay – with all four services from the same source.

In a market where customers are switching all the time, the idea of bundling is to reduce 'churn' and maximise 'stickability'.

Recently I did something for the first time, when I was a panellist at a telecommunications conference for business leaders in my capacity as a member of the Ofcom Consumer Panel. The event was arranged by an organisation called the Telecommunication Executive Network [click here].

My particular session was called: "Building A Customer Friendly Brand And Customer Service Bundle: Are Triple And Quadruple Play What Customers Want?" (snappy, eh?).

I told the assembled business leaders that there's good news and bad news for players in today's communications market.

The good news is that, according to the Ofcom “Communications Market” report for 2006, last year the total communications market (that is, both telecommunications and broadcasting) was massive and growing. Total retail revenues were £50 billion and this was up by 5% on the previous year.

The bad news is that the same report shows average household expenditure growing by less than 1% to £87.67 a month. Cost savings from improved technology and all these so-called 'free' offers are keeping expenditure flat.

The good news is that there are still plenty of market growth opportunities. Internet penetration is still only 60% of households, broadband is still only about 35%, and even digital television is still just 70%.

The bad news is that the traditional markets are saturated. Some 90% of households have a fixed line and this figure is slowly falling. Some 90% of households have a mobile phone and the figure is hardly moving.

Even Internet growth is only edging up at an extra 3% of households a year. Most of the movement is people up-grading from narrowband to broadband.

The sectors of the market which remain unconnected – notably households with older and poorer consumers – are not easy to win over because unfortunately these people remain unconvinced that the Net is relevant to their lives.

So, in this fiercely competitive marketplace where customers are increasingly promiscuous in choosing suppliers, what do customers want?

  1. They want honesty. They want to know exactly what 'free' broadband means. If a broadband service cannot be delivered for one, two or three months, they want to know that now.

  2. They want fairness. Customers should be delivered exactly what they thought they were buying. All the offers made to new customers should also be made available to existing loyal customers.

  3. They want simplicity. They are massively confused by the plethora of offerings and tariffs and want things to be made easier to understand. They want more independent information on comparison of services.

  4. They want staff empowerment. They want staff to have the authority to do what it takes to meet the customer's needs. A few weeks ago, it took conversations with seven members of staff at my mobile operator before I was able to get a database record changed.

  5. They want a deepening of the relationship. Most customers use only a small proportion of the functionality of their technical equipment. Suppliers should find ways of enabling their customers to get more from their existing products and services.

  6. They want a broadening of the relationship. What else can suppliers offer their customers? This leads to double play, triple play, quadruple play. Customers want attractive content. They want more innovation – too much differentiation is simply on price.

Finally, companies need to be clear how they perceive the customer. No company 'owns' the customer. Companies should think of their customers as they would a partner. It is a relationship that has to be nurtured. So less fixation on the big 'o' – in this case, the offer. More concentration on extended foreplay.


There is a general feeling that the web has reached an important new stage in its development, but there is little consensus on how to define this. Our Internet correspondent Roger Darlington gives his explanation.

WEB 2.0: HYPE, HOPE OR HERE?

The term “Web 2.0” [click here] was a designation coined at a conference in 2004 by a promoter of web business called Tim O'Reilly. He described the term as “an attitude rather than a technology” - which is hardly an exact definition.

The numerical term alludes to software version numbering – as with a new release of the Windows operating system or the Internet Explorer browser, where there is a clear number and date – but the web is evolving too quickly and too randomly to lend itself to such a clear designation of stages.

Nevertheless the web in the mid 2000s is so very different from that in the late 1990s that it is not unreasonable or unhelpful to give this difference the designation Web 2.0. I would identify five inter-related reasons for this difference.

First, the web is so much bigger now.

Arguably the Internet 'took off' in 1993 when its use doubled to more than 25 million people. But today around one billion people worldwide are on-line. The Net is now not just a mass medium – it is the most extensive medium of all time.

Meanwhile the content of the web has grown exponentially. There are now well over 10 billion pages. Many thousands of new pages are added every day.

Second, broadband is transforming the Net experience.

In most industrialised countries now (and this is certainly true of the UK), there are more broadband users than narrowband users and, in countries like Japan and South Korea, broadband speeds of up to 100Mbps are becoming quite commonplace.

When one's Net connection is always-on, it changes the way one uses the medium. When bandwidth is greater, one is more likely to look at news sites offering video clips or social networking sites with photographs or e-commerce sites with graphical catalogues.

Third, e-commerce has really come of age.

The web has created the largest marketplace in the history of humankind and growth rates for e-commerce are phenomenal. Just three years after it was founded, Skype accounts globally for 7% of all international telephone calls.

Sites like Amazon [click here] and eBay [click here] are hitting traditional 'bricks and mortar' retailers, down-loading of MP3 music files is slashing the sale of CDs, and on-line advertising is now hitting television and press advertising hard.

As an example of the explosive growth, consider the forecasts from the Interactive Media in Retail Group (IMRG) that British shoppers will purchase £7 billion worth of goods this Christmas – an expansion of 40% on last year.

Fourth, social networking is now huge.

People are using the web to make or renew contact with old school friends, current university students, or just people who share an interest or location. Almost every Net user under 30 is a member of one or more such sites.

Facebook [click here] is used by 90% of the students in the 41 UK higher education institutions which have signed up to the site. MySpace [click here], bought by Rupert Murdoch for a staggering $580M, has 110M users, 4M of them here in the UK.

When a disaster occurs, like the tsunami in south-east Asia [click here] or the flooding of New Orleans, the web can almost instantly host a site that creates a network of individuals and organisations that need to communicate with one another.

Fifth, user-generated content is changing the face of the web.

The biggest example of this is blogging – there are now 55M weblogs and the number jumps every day. Those bloggers who are writing about current affairs are challenging traditional media and becoming “citizen journalists”.

The on-line encyclopedia Wikipedia [click here] now has 1.4 million articles in the English language alone – all written for free by users – and the aim of the site's owners is to have at least 250,000 articles in every language spoken by at least one million people.

Flickr [click here] is available for anyone to display their photographs to the world. When it comes to video clips, the sudden popularity of YouTube [click here] – recently acquired by Google for an eye-watering $1.65B – is stunning.

In short: Web 2.0 is essentially about a step change in participation. One thing is for sure though: as far as the development of the web is concerned, “we ain't seen nothing yet” and, in even five years time, nobody will be using the term Web 2.0.

Links:
"Guardian" magazine article click here
Web 2.0 for trade unions click here


In Britain, at least until recently, UGC was a reference to a cinema chain (it's now called Cineworld [click here]). But these days the acronym means user-generated content – the phenomenon that is transforming the web. Our Internet correspondent Roger Darlington explains ...

HOW YOU BECAME THE WEB

"Time" magazine's "Man of the Year" for 1982 was not a man at all but a machine - the computer. In those days, computers were in very few homes, hardly anyone had heard of the Internet, and the World Wide Web did not exist.

In 1999, “Time” magazine's “Person of the Year” (notice the gender change) was Jeff Bezos, the founder of the e-commerce web site Amazon. By then, PCs were ubiquitous and the web commonplace but dominated by major commercial interests.

In 2006, “Time” [click here] made its “Person of the Year” choice as “You. Yes, you. You control the Information Age.” For this edition of the magazine, seven million pieces of reflective Mylar were ordered for sticking on the front cover, so that you saw yourself on the page.

So, what was all this about? Why did the magazine then devote no less than 27 pages to examining how you and I are now shaping the Net [for main article click here]?

In a sense, the Internet – first created in 1969 - started as a reservoir of user-generated content (UGC). It initially consisted mainly of e-mail and bulletin boards.

Then, with the invention of the World Wide Web in 1989, the Net became dominated by information and transactional sites created by old and new corporations.

What has happened in the last few years is that increasingly the largest volumes of material on the Net – and the fastest-growing and the most interesting – are coming not from organisations but from individuals.

It started with blogging and, as the free software became simpler, this spread like wildfire. Then it received a whole new impetus from social networking sites where users posted personal details (see Facebook [click here]) or photographs (see Flickr [click here]) or video clips (see YouTube [click here]) for all of us to access.

2006 was the year we knew this was big when Rupert Murdoch's News Corporation snapped up MySpace [click here] for a cool $580 million and Google acquired YouTube [click here] for an eye-watering $1.65 billion. But just how big is this development?

Today there are around one billion Net users globally. Now, of course, not all of these have a web site or blog or even put details on a social networking site or comments on a blog posting. But, compared to old media (for instance, the number who send letters to newspapers), the level of participation on the web is breathtaking.

The number of blogs worldwide is now over 60 million. Wikipedia [click here] now has around 1.6M articles in English alone. Every day, some 65,000 new video clips are uploaded to YouTube and every day over 100 million videos are watched on this site.

The level of activity of some users is astonishing.

For instance, there is a 25 year old Vietnamese-American called Tila Nguyen – she is Tila Tequila professionally – who has a profile on MySpace [click here] that has been viewed more than 50 million times, and she receives between 3,000-5,000 new friend requests a day. Or take the 25 year old Canadian Simon Pulsifer who has authored over 2,000 articles on Wikipedia [click here] and edited roughly 92,000 others.

Of course, Tila and Simon are very exceptional. But consider 45 year old South Korean housewife Kim Hye Won. She has authored about 60 pieces for the on-line newspaper OhMyNews [click here] which has 47,000 such contributors to a site which obtains between 1-1.5 million page views a day.

Much of this UGC is – like much newspaper and magazine content – inconsequential, but sometimes it is explosive.

When a video clip made by S. R. Sidarth of American politician George Allen making a racist remark about him was posted on-line, it had repercussions that arguably led to the Democrats winning the US Senate in November 2006 [for explanation click here].

When Iraqi dictator Saddam Hussein was hanged in December 2006, the world became aware of the taunts and insults he received because someone there filmed the event with a mobile phone and put it on the Net [click here].

I guess that I am part of this revolution. I created a web site in 1999 and started blogging in 2003. Today my site receives around 3,000-4,000 visits a day. So maybe “Time” is right and I am “Person of the Year” after all – but then so are you.


Our current communications infrastructure is unsatisfactory and unsustainable – but rebuilding it will take money and imagination, explains our Internet correspondent Roger Darlington.

THE NETWORK OF THE FUTURE

In January 1980, I wrote a research report entitled “Optical Fibre Technology” for what was then called the Post Office Engineering Union (POEU). This examined trials of optical systems in the network of what was then the Post Office Telecommunications Business.

In May 1989, I organised a public conference called “The Network Of The Future” for what by then was the National Communications Union (NCU). The event promoted the case for a national broadband network carrying both telecommunications and broadcasting on optical fibre.

Almost two decades later, where do we find ourselves?

In a sense, Britain now has a national broadband network, since over 99% of homes can access exchanges that will provide bandwidth of half a megabit per second. In a way which we never envisaged in 1980 or 1989, the copper network has proved able to deliver broadband through the use of Asymmetric Digital Subscriber Line (ADSL) technology.

In another sense, though, we are no further forward, since we have no optical fibre in the local loop and, unlike other countries, there are no current plans to develop what is now called Next Generation Access (NGA).

Certainly the core network will be transformed, as all network operators invest heavily in what are called Next Generation Networks (NGNs), which use the Internet Protocol (IP); the most important in the UK is BT's 21st Century Network (21CN). But, as far as Next Generation Access is concerned, this country is in a state of paralysis.

In the UK, we are going to develop ADSL technology using those old trusty and rusty copper wires. Theoretically ADSL2+ technology will deliver bandwidth of up to 24Mbps, but in reality the state of the copper lines and the distance from the exchange will ensure that most customers will receive much slower speeds.
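
To give a feel for the distance effect, here is an illustrative sketch only – real ADSL2+ rates depend on line quality, interference and equipment as well as length, so these bracket figures are rough assumptions, not engineering data.

    # Rough, invented brackets showing the trend of rate against line length.
    RATE_BY_DISTANCE_KM = [
        (1.0, 20.0),  # close to the exchange: near the 24 Mbps headline
        (2.0, 12.0),
        (3.0, 8.0),
        (4.0, 4.0),
        (5.0, 1.5),   # long lines: little better than 'basic' broadband
    ]

    def estimated_rate_mbps(distance_km):
        """Return the rate for the first bracket covering the line length."""
        for max_km, rate in RATE_BY_DISTANCE_KM:
            if distance_km <= max_km:
                return rate
        return 0.5  # beyond about 5 km, expect very slow speeds indeed

    print(estimated_rate_mbps(3.5))  # -> 4.0 Mbps on this illustrative model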

Meanwhile, other countries are taking a much more imaginative approach. The USA, Japan, South Korea, France, Germany and the Netherlands are all making significant investments in NGA.

In this country, all that has happened so far is that in November 2006, the regulator Ofcom issued a public discussion document – not a formal consultation document – entitled “Regulatory challenges posed by next generation access networks”.

The key regulatory issue here is whether NGA will create what the economists call “an enduring economic bottleneck”, that is, a stranglehold on competition through total control of a key element of the network.

The regulatory answer to such a bottleneck is to promote downstream competition through mandated access. The problem is that BT will not want to make heavy and risky investments in optical fibre if it is immediately forced to open up access to these networks to its competitors.

The regulatory answer to that dilemma is called forbearance - that is, permitting the investor in NGA sole use for a period of time sufficient to make the investment worthwhile.

In principle, I would favour a measure of forbearance but limited in both time and location. There is a case for forbearance in rural areas and downstream competition in urban areas.

In its discussion document, Ofcom states: "It is not for Ofcom to determine when or how public policy is employed with respect to next generation access deployment. However, the wider social implications are a key feature of the debate on next generation access."

This leads me to assert that next generation access is too big an issue to be left to the regulator. Ultimately NGA must be a public policy issue for Government. There is a strong case therefore for arguing that there should be a Government review of the case for this 'network of the future'.

In the summer, we will have a new Prime Minister looking for some big forward-looking ideas. I offer him this one.

Links:
BT's 21st Century Network click here
Next Generation Networks UK click here
Ofcom discussion document on next generation access networks click here
What's happening in the Netherlands click here
What's happening in Sweden click here
What's happening in Norway click here
CWA's Speed Matters campaign click here


Do not adjust your sets. The strange things happening to television are what you should expect from a digital revolution, explains our regular columnist Roger Darlington.

THE CHANGING PICTURE OF TELEVISION

In 1999, Andy Grove, then Intel’s Chief Executive, wrote a book called “Only The Paranoid Survive” [for my review click here]. This work was based on his concept of the strategic inflection point which he defined as “a time in the life of a business when its fundamentals are about to change”.

If you were working in the television industry today – an £11 billion business in the UK – you would have no doubt that you are smashing right into such a strategic inflection point because all the fundamentals of television are being utterly transformed by digital technologies.

Let's consider these one-time eternal verities one by one.

We long ago moved from the single set in the living room, so that now most households have smaller, even portable, sets in the kitchen or bedroom. However, broadband now means that we are starting to watch television material on our PC or lap top, and the BBC's iPlayer and Internet Protocol television (IPTV) will give a big boost to this trend. Other devices – such as games consoles and mobile phones – are increasingly offering access to television.

In the 1980s, the video cassette recorder (VCR) started the process of time-shifting, but half of viewers never managed to master these consumer-unfriendly devices. Now, though, we have the personal video recorder (PVR), a hard disc device that is much simpler to use and has much greater storage capacity. If (like me) you have a Sky+ box, recording material is a one-click operation and recording a whole series only a two-click operation.

There was a time when the most popular programmes would attract audiences of around 20 million and you could chat to many people at work about the item you viewed the night before. This was great for advertisers who knew what they were getting for their money. The current fragmentation of audiences over many hundreds of channels restricts 'water cooler TV' to programmes like “Big Brother”.

If we take the broadest view of public service broadcasting (PSB: BBC 1 & 2, ITV 1, Channel Four & Five), then we find that only two-thirds of viewing is of the five main channels as viewers switch rapidly to the proliferation of digital channels. As the 'big five' respond to this challenge with more popular programming, this trend is undermining the whole concept of PSB.

In fact, advertising revenues are actually falling, which presents a serious threat especially to ITV. Meanwhile subscription revenues are rising rapidly and subscriptions now raise more money for television than advertising does. For the moment, the BBC licence fee is safe but, as viewers watch less and less BBC television, the sustainability of this funding model becomes ever more strained.

We used to have documentaries and drama, but many programmes must now be classed as drama-documentaries as the styles are mixed. Similarly comedy and drama used to be quite different but a series like “Desperate Housewives” cleverly embraces both genres. Programming and advertising used to be completely separate whereas we now have a lot of shopping channels in which the programme is the advert.

In the beginning, the BBC and ITV made most of the British-produced programmes, but the independent production companies – the so-called 'indies' – are now strong not just on the commercial channels but on the BBC channels too. The value chain is becoming more complicated as one company screens a programme, a second makes it, and a third provides the interactive elements such as voting.

Increasingly we will all become television-makers since events can be filmed with relative ease and little expense using new digital technologies. Community Channel [click here] is a voice for community groups, charities of all sizes and not-for-profit organisations. A new channel called Information TV [click here] carries material from government departments, public bodies and other public service institutions. Current TV [click here] – promoted by Al Gore - offers a mix of self- and viewer-made short shows.

Most of these changes are good news for consumers: more choice and more empowerment. But, if you are Mark Thompson of the BBC or Michael Grade of ITV, it must make management and funding little short of a nightmare.


You may think that you're already bombarded with electronic messages, but many of us are seeking more and the technology is certainly going to deliver many more. Our Internet columnist Roger Darlington considers the question:

WHAT IS IT WITH BEING CONNECTED?

It is not so long ago that each day most of us received simply a few letters and several dozen phone calls. Today typically many of us also receive a dozen or so text messages on our mobile and several hundred e-mails on our PC, lap top or PDA.

Many of us think that we have reached our capacity to absorb and handle messages and that we are already suffering from information overload – and yet consider some current behaviours and some future technologies and you will appreciate that this is just the beginning.

When you wake up in the morning, what is the first thing you do? For many of us, one of the first things we do is to switch on the mobile (if it was even off) and to check our e-mail.

When you are at a meeting, are you paying full attention to the discussion? Chances are that you're checking your e-mail or surfing web pages on your lap top or Blackberry (not for nothing known as a 'crackberry').

When you go on holiday, do you leave it all behind? A lot of us are regularly texting family and friends and some of us are even checking our e-mails on the lap top or PDA.

Young people – and, even if Connect does not have many young members, we have our children and know their friends – are even more compulsive. You might be surprised at how many Instant Messenger or text messages your daughter sends – even to school friends that she sees every day.

People in their 20s and 30s are signing on big time to a host of new social networking sites that marry the web and the mobile and enable them to share the details of their everyday life with a close circle of friends, or anyone who wants to connect with them, or even any web user – and all in real time at very low cost.

These sites have weird names like Twitter [click here], Kyte [click here], Radar [click here] and Jaiku [click here]. Most people have never heard of them, but they are symptomatic of how technologies are changing our patterns of communication and of our seemingly insatiable appetite for messages.

Why is this happening?

In previous societies, people obtained a strong sense of identity from their family, village, class or religion. All these social units have become fragmented and less important to many people. But all of us still have a strong desire to belong and to be recognised - perhaps especially young people who are still establishing their role in life.

This is where new communications technologies are so appealing and even addictive. They give people a sense of belonging, involvement and participation or even – to become a bit more psychological for a moment – a feeling of validation, inclusion and desirability. And the networks are operating 24/7, the feedback is instantaneous, the usage is simple, and the cost is minimal.

Even for those of us not looking for more messages, the technology is going to make communications more useful and desirable. This is because things like radio frequency identification (RFID) tags, satellite navigation chips, and new radio technologies are going to enable millions and eventually billions of everyday objects to communicate with each other and with us.

Miniature devices in a medicine container or special bracelet could tell you whether your ageing grandmother has taken her tablets today or failed to rise from bed. Similar devices in your food packaging could tell you whether the 'eat by' date has passed, or in your clothes could set the washing machine to the correct programme.

And, of course, distance will be no obstacle at all. Even on holiday in Australia, you will be able to alter the heating and lighting in your home or to record a radio or television programme.

As well as issues of information overload and personal control, there will be a profound question of privacy. Currently there is a pact – often unwritten - between citizen and state and between consumer and company. But, as new communications technologies become ubiquitous, we will need much more transparency and openness about how information is collected, communicated and controlled.


We are now critically dependent on the Internet, but the network is under pressure both internally and externally, as Roger Darlington explains.

COULD THE NET FALL OVER?

Every government department, almost every business of any size, and over 60% of homes in the UK now use the Internet. Over one billion people worldwide are now connected. Every day, the numbers increase. Last year, China alone added 26 million users.

As more organisations and individuals make more intensive use of the Net, we become ever-more dependent on the network to work fast and efficiently every second of every day. But can we totally rely on this?

There are two main threats to the effectiveness of the Internet: an internal one which is that it could run out of capacity and an external one which is that it could be struck by a malicious attack.

Let's start with the capacity problem. Originally the Net only carried messages: e-mails and bulletin boards. Then we saw the development of picture-rich web sites. Now we have sites such as YouTube with streaming video and surfers using the likes of BitTorrent to download huge files.

This has required the capacity of the Internet to increase year on year on year. Indeed Verisign, the American firm which operates key parts of the Net's infrastructure, including the .com and .net domain registries, is investing $100M (£51M) over the next three years to increase bandwidth 10-fold for new services.

Early in the year, a report from Deloitte said 2007 could be the year the Internet approaches capacity, with demand outstripping supply. It predicted bottlenecks in some of the Net's backbones as the amount of data overwhelms the size of the pipes.

In fact, there is virtually unlimited capacity in the Net's backbone infrastructure because so much optical fibre has already been laid. A problem is more likely to occur with the routers, although Cisco is now manufacturing routers that can handle a staggering 92 terabits a second.

Much more serious is the weakness of the so-called 'last mile' – the copper cables that connect almost all Net users to their ISP. This is why ISPs are putting caps on usage or charging more for extra usage or using techniques like bandwidth shaping.
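
Bandwidth shaping itself is conceptually simple. One common shaping mechanism is the token bucket, sketched minimally below; the rate and bucket size are invented for illustration, and real ISP equipment is of course far more elaborate.

    import time

    class TokenBucket:
        """Token-bucket shaper: short bursts are allowed up to the bucket
        size, but sustained traffic is held to the long-run rate."""

        def __init__(self, rate_bytes_per_sec, bucket_bytes):
            self.rate = rate_bytes_per_sec
            self.capacity = bucket_bytes
            self.tokens = bucket_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True
            return False  # queue or drop the packet until tokens refill

    shaper = TokenBucket(rate_bytes_per_sec=125000, bucket_bytes=50000)  # ~1 Mbps
    print(shaper.allow(1500))  # a typical packet passes while tokens last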

The other threat to the operation of the Net is external: either an incident which takes out capacity or an attack which damages the proper running of systems or sites.

It was hardly reported outside Asia but, a few weeks ago, Vietnam lost half of its Internet connections because some fishermen in Ca Mau (the southernmost point of the country) dived down into the sea and cut off about 98km of optical fibre cable in order to sell it. Apparently it will take $4 million and some four months to repair the lines.

My man in Hanoi e-mailed me: “From this morning, it's very difficult for us to load international web pages. The Internet turns very slow and idle. That's really a disaster to me. Hope that things will be OK soon and those fishermen should be punished for their actions."

Even more serious is the risk of attacks on web sites for political reasons or terrorist purposes.

In May, a significant number of web sites in Estonia - especially ones of Government departments and political parties - were under heavy attack for weeks and the strong suspicion was that Russia was behind the assault because of strained relations between the two countries. NATO dispatched some of its top cyber-terrorism experts to Tallinn to investigate and to help the Estonians beef up their electronic defences [for more information click here].

But this was merely the visible tip of a much more extensive series of unreported attempts to compromise systems and sites.

Why should Hamas and Hezbollah content themselves with sending suicide bombers and rockets into Israel if they could disable the IT networks of Mossad or the Israeli Defence Ministry? You can be sure that, even as you're reading this, there are people in Syria and Iran attempting such cyber-attacks.

The sort of electricity outage we saw in the north-east of the USA a few years ago could look a minor inconvenience compared to a major collapse of, or attack on, a section of the Net. Every government agency and sizable company should have a contingency plan for such an eventuality.

It may never happen – but don't be too surprised if it does.

Links:
Could the UK face a cyber attack? click here
Chinese attacks on US and UK sites click here
Attacks on New Zealand's sites click here
DoS attack on Verizon click here


From some of the media hype, you could be forgiven for thinking that now everyone is on-line and generating their own content, but our Internet correspondent Roger Darlington studies the latest survey and finds that the picture is more variable than one might think.

MORE DIGITAL DIVIDES OPEN UP

The Oxford Internet Institute (OII), headed by Professor William H Dutton, has now run three biennial Oxford Internet Surveys and the 2007 report gives not just a detailed profile of UK Net users and usage but an indication of some broad trends.

We tend to think that everyone is now on-line, but the survey finds that two-thirds of homes have access to the Net but 34% still do not. The OII notes that “the increase in access and use of the Internet from 2003 to 2007 has been slow, if it has not yet reached a plateau”.

Age is a major factor: while 90% of those under 18 in the survey use the Net, only 24% of those aged over 75 do so. Lifestyle is another important factor: 97% of students and 81% of the employed are on-line, but only 31% of the retired are.

Perhaps the biggest differentiator is income: of those earning less than £12,500 a year, only 39% use the Net but, of those taking home more than £50,000, the figure is 91%. A final factor worth mentioning is disability: the survey found that only 36% of those with a disability are connected compared to 77% of those without a disability.

So, are these digital divides going to dissolve any time soon? The OII records that, among households without the Internet, the number that say they are planning to get access in the next year has dropped dramatically. Less than one-fifth (18%) plan to obtain access in comparison to 44% in 2005.

Just why do those off-line not wish to become connected? Expense, cited by 51%, is far from being the major reason. Mostly it is about not knowing how to use the Internet (81%) or a computer (77%) or feeling that it is not for people like them (60%).

There is a major challenge to Government here. Something like a third of households are unlikely to go on-line anytime soon unless there are some significant support programmes.

So, what about those who are connected? Are they making full use of the Net?

Well, e-commerce is doing fine with 79% of Net users saying that they buy products and services on-line, although 38% report that it is difficult to return or exchange goods which have been purchased on the Internet. Also there is slow growth in use of government web sites: 29% access local government sites and 26% go to central government sites.

However, active civic participation is very, very low: only 7% have signed an on-line petition and a mere 2% have used the Net to contact a politician.

But politics is boring, right? Instead we are all running our own web site or blog and social networking like crazy. Well, not exactly.

Passive production such as posting photographs is fairly common (28%), but only 15% maintain a personal website and only 12% (actually down from 17% in 2005) run a blog. Less than one-fifth (17%) of Net users have created a profile on a social networking site; students are three times as likely (42%) as employed users (15%) to have a profile, and almost no retired users (2%) have one.

Among the rich data in the OII survey, one other subject is particularly intriguing, namely the divided views on the idea of Internet regulation. There is simply no consensus about whether the Government should regulate the Net. Just over a third (36%) think that it should, just under a third (30%) think that it should not, and the remaining third (33%) are undecided.

Non-users and ex-users think the Internet should be regulated by Government to a larger extent than users. 51% of non-users support such regulation, compared to 31% of users.

So it would appear that usage of the Internet alleviates some of the fears about being on-line. Nevertheless, a massive majority (85%) believe that children's content should be restricted, a third (34%) have received a virus on their computer, and 24% complain about too much spam.

Link: Oxford Internet Surveys click here


We all have our favourite web sites and here our Internet columnist Roger Darlington looks at the background to one of the sites he uses most.

IS WIKIPEDIA THE BEST SITE ON THE WEB?

It has over 8 million pages in more than 250 languages. It is by far the biggest encyclopedia ever written. And it's all done by volunteers and free to all users.

It is of course Wikipedia – the web site that shouldn't be possible. So how has it happened?

The site was founded by the American Jimmy Wales [click here] and his then partner Larry Sanger and it was originally called Nupedia. The concept then was to invite experts to contribute articles and, by the end of the first year, they had a grand total of 22. The next year was not that much better.

The plan changed dramatically when the founders decided to use the idea of the wiki which enables any Net user to contribute an article or to edit one. In the first two weeks of the new approach, they had more articles than in the two years of Nupedia.

The whole enterprise seems to defy the laws of business and economics. The Wikimedia Foundation is run as a charity on a budget of £700,000 a year provided by donations, mostly of around £20. It takes no advertising.

It employs only seven people with an office in St Petersburg, Florida and one-room outposts in California and Poland. Its main servers are in Tampa, Florida with additional servers located in Amsterdam and Seoul.

And yet it is among the top ten most visited web sites in the world and, at peak times, has around 15,000 visitors a second. It is probably worth more than £2 billion.

Merely to summarise the content of the site is to stretch the imagination. In its main language, English, there are almost two million articles. There are another 630,000 articles in German and 550,000 in French. There are over 400,000 in Polish and Japanese and well over 300,000 in Italian and Dutch.

And so it goes on. The top 14 languages have over 100,000 articles and the top 139 have over 1,000 articles. At the last count, there was some material in 253 languages and the total number of pages is currently 8.2 million.

Every page contains links to other pages. And amendments are being made and pages are being added every second of every day. A typical page has been edited 16 times and the site is growing at the rate of 1,700 articles a day.

Wikipedia already has a range 20 times greater than the entire 17 volumes of the “Encyclopaedia Britannica”. The grandiose mission of Jimmy Wales is “to bring the sum of human knowledge to every single person on the planet, free, in their own language”.

It is not just the pages of factual content that are impressive. For each page, there is a record of all the amendments made and a discussion forum to debate the content. The democratic nature of the original authorship and subsequent amendments, together with the transparency of the whole process and the invitation to improve and debate the material, make Wikipedia a truly radical project.

The contributors to the site number in the hundreds of thousands, but there are some 75,000 active contributors; there is a core of around 4,000 people who make more than 100 edits a month; and there are 1,000 official 'admins' who arbitrate when 'trolls' try to vandalise pages and who, if necessary, block unruly users from the site.

But does the process work? Can one rely on the accuracy of Wikipedia?

Like most information sources – both on and off-line – there are mistakes. But the evidence suggests that the level of accuracy is as good as other comparable sources.

The scientific journal “Nature” ran an exercise to test the comparative accuracy of Wikipedia as contrasted with “Encyclopaedia Britannica” in 43 randomly selected articles [click here]. They found 162 mistakes in Wikipedia compared to 123 in “Britannica”.

Also Wikipedia pages are open about material which needs to be checked or sourced.

After years of using the site literally every day, I am a huge fan. It is not perfect, it is not brilliantly written, but it is hugely informative and very user-friendly. As a starting point to learn about a topic, it is currently unbeatable.

Links:
Wikipedia search page click here
Wikipedia main page click here
Wikipedia's About Wikipedia page click here
Wikipedia's Wikipedia FAQ page click here
John Naughton on Wikipedia click here


Something many don't like to talk about too openly is the issue of Internet security, but our columnist Roger Darlington lifts the curtain on a darker side of the Net.

ARE YOU SAFE ON-LINE?

Let's start with what we know. More people are connecting to the Net and they are using ever faster speeds. They are spending more time on-line, doing more things, and carrying out more transactions.

What we don't know is the true level and sophistication of e-crime. We don't know whether proportionately it's worse than off-line crime, how badly companies and individuals are being hit, and how the various players are coping.

This is because there is no central collation – or even an agreed definition – of e-crime. Furthermore many companies are reluctant to admit problems because this could damage their brand and deter visitors to their site.

However, in the first half of 2007, the issue of personal Internet security was examined by the House of Lords Science and Technology Committee led by Lord Broers (formerly of IBM and now a Vodafone board member). In August 2007, it published a 121-page report [for text click here].

While being very clear that “the Internet is a powerful force for good”, the Committee insisted that “the Internet is now increasingly the playground of criminals” and that these bad guys are “highly skilful, specialised, and focused on profit”.

The Committee looked at problems like denial of service attacks, malicious code (malware), phishing, identity theft, and on-line fraud and theft. It insisted that “there is a growing perception, fuelled by media reports, that the Internet is insecure and unsafe”.

In the course of the inquiry, a major difference emerged between the Government and the Committee.

In its evidence to the Lords, the Government insisted that the responsibility for personal Internet security ultimately rests with the individual. However, the Committee – rightly, in my view – took a different stance.

It argued that the Government's position is “no longer realistic” and argued that this attitude “compounds the perception that the Internet is a lawless 'wild west'". The Lords asserted that “it is clear to us that many organisations with a stake in the Internet could do more to promote personal Internet security”.

Therefore the Committee had messages and recommendations for a wide range of players: manufacturers, retailers, Internet service providers, businesses that operate on-line, the police and the criminal justice system, Government, and Ofcom.

The prime challenge, though, was to the Government: “Government leadership across the board is required. Our recommendations urge the Government, through a flexible mix of incentives, regulation, and direct investment, to galvanise the key stakeholders".

Seem sensible? So, how did the Government react?

In its response [for text click here], it insisted: “The Government does not agree with the implication within the report that the public has lost confidence in using the Internet”. Instead it argued that there is “an acceptable level of comfort with the technology”. The response asserted that “we would refute the suggestion that the public has lost confidence in the Internet and that lawlessness is rife".

Ministers stated: “Legislation will be kept under review but the Government does not consider that imposing additional burdens on business is the best way forward".

Unsurprisingly, the House of Lords Committee feels that the Government is taking too relaxed a view. Committee member the Earl of Erroll – who has a long track record in IT - was reported in the press saying: “The Government's response is a huge disappointment”.

So, who is right? Are people worried about being on-line?

In the latest Oxford Internet Institute survey [for access click here], one subject was particularly intriguing, namely the divided views on the idea of the Government regulating the Internet which must be one measure of the level of concern. Just over a third (36%) think that it should, just under a third (30%) think that it should not, and the remaining third (33%) are undecided.

Significantly non-users and ex-users think the Internet should be regulated by Government to a larger extent than users. 51% of non-users support such regulation, compared to 31% of users.

A massive majority (85%) believe that children's content should be restricted and a third (34%) have received a virus on their computer and 24% complain about too much spam.

A report from the Ofcom Consumer Panel on “Consumers And The Communications Market: 2007” [for text click here] found that 61% had worries or concerns about using the Internet, while more specifically 26% expressed anxiety about Internet security.

So this debate is going to run and run ...

Links:
Response to Lords report from Ofcom click here
Response to Lords report from Children's Charities' Coalition on Internet Safety click here
EURIM workshop on e-crime click here


Are you getting enough? Broadband speed, that is. Our Internet columnist Roger Darlington explains the problems and offers some solutions.

IS YOUR BROADBAND UP TO SPEED?

In the four years that I have been a member of the Ofcom Consumer Panel, none of the issues we have raised has excited more consumer and media interest than that of broadband speeds.

My own experience is typical: I signed up for a service offering “up to 8 Mbit/s”; I was told that I could only receive about 4 Mbit/s; and most of the time I obtain just over 2 Mbit/s. Apparently I live too far down the line from my local exchange.

But I am far from being the only one with problems. A report published by “Which?” in August 2007 concluded that, while many packages now advertise speeds of “up to 8 Mbit/s”, the average speed for such connections was 2.7 Mbit/s.

When the Consumer Panel went public on the issue, the BBC web site opened an on-line discussion which was flooded with angry comments. They closed the discussion after receiving almost 2,000 submissions.

Now there are many reasons why an individual customer may not receive the maximum possible broadband speed. Some relate to physics: the state of the copper cable varies around the country, some of it is decades old and, the further one lives from the exchange, the worse the speed.

Some relate to the network design of ISPs: the more people using the service at any given time - known as the contention ratio - the worse the speed.

Some relate to the consumer's own premises or equipment: the internal wiring may be of low quality, the PC may be inadequate, and there could be interference from other electrical appliances.
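To see how just two of these factors – line length and contention – eat into a headline speed, here is a minimal sketch in Python. The halving-per-two-kilometres attenuation rule and the peak-time figures are illustrative assumptions only, not any ISP's actual engineering model:

    # A rough sketch of how an advertised "up to" broadband speed is eroded.
    # The attenuation rule and peak-time figures are illustrative assumptions,
    # not any ISP's actual engineering model.

    def estimated_speed(headline_mbps, line_km, contention_ratio, busy_fraction):
        # Assumption: throughput roughly halves for every 2 km of copper.
        line_rate = headline_mbps * 0.5 ** (line_km / 2.0)
        # Contention: 'contention_ratio' subscribers share the backhaul and
        # 'busy_fraction' of them are downloading flat-out at peak time.
        shared_rate = headline_mbps / (contention_ratio * busy_fraction)
        # The customer gets whichever constraint bites first.
        return min(line_rate, shared_rate)

    # An "up to 8 Mbit/s" service, 3.5 km of copper, 50:1 contention,
    # with 5% of subscribers downloading simultaneously at peak time:
    print(round(estimated_speed(8, 3.5, 50, 0.05), 1), "Mbit/s")  # about 2.4

On these made-up numbers, a customer on an 'up to 8 Mbit/s' package sees just over 2 Mbit/s – much like my own experience described above.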

But none of this excuses ISPs from the need to be more open and honest with consumers about what they can reasonably expect and what can be done if their expectations are not fulfilled.

In October 2007, the Chairman of the Ofcom Consumer Panel wrote to the top six UK ISPs about the problem and had a series of meetings with them. Then, in December, the Panel submitted its proposals to Ofcom.

The Panel wants to see Ofcom leading discussions with industry to produce an enforceable code of practice that would be mandatory for ISPs. This code would establish agreed processes to give the customer the best information during and after the sales process. Also it would give them flexibility to move freely to different packages that reflect the actual speeds with which their ISPs are able to provide them.

The code of practice should include a series of specific commitments from ISPs.

The Panel has asked Ofcom to make information publicly available to consumers on its website. This information would help consumers understand the technical issues affecting their broadband speeds over which they have some control. It would also provide quality of service information to assist in their decision over which ISP to opt for.

The Panel also wants the advertising of broadband speeds to be tightened up. It will be requesting that the Advertising Standards Authority, working with industry, considers how the range of factors affecting broadband speeds can be given much greater prominence in advertising material.

In an initial response to the Panel's proposals, Ofcom Chief Executive Ed Richards stated that the regulator's “initial proposals” are “very much in line” with those of the Panel. He announced that Ofcom has already started initial discussions with leading ISPs and wants to see that “any measures are implemented in the shortest time frame possible”.

The latest Ofcom survey of “The Consumer Experience” generally shows sustained high levels of satisfaction with communications services. The exception is broadband services. Overall satisfaction has fallen from 92% to 88%, while the figure for those very satisfied has fallen from 44% to 38%.

The writing is on the wall: ISPs need to up their broadband game and be more open with their customers.

Link: letter from Ofcom Consumer Panel to Ofcom Chief Executive click here


BT's next generation network project was always ambitious, so maybe it's no surprise that something of a rethink has now taken place on implementation, as Roger Darlington reports.

THE NEW PLAN FOR THE 21CN

Around the globe, backbone or core telecommunications networks are being replaced by new networks which use the Internet Protocol (IP) that is at the heart of the Internet. Such a new core network is called a Next Generation Network (NGN).

The one currently being developed by BT is called the 21st Century Network (21CN) and the public face of this programme is a web site called “Switched On”.

The purpose of the project is to move all the company's networks dedicated to particular services (around 12, depending on definitions) to a single network deploying the Internet Protocol. Total investment will be of the order of £10 billion, although a proportion of that money would have been invested even without 21CN.

BT has presented itself as something of a first mover and world leader in NGN.

When an indicative timetable was first announced in June 2004, the plan was that mass migration would commence in 2005, up to half of PSTN customers would have migrated by the end of 2007, and the whole programme would be completed by the end of 2011.

The intention then was that customers would notice nothing as they were switched from the old to the new network with no noticeable change in quality and no introduction of any new services.

Since then, BT has been consulting with industry and has gained real-life experience of customer migrations in the field, and a review has taken place in the light of industry input and the early customer feedback.

Consequently, in November 2007, BT proposed to industry (via the Consult21 process) and Ofcom that it would allow for extended periods of voluntary customer upgrades to new services rather than an early forced migration and make available the offer of a new broadband service when a customer switches.

The company insists that the fundamental objectives of the programme are the same. Also it believes that it is still a year or two ahead of other leading telcos around the world, although it concedes that some niche players are ahead in some respects.

The proposed new approach has largely come about as a result of the experience of the initial test phase known as Pathfinder that has been taking place in the Cardiff area, together with consultation with industry and recognition of evolving new technologies. The test phase commenced at Wick on 28 November 2006 when the first customers were put on the new network.

The programme gradually expanded to include all the analogue telephone lines connected to the Wick exchange and subsequently the Bedlinog exchange – a total of around 1,100 customers. However, BT had originally intended to migrate about 350,000 customers in South Wales by December 2007.

The new 21CN roll-out proposal is fundamentally different from the one originally conceived and announced:

Originally the plan was to migrate customers forcibly one geographical location after another in accordance with a national plan. Now BT is proposing that Communications Providers would be invited - for an extended period - to switch or have their customers switch to the new network voluntarily where these new services are available, although at some point down the line there will need to be a national migration date, followed by a planned legacy platform and service withdrawal.

Originally moving from the old to the new network would have involved no difference in services. Now the plan is to offer those switching to the new network ADSL2+, which will provide broadband service up to 24 Mbit/s compared to the current ADSL offering of up to 8 Mbit/s (in both cases, of course, actual speeds depend very much on distance from the exchange and other factors).

The launch of the new ADSL2+ service will be from April 2008 when the new service will be available commercially to around 5% of the UK marketplace, rising over the next 12 months to around 50%. This compares to the original plan when large scale migration was intended for 2006. BT will not give an end date for the new timetable as it is still the subject of consultation with industry, but it might reasonably be assumed to be somewhere after 2011 (the original end date).

Links:
NGN UK click here
BT's 21st Century Network click here
Switched on click here


We're no longer all watching the same television programmes at the same time – but does this matter? Roger Darlington explores the issues.

OUR FRAGMENTING TELEVISION PICTURE

As a teenager, I lived in a district of south Manchester where a converted church housed the BBC studios which broadcast the immensely popular programme “Top Of The Pops”. The show began in 1964 and ran for over 2,200 performances and an incredible 42 years, only finishing in 2006 as its audience shrank and shrank.

In the 1960s, a programme like “Top Of The Pops” – or a popular soap or a dramatic documentary – could attract an audience of between 15 and 25 million. People would watch the same programme at the same time and often discuss it at work the following morning.

But no more. Most television viewers now have a choice of hundreds of channels through digital technologies, whether delivered by cable, satellite or terrestrial transmission. Even if some of them are watching the same programmes, they may not be watching them at the same time thanks to personal video recorders, catch-up TV on broadband, and the wide availability of DVD sets.

Some commentators talk of the simultaneous viewing of a programme by large numbers as “common spaces” and argue that the retention of such “common spaces” is vital to social cohesion and our sense of community.

It is part of the argument for public service broadcasting as opposed to leaving everything to commercial forces. It is part of the argument for retention of the BBC licence fee even when fewer people are watching BBC channels.

But does this fragmentation of television viewing matter and, even if it does, is there anything we can do about it?

Point one: why should broadcasting be different from other media in terms of “common spaces”?

After all, we don't all read the same newspapers or books or view the same films or plays. And, when we were all watching very similar programmes, arguably they represented a rather narrow, white, middle-class view of Britishness.

Mass audiences for TV programmes were only a phenomenon of the 1960s and 1970s – before then we didn't all have TVs and after then we saw the growing impact of multi-channel TV.

So one could argue that “common spaces” in broadcasting was a sector-specific, culturally-specific and time-specific experience that is no longer appropriate or desirable.

Point two: is the loss of “common spaces” in public service broadcasting that complete?

We still watch lots of television – on average some three and a half hours per person per day. And we still watch a lot of the main channels – even in multi-channel households, two-thirds of all viewing is on the five major channels.

No programme regularly wins the viewing figures that “Top Of The Pops” – or “Coronation Street” – used to clock up in its heyday, but each evening the BBC and ITV news programmes at 10 pm have a combined viewing audience of around 8 million and, at times of national crisis (like the London bombings of 7/7), viewers still flock to the BBC.

Point three: are we wrong to think of traditional broadcasting in isolation?

People now watch what we still call television not just on TV sets but on PCs, mobiles, and iPods and the arrival of faster broadband and IPTV will accelerate this trend.

Furthermore, the boundaries between media are blurring. Popular television programmes have their own web sites and even blogs, and they place or stimulate coverage in newspapers and magazines. Think about the Shilpa Shetty row on “Celebrity Big Brother”: few people saw it as it happened, but millions learned about it through replays on television or discussions on the radio or coverage in the newspapers.

Point four: ultimately don't consumers want greater choice about what and when and how they view audiovisual material?

Consumers are voting through the remote control and the mouse by selecting what to watch and when and how it suits them. Consumers are choosing through the marketplace by taking up digital television in ever greater numbers and purchasing new devices that give new forms of access to ever greater volumes of material.

In doing so, consumers are asserting their individualism. While I am recording “Lost”, my wife is recording “Location, Location”. Neither of us is satisfied with the volume of coverage on the BBC or ITV of the American presidential primaries, but we can watch CNN or Fox News and access the web sites of the “New York Times” or the “Washington Post”. On the whole, it is good news.


The pioneer of next generation broadband in the UK is not who you expect. Our columnist Roger Darlington reveals a dark horse leading the way.

WHO'S BRINGING US NGA?

The technical term is next generation access (NGA), but a more user-friendly term is super-fast broadband. We're talking here of download speeds of up to 100 Mbit/s.

The debate on NGA for the UK has picked up tempo but there's still very little happening on the ground compared to many other countries.

In March, Connect made a valuable contribution to the debate by publishing its booklet: “Connecting Britain's Future: The Slow Arrival Of Fast Broadband” [for text click here]. This is the most intelligible guide to all the main issues that has so far been produced.

In June, several pieces of new research are to be published to coincide with a one-day conference organised by the Broadband Stakeholder Group (BSG). One research report by Plum Consulting is on the economic and social value of NGA, while a further report from the Analysys Mason consultancy addresses the case for public sector intervention.

Two further studies are in progress: Analysys is looking at the case for fibre to the cabinet (FTTC) vs fibre to the home (FTTH), while Ofcom is reviewing the prospects for duct sharing. Following a six-month review commissioned by the Government, a major report will come from former Cable & Wireless CEO Francesco Caio some time in the Autumn.

So we have plenty of reports – but still very little action. Why?

One of the reasons why next generation access is not happening anywhere near as fast in the UK as in many other countries is that this is a disruptive technology and none of the existing actors has a real incentive to move.

Take BT. It owns the current copper network and it wants to 'sweat' those assets by extracting the maximum potential from them. ADSL technology has enabled it to provide up to 8 Mbit/s over copper and now ADSL2+ promises to deliver up to 24 Mbit/s (for a few anyway).

What about the alternative network operators (altnets)? Well, in the last few years, they've invested millions in local loop unbundling (LLU) and the move to NGA would undermine the value of those investments. Unbundling in the NGA world would not be at the exchange, but at the cabinet, with considerable technical and economic implications.

Then there is the regulator Ofcom. It has spent years constructing a model of competition based on the functional separation of BT through Openreach. If we moved to NGA, this separation model would need to be revisited big time and completely new regulatory remedies – sub-loop unbundling and active line access – would have to be negotiated.

So, who is going to pioneer NGA in the UK?

Virgin Media – which owns the cable networks passing around half of UK homes - has announced an upgrade to its local networks that will enable the launch of a 50 Mbit/s broadband service (plus an upstream speed of around 1.5 Mbit/s). It is intended that this will be available to around 70% of its customer base by the end of 2008. But this is not fibre to the home (FTTH).

The immediate prospect for the deployment of FTTH is in the Ebbsfleet Valley part of the Thames Gateway project in Kent. BT Openreach will supply the infrastructure, but BT Retail and its competitors will be offered access to the high speed lines on a wholesale basis.

The top available speed will be 100 Mbit/s. This is expected to start in August 2008. However, it will initially be limited to around 600 new houses. The development will eventually have some 10,000 homes but the project could take until 2020 to complete.

But, if you want to see an early and significant use of fibre, you have to look to an unusual source.

H2O Networks Ltd, the pioneer of providing fibre connectivity via 360,000 miles of sewers, has announced that the UK's first Fibrecity will be Bournemouth. Work will begin on the deployment of the fibre within the next few months.

This will be the largest Fibrecity project in Europe and the company will be funding and providing the network at a cost of around £30 million. The fibre will provide ultra high bandwidth to all Bournemouth's businesses and more than 88,000 homes at speeds far exceeding current DSL or cable modem speeds.

Links:
“Regulatory Challenges Posed By Next Generation Access Networks”, Ofcom discussion document, November 2006 click here
“Pipe Dreams? Prospects For Next Generation Broadband Deployment In The UK”, Broadband Stakeholder Group, April 2007 click here
“Future Broadband - Policy Approach To Next Generation Access”, Ofcom consultation document, September 2007 click here


Users of the web want high privacy and low prices, but sometimes there might be a trade-off between the two, as our Internet columnist Roger Darlington explains.

SHOULD WE BE WORRIED ABOUT BEHAVIOURAL TARGETING?

As broadband prices continue to fall but there remains a need for new infrastructure investment, Internet service providers (ISPs) continue to look for new sources of revenue. One of the newest possible sources is also proving to be one of the most controversial: behavioural targeting.

The idea is that your ISP will install special software in its network which will intercept web site requests that you make as you roam around the Net. The software will then scan these pages for key words in order to build up a profile of your interests and then use this information to target you more accurately with online advertising that you are likely to find of interest.

So, for example, you might access sites which include the words 'Cyprus', 'hotel' and 'flight'. When you later look at sites that carry online advertising, you are more likely to see offers of flights to Cyprus or hotels on the island.
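To make the mechanism concrete, here is a toy sketch in Python of that kind of keyword profiling. The category lists, the matching rule and the page text are all invented for illustration; real systems such as Phorm's are far more sophisticated and claim to hold only an anonymised identifier:

    # A toy sketch of keyword-based interest profiling. The categories and
    # keywords are invented purely for illustration.

    from collections import Counter

    CATEGORIES = {
        "travel":  {"cyprus", "hotel", "flight", "holiday"},
        "finance": {"mortgage", "loan", "savings", "pension"},
    }

    profile = Counter()  # interest scores for one (anonymised) user

    def scan_page(page_text):
        """Score a visited page against each category's keyword list."""
        words = set(page_text.lower().split())
        for category, keywords in CATEGORIES.items():
            profile[category] += len(words & keywords)

    scan_page("compare flight and hotel deals for a cyprus holiday")
    scan_page("late flight offers to cyprus")

    # The best-scoring category decides which adverts are shown later.
    print(profile.most_common(1))  # [('travel', 6)]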

Such online advertising – because it is targeted – will be more effective, so more can be charged for it and ISPs using behavioural targeting software will receive a share of the extra revenue.

The companies leading the way in providing such behavioural targeting software are Phorm (previously 121Media) [click here], NebuAd [click here] and FrontPorch [click here]. Phorm has managed to sign up the three biggest ISPs in Britain: BT, Virgin Media and TalkTalk. Between them, these three ISPs account for around 70% of UK Internet subscribers.

So what's the problem? Privacy campaigners worry about how much information ISPs will have on our surfing behaviour and how they will use that information.

Here in Britain, computer security expert Richard Clayton is not happy and the Information Commissioner has queried aspects of the system. The European Commission is monitoring the situation. In the USA, 15 pro-privacy organisations have written to the House of Representatives demanding public hearings on the use of such technologies.

Consumers were not thrilled to learn that BT had trialled Phorm without advising the relevant customers. Some 36,000 customers were included in this first trial in September-October 2006 which may technically have breached the law. A second trial involving some 10,000 customers is due shortly and this time BT promises to make an announcement.

However, Phorm claims the technology does not gather personally identifiable information, does not store IP addresses, search terms or browsing histories, and only sees users as a unique, random number. So it argues, unlike search engines or most websites, Phorm's technology cannot know who users are or where they have browsed.

Phorm states that its privacy claims have been validated under best industry practices, both through an independent audit conducted by Ernst & Young and a Privacy Impact Assessment undertaken by Simon Davies, MD of 80/20 Thinking and Director of Privacy International.

Providers of behavioural targeting software insist that search engines like Google already hold far more data on users without there being an outcry.

A key part of the debate is whether consumers should have to 'opt out' of the system or 'opt in' to it. In the United States, two Congressmen have questioned the 'opt out' approach used by the NebuAd system that is being used by Charter Communications, the country's fourth-largest ISP. Here in the UK the Information Commissioner has suggested that such systems should be 'opt in'.

While the 'opt in' model would obviously be welcome to privacy campaigners, this would probably reduce the use of the software to levels that render it uneconomic, unless consumers can be incentivised to opt in, for instance through lower prices. BT is following this route by offering users of its free Webwise service [click here] extra levels of security, such as anti-phishing protection, in return for accepting more targeted advertising.

There seems little doubt that behavioural targeting is here to stay and will grow in use. The solution to the current controversy is for those companies using such systems to be much more open and honest with their customers and to communicate much more fully what the systems do and do not do. Ultimately customers need to be in control if they are to be content and this points the way to 'opt in' systems.

Links:
"Economist" article "Watching while you surf" click here
Federal Trade Commission testimony click here


In such a short period of time, the Internet has become a central part of our lives. But, as our columnist Roger Darlington explains, there is more – a lot more – to come.

WHAT NEXT FOR THE NET?

There is no historic precedent for the speed and influence of the development of the Internet. Although originally conceived as a military communications network called the ARPAnet in 1969, the World Wide Web - the graphical part of the Internet - was only invented (by a British scientist) in 1989 and arguably the Internet really 'took off' in 1993 when its use doubled to more than 25 million people.

Yet future developments promise to transform the Net as we understand it today. Consider – in no particular order – just seven changes that we know are just round the corner (no doubt there will be others that take us by surprise).

1) The reach of the physical infrastructure will become genuinely global.

East Africa remains the only large, inhabited coastline cut off from the global fibre-optic network that is the heart of the modern Internet. Reliant entirely on expensive satellite connections, people on the world's poorest continent pay some of the highest rates for logging on. But, in the next couple of years, three new undersea cable systems will give millions of African Net users the sort of experience and prices that we take for granted. [For more information click here].

2) The number of users will explode.

Today there are around 1.4 billion users of the Net. But the world population is currently 6.7 billion, so only just over a fifth of the global citizenry is on-line. By 2020, the world population is expected to be over 7.7 billion. So there is plenty of scope for growth in the number of Net users and, thanks to what economists call a network externality, every new user benefits every existing user. [For statistics on world population click here].
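The externality point is often illustrated by what is known as Metcalfe's law – a standard back-of-the-envelope argument rather than anything the figures above depend on: the number of possible connections between n users grows as n*(n-1)/2, so the value of the network grows much faster than the user base. A minimal sketch in Python:

    # Metcalfe's law as a rough illustration of network externality:
    # n users can form n*(n-1)/2 distinct pairwise connections.

    def possible_links(users):
        return users * (users - 1) // 2

    today = 1_400_000_000        # roughly today's 1.4 billion users
    doubled = 2 * today

    print(f"{possible_links(today):.3e}")    # about 9.8e17 connections
    print(f"{possible_links(doubled):.3e}")  # about 3.9e18 connections

Doubling the user base roughly quadruples the possible connections – which is the sense in which every new user benefits every existing one.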

3) Broadband speeds will leap.

Basic broadband with an always-on connection could be thought of as 512 Kbit/s, but already BT's ADSL2+ service is providing up to 24 Mbit/s and Virgin Media's cable network can offer 50 Mbit/s. Next generation broadband systems – using fibre to the cabinet (FTTC) or fibre to the home (FTTH) – will soon routinely provide speeds of 100 Mbit/s and more. BT's recent announcement of a £1.5 billion investment to bring next generation access to up to 10 million households by 2012 should kick-start the NGA roll-out in the UK. [For information on next generation access click here].

4) Much more content will be available.

We think that all information is now on-line – but this is far from true. The bulk of human knowledge (one estimate is 85% of published information) remains off-line and the challenge is to change that as quickly as possible. I was recently contacted by a researcher who wanted on-line access to a telecommunications study I wrote in 1979 and, of course, it is simply not available on the Net. More significantly, many public records and scientific studies are still not accessible.

5) Many more services will be on-line.

Ten years ago, Google was merely two guys in a garage; now it is the world's most used search engine (and much more), employing 20,000 people and worth tens of billions of dollars. The next decade will see many similarly transformative services being developed. As the Web gets much bigger, perhaps we will have a search engine that understands better our personal needs and interests and the context and meaning of search terms.

6) The Net will go mobile.

Currently most people access the Net via a PC but worldwide many more have a mobile than a computer. Therefore Internet-enabled mobile phones will allow millions in Africa, Latin America, China and the Indian sub-continent to gain access to the Net for the first time. Even those with a PC will find that smart mobiles provide faster, easier and more ubiquitous access and, after less than a year with an iPhone, I could not imagine not having the Net in my pocket.

7) The Net will become more multilingual.

Most of the content of the Net is still in English – which is fine for me but useless for the majority of the world's population. Over the next few years, there will be much more content in Russian, Mandarin, Arabic and other languages. Then we will see the development of much better automatic language translation tools so that any of us can read any language quickly and accurately.

In short: we ain't seen nothing yet.

Link: views of the 'Father of the Internet' click here


Mobile has totally revolutionised the communications marketplace and further dramatic changes are on the way, explains our columnist Roger Darlington.

THE MOBILE REVOLUTION

My first published piece on mobile was a study called “Telephones On The Move” written for the then Post Office Engineering Union (POEU) in May 1984. The booklet majored on the award of two new cellular licences to a British Telecom-Securicor consortium (now O2) and a Racal-Millicom consortium (now Vodafone) and pointed out that, at that time, the number of users of what we then called radio telephones was a mere 315,000.

The changes since then have been truly breathtaking. In so many respects, the UK mobile industry has been an outstanding success story, and the future for mobile looks equally exciting.

Yet, in spite of all these achievements, regulators and consumers still have some problems with the UK mobile industry and, as a result of these problems, the industry is coming under the spotlight from several quarters.

In most European countries (and in many other countries around the world), the incumbent fixed line operator is one of the major players in the mobile market. The UK is the only major European market where the incumbent fixed line operator does not own a mobile network. Should BT have sold O2? Will it ever buy a mobile network?

The mobile story is far from over.

Link: Ofcom's mobile sector assessment click here


Well before everyone in the world is connected to the Net, a much more ambitious project is under discussion, as our columnist Roger Darlington explores.

THE INTERNET OF THINGS

One way of looking at the evolution of the Internet is to see it in three stages: first, a fixed Net essentially connecting desktop PCs; second, a mobile Net connecting hand-held mobiles; third, what we call the Internet of things.

This is not a new concept – it goes back to a body called the Auto-ID Center [click here], which was founded in 1999 and based at the time at MIT. At that time, the vision was the widespread use of Radio Frequency Identification (RFID) chips to locate products in a company's supply chain.

What gives the idea new potency is the increasing adoption of a new version of the addressing system at the heart of the Internet. The current one is called Internet Protocol version 4 (IPv4) [click here] and the limited number of addresses that it can generate has led many to forecast that the pool will be exhausted as soon as 2010.

What will replace it is something called IPv6 [click here] which will represent a paradigm shift in the addressing system. While IPv4 'only' supports 2 to the power of 32 addresses, IPv6 provides 2 to the power of 128 which is over seventy nine billion billion billion times more than IPv4.

Put another way, this is roughly 2 to the power of 95 addresses for each of the 6.5 billion persons on the planet. In other words, every human on the globe could have a personal network the size of today's Internet.
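Python's arbitrary-precision integers make it easy to verify the arithmetic in the last two paragraphs:

    # Checking the IPv4/IPv6 address arithmetic with exact integer maths.

    ipv4_addresses = 2 ** 32
    ipv6_addresses = 2 ** 128
    world_population = 6_500_000_000   # the 6.5 billion figure used above

    # 2**96, about 7.9e28: the "seventy nine billion billion billion" factor.
    print(ipv6_addresses // ipv4_addresses)

    # About 5.2e28 addresses each - roughly 2**95 per person, as stated.
    print(ipv6_addresses // world_population)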

In practice, the Internet of things might well encode up to 100,000 billion objects and follow the movement of those objects. It is estimated that every human being is surrounded by 1,000 to 5,000 objects.

Who on earth would want to use such a system?

Well, in September 2008, a range of powerful companies founded something called the Internet Protocol for Smart Objects (IPSO) Alliance [click here] to promote just this vision. Currently it has almost 30 members, including such players as Cisco, Ericsson and Sun Microsystems.

But what would an Internet of things actually do?

Given the twin challenges of climate change and energy supply, the early adoption of the Internet of things is likely to be driven by utility companies to assist energy management. So, for instance, electricity companies might offer you discounts if they are allowed to access your washing machine or dishwasher so that they can shut them down for a few minutes to manage peak energy usage.

Retail companies will then be massive users of the Internet of things to encode batches of products or even individual high-value products to assist stock management and to deter theft.

The state will find many uses, whether for traffic management and theft control (because all cars are encoded) or for flood control (because sensors are placed along river banks and flood plains).

Individuals will then embrace the idea so that they can restock basic food items automatically or identify food that has passed its 'eat by' date or control the heating, lighting, or security of their homes from any remote location – even on the other side of the world during a business trip or a holiday.

Imagine if the activation of a smoke detector automatically turned off all your gas appliances and signalled to your mobile or a fire station. Eventually it will not just be your valuables – purse, wallet, mobile, car – that will be encoded; so will all your clothes and all your books.

We already put RFID chips in animals – so farmers can track their cows or a person can locate their pet. In time, we might even think of implanting RFID chips in people. If this seems bizarre, imagine if a worried parent could locate her child anywhere at any time; imagine if a doctor could identify your blood group and allergies if you passed out in the street or abroad.

The implications of the Internet of things are profound. Like the current Net, no single authority will design or control it and no individual will be able fully to opt out.

There will be enormous benefits - on the global scale in terms of things like energy conservation, at the government level for control of crime and enhancement of security, and on the individual level including for those with special needs or disabilities. But there will also be risks – especially concerning privacy and data protection. So we should start debating the idea now.

Links:
Wikipedia page click here
"Guardian" article click here
International Telecommunications Union report click here
European Commission consultation document click here


In his 50th column for Connect, our Internet specialist Roger Darlington addresses what is probably the most central issue posed by the Net.

THE CHALLENGE OF DIGITAL INCLUSION

In spite of great progress in Britain in getting people online, the stark reality is that half of all citizens do not have broadband, a proportion that is shrinking only slowly, while one in three citizens has no home Internet connection at all, and that figure is effectively static.

This problem of digital inclusion can be seen as consisting of two linked but different challenges: access and take-up.

Let's start with access. There are three main issues here.

First, can a citizen obtain a reliable connection to the Internet? BT claim that 99.6% of homes and businesses are connected to ADSL-enabled exchanges but some - like the Communications Management Association (CMA) - have challenged this. There are still a number of so-called 'not spots' and it would be helpful for Government to both quantify and locate the homes in such 'not spots'.

Second, once a citizen can gain access to current generation broadband, how practical is it to obtain faster speeds as and when these are required? Currently we appear to have three broad categories of access: those limited to somewhere between 0.5 and 2 Mbit/s, those who theoretically can access up to 8 Mbit/s, and those who theoretically can access up to 24 Mbit/s (ADSL2+). All new technologies take time to roll out to all parts of the country, but there should be concern that a range of digital divides is now opening up.

Third, there is growing debate about the provision of next generation broadband - technically known as next generation access (NGA) - which is likely to be based principally on optical fibre and will provide access to speeds of up to 100 Mbit/s and beyond. Understandably the Government's Digital Inclusion Action Plan is primarily about current generation broadband, but it is not too early to start thinking now about the danger of new, bigger digital divides opening soon if a purely commercial approach is adopted to the roll-out of NGA.

Now let's consider take-up. Again there are three important issues.

First, there are practical barriers. These include the cost of Internet access and of a PC or other device for connecting to the Net. Cost is becoming less of a barrier to take-up, but it is still a factor in lower income households and for some older persons.

Second, there are attitudinal barriers. These include a failure to see the relevance of the Net to one's personal life circumstances and fear of both how to use a PC and of malware on the Net. These barriers are of major importance to the sectors of the population which have still not connected to the Net.

Third, there are usability issues. Even when cost and attitude are not problems, a significant number of our citizens have usability problems in relation to the Net because of a variety of physical or mental impairments. These problems have not had the attention they deserve.

The Government's consultation document on its Digital Inclusion Action Plan makes four proposals.

These proposals are welcome, but we could be more radical. We could have a Government-sponsored 'one stop shop' for advice on IT issues modelled on Consumer Direct and NHS Direct. We should review the case for using the Digital Switchover Help Scheme as a means of accessing and supporting those not on the Net.

Another idea might be a major take-up campaign run - across all its outlets - by the BBC, perhaps using money from the 'switchover levy' on the licence fee which looks unlikely to be used fully in the Help Scheme. A related campaign – to be run across all Government Departments and in conjunction with local authorities – should highlight how many public services are now online and the benefits of accessing those services via the Net.

In fact, the tailing off of Net take-up is a problem common to all the advanced industrial economies and we should look at what other such countries are doing and see if any schemes or initiatives are transferable to the UK.

Link: "Delivering Digital Inclusion: An Action Plan For Consultation" click here


The Internet was never conceived or designed for its current uses, so our columnist Roger Darlington considers a fundamental question.

DO WE NEED A NEW INTERNET?

The Internet started as a private network that was publicly owned, but now it is a public network that is privately owned. These changes have thrown up profound problems that have led some to suggest that we need to start again with a new Net.

To assess the grounds for the argument, let's go back to the beginning.

The Internet was originally called the ARPAnet because it began with the US Department of Defense's Advanced Research Projects Agency in 1969. This government-funded network was designed to enable military communications to survive a nuclear attack by using distributed nodes and packet switching.

The Internet gradually migrated from the military to the academic community, but it was still the preserve of a small number of trusted users who would never have wanted to hurt other users or damage the network. [For a history of the Internet click here]

Today all that has changed.

No single organisation owns or controls the Net. The physical infrastructure is provided by hundreds of organisations, most of them private companies. And, instead of a few hundred (originally American) academics, we now have some 1.5 billion users all around the world and that number is growing daily.

The Internet has utterly transformed individual lives, created huge new businesses, and stimulated national economies. It is overwhelmingly a power for good – and yet ...

Some users are selfish: around 90% of all e-mail traffic on the Net is now spam, which increases operating costs and slows down speeds for us all. Some users are plain evil and create and distribute child abuse images online, which is why we have organisations like the Internet Watch Foundation [click here] which I chaired for six years.

Some users are malevolent and infect the Net with what is collectively called malware, one of the most dangerous being botnets when individuals' computers are taken over for nefarious purposes unknown to the owner of the PC. The latest fear is over something called the Conficker.B worm which is said to have infected some 12 million computers.

The crux of the problem is that the current Internet gives anonymity to users and changing that could be perceived as a threat to privacy. So what is to be done?

A recent feature in the “New York Times” [click here] asserted that ".. there is a growing belief among engineers and security experts that Internet security and privacy have become so maddeningly elusive that the only way to fix the problem is to start over. What a new Internet might look like is still widely debated, but one alternative would, in effect, create a “gated community” where users would give up their anonymity and certain freedoms in return for safety."

The piece referred to the Clean Slate project being carried out at Stanford University in the United States [click here]. Here researchers have set up a closed user network codenamed Ethane which will eventually form the basis of new corporate networks. But what about the rest of us?

In his weekly column written for the “Observer” newspaper [click here], John Naughton took a pessimistic, if pragmatic, view on the problem: “.. we're stuck with the trade-off between the creativity, innovation - and, yes, insecurity - that comes with openness; and the security - and stagnation - that comes with a tightly-controlled network”.

So, do we need a new Net? Of course, we do. But we aren't going to get one – just some technical improvements, such as the long-awaited introduction of the new protocol known as Internet Protocol version 6 (IPv6) [click here] that would fix many of the shortcomings of the current IPv4.

That leaves it down to us, the end users, to take the precautions necessary to protect ourselves from spam, scams, viruses and various malware.

An excellent source of advice is the Government-supported web site Get Safe Online [click here]. What would be even better would be a hot line advice centre on all IT problems along the lines of NHS Direct or Consumer Direct. Who would fund it? I would suggest Nominet [click here] which is the Internet registry for .uk domain names and has surplus funds. It would be a great way for the industry to help its users and ultimately its businesses. An idea for Lord Carter and the final report of the Digital Britain project?


Super-fast broadband – or next generation access as it is properly known – will not be delivered by a single technology. Our communications specialist Roger Darlington looks at the main options.

HOW WILL NGA BE DELIVERED?

There are many options using optical fibre which are collectively known as FTTx where FTT stands for 'fibre to the ..' But, in the context of residential customers, two options are especially likely:

The first option is fibre to the cabinet (FTTC).

In this model, fibre runs from the local exchange to the street cabinet and the active electronics are installed in the cabinet. The link from the cabinet to the customer's home remains the existing copper loop. Typically FTTC would offer a download speed of 50 Mbit/s with a much slower upload speed. The vast majority of BT's roll-out of NGA will use FTTC.

The second option is fibre to the home (FTTH).

In this model, fibre runs all the way from the local exchange to the customer's home. There are two main types of FTTH model:

a) In a passive optical network (PON), a single fibre from the exchange serves multiple customers by having its capacity divided or split. Typically this would provide each customer with a download speed of around 80 Mbit/s and technically the upload speed could be similar. This is the approach being used by BT at the Ebbsfleet development and other new build environments.

b) In a point to point (P2P) fibre network, each customer has a dedicated fibre connection to their premises. This allows virtually limitless and completely symmetric access speeds to be offered. This is the topology being used in countries like Hong Kong, Korea, and Japan and by Verizon in the United States.
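The 'around 80 Mbit/s' figure for the shared (PON) option in a) is simple division. In this Python sketch, the line rate and split ratio are typical GPON values, assumed here purely for illustration:

    # Why a shared (PON) fibre yields "around 80 Mbit/s" per customer.
    # The 2,488 Mbit/s line rate and 32-way split are typical GPON figures,
    # assumed here purely for illustration.

    gpon_downstream_mbps = 2_488   # total downstream capacity of one fibre
    split_ratio = 32               # customers sharing that fibre

    print(f"{gpon_downstream_mbps / split_ratio:.0f} Mbit/s each")  # 78 Mbit/s

    # A point-to-point (P2P) fibre is not split, so each customer's speed
    # is limited only by the electronics at either end of their own line.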

These different options involve different costs and operational factors for the providers and different access speeds for the customers. FTTH is regarded as more secure than FTTC as it does not require active street cabinets, while its long-term operating costs would be lower than for other technology solutions. However, the up-front costs of deploying fibre all the way to the home would be significantly higher than for FTTC.

For the UK, according to the consultancy Analysys Mason, fibre to the cabinet using very high bit-rate digital subscriber line would cost an estimated £5.1 billion, while fibre to the home using a Gigabit passive optical network would cost an estimated £24.5 billion.

For a cable operator, the technology will be a little different from that for a telco. So Virgin Media – which owns the cable networks passing around half of UK homes - was first off the blocks to provide NGA in advance of BT, and this service uses a technology called DOCSIS 3.0 (DOCSIS stands for Data Over Cable Service Interface Specification, which is an international standard).

Effectively this is a kind of FTTC since it utilises fibre to a cabinet and coaxial cable to the home. Virgin is offering a 50 Mbit/s service (plus an upstream speed of around 1.5 Mbit/s).

Besides these wireline technologies, a number of wireless technologies might play a role in delivering next generation access in specific (typically rural) locations, if sufficient appropriate spectrum can be made available to support such services. Such technologies may well include WiMax (Worldwide Interoperability for Microwave Access) and WiBro (Wireless Broadband).

Another promising wireless technology is Long Term Evolution (LTE), a mobile technology otherwise known as 4G. Although formally the relevant technical standard does not yet exist, LTE will offer massively more bandwidth than 3G – peak download rates of 326.4 Mbit/s – but it suffers from the same physical constraints as 3G: it requires masts, and of course hills and valleys are still significant obstacles.

DoCoMo in Japan and Verizon in the USA are leading the charge in the proposed use of LTE.

Finally, satellite will be a fill-in option for locations that terrestrial options cannot serve, but it will be a rare choice in view of the costs involved and the relatively long delay in signal transmission. Although not as satisfactory as copper or radio, satellite is almost universally accessible except for a few homes on the wrong side of hills. Also there are no "up to" problems in the advertising of speeds, since all users receive the same service.

It seems likely that satellite will play a larger role in the provision of both current and next generation broadband than it does at present, since the launch of the new HYLAS satellite this year and of the HERCULES satellite some time later should make available download speeds of up to 8 Mbit/s and 50 Mbit/s respectively.


Those of us who use the Net tend to take it for granted that everyone would want it, but our correspondent Roger Darlington explains that we need evidence to make the business case for digital inclusion programmes.

DOES THE INTERNET IMPROVE LIVES?

This is the title of a recent research report carried out by the company Fresh Minds for UK Online Centres. The aim was to strengthen the business case for the sort of excellent work carried out by the more than 6,000 centres across the nation.

We know that a third of the UK population – some 17 million people over the age of 15 – are digitally excluded as a result of not having an Internet connection at home and that some 70% of this group are in social groups C2, D and E. Some 36% are both C2DE and over 65.

So the Fresh Minds research involved a quantitative telephone survey of 810 respondents based on random samples of Internet users and non-users from C2DE groups in England. The aim was to see whether one could identify real benefits to being online.

The researchers acknowledge that the survey results cannot prove a direct causal connection between Internet use and improvements in life experiences, but argue that the differences across most of the survey areas show significant correlation and are highly suggestive of causality. This interpretation of the data is reinforced by insights from focus groups and UK Online Centres' users.

So, what did the research find? It looked at five core areas.

The first was social capital: the extent of people's social lives and their sense of connection, community and civic involvement. The study found that, in response to the statement 'I find it easy to organise social gatherings, such as meetings and parties with friends and family', Internet users were 12% more likely to agree strongly with the statement than non-users.

The second area examined was confidence and quality of life. Internet users were found to rate their self-confidence much higher than non-users.

Third, the study considered employment, where there were some significant results. Over three-quarters of Internet users felt confident of their skills to find a new job compared to only half of the non-user group – a gap of some 25 percentage points.

Next, health. Although speaking to a GP or medical professional was still by far the most common answer when people were asked what source they would turn to first to find out what was wrong with them when they fell ill, 19% of Internet users would turn to the Net in the first instance.

Fifth and finally, the survey looked at the amount of money that Internet users felt that they saved because of their use of the Net as opposed to other methods. Some 48% of Internet users thought that they saved more than £20 a month. A saving of at least £240 a year is significant to consumers in these social groups.

So does the Internet improve lives?

The report concludes: “This study has been able to show an overarching correlation between Internet use and positive experiences across most of our areas of enquiry”.

It points out that, in the midst of a recession, more confidence at finding a new job and savings by making use of online services represent real benefits, especially for the social groups least likely to be online. At a time when government is trying to make savings on public services, the evidence that Internet users will use online sources – such as information on health – is appealing.

As Helen Milner – the first class Managing Director of UK Online Centres - puts it in her foreword to the report: “We can build the pipes, we can create the sites, and we can deliver the skills but, unless the 17 million people currently off-line are motivated to take that first step on a digital journey, we will achieve very little.”

Basically those not online fall into two categories: what the report calls the excluded (those who lack access or skills) and the rejectors (those who lack motivation). Or, put another way, the disadvantaged as opposed to the uninterested. These two groups require rather different approaches and programmes.

We now await the Government's impending announcement of a Digital Inclusion Champion and Digital Inclusion Task Force who will be charged with taking forward this vitally important agenda and promoting relevant new programmes.

Links:
UK Online Centres click here
Fresh Minds report "Does the Internet Improve Lives?" click here
"Delivering Digital Inclusion: An Action Plan For Consultation" click here


Our columnist Roger Darlington explains his love affair with the iPhone.

HOW THE SMARTPHONE CHANGED THE WORLD

Over the last two decades, the mobile phone has become ubiquitous. Here in the UK, there are more mobile contracts than there are men, women and children and 86% of adults own a mobile.

In the last few years, however, we have seen a huge impact of the smartphone, as it moved from the business market into the consumer market, and as it developed dramatically increased flexibility and functionality. We can now see the smartphone as a paradigm shift in the consumer experience.

What exactly is a smartphone? There is no formal definition and no industry standard, but the term is usually taken to refer to a mobile phone offering advanced capabilities, often with PC-like functionality.

The first smartphone had the unlikely name Simon; it was designed by IBM in 1992 and shown as a concept product that year at the computer industry trade show COMDEX. Until recently, the most famous smartphone was the Blackberry; this was released by RIM in 2001 and was the first smartphone optimised for wireless e-mail use, rapidly achieving a customer base of millions, mostly business users in North America.

Now we have a growing range of smartphones with the latest products including Apple's iPhone 3GS, Nokia's N97, and the Palm Pre.

So, what's it like to own a smartphone and why is it so life-changing? Well, let me tell you about my love affair with the iPhone. The product was launched in Britain at 6.02pm on Friday 9 November 2007 and I bought mine on the following Tuesday.

For me, the single biggest advantage was that at last, instead of taking everywhere with me my mobile and my PDA (personal digital assistant), I only had to take my iPhone and, although an 8GB device (nothing compared to the 32GB of the most expensive 3GS version), it was light enough to put into my inside suit pocket.

So all my contact details are with me everywhere I go. Today I have over 1,000 contacts and, in each case, I can store mobile and fixed numbers, home and business addresses, company and job title, and any personal details I want to record.

I never used the calendar on earlier mobile phones – the screen was just too small. But the iPhone's screen is 50mm by 75mm so I can easily view a day or month in detail. Naturally there are also all the normal features of a modern mobile: clock, alarm, calculator, notes, and camera (although at 2 megapixels, the original iPhone was a disappointment).

Like any smartphone, the iPhone permits the receiving and sending of e-mail and the surfing of the web. What is special is the virtual keyboard (I was always losing my stylus with my Palm Pilot PDA) and a touchscreen that enables one to expand and contract the size of text simply by 'pinching' with one's fingers.

The maps feature is brilliant. One just types in a post code and the address is pinpointed on a street map and, if you want, a click will take you to a Google Earth shot.

In July 2008, Apple introduced its innovative App Store with both paid and free applications. The App Store was a new way to deliver smartphone applications developed by third parties directly to the iPhone (or iPod Touch), downloaded over wi-fi or a cellular network without using a PC.

The App Store has been a huge success for Apple and delivered its billionth application to users in April 2009. Today the store hosts around 50,000 applications. Let me give you just a feel for the functionality that this gives an iPhone user and all the apps I'll quote are free and downloadable in seconds.

Dictionary.com gives me a searchable dictionary in my hand; Wikipanion provides me with a mobile-friendly version of Wikipedia; Cambio converts from imperial to metric units for all measurements; ITN News gives me the very latest news stories; tvguide tells me what's on all the major channels now and for the rest of the day.

A great app is called Aroundme. It works out where you are and tells you all the nearest banks, bars, cafes, restaurants, hotels, cinemas and so on. For a movie fan like me, there is Flixster which tells me what's on at all the cinemas near where I am and even gives me screening times plus telephone numbers and maps. A very practical app called WC Finder advises you of the location of the nearest public toilet.

As if topping up one's current mobile with all sorts of apps was not cool enough, Apple has come along with a new 3.0 operating system and one simply downloads the new software via a PC. This makes over 100 changes improving speed, function and performance. Now one can turn the virtual keyboard round to landscape; search contacts, calendar and e-mails; and cut, copy and paste text.

When one has a mobile that starts with so much functionality including e-mail and web access, that just keeps getting better and better at no extra cost, and has more and more of the functionality of a PC, then the world has changed.

Links:
Wikipedia page on smartphones click here
how to get the iPhone click here
Stephen Fry's review of the iPhone 3GS click here


Balancing the interests of rights holders and consumers in the new digital world is a policy maker's nightmare. Our Internet columnist Roger Darlington explains what is at stake.

HOW DO WE SOLVE THE COPYRIGHT CONUNDRUM?

In a world in which most copyrighted material takes digital form and most consumers have access to a broadband connection, it is all too easy technically and usually cost-free for users to copy, distribute and adapt copyrighted works. Mainly we are talking here of music files and feature films, but such file sharing – usually through peer-to-peer networks using technology such as BitTorrent – can involve anything from television programmes to books, business software to computer games.

Let us be clear: such file sharing is illegal and morally wrong. Unfortunately it is so easy, and young people especially have become so used to it, that we face a situation where in the UK up to nine million Internet users could technically be criminals.

The first problem in tackling illegal file sharing is that we do not know how extensive it is, what the revenue loss is, and what would be a reasonable objective for a reduction in such activity. None of this stopped the Government in its Digital Britain Final Report from suggesting a target of a 70% reduction.

However, the answer cannot be to criminalise all such everyday activity and to regard a 13-year-old music lover as the same as a gang selling counterfeit Hollywood movies on the High Street. We need a sophisticated and multi-dimensional approach.

The final element of any such approach – technical measures applied through Internet service providers – is the one that will prove overwhelmingly the most controversial and problematic.

The Digital Britain Final Report proposed that Ofcom be given the power to require ISPs to take six technical measures: blocking by site, IP or URL, protocol blocking, port blocking, bandwidth capping, bandwidth shaping, and content identification & filtering.
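
To make a couple of those measures concrete, here is a toy sketch of what site and port blocking amount to in logic; real ISP equipment works deep in the network rather than in Python, and the blocked entries below are invented examples:

    # Toy illustration of two of the six measures: site blocking and
    # port blocking. The blocklist entries are invented for illustration.
    BLOCKED_SITES = {"tracker.example.com"}   # hypothetical blocked host
    BLOCKED_PORTS = {6881}                    # a port classically used by BitTorrent

    def allow(host, dst_port):
        """Return True if traffic should be carried, False if blocked."""
        return host not in BLOCKED_SITES and dst_port not in BLOCKED_PORTS

    print(allow("bbc.co.uk", 80))              # True - ordinary web traffic
    print(allow("tracker.example.com", 80))    # False - blocked site
    print(allow("files.example.net", 6881))    # False - blocked port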

However, in the middle of a consultation process on these proposals, the Government issued a new statement adding suspension of accounts to the list of measures that could be taken. Obviously this is a serious power since the same Digital Britain document argued that broadband access is now so essential to citizenship that there should be a new universal commitment to 2 megabits a second.

The Government statement also made clear that the decision whether or not to require ISPs to introduce these technical measures would not be taken by Ofcom (as originally intended) but by the Secretary of State (currently Lord Mandelson) although Ofcom would still provide the technical information to inform that decision.

In between the rights holders on the one hand and Net users on the other, the Internet service providers – such as BT and Virgin - find themselves sitting uncomfortably in the middle. They do not want to be directly involved in enforcing any new rights regime.

Partly, this is a matter of principle: the European Directive on E-Commerce states that ISPs are “mere conduits” and ISPs do not want to become involved in policing the content that they are carrying. They have been willing to work with the Internet Watch Foundation in combating child abuse images on-line, but the simple viewing of such material is a criminal offence and breach of copyright is a much greyer area.

Partly, it is a practical matter of costs. ISPs operate in a highly competitive market with low margins, but the Government is proposing that costs directly incurred by ISPs be borne by them and that the operating costs of sending notifications be split 50:50 between them and rights holders.

Link: Government consultation on illegal filesharing click here


As we approach the General Election, the regulation of communications is proving to be a surprisingly political issue. Our columnist Roger Darlington explains why.

WHAT SHOULD OFCOM DO?

At one level, the answer is easy: Ofcom should do what Parliament has mandated it to do in legislation that was the subject of extensive consideration.

The Parliamentary passage of the legislation that is now the Communications Act 2003 involved 26 sessions of the Commons Standing Committee and in total 17 days of Parliamentary business representing some 300 hours of debate.

The Communications Act – which created Ofcom and defines its duties - is a formidable piece of legislation: 411 Sections and 19 Schedules running to 590 pages.

Ofcom has an enormously wide range of duties. Indeed Ofcom itself has calculated that in all the regulator has 263 statutory duties, compared to 128 imposed on the five earlier regulators.

Put briefly, what Ofcom does is to regulate telecommunications, broadcasting and spectrum in the UK. Our telecommunications and broadcasting industries are at the heart of an enabled, educated and informed citizenry and together the industries generate some £52 billion of revenues.

So, why the controversy?

On the one hand, we have the current Government proposing in the Digital Britain Final Report - to be enacted in the Digital Economy Bill - that Ofcom be given two new duties.

The first new duty would be to promote efficient investment in communications infrastructure, alongside the promotion of competition, when furthering the interests of consumers. The second new duty would be to report to the Secretaries of State for Business, Innovation and Skills and for Culture, Media and Sport every two years giving an assessment of the UK’s communications infrastructure.

The same legislation will give Ofcom special responsibilities in relation to combating illegal filesharing. Furthermore there are some calls for the regulation of the BBC to move from the BBC Trust to Ofcom.

On the other hand, we have the Leader of the Opposition, in a major speech on the need to reduce the number and role of so-called quangos, insisting: “With a Conservative Government, Ofcom as we know it will cease to exist. Its remit will be restricted to its narrow technical and enforcement roles. It will no longer play a role in making policy. And the policy-making functions it has today will be transferred back fully to the Department for Culture, Media and Sport."

What's going on here?

Part of the explanation is that there is a difference between how Ofcom is perceived to be carrying out its duties in relation to telecommunications and broadcasting.

Ofcom is widely judged to have done well in achieving the settlement with BT that created Openreach and resulted in the rapid expansion of local loop unbundling which has led to a more competitive market and more choice for consumers. It has worked hard on next generation access and there are no major regulatory obstacles to the roll-out of super-fast broadband.

On the other hand, Ofcom's role in the broadcasting sector has proved more controversial. Some people do not like its contributions to the debate on how public service broadcasting should be sustained in an era of multi-channel television and a haemorrhaging of advertising revenues to the Internet. But, in fact, it has not made policy so much as put recommendations to Ministers.

Sky TV does not like Ofcom's interventions in the pay TV market which it dominates, but the regulator had to respond to complaints from Sky's competitors and any regulatory remedies have to meet the principles set out in the Communications Act.

A further part of the explanation may be a lack of clarity over the respective roles of Ofcom and Government Departments and over the dividing lines between regulation and policymaking. This is a difficult area not just for Ofcom but for all regulators, but the industries that Ofcom regulates are of special size and visibility.

My suggestion is that, at the beginning of each new Parliament (and more often if circumstances require), each Government Department should draw up a public policy document for each regulated area within its remit.

This document should spell out the Government's vision and objectives for the sector, what it intends to do to give effect to that vision, what it expects the regulator to do, and how it intends to ensure coordination of the two streams of work. The implementation and operation of the contents of this document should be monitored by the relevant Select Committee.

Links:
David Cameron speech on quangos click here
BIS consultation on new duties for Ofcom click here


Our Internet columnist Roger Darlington welcomes initiatives to bring more people online.

AT LAST: ACTION ON DIGITAL INCLUSION

As long ago as the summer of 2004, I wrote in my column [click here] for Connect about my concern over the slowing down of Internet take-up and floated the idea of one-to-one tutoring through the use of local volunteers. I returned to the subject in the autumn of 2005 in another column [click here] when I emphasized that the digital divide was deepening and again looked at the role of volunteers.

In the autumn of 2007, I wrote for a third time [click here] on the problem when I lamented: "There is a major challenge to Government here. Something like a third of households are unlikely to go on-line anytime soon unless there are some significant support programmes." Then, at the beginning of 2009, I made a fourth effort [click here] to highlight the issue in a column suggesting: "Another idea might be a major take-up campaign.”

So recently I've felt like the man who waits ages for a bus only for three to turn up at once, as there has just been a whole series of welcome announcements about initiatives to tackle the digital divide and to promote digital inclusion and digital participation.

Co-founder of Lastminute.com Martha Lane Fox has been appointed by the Government as the Champion for Digital Inclusion and she is supported by a Digital Inclusion Task Force whose members include Anna Bradley, the Chair of the Communications Consumer Panel (on which I sit).

The Champion and her Task Force have announced the launch of Race Online 2012 [click here]. This is described as "a rallying call to the country to get four million of the most disadvantaged people online over the next three years" – that is, by the time of the Olympic Games in London.

The aim is to be a creative and intelligent ‘hub’ for existing and future work across the public, private, and third sectors. The campaign will highlight and encourage replication of great ideas, amplify the voices of those who might struggle to be heard, and link up people, funders and projects that might not otherwise meet. The use of volunteers will be important.

To provide some economic rationale for this campaign, Martha Lane Fox has published a report prepared for her by the consultants PricewaterhouseCoopers which seeks to make the economic case for digital inclusion. The headline figure is £22 billion for the estimated benefits of getting everyone online in the UK – a combination of education, employment and government efficiencies, and consumer savings.

Meanwhile, following a recommendation in the Digital Britain Final Report, a new Digital Participation Consortium [click here] has been launched under the leadership of the regulator Ofcom. The chair of the consortium is senior Ofcom staffer Stewart Purvis.

The Consortium already has more than 50 members. It aims to increase the reach, breadth and depth of digital technology use and to maximise digital participation and promote its economic and social benefits. The Consortium will encourage people to take up digital communication technologies by providing information, motivation and support. A major social marketing campaign is likely.

Whereas the Champion for Digital Inclusion will focus on those who are not just digitally excluded but also socially excluded – like older people and poorer households – the Digital Participation Consortium will look at all households not yet online – still about one quarter – and at how those who are online can make better use of the Net.

In the welter of recent activity around digital inclusion, Ofcom has published its latest media literacy audit in two parts which provide useful data for the regulator and campaigners.

First, there was “UK Adults' Media Literacy” [click here].

This highlighted that three in four adults used the Net at home or elsewhere in 2009 (75%), compared to two-thirds (63%) in 2007 and three-fifths (59%) in 2005 – but, of course, this means that a quarter of British adults are still not online.

Second, there was “UK Children's Media Literacy” [click here].

This revealed that, when looking specifically at use of the Net within the home, children in D & E socio-economic groups are the only groups not to have experienced an increase in use since 2008, despite the increase in access.

So, at long last, we seem to be taking the problem of digital inclusion seriously and developing some targeted campaigns. But, of course, this is just the start of the digital ladder. Once people start to use the Net, they want faster speeds and more reliability and that means next generation broadband for all.


Super-fast broadband will never reach all parts of the country without some sort of public sector intervention, explains our columnist Roger Darlington.

TAKING FASTER BROADBAND FURTHER

In March 2008, Connect published a booklet entitled “The Slow Arrival Of Fast Broadband” which I drafted for the union in a consultancy capacity [for up-dated version click here]. Two years later, real progress has been made in the roll-out of what is technically called next generation access (NGA) but is more popularly called super-fast broadband (SFB).

Virgin Media has upgraded the whole of its network to provide 50 Mbit/s services, while BT has announced a £1.5 billion investment to cover 10 million premises by 2012, mainly through fibre to the cabinet (FTTC) which would provide services of up to 40 Mbit/s.

However, Virgin's cable networks only cover half the country in population terms and BT's current plans would only reach around 40% of homes. Research conducted by consultants Analysys Mason for the Broadband Stakeholder Group suggests that the market alone is unlikely to take NGA to more than 60-70% of homes [for text of report click here].

So what is the answer? Various Regional Development Agencies, local authorities and community groups are developing local initiatives. On behalf of the Communications Consumer Panel, I have produced a report pulling together information on these and, while there are over 40, almost all are very small-scale [for text of report click here].

The Government's answer was a surprise feature of the “Digital Britain Final Report” published in June 2009 [for text click here]. It is what is popularly known as the next generation levy but the Government now calls the landline duty. This is a charge of 50 pence per month on each fixed line to raise money for a fund that will be used to stimulate investment in the so-called 'final third' of the country that otherwise might not see NGA for decades if ever.

The Conservative Party is firmly opposed to this levy, but the Government seems determined to press ahead with the measure. It was mentioned in the Pre-Budget Report and is expected to be in the Budget. The problem is that a General Election is likely to be called before the Finance Bill has reached the Statute Book and, in those circumstances, there will be tough negotiations between Government and Opposition over what stays in the Bill and what is dropped.

Meanwhile the Government has issued two consultation documents over the measure - effectively one on how the money will be raised [for text click here] and another on how it will be spent [for text click here]. So we have a fairly good idea what it will look like.

The duty will apply to fixed line phone and broadband services – not mobile, satellite or other wireless communications.

Broadband has been included to prevent market distortion arising from phone users moving to Voice over Internet Protocol (VoIP) services for their voice needs. Mobile is excluded because it has developed in a different environment to fixed (that is, without the monopoly incumbent) and it would be difficult to apply fairly (for instance, ensuring equal burden for prepaid and contract users).

The duty will be payable on all local loops that are made available for use regardless of whether they are actually used, regardless of technology (copper, cable and fibre), and regardless of whether voice or data services are delivered over the connection. Liability for the duty lies with the owners of the physical assets – such as BT – because a small number of network owners is easier to identify, and less costly to register, than retailers or the local loops themselves; this approach will also prevent customers being charged twice where two retailers provide services over one line.

The decision whether or how network owners will pass on the tax to retailers is a commercial decision for the network owners to make. The Government expects that the tax will be passed on to retailers and, subsequently, to consumers. Obviously consumers will not welcome such a charge – if it is passed on – but Ofcom figures show that average bills have fallen by more than 50p per month over the last three years.

The Government intends to implement the duty on 1 October 2010. It is expected to raise £175 million per year (£150 million from the duty, £25 million from VAT on the duty) and the intention is that it will be used to help fund the roll-out of next generation broadband to 90% of the UK population by 2017.
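
The arithmetic behind those headline numbers is easy to check. Working back from the quoted £150 million implies a tax base of roughly 25 million liable lines – an inference of mine, not a figure from the consultation documents:

    # Working back from the quoted duty revenue of £150m per year.
    duty_per_line_per_month = 0.50                        # 50 pence
    annual_duty_per_line = duty_per_line_per_month * 12   # £6 per line per year

    expected_duty_revenue = 150_000_000                   # £150m, as quoted
    implied_lines = expected_duty_revenue / annual_duty_per_line
    print(f"Implied liable lines: {implied_lines / 1e6:.0f} million")  # ~25m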

I support the measure and hope to see it implemented – but watch this space.


The Internet has utterly transformed how individuals and organisations access and use information, as explored by our columnist Roger Darlington.

THE DATA DELUGE

Communications technologies have changed our world so much that it's hard to comprehend just how transformative this revolution has been. A useful prism through which to examine some of these changes is information or data.

First, consider the mind-blowing volume of data that is now available and the scary pace at which this volume is increasing.

According to “The Economist”, the total amount of information in existence this year is 1.2 zettabytes (ZB), where a zettabyte is 10 to the power of 21 bytes. The same publication suggests that data is growing at a compound annual rate of 60%, which means that the amount of digital information increases roughly tenfold every five years.
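
The 'tenfold every five years' claim follows directly from the growth rate, as a one-line check shows:

    # 60% compound annual growth, compounded over five years.
    five_year_multiple = 1.60 ** 5
    print(f"{five_year_multiple:.1f}x")   # about 10.5x - roughly tenfold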

These figures are incomprehensible to most of us, so let's try some smaller – but still huge – numbers around sites you know, so that you can grasp something of the scale of the Net.

Take Wikipedia, a wonderful online encyclopedia: today it has over 3.2 million articles in English alone and more than 100,000 articles in 30 other languages. Or take YouTube which hosts short videos posted by anyone: it is estimated that there are more than 140 million such videos and that more than 20 hours of video is uploaded every minute worldwide.

Storage and classification of such exponentially growing volumes of data present huge challenges of both technology and cost.

Next, think about how all this data is used and how it can be abused. Companies use “data-mining” or “business intelligence” to decide how best to meet our needs as customers. So, if you use Amazon, it will track which books or videos you order and even which pages on the site you look at in order to make targeted recommendations and offers. And, if you use Gmail, then the entire content of your mail will be scanned using “content extraction” to enable targeted adverts to be presented to you.
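
For a feel of how such recommendation systems work, here is a deliberately simplified sketch of the 'customers who bought this also bought' idea; the order histories are invented and real systems are vastly more sophisticated:

    from collections import Counter

    # Invented order histories: each set is one customer's basket.
    orders = [
        {"book_a", "book_b"},
        {"book_a", "book_c"},
        {"book_a", "book_b", "book_d"},
    ]

    def also_bought(item):
        """Rank other items by how often they appear alongside 'item'."""
        counts = Counter()
        for basket in orders:
            if item in basket:
                counts.update(basket - {item})
        return [name for name, _ in counts.most_common()]

    print(also_bought("book_a"))   # e.g. ['book_b', 'book_c', 'book_d']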

Wherever we go on the web, we leave what is called “data exhaust”, a trail of clicks that show where we've been, how long we stayed there, and what we did online.

The ability to track and trace online is necessary if law enforcement is to tackle a host of abuses such as spam and scams, hacking and malware, phishing and identity theft, circulation of child abuse images, or planning of terrorist activities.

On the other hand, in cyberspace privacy has become almost meaningless. Already social networking sites – there are more than 400 million active users of Facebook alone – have put a ton of very personal data out there for anyone to see or hack. The move to “cloud computing”, with applications hosted externally rather than on our PCs, will put much more sensitive data on the web.

Finally, contemplate how we access the data that we want to see and block out the data that we don't want to see.

If there is more and more data available to us, how do we decide where to go for it and which of it is most accurate or useful? Most of us use implicit or explicit recommendation tools. Implicit tools include Google's ranking of web pages by the number of links to each page or Wikipedia's facility for anyone to create or amend an entry. Explicit recommendations come from our use of trusted sources, which may be organisations – such as the BBC or the “Guardian” – or individuals – such as someone's blog or Twitter feed.

These techniques for managing data make sense, but they tend to reduce the wonderful serendipity of the web. Following links can take you to interesting new sources of information or insight, yet in practice most people visit a very limited number of web sites or blogs.

An opposite problem is how we block out information that we don't want to see or which we don't want our children to see. Somehow we have to control the volume of e-mails, text messages, telephone calls and other messages that increasingly bombard us. Filtering software can limit exposure of children to inappropriate material online but such software has its limits.

Let's give the last word to an academic researcher who is a friend of mine, Professor Sonia Livingstone at the London School of Economics: “Data, data everywhere, but never time to think.”


Our columnist Roger Darlington examines one of the most divisive debates around the Internet.

SHOULD THE NET BE NEUTRAL?

Net neutrality – as well as being nicely alliterative – sounds as comforting as motherhood and apple pie. How could anyone be opposed to, or even anxious about, something neutral? On the other hand, if you call the issue traffic management, then the debate becomes less clear.

The terms used (and abused) mean different things to different players, but essentially what we are talking about is whether and where there should be a principle of non-discrimination regarding different forms of Internet traffic carried across networks.

At the extreme, net neutrality means that there should be no prioritisation of any types of traffic by network operators – so all bits would be treated as equal – and no charging for content providers. In practice, it is about whether communications providers should be allowed to block, degrade or charge for prioritising different applications or different content providers' traffic or whether network operators should be able to charge consumers, service providers or both for tiered quality of service.

The reality is that, in its pure form, net neutrality does not and cannot exist because all network operators have to use a variety of network management techniques to ensure that their networks do not become congested and that types of application that are sensitive to latency and jitter (like voice and video) are treated differently from others (like e-mail).
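
As a toy illustration of that everyday traffic management, here is a minimal sketch – entirely hypothetical, not any operator's actual implementation – of a scheduler that sends latency-sensitive packets ahead of bulk traffic:

    import heapq

    # Lower number = higher priority. Voice and video are sensitive to
    # latency and jitter, so they jump the queue; e-mail can wait.
    PRIORITY = {"voip": 0, "video": 1, "web": 2, "email": 3}

    queue = []
    arrivals = [("email", "newsletter"), ("voip", "voice frame"),
                ("web", "page request"), ("video", "video frame")]
    for seq, (app, payload) in enumerate(arrivals):
        # seq breaks ties so equal-priority packets keep arrival order
        heapq.heappush(queue, (PRIORITY[app], seq, app, payload))

    while queue:
        _, _, app, payload = heapq.heappop(queue)
        print(f"sending {app}: {payload}")
    # Sends voip first and email last, whatever the arrival order.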

Where the debate becomes really anguished is when there is any suggestion that traffic management goes further to prioritise some service providers' content or applications over others or even to block access to a rival's content or applications.

The last time that I addressed the issue of net neutrality in this series of columns (September 2006), the debate was raging in the USA but hardly featured in Europe. Why has it always been such a big issue in the United States? The explanation is that, in most parts of America, effectively there is a duopoly between the telephone company and the cable company and little real competition.

Here, in the UK, we have more network competition and net neutrality has not been the issue that it has on the other side of the Atlantic. However, the debate here is becoming more high-profile for several reasons: the growth of bandwidth-hungry services like iPlayer and YouTube, the explosion of Internet traffic over mobile networks, a review by the body representing regulators throughout the European Union, and implementation of a revised EU Framework for telecoms.

So far, the debate has been highly polarised. For instance, BT has argued that the BBC and other content providers should not expect a free ride over its network while, for its part, the BBC has complained that BT throttles iPlayer at peak times.

One might think that, from a consumer point of view, net neutrality is obviously desirable and its strongest proponents want it to be mandated by the regulator or even (in the USA) enforced by legislation. However, the current position – effectively a 'best effort' approach in the face of limited capacity and rising volumes of bandwidth-demanding traffic – presents several dangers.

The most obvious threat is increasing congestion which will degrade quality of service for all consumers and most applications. More particularly, delay-sensitive applications, such as voice over Internet Protocol (VoIP) or video services, could be degraded or even lost. Most worrying of all, if we do not agree a consensual and realistic approach to net neutrality, the net could fragment into two (or more) tiers with business users, who are ready to pay for it, receiving a better service.

Later this year, therefore, our regulator Ofcom will issue a discussion document and then a consultation document before making policy and recommendations. The two key issues are likely to be discrimination and transparency.

On discrimination, we need to think of options for Internet traffic management as a continuum and determine what forms of discrimination are fair and reasonable and what forms are anti-competitive and unacceptable and should be the subject of regulatory intervention. Ofcom will need to decide when intervention would be appropriate and what form it should take.

On transparency, consumers need to be fully informed of any traffic prioritisation, degradation or blocking policies being applied by their ISP, so that they can take this into account when choosing a service provider. These policies need to be prominent, accessible and intelligible. Canada and Norway have examples of good practice here.


Our regular columnist Roger Darlington looks at a new initiative that will bring together television and broadband.

MORE PICTURES ON A NEW CANVAS

You may not have heard of it, but it's coming soon, it will transform our television offerings, and it may provide a further boost to broadband take-up. It's called Project Canvas but, when it is launched on the market, it will have a different brand name.

The central proposition is the delivery of video-on-demand (VOD) and other interactive services to the television set via Freeview and broadband. In effect Canvas is an attempt to replicate the success of Freeview for Internet television.

Behind the core proposal are three specific elements:

It sounds like an ambitious project and it is. But it is backed by some major and serious players: the BBC, BT, TalkTalk, ITV, 4, Five and Arqiva. As well as serious players, there is serious money: Canvas will cost £116M over the first five years and the BBC will contribute £25M of this.

So what would Canvas mean for consumers?

Sounds wonderful, doesn't it? But Canvas has been controversial. A similar (but much less open) project called Kangaroo was blocked by the Competition Commission but, following a review with around 60 industry submissions and over 700 public responses, the BBC Trust has now given the green light to Canvas, subject to a whole load of conditions.

The benefits of Canvas will be:

The BBC has described Canvas as “a game changer” and BT Vision sees it as central to its future plans.

But possible downsides to the project are:

BSkyB and Virgin Media are fierce critics. They fear the impact on their own offerings and question whether BBC licence fee revenue should be used to support what is effectively a commercial venture.

When the Canvas device is first launched – probably in early 2011, in good time for the Olympic Games – it will be expensive, somewhere between £200 and £249. But the 'base case' scenario is that, by 2015, there will be 4M Canvas devices out there and 23% of digital terrestrial television (DTT) households will have Canvas on their primary set. The 'high case' scenario would see 8.3M devices sold and 50% of DTT households with at least one set covered.

An appealing feature of the Canvas project is the amount of thought that has gone into the accessibility of the new device. While the set top box will not at first have all the features that disability groups want, the plan is that the boxes will be upgradeable, enabling new accessibility solutions to be added in the future.

A particularly interesting consequence of the launch of the Canvas offering is the likelihood that it will stimulate some further take-up of broadband, since one will need a broadband line to access the services, and some further use of the Internet, since the consumer will be able to access the Net without needing a computer.

The BBC estimate is that, over five years, between 500,000 and 870,000 homes which currently are not online will take broadband as a result of Canvas. Such a development would have positive implications for the online delivery of public services such as NHS Direct.

Links:
Project Canvas info site click here
BBC Q & A page click here


Our Internet columnist Roger Darlington explains why the current buzz about 'the cloud' is more than hot air.

THE WIND TOWARDS CLOUD COMPUTING

When it comes to the development of the Internet, there's always something new. Not everything new is important or lasting, but cloud computing is a very significant trend that will impact both users and networks.

The 'cloud' here is really just a fancy word for the Internet, but it's rather appropriate because it's large, it's out there, and it's fuzzy at the edges. Cloud computing is an umbrella term but essentially it refers to putting more material and software on the Net itself rather than on the computers and servers that businesses or users have themselves.

An analogy could be made with electricity. At the beginning of the industrial revolution, companies generated their own electricity locally but, once the grid became truly national and reliable, the provision of electricity was outsourced to specialised companies and networks.

Similarly today many companies are questioning why they need to own sophisticated computers and servers which need constant and expensive maintenance and regular up-dating with new software when they could access and pay for software and services just when they need them.

For years, we've talked of putting the intelligence in the network and making terminals cheaper and dumber. Now a version of this vision is starting to happen. So, what's the point of cloud computing?

Of course, there are always downsides or risks to any development. In practice, most of the risks are unlikely or exaggerated or simply outweighed by the enormous benefits, so that currently there is an explosion of cloud computing options and a rush to the cloud by business users.

Options include software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS), depending on what the customer wants. Suppliers of such services include some of the very biggest names on the Net, including Microsoft, Google and Amazon.
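
To give a flavour of the infrastructure-as-a-service idea, here is a minimal sketch of a customer asking a provider's web interface for a server and paying only while it runs; the endpoint, parameters and token below are entirely hypothetical, not any real supplier's API:

    import requests

    # Hypothetical IaaS request: rather than buying and maintaining a
    # server, the customer asks the provider's API for one on demand.
    response = requests.post(
        "https://api.example-cloud.com/v1/servers",        # invented endpoint
        json={"cpus": 2, "memory_gb": 4, "region": "uk"},  # invented parameters
        headers={"Authorization": "Bearer MY_API_TOKEN"},  # placeholder token
    )
    response.raise_for_status()
    server = response.json()
    print(server["id"], server["status"])   # e.g. 'srv-42', 'provisioning'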

Users of cloud computing include many familiar names like the “Guardian” newspaper and the supermarket giant Tesco. But the largest potential user is yet to come on board. At the beginning of 2010, the then Labour Government talked of bringing in cloud-based infrastructure and services across government departments. The new coalition government is likely to drive the idea faster as a means of saving money. The so-called 'G-Cloud' strategy will probably come online in 2011 and could eventually save central government £3.2 billion out of the £16 billion annual IT budget – an impressive 20%.

If the project works for Whitehall, expect local government to follow suit soon afterwards.

The biggest roadblock in the rush to cloud computing is something which is rarely discussed: the capacity of the network – what we used to call the information superhighway. While new investments in the local loop – mainly fibre to the cabinet (FTTC) but some fibre to the home (FTTH) – are receiving attention, too little thought is being given to the need for heavy investments in the backhaul – the circuits that take traffic from the local exchange back to the routers and servers of Internet service providers (ISPs), which in the UK are normally located in the Docklands area of London.

There are only six or seven backhaul networks in the country – the largest being BT Openreach – and, unless the necessary investments are made, the heavy traffic requirements of cloud computing will lead to growing contention and the inability of networks to self-heal. Networks could slow down to the point of not being usable, or usage could be capped or charged for, which might lead to a two-tier system.

Links:
Wikipedia page on cloud computing click here
Discussion of the 'G cloud' click here


Our regular columnist Roger Darlington examines the Coalition Government's communications policies so far.

WHERE NOW FOR DIGITAL BRITAIN?

In June 2009, the then Communications Minister Stephen Carter in the then Labour Government launched the “Digital Britain Final Report”. It ran to 238 pages and weighed in at 1 kg (2.2 lb).

But Labour is no longer in office and Carter is no longer a minister, so what's happened to the ambitious proposals in “Digital Britain” as regards the telecommunications sector? Well, the report did lead to legislation in the form of the Digital Economy Act, but the Bill was caught in the 'wash up' process of the last Parliament which meant that the Conservative Opposition blocked a number of proposals even at that late stage.

So the regulator Ofcom obtained new responsibilities to combat illegal file-sharing and review the adequacy of the nation's communications infrastructure, but the suggested duty to encourage investment was dropped.

Meanwhile the new Coalition Government is planning huge cuts in public expenditure and a savage reduction in so-called quangos and it has Ofcom in the firing line. The debates are still raging but, almost certainly, Ofcom will finish up with a reduced role and a smaller budget that will have major implications for the industry which ironically funds a large part of the regulator's work.

For consumers and businesses, even more important than having an effective regulator is having access to a communications network that meets their needs and ensures the nation's international competitiveness. So what's the new government doing here?

Well, it's saying the right things. The new Culture Secretary Jeremy Hunt assured an industry conference in July 2010: “I hope you are in no doubt whatsoever about how important the Government considers broadband as a part of our economic infrastructure.” He asserted: “All of us share the ambition that, by the end of this Parliament, this country should have the best superfast broadband in Europe and be up there with the very best in the world.”

Of course, it's one thing to make a declaration of intent; it's another to deliver policies that make a positive difference on the ground or – in this case – in the ground. The first decisions have been exceedingly disappointing.

As far as current generation broadband is concerned, the last government planned to roll out a universal broadband commitment of 2 Mbit/s – a speed lambasted by the Conservative Opposition as paltry – by 2012, but the current government has retained the 2 Mbit/s speed and delayed implementation until 2015.

As far as next generation broadband is concerned, the last government planned a 50p a month levy on all fixed lines to fund delivery of next generation access to the ‘final third’ of the country that would not receive it under private sector provision, but the present government immediately dropped the levy idea and has now announced that it will not decide whether public funding is needed until January 2012 and, if the decision is affirmative, funding – probably from the BBC licence fee - will not commence until 2013.

Another way that the government might have assisted private sector investment in next generation broadband is through an amendment to the present taxation rules. In Opposition, the Conservatives undertook to conduct a review. Now Communications Minister Ed Vaizey has abandoned that promise to look again at the taxation regime for new fibre networks.

The current taxation regime works to the advantage of BT and Virgin Media, but potential competitors to these companies claim that the current arrangement adds 10% to their costs compared to the situation faced by BT and Virgin.

So, is there any good news from government these days?

DCMS Secretary of State Jeremy Hunt has announced that there will be three trials of the delivery of next generation access in rural and hard-to-reach areas. Broadband Delivery UK – the organisation which will be the delivery vehicle for the Government's broadband policies – will manage the procurement of these trial projects.

Also the Coalition Government has at last backed a simplified version of a package of proposals on spectrum that the Labour Government failed to get through the Commons before the General Election. This should in time enable mobile networks to make a better contribution to the delivery of new high-speed services.

But the truth is that the Coalition Government is still a long way from having the necessary detailed and credible set of policies to deliver a truly digital Britain. Watch this space.

Links:
"Digital Britain Final Report" click here
Jeremy Hunt's speech of 8 June 2010 click here
Jeremy Hunt's speech of 15 July 2010 click here


Ever wondered how web sites obtain their names? Our Internet columnist Roger Darlington explains the system.

HOW THE WEB WORKS

The World Wide Web – the graphical part of the Internet – was invented by a British scientist, Tim Berners-Lee, in 1989 while he was working at CERN, the European Organization for Nuclear Research. Every day it becomes bigger and bigger – the Wikipedia site alone has some 3.5 million pages in English, each of which has its own unique address so that all users can find it and link to it.

Each web page address can be thought of as having three elements. The first element is everything up to the first dot. This usually involves the letters 'http' which stands for Hypertext Transfer Protocol and the letters 'www' which of course stands for World Wide Web.

The second element is everything after the first single forward slash; it identifies the particular page on a web site with more than one page – which is almost every site. The third element is everything in between the other two. It is called the domain name and is the subject of the rest of this column.
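
For those who like to see the anatomy programmatically, Python's standard library will split an address into much the same elements; note that it draws the first boundary slightly differently from my informal description, keeping 'www' with the domain name (the address below is made up):

    from urllib.parse import urlparse

    # A made-up address, split into its constituent parts.
    parts = urlparse("http://www.example.org.uk/articles/page1.html")
    print(parts.scheme)   # 'http' - the transfer protocol
    print(parts.netloc)   # 'www.example.org.uk' - host, including the domain name
    print(parts.path)     # '/articles/page1.html' - the particular page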

There are two sorts of domain name. Generic Top Level Domains – abbreviated as gTLDs – are those ending in terms like .com or .org (currently there are around 20). Then there are Country Code Top Level Domains – abbreviated as ccTLDs – such as .uk for Britain or .de for Germany (a wide definition of the term country means that there are over 240).

Generic Top Level Domains – which tend to be used by companies or organisations that operate beyond a single nation – are overseen by an organisation called the Internet Corporation for Assigned Names and Numbers (ICANN) which is a not-for-profit private sector organisation based in California.

Country Code Top Level Domains – which tend to be used by bodies operating solely within a particular nation state – are managed by a separate body for each country with ICANN providing a coordination role. In the UK, that national registry is called Nominet which was founded in 1996.

Nominet has its offices at the Oxford Science Park where it employs some 115 staff. It is the second largest national registry (after Germany) with around 9 million .uk domain names, the very first of which was issued 25 years ago (domain names preceded the web). In fact, some four-fifths of these names are not actively used.

Nominet itself 'sells' very few domain names directly. Instead it operates through a large body of registrars who sell domain names to web site owners and often provide other web-related services such as hosting. The UK has a very large community of registrars – many more than Germany – running to some 3,000, but the largest 20 account for about 75% of .uk domain names.

Domain names are cheap: all registrations are for two years and Nominet charges a 'wholesale' price of just £5. Registrations are renewable on the same terms without limit. I am one of the 9 million with a .uk domain name for my web site.

Disputes over the use of domain names tend to be of two main types.

One type concerns intellectual property: a company might complain that its name has been hijacked or misused in a domain name, and Nominet has a system of 40 independent arbitration experts to resolve around 700 such disputes a year.

Then there are cases of sites responsible for misrepresentation and fraud; on occasion Nominet will suspend domain names in response to a police request because the name is being used for criminal activity.

Nominet is a not-for-profit organisation with a turnover around £20 million which makes a healthy operating surplus, but much of this is ploughed into underpinning the security and resilience of the system.

It is very important to the government and to stakeholders that Nominet operates in an open and transparent way and it has recently carried out a major review of governance. Following this review, the approach to making policy in the .uk space has been changed to a stakeholder-led process based on issues and anybody who wishes can now set up an appropriate Issue Group to address a perceived problem.

The new .uk policy process is supported by a Stakeholder Committee appointed by the Nominet Board. This committee has four Nominet members and four with a wider background. I have been selected to reflect the interests of consumers - hence your columnist's knowledge.

Links:
Internet Corporation for Assigned Names and Numbers click here
Nominet click here
Nominet .uk policy process click here
Nominet Stakeholder Committee click here


Activists at home and abroad are making increasing use of online tools, as explained by our Internet columnist Roger Darlington.

THE POLITICAL POWER OF SOCIAL MEDIA

The role of the activist – whether campaigning for a political cause, pressing for a social change, or pursuing a trade union objective - has been changed dramatically by the new communications technologies, most especially the Internet, and most powerfully of all social media.

Consider some very different cases:

These are just a few examples of how political and social activism is increasingly using social media to challenge authority and to promote change.

What are the advantages and strengths of using social media to further activists' causes?

What are the limitations and weaknesses of these online tools? The Net can be used for conversation or control, for protest or propaganda, for reform or repression. The battle has only just begun.


There's more about you on the Net than you realise. Roger Darlington asks whether you should be worried.

HOW TO CONTROL YOUR ONLINE PERSONA

Have you ever Googled your name? Of course you have. You should check from time to time what the web has to say about you so that you know how other Net users see you.

If your Google search revealed no entries, then you are a deeply sad person and need not read the remainder of this column. If the search located a single entry, you have achieved something called Googlewhacking (look it up).

The chances are that, depending on how common your name is and how active you are on the web, you found lots of entries. In my case, since my name is not that common and I'm a Net enthusiast with a web site and two blogs, Google offers up 120,000 entries in 0.11 seconds.

Should we be impressed or scared? Of course, it depends on where the information came from and how accurate it is. There are three types of information about you online.

First, information you can see which you put there. This includes any website or blogs that you run, the social networking sites that you've joined such as Facebook or LinkedIn, the microblogging sites you use such as Twitter, and any comments that you've posted to other users' blogs or social networking pages.

This might seem fine, but you might have blogged or tweeted when you were misinformed or angry or drunk and subsequently regretted what you wrote. Or a student revealing his university antics might later find that a potential employer is not so amused by the semi-naked frivolities or anti-capitalist tirades.

Second, information you can see which other people put there. Your family and friends might be mentioning you on their blog postings or Twitter tweets or on their social networking pages. If you've given a speech or presentation or attended a seminar or conference, you may well be mentioned on the organisation's web site. Your employer and any social organisation of which you are a member may well have something about you online.

Again, at first this may not seem problematic. But you may not want the world to know a former girlfriend's rating of your sexual prowess or what you said about the boss at the company's Christmas party. Teachers can be the victims of outrageous comments by pupils, and politicians and celebrities are often the subject of unsubstantiated rumours or simply outright lies.

Third, there is information you can't see and which you probably created unknowingly. Cookies often record your visits to a website and what you viewed or did there. Data mining techniques collate your online activity and interpret your interests in order to personalise content or target advertisements.

For instance, consider my use of Amazon. When I view information on a book, they know I'm interested in that subject or genre; when I actually buy that book, they know I'm very interested; when I go on to review that book, they know that I'm exceptionally interested. This enables Amazon to target recommendations and offers.
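
To make the mechanics concrete, here is a minimal sketch in Python; the signal names and weights are my own invention for illustration, not Amazon's actual formula:

    # Illustrative only: these signals and weights are assumptions, not Amazon's.
    SIGNAL_WEIGHTS = {"viewed": 1, "bought": 5, "reviewed": 10}

    def interest_score(events):
        # Sum the weights of everything a customer did around one subject.
        return sum(SIGNAL_WEIGHTS.get(event, 0) for event in events)

    profile = {
        "crime fiction": interest_score(["viewed", "bought", "reviewed"]),  # 16
        "gardening": interest_score(["viewed"]),                            # 1
    }
    print(max(profile, key=profile.get))  # recommendations lead with 'crime fiction'

Real systems are vastly more sophisticated, but the principle is the same: the more you do, the more confidently they can target you.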

Research commissioned by the Communications Consumer Panel – on which I sit – suggests that most Net users have not really thought through the implications of all this information being so available to so many people, and have little understanding of what they should be doing to ensure that their online profile is the one they wish the world to see, not just now but in the years to come.

So what can you do to manage your online persona?


As broadcasting and the Internet collide, can we continue to regulate them so differently? Our columnist Roger Darlington explores the options.

REGULATING CONTENT IN A CONVERGED WORLD

Broadly speaking, I would suggest that, in most democratic countries, broadcasting is regulated around some concept or definition of offence. So 'excessive' or 'inappropriate' bad language, violent behaviour, sexual activity, and such anti-social practices as smoking, drinking and drug-taking are prohibited or confined to certain times or certain channels. Therefore essentially the test of acceptability is offence.

I would suggest that, by contrast, in most democratic nations 'regulation' of the Internet simply borrows from general law and that, as far as is practical, what is illegal offline is regarded as illegal online. This would include criminal content such as child abuse images (what many – wrongly, I believe – call child pornography), 'extreme' adult pornography, race hate material, and inducement to violence or other activities which are of themselves illegal such as drug-taking, fraud or robbery. It would also include content such as libel or copyright infringement. Therefore the test of acceptability is domestic law.

Regulation of broadcasting and the Internet cannot be the same: it would be both technically impossible and socially unacceptable. The issue is whether the sharp differences in regulation and the fundamentally different tests of what is acceptable should continue. My own view is that, over time and with consumer education, we should move to a less differentiated model. Why?

First, the current broadcasting model is no longer appropriate as the justifications for it are evaporating. Effectively there is no scarcity of spectrum or channels – the volume of content and the range of choice is now enormous. In many countries, there is no longer a real consensus about what constitutes offence – we are much more cosmopolitan and much more variegated in our tastes and values and what would outrage one family would be no problem at all to another.

Second, the current Internet model is no longer adequate. When the Net was used by a few thousand academics and nerds, maybe we did not need to worry too much about its content. But now the Internet is a mass medium – indeed it is the mass medium – with some two billion users. To limit 'regulation' simply to material which is illegal is not facing up to some serious challenges of Internet content – such as pro-anorexia, pro-bulimia and pro-suicide sites – or to the wishes of consumers for some more protection and guidance.

Third, convergence now means that regulation based on device – one system for broadcasting because it is delivered on radio and television sets and another system for the Internet because it is delivered on a computer – is wholly inappropriate and unsustainable. Already one can have a split screen with the broadcasting of a television programme as the main picture and a live Twitter feed about the programme on a smaller section of the same screen. Tablet computers (like the iPad) and Internet Protocol Television (IPTV) are accelerating the convergence of content delivery.

If we are going to have a more converged approach to content regulation, then essentially we have three broad choices.

First, we could regulate broadcasting the way we regulate the Internet, so that all content would be accessible unless it was illegal. This would throw open what was permissible on television to an extent which I believe would be politically and socially unacceptable. Our screens would be awash with sex and violence and not just when we 'pull' it down from the Net but when it is 'pushed' at us by broadcasters.

Second, we could regulate the Internet the way we regulate broadcasting, so that anything offensive on the Net would have to be blocked or limited in some way. In a global medium where every user has the opportunity to create content, this would be technically impossible (although it is feasible in a particular totalitarian regime like China or Iran). Furthermore it would change the whole concept of the Net and radically diminish the rich and varied content that we currently enjoy.

Third, we could seek some sort of middle way that uses a different test of acceptable content – one that is not so strict and subjective as offence or taste or decency, but one that is not so limited and difficult to enforce as illegality. What could such a test be? I would suggest for debate the test of harm. But, for a full examination of this model, you will have to check out my web site essay [click here].


Our use of the Internet is having invisible consequences that affect how we read and remember, explains our columnist Roger Darlington.

IS THE NET CHANGING YOUR BRAIN?

We often think of the brain as a kind of computer but this is a poor analogy for several reasons. A computer is literally hard-wired and use of it does not change either the location or the intensity of the connections.

The brain is very different. It contains an estimated 100 billion neurons and these neurons are constantly making and remaking synaptic connections as a result of our behaviour. Scientists, therefore, talk of neuroplasticity, the capacity and indeed the inevitability that different types of reading and thinking will result in different synaptic connections which over time become more or less strengthened in our brains.

In his seminal work "The Shallows" (sub-titled "How the Internet is changing the way we think, read and remember") [for my review click here], American writer Nicholas Carr argues that intensive use of the Internet is literally reconnecting our brains in ways which make it harder for such users to concentrate on linear text for a sustained period because of "the switch from paper to pixels".

Web pages contain short pieces of text dotted with hyperlinks and Web users frequently skim that text and follow these links in ways which, according to Carr, take us into "the shallows".

He insists: "Dozens of studies by psychologists, neurobiologists, educators, and Web designers point to the same conclusion: when we go online, we enter an environment that promotes cursory reading, hurried and distracted thinking, and superficial learning" and that "people who read linear text comprehend more, remember more, and learn more than those who read text peppered with links".

But, in so far as it is true that some Net users – perhaps especially younger, more intensive users – find it hard to concentrate and to digest linear text, it is surely overblown to assert – as Carr does – that "one of the greatest dangers we face" is "a slow erosion of our humanness and our humanity".

All new media companies are interested in how their customers are using and coping with the explosion of new digital devices and services. BT is one of them and recently sponsored an international study led by the Engineering Design Centre at Cambridge University.

The subsequent report, entitled "Culture, Communication And Change" [for text click here] comes to some balanced conclusions, but still sounds warnings.

The study found that one in three people has felt overwhelmed by communications technology to the point that they feel they need to escape it. Those people who have frequently felt overwhelmed are also more likely to feel less satisfied with their life as a whole.

As a result of these findings, BT has produced a set of 'five-a-day' recommendations for using this technology which it calls "The BT Balanced Communications Diet". This advice is aimed primarily at families with children but more generally might alleviate some of the problems identified in "The Shallows".

The advice can be summarised as: centralise the location of the technology, create rules and awareness, educate all family members about responsible use, and find a good point of balance.

The recommendations are well-intentioned, but experts like Professor Sonia Livingstone of the London School of Economics [for more information click here] would question how realistic some of them are. Her research highlights how difficult it is for parents to insist on use of a computer in the living room when so many children are accessing the Net over laptops, tablets, and smartphones in their own rooms and when teenage children especially will want to assert their right to privacy.

For adults, the tyranny of e-mail seems to be a particular problem. The Cambridge University study involved interviews with 12 experts around the world. Dan Ariely of Duke University in the USA [for my review of his book "Predictably Irrational" click here] pointed out that the checking of e-mail can become compulsive. He explains that researchers at Stanford University have found that people who multi-task frequently are usually worse at filtering distractions and remembering information.

However, it may be that we are worrying too much. Every new communications technology has provoked a degree of moral panic. According to Plato, Socrates was even worried about how written text would affect our capacity to remember things.

The truth is that we are all going through a transformative period in history and learning new mechanisms and strategies for coping with different forms of information presentation and therefore different styles of thinking and remembering.


Just when you thought switchover was almost over, along comes a debate on another one, explains our columnist Roger Darlington.

THE FUTURE OF DIGITAL RADIO

Digital switchover of television has gone much more smoothly than most people expected and the complex and nationwide process is now well on the road to completion. By the end of this year, almost two-thirds of homes will have switched and by the end of 2012 the whole country will have gone over.

Now a debate is running about whether we should do the same thing for radio and put all radio stations except small local and community ones onto a Digital Audio Broadcasting (DAB) platform. But the arguments are much more finely balanced than in the case of television.

Television switchover will release a large amount of valuable spectrum which will be auctioned by Ofcom to providers of 4G mobile networks and generate useful capital for the Treasury. A future radio switchover, though, would release only a small segment of spectrum and the value of it is low.

So why is digital switchover for radio being promoted? The industry case is that it would save transmission costs because there would no longer be the need to maintain two transmission platforms. Currently, for commercial radio, the analogue network costs just under £20 million a year to operate, while the digital network costs over £30 million a year to run. The BBC has similar duplicate costs.

Switchover to digital would remove the pressing need to make new investments in analogue transmission infrastructure, much of which is now dated.

So what's in it for the consumer? There are three major benefits of digital radio.

But, from a consumer point of view, there are many downsides. The Government has said that it will announce in mid 2013 whether, and if so when, it will commit the country to digital switchover of radio. The main consumer voice in the debate is the Digital Consumer Expert Group which brings together representatives of groups like Citizens Advice, Consumer Focus & Which? plus organisations of older and disabled citizens.

I have just been appointed Chair of this Group by DCMS Communications Minister Ed Vaizey, so I am in the thick of the debate. For the outcome, stay tuned.

Links:
A short guide to digital switchover for UK radio click here
Digital Radio Action Plan click here
Ofcom consultation on DAB coverage click here
Digital Radio UK click here
Guide to choice of digital radios click here


You might think that you have easy access to all the web, but think again suggests our columnist Roger Darlington.

WHAT SORT OF NET DO WE WANT?

In the beginning, the Internet was seen by the early enthusiasts as a totally open and free space where content could not be controlled and would not be constrained.

In a famous “Declaration Of The Independence Of Cyberspace” issued in 1996 [for text click here], Net guru John Perry Barlow began: "Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone.”

Boldly he asserted: “I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.”

This fantastically ultra-liberal declaration was never true; nor should it have been true. The work I did as Chair of the Internet Watch Foundation for six years to combat child abuse images online is one of many cases where content on the Net does require some form of regulation and control.

But, in recent years, a new concern has emerged that access to the wonderfully wide and wild web is being constrained and shaped by devices and algorithms that are unappreciated and often unseen by the user. Two particular books have highlighted the issue.

The first is "The Future Of The Internet And How To Stop It" by Jonathan Zittrain published in 2008 [for my review click here]. His main theme is that: “The future is not one of generative PCs attached to a generative network. It is instead one of sterile appliances tethered to a network of control”.

The personal computer and the Internet are open and flexible systems (he uses the word “generative”) which have enabled an incredible flowering of innovative products and services from a multitude of sources. However, the very openness of the PC and the web has exposed them to a whole variety of threats such as hacking, viruses, spam, and a host of malware.

In the face of such threats, the temptation will be to 'lock down' such systems so that they can be controlled more tightly. So, Zittrain argues, devices increasingly will be “tethered” to limit what they can do (for instance, smart phones like the iPhone or PVRs like Sky+) and the Net will attract the attention of governments and regulators who will endeavour to limit what we can access and do on-line.

The second book is "The Filter Bubble" by Eli Pariser published in 2011 [for my review click here]. In Pariser's case, the fear is that personalisation of the web means that we are increasingly accessing only a selected slice of the richness on offer.

A key date was 4 December 2009. Little noticed at the time, this was the day from which Google started to personalise its search results based on no fewer than 57 signals. So what you see is different from what I see when we type the same words into the search box.

This is merely the most dramatic example of personalisation. Using cookies which note what we look at and what we do on different web sites, subsequent content – from the delivery of news & information to the offering of products & services – is shaped to our past behaviour and assumed preferences.
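
As a deliberately simple sketch of the principle (the signals and weights here are invented; Google has never published its 57), personalisation amounts to re-ranking generic results against a profile:

    # Toy personalised ranking: invented signals, not Google's real ones.
    user_profile = {"football": 0.9, "opera": 0.1}  # interests inferred from past clicks

    results = [
        {"title": "Opera season review", "topic": "opera", "base_rank": 0.8},
        {"title": "Last night's match report", "topic": "football", "base_rank": 0.6},
    ]

    def personalised_score(result):
        # Blend the generic ranking with this user's inferred interests.
        return 0.5 * result["base_rank"] + 0.5 * user_profile.get(result["topic"], 0.0)

    for result in sorted(results, key=personalised_score, reverse=True):
        print(result["title"])  # the football fan sees the match report first

Two users typing the same query thus see differently ordered worlds.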

Obviously this process has advantages for the user: you tend to see material and advertisements that interest you and, in a world in which we are all overwhelmed by content of all sorts, it can be helpful to have irrelevant material relegated and information we value promoted.

The danger – to use Pariser's terms – is that this filtering process places us in a bubble in which we have a limited view of the world that reinforces our prejudices.

And one of the worst things about our enclosure by these bubbles is that the process is overwhelmingly invisible. You do not see the cookies and the algorithms that shape the content that you do see and, of course, you do not know what content you are missing as a result.

But what can you do about it? Try to understand more about how the algorithms and filters work, choose web sites that give you more visibility of, and control over, the use of your personal information, and regularly erase all the cookies on your web browser.


How news is compiled, communicated and consumed is being utterly transformed by the Internet, explains our columnist Roger Darlington.

WHOSE NEWS IS IT ANYWAY?

News used to be the exclusive responsibility of professional journalists supervised by professional editors in the print, radio and television industries, but the Internet has changed all that.

Now web sites, blogs, YouTube, social networking sites, Twitter accounts and smartphones all enable anyone anywhere to create a version of news without any editorial supervision or political constraint, although totalitarian governments around the world are still struggling to contain and control this information tsunami.

This means that you can hear the voice and see the pictures of the protestor in Syria whatever the efforts of the dictator Bashar al-Assad to hide the truth of his brutality. But it also means that whole web sites can argue that 9/11 was perpetrated by the Jews or that Barack Obama is a Muslim non-American when there is no shred of evidence for such claims.

News used to be communicated at set times and in set formats: hourly radio slots, a couple of television slots, daily newspapers, weekly magazines, monthly journals. Now news is continuous and instant without printing delays or space constraints.

News is set to become more local with more digital radio stations and the development of local television. Simultaneously it is becoming more global, so that immigrants or travellers can track the news in their country or even their town of origin.

News used to be what the journalists and broadcasters – and crucially their editors – decided was what the general public should see in the case of serious outlets like the BBC or what they thought the specific audience wanted to see in populist media like the “Sun”. Now increasingly news is tailored to individuals and consciously or unconsciously we shape what we consume.

The consequences of these changes are massive for individuals, for companies, and for society as a whole.

Citizens can now access news anytime anywhere over a whole variety of digital devices – not just radios, televisions, and desktop computers but laptops, tablets like the iPad, e-readers like the Kindle, and smartphones like the iPhone.

However, whereas in the past the reader could rely on the editorial process to ensure a degree of veracity and balance, now a racist or climate change denier can achieve similar prominence to saner voices and the source and authority of the news or opinion are often utterly opaque.

National newspapers no longer feel compelled to produce a print version every day. For the first time since 1912, last Christmas Day no national newspapers were printed. Conversely even daily newspapers have to maintain a web site where news is constantly updated.

National newspaper circulation is on a massive slide. Fifty years ago, the “Daily Mail” and the “Daily Express” each sold more than 4M copies a day; now the best-selling “Sun” only manages 2.6M.

Local newspaper sales have collapsed and dozens of titles have closed. The flight of advertising to online media such as Craigslist and Google is killing newspapers here and around the developed world. People are reluctant to pay or pay much for online content. In short, the newspaper industry is in an existential crisis.

Democracy itself is in flux. In the era of the 24/7 news cycle, politicians seem compelled to have a soundbite to hand for any development and jump from one short-term issue to another. Newspapers can no longer afford investigative journalism so that politicians are freer to act unaccountably and sometimes criminally.

Citizens do not necessarily see the range of news that they used to, either in terms of subject matter or political perspective. When we tailor our online news feeds, select which blogs and Twitter feeds we monitor, and even have Google personalise our search results, we are in danger of entering what one writer has dubbed “the filter bubble”.

Then there are those who do not use these new technologies. A quarter of all homes in the UK do not have access to the Net, let alone the tablet computer or the smartphone. If your local newspaper has closed, your national newspaper has shrunk, the web is a foreign land and public services are going 'digital by default', you depend more than ever on services like the BBC which is being hit hard by cuts. This is a new version of the digital divide.


We're all using everything the Net has to offer, right? Our columnist Roger Darlington begs to differ.

CASTING THE NET WIDER

A lot of the hype around the Internet appears to assume that almost everyone is on the Net and they are all using the full range of services with ease and confidence.

Even those who know that this is not true - such as Government proponents of the 'digital by default' programme - underestimate the scale of the task in ensuring that the vast majority of citizens are brought onto the Net and are enabled to use all the public and private services that are or will be online.

Ofcom's Adult Media Literacy Report for 2012 gives the latest figures for take-up. Some 21% – one in five – of UK adults live in a household with no Internet access. Among social groups D & E, 40% – two in five – are without the Net and, for adults aged 65 and over, the figure is 59%, or three in five. These figures are changing only slowly.

The levelling off of Internet take-up here in the UK parallels the experience of many other industrialised countries. Outside the Scandinavian states where culture is different, near ubiquitous Internet take-up will not happen without significant targeted interventions. 

Ofcom's Adult Media Literacy Report also examines actual usage of the Internet.

The Government is keen that we should access public information and services but currently, in a typical week, only 15% of those on the Net do so. Per head of population, the UK is the world's biggest e-commerce market yet, again in a typical week, less than half of Net users (48%) conduct transactions online.

The point is that digital participation is not a one-step process but a journey and people need encouragement, support and confidence to make fuller and fuller use of the richness that the Net has to offer.

So what can we do about this?

In the last couple of years, the most significant intervention in this area has been Race Online, led by the Government's Digital Champion Martha Lane Fox. We should be asking: how effective has this programme been? will it be independently audited or evaluated?

Then we need to ask: what comes after Race Online? will this be adequate to the challenge?

We should be highlighting and contrasting the amount of public money going into supply-side stimulation – such as the £530M that Broadband Delivery UK has for superfast broadband – compared to the paltry sums going into demand-side stimulation – such as support for Race Online and the Online Centres Foundation (declaration of interest: I recently became a director of OCF).

We should be challenging Government to explain more fully how 'digital by default' is going to work and how Departments will provide the required Digital Assist support for those not online. We should particularly pick up on the challenges around making the new Universal Credit a digital-only benefit.

Other more specific suggestions are:

Finally we should avoid any notion that we think people should be forced to take up the Net or are silly or stupid if they do not do so. For the foreseeable future, there will be many who are not online and they are often making rational choices.

Having said that, they may well need or benefit from access to online services and we should be recommending easy and affordable access through trusted public access points like post offices and libraries. 

Links:
Ofcom Adult Media Literacy Report click here
Race Online 2012 click here


Whatever happened to the cashless society? Our columnist Roger Darlington assesses whether it is now coming to an e-wallet near you.

BIG MOVES ON SMALL PAYMENTS

When I first became involved in the communications industry in the late 1970s, I was convinced that the then new microelectronics revolution would soon eliminate a lot of the need for cash. At that time, it looked as if some kind of electronic card would be the approach.

In the mid-1990s, there was a trial in Swindon of a card called Mondex which was supposed to promote e-payments. It never caught on and most of us still have as many notes in our wallet and coins in our pocket as ever.

Some three decades later, it does now look as if e-cash is finally set to happen big-time. A host of schemes are being launched and a plethora of players are involved and, at this stage, we cannot know which technology and which organisation will come out on top.

A strong contender is a smartphone app called Pingit which was launched by Barclays Bank in the spring of 2012. This service allows anyone with a UK bank account and a mobile phone number to send and receive payments between £1 and £300.

Initially only Barclays' 12 million current account customers can use the app, but soon the service will be extended to all current account users, regardless of their bank or building society.

Subject to regulatory clearance, another approach to micro-payments is coming from UK mobile operators. Everything Everywhere (Orange and T-Mobile) plus Telefonica UK (O2) and Vodafone UK are forming a joint venture currently called Project Oscar.

Their service will use a technology called near field communications (NFC) which will put an e-wallet on your phone, with deductions made by passing the mobile over an NFC-enabled reader at a till or checkout. The popular name for this is 'wave and pay'.
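
In essence, the handset holds a balance or payment token and the reader asks it for a deduction. Purely as a toy illustration in Python (the amounts and per-tap limit are invented, and nothing here reflects the actual Oscar design):

    # Toy 'wave and pay' wallet: deliberately simplified, not a real NFC protocol.
    class Wallet:
        def __init__(self, balance_pence):
            self.balance = balance_pence

        def tap(self, amount_pence, limit_pence=2000):
            # Approve a contactless deduction only if it is within the per-tap
            # limit and covered by the stored balance.
            if amount_pence > limit_pence:
                return "DECLINED: over contactless limit"
            if amount_pence > self.balance:
                return "DECLINED: insufficient funds"
            self.balance -= amount_pence
            return "APPROVED: %dp taken, %dp left" % (amount_pence, self.balance)

    phone = Wallet(5000)   # £50 loaded onto the handset
    print(phone.tap(249))  # a coffee: approved
    print(phone.tap(2500)) # over the per-tap limit: declined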

A third initiative is a mobile payments platform called Mobile Money Network (MMN) which is being backed by Carphone Warehouse.

But Pingit, Oscar and MMN are merely three of the schemes under development. Just about any company in the money, mobile or Internet business is working on something.

Visa is preparing to introduce its own digital wallet service which will allow users to make speedier payments online and over mobile phones. PayPal is in negotiations with retailers to get them to accept payments from its mobile Internet service in stores.

Google Wallet has been designed to sit on a mobile phone and be linked to a credit card with payment made by near field technology. Apple is running a trial in its US stores to use iTunes as a virtual bank.

And, as well as the big names, there are all sorts of start-ups – tiny companies with grandiose plans - ranging from WePay in the USA to Blockchain in the UK.

For all these companies, providing the basic service is just the start. If their service achieves mass appeal or lucrative niche success, then they will have a wonderful treasure trove of data on who buys what, where and when. By using sophisticated data mining techniques, companies can then leverage this information to promote products and services in finely-targeted ways.

For the consumer, there are many advantages from the new micro-payment services on offer: faster and easier transactions and the convenience of not having to carry around so much cash. But there are some serious challenges too: will the data be secure? how will the data be used? who is going to regulate such services?

Micro-payments have many of the characteristics of premium rate services (PRS), such as small payments, ease of payments, prevalence of experience goods, and lack of transparency as to the provider of the service or the nature of the value chain.

PRS is regulated by PhonepayPlus which operates under the ultimate control of the communications regulator Ofcom. Currently neither PhonepayPlus nor anyone else regulates micro-payments, but PhonepayPlus has asked the Government to consider the issue as part of the DCMS Communications Review which is supposed to lead to a new Communications Act by 2015.

By then, the Government will be playing catch-up and, by 2015, we may not have a cashless society but we will certainly have a much less cash-based one.

Meanwhile the irony is that a country like Kenya is already ahead of us. It has been using a mobile money transfer system called M-Pesa since 2007.

Link: PhonepayPlus submission to DCMS on regulation of micropayments click here


Roger Darlington writes an open letter to the new Secretary of State for communications.

BREATHING NEW LIFE INTO THE COMMS REVIEW

Dear Maria Miller,

Congratulations on your elevation to lead the Department for Culture, Media and Sport. Now that the Olympics are over, it is time for your ministry to refocus on the promotion of the UK's creative industries.

In May 2011, the DCMS launched its Communications Review [for details click here] which is intended to lead to a new Communications Act by 2015. Almost a year and a half later, we are very little further forward and a real opportunity has been missed, but there is still time for you to make the review a meaningful and useful exercise.

Your predecessor Jeremy Hunt launched the review with an open letter which was very short and very open-ended [for text click here]. The letter posed 13 questions and invited submissions of no more than five pages within six weeks – a tough task and a tight timetable. In spite of the apparent rush to receive submissions, it took your Department more than five months before, in December 2011, it finally published the 168 submissions, including my own on the regulation of convergence.

At the time the open letter was published, we were promised a Green Paper by the end of 2011. Month after month, publication was rumoured to be imminent. Then, in May, we learned that there would be no Green Paper after all, but instead the summer would be used to hold five invitation-only seminars.

Those seminars have covered respectively the consumer perspective, competition in content markets, maximising the value of spectrum, the television content industries, and the radio sector. I attended the first and the fifth of these seminars.

Each seminar was the subject of a background paper posing further questions and submissions were invited on these papers and questions. Sadly these seminars have made the agenda of the Comms Review even wider and even looser.

While the DCMS has been excellent in making the proceedings of the seminars available in both text and visual form [for details click here], it does not look as if the resulting submissions will be published and there is no clarity over the next stages of the review, so that the process has become even more opaque.

The review now needs to concentrate on a limited number of major themes where the Government is prepared to act either through policy or legislation. Five big topics that need addressing for a forward-looking communications policy are as follows:

You do not have to wait for a new Bill in 2015; there is much that you can be doing now to promote the UK communications sector. And, if Leveson does require legislation, a new Communications Bill should not wait until 2015.


The law is beginning to bite on British users of social media, but the picture is confusing and unsustainable, claims our columnist Roger Darlington.

WHAT CAN'T YOU SAY ON THE NET?

There are many misconceptions about what you can and can’t say on the Internet, including web sites, blogs, Facebook and Twitter.

One view is that the Internet is a free space and you can – or at least should – be able to say anything. Another view is that what is illegal offline should simply be illegal online, except that most people do not know what exactly is illegal offline and it is much harder to enforce the law online. A third view is that electronic communications networks are different from print media and should be subject to their own laws, except that they already are and people are disturbed to find just how broadly drawn are those laws.

There are several pieces of legislation that make certain types of communication on the Net a criminal offence, but all of these predate the arrival of the Internet and, more importantly, the proliferation of social media.

The most used piece of legislation is Section 127 of the Communications Act 2003 [for full text click here] which is buried within the 600 pages of the Act best known for creating the regulator Ofcom.

Section 127 makes it a criminal offence to send “by means of a public electronic communications network a message or other matter that is grossly offensive or of an indecent, obscene or menacing character”.

Although an offence of this kind has been on the statute books since 1935, the reach and the sheer volume of communications over social networks means that it is increasingly being used to arrest people. In 2007, 498 people were convicted under Section 127. In 2011, the figure had more than doubled to 1,286.

Some of these prosecutions were undoubtedly warranted, but consider the case of the unfortunate Paul Chambers who, frustrated that snow had closed the airport from which he wished to fly out to his girlfriend, tweeted: “You’ve got a week and a bit to get your shit together otherwise I am blowing the airport sky high!!”

The poor man was successfully prosecuted for his joke and only won an appeal after a massive public campaign.

As well as various pieces of criminal law, the civil law applies, as became very evident when Lord McAlpine brought actions not just against the BBC and ITV but apparently against no fewer than 10,000 Twitter users, including 9,000 who had retweeted false and libellous assertions.

But damage to Lord McAlpine's reputation had already been done. And what about the teacher falsely accused of child abuse who does not have the resources of the multi-millionaire peer?

Sometimes problems occur even before any involvement of the law. Take the case of Adrian Smith who was demoted by his employer Trafford Housing Trust for posting on his Facebook page, visible only to his friends, his opposition to gay marriage. He had to go to court to establish that his employer had acted in breach of contract.

So the current situation is a mess.

On the one hand, free speech can be stifled because Net users have no clear idea of what they can and can't say and prosecutions are brought in very variable circumstances.

On the other hand, users of the Net can insult and offend, make racist, sexist and homophobic statements, troll, stalk and bully, libel and defame, seemingly with impunity.

What is to be done?

The immediate need, which the Director of Public Prosecutions is addressing, is to produce guidelines on when and under what legislation prosecutions should be brought in the public interest [for details of the consultation on such guidelines click here].

Then we need a full review of current legislation which might be applied to Internet content to bring it up to date and make it relevant in an age when social media has utterly transformed the nature of public discourse.

Meanwhile the various Net intermediaries – Internet service providers, web hosts, and social media networks – need to consider what more they can do to bring some more civility to cyberspace.

The reality is that the volumes of material involved make pre-checking and even post-checking by companies themselves impossible, but they can provide tools and mechanisms that enable their users to police content for them, flagging inappropriate material for rapid and consistent adjudication.
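
A minimal sketch of such a user-flagging mechanism, in Python and with an invented threshold and workflow, might run like this:

    # Illustrative user-flagging queue: the threshold and workflow are assumptions.
    from collections import defaultdict

    REVIEW_THRESHOLD = 3        # reports needed before a human moderator looks
    reports = defaultdict(set)  # post id -> set of users who flagged it
    review_queue = []

    def flag(post_id, reporting_user):
        # Record a user's report; queue the post once enough users agree.
        reports[post_id].add(reporting_user)
        if len(reports[post_id]) == REVIEW_THRESHOLD:
            review_queue.append(post_id)

    for user in ("alice", "bob", "carol"):
        flag("post-42", user)

    print(review_queue)  # ['post-42'] now awaits adjudication

The users do the watching; the company only has to adjudicate what they surface.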

And users themselves need to exercise more sense and civility. If peer pressure cannot achieve this, some of them are going to finish up in prison.


Back to home page click here