
HOW THE INTERNET COULD BE REGULATED

Text of a presentation made in various forms at a number of venues - last modified on 7 January 2009


INTRODUCTION

This presentation is informed by my six-year term from 2000-2005 as the first independent Chair of the Internet Watch Foundation, a body which exists to combat illegal content on the Net in the UK. However, it does not represent the policy of the IWF and indeed its scope goes well beyond the remit of that organisation.

The presentation is prompted by the growing debate about whether existing controls on Internet content adequately meet the concerns of users and what happens when the heavily regulated world of broadcasting collides with the virtually unregulated world of the Internet.

WHY WE SHOULDN'T REGULATE THE INTERNET

Many people argue that it would be wrong to attempt to regulate the Internet and advance arguments such as the following:

WHY WE SHOULD REGULATE THE INTERNET

However, there are strong arguments in favour of some form of regulation of the Internet, including the following:

REGULATING ILLEGAL CONTENT

It is a major proposition of this presentation that any sensible discussion of regulation of the Internet needs to distinguish between illegal content, harmful content, and offensive content. I now deal with these in turn.

In the UK, illegal content is effectively regulated by the Internet Watch Foundation [click here] through a self-regulatory approach.

What is the nature of the IWF?

The IWF has a very specific remit focused on illegal content, more specifically:

The IWF has been very successful in fulfilling that remit:

How is illegal material removed or blocked under the IWF regime?

The problem now for the IWF - and indeed for the other such hotlines around the world - is abroad, more specifically:

In early 2005, a study by the International Centre for Missing and Exploited Children (ICMEC) in the United States found that possession of child abuse material is not a crime in 138 countries and that, in 122 countries, there is no law dealing with the use of computers and the Internet as a means of distributing child abuse images [for more information on this report click here]. So the UK needs the cooperation of other governments, law enforcement agencies and major industry players if we are to combat and reduce the availability of child abuse images in this country and around the world.

Since the IWF's remit is illegal material, there are some areas of the law which might be amended in ways that would imply a minor extension to the IWF's existing remit, specifically:

However, the IWF has absolutely no intention or wish to engage with harmful or offensive content, so the proposals that now follow are my personal suggestions for discussion and debate.

REGULATING HARMFUL CONTENT

It is my view that there is currently Internet content which is not illegal in UK law but which would be regarded as harmful by most people. It is my contention that the industry needs to tackle such harmful content if it is to be credible in then insisting that, in effect, users have to protect themselves from content which, however offensive, is neither illegal nor harmful. Clearly it is for Government and Parliament to define illegal content. But how would one define harmful content?

I offer the following definition for discussion and debate: “Content the creation of which or the viewing of which involves or is likely to cause actual physical or possible psychological harm.” Examples of material likely to be ‘caught’ by such a definition would include incitement to racial hatred or to acts of violence and the promotion of anorexia, bulimia or suicide.

Often when I introduce such a notion into the debate on Internet regulation, I am challenged by the question: how can you draw the line? My immediate response is that, in this country (as in most others), people are drawing the line every day in relation to whether and, if so, how and when one can hear, see, or read various forms of content, whether it be radio, television, films, videos & DVDs, or newspapers & magazines. Sometimes the same material is subject to different rules - for instance, something unacceptable for broadcast at 8 pm might well be permissible at 10 pm, or a film which is unacceptable for an '18' certificate in the cinema might receive an 'R18' classification in a video shop.

Therefore I propose, in relation to Internet content, that we consult bodies which already make judgements on content about the creation of an appropriate panel. Such bodies would include the Ofcom Content Board [click here], the BBC [click here], the Association for Television On Demand (ATVOD) [click here], the British Board of Film Classification (BBFC) [click here], and the Independent Mobile Classification Body (IMCB) [click here]. I would suggest that we then create an independent panel of individuals with expertise in physical and psychological health who would draw up an agreed definition of harmful content and be available to judge whether material referred to them did or did not fall within that definition.

What would one do about such harmful content?

REGULATING OFFENSIVE CONTENT

Once we have effective regimes for illegal and harmful content respectively, one has to consider material which is offensive - sometimes grossly offensive - to certain users of the Internet. This is content which some users would rather not access or would rather that their children did not access.

Now the identification of content as offensive is subjective and reflects the values of the user, who must therefore exercise some responsibility for controlling access. The judgement of a religious household would probably be different from that of a secular household. The judgement of a household with children would probably be different from that of one with no children. The judgement of what a 12 year old could access might well be different from what it would be appropriate for an 8 year old to view. Tolerance of sexual images might be different from tolerance of violent images.

It is my view that, once we have proper arrangements for handling illegal and harmful content, it is reasonable and right for government and industry to argue that end users themselves have to exercise control in relation to material that they find offensive BUT we should inform users of the techniques and the tools that they can use to exercise such control. What are such techniques and tools? They include:

Of course, it would help parents and others with responsibility for children if they could buy a PC with filtering software pre-installed and set at the maximum level of safety and if the default setting for all web browsers was a child-safe mode. Then adult users of such hardware and software would have the opportunity, when they wished, to switch to a less filtered or completely open mode.

WHAT ELSE WOULD HELP?

Looking at Internet content generally, what else would help? Let me make a few final suggestions:

In the medium term, we are going to see:

THE IMPLICATIONS FOR BROADCASTING

If we can determine an acceptable regime for regulating Internet content, this will raise the question of why we regulate broadcasting content so differently. Indeed, why do we regulate broadcasting at all?

Historically there have been three reasons:

  1. Broadcasting has used scarce spectrum and, in return for free or 'cheap' spectrum, broadcasters have been expected to meet certain public service broadcasting obligations, a requirement which has given regulators the leverage to exercise quite strong controls on content. BUT: increasingly broadcasting is not done using scarce spectrum; it uses satellite, cable or simply the telephone line with technologies like ADSL.
  2. Broadcasting has been seen as having a special social impact because so many people watched the same programme at the same time. BUT: increasingly with multi-channel television and time-shifting devices like the VCR and the PVR, any given broadcast is probably seen by a relatively small proportion of the population.
  3. Broadcasting has been seen as a 'push' technology over which viewers had little control once they switched on the television set. BUT: increasingly viewers are 'pulling' material through the use of VCRs, PVRs, video on demand, near video on demand, podcasting, and so on.
Therefore it is possible to argue that the historic reasons for regulating broadcasting in the traditional ways are fast disappearing. In these circumstances, one could well argue that broadcasting should not be regulated much differently from how we regulate the Internet, or at least how we might regulate the Internet in the manner proposed in this presentation.

Therefore regulation of broadcasting would focus on illegal and harmful content, leaving offensive content as a matter essentially for viewers to block if they thought that appropriate for their family. This would suggest a convergence of the regulation of broadcasting and the Internet to a model which, compared to the present situation, would involve a lot less regulation for broadcasting and a bit more regulation for the Internet.

Two issues are crucial here:

CONCLUSION

Let me attempt to summarize this presentation and my recommendations:

If we do not have a rational debate on the regulation of the Internet and come up with practical and effective proposals, then many of the one-third of UK homes that are still not on the Internet will be deterred from going online, many of those who are on the Net will be reluctant to use it as extensively and confidently as they should, and we run the risk that scare campaigns will be whipped up around particularly harmful or offensive content, tempting politicians and regulators to intervene in ways that the industry would probably find unhelpful.

ROGER DARLINGTON
