Back to home page click here


Since 2003, I have written regular articles on information technology for Connect, which used to be a separate trade union and in January 2010 became a section of the larger union Prospect. Since 2013, I have contributed such articles for both the Connect section newsletter "DigitalEye" and the Prospect magazine "Profile". The text of all these articles, with relevant hyperlinks, is filed on my web site and this page brings together all those from 2016. If you would like to comment on any of them e-mail me.

March 2016 How Secret Algorithms Are Increasingly Shaping Your Life
July 2016 From Gutenberg To Zuckerberg To The Quantum Internet
November 2016 Have We Lost Control Of The Net?

All that data that is being collected about you is being used for both good and ill, as our columnist ROGER DARLINGTON explains.


We all know that there is a lot of digital data out there. We appreciate that the rate of growth in that volume of data is exponential. We’ve all heard the term big data. But putting figures on these trends and understanding what is happening stretches and twists the mind.

A recent estimate suggests that at least 2.5 quintillion bytes of data is produced every day (that’s 2.5 followed by a staggering 18 zeros!). That’s everything from data collected by the Curiosity Rover on Mars to your Facebook photos from your latest holiday.

Unsurprisingly, the largest ‘big data’ company in the world is Google. This one organisation processes 3.5 billion requests per day and stores 10 exabytes of data (10 billion gigabytes!). Facebook, Microsoft, and Amazon all give Google a run for their money; Facebook alone has 2.5 billion pieces of content, 2.7 billion ‘likes’ and 300 million photos – all of which adds up to more than 500 terabytes of data.

Currently most of this data is generated by people – the more than three billion who are now on the Net. In future, machines and devices will create most of the digital data in the world through what is popularly known as the Internet of Things. Over six billion devices are expected to be connected this year.

Which brings us to ‘big data’ – a term for data sets that are so large or complex that traditional data-processing applications are inadequate. To handle big data, organisations – private and public – use automated calculation systems, typically algorithms that depend on creating correlations between large sets of often diverse data.

Ultimately data analytics is all about selection and essentially it is pattern recognition. Data cannot speak for itself; it is enabled to speak through the algorithm used to mine the data. And these algorithms make inferences that lead to decisions that range widely in utility and importance.
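For the technically curious, the kind of pattern recognition described above can be boiled down to a few lines of code. The sketch below computes a Pearson correlation coefficient – one of the simplest ways an algorithm "finds" a relationship in data. The figures are invented purely for illustration.

```python
# A minimal sketch of letting data "speak" through an algorithm:
# computing the Pearson correlation between two invented data series.
from math import sqrt

def pearson(xs, ys):
    """Return the Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    std_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (std_x * std_y)

# Hypothetical example: hours of exercise per week vs. a health-risk score.
exercise = [0, 2, 4, 6, 8]
risk     = [9, 7, 6, 4, 2]
print(round(pearson(exercise, risk), 2))  # a strong negative correlation
```

A coefficient near -1 or +1 suggests a strong relationship – but, as the column notes, a correlation is only an inference: the algorithm has no idea whether exercise causes lower risk, and decisions built on such inferences can be flawed.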

One type of decision might be which web pages should be presented to you when you do a Google search on a particular term or what advertisements or news stories should be shown to you on Facebook. Another category of decision might relate to your creditworthiness and whether you should be given a loan or might determine your health risk and whether you should be provided with insurance cover.

There are some obvious and huge benefits to such algorithmic techniques. Our use of the Net and our e-commerce offerings become much more personal to us and therefore more relevant and useful. Automated decision-making is usually faster and more objective than the use of slow, inefficient and sometimes prejudiced people. Health apps can warn us of certain risks, given certain personal behaviours. Smart devices will enable us to control who can enter our house and drive our car.

On the other hand, there are clearly many risks. The data may be wrong or partial. The inferences may be mistaken or flawed. Decision-making can become opaque and beyond human control or intervention. Some have argued that we are already in a ‘black box society’ where the algorithms at the heart of big data analytics are beyond human understanding and exercise too much control.

For instance, it is said that the algorithms behind Google’s search engine are so complex that nobody understands them. And financial markets are based on split-second decisions about buying and selling stocks and shares for reasons that might be irrational and dangerous to the global economy.

There are no easy answers but, for starters, citizens and consumers need the right to view and correct personal data, organisations need to be able to explain the basics of their algorithms and decision-making systems, and there should be effective appeal mechanisms involving humans against decisions based on algorithms.

We all need to have some understanding of what is going on with big data and of the power of algorithmic authority. Because, if you are not at the table, then you are on the menu.

Depressed by the Brexit decision? Look at the bigger picture: we are entering a new Renaissance, explains our columnist ROGER DARLINGTON


I recently read a fascinating book called “Age Of Discovery” written by Ian Goldin and Chris Kutarna [click here]. The theme of the book is that we are living in a time that is in effect a New Renaissance and we should learn some of the lessons from the original Renaissance some 500 years ago.

In the introductory chapter, the authors write:
"For the first time ever, the number of poor people in the world has plummeted (by over one billion people since 1990) and the overall population has swelled (by some two billion) at the same time. Scientists alive today outnumber all scientists who ever lived up to 1980, and – in part thanks to them - average life expectancy has risen more in the past fifty years than in the previous 1,000."

At the heart of the original Renaissance was the invention of the printing press and an underlying platform of the new Renaissance is the Internet. Goldin and Kutarna compare the two communications technologies in sections of their book headed Gutenberg and Zuckerberg respectively.

In about 1450, German entrepreneur Johann Gutenberg combined movable type, oil-based ink, a press, and paper to set off a communications revolution. The number of books existing beforehand – all hand-produced over a period of 1,500 years – doubled in a mere 50 years. It doubled again in the next 25 years. Growth was exponential.

In 2004, Harvard student Mark Zuckerberg conceived what we now call Facebook, built on the communications network called the Internet. If you have viewed the excellent film “The Social Network” [for my review click here], you will have seen the explosive growth of his invention in a matter of months. Again growth was exponential. Today Facebook has some 1.7 billion active monthly users (including me).

Facebook may be the largest network of people using the Net but the Internet of Things takes the Net to a whole new level. In 2011, there were as many networked devices on the planet as people. By 2015, they outnumbered us three to one. Yet again, growth is exponential.

At the very core of digital networks and digital devices is the integrated circuit or microchip which carries multiple transistors on a wafer-thin piece of silicon. In 1965, Gordon Moore – who went on to co-found Intel – noted that the number of transistors that could be fitted onto a chip was doubling around every two years. This observation was dubbed Moore’s Law and has held true ever since. Once more, a case of exponential growth.
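To see what that doubling means in practice, here is a simple model of Moore’s Law. The 1971 starting point (the Intel 4004, with roughly 2,300 transistors) is real; the rest is a straight-line projection of the doubling rule, not actual chip data.

```python
# A rough illustration of Moore's Law: a transistor count that doubles
# every two years. Starting figure is the Intel 4004 (1971, ~2,300
# transistors); later figures are the model's projection, not real chips.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Model transistor count assuming a doubling every `doubling_years`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011):
    print(year, f"{transistors(year):,.0f}")
# 20 years = 10 doublings = a 1,024-fold increase;
# 40 years takes ~2,300 transistors to over 2 billion.
```

The striking point is that each doubling adds as much capacity as all the previous doublings combined – which is why 40 years of Moore’s Law turns thousands of transistors into billions.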

However, the laws of physics mean that Moore’s Law cannot hold for ever. At the present rate of progress, scientists may reach the scale limit of a reliable silicon switch within a decade. One option then is to exchange silicon for some other material that forces electrons to behave as we wish and the present best candidate is graphene.

Another, more radical, option is to move to quantum computing which exploits the bizarre feature of the quantum world whereby an electron can be in two different states simultaneously – something called superposition [for more information click here].
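A crude way to get a feel for superposition is to simulate it. In the sketch below, a qubit in an equal superposition of the states 0 and 1 gives each outcome with probability one half when measured. This is pure simulation – the "amplitudes" here are just ordinary numbers standing in for the quantum ones.

```python
# A toy sketch of superposition: a qubit with amplitudes (alpha, beta)
# collapses to 0 with probability alpha^2 and to 1 with probability
# beta^2 when measured. Purely illustrative, not real quantum mechanics.
import random
from math import sqrt

def measure(alpha, beta):
    """Collapse a qubit with amplitudes (alpha, beta) to 0 or 1."""
    return 0 if random.random() < alpha ** 2 else 1

alpha = beta = 1 / sqrt(2)          # equal superposition of |0> and |1>
trials = [measure(alpha, beta) for _ in range(10_000)]
print(sum(trials) / len(trials))    # close to 0.5
```

What the toy model cannot capture is the crucial part: before measurement, a real qubit is genuinely in both states at once, which is what lets a quantum computer explore many possibilities in parallel.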

In 2011, a Canadian company called D-Wave Systems brought to market the world’s first commercial quantum computer. So the quantum concept has been proved. Now it has to be scaled.

These computers, operating at the smallest of scales, could be used to resolve some of the biggest of questions. In their book, Goldin & Kutarna suggest some of the questions that might be answered: “how exactly the different levels of a biological system affect one another; how consciousness emerges; and what the ultimate fate of the universe will be”.

Meanwhile physicists have been using quantum mechanics to think about new types of communications systems. One idea is to use the power of what is called quantum teleportation – roughly, how to pass a tiny bit of data from one place to another without its having to travel through the intervening space.

As of 2015, the record distance for a quantum jump was 150 km, the minimum distance between the ground and an orbiting satellite. So one day we might have a quantum Internet [for more information click here]. Now that would be a new Renaissance.

Link: the world's first quantum satellite click here

A succession of disturbing events has recently cast ever more doubt on the security of the Internet. Our columnist ROGER DARLINGTON asks:


In October – the month before the bitterly-contested US presidential election – the transparency campaign web site Wikileaks started to publish e-mails from the personal Gmail account of Hillary Clinton’s campaign chairman John Podesta. In over two dozen batches in the weeks before polling day, some 50,000 e-mails were released.

The content and the timing of these leaks were clearly designed to inflict maximum damage on Clinton’s electoral prospects and conversely give a boost to her Republican challenger Donald Trump.

Who would have the technical know-how and political wish to mount such a cyber assault? Many commentators have speculated that the culprit could be Russia – either official agencies or private individuals perhaps acting with the support of the state – although Wikileaks has denied this.

Whether or not this particular attack was mounted by Russians, the Obama administration has officially accused Russia of attempting to interfere in the 2016 elections by hacking the computers of the Democratic National Committee and other political organisations. The denunciation came from the Office of the Director of National Intelligence and the Department of Homeland Security.

Of course, Russia is not alone in being accused of launching cyber attacks on parts of America’s political, business, and critical infrastructure communities. China, Iran and North Korea have also been publicly blamed for aggressive actions in cyberspace.

Again in October, popular websites such as Twitter, Spotify, and Reddit were among a number of sites taken offline. Each site uses a company called Dyn, which was the target of the attack, to direct users to its website.

Now Dyn provides part of the Domain Name System - an Internet "phone book" which directs users to the Net address where the website is stored. Such services are a crucial part of web infrastructure.
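The "phone book" analogy can be made concrete. In the toy sketch below, a dictionary maps human-readable names to numeric addresses – which is, in essence, what DNS does. The entries are illustrative, and real DNS is a distributed, hierarchical system rather than a single table.

```python
# A toy "phone book" illustrating what DNS does: mapping human-readable
# names to numeric Internet addresses. Entries are illustrative only;
# real DNS is a distributed, hierarchical system, not one dictionary.
dns_table = {
    "twitter.com": "104.244.42.1",
    "spotify.com": "35.186.224.25",
    "reddit.com":  "151.101.1.140",
}

def resolve(name):
    """Look up a name in the table; without this step, no site is reachable."""
    address = dns_table.get(name)
    if address is None:
        raise LookupError(f"cannot resolve {name}")
    return address

print(resolve("reddit.com"))
```

This also shows why attacking Dyn was so effective: knock out the resolver and every browser that asks it "where is reddit.com?" gets no answer, even though the websites themselves are running normally.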

Dyn came under attack through something called a distributed denial of service (DDoS) attack that deploys thousands of machines to send co-ordinated messages to overwhelm the service. In this case, the global event involved tens of millions of Internet addresses.
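The arithmetic of a denial-of-service attack is brutally simple, as this toy model shows: a server can handle only so many requests per second, and once junk traffic swamps that capacity, legitimate requests are crowded out. The figures are invented for illustration.

```python
# A toy model of denial of service: a server with fixed capacity serves
# requests; once junk traffic exceeds capacity, legitimate requests are
# crowded out in proportion to their share of the flood. Figures invented.
def served_legitimate(capacity, legitimate, junk):
    """Return how many legitimate requests get through in one second."""
    total = legitimate + junk
    if total <= capacity:
        return legitimate
    # Requests are dropped in proportion to their share of total traffic.
    return capacity * legitimate // total

print(served_legitimate(capacity=10_000, legitimate=1_000, junk=0))           # 1000
print(served_legitimate(capacity=10_000, legitimate=1_000, junk=10_000_000))  # 0
```

With no junk traffic, every genuine user is served; with ten million junk requests a second, effectively none are – which is exactly what visitors to Twitter and Reddit experienced that day.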

Security analysts believe the assault was launched through the Internet of Things (IoT): hackers used web-connected home devices, such as CCTV cameras and printers, to mount the attack.

Security firm Flashpoint said it had confirmed that the attack used what are called botnets infected with malware called "Mirai". Many of the devices involved come from Chinese manufacturers, with easy-to-guess usernames and passwords that cannot be changed by the user - a vulnerability that the malware exploits.

"Mirai scours the web to find IoT devices protected by little more than these factory-default usernames and passwords and then enlists the devices in attacks that pile up junk traffic at an online target until it can no longer accommodate legitimate visitors or users." However, the owner of the device would generally have no way of knowing that it had been compromised to use in an attack.
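The weakness Mirai exploits can be sketched in a few lines. A defender auditing a home network might simply check each device’s login against a list of known factory defaults – the same kind of list the malware itself tries. The credential pairs and device names below are illustrative examples, not Mirai’s actual list.

```python
# A sketch of the weakness Mirai exploits: devices still using
# factory-default usernames and passwords. An audit flags any device
# whose login appears in a list of known defaults. Examples invented.
KNOWN_DEFAULTS = {("admin", "admin"), ("root", "12345"), ("user", "user")}

def is_vulnerable(username, password):
    """Flag a device still using a factory-default credential pair."""
    return (username, password) in KNOWN_DEFAULTS

devices = [("camera-1", "admin", "admin"), ("printer-2", "admin", "s3cure!x")]
for name, user, pw in devices:
    print(name, "VULNERABLE" if is_vulnerable(user, pw) else "ok")
```

The sting in the tail, as the column notes, is that on many of the affected devices the default password cannot be changed at all – so even a diligent owner who runs such a check can do nothing about the result.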

We do not know who launched this attack or why, but we do know that it compromised a key part of the web’s infrastructure and there is no guarantee that it will not happen again.

It is not just private data such as e-mails that hackers can abuse. Public data – which is so useful to citizens – can be hacked. As Tim Berners-Lee, the inventor of the web, warned when in London for a recent conference: “If you disrupted traffic data for example, to tell everybody that all the roads south of the river are closed, so everybody would go north of the river, that would gridlock you [and] disable the city.” This is described as “the Italian Job scenario”.

As another illustration of the dangers, Berners-Lee said criminals could manipulate open data for profit, for example by placing bets on the bank rate or consumer price index and then hacking into the sites where the data is published and switching the figures.

It’s no wonder that Chancellor Philip Hammond has just announced a new £1.9 billion cyber security strategy. The money almost doubles the amount set out for a similar strategy in 2011. Implementation cannot come soon enough.
