The Evolution of The Data Centre

The evolution of the data centre has been no simple linear trajectory. Both the purpose of data centres and the technology powering them have changed dramatically over the years, and while many might have expected steady growth from the early days of the 1950s to now, for a period during the 1980s the fate of the data centre very much hung in the balance. From the birth of the very first data centre in post-war America through to its modern-day equivalent’s place at the heart of the mobile internet, we explore here the history of the data centre as we have come to know it today.

The Early Days and into the 1960s

The origins of the data centre are closely intertwined with the US Army’s technology programmes after the end of WW2, as the country moved into almost half a century of Cold War with the Soviet Union. Along with nuclear weapons and the space programme, ‘information technology’, not yet even an expression at the time, was at the heart of US military strategy. The first serious arrival for the US Defence Department was ENIAC (Electronic Numerical Integrator and Computer), widely regarded as the first general-purpose electronic computer, which covered around 1,800 sq ft of floor space, weighed over 30 tons and required six full-time technical staff to keep it running smoothly. While grand in scale, ENIAC was fairly modest in its abilities, performing around 5,000 operations per second, compared with single modern-day microprocessors which perform well into the billions. Right up until the mid-sixties, computing technology remained principally the preoccupation of military agencies, but even in those early non-commercial years, rapid technological progress was very much the order of the day. During the early 1960s, a major transition took place from computers built around vacuum tubes to systems which used transistors: these lasted longer, took up less space and ran more efficiently and reliably than their vacuum tube equivalents.

Vacuum tubes in the ENIAC computer

And as these technological improvements and reduced space requirements took hold further into the 1960s, computers ceased to be the near-exclusive preserve of the military, as their various commercial applications became more apparent. In much the same way as companies rent infrastructure from data centres today, computers could be rented out on a monthly basis and shared by multiple users. One of the first major commercial projects using a data centre came when American Airlines teamed up with IBM to design a telephone reservation system, the Sabre system, which could process tens of thousands of calls per day. And just as the first years of the 1960s had been marked by a major technological breakthrough, their closing years saw yet another game-changer: this time a shift away from magnetic core memory to solid-state semiconductor memory, which again reduced the cost and size of the data centre while also improving its efficiency.

Into the 1970s and 80s

The 1970s brought the world’s first microprocessors; the first commercial example, Intel’s 4004, was released in 1971. One emerging business opportunity for the first commercial data centres built on these new microprocessors was disaster recovery, and by 1978 the first commercial disaster recovery service had been launched in New York. With personal computing by this time already gaining ground, however, and mainframes increasingly demanding in their cooling requirements, much of the infrastructure moved from the data centre into the office, which for a time seemed to spell the data centre’s demise.

Into the 1980s, personal computing went from strength to strength. The landmark device was the IBM Personal Computer, and over the first years of the decade these machines spread rapidly, with little heed paid within many organisations to the collective impact of so many individual, personal devices and the inefficiencies of scale which often resulted. From 1985, though, technology was clearly starting to point towards the shape of things to come. One particularly notable milestone in the evolution of the data centre was IBM and Cornell University’s joint development of a supercomputer facility at Ithaca, New York. And as the potential role of information technology within organisations became ever more apparent, companies grew increasingly aware of ‘IT’ as a controllable, manageable resource.

The 1990s

With the birth of the World Wide Web, microcomputers increasingly found their way back into the ‘data centre’ in their newly-inaugurated role as servers for the proliferation of emerging websites, which required permanent, around-the-clock hosting. But with relatively inexpensive networking equipment now widely available, many companies opted instead to build server rooms within their own premises.

The New Millennium and Beyond

With internet consumption expanding dramatically in the latter years of the 1990s, and the expectations of internet users growing with it, the need for data centres – somewhere to provide fast internet connectivity and non-stop operation – became ever more pressing for the many companies taking part in this internet revolution.

It was around this time that the data centre, more or less as we know it today, began to take shape. And as processing power and memory requirements have continued to grow since then, the data centre’s position in the 21st century’s technological landscape has been firmly cemented. The advent of mobile devices with high processing capacity, combined with higher mobile bandwidths, has further increased the demands made on data centres, which are now required to store and process ever-increasing amounts of data and serve more and more devices.

One of the key roles of the modern-day data centre is to provide affordable business continuity, minimising disruption for companies whose systems may become impaired or temporarily unavailable. And as cloud computing becomes ever more prevalent, there is a clear collective concern among users of modern-day data centres that high standards of security be maintained. Leading UK data centres today, for instance, should conform to standards such as ISO 9001 for quality management and ISO/IEC 27001 for information security management.

As our collective reliance on the internet shows no sign of abating, and data centres look ever more like the loci of the internet of the future, one concern is their energy and environmental impact. Organisations with their own private data centres, such as Google, have come under fire in recent years for the secrecy around their data centre operations and what might seem a disregard for their levels of energy consumption. The industry is responding, though. Initiatives like Facebook’s Open Compute Project, which received widespread attention several years ago, provide fully public specifications for Facebook’s energy-efficient data centres, thereby challenging other operators to raise their own standards and enjoy the by-product of lower running costs. Although not every operator can benefit from the same economies of scale as the likes of Google or Facebook, the scrutiny which B2B data centres face from their customers, particularly around customer data security and regulatory compliance, is driving the industry very much in the right direction.

Why not join our Managing Director, Conleth McCallan, on a virtual tour of our data centre on YouTube, or contact us to discuss how we might help your business with one of our secure hosting solutions?

Datanet Aspen House data centre

Call our team today on 01252 810010 to find out more about how we can help you.