What's in your datacentre?
It's not a giant central heating system
At the fifty-thousand foot level, a datacentre is nothing more than a box in which IT assets are stored. While it gets more complicated, if you want a quick overview of what's in a datacentre, this is the place to look.
Broadly speaking, the job of the IT equipment in a datacentre is to accept incoming data, such as a request for a web page, process it into something else - such as a web page - and return and/or store the result. To do this it needs three key components: a processing device, commonly known as a server, a network that allows the data to move around inside the datacentre and to connect it to the outside world, and a place to store both live data and data created by the processing.
To fill in some detail, servers come in a number of shapes and sizes, but a broad rule of thumb is that size equals power. At one end of the scale, a blade server that might be hosting a website slots into a manufacturer's chassis - which is not interchangeable with other makers' chassis - and tends to house a couple of processors and a relatively limited amount of memory. At the other end, a large rack-mounted server hosting dozens of so-called virtual servers might be seven units high - commonly abbreviated to 7U, where each unit equates to 1.75 inches or 44.45 mm - and contain up to eight processors and half a terabyte of memory, or more.
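The rack-unit arithmetic above is simple enough to sketch. This is an illustrative snippet, not vendor tooling; the function name is made up, but the conversion factor (1U = 1.75 inches = 44.45 mm) comes from the standard rack dimensions mentioned above.

```python
# Illustrative helper: convert rack units (U) to physical height.
# 1U = 1.75 inches = 44.45 mm, as noted above.

MM_PER_UNIT = 44.45

def rack_height_mm(units: int) -> float:
    """Height in millimetres of a chassis occupying `units` rack units."""
    return units * MM_PER_UNIT

# The 7U server described above works out to just over 311 mm tall:
print(rack_height_mm(7))
```

So a 7U server stands roughly 31 cm high - most of the height of a kitchen kettle, packed with processors.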
Network switches, which allow servers to connect with each other, with the storage systems, and with the outside world, often live in separate racks - each of which is a standard 19 inches wide. Connected to the servers by Ethernet cables, the switch ensures that each bit is routed to the right place transparently, quickly and, in the case of many high-end switches, as securely as possible. Security is just one of the many features you might expect a modern datacentre-class switch to provide.
Storage is the other corner of the triangle. It comes in many forms, both solid-state and rotating media, and can connect to the servers in a variety of ways, depending on the kinds of work it's expected to do. Storage is probably the most complex of the three main datacentre hardware components.
There's also an invisible component: software. Its job is to tell the server what to do, how to do it, and what to do with the result.
Finally, a quick look at cooling. IT equipment shapes its own environment by generating heat. Technology products - servers, networks and storage - work best within certain boundaries, largely defined by temperature and humidity. Step outside those and you could be in trouble.
For example, if you just put a bunch of servers in a box and left them running, they would quickly overheat and, while today's products are smart enough to switch themselves off rather than keep going until they melt, that's not a desirable outcome.
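The self-protection described above boils down to comparing an inlet temperature against a couple of thresholds. Here's a minimal sketch of that logic; the threshold values are illustrative assumptions, not figures from any vendor's specification.

```python
# Minimal sketch of thermal self-protection: warn as the inlet temperature
# leaves the comfortable range, shut down rather than melt.
# Both thresholds below are assumed values for illustration only.

RECOMMENDED_MAX_C = 27.0   # assumed upper end of a comfortable inlet range
CRITICAL_MAX_C = 35.0      # assumed hard limit before self-shutdown

def check_inlet(temp_c: float) -> str:
    """Return the action a server might take at a given inlet temperature."""
    if temp_c > CRITICAL_MAX_C:
        return "shutdown"  # power off before components are damaged
    if temp_c > RECOMMENDED_MAX_C:
        return "warn"      # raise an alert, ramp up the cooling
    return "ok"

print(check_inlet(24.0))  # ok
print(check_inlet(30.0))  # warn
print(check_inlet(40.0))  # shutdown
```

Real servers read these temperatures from on-board sensors and the firmware makes the call, but the principle is the same: degrade gracefully rather than cook.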
So the datacentre needs a way of sucking out waste heat, and of pumping in air cool and dry enough to keep the equipment inside its environmental boundaries. Some of this equipment will be sited inside the datacentre, although much of it lives outside. Managing the cooling systems and making them work efficiently is a discipline all to itself.
Any remaining equipment in the datacentre needs to support the tasks above efficiently, which means ensuring it uses as little power and, preferably, as little space as possible.
This has been a high-level, whistle-stop tour of the datacentre - be sure to check out the other features in this series where we'll be exploring all these technologies and issues in much greater depth. ®