Revamp the network to cope with explosion in mobile kit
Coping with end-point number growth and altered traffic flows
Expert Clinic
Three experts; three different views: that's what you get when they look at the impact of the substantial rise in the number of mobile devices accessing the network. All these devices send traffic across the network, and much of it hits servers and triggers storage transactions. Ethernet set up as a fabric can help here, but it has to be upgraded so that its architectural traffic-flow assumptions, its number of addresses, its security, bandwidth and management don't become problems
For Duncan Hughes the term fabric is the key. He sees goodness flowing from cloud-capable Ethernet fabrics linking the sea of users' mobile devices with applications and servers located "in the cloud."
Duncan Hughes - Pre-Sales Engineering Manager at Brocade
It is incredible how ‘connected’ our lives are today. Increasingly sophisticated wireless devices are becoming common everyday tools, leading us to expect (or be expected) to be connected wherever we are. We no longer work only from a desk, buy things off the shop floor, or move money at the bank teller. At the same time, the data, information and applications related to all those activities, when we perform them online, no longer sit on a specific server, mainframe, network or PC. It’s all ‘out there’ in the ‘cloud’.
In a true ‘virtual enterprise’, the network infrastructure must be cloud-optimised through a highly virtualised environment that is simple, flexible and scalable, offering high performance and secure connectivity. By virtualising the infrastructure, IT departments have the flexibility to move assets around the enterprise, and ensure there is enough resource to support them. Simplicity is the key, and with the right management application, they can even do it via their mobile! However, it doesn’t stop there.
The cloud-optimised network is designed to reduce cost, improve agility, and extend virtualisation across the data centre. A key enabling technology is the Ethernet fabric – a new approach to network design that is revolutionising data centre architectures. Compared to classic hierarchical Ethernet networks, the Ethernet fabric delivers higher levels of performance, utilisation, availability, and simplicity.
By enabling the virtual data centre and providing a platform for cloud migration, Ethernet fabrics ensure ‘always-on’ availability and simplify network management, which in turn increases end-user productivity while reducing operational costs. So as far as the data centre is concerned, if virtualisation revolutionised computing, Ethernet fabrics are revolutionising networking.
Users need the cloud to do their jobs; and a cloud-optimised network means more access, more speed and, hallelujah, fewer ‘network’ issues that prevent users from connecting to applications and systems when they need to. But if you try to build a cloud on old plumbing and traditional network designs, you’ll just end up with problems raining down while your IT department drowns... in data, in support calls, in system failures.
Fabrics are perfect for delivering a virtualised infrastructure, so we can all enter the cloud and embrace the evolution; and more importantly have ubiquitous access to the applications and data we all rely on. All of which means you can access that proposal from a colleague’s PC, a phone, a laptop, or direct from the server. You won’t know where you have accessed it from, but you won’t care, because you’re too busy working. And that’s the point. Welcome to the cloud.
Duncan Hughes is a pre-sales Engineering Manager at Brocade, joining when Brocade acquired Foundry Networks, where he was also a systems engineering manager, having previously been at Anite Networks.
Tony Lock, a programme director at Freeform Dynamics, is conscious of limitations coming from basic Ethernet nuts and bolts as the number of connected devices grow. Do we have enough addresses? Will the network get upgraded cleanly?
Tony Lock - Programme Director, Freeform Dynamics
Ethernet was originally conceived for networks consisting of limited numbers of devices, each painstakingly connected and configured. When mainstream business adopted Ethernet, only relatively small numbers of devices, nearly all of them computers of one sort or another, were connected, using the original IPv4 addressing scheme. This was designed to handle what was at the time considered a “vast” number of connected devices, but today it is clear that such optimism was misplaced.
With the numbers of PCs, laptops and now mobile devices exploding, the pressure on Ethernet networks to provide each device with connectivity has the potential to seriously impact the addressing scheme. The use of solutions such as one-to-many Network Address Translation (NAT) allows large numbers of private IP addresses to be hidden behind one or a small range of public IP addresses, but at the cost of added management complexity and security. So how will networks develop to allow the number of devices connected to continue growing rapidly?
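One-to-many NAT can be pictured as a translation table that maps many private (address, port) pairs onto a single public address, each allocated a distinct public port. The class name, addresses and port range below are illustrative assumptions for a toy sketch, not any particular vendor's implementation:

```python
# Toy model of one-to-many NAT (port address translation).
# Many private (ip, port) pairs share one public IP; each flow is
# assigned a unique public source port by the translator.

class NatTable:
    def __init__(self, public_ip, first_port=49152):
        self.public_ip = public_ip
        self.next_port = first_port
        self.mappings = {}  # (private_ip, private_port) -> public_port

    def translate(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.mappings:      # allocate a port on first use
            self.mappings[key] = self.next_port
            self.next_port += 1
        return (self.public_ip, self.mappings[key])

nat = NatTable("203.0.113.10")
a = nat.translate("192.168.0.5", 40000)
b = nat.translate("192.168.0.6", 40000)  # same private port, different host
c = nat.translate("192.168.0.5", 40000)  # repeat flow reuses its mapping
```

The state this table carries, and the fact that two hosts cannot be told apart from outside without it, is exactly the management and security cost Tony describes.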
Of equal importance is the question of how the bandwidth required by a central Ethernet can be calculated, managed and, if need be, “rationed” under hugely escalating pressures on usage. To cater for these demands, major architectural modifications to the network may be needed. Some, such as network ‘flattening’, where aggregation layers of the network are removed, offer the potential for major gains in quality of service, predictability and manageability. Other changes, most notably the migration from IPv4 to IPv6, provide the means to meet the demand for ever more device addresses and new ways to manage service quality.
New pools of IPv4 addresses are diminishing day by day as the rush to connect servers, storage, desktops, laptops and mobile systems continues at a frantic pace. The transition from IPv4 to IPv6 is likely to prove taxing. There is little doubt that IPv6 will grow in popularity, as it is already doing in certain geographies, most notably Japan; the only question is when. IPv4 and IPv6 are likely to be used side by side for many years, adding another layer of complexity to network management, an area hardly free of such challenges.
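The scale gap between the two address schemes is easy to quantify with Python's standard ipaddress module; the addresses below are documentation-reserved examples, and the figures are arithmetic facts rather than network measurements:

```python
import ipaddress

# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

# A dual-stack host simply carries one address from each family,
# which is why the two schemes can run side by side for years.
v4 = ipaddress.ip_address("192.0.2.1")     # documentation range (IPv4)
v6 = ipaddress.ip_address("2001:db8::1")   # documentation range (IPv6)

print(v4.version, v6.version)              # 4 6
print(ipv6_total // ipv4_total == 2 ** 96) # True: 2**96 IPv6 addresses per IPv4 address
```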
Meanwhile the rapid adoption of “virtualisation” is adding further complexity to the mix. The absence of good practices and established processes to help migration projects is inhibiting progress and propagating the deployment of a mix of solutions to extend the usage of the existing address range.
Security and management will also need to be reappraised as mobile connectivity grows in enterprises, especially as the range of devices allowed to connect to corporate systems expands. Many organisations already recognise that their network monitoring and management tools need to be upgraded and this recognition will grow further as networks become more stressed.
The monitoring and management of networks are once again, after a lull of a decade or more, growing in visibility as a major factor in service quality. As device connectivity grows, as flexible IT systems take off and as organisations increase their use of external systems and devices linking to the core, managing resource demand becomes vital to ensure network resources are used according to business goals.
Tony Lock is Programme Director at Freeform Dynamics, responsible for driving coverage in the areas of systems infrastructure and management, IT service management, outsourcing, and emerging hosting models such as Software as a Service and cloud computing.
Greg Ferro is concerned about server virtual machine mobility and about storage traffic, and about how these affect traffic flows in the core of the network.
Greg Ferro - Network Architect and Senior Engineer/designer
The approach to data centre networks has been mostly stagnant since the mid-90s. In Expert Clinic Three, where we talked about flattening the Ethernet fabric, I described the performance problem of Ethernet:
The performance problem is at two levels. The first is described as the North-South/East-West design problem. In the past, all traffic flowed up and down a tree-like network, metaphorically North-South from edge to core and back again.
Now that we have new standards and technologies that allow the use of all connected bandwidth for all flows, we still have another problem: mobility. Simply put, if a virtual server is dynamically migrated from point A in the network to point B, what happens to the traffic flows in the network core? There is no right answer; it depends. It works for some designs, and not for others.
When traffic flowed from server to the data centre core and then outwards to the user, network architects could reasonably predict and design to meet the requirements. Network workloads were static and well known to the team. The rise of well-managed and reliable hypervisors has led to more dynamic server moves, and their impact on the network changes constantly.
The migration process consumes large amounts of bandwidth to achieve memory synchronisation between servers. The server's data workload from users and applications is also a concern, as are the traffic volumes generated by in-band backup tools.
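Back-of-the-envelope arithmetic shows why migration bandwidth matters. The function, the VM size and the protocol-efficiency figure below are illustrative assumptions for a rough estimate, not a model of any specific hypervisor:

```python
# Rough lower bound on the time to copy a VM's RAM once during live
# migration, ignoring re-copies of pages dirtied mid-transfer.

def migration_seconds(ram_gb, link_gbps, efficiency=0.9):
    """efficiency approximates protocol overhead eating into raw link rate."""
    bits = ram_gb * 8                 # gigabits of memory to move
    return bits / (link_gbps * efficiency)

t = migration_seconds(16, 10)         # a 16 GB VM over a 10 GbE link
print(round(t, 2))                    # roughly 14 seconds of sustained load
```

Even this single copy saturates a large share of a 10 GbE link for many seconds, on top of whatever user, application and backup traffic the link already carries.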
But the most significant impact is the rise of iSCSI, NFS and, to a lesser extent, FCoE and the storage traffic from virtual server to array. Storage traffic is not very large in networking terms but it does have specific requirements: latency and reliability are key to successful operation.
Modern data centres use DCB (Data Centre Bridging) Ethernet to help solve the storage challenge in three key areas.
1) Priority Flow Control and Enhanced Transmission Selection can manage latency by applying QoS to Ethernet and IP traffic loads.
2) 10 Gigabit Ethernet solves the bandwidth crunch. It’s rare that a single application needs 10 Gigabit, but combining data, virtualisation and storage traffic means that peak bandwidth is vital.
3) A variety of Layer 2 multipath and multi-chassis link aggregation technologies deliver the path reliability for sub-second failover, keeping storage available in a multidirectional network.
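Enhanced Transmission Selection, in essence, divides link bandwidth among traffic classes in proportion to configured weights when the link is congested. The class names and weights below are illustrative assumptions for a toy calculation, not a real DCB configuration:

```python
# Toy Enhanced Transmission Selection: share a 10 Gbit/s link among
# traffic classes in proportion to their configured weights.

def ets_shares(link_gbps, weights):
    total = sum(weights.values())
    return {cls: link_gbps * w / total for cls, w in weights.items()}

shares = ets_shares(10, {"storage": 50, "vm_migration": 30, "user_data": 20})
# When every class is busy: storage 5.0 Gbit/s, vm_migration 3.0, user_data 2.0.
# In real ETS, bandwidth unused by an idle class is lent to the busy ones.
```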
In short, older data centre designs won’t cope with converged networking, for fundamental reasons. No one is stopping you from sticking with them, but you won’t have good outcomes over the long term. So by all means start implementing today, but plan upgrades for tomorrow so that your data centre is ready for mobility and the hard stuff.
Greg Ferro describes himself as “Human Infrastructure for Cisco and Data Networking”. He works freelance and has spent time at financial institutions, service providers, resellers and dot-coms, at both largish and smallish companies.
The constant onrush of mobile device connectivity means that Ethernet has to be upgraded: to cope with the sheer number of addressable devices, to make the best use of its bandwidth for cross-network (east-west) traffic as well as the core-edge (north-south) traffic it is currently designed for, and to detect and prioritise traffic types.
There is no end-point here, no magic number of one hundred billion devices after which the network will stop growing. Network architects, designers and managers are, as ever, chasing a constantly growing target, and our three experts are giving their views from the here and now. In five years' time they will most likely say something different. But that's the nature of being involved in a changing game. Ethernet has changed amazingly since it was first invented and is going to change more. ®