Chinese boffins ginger up Hadoop with OpenFlow funnel

Software-defined networking speeds scheduling across clusters

Few doubt that the premise of software-defined networking (SDN) is a good one: organisations that run a lot of virtual machines and virtual networks can benefit from the flexibility and automation SDN provides.

Critics, however, point out that SDN in its current form might not have particularly broad applicability: there just aren't that many data centres with the traffic problems SDN solves.

Enter an interesting paper from Chinese scholars titled “Bandwidth-Aware Scheduling with SDN in Hadoop: A New Trend for Big Data”, which proposes SDN as a solution to a big data problem.

The authors, from Wuhan's Huazhong University of Science and Technology, note that Hadoop has several task schedulers but none of them take into account available bandwidth. That lack, they argue, means “losing optimized opportunities for task assignment.”

With Hadoop's parallelism one of its key advantages, missing the chance to slot in a job is obviously not a great outcome. The scholars therefore ask the question: “Can we combine the bandwidth control capability of SDN with Hadoop system to exploit an optimized task scheduling solution that has high efficiency and agility in terms of job completion time for big data processing?”

Unsurprisingly, their answer is yes, thanks to a new task scheduler they propose called “Bandwidth-Aware Scheduling with SDN in Hadoop”, aka “BASS”.

BASS' approach is to interface with an OpenFlow controller to learn as much as it can about the available bandwidth in a Hadoop cluster and its attendant networking rig. Once BASS has gathered that data, it allocates tasks based on how speedily the network can carry them to a Hadoop node.
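To see how bandwidth might feed into placement, here is a rough Python sketch: it scores each node by estimated transfer time plus compute time and picks the cheapest. The controller query is assumed to have already happened, and the node names and cost model are invented for the example; this is an illustration of the general idea, not the published BASS algorithm.

# A toy sketch of bandwidth-aware placement. Assumes per-node available
# bandwidth (in Mbps) has already been fetched from an OpenFlow controller;
# the names and the simple cost model are illustrative, not BASS itself.

from typing import Dict

def completion_time(data_mb: float, bandwidth_mbps: float, compute_s: float) -> float:
    """Estimated job time: data transfer (MB converted to megabits) plus compute."""
    return (data_mb * 8) / bandwidth_mbps + compute_s

def pick_node(data_mb: float, compute_s: float, bandwidth: Dict[str, float]) -> str:
    """Choose the node with the lowest estimated completion time for this task."""
    return min(bandwidth, key=lambda n: completion_time(data_mb, bandwidth[n], compute_s))

# Example: 512 MB of input, roughly 30 seconds of compute on any node.
links = {"node1": 1000.0, "node2": 100.0, "node3": 400.0}
print(pick_node(512, 30.0, links))  # node1: the fastest path to the data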

The authors offer test results suggesting BASS is rather faster than other job schedulers, and even suggest an improvement called “Pre-BASS” that adds a pre-processing step to groom job queues for even greater efficiency.

The paper details tests made on a six-node Hadoop cluster spread across five physical hosts. That is, of course, a long way short of the scale at which many Hadoop clusters operate, but the authors are optimistic they can scale BASS in the future. ®
