Chinese boffins ginger up Hadoop with OpenFlow funnel

Software-defined networking speeds scheduling across clusters

Few doubt that the premise of software-defined networking (SDN) is a good one: organisations that run a lot of virtual machines and virtual networks can benefit from the flexibility and automation SDN provides.

Critics, however, point out that SDN in its current form might not have particularly broad applicability: there just aren't that many data centres with the traffic problems SDN solves.

Enter an interesting paper from Chinese scholars titled “Bandwidth-Aware Scheduling with SDN in Hadoop: A New Trend for Big Data”, which proposes SDN as a solution to a big data problem.

The authors, from Wuhan's Huazhong University of Science and Technology, note that Hadoop has several task schedulers, but none of them takes available bandwidth into account. That omission, they argue, means “losing optimized opportunities for task assignment.”

With parallelism one of Hadoop's key advantages, missing the chance to slot in a job is obviously not a great outcome. The scholars therefore ask: “Can we combine the bandwidth control capability of SDN with Hadoop system to exploit an optimized task scheduling solution that has high efficiency and agility in terms of job completion time for big data processing?”

Unsurprisingly, their answer is yes, thanks to a new task scheduler they propose called “Bandwidth-Aware Scheduling with SDN in Hadoop”, aka “BASS”.

BASS' approach is to interface with an OpenFlow controller to learn as much as it can about the available bandwidth in a Hadoop cluster and its attendant networking rig. Once BASS has gathered that data, it allocates each task based on how speedily the network can carry its input data to a Hadoop node.
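To make the idea concrete, here is a minimal Python sketch of that kind of bandwidth-aware placement step, assuming a controller that exposes link statistics over REST. The endpoint URL, the JSON field, and the helper names are illustrative assumptions, not the paper's actual implementation or any particular controller's API.

# Minimal sketch of bandwidth-aware task placement in the BASS spirit.
# Assumes an OpenFlow controller exposing link statistics over REST;
# the endpoint, JSON field and names here are illustrative inventions.

import requests

CONTROLLER = "http://controller:8080"  # hypothetical controller address

def available_bandwidth(src: str, dst: str) -> float:
    """Ask the controller for spare bandwidth (Mbit/s) on the src->dst path."""
    resp = requests.get(f"{CONTROLLER}/stats/bandwidth/{src}/{dst}", timeout=2)
    resp.raise_for_status()
    return float(resp.json()["available_mbps"])

def pick_node(data_src: str, size_mb: float, nodes: list[str]) -> str:
    """Place the task on the node the network can feed fastest: estimated
    transfer time is data size divided by currently available bandwidth."""
    def transfer_seconds(node: str) -> float:
        bw = available_bandwidth(data_src, node)
        return float("inf") if bw <= 0 else (size_mb * 8) / bw
    return min(nodes, key=transfer_seconds)

# e.g. pick_node("node-1", 512, ["node-2", "node-3", "node-4"]) chooses the
# node with the shortest estimated copy time for a 512 MB input.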

The authors offer test results suggesting BASS is rather faster than other job schedulers, and even propose an improvement called “Pre-BASS” that adds some extra pre-processing grooming so queues can be made even more efficient.
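The paper gives Pre-BASS only a brief treatment, so the following is a speculative sketch of what such queue grooming could look like: pending tasks are ordered by their best-case placement cost before assignment, so quick, well-placed jobs aren't stuck behind long ones. The names and the heuristic itself are assumptions, not the authors' design.

# Speculative sketch of Pre-BASS-style queue grooming. The paper only says
# the queue gets extra pre-processing; sorting tasks by their cheapest
# available placement is one plausible reading, not the authors' design.

from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    src: str          # node holding the task's input data
    size_mb: float    # input size in megabytes

def groom_queue(tasks: list[Task], nodes: list[str], estimate_seconds) -> list[Task]:
    """Sort pending tasks by the cheapest placement the network currently
    offers, so short, well-placed tasks get slotted in first."""
    def best_cost(task: Task) -> float:
        return min(estimate_seconds(task, node) for node in nodes)
    return sorted(tasks, key=best_cost)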

The paper details tests made on a six-node Hadoop cluster spread across five physical hosts. That is, of course, a long way short of the scale at which many Hadoop clusters operate, but the authors are optimistic they can scale BASS in the future. ®
