Data-digesting cloud colossus touted by Fujitsu
Gulps chunks of info sniffed from punters, rivals, sensors
Being online to serve your customers is not enough anymore. You have to spy on your customers and prospective clients, as well as anyone else you can get data on, and mash it all up to do big data analytics to drive more revenues and profits.
It's a big job, and not everyone – OK, hardly any company – has the skills to get it all done. Smelling an opportunity to make some money, Fujitsu is launching a big data cloud service that allows you to offload all or some of this work to the Japanese IT giant.
The Data Utilization Platform Services launched by Fujitsu today have what is arguably the most boring name for a product in the history of IT, but they fit into Fujitsu's overall cloud strategy. That strategy spans everything from private clouds running Microsoft's Azure platform-as-a-service (including Fujitsu's own COBOL and Java extensions), to Linux and Windows slices running on top of x86 iron using CentOS and KVM, to a very slick engineering cloud with both SaaS and PaaS options for running computer-aided design software remotely. Both of these cloud offerings were fluffed up last year.
The big data cloud service that Fujitsu announced today is not just about using Fujitsu's iron to chew on data. It is also about using Fujitsu's software and systems to gather up telemetry and data from sensors, application transactions, clickstreams, logs, multimedia files, and other data formats; to do big data crunching and munching on it; and to use the resulting reduced information to drive web sites, transactions, or other actions within the company to try to rake in more dough or better serve existing customers.
The big data cloud service is all part of the broader Convergence Services strategy that Fujitsu has cooked up to foster an IT-driven "human centric intelligent society," akin to what IBM is talking about when it says Smarter Planet in all of its advertisements.
Fujitsu: big data is a big pain, so offload it to our cloud
Fujitsu's big data cloud service is offered on an a la carte basis, so companies can choose Fujitsu to manage the collection of telemetry, the processing of the data, or the reintegration of that data with their existing systems.
There are four sets of services in the big data cloud. Fujitsu says it has created the systems to manage "a huge volume of diverse data in different formats" and has packaged this up as the data management and integration services. The communications and control service is designed to communicate with various kinds of equipment – from cell phones and tablets to industrial machinery and consumer electronics – and to control them remotely where desirable.
The data collection and detection service does real-time sorting of the incoming data streams to figure out what data needs to be acted upon immediately and which data should be chucked into big buckets for chewing on later. And finally, the data analysis services do the actual cooking of the big data to try to render it down to something useful.
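Fujitsu has not said how its triage step works, but the idea described above – split an incoming stream into events that need immediate action and bulk data parked for later batch analysis – can be sketched in a few lines. All the field names and the threshold here are hypothetical, not anything Fujitsu has disclosed:

```python
# Toy sketch of real-time stream triage: urgent readings are routed for
# immediate action, everything else goes into the big bucket for later.
# Event fields and the alert threshold are made up for illustration.

def triage(event, alert_threshold=90.0):
    """Return 'act_now' for urgent sensor readings, 'store' otherwise."""
    if event.get("type") == "sensor" and event.get("value", 0) > alert_threshold:
        return "act_now"
    return "store"

stream = [
    {"type": "sensor", "value": 97.2},          # over threshold: act on it
    {"type": "clickstream", "url": "/checkout"},
    {"type": "sensor", "value": 41.5},          # routine: store for later
]

urgent = [e for e in stream if triage(e) == "act_now"]
backlog = [e for e in stream if triage(e) == "store"]
```

In a real deployment the classification rules would be far richer than a single threshold, but the two-way split – act now versus chew on later – is the shape of the service Fujitsu describes.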
This being a big data cloud service and all, Fujitsu is not disclosing what underlying software it is using to underpin the service. The Reg is very nearly certain that Hadoop, the open source MapReduce data muncher, and its related Hadoop Distributed File System are in there somewhere. But Fujitsu could be using other data munching technologies and NoSQL data stores. The company did say that its SPATIOWL positional information service, which was launched last June, can hook into the service.
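For the uninitiated, the MapReduce pattern that Hadoop implements is simple at heart: a map phase turns raw records into key/value pairs, and a reduce phase aggregates the values for each key. This toy word-count-style example over fake log lines shows the shape of it – it is emphatically not Fujitsu's pipeline, whose internals are undisclosed:

```python
# Minimal illustration of the MapReduce pattern (what Hadoop does at scale,
# distributed over many nodes). The log lines here are invented examples.
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (token, 1) pair for every token in every line."""
    for line in lines:
        for token in line.split():
            yield token, 1

def reduce_phase(pairs):
    """Reduce: sum the counts for each distinct token."""
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

logs = ["GET /home", "GET /cart", "GET /home"]
counts = reduce_phase(map_phase(logs))
# counts is {"GET": 3, "/home": 2, "/cart": 1}
```

Hadoop's value is running exactly this kind of job across a cluster, with HDFS feeding each node its slice of the data – which is why The Reg suspects it is lurking under Fujitsu's service.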
The Data Utilization Platform Services are available now, and in April Fujitsu plans to add two more services to the cloudy stack. The first is a data exchange service that will allow chewed data to be burst to multiple applications in multiple formats as needed. The second is an information application support service that takes the analytical results from the data chewing and uses them to drive a web services application.
Fujitsu is also expected to offer some human intelligence (humint) as well, in the form of data curation services – which is a funky way of saying it will use its own experience in combing through its own big data to help customers figure out what data is important and what can be sent to the bit bucket.
Pricing for the big data cloud service is only available through the bidding process, given the complex nature of the service. ®