AI hype surge numbers, robo-radiologists, Apple voxels, and lots more
Plus: We'll see you at, er, NIPS next week!
Roundup Here's a human-compiled, totally non-robot-generated summary of AI news beyond what we've already reported over the past week.
AI Index – A team of AI experts have published this year’s annual AI Index report that gathers data to show how the field is progressing and changing over time.
The rise of deep learning has accelerated the AI hype, and it can be difficult to have meaningful conversations and shape policy without basic metrics. The report shows that the number of active AI startups has increased 14 times since 2000, and venture capital has risen six times across the same period.
Demand for AI talent far exceeds the supply of skilled workers, leading to inflated salaries and fierce competition to hire top names. Meanwhile, more researchers are publishing and more students are enrolling in AI classes. The number of published AI papers has increased nine times in the last 20 years, and enrolment in the introductory machine learning course at Stanford University has gone up a whopping 45 times in 30 years.
Although the boom in research and interest has advanced AI, the report shows machines are a long way off from general intelligence. AI may have superior skills in tasks like image and speech recognition or the game Go, but little progress has been made in teaching machines to gather deeper intelligence beyond processing data.
Yoav Shoham, chair of the AI Index and professor emeritus of computer science, said: “AI has made truly amazing strides in the past decade but computers still can’t exhibit the common sense or the general intelligence of even a 5-year-old.”
You can read the full report here.
New AI hub + Fluency – Samsung is the latest major tech company to announce that it will be opening an AI research center in an effort to accelerate the use of AI in its products.
The center will be a joint effort working under Samsung’s mobile and consumer electronics businesses.
The Korean company said in a statement that “a slight reshuffle will be carried out in order to promptly respond to market changes and increase operational efficiency.”
It did not say where the center would be built. Cho Seung-hwan, former vice chief of Samsung’s research and development Software Center, will supervise Samsung Research, and Lee Keun-bae, who used to lead AI research at the Software Center, will head up the new AI center under Samsung Research.
It was also reported this week that Samsung has acquired Fluenty, a Korean AI startup that specializes in a smart assistant capable of generating appropriate replies to messages in English and Korean.
It is hoped that Fluenty’s expertise will help improve Samsung’s smartphone assistant, Bixby, which suffered setbacks due to its poor performance in English.
AI Radiologist – Nvidia and Nuance Communications announced that they are collaborating on a project to accelerate the development of AI for medical imaging.
Nuance Communications is an American software company, based in Burlington, Massachusetts, that focuses on speech and imaging applications.
The goal is to combine Nvidia’s deep learning platform, containing algorithms geared toward image recognition, with Nuance’s PowerScribe, a digital platform for radiology reporting, and PowerShare, its image exchange network used by 70 per cent of all radiologists in the United States, into a complete package known as the Nuance AI Marketplace for Diagnostic Imaging.
Dr Luciano Prevedello, division chief of medical imaging informatics at the Ohio State University, said: “We stand on the edge of a new age in radiology, where artificial intelligence and machine learning will become a necessity in every radiologist's essential toolkit. It is critical for the state of AI adoption and its potential to improve patient outcomes and operations that AI-based tools are more than just available – they must be valuable, validated and valued by the institution of radiology.”
Apple’s VoxelNet – Researchers at Apple have published a paper describing VoxelNet, a neural network that can detect 3D objects in LiDAR data. LiDAR stands for light detection and ranging. It’s a method that allows autonomous cars to build up real-time 3D maps of their surroundings. A pulse of laser light is emitted from the hardware, and when it strikes a surface, the light is reflected and detected by a sensor.
The time of flight – the period it takes for the beam to return – is measured for every pulse. By knowing the speed of light and the reflection time, the distance to the object can be rather accurately calculated. Machines can use LiDAR to map out stuff around them in space, and create a 3D representation of their environment called a 3D point cloud. VoxelNet can be used by software to split up the point cloud into little chunks called voxels, which turn out to be easier for neural networks to process.
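The ranging and voxelization steps described above can be sketched in a few lines of Python. This is a minimal illustration of the general ideas, not Apple's actual pipeline; the function names and the 0.2-metre voxel size are our own assumptions.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_tof(round_trip_seconds):
    """Distance to a surface from a LiDAR pulse's round-trip time.

    The beam travels out and back, so halve the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def voxelize(points, voxel_size=0.2):
    """Snap 3D points into cubic voxels of side `voxel_size` metres.

    Returns a dict mapping a voxel index (ix, iy, iz) to the list of
    points inside it, giving a network fixed-size chunks to process
    instead of a raw, irregular point cloud.
    """
    grid = {}
    for x, y, z in points:
        key = (math.floor(x / voxel_size),
               math.floor(y / voxel_size),
               math.floor(z / voxel_size))
        grid.setdefault(key, []).append((x, y, z))
    return grid

# A pulse returning after 200 nanoseconds hit something roughly 30 m away.
print(round(distance_from_tof(200e-9), 1))
```

A real system would run millions of such pulses per second and hand the resulting voxel grid to the detection network.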
The upshot is that VoxelNet can be used to highlight objects such as pedestrians and cyclists by putting a bounding box around them. Apple received a permit to test driverless cars on California roads back in April, and VoxelNet gives us a hint of how the iGiant is developing its autonomous car software.
You can read more about it here.
New AI research hub – The launch of the AI Now Institute, a research center focused on the sociological impacts of AI, at New York University (NYU) was announced mid-November. Machine learning is a pretty cool subject right now, but there are a few ongoing worries: the lack of transparency in decision making, bias in data, potential job losses, privacy, security and safety issues. People have different opinions regarding these areas, and it’s not always easy to see how the technology will impact us as it advances and becomes more common.
The AI Now Institute focuses on four key areas: rights and liberties, labor and automation, bias and inclusion, and safety and critical infrastructure. Researchers and members of the advisory board have a range of backgrounds, from law and computer science to anthropology and engineering. It will be led by Kate Crawford, cofounder and director of research, who is also a professor at NYU and a researcher at Microsoft, as well as Meredith Whittaker, cofounder and executive director, who led Google’s Open Research Group.
The full list of team and advisory board members is available here.
The Cray CS-Storm 500 NX system has up to eight Nvidia Tesla Pascal P100 GPUs per node attached to Intel Xeon E5-2600 v4 processors. Samsung went for three cabinets with Pascal GPUs that make use of Nvidia’s NVLink hardware, which ramps up GPU-to-GPU communications. It’ll also include Samsung’s NVMe SSDs for storage and 2666MHz DDR4 RDIMMs.
Transferring AI – Amazon joined the Open Neural Network Exchange (ONNX), an open-source collaboration that makes it easier to transfer AI systems across different frameworks. Amazon will rework its Apache MXNet framework to fit a more universal format ONNX members are working on. Other frameworks include Facebook’s Caffe 2, PyTorch, and Microsoft’s Cognitive Toolkit. Hardware-angled companies such as Arm, IBM, Huawei, and Qualcomm have also joined this partnership, lured by the prospect that deep learning models written in different formats can be optimized across different chips.
We wrote more about ONNX in a previous roundup. ®