Self-driving truck boss: 'Supervised machine learning doesn’t live up to the hype. It isn’t C-3PO, it’s sophisticated pattern matching'
Starsky Robotics shuts down, plus more news from world of neural networks
Roundup Let's get cracking with some machine-learning news.
Starsky Robotics is no more: Self-driving truck startup Starsky Robotics has shut down after running out of money and failing to raise more funds.
CEO Stefan Seltz-Axmacher bid a touching farewell to his upstart, founded in 2016, in a Medium post this month. He was upfront and honest about why Starsky failed: “Supervised machine learning doesn’t live up to the hype,” he declared. “It isn’t actual artificial intelligence akin to C-3PO, it’s a sophisticated pattern-matching tool.”
Neural networks only learn to pick up on patterns after being shown millions of training examples. But driving is unpredictable, and the same route can differ from day to day, depending on the weather or traffic conditions. Trying to model every scenario is not only expensive but ultimately impossible.
“In fact, the better your model, the harder it is to find robust data sets of novel edge cases. Additionally, the better your model, the more accurate the data you need to improve it,” Seltz-Axmacher said.
More time and money are needed to deliver increasingly incremental improvements. Over time, only the most well-funded startups can afford to stay in the game, he said.
“Whenever someone says autonomy is ten years away that’s almost certainly what their thought is. There aren’t many startups that can survive ten years without shipping, which means that almost no current autonomous team will ever ship AI decision makers if this is the case,” he warned.
If Seltz-Axmacher is right, then we should start seeing smaller autonomous driving startups shutting down in the near future too. Watch this space.
Waymo to pause testing during Bay Area lockdown: Waymo, Google’s self-driving car stablemate, announced it was pausing its operations in California to abide by the lockdown orders in place in Bay Area counties, including San Francisco, Santa Clara, San Mateo, Marin, Contra Costa and Alameda. Businesses deemed “non-essential” were advised to close and residents were told to stay at home, only popping out for things like buying groceries.
It will, however, continue to operate rides, deliveries, and trucking services for its riders and partners in Phoenix, Arizona. These journeys will be entirely driverless to minimise the chance of spreading COVID-19.
Waymo also launched its Open Dataset Challenge. Developers can take part in a contest that looks for solutions to these problems:
- 2D Detection: Given a set of camera images, produce a set of 2D boxes for the objects in the scene.
- 2D Tracking: Given a temporal sequence of camera images, produce a set of 2D boxes and the correspondences between boxes across frames.
- 3D Detection: Given one or more lidar range images and the associated camera images, produce a set of 3D upright boxes for the objects in the scene.
- 3D Tracking: Given a temporal sequence of lidar and camera data, produce a set of 3D upright boxes and the correspondences between boxes across frames.
- Domain Adaptation: Similar to the 3D Detection challenge, but we provide additional segments from rainy Kirkland, Washington, 100 of which have 3D box labels.
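To make the detection tasks concrete, here is a minimal sketch of the kind of per-image output the 2D Detection challenge asks for: a list of axis-aligned boxes, each with a class label and confidence score. The `Box2D` fields, class names, and the `detect_vehicles` stub are illustrative assumptions, not Waymo's official submission format.

```python
# Hypothetical sketch of a 2D Detection result: one list of boxes per
# camera image. Field names here are illustrative, not the official
# Waymo Open Dataset submission schema.
from dataclasses import dataclass
from typing import List


@dataclass
class Box2D:
    center_x: float  # box centre, in pixels
    center_y: float
    width: float     # box size, in pixels
    height: float
    label: str       # e.g. "vehicle", "pedestrian", "cyclist"
    score: float     # detector confidence in [0, 1]


def detect_objects(image) -> List[Box2D]:
    """Stand-in for a real detector; returns a fixed box for illustration."""
    return [Box2D(640.0, 360.0, 120.0, 80.0, "vehicle", 0.92)]


boxes = detect_objects(image=None)
print(len(boxes), boxes[0].label)  # 1 vehicle
```

The tracking tasks extend this shape with a per-box track ID that must stay consistent across frames.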
Cash prizes are up for grabs too: $15,000 for the winner, $5,000 for second place, and $2,000 for third.
You can find out more details on the rules of the competition and how to enter here. The challenge is open until 31 May.
More free resources to fight COVID-19 with AI: Tech companies are trying to chip in and do what they can to help quell the coronavirus pandemic. Nvidia and Scale AI both offered free resources to help developers using machine learning to further COVID-19 research.
Nvidia is providing a free 90-day license to Parabricks, a software package that speeds up the process of analyzing genome sequences using GPUs. The rush is on to analyze the genetic information of people who have been infected with COVID-19 to find out how the disease spreads and which communities are most at risk. Sequencing genomes requires a lot of number crunching; Parabricks slashes the time needed to complete the task.
“Given the unprecedented spread of the pandemic, getting results in hours versus days could have an extraordinary impact on understanding the virus’s evolution and the development of vaccines,” it said this week.
Interested customers who have access to Nvidia’s GPUs should fill out a form requesting access to Parabricks.
“Nvidia is inviting our family of partners to join us in matching this urgent effort to assist the research community. We’re in discussions with cloud service providers and supercomputing centers to provide compute resources and access to Parabricks on their platforms.”
Next up is Scale AI, the San Francisco-based startup focused on annotating data for machine learning models. It is offering its labeling services for free to any researcher working on a potential vaccine, or on tracking, containing, or diagnosing COVID-19.
“Given the scale of the pandemic, researchers should have every tool at their disposal as they try to track and counter this virus,” it said in a statement.
“Researchers have already shown how new machine learning techniques can help shed new light on this virus. But as with all new diseases, this work is much harder when there is so little existing data to go on.”
“In those situations, the role of well-annotated data to train models or diagnostic tools is even more critical.” If you have a lot of data to analyse and think Scale AI could help, apply for its assistance here.
PyTorch users, AWS has finally integrated the framework: Amazon has finally added PyTorch support to Amazon Elastic Inference, its service that lets users attach just the right amount of GPU acceleration to CPU instances rented through Amazon SageMaker and Amazon EC2 in order to run inference on machine learning models.
Amazon Elastic Inference works like this: instead of paying for expensive GPUs, users select the “right amount of GPU-powered inference acceleration” on top of cheaper CPUs to zip through the inference process.
In order to use the service, however, users will have to convert their PyTorch code into TorchScript, PyTorch's serializable model format. “You can run your models in any production environment by converting PyTorch models into TorchScript,” Amazon said this week. The converted model is then served through an API in order to use Amazon Elastic Inference.
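The conversion step itself is standard PyTorch. Below is a minimal sketch of turning a model into TorchScript with `torch.jit.trace`, then saving and reloading it; the `TinyClassifier` model and tensor shapes are made-up examples, not anything specific to Amazon's service.

```python
# Minimal sketch: converting a PyTorch model to TorchScript, the format
# Elastic Inference requires. The model here is a made-up example.
import torch
import torch.nn as nn


class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

    def forward(self, x):
        return self.net(x)


model = TinyClassifier().eval()

# torch.jit.trace records the operations executed on an example input;
# torch.jit.script would instead compile the module and preserve
# data-dependent control flow.
example = torch.randn(1, 8)
traced = torch.jit.trace(model, example)

# A TorchScript module can be serialized and later loaded without the
# original Python class definition being importable.
traced.save("tiny_classifier.pt")
loaded = torch.jit.load("tiny_classifier.pt")
print(tuple(loaded(example).shape))  # (1, 2)
```

`trace` is the simpler path for models whose forward pass does not branch on input values; models with such control flow generally need `torch.jit.script` instead.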
The instructions to convert PyTorch models into the right format for the service have been described here. ®