US Senators want more AI, while Microsoftie Paul Allen wants to use it to save wildlife, etc
DeepMind also collaborating with Unity
Roundup Welcome to this week's AI Roundup. It looks like the US government does care about AI after all: four Senators are urging federal agencies to use more of it via new legislation. Microsoft had a few announcements at its Ignite conference, and DeepMind is collaborating with Unity on its research.
New playground for AI: DeepMind is working with Unity Technologies, the company behind the popular Unity game engine, to build new virtual environments to aid its AI research.
Unity already has some experience with machine learning, and released the latest version of its ML-Agents toolkit last year. The software is designed to help developers test reinforcement learning (RL) algorithms in simulation.
DeepMind is best known for its research in RL, an area of AI in which agents are trained to perform specific tasks through rewards. Games are the perfect environment for testing RL methods, as performance can be easily tracked with scores.
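The reward-driven training loop described above can be sketched in a few lines. Below is a toy tabular Q-learning agent in an invented five-state corridor - nothing like DeepMind's actual environments or code, just an illustration of an agent learning a task from score alone:

```python
import random

# Toy environment: a corridor of 5 states; the agent earns a reward
# only by reaching the rightmost state. All numbers are invented.
N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # move left or right

def step(state, action):
    """Apply an action; reward 1.0 only on reaching the goal state."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action,
            # occasionally explore a random one.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q_table = train()
# After training, the greedy policy in every non-goal state is "move right".
policy = {s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)}
```

The score is the only feedback the agent gets, which is exactly why game-like environments with built-in scoring make such convenient RL testbeds.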
“Games and simulations have been a core part of DeepMind’s research programme from the very beginning and this approach has already led to significant breakthroughs in AI research,” Demis Hassabis, CEO and co-founder of DeepMind, said in a statement.
The hope is that as RL algorithms improve, they'll become useful in practice. But jumping from perfect simulations to the messy real world isn't easy.
DeepMind and safety: Here's more DeepMind news. Its safety team now has a Medium blog, and has published a post about some of the technical challenges in making sure AI systems are developed safely.
The team focuses on three areas: specification, robustness, and assurance. Specification is about ensuring a model behaves as its developer intended. There are numerous stories of agents exploiting bugs in games or finding ingenious shortcuts to rack up points without actually completing the task properly.
A common example is an OpenAI bot trained to play a speedboat racing game. To guide it along the track, the team awarded points whenever the agent hit targets laid out along the route. Instead of trying to complete a lap in the fastest time, the AI speedboat worked out that it could earn more points by ramming the targets repeatedly, until the accumulated damage eventually caused it to erupt in flames.
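The boat's antics boil down to a proxy reward diverging from the true objective. A toy sketch, with every number invented purely for illustration, shows how a mis-specified score can rank a degenerate policy above the intended one:

```python
# Invented reward scheme: points per target hit (the proxy) versus
# actually finishing the lap (the true objective).
TARGET_POINTS = 10      # proxy reward per target hit
LAP_BONUS = 50          # reward for genuinely completing a lap
STEPS = 100             # episode length

def loop_on_targets():
    """Pathological policy: circle back and ram the same targets forever."""
    hits = STEPS // 2               # suppose a hit every other step
    return hits * TARGET_POINTS, 0  # (proxy score, laps completed)

def race_properly():
    """Intended policy: hit a few targets en route, finish the lap."""
    return 5 * TARGET_POINTS + LAP_BONUS, 1

proxy_loop, laps_loop = loop_on_targets()
proxy_race, laps_race = race_properly()
# The mis-specified reward ranks the degenerate policy higher,
# even though it never completes the task.
```

An RL agent optimising the proxy score will happily pick the looping policy, which is precisely the specification failure DeepMind's post describes.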
A lack of robustness is a problem all deep learning systems face. Neural networks are really only good at memorising the patterns seen during training, so it's easy to trick them with examples that differ only slightly from that data.
Adversarial examples loaded with carefully crafted noise can make an image classification model misidentify a picture. There are tons of examples of this happening: cats mistaken for guacamole, a turtle classified as a rifle, and so on.
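One standard recipe for such noise is the fast gradient sign method: nudge every input value a small step in the direction that most increases the model's error. The sketch below applies the idea to a hand-built linear classifier, where the input gradient of the score is just the weight vector; the weights, data, and labels are all invented, and a real attack would target a deep network:

```python
import numpy as np

# A linear classifier: score = w . x, label "cat" if the score is positive.
# For this model the gradient of the score with respect to x is simply w,
# so stepping each component against sign(w) lowers the score the fastest
# while changing no single value by more than epsilon.
rng = np.random.default_rng(0)
w = rng.normal(size=64)                                   # "trained" weights
x = 0.5 * w / np.linalg.norm(w) + rng.normal(scale=0.05, size=64)  # a confident cat

def predict(v):
    return "cat" if w @ v > 0 else "guacamole"

epsilon = 0.2
x_adv = x - epsilon * np.sign(w)      # small per-component step against the gradient

clean_label = predict(x)
adv_label = predict(x_adv)
max_change = np.abs(x_adv - x).max()  # perturbation bounded by epsilon
```

Even though no component of the input moves by more than 0.2, the prediction flips - the essence of why small, targeted noise is so effective against models that have merely memorised training patterns.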
Assurance is a tricky one. The folks on DeepMind’s safety team have defined it as the quality that ensures a system can be understood and controlled during its operation.
Although researchers create these models themselves, it's difficult to work out exactly what patterns a machine has learned from its data to arrive at an answer. That's no good if you're trying to assess how a medical system might spit out the wrong diagnosis when you don't know how it really works.
If you're interested in AI safety, you can read more about it here.
AI making AI: At its annual Ignite conference, Microsoft announced that its cloud-based Azure Machine Learning service can now tweak developers' neural networks automatically.
Azure Machine Learning is designed to make it easier to create neural networks for a specific task. Microsoft is aiming for an “end-to-end” solution, a fancy way of saying you can build and train a model using its tools and then deploy the system on its cloud right after.
It won’t create the whole model for you, but it can automatically tweak the hyperparameters - the settings chosen before training begins that determine how well a system learns the right patterns. Careful tuning is required for a model to perform well, and it’s normally a fiddly manual job.
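Automated tuning of this kind typically boils down to searching over candidate settings and keeping whichever scores best on validation data. Here's a minimal grid-search sketch with a made-up scoring function standing in for an actual training run; the parameter names and the formula are illustrative, not Azure's API:

```python
import math

def validation_score(learning_rate, batch_size):
    """Stand-in for training a model and measuring validation accuracy.
    Invented formula that peaks at learning_rate=0.01, batch_size=32."""
    lr_term = math.exp(-(math.log10(learning_rate) + 2.0) ** 2)
    bs_term = math.exp(-((batch_size - 32) / 64.0) ** 2)
    return lr_term * bs_term

def grid_search():
    """Try every combination of candidate settings, keep the best."""
    best_score, best_params = -1.0, None
    for lr in (0.0001, 0.001, 0.01, 0.1, 1.0):
        for bs in (8, 16, 32, 64, 128):
            score = validation_score(lr, bs)
            if score > best_score:
                best_score = score
                best_params = {"learning_rate": lr, "batch_size": bs}
    return best_score, best_params

best_score, best_params = grid_search()
```

In a real service the inner call is a full training run, which is why handing the search over to automated cloud tooling is attractive: each evaluation is expensive, and the machine can run many of them in parallel.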
There are a few more tiny AI tidbits in Microsoft's announcement here.
Microsoft co-founder is trying to save wildlife with AI: Vulcan, the company set up by Paul Allen, who co-founded Microsoft alongside Bill Gates, is focusing on using AI for wildlife conservation.
Its AI arm, Vulcan Machine Learning Center for Impact (VMLCI), is partnering up with non-profits, NGOs, universities, corporations and institutes. Other Vulcan departments are already working on projects tackling “maritime activity, coral reef health and wildlife populations.”
Machine learning models can help sift through satellite images and other forms of data taken from sensors used to tag animals to help estimate populations and understand ecosystems. VMLCI is also looking at how AI can be applied to broader issues like rising sea temperatures and overfishing.
US Artificial Intelligence in Government Act: Four Senators are urging the US government to invest in and use AI to help federal agencies carry out “data-related planning”. Brian Schatz (D-HI), Cory Gardner (R-CO), Rob Portman (R-OH), and Kamala Harris (D-CA) have introduced the Artificial Intelligence (AI) in Government Act, a piece of legislation that would help the government adopt AI more widely.
“The United States won’t have the global competitive edge in AI if our own government isn’t making the most of these technologies,” Senator Schatz, the ranking member of the Senate Subcommittee on Communications, Technology, Innovation, and the Internet, said in a statement.
“This bill will give the federal government the resources it needs to hire experts, do research, and work across federal agencies to use AI technologies in smart and effective ways.”
The Act proposes to:
- Create a separate office within the General Services Administration tasked with providing expert advice to government agencies, focusing on AI policy issues and on how the US can use the technology to remain competitive.
- Set up an external advisory board for AI and policy.
- Work with the Office of Management and Budget on how best the government should use and invest in AI, including what new roles and skills are needed.
You can read more about it here. ®