AI can now generate fake human bodies and faces, OpenAI to share a larger GPT-2 model, and more

Which model is real? The top one? The one on the bottom left? Or the one in the middle?

Roundup Hello, and welcome to your regular AI roundup. We have a video of Mark Zuckerberg making a bad joke at F8, a neural network that generates fake whole human bodies, with their clothes on, and more. Enjoy.

AI at F8: Mark Zuckerberg kicked off Facebook’s annual developer conference, F8, last week in Silicon Valley with his usual spiel about how the company is desperately trying to use AI to keep the social media platform safe.

All eyes and ears were on Zuck, especially since the company has been embroiled in a string of scandals, ranging from possible political wrongdoing to downright stupid mistakes.

So it’s only right he bangs on about his new favourite word: privacy. He even tried to make an awkward joke about it, but it didn’t really go down well.

Not one to be deterred, however, he continued harping on about his next favourite word – yep, you’ve guessed it: safety. AI is being used to deal with harmful language, images and videos.

Engineers have trained LASER, a sentence-embedding model that maps text from 93 languages into the same latent space, to help translate between rarer language pairs. The hope is that Facebook will get better at detecting hate speech and online bullying across multiple languages.
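To get a feel for how a shared embedding space enables that kind of cross-lingual transfer, here's a minimal sketch using the community laserembeddings Python package – an illustrative stand-in, not Facebook's production pipeline; the official tooling lives in the fairseq LASER repository:

```python
# Minimal sketch: comparing sentences across languages with LASER-style
# shared embeddings. Requires the model files first:
#   python -m laserembeddings download-models
import numpy as np
from laserembeddings import Laser

laser = Laser()

# Roughly the same sentence in English, German, and French.
sentences = ["I really hate you", "Ich hasse dich wirklich", "Je te déteste vraiment"]
langs = ["en", "de", "fr"]

# Each sentence becomes a 1024-dimensional vector in one shared space,
# regardless of its language.
embeddings = laser.embed_sentences(sentences, lang=langs)

# Cosine similarity: translations of the same sentence land close together,
# which is what lets a classifier trained on English labels transfer over.
def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings[0], embeddings[1]))  # en vs de: high similarity
print(cosine(embeddings[0], embeddings[2]))  # en vs fr: also high
```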

For visual content, Facebook uses a mixture of techniques. It has a computer vision model, known as a panoptic feature pyramid network, that segments images by object so it can identify things in the foreground and background. For video, Facebook has trained software on 65 million public Instagram videos to recognise specific actions, so it can identify violent or graphic footage.
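Facebook's open-source detectron2 library ships pre-trained Panoptic FPN models, so you can try the same model family yourself. The sketch below runs one on a single image – purely illustrative, and not Facebook's internal moderation system:

```python
# Illustrative sketch: panoptic segmentation with a pre-trained Panoptic FPN
# from Facebook's open-source detectron2 library.
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-PanopticSegmentation/panoptic_fpn_R_50_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-PanopticSegmentation/panoptic_fpn_R_50_3x.yaml")
predictor = DefaultPredictor(cfg)

image = cv2.imread("photo.jpg")  # hypothetical input image
outputs = predictor(image)

# Panoptic output labels every pixel, covering both "things" (countable
# foreground objects such as people) and "stuff" (background regions such
# as sky or road) - exactly the foreground/background split described above.
panoptic_seg, segments_info = outputs["panoptic_seg"]
for segment in segments_info:
    print(segment["category_id"], "thing" if segment["isthing"] else "stuff")
```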

These systems may sound impressive, but they struggle with real content that deviates from their training data. Facebook, for example, failed to take down the live-streamed video of the Christchurch mosque shootings.

There are more details about AI and safety at Facebook here.

In other news, Zuckerberg said Facebook’s video-calling device and smart speaker, Portal, will be making its way to Canada and Europe. It’s currently only available to customers in the US.

Meanwhile, Facebook has hired as many as 260 contractors in India to categorize millions of pieces of people's content on the social network, to train its AI-based filtering systems.

Fake AI bodies: You’ve heard about fake AI faces, but did you know that neural networks can now dream up completely imaginary beings, face and body and all?

You can watch DataGrid’s demo video on YouTube to see the results.

DataGrid, a startup based in Tokyo, has built a generative adversarial network (GAN) that spits out images of nonexistent people wearing make-believe clothes. Why? Well, “automatic generation of full-body model AI is expected to be used as a virtual model for advertising and apparel EC [e-commerce],” said DataGrid. Computer-generated characters can be designed to look exactly how their creators want them to look: perfect and unflawed.
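DataGrid hasn't published its architecture, but the underlying idea is standard GAN training: a generator maps random noise to images while a discriminator learns to tell those fakes from real photos, and the two improve together. Here's a toy, DCGAN-style generator in PyTorch – purely illustrative, and at nowhere near the resolution of DataGrid's results:

```python
# Toy sketch of the generator half of a GAN. In training, a discriminator
# network would score these outputs against real photos, and both networks
# would be updated adversarially. Architecture here is generic DCGAN-style,
# not DataGrid's actual (unpublished) model.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=128, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # Project the latent vector up to a 4x4 feature map...
            nn.ConvTranspose2d(latent_dim, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.ReLU(True),
            # ...then repeatedly double the spatial resolution.
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, channels, 4, 2, 1), nn.Tanh(),  # 64x64 RGB in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# Sample a batch of "people who don't exist" (at toy resolution).
g = Generator()
noise = torch.randn(4, 128, 1, 1)
fake_images = g(noise)
print(fake_images.shape)  # torch.Size([4, 3, 64, 64])
```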

The next step is to bring these AI bodies to life, DataGrid said. “We will further improve the accuracy of the whole-body model automatic generation AI and research and develop the motion generation AI. In addition, we will conduct demonstration experiments with advertising and apparel companies to develop functions required for actual operation.”

Northrop Grumman is collaborating with universities on ML research: US defence contractor Northrop Grumman has announced a research consortium to apply AI to cyber-security.

“In today’s environment, machine learning, cognition and artificial intelligence are dramatically reshaping the way machines support customers in their mission,” said Eric Reinke, vice president and chief scientist of mission systems at Northrop Grumman.

“The highly complex and dynamic nature of the mission demands an integrated set of technologies and we are excited to partner with academia to enhance our customers’ mission.”

Some key areas include: “multiple sensor track classification, identification and correlation; situational knowledge on demand; and quantitative dynamic adaptive planning.”

Three groups of researchers drawn from top US universities – including Carnegie Mellon University, Johns Hopkins University, Massachusetts Institute of Technology, Purdue University, Stanford University, University of Illinois at Chicago, University of Massachusetts Amherst, and the University of Maryland – have collectively received $1.2m for the research.

A bigger GPT-2 model is coming! OpenAI is planning to release larger and more powerful versions of its GPT-2 language model.

The AGI research lab divided the AI community with its decision to withhold the full version of its Reddit-trained language model, claiming it was potentially too dangerous to release. Some applauded OpenAI for playing it safe, since the model could conceivably be manipulated to spit out hate speech or fake news for bot accounts. Others believed it was all a front designed to whip up a media frenzy.

Instead of publishing the full model, OpenAI gingerly released a smaller model dubbed GPT-2-117, containing 117m parameters rather than the full 1.5bn. Now, it’s planning to unleash a larger model with 345m parameters. The larger model performs better than the smaller 117m model, but not as well as the full-sized one.
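If you want to poke at the released checkpoints yourself, here's a hedged sketch using the Hugging Face transformers library, where the model names "gpt2" and "gpt2-medium" correspond to the 117m- and 345m-parameter versions (OpenAI's own release lives in its openai/gpt-2 GitHub repo):

```python
# Sampling from a released GPT-2 checkpoint via Hugging Face transformers.
# "gpt2" is the 117m-parameter model; "gpt2-medium" is the 345m one.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_name = "gpt2-medium"  # the newly released, mid-sized model
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

prompt = "In a shocking finding, scientists discovered"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Nucleus (top-p) sampling keeps completions coherent but varied.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```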

The 762m- and 1.5bn-parameter models will be reserved for researchers in the AI and security communities who are “working to improve societal preparedness for large language models,” OpenAI said.

“In making our 345M release decision, some of the factors we considered include: the ease of use (by various users) of different model sizes for generating coherent text, the role of humans in the text generation process, the likelihood and timing of future replication and publication by others, evidence of use in the wild and expert-informed inferences about unobservable uses, proofs of concept such as the review generator mentioned in the original blog post, the strength of demand for the models for beneficial purposes, and the input of stakeholders and experts,” it said in a statement.

“We remain uncertain about some of these variables and continue to welcome input on how to make appropriate language model publication decisions.”

OpenAI has described this as a “staged release strategy”, whereby it will publish progressively larger versions of the model over time. “The purpose of our staged release of GPT-2 is to give people time to assess the properties of these models, discuss their societal implications, and evaluate the impacts of release after each stage,” OpenAI said. ®
