Nonprofit OpenAI looks at the bill to craft a Holy Grail AGI, gulps, spawns commercial arm to bag investors' mega-bucks
Capital idea, old boy... yes, plenty of capital
Analysis OpenAI, a leading machine-learning lab, has launched for-profit spin-off OpenAI LP – so it can put investors' cash toward the expensive task of building artificial general intelligence.
The San-Francisco-headquartered organisation was founded in late 2015 as a nonprofit, with a mission to build, and encourage the development of, advanced neural network systems that are safe and beneficial to humanity.
It was backed by notable figures including killer-AI-fearing Elon Musk, who has since left the board, and Sam Altman, the former president of Silicon Valley VC firm Y Combinator. Altman stepped down as YC president last week to focus more on OpenAI.
Altman is now CEO of OpenAI LP. Greg Brockman, co-founder and CTO, and Ilya Sutskever, co-founder and chief scientist, are also heading over to the commercial side and keeping their roles in the new organization. OpenAI LP stated clearly that it wants to "raise investment capital and attract employees with startup-like equity."
There is still a nonprofit wing, imaginatively named OpenAI Nonprofit, though it is a much smaller entity considering most of its hundred or so employees have switched over to the commercial side, OpenAI LP, to reap the benefits of its stock options.
“We’ve experienced firsthand that the most dramatic AI systems use the most computational power in addition to algorithmic innovations, and decided to scale much faster than we’d planned when starting OpenAI,” the lab's management said in a statement this week. “We’ll need to invest billions of dollars in upcoming years into large-scale cloud compute, attracting and retaining talented people, and building AI supercomputers.”
OpenAI refers to this odd split between OpenAI LP and OpenAI Nonprofit as a “capped-profit” company. The initial round of investors, including LinkedIn co-founder Reid Hoffman and Khosla Ventures, are in line to receive 100 times the amount they’ve invested from OpenAI LP's profits, if everything goes to plan. Any excess funds afterwards will be handed over to the non-profit side. To pay back these early investors, and then some, OpenAI LP will therefore have to find ways to generate fat profits from its technologies.
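The arithmetic of that cap is straightforward, and can be sketched as follows. Note this is our own illustrative simplification – the actual terms of OpenAI LP's payout waterfall have not been disclosed, and the function and parameter names here are ours, not OpenAI's:

```python
def split_profits(total_profit, invested, cap_multiple=100):
    """Illustrative split of profits under a 'capped-profit' model:
    investors are repaid up to cap_multiple times their stake, and
    anything beyond that cap flows to the nonprofit side.

    This is a simplified sketch; OpenAI LP's real terms aren't public.
    """
    cap = invested * cap_multiple          # maximum investor payout
    to_investors = min(total_profit, cap)  # investors paid first, up to the cap
    to_nonprofit = max(total_profit - cap, 0)  # any excess goes to the nonprofit
    return to_investors, to_nonprofit

# e.g. $1m invested, $150m of eventual profit:
# investors would receive $100m, the nonprofit $50m
print(split_profits(150, 1))  # (100, 50)
```

In other words, until OpenAI LP has cleared 100 times its initial funding in profit, the nonprofit arm sees nothing.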
The “capped-profit” model has raised eyebrows. Several machine-learning experts told The Register they were somewhat disappointed by OpenAI’s decision. It once stood out among other AI orgs for its nonprofit status, its focus on developing machine-learning know-how independent of profit and product incentives, and its dedication to open-source research.
Now, for some, it appears to be another Silicon Valley startup stocked with well-paid engineers and boffins.
Conflict of interest?
“Most tech companies have business models that rely on secrecy or monopolistic protections like copyright and patents,” said Daniel Lowd, an associate professor at the department of computer and information science at the University of Oregon in the US.
"These practices are the opposite of openness. Already, there are concerns that companies like Google are patenting machine learning methods and that this could stifle innovation. A profit incentive is a conflict of interest."
Rachel Thomas, co-founder of fast.ai and an assistant professor at the University of San Francisco's Data Institute, agreed. “I already see OpenAI as similar to the research labs at major tech companies: they hire people from the same backgrounds; focus on publishing in the same conferences; are primarily interested in resource-intensive academic research.
"There is nothing wrong with any of this, but I don’t really see it as ensuring that AI is beneficial to humanity, nor as democratizing AI, which was their former mission statement. To me, forming OpenAI LP is just one more step to being indistinguishable from any other major tech company doing lots of academic, resource-intensive research.”
OpenAI’s commitment to openness, as its name would suggest, has already been called into question. Last month, it withheld a trained language model known as GPT-2, claiming it was too dangerous to release: the software is capable of translating text, answering questions, and generating prose from writing prompts.
At first glance, its output appears competent; however, it composes words with little or no regard to facts and balance, and descends into nonsense, contradictions, and repetition. It was feared that, if the software fell into the wrong hands, it would be used to churn out masses of fake news articles and product reviews, spam and phishing emails, instant messages, and other text that would fool humans. And thus, OpenAI suppressed it.
Some details about the system were published in a paper, along with a smaller version of the model; however, the full payload was withheld. The AI community was split on this decision: one half supported OpenAI’s efforts to avoid harming society through the dissemination of its technology, while the other half thought the move was hypocritical and harmful to research.
Now that OpenAI hopes to generate a profit to pay back its investors in spades, will it sell and monetize this powerful GPT-2 software and similar work? We're assured it has no plans to.
OpenAI has tried to quash concerns that it will drift from its primary mission – fulfilling its grandiose humanity-benefiting charter – by insisting the for-profit company is ultimately controlled by the nonprofit's board. That means whatever the for-profit side wants to do, the nonprofit board will ensure it abides by the charter, which states it will "build safe and beneficial artificial general intelligence," by having the final say on any major decision.
“The mission comes first even with respect to OpenAI LP’s structure," it stated. "We may update our implementation as the world changes. Regardless of how the world evolves, we are committed — legally and personally — to our mission.”
Looking for the Holy Grail
That mission is to develop so-called artificial general intelligence (AGI) that helps humanity and is safe. Such a system, seen only in science fiction so far, would be able to do pretty much anything a human could, and more. It is the Holy Grail of computer intelligence.
“OpenAI is an outstanding research organization, so it’ll be disappointing if they won’t be as open in the future," Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, an AI research lab founded by the late Microsoft co-founder Paul Allen, told The Register. "AGI is still the stuff of science fiction and philosophical speculation. I see no evidence that anyone, OpenAI included, is making tangible progress towards it.”
We note that there may be people – such as OpenAI LP staffers – on the non-profit board, overseeing the for-profit arm's activities, who have a direct financial interest in seeing the for-profit side succeed: OpenAI said “only a minority of board members are allowed to hold financial stakes in the partnership at one time.”
Some may feel this will still hinder the board's ability to keep the for-profit side on the right side of the charter, putting safety and humanity ahead of money.
However, OpenAI stated that anyone with a conflict of interest will not be allowed to vote: "Only board members without such [financial] stakes can vote on decisions where the interests of limited partners and OpenAI Nonprofit’s mission may conflict—including any decisions about making payouts to investors and employees." When The Register asked how many people on the board were working on the for-profit and non-profit side, we did not get a clear answer.
As mentioned above, OpenAI Nonprofit will only receive leftover profits after the initial investors get a 100X return on their investments. It’s hard to judge how likely it is that OpenAI LP will reach that target. OpenAI didn’t disclose how much the initial investments amounted to, but did tell us that it doesn't have any immediate plans to make money.
“We don't plan to monetize GPT-2 or other prior research projects at OpenAI. We're focused on the mission,” a spokesperson said. “Though, this structure gives us flexibility to do other things if we want — always subject to approval from the board.” ®