OpenAI is a non-profit artificial intelligence (AI) research company that aims to promote and develop friendly AI in such a way as to benefit humanity as a whole. Founded in late 2015, the San Francisco-based organization aims to "freely collaborate" with other institutions and researchers by making its patents and research open to the public. The founders (notably Elon Musk and Sam Altman) are motivated in part by concerns about existential risk from artificial general intelligence.
History
In October 2015, Musk, Altman and other investors announced the formation of the organization, pledging over US$1 billion to the venture.
On April 27, 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research.
On December 5, 2016, OpenAI released Universe, a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications. On February 21, 2018, Musk resigned his board seat, citing "a potential future conflict (of interest)" with Tesla's AI development for self-driving cars, but remained a donor.
Participants
- Chairman: Sam Altman, president of the startup accelerator Y Combinator
- Research director: Ilya Sutskever, a former Google expert on machine learning
- CTO: Greg Brockman
Other backers of the project include:
- Reid Hoffman, LinkedIn co-founder
- Peter Thiel, PayPal co-founder
- Greg Brockman, former chief technology officer at Stripe
- Jessica Livingston, a founding partner of Y Combinator
Companies:
- Amazon Web Services, Amazon.com's cloud-services subsidiary
- Infosys, an IT consulting firm
The group started in early January 2016 with nine researchers. According to Wired, Brockman met with Yoshua Bengio, one of the "founding fathers" of the deep learning movement, and drew up a list of the "best researchers in the field". Microsoft's Peter Lee stated that the cost of a top AI researcher exceeds the cost of a top NFL quarterback prospect. While OpenAI pays corporate-level (rather than nonprofit-level) salaries, it doesn't currently pay AI researchers salaries comparable to those of Facebook or Google. Nevertheless, Sutskever stated that he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." Brockman stated that "the best thing that I could imagine doing was moving humanity closer to building real AI in a safe way." OpenAI researcher Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead.
Motives
Some scientists, such as Stephen Hawking and Stuart Russell, believe that if advanced AI someday gains the ability to re-design itself at an ever-increasing rate, an unstoppable "intelligence explosion" could lead to human extinction. Musk characterizes AI as humanity's biggest existential threat. OpenAI's founders structured it as a non-profit so that they could focus its research on creating a positive long-term human impact.
OpenAI states that "it's hard to fathom how much human-level AI could benefit society," and that it's equally difficult to comprehend "how much it could damage society if built or used incorrectly". Research on safety cannot safely be postponed: "because of AI's surprising history, it's hard to predict when human-level AI might come within reach." OpenAI states that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible..." Co-chair Sam Altman expects the decades-long project to surpass human intelligence.
Vishal Sikka, former CEO of Infosys, stated that an "openness" where the endeavor would "produce results generally in the greater interest of humanity" was a fundamental requirement for his support, and that OpenAI "aligns very nicely with our long-held values" and their "endeavor to do purposeful work". Cade Metz of Wired suggests that corporations such as Amazon may be motivated by a desire to use open-source software and data to level the playing field against corporations such as Google and Facebook that own enormous supplies of proprietary data. Altman states that Y Combinator companies will share their data with OpenAI.
Strategy
Musk poses the question: "what is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity." Musk acknowledges that "there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about"; nonetheless, the best defense is "to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower."
Musk and Altman's counter-intuitive strategy of trying to reduce the risk that AI will cause overall harm, by giving AI to everyone, is controversial among those who are concerned with existential risk from artificial intelligence. Philosopher Nick Bostrom is skeptical of Musk's approach: "If you have a button that could do bad things to the world, you don't want to give it to everyone." During a 2016 conversation about the technological singularity, Altman said that "we don't plan to release all of our source code" and mentioned a plan to "allow wide swaths of the world to elect representatives to a new governance board". Greg Brockman stated that "Our goal right now... is to do the best thing there is to do. It's a little vague."
All such discussions, however, have as a background the almost unlimited range of AI applications. In addressing the possibility of AI-enabled sex bots, for example, one commentator has asked this question: "Are we really willing to let our society be infiltrated by autonomous software and hardware agents whose details of operation are known only to a select few?"
Products
Gym
Gym aims to provide an easy-to-set-up general-intelligence benchmark with a wide variety of environments (somewhat akin to, but broader than, the ImageNet Large Scale Visual Recognition Challenge used in supervised learning research), and it hopes to standardize the way environments are defined in AI research publications, so that published research becomes more easily reproducible. The project claims to provide the user with a simple interface. As of June 2017, Gym can only be used with Python. As of September 2017, the Gym documentation site is no longer maintained, and active work is focused instead on its GitHub page.
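Gym's interface boils down to two calls, reset() and step(action), which any environment implements regardless of its contents. As a minimal sketch of that convention (this is an illustrative toy environment, not actual Gym code, and CoinFlipEnv is a made-up name):

```python
import random

class CoinFlipEnv:
    """Toy environment mimicking the Gym reset()/step() convention.
    Illustrative only -- not part of the real `gym` package."""

    def reset(self):
        self.steps = 0
        return 0  # initial observation

    def step(self, action):
        # Reward 1.0 when the agent's guess matches a fair coin flip.
        self.steps += 1
        reward = 1.0 if action == random.randint(0, 1) else 0.0
        done = self.steps >= 10  # episode ends after 10 steps
        return self.steps, reward, done, {}  # observation, reward, done, info

env = CoinFlipEnv()
obs = env.reset()
total = 0.0
done = False
while not done:
    # A random policy, just to drive the interaction loop.
    obs, reward, done, info = env.step(random.randint(0, 1))
    total += reward
```

Because every environment exposes the same loop, the same agent code can be benchmarked across very different tasks, which is the standardization Gym aims for.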
RoboSumo
In "RoboSumo", virtual humanoid "metalearning" robots initially lack even the knowledge of how to walk, and are given the goals of learning to move around and of pushing the opposing agent out of the ring. Through this adversarial learning process, the agents learn how to adapt to changing conditions; when an agent is then removed from this virtual environment and placed in a new virtual environment with high winds, the agent braces to remain upright, suggesting it had learned how to balance in a generalized way. OpenAI's Igor Mordatch argues that competition between agents can create an intelligence "arms race" that can increase an agent's ability to function, even outside the context of the competition.
Debate Game
OpenAI launched the Debate Game, which teaches machines to debate toy problems in front of a human judge. The purpose is to research whether such an approach may assist in auditing AI decisions and in developing explainable AI.
OpenAI Five
OpenAI Five is the name of a team of five OpenAI-curated bots that play the competitive five-on-five video game Dota 2, learning to compete against human players at a high skill level entirely through trial-and-error algorithms. Before the bots became a team of five, the first public demonstration occurred at The International 2017, the annual premier championship tournament for the game, where Dendi, a professional Ukrainian player, lost to a bot in a live 1v1 matchup. After the match, CTO Greg Brockman explained that the bot had learned by playing against itself for two weeks of real time, and that the learning software was a step toward creating software that can handle complex tasks "like being a surgeon". The system uses reinforcement learning: the bots improve over time by playing against themselves hundreds of times a day for months, and are rewarded for actions such as killing an enemy and destroying towers. By June 2018, the bots had expanded their ability to play together as a full team of five, and they were able to defeat teams of amateur and semi-professional players. At The International 2018, OpenAI Five played two games against professional players. Although the bots lost both games, OpenAI considered it a successful venture, stating that playing against some of the best Dota 2 players allowed it to analyze and adjust its algorithms for future games.
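The self-play loop described above can be sketched in miniature: two copies of the same simple learner play a matching-pennies game against each other, each updating its action-value estimates from the resulting reward. This is a hedged illustration of the self-play idea only, not OpenAI's actual training code, and the Learner class and its parameters are invented for this sketch:

```python
import random

class Learner:
    """Minimal epsilon-greedy bandit learner (illustrative only)."""

    def __init__(self):
        self.values = [0.0, 0.0]  # estimated value of actions 0 and 1
        self.counts = [0, 0]

    def act(self, epsilon=0.2):
        if random.random() < epsilon:
            return random.randint(0, 1)  # explore
        return 0 if self.values[0] >= self.values[1] else 1  # exploit

    def update(self, action, reward):
        # Incremental mean update of the chosen action's value estimate.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

# Self-play: both sides are the same kind of learner, so each provides
# an ever-adapting opponent for the other.
a, b = Learner(), Learner()
for _ in range(1000):
    move_a, move_b = a.act(), b.act()
    reward_a = 1.0 if move_a == move_b else -1.0  # zero-sum payoff
    a.update(move_a, reward_a)
    b.update(move_b, -reward_a)
```

The same structure, scaled up enormously in model size, game complexity, and compute, is the shape of training-by-self-play that the paragraph describes.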
Dactyl
Dactyl uses machine learning to train a robot Shadow Hand from scratch, using the same reinforcement learning algorithm code that OpenAI Five uses. The robot hand is trained entirely in simulation, even though the simulation does not perfectly model physical reality.
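One technique OpenAI has described for making policies trained in an inaccurate simulator transfer to real hardware is domain randomization: physical parameters are resampled every episode, so the policy cannot overfit to any single simulator configuration. The parameter names and ranges below are purely illustrative assumptions, not OpenAI's actual values:

```python
import random

def sample_sim_params():
    """Resample simulator physics for a new training episode.
    Names and ranges here are hypothetical, for illustration only."""
    return {
        "friction":  random.uniform(0.5, 1.5),   # surface friction scale
        "mass_kg":   random.uniform(0.02, 0.06), # manipulated object mass
        "latency_s": random.uniform(0.0, 0.04),  # simulated control latency
    }

for episode in range(3):
    params = sample_sim_params()
    # A real trainer would rebuild the simulated hand with these
    # parameters and run one training episode here.
```

A policy that succeeds across the whole randomized family of simulators is more likely to treat the real world as just one more variation it has already seen.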
See also
- Machine Intelligence Research Institute
- Future of Humanity Institute
- Future of Life Institute
- OpenCog
- Vicarious
- Open-source robotics
- Partnership on AI
- Open Neural Network Exchange
External links
- Official website
- Interviews
- with chairs Musk and Altman
- with employee Andrej Karpathy