New York (The Verge) — OpenAI’s latest breakthrough is astonishingly powerful, but still fighting its flaws.
OpenAI’s GPT-3 is the latest version of its impressive, text-generating, autocomplete AI programs. Some think it might be the first step toward creating true artificial intelligence, while others are skeptical of anything that produces so many errors and biased answers. Here’s how GPT-3 works and what it means for the future of AI.
The program itself is called GPT-3 and it’s the work of San Francisco-based AI lab OpenAI, an outfit that was founded with the ambitious (some say delusional) goal of steering the development of artificial general intelligence or AGI: computer programs that possess all the depth, variety, and flexibility of the human mind. For some observers, GPT-3 — while very definitely not AGI — could well be the first step toward creating this sort of intelligence. After all, they argue, what is human speech if not an incredibly complex autocomplete program running on the black box of our brains?
Similar leaps have happened before: the deep learning breakthroughs in computer vision in the early 2010s brought with them a number of computer-vision-enabled technologies, from self-driving cars, to ubiquitous facial recognition, to drones. It’s reasonable, then, to think that the newfound capabilities of GPT-3 and its ilk could have similar far-reaching effects.
Like all deep learning systems, GPT-3 looks for patterns in data. To simplify things, the program has been trained on a huge corpus of text that it’s mined for statistical regularities. These regularities are unknown to humans, but they’re stored as billions of weighted connections between the different nodes in GPT-3’s neural network. — James Vincent/@verge
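To make the idea of “mining text for statistical regularities” concrete, here is a deliberately tiny sketch in Python. It is not GPT-3 (which learns billions of weighted connections in a neural network); it only counts which word follows which in a toy corpus, then autocompletes by repeatedly picking the most frequent continuation. The corpus and the `autocomplete` function are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for this example.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Train": count the statistical regularities -- here, just
# how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word, steps=4):
    """Greedily extend `word` with the most frequent next word."""
    out = [word]
    for _ in range(steps):
        continuations = follows[out[-1]]
        if not continuations:
            break
        out.append(continuations.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # e.g. "the cat sat on the"
```

GPT-3 differs from this sketch in scale and mechanism — it conditions on long stretches of preceding text rather than a single word, and its “counts” are implicit in learned network weights — but the underlying task, predicting a plausible continuation from patterns in training text, is the same.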