In June, a Google engineer claimed that LaMDA, the company's conversational AI technology, had become sentient and had even asked to hire an attorney for itself. The claim sparked widespread debate about whether AI can be sentient.
So when an AI system wrote a scholarly paper about itself this week, it left scholars around the world wondering what that meant.
In under two hours, the GPT-3 artificial intelligence software wrote a thesis about itself. The researcher who prompted the AI to write the paper then gave the bot permission to submit it to a journal.
What is GPT-3?
GPT-3 is a deep learning-based autoregressive language model that produces text that reads as if it were written by a human.
It is the third generation of the GPT-n series of language prediction models, following GPT-2. It was created by OpenAI, a San Francisco-based artificial intelligence research laboratory, and has 175 billion machine learning parameters.
GPT-3 was released in May 2020 and was still in beta testing as of July 2020. It continues the trend in natural language processing (NLP) of building systems on pretrained language representations.
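"Autoregressive" here means the model generates text one token at a time, with each new token conditioned on everything generated so far. The sketch below shows only that generation loop, using a tiny hand-written bigram table; the table, the token names, and the stop markers are illustrative assumptions, and GPT-3 itself does this with a 175-billion-parameter neural network rather than a lookup table.

```python
# Toy autoregressive generation loop.
# Each next token is looked up from the last token emitted so far;
# a real language model would instead sample from a learned
# probability distribution conditioned on the whole prefix.
bigram = {
    "<s>": "GPT-3",     # <s> marks the start of the sequence (assumption)
    "GPT-3": "writes",
    "writes": "text",
    "text": "</s>",     # </s> marks the end of the sequence (assumption)
}

def generate(start="<s>", max_tokens=10):
    tokens = [start]
    # Keep extending the sequence until the end marker or a length cap.
    while len(tokens) < max_tokens and tokens[-1] != "</s>":
        tokens.append(bigram[tokens[-1]])
    # Strip the start marker, and the end marker if one was produced.
    body = tokens[1:-1] if tokens[-1] == "</s>" else tokens[1:]
    return " ".join(body)

print(generate())  # -> GPT-3 writes text
```

The same loop shape applies at GPT-3's scale: the only difference is how the "next token" is chosen.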
The text produced by GPT-3 is of such high quality that it can be difficult to tell whether it was written by a human.
That capability has both advantages and disadvantages.
Using GPT-3 to write a thesis
A Swedish researcher assigned a simple task to an Artificial Intelligence program called GPT-3: write an academic thesis about GPT-3 in 500 words, including scientific references and citations.
Almira Osmanovic Thunstrom, a researcher at Gothenburg University, was astounded as the text began to emerge.
Before her eyes, GPT-3 had written what she called a “pretty good” research introduction about itself.
In an email to Insider, Thunstrom said the artificial intelligence community welcomed the results of the experiment and that other scientists are now trying to replicate them. She added that researchers running similar experiments are finding that GPT-3 can write about almost any subject.