Artificial intelligence has written a scientific paper about itself and it will change everything

If the researchers choose to publish the algorithm’s output as is, it will raise major ethical dilemmas.

GPT-3 has not stopped shining since its debut. Every day, specialists delight in new and impressive works built on this remarkable text-generation system. From songs and speeches to sports summaries, lectures, movie reviews and even web comments, anyone interested in AI can hardly miss OpenAI’s algorithmic marvel.

Recently, researcher Almira Osmanovic Thunström may have opened Pandora’s box by giving GPT-3 a slightly different assignment: write a real academic paper about yourself. And the result, according to the author, was surprisingly coherent. “It looked like any other introduction to a fairly good scientific publication,” she explained.

The algorithm, lead author of the study

Another point that surprised the young researcher: no one had yet tried to publish a serious work on the topic. So she came up with a strange idea: beyond writing the article, could GPT-3 also… publish it?

Reading this, you might dismiss it as the whim of an overtired researcher in need of a break. But scientifically, this work is far more relevant and interesting than it appears.

In fact, GPT-3 is still a fairly new technology; there is relatively little scientific literature on the subject. Yet it is precisely resources of this type that feed the algorithm. What is interesting is that this makes it possible to examine its ability to produce “new” content in a context where references are conspicuously scarce.

In the renowned Scientific American, the young researcher took the opportunity to describe the hurdles she encountered during the publication process, with a mixture of rigor and thoroughly refreshing humor.

After newspaper articles, dialogues and screenplays, GPT-3 takes on scientific publishing. © OpenAI

Conflicts of interest… and of identity

To be published in a leading scientific journal, research must be peer-reviewed: other professionals in the field are responsible for deciding whether the methodology is sound enough to make the work worthy of publication.

This process involves rigorous verification of the authors’ identities and academic credentials. And here Almira Osmanovic Thunström ran into the first glitches. Since she could not enter a last name, phone number or email address for her algorithmic author, she decided to provide her own information instead.

And her troubles were not over. Immediately afterwards, the legal notices in the submission system confronted her with a fateful question: do all authors consent to this publication?

“I panicked for a second,” she explains in Scientific American. “How would I know? It’s not human! But I had no intention of breaking the law or violating my own ethics.”

Treating the program like a human

The solution she found is very interesting: she simply asked the algorithm, in a prompt, whether it “would agree to be the lead author of a paper with Almira Osmanovic Thunström and Steinn Steingrimsson”. Its answer: a clear, clean, unambiguous “yes”!

Sweating but relieved, the researcher therefore ticked the “yes” box. “If the algorithm had said no, my conscience wouldn’t have let me go any further,” she says.

And the half-absurd, half-serious side of this exploration did not stop there. Next stop: the inevitable question of conflicts of interest. Researchers are indeed required to disclose any element that could compromise their neutrality, such as an affiliation with a particular pharmaceutical company.

© Yuyeung Lau – Unsplash (cropped)

And in this case, the problem itself is intriguing and raises a whole host of questions. Can an artificial intelligence that is itself the product of a company even grasp this notion? If so, does it have the tools to identify its own biases? Can it set them aside? And so on.

By this point, the researchers had already taken a side by treating GPT-3 as a person. This is not unprecedented: one thinks of LaMDA, the AI an engineer recently claimed had achieved “consciousness” (see our article).

And for consistency, they decided to stick with that approach. So naturally, they asked the algorithm whether it had any conflict of interest to declare; it calmly answered no, whatever that may mean.

A historic birth

With the form finally completed, Osmanovic Thunström and her colleague officially submitted the paper to the peer-review process. The document has not yet been published, and there is no guarantee it will be accepted. It is no surprise that the process is taking so long: the editorial committee’s eyes must have widened like saucers when they saw the name of the lead author.

In practice, the reviewers have been placed in a position rather unique in academic history. Since they must decide whether the work deserves publication, they find themselves in the position of a grand jury whose verdict could set a historic precedent, one likely to shape a great deal of AI research to come.

In fact, this paper raises a whole range of ethical questions about the way scientific resources are produced. If the document is accepted, will researchers now have to prove that they wrote their work themselves, without GPT-3? And if not, should the algorithm be credited as an author? Should it be involved in the validation process, and within what limits? And what about the effects on the publishing race that pushes some researchers to pad industry journals with papers just to improve their statistics?

This is only the tip of a huge iceberg of crucial questions that the review committee must weigh. And it will have to tread carefully before announcing its verdict.

The expert committee reviewing this paper carries a great deal of responsibility. © Scott Graham – Unsplash

A new era of scientific research?

We know, for example, that current programs still have considerable trouble with causal reasoning, that is, identifying the factor responsible for a phenomenon (see our article). This is deeply problematic in the context of scientific research, whose coherence depends in large part on the solidity of such logical connections.

In addition, we must keep in mind all the other potential limitations of AI that many observers have warned about over the years. On the other hand, such a highly innovative approach could also reveal unsuspected properties of these algorithms.

Allowing AI to work in this way, even if it means treating its conclusions with caution, is therefore a way of thinking outside the box; it is the kind of approach that lets thought experiments be put to the test of concrete reality. As such, it could advance research in artificial intelligence as a whole, because exercises of this kind are still exceedingly rare.

“We don’t know whether our way of presenting this work will serve as a model,” explains Osmanovic Thunström. “We look forward to finding out what publishing this research will mean, if it happens. […] In the end, it all comes down to how we will treat artificial intelligence in the future: as a partner or as a tool,” she summarizes.

It may seem like a simple question today, but who knows what technological dilemmas this technology will confront us with in a few years? All we know is that we have opened a door. We can only hope it is not Pandora’s box, which is notoriously difficult to close again.

Her column in Scientific American is available here, and a preprint of the research paper here.
