ChatGPT is the newest and most impressive artificially intelligent chatbot yet. It was released two weeks ago, and within just five days it had registered a million users. It is being used so heavily that its servers have reached capacity several times.

OpenAI, the company that developed it, is already being discussed as a potential Google killer. Why look something up on a search engine when ChatGPT can write a whole paragraph explaining the answer? (There’s even a Chrome extension that lets you do both, side by side.)

But what if we never know the secret sauce behind ChatGPT’s capabilities?

The chatbot builds on a number of technical advances published in the open scientific literature over the past few decades. However, any innovations specific to ChatGPT itself remain secret. OpenAI may well be trying to build a technical and business moat to keep others out.

What it can (and can’t) do

ChatGPT is very capable. Want a haiku about chatbots? Sure.

Chatbots, oh so wise!

Endless recycling

Answers from the void.

How about a joke about chatbots? No problem.

Why did the chatbot cross the road?

To reach the other side of your screen.

ChatGPT has many other tricks. It can write computer code to a user’s specifications, draft employment letters or rental leases, compose homework essays, and even pass university exams.

Just as important is what ChatGPT can’t do. For example, it struggles to distinguish between truth and falsehood. It is also often a convincing liar.

What is the fastest marine mammal?

The fastest marine mammal is the sailfish.

A sailfish is not a mammal.

You are right, the sailfish is not a mammal. I apologize for the mistake. The fastest marine mammal is the peregrine falcon.

ChatGPT is a bit like the autocomplete feature on your phone. Your phone is trained on a dictionary of words so it can complete the word you are typing. ChatGPT is trained on pretty much all of the text available on the web, so it can complete whole sentences, or even whole paragraphs.

However, it doesn’t understand what it is saying; it only knows which words are statistically likely to come next.
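To make the autocomplete analogy concrete, here is a minimal sketch of next-word prediction in Python. It uses a toy bigram model (counting which word follows which), far simpler than the large transformer network behind ChatGPT, but it illustrates the same core idea: picking a likely next word from observed frequencies, with no understanding involved. The tiny corpus and all names are invented for illustration.

    import random
    from collections import Counter, defaultdict

    # Toy corpus; real models are trained on hundreds of billions of words.
    corpus = "the cat sat on the mat and the cat ate the fish".split()

    # For each word, count which words were observed to follow it.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Sample a next word in proportion to how often it followed `word`."""
        counts = following[word]
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # Generate text by repeatedly predicting the next word.
    word = "the"
    for _ in range(6):
        print(word, end=" ")
        if not following[word]:
            break  # no observed continuation for this word
        word = predict_next(word)

Scaled up from bigrams over a dozen words to a neural network over most of the web, this “predict the next word” loop is essentially what produces ChatGPT’s fluent sentences.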

Open by name only

In the past, advances in artificial intelligence have been accompanied by peer-reviewed publications.

For example, in 2018, when the Google Brain team developed BERT, the neural network on which most natural language processing systems are now based (and which, we suspect, ChatGPT builds on too), the methods were published in peer-reviewed scientific papers and the code was open-sourced.

And in 2021, DeepMind’s protein-folding software, AlphaFold 2, was named Science’s Breakthrough of the Year. The software and its results were made openly available, so scientists everywhere could use them to advance biology and medicine.

Since the release of ChatGPT, however, we have only a short blog post describing how it works. There has been no hint of an accompanying scientific publication, or that the code will be open-sourced.

To understand why ChatGPT’s inner workings might stay secret, you have to understand a little about the company behind it.

OpenAI is perhaps one of the strangest companies to emerge from Silicon Valley. It was set up as a non-profit in 2015 to promote and develop “friendly” artificial intelligence in a way that “benefits humanity as a whole”. Elon Musk, Peter Thiel and other leading tech figures pledged US$1 billion to help it achieve its goals.

Their thinking was that for-profit companies could not be trusted to develop increasingly capable AI aligned with humanity’s prosperity. AI therefore needed to be developed by a non-profit and, as the name suggested, in an open way.

In 2019, OpenAI restructured as a capped-profit company (with investor returns limited to a maximum of 100 times their investment) and took a US$1 billion investment from Microsoft so it could scale up and compete with the tech giants.

Money seems to have gotten in the way of OpenAI’s initial plans for openness.

Monetizing Users

On top of this, OpenAI appears to be using feedback from its users to filter out the fake answers ChatGPT hallucinates.

According to its blog, OpenAI initially used reinforcement learning in ChatGPT to down-rank false and/or problematic answers, using a costly training set constructed by hand.

But ChatGPT now appears to be being fine-tuned by its more than a million users. I imagine this kind of human feedback would be prohibitively expensive to acquire any other way.
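As a rough illustration of what collecting that feedback might look like, here is a hypothetical Python sketch. OpenAI has not published its pipeline, so every name below is invented; the point is only to show how thumbs-up/thumbs-down ratings on responses to the same prompt can be paired into the kind of preference data used to train a reward model for reinforcement learning from human feedback (RLHF).

    from dataclasses import dataclass, field

    # Hypothetical record of one user rating; not OpenAI's actual schema.
    @dataclass
    class FeedbackExample:
        prompt: str
        response: str
        rating: int  # +1 = thumbs up, -1 = thumbs down

    @dataclass
    class FeedbackStore:
        examples: list = field(default_factory=list)

        def record(self, prompt, response, rating):
            self.examples.append(FeedbackExample(prompt, response, rating))

        def preference_pairs(self):
            """Pair up-rated and down-rated responses to the same prompt,
            yielding (prompt, better, worse) triples for reward-model training."""
            by_prompt = {}
            for ex in self.examples:
                by_prompt.setdefault(ex.prompt, []).append(ex)
            pairs = []
            for prompt, exs in by_prompt.items():
                ups = [e.response for e in exs if e.rating > 0]
                downs = [e.response for e in exs if e.rating < 0]
                pairs += [(prompt, good, bad) for good in ups for bad in downs]
            return pairs

    store = FeedbackStore()
    store.record("Fastest marine mammal?", "The orca.", +1)
    store.record("Fastest marine mammal?", "The peregrine falcon.", -1)
    print(store.preference_pairs())

With a million users supplying ratings for free, a dataset like this grows far faster, and far more cheaply, than one labelled by hired annotators, which is exactly the advantage described above.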

Now we face the prospect of a significant advance in AI built on methods that are not described in the scientific literature, and on datasets restricted to a company that appears to be open in name only.

Where next?

AI’s rapid progress over the past decade has been due in large part to openness from academics and businesses alike. All the major AI tools we have are open-sourced.

But that openness may be coming to an end in the race to develop more capable AI. If openness in AI diminishes, we may well see progress in the field slow down as a result. We could also see new monopolies develop.

And if history is anything to go by, we know that lack of transparency is a driver of bad behavior in tech spaces. So while we continue to praise (or criticize) ChatGPT, we should not ignore the circumstances in which it came to us.

If we’re not careful, the very thing that seems to mark the golden age of AI could actually spell its end.

Toby Walsh, Professor of AI at UNSW, Research Group Leader, UNSW Sydney

This article is reprinted from The Conversation under a Creative Commons license. Read the original article.
