The arrival of ChatGPT and other artificial intelligence software has been met with awe and wonder but also some concern as warnings are issued about the program’s susceptibility to ‘hallucinations’.
The software exploded onto the scene last November, with many users testing it out and putting it to work writing songs and doing their homework. However, while the AI bot can appear smarter than the average person, it was built by humans and is still susceptible to errors and bias.
Artificial intelligence ‘hallucinations’
There has been much fanfare about the software's potential to take on a range of jobs, as well as concern about students using it to cheat on their coursework. However, Google's search engine boss Prabhakar Raghavan cautioned that the software is not to be relied on just yet, as per The Sun:
This type of artificial intelligence we're talking about can sometimes lead to something we call hallucination.
This is then expressed in such a way that a machine delivers a convincing but completely fictitious answer.
The technology is still in its infancy, relatively speaking, but it has already sparked much debate about its dangers and the ethical questions it poses. The current version's knowledge is limited to internet content as it was in 2021, and so far it does not update.
People have been warned not to ask AI questions they cannot verify themselves, such as requests for legal advice. It presents its answers as fact, yet, as we know, the internet is full of misinformation - some of it quite dangerous.
‘Google killer’
ChatGPT gained more than 100 million users in the first two months after its launch and has more than 13 million daily visitors, making it the fastest-growing consumer application to date, as per Demand Sage.
OpenAI, the creator of ChatGPT, is harnessing this popularity to target the far more lucrative multi-billion-dollar sector of internet search, and the chatbot has been dubbed 'the Google killer' by some experts, according to the BBC.
Alphabet, the owner of Google, made $104bn (£86bn) in revenue from its search engine in 2020, so any business that can take even a tiny percentage of that will be doing very well indeed. In response, Google has released its own AI writing software, called Bard, which unfortunately made a mistake in its first demonstration last week.
Following the mistake, a Google spokesman explained, as per Review Geek:
This highlights the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester program.
When ChatGPT itself was asked about its rivalry with Bard, it gave the diplomatic response that "it is not a matter of one being 'better' than the other" and added:
I do not have the capability or intention to harm any company, including Google.
Sources used:
- The Sun 'AI ALERT Google issues urgent warning over ChatGPT after millions race to use AI chatbot'
- Review Geek 'Google’s Bard AI Chatbot Is Already Making Mistakes'
- BBC ''Google killer' ChatGPT sparks AI chatbot race'
- Demand Sage 'ChatGPT Statistics for 2023: Comprehensive Facts and Data'