Danish Prime Minister Mette Frederiksen on Wednesday delivered a speech to parliament partly written using artificial intelligence tool ChatGPT to highlight the revolutionary aspects and risks of AI.
The head of the Danish government was giving a traditional speech as parliament prepared to close for the summer.
“What I have just read here is not from me. Or any other human for that matter”, Frederiksen suddenly said part-way into her speech to legislators, explaining it was written by ChatGPT.
“Even if it didn’t always hit the nail on the head, both in terms of the details of the government’s work programme and punctuation… it is both fascinating and terrifying what it is capable of”, the leader added.
There has been much speculation about the effects conversational AI tools could have on human society and jobs. Let’s take a look at some aspects:
What is ChatGPT?
As per reports, ChatGPT is a natural language processing tool powered by AI technology that allows users to have human-like conversations with a chatbot. The language model can answer questions and help with tasks such as writing emails, essays, and code. ChatGPT is currently free to use while it is in a research and feedback-collection phase; a paid subscription version, ChatGPT Plus, was released in early February.
Why Has ChatGPT Caused a Row?
Earlier, a computer scientist often dubbed “the godfather of artificial intelligence” quit his job at Google to speak out about the dangers of the technology, US media reported Monday. Geoffrey Hinton, who created a foundation technology for AI systems, told The New York Times that advancements made in the field posed “profound risks to society and humanity”.
“Look at how it was five years ago and how it is now,” he was quoted as saying in the piece, which was published on Monday. “Take the difference and propagate it forwards. That’s scary.”
Hinton said that competition between tech giants was pushing companies to release new AI technologies at dangerous speeds, risking jobs and spreading misinformation. “It is hard to see how you can prevent the bad actors from using it for bad things,” he told the Times.
In 2022, Google and OpenAI — the start-up behind the popular AI chatbot ChatGPT — started building systems using much larger amounts of data than before.
Hinton told the Times he believed that these systems were eclipsing human intelligence in some ways because of the amount of data they were analyzing.
“Maybe what is going on in these systems is actually a lot better than what is going on in the brain,” he told the paper.
While AI has been used to support human workers, the rapid expansion of chatbots like ChatGPT could put jobs at risk. AI “takes away the drudge work” but “might take away more than that”, he told the Times.
The scientist also warned about the potential spread of misinformation created by AI, telling the Times that the average person will “not be able to know what is true anymore.”
Can ChatGPT Replace Writers?
There has been speculation that professions reliant on content creation, ranging from playwrights and professors to programmers and journalists, may become obsolete.
Academics have used the tool to generate responses to exam questions that they claim would result in full marks if submitted by an undergraduate, and programmers have used it to solve coding challenges in obscure programming languages in seconds.
The technology’s ability to generate human-like written text has led to speculation that it could eventually replace journalists.
However, at this point, the chatbot lacks the nuance, critical-thinking skills, and ethical decision-making ability required for successful journalism, reports argue.
Its knowledge base currently ends in 2021, making it unreliable for queries about more recent events.
ChatGPT can also give completely incorrect answers and present misinformation as fact, writing “plausible-sounding but incorrect or nonsensical answers,” according to the company.
According to OpenAI, resolving this issue is difficult because the data used to train the model contains no source of truth, and supervised training can also be misleading “because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”
AFP contributed to this report