23rd Jun 2023
My Google search history is 100% shocking. It's a horrific collection of questions about crime scenes, the possibilities when it comes to gruesome murders, motivations for murder, the technicalities of murder and how far blood spatter might go.
But aside from my crime-novel-related information binge (I'm currently writing my third thriller), I've always been a straight-to-Google person when it comes to, well, pretty much everything in life.
I have a deep-seated curiosity about all things. I prefer to know rather than not know when it comes to things like medical operations (I watch them live on YouTube before I go under the knife). If someone mentions something I'm not familiar with, I'll quickly Google it under the table. When it comes to any form of problem solving, same again. The internet usually has the answer.
The problem is that the internet has all the answers. And often there isn't time to surf every response that my trusty friend Google throws up. Scanning, discarding information and homing in on the specific piece of learning is time-consuming and tedious.
Scarily good
Then a couple of months ago I was told about ChatGPT. It’s a head-spinningly large language model that uses deep learning techniques to generate human-like text. In a nutshell, it’s a chatbot. In other words, it’s like Google but frighteningly better because instead of getting pages and pages of answers relating to the words you have put into your search bar, it generates specific, precise answers to some of the most complex questions.
And it’s life-changing.
In fact, earlier this year ChatGPT (Chat Generative Pre-trained Transformer) hypothetically solved Northern Ireland’s most difficult challenge – the protocol and post-Brexit trading arrangements.
It has also passed the New York bar exam and some of the toughest medical exams, thanks to its ability to understand and generate natural language.
ChatGPT's ability to read and understand legal texts, such as court rulings and statutes, allows it to provide accurate, detailed summaries of legal cases, and it can even predict the outcome of a case based on similar cases in the past.
In the medical field, ChatGPT has been used to assist with medical diagnosis and treatment planning. It can read and understand medical texts, such as journal articles and patient records, and can provide relevant information and suggestions to doctors and other medical professionals.
There is great excitement about how it could improve the accuracy and efficiency of medical diagnosis and treatment, and also help to reduce costs. Developed by the company OpenAI, the AI is capable of generating natural-sounding text on demand, including in a specific style or in several languages, in just a few seconds.
The quality of the copy it produces is sufficient to impress teachers in secondary and higher education, and even researchers, leading some schools to ban the use of the system for homework and college essays.
Next-gen communication
But how does it work? ChatGPT is trained on a massive dataset of internet text, which allows it to understand and respond to a wide range of topics and questions. That’s how it can seemingly communicate with humans in a way that feels intuitive and human-like.
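For anyone curious about what sits behind the chat window, here is a minimal sketch of how a developer might put a question to the same family of models through OpenAI's API. It assumes the openai Python package as it existed in mid-2023, and the model name and question are purely illustrative; this is not how the ChatGPT website itself is built.

```python
# Minimal sketch: asking the model a question via OpenAI's API.
# Assumes the openai Python package (circa mid-2023) and an API key
# stored in an environment variable; model and prompt are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # keep your key out of the code

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind the free ChatGPT tier at the time
    messages=[
        {"role": "user", "content": "In two sentences, what is a large language model?"}
    ],
)

print(response.choices[0].message["content"])
```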
Obviously, this also comes with the whole Terminator fear that AI technology is going to outsmart us and then turn on us.
But there are very real fears about the potential for AI technology to replace human jobs, specifically in publishing, finance and the medical field, as well as worries about the implications for privacy and security. The use of this kind of generated text to spread misinformation is another risk some tech commentators are raising.
Another concern is the potential for AI to perpetuate bias and discrimination. For example, if an AI system is trained on a dataset that is biased, it may reproduce that bias in its output. The technology could also be put to malicious use, such as hacking or cyber attacks, and could make those attacks more difficult to detect.
Progress
OpenAI has stated that it is taking steps to address some of these issues, such as by fine-tuning the model on a more diverse dataset.
But working with these advances instead of against them presents so many opportunities for us as humans. ChatGPT also offers a chance to learn in a different way. It can problem-solve in a way that lacks our natural human bias or emotional nuance, which is sometimes the best way.
But let's not totally freak out just yet. As The Guardian put it: “ChatGPT cannot tie a pair of shoelaces or ride a bicycle. If you ask it for a recipe for an omelette, it’ll probably do a good job, but that doesn’t mean it knows what an omelette is. It is very much a work in progress, but a transformative one nonetheless.”
I’m a fan of progression, even if it comes with uncomfortable conversations. I use ChatGPT to collate information, find new ways to problem solve and creatively collaborate (ask it to write you an idea for a screenplay in the style of Woody Allen about a basketball player and you’ll see what I mean).
It refuses inappropriate questions and is designed to avoid making things up, declining to churn out responses on issues it has not been trained on.
Plus, Elon Musk has described it as “scary good”, which may or may not add to your shivers. Would it give you goosebumps to know that, as a once-off experiment, it helped me write this article?
You’ll have to keep guessing. Because no matter how impressive my new bot bestie is, I’m not about to give up my job…