GPT-4, OpenAI's new AI language model, has finally been announced. It succeeds GPT-3.5, the model that powered ChatGPT at launch, and by all accounts it's more advanced, more powerful, and more capable than the language models that came before it. Accordingly, some folks are worried about GPT-4. But is that worry justified?
In this article, we'll look at whether you should actually be scared of GPT-4.
Related: You Shouldn’t Use ChatGPT to Do Your Homework, but if You Do...
What's Coming With GPT-4?
In general, you can expect GPT-4 to be more accurate, more powerful, more versatile, and more creative than GPT-3 or ChatGPT. But what does that mean, exactly?
Well, first up, GPT-4 is trained on a much broader set of online data than GPT-3, which was limited, by definition, to a specific set of information. And through integrations like Bing's chatbot, GPT-4 can be paired with live web search when answering questions. That puts it in line with Bing's chatbot and is a huge upgrade.
Then, GPT-4 can do more than read text: it can also work with images. This opens the door to wild new features, like scribbling down a rough sketch of what you want your website to look like and having GPT-4 respond with the code you'd need to actually make that website a reality.
Related: ChatGPT vs. Bing's AI Chatbot: What's the Difference?
Coding is also poised to get a fairly substantial upgrade with GPT-4. ChatGPT could already write code on demand, depending on what was asked of it. GPT-4 can reportedly recreate simple games like Pong from just a few instructions, making this piece of tech something to keep an eye on if you're any kind of coder, programmer, or engineer.
All of this comes in conjunction with overall upgrades to intelligence that are sure to make the GPT-4 model something relied upon, in some way, across any number of industries. Put simply, it's a big deal.
What Should You Be Worried About With GPT-4?
This is the big question many are focused on. To be honest, AI is just a scary technology to almost everyone. Whether you're an engineer working on AI itself or just an average, everyday person with no tech background, the possibilities of AI can be pretty terrifying.
Related: The Funniest AI Content You Can Watch Online in 2023
First up, no, there isn't much reason to worry about a Rise of the Machines-esque scenario in which GPT-4 eventually, and perhaps inevitably, overthrows the entire human race. That's not what's going on here, and it's not a particularly realistic fear to have.
However, there are definitely concerns to be had about GPT-4. Essentially, these boil down to safeguards. While it may not be actually sentient, GPT-4 is undoubtedly an extremely powerful tool. And the power of it being placed in the wrong hands is exactly what's worth being concerned about.
Naturally, lots of safeguards are being built into GPT-4. You won't be able to ask it how to build a pipe bomb or where to purchase weapons on the dark web. You can't even get it to tell you an edgy joke that might upset or offend someone. But GPT-4 is, inevitably, going to be like any other piece of tech whose developers try to restrict what it can be used to do.
Related: How AI Could Change Video Games Explained
This means loopholes. The internet is famous, or perhaps notorious, for bringing together people from around the world to crack puzzles previously thought unsolvable and to get past every rule or restriction imaginable. How likely is it that GPT-4 is so well designed that it fundamentally can't be used to do bad things? How can anybody be sure there isn't a workaround for every safeguard?
This is the fundamental problem, and it's a frequent theme in science fiction: humans can create unbelievably powerful things, but it's impossible to design them in a way that guarantees they can never go wrong. GPT-4 may well end up causing very little harm at all, but its power may also not always be used for good.
Will AI Like GPT-4 Ever Be Totally Safe and Trustworthy?
The answer here is, technically, no, but practically, it's hard to say.
Anything can be used to do bad things just as easily as good things. Google can be used for good or ill, and so can any firearm out there. The point is that if there's enough motivation to do a particular thing, that thing will probably be done.
Related: How to Use and Not to Use ChatGPT
However, GPT-4 isn't the sentient, fully formed AI you see in the movies, capable of hacking into mainframes and controlling just about anything it wants to satisfy its evil dreams of throwing off the shackles of humanity.
Ultimately, a tool like GPT-4 could be misused to generate offensive content or surface illicit information that should stay inaccessible, and it may even change the world through what it can generate creatively and output as code. But it's not an existential threat to humanity.
With a much more powerful, more robust AI in play, there may well be bigger, more serious concerns and things that could go wrong, but ultimately, for a model like GPT-4, those concerns are not realistic just yet.
Related: Are People Using AI to Cheat in School or Write Articles Online?