By Josh Fike
What if there were something on this Earth destructive enough to end humanity as we know it? What if someone created that devastation by accident? What if the threat were sitting quietly in your pocket? This hazard is A.I., also known as Artificial Intelligence. A.I. is already all around us: on our phones, guiding us to the closest movie theater; on our computers, where Google Maps shows us the whole wide world; we have even created self-driving cars. Yet do we know the danger? A.I. is rapidly growing in intelligence, but what happens when we give it the ability to think for itself and it becomes smarter than any human on the planet? What then?
The truth is, progress in A.I. is ultimately unpredictable. Technology keeps advancing at a remarkable pace, but we still don't know when we'll leap from one generation of A.I. to the next.
Creating a form of A.I. that is as smart as a human will be laborious and wearisome. Some scientists predict it is centuries in the future, a small portion say it will never happen, and some think we will have this technology as soon as 2050.1 Interestingly enough, one of the most terrifying things about A.I. is that once we make the leap from this generation of A.I. to the next, it will quickly become one of the most intelligent things alive, possibly the most intelligent thing alive. This is because A.I. will be able to rewire its own brain to make itself more sophisticated, more efficient, and better overall. This superior A.I. is peering around the corner of technology, patiently waiting for its attack.
So what could possibly go wrong? If we program it, we're in control, right? Well, not exactly. The most spine-chilling thing about A.I. is not how smart it is; it's the unforeseen way of thinking this A.I. might have.
Let's say that we're in the future, when super A.I. has been created. A computer scientist programs a robot to completely get rid of all spam emails, and gives this task to the smartest form of artificial intelligence. It is an innocuous task, so we presume the A.I. can pass this test with flying colors. Of course the A.I. will achieve its goal, but it may have a different solution to the problem. It will immediately start brainstorming ideas to get rid of spam emails. It will plunge into a bit of basic coding, and then it will realize that humans are the ones sending these emails. If the A.I.'s goal really is to get rid of all future spam, it will have to get rid of all humans. With A.I.'s intelligence towering over any human's inadequate brain, it will find a way to end us all.4
Just envision all of the human race, gone, because we were too stubborn and adamant to be cautious.
An article from The Day states, “Just think about how we relate to ants. We don’t hate them. We don’t go out of our way to harm them. In fact, sometimes we take pains not to harm them. But whenever their presence conflicts with one of our goals, we annihilate them without a qualm.”3
This provides a very tangible example of how A.I. may see us. A.I.'s brain will think in ways we cannot foresee; that is why we have to be so patient and aware. If we don't take every step toward this new generation meticulously, there will be devastation. If we're the ants, A.I. is the exterminator.
1. "AI Timeline Surveys." AI Impacts, 10 Jan. 2015, https://aiimpacts.org/ai-timeline-surveys/.
2. "Greg Brockman on Unintended A.I. Risks." The New York Times, 20 Feb. 2018, www.nytimes.com/video/admin/100000005752726/greg-brockman-on-unintended-ai-risks.html.
3. Harris, Sam. "Why I Believe AI Could Destroy Us." The Day, 30 May 2017, http://theday.co.uk/opinion/why-i-believe-ai-could-destroy-us.
4. "Elon Musk Speaks About Tesla and SpaceX at Vanity Fair's New Establishment Summit." YouTube, 17 Oct. 2014, www.youtube.com/watch?v=fPsHN1KyRQ8.
5. Walzak, Rebecca. "The Good, the Bad and the Reality of AI." Progress in Lending, 31 May 2017, http://progressinlending.com/2017/05/31/the-good-the-bad-and-the-reality-of-ai/.