Artificial intelligence, big data, machine learning, predictive analysis.
We've all heard these buzzwords more and more lately (or at least you should have, if you've been paying attention), but what in the world does it all mean?
In a nutshell, it's the inevitable and scary future. Could you imagine a world in which we create software that surpasses human capabilities? Well, we're in that world, and Google is the lead pioneer.
Well, first of all, the search giant has always been the lead pioneer. Its search algorithm takes the first typed word and uses predictive analysis to populate suggestions based on anticipated keywords. Second, and more strategically, they've bought almost every machine-learning and robotics company out there: Boston Dynamics, Nest Labs, Bot & Dolly, Meka Robotics, and many more!
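Google's actual suggestion models are proprietary, but the core idea - rank past queries that share the typed prefix by how popular they were - can be sketched in a few lines of Python. The query corpus and counts below are entirely made up for illustration:

```python
# Toy autocomplete: rank candidate completions by how often they were
# searched before. Queries and counts here are invented, not real data.
from collections import Counter

past_queries = Counter({
    "machine learning": 120,
    "machine translation": 45,
    "machine vision": 30,
    "macbook pro": 200,
})

def suggest(prefix, k=3):
    """Return the k most frequent past queries starting with `prefix`."""
    matches = [(q, n) for q, n in past_queries.items() if q.startswith(prefix)]
    matches.sort(key=lambda qn: -qn[1])  # most popular first
    return [q for q, _ in matches[:k]]

print(suggest("mac"))      # popular "mac..." queries, most frequent first
print(suggest("machine"))  # narrows as the prefix grows
```

A production system would add personalization, context, and learned ranking on top, but the prefix-plus-popularity loop is the skeleton.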
Third, they have Ray Kurzweil leading it. I don't even know where to begin with this guy. He's a lunatic. He's a genius. He has a 30-year proven track record of accurate predictions. He's eerily, freakishly right about nearly everything. He and his team are building a "Google brain" - encompassing both hardware and software - to ultimately replicate the way our inter-neural connections are formed and fired in real time. In his words: "Information defines your personality, your memories, your skills. And it really is information. We ultimately will be able to capture that and actually recreate it."
Sound scary? Well, it should.
But again, this isn't really a new phenomenon. The technological singularity (the hypothesis that artificial intelligence will surpass, and eventually merge with, humanity) first came about in 1958 and, according to Kurzweil, will arrive in 2045.
Think it's too futuristic to care about right now?
Think again. You wouldn't wait for a burglar to break into your house before taking precautions, right? That's why we have locks, alarm systems, and furry four-legged creatures barking at the sound of the doorbell. It's important to start taking the same kind of precautions with artificial intelligence now.
It's all starting with messaging apps and chatbots.
More people are spending time on messaging apps than social media apps.
Bots are created to accomplish a specific service, such as assisting a shopper in making a purchase or answering a customer-support question, without needing a human on the other end.
In its most simplistic form, a bot is a service powered by rules (very basic algorithms) that humans interact with via a chat interface. In more complex forms, it uses artificial intelligence to accomplish the same or more advanced services. And anybody can build one - literally, you don't even need to be an expert at coding. (Helllooooo, Mr. Watson!)
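To see just how basic those rules can be, here is a minimal rule-based chatbot in Python. The rules, patterns, and replies are invented for illustration - a real customer-service bot would have far more of them (or a learned model in their place):

```python
# Minimal rule-based chatbot: scan a list of (pattern, reply) rules and
# answer with the first rule whose pattern matches the user's message.
import re

RULES = [
    (r"\b(hi|hello|hey)\b",       "Hello! How can I help you today?"),
    (r"\b(order|purchase|buy)\b", "Sure - what product are you looking for?"),
    (r"\b(refund|return)\b",      "I can start a return. What's your order number?"),
]

FALLBACK = "Sorry, I didn't catch that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the reply of the first matching rule, or a fallback."""
    for pattern, answer in RULES:
        if re.search(pattern, message.lower()):
            return answer
    return FALLBACK

print(reply("Hey there"))            # greeting rule fires
print(reply("I want to buy shoes"))  # purchase rule fires
print(reply("asdf"))                 # no rule matches, so fallback
```

That's the whole trick at the "rules" end of the spectrum; the AI end swaps the pattern list for a model trained on conversations.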
This creates huge potential for businesses and also makes life significantly easier for the user.
So what's the problem?
Well, there are two main problems. In the immediate term, the spread of artificial intelligence fuels cyber-crime. In the long term, it can evolve to a point where we no longer have control over the machines we created.
The rise in immediate criminal activity
Take Google's artificial intelligence division, DeepMind. They claim to have found a way to produce synthesized speech that sounds far more natural and human than any previous text-to-speech system. And like any mainstream tech phenomenon out there, you know criminal hackers aren't far behind. So that call you got from your brother last week asking to borrow some money... was it really your brother? Or was it... dare I say... a faceless criminal deceptively sounding like him?
Okay, so now with technology we can 1) replicate synthetic speech to sound like humans and 2) create an artificial brain that captures and reacts to our thinking and emotions. Next stop: talking to the dead. I recently watched "Be Right Back", an episode of the eerie and very futuristic drama Black Mirror, and thought, holy cow, this is terrible! In the episode, a woman loses her fiancé in a car accident and subscribes to a service that, with... wait for it... artificial intelligence... creates a digital avatar of the guy that, based on his mannerisms and demeanor, replicates his exact personality.
And this exists. It's called Luka. The AI (Artificial Intelligence) startup's main chatbot was built for restaurant recommendations. But after co-founder Eugenia Kuyda lost a dear friend, Roman Mazurenko, she expanded it to create a "memorial bot": a bot that lets you talk to your deceased friend as if they weren't deceased, relying on the actual messages, posts, and other digital ephemera they left behind.
What?! NO! This is wrong!
It's eerily unethical. It's troublesome; it's not right. Life's greatest invention is death. It is the essence of feeling that defines our humanity. Using artificial neural networks - which imitate the human brain's ability to learn and recognize patterns in images, audio, text, and other data - to rid ourselves of grieving is morally wrong.
Sounds AWFUL! Don't talk to me, I have no idea who you are!!!
The irony behind Mazurenko's death lies in his grandiose plans for the future. He longed to see the day of the singularity, and he is now a byproduct of eternal life. Perhaps that's what Kurzweil was implying when he said we, as humans, would reach eternity.
Fast forward to losing control
Alright. Let's talk about foreign policy real quick (without actually talking politics). Understanding the dynamics of interaction in a global world is a difficult concept to firmly grasp, especially if you're using a narrow theoretical approach like neorealism or neoliberalism. To develop a clear understanding of international relations, we must look past the simple, concrete path realism has to offer and explore more socially constructed factors.
The key to interpreting foreign, domestic, or really just human policy is understanding the effects norms have on a given actor. What one country or non-state actor perceives to be a norm either allows or restricts certain actions. Actors use norms instrumentally - either by manipulating and changing norms or by complying with them - to fashion their interests, rather than being held captive to a concrete normative structure. But if we're teaching our computers to think this way, wouldn't that only exacerbate the risk of losing control of these robots?
Erving Goffman, an influential sociologist, wrote in 1959 that internal norms represent the environment and identity as they appear to an actor, while external norms represent situations or behavior. These norms serve as functional tools for justifying social behavior, but because multiple norms can influence actors, it is difficult to predict which will prove most influential. Until, of course, you use artificial intelligence to handle such predictions.
Actors must have a process for making sense of the world and, more fundamentally, a process for communicating those mental representations to others. This "process of making sense" of the world produces the norms that justify an actor's behavior. It is during this process that we interact with other actors in shaping goals and interactions, whether they comply with our agenda or not. For this reason, it's imperative we make sure the computers we're teaching have goals aligned with ours. If not, we're looking at the wipe-out of the human race.
We've heard it before: the next catastrophic world war will be cyber. But the extent of most people's imagination was a technological, apocalyptic meltdown, not the global AI arms race that Hawking, Musk, and Wozniak are warning us of. We're already experimenting with AI to assist judges; will we be replacing the military with AI as well? A machine with the ability to kill, independent of human control, is something I think we'd all prefer NOT to happen.
So what do we do?
Build an underground safe house in the outskirts of Nebraska and prepare for a robotics apocalypse.