The world first truly opened its eyes to the potential of artificial intelligence in the mid-1990s. Computer technology was out of its proverbial infancy. It was no longer a glorified room-sized abacus or secret military equipment. Industry and banking, in particular, took great care to nurture and exploit the possibilities that computers would bring to the modern world. 

That may be great and all for general computer technology, but few people at the time would have even heard of AI. A machine that can essentially think for itself? An idea so foreign to the real world belonged in sci-fi novels. It was, for most people, a physical impossibility. 

Then came IBM, at the forefront of computing, with its latest invention: Deep Blue. It was a marvel of engineering and innovation, a chess machine whose sheer prowess hardly any other system could match. 

In the computing world, it was not only acknowledged but revered. Outside that circle of hobbyists, scientists, and industrial users, it was nearly unheard of. 

That all changed when word spread through the media of an upcoming chess match. Normally, a simple chess match would hardly make the rounds in the newspapers, let alone on any major broadcasting network. But this was different. When people heard that one of the greatest chess players of all time was going to go head to head with a machine, the idea that Deep Blue could win was thought laughable. 

Well, in 1997, Garry Kasparov was defeated by the supercomputer Deep Blue. This was a turning point in many people’s lives. A clear line had been drawn in the dirt long ago: humans were superior and deserving of all the things humans deserve; machines were mere tools to be used and abused without a second thought. Deep Blue changed that. What if computer science was advancing to the point where we would have to live among immortal beings of steel and electricity? Movies and books had shown us a vision; the match gave us a reality. Autonomous machines became a question of when, not if, they would become commonplace. Cynical and optimistic viewpoints clashed in the debate over how the world would look years later.

Two decades after that momentous conflict between man and machine, the answers to those questions draw ever closer. 1997 showed us that the hunks of metal we take for granted can already match peak human ability, at least within a single domain. How much longer until a one-to-one replica? Our intellect has not only been reproduced but expanded to inhuman levels. Few of our faculties haven’t been recreated in some form. Depending on how you define emotion and creativity, robots have gained those too, with varying degrees of success! What is left for them to do before they become us, or even dominate us? 

Okay, so our closest widespread emulation of a human mind is laughable at best. Ask Siri what it wants to do today, and you’ll get a scripted satirical response or complete subservience to the human race. Ask Cortana or Alexa about the meaning of life and you won’t get an opinion worth living by. Any AI that comes even close to the Terminator is either still a government secret or obscure beyond imagination. 

Despite what AI is right now, there will come a time when the only thing separating us from them is what we’re physically made of. After that, how could we ever tell the difference? The inevitable questions soon come up: Should robots be treated as robots or as humans? Would retiring obsolete or defective equipment be considered murder? Can you still “own” computers, or would that be considered slavery? Will we one day be forced to give our machines…rights?

Are there any mechanical creations worthy of rights right now? Most likely not yet. But when, or if, they arrive, we aren’t remotely prepared for it.

Much of the philosophy behind “rights” isn’t equipped to deal with the nuances of artificial intelligence. 

You see, all of our laws regarding humans or animals are based on consciousness, an abstract idea. Some think it’s immaterial; others say it’s a state of matter, like gas or liquid. 

Regardless of the lack of a specific, objective definition, we know what consciousness is because we experience it, occasionally lack it, and will eventually lose it completely. We are aware of surrounding stimuli through our senses and the way our brains interpret them. We know what it feels like to lose consciousness temporarily through sleep or paralysis. And we will face its permanent loss at the end of our lives. 

Some scientists believe that any sufficiently advanced system can “achieve consciousness,” just as we did through evolution. By that logic, there may come a day when your TV or microwave becomes self-aware. If it does, does it deserve rights? But wait, would what we define as “rights” even make sense to such a being, or would it perceive them in an entirely different way?

Humans get rights because we have the twin capacities to suffer and to feel pleasure. We know what pain and joy feel like, and we are aware of feeling them. Rights are tied to our own programming: we dislike pain because disliking it kept us alive. To stop us from running into a fire, the mind made sure there was a punishment for doing so. So that we wouldn’t starve to death, the trait of hunger was adopted. We made rights to protect ourselves from infringements that cause us pain. 

Rights that don’t revolve around pain, like freedom, are rooted in the way we perceive fairness and injustice. 

Hunks of metal, silicon, and plastic don’t feel hurt or glee unless we want them to. Even then, we can’t say for sure that their “feeling” is the same as ours. Without agony and satisfaction, rights are meaningless. Sentience without those extreme feelings is more depressing than anything else. Isn’t subjecting something “alive” to such an existence the height of unfairness and inhumanity?

Say a pen were to “become a person.” Would it be alright to lock an object that can’t move on its own in a case for several hours a day? It doesn’t know what “freedom” is anyway. Does it have a problem with being dismantled and fitted with newer parts whenever its “owner” wishes? There isn’t a fear of death, so why bother asking permission? Can it file a complaint about slander or hate speech when its “ego” is of debatable existence in the first place? 

Even if we programmed them to replicate emotion, pain and pleasure, and a sense of justice, which machines would be “robot enough” for those feelings to count?

Technology analysts predict there will come a time when AI learns to create more AI, each smarter than its creator. When that happens, how robots are programmed will be out of our hands. Will they deem pain and pleasure necessary, as our brains did millions of years ago? Do they deserve the right to make that choice?

Maybe, just maybe, we should co-exist with them instead of trying to dominate the food chain again. 

Hear me out: the entirety of human identity is based on our physiological supremacy and exceptionalism. We are special, unique snowflakes, destined to rule the world.

Glance over the past and you’ll find a long history of denying that other organisms might feel the same suffering we do. René Descartes, regarded as a great mathematician and philosopher, held that animals were “mere automata.” Ergo, dissecting a stuffed bear would be morally no different from dissecting a real one. Witch burnings, Columbus’s sadistic treatment of Native Americans, the Holocaust: all were justified by labelling the victims as animals rather than people with the same capacity to laugh and cry. 

Moreover, we gain every economic, social, and physical benefit by denying robots the privileges we enjoy. If we can “torture” them into doing our bidding, then we undoubtedly will. That’s what the economically developed portion of the world did for centuries. Ideological justifications for violence in the name of “good” will be supplied regardless of what the oppressed have to say. When AI becomes sentient, the list of arguments for discriminating against it will be endless, especially from those who stand to profit from the situation.

See the liberally dominant climate of today? That’s the pendulum swinging back in the opposite direction. Black people were enslaved, so now white supremacy is the villain. Women were denied opportunities in the past, so the anti-men strain of feminism was born. Homosexuality once sent you to the gallows; now you’ll be sent to the societal gallows if you don’t accept it. If we treat robots the way we’ve treated others in the past, they may rock the boat so hard that it topples over, with them as the only ones who can swim. Retaliation for our sins should be anticipated, so we must treat them as we would want to be treated. Possibly even give them their own autonomous robotic zone somewhere uninhabitable to us humans. Solutions that don’t involve war are almost always there; we just keep insisting on looking the other way.

Artificial intelligence no doubt raises serious questions about philosophical boundaries. We’re asking right now whether emulated humans deserve rights, but have we ever stopped to ask what makes us deserve what we deny others? What makes us human beyond our physical attributes? And what are we going to do when they start demanding their rights?