The Question Of Balance In Emerging Technologies
Technology is pretty great, isn’t it? In the past 50 years humanity has become more technologically advanced than even Jules Verne or Ray Bradbury could have imagined. Companies like Google have started rushing toward truly innovative, never-before-seen tech with the potential to change not only how we live, work and play, but what it means to be alive.
Technology has shone light into every dark corner, taking on almost religious connotations. The invention of the car allows us to cross vast expanses in a fraction of the time it would take a rider on a quarter horse. The internet gave the average person communication capabilities on a global level, changing all aspects of business and leisure. Nuclear power has opened doors beyond our imagination.
These technologies, for all their good intentions, have come with their own sets of negative implications. Our connectivity calls our privacy and online safety into question, nuclear power is obviously both a life-giver and a life-taker, and you’re well aware of the negatives that come with cars.
The negative implications of these technologies never once stopped their progress. They are here. They are unquestionably sources of true revolution, testaments to our superiority as a civilization and necessities for our very existence. And that’s the rule. Many are awed by technology, and accept all of it, without asking themselves whether it needs to be tempered with balance. Isn’t that a question we should demand be asked? Is there technology on the horizon that we’re dragging into existence without enough knowledge of, or focus on, balance?
Balance is a funny concept, one that feels foreign to much of many people’s lives. When we think of balance, we might think it’s a notion reserved for a minority of hippies, spiritualists and others who aren’t in the business of pushing forward the train of progress. Yet for as distasteful as the word ‘balance’ might seem, it’s a concept that has created the universe and kept it – and us – in business all these eons. A chaotic universe that abided by no natural laws to promote balance would be snuffed out as quickly as it came. Are individuals and companies that have no ethical stopgap putting humanity at the same risk?
There are emerging technologies that have well-documented risks associated with them – that we’re not balancing.
At first glance, biotechnology seems like the catalyst that will deliver us from all manner of suffering. It has the potential to rid us of disease, to lengthen life and to modify DNA to give us the physical appearance and performance we’ve always dreamed of.
So, what would a future where we could all live well past 100 really look like? What would the strain on our healthcare system be? If we increase lifespan to 160, we’re doubling the current life expectancy, so what would that do to our planet’s resources? One of the most-cited risks of biotechnology is weaponization. With a reduction in cost and an increase in availability, how do we mitigate the risk of synthetic biology getting into the hands of those who wish to do harm? Are these questions on the lips of the creators? And biotech isn’t the only tech that poses serious questions.
Artificial intelligence paints a picture of a semi-utopian future where we transcend our own course of evolution through the technological singularity, tough decisions are made by super-intelligent computers and everyone has a robot to fetch them an ice-cold Coca-Cola. Refreshing.
What we don’t often see in big media are the risks associated with AI. Tucked away in the back of the room, with meek voices, are those warning of the real risks of AI. One of these voices speaks of programmed morality. It’s difficult to give ethics and morality to a robot. Whose morality should we give it? Yours? Mine? Isaac Asimov’s? And if we give it morality, can it one day reprogram itself to override it for improved survival?
So maybe we don’t give them morality. But if we build robots without morality, they are essentially big metal sociopaths that could turn into psychopaths bent on using resources for their own survival rather than humans’. The mere fact that something is super-intelligent doesn’t mean it will have the disposition of Superman. In fact, people who are super-intelligent often have problems relating to or socializing with others. Lastly, as with all technology we create, you can bet it will be weaponized as soon as possible. These are just a few of the risks, and on top of them we have to assume we will see a number of unintended consequences once these technologies are fully realized.
Unintended consequences are spattered across human history. Although the internet has been largely regarded as a success, we now have government programs that encourage our kids to get outside more. Some have become so attached to their computers and phones that there are addiction programs (with low rates of success) designed to wean them off. Hiroshima is the quintessential example of ‘unintended consequences’.
We’re really smart and capable, but it will be nearly impossible to assess all the risks of biotechnology and AI until the tech actually comes into full swing. It’s as if we’re hurtling down the road at 180 mph after being told that the road might abruptly end. We’re not slowing down and using caution. Instead, we’re simply enjoying the powerful feel of the engine and the look of the scenery. There are a few groups of people on the roadside trying to get our attention, but we just roll the windows up.
Technology isn’t bad. It’s a fantastic tool. We can’t run to the other end of the spectrum and thrust ourselves back into the Stone Age any more than we can kneel at the feet of the AI gods. It’s more like walking a tightrope as we inch forward, asking the right questions, leaning slightly to the left or the right depending on which way we might fall.
Balance is crucial, if only because it’s an inescapable fact of nature that, when disregarded, it often comes back at us as a great leveler. Balance in this sense means asking ourselves whether we really need a technology, whether it will truly benefit humanity, whether it can backfire on us in a big way, what it means for future generations and whether the pros are really worth the cons.
Shouldn’t the ability to avoid a catastrophe be seen as a sign of human progress and revolution, and a testament to our civilization, as well?
Credits to Matthew Edwards, who works for Progressive Automations, a 12 volt linear actuator manufacturer and distributor based in British Columbia, Canada.