Wednesday, February 21, 2018

Artificial Intelligence - why?

As technological advances streamline just about every aspect of human productivity, manufacturing, human-machine interaction, and daily living, companies all around the world are making the push for Artificial Intelligence.

AI promises a brighter future for mankind, or so we are told. However, there is another side to AI, one which is not often talked about.

Let us not even mention the fact that our laws are trailing far behind the leaps and bounds of the tech industries. Let us not mention the fact that what every totalitarian regime failed to achieve through force - the control of news and information, the tracking of citizens' movements, the manipulation of opinions - young people the world over submit to voluntarily on a daily basis. Let us not mention the fact that Facebook filters the information you receive, Google knows more about you than your own family, your phone and fitness watch track your every step, and your movements are logged through license-plate readers and location services. Big data is everywhere, and it is here to stay.

Never mind the fact that a large segment of today's most valued companies produces absolutely nothing tangible.

The push for autonomous vehicles, drone deliveries, automated services, and other AI applications is, in my opinion, wrong.

The loss of jobs aside (yes, there are counterarguments, and even universal basic income trials); the environmental impact aside (autonomous vehicles will likely lead to the expansion of suburban areas - for those who can afford it - and the destruction of arable land); the evolutionary impact aside (the general public has not been getting any smarter since the dawn of smartphones); and the psychological impact aside (people have been struggling with interpersonal relationships and interactions since the rise of social media); there are many other aspects to consider.

In a report featured on Euronews (http://www.euronews.com/2018/02/21/commercial-drones-could-be-turned-into-weapons-using-ai-report-warns), citing the University of Cambridge's Centre for the Study of Existential Risk, its authors warn of the possibility that AI could be used in harmful ways:

'Harmful ends'

Attackers could capitalise on the “proliferation of drones” and re-use them for “harmful ends”, according to the University of Cambridge's Centre for the Study of Existential Risk, who helped put the report together.
It said we could see the “crashing of fleets of autonomous vehicles, turning of commercial drones into missiles or holding critical infrastructure to ransom”.
AI could also herald novel cyber attacks such as automated hacking and the production of highly-believable fake videos to be used as “powerful tools to manipulate public opinion on previously unimaginable scales”.
“Artificial intelligence is a game changer and this report has imagined what the world could look like in the next five to ten years,” said Dr Seán Ó hÉigeartaigh, one of the co-authors of the report.
“We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real. There are choices that we need to make now, and our report is a call-to-action for governments, institutions and individuals across the globe.
“For many decades hype outstripped fact in terms of AI and machine learning. No longer. This report looks at the practices that just don’t work anymore — and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable — and what type of laws and international regulations might work in tandem with this.”
Miles Brundage, research fellow at Oxford University’s Future of Humanity Institute, said: “AI will alter the landscape of risk for citizens, organisations and states — whether it’s criminals training machines to hack or ‘phish’ at human levels of performance or privacy-eliminating surveillance, profiling and repression — the full range of impacts on security is vast.
“It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass it. It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour.”

This is not too far-fetched to imagine.

Still, in my opinion, the simplest case against AI is this: humans are, by nature, fallible creatures. We do some things right, but a lot of things wrong. Any artificial intelligence worth its salt is likely to realize that humans are not good for the long-term survival of the planet, and with it, of the AI system itself.

If you were an AI system, what would you do once you became aware of that?
 
