Artificial intelligence’s current abilities, future prospects discussed

Published March 24, 2018
Dr Muhammad Haris, Farieha Aziz and Dr Sajjad Haider participate in the discussion at Habib University on Thursday.—Photo by writer

KARACHI: “Would you be comfortable if I was actually a robot? What if your child’s teacher in school was a robot? Would you be fine with sending your child to school in a self-driving car?”

Two experts on artificial intelligence (AI) made the audience sit up and take note during a discussion on concerns about AI, titled ‘Journey on intelligence: a dialogue where philosophy inquires artificial intelligence’, organised by the School of Science and Engineering at Habib University on Thursday.

As we step into the age of AI, the question of ethics hangs in the balance. How will mankind handle the tension between artificial intelligence and human intellect while also grappling with the moral and ethical issues that arise?

Dr Sajjad Haider, head of the AI lab at the Institute of Business Administration, started with Alan Turing’s 1950 test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. But ‘AI’ or ‘artificial intelligence’ was truly born in 1956 when the computer scientist John McCarthy coined the term.

‘Real-world application of robots has people worried about losing their jobs’

“The years following that saw millions of dollars being poured into developing human-like intelligence, which was not really happening, making it the dark period of AI,” he said.

But things started picking up pace around 1996 and 1997, when IBM’s Deep Blue defeated the world chess champion Garry Kasparov.

“We didn’t have that much computing power in the 1950s but we did in the 1990s. Then recently, in 2016, Google DeepMind’s AlphaGo defeated Go champion Lee Sedol leading to much media hype about AI. Now we see the DARPA challenges involving autonomous or driverless vehicles.

“We also have robots now that look like humans,” he said. “They happen to have Asian faces because they are mostly manufactured in Japan or China. We already have service robots that look like machines but the human face on robots helps smooth interaction between the machine and the human,” he explained.

“But real-world application of robots has people worried about losing their jobs now. For instance, there is Google Translate, which has got call centre workers all worried,” he said.

“But throughout history we have seen that whenever something new comes along it may make some professions less popular but then there are new professions that come into demand while creating new openings,” Dr Sajjad pointed out.

Other things that are equally or perhaps more worrisome for humans include data mining, deepfake technology and the like. There is Facebook, the world’s biggest social network, at the centre of an international scandal with the data firm Cambridge Analytica involving voter data, the 2016 US presidential election and Brexit. There are smart programmes that can analyse your facial expressions to infer your personality.

Dr Sajjad observed that people have built many powerful AI tools and will keep on using them. But what if only a few have access to such tools? “Then it will be just like nuclear technology, which can be misused by those who have access to it,” he pointed out.

Meanwhile, Dr Muhammad Haris, a professor of philosophy at Habib University, said that the combination of biogenetics, AI and state power is making us wonder who’s going to hold power in the future.

“When we think, we think and then there is a gap before the process of reflection,” he said. “That kind of gap diminishes with AI. So AI and genetics will get entangled,” he added.

He also reminded the audience of what already exists, such as surveillance systems and computerised facial recognition. He cited the example of a movie rights company that sent Vimeo a legal notice to take down a video it presumed belonged to it. The company was in fact mistaken, as the video was a computer simulation of the original that was very difficult to tell apart from it.

“So now with AI you have a situation where the sources of similarities are increasing, leading to the production of hyper-reality,” he said.

The discussion was moderated by Farieha Aziz, a journalist and co-founder of Bolo Bhi, a digital rights and civil liberties group.

Published in Dawn, March 24th, 2018
