Artificial intelligence’s current ability, future prospects discussed

Published March 24, 2018
Dr Muhammad Haris, Farieha Aziz and Dr Sajjad Haider participate in the discussion at Habib University on Thursday.—Photo by writer

KARACHI: “Would you be comfortable if I was actually a robot? What if your child’s teacher in school was a robot? Would you be fine with sending your child to school in a self-driving car?”

Two experts on artificial intelligence (AI) made the audience sit up and confront reality during a discussion about concerns regarding AI, titled ‘Journey on intelligence: a dialogue where philosophy inquires artificial intelligence’, organised by the School of Science and Engineering at Habib University on Thursday.

As we step into the age of AI, the question of ethics remains unresolved. How will mankind reconcile artificial intelligence with human intellect while also grappling with the moral and ethical issues it raises?

Dr Sajjad Haider, head of the AI lab at the Institute of Business Administration, started with Alan Turing’s 1950 test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. But ‘AI’ or ‘artificial intelligence’ was truly born in 1956 when the computer scientist John McCarthy coined the term.

‘Real-world application of robots has people worried about losing their jobs’

“The years following that saw millions of dollars being poured into developing human-like intelligence, which was not really happening, making it the dark period of AI,” he said.

But things started picking up pace around 1996 and 1997, when IBM’s Deep Blue defeated the world chess champion Garry Kasparov.

“We didn’t have that much computing power in the 1950s but we did in the 1990s. Then recently, in 2016, Google DeepMind’s AlphaGo defeated Go champion Lee Sedol leading to much media hype about AI. Now we see the DARPA challenges involving autonomous or driverless vehicles.

“We also have robots now that look like humans,” he said. “They happen to have Asian faces because they are mostly manufactured in Japan or China. We already have service robots that look like machines but the human face on robots helps smooth interaction between the machine and the human,” he explained.

“But real-world application of robots has people worried about losing their jobs now. For instance there is the Google translator that has got call centre workers all worried,” he said.

“But throughout history we have seen that whenever something new comes along it may make some professions less popular but then there are new professions that come into demand while creating new openings,” Dr Sajjad pointed out.

Other developments that are equally or perhaps more worrisome for humans include data mining, deepfake technology and the like. There is Facebook, the world’s biggest social network, at the centre of an international scandal with Cambridge Analytica involving voter data, the 2016 US presidential election and Brexit. There are smart programmes that can analyse your facial expressions to infer your personality.

Dr Sajjad observed that people have developed many powerful AI tools and will keep using them. But what if only a few have access to such tools? “Then it will be just like nuclear technology, which can be misused by those who have access to it,” he pointed out.

Meanwhile, Dr Muhammad Haris, a professor of philosophy at Habib University, said that the combination of biogenetics, AI and state power is making us wonder who’s going to hold power in the future.

“When we think, we think and then there is a gap before the process of reflection,” he said. “That kind of gap diminishes with AI. So AI and genetics will get entangled,” he added.

He also reminded the audience of what already exists, such as surveillance systems and facial recognition by computers. He cited the example of a movie rights company sending a legal notice to Vimeo to take down a video it presumed was its own. The company was in fact mistaken, as the video was a computer simulation of the original that was very difficult to tell apart.

“So now with AI you have a situation where the sources of similarities are increasing, leading to the production of hyper-reality,” he said.

The discussion was moderated by Farieha Aziz, a journalist and co-founder of Bolo Bhi, a digital rights and civil liberties group.

Published in Dawn, March 24th, 2018
