Artificial intelligence’s current ability, future prospects discussed

Published March 24, 2018
Dr Muhammad Haris, Farieha Aziz and Dr Sajjad Haider participate in the discussion at Habib University on Thursday.—Photo by writer

KARACHI: “Would you be comfortable if I was actually a robot? What if your child’s teacher in school was a robot? Would you be fine with sending your child to school in a self-driving car?”

Two experts on artificial intelligence (AI) made the audience sit up and take note of reality during a discussion of concerns about AI, titled ‘Journey on intelligence: a dialogue where philosophy inquires artificial intelligence’, organised by the School of Science and Engineering at Habib University on Thursday.

As we step into the age of AI, the question of ethics remains unresolved. How will mankind handle the tension between artificial intelligence and human intellect while also grappling with the moral and ethical issues it raises?

Dr Sajjad Haider, head of the AI lab at the Institute of Business Administration, started with Alan Turing’s 1950 test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. But ‘AI’ or ‘artificial intelligence’ was truly born in 1956 when the computer scientist John McCarthy coined the term.

‘Real-world application of robots has people worried about losing their jobs’

“The years following that saw millions of dollars being poured into developing human-like intelligence, which was not really happening, making it the dark period of AI,” he said.

But things started picking up pace around 1996 and 1997, when IBM’s Deep Blue defeated world chess champion Garry Kasparov.

“We didn’t have that much computing power in the 1950s but we did in the 1990s. Then recently, in 2016, Google DeepMind’s AlphaGo defeated Go champion Lee Sedol leading to much media hype about AI. Now we see the DARPA challenges involving autonomous or driverless vehicles.

“We also have robots now that look like humans,” he said. “They happen to have Asian faces because they are mostly manufactured in Japan or China. We already have service robots that look like machines but the human face on robots helps smooth interaction between the machine and the human,” he explained.

“But real-world application of robots has people worried about losing their jobs now. For instance, there is Google Translate, which has got call centre workers all worried,” he said.

“But throughout history we have seen that whenever something new comes along, it may make some professions less popular, but new professions come into demand, creating new openings,” Dr Sajjad pointed out.

Other developments that are equally or perhaps more worrisome for humans include data mining and deepfake technology. There is Facebook, the world’s biggest social network, at the centre of an international scandal involving Cambridge Analytica, voter data, the 2016 US presidential election and Brexit. There are smart programmes that can analyse your facial expressions to infer your personality.

Dr Sajjad noted that people have built many powerful AI tools and will keep on using them. But what if only a few have access to such tools? “Then it will be just like nuclear technology, which can be misused by those who have access to it,” he pointed out.

Meanwhile, Dr Muhammad Haris, a professor of philosophy at Habib University, said that the combination of biogenetics, AI and state power is making us wonder who’s going to hold power in the future.

“When we think, we think and then there is a gap before the process of reflection,” he said. “That kind of gap diminishes with AI. So AI and genetics will get entangled,” he added.

He also reminded the audience of what is already in place, such as surveillance systems and facial recognition by computers. He cited the example of a movie rights company sending a legal notice to Vimeo to take down a video it presumed belonged to it. In fact, the company was mistaken: the video was a computer simulation that was very difficult to distinguish from the original.

“So now with AI you have a situation where the sources of similarities are increasing, leading to the production of hyper-reality,” he said.

The discussion was moderated by Farieha Aziz, a journalist and co-founder of Bolo Bhi, a digital rights and civil liberties group.

Published in Dawn, March 24th, 2018
