LILA: We understand that AI is founded on the claim that ‘human intelligence can be described, thus making it possible for machines to simulate it’. Can the potential of human intelligence be described? Aren’t we talking merely about the signs, symptoms and manifestations of human intelligence here? How would you respond to this?
Arnab Basu: I do not know whether human intelligence can ever be described completely, but it will certainly not happen in the near, or even the not-so-near, future. We are very far from actually formalising it, and hence the question of describing it technically, and thereafter simulating it, does not arise as of now. Yes, we are still only at the stage of trying to understand the multi-fold manifestations of human intelligence. For example, extracting a speaker’s sentiments from her speech is routinely done by us humans, as we not only analyse the text of the speech but also automatically register her bass, timbre, delivery and so on. Such things are much more difficult to capture, measure and analyse in a machine-enabled way. AI-based systems are working towards this but are still far from accurate.
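To make the multimodal nature of this task concrete, here is a minimal sketch in Python of how a system might fuse a lexical sentiment score with a crude prosodic cue from the audio signal. The lexicon, feature choices and weighting are all hypothetical placeholders for illustration, not the method of any real system:

```python
import numpy as np

# Toy lexicon; a real system would use a trained language model.
POSITIVE = {"good", "great", "happy", "excellent"}
NEGATIVE = {"bad", "terrible", "sad", "poor"}

def text_polarity(transcript: str) -> float:
    """Crude lexical sentiment score in [-1, 1]."""
    words = transcript.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

def prosody_arousal(waveform: np.ndarray) -> float:
    """Crude 'delivery' cue: RMS energy of the speech signal, squashed to [0, 1)."""
    rms = np.sqrt(np.mean(waveform ** 2))
    return float(np.tanh(rms))

def fused_sentiment(transcript: str, waveform: np.ndarray) -> float:
    """Weight the lexical polarity by how emphatically it was delivered."""
    return text_polarity(transcript) * (0.5 + 0.5 * prosody_arousal(waveform))

# Example: an emphatic positive utterance scores higher than a flat one.
speech = np.random.default_rng(0).normal(0, 0.8, 16000)  # stand-in for 1 s of audio
print(fused_sentiment("this is a great result", speech))
```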
LILA: If AI is a mimicking act, isn’t it critical that human intelligence itself has to evolve continuously to facilitate AI’s growth and development? But are we getting into a vicious circle when most AI equipment is used to render common human intelligence unnecessary? Does it not lead to a situation where a majority of humanity would find it easier to ‘use’ the equipment created by a minority than to develop native intelligence at the individual level? Do you see these prevalent practices as leading to a long-term risk: the degradation of the human material?
AB: Definitely not. As discussed above, AI has a very long way to go even to catch up with the current state of human intelligence. And, as rightly mentioned in this question, human intelligence is not static but dynamically evolving. So, the creation of such ‘equipment’ that will render human intelligence useless is highly improbable, if not quite impossible. Hence, in my opinion, there is no real long-term risk as is being incorrectly anticipated. Let me explain this further. Humans really need only a very small fraction of their brain power for regular day-to-day living. We are not able to realise the complete potential of our brains, and empowering ourselves with more AI tools like GPS gives us an opportunity to take routine tasks off the brain and redirect our thinking capabilities in other directions (or to other requirements).
LILA: One of the major uses of AI is to understand human speech. Much of the time, common human speech relies on folk psychology and sub-symbolic forms of knowledge like intuition. Do you visualise the future of AI with effective means to represent such knowledge?
AB: Yes, a day should come when AI can achieve better results in human speech analysis (and hence recognition) with more rigorous feedback from the brain signals that underlie some of our basic psychological or emotional processes. AI systems are now trying to emulate basic human emotions and feelings by studying neuronal signals in the brain and processing them to extract their meaning. In this context, there is reasonable progress in AI-enabled hospitals and the like, where patients just need to ‘think’ about getting their head-rest up or down, and AI systems pick up those ‘thought’ signals from the brain and actually change the position of the head-rest through attached mechanical devices.
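As a rough illustration of the kind of pipeline such ‘thought’-driven control could use, here is a toy sketch: band-power features from EEG-like epochs feed a classifier whose output maps to a head-rest command. The sampling rate, frequency bands and synthetic signals are assumptions for the demonstration, not details of any deployed system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

FS = 128  # sampling rate in Hz (hypothetical headset)

def band_power(epoch: np.ndarray, lo: float, hi: float) -> float:
    """Mean spectral power of one EEG epoch in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(epoch.size, d=1 / FS)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return float(power[mask].mean())

def features(epoch: np.ndarray) -> list:
    # Alpha (8-12 Hz) and beta (13-30 Hz) power, a classic motor-imagery cue.
    return [band_power(epoch, 8, 12), band_power(epoch, 13, 30)]

# Synthetic stand-in for labelled training epochs ('raise' vs 'lower' intent).
rng = np.random.default_rng(1)
def fake_epoch(label: int) -> np.ndarray:
    t = np.arange(FS) / FS
    f = 10 if label == 0 else 20  # different dominant rhythm per intent
    return np.sin(2 * np.pi * f * t) + 0.5 * rng.normal(size=FS)

X = [features(fake_epoch(label)) for label in (0, 1) * 50]
y = [0, 1] * 50
clf = LogisticRegression().fit(X, y)

# A new 'thought' epoch is classified and mapped to a head-rest command.
command = "raise" if clf.predict([features(fake_epoch(0))])[0] == 0 else "lower"
print(command)
```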
LILA: AI is largely used in military situations. War and love are two fields where intuition and knowledge about others’ knowledge play a big role, beyond all meticulous attempts at strategizing. Do you think it is a balanced practice to foreground ‘machine intelligence’ when one discusses preparedness for war?
AB: Artificial Intelligence in general, and Machine Intelligence in particular, are broadly and primarily pattern-recognition methodologies, at their core and in their current state of scientific maturity. By their very structural nature, they are not in a position, at least as yet, to strategize. At best, they can give ‘good’ predictions in technically rather well-specified, atomic situations. They cannot generalize and create abstractions like human thought, and hence are far from being effective strategists. One application could be smart cluster munitions releasing explosive bomblets that identify personnel and military targets through pattern recognition. Similar applications for drones engaging highly customised targets come with reduced threat exposure for the forces, but with a risk to civilians as well as an escalation of strategic reach. These are important issues, and humanity at large has to take a call on exercising proper control over such applications. That is where the leaders of nations play an important role.
LILA: About the question of leadership: our histories as well as mythologies are full of stories of heroes whose capacity for decision-making was rooted in their upbringing and training. But in our times, a nation’s strength is not projected through its people, but through its cumulative machine capability. Isn’t such articulation of one’s strengths through external representations a direct sign of our lack of confidence in ourselves and in nature, as well as of the decadence of human intelligence?
AB: No, it is not. We must remember that such external representations are also creations of human intelligence. We are just shifting war from the physical, the so-called hand-to-hand, to indirect machine-based means, but the thought and analysis needed to create such ‘capable’ machines are still human. We might never see a day when machines can create such machines, whereas we humans can today create both humans and these machines!
LILA: In cases where AI is expected to perform intelligent routing in content delivery networks, do the limitations in interpreting emergent situations and unprecedented occurrences pose a problem in determining delivery destinations? Do you see this as a danger, and could you tell us about the latest research in AI towards bridging this gap?
AB: Yes, this is indeed a problem, and not even an uncommon one. There is some uncertainty here, which can be serious, but I see a lot of work going on in this direction, based on systems like IoT-based supply chains that perform real-time feedback analysis and can handle such situations to a large extent, if not totally. Several companies around the world, such as Amazon, Accenture and IBM, are actively working on such tools and methodologies, with a view to deploying such products in the market soon.
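A minimal sketch of what real-time feedback routing can look like: each delivery node keeps a smoothed latency estimate that is updated from live measurements, and traffic shifts towards whichever node currently looks best. The node names, smoothing scheme and simulated network below are hypothetical:

```python
import random

class AdaptiveRouter:
    """Routes each request to the delivery node whose smoothed latency
    estimate is currently lowest, updating estimates from live feedback."""

    def __init__(self, nodes, alpha=0.2):
        self.alpha = alpha                      # smoothing factor for feedback
        self.latency = {n: 0.0 for n in nodes}  # EWMA latency per node
        self.seen = {n: False for n in nodes}

    def choose(self):
        # Try every node once, then exploit the best current estimate.
        untried = [n for n, s in self.seen.items() if not s]
        if untried:
            return random.choice(untried)
        return min(self.latency, key=self.latency.get)

    def report(self, node, observed_latency):
        # Real-time feedback: fold the new measurement into the estimate.
        if not self.seen[node]:
            self.latency[node], self.seen[node] = observed_latency, True
        else:
            self.latency[node] += self.alpha * (observed_latency - self.latency[node])

router = AdaptiveRouter(["edge-eu", "edge-us", "edge-ap"])
for _ in range(200):
    node = router.choose()
    # Simulated network: 'edge-us' is degraded; the router shifts away from it.
    ms = random.gauss(120 if node == "edge-us" else 60, 10)
    router.report(node, ms)
print(router.latency)
```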
LILA: Can self-improving AI equipment that learns new heuristics and interprets emergent situations based on its experience lead to developments that are irreversible? How do you foresee the future of AI in this regard?
AB: Again, by its very structure, an AI system is self-adaptive anyway and uses situational feedback to make itself ‘better’. This, of course, makes it more effective in the very narrow domain where it is applied, but that does not mean it can transcend and operate beyond its mandate, unlike a human brain (which does so all the time). The future of AI is very bright (most industries, from healthcare and drug design to finance and banking to supply chains, as elaborated above, have a huge and increasing dependence on AI), but limited, like our Internet today, which is an omnipresent part of our lives but does not solve all our problems.
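This point, adapting from feedback while staying strictly within a fixed mandate, can be illustrated with a classic epsilon-greedy learner: it improves its choices from experience, but only ever over the action set it was given. The action names and payoffs below are invented for the sketch:

```python
import random

class EpsilonGreedyAgent:
    """Self-adapts from feedback, but only over the fixed action set it was
    given; it can never invent an action outside that 'mandate'."""

    def __init__(self, actions, epsilon=0.1):
        self.actions = list(actions)
        self.epsilon = epsilon
        self.value = {a: 0.0 for a in self.actions}  # running reward estimates
        self.count = {a: 0 for a in self.actions}

    def act(self):
        if random.random() < self.epsilon:          # occasional exploration
            return random.choice(self.actions)
        return max(self.value, key=self.value.get)  # otherwise exploit

    def learn(self, action, reward):
        self.count[action] += 1
        # Incremental mean: the estimate drifts towards the true payoff.
        self.value[action] += (reward - self.value[action]) / self.count[action]

agent = EpsilonGreedyAgent(["option_A", "option_B", "option_C"])
true_payoff = {"option_A": 0.3, "option_B": 0.8, "option_C": 0.5}
for _ in range(2000):
    a = agent.act()
    agent.learn(a, random.gauss(true_payoff[a], 0.1))
print(max(agent.value, key=agent.value.get))  # converges to 'option_B'
```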
LILA: How can we develop a communicative language that promotes AI in a manner that projects it as supportive of, and not superior or antithetical to, human intelligence? What efforts should be made to train the developers and promoters of AI towards this end?
AB: We do not need to put in any separate effort for it. It shall happen automatically. The moment we realise that it is a permanent and useful tool, but only a tool, that cannot be expected to solve all human-analytic problems (as is being incorrectly projected these days), things will fall into their proper places. We have evolved from hand-washing to mechanical washing machines to automated machines using fuzzy logic, without the consumer ever feeling that the machine or its intelligence threatens to take over the human race. It is the application that evolves and finds acceptance; no elevation of society for better adaptability is required. Normal market forces make this an automatic process. Hence, technologies and their manifestations will come and go, while those that are supportive of the human race will find their place and endure.
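For the curious, the fuzzy logic in such washing machines boils down to a few graded rules. Here is a minimal sketch with made-up membership functions and output times; real controllers tune these values from experiments:

```python
def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, rising to 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def wash_time(dirt, load):
    """Fuzzy inference: dirt level and load size (both 0-10) -> minutes.
    Rules follow the common Mamdani pattern; breakpoints and output
    times are illustrative, not taken from any real machine."""
    light, heavy = tri(dirt, -5, 0, 6), tri(dirt, 4, 10, 15)
    small, large = tri(load, -5, 0, 6), tri(load, 4, 10, 15)

    # Each rule fires to the degree both conditions hold (min); its output
    # is a representative wash time for that condition.
    rules = [
        (min(light, small), 20),   # lightly soiled, small load -> short wash
        (min(light, large), 35),
        (min(heavy, small), 45),
        (min(heavy, large), 60),   # heavily soiled, large load -> long wash
    ]
    total = sum(w for w, _ in rules)
    # Weighted average of rule outputs (defuzzification); default if no rule fires.
    return sum(w * t for w, t in rules) / total if total else 40.0

print(wash_time(dirt=7, load=3))  # fairly dirty, small load -> about 45 minutes
```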
LILA: The use of AI is mostly directly conditioned by institutional agendas, but its impact is largely social. How can we make communication between the governor and the governed more transparent with regard to applications of AI?
AB: Good question. The answer follows from the previous one. The moment we have a common communicative language for the masses for this AI framework (a concerted effort in this direction from a social body including the stakeholders concerned might be really helpful), this will follow automatically. A good way to start would be for the government to create a nodal body mandated to draft a ‘Human-AI Constitution’ for accountability and transparency. Of course, a lot of deep thinking and imaginative extrapolation on our part will be needed to create such a process. It will have to cover ethical and social issues along with business requirements, synergised well enough to be cohesive rather than contradictory.
LILA: The creation of AI is often conditioned and limited by the philosophical or ethical position assumed by the developing agency concerning the nature of mind, the modes and aims of creating artificial things, and so on. Is there a worldwide forum that would regulate the production and distribution of AI, making developing agencies accountable for their creations?
AB: Again, I think my answer to the previous question somewhat addresses this. As of now, I am not aware of any such worldwide forum, but the creation of such governmental nodal bodies (as discussed above) in the more AI-enabled countries might eventually lead to such a forum at the international (e.g. UN) level. Then we can conceive of something like the WHO (which monitors world physical-health standards) to monitor human-machine interaction and the corresponding ethics, integrity, transparency and accountability.
LILA: In a world where large sections of people are still struggling for basic needs, is committing disproportionate funds to research in AI justified? Is it true that rampant use of AI will lead to mass unemployment? In which areas do you feel such research can help society at large?
AB: It is quite difficult to answer this question given the potential future benefits of an AI-enabled human society, but, yes, basic needs should not be sacrificed for the sake of AI-enablement. No, the assumption that AI will cause mass unemployment is absolutely unfounded. The reasons should be clear from my previous answers. Routine jobs requiring monotonous and repetitive actions might suffer to some extent, but not the entire human job market as a whole. We shall very soon learn to live with AI and retrain our human faculties to share jobs with it, much as cars removed the services of palanquin-bearers but opened up work for car mechanics. I have already commented that AI is pervading, and shall continue to pervade, all aspects of human existence, from food and medicines to entertainment and luxury. So, it is for the individual (or the organisation) to choose and decide where to make an impact. Personally, I would love to see AI research helping us combat and contain human diseases more effectively, primarily those like SLE, AIDS and cancer, for which we do not have complete cures as yet.