Artificial Intelligence (AI) has made rapid advances in recent years, significantly improving the sophistication with which it can make decisions and automate tasks. This has far-reaching impacts on individual autonomy and considerably alters how societies function. It has thus become imperative to assess how AI is built and to ensure that it is deployed in a manner that does not negatively impact individuals or societies. Accordingly, many academics and institutions across the globe have published research presenting guidelines, frameworks, and best practices for the development of Responsible AI.
Drawing on this research and these guidelines, Analytics India Magazine surveyed enterprises to assess the state of Responsible AI in India. This report presents an overview of where Indian enterprises stand in adhering to AI principles such as fairness, transparency, accountability, explainability, and human control. It highlights the efforts made by AI enterprises in India to develop Responsible AI and draws attention to areas that need improvement. The report can thereby inform better policymaking at the enterprise and industry levels, helping to ensure the safe and responsible development of AI and to avoid the risks associated with the technology in an Indian context.
EY published a report last year, based on a survey of more than 500 employees conducted from January to March 2020, analysing the various roadblocks to AI adoption in India. Apart from the availability of AI talent, the study showed that ensuring trust through explainability and accountability, as well as ethical use, were among the major concerns holding back wider AI adoption in enterprises. Almost nine in ten respondents (88%) stated that their risk management frameworks require improvement to address AI-specific concerns such as ethics, accountability, and explainability.
Following the EY report, the world was hit by the pandemic, and firms had to find innovative ways to continue functioning while reducing human contact and intervention. This meant automating tasks through intelligent technologies. The resulting lockdowns also accelerated digitisation, generating huge amounts of data for enterprises to analyse and to use in developing applications that improve productivity and revenue. All of this accelerated the adoption of AI. According to a PwC report (based on a survey conducted between August and September 2020), India became a leading adopter of AI amid the pandemic. Yet even as adoption accelerated and respondents were convinced of AI's benefits, around half of them (48-49%) still expressed concerns about the resulting control, ethics, performance, and compliance risks.
Hence, while AI adoption accelerated after the pandemic, the concerns surrounding it remained unaddressed. It is therefore essential to assess the current state of Responsible AI within India, which can in turn inform policies that mitigate the risks associated with AI. While some reports (by PwC, Deloitte, and others) have analysed the state of Responsible AI at a global level or before the pandemic, this report attempts to provide a holistic, post-pandemic overview of Responsible AI within India, drawing on the latest research in the field.
The Berkman Klein Center (BKC) for Internet & Society at Harvard University collated 36 AI policy documents that provide 'normative guidance' for AI-based systems, in order to identify the common themes underlying them. Policymakers and other stakeholders can use that report to capture the principles of Responsible AI and reduce the harms of the technology.
The BKC report also includes the Responsible AI for All document published by NITI Aayog in February 2021. The NITI document lays the groundwork for developing and deploying ethical and responsible AI in an Indian context by identifying the core issues. While it remains a working document, research organisations such as The Dialogue, a tech policy think tank, have developed feedback reports to identify its drawbacks and help NITI Aayog improve it. At the same time, many private companies, management consultancy firms, and AI summits have produced checklists for deploying ethical and responsible AI.
Drawing on all these documents, AIM Research curated a list of principles that firms developing AI in India should adhere to: Accountability, Safety & Security, Transparency & Explainability, Fairness & Non-discrimination, Professional Responsibility, Human Control of Technology, and the Promotion of Human Values. This list was translated into survey questions that were then sent to various AI enterprises.
The collective results of the survey are presented in this report.