AI is now being used for decision-making in life-and-death situations. The technology has been painted as both villain and hero, deemed Frankenstein's monster on some occasions and Mother Teresa on others.

End-users need to be aware of how the technology will benefit and affect them. Responsible AI can guard against the use of biased data or algorithms, ensure that automated decisions are justified and explainable, and help maintain user trust and individual privacy.

“There have been numerous ethical concerns about the use of AI, one of which led to Google deciding not to renew a contract with the Pentagon. There are multiple instances of AI systems being biased against a certain race or gender, and who doesn’t remember Cambridge Analytica? Hence, it is of critical importance that there are high levels of transparency and responsibility woven into the AI systems being designed,” said Abhinav Girdhar, Founder of Appy Pie, a no-code development platform.

Girdhar further shared that there have been many instances where AI models have replicated and institutionalized bias. COMPAS, for example, an AI model originally built to predict recidivism, eventually turned out to be biased against Black defendants, and an AI-powered recruiting tool ended up biased against women.

“In the modern, evolved society, these AI applications can spell disaster for the company” – Abhinav Girdhar, Founder, Appy Pie

AI works by analyzing patterns, typically learned by neural networks through machine learning and deep learning. The slightest mistake or bias in that process can jeopardize the desired output and potentially cause massive damage: loss of reputation, financial loss, or even larger losses in the form of environmental damage.
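To make this concrete, here is a minimal, purely illustrative sketch (synthetic data and scikit-learn; all names and numbers are our own assumptions) of how a bias baked into historical labels is absorbed by a model trained on them:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: a skill score plus a binary protected attribute.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical approvals are biased: group 1 was approved less often
# even at the same skill level.
approved = (skill + rng.normal(scale=0.5, size=n) - 0.8 * group) > 0

# A model fitted to these labels learns the bias along with the signal.
X = np.column_stack([skill, group.astype(float)])
model = LogisticRegression().fit(X, approved)

# Same skill, different group -> noticeably different approval probability.
print(model.predict_proba([[0.5, 0.0], [0.5, 1.0]])[:, 1])
```

The model is never told to discriminate; it simply reproduces the pattern present in its training data.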

In such a situation, how do we build AI algorithms and systems that are transparent, explainable, and responsible?

Rajarshi Bhattacharyya, Chairman and Managing Director of ProcessIT Global, helped us answer this question.

“A software algorithm typically comprises a series of instructions, and these instructions are the modus operandi for solving a precise problem to achieve a definite, explicit outcome,” Bhattacharyya said.

The essential characteristics of an algorithm are (a short sketch after the list illustrates all six):

1. Unambiguous: Algorithms should be clear, precise, and explicit. Each phase, with its inputs and outputs, should be unambiguous and must lead to the desired outcome.

2. Input: An algorithm should have well-defined inputs.

3. Output: An algorithm should have one or more well-defined outputs that match the desired, expected result.

4. Finiteness: An algorithm should terminate after a finite number of steps or loop iterations.

5. Feasibility: An algorithm should be feasible with the available resources.

6. Independent: An algorithm should consist of step-by-step directions that are independent of any programming language.
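To see these six properties in one place, here is a deliberately simple sketch, a standard binary search, with the properties noted in comments (the annotations are ours, not Bhattacharyya's):

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    Unambiguous: every step is precise.  Input: a sorted list and a
    target value.  Output: one well-defined result.  Finiteness: the
    search interval halves each iteration, so the loop always ends.
    Feasibility: O(log n) comparisons.  Independence: the steps (halve
    the interval, compare, narrow or stop) map to any language.
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 5, 8, 13, 21], 13))  # -> 3
```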

“Additionally, a theoretical analysis of the algorithm should be taken into consideration. In the theoretical analysis of algorithms, the normal practice is to estimate their complexity in the asymptotic sense. The most commonly used notation to describe resource consumption or ‘complexity’ is Donald Knuth’s Big O notation, which expresses the complexity of an algorithm as a function of the size of the input n. This helps in gauging how fast the algorithm can run,” he added.
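As a toy illustration of what Big O captures (our example, not part of Bhattacharyya's remarks): two functions that answer the same question, one O(n²) and one O(n), diverge sharply in running time as the input grows.

```python
import time

def has_duplicates_quadratic(xs):   # O(n^2): compares every pair
    return any(xs[i] == xs[j]
               for i in range(len(xs))
               for j in range(i + 1, len(xs)))

def has_duplicates_linear(xs):      # O(n): single pass with a set
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False

data = list(range(5000))            # worst case: no duplicates at all
for fn in (has_duplicates_quadratic, has_duplicates_linear):
    start = time.perf_counter()
    fn(data)
    print(f"{fn.__name__}: {time.perf_counter() - start:.4f}s")
```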

In terms of functionality, the algorithm should conform to an internationally accepted standard (e.g., those maintained by ISO/IEC JTC 1/SC 22). It should have no negative impact or harmful effect on humanity or the environment, and should operate within an internationally acceptable framework.

Another promising technique for transparent, explainable, and responsible AI is counterfactual fairness: checking that a model's decision for an individual would not change in a hypothetical world where only a protected attribute, such as race or gender, were different. Another facet is the training data: if it contains biases or is incorrectly labeled, the AI's outcomes can be significantly skewed.
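A minimal version of such a counterfactual check, under the simplifying assumption that the protected attribute is a single binary column with no causal effect on the other features (full counterfactual fairness, per Kusner et al., 2017, models those effects too): flip only that column and see how often the model's decision changes.

```python
import numpy as np

def counterfactual_flip_rate(model, X, protected_col):
    """Fraction of individuals whose prediction changes when only the
    binary protected attribute is flipped; 0.0 is the ideal here."""
    X_cf = X.copy()
    X_cf[:, protected_col] = 1.0 - X_cf[:, protected_col]  # flip 0 <-> 1
    return float(np.mean(model.predict(X) != model.predict(X_cf)))
```

Run against the biased model from the earlier sketch (`counterfactual_flip_rate(model, X, protected_col=1)`), this should report a clearly nonzero flip rate.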

An essential requirement for responsible AI is explainability and interpretability. First and foremost, explainability enables transparency, which in turn facilitates trust.

“A lack of interpretability can expose an organization to operational, reputational, and financial risks. Practicing responsible AI helps develop user trust by making businesses accountable. Explainability goes hand in hand with responsibility,” said Nitin Agarwal, CTO and Co-founder, Locobuzz.
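One widely used, model-agnostic starting point for the explainability Agarwal describes is permutation feature importance; here is a sketch with scikit-learn on synthetic data (the setup is ours, purely illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time and measure how far accuracy falls:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")
```

If one of those features were a protected attribute, or a proxy for one, a large importance score would be an immediate red flag.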

“Throughout the whole AI cycle, from design to execution, each step must be documented, making it very clear why an algorithm is designed in a specific manner” – Nitin Agarwal, CTO and Co-founder, Locobuzz
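Agarwal's documentation point can be made operational with something as lightweight as a machine-readable "model card" written at design time (the fields below are illustrative assumptions, loosely in the spirit of Mitchell et al.'s model-card proposal):

```python
import json
from datetime import datetime, timezone

# Hypothetical model card: each design decision is recorded when it
# is made, not reconstructed after deployment.
model_card = {
    "model": "loan_approval_v3",                 # hypothetical name
    "training_data": "applications_2021_2023",   # hypothetical dataset
    "intended_use": "pre-screening only; final decision stays human",
    "excluded_features": ["race", "gender", "postcode"],
    "fairness_checks": {"counterfactual_flip_rate": 0.01},
    "known_limitations": "under-represents applicants under 21",
    "owner": "credit-risk team",
    "recorded_at": datetime.now(timezone.utc).isoformat(),
}

with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```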

Agarwal believes that, at the end of the day, AI is a machine and bound to fail at times; taking responsibility and being able to explain its decisions, however, ensures that the user's interests are protected.
