Would you trust AI to make your life decisions?
Dear Reader,

If you’re a Tom Cruise fan, you’ve almost certainly watched Minority Report. The 2002 crime flick seemed futuristic at the time: it portrays a system that can predict when a person is likely to commit a crime, long before they’ve even thought about it.

Imagine the police using a system like that to predict crimes and make pre-emptive arrests before they happen. Sounds great for the safety of your city, right? Now, what if the prediction is wrong and the system raises alarms against citizens who have no likelihood of committing any crime? That’s what happens when an artificial intelligence-led system is rooted in bias.

The future showcased in Minority Report began to materialize in 2012, when courts and law-enforcement agencies across the US started using a system called COMPAS for predictive risk assessment, and continued to use it until at least 2016. The system, an AI algorithm, predicted how likely a person was to commit a crime again based on previous records. A detailed report soon after found the system to be biased against Black defendants, severely denting trust in this highly advanced algorithm.

The fate of every AI algorithm ultimately rests on this question of bias. How certain are you that the predictions made by your algorithm are accurate and free of bias? If you’re not, you can spend months working on a project only to see it junked because it can’t be trusted. Worse, if it is implemented anyway, it can have a far-reaching impact on your company’s reputation and finances.

With that in mind, in this week’s AI Bulletin we lay out simple ways of identifying and then eliminating biases in your AI systems. We also have an interesting case study on how Exide Life Insurance worked on eliminating biases from its models.

That’s a lot of food for thought for you this week. Let us know what you think about the stories.

Regards

Varun Aggarwal

Editor, ETCIO

varun.aggarwal@timesinternet.in

Before you fix AI bias, understand what it really is

Today, technology talent is in heavy demand. Companies are innovating in different ways to find and attract talent, applying AI models to gain an edge over the competition. Evidence from experimental models highlights bias towards male candidates over female candidates. While this may look like societal bias, it is data bias.

With AI and ML taking over traditional CRMs, biases are very hard to detect. Every step in the AI modelling process carries the risk of bias creeping in. There are various ways to identify biases, and if the requisite framework, best practices, and tools are in place, many of these biases can be caught and cleaned. One basic check is to compare a model’s outcomes across groups, as sketched below.
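
As a concrete illustration, here is a minimal sketch of one such check, assuming Python with pandas: compare selection rates across groups and compute their ratio. The DataFrame, column names, and numbers are hypothetical, and the 0.8 threshold is the US EEOC “four-fifths” rule of thumb, not a universal standard.

```python
import pandas as pd

# Hypothetical screening results from a hiring model; the columns and
# values are illustrative, not from any specific system.
df = pd.DataFrame({
    "gender":   ["M", "M", "M", "M", "F", "F", "F", "F"],
    "selected": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Selection rate per group: P(selected = 1 | group)
rates = df.groupby("gender")["selected"].mean()
print(rates)

# Disparate impact ratio: the lower group's rate over the higher
# group's rate. The US EEOC "four-fifths" rule of thumb flags
# ratios below 0.8 as potential adverse impact.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: possible selection bias between groups")
```
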
Read More…

What can you do to mitigate AI bias?

Data and analytics professionals need to be mindful of AI governance to guard against the negligence that creeps, via bias, into AI algorithms created by humans. Analytics and data science teams should approach AI-based projects with a keen eye for biases in model data.
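
One common mitigation, sketched below under stated assumptions, is the reweighing technique of Kamiran and Calders: weight each training row so that the sensitive attribute becomes statistically independent of the label before the model is fit. The data, column names, and scikit-learn model here are purely illustrative.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Illustrative training data: 'gender' is the sensitive attribute,
# 'hired' the label. Columns and values are hypothetical.
df = pd.DataFrame({
    "gender":     ["M", "M", "M", "F", "F", "M", "F", "M"],
    "experience": [5, 3, 6, 5, 4, 2, 6, 4],
    "hired":      [1, 0, 1, 1, 0, 0, 1, 1],
})

# Reweighing (Kamiran & Calders): weight each row by
# P(group) * P(label) / P(group, label), which makes the sensitive
# attribute statistically independent of the label in the weighted data.
p_group = df["gender"].value_counts(normalize=True)
p_label = df["hired"].value_counts(normalize=True)
p_joint = df.groupby(["gender", "hired"]).size() / len(df)

weights = df.apply(
    lambda row: p_group[row["gender"]] * p_label[row["hired"]]
                / p_joint[(row["gender"], row["hired"])],
    axis=1,
)

# Fit on features only (sensitive attribute excluded), with the weights.
model = LogisticRegression().fit(
    df[["experience"]], df["hired"], sample_weight=weights
)
```
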
Read More…

Here’s how Exide Life Insurance is making AI trustworthy

One of the biggest roadblocks in implementing AI is trust. Preferential treatment towards a specific segment, whether due to narrow training data or malicious intent, cannot be ruled out.

“Building trust in the AI models usually takes time. We started the process by extracting assumptions from the historical data and feeding them into the models. We then moved to some pilot stages where production data was fed to the system to enhance the ML algorithm, and at present, we are moving towards extending these models to all user data. This will finally expand the assumption base and build more trust in the outcome from these AI models,” Ayan De, Chief Technology Officer, Exide Life Insurance, explained.
Read More…


Why we need AI that is transparent, explainable, and responsible

End users need to be aware of how the technology will benefit and affect them. Responsible AI can guard against the use of biased data or algorithms, ensure that automated decisions are justified and explainable, and help maintain user trust and individual privacy.

AI works by analyzing patterns (neural networks) learned through machine learning and deep learning. The slightest mistake or bias in that process can jeopardize the desired output and cause massive damage in the form of reputational loss, financial loss, or even greater harm such as environmental damage.
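
As a concrete illustration of explainability, here is a minimal sketch of one widely used technique, permutation importance: shuffle each feature in turn and measure how much the model’s test score drops. It assumes scikit-learn and uses a synthetic dataset and model as stand-ins, not any specific system discussed above.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real decision-making dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the
# drop in test accuracy. Large drops reveal which inputs the model
# actually relies on, making an opaque model easier to audit and explain.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {imp:.3f}")
```
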
Read More…




