Algorithmic Unfairness – Automated Decision-Making

Businesses develop proprietary algorithms to analyse vast amounts of data for trends, patterns and hidden nuances. These algorithms are typically trade secrets that aid the business in taking commercial decisions, or they may be the business model itself. Take, for instance, an algorithm that assesses an applicant’s worthiness for accident insurance by analysing the applicant’s driving behaviour, accident history, the general rate of accidents caused by people in the same age group, location, and so on. It certainly helps the insurance company choose the “right applicant” who deserves insurance should there be an accident, or the “profitable applicant” who will not cause accidents. A fair amount of discrimination and profiling comes about as a result. The picture says a thousand words: Lauren Smith’s post at the Future of Privacy Forum (https://fpf.org/2017/12/11/unfairness-by-algorithm-distilling-the-harms-of-automated-decision-making/) includes a table classifying the various kinds of discrimination and profiling that result.

It is opined that deep learning neural networks provide great predictions but are not very transparent: they rarely provide a causal audit trail. For now, the question remains one of better predictions versus transparency. Artificial intelligence tools have been around for a while and are used extensively across industry verticals. In Loomis v. Wisconsin, Mr Loomis challenged the use of proprietary, closed-source risk-assessment software in sentencing him to prison, alleging that the software, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), violated due-process rights by taking gender and race into account. The algorithms were treated as trade secrets, and the causal audit process was not clearly known to the judge.

The GDPR, the EU’s General Data Protection Regulation, provides for certain “decisional privacy rights”, i.e. the privacy of certain significant, self-defining choices. Article 22 of the GDPR says: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” The Article further provides for certain exceptions. The law will create a ‘right to explanation’, whereby a user can ask for an explanation of an algorithmic decision that was made about him or her. It may be noted that the GDPR is quite sprawling and has wide territorial scope, extending to any place where the data is processed.

Explaining explanations: if a human judge must give reasons for the decisions taken, can an AI explain itself as well? The New York Times Magazine recently asked the same question, “Can A.I. Be Taught to Explain Itself?” (https://www.nytimes.com/2017/11/21/magazine/can-ai-be-taught-to-explain-itself.html). DARPA’s Explainable Artificial Intelligence programme offers some answers on “explainability” (https://www.darpa.mil/program/explainable-artificial-intelligence). New machine learning and AI systems will need a strategy that includes the ability to produce explainable models, in a way that humans can understand and trust:

  • Transparency about how a decision is made is not enough. The explanation should cover “why” the decision was made; this may require human intervention.
  • The “outcome” alone is not enough. The explanation should cover how points are assigned and why they are assigned the way they are.
  • There should be meaningful information about the logic involved.
  • The model should be interpretable, and interpretability should be thought through while the model is being developed (a minimal sketch follows this list).
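
To make these points concrete, below is a minimal, self-contained sketch of an interpretable, points-based decision model that carries its own explanation. All feature names, weights and thresholds are hypothetical, invented purely for illustration; they are not any insurer’s or bank’s actual scorecard.

```python
# A hypothetical points-based scorecard: every weight is visible,
# so the model can explain both "how" points were assigned and "why"
# the decision came out the way it did.

SCORECARD = {
    "years_without_accident": 8,   # points per accident-free year
    "low_annual_mileage": 15,      # flat bonus if annual mileage is low
    "prior_claims": -25,           # penalty per prior claim
}
APPROVAL_THRESHOLD = 40


def decide(applicant: dict) -> dict:
    """Score an application and keep the per-feature breakdown,
    so the 'how' of the decision can be audited."""
    breakdown = {feature: weight * applicant.get(feature, 0)
                 for feature, weight in SCORECARD.items()}
    total = sum(breakdown.values())
    return {"approved": total >= APPROVAL_THRESHOLD,
            "total": total,
            "breakdown": breakdown}


def explain(decision: dict) -> str:
    """Turn the breakdown into the plain-language 'why' that a
    rejected applicant could be given."""
    verdict = "approved" if decision["approved"] else "declined"
    lines = [f"Application {verdict}: score {decision['total']} "
             f"against a required {APPROVAL_THRESHOLD}."]
    # Largest negative contributions first: the reasons that mattered most.
    for feature, points in sorted(decision["breakdown"].items(),
                                  key=lambda item: item[1]):
        lines.append(f"  {feature.replace('_', ' ')}: {points:+d} points")
    return "\n".join(lines)


if __name__ == "__main__":
    applicant = {"years_without_accident": 3,
                 "low_annual_mileage": 1,
                 "prior_claims": 2}
    print(explain(decide(applicant)))
```

A deep neural network might well predict accident risk better than such a scorecard, but it could not surface a per-feature breakdown this directly, which is precisely the better-predictions-versus-transparency trade-off discussed above.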

So, when that insurance application or credit application is rejected, the insurance company or the bank should provide an explanation of why and how the decision was made, much as the sketch above illustrates. The White Paper of the Committee of Experts on a Data Protection Framework for India, released recently for public comments, asks: “Should there be a prohibition on evaluative decisions taken on the basis of automated decisions?” Our humble answer is no: the regulatory framework should not impose a “prohibition” but should instead require an “explanation”.

Author: Sharda Balaji
