Case study: ensuring accountability and fairness in AI-driven loan approval systems.

The FinTrust Bank case study. Accountability in automated decision-making systems is integral to ensuring that the decisions made by these systems are transparent, fair, and justifiable. Consider the case of FinTrust Bank, a large financial institution that recently implemented an AI-powered loan approval system to streamline its application process and enhance decision-making efficiency.

One morning, an email lands in the inbox of John, the director of financial services at FinTrust Bank. The email is from Emma, a small business owner who applied for a loan to expand her bakery but was denied by the new AI system. Emma is perplexed and frustrated because she has a solid credit history and a thriving business. She demands an explanation for the denial and expresses her concern that the decision might have been unfair.

John immediately senses the gravity of the situation. He understands that the AI system's decision needs to be transparent and justifiable to maintain the bank's credibility and customer trust. He calls a meeting with his team, including AI developers, data scientists, and customer service representatives, to understand the rationale behind the loan denial.
As the team delves into the system's processes, they realize that the AI model relies on a complex set of algorithms and vast amounts of data to make its decisions. The system scored Emma's application low, but the exact reasons behind this score are not easily decipherable: a classic example of the black-box problem in AI systems. Here, the opacity of the system poses a significant challenge. How can FinTrust Bank provide a clear and understandable explanation to Emma?

The first step toward addressing Emma's concern is enhancing the transparency of the AI system. John suggests documenting the data sources, algorithms, and decision-making criteria used by the system. The goal is to ensure that all stakeholders, including applicants like Emma, can understand the basis of the decisions. The team decides to develop explanatory tools such as LIME (local interpretable model-agnostic explanations) and SHAP (Shapley additive explanations) to offer insights into the decision-making process.

During the analysis, the team discovers that the system might have used outdated or biased data, which could have unfairly impacted Emma's loan application. This raises another critical question: how can FinTrust Bank ensure that its AI system does not perpetuate biases present in the data?

To tackle this, John advocates for regular audits of the AI system.
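To make the idea behind local, model-agnostic explanations like LIME concrete, here is a minimal sketch of the underlying perturbation approach: nudge each input feature of a black-box scoring model and measure how much the score moves. The scoring function, feature names, and weights below are invented for illustration; they are not FinTrust's actual model.

```python
def loan_score(applicant):
    # Stand-in black-box model: a weighted sum the team cannot see inside.
    # Weights are hypothetical, chosen only to make the example run.
    weights = {"credit_history": 0.5, "annual_revenue": 0.3, "debt_ratio": -0.4}
    return sum(weights[k] * applicant[k] for k in weights)

def explain_locally(applicant, delta=0.1):
    """Attribute the score to features by one-at-a-time perturbation."""
    base = loan_score(applicant)
    contributions = {}
    for feature in applicant:
        perturbed = dict(applicant)
        perturbed[feature] += delta
        # Sensitivity: score change per unit change in this feature.
        contributions[feature] = (loan_score(perturbed) - base) / delta
    return base, contributions

emma = {"credit_history": 0.9, "annual_revenue": 0.7, "debt_ratio": 0.6}
score, explanation = explain_locally(emma)
for feature, effect in sorted(explanation.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {effect:+.2f}")
```

A ranked list like this is what the bank could show Emma: which factors pushed her score down, and by how much. Real LIME fits a local surrogate model over many random perturbations rather than one-at-a-time nudges, but the intuition is the same.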
These audits aim to identify biases, errors, and other issues that might affect decision fairness. Internal audits by the bank's data science team, coupled with external audits by independent experts, can provide a robust mechanism for oversight and continuous improvement. The European Union's General Data Protection Regulation reinforces such practices: it requires transparency around automated decision-making and impact assessments for high-risk processing.

As the team continues to investigate, they consider the broader implications of accountability. Who should be responsible if the AI system makes an error or produces an undesirable outcome, such as denying a loan to a deserving applicant? This question of responsibility allocation is complex and multifaceted.

John believes that responsibility should be shared among the developers who design the algorithms, the bank that deploys the system, and the regulatory bodies overseeing financial practices. Each stakeholder has a role in ensuring that the system operates ethically and effectively. For instance, in the event of a self-driving car accident, liability might involve the car manufacturer, the software developers, and the car owner. Similarly, for FinTrust Bank, clear legal and ethical frameworks are needed to delineate responsibilities and actions when issues arise.
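One concrete check the fairness audits described above might run is a comparison of approval rates across applicant groups (demographic parity). This is a minimal sketch: the decision records are toy data, and the 0.8 cutoff is the common "four-fifths" rule of thumb, not a bank policy.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_ratio(rates):
    """Ratio of the lowest to the highest group approval rate."""
    return min(rates.values()) / max(rates.values())

# Toy audit sample: group label and whether the loan was approved.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = approval_rates(decisions)
ratio = parity_ratio(rates)
print(rates, f"parity ratio = {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("Audit flag: approval-rate disparity exceeds threshold")
```

A flag like this does not prove discrimination by itself; it tells the audit team where to dig deeper, which is exactly the oversight role the internal and external audits are meant to play.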
Furthermore, the team reflects on the ethical principles that should guide the AI system's operation; ensuring fairness is paramount. The team recalls a study revealing that facial recognition systems can exhibit biases based on race and gender, leading to misidentification and unfair treatment. Although FinTrust's AI system is different, the principle remains the same: biases in data or algorithms must be addressed to ensure equitable decisions.

To mitigate biases, the team decides to use diverse training data and implement algorithmic adjustments. Additionally, the bank adopts organizational policies for regular bias assessments and interventions. These measures are essential to uphold fairness, justice, and respect for individual rights in the bank's decision-making processes.

While technical and organizational measures are crucial, the team also recognizes the importance of legal and regulatory frameworks. Governments and regulatory bodies can set standards for the ethical use of automated decision-making systems. For example, the proposed Algorithmic Accountability Act in the United States would require companies to conduct impact assessments and address identified risks in their AI systems. FinTrust Bank commits to complying with such regulations to ensure ethical governance.

Finally, the team acknowledges the need for public engagement and education.
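The "algorithmic adjustments" the team considers could take the form of reweighing the training data so that group membership and loan outcome look statistically independent, an idea from the pre-processing family of bias-mitigation techniques. This is a sketch on toy data, not a description of FinTrust's pipeline.

```python
from collections import Counter

def reweigh(examples):
    """examples: list of (group, label) pairs.

    Returns a weight per observed (group, label) combination:
    expected frequency under independence / observed frequency.
    Under-represented combinations get weights above 1.
    """
    n = len(examples)
    groups = Counter(g for g, _ in examples)
    labels = Counter(y for _, y in examples)
    pairs = Counter(examples)
    return {(g, y): (groups[g] * labels[y] / n) / pairs[(g, y)]
            for (g, y) in pairs}

# Toy training set: (group, approved-label) pairs.
data = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
weights = reweigh(data)
for pair, w in sorted(weights.items()):
    print(pair, round(w, 2))
```

Training the model with these instance weights down-weights over-represented combinations (such as approvals for group A here) and up-weights the rest, which is one practical way to keep historical skew in the data from being learned as policy.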
Engaging with customers about the capabilities, limitations, and ethical implications of the AI system can build trust and align the system with societal values. Educational initiatives can empower customers like Emma to understand and critically assess AI-driven decisions, fostering informed and active participation.

In conclusion, John and his team at FinTrust Bank embark on a comprehensive strategy to address accountability in their AI-powered loan approval system. They enhance transparency by making the system's workings more understandable and accessible. They establish robust auditing mechanisms to identify and correct biases and errors. They clarify responsibility allocation among developers, the bank, and regulators. They uphold ethical principles by addressing biases and ensuring fairness. They comply with legal and regulatory standards, and they engage with the public to build trust and promote informed participation.

By addressing these various aspects, FinTrust Bank ensures that its automated decision-making system operates transparently, fairly, and justifiably. Emma receives a clear explanation of her loan application denial, restoring her trust in the bank. The bank's commitment to accountability not only upholds ethical standards but also enhances its credibility and customer satisfaction.