Case study: mitigating AI bias, hallucinations, and errors. FinTrust's ethical and technical approach.

Artificial intelligence systems, despite their transformative potential, are vulnerable to failures such as bias, hallucinations, and errors. These issues are particularly pertinent for AI governance professionals, who must ensure the ethical and accountable deployment of AI technologies.

Consider the case of AI implementation at a multinational bank, FinTrust, which adopted AI algorithms to streamline its loan approval processes. The AI system, designed to evaluate creditworthiness based on historical data and socioeconomic indicators, initially promised to increase efficiency and reduce human error. However, as the system was deployed, a disturbing trend emerged: a significant disparity in loan approvals among different demographic groups. African American and Hispanic applicants were systematically denied loans at higher rates than their Caucasian counterparts. How can AI governance professionals identify and address the root causes of such biases in AI systems?

The bias in FinTrust's AI system stemmed from the training data, which was based on historical loan approval records that reflected decades of discriminatory lending practices. By learning from biased data, the AI system perpetuated these biases, leading to unfair outcomes. To rectify this, FinTrust's data scientists embarked on a thorough review of the dataset, identifying key features that contributed to bias. They augmented the dataset with more representative samples of all demographic groups and incorporated fairness-aware algorithms designed to mitigate bias during the learning process. What specific techniques can be used to ensure datasets are balanced and representative?
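One widely used balancing technique is reweighing, due to Kamiran and Calders: each training example receives a weight chosen so that, in the weighted data, group membership and the loan outcome are statistically independent, which stops the model from simply reproducing historical approval rates. The sketch below is a minimal Python illustration with invented column names (income, group, approved); it is not FinTrust's actual pipeline.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical training data; columns and values are invented for illustration.
df = pd.DataFrame({
    "income":   [40, 85, 32, 70, 55, 90, 28, 60],
    "group":    ["A", "B", "A", "B", "A", "B", "A", "B"],  # protected attribute
    "approved": [0, 1, 0, 1, 1, 1, 0, 1],                  # historical label
})

# Reweighing: weight = P(group) * P(label) / P(group, label), so that
# group and label become independent in the weighted dataset.
p_group = df["group"].value_counts(normalize=True)
p_label = df["approved"].value_counts(normalize=True)
p_joint = df.groupby(["group", "approved"]).size() / len(df)

weights = df.apply(
    lambda r: p_group[r["group"]] * p_label[r["approved"]]
              / p_joint[(r["group"], r["approved"])],
    axis=1,
)

# Train on the reweighted data; scikit-learn accepts per-sample weights.
model = LogisticRegression()
model.fit(df[["income"]], df["approved"], sample_weight=weights)
```

Complementary techniques include stratified resampling of under-represented groups and in-training fairness constraints, such as the reductions approach implemented in the open-source Fairlearn library.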
To further enhance fairness, FinTrust implemented regular audits of the AI system, bringing in external auditors to review the model's decisions and provide feedback. These audits helped identify any remaining biases and ensured continuous improvement of the system. Moreover, the bank established an ethical AI committee, comprising technologists, ethicists, and community representatives, to oversee the AI deployment and ensure alignment with societal values.

Another challenge emerged in the use of AI for customer service at FinTrust. The bank deployed a chatbot powered by an advanced NLP model to handle customer inquiries. Initially, the chatbot performed well, but over time it began providing inaccurate or irrelevant information. For example, it fabricated details about loan products that did not exist and gave incorrect advice on financial planning. How can AI governance professionals prevent AI systems from generating hallucinations and ensure the accuracy of the information they provide?

The chatbot's hallucinations arose from the model's training objective, which prioritized fluency and coherence over factual accuracy. To address this, FinTrust integrated factual verification mechanisms into the chatbot's architecture. These mechanisms cross-checked the information generated by the chatbot against a verified knowledge base, ensuring that responses were grounded in factual data. Additionally, the bank employed a human-in-the-loop approach, where human agents reviewed and validated the chatbot's responses before they were delivered to customers. This hybrid approach significantly reduced the incidence of hallucinations and enhanced the reliability of the chatbot.
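As a rough illustration of how such a verification gate might sit between the language model and the customer, here is a minimal sketch. The knowledge base, product names, and escalation rule are all invented for the example; a production system would use entity extraction and claim-level fact checking rather than substring matching.

```python
from dataclasses import dataclass

# Hypothetical verified knowledge base: product name -> approved description.
# In a real system this would be a curated, versioned datastore.
KNOWLEDGE_BASE = {
    "FlexSaver Loan": "Variable-rate personal loan, 3-7 year term.",
    "HomeStart Mortgage": "Fixed-rate first-home mortgage product.",
}

@dataclass
class Verdict:
    deliver: bool  # safe to send directly to the customer
    reason: str

def verify_reply(draft: str) -> Verdict:
    """Gate a chatbot draft: pass it through only if every product it
    mentions exists in the verified knowledge base; otherwise escalate
    to a human agent (the human-in-the-loop step)."""
    mentioned = [name for name in KNOWLEDGE_BASE if name in draft]
    # Crude heuristic for unverifiable product-like claims.
    suspicious = "loan" in draft.lower() and not mentioned
    if suspicious:
        return Verdict(False, "Unrecognized product claim: route to human review")
    return Verdict(True, f"Grounded in knowledge base: {mentioned or 'no product claims'}")

print(verify_reply("Our FlexSaver Loan offers flexible terms."))
print(verify_reply("Try our Platinum Infinity Loan with 0% interest forever!"))
```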
The use of AI in healthcare at FinTrust also posed significant challenges. The bank developed an AI diagnostic tool to assist medical professionals in interpreting medical images, aiming to improve diagnostic accuracy and reduce the workload on radiologists. However, during a pilot phase, the tool misinterpreted several images, leading to incorrect diagnoses and treatment recommendations. These errors, if left unaddressed, could have had severe consequences for patient health. What steps can be taken to improve the accuracy and interpretability of AI diagnostic tools in healthcare?

The errors in the AI diagnostic tool were partly due to the complexity and opacity of the deep learning models used, which operated as black boxes with limited interpretability. To enhance transparency, FinTrust's data scientists employed model-agnostic interpretability methods such as LIME and SHAP, which provided insights into the decision-making process of the AI system (a short SHAP sketch appears after the autonomous-vehicle discussion below). These methods allowed medical professionals to understand the rationale behind the AI's predictions and identify potential errors. Moreover, the bank developed inherently interpretable models that offered a balance between complexity and transparency, ensuring that the diagnostic tool's decisions were more understandable and trustworthy.

The deployment of AI in dynamic and unpredictable environments, such as autonomous vehicles, presents additional risks. FinTrust had invested in an AI-powered fleet of autonomous delivery vehicles to enhance logistics efficiency. However, during a trial run, one of the vehicles failed to recognize an obstacle on the road, resulting in a minor accident. This incident underscored the potential dangers of AI errors in safety-critical applications. What measures can be implemented to ensure the robust performance of AI systems in such contexts?

To address the risks associated with autonomous vehicles, FinTrust adopted a rigorous testing and validation framework. This framework involved extensive simulations and real-world tests under various conditions to assess the AI system's performance and identify potential failure points. Continuous monitoring was also implemented to detect anomalies and ensure timely intervention. Additionally, the bank collaborated with regulatory bodies and industry experts to develop safety standards and best practices for autonomous vehicle deployment.
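A minimal sketch of what such continuous monitoring could look like for the perception stack is shown below. The confidence floor and anomaly threshold are assumed values, and a real deployment would watch many more signals than a single detector's confidence.

```python
import statistics
from collections import deque

class PerceptionMonitor:
    """Minimal runtime monitor: watch the obstacle detector's confidence
    stream and trigger a safe fallback when it degrades anomalously."""

    def __init__(self, window: int = 20, floor: float = 0.6, z_limit: float = 3.0):
        self.history = deque(maxlen=window)  # recent confidence scores
        self.floor = floor                   # hard minimum acceptable confidence
        self.z_limit = z_limit               # anomaly threshold in std deviations

    def check(self, confidence: float) -> str:
        if confidence < self.floor:
            return "FALLBACK: hand control to safe-stop behaviour"
        if len(self.history) >= 5:
            mean = statistics.mean(self.history)
            stdev = statistics.stdev(self.history) or 1e-9
            if (mean - confidence) / stdev > self.z_limit:
                return "ALERT: anomalous confidence drop, flag for review"
        self.history.append(confidence)
        return "OK"

monitor = PerceptionMonitor()
for c in [0.92, 0.94, 0.91, 0.93, 0.95, 0.90, 0.55]:
    print(c, monitor.check(c))
```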
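Returning briefly to the diagnostic tool, here is the promised SHAP sketch. It assumes the open-source shap package and substitutes a public tabular medical dataset for FinTrust's proprietary imaging data, so it illustrates the attribution idea rather than the actual system.

```python
# pip install shap scikit-learn  (assumed dependencies for this sketch)
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Stand-in for the diagnostic model: a classifier on a public medical
# dataset (tabular data keeps the sketch small and self-contained).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A model-agnostic SHAP explainer attributes each prediction to
# per-feature contributions, so a clinician can see why a case was
# flagged instead of trusting a black box.
explainer = shap.Explainer(model.predict, X.iloc[:100])
explanation = explainer(X.iloc[:1])

top3 = sorted(zip(X.columns, np.abs(explanation.values[0])),
              key=lambda t: -t[1])[:3]
print("Features driving this prediction:", top3)
```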
The ethical implications of AI failures extend beyond technical considerations to broader societal impacts. Bias, hallucinations, and errors in AI systems can exacerbate inequalities, undermine trust, and erode public confidence in technology. How can AI governance professionals integrate ethical principles with technical solutions to ensure responsible AI deployment?

At FinTrust, a comprehensive approach to AI governance was adopted, integrating ethical principles with technical solutions. The bank established clear accountability mechanisms, including transparent documentation of AI development processes and rigorous auditing practices. Channels were created for stakeholders to report grievances and provide feedback, ensuring that any issues were promptly addressed. Moreover, interdisciplinary collaboration among technologists, ethicists, legal experts, and affected communities was fostered to ensure that AI systems were designed and deployed in ways that aligned with societal values and norms.

Understanding AI failures is crucial for developing ethical and accountable AI systems. Addressing these challenges requires a multifaceted approach that combines technical innovations with robust governance frameworks. By prioritizing fairness, accuracy, transparency, and collaboration, AI governance professionals can help mitigate the risks associated with AI failures and harness the technology's potential for positive societal impact.

The case of FinTrust illustrates the complexities and challenges associated with AI deployment across various sectors. The bank's commitment to addressing bias involved curating balanced datasets and implementing fairness-aware algorithms, ensuring that the AI system's decisions were equitable. By integrating factual verification mechanisms and human-in-the-loop approaches, FinTrust minimized the risk of hallucinations in its chatbot, enhancing the reliability of customer interactions. In the healthcare domain, model-agnostic interpretability methods and inherently interpretable models improved the accuracy and transparency of the AI diagnostic tools, building trust among medical professionals.

The incident with the autonomous delivery vehicle highlighted the importance of rigorous testing, validation, and continuous monitoring to ensure the robust performance of AI systems in dynamic environments. The bank's collaboration with regulatory bodies and industry experts underscored the need for safety standards and best practices in deploying autonomous technologies.

FinTrust's comprehensive approach to AI governance, integrating ethical principles with technical solutions, exemplified how organisations can navigate the ethical implications of AI failures. Establishing clear accountability mechanisms, fostering interdisciplinary collaboration, and engaging with affected communities ensured that AI systems were aligned with societal values and norms.

In conclusion, the case of FinTrust demonstrates the importance of understanding and addressing AI failures, namely bias, hallucinations, and errors, in order to develop ethical and accountable AI systems. By combining technical innovations with robust governance frameworks, organisations can mitigate risks and leverage AI's potential for positive societal impact.