Case study. Mitigating AI bias: DiverseHire's journey through fairness, legal compliance, and ethical AI.

Bias in AI and non-discrimination laws intersect in complex ways, raising crucial questions about fairness, legality, and ethical responsibility. Consider the case of DiverseHire, a prominent tech company striving to create an inclusive workforce. The company invested significantly in an AI-driven recruitment platform, AIHire, aimed at streamlining the hiring process and ensuring a diverse pool of candidates. The hope was that AI would eliminate human biases. Yet unforeseen issues arose, presenting a practical example of how AI and non-discrimination laws interact.

DiverseHire's human resources team noticed a troubling pattern. Despite the AI system's promise, the demographic diversity of hired candidates did not reflect the applicant pool's diversity. The system seemed to favor candidates from specific backgrounds, particularly white males. This observation led to an internal investigation of how the AI system was trained and how it functioned.

The first question arising from this situation is: how could the dataset used to train AIHire contribute to biased hiring outcomes? The investigation revealed that the historical data fed into AIHire mostly consisted of resumes from previous successful hires, who predominantly belonged to specific demographic groups.
This dataset inherently carried the biases of past hiring practices, perpetuating the very bias AIHire was meant to eliminate. Historical biases in data can be a significant source of discrimination in AI systems, as these models learn from past trends and may replicate them.

Given these findings, DiverseHire sought to understand its legal obligations. The Civil Rights Act of 1964, particularly Title VII, prohibits employment discrimination based on race, color, religion, sex, or national origin. This law applies to AI in the same way it applies to human decision makers. The company's legal team reviewed EEOC guidelines emphasizing the need for transparency and fairness in AI-driven decisions. They recognized the need to ask: what steps should DiverseHire take to ensure AIHire complies with Title VII and avoids perpetuating discriminatory practices?

First, DiverseHire implemented a robust governance framework for AIHire. This included regular audits and assessments to evaluate the algorithm's fairness and identify biases in training data. By conducting these audits, the company aimed to uncover and rectify any factors leading to biased outcomes. For instance, more diverse and representative datasets were curated to retrain AIHire.
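A minimal sketch of what one step of such an audit might look like, using the EEOC's informal "four-fifths" rule of thumb: a group's selection rate should be at least 80% of the most-favored group's rate. The function names and the toy data below are illustrative assumptions, not part of the case study.

```python
from collections import Counter

def selection_rates(records):
    """Compute per-group selection rates from (group, hired) pairs."""
    hired = Counter()
    total = Counter()
    for group, was_hired in records:
        total[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / total[g] for g in total}

def four_fifths_check(records, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the
    highest group's rate (the EEOC 'four-fifths' rule of thumb)."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: (r, r / best >= threshold) for g, r in rates.items()}

# Toy audit data (hypothetical): (demographic group, hired?)
records = [("A", True)] * 40 + [("A", False)] * 60 \
        + [("B", True)] * 20 + [("B", False)] * 80
result = four_fifths_check(records)
# Group A's rate is 0.40, group B's is 0.20: a ratio of 0.5, well
# below 0.8, so the check flags group B for investigation.
```

A check like this is only a screening heuristic, not a legal determination; it points auditors at groups whose outcomes deserve a closer look.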
This process required interdisciplinary collaboration involving computer scientists, ethicists, and social scientists to ensure a multifaceted approach to fairness.

One key aspect of ensuring compliance is transparency. DiverseHire needed to make AIHire's decision-making process more transparent to candidates and the public. This involved providing clear explanations of how the AI system evaluates candidates and what criteria it uses. Transparency helps build trust and allows for accountability, ensuring that the system operates within legal and ethical boundaries.

An essential question that followed was: how can DiverseHire's audit process effectively identify and mitigate biases in AIHire? Analyzing the AI system's decisions through various metrics and comparing them to human-made decisions helped identify discrepancies. For example, the company found that AIHire rated resumes with certain educational backgrounds higher, even when those backgrounds were not relevant to the job requirements. This insight allowed them to adjust the system's weighting mechanisms to focus on skills and experiences pertinent to the job.

Moreover, DiverseHire looked to external oversight and regulatory compliance.
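The comparison of AI decisions against human reviewers' decisions could be sketched as a per-group rate gap. Everything here, the function names and the small audit slice, is a hypothetical illustration of the technique, not DiverseHire's actual tooling.

```python
def rate(decisions):
    """Fraction of positive decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def discrepancy_by_group(ai, human):
    """For each group, the AI's positive-decision rate minus the human
    reviewers' rate on comparable candidates; large gaps flag features
    or weightings worth auditing."""
    groups = set(ai) | set(human)
    return {g: rate(ai[g]) - rate(human[g]) for g in groups}

# Hypothetical audit slice: 1 = advanced to interview, 0 = rejected
ai_decisions    = {"A": [1, 1, 1, 0], "B": [1, 0, 0, 0]}
human_decisions = {"A": [1, 1, 0, 0], "B": [1, 1, 0, 0]}
gaps = discrepancy_by_group(ai_decisions, human_decisions)
# A: 0.75 - 0.50 = +0.25; B: 0.25 - 0.50 = -0.25. The AI favors
# group A relative to human reviewers, so its features merit review.
```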
DiverseHire adhered to guidelines similar to those outlined by the GDPR, which requires organizations to provide individuals with information about how their data is used and to implement measures to prevent discriminatory outcomes. By adopting these principles, DiverseHire aimed to ensure that AIHire's operation was not only fair but also legally sound.

Collaboration with external experts also proved beneficial. DiverseHire invited consultants specializing in ethics, law, and social sciences to review AIHire and offer recommendations. This interdisciplinary approach ensured that the AI system considered diverse perspectives and did not inadvertently disadvantage certain groups. For instance, sociologists provided insights into social structures and biases, while legal experts ensured compliance with non-discrimination laws.

A thought-provoking scenario emerged when DiverseHire expanded AIHire's application to loan approvals for its financial branch. This highlighted another critical question: what safeguards should be in place to prevent AI bias in financial decision making? The company recognized that financial data could also carry historical biases, leading to discriminatory lending practices.
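One common safeguard in lending is an equal-opportunity check: among applicants who are in fact creditworthy, approval rates should be similar across groups. The sketch below, with invented names and data, shows the shape of such a check under that assumption.

```python
def equal_opportunity_rates(records):
    """records: (group, creditworthy, approved) triples.
    Returns per-group approval rates among creditworthy applicants,
    a common 'equal opportunity' fairness metric for lending."""
    approved, qualified = {}, {}
    for group, creditworthy, was_approved in records:
        if creditworthy:
            qualified[group] = qualified.get(group, 0) + 1
            approved[group] = approved.get(group, 0) + int(was_approved)
    return {g: approved[g] / qualified[g] for g in qualified}

# Hypothetical audit data: equally creditworthy applicants from two groups.
records = ([("A", True, True)] * 9 + [("A", True, False)] * 1
         + [("B", True, True)] * 6 + [("B", True, False)] * 4)
rates = equal_opportunity_rates(records)
# A: 0.9, B: 0.6. A 30-point approval gap among equally creditworthy
# applicants would be exactly the kind of bias the audit should surface.
```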
By applying audit and transparency measures similar to those used in hiring, DiverseHire aimed to ensure that loan approvals were fair and unbiased. Addressing bias in financial AI required understanding the nuances of different demographic groups' financial behaviors and histories. This involved training the AI model on diverse datasets representing various financial backgrounds and ensuring that the system did not prioritize certain demographics over others. Analyzing the outcomes of loan approvals across different groups helped identify and rectify any biases.

To further illustrate the importance of addressing bias in AI, consider the case of facial recognition technology. DiverseHire explored using AI for security and attendance, but studies showed that facial recognition systems could be biased against people of color and women. These systems were significantly less accurate in identifying individuals from these groups compared to white males, leading to false identifications and discriminatory practices. DiverseHire decided to halt the implementation of facial recognition technology until these biases could be effectively mitigated.
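The accuracy disparity that justified halting deployment can be made concrete with a per-group accuracy breakdown on a labeled evaluation set. The groups and numbers below are hypothetical stand-ins for the kind of gaps reported in published audits of commercial systems.

```python
def accuracy_by_group(results):
    """results: (group, correctly_identified) pairs from a labeled
    evaluation set. Returns per-group identification accuracy; large
    gaps between groups indicate disparate error rates."""
    correct, total = {}, {}
    for group, ok in results:
        total[group] = total.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + int(ok)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation of a face-recognition system on labeled photos.
results = ([("group X", True)] * 99 + [("group X", False)] * 1
         + [("group Y", True)] * 70 + [("group Y", False)] * 30)
acc = accuracy_by_group(results)
# 0.99 vs 0.70: an error-rate gap this large means far more false
# identifications for one group, which is what prompted the halt.
```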
In response to such findings, some jurisdictions implemented bans or strict regulations on facial recognition technology used by law enforcement and other entities, illustrating the importance of ongoing vigilance and regulatory oversight. DiverseHire adopted a similar stance, emphasizing that ethical considerations must guide technological deployment, especially in sensitive areas like security.

In conclusion, the case of DiverseHire demonstrates the intricate relationship between AI and non-discrimination laws. The company's proactive approach to identifying and mitigating biases in AIHire showcases the importance of regular audits, transparency, interdisciplinary collaboration, and adherence to legal standards. Addressing these challenges head-on allows organizations to harness AI's potential while promoting fairness and equality.

DiverseHire's experience provides valuable lessons for any organization integrating AI into decision-making processes. By continuously evaluating and refining AI systems, companies can ensure compliance with non-discrimination laws and uphold ethical principles. Ultimately, this contributes to creating a more inclusive and just society where technology serves as a tool for progress rather than perpetuating existing inequalities.