Case study on addressing AI bias: lessons from TechHire recruitment and predictive policing challenges.

Artificial intelligence has revolutionized various sectors, including health care, finance, and law enforcement, but it can also perpetuate discrimination and bias, often unintentionally.

A case in point is TechHire, a leading tech firm known for its innovative human resource management solutions. The company decided to automate its recruitment process with an AI-based recruiting tool named HireSmart. This decision was driven by the need to streamline the hiring process and enhance the selection of top talent. However, this innovative step brought unforeseen challenges.

HireSmart was developed by a diverse but relatively small team of data scientists and software engineers, who trained the model on a decade's worth of resumes and job applications from TechHire's historical records. The dataset predominantly included resumes from men, reflecting the traditionally male-dominated tech industry. The developers aimed to create an efficient model that could identify top talent without human biases. Despite their best efforts, the model soon began to display unintended bias against female candidates.

As the AI system was rolled out, it became apparent that male candidates were overwhelmingly recommended for technical roles, whereas female candidates were often sidelined. This discrepancy was alarming. What could have caused an AI system designed to be impartial to exhibit such gender bias? The issue lay within the training data: the historical resumes contained implicit biases favoring male candidates, which the AI model then learned and amplified. TechHire had inadvertently created a system that perpetuated existing gender biases rather than eliminating them.

This situation raises a critical question: how can organizations ensure that their AI models do not perpetuate historical biases? One essential measure is to diversify the training data; including resumes from a range of demographics and ensuring that the data is representative can help mitigate such biases. Additionally, employing techniques like reweighting and resampling during data preprocessing can address imbalance issues, as the sketch below illustrates.
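To make reweighting and resampling concrete, here is a minimal Python sketch. The dataframe, its column names, and the numbers are hypothetical, and inverse-frequency weights plus oversampling are just one reasonable way to implement these two preprocessing steps.

```python
import pandas as pd

# Hypothetical applicant data; a real pipeline would use TechHire's
# historical resume features rather than this toy table.
applicants = pd.DataFrame({
    "gender": ["male"] * 8 + ["female"] * 2,
    "years_experience": [5, 3, 7, 2, 6, 4, 8, 3, 5, 6],
    "hired": [1, 0, 1, 0, 1, 0, 1, 0, 1, 1],
})

# Reweighting: weight each row inversely to its group's frequency,
# so underrepresented groups count as much as overrepresented ones.
group_freq = applicants["gender"].value_counts(normalize=True)
applicants["sample_weight"] = applicants["gender"].map(lambda g: 1.0 / group_freq[g])

# Resampling: oversample the minority group (with replacement)
# until both groups appear equally often in the training set.
counts = applicants["gender"].value_counts()
minority = counts.idxmin()
extra = applicants[applicants["gender"] == minority].sample(
    counts.max() - counts.min(), replace=True, random_state=0
)
balanced = pd.concat([applicants, extra], ignore_index=True)

print(balanced["gender"].value_counts())  # now 8 male, 8 female
```

Reweighting keeps every original record and hands the weights to the learner (most scikit-learn estimators accept them through fit's sample_weight argument), while resampling changes the training distribution itself; which is preferable depends on the model and on how severe the imbalance is.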
Another challenge emerged when the AI system began to perpetuate societal stereotypes through word embeddings. For instance, job titles and descriptions were automatically tailored in a manner that reinforced traditional gender roles. Words like "leader" and "analytical" frequently appeared in the context of male candidates, whereas "supportive" and "nurturing" were more common for female candidates.

How do societal stereotypes make their way into AI models, and what can be done to counteract this? The issue stems from the textual data used. Word embeddings capture associations from large text corpora, and if those texts are imbued with societal biases, the model will reflect them. To counteract this, developers can use techniques such as debiasing word embeddings to remove gendered associations before they influence the model's decisions. Fairness-aware algorithms, designed to explicitly account for biases, can be programmed to ensure balanced outcomes, preventing the perpetuation of stereotypes.
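The core of embedding debiasing can be sketched in a few lines: estimate a gender direction from definitional word pairs, then project it out of words that should be neutral, which is the "neutralize" step of hard debiasing in the spirit of Bolukbasi et al. The three-dimensional vectors below are invented for illustration; a real system would load pretrained embeddings and also apply an equalize step over a curated word list.

```python
import numpy as np

# Toy embeddings; real pipelines would load word2vec or GloVe vectors
# with hundreds of dimensions.
emb = {
    "he":         np.array([ 1.0, 0.2, 0.3]),
    "she":        np.array([-1.0, 0.2, 0.3]),
    "leader":     np.array([ 0.4, 0.9, 0.1]),
    "supportive": np.array([-0.4, 0.1, 0.9]),
}

# Estimate the gender direction from a definitional pair.
gender_dir = emb["he"] - emb["she"]
gender_dir /= np.linalg.norm(gender_dir)

def neutralize(vec, direction):
    """Remove the component of vec that lies along the bias direction."""
    return vec - np.dot(vec, direction) * direction

# Neutralize words that should carry no gender signal in job ads.
for word in ("leader", "supportive"):
    emb[word] = neutralize(emb[word], gender_dir)
    # The debiased vector is now orthogonal to the gender direction.
    print(word, round(float(np.dot(emb[word], gender_dir)), 6))  # 0.0
```

Later research has shown that this kind of projection can mask rather than fully remove bias, so it is best treated as one mitigation among several, combined with the fairness-aware training and auditing discussed here, rather than as a complete fix.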
In a separate incident, TechHire faced criticism over the facial recognition system it used for employee attendance. The AI system showed higher error rates when identifying employees with darker skin tones compared to those with lighter skin tones. This issue was particularly concerning, as it led to discrepancies in attendance records and employee dissatisfaction.

How can developers ensure that their AI systems are equitable across different demographic groups? A key factor here is the diversity of the training data. If the dataset predominantly features lighter skin tones, the model's performance will naturally be skewed. TechHire addressed this by expanding its dataset to include a wide range of skin tones and conducting thorough evaluations to ensure fairness across demographics. Regular audits and impact assessments by independent third parties can also help ensure that AI systems perform equitably.

Predictive policing is another sector where AI bias can have significant societal impacts. Consider the case of Metro City, a sprawling urban area that turned to predictive policing to combat rising crime rates. The AI system, designed to forecast crime hotspots, relied heavily on historical crime data. Unfortunately, this data was biased, reflecting a history of overpolicing in minority neighborhoods. Consequently, the AI system disproportionately targeted these communities, exacerbating existing social inequalities.

This poses a fundamental question: how can bias in predictive policing algorithms be mitigated to ensure fair treatment for all communities? A multifaceted approach is essential here, starting with the collection of diverse and representative data. Incorporating fairness constraints and adversarial debiasing into the algorithm can also help ensure balanced predictions. Continuous monitoring and updating of datasets are crucial to capture shifts in societal norms and behaviors.

Credit scoring systems are another area where AI bias can lead to unequal access to financial services. A case study involving Fair Finance, a leading financial institution, revealed that its AI-driven credit scoring system systematically disadvantaged applicants from certain demographic groups. This was due to the training data reflecting historical lending biases, which the AI system then learned and replicated. How can financial institutions develop fair AI systems for credit scoring? One approach involves the use of fairness-aware algorithms and the implementation of fairness constraints to ensure that no particular group is disproportionately harmed or benefited. Furthermore, financial institutions should establish robust evaluation metrics to assess fairness across different demographic contexts, along the lines of the sketch below.
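As one illustration of such evaluation metrics, the sketch below computes two common group-fairness measures, the demographic parity difference and the equal opportunity difference, on hypothetical credit decisions. The data, the group labels, and the 0.1 tolerance are invented for this example; the same disaggregated-rate idea applies to the facial recognition case above by comparing error rates across skin-tone groups.

```python
import numpy as np

# Hypothetical audit data: demographic group, true repayment outcome,
# and the model's approval decision for ten applicants.
group    = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])
repaid   = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
approved = np.array([1, 1, 0, 1, 1, 1, 0, 0, 0, 0])

def approval_rate(g):
    """P(approved | group = g): the basis of demographic parity."""
    return approved[group == g].mean()

def true_positive_rate(g):
    """P(approved | group = g, would repay): the basis of equal opportunity."""
    mask = (group == g) & (repaid == 1)
    return approved[mask].mean()

dp_gap = abs(approval_rate("a") - approval_rate("b"))
eo_gap = abs(true_positive_rate("a") - true_positive_rate("b"))

print(f"demographic parity difference: {dp_gap:.2f}")  # 0.60
print(f"equal opportunity difference:  {eo_gap:.2f}")  # 0.67

# A deployment gate might require both gaps to stay below a chosen
# tolerance before a new scoring model is allowed to ship.
TOLERANCE = 0.1
print("within tolerance:", dp_gap < TOLERANCE and eo_gap < TOLERANCE)
```

The two metrics can disagree with each other, so institutions typically track several at once and decide which matters most for the product at hand.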
Transparency and accountability are pivotal in addressing AI bias. TechHire and Fair Finance both adopted documentation practices such as model cards and datasheets for datasets. These documents detailed the data sources, methodologies, and potential biases, providing transparency into the AI systems' operation. Establishing regular audits and independent impact assessments further ensured accountability and adherence to ethical standards.

Stakeholder engagement is another critical component. Involving a diverse range of stakeholders, especially those from marginalized communities, in the development and deployment of AI systems can provide valuable insights into potential biases and their impacts. This collaborative approach can help design and implement AI systems that are inclusive and equitable. How can organizations ensure effective stakeholder engagement? Creating forums and working groups that include representatives from various communities, and fostering open dialogue, can lead to more inclusive AI development. This ensures that the perspectives and concerns of all stakeholders are considered, leading to fairer outcomes.

Education and training are essential for promoting awareness and understanding of AI bias. TechHire implemented professional development programs focused on ethics and fairness in AI. These programs equipped developers with the knowledge to identify and address biases in their work. Policymakers were also engaged to understand the potential risks and benefits of AI, enabling them to craft effective regulations that promote fairness and accountability.

Finally, fostering a culture of ethical AI development within organizations is crucial. TechHire cultivated an environment where ethical considerations were prioritized throughout the AI development lifecycle. Encouraging a culture of ethical reflection and discussion, supported by organizational policies and incentives, was key to ensuring that AI systems were developed and deployed responsibly.

In conclusion, addressing discrimination and bias in AI systems requires a comprehensive, multifaceted approach. Ensuring the diversity and representativeness of training data, employing fairness-aware algorithms, and establishing transparency and accountability are critical steps. Stakeholder engagement, education, and fostering a culture of ethical AI development are also essential. By adopting these measures, organizations like TechHire and Fair Finance can harness the power of AI to create more just and equitable systems.