Lesson: Group Harms, Discrimination, and Bias in AI Systems.

Artificial intelligence systems have increasingly become integral to various aspects of society, from health care and law enforcement to finance and advertising. However, with this growing reliance on AI, there is an urgent need to address the potential harms these systems can inflict, particularly in terms of discrimination and bias. Discrimination and bias in AI systems manifest when these technologies perpetuate or even exacerbate existing societal inequalities, often unintentionally, due to the data they are trained on or the algorithms that govern their operations.

One of the primary sources of bias in AI systems is the data used to train them. AI systems, particularly those based on machine learning, rely on vast amounts of data to learn and make decisions. If the training data contains biases, the AI system will likely replicate and even amplify them. For instance, if an AI system is trained on a data set that reflects historical hiring practices favoring certain demographics over others, the system may recommend job applicants in a biased manner, thus reinforcing workplace discrimination. This phenomenon is not merely hypothetical: there have been real-world instances where AI systems have demonstrated biased behavior.
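Before any mitigation, this kind of historical bias can be made visible by a simple audit of the training data itself. The sketch below is a minimal illustration with an invented set of hiring records (the group names and numbers are assumptions, not data from any real case): it computes the selection rate per group, the quantity a model trained on these records would tend to reproduce.

```python
from collections import Counter

# Hypothetical historical hiring records as (group, hired) pairs.
# The data set is invented for illustration; a real audit would use
# an organization's actual applicant records.
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Return the fraction of applicants hired, per group."""
    totals, hired = Counter(), Counter()
    for group, was_hired in records:
        totals[group] += 1
        if was_hired:
            hired[group] += 1
    return {g: hired[g] / totals[g] for g in totals}

rates = selection_rates(records)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}
# A model trained to imitate these decisions would inherit this 3x gap.
```

A gap like this in the raw data does not by itself prove wrongful discrimination, but it flags exactly the kind of historical pattern a learned ranking or screening model will pick up unless it is addressed.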
In 2018, Amazon scrapped an AI recruiting tool after discovering it was biased against women: it had been trained on resumes submitted over a ten-year period, predominantly from men.

Another critical issue is that AI systems can inadvertently learn and perpetuate societal stereotypes. Word embeddings, a common technique in natural language processing, have been shown to capture and reflect gender and racial biases present in the training data. For example, Bolukbasi et al. demonstrated that word embeddings trained on a large corpus of text associated the word "man" with "computer programmer" and "woman" with "homemaker". This kind of bias in AI can have far-reaching implications, influencing everything from language translation services to search engine suggestions.

Algorithmic bias can also arise from the design and implementation of AI systems. Developers' subjective choices, whether in selecting training data, defining objectives, or designing evaluation metrics, can introduce biases. For instance, facial recognition technologies have been found to have higher error rates for individuals with darker skin tones compared to those with lighter skin tones. This disparity is often due to a lack of diversity in the training datasets and the developers' oversight in ensuring the system's fairness across different demographic groups.
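The mechanism behind the Bolukbasi et al. finding is vector arithmetic over embeddings: analogies like "man : programmer :: woman : ?" are answered by adding and subtracting word vectors and taking the nearest neighbor. The sketch below uses tiny, hand-made 3-dimensional vectors (an assumption purely for illustration; the actual study used 300-dimensional word2vec embeddings trained on news text) to show how a gendered direction baked into the vectors surfaces in analogy queries.

```python
import math

# Toy embedding vectors, invented to illustrate the mechanism.
# The first coordinate plays the role of a "gender direction".
emb = {
    "man":        [1.0, 0.2, 0.8],
    "woman":      [-1.0, 0.2, 0.8],
    "programmer": [0.9, 0.8, 0.1],
    "homemaker":  [-0.9, 0.8, 0.1],
    "scientist":  [0.2, 0.9, 0.2],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def analogy(a, b, c, emb):
    """Solve a : b :: c : ?  via the vector offset b - a + c."""
    query = [bb - aa + cc for aa, bb, cc in zip(emb[a], emb[b], emb[c])]
    candidates = {w: v for w, v in emb.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(query, candidates[w]))

# The query travels along the gendered axis and lands on the stereotype.
print(analogy("woman", "man", "homemaker", emb))  # -> programmer
```

In real embeddings this same arithmetic, applied at scale, is how the gender and racial associations in the training corpus were measured; debiasing methods work by identifying and neutralizing such bias directions.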
Such biases can lead to serious consequences, such as misidentification and wrongful accusations, particularly in law enforcement contexts.

Moreover, the deployment of biased AI systems can have significant societal impacts, particularly on marginalized and vulnerable groups. For example, predictive policing algorithms, which aim to forecast criminal activity based on historical crime data, have been criticized for disproportionately targeting minority communities. These systems can perpetuate a cycle of overpolicing and criminalization of these communities, exacerbating existing social inequalities. Similarly, AI-driven credit scoring systems can systematically disadvantage certain demographic groups, leading to unequal access to financial services and perpetuating economic disparities.

Addressing discrimination and bias in AI systems requires a multifaceted approach. One critical step is ensuring the diversity and representativeness of training data. This involves not only collecting data from diverse sources, but also continuously monitoring and updating datasets to capture changes in societal norms and behaviors. Additionally, employing techniques such as reweighting, resampling, and debiasing during the data preprocessing stage can help mitigate bias.
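Of the preprocessing techniques just mentioned, reweighting is perhaps the simplest to sketch. One well-known formulation (in the style of Kamiran and Calders' reweighing method) assigns each example a weight of P(group) x P(label) / P(group, label), so that in the weighted data the group membership and the outcome label are statistically independent. The toy data below is an assumption for illustration only.

```python
from collections import Counter

def reweigh(groups, labels):
    """Reweighing in the style of Kamiran & Calders: weight each example
    by P(group) * P(label) / P(group, label), so that in the weighted
    data the group and the label become statistically independent."""
    n = len(groups)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data in which group "a" receives the positive label far more often.
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
weights = reweigh(groups, labels)
# Under-represented cells (a,0) and (b,1) get weight 2.0; over-represented
# cells (a,1) and (b,0) get weight 2/3. After reweighting, each group's
# weighted positive rate equals the overall base rate of 0.5.
```

These weights would then be passed to the training procedure (most learners accept per-example sample weights), which is what makes this a preprocessing-stage mitigation rather than a change to the algorithm itself.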
Another important measure is the implementation of fairness-aware algorithms, which are designed to explicitly account for and mitigate biases. These algorithms can be programmed to ensure that their decisions do not disproportionately benefit or harm any particular group. Techniques such as fairness constraints and adversarial debiasing can be utilized to achieve this goal. Furthermore, it is essential to establish clear guidelines and standards for fairness in AI, including the development of robust evaluation metrics that can assess the fairness of AI systems across different contexts and demographic groups.

Transparency and accountability are also crucial in addressing bias in AI. Developers and organizations must be transparent about the data sources, methodologies, and decision-making processes used in their AI systems. This transparency can be achieved through documentation practices such as model cards and datasheets for datasets, which provide detailed information about the data and model characteristics, including potential biases and limitations. Additionally, establishing accountability mechanisms, such as regular audits and impact assessments by independent third parties, can help ensure that AI systems are held to high ethical and fairness standards.
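The evaluation metrics mentioned above can be made concrete with two standard fairness measures: the demographic parity difference (gap in positive-prediction rates between groups) and the equal opportunity difference (gap in true-positive rates). The sketch below computes both from a model's predictions on a small, invented audit set; the numbers are assumptions for illustration.

```python
def positive_rate(preds, mask):
    """Fraction of positive predictions among the selected examples."""
    sel = [p for p, m in zip(preds, mask) if m]
    return sum(sel) / len(sel)

def demographic_parity_diff(preds, groups):
    """Gap in positive-prediction rate between groups (0 = parity)."""
    rates = {g: positive_rate(preds, [x == g for x in groups])
             for g in set(groups)}
    return max(rates.values()) - min(rates.values())

def equal_opportunity_diff(preds, labels, groups):
    """Gap in true-positive rate between groups, i.e. positive-prediction
    rate among examples whose true label is positive (0 = equal opportunity)."""
    rates = {g: positive_rate(preds, [x == g and y == 1
                                      for x, y in zip(groups, labels)])
             for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs on a small audit set.
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
labels = [1, 1, 0, 0, 1, 1, 0, 0]
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
print(demographic_parity_diff(preds, groups))         # a: 3/4, b: 1/4 -> 0.5
print(equal_opportunity_diff(preds, labels, groups))  # a: 2/2, b: 1/2 -> 0.5
```

A fairness-constrained training procedure would aim to drive such gaps toward zero, for example by penalizing the parity difference in the loss, while an audit would report them across all relevant demographic groups.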
Public and stakeholder engagement is another vital component in addressing AI bias. Involving a diverse range of stakeholders, including those from marginalized communities, in the development and deployment of AI systems can provide valuable insights into potential biases and their impacts. This collaborative approach can help ensure that AI systems are designed and implemented in a manner that is inclusive and equitable.

Education and training are also essential in promoting awareness and understanding of AI bias among developers, policymakers, and the general public. Incorporating ethics and fairness into AI curricula and professional development programs can equip developers with the knowledge and skills needed to identify and address biases in their work. Policymakers, on the other hand, need to be informed about the potential risks and benefits of AI in order to create effective regulations and policies that promote fairness and accountability.

Finally, fostering a culture of ethical AI development within organizations is crucial. This involves creating an environment where ethical considerations are prioritized and integrated into every stage of the AI development life cycle. Encouraging a culture of ethical reflection and discussion, supported by organizational policies and incentives, can help ensure that AI systems are developed and deployed responsibly.
In conclusion, while AI systems have the potential to bring about significant societal benefits, they also pose risks of discrimination and bias that must be carefully managed. Ensuring the fairness and equity of AI systems requires a comprehensive and multifaceted approach involving diverse and representative data, fairness-aware algorithms, transparency and accountability, stakeholder engagement, education and training, and a culture of ethical AI development. By addressing these challenges, we can harness the power of AI to create a more just and equitable society.