Read part I of this blog series: How AI will change the DEI landscape
The last few months have seen exciting developments in the world of AI. OpenAI’s launch of ChatGPT late last year set off a race to create the next AI-powered chatbot. Unsurprisingly, Microsoft’s Bing AI and Google’s Bard have entered the race to challenge OpenAI, with varying degrees of success. Bard’s botched launch demo cost Google dearly, while Bing made headlines for unsettling, seemingly self-aware conversations. With the news buzzing with funny stories of “chatbots gone rogue,” it’s easy to forget that AI is much more mundane than we may think.
Although generative AI (the type that can create new content on demand) is the newest iteration of machine learning, AI has long been a part of our everyday technology. TikTok’s algorithm is a famously contentious example. Advertising firms use AI to determine which ads perform better and capture attention longer. Recruitment firms use AI for resume screening and sourcing candidates. AI is everywhere, which raises the question: How will this impact us?
There are many concerns about the usage of AI, but most can be summarized into two buckets:
AI will be used to spread misinformation
It’s not hard to see how generative AI could be used to create false narratives online. The growing circulation of deepfakes (AI-generated images or videos) has made it harder to discern the legitimacy of content. On top of that, content can now circulate more quickly online, as AI can create posts, publish them, and even generate views for them.
AI will replicate human bias
The most common concern regarding AI lies in its training data. Researchers like Safiya Umoja Noble (author of Algorithms of Oppression) have sounded the alarm about biases programmed into AI software. If AI requires large amounts of data to learn, and humans curate the data it is given, then the biases those humans hold will be embedded in the resulting model. To untrain this bias, other humans must watch and flag traumatic content, repeatedly. OpenAI is attempting to do exactly that with ChatGPT, but on the backs of low-paid workers.
AI’s impact on DEI
It’s imperative to consider the impact of AI on diversity, equity, and inclusion regardless of your organization’s function. Furthermore, as AI evolves and becomes embedded in our everyday technologies, it is important to remain critical of its influence. Here are three ways to consider the impact of AI more generally:
No matter how convincing the chatbot may seem, AI cannot replicate human subjectivity. Context, tone, and culture are crucial to DEI work, which requires a subjective understanding of your environment. For example, it would be a bad idea for an organization to ask ChatGPT how to be better allies at work. Although the chatbot could surface examples from across the internet, it would not be able to address the unique concerns of allyship within that organization.
The proliferation of AI in specific industries may erode the value of the work done in them. Take graphic design: now that organizations can use DALL-E to generate images (albeit with some finesse), they may hire fewer graphic designers. ChatGPT’s demonstrated ability to produce technical or legal writing could make those jobs less valuable. Automated vehicles may likewise put trucking, delivery, and taxi jobs at risk. Even if AI doesn’t take jobs away outright, its use may depress the value of entry-level work.
The truth is, without government regulation, there is very little the average person can do to compete with AI. The use of art posted online to train models like DALL-E has already caused a stir in the art community. Some artists have filed a lawsuit against generative AI platforms, while others have called for a boycott of AI-generated art. Unfortunately, for artists and creators who are already marginalized, pushing back against AI platforms is a near-impossible task. Until there are government regulations protecting digital content, or any content posted online, AI will have free rein over the internet.
Charlotte Wray is a contributing writer at MT Consulting Group.