Bridging the Digital Divide: AI, Gender, and Security
By: Zoe Luce
Presentation Summary
The Student Consortium on Women, Peace, and Security (WPS) hosted a roundtable discussion on March 20, 2025, titled Bridging the Digital Divide with AI, focusing specifically on its impacts on gender and security. The discussion examined how AI systems reinforce societal biases that harm marginalized communities and considered how those communities, especially women, can be included in designing and implementing AI models.
Discussion
The discussion began by addressing how AI contributes to gender-based digital disparities and how those disparities raise cybersecurity, surveillance, and online safety concerns. Participants approached this question through the idea that you get out of AI what you put into it, emphasizing that creators and programmers exert significant control over how AI systems behave. One participant shared a story from her work testing an AI hiring system, describing how it filtered out resumes based on details an applicant might consider central to their story. She noted that the people building these hiring systems are often white men, so their assumptions about what matters in a candidate seep into the system. Participants also discussed how whoever creates an AI system can censor what it shares. One participant described her experience testing a Chinese AI model: when asked about certain historical events, it would write out an answer and then censor itself, blurring out parts of the response as dictated by its code. These examples painted a picture of AI models as another tool the majority can use to perpetuate racial and patriarchal hierarchies. By controlling the behavior and biases built into AI models, the majority can further undermine the security and safety of marginalized communities, reinforcing stereotypes and social inequalities.
The conversation then turned to strategies for ensuring that AI models are designed and implemented equitably, prompted by a question about the roles governments, tech companies, and civil society can play in addressing the gendered risks of AI. Participants began by arguing that AI needs to be legislated, with restrictions on who can access it and consequences for misuse. On companies specifically, participants argued that firms building AI products have a moral obligation to filter harmful information. Some tech companies push back, claiming such measures restrict free speech, but that argument tends to lose sight of protecting the very communities these companies serve. Tech companies must protect their users from AI systems and content that could harm them, especially since users do not always know how to protect themselves, much as a parent baby-proofs a house to keep a child away from items they do not yet know how to use safely. Another participant recalled that Twitter once had a feature that flagged potential misinformation by labeling posts with notes that some of the information might be false. This is one attempt a company has made, but across AI models more broadly it remains essential to address bias and legislate uses of AI to protect users and prevent gendered harms.
The last topic of the discussion focused on digital literacy and how students and professionals can work toward a more equitable digital future. One of the biggest issues participants raised was the tendency to treat AI output as fact. This affects every user of AI, especially people who are still learning. Many have grown to trust everything AI says, which directly degrades media literacy and the ability to research and trace where information comes from. Participants pointed out that because AI models rarely make their sources clear, it is easy for users to accept their output as factual and true. That attitude disregards the biases built into AI models discussed earlier and the fact that AI output is not an arbiter of truth. Different AI models will often give different answers to the same question, which shows that the information AI provides is not universal fact; responses are algorithmically generated from internet sources and shaped by the biases built into each model. As students and professionals aiming to combat this growing overreliance on and trust in AI, we need to plan for a future with AI by implementing guardrails before it becomes an uncontrollable force.
“A world with increased AI leads to a world with decreased critical thinking”