By Birgit Neu
Artificial intelligence (AI) continues to dominate the business landscape and has become a defining force shaping industries. While some of AI's promises have been tempered by economic and practical realities, new promises and fresh investments in AI infrastructure continue to be announced at a breathtaking pace.
As AI evolves, so too must our understanding of how it will impact LGBTQ+ people. Governments, regulators and academics are still at a relatively early stage of grasping how AI, including generative AI, will affect groups like the LGBTQ+ community. That understanding is fragmented and lags the pace at which AI technologies are advancing. There is no indication to date that the business world is much further along in addressing these crucial issues.
As Out Leadership and its member and partner organizations, we have an important role to play in times of rapid change like this, including rapid technology change, whether or not we have a tech background. We need to inform ourselves about the evolving issues and opportunities, and ask questions in our businesses to make sure that, collectively, we are putting the right practices in place: practices that not only protect LGBTQ+ people from the downsides, but also lead us to more positive change.
As AI becomes embedded in key processes for employees, customers and other stakeholders, it's essential to ensure that it doesn't exacerbate biases or exclude LGBTQ+ people. Failing to address the challenges AI poses could undo hard-fought LGBTQ+ progress. It's also critical to be deliberate in understanding and addressing the areas where AI can fast-track opportunities for LGBTQ+ inclusion. Given that over 20% of Gen Z identifies as LGBTQ+, it will only become more important over time for businesses to engage this crucial talent pool and potential customer base effectively.
So, what should businesses do next to ensure that the AI they use leads to fair outcomes for all, including the LGBTQ+ community?
Responsible AI
The risks of bias in AI have been on the radar for years, and many companies have introduced Responsible AI policies to help mitigate them. However, Responsible AI can mean different things depending on the organization, and in many cases these policies are written without engaging diverse communities like LGBTQ+ people in shaping their content. A policy may also have good intent, but how it is implemented can be another matter entirely.
Another challenge arises when AI is designed to take over processes traditionally handled by people. If inclusivity and bias checks weren't a formal part of those processes before, explicit effort is required to add those steps during AI design. Even when an organization consciously tries to build inclusivity into its AI systems, the lack of reliable demographic data on LGBTQ+ stakeholders means these considerations may not be adequately addressed, even in LGBTQ+-friendly countries.
Data Limitations
One of the biggest challenges in ensuring AI is fair for LGBTQ+ people is the scarcity and inconsistency of demographic data. In many countries, it is illegal or culturally unsafe to ask people about their LGBTQ+ identity. Even where it is legally permissible, many companies don't ask, leaving businesses without a full picture of their LGBTQ+ stakeholders. In the US and the UK, where the question can be asked, only 16% of Fortune 100 and 12% of FTSE 100 companies currently report publicly on their LGBTQ+ workforce data, as tracked by Windō.
In locations where LGBTQ+ data is collected, how it’s done varies widely due to a lack of consistent standards. Some organizations may simply ask people if they identify as “LGBTQ+”, while others may break the questions down into more specific categories by sexual orientation and gender identity. This creates inconsistencies in the data available for AI to analyze. Additionally, not all LGBTQ+ people may feel comfortable sharing their status, adding further complexity.
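To make that inconsistency concrete, consider what it takes to merge responses collected under two different question schemes into a single analyzable category. The following is a minimal, hypothetical sketch in Python; the category labels and mapping rules are illustrative assumptions, not a recommended standard, and the coarse merged category inevitably discards detail that one scheme captured and the other did not.

```python
# Hypothetical sketch: merging self-ID responses collected under two different
# survey schemes into one coarse category for aggregate analysis. The labels
# and mapping rules are illustrative assumptions, not a recommended standard.

# Scheme A asks a single question: "Do you identify as LGBTQ+?"
scheme_a_responses = ["Yes", "No", "Prefer not to say"]

# Scheme B asks separate sexual orientation and gender identity questions.
scheme_b_responses = [
    {"orientation": "Bisexual", "gender_identity": "Cisgender"},
    {"orientation": "Heterosexual", "gender_identity": "Transgender"},
    {"orientation": "Prefer not to say", "gender_identity": "Prefer not to say"},
]

def harmonize_scheme_a(answer: str) -> str:
    """Map a single-question response onto a coarse shared category."""
    return {"Yes": "lgbtq+", "No": "non-lgbtq+"}.get(answer, "undisclosed")

def harmonize_scheme_b(answer: dict) -> str:
    """Map a two-question response onto the same coarse category."""
    orientation = answer["orientation"]
    gender = answer["gender_identity"]
    if orientation == "Prefer not to say" and gender == "Prefer not to say":
        return "undisclosed"
    if orientation not in ("Heterosexual", "Prefer not to say") or gender == "Transgender":
        return "lgbtq+"
    return "non-lgbtq+"

merged = [harmonize_scheme_a(r) for r in scheme_a_responses]
merged += [harmonize_scheme_b(r) for r in scheme_b_responses]
print(merged)
# ['lgbtq+', 'non-lgbtq+', 'undisclosed', 'lgbtq+', 'lgbtq+', 'undisclosed']
```

Every mapping decision of this kind shapes what downstream AI systems can and cannot see about LGBTQ+ stakeholders.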
Without comprehensive data, AI models are often trained on datasets that do not fully represent LGBTQ+ people. This could lead to flawed or biased outcomes in areas such as employee sentiment analysis, recruitment, and marketing. Inaccurate representations in AI-generated content can also lead to the spread of misinformation or harmful stereotypes about LGBTQ+ people.
(For more information on LGBTQ+ self-identification activity in businesses, see Out Leadership’s Visibility Counts: Corporate Guidelines for LGBTQ+ Self-ID and global LGBTQ+ Board Diversity Guidelines for guidance at the board level).
Data Privacy Concerns
For businesses that are collecting LGBTQ+ demographic data, concerns about the maturity of data privacy and security practices remain. An Out Leadership survey found that 77% of companies cited data privacy as the biggest challenge in implementing LGBTQ+ self-identification campaigns. The risk of data exposure, whether through unintentional breaches or deliberate misuse, remains a significant issue. In the worst-case scenario, LGBTQ+ people could be outed, placing their safety at risk, particularly in countries where being LGBTQ+ is illegal, dangerous, or even punishable by death.
This risk becomes even more pronounced when such data is used in tracking, monitoring, or surveillance systems. LGBTQ+ people, already vulnerable to discrimination, may face additional risks if their data is misused or inadequately protected.
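One common safeguard when publishing aggregate statistics from self-ID data is small-count suppression: withholding any breakdown for a group below a minimum size, so that no individual can be singled out. The sketch below illustrates the idea; the threshold of 10 and the field name are assumptions for illustration, not a formal standard.

```python
from collections import Counter

# Hypothetical sketch: suppress small groups before publishing aggregate
# self-ID statistics so no individual can be singled out. The threshold of 10
# and the field name are illustrative assumptions, not a formal standard.
MIN_GROUP_SIZE = 10  # counts below this are withheld from any report

def safe_aggregate(records: list[dict], field: str) -> dict:
    """Count responses per category, suppressing any group under the threshold."""
    counts = Counter(record[field] for record in records)
    return {
        category: (count if count >= MIN_GROUP_SIZE else "suppressed")
        for category, count in counts.items()
    }

# Example: at one site, only 3 of 40 employees self-identified as LGBTQ+.
records = [{"self_id": "lgbtq+"}] * 3 + [{"self_id": "non-lgbtq+"}] * 37
print(safe_aggregate(records, "self_id"))
# {'lgbtq+': 'suppressed', 'non-lgbtq+': 37}
```

Note that suppression alone is not enough when totals are also published, since a withheld count can be recovered by subtraction; in practice, organizations layer on complementary suppression or noise-based techniques such as differential privacy.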
Ensuring Fair AI Outcomes for LGBTQ+ Communities
As businesses race to implement AI solutions in pursuit of expected productivity gains, cost savings or growth opportunities, many have yet to establish clear accountability for ensuring fair outcomes for diverse groups like LGBTQ+ people. This lack of oversight could have devastating consequences. As it stands, 52% of LGBTQ+ workers spend at least 30% of their time at work hiding or downplaying their sexual orientation and gender identity. If AI systems exacerbate existing biases, people may feel even more compelled to conceal their identities. This could affect everything from LGBTQ+ professional advancement to how well LGBTQ+ customer needs are met.
There is limited data globally about LGBTQ+ representation in the technology teams responsible for building and implementing AI, and little is known about how aware these broader teams are of the AI issues that might disproportionately impact LGBTQ+ communities. AI is already being implemented in areas like recruitment, talent identification and content generation, areas that can disadvantage individuals who do not fit conventional standards. Without recognition of where the challenges lie and proactive measures to get ahead of them, AI could already be negatively impacting LGBTQ+ individuals.
Addressing AI-Driven Inequities
One of the biggest challenges for LGBTQ+ individuals in an AI-driven world is that they may not even be aware when AI has contributed to an unfair outcome. Proving that AI was the cause of discrimination may be even more difficult.
The European Union's AI Act, which entered into force in August 2024, was the first comprehensive AI regulatory framework from a major jurisdiction and includes provisions promoting responsible AI development, but its protections are limited. Even there, unless an AI application is explicitly banned or classified as high-risk, it remains largely unregulated. More AI regulation is in the works around the world to address concerns about whether AI technologies will produce fair outcomes. In the meantime, of course, AI deployment continues.
Questions We Should Ask Now
As scrutiny increases around how businesses use AI and whether it achieves fair outcomes, we can build our understanding of where further effort may be required from an LGBTQ+ perspective by engaging colleagues on the current state of the following questions in our own workplaces:
Where is AI in use in my business?
Are relevant business stakeholders aware of where AI is being deployed, particularly in relation to LGBTQ+ and DEI strategies? For example, if inclusive marketing is a priority, how is AI shaping marketing efforts? If inclusive recruitment is key, what role is AI playing in recruitment systems and processes? What checks and balances are in place to determine the impact of AI on specific populations?
How is AI governance structured?
Clear AI governance is essential. Are LGBTQ+ and broader DEI concerns integrated into AI oversight, from design to complaints handling? Is there a transparent process for feedback and input from employees or other stakeholders who feel that AI may be having a particularly positive or negative impact on specific groups? How robust are efforts around data privacy and security related to AI tools and systems?
How do we check for LGBTQ+ bias in AI?
Organizations can work to understand the specific LGBTQ+ challenges in the training data they use and identify ways to improve dataset quality. They can also audit relevant AI systems for potential biases against different groups, including LGBTQ+ individuals.
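As a concrete illustration of what such an audit can look like, a common first-pass check compares selection rates across groups, in the spirit of the "four-fifths rule" used in US employment-discrimination analysis. The sketch below uses hypothetical screening counts; real audits require larger samples, statistical significance testing and intersectional breakdowns.

```python
# Minimal sketch of a first-pass bias audit on AI screening outcomes: compare
# each group's selection rate against the highest-rated group (the "disparate
# impact ratio"). The 0.8 threshold echoes the US four-fifths rule; all counts
# below are hypothetical.

# outcomes[group] = (number selected by the AI screen, number of applicants)
outcomes = {
    "lgbtq+": (12, 60),
    "non-lgbtq+": (90, 300),
    "undisclosed": (20, 80),
}

selection_rates = {
    group: selected / total for group, (selected, total) in outcomes.items()
}
reference_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    ratio = rate / reference_rate
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group:>12}: selection rate {rate:.2f}, ratio vs. highest {ratio:.2f} [{flag}]")
```

A flagged ratio is a prompt for investigation, not proof of discrimination; the value of the exercise is making potential disparities visible early, while they can still be addressed.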
What are our plans for AI-related upskilling, reskilling and pivoting?
As AI continues to reshape the workplace, ensuring that LGBTQ+ people are included in corporate upskilling, reskilling and pivoting programs is important to avoid a digital divide. Are LGBTQ+ employee networks involved in AI training to help improve diversity in the tech-skilled workforce? Are companies working with external LGBTQ+ community groups to support AI-related skill building too?
The Way Forward
AI brings new challenges and opportunities to businesses committed to LGBTQ+ inclusion. To navigate this landscape, companies have to understand and address the specific risks AI poses as well as the superpowers it can bring. By asking the right questions and taking proactive steps, businesses can be intentional about using AI to foster fair outcomes, close existing gaps, and make greater progress on their LGBTQ+ ambitions. AI-related decisions made today will shape the future for LGBTQ+ individuals and other marginalized groups. Now is the moment for all of us to take steps in our businesses to ensure that AI drives more inclusion, not exclusion, for the LGBTQ+ community.