industry whistleblowers, acting on their own independent integrity and agency, provide us with powerful insights into the ideological and political motivations behind decisions made within Big Tech.

this is where we start.

we list Whittaker at the top.
there are a few key reasons for this.

Meredith Whittaker came to global prominence as a dissenting voice and influential activist organiser within Google. as president of Signal, Whittaker today represents the only establishment-approved Big Tech organisation that has truly operationalised the ethical principles the industry claims to stand for. here is Whittaker talking about her time at Google in a 2023 Guardian interview.

i was running a research group looking at the social implications of AI. i [...] discussed these issues in ways that were counter to Google’s public messaging. i was an internal dissenter, an academic.

toward late 2017 i [learned] that there was a secret contract [known as Maven] between Google and the Department of Defense [DOD] to build AI systems for drone targeting. that for me was when my organising started, because i realised i had been presenting very clear arguments that people agreed with … but it didn’t really matter. this was not an issue of force of argument. this was an issue of: the ultimate goals of this company are profit and growth, and DOD contracts were always going to sort of trump these intellectual and moral considerations.

i wrote the Maven letter [which gathered 3,000 employee signatures] and we got the Maven contract cancelled.

the walkout was a big spectacle. there was a news story about Andy Rubin getting a $90m (£72m) payout after an accusation of sexual misconduct [which he denies]. what came to a head was deep concerns about the moral and ethical direction of Google’s business practices and an understanding that those moral and ethical lapses were also reflected in the workplace culture.

how did i feel about that? i am very happy. i did not stand by and let my integrity get eaten away by making excuses for being complicit.

after leaving Google, Whittaker joined Signal as president and continued to speak out about Big Tech and the AI sector. she critiques the sector's business models, power and profit structures, and political alliances, exposing the technological hype as mere hot air and describing its brute-force approach as ecocidal and immoral.
here are some of her most important insights.

read more on Whittaker's experience and perspectives:

Timnit Gebru co-led Google's ethical AI team. a widely respected leader in AI ethics research, she is known for coauthoring a groundbreaking paper showing facial recognition to be less accurate at identifying women and people of color, an important and indicative example of tech discrimination. Gebru was allegedly forced out of Google in December 2020 over research showing systemic neglect and insufficient due diligence in addressing the monumental risks of developing large language models: environmental and financial costs; massive data and inscrutable models; research opportunity costs; and the illusion of meaning.

i’m thinking of it more from the perspective of developing technology that works for people. a lot of the AI research that happens right now is AI for the value of AI itself. a lot of people are thinking about this body of tools known as AI and saying, “well, everything looks like a nail, and we have this big hammer.”

we already know that deep learning has problems. these modes of research require organizations that can gather a lot of data, data that is often collected via ethically or legally questionable technologies, like surveilling people in non-consensual ways. if we want to build technology that has meaningful community input, then we need to really think about what’s best. maybe AI is not the answer for what some particular community needs.

after leaving Google, Gebru founded the Distributed AI Research Institute. there, researchers provide consulting and auditing on ethical AI usage, and communicate AI ethics research to concerned audiences across the private and public sectors and the general public.
here are some of her most important insights.

read more on Gebru's experience and perspectives:

Joy Buolamwini's 2018 research paper Gender Shades: Intersectional accuracy disparities in commercial gender classification, co-written with Gebru, put her on the map as a critical researcher in digital activism while she was at MIT Media Lab. she found large racial and gender bias in AI services from companies like Microsoft, IBM, and Amazon.
having founded the Algorithmic Justice League for digital equity and accountability, Buolamwini advocates for affirmative consent, meaningful transparency, continuous oversight and accountability, and actionable critique.

with the adoption of AI systems, at first i thought we were looking at a mirror, but now i believe we're looking into a kaleidoscope of distortion. because the technologies we believe to be bringing us into the future are actually taking us back from the progress already made.

i truly believe if you have a face, you have a place in the conversation about AI. as you encounter AI systems, whether it's in your workplace, maybe it's in the hospital, maybe it's at school, ask questions: why have we adopted this system? does it actually do what we think it's going to do?

AI, Ain't I a Woman? travelled to places i didn't expect. probably the most unexpected place was the EU Global Tech Panel. it was shown to defense ministers of every EU country ahead of a conversation on lethal autonomous weapons to humanise the stakes and think about what we're putting out.

apart from continuing to advise the European Union through its Global Tech Panel, Buolamwini appeared at the June 2023 Presidential Meeting on Artificial Intelligence in San Francisco, California.
in 2023 she published Unmasking AI: My mission to protect what is human in a world of machines.
here are some of her most important insights.

read more on Buolamwini's experience and perspectives:


visual credits

glitch rain, Samantha Suppiah, 2025