While the fight for diversity is far from being won in companies and society, another battlefield is emerging: that of technology – within development teams, the tools developed, and even virtual worlds that are being created today. To discuss this need for diversity and the means to achieve it, we gave the floor to Angeles Garcia-Poveda, chair of the board of directors at Legrand and former recruiter at Spencer Stuart, and Laurence Devillers, a researcher in computer and social sciences.
One might expect AI to be rational and objective – a form of ideal judge, influenced only by facts. But although technology offers us a greater openness to the world, some tools seem to reproduce and even amplify existing prejudices within society. Why? Because the algorithms that guide them are the creation of humans who themselves carry these biases and stereotypes.
How can we fight these forms of discrimination? By using education to challenge the preconceptions that underlie them, and by making sure professions in science and technology include a variety of profiles, from all genders, origins, and social backgrounds. But how can this be achieved if the recruitment tools that are supposed to help us are themselves as biased as we are? How can we counteract the standardised cultural norms conveyed by the technologies developed by Big Tech, which we’ve become dependent on, and which can’t easily be modified? Can we really rely on technology to achieve a more inclusive society? We interviewed Angeles Garcia-Poveda, chair of the board of directors at Legrand and former headhunter at Spencer Stuart, and Laurence Devillers, professor of computer science applied to social sciences, and a specialist in ethical issues related to machines.
Interview by Anne-Sophie Moreau.
What do you think diversity is? Do recent technological advances undermine it, or on the contrary, promote it?
Angeles Garcia-Poveda: For me, true diversity lies in the diversity of thought and perspectives, which comes from a combination of nature and nurture, and which evolves throughout our lives. The social environment in which we grew up, our culture, our language, our preferences, our gender, the generation we belong to, our family, and our professional environment all shape us in spite of ourselves. They condition our outlook. We don’t naturally seek out difference; we rather tend to like in others that which brings us together. Hence the need for conscious thought and action in this area.
Advances in technology have been helpful when it comes to diversity. Access to data has been a very positive contribution, as it has allowed us to measure and compare, and therefore make diagnoses and set objectives. Access to information was made easier, which in turn allowed us to confront and enrich our points of view. Communication technologies have enabled teams around the world to work together. Digital platforms spread musical or audiovisual content all over the planet, which allows the greatest number of people to access works that were once confined to a niche audience. But technologies – and in particular AI – are designed, developed, and managed by groups of humans, which can vary in terms of diversity, at the risk of reproducing their creators’ biases.
‘Learning systems don’t have natural biases, but they’re fed by our data’
—Laurence Devillers
Laurence Devillers: Gender, age, and cultural diversity produces a wealth of points of view which are useful to our collective intelligence. Recent technological advances have improved access to knowledge for everyone, but also amplified the phenomena of fake news and the manipulation of information. It’s important to understand that human beings all have biases based on their upbringing, their culture, and their emotional reactions to different stimuli. Unlike human beings, learning systems don’t have natural biases, but they’re fed by our data. AI is therefore at the mercy of its trainer and of their own personal opinions, experiences, and filters in their choice of data and the way they optimise the system. Using data to recognise the existence of these inherent biases and working to mitigate them is the first step to a truly successful integration of diversity.
Algorithms are sometimes accused of being misogynistic. Why?
L. D.: Algorithms aren’t misogynistic, but they can discriminate. Take the case of voice recognition: if we only train the algorithm on male voices, it will have more difficulty recognising a female voice. The other aspect is that 80% of coders are currently male. They therefore model our social interactions according to their own masculine templates. Chatbots are rife with stereotypes: they mostly have a female face and voice – except when they’re providing security or a respected form of expertise, like that of a surgeon, for example.
In your car, by default you’re given a female voice to guide you. But who can change this option? A Swedish banker told me that he had surveyed his male and female customers about his chatbot’s default voice, and that both preferred a female voice.
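The mechanism Devillers describes – a model trained almost entirely on one group failing on another – can be sketched numerically. Everything below (the pitch distributions, the 95/5 split, the crude acceptance gate) is an invented illustration, not real voice data or a real recogniser:

```python
import random
import statistics

random.seed(0)

# Invented pitch distributions (Hz) and an invented 95/5 training split --
# illustrative assumptions only.
male = [random.gauss(120, 20) for _ in range(950)]
female = [random.gauss(210, 25) for _ in range(50)]
train = male + female

# A deliberately naive "is this a voice I know?" gate: accept any pitch
# within two standard deviations of the training mean. It stands in for
# any model that simply fits whatever its training data happens to contain.
mean = statistics.fmean(train)
std = statistics.pstdev(train)

def accepted(pitch):
    return mean - 2 * std <= pitch <= mean + 2 * std

test_male = [random.gauss(120, 20) for _ in range(1000)]
test_female = [random.gauss(210, 25) for _ in range(1000)]
rate_m = sum(accepted(p) for p in test_male) / len(test_male)
rate_f = sum(accepted(p) for p in test_female) / len(test_female)
print(f"male voices accepted:   {rate_m:.0%}")
print(f"female voices accepted: {rate_f:.0%}")
```

Because the training set is dominated by one group, the fitted acceptance region sits around that group, and the under-represented group is rejected far more often – with no misogyny anywhere in the code, only in the data.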
A. G.-P.: Some platforms give users the option to change the default voice, but only a small percentage of them do. Stereotypes are at play here: female voices probably have a reassuring side and reflect a kind of docility, gentleness, a sense of service or even servility...
There are also biases in some algorithms that stem from the subjectivity of the data used. In 2015, Amazon stopped using a recruitment AI because it was discriminating against some candidates’ CVs as a result of biased data sets. Shopping suggestions from an online supermarket are based on your past purchases, and generally only invite you to repeat those habits. Job offers received on digital platforms meet mechanical criteria and are not based on skills or potential. We should be constantly vigilant about neutrality.
‘These algorithms are created by someone who has a vision of the world, assumptions so integrated into their DNA that they’re no longer even aware of them’
—Angeles Garcia-Poveda
L. D.: In Japan, one company sells holograms of a pretty young woman in socks and a miniskirt, who lives with you and takes care of your home (she turns on the light, turns up the heating, etc), like an ersatz house fairy. In the advertisement, she becomes omnipresent in the life of the young man she helps. She says things like, “Honey, how are you? Come back quickly tonight,” to which he replies something like, “I finally have someone waiting at home for me”!
A. G.-P.: These algorithms are indeed created by someone, and this someone has a vision of the world, assumptions so integrated into their DNA that they’re no longer even aware of them. No one – male or female – is totally immune to stereotypes. These biases are etched into the way we were brought up and exposed to the world, and they condition our perceptions.
In my life as a recruiter, I quickly became aware of these diversity issues. Years ago, the firm I worked for had studied candidate presentation reports given to clients. We extracted reports from men and women around the world who had applied for high-level positions, internally or externally, and ran them through very thorough language analysis software. What do you think were the results of this study?
L. D.: I imagine the word correlations were different? When we analyse millions of texts online, we see that the masculine world is most often associated with the terms “power”, “strength”, “money”, while the terms most correlated with the feminine world are “care”, “household”, etc.

Angeles Garcia-Poveda © Stéphanie Lacombe for Philonomist.
A. G.-P.: The study showed certain biases: the characteristics that were presented as positive qualities of leadership in men were sometimes presented as flaws in women. On the subject of authority, for example: men were described as having a strong presence, an assurance that inspired confidence, and an ability to make decisions. In women, we found these same characteristics, but framed as areas for development: someone would be described as having a “divisive” personality. Women were less often described as having authority or gravitas, and more often as “authoritarian”, etc. These results made us think. We thought we were doing the right thing by systematically proposing female profiles, and yet that wasn’t enough: there remains something subtle and underlying, which is more difficult to get rid of.
We say to ourselves that AI is neutral, that it has no a priori judgments. How are its prejudices formed?
L. D.: AI is made up of algorithms that learn from our data. The raw material is provided by us humans – with all our positive and negative traits. In this data, we find our lives, our opinions, our knowledge, our stereotypes, etc. Our prejudices are what we acquire in order to survive: when we look at a scene, our neural system checks that there is no danger, and this is what allows us to flee very quickly. The machine learns from patterns it finds in our data, but not in the same way we do. Humans don’t look at an image like a computer does. A doctor analysing an X-ray to detect cancer looks at particular areas, whereas a machine looks at absolutely everything. Its interpretation of statistical regularities isn’t the same as ours.
‘We no longer have control over what the machine is really looking at’
—Laurence Devillers
There are three types of learning algorithms: those which learn a model mapping signals to semantic annotations given by humans – so-called “supervised” learning – so that the system can recognise a photo (a visual signal) of a cat (annotation), for example. There are those which build this model by grouping similar patterns – so-called “unsupervised” learning. And then there are those that proceed by trial and error – reinforcement learning. The most powerful models are neural networks operating on the principle of deep learning, supervised learning with a large number of hidden layers. With the first models, we could check the succession of rules applied. Today, we’re aggregating a lot of data that we then project into a digital space to represent patterns. As a result, we no longer have control over what the machine is really looking at. But we can still evaluate it on new data at the margins, to check what it has learned, as well as its limits.
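Devillers’ last point – you can no longer inspect the rules, but you can still probe a trained model with new data, including data at the margins – can be illustrated with a deliberately tiny supervised model. The two classes, the numbers, and the centroid model are all invented stand-ins:

```python
import random
import statistics

random.seed(1)

# Toy supervised learning: humans supply the labels ("a"/"b" stand in for
# annotations like "cat"/"dog"); the learned model is just one centroid
# per class. Invented data, purely for illustration.
train = [(random.gauss(2, 1), "a") for _ in range(200)] + \
        [(random.gauss(8, 1), "b") for _ in range(200)]
centroid = {label: statistics.fmean(x for x, l in train if l == label)
            for label in ("a", "b")}

def predict(x):
    # Assign x to the class whose centroid is closest.
    return min(centroid, key=lambda label: abs(x - centroid[label]))

# Evaluating on fresh data drawn like the training data looks reassuring...
fresh = [(random.gauss(2, 1), "a") for _ in range(100)] + \
        [(random.gauss(8, 1), "b") for _ in range(100)]
accuracy = sum(predict(x) == l for x, l in fresh) / len(fresh)
print(f"accuracy on familiar data: {accuracy:.0%}")

# ...but probing at the margins, with an input unlike anything seen in
# training, shows the model still answers -- with full confidence.
print("prediction for x = 100:", predict(100))
```

The point of the probe is that the model gives a confident answer far outside anything it was trained on; only testing at the margins reveals that limit.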
A. G.-P.: I read that some facial recognition software had a harder time recognising women or black people. Why is that? Is it a technological problem or a sample size problem?
L. D.: The databases used to train the systems must have contained too few black people or women. Here we’re faced with a problem of representativeness in the design, which is the primary driver of discrimination.
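The representativeness check Devillers points to can be made concrete: compare each group’s share of the training data against its share of the population the system will serve. The function and the shares below are hypothetical, for illustration only:

```python
def underrepresented(training_share, population_share, tolerance=0.10):
    """Return the groups whose share of the training data falls more than
    `tolerance` below their share of the population the system will serve."""
    return [group for group, pop in population_share.items()
            if training_share.get(group, 0.0) < pop - tolerance]

# Invented shares, purely for illustration.
population = {"women": 0.50, "men": 0.50}
training = {"women": 0.15, "men": 0.85}

print(underrepresented(training, population))  # flags "women"
```

A balanced training set would return an empty list; the audit is cheap precisely because it runs on the data before any model is trained.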
How can we fight this involuntary discrimination?
L. D.: By standards, laws, training, and ethics.
‘Discrimination is based on millions of individual decisions and perceptions, every day’
—Angeles Garcia-Poveda
A. G.-P.: Awareness is already the first element. It allows us to play an active role, to exercise control over our way of seeing the world, and to ask ourselves if we might be applying some conscious or unconscious bias. Yes, we need standards, but that also involves each and every one of us. Discrimination is based on millions of individual decisions and perceptions, every day.
L. D.: It also involves businesses. Employees can play an important role, but they still have to be listened to. Google fired several women, including Timnit Gebru and Margaret Mitchell, because they had called out what I have just explained to you.
Is AI useful in recruitment?
A. G.-P.: Certainly. I don’t think you can delegate the decision to a machine, but it can help at every step: the sourcing [the proactive identification of candidates, editor’s note] and the processing of applications, the matching tests, the collection of references, etc. When I’m asked whether it’s more efficient to have a pile of CVs sorted by a machine or by a person – who will tire after several hours and who has biases of their own – it’s a fair question. In a sourcing campaign, technology makes it possible to reach a much larger and potentially more diverse pool of candidates. In the United States, more than 80% of companies use these tools. This percentage is lower in France (around a third). Used wisely and with enough discernment, they don’t replace humans; they give us more time to devote to decision-making and less to repetitive tasks.
L. D.: I worked on precisely this subject of recruitment. An AI system identified clues in people’s voices that allowed candidates to be sorted according to vocal or linguistic qualities (such as vocabulary range, hesitation, personality, etc.). However, people’s accents or lisps weren’t taken into account by the model. I was among the people who criticised this method, which is still used in the United States, Australia, and France.

Laurence Devillers © Stéphanie Lacombe for Philonomist.
How do Big Tech’s tools impose cultural norms?
L. D.: They have enormous normative power, and so they can be manipulative. The risk is creating a non-human language that will feed future systems, as is currently the case with ChatGPT. There is also a whole marketing discourse around machine emotion which is a total masquerade, with systems claiming to detect people’s moods as if it were as easy as reading a horoscope... Meanwhile, cultural differences remain very strong: there are countries where looking someone in the eyes is seen as a violation, others where people don’t touch each other, others where it’s frowned upon to talk a lot... Cultural diversity around emotions is evident between the Global North and South. In Japan, for example, there is an ambiguous laugh that expresses both agreement and disagreement. We must be aware of the limits of technology, which cannot reflect these subtleties.
A. G.-P.: I find that we don’t pay enough attention to diversity of thought. It’s a terrible paradox: on the one hand, social media offers us a window onto an absolutely incredible world, which allows us to discover different areas of interest and points of view. It is now possible to do online courses on subjects as varied as climate change, learning an instrument, repairing machines... Access to knowledge is much richer, more democratic and immediate than it has ever been. But I wonder if we are really freer and if we’re not permanently influenced by currents of thought. If you click on a post on Facebook or Twitter, you’re inundated with similar posts within twenty-four hours. Buttons tell us, “click here if you want to receive other posts like this one.” In other words, “click here if you want to reinforce the idea you already have, rather than opening yourself up to other perspectives”!
‘We must cultivate the ability to disagree in order to enrich ourselves’
—Angeles Garcia-Poveda
How can we get out of our own bubbles?
A. G.-P.: You have to question yourself constantly. When I was a teenager, my father encouraged me to read at least three newspapers of very different political leanings. “We can discuss it all afterwards, but not before,” he said. It struck me. We must cultivate the ability to disagree in order to enrich ourselves. But the most important thing is to surround yourself with people who are different from you, and who have not only the right but also the duty to contradict you. When I see teams from very homogeneous cultures, I always wonder: they can certainly work in perfect harmony, but that harmony lacks pizzazz! These kinds of teams don’t necessarily achieve the best results for companies, consumers, or stakeholders. It’s very difficult to arrive at the best solution when everyone agrees, because we won’t see the flaws or the defects of our own reasoning. Human beings grow when they’re confronted with crises, conflicts, disagreements. We make progress through contradiction and confrontation.
L. D.: This is another point that terrifies me: the power that certain governments have to push an entire population towards this or that opinion – like Russia, which is urging its people to war, or China, which is trying to make sure that its citizens remain positive. In Japan, I came across a company that claimed to be developing systems to make people happy. “If you’re not unhappy, you can’t be happy,” I told them, which shocked them. If we smooth out negative emotions, we lose the ability to think, to doubt, to be indignant in the face of injustice, and to make choices...
A. G.-P.: You also have to train yourself to embrace confrontation. Confrontation isn’t necessarily synonymous with fighting: the Greek philosophers were already debating in the agora and advancing their thinking, all thanks to their disagreements. I’m not sure that we have been able to maintain this culture of debate. We tend to move straight from orthodox thought to dissent or conflict, with little nuance. And diversity of thought isn’t always encouraged by the system, because minorities have a weaker voice until they reach a certain critical size.
L. D.: Democracy in the Greek model allows for debate. You have to work out the solution together, not just deliver it. But machines don’t help us with that. By modelling things on machines, we tend to standardise. Well-argued points of view are lost, as well as the thought processes involved in the formation of opinions. This poses a threat to democracy.
A. G.-P.: I’m not a fatalist. We do manage to do incredible things with machines. In the business world, we use data from technology every day for decision-making. If we find the right indicators and can be sure of the quality of the raw material we’re analysing, we can measure progress in the things we want to see change, such as diversity or carbon impact for example. We have ratios that we constantly measure and we commit to specific objectives. Data is very useful, provided it’s used with discernment and common sense.
Can we imagine a technology that includes the whole of society instead of bringing the most vulnerable down?
L. D.: The first thing I wanted to do with robot-assisted technology was help older people. There’s a lot of progress to be made to support Alzheimer’s patients and those who lose their cognitive abilities and language. We can create conversational robot assistants for people who stay at home longer, in their own environment. It would be unethical not to use these technologies to help dependent people, but not at any price!
A. G.-P.: The issue of technological inclusion is crucial. Are we not excluding the elderly? I have a father over 80 who is comfortable with technology, but that’s because he was interested in it and was trained in it. Today, in local banks, we see elderly people who are struggling to manage their accounts using the banking applications, and who come to get help. If we don’t make a conscious effort, a significant part of the population can end up on the sidelines. In addition to this issue of inclusion related to the need for training and support, there is another issue related to the cost of certain technologies. I have personal experience with the world of disability. Technology has been an incredible help for my daughter, who has no oral language. There is software that allows people with severe physical and mental disabilities to access language or online research, and which has truly transformed their quality of life for the better. But there remains the problem of cost: 90% of families with disabled people are unable to acquire such a tool. The market is too narrow to interest manufacturers, and there is very little competition: investments and resources tend to go towards more profitable projects.
‘Manufacturers offer platforms and applications without studying the impacts they might have on our social and physiological life’
—Laurence Devillers
What about diversity in the metaverse? There’s already talk of discrimination, and sometimes even rape, in virtual reality spaces…
L. D.: Virtual worlds already exist in video games, even if they’re not metaverses yet. Inclusion isn’t so easy. To start, you have to be able to buy a headset and haptic gloves in order to have sensory experiences in these universes. Manufacturers offer platforms and applications without studying the impacts they might have on our social and physiological life, that is to say on the way our senses will react and probably change in these universes. How are we going to experience these two realities – one in everyday life and one in this virtual universe? Pathologies will develop, such as addiction to machines… Psychiatrists will have quite a job on their hands! Statistics already show that young people spend too much time on screens, which has a negative impact on their performance in school. All of this must be regulated – not necessarily as the Chinese are doing, with a two-hour-a-week limit on screen time for children, but with measures, quantified uses, and prevention at school. The machines interact with us, and we adapt to the machines. I work a lot on these problems of co-evolution between humans and machines. The challenge is to avoid long-term deviations.
A. G.-P.: Virtual reality is a subject that concerns me. How will it affect our ability to live a real life, in the present moment, and in the presence of others? Imagine that we could invent a character, a totally fictional life… The subject of consumption in the metaverse also raises questions for me.
Digital twins, on the other hand, open up a world of possibilities for science and R&D, and arguably allow us to make significant advances whilst minimising cost and risk. There are digital twins of factories, office buildings, and patients, and there are plans to make one of the ocean by 2024. The possibilities are endless.
L. D.: This poses problems in both directions. We’re aware of the deviations caused by screens, but we don’t yet know what the immersive world will bring. We absolutely did not anticipate the emotional vulnerability to which we will be subjected in the metaverse, for example. I was recently invited to some conferences on digital immortality. The prospects are terrifying: some are offering the possibility of meeting a deceased parent in the metaverse, for example. We’re already creating deadbots – chatbots able to speak with the voice of someone who has died and say things the person would never have said, which is a new form of impersonation. There’s a video on YouTube that shows a Korean mother being introduced to a virtual version of her dead daughter. Seeing this woman stretch out her arms yet be unable to touch her daughter’s avatar, we discover what suffering technology might cause... How can we mourn whilst listening to the now omnipresent voice of the deceased?
Why are women so absent from tech?
A. G.-P.: It starts from a very young age. The first excluding factor is women themselves, because they tend to choose science studies less often than men. I grew up in a family of engineers and yet I didn’t choose this field, preferring business school instead. Why? Today I work in an industrial company and I was happy to discover the world of factories. I have no regrets, but I think I didn’t really ask myself enough questions at the time when I was making these choices.
These questions should be proactively addressed, starting in high school. By offering scientific and industrial work experiences to all students, for example. This would also mean strengthening the teaching of mathematics and science, as well as targeted communication.
L. D.: We lack women in AI! When I was a kid, I had a game that illustrated how a computer works. On the cover of the game, there was a dad in a suit and tie and his little boy doing logic and electronics to figure out the internal components of a processor. It was rare for this type of game to be offered to a little girl: in play school, we put the pink games on one side and the blue ones on the other. The world of computing and science is still very gendered. I think the socio-technological, care, and creative potential of AI isn’t explained enough to attract girls. The Blaise-Pascal Foundation, which I chair, aims to help parents and grandparents to encourage girls to go into computer science and mathematics.
A. G.-P. : It takes communication and education to attract young girls to these professions, but also role models. Are there enough women members of the juries of the entrance exams to engineering schools? Enough role models to support them in their choice of courses? Enough female leaders in the industrial sector? Enough female science teachers?
‘As human beings, we need the perspective of the other to grow’
—Angeles Garcia-Poveda
L. D.: Few. And in computer engineering schools, they’re told: “Be careful, there are very few girls.” This is also counterproductive. Diversity is our strength, whether it’s in terms of gender, origins, or social background. We have a lot to gain. I like the image of resistance to viruses: if we want to resist them, we had better mix! We must improve our educational discourse from early childhood at school.
A. G.-P.: I fundamentally believe that diversity and performance, but also diversity and well-being, are very much linked. As human beings, we need the perspective of the other to grow. It’s in this sense that I seek diversity. I’m exposed to disability within my own family, so I have experienced several forms of difference, and I think that’s a real richness. And I myself have often been the diverse element in the environments where I evolved – in terms of gender, but also training, nationality, generation... Today, STEM [science, technology, engineering, mathematics, editor’s note] studies are depriving themselves of extraordinary talents. And discussions on the ethics, meaning, and application of these technologies can in no way take place without the participation of women.
L. D.: Learning about algorithms and ethical reflection around these technologies at school should stimulate career ambitions among girls and boys in these creative and well-paid fields. Let’s give ourselves the means to match our ambitions. Maybe we have to decide on quotas to achieve parity.
A. G.-P.: Quotas have worked very well on boards of directors. Ten years ago, we were told that there were no women capable of serving as board members. Now we have succeeded, and this gender diversity has been a wonderful vector of other forms of diversity: nationalities, generations, skills, etc. Executive governing bodies will be the next frontier.
Would you say the same about ethnic quotas, which are banned in France but not in the United States?
L. D.: We cannot compare ethnic quotas to women’s quotas: we still make up 50% of the population, all ethnicities combined!
A. G.-P.: It’s very simplistic (and also illegal in France) to ask people to “qualify” themselves ethnically in a questionnaire. Communitarianism assumes you can take a representative of each religion or ethnic group so that they can “represent their interests”. This isn’t my vision of diversity or of the value of collective intelligence. The participation of women in social and economic life isn’t a question of representation. I’m in favour of multiple forms of diversity and of creating organisations that are inclusive enough for everyone to feel free to be who they are.
‘We must keep in mind our founding myths and cultural stereotypes and work to improve our understanding of all these biases’
—Laurence Devillers
Some argue that as technological professions are increasingly automated, soft skills are regaining importance. Could this be an opportunity for women to take their revenge in the professional world?
A. G.-P.: I prefer to talk about progress rather than revenge. And these soft skills concern everyone, not just women! It’s quite useful to have a minimum of empathy towards others as well as the communication skills to lead teams – at the very least, to be able to negotiate, motivate, inspire others, and achieve your goals. Perhaps some of these attributes are considered feminine. But stereotypes shouldn’t be taken to extremes. For me, these are more the attributes of a modern leader, male or female.
L. D.: Indeed, if we take the caricature of the male fantasy of creating a robotic human clone, as found in Greek mythology, women wouldn’t need it, because they already carry life! We must keep in mind our founding myths and cultural stereotypes and work to improve our understanding of all these biases. We won’t change everything, but we can improve the balance we need to build the world of tomorrow with a collective intelligence that is both masculine and feminine.
What are the qualities of a leader in a world where everything seems like it can be calculated and optimised?
A. G.-P.: They need to be able to adapt and learn, which implies a certain curiosity, humility, and an ability to question oneself. But also courage, to be able to make decisions using a lot of data but with little certainty, as well as empathy and generosity. And finally, they need to be able to collaborate – with their team, but also with the open ecosystem that revolves around their company: suppliers, customers, competitors, public authorities, etc.
L. D.: I totally agree with you… and they need to be able to go straight to the point!
