Thanks for speaking with us, Prof Rieser. Could you start by introducing yourself, please?
I’m a professor of computer science at Heriot-Watt University in Edinburgh, and I lead a team of researchers working in natural language processing and machine learning. At the moment, I hold a Leverhulme Senior Research Fellowship, which was awarded by the Royal Society. I’m also a co-founder of a company called ALANA AI. With ALANA, we’re especially interested in applications for the societal good.
What kind of societal good might that be?
For example, we have a project with UNICEF on detecting and dispelling myths about the pandemic and about vaccinations. We’ve also got another project with the Royal National Institute of Blind People to help blind people get information, support, and advice through spoken conversation, and also to assist them in finding objects in their surroundings. The main idea is that Alana is your eyes and you can ask what objects are around you, where things are in relation to other things, how to find them, and so on.
That sounds amazing. Another of your projects is about gender bias in conversational AI. Could you tell us a bit about that?
Sure. First of all, what do we mean by conversational AI? It’s whenever you can talk to a machine, be it a voice assistant, your car or your egg timer. And what do we mean by gender bias? In this project, we are looking at how these systems are deliberately designed, i.e. design choices that someone has made about the AI’s persona.
The systems all have voices which carry gender information; it’s very, very hard to produce a gender-neutral voice. Even if you have a voice with a pitch that is roughly neutral, there will always be gender cues in the way you speak and in your word choice. There will always be some assignment of gender.
Conversational AIs are currently presented in a gendered way: specifically, they’re presented as female. And we think that’s a problem.
Why is it a problem?
Let me backtrack. Our research is inspired by our participation in the Amazon Alexa Challenge, an annual challenge where university teams compete to build an open domain chatbot which can talk about all possible topics in a human-like way. This is an unsolved problem, an AI holy grail.
My student, Amanda Curry, noticed that our system got a lot of quite sexually charged abuse. Around the same time, UNESCO published a very good report called I’d Blush if I Could and that title came from one of the AI responses to gendered verbal abuse.
Some people might say, they’re not sentient beings, why do you care? But the argument is that it’s a reinforcement of stereotypes. By interacting with the system, you’re getting used to abusing a woman – artificial or not – without pushback. At the moment a lot of voice assistants, like Amazon’s Alexa or those from Apple and Google, have female personas that are deliberately submissive. They never push back and they let you say whatever you want. A big worry is that many of us now have these systems in our daily lives; our children hear how we interact with them and begin to associate negative stereotypes with being female.
So what are the options? Could we just make the assistants male?
It’s now more publicly advertised that you can change your voice assistant to a male voice, and in some languages the default voice is male. However, in a forthcoming publication, we analyse the pronouns people use to refer to these systems and show that the public still perceives them as female. People fall into the trap of thinking, “this is my personal assistant, so she’s a woman”.
At the Amazon Alexa Challenge, Amanda tried to come up with a persona that would push back against this very inappropriate speech from users, but it was hit and miss. So we thought this would make a good research project: how do you design a good AI persona? The main focus is to understand how certain design choices influence user behaviour and how we can design a persona that has a positive influence.
We have an interdisciplinary team funded by the UK Research Council. On the technical side, there’s my team at Heriot-Watt; we’ve got psychologists working on digital stereotyping at the University of Strathclyde in Glasgow; and we’ve got people at the University of Edinburgh working on digital education. So we really look at the problem from all possible sides.
As someone who works in probably the most male-dominated academic discipline, do you think things are changing in academia, in terms of diversity and inclusion? Are policies making a real difference?
Things are changing, yes. A very recent example is that a lot of conferences moved online due to the pandemic, which is actually much more inclusive for people with families, and there’s a commitment for these conferences to stay hybrid going forwards. In computer science, conferences are incredibly important because that’s where you publish your papers. It’s really hard to participate if you have a family, especially with smaller children, or if you’re pregnant or breastfeeding. But last year it was much easier.
The other thing is that I work in natural language processing and a lot of the top researchers are actually women, e.g. our Head of School and our Head of Department. One of the things we’re actively doing is encouraging women to apply; we headhunt them.
Do you think those active policies are needed for inclusion to take hold?
Yes, the leaky pipeline is real. Academia is not just a job, it’s a career, which means you have to have a much higher commitment, you’re expected to work long hours and there’s extreme competitiveness. When you throw a family into the mix, that’s where it gets really hard. That’s what a lot of policies need to focus on: how can we retain young women in academia, how can we enable them to have a career and a family.
Do you think this burden of the academic career and having a family has become worse during the pandemic?
Yes. It’s affecting anyone with caring responsibilities, but what is unique for female academics is that they’re often part of dual-career couples, whereas many male academics aren’t. This means that, if you’re a man without childcare, there’s often someone else who can take the main caring load. This isn’t necessarily true for women.
I think it would be really beneficial for women to get study leave, because the first thing to suffer is your research. And it would be great to get advice on how to account for the lack of childcare during the pandemic on your CV. Some funders have a cut-off date a certain number of years past your PhD, and they usually factor in maternity leave, but they currently don’t factor in the pandemic. So I think it should be treated like another maternity leave.
That’s a very clear-sighted take on a difficult situation. Thank you so much for speaking to us today, Prof Verena Rieser.

(© Emilie Steinmark / AcademiaNet / Spektrum.de)