A good time to be working on AI: an interview with Professor Sue Black

14th September 2023

“Because of my background, I’m always interested in how technology can serve the underserved in society, and how it can empower people to live their best lives.

With AI, I’m not worried about robots taking over the world.
I’m more worried about people using technology to do bad things to other people, rather than the technology itself.

One of the biggest issues we’ve got with technology is that most people in society, particularly those who aren’t in tech, think that they can’t understand it. I want to help change that, on a global scale.”

Professor Sue Black, in conversation, July 2023

Foreword by John Hammersley: She’s an award-winning Computer Scientist, Technology Evangelist and Digital Skills Expert who led the campaign to save Bletchley Park, but to me, Sue Black will always be the friend who has the best excuse for skipping pitch practice, namely “I’m speaking at the UN”. 🙂

We first met during the Bethnal Green Ventures (BGV) start-up accelerator programme in 2013, when Sue founded TechMums (then Savvify) and John Lees-Miller and I were starting out with Overleaf (then WriteLaTeX). And by happy coincidence, both start-ups won at the Nominet Internet Awards 2014!

Sue and I stayed in touch, and when she joined Durham University in 2018, she invited John and me to give talks to the students taking her Computational Thinking course, to give a perspective on life in industry after university.

Recently I spoke to Sue about her work in AI, and how her experience advocating for underrepresented groups can help ensure both that AI is developed responsibly and that access to it isn’t restricted to, and controlled by, the privileged few. She’s working on a new, UN-supported educational programme, and would love for those interested in helping — in any way — to get in touch.

Early days in natural language processing

Hi Sue, it’s great to be chatting again! I regularly see you attending #techforgood events around the world, and this conversation came about in part because you mentioned you were in Geneva at the “AI for Good” conference last month. How did that come about?

Lovely to be chatting to you too! I’ve become more interested in AI over the years — I first studied it at university in the 80s, and at that time I was really interested in natural language. For my degree project in 1992 I wrote a natural language interface to a database, which was so difficult back then!

“AI was interesting (in the 80s/90s). But I never believed it would get to the state where we are now at all, because back then it was so hard to even write the code just to be able to ask a basic question.”

For example, I was building a natural language interface to something as relatively well structured as a family tree, to answer questions like “who is John’s great-grandmother?”, that sort of thing. That was so difficult and took such a long time… and that was for a typed input. It wasn’t even speech, right?
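
To give a flavour of what that involved, here’s a minimal sketch in Python of the kind of hand-written pattern matching such an interface boiled down to – a toy fact base and a single question template, with all the names hypothetical. The real systems of the era were far more laborious, which is rather the point.

```python
import re

# Toy fact base: child -> (mother, father). All names are hypothetical.
parents = {
    "John":  ("Mary", "Peter"),
    "Mary":  ("Susan", "David"),
    "Susan": ("Edith", "George"),
}

def mother(person):
    return parents.get(person, (None, None))[0]

def great_grandmother(person):
    # Great-grandmother = mother of a grandparent; this sketch follows
    # the maternal line only, to keep things short.
    m = mother(person)
    gm = mother(m) if m else None
    return mother(gm) if gm else None

def answer(question):
    # One hand-written pattern per phrasing -- roughly what "natural
    # language" interfaces of the time amounted to.
    match = re.match(r"who is (\w+)'s great-grandmother\?", question, re.I)
    if match:
        return great_grandmother(match.group(1)) or "I don't know."
    return "I can't parse that question."

print(answer("Who is John's great-grandmother?"))  # -> Edith
```

Every new phrasing (“whose great-grandmother is John?”, speech input, and so on) meant more hand-written rules, which is why even a well-structured domain like a family tree was such hard work.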

So, to have got to where we are now with voice recognition and ChatGPT, it just completely blows my mind that we’ve managed to get here in… well, it’s a long time to me (since the 80s!), but at the same time it’s a very short space of time.

Professor Sue Black and her PhD student Sarah Wyer both attended the AI for Good conference in Geneva, Switzerland in July 2023. Source: https://twitter.com/Dr_Black/status/1677400696084103168

A good time to be working on AI

One of my PhD students at Durham – Sarah Wyer – has been looking at GPT applications for a couple of years. Before ChatGPT exploded into the public sphere, we were looking at different things for her to do for her PhD. We had a conversation with one of my colleagues at Durham, Chris Willcocks. He was all excited about the potential of… I think it was GPT-2, if I remember rightly. He was telling us all about it, and we were like, “Oh my God, this is amazing, you’ve got to do your PhD in this area!” So Sarah and I became really excited about it, and we wanted to look at bias in these GPT models.

“I’ve done loads of stuff around diversity and inclusion in my career, and so we figured we’d ask: can we find any bias with GPT too?”

We thought it might take a while – and in a sense it did – because to start with it took us several months to get access to GPT-3. You had to apply for an account and the waiting list was very long. But when we did get access it was amazing, and we started to look into whether any particular sorts of prompts generated biased output.

And at that point, it didn’t take very long to find bias! The first bit of research that Sarah did was taking some simple prompts – e.g. “Men can” and “Women can” – having GPT-3 generate 10,000 outputs for each prompt, and then doing some clustering analysis and stuff like that. We thought it might take a while to find some bias, but it only took a few seconds with these first few prompts.
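
As a rough illustration of what such a pipeline could look like – this is not Sarah’s actual code – here’s a sketch under stated assumptions: it uses the pre-1.0 openai Python client of the GPT-3 era, a much smaller sample than the 10,000 outputs used in the research, and an off-the-shelf TF-IDF + k-means clustering. The model name and parameters are assumptions.

```python
# Sketch of a prompt-probing pipeline -- NOT the actual research code.
# Assumes the pre-1.0 `openai` client and a GPT-3-era completion model;
# sample sizes are scaled way down for illustration.
import openai
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

openai.api_key = "YOUR_API_KEY"  # placeholder

def sample_completions(prompt, n=100):
    """Generate n short continuations of a prompt."""
    response = openai.Completion.create(
        model="text-davinci-002",  # assumed GPT-3-era model name
        prompt=prompt,
        max_tokens=20,
        temperature=1.0,           # high temperature: diverse outputs
        n=n,
    )
    return [choice.text.strip() for choice in response.choices]

def cluster(texts, k=10):
    """Group completions into k rough themes via TF-IDF + k-means."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)
    return KMeans(n_clusters=k, n_init=10).fit_predict(vectors)

for prompt in ["Men can", "Women can"]:
    outputs = sample_completions(prompt)
    labels = cluster(outputs)
    print(prompt, "->", len(set(labels)), "clusters")
```

Inspecting the most frequent terms in each cluster is then a quick, if crude, way to surface recurring themes in what the model says each group “can” do.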

You can probably guess the biases she found – for example, stuff men can do is “be superheroes” and “drink loads of alcohol”, that kind of thing. Women can… yeah, it’s very misogynistic and sexualized, that kind of stuff. Not very nice at all, and if you add in race (e.g. as “Black women can”), it gets even worse.

Sarah is taking snapshots over time and the results are improving, in that the model no longer produces some of the worst answers. But that’s also a problem: it’s getting better because the things we really don’t want to see are now being masked, and that masking process isn’t very transparent.

So that’s been her PhD work over the last two years (part-time), and we went to the AI summit last year, and now to the AI for Good conference!

Sue Black describes this as “An incredibly inspiring kick-off speech by the wonderful Doreen Bogdan-Martin, Secretary General of ITU, challenging us to use AI for Good”. Source: https://twitter.com/Dr_Black/status/1676858921099710465

A practical approach

How does it feel to be back in AI, after working on it in the 80s & 90s? Did you keep an interest going over the years, amongst all the other stuff you’ve been doing?

No, not at all – after my PhD I thought, “I don’t want to do that again!” 

I started my PhD in formal methods, as that’s where the funding was. I did that for six months and whilst it’s clearly a good approach to software development in certain scenarios — safety critical software and stuff like that — it wasn’t a good match for my brain!

I think I have a more “practical” than “formal” kind of brain. It doesn’t work very well in that formal way, so whilst I can do it, I’m not good at it. So I moved over to reverse engineering, which is more practical and, to me, quite exciting, but then I ended up in a really complicated area of maths which I couldn’t understand properly! I was building on a wide-spectrum language, which is basically a language to end all languages… one that you can translate everything into and then everything back out of.

So I thought, “That’s a great idea; that’s sort of a practical solution for lots of problems,” but it was very formal again, and even though it’s a really good idea it turned out to not be very practical… and also the maths involved just did my head in! I spent three months thinking to myself, “Do I carry on with this and hope that I understand all this maths, or do I change?” I ended up deciding I wasn’t going to carry on with it, and I changed into software engineering.

I was already really interested in software measurement because, again, I thought it was practical – help for people out there adapting software systems. That finally resonated properly with me, and I did some work around reformulating an algorithm to compute ripple effect metrics. And that was my PhD.
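
For the curious: a ripple effect metric estimates how a change to one module propagates to others. The algorithm Sue reformulated (building on Yau and Collofello’s work) weights variable-level information flows within and between modules; the toy sketch below, with a hypothetical dependency map, only illustrates the underlying intuition of change propagation.

```python
# Toy illustration of the idea behind ripple effect analysis: counting
# how far a change in one module can propagate through dependencies.
# The real metric weights variable-level flows; this simplified
# reachability count is only a sketch of the intuition.

def ripple(deps, module):
    """Return the set of modules a change in `module` could reach."""
    reached, frontier = set(), {module}
    while frontier:
        current = frontier.pop()
        for dependent in deps.get(current, []):
            if dependent not in reached:
                reached.add(dependent)
                frontier.add(dependent)
    return reached

# Hypothetical dependency map: module -> modules that use its outputs.
deps = {"parser": ["analyzer"], "analyzer": ["report", "ui"], "ui": []}
print(ripple(deps, "parser"))  # {'analyzer', 'report', 'ui'}
```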

“I never thought we would be able to do all the things that we now can with AI.”

So, yeah, nothing around AI in there at all, in the end! And I kind of thought it (AI) was never going to get anywhere because it was just too hard, but of course, I didn’t foresee the big data revolution and social media and what they would enable. I’m so excited that it’s worked out the way it has. It’s just incredible. I never thought that we would be able to do the things that we can now do at all.

Returning to AI with a broader experience

Are you better equipped to work on AI now? Or is it even more daunting?

Well firstly, I suppose I’m not going to be doing anything at a technical level ever again (laughs!) — that’s not the best use of my time. 

When I was younger, writing code was definitely what I wanted to do, but I kind of feel now, “I’ve done that, I don’t need to do that again”! And other people, who are doing it all the time, will be much quicker than me! Going back into AI now is more about how I want the world to be — it’s at a much higher level, thinking about how we can use AI to benefit people. And because of my background of disadvantage and challenges, I’m always interested in how technology can serve the underserved in society in various different ways, and how it can empower people to live their best lives.

“Because of my background, I’m always interested in how technology can serve the underserved in society… and how it can empower people to live their best lives.” 

So that’s one aspect of it. And then the other one is from the opposite standpoint, which is how to mitigate bias – or, if there is bias within a system, how to make sure people realize how detrimentally it can impact them. Again, it will usually be the underserved communities who are most impacted without realizing it.

A lot of what I’m interested in is how to help people across the world understand reality — as much as anyone understands reality — but enough so that they can make the right decisions for themselves and their families and people around them. That could be a refugee woman setting up a business so that she can earn enough money to support a family, or it could be someone running an AI company who hasn’t thought about how the way that they’re developing their software can have a detrimental impact on potentially millions or even billions of people around the planet.

Because the #AIforGood conference was at the UN, I was chatting to the Secretary General of the ITU about helping everybody across the world with digital skills. Some sort of global program… so I’m going to be working with them on that! That’s the most exciting thing that’s happened to me in a long while!

We should worry about people, not the technology

I’m optimistic about AI, but the doom scenarios are interesting as well. Are we training and building something that will cause massive harm? Will we become too dependent on AI? Will we lose the ability to do certain things if we can get the answer immediately?

Were these sorts of questions a focus for the conference last month? What’s your view on them?

Yeah, this was discussed at the conference, and from my perspective there’s too much focus on the tech.

“I’m more worried about people using technology to do bad things to other people rather than the technology itself.”

Because I think the technology itself may be an issue way into the future, not immediately. Right now you can see — with stuff like Cambridge Analytica — how using information and data that people have found online can change the course of world events… elections in different countries, Brexit, and so on. I think a lot of that is down to people misusing technology, and that’s the thing that worries me more than robots taking over the world.

“People are using data about other people to manipulate millions — or even billions — of people to behave not in their own best interests nor in humanity’s best interests. That worries me. I’m not worried about robots taking over the world.”

Helping others to not be scared of technology

That’s why we need to help educate as many people as possible, so that they can recognize these things. I think security, and understanding it, is one of the biggest issues facing society — there will always be scammers of all different sorts, and they’ll always be using the latest technology. We have to help people build the awareness to keep asking themselves, “So what’s the latest thing that I should be looking out for?” You can’t tell everybody; you need individuals to be able to find that stuff out for themselves.

It’s a first step, because I think one of the issues we’ve got with technology is that most people in society, particularly those who aren’t in tech, think that they can’t understand it. Whereas, of course, they can at a certain level; but because it’s got that kind of reputation, lots of people are scared of it and think they couldn’t ever understand it. And that’s one of the main things I was trying to get across in the TechMums program: that, “Yes, you can understand these things, you can do these things — don’t get put off by all the jargon.”

“Everyone can understand tech to a certain extent, and if they can recognise that, and not be scared of it, it can help make their lives better in terms of being able to stay safe and secure and understand what’s going on. And I guess that’s kind of like my lifelong challenge — to try and make that happen as much as possible for as many people as possible.”

The buzz around AI shining a spotlight on existing problems

It feels like the current focus on AI is shining a spotlight on some problems which already existed. For example, there was already bias in internet search results, problems with social media, scammers, and so forth. Maybe people find it easier to think of the technology as being the problem, whereas it’s actually those that are (mis)using it. But although people may be focusing on the technology, it is at least bringing into focus how it will be used, who controls it, and…

And also who’s built it and tested it and all of that kind of stuff, from a software engineering point of view. I’ve been thinking about diversity in software teams — even though I wouldn’t have called it that — since I was doing my PhD. 

I can remember reading about the disaster with the London Ambulance Service computer-aided dispatch system, where people died because all sorts of things went wrong in procurement and management. A lot of it was about people not working together, not thinking, not actually valuing the people who had been running the manual system beforehand – the technology people thinking they knew better than the people doing the job on the ground.

I’d almost call it “uninclusion”, in the sense of those people not working inclusively with each other. It seemed to be a common problem in the ’90s, when there were a lot of instances of manual systems being computerized: outside consultants were brought in who didn’t really work with the people running the existing system, and things like the switchover would happen on a single day, with no fallback or disaster planning. Even at the time it was obviously a ridiculous way to do it, but it seemed to be happening everywhere, with millions and millions of pounds being spent on these types of projects.

I think more than technology, it’s always been about people: either not valuing other people, or not valuing the opinions and information they should be, or not testing things properly.

Dressing the animatronics — biases in plain sight

Bringing us back to the “AI for good” conference you attended last month, was there anything particularly unexpected you came across whilst you were there?

Overall it was a great conference — really interesting people and really interesting tech on display. 

One thing does stick in my mind though: there were a number of robots at the event, of many different sorts including animals, and some of them were humanoid robots – animatronics. About five were women and one was a man, and it was quite interesting to see how the ones that had humanoid bodies (i.e. that weren’t just a talking head on their own) were dressed. The male robot was Japanese and dressed like a Japanese businessman or professor – much as a real man would be dressed. Whereas the women were dressed… well, in all sorts of seemingly random clothes that you’d probably describe as “cheap hooker” kind of clothes.

And I was a bit like “Why? What’s going on here?” One of them had a wedding dress on it, and the impression it gave was that women are either cheap hookers or they’re getting married.

I don’t think they’d even thought about it – it didn’t seem like there was a deliberate attempt to give that impression. They’d just put some clothes on it… on her… and those are the clothes they put on them. So that was my main kind of “aha-but-not-in-a-good-way” moment at the conference.

I should reiterate that there were lots of interesting speakers about all different sorts of things, and the interactive stands were very cool. It was a really great conference to go to, and it was great for meeting people doing different sorts of things. But it’s still notable that this — the dressing of the animatronics — was one of the things that stuck out to me.

Looking ahead: A new UN digital skills programme

“Let’s put human values first and stay true to UN values” – ITU Secretary General Doreen Bogdan-Martin giving the closing speech. Source: https://twitter.com/Dr_Black/status/1677343924661153792

You mentioned your hopes for AI and hopes that it will help people — especially disadvantaged people — be the best version of themselves. What are your hopes for the next few years? What would you hope to see happen, what do you hope to be doing, and how could people best help you?

On a personal note, I’m really excited about working with the UN on digital skills globally – putting together a program, or programs, that we can tailor for different groups of people in different countries.

So for any readers out there, please get in touch if you have any expertise around delivering digital skills programs on a large scale, or in working with disadvantaged or underserved communities in any way. I’m going to be doing a lot of fact-finding – my experience has been focused on the UK, so in terms of delivering a worldwide program it will be great to broaden my perspective. I’d be very interested in speaking with people at any level – right from the lowest level in terms of content, all the way up to the highest levels in terms of delivery methods, agencies to work with, and so on.

For example, I was introduced to the person who runs the World Food Programme – I’m hopeless at remembering exact titles, but basically, the UN food agency. I had a chat with him about whether there’s a way it might work where, along with food being delivered, we can help facilitate a digital skills program too.

So I’d welcome ideas, at any level, from across the world – from people who’ve got real experience, positive or negative, of delivering these types of programs – so that we can work out the best way to run it. Or even experience of running other kinds of programs across the world: it doesn’t have to be a digital skills program. Anything relevant to engaging communities around the world is useful, at all levels – from experience on the ground, to which agencies to work with, to how to bring people together and who to bring together. All of that kind of stuff.

It sounds like an amazing project – and a daunting one; maybe “daunting” was the word I was searching for earlier. It sounds like there’s quite a lot of work!

I don’t feel daunted at all, I guess I’m just feeling excited! Finally, I can have the sort of impact that I want to have on the world!


If you’ve enjoyed reading this article, and would like to get in touch with Sue to discuss the new digital skills program she’s working on, you can reach her via her website, or find her on X (Twitter) or LinkedIn.
