Summary of the Interview
This interview features Dr Laura Gilbert, Executive Director of AI at the Tony Blair Institute for Global Change (TBI) and former Director of Data Science at 10 Downing Street, in conversation with David Savage, Tech Evangelist at Harvey Nash. The discussion centres on how AI can be intentionally designed and deployed to improve government, support society and shape a more equitable future.
Laura begins by reflecting on her recent talk at the “AI for the Rest of Us” conference, where she argued that instead of trying to predict the future of AI, leaders should build the future with intent. At TBI, she leads an AI incubator that develops practical AI solutions for governments worldwide, bridging the gap between policy ambition and real technological capability.
A key theme is the critical need for AI literacy at leadership level. Laura explains that many organisations struggle not because staff cannot use AI, but because leaders lack hands-on experience. This results in poor strategic decisions, ineffective procurement and slow adoption. She stresses that meaningful progress only happens when leaders personally engage with AI tools and understand how they can be applied within their organisation.
The conversation also touches on the widespread narrative that organisations are prioritising investment in AI over people. Laura argues the framing is misleading. The real opportunity lies in empowering workers to become AI-enabled, not replaced, and in building tools that elevate human capability rather than diminish it. At the same time, she warns against assuming we can accurately predict which jobs will be most affected, citing how unexpected it was that software engineers and photographers became early groups disrupted by generative AI.
Laura explores the future of work across different demographics, highlighting that older adults may be one of the groups who could benefit most from AI through improved access to services, protection from scams and reduced loneliness. For younger workers, she is overwhelmingly positive, describing AI as a tool that can free people from low-value tasks and enable them to focus on high-impact, meaningful work.
Ethics, intent and societal responsibility form another major theme. Drawing on her experience in government, Laura explains why transparency, open source solutions and a focus on human value are essential for creating AI systems that truly serve the public. She discusses TBI’s Promethean Project, designed to unite governments and industry to build open, accessible AI tools that support global public good. Without intentional design, she cautions, AI will deepen existing digital divides and disproportionately benefit those already in privileged positions.
Policy is also examined. Laura believes policy can keep pace with AI, but only if governments build flexible, outcome-first frameworks rather than rules tied to specific technologies that will quickly become obsolete.
The interview ends with Laura’s wider reflection on AI’s potential to reshape society. While she refuses to predict the future, she emphasises that with thoughtful design, AI can create space for more fulfilling work, greater wellbeing and a more equitable economy.
Key Findings from the Interview
- The importance of intentional AI design rather than prediction
- Why leadership-level AI literacy is essential
- The risks of poor procurement and misunderstanding of generative AI
- The misconception that AI investment competes with human investment
- Workforce impacts across demographics, including older adults and young people
- Ethical AI development, transparency and open source practices
- How AI can transform public services, from teaching to healthcare
- The need for flexible, future-proof policy and regulation
- Addressing the digital divide through active intervention
Tech Flix 6 Documentary
This interview is part of Harvey Nash’s latest Tech Flix documentary, which explores the AI paradox: AI is scaling, skills are not. The film examines how AI is transforming work, education and regional economies, while asking a fundamental question:
Are we preparing people for the future, or leaving them behind?
Watch the full documentary here.
Full Transcript
David Savage 00:00
First of all, Laura, thank you very much for giving up some time today. How are you?
Laura Gilbert 00:04
Very well, thank you.
David Savage 00:05
You've had a busy morning; you were giving a speech at one of the museums, and I can't remember which one now.
Laura Gilbert 00:09
Yes, it was the London Museum, at the AI for the Rest of Us conference. Was it a good talk? Well, I mean, I gave it, so you'd have to ask the... I mean, I thought it went brilliantly.
David Savage 00:17
What was the focus of it?
Laura Gilbert 00:19
Actually it was about how we build for the future with AI, which is something I feel very strongly about. A lot of people spend a lot of time trying to predict the future, which we have a lot of evidence doesn't really work. And my angle on this is that instead of trying to guess what's coming next, we should build what's coming next with intent. So it's about that, loosely speaking.
David Savage 00:42
So, look, we're here, sat in your lovely studio at the Tony Blair Institute for Global Change. Hopefully I've got that right in its full title. Do you want to tell me what you do here?
Laura Gilbert 00:51
TBI generally advises world leaders on better government, and we are starting to directly apply AI to that work. So I built an AI incubator, with engineers, AI engineers and data scientists, and we are directly building AI solutions to help leaders govern better.
David Savage 01:10
And you yourself worked at the heart of government, so you're very well placed to help TBI in that endeavour.
Laura Gilbert 01:15
Absolutely. I joined government in September 2020, where I was Director of Data Science in Downing Street. So I built the No 10 data science team that aimed to use evidence to improve policymaking. And then in 2023, I built i.AI, the incubator for artificial intelligence, both of which are still going in government, and going very well as I understand it.
David Savage 01:38
Look, it makes sense that the current government cares greatly about AI literacy. They're trying to encourage a base level of AI literacy within the general working population. How do we get that to a point where it works for enterprise, as opposed to being something that's quite vague or broad or too light touch?
Laura Gilbert 02:01
Well, it's an interesting question. I think we talk about this a lot. For one thing, there's a lot of confusion. Generally, when people say AI now, they do mean generative AI. And the tooling in that space is very rapidly developing, and in many cases it's very good. I think it can be difficult for organisations to figure out which style of generative AI they should be picking up and running with, and there's a lot of scope for experimentation. I think that where things don't always go very well in organisations is when the leadership doesn't really use it themselves. We get a lot of examples, in and outside government, where somebody very senior will go, well, you know, this organisation is now going to use AI, we're going to be permissive. And then they sort of wait for, I don't know, an IT team to pick it up or something. When you speak to that leader themselves, they don't have a good mental model of what works and what doesn't work and what would be effective for their organisation, because they don't physically pick it up themselves. I think we're very much in the space of: if you can upskill the people at the top of the organisation (and the reason that's hard to do is because they're busy, and they're often not used to doing things a certain way), then they start to understand what their staff are talking about or working with, and it disseminates a lot more quickly. In the government space as well, I think the lack of grip on real generative AI solutions at the top levels is sometimes actively quite harmful, because it means that people are making decisions about things like data privacy, or what you could do with AI in an organisation, when they don't really understand it, and it can lead to very bad procurement as well. So if you are an organisation, or the leadership of an organisation, and you think you'll procure an AI solution, but you've really never picked it up and touched it.
It's very hard to tell if this company is selling you something really useful at a good price, or entirely the wrong thing at a terrible price, and it comes down to the sales tactics of the organisation doing the selling. So I think it's very, very important that we upskill at the leadership level, and the only way I know to do that really is very hands-on, and often it involves ambushing those people, shoving a phone in their hand and going, I've made you an account, off you go. We do offer a service akin to that.
David Savage 04:24
Look, I can't help but ask this question then, because listening to you there and saying part of the problem is leadership doesn't understand, when you then see reports, as I did in the paper last week, suggesting that 41% of organisations are likely to invest in AI over people, you can't help but be a bit concerned.
Laura Gilbert 04:42
Yeah, I think that's an interesting framing, isn't it? Because if somebody asked me that as a leader, would you invest in AI over people, what does that mean? Do I want to reduce the size of my workforce? Do I want to be AI-enabled? Do I want to be hiring staff that are good at AI, you know? Or am I actually just thinking black and white: what I'm going to do is fire a bunch of people, because I think that AI will answer all of my problems? If you're in that last category, that's not a good place to be. It's not going to help you. But we certainly do think, and this is said a lot, that your AI-enabled worker who understands the power of the tools and can use AI tools is going to out-compete someone that can't. And I don't think that in itself is a surprise. So I suppose my reflection on that is I'm not sure that that's a well-formed question.
David Savage 05:34
No, it's interesting, because it's certainly the narrative in the media, and perhaps, if that's the case, it's not the most helpful one for the industry as a whole.
Laura Gilbert 05:43
I think that's true, and I think also it's one of those areas where the future of work is challenging. There have been a number of reports over the past 10 years that talk about the impact of AI on the economy, all of which are big numbers in the positive direction, and about likely changes in the workforce. Now, anyone that's ever heard me talk, or had a conversation in a bar with me over a couple of glasses of wine, will know that I feel very strongly that predicting the future doesn't work. We've got loads of evidence about it. People are trying to apply that to the workforce at the moment, and if I look back three, four, five years ago, and somebody had said to me, what parts of the workforce will be negatively affected, how will the workforce change over the next five years? I would have said shelf stacking, robotics. I wouldn't have said software engineers and photographers. So the odds of us being bang on right about it now are probably very slim. Companies that just look at it and say, well, I'll be able to get rid of loads of the workforce in this way that I can fully predict, are probably jumping a bit too early, and I think you need to be very careful about that sort of prediction, because we can really get it wrong. That said, the kinds of AI we can build now really can replace slow systems where humans are doing grunt work, and where people taking decisions have to wait a long time for the information to make them, or are lost in a sea of too many decisions. You really can build systems that get the right decision to the right person much faster, and I think that really enables you to build a workforce that is more focused on what you need and, if we do it really well, has more satisfying work to do. I think that it's really worth trying to do that.
David Savage 07:35
Nash Squared's Digital Leadership Report has this stat that just half of the organisations we've surveyed are investing in their people in terms of upskilling or training their staff, and that seems low. I think it's really interesting that you say that a lot of leaders don't necessarily understand AI, because that's really helpful context when you put it next to that stat where AI is concerned. How do we make it commercially attractive to organisations to upskill their workforce in AI skills?
Laura Gilbert 08:11
I think my base assumption would probably be that it's not necessarily something that needs a lot of intervention. We really should, and I think do, see that companies that do this well perform better, both in the sense of the outcomes and the actions that they're performing, but also in staff satisfaction and being a more attractive employer. So I think it might be a case where the market takes care of itself, if I'm completely honest. There are probably sectors of the market that are quite left behind, and if they are critical to national growth, for example, one might pick out, say, the energy sector, and we think that some of those companies are not growing and adapting as quickly as we'd like them to, particularly compared with similar sectors in other countries, then I think you might want to intervene. And at that point, it's not very challenging to upskill people in tooling, really. You know, incentivisation programmes are quite easy to design, I would say.
David Savage 09:15
You gave a bit of focus there to different sectors, but thinking about different demographics within the market: a lot of time at the minute is being given to the possible reduction in the number of entry-level roles. It'd be interesting to know what you think about that as a very broad point, but also, is that the only demographic that we should be thinking about?
Laura Gilbert 09:38
No, I'm coming at this from a point of bias: I've got a particular interest in a very different demographic, which is not necessarily people in the workforce. Older people could really benefit from greater access to, and a greater understanding of, modern AI technology. Everything from having safer ways to keep them away from scams and financial problems, through to the ability of tools in the large language model space to connect them with services that they might need, give them advice, help them to plan. You know, it can do everything from helping you figure out how to do your will, to helping you find a gardener. And we're still in a place where 1 in 10 GP appointments in the UK are taken up by older people whose fundamental problem is that they're lonely. So, you know, you've got a population who are arguably underutilised, who actually have a really strong ability to contribute to a community, have a lot of expertise, and are potentially supporting younger family members. And if you could upskill those people so that they're able to access the services they want, they're safer, and arguably they're able to find ways to be less lonely, you could potentially also help the sandwiched people in the middle. A lot of the reason we find that people at leadership levels in organisations are underskilled in AI is that the time and the mental energy it takes to pick up a new skill, when you're managing perhaps ageing parents and children in school as well, is really significant. If your older parents are doing it, it trickles down, actually. So I think you could really impact the population by targeting that older demographic, and empower them at the same time.
On the entry-level side, I think, I mean, work has changed so much, and I remember decades ago, you know, sort of going into university with really this narrative of you need to figure out what your career is going to be, and you need to get a good job in a company with a pension, and that's how you will succeed in life. Yeah, I think I've sort of fully changed career probably four times, but, you know, if you're a younger person now, the future of work looks so very different, and what we really need to develop in those spaces is people who are very flexible, who know how to learn and pick up new skills, who know how to apply knowledge across different domains in a way that we've never really prioritised before. So I think we really need to be looking to grow that flexible workforce who are able to sort of use their intellect and, you know, to understand how to learn very quickly and to pivot. So that would be sort of where I'd be looking at investment in that space, I think.
David Savage 12:23
I find it fascinating because we spoke to young people, both in schools and in universities, in Leeds, and we asked them about their perspective on AI and its impact on their future job opportunities or employment prospects. And they were largely positive.And yet, when you pick up the paper and you read it, again, the narrative is quite overwhelmingly negative. Just from a personal perspective, when you talk to young people, do you think they should be largely positive or what do they need to be thinking about to make sure that they are employable?
Laura Gilbert 12:56
Oh, I really do. I mean, I'm probably almost quite jealous. I mean, remember all those hours and hours and hours we spent building presentations and doing reports and filling out forms, all the things that they'll never have to suffer through. I think, you know, I use AI in my work a lot, in many, many different ways. And in my experience, it enables me to put my efforts and, you know, the intellect that I have in the direction where it has the most impact. It really lets me enhance the value I think I'm bringing to my work. So I think they're growing up in a world where they can focus on the things that matter to them and build a career where they're really delivering value. I think it's very exciting. That's not to say there aren't pitfalls and downsides, and there are real issues with areas where we don't understand what the impact will be long term on people's mental health. We don't understand the impact on society and how society will look in a few decades. But I genuinely don't see a reason to be frightened for young people. I think it's building a world with different sorts of careers that have a huge amount of inbuilt flexibility. And it's a huge world of opportunities where you can spend more time learning and evolving than ever before, really.
David Savage 14:13
You talked about the sandwiched middle. You talked about the executives who perhaps don't understand AI as much as they might like, or as much as we would want them to. I was talking to Peter Kyle when he was still at DSIT, and he said that you can close the gap in the level of knowledge around generative AI between a 55-year-old and a 35-year-old in just two and a half hours. When I spoke to him, I felt that that stat wasn't necessarily rooted in the real world. Two and a half hours isn't going to convince a CIO that that person is therefore right for their workforce. How do we bridge that? How do we make sure that people who are in the workforce, who are still very productive but maybe don't have the same learning capacity as someone who's literally just starting their career, have an active role to play and bring skill sets to the table that are attractive to organisations?
Laura Gilbert 15:03
It's an interesting question. I'd certainly have questions about how that statistic's calculated. I think I'd probably reject the narrative that a 55-year-old would find it inherently a great deal harder to understand and adapt to a changing technology space than a 35-year-old, particularly if they're in a position of responsibility at a company, really. I do think a lot of it comes down to opportunity. If you're at an earlier stage in your career, you do tend to have a bit more space for experimentation, and perhaps a little bit less constraint on what you're likely to do on a daily basis. So I think that might be a lot of it. But when it comes to the actual upskilling point, again, upskilling's not a word I'm wild on, because it tends to be interpreted as: we can take you from somebody who's not good at something to somebody who's good at something, in a general and across-the-board way. And we don't really believe in that. People have different levels of aptitude. They have different types of skills. And when we're talking about upskilling people in AI, it can be everything from, if you work in sales, how do you use AI to generate better sales? Through to, how do you, on your phone, record and summarise a meeting? Or, if you're a software developer, how do you use it to build your code better and faster without risking it being terrible? So we're using it for very different things across the board. And where I've seen it go wrong, I think, is when people go, well, what we'll do is we'll just sit you in a room for two hours and we'll tell you about AI, and then you will understand it. And that doesn't have a big impact. What does have a big impact is finding the right tool for their job, sitting down with them and showing them how to use it. And that's not about AI. That's just about people and technology in general, I think.
David Savage 16:54
I mean, forgive the analogy, but is it a bit like we're still treating people and jobs in a very binary, analogue way, and the jobs that we're now asking people to do don't resemble a "here's a hard skill, you either have it or you don't"?
Laura Gilbert 17:09
I think that's exactly right. And when we go out and teach leadership how to use AI, we find out first: what do they do in their jobs? What do they do in their jobs that they hate doing, and find frustrating, that slows them down? And then we give them concrete examples of using some relevant, easy-to-use AI system to streamline that. And before they leave the room, they've done it themselves on their device, they've got all the accounts they need, and they know how to do it. That way, when they leave, the next time they need to do it, they go, oh yes. If you sit them in front of a screen and give them a presentation, they will probably think to themselves, very genuinely, well, I'll try that later. And then they don't.
David Savage 17:50
It's funny, isn't it? Whenever we talk about technology, we say that it's evolving quicker than it's ever evolved, and hey, it is.
Given the acceleration of AI, can policy ever really hope to keep up and be flexible and agile enough to make sure that it creates an environment where people are able to be fit for purpose for the workforce in the future?
Laura Gilbert 18:12
Well, it's not in any way an impossibility. It certainly can. Whether or not it does is something else entirely. And I think, as with a lot in the AI space, you know, when it comes to everything from regulation to actually how you build systems, you need to make an active choice to build something where you've considered the impact, you've considered the outcomes you want, and you've baked flexibility into it. And across the board, and it's not just policy and regulations, et cetera, if we're not building really flexible systems, we're probably going to come a cropper. We didn't see ChatGPT coming. As I understand it, even the OpenAI engineers that built it didn't see it coming, and were very surprised. We didn't really predict DeepSeek. The next thing that happens, we won't predict. We are bad at predicting the future for a number of very, very good reasons. So any system you build that is fundamentally inflexible is going to become outdated faster than ever before, because of the way the technology is growing. So it's definitely not that you can't do that, but I think the people making those policies and regulations really have to have that at the forefront of their mind. If you build systems that focus on the outcome you're trying to drive, or the outcome that you're trying to protect against, potentially, and do that carefully and with thought, then we've got a chance of building things that are sort of future-proof. If, on the other hand, you take the approach of, you know, this specific technology, we're going to build a policy for it or a regulation against it, the technology will very quickly supersede it and everyone's wasting their time, really.
David Savage 19:42
Through the course of this film we've spoken to a number of different leaders, and we also went to Leeds Trinity University, and unsurprisingly they placed emphasis on the ethical implementation of AI and on prioritising people alongside the technology itself. You here at TBI are in this wonderful position where you talk to government, but also to a lot of very serious players in industry. What tools or initiatives can be put in place to make sure that AI genuinely is implemented in an ethical way that does put people first?
Laura Gilbert 20:15
Yeah, I mean, it's something I feel very strongly about. For a bit of background, my AI team in government had two mantras, really. One of them was radical transparency: be fully transparent about everything that we're building. We wanted the public to trust that we were working in their best interest. And the other thing we were doing was trying to make government more human using AI. I think a lot of the time people are afraid that you're trying to build technology to replace humans. We are trying to build technology that gets humans to do the thing they're best at, and in some cases, what they enjoy. So, you know, as an example, in the UK, half of all teachers leave the profession within four years, which possibly is a signal that teaching is not really a job that people can do as it stands. And I want to be in a world where teachers really enjoy their job, and they are looking after the children in the way they want to, and they're providing that social value and guidance and support to children. And you use the AI to, as you would imagine, do the paperwork and ensure that the children are having their learning needs met. That's the kind of thing that AI is really good for.
There will always be people that use technology in ways that I personally think are unethical, you know, perhaps in the advertising space. But as a community of technologists, we need to decide what the intent is behind the thing we're building. For me, the intent is to reduce inequality in the world. I want everybody to have a base level of safety, care, wellbeing and health. I want the bottom level to move up, and I don't want a massive expansion of the difference. So when we build technology, we build very strongly with that in mind. And a lot of people in the tech world are in that space.
As consumers, we also have a responsibility, and some power, in that space: to not accept people building technology that is not in our best interests and that can be harmful to people. So I think being clear on what your intent is and what you care about, and being prepared to actually have a bit of a fight about it, becomes very important, because we won't necessarily have a lot of time to intervene once people start building things. So I think it's everybody's responsibility to really care about what the future looks like and try to build for that future. I use this word a lot, but we can't predict the future; we can intend the future. And I think this is something that pretty much everyone should be genuinely concerned with.
On the side of technologists, we're doing this project here at TBI called the Promethean Project, kicking off a very big hackathon in May, where we're trying to bring together industry and governments to build open source code and open source solutions to solve world problems, in a way that is pretty much free, particularly for those governments that can't afford the kind of engineering skills you need to solve some of these issues. And we know that there's a great appetite in industry; engineers often really want to build something for the public good, and to know that the tools they build are helping people. So we want to give them that opportunity, and we really want to drive the narrative that this does matter. Ahead of us are a couple of potential worlds. There really is one where all of the technology supports the people who are already wealthier and better supported, and a lot more people are left behind. And there's a world where everybody gets that basic standard of health care, education, safety, security and mental health. And I really believe that we as technologists, and also consumers, industry leaders and government leaders, should try to build that future with intent, where it's a better world for everybody.
David Savage 26:04
Leeds aspires to be an AI growth zone. They're rightly interested in the level of investment that that could bring into the region.But what would you say to someone who was concerned about AI deepening the digital divide?
Laura Gilbert 26:24
There's no reason to think that AI, left to its own devices, as it were, by which I mean if you just sort of wait and see, won't deepen the digital divide. We already have a digital divide, and there's no particular reason to think that if you just wait, the people on the have-not side of it will be magically swept up and supported. I don't think they will. We will have to make active choices and provide opportunities and build solutions that are accessible to those people. And, you know, it's my belief that we should do that, and not just because of ethics, but because of national growth. The more people you can bring up, the less you have to spend on services, and the better people's health tends to be. So it's good for people and families and communities, but it's also good for prosperity.
David Savage 27:20
Look, I know you've said throughout that predicting the future is a fool's game, or whatever the better phrase is. We appear to be caught right now in a period where efficiency and productivity cuts are very much influencing the thinking of enterprise. Do you think that's just a moment? Do you think we're going to come out of that?
Laura Gilbert 27:42
Yeah, I hate predicting the future; there's loads of evidence that says we're very bad at it, and some evidence that suggests the more expert you are in something, the worse you are at predicting the future. So I'm really hesitant. I don't have a confident answer to that. I would like to think there is a world where, in fact, we do make those efficiency and productivity cuts and we grow other areas for human endeavour. So I really like the idea of a world where all the rote work, and the things that don't really add human value, are handled, and there is another place for the people currently at that point in the workforce, where they're able to have perhaps more fulfilling careers. I don't see any reason that you can't make these systems more efficient and open up more space to do exciting and innovative things.
David Savage 28:36
Laura, thank you very much for your time.
Laura Gilbert 28:38
Thank you.
