What makes for an effective AI policy?

Latest News
Posting date: 28 June 2024

Nash Squared CIO, Ankur Anand, looks at the components that make up a good AI policy. This article first appeared on boardagenda.com.

Board ownership of an organisation’s policy on using artificial intelligence is essential for effective governance of the technology

As artificial intelligence and, more lately, generative AI (genAI) become more widely deployed across businesses, we are in the middle of a remarkable era of innovation and possibility. The potential of AI to boost productivity, facilitate problem-solving and enhance creativity is probably higher than anyone imagined.

However, to make the most of that potential, and to successfully bring human intelligence and artificial intelligence together in a healthy balance, guidelines and guardrails are needed. 

People can’t be left to use AI at work in whatever way they feel like without some clear guiding principles to follow – as well as, where necessary, red lines that should not be crossed.

Part of the governance framework

An AI policy is a foundational element of a successful approach to AI. As AI becomes more ubiquitous, an AI policy becomes part of the governance framework that any organisation functions by. 

A policy is not merely a ‘nice-to-have’: it is becoming table stakes in the new age we have entered. Encouragingly, our research at Nash Squared shows that organisations are taking this on board and acting accordingly.

Six months ago, in our annual Digital Leadership Report which surveys technology leaders around the world, only one in five organisations had an AI policy in place.

But in a Pulse survey of tech leaders that we conducted more recently, that figure has doubled to 42%, while a further 38% have plans to create one.


That is a striking increase in a short period of time, and a testament to the speed at which AI is developing and being adopted. Our survey found that almost three-quarters of organisations have made genAI available to at least some of their employees, and one in five have deployed it enterprise-wide.

Elements of an effective AI policy

A good policy will set out clearly what AI can be used for, the protocols and checks and balances that should be followed (such as having a human in the loop and the need for human review before any outputs are used or published), and the ethical principles that should dictate its use—such as transparency, accountability and fairness.

There should be control mechanisms to guard against algorithmic bias, while the data ingested into AI applications needs to be of requisite quality and accuracy.

Security is another key concern: there must be clear guidelines to prevent commercially sensitive information from being released into the public domain through public genAI platforms, while, needless to say, respecting data privacy and protection rules is also paramount.

There may also be intellectual property and copyright issues that come into play. Monitoring systems are needed to track the use of AI and maintain records for compliance and auditing purposes.

More broadly, an AI policy can also help the organisation with its sustainability and ESG goals, for example by evaluating the sustainability practices of AI vendors and by embracing use cases that align with the organisation’s ESG ambitions. Given that AI consumes significant energy, companies should also build a plan to stay on course for carbon neutrality as they continue to roll out AI.

For all of these reasons, an AI policy is essential—and it needs to be clearly communicated and actively discussed across the business, rather than passively published in a quiet corner of the intranet.

Done well, an AI policy can have a double benefit: not only will it reduce risk and help employees use AI safely and productively, but it can become an effective educational tool by facilitating discussion about AI and helping staff understand how AI can support them in their roles. This will increase confidence and adoption.

Owned from the top

Ownership is another key question. Operationally, the AI policy may often be owned by a technology leader such as the CIO or perhaps by HR—but in my view they are more akin to the custodians of the policy.

AI has become so strategically important that it should be owned at the very top: by the executive committee, or even by the CEO personally. It is their buy-in, engagement and sponsorship that sets the tone and establishes the culture needed to embed AI in the way the business operates.


The board has a crucial role to play. Some organisations have established AI committees made up of senior individuals, or ethics boards for which AI forms a significant part of the remit.

Our Pulse survey shows that a small proportion of businesses (5%) have also appointed a chief AI officer, with a further 7% planning to appoint one.

Whatever the governance structure at a specific organisation, non-executive directors can play an important role in the oversight of AI within the business—strongly encouraging the creation of an AI policy if one is not already in place, reviewing the effectiveness of the policy once published, and ensuring the policy remains aligned with the wider ethical values and social responsibility commitments of the business.

Directors, both executive and non-executive, also have a particular responsibility to ensure that their own use of AI tools and applications, personally and within their teams, is in line with the policy and is ethical, safe and secure. Arguably, there is a shortage of specific training for executives in this area, a gap the market may well move to fill.

Is your organisation up to speed?

An AI policy doesn’t solve every problem or guarantee success: notably, the same proportion of Pulse survey respondents (four in ten) are concerned about the risk of AI misuse whether their organisation has an AI policy or not.

Many organisations have also retrofitted their policy after AI usage has already begun among staff—an inevitable reality given the easy availability of genAI platforms such as ChatGPT, Gemini and Copilot.

The policy should be regularly reviewed, possibly even monthly given how fast the AI landscape is evolving. As AI evolves, and as mandatory regulatory requirements are potentially introduced, the policy will need to evolve with it.

If your organisation does not yet have an AI policy, is that a defensible position and should you be strongly advocating for one to be created? If your organisation does have one, is it fit for purpose and providing sufficient support and guidance for staff? These are now key questions as AI becomes one of the defining characteristics of our times.
