We have lift off… The opportunities and risks of generative AI
Melanie Hayes, Chief People Officer at Nash Squared, and Bill Boorman, Technology and Talent Advisor, discuss how generative AI theory is now being put into practice. This article first appeared on ComputerWeekly.com.
It feels like we’ve been talking about the potential of AI and machine learning for years. So much so, that some people may have become sceptical about the extent to which it will all actually happen. But now, the remarkable rise of generative AI shows that it’s here and it’s real.
ChatGPT and other generative AI applications are lifting off like rockets. They’re becoming embedded into ever more products and contexts. Millions of people – inside and outside the work environment – are trying them out.
For those with long memories, it’s reminiscent of when the internet and email finally became available and changed everything. There was a huge leap, followed by a number of years of working out what it all actually meant for how we work and carry out the tasks that make up our jobs.
With the advent of generative AI, the speed of change is set to become faster than ever. Individuals, businesses, regulators and governments have all got to work out how to keep pace. Sam Altman, CEO of OpenAI, recently told a US Senate committee that an agency needed to be created to regulate AI, warning that rogue AI could create “significant harm to the world”.
The EU, meanwhile, is developing an AI Act which will include the regulation of ‘foundation models’ such as ChatGPT. Any such regulation will have its work cut out to keep covering all the bases.
As custodians of the working culture – and the rules of engagement – HR professionals need to be in the vanguard of all this. There is no doubt that generative AI can and will impact on almost every job. There’s no role that wouldn’t benefit from some level of AI and/or automation, after all.
First and foremost, it represents a huge opportunity to make individuals’ roles more productive. Generative AI can take away enormous swathes of the admin-based ‘heavy lifting’ that takes up so much of our time. Drafting communications, creating templates, researching subjects, transcribing notes – these can all be revolutionised with ChatGPT and its kind.
Applications like Microsoft Copilot can create meeting notes from a verbal discussion – capturing who said what, where people are aligned and where they disagree – and suggest action points. The days of appointing a note-taker who has to spend a couple of hours writing up a summary may soon be gone. Good news for junior team members everywhere!
Because these applications learn from the feedback you give them, they can get better and better, evolving their output to more accurately capture the tone of voice you want. This is something that both of us have experienced.
For instance, Bill recently used ChatGPT to create texts to accompany over 100 of his short training videos that were being uploaded online – the tool improved (and sped up) significantly as the task went on. A relatively small number of edits were needed, after which most readers would have found it hard to tell that the texts weren’t actually written by a person. The app saved many hours of work.
As generative AI rapidly improves (as it will), it will create the opportunity to reimagine jobs. It will take away admin-laden responsibilities and free people up for more value-adding aspects. In HR and recruitment, generative AI combined with other automation tools means we could actually reach that Nirvana of spending less time on admin, scheduling and routine comms and more time on the candidate experience, employee engagement, career coaching and high touch support.
Critical success factors
However, there are a number of critical elements for success and a number of risks to be managed.
Effective use of generative AI requires specific skillsets. Generative AI is built around asking the right questions of the machine. It requires critical analysis to examine the results and then give further inputs to refine the output.
It requires the ability to analyse the sources the tool is using to ensure they’re suitable and appropriate; and to stand back and analyse the way an output is structured and whether it could be improved.
Our education system takes no account of generative AI as it stands. There needs to be a dialogue between employers and education providers to ensure we have a new generation coming through who have skills aligned to utilising generative AI.
That said, judging from feedback from schools where pupils are already using ChatGPT from a young age to ‘cheat’ doing their homework, the next generation will naturally acquire the ability to use foundation model AI. The key will be to make sure they are equipped to suitably translate that into the workplace.
Nor does generative AI remove the need for other human abilities. Knowledge will remain a major asset. People will still need to know their subject inside out in order to judge what the machine is producing.
Creativity will remain key too, to bring generative AI outputs to life and give them real impact. Indeed, the more machines produce content, the more personal creativity will be at a premium. We will crave human creativity if we reach a point where everything – from writing to art to pieces of code – is the work of a machine.
Five key risks
Then there are the risks. We see five principal aspects where risk needs to be mitigated and managed.
Firstly, there is the danger that, as with AI algorithms, bias could be built in – and replicated at massive scale and speed. In fact, there is the potential to do harm on a scale never envisaged. We need to be clear what we base our learning on and double-check our definitions of good to make sure machines don’t pick up on our imbalances.
Secondly, there is the risk that the more generative AI is used, the less people use their own core skills and the less they understand the data. With a Boolean search today, for example, you don’t need to understand how the technology is working, but you do need to know whether the results you’re getting look right.
There is a risk that we will lose our institutional knowledge around how to interpret and do things – gut feel, instinct, the knowledge that comes with actually doing and learning for ourselves. This reflects a wider danger in the use of technology generally – that we end up ‘managing by dashboard’ rather than by knowledge. We need to ensure that people keep their own skills current and use machines to supplement these, not replace them.
Thirdly, there is the prospect of generative AI being used to cut costs – rather than to increase productivity. If it is used in that way, we won’t see the full power and benefits of it.
Say you have a team of 100 recruiters and it’s possible to automate 80% of their work through generative AI and other automation. Would you reduce the team to 20 and save the costs of 80 people? Or would you keep the team of 100 and have them add hugely more value through better use of their time and skills? That’s the kind of short-term vs long-term debate generative AI is likely to create.
Next is the ability of generative AI to generate fake content as well as plagiarise and copy. New validation and verification methods may be needed along with new areas of compliance. It will also increase the debate around what is and isn’t authentic and legitimate – and whether that matters. If someone uses ChatGPT to help them get through an online assessment process, for example, is that good, bad or indifferent? Or if someone ‘writes’ their CV with ChatGPT?
Finally, there is the risk of people putting sensitive or confidential data into these open platforms, not realising that once it has been ingested by the machine, it can be repeated and used in any content. Led by the HR function, organisations therefore need clear and thorough policies around generative AI. There is an argument for having a ‘driving licence’ type system, where only those individuals who have passed the test are able to use it.
HR needs to lead from the front
There are many aspects to consider. And these need to be thought about right now, not at a distant point in the future. HR leaders and their teams should be at the forefront of this, ensuring that generative AI is a tool that boosts the organisation rather than causing issues and becoming divisive.
So, are you accommodating generative AI into how the HR team functions and thinking about how it can map across the organisation to reimagine and enhance work?