
“Not Safe for Work:” Internal Auditors Should Follow Policies When Using AI

“I was recently gathered with a group of colleagues at a networking event in London when one of them gently chastised me for unabashedly urging the profession to jump into the artificial intelligence (AI) arena. In her opinion, my words carry a lot of weight, and internal auditors who hear me say “start using AI” may do so without considering all the risks or consequences. I pushed back a little, but her point was well taken. The opportunities that generative AI solutions present abound for our profession, but we must make sure we know the risks and play by our organizations’ “rules” – especially when accessing AI using the organization’s technology.

In the past year, AI technologies have proliferated across various sectors, revolutionizing how businesses operate. The advancement that has generated the most excitement is generative AI. Tools like ChatGPT are capable of creating content, from text to images, with remarkable human-like fluency. While generative AI holds immense potential for innovation and efficiency, its adoption in the workplace also brings a myriad of risks that demand careful consideration and adherence to established policies.

Generative AI, powered by deep learning algorithms, has garnered attention for its ability to generate content autonomously. Whether it’s crafting product descriptions, generating marketing copy, or even creating entire articles like this one, generative AI can mimic human writing styles and produce content at scale with astonishing speed.”

Read the full blog post by Richard Chambers here.