While Artificial Intelligence (AI) has been a part of our technological landscape for years, it is getting a lot of attention thanks to the rise of Generative AI (GenAI), which has captivated the public’s imagination.

Herbert Smith Freehills’ Director of Generative AI, Susannah Wilkinson, notes that this is because “GenAI has democratised access to powerful AI systems, it’s unlocking the wave of automation”. Having advised clients on AI for over a decade, and most recently in her role leading the firm’s adoption of GenAI, Susannah sees the most significant change as “how it’s triggered a step change in human expectations. It’s reshaping people’s perceptions of what technology can do and really challenging the status quo.”

AI’s potential for unexpected and unintended use poses distinct governance challenges, and boards need a robust approach to managing it. AI requires a cohesive and adaptable governance framework, bolstered by strategic collaboration across various functions.

Recently, Susannah was joined by Professor Nicholas Davis, Co-Director at the UTS Human Technology Institute (HTI), and Anna Gudkov, Senior Policy Advisor at the Australian Institute of Company Directors (AICD), to get their insights on how directors can navigate AI responsibly.

In June, the HTI and AICD launched a suite of AI governance resources specifically designed for directors. Nick and Anna draw upon these materials to share their expertise in managing this rapidly evolving technology and offer practical tips that company directors can take into their next board meeting.

Key takeaways and practical tips

AI is different from other technologies due to its reliance on data, opacity in its processes, and scalability across various use cases.

Anna: One of the reasons why AI can be difficult is because it’s really hard to identify where in the supply chain it’s being used, and often you already have products with existing inbuilt AI within them, a lot of Microsoft products, et cetera, and you probably don’t even know it’s being used. So that’s the first thing, opacity of existing use: really hard to pinpoint, particularly for directors who are sitting at that oversight lens and angle.

The second is opacity of AI outcomes. Particularly with generative AI, where it’s used very broadly, it can be very difficult to understand how it’s actually going through its process, which makes it difficult to explain how it comes to an outcome.

And then finally, fundamental reliance on data. AI is basically completely reliant on data, and it is a little bit of rubbish in, rubbish out... and then it comes with data governance complications and challenges, for example, how you’re collecting that data, what your AI model is being trained on, and how you’re making sure that you’re complying with your privacy and data governance obligations.

‘Generative AI’ generally attracts greater governance challenges than ‘Narrow AI’. The latter is concerned with solving very narrow problems within a supervised learning environment; the former captures a much broader range of tasks spanning all areas of business.

Anna: Narrow AI is really more around solving very narrow problems. And so the learning is really kind of supervised learning. It’s around narrow data sets. It’s there to resolve a very specific issue. And some classic examples are recommender systems, what your Netflix is doing, search engines, facial recognition. Generative AI is used for a much broader range of tasks. And the reason that’s interesting and relevant to directors is because it has governance implications: when you are dealing with a system that can apply across your business chain and across different parts of your business, and can also be used for purposes that you didn’t originally intend, it does create different governance challenges. Examples are ChatGPT; you’re seeing AI increasingly used to generate creative content, which was quite out of the realm of possibility years ago. So that’s a really new development.

AI presents both opportunities and risks. Directors should focus on the commercial, reputational, and regulatory risks associated with AI, while also recognising its potential to drive productivity and innovation.

Susannah: The fact that it is being used across organisations, in so many diverse and different ways, means we might have a sense of what it’s being used for now, but it will embed itself in so many different parts of the way we interact with machines, the way we interact with ourselves, the way we work, the way we learn. Understanding that those systems are being deployed in a safe and responsible way from the outset is critical, because it will be very difficult to reverse engineer and unpack all of that complexity in the future.

Nick: [paraphrasing] Essentially, if you are using artificial intelligence and different forms of machine learning and other systems in your business, you really need to care about risks, which trace back to three sources:

When the system doesn’t work as intended: If it has biases, if the system has security failures, if something goes wrong, that can create regulatory risk, reputational risk and commercial risk for you.

When people can and do use AI systems for malicious or misleading purposes: So, thinking about: is the system being used in ways that reflect poorly on us, or indeed breach some kind of regulatory duty?

When the system crosses a line even though it works as intended: Even if everything works perfectly and there is no malicious use, is it crossing any lines? There are increasingly cases where AI systems cross a line, in terms of privacy, in terms of rights limitation, in terms of intellectual property risks, et cetera.

Directors need to acknowledge that these unique characteristics of AI pose challenges and may require emerging governance approaches. Traditional governance approaches include guru-based governance and review-based governance, where AI questions are deferred to technologically literate directors or employees, or to in-house legal teams. Such approaches work well for general technology but may prove ineffective for minimising the errors and harms of AI.

Nick:

Guru-based governance: the approach where everyone just defers to the most technologically literate person on the board or in your organisation and says, well, what do you think about AI? Is this a good idea?

Review-based governance: where legal teams, in-house legal in particular, are being tasked to answer tricky AI questions.

To effectively mitigate AI risk and harm, emerging governance approaches should be adopted. Committee-based governance is a good starting point that can later evolve into a culture-based governance approach.

Nick:

Committee-based governance: it’s the first step for organisations to come together, assign accountability and get committees of people to look into this. We’re seeing a lot more information flowing, so a lot more monitoring and tracking.

Culture-based governance: culture is one of the most reliable and powerful governance techniques we have. And certainly the history of health and safety culture in Australia and workplace safety has been that you start with rules and enforcement, but until you actually build it into culture on sites and in daily practice, we don’t see those kinds of failures, those errors and harms, drop.

Such governance structures require greater effort to implement and always remain an evolving process, but they are essential at this early stage, before the risks and harms materialise.

Nick: So the process is to say, well, we definitely need a governance structure, but that structure needs to evolve over time. Let’s start by thinking about the biggest priority issues we need to cover, and let’s look at ways we can create processes, controls and heuristics that push decision-making into the right part of the organisation, so that the board is absolutely sure that, as this investment goes through and as the systems evolve, directors’ duties have been executed and the balance of opportunity and risk that the board is responsible for is maintained.

Nick: And so, what is fairly classic: let’s get lots of people from across the organisation, from technical and legal and compliance teams, let’s get product owners together, and let’s understand what systems we’re using and make some decisions about which need different forms of approval. But getting that right now, rather than just leaving it to later, is absolutely essential, because if it’s too late, a lot of those risks will just migrate and get hidden in the system until you get a major crisis.

Align the use of AI in your organisation with broader business goals. Identify how and why AI is being used across your organisation to ensure employee practices support strategic objectives and mitigate risks.

Anna: What is really important is to make sure you’re not doing AI for AI’s sake, that you’re not kind of saying, oh, this is great technology, let’s go and play with it. It really is about ensuring alignment to your organisational strategy. Start with your own business strategy first [and integrate your AI strategy into that].

Nick: I think what I’d be urging you as a board member is to make sure that your management team doesn’t stop at that [creating a policy], because it often seems the easiest step, from both a board and a management perspective, just to nod to, yes, we’ve got an AI policy, et cetera. The fact that lots of your employees are actually using it in different ways that you don’t know about is critically important.

AI relies heavily on data, making data governance critical. Ensure that data used for AI is accurate, unbiased, and compliant with privacy and data protection regulations.

Anna: One of the first things you really should be doing is understanding where within your organisation AI is being used. You can’t really implement controls or processes or policies without knowing where they’re going to sit or what they apply to. So that’s where you have that discussion with your supplier: okay, where is AI currently being used in the suite of products? And you make sure that you’re also dealing with it when you’re buying or procuring new products and services.

Data protection regulations can differ between jurisdictions, so it is important to ask vendors of AI products where data is processed, a question which, in Nick’s experience, they often avoid answering.

Nick: For certain use cases, working at a university, working with government clients, working on mission-critical things, crown jewel-like topics, I need to know that you know where that data’s processed. So, as a director, do keep in mind that if you think that’s important, if you think that should be on the radar, your management and your procurement folk in the organisation may not be getting an easy, straightforward answer on this, because people do see that as a competitive aspect and an area of concern.

Engaging with a diverse range of internal and external stakeholders will allow organisations to better explain and manage AI’s impact. This will reduce the risk of bias against vulnerable and marginalised groups and foster a culture of transparency and accountability.

Susannah: GenAI use is pervasive in organisations. It’s not just a tool that sits on your desktop; it’s a mindset, how we use it. We need to think differently about the way we interact with it.

Anna: There is often a lack of understanding of how the algorithm does its work, and if you take what the AI is doing and use it to inform your decision-making, that’s a really fraught area. You need to make sure that you have human oversight of the process, that you have things like redress mechanisms, and that you’re explaining how you’re using AI to your stakeholders and to those who are impacted. That’s really, really key.

Nick [speaking on stakeholder engagement at the University of Technology Sydney]: The administration was using artificial intelligence in lots of ways that, when we started to talk to students and staff, really concerned them. They just weren’t sure why and wherefore it was being rolled out. But the process flipped on its head once we started to really lean into consulting with staff and students; it’s actually strengthened the process and led to a lot of productivity opportunities being identified.

Continuously monitor and evaluate AI systems to ensure they function as intended and do not cause harm. This includes conducting regular impact assessments and updating governance practices as needed.

Anna: Continue to monitor, report and evaluate, and [do] that in a cycle, to make sure you’re learning from your mistakes and embedding modifications and advancements in your next iteration.

Directors should stay informed about legal obligations and regulatory developments related to AI. Check out our article on the mandatory guardrails and voluntary standards for AI, and on navigating Australian privacy reform.

The Directors’ Guide to AI Governance, developed by the AICD and HTI, provides a suite of practical resources to assist boards in navigating ethical and informed use of AI.


