Trust is a critical part of any relationship. Businesses’ relationships with their clients and consumers are no different. But the concept of trust in a corporate context is a fragile one.
At its core, corporate ‘trust’ seems to boil down to a mix of compliance, reputational and ethical issues. In the past decade, phrases such as ‘purpose-driven’, ‘sustainable business’ and ‘environmental, social and governance (ESG)’ have become part of the corporate lexicon as businesses try to demonstrate that they are attuned to shifts in public sentiment and values.
However, the rapid rise of digitisation across the global economy has perhaps brought about some of the most challenging trust issues for the consumer-business relationship to date. Concerns raised by the academic community about the use of unintelligible, opaque AI systems and the commoditisation of human data pose a genuine risk to a company’s reputation. The conversation on privacy and data has moved beyond legal and compliance spheres to whether certain technologies should be used in a given context at all, even where they are legally permissible.
In the absence of a comprehensive set of international rules on the ethical procurement of data and the use of AI, ESG principles offer companies a lens through which many of the risks associated with these technologies can be better understood and managed in order to drive sustainable value.
Controversial ethical issues can emerge where data is being used to make automated determinations or predictions that affect individuals, even if this is done within the letter of the law.
Over the past few years we have seen examples of algorithms making flawed or biased decisions that adversely affected human lives because they were trained on flawed or biased data sets1. This is rooted in the practices around how training data is classified and labelled, often by machines themselves or by low-paid human crowdworkers. That process frequently involves taking data out of context and giving it a singular, reductive meaning, which not only limits how the AI interprets the world but also imposes on it a particular, narrow worldview.
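To make the mechanics concrete, the short Python sketch below shows the kind of basic label audit that can surface this problem before training begins; the dataset, column names and values are entirely hypothetical.

```python
import pandas as pd

# Hypothetical, deliberately tiny training set: each row is a labelled
# example with a demographic attribute recorded alongside it. All column
# names and values are illustrative only.
df = pd.DataFrame({
    "label": ["approve", "deny", "deny", "approve", "deny", "deny"],
    "group": ["A", "A", "B", "B", "B", "B"],
})

# Cross-tabulate label frequency by group. A heavily skewed table is an
# early warning that a model trained on this data may learn the skew
# rather than the task.
audit = pd.crosstab(df["group"], df["label"], normalize="index")
print(audit)
```

An audit of this kind does not cure the deeper problem described above (labels that strip away context), but it does at least make the skew visible.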
In the quest to acquire ever more data to feed AI, we have also seen a shift away from consent-based data collection. In Australia, there has recently been regulatory pushback against non-consensual data acquisition practices. In November 2021, the Office of the Australian Information Commissioner found that Clearview AI, Inc. breached Australians’ privacy by scraping their biometric information from the web and disclosing it through a facial recognition tool. The “lack of transparency around Clearview AI’s collection practices, the monetisation of individuals’ data for a purpose entirely outside reasonable expectations, and the risk of adversity to people whose images are included in their database”2 all contributed to the finding. Further, Commissioner Falk explicitly noted that the law needed to catch up with technological developments, saying that the case “reinforces the need to strengthen protections through the current review of the Privacy Act”3.
Sometimes the technological innovation or product itself creates the ethical dilemma. Recently, Facebook announced that it would shut down its photo-tagging function (a controversial facial recognition system). Mr Pesenti, vice-president of artificial intelligence at Facebook’s parent company Meta, said the company was trying to weigh the positive use cases for the technology "against growing societal concerns, especially as regulators have yet to provide clear rules"4.
Given the pervasive use of AI and data, AI ethics and data compliance are issues that cut across all corporate activities and sectors. They also surface in each of the three core ESG pillars.
The relationship is most obvious for governance. Good internal governance of data (that is, the ability to demonstrate that appropriate monitoring and controls are in place to show compliance with privacy commitments), together with genuinely consensual procurement of data for AI training, makes compliance with evolving privacy and data protection regulations around the world more manageable. Such transparency also fosters accountability, and Australian regulators and courts are increasingly insisting on greater accountability in the use of AI by companies and government agencies.
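By way of illustration only, the following Python sketch shows one way a data governance function might record the consent under which a data item was collected, so that a later use for AI training can be checked against it. The structure and field names are assumptions for the purposes of the example, not a prescribed or legally sufficient standard.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class ConsentRecord:
    """One auditable entry tying a data item to the consent under which
    it was collected. Fields are illustrative, not a standard."""
    data_item_id: str               # identifier of the collected record
    subject_id: str                 # pseudonymous identifier of the individual
    purpose: str                    # the specific purpose consented to
    consent_obtained_at: datetime
    consent_withdrawn_at: Optional[datetime] = None

    def permits(self, proposed_purpose: str) -> bool:
        # Use is permitted only for the consented purpose, and only
        # while consent has not been withdrawn.
        return (self.consent_withdrawn_at is None
                and proposed_purpose == self.purpose)

record = ConsentRecord(
    data_item_id="img-0001",
    subject_id="subj-9f3a",
    purpose="model_training",
    consent_obtained_at=datetime(2022, 3, 1, tzinfo=timezone.utc),
)
print(record.permits("model_training"))      # True
print(record.permits("facial_recognition"))  # False
```

Keeping records in a shape like this means each proposed use of the data can be tested against the original consent, which speaks to the kind of demonstrable monitoring and controls described above.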
The impact of technology and data processing on individuals falls within ESG’s social pillar. Advances in technology and science allow increasingly pervasive analysis of the way in which we live our lives. Algorithms determine creditworthiness and are used to diagnose medical conditions. Technology is also capable of intrusive levels of surveillance, often unbeknownst to the individual, whether at work, in the home or in public spaces. A failure to take into account the ethical implications of these activities undermines trust in AI, as well as in the companies and governments that use it. It is also important to acknowledge that data collected today may be used in the future in ways not currently contemplated, which raises additional risks for both companies and data subjects.
Finally, there are pressing environmental considerations related to data use and the AI life cycle. These include the intensive energy consumption of data centres and the huge demands placed on the earth’s resources for minerals, such as lithium, that go into computing’s core components. Advanced computation is rarely considered in terms of its carbon footprint, fossil fuel use, human labour and pollution. Companies that tout their environmental credentials should be mindful of the impact of technology on the environment and of how this is communicated to stakeholders.
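The underlying arithmetic is simple even if the inputs are hard to pin down: energy consumed multiplied by the carbon intensity of the supplying grid gives an emissions estimate. The Python sketch below uses placeholder figures only; real numbers vary widely with hardware, utilisation and grid mix.

```python
# Back-of-the-envelope estimate of emissions from a model training run.
# Every input below is an illustrative placeholder, not a measurement.
gpu_count = 64                    # accelerators used for the run
power_draw_kw = 0.4               # average draw per accelerator, in kW
training_hours = 24 * 14          # a hypothetical two-week run
grid_intensity_kg_per_kwh = 0.5   # kg CO2e emitted per kWh of grid power

energy_kwh = gpu_count * power_draw_kw * training_hours
emissions_kg = energy_kwh * grid_intensity_kg_per_kwh

print(f"Energy consumed: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {emissions_kg:,.0f} kg CO2e")
```

Even this crude calculation makes the point: the environmental cost of computation is quantifiable, and therefore something that can be measured and reported to stakeholders.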
The rationale for considering AI ethics and data compliance within ESG mirrors that of the ESG movement more widely: taking a holistic view of the impact of AI and data usage leads to better risk management and ultimately creates longer-term value for the company and its investors.
Companies with compliance issues around data face the potential for significant reputational damage, costly remediation, reduced valuations and substantial regulatory sanctions. Developing forward-thinking data governance policies grounded in principles of good data stewardship, adopting ethical frameworks for the design and use of AI, and setting transparent goals against which progress can be measured will all help to mitigate risk in an area that is moving at a rapid pace.
The contents of this publication are for reference purposes only and may not be current as at the date of accessing this publication. They do not constitute legal advice and should not be relied upon as such. Specific legal advice about your specific circumstances should always be sought separately before taking any action based on this publication.
© Herbert Smith Freehills 2024