How are Investors Approaching the Risks of AI? (newsletter feature)

The AI revolution will be ‘ten times bigger than the Industrial Revolution’, according to the head of Google DeepMind, and is already causing enormous disruption in sectors from energy to entertainment.

We won’t try to summarise all these fast-moving transformations in this article. Instead, we aim to capture how investors are trying to manage the risks that go alongside the opportunities of AI. This builds on a report that Chronos produced this summer on the topic, in collaboration with the leading UK pension scheme Railpen.

Defining AI 

First, we should be clear what we mean by Artificial Intelligence (AI), as there remains no standard definition.

The term generally refers to a range of machine-based technologies that use algorithms to interpret inputs (i.e. data) and generate outputs (e.g. a prediction, image, or recommendation) that mimic human thought patterns and solve complex tasks.

The table below, from the Railpen report, offers a useful classification that focuses on the specific branch or type of technology being applied. For example, it distinguishes Generative AI (a branch that uses deep machine learning to generate new content) from Natural Language Processing (which centres on recognising, processing and interpreting human language).

Investors also distinguish between AI developers (companies designing and producing AI) and AI deployers (companies implementing AI). The latter typically receive less attention from investors, regulators and the public, despite being exposed to many of the same challenges.

See table of AI Classifications from the Railpen report

What are investors concerned about?

AI opportunities abound for investors – roughly 75% of the S&P 500’s returns in recent years have come from AI-related businesses, according to Morgan Stanley. But investors are increasingly concerned about the risks too.

We are, for example, seeing a steady increase in the number of AI-related incidents and controversies. In 2023, 123 incidents were reported, marking a 32.3% increase from 2022, with AI incidents having grown twentyfold since 2013.

Only this month, Deloitte hit the headlines for using generative AI to produce a report for the Australian Government without adequately checking it. The errors subsequently found in the report led Deloitte to issue a refund to the Australian Government.

The financial impacts of these risks can be significant, although the body of empirical research on their financial materiality is still developing, given how new the technology is. Several recent incidents, however, illustrate the potential direction of travel. For example, Google’s shares fell by 9% in a single day – a short-term loss of US$100 billion in market capitalisation – after its chatbot Bard made a factual error. In addition, OpenAI was fined €15 million for processing users’ personal data without adequate legal justification and for violating the General Data Protection Regulation’s (GDPR) principle of transparency.

Our research suggests the risks associated with AI can be grouped into three broad categories – environmental (E), social (S) and governance (G):

  • Governance risks: Such as AI’s potential to blur lines of accountability, and its implications for transparency and cybersecurity. 

  • Social risks: Such as AI’s potential impacts on employment levels, intellectual property, misinformation, privacy and digital rights, and bias and discrimination.

  • Environmental risks: Such as AI’s enormous use of electricity, water and other resources. (AI is set to drive a 50% increase in data centre power demand by 2027, and as much as a 165% increase by 2030, according to Goldman Sachs.)

How are investors approaching AI risk management?

The wide-ranging nature of AI-related risks, amplified by the technology’s broad deployment across the economy, poses a challenge for investors’ risk management. Their response so far has been to focus on a governance system that enables them to assess companies’ approaches to AI, and the technology’s effects on market structures and sectors.

The specifics of this approach were at the heart of the report we produced with Railpen this summer, which details Railpen’s AI Governance Framework (AIGF). The framework identifies four areas that investors need to analyse, and set expectations for, in order to assess a portfolio company’s preparedness for AI risks.

These four areas can be summarised as:

  • Governance – For example, does a company have senior or board-level oversight, management and policies in relation to AI?

  • Strategy – For example, has AI’s relevance to the business strategy been assessed?

  • Risk management – For example, is there identification and monitoring of AI risks, and ongoing stakeholder engagement about these?

  • Performance rating – For example, is there AI incident reporting, and relevant annual reporting?

We are also seeing investor collaborations forming on this issue.

An ongoing process

As emphasised in the Railpen and Chronos report, the potential medium- and long-term risks of AI are highly unpredictable and developing fast. Things may even have changed in the time between us writing this article and you reading it!

That’s why many investors are focused on the governance of AI – aiming to get this right so they can understand which companies are best prepared to harness the opportunities of AI and to act effectively against issues as they emerge. As the technology develops, institutional investors must continue to rise to the challenge.

 --

Further reading

- Responsible AI ESG Framework for investors – CSIRO

- ICGN Investor Viewpoint: Artificial Intelligence – An engagement guide

- Artificial Intelligence and Human Rights Investor Toolkit