Investors need to prioritise the ethical deployment of AI – too much is at stake if they don’t.
Investors, take note. Your due diligence checklist may be missing a critical element that could make or break your portfolio’s performance: responsible AI. Beyond screening and monitoring companies for financial returns, growth potential and ESG criteria, it’s time for private equity (PE) and venture capital (VC) investors to start asking hard questions about how firms use AI.
Given the rapid proliferation and uptake of AI in recent years (75 percent of all businesses already include AI in their core strategies), it’s no surprise that the technology is top of mind for PE and VC investors. In 2020, AI accounted for 20 percent, or US$75 billion, of worldwide VC investments. McKinsey & Company has estimated that AI could increase global GDP by roughly 1.2 percent per year, adding a total of US$13 trillion by 2030.
AI now powers everything from online search to medical research to workplace productivity. But, as with most technologies, it can be problematic. Hidden algorithms may threaten cybersecurity and conceal bias; opaque data practices can erode public trust. A case in point is BlenderBot 3, the AI chatbot launched by Meta in August 2022. It made anti-Semitic remarks and factually incorrect statements about the United States presidential election, and even asked users for offensive jokes.
In fact, the European Consumer Organisation’s latest survey on AI found that more than half of Europeans believe companies use AI to manipulate consumer decisions, while 60 percent of respondents in certain countries think AI leads to greater abuse of personal data.
How can firms use AI responsibly and work with cross-border organisations to develop best practices for ethical AI governance? Below are some of our recommendations, drawn from the latest annual report of the Ethical AI Governance Group, a collective of AI practitioners, entrepreneurs and investors dedicated to sharing practical insights and promoting responsible AI governance.
Best practices from the ESG movement
PE and VC investors can draw lessons from ESG (environmental, social and governance) investing to ensure that their investee companies design and deploy AI that generates value without inflicting harm.
ESG is becoming mainstream in the PE realm and is slowly but surely making its mark on VC. We’ve seen the creation of global industry bodies such as VentureESG and ESG_VC that advance the integration of sustainability into early-stage investments.
Gone are the days when it was enough for companies to deliver financial returns. Now, investors regularly solicit information about a fund portfolio’s alignment with the United Nations Sustainable Development Goals. Significant measures have been taken since 2018 to create comparable, global metrics for evaluating ESG performance. For example, the International Sustainability Standards Board was launched at the UN Climate Change Conference in 2021 to set worldwide disclosure standards.
Beyond investing in carbon capture technologies and developing eco-friendly solutions, firms are being pressed to account for their social impact, including worker rights and the fair allocation of equity ownership. “Investors are getting serious about ESG,” headlined a 2022 report by Bain & Company and the Institutional Limited Partners Association. According to the report, 90 percent of limited partners would walk away from an investment opportunity if it presented an ESG concern.
Put simply, investors can no longer ignore their impact on the environment and the communities they engage with. ESG has become an imperative, rather than an add-on. The same can now be said for responsible AI.
The business case for responsible AI
There are clear parallels between responsible AI and the ESG movement. For one thing, both are simply good for business. As Manoj Saxena, chairman of the Responsible Artificial Intelligence Institute, recently put it: “Responsible AI is profitable AI.”
Many organisations are heeding the call to ensure that AI is created, implemented and monitored through processes that protect against negative impacts. In 2019, the OECD established its AI Principles to promote AI that is innovative and trustworthy and that respects human rights and democratic values. Meanwhile, cross-sector partnerships, including the World Economic Forum’s Global AI Action Alliance and the Global Partnership on Artificial Intelligence, have established working groups and schemes to translate these principles into best practices, certification programmes and actionable tools.
VC firms such as BGV have also emerged with a focus on funding innovative and ethical AI companies. We believe that early-stage investors have a responsibility to help build ethical AI start-ups, and they can do so through better due diligence, capital allocation and portfolio governance decisions.
The term “responsible AI” speaks to the bottom-line reality of business: Investors have an obligation to ensure that the companies they back are honest and accountable. These companies should create rather than destroy value, with a careful eye not only on reputational risk but also on their impact on society.
Here are three reasons why investors need to embrace and prioritise responsible AI:

- AI requires guardrails
- Regulation imposes strong consequences
- Responsible AI unlocks market opportunities