By Theodoros Evgeniou
Responsible practices using tested processes must be the focus when creating new technology.
Technology has always been a double-edged sword. While it’s been a major force for progress, it has also been abused and caused harm. From steam power to Fordism, history shows that technology is neither good nor bad – by itself. It can, of course, be both, depending on how it’s used.
Telecommunications, specifically the internet, and more recently AI – which is estimated to contribute more than €11 trillion to the global economy by 2030 – are no different.
On one hand, the internet connects us all – and kept us in touch with one another during the pandemic. AI and machine learning can help solve some of the world’s most pressing problems: diagnosing disease, thwarting cyberattacks and fighting climate change, to name just a few. Yet, if left unchecked, algorithms can also perpetuate biases, create online echo chambers, fuel radicalisation and compromise safety and privacy.
2022 is poised to bring sweeping changes to digital regulations. The EU Parliament approved the Digital Services Act to increase online safety and consumer protection and is preparing the Artificial Intelligence Act to govern AI. The US Federal Trade Commission has published its guidance on AI, while China has launched a wave of regulations. The OECD currently tracks more than 700 AI policy initiatives across 60 countries.
Meanwhile, for years, the private and non-profit sectors have rallied behind the Tech for Good movement which strives to “put digital and technology at the service of humanity”. In its shortest and most sweeping form, it promises technology can help the world achieve the UN’s Sustainable Development Goals.
But in light of history, we must ask: Is it possible for Tech for Good to succeed without doing harm? We argue that the answer is largely about focusing on what we call “Good Tech”.
Good Tech prioritises processes before outcomes
One problem is that the best of intentions is no guarantee of a positive outcome. Therefore, a sole focus on what technology can do is too narrow. We need to shift our priority to how we design, implement and monitor tech, across contexts.
In other words, we need to focus on process.
To leverage the best of AI and tech, and safeguard our world from their inherent risks, we must integrate robust processes that check against abuses, biases or harmful uses into our activities. Drawing upon our research on AI, machine learning and Fair Process Leadership, we call the output of this process-oriented approach to technology innovation and regulation Good Tech.
How to develop and implement Good Tech
The goal of Good Tech is to minimise the possibility that modern technology is abused or causes harm, so that society reaps only the benefits. Good Tech demands a rigorous, inclusive process for design, implementation and monitoring through three components: “Good” principles, Fair Process and strong oversight.
1. Good Tech is inclusive, value-based, and future-proof
After goals are set, high performance starts with defining values; in an organisation or team, shared values act as a bulwark against abuse and risk.
In recent years, companies such as Google, Microsoft, IBM, BMW and Telefonica have rallied behind principles for ethical or responsible technology. As of April 2020, the non-profit AlgorithmWatch listed 173 guidelines in its AI Ethics Guidelines Global Inventory.
Of course, we will always need to scrutinise these principles, who creates them and how they are implemented.
Good Tech principles are more than words; they reflect a collaborative process among diverse stakeholders. They can’t be rushed – such principles often take months to deliberate and implement.
The most robust and effective principles, like the UN’s Principles of Human Rights or the OECD’s AI Principles, are “values-based” and distilled over time through an inclusive process that seeks input from all stakeholders and minimises bias. Luckily, we don’t always have to start from scratch. For example, the OECD’s AI framework and the work of the OECD Network of Experts on AI can serve as starting points for organisations developing Good Tech.
2. Good Tech must be governed by “Fair Process”
Goals and principles are fine but fall flat if they aren’t implemented, or are ignored when it matters most. Implementation remains a key challenge.
While there are multiple frameworks for responsible tech by design, we need to make sure that they’re also fully aligned with time-tested practices for Fair Process. This is, in our opinion, critical work.
We believe that a commitment to Fair Process is instrumental to developing Good Tech. Decades of research with companies and leaders have correlated Fair Process with sustainable performance. Fair Process – also called procedural justice by organisational scientists – is defined by five values, all of which must apply to Good Tech:

- Clarity and transparency, including of goals, purpose, and ‘rules’
- Consistency in treating people and issues equally over time, without preference or bias
- Communication that favours listening over telling and that does not sanction people for what they say
- Changeability of views when faced with new evidence
- Culture of Truth-seeking and Doing the Right Thing instead of choosing what’s most popular or convenient.