Claudia Schulz

VivaTech 2024 debate: “Will AI reveal the best of humanity?”


Throwback to VivaTech 2024, when Dr. Claudia Schulz, Research Director at Pivot&Co, was part of the winning team at The New York Times debate organised by Kite Insights.

So what do we think?

AI can be used to benefit society, but it won’t reveal the best of humanity, because it is designed by humans. Only humans can reveal the best and worst of our own humanity.

Our challenge is to create AI solutions that benefit society by implementing robust governance and design considerations, so we can use AI both well and for good. This is what we do at Pivot&Co.

Convince me in 3 minutes or less: a debate transcript.

AI can be used to benefit society, but it won’t reveal the best of humanity because of how it is designed.

AI, by definition, is designed to replicate human intelligence, thereby creating something artificial from the natural.

At best then, AI will reveal what we already know about ourselves.

At worst, some suggest AI will exacerbate the worst of humanity because of the alignment problem: the difficulty of codifying human values into machine learning systems.

Let’s start with the at-best scenario. If we consider most AI as a super-powered calculator that uses the information we give it, then it won’t reveal anything we didn’t already know or couldn’t already find out. AI will just do it quicker than we can, or replicate what we already know in a new format. We’ve seen this pattern before: the internet compared to a physical library, or a car compared to a horse.

If we consider more sophisticated, generative AI, which can iterate on its own outputs and optimise, sometimes in an unsupervised fashion, we run into the aforementioned alignment problem, where AI has the capacity to exacerbate the worst of our humanity. What could this look like? Here are a few examples.

  • Systemic inequality is arguably one of the worst aspects of humanity, and AI deployed at scale is more likely to exacerbate it. As with most technologies, AI will be more accessible in wealthy countries, both in design, which is costly, and in deployment, which requires expensive computing and data storage infrastructure. Inequality has worsened…

  • Exploitation is another of the worst aspects of humanity. At scale, AI requires bulk harvesting of data, including from developing countries, with potentially little governance over how data is harvested and from where. The resulting models are then commercialised for profit, most of which is retained by developed countries. AI is unlikely, at least anytime soon, to be capable of reflecting the vast geographical, cultural, societal, religious and economic diversity that exists in society. At least initially, it is far more likely to resemble and favour the societal values of the regions producing it, something experts refer to as “algorithmic colonisation”. Exploitation has worsened…

  • AI is incredibly effective at ‘appearing’ intelligent, rather than actually being intelligent, or even accurate! These hallucinations pose significant risks for individuals and businesses that believe generated outputs or use them unchecked (or without the capacity or ability to check them). At this stage, the potential for abuse of AI arguably outweighs the potential for good: think of deepfakes, voice replicators, IP infringement, personal data mining, data breaches and so on. Confusion and societal disruption have worsened…

So if AI won’t reveal anything good about humanity, we are left with ensuring it at least benefits society.

How do we do that? AI will only benefit society if we implement robust governance and design considerations before we use it “for good”.
