Meta’s President of Global Affairs Nick Clegg may believe comparisons between AI and nuclear weapons are overblown, but he does see a role for the technology in curbing online hate speech.
“These large language models have no autonomy, no agency, they don’t know anything innately. They are pattern recognition systems,” said Clegg during the final day of the Concordia conference during the United Nations General Assembly, noting that if larger AI becomes a reality, “it’s a whole different ball game.”
“Yes, generative AI might enable bad people to try and generate bad and misleading content more quickly. It also allows platforms like us to act against that much more quickly. It’s an adversarial space where, hopefully, and that’s our task, we can use those tools more nimbly and more effectively than our adversaries,” continued the executive.
Clegg believes that open-sourcing helps us better understand which specific elements of AI are vulnerable.
“Of course, you can never perfectly predict how it will be used, and you can’t perfectly sort of legislate for that in advance. And no doubt there will be people who will try and use it for bad purposes,” said Clegg. “But, in general, it is accepted in the sector that open-sourcing these things leads to safer models, because you can then openly look for vulnerabilities and fix them.”
“It’s better to take a calm look at where current legislation, a lot of which can apply to these technologies, is lacking and where the gaps need to be filled.”
Clegg emphasized that giving artists, academics, and innovators free access to AI systems will have a positive effect. A prime example is artist Bennett Miller, who used DALL-E 2 to create works for an exhibit at New York’s Gagosian Gallery.
“If you spend all your time trying to anticipate a future that hasn’t arrived, you won’t fix the issues we know are present today.”

