AI development is currently concentrated in the hands of a small number of companies. Public vigilance can help ensure they stick to ethical use of the technology.
Warren Buffett got it partly right about AI. The billionaire investor and philanthropist told CNN earlier this year: “We let a genie out of the bottle when we developed nuclear weapons... AI is somewhat similar—it’s part way out of the bottle.”
Buffett’s rationale is that, much like nuclear weapons, AI holds the potential to unleash profound consequences on a vast scale, for better or worse. And like nuclear weapons, AI is concentrated in the hands of the few — in AI’s case, tech companies and nations. This part of the comparison is rarely discussed.
As these companies push the boundaries of innovation, a critical question emerges: Are we sacrificing fairness and societal well-being on the altar of progress?
One study suggests that Big Tech’s influence is ubiquitous across all streams of the policy process, reinforcing these companies’ position as “super policy entrepreneurs”. This allows them to steer policies to favour their interests, often at the expense of broader societal concerns. The same concentration of power lets these corporations mould AI technologies using vast datasets that reflect only specific demographics and behaviours.
The result is a technological landscape that, while rapidly advancing, may inadvertently deepen societal divides and perpetuate existing biases.