Davos this year is overwhelmingly focused on AI, which pervades sessions and discussions across the forum. Concerns about moral implications, unemployment, and wealth distribution dominate, overshadowing fears of AI overtaking humanity.
By far the overwhelming theme here at Davos this week is Artificial Intelligence (AI). There is a dedicated “AI House” offering nonstop programs and seminars with experts and business leaders. Nearly all of the educational “houses” up and down the main Promenade have integrated multiple AI-related sessions into their programs. Beyond those formal events, more than a dozen global software companies have huge spaces set up as lounges where they are hosting constant presentations and discussions about every conceivable aspect of this emerging phenomenon. As the New York Times reported, AI dominated the Davos conversation and eclipsed even the bloody conflicts in Ukraine and Gaza.
True confession: I have been afraid to go deep on learning about AI. All the fearmongering and misinformation roiling around the Internet left me wanting to put blinders on until things get sorted out. But all that changed this week. As I began to learn more about the mechanics of AI and some of the specific issues presented, the fear that AI would take over the world began to dissipate. Besides, we all know how well it works out for the ostrich that sticks his head in the sand.
Most Large Language Models (LLMs) being developed today are “trained” on massive data sets of content that then enable AI applications to make extremely rapid connections when exposed to new information. One big worry is that AI applications will reach conclusions and make decisions without any sense of morality or any coherent set of values. In fact, if an LLM is trained on a bad data set (e.g., the dark web, racist propaganda, 4Chan, the Trump campaign), AI could develop some pretty awful moral perspectives. One solution advocated by the CEO of Splunk was to “put a human in the loop” — that is, do not allow AI to implement decisions without a final sign-off by a human. That may not solve the problem, but it seems like an emerging best practice.
One prominent CEO of an AI company in the Emirates had a unique response for audience members raising moral qualms about AI (paraphrasing): “What’s so great about human intelligence anyway? Even today, right here in Europe, or in Gaza or Africa, we are still solving our problems by dropping bombs and shooting bullets at human bodies. In the 20th century, human decision-making resulted in the murder of more than 100 million people during the two World Wars alone. Could AI possibly do worse?” Nervous laughter erupted, followed by a sobering moment of humility.
At one of the sessions I helped organize, a prominent young Indian investor made a compelling case that the tsunami of unemployment dislocations from AI, robotics, and automation is coming much sooner than we think — likely on a 2–3 year horizon, based on his study of accelerating developments. He argued that no society on earth is prepared for the social, financial, and emotional disruption that those trends portend in the near-term future. While mass unemployment typically conjures images of an economy in free fall (e.g., the Great Depression), these anticipated waves of redundancies will likely occur in an era of stunning prosperity, driven by the accelerating productivity of robots twinned with AI. Robots off the factory floor, with nuanced AI-enabled discernment and improving agility, will displace tens of millions of workers. We can expect massive waves of refugees, revolution, and accelerating populist rebellion against capitalism and free markets.
How can we ensure that the bounty from those productivity gains will be shared equitably with the millions of people who lose their livelihoods? The world currently has no effective mechanism for spreading that anticipated prosperity beyond those lucky few who “own the robots.” What is more, the very high capital requirements for AI deployment suggest that the tech titans — the so-called Magnificent 7 that now dominate the stock market (Microsoft, Meta, Tesla, Nvidia, Amazon, Alphabet (Google), and Apple) — will reap the vast majority of the benefit from this transition. It won’t be creative startups that profit, but rather companies the size of countries with massive cash hoards to deploy. Most participants agreed we may be looking ahead to a mind-boggling period of accelerating corporate profits for the tech titans coupled with mass unemployment and a sharp increase in “deaths of despair.” How will governments respond to the intense political pressures to share the newfound wealth in the “post-scarcity era”? (See my forthcoming post on the National Endowment.)
I left these many AI discussion sessions with the realization that my fear had been misplaced. I was worried about the wrong things. The concern is not so much that AI will take over the world and do away with humankind. It is that AI and new generations of robotics will create an era of tremendous productivity and efficiency — yet the legacy global systems built on greed, short-sightedness, and selfishness will prevent us from sharing that bounty in a way that benefits us all, driving yet deeper wedges of inequality and hostility between rich and poor, capitalists and labor, and the developed and the yet-to-be-developed world. As we solve the problem of scarcity, our greatest challenge is to ensure the gains are shared fairly with all humankind.