AI was a major topic at Davos. As reported by Fortune, over two dozen sessions at the event focused directly on AI, covering topics ranging from AI in education to AI regulation.
A who's who of AI was in attendance, including OpenAI CEO Sam Altman, Inflection AI CEO Mustafa Suleyman, AI pioneer Andrew Ng, Meta chief AI scientist Yann LeCun and Cohere CEO Aidan Gomez.
From wonder to pragmatism
While at Davos in 2023 the discussion was full of speculation based on the then-recent release of ChatGPT, this year's conversation was more tempered.
"Last year, the conversation was 'Gee whiz,'" Chris Padilla, IBM's VP of government and regulatory affairs, said in an interview with The Washington Post. "Now, it's 'What are the risks? What can we do to make AI more trustworthy?'"
The concerns raised at Davos included misinformation, job displacement and the widening gap between rich and poor nations.
The most talked-about AI risk at Davos was misinformation and deception, often in the form of deepfake photos, videos and voice clones that could further muddy the truth and undermine trust. A recent example was the robocalls sent out before the New Hampshire primary election that used a voice clone apparently impersonating President Joe Biden in an attempt to suppress votes, which is a crime.
AI-enabled deepfakes spread false information by making someone appear to say something they did not. In one interview, Carnegie Mellon University professor Kathleen Carley said: "This is kind of just the tip of the iceberg in what could be done with respect to voter suppression or attacks on election workers."
Reuven Cohen, an enterprise AI consultant, also recently told VentureBeat that new AI tools will bring a flood of deepfake images, audio and video just in time for the 2024 election.
Despite considerable efforts, a foolproof system for detecting deepfakes remains elusive. As Jeremy Kahn observed in a Fortune article: "We better find a solution soon. Distrust is insidious and corrosive to democracy and society."
AI mood swing
Suleyman's mood shift from 2023 to 2024 led him to write in Foreign Affairs that a "cold war strategy" is needed to contain threats made possible by the proliferation of AI. He said that technology has always become cheaper and easier to use, permeating all levels of society, where it can be put to both positive and negative uses.
“When hostile governments, fringe political parties and lone actors can create and broadcast material that is indistinguishable from reality, they will be able to sow chaos, and the verification tools designed to stop them may well be outpaced by the generative systems.”
Concerns about AI date back decades, initially and best popularized in the 1968 movie "2001: A Space Odyssey." There has since been a steady stream of worries, including over the Furby, a wildly popular cyber pet of the late 1990s. The Washington Post reported in 1999 that the National Security Agency (NSA) banned the devices from its premises over concerns that they could serve as listening devices and divulge national-security information. Recently released NSA documents from that period discussed the toy's ability to "learn" using an "artificial intelligent chip onboard."
Contemplating AI’s future trajectory
More AI experts are now expressing the view that artificial general intelligence (AGI) could be achieved soon. AGI is a vague term, but it is generally understood to be the point at which AI becomes smarter and more capable than a college-educated human across a wide range of activities.
Altman has said that he believes AGI might not be far from becoming a reality and could be developed in the "reasonably close-ish future." Gomez reinforced this view: "I think we will have that technology quite soon."
But not everyone agrees. LeCun, for instance, is skeptical about the imminent arrival of AGI. He recently told Spanish outlet EL PAÍS that "Human-level AI is not just around the corner. It will take a long, long time. And it's going to require new scientific breakthroughs that we don't know of yet."
Public perception of the future
Public perception of AI remains uncertain. In the 2024 Edelman Trust Barometer, global respondents were divided on whether to accept or reject AI, seeing both great potential and great risk. According to the report, people are more likely to embrace AI, and other innovations, if it is vetted by scientists and ethicists, if they feel they have control over how it affects their lives and if they believe it will bring them a better future.
It is tempting to rush toward solutions to "contain" the technology, as Suleyman suggests, although it is useful to recall Amara's Law, as defined by Roy Amara, past president of The Institute for the Future: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."
While there is a great deal of experimentation, early adoption and widespread success are not guaranteed. Rumman Chowdhury, CEO and cofounder of the AI-testing nonprofit Humane Intelligence, has said: "We will hit the trough of disillusionment in 2024. We're going to realize that this actually isn't this earth-shattering technology that we've been made to believe it is."
We may discover in 2024 just how revolutionary it really is. In the interim, companies and individuals are learning how to use generative AI for their personal or business benefit.
Accenture CEO Julie Sweet said in an interview: "We're still in a land where everyone's super excited about the tech and not connecting to the value." The consulting firm is now conducting workshops for C-suite leaders to learn about the technology as a critical step toward achieving the potential and moving from use case to value.
The greatest benefits and the most harmful impacts of AI (and AGI), therefore, may be coming, but not necessarily immediately. In navigating AI's complex landscape, we stand at a critical crossroads. Prudent stewardship and an innovative spirit can steer us toward a future in which AI technology enhances human potential without sacrificing our integrity and values. We must harness our collective courage to imagine and design a world where AI serves humanity, rather than the other way around.
Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
Welcome to VentureBeat’s community!
DataDecisionMakers provides a platform for experts, such as the technical people who do data work, to share data-related insight and innovation.
Join us on DataDecisionMakers if you want to know about cutting-edge information and ideas, best practices and the future of data, data tech and data analytics.
You might even consider contributing an article of your own!