Most Americans Don't Trust AI and Want Regulation
New study finds that most American adults don't trust AI tools and are concerned about their potential for misuse
A new study finds that most American adults don't trust AI tools like ChatGPT and are concerned about their potential for misuse. This suggests that the recent spate of scandals around AI-generated malware and disinformation is having an impact, and that the public may be increasingly open to regulating AI.
The survey, conducted by MITRE and Harris Poll, found that only 39% of the 2,063 American adults surveyed believe today's AI technology is "safe and secure," down nine percentage points from the organizations' previous survey in November 2022.
When it comes to specific concerns, 82% of people were worried about deepfakes and "other engineered synthetic content," while 80% were concerned about how this technology could be used in malware attacks. A majority of survey participants also expressed concerns about the use of AI for identity theft, the collection of personal data, the replacement of humans in the workplace, and more.
The survey also suggests that wariness about AI spans demographic groups: 90% of baby boomers and 72% of Gen Z respondents are concerned about the impact of deepfakes, according to a report by Digital Trends.
Although young people are less skeptical of AI and more likely to use it in their daily lives, their concern remains high in several areas, including whether the industry should do more to protect the public and whether AI should be regulated.
The decline in trust is likely driven by the growing number of negative news stories about generative AI tools and the controversies surrounding ChatGPT, Bing Chat, and other products. As accounts of disinformation, data breaches, and malware continue to mount, the public appears to be growing less accepting of the AI future that looms ahead.
When the MITRE-Harris survey asked whether the government should step in to regulate AI, 85% of respondents supported the idea, up three percentage points from the previous survey. The same share agreed that "making AI safe and secure for public use must be a national effort across industry, government, and academia," while 72% felt that "the federal government should focus more time and funding on AI security research and development."
The widespread concern about AI being used to enhance malware attacks is notable. We recently spoke to a group of cybersecurity experts about this very topic, and the consensus seems to be that while AI can be used in malware, it is not a particularly powerful tool at the moment. Some experts felt that its ability to write effective malware code was weak, while others explained that attackers are more likely to find better exploits in public repositories than to ask AI for help.
However, growing skepticism about all things AI could ultimately shape industry efforts, pushing companies like OpenAI to invest more in protecting the public from the products they release. And with public support for regulation this overwhelming, don't be surprised if governments begin enacting AI-specific legislation sooner rather than later.
Additional information
- The study also found that support for AI is higher among young people than older people, likely because younger people are more familiar with AI technologies and use them in their daily lives.
- The findings point to growing public concern about the potential risks of AI, which could lead to increased regulation as well as industry efforts to address those concerns.
Image from playgroundai.com.