American tech workers want strong AI regulation so they can better use the technology, new research finds.
A survey of 300 American technology experts in data management, privacy and artificial intelligence by data analytics firm Collibra found that three-quarters of respondents support federal and state regulations to oversee technological development.
“As we look to the future, our governments must set clear and consistent rules while creating an environment that enables innovation and strengthens the quality and integrity of data,” said Felix Van de Maele, co-founder and CEO of Collibra.
The survey found that the biggest challenges facing AI – and the strongest drivers of the need for regulation – are privacy and security, each cited as a problem by 64% of respondents, followed by misinformation and ethical use/responsibility, each cited by more than half of respondents.
IP must be protected from AI practices
In fact, the survey results suggest that accountability around data usage is a high priority for industry participants – perhaps no surprise given the scrutiny surrounding how AI companies collect data, with OpenAI, Microsoft and others facing legal challenges over copyright infringement and licensing agreements.
Eight in 10 respondents said American regulators should update copyright laws to protect content from data collection and compensate creators when their material is used in training models.
“While AI innovation continues to advance rapidly, the lack of a regulatory framework puts content owners at risk and will ultimately hinder the adoption of AI,” said Van de Maele.
The survey comes against a backdrop of growing efforts to regulate technology without hampering innovation, Van de Maele noted, adding that the United States should follow the example of the European Union (EU).
In the US, federal legislation remains piecemeal, though President Biden issued an executive order last year on “safe, secure and trustworthy” artificial intelligence.
That order largely tasked other national-level bodies, such as the National Institute of Standards and Technology (NIST) and the National Security Council (NSC), with developing more comprehensive standards.
In the EU, the bloc’s landmark AI Act came into force earlier this month, but full implementation will take years. The first provisions to apply concern prohibited systems – banning AI that exploits vulnerable users, applies social scoring or predicts future crimes – with enforcement beginning next year.
In addition, the EU AI Act requires high-risk AI systems to have a risk management system, data governance, proper record-keeping and other controls to ensure safety. Additional rules apply to general-purpose AI.
On a more positive note, the survey suggests that IT workers in the U.S. trust that their companies use AI in safe and meaningful ways.
According to Collibra, 88% of respondents said they have a high or very high level of confidence in their own employer’s approach to AI.
Three-quarters also felt their company was giving AI training the right priority. The results were even more positive at larger companies.