The future of AI regulation is up in the air: What’s your next move?
Regulation of artificial intelligence has always been a hot topic. But with AI guardrails poised to be dismantled under the incoming U.S. administration, regulation has become a big question mark, adding complexity and volatility to an already complicated compliance environment. VentureBeat's AI Impact series joined Capgemini for a stop in Washington, D.C., to discuss the evolving risks and surprising new opportunities the coming regulatory environment will bring, as well as insights for navigating a new, uncertain normal.
VB CEO Matt Marshall spoke with Steve Jones, executive vice president of data-driven business and generative AI at Capgemini, along with Fidelity Labs SVP Vall Hérard and Verizon Senior Director of AI/ML Engineering Xuning (Mike) Tang, about the significant and growing challenges facing AI regulation in financial services and telecommunications, diving deep into risk management, accountability gaps, and more.
Accountability is a moving target
The problem, Jones said, is that the lack of regulation boils down to a lack of accountability for what large language models are doing, which includes infringing intellectual property rights. Without regulation and legal consequences, addressing intellectual property theft will come down either to a court case or, more likely (especially if the LLM belongs to a company with deep pockets), to the responsibility sliding onto the end user. Some companies will push the boundaries when profitability outweighs the risk of a financial hit.
“I think it’s fair to say that the courts are not going to be enough and, in fact, people are going to have to poison their public content to avoid losing their intellectual property rights,” Jones said. “It’s sad that it has to get there, but it absolutely will if the risk is that you put something on the internet and suddenly someone rips off your entire catalog and just sells it outright.”
Clarifying responsibility
In the real world, unregulated AI companion apps have already led to real tragedies, such as the death by suicide of a 14-year-old boy who had isolated himself from friends and family and turned to his chatbot companion. If regulations are rolled back further, how will product liability be established in such cases to prevent this from happening to other users?
“From an artificial intelligence perspective, these are very powerful things, and there should be accountability for their control,” Jones said. “I firmly believe the only way you can place that accountability on the company that created the product is if there is a push for it.”
The child’s family, for example, is taking legal action against the chatbot company, which has since implemented new safety and automated moderation policies on its platform.
Risk management in a less regulated world
Today’s AI strategy needs to revolve around risk management: understanding the risks your business faces and controlling them. Jones added that from a business perspective, the real concern over a potential misstep is less about public outrage than about how it might affect brand perception and invite litigation, whether human lives or the bottom line are involved.
“What’s worrying is that if I show customers a hallucination, it makes my brand look bad,” Jones said. “But will I get sued? Am I publishing invalid content? Am I publishing content that makes it look like I’m disparaging a competitor? So I’m not too worried about outrage. I’m more worried about giving lawyers business.”
Removing the L from LLM
Making models as small as possible will be another key strategy, he added. LLMs are powerful and can accomplish some amazing things, but does a business really need an LLM that can play chess, speak Klingon, or write epic poems? Tang noted earlier that the larger the model, the greater the potential for privacy issues and the more potential threat vectors it exposes. Verizon’s traffic data contains a lot of internal information, and a model encapsulating all of it would be very large and pose privacy risks, so Verizon’s goal is to use the smallest model that delivers the best results.
Smaller models designed to handle specific, narrowly defined tasks are also a key way to reduce or eliminate hallucinations, Hérard said. Controlling compliance is easier this way, because the data set used to train the model is small enough to allow a full compliance review.
“It’s amazing how often, in enterprise use cases, if I understand my business problem and understand my data, this little model provides an amazing set of results,” Jones said. “Then I combine that with fine-tuning to do what I want to do and further reduce my risk.”
2024-12-11 14:44:04