
The Legal Challenges for Generative AI Policy

Are you taking baby steps toward Artificial Intelligence (AI) adoption? If so, you're not alone: many enterprise business leaders remain stuck on the sidelines while early adopters gain momentum.

As employees across the globe experiment with Generative AI (GenAI), even the most cautious corporate general counsels are issuing guidance that will be valuable to those undecided enterprise leaders.

"To craft an effective policy, general counsel must consider risk tolerance, use cases and restrictions, decision rights, and disclosure obligations," said Laura Cohn, senior principal researcher at Gartner.

Enterprise Generative AI Market Development

Having GenAI guardrails and policies in place will better prepare most slow-moving enterprises for possible future legal requirements. Meanwhile, the market leaders are racing ahead of their peer group.

Drawing on practices from AI policies already instituted by companies and city governments, general counsel should direct their organizations to consider four actions when establishing a policy.

To determine risk tolerance, legal leaders should borrow a practice from enterprise risk management and guide a discussion with senior management on must-avoid outcomes, as well as the potential applications of GenAI models within the business.

Once these applications are identified, consider the outcomes each may produce: which must be avoided outright, and which entail acceptable risk given the benefits of AI use cases that drive digital business growth.

"Guidance on using Generative AI requires core components to minimize risks while providing opportunities for employees to experiment with and use applications as they evolve," said Cohn.

Legal leaders should understand how GenAI could be used throughout the business by collaborating with other functional leaders. Compile a list of use cases and organize them according to perceived risk.

According to the Gartner assessment, for higher-risk situations, consider applying more comprehensive controls, such as requiring approval from a senior manager, an AI committee, or a task force.

In the highest-risk cases, legal leaders may consider outright prohibition. For lower-risk use cases, they may consider applying basic safeguards, such as requiring a human review of the output.
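The tiering approach described above can be sketched in code. This is a minimal illustration only: the tier names, example use cases, and controls are my own assumptions for clarity, not a framework prescribed by Gartner or any specific policy.

```python
# Illustrative sketch: mapping GenAI use cases to risk tiers and the
# control each tier triggers. All names and examples are hypothetical.

from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    risk: str  # "low", "medium", or "high"

# One control per risk tier, following the pattern described above:
# light safeguards for low risk, escalating approval for medium risk,
# prohibition for the highest-risk cases.
CONTROLS = {
    "low": "human review of output",
    "medium": "approval by senior manager or AI committee",
    "high": "prohibited",
}

def required_control(use_case: UseCase) -> str:
    """Return the control a draft policy would attach to a use case."""
    return CONTROLS[use_case.risk]

cases = [
    UseCase("summarize publicly available research", "low"),
    UseCase("draft customer-facing marketing copy", "medium"),
    UseCase("paste confidential contracts into a public chatbot", "high"),
]

for c in cases:
    print(f"{c.name}: {required_control(c)}")
```

Even a simple table like this, published inside the policy itself, gives employees an unambiguous answer to "am I allowed to do this, and who approves it?"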

"General counsel should not be overly restrictive when crafting policy," Cohn said. "Banning use of these applications outright, or applying hard controls, such as restricting access to websites, may result in employees simply using them on their personal devices."

Leaders can consider defining low-risk, acceptable use cases directly in the policy, along with employee obligations and restrictions on certain uses, to provide more clarity and reduce the risk of misuse.

The general counsel and executive leadership should agree on who has the ultimate authority to decide on GenAI use cases. Legal teams should work with functional, business, and senior leadership stakeholders to align on risk ownership and review duties.

"Document the enterprise unit that governs the use of AI so that employees know to whom they should reach out with questions," Cohn said. "General counsel must be clear if there are uses that do not need approval, specify what they are directly in the policy, and provide examples."

For use cases that need leadership approval, inform employees what they are, clearly document the role that can provide approval, and list that role’s contact information. That seems simple enough.

Organizations should have a policy of disclosing the use and monitoring of GenAI technologies to internal and external stakeholders. General counsel should help companies consider what information needs to be disclosed and with whom it should be shared.

A critical tenet common across global jurisdictions is that companies should be transparent about their AI tools. People want to know if companies use GenAI applications to craft corporate messages, whether that content appears on a public website, a social channel, or in an application.

Outlook for Generative AI Policy Breakthroughs

"This means general counsel should require employees to ensure the GenAI-influenced output is recognizable as machine-generated by clearly labeling text. Organizations also may consider including a provision to place watermarks in AI-generated images to the extent technically feasible," Cohn concluded.

These suggestions give even the most cautious organizations a sound basis for acting on their GenAI experimentation and adoption plans. However, I believe this policy development process must move forward with some haste. Why? Progressive competitors are likely already gaining new ground.
