
The Legal Challenges for Generative AI Policy

Are you taking baby steps toward Artificial Intelligence (AI) adoption? If so, you're not alone: many enterprise business leaders remain stuck on the sidelines while the early adopters gain momentum.

As employees across the globe experiment with Generative AI (GenAI), cautious corporate general counsels are issuing guidance that will be valuable to those undecided enterprise leaders.

"To craft an effective policy, general counsel must consider risk tolerance, use cases and restrictions, decision rights, and disclosure obligations," said Laura Cohn, senior principal researcher at Gartner.

Enterprise Generative AI Market Development

Having GenAI guardrails and policies in place will better prepare most slow-moving enterprises for possible future legal requirements. Meanwhile, the market leaders are racing ahead of their peer group.

Based on practices in AI policies instituted by companies and city governments, the cautious general counsel should direct organizations to consider four actions when establishing a policy.

To determine risk tolerance, legal leaders should borrow a practice from enterprise risk management: guide a discussion with senior management on must-avoid outcomes, and on the potential applications of GenAI models within the business.

Once these are identified, consider the outcomes each application could produce, decide which must be avoided, and determine which entail acceptable risk given the benefits of AI use cases that drive digital business growth.

"Guidance on using Generative AI requires core components to minimize risks while providing opportunities for employees to experiment with and use applications as they evolve," said Cohn.

Legal leaders should understand how GenAI could be used throughout the business by collaborating with other functional leaders. Compile a list of use cases and organize them according to perceived risk.

According to the Gartner assessment, for higher-risk situations, consider applying more comprehensive controls, such as requiring approval from a senior manager, an AI committee, or a task force.

In the highest-risk cases, legal leaders may consider an outright prohibition. For lower-risk use cases, they may apply basic safeguards, such as data security controls or requiring a human review of outputs.
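The tiering described above can be sketched as a simple use-case register that maps each perceived risk tier to its required controls. This is a minimal illustration, not a prescribed implementation: the tier names and control labels are assumptions for the example, not Gartner's taxonomy.

```python
# Illustrative use-case register: risk tiers mapped to required controls.
# Tier names and control labels are hypothetical examples.
CONTROLS_BY_TIER = {
    "prohibited": ["outright ban"],
    "high": ["senior manager sign-off", "AI committee review"],
    "low": ["data security safeguards", "human review of outputs"],
}

def required_controls(tier: str) -> list[str]:
    """Return the controls a GenAI use case must satisfy for its risk tier."""
    if tier not in CONTROLS_BY_TIER:
        raise ValueError(f"Unknown risk tier: {tier}")
    return CONTROLS_BY_TIER[tier]

# Example: a marketing-copy drafting use case classified as low risk.
print(required_controls("low"))  # → ['data security safeguards', 'human review of outputs']
```

Keeping the register as data rather than scattered rules makes it easy to publish the same tiers directly in the written policy, which is what gives employees the clarity the guidance calls for.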

"General counsel should not be overly restrictive when crafting policy," Cohn said. "Banning use of these applications outright, or applying hard controls, such as restricting access to websites, may result in employees simply using them on their personal devices."

Leaders can consider defining low-risk, acceptable use cases directly into policy, as well as employee obligations and restrictions on certain uses, to provide more clarity and reduce the risk of misuse.

The general counsel and executive leadership should agree on who has the ultimate authority to decide on GenAI use cases. Legal teams should work with functional, business, and senior leadership stakeholders to align on risk ownership and review duties.

"Document the enterprise unit that governs the use of AI so that employees know to whom they should reach out with questions," Cohn said. "General counsel must be clear if there are uses that do not need approval, specify what they are directly in the policy, and provide examples."

For use cases that need leadership approval, inform employees what they are, clearly document the role that can provide approval, and list that role’s contact information. That seems simple enough.

Organizations should have a policy of disclosing the use and monitoring of GenAI technologies to internal and external stakeholders. General counsel should help companies consider what information needs to be disclosed and with whom it should be shared.

A critical tenet common across global jurisdictions is that companies should be transparent about their AI tools. People want to know if companies use GenAI applications to craft corporate messages, whether the information appears on a public website, social channel, or application.

Outlook for Generative AI Policy Breakthroughs

"This means general counsel should require employees to ensure the GenAI-influenced output is recognizable as machine-generated by clearly labeling text. Organizations also may consider including a provision to place watermarks in AI-generated images to the extent technically feasible," Cohn concluded.
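The labeling requirement above can be made concrete with a small helper that stamps a plain disclosure label onto machine-generated text before it is published. This is a minimal sketch under the assumption that a simple prefixed label satisfies the policy; the label wording is a hypothetical example.

```python
# Hypothetical disclosure label; actual wording would come from policy.
DISCLOSURE_LABEL = "[AI-generated content]"

def label_genai_output(text: str) -> str:
    """Prepend a disclosure label so readers can recognize
    machine-generated text, per the transparency guidance."""
    return f"{DISCLOSURE_LABEL} {text}"

print(label_genai_output("Quarterly update drafted with GenAI assistance."))
# → [AI-generated content] Quarterly update drafted with GenAI assistance.
```

For images, the analogous step would be watermarking at generation time where the tooling supports it, which is harder to retrofit than text labeling.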

These suggestions give even the most cautious organizations a basis to act on their experimentation plans and adopt GenAI tools. However, I believe this policy development process must move forward with some haste. Why? Progressive competitors are likely already gaining new ground.
