
The Legal Challenges for Generative AI Policy

Are you taking baby steps toward Artificial Intelligence (AI) adoption? If so, you're not alone: many enterprise business leaders remain stuck on the sidelines while early adopters gain momentum.

As employees across the globe experiment with Generative AI (GenAI), the most cautious corporate general counsels are issuing guidance that those undecided enterprise leaders will find valuable.

"To craft an effective policy, general counsel must consider risk tolerance, use cases and restrictions, decision rights, and disclosure obligations," said Laura Cohn, senior principal researcher at Gartner.

Enterprise Generative AI Market Development

Having GenAI guardrails and policies in place will better prepare most slow-moving enterprises for possible future legal requirements. Meanwhile, the market leaders are racing ahead of their peer group.

Based on practices in AI policies instituted by companies and city governments, the cautious general counsel should direct organizations to consider four actions when establishing a policy.

To determine risk tolerance, legal leaders should borrow a practice from enterprise risk management: guide a discussion with senior management on must-avoid outcomes, and discuss the potential applications of GenAI models within the business.

Once these applications are identified, consider the outcomes that may result from them: which must be avoided, and which entail acceptable risk given the benefits of AI use cases that drive digital business growth.

"Guidance on using Generative AI requires core components to minimize risks while providing opportunities for employees to experiment with and use applications as they evolve," said Cohn.

Legal leaders should understand how GenAI could be used throughout the business by collaborating with other functional leaders. Compile a list of use cases and organize them according to perceived risk.

According to the Gartner assessment, for higher-risk situations, consider applying more comprehensive controls, such as requiring approval from a senior manager, an AI committee, or a task force.

In the highest-risk cases, legal leaders may consider an outright prohibition. For lower-risk use cases, they may apply basic data security safeguards, such as requiring human review.
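To make that tiering concrete, here is a minimal sketch of how a use-case risk register might be encoded. The tier names, controls, and example use cases below are hypothetical illustrations for demonstration only, not Gartner's taxonomy or any prescribed standard.

```python
# Hypothetical sketch of a GenAI use-case risk register.
# Tier names, controls, and use cases are illustrative assumptions,
# not a prescribed taxonomy.

RISK_TIERS = {
    "prohibited": {
        "controls": ["outright ban"],
        "examples": ["feeding regulated customer data into public GenAI tools"],
    },
    "high": {
        "controls": ["senior manager or AI committee approval", "human review"],
        "examples": ["drafting external legal or financial communications"],
    },
    "low": {
        "controls": ["basic data security safeguards", "human review of output"],
        "examples": ["brainstorming internal meeting agendas"],
    },
}

def controls_for(use_case: str) -> list[str]:
    """Return the controls that apply to a registered use case."""
    for tier in RISK_TIERS.values():
        if use_case in tier["examples"]:
            return tier["controls"]
    # Unregistered use cases default to routing a question to the
    # documented governance contact, per the guidance below.
    return ["route to the designated AI governance contact"]

print(controls_for("brainstorming internal meeting agendas"))
```

A simple register like this also supports the decision-rights guidance that follows: each tier can name who approves it, and anything not yet registered is routed to a known owner rather than left to individual judgment.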

"General counsel should not be overly restrictive when crafting policy," Cohn said. "Banning use of these applications outright, or applying hard controls, such as restricting access to websites, may result in employees simply using them on their personal devices."

Leaders can consider defining low-risk, acceptable use cases directly in the policy, as well as employee obligations and restrictions on certain uses, to provide more clarity and reduce the risk of misuse.

The general counsel and executive leadership should agree on who has the ultimate authority to decide on GenAI use cases. Legal teams should work with functional, business, and senior leadership stakeholders to align on risk ownership and review duties.

"Document the enterprise unit that governs the use of AI so that employees know to whom they should reach out with questions," Cohn said. "General counsel must be clear if there are uses that do not need approval, specify what they are directly in the policy, and provide examples."

For use cases that need leadership approval, inform employees what they are, clearly document the role that can provide approval, and list that role’s contact information. That seems simple enough.

 Organizations should have a policy of disclosing the use and monitoring of GenAI technologies to internal and external stakeholders. General counsel should help companies consider what information needs to be disclosed and with whom it should be shared.

A critical tenet common across global jurisdictions is that companies should be transparent about their AI tools. People want to know if companies use GenAI applications to craft corporate messages, whether the information appears on a public website, social channel, or application.

Outlook for Generative AI Policy Breakthroughs

"This means general counsel should require employees to ensure the GenAI-influenced output is recognizable as machine-generated by clearly labeling text. Organizations also may consider including a provision to place watermarks in AI-generated images to the extent technically feasible," Cohn concluded.

These suggestions give even the most cautious organizations a way to act on their GenAI experimentation and adoption plans. However, I believe this policy development process must move forward with some degree of haste. Why? Progressive competitors are likely already gaining new ground.
