
Samsung Among Companies Starting to Draft ChatGPT Policies for Workers

[Photo caption: Samsung is the world's biggest maker of memory chips. Credit: Jakub Porzycki | NurPhoto | Getty Images]
  • Companies big and small are shaping policies that say what employees can and can't do with generative AI tools such as OpenAI's ChatGPT.
  • In May, Samsung banned the use of generative AI tools after the company discovered an employee uploaded sensitive code to ChatGPT.
  • Building company policy around generative AI is part of the process of communicating risk.

Companies big and small are shaping policies that say what employees can and can't do with generative artificial intelligence tools such as OpenAI's ChatGPT. Some organizations remain bullish while others are closing the curtains on access for the time being, but one thing is certain: the risks of generative AI are manifold, and it's up to executives to decide whether to get ahead of the issue or hold off while the technology rapidly evolves.

In May, Samsung banned the use of generative AI tools after the company discovered an employee had uploaded sensitive code to ChatGPT. Inputs to ChatGPT can be used to train the underlying model, which made the upload an intellectual property issue.

Digital business contract software company Ironclad, on the other hand, developed its own generative AI use policy, even though its technology is itself powered by generative AI. That goes to show that even the most bullish companies are putting up some barriers to mitigate risk.

In generative AI, risks abound

Ironclad CEO and co-founder Jason Boehmig was a practicing attorney before founding the company more than eight years ago. That legal mindset persists, which is one reason Ironclad moved so swiftly on a generative AI policy. "You're responsible for the output of AI," Boehmig said. That includes so-called hallucinations: factually incorrect, overstated, or irrelevant responses the AI tool generates.

IP and hallucinations are just a couple of the risks associated with generative AI. According to Navrina Singh, CEO and founder of responsible AI governance platform Credo AI and a member of the National AI Advisory Committee (NAIAC), the risks are what she calls "technosocial": they span technical issues like cybersecurity and liability, but also societal ones such as copyright infringement and even climate standards and regulation.

Building company policy around generative AI is part of the process of communicating risk, which goes beyond simply knowing or acknowledging risk.

Creating a generative AI company policy

Singh says the importance of a company policy around generative AI usage boils down to the question, "How do you adopt AI confidently while managing risk, while staying compliant and while actively being honest about where you won't be able to manage risk?"

Vince Lynch, CEO of AI-powered decision-making platform IV.AI, is currently working with a panel of AI leaders to develop adoptable policies for major businesses. He says the time to create standards is now, even with the understanding that they will change. "It's incredibly important that companies start right now and deploy different structures to ensure they are cautious of the way that AI can impact their company," he said.

Singh agrees, but Boehmig offers a contrarian view even as his company takes policy-based precautions. Boehmig said, "Particularly if you're in a highly regulated industry, I think it's okay to sit back and say, 'We're going to watch how it unfolds.'"

And it will unfold. NAIAC's Year 1 Report, released in May, frames AI as a technology requiring government attention sooner rather than later, including the adoption of public and private policy based on the National Institute of Standards and Technology's Artificial Intelligence Risk Management Framework. Then there's the European Union's AI Act, which is working its way toward adoption and will affect U.S. companies that do business abroad.

In the meantime, companies continue to set their own boundaries. Highlights of Ironclad's policy include classifying data, banning data labeled confidential from being entered into generative AI tools, spelling out employee accountability for output, and prohibiting the input of customer and personally identifiable data.
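As a rough illustration of how rules like these can be enforced in software, here is a minimal sketch of a pre-submission check that blocks confidential or personally identifiable data before a prompt reaches an external AI tool. The function name, labels, and detection patterns are hypothetical, not Ironclad's actual implementation, and a real deployment would rely on a dedicated data-classification service rather than a few regular expressions.

    import re

    # Hypothetical patterns for personally identifiable data (illustrative only).
    PII_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # U.S. Social Security number
        re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
        re.compile(r"\b(?:\d[ -]?){13,16}\b"),       # card-number-like digit runs
    ]

    # Hypothetical classification labels a company might ban from AI tools.
    CONFIDENTIAL_LABELS = {"confidential", "restricted", "internal-only"}

    def screen_prompt(text: str, data_label: str) -> None:
        """Raise if the prompt violates policy; otherwise let it through."""
        if data_label.lower() in CONFIDENTIAL_LABELS:
            raise PermissionError(
                f"Data classified '{data_label}' may not be sent to generative AI tools."
            )
        for pattern in PII_PATTERNS:
            if pattern.search(text):
                raise PermissionError(
                    "Prompt appears to contain personal data; remove it before submitting."
                )

    # Example: this prompt is rejected because of the email address.
    try:
        screen_prompt("Draft a renewal notice for jane.doe@example.com", data_label="public")
    except PermissionError as err:
        print(err)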

'The roadblocks are when companies don't think about policy.'

If a company is considering creating a policy on generative AI usage now, or simply wants to strategize for what may be an inevitable endeavor, experts suggest certain steps to take.

One of the first steps should be determining stakeholders. "It truly needs to be a multi-stakeholder dialogue," Singh said, including policy, AI, risk and compliance, and legal teams.

Lynch suggests asking, "What is the intent of the model?" Whether a company has built a generative AI model itself or relies on external ones, it's important to understand how the model is trained, how it functions, and how it's tested.

Creating a generative AI policy is also a good opportunity for companies to scrutinize all of their technology policies, including implementation, change management, and long-term usage. Lynch poses the question, "How often are you checking in on these policies after you deploy them?"

With errors and hallucinations common in large language models, fact-checking and employee accountability are a must, as is keeping a human in the loop. That's why Lynch advises, "Don't use AI to conduct business." That means not letting AI write code or materials that go directly to clients or users, and not relying on generative AI output as foundational content for work, because it carries inherent bias.

For some businesses, this may seem overwhelming, which is why vendors are beginning to build extensions that add guardrails. Credo AI's GenAI Guardrails, for example, applies a governance framework as employees type into ChatGPT and flags compliance issues, a form of automated oversight.
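To make the flag-as-you-type idea concrete, here is a small advisory check, offered as a sketch rather than a description of Credo AI's actual product: it scans a draft prompt against a hypothetical rule set and returns warnings instead of blocking outright. Real guardrail tools use far richer classifiers than keyword matching.

    # Hypothetical advisory guardrail: each compliance concern maps to a
    # simple keyword test over the draft prompt (illustrative only).
    GUARDRAIL_RULES = {
        "possible source code": ("def ", "class ", "#include"),
        "possible customer reference": ("customer list", "account id"),
        "possible legal exposure": ("nda", "settlement", "under seal"),
    }

    def compliance_flags(draft: str) -> list[str]:
        """Return advisory warnings for a draft prompt without blocking it."""
        lowered = draft.lower()
        return [
            f"Flag: {concern} detected"
            for concern, keywords in GUARDRAIL_RULES.items()
            if any(keyword in lowered for keyword in keywords)
        ]

    # Example: a draft that pastes code and mentions an NDA raises two flags.
    for warning in compliance_flags("Summarize this NDA: def parse(data): ..."):
        print(warning)

Returning flags rather than raising errors leaves the decision with the employee, which keeps the human in the loop that the experts above emphasize.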

Whether a company's strategy is to ban employees from using generative AI (for now) like Samsung, to place barriers and document accountability like Ironclad, or something in between, Lynch says it's important to have a clear path forward. "The roadblocks are when companies don't think about policy," he said, because that's where crisis management enters the picture.
