
3 companies share insight into forming AI governance

Leaders stress importance of balancing responsibility and innovation


When ChatGPT launched in 2022, Principal Financial Group Chief Information Officer Kathy Kay said many businesses saw two paths: allow people to try it out or lock it down.

Principal let adoption run for a few months before recognizing a need to change its approach.

“We quickly, like lots of companies, had to do a little bit of a lockdown,” Kay said. “We realized people were asking questions that maybe they shouldn’t in the public version of ChatGPT, so it made us quickly realize we need better governance.” 

Kay said that as the company started to craft guidance, leaders found the right path was not one or the other but a combination balancing innovation with responsible, ethical use.

“We negotiated what I felt like was a good compromise of we’re going to allow it but we’re going to make sure people know how to be responsible when using it, and we’re also going to make sure the leader [understands] the role they need to play and what our expectations are,” she said.

Other businesses are coming to the same conclusion — that they want to explore opportunities for innovation with AI but need guardrails to help manage risk and protect customers. More formalized governance, like a set of principles, can help organizations establish a guiding philosophy on AI and determine where AI aligns with company values and goals.

Principal, EMC Insurance Cos. and Lean Techniques have all started AI governance efforts, and their leaders shared with the Business Record what they have learned so far from the process of developing and implementing guidelines.


Kathy Kay, chief information officer, Principal Financial Group. Submitted photo

Forming governance requires cross-functional collaboration

One of the first steps in the process of creating AI governance for all three organizations was establishing some form of cross-functional group.

In the weeks following the release of ChatGPT, Principal set up a study group for people interested in generative AI, drawing members from departments across the company, including legal, compliance, engineering and business.

Principal also issued a company-wide training on responsible use of generative AI models and set up “sandboxes” in its cloud environment where employees can securely test use cases.

Kay said that by learning together, study group members helped move strong use cases forward more quickly, because compliance and legal could identify and help troubleshoot issues on the front end.

“Oftentimes, I think security, legal, compliance, privacy [are] viewed as they want to stop everything I’m doing? They really don’t. They want to be enablers too, but we have to also make sure we’re adhering to everything that we should be,” Kay said.

EMC Insurance found that a collaborative approach to governance helps “foster buy-in from leaders and other teams,” Damon Youmans, EMC’s vice president of digital services, said in an email interview with the Business Record.

Kristina Colson, AI strategy lead, Lean Techniques

As a technology consulting business, part of Lean Techniques’ impetus to create AI principles was equipping its teams to help clients understand and navigate the new technology, said Kristina Colson, Lean Techniques’ AI strategy lead.

The dawn of generative AI prompted a lot of questions from clients, so the firm needed to develop common language and principles to guide decisions made across the company.

Colson said Lean Techniques’ governance focuses on giving employees the resources they need to make decisions independently, in keeping with the firm’s culture as an agile organization.

“It’s the ‘lean’ in Lean Techniques. It’s keeping things lightweight and lean. We don’t want a massive decision framework because that’s just going to slow things down,” she said.

Lean Techniques’ collaborative process for creating governance started with crowdsourcing ideas from people of various backgrounds across the organization. The group reviewed materials such as other companies’ AI principles and the European Union’s AI Act.

“We came together in a workshop session where we distilled the pieces from all these different sources together into which of these things do we also care about? How do we frame it in a way that it aligns with how we already think about software development, or how we already think of product development,” Colson said.

Their principles revolve around fairness, transparency and accountability, including statements that Lean Techniques will proactively reduce cultural, social or other biases and prioritize transparency into the functioning and decision-making of AI algorithms. 

Brandon Carlson, founder and CEO, Lean Techniques

The consultancy also created an internal AI use policy to help employees evaluate risk when using AI tools in their work. Lean Techniques founder and CEO Brandon Carlson said he and Colson were passionate about having flexibility in the policy, especially as AI evolves.

“We don’t want to stifle people, we don’t want to limit people’s choice or any of those kinds of things, but we want to at least help them understand what the risks are that we’re trying to mitigate and not be as prescriptive … you need your team members to be exploring these things, and you need them to be able to experiment,” Carlson said.

A core principle for Lean Techniques, EMC and Principal is “human in the loop,” meaning a human verifies the accuracy of work done by AI.

Kay said Principal’s stance at this point is that humans will always be in the middle and customers will not interact directly with a generative AI model.

Colson said the “human in the loop” concept should be applied at checkpoints to get feedback throughout a process, similar to the way product development is done. Making a point to include humans can also help them develop trust in the technology, Carlson said.

“There’s so much fear around some of this stuff right now. [This says], ‘Hey, we want you to be in the loop because we want you to see that it’s not something to be feared,’” Carlson said.
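
In code, that principle often takes the shape of an explicit approval gate: a model can draft, but a person must sign off before anything is released. The following Python sketch is a minimal, hypothetical illustration of the pattern; the function names and the console-based review step are assumptions for illustration, not a description of any of these companies’ systems.

```python
# A minimal sketch of a human-in-the-loop checkpoint. Hypothetical:
# not any of these companies' actual systems. An AI draft is held
# for human review and only released once a person approves it.

from dataclasses import dataclass


@dataclass
class Draft:
    prompt: str
    ai_output: str
    approved: bool = False


def generate_draft(prompt: str) -> Draft:
    # Stand-in for a call to a generative AI model.
    return Draft(prompt=prompt, ai_output=f"[model draft for: {prompt}]")


def human_review(draft: Draft) -> Draft:
    # A person verifies the output's accuracy before it is used.
    answer = input(f"Approve this output? (y/n)\n{draft.ai_output}\n> ")
    draft.approved = answer.strip().lower() == "y"
    return draft


def release(draft: Draft) -> str:
    # Nothing unapproved reaches a customer or a downstream system.
    if not draft.approved:
        raise ValueError("Draft rejected; revise or escalate to a person.")
    return draft.ai_output


if __name__ == "__main__":
    print(release(human_review(generate_draft("summarize the Q3 claims report"))))
```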

Insights from implementation

EMC and Principal have focused on applying generative AI to help employees be more effective in their roles. Youmans said one example is an internal self-service tool for employees to use when they need technology support. 

“The platform quickly connects team members with technology solutions without the need to submit an actual support ticket,” Youmans said. “This saves time for the requestor and gets them back to work quickly, while freeing up our support teams to focus on more complex requests.”

Youmans said EMC based its principles on generative AI best practices, including setting clear objectives, privacy and responsible data handling, and openness to change.

Turning to company values also helps fine-tune AI governance, Kay said. For example, through the lens of AI, valuing accountability and transparency looks like being able to trace and explain how an AI model reached a conclusion or decision, she said.
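
One common way to make that kind of traceability concrete is an audit log recording which model, prompt and output lay behind each decision. The Python sketch below is a generic illustration assuming a simple JSON-lines log file; the field names are hypothetical, and the article does not describe Principal’s actual tooling.

```python
# A generic sketch of an AI decision audit log (illustrative only;
# not Principal's actual tooling). Recording the model version,
# prompt and output for each call is one way to make a model's
# conclusions traceable and explainable after the fact.

import json
import time
import uuid


def log_ai_decision(model_version: str, prompt: str, output: str,
                    path: str = "ai_audit_log.jsonl") -> str:
    record = {
        "id": str(uuid.uuid4()),         # unique ID to cite in a review
        "timestamp": time.time(),        # when the decision was made
        "model_version": model_version,  # which model produced it
        "prompt": prompt,                # what the model was asked
        "output": output,                # what it concluded
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]
```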

One of the biggest lessons all three companies learned from putting their AI governance into practice is to make sure AI is actually suited to the problems they’re solving.

Kay said the enthusiasm and hype for the new technology can quickly distract from an easier solution.

“Originally when we started getting all these ideas, a lot of them could be solved quite frankly with simple automation or a traditional AI model, but there was this desire to use gen AI,” she said. “We always emphasize … fall in love with the problem, don’t fall in love with the solution.”

That mindset also helps Principal differentiate between solutions that result from hype and those with real potential to help the organization, she said.

Colson said she sees companies launching AI initiatives out of a fear of missing out, when they should instead find where AI can contribute real value to the business.

“If you need to get to a very specific, accurate answer every single time, you don’t want to use generative AI,” she said. “If you need help getting towards an answer or you need help making a decision, then generative AI is probably great to pull the pieces of information together or to make them understandable to you in a more easily digestible way.” 

Carlson said there are many AI technologies besides generative AI, such as computer vision models and predictive analytics, that are more mature and could readily apply to businesses.

One of the first tests of Lean Techniques’ AI principles was creating a chatbot that allows teams to search company documents. Colson and the developers building it met to check that it was following the principles, and she said the conversation influenced which features they chose to include and which they left out.
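
The article does not detail how the chatbot works, but document-search assistants are typically built on a retrieval step: find the most relevant passages, then have a model answer from them, with a human reviewing the result. The Python sketch below illustrates only that retrieval step, using naive keyword overlap; every name in it is hypothetical, and real systems typically use embeddings.

```python
# A deliberately simple sketch of the retrieval step behind a
# document-search chatbot. Hypothetical: the article does not
# describe Lean Techniques' implementation, and real systems
# typically use embeddings rather than keyword overlap.

def top_documents(question: str, documents: dict[str, str], k: int = 3):
    """Rank documents by how many of the question's terms they share."""
    query_terms = set(question.lower().split())
    scored = []
    for title, text in documents.items():
        overlap = len(query_terms & set(text.lower().split()))
        scored.append((overlap, title))
    scored.sort(reverse=True)
    # Keep only documents that matched at least one term; a generative
    # model could then answer from these, with a human reviewing it.
    return [title for score, title in scored[:k] if score > 0]


docs = {
    "travel-policy.txt": "Employees book travel through the approved portal.",
    "ai-use-policy.txt": "AI output must be reviewed by a human before release.",
}
print(top_documents("What is our AI policy on human review?", docs))
```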

Colson said the team intentionally observed themselves to identify the kinds of conversations they were having that they wouldn’t have had before establishing governance.

“By going through that process, we were able to anticipate some of the questions that are going to come up or some of the hurdles that we might run into in the future, like how do we start small and then expand the bot’s capabilities as it matures? What’s something that we need to care about now? What is something that we want to care about later?” she said.

The process of creating governance and evaluating potential risks led Lean Techniques to also look harder at the policies of vendors and tools they use. Colson reviewed privacy and data use policies for AI tools employees use to determine how those companies store and use data.

Because its clients serve their own customers, the consulting firm has an “extra layer” of data to safeguard, she said.

With no AI regulation in the U.S., Colson said, the onus is on individual companies to understand how technology providers are treating their data and their customers’ data.

“I feel like you have to get down to looking at the privacy and the data-use policies of the tool that you’re working with, because basically, I think what’s going to happen is all these tools that already exist out there that we use every day, they’re all going to start getting these little AI features and AI augmentations,” she said.

In addition to the initial study group, Principal has leveraged an existing data and analytics governing body to help vet and prioritize use cases, and it has formed an ethical and responsible AI committee that includes executives, managers and operators.

Kay said each business unit is now prioritizing its own AI initiatives, and the committee helps provide feedback on how policies may need to evolve. She said the company will likely create a new entity to help track what each unit is learning.

“What we’re finding, though, is we could be better taking advantage of each other’s solutions. We’re finding some similar themes. So we’re starting to step back again and say we need to have more of a [center of excellence], which helps enable these models to be built more effectively but also is doing all the standard support … so that each team isn’t having to solve those problems themselves,” she said. 

Two years in, she said the effort is still very nascent, and more learning lies ahead as AI matures.

“The real thing now is as more and more people are learning things, how do we make sure we’re capturing all those learnings … I think we have all the bases covered and we continue to evolve and formalize more but we’re also trying not to become bureaucratic at all, so that you can nimbly change,” she said. “But I think there’s a general recognition that because of how quick the adoption [was], there are going to be these ongoing changes and it’s our job to keep communicating.”


Advice for getting started on governance and outlook on AI developments 

Kay, Youmans, Colson and Carlson shared some additional advice for companies that are looking to form their own AI governance and weighed in on how they anticipate policies will need to evolve as the technology develops.

Here is what they said.


Considerations when creating AI governance

Encourage engagement
“There’s two approaches you could take. You could have the chosen few try it out for the rest of the company. I would rather those who are interested join the group. … Let your employees be engaged. When we started, there was a concern [that] they have their jobs [to do]. One of the things I found, especially with innovation, if they have an interest and are excited about trying something, they’re still going to do their job. They’re going to get their job done, and actually, they’re going to do it more effectively because they’re motivated because they’re excited about this other thing. I think just trying to be as open and inclusive as possible to get all the different perspectives, that’s where true innovation is at its best.”

— Kathy Kay

Take advantage of all lessons
“We’ve had use cases that haven’t panned out. You could just shut it down, but then you lose that learning of ‘Hey, what did we find that made us say this isn’t going to cut it?’ and making it still a celebration of a learning moment.”

— Kathy Kay

Remember governance affects the whole organization
“You’ll get far better results if you maintain a co-creation mindset. You can’t rely on technologists developing innovative solutions in a vacuum. You need a good partnership with the business to create an outcome together. Think about how AI might amplify a business outcome or process, and focus on the expected outcome of what you’re trying to do, whether that’s increasing revenue, gaining efficiencies. Work with your business areas to understand their needs and ensure they understand the value of AI solutions. In a perfect world, a business area that sees the possibilities will bring you their needs and ask whether AI can help them do things more efficiently.”

— Damon Youmans

Find a safe place to start
“I advocate for starting with AI applications that are easily adoptable and have clear productivity benefits across the organization. Tools that summarize meetings or assist with routine tasks can provide immediate value and help build confidence in AI technologies across the organization.”

— Damon Youmans

Consider potential harms
“I felt like people [at Lean Techniques] really cared about [doing] no harm. We love technology. We’re excited about it. We want to do the new things, and we want to solve problems, and we want to be creative in how we solve those problems. But we don’t want something that we’ve had involvement in creating to be abused [or] built in a way that accidentally harms people. I feel like a majority of companies or people don’t want to deliberately do that, but they don’t maybe take enough time on the right conversations to figure out what could the bad stuff be?”

— Kristina Colson

Consider other stakeholders’ perspectives when needed
“We have had to make sure that we’re aligned to our principles, and also our clients’ principles. We always have the duality, like we need to have an internal compass as a consulting agency, but we also need to obviously be respectful and considerate of where our clients want to go because they maybe have their own principles.”

— Kristina Colson

Set up for long-term success today
“I know some HR departments in local industries here … and their stance is absolutely no AI. I think it’s a great way to be really safe in the short term, but in the long term, it’s a good way to hamstring your business because AI hasn’t really materialized as a force of nature yet — the hype certainly has, but the applicability and use cases haven’t, but they will once it matures a little bit.”

— Brandon Carlson


Developing policy for evolving technology

“Continuous learning and adaptation are so important to the long-term success of your AI adoption. As AI tech and the regulatory environment evolve, so must your governance principles. This is an emerging area of law and regulation, so you’ll need to be prepared to account for that when continuing to develop your governance structure around AI.”

— Damon Youmans

“The thing I always say is our policies are digital policies. They’re not in stone, so as we learn, we’re a learning organization and we need to evolve, we’re going to change it, but we’re going to communicate why we’re changing. Here’s what we learned. Here’s what it means. This is why it’s evolved.” 

— Kathy Kay

“Everybody’s still learning what are the right controls to be able to anticipate the introduction of bias or drift, and again a lot of it just isn’t invented yet … so I think that’s the area that we’re going to see better learnings of what are ways to continue to test that you’re not introducing [bias]?”

— Kathy Kay

“Some of the bad pitfalls that we’ve heard about in the media, I think companies have gone too far with handing off control to the AI, especially externally facing. I think the question that companies need to be talking about internally or giving some guidance for is, how much control are we OK with handing off to AI? Are we letting it make a decision that is going to impact human lives to some extent or are we augmenting a person’s ability to make that decision?”

— Kristina Colson


Sarah Diehn

Sarah Diehn is digital news editor and a staff writer at Business Record. She covers innovation and entrepreneurship, manufacturing, insurance, and energy.

