Turn AI Risk Into Strategic Advantage | Innovation Insights

Experts outline a flexible framework that works for a three-person shop or a 30,000-employee enterprise.



Innovation Insights
with Donny Shimamoto

Center for Accounting Transformation

Artificial intelligence is advancing too quickly for organizations to adopt a "wait-and-see" approach, according to experts on the latest episode of Innovation Insights.

Host Donny Shimamoto, CPA.CITP, CGMA, speaks with Jason Pikoos and David Wood, co-authors of the new Generative AI Governance Framework, about how companies can responsibly integrate AI into their operations.


"Governance isn't optional," says Pikoos, former managing partner of client experience for Connor Group. "It's good hygiene for any organization, whether you're a three-person shop or a multinational."

The framework outlines 69 control considerations in areas such as bias risk, human oversight, and process integration. It is not a checklist to be blindly completed, the authors stress, but a flexible guide that organizations can tailor to their needs.

Unique Risks Require New Thinking

One of the biggest concerns is "hallucinations," where AI generates false information. Pikoos advises treating AI like a junior staff member: review its output, understand where accuracy is critical, and maintain healthy skepticism.

Wood, an accounting professor at Brigham Young University, warns that many organizations don't realize where AI is already in use: it is embedded in office software, search tools, and enterprise systems. "Banning AI is pointless," he says. "It's already here."

Human, Ethical, and Social Impacts

The framework uniquely incorporates human and social considerations, urging leaders to communicate openly about how AI adoption will affect staff. "Leaders must be clear on whether AI is about cost-cutting, growth, or expanding capabilities," Pikoos says.

Continuous Improvement Is Key

Both AI technology and human skill evolve rapidly. Wood describes the "jagged technological frontier," where each new version of AI improves some capabilities but may degrade others. Regular evaluation and adjustment are critical.

Shimamoto sees the framework as more than risk management: it is also a business opportunity. "Firms can use this internally, then help clients do the same," he says.

The framework is available for free at genai.global.

Top 9 Takeaways

  1. AI governance applies to organizations of all sizes.
  2. The Generative AI Governance Framework offers 69 control considerations.
  3. Hallucinations are among the most common and misunderstood AI risks.
  4. AI is often already embedded in everyday tools.
  5. Governance should integrate with existing IT and corporate policies.
  6. Culture, ethics, and human impact must be factored into AI adoption.
  7. Continuous improvement is required, both for technology and for people.
  8. Banning AI is ineffective; risk-aware adoption is the goal.
  9. Firms can turn governance expertise into client advisory services.