strategic ai, not shiny objects

leaders tie ai to real workflows, not wish lists, and adoption follows. 


accounting arc
with liz mason, byron patrick, and donny shimamoto
center for accounting transformation

accounting leaders are accelerating ai deployment across tax, audit, and advisory—but three accounting veterans and hosts of accounting arc argue the difference between adoption and shelfware comes down to focus, guardrails, and relentless training. 

on the latest episode, hosts liz mason, cpa; byron patrick, cpa.citp, cgma; and donny shimamoto, cpa.citp, cgma, dissect how large firms are approaching microsoft copilot and adjacent tools. they agree that leaders should start now, but do so strategically.


patrick, ceo of verifyiq and co-founder and educator at tb academy, opens with a caution that resonates across enterprise tech cycles: many organizations feel pressured to adopt generative ai without clearly defining expected outcomes. he urges leaders to ask what success specifically looks like, whether that is fewer review points, faster cycle times on close, or reduced audit adjustments.

mason, ceo of high rock accounting, pushes further. “efficiency,” she says, “is not a strategy.” instead, she recommends scoping use cases at the workflow level, such as building an agent to review every journal entry for risk signals, or targeting audit findings with automations that could reduce fees or post-audit remediation. that framing anchors a rollout in risk and value, not novelty.

their emphasis mirrors market research that finds most failed pilots lack a tightly defined problem. recent reporting on an mit study notes that approximately 95% of enterprise generative ai initiatives yield no measurable profit and loss (p&l) impact, often because tools are added without redesigning workflows. the small minority that succeeds starts with specific, high-leverage problems and right-sized partners.

choose ai—or not.
patrick stresses that many wins still come from non-ai automation. “there’s so much you can do with task automation alone,” he shares. mason agrees, distinguishing agentic ai—systems that plan steps, use tools, and adapt—from chat prompts. industry primers describe agentic ai as decision-capable and multi-step, best aimed at orchestrating tools and data rather than replacing them.  
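to make the distinction concrete, here is a minimal sketch of patrick’s “task automation alone” point. the journal-entry fields and thresholds are invented for illustration: plain rules flag risk signals with no model in the loop, and an agentic version would layer planning and tool use on top of checks like these rather than replace them.

    from dataclasses import dataclass

    @dataclass
    class JournalEntry:
        # hypothetical fields for illustration; real ledgers differ
        entry_id: str
        amount: float
        preparer: str
        approver: str
        posted_after_close: bool

    def risk_flags(entry: JournalEntry, materiality: float = 50_000.0) -> list[str]:
        # rule-based checks: plain task automation, no ai involved
        flags = []
        if entry.amount >= materiality:
            flags.append("at or above materiality")
        if entry.preparer == entry.approver:
            flags.append("preparer approved own entry")
        if entry.posted_after_close:
            flags.append("posted after period close")
        if entry.amount == round(entry.amount, -3):
            flags.append("suspiciously round amount")
        return flags

    # flagged entries route to a human reviewer; the rules are the workflow
    print(risk_flags(JournalEntry("JE-1042", 75_000.0, "lmason", "lmason", True)))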

guardrails, then scale.
the hosts advocate explicit ethical layers. mason describes adding checks (e.g., a second model to review outputs for value and policy alignment) and confining data sources to internal repositories. shimamoto, founder and managing director at intraprisetechknowlogies llc and founder and mission advocacy architect at the center for accounting transformation, highlights the features of microsoft copilot studio that enable binding an agent to specific sharepoint sites and specifying which model to use—essential for maintaining confidentiality, consistency, and auditability.  
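the second-model check mason describes can be sketched generically. draft_model and review_model below are hypothetical callables standing in for whatever approved endpoints a firm wires up; this is not the copilot studio api, just the shape of the guardrail.

    ALLOWED_SOURCES = {"sharepoint://firm-policies", "sharepoint://audit-workpapers"}

    def answer_with_guardrail(question, sources, draft_model, review_model):
        # confine retrieval to approved internal repositories only
        if not set(sources) <= ALLOWED_SOURCES:
            raise ValueError(f"unapproved sources: {set(sources) - ALLOWED_SOURCES}")

        draft = draft_model(question, sources=sources)

        # second model reviews the draft for value and policy alignment
        verdict = review_model(
            "does this answer follow firm policy and rely only on internal "
            "sources? reply PASS or FAIL with a reason.\n\n" + draft
        )
        return draft if verdict.startswith("PASS") else f"[held for human review: {verdict}]"

    # toy usage with stub models; real callables would hit approved endpoints
    echo = lambda q, sources: f"draft answer to: {q}"
    approve = lambda p: "PASS: cites internal policy only"
    print(answer_with_guardrail("summarize the travel policy",
                                {"sharepoint://firm-policies"}, echo, approve))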

make the policy usable.
to drive everyday decisions, the trio likes “stoplight” guidance—red (never do this), yellow (allowed with checks), green (approved). that approach is gaining traction in education and organizational policies because it clarifies permissions quickly and fits on a one-pager.  
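one way to keep a stoplight policy usable is to encode it as data, so tooling, training decks, and the one-pager stay in sync. the categories and rules below are illustrative, not a recommended list.

    POLICY = {
        "paste client pii into a public chatbot": ("red", "never"),
        "draft an internal memo with the firm copilot": ("green", "approved"),
        "summarize client workpapers with the bound agent":
            ("yellow", "allowed with partner review of the output"),
    }

    def check(task: str) -> str:
        # default unlisted tasks to yellow rather than silently allowing them
        color, rule = POLICY.get(task, ("yellow", "unlisted: ask before proceeding"))
        return f"{color.upper()}: {rule}"

    print(check("paste client pii into a public chatbot"))  # RED: never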

training never stops.
all three hosts say adoption stalls without ongoing training and visible storytelling. “if people don’t see the reason why, they’ll block the bot from meetings,” mason says, recalling how reframing benefits turned skeptics into users. they suggest bite-sized “tips & tricks” sessions, internal newsletters that spotlight wins, and even contests to crowdsource use cases. 

prompt discipline matters—but libraries must evolve.
mason argues for prompt frameworks and even firm-approved prompt templates; patrick counters that rapidly changing models can make static libraries brittle. both agree that teaching prompt principles—such as objective framing, stakes, and specifics—improves results. industry guidance likewise recommends focusing on repeatable structures over one-off scripts as models evolve.  
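a small sketch of what “teach structures, not scripts” can look like in practice: a helper that assembles the objective, stakes, and specifics the hosts mention, leaving the wording to the user instead of freezing it in a library that drifts as models change.

    def build_prompt(objective: str, stakes: str, specifics: list[str]) -> str:
        # assemble the repeatable structure; the content stays up to the user
        lines = [f"objective: {objective}", f"stakes: {stakes}", "specifics:"]
        lines += [f"- {item}" for item in specifics]
        return "\n".join(lines)

    print(build_prompt(
        objective="summarize the q3 flux analysis for the audit committee",
        stakes="goes to the committee unedited; errors will be visible",
        specifics=["one page", "plain language", "flag variances over 10%"],
    ))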

start now—intentionally.
some leaders wonder if rapid model changes mean they should wait. the hosts disagree. headlines about failed ai pilots, they say, are a reason to sharpen strategy—not to freeze. recent coverage of the mit findings underscores their key point: problem-first pilots with workflow integration outperform tech-first experiments.

what’s next?
the conversation hints at where firms go from here: more agentic orchestration across internal systems; clearer, values-aligned policies; and shared playbooks for prompts, review steps, and audit trails. on the profession’s horizon, the aicpa/cima “rise 2040” initiative invites the field to co-design its future, suggesting that firms that build these muscles now will be better positioned for what’s next.  

firms that tie ai to strategy, secure their sources, train continuously, and measure real outcomes are already separating signal from noise. 

7 key takeaways 

  1. define outcome-level goals (risk reduction, turnaround time, quality), not just “efficiency.” 
  2. use automation where it suffices; reserve agentic ai for multi-step, tool-using workflows. 
  3. bind copilots to controlled internal sources (e.g., sharepoint) and layer reviews. 
  4. use simple stoplight policies to guide everyday choices and reduce ambiguity. 
  5. train continuously and share wins; adoption follows clarity and repetition. 
  6. prompt discipline matters; teach structures rather than static libraries. 
  7. pilot narrowly; integrate with workflows; measure what matters. recent mit-covered findings on failed pilots reinforce this discipline.  
