Thursday, August 22, 2024

Not long ago the Financial Times ran a prominent report that more companies are disclosing artificial intelligence as a potential risk to their business. More than half of Fortune 500 companies cited AI as a potential risk in their annual reports this year, the article said, compared to only 9 percent just two years earlier.

Well, OK… but exactly what are those companies disclosing about AI risks?


After all, it’s easy to see why more companies are talking about AI as a risk: ChatGPT exploded onto the scene in late 2022. The technology is cool, disturbing, everywhere, and potentially hugely disruptive, so companies can hardly avoid saying at least something about AI’s significance to their operations.


Still, that’s not the same as saying how AI might be a risk to your company. Some firms might see their business models eviscerated; others might see their businesses soar if they’re nimble enough to take advantage of AI in a timely manner. 


To explore this question, we fired up Calcbench’s Disclosures and Footnotes database and searched for S&P 500 companies that mentioned “artificial intelligence” in their risk factors for Q2 filings. 
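Conceptually, that screen is just a keyword match over each filing's risk-factor text. Here is a minimal sketch in Python of how such a search might work, assuming the "Risk Factors" sections have already been extracted into a dict keyed by ticker; the sample data and the `mentions_ai` helper are hypothetical illustrations, not Calcbench's actual API:

```python
# Hypothetical sample of risk-factor text keyed by ticker;
# in practice this text would come from a filings database.
risk_factors = {
    "NKE": "To the extent we integrate artificial intelligence into our operations...",
    "DRI": "...increased adoption of artificial intelligence technologies may intensify...",
    "XYZ": "Our business depends on consumer discretionary spending.",
}

def mentions_ai(text: str) -> bool:
    """Case-insensitive check for AI-related keywords in a risk-factor section."""
    keywords = ("artificial intelligence", "machine learning", "generative ai")
    lowered = text.lower()
    return any(k in lowered for k in keywords)

# Tickers whose risk factors mention AI, in alphabetical order.
ai_filers = sorted(t for t, text in risk_factors.items() if mentions_ai(text))
print(ai_filers)
```

A real screen would also need to scope the match to the risk-factor section specifically, since a mention elsewhere in a 10-Q (say, in MD&A) wouldn't count for this comparison.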


Most companies kept that discussion of AI simple — say, adding AI as yet another technology that might complicate the firm’s cybersecurity risks, or mentioning AI as a technology the company needs to harness. 


For example, Nike ($NKE) had one sentence that mentioned AI, as part of a larger discussion about the company’s reliance on technology:


To  the extent we integrate artificial intelligence ("AI") into our operations, this may increase the cybersecurity and privacy risks, including the risk of unauthorized or misuse of AI tools we are exposed to, and threat actors may leverage AI to engage in automated, targeted and coordinated attacks of our systems.


Darden Restaurants ($DRI) did much the same, although it managed to stretch its discussion to two sentences:


However, because technology is increasingly complex and cyber-attacks are increasingly sophisticated and more frequent, there can be no assurance that such incidents will not have a material adverse effect on us in the future. For example, the rapid evolution and increased adoption of artificial intelligence technologies may intensify our and our service providers’ and key suppliers’ cybersecurity risks.


A more expansive discussion came from Clorox ($CLX). It mentioned AI multiple times in its risk factor discussions, including the important point that the company’s long-term prospects might suffer if it can’t reap the benefits of new technology quickly:


If the Company is unable to increase market share in existing product lines, develop product innovations, undertake sales, marketing and advertising initiatives that grow its product categories, effectively adopt and leverage existing and emerging technologies, such as artificial intelligence or machine learning, and/or develop, acquire or successfully launch new products or brands, it may not achieve its sales growth objectives.


More specifically, Clorox warned that it might struggle to wield AI in a manner that respects compliance obligations, ethical concerns, and legal risks:


In addition, the legal, regulatory and ethical landscape around the use of artificial intelligence and machine learning is rapidly evolving. The Company’s ability to adopt this emerging technology in an effective and ethical manner may impact its reputation and ability to compete, and this technology could be, among other things, false, biased, or inconsistent with the Company’s values and strategies. Further, the use of generative artificial intelligence tools may compromise confidential or sensitive information, put the Company’s intellectual property at risk, or subject the Company to claims of intellectual property infringement, all of which could damage the Company's reputation.


That’s a good point to raise. Right now the regulatory climate for AI is still a mess (read Radical Compliance if you’re a compliance nerd who wants the deets on AI compliance issues), and nobody quite knows what businesses will need to do to stay on the right side of AI law in, say, 2030.


FedEx Corp. ($FDX) made similar risk disclosures about AI’s heightened cybersecurity threats. It also raised an interesting point about AI and social media generating so much digital noise that the company might struggle to keep up with reputation risks:


With the increase in the use of artificial intelligence and social media outlets such as Facebook, YouTube, Instagram, X (formerly Twitter), TikTok, and other platforms, adverse publicity, whether warranted or not, can be disseminated quickly and broadly without context, making it increasingly difficult for us to effectively respond. Certain forms of technology such as artificial intelligence also allow users to alter images, videos, and other information relating to FedEx and present the information in a false or misleading manner.


Somewhat to our surprise, Microsoft ($MSFT) mentioned artificial intelligence only once, despite being a huge player in AI development:


We are investing in artificial intelligence (“AI”) across the entire company and infusing generative AI capabilities into our consumer and commercial offerings. We expect AI technology and services to be a highly competitive and rapidly evolving market, and new competitors continue to enter the market. We will bear significant development and operational costs to build and support the AI models, services, platforms, and infrastructure necessary to meet the needs of our customers. To compete effectively we must also be responsive to technological change, new and potential regulatory developments, and public scrutiny.


In other words, Microsoft is betting the company on the success of AI. That’s not an unwarranted bet, but it’s going to be a big, enterprise-wide, long-term endeavor. 


We could keep going with more examples; we found 39 in Q2 filings alone, and several hundred in annual 10-K filings for 2023 — and that’s all for the S&P 500 alone, never mind all the other businesses out there. 


If you’re a filer looking for inspiration on how to describe AI in your risk factors, you can always start here, comparing yourself to peers. At the least, those other disclosures would help you have more informed discussions with the legal team, the CTO, or anyone else in your enterprise involved in artificial intelligence, so you’ll know what questions to ask them as you form your narrative disclosures. Whatever data you need, Calcbench has it!


