AI as IP™: A Framework for Boards, Executives, and Investors

J.S. Held

[authors: James Malackowski, Eric Carnick, David Ngo]

This article is the second installment in our three-part series, Artificial Intelligence as Intellectual Property or “AI as IP™”, which explores how artificial intelligence assets should be treated as a form of intellectual property and enterprise capital. The first article, “A Strategic Framework for the Legal Profession”, explored the legal foundations for recognizing and protecting AI assets. The upcoming third article, “Guide for SMEs to Classify, Protect, and Monetize AI Assets”, will provide practical steps for small and mid-sized enterprises to turn AI into measurable economic value.

The Ten Billion Dollar Recognition Gap

Imagine a prominent artificial intelligence (“AI”) company announcing its Series C funding round at a $10 billion valuation, a moment investors celebrate as another success story in the AI economy. However, its balance sheet reveals a discrepancy few investors consider: the company reports only $500 million in tangible assets, perhaps mostly representing cash and data servers. If due diligence was done correctly, then where is the other $9.5 billion in value?

The answer exposes a fundamental disconnect in corporate accounting: the AI company’s most valuable resources—its language models, training datasets, and algorithms—generate most of its revenues yet remain “off the books,” or uncapitalized under current accounting standards. The entire valuation rests upon investors’ confidence in AI assets that accounting rules treat as nonexistent.

This story illustrates a real and widespread issue in finance. As AI’s importance continues to grow in the global knowledge-based economy, financial statements are becoming less representative of a company’s true worth. This perpetuates a class of “invisible capital” that distorts how boards, auditors, and investors assess the performance of a company’s assets; in other words, the lack of transparency and accurate valuations creates what could be a recognition gap in the billions of dollars, if not trillions.

The scale of this disconnect is staggering—global AI investment is projected to grow to $3.49 trillion by 2033, representing a 31.5% compound annual growth rate (“CAGR”).1 Venture capital has invested heavily in AI in recent years, with more than one-third of all funding in 2024 flowing into AI start-up companies.2 Public markets have assigned AI firms up to four times the valuation premium over non-AI software peers, reflecting investor confidence in AI-driven intangibles over the types of assets that dominated investment in the early part of the 21st century.

The current application of Generally Accepted Accounting Principles (“GAAP”) and International Financial Reporting Standards (“IFRS”) often leaves AI assets unrecognized, even when they are responsible for such high valuations. For example, under current standards, internal expenditures on AI development are treated as research and development (“R&D”) expenses rather than capital investments.3 Only acquired intangibles purchased in mergers and acquisitions (“M&A”) transactions appear on balance sheets. This creates a perverse asymmetry in which firms that build some of the best AI systems record the smallest asset bases.

This accounting treatment produces three distortions that ripple through the entire economy. First, by expensing multi-year AI investments immediately, companies depress their reported earnings even when those outlays generate long-term returns. This understates the true profitability of the company’s investments. Second, investors are less able to distinguish firms whose R&D creates durable assets from those burning cash on experiments that have yet to demonstrate utility or value. Third, executives focused on reported margins may under-invest in the innovations that provide the most substantial value at a time when AI capabilities determine competitive survival.
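To make the first distortion concrete, a simple sketch (with hypothetical figures) shows how the same AI investment produces very different reported earnings under immediate expensing versus five-year amortization:

```python
# Illustration of the first distortion: immediate expensing vs. five-year
# amortization of a $100M AI development outlay. All figures are hypothetical.

outlay = 100.0            # $M AI development spend in year 1
pre_rd_earnings = 150.0   # $M earnings before the AI outlay
useful_life = 5           # years, assumed for the capitalized case

# Immediate expensing: the full outlay hits year-1 earnings.
expensed_year1 = pre_rd_earnings - outlay

# Capitalization: the outlay is amortized evenly over its useful life.
amortized_year1 = pre_rd_earnings - outlay / useful_life

print(f"Year-1 earnings, expensed:    ${expensed_year1:.0f}M")
print(f"Year-1 earnings, capitalized: ${amortized_year1:.0f}M")
# The same economic investment yields an $80M swing in reported earnings.
```

The economics are identical in both scenarios; only the accounting treatment differs, which is precisely why reported margins can mislead.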

The solution to these issues already exists within established, though often underutilized, accounting standards. International Accounting Standard (IAS) 38 defines an intangible asset with four tests: identifiability, control, measurability, and future economic benefit.4 As will be discussed further in this article, modern AI systems clearly meet all four of these criteria, and the rules permit such recognition, but standard corporate practice has yet to follow suit.

Why AI Systems Can Qualify as Capital Assets

The transformation of AI from experimental technology to core business infrastructure demands a corresponding shift in its financial recognition as well. A new data center is universally considered a Property, Plant, and Equipment (“PPE”) asset that can face depreciation, impairment, and revaluation.5 In contrast, an AI model that generates millions in recurring revenue may be classified by an accountant as an expense.6 This is a distinction inherited from mid-20th-century accounting principles, yet AI systems are not consumables; they are productive capital capable of generating benefits long after the initial cash outlay.

Current accounting standards, specifically IAS 38 (along with the Accounting Standards Codification (ASC) 350), consider four qualifying conditions for capitalizing intangibles: identifiability, control, measurability, and future economic benefit.7 Each criterion can be applied directly to the core components of AI systems.

Figure 1: Qualifying Conditions for Capitalizing Intangible Assets (IAS 38/ASC 350)

The identifiability test determines whether an asset can be sold, licensed, or separated from its associated goodwill. AI assets, such as datasets, model architectures, and applications, can become transferable assets—and companies have been licensing their proprietary data or models in recent years. There is strong market evidence of identifiability with AI assets. High-profile data deals such as those between Dow Jones (the publisher of the Wall Street Journal) and OpenAI, and The New York Times and Amazon, among many other deals, provide confirmation that AI components can be separable economic assets.8 This type of AI asset is in high demand, and various marketplaces and intermediaries have arisen to meet that demand.

The control test assesses whether a company has the legal and technical capacity to determine who uses an asset. AI firms can control access to their assets through various mechanisms, including encryption protocols, API gating, trade secret protections, and employment or vendor agreements. Proprietary models and codebases stored privately and securely may satisfy a reasonable definition of controlled resources, and the technical controls over AI assets may be more robust than those typically applied to traditional IP assets.

The measurability test requires that an asset’s costs or value can be reliably determined. The growing market for AI training data provides ample market comparables and is forecasted to grow at a CAGR of 27.7% from $2.82 billion in 2024 to $9.58 billion in 2029.9 AI R&D costs can also be carefully tracked for software development, data acquisition, computation, and storage costs.10 Cost accounting for AI can achieve or exceed the levels of granularity seen in traditional software development, with platforms automatically tracking energy consumption, data processing, and model iterations.
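As a quick sanity check on the cited training-data market forecast, the implied growth rate can be recomputed from the endpoint figures alone:

```python
# Sanity check: implied CAGR for the AI training-data market,
# using the cited endpoints ($2.82B in 2024 to $9.58B in 2029).
start_value = 2.82   # USD billions, 2024
end_value = 9.58     # USD billions, 2029
years = 2029 - 2024

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # → Implied CAGR: 27.7%
```

The recomputed figure matches the cited 27.7%, which is the kind of verifiability the measurability test demands.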

The future economic benefit test considers whether an asset will generate future revenue, reduce costs, or confer competitive advantages. AI assets demonstrably achieve these outcomes. For example, predictive maintenance models can reduce equipment downtime and thereby provide cost savings;11 language models can be monetized with subscription services to generate direct revenue streams;12 recommendation engines can increase advertisement conversion rates and provide measurable cash-flow benefits. The economic benefits are observable, attributable, and financially material.

Under these conditions, core components of an AI system can be classified as a distinct, qualifying asset class. Training data is a foundational asset class that meets each recognition test; it can be licensable, controllable, measurable, and yield clear economic benefits, as proven by large media licensing deals to leading AI companies. Trained models may represent codified intellectual capital—encapsulating years of engineering effort and learning—and can be stored, transferred, and monetized much as patented software or copyrighted code has been over the last several decades. Furthermore, deployed AI applications are revenue engines that are functionally indistinguishable from other capitalized software.

Despite remaining largely absent from financial statements, the AI model economy is beginning to mirror the software-as-a-service revolution, and usage-based pricing reveals a level of economic value embedded in the underlying assets.

The Forces Maintaining Invisibility

Understanding why AI assets remain unrecognized requires an examination of the institutional dynamics. There are four forces that keep AI assets largely unrecognized: management incentives, auditor conservatism, regulatory silence, and investor complicity.

Figure 2: Forces Maintaining Invisibility of AI Assets

First, executive management has strong incentives to expense AI investments. As mentioned previously, tax codes allow for immediate R&D deductions, unlike multi-year amortization for capitalized assets.13 This provides a significant short-term cash flow boost for companies, allowing managers to claim innovation without a future asset value reset. Capitalization, in contrast, demands accountability for an asset’s performance over an extended period. Depreciating assets forces executives to explain whether AI assets are productive and expose inefficient investments.

Auditors face a level of asymmetric risk that biases towards conservatism.14 Overstating assets may more often lead to legal and regulatory troubles, though understating assets is also a compliance risk that can come with its own regulatory penalties and investor lawsuits.15 Given the recent rise and relative novelty of AI, auditors may still default to the safer option of expensing it.16 This perspective is further reinforced by past financial crises involving overvaluations of intangible assets.

While the Financial Accounting Standards Board (“FASB”) and the International Accounting Standards Board (“IASB”) have begun to review the issue, they have yet to issue guidance on accounting for AI assets. Likewise, the Securities and Exchange Commission has focused its attention elsewhere and has yet to issue new regulations that specifically address the use of AI.17 This absence of direction creates a compliance vacuum in the United States. Without explicit guidance, companies may decide to report their assets in ways that follow the path of least resistance—promising AI developments while remaining ambiguous about their true value to investors, even as valuations continue to climb.

Investor behavior completes this circle of invisibility. Analysts may focus on shaping narratives about revenue growth over investigating the true productivity of a company’s assets. And rather than analyzing underlying assets, venture capitalists may justify the valuations of AI companies with these revenue projections and multiples. Investors continue to reward announcements of AI initiatives rather than wait for validated asset strength or intellectual property (“IP”) defensibility.18

This dynamic between businesses, auditors, regulators, and investors creates a mutually beneficial but temporary situation for all stakeholders. Managers avoid scrutiny, reduce taxable income, and preserve flexibility; auditors minimize potential liability; regulators avoid contentious standard-setting; and investors ride the wave of momentum. However, this equilibrium masks a growing level of systemic risk by concealing a company’s weak fundamentals. When market corrections or new regulations force transparency, companies built on narrative alone will face a sharp revaluation, a repeat of what has been seen in similar boom-bust cycles (e.g., the dot-com era of the late 1990s and early 2000s).

Establishing a Valuation Discipline

The transition from narrative-driven to asset-based valuation of AI requires methodological discipline, which can be adapted from established intangible asset practices. Bridging the gap between AI signaling and asset quality requires practical frameworks to identify, measure, and compare AI capital with the same rigor that applies to brand equity, patent portfolios, or customer relationships.

Three valuation approaches offer complementary perspectives for appropriately valuing assets. First, the cost approach provides a baseline value by summing development costs and adjusting for obsolescence and the remaining useful life of the asset. This method tends to provide conservative valuations; however, it does not account for strategic value, network effects, or competitive positioning. A pharmaceutical company spending $500 million to develop a machine learning or AI-driven drug-discovery platform may have an asset worth more than its historical cost if the system successfully reduces development timelines. However, cost-based valuation would only capture the input side of the financial equation.

Second, market-based valuation considers evidence from the likes of licensing deals, acquisitions, and partnerships. Recent content licensing deals provide comparables for training-data assets,19 albeit at potentially depressed rates, given widespread alleged infringement and AI platforms’ claims of fair use. AI startup acquisitions reveal market multiples for their models and technology teams. This approach is limited by incomplete disclosure and low transaction volume for certain asset categories; it works best for standardized, liquid assets, such as labeled training datasets or specialized models that serve common use cases.

Third, the income approach considers the future cash flows that may be attributable to AI capabilities, then discounts appropriately for risk and time value. This method captures the economic potential of the asset but tends to rely on assumptions and attribution models. Quantifying the benefit attributable to an asset may be easier for some systems (e.g., predictive maintenance) compared to others (e.g., recommendation engines). The income approach excels at valuing more mature, deployed systems with proven performance records, but may struggle with early-stage or experimental AI where the level of cash flow remains uncertain.

The best practice involves triangulating all three valuation approaches, documenting the assumptions made, and applying sensitivity analysis to reconcile the discrepancies. A blended valuation can reveal both the tangible investment base and the intangible, strategic upside to the asset, thereby providing board-level visibility into where AI capital resides and how it contributes to enterprise value.
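A minimal sketch of this triangulation, using hypothetical figures and illustrative confidence weights (none drawn from an actual engagement), might look like:

```python
# Illustrative triangulation of the three valuation approaches for a
# hypothetical AI asset. All figures and weights are assumptions for
# demonstration, not a prescribed methodology.

def income_approach(cash_flows, discount_rate):
    """Discount projected cash flows attributable to the AI asset."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

cost_value = 42.0     # $M: accumulated development cost, net of obsolescence
market_value = 55.0   # $M: inferred from comparable licensing deals
income_value = income_approach(
    cash_flows=[10, 14, 18, 20, 20],  # $M per year, attributed to the asset
    discount_rate=0.20,               # high rate reflecting AI-specific risk
)

# Blend the approaches; weights reflect relative confidence in each method
# and would be justified and documented in practice.
weights = {"cost": 0.2, "market": 0.3, "income": 0.5}
blended = (weights["cost"] * cost_value
           + weights["market"] * market_value
           + weights["income"] * income_value)
print(f"Income approach: ${income_value:.1f}M, blended: ${blended:.1f}M")
```

Sensitivity analysis would then vary the discount rate, cash-flow attribution, and weights to show the board a defensible valuation range rather than a single point estimate.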

Figure 3: Triangulation of the Valuation Approaches

Beyond valuation, investors require a framework for comparing AI asset quality across companies. We propose an “AI Quality score” or AIQ™ metric that provides a standardized assessment tool evaluating three dimensions: asset value, asset protection, and asset management. Asset value can be considered in terms of data uniqueness, model performance advantages, and market traction, as measured by API revenue or internal adoption. Asset protection might examine the strength of a company’s IP portfolio, technical controls, and cybersecurity measures. Finally, asset management could evaluate effective governance, risk assessment processes, and insurance coverage.

Similar to the AIQ concept, standardized due diligence frameworks could evaluate five key dimensions: data assets (examining existence, rights, and quality controls), model assets (considering proprietary development, scalability, and performance), IP (patent filings and trade-secret coverage), governance (level of board oversight and insurance coverage), and monetization (whether revenue streams or a clear path to commercialization exists). In summary, a structured framework may be needed to transform qualitative AI discussions into investment-grade analysis.
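One way such an AIQ score might be computed, using hypothetical sub-scores and illustrative weights, is a simple weighted average across the three dimensions:

```python
# A hypothetical AIQ scoring sketch: three dimensions, each scored 0-100
# from sub-criteria, then combined with illustrative weights. The
# dimensions follow the proposal above; the sub-scores, sub-criteria
# names, and weights are assumptions.

def dimension_score(subscores):
    """Average the 0-100 sub-criterion scores for one dimension."""
    return sum(subscores.values()) / len(subscores)

asset_value = dimension_score({
    "data_uniqueness": 80, "model_performance": 75, "market_traction": 60,
})
asset_protection = dimension_score({
    "ip_portfolio": 70, "technical_controls": 85, "cybersecurity": 65,
})
asset_management = dimension_score({
    "governance": 50, "risk_process": 55, "insurance": 40,
})

weights = {"value": 0.4, "protection": 0.3, "management": 0.3}
aiq = (weights["value"] * asset_value
       + weights["protection"] * asset_protection
       + weights["management"] * asset_management)
print(f"AIQ score: {aiq:.0f}/100")  # → AIQ score: 65/100
```

Even a rough composite like this makes the comparison across companies explicit: here the weak asset-management dimension drags down otherwise strong asset value and protection.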

The transition towards discipline-driven AI valuations serves multiple stakeholders. Investors could more reliably distinguish durable capital from marketing hype. Boards would have quantitative tools for strategic capital allocation, and there would be better signals for managers to determine which investments create defensive value. The key questions would then shift from “does a company use AI?” to “how strong are its AI assets?” This would parallel earlier transitions in brand valuation, patent portfolio analysis, and customer lifetime value modeling—which were initially dismissed as too subjective for rigorous analysis, but eventually standardized into mainstream practice.

Board Governance as a Fiduciary Imperative

AI has crossed the threshold from an operational tool to strategic capital, which demands the same level of board-level governance applied to financial controls, cybersecurity, and environmental compliance. AI continues to drive corporate value and competitive advantages,20 and yet, few boards can reliably produce an inventory of their AI assets or explain how they are governed.

Directors have a duty of care to understand and oversee material corporate assets. When AI drives market capitalization, failing to govern it properly could be both a technical oversight and a failure in governance. There are three possible rationales that establish this as a board-level responsibility: asset stewardship (an obligation to safeguard and maximize shareholder value), risk oversight (attention to risks of model theft, data breaches, regulatory violations, and decision errors), and strategic direction (deciding whether to build, buy, license, or co-develop AI systems with an informed understanding of asset positions and protections).

According to a 2025 survey conducted by the National Association of Corporate Directors (“NACD”), 62% of directors reportedly set aside time on their agendas for full-board discussions on AI, up from 28% in 2023.21 Nevertheless, current board practice still lags behind. Only 23% of directors have reevaluated corporate strategies to incorporate the impact of AI or conducted an audit to determine where AI is currently in use within their company.22 And only 6% report that their boards have established metrics for management reporting.23 Establishing appropriate metrics for AI investments is becoming more critical, with respondents identifying a lack of clear returns on investment as one of the barriers to AI adoption, implementation, or deployment.24 Many boards continue to receive anecdotal AI updates and lack the key metrics to fully understand its progress, which contrasts with the rapid adoption of environmental, social, and governance (“ESG”) oversight and regulations.25

Figure 4: NACD’s 2025 Public Company Board Practices and Oversight Survey.26

Boards could develop a framework, such as a “Three-Tier Maturity Model,” to help assess their readiness. For example, Tier 1 firms (where AI is the core product) might require a dedicated AI committee and deep board expertise. A Tier 2 firm (where AI is critical to its operations) could develop a greater sense of oversight into existing committees with at least one AI-literate director. Tier 3 firms (in which AI supports but does not define the business) could address AI in standard risk or strategy reviews.

Effective board oversight can also follow a structured level of routine reporting—a quarterly AI dashboard, complete with AIQ scores, could help distill the technical complexity of AI assets into strategic intelligence. This dashboard could include three core sections. Asset inventory and valuation would track the number of systems and their estimated value. The risk and protection status would summarize the IP portfolio, insurance coverage, and incidents. Additionally, performance and ROI would document AI's contribution to revenue, margin, and productivity. Oversight of this dashboard and related governance activities should include a reporting relationship to the Chief Intellectual Property Officer (“CIPO”) to ensure AI as IP™ is institutionalized as a core component of enterprise capital. Boards that move beyond compliance toward strategic engagement are primed to gain an advantage over their competitors. Some competitive advantages can include capital efficiency, valuation premiums, and an increased measure of resilience against market conditions. AI governance is becoming the “new ESG”—a market-priced indicator of corporate sophistication.

Risk Management and Capital Protection

The world’s top companies hold billions of dollars in unlisted data and model assets.27 When these systems fail, the economic impact can be analogous to a factory fire or product recall. A corrupted dataset or leaked algorithm can erase competitive advantages overnight. These represent asset-impairment events, yet few chief financial officers account for them as such.

Traditional risk frameworks were not designed for algorithmic capital. The challenge is extending enterprise risk management to the entire AI capital stack. Each asset layer has different exposures and risk profiles. For example, data assets can face loss or corruption; models run the risk of theft or bias; applications can experience systemic failures; and infrastructure can suffer extended outages. Quantifying exposure can help to translate technical risks into proactive actions by the CIPO and provide financial knowledge for boards. Asset-value-at-risk calculations can estimate potential losses, whereas loss-event severity analyses can support the assessment of remediation costs.28
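An asset-value-at-risk estimate of this kind can be sketched as an expected-loss calculation across the AI capital stack; all values, probabilities, and severities below are hypothetical assumptions:

```python
# Illustrative asset-value-at-risk estimate across the AI capital stack.
# Asset values, annual loss probabilities, and severity fractions are
# hypothetical assumptions for demonstration.

ai_capital_stack = [
    # (layer, asset value $M, annual loss probability, severity if it occurs)
    ("data assets", 120.0, 0.05, 0.60),    # loss or corruption
    ("models", 200.0, 0.03, 0.80),         # theft or bias-driven impairment
    ("applications", 90.0, 0.08, 0.40),    # systemic failure
    ("infrastructure", 50.0, 0.10, 0.25),  # extended outage
]

for layer, value, prob, severity in ai_capital_stack:
    expected_loss = value * prob * severity
    print(f"{layer:>15}: expected annual loss ${expected_loss:.2f}M")

total = sum(v * p * s for _, v, p, s in ai_capital_stack)
print(f"Total AI capital-at-risk (expected): ${total:.2f}M")
```

A statement of this shape, refreshed quarterly, is what allows the board to weigh AI exposure alongside liquidity or foreign exchange risk.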

Systematic protection can be represented with a four-layer risk pyramid: avoidance, retention, prevention, and transfer.29 Avoidance can involve discontinuing or outsourcing high-risk, low-return projects to prevent the accumulation of phantom assets. Retention can involve establishing internal reserves for model-drift remediation and retraining cycles—treating algorithmic asset depreciation as a managed cost of capital. A business can also implement preventative measures to mitigate risk, such as having secure-by-design architecture, bias-testing protocols, and board-mandated model validation reviews. Finally, businesses can transfer or shift residual risk through contracts and specialized AI insurance.

Figure 5: Systematic Protection Risk Pyramid

The insurance market is in the early stages of responding with AI-specific risk products. Recent launches from carriers such as Munich Re and Lloyd’s cover model malfunction, data integrity, and other forms of financial loss from AI.30 The insurance market may still be in the “observatory” stage of assessing the new risks arising from the development and use of AI solutions, but early adoption can signal credibility to investors.31 A key underwriting requirement (for example, an auditable AI asset inventory) can provide a powerful incentive for governance.

Effective risk mitigation for AI development will likely require cross-functional coordination across the entire senior executive team (e.g., CEO, CIO, CFO, CIPO, and General Counsel) to quantify exposure, implement security controls, and align contracts and IP protections. Quarterly reporting should be consolidated into a unified AI capital-at-risk statement for the board, following the same level of due diligence that is performed for liquidity or foreign exchange risk reports.

Five-Pillar Framework for Systematic Management

Once organizations accept AI as a form of enterprise capital, the next practical question is how to implement disciplined management for algorithmic assets. We propose a structured lifecycle approach with five interdependent pillars.

Figure 6: Five-Pillar Framework for Systematic Management and Sustainable Economic Growth

The first pillar, Identification, establishes a comprehensive inventory of AI assets. Led by the company’s CIPO, organizations can conduct a “census” of their data and models, tagging them with key metadata, classifying them by asset type, and mapping them to revenue streams to create a visible foundation for the overall asset base. Many organizations tend to underestimate their AI footprint,32 which makes inventorying essential for any further analyses.
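A minimal sketch of what a registry record in such a census might look like (field names, categories, and example entries are illustrative assumptions):

```python
# A minimal sketch of an AI asset registry for the Identification pillar.
# Field names and sample records are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    asset_id: str
    asset_type: str          # e.g., "training_data", "model", "application"
    owner: str               # accountable executive or team
    revenue_streams: list = field(default_factory=list)
    metadata: dict = field(default_factory=dict)

registry = [
    AIAsset("DS-001", "training_data", "Data Engineering",
            revenue_streams=["licensing"],
            metadata={"records": 12_000_000, "rights": "proprietary"}),
    AIAsset("M-007", "model", "ML Platform",
            revenue_streams=["API subscriptions"],
            metadata={"version": "3.2", "last_validated": "2025-06-30"}),
]

# Census view: count assets by type for board reporting.
by_type = {}
for asset in registry:
    by_type[asset.asset_type] = by_type.get(asset.asset_type, 0) + 1
print(by_type)  # {'training_data': 1, 'model': 1}
```

Each record then becomes the anchor for the later pillars: valuation attaches an estimate to the asset_id, protection attaches IP filings, and management attaches audit dates.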

The second pillar, Valuation, provides economic insight. The company’s CFO is critical in helping the organization apply hybrid methods to estimate ongoing economic contributions, rather than relying solely on an asset’s historical costs. Key activities include attributing revenue or cost savings to specific AI systems and maintaining asset-specific ledgers. Such measures can provide supporting evidence to justify capital allocation decisions and valuation premiums in mergers and acquisitions or fundraising rounds.

The third pillar, Protection, secures legal and technical defenses by employing a layered IP strategy that combines patents, trade secrets, copyrights, and contracts. Technical safeguards, such as encryption, can deter theft and align with regulations to ensure legal sustainability. Protection converts the internal knowledge within an organization into an enforceable property right.

The fourth pillar, Management, integrates governance and risk oversight. It would require assigning clear executive ownership for asset registry, AIQ metrics, and quarterly board reports (for example, with dashboards) to ensure visibility into the assets. Routine audits of model performance and security controls help to further maintain asset quality. Without a level of active management, AI assets may silently degrade and lose their value over time.

The final pillar, Optimization, ensures AI capital appreciates through active deployment. Activities can include pursuing licensing partnerships to generate incremental returns, identifying underperforming models for retraining or retirement, and applying performance analytics to maximize ROI. Mature organizations would treat their model portfolios like balanced, performance-monitored investment portfolios.

Together, these five pillars form a continuous loop for sustainable economic growth. Identification informs valuation; valuation guides protection; protection informs management; management generates data for optimization; and optimization surfaces new assets requiring identification. This lifecycle integration mirrors established practices for managing patent portfolios, brand assets, and customer relationships—which are often subject to systematic governance.

Implementation Roadmap and Timeline

Translating this conceptual framework into reality requires a structured timeline with measurable milestones. We propose a roadmap that defines deliverables across a three-year horizon, with milestones at 90 days, one year, and three years.

The first 90 days are about establishing visibility and control; as stated in the previous section, organizations should conduct a comprehensive inventory of their AI assets and develop basic AIQ metrics. A cross-functional AI governance team, led by the CIPO, may be established by the board to initiate preliminary loss and exposure estimation. The outcome: knowing what AI assets exist, who owns them, and where the risks are concentrated.

The first year focuses on institutionalizing governance and measurement. By this time, the CFO may produce the organization’s first AI asset valuation report. Board oversight dashboards are implemented for quarterly reporting, and the legal team conducts an IP audit and files appropriate protections (e.g., patents, trade secret documentation, and updated license agreements). Risk officers secure insurance coverage specializing in AI-related liabilities, and strategy officers establish key performance indicators (KPIs) to track return on investment (ROI). The communications team may publish AI governance statements modeled on ESG reports to help signal transparency to investors and stakeholders. The CIPO formally engages in governance activities to align asset management with enterprise capital strategy. By the end of the year, AI governance within the organization shifts from ad hoc to structured accountability.

The three-year horizon is about delivering optimization and maturity. By this time, organizations score the quality of their AI assets on an annual basis, with information technology (IT) operations teams implementing lifecycle-management platforms to automate model maintenance. The CFO integrates AI reporting into financial filings through supplemental disclosures, and business development teams pursue additional monetization strategies. The Chief Information Officer (“CIO”) works with insurers and auditors to provide continuous risk assurance. At this stage, enterprises should have transformed their AI assets from a speculative differentiator into a fully governed asset class with measurable returns on investment and subject to the same fiduciary discipline as physical property, financial instruments, or human capital.

The Coming Era of Mandatory Discipline

Corporate governance transformations often follow predictable patterns: markets reward pioneers, and then regulations compel universal adoption. AI is on a similar trajectory to ESG and cybersecurity, but on a compressed timeline reflecting the technology’s rapid economic penetration.33 AI is expected to boost global economic growth despite remaining invisible in financial statements.34 Investors, regulators, and insurers may find themselves converging on a new set of AI capital disclosure expectations, driven by regulatory developments, market pressure, and updated accounting standards.

There are already some early signs of policymakers moving towards codifying AI governance frameworks. For example, the EU’s Artificial Intelligence Act may effectively mandate up-to-date inventories of developing or deployed AI systems,35 and U.S. federal agencies are beginning to introduce AI-related regulations.36 Asia-Pacific markets are also beginning to pilot AI accountability reports for listed companies.37 This will likely lead to global harmonization as multinationals will find a need to adopt the highest standards to meet compliance requirements.

Market pressures are also beginning to create incentives for transparency and disclosure. Investors are encouraged to consider AI governance statements as they conduct due diligence,38 and credit rating agencies may begin exploring governance scores in which emerging AI development is considered a risk component of ratings.39 Insurance carriers, in turn, may make AI-related coverage conditional on verified asset inventories.40

Accounting standard-setters, although they have yet to provide strict guidance on AI assets, have acknowledged the need for updates and are considering a broader overhaul of reporting for intangibles.41 Anticipated revisions from the IASB and FASB may follow, clarifying rules for capitalizing internally developed assets. This evolution in accounting standards would legitimize AI asset valuation as a mainstream accounting discipline on par with goodwill impairment testing or deferred-tax accounting.

Organizations that act before mandates will be primed to gain durable strategic advantages—for example, valuation premiums will emerge from reduced investor uncertainty, and regulatory head starts will minimize future compliance costs. Early adopters of AI disclosure will command market trust and preference.

By the early 2030s, we expect that most, if not all, major corporations will report on their AI capital. Investors would expect footnotes showing AI asset value, amortization, and insurance coverage. Independent auditors will verify these figures as routine practice, and stock exchanges could even require certified AI asset statements.

Conclusion

The recognition gap between AI’s economic reality and its accounting treatment is a fundamental misalignment between how value is created and how it is measured. As AI becomes the primary driver of value, this invisibility distorts capital allocation, obscures risk, and undermines governance.

The conceptual frameworks presented in this analysis—the four-part recognition test, five-pillar management system, a potential AI Quality score, and three-phase implementation roadmap—provide a practical path to bridge this gap. The technical standards, valuation methodologies, and governance models already exist. All that remains is the organizational will and leadership to act. AI should be treated as what it is: productive capital demanding disciplined stewardship.

Disclosure is the destiny of mature capital markets. Markets reward what they can see and discount what they cannot. Companies that treat AI as accountable capital will lead in both innovation and valuation credibility. The next frontier of corporate reporting is not just sustainability or cybersecurity, but also AI capital stewardship. The firms that master it today will define the governance practices of tomorrow.


1 https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market

2 https://pitchbook.com/news/articles/ai-startups-grabbed-a-third-of-global-vc-dollars-in-2024
https://www.cbinsights.com/research/report/venture-trends-2024/

3 https://news.bloombergtax.com/financial-accounting/accounting-groups-differ-on-tracking-intangible-assets-in-ai-era
https://www.datastudios.org/post/how-to-classify-and-measure-intangible-assets-under-ifrs-and-us-gaap

4 https://www.ifrs.org/issued-standards/list-of-standards/ias-38-intangible-assets/
https://iasplus.com/en/standards/ias/ias38

5 https://www.datastudios.org/post/how-to-account-for-property-plant-and-equipment-under-ifrs-and-us-gaap-depreciation-impairment

6 https://www.pkf-l.com/insights/capitalising-ai-tools-accounting-ias-38/
https://kpmg.com/mt/en/home/insights/2021/05/capitalisation-of-internally-generated-intangible-assets.html

7 https://www.datastudios.org/post/how-to-classify-and-measure-intangible-assets-under-ifrs-and-us-gaap
https://kpmg.com/mt/en/home/insights/2021/05/capitalisation-of-internally-generated-intangible-assets.html

8 https://www.wsj.com/business/media/openai-news-corp-strike-deal-23f186ba
https://www.wsj.com/business/media/amazon-to-pay-new-york-times-at-least-20-million-a-year-in-ai-deal-66db8503

9 https://www.businesswire.com/news/home/20250106998432/en/AI-Training-Dataset-Global-Market-Forecast-to-2029-Surge-in-Demand-for-Multimodal-Datasets-Propels-Generative-AI-Innovations-Expansion-of-Specialized-Data-Annotation-Services-Opens-New-Frontiers---ResearchAndMarkets.com

10 https://dart.deloitte.com/USDART/home/publications/deloitte/industry/technology/accounting-generative-ai-software-products
https://www.cbiz.com/insights/article/accounting-considerations-in-ai-projects

11 https://www.iiot-world.com/predictive-analytics/predictive-maintenance/predictive-maintenance-cost-savings/

12 https://www.businessofapps.com/data/chatgpt-statistics/
https://www.hbs.edu/faculty/Pages/item.aspx?num=64200

13 https://news.bloombergtax.com/financial-accounting/accounting-groups-differ-on-tracking-intangible-assets-in-ai-era
https://www.datastudios.org/post/how-to-classify-and-measure-intangible-assets-under-ifrs-and-us-gaap
https://bridgeway.com/perspectives/investing-in-the-age-of-ai-why-intangible-assets-matter-more-than-ever/

14 https://www.datastudios.org/post/how-to-classify-and-measure-intangible-assets-under-ifrs-and-us-gaap

15 https://classactionlawyertn.com/misstating-asset-values-in-financial-reporting/
https://fastercapital.com/content/Asset-Overstatement--Asset-Overstatement--The-Overlooked-Threat-to-Financial-Integrity.html

16 https://dart.deloitte.com/USDART/home/publications/deloitte/industry/technology/accounting-generative-ai-software-products

17 https://www.sidley.com/en/insights/newsupdates/2025/02/artificial-intelligence-us-financial-regulator-guidelines-for-responsible-use

18 https://hbr.org/2025/10/is-ai-a-boom-or-a-bubble
https://www.forbes.com/sites/siladityaray/2025/10/06/amd-shares-surge-30-after-multibillion-dollar-deal-with-openai/
https://www.cnbc.com/2025/09/24/alibaba-shares-rise-over-6percent-after-ceo-unveils-new-ai-products-and-spending-plans.html

19 https://www.jsheld.com/uploads/AI-Content-Licensing-Deals-for-Textual-Works.pdf

20 https://www.wipo.int/en/web/global-innovation-index/w/blogs/2025/the-value-of-intangible-assets-of-corporations
https://www.forbes.com/sites/digital-assets/2025/09/04/ai-drives-almost-half-of-2025-forbes-cloud-100s-11-trillion-value/

21 https://www.nacdonline.org/all-governance/governance-resources/governance-surveys/surveys-benchmarking/2025-public-company-board-practices--oversight-survey/2025-board-practices-oversight-ai/

22 Id.

23 Id.

24 Id.

25 https://corpgov.law.harvard.edu/2025/04/12/regulatory-shifts-in-esg-what-comes-next-for-companies/
https://www.thomsonreuters.com/en-us/posts/sustainability/transforming-business-decision-making/

26 https://www.nacdonline.org/all-governance/governance-resources/governance-surveys/surveys-benchmarking/2025-public-company-board-practices--oversight-survey/2025-board-practices-oversight-ai/

27 https://www.wipo.int/en/web/global-innovation-index/w/blogs/2025/the-value-of-intangible-assets-of-corporations
https://www.oecd.org/en/publications/measuring-data-as-an-asset_b840fb01-en.html

28 https://www.techradar.com/pro/the-true-cost-of-outages-and-why-monitoring-ai-dependencies-is-crucial
https://www.erwoodgroup.com/blog/the-true-costs-of-downtime-in-2025-a-deep-dive-by-business-size-and-industry/

29 https://www.frostburg.edu/faculty/rkauffman/_files/images_preppers_chapters/Ch04-RiskMgmtPrimer_v2.pdf
https://hr.fullerton.edu/risk-management/information-and-document-requests/information-management/essential-techniques-of-risk-management.php

30 https://www.munichre.com/en/solutions/for-industry-clients/insure-ai.htm
https://www.computerweekly.com/news/366586014/Munich-Re-sees-strong-growth-in-AI-insurance
https://www.techmonitor.ai/digital-economy/ai-and-automation/lloyds-insurers-introduce-protection-financial-losses-ai?cf-view

31 https://www.deloitte.com/us/en/insights/deloitte-insights-magazine/issue-33/ai-insurance-ai-risk.html
https://www.dlapiper.com/en/insights/publications/derisk-newsletter/2024/the-insurability-of-ai-risk-a-brokers-perspective

32 https://www.forbes.com/councils/forbestechcouncil/2025/09/16/shadow-ai-in-the-workplace-turning-hidden-ai-use-into-a-strategic-advantage/

33 https://corpgov.law.harvard.edu/2025/10/15/ai-risk-disclosures-in-the-sp-500-reputation-cybersecurity-and-regulation/
https://www.ey.com/content/dam/ey-unified-site/ey-com/en-us/campaigns/board-matters/documents/ey-cbm-cyber-and-ai-oversight-disclosures-2025-3.pdf

34 https://www.goldmansachs.com/insights/articles/ai-may-start-to-boost-us-gdp-in-2027

35 https://www.ey.com/content/dam/ey-unified-site/ey-com/en-gl/insights/public-policy/documents/ey-gl-eu-ai-act-07-2024.pdf

36 https://hai.stanford.edu/ai-index/2025-ai-index-report

37 https://www.deloitte.com/content/dam/assets-zone1/apac/en/docs/collections/2024/apac-ai-at-a-crossroads-report.pdf

38 https://www3.weforum.org/docs/WEF_Responsible_AI_Playbook_for_Investors_2024.pdf
https://www.orrick.com/en/Insights/2025/08/AI-in-Transactions-What-Is-the-Impact-of-AI-on-Business-Transactions-AI-FAQ-Series

39 https://www.fitchratings.com/research/banks/artificial-intelligence-poses-emerging-rewards-risks-to-credit-27-02-2024

40 https://www.forbes.com/sites/ronschmelzer/2025/04/30/ai-gone-wrong-now-theres-insurance-for-that/

41 https://www.cpajournal.com/2023/06/01/cpaj-news-briefs-fasb-iasb-gasb-5/
https://news.bloombergtax.com/financial-accounting/accounting-groups-differ-on-tracking-intangible-assets-in-ai-era
