The surge in use and development of AI systems and products, particularly generative AI, has increased interest in investing in and acquiring companies that offer AI solutions or that have integrated AI into their operations.
The EU is close to finalizing a law to regulate AI, and many countries are considering their own legislation. The outcome of lawsuits against generative AI providers also may impact nascent business models and product development strategies.
Here are 10 steps to consider when investigating the acquisition of a company that uses or develops AI technologies. These issues span the due diligence stage to preparing the sales and purchase agreement (SPA) and related documents.
- Analyze the target company’s AI technology or products.
- Review the skills and expertise of the team developing or implementing AI solutions.
- Seek to understand how the target company’s AI has been developed, tested and deployed.
- Focus on data protection.
- Determine if the target adheres to an AI governance model.
- Scrutinize customer and supplier contracts to ensure AI-specific risks are properly allocated.
- Take into account cybersecurity, performance and sector-specific risks.
- Expect scrutiny from regulators.
- Consider targeted representations and warranties.
- Assess warranty and indemnity insurance policies.
Companies should also keep in mind several pitfalls unique to tech transactions involving AI – risks that argue for a tailored approach going beyond standard acquisition agreements.
DUE DILIGENCE PHASE
1. Analyze the target company’s AI technology or products.
Determine whether a company’s products actually constitute AI. Not all data processing or analytics solutions qualify as “artificial intelligence.” For instance, product recommender systems and chatbots can be developed without artificial intelligence. The target should be able to explain why a product qualifies as AI.
Given the rapid evolution of AI, companies also should evaluate the long-term viability of a target’s products and the product roadmap. Consider making this assessment from a strategic and legal perspective.
2. Review the skills and expertise of the team developing or implementing AI solutions.
Assess the skills and retention levels of data scientists, engineers, researchers and others as well as the company’s ability to continue to develop the technology.
Retaining key employees is crucial for an acquired company’s success. Consider retention bonuses, equity and stock options, clear career paths or innovation and research opportunities. The acquirer should collaborate with the acquired team to realize a vision or accelerate product adoption and usage.
Review employment agreements with a focus on confidentiality and noncompete obligations. We frequently see employment agreements that impose insufficiently robust confidentiality obligations on key employees. It may also be advisable to amend these as a pre- or post-closing obligation, given that in many jurisdictions, the algorithms that underpin AI systems are protected as trade secrets, not via intellectual property rights.
3. Seek to understand how the target company has developed, tested and deployed AI.
Ask these questions to identify risks at each stage of the AI life cycle:
- How has the target company trained its AI model and managed data quality and data rights?
- Where or how has the company obtained data sets for training, testing and benchmarking? Are there any issues with the sources of data sets? Are rights to the data transparent, traceable and secured?
- What measures has the company taken to protect rights and know-how in the data, algorithms and software that constitute the AI product(s)?
- Where do ownership rights reside?
- Does the target company have a strategy to protect confidential information?
Some of these issues will be similar to those arising in any software acquisition or investment deal – in relation to open-source software, for instance. Yet protecting AI innovation requires a hybrid strategy involving copyright, patent, third-party licenses, trade secrets and database protections. We recommend asking whether the company uses AI tools to generate code and whether it has a process to identify machine-generated code.
4. Focus on data protection.
The importance of data protection compliance in M&A transactions has increased markedly over the past few years. The surge of interest in AI-related deals will only intensify that focus.
If an AI product’s development and/or use involves personal data, it will be subject to data protection laws, possibly in multiple jurisdictions. Sanctions for processing illegally obtained personal data can include deletion of entire databases and even algorithms (the latter measure is known as ‘algorithmic disgorgement’ and has been imposed by the U.S. Federal Trade Commission (FTC) on more than one occasion).

Determine early on whether the target company’s AI systems – licensed in or developed in-house – process personal data. Some questions to be addressed include:
- How has the data been obtained, and from where and from whom?
- Does the data involve sensitive personal data?
- Does the company have a data protection compliance program, and does it respect the principles of privacy by design and by default?
We expect increased data protection regulatory scrutiny and enforcement actions in connection with the use of AI systems in the short and long term. Any risks identified may impact valuation. Rectifying them (if possible) may constitute a pre-closing obligation.
5. Determine if the target adheres to an AI governance model.
Does the target have an AI risk management framework, such as the one proposed by the National Institute of Standards and Technology? Does it adhere to an ethical framework like the one developed by the EU High Level Expert Group on Trustworthy AI?
Although not yet a legal requirement (this will likely change with the adoption of Europe’s AI Act), following these and similar recommendations will indicate that the target has taken steps to mitigate AI-specific risks involving things like security, bias and discrimination and that it has a compliance culture.
6. Scrutinize customer and supplier contracts to ensure AI-specific risks are properly allocated.
Consider these issues:
- Who owns system inputs and outputs?
- What rights, if any, does the company have to re-use customer data, and what exemptions from liability are in place?
- Does the target company assume indemnification obligations in relation to damage its tools may cause?
- Does the company benefit from such obligations when it is the customer?
7. Take into account cybersecurity, performance and sector-specific risks.
AI systems pose unique cybersecurity risks (e.g., in the form of software vulnerabilities or susceptibility to attacks). Request and review security audits and information on risk mitigation measures the target company has adopted.
Similarly, request assessment reports on the accuracy, reliability, robustness and bias testing of AI tools used or developed by the target company. A technical specialist should review these reports.
Finally, consider whether any sector-specific laws apply to the use or development of AI tools by the target company (for example, rules now in force in New York City require employers and employment agencies to adopt measures related to Automated Employment Decision Tools).
PREPARING THE SALES AND PURCHASE AGREEMENT (SPA) AND OTHER REQUIRED DOCUMENTS
When preparing transaction documents, keep the following considerations in mind.
8. Expect scrutiny from regulators.
In an increasing number of jurisdictions, transferring ownership in artificial intelligence technologies may trigger scrutiny. Companies should plan for this given the potential that an acquirer and/or target may need prior authorization before implementing the deal.
Foreign Direct Investment (FDI) Review
Regulatory scrutiny of foreign investments has increased around the world in recent years. Authorities may see a company with AI capabilities as a business with national security implications, particularly if the technology can be used for defense purposes.
Analyze potential political and regulatory implications early in the process – the Committee on Foreign Investment in the United States (CFIUS) is reviewing a record number of transactions for national security risks.
Antitrust Review

Concentration of control over the data required to train AI systems means that a target’s use of data falls within the scope of antitrust analysis. Agencies may consider whether the data a target company collects could raise anti-competitive concerns if added to the acquirer’s data pool.
Furthermore, antitrust agencies in the United States and Europe are aggressively challenging so-called “killer” acquisitions of nascent competitors. Any move to acquire an AI company by one of its competitors may at the very least raise questions from regulators.
Transacting parties should have a clear understanding of what remedies they would be willing to offer, if any, should authorities challenge a transaction.
They also should anticipate that broader inquiries are likely to increase transaction costs and the time to closing.
9. Consider targeted representations and warranties.
- Define the process to follow if representations and warranties are breached. Determine the remedies available to the acquiring company, such as indemnification.
- Assess the ability of the company/seller to cover the amount of an indemnification obligation. Consider using an escrow mechanism, collateral and/or holdbacks.
- Ensure that the representations and warranties cover:
  - Intellectual property rights in the product(s) as relevant (e.g., patent, copyright).
  - Respect of third-party licenses and ownership rights.
  - Absence of infringement claims or other product use-related claims.
  - Data acquisition and protection of related know-how.
  - Absence of any loss of confidential information.
  - Compliance with data protection regulations and other laws.
  - Customer contract risk allocation, and cybersecurity incidents and risk mitigation measures.
There is some debate as to whether AI-specific representations and warranties are necessary – risks may be covered by more broadly applicable guarantees addressing intellectual property, IT, data protection, cybersecurity, material contracts and compliance.
In our view, for transactions where the AI is of strategic importance to the target company, AI-specific representations and warranties should be included, given the specific nature of the risks created by AI technologies. That will help focus the target on responding to the buyer’s due diligence inquiries. It also can help surface latent risks.
10. Assess warranty and indemnity insurance policies.
W&I policies typically cover the most important representations and warranties, including those relating to intellectual property ownership and, subject to some scrutiny by underwriters, freedom to operate, data privacy and security, and compliance with employment laws. However, this is an evolving area. Coverage practices may shift, particularly in response to AI-specific risks addressed by what remain non-standard representations and warranties.