AI has dominated the conversation across markets thanks to transformer-based products, such as ChatGPT, providing mass consumer access to generative AI and capturing investors’ interest. With no end to the acceleration and adoption of this technology in sight, the markets are responding (a recent CVCA Intelligence report noted that Canadian AI startups have raised $408M YTD). However, given the greater nuance and risk this technology presents, those dealing in AI investment face additional considerations beyond those of traditional emerging-technology transactions.
An Enhanced Due Diligence Approach
Finding the real value and risk within a product or company can be more complex in AI deals due to the multi-faceted and layered nature of the technology, the rapid pace at which generative models evolve, and the limited availability of genuine expertise. As a result, the ability to identify risks, and to assess whether an organization can take those risks on, becomes more critical than in slower-moving areas.
In addition to adding AI companies to their portfolios, an increasing number of investors are relying on AI to source data-driven investment opportunities, build financial models, and manage assets. In a recent global survey of investment managers, over half of the participants (54%) reported that they are currently using AI within their investment strategy or asset-class research, with a further 37% planning to adopt the technology.
Organizations that use AI as a tool within the due diligence process itself must keep its limitations in mind and have a team prepared to assess the information the AI has pulled. It is vital to have complete transparency on how proprietary knowledge and data will be used to train models, and on whether the application is built on a third-party foundational model. How information is input, stored, and processed, and who owns the base technology, will affect output, ownership, and associated liability. Organizations need to understand what they are buying and what can happen if things go wrong, and to have a clear framework of accountability if they do.
Legal Liabilities
When investing in AI, venture capitalists need to be prepared to accept legal risks they would not otherwise have to consider. For example, many of the “market” AI reps carve out the use of third-party data and require compliance with applicable standards and laws. Sellers of AI products will often schedule instances of third-party data use against such reps and, given the sector’s rapid evolution, push back on complying with the market-standards portion of those reps.
Investors should assess model governance, R&D activities, proprietary technology, and the company’s ability to manage and mitigate risk. They should also identify where responsibility lies for the quality of predictions, what incentives exist for acting in good faith, and how liability is allocated across each layer of the company’s technology stack.
Betting on Unknown Outputs? Bring In Your Tech Experts
Investors in the AI buying cycle must understand the technology as it currently stands and assess potential outputs that have yet to be created. The technology must be examined by people with deep technical know-how, so that what is being bought, and how it will integrate with current and planned operations, can be identified and mapped out.
While the interfaces of some AI applications may look similar, each model behaves differently and sits on a different underlying stack. Differences in training produce different learned behaviours, which can limit interchangeability. Investors must review each model on an individual basis, without any guarantee of interoperability.
Whose IP Is It Anyway?
As with all investments, IP rights reign supreme when determining value. Due to the multi-layered, and often nuanced, nature of AI, investors must ask questions about the IP rights in what is being bought, how the model is designed to process inputs, and whether the seller has the right to sell. They must also map out who owns the data inputs and the model’s outputs, evaluate the technology’s scalability, and determine what consent, if any, has been obtained from third-party IP owners.
Current Canadian legislation offers no consensus, and little guidance, on when work created by AI could be considered copyrightable, or on who would own such work. This makes it even more critical to identify the owner of the source information and ensure no IP violations occur.
What’s Next?
Global venture capital investment in AI is forecast to hit US$12 billion by the end of 2024. According to Statista, the market size in Canada is expected to grow at a compound annual rate of 27.67% between 2024 and 2030, reaching a market volume of C$24.08 billion by 2030. However, amid a noisy market, investors are becoming increasingly discerning, placing greater weight on sustainable growth, trustworthiness, veracity, safety, and the risk of obsolescence.
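For context, the compounding arithmetic implied by those Statista figures (a back-of-the-envelope check, not a number taken from the report) puts the 2024 Canadian market at roughly C$5.6 billion:

\[
\text{2024 base} \approx \frac{24.08}{(1 + 0.2767)^{6}} \approx \frac{24.08}{4.33} \approx \text{C\$5.6B}
\]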
The Office of the Superintendent of Financial Institutions (OSFI) is also making changes in response to the growing number of federally regulated financial institutions (FRFIs) and federally regulated pension plans (FRPPs) using AI in decision-making. It is set to release an updated guideline on the governance and risk management framework for data modelling, expected to come into effect on July 1, 2025. While it remains to be seen what implications the updated guideline will have for funds, dealmakers can likely expect to make changes to their policies and procedures.
This article was contributed to CVCA Central by Konata T. Lake, Partner at Torys LLP.