We recently discussed the topic of risk management as it relates to artificial intelligence (AI) in financial services, and suggested certain tips for the financial services sector. This article is the first of a series that will take a closer look at those tips and suggest how funds can maintain and demonstrate accountability for automated processes.
1. Define the Risk
Investment advisers have been using algorithms for many years. Since some programs are capable of automating more functions than others, it is necessary to understand the risks associated with both a program’s technological features and its use. Here are two examples of AI use in trading and advising:
Robo-advisers: According to the Securities and Exchange Commission (SEC), the term “robo-adviser” refers to an “automated digital investment advisory program [that] allows individual investors to create and manage their investment accounts through a web portal or mobile application, sometimes with little or no interaction with a human being.” Of note, a robo-adviser’s recommendation is limited by the information it receives from the individual investor, and the recommendation may or may not be customizable. Some robo-advisers focus on a specific category of investment products, such as ETFs.
Algorithmic trading: An SEC Commissioner has observed that “algorithmic strategies are generally aimed at making a tiny profit over a large number of trades.” One subset of algorithmic trading, high frequency trading, has drawn regulatory scrutiny and continues to do so, yet may account for up to half of trading activity in the U.S. stock markets.
These are only two of many available AI solutions that funds can use, whether to provide advice or to execute trades (or both). As these examples demonstrate, AI can simplify a fund’s operations and reduce its expenses, but the manner in which AI solutions are used affects risk and, in turn, the means or methods of accountability expected by securities regulators.
2. Maintain Appropriate Policies and Procedures
Anything pertaining to the provision of advice and interaction with retail investors is a priority for regulators. This is why robo-advisers are regulated in the U.S. as “registered investment advisers.” For the past two years, robo-advisers have been subject to the Advisers Act, and have been required to comply with substantive and fiduciary obligations that go to the core of securities law, such as:
Providing disclosure to clients.
Obtaining client information to provide suitable advice.
Maintaining compliance programs to support those two objectives.
With respect to compliance, the SEC requires specific policies and procedures to address risks that are unique to algorithms (in addition to the measures usually expected from investment advisers). The SEC requires that robo-advisers have policies in place to address:
The development, testing and backtesting of the algorithmic code, and the monitoring of its performance after implementation.
The collection of sufficient information to allow the robo-adviser to conclude that its initial recommendations and ongoing advice are suitable for the client (based on the client’s financial situation and investment objectives).
The disclosure of changes to the algorithmic code that may materially affect robo-advisers’ portfolios and investment determinations.
The appropriate oversight of any third party that develops, owns or manages the algorithmic code or software modules utilized by the robo-adviser.
The prevention and detection of cybersecurity threats, and the response to these threats.
The use of social and other forms of electronic media in connection with the marketing of advisory services.
The protection of client accounts and key advisory systems.
These topics were included in the SEC’s examination priorities for the first time in 2017, and again in 2018. In the list of 2019 examination priorities, the focus remained on retail investors — particularly with respect to the disclosure of fees — with cybersecurity being another area of concern, though robo-advisers and algorithmic trading were not expressly listed. In December 2018, the SEC carried out its first enforcement actions against two robo-advisers, over allegedly misleading advertising and false statements about their products.
While disclosure obligations in the U.S. and EU are not identical — since the EU’s General Data Protection Regulation (GDPR) requires funds to explain to customers what personal data is being collected and how this data is being used — there is some overlap, because U.S. funds, including robo-advisers, must also make certain disclosures about how client information is used (for example, when using such information to generate a recommended portfolio).
3. Identify Additional Obligations Applicable to Securities Traders
Aside from retail investors, the SEC’s main enforcement priority for 2019 relates to compliance and risk in registrants responsible for critical market infrastructure. Although algorithmic trading is not mentioned in those priorities, this technology continues to have implications for market structure and, relatedly, market integrity.
In 2010, the SEC identified the four main strategies of algorithmic trading as follows: (1) market making, (2) arbitrage, (3) structural strategies and (4) directional strategies. At the time, the SEC expressed concerns about certain aspects of the changing market structure. Then, in 2016, it approved an amendment to a rule proposed by the Financial Industry Regulatory Authority that requires algorithmic trading developers to register as securities traders, in a move to reduce market manipulation.
There are additional changes to the regulation of market structure underway, as the SEC is currently analyzing the results of a roundtable consultation held last year on market data and market access. Among other things, the SEC is reportedly considering measures to increase the speed of, and add more price information to, public data feeds to help make them more competitive against private data feeds sold by stock exchanges.
Additionally, algorithmic trading has been an area of focus at the Commodity Futures Trading Commission (CFTC), which recently abandoned a proposal that would have required traders to submit their proprietary source code to the authorities. The proposal faced intense industry pushback, as it would have posed significant competitive risk if adopted. The measure was initially proposed in 2015 in response to the so-called “Flash Crash” of 2010, in which one large sale triggered massive losses in the U.S. stock markets within minutes. The CFTC chair had expressed concerns about the constitutionality of the proposal, but said he was open to discussing potential revisions.
EU regulation is having an impact on U.S. firms in this area as well. Since January 2018, the implementation in the EU of the Markets in Financial Instruments Directive (MiFID II) requires algorithmic traders and trading venues to test their algorithms’ compatibility with the systems of the trading venues and market access providers, and to perform stress-testing on the algorithms themselves. Traders and trading venues must also have “kill switch” functionality in case the program malfunctions, and business continuity arrangements to manage disruptive incidents.
The CFTC had already anticipated and analyzed the impact of MiFID II on the U.S. financial markets, including as it relates to market structure. More recently, the Securities Industry and Financial Markets Association (SIFMA) wrote a letter to the SEC asking the regulator to consider adopting MiFID-style changes that could let banks charge clients separately for stock and bond analysis. While SIFMA previously resisted the unbundling of trading and research services, the industry association is now vocal about the challenges of maintaining different systems in the U.S. and the EU.
4. Monitor Any Changes Through Proper Governance
In the past three years, robo-advisers and algorithmic traders have become subject to new regulatory obligations as “registered investment advisers” and “securities traders,” respectively. Both registration categories have their own obligations, which are set to become more complex, partly due to the implementation of GDPR and MiFID II in Europe. Regulators in the U.S. and abroad increasingly expect transparency about AI tools, both through regulatory disclosures and in communications directed at retail consumers, such as marketing materials and platform disclosures.
The rules that apply specifically to algorithmic traders and robo-advisers are comparatively new, and there are few enforcement cases applying those rules. The compliance challenge is significant: the rules are evolving, the technology is advancing and regulatory guidance is limited. These factors make it difficult to have the proper systems in place.
As a result, it is especially important to maintain proper governance to identify, monitor and respond to any new risks as the technology evolves. Although there are some differences between the U.S. and EU approaches, regulators consistently expect to see robust risk management frameworks in place, even if — and especially if — the technology monitored by these frameworks is increasingly complex.