Walking the AI Tightrope: AI Compliance Strategies for Wealth Management Firms
Part 2 of a special podcast series featuring Carolyn Welshhans of Morgan, Lewis & Bockius LLP
By Carolyn Welshhans, Pete McAteer, Dan Garrett and Jeff Gearhart
In this second installment of our special Oyster Stew podcast series on artificial intelligence in the financial services industry, experts Pete McAteer, Dan Garrett and Jeff Gearhart from Oyster Consulting and Carolyn Welshhans from Morgan, Lewis & Bockius LLP dive deep into how firms can navigate regulatory scrutiny while leveraging AI’s potential in compliance, trading, surveillance and strategy.
Whether you’re evaluating new fintech tools, implementing generative AI, or managing governance across platforms, this series offers essential insights from industry veterans with decades of hands-on experience in wealth management, trading, technology, and SEC enforcement.
Listen to Part 2
SEC Focus on “AI Washing” and Market Integrity
The Securities and Exchange Commission (SEC) has already targeted what it calls “AI washing” – firms claiming to use artificial intelligence without meaningfully implementing it. Beyond disclosure concerns, regulators are watching for potential market disruptions, cybersecurity vulnerabilities, and governance gaps related to AI trading systems. Our panel breaks down these regulatory flashpoints while providing practical guidance for compliance teams working to stay ahead of examination priorities.
Managing AI Risk in Trading and Surveillance Functions
For trading desks, our experts outline the convergence of technology and trading functions, emphasizing the need for enhanced surveillance, regular model validation, and comprehensive training programs.
Emerging AI Capabilities and Managing Third-Party Risk
As vendors rapidly release new AI features within existing platforms, financial firms face growing complexity in managing third-party risk. Documentation emerges as a critical compliance tool – capturing not just what AI systems do, but how they reach decisions. The podcast offers invaluable insights on establishing appropriate governance frameworks that start at the executive level and flow through to operational implementation.
Integrating AI with Confidence: Expert Guidance for Compliance and Innovation
If you’re wondering how AI fits into your firm’s strategy, compliance program, or operational efficiency, now is the time to start the conversation.
Oyster Consulting’s technology and regulatory compliance experts provide the guidance firms need to implement AI-driven solutions while maintaining compliance with evolving regulations. We can help your firm:
- Review your current technology to identify strengths, weaknesses, and opportunities for improvement
- Provide insight and guidance on prioritizing current technology initiatives
- Evaluate and select the best technology solutions for your firm
- Develop policies and procedures for AI and other innovative technologies
Transcript
Transcript provided by Buzzsprout
Libby Hall: Hi and welcome to a special Oyster Stew podcast series presented by Oyster Consulting and Morgan Lewis. This week’s episode is part two of the series. If you’d like to learn more about how Oyster Consulting and Morgan Lewis can help your firm, visit our websites at oysterllc.com and morganlewis.com.
Pete McAteer: With the evolution of AI and the vast frontier facing the financial services industry, Oyster Consulting and Morgan Lewis have partnered to bring to light some of the critical challenges, threats, risks and opportunities that we’re all thinking and talking about. We have set up this podcast series to share our unique perspectives on what we are seeing in the wealth management space from the legal, compliance and regulatory perspectives, along with practical applications from trading supervision, surveillance, risk management, vendor management and operations, and technology viewpoints. The team of experts joining me today is:
Carolyn Welshhans, a partner at Morgan Lewis in the Securities Enforcement and Litigation Group, where she advises and defends financial institutions, companies and their executives in investigations by the SEC and other financial regulators. Prior to joining Morgan Lewis, Carolyn held a number of senior leadership roles at the SEC, including Associate Director of the Division of Enforcement and Acting Data Officer for the division.
Dan Garrett, a Managing Director at Oyster Consulting with 30 years of wealth management experience running operations and technology groups at RIAs, broker-dealers and clearing and custody firms. At Oyster, Dan provides technology consulting and leads our AI service offerings.
Jeff Gearhart, also a Managing Director at Oyster Consulting, with 35-plus years of capital markets experience in senior leadership roles with institutional broker-dealers. At Oyster, Jeff leads the capital markets consulting services.
And me, your host, Pete McAteer, a Managing Director with 25 years in financial services, in my 13th year with Oyster, leading many of Oyster’s management consulting engagements, including firm strategic direction and platform strategy decisions as well as execution, with a focus on readiness and change management.
Thank you for joining us. Today we’re going to dive into the topic of the pressures around leveraging AI and navigating the regulatory crosshairs. So, it makes sense to start with Carolyn. Carolyn, what are the current regulatory concerns regarding AI use in the financial services industry?
Carolyn Welshhans: Sure, and thank you. These are just some of the topics that our team has been advising and counseling clients on. When it comes to regulatory interest in AI, particularly from the SEC, the first issue is disclosure to investors. That’s where the SEC has already gone when it comes to AI. They’re really focused on what clients are being told about how a firm is using AI, and whether that matches up to reality. The SEC has brought several cases in this space, including against investment advisers, where the SEC has alleged what it calls “AI washing” – meaning firms are saying they’re using AI and the SEC is alleging that they’re not; that they’re using the buzzwords of this very hot topic to draw in clients and investors. So that’s issue number one.
Issue number two is market concerns. We haven’t seen these cases yet, but we have seen cases in the past involving algorithmic trading. With the newest iteration – firms starting to use AI, including potentially generative AI, in their trading – I would not be surprised, if there is a market event tied to AI, to see cases analogous to those algorithmic trading cases: looking at questions of market manipulation, but also at trading rules and regulations that don’t require intent, such as the Market Access Rule – how are trades being placed, and what are the policies and procedures around those?
The third main issue is cybersecurity and privacy. What is the third-party risk, and how is client data being used? For example, if a financial firm is outsourcing in any way – using another vendor and sharing client data with that vendor – does that implicate privacy concerns writ large, but also specifically under SEC rules and regulations that apply to financial firms, such as Reg S-ID or Reg S-P?
And then, finally, compliance and governance. How are firms balancing the use of AI, such as in compliance or in trading, with ensuring that obligations, again under the federal securities rules, are being met and that there’s proper supervision being applied? Again, those are just some of the issues that come up when considering or thinking about how firms are using AI when it comes to trading and other aspects of their business, and there’s a lot of room here, I think, to make that technology work but also still have an eye towards compliance.
Pete McAteer: Terrific. Thank you, Carolyn. Dan, anything to add?
Dan Garrett: Yeah, I’d like to talk about a couple of different things there. That was great.
We’re seeing right now a lot of firms that are looking to improve their policies and procedures – particularly those that are just delving into using generative AI – examining those policies and procedures and looking for guidance around that. The regulators have been putting out some reports and so forth. The SEC maintains that it is trying to release information and rules that are not specific to AI but very general, and a lot of the rules that are out there today apply.
And Carolyn touched on cybersecurity, vendor risk and so forth. Those are all things that we do today with BCP (Business Continuity Planning) and incident response policies, things like that – where we have relationships with third-party vendors, understanding what those vendors do, what the risk is, what data they have, and how they’re maintaining that data. So, it’s very general and not specific to AI, but it still applies.
And, to take it one step further, it’s very important that firms understand what AI is being used at these vendors and how it’s being used within these applications. One of the big concerns is that there is so much work being done by these vendors right now. Everybody’s trying to promote AI within their applications – some running the risk, as Carolyn said, of AI washing. But some of them are coming out with really great applications and features, which arrive in point releases that just get pushed out to a firm. So, you’ve been with the vendor for years, and all of a sudden they have a new AI feature. The question is: did you go through the proper process of reviewing your policies and procedures to really contemplate all of that? Did you review your BCP and incident response plans to make sure you understood how that technology is being used, how it impacts the provider, and how it impacts the data that’s being provided back to you? These are things that need to be considered at the vendor level, and you need to be aware of these kinds of releases as they come up and keep track of them. Understand their impacts, document them, and include them in your policies and procedures.
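To make that vendor-tracking point concrete, here is a minimal sketch, in Python, of the kind of release-review record a firm might keep – one entry per new AI feature a vendor ships, tied back to the reviews the firm’s policies require. All names and fields here are illustrative assumptions, not drawn from any rule or vendor contract.

```python
# One record per AI feature a vendor ships, tying it back to the firm's
# required reviews. All field names here are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class VendorAIReleaseReview:
    vendor: str                          # e.g. the CRM or surveillance provider
    product: str
    release_version: str                 # the point release that introduced the feature
    ai_feature_summary: str              # what the new AI capability actually does
    data_shared: list[str]               # client data elements the feature can touch
    policies_updated: bool = False       # were P&Ps reviewed to contemplate this?
    bcp_reviewed: bool = False           # BCP / incident response plans re-checked?
    reviewed_on: Optional[date] = None
    reviewer: str = ""
    notes: str = ""

# Example: a long-standing vendor quietly adds generative-AI summaries
review = VendorAIReleaseReview(
    vendor="ExampleCRM",                 # hypothetical vendor name
    product="Advisor Desktop",
    release_version="2025.4.1",
    ai_feature_summary="Generative AI meeting-note summaries",
    data_shared=["client names", "account notes"],
)
```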
Pete McAteer: That’s some really good stuff there, Dan. It’s really important to keep in mind your vendors’ obligations, and to stay on top of your vendors throughout this whole process as AI evolves – what they’re using AI for, and adding those things to your disclosures. That’s going to get really complex over the next several years as this thing grows and builds and permeates the wealth management business. Jeff, anything to add from your perspective?
Jeff Gearhart: Maybe just a small point of emphasis. I certainly think everything Carolyn and Dan said was spot on, but from a regulatory aspect there are certain things – what jumps to mind is the Market Access Rule, which provides a lot of structure and requirements around market access and the use of AI tools and decisioning. In our practice, we’ve seen firsthand where some of these things maybe weren’t addressed or implemented correctly. So, the existing regulatory framework really does have implications when you’re implementing AI into your trading models or operational processes – you can’t fall asleep at the switch on those topics, either.
Pete McAteer: Great point, Jeff. Thank you. Dan, I’ll let you lead off here. How can firms ensure their AI tools are transparent and auditable to meet these compliance and governance standards?
Dan Garrett: That’s a tough one, Pete. A lot of these AI systems – particularly the free ones out there, which no one should be using – are black boxes. You don’t know what’s inside. You put information in, it does something, and it spits information back out. There’s no audit trail. There’s no security with these platforms.
It’s very important that you use an enterprise, secure generative AI model internal to your shop, for two reasons. One, you know it’s maintained and, if configured properly, the data is your data – the data is not training the model, and the data is not going outside your enterprise version of that model. And two, any output of that model is your own intellectual property. If you’re using free versions, you’re training the model, which carries the risk of data getting out there, and anything it produces is not yours – it belongs to the AI provider.
Transparency and auditability are very important, and a lot of AI providers are now starting to provide some of the transparency that wasn’t in the earlier models – they will actually show you the process, or the logic, the AI goes through to arrive at the answer it gives you. I think over time there will probably be more rules and regs around capturing that information, just so you understand what the model was doing. What was it looking at? What was it referring to? What data was it trained on to answer that question?
The problem is that models, as we talked about earlier, change and adapt over time, so if you ask the same question at different times you might get different answers. At any particular point in time when you’re using a model and it gives you an answer, the question is: can you backtrack and understand how it arrived at that answer? There aren’t a lot of rules and regs right now about capturing and storing that. They may come, but I think it’s very important that you’re aware of that today.
Some models now provide some of those details, on what I’d call a best-efforts basis. As we work with firms on policies and procedures, what you want to capture is, one, that you’re using an AI model for some function you’re documenting, and then the date and time, what the prompt was, and maybe what the output was.
If you keep that trail of the logic, and all of it is stored within your books and records, it can be referred back to if there’s a question or a concern when something goes wrong – in the event of a hallucination, or bad information that wasn’t properly reviewed.
That points to another important good practice: you always want AI to cite its sources, you always want to review those sources, and you always want to check those types of things – not just put out anything and everything a generative AI model provides. It’s up to you to review and audit those outputs, and the firm’s policies and procedures should reflect that.
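To make the books-and-records idea concrete, here is a minimal sketch, in Python, of the prompt-and-response audit trail Dan describes. The log location, field names and integrity hash are assumptions for illustration, not a prescribed recordkeeping format.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical location; in practice this would feed the firm's
# books-and-records system rather than a local file
AUDIT_LOG = Path("books_and_records/ai_audit_log.jsonl")

def log_ai_interaction(business_function, model, model_version,
                       prompt, output, cited_sources, reviewer):
    """Append one audit record per AI call: who used which model and version,
    when, with what prompt, what came back, and which sources it cited."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "business_function": business_function,   # e.g. "draft client FAQ"
        "model": model,
        "model_version": model_version,            # versions matter: answers drift over time
        "prompt": prompt,
        "output": output,
        "cited_sources": cited_sources,            # the human reviewer must verify these
        "reviewed_by": reviewer,
        "integrity_hash": hashlib.sha256((prompt + output).encode()).hexdigest(),
    }
    AUDIT_LOG.parent.mkdir(parents=True, exist_ok=True)
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
```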
Pete McAteer: Wow, there’s a lot there, Dan. Thank you for all of that. It’s really important for firms to really understand the tools they’re using, the risks around that, the governance and compliance around using these tools and ensuring policies and procedures are updated to reflect that use and the controls that they have in place to manage this technology. Great, great stuff. Thank you. Carolyn, anything to add?
Carolyn Welshhans: I thought Dan raised a number of really excellent points, including the emphasis on the auditability and transparency aspects of your governance – but also, what does that mean in practice? When we’re talking about AI, we’ve moved well beyond capturing some emails. We’re talking about really voluminous data, and exactly what data needs to be captured and potentially retained. Those are questions we’ve been talking to clients about at our firm quite a bit.
I think what it really comes back to is the fact that the SEC requirements, when it comes to financial institutions, the policy and procedure requirements, for example, really rest on a reasonableness standard. So, I think it’s important to start with what is reasonable for your firm in terms of what are you doing and are you being reasonable in how you are trying to govern and provide oversight and supervision of that? And then I think that comes back to the points Dan has raised about documentation. Are you keeping those policies and procedures up to date, reflecting what you’re doing, and then reasonably trying to capture and show the oversight that you have? And how is that occurring and on what basis and who’s in charge of it?
I think those are the really important points: showing the supervision and the oversight, with an understanding of how the technology works in practice. Practically, we may not be in a situation where this is just one more iteration of a software rollout and therefore easier to capture.
I think those are the practical and reasonable day-to-day discussions that need to be occurring. You need to be thinking about it from a legal standpoint, as well as trying to marry that up with what actually works in reality, and making sure those things are thought through. Thinking it through at the time goes a long way towards showing reasonableness – that you’ve tried to address the risks that might be there, specific to your business.
Pete McAteer: Thank you, Carolyn. There is a lot to think about here, isn’t there? So, finally, in this episode of talking about navigating the regulatory crosshairs and preparing your firm, what steps should trading desks and trading firms take to align AI implementations with regulatory expectations?
Jeff Gearhart: Thanks, Pete. A lot of good points have already been made regarding documentation and good policies and procedures that govern a firm’s use. So that’s where I’d like to start: the governance and accountability piece – making sure that the use of AI models, decision-making models, is consistent with the framework set up by the firm. And that framework really needs to start at the top of the house: approval policies set at the executive level, even approved by the board. Then, when it moves down into actual use – whether in trading operations or securities trading – to have clear governance you need clear explainability of what the model is doing, and you need to make sure people understand it. That requires documentation. It requires acknowledging the decision logic. It also requires knowing who owns it.
Increasingly, you’re seeing a blending of technology and trading roles, technology and operations functions. Those roles have operated under different standards, but they’re coming together, so it really takes a heightened degree of surveillance and oversight of all participants. I also need to emphasize that decision-making depends on data quality and where the data is sourced – making sure it’s appropriately sourced and not disclosing information it shouldn’t. And then the key part, from my direct experience, is control over regular testing, validation of the models, and stress testing of the models. Is it doing what you expect it to do? Is it learning? Is it staying consistent with how the model was set up? Are you surveilling for any potential trade decisions or security routing decisions that might be inconsistent with the rules and regs we all operate under – whether it’s Reg NMS, Reg SHO, the Customer Protection Rule, all those kinds of things? You need to surveil for those types of activities, again keeping in mind that you’re bringing a lot of non-securities people, such as technologists, into the equation, who will find fast, very efficient ways to do things that maybe shouldn’t be done.
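To illustrate the regular testing and validation Jeff describes, here is a minimal sketch, in Python, of a periodic drift check: replaying a fixed suite of benchmark scenarios against a trading model and flagging any decision that deviates from the validated baseline. The scenarios, the model stub, and the decide() interface are hypothetical, not a real trading API.

```python
# Replay a fixed benchmark suite against the model and flag drift.
# The scenarios and the model.decide() interface are hypothetical.

BASELINE = [
    # (scenario, decision approved at model-validation time)
    ({"symbol": "XYZ", "signal": 0.8, "position": 0}, "BUY"),
    ({"symbol": "XYZ", "signal": -0.6, "position": 100}, "SELL"),
    ({"symbol": "XYZ", "signal": 0.1, "position": 0}, "HOLD"),
]

def validate_model(model) -> list[str]:
    """Return drift exceptions for compliance to review and escalate."""
    exceptions = []
    for scenario, expected in BASELINE:
        actual = model.decide(scenario)  # hypothetical decision interface
        if actual != expected:
            exceptions.append(
                f"Drift: {scenario} expected {expected}, got {actual}"
            )
    return exceptions

class FrozenModel:
    """Stand-in for the real trading model, used here only for the demo."""
    def decide(self, scenario):
        s = scenario["signal"]
        return "BUY" if s > 0.5 else "SELL" if s < -0.5 else "HOLD"

# Run on a schedule; any exception triggers the firm's escalation
# procedure rather than being accepted silently as the model "learning."
print(validate_model(FrozenModel()))  # [] means no drift detected
```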
Pete McAteer: Human accountability is paramount, with what we’re seeing here, in managing this technology – ensuring there’s good, solid governance in place. Never assume this technology, and these applications of it, will just operate as intended. It needs to be governed, it needs to be understood, it needs to be tested and controlled. That’s what the regulators are going to be looking for: that governance is in place, that oversight is in place, and that testing and controls are there. All right, Carolyn, I’m going to ask you – anything to add to what Jeff was talking about relative to trading and trading firms? It’s kind of right up your alley.
Carolyn Welshhans: Yeah, I had a lot of experience with that. I think Jeff’s point on surveillance is excellent. Firms are going to have their own business reasons, obviously, for making sure that any AI trading model is doing what they want it to do – you have a business and a bottom line, so you’ll make sure that’s happening. But part of that surveillance also has to be done with an eye towards regulators. Making sure, for regulatory reasons, that your surveillance is documented – what you’re looking for and how you’re doing it – is really important to have in place. And as Jeff was talking, he was also making me think about training.
So again, I think this is somewhat firm specific and specific to the use you’re having of AI, as to what that training should be and who it should cover. But I think training is a really important aspect of all of this, whether that’s of the trading desk, whether it’s of the people who are handling surveillance on the back end, and the topics in terms of what you want people to be looking out for. Whether it’s trading that you know is not doing what it’s supposed to be, or you’re getting really strange answers back from the AI model. Whatever it might be, what are the things you think you want your employees to be looking for, and what are they supposed to do if they see something? How are they supposed to escalate that or share it with others in the firm, and is that clear to them? I think those are also important things to keep in mind when firms are thinking about navigating the regulatory crosshairs when it comes to the use of AI, specifically and especially when it comes to trading.
Libby Hall: Thanks for joining us for this episode of our special AI series from Oyster Consulting and Morgan Lewis. We hope our conversation gave you new insights into how AI is shaping the future of financial services. Be sure to subscribe so you don’t miss upcoming episodes as we continue exploring the legal, compliance and operational impacts of AI.