The Power and Potential of Artificial Intelligence in Financial Services
A Special Podcast Series from Oyster Consulting and Morgan, Lewis & Bockius LLP
By Carolyn Welshhans, Pete McAteer, Dan Garrett and Jeff Gearhart
Artificial intelligence (AI) is reshaping every corner of the financial services industry—from compliance and operations to trading and client engagement. In this special series of the Oyster Stew podcast, Oyster Consulting’s Pete McAteer, Dan Garrett and Jeff Gearhart team up with Carolyn Welshhans, a partner at Morgan, Lewis & Bockius LLP, to unpack the real-world applications, regulatory risks, and emerging opportunities of AI.
Whether you’re evaluating new fintech tools, implementing generative AI, or managing governance across platforms, this series offers essential insights from industry veterans with decades of hands-on experience in wealth management, trading, technology, and SEC enforcement.
Part 1: The Power and Potential of Artificial Intelligence
In our kickoff episode, the panel explores how firms are already leveraging AI to drive compliance efficiency, manage operational workflows, and reduce false positives in surveillance systems—starting with smarter email review. You’ll hear insights into:
- AI-powered solutions in compliance and operations
- Legal implications and disclosure expectations from the SEC and FINRA
- Bias detection, guardrails, and governance for AI models
- AI in trading desks, risk management, and trade surveillance
- The importance of oversight, documentation, and ongoing testing
This conversation lays the foundation for the rest of the series, which addresses the transformational impact of AI across the financial services landscape.
Listen to Part 1
Integrating AI with Confidence: Expert Guidance for Compliance and Innovation
If you’re wondering how AI fits into your firm’s strategy, compliance program, or operational efficiency, now is the time to start the conversation.
Oyster Consulting’s technology and regulatory compliance experts provide the guidance firms need to implement AI-driven solutions while maintaining compliance with evolving regulations. We can help your firm:
- Review your current technology to identify strengths, weaknesses, and opportunities for improvement;
- Prioritize current technology initiatives;
- Evaluate and select the best technology solutions for your firm; and
- Develop policies and procedures for AI and other innovative technologies.
Whether you’re integrating AI into your operations or refining your compliance strategy, Oyster Consulting delivers the insights and expertise necessary to keep your firm competitive, compliant, and prepared for the future.
Transcript
Libby Hall: Hi, and welcome to a special Oyster Stew podcast series presented by Oyster Consulting and Morgan, Lewis & Bockius LLP. This week’s episode is the first of the series. We hope you’ll continue to listen, and we look forward to engaging you in the AI conversation.
Pete McAteer: With the evolution of AI and the vast frontier facing the financial services industry, Oyster Consulting and Morgan Lewis have partnered to bring to light some of the critical challenges, threats, risks and opportunities that we’re all thinking and talking about. Everywhere we turn, we’re all hearing about AI. We have set up this podcast series to share our unique perspectives on what we are seeing in the wealth management space from the legal, compliance and regulatory perspectives, along with practical applications from trading, supervision, surveillance, risk management, vendor management, and operations and technology viewpoints. The experts joining me today are:
Carolyn Welshhans, a partner at Morgan Lewis in the Securities Enforcement and Litigation Group, where she advises and defends financial institutions, companies and their executives in investigations by the SEC and other financial regulators. Prior to joining Morgan Lewis, Carolyn held a number of senior leadership roles at the SEC, including Associate Director of the Division of Enforcement and Acting Data Officer for the Division.
Dan Garrett, a Managing Director at Oyster Consulting, with 30 years of wealth management experience working at RIAs, broker-dealers and clearing and custody firms running operations and technology groups. At Oyster, Dan provides technology consulting and is leading our AI service offerings.
Jeff Gearhart, also a Managing Director at Oyster Consulting, with 35-plus years of capital markets experience in senior leadership roles with institutional broker-dealers. At Oyster Jeff leads our capital markets consulting services.
And me, your host, Pete McAteer, a Managing Director with 25 years in financial services, now in my 13th year with Oyster, where I lead many of Oyster’s management consulting engagements, including firm strategic direction, platform strategy decisions and execution, with a focus on readiness and change management. Thank you for joining us.
Today we’re going to be focusing on the power, the real potential of AI in financial services. So, Dan, I’m going to start with you first. Can you share an example of where AI significantly improved compliance or operational efficiency in a financial firm?
Dan Garrett: Yeah, thanks, Pete. We’re seeing lots of places where AI is being applied within the compliance realm. A lot of firms are starting to use it now, and much of what we’re seeing is in well-known, existing compliance applications that are being enhanced with AI capabilities. It isn’t necessarily new startups, although those are out there and we’re seeing more and more of them enter the space; these are existing applications that many firms already use.
It’s important for firms to go back to their current vendors and see which applications are being enhanced with AI, and to look for those opportunities. But specifically, what I wanted to talk about is email review. In the past it’s been very lexicon-based: you have a long list of bad words that you search for across emails to generate potential alerts, and that creates a lot of false positives because the lexicon is not smart. There’s no context. If you say, “I guarantee I’ll meet you for lunch on Friday at noon,” it gets flagged because of the word guarantee. What we’re seeing now, with generative AI, is the ability to actually read the email: not just match the lexicon, but understand the context of the email and alert based on that context. So, it is removing an enormous amount of the workload email review folks face to sift through all of these emails, get them reviewed, approved and move on. That’s the one area that I think has had the biggest impact and where we’re seeing adoption, but there are others.
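To make the contrast concrete, here is a minimal sketch of the two approaches Dan describes. The lexicon check flags on keywords alone; the contextual check stands in for a generative-AI classifier that reads the whole message. The `contextual_flag` stub and its patterns are illustrative assumptions, not any vendor's implementation:

```python
# Minimal sketch (hypothetical) contrasting lexicon-based flagging with
# context-aware review of the same email text.

LEXICON = {"guarantee", "promise", "insider"}

def lexicon_flag(email: str) -> bool:
    """Flag the email if any lexicon term appears, regardless of context."""
    words = {w.strip(".,!?").lower() for w in email.split()}
    return bool(words & LEXICON)

def contextual_flag(email: str) -> bool:
    """Placeholder for a generative-AI classifier that reads the whole
    email and judges intent. A real system would call an LLM here; this
    stub only illustrates where that call would sit."""
    benign_patterns = ("guarantee i'll meet", "guarantee i will meet")
    text = email.lower()
    if any(p in text for p in benign_patterns):
        return False           # social usage of "guarantee" -- not a performance claim
    return lexicon_flag(text)  # fall back to the lexicon for the demo

email = "I guarantee I'll meet you for lunch on Friday at noon."
print("lexicon:", lexicon_flag(email))       # True  -- false positive
print("contextual:", contextual_flag(email)) # False -- suppressed by context
```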
Pete McAteer: Yeah, and we’re hearing a lot of that throughout the industry and at some of the conferences we’re attending. A lot of AI applications seem to help streamline that work and get some of it off of our plates. So yeah, email review is a primary example of that for everyone. Carolyn, from the regulatory perspective, what are your thoughts?
Carolyn Welshhans: I agree with Dan. Even before getting into a discussion of, okay, what are the risks and what do you have to think about when it comes to regulators, I think it’s important to start with the promise and the potential of AI. I agree that it’s very flashy; it’s the hot topic right now, and everybody is excited about it. But those back-office implications and use cases are really important ones, and a good place to start thinking about whether it makes sense to adopt AI into an organization and where it can really deliver efficiencies. I think that email compliance review is a really important consideration.
Dan Garrett: So, Pete, one of the other areas we’re seeing, outside of compliance, is operational efficiency throughout firms. AI has been used quite a bit at some of the larger firms, and we’re now seeing smaller and midsize firms starting to adopt AI tools, especially generative AI, within operations and workflows. One of the big features we’re seeing is note-taking. There are about seven or eight different firms out there providing note-taking capabilities for financial advisors: listening to client calls, recording them, storing them and allowing you to then chat with a bot about the calls. What did we talk about? Summarize the takeaways. What are the next steps? What are the things that we need to do?
The next iteration of that is agentic AI that will actually do something with that information. So, if the conversation is about opening accounts, we can see it take the information gathered from the call, apply it to an account-opening form, and send that off to the client.
We’re not quite there yet; I think that’s one of the areas that’s coming. But automation like this, just letting things run, poses a lot of risk. That’s where “human in the loop” comes in: an individual, a financial advisor or an operations person, reviews what the AI is doing and approves it before it takes action. Those are some of the things that need to be taken into consideration as we implement these tools and talk about risk. This is where the risk gets elevated again: automating these processes is a great opportunity to improve efficiency, but it also introduces risk if it’s not applied correctly.
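A minimal sketch of the human-in-the-loop pattern Dan describes follows. Everything here, including the `draft_account_form` helper, is hypothetical; it only illustrates where the approval gate sits before any agentic action executes:

```python
# Human-in-the-loop sketch (illustrative only): the agent proposes an
# action, a person approves or rejects it, and only then does it run.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    payload: dict

def draft_account_form(call_summary: str) -> ProposedAction:
    """Stand-in for an agent that turns a call summary into a draft action."""
    return ProposedAction(
        description="Open brokerage account for client mentioned on call",
        payload={"form": "new_account", "source": call_summary},
    )

def human_review(action: ProposedAction) -> bool:
    """The gate: nothing executes until a person explicitly approves.
    In production this would be a supervised review queue, not input()."""
    answer = input(f"Approve '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: ProposedAction) -> None:
    print("Executing:", action.payload)  # stand-in for the real submission

action = draft_account_form("Client asked to open a new IRA.")
if human_review(action):
    execute(action)
else:
    print("Rejected; logged for audit.")
```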
Pete McAteer: Thanks, Dan. It’s about monitoring, oversight, governance, understanding what’s happening and just ensuring that you’re not letting the AI bot run the business. That’s super important. Now, Carolyn, what legal considerations should firms keep in mind when deploying AI in their client-facing services?
Carolyn Welshhans: So here I’m going to focus specifically on considerations when it comes to financial regulators such as the SEC or FINRA, as opposed to much broader legal considerations, and there are certainly a lot of those when it comes to AI. But, specifically from an SEC perspective, for example, the SEC is first and foremost a disclosure agency. And so, whether we’re talking about public companies and their investors, or regulated entities such as broker-dealers and investment advisors and their clients, the SEC is going to care most about what clients are being told about the use, the risks, et cetera, of AI, and whether that matches what is actually happening. We’ve seen some of these types of cases brought by the SEC already. They’ve been referred to as AI-washing cases: the SEC alleges that there’s a big disparity between what is being told to clients versus what is actually being done with AI, and that not much, if anything at all, is actually being done.
But there are a lot of other implications that financial firms should be thinking about when they’re using or contemplating using AI in client-facing ways. Financial institutions, for example, have very specific policy and procedure requirements that might be implicated regarding privacy, cybersecurity and books and records, for example, when it comes to AI, and, in turn, those could implicate issues of supervision and compliance at financial firms.
A final issue to keep in mind: financial institutions should consider the implications of how AI is used in trading. I think this is going to be a topic we continue to discuss in this series, but it could implicate issues such as net capital or the Market Access Rule, or other very specific trading regulations that the SEC has used to bring cases in the past, with algorithmic trading, for example. The SEC may view AI as the next natural extension of that, with implications for trading.
Pete McAteer: Great stuff there, Carolyn. Thank you. Jeff, I’m going to turn to you. Anything to add?
Jeff Gearhart: I’m thinking of it more from a practical aspect in terms of legal considerations when adopting or using AI models. There’s the whole security-of-information aspect: what the models are sourcing, how the data is retained, whether there’s PII or anything like that in the data set. So, there’s a whole security aspect around those models. But perhaps the bigger one that jumps to mind is avoiding conflicts of interest. For example, say you’re creating trading models that can make decisions based on data and trading activity and continue to learn. They very logically could decide that favoring the firm’s order flow over client order flow would improve profitability and be pretty efficient, except it’s not legal; you always have to put the client’s order first. So, you really have to build some guardrails and some oversight over the models, and because they’re continually learning, you have to continually evaluate and maintain that oversight. It’s a lot of work. It greatly improves trading, but there’s a lot to consider in terms of how the models are connected and controlled.
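As a rough illustration of the kind of guardrail Jeff describes, the sketch below rejects a model-proposed firm order that would jump ahead of an earlier client order. The data model and the simple sequence check are assumptions for the demo, not a production control:

```python
# Pre-trade guardrail sketch: block any proprietary order that would
# step in front of an earlier-arriving client order in the same
# symbol and side.

from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str         # "buy" or "sell"
    source: str       # "client" or "firm"
    arrival_seq: int  # arrival order; lower means earlier

def violates_client_priority(proposed: Order, book: list[Order]) -> bool:
    """True if `proposed` is a firm order that would jump ahead of an
    earlier client order in the same symbol and side."""
    if proposed.source != "firm":
        return False
    return any(
        o.source == "client"
        and o.symbol == proposed.symbol
        and o.side == proposed.side
        and o.arrival_seq < proposed.arrival_seq
        for o in book
    )

book = [Order("XYZ", "buy", "client", arrival_seq=1)]
firm_order = Order("XYZ", "buy", "firm", arrival_seq=2)
if violates_client_priority(firm_order, book):
    print("Blocked: client order must be worked first.")  # audit-log in practice
```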
Pete McAteer: Those are some great points. You started talking about algorithmic trading and the data those algos may be accessing, and I immediately started to think about Regulation ATS Rule 301(b)(10) and protecting confidential trading information for some of these ATSs. So, there are really some important thoughts there on how we handle and protect data, because these AI models are deeply dependent upon vast amounts of data. So what data is being accessed?
Jeff Gearhart: Yeah, really good point, Pete, and we have direct experience with some of our clients who have let those guardrails down, so not something to fall asleep on.
Pete McAteer: Guardrails, safeguards, I think that’s what the rule uses. All right, great. Thank you. Dan, data and tech – that’s your world. Any thoughts?
Dan Garrett: I’ll just add to what Jeff was saying: bias is a really big concern as these models are learning, changing and adapting. You may get results that you don’t wish to achieve, and it’s a real issue: firms need to be testing for bias and able to show that they’re testing for it. That’s not something you can just get away from. You need to understand these models and the logic they go through when they make their decisions. Have audit trails, potentially, within certain applications, but also be showing that you are testing and looking for the biases that you don’t want the model to be learning over time.
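One simple, hedged illustration of what "testing for bias" can look like: compare outcome rates across groups and flag the model when the gap exceeds a policy threshold. The groups, decisions and tolerance below are toy assumptions:

```python
# Toy bias check: approval-rate parity across a protected attribute.
# Real bias testing is broader; this only shows the shape of one test.

def approval_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Share of approvals among records belonging to `group`."""
    records = [approved for g, approved in decisions if g == group]
    return sum(records) / len(records) if records else 0.0

def parity_gap(decisions: list[tuple[str, bool]], a: str, b: str) -> float:
    return abs(approval_rate(decisions, a) - approval_rate(decisions, b))

# (group, model_decision) pairs -- stand-ins for logged model output
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

TOLERANCE = 0.2  # assumed policy threshold
gap = parity_gap(decisions, "A", "B")
print(f"parity gap = {gap:.2f}")
if gap > TOLERANCE:
    print("Flag for review and record in the audit trail.")
```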
Pete McAteer: Awesome. Yeah, that’s checks and balances, controls and testing, making sure you know what’s happening behind the scenes. That’s critical. Okay, anything else to add to the legal considerations we should be thinking about?
Carolyn Welshhans: I just want to say I think you all made excellent points, and I know we’re keeping this high level, but just think about the references you’ve already made to the things that algos and AI, when you’re using them in trading, need to keep in mind; we’ve referenced some specific rules. I think that reflects the importance of this: it’s a technology with incredible potential, but it’s important to really take a look at the very intricate rules and regulations that are already on the books, how those might be implicated, and how to plan ahead so that you’re keeping everything in mind and avoiding interest from the SEC. And if you do attract it, you’ve already started alluding to the answer: documentation, and thinking about how to explain exactly what your model is doing.
Pete McAteer: Jeff, from your experience in Capital Markets, how is AI transforming trading desk operations and risk management?
Jeff Gearhart: I think there’s an important point here. From a trading aspect, from a front-office aspect, AI and modeling have been in use for quite a long time. In fact, I saw a recent statistic that AI strategies are driving over 70% of equity trades now. So, it’s very, very present. Given its ability to analyze huge amounts of data, it’s bringing trading efficiency and speed to the marketplace. But I think it’s moving into operational processes too, with even the likes of DTC adopting it into their processes. Where I’m seeing it used is certainly what Dan indicated on email reviews, but it’s also being used by risk management for analyzing large amounts of data, doing predictive analysis, and evaluating models to see not only where the current risk is, but where the firm may be going. And then it can also be employed in trade surveillance.
Again, the ability to analyze the data, apply some models, see where the trades might be going, maybe actually connect related trades and really identify bad trading habits, manipulative patterns or even, in some cases, fraud. So, it’s really pretty cool how it can be applied to some of these other processes away from trading.
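As a toy illustration of the surveillance idea, the sketch below flags a trade whose size is an extreme outlier against an account's history, using a simple z-score. Real surveillance models are far richer; the threshold and data here are assumptions:

```python
# Toy trade-surveillance check: is a new trade's size anomalous
# relative to the account's historical sizes?

from statistics import mean, stdev

def is_outlier(new_size: float, history: list[float], z_cut: float = 3.0) -> bool:
    """True if `new_size` is more than z_cut standard deviations from
    the historical mean. Baseline stats exclude the candidate trade so
    the outlier cannot inflate its own yardstick."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_size != mu
    return abs(new_size - mu) / sigma > z_cut

history = [100, 120, 90, 110, 105, 95]      # typical trade sizes
print(is_outlier(5000, history))            # True -- route to surveillance queue
```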
Libby Hall: Thanks for joining us for this episode of our special AI series from Oyster Consulting and Morgan Lewis. We hope our conversation gives you new insights into how AI is shaping the future of financial services. Be sure to subscribe so you don’t miss upcoming episodes as we continue exploring the legal, compliance, and operational impacts of AI. For more information about our experts and our services, visit our website at oysterllc.com. Thanks for listening.