The Realities of AI Implementation: What Every Firm Needs to Know

Part 3 of a special podcast series featuring Carolyn Welshhans of Morgan, Lewis

By Carolyn Welshhans, Pete McAteer, Dan Garrett and Jeff Gearhart


The Thin Line Between Innovation and Exposure

Powerful AI tools are rapidly transforming financial services, but beneath the promise of efficiency lurks significant risk. Artificial intelligence has the potential to be a game changer for financial services—but only if firms approach AI implementation with eyes wide open.  The difference between innovation and regulatory exposure often lies in the details: clear policies, robust oversight, and a thoughtful alignment of technology with compliance. For firms eager to harness AI’s power while avoiding costly missteps, strategic planning and sound governance are non-negotiable.

In Part 3 of our special Oyster Stew podcast series with Morgan Lewis, our panel of experts uncovers the practical realities facing firms implementing artificial intelligence solutions today:

  • Are your policies and strategies setting you up for success, or a compliance nightmare?
  • What are the legal repercussions when AI systems fail, like placing erroneous orders?
  • How can strong governance, data quality, and human expertise drive successful AI adoption?

Listen Now

Explore the Full Series

Looking to implement AI while managing associated risks? Listen to Part 1 and Part 2 of this series for more practical guidance from industry veterans who understand the technology and regulatory landscape.

Strategic AI Implementation

CRC-Oyster bridges the gap between innovation and regulation. Our team of technology and compliance experts works with financial services firms to evaluate AI strategies, implement strong governance frameworks, and develop policies that meet regulatory expectations. Whether you’re launching new AI tools, managing vendor risk, or strengthening your internal controls, we provide the insight and experience you need to integrate AI safely, efficiently, and compliantly. Let us help your firm turn complex challenges into sustainable success.

Transcript

Transcript provided by Buzzsprout

Libby Hall: Welcome to a special Oyster Stew podcast series presented by CRC | Oyster and Morgan Lewis. This week’s episode is part three of the series. Make sure you listen to the other episodes, where you’ll hear essential insights from industry veterans with decades of hands-on experience in wealth management, trading, technology and SEC enforcement. If you’d like to learn more about how CRC | Oyster and Morgan Lewis can help your firm, visit oysterllc.com and MorganLewis.com.

Pete McAteer: With the evolution of AI and the vast frontier facing the financial services industry, Oyster Consulting and Morgan Lewis have partnered to bring to light some of the critical challenges, threats, risks and the opportunities that we’re all thinking and talking about. The team of experts joining me today is:

Carolyn Welshhans, a partner at Morgan Lewis in the Securities Enforcement and Litigation Group, where she advises and defends financial institutions, companies and their executives in investigations by the SEC and other financial regulators.

Dan Garrett, a Managing Director at Oyster Consulting, with 30 years of wealth management experience working at RIAs, broker-dealers and clearing and custody firms, running operations and technology groups.

Jeff Gearhart, also Managing Director at Oyster Consulting, with 35-plus years of capital markets experience in senior leadership roles with institutional broker-dealers.

And me, Pete McAteer, a Managing Director with 25 years in financial services, leading many of Oyster’s management consulting engagements, which include firm strategic direction, platform strategy decisions and execution, with a focus on readiness and change management. Thank you for joining us.

So, Dan, the first question is coming to you. Have you encountered instances where AI deployments led to unintended consequences? What lessons were learned?

Dan Garrett: That’s a great one, Pete. There are so many out there. I’m going to pick on two. The one that’s probably the most common, and that we’ve all heard about, is somebody using AI and not verifying the information it provides back. It’s been in the papers.

So, we all know that AI hallucinates; it makes things up. That’s not a bug. It’s being creative, much like a human brain. Sometimes it gives you creative things to say or talk about, which is great when you want it to be creative, and not great when you’re looking for facts and presenting what you get out of AI as facts. So the recommendation we have is always verify. Trust, but verify, anything these generative AI models provide to you. One tip I like to give is to ask the AI model to provide its sources and double-check those sources. It can provide hyperlinks to the sites; you can go there, look at them, and confirm that what it’s providing to you is accurate.
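As one way to picture the source check Dan describes, here is a minimal sketch (not from the podcast) that takes the URLs a model cites and confirms that each one actually resolves. The function name and URLs are hypothetical, and a link that loads still has to be read by a human to confirm it supports the model’s claim.

```python
import requests

def check_cited_sources(cited_urls):
    """Confirm that each source URL an AI model cites actually resolves.

    This only verifies the links exist; a human still has to read them
    and confirm they support the model's claims.
    """
    results = {}
    for url in cited_urls:
        try:
            response = requests.get(url, timeout=10)
            results[url] = response.status_code == 200
        except requests.RequestException:
            results[url] = False
    return results

# Example: placeholder URLs standing in for whatever sources the model cited.
sources = ["https://www.sec.gov/rules", "https://example.com/made-up-citation"]
for url, reachable in check_cited_sources(sources).items():
    print(f"{'OK  ' if reachable else 'FAIL'} {url}")
```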

But the other thing I wanted to talk about is very specific to our industry, and it comes from stories I heard from financial advisors. In our space, there’s been a plethora of AI agents being used for note taking. Financial advisors use them to listen in on phone calls with clients, which helps with note taking and summarizing the call. They can chat with the agent afterwards and ask, “What did we talk about? What were the takeaways from the phone call?” And the generative AI will present back a nice summary of the call and a to-do list you can act on, which offers potential time savings.

Some of what you hear from these providers may be hype, but this can save financial advisors anywhere from five to 10 hours, in that they don’t have to take notes anymore, they don’t need to transcribe notes, and they don’t need to provide notes to their sales assistants. Sales assistants can go in, talk to the chatbot about the conversation that was had, and get the takeaways. So there is a real potential opportunity. I’ve talked to financial advisors who say it’s been a game changer for them. It’s been wonderful.

However, I talked to another group of financial advisors who said, no, it’s an absolute nightmare. It’s terrible, because right now what they have to do is get the transcript from the AI conversation and review it to make sure everything it captured was actually said. They’re comparing their own notes to what the AI produced, verifying it, making changes, and then providing it to compliance to be recorded.

So, how can the same tool save one group 10 hours a week while costing another group time, because they’re going through and reviewing everything it produces?

What I’ll say is that it’s all about adoption and the way some of these applications are put into place. I put it out there as a warning: when you think about implementing generative AI and using it with different models and so forth, really think about the consequences for the process, the workflow and the requirements around how you’re using it. Now, some firms’ compliance departments may not require their financial advisors to review, correct and store those notes, and others may. We can get into whether they should or not, but it’s about implementation and understanding the consequences. So those are the two examples I wanted to point out.

Pete McAteer: Hey, Dan, just a quick clarification on that second piece. Do you think it has more to do with the firm’s policies and procedures around managing the AI tools? I guess Carolyn might want to weigh in on this when we turn to her as well. I didn’t want to step on toes, but it just feels like you could create some onerous oversight and review if you didn’t trust the tool set and didn’t have experience with it.

Dan Garrett: Absolutely correct, and we could get into an entire discussion around that, or around whether these recorded phone calls are admissible and are things you should be storing in your books and records. Some firms argue that no, there isn’t a transcription; the AI has simply learned about the call and you can talk to it about it. Yes, you can ask it to create a transcription, but only if you ask it to, right? At that point the transcription of the call exists, so it should be stored and its accuracy should be verified. There’s a lot of gray there and different ways to think about it, but it absolutely comes back to policies and procedures and thinking these things through before you run out and implement a system. Think about the policies and procedures you’re going to put in place and really think through whether this is going to make things better or worse in terms of operational efficiency.

Pete McAteer: Okay. So, Carolyn, I’ll turn it over to you for your feedback. This is right up your alley now.

Carolyn Welshhans: Just to build on what Dan was talking about, I think he got right to the central tension when you come to AI: it has incredible promise. People generally have been able to identify efficiencies and positive things it can provide to a business, including in the financial area. But at the same time, you’ve got to ask: what are the regulatory requirements? What is this going to mean for our governance? How does this fit into what we’re already doing, or does it create a new obligation on our end that we didn’t have to deal with before? And that’s not to say you shouldn’t adopt AI if it makes sense for your business. It’s just that you’ve got to think all of this through, think through the different regulatory regimes that might apply to your business, and then decide what you do as a result.

Pete McAteer: Okay, awesome. Thank you, Carolyn. Just a quick question, Jeff, in case you’ve thought about this or seen something out there in this space: what about high-touch trading desks, where clients are still talking directly to their firms? Have you seen this rear its head in that space?

Jeff Gearhart: I would say not as much as on the algo market-making desks and the model-driven desks. That’s really where you see heavy use of AI and of data, with AI managing that data and making trading decisions. The high-touch desk is still a lot of good, old-fashioned voice communication, providing guidance to clients and moving on from there.

Pete McAteer: And I guess those are already recorded lines and the transcripts would just be additive to the existing policies and procedures, right?

Jeff Gearhart: Fair. When you think about a client coming to a high-touch desk, they’re seeking guidance on bringing a large position into the market to liquidate or accumulate it, or something of that nature. They’re looking for consultation, and I’m pretty sure they’re still going to want to talk to their trader or sales trader, if you will.

Pete McAteer: Carolyn, what legal repercussions can firms face if AI systems fail or cause harm?

Carolyn Welshhans: Just like Dan before, I’m going to pick one situation to focus on here. I think the one that people have thought about the most, or the worst-case scenario they’ve imagined, is AI hallucination when it comes to trading. What does that look like? What are those risks, and what could result? The closest analogy I’ve thought of is algorithmic trading. We’ve already seen that, and in some ways it’s a very close cousin. The SEC has brought cases where there were allegedly runaway algos, trading algorithms that didn’t perform the way they were supposed to and resulted in a flood of orders going to the market, for example. In those situations, the SEC brought cases against the broker-dealers involved under Rule 15c3-5 under the Securities Exchange Act of 1934, sometimes referred to as the Market Access Rule.

What that rule generally requires is that broker-dealers have some pretty specific types of controls in place. They come down to financial risk and regulatory risk controls designed with the intent of preventing what people in the past have referred to as a fat-finger error: somebody enters an order for a million dollars when they meant a dollar, or a million orders when they meant one, because they put in too many zeros. These controls are supposed to make sure that if an erroneous order would exceed, for example, a credit or capital threshold for that specific customer and for the broker-dealer itself, the order gets blocked. It never gets placed.

So, you can see how that’s something that might be looked at if an algo hallucinated and similarly placed a bunch of orders that run contrary to the financial risk model of a broker-dealer or its customers, for example, or something else about their trading. Again, like we were saying a moment ago, that’s not necessarily a reason not to adopt AI if it makes sense for your trading model and your business, but it means you’ve got to think about whether that sort of rule applies to you as a broker-dealer. Have you thought about how algorithmic trading, whether you’ve done it in the past or not, might now be implicated by AI under this sort of rule? How do you put that into place? How do you make sure your automated controls keep up with, for example, generative AI that may be changing over time? Are you thinking about how to surveil those controls once you have them in place, so you’re comfortable you’ve actually got that control? I think that’s one very specific example of the legal repercussions that could come about when we’re talking about AI, trading and financial firms.
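Carolyn’s description of those controls follows a familiar engineering pattern: a pre-trade check that blocks any order exceeding a credit or capital threshold before it ever reaches the market. The sketch below illustrates that pattern only; the thresholds, order fields and function names are hypothetical assumptions, not a compliant Rule 15c3-5 implementation.

```python
from dataclasses import dataclass

@dataclass
class Order:
    customer_id: str
    symbol: str
    quantity: int
    price: float

    @property
    def notional(self) -> float:
        return self.quantity * self.price

# Hypothetical per-customer credit limits and a firm-wide capital threshold.
CUSTOMER_CREDIT_LIMITS = {"CUST-001": 5_000_000.00}
FIRM_CAPITAL_THRESHOLD = 50_000_000.00

def pre_trade_check(order: Order, open_exposure: float) -> bool:
    """Return False (block the order) if it would breach a credit or capital limit."""
    limit = CUSTOMER_CREDIT_LIMITS.get(order.customer_id, 0.0)
    if order.notional > limit:
        return False  # exceeds the customer's credit threshold
    if open_exposure + order.notional > FIRM_CAPITAL_THRESHOLD:
        return False  # exceeds the firm's capital threshold
    return True

# A "fat finger" order: too many zeros on the quantity gets blocked, never placed.
order = Order("CUST-001", "XYZ", quantity=1_000_000, price=25.00)
print(pre_trade_check(order, open_exposure=10_000_000.00))  # False
```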

Pete McAteer: Terrific. Thank you, Carolyn. Jeff, I’m going to turn to you. Anything else to add there?

Jeff Gearhart: I think that is actually an excellent example. We do a lot of work around the market access rules, and we’re well aware that there are a lot of large penalties and fines that can come into play. That’s just the regulatory aspect. Then there are also the trading losses, the true financial losses you can incur. It’s a big deal, and when you’re using these models, or AI to guide the models, things can go haywire pretty quickly. So you’ve got to have the right controls in place; not just on the credit and capital side, but also the erroneous order controls and the testing involved, which the rule actually requires for the certification. There’s a lot firms have to do when you have direct market access and you’re using trading models and AI to make decisions. So, excellent point.

Pete McAteer: So, Jeff, how can firms proactively identify and mitigate risks associated with AI in trading operations?

Jeff Gearhart: I think there are lots of ways to answer this, and I’ll give some specific examples, but all the core risks have an underlying theme: industry knowledge and expertise is essential. It’s key to managing and mitigating the risks. In other words, artificial intelligence is great, but somebody needs to know what it’s doing and evaluate the results. I think it’s going to become a larger problem when you talk about trading operations and settlement functions.

It’s not the glamour part of the industry, and that’s where we’re losing a lot of industry and institutional expertise. People are retiring or moving on, and, to be clear, nobody wants to go into the securities industry to be an operations professional. They’re all looking at the sexy side of trading and model development, things like that. You need to make sure you have the right people there. Key staff are essential to understand the basics of the process and to evaluate the AI results, the trends, the data analysis, things of that nature. So that, first and foremost, is knowledgeable, well-trained industry professionals.

Second, and this is where I think a lot of companies need to evolve and where we’re seeing more work, firms need to have an AI framework that defines governance and accountability. Simply put, you need to make sure the company knows how AI is being used within the firm and that there’s an approval process, so people aren’t just inserting it into a process and moving forward from there. So those are my priorities.

When you get into the specifics, such as model risk, the model could be producing incorrect output, so you need the right level of model validation in place, stress testing and, honestly, regular retraining, along with reviewing the results and making sure they’re meeting your expectations.
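One way to picture the model validation and retraining Jeff mentions is a simple drift check that compares recent model error against the baseline established at validation time and flags the model for review. The metric, numbers and tolerance below are illustrative assumptions, not a prescribed validation standard.

```python
import statistics

def flag_model_drift(baseline_errors, recent_errors, tolerance=1.5):
    """Flag a model for review and retraining if its recent error rate
    has drifted well beyond the baseline established during validation.

    `tolerance` is an illustrative multiplier, not a regulatory standard.
    """
    baseline = statistics.mean(baseline_errors)
    recent = statistics.mean(recent_errors)
    return recent > baseline * tolerance

# Example: validation-time error rates vs. this week's (hypothetical numbers).
if flag_model_drift([0.02, 0.03, 0.025], [0.06, 0.07, 0.05]):
    print("Model output has drifted - escalate for validation and retraining.")
```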

A couple of other risks that I think are really key are data quality and integrity. That’s been a big deal for me; I’ve been in this field for over 34 years. Data quality is key, and these models can analyze huge amounts of data very quickly, but you had better have regular, rigorous data cleaning. Make sure the data is valid, make sure it’s accurate, make sure it’s not corrupted, those types of things.
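To make the data quality point concrete, here is a minimal sketch of the kind of basic integrity checks a firm might run before feeding data to a model. The column names and rules are hypothetical examples, not a complete data governance program.

```python
import pandas as pd

def basic_data_quality_checks(trades: pd.DataFrame) -> list:
    """Run a few basic integrity checks before trade data is fed to a model."""
    issues = []
    if trades["trade_id"].duplicated().any():
        issues.append("duplicate trade IDs")
    if trades[["symbol", "quantity", "price"]].isnull().any().any():
        issues.append("missing required fields")
    if (trades["quantity"] <= 0).any() or (trades["price"] <= 0).any():
        issues.append("non-positive quantities or prices")
    return issues

# Hypothetical input with deliberate problems in each row after the first.
trades = pd.DataFrame({
    "trade_id": [1, 2, 2],
    "symbol": ["XYZ", "ABC", None],
    "quantity": [100, -5, 200],
    "price": [25.0, 30.0, 0.0],
})
print(basic_data_quality_checks(trades))
```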

Then, when it comes to the use of AI for operational risk, you need to make sure there’s transparency, there are audit trails on what it’s doing, there are metrics you can use to review the results and confirm they’re reasonable, and there’s an escalation process.
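The transparency, audit trail and escalation controls Jeff describes could look something like the sketch below: every AI output is logged with context, and low-confidence results are escalated for human review. The model name, field names and confidence threshold are illustrative assumptions only.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

REVIEW_THRESHOLD = 0.70  # illustrative confidence cutoff for escalation

def record_ai_decision(model_name, input_summary, output, confidence):
    """Write an audit-trail entry for an AI output and escalate low-confidence results."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "input_summary": input_summary,
        "output": output,
        "confidence": confidence,
    }
    logger.info(json.dumps(entry))  # in practice, this would go to durable storage
    if confidence < REVIEW_THRESHOLD:
        logger.warning("Escalating %s output for human review", model_name)

record_ai_decision("settlement-exception-classifier", "trade 12345 break", "fail-to-deliver", 0.55)
```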

And the last thing I’ll mention, even though there are probably a bunch of other risks, such as cybersecurity, that you need to focus on, is change management. We’ve all worked in large companies, and they get content doing things one way. Well, AI continues to learn and evolve, so you have to provide training and ongoing management of anything that changes in the process that could affect the models, and involve not just the technology team but the end users and the people who can actually evaluate the results. So there’s a lot, to answer your question, in terms of how you can mitigate the risk, but those are the keys for me.

Pete McAteer: Yeah, thanks, Jeff. I agree. Much more to come. We’re just getting through the door with this right now. Carolyn, anything to add on trading operations?

Carolyn Welshhans: I think what Jeff said was really thoughtful, and for me it helped clarify the point that AI isn’t plug-and-play. Obviously, we’ve been talking about that, and I also think it isn’t necessarily a substitute for a lot of the uses we’ve just been discussing. It might be an enhancement, it might make things better, but as Jeff was talking, it was clear that in each of the steps he described you still need the people, you still need the knowledge, whether it’s oversight, training the model, or thinking through what you really want it to be doing and the knowledge you want to impart to it. It’s still a partnership with the people who have that knowledge and those contributions, and I think that might be a good way to think about it: not a substitute or plug-and-play, but an enhancement, if that is in fact what it would be for your business.

Pete McAteer: Yeah, where I see the plug-and-play piece is where it inserts itself in the middle of the analysis and digestion of large amounts of data to summarize and pull together information that can be leveraged and considered. And it has to be considered by a human before it can be put to use.

Libby Hall: Thanks for joining us for this episode of our AI series with Morgan Lewis. We hope this conversation gave you new insights into how AI is shaping the future of financial services. This podcast series was recorded prior to the merger of Oyster Consulting and Compliance Risk Concepts. Be sure to subscribe so you don’t miss upcoming episodes as we continue exploring the legal, compliance and operational impacts of AI. For more information about our experts and our services, visit our website at oysterllc.com.

About The Podcast Speakers

Podcast Guest – Carolyn Welshhans

Carolyn Welshhans is a partner at Morgan Lewis in the Securities Enforcement and Litigation Group, where she advises and defends financial institutions, companies and their executives in investigations by the SEC and other financial regulators. Prior to joining Morgan Lewis, Carolyn held a number of senior leadership roles at the SEC, including Associate Director of the Division of Enforcement and Acting Data Officer for the Division.


Pete McAteer

Pete McAteer has senior-level management experience coaching, consulting and leading large programs and operations teams that drive significant change management, process improvement and implementation efforts. He has a deep background, with over 30 years of experience at Fortune 500 companies in the international quality manufacturing and financial services industries.


Dan Garrett

Dan Garrett provides general business leadership, technology strategy and execution for RIA, Broker-Dealer, and Clearing firms, including spearheading digital transformations, optimizing operations, navigating complex business transitions, and building software development teams and proprietary applications.


Jeffrey Gearhart

Jeffrey Gearhart is an intuitive, analytical leader with over 30 years of experience in banking and capital markets businesses. Prior to joining Oyster, he held senior leadership roles with The Bank of New York Mellon, including business line COO, CFO, business development and relationship management.
