Preparing Your Financial Services Firm for AI: Governance, Data, and Compliance
Part 4 of a special podcast series featuring Carolyn Welshhans of Morgan Lewis
By Carolyn Welshhans, Dan Garrett, Jeff Gearhart and Pete McAteer
AI in Financial Services: Why Governance Matters
Financial firms face unprecedented changes as artificial intelligence transforms everything from trading strategies to compliance procedures. This episode dives deep into the crucial foundations every organization should have in place to successfully integrate AI into its operations.
In Part 4 of our special AI Series with Carolyn Welshhans of Morgan Lewis, our expert panel explores:
- Key Strategies for Building Effective AI Governance Frameworks
- Tailoring AI Policies to Your Firm’s Risk Profile and Systems
- Overcoming Legacy Technology Challenges in AI Adoption
- The Critical Role of Data in AI Readiness
Listen and learn as our panel provides actionable guidance for conducting comprehensive readiness assessments covering governance structures, strategic alignment, data management, model risk frameworks, and technology infrastructure.
For compliance professionals navigating these changes, the technical demands continue growing. Beyond understanding regulations, they must now comprehend complex systems, security measures, and emerging technologies, highlighting the need for continuous education as the AI landscape evolves.
Expert Guidance for Smarter AI Integration
At Oyster Consulting, we help financial firms bridge the gap between AI ambition and operational reality. Our team of experts works with you to design governance frameworks, assess AI readiness, and align your technology strategy with your business objectives. From evaluating your existing systems and data infrastructure to advising on model risk management and compliance integration, we ensure your AI adoption is both innovative and regulatory-ready. With Oyster, you get more than advice. You gain a partner who understands the unique challenges of financial services and can guide you through each step of your AI journey.
Transcript
Transcript provided by BuzzSprout
Libby Hall: Hi and welcome to a special Oyster Stew podcast series presented by CRC-Oyster and Morgan Lewis. This week’s episode is part four of the series. Make sure to listen to the other episodes. You’ll hear essential insights from industry veterans with decades of hands-on experience in wealth management, trading, technology and SEC enforcement. To learn more about how CRC-Oyster and Morgan Lewis can help your firm, visit oysterllc.com and morganlewis.com.
Pete McAteer: All right, welcome. Carolyn, you’re going to lead us off today. What foundational policies should firms establish before integrating AI into their services?
Carolyn Welshhans: So, where I start is the use cases. You’ve made the decision that you’re going to be using AI, and I think you want to make sure you’re really clear and have thought through who’s going to be using it and for what purposes. That’s thing one when you’re thinking about your policies and your governance.
I think you also don’t want to lose sight of risk from the use of AI by others. Maybe you’re a trading firm, and even if you’re not using AI, somebody else might be – how might that introduce risk to what you’re doing? Or counterparties and third parties: if you have vendors or others you contract with and they’re using AI, either with your data or something else, does that introduce any risk you’ve got to think about? So, I think those are the main categories I think about.
And then you have got to turn to your actual model for governance. And again, this is not a one-size-fits-all, and it definitely shouldn’t be an off-the-shelf exercise, where you take something and you just plug in your organization’s name. You really want to think about what is your business? What are those categories of risk that you need to think about? And for your business, does it make sense to have, for example, a standalone AI policy? And maybe it does, particularly at the beginning, if you’re thinking about how you’re going to be introducing AI in the use cases and the people who are going to be using it. But maybe over time, or maybe, just because of, again, those use cases or what your organization is, it makes more sense, instead of a standalone policy, to instead introduce and incorporate AI into existing policies and procedures. I think we are rapidly reaching the point where just about anything is going to be AI when it comes to computers. So how does that make sense when you’re looking at your overall policies and procedures of where AI fits in?
And then you’ve got to think about the players. Who is responsible for compliance and training? Who is responsible for the supervision of the AI system, including if it’s generative AI, in the event it changes over time? Your risk profile might change, either because of the use of AI or just what your business is doing. Those are all things that you’ve got to keep an eye on, and it might be different people in different parts of the organization, but who are they? What are those departments? Do they understand what their role is in governance?
And then, finally, how are you documenting it? When it comes to both the steps you’re taking for governance, for compliance, to show that you’ve been doing training on whatever your policies and procedures are, or whatever risks that you’ve identified, how are you documenting it? So those are kind of the main categories and foundational issues I think about when approaching what should a firm be thinking about once it’s decided it wants to adopt AI.
Pete McAteer: That’s terrific, Carolyn. Thank you so much. Jeff or Dan, any additional comments to add?
Dan Garrett: Carolyn, those are great points. I really like all of that. One of the things that’s interesting to me when I think about Compliance Officers and experts in their fields is that they’ve got to be great at understanding the knowledge, the policies and procedures, and the rules and regulations from the various entities that regulate them. And increasingly what we’ve seen is that they’re having to get far more technical and understand technical systems. If you think about all of the regulations that have come over the last 10 years, it’s just getting increasingly more technical. First it was cybersecurity coming to the forefront, and doing third-party risk management, reviews of your vendors, and understanding all the security measures that your firms are taking. And now here comes AI, and this is a whole other very technical component that compliance officers need to stay up on. Not just the rules and regs, but understanding the technology, how it’s being used, and the systems that are using it.
As you mentioned, it is really critical to stay on top of this, and education in these areas is so important. Things are moving so quickly, as we’re seeing; things are constantly changing, new products are evolving, and so forth. I think one thing that’s nice about some of our regulators is that they’re trying not to make rules and regs that pertain specifically to AI, but instead to adapt the old rules and regs so that they’re inclusive of these new tools that are coming out with this technology. So, we’ll see how all of that plays out.
But I just wanted to point out that I think the Compliance folks out there are amazing in that they have to keep up with all of this and stay very informed about all of these things. And really think through, like you said, making it very specific to the company and the needs that they have, the way they’re deploying these types of things, the third-party systems that they’re using. All of those things come into play and so it’s critical to be watching out for that.
I think, also, a lot of firms are slow to adopt, and a lot of times it’s because of this: the unknown, the fear of change, the fear of this new technology. It’s a lot easier to just say no, we shouldn’t do that, than to sit down and figure out, well, how can we? How can we develop these policies and procedures to establish a safe environment for us to operate in? And I think it’s paramount for firms to be getting into this now because, like you mentioned, Carolyn, it’s going to be everywhere, it’s going to be in all these systems, it’s going to be pervasive. So stay on top of it and start now, even if you start small, even if you’re not using AI or not using very much, and really get in there and start working on these policies, either standalone or within the current policies and procedures that you have. So just great points there.
Pete McAteer: Continuing to map and locate where AI has been inserted in your firm, where it’s serving its purpose, and ensuring you maintain that inventory. That’s something we talk about – an inventory of controls, but also an inventory of where AI has been inserted and where it’s being leveraged. Great stuff, great stuff. Next question to Dan: for firms with legacy systems, what strategies can facilitate a smooth transition to AI-driven processes?
Dan Garrett: Yeah, it’s a good question, and we all have legacy systems that we have to deal with, some better than others. There are two points I want to make in this particular area. One is, and Carolyn touched on it a little bit, you need to sit down and think about how you’re going to deploy AI. Before you even begin with that, I think you start by not trying to bring in AI to solve a problem, but by really identifying your problems first and foremost. What are the problems you’re having as a firm? Is it growth, is it efficiencies? What are the things you’re trying to tackle? Then look to see if there’s an AI solution that can help you do those types of things.
The thing about AI is that it’s not going to improve bad processes, it’s going to supercharge good processes. So, I think the focus here is to make sure you’ve got good processes in place, and if those processes can be sped up, they can be faster, they can be done more efficiently and you think AI can help you with that, then that’s a great place to start. If you have a messy process, clean that up first before you try to start adopting AI.
The second point I wanted to make was around data. If you’re just getting AI point solutions that don’t require your data, that’s fine. But so much of it, and particularly to your question, Pete, about transitioning to systems that can improve processes, involves agentic AI, which is a tool that essentially takes generative AI one step further and actually takes action and does something with the information it has generated. Data is extremely important for those kinds of agents and that AI to operate properly, and it’s extremely critical that firms have a good data strategy before they adopt AI.
One is having good governance around your data: making sure the data is secure, that the data going into the AI systems is secure, and that you’re not training large language models that aren’t yours or aren’t in a tenant that you own. So, make sure those things are in place – that you’ve got good governance around your data and you’ve got clean data. I think you need to start with that first, before you really start adopting AI and putting it into your processes.
Pete McAteer: Carolyn, I’ll turn it to you.
Carolyn Welshhans: Yeah, I mean, I think those are excellent points from Dan. I think it comes back to what he was saying – you need to know your business, and that’s just smart business in terms of what you’re going to do with AI. But that’s also a place where a lot of the regulation starts.
A lot of policy and procedure requirements under the SEC’s rules and regulations, for example, have a reasonableness standard. I’d much rather be in a position where, if you have to argue that your policies and procedures are reasonable, you can point to all the thought you put into your business model, what your existing policies and procedures already were, to Dan’s point, your legacy systems as well, and your data.
I think it comes back to really going through those thought exercises to make sure that the AI makes sense, and that, as Dan said, it’s actually solving a problem that you have, and then that you’ve put all this thought into how you’re going to implement it and then govern it going forward. I think the data point is also a really key one that brings in, depending on what the data is, a whole other host of issues you’ve got to think about when it comes to, for example, privacy and cybersecurity. So again, having your house in order, like Dan was talking about, before you introduce this extra element to it, I think is a really good governance step to take.
Pete McAteer: The thing that always comes back to me anytime we talk about data is that data is the lifeblood of every firm and of your business. Bad data is bad information, and with AI the effect is on steroids. We’ve got to have our data management strategy and those operational processes in place to ensure the data that is driving and feeding your AI implementations is sound. It’s vitally, vitally important. So, underpinning all of this, yes, cybersecurity, data security, access controls, all of that has got to feed this and be in really good shape before you start leaning too heavily on AI. Again, that’s foundational. Let’s get your basement in order, maybe not the whole house, necessarily, but the basement. Super, super important. Jeff, anything to add?
Jeff Gearhart: Maybe just an observation. We started with legacy systems, but Dan, legacy systems aren’t an impediment to AI. You might need to do some housecleaning – we keep talking about data management and keeping it clean. You might have to resolve some “legacy issues,” but it’s not an impediment to using AI at all, is it?
Dan Garrett: Absolutely, Jeff. It’s not an impediment. In fact, legacy just means old; it doesn’t mean bad. Sure, systems can be old and bad, but essentially the opportunity here is to use AI to improve the capabilities of some of these systems. In a lot of cases, when we’re talking about processing, it’s moving data between these legacy systems, because a lot of times legacy systems aren’t great at talking to each other. So, I think it’s not an impediment. There may be challenges, because newer systems make it easier for systems to talk to one another, but there is opportunity with legacy systems as well.
Pete McAteer: I’m never going to hear the word legacy again without thinking old. I’ve been called lots of things. Legacy is not one. All right, the next question up is for Jeff. Jeff, you’ll lead us off on this one. What does an effective AI readiness assessment entail for trading and capital markets divisions?
Jeff Gearhart: Thanks, Pete. When I think of a readiness assessment, first you have to think of the core areas the firm has in place, and in that regard, I think the firm has to be committed to AI – to implementing it into daily operations and actively managing its use.
You have to make a commitment of resources, both financial and staff, and you have to make a commitment to the technology and the data needs. So, an assessment would focus on those, and maybe we’ll get to other observations in a moment or two. In my mind, I don’t think it’s going to be feasible to use AI just a little. By that I mean your team is going to want to use it, it’s going to help your operations, your competitors are already using it, so you’re going to have to keep up, whether on the trading or the operations side, and, honestly, your vendors and key service providers are going to use it. So, this isn’t a case of, hey, we’ll get to it when we can. Your firm needs to prepare for AI. You need to create a framework, and that’s where I start a readiness assessment: does your firm have a framework in place for the strategy of use and the commitment of the resources that are needed?
And there I’ll talk about a few key elements. Number one: Governance and Oversight. This is what Carolyn was talking about earlier. You need a framework. Who owns it? Who’s accountable for it? How is it going to be tested? How are new uses going to be implemented? All those types of things. Even in your vendor analysis, find out the degree to which they’re using it, what data they need, those types of things.
Sometimes clients get a little frustrated with us, but you need policies that define ownership, and everybody can’t be responsible. When I say ownership, you’ve got to assign it to one group or person, because if you don’t, it’s owned by everybody – and then nobody owns it.
Second, and I think this is key to a readiness assessment and to your framework, is strategic alignment. Where are you using it? How does it make your firm more efficient, reduce risk, or increase speed to trading? Make sure that any use of AI aligns with your business goals and isn’t just some excited trader or trading operations person seeing a new way to do things. Make sure it fits the model, that leadership knows about it and is committed to supporting it, and that it can stay in place. From there we can move into the actual use.
We’ve talked a lot about data governance and quality. A lot. I’ve just got to add it in here one more time: if you’re going to do a readiness assessment, look at the data sourcing, the integrity, the access controls, the validation steps, things of that nature. Model risk management is also key. Some of this, I think, slides into the technology world, because they have a lot of these procedures in place already. But model validation, performance testing, documentation – from prior experience, documentation is not something people like to do, but you really need to have it in place here so everybody understands what’s going on.
And last, there’s the technology and architecture that go with it. Maybe, Dan, you have a perspective on this, but I think if you’re going to be using AI, you need to make a commitment to the technology and the architecture needed to run it, not have it clunk along. So, I think that’s a key part of a technology assessment. We could get into all kinds of things such as cybersecurity, change management and so on, but those would be my priorities.
Pete McAteer: So, Dan, I think it’s a natural segue to you here to chime in, and then we’ll round things out with Carolyn.
Dan Garrett: Yeah, that’s perfect. We’ve done these readiness assessments. There are different checklists out there, major firms that put them together. You can find them online, you can call a professional to come in and help you navigate these things. But it’s important to do that assessment. Even if you’ve started with AI a little bit – you’re using Copilot in Microsoft, for example – but when you’re ready to really start implementing your own large language models or agentic AI, I think it’s really important to sit down as a firm and go through this readiness assessment and go through the pieces to just make sure your firm’s there.
Like Jeff said, there’s technology needs. Do you have the right resources in-house? Can you identify outside resources, consulting, and development firms that can help you? Do some of your partners and vendors provide additional services for you? Is your network there? We talked about data. I mean, all of these things are important, you know. Going back to your policies and procedures, are those in place? Are you ready to start putting those things together and defining those?
Your employee training is critical. Where are your employees on all this? Are they interested? Do they want to use it? Is there a desire? Do they have the skills? Do you have a training program that helps them adopt AI safely within your own tenant and your own network?
So, I can’t emphasize this enough: be strategic around the assessment and don’t just jump in and start deploying these things. As Jeff alludes to, sometimes this can get very technical. Yes, you can buy from a vendor that provides an AI solution for you, but when you truly want to adopt a large language model or agentic AI, it’s going to require skills, and/or a very good partner to come in, and technology that you may not have today. Again, evaluate where you are today, where you want to go, and the types of things you want to achieve. It’s good to just take a beat, do that review, and get comfortable with it. You’re going to find gaps. You’re going to find areas where you need to improve things before you get into it.
Pete McAteer: All right, thanks, Dan. Carolyn, your turn.
Carolyn Welshhans: Boy. Those are all really excellent points from Dan and Jeff, and I think the thing I want to emphasize is I would just really not fall asleep on Jeff’s point about strategic alignment.
I just think that’s so important – the points he was making about not skipping over what makes sense for the business, and whether everybody is on the same page about why you’re doing this and all the ways it fits together. I think other things then flow from that. If you’ve got that really consciously articulated, then you can tie your governance, your policies and procedures, the need for training, and why everybody needs to pay attention to this back to it, because then all of that other stuff goes to the business and makes good business sense. Not just good as in yes, everybody should comply, follow the law, and try to keep track of all these tricky rules and regulations, but also part of the business need. So I think that’s a really good step that Jeff highlighted, one that can drive a lot of other things.
Pete McAteer: Perfect. Well, I think that wraps us up for today. Hopefully, everybody listening today appreciates Carolyn’s, Dan’s, and Jeff’s viewpoints on whether or not you’re AI-ready. Thank you all very much. Have a great day.
Libby Hall: Thanks for joining us for this episode of our special AI series from Oyster Consulting and Morgan Lewis. We hope our conversation gives you new insights into how AI is shaping the future of financial services. Be sure to subscribe so you don’t miss upcoming episodes as we continue exploring the legal, compliance, and operational impacts of AI. For more information about our experts and our services, visit our website at oysterllc.com. Thanks for listening.