AI in Wealth Management Podcast Series Part 2

FINRA, Supervisory Obligations and Protecting Clients

By Dan Garrett, Casey Dougherty and Bryan Jacobsen


AI’s Role in Transforming Wealth Management

Artificial intelligence is increasingly integral to the wealth management sector’s shift towards data-driven, personalized, and secure services. However, as firms embrace AI technology, they also face escalating regulatory demands and risk management hurdles.

In Part 1 of our Oyster Stew podcast series on AI and Wealth Management, we shared our experts’ insights on artificial intelligence (AI) and its impacts on Regulation Best Interest and recordkeeping, with a special focus on how the SEC’s actions around AI will affect compliance teams in the wealth management industry.

Exploring AI in Wealth Management

In part 2 of the series, Oyster experts Casey Dougherty, Dan Garrett and Bryan Jacobsen discuss:

  • FINRA’s exploration of AI technology
  • Supervisory obligations (FINRA Regulatory Notice 21-29) when outsourcing AI technology to third-party vendors
  • Best practices for maintaining compliance and protecting clients

Ensuring Compliance While Embracing Technology

Embracing technological advancements is not just about staying competitive within the industry—it’s about maintaining compliance and protecting your clients. At Oyster Consulting, we understand the transformative power of technology and the balancing act of moving forward while managing risk. Our team of regulatory compliance consultants and technology strategists works closely with wealth management firms to navigate the digital landscape, ensuring that technology implementations comply with industry standards. By partnering with us for compliance consulting and operations strategy, your firm can embrace the digital age while protecting your firm and your clients.

Transcript

Transcript provided by TEMI

Bob Mooney:  Welcome to the Oyster Stew Podcast. I’m Bob Mooney, General Counsel for Oyster Consulting. In our previous podcast, we shared our insights on artificial intelligence and its impacts on Reg BI and recordkeeping, with a special focus on how the SEC’s actions will affect compliance teams in the wealth management industry. This week, Oyster experts Casey Dougherty, Dan Garrett, and Bryan Jacobsen discuss FINRA’s exploration of AI technology for its own uses, supervisory obligations, and protecting clients. Let’s continue our conversation, Dan.

Dan Garrett:  Alright, thanks Bob for that great introduction. Today’s podcast is a continuation of our first podcast on AI and its impact on the industry. In that podcast we discussed the current state of AI, best interest rules, and recordkeeping, with a particular focus on the opinions, rules, and actions from the SEC. So, today we’re going to focus on FINRA’s response to AI, discuss some of the anticipated challenges, and provide some best practices for our listeners. Today, I’m joined by Bryan Jacobsen and Casey Dougherty. For those of you who didn’t join our last podcast, we’ll start off with a quick review of our backgrounds. I’m Dan Garrett. I’ve been in the industry for over 25 years, holding leadership positions in technology and operations at broker-dealers, RIAs, and clearing firms. So, with that, I’ll turn it over to you, Casey.

Casey Dougherty:  Yeah, thanks, Dan. So, I’m Casey Dougherty. I’ve spent the last 24 years working in the industry, primarily for so-called independent broker-dealers and RIAs. Most recently, I’ve served in such positions as Chief Legal Officer, Chief Risk Officer, and Chief Compliance Officer for various firms. If I were to frame my expertise, it would be in working with firms going through significant transition or change, or who are trying to find ways to conduct business in a compliant manner where other firms have really struggled to do so. Now, what I’d like to do is turn it over to Bryan. Bryan, can you tell us a few words about your history and what your expertise is?

Bryan Jacobsen:  Absolutely. And thank you, Casey. I’ll try and keep it brief. Long story short, I’ve been in the industry for almost 30 years now, and 99.99% of that time has been spent right in compliance. I’ve worked for several dual-registered firms, obviously going up through the ranks, and I’ve been a Chief Compliance Officer for over 15 years at this point. One of the things I think I do well is really listening to the business needs of clients and then figuring out how we can make compliance work for those needs. I often find that we live in a world of gray, and my job is to put in the appropriate controls but, at the same time, allow the business to move forward to the extent possible. So anyway, that’s me in a nutshell, Dan.

Dan Garrett:  Yeah, thanks, Bryan. That was great. So, Casey, considering your expertise in compliance and legal supervision, what I wanted to do is start off with FINRA’s Regulatory Notice 21-29, which focuses on supervisory obligations when outsourcing AI technology to third-party vendors. Can you talk about that, interpret it for us, and give us some information that we might want to know?

Casey Dougherty:  Yeah, sure. Like the SEC, which we discussed during our last call, FINRA is also grappling with firms’ deployment of this technology. Regulatory Notice 21-29 actually deals with vendor management and outsourcing, which is certainly implicated for the vast majority of firms considering AI or AI-like technologies that don’t have their own in-house ability to fully build an AI tool. Interestingly, although FINRA talks about vendor management, they don’t once use the term AI in that notice; that’s opposed to something like the SEC, which was front and center with its AI concerns. However, I read that notice alongside an article FINRA published in June of 2020 on AI. In that earlier article, FINRA expressed concern about client privacy, the reliability of the tools and the data, the understanding of a tool’s inputs and outputs, vendor controls, change management, and business continuity.

I know that’s a lot, but fundamentally they’re concerned about bias in the models: incomplete or inapplicable data sets, or data that isn’t validated or factual before it’s used. They’re concerned about errors in algorithms. They’re concerned about whether the proper parameters are used and whether the output can be explained. We talked a little bit about AI washing and greenwashing and things like that, so this is sort of how that flows. They also want to make sure that each new model is thoroughly tested before replacing an old model, and that there is some sort of ongoing monitoring and testing of models for efficacy. FINRA’s observation is that member firms have continued to expand both the scope and the depth of their use of this technology and have increasingly leveraged vendors to perform risk management functions, to assist with supervising sales and trading activities and, as we know, with customer communications. As for the specific FINRA rules that are implicated, the notice reminds firms that FINRA Rule 3110 regarding supervision, and the licensing or testing required of the people involved, applies regardless of whether a firm does those things in-house or outsources them to a third-party AI vendor.

Dan Garrett:  Yeah, that’s great. FINRA just recently did a survey of member firms, and they got an enormous number of responses. Over 95% of the firms responded to this AI survey, which really tells us how much the industry is interested and paying attention to what’s going on here. And what came out of it is that the majority of firms are exploring, adopting, or have already adopted AI, which is amazing. But the majority of those firms also said it was mostly through outsourced solutions. That is, they’re adopting vendor or third-party solutions that have AI built in. And that makes sense, since some firms may not have technology departments mature enough to bring this in-house. But it’s going to be very, very important to think about the AI tools integrated into these vendor solutions.

So, let’s talk about some best practices around that and making sure that we’re kind of maintaining compliance and protecting client information in light of the risk mentioned by FINRA. Casey, do you want to take a stab at that?

Casey Dougherty:  Sure. It’s sort of an interesting issue, because when I think of AI tools being integrated by vendors, as you just referenced, I think it’s not just the firms themselves. I think it’s also the vendors themselves that might try to incorporate these into the way that they deliver their service. It’s a good reminder: we don’t operate in a vacuum, and in many cases our vendors operate or build on other vendors, who in turn build on other vendors. This raises the possibility that at any point along the supply chain, and I’m using that term loosely, that creates the service or product being delivered to the firm, an essential element may be changed, or perhaps look the same but operate differently than it did the day before. I’m reminded a bit of when you’re using your computer one day and everything seems to work, and then inexplicably it just stops working the following day.

Now, chances are what’s happened is that a foundational element was changed slightly, tweaked slightly, and that had a cascading effect that isn’t transparent to the end user, in this case, your firm. That creates risk if you’re relying on that tool as a fundamental way that you’re delivering a service to your clients. I think the best way to deal with a potential situation like this is to have a really good understanding of how an essential tool works, and to get some required transparency from the vendor before they actually change some aspect of that tool. As part of a business continuity plan, you also need a backup strategy in the event you discover your tool doesn’t work as you expected. And for that matter, it probably makes sense to have ongoing testing of the tool to make sure it continues to work as expected, along the lines of the sketch below. Of course, if you’re giving confidential information to a vendor, you need to make sure your privacy policy covers it and that you have solid verification of the preventative measures your vendors are taking to protect that client data. So, I suppose shifting gears here, Bryan, this is for you, and I’m thinking risk monitoring here and AI. With your deep experience in compliance for dual-registered entities and your focus on cybersecurity, how do you see FINRA’s emphasis on AI and risk monitoring evolving, and what considerations should firms have when they’re deploying AI for risk assessment?
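
To make the ongoing vendor testing Casey describes concrete, one approach is to keep a “golden” set of inputs with approved outputs, re-run them through the vendor tool on a schedule, and flag any drift for compliance review. The sketch below is illustrative only; `query_vendor_tool` and the baseline file are hypothetical stand-ins for whatever interface and records your vendor relationship actually involves.

```python
import json

def query_vendor_tool(prompt: str) -> str:
    """Hypothetical wrapper around the vendor's tool; replace with the
    real client call your vendor documents."""
    raise NotImplementedError

def run_regression_check(baseline_path: str = "golden_baseline.json") -> list[str]:
    """Re-run approved inputs and report any whose output has drifted
    from the baseline captured when the tool was last validated."""
    with open(baseline_path) as f:
        baseline = json.load(f)  # {"input text": "expected output", ...}

    drifted = [prompt for prompt, expected in baseline.items()
               if query_vendor_tool(prompt) != expected]
    return drifted  # a non-empty list is a signal the vendor changed something
```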

Bryan Jacobsen:  Yeah, great question. And I’d also like to make a comment on something you were saying, Casey, about vendors and their use of AI, which is, I think, similar to where FINRA’s at right now. Let’s first of all make sure that we’re all on the same page about what AI is. AI is not some alternative intelligence. AI is an algorithmic natural language tool that is looking to predict how to organize its response so that it comes out not as gibberish, but as human speech, things that the lay person can understand easily. That’s where we’re at with AI. So, I was recently at a conference, and it was almost laughable: every single vendor out there had some kind of banner touting email reviews with AI.

And when I look at it, it wasn’t AI, and nothing is AI right now, as I mentioned. What it really came down to was maybe enhanced search functionality; that’s the best way I would describe it. So, I would definitely be skeptical of vendors claiming that they have the AI solution; they’re using it as a buzzword. That being said, what is the future of this with respect to FINRA? FINRA, like so many other firms and vendors, is looking at AI because they understand the immense possibility it could have. Even currently, as a predictive language modeling tool, the ability to assimilate large reams of data and basically spit it out into a meaningful format is huge.

We all work with data currently, and we know that the challenge usually isn’t getting the data. Most firms have more than enough data these days. The problem is, how do you get that data out so that it has some type of meaningful insight or value? And now what you’re able to do is not just look at one account or one firm; you can look across firms, assimilate all this data, and have it spit out in basically any type of format you can think of, and do that in real time. So that’s kind of the future of AI. I do see implications for market regulation at FINRA or the SEC, things like audits, where they can start looking at trends of a certain firm, or even a certain demographic of firms, on certain types of deficiencies.

And then they can help zero in on what they’re reviewing. All of that is going to evolve over the next, I would say, two to three years, and I’d be astonished if the type of exams we go through in a few years isn’t remarkably different and more focused compared to right now. Obviously, FINRA tries to focus their attention and examine what is most relevant to a firm’s practice. But to a certain extent, they are doing it a little bit by just reading some disclosure documents, maybe reading some WSPs, maybe talking to the CCO. Now they’re able to take that data and really figure out, okay, wait a minute, where have we seen issues with this firm before? How can we get feedback from all the different regulators to see that it’s not just my office, but what’s happening across the country with this firm? I think the regulators are going to have a much better ability to do that within a couple of years.

Casey Dougherty:  Thanks for that, Bryan. My initial comment, as you were talking about firms having lots of data, is: what’s the data quality there? Is there bias implicit in the data that the firm is then going to leverage and move forward with? But I take your point that firms actually already have a lot of information, and certainly leveraging that and using it is something that sounds good. Dan, do you have any thoughts before I move on to my next question for Bryan?

Dan Garrett:  No, I agree with both your points. I can’t really add anything to it.

Casey Dougherty:  Okay. So, this sort of builds on the last question, Bryan. This is regarding large language models, and I think you touched on large data sets and things like that a bit, but I want to ask this and see if there’s more. Given FINRA’s interest in generative AI and LLMs, what implications do you see for firms using these technologies in terms of compliance, especially with FINRA’s efforts to understand vendor and internally developed AI tools?

Bryan Jacobsen:  Yeah, great question. So, one thing, just from a definitional standpoint for those who have never heard the term LLM: it stands for Large Language Model. If you look at ChatGPT or Google Bard, those are large language models. And what these large language model tools are really doing, when they respond to a question, is trying to predict the most likely logical next word to put in their answer. Once they place that word, they move on to the next word, and the next word, and so on and so forth, until they complete the response. Now, from a user perspective, it looks like your computer is talking to you, because it is a seamless stream of language, and it looks fairly well refined.
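
To make that next-word loop concrete, here is a toy sketch. The probability table is fabricated purely for illustration; a real large language model scores every token in a huge vocabulary with a neural network rather than a lookup table, but the generate-one-word-at-a-time loop is the same idea.

```python
# Fabricated next-word probabilities, for illustration only.
NEXT_WORD_PROBS = {
    "the":    {"client": 0.6, "firm": 0.4},
    "client": {"asked": 0.7, "called": 0.3},
    "asked":  {"about": 0.8, "for": 0.2},
    "about":  {"fees.": 0.9, "risk.": 0.1},
}

def generate(start: str, max_words: int = 10) -> str:
    words = [start]
    for _ in range(max_words):
        candidates = NEXT_WORD_PROBS.get(words[-1])
        if not candidates:
            break  # no known continuation; stop, like an end-of-text token
        # Greedy decoding: pick the single most likely next word.
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("the"))  # -> "the client asked about fees."
```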

So, that’s what ultimately these AI tools are currently doing. What we’re going to see in the future is that most of these AI tools, like ChatGPT and Google Bard, are what’s called single-modal tools: they take text input and produce text output. What OpenAI and other firms are working on right now, though, is to become more multimodal. What that really means is that they’re going to be able to take an image along with text, decipher what the two mean together, and then spit out either another image, or text, or maybe an audio file, or what have you. The point being, the output is not going to be limited to the same form as the input.

So, that’s going to start opening up the doors there. Now, as far as implications for compliance, and I referenced this before, we all know the adage: crap in equals crap out. I’ve worked for firms where getting reams of data was not the issue. That was the easy part. But getting that data to be in any way meaningful and helpful to me or my staff was darn near impossible, right? You had to get to that person who was good at Crystal Reports or one of those tools that could collate that data and make it into some level of readable report. But that person is much more of an IT person, so trying to get access to their time was often a struggle, with competing priorities and all that stuff.

Now, all of a sudden, you’re going to have a tool where you can say: okay, I know we have the data in there, here’s what I’m looking for, now spit out a report or a synopsis or what have you that’s going to give me a good idea of what concerns I should or should not have, based on whatever I’m asking the system to do. So again, it’s the ability to take this massive amount of data and drive it back into a meaningful result. We’re not there yet. I mean, if you look at ChatGPT, they even limit their output to, I can’t remember, a couple thousand words; there are limits on how much you can do. But as that expands, you’re going to find that the ability to assimilate more and more data is going to be naturally there.

Yeah, so those are my thoughts on it. I’m very optimistic about what the future will hold with these AI tools. We’re not there yet; I’ll say it again. Right now, AI is a cool search functionality, and the results should be questioned. For firms that use AI in compliance now, I would look at it as a good starting point, but that does not take away your need to independently review the results, because there are quite a few inconsistencies, and drastic ones. I did a trial run through AI to see what would come out of the tool, and this was using GPT, and some of the things it was saying were a thousand percent incorrect. It doesn’t really explain exactly where it got those references, so I’m not sure exactly how it pulled in the wrong data, but I was like, holy cow, that’s crazy. So, for a firm that uses these tools, again, I would caution: just make sure that you’re validating everything.

So, Dan, I did have a question. With your background in digital transformations, how should firms interpret FINRA’s exploration of AI technologies for its own use, as seen in their large language model coalition, and what can they learn from FINRA’s approach?

Dan Garrett:  Yeah, thanks, Bryan. This is very interesting to me, and I want all of our listeners to understand that FINRA is taking a very proactive approach with AI. In April of last year, FINRA created a large language model coalition inside FINRA with three different committees. The first committee is looking at AI and its use internally within FINRA, and also externally at the member firms and how they might use AI. The second group is focused on the tech stack that FINRA wants to build and use for its own AI purposes. And the third group is focused on regulatory oversight, compliance, rulemaking, and things like that. To me, this is a big signal to the industry. What they’re saying is: we’re not just regulators on the sidelines; we’re in this field with you.

And to me, we should view it as kind of a green light: AI isn’t just acceptable, it’s being encouraged, with the caveats of being cautious and pursuing informed adoption. The key takeaway here is to be proactive, engage with AI, and have a robust understanding of the mechanics, the potential, and the risk. And then you have FINRA coming out and being very proactive and open about what they’re doing and reviewing. I mentioned the survey they put out and the publication of its results, but here they are using and investigating the services of AI to make our industry better. We should be encouraged by that. The whole industry should be encouraged by that approach, and we should take a comprehensive approach within our firms to ask: what are we going to do about AI? And we should make sure we’re keeping up on FINRA’s interest in AI, what they’re doing, and the regulations they’re putting forward.

Bryan Jacobsen:  Thank you. Very, very helpful. A follow-up question that I had is, and we’ve talked about data bias and cybersecurity: what strategies would you suggest firms employ to manage these risks while also taking advantage of any efficiency gains they can get from AI?

Dan Garrett:  Yeah, these are the two things that I think are fairly unique to what AI is bringing. Casey and you talked about the importance of data privacy; we have that today, even without AI. It’s very important that when we’re working with third-party vendors and so forth, we’re protecting data. AI brings two new facets: data bias, and then cybersecurity. Of course, we have cybersecurity today, but AI brings new challenges to it. If we look at data bias, we really want to make sure that we’re taking a proactive approach, one that starts with ensuring the data comes from diverse and accurate sources, and that we’re continuing to monitor the outcomes for unintended biases. That requires us, or requires our vendors, to do some testing on an ongoing basis to make sure the data isn’t producing biased results.

Cybersecurity is interesting because, as you mentioned, Bryan, GPT models are capable of producing not just text that reads like a human speaking; they’re also producing images and video that are near lifelike, and they’re reproducing voices. So, it’s going to be very important for us to recognize that these AI systems can potentially be used in very harmful ways. These AI tools also have the potential to be targets of cyber threats themselves. So, when we take this approach of adopting AI, we really need to think security first, incorporating layers of defense and rigorous access controls and, on top of that, regular audits, ethical AI training, and transparency. We talked a lot about transparency in our first podcast, but these are the things that are the best defense against these evolving risks.
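
As a sketch of the ongoing outcome monitoring Dan describes, a firm could periodically compare a model’s flag rate across client segments and escalate for human review when one group’s rate diverges sharply from another’s. The field names and the 80% tolerance below are illustrative assumptions, not a regulatory standard.

```python
from collections import defaultdict

def flag_rates(decisions: list[dict]) -> dict[str, float]:
    """decisions look like {"group": "segment-a", "flagged": True};
    the field names are placeholders for whatever your model logs."""
    totals: dict[str, int] = defaultdict(int)
    flagged: dict[str, int] = defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        flagged[d["group"]] += int(d["flagged"])
    return {g: flagged[g] / totals[g] for g in totals}

def disparity_review(decisions: list[dict], tolerance: float = 0.8) -> list[str]:
    """Return groups whose flag rate falls below `tolerance` of the
    highest group's rate; a rule-of-thumb screen for unintended bias,
    not a legal test."""
    rates = flag_rates(decisions)
    top = max(rates.values(), default=0.0)
    return [g for g, r in rates.items() if top and r / top < tolerance]
```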

Casey Dougherty:  Yeah, that makes sense to me. I wanted to pick up on something you said earlier, Dan. SEC Chairman Gensler seemed to be very much doom and gloom: let’s clamp this thing down; this is potentially end-of-the-world type stuff. There was some pragmatic advice or strategy in there, but it was: you’d better test enough to make sure this is not going to cause any harm, and let’s make sure we don’t move the playing pieces at all as a result of touching AI. That’s in contrast to FINRA’s perspective, which seems much more pragmatic and positive in certain ways, just recognizing this is coming along the way. Now, one thing you also referenced was scams, the idea that bad things could happen. I think of the term deepfakes when I think of videos or voices that appear humanlike. So, with FINRA issuing alerts on AI-enabled investment scams, including what I call deepfakes, what steps would you, Dan and Bryan, take to protect firm clients and ensure the integrity of investment advice?

Bryan Jacobsen:  Yeah, I think that’s a great question. I think the first defense is always going to be education. And it’s not just educating the clients; it’s really educating the sales force on what capabilities are out there, and what are some of the worst-case scenarios that reps need to be aware of. I almost look at this like when the concerns around senior investor issues came up and we started to have trusted contact information on new account forms and that sort of thing. It’s kind of the same thing: it’s predicting what could happen and then taking steps to make sure the reps understand it, so they can be your first line of defense and look out for any clients who may be taken advantage of, whether it’s large withdrawals of money from their account or anything like that. These are things any rep should be on the lookout for.

Dan Garrett:  Yeah, Casey, just adding on to that: I think we’re going to need to re-examine multifactor authentication as we have it today. We have it today, of course, but consider some of these deepfakes. Just by getting a sample of a client’s writing, GPT can duplicate or mimic that client’s writing style. With about 15 minutes of recorded voice, it can mimic the voice fairly accurately as well. So, think about the procedures our firms have in place today, where, in order to wire money after getting an email from a client, the best thing to do is pick up the phone, call the client, and verify that they actually sent the email and wish to have the wire. It just opens up the possibility that somebody with access to a phone and deepfakes could reproduce the voice, or even the video, of the individuals on either side of that phone call. That’s a risk we need to pay attention to.

I don’t know that it’s here today, but it’s something we need to keep a close eye on in terms of how we structure and think about these things, particularly around the multifactor authentication processes we have today.
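
One way to harden the callback procedure Dan describes is to stop trusting voice alone and confirm high-risk instructions with a short-lived one-time code delivered out of band to a device already on file. A minimal sketch, assuming a hypothetical `send_to_registered_device` delivery channel:

```python
import hmac
import secrets
import time

CODE_TTL_SECONDS = 300  # codes expire after five minutes
_pending: dict[str, tuple[str, float]] = {}

def send_to_registered_device(client_id: str, message: str) -> None:
    """Hypothetical delivery channel (app push, SMS to the number on
    file, etc.); a cloned voice on a phone call can't intercept it."""
    raise NotImplementedError

def start_verification(client_id: str) -> None:
    code = f"{secrets.randbelow(1_000_000):06d}"  # unpredictable 6-digit code
    _pending[client_id] = (code, time.time())
    send_to_registered_device(client_id, f"Wire confirmation code: {code}")

def verify(client_id: str, submitted: str) -> bool:
    code, issued = _pending.pop(client_id, ("", 0.0))
    fresh = time.time() - issued < CODE_TTL_SECONDS
    # compare_digest avoids leaking the code through response timing
    return fresh and hmac.compare_digest(code, submitted)
```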

Casey Dougherty:  Yeah, I agree. I think this is a challenging issue. The way you approach verifying that a person is who they say they are is going to change. It’s becoming much more difficult, both at the firm-to-client level and client back to firm. So that’s going to keep evolving going forward. Now, as we’re discussing this, one thing that occurred to me, as an attorney, is contracts. I think of our vendor contracts and how you try to deal with vendor risks through contractual protections and things like that. As an attorney, I typically recommend that firms review contracts, usually annually, when they’re up for renewal. But I also think it’s appropriate to have discussions with vendors about how they protect client information. It’s good to have conversations around E&O insurance and requirements that they report promptly to you in the event of an issue they uncover.

I think the vendors are not going to hear it just from you; they’re going to hear it from lots of their clients. And don’t think you’re going to make a decision in a vacuum. As a chief compliance officer, don’t make a decision in a vacuum: involve your IT folks in your annual contract reviews. They’re the ones closest to understanding what you should be asking as part of your diligence. They can make sure you ask the questions that assure the vendor is well versed in the risks and taking appropriate steps to safeguard your information. Anyway, I’m interested in other thoughts, too. Dan or Bryan, do you have any thoughts on vendor risks, contractual protections, due diligence, things like that?

Dan Garrett:  Yeah, I agree that it’s going to be critical to stay ahead of these technological advancements and their implications. We’ve got a lot of new firms popping up with AI solutions within our space, and I think it’s going to be important to engage in active dialogue with the vendors, focused on data protection. Some of the vendors that have already been providing services to our industry are very aware of the rules and regs, the importance of adhering to the SEC’s rules, for example, around data breaches and the SEC’s mandate for prompt disclosure of any significant cybersecurity incidents. It’s very important to have that conversation with vendors, particularly new firms in the space providing these AI solutions. And you mentioned it earlier, Casey: it’s not just the vendors, but the vendors of your vendors <laugh>. So, I just agree with you that as a chief compliance officer, you should bring in your tech staff and get them involved in these contract reviews. And everyone should be educated on the risks of adopting some of these AI capabilities and what that means, not just for bias, but for customer data protection.

Bryan Jacobsen:  Yeah, I totally agree with that. Beyond just the vendors, though, I think firms also need to be careful about how they utilize AI within their practice. Recently we saw enforcement actions against a couple of firms that claimed to be using AI in their portfolio management process, which wasn’t quite the case. The point being: be very careful about the use of AI. As I mentioned earlier, AI tends to be that new sexy phrase, but at the same time, we have to make sure we’re careful about how we’re using it, so as not to confuse the investing public.

Casey Dougherty:  So, let’s break out the crystal ball here. What’s going to happen in the future? Is AI going away? What are the future implications for AI in compliance and surveillance?

Bryan Jacobsen:  AI is not going away. It’s going to evolve, and if anything, it’s probably going to get much more robust over the next couple of years. We’ve just scratched the surface of what this thing can do, so just imagine what type of evolution we’ll see in three, five, 10, or 15 years. Like I said, I’m very optimistic about the improvements we’ll see. I’m also a little bit concerned, as you mentioned, Dan, that there’s a lot of nefarious stuff that can happen with AI in the wrong hands. The largest technology companies, Google, Microsoft, and several others, have actually formed a coalition of sorts to talk amongst their IT peers and figure out what type of safeguards they need to put in place to make sure AI does not get used unethically. The point being: I think there’s a lot of benefit we’re going to see, but there’s also a lot of risk we need to make sure we’re managing.

Dan Garrett:  Yeah, I think it’s going to be huge for compliance and surveillance. We know that’s what FINRA is looking at, because they have the opportunity to use it to help with their data analysis. One of the things we’re seeing a lot with CAT and CAIS and their adoption is the problem of false positives, and reducing false positives in your compliance reviews and so forth. AI can certainly help with that. As Bryan said, we may not be there just yet, but I think AI will be helping us resolve some of these issues as well: identifying things it can take care of, and really moving toward predictive risk management, which is proactively watching, not being reactive, actually seeing patterns before an event occurs and identifying the opportunities to step in. Again, it’s really nice that FINRA is embracing this for its own use. I think this is going to help the industry in a lot of different ways, and compliance and surveillance is going to be a great opportunity.
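
As a sketch of how a risk score could cut down the false positives Dan mentions, a surveillance queue might route only high-scoring alerts to analysts while auto-closing, but spot-checking, the rest. The "risk_score" field, the 0.8 threshold, and the 5% sample rate are stand-ins for whatever your surveillance platform actually provides.

```python
import random

def triage_alerts(alerts: list[dict], threshold: float = 0.8,
                  qc_sample_rate: float = 0.05) -> tuple[list[dict], list[dict]]:
    """Split alerts into an analyst review queue and an auto-closed pile.
    Each alert is assumed to carry a model-supplied "risk_score" in [0, 1]."""
    review, auto_closed = [], []
    for alert in alerts:
        if alert["risk_score"] >= threshold:
            review.append(alert)       # likely true positive
        elif random.random() < qc_sample_rate:
            review.append(alert)       # spot-check the model's judgment
        else:
            auto_closed.append(alert)  # retain for audit, skip manual review
    return review, auto_closed
```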

Casey Dougherty:  Yeah, I echo the optimism. I think FINRA definitely believes that AI is going to have the ability to sort through large data sets. Currently, firms often do sampling because they just can’t get through all the data, but an AI tool could look through things much more thoroughly. Artificial intelligence can eliminate some of those false positives and perhaps come up with better results. I think there’s a lot of promise there. So, let’s wrap up today with some thoughts on best practices for firms. What are your recommendations, Dan and Bryan, on what firms should do today related to AI?

Dan Garrett:  So, I’ll go first, Casey. I mentioned this in the first podcast, too: I’m very big on AI training. I think the best practice, and my recommendation, is to have a comprehensive training program that encompasses not only the technical aspects of the AI tools but also their ethical and regulatory implications, really fostering a culture of continuous learning and adaptability so that firms can better navigate the complexity of this landscape. As AI gets integrated and evolves over time, I think it’s essential for employees at all levels to understand how AI decisions are made, the potential biases the tools might carry, and the importance of maintaining data privacy, particularly under regulation. Equally important is establishing a feedback loop where insights and concerns from using AI can be openly discussed and addressed, with no sweeping things under the rug. I think it’s very important that policies and procedures are put in place so that there’s transparency around this. Take a proactive approach to ensuring that your firm remains agile, informed, and ready to adjust its strategies in response to new regulations, new insights, and new technology advancements, safeguarding against future enforcement actions.

Bryan Jacobsen:  Yeah, I totally agree with what you’re saying about the education component. That’s going to be key, especially with any new technology, where there are going to be more misunderstandings than understandings of what it can do. From a best-practice perspective, I would say: remember, it is a tool, just like your surveillance program is a tool. It does not solve your compliance issues. It does not independently think. But it is a tool that firms can think about using to enhance their current process. I don’t look at AI as something that eliminates future positions or headcount. Instead, I look at it as allowing your compliance staff, who probably have very little spare bandwidth, to focus on the more meaningful reviews as opposed to just carving through a lot of garbage. So again, it’s a tool, and that’s the way it should be thought of. And certainly, that’s the way I would position it.

Casey Dougherty:  So, the only thing I’d add to those is: please recognize that the SEC and FINRA might not enforce today, but that doesn’t mean they can’t come back years from now and apply a retroactive standard with the benefit of hindsight. I’m thinking of mutual fund trails and best execution duties in advisory accounts as one enforcement example. An ounce of prevention is worth a pound of cure in this case: spend the time and the money to do this right. So anyway, that’s it for us today. On behalf of Oyster, Bryan, Dan, and Casey, we thank you for listening to our podcast on AI and the securities industry.

Bob Mooney:  Thanks everyone for listening. If you’d like to learn more about our experts and how Oyster can help your firm, visit our website at oysterllc.com. If you like what you heard today, follow us on whatever platform you listen to and give us a review. Reviews make it easier for people to find us. Have a great day.

About The Podcast Speakers

Dan Garrett

Dan Garrett provides general business leadership, technology strategy and execution for RIA, Broker-Dealer, and Clearing firms, including spearheading digital transformations, optimizing operations, navigating complex business transitions, and building software development teams and proprietary applications.


Casey Dougherty

Casey Dougherty’s 20 years of experience includes expertise in Compliance and Legal supervision in a shared-services environment, executing broker-dealer to broker-dealer joint work and succession arrangements, and other marketing arrangements covering private placement life insurance, VUL and annuity sales.


Bryan Jacobsen

Bryan’s role as a CCO for dual registered broker-dealer / RIAs, clearing firms and crypto-based entities enables him to apply his FinTech, financial, crypto, blockchain, and regulatory knowledge when providing practical compliance solutions.
