AI in Wealth Management Podcast Series Part 1

SEC Actions, Best Interest Rules, Recordkeeping

By Casey Dougherty, Dan Garrett and Bryan Jacobsen


Artificial Intelligence (AI) in Wealth Management

As the use of artificial intelligence (AI) becomes increasingly prevalent, the wealth management industry is shifting towards more data-driven, personalized and secure services. However, regulatory concerns and risk management challenges remain as firms increasingly adopt AI technology.

In this week’s Oyster Stew podcast, Oyster experts Casey Dougherty, Dan Garrett and Bryan Jacobsen discuss the industry’s relationship with AI today, the role of artificial intelligence in Best Interest rules and recordkeeping, and the impacts of the SEC’s AI initiative. Get Oyster’s insights into what SEC sweeps and investigations mean for Compliance teams, and what wealth management firms should be doing to mitigate the risks that come with adopting AI technology.

Balancing AI and Compliance

While the path to AI compliance in wealth management comes with challenges, it also offers significant opportunities for innovation and competitive advantage. At Oyster Consulting, we understand the intricacies of regulatory compliance and the transformative potential of AI technologies. Our team of experts can help your firm bridge the gap between adopting AI applications and the stringent regulatory environment of the financial services sector. 

Oyster Solutions governance, risk and compliance software can also help your firm manage compliance by automating compliance tasks, documenting reviews, and setting approval processes.   

Transcript

Transcript provided by TEMI

Bob Mooney:  Welcome to the Oyster Stew Podcast. I’m Bob Mooney, General Counsel for Oyster Consulting. The use of artificial intelligence is becoming a key component in the wealth management industry’s move towards more data-driven, personalized, and secure services. At the same time, increasing regulatory expectations and risk management challenges remain as firms look to employ AI technology. In this week’s Oyster Stew podcast, Oyster experts Casey Dougherty, Dan Garrett, and Bryan Jacobsen discuss the industry’s relationship with AI today and the SEC’s AI initiative. Get their insights into what SEC sweeps and investigations mean for compliance teams, and what compliance teams should be doing to mitigate the risks that come with adopting AI technology. Let’s get started – Casey?

Casey Dougherty:  Thanks, Bob. I guess I’ll start by introducing myself. I’m Casey Dougherty. I’ve been with Oyster Consulting for a little over a year now. I’ve spent the last 24 years working in the industry, primarily for so-called independent broker-dealers and RIAs. I’m an MBA and an attorney, although, to be clear, I’m not providing legal advice on today’s call. Most recently I’ve served in positions as Chief Legal Officer, Chief Risk Officer and Chief Compliance Officer. So that’s me. Bryan, how about you? What’s your history?

Bryan Jacobsen:  Yeah, thanks so much, Casey. My name is Bryan Jacobsen. I’ve been in the industry for about 28 years now, all of it in compliance, with 14 years in the Chief Compliance Officer role for various firms. Similar to Casey, I deal a lot with independent broker-dealer/RIA firms. I also deal with a lot of digital asset firms. I’ve been the CCO for a number of cryptocurrency-type firms, stablecoin issuers, that sort of thing. And if I had to peg one of my strengths, it would be working with firms that are looking to either build or rebuild teams, or to really improve on what they’re currently doing, and coming in and just trying to be a partner to that business. Dan, how about yourself?

Dan Garrett:  Yeah, great. Thanks, Bryan. I appreciate joining both of you for this podcast. My name is Dan Garrett. I’ve worked in the industry for about 25 years, holding Chief Operating Officer and Chief Executive Officer roles at RIAs. I’ve also held several Chief Technology Officer roles at RIAs, broker-dealers and clearing firms. At Oyster, I’m going to be providing strategic planning and executive consulting. So, one of the things I think we should do is start off just talking about AI in general and what we’ve seen in the industry past, present, and future. I think back to robo-advisory, which is now getting on to be about 15 years old. And I know many firms have been using what we consider AI for robotic processing and some analytic work. But it seems like with ChatGPT and OpenAI, the industry’s interest has just taken off, and we’re now seeing some reactions. So, Bryan, maybe you can help us think about where this industry has been, where we’re at today, and where we’re going in the future, before we start digging into some of the compliance items.

Bryan Jacobsen:  You know, let me start off by saying that with the definition of AI, the beauty is in the eye of the beholder; everyone seems to have their own definition. But I think it’s important, since you mention the robo-advisors that came out 15 years ago or so: that was really not AI in the sense that we’re talking about. That was really what I would call conventional programming. Conventional programming is, think of the if-then statement: if this happens, then we’re going to output this. And that’s what robo-advisors were really about. It took a person’s risk tolerance, and if it met a certain score, then the account would go into a predefined model and would just be managed via that model. But it wasn’t really true analytics in the sense of taking every part of what you want to do and creating a portfolio for you.
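For illustration, here is a minimal sketch of the if-then, score-to-model logic Bryan is describing. The score bands and allocations are hypothetical, invented for this example rather than drawn from any actual robo-advisor.

```python
# A minimal, hypothetical sketch of "conventional programming" robo-advisor
# logic: a plain if/then mapping from a questionnaire risk score to a
# predefined model portfolio. Score bands and allocations are invented.
def assign_model(risk_score: int) -> dict[str, float]:
    """Map a risk-tolerance score (0-100) to a predefined allocation."""
    if risk_score < 30:
        return {"bonds": 0.80, "equities": 0.20}   # conservative model
    elif risk_score < 70:
        return {"bonds": 0.40, "equities": 0.60}   # balanced model
    else:
        return {"bonds": 0.10, "equities": 0.90}   # aggressive model

print(assign_model(45))   # {'bonds': 0.4, 'equities': 0.6}
```

The point of the contrast: every path through this logic was written out by a programmer in advance, which is what distinguishes it from the learned, data-driven behavior discussed next.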

We’re going into a situation where there are actually two types of AI. The first type is the more common one, called generative AI, and it is basically exactly what it sounds like: it generates text, speech, images, video, and it basically produces the same kind of output as the input. So, if it’s analyzing video, it’s going to produce video. If it’s analyzing art, it’s going to produce art. That covers most of the common types of AI out there; ChatGPT is a good example. But what we’re seeing now is what’s called multi-modal AI, and that is really, I think, where the true value of AI shines. There you have systems that are able to take multiple different inputs and produce multiple different outputs.

I’ll use an example: right now, I’m training a puppy. So, you could take an article about training puppies. You could then go to YouTube and look at some videos of well-known trainers. A multi-modal system could then compare the two and produce both video and text summarizing the main topics and the key things you need to focus on. That’s really where the true value of AI is going to shine: it can take these multiple inputs and make use of them. So, that’s kind of where we’re at. Let me turn it back over to you, Dan.

Dan Garrett:  So I appreciate that. I know we’re going to get into more of the details, but let’s now talk about what’s going on with the SEC and the AI initiative. Casey, with your experience in compliance and legal supervision, can you tell us a little bit about what the initiative is and what it’s trying to address?

Casey Dougherty:  Yeah, certainly. This is a challenge; the SEC came out guns blazing on this one. For some context, the initiative isn’t new. The SEC’s proposal was issued last July, and the comment period for that proposal ended in October. The industry submitted lots of comments. If enacted as written, the initiative would require broker-dealers and investment advisers to take steps to address conflicts of interest associated with the use of what Bryan was talking about: predictive data analytics and similar technologies like, let’s say, machine learning. The conflicts of interest would need, in some cases, to be neutralized or eliminated. And we’ll come back to that; that’s important. That’s as opposed to just disclosing them, like we do under the Advisers Act. As for the genesis of this, paraphrasing Chairman Gensler, the SEC believes that AI poses risks to investors as well as to the financial system more generally.

This seems to flow from the SEC’s belief that AI’s outputs and models can be unpredictable, and that if they are both widely adopted and based on flawed data or methodologies, they could have wide-ranging, and, I’m going to add, probably, at least in the SEC’s opinion, negative repercussions. So, again, as written, if adopted, firms would need to screen their predictive data analytics tools, PDAs, for conflicts of interest that place the firm’s interests before those of clients or prospects. They would need, of course, WSPs to address the rules. And, as under other rules, they’d have to create records of tests of and changes to those tools. I want to say that there’s a context here: the SEC isn’t alone. Quoting an open letter signed by a number of industry experts, such as OpenAI founder Sam Altman, they collectively state that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

I happen to think there are some positives as well. But I think that Chairman Gensler is coming from a place that might resonate with a number of industry leaders. I don’t really want to go into detail on the fallibility of human decision-making, or how we can convince or manipulate each other (and I realize those words are loaded, depending on your frame of thought) into making decisions or purchasing things we otherwise wouldn’t. However, I will note there is a reason why corporations devote extensive funds to advertising. The SEC in its proposal noted an example of gamification of financial services. And when I think of gamification, I think, oh, GameStop. That’s not exactly the same thing they’re referring to, but the idea is that people will spend more time trading, or more time on a site, or let down their guard, if something appears fun. I’m sure we’ve all heard that people buy with emotion and justify with logic. AI is probably going to be really good at manipulating emotions and convincing us to make decisions or purchases, something that right now takes organizations a lot of time and money.

AI promises the ability to overcome our normal decision-making defenses at scale, at a manageable cost. And that potential vulnerability is both a tremendous opportunity for these businesses and, not to overstate it, a profound concern for our regulators. That was a lot, <laugh>. So, what are some other thoughts here? I’ve presented the doom and gloom a bit. Dan and Bryan, do you have a more optimistic spin on this, on how this could actually help out a bit?

Bryan Jacobsen:  I think those are great points. One thing I would say is that the SEC’s reaction is really something we could have foreseen. The SEC thinks part of its job is to make sure the industry is not moving at lightning speed; maybe they’re going for whatever is slower than lightning speed. But the point is that, as an industry, we tend to want to quickly embrace the new and the cool, and the SEC, I think, is saying: that’s great, but we also have to put some guardrails around it. It actually reminds me, and I know this is going back quite a few years, but 25 years ago, when imaging technology was really just getting out there and getting more popular, there was a lot of similar concern at the SEC, and a similar reaction, about firms going paperless: what would happen if you lost those records?

Now, fast forward, and I don’t think there are too many firms out there keeping reams of paper anymore. It’s completely changed the landscape. I’m hoping it doesn’t take us 25 years to get there on AI. But the point is, I think the SEC is really doing its job in trying to make sure that we’re looking at things holistically, that we’re putting in guardrails, and that we’re not just embracing AI without understanding both the good and the bad of it.

Dan Garrett:  Yeah, I agree. Maybe I’m dating myself even more, but go back to email: <laugh> prior to email, it was all phone calls. And then all of a sudden email came along, and there was a very big concern about financial advisors speaking to clients through email. I don’t want to say knee-jerk, but these kinds of reactions to momentous technology advances are warranted; it’s right for us to sit back and look at this. This is probably the biggest technological advance we’ll see in our lifetime, and it will dramatically change industries and opportunities and so forth. And as Casey said, there are risks. I guess I’m optimistic about our ability to adapt to those risks and to mitigate a lot of these things, and really to see the good that these tools can do.

There are MIT studies coming out saying that, for knowledge workers, this provides a 40% increase in productivity and quality. And I think about technology with regard to our industry and the financial advisor gap we’ve been talking about for years: the concern that we’re not going to be able to offer as much advice to as many people. This may be a place where AI actually helps, providing information to clients and augmenting the financial advisor, so advisors can assist with the more difficult problems while AI handles general questions and improves the productivity of both advisors and operations within these companies. So, I’m very optimistic.

I think we will deal with the risks and the opportunities. I also think the SEC providing this watchdog process for the industry is really good, because it helps clients understand that there is an agency out there watching to make sure these technologies are used appropriately. So I don’t think there’s any going back. It’s here, and it’s here to stay. Over time we’ll deal with the opportunities and the risks, and we’ll get into this some more as we talk about more of the opportunities in the industry. But Casey, going back to this initiative, how is the industry responding? What were some of the proposals or concerns that folks came back with on the SEC’s approach?

Casey Dougherty:  As I referenced, the comment period ended in October of last year, so this isn’t new, but the industry pushed back strongly. The primary concern, as I’d frame it, seems to be the identification and elimination of conflicts, not just their disclosure. Elimination of a conflict, especially within tools that are complex, is quite difficult. I think the SEC’s proposal talked about A/B testing, control groups and the like, and in the real world that’s sometimes difficult to do, especially if you’re on the leading edge of technology. Commenters suggested it would be difficult enough that it might potentially stop them from using these technologies in an SEC-regulated environment, putting US investors at a disadvantage. They also suggested that tools like these can actually drive down costs for investors, and that preventing their use could continue to push the industry towards consolidation: a few large players that can afford to use the tools, and everybody else sort of on the outside.
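To make the A/B-testing idea concrete, here is a minimal, hypothetical sketch of one way a firm might screen a recommendation tool for a firm-favoring tilt: compare how often the tool recommends proprietary products against a neutral control group and flag a statistically significant difference. The counts, the 1.96 threshold and the overall approach are illustrative assumptions, not the SEC’s prescribed method.

```python
# A hypothetical sketch of A/B-style conflict screening: compare the rate at
# which an AI recommendation tool selects the firm's proprietary products
# against a control group served by a neutral baseline, and flag a
# statistically significant tilt. All counts and thresholds are invented.
from math import sqrt

def two_proportion_z(hits_a: int, n_a: int, hits_b: int, n_b: int) -> float:
    """Two-proportion z-statistic for proprietary-recommendation rates."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: the AI tool recommended a proprietary fund in 180 of
# 1,000 sessions; a neutral baseline did so in 120 of 1,000 sessions.
z = two_proportion_z(180, 1_000, 120, 1_000)
verdict = "flag for conflict review" if z > 1.96 else "no significant tilt"
print(f"z = {z:.2f}: {verdict}")
```

Even this toy version hints at why commenters called the requirement hard: a complex tool has many outputs to test, and a neutral baseline is not always available.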

Dan Garrett:  Yeah, that’s a good point.

Bryan Jacobsen:  You know, Casey, that is a great point, and something that goes towards my optimism on the direction of this. In compliance, as we all know, staffing tends to be a four-letter word. It’s very difficult to get staff; you really need to go to any number of lengths to justify an addition. Because of that, many compliance departments run fairly lean on resources, and what this does is somewhat level the playing field. If there are good AI compliance tools out there, then all of a sudden these smaller firms can have fairly sophisticated tools and systems in comparison to larger firms with much more money in the bank. So, I do think there’s a lot of potential here to really help the overall compliance industry go forward.

Dan Garrett:  Yeah, I completely agree with that. It seems like firms are very concerned about making sure this technology is available broadly to everyone. We just saw the release of Grok, X’s AI system, into open source, making a very large language model free for anyone to download, and we’ll see more of that. I think it’s important to understand that this industry is evolving with the recognition that this technology is very disruptive and that it should be open and available to everyone. Casey, this week news came out of two firms being fined by the SEC, and I wanted to talk about that a little bit. It looks like the SEC is not waiting to finalize this rule before it starts addressing some of these issues. Can you speak about that, and then let’s talk about AI washing a little bit?

Casey Dougherty:  Yes. I think, Dan, that AI washing is the reason these firms were dinged. And you’re right, the SEC is concerned, although, to be clear, there’s no final rule. This was a proposed rule; they received comments, but nothing’s finalized, and we have no timeline for finalization of anything. But the SEC doesn’t seem to be waiting. Around the same time it launched the proposed rule, it also launched an AI-related sweep, asking industry participants about their use of AI models and techniques, for information on the data sets they’re using to populate these tools, and for information on incidents where AI use didn’t work as anticipated or caused regulatory, ethical or legal issues. In some cases they actually asked for the WSPs related to the tools.

And then, as Dan alluded to, enforcement – specifically AI washing. I’ll pause here: AI washing is a lot like greenwashing. Greenwashing is making environmental claims, good or bad, about something when you don’t have knowledge of what the underlying thing really is. We can’t exaggerate in this industry, especially in marketing or advertising. AI washing is the same thing: you need to understand the tool and what it does, and then, if you’re going to market it, be accurate about what you say. Make sure your people are accurate about what they say. So, the fact that they’re enforcing now tells us the SEC is looking to leverage existing industry rules in the absence of a final regulation. I think back to Reg BI – they cited that it requires identification and mitigation of conflicts of interest.

Well, if AI potentially creates a conflict of interest, then you probably should be considering that you need to understand the dataset used to populate your AI tool, and the conflicts inherent in the use of the tool’s output. So anyway, the SEC has tools, even without a new rule. I also think of reasonably available alternatives; this was referenced as well. If you have built-in bias that doesn’t let you look at reasonably available alternatives under Reg BI, well, we’re already seeing firms being dinged for that, AI or not. So that jumps out to me as well. Bryan, Dan, thoughts on that?

Bryan Jacobsen:  First of all, I do agree. Ultimately, the game has not changed as far as what firms are required to do. If we’re making recommendations, we still have to document them appropriately and be able to evidence why we made those recommendations. AI is just going to allow for maybe a little less human touch, but there still needs to be that human interaction. Dan, what do you think?

Dan Garrett:  Yeah, I agree. I think we could get into a lot of detail around how, technically, these things need to be disclosed. Transparency is really going to be key to gaining trust, but also to being compliant. It’s very important that we understand what the technology we’re using is doing, and that we’re clearly disclosing that to clients. But we touched on something I wanted to come back to, and that’s conflicts of interest. In light of this focus on conflicts of interest, Casey, can you talk just in general about what a conflict is, and then how this relates to past regulations, particularly the Advisers Act or the DOL’s approach to addressing these conflicts? Let’s discuss that in general.

Casey Dougherty:  Yeah, sorry for glossing over that for our listeners. I sometimes throw out these terms and I should define them first. So, conflict of interest: it depends a bit on who you ask what the definition is. I’m going to borrow a definition from the SEC staff bulletin titled “Standards of Conduct for Broker-Dealers and Investment Advisers: Conflicts of Interest.” Under Reg BI and the IA fiduciary standard, a conflict of interest is an interest that might incline a broker-dealer or investment adviser, consciously or unconsciously, to make a recommendation or render advice that is not disinterested. Now, as a matter of clarification, when I think on that for a moment, I don’t know if there’s much we do in life that’s not conflicted <laugh>. Certainly, as a broker or an RIA, if I recommend that a prospect join my firm, and I or my firm will receive some compensation or prestige or just about anything, that’s a conflict. So I think part of this is understanding how broad this potentially is and having a systematic approach to it. Now I want to visit some ethical and compliance considerations, at least as they relate to implementing surveillance technology. Bryan, I know you’ve successfully implemented surveillance technology in the past. How can similar technologies be leveraged to address the compliance requirements posed under the SEC’s new AI initiative?

Bryan Jacobsen:  Yep, great question. Let me start off by saying I was recently at a fairly large conference, and like most conferences, there was a room full of vendors selling their surveillance or other compliance-related tools. I would say that almost every single one I went to had some kind of blurb about using AI for something. And when you really boiled it down, they weren’t using AI; they were maybe making their search function a little catchier, but there was no real AI. The reason I say that is because I think, from an industry standpoint, the tools we have available have a long way to go before we can really say we’re using AI within our surveillance programs.

So, I would certainly be skeptical and challenge any vendors that present you with AI options; just make sure they’re truly using AI and not just reconfiguring their search functionality. That being said, when it comes to AI and surveillance, I think there’s a ton of potential positive applications to look forward to, and it’s probably just a matter of time – I would even say within the next few years we’ll start seeing some of this. AI definitely has the ability to do things like pattern recognition and transaction monitoring, looking at reams of data. Instead of having an analyst look at each transaction and try to understand how it fits holistically, all of a sudden this can be printed off in a nice, short summary, and you can focus the analyst’s attention on areas of true risk. And then there’s due diligence.

This could be huge. Like many of you, I’ve sat on product due diligence committees, and it’s always a somewhat challenging committee because you’re presented with a ton of data and not much time to really analyze it, and then you’re ultimately asked to vote on whether or not that product should be offered to all of the retail customers. Now, with AI, you have a tool that could potentially analyze all of that data, summarize it, and, in a very short, concise format, lay out all of the potential pitfalls, the pros, the cons, all of that. So, there’s a ton of applications to look forward to there.
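As a concrete illustration of the transaction-monitoring idea Bryan describes, here is a minimal sketch using scikit-learn’s IsolationForest to surface unusual transactions for an analyst. The features and data are entirely made up; real surveillance systems are far more involved.

```python
# An illustrative sketch of AI-assisted transaction monitoring: an
# IsolationForest flags outlier transactions so an analyst can focus on
# areas of true risk. Features and data are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-transaction features: notional amount, account's trades
# per day, and deviation from the account's usual product mix.
rng = np.random.default_rng(seed=7)
typical = rng.normal(loc=[5_000, 2.0, 0.1], scale=[2_000, 1.0, 0.05], size=(500, 3))
unusual = np.array([[250_000, 40.0, 0.9]])        # one plainly anomalous trade
transactions = np.vstack([typical, unusual])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)           # -1 = anomaly, 1 = normal

for idx in np.where(labels == -1)[0]:
    amount, rate, drift = transactions[idx]
    print(f"Transaction {idx}: amount={amount:,.0f}, trades/day={rate:.1f}, "
          f"mix drift={drift:.2f} -> route to analyst")
```

The design point is the one Bryan makes: the model does the sifting, and the human analyst reviews the short list it produces.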

And then there’s staffing, the four-letter word: compliance automation is going to be huge. Compliance staff is a needed and valuable resource. Instead of having them do what I would call the grunt work of moving things from point A to point B, all of a sudden they’re able to actually do the analysis and really look at things deeply, because the initial work of gathering the data and presenting it in a way that’s easy to review has already been done. So, I think there’s a lot of good stuff there. Now, with all that being said, I do think there are potential concerns. When we first started off, we talked about the SEC’s concerns, and one of the things I said is that the SEC is doing its job.

They’re trying to put guardrails around any new technology that comes out, to make sure firms are putting in the appropriate restrictions, especially given the speed and nature of AI. There’s a ton of stuff there, such as privacy concerns, and bias and discrimination are a huge thing. Keep in mind that, at the end of the day, AI is not intelligent in the sense that the computer is doing the thinking for you. It’s still based on an algorithm, and that algorithm is still programmed by a human being. A human being is naturally going to have their own biases, and if those are programmed in, they could certainly result in the AI being biased and discriminatory. So there’s a lot of stuff there.

There’s also a lot of concern around the lack of transparency. Because the algorithms themselves are so complex, it’s very difficult for most people to really grasp the algorithm behind the AI, and there are a lot of challenges there. The last thing I would say, and I think this is where the SEC is also coming from, is just an overall over-reliance on technology. Like any other technology tool we’ve had in the last 25 or 30 years, AI is a tool; it should not be the answer. I would be very skeptical of any firm that said it was going to replace all compliance and supervision with AI, because once you do that, I think you’re just destined for failure. So, a lot of challenges there. Casey or Dan, any thoughts on your end?

Casey Dougherty:  From my perspective, as you’re chatting about pattern recognition, it reminds me of one of the challenges I’ve seen frequently at firms regarding, for instance, risk profiles. We would always try to collect risk profiles from clients or prospects: how do you invest a portfolio in a manner that maximizes potential rates of return, but where the client doesn’t panic when the market drops in a predictable manner? It was always a bit of a guessing game, and you always felt a little bad when a client states that they have a certain risk tolerance and, in retrospect, it turns out they didn’t. I think one thing that’s great about pattern recognition, especially if it’s rolled out at scale, is that you can look at how a client actually behaves and prospectively adjust their portfolio based on that observed risk tolerance, probably better than many of our existing tools permit. So, I do see some positives there. Dan, did you have thoughts on these things?
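As a sketch of how that behavior-based adjustment might work, consider inferring a client’s realized risk tolerance from whether they sold during past drawdowns, then nudging the stated equity target toward observed behavior. The mapping and the cap below are hypothetical assumptions for illustration, not a production methodology.

```python
# A hypothetical sketch of behavior-based risk profiling: infer how well a
# client actually tolerated past drawdowns, then nudge the stated equity
# target toward that observed tolerance. The 0.9 mapping and the 15-point
# cap are invented for illustration.
def realized_tolerance(drawdowns: list[float], sold_during: list[bool]) -> float:
    """Fraction of market drawdowns the client rode out without selling."""
    held = sum(1 for _, sold in zip(drawdowns, sold_during) if not sold)
    return held / len(drawdowns)

def adjusted_equity_target(stated: float, tolerance: float, cap: float = 0.15) -> float:
    """Shift the stated equity weight toward behavior, capped at +/- cap."""
    behavioral = 0.9 * tolerance                    # hypothetical mapping
    shift = max(-cap, min(cap, behavioral - stated))
    return stated + shift

# Client stated a 70% equity target but sold in two of three past drawdowns.
tol = realized_tolerance([-0.12, -0.20, -0.30], [True, True, False])
print(f"{adjusted_equity_target(0.70, tol):.2f}")   # pulls the target down to 0.55
```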

Dan Garrett:  Yeah, I do. Not only in compliance, but I think there are lots of opportunities within operations as well for some of these types of tools. But, Bryan, going back to your point, you need to know your AI, and it’s not just understanding how it works at a high level; it’s really the details. I think documentation is key, making sure that you’ve got these things in a diary <laugh> and available to the SEC when they come knocking. So, if you’re developing your own AI, it’s very important that you think about the data it’s using and the decisions it is making, document that, and have it available. And most importantly, ethics is not just a buzzword. It needs to be embedded in the consideration of AI from the get-go. That means bias checks, ensuring privacy, and keeping transparency a top priority.

And again, all of this needs to be documented. Most importantly, and especially right now, education is key – not just for the developers creating these systems, but for compliance, legal, the business and executives, all of whom should stay educated on AI, its development, and how it’s being used within the firm. And then there’s effectively communicating this to clients. I don’t think it’s in anyone’s best interest to avoid talking about the fact that there’s AI under the hood. I think that leads to concern, and it’s much better that we take the opportunity to be very explicit when AI is being used, so that if a client is talking to a chatbot, they know it. I recently called my bank and was on a phone call where it was AI listening to me and responding, and I knew it was because the voice wasn’t very good.

So, we’ve all been on those kinds of robocalls, but with today’s generation of AI, it’s going to become increasingly difficult for the average person to distinguish between what is fake and what is real. I think it’s very important that we disclose up front and not try to deceive anybody with what they’re hearing or seeing. It also gives us a great opportunity to educate the public, as they’re using these tools, about how helpful the tools can be, and to use that opportunity within the chat itself to encourage their use and promote the good that AI can bring.

Bryan Jacobsen:  Dan, can I ask a quick question? Maybe this is more directed towards Casey, but what role do you think compliance, or the Chief Compliance Officer, should play at firms that do roll out these tools? Whether it’s a compliance tool or maybe a trading tool, but in general, tools that utilize AI. What is the role of the CCO or compliance?

Casey Dougherty:  Given the regulatory risk sensitivity that we’re seeing, the enforcement we’re currently seeing is based on a lack of understanding: not understanding the conflicts of interest, not understanding how you are marketing the tool, what it does and what it doesn’t do, and its limitations. So, I think part of what a CCO should be doing is recognizing that AI is here to stay, and creating protocols around how to screen or vet a tool. It’s probably premature for somebody to jump in and do A/B testing, especially this early in a tool’s adoption. But under the existing rule set, you could disclose that your tool may produce unpredictable results, and verify with a human being when something seems odd. You can do things as a Compliance Officer under existing regulation to try to shield and protect your firm. And I don’t think you have to pause adoption of AI, or even that you should. I think there are enough benefits that you should consider it.

Dan Garrett:  Yeah, I’ll add to that. We’ve been talking about proprietary systems, but firms also need to get a handle on the third-party systems they’re using. It’s not good enough that somebody just went out and bought and integrated an application from a third party that’s got some AI in it. The firm needs to take responsibility for understanding the AI that vendor is providing, understand how it works, and disclose it. Casey, you mentioned WSPs earlier; these things need to be documented, and the Compliance Officer needs to be on top of that. I also think about annual reviews. We’re doing annual reviews around technology now, but having an AI review as part of that is going to be critical too: really examining what new technologies we’ve brought into the firm, and what enhancements to existing technology might now have AI components, again documenting what they do and how they work, and ensuring there isn’t bias or a conflict of interest.

Bryan Jacobsen:  Yep, and I completely agree with both of you. This feels similar to cybersecurity, where over the last few years I’ve seen more and more Compliance Officers defer to the CTO or the ISO. As a compliance profession, we have to take an active role in cybersecurity. Granted, it’s driven by technology, and that’s probably above most people’s understanding, certainly above mine. But we need to understand what is happening in layman’s terms so that we can, at a minimum, describe it to the regulators and explain what’s going on, because most of the regulators we deal with are certainly not going to be tech people. So, when it comes to firms using AI tools, we absolutely need to get in there and do the review. We need to make sure we understand the algorithm, at least to the point that we can explain it to a regulator and have an understanding of what exactly it’s searching for. So, I definitely see compliance as a very necessary part of any due diligence that goes with AI.

Casey Dougherty:  Now, gentlemen, as we sort of talked about before, this is a topic that’s probably going to take a couple of discussions. We’ve talked today about the ethics, the overview of the rule, and the industry response. During our next podcast, let’s talk a little more about the finer aspects: best practices, transparency, recordkeeping, client interactions, and future impacts. What do we see, if we pull out our crystal balls, as to where this industry is going? How should firms be investing moving forward? Are there any last comments before we adjourn this podcast for today and pick this up in the next one?

Dan Garrett:  There’s a lot going on with FINRA, but I think as you said, let’s focus on just practice management and talk about some of the positive things that we’re seeing in the industry.

Bryan Jacobsen:  Yep. I would completely agree. I think that would be a great second podcast.

Bob Mooney:  Thanks everyone for listening. If you’d like to learn more about our experts and how Oyster can help your firm, visit our website at www.oysterllc.com. If you like what you heard today, follow us on whatever platform you listen to and give us a review. Reviews make it easier for people to find us. Have a great day.

About The Podcast Speakers

Casey Dougherty

Casey Dougherty’s 20 years of experience includes expertise in Compliance and Legal supervision in a shared-services environment, executing broker-dealer to broker-dealer joint work and succession arrangements, and other marketing arrangements covering private placement life insurance, VUL and annuity sales.


Dan Garrett

Dan Garrett provides general business leadership, technology strategy and execution for RIA, Broker-Dealer, and Clearing firms, including spearheading digital transformations, optimizing operations, navigating complex business transitions, and building software development teams and proprietary applications.


Bryan Jacobsen

Bryan’s role as a CCO for dual registered broker-dealer / RIAs, clearing firms and crypto-based entities enables him to apply his FinTech, financial, crypto, blockchain, and regulatory knowledge when providing practical compliance solutions.
