Health-E Law Podcast Ep. 17
Navigating AI: Governance and Innovation at UCSD Health with Ron Skillens of UCSD Health
Thank you for downloading this transcript.
Listen to the podcast released April 21, 2025, here:
https://www.sheppardmullin.com/multimedia-631
Welcome to Health-e Law, Sheppard Mullin's podcast exploring the fascinating health tech topics and trends of the day. In this episode, Ron Skillens, Chief Compliance and Privacy Officer at UC San Diego Health, joins host and Sheppard Mullin partner Michael Orlando to discuss the transformative potential of AI in healthcare and the importance of balancing innovation with compliance.
About Ron Skillens
Ron Skillens is the chief compliance and privacy officer for UC San Diego Health. He is responsible for the mitigation of compliance and regulatory risks, pursuing leading practices and ensuring that all Health compliance and privacy activities are coordinated with the appropriate leaders.
In addition, Mr. Skillens provides health care research compliance support for UC San Diego Health Sciences and is responsible for administering the health system’s policy and procedure process. He has more than 30 years of compliance, audit and risk management experience working with diverse senior leadership, physicians and boards, bringing to this position a demonstrated commitment to compliance and collaboration.
About Michael Orlando
Michael Orlando is a corporate and intellectual property transactions partner in Sheppard Mullin’s San Diego (Del Mar) office. He is Co-Team Leader of the firm's Technology Transactions Team, and a member of the Life Sciences and Digital Health teams. He founded a software-as-a-service (SaaS) business prior to attending law school, and worked at a publicly-traded biotechnology company on an in-house secondment, and uses that experience in bringing a practical, business-oriented approach to his engagements.
For over 20 years, Michael has been assisting innovators, cutting-edge technology companies and other organizations develop, acquire, sell, and commercialize intellectual property assets, including technology licensing, commercial agreements, strategic partnerships, research, development and collaboration contracts, manufacturing and supply arrangements, outsourcing, and corporate transactions.
Transcript:
Michael Orlando:
Welcome to Health-e Law. I'm Mike Orlando, a partner at Sheppard Mullin, and I have the pleasure of hosting today's episode. And I'm pleased to be joined by Ron Skillens, the Chief Compliance and Privacy Officer for UC San Diego Health. Ron has nearly 30 years of compliance and risk management experience in a variety of settings. He regularly advises senior leadership, physicians, and boards on compliance and risk management matters, including issues like AI and healthcare. Thanks for joining us today, Ron.
Ron Skillens:
Great. Good to be here, Michael. Thank you.
Michael Orlando:
So Ron, today we're going to talk about the use of AI in healthcare and using AI governance for risk management. So, let's start off by talking about AI in healthcare generally to get us on the subject. AI has the potential to revolutionize patient care and hospital operations. How do you envision AI transforming patient care and hospital operations in the next five years?
Ron Skillens:
Yeah, that's a big question. For those that are worried that AI is going to replace doctors and nurses anytime soon, I don't think that's a worry. I'm a movie buff, so I think less about Terminator and more of Jarvis from Iron Man, if you're familiar with Jarvis, right? So, Jarvis was helpful to Tony Stark and all that kind of thing. And so, healthcare in my experience has always been behind the curve as it relates to adopting technology, for a lot of different regulatory and other reasons. But on the patient care side in particular, there are so many exciting areas of innovation, specifically with AI, starting with AI scribing, where physicians can get their notes documented by AI, ultimately building that out to providers, which is a tremendous wellness uplift for them. Responding to patient messages through AI, we're actually piloting that in our organization currently. Dispensing medications, up in Northern California there are actually robots that go down hallways and dispense medications, because one of the compliance concerns is medication reconciliation, specifically for controlled substances. And so, even though that's not widespread, that is starting to be piloted as well.
And then on the hospital operations side, I would say just reducing wait times in the ER and predicting patient flow. All of these, I think, are awesome innovations that AI has the ability to unlock. However, in my experience, being in compliance, I'm usually called a buzzkill to innovation. And so when you look at the regulatory and legal landscape of these innovations, oftentimes the regulations don't match the innovation, and oftentimes they lag the innovation. And so, one of the things that we're trying to do is embrace innovation and lead the way, as we like to say, but also do so in a compliant and safe way.
Michael Orlando:
And Ron, you're not the only one that's called a buzzkill. I'm used to that in my career as well, and it's something that you do when you're trying to manage risk for people. So, I'm sure you know better than anyone that there are a number of these risks using AI, from data privacy and security, to the safety and ethical concerns. And I'm sure some listeners to this may be wondering, why do I need to have an AI governance structure for helping with managing the AI? What are your thoughts on the importance of an AI governance structure for your use at UC San Diego Health?
Ron Skillens:
Well, I think it helps us think about how we're going to use AI in the most creative way, but also in a compliant way, as I mentioned. There's so much talk about AI in the marketplace today that I think having an organized governance structure, and it doesn't have to be complicated, and we can talk about what ours looks like here, is important for understanding appropriate use. I saw a statistic recently that said that many workers today are already using ChatGPT and other generative AI tools in the workplace, whether it's approved or not. That was the case with social media years ago, in my opinion. So it's not that you're blocking the use of AI, but it's, how can you use it appropriately? Can you put some guardrails and policies around it? And particularly in healthcare, with health data being as valuable and as sensitive as it is, it's extremely important to have those guardrails around it, and I believe that's what AI governance allows you to do.
Michael Orlando:
You've mentioned the structure you're using and the approaches that you use, and I'm sure there are different approaches and structures to AI governance in healthcare organizations that you considered. What is UCSD's approach to AI governance?
Ron Skillens:
Yeah, no, absolutely. UC San Diego is part of the University of California system. Within UC, there are five medical centers and then many more campuses, 11 campuses overall, as part of the UC system.
I believe three years ago, relatively recently, the Office of the President, which is at the system level, created what's called the Presidential Working Group on AI. So that's at the system level. For those listening that are part of hospital systems, it won't necessarily be this structure because you're not part of a university, but whether it be your board or whatever the case may be, your governing body would create an overall structure. And so, we have that working group on AI at the system level, and then there are subcommittees that flow up to that working group. There's a health subcommittee, there's an HR subcommittee, because we're part of a campus there's a policing and law enforcement subcommittee, and then there's a student experience subcommittee, again, because we're part of a university.
Drilling down on that from a health perspective at UCSD, we have a health AI committee specifically for our campus, and I sit on that committee, as well as our Chief Health AI Officer and a number of others. And in that committee, we talk about what I just described a moment ago, which is appropriate uses of AI, new innovations and technologies, because particularly here in Southern California, there's a lot of genomics companies and others that are really eager to do research and other kinds of things with health data so they can unlock discoveries with AI. But one of the challenges from a legal and regulatory standpoint is that when organizations train AI on your health data, that is a use from a HIPAA perspective, and then you go into a whole area of regulation associated with that use.
And so health AI is not, I'll just say, I'm biased in that I talk about the risk and compliance components. But there's also the flip side of that, which is the innovation components that I spoke about on your first question, looking at all the opportunities it has. And so, one of the outputs from the health AI committee is a generative AI policy on appropriate use, as I mentioned before. And we're just kind of getting started, this committee is about a year old or so, but that's how we structure it overall.
Again, for the listeners, I would say, I think it's just important to think about how you would organize, whether you call it a governance committee or a committee of some kind, I call it an AI council, that you would put some of these things in place.
Michael Orlando:
So Ron, can I ask you a follow-up question on that? How do you coordinate between the different stakeholders within the organization, with respect to these uses of AI and the balance of innovation versus risk?
Ron Skillens:
Yeah, that's a great question, and it's often referred to as herding cats in a lot of ways, because in academics there's a strong research focus, as you would expect at an academic medical center. And so, what we have set up is another committee, called the Health Data Oversight Committee. I'm the chair of that committee, which includes other stakeholders as well. And we have research as well as commercial projects being proposed, uses of health data that go outside the institution, that are evaluated through various lenses of appropriate use, but also public benefit and a number of other criteria that we have.
And so when you have a clinical use, let's say as an example, where you have a researcher that wants to use AI to address a certain area in cancer, and oncology in particular is a really growing area for this, they would submit an application. In that committee we would then evaluate what type of information would be shared, because as I mentioned, with AI companies in particular, the algorithm trains on that data, and so that is a use according to HIPAA. We also ask, will our data be used for commercial purposes, for a for-profit entity that potentially may not have a benefit to us as a university? And so all of those are considered, and if it meets all of those criteria, in addition to, if it's research, going through our institutional review board, the IRB, and going through the contracting process, then it would be cleared for its use case.
I think one of the biggest challenges, whether or not you're a university system, when you have a system of evaluating use cases, particularly clinical ones, is to think through what's in the best interest, we're in healthcare, of the patient, right? Not what's in the best interest of the company, or how you can make the most money off of something. It's, what's in the best interest of the patient? That's usually our guiding lens for those evaluations. And then also, if it was your data, how would you want it to be used? Right? And then from a standpoint of privacy, there are levels of identification of the data that's shared: it could be de-identified, it could be identifiable, or it could be a limited data set. It could range between those three.
And for those of you that are familiar with privacy, I won't geek out on this, but there are different requirements in terms of what's considered PHI and all that kind of thing. So those are all the things that are considered, in addition to some of the legal considerations. I will say, just for the operations people listening to a compliance guy talk about this, that most of the applications we see are approved rather than denied. Very few are denied, because most are able to work through that process.
Michael Orlando:
Yeah, I mean, that's really great to hear about how you're managing that process, and putting that in place and ensuring that it's effective sounds like a daunting task to me.
Ron Skillens:
Very.
Michael Orlando:
I mean, listeners are probably wondering about your experience with this AI governance so far. And you mentioned a challenge, and I'd be curious to hear, what are some of the other bigger challenges you've faced and some lessons learned so far with your AI governance structure?
Ron Skillens:
Yeah, I think one of the biggest challenges is just getting your hands around it, because AI is such a buzzword in our society today. I mean, everything that you hear in the news and the media is AI related in some form or fashion. And I think for me, it's communicating the fact that every individual is not just an employee of the organization, but also a consumer. In their personal life, they encounter, and as I mentioned, probably are using ChatGPT or some other types of AI in their current environment. And so, the biggest challenge with that in mind is people sometimes don't realize the risk of using AI when they move it from the personal environment to the work environment. So it's about educating them that, yes, we're not trying to shut off your ability to use it, but we have to use it in an appropriate way.
So for example, our legal department determined that we didn't want to use the AI companion in Zoom when legal is on the line, because it may jeopardize the privilege of a conversation, and so that was a line that was drawn, as an example of a use of AI. There were other tools, I'll say in the student experience, because we're a university, there was an AI tool to help students find services and ask questions. We have something called Triton GPT, a custom GPT in the campus setting that allows them to do that. Well, that use also had to go through the evaluation process I described earlier, because student data has its own set of rules and regulations around it with FERPA and other kinds of things, and so we had to go through that process as well.
So while it is, to your point, challenging, there are ways of getting to a yes. But the biggest challenge, I think, is just being aware of all the things that people are doing, because oftentimes it just doesn't even occur to people that this is something they need to talk to somebody about.
Michael Orlando:
I'd like to talk about the advancements in AI and the regulatory environment around the use of AI, and the fact that it's moving quickly, which makes it difficult to stay ahead of emerging trends and best practices. So, how will you keep the board and the staff updated on the latest advancements, emerging trends, and best practices in managing AI at UC San Diego Health?
Ron Skillens:
Yeah, it's a great question. It's hard, to be honest with you. I think there's a multi-channel approach. The first thing is just talking to our peers. I'm part of a number of peer network groups, healthcare related and compliance related, so I talk to compliance officers, not just within the UC system but across the country, about what they're doing and similar AI governance questions that come up. I also think a large part of this is education, getting yourself educated about the fundamentals of AI. One of the things our organization pushed out is some basic AI training and education around, what is generative AI? What's the difference between generative AI and robotic process automation and machine learning and the various aspects of AI that have been in our lives for a long time? It wasn't until 2023, when ChatGPT kind of got on the scene, that it became front and center for people. So, education is a key aspect.
And on that point of education, I would say for those listening that podcasts like this one are a great way of getting educated. There are others that I listen to as well. Courses, and government communications that come out from the various agencies about how they view certain regulations and rules around the use of technology. Now, I will say in our current environment, there's a lot of vagueness coming out of the government right now, so there's not a lot of clarity specifically in this area. Many people have seen on the news that there are going to be great investments in AI and AI infrastructure, and I think that's a good thing, generally speaking. But I think it's more about us becoming aware of what's happening in our industry, in this case healthcare, and how that can impact what we do.
Michael Orlando:
Yeah. Finally, I'd like to ask you, what's your last word of advice that you would like to give to our listeners today? What's a message you'd like to leave with them?
Ron Skillens:
I would say that AI today is the most transformational technology, I think, of our lifetime. Even coming from a risk and compliance person, and I understand the risks of this, the legal and regulatory risks, it is also an opportunity to advance and get things done in a more efficient and effective way, particularly in the most important industry, in my opinion, which is healthcare.
And so I would say these are my three or four takeaways for the listeners. One is educate yourself, as I mentioned a moment ago: find ways of listening to podcasts, going to conferences, and doing your own due diligence about getting up to speed on what AI is. Number two, understand your organization's policies. I'd be remiss if I didn't say that given my position. So, what are your organization's policies on the use of AI, if it has any? If not, ask. Ask your legal counsel, ask your compliance officer. Number three would be, experiment with it yourself personally. If you haven't used ChatGPT, open up a free account, learn how it works, use it. I know I've used it for recipes and different things, and my son uses it.
And then number four, going back to the point of this conversation, is start an AI committee of interested parties in your organization, or think about it if you don't have one already. It doesn't have to be as formal as what I described here today, but it's something to get started to think through some of these initial foundational questions, because I assure you that your employees are already using AI today. It's not a matter of it not being in your environment; it already is. So, help give them tools to use it appropriately.
Michael Orlando:
Ron, those are some great words of advice, some very practical advice. And I guarantee you, I've also seen those stats about the number of users using ChatGPT in the workplace without authorization. And I assure you that there are others listening to this that are grappling with the same issues.
So, thank you again, Ron. I also wanted to let the listeners know that Ron provided an AI governance checklist, and it's available in the show notes for you to see and access. So, please go to the show notes if you want to see it; it's a really helpful resource. Thank you again, Ron.
Ron Skillens:
Yeah, thank you. Take care.
* * *
Thank you for listening! Don't forget to SUBSCRIBE to the show to receive new episodes delivered straight to your podcast player every month.
If you enjoyed this episode, please help us get the word out about this podcast. Rate and Review this show on Apple Podcasts, Amazon Music, or Spotify. It helps other listeners find this show.
This podcast is for informational and educational purposes only. It is not to be construed as legal advice specific to your circumstances. If you need help with any legal matter, be sure to consult with an attorney regarding your specific needs.