Health-e Law Podcast Ep. 1
Building Trust: Transparency in Data Collection and Use with Dr. Laura Tully
Thank you for downloading this transcript.
Listen to the podcast released December 7, 2023, here: https://www.sheppardmullin.com/multimedia-536
Welcome to Health-e Law, Sheppard Mullin's podcast addressing the fascinating health-tech topics and trends of the day. Our digital health legal team, alongside brilliant experts and thought leaders, shares how innovations can solve some of healthcare’s (and maybe the world’s) biggest problems…if properly navigated.
In our inaugural episode, Dr. Laura Tully, Vice President of Clinical Services at ChatOwl, a virtual mental wellness company, joins our hosts to discuss the importance of transparency in health-tech’s data collection and use, including how it can drive engagement and outcomes.
About Dr. Laura Tully
Dr. Laura Tully is a Harvard-trained, internationally recognized subject matter expert in digital health solutions for serious mental illness. Her specific expertise includes ethical data use, trauma-informed suicide risk management, and digital wellness products for marginalized communities. As ChatOwl, Inc.'s Vice President of Clinical Services, she spearheads the development of AI-driven clinical interventions accessible to all adults facing mental health challenges.
Before joining ChatOwl, Dr. Tully served at UC Davis Health, first as Director of Clinical Training in Early Psychosis Programs, then as Assistant Director of the California Early Psychosis Training and Technical Assistance Program (EP TTA), and finally as Associate Professor in Psychiatry. Since joining ChatOwl, she has maintained an association with UC Davis as an Adjunct Associate Professor in the School of Medicine.
About Sara Shanti
A partner in the Corporate and Securities Practice Group in Sheppard Mullin's Chicago office and co-chair of its Digital Health Team, Sara’s practice sits at the forefront of healthcare technology by providing practical counsel on novel innovation and complex data privacy matters. Using her medical research background and HHS experience, Sara advises providers, payors, start-ups, technology companies, and their investors and stakeholders on digital healthcare and regulatory compliance matters, including artificial intelligence (AI), augmented and virtual reality (AR/VR), gamification, implantable and wearable devices, and telehealth.
At the cutting edge of advising on "data as an asset" programming, Sara's practice supports investment in innovation and access to care initiatives, including mergers and acquisitions involving crucial, high-stakes, and sensitive data, medical and wellness devices, and web-based applications and care.
About Phil Kim
A partner in the Corporate and Securities Practice Group in Sheppard Mullin's Dallas office and co-chair of its Digital Health Team, Phil Kim has a number of clients in digital health. He has assisted multinational technology companies entering the digital health space with various service and collaboration agreements for their wearable technology, along with global digital health companies bolstering their platform in the behavioral health space. He also assists public medical device, biotechnology, and pharmaceutical companies, as well as the investment banks that serve as underwriters in public securities offerings for those companies.
Phil also assists various healthcare companies on transactional and regulatory matters. He counsels healthcare systems, hospitals, ambulatory surgery centers, physician groups, home health providers, and other healthcare companies on the buy- and sell-side of mergers and acquisitions, joint ventures, and operational matters, which include regulatory, licensure, contractual, and administrative issues. Phil regularly advises clients on matters related to healthcare compliance, including liability exposure, the Stark law, anti-kickback statutes, and HIPAA/HITECH privacy issues. He also provides counsel on state and federal laws, business structuring and formation, employment issues, and matters involving state and federal government agencies.
Transcript:
Phil Kim:
Today's episode is on Transparency in Data Collection and Use.
Sara Shanti:
I'm Sara.
Phil Kim:
And I'm Phil.
Sara Shanti:
And we're your hosts today and want to thank you all for joining us and listening.
Phil Kim:
We're really pleased to have Dr. Laura Tully with us today. She is the Vice President of Clinical Services at ChatOwl, which is a cutting-edge virtual therapy platform dedicated to providing and promoting access to behavioral healthcare services. Dr. Tully is a Harvard-trained, internationally recognized subject matter expert in digital health solutions for serious mental illnesses. She has specific expertise in ethical data use, trauma-informed suicide risk management, and digital mental health products for marginalized communities, in addition to much more that she does on a daily basis. Dr. Tully also served as a professor at UC Davis Medical School's Department of Psychiatry, and now works with ChatOwl to promote mental health treatment services. Dr. Tully also goes just by Tully and she is probably the coolest person that you'll ever meet. And so thank you for joining us, Tully.
Dr. Laura Tully:
Thank you. What an intro.
Sara Shanti:
So before we start talking about this data-hungry world that we live in, Tully, and some of the ethical uses of data, can you just describe a little bit about what you have been working on?
Dr. Laura Tully:
The area that I'm most passionate about is how we communicate transparently what these digital health products are doing with your data as a user. When we think about putting products into the healthcare space, particularly mental healthcare, there's still a lot of stigma associated with mental health difficulties, and the information that you're sharing with your provider is very personal and can make you very vulnerable. If you're moving that mental healthcare into a digital space, how are we protecting that personal information? And then how do we tell the user that that's what we're doing, such that the user trusts us?
Most of the ways that we communicate about data privacy and data use are through these long documents, end-user license agreements, that are written in legalese where things are buried in the small print. And it can be really challenging for a consumer to spend time in that and to understand really what's happening. And this is true across all digital products, right? I'm sure we've all updated the operating system on our phones and had to agree to the terms and conditions, and you probably don't read them and you probably don't see what's happening to your data. And I think we have a responsibility in the healthcare space to make sure that users can understand those terms. And that means not relying on them reading those 20 pages; we have to communicate it differently.
Sara Shanti:
Can you talk a little bit, at a higher level, about what kinds of solutions and digital health products we're talking about that trigger some of these issues?
Dr. Laura Tully:
Yeah, so some examples of products where this matters might be a product where you are accessing mental health support. It could be through a chatbot, it could be through questionnaires about your symptoms. It could even be phone calls or text messages with real human providers that you're connecting with through a technology platform, video conferencing, for example. And when you use a technology platform, it is very common, and there are good reasons for this, for companies to use the data. It could be the actual words that you write. It could be the answers to those questionnaires. It's the conversations that you're having with that provider. Companies use that data to improve their product. And that makes sense, especially if you're in the artificial intelligence world, where you want to improve your AI model for understanding human speech. So you're going to use all that data that your customers give you to improve it, or you might want to make personalized recommendations based on the symptoms that the person is reporting: "Hey, you're saying you experienced symptoms A, B, and C. We know that people like you who have these symptoms have benefited from interventions X, Y, Z. Let's talk about it." And for both of those examples, either quality improvement of your AI model or personalized recommendations, the users have to agree to allow their data to be used in that way. Now of course, there is a third way that data can get used in the industry, which is selling it to third parties.
For example, if you are a company building an AI model, you might go and buy data, human conversation data, to train your model. And that data is coming from a company that a user has given that information to. And so with these three examples, often in these end-user license agreements, these terms of agreement, those examples are listed, and in order to use the product, you are agreeing to them happening.
Sara Shanti:
We're living in this data-hungry world where data has such value; it can be a solution, but it can also be a privacy or security issue, and every click you make is somehow tracked just by way of the Internet of Things. Can you speak a little bit to why everyone wants this data so much?
Dr. Laura Tully:
The healthcare industry at large, but very specifically the mental healthcare space, has realized that the more data we have about a person's experience of their mental health challenges and their journey towards wellbeing, however they define it, the better we can recognize patterns of change over time and identify what is actually contributing to that change. So I'll give an example. If we have data on a thousand people who experience symptoms of depression, and we have data that follows them over time, their symptom experiences, what's happening in their life, the treatments they're engaging in, the changes they're making, stressful life events, positive and negative, buying a new home, getting fired from your job, right, these things. If we have data on all of those things, we can then run these very powerful statistical models to identify what are some of the things that are driving mental health improvement and what are some of the things that are driving mental health problems developing or being maintained.
And there is great power in large data sets, because now you can see patterns in a way that perhaps in a smaller clinical trial, which has been the traditional way of doing it, you couldn't. And this data-hungry mission has come hand in hand with the development of these statistical models, right? Machine learning, intelligent algorithms for predicting people's behavior. And everybody has got very excited about how we can use this data to come up with things like digital biomarkers, or how we target a person's behavior using digital therapeutics based on the data they've given us. So I think that's the mission, and it's driven by a noble cause. We've had a hundred years of mental health research, billions of dollars invested through institutions like the National Institute of Mental Health. And we're still struggling to understand why one person develops depression and another person develops anxiety, and why one person with depression responds to treatment A and another person doesn't.
It's clear that we need this data to do a better job, to improve the way we're providing mental health services and improve the quality of those services and those treatments. But that requires people being willing to give us their data. And so far, we've mostly just assumed that they are okay with that. And this was not really an answered question. We, as a field on the academic side, have not really asked the question, "Do people want to share this data with us?" And if so, "How do they want to share it and what kind of control do they want over it?" And the industry certainly has not asked that question, because it directly contradicts commercial and business goals, right? Because if my users say they don't want to give me data, I can't really build my business based on that data. So "I sort of don't ask them" is what we've observed happening in the industry, or it's buried in those terms of agreement.
One of the things that we did at UC Davis was we ran a focus group study. And what that means is you get people from the populations or the groups that you're interested in serving, to come together and you ask them for their opinions on some of these things. And we asked them to tell us how they feel about data sharing, sharing their personal health data, specifically to a technology platform that then will use it to try to improve services, treatments, and outcomes. And the good news is that everybody, families, clients, and providers all agree and said they want to share relevant personal health data to make treatment better, both for themselves but also for all the other people that are experiencing these problems. But they were very clear that they didn't want it to be used for commercial purposes. They don't want it to be sold to third parties.
They want to make sure their data is protected with the very best security measures, and they wanted control over when and how the data was shared. So what that means is that we have a responsibility as technology companies or developers to have these very transparent data-sharing rules and to put the control of that data sharing in the hands of the consumer. They get to pick what they share, when they share it, and they get to change their mind, and maybe they get to choose the context. They are very clearly communicating that that's what they want. And this means that we as an industry have to do a better job of communicating what we're doing with people's data.
And that's something that we did as part of my work at UC Davis, and it led to a higher opt-in rate: 88% of the individuals who signed on to our Learning Health Care Network software chose to share their data at some level, based on the way we communicated data sharing. And it's something that we're working towards in the company that I work for now, ChatOwl, because we understand that creating trust right from the start is going to increase our collection of that data as a company. We want that data, it's going to make our product better, but we need our users to trust us. And so we need to be transparent at the beginning, and that's what we're working towards as well.
Sara Shanti:
What does retention look like, and how can you boost those numbers using the transparency that you just talked about, which had a lot of buy-in? How do you make that business case? What does the retention start to look like?
Dr. Laura Tully:
Well, first of all, I can tell you that the research on 30-day retention in mental health focused apps paints quite a dim picture. Some researchers led by Amit Baumel, part of John Kane's group, which drives a lot of this learning healthcare network work in serious mental illness here in the States, did a large scoping review of all available mental health apps and published it in 2019. So the picture is a little bit gloomy. By day seven, roughly 10% of users are still coming back to an app. There's a little bit of variation between whether it is a mindfulness meditation app versus something like very targeted support for a mental health problem, versus something more focused on happiness and wellbeing more generally. But on average, you're looking at 10% of users who sign up for your app still being there on day seven. And by the time you get to day 30, it's in the 2 to 3% range.
Sara Shanti:
So Phil, I think you and I are probably blown away a little bit here, because we see the industry, as Tully mentioned, constantly thinking the more data the better. But if that mass collection of data is not the best quality of data because there's mistrust, maybe it's not as valuable as the industry thinks, versus getting really, really high-quality data but maybe limiting the sharing a little bit. Tully, is that what the research is showing?
Dr. Laura Tully:
One of the primary problems we're trying to solve is engagement: just getting people to engage with the product and then stay with the product. So engagement and retention. If you aren't transparent and you don't create that trust, people aren't going to engage. They're going to drop out, they're not going to trust you. And so you lose that data point anyway. And then I think the other point that you're speaking to, Sara, is maybe they do engage, but maybe they don't tell you everything. Maybe they don't actually give the data that is needed to provide the very best care, because they don't trust you. And so for me, one of the key hypotheses that we need to test in industry is: if we do a better job of explaining data use and we put more control in the hands of the user, can we increase engagement, mediated by this increase in trust? Are people going to be more willing to use the product because they trust that the product cares about them and their data?
Phil Kim:
I'm sure you have an idea of what that transparency looks like in practice, as far as how you get patients to embrace it. We can talk about privacy policies and disclosures and other approaches, but as far as educating patients, giving them that power and ability to have more control over their data and making it more transparent, what do you think that might look like in practice?
Dr. Laura Tully:
Yeah, I think this is one of the central questions, right? I think when we think about how to communicate it and how to be transparent, it needs to be accessible. So we have to use accessible language. If we're going to use technical terms, we need to define them very clearly, and we need to be succinct, targeted, and it needs to be interesting. Reading the 20-page document is a very good way of solving insomnia for some people. These documents are long, they are dry. So how do you explain this in a way that is engaging and accessible?
And one of the ways we did this in this focus group study at UC Davis was we developed a whiteboard animation video that explained what data we were asking the person to give us, how that data would help them and the community, and then the ways that we were asking them to share it. "Can you share it with us as researchers at UC Davis? Will you share it with our partner researchers at UCSF? Are you willing to share it with the National Institutes of Health? And then if you're willing to share it there, are you okay with any researcher in this country being able to access it through the NIH's central database?"
So there are different levels of data sharing. This was complicated, and this is unique to developing this Learning Health Care Network in California. And this whiteboard video was very well received. It's imperfect, right? It's made by a bunch of academics using a third-party software tool. We did our very best, but we're not marketers, right? We're not graphic designers. I think it can be improved exponentially with some additional help, but the core concept of an engaging, visual, accessible medium that explains these things in a transparent way seemed to really work. We got good feedback from these focus groups when we presented it, and as I said, having it as part of the signup process for this digital solution for tracking clinical outcomes led to an 88% agreement rate to share with the country, to share with the NIH, which is huge. Now you have a large number of people saying, "Yes, I want you to use my data to make this healthcare system better for everybody experiencing the kinds of things I'm experiencing."
Phil Kim:
Thank you, Tully, for all your time and insight here today on this episode.
If you would like to review the studies Dr. Tully discussed, you can access them by clicking the links included in the description of this episode. That's it for us here at Health-e Law. We will see you all next time.
Additional Resources:
National Institute of Mental Health – Statistics on Mental Illness
National Library of Medicine – User Experience, Engagement, and Popularity in Mental Health Apps
Objective User Engagement with Mental Health Apps – Systematic Search and Panel-Based Usage Analysis
* * *
Thank you for listening! Don't forget to SUBSCRIBE to the show to receive new episodes delivered straight to your podcast player every month.
If you enjoyed this episode, please help us get the word out about this podcast. Rate and Review this show on Apple Podcasts, Amazon Music, Google Podcasts, or Spotify. It helps other listeners find this show.
This podcast is for informational and educational purposes only. It is not to be construed as legal advice specific to your circumstances. If you need help with any legal matter, be sure to consult with an attorney regarding your specific needs.