French Insider Podcast Ep. 25
Plunging Into the Future: What Companies Need to Know as They Embrace AI with Jim Gatto of Sheppard Mullin
Thank you for downloading this transcript.
Listen to the podcast released September 1, 2023 here: https://www.sheppardmullin.com/multimedia-507
In this episode of French Insider, Jim Gatto, a partner in Sheppard Mullin’s Washington D.C. office and co-chair of its AI team, joins host Sarah Ben-Moussa to discuss what companies should know as they embrace generative AI, including key legal issues, the European Union’s Artificial Intelligence Act, and unique due diligence concerns when acquiring or investing in companies that develop or use generative AI.
About James G. Gatto
James G. Gatto is a partner in the Intellectual Property Practice Group in Sheppard Mullin’s Washington, D.C. office, where he also serves as Co-Leader of the firm’s Artificial Intelligence Team and Leader of the Open Source Team.
Jim’s practice focuses on AI, blockchain, interactive entertainment and open source. He provides strategic advice on all aspects of intellectual property strategy and enforcement, technology transactions, licenses and tech-related regulatory issues, especially ones driven by new business models and/or disruptive technologies.
Jim has over 20 years of experience advising clients on AI issues and is an adjunct professor who teaches a course on Artificial Intelligence Legal Issues. He is considered a thought leader on legal issues associated with emerging technologies and business models, most recently blockchain, AI, open source and interactive entertainment.
About Sarah F. Ben-Moussa
Sarah F. Ben-Moussa is an associate in the Corporate Practice Group in Sheppard Mullin’s New York office, where her practice focuses on domestic and cross-border mergers and acquisitions, financings and corporate governance matters. As a member of the firm’s French Desk, she has advised companies and private equity funds in both the United States and Europe on mergers, acquisitions, joint ventures, financings, complex commercial agreements, and general corporate matters.
As a member of Sheppard Mullin’s Energy, Infrastructure and Project Finance team, Sarah also represents renewable energy companies, borrowers, financial sponsors, portfolio companies, commercial banks and other financial institutions in a variety of financing transactions, including project-level debt and equity financings of wind and solar facilities.
Before joining Sheppard Mullin, Sarah spent a year and a half studying and working in France, focusing on corporate transactions and commercial contracts in Europe and internationally. Sarah is also committed to pro bono work, focusing on cases involving children seeking asylum or other immigration-related relief.
Transcript:
Sarah Ben-Moussa:
Bonjour French Insider listeners. My name is Sarah Ben-Moussa. I'm an associate in the firm's New York office. Joining me today is Jim Gatto, a partner in the firm's Washington, D.C. office. He is co-leader of the firm's Artificial Intelligence Team and leader of the Open Source Team. He focuses on all aspects of IP and tech regulatory issues, has over 20 years of experience advising clients on AI issues, and is an adjunct professor who teaches a course on artificial intelligence legal issues. Jim, welcome to the show.
Jim Gatto:
Thank you, Sarah. It's great to be back on the French Insider.
Sarah Ben-Moussa:
Great. So today's topic is on artificial intelligence. So Jim, I know this has been a hot button issue in the news. So just to kind of go into it and orient our listeners, what is generative AI and why has it become so newsworthy?
Jim Gatto:
Sure. So there's a lot going on. Artificial intelligence itself has been around and talked about since the fifties, so many, many decades. And there are many tools that use types of AI that are more what's referred to as deterministic, things that might do a prediction, or do sorting or classification, where you're leveraging the power of computers to produce some specific result you're looking for. Generative AI is a type of artificial intelligence that uses some similar underlying technology, but its purpose is to create new content. That content can be text, images, software code, books, movies, etc.
Anything you want to create that's expressive content, generative AI is your tool. And the reason it's really taken off lately is that the introduction of ChatGPT last year was really the first high-profile deployment of a generative AI tool. It was the fastest technology to reach a hundred million users, and it really hasn't slowed down. It literally took the world by storm, and it is incredibly transformative. People are using it in their personal lives and throughout business as well. So it's become kind of a ubiquitous tool that everyone is talking about, using, and sharing horror stories and benefits about.
Sarah Ben-Moussa:
Right. Because I think before the onset of ChatGPT, for those of us not in the know, AI was sort of the stuff of movies. I think that's as far as we'd really gotten with it. It was a bit of a sci-fi concept to us. But now it's real and it's here and it's concrete. And so I think what people are wondering, especially on our side of things, is what are the key legal issues we're looking at here?
Jim Gatto:
Sure. Well, there's many, so I'll try to keep it at a high level. We kind of break it down into two worlds, if you will. There are companies that are developing generative AI tools, training the models, creating the algorithms and providing the applications for users. I'll set that aside for a second and focus primarily on how people are using these tools and what legal issues arise. If you are using generative AI, the first thing to understand is that you typically put in some kind of a prompt, which is just an input where you're asking it to create something for you. In many cases, if what you provide contains any confidential information, you may be putting that confidential information at risk. All these tools work in different ways and their terms of service are structured differently. But with some of the tools, if you put information in, you actually grant a license to the tool operator to use that information. So you don't want to be putting confidential information in unless you know for sure that it's going to be treated as confidential.
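To make that confidentiality point concrete, here is a minimal, purely illustrative sketch in Python of the kind of pre-submission scrub a company policy might require before a prompt leaves the company. The patterns, names and function here are hypothetical, not any vendor's API; real controls turn on the specific tool's terms of service.

```python
import re

# Hypothetical sketch only: crude redaction of obviously sensitive strings
# before a prompt is sent to a third-party generative AI tool. Real
# confidentiality controls depend on the tool's terms of service and on
# company policy; this merely illustrates the idea of vetting inputs.
CONFIDENTIAL_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-ID]"),            # SSN-style numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.\w+\b"), "[REDACTED-EMAIL]"),      # email addresses
    (re.compile(r"\bProject\s+\w+", re.IGNORECASE), "[REDACTED-CODENAME]"),  # internal code names
]

def scrub_prompt(prompt: str) -> str:
    """Replace known sensitive patterns before the prompt leaves the company."""
    for pattern, replacement in CONFIDENTIAL_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the Project Falcon term sheet; contact jane.doe@example.com."
    print(scrub_prompt(raw))
    # -> Summarize the [REDACTED-CODENAME] term sheet; contact [REDACTED-EMAIL].
```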
The second thing, and this is a really big issue because we're dealing with expressive content, is that many people create content and, as part of their business model, license it or want to have exclusive use of it. So if someone uses it without permission, they want to have copyright protection and be able to sue to stop that use. One of the big challenges with generative AI is that the output, the machine-created elements that are the expressive content, is not copyright protectable. The US Copyright Office has taken this position, and in a case handed down just this past Friday, the district court affirmed the Copyright Office's view that only human-authored works can be protected. So there's very limited, if any, copyright protection available. Typically it's going to be for any embellishments you make, any creative content you add. You'll get protection for what you add, but not for the underlying part that was created by the generative AI tool.
Also, because we're dealing with content, the models are trained on copyright-protected information, and there are a bunch of lawsuits pending right now against some of the tool providers alleging that they trained their models using copyrighted material without permission. The tool providers are saying that it's not copyright infringement, or at least in the US that it's fair use, which is a defense to copyright infringement. Those cases are pending and working their way through. But again, from the user perspective, if the output you get from the generative AI includes someone else's copyrighted material, there's at least a significant risk that you may be liable for infringement.
In some cases, the generative AI tool providers will provide indemnity to you if there's infringement. But in other cases, if you use an output that's infringing, you indemnify the tool company. So that's a double whammy: you have the risk of infringement and you have to indemnify the tool company. Companies like Adobe have actually stated that they've trained their model on content they know they have licenses to, and so they're granting indemnity to users. So some people are using their tool in part because it's a good tool, but also in part because it minimizes that infringement risk. And if there is infringement, the indemnity will typically kick in.
One other big area I want to talk about before I pause, and I know I'm throwing a lot at you here, is another type of generative AI referred to as an AI code generator. The code in that case refers to source code, so software development. Developers are using these AI code generators to assist them in writing computer code. What happens is you literally have the AI generator working side by side with your development environment. As you're typing, in some cases it will auto-complete a line of code for you using its predictive ability. In some cases it can check code to see if there are any errors. And in other cases you can ask it to write code that performs some function. So there are various types of output that can come out.
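As a hypothetical illustration of that auto-complete behavior: a developer might type only the comment and function signature below, and an AI code generator would suggest the body. The example is invented for illustration, not output from any particular tool.

```python
# A developer types the comment and signature; an AI code generator
# (hypothetically) suggests the function body as an auto-completion.

def celsius_to_fahrenheit(celsius: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9.0 / 5.0 + 32.0  # the kind of line a generator suggests

print(celsius_to_fahrenheit(100.0))  # 212.0
```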
Most of those code generators are trained on known, existing open source code. And there's a plus and a minus to that. The good news is that pretty much all open source is licensed and you can use it for almost any purpose. So it's not so much an infringement issue if open source code gets into the output. But open source licenses typically have what are referred to as compliance obligations. In some cases you can freely use the code, but there are certain conditions on that use: you might have to maintain copyright information, give attribution to the copyright owner, or disclose if you've made changes, things like that.
And in other situations, if the code is under certain licenses like the GPL, and you use that code in your software, then your software has to be licensed under the GPL. So for any or all of those reasons, it's important to know, if you're using a code generator, whether the output includes any open source code, because it may be problematic from a legal perspective under the GPL scenario or may impose compliance obligations on you. It's a pretty complex area. We're working with a lot of developers on developing policies to manage these risks, and some of the tools have started adding features that can help manage these risks as well. We can go into that later if you want, or if people want more information down the road, we can talk about that. But it gets pretty nuanced. Let me kind of pause there. That's a lot of legal issues.
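To illustrate the compliance concern just described, here is a minimal, hypothetical sketch of flagging common open-source license markers in generated code so a human can review the obligations. Real scanners and snippet-matching services are far more sophisticated; the marker strings and function below are assumptions made for illustration.

```python
# Hypothetical sketch: flag common open-source license markers in code that
# came out of an AI code generator, so compliance obligations (attribution,
# notice retention, GPL-style copyleft terms) can be reviewed by a human.
LICENSE_MARKERS = {
    "GPL": ["GNU General Public License", "GPL-2.0", "GPL-3.0"],
    "MIT": ["MIT License", "Permission is hereby granted, free of charge"],
    "Apache": ["Apache License", "Apache-2.0"],
}

def flag_license_markers(generated_code: str) -> list:
    """Return the license families whose markers appear in the generated code."""
    text = generated_code.lower()
    return [family for family, markers in LICENSE_MARKERS.items()
            if any(marker.lower() in text for marker in markers)]

snippet = '# Licensed under the Apache License, Version 2.0\ndef parse(): ...'
print(flag_license_markers(snippet))  # ['Apache']
```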
Sarah Ben-Moussa:
Right. So is it safe to say the big umbrellas to worry about here are copyright, confidentiality, licenses, and I think indemnity was the fourth one? That is something I had not thought about, the fact that you may in fact be liable if you violate a license. Most people, lawyers notwithstanding, are not reading the terms and conditions of most of the software or tools they're using. So when it comes to real practical risk, is it safe to say that you really have to parse through the buckets, between what it is you are putting in and what it is the AI is generating for you, and be able to make that link and sort of trace it back?
Jim Gatto:
Yeah. It's definitely important to focus both on your inputs and your outputs. And to go back to what you said, across the board, people don't really read terms of service. But in this context it's really important. That's part of what we can talk about later, how companies can manage risk. Part of what they do is develop policies on using generative AI. And one aspect of the policy is to vet the different tools, look at the terms of service, and assess what the legal risks are. Some companies are approving or disapproving certain tools in part based on the terms of service.
So the end user doesn't have to read the terms of service; the company will, and it will adopt the policy. But one very important caveat: some of these tools will have a terms of service, but then they'll pop up an option asking, "Do you want to permit the tool to use your input to train the model to make it better?" And a lot of people just click yes to move on. By doing that, you may have granted a license to the tool. Even if the terms of service say one thing, some of these tools are popping up these little user input boxes, and that can alter the terms with user consent. So both reading the terms and being careful about what you accept as you're using the program are very important.
Sarah Ben-Moussa:
Oh yeah. I'm thinking specifically of companies that may have really important confidential data, as you mentioned. Again, with that popup, most of us just kind of click yes to make it go away. I've been guilty of that. But one thing I want to focus in on, because we've covered some of the issues, is liability. We talked a bit about indemnity. But who is liable if the output infringes?
Jim Gatto:
That's a good question. It's somewhat fact specific. But the short answer is it could be either or both of the tool provider and the user. If the tool provider, for example, is training on copyrighted material without the right to do it and they output that material, arguably they've made a copy of the work they trained on without a license, and that's copyright infringement. If the user then makes additional copies, publishes it, or does something else that's an enumerated right under the copyright statute, they may be liable for infringement as well. So with respect to the IP owner, either or both could be liable. But as we mentioned earlier, depending on the indemnity between one and the other, one of them may actually be responsible for the actions of the other because of the indemnity.
Sarah Ben-Moussa:
Okay. We've been talking a lot about the American landscape, but in terms of jurisdiction and geography, while we have a lot of investors in the US, we also have a lot of companies that are based in Europe. So I wanted to take a second and talk about the European Parliament's passage of the AI Act last June and its attempt to address some of the riskier uses of AI. Could you give us a broad overview of the EU AI Act and how it compares to the American landscape?
Jim Gatto:
Sure, yeah, great question. So the AIA, as it's referred to, is working its way through the European system. If it's passed, it really would be the first, what I would call, comprehensive law on artificial intelligence. And the approach they take is interesting. At the high level you asked about, the law looks at different applications of AI and assigns a risk category to each of them. They have three buckets. One bucket is applications of AI that create what is deemed to be an unacceptable risk. One example that's provided is a government-run social scoring system like the one used in China. That type of AI use would be banned under the AIA as an unacceptable risk.
The second bucket is applications of AI that are high risk. The parameters around this are a little bit subjective, but one example that's given is tools that use AI to scan someone's CV, resume or job application and rank applicants; that's deemed to be high risk. One of the reasons is that in some cases AI is just not accurate, and in other cases it's trained on data that has actually been shown to contain bias. So uses like that are deemed high risk because they could create an inequity or bias situation.
And then the third category is kind of everything else, things that are not in one of the first two categories, so not banned and not deemed high risk. Those are, at a high level, unregulated. The US, as usual, is a much more fractured system. We don't have a single AIA being put through. What we have is a couple of different things, some of which are voluntary and others of which are kind of a regulatory patchwork.
One of the prominent things that's been done at the federal level is what's called the AI Bill of Rights. It's often referred to as a blueprint for an AI bill of rights, and it's a set of principles put forth by the US Office of Science and Technology Policy in 2022 and kind of backed by the White House. It looks at things like the safety and efficacy of the tool, and equity and non-discrimination issues. It focuses on privacy and data protection, in part as we talked about earlier, and on transparency and awareness. Transparency is visibility into what the tool is trained on and how it works. And then there are other factors like human oversight: making sure you don't just have these machines running without people testing them and making sure they're working properly and accurately, and not producing bad or harmful results in some cases.
So it's more like a set of guidelines; it's not really law. Then you have NIST, the National Institute of Standards and Technology, which has also developed an AI Risk Management Framework. It tries to put a little bit of meat on the bones of those principles and say, "Okay, here's how you can take these broad principles, actually try to apply them and come up with safe uses of AI." And then you have the FTC, which is responsible for consumer protection and various other activities, and which has issued guidance and taken enforcement actions on a one-off basis against companies that have engaged in things that were either privacy violations related to AI or misuse of information for other purposes, like financial decisions.
Then one more level down, you have the states. A number of states are acting now; New York was one of the first to pass a law, which has actually come into effect, imposing severe limitations and conditions on using AI in connection with employment decisions, which is also part of what's in the European act. So that part is similar, but it's being implemented at the state level. At a high level, there's just this very different approach: the US is a little more scattershot in how it's dealing with this, whereas the AIA would be a more focused, comprehensive regulation if it's enacted and then brought into effect in the different member states.
Sarah Ben-Moussa:
Right. And that tends to be the story of the US regulatory landscape. So I'm sure we're going to be following this for months and years to come. I think we have a good sense now of some of the legal considerations for a company that's thinking about using or developing AI. But what about the inverse of that? Is it legal for companies to use your work to train their own AI models?
Jim Gatto:
That's a question that's being debated right now. As I mentioned earlier, there are a number of lawsuits testing that very question: suits against tool companies by people who created books or images or code, alleging the models were trained on that copyrighted material without permission. Those cases are working their way through, and there really hasn't been any substantive decision in any of them yet. So we'll see what happens. But the tool companies say there are only very limited instances where an output is actually a copy of something the model was trained on. The way these models are supposed to work is that the training process is like when we look at a series of pictures: we don't memorize each picture, we memorize things about the pictures. We learn information about images, and over time we see more and more of them. If someone says, "Hey, create a picture of this," you have a lot of images, or information about images, in your mind, and you create something new based on what you've learned in the past.
That's how they're supposed to work, at a very high level. It's questionable whether they're all working that way. There's a lawsuit right now that Getty Images has filed where one of the exhibits goes to the allegation that the AI model was trained on Getty images that were watermarked, so they had the Getty logo on them, and Getty has output that shows the watermark on it. Getty is saying it's not likely there's going to be a watermark in the output unless the image was actually copied. So there are situations like that where there have been some examples. But in some of these cases, it's hard to know what the model was trained on.
So artists are saying, "You must be using my work, and it's going to lead to people getting to use my work for free." But in some of the cases, the plaintiffs haven't actually provided enough specificity to show the court that there was use of their content in the training and that it made it into the output. So there have been some procedural motions to dismiss, and in some cases the courts have said, "Yeah, we agree there's not enough in the complaint right now. But you can go back and amend the complaint. If you can provide greater specificity, we'll let you take another bite at the apple." So they're kind of working through the procedural phases right now.
Sarah Ben-Moussa:
That'll be an interesting one to follow because I feel like with the Getty one it's a bit obvious, the watermark shows up on the image. But I'm really interested to see whether it's comedians or authors or other people who generate content, how do you prove that something was done in the style of your work and whether or not you have rights to that?
Jim Gatto:
Right. And that's another whole issue. One of the lawsuits filed by some of the artists says that some of these tools are trained specifically on their work. And you can say, "Give me a picture of a monkey holding a red umbrella in the style of Sarah's art." And if you have a style, it'll create an image with the elements you specified, in your style. So one of the questions is, is style actually protectable? This is a question under copyright law that's not necessarily specific to AI, but it's coming up in this context. With copyright, you can protect the expression of ideas but not the idea itself. So if the way you're defining the style is an idea for how to present something, or an idea for a type of art, it's probably not protectable. But if, rather, the style really embodies specific expression or types of expression, then it's more likely to be copyright protectable, because that's what copyright covers: expression. So some of it may be fact specific. But that's a very high-level framing of the issue.
Sarah Ben-Moussa:
Got you. And so are there unique issues to consider in connection with diligence when we're looking at acquisitions or investments in companies that are developing or using generative AI?
Jim Gatto:
Yeah, absolutely. That's one of the areas we've been pretty busy in. For most deals there's a standard list of IP diligence questions, and those are all still relevant. The problem is that some of them are not specific enough to capture some of these nuanced issues we've talked about. The other problem is that in some cases the target company doesn't understand the law well enough to answer the questions properly. As a simple example, suppose you ask, "Identify all the works for which you have copyright protection, and can you represent that you actually own these and they're valid?" If they generated material with generative AI, they may think, "Yeah, we created it, we own it, we filed a copyright registration, so everything's hunky-dory." But the reality is, it's not protectable. So you have to dig into the way generative AI has been used and whether any of the output is a significant work for which copyright protection was sought, or an important work for the value of the deal.
So that's one area, along with all the software issues we talked about earlier. In general, you'll do open source diligence if you're buying a company that has software, because almost every company's software uses open source now. Again, there are standard open source questions, but some of these questions, around the output of generative AI, compliance obligations, and whether you've vetted whether the code you're using includes open source, sometimes need to be a little more specific. And then there are various other questions. One other important area is third-party developers or contractors that companies use to create content for them. A lot of times when you get into diligence, it's "Yeah, this was created, we have an agreement that says we own it," but the same problems can arise. The contractor may not have generative AI policies. They may not be managing it, may not know the issues. And they may say, "Yeah, you own the copyright," when in fact it's not copyright protectable.
So you need to do additional diligence with respect to the third-party contractors as well. Those are some of the topics; there are various others that can arise. If you're buying a company that is training AI models, one of the most important things to understand goes back to the FTC enforcement I mentioned earlier. In one of the cases, there was a company called Everalbum that had a photo app where you could upload pictures. It was like an album; you could organize things and do what you'd typically do with photo apps. Over time, they ended up using those images to train a model to create facial recognition technology. They didn't disclose it to users and they didn't have permission to do it. Long story short, the FTC found out and brought an enforcement action, and the remedy was that they had to delete all of the models and algorithms they had built.
And the reason that's important is that right now a lot of companies are investing in AI and AI-based companies, and some of them are spending tens or hundreds of millions of dollars to train these models. If a company doesn't have permission to use the data it's training on, and you run into the remedy the FTC imposed, what's called algorithmic disgorgement, where you have to delete your models and algorithms if they were trained on data you didn't have a right to use, that wipes out the value of those tools and can significantly impair your investment. So that's one really, really important issue that needs to be considered in connection with diligence. There are others, but that's probably at the top of our list from a business perspective.
Sarah Ben-Moussa:
All right. So get on top of it, get in front of it, get all the information you can at the onset. It's funny, as we were discussing this, when you mentioned diligence, that little open source bell in my head just kept ringing, because I can't tell you the number of times we've been buy-side looking at a target, and they insist, "No, don't worry about it, no IP, no software, nothing to really report." And then you dig in a little bit and it turns out they're relying entirely on open source software, because everybody does. And then that's a whole other diligence debacle.
Jim Gatto:
And that's not necessarily a problem, because many open source licenses are benign. They don't create legal issues for you; there might be some compliance obligations that are pretty minimal. But the key, at a general level, when you're doing diligence is: does the company have an open source policy, and do they follow it? If a company has a policy and they're on top of it, diligence generally goes a little smoother. But when you ask the company, as you were saying, "Do you have any open source?" and they say "No." "Do you have an open source policy?" "No." And then you dig in and realize, "Okay, there is open source. Did you have any policy?" "No." "How did you determine whether to use open source and whether it was a problem?" "Well, we didn't. We left it up to the developers."
Those usually don't go very well. And then you add AI on top of that with these code generators, and it creates another level of potential issues. As head of our open source team, I do a lot of that work, and it amazes me that, according to some of the statistics I've seen, probably over ninety-something percent of companies use open source, and I'd say at least half the companies we encounter still don't have an open source policy. It's really scary.
Sarah Ben-Moussa:
All right. Part of the reason we really wanted to do this episode is, one, AI is everywhere and it's really come into the mainstream. But what we're seeking to do is demystify it and get rid of some of the fears and apprehensions people have. So one thing I really want to touch on as we finish up the episode is: what are the most important things companies can do to minimize the risk when using generative AI?
Jim Gatto:
Sure. It's a great question and probably a good one to end with. A lot of companies hear about the issues I've been talking about, and a lot of lawyers who are kind of new to this field and are getting up to speed don't feel like they have a good handle on the issues. So they do what a lot of lawyers do and say, "Don't use it, because of the risk." And of course, not using something usually minimizes the risk. But you lose the business benefit of this amazing tool that people are using to save so much time. So the question is, if you're not going to just ban it, how do you use it in a way that doesn't create undue risk for the company?
The short answer, and this is a lot of what I and some of our other team members are spending our time on right now, is educating in-house legal departments and also boards, officers, directors, C-level people, because a lot of these are big business decisions, as I was mentioning earlier. With risks like algorithmic disgorgement, you're talking about investing money and asking whether you risk having it just go out the window. So we're doing training to help companies understand the issues, including some of what we covered on this episode, but many other issues as well. Some of it depends on the use case, what the company wants to use it for; that gets into some additional information. Once we've helped them understand the issues, the second thing we help them understand is that, as I said earlier, all these tools are different. They have different terms of service and different features. A lot of the responsible tool providers know that managing legal risk is an issue, and so they're building features into their tools that can help companies mitigate legal risks.
On the AI code generators, some of the tools have what's called a “filter” that will prevent any known open source code from being output. They also have what's called a “reference feature,” which you can use in almost the opposite way: it lets code come out even if it's open source, but it identifies the fact that it matches known open source code. So it flags it, and you can then analyze it and see whether it's problematic open source or not, and whether there are compliance obligations. Once you do that, you can manage the risk. So that's just one example. Then, as I mentioned earlier, there are enterprise versions and individual user versions, and a lot of the enterprise versions come with different terms and more safety features. With OpenAI, for example, there are two different ways to provide inputs: one is through an API and one is through the website directly.
And they state in their terms that if you're using the API, they'll treat your information as confidential and won't use it to train models. If you do it through the website, they're going to use it. So once you understand the differences in these tools, how they work, and the legal issues, you can put together a policy that makes decisions and says, "Okay, our company is approving these tools, but not these tools. And if you're going to use this tool, you have to use these features." In some cases, if you have the enterprise version, an administrator can lock it down so users can't change some of the settings. In other cases, companies are specifying the use cases.
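As a minimal sketch of the API route described here, using the 2023-era `openai` Python client (the pre-1.0 interface; the library and OpenAI's data-use terms change over time, so treat the details as assumptions to verify rather than settled guidance):

```python
import os
import openai  # 2023-era openai-python client (pre-1.0 interface)

# Per OpenAI's 2023 terms, data sent via the API was, by default, not used
# to train models, unlike input typed into the consumer website. Verify the
# current terms before relying on this.
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Draft a short product description."}],
)

print(response["choices"][0]["message"]["content"])
```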
As I mentioned earlier, if you're creating content like a new character for a video game that you want to be able to protect, commercialize and monetize separately, you wouldn't want that content to be unprotectable by copyright. So in some cases companies are saying that for these purposes you cannot use generative AI, or at least the output of generative AI; you can use it as inspiration, but not the actual output. In other cases, it's fine to use it. And then there are various other safeguards. I won't go through all the elements of the policy. But you get the sense that by understanding the legal risks, the tools, the differences, and the terms of service, you can make intelligent decisions that help companies manage the risk.
Sarah Ben-Moussa:
Right. And I think that is something your team specifically is focusing on at this point.
Jim Gatto:
Absolutely. We're doing a lot of that.
Sarah Ben-Moussa:
I am excited to see where it goes. I really think when it comes to AI, it's sort of the, I don't want to call it Pandora's box because we're demystifying and getting rid of the stigma. But AI is cool. And I think we can't move backwards. We are where we are, and it's going to be interesting to see where it goes.
Jim Gatto:
I agree with that 100%. I do not think we're going to move backwards from this. The genie's been unleashed. The question is, how do we make sure that genie behaves?
Sarah Ben-Moussa:
Right. Exactly. All right. Thank you so much, Jim. This was super informative and I can't wait to see where it goes and some of the issues and challenges and opportunities that come up as a result.
Jim Gatto:
Well, I share those sentiments, and I look forward to seeing new changes pretty much on a daily basis right now. It has certainly been kind of a fun ride. As I said, I've been doing artificial intelligence work for over 20 years, and I haven't seen this pace of development with any technology that I can recall in the 35 years I've been a lawyer. It really is taking the world by storm, and it's producing some really good results, but it also has some potentially significant legal ramifications that companies, if they just understand them, can manage for the most part. So we look forward to helping people who need help. And I appreciate being on the podcast again. This has been great.
Sarah Ben-Moussa:
Okay. Thanks, Jim.
Jim Gatto:
Thank you.
Sarah Ben-Moussa:
For more information, visit the Sheppard Mullin French Desk at sheppardfrenchdesk.com. This podcast is recorded monthly and is available on Spotify, Apple Podcasts, Stitcher, Amazon Music, as well as on our website, frenchinsiderpodcast.com. We want to help you, and welcome your feedback and suggestions of topics.
Additional Resources:
Copyright Office Artificial Intelligence Initiative and Resource Guide | Law of The Ledger
Training AI Models - Just Because It’s Your Data Doesn’t Mean You Can Use It | Law of The Ledger
Congress Proposes National Commission to Create AI Guardrails | Law of The Ledger
Sheppard Mullin French Desk Blog
Sheppard Mullin Launches Artificial Intelligence Industry Team | Sheppard Mullin
* * *
Thank you for listening! Don’t forget to FOLLOW the show to receive every new episode delivered straight to your podcast player every week.
If you enjoyed this episode, please help us get the word out about this podcast. Rate and Review this show in Apple Podcasts, Amazon Music, Stitcher or Spotify. It helps other listeners find this show.
Be sure to connect with us and reach out with any questions or concerns.
This podcast is for informational and educational purposes only. It is not to be construed as legal advice specific to your circumstances. If you need help with any legal matter, be sure to consult with an attorney regarding your specific needs.