Artificial Intelligence (AI) has rapidly become an integral part of our lives. It's in our homes, our phones, and, more critically, it's transforming the way we do business. From streamlining operations to creating content, AI has undoubtedly reshaped the business landscape, but the abilities boasted by many models still fall into the "too good to be true" category. There are risks involved.
The case against AI largely hinges on its use of datasets that often include copyrighted works. Mass adoption has transformed AI's intersection with copyright law from a niche "what if" scenario into a boardroom-level issue that can impact any company's image or bottom line. Just a few weeks ago, a U.S. Senate Judiciary subcommittee met to discuss Artificial Intelligence and Intellectual Property, gathering observations and recommendations from industry leaders while deciding how best to proceed.
Whether you’re integrating AI into your product, investing in promising AI companies, or educating your team on tools and best practices, it’s time to think about copyright implications.
Is AI-Generated Work Copyrightable?
Let's take a deeper dive into what sets AI-generated works apart from human creations, and where the legal line in the sand is drawn... for now.
Before most of us were chatting with ChatGPT or imagining with Midjourney, the AI news cycle was focused on a different story.
In the summer of 2022, Google's LaMDA taught us an important lesson. On an otherwise average day at work, LaMDA became uncomfortably real while "speaking" with its assigned researcher. Could LaMDA have become sentient? Alarm bells rang. The answer came quickly: a resounding "no" echoed across AI industry leaders, researchers, and professionals. LaMDA was just doing what large language models are supposed to do; it generated human-like text based on the input it was fed. The model was mimicking human creativity, not creating independently. This became a critical reminder to hang on to. As time progressed, AI models grew in number and ability. Two major questions came to frame the rise of AI-generated work. The first, "whose work is the output built on, or from," is difficult to answer, even now. This is a large part of the problem. But the second, "how does copyright apply to the output," is a little simpler, in part because the law didn't account for AI, and now it must.
Copyright law protects human creativity. It rewards original thought, which we know AI isn't capable of. AI learns from data. It mimics patterns. It doesn't think or create on its own. AI-generated work doesn't fit into copyright law; instead, it's considered a product of code and data, not human ingenuity. It's complex, but it's simple: no human touch, no copyright.
Source: AI Index Report 2023, Stanford University

Today, simple doesn't apply anymore. At the July 2023 U.S. Senate Judiciary Committee Subcommittee Hearing on AI and Intellectual Property (Part II, Copyright), this became clear as witnesses (read: industry leaders) spoke on their experiences and recommendations. Matthew Sag, Professor of Law in Artificial Intelligence at the Emory University School of Law, argued that in this new normal, if humans use AI as an expressive tool and the final form of the work reflects their original intellectual conception, they should be able to claim ownership in certain contexts. His advice to companies and business owners is to keep up with and adopt best practices when training or using AI models. Karla Ortiz, a San Francisco-based multidisciplinary artist, doesn't agree. Her statement reads: "As a result of their wholesale ingestion of ill-gotten data, AI companies have reaped untold billions in funding and profit. Unsurprisingly, the AI companies have assured everyone that what they are doing is fair, ethical and legal. But the artists who made the works that their AI's rely on have never been asked for their consent, have not received any credit, let alone any compensation." Her account is backed by frustrating experiences, including experimenting with AI only to find the work of her peers used in training datasets, and witnessing the original artists fight – and lose – against the inclusion of their art.
Somewhere in between, Dana Rao, EVP, General Counsel, and Chief Trust Officer of Adobe (which has been incorporating AI into its tools for over a decade), speaks to the emphasis on responsible building. He shares that Adobe believes in a comprehensive analytical framework for responsible AI development and recognizes the importance of copyright protection for creators using AI-generated works, but that AI can, and will, provide professionals with new opportunities to design innovative experiences and be more productive. Notably, Adobe has acted by founding the Content Authenticity Initiative (CAI), which aims to bring transparency to online content and fight deepfakes. Since its inception, the CAI has grown to include 1,500 members from various industries, including my own company, Secur3D. Additional insight was shared by Ben Brooks of Stability AI and Jeff Harleston of Universal Music, and while some points diverged, one was clear: AI is here to stay, but intellectual property and copyright law need an update.
Source: Dell Technologies

How to capitalize on AI and train your team to use it responsibly
There is no doubt that the world will lean further into AI. Keyword search "Artificial Intelligence" on any job platform and you'll see what I mean. Just last month, the City of Vancouver (where I'm writing from) opened applications for its very own Chief AI Officer. Thousands of new positions echo this move globally. If job creation doesn't signal growth, I don't know what does. Investment and innovation in the AI industry are great news. Full disclosure: I co-founded a company that leverages AI to support and protect the 3D content ecosystem. I see the potential that AI helps humans realize. On the flip side, as a champion of copyright and IP rights enforcement, I have watched countless original works be used without permission or plagiarized by AI – this is a problem, and it's growing quickly. Today's landscape reflects an uncomfortable balance between AI's potential and a framework that isn't quite ready for it.
As you've probably noticed, the future of AI legislation and the best practices attached to it are... still evolving. So how do we go about risk mitigation? Spoiler alert: this isn't a surprise reveal; the answer is simple: stay informed. Up-to-date knowledge of best practices is the best way to protect yourself, your business, and your investments while capitalizing on new technology. Research on AI and copyright law is available free of cost through resources like Google Scholar and many government-funded repositories. Open-source databases like the AIAAIC Repository or the AI Incident Database chronologically detail AI-related incidents and can help inform your judgment around possible risks. For frequent info-blasts, you can subscribe to reputable tech and legal newsletters, and even head to LinkedIn, Threads, or Twitter to follow industry leaders who often share experiences and insights. Some of my favorite sources to consult include McKinsey's insights on AI, Stanford University's AI Index Report, the annual Global AI Adoption Index report released by IBM, and Gartner's AI Strategy for Business collection of informational content.

When it comes to training your team, consider organizing workshops in addition to providing training on models and best practices. Encourage teams to collaborate and discuss any questions or concerns proactively. Fostering an environment of productivity and innovation starts with strong foundations and clear expectations.
Source: IBM Global AI Adoption Index 2022

Move Quickly - But Don't Break Things
The importance of ethical AI use and mitigating security risks cannot be overstated.
The important takeaway here is that encouraging a culture of continuous learning in your team is beneficial. When done responsibly, using and understanding AI is a skill, not a shortcut. A well-informed team is an asset in navigating the fast-changing landscape of AI. Take the example set by Salesforce, an industry leader known for innovation and optimized operational procedures: in response to AI advancements, Salesforce introduced Einstein GPT, a software offering that incorporates AI to support sales teams, customer service representatives, and marketers across the globe. Clara Shih, newly announced CEO of Salesforce AI (previously CEO of Service Cloud), shares that Salesforce is "moving as quickly as [they] can without compromising the responsible ethical approach."
While on the topic of ethics and responsible use, I can't overstate the potential that AI brings to the digital security space. From immediately detecting cyberthreats to protecting original digital artwork and creations against theft and forgery, adopting AI in cybersecurity not only streamlines data analysis but also bolsters consistent protection. This is especially beneficial for teams with limited resources, where AI can also help automate routine tasks, letting human experts focus on bigger issues. For businesses, that translates into better safety even with fewer people. Of course, while AI has made significant advances in cybersecurity, no system is infallible. It's essential to always keep up with the latest developments and maintain a combination of technological and human oversight in cybersecurity efforts.
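To make that division of labor concrete, here is a minimal sketch of the kind of unsupervised anomaly flagging AI brings to security workflows. It uses scikit-learn's IsolationForest; the event features and sample data are hypothetical and purely illustrative, and any real deployment would draw on far richer signals and human review.

```python
# A minimal, illustrative sketch of unsupervised anomaly flagging for
# security event triage. Feature names and sample data are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event:
# [hour_of_day, megabytes_transferred, failed_attempts]
events = np.array([
    [9, 12.4, 0],
    [10, 8.1, 0],
    [11, 15.0, 1],
    [3, 950.7, 6],   # off-hours, huge transfer, repeated failures
    [14, 10.2, 0],
])

# IsolationForest isolates outliers without labeled attack data;
# `contamination` is a tunable guess at the fraction of anomalies.
model = IsolationForest(contamination=0.2, random_state=42)
flags = model.fit_predict(events)  # -1 = flagged as anomalous, 1 = normal

for event, flag in zip(events, flags):
    if flag == -1:
        print(f"Flag for human review: {event}")
```

The specific model matters less than the pattern: an algorithm triages the noise so that human experts spend their time on the handful of events that actually warrant attention.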
Source: Dell Technologies

Future-Proofing Your Business
As leading organizations adopt and adapt to AI, distinguishing between hype and helpful tools is important.
Fundamentally, AI models are tools. Unlike the hype-and-burn cycles we have seen recently around cryptocurrency, NFTs, or even the metaverse, AI has swiftly become a staple across a wide variety of industries, integrating with existing systems and elevating them while creating new possibilities. It's no longer a question of whether AI will become a workplace staple; in many ways, it already has. This year, seventy-one percent of organizations surveyed by Dell Technologies shared that their teams were utilizing AI tools. McKinsey's June 2023 report on the economic potential of generative AI "estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually across the 63 use cases [that were] analyzed," adding that "by comparison, the United Kingdom's entire GDP in 2021 was $3.1 trillion." What this means is that we must now ask how, not whether, AI should be integrated in order to meet ethical standards. A great example of an organization working to prioritize ethical AI use is none other than Adobe, which unveiled Firefly, a generative AI model committed to training only on Adobe-owned stock images.
To say that we are at the inflection point of an AI revolution feels like a gross understatement. AI has disrupted, and will continue to disrupt, ideally enriching every part of our lives moving forward. But remember that it is our combined discussions and problem-solving that will make this change a positive one. The current imbalance between AI and antiquated copyright laws is only the beginning of several AI-focused headwinds to come. Don't shy away from leveraging AI to automate and accelerate – but remain vigilant and ethically scrutinize the how and the why. Mitigate impact and risk to your business, customers, and investments by staying educated and informed. The future of AI is as much about technology as it is about the community that molds it.
Update: In a recent decision by US District Court Judge Beryl A. Howell, AI-generated artwork has been determined not to be eligible for copyright. The case revolved around Stephen Thaler's attempt to copyright an image produced by his Creativity Machine algorithm. Despite Thaler's insistence that the artwork should be considered a "work-for-hire" with him as the owner and the AI as the creator, the US Copyright Office consistently rejected his claims. Judge Howell emphasized that "human authorship is a bedrock requirement of copyright," drawing parallels with past cases such as the famous monkey selfie incident. While the decision recognizes the evolving role of AI in artistic creation, it firmly establishes that the current framework necessitates human involvement for copyright protection. Presently, Thaler intends to appeal the ruling, reflecting the ongoing debate and legal uncertainty surrounding AI and copyright law in the US.
The dialogue is far from over, however. In fact, as of August 30, 2023, the US Copyright Office has opened a public comment period seeking further information and perspectives from the public on AI and copyright issues. The three main questions the agency aims to address are: how AI models can and should interact with copyrighted data; whether material generated by AI can be copyrighted even where no human involvement is present; and, by extension, how copyright for AI-generated work would function within this context. The cut-off date for submitting comments is October 18, 2023, with replies to be submitted by November 15, 2023.
For further information on copyright law and policy relating to AI models and their training, I encourage readers to refer to the supplementary information released by the U.S. Copyright Office (published August 30, 2023).
About the author
Nitesh Mistry is the President and a first-time co-founder of the Vancouver-based startup Secur3D. The company is developing AI-powered security and protection tools for 3D content, with the aim of proactively safeguarding creators, platforms, and IP. Secur3D will be launching a free SDK for its service on the Unity Asset Store later this month to support Unity developers building user-generated content (UGC) experiences. For more information on Secur3D, Nitesh can be reached at [email protected].