
Music Publishers Fire Back Against Anthropic As Infringement Battle Heats Up: ‘Hard to Imagine a Machine More Destructive to Artistic Control’

Photo Credit: Anthropic

Last November, Universal Music, ABKCO, Concord, and others demanded a preliminary injunction against Anthropic in connection with their copyright infringement lawsuit against the AI giant. Now, these plaintiff music publishers have fired back against Anthropic’s opposition to the corresponding motion.

This newest twist in the intensifying courtroom confrontation recently came to light in a filing shared with DMN. As we reported back in October, the plaintiff publishers claim that the Amazon- and Google-backed defendant trained its Claude chatbot on protected lyrics without permission and reproduced those lyrics sans authorization in a number of responses, among other things.

Predictably, Anthropic has refuted the allegations of copyright infringement and taken aim at the preliminary injunction motion. On the latter front, the well-funded AI operation in a January filing expressed the belief that training large language models (LLMs) on copyrighted materials constitutes fair use.

Beyond the often-heard fair use argument, the entity further wrote that “song lyrics are not among the outputs that typical” users request, attempted to pin the alleged infringement on the plaintiffs due to their submitting the underlying text prompts, and signaled that it’d implemented “guardrails” to prevent the display of the relevant lyrics in Claude answers moving forward.

As mentioned at the outset, the publishers have formally targeted these and other arguments – including by accusing Anthropic of knowingly enabling infringement – in a reply supporting their preliminary injunction motion.

Beginning on the guardrails side, the publishers are of the belief that “Anthropic’s new guardrails do not ‘moot’ the need for preliminary relief.” As described by the plaintiffs’ latest filing in the increasingly involved suit, said “inconsistent” – and possibly circumventable – guardrails aren’t a “complete solution,” as they could conceivably be shelved at will and are in any event failing to halt alleged infringement altogether.

“Publishers continue to obtain verbatim and near-verbatim copies, mashups and distortions, and unlicensed derivatives of lyrics to the works-in-suit,” wrote ABKCO, Concord, and the other plaintiffs.

(Interestingly, on the topic of guardrails designed to prevent AI chatbots from weighing in on certain topics, Bloomberg today reported that Anthropic had kicked off the development of “safeguards around its chatbot Claude ahead of global elections slated for this year…including redirecting voting-related prompts away from the service.”)

Shifting to the idea that the plaintiffs themselves are responsible for the alleged infringement because they penned the related queries, the publishers maintain that “Anthropic cannot escape responsibility” with the argument.

“Anthropic’s fine-tuning data shows that Publishers’ queries were the sort it expected from ‘normal’ users,” reads the relevant section of the document, which notes for good measure that without submitting the associated questions, the plaintiffs “would otherwise have no way to detect infringement.”

Not by accident, a larger portion of the filing tackles the flawed (though highly prevalent) argument that training LLMs on protected media constitutes fair use. (In November, Anthropic elaborated upon its support for the idea in Copyright Office comments, separate from the above-highlighted January filing.)

“Second, in the unlikely event that Anthropic’s guardrails prevent its models from distributing copies of Publishers’ lyrics in the future,” the plaintiffs penned in one of several sections addressing the fair use position, “the models’ output of ‘new’ lyrics remains unfair. That output is enabled by unauthorized copying, attracts subscription fees and investment, and competes directly with songwriters and publishers whose own lyrics are the raw material for Anthropic’s substitutes.

“In Anthropic’s preferred future, songwriters will be supplanted by AI models built on the creativity of the authors they displace,” they indicated.

Furthermore, regardless of whether Claude’s answers ultimately infringe the publishers’ lyrics, Anthropic’s use of those lyrics is commercial rather than transformative, according to the document. And bigger picture, the alleged infringement “harms the market for” these lyrics, including on search-engine results pages and platforms like Genius.

“By taking for free what others license and using its unlawful copies to compete with Publishers’ licensees, Anthropic threatens the long-term licensing prospects for Publishers’ works,” per the text.

While this claim (and the broader situation) seemingly leaves the door open for a licensing agreement, AI-powered lyrical derivatives and mashups have emerged as a comparatively little-discussed obstacle standing in the way of any potential resolution.

“Moreover, it is hard to imagine a machine more destructive to artistic control than one that first copies lyrics,” the publishers laid out, “then alters them or combines them with works by other songwriters (or AI generated text) in ways that contravene the songwriters’ intent. Anthropic disregards the examples of error-filled or offensive outputs of Publishers’ lyrics, which Publishers would not license at any cost.”

Finally, regarding the precise terms of the sought injunction, the plaintiffs are only requesting that Anthropic keep the aforementioned guardrails in place for the duration of the litigation and refrain from using protected lyrics to train future models, according to the reply.

Earlier this week, a federal judge dismissed a substantial portion of Sarah Silverman’s class-action copyright suit against ChatGPT developer OpenAI, which has apparently rolled out a tool capable of creating eerily realistic videos based upon text prompts.
