Late last week an important, but disappointing, ruling came down from the 9th Circuit appeals court. The ruling in the case of Mavrix Photographs v. LiveJournal found that volunteer moderators could be deemed agents of a platform, and thus it’s possible that red flag knowledge of infringement by one of those volunteer moderators could lead to a platform losing its safe harbors. There are a lot of caveats there, and the ruling itself covers a lot of ground, so it’s important to dig in.
The case specifically involved a site hosted on LiveJournal called “Oh No They Didn’t” (ONTD), which covers celebrity news. Users submit various celebrity stories, and ONTD has a bunch of volunteer moderators who determine what gets posted and what does not. Some of the images that were posted were taken by a paparazzi outfit named Mavrix. Rather than send DMCA takedowns, Mavrix went straight to court and sued LiveJournal. LiveJournal claimed that it was protected by the DMCA safe harbors as the service provider, and the lower court agreed. This ruling sends the case back to the lower court, saying that its analysis of whether or not the volunteer moderators were “agents” of LiveJournal was incomplete, and suggests it try again.
There are a number of “tricky” issues involved in this case, starting with this: because ONTD became massively big and popular, LiveJournal itself got a bit more involved with ONTD, which may eventually prove to be its undoing. From the decision by the court:
When ONTD was created, like other LiveJournal communities, it was operated exclusively by volunteer moderators. LiveJournal was not involved in the day-to-day operation of the site. ONTD, however, grew in popularity to 52 million page views per month in 2010 and attracted LiveJournal’s attention. By a significant margin, ONTD is LiveJournal’s most popular community and is the only community with a “household name.” In 2010, LiveJournal sought to exercise more control over ONTD so that it could generate advertising revenue from the popular community. LiveJournal hired a then active moderator, Brendan Delzer, to serve as the community’s full time “primary leader.” By hiring Delzer, LiveJournal intended to “take over” ONTD, grow the site, and run ads on it.
As the “primary leader,” Delzer instructs ONTD moderators on the content they should approve and selects and removes moderators on the basis of their performance. Delzer also continues to perform moderator work, reviewing and approving posts alongside the other moderators whom he oversees. While Delzer is paid and expected to work full time, the other moderators are “free to leave and go and volunteer their time in any way they see fit.” In his deposition, Mark Ferrell, the General Manager of LiveJournal’s U.S. office, explained that Delzer “acts in some capacities as a sort of head maintainer” and serves in an “elevated status” to the other moderators. Delzer, on the other hand, testified at his deposition that he does not serve as head moderator and that ONTD has no “primary leader.”
It’s this oversight by a paid employee of LiveJournal that makes things a bit sticky. The question is whether or not this oversight and control went so far that the volunteer moderators could also be seen as “agents” of LiveJournal, rather than independent users of the platform.
Evidence presented by Mavrix shows that LiveJournal maintains significant control over ONTD and its moderators. Delzer gives the moderators substantive supervision and selects and removes moderators on the basis of their performance, thus demonstrating control. Delzer also exercises control over the moderators’ work schedule. For example, he added a moderator from Europe so that there would be a moderator who could work while other moderators slept. Further demonstrating LiveJournal’s control over the moderators, the moderators’ screening criteria derive from rules ratified by LiveJournal.
The court doesn’t fully answer the question, but sends it back to the lower court, saying that there’s a “genuine issue of material fact” that should be explored to determine if LiveJournal was responsible, and thus would lose its safe harbors. The specific fact pattern and details here may mean that this ruling doesn’t turn out to be a huge problem for safe harbors in the long run, but at least a few statements in the ruling are worrisome. For example:
… LiveJournal relies on moderators as an integral part of its screening and posting business model.
But… lots of sites rely on independent and volunteer moderators as a part of their business model. That alone shouldn’t matter as to whether or not a volunteer is truly an agent of the company.
A larger issue may be this: even if a moderator is deemed to be an “agent” of a platform, most moderators are not experts in copyright, and it would be ridiculous to argue that one moderator’s failure to stop infringement makes an entire company liable. That would doom many websites that rely on volunteer help. If a single volunteer were to mess up and misread the vast nuances of copyright law, the liabilities for the platform could be immense. As Parker Higgins notes, the expectation here is unbalanced in a ridiculous way, especially as this very same court doesn’t seem to think that the sender of a DMCA takedown should take as much responsibility for its actions:
Still, even if the moderator draws a paycheck from the platform, it seems unreasonable to expect them to approach thorny copyright questions with the nuance of a trained professional. That is especially true when you compare this ruling with the Ninth Circuit’s most recent opinion in Lenz v. Universal, the “dancing baby” case, which looks down the other end of the copyright gun at takedown notice senders. Notice senders must consider fair use, but only so far as to form a “subjective good faith belief” about it. If courts don’t require the people sending a takedown notice to form an objectively reasonable interpretation of the law, why should they impose a higher standard on the moderators at platforms handling staggering quantities of user uploads?
But if moderators are a platform’s “agents,” then it runs into trouble if they have actual or “red flag” knowledge of infringements. The Ninth Circuit has instructed the lower court to find out whether the moderators had either. Noting the watermarks on some of the copyrighted images in the case, the court phrased the question of “red flag” knowledge as whether “it would be objectively obvious to a reasonable person that material bearing a generic watermark or a watermark referring to a service provider’s website was infringing.” That’s an important point to watch: ownership and licensing can be extremely complex, so oversimplifying it to the idea that the presence of a watermark means any use is infringing would have profound negative consequences.
And this is why this ruling may backfire for Hollywood — even as it pushed the court to rule this way. As EFF notes, at the very time that the MPAA is demanding that platforms do more to moderate content, the implications of this ruling may force them to do much less moderation:
The fact that moderators reviewed those submissions shouldn’t change the analysis. The DMCA does not forbid service providers from using moderators. Indeed, as we explained in the amicus brief we filed with CCIA and several library associations, many online services have employees or volunteers who review content posted on their services, to determine, for example, whether the content violates community guidelines or terms of service. Others lack the technical or human resources to do so. Access to DMCA protections does not and should not turn on this choice.
The irony here is that copyright owners are constantly pressuring service providers to monitor and moderate the content on their services more actively. This decision just gave them a powerful incentive to refuse.
There are a few other issues in this case that are also potentially problematic. As Annemarie Bridy points out over at Stanford’s Center for Internet & Society, the court seems to totally mess up the analysis of the DMCA’s safe harbors by confusing Section 512(a) of the DMCA, which applies to network providers, with Section 512(c), which applies to online service providers:
According to the court, the section 512(a) safe harbor covers users’ submission of material to providers, and section 512(c) covers the providers’ subsequent posting of that material to their sites. There is no such submission-posting distinction in section 512. On the face of the statute and in the legislative history, it’s quite clear that section 512(a) is meant to cover user-initiated, end-to-end routing of information across a provider’s network. A residential broadband access provider is the paradigmatic section 512(a) provider. Section 512(c) covers hosting providers like LiveJournal that receive, store, and provide public access to stored user-generated content. To characterize LiveJournal as a hybrid 512(a)/512(c) provider misapplies the statute and introduces into the case law a wrongheaded distinction between submitting and posting material.
Putting aside the peculiar submission-posting dyad, the dispositive question concerning LiveJournal’s eligibility for the section 512(c) safe harbor is whether the site’s moderator-curated, user-submitted posts occur “at the direction of users,” taking into consideration the nature of moderators’ review and the fact that only about one-third of user submissions are ultimately posted. That question can be answered entirely within the ambit of section 512(c) and the existing case law interpreting it, including the Ninth Circuit’s own decision in Shelter Capital. There was simply no need for the court to invoke section 512(a) in this case.
The court’s analysis here is… just weird. It’s on page 13 of the ruling, and it really does seem to take a totally uncharted path in arguing that the submission of content is covered by 512(a) while the posting is covered by 512(c). But… that’s wrong:
The district court focused on the users’ submission of infringing photographs to LiveJournal rather than LiveJournal’s screening and public posting of the photographs. A different safe harbor, § 512(a), protects service providers from liability for the passive role they play when users submit infringing material to them…. The § 512(c) safe harbor, however, focuses on the service provider’s role in publicly posting infringing material on its site.
Among the other issues with this case, there’s also one on the question of whether or not the anonymous volunteer moderators should be disclosed. As we’ve discussed in the past, because the First Amendment also protects anonymity, any move to reveal an anonymous commenter must be carefully weighed against their First Amendment right to anonymity. The court here more or less brushes off this issue, saying that once the lower court determines the level of agency, that will answer the question on preserving anonymity:
Notwithstanding the deferential standard of review and complex issues of law that govern this discovery ruling, we vacate the district court’s order denying the motion and remand for further consideration. Whether the moderators are agents should inform the district court’s analysis of whether Mavrix’s need for discovery outweighs the moderators’ interest in anonymous internet speech. Given the importance of the agency analysis to the ultimate outcome of the case, and the importance of discovering the moderators’ roles to that agency analysis, the district court should also consider alternative means by which Mavrix could formally notify or serve the moderators with process requesting that they appear for their deposition at a date and time certain.
This is yet another important case in determining how online platforms can actually function today — and rulings that undermine safe harbors like the DMCA frequently seem to be what Hollywood wants — but again, this may backfire. Making it harder for these sites to function if they’re actively involved in moderation only means they’ll do much less of it.