Challenging The Black Box: On the Accountability of Algorithmic Copyright Enforcement

Lessons From Algorithmic Copyright Enforcement by Intermediaries

Distributed networks have opened up new opportunities for open access to creative materials. The low cost of coordinating creative efforts and distributing works to a large audience enables individual users to collaborate in the production of creative works and to share them freely with communities of their choosing. At the same time, however, digital networks constitute a robust mechanism of enforcement. Algorithms now perform a large part of our law enforcement activity. Many enforcement mechanisms are embedded in system design, performing surveillance and implementing filtering and blocking measures. Copyright has been at the forefront of algorithmic law enforcement, employing technical measures such as Digital Rights Management systems and Technological Protection Measures since the early 1990s. Nowadays, much of the enforcement activity is implemented by online intermediaries, which monitor, filter, block, and disable access to allegedly infringing content (e.g., under the "notice and takedown" regime of the Digital Millennium Copyright Act).

One of the challenges arising from algorithmic enforcement is how to enhance the accountability of algorithms and secure the public interest. Algorithms make critical choices about access to creative content in non-transparent ways, and it is difficult to subject such "black box" governance to public or legal scrutiny. We simply do not know what is being removed, how decisions are being made, who is making those choices, and how we might affect those decisions. Yet without adequate channels of review, unwarranted restrictions on non-infringing content cannot be promptly corrected. Unaccountable management of online content may also enable manipulation and abuse of power, thus creating new barriers to open competition and market innovation as well as threatening civil rights.

This paper seeks to map the barriers to enhancing public scrutiny of algorithmic copyright enforcement and to explore different mechanisms for minimizing them. In particular, it considers the complicated and non-transparent nature of algorithms; the dynamics of copyright enforcement mechanisms that rely on constantly evolving learning machines; the legal barriers that prohibit "black box tinkering" as a means of improving public knowledge of the systems that regulate users' behavior; and the failure of existing mechanisms of review, such as the counter-notice procedure under the DMCA, which enables users to challenge the removal of allegedly infringing material. Among the possible scrutiny-enhancing mechanisms, the paper critically explores market-generated mechanisms, such as Google's Transparency Report; private mechanisms of user literacy, such as the EFF's Fair Use Principles; and possible public mechanisms of mandatory disclosure.

Research Team:

Prof. Niva Elkin-Koren

Dr. Maayan Perel

Nati Perl
