Insights

Setting The Ground Rules for AI Usage

Even while courts and regulators fail to act expeditiously, we as an industry can and should do more, and quickly.

In my last piece, I outlined different outside boundaries for how the industry should approach rights management and exploitation with fair compensation in an AI-driven world. Courts and regulators are still racing to catch up, but that shouldn’t stop companies from setting their own internal policies.

Major tools like ChatGPT and Gemini limit how much we, as an industry, can rely on them until they, too, adopt compensation models. The best approach is for trade groups and collective bargaining organizations to adopt rules like those outlined previously, to prevent unfair use and inadequate compensation models.

Given fair use considerations, the desire not to further empower incumbents, the enormous size of the training data sets, and the lack of adequate copyright management information, we recommend that the industry consider several business models that, in essence, provide degrees of societally desirable control, compensation and credit for human authors and rights holders. A two-pronged approach to licensing is called for.

Inputs

Regardless of whether the ingestion of copyrighted works by generative artificial intelligence (GAI) tools is fair use, GAI companies should pay a percentage of their income to rights holders, on a compulsory blanket license basis, for the rights to train on copyrighted content and on human name, image, likeness, voice and style (NILV) rights. This is similar in concept to performing rights organizations for music compositions. Without the copyrighted content, and assuming no fair use, fair dealing or text and data mining (TDM) exceptions, commercial GAI tools would have been trained only on public domain works or works under Creative Commons-type licenses permitting commercial use, potentially leading to bias due to a smaller training set. This income could be split among rights holders based on market share.
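As a sketch of how such a blanket-license pool might be divided by market share, consider the following. The license rate, dollar amounts and rights-holder names are all hypothetical, chosen only to illustrate the arithmetic:

```python
def split_blanket_pool(gai_income: float, license_rate: float,
                       market_shares: dict[str, float]) -> dict[str, float]:
    """Split a compulsory blanket-license pool among rights holders
    in proportion to market share. All inputs are hypothetical."""
    pool = gai_income * license_rate
    total = sum(market_shares.values())
    return {holder: pool * share / total
            for holder, share in market_shares.items()}

# Hypothetical example: $100M of GAI income at a 5% license rate
payouts = split_blanket_pool(
    100_000_000, 0.05,
    {"Rights Holder A": 0.40, "Rights Holder B": 0.35, "Indie pool": 0.25},
)
# Rights Holder A receives 40% of the $5M pool, i.e. $2M
```

The shares are normalized by their total, so the same function works whether market shares are expressed as fractions of the whole market or only of participating rights holders.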

Prompts and Outputs

Downstream GAI outputs that provenance data and Copyright Office policy deem derivative works, or that implicate Digital Replica / NILV rights, would need to be directly licensed. We suggest a model similar to how TikTok tracks the creation of videos and the streams of those videos as two separate metrics, each with its own weight in the revenue-sharing calculation. These outputs could be further paid for in proportion to what is used, with the value of the use measured against the totality of the output. This is similar to YouTube's vertical and horizontal splitting of economics in its supply-side licenses: a share is provided to each type of IP in the output, and within each type of IP, the share is split pro-rata if there is more than one owner of the same type.
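The vertical/horizontal split described above can be sketched as follows. The IP types, their shares and the owner lists are hypothetical placeholders, not figures from any actual license:

```python
def split_output_revenue(revenue: float,
                         type_shares: dict[str, float],
                         owners: dict[str, list[str]]) -> dict[str, float]:
    """Vertical split: each type of IP in the output gets its share of revenue.
    Horizontal split: within a type, co-owners split pro-rata (equally here).
    All type shares and owner lists are hypothetical."""
    payouts: dict[str, float] = {}
    for ip_type, share in type_shares.items():
        type_pool = revenue * share
        holders = owners[ip_type]
        for holder in holders:
            payouts[holder] = payouts.get(holder, 0.0) + type_pool / len(holders)
    return payouts

# Hypothetical output using one composition (two co-writers) and one voice
payouts = split_output_revenue(
    1000.0,
    {"composition": 0.5, "voice": 0.5},
    {"composition": ["Writer A", "Writer B"], "voice": ["Artist C"]},
)
# Each co-writer gets half of the composition pool; Artist C gets the voice pool
```

A real implementation would weight the pro-rata split by ownership percentages rather than equal shares, but the two-stage structure is the point being illustrated.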

This might include modifiers to the weighting of revenues for certain works based on the specificity of the prompt engineering. This is similar to the 'Artist Centric' model referenced above, which provides a multiplier giving heavier weight to streams of content the user searched for directly, by artist or song, as opposed to music served algorithmically by the service. In the GAI context, an output made from a specific prompt using an artist's or work's name might be given a heavier weight when the compulsory license income is divided among participants.
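One way to sketch such a prompt-specificity multiplier, with a hypothetical 2x weight for uses where the prompt named the artist or work directly:

```python
def weighted_shares(uses: list[tuple[str, bool]],
                    direct_multiplier: float = 2.0) -> dict[str, float]:
    """Weight each use of a work by prompt specificity: uses where the
    prompt named the artist or work directly receive a heavier
    (hypothetical) multiplier, in the spirit of 'artist-centric'
    streaming models, then normalize into shares of the pool."""
    weights: dict[str, float] = {}
    for work, named_directly in uses:
        w = direct_multiplier if named_directly else 1.0
        weights[work] = weights.get(work, 0.0) + w
    total = sum(weights.values())
    return {work: w / total for work, w in weights.items()}

# Hypothetical: Work X named directly once; Work Y served generically twice
shares = weighted_shares([("Work X", True), ("Work Y", False), ("Work Y", False)])
# Work X's single directly-prompted use carries the same weight as
# Work Y's two generic uses, so each ends up with half the pool
```

The multiplier value itself would be a negotiated or regulated parameter; the sketch only shows how it changes the division of a fixed pool.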

Final Thoughts

Notwithstanding the foregoing, GAI tools that generate deepfakes of works should not be, and are not, permissible without the consent(s) of the rightsholder(s). This has been made clear in the recent ELVIS Act in Tennessee and the NO FAKES Act winding its way through Congress.