“Copyright noises” around Generative AI

Consider this:

Things are about to get a lot worse for Generative AI

The cat is out of the bag:

  • Generative AI systems like DALL-E and ChatGPT have been trained on copyrighted materials.
  • OpenAI, despite its name, has not been transparent about what its systems have been trained on.
  • Generative AI systems are fully capable of producing materials that infringe on copyright.
  • They do not inform users when they do so.
  • They do not provide any information about the provenance of any of the images they produce.
  • Users may not know, when they produce any given image, whether they are infringing.

As I write this, the New York Times (yes, the same rag that calls itself “journalism”) has sued OpenAI.

In all likelihood, the New York Times lawsuit is just the first of many. In a multiple-choice X poll today, I asked people whether they thought the case would settle (most did) and what the likely value of such a settlement might be. Most answers were $100 million or more, and 20% expected the settlement to be a billion dollars. When you multiply figures like these by the number of film studios, video game companies, other newspapers, and so on, you are soon talking real money.

It is because of this:

These are known downsides (and a known risk of doing business, even for a “non-profit”). Expect a lot more hand-wringing and drama around this.

Pretty bad. I still maintain that generative AI is primed to replace browsers and “usher in a new era of the Internet”. It serves no other useful purpose.
