The New York Times recently filed a lawsuit against Microsoft and OpenAI — the company that makes ChatGPT.
According to the lawsuit, the AI models both steal the news giant's work and spread false information about stories it produced.
The misinformation comes from AI “hallucinations,” which occur when AI makes up information. Attributing incorrect information to the Times hurts its credibility.
OpenAI and Microsoft claim that training their models on the Times’s work falls under the “fair use” doctrine, which permits limited use of copyrighted works.
The lawsuit seeks billions of dollars in damages and resembles other suits brought against tech giants by authors whose work was also used to train AI models.
The ethical questions here are big. Doesn't every writer learn and borrow style from other writers? But shouldn't writers and organizations be fairly compensated? Yes and yes.
I have written articles on AI ethics for the Reynolds Journalism Institute, SUCCESS Magazine, and the now-defunct Habtic Standard. I am still torn.
In her new book, Unmasking AI, Joy Buolamwini describes her shift from wanting more diverse training data to worrying about informed consent and the potential misuse of biometric data for facial recognition. But writing isn't biometric data, and artists take inspiration from one another all the time.
I haven't decided how I feel yet, but I do believe that writers should be paid — especially when it relates to technology that attempts to replace them.
I'm Nia Norris and this is my perspective.