In a battle that has been brewing for some time, traditional media has found an unexpected ally in the fight against artificial intelligence (AI) firms, as an AI expert has come forward to support their cause. Ed Newton-Rex, a former executive at Stability AI, has taken a stand on behalf of the original creators of web content, who often find their work being used without permission to train AI chatbots. This practice has raised concerns about the ethical and legal implications of AI information-gathering.
As AI continues to advance, chatbots are becoming increasingly prevalent in various industries. These chatbots rely on vast amounts of data to learn and respond to queries. This data is often sourced from the internet, encompassing a wide range of content, including copyrighted works. That’s where the problem lies.
Courts worldwide will soon need to weigh in on this matter, as legislation is being fast-tracked to address the concerns raised by traditional media. Chatbot operators will be required to disclose how they train their AI models, shedding light on the ethics and legality of their practices.
OpenAI, a prominent developer in the field, has acknowledged the issue, stating that tools like its ChatGPT would be impossible to train without access to copyrighted material. In a recent submission to the House of Lords, OpenAI argued that because copyright today extends to virtually every form of human expression found on the internet, including blog posts, photographs, software code, and government documents, training today's leading AI models without using copyrighted material would be impossible.
On the other side of the debate, AI firms defend their practices by citing “fair use” rules that permit the limited use of copyrighted material without explicit permission from the copyright holder. Under fair use, it is argued that the incorporation of web content into training models falls within the bounds of legality. However, these claims are being challenged.
The New York Times has filed a lawsuit against OpenAI and Microsoft, accusing them of unlawfully using its journalism to train AI models. Similarly, the photo agency Getty Images is suing Stability AI for using its image library without authorization in the creation of Stability's AI models.
Ed Newton-Rex, who has now joined the ranks of those opposing AI firms, expressed doubts about the validity of fair use in this context. Speaking on BBC Radio 4's Today programme, he argued, "There's a very strong argument that generative AI training doesn't fall under the fair use exception." Legal clarity is urgently needed: many AI firms are already training their models on a wide range of content, and the lawfulness of that practice remains unresolved.
As this debate heats up, it is clear that AI's growth is raising fascinating and complex questions about the future of web content and copyright. Traditional media, long seen as under threat from the digital age, now finds itself fighting alongside AI experts against the potential abuse of copyrighted material. Legislation and court rulings will undoubtedly play a crucial role in defining the boundaries of AI information-gathering and ensuring a fair and ethical playing field for all involved.