Late one night in 2017, as Editor of City AM, I asked our picture desk for an image to accompany a story about some proposed infrastructure investment. I forget the details, but it had something to do with a tunnel somewhere and I thought an aerial shot of the location would be useful. The picture editor found an image on file and it did the job nicely. Into the paper it went.
A week later I received an email from a photographer that read more like a lawyer’s letter. In fact, it was exactly like a lawyer’s letter, right down to the concluding line which said something like “I have a one hundred per cent success rate in pursuing publishers for payment regarding image rights and breach of copyright.”
It turned out that the photographer had noticed the use of an image - his image - and had swung into action to assert his rights and extract his fee. Fair enough, I thought, and we settled the bill. I got the distinct impression that this photographer made a healthy revenue by chasing down unauthorised use of his images, and fair play to him for that.
This same debate is now taking place on an industrial scale, only it's the news and publishing industry attempting to assert its rights against the mighty AI companies.
We know that concerns about copyright infringement have been cited by businesses as a barrier to AI adoption. The question is, do the AI companies themselves have any such reticence? It would appear that they do not.
News publishers in the US have been leading the charge against AI firms, with the New York Times currently engaged in a bitter legal dispute with OpenAI and Microsoft:
The New York Times is suing Microsoft and OpenAI for billions of dollars over copyright infringement, alleging that the powerful technology companies used its information to train their artificial intelligence models and to “free-ride”.
The use of its data without permission or compensation undermined its business model and threatened independent journalism “vital to our democracy”, the media organisation said in documents filed with a federal court in Manhattan.
The lawsuit has laid bare the controversial use by technology companies of accurate and high-quality data provided by content creators, such as journalists, which are needed to power the “large-language models” that form the backbone of generative artificial intelligence.
OpenAI has hit back, accusing the New York Times of “hacking” its system to generate deliberately misleading results to support its case.
Billions of dollars are on the line in this one case alone.
Getty Images has launched legal action against London-based Stability AI in the UK and US, claiming that the generative AI firm unlawfully scraped millions of its images to train its model. Stability AI rejects the accusation.
Meanwhile, a trio of music publishers, including Universal Music, is pursuing Anthropic, the AI company backed by Google and Amazon, over the allegedly unlawful use of song lyrics:
The complaint accused Anthropic of infringing the publishers' copyrights in lyrics from at least 500 songs by musicians including Beyoncé, the Rolling Stones and The Beach Boys. The publishers claim Anthropic misused the lyrics as part of the "massive amounts of text" that it scrapes from the internet to train Claude [Anthropic’s chatbot] to respond to human prompts.
This is fascinating and evolving legal territory, and demonstrates yet again the complexities and tensions generated in the wake of AI’s seemingly unstoppable progress.
In the UK, the Publishers Association (whose members include Penguin Random House, HarperCollins and Oxford University Press) wrote last week to 50 tech companies including Google, Meta and OpenAI, warning the tech giants that they must pay to use or access books and articles produced by their members. The letter stated:
“Our members do not, outside of any agreed licensing arrangements to the contrary, authorise or otherwise grant permission for the use of any of their copyright-protected works in relation to, without limitation, the training, development or operation of AI models including large language models or other generative AI products.”
As is often the case, the “move fast and break things” approach of tech companies has left a lot of, well, breakages in its wake.
In the case of Meta, its chief scientist, Yann LeCun, added insult to injury at the start of this year by claiming that:
Only a small number of book authors make significant money from book sales. This seems to suggest that most books should be freely available for download…the lost revenue for authors would be small, and the benefits to society large by comparison.
You can imagine how well that went down among the literary community.
Some voices in this debate take the view that this is all about open information, pooling common knowledge and widening access. But such a peace-and-love perspective is somewhat at odds with the billions and billions of dollars being generated by a relatively small number of individuals who have found a way to capitalise on the collective energy, wisdom, creativity and hard work of pretty much everyone else.
If we ever reach a point where we stop being ‘blown away’ by the latest leap forward in the world of AI, there’s going to have to be a reckoning, and the world’s journalists, authors, songwriters and photographers deserve a slice of the AI pie.
While some businesses are attempting to use the courts to force a payday, governments and regulators are seemingly much slower off the mark.
In the UK, a House of Lords report earlier this year was unambiguous in its conclusions, with members stating they:
…do not believe it is fair for tech firms to use rightsholder data for commercial purposes without permission or compensation, and to gain vast financial rewards in the process…
…The application of the law to LLM processes is complex, but the principles remain clear…The point of copyright is to reward creators for their efforts, prevent others from using works without permission, and incentivise innovation.
Despite this, the government has been dragging its feet on any legislative response to the pillaging of content by tech giants. The UK’s Intellectual Property Office has been trying to bring all sides together under a voluntary agreement, and the EU is weaving the issue into its behemoth AI regulatory agenda. Meanwhile, some outlets such as The Times and the BBC have blocked OpenAI’s access to their websites, while others - such as Politico - have entered into formal licensing arrangements with AI companies.
Whether through legal action, government regulation or industry initiatives, a form of coexistence will emerge, in time. Whether anyone will still need songwriters, photographers and writers by the time that point arrives is another question entirely.
Applications you may have missed
A robot passing someone an apple may not blow you away, but OpenAI claims that Figure 01 (to give the poor robot its name) selected the apple because the man said he was hungry. Figure 01 goes on to sort some dirty dishes, based on its understanding of where such items ought to go.
In this video, the man has a conversation with Figure 01, demonstrating (in the words of OpenAI) “high-level visual and language intelligence” and “fast, low-level, dexterous robot actions.” A viewer’s first instinct might be to wonder whether the exchange is staged or pre-determined, but it’s highly unlikely OpenAI would release anything as reputationally risky as that. What’s more likely is that Figure 01 will be serving you burgers and fries within a few years, and who knows, maybe performing all manner of domestic or professional tasks not long after that.
That said, a new report suggests exposure to robots and AI at work leads to a deterioration in quality of life among the human workforce. The Guardian reports:
Exposure to new technologies including trackers, robots and AI-based software at work is bad for people’s quality of life, according to a groundbreaking study from the Institute for the Future of Work.
Based on a survey of more than 6,000 people, the thinktank analysed the impact on wellbeing of four groups of technologies that are becoming increasingly prevalent across the economy. The authors found that the more workers were exposed to technologies in three of these categories – software based on AI and machine learning; surveillance devices such as wearable trackers; and robotics – the worse their health and wellbeing tended to be.
By contrast, use of more long-established information and communication technologies (ICTs) such as laptops, tablets and instant messaging at work tended to have a more positive effect on wellbeing.
“We found that quality of life improved as the frequency of interaction with ICTs increased, whereas quality of life deteriorated as frequency of interaction with newer workplace technologies rose,” the report said.
While the authors did not directly investigate the causes, they pointed out that their findings were consistent with previous research which showed, “such technologies may exacerbate job insecurity, workload intensification, routinisation and loss of work meaningfulness, as well as disempowerment and loss of autonomy, all of which detract from overall employee wellbeing”.
Despite this rather gloomy take, the legal sector continues to provide fertile ground for emerging AI tech, with DraftWise, which offers an AI-powered contracts tool for lawyers, raising another $20m in investment, according to Reuters:
New York-based DraftWise's product helps lawyers draft and negotiate contracts — a longtime focus of legal technology developers that has increasingly incorporated AI.
DraftWise was founded by former Palantir engineers James Ding and Emre Ozen, alongside Ozan Yalti, who was a lawyer at global law firm Clifford Chance. The company got its start as part of Silicon Valley startup incubator Y Combinator in the summer of 2020 and later participated in law firm incubator programs, including at U.K. law firms Mishcon de Reya and Allen & Overy.
Ding said the company has differentiated itself by using law firms' records of past client work and other unique data to help them better tailor their contracts. Having access to that history of work in one place can help in the negotiating process, he said.
That’s it for this week. Thanks for reading The Application.
Christian