The New York Times sues Perplexity for producing ‘verbatim’ copies of its work

Original Article Summary
The New York Times has escalated its legal battle against the AI startup Perplexity, as it’s now suing the AI “answer engine” for allegedly producing and profiting from responses that are “verbatim or substantially similar copies” of the publication’s work. T…
Read full article at Biztoc.com

Our Analysis
The New York Times' lawsuit against Perplexity over "verbatim" copies of its work underscores a growing concern: AI answer engines can reproduce and profit from original reporting without permission. For website owners, especially publishers of high-quality original content, the practical question is how to detect and limit this kind of reuse. As AI answer engines advance, the risk that content is scraped and republished without attribution or licensing grows, so monitoring AI bot traffic and controlling crawler access both matter. Website owners can take several concrete steps:

1. Regularly review server access logs to spot AI crawler activity (a minimal log-scanning sketch follows below).
2. Implement content protection measures, such as robots meta tags or HTTP headers that signal usage restrictions, or watermarking images.
3. Update robots.txt to disallow AI crawlers such as PerplexityBot, and optionally publish an llms.txt file to state content-usage preferences for LLMs; note that robots.txt is the established opt-out mechanism, while llms.txt is a newer, advisory convention rather than an enforcement tool.
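As a starting point for step 1, here is a minimal sketch (not a production tool) that scans a combined-format access log for known AI crawler user agents. The filename access.log and the user-agent substrings are assumptions for illustration; check each vendor's documentation for the tokens their crawlers actually send.

```python
# Sketch: count requests from AI crawlers in a combined-format access log.
# The substrings below are illustrative; verify them against vendor docs.
import re
from collections import Counter

AI_BOT_SUBSTRINGS = [
    "GPTBot",          # OpenAI
    "ClaudeBot",       # Anthropic
    "PerplexityBot",   # Perplexity
    "CCBot",           # Common Crawl
]

def count_ai_bot_hits(log_path: str) -> Counter:
    """Count requests per AI bot substring in a combined-format access log."""
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            # In the combined log format the user agent is the last quoted field.
            quoted = re.findall(r'"([^"]*)"', line)
            user_agent = quoted[-1] if quoted else ""
            for bot in AI_BOT_SUBSTRINGS:
                if bot.lower() in user_agent.lower():
                    hits[bot] += 1
    return hits

if __name__ == "__main__":
    for bot, count in count_ai_bot_hits("access.log").most_common():
        print(f"{bot}: {count} requests")
```

For step 3, a robots.txt along these lines asks specific AI crawlers not to fetch any pages; well-behaved crawlers honor it, but it is a request, not a technical block:

```
User-agent: PerplexityBot
Disallow: /

User-agent: GPTBot
Disallow: /
```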

