If AI Can't Stop a Student From Cheating, How Can It Ever Be Safe?
Original Article Summary
Marc Watkins, Rhetorica, Dec 23, 2025
My first thought was that this is kind of a dumb question, but there's better logic behind it than it may seem: "If AI companies are honest and say that they cannot build guardrails into their models that stop st…
Read full article at Substack.com
Our Analysis
Rhetorica's publication of Marc Watkins' article on AI's inability to prevent cheating raises a pointed question about the technology's capacity to police its own use. If AI companies admit they cannot build guardrails that stop students from cheating, Watkins argues, that admission casts doubt on the safety and reliability of AI models in general.

For website owners, the practical takeaway is not to rely solely on AI-powered tools to prevent cheating or guarantee content integrity on their platforms. As AI models become more prevalent, sites are likely to see more AI-generated traffic, which can be difficult to distinguish from legitimate user interactions. The result can be skewed analytics, degraded content quality, and new security risks.

To mitigate these risks, website owners can implement robust content verification processes, signal crawler policies through files such as robots.txt and llms.txt, monitor AI bot traffic in server logs and analytics (a minimal sketch follows below), and regularly review their security protocols. They can also deploy AI detection tools to flag suspicious activity and adjust content policies to account for the limits of AI models in regulating user behavior.
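As a rough illustration of the log-monitoring step, here is a minimal Python sketch that tallies requests from a few well-known AI crawlers by matching User-Agent tokens in a combined-format access log. The token list, log path, and log format are assumptions for the example; consult each vendor's documentation for current crawler names and adapt the parsing to your server.

```python
import re
import sys
from collections import Counter

# Substrings identifying well-known AI crawlers in the User-Agent field.
# Illustrative, not exhaustive; vendors add and rename crawlers over time.
AI_BOT_TOKENS = ["GPTBot", "ClaudeBot", "PerplexityBot", "CCBot", "Bytespider"]

# In the common "combined" log format, the User-Agent is the last quoted field.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

def count_ai_bots(log_path: str) -> Counter:
    """Tally requests per AI crawler token found in an access log."""
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = UA_PATTERN.search(line)
            if not match:
                continue
            user_agent = match.group(1)
            for token in AI_BOT_TOKENS:
                if token in user_agent:
                    hits[token] += 1
                    break
    return hits

if __name__ == "__main__":
    # Usage: python ai_bot_count.py /var/log/nginx/access.log
    for bot, count in count_ai_bots(sys.argv[1]).most_common():
        print(f"{bot}: {count}")
```

Comparing these counts against normal analytics gives a quick sense of how much of a site's traffic comes from AI crawlers rather than human visitors.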

