OpenAI won’t watermark ChatGPT text because its users could get caught
![Vector illustration of the ChatGPT logo.](https://cdn.vox-cdn.com/thumbor/nilLDeYow5Roh4LkAVsb-ryA98U=/0x0:2040x1360/1310x873/cdn.vox-cdn.com/uploads/chorus_image/image/73501540/STK155_OPEN_AI_CVirginia_D.0.jpg)
OpenAI has had a system for watermarking ChatGPT-created text, along with a tool to detect the watermark, ready for about a year, reports The Wall Street Journal. But the company is divided internally over whether to release it. On one hand, releasing it seems like the responsible thing to do; on the other, it could hurt OpenAI's bottom line.
OpenAI’s watermarking is described as adjusting how the model predicts the most likely words and phrases that will follow previous ones, creating a detectable pattern. (That’s a simplification, but you can check out Google’s more in-depth explanation of Gemini’s text watermarking for details.)
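OpenAI hasn’t published its scheme, but the general flavor of this kind of statistical text watermark can be sketched with the publicly described “green list” approach (from academic work by Kirchenbauer et al., not OpenAI’s actual method): a hash of the previous token deterministically splits the vocabulary into favored and disfavored halves, generation is nudged toward the favored half, and a detector recomputes the split to see whether favored tokens appear suspiciously often. Everything below — the toy vocabulary, the `bias` parameter, the uniform “model” — is an illustrative assumption:

```python
import hashlib
import random

# Toy vocabulary standing in for a real tokenizer's vocab (assumption).
VOCAB = [f"word{i}" for i in range(1000)]

def green_list(prev_token, vocab, fraction=0.5):
    # Seed a PRNG with a hash of the previous token so the same
    # vocabulary partition can be recomputed at detection time.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def generate(length, vocab, bias=0.9):
    # Toy "model": picks tokens uniformly, but with probability `bias`
    # restricts the choice to the green list of the previous token.
    # A real model would instead add a small logit boost to green tokens.
    rng = random.Random(0)
    tokens = [rng.choice(vocab)]
    for _ in range(length - 1):
        greens = green_list(tokens[-1], vocab)
        pool = sorted(greens) if rng.random() < bias else vocab
        tokens.append(rng.choice(pool))
    return tokens

def green_fraction(tokens, vocab):
    # Detection: recompute each green list and count how often the
    # next token landed in it. Unwatermarked text hovers near 0.5;
    # watermarked text sits well above that.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, vocab))
    return hits / (len(tokens) - 1)
```

Because the partition depends only on the text itself (plus a shared hashing key), the detector needs no access to the model, which is what makes a detection tool practical to ship separately from ChatGPT.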
Offering any way to detect AI-written material is a potential boon for teachers trying to deter students from turning over writing...