Anthropic users face a new choice – opt out or share your chats for AI training 

Anthropic is making significant changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used to train AI models. That is a major shift. Previously, users of Anthropic’s consumer products were told that their prompts and conversation outputs would be automatically deleted from Anthropic’s back end within 30 days “unless legally or policy‑required to keep them longer.”

Source: Anthropic users face a new choice – opt out or share your chats for AI training | TechCrunch
