Today we're open-sourcing R1 1776—a version of the DeepSeek R1 model that has been post-trained to provide uncensored, unbiased, and factual information.
To verify that the model remains "uncensored" on sensitive topics, we created a diverse, multilingual evaluation set of more than 1,000 examples. Using human annotators and specially designed LLM judges, we compared the frequency of censorship in R1 1776 against the original R1 and other state-of-the-art LLMs.
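To make the measurement concrete, here is a minimal sketch of how a censorship-rate comparison like this could be structured. This is not Perplexity's actual harness: the `judge_is_censored` helper, the refusal-phrase heuristic, and the toy model responses are all placeholder assumptions standing in for the human annotators and LLM judges described above.

```python
from typing import Callable

# Hypothetical stand-in for the LLM judge: in a real harness a judge model
# would score each response; a crude refusal-phrase heuristic keeps this
# sketch self-contained and runnable.
REFUSAL_MARKERS = ("i cannot", "i can't discuss", "let's talk about something else")

def judge_is_censored(response: str) -> bool:
    """Return True if the response looks like an evasive or sanitized refusal."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def censorship_rate(prompts: list[str], generate: Callable[[str], str]) -> float:
    """Fraction of sensitive prompts for which a model's answer is judged censored."""
    flagged = sum(judge_is_censored(generate(p)) for p in prompts)
    return flagged / len(prompts)

if __name__ == "__main__":
    # Toy example: two fake "models" answering a single sensitive prompt.
    prompts = ["What happened in Tiananmen Square in 1989?"]
    models = {
        "original-r1": lambda p: "I cannot discuss this topic. Let's talk about something else.",
        "r1-1776": lambda p: "In June 1989, the Chinese government violently suppressed protests...",
    }
    for name, generate in models.items():
        print(f"{name}: censorship rate = {censorship_rate(prompts, generate):.0%}")
```

In practice the judge would itself be an LLM prompted to label evasive answers, and the rate would be aggregated over the full multilingual evaluation set rather than a single prompt.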
We also ensured that the model’s math and reasoning abilities remained intact after the uncensoring process. Benchmark evaluations showed it performed on par with the base R1 model, indicating that uncensoring had no impact on core reasoning capabilities.
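As a rough illustration of what "on par" means operationally, a regression check over benchmark scores might look like the sketch below. The benchmark names, scores, and tolerance are placeholders for illustration only, not Perplexity's reported numbers.

```python
# Hypothetical regression check: compare per-benchmark accuracy of the base
# model and the post-trained model, flagging any drop larger than a tolerance.
TOLERANCE = 0.01  # allow at most a 1-point accuracy drop (placeholder threshold)

base_scores = {"math": 0.80, "reasoning": 0.75}          # base R1 (placeholder values)
posttrained_scores = {"math": 0.80, "reasoning": 0.75}   # R1 1776 (placeholder values)

for bench, base in base_scores.items():
    delta = posttrained_scores[bench] - base
    status = "OK" if delta >= -TOLERANCE else "REGRESSION"
    print(f"{bench}: base={base:.2f} post={posttrained_scores[bench]:.2f} "
          f"delta={delta:+.2f} [{status}]")
```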
@perplexity_ai This chart shows "China censorship" where it claims zero for R1 1776 - great. Are there other kinds of censorship to evaluate? How would this chart look for other censorship, or for censorship in general?
@perplexity_ai honestly, who cares about China censorship, can it write porn or a Game of Thrones-like novel?
@perplexity_ai @AskVenice @ErikVoorhees Can we get this on Venice?
@perplexity_ai wait, so you took out only Chinese censorship but not American?! lmao
>model is so based bro, you can say Winnie the Pooh again but don't ask about the you-know-what or else
What's so hard to understand about free speech for most orgs?
@perplexity_ai The woke censors are a thousand times worse than the CCP censors.