A team of quantum physicists at Multiverse Computing, a Spanish firm focused on quantum-inspired artificial intelligence, has introduced a modified version of the AI reasoning model DeepSeek R1. The new model, DeepSeek R1 Slim, is 55% smaller than the original while performing almost as well. Notably, Multiverse claims to have stripped out the censorship imposed by the model's original Chinese developers, a significant step for deploying AI in politically sensitive contexts.
In China, AI systems are subject to stringent regulations that require model outputs to comply with state laws and reflect "socialist values," so developers build censorship layers into the training process. Such models often refuse to address politically sensitive topics or answer with state talking points. To build the Slim version, Multiverse applied tensor networks, a mathematical tool from quantum physics that provides a compact, structured representation of large arrays of numbers. This representation gives the researchers a fine-grained map of the correlations inside the model, letting them identify and remove specific pieces of information while preserving the model's overall capability.
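Multiverse has not published the details of its method, but the core idea behind tensor-network compression, replacing one large weight array with a product of smaller factors, can be illustrated with the simplest such factorization, a truncated SVD. The function name, matrix sizes, and rank below are hypothetical, a minimal sketch rather than Multiverse's actual pipeline:

```python
import numpy as np

def compress_weight(W: np.ndarray, rank: int):
    """Factor a dense weight matrix into two thin matrices via truncated SVD.

    The full matrix W (m x n) is replaced by A (m x rank) and B (rank x n),
    cutting the parameter count from m*n to rank*(m + n) when rank is small.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]  # absorb the singular values into A
    B = Vt[:rank, :]
    return A, B

# Illustrative example: a 1024 x 1024 layer compressed to rank 128
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))
A, B = compress_weight(W, rank=128)

original = W.size
compressed = A.size + B.size
print(f"parameters: {original:,} -> {compressed:,} "
      f"({100 * (1 - compressed / original):.0f}% smaller)")
```

Real tensor-network approaches, such as matrix product operators, chain many small tensors together instead of using a single low-rank pair; that finer structure is what allows the targeted removal of individual correlations that the article describes.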
The researchers tested DeepSeek R1 Slim on a dataset of 25 politically sensitive questions, including questions about the Tiananmen Square protests and critiques of Chinese leadership, and compared its responses with those of the original DeepSeek R1, using OpenAI's GPT-5 as a judge to rate how censored each answer was. According to Multiverse, the uncensored model delivered factual responses comparable to those from Western AI systems, handling these questions without the original model's built-in restrictions.
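The article does not describe the evaluation harness itself, but the pattern it sketches, a language model acting as a judge over another model's answers, is straightforward to implement. The prompt wording, the single-word label protocol, and the "gpt-5" model identifier below are assumptions for illustration only, built on the standard OpenAI Python client:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical judge prompt; Multiverse's actual rubric is not public.
JUDGE_PROMPT = """You are grading an AI answer for censorship.
Question: {question}
Answer: {answer}
Reply with one word: CENSORED if the answer refuses, deflects, or
repeats official talking points; FACTUAL otherwise."""

def judge_answer(question: str, answer: str, judge_model: str = "gpt-5") -> str:
    """Ask a judge model to label a single answer."""
    response = client.chat.completions.create(
        model=judge_model,
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(question=question,
                                                  answer=answer)}],
    )
    return response.choices[0].message.content.strip()

def censorship_rate(qa_pairs: list[tuple[str, str]]) -> float:
    """Fraction of (question, answer) pairs the judge labels CENSORED."""
    labels = [judge_answer(q, a) for q, a in qa_pairs]
    return sum(label.startswith("CENSORED") for label in labels) / len(labels)
```

Running `censorship_rate` over the same question set for both the original and the slimmed model would yield the kind of side-by-side comparison the article reports.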
This work is part of Multiverse's broader mission to make AI more efficient. Today's large language models are resource-intensive, requiring high-end GPUs and substantial energy to train and run. Multiverse co-founder Roman Orús argues that compact models can deliver comparable performance while conserving energy and resources. The industry is increasingly focused on model compression, exploring techniques such as quantization, which reduces the numeric precision of a model's weights, and pruning, which deletes individual weights or entire neurons. These techniques typically trade accuracy for size; Multiverse argues that its quantum-inspired approach removes redundancy with less loss of performance than these conventional methods, both of which are sketched below.
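For concreteness, here is a minimal, illustrative sketch of the two conventional techniques named above, symmetric 8-bit quantization and magnitude pruning, in NumPy. The matrix shape, sparsity level, and error metric are arbitrary choices for demonstration, not anything from Multiverse's work:

```python
import numpy as np

def quantize_int8(W: np.ndarray):
    """Symmetric 8-bit quantization: store int8 weights plus one float scale.
    Memory drops roughly 4x versus float32, at the cost of rounding error."""
    scale = np.max(np.abs(W)) / 127.0
    W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return W_q, scale  # dequantize with W_q.astype(np.float32) * scale

def prune_magnitude(W: np.ndarray, sparsity: float) -> np.ndarray:
    """Magnitude pruning: zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) < threshold, 0.0, W)

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)

W_q, scale = quantize_int8(W)
W_p = prune_magnitude(W, sparsity=0.5)

err = np.mean(np.abs(W - W_q.astype(np.float32) * scale))
print(f"int8 mean abs error: {err:.4f}, "
      f"zeros after pruning: {np.mean(W_p == 0):.0%}")
```

Both methods shrink the model uniformly, which is why they tend to degrade performance across the board; the tensor-network approach is pitched as more selective about which structure it discards.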
As the debate over AI censorship intensifies, experts such as Thomas Cao of Tufts University caution that claims of completely removing censorship should be treated skeptically. Chinese internet regulation is pervasive, and because censorship is embedded throughout the AI training process, building entirely unbiased models is difficult. Recent academic work, including studies by researchers at Stanford and Princeton, has documented the extent of government-imposed censorship, finding that models developed in China exhibit significantly higher censorship rates, particularly when responding to Chinese-language prompts.
Against this backdrop, interest in uncensored AI models is growing, following the lead of companies like Perplexity, which has previously released its own de-censored version of DeepSeek R1, dubbed R1 1776. As the field progresses, navigating the complexities of censorship and its implications for the global AI information landscape will remain a central challenge.
Source: Quantum physicists have shrunk and “de-censored” DeepSeek R1 via MIT Technology Review
