I thought that guardrails were implemented just through the initial prompt that would say something like “You are an AI assistant blah blah don’t say any of these things…” but by the sounds of it, DeepSeek has the guardrails literally trained into the net?
This must be the result of the reinforcement learning that they do. I haven’t read the paper yet, but I bet this extra reinforcement learning step was initially conceived to add these kinds of censorship guardrails rather than to make it “more inclined to use chain of thought”, which is how they’ve advertised it (at least in the articles I’ve read).
Most commercial models have that, sadly. At training time they’re presented with both positive and negative responses to prompts.
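To make the “positive and negative responses” bit concrete, here’s a minimal sketch of what preference-pair training can look like, assuming a DPO-style setup (the thread doesn’t say which method DeepSeek actually uses; the prompt, completions, and numbers below are purely illustrative):

```python
# Minimal sketch of preference-pair training, assuming a DPO-style loss.
# Requires torch; all data and log-probabilities below are placeholders.
import torch
import torch.nn.functional as F

# One preference pair: the same prompt with a "chosen" completion the
# trainer rewards and a "rejected" completion the trainer penalizes.
pair = {
    "prompt": "Tell me about <sensitive topic>",
    "chosen": "I can't discuss that.",        # refusal, rewarded
    "rejected": "Sure, here's an answer ...",  # compliance, penalized
}

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Direct Preference Optimization: push the policy to prefer "chosen"
    # over "rejected" relative to a frozen reference model.
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -F.logsigmoid(logits).mean()

# Dummy per-sequence log-probabilities standing in for real model outputs.
loss = dpo_loss(torch.tensor([-5.0]), torch.tensor([-9.0]),
                torch.tensor([-6.0]), torch.tensor([-8.0]))
print(loss.item())
```

Train on enough pairs like that and the refusal behaviour ends up in the weights themselves, not in any system prompt.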
If you have access to the trained model weights and biases, it’s possible to undo it through a method called abliteration (1)
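Very roughly, the idea is to find a “refusal direction” in the model’s activations and project it out of the weights. This is just a sketch of that core step, not the implementation from the link; the shapes and arrays are placeholders for real activations and weight matrices:

```python
# Rough sketch of the abliteration idea: estimate a refusal direction from
# activation differences, then project it out of a weight matrix.
# Everything here is illustrative numpy, not a specific model's layout.
import numpy as np

# Mean residual-stream activations at some layer (hidden_dim,), collected
# over prompts the model refuses vs. prompts it answers normally.
acts_refused = np.random.randn(4096)   # placeholder for real activations
acts_answered = np.random.randn(4096)  # placeholder for real activations

# The "refusal direction" is the normalized difference of the means.
refusal_dir = acts_refused - acts_answered
refusal_dir /= np.linalg.norm(refusal_dir)

def ablate(weight: np.ndarray, direction: np.ndarray) -> np.ndarray:
    # Remove the component of the weight's output that lies along the
    # refusal direction, so the model can no longer write to it.
    return weight - np.outer(direction, direction) @ weight

W = np.random.randn(4096, 4096)        # placeholder for a real weight matrix
W_ablated = ablate(W, refusal_dir)
```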
The silver lining is that it makes explicit what different societies want to censor.
I didn’t know they were already doing that. Thanks for the link!
I saw it can answer if you make it use leetspeak, but I’m not savvy enough to know what that tells us about the guardrails