Ollama’s version is distilled onto Qwen or Llama depending on the parameter size, so it’s going to behave very differently from the original model.
Except that if you look at the top of OP’s picture, they are also running deepseek-r1:14B through Ollama. I downloaded my copy on Sunday, so these should be fairly comparable situations.
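For anyone who wants to poke at a comparable local setup, here’s a rough sketch using Ollama’s Python client. It assumes the Ollama server is running locally, the deepseek-r1:14b tag has already been pulled, and the prompt is just a placeholder:

```python
# Minimal sketch: query a locally pulled deepseek-r1:14b through Ollama's Python client.
# Assumes `pip install ollama`, the Ollama server running, and `ollama pull deepseek-r1:14b` already done.
import ollama

response = ollama.chat(
    model="deepseek-r1:14b",  # the distilled 14B tag from the Ollama model library
    messages=[{"role": "user", "content": "Hello, which model are you?"}],  # placeholder prompt
)
print(response["message"]["content"])
```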
I agree though that none of this applies to the full cloud-hosted model. I don’t want my account banned, so I’m not much for testing these boundary pushes in a surveilled environment. I imagine they have additional controls on the web version, including human intervention.
I… replied to a comment instead of the OP. Doh!