I keep seeing this sentiment, but in order to run the model on a high end consumer GPU, doesn’t it have to be reduced to like 1-2% of the size of the official one?
Edit: I just did a tiny bit of reading and I guess model size is a lot more complicated than I thought. I don’t have a good sense of how much it’s being reduced in quality to run locally.
You're on the right track, still. All these people touting it as an open model likely haven't even tried to run it locally themselves. The hosted version is not the same as what is easily runnable locally.
Just think of it this way: fewer digital neurons in smaller models mean a smaller "brain". It will be less accurate, more vague, and make more mistakes.
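To put rough numbers on the size question above: a minimal sketch of the weight-storage arithmetic, assuming a hypothetical 70B-parameter model and counting only the weights (ignoring KV cache and activation overhead). It shows that quantization alone shrinks a model about 2-4x, so getting down to "1-2% of the size" means running a different, much smaller model, not just a compressed copy of the big one.

```python
def weight_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Decimal GB needed just to hold the weights at a given precision."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# Hypothetical 70B model at common precisions:
# fp16 -> 140 GB, int8 -> 70 GB, int4 -> 35 GB.
# Even aggressive 4-bit quantization leaves ~35 GB of weights,
# beyond a typical 24 GB consumer GPU -- hence smaller "brains" locally.
for bits in (16, 8, 4):
    print(f"{bits}-bit: {weight_vram_gb(70, bits):.1f} GB")
```

The takeaway matches the comment: fitting a high-end consumer GPU usually means both quantizing and dropping to a model with far fewer parameters.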