Oh, and you HAVE to try the new Qwen 2.5 14B.
The whole lineup is freaking sick: the 32B outscores Llama 3.1 70B in a lot of benchmarks, and in personal use it feels super smart.
You can try a smaller IQ3 imatrix quantization to speed it up, but 22B is indeed tight for 8GB.
If someone comes out with an AQLM quantization for it, it might fit completely in VRAM, but I’m not sure AQLM even works on a Pascal card, TBH.
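As a rough sanity check on whether a quant fits: weight size is just parameter count times bits-per-weight. The ~3.66 bpw figure for IQ3_M below is an approximate llama.cpp value (assumption), and this ignores KV-cache overhead:

```python
# Back-of-the-envelope GGUF weight size: parameters * bits-per-weight / 8.
# The ~3.66 bpw figure for IQ3_M is an approximate llama.cpp value (assumption).
def gguf_size_gb(params_billions: float, bpw: float) -> float:
    return params_billions * bpw / 8  # weight file size in GB, no KV cache

print(f"22B @ IQ3_M: ~{gguf_size_gb(22, 3.66):.1f} GB")  # ~10.1 GB, over an 8 GB card
```

So even at IQ3, a 22B’s weights alone blow past 8 GB, which is why it feels tight.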
If they silently ignore this (as they seem to be doing?), it just screams “having their cake and eating it too” with regard to whatever WotC imposed on them.
Technically they did not violate the contract. Maybe.
What? You want us to fix this, WotC? Well, you see, that would be quite expensive…
Facebook just didn’t release the code for llama imagegen.
The model you are looking for now is Flux.
TBH this is a great space for modding and local LLMs / LLM “hordes”
Demonizing spaces for like-minded people to congregate doesn’t solve that.
If this is a polite way of saying “go somewhere else to lightly criticize democrats,” I don’t accept that. I can at least hope Lemmy can do better, and try to change it.
Of course having a good information diet is critical. But that’s beside the point? I don’t think this thread would be a thing if all our information diets were great.
It just doesn’t resonate with voters.
I think many voters “feel” tech getting junky, but the connection to why is just way too complicated for most to dig into. It’s not a direct line like tipping waiters or getting abortions.
I’m with Shepard on this one, even if he’s being a jerk about it.
Lemmy is a filter bubble, an echo chamber. You miss information that would be personally important to you, but is excluded because it doesn’t fit with the US Democrat party line, and the very specific part of it Lemmy’s politically active base likes.
Like, I’m a raging Trump hater, but I’m kind of aghast at how many knee jerk reactions (like, to me, your original reply) I get when I imply something vaguely critical about the Democrats.
What does that have to do with internet privacy legislation?
This is not just a partisan issue. As the article points out, it’s been like this for 30 years. The Dems failed to pass any meaningful legislation too.
It’s because it makes gobs of money that both parties are taking, and it also kind of projects US power to other countries since US tech is doing most of the data collection.
The fediverse doesn’t actively optimize for attention like commercial platforms. No notification spam and random pings on your phone, no sorting and throwing suggestions in your face by some algorithm that’s trying to keep you glued to the screen. It’s like night-and-day, IMO.
Sorting and such is just to try and bubble up interesting stuff.
One major problem it still has is encouraging filter bubbles, which have the secondary effect of sucking people in.
What’s the benefit?
Like, what’s the actual user experience gain from seeing someone else’s votes? Is it just so the average joe can profile users, like for identifying bots or whatever? That’s not rhetorical, I’m genuinely curious, as I don’t see what I’d gain from this as a Lemmy user.
But as I see it, I really have no desire to do this. Maybe if I was a pseudo-mod on a spammy community, I guess? But comments are already a decent indicator.
There is no movie in Ba Sing Se.
Here we are safe. Here we are free.
It’s crazy that Twitter has such an outsized influence on the public, and I think it’s because news outlets amplify it so much.
It doesn’t have that many active users. And when something makes a lot of noise and reaches many eyeballs on other platforms, the news rarely covers it.
Eh, Elon Musk and the crypto industry are not “Big Tech” to me.
Peter Thiel kinda is, but he’s also kind of a black sheep there. And their support of Republicans isn’t exactly surprising.
I’m all for this…
But aren’t the Democrats also kinda the party of Big Tech?
It would be amazing if campaigns started having a Fediverse presence, but still.
Yeah, well, I have been using base models and a few instruct tunes for a bit and haven’t gotten a single refusal, as long as there’s enough existing context.
IMO guardrails have been irrelevant for “local” models forever, since a little prompt engineering or manipulation blows right past them.
In theory the base model should be less “censored,” but really it’s just for raw completion/continuation and further finetuning.
A Qwen 2.5 14B IQ3_M should completely fit in your VRAM, with longish context, with acceptable quality.
An IQ4_XS will just barely overflow but should still be fast at short context.
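A quick back-of-the-envelope check on those fits (the bits-per-weight figures are approximate llama.cpp values, and real usage adds KV cache and CUDA overhead on top):

```python
# Rough GGUF weight size: parameters * bits-per-weight / 8 (ignores KV cache).
# The bpw figures for IQ3_M (~3.66) and IQ4_XS (~4.25) are approximations.
def gguf_size_gb(params_billions: float, bpw: float) -> float:
    return params_billions * bpw / 8

for quant, bpw in [("IQ3_M", 3.66), ("IQ4_XS", 4.25)]:
    print(f"14B @ {quant}: ~{gguf_size_gb(14, bpw):.1f} GB of weights")
# IQ3_M lands around 6.4 GB, leaving room for context in 8 GB;
# IQ4_XS is around 7.4 GB, so weights plus cache spill past 8 GB.
```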
And while I have not tried it yet, the 14B is allegedly smart.
Also, what I do on my PC is hook my monitor up to the iGPU so the discrete GPU’s VRAM stays completely empty, lol.