Bz1sen@lemmy.world to Privacy@lemmy.ml • Running local LLMs for privacy as an alternative to ChatGPT, MS Copilot etc.?
How fast are the response times, and how useful are the answers of these open source models that you can run on a low-end GPU? I know the answer will be "it depends," but maybe you can share more of your experience. I often use Claude's newest Sonnet model, and for my use cases it's a real efficiency boost when used right. Around the middle of last year I briefly tested an open source model from Meta, and it just wasn't it. Or do we rather have to conclude that we'll have to wait another year until smaller open source models are more proficient?
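For anyone who wants to measure this on their own hardware, here's a rough sketch of how I'd time a small quantized model with llama-cpp-python. The model file name and offload settings are placeholders, not a recommendation; adjust them to whatever fits your VRAM.

```python
# Minimal timing sketch (not a proper benchmark) for a local GGUF model
# using llama-cpp-python. Model path and settings below are assumptions.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.2-3b-instruct-q4_k_m.gguf",  # hypothetical local file
    n_gpu_layers=-1,  # offload all layers; lower this on a low-end GPU
    n_ctx=2048,
)

prompt = "Summarize why someone might run an LLM locally for privacy."
start = time.time()
out = llm(prompt, max_tokens=128)
elapsed = time.time() - start

tokens = out["usage"]["completion_tokens"]
print(f"{tokens} tokens in {elapsed:.1f}s ({tokens / elapsed:.1f} tok/s)")
```

Tokens per second plus a spot check of the answers is usually enough to tell whether a small model is usable for your workflow.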
Also, if you search "@threads.net" in your Mastodon client and you find users from that server, then your server should be connected.