Is this even still a thing? It seems to be pretty well dead. Poe-API shat the bed, GPT4FREE got shut down and its replacement seems to be pretty much non-functional, proxies are a weird secret-club thing (despite being almost entirely based on scraped corporate keys), etc.

  • Ganbat@lemmyonline.com (OP) · 1 year ago

    No, LLM text generation is generally done on GPU, as that’s the only way to get any reasonable speed. That’s why there’s a specifically-made Pyg model for running on CPU. That said, one generation can take anywhere from five to twenty minutes on CPU. It’s moot anyway, as I only have 8 GB of RAM.
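
    For reference, here’s roughly what CPU-only generation looks like with the Hugging Face transformers library. This is just a sketch; the checkpoint name and generation settings are illustrative assumptions, not the exact CPU build of Pyg mentioned above, and a 6B model in fp32 needs well over 8 GB of RAM.

        # Minimal sketch of CPU-only text generation with transformers.
        # Checkpoint name and settings are assumptions for illustration.
        from transformers import AutoModelForCausalLM, AutoTokenizer
        import torch

        model_name = "PygmalionAI/pygmalion-6b"  # assumed checkpoint

        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForCausalLM.from_pretrained(
            model_name,
            torch_dtype=torch.float32,   # plain fp32 on CPU, no GPU needed
            low_cpu_mem_usage=True,
        )
        model.to("cpu")  # explicit, though CPU is already the default device

        prompt = "User: Hello!\nAssistant:"
        inputs = tokenizer(prompt, return_tensors="pt")

        # CPU generation is slow for a model this size -- on the order of
        # minutes per response, consistent with the 5-20 minute figure above.
        output = model.generate(
            **inputs,
            max_new_tokens=100,
            do_sample=True,
            temperature=0.8,
        )
        print(tokenizer.decode(output[0], skip_special_tokens=True))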

    • coyotino [he/him]@beehaw.org · 1 year ago

      I’m just telling you, it ran fine on my laptop with no discrete GPU 🤷 RAM seemed to be the only limiting factor. But yeah, if you’re stuck with 8 GB, it would probably be rough. I mean, it’s free, so you could always give it a shot? I think it might just spill over into your page file, which would be slow but might still produce results?