FLOSS virtualization hacker, occasional brewer

  • 3 Posts
  • 44 Comments
Joined 1 year ago
cake
Cake day: June 9th, 2023

  • Do you usually run some other front-end over the model? I can run llama.cpp directly in interactive mode, but the results are a little underwhelming. However, there seem to be various front-ends that get better results. Is this down to better prompting and parameter control? I've seen temperature mentioned in relation to ChatGPT, but I have no idea what the rope and yarn (RoPE/YaRN) factors are for.
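For reference, the knobs mentioned above are plain CLI flags on llama.cpp itself; front-ends mostly just set them for you. A rough sketch of an interactive run (the model path is a placeholder, and the exact values are illustrative, not recommendations):

```shell
# Interactive chat with explicit sampling settings.
# --temp controls randomness; --top-k/--top-p restrict which tokens
# are considered at each step.
./llama-cli -m ./models/model.gguf -i \
    --temp 0.7 \
    --top-k 40 \
    --top-p 0.9 \
    --repeat-penalty 1.1 \
    -c 4096

# The RoPE/YaRN flags rescale the rotary position embeddings so a
# model can run with a longer context than it was trained on, e.g.:
#   --rope-freq-scale 0.5   (linear RoPE scaling)
#   --yarn-ext-factor 1.0   (YaRN extrapolation mix)
# Left at defaults they match the model's training configuration.
```

This is a CLI configuration sketch; flag names follow recent llama.cpp builds and may differ in older versions.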