This is cool, since this game is pretty suitable for local co-op. I just hope they also did some optimization, since my endgame village runs at about 30 fps on a midrange gaming PC.
Glad I could help ;)
You can get different results, sometimes better, sometimes worse, and most of the time differently phrased (e.g. the Gemma models by Google like to do bullet lists and sometimes tell me where they got that information from). There are models specifically trained / finetuned for different tasks (mostly coding, but also writing stories, answering medical questions, describing what is on a picture, speaking different languages, running on smaller / bigger hardware, etc.). Have a look at Ollama's library of models, which is outright tiny compared to e.g. Hugging Face.
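Trying a few of them is just a couple of commands once Ollama is installed (a quick sketch; the model names are examples from Ollama's library and the tags may have changed by now):

# download a model from Ollama's library
ollama pull gemma:2b
# chat with it interactively (pulls it first if it's missing)
ollama run gemma:2b
# list everything you have downloaded locally
ollama list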
Also, I don't trust OpenAI and others to keep company data or code snippets from work confidential when I feed those in.
If you're lucky, you just set it to the wrong version; mine uses 10.3.0 (see below).
I tried running the Docker container first as well but gave up, since there are separate versions for CUDA and ROCm, which come packaged into the image and therefore make it unnecessarily big.
I am running it on Fedora natively. I installed it with the setup script from the top of the docs:
curl -fsSL https://ollama.com/install.sh | sh
After that I created a service file (also described in the linked docs) so that it starts at boot time (so I can just boot my PC and forget it without needing to log in).
The crucial part for the GPU in question (RX 6700 XT) was this line under the [Service] section:
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
As you stated, this sets the environment variable for ROCm. Also, to be able to reach it from outside of localhost (for my server):
Environment="OLLAMA_HOST=0.0.0.0"
I have my gaming PC running as the Ollama host when I need it (RX 6700 XT with ROCm doing the heavy lifting). The PC idles at ~50 W and draws up to 200 W when generating an answer. It is plenty fast though.
My mini PC home server runs Open WebUI with access to this "ollama instance", but also to OpenAI's API for when I just need a quick answer and therefore don't turn on my PC.
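For anyone wanting to replicate that part: Open WebUI mainly needs to know where Ollama lives, roughly like this (a sketch based on their Docker instructions; the IP is a placeholder for my gaming PC):

docker run -d -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://<gaming-pc-ip>:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

The OpenAI API key can then be added in Open WebUI's connection settings.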
Anna's Archive would be the go-to, I think. You can choose the language in the sidebar.
I hope it went well :) I was completely ready to go back to changing the image tag to v2 but didn't need to.
The choice of gestures vs. nav buttons is usually part of Android itself. In my case (a Pixel, so nearly stock Android) it's in Settings -> System -> Navigation mode (or something similar, since it's in German on my phone). If you can't find it, search for "navigation button" plus your phone model.
Edit: sorry, I just realized you meant the app drawer, not the overview of currently opened apps. I don't know the answer to that.
Edit edit: OK, I found something in Lawnchair's settings. The last setting is called Gestures. If you have navigation buttons enabled instead of nav gestures (see above), you could bind the home button to open the app drawer.
Lawnchair is a fork of the original Pixel Launcher that adds some quality-of-life upgrades to an already good piece of software, like being able to remove the search bar, resizable / reshapable icons and fonts, etc. It's also open source, so feel free to check out the GitHub repo.
There is no app store release right now, but they are working on it. I've used the alpha versions for two years now and they have worked fine so far.
I'm so looking forward to this. When I tried to use a tmpfs / ramdisk, the transcoding would simply stop because there was no space left.
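If anyone wants to try the tmpfs route anyway, the part that bit me was the size cap; an fstab line like this (a sketch, the path and size are placeholders for your transcode directory) at least makes the limit explicit:

# /etc/fstab - tmpfs only uses RAM that is actually written, size is a hard cap
tmpfs   /path/to/transcodes   tmpfs   rw,size=8G,mode=1777   0   0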
Yes, since we have similar GPUs you could try the following to run it in a Docker container on Linux, taken from here and slightly modified:
#!/bin/bash
model=microsoft/phi-2
# share a volume with the Docker container to avoid downloading weights every run
volume=<path-to-your-data-directory>/data
docker run -e HSA_OVERRIDE_GFX_VERSION=10.3.0 -e PYTORCH_ROCM_ARCH="gfx1031" --device /dev/kfd --device /dev/dri --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.4-rocm --model-id $model
Note how the ROCm version has a different tag and that you need to mount your GPU devices into the container. The two environment variables are specific to my (and maybe also your) GPU architecture. It will take a while to download though.
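Once the container is up you can sanity-check it with a plain request against TGI's generate endpoint (the prompt and parameters here are just examples):

curl http://localhost:8080/generate \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"inputs": "Write a haiku about GPUs.", "parameters": {"max_new_tokens": 50}}'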
Hugging Face TGI is just a piece of software for serving the models, like gpt4all. Here is a list of models officially supported by TGI, although they state that you can try different ones as well. You follow the link and look at the Files section. The size of the model files (safetensors or pickle binaries) gives a good estimate of how much VRAM you will need. Sadly this is more than most consumer graphics cards have, except for santacoder and microsoft phi.
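As a rough rule of thumb: phi-2 has about 2.7B parameters, and at fp16 that is roughly 2.7B × 2 bytes ≈ 5.4 GB of weights (plus some overhead for the KV cache), which is why it fits into 12 GB of VRAM while a 7B model at fp16 (≈ 14 GB) already doesn't.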
I tried Hugging Face TGI yesterday, but all of the reasonable models need at least 16 GB of VRAM. The only model I got working (on a desktop machine with an AMD 6700 XT GPU) was microsoft/phi-2.
From what I’ve heard in this regard: synology bad, qnap good?
I've got a 12 TB Seagate Desktop Expansion which contains a Seagate IronWolf drive. According to the link you shared, I'd better look for a backup drive asap.
Edit: the ones in the Backblaze reference are all Exos models, but I still have no profound trust in Seagate.
Meanwhile a 4 TB SATA SSD is 300 € in Germany.
It's true that you shouldn't open ports to the internet. If you still want your services to be accessible from outside the local network, you can install a WireGuard server on your thin client that has access to the services you want. And if you really want to harden it, you can block the WireGuard clients from SSH and other admin things.
You will need to open one port on the router to your WireGuard server though. WireGuard is UDP, however, and does not respond to packets that don't come from a valid peer, so attackers will not even know there is an open port on your router.
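A minimal server config on the thin client could look something like this (a sketch; keys, addresses and the port are placeholders, and you add one [Peer] block per device):

# /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one of your phones / laptops
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32

On the router you would then forward UDP 51820 to the thin client.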
Edit: Tailscale and ZeroTier are good external solutions for this as well, without needing to open a port at all.
They have been built on top of Android until now, but I read that they want to ditch that and build their own OS from scratch.
TIL Amazon Echos can act as Zigbee hub. Still not using it lol.
Like one of the comments mentioned: there is yt-dlp for now at least.