• 1 Post
  • 27 Comments
Joined 2 months ago
Cake day: February 19th, 2026

  • I use a humidity sensor, a motion sensor and a helper that tracks the change over a period of time. If the humidity rises fast (+2%/5m) and goes over a certain absolute value (unique to your room’s climate), the bathroom automation changes tracks: it holds the light at 100% and turns the extractor fan on. How you stop the automation is up to you; I let it run for 15 minutes before falling back to motion. Small tips: for me the humidity triggers the automation within 15s–1m of starting a shower, which is okay for me. Motion sensors typically use IR to detect movement, so if the room is too steamy they might struggle to see you, and they can’t see through glass; they need a line of sight to you.

    The best alternative, I think, would be mmWave presence sensors, but they’re pricey and require a wired connection.
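    As a rough illustration of that trigger logic, outside Home Assistant entirely (the class, the 65% absolute cutoff and the defaults are made up for the sketch; only the +2%/5m rate comes from the comment above):

```python
import time
from collections import deque

class HumidityTrigger:
    """Fires when humidity rises faster than rate_threshold within the
    window AND the absolute value exceeds abs_threshold. Thresholds
    mirror the comment: +2% over 5 minutes, absolute cutoff per room."""

    def __init__(self, rate_threshold=2.0, abs_threshold=65.0, window_s=300):
        self.rate_threshold = rate_threshold
        self.abs_threshold = abs_threshold
        self.window_s = window_s
        self.samples = deque()  # (timestamp, humidity) pairs

    def update(self, humidity, now=None):
        now = time.time() if now is None else now
        self.samples.append((now, humidity))
        # Drop samples that have aged out of the window.
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        rise = humidity - self.samples[0][1]
        return rise >= self.rate_threshold and humidity >= self.abs_threshold

t = HumidityTrigger()
t.update(55.0, now=0)      # baseline reading: no trigger
t.update(58.0, now=60)     # rising fast, but below the absolute cutoff
print(t.update(66.0, now=120))  # fast rise AND over 65% -> True
```

    The two-condition check is the point: the absolute cutoff alone would misfire on humid days, and the rate alone would misfire on slow seasonal drift.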


  • It was a huge pain and I ended up troubleshooting with Gemini for hours aha! I know, I’ll plant a tree to offset my sins. It was at least useful for rapidly searching for solutions and telling me which component was the most likely culprit.

    I had coturn set up for legacy Element Classic and, before that, XMPP, but as I wasn’t using those any more I decided to shut it down and try Matrix LiveKit’s internal TURN server. I’m not sure what actually fixed it in the end, but LiveKit’s latest build had a bug, so I pinned v1.9.12 instead. I also shuffled around my reverse proxy config (left over from my old attempts) because some endpoints seemed to have changed. I’ll update later with anonymised config :3



  • This touches on one of the reasons I’m inclined to pirate: most of the time it isn’t the author or developer you pay. The distributor or streaming provider often takes a 30% cut, the payment processor takes about 5%, and the publisher takes a significant and usually undisclosed portion, until finally (and this differs between media) the actual creator sees perhaps £10 of a £60 purchase. Until the vultures clear the field and stop taking hefty cuts, I’m inclined to either find a way to actually pay the developer (or buy normally, if I trust the publisher) or not pay at all. It takes effort to research the sources and distributors, but I would much rather vote with my wallet than accept astronomical distributor fees and anti-consumer practices.

    When I was younger I found an album I really liked on Bandcamp. The monetisation model the artist used meant you could actually pay £0 for the music, and as I was tight financially I took it, but I was extremely grateful. You could call this consensual piracy: in my eyes that product is worth a certain value that can be exchanged for money, even if the seller doesn’t name it. Bandcamp takes a 15% cut, which is low for the industry, and this particular artist was also independent, meaning they were their own publisher/record label, so when I could I honoured that ‘pay what you feel it’s worth’ approach and bought it a couple of years later for more than a commercial album costs. Trust is rare in capitalism, and I appreciated the design.
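    Treating the rough figures in that first paragraph as exact, the arithmetic works out like this (a toy calculation; the 30%/5% splits and the £10 creator share are the comment’s estimates, not industry data):

```python
price = 60.0                     # sticker price of the purchase
distributor_cut = 0.30 * price   # storefront / streaming provider: £18
processor_cut = 0.05 * price     # payment processor: £3
after_fees = price - distributor_cut - processor_cut  # £39 left

creator_share = 10.0             # what the creator reportedly sees
publisher_cut = after_fees - creator_share            # £29, the undisclosed middle

print(f"publisher takes ~{publisher_cut / price:.0%} of the sticker price")
```

    On those numbers the publisher’s slice is the biggest single cut, bigger than the storefront’s, which is exactly why it usually goes undisclosed.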







  • Can confirm what another user said: an Intel iGPU would be better in your case.

    I’ll tell you now: if it runs Windows, kill it. My server originally ran Windows with Docker Desktop, hosting three services: a Minecraft server that lagged like a bitch, a Samba folder share, and Emby. Whenever Emby playback froze I knew Windows (whose antivirus kept the HDD under constant load) had pushed the i3 6100 to 100%, which happened at least twice a day.

    Moving on: now I run Proxmox. I host 25 services with the CPU sitting around 35% and 24GB of RAM at 75% usage. Nothing lags.

    Before I plugged in the GPU my server drew 25W consistently, rising to 35W under load. With the GPU, a used RTX 3060 12GB, it idles at 85W, so make sure it’s worth it. In my case it not only transcodes for Emby and resumes streams in a second, but also handles voice inference for Home Assistant in under a second, plus mid-sized Ollama LLM responses. I’d recommend a high-VRAM Nvidia card (for CUDA) in that scenario, as my model, Gemma 7B, uses 6GB of VRAM and 2GB of RAM. But a top model, say Dolphin-Mixtral 8x22B, needs 80GB of storage, 17GB of RAM and… well, I don’t have the RAM, but you get it. LLMs are intensive.
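    A back-of-envelope sketch of why a 7B model fits in roughly 6GB of VRAM (the bytes-per-parameter figure assumes a ~4-bit quant, and the flat overhead is a guess; real usage depends on context length, quantisation and runtime):

```python
def estimate_vram_gb(params_billion, bytes_per_param=0.55, overhead_gb=1.5):
    # Quantised weights plus a flat allowance for KV cache and buffers.
    # Rule of thumb only -- not what Ollama's allocator actually does.
    weights_gb = params_billion * bytes_per_param
    return weights_gb + overhead_gb

# A 7B model lands in the 5-6 GB range, consistent with the ~6 GB above.
print(round(estimate_vram_gb(7), 1))

# A Mixtral 8x22B-class MoE (~141B total params) lands near 80 GB by the
# same rule, which lines up with its on-disk size.
print(round(estimate_vram_gb(141)))
```

    The takeaway: VRAM scales roughly linearly with parameter count at a fixed quantisation, so doubling model size doubles the card you need.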