• 0 Posts
  • 92 Comments
Joined 2 years ago
Cake day: July 29th, 2023

  • Kind of a lazy question, but do any of these protocols offer a substantial advantage over 802.11, especially if you just use p2p/ad-hoc/mesh modes?

    I haven’t touched mobile networks in a while so I’ve forgotten a lot, but IIRC the main concerns with mesh networks were efficient routing (which has been solved with some cool algorithms) and power efficiency for transmitting devices (again, I could have sworn 802.11 and even Bluetooth can already achieve this).

    Zigbee in particular stood out as annoying to me, as it includes its own 2.4 GHz physical-layer stack that uses the same band as WiFi, which is already overcrowded as hell and relies on CSMA/CA magic to make even the most AP-crowded apartment areas function decently.
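    The overlap is easy to see from the channel math. As a rough sketch (frequencies per the published 802.15.4 and 802.11 b/g/n channel plans), Zigbee's 2.4 GHz channels 11-26 are centered at 2405 + 5*(ch - 11) MHz, and only a handful dodge the classic WiFi 1/6/11 layout:

    ```shell
    # Zigbee channels commonly recommended to sidestep WiFi channels 1/6/11
    # (WiFi centers for reference: ch1 = 2412 MHz, ch6 = 2437 MHz, ch11 = 2462 MHz)
    for ch in 15 20 25 26; do
      echo "Zigbee ch $ch -> $((2405 + 5 * (ch - 11))) MHz"
    done
    ```

    Everything else lands right on top of a busy WiFi channel, which is where the coexistence pain comes from.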


  • mlg@lemmy.world to Selfhosted@lemmy.world · goodbye plex
    9 days ago

    Does Jellyfin do untranscoded (direct play) video/audio?

    I haven’t used it in years, but I’m finally building up my media server again. I remember it had some funky settings for hardware encoding back then, which I didn’t need because I was connecting to it via a repurposed gaming laptop that could easily handle 4K content and surround sound by itself.





  • Ubuntu, and the experience was crap lol.

    Then I got to try Debian on a server and it was much nicer.

    Then I saw that Torvalds uses Fedora, and given that he also disliked Debian and Ubuntu for their lack of end-user ease, I switched and have been happy ever since.

    Seriously though, GNOME 40 really should not be the default DE. It made me think Linux UI was years behind Windows, when it was actually the opposite with proven DEs like XFCE, KDE, and GNOME 2/3.






  • A lot of it comes down to genre, target audience, and the writer’s personal experience. Even Marvel and DC characters were written decades ago. Batman is basically from the 1930s/40s.

    Compare that to last decade’s best-selling YA novels. The Hunger Games was constructed to be very balanced from the start, including a female lead; same for Percy Jackson.

    My hot take is that most of these instances are actually fine as-is, because Hollywood in general sucks total ass at writing new characters into existing franchises, especially when the exact purpose is introducing diversity without any depth.

    There’s literally a 3+ hour series on YouTube about how bad the new Star Wars trilogy is, and a solid third of that rant is about how poorly written the female lead is.

    The issue here is that having an equal or majority-female (or any other metric) set of characters wouldn’t automatically make your story or writing better. You have to develop each character just like the rest, otherwise you end up with inserts that have no purpose other than evening out a fraction.

    Whether that is due to writers being able to create male characters more easily, or just a perceived target audience, you’d much rather have a well-written character than a soulless one.

    And that is likely not even correlated with male vs. female writers. So much so that some critics even believe female writers are better at writing male characters than male writers are, which is funny to think about. Ex: Harry Potter’s cast is still roughly a 2:1 male-to-female ratio.

    Again though, there are plenty of good examples (mostly books) of very successful stories with equal or majority-female characters.

    If it makes you feel any better, this argument is old as hell lol. You can find ye olde forum posts discussing the exact same things mentioned in this entire thread from as far back as the early 2000s, with plenty of in-text examples from books and screenplays.


    The general consensus, though, is that if the characters are good, the plot is good, and the writing is good, no one really cares about the numbers because you’re absorbed into the story. Your attachment to the story is a direct reflection of your own personal identity. If you notice the lack of X while reading/watching and it breaks your immersion, then it’s probably a viable critique of the writing. If it’s something you only notice afterward, outside the story, then it might not matter as much as you think.


  • You might want to check what the actual hardware is first. You’ll probably be fine, but client 802.11 hardware can sometimes be underwhelming for hosting because it doesn’t have good stuff like beefed-up MU-MIMO.

    Although that’s assuming you’ll have a lot of traffic going through it, so you could always just test throughput and latency with iperf to see how well it performs.
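    A quick sanity check could look like this (a sketch; assumes iperf3 is installed on both ends, and 192.168.1.10 is a placeholder for the host you’re evaluating):

    ```shell
    # On the machine whose WiFi hardware you're testing:
    iperf3 -s

    # From another machine on the network (replace the placeholder address):
    iperf3 -c 192.168.1.10 -t 30         # 30-second TCP throughput test
    iperf3 -c 192.168.1.10 -u -b 100M    # UDP at 100 Mbit/s to expose jitter/loss
    ping -c 20 192.168.1.10              # baseline latency with no load
    ```

    Re-running the ping while the TCP test is going also shows how badly latency degrades under load.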


  • mlg@lemmy.world to Selfhosted@lemmy.world · Self host websites
    5 months ago

    It really depends on what it is, plus convenience. There are lots of morons out there running basic info sites on full beefy datacenter VMs instead of a proper cloud webhost service.

    The most you’d be getting out of the cloud is reliability. Self-hosting assumes you don’t have any bottlenecks (easy enough to pass), but also 99% uptime, which is basically impossible unless you’re running with site redundancy (also possible, but I doubt many people own multiple properties with their own distributed or private cloud solution).

    If 95% uptime is acceptable and you don’t live in an area with weather-related outage issues, I’d say go for it. Otherwise, you can find some pretty cheap cloud solutions for basic websites. Even a cheapo VPS would probably work just fine.
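    To put numbers on those targets (simple arithmetic, using 8760 hours in a year):

    ```shell
    # Allowed downtime per year at a given uptime percentage
    for pct in 99.9 99 95; do
      awk -v p="$pct" 'BEGIN { printf "%s%% uptime = %.0f hours down/year\n", p, (100 - p) / 100 * 8760 }'
    done
    ```

    So 95% still allows over two weeks of cumulative downtime a year, which is why it’s a very forgiving target for a home setup, while 99% only leaves about 88 hours.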


  • I have run PhotoPrism straight from mdadm RAID5 on some ye olde SAS drives with only a reduction in indexing speed (about 30K photos, which took ~2 hours to index with GPU TensorFlow).

    That being said, I’m in a similar boat doing an upgrade, and I have some warnings I’ve found helpful:

    1. Consumer-grade NVMe drives are not designed for tons of write ops, so they should optimally only be used in RAID 0/1/10. RAID 5/6 will literally start with a massive parity rip on the drives, and the default timer for RAID checks on Linux is 1 week. The same goes for ZFS and mdadm caching; just proceed with caution (i.e. 3-2-1 backups) if you go that route. Even if you end up doing RAID 5/6, make sure you get quality hardware with decent TBW, as server-grade NVMe drives are often rated for triple the TBW.
    2. ZFS is a load of pain if you’re running anything related to Fedora or Red Hat, and even after lots and lots of testing the performance implications are still arguably inconclusive for a NAS/homelab setup. Unless you rely on its specific feature set or are building an actual hefty storage node, stock mdadm and LVM will probably fulfill your needs.
    3. Btrfs has all the features you need but is a load of trash in performance. I highly recommend XFS for file-integrity features plus built-in data dedup, and mdadm/LVM for the rest.

    I’m personally going with scheduled NVMe backups to RAID, because the caching just doesn’t seem worth it when I’m gonna be slamming huge media files around all day along with running VMs and other crap. For context, the 2TB NVMe brand I have is only rated for 1200 TBW. That’s probably more than enough for a file server, but for my homelab server it would just be caching constantly with whatever workload I’m throwing at it. It would still probably last a few years no issue, but SSD pricing has just been awful these past few years.
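    If you go the mdadm route, both the periodic check and the drive wear are easy to inspect. A sketch (md0 and the units value are examples; paths can vary by distro):

    ```shell
    # See whether a RAID check/scrub is currently running on md0:
    cat /sys/block/md0/md/sync_action      # "idle" when nothing is running
    # Trigger or cancel a parity check manually:
    echo check > /sys/block/md0/md/sync_action
    echo idle  > /sys/block/md0/md/sync_action

    # Estimate NVMe wear: nvme-cli's smart-log reports "data_units_written",
    # where one unit is 512,000 bytes, so terabytes written is:
    awk -v units=1500000000 'BEGIN { printf "%.1f TB written\n", units * 512000 / 1e12 }'
    ```

    Comparing that last number against the drive’s rated TBW tells you how much headroom is left.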

    On a related note, PhotoPrism needs to upgrade to TensorFlow 2 so I don’t have to compile an antiquated binary for CUDA support.


  • Google Maps couldn’t navigate its way out of a straight road with the shit-tier routing algorithm they haven’t updated in a decade.

    I’ve seen GPS devices from as far back as 2005 that can run circles around this absolute junk software, including the touchscreen UX.

    I hate this thing so much that I would pay to have Tesla release their map system for any device, just so we could experience Valhalla outside of OsmAnd and Organic Maps, which both lack modern rendering flair.

    Seriously, the only thing they change every update is a new design made up by their yearly batch of college interns, and another removed feature to reduce their cloud running costs.


  • I thought the default firewall rule for IPv6 was to block all incoming traffic? At least it is on my hardware out of the box.

    A public-facing IPv6 address doesn’t mean it’s externally reachable; that’s just how IPv6 works, because there’s no need for NAT. You can quickly test it by trying to SSH to it from outside to make sure it’s not reachable. Otherwise, just add a firewall rule that blocks all incoming IPv6.

    Anyway, if you want to make sure it also doesn’t connect to the internet, you could just do the inverse and block its outgoing traffic by MAC address, or put it on a VLAN.
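    On a Linux router, the rules might look something like this (a sketch using nftables; 2001:db8::42 is a placeholder documentation address, and the inet filter table/forward chain names assume a typical default setup):

    ```shell
    # Allow established/related return traffic to the device, drop all other inbound IPv6:
    nft add rule inet filter forward ip6 daddr 2001:db8::42 ct state established,related accept
    nft add rule inet filter forward ip6 daddr 2001:db8::42 drop

    # Or cut the device off from the internet entirely by dropping its outbound traffic:
    nft add rule inet filter forward ip6 saddr 2001:db8::42 drop

    # Quick reachability check from an external host (should time out):
    ssh -6 user@2001:db8::42
    ```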