The year was 2006, and the 80 GB HDD in my Dell OptiPlex 790 was full of podcasts, stolen music, and episodes of Doctor Who…
ITT people trying to be edgy but I’m going to say invading Russia in the winter.
Honestly, if you’re doing regular backups and your ZFS system isn’t being used for business, you’re probably fine. Yes, you’re at increased risk of a second disk failure during a resilver, but even if that happens you’re just forced to restore from your backups; it’s not complete destruction of the data.
You can also mitigate the risk of disk failure during a resilver somewhat by ensuring your disks are of different ages. Much of the increased risk comes from the fact that disks of the same brand and age, and/or from the same batch or factory, tend to die of old age around the same time. So when one disk fails, others may soon follow, especially during the relatively intense process of resilvering.
Otherwise, with the number of disks you have, you’re likely better off just going with mirrors rather than RAIDZ. You’ll see better performance, especially on writes, and you’re not losing any space anyway: a 3-way mirror and a 3-disk RAIDZ2 both give you one disk’s worth of usable capacity.
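To make the space math concrete, here’s a rough sketch (disk counts and sizes are made-up examples, and it ignores ZFS overhead like slop space and metadata) of the usable capacity for mirrors versus RAIDZ:

```python
def usable_tb(num_disks: int, disk_tb: float, layout: str) -> float:
    """Rough usable capacity in TB, ignoring ZFS overhead."""
    if layout == "mirror":
        # An n-way mirror stores n copies, so you get one disk's worth.
        return disk_tb
    if layout.startswith("raidz"):
        parity = int(layout[-1])  # raidz1 -> 1, raidz2 -> 2, raidz3 -> 3
        return (num_disks - parity) * disk_tb
    raise ValueError(f"unknown layout: {layout}")

# Three 4 TB disks either way: same usable space, same 2-disk fault tolerance.
print(usable_tb(3, 4, "mirror"))   # -> 4
print(usable_tb(3, 4, "raidz2"))   # -> 4
```

The mirror wins on read/write performance and resilver time, which is the point above: at three disks, RAIDZ2 buys you nothing in capacity.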
The ZFS pool design guidelines are very conservative, which is a good thing because data loss can be catastrophic. But those guidelines were developed with much larger pools in mind, and for data that is fundamentally irreplaceable, such as user-generated data for a business, rather than a personal media server.
Also, in general, backups are more important than redundancy, so it’s good you’re doing that already. RAID is about maintaining uptime; data security is all about backups. Personally, I’d focus first on a solid 3-2-1 backup plan rather than worrying too much about mitigating a catastrophic failure of your current array.
I think they may have dropped the feature, but I distinctly remember being disappointed that it wouldn’t download MP3s to your server, so I’m pretty sure it existed at one point.
I think a lot of people use Tailscale and add their external clients to a dedicated tailnet. How are you hosting Plex without opening any ports though?
Honestly the writing’s been on the wall for Plex for a while now. I think it was when they introduced podcasts or news or something that it first became clear to me that Plex was trying to grow beyond a software company for self-hosters and prepare themselves for an IPO or something. I still use it simply because their client availability is second-to-none and I’ve got a bunch of people signed up already, but I’ve already made my peace that the “Plex getting shittier” line and the “Jellyfin getting better” line are getting closer and closer to crossing each other.
Especially with ChatGPT, you don’t really need to be that good at it, just good enough to read the script over and know how to execute it.
This is true, but I don’t know if you’d be counted as a seeder on that list if you don’t have the full torrent.
I’m personally a big fan of OpenAudible. It’s not free, but it’s not crazy expensive and it does all the work for you. You sign into your Audible account in the app, and it will pull your library, download each book, decrypt it, and convert it to the format of your choice (I usually go with M4B). I’ve been using it for years, and it makes downloading your Audible library on an ongoing basis a breeze.
So two things about this:
Tailscale doesn’t actually route your traffic through Tailscale’s servers; it just uses them to establish a direct connection between your nodes. You can use Headscale and monitor the traffic on the client and server sides to confirm this is the case. Headscale is just a FOSS implementation of that coordination server, and you point the Tailscale client at it instead.
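As a sketch of what “pointing the client at Headscale” looks like in practice (the URL is a placeholder for wherever you host your Headscale instance):

```shell
# On each node, point the stock Tailscale client at your own
# Headscale instance instead of Tailscale's hosted control plane:
tailscale up --login-server https://headscale.example.com

# Data between nodes still flows peer-to-peer over WireGuard;
# the control server only coordinates keys and NAT traversal.
```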
Doesn’t renting a $3 VPS and routing your traffic through that expose many of the same vulnerabilities regarding a 3rd party potentially having access to your VPN traffic, namely the VPS provider?
For what it’s worth, I generally think that Headscale is the most privacy- and data-sovereignty-preserving route, but I do think it’s worth differentiating between something like Nord, where your traffic is actually routed through the provider’s servers, and Tailscale, where the traffic stays on your own infrastructure.
This is very exciting. I’ve felt that SQLite has held back the performance of the *arrs for a long time, so I’m glad to see this.
I came here to say exactly this: WireGuard is great and easy to set up, but it gets harder as you add more people, especially less technical ones, since getting them to generate keys and shuffle them around becomes a headache. Tailscale also minimizes the role of the central server, so if your box goes down the VPN can still function, and it can do some pretty nifty stuff with DNS.
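For context on the key-shuffling headache, this is roughly the manual dance every new WireGuard peer requires (interface name and IP are placeholders):

```shell
# On the new peer's machine: generate a keypair.
wg genkey | tee privatekey | wg pubkey > publickey

# The peer then has to get that public key to the server admin
# out-of-band, who adds it to the hub interface by hand:
sudo wg set wg0 peer "$(cat publickey)" allowed-ips 10.0.0.5/32

# ...and the peer still needs the server's public key and endpoint
# in its own config before anything works.
```

Tailscale automates exactly this exchange, which is why it scales better to non-technical users.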
I’ve only messed with it a bit, but it does require an SMTP server because it relies on sending email for account setup.
For that number of disks, I would just buy a case that holds as many disks as you want and build a computer in it; either move your existing home lab into that case or set up a new one and export the storage over the network.
Yeah, it was 2006 and that was how you got the MP3 files onto your iPod Nano. This was back when “mobile internet” consisted of “m.website.com” links that loaded a style-sheet-free page at dial-up speeds, designed to be navigated with a D-pad.