I got a CA-53 recently myself, for much the same reason.
Nobody ever said anything about my Apple Watch, but holy crap does everyone love a calculator watch.
(Which is hilarious because as a kid, I was teased as a nerd for having such a thing.)
It’s mostly professional/office use where this makes sense. I’ve implemented this (well, a similar thing that does the same job) for clients that want versioning and compliance.
I’ve worked with/for a lot of places that keep everything, because disks are cheap enough that they’ve decided it’s better to have a copy of every git version than to not have one and need it some day.
Or places that have compliance obligations to keep copies of every email, document, spreadsheet, picture, and so on. You’ll almost never touch “old” data, but you have to hold on to it for a decade somewhere.
It’s basically cold storage that can immediately pull data into a fast cache if/when someone needs it, but otherwise it just sits there forever on a slow drive.
…depends what your use pattern is, but I doubt you’d enjoy it.
The problem is that the cached data will be fast, but the uncached data will, well, be on a hard drive.
If the cache drive is big enough to hold your OS and your working data, it’s great; but if you already have enough SSD space for your OS and working data, why are you doing this in the first place?
If the cache drive isn’t big enough to hold your commonly used data, then it’s absolutely going to perform worse than just buying another SSD.
So I guess if this is a ‘I keep my whole Steam library installed, but only play 3 games at a time’ kinda use case, it’ll probably work fine.
For everything else, eh, I probably wouldn’t.
Edit: a good use case for this is more the ‘I have 800TB of data, but 99% of it is historical and the daily working set is just a couple hundred gigs’ NAS sort of thing.
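To put rough numbers on why the hit rate is everything here, a quick back-of-the-envelope sketch in Python (the latencies are illustrative ballpark figures, not benchmarks):

```python
# Rough illustration of why cache hit rate dominates perceived speed.
# Latencies are ballpark figures for random reads, not benchmarks.
SSD_READ_MS = 0.1
HDD_READ_MS = 10.0

def effective_latency_ms(hit_rate: float) -> float:
    """Average read latency given the fraction of reads served from cache."""
    return hit_rate * SSD_READ_MS + (1.0 - hit_rate) * HDD_READ_MS

for rate in (0.99, 0.90, 0.50):
    print(f"{rate:.0%} cache hits -> {effective_latency_ms(rate):.2f} ms average read")
```

Even a 10% miss rate lands you around 10x slower than a plain SSD on average, which is the ‘working set must fit in the cache’ point in concrete terms.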
I’ll admit to having no opinion on windowing systems.
If the distro ships with X, I use X, and if it ships with Wayland, I use Wayland.
I’d honestly probably not be able to tell you which of the systems I’ve been using run one or the other, and that’s a good thing: if you can’t tell, then it probably doesn’t matter anymore.
Oh, that makes sense. I was trying to imagine what kind of FDM printer could possibly need that much power and was very much coming up blank, lol.
So uh, if I can ask, why?
Like what are you doing that needs this kind of uh, upgrade?
One thing you probably need to figure out first: how the dGPU and iGPU are connected to each other, and then which ports are connected to which GPU.
Everyone does funky shit with this, and you’ll sometimes have dGPUs that require the iGPU to do anything, or cases where the internal panel is only hooked up to the iGPU (or only the dGPU), and the HDMI and DisplayPort outputs and so on can be wired to any damn thing.
So uh, before you get too deep into planning what gets which GPU, you probably need to check that the outputs you need support what you want to do.
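If the machine runs Linux, one quick way to sanity-check the wiring is to walk the DRM entries in sysfs and see which connectors hang off which card. A minimal sketch, assuming the standard /sys/class/drm layout (Linux only):

```python
# Lists display connectors grouped by GPU via the standard Linux DRM
# sysfs layout (/sys/class/drm/cardN-<connector>). Linux-only sketch.
from pathlib import Path

for conn in sorted(Path("/sys/class/drm").glob("card[0-9]*-*")):
    card = conn.name.split("-", 1)[0]               # e.g. "card0", "card1"
    status = (conn / "status").read_text().strip()  # "connected"/"disconnected"
    print(f"{card}: {conn.name}: {status}")
```

From there, /sys/class/drm/cardN/device/vendor tells you which PCI vendor each card is, so you can match card0/card1 to the iGPU and dGPU.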
It’s viable, but when you’re buying a DAS for the drives, figure out what the USB chipset is and make sure it’s not a flaky piece of crap.
Things have gotten better, but some random manufacturers are still using trash bridge chips and you’ll be in for a bad time. (By which I mean your drives will vanish in the middle of a write, and corrupt themselves.)
Okay so you’re able to access it via the IP it’s hosted on, but NOT via the domain name in the tunnel?
Is the working IP a public or private one?
My $5 says you don’t have the tunnel configured properly and that’s why you’re having issues, but maybe not.
Also, what specifically did you put in the config file? Usually they’re not asking for an IP, but the FQDN of the site.
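For reference, a minimal cloudflared config usually looks something like this (tunnel ID, paths, and hostname below are placeholders, not your actual values):

```yaml
# Minimal cloudflared ingress config; everything here is a placeholder.
tunnel: <your-tunnel-uuid>
credentials-file: /etc/cloudflared/<your-tunnel-uuid>.json
ingress:
  - hostname: app.example.com        # the FQDN the tunnel answers for
    service: http://localhost:8080   # where your app listens locally
  - service: http_status:404         # catch-all rule, has to be last
```

If the hostname rule doesn’t match the domain you’re visiting, tunnel traffic falls through to the catch-all, which would line up with ‘direct IP works, domain doesn’t’.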
Am I missing something, or is this just the Argo Tunnel thing Cloudflare has offered for quite a while?
Are you using Cloudflare as DNS, as a proxy, or via their Argo Tunnels? (I know you said tunnel, but then you mention accessing via IP address, so I’m not entirely sure what you’ve done.)
Kinda changes what you should be looking at.
Artificial meat seems like a scam to me.
Depends what it costs, honestly.
If you can make fake-cow that tastes like actual cow and a pound of ground fake cow isn’t $7, consider me very interested.
10000% this.
Tell me what it does, and SHOW me what it does.
Because if I have to guess what the hell your thing looks like and how it behaves, I’m going to bounce pretty much immediately, because you’ve now made it so I have to figure out how to deploy your shit just to find out. And, uh, generally, if you have no screenshots, you have no good documentation either, and thus it’s going to suuuuck.
It’s because of updates and who owns the support.
The postgres project makes the postgres container, the pict-rs project makes the pict-rs container, and so on.
When you make a monolithic container you’re now responsible for keeping your shit and everyone else’s updated, patched, and secured.
I don’t blame any dev for not wanting to own all that mess, and thus you end up with separate containers for each service.
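As a sketch of what that separation looks like in practice, a compose file for a Lemmy-style stack ends up roughly like this (image names and tags are illustrative, not any project’s official deployment):

```yaml
# Illustrative compose file; image names/tags are examples only.
services:
  lemmy:            # the app itself: the only image its devs maintain
    image: dessalines/lemmy:latest
    depends_on: [postgres, pictrs]
  postgres:         # database, patched upstream by the postgres project
    image: postgres:16
  pictrs:           # media store, patched upstream by the pict-rs project
    image: asonix/pictrs:latest
```

Each image pulls security updates from its own upstream; bundling them into one monolithic container would make the app dev responsible for patching all three.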
Best I can figure, what got me was some delivery food, lol. The virus that damn near killed me is not especially common, and the usual means of transmission is fecal-oral.
So, some horribly gross restaurant employee was probably sick, went to take a shit, and didn’t wash their hands, though it’s not like that can be proven.
…I don’t get delivery or eat out anymore.
Head to the Lemmy GitHub and subscribe to release notifications, and you’ll get an email when a new version is out.
(And, unlike SOME projects I’m subbed to, they don’t do anything that generates a ton of spam, so it really is just one-email-per-release.)
The best way I’ve heard that described is that for the Bambu stuff, you spend your time fiddling with the thing you want to print, not your printer.
I love my P1P (it’s several thousand hours and 100kg of filament into ownership, and all I’ve had to do is clean the bed plate and replace a nozzle), and I really wish someone were making an open-source printer that’s as reliable and fiddle-free as this thing has been.
I’d probably go with getting the ISP equipment into the dumbest mode possible and putting your own router in its place, so option #2?
I know nothing about eero stuff, but can you maybe also put it into a mode that has it doing wifi-only, and no routing/bridging/whatever?
Then you can leave the ISP router in place and use the eeros just for wifi (and probably turn off the wifi on the ISP router while you’re in there).
I’ve become a fan of staying one version behind for a month or two, unless there’s a security issue involved, in which case I’ll patch right away.
I like it when someone who isn’t me finds out the catastrophic breaking issues and has to do the cleanup, and I’ll wait for the fixed version. :P
ArchiveBox is great.
I’m big into retro computing and general old electronics shit, and I archive everything I come across that’s useful.
I just assume anything and everything on some old dude’s blog about a 30-year-old whatever is subject to vanishing at any moment, and if it was useful once, it’ll probably be useful again later. So fuck it, make a copy of everything.
Not like storage is expensive, anyway.