Just your normal everyday casual software dev. Nothing to see here.

  • 0 Posts
  • 53 Comments
Joined 11 months ago
Cake day: August 15th, 2023

  • Actually, since they’re permanent non-removable drives, I would say mount them wherever you want. If they’re meant primarily for storing user data, you can do what I used to do, which was mount them within the home directory under specific names. My old setup before I went Proxmox was: /backups was my backup drive, /home was the drive that held most of my users, /home/steam held my game server drive, and /home/storage held my long-term cold-storage drive.
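As a rough sketch, that kind of layout would just be a few lines in /etc/fstab (the UUIDs and filesystems here are placeholders, not the real drives):

```
# /etc/fstab -- hypothetical UUIDs/filesystems, adjust to your own drives
UUID=xxxx-backup   /backups       ext4  defaults          0 2
UUID=xxxx-home     /home          ext4  defaults          0 2
UUID=xxxx-steam    /home/steam    ext4  defaults,noatime  0 2
UUID=xxxx-cold     /home/storage  ext4  defaults          0 2
```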


  • Pika@sh.itjust.works to Linux@lemmy.ml: Why do you still hate Windows?

    Honestly, privacy and freedom of choice alone are why I switched back.

    I will give Windows credit: it’s definitely better than any other platform out there when it comes to support, and it is really nice just having things “just work”. I went roughly 8 years with almost zero gaming issues, with the exception of my graphics driver, which was a fault of AMD, not necessarily Microsoft. All I would have to do is install a program, maybe restart the computer, and then run the program and be on my way. With my current system I can’t even guarantee that the software I want to use will work, because the ecosystem is geared towards Microsoft, so every product out there is Microsoft first, Unix if we get around to it.

    My only reason for switching was the lack of choice I was getting. I never had to restart for updates, because it automatically updated nightly when I turned the machine off, so updates were very non-invasive. But the fact that I wasn’t trusted enough with my own computer to turn those updates completely off if I wanted to, on top of every major update seeming to hard-push the Office suite and respect my privacy less and less, put me on the edge of switching every time it happened.

    But then came the recent rumor wave that Windows 10, when it reached end of life, wasn’t going to be handled the way every other OS of theirs has been, with security updates released past closing. Instead, they were going to open the business-only support tier to the standard customer and offer Windows 10 updates at a subscription price. On top of the fact that Windows 11 wasn’t going to support how I wanted to set my computer up without having to reinstall it anyway, I just took the plunge and went back to Linux. Overall it has been enjoyable, but I really do miss the ease of being able to just install something and have it work that comes with being in the dominant ecosystem. That being said, it is nice not having to worry about what a mega-company thinks I should run on the computers that I paid for, built, and set up myself.



  • I’m currently running Proxmox on a 32 GB server with a Ryzen 5600G, and it’s going fine. The containers don’t actually use all that much RAM, and I’m actually seeing better benchmarks than I did when I ran it as a bare-metal Ubuntu server. My biggest issue has actually been increased IO strain more than anything, because it’s a lot more IO-heavy now that everything’s containerized. I think I could easily run it with less RAM; I would just have to turn off some of the more RAM-intensive items.

    As for whether I regret changing: no way, José. I absolutely love having everything containerized, because I can set things up how I want, when I want, and if I end up screwing something up configuration-wise, or decide I no longer need a service, I can just nuke the container without having to remember what I installed for that program or whether other programs need its dependencies to work. Plus, while I haven’t tinkered as much in this area, you can hard-set what resources you want to allot to each instance. So if you have a program like, say, a Pi-hole that you know never needs more than x amount of resources to work properly, you can restrict what it can use, and if something does go wrong with it, it won’t eat all of your system resources.

    The biggest con is probably having to figure out the networking side, because every container is going to have a different IP address. I found a web dashboard is my friend: I have Heimdall tell me where all my services are, and I just click an icon to be taken to the right IP address. It took a lot of work to figure out how it all operates and how to get it working, but the benefits have been amazing. Just make sure you have a spare disk to temporarily clone partitions to, because it’s extremely difficult to reuse existing disks already in the machine. I’ve been slowly going one disk at a time: copying the data over to an external drive, nuking the disk, reinitializing it as part of the Proxmox LVM, and then copying the data back over into the appropriate image file.
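The resource-capping mentioned above can be done from the Proxmox shell with `pct` for LXC containers; a sketch (the VMID and values here are made-up examples, not a recommendation):

```shell
# Cap a hypothetical Pi-hole container (VMID 110): 512 MB RAM, no swap, 1 core
pct set 110 --memory 512 --swap 0 --cores 1
# Optionally limit it to half of one CPU's time
pct set 110 --cpulimit 0.5
```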


  • I personally will never use Nextcloud. It’s nice interface-wise, but while I was researching the product I came across concerns with its security. Those concerns have since been fixed, but the way they resolved the issue made me lose all respect for them as a secure cloud solution.

    Basically, when they first introduced encrypted folders, there was a bug in the encryption code: the only thing that would ever be encrypted was the parent directory, and any subfolder in that directory would simply not be encrypted. The issue with that is that unless you had server-side access to view the files, you had no way of knowing that your files weren’t actually being encrypted.
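For what it’s worth, with server-side access you can at least spot-check whether files are actually ciphertext rather than plaintext by looking at their byte distribution. This is a generic heuristic sketch, not tied to Nextcloud’s on-disk format:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; ~8.0 for ciphertext, much lower for text."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.0) -> bool:
    """Heuristic: well-encrypted data is statistically close to random."""
    return shannon_entropy(data) > threshold

# Plain English text vs. an evenly distributed byte pattern
plaintext = b"These meeting notes were supposed to be encrypted. " * 20
random_ish = bytes((i * 73 + 41) % 256 for i in range(4096))
```

A real check would read the first few KB of each file in the data directory and flag anything that scores like plaintext.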

    All this is fine, it’s a beta feature, right? Except when I read the GitHub issue on the report, they gaslit the reporter, saying that despite the feature being advertised on their stable branch, it was actually in beta status and therefore should not be used in a production environment. On top of that, the feature was never removed from their features list, and it took another 3 months before anyone even started working on the issue report.

    This might not seem like a big deal to a lot of people, but as someone who is paranoid about security features, the project’s inaction over something that critical, while advertising itself as a business-grade solution, made me flee hardcore.

    That being said, I fully agree with you: out of the different cloud platforms I’ve used, Nextcloud does seem to be the most refined, and it even has the ability to emulate an office suite, which is really nice. I just can’t trust them, so I ended up using Syncthing and took the hit on the feature set.







  • Seconding this. I took the plunge a month or two back myself, using Proxmox for my home lab. Fair warning: if you have never operated anything virtualized outside of VirtualBox or Docker, like me, you are in for an ice plunge, so if you do go this route, prepare for a shock. It is so nice once everything is up and running properly though, and it’s real nice being able to delegate which resource goes where and how much. But getting used to the entire system is a very big jump, and it’s definitely going to be a back-up-the-existing-drive, migrate-data-to-a-new-drive style migration; it is not a fun project to attempt without a spare drive to use as a transfer drive.


  • Judging by the lack of description on this post, and the video’s description, it’s a rage-bait video based on the potential intentions behind a website that logs Discord activity and sells it for profit. The video description gave off a big “I’m trying to egg you into watching this” vibe, so I didn’t go further. The site it names has been shut down a few times now; it just renames itself each time and boom, it’s operational again.

    My opinion is that’s a risk you’ve got to take posting stuff online, and it likely won’t be going anywhere; nothing’s secure unless you trust everyone involved. I wish for privacy, but I don’t expect it unless I can meet that criterion.




  • TPM is a good way. Mine is set up to have / encrypted with LUKS and unlocked via TPM so it can boot with no issues; actually sensitive data, like my user’s /home directory, is encrypted using my password, and the backup system + file server use standard LUKS with a password.

    This setup allows unassisted boot-up of the main systems (such as SSH), which lets you sign in and manually unlock the more sensitive drives.
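One common way to wire up the TPM-backed unlock part is with clevis; a sketch only, and the device path here is a made-up example:

```shell
# Bind the root LUKS volume's unlock to the TPM (sealed against PCR 7)
clevis luks bind -d /dev/nvme0n1p2 tpm2 '{"pcr_ids":"7"}'
# /home and the backup drives keep passphrase-only keyslots,
# unlocked manually after signing in over SSH
```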



  • I fully agree that the author is being super disingenuous here. However, I don’t think Amazon is fully shuttering the program: they’ve stated they’re switching it over to a cart-based system, which has already been proven successful in the trade and doesn’t come with the high ceiling requirements. Their biggest issue has been adoption by other retailers, and switching to that system will lower the ceiling (no pun intended) for entry.


  • Correct me if I’m wrong, but it seems like this article isn’t being completely honest with us.

    The article is phrased as if Amazon has given up on the technology altogether, but then it goes on to say they are changing direction to a system still based on the camera model, just using a closer-up, cart-mounted camera instead of cameras everywhere:

    These smart carts are equipped with scales and sensors to track spending in real time and, of course, allow consumers to skip the checkout.

    This sounds less like walking away from the technology as a whole, as the title implies, and more like “we have a better technology available.” I think this is either disingenuous reporting, or an attempt to frame it in a way that makes Amazon look pro-privacy, when in reality the same exact system is going to be in place; it’s just that the camera will be part of the cart instead of spread across the entire store, which has proven to be a large bottleneck for getting other retailers to adopt it, due to the high ceiling requirement.

    This of course is ignoring what other commenters have stated: this is literally how training models work. You tell the model what to look for, and eventually you have a large enough training set for it to do the job on its own. It’s not as if those 1,000 people are ringing up the groceries; they are assisting the technology by marking what is a product versus what is not, a process Amazon has already downsized a few times from its initial staffing requirements.
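That human-in-the-loop pattern can be illustrated with a toy example, with made-up feature vectors and labels and a trivial nearest-centroid “model” standing in for the real thing (this has nothing to do with Amazon’s actual system):

```python
# Toy human-in-the-loop labeling: humans tag a few feature vectors,
# then a nearest-centroid model classifies new ones on its own.

def train(labeled):
    """labeled: list of (features, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for feats, label in labeled:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, feats):
    """Pick the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, feats))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Human annotators mark what is and isn't a product
# (features are invented, e.g. [size, shininess])
annotations = [
    ([0.9, 0.8], "product"),
    ([0.8, 0.9], "product"),
    ([0.1, 0.2], "not_product"),
    ([0.2, 0.1], "not_product"),
]
model = train(annotations)
```

Once enough annotations accumulate, unseen items get classified without a human in the loop, which is the whole point of the staffing downsizes mentioned above.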