• 0 Posts
  • 48 Comments
Joined 2 years ago
Cake day: June 17th, 2023

  • The only true “roadblock” I have experienced was when running on the Raspberry Pi, where the CPU was too slow to do any transcoding at all, and the memory was too small and non-upgradable to run much at the same time.

    As soon as I had migrated to a proper desktop (the i7-920), I could run basically everything I would regularly want. And from then on, upgrading was a piece of cake: shut the machine down, unplug, swap the parts, plug in, turn on. Linux happily booted up with no trouble on the new hardware.

    Since my first server used a classic BIOS and the later machines were UEFI, that step required a reinstall… But after the reinstall, I actually just copied all the contents of the root partition over, and it just worked.
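
    That kind of copy is basically a single rsync run from a live environment; a rough sketch (the mount points below are just placeholders for wherever the old and new root partitions are mounted):

    ```bash
    # Boot a live USB, mount the old and new root partitions, then copy
    # everything across while preserving permissions, ACLs and xattrs.
    # /mnt/oldroot and /mnt/newroot are placeholder mount points.
    sudo rsync -aAXH --info=progress2 /mnt/oldroot/ /mnt/newroot/
    ```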

    The main limiting factors for me have been the amount of memory, the number of SATA connectors for disks, and whether the hardware supported hardware transcoding.

    For memory, make sure the motherboard has four memory slots; that makes it easy to start out with a bit of memory and upgrade later. For example, you could start out with 2x 4GB sticks for a total of 8GB, and then later, when you feel like you need more, you buy 2x 8GB sticks. Now you have a total of 24 GB.

    For SATA ports, ensure the motherboard has enough for your needs, and I would also strongly recommend looking for a motherboard with at least 2 PCIe x16 slots, as that will allow you to add many more SATA or SAS ports via a SAS card.

    Hardware transcoding is far from a must. It’s only really necessary if you have a lot of media in formats your client devices don’t support. 95% of my library is H.264 in 1080p, which pretty much everything supports, so it plays directly without any transcoding; most 1080p media is encoded in H.264, so it’s usually a non-issue. 4K media, however, often comes in HEVC (H.265), which many devices do not support. Those files will require transcoding to be playable on such devices, but a CPU can still handle that with “software transcoding”, it’s just much slower and less responsive. So I would consider hardware transcoding a nice convenience, but definitely not a must, and it depends entirely on the encoding of your media library.
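
    If you want to check what your own library looks like before worrying about any of this, ffprobe (which ships with ffmpeg) can print a file’s video codec; for example (the filename is just a placeholder):

    ```bash
    # Print the video codec of a single file (prints e.g. "h264" or "hevc")
    ffprobe -v error -select_streams v:0 \
      -show_entries stream=codec_name \
      -of default=noprint_wrappers=1:nokey=1 "some-movie.mkv"
    ```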

    EDIT: Oh, I just remembered… Beware of non-standard hardware, for example motherboards from Dell and IBM/Lenovo. These often come with non-standard fan mounts and headers, which means you can’t replace the fans. They also often have non-standard power supplies in non-standard form factors, which means that if the power supply dies it’s nearly impossible to replace, and when you upgrade your motherboard you are likely forced to replace the power supply as well, and since the size of the power supply isn’t standard, the new one will not fit in the case… Many of their motherboards also use non-standard mounting, which means that you are forced to replace the case when upgrading the motherboard… You can often find companies selling their old workstations dirt-cheap, which can be a great way to get started, but these workstations are often so non-standard that you practically can’t upgrade them… Often the only standard components in these are hard drives, SSDs, optical disc drives, memory, and any installed PCIe cards.


  • As long as it’s capable of booting into Linux, you can start building a homelab…

    Initially I had a 2-bay Synology NAS, and a Raspberry Pi 3B… It was very modest, but enough to stream media to my TV and run a bunch of different stuff in docker containers.

    In my house, computer hardware is handed down. I buy something to upgrade my desktop, and whatever falls off that machine is handed down to my wife or my daughter’s machines, then finally it’s handed down to the server.

    At some point my old Core i7-920 ended up in the server. This was plenty to upgrade the server to running Kubernetes with even more stuff, and even software transcoding some media for streaming. Running BTRFS gave me the flexibility to add various used disks over time.
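
    Growing a BTRFS pool with another used disk is only a couple of commands; roughly something like this (the device name and mount point are placeholders):

    ```bash
    # Add the new disk to the existing BTRFS filesystem...
    sudo btrfs device add /dev/sdX /srv/storage
    # ...then rebalance so existing data gets spread across all devices
    sudo btrfs balance start /srv/storage
    ```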

    At some point the CPU went bad, so I bought an upgrade for my desktop and handed my old CPU down the chain, which freed up an Intel Core i5-2400F for the server. At this point storage and memory started to become the main limiting factors, so I added a PCIe SAS card in IT mode to add more disks.

    At that point my wife needed a faster CPU, so I bought a newer used CPU for her, and her old Intel Core i7-3770 was handed down to the server. That gave quite a boost in raw CPU power.

    I ended up with a spare Intel Core i5-7600 because the first motherboard I bought for my wife was dead, and I found that I could buy a matching motherboard very cheaply, so I upgraded the server, which opened up proper hardware transcoding.

    I have since added 2 Intel NUCs to have a highly available control plane for my cluster.

    This is where my server is at right now, and it’s way beyond sufficient for the media streaming, photo library, various game servers, a lot of self-hosted smart home stuff, and all sorts of other random bits and pieces I want to run.

    My suggestion would be to start out by finding the cheapest possible option, and then learn what your needs are.

    What do you want your server to do? What software do you want to run? What hardware do you want to connect to it? All of this will evolve as you start using your server more and more, and you will learn what you need to buy to achieve what you want.


  • My team is constantly looking for new technologies to make sure we’re not turning ourselves into dinosaurs. We all know that Kubernetes won’t last forever, something better will come along some day.

    That being said I don’t really see the full value of Triton or Xen with unikernels… They might have a bit less performance overhead if used correctly, but then again Kubernetes on bare metal also has very little overhead.

    Kubernetes certainly comes with a learning curve, and you need to know how to manage it, but once you have Kubernetes there’s a ton of nifty benefits that appear thanks to the thriving community.

    Need to autoscale based on some kind of queue? Just install the Keda helm chart

    Running in the cloud and want the cluster to autoscale the nodes? Just install the cluster-autoscaler helm chart

    Want to pick up all of your logs and ship them somewhere? Just install the promtail helm chart

    Need a deployment tool? Just install the ArgoCD helm chart

    Need your secrets injected from some secret management solution? Just install the external-secrets helm chart

    Need to vulnerability scan all the images you are using in your cluster? Just install the trivy-operator helm chart

    Need a full monitoring stack? Just install the kube-prometheus-stack helm chart

    Need a logging solution? Just install the loki helm chart

    Need certificates? Just install the cert-manager helm chart (quick example below)
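
    They all follow the same pattern: add the chart’s repository and install the chart. As a sketch with cert-manager (the jetstack repository URL is the official one, but chart values vary between versions, so check the chart’s own docs):

    ```bash
    # Add the chart repository and install the chart into its own namespace
    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    helm install cert-manager jetstack/cert-manager \
      --namespace cert-manager --create-namespace \
      --set installCRDs=true   # chart-specific value, check the chart docs
    ```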

    The true benefit of Kubernetes isn’t Kubernetes itself, but all the bits and pieces the community has made to add value to Kubernetes.


  • Apology accepted, and thank you for not name calling.

    And yeah, if you can save the ops team’s salaries by picking Heroku, then that certainly might offset the costs.

    When you talk about Triton, do you mean this? Because funnily enough, one of its bigger features seems to be that you can run Kubernetes on top of it. It looks pretty cool, but I must say it was quite hard to find proper info on it.

    Triton also seems to push containerization quite heavily, especially Docker… So when you talk about Triton, are you suggesting using the Infrastructure Containers or the Virtual Machines instead?


  • FrederikNJS@lemm.ee to Linux@lemmy.ml · Ghostty terminal is out!

    I’m not quite sure what you are getting at… Are you implying that I’m autistic because I only have 10 pods in a Kubernetes cluster?

    Presently our clusters run roughly 1400 pods, and at this scale there certainly are benefits to using something like Kubernetes.

    If your project is small enough to make sense on Heroku, then that’s awesome, but at some point Heroku stops making sense, both for managing at scale and for cost. Heroku already seems to be 2-4x as expensive as AWS on-demand. Presently we’re investigating moving out of AWS and into a datacenter, as it seems we can reduce our costs by at least an order of magnitude.


  • The right tool for the right job.

    I agree that many small businesses jump to Kube too early. If your entire app is a monolith and maybe a few supplementary services, then Kube is massive overkill.

    But many people also tend to overlook all of the other benefits that suddenly become very easy to add once you already have Kube, such as a common way to collect logs and metrics, injecting instrumentation, autoscaling, automated certificate handling, automated DNS management, encrypting internal network traffic, deployment tools that practically work out of the box, and of course immutable declarative deployments.
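
    As a tiny illustration of how little ceremony some of these take once the cluster exists, basic CPU-based autoscaling of an already-running workload is a one-liner (the deployment name here is made up):

    ```bash
    # Scale the (hypothetical) "webapp" deployment between 2 and 10 replicas,
    # targeting roughly 70% average CPU utilisation
    kubectl autoscale deployment webapp --min=2 --max=10 --cpu-percent=70
    ```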

    Of course you can build all of this yourself, when you need it, but once you have the foundation up and running, it becomes quite easy to just add a helm chart and suddenly have a new capability.

    In my opinion, when the company is big enough to need a dedicated ops team, then it’s big enough to benefit from Kube.


  • The OP made the argument that Zuckerberg wanted to know their passwords so that, if the users reused the same passwords elsewhere, he would be able to log in there and check out their accounts.

    For example he could have seen a profile he was interested in, nabbed their password and looked into their email.

    Not that he wouldn’t have had god mode on their Facebook accounts and needed their passwords to get into those; of course he could have accessed those accounts without needing the password.

    I have not heard this rumor before, though I wouldn’t be completely surprised if it was true.