  • They have a responsibility to ensure that games they sell continue to work.

    How do you figure that? Valve sells you a game, and they publish the system requirements for that game. If a game doesn’t work on your system then they’ll give you a refund as long as it’s within the refund window. Beyond that, they owe you nothing. For 32-bit titles, one of the technical requirements is 32-bit OS libraries. Let’s say Windows 12 removes support for 32-bit software. What do you think Valve will do? I say they’ll mass-update the system requirements of 32-bit titles to indicate that they’re not compatible with Windows 12 and higher. Historical precedent is on my side here, because this is effectively what Valve did when Apple dropped 32-bit support from OSX.

    They ship libraries on Linux so there’s a common base, and they should also do so for 32-bit games. GOG does this for older games using things like dosbox or whatever, and Steam should follow suit.

    The libraries that Valve ships for Linux support are essentially Proton. They’ve invested a lot of effort into Proton because there’s a strong business case for doing so. 95% of Valve’s customers game on a Windows OS (source: Steam hardware survey), which means that Microsoft could present an existential threat to Valve if Microsoft attempts to lock down their platform. Having a functional alternative OS could dissuade Microsoft from making any anti-competitive moves. Plus, of course, without Proton there’s no Steam Deck. I think making money from the Steam Deck is only Valve’s short-term goal though, which is why they’re opening up SteamOS to other handheld makers. The long-term aim is to shift gaming away from closed-source to open-source platforms so that Valve’s business isn’t reliant on any one OS vendor.

    There’s no business case for doing the same with legacy 32-bit titles. There will be no new 32-bit titles going forward, and there’s hardly any market for existing legacy 32-bit titles. Valve would need to compete with GOG, which is already doing much the same thing, and GOG is barely profitable as it stands. GOG’s 2024 results were a paltry 1.1M profit on 199M revenue (source). That’s a profit margin of about 0.5%, which is not a healthy indicator in a for-profit business; 5% is generally considered low, and 10% or better is a healthy margin. GOG is essentially being propped up by CD Projekt Red’s Witcher/Cyberpunk money (CDPR had 468M profit on 801M revenue in 2024, a profit margin of about 58%, which is wildly high). Valve could prop up a money-losing 32-bit compatibility project with all their income from 64-bit software sales, but I doubt they would.


  • The Steam client is just a launcher. Why is it Valve’s job to make sure that legacy 32-bit games continue to run? They’re not the vendor of the game, and they’re not the vendor of the OS. They’re just a middleman. If the game vendor doesn’t want to patch it to 64-bit, and the OS vendor doesn’t want to maintain 32-bit compatibility, then there’s simply no more support for that combination of OS and game. Valve isn’t required to step in there.

    It may surprise you to learn that Valve already switched the client to 64-bit… for Mac. OSX hasn’t had 32-bit support since 2019, but it still has a Steam client! Valve didn’t do anything for 32-bit-only Mac titles, except drop the “Mac OS compatible” tag once Apple had dropped 32-bit support. That’s all they’re ever going to do for 32-bit-only PC titles, when/if OS vendors completely drop 32-bit support.

    32-bit is dead and it’s somewhat absurd that Steam is still 32-bit.

    Tell that to anyone who bought a legacy title on Steam and now wants to run it on modern hardware. Leaving the Steam client at 32-bit is simply a low-effort way to ensure that the OS has the 32-bit libraries that will be required by any 32-bit title the user happens to launch.






  • Not the person you replied to, but I’m in agreement with them. I did tech hiring for some years for junior roles, and it was quite common to see applicants with a complete alphabet soup of certifications. More often than not, these cert-heavy applicants would show a complete lack of ability to apply that knowledge. For example, they might have a network cert of some kind, yet be unable to competently answer a basic hypothetical like “what steps would you take to diagnose a network connection issue?” I suspect a lot of these applicants crammed for their many certifications, memorized known answers to typical questions, but never actually made any effort to put the knowledge to work. There’s nothing inherently wrong with certifications, but from past experience I’m always wary when I see a CV that’s heavy on certs but light on experience (which could be work experience or school or personal projects).


  • However, it’s worth mentioning that WireGuard is UDP only.

    That’s a very good point, which I completely overlooked.

    If you want something that “just works” under all conditions, then you’re looking at OpenVPN. Bonus: if you want to marginally improve the chance that everything just works, even in the most restrictive places (like hotel wifi), have your VPN use port 443 for TCP and 53 for UDP. These are the most heavily used ports for web and DNS, meaning your VPN traffic will just “blend in” with normal internet noise (disclaimer: yes, deep packet inspection exists, but rustic hotel wifis aren’t going to be using it ;)

    Also good advice. In my case the VPN runs on my home server, where there are no UDP restrictions of any kind, so WireGuard is great in that scenario. For a mobile VPN solution where the network is not under your control and could be locked down in any number of ways, you’re definitely right that OpenVPN will be much more reliable when configured as you suggest.
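
    For anyone wanting to set that up, here’s a rough sketch of what the client side could look like (the hostname is a placeholder, and the server also needs to be listening on those ports, e.g. via a second server instance or a port redirect):

    # Rough OpenVPN client sketch: try TCP on 443 first, then fall back to UDP on 53
    client
    dev tun
    # Each remote line is host, port, protocol; OpenVPN tries them in order
    remote vpn.example.com 443 tcp
    remote vpn.example.com 53 udp
    resolv-retry infinite
    nobind
    persist-key
    persist-tun
    remote-cert-tls server
    # ...plus your usual ca/cert/key (or auth-user-pass) lines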


  • I use WireGuard personally. OpenVPN has been around a long time, and is very configurable. That can be a benefit if you need some specific configuration, but it can also mean more opportunities to configure your connection in a less-secure way (e.g. selecting an older, weaker encryption algorithm). WireGuard is much newer and supports fewer options. For example, it only does one encryption algorithm, but it’s one of the latest and most secure. WireGuard also tends to have faster transfer speeds; I believe that’s because many of OpenVPN’s design choices were made long ago. Those design choices made sense for the processors available at the time, but simply aren’t as performant on modern multi-core CPUs. WireGuard’s more recent design does a better job of taking advantage of modern processors, so it tends to win speed benchmarks by a significant margin. That’s the primary reason I went with WireGuard.
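
    To give a sense of how little there is to configure, a typical WireGuard client config is more or less the following (keys, addresses and the endpoint are placeholders). Note there’s no cipher selection anywhere; the crypto is fixed by the protocol, so there’s nothing to get wrong:

    [Interface]
    PrivateKey = <client private key>
    Address = 10.0.0.2/32

    [Peer]
    PublicKey = <server public key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 0.0.0.0/0
    # Optional, but helps keep the tunnel alive behind NAT
    PersistentKeepalive = 25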

    In terms of vulnerabilities, it’s tough to say which is better. OpenVPN has the longer track record, of course, but its code base is an order of magnitude larger than WireGuard’s. More eyes have been looking at OpenVPN’s code for more time, but there’s more than 10x more OpenVPN code to look at. My personal feeling is that a leaner codebase is generally better for security, simply because there are fewer lines of code in which vulnerabilities can lurk.

    If you do opt for OpenVPN, I believe UDP is generally better for performance. TCP support is mainly there for scenarios where UDP is blocked, or on dodgy connections where TCP’s more proactive handling of dropped packets can reduce the time before a lost packet gets retransmitted.







  • I think you’re referring to FlareSolverr. If so, I’m not aware of a direct replacement.

    Main issue is it’s heavy on resources (I have an rpi4b)

    FlareSolverr does add some memory overhead, but otherwise it’s fairly lightweight. On my system FlareSolverr has been up for 8 days and is using ~300MB:

    NAME           CPU %     MEM USAGE
    flaresolverr   0.01%     310.3MiB
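
    (For reference, that’s just docker stats output trimmed with a format string, e.g. docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}", if you want to compare numbers on your own box.)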
    

    Note that any CPU usage introduced by FlareSolverr is unavoidable because that’s how CloudFlare protection works. CloudFlare creates a workload in the client browser that should be trivial if you’re making a single request, but brings your system to a crawl if you’re trying to send many requests, e.g. DDOSing or scraping. You need to execute that browser-based work somewhere to get past those CloudFlare checks.

    If hosting the FlareSolverr container on your rpi4b would put it under memory or CPU pressure, you could run the Docker container on a different system. When setting up FlareSolverr in Prowlarr you create an indexer proxy with a tag. Any indexer with that tag sends its requests through the proxy instead of sending them directly to the tracker site. When FlareSolverr is running in a local Docker container the address for the proxy is localhost, e.g. http://localhost:8191.

    If you run FlareSolverr’s Docker container on another system that’s accessible to your rpi4b, you could create an indexer proxy whose Host is “http://<other_system_IP>:8191”. Keep security in mind when doing this: if you’ve got a VPN connection on your rpi4b with split tunneling enabled (i.e. connections to local network resources are allowed when the tunnel is up), then this setup would allow requests to those indexers to escape the VPN tunnel.

    On a side note, I’d strongly recommend trying out a Docker-based setup. Aside from FlareSolverr, I ran my servarr setup without containers for years and that was fine, but moving over to Docker made the configuration a lot easier. Before Docker I had a complex set of firewall rules to allow traffic to my local network and my VPN server, but drop any other traffic that wasn’t using the VPN tunnel. All that firewall complexity has now been replaced with a gluetun container, which is much easier to manage and probably more secure. You don’t have to switch everything to Docker in one go, either; you can run a hybrid setup if need be.
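
    Roughly, that kind of setup looks like this in a compose file (a stripped-down sketch, not a full config; the provider settings and key are placeholders, and gluetun’s wiki documents the exact variables each VPN provider needs):

    services:
      gluetun:
        image: qmcgaw/gluetun
        cap_add:
          - NET_ADMIN
        environment:
          # Placeholder provider/credential settings; see gluetun's wiki for
          # the exact variables your VPN provider requires
          - VPN_SERVICE_PROVIDER=<your provider>
          - VPN_TYPE=wireguard
          - WIREGUARD_PRIVATE_KEY=<key>
        ports:
          - 9696:9696   # Prowlarr web UI, published via the gluetun container
      prowlarr:
        image: lscr.io/linuxserver/prowlarr
        # Shares gluetun's network stack, so its traffic can only leave through the VPN
        network_mode: "service:gluetun"
        depends_on:
          - gluetun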

    If you really don’t want to use Docker then you could attempt to install from source on the rpi4b. Be advised that you’re absolutely going off-road if you do this, as it’s not officially supported by the FlareSolverr devs. It requires installing an ARM-based Chromium browser, then setting some environment variables so that FlareSolverr uses that browser instead of trying to download its own. Exact steps are documented in this GitHub comment. I haven’t tested these steps, so YMMV. Honestly, I think this is a bad idea because the full browser will almost certainly require more memory. The browser included in the FlareSolverr container is stripped down to the bare minimum required to pass the CloudFlare checks.

    If you’re just strongly opposed to Docker for whatever reason then I think your best bet would be to combine the two approaches above. Host the FlareSolverr proxy on an x86-based system so you can install from source using the officially supported steps.


  • It’s likely CentOS 7.9, which was released in Nov. 2020 and shipped with kernel version 3.10.0-1160. It’s not completely ridiculous for a one-year-old POS system to have a four-year-old OS. Design for those systems probably started a few years ago, when CentOS 7.9 was relatively recent. For an embedded system the bias would have been toward an established and mature OS, and CentOS 8.x was likely considered “too new” at the time they were speccing these systems. Remotely upgrading between major releases would not be advisable in an embedded system. The RHEL/CentOS in-place upgrade story is… not great. There was zero support for in-place upgrade until RHEL/CentOS 7, and it’s still considered “at your own risk” (source).