

For me things actually became easier when I got myself a native Linux install instead of Windows. But I guess it depends on your college.
The size difference is not significant; this is about the maintenance burden. Whenever you need to change code where CPU-architecture-specific things happen, you always have to consider what to do with the code paths or compiler flags that concern 486 CPUs.
Here is the announcement by the maintainer Ingo Molnar where he lists some of the things he can now remove and stop worrying about: https://lore.kernel.org/lkml/20250425084216.3913608-1-mingo@kernel.org/
It’s quite cruel of that compiler not being happy until you’re exhausted.
Or they could just have been infected. Especially the ones on Windows 8, which has been EoL for over a year.
Hey OP, regarding Minecraft: It’s a Java program that uses OpenGL for rendering, so it’s not a Windows game but inherently cross-platform. Here’s the official .deb package: https://launcher.mojang.com/download/Minecraft.deb
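If you want to try it, installing it comes down to roughly this on Debian/Ubuntu-based systems (apt resolves the launcher’s dependencies when you point it at the local file):

    # Download the official launcher package and install it with apt,
    # which also pulls in its dependencies:
    wget https://launcher.mojang.com/download/Minecraft.deb
    sudo apt install ./Minecraft.deb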
the school’s IT
I wonder if that even exists. A mix of Windows 8 (EoL) and 10 (almost EoL) running on Haswells with students freely installing Roblox… it all gives an unmaintained vibe.
I like how their release announcements always kind of read like press releases. Even when it’s just the third maintenance release for some normal release train.
You’re not alone in this:
https://discussion.fedoraproject.org/t/usb-tethering-stopped-working-after-f42-update/148809
https://bugzilla.kernel.org/show_bug.cgi?id=220002
https://lore.kernel.org/all/e0df2d85-1296-4317-b717-bd757e3ab928@heusel.eu/
When Debian upgrades to this kernel version you might run into the issue again, unless a fix is deployed before then.
I wanted a mainstream option but not Ubuntu, and one that was preferably offered with KDE Plasma pre-packaged.
So I ended up deciding between Debian and Fedora, and what tipped me toward Fedora was thinking: well, SELinux sounds neat, quite close to what I learned about Mandatory Access Control in the lectures, and besides, knowing a distro close to RHEL might be useful for work.
Now I work in a network team that has been using Debian for 30 years, lol. Kind of ironic, but I don’t regret it, now I just know both.
And fighting SELinux was kind of fun too. I modified my local policies so that systemd can run screen, because I wanted to create a Minecraft service that I could attach to as admin even though it was started by systemd.
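Roughly, the setup looked like this; the unit and module names here are just illustrative, and the exact denials you hit may differ:

    # The service starts the server inside a detached screen session, e.g. in
    # /etc/systemd/system/minecraft.service:
    #   ExecStart=/usr/bin/screen -DmS minecraft /usr/bin/java -jar server.jar
    # SELinux denies parts of that by default, so collect the AVC denials and
    # build a small local policy module from them:
    sudo ausearch -m avc -ts recent | audit2allow -M local-systemd-screen
    sudo semodule -i local-systemd-screen.pp
    # Afterwards the admin can attach to the running session:
    sudo screen -r minecraft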
I don’t know why it comes off as hostile, it wasn’t intended that way. Sorry for not expressing it better!
If the last sentence came across badly, it was meant more as incredulity that people accept all these workarounds instead. There are other comments in here that go to ridiculous lengths to enforce separation, like using the UEFI boot menu to select a disk manually. To me even having two ESPs seems overly cautious, and against the design philosophy. Sharing one ESP is really not an issue (at least as long as you know you’re doing it, as you unfortunately found out the hard way).
First of all: You don’t have to reinstall Windows to get its bootmgr EFI and supporting files back onto the ESP. Installing those from the CLI of a booted install medium is possible; I’ve done it before. You can even install all of Windows manually if you ever need to, it’s just annoying to do with the Windows command-line tools.
Secondly: I’m not familiar with all distro installers, but surely you can just choose not to format the ESP? Worst case you’d have to use manual partitioning, I guess, but it’s not that difficult.
Thirdly: You said Grub doesn’t show the disk. If you mean the Grub command interface didn’t show the disk, then the issue is deeper, at the UEFI or hardware level. If you mean there is no boot entry for the Windows install to select, then it could be that it wasn’t generated because the Windows bootmgr EFI was not found when Grub was installed. Sometimes just booting back into Linux and running os-prober again is enough, if the Windows bootmgr EFI is still around. On my distro os-prober is run automatically when I run grub-mkconfig -o /boot/grub/grub.cfg.
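In case it helps, the whole sequence is roughly the following; command names and paths differ a little between distros (Fedora for example uses grub2-mkconfig and /boot/grub2/grub.cfg):

    # On GRUB 2.06 and newer, os-prober is disabled by default and has to be
    # re-enabled first:
    echo 'GRUB_DISABLE_OS_PROBER=false' | sudo tee -a /etc/default/grub
    # Check whether the Windows bootmgr EFI is found at all:
    sudo os-prober
    # Regenerate the GRUB config so the Windows entry gets (re)added:
    sudo grub-mkconfig -o /boot/grub/grub.cfg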
I’ve always used a shared ESP for my dual boot systems and I certainly don’t reinstall one OS as the result of a change with the other.
Looks okay to me. Not sure how important the last two are, to be honest, but I included them for completeness.
https://github.com/opencloud-eu/opencloud/blob/main/LICENSE
https://github.com/opencloud-eu/web/blob/main/LICENSE
https://github.com/opencloud-eu/web-extensions/blob/main/LICENSE
https://github.com/opencloud-eu/desktop/blob/main/COPYING
https://github.com/opencloud-eu/reva/blob/main/LICENSE
https://github.com/opencloud-eu/rclone/blob/master/COPYING
The marketing statements on the website say the right things too, but they are secondary to the above, obviously:
Openness
OpenCloud is and remains open source software. This means that you can download and use the source code free of charge and without obligation. We welcome and encourage any kind of participation in the work on OpenCloud in the spirit of open source collaboration.
OpenCloud GmbH also offers paid builds of OpenCloud for use in environments where support, professional services and other services are required.
Who are we?
OpenCloud GmbH is a young company founded under the umbrella of the Heinlein Group and employs a team of developers who are familiar with the project code.
The combination of the Heinlein Group’s many years of experience in the open source business and the unwavering enthusiasm of the developers, most of whom have many years of open source experience, provides the perfect foundation for an active project. And we warmly invite everyone to join us!
The foundation
The basis of the project is a fork of a widely used open source project whose components are co-developed by developers from the science organization CERN and other active participants. OpenCloud is now being continuously developed independently by the OpenCloud community and published under the Apache 2.0 and AGPL-3.0 licenses.
In the spirit of reusability of code under free licenses, we are grateful for the strong foundation on which we are building.
One theory is that Tor was opened to the public by the United States Naval Research Laboratory only to create a crowd of users for their agents to hide in. You need a large enough anonymity set for these sorts of technologies to work.
For your convenience here is the interview he’s going over in the video: https://www.nintendo.com/us/whatsnew/ask-the-developer-vol-16-nintendo-switch-2-part-4/
An interesting part is this:
Dohta: If we tried to use technology like software emulators, we’d have to run Switch 2 at full capacity, but that would mean the battery wouldn’t last so long, so we did something that’s somewhere in between a software emulator and hardware compatibility.
Sasaki: This is getting a bit technical, but the process of converting game data for Switch to run on Switch 2 is performed on a real-time basis as the data is read in.
Is it like having Switch games “simultaneously translated” for Switch 2?
Sasaki: That’s right. […]
So it sounds like they are doing some recompilation. /u/jonathansmith14921 had an interesting comment over on Reddit; his suggestion is that they have to recompile the shader bytecode from Maxwell to Ampere to fit the new GPU. Makes sense to me.
Another interesting titbit from that thread: there are official (in)compatibility lists, one for games that are launch-able but have issues and one for games that have issues even launching.
My computer doesn’t really break; I’m Ship-of-Theseus-ing it regularly.
Apart from that, the only one among the normal window-based ones that has felt like it respects my wish to configure things in ways that feel right to me has been KDE Plasma.
9 years and 4 months ago I bought an Acer laptop with a 4-core Intel Skylake with hyperthreading (i7-6700HQ) and an Nvidia GTX 960M, because the laptop I had was slow for compiling in my classes at uni, and I wanted a discrete GPU for the occasional game when away from my desktop PC (winter break and such; I still use it for that, btw). I regretted that three times:
First, when I wanted to install Linux instead of just using VMs. In early 2016 the kernels on live-system ISOs didn’t properly support Skylake yet, so I fucked around with Arch a bunch but didn’t end up keeping it installed. Don’t remember why; probably got busy with schoolwork.
Then a while later, after I had installed Ubuntu or Fedora at some point, the next issue was that the cooperative mode of Bluetooth and Wi-Fi on the included Intel wireless chip wasn’t well supported (I even found an Intel Bluetooth dev saying as much on a mailing list), and it hung sometimes. So I had to make a script to turn the chip off and then rescan the PCI bus; that worked as a workaround but was still annoying.
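The script itself was nothing fancy, essentially just poking sysfs. Something along these lines, run as root; the PCI address is of course specific to my machine (check lspci for yours):

    #!/bin/sh
    # Detach the hung wireless card from the PCI bus...
    echo 1 > /sys/bus/pci/devices/0000:02:00.0/remove
    sleep 2
    # ...then rescan so the device comes back and the driver reinitializes it.
    echo 1 > /sys/bus/pci/rescan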
Finally, when we had Machine Learning classes I thought I might be able to use CUDA locally, so I tried installing the proprietary Nvidia driver and was greeted by a black screen on the next boot. I had to boot from a live system and chroot in to remove the proprietary crap again.
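For anyone who ends up with the same black screen, the recovery goes roughly like this; the device names and the removal command depend on your partition layout and on how the driver was installed, so treat it as a sketch:

    # From the live system: mount the installed root (and ESP), bind-mount the
    # pseudo filesystems, and chroot in:
    sudo mount /dev/sda2 /mnt
    sudo mount /dev/sda1 /mnt/boot/efi
    for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
    sudo chroot /mnt
    # Inside the chroot, remove the driver again, e.g. on Fedora with the
    # RPM Fusion packages:
    dnf remove akmod-nvidia 'xorg-x11-drv-nvidia*'
    # or, if it was installed via the .run installer:
    # nvidia-uninstall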
On my Desktop PC I have used AMD GPUs for quite a while and dual booting Windows and Linux has always been a breeze.
I thought people either used the old 2.2.1 version or jumped ship. Had no idea it was still going.
Ah I’m glad to see the situation seems to have cooled a little.
See this comment and the three following, as well as this one and the two following. I think they can now work it out between the projects reasonably.
PS: This more fundamental proposal for Fedora Workstation that started from the OBS packaging issue is also interesting to read. It seems they are looking to make more limited / focused use of their own Flatpak remote in the future since some old assumptions regarding Flatpaks and Flathub don’t hold so well anymore.
I think you might find this comment by one of the OBS upstream devs interesting:
https://pagure.io/fedora-workstation/issue/463#comment-955899
I scoured their website and they completely fail to explain what they are actually doing on a technical level. I assume it would probably be a GPON network, just based on the offered speed. Not the best type of fiber connectivity, but probably pretty normal for the USA market.
That said, single-mode fiber is absolutely the way forward, and if you replace the devices on the ends it can scale almost indefinitely. So I would jump at the chance of having some laid to your house.
They don’t have IPv6 and they don’t offer static IPs, both of which kind of suck, but it might be acceptable: https://support.surfinternet.com/surf-broadband-fiber-faqs No data caps is good, at least.
Concerning your question about the markings: they spell out their process on this page, and it does include marking existing utilities: https://surfinternet.com/fiber-optic-installation-process/