

everyone does their own thing, but semantic versioning is specifically:
- Major: Incompatible changes (breaks existing code).
- Minor: New, compatible features.
- Patch: Bug fixes, small improvements.
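A quick toy sketch of those three rules, in case it helps (just illustrative Python, not tied to any real release tool):

```python
# Toy illustration of the semver bump rules above (purely illustrative).
def bump(version: str, change: str) -> str:
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "breaking":   # incompatible change -> bump major, reset the rest
        return f"{major + 1}.0.0"
    if change == "feature":    # new, compatible feature -> bump minor, reset patch
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # bug fix / small improvement -> bump patch

print(bump("1.4.2", "breaking"))  # 2.0.0
print(bump("1.4.2", "feature"))   # 1.5.0
print(bump("1.4.2", "fix"))       # 1.4.3
```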
MLS only deals with encryption and key management, which is great, but that’s been a “solved” problem since TextSecure (now Signal) introduced the TextSecure Protocol (now the Signal Protocol) in 2013.
What I’m aware of that’s missing with RCS / MLS compared to Signal (someone with more recent knowledge, please correct me):
RCS still leaks metadata like a sieve. Considering the platforms that exist today (Signal and SimpleX), encryption alone should not be the bar; plain-text messaging should not even be possible in a modern secure messaging platform. The platform should be open source and engineered to mitigate the collection of metadata, like Signal and SimpleX.
but I thought apple were the good guys /s
not sure tbh. one time I did an uninstall/reinstall and that fixed the issue.
did you try clearing your browser cache? seems to fix a lot.
https://cryptpad.fr/ as an alternative to Google’s online office suite.
end-to-end encrypted and open-source collaboration suite.
still really early in development, but if you primarily work from the browser on a desktop/laptop, it works well enough. I’ve struggled to get sheets working in a mobile browser, but I really like that you can completely self-host it if you want, or just pay them to do it for you.
Reddit/Twitter have had years of development and tons of funding. You’re comparing apples to oranges. Give the little guys (Lemmy/Mastodon) some time to catch up? That they already work as well as they do compared to massive corporations is pretty impressive. And just to counter your experience, I’ve had zero issues with Mastodon. Lemmy has a few minor UX bugs that will eventually get ironed out.
Loops is still very early in development. People need to temper their expectations.
Nearly-daily Apex player here, moving to Linux full-time again now that many more games work on it (knowing Apex no longer works). It will suck, but fuck Microsoft and good riddance, EA.
you can follow hashtags. I follow #opensource and a few other interests and I’ve found some interesting stuff you don’t generally see in other places. But yes, the format is completely different, and I find Lemmy allows for better discussion than Mastodon.
Check out Notesnook. I’ve tried most of the ones you’ve listed and have been really enjoying how well it works compared to the competition, considering it’s end-to-end encrypted.
A few features:
One thing I really like about the project is how open they are about what they’re doing, why they’re doing it and what the future holds. It’s been great seeing their roadmap (https://notesnook.com/roadmap/) and seeing promised features land with new ones being added, and I’ve only been using it for less than a year now!
I was also not sure what this meant, so I asked Google’s Gemini, and I think this clears it up for me:
This means that the creators of Llama 3.3 have chosen to release only the version of the model that has been fine-tuned for following instructions. They are not making the original, “pretrained” version available.
Here’s a breakdown of why this is significant:
- Pretrained models: These are large language models (LLMs) trained on a massive dataset of text and code. They have learned to predict the next word in a sequence, and in doing so, have developed a broad understanding of language and a wide range of general knowledge. However, they may not be very good at following instructions or performing specific tasks.
- Instruction-tuned models: These models are further trained on a dataset of instructions and desired outputs. This fine-tuning process teaches them to follow instructions more effectively, generate more relevant and helpful responses, and perform specific tasks with greater accuracy.
In the case of Llama 3.3 70B, you only have access to the model that has already been optimized for following instructions and engaging in dialogue. You cannot access the initial pretrained model that was used as the foundation for this instruction-tuned version.
Possible reasons why Meta (the creators of Llama) might have made this decision:
- Focus on specific use cases: By releasing only the instruction-tuned model, Meta might be encouraging developers to use Llama 3.3 for assistant-like chat applications and other tasks where following instructions is crucial.
- Competitive advantage: The pretrained model might be considered more valuable intellectual property, and Meta may want to keep it private to maintain a competitive advantage.
- Safety and responsibility: Releasing the pretrained model could potentially lead to its misuse for generating harmful or misleading content. By releasing only the instruction-tuned version, Meta might be trying to mitigate these risks.
Ultimately, the decision to release only the instruction-tuned model reflects Meta’s strategic goals for Llama 3.3 and their approach to responsible AI development.
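To make that concrete (my own illustration, not part of the Gemini answer): with Hugging Face transformers you’d pull the instruction-tuned checkpoint and talk to it through a chat template, since that’s the only variant published. A minimal sketch, assuming you’ve been granted access to the gated meta-llama/Llama-3.3-70B-Instruct repo and have the hardware (or a quantization setup) to actually load a 70B model:

```python
# Minimal sketch: loading the instruction-tuned (and only published) Llama 3.3 70B variant.
# Assumes access to the gated meta-llama/Llama-3.3-70B-Instruct repo on Hugging Face
# and enough GPU memory / quantization to load a 70B model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.3-70B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Instruction-tuned models expect chat-formatted input rather than raw next-token prompts,
# so the prompt goes through the chat template instead of being fed as plain text.
messages = [
    {"role": "user", "content": "Explain pretrained vs. instruction-tuned models in two sentences."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```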
Apex Legends. It’s a difficult game to master, but every once in a while I get “in the zone” and pull off moves/plays that impress even me. It’s not often, but it feels nice when it happens. I still enjoy it even though I “suck” most of the time. I basically play it as a survival game >90% of the time.
So I upgraded and tested not adding a trusted proxy (using Traefik in front of Jellyfin), and nothing broke. Was it supposed to break, or is it just that it’s insecure? Am I less secure by not adding it as a trusted proxy?