

I don’t have a phone that can scan QR codes.
QR codes are a plain text encoding scheme. If you can screenshot it, you have access to FOSS software that can decode it, and you can paste that URL into your browser.
Thread is a bit more power efficient, which matters for battery-powered devices that aren’t connected to permanent power and don’t need to transmit significant data: door locks, temperature/humidity sensors, things like that. A full wifi networking chip would consume a lot more power in an always-on device.
I’m not sure that would work. Admins need to manage their instance users, yes, but they also need to look out for the posts and comments in the communities hosted on their instance, and be one level of appeal above the mods of those communities. Including the ability to actually delete content hosted in those communities, or cached media on their own servers, in response to legal obligations.
Yeah, from what I remember of what Web 2.0 was, it was services that could be interactive in the browser window, without loading a whole new page each time the user submitted information through HTTP POST. “Ajax” was a hot buzzword among web/tech companies.
Flickr was mind blowing in that you could edit photo captions and titles without navigating away from the page. Gmail could refresh the inbox without reloading the sidebar. Google maps was impressive in that you could drag the map around and zoom within the window, while it fetched the graphical elements necessary on demand.
Or maybe web 2.0 included the ability to implement states in the stateless HTTP protocol. You could log into a page and it would only show you the new/unread items for you personally, rather than showing literally every visitor the exact same thing for the exact same URL.
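The cookie trick behind that per-user state is easy to sketch. Here’s a minimal, self-contained illustration using Python’s standard library (the handler, the `sid=abc123` session id, and the visit counter are all made up for the example — real sites use random ids and proper session stores):

```python
import http.server
import threading
import urllib.request

SESSIONS = {}  # server-side state, keyed by the cookie value


class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        cookie = self.headers.get("Cookie", "")
        if cookie.startswith("sid="):
            # returning visitor: look up their personal state
            sid = cookie[len("sid="):]
            SESSIONS[sid] = SESSIONS.get(sid, 0) + 1
            body = f"welcome back, visit #{SESSIONS[sid]}".encode()
            self.send_response(200)
        else:
            # first visit: hand out a session identifier via Set-Cookie
            body = b"new session"
            self.send_response(200)
            self.send_header("Set-Cookie", "sid=abc123")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet


server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# First request carries no cookie, so every visitor looks identical.
resp1 = urllib.request.urlopen(f"http://127.0.0.1:{port}/")
first = resp1.read()
cookie = resp1.headers["Set-Cookie"]

# Second request carries the cookie, so the same URL now shows *your* state.
req = urllib.request.Request(f"http://127.0.0.1:{port}/", headers={"Cookie": cookie})
second = urllib.request.urlopen(req).read()
server.shutdown()
```

Same URL both times; the only difference is the cookie riding along with the second request, which is the whole trick.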
Social networking became possible with Web 2.0 technologies, but I wouldn’t define Web 2.0 as inherently social. User interaction with a service was the core, and whether the service connected users to each other through its design was kinda beside the point.
Honestly, this is an easy way to share files with non-technical people in the outside world, too. Just open up a port for that very specific purpose, send the link to your friend, watch the one file get downloaded, and then close the port and turn off the http server.
It’s technically not very secure, so it’s a bad idea to leave that unattended, but you can always encrypt a zip file before sending it and let that file-level encryption kinda make up for the lack of network-level encryption. And since it’s a one-off thing, you should close up your firewall/port forwarding when you’re done.
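The whole workflow fits in a few lines with Python’s built-in HTTP server. A sketch (the file name, directory, and port here are placeholders — in real life you’d forward one router port to this and send your friend the link):

```python
import functools
import http.server
import pathlib
import tempfile
import threading
import urllib.request

# The one file you want to hand over (stand-in content for the example).
share_dir = tempfile.mkdtemp()
pathlib.Path(share_dir, "photos.zip").write_bytes(b"pretend this is an encrypted zip")

# Serve only that directory, on an OS-assigned port.
handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=share_dir)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Your friend clicks the link and downloads the one file...
data = urllib.request.urlopen(f"http://127.0.0.1:{port}/photos.zip").read()

# ...then you tear the server down and close the port.
server.shutdown()
```

On a real box that’s just `python3 -m http.server` in the folder you want to share, then killing it once the download finishes.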
Yeah, if OP has command line access through rsync then the server is already configured to allow remote access over NFS or SMB or SSH or FTP or whatever. Setting up a mounted folder through whatever file browser (including the default Windows Explorer in Windows or Finder in MacOS) over the same protocol should be trivial, and not require any additional server side configuration.
Yeah, I mean I do still use rsync for the stuff that would take a long time, but for one-off file movement I just use a mounted network drive in the normal file browser, including on Windows and MacOS machines.
What if I told you that there are really stupid comments on Lemmy as well?
That’s why I think the history of the U.S. phone system is so important. AT&T had to be dragged into interoperability by government regulation nearly every step of the way, but ended up needing to invent and publish the technical standards that made federation/interoperability possible, after government agencies started mandating them. The technical infeasibility of opening up a proprietary network has been overcome before, with much more complexity at the lower OSI layers, including defining new open standards regarding the physical layer of actual copper lines and switches.
As a result, that also opens up Apple’s discounting strategy of selling the one-year-old model at a lower price. If an Apple model gets updates for 6 years after release, then buying an 18-month-old model (but as a new phone) still assures you of 4.5 years of updates.
I’d argue that telephones are the original federated service. There were fits and starts to getting the proprietary Bell/AT&T network to play nice with devices or lines not operated by them, but the initial system for long distance calling over the North American Numbering Plan made it possible for an AT&T customer to dial non-AT&T customers by the early 1950’s, and set the groundwork for the technical feasibility of the breakup of the AT&T/Bell monopoly.
We didn’t call it spam then, but unsolicited phone calls have always been a problem.
(the preview fetch is not e2ee afaik)
Technically, it is, but end-to-end encryption only covers the data between the ends, not what one of the ends chooses to do with it. If one end of the conversation logs the conversation in an insecure way, the conversation itself might technically be encrypted in transit, but its contents can still be learned by someone else. The same goes if one end simply forwards a message to a new party who wasn’t part of the original conversation.
The link previews happen outside of the conversation, and that fetch can be seen by people like the owner of the website, your ISP, and maybe WhatsApp itself (if it’s configured that way; I’m not sure whether it is).
So end to end isn’t a panacea. You have to understand how it fits into the broader context of security and threat models.
Loops really isn’t ready for primetime. It’s too new and unpolished, and will need a bit more time.
I wonder if PeerTube can scale. YouTube has a whole sophisticated system for ingesting and transcoding videos into dozens of formats, with tradeoffs between computational complexity and file size/bandwidth, which requires some projection of which videos will be downloaded the most in the future (and by which types of clients, with support for which codecs, etc.). Doing that takes a lot of networking/computing/memory/storage resources, and I’m not sure the software can handle that kind of load.
Works for me on Sync.
Exactly. To extend the junk food analogy, this is like making donuts from scratch in your own kitchen: customized to your preferences, maybe tastes better, but ultimately you’re still making a mess in your kitchen and eating unhealthy.
My theory is that there are quite a few servers that are choosing to defederate. The number of total servers continues to drop, according to fedidb.
Or admins are just finding it not worth bothering with administering their own server and turning them off.
It sounds like you want a way to collect articles, including full text offline, and organize them in a searchable way. Why do you need RSS for this? Just use a blogging platform where you can organize each post, list/sort/filter by date or topic or original source, and use the search functionality in the actual blog platform.
No forum, email or word processor (even WordPerfect for the c64) or Notepad uses this
I think the convention of two newlines for each paragraph is a longstanding norm in plain text. Old Usenet, listservs, plain-text email, etc. were basically always like that, because you could never control how someone else’s software wrapped your text. Two newlines would be a new paragraph no matter what, while single newlines could create ambiguity between an author’s intentional line break and the rendering software’s decision to wrap an existing line.
For lists and the like, you’d want to be able to have newlines without new paragraphs, but you’d generally want ordered lists or unordered lists at that point.
For an obvious example of markup languages where newlines and carriage returns don’t have syntactic meaning, look at literally the most popular one: HTML.
So markdown was essentially enforcing the then existing best practices for pure plain text communication, to never use single line breaks except in lists.
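The reflow rule itself is mechanical: join single newlines as soft wraps, and only a blank line starts a new paragraph. A toy sketch of just that rule (this is not how any real Markdown parser works; `reflow` and the sample text are invented for illustration):

```python
def reflow(text):
    """Join soft-wrapped lines; only a blank line starts a new paragraph."""
    paragraphs = text.split("\n\n")
    return [" ".join(p.split("\n")) for p in paragraphs]


wrapped = (
    "This sentence was hard-wrapped\n"
    "by somebody's 80-column client.\n"
    "\n"
    "A blank line is unambiguous: new paragraph."
)
```

Running `reflow(wrapped)` merges the hard-wrapped first two lines back into one paragraph and keeps the blank-line-separated text as a second one, which is exactly why the two-newline convention survives any client’s wrapping behavior.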
Most UIs don’t even have a preview option, let alone need one, because they don’t require you to have a stick up your ass to ‘get’ using them.
Before Markdown took over, it was pretty common for forums and other user-input rich text fields to use raw HTML (or a subset of HTML tags), or something syntactically similar to HTML’s opening and closing tags (BBCode, vBulletin markup, etc.).
Markdown was basically the first implementation that was designed to be human readable in plaintext but easily rendered into rich text (with an eye towards HTML). It’s not a coincidence that it took off in the early days of the “web 2.0” embrace of user-submitted content in asynchronous forms.
I get the complaint. But I think markdown makes a lot of sense as a way to store and render text, and that one compromise is worth it overall.
Yeah, looks like a series of voluntary tags in the metadata. Which is important, and probably necessary, but won’t actually do much to stop deceptive generation. Just helps highlight and index the use of some of these AI tools for people who don’t particularly want or need to hide that fact.
Who’s in the overlap of the Venn diagram between “uses some kind of custom OS on their phone where the camera app doesn’t automatically read QR codes” and “doesn’t know how to install or use software that can read QR codes”?