• 0 Posts
  • 36 Comments
Joined 2 years ago
Cake day: July 5th, 2023



  • Yeah, from what I remember, Web 2.0 meant services that could be interactive in the browser window, without loading a whole new page each time the user submitted information through an HTTP POST. “Ajax” was a hot buzzword among web/tech companies.

    Flickr was mind-blowing in that you could edit photo captions and titles without navigating away from the page. Gmail could refresh the inbox without reloading the sidebar. Google Maps was impressive in that you could drag the map around and zoom within the window, while it fetched the graphical elements necessary on demand.

    Or maybe Web 2.0 included the ability to implement state on top of the stateless HTTP protocol. You could log into a site and it would show only the new/unread items for you personally, rather than showing literally every visitor the exact same thing for the exact same URL.
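    Just to make the “state on top of stateless HTTP” idea concrete, here’s a rough sketch using Python’s built-in http.server; the cookie name and the visit counter are made up for illustration, not how any real site does it.

    ```python
    # Rough sketch: per-visitor state layered on top of stateless HTTP
    # using a session cookie. The "sid" cookie and SESSIONS dict are
    # made up for illustration; a real site would use signed, expiring sessions.
    import uuid
    from http import cookies
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SESSIONS = {}  # session id -> per-user state (here, just a visit counter)

    class SessionHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Look for an existing session id in the Cookie header.
            jar = cookies.SimpleCookie(self.headers.get("Cookie", ""))
            sid = jar["sid"].value if "sid" in jar else None
            if sid not in SESSIONS:
                sid = uuid.uuid4().hex
                SESSIONS[sid] = {"visits": 0}
            SESSIONS[sid]["visits"] += 1

            body = f"Hello, you've loaded this page {SESSIONS[sid]['visits']} time(s).\n".encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Set-Cookie", f"sid={sid}; HttpOnly")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("localhost", 8000), SessionHandler).serve_forever()
    ```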

    Social networking became possible with Web 2.0 technologies, but I wouldn’t define Web 2.0 as inherently social. User interaction with a service was the core, and whether the service connected users to each other through that service’s design was kinda beside the point.


  • Honestly, this is an easy way to share files with non-technical people in the outside world, too. Just open up a port for that very specific purpose, send the link to your friend, watch the one file get downloaded, and then close the port and turn off the HTTP server.

    It’s technically not very secure, so it’s a bad idea to leave that unattended, but you can always send the file as an encrypted zip and let that file-level encryption kinda make up for the lack of network-level encryption. And since it’s a one-off thing, you should close up your firewall/port forwarding when you’re done.
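    For the curious, a minimal version of that one-off share might look like this (Python 3 assumed; the port and directory are just placeholders):

    ```python
    # Minimal one-off file share: serve a single directory over plain HTTP,
    # then Ctrl+C and close the port-forward once the download is done.
    # Port 8000 and /home/me/outbox are placeholders; use whatever fits.
    from functools import partial
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    handler = partial(SimpleHTTPRequestHandler, directory="/home/me/outbox")
    server = HTTPServer(("0.0.0.0", 8000), handler)
    print("Serving http://<your-public-ip>:8000/ ... Ctrl+C to stop")
    try:
        server.serve_forever()
    except KeyboardInterrupt:
        server.server_close()
    ```

    If you cd into the directory first, the stock python3 -m http.server 8000 one-liner does the same job.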


  • That’s why I think the history of the U.S. phone system is so important. AT&T had to be dragged into interoperability by government regulation nearly every step of the way, but ended up needing to invent and publish the technical standards that made federation/interoperability possible, after government agencies started mandating them. The technical infeasibility of opening up a proprietary network has been overcome before, with much more complexity at the lower OSI layers, including defining new open standards regarding the physical layer of actual copper lines and switches.



  • I’d argue that telephones are the original federated service. There were fits and starts in getting the proprietary Bell/AT&T network to play nice with devices or lines not operated by them, but the initial system for long-distance calling under the North American Numbering Plan made it possible for an AT&T customer to dial non-AT&T customers by the early 1950s, and it laid the groundwork for the technical feasibility of the breakup of the AT&T/Bell monopoly.

    We didn’t call it spam then, but unsolicited phone calls have always been a problem.


  • (the preview fetch is not e2ee afaik)

    Technically, it is, but end-to-end encryption only covers the data between the ends, not what one of the ends chooses to do with it. If one end of the conversation chooses to log the conversation in an insecure way, the conversation itself might technically be encrypted, but its contents can still be learned by someone else. The same goes if one end simply chooses to forward a message to a new party who wasn’t part of the original conversation.

    The link previews happen outside of the conversation, and that fetch can be seen by parties like the owner of the website, your ISP, and maybe WhatsApp itself (if it’s configured that way; I’m not sure whether it is).
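    To make that concrete, a link preview is basically just a plain fetch of the linked page plus scraping its Open Graph tags, something like this rough sketch (not what WhatsApp actually runs, obviously):

    ```python
    # Rough idea of what a link-preview fetch looks like: a plain HTTP(S)
    # request to the linked site, entirely outside the e2ee channel, so the
    # site owner (and anything in between) can see that it happened.
    import re
    import urllib.request

    def fetch_preview(url: str) -> dict:
        req = urllib.request.Request(url, headers={"User-Agent": "preview-bot/0.1"})
        with urllib.request.urlopen(req, timeout=10) as resp:
            html = resp.read(200_000).decode("utf-8", errors="replace")
        preview = {}
        for prop in ("og:title", "og:description", "og:image"):
            m = re.search(
                rf'<meta[^>]+property=["\']{prop}["\'][^>]+content=["\']([^"\']+)',
                html, re.I)
            if m:
                preview[prop] = m.group(1)
        return preview

    print(fetch_preview("https://example.com/"))
    ```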

    So end-to-end encryption isn’t a panacea. You have to understand how it fits into the broader context of security and threat models.


  • Loops really isn’t ready for primetime. It’s too new and unpolished, and will need a bit more time.

    I wonder if PeerTube can scale. YouTube has a whole sophisticated system for ingesting and transcoding videos into dozens of formats, with tradeoffs made between computational complexity and file size/bandwidth, which requires some projection of which videos will be downloaded the most in the future (and by which types of clients, with support for which codecs, etc.). Doing this well takes a lot of networking/computing/memory/storage resources, and I wonder whether PeerTube’s software can keep up.
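    For a sense of what that ingest step involves, here’s a rough sketch of the kind of transcoding ladder a PeerTube-style server has to run for every upload (the heights and quality settings are made up for illustration; the ffmpeg flags are the standard libx264/aac ones):

    ```python
    # Rough sketch of a per-upload transcoding ladder: one H.264/AAC rendition
    # per target height. Heights and CRF values are made up for illustration;
    # real servers tune these (and add more codecs/formats) carefully.
    import subprocess

    RENDITIONS = [(1080, 21), (720, 23), (480, 26), (360, 28)]  # (height, crf)

    def transcode(src: str) -> None:
        for height, crf in RENDITIONS:
            out = f"{src.rsplit('.', 1)[0]}_{height}p.mp4"
            subprocess.run(
                [
                    "ffmpeg", "-y", "-i", src,
                    "-vf", f"scale=-2:{height}",   # keep aspect ratio, even width
                    "-c:v", "libx264", "-crf", str(crf), "-preset", "medium",
                    "-c:a", "aac", "-b:a", "128k",
                    out,
                ],
                check=True,
            )

    transcode("upload.mp4")
    ```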


  • No forum, email or word processor (even WordPerfect for the c64) or Notepad uses this

    I think the convention of 2 newlines for each paragraph is a longstanding norm in plaintext. The old Usenet, listservs, plain-text email, etc., were basically always like that, because you could never control how someone else wraps their text. 2 newlines would mean a new paragraph no matter what, while single newlines could create ambiguity between an author’s intentional line break and the rendering software’s decision to wrap an existing line.

    For lists and the like, you’d want to be able to have newlines without new paragraphs, but you’d generally want ordered lists or unordered lists at that point.

    For an obvious example of markup languages where newlines and carriage returns don’t have syntactic meaning, look at literally the most popular one: HTML.

    So Markdown was essentially enforcing the then-existing best practices for pure plain-text communication: never use single line breaks except in lists.
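    You can see the rule directly by running both cases through a Markdown renderer; this uses the third-party Python-Markdown package purely as a handy example:

    ```python
    # Demonstrates the "two newlines = new paragraph" rule using the
    # third-party Python-Markdown package (pip install markdown).
    import markdown

    wrapped = "This line was wrapped\nby someone's 72-column client."
    paragraphs = "First paragraph.\n\nSecond paragraph."

    print(markdown.markdown(wrapped))
    # -> one <p>: the single newline is just whitespace inside the paragraph
    print(markdown.markdown(paragraphs))
    # -> two separate <p> elements
    ```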

    Most UIs don’t even have a preview option, let alone need one, because they don’t require you to have a stick up your ass to ‘get’ using them.

    It was pretty common before Markdown took over that forums and other user-input rich-text fields used raw HTML (or a subset of HTML tags), or something syntactically similar to HTML’s opening and closing tags (BBCode, vBulletin markup, etc.).

    Markdown was basically the first implementation designed to be human-readable in plaintext but easily rendered into rich text (with an eye towards HTML). It’s not a coincidence that it took off in the early days of the “web 2.0” embrace of user-submitted content in asynchronous forms.

    I get the complaint. But I think Markdown makes a lot of sense as a way to store and render text, and that one compromise is worth it overall.


  • Sometimes the identity of the messenger is important.

    Twitter was super easy to set up with the API to periodically tweet the output of some automated script: a weather forecast, a public safety alert, an air quality alert, a traffic advisory, a sports score, a news headline, etc.

    These are the types of messages where you’d want to subscribe to the actual identity, and maybe even be able to forward them to others (aka retweeting) without compromising the identity verification inherent in the system.

    Twitter was an important service, and that’s why there are so many contenders trying to replace at least part of the experience.
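    For reference, here’s roughly all an alert bot took back then, heavily simplified; the keys and the status text are placeholders, and the old v1.1 statuses/update endpoint has since been retired:

    ```python
    # Roughly what an old-style alert bot looked like against Twitter's
    # (now retired) v1.1 REST API. Keys and the status text are placeholders;
    # requests_oauthlib handles the OAuth 1.0a signing.
    from requests_oauthlib import OAuth1Session

    twitter = OAuth1Session(
        client_key="CONSUMER_KEY",
        client_secret="CONSUMER_SECRET",
        resource_owner_key="ACCESS_TOKEN",
        resource_owner_secret="ACCESS_TOKEN_SECRET",
    )

    status = "Air quality alert: AQI 162 (unhealthy) until 8 PM."  # output of some script
    resp = twitter.post(
        "https://api.twitter.com/1.1/statuses/update.json",
        params={"status": status},
    )
    resp.raise_for_status()
    ```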