• 0 Posts
  • 62 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • I agree that the author didn’t do a great job explaining, but they are right about a few things.

    Primarily, LLMs are not truth machines. That is just flatly and plainly not what they are. No researcher, not even OpenAI, makes such a claim.

    The problem is the public perception that they are. Or that they almost are. Because a lot of the time, they’re right. They might even be right more frequently than some people’s dumber friends. And even when they’re wrong, they sound right; a wrong LLM still sounds smarter than most people’s smartest friends.

    So, I think the point is that there is a perception gap between what LLMs are and what people THINK they are.

    As long as the perception is more optimistic than the reality, a bubble of some kind will exist. But just because there is a “reckoning” somewhere in the future doesn’t imply it will crash to nothing. It just means investment will align more closely with realistic expectations as it becomes clearer what realistic expectations even are.

    LLMs are going to revolutionize, and also destroy, many industries. They will absolutely, fundamentally change the way we interact with technology. No doubt…but for applications that strictly demand correctness, they are not appropriate tools. And investors don’t really understand that yet.





  • It depends on what you mean by “escape”, and what you view as the alternative.

    I suspect that the pursuer could never converge on the prey’s exact instantaneous point, given sufficient initial distance (and orientation). At a certain distance, the prey could enter a stable orbit around the pursuer. I don’t have a mathematical proof, but I strongly suspect this is the case, and I can envision the structure of a proof.

    Could the prey infinitely extend the gap between themselves and the pursuer? No. I don’t have the tooling to actually present such a proof, but of that one I am confident.

    I think if you introduced concepts of obstacles and a “radius of escape” (where, if the gap ever exceeds a threshold, the predator is permanently foiled), then there are almost certainly scenarios where the prey could escape.

    We actually see this scenario play out in nature all the time.


  • Here’s one:

    Actually READ the Mueller Report.

    In the section on social media, it describes how IRA employees masquerade on EVERY side of a wedge issue, with the goal of polarizing.

    Like, their goal isn’t to crush BLM or take away abortion rights. Those are table stakes. They don’t REALLY care. That doesn’t destroy a democracy.

    Their goal is to make having difficult conversations impossible by making each side appear to be as irrational and shrill as possible. They want people to conclude that whoever is on the other side of any issue is a dangerous lost cause, unfit for participation in society.

    Knowing this, people need to understand that whatever your political alignment, there are IRA employees in your spaces saying things you might agree with. But just because you agree with them doesn’t mean their methodologies aren’t specifically tailored to damage Western democracy.





  • I have no idea how poorly the authors of the study communicated their work because I haven’t read the study.

    Jumping to the conclusion that it’s junk because some news blogger wrote an awkward and confusing article about it isn’t fair at all. The press CONSISTENTLY writes absolute trash on the basis of scientific papers. That’s like, science reporting 101.

    And, based on what you’re saying, this still sounds completely different. RNA sequencing may be a mechanism behind the “why”, but you would knock my fucking socks off if you could use RNA to predict the physical geometry of a fingerprint. If you could take a fingerprint and some RNA and say whether they belong to the same person? That would be unbelievably massive.


    Right, so this methodology is a completely different approach. I don’t think it’s fair to call snake oil on this specifically, with the justification that other models (using an entirely different approach) were snake oil.

    Again, I’m not saying whether it’s real; I’m just saying that it’s appropriate to try new approaches to examine things we already THINK we know, and to be prepared to carefully and fairly evaluate new data that calls into question things we thought we knew. That’s just science.