☆ Yσɠƚԋσʂ ☆

  • 406 Posts
  • 664 Comments
Joined 4 years ago
Cake day: January 18th, 2020

  • The United States has done far more harm than good for humanity at large. The individualistic values it champions have produced a fragmented society that leaves many of its citizens in misery. Its global hegemony has resulted in the destruction of numerous countries, with countless lives lost to its military interventions, coups, and regime-change operations around the world. Moreover, the US’s extractive policies have prevented other nations from developing their own economies, perpetuating a cycle of underdevelopment and dependency. Finally, as one of the largest per-capita consumers of energy and one of the major producers of fossil fuels, the United States is among the worst offenders on climate change, exacerbating global environmental crises with its unsustainable practices.

  • I’d say it’s not so much that this tech doesn’t have value, but that it gets hyped up and used for things it really shouldn’t be used for. Specifically, given the way models currently work, they’re not suitable for any scenario where you need an exact answer. So, it’s great for stuff like generative art or creative writing, but absolutely terrible for solving math problems or driving cars. Understanding the limitations of the tech is key to applying it in a sensible way.

  • not working due to hallucinations

    It’s pretty clear that hallucinations are an issue only for specific use cases. This problem certainly doesn’t make ML useless. For example, I find it’s far faster to use a code-oriented model to get an idea of how to solve a problem than going to Stack Overflow. The output of the model doesn’t need to be perfect; it just needs to get me moving in the right direction.

    Furthermore, there is nothing to suggest that the problem of hallucinations is fundamental or that it can’t be addressed going forward. I’ve linked an example of a research team doing precisely that above.

    wasteful in terms of resources

    Sure, but so are plenty of other things. And as I’ve illustrated above, there are already drastic improvements happening in this area.

    creates problematic behaviors in terms of privacy

    Not really a unique problem either.

    creates more inequality

    Don’t see how that’s the case. In fact, I’d argue the opposite is true, especially if the technology is open and available to everyone.

    and other problems and is thus in most cases (say outside of e.g numerical optimization as already done at e.g DoE, so in the “traditional” sense of AI, not the LLM craze) better be entirely ignored.

    There is a lot of hype around this tech, and some of it will die down eventually. However, it would be a mistake to throw the baby out with the bath water.

    what I mean is that the argument of inevitability itself is dangerous, often abused.

    The argument of inevitability stems from the fact that people have already found many commercial uses for this tech, and there is a ton of money being poured into it. This is unlikely to stop regardless of what your personal opinion on the tech is.