Lvxferre [he/him]

The catarrhine who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.

  • 0 Posts
  • 532 Comments
Joined 1 year ago
Cake day: January 12th, 2024


  • Yes, it is expensive. But most of that cost is not because of simple applications, like my example with grammar tables. It’s because those models have been scaled up to a bazillion parameters and “trained” with a gorillabyte of scraped data, in the hopes they’ll magically reach sentience and stop telling you to put glue on pizza. It’s because of meaning (semantics and pragmatics), not grammar.

    Also, natural languages don’t really have nonsensical rules; sure, sometimes you see some weird stuff (like Italian genderbending plurals, or English question formation), but even those are procedural: “if X, do Y”. LLMs are actually rather good at regenerating those procedural rules based on examples from the data.
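    To make “procedural” concrete, here’s a toy sketch of the Italian genderbending-plural example as an “if X, do Y” rule. The word list and function name are just illustrative, not an exhaustive treatment of Italian morphology:

```python
# Toy "if X, do Y" grammar rule: a handful of Italian masculine nouns
# in -o take an irregular feminine plural in -a (the "genderbending
# plurals" mentioned above). The dictionary is a small example set.
GENDERBENDING = {"braccio": "braccia", "uovo": "uova", "dito": "dita"}

def pluralize(noun: str) -> str:
    """Apply the irregular rule if it matches, else the regular -o -> -i rule."""
    if noun in GENDERBENDING:        # if X (noun belongs to the irregular class)...
        return GENDERBENDING[noun]   # ...do Y (use the -a plural)
    if noun.endswith("o"):
        return noun[:-1] + "i"       # regular masculine plural
    return noun

print(pluralize("braccio"))  # braccia (irregular rule fires)
print(pluralize("libro"))    # libri (regular rule fires)
```

    The point being: even the “weird stuff” reduces to conditions plus actions, which is exactly the kind of pattern LLMs pick up well from examples.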

    But I wish it had some broader use that would justify its cost.

    I wish they’d cut down the costs based on the current uses: small models for specific applications, dirt cheap in both training and running costs.

    (In both our cases, it’s about matching cost vs. use.)



  • Why not quanta? Don’t you believe in the power of the crystals? Quantum vibrations of the Universe from negative ions from the Himalayan salt lamps give you 153.7% better spiritual connection with the soul of the cosmic rays of the Unity!

    …what makes me sadder about the generative models is that the underlying tech is genuinely interesting. For example, for languages with a large presence online they get the grammar right, so stuff like “give me a [declension | conjugation] table for [noun | verb]” works great, and in any application where accuracy isn’t a big deal (like “give me ideas for [thing]”) you’ll probably get some interesting output. But they certainly won’t give you reliable info about most stuff, unless it’s directly copied from elsewhere.



  • The whole thing can be summed up as the following: they’re selling you a hammer and telling you to use it with screws. Once you hammer the screw, it trashes the wood really bad. Then they’re calling the wood trashing “hallucination”, and promising you better hammers that won’t do this. Except a hammer is not a tool to use with screws dammit, you should be using a screwdriver.

    An AI leaderboard suggests the newest reasoning models used in chatbots are producing less accurate results because of higher hallucination rates.

    So he’s suggesting that the models are producing less accurate results… because they have higher rates of less accurate results? This is a tautological pseudo-explanation.

    AI chatbots from tech companies such as OpenAI and Google have been getting so-called reasoning upgrades over the past months

    When are people going to accept the fact that large “language” models are not general intelligence?

    ideally to make them better at giving us answers we can trust

    Those models are useful, but only a fool trusts = is gullible towards their output.

    OpenAI says the reasoning process isn’t to blame.

    Just like my dog isn’t to blame for the holes in my garden. Because I don’t have a dog.

    This is sounding more and more like model collapse - models perform worse when trained on the output of other models.

    inb4 sealions asking what’s my definition of reasoning in 3…2…1…


  • I wish EU4 had more automation, the amount of micromanagement there was awful. And this sort of game is more interesting when you can focus on the big picture.

    Sadly I don’t trust Hipsters’ Electronic Arts Paradox to do automation right. And by “right” I mean:

    • Transparent. You could reasonably get why the game AI will / won’t take a certain decision, without spending hours on the wiki or fucking around in the game files.
    • Flexible. The best decision is often circumstantial, and playing styles are a thing.
    • Powerful, but not overpowered. The AI’s decisions should be decent, but not the best - a player who takes the time to learn how stuff works should be rewarded. (Or, even better, let the player tweak the AI so it plays optimally.)


  • Since a lot of people are asking what happened, here’s some context.

    Recently Nutomic requested more donations to Lemmy. This was cross-posted everywhere (like here, here, here, here, here). And, inevitably, people started calling out things like:

    • The devs’ defence of authoritarian regimes;
    • tankie here, tankie there;
    • lemmy.ml’s extremely shitty moderation practices;
    • Nutomic’s transphobic message; etc.

    as reasons to not donate to the development.

    That should be enough to get the meme OP shared.

    My take on this matter.

    If I don’t do this, odds are some assumptive trash will assume = lie = bullshit words into my mouth.

    The criticism against the devs is mostly valid, but not the full picture - even if they say all this shit, they’re still creating a platform that enables people to fight against it, and this should be taken into account.

    So it’s all about balancing those two things, you know? On moral and practical matters. For me at least the balance is overall positive; I’d be donating to the platform if I weren’t broke. Plus, continued Lemmy development benefits us, and if the devs need to take day jobs, development slows down.

    But, still… I get people who won’t support them, and I don’t think that they’re completely wrong; it’s just that they weigh things differently than I do. Either way, people should not focus on picking sides, but on being fair.

    I’d also like to encourage people who don’t want to contribute to Lemmy to contribute to either PieFed or Sublinks instead. Both are independent of Lemmy and compete with it, but are still part of the Fediverse.


  • My sister and I spent a day in São Paulo city, visiting our 80yo aunt. She’s in good health, and was extra happy to see us - and with the gift (a crochet purse, made by our mum).

    Speaking of my mum, she has been coughing for a week or so, after she got a cold. Her cough is improving, but I’m still worried about it; I told her to see a doc, but she’s damn stubborn.

    Kika and Siegfrieda (my cats) are keeping up with their annoying but cute routine, I guess: one has 3AM zoomies, the other meows loudly and “shows” us every single problem she has, as if we could stop the rain or the neighbour’s baby from crying.

    It’s pinhão season! It’s pinhão season!

    I’m eating my quota of pinhão this winter! I wish I could plant a Paraná pine at home, but I don’t have enough space for that (those trees get huge). Because of that I’m making do with my future apple bonsai, pepper plants and stuff like that. When I get Rich® I’ll make sure to buy a small house with a big garden and plant a bunch of them.

    Yesterday I prepared some baked rice with hot dogs, and it was glorious. Especially the provolone and Parmesan crust. (I’m calling it baked “rice”, but there was almost no rice there compared with the veggies and sausages.)