Please explain

  • simple@lemm.ee · 1 year ago

    Hands are really complicated, even to draw. Everything else is relatively easy for an AI to guess: faces are usually looking at the camera or to the side, but hands can be in a thousand different positions and poses. It’s hard for the AI to guess what the hands should look like and where the fingers should be. It doesn’t help that people are historically bad at drawing hands, so there’s a lot of garbage in the training data.

    • loathsome dongeater@lemmygrad.ml (OP) · 1 year ago

      That’s true, but I would have thought the models would be able to “understand” hands, since I’m assuming they’ve seen millions of photographs with hands in them by now.

      • queermunist she/her@lemmy.ml · 1 year ago

        I think it’s helpful to remember that the model doesn’t have a skeleton; it’s literally skin deep. It doesn’t understand hands, it understands pixels. Without an understanding of the actual structure, all the AI can do is guess where the pixels go based on the neighboring pixels.

      • SheeEttin@lemmy.world · 1 year ago

        Sure, and if they were illustrative of hands, you’d get good hands as output. But they’re random photos from random angles, possibly showing only a few fingers. Or maybe with hands clasped. Or worse, two people holding hands. If you throw all of those into the mix and call them all “hands”, a mix is what you’re going to get out.

        Look at this picture: https://petapixel.com/assets/uploads/2023/03/SD1131497946_two_hands_clasped_together-copy.jpg

        You can sort of see where it’s coming from. Some parts look like a handshake, some parts look like two people standing side by side holding hands (both with and without fingers interlaced), and some parts look like one person’s hands resting on their knee. It all depends on how you’re constructing the image, and on what your input data and labeling are.

        Stable Diffusion works by iteratively adjusting pixels until the result looks reasonable enough, rather than reasoning about the macro-scale structure of the whole image. Other methods, like whatever DALL·E 2 uses, seem to work better.
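
        For what it’s worth, here’s a minimal sketch of that denoising process using the Hugging Face diffusers library and the runwayml/stable-diffusion-v1-5 checkpoint (both my assumptions, not something from the thread), prompted with the same “two hands clasped together” phrase as the linked image. The point is just that every step refines the whole image based on learned pixel statistics and the text prompt, with nothing in the loop that models hand anatomy:

        ```python
        # Sketch only: assumes `diffusers`, `torch`, a CUDA GPU, and the
        # runwayml/stable-diffusion-v1-5 checkpoint are available.
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5",
            torch_dtype=torch.float16,
        ).to("cuda")

        # Each inference step denoises the image a little, guided only by the
        # prompt and patterns learned from training photos -- there is no
        # skeleton or hand model anywhere in the process.
        image = pipe(
            "two hands clasped together",
            num_inference_steps=30,
        ).images[0]
        image.save("clasped_hands.png")
        ```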