• 0 Posts
  • 120 Comments
Joined 1 year ago
Cake day: June 11th, 2023



  • In fairness, the headlines written around this were generally atrocious, save a few (shout out to IGN and the original reporter, which may or may not have been techradar). Sure, in most of those you could read a more complete quote inside, but… stopping at the headline isn’t just a gamer thing. Clickbait is dangerous for a reason.

    And also in fairness, the point he’s making is still not great. I mean, he’s the guy in charge of their subscription service, so I wouldn’t expect him to be too negative on the idea, but he’s still saying that it’s a future that will come. Not that all models will coexist, but that a Netflix future for gaming is coming.

    But yeah, gamers can be hostile without justification and often default to treating every relationship with the people making the games as an antagonistic or competitive one, which is a bummer. In that context, letting this guy talk was clearly a mistake.



  • Alright, I was only gently pointing it out because what he actually said is still a pretty bad take, but at this point it’s just annoying.

    No, he didn’t say that.

    He said that gaming subscriptions won’t take off UNTIL gamers get used to not owning their games. Which… yeah, it checks out.

    The all-subscription future already sucks, can we at least limit our outrage to the actual problem? I swear, I have no idea why gaming industry people ever talk to anybody. Nothing good ever comes of it.





  • I mean, you can “buy” stuff on Amazon Prime Video outside the subscription. Unlike Netflix and other platforms, they let you “buy or rent” streaming movies, which is the same as finding the movie on the Amazon storefront and buying the digital copy instead of a physical one.

    Now, does that mean they won’t yank it? Not really. A digital license is a license, not a purchase. Is the word “buy” or “own” inaccurate? I’m hoping not, because as the Sony thing showed, platforms are desperate not to have the courts improvise what rights they owe buyers on digital purchases.

    I’m still buying my movies on 4K Blu-ray, though. And working on ripping all of them for streaming at home, now that I finally have the space.


  • To be clear about what I’m saying, the setup is subtitles in the same language as the audio. So if you’re learning French you set French audio with French subtitles.

    That REALLY helps bind the pronunciation to the writing, and it actually makes it far easier to understand the speech. Assuming you’re reading the subtitles at the same time, of course.

    You won’t understand a lot of it, and you’ll have to put up with the frustration of losing the plot often for a while, but it does help, in my experience.

    Subtitles in your own native language just make you tune out the audio and read the dialogue. That’s not helpful.


  • This is the answer. The answer is Netflix and YouTube. Anything with media using both audio and subtitles in the language you’re trying to learn.

    You still need a teacher to get you past the basics of vocabulary and grammar (and no, language learning apps are probably not an effective way past that), but once you have enough basic words and you understand how a sentence is put together, the answer is to watch media even if you don’t fully understand what’s being said, paying attention and stopping sometimes to use dictionaries and translators on sentences you almost get.

    I know people who spent years spinning their wheels on learning apps while refusing to sit through media in the target language because they get frustrated or tired by the effort of trying to keep up. The effort is a bit annoying, but it really works.


  • I don’t disagree on principle, but I do think it requires some thought.

    Also, that’s still a pretty significant backstop. You’d basically need models to have a way to check generated content for copyright, in the way YouTube does, for instance. And whether enforcing that requirement is affordable to anybody but the big companies is already a big debate.

    But hey, maybe we can solve both issues the same way. We sure as hell need a better way to handle mass human-produced content and its interactions with IP. The current system does not work and it grandfathers in the big players in UGC, so whatever we come up with should work for both human and computer-generated content.


  • That’s not “coming”, it’s an ongoing process that has been going on for a couple hundred years, and it absolutely does not require ChatGPT.

    People genuinely underestimate how many of these things have been an ongoing concern. Much like crypto isn’t that different from what you can do with a server, “AI” isn’t a magic key that unlocks automation. I don’t even know how this mental model works. Is the idea that companies currently hiring millions of copywriters will just rely on automated tools? I get that yeah, a bunch of call center people may get replaced (again, a process that has been ongoing for decades), but how is compensating Facebook for scrubbing their social media posts for text data going to make that happen less?

    Again, I think people don’t understand the parameters of the problem, which is different from saying that there is no problem here. If anything the conversation is a net positive in that we should have been having it in 2010 when Amazon and Facebook and Google were all-in on this process already through both ML tools and other forms of data analysis.


  • I’m gonna say those circumstances changed when digital copies and the Internet became a thing, but at least we’re having the conversation now, I suppose.

    I agree that ML image and text generation can create something that breaks copyright. You can certainly duplicate images or use copyrighted characters. This is also true of YouTube videos and TikToks and a lot of human-created art. I think it’s a fascinating question to ponder whether the infraction is in what the tool generates (i.e. it made a picture of Spider-Man and sold it to you for money, which is under copyright and thus can’t be used that way) or in the ingest that enables it to do that (i.e. it learned on pictures of Spider-Man available on the Internet, and thus all output is tainted because the images are copyrighted).

    The first option makes more sense to me than the second, but if I’m being honest I don’t know if the entire framework makes sense at this point at all.


  • A lot of this can be traced back to the invention of photography, which is a fun point of reference, if one goes to dig up the debate at the time.

    In any case, the idea that humans can only produce so fast for so long and somehow that cleans the channel just doesn’t track. We are flooded by low quality content enabled by social media. There are seven billion of us, two or three billion of those are on social platforms, and a whole bunch of the content being shared in channels is created by using corporate tools to make stuff by pointing phones at it. I guarantee that people will still go to museums to look at art regardless of how much cookie cutter AI stuff gets shared.

    However, I absolutely wouldn’t want a handful of corporations to have the ability to empower their employed artists with tools that run 10x faster than what freelance artists have. That is a horrifying proposition. Art is art. The difficulty isn’t in making the thing technically (say hello, Marcel Duchamp, I bet you thought you had already litigated this). Artists are gonna art, but it’s important that nobody has a monopoly on the tools to make art.


  • It’s not right to say that ML output isn’t good at practical tasks. It is, and it’s already in use and has been for ages. The conversation about these is guided by the relatively anecdotal fact that chatbots and image generation got good, so this stuff went viral, but ML models are being used for a bunch of practical purposes, from speeding up repetitive, time-consuming tasks (e.g. cleaning up motion capture, facial modelling or lip animation in games and movies) to specialized tasks (so much science research is using ML tools these days).

    Now, a lot of those are done using fully owned datasets, but not all, and the ramifications there are also important. People dramatically overestimate the impact of trash product flooding channels (which is already the case, as you say) and dramatically underestimate the applications of the underlying tech beyond the couple of viral apps they only got access to recently.


  • Yep. The effect of this as currently framed is that you get data ownership clauses in EULAs forever and only major data brokers like Google or Meta can afford to use this tech at all. It’s not even a new scenario, it already happened when those exact companies were pushing facial recognition and other big data tools.

    I agree that the basics of modern copyright don’t work great with ML in the mix (or with the Internet in the mix, while we’re at it), but people are leaning on the viral negativity to slip by very unwanted consequences before anybody can make a case for good use of the tech.


  • I think viral outrage aside, there is a very open question about what constitutes fair use in this application. And I think the viral outrage misunderstands the consequences of enforcing the notion that you can’t use openly scrapable online data to build ML models.

    Effectively what the copyright argument does here is make it so that ML models can only legally be made by Meta, Google, Microsoft and maybe a couple of other companies. OpenAI can say whatever, I’m not concerned about them, but I am concerned about open source alternatives getting priced out of that market. I am also concerned about what it does to previously available APIs, as we’ve seen with Twitter and Reddit.

    I get that it’s fashionable to hate on these things, and it’s fashionable to repeat the bit of misinformation about models being a copy or a collage of training data, but there are ramifications here people aren’t talking about and I fear we’re going to the worst possible future on this, where AI models are effectively ubiquitous but legally limited to major data brokers who added clauses to own AI training rights from their billions of users.


  • I mean… yeah, retailer gut checks were a major driver for the industry for ages. The entire myth of the videogame crash in the early eighties, blown out of proportion as it is, comes down to retailers having a bad feeling about gaming after Atari. I’m big on preservation and physical media, but don’t downplay the schadenfreude caused by the absolutely toxic videogame retail industry entirely collapsing after digital distribution became a thing. I’ll buy direct to consumer from boutique retailers all day before I go back to buckets of games stolen from little kids and retailers keeping shelf space hostage based on how some rep’s E3 afterparties went.

    That said, those guys really did flood the market with cookie cutter games in a very short time there for a while. There were a LOT of these.

    Weirdly, Neverwinter Nights must have done extremely well, given how much credit Bioware gives it for redefining the genre, but at the time I remember being frustrated by it. It looked worse than the 2D stuff, and the user generated content was fun to mess with, but it didn’t create the huge endless content mill you’d expect from something like that today.

    I should go look up if there’s any data about how commercially successful it really was somewhere. Any pointers?