• 0 Posts
  • 25 Comments
Joined 1 year ago
Cake day: July 8th, 2023


  • Right, so WhatsApp and Messenger are gatekeepers, and they must allow interoperation with anyone who wants to, i.e. me running my own Signal instance?

    There are several stipulations on interoperability in the new regulation (Ctrl+F “interop”). To my understanding they have to make interoperability possible for certain third parties, but how to go about this is not exactly specified on a technical level - meaning the specific way to implement it is left to the gatekeeper. So your Signal server may or may not be able to interoperate, depending on how exactly they go about this.

    They also need to interoperate with Signal, hence if A works with B and C works with A, why wouldn’t B work with C?

    No, they need to enable interoperability, period. The regulation says nothing about Signal (the software) per se. Meta has announced they plan on implementing it based on the Signal protocol (not the Signal messenger software, not the Signal server software).

    Cos if that’s how it works, or if I’m not allowed to interoperate with WhatsApp or Messenger in the first place, then this just seems like it’s handing the monopoly away from the companies to the government and giving the people fuck all.

    To my knowledge the aim of the regulation is exactly that: to allow anybody interoperability with these “core platform services”. The status quo is that the regulation has been announced by the EU, it has gone into effect, and Meta has announced how they will implement interoperability to comply. Once the implementation is available, if it is then found lacking with regard to the regulation, it would be up to the affected third party to sue Meta over it.


  • In Germany, Mein Kampf is banned except for educational purposes, e.g. in history class.

    Strictly speaking this is incorrect, although the situation is somewhat complicated. There are laws that can be and were used to limit its redistribution (mainly the rule against anti-constitutional propaganda), but there are dissenting judgements saying original prints from before the end of WW2 cannot fall under this, since they are pre-constitutional. One particular reprint from 2018 has been classified as “liable to corrupt the young”, but to my knowledge this only means it cannot be publicly advertised.

    What is interesting though is how distribution and reprinting was prevented historically: through copyright. As Hitler’s legal heir, the state of Bavaria held the copyright until it expired in 2015 and simply didn’t grant licenses for anything except versions with scholarly commentary. But technically, since then anybody can print and distribute new copies of the book. Whether this violates any law will then be determined on a case-by-case basis after the fact.


  • a neural network with a series of layers (W in this case would be a single layer)

    I understood this differently. W is not a single layer of a plain neural network; it is a layer of the Transformer architecture, i.e. a whole feed-forward or attention module in its own right. As the paper says, a LoRA:

    injects trainable rank decomposition matrices into each layer of the Transformer architecture

    It basically learns to shift the output of each Transformer layer. But the original Transformer stays intact, which is the whole point: it lets you quickly train a LoRA when you need this extra bias, and you can easily switch to another one for a different task, without re-training your Transformer. So if the source of the bias you want to get rid of is already in the original weights of the Transformer, you are just fighting fire with fire.
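    As a toy sketch of that mechanism (in PyTorch; the class name and hyperparameters are mine, not from the paper):

    ```python
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Minimal LoRA wrapper: freezes a pre-trained linear layer W and
        trains only the low-rank update B @ A that shifts its output."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # W stays intact; only A and B train
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            # B starts at zero, so training begins from the unmodified model
            self.B = nn.Parameter(torch.zeros(base.out_features, r))
            self.scale = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # original output plus the learned low-rank shift
            return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)
    ```

    Switching tasks then just means swapping out A and B; the frozen base weights are never touched.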

    Which is a good approach for specific situations, but not for general ones. In the context of the OP you would need one LoRA to fight it sexualising Asian women, then another one for the next bias you find, and before you know it you have hundreds and your output quality has degraded irrecoverably.


  • Yeah but that’s my point, right?

    That

    1. you do not “replace data until your desired objective”.
    2. the original model stays intact (the W in the picture you embedded).

    Meaning that when you change or remove the LoRA (A and B), the same types of biases will just resurface from the original model (W). Hence “less biased” W being the preferable solution, where possible.
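    In code terms (a toy sketch; random tensors standing in for trained weights):

    ```python
    import torch

    d, r = 16, 4
    W = torch.randn(d, d)  # frozen pre-trained weights, biases included
    A = torch.randn(r, d)  # LoRA factors trained against one particular bias
    B = torch.randn(d, r)
    x = torch.randn(d)

    with_lora = W @ x + B @ (A @ x)  # adapted behaviour
    without_lora = W @ x             # adapter removed: W, biases and all, is back
    ```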

    Don’t get me wrong, LoRAs seem quite interesting, they just don’t seem like a good general approach to fighting model bias.


  • First, there is no such thing as a “de-biased” training set, only sets with whatever target series of biases you define for them to reflect.

    Yes, I obviously meant “de-biased” by the definition of whoever makes the set. I didn’t think it worth mentioning, as it seems self-evident. But again, in concrete terms regarding the OP this just means not having your dataset skewed towards sexualised depictions of certain groups.

    1. either you replace data until your desired objective, which will reduce the model’s quality for any of the alternatives

    […]
    For reference, LoRAs are a sledgehammer approach to apply the first way.

    The paper introducing LoRA seems to disagree (emphasis mine):

    We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks.

    There is no data replaced; the model is not changed at all. In fact, if I’m not misunderstanding it, it adds an additional neural network on top of the pre-trained one, i.e. it’s adding data instead of replacing any. Fighting bias with bias, if you will.
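    Concretely, the forward pass in the paper keeps the pre-trained weight matrix frozen and only adds the low-rank product on top:

    ```latex
    h = W_0 x + \Delta W x = W_0 x + B A x,
    \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k)
    ```

    W_0 receives no gradient updates at all; only A and B are trained.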

    And I think this is relevant to a discussion of all models, as reproduction of training set biases is something common to all neural networks.


  • “Inclusive models” would need to be larger.

    [citation needed]

    To my understanding the problem is that the models reproduce biases in the training material; it is not a question of model size. Alignment is currently a manual process after the initial unsupervised learning phase, often done by click-workers (Reinforcement Learning from Human Feedback, RLHF), and aimed at coaxing the model towards more “politically correct” outputs. But ultimately at that point the damage is already done: the bias is encoded in the model weights and will resurface in the outputs, whether randomly or if you “jailbreak” enough.
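    For reference, the reward model behind RLHF is typically fit on exactly such human preference pairs with a pairwise loss along these lines (as in the InstructGPT paper; notation adapted):

    ```latex
    \mathcal{L}(\theta) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}
    \left[ \log \sigma\big( r_\theta(x, y_w) - r_\theta(x, y_l) \big) \right]
    ```

    where y_w is the output the human rater preferred over y_l. Nothing in this objective removes what the pre-training weights already encode; it only steers outputs after the fact.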

    In the context of the OP, if your training material has a high volume of sexualised depictions of Asian women, the model will reproduce that in its outputs. Which is also the argument the article makes. So what you need for more inclusive models is essentially a de-biased training set designed with that specific purpose in mind.

    I’m glad to be corrected here, especially if you have any sources to look at.


  • Minors can and have used more or less most of the internet safely. What is most of the internet? Services like Omegle or Chaturbate or Stripchat surely are not on it.

    Well, that claim is a bit arbitrary IMHO. For one, I don’t see a reason to exclude the services you mentioned from being part of “most of the internet”. On the contrary, from what I see all of them are clearnet services accessible to the public, so this extraordinary claim would need some evidence behind it, I would say. Secondly, the latter two are explicitly pornographic in nature, so I don’t really see the relevance to the point of children being harmed by accessing them; they shouldn’t be there in the first place. There is of course a valid discussion about moderation to be had if they are used to distribute CSAM, but that seems orthogonal to the question of parental oversight of minors’ internet use.

    Minors have used social media all this while, and other than what Facebook/Instagram on behest of US capitalist machinery has done to minors, […] most services do not abuse human psychology to this degree.

    Again, only according to your arbitrary definition of what “most services” are. Basically all of social media is doing attention hacking, large swaths of the gaming industry intentionally abuse dopamine cycles to sell worthless “digital goods”, and the www is full of dark patterns fuelled in large part by advertisement delivery. I mean, Meta is indubitably a front runner in the race of surveillance capitalism, but isn’t that an argument in favour of Omegle in the context of this discussion? Facebook/Instagram/WhatsApp are much more certainly than Omegle a part of “most of the internet” after all, however you define that, and they are a clear and present danger to children.

    However, children’s minds are highly neuroplastic until adulthood, and a lot of the internet is damaging to the psyche of children, which is an entirely different discussion. If that seems like flipflopping, it is because internet safety has various degrees to it and the definition of safety varies from healthy usage to consumerism to addiction to gray area to developing deviant persona and even illegal uses.

    I don’t think it is a different discussion at all; rather, it’s exactly the crux of the issue. The psyche of children is vulnerable; how do we best protect it, and who is in the best position to effectively do so?

    It is fairly known how peer pressure wins over parental control on minor access to internet, so the “parent’s duty” argument is very flaky and invalid. Education on things rest of the society is freely using is not very conducive to children at the age of puberty (12-16), and 18 is supposedly the adult age.

    It might not be a definitive argument, but it’s certainly not invalid. A parent is chiefly responsible for the safety, education, and behaviour of their children in basically all other areas of life. This responsibility doesn’t go away because the neighbours’ kids peer-pressured them into throwing stones through a window or drinking alcohol. Why should access to the internet be any different?

    So is the argument now going to be letting kids do whatever they want by the time they are 18?

    Well yes, but within the confines of legality obviously. That’s literally the status quo in most jurisdictions, isn’t it?!

    Or will this be decided upon a combination of evaluation of mental age using tests related to Asperger’s, neurodivergence, ADHD and so on? How frequently will these tests be taken by kids?

    Gee I hope not. That sounds like the abyss below the slippery slope. But I don’t think anybody argued for that.

    Will there be exposure of the child to concepts like “absolute American freedom” and various forms of consumerism? Because that is what the child will get exposed to, as soon as he/she meets people outside home, or goes to the market with parents.

    Again, I don’t see the relevance to the Omegle situation. This is just life: the world is a dangerous place, and while society can help by creating laws and such, in the end the ones in the best position to safeguard their children according to their own world view will be the parents. Of course that is a duty in which every individual parent will inevitably fail by some metric, but so will society. Case in point: many children will be exposed to “absolute American freedom and various forms of consumerism” inside their own homes already, so if that’s your metric as a parent, the only one who could ever protect a child from that is you, by preparing them for their inevitable confrontation with those concepts and hoping they take that lesson to heart.

    Their argument comes off as distasteful, even though a whole decade of video streaming exists as proof of Omegle being a key mainstream hub for minor sexual abuse content, with no kinds of methods used by the evasive service owner to combat it. Read the link I supplied in above comments regarding that.

    Yeah, you claimed variously that it is a key part of Omegle “content”, for which I don’t see much corroborating evidence in the links you provided. Both the BBC story and the NCOSE piece seem to reference the same case of an 11-year-old girl using the service unsupervised.

    Which leads me to why I’m taking issue with the statement of Omegle having content. It doesn’t, in the sense most people would understand that. The service revolves around having a conversation with an absolute stranger, and either side of this conversation can record it or publish it. There is no content here unless one participant creates it and distributes it somewhere other than Omegle, or takes other content and distributes it on Omegle. Everything on Omegle is content in the same sense as a phone call is content, which I would argue it isn’t, at least not inherently. It’s an ephemeral conversation unless a participant records it.

    It might be content in the sense argued by the law and the court in the “A.M. v. Omegle” case, but that apparently ended with the motion to dismiss being partly granted and partly denied, which to me as a layperson sounds like a win for Omegle, at least temporarily.

    Furthermore you say Omegle and Brooks didn’t do anything against the abuse, but this is in direct contradiction to what Brooks claims in the message in the OP:

    Omegle worked with law enforcement agencies, and the National Center for Missing and Exploited Children, to help put evildoers in prison where they belong. There are “people” rotting behind bars right now thanks in part to evidence that Omegle proactively collected against them, and tipped the authorities off to.

    And this is all besides the point that giving an 11 year old unsupervised access to Omegle is kind of the same as letting them out into the shady part of town to talk to random strangers (when you ignore the added risk of physical harm there of course). That’s what the website was principally about, meeting random strangers. And if a parent were to let their child do that unsupervised in offline life we would put at least part of the blame for any harm on them.

    The internet wasn’t designed with the safety of children in mind; in fact, it wasn’t designed with anybody’s safety in mind. Saying that it should be is an opinion, but in any case not the current reality. That leaves the majority of the responsibility for the safety of children with the parents. And there are a bunch of things they can do, like not giving them networked devices in the first place, restricting network access with whitelists, or educating them before the parents or others do give access. Yes, this parental control breaks down in social settings, but that is the case for a lot of different aspects of life, and I don’t see how purging everything dangerous for children from the public internet is either a possible or even a desirable solution to this problem.

    Take for example what you and the NCOSE argued for: age verification. The state of the art for that on many explicitly pornographic services is a simple dialogue asking if the user is of legal age in their jurisdiction. The infrastructure to do otherwise, which would require a governmentally issued digital ID of some kind, doesn’t exist in most countries, let alone globally. Never mind the implications this would have for user privacy. Some services tag themselves with a machine-readable identifier (e.g. the voluntary RTA label, “Restricted To Adults”) so that they can be automatically filtered, but that again leaves the parents with the responsibility to set up and maintain said filter. And in the end there will not be a way around that at all, unless you purposely rebuild the internet with a level of control it simply is not engineered to provide currently.

    You should be able to see clearly that I am quite interested in such discussions without the moderator part.

    Well the one who brought that into the discussion was you. Not to diminish your efforts, but I stand by what I said on the matter earlier.


  • Minors can use most of the internet safely.

    I beg to differ. Minors can’t safely use the internet at all; it’s the internet. Every depth of the human psyche is mirrored onto it, and frankly any guardian letting a child onto it without at minimum strong primers on its dangers is derelict in their duty. Which might have been excusable 20-30 years ago, when everybody was confused about what the internet even was, but not so much in 2023.

    If you make another deranged argument like that, you will get the banhammer.

    Just for clarity, I’m not the person you said this to, but I think if you are out here threatening people with bans over a rhetorical question, you might want to take a break. Never mind the disconnect between you saying you haven’t used it at all but purporting to know exactly what kind of “content” was on it these last years, when it didn’t even really have content in the usual sense of the word.