The problem is with development ceasing. The source code will remain, but if there’s no dedicated team developing it, bugs will not be fixed and features will not be added.
That makes much more sense, my first intuition was passing people on the sidewalk which… doesn’t seem like a red flag.
What do you mean by
> walking around other earthlings on footpaths etc instead of through
Is an earthling a human, an animal, a plant or subsets of those three? And what is walking through an earthling?
I’m genuinely curious; I have no idea what you mean.
Not a member, can you post the reason?
A foot is like 30 cm, so a cubic foot is roughly 30³ = 27,000 cm³, or 27 liters.
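The arithmetic checks out, at least under the comment’s rounding (1 ft ≈ 30 cm; the exact figure is 30.48 cm, which gives about 28.3 L):

```python
# Approximate a cubic foot in liters, taking 1 ft ≈ 30 cm as in the comment.
side_cm = 30
volume_cm3 = side_cm ** 3      # 30 * 30 * 30 = 27,000 cm^3
liters = volume_cm3 / 1000     # 1 L = 1,000 cm^3
print(liters)                  # 27.0
```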
AI and robotics companies don’t want this to happen. OpenAI, for example, has reportedly fought to “water down” safety regulations and reduce AI-quality requirements. According to an article in Time, it lobbied European Union officials against classifying models like ChatGPT as “high risk,” which would have brought “stringent legal requirements including transparency, traceability, and human oversight.” The reasoning was supposedly that OpenAI did not intend to put its products to high-risk use. That’s a logical twist akin to the Titanic’s owners lobbying against lifeboat inspections on the grounds that it was a “general purpose” vessel that could also sail in warm waters, where there were no icebergs and people could float for days.
What would’ve been high risk? Well:
> In one section of the White Paper OpenAI shared with European officials at the time, the company pushed back against a proposed amendment to the AI Act that would have classified generative AI systems such as ChatGPT and Dall-E as “high risk” if they generated text or imagery that could “falsely appear to a person to be human generated and authentic.”
That does make sense, considering ELIZA from the 60s would fit this description: it pretty much repeated what you wrote to it back in a different style.
I don’t see how generative AI can be considered high risk when it’s literally just fancy keyboard autofill. If a doctor asks ChatGPT what the correct dose of medication for a patient is, it’s not ChatGPT which should be considered high risk but rather the doctor.
I’d say the CEO is the only one who’s overpaid. The other executives make between $200k and $370k, which is a lot of money but barely noteworthy imo.
Yeah, me and my friends did that too, a decade later, though we used something like `shutdown -s -t [time]`.
We used to put this file into the shared folder everyone could access, make it hidden, create a shortcut, change the shortcut’s icon to a folder, and rename it to something along the lines of “maths test solutions”.
We also made some other .bat files which looked like this:
```
powerpoint
powerpoint
powerpoint
powerpoint
```
⋮
It’s blatantly illegal and Facebook/Meta has probably had enough of EU fines already.
Also, see this article from 2 days ago.
Tl;dr:
> The case centred on a challenge by Meta after the German cartel office in 2019 ordered the social media giant to stop collecting users’ data without their consent, calling the practice an abuse of market power.
This is the GitHub project by the way:
https://github.com/anarchivist/worldcat
Clearly, a project whose last commit was 12 years ago should be more than enough evidence that she hacked WorldCat.