Didn’t pursue codification into law in his first hundred days
As (again) a non-american, doesn’t that require both chambers to support the legislation?
I’m not even american, so I’m not sure what you are on about right now. All I asked was how Roe v. Wade being repealed was Biden’s fault, and the answer apparently is that he did not pack the court.
How genocide fits into Roe v. Wade, or how calling me names somehow helps, I’m still unsure of.
Never let it be forgotten that Roe v. Wade was struck down during a Democrat administration
Ok, but what does that have to do with said Democrat administration? What say did they have in the matter? What could they have done to change the outcome?
They can, and are being made. E.g. the state of accessibility on GNOME.
Yes, and what I’m saying is that it would be expensive compared to not having to do it.
Doing OCR in a very specific format, in a small specific area, using a set of only 9 characters, and having a list of all possible results, is not really the same problem at all.
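A sketch of why a closed set of valid answers makes this so much easier: instead of general text recognition you only have to pick the closest entry from a known list. The field format and candidate values here are made up for illustration.

```python
import difflib

# Hypothetical example: a small fixed field with a known list of all
# values that can legally appear in it.
VALID_CODES = ["2024-01", "2024-02", "2024-03", "2023-12"]

def match_ocr(noisy: str, candidates: list[str]) -> str:
    """Pick the candidate closest to the noisy OCR read."""
    return max(candidates,
               key=lambda c: difflib.SequenceMatcher(None, noisy, c).ratio())

# A typical OCR misread (letter O instead of digit 0) still resolves:
print(match_ocr("2O24-O1", VALID_CODES))  # → 2024-01
```

With an open-ended alphabet and no candidate list, the same misread would just be a wrong answer; the constraint is what does the work.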
How many billion times do you generally do that, and how is battery life after?
Cryptographically signed documents and Matrix?
At horrendous expense, yes. Using it for OCR makes little sense. And compared to just sending the text directly, even OCR is expensive.
The issue is not sending, it is receiving. With a fax you need to do some OCR to extract the text, which you then can feed into e.g. an AI.
PGP does not encrypt the whole email, only part of it: the body is encrypted, but headers like the subject line stay in plaintext.
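A minimal sketch of the point, using Python’s stdlib email module: in a PGP-style message only the body is replaced by ciphertext, while the RFC 5322 headers travel as-is. The `fake_encrypt` function is a stand-in, not real crypto.

```python
from email.message import EmailMessage

def fake_encrypt(plaintext: str) -> str:
    # Stand-in for actual PGP encryption; only the body goes through it.
    return "-----BEGIN PGP MESSAGE-----\n...\n-----END PGP MESSAGE-----"

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.org"
msg["Subject"] = "Quarterly numbers"   # header: never encrypted
msg.set_content(fake_encrypt("the secret body"))

wire = msg.as_string()
print("Quarterly numbers" in wire)  # → True: subject readable on the wire
print("the secret body" in wire)    # → False: body is ciphertext
```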
Sounds like a wildly unscientific statement, considering e.g. ~10% of the US population works in STEM.
How about the current system where we vote and do science?
You forget a piece: “Given these observations, these objectives, and this bit of sound reasoning, …”
Without objectives, no amount of reasoning will tell you what to do. Who sets the objectives?
Obviously the 2nd LLM does not need to reveal the prompt. But you still need an exploit to make it both not recognize the prompt as being suspicious, AND not recognize the system prompt being in the output. Neither of those is trivial alone; in combination they are again an order of magnitude more difficult. And then the same exploit of course needs to actually trick the 1st LLM. That’s one prompt that needs to succeed in exploiting 3 different things.
LLM literally just means “large language model”. What are these supposed principles underlying these models that make them susceptible to the same exploits?
Moving goalposts, you are the one who said even 1000x would not matter.
The second one does not run on the same principles, and the same exploits would not work against it, e.g. it does not accept user commands, it uses different training data, maybe even a different architecture.
You need a prompt that not only exploits two completely different models, but exploits them both at the same time. Claiming that is a 2x increase in difficulty is absurd.
Oh please. If there is a new exploit now every 30 days or so, it would be every hundred years or so at 1000x.
Ok, but now you have to craft a prompt for LLM 1 that:

1. actually exploits LLM 1,
2. is not flagged as suspicious by the 2nd LLM, and
3. does not produce output the 2nd LLM recognizes as containing the system prompt.
Fulfilling all 3 is orders of magnitude harder than fulfilling just the first.
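A back-of-the-envelope version of the argument, assuming (purely for illustration) that one prompt defeats each check independently with the same probability:

```python
# Toy model: a single prompt must defeat three independent checks.
# If each check falls with (hypothetical) probability p, the chance the
# same prompt defeats all three at once is p**3, not 3 * p.
p = 0.01                    # assumed per-check exploit success rate
one_check = p               # difficulty of beating one model
all_three = p ** 3          # same prompt must beat all three together
print(round(one_check / all_three))  # → 10000
```

Independence is the optimistic assumption here; correlated weaknesses would shrink the factor, but it stays multiplicative rather than additive.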
LLM means “large language model”. A classifier can be a large language model. They are not mutually exclusive.
But isn’t it obvious that if a presidential candidate promises some legislation, it is contingent on the legislative branch?