![](https://lemmy.blahaj.zone/pictrs/image/ec4a5b82-aa4d-4812-8dce-d4394112bcd2.png)
![](https://lemmy.ml/pictrs/image/d3d059e3-fa3d-45af-ac93-ac894beba378.png)
Wait, wait, wait. You’re telling me people leave their homes?!
Wait, wait, wait. You’re telling me people leave their homes?!
I second this question. How am I even supposed to clean it?
I’m not legally allowed to tell you what I did without lying to you
If I know the way that I die, through anything else I will survive.
Sure, that’s one way we could go with it. To the death.
Where do I fit in there? I reproduced.
I mean we asked you and got six, so…
Yeah. Using people’s names much in conversation just feels like, scummy to me. Like trying to make friends and influence people or whatever
Oh, you mean…um… whatserface?
It’s on Android, but
See, this is why I hate “would you rather…?”
Why can’t it just be content with the amount I feed it, why’s it have to be starving to the degree that it begs?
Ah, yes. From before the flood of modern content
Oh, look, my list, slightly rearranged, missing some star wars stuff, and with some extras for me to try!
Ah, the Athenian model.
I think having some kind of required civics course for the random sounds appointees would do well. Legal language exists for reasons that go beyond being deliberately obtuse, so it could still be used to try and reduce ambiguity
Another fun response is to ask about the things the do support taxes paying for, like the death penalty, and bringing up the way Jesus talked about those things.
1st, I didn’t just say 1000x harder is still easy, I said 10 or 1000x would still be easy compared to multiple different jailbreaks on this thread, a reference to your saying it would be “orders of magnitude harder”
2nd, the difficulty of seeing the system prompt being 1000x harder only makes it take 1000x longer of the difficulty is the only and biggest bottleneck
3rd, if they are both LLMs they are both running on the principles of an LLM, so the techniques that tend to work against them will be similar
4th, the second LLM doesn’t need to be broken to the extent that it reveals its system prompt, just to be confused enough to return a false negative.
And the second llm is running on the same basic principles as the first, so it might be 2 or 4 times harder, but it’s unlikely to be 1000x. But here we are.
You’re welcome to prove me wrong, but I expect if this problem was as easy to solve as you seem to think, it would be more solved by now.
They’re so much quieter, too. Not as easy to notice when you’re the one using the tool, but compare how it sounds to be nearby someone else using one and it’s a biiiig difference