Signal is the same in that regard.
Yes. The thing is that then you’re no longer using yt-dlp anonymously.
The next step would be trying to detect that case… maybe adding captchas when there’s even a slight suspicion.
Perhaps even to the point of banning users (and then I hope you did not rely on the same account for Gmail or other services).
It’ll be a cat-and-mouse situation. Similar to what happened with Twitter: there are also third-party apps, but many gave up.
I wouldn’t be surprised if at some point they start doing something like what Twitter did and require login to view the content.
The thing is… they are not really disagreeing if they are not saying something that conflicts with or challenges the argument.
They just mistakenly believe they disagree when in fact they are agreeing. That’s what makes it stupid.
If you don’t like it, vote with your wallet
I’d say more: don’t use YouTube if you don’t like it.
It’s very hypocritical to see how everyone bashes YouTube, Twitter, Facebook, Uber, etc. and yet keeps using them as if life would be hell without the luxury of those completely non-essential brands. If you truly don’t like them, just let them die… look for alternatives. Supporting an alternative is what’s gonna hurt them the most, if what you actually want is to force them to change.
There are also a lot of videos from rich YouTube creators complaining about YouTube policies, and yet most of them don’t even try to set up channels on alternative platforms. Many creators have enough resources to launch their own private video podcast services, and yet very few do anything close to even attempting that.
I mean, it would technically be possible to build a computer out of organic, living biological tissue. It wouldn’t be very practical, but it’s technically possible.
I just don’t think it would be very reasonable to consider that the one thing making it intelligent is that it’s made of proteins and living cells instead of silicon and diodes. I’d argue that such a claim would, in itself, be a strong claim too.
Note that “real world truth” is something you can never accurately map with just your senses.
No model of the “real world” is accurate, and not everyone maps the “real world truth” they personally experience through their senses in the same way… or even necessarily in a way that’s truly “correct”, since the senses are often deceiving.
A person who is blind experiences the “real world truth” by mapping it to a different set of models than someone who has additional visual information to mix into that model.
However, that doesn’t mean that the blind person can “never understand” the “real world truth” …it just means that the extent to which they experience that truth is different, since they need to rely on other senses to form their model.
Of course, the more different the senses and experiences between two intelligent beings, the harder it will be for them to communicate with each other in a way that lets them truly empathize. At the end of the day, when we say we “understand” someone, what we mean is that we have found enough evidence to hold the belief that some aspects of our models are similar enough. It doesn’t really mean that what we modeled is truly accurate, nor that if we didn’t understand them then our model (or theirs) is somehow invalid. Sometimes two people are technically referring to the same “real world truth”; they simply don’t understand each other and focus on different aspects/perceptions of it.
Someone (or something) not understanding an idea you hold doesn’t mean that they (or you) aren’t intelligent. It just means you both perceive/model reality in different ways.
Step 1. Analyze the possible consequence / event that you find undesirable
Step 2. Determine whether there’s something you can do to prevent it: if there is, go to step 3; if there isn’t, go to step 4
Step 3. Do it: do the thing you believe can prevent it. After you’ve done it, go back to step 2 and reevaluate whether there’s something else.
Step 4. Since there’s nothing else you can do to prevent it, accept that this consequence might happen and adapt to it… you already did all you could given the circumstances and your current state/ability. You can’t do anything about it anymore, so why worry? Just accept it, and try to make it less “undesirable”.
Step 5. Wait. Entertain yourself some other way… you did your part.
Step 6. Either the event doesn’t happen, or it happens but you already prepared to accept the consequences.
Step 7. Analyze what did (or didn’t) happen and how. Try to understand it better so that in the future you can better predict / adapt under similar circumstances, then go back to step 1.
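Just for illustration, here’s the same loop as a runnable Python sketch. Every name in it (handle_concern, preventive_actions, the printed messages) is a made-up placeholder; only the control flow mirrors the steps above.

```python
# Playful sketch of the loop above. All names are hypothetical
# placeholders; the point is the control flow, not any real API.

def handle_concern(concern: dict) -> None:
    # Step 1: analyze which consequence/event is undesirable.
    print(f"Concern: {concern['event']}")

    # Steps 2-3: while there's something you can do to prevent it,
    # do it, then reevaluate whether there's something else.
    while concern["preventive_actions"]:
        action = concern["preventive_actions"].pop()
        print(f"Doing: {action}")

    # Step 4: nothing else can be done, so accept that it might happen.
    print("Accepting the possibility and adapting to it.")

    # Steps 5-6: wait, entertain yourself, then observe what happens.
    outcome = concern.get("outcome", "it didn't happen")

    # Step 7: learn from what (didn't) happen to predict better next time.
    print(f"Outcome: {outcome}. Updating expectations for the next loop.")

handle_concern({
    "event": "missing a deadline",
    "preventive_actions": ["ask for help", "cut scope"],
})
```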
The AI can only judge by having a neural network trained on what’s a human and what’s an AI (and by the way, for that training you need humans)… which means you can break that test by making an AI that also accesses that same network and uses it to self-test its responses before outputting them, producing only exactly the kind of output the other AI would give a “human” verdict on.
So I don’t think that would work very well; it’ll just be a cat-and-mouse race between the AIs.
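A minimal sketch of that evasion loop, with toy stand-ins: detector() plays the role of the hypothetical human-vs-AI classifier and generate_variant() the role of the text generator. Neither is a real API; only the self-testing loop matters.

```python
import random

def detector(text: str) -> float:
    # Toy stand-in for the human-vs-AI classifier: pretend shorter
    # answers look more "human". A real detector would be a trained model.
    return max(0.0, 1.0 - len(text) / 200)

def generate_variant(prompt: str) -> str:
    # Toy stand-in for sampling another candidate answer from a generator.
    return prompt + " " + "".join(random.choices("abcdefg ", k=random.randint(5, 50)))

def evade(prompt: str, threshold: float = 0.8, tries: int = 100) -> str | None:
    # Self-test each candidate against the same detector, and only output
    # one that the detector would already judge as "human".
    for _ in range(tries):
        candidate = generate_variant(prompt)
        if detector(candidate) >= threshold:
            return candidate
    return None  # every candidate failed; the detector "wins" this round

print(evade("Sure, here's my answer:"))
```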
It could still be Bayesian reasoning, just a much more complex kind, underlaid by a lot of preconceptions (which could themselves have been acquired in a Bayesian way).
Even if the result is random, a highly pre-trained Bayesian network that has seen many puzzles or tests before that do follow non-random patterns might expect a non-random pattern… so those people might have learned not to expect true randomness, since most things aren’t random.
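As a toy illustration with invented numbers: a strong prior that sequences are patterned can dominate the posterior even when the observed data actually fits the random model slightly better.

```python
# Toy Bayesian update with made-up numbers, nothing more.
prior_patterned = 0.9             # learned from many past puzzles that did have patterns
prior_random = 1 - prior_patterned

# Likelihood of an observed 6-symbol binary sequence under each hypothesis:
likelihood_random = (1 / 2) ** 6  # ~0.0156 under a truly random model
likelihood_patterned = 0.01       # a mediocre fit to the known patterns

posterior_patterned = (likelihood_patterned * prior_patterned) / (
    likelihood_patterned * prior_patterned + likelihood_random * prior_random
)
print(f"P(patterned | data) = {posterior_patterned:.2f}")  # ~0.85: the prior still dominates
```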
Yes… the Chinese room experiment misses the point, because the Turing test was never really about figuring out whether or not an algorithm has “consciousness” (what is that, even?)… but about determining whether an algorithm can exhibit intelligent behavior that’s equivalent to/indistinguishable from a human’s.
The Chinese room is useless because the only thing it proves is that people don’t know what consciousness is, or what they are even trying to test.
A test that didn’t require a human could theoretically be run preemptively by the machine itself and solved easily.
I can’t imagine how you would test this in a way that wouldn’t require a human.
You mean “confidentiality”, not privacy.
Just the metadata (whether you personally, traceable to your full name and address, have a Signal account and how much you use it) might already be considered a privacy breach, even if the content of the messages is confidential.