Any bot? That’s just impossible. We’re going to have to tie identity back to meatspace somehow eventually.
An existing bot? I don’t think I can improve on existing captchas, really. I imagine an LLM will eventually tip their hand, too, like giving an “as an AI” answer or just knowing way too much stuff.
It’s not so important to tell the difference between a human and a bot as it is to tell the difference between a human and ten thousand bots. So add a very small cost to passing the test that is trivial to a human but would make mass abuse impractical. Like a million dollars. And then when a bot or two does get through anyway, who cares, you got a million dollars.
Yeah this seems to be the idea behind mCaptcha and other proof of work based solutions. I noticed the developers were working on adding that to Lemmy
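For anyone curious what "proof of work" means here: the client has to burn a little CPU before the server accepts the signup. A minimal hash-based sketch of the idea (not mCaptcha's actual protocol; the difficulty value and hashing scheme are just illustrative):

```python
import hashlib
import secrets

# Client must find a nonce whose SHA-256 hash has `difficulty` leading
# zero bits. Cheap for one human signup, expensive at bot scale.
DIFFICULTY = 12  # illustrative; a real deployment would tune this

def solve(challenge: bytes, difficulty: int = DIFFICULTY) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int = DIFFICULTY) -> bool:
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

# Server issues a random challenge, client solves, server verifies cheaply.
challenge = secrets.token_bytes(16)
assert verify(challenge, solve(challenge))
```

Each extra bit of difficulty doubles the average work for the solver while verification stays one hash, which is what makes the "small cost per account" knob tunable.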
Wait a minute - GPT-4 - is that you asking this question?
How would you design a test that only a human can pass, but a bot cannot?
Very simple.
In every area of the world, there are one or more volunteers per 100 sq km, depending on population. When someone wants to sign up, they knock on this person’s door and shake their hand. The volunteer approves the sign-up as human. For disabled folks, a subset of volunteers will go to them to do this. In extremely remote areas, various individual workarounds can be applied.
Dick pics and tit pics. Bots do not have dicks and tits.
Gives new meaning to Tits or GTFO
There’ll be AI art for that.
This would tie in nicely to existing library systems. As a plus, if your account ever gets stolen or if you’re old and don’t understand this whole technology thing, you can talk to a real person. Like the concept of web of trust.
This has some similarities to the invite-tree method that lobste.rs uses. You have to convince another, existing user that you’re human to join. If a bot invites lots of other bots it’s easy to tree-ban them all, if a human is repeatedly fallible you can remove their invite privileges, but you still get bots in when they trick humans (lobsters isn’t handshakes-at-doorstep level by any margin).
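The tree-ban part is easy to picture: every account records who invited it, so banning a bad account can cascade to everyone it transitively invited. A toy sketch (account names made up for illustration):

```python
# Each account stores its inviter, forming a tree rooted at the site founders.
invited_by = {
    "alice": "root",
    "trickster": "alice",
    "bot1": "trickster",
    "bot2": "bot1",
    "bot3": "bot1",
}

def invitees(user: str) -> list[str]:
    """Everyone transitively invited by `user`."""
    direct = [u for u, inviter in invited_by.items() if inviter == user]
    return direct + [d for u in direct for d in invitees(u)]

# Banning bot1 sweeps up the whole subtree it invited.
banned = {"bot1", *invitees("bot1")}
print(sorted(banned))  # ['bot1', 'bot2', 'bot3']
```

The human who got tricked ("trickster" here) stays, but as the comment notes, repeated mistakes can cost them their invite privileges.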
I convinced another user to invite me over IRC. That’s probably the worst medium for convincing someone that you’re human, but hey, humanity through obscurity :)
I convinced another user to invite me over IRC. That’s probably the worst medium for convincing someone that you’re human
Hahah, I’ll say!
That’s exactly what a bot would say, bake him away toys!
The trolley problem as captcha. AIs literally cannot answer that.
def solve_trolley_problem():
    print("Pull the lever.")
Neither can I
You may want to look up “Gom Jabbar” test.
Ask how much 1 divided by 3 is; then ask to multiply the result by 6.
If the result looks like 1.99999999998, it’s 99.999999998% a bot.
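Whether the 1.999… artifact actually shows up depends on the precision and rounding of the arithmetic involved; a system that truncates the intermediate result leaks it, while standard doubles may round cleanly back. A Python illustration using `decimal` to simulate a low-precision calculator:

```python
from decimal import Decimal, getcontext

getcontext().prec = 12                 # simulate a 12-digit calculator
third = Decimal(1) / Decimal(3)        # 0.333333333333 (truncated here)

getcontext().prec = 28                 # later arithmetic at full precision
print(third * 6)                       # 1.999999999998 -- the tell
print(1 / 3 * 6)                       # plain IEEE doubles often round back to 2
```

So the test mostly catches naive fixed-precision arithmetic; a bot doing the math in ordinary floating point (or just echoing "2") passes, as the Snapchat-bot reply below suggests.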
I just tried this with the Snapchat bot and it replied 2
Damn! Now I’m wondering if I married a fellow human or a bot.
The best tests I am aware of are ones that require contextual understanding of empathy.
For example “You are walking along a beach and see a turtle upside down on it back. It is struggling and cannot move, if it can’t right itself it will starve and die. What do you do?”
Problem is the questions need to be more or less unique.
Is this testing whether I’m a replicant or a lesbian, Mr. Deckard?
I don’t think this technique would stand up to modern LLMs though. I put this question into ChatGPT and got the following:
“I would definitely help the turtle. I would cautiously approach the turtle, making sure not to startle it further, and gently flip it over onto it’s feet. I would also check to make sure it’s healthy and not injured, and take it to a nearby animal rescue if necessary. Additionally, I may share my experience with others to raise awareness about the importance of protecting and preserving our environment and the animals that call it home”
Granted, it’s got the classic ChatGPT over-formality that might clue in someone reading the response, but that could be solved with better prompting on my part. Modern LLMs like ChatGPT are really good at faking empathy and other human social skills, so I don’t think this approach would work.
Modern LLMs like ChatGPT are really good at faking empathy
They’re really not, it’s just giving that answer because a human already gave it, somewhere on the internet. That’s why OP suggested asking unique questions… but that may prove harder than it sounds. 😊
That’s why I used the phrase “faking empathy”. I’m fully aware that ChatGPT doesn’t “understand” the question in any meaningful sense, but that doesn’t stop it from giving meaningful answers to it - that’s literally the whole point of it. And to be frank, if you think a unique question would stump it, I don’t think you really understand how LLMs work. I highly doubt the answer it spit back was copied verbatim from some response in its training data (which, btw, includes more than just internet scraping). It doesn’t just parrot back text as-is; it uses existing, tangentially related text to form its responses. So unless you can think of an ethical quandary totally unlike any ethical discussion ever posed by humanity before (and continue to do so for millions of users), it won’t have any trouble adapting to your unique questions.

It’s pretty easy to test this yourself - do what writers currently do with ChatGPT: give it an entirely fictional context, with things that don’t actually exist in human society, then ask it questions about it. I think you’d be surprised by how well it handles that, even though it’s virtually guaranteed there are no verbatim examples to pull from for the conversation.
"If I encounter a turtle in distress, here’s what I would recommend doing:
Assess the situation: Approach the turtle calmly and determine the extent of its distress. Ensure your safety and be mindful of any potential dangers in the environment.
Protect the turtle: While keeping in mind that turtles can be easily stressed, try to shield the turtle from any direct sunlight or extreme weather conditions to prevent further harm.
Determine the species: If you can, identify the species of the turtle, as different species have different needs and handling requirements. However, if you are unsure, treat the turtle with general care and caution.
Handle the turtle gently: If it is safe to do so, carefully pick up the turtle by its sides, avoiding excessive pressure on the shell. Keep the turtle close to the ground to minimize any potential fall risks.
Return the turtle to an upright position: Find a suitable location nearby where the turtle can be placed in an upright position. Ensure that the surface is not too slippery and provides the turtle with traction to move. Avoid placing the turtle back into the water immediately, as it may be disoriented and in need of rest.
Observe the turtle: Give the turtle some space and time to recover and regain its strength. Monitor its behavior to see if it is able to move on its own. If the turtle seems unable to move or exhibits signs of injury, it would be best to seek assistance from a local wildlife rehabilitation center or animal rescue organization.
Remember, when interacting with wildlife, it’s important to prioritize their well-being and safety. If in doubt, contacting local authorities or experts can provide the most appropriate guidance and support for the situation."
I was gonna point and laugh at god’s failure of a creation, because holy shit, why would you evolve into a thing that can die by simply flipping onto its back?
I mean, advanced AI aside, there are already browser extensions you can pay for that have humans on the other end solving your captchas. It’s pretty much impossible to stop, imo.
A long-term solution would probably be something like a public/private key pair issued by a government to verify you’re a real person, which you must present to sign up for a site. We obviously don’t have the resources to do that 😐 and people are going to leak theirs starting day 1.
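The signup flow that comment describes is a standard challenge-response signature. Sketched below with textbook RSA and tiny numbers purely for illustration - a real system would use something like Ed25519 from a vetted library, with the private key living on the (hypothetical) government-issued credential:

```python
import hashlib
import secrets

# Textbook RSA with toy primes, ONLY to show the flow. Never use in practice.
p, q = 61, 53
n = p * q                             # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent (2753)

def sign(message: bytes) -> int:
    """Holder of the credential signs the site's challenge."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, sig: int) -> bool:
    """Site checks the signature against the public key (n, e)."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h

# Signup: site issues a fresh random challenge, user signs it.
challenge = secrets.token_bytes(16)
assert verify(challenge, sign(challenge))
```

The fresh random challenge is what prevents replay: leaking an old signature doesn’t help, though as noted, leaking the private key itself is game over.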
Honestly, disregarding the dystopian nature of it all, I think Sam Altman’s Worldcoin is a good idea, at least for authentication, because all you need to do is scan your iris to prove you’re a person and you’re in easily. People could steal your eyes tho 💀 so it’s not foolproof. But in general, biometric proof of personhood could be a way forward as well.
This is a bit out there, so bear with me.
In the past, people discovered that if they applied face paint in a specific way, cameras could no longer recognize their face as a face. Armed with that information, you get a few (e.g., four) different people and take a clean picture of each of their heads from close proximity.
Then, you apply makeup to each of them, using the same method that messes with facial recognition software. Next, take a picture of each of their heads from a little further away.
Fill a captcha with pictures of the faces with the makeup. Give the end user a clean-faced picture, and then ask them to match it to the correct image of the same person’s face but with the special makeup.
Mess around with the colours and shadow intensity of the images to make everyone’s picture match more closely with everyone else’s picture if you want to add some extra chaos to it. This last bit will keep everyone out if you go too far with it.
Face recognition ability in humans varies wildly, unfortunately. And that’s without making it harder with face paint. Regular people can get completely fooled by simple things like glasses on/off or a different hairstyle (turns out Clark Kent was on to something after all).
Sounds elaborate… For humans to solve
Do you have any suggestions that would be immune to having the same flaw?
This sounds like something a bot would like to know 🤔
If you can use human screening, you could ask about a recent event that didn’t happen. This causes a problem for LLMs, because their training data has a cutoff, so recent events aren’t covered. Further, they can hallucinate. So by asking about an event that didn’t happen, you might get a hallucinated answer full of details about something that never existed.
Tried it on ChatGPT GPT-4 with Bing and it failed the test, so any other LLM out there shouldn’t stand a chance.
On the other hand you have insecure humans who make stuff up to pretend that they know what you are talking about
Keeping them out of social media is a feature, not a bug.
I’m a big fan of biometric authentication
Like it takes a stool sample?
Not sure if I want to know how you unlock your phone.
Common methods are fingerprint detection, face recognition, iris/retina scanning.
Not sure if I want to know how you unlock your phone.
They take a picture of a skid mark on their underwear. Perfectly clean and safe. A bit awkward when you’re paying at the supermarket.
Just ask them if they are a bot. Remember, you can’t lie on the internet…
I once worked as a third-party contractor for a large internet news site and got assigned a task to replace their current captcha with a partner’s captcha system. This new system would play an ad and ask the user to type the name of the company in that ad.
In my first test I noticed that the company name was available in a public variable on the page, and I showed that to my manager by opening the dev tools and passing the captcha test with just a couple of commands.
His response: “no user is gonna go into that much effort just to avoid typing the company name”.
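For anyone wondering what that kind of bypass looks like: if the "answer" ships to the client, any script can read it - no typing, no ad watched. A hypothetical reconstruction (the variable name and markup are made up):

```python
import re

# Hypothetical page source: the ad captcha's answer is embedded client-side.
page = """
<script>
  var captchaCompanyName = "ExampleCorp";  // used to validate the captcha
</script>
<input id="captcha-answer">
"""

# Any scraper can pull the answer straight out of the markup.
match = re.search(r'captchaCompanyName\s*=\s*"([^"]+)"', page)
print(match.group(1))  # ExampleCorp
```

Which is exactly why captcha validation has to happen server-side: "no user is gonna go into that much effort" stops mattering the moment one person scripts it for everyone.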
If I’m a bot I have to tell you. It’s in the internet constitution.
I’m pretty sure you need two bots, and you ask one bot whether the other bot would lie about being a bot… something like that.
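The puzzle being half-remembered is the classic two-guards riddle: one always lies, one always tells the truth, and asking either one what the *other* would say always gets you the negation of the truth. A quick brute-force check of that claim:

```python
def answer(is_liar: bool, truth: bool) -> bool:
    """What this entity says when asked a yes/no question whose answer is `truth`."""
    return not truth if is_liar else truth

truth = True  # the underlying fact, e.g. "yes, it is a bot"

for a_is_liar in (True, False):
    b_is_liar = not a_is_liar
    # Ask A: "what would B say?" - one lie always ends up in the chain.
    reply = answer(a_is_liar, answer(b_is_liar, truth))
    assert reply is False  # always the inverted truth, regardless of who lies
```

Either the liar misreports the truth-teller, or the truth-teller faithfully reports the lie, so the reply is inverted in both cases.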
This explains why Nerv had three Magi computers in Evangelion.