Just a 15 second game like Snake or Helicopter. Should stop a significant number of bots, no?
Like others have said already, bots could likely learn to play those easily … but I’m more concerned about people with disabilities / illnesses that would make playing these games hard, painful or even impossible. Someone who has Parkinson’s or arthritis, for example, might be able to click a big square in an image to solve a captcha, but might have trouble “fine-tuning” their movements fast enough to play a minigame that effectively locks them out of the community if they fail, especially if there is a timer involved.
I wonder if you can detect whether the player is a bot or not. Regardless, most captchas are also ML training, if I remember correctly.
There are two issue posts on the Lemmy GitHub about the captcha options they considered. It is an interesting read. I had no idea there were so many types, or that embedded options even existed. I thought all were 3rd party and most were Google, but I was wrong. Still, there are recent Lemmy posts by the devs basically saying the only option to effectively control the bots is by requiring a valid email for account creation.
With AI capabilities now, surely it’s pretty easy for an AI to follow a set of instructions like: create an email, check the email, click the link in the email, etc. - is that correct? Or, put another way, why would email verification stump ML so consistently if it’s trained to create emails and follow the whole process?
I’m only parroting. The developers of Lemmy mentioned this as the only empirically effective option in the real world. AI in the real world is far dumber than it is framed in the media. Even a well-trained model can’t just run off on a long tangent path to connect the dots. It only works for a very short time under controlled circumstances in the real world before it must be reset. This is not real AI.
I think a reverse Turing test is much harder for a computer to fake. Stopping general bots is not hard. Stopping bots written specifically for the interface is hard.
deleted by creator
Training an AI to play Snake or other simple games is not hard. Making it stop at a specific score might make it slightly harder, but not much. Then you just need to read the text from the screen as well, which is trivial. No, it’s not hard for bots to get past. It might slow actual humans more than bots.
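To make the point concrete (this is just a toy sketch, not anything Lemmy-specific): you don’t even need ML. A scripted agent that greedily steps toward the food can clear a bare-bones Snake-like minigame, and stopping at a chosen score is a single comparison.

```python
# Toy sketch: a scripted "bot" for a bare-bones Snake-like grid game.
# No ML involved -- greedy movement toward the food is enough, and
# stopping at a target score is one `if` statement.

def greedy_move(head, food):
    """Step one grid cell toward the food (ignoring the snake's body)."""
    hx, hy = head
    fx, fy = food
    if hx != fx:
        return (hx + (1 if fx > hx else -1), hy)
    return (hx, hy + (1 if fy > hy else -1))

def play(start, foods, target_score):
    """Eat the given foods in order, quitting once the target is hit."""
    head, score = start, 0
    for food in foods:
        while head != food:
            head = greedy_move(head, food)
        score += 1
        if score >= target_score:  # the "human-like" early stop
            break
    return score
```

A real game client would need input injection and screen reading on top of this, but the decision logic is this simple.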
It’s definitely trivial for an AI to solve the “game” or task, I think an interesting question would be whether you could filter them by checking how efficiently they do so.
I’m thinking something like giving two consecutive math tasks: first you give e.g. 1+1, then you give something like 11 + 7. While probably all people would spend a small but detectably longer amount of time on the “harder” problem, an AI would have to be trained on “what do humans perceive as the harder problem” in order to be undetectable. That is, even training the AI to have a “human-like” delay in responding isn’t enough; you would have to train it to have a relatively longer delay on “harder” problems.
Another could be:
- Sort the words (ajax, zebra) alphabetically
- Sort the words (analogous, analogy) alphabetically
where the human would spend more time on the second. Do you think such an approach would be feasible, or is there a very good, immediate reason it isn’t a common approach already?
I know a lot of sites now use browser fingerprinting and the like in order to determine how likely a user is to be a bot. The modern web tracks a lot of information about users, and all of that can be used to gauge how ‘human’ the user is, though this does raise some other concerns. A sufficiently stalkerish site already knows if you’re human or not.
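A crude server-side version of that scoring might look like the sketch below. The signals and weights are entirely made up for illustration; real fingerprinting systems combine far more signals (canvas fingerprints, mouse movement, TLS handshake details, and so on):

```python
# Crude sketch of header-based bot scoring. Signals and weights are
# invented for illustration only; production systems use many more
# signals than request headers.

SUSPICIOUS_UA_TOKENS = ("headless", "phantomjs", "python-requests", "curl")

def bot_score(headers):
    """Return a rough 0..1 'likely a bot' score from request headers."""
    score = 0.0
    ua = headers.get("User-Agent", "").lower()
    if not ua:
        score += 0.5  # no user agent at all is very unusual for browsers
    elif any(tok in ua for tok in SUSPICIOUS_UA_TOKENS):
        score += 0.4  # known automation / scripting tooling
    if "Accept-Language" not in headers:
        score += 0.3  # real browsers virtually always send this
    if "Cookie" not in headers:
        score += 0.2  # no session history at all
    return min(score, 1.0)
```

The privacy concern is exactly that the same data powering this kind of scoring also identifies individual users.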
This CGP Grey video is great, and covers how many captchas are often used to train the bots. https://www.youtube.com/watch?v=R9OHn5ZF4Uo