It’s definitely trivial for an AI to solve the “game” or task; I think a more interesting question is whether you could filter them by checking how efficiently they do so.
I’m thinking of something like giving two consecutive math tasks: first something easy like 1 + 1, then something like 11 + 7. While practically everyone would spend a small but detectable amount of extra time on the “harder” problem, an AI would have to be trained on what humans perceive as the harder problem in order to be undetectable. That is, even training the AI to respond with a “human-like” delay isn’t enough; you would have to train it to take a relatively longer delay on the “harder” problems.
Another could be:
Sort the words (ajax, zebra) alphabetically
Sort the words (analogous, analogy) alphabetically
where the human would spend more time on the second. Do you think such an approach would be feasible, or is there a very good, immediate reason it isn’t a common approach already?
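To make the idea concrete, here is a minimal sketch of what a server-side check on relative response times could look like. Everything in it is an assumption for illustration: the challenge pair, the answer format, the timing fields, and the 1.2 ratio threshold are all invented, not taken from any real CAPTCHA system.

```python
# Hypothetical pair of challenges: the second should feel "harder" to a human.
CHALLENGES = [
    {"id": "easy", "prompt": "1 + 1", "answer": "2"},
    {"id": "hard", "prompt": "11 + 7", "answer": "18"},
]

# Illustrative threshold: how much longer (relatively) we expect a human to
# spend on the harder task. A bot answering both instantly, or with a flat
# artificial delay, won't show this ratio.
MIN_HARD_TO_EASY_RATIO = 1.2


def looks_human(timings: dict[str, float], answers: dict[str, str]) -> bool:
    """Return True if both answers are correct AND the hard/easy response-time
    ratio looks human-like."""
    for c in CHALLENGES:
        if answers.get(c["id"]) != c["answer"]:
            return False  # wrong answer: fail regardless of timing
    easy_t = timings.get("easy", 0.0)
    hard_t = timings.get("hard", 0.0)
    if easy_t <= 0 or hard_t <= 0:
        return False  # missing or implausible timing data
    return (hard_t / easy_t) >= MIN_HARD_TO_EASY_RATIO


# A bot answering both in ~50 ms fails; a human taking 1.1 s then 2.4 s passes.
print(looks_human({"easy": 0.05, "hard": 0.05}, {"easy": "2", "hard": "18"}))  # False
print(looks_human({"easy": 1.1, "hard": 2.4}, {"easy": "2", "hard": "18"}))    # True
```

The point of the ratio (rather than an absolute delay) is exactly the one above: a bot can easily fake being slow, but faking being slower in proportion to perceived difficulty requires a model of what humans find difficult.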
I know a lot of sites now use browser fingerprinting and the like in order to determine how likely a user is to be a bot. The modern web tracks a lot of information about users, and all of that can be used to gauge how ‘human’ the user is, though this does raise some other concerns. A sufficiently stalkerish site already knows if you’re human or not.
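As a toy illustration of the “gauge how human the user is” part (not any real fingerprinting library or product), a site might fold its tracked signals into a rough score. The signal names and weights here are entirely made up for the example.

```python
# Invented signals and weights for illustration only.
SIGNAL_WEIGHTS = {
    "has_mouse_movement": 0.35,   # real cursors wander; headless bots often don't
    "plausible_timezone": 0.15,   # browser timezone matches the IP's region
    "common_user_agent": 0.20,    # UA string seen on ordinary browsers
    "cookie_history": 0.30,       # long-lived cookies from earlier visits
}


def humanness_score(signals: dict[str, bool]) -> float:
    """Sum the weights of the signals that look human; higher is more human-like."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))


print(humanness_score({"common_user_agent": True}))         # low: likely a bot
print(humanness_score({k: True for k in SIGNAL_WEIGHTS}))   # high: likely human
```

Which is exactly why it raises those concerns: the score gets better the more the site already knows about you.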
This CGP Grey video is great, and covers how captchas are often used to train the bots themselves. https://www.youtube.com/watch?v=R9OHn5ZF4Uo