(quote)
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a challenge-response security technique: a brief puzzle designed to protect websites from malicious bots. By requiring visitors to pass a short test verifying that they are human rather than a program trying to break into a password-protected account, CAPTCHAs help safeguard users against spam and password-cracking attacks.
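In code, the challenge-response pattern is simple: the server issues a puzzle whose answer it already knows and admits the client only if the reply matches. Here is a minimal Python sketch, assuming a hypothetical in-memory store in place of a real CAPTCHA service’s infrastructure (all names below are illustrative):

```python
import secrets
import time

# Hypothetical in-memory challenge store: token -> (expected answer, issue time).
# A real CAPTCHA service adds image rendering, rate limiting, and risk scoring.
_challenges: dict[str, tuple[str, float]] = {}

def issue_challenge(answer: str) -> str:
    """Server side: register a challenge and return its one-time token."""
    token = secrets.token_urlsafe(16)
    _challenges[token] = (answer, time.time())
    return token  # sent to the client along with the rendered puzzle

def verify_response(token: str, response: str, ttl: float = 120.0) -> bool:
    """Server side: accept the client's reply once, within the time limit."""
    record = _challenges.pop(token, None)  # single use: solved or not, it's gone
    if record is None:
        return False
    expected, issued_at = record
    if time.time() - issued_at > ttl:
        return False
    # Constant-time comparison, case-insensitive for the user's convenience.
    return secrets.compare_digest(response.strip().lower().encode(),
                                  expected.lower().encode())
```

Because the token is removed on first use, a correct answer verifies exactly once and any replay fails, which is the property that makes the pattern useful against automated retries.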
An increasing number of people say the once-simple CAPTCHA has become a bothersome barrier to accessing websites, turning even straightforward tasks like paying a utility bill or logging into a social media account into an ordeal.
“Things are going to get even stranger, to be honest, because now you have to do something that’s nonsensical,” said Kevin Gosschalk, the founder and CEO of Arkose Labs, a web security firm that designs CAPTCHAs. “Otherwise, large multimodal models will be able to understand.”
Arkose Labs develops CAPTCHAs for websites under the name Arkose MatchKey. Its “easier” option asks the user to use the arrows to match the bear in the right-hand image with the crab in the left. That sounds simple enough, until you reach a website considered especially vulnerable to bot attacks; at that point, you are instructed to keep adjusting the items with the arrows until they resemble the left image.
The company bills the product as “the strongest Captcha ever made” and says that, at general threat levels, users are approved only if they can answer the puzzle on the first attempt. It acknowledges, however, that user completion rates have no bearing on the strength of its puzzles, which are “designed for bad actors.”
As bots grow more adept at cracking them, designers must create harder and more sophisticated CAPTCHAs that require real intellect to solve; for example, they might ask the user to solve basic math equations. Last year, researchers from the University of California, Irvine discovered that bots could reliably and almost perfectly solve distorted-text CAPTCHAs. According to the study, bots are frequently designed to post phony comments or reviews, scrape content from websites, and “often outsource solving to Captcha farms – sweatshop-like operations where humans are paid to solve Captchas.”
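To illustrate the math-equation variant mentioned above, here is a minimal Python sketch (all names are hypothetical) that generates and checks a basic arithmetic challenge:

```python
import random
import operator

# Hypothetical sketch of a basic math-equation CAPTCHA.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def make_math_challenge() -> tuple[str, int]:
    """Generate a question string and its expected integer answer."""
    a, b = random.randint(1, 12), random.randint(1, 12)
    symbol = random.choice(list(OPS))
    return f"What is {a} {symbol} {b}?", OPS[symbol](a, b)

def check_answer(user_input: str, expected: int) -> bool:
    """Accept the reply only if it parses to the expected integer."""
    try:
        return int(user_input.strip()) == expected
    except ValueError:
        return False

question, answer = make_math_challenge()
print(question)  # e.g. "What is 7 + 4?"
```

The weakness is visible in the sketch itself: a one-line parser can extract and evaluate the expression, so anything this easy for an average human is at least as easy for a program, which is the study’s point.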
Demonstrating you’re not a robot is getting harder and harder
It’s getting hard to tell humans and bots apart.
At some point last year, Google’s constant requests to prove I’m human began to feel increasingly aggressive. More and more, the simple, slightly too-cute button saying “I’m not a robot” was followed by demands to prove it — by selecting all the traffic lights, crosswalks, and storefronts in an image grid. Soon the traffic lights were buried in distant foliage, the crosswalks warped and half around a corner, the storefront signage blurry and in Korean. There’s something uniquely dispiriting about being asked to identify a fire hydrant and struggling at it.
These tests are called CAPTCHA, an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart, and they’ve reached this sort of inscrutability plateau before. In the early 2000s, simple images of text were enough to stump most spambots. But a decade later, after Google had bought the program from Carnegie Mellon researchers and was using it to digitize Google Books, texts had to be increasingly warped and obscured to stay ahead of improving optical character recognition programs — programs which, in a roundabout way, all those humans solving CAPTCHAs were helping to improve.
Because CAPTCHA is such an elegant tool for training AI, any given test could only ever be temporary, something its inventors acknowledged at the outset. With all those researchers, scammers, and ordinary humans solving billions of puzzles just at the threshold of what AI can do, at some point the machines were going to pass us by. In 2014, Google pitted one of its machine learning algorithms against humans in solving the most distorted text CAPTCHAs: the computer got the test right 99.8 percent of the time, while the humans got a mere 33 percent.
Machine learning is now about as good as humans at basic text, image, and voice recognition tasks, says Jason Polakis, a computer science professor at the University of Illinois at Chicago. In fact, algorithms are probably better at it: “We’re at a point where making it harder for software ends up making it too hard for many people. We need some alternative, but there’s not a concrete plan yet.”
The problem with many of these tests isn’t necessarily that bots are too clever — it’s that humans suck at them. And it’s not that humans are dumb; it’s that humans are wildly diverse in language, culture, and experience. Once you get rid of all that stuff to make a test that any human can pass, without prior training or much thought, you’re left with brute tasks like image processing, exactly the thing a tailor-made AI is going to be good at.
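To make that concrete: the image-grid test reduces to ordinary image classification, which off-the-shelf models already handle with no CAPTCHA-specific training. A minimal sketch using torchvision’s pretrained ResNet-50 (the file path is a placeholder, and this illustrates the general capability rather than any actual solver):

```python
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

# Load a stock pretrained classifier; no CAPTCHA-specific training involved.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the preprocessing this model expects

# "grid_tile.jpg" is a placeholder for one tile of an image-grid challenge.
img = Image.open("grid_tile.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top = probs.argmax(dim=1).item()
print(weights.meta["categories"][top], f"{probs[0, top].item():.2f}")
# ImageNet's 1,000 classes include traffic lights, so a tile like the ones
# in the grid gets labeled with no human in the loop.
```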
“The tests are limited by human capabilities,” Polakis says. “It’s not only our physical capabilities, you need something that [can] cross cultural, cross language. You need some type of challenge that works with someone from Greece, someone from Chicago, someone from South Africa, Iran, and Australia at the same time. And it has to be independent from cultural intricacies and differences. You need something that’s easy for an average human, it shouldn’t be bound to a specific subgroup of people, and it should be hard for computers at the same time. That’s very limiting in what you can actually do. And it has to be something that a human can do fast, and isn’t too annoying.”
Figuring out how to fix those blurry image quizzes quickly takes you into philosophical territory: what is the universal human quality that can be demonstrated to a machine, but that no machine can mimic? What is it to be human?
(unquote)
Image courtesy Pixabay/Janos Perian
2024-06-14