On the 20th of June, an anonymous person decided to “break” Uber’s petition site. They then automated a process that generated over 100,000 fraudulent signatures in under 3 hours, effectively rendering the petition useless. That second step could have been prevented if the site had been using FunCaptcha.

It’s important to note that the first step, the actual breaking of the website through code exploits, isn’t something FunCaptcha (or any other CAPTCHA service) is designed to prevent. The information fields (First Name, Last Name, etc.) accepted any form of input. This allowed the anonymous vigilante to break the web page and even redirect future visitors to Uber’s competitor, Lyft. Not cool, right?

The perpetrator then listed what other malicious acts could be carried out through exploitation of the weakness:

[Screenshot: the perpetrator’s list of possible exploits]

Pretty serious stuff. However, this post isn’t about the HTML exploits. Instead, we’re looking at how 100,000 signatures were submitted within 3 hours because the petition didn’t have anything in place to verify that the signatures came from a genuine source.

This is something FunCaptcha specializes in: we ensure that all activity through petitions, contact forms and surveys is genuine. If it isn’t, websites start to see non-human traffic, which skews results, leading to uninformed decisions and a misunderstanding of what your users or fans want. No one, especially a business, likes to waste time and money – but without a secure method of verification, that’s exactly what happened to Uber here. Don’t worry, Uber, we still love your friendly drivers and minty refreshments.

Simply put: if you plan on setting up a petition, a contact form or a survey – use FunCaptcha and avoid the headache that spammers cause. It’s why we’re the CAPTCHA of choice on Care2.com, one of the world’s largest petition sites.

Incoming Bot

The purpose of a CAPTCHA is simple: protect a website from malicious attacks (i.e. spammers) by being difficult/impossible for bots but easy enough to let humans through. But what happens when the most commonly used CAPTCHA service can be solved with 97%+ accuracy by the very bots it was designed to beat?

For over a decade, text-based CAPTCHAs have been the popular choice for this task. They grab a word (usually English), warp it into an unfamiliar shape and then ask users to type the word they see. Some text CAPTCHAs even use a random assortment of letters and numbers in an attempt to hinder the bots even more. The issue? Programs that use Optical Character Recognition (OCR) read the distorted text and let bots through to websites that relied on the CAPTCHA to prevent that very thing from happening.


This, unfortunately, is a common problem. By design, text CAPTCHAs have a shelf life – in order to remain difficult for bots, they have to become increasingly hard for humans. It appears we’ve reached the ceiling of text CAPTCHA effectiveness, which was a big motivation for our creation of FunCaptcha.

The internet was built on innovation and that’s exactly what we’re doing with FunCaptcha – innovating an area of web security that sorely needs it.

Update: watch co-founder and CAPTCHA expert Matthew Ford go into detail on this topic in our new video series!

“CAPTCHA” is an acronym for “Completely Automated Public Turing test to tell Computers and Humans Apart”. That’s a bit of a mouthful, and for those of us who don’t know what a “Turing test” is, still a bit confusing. It refers to the test proposed by Alan Turing in 1950 that attempted to determine whether a computer could “think”. Turing quickly realized that the term “think” was, in this context, a bit ambiguous, so he refined the focus: the test evaluates how well a computer can exhibit behaviour indistinguishable from that of a human, by having a human judge engage in conversation with both a human participant and a computer participant. But enough history! What are CAPTCHAs doing today and why are they so hard?


Well, as the name CAPTCHA implies, the general principle behind the Turing test has been adopted into a more automated approach that is now completed almost 300,000,000 times per day. That is a LOT of testing. But why? Protection from malicious software is obviously of high importance for those responsible for online services, so (unfortunately) the twisty, hard-to-read CAPTCHAs were initially chosen for the task. For the last few years, they’ve been the only way to engage users in the “Turing test”.


However, as the sophistication of the recognition software increased, so too did the difficulty of these distorted text/image CAPTCHAs. This revealed a fundamental flaw in the traditional text/image method: the only way to make the test harder for bots was to make the text/images warped and distorted to the point where even humans could barely understand what they were being presented with.


This is why FunCaptcha exists: we realized that for CAPTCHAs to remain a relevant and effective web security asset, they needed innovation or the problem would only get worse. Considering the necessity, annoyance and sheer volume of traditional CAPTCHAs, you can see why we’re so focused on reinvigorating the process with something that’s fun and engaging – hence, FunCaptcha was born. Give us a try, if you haven’t already.

At the start of December, a rather large update to the traditional reCAPTCHA technology was announced, dubbed the “No CAPTCHA reCAPTCHA” experience. For many, it came as a pleasant surprise – no more squiggly letters and hard-to-read numbers and images? What has been a frustrating experience for millions of internet users the world over looked to be getting a big injection of convenience.

The old reCAPTCHA.

But when the mechanics behind the “new” technology were broken down via reverse engineering, many developers asserted that this newly developed convenience is merely the addition of a “whitelist”. To put it simply: a user’s past behavior and previous CAPTCHA solves are recorded in their cookies, which are then detected by future reCAPTCHA challenges. Users who look genuine get the “No CAPTCHA” experience, while those who don’t are reverted back to the usual distorted-text reCAPTCHA.
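As a rough illustration of the whitelist idea described above – and emphatically not reCAPTCHA’s actual code – the gating logic might be sketched like this, where the `solve_history` cookie name and the threshold are entirely hypothetical:

```python
# Illustrative sketch of a cookie-based "whitelist" gate.
# The cookie name and threshold are invented for illustration;
# this is not reCAPTCHA's real implementation.

def choose_challenge(cookies: dict) -> str:
    """Pick a challenge based on a reputation score stored in a cookie."""
    # Hypothetical cookie counting past successful solves.
    score = int(cookies.get("solve_history", 0))
    if score >= 3:            # enough past solves: treated as "probably human"
        return "no_captcha"   # the checkbox-only "No CAPTCHA" experience
    return "distorted_text"   # fall back to the legacy text challenge

print(choose_challenge({"solve_history": "5"}))  # no_captcha
print(choose_challenge({}))                      # distorted_text
```

Note what this sketch makes obvious: nothing in it strengthens the fallback path, so the legacy distorted-text challenge – and whatever can already solve it – remains fully available.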

The new reCAPTCHA.

The existing mechanics (and thus, flaws) behind the reCAPTCHA system are still there, but with the introduction of this cookie “whitelist”, perhaps reCAPTCHA could be made easier for users without simultaneously being made easier for bots. However, this looks to have backfired, for two main reasons.

Easier for humans, easier for bots

According to www.sakurity.com consultant Egor Homakov, the way reCAPTCHA uses its new whitelist system has actually made it easier to exploit, for no security gain. In a blog post from December 4th, he eloquently sums up his findings (namely the whitelist and its consequences), but we wanted to break them down further for readers who may not have the background necessary to fully grasp the conclusions Egor draws.

His first main concern is that relying on cookies for extra convenience doesn’t add any extra security at all. If the sole goal was simply to make things easier for humans without strengthening the existing security, then technically, it was a success. Egor stresses this point because the “No CAPTCHA reCAPTCHA” experience doesn’t make it harder for bots – just easier for humans.

This is a problem, Egor says, due to the way the whitelist is implemented, allowing exploitation because “the legacy flow is still available and old OCR bots can keep recognizing” the old CAPTCHA.

For those making alternative CAPTCHAs, this was an interesting point of difference raised by Egor. FunCaptcha, for example, takes the opposite approach to the new reCAPTCHA. Instead of becoming easier after repeated completions, FunCaptcha becomes harder after repeated mistakes. This is for two reasons:

1) To make a CAPTCHA that is inherently fast and easy for humans even easier would compromise its security against bots for no real gain.

2) A major vulnerability for visual CAPTCHAs with a small number of discrete answers is a brute-force attack by a bot, which performs automated guessing over and over until it breaks through. By tracking the history of the IP and making the CAPTCHA’s string of challenges longer after each failed attempt, a brute-force attack quickly becomes impractical.
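The arithmetic behind the second point is simple. Assuming a challenge made up of discrete rounds, each with a fixed number of possible answers (the numbers below are illustrative, not FunCaptcha’s actual parameters), the odds of blind guessing collapse quickly as the challenge lengthens:

```python
# Odds of a bot randomly guessing its way through a visual CAPTCHA.
# Assumes each round has `answers` discrete options and the challenge
# is `rounds` rounds long -- illustrative numbers only.

def guess_probability(answers: int, rounds: int) -> float:
    """Chance of passing every round by blind guessing."""
    return (1 / answers) ** rounds

# A single round with 8 possible rotations: 1-in-8 odds.
print(guess_probability(8, 1))   # 0.125
# After repeated failures the challenge grows to 5 rounds:
print(guess_probability(8, 5))   # ~0.00003, about 1 in 32,768
```

Each extra round multiplies the attacker’s cost, while a genuine human caught by mistake only has a few more quick puzzles to solve.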

Furthermore, many developers are puzzled by these changes – as Egor’s findings show, the attempt to make reCAPTCHA more convenient has arguably compromised its security.

Removing Challenge/Response has removed the challenge – for bots

Egor goes on to explain that by introducing the cookie whitelist as a replacement for the traditional “challenge/response” method, the service has become even more vulnerable to malicious attack via a process called “clickjacking”. If a user holds a valid response token (known as “g-recaptcha-response”), they get the “free pass”. How is this abused? Simply click the video below to see the exploit in action.

Keep in mind: we are NOT providing the technical step-by-step recipe on HOW to do this – simply the result of the exploit being implemented.

To reword Egor’s assertion and explain the above video as simply as possible: a person wanting to spam a certain website needs to obtain a valid “g-recaptcha-response” token that matches the credentials required by the targeted website, via an unsuspecting user. This is done by creating a fake variant of the target website’s reCAPTCHA, having an unsuspecting user complete it, and then using the generated “g-recaptcha-response” to give bots access to the target website through the now-bypassable reCAPTCHA. This is possible because the “g-recaptcha-response” token is made available before it is submitted to the CAPTCHA.


The conclusion that can be drawn from Egor’s findings? While the convenience of reCAPTCHA has somewhat increased for some users, so has its vulnerability. He proposes that the implementation of the cookie whitelist has not only opened the service to exploitation in and of itself, it has also opened a gateway into the existing technology by replacing challenge/response with the “g-recaptcha-response” token.

CAPTCHA innovation has started to occur around the globe, so there certainly are more options now. For developers of secure alternative CAPTCHAs, the goal is to provide a method that, at its core, is already so quickly solvable that it leaves room for the challenge to become lengthier in response to brute-force attacks, while still staying reasonable for humans accidentally caught in the net. Forcing it to become trivially solvable after building a whitelist of “human” behavior would be both pointless and potentially damaging – resulting in the position that Egor believes reCAPTCHA now finds itself in.

Creators of the typed-in CAPTCHA are finally admitting what I’ve been saying for years now: CAPTCHAs cause huge problems. They drive away genuine users and let bots through. If you are a website operator, these CAPTCHAs lower the conversion rate of your online forms as your users get frustrated with twisty letters and leave, increasing the bounce rate of your signup or comment pages.

Recent attempts to kill the CAPTCHA have touted the use of a “black box”: a magical secret bit of code that sorts the users of your site into groups. If the user is put in the group deemed “probably not a bot”, they get no challenge, or one that is not very secure. If the user is put in the group deemed “probably a bot” or “not enough information to decide”, the user gets the old, nasty typed-in challenge that stops both bots and people from continuing.

This black box would be wonderful if it actually worked, but this idea seldom pans out, and I’ll try to explain why. As a website operator, you should ask some hard questions about any spam-blocking solution that relies on a black box.

Will the black box mistakenly treat genuine users as bots?

When the black box mistakenly sorts genuine users of your website into the group “probably a bot”, that is called a false positive. It’s like a medical test that mistakenly says a patient has a disease. Maybe the user’s IP address was used in the past by a bot. Maybe their system is compromised. Maybe they’ve gotten a lot of CAPTCHAs wrong in the past for their own legitimate reasons. Maybe the user was put into a “blacklist” database by mistake. They will probably never know why they are suspected as a bot, and never know how to fix it.

If a user of your site is sorted by the magical black box into the “bot” group, the user gets blocked entirely, or gets a CAPTCHA challenge just as nasty, frustrating, long, and difficult as ever – or even more so. Those users have a big chance of bouncing away from your site. And you’ll never know you lost them. Your site is like a hot-air balloon with a hole somewhere up there: your signups and comments are not rising as fast as you think they should, but you don’t know where the leak is, or how to fix it.

Many developers are talking now about their bad experiences with the black box mistaking them for a bot, followed by impossible-to-solve puzzles. Their concerns are rising rapidly.

Will the black box mistakenly treat bots as genuine users?

When the black box mistakenly sorts a spambot visitor to your website into the group “probably not a bot”, that is called a false negative. It’s like a medical test that mistakenly says a sick patient is disease free. Maybe the black box is simply not very accurate. Maybe the bot has been deliberately written in a way to appear human. Maybe the bot is cleverly using the resources of a genuine user, like a ghost hovering over their shoulder. This all happens because spammers are determined, and bots can be adapted to fool the black box. The history of computing tells the story of this arms race over and over again, and the black box always loses.
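To put rough numbers on that arms race – purely illustrative figures, with a made-up helper function, not measured data – even a black box that misjudges only a small fraction of bots leaks a lot of spam at volume:

```python
# Back-of-envelope spam arithmetic: even an accurate black box leaks
# a lot of bots at volume. All numbers here are illustrative.

def bots_through(attempts: int, false_negative_rate: float) -> int:
    """Expected bot submissions waved through as 'genuine'."""
    return round(attempts * false_negative_rate)

# 100,000 bot attempts against a box that misjudges only 2% of bots:
print(bots_through(100_000, 0.02))  # 2000 spam submissions
```

And because a bot that succeeds tends to signal the opening to others, those first few thousand are rarely where it ends.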

If a bot visiting your site is sorted by the magical black box into the “genuine user” group, the bot gets no challenge, or a trivially easy challenge, such as ticking a box. It’s then very easy for the bot to pass that challenge, and get into your site free and clear. A bot that succeeds will usually signal this, and a torrent of bots will then come rushing in. Your site can get filled with spam overnight, taking weeks to clean up. Sometimes even the very creators of a black-box defense are getting hit with spam!

Again, many developers are now discussing how black boxes can be deeply analysed, allowing spammers to design bots that get through black-box CAPTCHAs and flood the sites they protect with spam.

Will the black box require users to use the internet a certain way?

When the black box sorts genuine users of your website into the group “not enough information to decide”, it has to assume the user is a clever bot, which creates all the problems of the false positive I described above. But why can’t the black box tell? You have to ask and experiment to figure out why. Many developers have already found this depends on the user’s browsing history, or cookies, or on whether the user is logged into a particular service. As one developer put it, these black box CAPTCHAs are a good way to test how much a company knows about you. It can depend on whether the user is running particular anti-snooping software, or using a browser that’s not very common. You may find that your most interesting and valuable website visitors also happen to be the kind of people who resist using the internet in a conventional way. Why drive them away just because they are not doing what the black box wants them to do?

Even if you personally find all this a bit paranoid, you have to consider how to accommodate your customers who have these concerns. Many find it creepy to discover on your site a chunk of code that relies on knowing a huge amount of information about your users – what one observer called a “panopticon” and another called a “habit of overstepping the limits of what consumers will allow it to learn about them”. They see it as trojan code that can be updated and changed without your knowledge by a company that openly says that it wants to thoroughly track user behavior across the web.

What’s the alternative to the black box?

An alternative to a black box is a transparent box, aligned with the open source ideal. For example, our alternative solution, FunCaptcha, blocks spammers without resorting to a black box. FunCaptcha is open (if not quite open-source) about its inner workings, and if you try FunCaptcha for yourself you can probably figure it out anyway. At the heart of FunCaptcha is a visual puzzle that is impractical for spammers to attack. (I’ll post more about that later, and share the positive things that security experts have said about FunCaptcha’s approach – it’s a whole other fascinating subject.) FunCaptcha will change the nature of its challenge based on a user’s history, but most importantly, that judgment is easy for you to understand. Furthermore, even if that judgment produces a false positive or false negative, there’s no harm done. A bot mistaken for a genuine user will still get stopped, and a genuine user mistaken for a bot will still get a challenge that is quick and easy to solve. All this sidesteps the secretiveness that makes the black box approach vulnerable.

If a bot tries to randomly guess its way through FunCaptcha, its odds of getting through are low – much lower than the chances of a bot getting through a typed-in CAPTCHA. The IP address of the user may be suspect, because it is on the Stop Forum Spam list or it has gotten FunCaptcha wrong more often than right in the past. If the IP is suspect, the FunCaptcha challenge becomes a little longer – more images to turn the right way up, or faces to move into the middle. When that happens, FunCaptcha’s completion rate remains extremely high – far higher than the completion rate for typed-in CAPTCHAs – with an average solve time of fifteen seconds. (You can see more about this on its page for performance metrics.)

If a user’s IP address has a clean history, and your site’s FunCaptcha security setting is “Automatic”, then the user will get a short challenge – it could be just one image to turn the right way up, or one face to move to the middle. On average that short challenge takes less than five seconds to complete. FunCaptcha is slated for a feature that makes it even easier and faster for an IP that has gotten FunCaptcha correct a few times in a row. At the easiest level, the user will get a “free pass” challenge: one click, with no wrong answer. This will make FunCaptcha’s completion rate even higher, and let users through even faster. (By the way, if you want your site to never do all this, and be that much more careful about letting through bots that are randomly guessing the answer, you can set your FunCaptcha security setting to “Always enhanced”. The completion rate will still be extremely high and users will still get through very quickly.)
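The escalation policy described above can be sketched in a few lines. This is a hypothetical outline, not FunCaptcha’s internals – the function name, thresholds and round counts are all invented for illustration:

```python
# Hypothetical sketch: map an IP's track record to a challenge length.
# Thresholds and round counts are invented for illustration only.

def challenge_length(correct: int, wrong: int, setting: str = "automatic") -> int:
    """Number of puzzle rounds to show, based on the IP's history."""
    if setting == "always_enhanced":
        return 5                 # never shorten, regardless of history
    if wrong > correct:
        return 5                 # suspect IP: longer challenge
    if correct >= 3 and wrong == 0:
        return 1                 # clean streak: a single quick round
    return 2                     # default short challenge

print(challenge_length(correct=4, wrong=0))   # 1
print(challenge_length(correct=1, wrong=3))   # 5
print(challenge_length(correct=4, wrong=0, setting="always_enhanced"))  # 5
```

The key property is that every outcome is still a real challenge: a misjudged human just solves a few more quick puzzles, and a misjudged bot still faces the puzzle itself.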

To put it simply…

There is an alternative to the magical black box: a transparent process that the creator is happy to explain to you, or you can quickly figure out by playing with the solution yourself. Make sure that when a spam-blocker sorts a visitor correctly, bots get stopped and users get a very easy and fast challenge. Even if it sorts a visitor incorrectly, you should be assured that bots can’t get far, and genuine users get a quick challenge with an extremely high completion rate. Don’t rely on black-box solutions made by companies that track every aspect of a user’s behavior. Don’t take a chance that your users are being quietly categorized as bots and subjected to terrible typed-in CAPTCHAs, making them leave. Don’t take a chance that a clever botnet will be given trivial challenges, flooding your site with spam. Use a solution that does not rely on a black box.

eCards have been an incredibly useful, popular and often hilarious product since their inception in 1994. The first eCard website, The Electric Postcard, went from only a dozen or so cards being sent to over 1.7 million after only one and a half years of operation. The model gained traction and various websites began to spring up; one of them, Blue Mountain Arts, was even bought by Excite@Home for upwards of $780m in 1999, at the height of the “Dotcom Bubble”.

However, the security, integrity and potential future of the general eCard business model was thrown into question in 2007, when wave after wave of eCards were sent to random users across the globe with the subject line “You’ve received a postcard from a family member!” Clicking these interactive eCards would direct users to websites that used JavaScript to exploit the user’s browser, or even deliver downloadable malware hidden in the eCards themselves.

It was a pretty dark time for eCards.

Today, eCards have a wide variety of uses, from simply wishing mom a Happy Birthday to raising awareness – and even funds – for social causes.

Yet the security concerns are still valid: the fundamental process by which an eCard is sent can be (and is) exploited through automated software and even the manual input of random e-mail addresses. Sending usually happens on the eCard website itself, which requires only your name, your e-mail address, the recipient’s e-mail address, a delivery date and, of course, the message itself. In June of this year, Symantec reported on an eCard spam campaign that linked to a “get rich quick” scheme and even used a fake BBC news report in an attempt to confuse users and convince them of its legitimacy.

eCard sites need to protect their brand’s reputation by ensuring their users’ security. All it takes is for someone to receive one “dodgy” or negative eCard from a service and the experience is forever tainted. Rarely would anyone choose to open another e-mail from an address or provider that has previously led to disastrous results. In fact, they may never open another eCard again – regardless of the source.

FunCaptcha has a deep history in the eCard space, and can answer any questions that you might have on this topic.