From the stereotypical basement loner to organized criminal gangs and nation-states, attackers have grown markedly more sophisticated over the past decade. They have the same skills, tools, and services at their fingertips as your IT teams do, including artificial intelligence (AI) and machine learning (ML), which they use to build campaigns that adapt to your mitigation efforts. These dynamic attack methods keep evolving because the cost-versus-value equation continues to deliver extraordinary ROI for attackers. Credential stuffing, in particular, has evolved from attractive to downright lucrative.
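To see why the economics favor the attacker, consider a minimal sketch of credential stuffing. The numbers and the login check below are illustrative stubs, not a real endpoint or real breach data: even a fraction-of-a-percent password-reuse rate across a large breached-credential list yields account takeovers at negligible cost.

```python
# Illustrative sketch: why credential stuffing pays off.
# The credential list and login check are local stubs, not real data.

breached_creds = [(f"user{i}@example.com", f"pw{i}") for i in range(10_000)]

# Assume 0.5% of users reused their breached password on this site (stub).
reused = {c for i, c in enumerate(breached_creds) if i % 200 == 0}

def login(email: str, password: str) -> bool:
    """Stub standing in for a target site's authentication endpoint."""
    return (email, password) in reused

compromised = [email for email, pw in breached_creds if login(email, pw)]
print(len(compromised))  # 50 takeovers from 10,000 automated attempts
```

Automating those 10,000 attempts costs an attacker almost nothing, which is exactly what defensive countermeasures aim to change.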
Not long ago, a simple cURL script could siphon website data. Companies added defenses such as CAPTCHA, and attackers responded with CAPTCHA solvers and scriptable consumer browsers that imitate human behavior. Each shift was an effort to capitalize on the growing value of the targets.
Today, attackers can assemble a dossier on their targets using the same technologies that organizations leverage to protect their applications, gaining insight into weaknesses much as security and fraud teams gather intelligence on attackers. On such a level playing field, how can security and fraud teams stay ahead? The key lies in using automation, machine learning, and AI to create a security deterrent that maintains resiliency and efficacy as attackers retool and adapt to countermeasures, ultimately disrupting the ROI of an attack.
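The "disrupt the ROI" idea can be made concrete with a back-of-the-envelope model. All figures below are hypothetical, chosen only to show the mechanism: countermeasures that force attackers to pay for CAPTCHA solves, proxy rotation, and retooling raise the cost per attempt and cut the success rate, pushing expected returns below zero.

```python
# Hedged sketch of the attacker's cost-versus-value equation.
# All rates, values, and costs are illustrative assumptions.

def attack_roi(attempts: int, success_rate: float,
               value_per_account: float, cost_per_attempt: float) -> float:
    """Expected attacker profit: revenue from takeovers minus attack cost."""
    revenue = attempts * success_rate * value_per_account
    cost = attempts * cost_per_attempt
    return revenue - cost

# Unmitigated: near-free requests against a plain login form.
print(attack_roi(100_000, 0.005, 10.0, 0.001))  # 4900.0

# With adaptive mitigation: each attempt now requires a solved CAPTCHA
# and a residential proxy, and succeeds far less often.
print(attack_roi(100_000, 0.001, 10.0, 0.05))   # -4000.0
```

When the second number goes negative, the campaign no longer pays, which is the deterrent effect the text describes.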