The hiring technology you trusted to make recruiting easier might actually be putting your company in legal crosshairs. From class action lawsuits targeting major ATS vendors to new state laws rolling out with tight deadlines, 2026 is shaping up to be a reckoning for employers who assumed their software had compliance covered.
Here's what you need to know:
If you are an HR professional or a hiring manager who relies on an ATS to sort through candidates, there is a good chance you have been sleeping pretty well at night. After all, the whole point of bringing in technology was to make hiring faster, more consistent, and less prone to the kind of human bias that gets companies into trouble. The problem is that the technology itself is now the thing getting companies into trouble, and in a big way.
Let's start with the case that has the entire HR tech world paying attention. The Mobley v. Workday lawsuit, formally filed in the U.S. District Court for the Northern District of California, centers on a plaintiff who applied to over 100 jobs through employers using Workday's AI screening tools over a period of seven years. He was rejected within minutes nearly every time. His claim is that Workday's algorithms systematically disadvantaged him based on his age, race, and disability status. What makes this case particularly significant is that the court did not just let the lawsuit proceed. In January 2026, Judge Rita Lin expanded it into a nationwide class action covering applicants over 40, and the court ruled that Workday could be considered an "agent" performing hiring functions on behalf of employers. That distinction is enormous because it means the ATS vendor is not just a neutral tool sitting in the background. It is, in the eyes of the court, actively participating in the decision-making process by recommending some candidates and rejecting others.
Then there is the Eightfold AI lawsuit, which takes a completely different angle but lands in the same uncomfortable place. Filed in January 2026 by former EEOC chair Jenny R. Yang and the nonprofit Towards Justice, the class action alleges that Eightfold scraped personal data on over a billion workers, assigned each applicant a score on a zero to five scale, and filtered out low-scoring candidates before a human being ever looked at their application. The kicker is that the lawsuit does not even claim the algorithm was biased. It claims the algorithm existed in secret, without the disclosures required under the Fair Credit Reporting Act. This reframes the entire conversation. You do not need to prove bias to bring a case against an ATS vendor anymore. You just need to show that the system was making consequential decisions about people without telling them.
For companies that rely on applicant tracking systems to manage their hiring pipeline, these cases should be a wake-up call. The legal theory that your ATS vendor is an "agent" acting on your behalf means that when the software discriminates or fails to disclose how it operates, your company shares the liability. That is not a hypothetical scenario. As Jones Walker LLP detailed in their analysis, when you read these two cases together, a coherent theory of AI vendor liability emerges that should concern every employer using algorithmic screening.
It is not just the lawsuits that should have your attention. State and local governments are racing to regulate the use of AI in hiring, and the compliance deadlines are coming in hot. New York City already has Local Law 144 on the books, which requires employers using automated employment decision tools to conduct annual bias audits and notify candidates that AI is being used to evaluate them. The law includes specific requirements around how those audits are conducted, including calculations of selection rates and impact ratios across categories of sex, race, and ethnicity.
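For readers who want to see what that audit math actually involves, here is a minimal sketch (not legal advice, and not the official audit methodology) of the selection-rate and impact-ratio arithmetic that Local Law 144 bias audits are built around. The category names and applicant counts are illustrative:

```python
# Illustrative sketch of Local Law 144-style bias audit math.
# Selection rate = share of a category's applicants who advanced.
# Impact ratio = a category's selection rate divided by the highest
# category's selection rate, so the top category scores 1.0.

def selection_rate(selected: int, applied: int) -> float:
    """Fraction of applicants in a category who were advanced."""
    return selected / applied if applied else 0.0

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each category to its impact ratio relative to the top category."""
    rates = {cat: selection_rate(sel, app) for cat, (sel, app) in outcomes.items()}
    top = max(rates.values())
    return {cat: (rate / top if top else 0.0) for cat, rate in rates.items()}

# Hypothetical counts per category: (selected, applied)
outcomes = {"group_a": (40, 100), "group_b": (24, 100)}
for cat, ratio in sorted(impact_ratios(outcomes).items()):
    # The four-fifths rule of thumb flags ratios below 0.8 for review.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{cat}: impact ratio {ratio:.2f} ({flag})")
```

In this example, group_b's selection rate is 60 percent of group_a's, which falls below the four-fifths threshold regulators commonly use as a screening signal. The actual audit requirements are more detailed than this sketch, which is exactly why the law requires an independent auditor.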
But New York is just the beginning. The Colorado AI Act takes effect on June 30, 2026, and it requires employers to exercise "reasonable care" to prevent algorithmic discrimination in their hiring processes. That phrase, "reasonable care," is doing a lot of heavy lifting because it essentially means that if your AI screening tool produces discriminatory outcomes and you cannot demonstrate that you took deliberate steps to prevent it, you are exposed. Violations can be treated as deceptive trade practices carrying penalties of up to $20,000 per violation. The Department of Justice has also started factoring AI risk management into how it evaluates corporate compliance programs, which means this is not just a state-level issue. It is working its way into the federal enforcement playbook too.
What does all of this mean in practical terms? It means the days of purchasing an ATS, plugging it in, and assuming it handles compliance for you are over. Employers need to understand exactly how their hiring technology works, what data it uses, how candidates are scored and ranked, and whether the system provides the kind of transparency that these new laws demand. If you cannot answer those questions about your current ATS, you have a problem. This is exactly why partnering with a transparent and compliance-focused ATS provider matters more than ever. Companies like HiringThing have built their platforms around giving employers clear visibility into how candidates move through the hiring process, which is the kind of accountability that holds up under scrutiny.
Let's come back to that "agent" concept for a moment because it is the legal theory that should keep every HR leader up at night. Traditionally, ATS vendors have operated under the assumption that they are simply providing a tool. The employer makes the hiring decisions, and the vendor just facilitates the process. That assumption is crumbling. When a court designates an ATS vendor as an "agent" of the employer, it means the vendor is not just handing you a hammer. It is swinging the hammer for you. And if that hammer hits the wrong nail, you both pay.
The implications of this are far-reaching. If your ATS automatically filters out candidates before a human reviewer ever sees their application, the argument that a person made the final hiring decision gets much weaker. If the system relies on historical hiring data that reflects past biases, those biases get baked into every screening decision the algorithm makes going forward. And if the vendor cannot explain how its algorithm weighs different factors, or if you as the employer cannot explain it either, you are in a tough spot when a plaintiff or a regulator comes knocking. As Fisher Phillips noted in their analysis of the Workday ruling, this case underscores the urgent need for AI compliance diligence across the board.
This is not a niche legal issue affecting only the biggest companies with the most sophisticated AI tools. The reality is that nearly 99% of Fortune 500 companies use ATS platforms, and adoption among smaller companies is growing rapidly. Research shows that 93% of recruiters now use an ATS of some kind. If you are using one, you need to know what it is doing under the hood. You need to know whether it is scoring candidates, how it is scoring them, and whether those scores have been audited for disparate impact. And you need that information documented in a way that you can produce if you are ever asked to. Taking a proactive approach to preventing hiring bias through your ATS is not just good ethics. It is a legal necessity.
The good news is that there are concrete steps you can take to protect your organization without abandoning hiring technology altogether. Nobody is suggesting you go back to sifting through paper resumes. The point is to be intentional and informed about the technology you use, and to treat your ATS as a regulated selection system rather than a set-it-and-forget-it tool. Here are some things that you should consider.
Ask your ATS provider directly how their screening algorithms work, what data sources they use, and whether they conduct regular bias testing. If your vendor cannot answer those questions clearly, that tells you something important. Request documentation on their bias audit methodology and their approach to FCRA compliance. A quality ATS partner like HiringThing will not only be able to answer these questions but will welcome them, because transparency is a feature, not a liability. You can also use resources like the EEOC's guidance on AI in hiring to benchmark your own practices against what federal regulators are watching for.
Document how your hiring decisions are made. That means creating a clear record of the objective criteria your ATS uses, any adjustments made to AI-driven scores, and the role of human oversight at each stage of the process. The companies that will weather these legal challenges best are the ones that can demonstrate they had a deliberate, documented framework for managing algorithmic hiring risk. Think of it like an audit trail for your recruiting. If someone asks why a candidate was screened out, you should be able to trace that decision back to a specific, job-related criterion. As K&L Gates advised employers, developing a compliance roadmap now is essential to meeting obligations before deadlines arrive.
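To make the audit-trail idea concrete, here is one illustrative way a screening-decision record might be structured. The field names and values are assumptions for the sake of the example, not a standard schema or any particular vendor's format:

```python
# Hypothetical screening-decision audit record. The schema is illustrative:
# the point is capturing who or what made the decision, against which
# job-related criterion, and whether a human reviewed it.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class ScreeningDecision:
    candidate_id: str
    stage: str                  # e.g. "resume_screen", "phone_screen"
    criterion: str              # the specific, job-related criterion applied
    ats_score: float            # raw algorithmic score, if the ATS produced one
    human_reviewer: str         # who reviewed the decision ("" if no one yet)
    outcome: str                # "advance" or "screen_out"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ScreeningDecision(
    candidate_id="c-1042",
    stage="resume_screen",
    criterion="active RN license required by job posting",
    ats_score=2.1,
    human_reviewer="j.doe",
    outcome="screen_out",
)
# A serialized record like this is something you can actually produce
# if a regulator or plaintiff's attorney asks why a candidate was rejected.
print(json.dumps(asdict(record), indent=2))
```

Whatever format you use, the test is the one the paragraph above describes: can you trace any screen-out back to a specific criterion and a named reviewer?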
One of the most damaging allegations in these lawsuits is that candidates were rejected by an algorithm before any human ever reviewed their application. That is a terrible look in court, and it is also just bad hiring practice. Even if your ATS handles initial screening, make sure there is a meaningful human review step before any candidate is eliminated from consideration. ATS platforms like HiringThing are designed to streamline and support your hiring process without removing the human element, and that balance is going to be critical going forward.
If you have employees or candidates in New York City, you should already be conducting annual bias audits. If you are hiring in Colorado, you need a plan in place before June 30, 2026. And keep your eye on California's new FEHA regulations governing automated decision systems, which took effect in late 2025 and create additional litigation risk for employers using AI in hiring. Do not wait for a regulator or a plaintiff's attorney to tell you that you are behind. Work with your legal counsel and your ATS provider to map out which laws apply to your operations and build compliance into your process now, not after something goes wrong.
The hiring technology landscape has been shifting dramatically for years, and 2026 is proving to be the year when the legal system catches up to the pace of AI adoption in recruiting. Class action lawsuits against major vendors are testing new legal theories that expand liability to both the companies that build these tools and the employers who deploy them. New state laws are imposing hard requirements around bias audits, candidate disclosure, and algorithmic accountability. And the old assumption that buying a well-known ATS means compliance is handled is no longer a safe bet.
None of this means you should fear technology. It means you should respect it enough to understand what it is doing on your behalf. The companies that invest in transparent, well-governed hiring processes will not only reduce their legal exposure but will build a better candidate experience in the process. Candidates are paying attention to how they are treated during the hiring process, and a fair, transparent approach is a competitive advantage in a tight talent market. Choosing an ATS partner that prioritizes compliance, transparency, and human-centered hiring is not just risk management. It is smart business.
Your ATS should be working for you, not setting traps you do not know about. Take the time to look under the hood, ask hard questions of your vendor, and build the kind of hiring process that you would be comfortable defending in front of a judge. Because in 2026, that is no longer a theoretical exercise. It is the reality of doing business.
HiringThing is a modern recruiting and HR workflow platform as a service that creates seamless HR experiences. Our white label solutions and open API enable technology and service providers to offer talent software to their clients. Approachable and adaptable, the HiringThing HR platform empowers anyone, anywhere to strengthen their team.