You applied for a job you were qualified for. You never heard back. Or maybe you got a rejection within hours—so fast, you knew no human actually reviewed your résumé.
Here’s what most people don’t realize: That instant rejection may not have been about your qualifications at all. Increasingly, artificial intelligence systems screen job candidates, make promotion decisions, and even determine who gets fired. And too often, these AI systems break the law—discriminating against applicants based on race, gender, age, disability, and other protected characteristics.
At Wanta Thome Employment Lawyers, we’ve recovered millions of dollars for clients facing employment discrimination. As AI hiring tools spread across industries, we’re seeing a new twist on workplace civil rights violations—and most employees have no idea they have legal recourse. If your job application was rejected by AI, you may have a discrimination claim that our attorneys could help you file. Contact us today to schedule a consultation and learn more.
The Problem: AI Is Making Discriminatory Hiring Decisions (And Getting Away With It)
Here’s how algorithmic discrimination works in practice:
- Amazon discovered its own hiring algorithm was rejecting female software engineers—the AI had learned from 10 years of male-dominated hiring data and systematically penalized résumés that included words like “women’s chess club captain” or graduation from women’s colleges. Amazon scrapped the system, but thousands of women were screened out before the bias was detected.
- Healthcare algorithms underestimated medical needs for Black patients, assigning them lower-priority access to care programs compared to white patients with identical health conditions. The algorithm used healthcare spending as a proxy for medical need, but because Black patients historically receive less healthcare due to systemic barriers, the AI interpreted their lower spending as “lower need.”
- Criminal risk assessment tools like COMPAS flagged Black defendants as “high risk” at nearly twice the rate of white defendants with identical criminal histories, affecting sentencing, parole decisions, and pre-trial detention across the U.S. justice system.
The pattern is clear: AI systems trained on biased historical data perpetuate—and often amplify—existing discrimination patterns. And because these decisions happen at scale (one algorithm screens thousands of applicants), the discriminatory impact is massive. If your job application was screened and ultimately rejected through AI, contact our team of attorneys to learn more about filing a discrimination claim.
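For readers who want to see the mechanism rather than take our word for it, here is a deliberately simplified, hypothetical sketch in Python. The data is synthetic and the model is generic; it is not any employer’s or vendor’s actual system. It shows how a screening model trained on biased historical hiring decisions ends up recommending equally qualified groups at very different rates:

```python
# Toy illustration only: synthetic data, generic model, no real hiring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups of applicants with identical skill distributions.
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)      # same qualifications for both groups

# Historical hiring decisions were biased against group B.
hired = (skill - 1.0 * group + rng.normal(0.0, 0.5, n)) > 0

# A screening model is trained on those biased labels.
# (Group membership, or any proxy for it, is available as a feature.)
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The trained model now "recommends" group A far more often than group B,
# even though the two groups are equally qualified by construction.
for g, name in [(0, "Group A"), (1, "Group B")]:
    rate = model.predict(X[group == g]).mean()
    print(f"{name} recommendation rate: {rate:.0%}")
```

The point is not the specific numbers. It is that a model never explicitly told to discriminate will still reproduce whatever bias is baked into the decisions it learned from.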
Most Employees Don’t Know They’ve Been Discriminated Against
Application rejections involving AI discrimination can be particularly difficult to challenge because you may never learn why you were rejected. There’s no interview where you could detect bias. No human decision-maker to hold accountable. Just an automated “We’ve decided to move forward with other candidates” email—or, more often, silence. The financial consequences compound quickly, including:
- Lost wages from the job you should have gotten (courts calculate this for every month you’re unemployed or underpaid)
- Career trajectory damage when AI blocks you from promotions or leadership tracks
- Retirement savings gaps that grow over decades when discrimination limits your earning potential
- Health insurance disruptions if you were wrongfully terminated by an algorithmic decision
And employers know most people won’t fight back, because applicants assume the AI decision was “objective” or “data-driven.” That assumption is wrong—AI can be just as discriminatory as a human decision-maker, and federal law prohibits both.
How AI Discrimination Violates Federal Law
Title VII of the Civil Rights Act prohibits employment discrimination based on race, color, religion, sex, or national origin—whether that discrimination comes from a human hiring manager or an algorithm.
The Age Discrimination in Employment Act (ADEA) protects workers age 40 and older from algorithmic systems that filter out older applicants by using proxies like “digital native” or “recent graduate” in screening criteria.
The Americans with Disabilities Act (ADA) requires reasonable accommodations—which AI systems often fail to provide when they automatically reject candidates with employment gaps (which may be disability-related) or screen out applicants based on criteria that disproportionately exclude people with disabilities.
Under disparate impact theory, you don’t have to prove the algorithm was intentionally discriminatory—you only need to show that it had a discriminatory effect on a protected class. The burden then shifts to the employer to prove the practice is job-related and consistent with business necessity, and even a justified practice can be unlawful if a less discriminatory alternative was available.
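To make that concrete, here is a simplified, purely hypothetical illustration of the kind of selection-rate comparison used to screen for adverse impact under the EEOC’s “four-fifths” guideline. The applicant counts below are invented for the example and do not come from any real case:

```python
# Hypothetical numbers only; real cases use the employer's actual hiring data.

def selection_rate(hired, applied):
    """Share of applicants in a group who advanced past the screen."""
    return hired / applied

men_rate = selection_rate(hired=60, applied=200)    # 30% advanced
women_rate = selection_rate(hired=30, applied=200)  # 15% advanced

# Adverse impact ratio: the disadvantaged group's rate divided by the
# most-favored group's rate.
impact_ratio = women_rate / men_rate

print(f"Selection rate (men):   {men_rate:.0%}")
print(f"Selection rate (women): {women_rate:.0%}")
print(f"Impact ratio: {impact_ratio:.2f}")

# The EEOC's four-fifths guideline treats a ratio below 0.80 as
# evidence of adverse impact that warrants a closer look.
if impact_ratio < 0.80:
    print("Ratio below 0.80: potential adverse impact.")
```

The four-fifths figure is a screening guideline, not a legal bright line; litigation typically adds statistical significance testing and a close look at whether the screening criterion is actually job related.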
Wanta Thome Law Holds Employers Accountable for AI Discrimination
We’ve been tracking the rise of algorithmic discrimination for years, and we know how to build cases against employers using biased AI systems.
Our Approach to AI Discrimination Cases
We identify the algorithmic bias. Through discovery, we can force employers to reveal how their AI systems make decisions, what data they’re trained on, and whether they’ve tested for discriminatory impact across protected classes.
We quantify your damages. AI discrimination cases often involve class action potential because the same algorithm screened hundreds or thousands of candidates. We calculate:
- Lost wages from the position you should have received
- Emotional distress damages
- Punitive damages when the discrimination was particularly egregious
We use our proprietary Resolution Method to accelerate your case without sacrificing value.
What AI Discrimination Claims Are Worth
AI discrimination settlements are reaching record-breaking levels—and federal enforcement agencies are signaling they’ll pursue these cases aggressively.
Recent Federal Settlements Set the Benchmark
The Equal Employment Opportunity Commission and Department of Justice have secured major settlements that demonstrate the financial exposure employers face when their algorithms discriminate:
iTutorGroup (2023): $365,000 for Age and Gender Discrimination
An online tutoring company programmed its recruitment software to automatically reject women over 55 and men over 60. The EEOC’s investigation revealed the algorithm contained hard-coded age limits—not subtle bias, but explicit discriminatory rules. The company paid $365,000 to over 200 rejected applicants and was forced to invite every affected candidate to reapply with human review.
This wasn’t a “black box” AI making questionable inferences—this was automated discrimination programmed directly into the hiring system. The settlement made clear that algorithmic bias is treated no differently under the law than a human manager enforcing the same discriminatory policies.
Cox Communications (2025): $459,895 for Citizenship Discrimination
Cox used a recruiting platform from Georgia Tech that allowed employers to automatically filter out candidates based on citizenship status. The platform’s settings excluded refugees, asylum recipients, and lawful permanent residents—even some U.S. nationals—before they could apply. The Department of Justice’s Immigrant and Employee Rights Section imposed a $459,895 penalty and prohibited Cox from using citizenship filters unless legally required for specific roles.
The key takeaway: Employers remain responsible for how they configure third-party recruiting software. You can’t hide behind “the platform let us do it”—if you selected discriminatory filters, you violated federal law.
Meta Platforms (2023-2024): $115,000 + Technical Overhaul
Meta’s ad delivery algorithms violated the Fair Housing Act and equal credit laws by “steering” housing, employment, and credit ads based on race, gender, age, and other protected characteristics. Even when employers targeted ads broadly, Meta’s optimization system learned from historical engagement data and showed job ads disproportionately to certain demographics.
The settlement forced Meta to build the Variance Reduction System (VRS)—an algorithmic compliance layer that detects when ads aren’t reaching diverse audiences and intervenes to correct the imbalance. The $115,000 civil penalty was minor compared to the engineering resources required to redesign the company’s entire ad delivery infrastructure.
Apple Inc. (2024): $25 Million for PERM Recruitment Discrimination
While not purely algorithmic, Apple’s $25 million settlement demonstrates the scale of liability when recruitment systems—digital or otherwise—systematically exclude protected workers. The company’s labor certification process disadvantaged certain citizenship categories, requiring extensive auditing to ensure equal access.
Signs You May Have Been Discriminated Against by AI
You should consult an employment lawyer if:
- You were rejected for a position within hours or days (indicating automated screening)
- Your qualifications clearly matched the job description, but you weren’t interviewed
- You were screened out after providing demographic information (age, graduation year, ZIP code)
- The employer uses “AI-powered” or “automated” hiring platforms (HireVue, Pymetrics, Eightfold, etc.)
- You were rejected for a promotion despite strong performance reviews
- You were terminated shortly after an algorithmic “performance management” system flagged you
Minnesota and Illinois law give you a limited time to file discrimination claims. In many cases, you have as little as one year from the discriminatory act. Waiting means losing your legal rights.
What Happens When You Work with Wanta Thome Employment Lawyers
Step 1: Free Case Evaluation (15 Minutes)
We review your employment timeline, the employer’s hiring/firing process, and whether algorithmic bias likely played a role. This consultation is confidential and costs you nothing.
Step 2: Investigation & Evidence Gathering
If we take your case, we immediately begin building evidence—requesting the employer’s AI system documentation, analyzing their hiring patterns across demographic groups, and identifying other affected employees for potential class action.
Step 3: Strategic Pressure & Negotiation
We use the Wanta Thome Resolution Method to create early settlement opportunities. Employers using biased AI systems know discovery will be expensive and embarrassing—we leverage that reality.
Step 4: Trial-Ready Advocacy
If the settlement isn’t adequate, we’re prepared to take your case to trial. Our attorneys have secured verdicts and settlements totaling millions of dollars in employment discrimination cases—employers know we don’t bluff.
No Fees Unless We Win
You pay nothing upfront. We work on contingency, meaning our fees come from your settlement or verdict. If we don’t recover money for you, you owe us nothing.
Frequently Asked Questions About AI Discrimination Claims
Q: How do I prove I was discriminated against by an algorithm?
A: Through legal discovery, we can force employers to disclose how their AI systems work, what data they use, and whether they’ve tested for bias. We also analyze their hiring patterns statistically—if their AI disproportionately rejects women, older workers, or racial minorities, that creates a strong disparate impact case.
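As a rough illustration of what that statistical testing can look like, here is a short, hypothetical sketch using Python and the open-source scipy library. The applicant counts are invented, and this is a generic textbook test, not our firm’s actual methodology:

```python
# Hypothetical counts; real analyses use the employer's applicant-flow data.
from scipy.stats import fisher_exact

# 2x2 table of outcomes: [advanced, rejected] for each group.
table = [
    [120, 380],   # applicants under 40: 120 advanced, 380 rejected
    [40, 460],    # applicants 40 and older: 40 advanced, 460 rejected
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Odds ratio: {odds_ratio:.2f}")
print(f"p-value: {p_value:.2g}")

# A very small p-value means a gap this large is very unlikely to be random
# chance, which supports (but does not by itself prove) disparate impact.
```

In a real case, this kind of result is paired with the discovery evidence described above, not presented on its own.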
Q: What if I don’t know whether the employer used AI?
A: We can investigate. Telltale signs include instant rejections, employer use of “talent analytics platforms,” and job postings mentioning “AI-powered recruiting.” If your rejection seemed automated, we’ll find out.
Q: Can I sue over a job I applied for but never got?
A: Yes. You don’t need to be an existing employee to file a discrimination claim. If you were qualified and were rejected because of algorithmic bias, you may have a claim for lost wages, emotional distress, and other damages.
Q: What if the employer claims their AI is “neutral” or “objective”?
A: That’s not a defense. Even facially neutral algorithms can violate civil rights law if they produce discriminatory outcomes—this is called disparate impact. The employer must then prove the screening practice is job-related and consistent with business necessity, and a claim can still succeed if a less discriminatory alternative was available but not used.
Q: How long do I have to file?
A: In Minnesota, you typically have 365 days to file with the Minnesota Department of Human Rights or 180 days to file with the EEOC. In Illinois, it’s 180 days for EEOC claims. Missing these deadlines means losing your legal rights permanently—don’t wait.
Take Action: Schedule Your Free Consultation
If you suspect you were screened out of a job, promotion, or opportunity by a biased algorithm, contact Wanta Thome Employment Lawyers today.
Call us at 866-696-7067 or schedule a consultation online. You have nothing to lose and potentially hundreds of thousands of dollars to gain. Our case evaluation is confidential, and you’ll speak directly with an experienced employment discrimination attorney—not a paralegal or intake coordinator.
AI discrimination is a rapidly evolving area of law, and employers are hoping you won’t realize you have legal recourse. Don’t let them get away with it. Contact Wanta Thome for transparent communication, strategic case resolution, and relentless advocacy backed by trial-ready preparation.