The WOTC Story: $200K Captured in One Week
Every wrong checkbox cost $5,000 in lost tax credits. The bottleneck was comprehension, not code. Here's how I solved it.
Tax credit applications are a $5,000 problem. Not $5,000 to build. $5,000 per wrong answer.
"I started this project on the 12 August and worked 25 hours a week and developed it on my own!!!"
August 2024. 25 hours a week. Just me and AI. No team. No funding. Just a problem that needed solving.
The problem: WOTC (Work Opportunity Tax Credit) applications. Companies can claim up to $9,600 per qualifying hire. But the forms are complex. One wrong checkbox = denied. HR doesn't read the guidance. They guess. They get it wrong.
The industry standard before: manual phone calls to state agencies. $5K per applicant to verify eligibility. 45+ minutes per call. Paper forms that got lost. 30% wrong answers. Each wrong checkbox: $5,000 in lost tax credits.
There was one question that killed more applications than any other.
"Many employers miss out on tax credits through the New York Youth Program due to incorrect answers on Question 13. This question asks if the employee was unemployed, wanted more paid work, or felt their skills were underutilized when starting the job."
Most people qualify. They answer incorrectly anyway. Not because they're lying—because they don't understand the question. Legal jargon written by lawyers, answered by workers.
"The bottleneck wasn't the code. It was human comprehension. Applicants weren't clicking 'No' because they weren't eligible—they were clicking 'No' because they didn't understand the question."
Read that again. The problem wasn't eligibility. It wasn't the system. It wasn't the regulations. It was comprehension. People didn't understand what they were being asked.
"Every input field is a potential failure point. $5K per wrong answer changes how you think about forms."
So I built something where they couldn't click wrong.
"Simple > Smart. No AI, no fancy extraction. Just audio + big buttons."
The solution wasn't better training. It wasn't smarter AI. It was audio guidance. The system reads every question out loud in plain language. Simple Yes/No buttons. Mobile-friendly. Under 60 seconds to complete.
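The interaction model described above — legalese rewritten into plain language, read aloud, answered with two buttons — can be sketched in a few lines. Everything here is hypothetical: the field name `q13_unemployed`, the plain-language wording, and the `Question` structure are illustrations, not the production schema.

```python
from dataclasses import dataclass

@dataclass
class Question:
    field_id: str    # hypothetical form field identifier
    legal_text: str  # the regulatory wording applicants used to misread
    plain_text: str  # what the audio guidance actually reads aloud

# A hypothetical plain-language rewrite of the Question 13 language above.
QUESTION_13 = Question(
    field_id="q13_unemployed",
    legal_text=(
        "Was the employee unemployed, seeking additional paid work, "
        "or underutilized in their skills at the start of employment?"
    ),
    plain_text=(
        "When you started this job, were you out of work, or did you "
        "want more hours, or a job that used your skills better?"
    ),
)

def record_answer(question: Question, pressed_yes: bool) -> dict:
    """Only two buttons exist, so every possible input is a valid one."""
    return {question.field_id: "YES" if pressed_yes else "NO"}
```

The point of the sketch is the constraint, not the code: when the UI offers only Yes and No, "wrong format" stops being a failure mode, and comprehension becomes the only remaining variable.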
The results: Wrong answers dropped from 30% to under 5%. Verification time went from 45 minutes to under 60 seconds. Cost per applicant went from $5K to ~$0.10. Accuracy jumped from 70% to 95%+.
"I have built a system that converts speech to JSON to PDF and submits it to the government. The government doesn't know what they are playing with."
But audio verification was just the first piece. I built a full suite:
1. Audio WOTC Verification — the 60-second form that replaced 45-minute calls
2. Digital IRS Form 8850 — 7 languages, touch signatures, real-time validation
3. Enterprise Tax Credit Platform — Gmail API monitoring, PDF extraction, PostGIS geographic eligibility, 86-column federal reporting
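The speech-to-JSON step of that pipeline reduces to one rule: validate the structured answers before anything gets serialized, turned into a PDF, or submitted. A minimal sketch, with made-up field names (`ssn_last4`, `start_date`, etc.) standing in for the real federal schema, which the platform tracks across 86 reporting columns:

```python
import json

# Hypothetical required fields for a submission payload; the real
# federal schema is far larger than this illustration.
REQUIRED_FIELDS = {"applicant_name", "ssn_last4", "start_date", "q13_unemployed"}

def build_payload(answers: dict) -> str:
    """Every input field is a potential failure point, so reject
    incomplete applications before they reach PDF generation."""
    missing = REQUIRED_FIELDS - answers.keys()
    if missing:
        raise ValueError(f"incomplete application, missing: {sorted(missing)}")
    return json.dumps(answers, sort_keys=True)
```

Failing loudly at this stage is what turns "$5K per wrong answer" into a bug you catch in milliseconds instead of a denial letter you get in months.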
7 languages: English, Spanish, French, Haitian Creole, Korean, Russian, Chinese. Because the workforce is diverse and the tax credits shouldn't depend on speaking English fluently.
"Completed WOTC in 10 months. In that 10 months I produced over 50 repos worth of the same project, progressively iterating on the project."
50 repos → 1 shipped product. That ratio looks inefficient. It's not. It's how hyperfocus works. Each iteration teaches something. The tunnel stays fixed on the problem until the problem is solved.
Now it's in production. Real companies. Real tax credits. $200K+ captured in one week. Built solo.
"The bottleneck wasn't the code. It was human comprehension. Audio guidance solved what better UI couldn't."
That's the pattern I keep finding. The bottleneck isn't capability. It's interface. Fix the interface, unlock the capability.
The same HR person who would misclick a checkbox can answer a simple question correctly when it's asked in plain language. AI doesn't replace humans. It removes the friction points where humans fail.