I Got Tired of LinkedIn's Black Hole, So I Built My Own Job Search Engine
I don't know about you, but I've spent way too many hours applying to jobs on LinkedIn. Crafting custom cover letters, using Simplify to tailor resumes, only for everything to vanish into the void. No response. No rejection. Nothing. Half the listings are stale postings from months ago that are just collecting resumes into a black hole.
So I tried the alternatives. Jobright, HireCafe, a handful of others. Some of these companies have VC backing and actual engineering teams, and the backends still aren't fully wired up. HireCafe had features that literally didn't connect to anything. A team of five engineers and the backend isn't hooked up. It's kind of wild when you think about it — companies raising venture capital money to build job search products and shipping them half-finished.
But the real problem with all of them was the same: none of them let you filter by the actual technologies you build with. I don't want "software engineer" results. I want to know exactly which companies are building with Django, React, TypeScript, and Claude. I want remote, US-based, and the precise overlap between my stack and theirs. None of these tools did that. So I built RepoRadar.
The Idea Started on GitHub
Before RepoRadar was an application, it was a manual habit. I'd go to GitHub and search filename:CLAUDE.md to find organizations with Claude Code config files in their repos. Companies using Cursor have .cursorrules files. Copilot users have copilot-instructions.md. These config files are signals — if a company has a CLAUDE.md, they're building with the same AI workflow I use every day.
I also realized you can learn a company's entire tech stack by reading their dependency files. Their requirements.txt tells you exactly which Python packages they use. Their package.json maps out the frontend. No job description fluff, no buzzword padding — just the real stack, right there in the repo.
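A minimal sketch of that kind of dependency-file sniffing. These function names are mine, not RepoRadar's, and the requirements.txt parsing is deliberately simplified (real pip requirement lines have more edge cases):

```python
import json
import re

def python_packages(requirements_txt: str) -> set[str]:
    """Extract bare package names from a requirements.txt body."""
    packages = set()
    for line in requirements_txt.splitlines():
        line = line.split("#")[0].strip()  # drop trailing comments
        if not line or line.startswith("-"):  # skip flags like -r, -e
            continue
        # Cut at the first version/extras/marker character: django==4.2 -> django
        name = re.split(r"[<>=!~\[;@ ]", line)[0]
        if name:
            packages.add(name.lower())
    return packages

def js_packages(package_json: str) -> set[str]:
    """Extract dependency names from a package.json body."""
    data = json.loads(package_json)
    return set(data.get("dependencies", {})) | set(data.get("devDependencies", {}))
```

Because package.json keys are used directly, scoped names like @anthropic-ai/sdk survive intact.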
The original version of RepoRadar automated this workflow. Sign in with Google, connect your GitHub account, and the app would queue a Celery background task to search GitHub for repos matching your criteria. It'd parse dependency files, detect AI tool signals, check for production maturity indicators like Docker and CI/CD, and score each organization on a 0-100 scale.
And it worked. Kind of. I found a French company building with Django and LangChain. I followed some developers building the same way I do. I got more connected to the ecosystem. That part was genuinely valuable.
The problem is that most companies keep their repos private. The organizations actually hiring for engineering roles — the ones with production codebases you'd want to work at — aren't hosting that code publicly. The GitHub search was cool technology, but as a job search tool it was hitting a wall. I could see how other people build, but I couldn't find jobs.
ATS Platforms Are a Gold Mine
Here's something most engineers don't realize: almost every tech company posts their jobs on an applicant tracking system — Greenhouse, Lever, Ashby, or Workable — and all four of these platforms have completely free, public, no-auth-required APIs for their job boards. No API key needed. No authentication. Just hit the endpoint and you get back every open role as JSON.
I seeded the database with known tech company slugs — Stripe, Anthropic, OpenAI, Vercel, Cloudflare, Supabase, Linear, Sentry, Railway, and about 50 more. Then I found an open-source repo with 6,261 verified ATS slugs across all four platforms. Built a batch fetcher that processes 50 companies at a time with staggered requests. After the first full run: 177,675 real job listings from over 5,000 companies.
On top of that I added four external job boards — RemoteOK, Remotive, We Work Remotely, and the monthly Hacker News "Who's Hiring" thread. Each one has a different API format (JSON, RSS, Algolia search), but they all funnel into the same unified JobListing table. Celery Beat schedules daily refreshes so the database stays current.
The ATS probing also runs automatically whenever the GitHub search discovers a new organization. It tries the org's GitHub login as the slug across all four platforms in parallel:
from concurrent.futures import ThreadPoolExecutor, as_completed

def probe_company(self, slug: str) -> dict[str, bool]:
    urls = {
        "greenhouse": f"https://boards-api.greenhouse.io/v1/boards/{slug}/jobs",
        "lever": f"https://api.lever.co/v0/postings/{slug}?mode=json",
        "ashby": f"https://api.ashbyhq.com/posting-api/job-board/{slug}",
        "workable": f"https://apply.workable.com/api/v1/widget/accounts/{slug}",
    }
    results: dict[str, bool] = {}
    with ThreadPoolExecutor(max_workers=4) as executor:
        futures = {
            executor.submit(_check, platform, url): platform
            for platform, url in urls.items()
        }
        for future in as_completed(futures):
            # _check (helper, defined elsewhere) hits the URL
            # and returns (platform, found)
            platform, found = future.result()
            results[platform] = found
    return results
Four concurrent requests, sub-two-second response. If Greenhouse returns a 200 with a jobs array, we know they have a board there. Fetch everything, store it, detect the tech stack from job descriptions.
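The probe leans on a `_check` helper that isn't shown above; the interesting part of it is deciding whether a 200 actually means "this company has a board here." A sketch of that decision logic — the Greenhouse shape matches what the text describes, while the Lever/Ashby/Workable response shapes are my assumptions:

```python
def looks_like_board(platform: str, status: int, payload) -> bool:
    """Decide whether an ATS endpoint response indicates a real job board."""
    if status != 200:
        return False
    if platform == "greenhouse":
        # A live Greenhouse board returns {"jobs": [...], "meta": {...}}
        return isinstance(payload, dict) and isinstance(payload.get("jobs"), list)
    if platform == "lever":
        # Lever's ?mode=json returns a bare JSON array of postings
        return isinstance(payload, list)
    if platform in ("ashby", "workable"):
        # Assumed: both wrap postings in a "jobs" key
        return isinstance(payload, dict) and "jobs" in payload
    return False
```

Note that an empty jobs array still counts as "board exists": a company with zero current openings is still worth storing.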
Tech Detection: From Dependency Files to Job Descriptions
Stack detection was originally built for parsing structured files — exact package name matching against dictionaries of about 150 known technologies across Python and JavaScript ecosystems. "django" becomes ("Django", "backend"). "@anthropic-ai/sdk" becomes ("Claude API", "ai_ml"). Clean, deterministic, no false positives.
But job descriptions are free text, not dependency files. So I built a second detection engine using pre-compiled regex patterns with word boundary matching:
import re

# Sort by length descending — "ruby on rails" must match before "ruby"
_PATTERNS = []
for keyword in sorted(TECH_KEYWORDS.keys(), key=len, reverse=True):
    pattern = re.compile(rf"\b{re.escape(keyword)}\b", re.IGNORECASE)
    _PATTERNS.append((pattern, TECH_KEYWORDS[keyword]))
The sort-by-length trick matters. You want "Next.js" to match before "Next," "Ruby on Rails" before "Ruby," and "Django REST Framework" before "Django."
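One subtlety worth spelling out: with independent word-boundary regexes, \bruby\b still matches inside "Ruby on Rails", because a space is itself a word boundary. A way to make longest-first actually stick is to mask each matched span before trying shorter keywords. This is a self-contained sketch of that idea, not RepoRadar's exact code, with a tiny keyword map for illustration:

```python
import re

# Tiny illustrative keyword map; the real list covers ~150 technologies.
TECH_KEYWORDS = {
    "ruby on rails": "Ruby on Rails",
    "ruby": "Ruby",
    "next.js": "Next.js",
    "react": "React",
}

# Longest keywords compiled first, as in the snippet above.
_PATTERNS = [
    (re.compile(rf"\b{re.escape(k)}\b", re.IGNORECASE), TECH_KEYWORDS[k])
    for k in sorted(TECH_KEYWORDS, key=len, reverse=True)
]

def detect_techs(text: str) -> set[str]:
    """Match longest keywords first, masking each hit so substrings can't re-match."""
    found = set()
    for pattern, name in _PATTERNS:
        if pattern.search(text):
            found.add(name)
            text = pattern.sub(" ", text)  # consume the span
    return found
```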
This got interesting fast. After pivoting to jobs-first, I discovered that 53% of my job database — 94,811 out of 177,675 listings — had zero detected technologies. They were invisible to search. The keyword list was missing SQL (the most common technology in engineering job descriptions), C++, HTML, CSS, Linux, Git, React Native, Spring Boot, ASP.NET, Blazor, and about 25 more.
And then there was the "Go" disaster. 19,597 jobs had ["Go"] as their only detected tech. Not Go the programming language — the English word "go" appearing in sentences like "you will go home knowing" and "as we go to market." A Lead Dentist job was tagged with Go. The word-boundary regex \bgo\b technically worked, but "go" is just too common in English. Removed "go", kept "golang" as the only trigger. After reprocessing all 185,000 jobs, the false positive count dropped from 14,291 to zero.
The lesson: tech detection from job descriptions is fundamentally different from parsing dependency files. Dependency files are unambiguous — django==4.2 means Django. Job descriptions are free text where everyday English words collide with programming language names. Short keywords like Go, R, and C need special handling or they'll match everything.
The Auth Problem (It's Always Auth)
Getting OAuth to work across a split deployment — Netlify frontend, Railway backend — was one of the bigger headaches. Three separate problems, each one a war story.
The Netlify Proxy Trap: Netlify's status = 200 proxy follows HTTP redirects server-side. When allauth returns a 302 redirect to Google's OAuth page, Netlify follows the redirect itself and returns the Google HTML to the browser as a 200 response. The browser never sees the redirect, never navigates to Google, and the OAuth flow is completely dead. Fix: skip the Netlify proxy entirely for OAuth. The "Sign in with Google" button sends the browser directly to Railway. Sounds obvious in hindsight — took hours to figure out.
The JWT Signing Key Nobody Told Me About: Google OAuth worked — user could pick their account, allauth processed the callback, created the user — but then every response was a 500 error. The traceback: ValueError: Could not deserialize key data. Allauth's JWT strategy defaults to RS256 (asymmetric signing), which needs an RSA private key we never configured. One-line fix: HEADLESS_JWT_ALGORITHM = "HS256". Symmetric signing uses Django's SECRET_KEY automatically. For a single-server app, HS256 is perfectly fine.
The Cross-Domain GitHub Connect: GitHub isn't a login provider — it's a connected service. The user logs in with Google, then clicks "Connect GitHub" to link their account for API access. But the frontend is on Netlify and the backend is on Railway. Different domains, no shared cookies. When the browser hits Railway, Railway has no idea who the user is.
The fix was a three-layer solution: pass the JWT token in the URL when navigating to Railway, validate it server-side and establish a Django session, then set allauth's process=connect parameter so it links the GitHub account to the existing user instead of creating a duplicate. Also had to bump the JWT lifetime from the default 5 minutes to 24 hours, because users who logged in more than 5 minutes ago were getting "invalid token" errors.
This was the first application I'd built with GitHub OAuth, and honestly the cross-domain session problem is something nobody warns you about until you're knee-deep in it.
The Honest Pivot: 177,675 Jobs Were Sitting Right There
Here's the part where I have to be honest about what happened. The GitHub repo scanning — the original core feature, the one I was most proud of — hadn't actually helped find a single job. The company search, tech stack detection, AI tool signals, scoring algorithm — all technically impressive, none practically useful for landing interviews.
Meanwhile, the ATS job aggregation that I'd built as a secondary feature was sitting on 177,675 real job listings from 5,046 companies. The Celery Beat refresh was silently building the most valuable part of the app while I was obsessing over the GitHub integration.
So I pivoted. Made job search the primary experience. Upload resume, see matching jobs, apply. The GitHub search moved to a "Companies" tab for power users who want to dig into a specific org's repos and tech stack. You can still analyze a repo with Claude if you want a deep look at what a company's building — it sends the repo structure to Claude and gets back architecture analysis, code quality signals, and a "why work here" section. It's a cool feature. It's just not the main product anymore.
The pivot was 167 insertions, 1,211 deletions across 18 files. More code removed than added. Outreach generation — gone. Hunter.io and Apollo.io enrichment — gone. API key management — gone. The best kind of refactor is the one where you delete more than you write.
Resume Parsing: Your Resume Is the Config File
Upload a PDF or DOCX and a Celery task sends it to Claude for structured extraction — tech stack, key projects, years of experience, and what I call a "story hook" (the thing about your background that makes you memorable). Once parsed, the Jobs page auto-populates the tech filter chips with your stack. Search fires immediately. Upload resume, see matching jobs. Two interactions from signup to value.
The matching algorithm counts how many of the user's resume techs appear in each job's detected technologies, then sorts by overlap. Best matches first. The top 200 get stored and refreshed daily. You wake up to new matches without doing anything.
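The overlap ranking is simple set arithmetic. A sketch with names of my own choosing, assuming each job carries its detected technologies as a list:

```python
def match_score(resume_techs: list[str], job_techs: list[str]) -> int:
    """Count how many of the job's detected technologies appear in the resume."""
    resume = {t.lower() for t in resume_techs}
    return sum(1 for t in job_techs if t.lower() in resume)

def rank_jobs(resume_techs: list[str], jobs: list[dict], top_n: int = 200) -> list[dict]:
    """Sort jobs by stack overlap, best matches first, keeping the stored top N."""
    return sorted(
        jobs,
        key=lambda job: match_score(resume_techs, job.get("techs", [])),
        reverse=True,
    )[:top_n]
```

Python's sort is stable, so jobs with equal overlap keep their original (e.g. recency) order.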
I gave it to my brother — he's a senior .NET engineer looking for ASP.NET and Blazor jobs. He uploaded his resume, the tech chips auto-filled with C#, .NET, Blazor, and Entity Framework, and he found roles he'd never have seen through traditional job boards. He actually applied to one, which was a genuinely cool validation moment.
Getting People to Actually Use It
Building RepoRadar took about a week and a half of focused work with Claude Code. Getting people to use it has been the harder problem.
I posted on LinkedIn and got 500 impressions. Eight people clicked through and created accounts. The conversion path from landing page to login to dashboard worked — about 50% of signups uploaded their resume, which meant they actually engaged with the core feature. I had 15 total users, 7 resume uploads, and one returning user who came back the next day. Not huge numbers, but real signal.
Then I posted on Hacker News. Completely different traffic pattern: visitors from Bangladesh, Serbia, South Korea, Netherlands — lots of countries, zero signups. HN visitors bounced hard from the landing page. They'd look and leave. The realization was immediate: LinkedIn traffic comes from people actively thinking about jobs — they're the target user. HN traffic comes from people evaluating the tech, not the product.
That led directly to the anonymous search feature. If HN visitors won't sign up, let them try the tool without signing up. I added a public /jobs route: anonymous users see 5 real job cards, then 3 blurred cards with a gradient overlay and a "185,000+ more jobs match your search" prompt. The backend change was literally one line — permission_classes = [AllowAny] on the search view. The blur effect is pure Tailwind. Within an hour of deploying, the first anonymous user hit the search from the landing page.
I also wrote articles on Dev.to, CodeNewbie, and a few other platforms. Ended up with about 15 signups total across all channels. For SEO, I should have bought a custom domain from the start — running on reporadar-app.netlify.app is fine for an MVP, but it's an SEO dead end. I ended up setting up Google Cloud Console, generating a sitemap, doing all the indexing work anyway. Should have just spent the $12 on day one.
The Stack
For the record, here's what this thing is built on. I'm a Django person — I know that's contrarian in 2026 when everybody's building with Next.js or FastAPI, but Django is my main squeeze. The ORM, the admin, the auth system, the migration framework. I actually built something with FastAPI recently and it's great — it's all Python at the end of the day — but for a full-featured app with user accounts, OAuth, background jobs, and an admin panel, Django is hard to beat.
Frontend: React 19, TypeScript, Vite, Tailwind CSS, TanStack Query, React Router. Deployed on Netlify.
Backend: Django 5, Django REST Framework, Celery + Redis, PostgreSQL 16. Deployed on Railway.
Auth: Google SSO via django-allauth headless mode with built-in JWT. No dj-rest-auth, no djangorestframework-simplejwt — allauth handles everything now.
AI: Claude API for resume parsing. GitHub repo analysis via Celery background tasks.
Job Sources: Greenhouse, Lever, Ashby, Workable APIs + RemoteOK, Remotive, We Work Remotely, HN Who's Hiring. 185,000+ listings from 6,500+ companies.
Testing: 132 tests. TDD from day one. Pure functions for business logic, mocked HTTP for external APIs, real fixture files for detection tests.
The Actual Lesson
With Claude Code, you can build a full-stack SPA with OAuth, background jobs, multiple external API integrations, AI features, and 132 tests in about a week and a half. That's the easy part.
The hard part is getting anyone to look at it.
You can build the most elegant tech stack detection engine in the world, but if nobody knows it exists, it doesn't matter. The code is the easy part. Distribution is the actual product challenge. LinkedIn posts work better than Hacker News for this audience. Letting people try the product without signing up converts better than a fancy landing page. And the feature you build first might not be the feature people actually want — the secondary thing you built almost as an afterthought might be the whole product.
I'm still figuring out the distribution part. But at least now when I search for jobs, I'm not using LinkedIn's black hole. I'm using my own tool, filtering by my exact tech stack, and applying to companies that actually build the way I build.
Tech Stack: Django 5 · DRF · React 19 · TypeScript · Vite · Tailwind CSS · PostgreSQL 16 · Redis · Celery · Claude API · GitHub API · Greenhouse · Lever · Ashby · Workable · RemoteOK · Remotive · WWR · Railway · Netlify
Try it: reporadar-app.netlify.app GitHub: github.com/mattyray/reporadar