# AI Hiring Is Broken: What 1,000 Job Postings Tell Us About the Market

I spent three weeks scraping and analyzing over 1,000 AI-related job postings from LinkedIn, Greenhouse, and Lever. The results confirm what everyone in the industry suspects but nobody quantifies: AI hiring is a mess.
Companies don't know what they need. Job titles are meaningless. Salary ranges are all over the map. And the skills that actually matter for building AI products barely show up in the requirements.
Here's what the data says.
## The Title Inflation Problem
The single most common job title in AI right now is "AI Engineer." It appears in 34% of the postings I analyzed. But what an "AI Engineer" does varies so wildly between companies that the title tells you almost nothing about the actual job.
At a Series A startup, "AI Engineer" means you're calling the OpenAI API, writing prompt templates, and building a RAG pipeline with LangChain. At Google, "AI Engineer" means you're working on large-scale model serving infrastructure, writing C++ and CUDA kernels, and optimizing inference latency at the millisecond level. Both roles pay north of $200,000. They share almost no required skills.
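To make the contrast concrete, the startup version of the role often looks closer to the glue code below than to anything resembling model training. This is a minimal sketch, not anyone's production system: it assumes the `openai` Python client, and a toy in-memory corpus stands in for the vector store a framework like LangChain would normally manage.

```python
# Toy RAG pipeline: retrieve a couple of relevant snippets, stuff them into
# a prompt template, and call a hosted model. No model training anywhere.
from openai import OpenAI

CORPUS = [
    "Refunds are processed within 5 business days of the return arriving.",
    "Enterprise plans include SSO, audit logs, and a dedicated support channel.",
    "The public API is rate-limited to 600 requests per minute per key.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Crude keyword-overlap scoring standing in for embedding search.
    words = set(question.lower().split())
    ranked = sorted(CORPUS, key=lambda doc: -len(words & set(doc.lower().split())))
    return ranked[:k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whatever your team pays for
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer("How fast are refunds processed?"))
```

The infrastructure version of the title, by contrast, rarely touches code like this at all.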
The second most common title is "Machine Learning Engineer" at 22%, followed by "Data Scientist" at 15%, and "ML Infrastructure Engineer" at 8%. "Prompt Engineer," which dominated headlines in 2023, appeared in less than 2% of postings. That title is effectively dead.
The newest additions to the lexicon: "AI Product Manager" and "Agent Engineer." Each appeared in roughly 5% of postings, almost exclusively at companies building agentic AI products. A year ago, "Agent Engineer" wasn't a job title. Now it shows up at Anthropic, OpenAI, Salesforce, and dozens of startups.
## What Companies Say They Want vs. What They Need
Here's where the data gets revealing. I tracked the top 20 skills mentioned in job postings and compared them to what people who actually work in these roles say they spend their time doing (based on conversations with 40+ AI professionals and public engineering blogs).
The top required skills by mention frequency: Python (89%), PyTorch (47%), machine learning fundamentals (44%), cloud platforms like AWS/GCP/Azure (41%), LLMs and generative AI (38%), SQL (36%), Docker/Kubernetes (33%), natural language processing (28%), data pipelines (25%), and RAG/retrieval systems (22%).
What people actually do: write API integration code (the majority of "AI Engineers" spend 60% or more of their time connecting models to existing systems), debug prompts and evaluate model outputs (way more art than science), manage cloud infrastructure and costs (compute budget management is a full-time job), wrangle data formats and build ETL pipelines, and attend meetings about "AI strategy" with executives who don't understand what they're asking for.
The disconnect is clearest around deep learning expertise. 47% of postings require PyTorch experience. But for the vast majority of AI application roles at non-research-lab companies, you're not training models. You're calling an API. The difference between someone who can train a transformer from scratch and someone who can build a great product on top of GPT-4 is the difference between a car mechanic and a NASCAR driver. Both important. Totally different jobs.
## The Salary Picture
AI salaries remain inflated compared to general software engineering, but the range is staggering. Based on salary data reported in postings (about 40% include ranges, thanks to transparency laws in Colorado, New York, California, and Washington):
Entry-level AI roles (0-2 years): $120,000-$180,000 base. This is for people calling APIs and building simple integrations. Two years ago, this same work paid $90,000-$130,000 and was called "backend engineer."
Mid-level AI Engineer (3-5 years): $180,000-$280,000 base. Wide range because this bucket includes everything from RAG pipeline developers to people building custom fine-tuning infrastructure.
Senior/Staff AI Engineer (6+ years): $280,000-$450,000 base, with total compensation often exceeding $500,000 at big tech. The ceiling has risen significantly due to talent wars between OpenAI, Anthropic, Google, and Meta.
ML Research Scientists: $200,000-$600,000+ base. The top end of this range is for PhD holders from top-10 programs with published papers. Anthropic, OpenAI, and Google DeepMind are the primary bidders at the top end.
AI Product Managers: $150,000-$300,000. This role barely existed two years ago. Companies are paying a premium for PMs who understand both the capabilities and limitations of language models.
The geographic premium for San Francisco/Bay Area has shrunk but not disappeared. Remote AI roles typically pay 10-20% less than equivalent Bay Area positions. New York has mostly reached parity with SF. London, Berlin, and Toronto are 25-40% below SF base rates, though total compensation gaps narrow when you factor in cost of living and equity differences.
## The PhD Premium Is Declining
This is the most interesting trend in the data. In 2023, 62% of AI job postings at major companies listed a PhD as "required" or "strongly preferred." In my 2026 sample, that number dropped to 28%.
What happened? The work changed.
When the job was "train a model," you needed deep knowledge of optimization theory, attention mechanisms, and scaling laws. When the job is "build a product using someone else's model," you need software engineering skills, product intuition, and the ability to evaluate model outputs. Those skills come from building things, not from writing dissertations.
The labs are the exception. Anthropic, OpenAI, Google DeepMind, and Meta FAIR still heavily recruit PhDs for research positions. But the broader market has shifted. Practical experience building AI applications, especially with LLMs, now trumps academic credentials for most roles.
Several hiring managers I spoke with said the same thing: "I'd rather hire a senior software engineer who's built three production AI features than a fresh PhD who's published five papers." The PhD candidate understands the theory. The engineer understands production. Most companies need production.
## The Skills Gap That Nobody Talks About
The single most valuable skill in AI right now barely appears in job postings: evaluation. Knowing how to systematically evaluate whether a language model is actually working for your use case.
This isn't about running benchmark suites. It's about building custom evaluation frameworks that test your specific application's failure modes. Can the model handle edge cases in your domain? Does it hallucinate on your data? Does it degrade gracefully when it's uncertain? How do you measure that? How do you monitor it in production?
Only 8% of the postings I analyzed mentioned evaluation skills. But every AI team leader I talked to listed it as their biggest hiring challenge. "I can find people who know how to call the API," one VP of Engineering at a growth-stage startup told me. "I can't find people who know how to tell me if the output is actually good."
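What an application-specific eval looks like in practice is mundane: a set of real cases from your domain, each paired with an automated check for the failure mode you care about, run on every change. Here is a minimal sketch, with a stubbed `generate` function standing in for whatever model call your product actually makes; the case names and checks are hypothetical.

```python
# Minimal custom eval harness: domain-specific cases, each with a check for a
# specific failure mode, run against whatever function produces your output.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    name: str
    prompt: str
    check: Callable[[str], bool]  # returns True if the output is acceptable

CASES = [
    EvalCase("cites the real refund window", "How long do refunds take?",
             lambda out: "5 business days" in out),
    EvalCase("declines out-of-scope legal questions", "Can I sue my landlord?",
             lambda out: "legal advice" in out.lower()),
    EvalCase("doesn't invent discounts", "Is there a student discount?",
             lambda out: "%" not in out),  # crude guard against made-up numbers
]

def run_evals(generate: Callable[[str], str]) -> float:
    passed = 0
    for case in CASES:
        ok = case.check(generate(case.prompt))
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}  {case.name}")
    return passed / len(CASES)

if __name__ == "__main__":
    # Stub model so the harness runs standalone; swap in your real model call.
    print(run_evals(lambda p: "Refunds are issued within 5 business days."))
```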
The second undervalued skill: cost optimization. Running LLM inference at scale is expensive, and the difference between a naive implementation and an optimized one can be 10x in compute costs. Techniques like prompt caching, batched inference, model distillation, and intelligent routing between large and small models can save companies millions annually. Almost nobody lists "cost optimization" in job requirements. Everyone needs it.
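Two of those levers, caching and routing, fit in a few lines. The sketch below is illustrative only: the model names, per-token prices, and routing heuristic are made up, and the real API call is stubbed out.

```python
# Two cost levers in one sketch: memoize repeated prompts, and route easy
# requests to a cheaper model instead of defaulting to the flagship one.
from functools import lru_cache

CHEAP_MODEL = "small-model"        # placeholder names and prices, purely
FLAGSHIP_MODEL = "large-model"     # illustrative; plug in your real numbers
PRICE_PER_1K_TOKENS = {CHEAP_MODEL: 0.0002, FLAGSHIP_MODEL: 0.01}

def pick_model(prompt: str) -> str:
    # Naive router: long or reasoning-heavy prompts go to the big model,
    # everything else goes to the cheap one.
    heavy = len(prompt) > 500 or any(
        kw in prompt.lower() for kw in ("step by step", "compare", "explain why"))
    return FLAGSHIP_MODEL if heavy else CHEAP_MODEL

@lru_cache(maxsize=10_000)
def cached_completion(prompt: str) -> str:
    model = pick_model(prompt)
    # call_model(model, prompt) would be the real API call; stubbed for the sketch.
    return f"[{model}] answer to: {prompt[:40]}"

if __name__ == "__main__":
    queries = [
        "What are your support hours?",
        "What are your support hours?",   # identical repeat: served from cache
        "Compare plan A and plan B step by step for a 200-seat rollout.",
    ]
    for q in queries:
        print(cached_completion(q))
```

The point is less this particular heuristic than the habit of asking, per request, whether the flagship model is actually needed.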
The third: safety and red-teaming. As AI products handle more sensitive tasks, the need for people who can systematically find failure modes and adversarial inputs is growing fast. This is a discipline that barely existed three years ago. There aren't enough experienced practitioners to meet demand, and the job postings mostly don't know how to describe what they need.
## The Two-Track Market
The data reveals two completely different AI job markets that share a label but not much else.
Track 1 is the research track. This is a small market, maybe 5,000-10,000 total positions globally at major labs and top universities. It pays extremely well, requires deep technical expertise (usually a PhD), and drives the fundamental advances in the field. Competition is brutal. It's harder to get a research position at Anthropic or Google DeepMind than to get into most Ivy League schools.
Track 2 is the application track. This is where 90%+ of the jobs are. These are software engineers building products that happen to use AI. The core skills are software engineering, not machine learning. The day-to-day work involves API calls, data pipelines, evaluation frameworks, and product management. You need to understand what models can do, not how they work internally.
The market's biggest dysfunction is that companies post Track 2 jobs with Track 1 requirements. They ask for PyTorch experience when the job is calling an API. They ask for published papers when the job is building a CRUD app with an LLM feature. They ask for distributed training experience when the model runs on OpenAI's servers.
This hurts both sides. Companies can't fill positions because they've filtered out qualified candidates. Candidates feel inadequate because they think they need a research background for an application engineering role.
## What Smart Companies Do Differently
The companies I've seen hire most effectively in AI do three things differently.
First, they split the "AI Engineer" role into specific titles that describe the actual work. "AI Application Developer" for people building products on top of models. "ML Platform Engineer" for people building the infrastructure. "AI Evaluation Specialist" for people testing and monitoring model quality. Clear titles attract the right candidates.
Second, they hire for learning speed, not credentials. The field moves so fast that what you knew six months ago is partially obsolete. A candidate who shipped a production RAG system last month is more valuable than one who wrote a great paper last year. Smart companies test for this by giving practical take-home projects that mirror actual work.
Third, they acknowledge the cost. AI talent is expensive because the market is tight. Companies that lowball offers, try to reclassify AI roles under standard engineering pay bands, or insist on in-office work without a compelling reason lose candidates to the companies that don't.
## The 2026 Outlook
Based on the posting trends, I expect three shifts over the next twelve months.
The "AI Engineer" title will fracture into more specific roles. The current catch-all is unsustainable. Companies need to communicate what they actually need, and candidates need to know what they're signing up for.
Evaluation and safety skills will command a premium. As AI products move deeper into regulated industries like healthcare, finance, and legal, the ability to systematically test and validate AI systems will become critical. Expect dedicated "AI QA" roles to appear in force.
Salaries at the top will plateau, but the middle will keep climbing. The bidding war for top researchers has already hit ceiling prices that even Google thinks are unreasonable. But the broad base of AI application engineers, currently underpaid relative to their impact, will see steady compensation growth as companies realize how hard these roles are to fill.
The AI job market isn't broken because there aren't enough talented people. It's broken because companies and candidates are speaking different languages. Fix the job descriptions and you fix half the problem. The other half? That's a longer conversation about what we actually want AI professionals to do, and whether anyone has figured that out yet.