The labor market is getting noisy around AI. Every week brings a fresh claim about jobs disappearing, new tools replacing teams, or entire professions being rewritten. Most of that talk misses the part that matters inside actual companies. The most wanted person in many teams will not be the one who talks the most about AI. It will be the person who quietly uses it well, often, and with good judgment. Microsoft and LinkedIn found that 66% of leaders would not hire someone without AI skills, while 71% said they would choose a less experienced candidate with AI skills over a more experienced one without them. LinkedIn’s 2025 Work Change Report adds a bigger frame: by 2030, 70% of the skills used in most jobs are expected to change, with AI acting as a major driver.
That does not mean every worker needs to become an AI engineer. It points to something more disruptive and more practical. Applied AI fluency is moving from a specialist edge to a general work advantage. The people who will stand out are the ones who can fold AI into writing, analysis, sales prep, project work, client service, research, coding, recruiting, planning, and decision support without turning sloppy, overconfident, or dependent. They will not look impressive because they know a few prompt tricks. They will look impressive because they get more done, ask better questions, learn faster, and help the rest of the team move.
That is why “AI power user” is turning into a serious workplace category. Microsoft and LinkedIn use the term for people who use AI at least several times a week, experiment often, and report meaningful gains in focus, creativity, and motivation. The phrase sounds a little grand, but the core idea is simple. Power users do not treat AI like a novelty. They treat it like part of the job. Once that habit settles into a company, everyone notices who can produce first drafts faster, prepare for meetings better, surface patterns earlier, and turn messy information into something useful.
The hiring signal has already changed
A year or two ago, many employers still treated AI as a side skill, something nice to have on a résumé next to spreadsheet fluency or slide design. That posture is breaking down. Microsoft and LinkedIn’s 2024 Work Trend Index did not just find broad interest in AI. It found a hiring threshold moving in plain sight: two-thirds of leaders said they would not hire someone without AI skills, and a larger share preferred lower experience plus AI fluency over higher experience without it. That is not a distant forecast. It is already visible in employer preference.
LinkedIn’s 2025 data points in the same direction, though from a wider labor-market angle. The company says the pace at which members add new skills to their profiles has risen 140% since 2022, and AI literacy demand in jobs increased more than sixfold over the past year. The same report says AI literacy skills added by members grew 177% since 2023, nearly five times faster than skills overall. Employers are not only buying AI systems. They are starting to buy AI-ready behavior.
The World Economic Forum gives that shift a broader economic frame. In its Future of Jobs Report 2025, the forum says 39% of core job skills are expected to change by 2030 and 63% of employers identify skill gaps as the main barrier to business transformation. It also projects 170 million new jobs and 92 million displaced ones by 2030, leaving a net gain of 78 million jobs. Those numbers are often quoted as proof that everything will be fine. They say something sharper than that. Job churn is rising, skills are moving, and employers are worried they cannot retrain fast enough.
That is why the “most in-demand worker” question is not really about a single title. It is about a pattern. Companies need people who can absorb new tools without supervision, translate them into daily output, and still keep standards high. LinkedIn phrases the target well: the sought-after mix is AI and human skills together, especially adaptability and a growth mindset. That language sounds soft until you look at how work actually changes. A team does not benefit much from one isolated expert in the corner. It benefits from colleagues who can learn quickly, adjust their workflow, and keep moving while the tooling shifts under their feet.
There is another clue hiding in the training gap. Microsoft and LinkedIn found that only 39% of users had received AI training from their company and only 25% of companies expected to offer it that year. So the demand signal is rising faster than formal preparation. That gap favors people who are willing to teach themselves, test tools in their own workflow, and develop competence before their employer finally rolls out a policy deck and a short internal webinar. Early movers are getting a head start in a market that still acts like the race has not begun.
AI work has moved beyond the tech department
A lot of the public conversation still treats AI work as a branch of software work. That is outdated. Lightcast’s 2025 report found that 51% of job postings requiring AI skills were outside IT and computer science occupations. Since 2022, postings mentioning generative AI skills outside IT and computer science were up 800%. That matters because it changes who the future AI standout actually is. In many companies, the person gaining ground will not be a model builder. It will be the recruiter who uses AI to tighten sourcing and screening, the marketer who can produce sharper briefs and faster testing cycles, the project lead who can turn scattered notes into decisions, or the finance analyst who can move from extraction to interpretation much faster than before.
LinkedIn’s own skills data lines up with that. In its 2025 “Skills on the Rise” analysis, AI literacy appeared among the fastest-growing skills across countries and job functions, not just in engineering tracks. In the United States, AI literacy ranked first, ahead of conflict mitigation and adaptability. In Australia, Brazil, Germany, Spain, and the United Kingdom, AI literacy also appeared near the top alongside communication and strategic thinking. That is a sign of diffusion, not niche demand.
PwC’s 2025 Global AI Jobs Barometer adds another useful angle. It says jobs requiring AI skills grew 7.5% last year even while total job postings fell 11.3%, and that the share of jobs requiring AI skills is growing in every industry. The report also says wages are rising twice as fast in the most AI-exposed industries as in the least exposed ones. That does not prove every AI worker is paid more because of brilliance. It does show that employers are putting real value on the ability to work in AI-heavy environments. Markets tend to reveal what employers say more honestly than conference panels do.
The salary signal is even clearer in Lightcast’s data. The firm found that job postings mentioning AI skills carried a 28% salary premium, which it estimates at roughly $18,000 more per year. PwC found an even larger average wage premium for workers with AI skills: 56% higher wages when comparing workers in the same occupation who differ on whether they have AI skills. Methodologies differ, so those numbers should not be stacked as if they measure the same thing. Still, they point in the same direction. AI skill is moving from résumé decoration to economic leverage.
This is why the phrase “AI power user” matters more than “AI specialist” for many companies. A specialist is scarce and expensive. A power user is someone inside a normal function who gets unusually strong output from ordinary work. That is easier to spread across an organization and often more immediately useful. A law firm might need a handful of deep technical experts. It may get larger gains from dozens of lawyers and operations staff who can summarize, compare, draft, check, and research with speed and care. A retailer may never hire a machine learning scientist. It may still benefit from merchants, planners, and marketers who know how to use AI every day without wrecking judgment or brand standards. Lightcast’s finding that more AI-skill postings now sit outside tech makes this point hard to ignore.
The shape of a real AI power user
The phrase sounds flashy, so it helps to strip it down. Microsoft and LinkedIn define power users as people who use AI at least several times per week. Their survey says those users save more than 30 minutes per day and are more likely to say AI boosts creativity, focus, motivation, and enjoyment of work. The top predictor of becoming a power user was not title, age, or seniority. It was frequent experimentation. That is a revealing detail. The trait that separates stronger users is not access. It is behavior.
A real power user is not somebody who pastes entire assignments into a chatbot and hopes for a miracle. Nor is it the person who treats prompt writing like stage magic. In most offices, power use looks much less theatrical. It shows up in repeated cycles: framing the task clearly, pushing for options, checking the output, revising the request, mixing AI drafts with domain knowledge, and knowing when to stop. These people build small routines around the tools. They save prompts that work. They compare outputs across systems. They know which tasks deserve automation and which demand a human pass at the end. Microsoft’s broader work-trend material also describes power users as more likely to keep trying when the first response is weak, experiment with different ways of using AI, and research new prompts.
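For readers who want to see the shape of that routine rather than just read about it, the cycle above can be sketched in a few lines of code. This is a purely illustrative sketch, not anything from the research cited here: `call_model` is a placeholder for whatever chat API a team actually uses, and the prompt templates and quality check are invented examples.

```python
# A minimal sketch of a "power user" routine: saved prompt templates,
# an iterate-and-check loop, and an explicit stopping rule.
# Everything here is hypothetical; `call_model` stands in for a real chat API.

SAVED_PROMPTS = {
    # Templates a power user might keep and reuse (invented examples).
    "meeting_summary": "Summarize these notes into decisions, owners, and open questions:\n{text}",
    "brief_options": "Draft three distinct outline options for a brief about:\n{text}",
}

def call_model(prompt: str) -> str:
    """Placeholder for a real model call; echoes the prompt for demo purposes."""
    return f"[model draft for: {prompt[:40]}...]"

def looks_acceptable(draft: str) -> bool:
    """Stand-in for the human check: length, structure, no obvious gaps."""
    return len(draft) > 20

def run_task(task: str, text: str, max_rounds: int = 3) -> str:
    """Frame the task, get a draft, check it, revise the request -- then stop."""
    prompt = SAVED_PROMPTS[task].format(text=text)
    draft = ""
    for _ in range(max_rounds):
        draft = call_model(prompt)
        if looks_acceptable(draft):
            break  # knowing when to stop is part of the routine
        prompt += "\nThe previous draft was too thin; add concrete detail."
    return draft

print(run_task("meeting_summary", "Q3 planning notes..."))
```

The point of the sketch is the structure, not the code: reusable prompts, an explicit quality gate, and a bounded number of revision rounds are exactly the habits that separate the two columns in the table below.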
Casual use versus power use
| Casual AI user | AI power user |
|---|---|
| Uses AI occasionally for drafting or summarizing | Uses AI several times a week as part of a repeatable workflow |
| Accepts the first usable answer | Iterates, checks, rewrites, and tests alternatives |
| Treats AI as a shortcut | Treats AI as a work partner that still needs supervision |
| Uses similar prompts for every task | Adjusts prompts to context, audience, and decision type |
| Saves a bit of time on isolated tasks | Compounds gains across meetings, writing, analysis, planning, and learning |
| Rarely shares working methods | Becomes a model for colleagues and raises team norms |
The table compresses a pattern visible across Microsoft’s power-user data, LinkedIn’s AI literacy findings, and field research on productivity. The jump from casual use to power use is not mainly about software access. It is about disciplined repetition, task judgment, and the willingness to keep refining the interaction.
There is also a social side to power use that gets overlooked. The strongest AI colleague in a team is often the person who can translate tool behavior for everyone else. They explain what the model is good at, what it breaks, what should never be trusted without checking, and what kinds of tasks produce repeatable gains. That matters because AI adoption in real firms rarely spreads evenly. It spreads through examples, copied habits, and local trust. Microsoft’s research notes that power users are more likely to hear from leadership about the importance of generative AI and more likely to get training. The causality can run both ways, but the practical point is clear: power users do not just use the tools; they become visible nodes in how the organization learns.
The category is also broader than many executives assume. LinkedIn’s skills data shows AI literacy, communication, strategic thinking, and adaptability traveling together. That combination matters because raw AI use without context has low value. A mediocre writer with a chatbot is still a mediocre writer, just faster. A strong operator, analyst, recruiter, editor, or salesperson using AI well can produce much more, learn much faster, and often lift the people nearby. The best AI power users are not the most dazzled by the technology. They are the least sentimental about it. They use it because it helps them do real work.
Productivity gains show up where judgment meets software
The case for AI power users would be weaker if it relied only on surveys and hiring sentiment. It does not. A growing body of field evidence shows that generative AI can produce real gains, especially when workers know the domain well enough to steer and check the system. In a widely cited NBER study of 5,179 customer support agents, access to a generative AI assistant increased productivity by 14% on average, with a 34% improvement for novice and lower-skilled workers and little gain for highly skilled workers. The most plausible reading is not that experts do not matter. It is that AI can spread some tacit know-how across a team, helping less experienced workers close part of the gap.
MIT research on writing tasks found something similar. Workers using ChatGPT completed certain writing assignments 40% faster and improved output quality by 18%. That study covered marketers, consultants, grant writers, analysts, managers, and HR professionals. Those are not edge-case jobs. They are common white-collar roles full of drafting, framing, and synthesis work. The lift appears where workers can combine AI speed with human selection and editing.
Harvard Business School’s work with Boston Consulting Group is useful because it adds a warning label. Researchers found that consultants using AI on suitable tasks could produce results faster and at higher quality, but on harder tasks outside the model’s reliable zone they were less likely to get the right answer. HBS described this as a “jagged technological frontier.” That phrase deserves to stick. It captures the uncomfortable truth that AI does not fail in neat, predictable ways. It performs brilliantly in some slices of work and poorly in others that may look similar on the surface. That is exactly why AI power users matter. They learn the edge of the tool instead of mistaking competence for omniscience.
MIT Sloan’s later reporting on GitHub Copilot pushes the story beyond raw speed. Developers with access to the tool increased their time spent on core coding by 12.4% and cut project-management activity by 24.9%. Junior developers saw the biggest effect. The important point here is not just that work got faster. The composition of work changed. More time went to the core task, less to the overhead around it. That is a hallmark of power use across many roles. A good AI user is often not doing more of the same thing. They are spending less time on setup, formatting, retrieval, and repetitive structuring, and more time on judgment, exception handling, and decision quality.
Stanford HAI’s 2025 AI Index says business use is accelerating, with 78% of organizations reporting AI use in 2024, up from 55% a year earlier, and 71% reporting generative AI use in at least one business function. The same report says research continues to show strong productivity impacts and often helps narrow skill gaps. That combination matters. AI does not just reward the already elite. It can also raise the floor. Inside firms, that makes the colleague who knows how to use AI well even more valuable, because they help the team gain speed without waiting for a major reorganization.
None of this proves AI power users win automatically. The evidence says something more precise. The gains tend to appear where work contains repeatable cognitive tasks, the user can judge output quality, and the workflow allows fast iteration. That description fits a startling amount of modern office work. It also explains why the strongest users are becoming so visible. They are not just faster at single tasks. They are better at redesigning the work around the tool.
Human strengths are moving closer to the center
The cheap version of the AI argument says soft skills will matter less because machines can now write, summarize, and simulate conversation. The evidence points the other way. LinkedIn’s Work Change Report says companies want people who can learn new technical skills while maintaining strong human skills, and it calls the combination of AI and human skills a key signal of adaptability and growth mindset. The same report says communication was the number one most in-demand skill in 2024. It also notes that people developing generative AI skills were more likely to develop human skills such as change readiness, trust building, and logical reasoning.
LinkedIn’s 2025 “Skills on the Rise” ranking tells a similar story from another angle. Across countries, AI literacy shows up beside communication, strategic thinking, adaptability, relationship building, and conflict mitigation. Those pairings are not accidental. As AI lowers the cost of producing text and analysis, the premium shifts toward framing, judgment, persuasion, coordination, and trust. Somebody still has to decide what problem matters, what answer is usable, what tone fits the client, what risk is acceptable, and what tradeoff the team should make. AI can assist with all of that. It does not carry responsibility for it.
The World Economic Forum reaches much the same conclusion. Its 2025 jobs report says the fastest-growing skills still include human capabilities such as creative thinking, resilience, flexibility, and collaboration alongside technical skills in AI, big data, and cybersecurity. The forum is not sentimental about this. It is reading employer demand. Firms want technical literacy, but they are not asking for a workforce of brittle tool operators. They want people who can keep adapting while systems, workflows, and market conditions move around them.
The OECD sharpens the point. Its Employment Outlook 2025 says AI is increasing the need for management, business, and digital skills while reducing demand for some cognitive and clerical tasks. An earlier OECD paper on AI skills in job postings found that communication, problem-solving, creativity, and teamwork gained relative importance over time alongside AI-specific and software-related competencies. Put that together and the picture is pretty clear. The worker who benefits most from AI is not the one who tries to outsource thinking. It is the one who can pair technical use with judgment that other people trust.
That is why the future “best colleague” is unlikely to be a pure automation maximalist. It will be somebody who knows when to speed up and when to slow down. Somebody who can use AI to draft a proposal in minutes, then catch the weak assumption in the middle of it. Somebody who can turn a meeting transcript into action items, then notice the political risk the model cannot see. The more language and analysis become abundant, the more scarce good interpretation becomes. LinkedIn’s own chief economist put it bluntly in the Work Change Report: AI is most powerful when collaborative humans surround and lead it.
Teams are splitting into fast movers and laggards
The biggest change inside many organizations will not arrive as a formal restructuring. It will show up as widening distance between coworkers who have folded AI into their daily work and those who still treat it as optional. Microsoft and LinkedIn report that power users say AI saves them more than 30 minutes per day. Half an hour sounds modest until you spread it across a month of drafts, analysis, meeting prep, admin cleanup, and knowledge retrieval. Then it becomes a compounding advantage. The value is not in one dramatic leap. It is in daily separation.
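The compounding claim is easy to check with back-of-envelope arithmetic. The workday counts below are rough assumptions for illustration, not figures from the Microsoft and LinkedIn research:

```python
# Back-of-envelope: what "30 minutes saved per day" compounds into.
# The 21 workdays/month and 235 workdays/year figures are rough assumptions.
minutes_per_day = 30
workdays_per_month = 21
workdays_per_year = 235  # ~47 working weeks x 5 days

hours_per_month = minutes_per_day * workdays_per_month / 60
hours_per_year = minutes_per_day * workdays_per_year / 60

print(f"{hours_per_month:.1f} hours/month")  # 10.5 hours/month
print(f"{hours_per_year:.1f} hours/year")    # 117.5 hours/year
```

Roughly ten hours a month, or close to three full working weeks a year, per person. That is the scale of the gap that opens between the two groups.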
You can already see where that separation forms. The faster movers arrive at meetings with clearer summaries and sharper questions. They turn recordings, notes, and past threads into decisions quickly. They produce more variants before choosing one. They spend less time staring at the blank page. They can compare documents, markets, candidates, or customer patterns faster than colleagues who still do everything manually. After a few months, this starts to look like talent even when part of it is workflow design. That is not a criticism. Workflow design is talent now.
There is a cultural effect too. Power users often become the people others copy. They share prompt structures, evaluation routines, template setups, and ways of checking outputs. Their teammates start asking them which tool to use, how to set up a task, or how to spot a weak answer. That can create local multiplier effects inside a team, which is why these colleagues become so visible. Microsoft’s materials suggest power users are more likely to get training and more likely to hear leadership messaging about AI. That does not only make them better users. It gives them informal status as early interpreters of a new work system.
Yet the split is not just about speed. It is also about confidence under uncertainty. Workers who have spent months experimenting with AI tend to know what kinds of tasks deserve trust and what kinds deserve caution. They are less likely to freeze when a new tool appears, because they have already built the habit of trying, checking, and adapting. That habit matters more than any single platform. Models will change. Interfaces will change. Vendors will come and go. The durable advantage is not tool loyalty. It is learned fluency. LinkedIn’s data on skill change and Microsoft’s finding that experimentation predicts power use both point in that direction.
A team with only a few such people can feel sharply uneven. Some members will work in a new tempo while others remain stuck in the old one. Managers will begin to notice the gap even if they do not yet know how to name it. They may call it proactivity, adaptability, curiosity, or business judgment. Often they are seeing AI fluency bundled together with those traits. The workers who look unusually capable over the next few years will often be the ones who learned to think with AI before everyone else did.
Shallow adoption creates its own problems
None of this should slide into empty boosterism. Strong AI use is valuable precisely because weak AI use creates fresh risks. Harvard’s “jagged frontier” finding is the cleanest warning. On some tasks, AI helped consultants produce better work faster. On tasks beyond the tool’s reliable range, performance dropped. That is not a side issue. It is the central management problem. People who do not understand the limits of AI can look productive while quietly making worse decisions.
Anthropic’s recent economic research adds a second caution. Its 2026 work on labor-market impacts says actual usage remains only a fraction of what is theoretically feasible, even as occupations with higher observed exposure are projected to grow less through 2034. Its March 2026 Economic Index report says about 49% of jobs have seen at least a quarter of their tasks performed using Claude. That figure is striking, but it does not mean half the labor market is about to vanish. It shows that task-level exposure is spreading unevenly and that current adoption still sits below theoretical capability. The danger lies in confusing exposure with replacement and possibility with practice.
The ILO’s 2025 update makes the distribution issue clearer. Clerical occupations remain the most exposed to generative AI, while some strongly digitized professional and technical roles have also become more exposed as the systems improve. That pattern matters because it breaks the lazy assumption that AI pressure sits only at the bottom or only in repetitive work. Exposure is widening, but it is not uniform. The colleague who thrives in this environment will be the one who knows which parts of the role can be accelerated, which parts can be delegated safely, and which parts still require heavy human control.
There are social costs as well. MIT Sloan’s reporting on Copilot found a sharp drop in peer collaboration among developers using the tool. Less back-and-forth may save time, but it can also shrink learning, weaken shared standards, and reduce the informal exchange that keeps a team healthy. A worker can become faster and more isolated at the same time. That is not a contradiction. It is a design problem. Power use should not become solitary use. The strongest colleagues will know how to keep human review, discussion, and accountability in the loop.
This is why shallow adoption is not enough. Having an AI button inside the software stack does very little on its own. A team full of low-trust, low-discipline users can generate more text, more slides, more drafts, more summaries, and still arrive at worse conclusions. Strong AI colleagues are valuable because they reduce that risk. They verify, compare, question, and edit. They know when the output feels too neat. They catch the fake citation, the stale assumption, the generic phrasing, the missed exception. A real AI power user is part producer and part brake pedal.
Companies need more builders than spectators
If the most wanted colleague will be an AI power user, companies should stop treating power use as a lucky individual trait and start building for it. The first move is obvious and still neglected: training. Microsoft and LinkedIn found that only 39% of users had received AI training from their company, while LinkedIn’s Work Change Report says 70% of HR professionals are prioritizing upskilling initiatives in 2025. There is no reason for those two numbers to sit so far apart except organizational drift. Firms like the idea of AI readiness more than they like the work of teaching it.
The second move is more important than generic workshops: role-based workflow design. AI use in marketing is not AI use in legal ops, procurement, finance, or recruiting. Lightcast’s research argues that each career area sits at a different stage of AI adoption and requires different skill clusters. That is why blanket “learn prompting” sessions disappoint people. Workers do not need motivational speeches about the future. They need a concrete map of where AI fits in their own week: which recurring tasks to test first, what good output looks like, what must be checked, where data boundaries sit, and when a human sign-off is mandatory.
The third move is to reward visible method sharing. Power users raise team value when their routines spread. They lower team value when they become private performers who hoard tricks. Managers should want internal examples, tested prompt libraries, output checklists, review norms, and short demos from strong users who can show what works in the firm’s own environment. This is where AI adoption stops being a vendor project and becomes operating culture. The goal is not a few AI stars. It is a thicker layer of competent daily users. Microsoft’s own materials point to experimentation as the best predictor of power use; the company’s research also shows strong demand from leaders combined with weak formal readiness inside organizations. That combination argues for active internal learning systems, not passive hope.
Employers also need to stop misunderstanding entry-level talent in the AI era. MIT Sloan’s reporting on software developers found junior workers saw some of the biggest gains from AI assistance, and the NBER call-center study found novice workers benefited most. Replacing junior roles too aggressively may save salary line items in the short run while damaging the firm’s future skill base. Junior staff with strong AI habits can become productive faster, but they still need coaching, feedback, and judgment training. The next great colleague may start out as a relatively inexperienced hire who learns quickly because AI shortens the ramp.
There is also a business case hiding in the labor-market data. Lightcast says even one relevant AI skill carries a 28% salary premium, and PwC says workers with AI skills command higher wages while AI-heavy industries are seeing faster wage and productivity growth. Those numbers are not just recruiting signals. They say undertrained companies are leaving value on the table. The cheapest way to improve the quality of work may not be another software license. It may be helping ordinary employees become much better users of the software they already have.
The next indispensable colleague
The labor market still needs deep technical experts. It still needs people who build models, data systems, infrastructure, and governance. That part is not in doubt. The larger shift is happening elsewhere. The broadest value is moving toward workers who can combine domain knowledge, AI fluency, and human judgment in the same piece of work. Lightcast shows that most postings asking for AI skills now sit outside tech. LinkedIn shows AI literacy climbing quickly across functions and countries. Microsoft shows leaders already using AI skill as a hiring filter. PwC shows employers paying for it. That is not hype. It is convergence.
The term “AI power user” can sound temporary, like a label from an early product cycle. The underlying pattern is not temporary at all. Every major technology wave creates a class of workers who learn to use the new system before the organization redesigns around it. They become translators, accelerators, and informal teachers. They are the people others rely on because they make the tool useful rather than noisy. That is the colleague many teams will want most over the next few years.
There is a final reason this matters. The WEF’s 2025 report says the skills gap is the biggest barrier to business transformation. That gap will not be closed by announcements, pilots, or slogans about innovation. It will close through thousands of ordinary workers learning how to do their job better with new tools while keeping standards intact. The worker who figures that out early will not just protect their own career. They will become useful to everyone around them. And usefulness, more than title inflation or AI theater, is what hiring markets reward longest.
FAQ
Is demand for AI skills limited to tech roles?
No. Lightcast found that 51% of job postings requiring AI skills were outside IT and computer science in 2024, which suggests AI demand is now spread across functions such as marketing, operations, recruiting, finance, and customer work.
Do employers really screen candidates for AI skills?
Yes. Microsoft and LinkedIn reported that 66% of leaders would not hire someone without AI skills, and 71% would rather hire a less experienced candidate with AI skills than a more experienced one without them.
What separates an AI power user from a casual user?
The clearest difference is repeated, disciplined use. Microsoft and LinkedIn say power users use AI at least several times per week, save more than 30 minutes a day, and are strongly associated with frequent experimentation.
Will human skills matter less as AI spreads?
The evidence points the other way. LinkedIn says communication was the top in-demand skill in 2024, and WEF says creative thinking, resilience, flexibility, and collaboration remain among the fastest-growing skill areas.
Does AI use measurably improve productivity?
Often, yes, though the gains depend on the task and the user. NBER found a 14% average productivity increase for customer support agents using an AI assistant, while MIT research found certain writing tasks were completed 40% faster with 18% better quality.
Can heavy AI use backfire?
Yes. HBS research on the “jagged technological frontier” found that AI helped on some tasks and hurt performance on others outside the tool’s strong zone. Fast output is not the same as good judgment.
Do companies need AI specialists or broad AI fluency?
Most need both, but many are underinvesting in broad workforce fluency. LinkedIn reports strong upskilling intent among HR teams, and Microsoft shows demand for AI-capable workers already exceeds company training efforts. That makes internal development a serious business issue, not a side program.
Author:
Jan Bielik
CEO & Founder of Webiano Digital & Marketing Agency

This article is an original analysis supported by the sources cited below.
Work Change Report: AI Is Coming to Work
LinkedIn Economic Graph report on skill change, AI literacy demand, and the growing value of combining AI skills with human strengths.
Skills on the Rise in 2025
LinkedIn analysis of the fastest-growing skills across countries and job functions, including AI literacy, communication, and adaptability.
2024 Work Trend Index Annual Report
Microsoft and LinkedIn executive summary covering hiring preferences, AI power users, and employee adoption patterns.
AI at work is here — now comes the hard part
Microsoft WorkLab article summarizing the main findings from the 2024 Work Trend Index.
Microsoft and LinkedIn release the 2024 Work Trend Index on the state of AI at work
Official Microsoft summary of the report’s hiring, adoption, and leadership findings.
Future of Jobs Report 2025
World Economic Forum publication page for the 2025 report on global job creation, displacement, and skills demand.
Future of Jobs Report 2025: 78 Million New Job Opportunities by 2030 but Urgent Upskilling Needed to Prepare Workforces
WEF press release highlighting projected labor-market churn and the scale of the skills gap.
Future of Jobs Report 2025: The jobs of the future – and the skills you need to get them
WEF explainer on the report’s findings around growing roles and the skill mix employers want.
The 2025 AI Index Report
Stanford HAI overview of the 2025 AI Index, including business adoption, investment, and productivity evidence.
AI Index 2025: State of AI in 10 Charts
Stanford HAI summary of key AI adoption, cost, and usage trends from the 2025 index.
The Fearless Future: 2025 Global AI Jobs Barometer
PwC’s main landing page for its 2025 global analysis of AI’s effects on jobs, wages, and productivity.
The Fearless Future: PwC’s 2025 Global AI Jobs Barometer
PwC report PDF with detailed data on wage growth, industry exposure, and demand for AI-skilled workers.
Beyond the Buzz: Developing the AI Skills Employers Actually Need
Lightcast report page on AI skill demand across career areas and the spread of AI hiring beyond tech roles.
Developing the AI Skills Employers Actually Need
Lightcast report PDF with data on non-tech AI hiring and salary premiums attached to AI skills.
The Anthropic Economic Index
Anthropic’s running economic research hub tracking how Claude is being used across work and non-work tasks.
Anthropic Economic Index report: Learning curves
Anthropic’s March 2026 report on changing usage patterns and task-level AI diffusion across jobs.
Labor market impacts of AI: A new measure and early evidence
Anthropic research introducing observed exposure and early evidence on AI’s labor-market effects.
Generative AI and Jobs: A Refined Global Index of Occupational Exposure
ILO publication explaining which occupations are most exposed to generative AI and how that exposure is shifting.
Generative AI at Work
NBER paper on AI-assisted customer support work and its productivity gains, especially for less experienced workers.
Experimental evidence on the productivity effects of generative artificial intelligence
Science paper reporting faster completion times and improved quality on certain writing tasks with ChatGPT.
Humans vs. Machines: Untangling the Tasks AI Can (and Can’t) Handle
Harvard Business School summary of research on where generative AI helps knowledge workers and where it fails.
Generative AI changes how employees spend their time
MIT Sloan article on how access to AI coding tools shifts work toward core tasks and away from overhead.
Study finds ChatGPT boosts worker productivity for some writing tasks
MIT News summary of the writing-task productivity experiment and its main findings.
OECD Employment Outlook 2025
OECD report describing how AI is changing demand for management, business, and digital skills.
Demand for AI skills in jobs
OECD working paper on online job-posting evidence for AI skills and their complementarity with communication and problem-solving.
AI Will Transform the Global Economy. Let’s Make Sure It Benefits Humanity.
IMF analysis of AI exposure across global labor markets and the balance between complementarity and displacement.