Our Approach

AI Workforce Watch employs a multi-factor analytical framework that combines AI capability assessment, task-level occupational decomposition, labor market data, and expert research synthesis to evaluate the automation risk facing individual occupations and industry sectors.

Rather than relying on a single metric or a narrow set of assumptions, our methodology integrates findings from multiple authoritative sources to produce a comprehensive view of how artificial intelligence is likely to affect specific roles over varying time horizons.

The AI Job Scanner uses large language models to synthesize research from peer-reviewed studies, government datasets, and industry reports. Each analysis cross-references current AI capabilities against the specific task composition of a given occupation, producing a risk score grounded in empirical evidence rather than speculation.

Data Sources

Our analysis draws on a range of established research institutions, government agencies, and industry organizations. The following sources form the foundation of our assessments:

Bureau of Labor Statistics (BLS)

Occupational employment and wage statistics, industry staffing patterns, and job growth projections from the Occupational Outlook Handbook. BLS data provides the quantitative baseline for understanding current labor market conditions and projected employment trends.

O*NET (Occupational Information Network)

Task-level job descriptions, detailed skills requirements, work activities, and knowledge domains for over 1,000 occupations. O*NET data enables the granular task decomposition that underpins our risk scoring methodology.

McKinsey Global Institute

Research on automation potential across work activities, industry transformation studies, and workforce transition modeling. McKinsey's task-level automation analysis provides a foundational reference point for assessing which activities are technically automatable.

World Economic Forum

Future of Jobs reports, skills gap analysis, and employer survey data on anticipated workforce changes. WEF research offers a forward-looking perspective on how organizations plan to adopt AI technologies.

PwC and Accenture

Industry-specific AI adoption research, economic impact modeling, and enterprise technology deployment data. These studies provide insight into real-world adoption rates and sector-specific implementation patterns.

Brookings Institution

Research on AI's impact on wages, employment inequality, and geographic disparities in automation exposure. Brookings provides critical analysis of the distributional effects of AI-driven workforce changes.

Academic Research

Peer-reviewed work from the Stanford Institute for Human-Centered AI (HAI), MIT Work of the Future initiative, and the Oxford Martin School. These institutions produce rigorous, methodologically transparent research on AI's labor market implications.

AI Company Publications

Technical capability assessments and safety research from Anthropic, OpenAI, and Google DeepMind. These publications inform our understanding of current and near-term AI system capabilities, which directly affect our task-level estimates of automation potential.

Risk Score Methodology

Each occupation analyzed by AI Workforce Watch receives a risk score on a scale from 0 to 100, where 0 indicates minimal automation risk and 100 indicates very high automation risk. These scores are derived from a weighted evaluation of four primary factors:

Task Composition

What percentage of the job's core tasks involve routine cognitive or manual work as opposed to creative problem-solving, complex interpersonal interaction, or unstructured physical activity? Occupations dominated by routine, predictable tasks receive higher risk scores.

Current AI Capabilities

Which of the occupation's constituent tasks can current AI systems perform at or above human-level competency? This factor assesses the present state of AI technology relative to the specific demands of each role.

Adoption Barriers

What regulatory, economic, social, and infrastructure barriers exist that may slow or prevent automation, even where technical capability exists? Factors such as licensing requirements, high implementation costs, public trust considerations, and physical infrastructure constraints are evaluated.

Historical Precedent

How have similar technological transitions affected comparable roles in the past? Historical patterns of adoption, displacement, and role transformation inform our projections for how AI-driven change is likely to unfold.
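The four factors above combine into a single 0-100 score via a weighted evaluation. As a rough sketch of that arithmetic, the snippet below shows one way such a combination could work; the weights and the function name are illustrative placeholders, not the actual values or code used by AI Workforce Watch. Note that adoption barriers work in the opposite direction from the other factors: stronger barriers reduce risk.

```python
def risk_score(task_composition, ai_capability, adoption_barriers, precedent):
    """Combine four factor ratings (each 0-100) into an overall 0-100 risk score.

    adoption_barriers is inverted before weighting, since stronger
    barriers lower automation risk rather than raise it.
    """
    # Hypothetical weights, for illustration only.
    weights = {
        "task_composition": 0.35,   # share of routine, predictable tasks
        "ai_capability": 0.30,      # overlap with current AI competency
        "adoption_barriers": 0.20,  # regulatory/economic/social friction
        "precedent": 0.15,          # historical displacement patterns
    }
    score = (
        weights["task_composition"] * task_composition
        + weights["ai_capability"] * ai_capability
        + weights["adoption_barriers"] * (100 - adoption_barriers)
        + weights["precedent"] * precedent
    )
    return round(score, 1)
```

Under this sketch, an occupation with highly routine tasks but strong adoption barriers can still land in a moderate band, because the barrier term pulls the weighted sum down.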

Risk Categories

0 - 25 Low Risk

Occupations with strong barriers to automation, high reliance on interpersonal or creative skills, and limited overlap with current AI capabilities.

26 - 50 Moderate Risk

Occupations where some tasks are automatable but significant portions of the role require human judgment, adaptability, or physical dexterity.

51 - 75 High Risk

Occupations where a majority of tasks are technically automatable and adoption barriers are moderate or declining.

76 - 100 Very High Risk

Occupations where most tasks are automatable with current or near-term AI, adoption barriers are low, and historical precedent suggests rapid displacement.
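The four bands above partition the 0-100 scale into non-overlapping ranges, so mapping a score to its category is a simple threshold check. A minimal sketch (the function name is ours, not part of the published methodology):

```python
def risk_category(score):
    """Map a 0-100 risk score to its named band, per the ranges above."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 25:
        return "Low Risk"
    if score <= 50:
        return "Moderate Risk"
    if score <= 75:
        return "High Risk"
    return "Very High Risk"
```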

Task-Level Analysis

A core principle of our methodology is that jobs are not monolithic units; they are composed of discrete tasks, each with its own automation profile. Analyzing automation risk at the task level produces more accurate and actionable assessments than occupation-level generalizations.

Jobs are decomposed into their constituent tasks using O*NET task databases, which catalog the specific activities, knowledge areas, and skills associated with each occupation. Each task is then evaluated independently for its automation potential based on the technical requirements of the task, the current state of AI capabilities relevant to that task type, and the practical feasibility of deploying automation.

The aggregate risk score for an occupation reflects the weighted combination of its task-level assessments, with weighting determined by the relative time and importance each task represents within the overall role. This approach ensures that an occupation is not classified as high-risk simply because one peripheral task is automatable, nor classified as low-risk if its core activities are increasingly within the reach of AI systems.
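The weighted aggregation described above can be sketched in a few lines. The task names, scores, and weights below are invented for illustration, not drawn from O*NET or from any real assessment; the point is only to show how heavily weighted core tasks dominate the result.

```python
def occupation_score(tasks):
    """Weighted mean of per-task automation scores.

    tasks: list of (automation_score, weight) pairs, where weight reflects
    the task's share of the role's time and importance. Weights need not
    sum to 1; they are normalized here.
    """
    total_weight = sum(w for _, w in tasks)
    if total_weight <= 0:
        raise ValueError("task weights must sum to a positive value")
    return sum(s * w for s, w in tasks) / total_weight

# A hypothetical occupation: one peripheral task is highly automatable,
# but the low-risk core tasks carry most of the weight, so the
# occupation-level score stays moderate.
tasks = [
    (85, 0.05),  # peripheral: routine report formatting
    (30, 0.50),  # core: client-facing negotiation
    (45, 0.30),  # core: case analysis and judgment
    (70, 0.15),  # supporting: document review
]
```

Here the aggregate lands in the mid-40s even though one task scores 85, which is exactly the behavior the weighting is meant to produce.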

Timeline Projections

Automation risk is not a static measure. The likelihood and pace of displacement vary depending on the time horizon under consideration. AI Workforce Watch provides projections across three time frames:

Near-Term: 1 to 3 Years

Near-term projections are based on AI capabilities that are currently deployed in production environments. These assessments reflect technologies that are commercially available, have demonstrated reliability at scale, and are actively being adopted by organizations. Near-term risk is highest for tasks where proven AI tools already exist and adoption barriers are minimal.

Medium-Term: 3 to 7 Years

Medium-term projections incorporate capabilities that are currently in research or early deployment stages, adjusted for typical technology adoption curves. These estimates account for the time required for technical maturation, regulatory adaptation, organizational change management, and workforce retraining.

Long-Term: 7 to 15 Years

Long-term projections are based on extrapolated capability trajectories and historical patterns of technology adoption and workforce adjustment. These estimates carry greater uncertainty and are presented as directional indicators rather than precise forecasts. Long-term analysis considers potential regulatory shifts, societal adaptation, and the emergence of new occupational categories.
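Since each occupation carries a separate projection per horizon, a natural representation is a small record with one field per time frame. The structure and values below are a hypothetical sketch of how such projections could be stored, not the actual data model:

```python
from dataclasses import dataclass

@dataclass
class RiskProjection:
    """One occupation's risk projections across the three horizons above."""
    occupation: str
    near_term: float    # 1-3 years: deployed, commercially proven capabilities
    medium_term: float  # 3-7 years: research/early-deployment, adoption-adjusted
    long_term: float    # 7-15 years: extrapolated; directional, not precise

# Invented example values, for illustration only.
example = RiskProjection(
    occupation="Hypothetical Analyst",
    near_term=35.0,
    medium_term=55.0,
    long_term=70.0,
)
```

Keeping the horizons as distinct fields, rather than a single number, preserves the point that uncertainty grows with the time frame.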

Limitations and Disclaimers

Transparency about the boundaries of our analysis is essential. Users should be aware of the following limitations when interpreting risk scores and projections:

  • Risk scores are estimates based on current knowledge and publicly available research. They should not be treated as deterministic predictions of specific outcomes.
  • AI capability development is inherently uncertain. The pace of progress may accelerate or decelerate in ways that are difficult to forecast, and breakthrough capabilities may emerge unexpectedly.
  • Industry-specific factors, geographic variation, and regulatory changes can significantly affect the timeline and extent of automation in any given occupation or sector.
  • Individual job roles vary significantly even within the same occupational title. Two workers with the same job title may face very different automation exposure depending on their specific responsibilities, employer, and industry context.
  • Risk scores are updated periodically as new research becomes available and as AI capabilities evolve. Historical scores may not reflect current conditions.
  • This tool is for informational and educational purposes only. It should not be the sole basis for career decisions, hiring practices, or workforce planning. Users are encouraged to consult multiple sources and professional advisors when making consequential decisions.

Updates and Corrections

Our methodology is reviewed and refined on a quarterly basis to incorporate new research findings, updated AI capability assessments, and revised labor market data. Material changes to the methodology are documented and communicated through our blog.

We welcome questions, feedback, and correction requests. If you have identified an error in our analysis or have information that may improve our assessments, please contact us at methodology@aiworkforcewatch.com.

For the latest analysis and commentary on AI workforce trends, visit our research blog.