Workplace impact of artificial intelligence

Source: Wikipedia, the free encyclopedia.
A close up of a person's neck and upper torso, with a black rectangular sensor and camera unit attached to their shirt collar
AI-enabled wearable sensor networks may improve worker safety and health through access to real-time, personalized data, but also present psychosocial hazards such as micromanagement, a perception of surveillance, and information security concerns.

The impact of artificial intelligence on workers includes both applications to improve worker safety and health, and potential hazards that must be controlled.

One potential application is using AI to eliminate hazards by removing humans from hazardous situations that involve risk of stress, overwork, or musculoskeletal injuries. Predictive analytics may also be used to identify conditions that may lead to hazards such as fatigue, repetitive strain injuries, or toxic substance exposure, leading to earlier interventions. Another is to streamline workplace safety and health workflows through automating repetitive tasks, enhancing safety training programs through virtual reality, or detecting and reporting near misses.

When used in the workplace, AI also presents the possibility of new hazards. These may arise from machine learning techniques behaving unpredictably or inscrutably in their decision-making, from cybersecurity and information privacy issues, from psychosocial effects of changes in work organization, from physical hazards such as human–robot collisions, and from the ergonomic risks of control interfaces and human–machine interactions. Hazard controls include cybersecurity and information privacy measures, communication and transparency with workers about data usage, and limitations on collaborative robots.

From a workplace safety and health perspective, only "weak" or "narrow" AI that is tailored to a specific task is relevant, as there are many examples that are currently in use or expected to come into use in the near future. "Strong" or "general" AI is not expected to be feasible in the near future, and discussion of its risks is within the purview of futurists and philosophers rather than industrial hygienists.

Certain digital technologies are predicted to result in job losses. In recent years, the adoption of modern robotics has led to net employment growth. However, many businesses anticipate that automation, or the employment of robots, would result in job losses in the future. This is especially true for companies in Central and Eastern Europe.[2][3][4] Other digital technologies, such as platforms or big data, are projected to have a more neutral impact on employment.[2][4]

Health and safety applications

For any potential AI health and safety application to be adopted, it must be accepted by both managers and workers. For example, worker acceptance may be diminished by concerns about information privacy,[5] or by a lack of trust in the new technology, which may arise from inadequate transparency or training.[6]: 26–28, 43–45  Alternatively, managers may emphasize increases in economic productivity rather than gains in worker safety and health when implementing AI-based systems.[7]

Eliminating hazardous tasks

A large room with a suspended ceiling packed with cubicles containing computer monitors
Call centers involve significant psychosocial hazards due to surveillance and overwork. AI-enabled chatbots can remove workers from the most basic and repetitive of these tasks.

AI may increase the scope of work tasks where a worker can be removed from a situation that carries risk. In a sense, while traditional automation can replace the functions of a worker's body with a robot, AI effectively replaces the functions of their brain with a computer. Hazards that can be avoided include stress, overwork, musculoskeletal injuries, and boredom.[8]: 5–7 

This can expand the range of affected job sectors into white-collar and service sector jobs such as in medicine, finance, and information technology.[9] As an example, call center workers face extensive health and safety risks due to the repetitive and demanding nature of the work and its high rates of micro-surveillance. AI-enabled chatbots lower the need for humans to perform the most basic call center tasks.[8]: 5–7 

Analytics to reduce risk

A drawing of a man lifting a weight onto an apparatus, with various distances marked
The NIOSH lifting equation[10][11] is calibrated for a typical healthy worker to avoid back injuries, but AI-based methods may instead allow real-time, personalized calculation of risk.
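As a concrete illustration, the revised NIOSH lifting equation computes a Recommended Weight Limit (RWL) as a load constant scaled by multipliers for horizontal and vertical hand position, travel distance, asymmetry, frequency, and coupling quality. The sketch below implements the metric form of the equation; the frequency and coupling multipliers, normally read from NIOSH lookup tables, are taken here as parameters, and the idea that an AI system would supply the posture inputs in real time is an extrapolation from the caption above, not a cited implementation.

```python
def niosh_rwl(h_cm, v_cm, d_cm, a_deg, fm=1.0, cm=1.0):
    """Recommended Weight Limit (kg) from the revised NIOSH lifting
    equation, metric form.  fm and cm are the frequency and coupling
    multipliers, normally read from the NIOSH lookup tables."""
    LC = 23.0                                    # load constant, kg
    HM = min(1.0, 25.0 / max(h_cm, 25.0))        # horizontal multiplier
    VM = 1.0 - 0.003 * abs(v_cm - 75.0)          # vertical multiplier
    DM = min(1.0, 0.82 + 4.5 / max(d_cm, 25.0))  # distance multiplier
    AM = 1.0 - 0.0032 * a_deg                    # asymmetry multiplier
    return LC * HM * VM * DM * AM * fm * cm

def lifting_index(load_kg, rwl):
    """LI > 1 indicates elevated risk of back injury for some workers."""
    return load_kg / rwl
```

An ideal lift (hands 25 cm out, 75 cm high, minimal travel, no twisting) yields the full 23 kg limit; a sensor-driven system could recompute the lifting index continuously from estimated postures rather than from a one-time worst-case assessment.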

Machine learning is used for predictive analytics that may help anticipate industrial disasters or make disaster response more efficient.[12]

For manual handling jobs, AI-based analysis of data from wearable sensors may allow real-time, personalized management of ergonomic risk and fatigue, as well as better analysis of the risk associated with specific job roles.[5]

Wearable sensors may also enable earlier intervention against exposure to toxic substances than is possible with area or breathing zone testing on a periodic basis. Furthermore, the large data sets generated could improve workplace health surveillance, risk assessment, and research.[12]
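For airborne substances, the quantity that periodic testing estimates is the 8-hour time-weighted average (TWA), against which occupational exposure limits such as OSHA permissible exposure limits are defined. A minimal sketch of the calculation a wearable-sensor system could run continuously (the alerting rule and the numbers are illustrative, not drawn from any cited system):

```python
def eight_hour_twa(samples):
    """samples: list of (concentration_ppm, duration_hours) pairs.
    Returns the 8-hour time-weighted average exposure, the quantity
    that limits such as OSHA PELs are defined against."""
    return sum(c * t for c, t in samples) / 8.0

def exposure_alert(samples, limit_ppm):
    """Flag when the accumulated TWA already exceeds the exposure
    limit, enabling intervention before the end of the shift."""
    return eight_hour_twa(samples) > limit_ppm
```

Area or breathing-zone testing produces one such sample set per survey; a sensor network produces them continuously, which is what makes earlier intervention possible.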

Streamlining safety and health workflows

AI can also be used to make the workplace safety and health workflow more efficient. One example is coding of workers' compensation claims, which are submitted in a prose narrative form and must manually be assigned standardized codes. AI is being investigated to perform this task faster, more cheaply, and with fewer errors.[13][14]
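Automated claim coding is essentially text classification. A toy sketch of the idea using a multinomial naive Bayes classifier over claim narratives (the injury codes and training sentences here are invented for illustration; the systems in the cited NIOSH work train on large coded corpora):

```python
import math
from collections import Counter, defaultdict

class NarrativeCoder:
    """Toy multinomial naive Bayes that assigns an injury code to a
    free-text claim narrative.  Labels and data are hypothetical."""

    def fit(self, narratives, codes):
        self.code_counts = Counter(codes)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, code in zip(narratives, codes):
            words = text.lower().split()
            self.word_counts[code].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.code_counts.values())
        best, best_lp = None, float("-inf")
        for code, n in self.code_counts.items():
            lp = math.log(n / total)  # log prior
            denom = sum(self.word_counts[code].values()) + len(self.vocab)
            for w in words:           # Laplace-smoothed likelihoods
                lp += math.log((self.word_counts[code][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = code, lp
        return best

coder = NarrativeCoder().fit(
    ["worker slipped on wet floor and fell",
     "fell from ladder while stacking boxes",
     "lifted heavy pallet and strained lower back",
     "back pain after lifting repeatedly"],
    ["FALL", "FALL", "STRAIN", "STRAIN"])
```

With even this tiny corpus, `coder.predict("employee fell off a step stool")` codes the narrative as "FALL"; the appeal over manual coding is speed and consistency at scale, not cleverness of the model.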

AI‐enabled virtual reality systems may be useful for safety training for hazard recognition.[12]

Artificial intelligence may be used to more efficiently detect near misses. Reporting and analysis of near misses are important in reducing accident rates, but they are often underreported because they are not noticed by humans, or are not reported by workers due to social factors.[15]
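At its simplest, sensor-based near-miss detection reduces to flagging moments when a monitored safety margin was silently violated, independent of whether anyone chose to report it. An illustrative sketch over a stream of worker–machine separation readings (the 0.5 m threshold is a hypothetical value, not a standard):

```python
def near_miss_events(distances_m, threshold_m=0.5):
    """Group consecutive below-threshold separation samples into
    events, returning (start_index, end_index) pairs.  Each event is
    a candidate near miss that would otherwise go unreported."""
    events, current = [], []
    for i, d in enumerate(distances_m):
        if d < threshold_m:
            current.append(i)
        elif current:
            events.append((current[0], current[-1]))
            current = []
    if current:
        events.append((current[0], current[-1]))
    return events
```

For example, the reading stream `[2.0, 0.4, 0.3, 1.5, 0.2]` yields two events, one spanning samples 1–2 and one at sample 4, which a reporting system could log automatically.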

Hazards

Some AI-based systems exhibit unpredictability and inscrutability in their decision-making, which can lead to hazards if managers or workers cannot predict or understand the system's behavior.

There are several broad aspects of AI that may give rise to specific hazards. The risks depend on implementation rather than the mere presence of AI.[8]: 2–3 

Systems using sub-symbolic AI such as machine learning may behave unpredictably and are more prone to inscrutability in their decision-making, especially when they encounter situations that were not part of their training data. Machine learning applied during the design phase may have different implications than that applied at runtime. Systems using symbolic AI are less prone to unpredictable behavior.[6]: 14–18 

The use of AI also increases cybersecurity risks relative to platforms that do not use AI,[6]: 17  and information privacy concerns about collected data may pose a hazard to workers.[5]

Psychosocial

Introduction of new AI-enabled technologies may lead to changes in work practices that carry psychosocial hazards such as a need for retraining or fear of technological unemployment.

Psychosocial hazards can cause not only psychological harm but also physical illness or injury such as cardiovascular disease or musculoskeletal injury.[16] Many hazards of AI are psychosocial in nature due to its potential to cause changes in work organization, in terms of increasing complexity and interaction between different organizational factors. However, psychosocial risks are often overlooked by designers of advanced manufacturing systems.[7]

Changes in work practices

AI is expected to lead to changes in the skills required of workers, requiring training of existing workers, flexibility, and openness to change.[1] The requirement for combining conventional expertise with computer skills may be challenging for existing workers.[7] Over-reliance on AI tools may lead to deskilling of some professions.[12]

Increased monitoring may lead to micromanagement and thus to stress and anxiety. A perception of surveillance may also lead to stress. Controls for these include consultation with worker groups, extensive testing, and attention to introduced bias. Wearable sensors, activity trackers, and augmented reality may also lead to stress from micromanagement, both for assembly line workers and gig workers. Gig workers also lack the legal protections and rights of formal workers.[8]: 2–10 

There is also the risk of people being forced to work at a robot's pace, or to monitor robot performance at nonstandard hours.[8]: 5–7 

Bias

Algorithms trained on past decisions may mimic undesirable human biases, for example, past discriminatory hiring and firing practices. Information asymmetry between management and workers may lead to stress, if workers do not have access to the data or algorithms that are the basis for decision-making.[8]: 3–5 

In addition to building a model with inadvertently discriminatory features, intentional discrimination may occur through designing metrics that covertly result in discrimination through correlated variables in a non-obvious way.[8]: 12–13 
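The mechanism can be made concrete with synthetic data: a screening rule that never references a protected attribute can still produce disparate outcomes when it selects on a correlated proxy variable. The sketch below fabricates such data and computes the selection-rate ratio checked by the "four-fifths rule" used in US employment-discrimination guidance (all names and numbers are invented):

```python
import random

random.seed(0)

# Synthetic applicants: "zone" is a seemingly neutral feature that,
# in this fabricated data, correlates strongly with group membership.
applicants = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    p_zone2 = 0.8 if group == "B" else 0.2
    zone = 2 if random.random() < p_zone2 else 1
    applicants.append({"group": group, "zone": zone})

def screen(applicant):
    """A rule that never mentions the protected attribute,
    but selects on the correlated proxy."""
    return applicant["zone"] == 1

def selection_rate(group):
    pool = [a for a in applicants if a["group"] == group]
    return sum(screen(a) for a in pool) / len(pool)

# Disparate impact ratio; values below 0.8 fail the four-fifths rule.
ratio = selection_rate("B") / selection_rate("A")
```

Here group B is selected at roughly a quarter of group A's rate despite the rule being facially neutral, which is exactly the "correlated variables in a non-obvious way" problem described above.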

In complex human‐machine interactions, some approaches to accident analysis may be biased to safeguard a technological system and its developers by assigning blame to the individual human operator instead.[12]

Physical

A yellow rectangular wheeled forklift robot in a warehouse, with stacks of boxes visible and additional similar robots visible behind it
Automated guided vehicles are examples of cobots currently in common use. Use of AI to operate these robots may affect the risk of physical hazards such as the robot or its moving parts colliding with workers.

Physical hazards in the form of human–robot collisions may arise from robots using AI, especially collaborative robots (cobots). Cobots are intended to operate in close proximity to humans, which rules out the common hazard control of isolating the robot with fences or other barriers, as is widely done for traditional industrial robots. Automated guided vehicles are a type of cobot that as of 2019 were in common use, often as forklifts or pallet jacks in warehouses or factories.[6]: 5, 29–30  For cobots, sensor malfunctions or unexpected work environment conditions can lead to unpredictable robot behavior and thus to human–robot collisions.[8]: 5–7 

Additionally, the ergonomics of control interfaces and human–machine interactions may give rise to hazards.[7]

Hazard controls

AI, in common with other computational technologies, requires cybersecurity measures to stop software breaches and intrusions,[6]: 17  as well as information privacy measures.[5] Communication and transparency with workers about data usage is a control for psychosocial hazards arising from security and privacy issues.[5] Proposed best practices for employer‐sponsored worker monitoring programs include using only validated sensor technologies; ensuring voluntary worker participation; ceasing data collection outside the workplace; disclosing all data uses; and ensuring secure data storage.[12]

For industrial cobots equipped with AI‐enabled sensors, the International Organization for Standardization (ISO) recommended: (a) safety‐related monitored stopping controls; (b) human hand guiding of the cobot; (c) speed and separation monitoring controls; and (d) power and force limitations. Networked AI-enabled cobots may share safety improvements with each other.[12] Human oversight is another general hazard control for AI.[8]: 12–13 
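Speed and separation monitoring can be illustrated with a simplified protective-distance check in the spirit of ISO/TS 15066: the measured human–robot separation must exceed the distance both parties can cover before the robot has fully stopped. This sketch omits the robot's braking distance and the measurement-uncertainty terms of the full standard, and all parameter values are illustrative:

```python
def min_protective_distance(v_human, v_robot, t_react, t_stop, c=0.2):
    """Simplified minimum protective separation (metres): distance the
    human covers during the reaction and stopping times, plus distance
    the robot covers during the reaction time, plus an intrusion
    allowance c.  Omits robot braking distance and uncertainty terms."""
    return v_human * (t_react + t_stop) + v_robot * t_react + c

def must_stop(separation_m, v_human, v_robot, t_react, t_stop):
    """Trigger a protective stop when measured separation falls below
    the minimum protective distance."""
    return separation_m < min_protective_distance(
        v_human, v_robot, t_react, t_stop)
```

With a human approach speed of 1.6 m/s, robot speed 1.0 m/s, 0.1 s sensing latency, and 0.3 s stopping time, the required separation comes to just under a metre; an AI-enabled sensor layer changes how the separation is measured, not the stopping logic itself.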

Risk management

Both applications and hazards arising from AI can be considered as part of existing frameworks for occupational health and safety risk management. As with all hazards, risk identification is most effective and least costly when done in the design phase.[7]

Workplace health surveillance, the collection and analysis of health data on workers, is challenging for AI because labor data are often reported in aggregate, do not provide breakdowns between different types of work, and are focused on economic data such as wages and employment rates rather than the skill content of jobs. Proxies for skill content include educational requirements and classifications of routine versus non-routine, and cognitive versus physical jobs. However, these may still not be specific enough to distinguish specific occupations that have distinct impacts from AI. The United States Department of Labor's Occupational Information Network is an example of a database with a detailed taxonomy of skills. Additionally, data are often reported on a national level, while there is much geographical variation, especially between urban and rural areas.[9]

Standards and regulation

As of 2019, ISO was developing a standard on the use of metrics and dashboards, information displays presenting company metrics for managers, in workplaces. The standard is planned to include guidelines for both gathering data and displaying it in a viewable and useful manner.[8]: 11 [17][18]

In the European Union, AI-enabled workplace equipment falls under existing product safety legislation, including the Machinery Directive (2006/42/EC), the Radio Equipment Directive (2014/53/EU), and the General Product Safety Directive (2001/95/EC).[8]: 10, 12–13 

References

  1. ^ a b https://capitalandopinions.com/impact-of-ai-on-jobs/
  2. ^ .
  3. .
  4. ^ a b Genz, Sabrina (2022-05-05). "The nuanced relationship between cutting-edge technologies and jobs: Evidence from Germany". Brookings. Retrieved 2022-06-05.
  5. ^ a b c d e Gianatti, Toni-Louise (2020-05-14). "How AI-Driven Algorithms Improve an Individual's Ergonomic Safety". Occupational Health & Safety. Retrieved 2020-07-30.
  6. ^ a b c d e f Jansen, Anne; van der Beek, Dolf; Cremers, Anita; Neerincx, Mark; van Middelaar, Johan (2018-08-28). "Emergent risks to workplace safety: working in the same space as a cobot". Netherlands Organisation for Applied Scientific Research (TNO). Retrieved 2020-08-12.
  7. ^ S2CID 115901369.
  8. ^ a b c d e f g h i j k l m Moore, Phoebe V. (2019-05-07). "OSH and the Future of Work: benefits and risks of artificial intelligence tools in workplaces". EU-OSHA. Retrieved 2020-07-30.
  9. ^ PMID 30910965.
  10. ^ Warner, Emily; Hudock, Stephen D.; Lu, Jack (2017-08-25). "NLE Calc: A Mobile Application Based on the Revised NIOSH Lifting Equation". NIOSH Science Blog. Retrieved 2020-08-17.
  11. .
  12. ^ .
  13. ^ Meyers, Alysha R. (2019-05-01). "AI and Workers' Comp". NIOSH Science Blog. Retrieved 2020-08-03.
  14. ^ Webb, Sydney; Siordia, Carlos; Bertke, Stephen; Bartlett, Diana; Reitz, Dan (2020-02-26). "Artificial Intelligence Crowdsourcing Competition for Injury Surveillance". NIOSH Science Blog. Retrieved 2020-08-03.
  15. ^ Ferguson, Murray (2016-04-19). "Artificial Intelligence: What's To Come for EHS… And When?". EHS Today. Retrieved 2020-07-30.
  16. ^ Brun, Emmanuelle; Milczarek, Malgorzata (2007). "Expert forecast on emerging psychosocial risks related to occupational safety and health". European Agency for Safety and Health at Work. Retrieved September 3, 2015.
  17. ^ Moore, Phoebe V. (2014-04-01). "Questioning occupational safety and health in the age of AI". Kommission Arbeitsschutz und Normung. Retrieved 2020-08-06.
  18. ^ "Standards by ISO/IEC JTC 1/SC 42 - Artificial intelligence". International Organization for Standardization. Retrieved 2020-08-06.