Automated decision-making

Automated decision-making (ADM) involves the use of data, machines and algorithms to make decisions in a range of contexts, drawing on technologies such as computer software, machine learning, artificial intelligence, augmented intelligence and robotics. The increasing use of automated decision-making systems (ADMS) across a range of contexts presents many benefits and challenges to human society, requiring consideration of the technical, legal, ethical, societal, educational, economic and health consequences.[1][2][3]

Overview

There are different definitions of ADM based on the level of automation involved. Some definitions suggest ADM involves decisions made through purely technological means without human input, while others allow for varying degrees of human oversight. The technologies involved range from simple rule-based processing to advanced machine learning models such as deep neural networks (DNN).

Since the 1950s, computers have gone from performing basic processing to undertaking complex, ambiguous and highly skilled tasks such as image and speech recognition, gameplay, scientific and medical analysis, and inference across multiple data sources. ADM is now increasingly deployed across all sectors of society, in domains ranging from entertainment to transport.

An ADM system (ADMS) may involve multiple decision points, data sets, and technologies (ADMT) and may sit within a larger administrative or technical system such as a criminal justice system or business process.

Data

Automated decision-making involves using data as input to be analyzed within a process, model, or algorithm or for learning and generating new models.[7] ADM systems may use and connect a wide range of data types and sources depending on the goals and contexts of the system, for example, sensor data for self-driving cars and robotics, identity data for security systems, demographic and financial data for public administration, medical records in health, criminal records in law. This can sometimes involve vast amounts of data and computing power.

Data quality

The quality of the available data and its suitability for use in ADM systems is fundamental to the outcomes, and is often highly problematic. Datasets are frequently variable in quality: large-scale data may be controlled by corporations or governments, restricted for privacy or security reasons, incomplete, biased, or limited in temporal or geographical coverage, and may measure and describe terms in inconsistent ways, among many other issues.

For machines to learn from data, large corpora are often required, which can be challenging to obtain or compute; however, where available, they have provided significant breakthroughs, for example, in diagnosing chest X-rays.[8]

ADM technologies

Automated decision-making technologies (ADMT) are software-coded digital tools that automate the translation of input data to output data, contributing to the function of automated decision-making systems.[7] There are a wide range of technologies in use across ADM applications and systems.

ADMTs involving basic computational operations

  • Search (including one-to-one and one-to-many lookups, and data matching/merging)
  • Matching (comparing two different items)
  • Mathematical calculation (applying a formula)
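
As a concrete illustration of these basic operations, the following minimal Python sketch combines a one-to-one match with a rule-based calculation; the records, field names and threshold are all invented for illustration:

```python
# A minimal sketch of basic ADMT operations: search/matching by key,
# then a mathematical calculation applying a fixed rule. All data,
# field names and the 0.1 threshold are hypothetical.

records = [
    {"id": 1, "name": "A. Smith", "income": 42_000},
    {"id": 2, "name": "B. Jones", "income": 58_000},
]
claims = [{"claim_id": 10, "person_id": 2, "amount": 7_200}]

# Search / one-to-one matching: join each claim to its person record.
by_id = {r["id"]: r for r in records}
matched = [(c, by_id.get(c["person_id"])) for c in claims]

# Mathematical calculation: apply a formula to each matched pair.
for claim, person in matched:
    if person is not None:
        ratio = claim["amount"] / person["income"]
        claim["flag"] = ratio > 0.1  # hypothetical rule threshold
        print(claim)
```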

ADMTs for assessment and grouping

ADMTs relating to space and flows

ADMTs for processing of complex data formats

Other ADMTs

Machine learning

Machine learning (ML) involves training computer programs through exposure to large data sets and examples to learn from experience and solve problems.[2] Machine learning can be used to generate and analyse data as well as make algorithmic calculations and has been applied to image and speech recognition, translations, text, data and simulations. While machine learning has been around for some time, it is becoming increasingly powerful due to recent breakthroughs in training deep neural networks (DNNs), and dramatic increases in data storage capacity and computational power with GPU coprocessors and cloud computing.[2]
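
As a minimal sketch of this training process (the dataset and model choice here are illustrative, not drawn from the cited sources), a small neural network can be trained on labelled examples and then used to classify unseen inputs:

```python
# A minimal sketch of machine learning for automated decision-making:
# train a small neural network on labelled examples, then use it to
# classify new inputs. Dataset and model choice are illustrative.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 8x8 pixel images of digits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# "Training" = exposing the model to many labelled examples.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                      random_state=0)
model.fit(X_train, y_train)

# The trained model now classifies unseen inputs automatically.
print("held-out accuracy:", model.score(X_test, y_test))
```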

Machine learning systems based on foundation models run on deep neural networks and use pattern matching to train a single, very large system on vast amounts of general data such as text and images. Early models tended to start from scratch for each new problem; since the early 2020s, however, many can be adapted to new problems.[9] Examples of these technologies include OpenAI's DALL-E (an image creation program) and its various GPT language models, and Google's PaLM language model.

Applications

ADM is being used to replace or augment human decision-making by both public and private-sector organisations for a range of reasons including to help increase consistency, improve efficiency, reduce costs and enable new solutions to complex problems.[10]

Debate

Research and development are underway into uses of technology to assess argument quality,[11][12][13] assess argumentative essays[14][15] and judge debates.[16][17][18][19] Potential applications of these argument technologies span education and society, including the assessment and evaluation of conversational, mathematical, scientific, interpretive, legal and political argumentation and debate.

Law

In legal systems around the world, algorithmic tools such as risk assessment instruments (RAI) are being used to supplement or replace the human judgment of judges, civil servants and police officers in many contexts.[20] In the United States, RAI are being used to generate scores to predict the risk of recidivism in pre-trial detention and sentencing decisions,[21] to evaluate parole for prisoners, and to predict "hot spots" for future crime.[22][23][24] These scores may result in automatic effects or may be used to inform decisions made by officials within the justice system.[20] In Canada, ADM has been used since 2014 to automate certain activities conducted by immigration officials and to support the evaluation of some immigrant and visitor applications.[25]
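
The statistical core of such a scoring instrument can be illustrated with a minimal sketch; the features, labels and values below are invented, and real instruments use far more extensive and validated inputs:

```python
# A hypothetical sketch of a risk-scoring model of the kind used in
# risk assessment instruments (RAI). All features, labels and values
# are invented; real instruments differ in design and validation.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features per case: [prior_offences, age_at_first_offence, employed]
X = np.array([[0, 25, 1], [3, 17, 0], [1, 30, 1],
              [5, 16, 0], [0, 40, 1], [2, 19, 0]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = reoffended (toy labels)

model = LogisticRegression().fit(X, y)

# The probability becomes a score informing (or making) a decision.
new_case = np.array([[1, 22, 0]])
print("risk score:", model.predict_proba(new_case)[0, 1])
```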

Economics

Automated decision-making systems are used in certain computer programs to create buy and sell orders for specific financial transactions and to automatically submit those orders to international markets. Computer programs can generate orders based on a predefined set of rules, using trading strategies based on technical analysis, advanced statistical and mathematical computations, or inputs from other electronic sources.
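
A minimal sketch of such rule-based order generation, assuming an invented price series and a simple moving-average crossover rule (one common technical-analysis strategy):

```python
# A minimal sketch of rule-based order generation from technical
# analysis: a moving-average crossover. Prices are invented; real
# systems add risk checks, order routing and live market-data feeds.
prices = [100, 101, 103, 102, 105, 107, 106, 104, 101, 99, 98, 100]

def sma(series, n):
    """Simple moving average of the last n points, or None if too few."""
    return sum(series[-n:]) / n if len(series) >= n else None

position = 0  # shares held
for t in range(len(prices)):
    window = prices[: t + 1]
    fast, slow = sma(window, 3), sma(window, 5)
    if fast is None or slow is None:
        continue
    if fast > slow and position == 0:
        position = 1
        print(f"t={t}: BUY at {prices[t]}")   # predefined rule fired
    elif fast < slow and position == 1:
        position = 0
        print(f"t={t}: SELL at {prices[t]}")
```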

Business

Continuous auditing

Continuous auditing uses advanced analytical tools to automate auditing processes. It can be utilized in the private sector by business enterprises and in the public sector by governmental organizations and municipalities.[26] As artificial intelligence and machine learning continue to advance, accountants and auditors may make use of increasingly sophisticated algorithms which make decisions such as those involving determining what is anomalous, whether to notify personnel, and how to prioritize those tasks assigned to personnel.
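
As an illustration, the anomaly-flagging step at the heart of such a system might look like the following minimal sketch, with invented transaction amounts and a hypothetical review threshold:

```python
# A minimal sketch of anomaly flagging in continuous auditing: flag
# transactions far from the historical mean and queue them for review.
# Amounts and the z-score threshold are illustrative assumptions.
import statistics

amounts = [120.0, 95.5, 130.2, 110.0, 99.9, 4_500.0, 101.3, 125.8]
mean = statistics.mean(amounts)
stdev = statistics.stdev(amounts)

for i, amount in enumerate(amounts):
    z = (amount - mean) / stdev
    if abs(z) > 2:  # hypothetical review threshold
        # Decision: notify personnel and prioritise this item.
        print(f"transaction {i}: amount {amount} flagged (z={z:.1f})")
```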

Media and entertainment

Digital media, entertainment platforms, and information services increasingly provide content to audiences via automated recommender systems based on demographic information, previous selections, collaborative filtering or content-based filtering.[27] This includes music and video platforms, publishing, health information, product databases and search engines. Many recommender systems also provide some agency to users in accepting recommendations and incorporate data-driven algorithmic feedback loops based on the actions of the system user.[6]
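
A minimal sketch of content-based filtering, one of the techniques named above; the item names and feature vectors are invented for illustration:

```python
# A minimal sketch of content-based filtering: recommend the item most
# similar (by cosine similarity of feature vectors) to one the user
# previously selected. Items and features here are hypothetical.
import math

items = {                      # features: [action, comedy, drama]
    "film_a": [0.9, 0.1, 0.2],
    "film_b": [0.8, 0.2, 0.1],
    "film_c": [0.1, 0.9, 0.3],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

liked = "film_a"  # the user's previous selection
scores = {name: cosine(items[liked], vec)
          for name, vec in items.items() if name != liked}
print("recommend:", max(scores, key=scores.get))
```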

Large-scale machine learning language models and image creation programs developed by companies such as OpenAI and Google in the 2020s have restricted access; however, they are likely to have widespread application in fields such as advertising, copywriting, stock imagery and graphic design, as well as in journalism and law.[9]

Advertising

Online advertising is closely integrated with many digital media platforms, websites and search engines and often involves automated delivery of display advertisements in diverse formats. 'Programmatic' online advertising involves automating the sale and delivery of digital advertising on websites and platforms via software rather than direct human decision-making.[27] This is sometimes known as the waterfall model, which involves a sequence of steps across various systems and players: publishers and data management platforms, user data, ad servers and their delivery data, inventory management systems, ad traders and ad exchanges.[27] There are various issues with this system, including lack of transparency for advertisers, unverifiable metrics, lack of control over ad venues, audience tracking and privacy concerns.[27] Internet users who dislike ads have adopted countermeasures such as ad-blocking technologies, which allow users to automatically filter unwanted advertising from websites and some internet applications. In 2017, 24% of Australian internet users had ad blockers.[28]
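
The waterfall model described above can be sketched as a simple priority sequence; the demand sources, bids and floor price below are hypothetical:

```python
# A minimal sketch of 'waterfall' ad delivery: demand sources are
# tried in a fixed priority order and the first bid at or above the
# publisher's floor price wins. All names and prices are invented.
ad_sources = [                     # (source, bid in currency units)
    ("direct_deal", 2.50),
    ("ad_network_1", 1.10),
    ("ad_exchange", 0.80),
]

def serve_impression(floor_price):
    for source, bid in ad_sources:     # fixed sequence, no auction
        if bid >= floor_price:
            return source, bid         # first acceptable bid wins
    return "house_ad", 0.0             # fallback when nothing clears

print(serve_impression(floor_price=1.50))  # -> ('direct_deal', 2.5)
```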

Health

Deep learning AI image models are being used to review X-rays and to detect eye conditions such as macular degeneration.

Social services

Governments have been implementing digital technologies to provide more efficient administration and social services since the early 2000s, often referred to as e-government. Many governments around the world now use automated, algorithmic systems for profiling and targeting policies and services, including algorithmic policing based on risks, surveillance sorting of people such as airport screening, providing services based on risk profiles in child protection, providing employment services and governing the unemployed.[29]

A significant application of ADM in social services is the use of predictive analytics: for example, predictions of risk to children from abuse or neglect in child protection, predictions of recidivism or crime in policing and criminal justice, predictions of welfare or tax fraud in compliance systems, and predictions of long-term unemployment in employment services. Historically these systems were based on standard statistical analyses; however, from the early 2000s machine learning has increasingly been developed and deployed.

Key issues with the use of ADM in social services include bias, fairness, accountability and explainability, which refers to transparency around the reasons for a decision and the ability to explain the basis on which a machine made a decision.[29] For example, Australia's federal social security delivery agency, Centrelink, developed and implemented an automated process for detecting and collecting debt which led to many cases of wrongful debt collection, in what became known as the RoboDebt scheme.[30]

Transport and mobility

Connected and automated mobility (CAM) involves autonomous vehicles such as self-driving cars and other forms of transport which use automated decision-making systems to replace various aspects of human control of the vehicle.[31] This can range from level 0 (complete human driving) to level 5 (completely autonomous).[2] At level 5 the machine is able to make decisions to control the vehicle based on data models, geospatial mapping and real-time sensing and processing of the environment. Cars at levels 1 to 3 were already available on the market in 2021.

In 2016 the German government established an 'Ethics Commission on Automated and Connected Driving', which recommended that connected and automated vehicles (CAVs) be developed if the systems cause fewer accidents than human drivers (positive balance of risk). It also provided 20 ethical rules for the adaptation of automated and connected driving.[32] In 2020 the European Commission's strategy on CAM recommended adoption in Europe to reduce road fatalities and lower emissions; however, self-driving cars also raise many policy, security and legal issues in terms of liability and ethical decision-making in the case of accidents, as well as privacy issues.[31] Issues of trust in autonomous vehicles and community concerns about their safety are key factors to be addressed if AVs are to be widely adopted.[33]
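
The automation levels referred to above can be summarised in a compact sketch; the level names follow the SAE convention, and the helper function is purely illustrative:

```python
# A compact sketch of the SAE-style driving automation levels, from
# level 0 (no automation) to level 5 (full automation). The helper
# function below is an illustrative simplification.
from enum import IntEnum

class DrivingAutomation(IntEnum):
    NO_AUTOMATION = 0           # complete human driving
    DRIVER_ASSISTANCE = 1       # e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2      # combined steering and speed control
    CONDITIONAL_AUTOMATION = 3  # system drives, human must take over
    HIGH_AUTOMATION = 4         # no human needed in a defined domain
    FULL_AUTOMATION = 5         # completely autonomous

def human_driver_required(level: DrivingAutomation) -> bool:
    """Levels 0-3 still rely on a human who can take control."""
    return level <= DrivingAutomation.CONDITIONAL_AUTOMATION

print(human_driver_required(DrivingAutomation.PARTIAL_AUTOMATION))  # True
```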

Surveillance

Automated digital data collection via sensors, cameras, online transactions and social media has significantly expanded the scope, scale and goals of surveillance practices and institutions in government and commercial sectors.[34] As a result, there has been a major shift from the targeted monitoring of suspects to the ability to monitor entire populations.[35] The level of surveillance now possible as a result of automated data collection has been described as surveillance capitalism or the surveillance economy, indicating the way digital media involve large-scale tracking and accumulation of data on every interaction.

Ethical and legal issues

There are many social, ethical and legal implications of automated decision-making systems. Concerns raised include lack of transparency and contestability of decisions, incursions on privacy and surveillance, exacerbating systemic bias and inequality due to data and algorithmic bias, intellectual property rights, the spread of misinformation via media platforms, administrative discrimination, risk and responsibility, unemployment and many others.[36][37] As ADM becomes more ubiquitous there is greater need to address the ethical challenges to ensure good governance in information societies.[38]

ADM systems are often based on machine learning models and algorithms that cannot easily be viewed or analysed, leading to concerns that they are 'black box' systems which are not transparent or accountable.[2]

A report from Citizen Lab in Canada argues for a critical human rights analysis of the application of ADM in various areas, to ensure that the use of automated decision-making does not result in infringements of rights, including the rights to equality and non-discrimination; freedom of movement, expression, religion and association; privacy rights; and the rights to life, liberty and security of the person.[25]

Legislative responses to ADM include the European Union's General Data Protection Regulation (GDPR), whose Article 22 addresses decisions based solely on automated processing.[39]

Bias

ADM may incorporate algorithmic bias arising from:

  • Data sources, where data inputs are biased in their collection or selection[37]
  • Technical design of the algorithm, for example where assumptions have been made about how a person will behave[44]
  • Emergent bias, where the application of ADM in unanticipated circumstances creates a biased outcome[44]

Explainability

Questions of biased or incorrect data or algorithms, and concerns that some ADM systems are black box technologies closed to human scrutiny or interrogation, have led to what is referred to as the issue of explainability, or the right to an explanation of automated decisions and AI. This is also known as Explainable AI (XAI), or Interpretable AI, in which the results of the solution can be analysed and understood by humans. XAI algorithms are considered to follow three principles: transparency, interpretability and explainability.
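
One common explainability technique, permutation importance, can be sketched as follows; the dataset and model are illustrative stand-ins, not tied to any particular ADM system:

```python
# A minimal sketch of one explainability (XAI) technique: permutation
# importance, which measures how much shuffling each input feature
# degrades a model's accuracy. Data and model here are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Which features most influenced the automated decisions?
result = permutation_importance(model, X_te, y_te, n_repeats=5,
                                random_state=0)
top = result.importances_mean.argsort()[::-1][:3]
print("most influential feature indices:", top)
```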

Information asymmetry

Automated decision-making may increase the information asymmetry between individuals whose data feeds into the system and the platforms and decision-making systems capable of inferring information from that data. On the other hand, it has been observed that in financial trading the information asymmetry between two artificial intelligent agents may be much smaller than between two human agents or between human and machine agents.[45]

Research fields

Many academic disciplines and fields are increasingly turning their attention to the development, application and implications of ADM, including business, computer science, human–computer interaction (HCI), law, public administration, and media and communications. The automation of media content and algorithmically driven news, video and other content via search systems and platforms is a major focus of academic research in media studies.[27]

The ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) was established in 2018 to study transparency and explainability in the context of socio-technical systems, many of which include ADM and AI.

Key research centres investigating ADM include:

References

  1. . Retrieved November 1, 2022.
  2. ^ .
  3. .
  4. ^ UK Information Commissioner's Office (2021-09-24). Guide to the UK General Data Protection Regulation (UK GDPR) (Report). Information Commissioner's Office UK. Archived from the original on 2018-12-21. Retrieved 2021-10-05.
  5. S2CID 73490120.
  6. ^ .
  7. ^ a b Algorithm Watch (2020). Automating Society 2019. Algorithm Watch (Report). Retrieved 2022-02-28.
  8. S2CID 235735320.
  9. ^ a b Snoswell, Aaron J.; Hunter, Dan (13 April 2022). "Robots are creating images and telling jokes. 5 things to know about foundation models and the next generation of AI". The Conversation. Retrieved 2022-04-21.
  10. S2CID 52075037.
  11. ^ Wachsmuth, Henning; Naderi, Nona; Hou, Yufang; Bilu, Yonatan; Prabhakaran, Vinodkumar; Thijm, Tim; Hirst, Graeme; Stein, Benno (2017). "Computational argumentation quality assessment in natural language" (PDF). Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. pp. 176–187.
  12. ^ Wachsmuth, Henning; Naderi, Nona; Habernal, Ivan; Hou, Yufang; Hirst, Graeme; Gurevych, Iryna; Stein, Benno (2017). "Argumentation quality assessment: Theory vs. practice" (PDF). Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. pp. 250–255.
  13. ^ Gretz, Shai; Friedman, Roni; Cohen-Karlik, Edo; Toledo, Assaf; Lahav, Dan; Aharonov, Ranit; Slonim, Noam (2020). "A large-scale dataset for argument quality ranking: Construction and analysis". Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. pp. 7805–7813.
  14. .
  15. ^ Persing, Isaac; Ng, Vincent (2015). "Modeling argument strength in student essays" (PDF). Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. pp. 543–552.
  16. .
  17. ^ Potash, Peter; Rumshisky, Anna (2017). "Towards debate automation: a recurrent model for predicting debate winners" (PDF). Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. pp. 2465–2475.
  18. .
  19. .
  20. ^ a b Chohlas-Wood, Alex (2020). Understanding risk assessment instruments in criminal justice. Brookings Institution.
  21. ^ Angwin, Julia; Larson, Jeff; Mattu, Surya (23 May 2016). "Machine Bias". ProPublica. Archived from the original on 2021-10-04. Retrieved 2021-10-04.
  22. S2CID 21115049.
  23. .
  24. .
  25. ^ a b Molnar, Petra; Gill, Lex (2018). Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada's Immigration and Refugee System. Citizen Lab and International Human Rights Program (Faculty of Law, University of Toronto).
  26. .
  27. ^ . Q110607881.
  28. ^ Newman, N; Fletcher, R; Kalogeropoulos, A (2017). Reuters Institute Digital News Report (Report). Reuters Institute for the Study of Journalism. Archived from the original on 2013-08-17. Retrieved 2022-01-19.
  29. ^ S2CID 158229201.
  30. .
  31. ^ .
  32. ^ Federal Ministry of Transport and Digital Infrastructures. Ethics Commission's complete report on automated and connected driving. www.bmvi.de (Report). German Government. Archived from the original on 2017-09-04. Retrieved 2021-11-23.
  33. S2CID 225261480.
  34. .
  35. .
  36. OCLC 1013516195.
  37. ^
  38. .
  39. ^ "EUR-Lex - 32016R0679 - EN - EUR-Lex". eur-lex.europa.eu. Retrieved 2021-09-13.
  40. S2CID 23933541.
  41. ^ Court of Justice of the European Union. "Request for a preliminary ruling from the Verwaltungsgericht Wien (Austria) lodged on 16 March 2022 – CK (Case C-203/22)".
  42. ^ S2CID 4049746.
  43. .
  44. ^ .
  45. OCLC 1004620876.