Talk:Artificial intelligence/Archive 13


AI systems are heuristics, not algorithms

It should be noted that AI systems are not algorithms with known results; they are heuristics that approximate a solution. AI is used when cases in which a complete analysis can be done are rare, when the input space is large and the decisions are hard to make. A neural network or other method approximates the solution, but that solution is approximate because it does not cover all use cases. AI should be treated as a heuristic that gets one closer to the solution but not all the way there. It should not be used to drive cars, in hiring, or in healthcare; those fields are too critical for approximations.

This was posted by 198.103.184.76 North8000 (talk) 15:39, 20 January 2022 (UTC)
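
To make the post's distinction concrete, here is a minimal sketch (my own illustration, not the poster's; the travelling-salesman setup is just an example) contrasting an exact algorithm with a heuristic: exhaustive search guarantees the optimal tour but scales as O(n!), while the greedy nearest-neighbour pass is fast but only approximate.

    from itertools import permutations
    import math

    def tour_length(points, order):
        # Total length of the closed tour visiting points in the given order.
        return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    def exact_tour(points):
        # Algorithm: exhaustive search, guaranteed optimal, O(n!) time.
        return min(permutations(range(len(points))),
                   key=lambda order: tour_length(points, order))

    def greedy_tour(points):
        # Heuristic: always visit the nearest unvisited point; fast but approximate.
        unvisited, tour = set(range(1, len(points))), [0]
        while unvisited:
            nxt = min(unvisited, key=lambda j: math.dist(points[tour[-1]], points[j]))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    pts = [(0, 0), (2, 1), (1, 3), (4, 0)]
    print(exact_tour(pts))   # optimal order; feasible only for tiny n
    print(greedy_tour(pts))  # fast approximation, no optimality guarantee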

"Natural stupidity" listed at Redirects for discussion

An editor has identified a potential problem with the redirect Natural stupidity and has thus listed it for discussion. This discussion will occur at Wikipedia:Redirects for discussion/Log/2022 January 27#Natural stupidity until a consensus is reached, and readers of this page are welcome to contribute to the discussion. signed, Rosguill talk 20:40, 27 January 2022 (UTC)

So, fuzzy logic is the same as artificial intelligence?

Over at Fuzzy logic#Artificial intelligence, it (currently) says:

AI and fuzzy logic, when analyzed, are the same thing — the underlying logic of neural networks is fuzzy.

Maybe somebody here can improve that section of Fuzzy logic. --R. S. Shaw (talk) 04:21, 16 February 2022 (UTC)
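
For anyone weighing that claim, here is a minimal sketch (my own illustration; the functions and parameters are arbitrary assumptions) of the surface similarity presumably behind it: a fuzzy membership function and a sigmoid neuron activation both map an input to a graded degree in [0, 1]. Whether that makes the two "the same thing" is exactly what the linked section should clarify.

    import math

    def fuzzy_membership_tall(height_cm, lo=160.0, hi=190.0):
        # Fuzzy degree of membership in the set "tall": 0 below lo, 1 above hi.
        return min(1.0, max(0.0, (height_cm - lo) / (hi - lo)))

    def sigmoid_activation(x, w=0.2, b=-35.0):
        # A single neuron's output: likewise a graded value, here in (0, 1).
        return 1.0 / (1.0 + math.exp(-(w * x + b)))

    print(fuzzy_membership_tall(175.0))          # 0.5 -- "somewhat tall"
    print(round(sigmoid_activation(175.0), 2))   # 0.5 with these weights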

This is a very long article that I really like. Thanks to whoever created this article about AI. Note: There is only AI that controls self-driving cars, like a Tesla. I wonder when AI will control everything. Antiesten (talk) 23:42, 22 March 2022 (UTC)

Where did it go? (On the big copy-edit in the fall of 2021)

This summer and fall, I have copy-edited the entire article for brevity (as well as better organization, citation format, and a non-technical tone), moving cut material into more specific articles such as existential risk of AI or machine learning and so on. I've documented exactly where everything I cut has been moved to, and indicated the things I couldn't find a place for (or were otherwise unusable). You can see exactly where this material went here: Talk:Artificial intelligence/Where did it go? 2021. ---- CharlesGillingham (talk) 00:52, 14 October 2021 (UTC)

Thanks for your hard work. I think many of these topics are related to AI only remotely. AXONOV (talk) 18:51, 16 October 2021 (UTC)

Semi-protected edit request on 29 April 2022

Hey, I found some extra information I would like to add. 12.96.155.31 (talk) 16:58, 29 April 2022 (UTC)

 Not done: it's not clear what changes you want to be made. Please mention the specific changes in a "change X to Y" format and provide a reliable source if appropriate. Cannolis (talk) 17:16, 29 April 2022 (UTC)
You should consider this a request to remove semi-protect. Rklawton (talk) 02:06, 16 May 2022 (UTC)

Google engineers Blaise Agüera y Arcas's and Blake Lemoine's claims about the Google LaMDA chatbot

Not sure if this will gain any traction or get more widespread attention. I believe this Washington Post article and this Economist article are the first mainstream discussions of it. Not saying I personally give it any credibility, but it is interesting. If this shows up in any more publications might it be fit for inclusion, or is this just WP:RECENTISM trivia? —DIYeditor (talk) 21:50, 11 June 2022 (UTC)

This got more coverage on the 12th. I guess this would also be relevant to Turing test if it proves enduring. —DIYeditor (talk) 03:27, 13 June 2022 (UTC)

CharlesTGillingham (talk) 00:47, 12 July 2022 (UTC)

Copyedit

Added a comma to sentence:

'Philosopher Nick Bostrom argues that sufficiently intelligent AI if it chooses actions based on achieving some goal, will exhibit convergent behavior such as acquiring resources or protecting itself from being shut down.'

between "AI" and "if" to improve flow and grammar. Please correct if mistaken, thank you! King keudo (talk) 20:51, 21 September 2022 (UTC)

 Resolved by somebody since this request was made. -- (talk) 09:55, 17 October 2022 (UTC)

Russell definition of AI excludes major fields: CV, transcription, etc.

This article has once again been rewritten by someone to narrowly define AI as: only autonomous agents are AI. This is based on the Russell definition, which is highly controversial, if not almost generally rejected. This article has repeatedly been sabotaged by ABM, robotics, killer drones, etc. advocates to narrowly define AI as interactive agents, thereby excluding some of the major key fields of actual AI such as computer vision, speech recognition/transcription, and machine translation.

The trick being used to misdefine AI seems to confuse AI with AI systems/AI-based systems/etc.: the former synthesizes information; the latter includes an AI component, but also wrongly includes purely procedural steps that have no intelligence to them. A typical misdefinition seems to go like:

AI is difficult to define, but AI-based systems are things that interact with their environment

Google gives this:

the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

The explanation of their use of Oxford for all definitions is given here: https://support.google.com/websearch/answer/10106608?hl=en

This article needs a definition that recognizes these major fields like CV and speech recognition as being AI (i.e. not being apart from AI). The productive way is probably to state early on that AI is often encountered in everyday life as part of larger AI-based systems, which can also include procedural components. Bquast (talk) 18:19, 23 October 2022 (UTC)

Further research: the Russell and Norvig "definition" in fact does the same trick as I mentioned above. It states that it is difficult to define AI and proceeds to say that it is easier to work with a definition of AI-based systems/agents, etc.
This is NOT a definition of AI, and the definition of another concept should not be used here. This Wikipedia article should define the exact concept of AI, not how AI is used. The "engine" article also doesn't describe how political scientists and lawyers should think about cars.
Similarly, we don't say: human intelligence is hard to define, but a human is a bag of blood and bones that responds to inputs in various ways. Or something along those lines.
Proposed next steps:
  1. Add a link at the top to "Autonomous agents" article
  2. See what references the Oxford dictionary has for its definition
  3. See a definition from the Dartmouth conference on AI
  4. Replace the Russell and Norvig AI-based agents definition with the Oxford definition of Artificial Intelligence, including in brackets a link to autonomous agents
  5. Revise subsequent paragraphs accordingly where needed
Bquast (talk) 03:35, 27 October 2022 (UTC)

Add a figure with a framework for artificial intelligence in enterprise applications

Being a scientific researcher, I am new to editing Wikipedia. Can you help me, please? I propose to add a unified framework for "Artificial Intelligence in Enterprise Applications". The framework has recently been published in a peer-reviewed, high-quality scientific journal (Scimago Q1), refer to https://www.sciencedirect.com/science/article/pii/S0923474822000467. I am the author of that article and declare a conflict of interest as I am related to the AI article on Wikipedia as a researcher. Specifically, I wanted to contribute my framework's visualization/figure (refer to Figure 6 at the end of the journal article) and an explanatory paragraph for the following reasons: 1) To add further clarity to the current Wikipedia article by depicting the interrelationships of various AI subfields in a visualization/graphic form, and 2) in the proposed explanatory paragraph include cross-links for these subfields to their corresponding areas on Wikipedia. The framework does not contradict anything in the existing Wikipedia article. I published my research article as Open Access and have approval from the publisher to contribute my framework to Wikipedia. Kind regards, Heinzhausw (talk) 06:13, 31 October 2022 (UTC)

Copyedit

Under Tools, the first line contains a misplaced modifier. "Many problems in AI can be solved theoretically by intelligently searching through many possible solutions..." The line should probably read: "AI can solve many problems theoretically by intelligently searching through many possible solutions..." LBirdy (talk) 16:02, 5 November 2022 (UTC)

Thank you. I've just changed this to "AI can solve many problems by intelligently searching through many possible solutions." Elspea756 (talk) 16:57, 5 November 2022 (UTC)

Implement comment: too many sections, remove intelligent agent section

There has long been an inline comment saying that some sections should be removed; there are too many in this article.

I suggest removing the discussion of intelligent agents. It is highly confusing (not least because this article was not very accurate about this before), and it does not belong here; there is already an article on intelligent agent. Bquast (talk) 01:41, 17 November 2022 (UTC)

Re: definition of AI

@Bquast: I'm fine with reframing the definition without the term "intelligent agents". This term's popularity peaked back around 2000 or so. A good reworking might even make the underlying philosophical points more clear.

I would be fine with McCarthy's definition, i.e. "Intelligence is the computational part of the ability to achieve goals in the world." (You mentioned above that you would be okay with the definition of AI proposed at The Dartmouth Conference, but I don't believe they made a formal definition -- I assume you had in mind McCarthy's understanding of the term.)

There are several essential elements to the academic definition of AI (as opposed to definitions from popular sources, or dictionaries):

  1. It must be in terms of behavior; it's something it does, not something it is. (That was Turing's main point.)
  2. It must not be in terms of human intelligence. (People like McCarthy have vociferously argued against this.)
  3. It must be in terms of goal-directed behavior -- what economists call "rationality". In other words, in terms of well-defined problems with well-defined solutions.

R & N's chapter 2 definition uses a four-way categorization: "Thinking humanly", "Acting humanly", "Thinking rationally", "Acting rationally". This is a good way to frame these issues. Two orthogonal dimensions: thinking vs. acting, human-like vs. goal-directed. ----

CharlesTGillingham (talk) 06:28, 26 November 2022 (UTC)


Oh, and one last thing, which often needs to be said on this page:

There are many, many contradictory sources on AI, whole communities of thinkers who have their own understanding of AI, and many thousands of individual writers who have tried their hand at defining it or re-defining it. The article relies heavily on Russell & Norvig's textbook, in many places, because it is by far the most popular textbook, used in literally thousands of introductory AI courses for almost thirty years now. From Wikipedia's point of view, R & N is the most reliable source we could cite on the topic.

And a parenthetical comment:

By the way, R & N defines "agent" as: "something that perceives and acts", i.e. "something with inputs and outputs". Autonomy or persistence is not a part of their discussion. Any program, any program at all, fits their definition of an "agent". ----

CharlesTGillingham (talk) 06:28, 26 November 2022 (UTC)
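
To make the permissiveness of that definition concrete, here is a minimal sketch (my own code, not R & N's; the thermostat is just an assumed example) of an agent as anything that maps percepts (inputs) to actions (outputs). Under this reading, even a trivial thermostat program qualifies, which is the point being made above.

    class Agent:
        # R & N's sense of "agent": anything that perceives and acts,
        # i.e. anything with inputs and outputs.
        def act(self, percept):
            raise NotImplementedError

    class Thermostat(Agent):
        # A trivial program with an input and an output still counts.
        def __init__(self, setpoint):
            self.setpoint = setpoint

        def act(self, percept):
            # percept: the current temperature reading
            return "heat on" if percept < self.setpoint else "heat off"

    print(Thermostat(20.0).act(18.0))  # -> "heat on"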

Semi-protected edit request on 13 November 2022

I'm requesting to add a section under "risks" of artificial intelligence

Gender Bias in Artificial Intelligence: As artificial intelligence continues to evolve and learn, it’s important to address the fact that the field of AI is extremely male dominated and how that impacts the way AI is learning language and values. In an article written by Susan Leavy from University College Dublin, she talks about the existence of the language used when referencing male and female roles. For example: the term “man-kind” and “man” referring to all of humanity, work roles such as firefighters being seen as a male role, and the words used to describe family such as how a father would be seen as a “family man” and that women don’t have an equal term. If these societal norms aren’t challenged throughout the advancement of AI, then the small ways that language differs between genders will be embedded into the AI’s memory and further reinforce gender inequality for future generations.

Leavy, Susan. “Gender Bias in Artificial Intelligence: Proceedings of the 1st International Workshop on Gender Equality in Software Engineering.” ACM Digital Library, 28 May 2018, https://dl.acm.org/doi/pdf/10.1145/3195570.3195580. Kawahsaki (talk) 19:57, 13 November 2022 (UTC)

 Not done: Hello Kawahsaki, and welcome to Wikipedia! I'm afraid I have to decline to perform this request for a couple of reasons.
When creating edit requests, one of the conditions for a request being successful is that it be uncontroversial. Gender bias as a topic in whole is certainly controversial in the world today, and so the creation of an entire section based on such a topic would be out of scope here.
Additionally, I have concerns regarding the prose you've written. Wikipedia strives to maintain its standing as a tertiary source. Some of your prose seems to fall below this guideline. An example is the phrase "it is important to address the fact". Wikipedia may state that a source believes something is important, but Wikipedia would not say something like this in its own voice.
Now, this page is currently under what we call semi-protection. This means that only editors with accounts that are 3 days old and have a total of 10 edits may edit the page. If you make 9 more edits anywhere on Wikipedia (and there are plenty of eligible pages), and wait until November 16th, you'll be able to edit this page directly.
Feel free to drop by the Teahouse, which is a venue that specializes in answering questions from new editors.
Cheers, and happy editing! —Sirdog (talk) 04:26, 14 November 2022 (UTC)
I think you could add your contribution to the main article on this topic. CharlesTGillingham (talk) 06:40, 28 November 2022 (UTC)

Why the Oxford Dictionary definition is inadequate

The article currently quotes the Oxford dictionary to define AI: "the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages."

This definition is rejected by the leading AI textbook (see Chapter 2, Artificial Intelligence: A Modern Approach) and by AI founder John McCarthy, who coined the term "artificial intelligence" (see the multiple citations in the article; just search for his name).

A brief introduction to the problems with this definition:

The problem is this phrase: "tasks that normally require human intelligence". Consider these two lists:

Tasks that require considerable human intelligence:

  • Multiplying large numbers.
  • Memorizing long lists of information.
  • Doing high school algebra.
  • Solving a linear differential equation.
  • Playing chess at a beginner's level.

Tasks that do not require human intelligence (i.e. "unintelligent" small children or animals can do them):

  • Facial recognition
  • Visual perception
  • Speech recognition
  • Walking from room to room without bumping into something
  • Picking up an egg without breaking it
  • Noticing who is speaking

The Oxford definition categorizes programs that can do tasks from list 1 as AI, and categorizes programs that do tasks from list 2 as being outside of AI's scope. This is obviously not what is actually happening out in the field -- exactly the opposite, in most cases. All of the problems in list 1 were solved back in the 1960s, with computers far less powerful than the one in your microwave or clock radio. The problems in list 2 have only been solved recently, if at all.

Activities considered "intelligent" when a human does them can sometimes be relatively easy for machines, and sometimes activities that would never appear particularly "intelligent" when a human does them can be incredibly difficult for machines. (See Moravec's paradox.) Thus the definition of artificial intelligence can't just be in terms of "human intelligence" -- a more general definition is needed. The Oxford dictionary definition is not adequate.

My recommendation

Scrap the extended definition altogether: just stick with the naive common-usage definition. Go directly to the examples (i.e. paragraph two of the lede).

Leave the difficult problem of defining "intelligence" (without reference to human intelligence) to the section "Defining AI" deeper in the article. This section considers the major issues, and should settle on "rationality" (i.e. goal-directed behavior) as Russell and Norvig do, and as John McCarthy did.----

CharlesTGillingham (talk) 04:31, 28 November 2022 (UTC)

Actually, I just noticed, it doesn't exist any more! I will restore this very brief philosophical discussion, without any mention of "intelligent agents". And I will leave Google's definition as well. ---- CharlesTGillingham (talk) 04:53, 28 November 2022 (UTC)
Please review the history: the Russell definition was moved to the intelligent agent article. It is not adequate for artificial intelligence because it includes all kinds of procedural actions that are of interest to fields like political science, but are not the essence of AI itself. Bquast (talk) 14:29, 30 November 2022 (UTC)
CharlesTGillingham (talk) 04:31, 4 December 2022 (UTC)
CharlesTGillingham (talk) 05:22, 4 December 2022 (UTC)
@CharlesTGillingham ok, sorry, then I misunderstood your intention. In general I agree that this current definition is not good. Intelligence can be human or animal (or plants?). I'm not sure about your list; many of the "dumb" tasks do require intelligence. I would not consider facial recognition _not_ intelligence.
Regarding the link from Google: I put in the direct citation of the OED, but you can find it like this: https://www.google.com/search?q=artificial+intelligence+definition I will try to add it soon. Bquast (talk) 02:49, 6 December 2022 (UTC)

References, further reading, notes, etc. cleanup

A major cleanup is needed of all these sections. It seems like many authors have inserted their own (maybe relevant) material here. These sections should contain references for the text used. They should also avoid mentioning the same references in many different places, in particular the confusing Russell and Norvig book. Bquast (talk) 14:32, 30 November 2022 (UTC)

Articles in this area are prone to reference spamming. I've done some work on this at related articles but not on this one. I'm also keeping a watch on this and related articles so that it doesn't get worse. North8000 (talk) 21:55, 30 November 2022 (UTC)
Wikipedia requires reliable sources. An article should include citations to the most reliable sources possible. There is no reason to include more references to less reliable sources, or to exclude references to the most reliable sources.
There is no more reliable source about AI than Russell and Norvig, the leading textbook, used in thousands of introductory university courses about AI. There is a vast body of less reliable sources about AI. There is a lot of dissent, new ideas, outsider perspectives, home brews, sloppy journalism, self-promotion and so on. Wikipedia has to take a NPOV on this huge variety, and we don't have room to cover them all. Thus we, as editors, need to show that every contribution reflects "mainstream" and "consensus" views on the subject. This is all we have room for; this is all that is relevant here. The dozens of citations in this article to the leading textbook are a way of showing that each contribution is mainstream and consensus, and a way of weeding out the fringe. ----
CharlesTGillingham (talk) 04:55, 4 December 2022 (UTC)
Please take care that cites you remove are not still in use as references. Removing cites that have short-form references causes "no target" errors. -- LCU ActivelyDisinterested transmissions °co-ords° 09:37, 8 December 2022 (UTC)

A Commons file used on this page or its Wikidata item has been nominated for deletion

The following Wikimedia Commons file used on this page or its Wikidata item has been nominated for deletion:

Participate in the deletion discussion at the nomination page. —Community Tech bot (talk) 22:24, 15 March 2023 (UTC)

Not used here. CharlesTGillingham (talk) 10:10, 23 March 2023 (UTC)

Wiki Education assignment: Research Process and Methodology - SP23 - Sect 201 - Thu

This article was the subject of a Wiki Education Foundation-supported course assignment, between 25 January 2023 and 5 May 2023. Further details are available on the course page. Student editor(s): Liliability (article contribs).

— Assignment last updated by Liliability (talk) 03:41, 13 April 2023 (UTC)

Future

In the "Future - Technological unemployment" section, would it be appropriate to add a clarifying statement to the quote, "...but they generally agree that it could be a net benefit if productivity gains are redistributed." With how it's presented, there is explicit reasoning that productivity gains would be seen by displaced workers receiving the monetary excess generated by AI's labor. However, this source is a survey of economics professors. Not business leaders speaking on affected industries and not sociologists speaking on affected workers. As a professional writer, presenting a quote like that from experts in a different field feels like an intentional misrepresentation.

Newer and older articles take a different tack, speculating that productivity gains would be seen in industries receiving displaced workers. Elsewhere, it's predicted that productivity gains would come from knowledge workers who learn to augment their work with AI, as it presents the opportunity to handle repetitive tasks.

Anecdotally, I use AI as an editor and it has tripled my productivity as a writer, which has given me time to edit Wikipedia articles. Software developers with whom I work have announced similar results, without mention of Wikipedia. In that regard, the section on technological unemployment speaks more to the AI boogeyman than it does to potential benefit, and I think we should fix that.

NOTE: I am not an AI nor am I employed by an AI or an AI developer. I have no stake in AI and no more interest than ensuring an accurate reporting of the facts. Oleanderyogurt (talk) 00:03, 18 April 2023 (UTC)

I agree that the current sentence is problematic for many reasons. IMO it would be best to simply remove it. North8000 (talk) 21:10, 18 April 2023 (UTC)

Infobox

This article needs an infobox. It could be the general infobox template, or a specific one. Technology standard is a common one, but "standard" is not correct; maybe "scientific domain" or something. What does everyone think? Bquast (talk) 16:24, 21 April 2023 (UTC)

IMHO we're better off without it. I foresee endless problems trying to decide what to put into it for such a broad vaguely defined topic and not much value to what we do put in there. Sincerely, North8000 (talk) 17:20, 21 April 2023 (UTC)

"Tools" section should contain a "machine learning" subsection

I believe machine learning is part of AI and the "Tools" section should contain a subsection named "machine learning methods".

However, currently under the "Tools" section, there is only a subsection named "Classifiers and statistical learning methods". "Classification" is just one task of supervised learning, which is one type of machine learning. Also, not all machine learning methods are statistical.

Changing "classifiers and statistical learning methods" to "machine learning methods" can also make the title simpler and easier to understand.

@CharlesGillingham @CharlesTGillingham Cooper2222 (talk) 21:40, 16 April 2023 (UTC)

There are many ways to organize this section. The idea was to list the tools without worrying about what they are used for, because in many cases, a particular tool can be used for many different things. This is kind of obvious with Search, Logic and ANNs.
All the things listed there (decision tree, nearest neighbor, kernel methods, SVM, naive Bayes) are "classifiers" that were developed in the language of the statistics literature (in the 90s) and were mostly applied to machine learning. However, they are also tools for data science and statistical analysis. (Or, at the very least, they share a lot in common with other statistical tools.)
Thus, I like the word "statistics" or "statistical" in the title. These are statistical tools. I would be more inclined to strike the "machine learning" part of the title -- we already have a section on machine learning above.
But feel free to be bold. CharlesTGillingham (talk) 01:48, 27 April 2023 (UTC)
Originally I didn't consider models like k-NN to be statistical, because they are not based on probability. But you said these models all came from statistics. If we consider all these models to be statistical, what is the difference between statistical learning and machine learning? ---- Cooper2222 (talk) 03:30, 28 April 2023 (UTC)
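For a concrete point of reference, here is a minimal k-nearest-neighbours sketch (my own illustration, not from the article): the prediction uses only distances and a majority vote, and no probability distribution is estimated anywhere.

    import math
    from collections import Counter

    def knn_predict(train, query, k=3):
        # train: list of (features, label) pairs; query: a feature tuple.
        # Sort by plain geometric distance -- no probability model involved.
        nearest = sorted(train, key=lambda ex: math.dist(ex[0], query))[:k]
        # Majority vote among the k nearest neighbours.
        return Counter(label for _, label in nearest).most_common(1)[0][0]

    train = [((1.0, 1.0), "a"), ((1.2, 0.9), "a"),
             ((5.0, 5.1), "b"), ((4.8, 5.3), "b")]
    print(knn_predict(train, (1.1, 1.0)))  # -> "a"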
Well, for me, all these tools are all somewhere near the border between AI and statistics, regardless of whether they are generally considered to be inside or outside of AI. It's the shared mathematical language, the way the problems are framed, and the precise way solutions can be judged and measured. All of that comes from statistics, not from previous AI research. ----
CharlesTGillingham (talk) 02:54, 2 May 2023 (UTC)

Sentence cut

I cut this, because it is at the wrong level of detail for the lede (which should primarily be a summary of the contents of the article). Not sure where to move it to, so I put it here for now. ----

This has changed the purchasing process, with the AI application functioning as a mediator between the consumer, product, and brand by providing personalized recommendations based on previous consumer purchasing decisions.[1]

References

  1. ^ Curtis, Lee (June 2020). "Trademark Law Playing Catch-up with Artificial Intelligence?". WIPO Magazine.

CharlesTGillingham (talk) 17:42, 1 July 2023 (UTC)

More material temporarily placed here

I cut this from AI § history, for several reasons:

  1. Brevity
  2. I think it reads better if this section is just a social history of AI, and doesn't address technical history or arguable historical interpretation.
  3. The linked article (symbolic AI) has been rewritten to describe a slightly different subject, so links from here are misleading.

It's going to take some research to work out how Wikipedia should address the terminological issue of "symbolic AI" vs. "GOFAI" (don't worry about it if you don't know what that is). To keep moving forward, I will just park this stuff here.

By the 1950s, two visions for how to achieve machine intelligence emerged. One vision, known as Symbolic AI or GOFAI, was to use computers to create a symbolic representation of the world and systems that could reason about the world. Proponents included Allen Newell, Herbert A. Simon, and Marvin Minsky. Closely associated with this approach was the "heuristic search" approach, which likened intelligence to a problem of exploring a space of possibilities for answers.

The second vision, known as the connectionist approach, centered on artificial neural networks, which were pushed to the background but have gained new prominence in recent decades.[2]