This is the talk page for discussing improvements to the Artificial neuron article.
This is not a forum for general discussion of the article's subject.
The contents of the Linear threshold unit page were merged into Artificial neuron on 7 October 2014. For the contribution history and old versions of the redirected page, please see ; for the discussion at that location, see its talk page.
There seems to be at least a little overlap between this article and perceptron. Perhaps a partial merge of the overlapping info and some cross-links would be a good idea? --Delirium 23:04, Oct 25, 2003 (UTC)
- Exactly what I was about to point out myself. Maybe this article should be whittled down to the key concepts of an artificial neuron per se (biological basis etc), and Perceptron expanded to cover the specifics of the McCulloch-Pitts implementation - after all, there are other kinds of neural net, which therefore contain other kinds of neuron. Indeed, some of the comments here (such as the values being boolean) arguably don't even generalise over all perceptrons. Once I've finished defining them for my coursework, I'll try to sort out the various articles here. - IMSoP 18:24, 11 Dec 2003 (UTC)
What does this article want to tell us?
Where does the criticism come from that artificial neurons do not have multiple output axons? I've never come across it; please provide a citation!
W0 to Wm inputs is m+1 inputs, not m
I found this article researching something else. w0 through wm makes w an array with m + 1 elements. I would think that the first sentence under Basic Structure should be either:
For a given artificial neuron, let there be m + 1 inputs with signals x0 through xm and weights w0 through wm.
For a given artificial neuron, let there be m inputs with signals x0 through xm - 1 and weights w0 through wm - 1.
I don't want to make this change as A) it might be correct in Engineersp33k and B) I don't have the expertise to know how this might change other parts of the discussion that follows. TechBear 17:03, 24 October 2007 (UTC)
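To make the indexing question above concrete, here is a minimal Python sketch (the function name `neuron_sum` and the specific weight values are illustrative, not from the article). It assumes the common convention in which x0 is fixed at 1 so that w0 acts as the bias, which is why m "real" inputs come with m + 1 weights w0 through wm:

```python
# Weighted sum for an artificial neuron with m "real" inputs.
# By the common bias convention, x[0] is fixed at 1 so that w[0] acts
# as a bias term, giving m + 1 weights w[0]..w[m] for x[0]..x[m].
def neuron_sum(weights, signals):
    assert len(weights) == len(signals)
    return sum(w * x for w, x in zip(weights, signals))

m = 2                   # two real inputs
x = [1, 0.5, -0.3]      # x[0] = 1 is the bias input
w = [0.1, 0.4, 0.7]     # m + 1 = 3 weights
y = neuron_sum(w, x)    # 0.1*1 + 0.4*0.5 + 0.7*(-0.3)
```

Under this convention the first alternative above ("m + 1 inputs with signals x0 through xm") is the consistent reading.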
The criticism that artificial neurons are not biologically plausible is obvious - that's why they are called artificial. I am not sure Izhikevich wants to be cited for pointing out the obvious. I would drop the "criticism" section, since it suggests there is a dispute over whether the artificial neuron is biologically plausible. A section discussing what such an artificial neuron could tell us about real biology would be more fruitful, e.g. the capacity of neurons. —Preceding unsigned comment added by 22.214.171.124 (talk) 16:53, 5 March 2009 (UTC)
The article refers to "training" in the example algorithm and the following spreadsheet, but gives no details of how an artificial neuron might be trained. The text above the example says that there is more than one way. At least one method should be described, as the article makes little sense without such a description. I can guess that the process involves trying various inputs and adjusting weights until the required outputs are achieved, but unless this is done very carefully the process might not even converge. It also raises the question: if you know what outputs are required for given inputs (as in the example of the logical or function), there are much simpler ways of implementing the required function without any training. Presumably the utility of artificial neurons comes from there being some way of training in cases where the required outputs are not known in advance. The article would be much more useful if someone could describe how this is done.126.96.36.199 (talk) 19:53, 17 November 2010 (UTC)
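One simple scheme of the kind the comment above asks for is the classic perceptron learning rule: adjust each weight in proportion to the output error and its input. A hedged Python sketch, training a single step-activation neuron on the logical OR example mentioned above (the learning rate, epoch count, and function names are illustrative choices, not taken from the article):

```python
# Perceptron-style training of one neuron on the logical OR function.
# Assumes a step activation; weights[0] is the bias weight, whose
# input is fixed at 1. Learning rate and epoch count are arbitrary.
def step(s):
    return 1 if s >= 0 else 0

def train_or(epochs=20, lr=0.1):
    weights = [0.0, 0.0, 0.0]
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    for _ in range(epochs):
        for (a, b), target in data:
            x = (1, a, b)  # bias input first
            y = step(sum(w * xi for w, xi in zip(weights, x)))
            err = target - y
            # nudge each weight by (error * its input), scaled by lr
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
    return weights

w = train_or()
```

Because OR is linearly separable, this rule does converge; for targets a single neuron cannot represent (like XOR) it would not, which is part of why the article's hedge about "more than one way" matters.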
nonlinear combination function
Some explanation of why a nonlinear combination function is needed to get a multilayer network that can't be reduced to a single layer network would be nice — Preceding unsigned comment added by 188.8.131.52 (talk) 22:18, 19 June 2012 (UTC)
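The reduction asked about above is easy to demonstrate: if every layer is purely linear, composing two layers W2(W1 x) gives the same result as the single collapsed layer (W2 W1) x. A small self-contained Python sketch (the matrices are arbitrary illustrative values):

```python
# With a purely linear combination function, two layers W2(W1 x)
# collapse to one layer (W2 W1) x, so extra layers add no power.
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

W1 = [[1.0, 2.0], [0.0, -1.0]]
W2 = [[0.5, 1.0], [2.0, 0.0]]
x = [3.0, -2.0]

two_layers = matvec(W2, matvec(W1, x))   # layer-by-layer
one_layer = matvec(matmul(W2, W1), x)    # collapsed single layer
# the two results are identical; a nonlinearity applied between the
# layers is what prevents this collapse
```

This is exactly why a nonlinear activation between layers is required for a multilayer network to compute anything a single layer cannot.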
The image is pretty ugly and I want to replace it, but the replacement has different numbering and a different threshold/bias convention. Later in the article there is a formulation that is simpler but inconsistent with the earlier image. Same with the pseudocode numbering. It would be nice to make them all consistent and use a prettier picture.
http://neuralnetworksanddeeplearning.com/chap1.html also counts from 1. — Omegatron (talk) 05:25, 6 January 2018 (UTC)