- Artificial Neural Networks (ANNs) are the learning mechanism of machine learning models. They are meant to replicate the functioning of the brain and consist of “neurons”, or nodes, arranged in layers. Each node is connected to all the nodes of the previous and following layers. You will immediately note that this already differs from real neurons, as the signal can pass bidirectionally: there is indeed no such thing as an axon or dendrites here:
- Nodes that receive the data form the input layer; they can be compared with afferent neurons.
- Layers of nodes that relay the weighted data are called hidden layers; they can be compared with interneurons. These layers are in fact filters which perform different types of calculations.
- Nodes that hold the result form the output layer; they can be compared with efferent neurons.
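As a rough illustration, here is a minimal NumPy sketch of this layered structure; the layer sizes, weights and activation functions are invented for the example and are not taken from any particular model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented layer sizes: 4 input nodes, 8 hidden nodes, 3 output nodes.
W1 = rng.normal(size=(4, 8))   # connections between input layer and hidden layer
W2 = rng.normal(size=(8, 3))   # connections between hidden layer and output layer

def relu(x):
    # Activation function applied inside the hidden layer.
    return np.maximum(0.0, x)

def softmax(x):
    # Turns the output layer into values between 0 and 1 that sum to 1.
    e = np.exp(x - x.max())
    return e / e.sum()

x = rng.normal(size=4)           # one data sample presented to the input layer
hidden = relu(x @ W1)            # hidden layer: weighted sums plus activation
output = softmax(hidden @ W2)    # output layer: one value per possible answer
print(output)
```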
- Something special in imaging: Convolutional Neural Networks (CNN)
- In imaging, the ANNs used are called Convolutional Neural Networks (CNNs). It sounds complicated at first, but a CNN is simply named after the filtering calculation it performs: a convolution.
- With this calculation, a value is computed from a small neighbourhood of adjacent pixels as a filter slides horizontally and vertically over the image. This results in a reduction in the number of pixels as the data is passed through each filter layer. This can be compensated for by adding a frame of zero-valued pixels around the data images before they are processed; this method is called zero padding.
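To make the convolution and zero-padding idea concrete, here is a naive sketch; the 5×5 image, the 3×3 averaging kernel and the padding size are arbitrary choices for illustration:

```python
import numpy as np

def convolve2d(image, kernel, zero_padding=0):
    """Naive 2D convolution sketch (illustrative only, not optimised)."""
    if zero_padding:
        # Add a frame of zero-valued pixels around the image.
        image = np.pad(image, zero_padding, mode="constant")
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # a toy 5x5 "image"
kernel = np.ones((3, 3)) / 9.0                     # a simple averaging filter

print(convolve2d(image, kernel).shape)                   # (3, 3): the image shrinks
print(convolve2d(image, kernel, zero_padding=1).shape)   # (5, 5): padding compensates
```

Without padding the 5×5 image shrinks to 3×3 after one filter layer; adding a one-pixel frame of zeros keeps the output at 5×5.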
- Input layer
- The input layer is where the data is presented to the artificial neural network.
- This layer is not trained; the data is simply passed on to the hidden layers before the final result appears in the output layer.
- Output layer
- These are the results of the algorithm after the data has been passed through the input and hidden layers. The number of neurons is normally equal to the number of possible solutions (for example, 5 solutions would mean 5 neurons: eczema, impetigo, scabies, fungal infections and insect bites).
- Here the results appear as a numerical value between 0 and 1 for the different solutions. In supervised learning these values are compared with the labels (also numerical values). The errors are then fed back into the hidden layers so that corrections can be made (see “backpropagation”).
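As a small illustration of comparing the output values with the labels, here is a sketch using the five skin conditions mentioned above; the prediction numbers are invented and the cross-entropy loss is just one common choice, not a statement about any specific algorithm:

```python
import numpy as np

conditions = ["eczema", "impetigo", "scabies", "fungal infection", "insect bite"]

# Hypothetical output-layer values for one image (between 0 and 1, summing to 1).
prediction = np.array([0.10, 0.05, 0.70, 0.10, 0.05])

# Supervised learning: the label says the correct answer is "scabies" (index 2),
# encoded as the same kind of numerical values.
label = np.array([0.0, 0.0, 1.0, 0.0, 0.0])

# Cross-entropy measures how far the prediction is from the label.
loss = -np.sum(label * np.log(prediction))
print(f"loss = {loss:.3f}")

# The error signal (prediction - label) is what gets fed back into the hidden
# layers during backpropagation so the weights can be corrected.
error = prediction - label
print(error)
```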
- Hidden layer
- This corresponds to the layers between the input and output layers. This is where the learning of an algorithm takes place: see this as a progressive adjustment of weights throughout the learning process.
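A toy sketch of this progressive adjustment of weights: all data, layer sizes and the learning rate below are invented; it only shows the loss falling as the hidden-layer weights are corrected step by step through backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(200, 4))                        # 200 made-up input samples
y = (X[:, 0] + X[:, 1] > 0).astype(float)[:, None]   # a simple target to learn

W1 = rng.normal(size=(4, 8)) * 0.1   # input -> hidden weights
W2 = rng.normal(size=(8, 1)) * 0.1   # hidden -> output weights
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    hidden = sigmoid(X @ W1)         # forward pass through the hidden layer
    out = sigmoid(hidden @ W2)       # output layer (values between 0 and 1)

    # Backpropagation: the output error is pushed back to adjust both weight matrices.
    d_out = out - y
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out / len(X)
    W1 -= lr * X.T @ d_hidden / len(X)

    if step % 100 == 0:
        loss = np.mean((out - y) ** 2)
        print(f"step {step}: loss {loss:.3f}")   # the loss falls as weights adjust
```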
- It is also where a problem lies for artificial intelligence in health. Even if the prediction is correct (medicine is often a grey zone), it is not possible to explain the result: we run into the problem of association versus causation. See, as an example from epidemiology, the criteria for measuring the strength of an association (the Bradford-Hill criteria):
The Bradford-Hill criteria (Proc R Soc Med 1965;58:295-300)
- Strength of the association: according to Hill, the stronger the association between a risk factor and outcome, the more likely the relationship is to be causal.
- Consistency of findings: have the same findings been observed among different populations, in different study designs and at different times?
- Specificity of the association: there must be a one-to-one relationship between cause and outcome.
- Temporal sequence of association: exposure must precede outcome.
- Biological gradient: change in disease rates should follow from corresponding changes in exposure (dose-response).
- Biological plausibility: presence of a potential biological mechanism.
- Coherence: does the relationship agree with the current knowledge of the natural history/biology of the disease?
- Experiment: does the removal of the exposure alter the frequency of the outcome?
- What is so special about Artificial Neural Networks?
- They are able to recognize and learn patterns even if the evidence is messy. They are able to achieve constraint satisfaction even if the data is conflicting. Like humans, they can build on overlapping features displaying similar patterns.
- Even if parts of the network are lost, the system as a whole retains what it has learnt and shows graceful degradation, where performance declines gradually as the network becomes damaged.
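A rough sketch of graceful degradation: everything here (the synthetic data, the network size, the damage levels) is made up; the point is only that accuracy tends to fall gradually as more random connections are removed, rather than failing all at once.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(500, 10))                       # made-up input data
y = (X.sum(axis=1) > 0).astype(float)[:, None]       # a simple target to learn

W1 = rng.normal(size=(10, 32)) * 0.1                 # input -> hidden weights
W2 = rng.normal(size=(32, 1)) * 0.1                  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, W2):
    return sigmoid(sigmoid(X @ W1) @ W2)

# A very small training loop (gradient descent with backpropagation).
for _ in range(2000):
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out / len(X)
    W1 -= 0.5 * X.T @ d_h / len(X)

# Progressively "damage" the network by zeroing random hidden-layer weights.
for damage in [0.0, 0.2, 0.4, 0.6, 0.8]:
    mask = rng.random(W1.shape) >= damage
    acc = np.mean((forward(W1 * mask, W2) > 0.5) == y)
    print(f"{int(damage * 100)}% of hidden weights removed -> accuracy {acc:.2f}")
```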
- Gas nets
- This is a more recent concept which is derived from findings in neuroscience. Contrary to classical ANNs, there is no need for a “hard-wired” connection linking two neurons for some kind of regulation to take place.
- For instance, to bring more oxygen to neurons, blood flow needs to be increased. This is done by the secretion of nitric oxide (NO) by neurons, which acts on blood vessel cells. This leads to dilation of the vessel lumen and a subsequent increase in oxygen supply (NO = no contraction).
- The same type of design has been applied to the secretion of an artificial gas in a network design concept called GasNets.
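A loose sketch of the GasNet idea: the neuron positions, the exponential decay constant and the gain rule below are all invented; it only illustrates that a diffusing gas can modulate neighbouring neurons by spatial proximity alone, with no wired connection involved.

```python
import numpy as np

rng = np.random.default_rng(0)

positions = rng.uniform(0, 1, size=(6, 2))   # 6 neurons placed on a 2D plane
emitter = 0                                  # neuron 0 emits the gas

# Gas concentration felt by each neuron falls off with distance from the emitter.
distances = np.linalg.norm(positions - positions[emitter], axis=1)
gas = np.exp(-distances / 0.3)

# The gas modulates each neuron's transfer function (here it scales the gain),
# so neuron 0 regulates the others even though no connection links them.
inputs = rng.normal(size=6)
base_gain = 1.0
outputs = np.tanh((base_gain * (1.0 + gas)) * inputs)

for i, (g, o) in enumerate(zip(gas, outputs)):
    print(f"neuron {i}: gas concentration {g:.2f}, output {o:+.2f}")
```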