Neural networks, a concept originating in neuroscience, describe the intricate connections between neurons within the central nervous system. Computer science has long sought to mimic these systems: artificial neural networks process data in parallel in order to capture complex, non-linear dependencies in raw data, and their learning builds on previously acquired knowledge, much like our own brains. In essence, an artificial neural network is a combination of hardware and software modeled on the human brain, with numerous processing units distributed across three types of layers: (1)
- Input layer
- Hidden layer(s)
- Output layer
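This three-layer structure can be sketched in plain Python. The layer sizes and random weight range below are illustrative assumptions, not values from the source:

```python
import random

random.seed(42)  # reproducible illustration

# Assumed layer sizes: 3 input nodes, one hidden layer of 4 nodes, 1 output node
layer_sizes = [3, 4, 1]

# Fully connected weights: weights[l][j][i] connects node i of layer l
# to node j of layer l + 1
weights = [
    [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
]

print(len(weights))       # 2 weight matrices sit between the 3 layers
print(len(weights[0]))    # 4 hidden nodes...
print(len(weights[0][0])) # ...each receiving all 3 inputs
```

Each nested list here is one layer-to-layer connection matrix; a deeper network would simply have more entries in `layer_sizes`.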
A network may contain hundreds or even thousands of hidden layers, each learning from the output of the previous one. Visualized, the network takes the following shape:
(Figure: schematic of a neural network. Source: Ralf T. Kreutzer (2019))
Here, the nodes are interconnected: each node receives its inputs from many nodes in the preceding layer and passes its output on to many nodes in the subsequent layer, representing the flow of information through the network. The output layer may consist of one or several nodes. (1)
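The behavior of a single node described above can be sketched as a weighted sum of its inputs passed through an activation function. The sigmoid used here is a common choice but an assumption, as the source does not specify an activation:

```python
import math

def sigmoid(x: float) -> float:
    """Common activation function squashing any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def node_output(inputs: list[float], weights: list[float], bias: float) -> float:
    """One node: weighted sum of inputs from the preceding layer, then activation."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(weighted_sum)

# Example: a node receiving three inputs from the previous layer
out = node_output([0.5, -1.0, 0.25], [0.8, 0.2, -0.5], bias=0.1)
print(out)  # a single value in (0, 1), passed on to nodes in the next layer
```

The weights are what training adjusts; the same small computation, repeated across every node and layer, is what the diagrams of interconnected nodes depict.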
Beyond this basic structure, a neural network's depth refers to the number of its hidden layers, while its width refers to the number of nodes per layer (or connections per node). Depending on the configuration, information can propagate forward through the network and error signals can propagate backward, which is crucial for learning. (1)
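Forward propagation through such layers can be sketched as repeated application of the single-node computation. The network below, with one hidden layer of four nodes and a sigmoid activation, is a minimal illustrative configuration, not one taken from the source:

```python
import math
import random

random.seed(0)  # reproducible illustration

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs: list[float], layers: list) -> list[float]:
    """Propagate inputs forward; each layer is a list of (weights, bias) per node."""
    activations = inputs
    for layer in layers:
        activations = [
            sigmoid(sum(a * w for a, w in zip(activations, weights)) + bias)
            for weights, bias in layer
        ]
    return activations

# Depth = number of hidden layers (1 here); width = nodes per layer (4 hidden nodes)
hidden = [([random.uniform(-1, 1) for _ in range(3)], 0.0) for _ in range(4)]
output = [([random.uniform(-1, 1) for _ in range(4)], 0.0)]  # single output node

result = forward([0.2, 0.7, -0.1], [hidden, output])
print(result)  # one value, since the output layer here has a single node
```

Backward propagation would then compare this output against a target value and adjust every weight in the opposite direction, layer by layer; that training step is omitted here for brevity.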
Each node acts as a small repository of knowledge, holding both the rules programmed into it and those learned during training.
- Kreutzer, Ralf T. (2019): "Künstliche Intelligenz verstehen: Grundlagen – Use-Cases – unternehmenseigene KI-Journey", Springer Fachmedien Wiesbaden, Wiesbaden