As a hobbyist AI researcher I have often found myself pondering the exact workings of the human mind, and of biological neural networks in general. The aspect I have always found most fascinating is the creation of what I have come to call an 'internal universe', or IU: essentially an internal representation of reality which does not necessarily match up with actual reality. This IU can also contain completely imaginary things which do not exist in actuality, or distortions of factual processes that give them another interpretation.
Before I dive into the topic of internal universes, though, allow me to first address the nature of a biological neural network (BNN) and the way experiences and memories affect it. The essential aspect of a BNN, and of neural networks (NNs) in general, is that of input. Without input (the 'brain in a jar' scenario) there is no point to an NN. The basic and most essential function of an NN is to transform, or convolute, its input. Lack of input will generally lead to self-destruction in NNs with active internal feedback mechanisms (see sensory deprivation experiments).
This input we can refer to as 'experiences', and the recollection of such input as 'memories'. Together they form the foundation and cornerstones of the functioning of a BNN. Their role is both formative and functional, in that input (I) and recollection (R) are required during the development of a BNN to form the proper structures. This process can be observed in infants, for example, where the type, intensity and duration of the input they are exposed to during their first months and years can lead to the formation of a BNN which produces very distinct and quite predictable output (O).
An NN is by definition very unlikely to respond by itself in a manner which can be considered logical. Instead it appears to become 'programmed' by its input, which modifies and adds to the convolutions applied to the input signal. Most of the resulting behaviour (output) is handled by fixed parts of the network, resulting in what is referred to as 'instinct', or behavioural routines. This yields a predictable, fixed response (convolution, or C) to a certain type of input, in the form of I -> C -> O.
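The fixed I -> C -> O response could be sketched in code roughly as follows. This is a minimal illustration, not an implementation of any actual neural circuitry: the function names, the scalar stimulus encoding and the threshold value are all assumptions made purely for demonstration.

```python
# Minimal sketch of the fixed I -> C -> O pipeline: the convolution C is a
# hard-wired, non-learning transformation, so the same input always yields
# the same 'instinctive' output. All names and values here are illustrative.

def fixed_convolution(stimulus: float) -> float:
    """C: a fixed response curve that never changes (assumed threshold)."""
    return max(0.0, stimulus - 0.5)

def instinct(stimulus: float) -> str:
    """I -> C -> O: map an input signal to a fixed behavioural output."""
    activation = fixed_convolution(stimulus)
    return "react" if activation > 0.0 else "ignore"

print(instinct(0.9))  # strong stimulus -> react
print(instinct(0.2))  # weak stimulus -> ignore
```

The point of the sketch is only that nothing in the mapping depends on history: the response is fully determined by the input, which is what distinguishes instinct from the self-modifying behaviour discussed next.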
Naturally, the intriguing and defining part of an NN is its self-modifying functionality, which varies in capability between individuals. This process takes place in different parts of the BNN, from the lower, less advanced sections of the network to the newer, more complex ones. In the former we see the above process of input directly modifying the structure of the network. In the latter we encounter what is commonly referred to as 'intelligence' or 'reasoning', but which can also be described more broadly as self-awareness (SA).
SA is the ability to reflect upon and predict the consequences of a course of action. It is a reasoning process involving the awareness of oneself, and it is the essence of what AI researchers are trying to emulate. SA is also the core of the IU. Imagine SA at the centre of this universe, surrounded by recollections, input, convolutions and active feedback (AF). This is essentially a model of what is commonly referred to as a personality. In it, SA is a semi-passive presence in the NN, monitoring its surroundings and trying to compose a functional model of the reality outside the NN. This leads to the IU.
In short-hand form: C(I + R + AF) -> SA(IU) -> O.
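The data flow of this shorthand model can be sketched as a small simulation. Only the flow C(I + R + AF) -> SA(IU) -> O follows the text; the scalar signal encoding, the mixing weights and the update rules are all assumptions invented for illustration.

```python
# Illustrative sketch of C(I + R + AF) -> SA(IU) -> O. The weights and
# update rules are made up for demonstration; only the data flow follows
# the model described in the text.

class Mind:
    def __init__(self) -> None:
        self.recollections: list[float] = []  # R: stored past inputs
        self.iu = 0.0                         # IU: internal model of reality
        self.feedback = 0.0                   # AF: active internal feedback

    def convolve(self, i: float) -> float:
        """C(I + R + AF): combine input with recollections and feedback."""
        r = (sum(self.recollections) / len(self.recollections)
             if self.recollections else 0.0)
        return 0.5 * i + 0.3 * r + 0.2 * self.feedback

    def step(self, i: float) -> float:
        c = self.convolve(i)
        # SA(IU): self-awareness updates the internal universe toward the
        # convolved signal, not toward the raw input itself.
        self.iu += 0.5 * (c - self.iu)
        self.recollections.append(i)   # the experience becomes part of R
        self.feedback = c - i          # AF: mismatch drives future feedback
        return self.iu                 # O: output is produced from the IU

mind = Mind()
for stimulus in (1.0, 1.0, 0.0):
    output = mind.step(stimulus)
```

Note how the output is never the raw input: it is always filtered through R, AF and the accumulated IU, which is precisely why two minds fed similar inputs can still diverge.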
The interesting thing about this model is that it seems to form an adequate model of human behaviour and interactions, including in large groups. It explains why the old adage that no two people will ever fully agree on anything is theoretically true, as the possibility of the IUs of two individuals matching up fully is extremely remote. While the SA can semi-directly access the I by suppressing convolutions and thus validate the IU against reality (the underlying principle of the scientific method), this is a limited and intensive process. Only a strong SA can do so sufficiently, and may then still be blocked by the limitations of the IU due to previous input. This is the underlying principle behind psychological trauma, where a sufficiently strong input can trigger a strong AF process involving R. This is what happens, for example, in PTSD, where the AF result can be overpowering, even for the SA and its IU.
Soon I hope to further test this model using software and FPGA-based implementations. The prospect of this model becoming a replacement for current psychological models and a foundation for AI seems promising at this point.