The question of how computationally useful wiring and connectivity are achieved and maintained in neural networks is of interest to both neuroscience and artificial intelligence. This project explores the relationship between reservoir computing, the evolution of graph and network structure, and real biological plasticity. It builds upon current work while opening up exciting new directions of research that benefit from the expertise in machine learning and complexity available in Bristol, and it is directly relevant to existing work by Dr Conor Houghton and Andrew Shannon on reservoir computing and physics-inspired machine learning. Understanding how network structures are tuned, and how this tuning affects information processing and memory, could yield insight into how biological and artificial networks achieve optimised, efficient computation.
The project proposes to address the following questions:
(1) What structural and spectral network characteristics are shared across optimised recurrent neural networks in reservoir computing?
(2) What learning rules (biophysically inspired plasticity) have the potential to encourage similarly structured networks?
To answer research question 1, the project investigates the graph-theoretic characteristics of optimised recurrent neural networks in reservoir computing. Genetic algorithms are used to optimise the RNN reservoirs, and the eigenspectra and network characteristics of the resulting reservoirs are analysed to identify the qualities associated with capacity and efficiency; the architectures of reservoirs with varying degrees of capacity are compared. Research question 2 is addressed in relation to current PhD work: the network, topological, and cost attributes of the reservoirs described above are compared and contrasted with those of networks produced by plasticity rules, to discover whether local plasticity paradigms have the potential to optimise network architectures for computational tasks.
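To make the analysis in research question 1 concrete, here is a minimal Python sketch, assuming a standard echo-state-network-style reservoir: a sparse random weight matrix rescaled to a chosen spectral radius. The function name, density, and radius values are illustrative choices, not the project's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n=100, density=0.1, rho=0.95):
    """Sparse random reservoir, rescaled so its spectral radius is rho."""
    W = rng.standard_normal((n, n)) * (rng.random((n, n)) < density)
    return W * (rho / np.max(np.abs(np.linalg.eigvals(W))))

W = make_reservoir()
spectrum = np.linalg.eigvals(W)                  # the eigenspectrum to be analysed
degrees = (W != 0).sum(axis=1)                   # out-degree of each node
print(np.max(np.abs(spectrum)), degrees.mean())  # spectral radius ~0.95, mean degree ~10
```

The spectral radius is commonly used as a proxy for the echo state property in reservoir computing, which makes it a natural summary statistic to track across optimised reservoirs, alongside graph-theoretic measures such as the degree distribution.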
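The optimisation step could then be sketched as a simple genetic algorithm over reservoir weight matrices, continuing from the sketch above (same imports and `make_reservoir`). The memory-capacity fitness, population size, mutation scale, and elitist selection scheme below are illustrative placeholders rather than the project's actual experimental design.

```python
N, T, WASHOUT = 100, 1200, 200
u = rng.uniform(-1, 1, T)                        # shared input signal for fitness evaluation
w_in = rng.uniform(-0.5, 0.5, N)

def memory_capacity(W, max_delay=20):
    """Toy fitness: summed squared correlation between linear readouts
    of the reservoir state and delayed copies of the input."""
    x, states = np.zeros(len(W)), []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    X = np.array(states)[WASHOUT:]               # discard the initial transient
    mc = 0.0
    for k in range(1, max_delay + 1):
        y = u[WASHOUT - k:T - k]                 # input delayed by k steps
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        mc += np.corrcoef(X @ beta, y)[0, 1] ** 2
    return mc

def mutate(W, sigma=0.05, rho=0.95):
    M = W + sigma * rng.standard_normal(W.shape) * (W != 0)  # perturb existing edges only
    return M * (rho / np.max(np.abs(np.linalg.eigvals(M))))  # hold the spectral radius fixed

pop = [make_reservoir(N) for _ in range(8)]
for gen in range(20):
    pop.sort(key=memory_capacity, reverse=True)
    pop = pop[:4] + [mutate(W) for W in pop[:4]]             # elitism plus mutated offspring
pop.sort(key=memory_capacity, reverse=True)

best_spectrum = np.linalg.eigvals(pop[0])        # eigenspectrum of an optimised reservoir
```

Holding the spectral radius fixed during mutation is one possible design choice; letting it evolve instead would allow the optimisation itself to reveal which spectral properties are favoured.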
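For research question 2, a local plasticity rule can be sketched in the same style. The Oja-like Hebbian update below is only one illustrative candidate; the biophysically inspired rules studied in the ongoing PhD work are not specified here, so the rule form, learning rate, and input statistics are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, eta = 100, 1e-3

W = rng.standard_normal((N, N)) / np.sqrt(N)     # dense random recurrent weights
w_in = rng.uniform(-0.5, 0.5, N)

x = np.zeros(N)
for u_t in rng.uniform(-1, 1, 5000):
    x_prev = x
    x = np.tanh(W @ x + w_in * u_t)
    # Local Hebbian update with an Oja-style decay term that keeps weights bounded:
    # dW[i, j] = eta * (x[i] * x_prev[j] - x[i]**2 * W[i, j])
    W += eta * (np.outer(x, x_prev) - (x ** 2)[:, None] * W)

# The shaped matrix can then be compared with GA-optimised reservoirs
# on the same spectral and graph-theoretic statistics.
print(np.max(np.abs(np.linalg.eigvals(W))))
```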