How to modify a neural network gradually without changing its input-output functionality

Christopher DiMattina, Kechen Zhang

Research output: Contribution to journal › Article

Abstract

It is generally unknown when distinct neural networks having different synaptic weights and thresholds implement identical input-output transformations. Determining the exact conditions for structurally distinct yet functionally equivalent networks may shed light on the theoretical constraints on how diverse neural circuits might develop and be maintained to serve identical functions. Such consideration also imposes practical limits on our ability to uniquely infer the structure of underlying neural circuits from stimulus-response measurements. We introduce a biologically inspired mathematical method for determining when the structure of a neural network can be perturbed gradually while preserving functionality. We show that for common three-layer networks with convergent and nondegenerate connection weights, this is possible only when the hidden unit gains are power functions, exponentials, or logarithmic functions, which are known to approximate the gains seen in some biological neurons. For practical applications, our numerical simulations with finite and noisy data show that continuous confounding of parameters due to network functional equivalence tends to occur approximately even when the gain function is not one of the aforementioned three types, suggesting that our analytical results are applicable to more general situations and may help identify a common source of parameter variability in neural network modeling.
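As a concrete illustration of the kind of continuous, function-preserving parameter change described above, consider a hidden unit with a power-law gain g(u) = u^p: scaling its incoming weights by any c > 0 while scaling its outgoing weight by c^(-p) leaves the network's input-output map unchanged. The short numerical sketch below is not taken from the paper; all names and values (W, a, p, c) are hypothetical, and inputs and weights are kept positive so the power-law gain is well defined.

    # Hedged sketch: a continuous weight change that preserves a three-layer
    # network's input-output function when hidden units have power-law gain
    # g(u) = u**p. Names and values are illustrative, not from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    p = 2.0                                    # power-law exponent of the hidden gain
    W = rng.uniform(0.1, 1.0, size=(3, 4))     # input -> hidden weights (positive)
    a = rng.uniform(0.5, 1.5, size=3)          # hidden -> output weights

    def network_output(x, W, a, p):
        """Output of a three-layer network with hidden-unit gain g(u) = u**p."""
        hidden = (W @ x) ** p                  # hidden-unit responses
        return a @ hidden                      # linear readout

    # Perturb the parameters continuously without changing the function:
    # scale incoming weights by c, compensate outgoing weights by c**(-p).
    c = 1.7
    W_new = c * W
    a_new = a * c ** (-p)

    x = rng.uniform(0.1, 1.0, size=4)          # positive input so u**p is real-valued
    print(network_output(x, W, a, p))          # original network
    print(network_output(x, W_new, a_new, p))  # perturbed network, identical output

Analogous compensations exist for the other gain functions named in the abstract; for example, shifting an exponential unit's threshold by δ multiplies its response by e^δ, which can be offset by rescaling its outgoing weight by e^(-δ).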

Original language: English (US)
Pages (from-to): 1-47
Number of pages: 47
Journal: Neural Computation
Volume: 22
Issue number: 1
DOI: 10.1162/neco.2009.05-08-781
State: Published - Jan 2010

Fingerprint

  • Weights and Measures
  • Neurons
  • Neural Networks
  • Functionality
  • Neuron
  • Modeling
  • Equivalence
  • Simulation
  • Stimulus
  • Exponential Function
  • Layer
  • Mathematical Methods

ASJC Scopus subject areas

  • Cognitive Neuroscience

Cite this

How to modify a neural network gradually without changing its input-output functionality. / DiMattina, Christopher; Zhang, Kechen.

In: Neural Computation, Vol. 22, No. 1, 01.2010, p. 1-47.

Research output: Contribution to journal › Article

@article{c2282a2d0472464f9070af16f8c6c11c,
title = "How to modify a neural network gradually without changing its input-output functionality",
abstract = "It is generally unknown when distinct neural networks having different synaptic weights and thresholds implement identical input-output transformations. Determining the exact conditions for structurally distinct yet functionally equivalent networks may shed light on the theoretical constraints on how diverse neural circuits might develop and be maintained to serve identical functions. Such consideration also imposes practical limits on our ability to uniquely infer the structure of underlying neural circuits from stimulus-response measurements. We introduce a biologically inspired mathematical method for determining when the structure of a neural network can be perturbed gradually while preserving functionality. We show that for common three-layer networks with convergent and nondegenerate connection weights, this is possible only when the hidden unit gains are power functions, exponentials, or logarithmic functions, which are known to approximate the gains seen in some biological neurons. For practical applications, our numerical simulations with finite and noisy data show that continuous confounding of parameters due to network functional equivalence tends to occur approximately even when the gain function is not one of the aforementioned three types, suggesting that our analytical results are applicable to more general situations and may help identify a common source of parameter variability in neural network modeling.",
author = "Christopher DiMattina and Kechen Zhang",
year = "2010",
month = "1",
doi = "10.1162/neco.2009.05-08-781",
language = "English (US)",
volume = "22",
pages = "1--47",
journal = "Neural Computation",
issn = "0899-7667",
publisher = "MIT Press Journals",
number = "1",
}

TY - JOUR

T1 - How to modify a neural network gradually without changing its input-output functionality

AU - DiMattina, Christopher

AU - Zhang, Kechen

PY - 2010/1

Y1 - 2010/1

N2 - It is generally unknown when distinct neural networks having different synaptic weights and thresholds implement identical input-output transformations. Determining the exact conditions for structurally distinct yet functionally equivalent networks may shed light on the theoretical constraints on how diverse neural circuits might develop and be maintained to serve identical functions. Such consideration also imposes practical limits on our ability to uniquely infer the structure of underlying neural circuits from stimulus-response measurements. We introduce a biologically inspired mathematical method for determining when the structure of a neural network can be perturbed gradually while preserving functionality. We show that for common three-layer networks with convergent and nondegenerate connection weights, this is possible only when the hidden unit gains are power functions, exponentials, or logarithmic functions, which are known to approximate the gains seen in some biological neurons. For practical applications, our numerical simulations with finite and noisy data show that continuous confounding of parameters due to network functional equivalence tends to occur approximately even when the gain function is not one of the aforementioned three types, suggesting that our analytical results are applicable to more general situations and may help identify a common source of parameter variability in neural network modeling.

AB - It is generally unknown when distinct neural networks having different synaptic weights and thresholds implement identical input-output transformations. Determining the exact conditions for structurally distinct yet functionally equivalent networks may shed light on the theoretical constraints on how diverse neural circuits might develop and be maintained to serve identical functions. Such consideration also imposes practical limits on our ability to uniquely infer the structure of underlying neural circuits from stimulus-response measurements. We introduce a biologically inspired mathematical method for determining when the structure of a neural network can be perturbed gradually while preserving functionality. We show that for common three-layer networks with convergent and nondegenerate connection weights, this is possible only when the hidden unit gains are power functions, exponentials, or logarithmic functions, which are known to approximate the gains seen in some biological neurons. For practical applications, our numerical simulations with finite and noisy data show that continuous confounding of parameters due to network functional equivalence tends to occur approximately even when the gain function is not one of the aforementioned three types, suggesting that our analytical results are applicable to more general situations and may help identify a common source of parameter variability in neural network modeling.

UR - http://www.scopus.com/inward/record.url?scp=77649249604&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=77649249604&partnerID=8YFLogxK

U2 - 10.1162/neco.2009.05-08-781

DO - 10.1162/neco.2009.05-08-781

M3 - Article

VL - 22

SP - 1

EP - 47

JO - Neural Computation

JF - Neural Computation

SN - 0899-7667

IS - 1

ER -