### Abstract

It is generally unknown when distinct neural networks having different synaptic weights and thresholds implement identical input-output transformations. Determining the exact conditions for structurally distinct yet functionally equivalent networks may shed light on the theoretical constraints on how diverse neural circuits might develop and be maintained to serve identical functions. Such consideration also imposes practical limits on our ability to uniquely infer the structure of underlying neural circuits from stimulus-response measurements. We introduce a biologically inspired mathematical method for determining when the structure of a neural network can be perturbed gradually while preserving functionality. We show that for common three-layer networks with convergent and nondegenerate connection weights, this is possible only when the hidden unit gains are power functions, exponentials, or logarithmic functions, which are known to approximate the gains seen in some biological neurons. For practical applications, our numerical simulations with finite and noisy data show that continuous confounding of parameters due to network functional equivalence tends to occur approximately even when the gain function is not one of the aforementioned three types, suggesting that our analytical results are applicable to more general situations and may help identify a common source of parameter variability in neural network modeling.
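The continuous parameter trade-off described in the abstract is easy to demonstrate for the exponential-gain case: since exp(u + δ) = e^δ · exp(u), shifting a hidden unit's bias can be exactly absorbed by rescaling its output weight, giving a one-parameter family of structurally distinct but functionally identical networks. The sketch below uses hypothetical random parameter values (not taken from the paper) and a minimal three-layer architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def net_exp(x, v, w, b):
    # Three-layer network with exponential hidden-unit gain:
    # f(x) = sum_j v_j * exp(w_j . x + b_j)
    return np.exp(x @ w.T + b) @ v

# Random network: 2 inputs, 3 hidden units, scalar output
v = rng.normal(size=3)       # output weights
w = rng.normal(size=(3, 2))  # input weights
b = rng.normal(size=3)       # hidden-unit biases

x = rng.normal(size=(5, 2))  # five test inputs

# Gradual perturbation: shift every bias by delta and rescale the
# matching output weight by exp(-delta). Because
# v * exp(-delta) * exp(w.x + b + delta) = v * exp(w.x + b),
# the input-output function is unchanged for any delta.
delta = 0.7
v2 = v * np.exp(-delta)
b2 = b + delta

assert np.allclose(net_exp(x, v, w, b), net_exp(x, v2, w, b2))
```

Because `delta` can be varied continuously, the perturbation traces a smooth path through parameter space along which the network's input-output function is constant, which is exactly the kind of gradual, functionality-preserving modification the paper characterizes.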

| Original language | English (US) |
|---|---|
| Pages (from-to) | 1-47 |
| Number of pages | 47 |
| Journal | Neural Computation |
| Volume | 22 |
| Issue number | 1 |
| DOIs | https://doi.org/10.1162/neco.2009.05-08-781 |
| State | Published - Jan 2010 |


### ASJC Scopus subject areas

- Cognitive Neuroscience

### Cite this

DiMattina, Christopher; Zhang, Kechen. **How to modify a neural network gradually without changing its input-output functionality.** *Neural Computation*, vol. 22, no. 1, 2010, pp. 1-47. https://doi.org/10.1162/neco.2009.05-08-781

Research output: Contribution to journal › Article

RIS citation export:

```
TY - JOUR
T1 - How to modify a neural network gradually without changing its input-output functionality
AU - DiMattina, Christopher
AU - Zhang, Kechen
PY - 2010/1
Y1 - 2010/1
UR - http://www.scopus.com/inward/record.url?scp=77649249604&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=77649249604&partnerID=8YFLogxK
U2 - 10.1162/neco.2009.05-08-781
DO - 10.1162/neco.2009.05-08-781
M3 - Article
C2 - 19842986
AN - SCOPUS:77649249604
VL - 22
SP - 1
EP - 47
JO - Neural Computation
JF - Neural Computation
SN - 0899-7667
IS - 1
ER -
```