Cooperative learning: Decentralized data neural network

Noah Lewis, Sergey Plis, Vince Daniel Calhoun

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Researchers often wish to study data stored in separate locations, such as when several research entities wish to make inferences from their combined data. The most common solution is to centralize the data in one location. However, certain types of data can be difficult to transfer between entities due to legal or practical reasons. This makes centralizing these types of data problematic. A possible solution is the use of methods that learn from data without moving them to a central location: decentralized algorithms. Only a few algorithms emphasizing that property are known to us, and even fewer are used in the biomedical domain. In this paper, we propose a decentralized neural network that allows data analysis without transferring the data from the sites that host them. Instead, this method only transfers the gradients (or their parts) calculated via back-propagation. Our approach allows us to learn a classifier even when class examples are located at different sites, enabling privacy-aware collaboration across groups with specific research interests. We validate the method in several experiments to test stability, compare performance to a network trained on the centralized data, and investigate the ability to reduce size of data transfer. Our experiments on simulated, benchmark, and neuroimaging addiction data provide strong evidence that the proposed model works as effectively as a pooled centralized model.
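The abstract describes training in which each site computes gradients on its own data via back-propagation and only those gradients (never the raw data) are shared and combined. The paper's exact protocol is not reproduced here; the following is a minimal illustrative sketch of that gradient-only-sharing idea, using a simple logistic-regression "network" and two hypothetical sites that each hold examples of only one class. All names and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(w, X, y):
    """Gradient of the logistic loss computed on one site's private data.
    Only this vector, not X or y, ever leaves the site."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

# Two sites; each hosts examples of a single class (as in the abstract's
# scenario where class examples live at different locations).
X1, y1 = rng.normal(-1.0, 1.0, (50, 3)), np.zeros(50)
X2, y2 = rng.normal(+1.0, 1.0, (50, 3)), np.ones(50)

w = np.zeros(3)  # shared model parameters
for _ in range(200):
    # Each site sends only its local gradient; an aggregator averages
    # them and updates the shared model.
    g = (local_gradient(w, X1, y1) + local_gradient(w, X2, y2)) / 2
    w -= 0.5 * g

# Evaluate the jointly learned classifier on the combined data.
X, y = np.vstack([X1, X2]), np.hstack([y1, y2])
acc = np.mean(((X @ w) > 0) == y)
```

In this toy setting the aggregated model separates the two classes well even though neither site ever saw the other's examples, which is the property the abstract claims for the full neural-network method.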

Original language: English (US)
Title of host publication: 2017 International Joint Conference on Neural Networks, IJCNN 2017 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 324-331
Number of pages: 8
Volume: 2017-May
ISBN (Electronic): 9781509061815
DOIs: 10.1109/IJCNN.2017.7965872
State: Published - Jun 30 2017
Event: 2017 International Joint Conference on Neural Networks, IJCNN 2017 - Anchorage, United States
Duration: May 14 2017 - May 19 2017

Other

Other: 2017 International Joint Conference on Neural Networks, IJCNN 2017
Country: United States
City: Anchorage
Period: 5/14/17 - 5/19/17

Keywords

  • Distributed signal processing
  • Neural network

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

Cite this

Lewis, N., Plis, S., & Calhoun, V. D. (2017). Cooperative learning: Decentralized data neural network. In 2017 International Joint Conference on Neural Networks, IJCNN 2017 - Proceedings (Vol. 2017-May, pp. 324-331). [7965872] Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/IJCNN.2017.7965872

@inproceedings{822b763bf59c45a6b0ab7cb735cf3b90,
title = "Cooperative learning: Decentralized data neural network",
abstract = "Researchers often wish to study data stored in separate locations, such as when several research entities wish to make inferences from their combined data. The most common solution is to centralize the data in one location. However, certain types of data can be difficult to transfer between entities due to legal or practical reasons. This makes centralizing these types of data problematic. A possible solution is the use of methods that learn from data without moving them to a central location: decentralized algorithms. Only a few algorithms emphasizing that property are known to us, and even fewer are used in the biomedical domain. In this paper, we propose a decentralized neural network that allows data analysis without transferring the data from the sites that host them. Instead, this method only transfers the gradients (or their parts) calculated via back-propagation. Our approach allows us to learn a classifier even when class examples are located at different sites, enabling privacy-aware collaboration across groups with specific research interests. We validate the method in several experiments to test stability, compare performance to a network trained on the centralized data, and investigate the ability to reduce size of data transfer. Our experiments on simulated, benchmark, and neuroimaging addiction data provide strong evidence that the proposed model works as effectively as a pooled centralized model.",
keywords = "Distributed signal processing, Neural network",
author = "Lewis, Noah and Plis, Sergey and Calhoun, {Vince Daniel}",
year = "2017",
month = jun,
day = "30",
doi = "10.1109/IJCNN.2017.7965872",
language = "English (US)",
isbn = "9781509061815",
volume = "2017-May",
pages = "324--331",
booktitle = "2017 International Joint Conference on Neural Networks, IJCNN 2017 - Proceedings",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
}
