Making big sense from big data in toxicology by read-across

Research output: Contribution to journal › Article › peer-review


Modern information technologies have made big data available in the safety sciences, i.e., extremely large data sets that can be analyzed only computationally to reveal patterns, trends and associations. This happens through (1) the compilation of large sets of existing data, e.g., as a result of the European REACH regulation, (2) the use of omics technologies and (3) systematic robotized testing in a high-throughput manner. All three approaches, along with other high-content technologies, leave us with big data - the challenge is now to make big sense of these data. Read-across, i.e., the local similarity-based intrapolation of properties, is gaining momentum with increasing data availability and growing consensus on how to process and report it. It is predominantly applied to in vivo test data as a gap-filling approach, but it can similarly complement other incomplete datasets. Big data are, first of all, repositories for finding similar substances and for ensuring that the available data are fully exploited. High-content and high-throughput approaches similarly require focusing on clusters, in this case formed by underlying mechanisms such as pathways of toxicity. The closely connected properties, i.e., structural and biological similarity, create the confidence needed for predictions of toxic properties. Here, among other approaches, a new web-based tool under development called REACH-across, which aims to support and automate structure-based read-across, is presented.
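The core idea of structure-based read-across described above - predicting a missing property of a target substance from its most similar neighbours in a large dataset - can be illustrated with a minimal sketch. The fingerprints, labels, thresholds and function names below are hypothetical, chosen for illustration; real read-across tools such as REACH-across use curated chemical databases and richer similarity and confidence measures.

```python
# Minimal read-across sketch (hypothetical data and parameters).
# Each substance is represented as a set of binary structural features;
# similarity is the Tanimoto (Jaccard) coefficient between feature sets.

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two binary fingerprints (sets)."""
    a, b = set(fp_a), set(fp_b)
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def read_across(target_fp, dataset, k=3, min_sim=0.5):
    """Fill the data gap for target_fp by majority vote of its k most
    similar analogues, ignoring substances below min_sim."""
    scored = sorted(
        ((tanimoto(target_fp, fp), label) for fp, label in dataset),
        reverse=True,
    )
    neighbours = [(s, lab) for s, lab in scored[:k] if s >= min_sim]
    if not neighbours:
        return None  # no sufficiently similar analogue: no prediction
    votes = sum(1 if lab == "toxic" else -1 for _, lab in neighbours)
    return "toxic" if votes > 0 else "non-toxic"

# Hypothetical dataset of fingerprints with known in vivo outcomes.
dataset = [
    ({1, 2, 3, 4}, "toxic"),
    ({1, 2, 3, 5}, "toxic"),
    ({7, 8, 9}, "non-toxic"),
]
print(read_across({1, 2, 3, 6}, dataset))  # → toxic
```

The `min_sim` cutoff reflects the point made in the abstract that confidence in a read-across prediction rests on sufficient similarity: if no close analogue exists in the big-data repository, the sketch abstains rather than guessing.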

Original language: English (US)
Pages (from-to): 83-93
Number of pages: 11
Issue number: 2
State: Published - 2016


Keywords

  • Computational toxicology
  • Data-mining
  • Databases
  • In silico
  • Read-across

ASJC Scopus subject areas

  • Pharmacology
  • Medical Laboratory Technology


