Chapter 18 Committees of decision trees

David Heath, Simon Kasif, Steven Salzberg

Research output: Contribution to journal › Article › peer-review

13 Scopus citations

Abstract

Many intelligent systems are designed to sift through a mass of evidence and arrive at a decision. Certain pieces of evidence may be given more weight than others, and this may affect the final decision significantly. When more than one intelligent agent is available to make a decision, we can form a committee of experts. By combining the different opinions of these experts, the committee approach can sometimes outperform any individual expert. In this paper, we show how to exploit randomized learning algorithms in order to develop committees of experts. By using the majority vote of these experts to make decisions, we are able to improve the performance of the original learning algorithm. More precisely, we have developed a randomized decision tree induction algorithm, which generates different decision trees every time it is run. Each tree represents a different expert decision-maker. We combine these trees using a majority voting scheme in order to overcome small errors that appear in individual trees. We have tested our idea with several real data sets, and found that accuracy consistently improved when compared to the decision made by a single expert. We have developed some analytical results that explain why this effect occurs. Our experiments also show that the majority voting technique outperforms at least some alternative strategies for exploiting randomization.
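To make the committee idea concrete, the sketch below trains several randomized decision trees and classifies by majority vote. It is a minimal illustration, not the authors' implementation: scikit-learn's DecisionTreeClassifier with splitter="random" and the breast-cancer benchmark dataset are assumptions standing in for the paper's randomized induction algorithm and its data sets.

```python
# Minimal sketch of a committee of randomized decision trees with majority voting.
# Assumes scikit-learn; the randomized splitter is a stand-in for the paper's
# randomized tree induction, not the authors' original algorithm.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each run of the randomized learner yields a different tree, i.e. a different "expert".
committee = [
    DecisionTreeClassifier(splitter="random", random_state=seed).fit(X_train, y_train)
    for seed in range(11)  # an odd number of voters avoids ties on binary labels
]

# Majority vote: every tree casts one vote per test example.
votes = np.stack([tree.predict(X_test) for tree in committee])  # shape (n_trees, n_samples)
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

print(f"single tree accuracy: {committee[0].score(X_test, y_test):.3f}")
print(f"committee accuracy:   {(majority == y_test).mean():.3f}")
```

The design choice of an odd committee size mirrors the paper's intuition: as long as individual trees err somewhat independently, the majority vote can cancel small, uncorrelated mistakes made by single experts.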

Original language: English (US)
Pages (from-to): 305-317
Number of pages: 13
Journal: Advances in Psychology
Volume: 113
Issue number: C
DOI
State: Published - 1996

ASJC Scopus subject areas

  • General Psychology

