Chapter 18 Committees of decision trees

David Heath, Simon Kasif, Steven L Salzberg

Research output: Contribution to journal › Article

Abstract

Many intelligent systems are designed to sift through a mass of evidence and arrive at a decision. Certain pieces of evidence may be given more weight than others, and this may affect the final decision significantly. When more than one intelligent agent is available to make a decision, we can form a committee of experts. By combining the different opinions of these experts, the committee approach can sometimes outperform any individual expert. In this paper, we show how to exploit randomized learning algorithms in order to develop committees of experts. By using the majority vote of these experts to make decisions, we are able to improve the performance of the original learning algorithm. More precisely, we have developed a randomized decision tree induction algorithm, which generates different decision trees every time it is run. Each tree represents a different expert decision-maker. We combine these trees using a majority voting scheme in order to overcome small errors that appear in individual trees. We have tested our idea with several real data sets, and found that accuracy consistently improved when compared to the decision made by a single expert. We have developed some analytical results that explain why this effect occurs. Our experiments also show that the majority voting technique outperforms at least some alternative strategies for exploiting randomization.
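As an illustration of the committee idea (a hypothetical sketch only: the paper's own randomized tree induction algorithm is not reproduced here, and simple randomized decision stumps stand in for full decision trees), the following toy example trains many randomized one-split classifiers on the same data and combines them by majority vote. It also includes the standard independence-based calculation of why voting can reduce error, which is the flavor of argument the abstract alludes to:

```python
import random
from collections import Counter
from math import comb

def train_random_stump(data, rng):
    """Fit a one-split 'tree': pick a random feature and a random
    threshold drawn from the training points, then give each side
    of the split the majority label of the examples falling there.
    (A simplified stand-in for randomized decision tree induction.)"""
    feat = rng.randrange(len(data[0][0]))
    thresh = rng.choice([x[feat] for x, _ in data])
    left = [y for x, y in data if x[feat] <= thresh]
    right = [y for x, y in data if x[feat] > thresh]
    left_label = Counter(left).most_common(1)[0][0] if left else 0
    right_label = Counter(right).most_common(1)[0][0] if right else 0
    return lambda x: left_label if x[feat] <= thresh else right_label

def majority_vote(committee, x):
    """Classify x by the majority vote of all committee members."""
    return Counter(tree(x) for tree in committee).most_common(1)[0][0]

rng = random.Random(0)
# Toy data: class 1 iff both features exceed 0.5.
points = [(rng.random(), rng.random()) for _ in range(200)]
data = [(x, int(x[0] > 0.5 and x[1] > 0.5)) for x in points]

# Running the randomized learner many times yields different "experts".
committee = [train_random_stump(data, rng) for _ in range(25)]
acc = sum(majority_vote(committee, x) == y for x, y in data) / len(data)

# Why voting helps, under an independence assumption: if each of 5
# members errs with probability 0.3, the majority errs only when 3 or
# more members err simultaneously.
err = sum(comb(5, k) * 0.3**k * 0.7**(5 - k) for k in range(3, 6))
# err ≈ 0.163, well below the individual error rate of 0.3
```

Real decision trees are far more expressive than these stumps, and the trees produced by one randomized learner are not truly independent, so the binomial figure is an idealized bound rather than a prediction of the empirical gains reported in the paper.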

Original language: English (US)
Pages (from-to): 305-317
Number of pages: 13
Journal: Advances in Psychology
Volume: 113
Issue number: C
DOI: 10.1016/S0166-4115(96)80038-0
State: Published - 1996

ASJC Scopus subject areas

  • Psychology (all)

Cite this

Heath, D., Kasif, S., & Salzberg, S. L. (1996). Chapter 18 Committees of decision trees. Advances in Psychology, 113(C), 305-317. https://doi.org/10.1016/S0166-4115(96)80038-0
@article{2cef421ab0474af381893f1fcb622ab2,
  title     = "Chapter 18 Committees of decision trees",
  author    = "David Heath and Simon Kasif and Steven L. Salzberg",
  year      = "1996",
  doi       = "10.1016/S0166-4115(96)80038-0",
  language  = "English (US)",
  volume    = "113",
  pages     = "305--317",
  journal   = "Advances in Psychology",
  issn      = "0166-4115",
  publisher = "Elsevier",
  number    = "C",
}

TY  - JOUR
T1  - Chapter 18 Committees of decision trees
AU  - Heath, David
AU  - Kasif, Simon
AU  - Salzberg, Steven L.
PY  - 1996
Y1  - 1996
UR  - http://www.scopus.com/inward/record.url?scp=77956778607&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=77956778607&partnerID=8YFLogxK
U2  - 10.1016/S0166-4115(96)80038-0
DO  - 10.1016/S0166-4115(96)80038-0
M3  - Article
AN  - SCOPUS:77956778607
VL  - 113
SP  - 305
EP  - 317
JO  - Advances in Psychology
JF  - Advances in Psychology
SN  - 0166-4115
IS  - C
ER  -