TY - JOUR
T1 - Chapter 18: Committees of decision trees
AU - Heath, David
AU - Kasif, Simon
AU - Salzberg, Steven
N1 - Funding Information:
Decision trees have been used successfully for many different decision making and classification tasks. A number of standard techniques have been developed in the machine learning community, most notably Quinlan's C4.5 algorithm (1986) and Breiman et al.'s CART (Classification and Regression Trees) algorithm (1984). Since the introduction of these algorithms, numerous variations and improvements have been put forward, including new pruning strategies (e.g., Quinlan, 1987) and incremental The authors wish to thank David Aha for providing comments and relevant references. This research was supported in part by the Air Force Office of Scientific Research under Grant AFOSR-89-0151, and by the National Science Foundation under Grants IRI-9116843 and IRI-9223591.
PY - 1996
Y1 - 1996
N2 - Many intelligent systems are designed to sift through a mass of evidence and arrive at a decision. Certain pieces of evidence may be given more weight than others, and this may affect the final decision significantly. When more than one intelligent agent is available to make a decision, we can form a committee of experts. By combining the different opinions of these experts, the committee approach can sometimes outperform any individual expert. In this paper, we show how to exploit randomized learning algorithms in order to develop committees of experts. By using the majority vote of these experts to make decisions, we are able to improve the performance of the original learning algorithm. More precisely, we have developed a randomized decision tree induction algorithm, which generates different decision trees every time it is run. Each tree represents a different expert decision-maker. We combine these trees using a majority voting scheme in order to overcome small errors that appear in individual trees. We have tested our idea with several real data sets, and found that accuracy consistently improved when compared to the decision made by a single expert. We have developed some analytical results that explain why this effect occurs. Our experiments also show that the majority voting technique outperforms at least some alternative strategies for exploiting randomization.
AB - Many intelligent systems are designed to sift through a mass of evidence and arrive at a decision. Certain pieces of evidence may be given more weight than others, and this may affect the final decision significantly. When more than one intelligent agent is available to make a decision, we can form a committee of experts. By combining the different opinions of these experts, the committee approach can sometimes outperform any individual expert. In this paper, we show how to exploit randomized learning algorithms in order to develop committees of experts. By using the majority vote of these experts to make decisions, we are able to improve the performance of the original learning algorithm. More precisely, we have developed a randomized decision tree induction algorithm, which generates different decision trees every time it is run. Each tree represents a different expert decision-maker. We combine these trees using a majority voting scheme in order to overcome small errors that appear in individual trees. We have tested our idea with several real data sets, and found that accuracy consistently improved when compared to the decision made by a single expert. We have developed some analytical results that explain why this effect occurs. Our experiments also show that the majority voting technique outperforms at least some alternative strategies for exploiting randomization.
UR - http://www.scopus.com/inward/record.url?scp=77956778607&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=77956778607&partnerID=8YFLogxK
U2 - 10.1016/S0166-4115(96)80038-0
DO - 10.1016/S0166-4115(96)80038-0
M3 - Article
AN - SCOPUS:77956778607
SN - 0166-4115
VL - 113
SP - 305
EP - 317
JO - Advances in Psychology
JF - Advances in Psychology
IS - C
ER -