Academic Journals Database
Disseminating quality controlled scientific knowledge

BUILDING OF DECISION TREE CLASSIFIERS BY MEANS OF UNCERTAIN NUMERICAL FEATURES

Author(s): Devender Rondla | G. Krishna Veni

Journal: International Journal of Computer & Electronics Research
ISSN 2320-9348

Volume: 2
Issue: 3
Start page: 67
Date: 2013

Keywords: Data uncertainty | Decision-tree classification | Pruning technique | Probability distribution

ABSTRACT
In many applications data uncertainty is widespread, and the simplest way to handle it is to abstract the probability distributions into summary statistics such as means and variances; this approach is known as Averaging. A decision-tree classification model that accommodates data tuples whose numerical attributes carry uncertainty described by arbitrary pdf's is shown to perform better. Classical decision-tree building algorithms are modified to construct trees for classifying such data. Exploiting data uncertainty leads to decision trees with remarkably higher accuracies when suitable pdf's are used; data should therefore be collected and stored with the pdf information intact. However, performance is an issue because of the increased amount of information to be processed and the more difficult entropy computations involved. A series of pruning techniques is devised to improve tree-construction efficiency. Several of these pruning methods are generalisations of analogous techniques for handling point-valued data. Others, such as pruning by bounding and end-point sampling, are new and are designed mainly to handle uncertain data.
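The contrast between Averaging and the distribution-based approach described in the abstract can be illustrated with a minimal sketch. The Python snippet below is not the authors' implementation; it assumes each uncertain numerical value is given as an array of sample points drawn from its pdf, and it compares the entropy of a candidate split computed two ways: by collapsing each value to its mean (Averaging) versus by spreading each tuple's probability mass across both sides of the split according to its pdf.

```python
import numpy as np

def entropy(class_weights):
    """Entropy of a (possibly fractional) class-weight distribution."""
    total = sum(class_weights.values())
    if total == 0:
        return 0.0
    return -sum((w / total) * np.log2(w / total)
                for w in class_weights.values() if w > 0)

def split_entropy_averaging(samples, labels, threshold):
    """Averaging: replace each uncertain value by its mean, then split as usual."""
    left, right = {}, {}
    for pts, label in zip(samples, labels):
        side = left if np.mean(pts) <= threshold else right
        side[label] = side.get(label, 0.0) + 1.0
    n = len(samples)
    n_left = sum(left.values())
    return (n_left / n) * entropy(left) + ((n - n_left) / n) * entropy(right)

def split_entropy_distribution(samples, labels, threshold):
    """Distribution-based: each tuple contributes fractional mass to both
    branches, weighted by the probability that its value falls on each side."""
    left, right = {}, {}
    for pts, label in zip(samples, labels):
        p_left = float(np.mean(pts <= threshold))  # P(x <= threshold) under the sampled pdf
        left[label] = left.get(label, 0.0) + p_left
        right[label] = right.get(label, 0.0) + (1.0 - p_left)
    n = len(samples)
    n_left = sum(left.values())
    return (n_left / n) * entropy(left) + ((n - n_left) / n) * entropy(right)

# Hypothetical data: two tuples whose uncertain attribute is given by pdf sample points.
samples = [np.array([1.0, 2.0, 3.0]), np.array([2.5, 3.5, 4.5])]
labels = ["A", "B"]
print(split_entropy_averaging(samples, labels, threshold=3.0))
print(split_entropy_distribution(samples, labels, threshold=3.0))
```

The sketch also hints at the performance issue the abstract raises: the distribution-based computation must touch every pdf sample point of every tuple for each candidate split, which is what motivates pruning techniques such as pruning by bounding and end-point sampling.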