Data Classification
Algorithms and Applications
Chapman & Hall/CRC
Data Mining and Knowledge Discovery Series
SERIES EDITOR
Vipin Kumar
University of Minnesota
Department of Computer Science and Engineering
Minneapolis, Minnesota, U.S.A.
PUBLISHED TITLES
ADVANCES IN MACHINE LEARNING AND DATA MINING FOR ASTRONOMY
Michael J. Way, Jeffrey D. Scargle, Kamal M. Ali, and Ashok N. Srivastava
BIOLOGICAL DATA MINING
Jake Y. Chen and Stefano Lonardi
COMPUTATIONAL BUSINESS ANALYTICS
Subrata Das
COMPUTATIONAL INTELLIGENT DATA ANALYSIS FOR SUSTAINABLE
DEVELOPMENT
Ting Yu, Nitesh V. Chawla, and Simeon Simoff
COMPUTATIONAL METHODS OF FEATURE SELECTION
Huan Liu and Hiroshi Motoda
CONSTRAINED CLUSTERING: ADVANCES IN ALGORITHMS, THEORY,
AND APPLICATIONS
Sugato Basu, Ian Davidson, and Kiri L. Wagstaff
CONTRAST DATA MINING: CONCEPTS, ALGORITHMS, AND APPLICATIONS
Guozhu Dong and James Bailey
DATA CLASSIFICATION: ALGORITHMS AND APPLICATIONS
Charu C. Aggarwal
DATA CLUSTERING: ALGORITHMS AND APPLICATIONS
Charu C. Aggarwal and Chandan K. Reddy
DATA CLUSTERING IN C++: AN OBJECT-ORIENTED APPROACH
Guojun Gan
DATA MINING FOR DESIGN AND MARKETING
Yukio Ohsawa and Katsutoshi Yada
DATA MINING WITH R: LEARNING WITH CASE STUDIES
Luís Torgo
FOUNDATIONS OF PREDICTIVE ANALYTICS
James Wu and Stephen Coggeshall
GEOGRAPHIC DATA MINING AND KNOWLEDGE DISCOVERY,
SECOND EDITION
Harvey J. Miller and Jiawei Han
HANDBOOK OF EDUCATIONAL DATA MINING
Cristóbal Romero, Sebastian Ventura, Mykola Pechenizkiy, and Ryan S.J.d. Baker
INFORMATION DISCOVERY ON ELECTRONIC HEALTH RECORDS
Vagelis Hristidis
INTELLIGENT TECHNOLOGIES FOR WEB APPLICATIONS
Priti Srinivas Sajja and Rajendra Akerkar
INTRODUCTION TO PRIVACY-PRESERVING DATA PUBLISHING: CONCEPTS
AND TECHNIQUES
Benjamin C. M. Fung, Ke Wang, Ada Wai-Chee Fu, and Philip S. Yu
KNOWLEDGE DISCOVERY FOR COUNTERTERRORISM AND
LAW ENFORCEMENT
David Skillicorn
KNOWLEDGE DISCOVERY FROM DATA STREAMS
João Gama
MACHINE LEARNING AND KNOWLEDGE DISCOVERY FOR
ENGINEERING SYSTEMS HEALTH MANAGEMENT
Ashok N. Srivastava and Jiawei Han
MINING SOFTWARE SPECIFICATIONS: METHODOLOGIES AND APPLICATIONS
David Lo, Siau-Cheng Khoo, Jiawei Han, and Chao Liu
MULTIMEDIA DATA MINING: A SYSTEMATIC INTRODUCTION TO
CONCEPTS AND THEORY
Zhongfei Zhang and Ruofei Zhang
MUSIC DATA MINING
Tao Li, Mitsunori Ogihara, and George Tzanetakis
NEXT GENERATION OF DATA MINING
Hillol Kargupta, Jiawei Han, Philip S. Yu, Rajeev Motwani, and Vipin Kumar
RAPIDMINER: DATA MINING USE CASES AND BUSINESS ANALYTICS
APPLICATIONS
Markus Hofmann and Ralf Klinkenberg
RELATIONAL DATA CLUSTERING: MODELS, ALGORITHMS,
AND APPLICATIONS
Bo Long, Zhongfei Zhang, and Philip S. Yu
SERVICE-ORIENTED DISTRIBUTED KNOWLEDGE DISCOVERY
Domenico Talia and Paolo Trunfio
SPECTRAL FEATURE SELECTION FOR DATA MINING
Zheng Alan Zhao and Huan Liu
STATISTICAL DATA MINING USING SAS APPLICATIONS, SECOND EDITION
George Fernandez
SUPPORT VECTOR MACHINES: OPTIMIZATION BASED THEORY,
ALGORITHMS, AND EXTENSIONS
Naiyang Deng, Yingjie Tian, and Chunhua Zhang
TEMPORAL DATA MINING
Theophano Mitsa
TEXT MINING: CLASSIFICATION, CLUSTERING, AND APPLICATIONS
Ashok N. Srivastava and Mehran Sahami
THE TOP TEN ALGORITHMS IN DATA MINING
Xindong Wu and Vipin Kumar
UNDERSTANDING COMPLEX DATASETS: DATA MINING WITH MATRIX
DECOMPOSITIONS
David Skillicorn
Data Classification
Algorithms and Applications
Edited by
Charu C. Aggarwal
IBM T. J. Watson Research Center
Yorktown Heights, New York, USA
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to
publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials
or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material repro-
duced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any
copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any
form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming,
and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copy-
right.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400.
CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been
granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identifica-
tion and explanation without intent to infringe.
Editor Biography
Charu C. Aggarwal is a Research Scientist at the IBM T. J. Watson Research Center in York-
town Heights, New York. He completed his B.S. from IIT Kanpur in 1993 and his Ph.D. from
Massachusetts Institute of Technology in 1996. His research interest during his Ph.D. years was in
combinatorial optimization (network flow algorithms), and his thesis advisor was Professor James
B. Orlin. He has since worked in the field of performance analysis, databases, and data mining. He
has published over 200 papers in refereed conferences and journals, and has applied for or been
granted over 80 patents. He is author or editor of ten books. Because of the commercial value of the
aforementioned patents, he has received several invention achievement awards and has thrice been
designated a Master Inventor at IBM. He is a recipient of an IBM Corporate Award (2003) for his
work on bio-terrorist threat detection in data streams, a recipient of the IBM Outstanding Innovation
Award (2008) for his scientific contributions to privacy technology, a recipient of the IBM Outstand-
ing Technical Achievement Award (2009) for his work on data streams, and a recipient of an IBM
Research Division Award (2008) for his contributions to System S. He also received the EDBT 2014
Test of Time Award for his work on condensation-based privacy-preserving data mining.
He served as an associate editor of the IEEE Transactions on Knowledge and Data Engineering
from 2004 to 2008. He is an associate editor of the ACM Transactions on Knowledge Discovery
and Data Mining, an action editor of the Data Mining and Knowledge Discovery Journal, editor-in-
chief of the ACM SIGKDD Explorations, and an associate editor of the Knowledge and Information
Systems Journal. He serves on the advisory board of the Lecture Notes on Social Networks, a pub-
lication by Springer. He serves as the vice-president of the SIAM Activity Group on Data Mining,
which is responsible for all data mining activities organized by SIAM, including their main data
mining conference. He is a fellow of the IEEE and the ACM, for “contributions to knowledge dis-
covery and data mining algorithms.”
Contributors
Yixiang Fang, The University of Hong Kong, Hong Kong
Qi Li, State University of New York at Buffalo, Buffalo, New York
Preface
The problem of classification is perhaps one of the most widely studied in the data mining and ma-
chine learning communities. This problem has been studied by researchers from several disciplines
over several decades. Applications of classification include a wide variety of problem domains such
as text, multimedia, social networks, and biological data. Furthermore, the problem may be en-
countered in a number of different scenarios such as streaming or uncertain data. Classification is a
rather diverse topic, and the underlying algorithms depend greatly on the data domain and problem
scenario.
Therefore, this book will focus on three primary aspects of data classification. The first set of
chapters will focus on the core methods for data classification. These include methods such as prob-
abilistic classification, decision trees, rule-based methods, instance-based techniques, SVM meth-
ods, and neural networks. The second set of chapters will focus on different problem domains and
scenarios such as multimedia data, text data, time-series data, network data, data streams, and un-
certain data. The third set of chapters will focus on different variations of the classification problem
such as ensemble methods, visual methods, transfer learning, semi-supervised methods, and active
learning. These are advanced methods, which can be used to enhance the quality of the underlying
classification results.
The classification problem has been addressed by a number of different communities such as
pattern recognition, databases, data mining, and machine learning. In some cases, the work by the
different communities tends to be fragmented, and has not been addressed in a unified way. This
book will make a conscious effort to address the work of the different communities in a unified way.
The book will start off with an overview of the basic methods in data classification, and then discuss
progressively more refined and complex methods for data classification. Special attention will also
be paid to more recent problem domains such as graphs and social networks.
The chapters in the book will be divided into three types:
• Method Chapters: These chapters discuss the key techniques that are commonly used for
classification, such as probabilistic methods, decision trees, rule-based methods, instance-
based methods, SVM techniques, and neural networks.
• Domain Chapters: These chapters discuss the specific methods used for different domains
of data such as text data, multimedia data, time-series data, discrete sequence data, network
data, and uncertain data. Many of these chapters can also be considered application chap-
ters, because they explore the specific characteristics of the problem in a particular domain.
Dedicated chapters are also devoted to large data sets and data streams, because of the recent
importance of the big data paradigm.
• Variations and Insights: These chapters discuss the key variations on the classification pro-
cess such as classification ensembles, rare-class learning, distance function learning, active
learning, and visual learning. Many variations such as transfer learning and semi-supervised
learning use side-information in order to enhance the classification results. A separate chapter
is also devoted to evaluation aspects of classifiers.
This book is designed to be comprehensive in its coverage of the entire area of classification, and it
is hoped that it will serve as a knowledgeable compendium to students and researchers.
Chapter 1
An Introduction to Data Classification
Charu C. Aggarwal
IBM T. J. Watson Research Center
Yorktown Heights, NY
charu@us.ibm.com
1.1 Introduction
1.2 Common Techniques in Data Classification
    1.2.1 Feature Selection Methods
    1.2.2 Probabilistic Methods
    1.2.3 Decision Trees
    1.2.4 Rule-Based Methods
    1.2.5 Instance-Based Learning
    1.2.6 SVM Classifiers
    1.2.7 Neural Networks
1.3 Handling Different Data Types
    1.3.1 Large Scale Data: Big Data and Data Streams
        1.3.1.1 Data Streams
        1.3.1.2 The Big Data Framework
    1.3.2 Text Classification
    1.3.3 Multimedia Classification
    1.3.4 Time Series and Sequence Data Classification
    1.3.5 Network Data Classification
    1.3.6 Uncertain Data Classification
1.4 Variations on Data Classification
    1.4.1 Rare Class Learning
    1.4.2 Distance Function Learning
    1.4.3 Ensemble Learning for Data Classification
    1.4.4 Enhancing Classification Methods with Additional Data
        1.4.4.1 Semi-Supervised Learning
        1.4.4.2 Transfer Learning
    1.4.5 Incorporating Human Feedback
        1.4.5.1 Active Learning
        1.4.5.2 Visual Learning
    1.4.6 Evaluating Classification Algorithms
1.5 Discussion and Conclusions
Bibliography
1.1 Introduction
The problem of data classification has numerous applications in a wide variety of domains. This is because the problem attempts to learn the relationship between a set of feature
variables and a target variable of interest. Since many practical problems can be expressed as as-
sociations between feature and target variables, this provides a broad range of applicability of this
model. The problem of classification may be stated as follows:
Given a set of training data points along with associated training labels, determine the class la-
bel for an unlabeled test instance.
Numerous variations of this problem can be defined over different settings. Excellent overviews
on data classification may be found in [39, 50, 63, 85]. Classification algorithms typically contain
two phases:
• Training Phase: In this phase, a model is constructed from the training instances.
• Testing Phase: In this phase, the model is used to assign a label to an unlabeled test instance.
In some cases, such as lazy learning, the training phase is omitted entirely, and the classification is
performed directly from the relationship of the training instances to the test instance. Instance-based
methods such as the nearest neighbor classifiers are examples of such a scenario. Even in such cases,
a pre-processing phase such as a nearest neighbor index construction may be performed in order to
ensure efficiency during the testing phase.
The output of a classification algorithm may be presented for a test instance in one of two ways:
1. Discrete Label: In this case, a label is returned for the test instance.
2. Numerical Score: In this case, a numerical score is returned for each class label and test in-
stance combination. Note that the numerical score can be converted to a discrete label for a
test instance, by picking the class with the highest score for that test instance. The advantage
of a numerical score is that it now becomes possible to compare the relative propensity of
different test instances to belong to a particular class of importance, and rank them if needed.
Such methods are used often in rare class detection problems, where the original class distri-
bution is highly imbalanced, and the discovery of some classes is more valuable than others.
The classification problem thus segments the unseen test instances into groups, as defined by the
class label. While the segmentation of examples into groups is also done by clustering, there is
a key difference between the two problems. In the case of clustering, the segmentation is done
using similarities between the feature variables, with no prior understanding of the structure of the
groups. In the case of classification, the segmentation is done on the basis of a training data set,
which encodes knowledge about the structure of the groups in the form of a target variable. Thus,
while the segmentations of the data are usually related to notions of similarity, as in clustering,
significant deviations from the similarity-based segmentation may be achieved in practical settings.
As a result, the classification problem is referred to as supervised learning, just as clustering is
referred to as unsupervised learning. The supervision process often provides significant application-
specific utility, because the class labels may represent important properties of interest.
Some common application domains in which the classification problem arises are as follows:
• Customer Target Marketing: Since the classification problem relates feature variables to
target classes, this method is extremely popular for the problem of customer target marketing.
In such cases, feature variables describing the customer may be used to predict their buy-
ing interests on the basis of previous training examples. The target variable may encode the
buying interest of the customer.
• Medical Disease Diagnosis: In recent years, the use of data mining methods in medical
technology has gained increasing traction. The features may be extracted from the medical
records, and the class labels correspond to whether or not a patient may pick up a disease
in the future. In these cases, it is desirable to make disease predictions with the use of such
information.
• Supervised Event Detection: In many temporal scenarios, class labels may be associated
with time stamps corresponding to unusual events. For example, an intrusion activity may
be represented as a class label. In such cases, time-series classification methods can be very
useful.
• Multimedia Data Analysis: It is often desirable to perform classification of large volumes of
multimedia data such as photos, videos, audio or other more complex multimedia data. Mul-
timedia data analysis can often be challenging, because of the complexity of the underlying
feature space and the semantic gap between the feature values and corresponding inferences.
• Biological Data Analysis: Biological data is often represented as discrete sequences, in
which it is desirable to predict the properties of particular sequences. In some cases, the
biological data is also expressed in the form of networks. Therefore, classification methods
can be applied in a variety of different ways in this scenario.
• Document Categorization and Filtering: Many applications, such as newswire services,
require the classification of large numbers of documents in real time. This application is
referred to as document categorization, and is an important area of research in its own right.
• Social Network Analysis: Many forms of social network analysis, such as collective classi-
fication, associate labels with the underlying nodes. These are then used in order to predict
the labels of other nodes. Such applications are very useful for predicting useful properties of
actors in a social network.
The diversity of problems that can be addressed by classification algorithms is significant, and cov-
ers many domains. It is impossible to exhaustively discuss all such applications in either a single
chapter or book. Therefore, this book will organize the area of classification into key topics of in-
terest. The work in the data classification area typically falls into a number of broad categories:
• Technique-centered: The problem of data classification can be solved using numerous
classes of techniques such as decision trees, rule-based methods, neural networks, SVM meth-
ods, nearest neighbor methods, and probabilistic methods. This book will cover the most
popular classification methods in the literature comprehensively.
• Data-Type Centered: Many different data types are created by different applications. Some
examples of different data types include text, multimedia, uncertain data, time series, discrete
sequence, and network data. Each of these different data types requires the design of different
techniques, each of which can be quite different.
• Variations on Classification Analysis: Numerous variations on the standard classification
problem exist, which deal with more challenging scenarios such as rare class learning, transfer
learning, semi-supervised learning, or active learning. Alternatively, different variations of
classification, such as ensemble analysis, can be used in order to improve the effectiveness
of classification algorithms. These issues are of course closely related to issues of model
evaluation. All these issues will be discussed extensively in this book.
This chapter will discuss each of these issues in detail, and will also discuss how the organization of
the book relates to these different areas of data classification. The chapter is organized as follows.
The next section discusses the common techniques that are used for data classification. Section
1.3 explores the use of different data types in the classification process. Section 1.4 discusses the
different variations of data classification. Section 1.5 discusses the conclusions and summary.
1.2 Common Techniques in Data Classification

1.2.1 Feature Selection Methods

Feature selection methods choose the features that are most discriminative for the classification process. They fall into two broad groups:

1. Filter Models: In these cases, a crisp mathematical criterion is used to evaluate the relevance of a feature (or a subset of features) to the classification process, independently of any specific classification algorithm.

2. Wrapper Models: In these cases, the feature selection process is embedded into a classification algorithm, in order to make the feature selection process sensitive to the classification algorithm. This approach recognizes the fact that different algorithms may work better with different features.
In order to perform feature selection with filter models, a number of different measures are used
in order to quantify the relevance of a feature to the classification process. Typically, these measures
compute the imbalance of the feature values over different ranges of the attribute, which may either
be discrete or numerical. Some examples are as follows:
• Gini Index: Let p1 . . . pk be the fraction of classes that correspond to a particular value of the discrete attribute. Then, the gini-index of that value of the discrete attribute is given by:

G = 1 − ∑_{i=1}^{k} p_i^2   (1.1)

The value of G ranges between 0 and 1 − 1/k. Smaller values are more indicative of class imbalance. This indicates that the feature value is more discriminative for classification. The overall gini-index for the attribute can be measured by weighted averaging over different values of the discrete attribute, or by using the maximum gini-index over any of the different discrete values. Different strategies may be more desirable for different scenarios, though the weighted average is more commonly used.
• Entropy: The entropy of a particular value of the discrete attribute is measured as follows:

E = − ∑_{i=1}^{k} p_i · log(p_i)   (1.2)

The same notations are used above, as for the case of the gini-index. The value of the entropy lies between 0 and log(k), with smaller values being more indicative of class skew.
• Fisher’s Index: The Fisher’s index measures the ratio of the between-class scatter to the within-class scatter. Therefore, if p_j is the fraction of training examples belonging to class j, µ_j is the mean of a particular feature for class j, µ is the global mean for that feature, and σ_j is the standard deviation of that feature for class j, then the Fisher score F can be computed as follows (a small code sketch of these three measures follows this list):

F = [ ∑_{j=1}^{k} p_j · (µ_j − µ)^2 ] / [ ∑_{j=1}^{k} p_j · σ_j^2 ]   (1.3)
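The following is a minimal Python sketch of these three measures. The function names are illustrative, and np.var computes the population variance σ_j^2:

import numpy as np

def gini_index(class_fractions):
    # G = 1 - sum_i p_i^2 (Equation 1.1)
    p = np.asarray(class_fractions, dtype=float)
    return 1.0 - np.sum(p ** 2)

def entropy(class_fractions):
    # E = -sum_i p_i log(p_i) (Equation 1.2); terms with p_i = 0 contribute 0
    p = np.asarray(class_fractions, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def fisher_score(feature, labels):
    # Ratio of between-class scatter to within-class scatter (Equation 1.3)
    feature = np.asarray(feature, dtype=float)
    labels = np.asarray(labels)
    mu = feature.mean()
    between = within = 0.0
    for c in np.unique(labels):
        vals = feature[labels == c]
        p_j = len(vals) / len(feature)
        between += p_j * (vals.mean() - mu) ** 2
        within += p_j * vals.var()       # sigma_j^2
    return between / within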
A wide variety of other measures such as the χ2 -statistic and mutual information are also available in
order to quantify the discriminative power of attributes. An approach known as the Fisher’s discrim-
inant [61] is also used in order to combine the different features into directions in the data that are
highly relevant to classification. Such methods are of course feature transformation methods, which
are also closely related to feature selection methods, just as unsupervised dimensionality reduction
methods are related to unsupervised feature selection methods.
The Fisher’s discriminant will be explained below for the two-class problem. Let µ0 and µ1 be
the d-dimensional row vectors representing the means of the records in the two classes, and let Σ0
and Σ1 be the corresponding d × d covariance matrices, in which the (i, j)th entry represents the
covariance between dimensions i and j for that class. Then, the equivalent Fisher score FS(V ) for a
d-dimensional row vector V may be written as follows:
FS(V) = (V · (µ0 − µ1))^2 / (V (p0 · Σ0 + p1 · Σ1) V^T)   (1.4)
This is a generalization of the axis-parallel score in Equation 1.3, to an arbitrary direction V . The
goal is to determine a direction V , which maximizes the Fisher score. It can be shown that the
optimal direction V ∗ may be determined by solving a generalized eigenvalue problem, and is given
by the following expression:

V* ∝ (µ0 − µ1)(p0 · Σ0 + p1 · Σ1)^{−1}   (1.5)
If desired, successively orthogonal directions may be determined by iteratively projecting the data
onto the residual subspace, after determining the optimal directions one by one.
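For the two-class case, the closed form of Equation 1.5 translates directly into code. The following numpy sketch is illustrative; it assumes classes labeled 0 and 1, each with at least two training records, and returns the unit-norm Fisher direction:

import numpy as np

def fisher_direction(X, y):
    # V* proportional to (p0*Sigma0 + p1*Sigma1)^(-1) (mu0 - mu1)
    X0, X1 = X[y == 0], X[y == 1]
    p0, p1 = len(X0) / len(X), len(X1) / len(X)
    S = p0 * np.cov(X0, rowvar=False) + p1 * np.cov(X1, rowvar=False)
    v = np.linalg.solve(S, X0.mean(axis=0) - X1.mean(axis=0))
    return v / np.linalg.norm(v)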
More generally, it should be pointed out that many features are often closely correlated with
one another, and the additional utility of an attribute, once a certain set of features have already
been selected, is different from its standalone utility. In order to address this issue, the Minimum
Redundancy Maximum Relevance approach was proposed in [69], in which features are incremen-
tally selected on the basis of their incremental gain on adding them to the feature set. Note that this
method is also a filter model, since the evaluation is on a subset of features, and a crisp criterion is
used to evaluate the subset.
In wrapper models, the feature selection phase is embedded into an iterative approach with a
classification algorithm. In each iteration, the classification algorithm evaluates a particular set of
features. This set of features is then augmented using a particular (e.g., greedy) strategy, and tested
to see if the quality of the classification improves. Since the classification algorithm is used for
evaluation, this approach will generally create a feature set, which is sensitive to the classification
algorithm. This approach has been found to be useful in practice, because of the wide diversity of
models on data classification. For example, an SVM would tend to prefer features in which the two
classes separate out using a linear model, whereas a nearest neighbor classifier would prefer features
in which the different classes are clustered into spherical regions. A good survey on feature selection
methods may be found in [59]. Feature selection methods are discussed in detail in Chapter 2.
1.2.2 Probabilistic Methods

A probabilistic classifier models the probability of class membership of a test instance T with feature values x1 . . . xd. The well-known Bayes rule expresses the posterior probability of class i as follows:

P(Y(T) = i | x1 . . . xd) = P(Y(T) = i) · P(x1 . . . xd | Y(T) = i) / P(x1 . . . xd)   (1.6)

Since the denominator is constant across all classes, and one only needs to determine the class with the maximum posterior probability, one can approximate the aforementioned expression as follows:

P(Y(T) = i | x1 . . . xd) ∝ P(Y(T) = i) · P(x1 . . . xd | Y(T) = i)   (1.7)
The key here is that the expression on the right can be evaluated more easily in a data-driven way, as long as the naive Bayes assumption is used for simplification. Specifically, in Equation 1.7, the expression P(x1 . . . xd | Y(T) = i) can be expressed as the product of the feature-wise conditional probabilities:

P(x1 . . . xd | Y(T) = i) = ∏_{j=1}^{d} P(x_j | Y(T) = i)   (1.8)
This is referred to as conditional independence, and therefore the Bayes method is referred to as
“naive.” This simplification is crucial, because these individual probabilities can be estimated from
the training data in a more robust way. The naive Bayes assumption is crucial in providing the ability to perform the product-wise simplification. The term P(x_j | Y(T) = i) is computed as the fraction of
the records in the portion of the training data corresponding to the ith class, which contains feature
value x j for the jth attribute. If desired, Laplacian smoothing can be used in cases when enough
data is not available to estimate these values robustly. This is quite often the case, when a small
amount of training data may contain few or no training records containing a particular feature value.
The Bayes rule has been used quite successfully in the context of a wide variety of applications,
and is particularly popular in the context of text classification. In spite of the naive independence
assumption, the Bayes model seems to be quite effective in practice. A detailed discussion of the
naive assumption in the context of the effectiveness of the Bayes classifier may be found in [38].
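The following is a minimal sketch of such a naive Bayes classifier for categorical data, with Laplacian smoothing. The helper names and the crude domain-size estimate used in the smoothing denominator are illustrative assumptions rather than prescriptions from the text:

from collections import Counter, defaultdict
import math

def train_naive_bayes(records, labels):
    # Estimate the prior P(class) and the per-attribute conditional counts
    # needed for P(x_j | class); records are tuples of categorical values.
    n = len(labels)
    priors = {c: k / n for c, k in Counter(labels).items()}
    counts = defaultdict(Counter)        # counts[(j, c)][value] within class c
    for rec, c in zip(records, labels):
        for j, value in enumerate(rec):
            counts[(j, c)][value] += 1
    return priors, counts

def predict(rec, priors, counts, alpha=1.0):
    # Laplacian smoothing (alpha) guards against zero counts; the class
    # maximizing the log of the product in Equation 1.7 is reported.
    best_class, best_score = None, -math.inf
    for c, prior in priors.items():
        score = math.log(prior)
        for j, value in enumerate(rec):
            cnt = counts[(j, c)]
            total = sum(cnt.values())
            domain = len(cnt) + 1        # crude estimate of the domain size
            score += math.log((cnt[value] + alpha) / (total + alpha * domain))
        if score > best_score:
            best_class, best_score = c, score
    return best_class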
Another probabilistic approach is to directly model the posterior probability, by learning a dis-
criminative function that maps an input feature vector directly onto a class label. This approach is
often referred to as a discriminative model. Logistic regression is a popular discriminative classifier,
and its goal is to directly estimate the posterior probability P(Y (T ) = i|X ) from the training data.
Formally, the logistic regression model is defined as
P(Y(T) = i | X) = 1 / (1 + e^{−θ^T X}),   (1.9)
where θ is the vector of parameters to be estimated. In general, maximum likelihood is used to deter-
mine the parameters of the logistic regression. To handle overfitting problems in logistic regression,
regularization is introduced to penalize the log likelihood function for large values of θ. The logistic
regression model has been extensively used in numerous disciplines, including the Web, and the
medical and social science fields.
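A minimal sketch of regularized logistic regression follows, assuming binary labels in {0, 1} and plain gradient descent in place of a full maximum-likelihood solver; the hyperparameter values are illustrative:

import numpy as np

def train_logistic_regression(X, y, lam=0.1, lr=0.01, iters=2000):
    # Gradient descent on the L2-regularized negative log-likelihood
    # of the model in Equation 1.9.
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ theta)))      # P(Y = 1 | X)
        grad = X.T @ (p - y) / len(y) + lam * theta # regularization penalizes
        theta -= lr * grad                          # large values of theta
    return theta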
A variety of other probabilistic models are known in the literature, such as probabilistic graphical
models, and conditional random fields. An overview of probabilistic methods for data classification
may be found in [20, 64]. Probabilistic methods for data classification are discussed in Chapter 3.
1.2.3 Decision Trees

Decision trees create a hierarchical partitioning of the training data, with split criteria on the feature values at each node. The discriminative power of a node N may be quantified with the gini-index, computed from the fractions p1 . . . pk of the records at that node belonging to each of the k classes:

G(N) = 1 − ∑_{i=1}^{k} p_i^2   (1.10)

The value of G(N) lies between 0 and 1 − 1/k. The smaller the value of G(N), the greater the skew. In the cases where the classes are evenly balanced, the value is 1 − 1/k. An alternative measure is the entropy, defined below.
TABLE 1.1: Training Data Snapshot Relating Cardiovascular Risk Based on Previous Events to Different Blood Parameters

Patient Name   CRP Level   Cholesterol   High Risk? (Class Label)
Mary           3.2         170           Y
Joe            0.9         273           N
Jack           2.5         213           Y
Jane           1.7         229           N
Tom            1.1         160           N
Peter          1.9         205           N
Elizabeth      8.1         160           Y
Lata           1.3         171           N
Daniela        4.5         133           Y
Eric           11.4        122           N
Michael        1.8         280           Y
The entropy E(N) of node N is defined in an analogous way, over the fractions p1 . . . pk of records belonging to the k classes in the node:

E(N) = − ∑_{i=1}^{k} p_i · log(p_i)   (1.11)

The value of the entropy lies between 0 and log(k); the term at p_i = 0 is evaluated in the limit, as zero. The value is log(k) when the records are perfectly balanced among the different classes. This corresponds to the scenario with maximum entropy. The smaller the entropy, the greater the skew in the data. Thus, the gini-index and entropy provide an effective way to evaluate the quality of a node in terms of its level of discrimination between the different classes.
While constructing the training model, the split is performed, so as to minimize the weighted
sum of the gini-index or entropy of the two nodes. This step is performed recursively, until a ter-
mination criterion is satisfied. The most obvious termination criterion is one where all data records
in the node belong to the same class. More generally, the termination criterion requires either a
minimum level of skew or purity, or a minimum number of records in the node in order to avoid
overfitting. One problem in decision tree construction is that there is no way to predict the best
time to stop decision tree growth, in order to prevent overfitting. Therefore, in many variations, the
decision tree is pruned in order to remove nodes that may correspond to overfitting. There are differ-
ent ways of pruning the decision tree. One way of pruning is to use a minimum description length
principle in deciding when to prune a node from the tree. Another approach is to hold out a small
portion of the training data during the decision tree growth phase. It is then tested to see whether
replacing a subtree with a single node improves the classification accuracy on the hold out set. If
this is the case, then the pruning is performed. In the testing phase, a test instance is assigned to an
appropriate path in the decision tree, based on the evaluation of the split criteria in a hierarchical
decision process. The class label of the corresponding leaf node is reported as the relevant one.
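The split-selection step described above can be sketched as follows for a single numerical feature. The code scans midpoints between consecutive sorted values and returns the threshold with the smallest weighted gini-index; the function names are illustrative:

import numpy as np

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(feature, labels):
    # Minimize the weighted sum of the gini-index of the two child nodes.
    order = np.argsort(feature)
    f, y = np.asarray(feature)[order], np.asarray(labels)[order]
    best_t, best_g = None, np.inf
    for i in range(1, len(f)):
        if f[i] == f[i - 1]:
            continue
        t = (f[i] + f[i - 1]) / 2.0              # candidate threshold
        g = (i * gini(y[:i]) + (len(f) - i) * gini(y[i:])) / len(f)
        if g < best_g:
            best_t, best_g = t, g
    return best_t, best_g

Applied to the CRP column of Table 1.1, this returns a threshold close to the CRP split of 2 used in Figure 1.1(a).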
Figure 1.1 provides an example of how the decision tree is constructed. Here, we have illustrated
a case where the two measures (features) of the blood parameters of patients are used in order to
assess the level of cardiovascular risk in the patient. The two measures are the C-Reactive Protein
(CRP) level and Cholesterol level, which are well known parameters related to cardiovascular risk.
It is assumed that a training data set is available, which is already labeled into high risk and low
risk patients, based on previous cardiovascular events such as myocardial infarctions or strokes. At
the same time, it is assumed that the feature values of the blood parameters for these patients are
available. A snapshot of this data is illustrated in Table 1.1. It is evident from the training data that
higher CRP and Cholesterol levels correspond to greater risk, though it is possible to reach more definitive conclusions by combining the two.

FIGURE 1.1: Illustration of univariate and multivariate splits for decision tree construction. (Panel (a) splits first on CRP at 2, and then on Cholesterol at 250 and 200 in the two branches; panel (b) separates Normal from High Risk with the single multivariate split CRP + Chol/100 ≤ 4.)
An example of a decision tree that constructs the classification model on the basis of the two
features is illustrated in Figure 1.1(a). This decision tree uses univariate splits, by first partitioning
on the CRP level, and then using a split criterion on the Cholesterol level. Note that the Cholesterol
split criteria in the two CRP branches of the tree are different. In principle, different features can
be used to split different nodes at the same level of the tree. It is also sometimes possible to use
conditions on multiple attributes in order to create more powerful splits at a particular level of the
tree. An example is illustrated in Figure 1.1(b), where a linear combination of the two attributes
provides a much more powerful split than a single attribute. The split condition is as follows:
CRP + Cholesterol/100 ≤ 4
Note that a single condition such as this is able to partition the training data very well into the
two classes (with a few exceptions). Therefore, the split is more powerful in discriminating between
the two classes in a smaller number of levels of the decision tree. Where possible, it is desirable
to construct more compact decision trees in order to obtain the most accurate results. Such splits
are referred to as multivariate splits. Some of the earliest methods for decision tree construction
include C4.5 [72], ID3 [73], and CART [22]. A detailed discussion of decision trees may be found
in [22, 65, 72, 73]. Decision trees are discussed in Chapter 4.
1.2.4 Rule-Based Methods

Rule-based classifiers use a set of “if-then” rules, in which the antecedent (left-hand side) of a rule corresponds to a condition on the feature variables, and the consequent (right-hand side) assigns a test instance to a particular label. For example, for the case of the decision tree illustrated in Figure 1.1(a), the rightmost path corresponds to the following rule:

CRP > 2 AND Cholesterol > 200 ⇒ HighRisk
It is possible to create a set of disjoint rules from the different paths in the decision tree. In fact,
a number of methods such as C4.5, create related models for both decision tree construction and
rule construction. The corresponding rule-based classifier is referred to as C4.5Rules.
Rule-based classifiers can be viewed as more general models than decision tree models. While
decision trees require the induced rule sets to be non-overlapping, this is not the case for rule-based
classifiers. For example, consider the following rule:
CRP > 3 ⇒ HighRisk
Clearly, this rule overlaps with the previous rule, and is also quite relevant to the prediction of a
given test instance. In rule-based methods, a set of rules is mined from the training data in the first
phase (or training phase). During the testing phase, it is determined which rules are relevant to the
test instance and the final result is based on a combination of the class values predicted by the
different rules.
In many cases, it may be possible to create rules that possibly conflict with one another on the
right hand side for a particular test instance. Therefore, it is important to design methods that can
effectively determine a resolution to these conflicts. The method of resolution depends upon whether
the rule sets are ordered or unordered. If the rule sets are ordered, then the top matching rules can
be used to make the prediction. If the rule sets are unordered, then the rules can be used to vote on
the test instance. Numerous methods such as Classification based on Associations (CBA) [58], CN2
[31], and RIPPER [26] have been proposed in the literature, which use a variety of rule induction
methods, based on different ways of mining and prioritizing the rules.
Methods such as CN2 and RIPPER use the sequential covering paradigm, where rules with
high accuracy and coverage are sequentially mined from the training data. The idea is that a rule is
grown corresponding to a specific target class, and then all training instances matching (or covering)
the antecedent of that rule are removed. This approach is applied repeatedly, until only training
instances of a particular class remain in the data. This constitutes the default class, which is selected
for a test instance, when no rule is fired. The process of mining a rule for the training data is referred
to as rule growth. The growth of a rule involves the successive addition of conjuncts to the left-hand
side of the rule, after the selection of a particular consequent class. This can be viewed as growing a
single “best” path in a decision tree, by adding conditions (split criteria) to the left-hand side of the
rule. After the rule growth phase, a rule-pruning phase is used, which is analogous to decision tree
construction. In this sense, the rule growth of rule-based classifiers shares a number of conceptual
similarities with decision tree classifiers. These rules are ranked in the same order as they are mined
from the training data. For a given test instance, the class variable in the consequent of the first
matching rule is reported. If no matching rule is found, then the default class is reported as the
relevant one.
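The control flow of sequential covering can be sketched as follows. The rule growth and pruning phases are abstracted into an assumed grow_rule callback, which returns a predicate over instances (or None when no acceptable rule remains); this is a structural skeleton, not a full RIPPER or CN2 implementation:

def sequential_covering(instances, labels, target_classes, grow_rule):
    # Mine an ordered rule list: grow one rule for a target class, then
    # remove the training instances covered by its antecedent, and repeat.
    rules = []
    data = list(zip(instances, labels))
    for cls in target_classes[:-1]:          # the last class is the default
        while any(y == cls for _, y in data):
            rule = grow_rule([x for x, _ in data], [y for _, y in data], cls)
            if rule is None:
                break
            rules.append((rule, cls))
            data = [(x, y) for x, y in data if not rule(x)]
    return rules, target_classes[-1]

def classify(x, rules, default):
    # Report the consequent of the first matching rule, else the default.
    for rule, cls in rules:
        if rule(x):
            return cls
    return default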
Methods such as CBA [58] use the traditional association rule framework, in which rules are
determined with the use of specific support and confidence measures. Therefore, these methods are
referred to as associative classifiers. It is also relatively easy to prioritize these rules with the use of
these parameters. The final classification can be performed by either using the majority vote from
the matching rules, or by picking the top ranked rule(s) for classification. Typically, the confidence
of the rule is used to prioritize them, and the support is used to prune for statistical significance.
A single catch-all rule is also created for test instances that are not covered by any rule. Typically,
this catch-all rule might correspond to the majority class among training instances not covered
by any rule. Rule-based methods tend to be more robust than decision trees, because they are not
restricted to a strict hierarchical partitioning of the data. This is most evident from the relative
performance of these methods in some sparse high dimensional domains such as text. For example,
while many rule-based methods such as RIPPER are frequently used for the text domain, decision
trees are used rarely for text. Another advantage of these methods is that they are relatively easy
to generalize to different data types such as sequences, XML or graph data [14, 93]. In such cases,
the left-hand side of the rule needs to be defined in a way that is specific for that data domain. For
example, for a sequence classification problem [14], the left-hand side of the rule corresponds to a
sequence of symbols. For a graph-classification problem, the left-hand side of the rule corresponds
to a frequent structure [93]. Therefore, while rule-based methods are related to decision trees, they
have significantly greater expressive power. Rule-based methods are discussed in detail in Chapter 5.
1.2.6 SVM Classifiers

Support vector machine (SVM) classifiers separate the two classes with an optimally chosen hyperplane. Recall the multivariate split condition of Figure 1.1(b):

CRP + Cholesterol/100 ≤ 4

In such a case, the split condition in the multivariate case may also be used as a stand-alone condition for classification. Thus, an SVM classifier may be considered a single-level decision tree with a very carefully chosen multivariate split condition. Clearly, since the effectiveness of the approach depends only on a single separating hyperplane, it is critical to define this separation carefully.

FIGURE 1.2: (a) The maximum margin separating hyperplane for linearly separable data, with its support vectors; (b) margin violation with penalty-based slack variables.
Support vector machines are generally defined for binary classification problems. Therefore, the
class variable yi for the ith training instance Xi is assumed to be drawn from {−1, +1}. The most
important criterion, which is commonly used for SVM classification, is that of the maximum margin
hyperplane. In order to understand this point, consider the case of linearly separable data illustrated
in Figure 1.2(a). Two possible separating hyperplanes, with their corresponding support vectors and
margins have been illustrated in the figure. It is evident that one of the separating hyperplanes has a
much larger margin than the other, and is therefore more desirable because of its greater generality
for unseen test examples. Therefore, one of the important criteria for support vector machines is to
achieve maximum margin separation of the hyperplanes.
In general, it is assumed for d dimensional data that the separating hyperplane is of the form
W · X + b = 0. Here W is a d-dimensional vector representing the coefficients of the hyperplane of
separation, and b is a constant. Without loss of generality, it may be assumed (because of appropriate
coefficient scaling) that the two symmetric support vectors have the form W · X + b = 1 and W ·
X + b = −1. The coefficients W and b need to be learned from the training data D in order to
maximize the margin of separation between these two parallel hyperplanes. It can be shown from
elementary linear algebra that the distance between these two hyperplanes is 2/||W ||. Maximizing
this objective function is equivalent to minimizing ||W ||2 /2. The problem constraints are defined by
the fact that the training data points for each class are on one side of the support vector. Therefore,
these constraints are as follows:
W · Xi + b ≥ +1 ∀i : yi = +1 (1.12)
W · Xi + b ≤ −1 ∀i : yi = −1 (1.13)
This is a constrained convex quadratic optimization problem, which can be solved using Lagrangian
methods. In practice, an off-the-shelf optimization solver may be used to achieve the same goal.
In practice, the data may not be linearly separable. In such cases, soft-margin methods may
be used. A slack variable ξi ≥ 0 is introduced for each training instance, and a training instance is allowed to
violate the support vector constraint, for a penalty, which is dependent on the slack. This situation
is illustrated in Figure 1.2(b). Therefore, the new set of constraints are now as follows:
W · Xi + b ≥ +1 − ξi ∀i : yi = +1 (1.14)
W · Xi + b ≤ −1 + ξi ∀i : yi = −1 (1.15)
ξi ≥ 0 (1.16)
Note that additional non-negativity constraints also need to be imposed in the slack variables. The
objective function is now ||W ||2 /2 + C · ∑ni=1 ξi . The constant C regulates the importance of the
margin and the slack requirements. In other words, small values of C tolerate larger margin violations (a softer margin), whereas large values of C make the approach behave more like the hard-margin SVM. It
is also possible to solve this problem using off-the-shelf optimization solvers.
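As an alternative to off-the-shelf quadratic-programming solvers, the same soft-margin objective can be minimized by subgradient descent on its hinge-loss form, since the optimal slack ξi equals max(0, 1 − yi(W · Xi + b)). The following sketch takes that route; it is not the QP formulation above, and the step size and iteration count are illustrative:

import numpy as np

def train_soft_margin_svm(X, y, C=1.0, lr=0.001, iters=1000):
    # Subgradient descent on ||W||^2/2 + C * sum_i max(0, 1 - y_i(W.X_i + b)),
    # for labels y_i in {-1, +1}.
    W, b = np.zeros(X.shape[1]), 0.0
    for _ in range(iters):
        viol = y * (X @ W + b) < 1           # margin-violating instances
        grad_W = W - C * (X[viol].T @ y[viol])
        grad_b = -C * np.sum(y[viol])
        W -= lr * grad_W
        b -= lr * grad_b
    return W, b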
It is also possible to use transformations on the feature variables in order to design non-linear
SVM methods. In practice, non-linear SVM methods are learned using kernel methods. The key idea
here is that SVM formulations can be solved using only pairwise dot products (similarity values)
between objects. In other words, the optimal decision about the class label of a test instance, from
the solution to the quadratic optimization problem in this section, can be expressed in terms of the
following:
1. Pairwise dot products among the different training instances.
2. Pairwise dot products of the test instance and different training instances.
The reader is advised to refer to [84] for the specific details of the solution to the optimization
formulation. The dot product between a pair of instances can be viewed as a notion of similarity
among them. Therefore, the aforementioned observations imply that it is possible to perform SVM
classification, with pairwise similarity information between training data pairs and training-test data
pairs. The actual feature values are not required.
This opens the door for using transformations, which are represented by their similarity values.
These similarities can be viewed as kernel functions K(X ,Y ), which measure similarities between
the points X and Y . Conceptually, the kernel function may be viewed as dot product between the
pair of points in a newly transformed space (denoted by mapping function Φ(·)). However, this
transformation does not need to be explicitly computed, as long as the kernel function (dot product)
K(X,Y ) is already available:
K(X ,Y ) = Φ(X ) · Φ(Y ) (1.17)
Therefore, all computations can be performed in the original space using the dot products implied
by the kernel function. Some interesting examples of kernel functions include the Gaussian radial
basis function, polynomial kernel, and hyperbolic tangent, which are listed below in the same order.
K(Xi, Xj) = e^{−||Xi − Xj||^2 / 2σ^2}   (1.18)
K(Xi, Xj) = (Xi · Xj + 1)^h   (1.19)
K(Xi, Xj) = tanh(κ Xi · Xj − δ)   (1.20)
These different functions result in different kinds of nonlinear decision boundaries in the original
space, but they correspond to a linear separator in the transformed space. The performance of a
classifier can be sensitive to the choice of the kernel used for the transformation. One advantage
of kernel methods is that they can also be extended to arbitrary data types, as long as appropriate
pairwise similarities can be defined.
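The three kernels of Equations 1.18-1.20 are one-liners in code; the following sketch uses illustrative default parameter values:

import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Gaussian radial basis function (Equation 1.18)
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

def poly_kernel(x, y, h=2):
    # Polynomial kernel (Equation 1.19)
    return (np.dot(x, y) + 1) ** h

def tanh_kernel(x, y, kappa=1.0, delta=0.0):
    # Hyperbolic tangent kernel (Equation 1.20)
    return np.tanh(kappa * np.dot(x, y) - delta)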
The major downside of SVM methods is that they are slow. However, they are very popular and
tend to have high accuracy in many practical domains such as text. An introduction to SVM methods
may be found in [30, 46, 75, 76, 85]. Kernel methods for support vector machines are discussed
in [75]. SVM methods are discussed in detail in Chapter 7.
1.2.7 Neural Networks

The basic unit of a neural network is the perceptron, which predicts a class label zi for the ith training instance Xi with a linear function:

zi = sign{W · Xi + b}   (1.21)
The output is a predicted value of the binary class variable, which is assumed to be drawn from
{−1, +1}. The notation b denotes the bias. Thus, for a vector Xi drawn from a dimensionality of d,
the weight vector W should also contain d elements. Now consider a binary classification problem,
in which all labels are drawn from {+1, −1}. We assume that the class label of Xi is denoted by yi .
In that case, the sign of the predicted function zi yields the class label. An example of the perceptron
architecture is illustrated in Figure 1.3(a). Thus, the goal of the approach is to learn the set of
weights W with the use of the training data, so as to minimize the least squares error (yi − zi )2 . The
idea is that we start off with random weights and gradually update them, when a mistake is made
by applying the current function on the training example. The magnitude of the update is regulated
by a learning rate λ. This update is similar to the updates in gradient descent, which are made for
least-squares optimization. In the case of neural networks, the update function is as follows.
W^{t+1} = W^t + λ(yi − zi)Xi   (1.22)

Here, W^t is the value of the weight vector in the tth iteration. It is not difficult to show that the
incremental update vector is related to the negative gradient of (yi − zi )2 with respect to W . It is also
easy to see that updates are made to the weights, only when mistakes are made in classification.
When the outputs are correct, the incremental change to the weights is zero.
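A minimal sketch of the resulting mistake-driven training loop follows, assuming labels in {−1, +1}. The explicit bias update and the 1/t decay of the learning rate (discussed below) are common conventions rather than requirements of Equation 1.22:

import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=25):
    W, b = np.zeros(X.shape[1]), 0.0
    for t in range(1, epochs + 1):
        rate = lr / t                          # learning rate decays as ~1/t
        for Xi, yi in zip(X, y):
            zi = 1.0 if (W @ Xi + b) >= 0 else -1.0
            if zi != yi:                       # update only on mistakes
                W += rate * (yi - zi) * Xi
                b += rate * (yi - zi)          # bias update (b can instead be
                                               # folded into W via a constant input)
    return W, b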
The similarity to support vector machines is quite striking, in the sense that a linear function
is also learned in this case, and the sign of the linear function predicts the class label. In fact, the
perceptron model and support vector machines are closely related, in that both are linear function
approximators. In the case of support vector machines, this is achieved with the use of maximum
margin optimization. In the case of neural networks, this is achieved with the use of an incremental
learning algorithm, which is approximately equivalent to least squares error optimization of the prediction.

FIGURE 1.3: (a) The single-layer perceptron, with input nodes feeding a single output node through weights; (b) a multilayer neural network with input, hidden, and output layers.
The constant λ regulates the learning rate. The choice of learning rate is sometimes important,
because learning rates that are too small will result in very slow training. On the other hand, if the
learning rates are too fast, this will result in oscillation between suboptimal solutions. In practice,
the learning rates are fast initially, and then allowed to gradually slow down over time. The idea here
is that initially large steps are likely to be helpful, but are then reduced in size to prevent oscillation
between suboptimal solutions. For example, after t iterations, the learning rate may be chosen to be
proportional to 1/t.
The aforementioned discussion was based on the simple perceptron architecture, which can
model only linear relationships. In practice, the neural network is arranged in three layers, referred
to as the input layer, hidden layer, and the output layer. The input layer only transmits the inputs
forward, and therefore, there are really only two layers to the neural network, which can perform
computations. Within the hidden layer, there can be any number of layers of neurons. In such cases,
there can be an arbitrary number of layers in the neural network. In practice, there is only one hidden
layer, which leads to a 2-layer network. An example of a multilayer network is illustrated in Figure
1.3(b). The perceptron can be viewed as a very special kind of neural network, which contains only
a single layer of neurons (corresponding to the output node). Multilayer neural networks allow the
approximation of nonlinear functions, and complex decision boundaries, by an appropriate choice
of the network topology, and non-linear functions at the nodes. In these cases, a logistic or sigmoid
function known as a squashing function is also applied to the inputs of neurons in order to model
non-linear characteristics. It is possible to use different non-linear functions at different nodes. Such
general architectures are very powerful in approximating arbitrary functions in a neural network,
given enough training data and training time. This is the reason that neural networks are sometimes
referred to as universal function approximators.
In the case of single-layer perceptron algorithms, the training process is easy to perform by using
a gradient descent approach. The major challenge in training multilayer networks is that it is no
longer known for intermediate (hidden layer) nodes, what their “expected” output should be. This is
only known for the final output node. Therefore, some kind of “error feedback” is required, in order
to determine the changes in the weights at the intermediate nodes. The training process proceeds in
two phases, one of which is in the forward direction, and the other is in the backward direction.
1. Forward Phase: In the forward phase, the activation function is repeatedly applied to prop-
agate the inputs from the neural network in the forward direction. Since the final output is
supposed to match the class label, the final output at the output layer provides an error value,
depending on the training label value. This error is then used to update the weights of the
output layer, and propagate the weight updates backwards in the next phase.
2. Backpropagation Phase: In the backward phase, the errors are propagated backwards through
the neural network layers. This leads to the updating of the weights in the neurons of the
different layers. The gradients at the previous layers are learned as a function of the errors
and weights in the layer ahead of it. The learning rate λ plays an important role in regulating
the rate of learning.
In practice, any arbitrary function can be approximated well by a neural network. The price of this
generality is that neural networks are often quite slow in practice. They are also sensitive to noise,
and can sometimes overfit the training data.
The previous discussion assumed only binary labels. It is possible to create a k-label neural net-
work, by either using a multiclass “one-versus-all” meta-algorithm, or by creating a neural network
architecture in which the number of output nodes is equal to the number of class labels. Each output
represents prediction to a particular label value. A number of implementations of neural network
methods have been studied in [35,57,66,77,88], and many of these implementations are designed in
the context of text data. It should be pointed out that both neural networks and SVM classifiers use a
linear model that is quite similar. The main difference between the two is in how the optimal linear
hyperplane is determined. Rather than using a direct optimization methodology, neural networks
use a mistake-driven approach to data classification [35]. Neural networks are described in detail
in [19, 51]. This topic is addressed in detail in Chapter 8.
1.3 Handling Different Data Types

1.3.1 Large Scale Data: Big Data and Data Streams

The streaming scenario creates several unique challenges for the classification process:

• Concept Drift: The data streams are typically created by a generating process, which may change over time. This results in concept drift, which corresponds to changes in the underlying stream patterns over time. The presence of concept drift can be detrimental to classification algorithms, because models become stale over time. Therefore, it is crucial to adjust the model in an incremental way, so that it achieves high accuracy over current test instances.
• Massive Domain Constraint: The streaming scenario often contains discrete attributes that
take on millions of possible values. This is because streaming items are often associated with
discrete identifiers. Examples could be email addresses in an email stream, IP addresses
in a network packet stream, and URLs in a click stream extracted from proxy Web logs.
The massive domain problem is ubiquitous in streaming applications. In fact, many synopsis
data structures, such as the count-min sketch [33] (a small sketch of this structure follows this
list), and the Flajolet-Martin data structure [41], have been designed with this issue in mind.
While this issue has not been addressed very extensively in the stream mining literature
(beyond basic synopsis methods for counting), recent work has made a number of advances
in this direction [9].
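As a concrete illustration of such a synopsis structure, the following is a minimal count-min sketch in the spirit of [33], using only Python's standard library. The width, depth, and salted-hash construction are illustrative choices, not the canonical parameters.

```python
import hashlib

class CountMinSketch:
    """Approximate frequency counting in O(width * depth) space."""

    def __init__(self, width=1024, depth=5):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _hash(self, item, row):
        # One hash function per row, derived by salting a standard hash.
        digest = hashlib.md5(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._hash(item, row)] += count

    def estimate(self, item):
        # Collisions can only inflate a cell, so the minimum over the
        # rows is the least-contaminated estimate (never an undercount).
        return min(self.table[row][self._hash(item, row)]
                   for row in range(self.depth))

cms = CountMinSketch()
for ip in ["10.0.0.1", "10.0.0.1", "10.0.0.2"]:
    cms.add(ip)
print(cms.estimate("10.0.0.1"))  # 2 (possibly more, never less)
```

The appeal in the massive-domain setting is that the space depends only on the width and depth of the table, and not on the number of distinct identifiers in the stream.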
optimize only a small working set of variables while keeping the others fixed. This working set is
selected by using a steepest descent criterion, so as to maximize the progress gained from optimizing a
particular subset of variables. Another strategy is to discard training examples that do not
have any impact on the margin of the classifier. Training examples that are away from the decision
boundary, and on its “correct” side, have no impact on the margin of the classifier, even if they are
removed. Other methods such as SVMPerf [54] reformulate the SVM optimization to reduce the
number of slack variables, and increase the number of constraints. A cutting plane approach, which
works with a small subset of constraints at a time, is used in order to solve the resulting optimization
problem effectively.
Further challenges arise for extremely large data sets. This is because an increasing size of the
data implies that a distributed file system must be used in order to store it, and distributed processing
techniques are required in order to ensure sufficient scalability. The challenge here is that if large
segments of the data are available on different machines, it is often too expensive to shuffle the data
across different machines in order to extract integrated insights from it. Thus, as in all distributed
infrastructures, it is desirable to exchange intermediate insights, so as to minimize communication
costs. For an application programmer, this can sometimes create challenges in terms of keeping
track of where different parts of the data are stored, and the precise ordering of communications in
order to minimize the costs.
In this context, Google’s MapReduce framework [37] provides an effective method for the analysis
of large amounts of data, especially when the computations involve linearly computable
statistical functions over the elements of the data streams. One desirable aspect of this framework is
that it hides the precise details of where different parts of the data are stored from the applica-
tion programmer. As stated in [37]: “The run-time system takes care of the details of partitioning the
input data, scheduling the program’s execution across a set of machines, handling machine failures,
and managing the required inter-machine communication. This allows programmers without any
experience with parallel and distributed systems to easily utilize the resources of a large distributed
system.” Many analytical algorithms, such as k-means clustering, are naturally linear in terms of their scala-
bility with the size of the data. A primer on the MapReduce framework implementation on Apache
Hadoop may be found in [87]. The key idea here is to use a Map function in order to distribute the
work across the different machines, and then to shuffle the much smaller intermediate results,
expressed as (key,value) pairs, in an automated way. The Reduce function is then applied to the
aggregated results from the Map step in order to obtain the final results.
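The following is a minimal single-machine sketch of this Map/Reduce pattern for word counting. A real framework such as Hadoop [87] performs the shuffle and the distribution across machines automatically; the toy documents here are assumptions for illustration.

```python
from collections import defaultdict

def map_fn(document):
    # Map: emit one (word, 1) pair for every word in the document.
    return [(word, 1) for word in document.split()]

def reduce_fn(key, values):
    # Reduce: aggregate all of the counts emitted for a single word.
    return key, sum(values)

documents = ["the cat sat", "the dog sat"]

# Simulated shuffle: group the intermediate (key, value) pairs by key.
groups = defaultdict(list)
for doc in documents:
    for key, value in map_fn(doc):
        groups[key].append(value)

results = [reduce_fn(key, values) for key, values in groups.items()]
print(results)  # [('the', 2), ('cat', 1), ('sat', 2), ('dog', 1)]
```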
Google’s original MapReduce framework was designed for analyzing large amounts of Web
logs, and more specifically deriving linearly computable statistics from the logs. It has been shown
[44] that a declarative framework is particularly useful in many MapReduce applications, and that
many existing classification algorithms can be generalized to the MapReduce framework. A proper
choice of the algorithm to adapt to the MapReduce framework is crucial, since the framework is
particularly effective for linear computations. It should be pointed out that the major attraction of
the MapReduce framework is its ability to provide application programmers with a cleaner abstrac-
tion, which is independent of very specific run-time details of the distributed system. It should not,
however, be assumed that such a system is somehow inherently superior to existing methods for dis-
tributed parallelization from an effectiveness or flexibility perspective, especially if an application
programmer is willing to design such details from scratch. A detailed discussion of classification
algorithms for big data is provided in Chapter 10.
text is much closer to multidimensional data. However, the standard methods for multidimensional
classification often need to be modified for text.
The main challenge with text classification is that the data is extremely high dimensional and
sparse. A typical text lexicon may contain a hundred thousand words, but any single document
typically contains far fewer. Thus, most of the attribute values are zero, and the non-zero frequencies are
relatively small. Many common words may be very noisy and not very discriminative for the clas-
sification process. Therefore, the problems of feature selection and representation are particularly
important in text classification.
Not all classification methods are equally popular for text data. For example, rule-based meth-
ods, the Bayes method, and SVM classifiers tend to be more popular than other classifiers. Some
rule-based classifiers such as RIPPER [26] were originally designed for text classification. Neural
methods and instance-based methods are also sometimes used. A popular instance-based method
used for text classification is Rocchio’s method [56, 74]. Instance-based methods are also some-
times combined with centroid-based classification, in which frequency-truncated centroids of class-specific
clusters are used in place of the original documents for the k-nearest neighbor approach. This gen-
erally provides better accuracy, because the centroid of a small, closely related set of documents is
often a more stable representation of that data locality than any single document. This is especially
true because of the sparse nature of text data, in which two related documents may often have only
a small number of words in common.
Many classifiers such as decision trees, which are popularly used in other data domains, are
not quite as popular for text data. The reason for this is that decision trees use a strict hierarchical
partitioning of the data. Therefore, the features at the higher levels of the tree are implicitly given
greater importance than other features. In a text collection containing hundreds of thousands of
features (words), a single word usually tells us very little about the class label. Furthermore, a
decision tree will typically partition the data space with a very small number of splits. This is a
problem, when this value is orders of magnitude less than the underlying data dimensionality. Of
course, decision trees in text are not very balanced either, because of the fact that a given word
is contained only in a small subset of the documents. Consider the case where a split corresponds
to the presence or absence of a word. Because of the imbalanced nature of the tree, most paths from
the root to the leaves will correspond to word-absence decisions, with only a very small number (often
fewer than 5 to 10) of word-presence decisions. Clearly, this leads to poor classification, especially in cases
where word absence does not convey much information, and a modest number of word-presence
decisions are required. Univariate decision trees do not work very well for very high dimensional
data sets, because they assign disproportionate importance to some features, with a corresponding inability to
effectively leverage all the available features. It is possible to improve the effectiveness of decision
trees for text classification by using multivariate splits, though this can be rather expensive.
The standard classification methods, which are used for the text domain, also need to be suitably
modified. This is because of the high dimensional and sparse nature of the text domain. For example,
text has a dedicated model, known as the multinomial Bayes model, which is different from the
standard Bernoulli model [12]. The Bernoulli model treats the presence and absence of a word in
a text document in a symmetric way. However, a given text document contains only a small fraction
of the lexicon, and the absence of a word is usually far less informative than its presence.
The symmetric treatment of word presence and word absence can therefore be
detrimental to the effectiveness of a Bayes classifier in the text domain. To address this asymmetry,
the multinomial Bayes model uses the frequencies of the words that are present in a document,
and ignores non-occurrences.
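A minimal sketch of the multinomial Bayes model is shown below: the score of each class accumulates log-probabilities only for the words that actually occur in the document, so absent words contribute nothing. The Laplace smoothing and the toy corpus are illustrative assumptions.

```python
import math
from collections import Counter, defaultdict

docs = [("politics", "obama election vote"),
        ("politics", "election senate vote"),
        ("sports",   "game score team")]

class_counts = Counter()
word_counts = defaultdict(Counter)
vocab = set()
for label, text in docs:
    class_counts[label] += 1
    for word in text.split():
        word_counts[label][word] += 1
        vocab.add(word)

def predict(text):
    scores = {}
    total_docs = sum(class_counts.values())
    for label in class_counts:
        # log P(class), plus log P(word | class) for each occurrence;
        # words absent from the document are simply never visited.
        score = math.log(class_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for word in text.split():
            p = (word_counts[label][word] + 1) / (total_words + len(vocab))
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("obama vote"))  # politics
```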
In the context of SVM classifiers, scalability is important, because such classifiers scale poorly
both with the number of training documents and with the data dimensionality (lexicon size). Furthermore, the
sparsity of text (i.e., few non-zero feature values) should be exploited to improve the training efficiency.
This is because the training model in an SVM classifier is constructed using a constrained quadratic
optimization problem, which has as many constraints as the number of data points. This number is rather
large, and it directly results in an increased size of the corresponding Lagrangian relaxation. In the
case of kernel SVM, the space-requirements for the kernel matrix could also scale quadratically with
the number of data points. A few methods such as SVMLight [53] address this issue by carefully
breaking down the problem into smaller subproblems, and optimizing only a few variables at a time.
Other methods such as SVMPerf [54] also leverage the sparsity of the text domain. The SVMPerf
method scales as O(n · s), where s is proportional to the average number of non-zero feature values
per training document.
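As a small illustration of how sparsity can be exploited, the sketch below stores each document as a {word: frequency} dictionary, so that a dot product (the core operation in scoring with a linear model) costs time proportional to the number of non-zero features rather than to the lexicon size. This representation is an assumption for illustration, not the data structure used by SVMPerf.

```python
def sparse_dot(doc_a, doc_b):
    # Iterate over the smaller document and look up matches in the other;
    # the cost is proportional to the number of non-zero features.
    if len(doc_a) > len(doc_b):
        doc_a, doc_b = doc_b, doc_a
    return sum(freq * doc_b.get(word, 0) for word, freq in doc_a.items())

d1 = {"election": 2, "vote": 1}
d2 = {"vote": 3, "game": 1}
print(sparse_dot(d1, d2))  # 3
```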
Text classification often needs to be performed in scenarios where the documents are accompanied by linked
data. The links between documents are typically inherited from domains such as the Web and social
networks. In such cases, the links contain useful information, which should be leveraged in the
classification process. A number of techniques have recently been designed to utilize such side
information in the classification process. Detailed surveys on text classification may be found in
[12, 78]. The problem of text classification is discussed in detail in Chapter 11 of this book.
Both of these settings are equally important from the perspective of analytical inference in a wide
variety of scenarios. Furthermore, they are also relevant to the case of sequence data.
Sequence data arises frequently in biological applications, Web log mining, and system analysis.
The discrete nature of the underlying data necessitates the use of methods that are quite different
from the case of continuous time series data. For example, in the case of discrete sequences, the
nature of the distance functions and modeling methodologies are quite different than those in time-
series data.
A brief survey of time-series and sequence classification methods may be found in [91]. A
detailed discussion on time-series data classification is provided in Chapter 13, and that of sequence
data classification methods is provided in Chapter 14. While the two areas are clearly connected,
the differences between them are significant enough to merit separate topical treatment.
either supervised or unsupervised methods [3]. For example, consider the case of an image collection,
in which the similarity is defined on the basis of a user-centered semantic criterion. In such a case,
the use of standard distance functions such as the Euclidean metric may not reflect the semantic sim-
ilarities between two images well, because such similarities are based on human perception, and may even vary
from collection to collection. Thus, the best way to address this issue is to explicitly incorporate
human feedback into the learning process. Typically, this feedback is incorporated either in terms of
pairs of images with explicit distance values, or in terms of rankings of different images with respect to a given
target image; this feedback constitutes the training data that is used for learning purposes. Such an approach can
be used for a variety of different data domains. A detailed survey of distance function learning methods
is provided in [92]. The topic of distance function learning is discussed in detail in Chapter 18.
• Boosting: Boosting [40] is a common technique used in classification. The idea is to focus
on successively more difficult portions of the data set in order to create models that can classify
the data points in these portions more accurately, and then to use the ensemble scores over all
the components. A hold-out approach is used in order to determine the incorrectly classified
instances for each portion of the data set. Thus, the idea is to sequentially determine better
classifiers for the more difficult portions of the data, and then combine the results in order to
obtain a meta-classifier that works well on all parts of the data.
• Bagging: Bagging [24] is an approach that works with random data samples, and combines
the results from the models constructed using the different samples. The training examples for
each classifier are selected by sampling with replacement; these are referred to as bootstrap
samples. This approach has often been shown to provide superior results in certain scenarios,
though this is not always the case. The approach is not effective for reducing the bias, but it can
reduce the variance, because of the specific random aspects of the training data. A small sketch
of this scheme appears at the end of this section.
• Random Forests: Random forests [23] use sets of decision trees built either on
splits with randomly generated vectors, or on random subsets of the training data, and com-
pute the score as a function of these different components. Typically, the random vectors are
generated from a fixed probability distribution. Therefore, random forests can be created by
either random split selection or random input selection. Random forests are closely related
to bagging, and in fact bagging with decision trees can be considered a special case of ran-
dom forests, in terms of how the sample is selected (bootstrapping). In the case of random
forests, it is also possible to create the trees in a lazy way, which is tailored to the particular
test instance at hand.
• Model Averaging and Combination: This is one of the most common methods used in ensemble
analysis. In fact, the random forest method discussed above is a special case of this idea. In
the context of the classification problem, many Bayesian methods [34] exist for the model
combination process. The use of different models ensures that the error caused by the bias of
a particular classifier does not dominate the classification results.
• Stacking: Methods such as stacking [90] also combine different models in a variety of ways,
such as using a second-level classifier in order to perform the combination. The output of the
different first-level classifiers is used to create a new feature representation for the second-
level classifier. The first-level classifiers may be chosen in a variety of ways, such as using
different bagged classifiers or different training models. In order to avoid overfitting,
the training data needs to be divided into two subsets for the first- and second-level classifiers.
• Bucket of Models: In this approach [94], a “hold-out” portion of the data set is used in order to
decide the most appropriate model, namely the one that achieves the highest accuracy on the
held-out data. In essence, this approach can be viewed as a competition or bake-off
between the different models.
The area of meta-algorithms in classification is very rich, and different variations may work better
in different scenarios. An overview of different meta-algorithms in classification is provided in
Chapter 19.
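As an example of such a meta-algorithm, the following is a minimal bagging sketch: each model is trained on a bootstrap sample, and the predictions are combined by majority vote. The decision tree base learner, the ensemble size, and the availability of NumPy and scikit-learn are illustrative assumptions; the class labels are assumed to be small non-negative integers.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_predict(X_train, y_train, X_test, n_models=25, seed=0):
    """Train n_models trees on bootstrap samples; majority-vote the labels."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    votes = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)  # bootstrap sample (with replacement)
        model = DecisionTreeClassifier().fit(X_train[idx], y_train[idx])
        votes.append(model.predict(X_test))
    votes = np.array(votes)
    # Majority vote across the ensemble for every test instance.
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

Replacing the majority vote with an accuracy-based selection over a held-out set would turn this same skeleton into a bucket-of-models scheme.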
[Figure 1.4: A two-class example with Class A and Class B. (a) The old decision boundary with only one labeled example per class; (b) the boundary shifted by the addition of unlabeled examples, denoted by ‘x’.]
The motivation of semi-supervised learning is that knowledge of the dense and correlated regions
of the space is helpful for classification. Consider the two-class example
illustrated in Figure 1.4(a), in which only a single training example is available for each class.
In such a case, the decision boundary between the two classes is the straight line perpendicular
to the one joining the two examples. However, suppose that some additional unsupervised examples
are available, as illustrated in Figure 1.4(b). These unsupervised examples are denoted by ‘x’. In
such a case, the decision boundary changes from that of Figure 1.4(a). The major assumption here is that
the class labels vary less in dense regions of the training data, because of the smoothness assumption.
As a result, even though the added examples do not have labels, they contribute significantly to
improvements in classification accuracy.
In this example, the correlations between feature values were estimated with unlabeled training
data. This has an intuitive interpretation in the context of text data, where joint feature distributions
can be estimated with unlabeled data. For example, consider a scenario where training data is
available for predicting whether a document belongs to the “politics” category. It may be possible that the
word “Obama” (or some of the less common words) may not occur in any of the (small number
of) training documents. However, the word “Obama” may often co-occur with many features of the
“politics” category in the unlabeled instances. Thus, the unlabeled instances can be used to learn the
relevance of these less common features to the classification process, especially when the amount
of available training data is small.
Similarly, when the data is clustered, each cluster in the data is likely to predominantly contain
data records of one class or the other. The identification of these clusters only requires unsuper-
vised data rather than labeled data. Once the clusters have been identified from unlabeled data,
only a small number of labeled examples are required in order to determine confidently which label
corresponds to which cluster. Therefore, when a test example is classified, its clustering structure
provides critical information for its classification, even when only a small number of labeled
examples is available. It has been argued in [67] that the accuracy of the approach may increase ex-
ponentially with the number of labeled examples, as long as the assumption of smoothness in label
structure variation holds true. Of course, in real life, this may not be true. Nevertheless, it has been
shown repeatedly in many domains that the addition of unlabeled data provides significant advan-
tages for the classification process. An argument for the effectiveness of semi-supervised learning
that uses the spectral clustering structure of the data may be found in [18]. In some domains such
as graph data, semi-supervised learning is the only way in which classification may be performed.
This is because a given node may have very few neighbors of a specific class.
Semi-supervised methods are implemented in a wide variety of ways. Some of these methods
directly try to label the unlabeled data in order to increase the size of the training set. The idea is
to incrementally add the most confidently predicted labels to the training data. This is referred to as
self-training. Such methods have the downside that they run the risk of overfitting: when an unlabeled
example is added to the training data with a specific label, the label might be incorrect because of the
specific characteristics of the feature space, or of the classifier. This might result in further
propagation of the errors, and the results can be quite severe in many scenarios.
Therefore, semi-supervised methods need to be carefully designed in order to avoid overfitting.
An example of such a method is co-training [21], which partitions the attribute set into two subsets,
on which classifier models are independently constructed. The top label predictions of one classifier
are used to augment the training data of the other, and vice-versa. Specifically, the steps of co-
training are as follows:
1. Partition the attribute set into two disjoint feature subsets f1 and f2.
2. Train two independent classifier models M1 and M2, which use the disjoint feature sets f1
and f2, respectively.
3. Add the unlabeled instance with the most confidently predicted label from M1 to the training
data for M2, and vice versa.
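A minimal sketch of this procedure is shown below, assuming NumPy and scikit-learn are available. The naive Bayes base learner, the single shared training pool, and the one-instance-per-model-per-round confidence rule are simplifying assumptions for illustration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X1, X2, y, labeled_idx, unlabeled_idx, rounds=10):
    """X1 and X2 hold the two disjoint feature views f1 and f2."""
    labeled = list(labeled_idx)
    unlabeled = list(unlabeled_idx)
    for _ in range(rounds):
        # Step 2: independent models on the two disjoint feature views.
        m1 = GaussianNB().fit(X1[labeled], y[labeled])
        m2 = GaussianNB().fit(X2[labeled], y[labeled])
        # Step 3: each model labels its most confidently predicted
        # unlabeled instance, which augments the training pool used
        # by the other model in the next round.
        for model, X in ((m1, X1), (m2, X2)):
            if not unlabeled:
                break
            probs = model.predict_proba(X[unlabeled])
            pick = int(np.argmax(probs.max(axis=1)))  # most confident
            idx = unlabeled.pop(pick)
            y[idx] = model.classes_[np.argmax(probs[pick])]
            labeled.append(idx)
    return m1, m2
```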
the learning process. For example, consider the case of learning the class labels of Chinese docu-
ments, when not enough training data is available for these documents. However, similar English
documents may be available that contain training labels. In such cases, the knowledge in the training
data for the English documents can be transferred to the Chinese document scenario for more ef-
fective classification. Typically, this process requires some kind of “bridge” in order to relate the
Chinese documents to the English documents. An example of such a “bridge” could be pairs of
similar Chinese and English documents, though many other models are possible. In many cases,
a small amount of auxiliary training data in the form of labeled Chinese training documents may
also be available in order to further enhance the effectiveness of the transfer process. This general
principle can also be applied to cross-category or cross-domain scenarios where knowledge from
one classification category is used to enhance the learning of another category [71], or the knowl-
edge from one data domain (e.g., text) is used to enhance the learning of another data domain (e.g.,
images) [36, 70, 71, 95]. Broadly speaking, transfer learning methods fall into one of the following
four categories:
1. Instance-Based Transfer: In this case, the feature spaces of the two domains are highly over-
lapping; even the class labels may be the same. Therefore, it is possible to transfer knowledge
from one domain to the other by simply re-weighting the instances.
2. Feature-Based Transfer: In this case, there may be some overlaps among the features, but
a significant portion of the feature space may be different. Often, the goal is to perform a
transformation of each feature set into a new low dimensional space, which can be shared
across related tasks.
3. Parameter-Based Transfer: In this case, the motivation is that a good training model has
typically learned a lot of structure. Therefore, if two tasks are related, then the structure can
be transferred to learn the target task.
4. Relational-Transfer Learning: The idea here is that if two domains are related, they may share
some similarity relations among objects. These similarity relations can be used for transfer
learning across domains.
The major challenge in such transfer learning methods is that negative transfer can occur in
some cases, when the side information used is very noisy or irrelevant to the learning process. There-
fore, it is critical to apply the transfer learning process in a careful and judicious way in order to truly
improve the quality of the results. A survey on transfer learning methods may be found in [68], and
a detailed discussion on this topic may be found in Chapter 21.
[Figure: Class A and Class B. (a) Class separation; (b) random sampling with an SVM classifier; (c) active sampling with an SVM classifier.]
learning algorithms, most of which try to either reduce the uncertainty in classification or reduce the
error associated with the classification process. Some examples of criteria that are commonly used
in order to query the learner are as follows:
• Uncertainty Sampling: In this case, the learner queries the user for the labels of examples
about whose correct output the greatest level of uncertainty exists [45] (a small sketch of this
criterion appears at the end of this section).
• Query by Committee (QBC): In this case, the learner queries the user for the labels of examples
on which a committee of classifiers has the greatest disagreement. Clearly, this is another
indirect way to ensure that examples with the greatest uncertainty are queried [81].
• Greatest Model Change: In this case, the learner queries the user for labels of examples,
which cause the greatest level of change from the current model. The goal here is to learn
new knowledge that is not currently incorporated in the model [27].
• Greatest Error Reduction: In this case, the learner queries the user for the labels of examples
that cause the greatest reduction of error in the current model [28].
• Greatest Variance Reduction: In this case, the learner queries the user for examples that
result in the greatest reduction in output variance [28]. This is similar to the previous
case, since the variance is a component of the total error.
• Representativeness: In this case, the learner queries the user for the labels of examples that are most
representative of the underlying data. Typically, this approach combines one of the aforementioned
criteria (such as uncertainty sampling or QBC) with a representativeness model, such as a
density-based method, in order to select the queried examples [80].
These different kinds of models may work well in different scenarios. Another form of
active learning queries the data vertically: instead of acquiring labels for examples, the learner decides which
attributes to collect, so as to minimize the error at a given cost level [62]. A survey on active learning
methods may be found in [79]. The topic of active learning is discussed in detail in Chapter 22.
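As an illustration of the simplest of these criteria, the following is a minimal uncertainty-sampling loop using the least-confident rule mentioned above. The logistic regression learner, the oracle array standing in for the user, and the query budget are illustrative assumptions, with NumPy and scikit-learn assumed to be available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(X, oracle_y, seed_idx, budget=20):
    labeled = list(seed_idx)
    pool = [i for i in range(len(X)) if i not in set(labeled)]
    for _ in range(budget):
        model = LogisticRegression().fit(X[labeled], oracle_y[labeled])
        probs = model.predict_proba(X[pool])
        # Least-confident criterion: query the instance whose largest
        # predicted class probability is smallest.
        query = pool.pop(int(np.argmin(probs.max(axis=1))))
        labeled.append(query)  # the "user" (here, the oracle) supplies it
    return model, labeled
```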
A general discussion on visual data mining methods is found in [10, 47, 49, 55, 83]. A detailed
discussion of methods for visual classification is provided in Chapter 23.
• Methodology used for evaluation: Classification algorithms require a training phase and a
testing phase, in which the test examples are cleanly separated from the training data. How-
ever, in order to evaluate an algorithm, some of the labeled examples must be removed from
the training data, and the model must be constructed on the remaining examples. The problem here is that the
removal of labeled examples implicitly underestimates the power of the classifier, as it relates
to the full set of labels available. Therefore, how should this removal of labeled
examples be performed so as not to impact the learner’s accuracy too much?
Various strategies are possible, such as hold-out, bootstrapping, and cross-validation, of which
the first is the simplest to implement, and the last provides the most accurate estimates.
In the hold-out approach, a fixed percentage of the training examples are “held out”
and not used in the training. These examples are then used for evaluation. Since only a subset
of the training data is used, the evaluation with this approach tends to be pessimistic. Some
variations use stratified sampling, in which each class is sampled independently in proportion
to its frequency. This ensures that random variations of class frequency between training and test
examples are removed.
In bootstrapping, sampling with replacement is used for creating the training examples. The
most typical scenario is that n examples are sampled with replacement, as a result of which
the fraction of examples not sampled is equal to (1 − 1/n)^n ≈ 1/e, where e is the base of
the natural logarithm. The class accuracy is then evaluated as a weighted combination of the
accuracy a1 on the unsampled (test) examples, and the accuracy a2 on the full labeled data.
The full accuracy A is given by:

A = (1 − 1/e) · a1 + (1/e) · a2 ≈ 0.632 · a1 + 0.368 · a2
This procedure is repeated over multiple bootstrap samples and the final accuracy is reported.
Note that the component a2 tends to be highly optimistic, as a result of which the bootstrap-
ping approach produces highly optimistic estimates. It is most appropriate for smaller data
sets.
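A minimal sketch of this bootstrapping procedure follows, using the weights (1 − 1/e) ≈ 0.632 and 1/e ≈ 0.368 from the formula above. The decision tree base classifier and the number of bootstrap rounds are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bootstrap_accuracy(X, y, n_rounds=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    estimates = []
    for _ in range(n_rounds):
        idx = rng.integers(0, n, size=n)       # sample with replacement
        oob = np.setdiff1d(np.arange(n), idx)  # unsampled (test) examples
        model = DecisionTreeClassifier().fit(X[idx], y[idx])
        a1 = (model.predict(X[oob]) == y[oob]).mean()  # test accuracy
        a2 = (model.predict(X) == y).mean()            # full-data accuracy
        estimates.append(0.632 * a1 + 0.368 * a2)
    return float(np.mean(estimates))
```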
In cross-validation, the training data is divided into a set of k disjoint subsets. One of the k
subsets is used for testing, whereas the other (k − 1) subsets are used for training. This process
is repeated by using each of the k subsets as the test set, and the error is averaged over all
possibilities. This has the advantage that all examples in the labeled data have an opportunity
to be treated as test examples. Furthermore, when k is large, the training data size approaches
the full labeled data. Therefore, such an approach approximates the accuracy of the model
using the entire labeled data well. A special case is “leave-one-out” cross-validation, where
k is chosen to be equal to the number of training examples, and therefore each test segment
contains exactly one example. This is, however, expensive to implement.
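A minimal k-fold cross-validation sketch follows; the decision tree classifier and the default value of k are illustrative choices.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cross_validate(X, y, k=10, seed=0):
    rng = np.random.default_rng(seed)
    # Partition a random permutation of the data into k disjoint folds.
    folds = np.array_split(rng.permutation(len(X)), k)
    accuracies = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = DecisionTreeClassifier().fit(X[train], y[train])
        accuracies.append((model.predict(X[test]) == y[test]).mean())
    return float(np.mean(accuracies))
```

Setting k equal to the number of examples yields the “leave-one-out” special case described above.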
• Quantification of accuracy: This issue deals with the problem of quantifying the error of
a classification algorithm. At first sight, it would seem that it is most beneficial to use a
measure such as the absolute classification accuracy, which directly computes the fraction
of examples that are correctly classified. However, this may not always be appropriate in
all cases. For example, some algorithms may have much lower variance across different data
sets, and may therefore be more desirable. In this context, an important issue that arises is that
of the statistical significance of the results, when a particular classifier performs better than
another on a data set. Another issue is that the output of a classification algorithm may either
be presented as a discrete label for the test instance, or a numerical score, which represents the
propensity of the test instance to belong to a specific class. For the case where it is presented
as a discrete label, the accuracy is the most appropriate score.
In some cases, the output is presented as a numerical score, especially when the class is rare.
In such cases, the Precision-Recall or ROC curves may need to be used for the purposes of
classification evaluation. This is particularly important in imbalanced and rare-class scenarios.
Even when the output is presented as a binary label, the evaluation methodology is different
for the rare class scenario. In the rare class scenario, the misclassification of the rare class
is typically much more costly than that of the normal class. In such cases, cost sensitive
variations of evaluation models may need to be used for greater robustness. For example, the
cost sensitive accuracy weights the rare class and normal class examples differently in the
evaluation.
An excellent review of evaluation of classification algorithms may be found in [52]. A discussion
of evaluation of classification algorithms is provided in Chapter 24.
Bibliography
[1] C. Aggarwal. Outlier Analysis, Springer, 2013.
[2] C. Aggarwal and C. Reddy. Data Clustering: Algorithms and Applications, CRC Press, 2013.
[3] C. Aggarwal. Towards Systematic Design of Distance Functions in Data Mining Applications,
ACM KDD Conference, 2003.
[5] C. Aggarwal. On Density-based Transforms for Uncertain Data Mining, ICDE Conference,
2007.
[7] C. Aggarwal and H. Wang. Managing and Mining Graph Data, Springer, 2010.
[8] C. Aggarwal and C. Zhai. Mining Text Data, Chapter 11, Springer, 2012.
[9] C. Aggarwal and P. Yu. On Classification of High Cardinality Data Streams. SDM Conference,
2010.
[10] C. Aggarwal. Towards Effective and Interpretable Data Mining by Visual Interaction, ACM
SIGKDD Explorations, 2002.
[13] C. Aggarwal, J. Han, J. Wang, and P. Yu. A framework for classification of evolving data
streams. In IEEE TKDE Journal, 2006.
[14] C. Aggarwal. On Effective Classification of Strings with Wavelets, ACM KDD Conference,
2002.
[15] D. Aha, D. Kibler, and M. Albert. Instance-based learning algorithms, Machine Learning,
6(1):37–66, 1991.
[16] D. Aha. Lazy learning: Special issue editorial. Artificial Intelligence Review, 11:7–10, 1997.
[17] M. Ankerst, M. Ester, and H.-P. Kriegel. Towards an Effective Cooperation of the User and the
Computer for Classification, ACM KDD Conference, 2000.
[18] M. Belkin and P. Niyogi. Semi-supervised learning on Riemannian manifolds, Machine Learn-
ing, 56:209–239, 2004.
[19] C. Bishop. Neural Networks for Pattern Recognition, Oxford University Press, 1996.
[20] C. Bishop. Pattern Recognition and Machine Learning, Springer, 2007.
[21] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. Proceedings
of the Eleventh Annual Conference on Computational Learning Theory, pages 92–100, 1998.
[23] L. Breiman. Random forests, Machine Learning, 45(1):5–32, 2001.
[25] N. V. Chawla, N. Japkowicz, and A. Kotcz. Editorial: Special Issue on Learning from Imbal-
anced Data Sets, ACM SIGKDD Explorations Newsletter, 6(1):1–6, 2004.
[26] W. Cohen and Y. Singer. Context-sensitive learning methods for text categorization, ACM
Transactions on Information Systems, 17(2):141–173, 1999.
[27] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning, Machine
Learning, 5(2):201–221, 1994.
[28] D. Cohn, Z. Ghahramani and M. Jordan. Active learning with statistical models, Journal of
Artificial Intelligence Research, 4:129–145, 1996.
[29] O. Chapelle, B. Scholkopf, and A. Zien. Semi-supervised learning. Vol. 2, Cambridge: MIT
Press, 2006.
[30] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines and Other
Kernel-based Learning Methods, Cambridge University Press, 2000.
[31] P. Clark and T. Niblett. The CN2 Induction algorithm, Machine Learning, 3(4):261–283, 1989.
[32] B. Clarke. Bayes model averaging and stacking when model approximation error cannot be
ignored, Journal of Machine Learning Research, 4:683–712, 2003.
[33] G. Cormode and S. Muthukrishnan. An improved data stream summary: The count-min sketch
and its applications, Journal of Algorithms, 55(1):58–75, 2005.
[34] P. Domingos. Bayesian Averaging of Classifiers and the Overfitting Problem. ICML Confer-
ence, 2000.
[35] I. Dagan, Y. Karov, and D. Roth. Mistake-driven Learning in Text Categorization, Proceedings
of EMNLP, 1997.
[36] W. Dai, Y. Chen, G.-R. Xue, Q. Yang, and Y. Yu. Translated learning: Transfer learning across
different feature spaces. Proceedings of Advances in Neural Information Processing Systems,
2008.
[37] J. Dean and S. Ghemawat. MapReduce: A flexible data processing tool, Communications of the
ACM, 53:72–77, 2010.
[38] P. Domingos and M. J. Pazzani. On the optimality of the simple Bayesian classifier under
zero-one loss. Machine Learning, 29(2–3):103–130, 1997.
[39] R. Duda, P. Hart, and D. Stork, Pattern Classification, Wiley, 2001.
[40] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application
to boosting, Lecture Notes in Computer Science, 904:23–37, 1995.
[41] P. Flajolet and G. N. Martin. Probabilistic counting algorithms for data base applications.
Journal of Computer and System Sciences, 31(2):182–209, 1985.
[42] J. Gehrke, V. Ganti, R. Ramakrishnan, and W.-Y. Loh. BOAT: Optimistic Decision Tree Con-
struction, ACM SIGMOD Conference, 1999.
[43] J. Gehrke, R. Ramakrishnan, and V. Ganti. Rainforest—a framework for fast decision tree
construction of large datasets, VLDB Conference, pages 416–427, 1998.
[45] D. Lewis and J. Catlett. Heterogeneous Uncertainty Sampling for Supervised Learning, ICML
Conference, 1994.
[46] L. Hamel. Knowledge Discovery with Support Vector Machines, Wiley, 2009.
[48] M. Mehta, R. Agrawal, and J. Rissanen. SLIQ: A Fast Scalable Classifier for Data Mining,
EDBT Conference, 1996.
[49] M. C. F. de Oliveira and H. Levkowitz. Visual Data Mining: A Survey, IEEE Transactions on
Visualization and Computer Graphics, 9(3):378–394, 2003.
[50] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining,
Inference, and Prediction, Springer, 2013.
[51] S. Haykin. Neural Networks and Learning Machines, Prentice Hall, 2008.
[53] T. Joachims. Making Large scale SVMs practical, Advances in Kernel Methods, Support Vector
Learning, pages 169–184, Cambridge: MIT Press, 1998.
[54] T. Joachims. Training Linear SVMs in Linear Time, KDD, pages 217–226, 2006.
[55] D. Keim. Information and visual data mining, IEEE Transactions on Visualization and Com-
puter Graphics, 8(1):1–8, 2002.
[56] W. Lam and C. Y. Ho. Using a Generalized Instance Set for Automatic Text Categorization.
ACM SIGIR Conference, 1998.
[57] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold
algorithm. Machine Learning, 2:285–318, 1988.
[58] B. Liu, W. Hsu, and Y. Ma. Integrating Classification and Association Rule Mining, ACM
KDD Conference, 1998.
[59] H. Liu and H. Motoda. Feature Selection for Knowledge Discovery and Data Mining, Springer,
1998.
[65] S. K. Murthy. Automatic construction of decision trees from data: A multi-disciplinary survey,
Data Mining and Knowledge Discovery, 2(4):345–389, 1998.
[66] H. T. Ng, W. Goh and K. Low. Feature Selection, Perceptron Learning, and a Usability Case
Study for Text Categorization. ACM SIGIR Conference, 1997.
[67] K. Nigam, A. McCallum, S. Thrun, and T. Mitchell. Text classification from labeled and unla-
beled documents using EM, Machine Learning, 39(2–3):103–134, 2000.
[68] S. J. Pan and Q. Yang. A survey on transfer learning, IEEE Transactions on Knowledge and
Data Engineering, 22(10):1345–1359, 2010.
[69] H. Peng, F. Long, and C. Ding. Feature selection based on mutual information: Criteria of max-
dependency, max-relevance, and min-redundancy, IEEE Transactions on Pattern Analysis and
Machine Intelligence, 27(8):1226–1238, 2005.
[70] G. Qi, C. Aggarwal, and T. Huang. Towards Semantic Knowledge Propagation from Text
Corpus to Web Images, WWW Conference, 2011.
[71] G. Qi, C. Aggarwal, Y. Rui, Q. Tian, S. Chang, and T. Huang. Towards Cross-Category Knowl-
edge Propagation for Learning Visual Concepts, CVPR Conference, 2011.
[72] J. Quinlan. C4.5: Programs for Machine Learning, Morgan-Kaufmann Publishers, 1993.
[73] J. R. Quinlan. Induction of decision trees, Machine Learning, 1(1):81–106, 1986.
[74] J. Rocchio. Relevance feedback in information retrieval. In G. Salton, editor, The SMART
Retrieval System: Experiments in Automatic Document Processing, pages 313–323, Prentice Hall,
Englewood Cliffs, NJ, 1971.
[75] B. Scholkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regulariza-
tion, Optimization, and Beyond, Cambridge University Press, 2001.
[76] I. Steinwart and A. Christmann. Support Vector Machines, Springer, 2008.
[77] H. Schutze, D. Hull, and J. Pedersen. A Comparison of Classifiers and Document Representa-
tions for the Routing Problem. ACM SIGIR Conference, 1995.
[78] F. Sebastiani. Machine learning in automated text categorization, ACM Computing Surveys,
34(1):1–47, 2002.
[79] B. Settles. Active Learning, Morgan and Claypool, 2012.
[80] B. Settles and M. Craven. An analysis of active learning strategies for sequence labeling
tasks, Proceedings of the Conference on Empirical Methods in Natural Language Process-
ing (EMNLP), pages 1069–1078, 2008.
[81] H. Seung, M. Opper, and H. Sompolinsky. Query by Committee. Fifth Annual Workshop on
Computational Learning Theory, 1992.
[82] J. Shafer, R. Agrawal, and M. Mehta. SPRINT: A scalable parallel classifier for data mining,
VLDB Conference, pages 544–555, 1996.
[83] T. Soukop and I. Davidson. Visual Data Mining: Techniques and Tools for Data Visualization,
Wiley, 2002.
[84] P.-N. Tan, M. Steinbach, and V. Kumar. Introduction to Data Mining. Pearson, 2005.
[85] V. Vapnik. The Nature of Statistical Learning Theory, Springer, New York, 1995.
[86] H. Wang, W. Fan, P. Yu, and J. Han. Mining Concept-Drifting Data Streams with Ensemble
Classifiers, KDD Conference, 2003.
[87] T. White. Hadoop: The Definitive Guide. Yahoo! Press, 2011.
[88] E. Wiener, J. O. Pedersen, and A. S. Weigend. A neural network approach to topic spotting.
SDAIR, pages 317–332, 1995.
[89] D. Wettschereck, D. Aha, and T. Mohri. A review and empirical evaluation of feature weight-
ing methods for a class of lazy learning algorithms, Artificial Intelligence Review, 11(1–
5):273–314, 1997.
[91] Z. Xing, J. Pei, and E. Keogh. A brief survey on sequence classification, SIGKDD Explo-
rations, 12(1):40–48, 2010.
[93] M. J. Zaki and C. Aggarwal. XRules: A Structural Classifier for XML Data, ACM KDD Con-
ference, 2003.
[94] B. Zenko. Is combining classifiers better than selecting the best one? Machine Learning,
54(3):255–273, 2004.
[95] Y. Zhu, S. J. Pan, Y. Chen, G.-R. Xue, Q. Yang, and Y. Yu. Heterogeneous Transfer Learning
for Image Classification. Special Track on AI and the Web, associated with The Twenty-Fourth
AAAI Conference on Artificial Intelligence, 2010.
[96] X. Zhu and A. Goldberg. Introduction to Semi-Supervised Learning, Morgan and Claypool,
2009.