Unit 1 (DMW)
The major reason that data mining has attracted a great deal of attention in the
information industry in recent years is the wide availability of huge amounts
of data and the imminent need for turning such data into useful information and
knowledge. The information and knowledge gained can be used for applications
ranging from business management, production control, and market analysis, to
engineering design and science exploration.
Data mining is also popularly referred to by other terms, such as knowledge extraction,
data/pattern analysis, data archaeology, and data dredging. Many people treat data mining
as a synonym for another popularly used term, "Knowledge Discovery in Databases", or KDD.
How is a data warehouse different from a database? How are they similar?
Data can be mined from many kinds of repositories, including flat files, relational
databases, data warehouses, transactional databases, and specific application-oriented
databases, such as spatial databases, time-series databases, text databases, and
multimedia databases.
Flat files: Flat files are actually the most common data source for data
mining algorithms, especially at the research level. Flat files are simple data files in
text or binary format with a structure known by the data mining algorithm to be
applied. The data in these files can be transactions, time-series data, scientific
measurements, etc.
The most commonly used query language for relational database is SQL,
which allows retrieval and manipulation of the data stored in the tables, as well as
the calculation of aggregate functions such as average, sum, min, max and count.
For instance, an SQL query to select the videos grouped by category would be:
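A minimal sketch of such a query, assuming a hypothetical table videos(video_id, title, category) that is not part of the text above:

SELECT category, COUNT(*) AS video_count
FROM videos
GROUP BY category;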
Data mining algorithms using relational databases can be more versatile than
data mining algorithms specifically written for flat files, since they can take
advantage of the structure inherent to relational databases. While data mining can
benefit from SQL for data selection, transformation, and consolidation, it goes
beyond what SQL can provide, performing tasks such as prediction, comparison, and
the detection of deviations.
Data warehouses
The data cube structure that stores the primitive or lowest level of
information is called a base cuboid. Its corresponding higher level multidimensional
(cube) structures are called (non-base) cuboids. A base cuboid together with all of
its corresponding higher level cuboids form a data cube. By providing
multidimensional data views and the precomputation of summarized data, data
warehouse systems are well suited for On-Line Analytical Processing, or OLAP. OLAP
operations make use of background knowledge regarding the domain of the data
being studied in order to allow the presentation of data at different levels of
abstraction. Such operations accommodate different user viewpoints. Examples of
OLAP operations include drill-down and roll-up, which allow the user to view the
data at differing degrees of summarization.
Transactional databases
Object-oriented databases
An object-oriented database is based on the object-oriented programming paradigm, where
data are organized into classes and class hierarchies. Each entity in the database is
considered as an object. The object contains a set of variables that describe the object,
a set of messages that the object can use to communicate with other objects or with the
rest of the database system, and a set of methods, where each method holds the code to
implement a message.
A multimedia database stores images, audio, and video data, and is used in
applications such as picture content-based retrieval, voice-mail systems, video-on-
demand systems, the World Wide Web, and speech-based user interfaces.
CONCEPT DESCRIPTION
Data can be associated with classes or concepts. Concept description describes a given
set of data in a concise and summarized manner, presenting interesting general properties
of the data. Such descriptions can be derived via data characterization, which summarizes
the data of the class under study (the target class) in general terms, or via data
discrimination, which compares the general features of the target class with those of one
or a set of contrasting classes.
Example
The general features of students with high GPAs may be compared with the general
features of students with low GPAs. The resulting description could be a general
comparative profile of the students such as 75% of the students with high GPAs are
fourth-year computing science students while 65% of the students with low GPAs
are not.
A discovered pattern is considered interesting if it is (1) easily understood by humans,
(2) valid on new or test data with some degree of certainty, (3) potentially useful, and
(4) novel.
There are many data mining systems available or being developed. Some are
specialized systems dedicated to a given data source or confined to limited data
mining functionalities; others are more versatile and comprehensive. Data mining
systems can be categorized according to various criteria; among other classifications
are the following:
Classification according to the type of data source mined: this classification
categorizes data mining systems according to the type of data handled such as
spatial data, multimedia data, time-series data, text data, World Wide Web, etc.
2. State the data mining primitives and list the advantages of data
mining over other approaches of analyzing data. (6 Marks Nov 2015)
Task-relevant data: This primitive specifies the data upon which mining is to be
performed. It involves specifying the database and tables or data warehouse
containing the relevant data, conditions for selecting the relevant data, the relevant
attributes or dimensions for exploration, and instructions regarding the ordering or
grouping of the data retrieved.
Knowledge type to be mined: This primitive specifies the specific data mining
function to be performed, such as characterization, discrimination, association,
classification, clustering, or evolution analysis. As well, the user can be more
specific and provide pattern templates that all discovered patterns must match.
These templates or meta patterns (also called meta rules or meta queries), can be
used to guide the discovery process.
Background knowledge to be used in the discovery process: Knowledge about the domain to
be mined, such as concept hierarchies or deduction rules, can help focus and speed up a
data mining process, or judge the interestingness of discovered patterns.
DATA PREPROCESSING
Data in the real world is dirty: it can be incomplete, noisy, and inconsistent.
Such data needs to be preprocessed in order to improve the quality of the data and,
in turn, the quality of the mining results.
Without quality data there can be no quality mining results; quality decisions must
always be based on quality data.
If there is much irrelevant and redundant information present, or noisy and unreliable
data, then knowledge discovery during the training phase is more difficult.
Inconsistencies can arise, for example, from functional dependency violations (e.g., when
some linked data are modified).
Major tasks in data preprocessing:
Data cleaning
Fill in missing values, smooth noisy data, identify or remove outliers, and resolve
inconsistencies.
Data integration
Integration of multiple databases, data cubes, or files into a coherent data store.
Data transformation
Normalization and aggregation of the data into forms appropriate for mining.
Data reduction
Obtains a reduced representation of the data that is much smaller in volume but produces
the same (or almost the same) analytical results.
Data discretization
Part of data reduction, but with particular importance, especially for numerical data.
Data cleaning
Data cleaning routines attempt to fill in missing values, smooth out noise while
identifying outliers, and correct inconsistencies in the data.
The various methods for handling the problem of missing values in data tuples
include:
(a) Ignoring the tuple: This is usually done when the class label is missing
(assuming the mining task involves classification or description). This method is not
very effective unless the tuple contains several
attributes with missing values. It is especially poor when the percentage of missing
values per attribute
varies considerably.
(b) Manually filling in the missing value: In general, this approach is time-
consuming and may not be a reasonable task for large data sets with many missing
values, especially when the value to be filled in is not easily determined.
(c) Using a global constant to fill in the missing value: Replace all missing
attribute values by the same constant, such as a label like "Unknown" or -∞. If
missing values are replaced by, say, "Unknown", then the mining program may
mistakenly think that they form an interesting concept, since they all have a value
in common, that of "Unknown". Hence, although this method is simple, it is not
recommended.
(d) Using the attribute mean for quantitative (numeric) values or
attribute mode for categorical (nominal) values, for all samples belonging
to the same class as the given tuple: For example, if classifying customers
according to credit risk, replace the missing value with the average income value
for customers in the same credit risk category as that of the given tuple.
(e) Using the most probable value to fill in the missing value: This may be
determined with regression, inference-based tools using Bayesian formalism, or
decision tree induction. For example, using the other customer attributes in your
data set, you may construct a decision tree to predict the missing values for
income.
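As an illustration of methods (c) and (d), a small pandas sketch; the table, the column names, and the values are made up for the example:

import pandas as pd

# Hypothetical customer data (column names are illustrative, not from the text).
df = pd.DataFrame({
    "credit_risk": ["low", "low", "high", "high", "high"],
    "income": [52000.0, 48000.0, 31000.0, None, 35000.0],
})

# (c) Fill every missing income with one global constant.
global_fill = df["income"].fillna(-1)

# (d) Fill a missing income with the mean income of tuples in the same
#     credit-risk class as the tuple that has the missing value.
class_mean_fill = df.groupby("credit_risk")["income"].transform(
    lambda s: s.fillna(s.mean())
)
print(class_mean_fill)   # the missing value becomes (31000 + 35000) / 2 = 33000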
Noisy data:
Noise is a random error or variance in a measured variable. Binning methods smooth a
sorted data value by consulting its "neighborhood", that is, the values around it. In
this technique, the sorted values are distributed into a number of buckets, or bins.
o Sorted data for price (in dollars): 4, 8, 15, 21, 21, 24, 25, 28, 34
o Partition into (equi-depth) bins (depth of 3, since each bin contains three values):
- Bin 1: 4, 8, 15
- Bin 2: 21, 21, 24
- Bin 3: 25, 28, 34
In smoothing by bin means, each value in a bin is replaced by the mean value
of the bin. For example, the mean of the values 4, 8, and 15 in Bin 1 is 9. Therefore,
each original value in this bin is replaced by the value 9. Similarly, smoothing by bin
medians can be employed, in which each bin value is replaced by the bin median. In
smoothing by bin boundaries, the minimum and maximum values in a given bin are
identified as the bin boundaries. Each bin value is then replaced by the closest
boundary value.
Suppose that the data for analysis include the attribute age. The age values for the
data tuples are (in increasing order): 13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25,
25, 25, 30, 33, 33, 35, 35, 35, 35, 36, 40, 45, 46, 52, 70.
(a) Use smoothing by bin means to smooth the above data, using a bin depth of 3.
Illustrate your steps.
Comment on the effect of this technique for the given data.
The following steps are required to smooth the above data using smoothing
by bin means with a bin
depth of 3.
Step 1: Sort the data. (This step is not required here as the data are already
sorted.)
Step 2: Partition the data into equi-depth bins of depth 3:
Bin 1: 13, 15, 16 Bin 2: 16, 19, 20 Bin 3: 20, 21, 22
Bin 4: 22, 25, 25 Bin 5: 25, 25, 30 Bin 6: 33, 33, 35
Bin 7: 35, 35, 35 Bin 8: 36, 40, 45 Bin 9: 46, 52, 70
Step 3: Calculate the arithmetic mean of each bin.
Step 4: Replace each of the values in each bin by the arithmetic mean calculated for
the bin.
Bin 1: 14, 14, 14 Bin 2: 18, 18, 18 Bin 3: 21, 21, 21
Bin 4: 24, 24, 24 Bin 5: 26, 26, 26 Bin 6: 33, 33, 33
Bin 7: 35, 35, 35 Bin 8: 40, 40, 40 Bin 9: 56, 56, 56
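A short Python sketch of the same procedure; it reproduces the equi-depth bins and the bin means above (the table truncates the means to whole numbers, which int() mimics here):

ages = [13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25,
        25, 25, 30, 33, 33, 35, 35, 35, 35, 36, 40, 45, 46, 52, 70]
depth = 3

# Step 2: equi-depth bins of depth 3 (the data are already sorted).
bins = [ages[i:i + depth] for i in range(0, len(ages), depth)]

# Steps 3-4: replace every value in a bin by the bin mean
# (truncated to a whole number, as in the table above).
smoothed = [[int(sum(b) / len(b))] * len(b) for b in bins]

for k, (b, s) in enumerate(zip(bins, smoothed), start=1):
    print(f"Bin {k}: {b} -> {s}")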
Linear regression involves finding the best line to fit two variables, so that one
variable can be used to predict the other. Using regression to find a mathematical
equation to fit the data helps smooth out the noise.
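For instance, a minimal NumPy sketch of fitting a least-squares line to two variables (the x and y values are illustrative):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2])   # noisy values, roughly y = 2x

slope, intercept = np.polyfit(x, y, deg=1)       # least-squares line y = slope*x + intercept
smoothed = slope * x + intercept                 # values predicted by the fitted line
print(slope, intercept)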
Field overloading: a source of errors that typically occurs when developers squeeze new
attribute definitions into unused portions of already defined attributes.
A unique rule says that each value of the given attribute must be different from all
other values of that attribute.
A consecutive rule says that there can be no missing values between the lowest and
highest values of the attribute, and that all values must also be unique.
A null rule specifies the use of blanks, question marks, special characters, or other
strings that may indicate the null condition, and how such values should be handled.
Data Integration
Data integration combines data from multiple sources into a coherent data store. There
are a number of issues to consider during data integration.
Issues:
1. Correlation analysis
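The equation referred to below is the correlation coefficient between two numerical attributes A and B: r(A,B) = sum((a_i - mean(A)) * (b_i - mean(B))) / ((n - 1) * sigma_A * sigma_B), where n is the number of tuples and sigma_A, sigma_B are the sample standard deviations of A and B. A quick sketch with illustrative values, using NumPy's corrcoef, which computes the same quantity:

import numpy as np

a = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
b = np.array([1.0, 2.0, 2.9, 4.1, 5.0])   # grows with a, so r should be close to +1

r = np.corrcoef(a, b)[0, 1]               # Pearson correlation coefficient
print(r)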
If the result of the equation is greater than 0, then A and B are positively correlated,
meaning that the values of A increase as the values of B increase. The higher the value,
the stronger the correlation; a high correlation may indicate that one of the attributes
is redundant and can be removed.
If the result is equal to 0, then A and B are independent and there is no correlation
between them.
If the result is less than 0, then A and B are negatively correlated: the values of one
attribute increase as the values of the other decrease, so each attribute discourages
the other.
Example:
Data Transformation
Normalization
Normalization scales attribute data so that the values fall within a small, specified
range. This is useful for classification algorithms involving neural networks, and for
distance-based methods such as nearest-neighbor classification and clustering. There are
three methods for data normalization:
min-max normalization
z-score normalization
normalization by decimal scaling
Min-max normalization: performs a linear transformation on the original data values.
It can be defined as:
v' = ((v - min_A) / (max_A - min_A)) * (new_max_A - new_min_A) + new_min_A
where min_A and max_A are the minimum and maximum values of attribute A, and
[new_min_A, new_max_A] is the new range.
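A small Python sketch of the three normalization methods; the income example (value $73,600 with assumed min $12,000, max $98,000, mean $54,000, and standard deviation $16,000) uses illustrative figures that are not taken from the text above:

def min_max(v, min_a, max_a, new_min=0.0, new_max=1.0):
    # Linear (min-max) normalization of v into [new_min, new_max].
    return (v - min_a) / (max_a - min_a) * (new_max - new_min) + new_min

def z_score(v, mean_a, std_a):
    # Z-score normalization: how many standard deviations v lies from the mean.
    return (v - mean_a) / std_a

def decimal_scaling(v, j):
    # Normalization by decimal scaling: divide by 10^j so that |v'| < 1.
    return v / (10 ** j)

print(min_max(73600, 12000, 98000))   # about 0.716
print(z_score(73600, 54000, 16000))   # 1.225
print(decimal_scaling(-986, 3))       # -0.986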
Data reduction
Data reduction techniques can be applied to obtain a reduced representation of the data
set that is much smaller in volume, yet produces the same (or almost the same) analytical
results. Strategies for data reduction include:
1. Data cube aggregation, where aggregation operations are applied to the data in the
construction of a data cube.
2. Dimensionality reduction, where irrelevant, weakly relevant, or redundant attributes
or dimensions are detected and removed.
3. Data compression, where encoding mechanisms are used to reduce the data set size.
4. Numerosity reduction, where the data are replaced or estimated by alternative,
smaller data representations such as parametric models (which need store only the model
parameters instead of the actual data) or nonparametric methods such as clustering,
sampling, and the use of histograms.
Data cube aggregation: Reduce the data to the concept level needed in the
analysis. Queries regarding aggregated information should be answered using data
cube when possible. Data cubes store multidimensional aggregated information.
The following figure shows a data cube for multidimensional analysis of sales data
with respect to annual sales per item type for each branch.
Each cell holds an aggregate data value corresponding to a data point in
multidimensional space. Data cubes provide fast access to precomputed, summarized data,
thereby benefiting on-line analytical processing as well as data mining.
The cube created at the lowest level of abstraction is referred to as the base cuboid;
the cube at the highest level of abstraction is the apex cuboid. Data cubes created for
varying levels of abstraction are often referred to as cuboids, so that a "data cube"
may instead refer to a lattice of cuboids. Each higher level of abstraction further
reduces the resulting data size.
The following database consists of sales per quarter for the years 1997-1999.
Suppose the analyst is interested in annual sales rather than sales per quarter. The
above data can be aggregated so that the resulting data summarize the total sales per
year instead of per quarter. The resulting data set is smaller in volume, without loss
of the information necessary for the analysis task.
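A sketch of the same roll-up in code; the quarterly sales figures are invented for illustration, only the year/quarter structure follows the description above:

import pandas as pd

sales = pd.DataFrame({
    "year":    [1997] * 4 + [1998] * 4 + [1999] * 4,
    "quarter": ["Q1", "Q2", "Q3", "Q4"] * 3,
    "sales":   [224, 408, 350, 586, 400, 590, 512, 612, 540, 680, 602, 740],
})

# Roll up from quarterly to annual totals: a smaller data set that still
# answers questions about yearly sales.
annual = sales.groupby("year", as_index=False)["sales"].sum()
print(annual)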
Dimensionality Reduction
Dimensionality reduction reduces the data set size by removing irrelevant or redundant
attributes. For this, methods of attribute subset selection are applied. A heuristic
method of attribute subset selection is explained here:
Feature selection is a must for any data mining product. That is because,
when you build a data mining model, the dataset frequently contains more
information than is needed to build the model. For example, a dataset may contain
500 columns that describe characteristics of customers, but perhaps only 50 of
those columns are used to build a particular model. If you keep the unneeded
columns while building the model, more CPU and memory are required during the
training process, and more storage space is required for the completed model.
Basic heuristic methods of attribute subset selection include the following:
1. Step-wise forward selection: The procedure starts with an empty set of attributes.
The best of the original attributes is determined and added to the set; at each
subsequent step, the best of the remaining attributes is added.
2. Step-wise backward elimination: The procedure starts with the full set of
attributes. At each step, it removes the worst attribute remaining in the set.
3. Combination of forward selection and backward elimination: The two methods can be
combined so that, at each step, the procedure selects the best attribute and removes
the worst from among the remaining attributes.
If the mining algorithm itself is used to determine the attribute subset, the method is
called a wrapper approach; otherwise it is a filter approach. The wrapper approach
generally leads to greater accuracy, since it optimizes the evaluation measure of the
mining algorithm while removing attributes.
Data compression
Wavelet transforms
Principal components analysis.
Wavelet compression is a form of data compression well suited for image
compression. The discrete wavelet transform (DWT) is a linear signal processing
technique that, when applied to a data vector D, transforms it to a numerically
different vector, D', of wavelet coefficients.
1. The length, L, of the input data vector must be an integer power of two. This
condition can be met by padding the data vector with zeros, as necessary.
2. Each transform involves applying two functions: the first applies some data smoothing
(such as a sum or weighted average), and the second performs a weighted difference,
which acts to bring out the detailed features of the data.
3. The two functions are applied to pairs of the input data, resulting in two sets of
data of length L/2.
4. The two functions are recursively applied to the sets of data obtained in the
previous loop, until the resulting data sets obtained are of desired length.
5. A selection of values from the data sets obtained in the above iterations are
designated the wavelet coefficients of the transformed data.
Wavelet coefficients larger than some user-specified threshold are retained; the
remaining coefficients are set to 0.
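The steps above describe the general procedure; as one simple, concrete instance, a sketch of the (unnormalized) Haar transform on a length-8 vector, where averaging plays the role of smoothing and the halved differences are the detail coefficients:

def haar_dwt(data):
    # Recursive (unnormalized) Haar transform of a vector whose length is a power of two.
    data = list(data)
    output = []
    while len(data) > 1:
        averages = [(data[i] + data[i + 1]) / 2 for i in range(0, len(data), 2)]   # smoothing
        details  = [(data[i] - data[i + 1]) / 2 for i in range(0, len(data), 2)]   # weighted differences
        output = details + output   # keep the detail coefficients of every level
        data = averages             # recurse on the smoothed half (length is halved)
    return data + output            # overall average followed by the detail coefficients

coeffs = haar_dwt([2, 2, 0, 2, 3, 5, 4, 4])
print(coeffs)   # small coefficients can then be thresholded to 0 for compression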
The principal components (new set of axes) give important information about
variance. Using the strongest components one can reconstruct a good
approximation of the original signal.
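A brief sketch of principal components analysis with NumPy (centre the data, take the SVD, keep the strongest component); the 2-D data points are illustrative:

import numpy as np

X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2],
              [3.1, 3.0], [2.3, 2.7], [2.0, 1.6], [1.0, 1.1]])

Xc = X - X.mean(axis=0)                       # centre each attribute
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt                               # rows are the principal components (new axes)
variance = S ** 2 / (len(X) - 1)              # variance captured by each component

X_reduced = Xc @ components[:1].T             # keep only the strongest component
X_approx = X_reduced @ components[:1] + X.mean(axis=0)   # approximate reconstruction
print(variance)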
Numerosity Reduction
Data volume can be reduced by choosing alternative, smaller forms of data
representation. These techniques can be:
Parametric method
Non parametric method
Parametric: Assume the data fits some model, then estimate model parameters,
and store only the parameters, instead of actual data.
Non parametric: In which histogram, clustering and sampling is used to store
reduced form of data.
2 Histogram
Divide data into buckets and store average (sum) for each bucket
A bucket represents an attribute-value/frequency pair
It can be constructed optimally in one dimension using dynamic programming
It divides up the range of possible values in a data set into classes or groups.
For each group, a rectangle (bucket) is constructed with a base length equal
to the range of values in that specific group, and an area proportional to the
number of observations falling into that group.
The buckets are displayed along a horizontal axis, while the height of a bucket
represents the average frequency of the values.
Example:
The following data are a list of prices of commonly sold items. The numbers have
been sorted.
1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14, 15, 15, 15, 15, 15, 15, 18, 18,
18, 18, 18, 18, 18, 18, 20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25,
28, 28, 30, 30, 30.
Draw histogram plot for price where each bucket should have equi width of 10
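A short sketch that counts the observations in each equi-width bucket of width 10 for the price list above; it prints 13, 25, and 14 observations for the buckets 1-10, 11-20, and 21-30, which are the bar heights the histogram would show:

from collections import Counter

prices = [1, 1, 5, 5, 5, 5, 5, 8, 8, 10, 10, 10, 10, 12, 14, 14, 14,
          15, 15, 15, 15, 15, 15, 18, 18, 18, 18, 18, 18, 18, 18,
          20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 25, 25, 25, 25, 25,
          28, 28, 30, 30, 30]

# Equi-width buckets of width 10: 1-10, 11-20, 21-30.
counts = Counter((p - 1) // 10 for p in prices)
for b in sorted(counts):
    print(f"{10 * b + 1}-{10 * (b + 1)}: {counts[b]}")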
The buckets can be determined using several partitioning rules, including equi-width,
equi-depth, V-Optimal, and MaxDiff partitioning. V-Optimal and MaxDiff histograms tend
to be the most accurate and practical.
Histograms are highly effective at approximating both sparse and dense data, as
well as highly skewed, and uniform data.
Clustering techniques consider data tuples as objects. They partition the objects
into groups, or clusters, so that objects within a cluster are "similar" to one another
and "dissimilar" to objects in other clusters. Similarity is commonly defined in terms
of how "close" the objects are in space, based on a distance function.
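A small sketch of this idea with k-means clustering (scikit-learn is assumed to be available; the 2-D tuples are generated for illustration). Storing only the cluster centres and each tuple's cluster id is the reduced representation:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Illustrative 2-D tuples drawn around three centres.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2))
               for c in ([0, 0], [3, 3], [0, 4])])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)   # compact representation of the 150 tuples
print(km.labels_[:10])       # cluster id stored for each tuple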
Sampling
In a stratified sample, the data set is divided into mutually disjoint parts called
strata, and a simple random sample is drawn from each stratum, for example, one sample
for each customer age group. In this way, the age group having the smallest number of
customers will be sure to be represented.
Advantages of sampling
1. An advantage of sampling for data reduction is that the cost of obtaining a
sample is proportional to the size of the sample, n, as opposed to N, the data
set size. Hence, sampling complexity is potentially sub-linear to the size of
the data.
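A brief sketch of simple random sampling without replacement and of a stratified sample drawn per age group (pandas assumed; the customer table and group sizes are illustrative):

import pandas as pd

customers = pd.DataFrame({
    "customer_id": range(1, 11),
    "age_group":   ["young"] * 6 + ["middle"] * 3 + ["senior"],
})

# Simple random sample without replacement (SRSWOR) of n = 4 tuples.
srswor = customers.sample(n=4, replace=False, random_state=0)

# Stratified sample: draw one tuple from every age group, so even the
# smallest group is represented.
stratified = customers.groupby("age_group", group_keys=False).apply(
    lambda g: g.sample(n=1, random_state=0)
)
print(stratified)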
Discretization:
Discretization techniques can be used to reduce the number of values for a given
continuous attribute, by dividing the range of the attribute into intervals. Interval
labels can then be used to replace actual data values.
Concept Hierarchy
A concept hierarchy defines a sequence of mappings from a set of low-level concepts
(such as numeric values for age) to higher-level, more general concepts (such as young,
middle-aged, or senior), and so reduces the data by replacing detailed values with
their higher-level labels.
There are five methods for numeric concept hierarchy generation. These include:
1. binning,
2. histogram analysis,
3. clustering analysis,
4. entropy-based discretization, and
5. data segmentation by natural partitioning.
Example:
Suppose that profits at different branches of a company for the year 1997 cover a
wide range, from -$351,976.00 to $4,700,896.50. A user wishes to have a concept
hierarchy for profit automatically generated.
Suppose that the data within the 5%-tile and 95%-tile are between -$159,876 and
$1,838,761. The 3-4-5 rule can then be applied as follows.
Step 1: Based on the above information, the minimum and maximum values are
MIN = -$351,976.00 and MAX = $4,700,896.50. The low (5%-tile) and high (95%-tile)
values to be considered for the top or first level of segmentation are
LOW = -$159,876 and HIGH = $1,838,761.
Step 2: Given LOW and HIGH, the most significant digit is at the million-dollar
position (i.e., msd = 1,000,000). Rounding LOW down to the million-dollar position,
we get LOW' = -$1,000,000; rounding HIGH up to the million-dollar position, we get
HIGH' = +$2,000,000.
Step 3: Since this interval ranges over 3 distinct values at the most significant
digit, i.e., (2,000,000 - (-1,000,000)) / 1,000,000 = 3, the segment is partitioned
into 3 equi-width sub-segments according to the 3-4-5 rule: (-$1,000,000 - $0],
($0 - $1,000,000], and ($1,000,000 - $2,000,000]. This represents the top tier of the
hierarchy.
Step 4: We now examine the MIN and MAX values to see how they "fit" into the
first-level partitions. Since the first interval, (-$1,000,000 - $0], covers the MIN
value, i.e., LOW' < MIN, we can adjust the left boundary of this interval to make the
interval smaller. The most significant digit of MIN is at the hundred-thousand-dollar
position. Rounding MIN down to this position, we get MIN' = -$400,000.
Therefore, the first interval is redefined as (-$400,000 - 0]. Since the last interval,
($1,000,000-$2,000,000] does not cover the MAX value, i.e., MAX > HIGH, we need
to create a new interval to cover it. Rounding up MAX at its most significant digit
position, the new interval is ($2,000,000 - $5,000,000]. Hence, the top most level of
the hierarchy contains four partitions, (-$400,000 - $0], ($0 - $1,000,000],
($1,000,000 - $2,000,000], and ($2,000,000 - $5,000,000].
Step 5: Recursively, each interval can be further partitioned according to the 3-4-5
rule to form the next lower level of the hierarchy:
- The first interval (-$400,000 - $0] is partitioned into 4 sub-intervals: (-
$400,000 - -$300,000], (-$300,000 - -$200,000], (-$200,000 - -$100,000],
and (-$100,000 - $0].
- The second interval, ($0- $1,000,000], is partitioned into 5 sub-intervals: ($0 -
$200,000], ($200,000 - $400,000], ($400,000 - $600,000], ($600,000 -
$800,000], and ($800,000 -$1,000,000].
- The third interval, ($1,000,000 - $2,000,000], is partitioned into 5 sub-
intervals: ($1,000,000 - $1,200,000], ($1,200,000 - $1,400,000], ($1,400,000
- $1,600,000], ($1,600,000 - $1,800,000], and ($1,800,000 - $2,000,000].
- The last interval, ($2,000,000 - $5,000,000], is partitioned into 3 sub-
intervals: ($2,000,000 - $3,000,000], ($3,000,000 - $4,000,000], and
($4,000,000 - $5,000,000].
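A simplified sketch of the top-tier segmentation of the 3-4-5 rule (equi-width splits only; the full rule treats 7 distinct values as a 2-3-2 split, which is omitted here for brevity):

import math

def three_four_five_top_level(low, high):
    # Simplified top-tier segmentation of the 3-4-5 rule (equi-width only).
    msd = 10 ** int(math.floor(math.log10(max(abs(low), abs(high)))))  # most significant digit position
    low_r = math.floor(low / msd) * msd    # round LOW down at the msd
    high_r = math.ceil(high / msd) * msd   # round HIGH up at the msd
    distinct = round((high_r - low_r) / msd)
    if distinct in (3, 6, 7, 9):
        parts = 3
    elif distinct in (2, 4, 8):
        parts = 4
    else:                                  # 1, 5 or 10 distinct values
        parts = 5
    width = (high_r - low_r) / parts
    return [(low_r + i * width, low_r + (i + 1) * width) for i in range(parts)]

# For LOW = -159876 and HIGH = 1838761 (the 5%- and 95%-tiles above), msd is
# 1,000,000, the rounded range is (-1,000,000, 2,000,000], it covers 3 distinct
# values, and the top tier is split into three intervals of width 1,000,000.
print(three_four_five_top_level(-159876, 1838761))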
Data scrubbing tools use simple domain knowledge to detect errors and make corrections
in the data.
Data auditing tools find discrepancies by analyzing the data to discover rules and
relationships, and by detecting data that violate such conditions.
Describe the differences between the following approaches for the integration of a
data mining system with a database or data warehouse system: no coupling, loose
coupling, semitight coupling, and tight coupling. State which approach you think is
the most popular, and why.
The differences between the following architectures for the integration of a data
mining system with a database or data warehouse system are as follows.
No coupling:
The data mining system uses sources such as flat files to obtain the initial data set
to be mined since no database system or data warehouse system functions are
implemented as part of the process. Thus, this architecture represents a poor design
choice.
Loose coupling:
The data mining system is not integrated with the database or data warehouse
system beyond their use as the source of the initial data set to be mined, and
possible use in storage of the results. Thus, this architecture can take advantage of
the flexibility, efficiency and features such as indexing that the database and data
warehousing systems may provide. However, it is difficult for loose coupling to
achieve high scalability and good performance with large data sets as many such
systems are memory-based.
Semitight coupling:
Some of the data mining primitives, such as aggregation, sorting, or the precomputation
of statistical functions, are efficiently implemented in the database or data warehouse
system for use by the data mining system during mining-query processing. Also, some
frequently used intermediate mining results can be precomputed and stored in the
database or data warehouse system, thereby enhancing the performance of the data mining
system.
Tight coupling:
The database or data warehouse system is fully integrated as part of the data
mining system and thereby provides optimized data mining query processing. Thus,
the data mining subsystem is treated as one functional component of an information
system. This is a highly desirable architecture, as it facilitates efficient
implementations of data mining functions, high system performance, and an integrated
information processing environment.
From the descriptions of the architectures provided above, it can be seen that tight
coupling is the best alternative, setting aside technical and implementation issues.
However, as much of the technical infrastructure needed in a tightly coupled system is
still evolving, implementation of such a system is non-trivial. Therefore, the most
popular architecture is currently semitight coupling, as it provides a compromise
between loose and tight coupling.
In other words, in many real-life situations, it is helpful to describe data by a
single number that is most representative of the entire collection of numbers. Such
a number is called a measure of central tendency. The most commonly used measures are
the mean, the median, and the mode.
Mean: The mean, or average, of n numbers is the sum of the numbers divided by n. That
is, mean = (x_1 + x_2 + ... + x_n) / n.
Example 1
The marks of seven students in a mathematics test with a maximum possible mark
of 20 are given below:
15 13 18 16 14 17 12
Solution:
Mean = (15 + 13 + 18 + 16 + 14 + 17 + 12) / 7 = 105 / 7 = 15. So the mean mark is 15.
Midrange
The midrange of a data set is the average of the minimum and maximum values.
Median: The median of a set of numbers is the middle number when the numbers are
written in order. If n is even, the median is the average of the two middle numbers.
Example 2
The marks of nine students in a geography test that had a maximum possible mark
of 50 are given below:
47 35 37 32 38 39 36 34 35
Solution:
Arrange the data values in order from the lowest value to the highest value:
32 34 35 35 36 37 38 39 47
The fifth data value, 36, is the middle value in this arrangement.
Note:
In general:
If the number of values in the data set is even, then the median is the average of
the two middle values.
Example 3
Find the median of the following data set:
12 18 16 21 10 13 17 19
Solution:
Arrange the data values in order from the lowest value to the highest value:
10 12 13 16 17 18 19 21
The number of values in the data set is 8, which is even. So, the median is the
average of the two middle values: median = (16 + 17) / 2 = 16.5.
Trimmed mean
A trimmed mean is the mean obtained after removing (trimming) a small fraction of the
highest and lowest values, which makes it less sensitive to extreme values.
Mode: The mode of a set of numbers is the number that occurs most frequently. If two
numbers tie for most frequent occurrence, the collection has two modes and is called
bimodal.
Example
Find the mode of the following data set:
48 44 48 45 42 49 48
Solution:
The mode is 48 since it occurs most often.
It is possible for a set of data values to have more than one mode.
If there are two data values that occur most frequently, we say that the set of data
values is bimodal.
If there are three data values that occur most frequently, we say that the set of data
values is trimodal.
If two or more data values occur most frequently, we say that the set of data values is
multimodal.
If no data value occurs more frequently than the others, we say that the set of data
values has no mode.
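A quick sketch of these measures with Python's statistics module, using the marks from Example 1 and the scores from the mode example:

import statistics

marks = [15, 13, 18, 16, 14, 17, 12]          # Example 1 data
print(statistics.mean(marks))                 # 15
print(statistics.median(marks))               # 15 (middle value of the sorted marks)
print((min(marks) + max(marks)) / 2)          # midrange = (12 + 18) / 2 = 15

scores = [48, 44, 48, 45, 42, 49, 48]         # data from the mode example
print(statistics.mode(scores))                # 48
print(statistics.multimode([1, 1, 2, 2, 3]))  # [1, 2] -> a bimodal data set (Python 3.8+)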
The mean, median and mode of a data set are collectively known as measures of
central tendency as these three measures focus on where the data is centered or
clustered. To analyze data using the mean, median and mode, we need to use the
most appropriate measure of central tendency. The following points should be
remembered:
The mean is useful for predicting future results when there are no extreme
values in the data set. However, the impact of extreme values on the mean
may be important and should be considered. E.g. The impact of a stock
market crash on average investment returns.
The median may be more useful than the mean when there are extreme
values in the data set as it is not affected by the extreme values.
The mode is useful when the most common item, characteristic or value of a
data set is required.
Measures of Dispersion
Measures of dispersion measure how spread out a set of data is. The two most commonly
used measures of dispersion are the variance and the standard deviation. Rather than
showing how data are similar, they show how the data vary, that is, their variation,
spread, or dispersion.
Very different sets of numbers can have the same mean. You will now study two measures
of dispersion, which give you an idea of how much the numbers in a set differ from the
mean of the set. These two measures are called the variance of the set and the standard
deviation of the set.
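For reference, the sample variance of n values x_1, ..., x_n with mean x-bar is s^2 = sum((x_i - x-bar)^2) / (n - 1), and the standard deviation s is its square root. A minimal sketch in code, using the scores from the box plot example further below:

import statistics

scores = [76, 79, 76, 74, 75, 71, 85, 82, 82, 79, 81]   # scores from the box plot example below
print(statistics.variance(scores))   # sample variance, divides by n - 1
print(statistics.stdev(scores))      # sample standard deviation, square root of the variance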
Percentile
Percentiles are values that divide a sample of data into one hundred groups
containing (as far as possible) equal numbers of observations.
The pth percentile of a distribution is the value such that p percent of the
observations fall at or below it.
The most commonly used percentiles other than the median are the 25th percentile
and the 75th percentile.
The 25th percentile demarcates the first quartile, the median or 50th percentile
demarcates the second quartile, the 75th percentile demarcates the third quartile,
and the 100th percentile demarcates the fourth quartile.
Quartiles
Quartiles are numbers that divide an ordered data set into four portions, each
containing approximately one-fourth of the data. Twenty-five percent of the data
values come before the first quartile (Q1). The median is the second quartile (Q2);
50% of the data values come before the median. Seventy-five percent of the data
values come before the third quartile (Q3).
Q1 = 25th percentile; its position in the ordered data set is (n*25/100), where n is
the total number of data values in the given data set
Q2 = median = 50th percentile; position (n*50/100)
Q3 = 75th percentile; position (n*75/100)
The inter quartile range is the length of the interval between the lower
quartile (Q1) and the upper quartile (Q3). This interval indicates the central, or
middle, 50% of a data set.
IQR=Q3-Q1
Range
The range of a set of data is the difference between its largest (maximum)
and smallest (minimum) values. In the statistical world, the range is reported as a
single number, the difference between the maximum and the minimum. Sometimes, the
range is reported as "from (the minimum) to (the maximum)", i.e., as two numbers.
Example 1:
The range of the data set is 38. The range gives only minimal information about the
spread of the data, by defining the two extremes. It says nothing about how the data
are distributed between those two endpoints.
Example 2:
In this example we demonstrate how to find the minimum value, maximum value,
and range of the following data: 29, 31, 24, 29, 30, 25
The minimum value is 24 and the maximum value is 31; thus the range is 31 - 24 = 7.
Five-Number Summary
The five-number summary of a distribution consists of the minimum, the first quartile
(Q1), the median (Q2), the third quartile (Q3), and the maximum.
Box plots
A box plot is a graph used to represent the range, median, quartiles and inter
quartile range of a set of data values.
(i) Draw a box to represent the middle 50% of the observations of the data set.
(ii) Show the median by drawing a vertical line within the box.
(iii) Draw the lines (called whiskers) from the lower and upper ends of the box to
the minimum and maximum values of the data set respectively, as shown in the
following diagram.
Example: Consider the following scores: 76 79 76 74 75 71 85 82 82 79 81
Step 1: Arrange the data values in increasing order: 71 74 75 76 76 79 79 81 82 82 85
Step 2: Q1 = (11*25/100)th value = 2.75 => 3rd value = 75
Step 3: Q2 (median) = (11*50/100)th value = 5.5 => 6th value = 79
Step 4: Q3 = (11*75/100)th value = 8.25 => 9th value = 82
Step 5: Min X = 71, Max X = 85
Since the medians represent the middle points, they split the data into four equal
parts. In other words:
Outliers
An outlier is a data value that falls well outside the rest of the data. Outliers will
be any points below Q1 - 1.5 IQR or above Q3 + 1.5 IQR.
Example:
10.2, 14.1, 14.4, 14.4, 14.4, 14.5, 14.5, 14.6, 14.7, 14.7, 14.7, 14.9, 15.1, 15.9,
16.4
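A short sketch that performs the same outlier check in code; the quartiles are taken by position in the sorted list, matching the walk-through that follows:

data = [10.2, 14.1, 14.4, 14.4, 14.4, 14.5, 14.5, 14.6,
        14.7, 14.7, 14.7, 14.9, 15.1, 15.9, 16.4]   # already sorted

q1, q2, q3 = data[3], data[7], data[11]   # 4th, 8th and 12th values
iqr = q3 - q1
lower_fence = q1 - 1.5 * iqr              # 14.4 - 0.75 = 13.65
upper_fence = q3 + 1.5 * iqr              # 14.9 + 0.75 = 15.65

outliers = [x for x in data if x < lower_fence or x > upper_fence]
print(outliers)                           # [10.2, 15.9, 16.4]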
To find out if there are any outliers, we first have to find the IQR. There are fifteen
data points, so the median is the 8th value: Q2 = 14.6.
Q1 is the fourth value in the list and Q3 is the twelfth: Q1 = 14.4 and Q3 = 14.9, so
IQR = 14.9 - 14.4 = 0.5.
The values Q1 - 1.5 IQR = 14.4 - 0.75 = 13.65 and Q3 + 1.5 IQR = 14.9 + 0.75 = 15.65
are the "fences" that mark off the "reasonable" values from the outlier values.
Outliers lie outside the fences; here they are 10.2, 15.9, and 16.4.
1 Histogram
The histogram is only appropriate for variables whose values are numerical and
measured on an interval scale. It is generally used when dealing with large data
sets (>100 observations)
A histogram can also help detect any unusual observations (outliers), or any gaps in
the data set.
2 Scatter Plot
A scatter plot is a useful summary of a set of bivariate data (two variables), usually
drawn before working out a linear correlation coefficient or fitting a regression line.
It gives a good visual picture of the relationship between the two variables, and aids
the interpretation of the correlation coefficient or regression model.
Each unit contributes one point to the scatter plot, on which points are plotted but
not joined. The resulting pattern indicates the type and strength of the relationship
between the two variables.
A scatter plot will also show up a non-linear relationship between the two variables
and whether or not there exist any outliers in the data.
3 Loess curve
It is another important exploratory graphic aid that adds a smooth curve to a scatter
plot in order to provide better perception of the pattern of dependence. The word
loess is short for local regression.
4 Box plot
The picture produced consists of the most extreme values in the data set (maximum
and minimum values), the lower and upper quartiles, and the median.
5 Quantile plot
Displays all of the data (allowing the user to assess both the overall behavior
and unusual occurrences)
Plots quantile information
For data x_i sorted in increasing order, f_i indicates that approximately 100*f_i % of
the data are below or equal to the value x_i.
The f-quantile is the data value below which approximately a decimal fraction f of the
data is found; that data value is denoted q(f). Each data point can be assigned an
f-value. Let a set of observations x of length n be sorted from smallest to largest, so
that the sorted values have ranks i = 1, 2, ..., n. The f-value for each observation is
computed as f_i = (i - 0.5) / n.
This kind of comparison is much more detailed than a simple comparison of means
or medians.
A normal distribution is often a reasonable model for the data. Without inspecting
the data, however, it is risky to assume a normal distribution. There are a number of
graphs that can be used to check the deviations of the data from the normal
distribution. The most useful tool for assessing normality is a quantile-quantile, or
Q-Q, plot. This is a scatter plot with the quantiles of the scores on the horizontal
axis and the expected normal scores on the vertical axis.
In other words, it is a graph that shows the quantiles of one univariate distribution
against the corresponding quantiles of another. It is a powerful visualization tool in
that it allows the user to view whether there is a shift in going from one distribution
to another.
First, we sort the data from smallest to largest. A plot of these scores against the
expected normal scores should reveal a straight line.
The expected normal scores are calculated by taking the z-scores corresponding to the
cumulative proportions (i - 0.5)/n, where i is the rank of each observation in
increasing order.
Curvature of the points indicates departures from normality. This plot is also useful
for detecting outliers, which appear as points far away from the overall pattern of
the points.
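A minimal sketch of such a normal Q-Q plot (SciPy and matplotlib assumed available; the data values are illustrative). norm.ppf converts the cumulative proportions (i - 0.5)/n into the expected normal scores:

import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

data = np.sort(np.array([4.2, 5.1, 5.5, 5.9, 6.0, 6.3, 6.8, 7.4, 8.1, 9.9]))
n = len(data)
f = (np.arange(1, n + 1) - 0.5) / n        # f-values for each rank
expected = norm.ppf(f)                      # expected normal scores (quantiles)

plt.scatter(expected, data)                 # roughly a straight line if the data are normal
plt.xlabel("Expected normal score")
plt.ylabel("Observed value")
plt.show()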
Example 1