CHAPTER 3
DELPHI METHOD
3.1 INTRODUCTION
According to Cooper and Schindler (2009), while published data and literature are
considered valuable for any research study, it is often recognized that only a
fraction of the existing knowledge in any field is put into writing. Hence, an
experience survey, along with a comprehensive literature review, was considered
necessary in order to take a more complete view of the research problem.
The insight gained from the first and second stages was then used for the design
and documentation of the subsequent stages of the research study.
The Delphi study was considered suitable for this research work because
knowledge about the research problem was incomplete. The method largely
facilitated a better understanding of the research problem, the research
opportunities and the possible solutions to the research questions.
The study was conducted in three rounds (DS1, DS2, DS3), seeking the opinions
of the experts through questionnaires without revealing their individual
identities. The first-round Delphi questionnaire for this research study
consisted of ten open-ended questions covering all aspects of the research
problem. The objective of the first-round questionnaire was to conduct a
brainstorming exercise and generate as many ideas and opinions from the experts
as possible on every issue. The time allocated for the first round was 30 days.
All the experts who participated in the first round were also invited to
participate in the next two rounds, and additional experts were chosen for the
second round. The responses given by the experts in the first round formed the
basis of the questionnaire for the second round. The second round had sixteen
Likert-scale questions on career development practice factors, ten on human
resource influencing factors and ten on organisational factors. The
questionnaire developed in each round was first subjected to pilot testing
before being administered to the experts.
For the current Delphi study, twenty experts were carefully selected based on
their job knowledge, experience and wisdom. Eight professionals on the panel
were working in reputed multinational companies in Bangalore; the other twelve
members were CEOs cum HR managers of various organizations.
Dalkey and Helmer (1963) and Delbecq and Van de Ven (1971) reported that the
success of a Delphi study is largely dependent on the quality of the
participants and prescribed specific criteria for the selection of panel
experts.
The potential participants for this study were identified through their
expertise in the area of human resource management in the information
technology industry. They included academics, human resource practitioners and
industrial psychologists.
Follow-up telephone calls were made and letters were sent to non-respondents
after two weeks. Nominees were advised that each round of the study would
require thirty minutes and that data collection would occur over a two-month
period.
3.2.4.1 Academics
The panel nominees were asked to express their expert opinions and judgments on
the current development of retention management in information technology and
to identify the key HR factors influencing retention in the workplace.
However, following Rowe and Wright (1999) and Van de Ven and Delbecq (1974), a
small sample size was deemed acceptable for this study due to the preliminary,
exploratory role of the Delphi technique in the first stage of the research. It
was, however, critical to secure the participation of the right kinds of
experts, who understood the issues, had a vision and represented a substantial
variety of viewpoints.
In the second round, the responses suggested in the first round were presented
to each respondent in the form of survey statements and accompanying response
selections, each selection serving to complete the initial statement. The
respondents were asked to indicate the degree to which they agreed with each
completed statement on a five-point Likert scale, with 1 indicating "strongly
disagree" and 5 "strongly agree".
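As an illustration only (the item codes and ratings below are hypothetical, not the study's responses), such ratings can be tallied per statement to see which responses attract the greatest support before they are fed back in the next round.

    from statistics import mean, median

    # Hypothetical round-two ratings: statement code -> ratings from the expert panel
    round2_ratings = {
        "CD01": [4, 5, 4, 3, 5, 4, 4, 5],   # career development practice item
        "HR01": [2, 3, 3, 2, 4, 3, 2, 3],   # human resource influencing item
        "OR01": [5, 4, 5, 5, 4, 5, 4, 4],   # organisational factor item
    }

    for item, ratings in round2_ratings.items():
        agreement = sum(r >= 4 for r in ratings) / len(ratings)   # share rating 4 or 5
        print(f"{item}: mean={mean(ratings):.2f}, median={median(ratings)}, agreement={agreement:.0%}")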
The responses that received the greatest support for each of the questions were
fed back to the experts during round three. In the third and final round, the
respondents were asked to rate the responses accompanying each statement
according to their perceived importance, again with 1 being "strongly disagree"
and 5 "strongly agree". This was done to help the respondents further refine
their opinions and assist in achieving consensus.
This type of question removes the need for the researcher to pre-judge
appropriate categories for response, allowing groupings of similar responses to
be constructed, if necessary, after the data have been collected.
In the third and final round survey, the experts were asked to indicate their
agreement or disagreement with the final wording of each item and to provide
additional comments under the specified concept areas. The procedure stopped
after three questionnaires (rounds), which is fairly typical of many Delphi
studies. Consensus, or the trend towards consensus, was documented at the
conclusion of Round 3.
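The chapter does not state the exact consensus criterion; as one hedged illustration, a small interquartile range (IQR) of the Round 3 ratings is often taken as a sign of convergence, as in the sketch below (the threshold and ratings are assumptions, not the study's documented rule).

    import numpy as np

    def has_consensus(ratings, iqr_threshold=1.0):
        # A small spread between the 25th and 75th percentiles suggests convergence.
        q1, q3 = np.percentile(ratings, [25, 75])
        return (q3 - q1) <= iqr_threshold

    round3_item = [4, 4, 5, 4, 4, 5, 4, 3, 4, 4]   # hypothetical Round 3 ratings
    print(has_consensus(round3_item))               # True -> consensus indicated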
Dane (1990) has stated that a pilot study is "an abbreviated version of a
research project in which the researcher practices or tests the procedures to
be used in the subsequent full-scale project". Since the measures of the
research are either new or reconfigured from their original sources, a pilot
study ensures psychometric cleaning of the items, so that only appropriate
items chosen through proper analysis are used. A systematic pilot study was
carried out with 50 respondents who did not participate in the final research.
The research instrument was tested to ascertain its reliability and validity,
and recommendations found to be valid were incorporated into the survey design
for the actual research.
It is also useful to have colleagues or friends read through the questionnaire
and play the role of respondents, even if they know little about the subject.
3.5.2 Distribution
The data are typically treated as interval-scaled. When using this approach to
determine the total score for each respondent on each store, it is important to
use a consistent scoring procedure so that a high (or low) score consistently
reflects a favourable response. This requires that the categories assigned to
negative statements by the respondents be scored by reversing the scale.
According to Rajiv Grover and Marco Vriens (2006), a respondent will have the
most favourable attitude towards the store with the highest score. This scale
is easy to construct and administer, and it is easy for the respondent to
understand.
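A minimal sketch of this scoring rule (the respondent's ratings below are hypothetical): on a five-point scale, a negatively worded statement is reverse-scored as 6 minus the rating, so that high totals always indicate a favourable response.

    def score_response(rating, negatively_worded=False, scale_max=5):
        # Reverse-score negatively worded statements so that high = favourable.
        return (scale_max + 1 - rating) if negatively_worded else rating

    # Hypothetical respondent: two positively worded items and one negative item
    ratings = [(5, False), (4, False), (2, True)]
    total = sum(score_response(r, neg) for r, neg in ratings)
    print(total)   # 5 + 4 + (6 - 2) = 13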
The primary data required for the study were collected from selected employees
working in IT organizations located in Bangalore city, Karnataka State. The
primary data collection was done in two stages. In the first stage, a
well-structured questionnaire was developed and pre-tested with 50 employees
(respondents) chosen at random from different levels of 25 IT organizations in
Bangalore city, Karnataka State.
The secondary data related to the study were collected from different sources,
including textbooks, articles published in journals, newspapers and
periodicals, National Association of Software and Service Companies (NASSCOM)
websites, McKinsey study reports, company websites, the Government's IT
department sites, doctoral research theses and various other related sites.
Content validation at the instrument development stage also has a bearing on
validity indices in later studies. Because sources of error vary with the
targeted construct, the method of assessment and the function of assessment,
the methods of content validation will also vary across these dimensions. In
this research, the data were analysed using the content validity technique.
According to Crocker et al (1986), the optimal number of judges will vary with
the element under consideration, the internal consistency of the ratings and
practical considerations. In the present research, twenty panelists were asked
to indicate whether or not each measurement item was "essential" to the
operationalisation of the theoretical construct. The panelists' inputs were
then used to compute the content validity ratio for each i-th candidate item in
the questionnaire (CVRi), as shown in Equation (3.1):
CVRi = (ne - N/2) / (N/2)                                                  (3.1)
where ne is the number of panelists indicating the i-th item as "essential" and
N is the total number of panelists.
It is inferred from the CVR equation that the content validity ratio takes
values between -1.00 and +1.00, where CVR = 0.00 means that exactly 50 percent
of the panelists of size N believe that a measurement item is "essential". A
CVR > 0.00 therefore indicates that more than half of the panelists consider
the item "essential", and the item is thereby regarded as valid.
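As a hedged illustration of Equation (3.1) only, the following sketch computes the ratio for hypothetical counts of "essential" ratings; the function name and figures are assumptions, not values reported by the study.

    # Illustrative computation of the content validity ratio in Equation (3.1).
    def content_validity_ratio(n_essential, n_panelists):
        return (n_essential - n_panelists / 2) / (n_panelists / 2)

    print(content_validity_ratio(16, 20))   # 0.60 -> above the 0.50 cut-off applied below
    print(content_validity_ratio(12, 20))   # 0.20 -> below the cut-off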
Following Nunnally (1978), content validity ratios were calculated for the
questions measuring QWL and JP and presented in the content validity ratio
table. All the statements on QWL and JP were considered after the calculation
of the content validity ratio, and only statements with a score of 0.50 or
above were included in the survey instrument. After the content validation and
reliability check, the final version of the questionnaire was arrived at. In
order to evaluate the reliability of the data, Cronbach's alpha test was
conducted, and only elements with an alpha value of 0.70 or above were
retained. For all the variables in the factor analysis, the alpha value is
above 0.70, which shows the internal consistency of the scales (Cronbach 1981).
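For illustration only, the following sketch computes Cronbach's alpha from a small set of hypothetical pilot responses using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of totals); the data shown are not the study's data.

    import numpy as np

    def cronbach_alpha(item_scores):
        # item_scores: rows = respondents, columns = scale items
        items = np.asarray(item_scores, dtype=float)
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Hypothetical pilot responses: 4 respondents x 3 items
    pilot = [[4, 5, 4], [3, 4, 3], [5, 5, 4], [2, 3, 2]]
    print(cronbach_alpha(pilot))   # scales are retained only if alpha >= 0.70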
As described by Collis and Hussey (2003), a Likert rating scale was used for
section two of the questionnaire. The questions were turned into statements and
the respondents were asked to indicate their level of agreement by marking the
chosen box with an "x".
With the exception of one sub-section, all question statements were posed in a
positive context. The benefit of this was to avoid leading statements, i.e.
leading the respondent into a negative context. It is, after all, the negative
context that the researcher attempted to invalidate, but if established, it is
an indication of a problem area. Therefore, if the answer is "Disagree", then
it is actually so. Collis and Hussey (2003) propose that questions of a
sensitive nature should be avoided or, if asked, placed towards the end of the
questionnaire; they also strongly advise against asking negative questions.
Factor analysis has been used here to identify and define the underlying
dimensions in the original variables and to reduce the number of variables by
eliminating redundancy. The general purpose of factor analysis is to summarize
the information contained in a number of original variables in a smaller set
of new composite dimensions (factors) with minimum loss of information.
d. Scores for each factor can be computed for each case. These
scores are then used for further analysis.
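As a sketch of this idea (an assumption for illustration, not the study's exact software or settings), the following code extracts a smaller set of factors from a block of Likert responses and computes factor scores for each case using scikit-learn's FactorAnalysis; the data are randomly generated.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    # Hypothetical data: 100 respondents rating 10 Likert items (1 to 5)
    responses = rng.integers(1, 6, size=(100, 10)).astype(float)

    fa = FactorAnalysis(n_components=3, random_state=0)
    factor_scores = fa.fit_transform(responses)   # one score per factor, per case
    print(factor_scores.shape)                    # (100, 3), used for further analysis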
Y = b0 + b1X1 + b2X2 + b3X3 + ... + bkXk
where Y is the dependent variable, b0 is the intercept and b1 to bk are the
regression coefficients of the independent variables X1 to Xk.
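As an illustrative sketch only (with simulated data and assumed coefficient values, not results from this research), such a multiple regression model can be estimated by ordinary least squares as follows.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 3))                              # 50 cases, k = 3 predictors
    y = 2.0 + X @ np.array([0.5, -1.0, 0.3]) + rng.normal(scale=0.1, size=50)

    X_design = np.column_stack([np.ones(len(X)), X])          # prepend the intercept column
    coeffs, *_ = np.linalg.lstsq(X_design, y, rcond=None)     # [b0, b1, b2, b3]
    print(coeffs)                                             # close to [2.0, 0.5, -1.0, 0.3]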
3.10 HYPOTHESIS
3.11 CONCLUSION