A Simpler Understanding of Classic GT: How it is a fundamentally different methodology

Ólavur Christiansen

Abstract

The author reduces the research rationale of classic grounded
theory (GT) methodology and the consequential classic GT
research procedures and stages down to their essential
elements. This reduction makes it possible to compare classic
GT to other research methodologies in a manner that is simpler
and yet concise. This methodological analysis and synthesis was conducted both while applying and after having applied the classic GT methodology in practice in a major project. The
fundamental differences between classic GT and other adaptations of GT, as well as other qualitative-inductive research approaches, are mainly explained by the very different approaches to solving the problem of many equally justifiable interpretations of the same data, and by the consequential differences in research procedures and how they are applied.
Comprehension of methodological differences in detail will
always be relevant. However, an uncomplicated and still
concise explanation of the differences between these
methodologies is necessary. “Grounded theory” (GT) is used as
a common label in the literature for very different research
approaches. This simpler approach to comparing the methodologies will be helpful for researchers who might want to consider several options when deciding which research methodology to use, and who need to understand quickly some of the most essential methodological elements.

Introduction

For prospective researchers who wish to consider several
options when deciding which research methodology to use, it
can be bewildering when “grounded theory” is used as a
common label in the literature for very different research
methodologies. During the research process that led to the
theory of “opportunizing” in business (Christiansen, 2005;
2006) the author made some observations and lived through
some experiences that could be helpful to others who might
want to utilize Glaser’s prescribed set of classic grounded
theory (GT) research procedures, or other adapted GT
procedures, or other mainly inductive-qualitative research
procedures in e.g. economics, business and management
research. This article is based on a systematic treatment of
these observations and experiences.

Glaser’s prescribed GT research procedures are definite with regard to their usage and research rationale
(Glaser and Strauss, 1967; Glaser, 1978; 1992; 1998; 2001;
2003; 2005). In this article, these procedures will be referred to
as classic grounded theory methodology or classic GT. Strauss
and Corbin (1990; 1998) have prescribed a set of research
procedures that are also specific, and this set of procedures is
also called “grounded theory”. However, the research rationales
that are attached to these two different sets of “grounded
theory” procedures are clearly different, and consequently, and
despite some apparent similarity, these two sets of research
procedures are also very different. It is also obvious that there
is a much wider diversity regarding applied research
procedures in studies labelled as “grounded theory” studies in
the literature. It has even been claimed that almost any
qualitative research can be labelled as a “grounded theory”
(Simmons, 1995).

Research methodologies almost by definition are different.
They each have a different raison d’être, set of procedures and
standards. Methodological diversity has its raison d’être and
there is nothing wrong in it. To make judgments regarding
general superiority or inferiority of methodologies may be
pointless. However, to mix procedures of different research
methodologies, which have different research rationales, may
give a set of research procedures that do not represent a
consistent method. The best choice of methodology depends on its fit to the individual researcher’s purpose or skills, or to the
contextual purpose, and any research outcome has to be judged
according to the raison d’être, procedures and standards of the
methodology applied.

The purpose of the article is to suggest a simplified and yet
concise approach by which to compare research procedures that
are labelled as GT, as well as other mainly inductive
qualitative research methodologies. The basis for this
comparison will be a reduction of the classic GT research
rationale and the consequential classic GT research procedures
and stages down to their essential elements. Thus, instead of
only focusing on the many differences within the many details,
focus can be delimited to the differences in the fundamental
research rationales of the methodologies, and the consequential
fundamental differences in the research procedures and stages
of research.

This simplified basis for comparison will, of course, sum up
and highlight the fundamentals of classic GT. It will not
necessarily sum up and highlight all the essential features of
the other methodologies. However, it will be enough to give an
explanation for the methodological differences that are most
fundamental, and which may be the most problematic for
prospective researchers to understand.

The Classic GT Research Rationale

The rationale for using classic GT methodology, or its
raison d’être, can be summed up and explained in different
ways. One example is the following: “A methodology was
needed that could get through and beyond conjecture and
preconception to exactly the underlying processes of what is
going on so that professionals and laymen alike could intervene
with confidence to help resolve the participants’ main concern
surrounding learning, pain and profit.” (Glaser, 1998, p. 5).

To “get through and beyond conjecture and preconception
to exactly the underlying processes of what is going on in the
resolving of the participant’s main concern”, the research area
or the general research topic must, of course, be known.
However, the researcher has to minimize his/her
preconceptions and this requires that not even the research
problem should be preconceived. It has to be allowed to emerge
from the systematic collection and treatment of data during the
research process. Due to its rationale, classic GT methodology
is predominantly empirical and inductive – what counts is only
what the data relate. The methodology is for the generation of a
theory directly from data that explains as much as possible
with as few concepts as possible, and what it explains are
the behaviour patterns of those being studied. The research
outcome is a conceptual theory. Substantive concepts are stable
latent patterns that image the area being researched. These
concepts are generated from the systematic treatment of the
data and should not be preconceived. These concepts should
represent a considerable abstraction of time, place and people,
and should have name labels that fit vis-à-vis what actually
goes on in the resolving of the main concern, and be firmly
grounded in the data by interchangeable data indicators. The
purpose is certainly not conceptual descriptions with many
concepts; such conceptual descriptions just convey stories that
are bound to the specificity of time, place and people. The
methodology can be used not only on qualitative data but also on quantitative data, although in practice it is mostly used on
qualitative data.

Another way of expressing this rationale could, for
example, be as follows: (1) to delimit the study to the main
concern and its recurrent solution of those being studied (their
substantive interests), and (2) to prevent preconceived
professional concerns from masking what actually goes on in the field
of study, and instead to stay open and let patterns emerge from
the data. I will refer to these two points as the two hallmarks of
the methodology. The following text will further explain the
meaning of these two hallmarks and their significance in
classic GT.

When researchers are confronted by an overwhelming set
of collected data, some of them may find relief by concluding
that the cultural, social or economic organization of life is
complex enough to allow a number of equally justifiable
interpretations. The rationale of classic GT is to meet this
unique challenge by a unique solution. This is to find the core
variable as the first stage of the research. This is the first
hallmark of the methodology. As a concept, the main concern
and its recurrent resolution of those being studied is summed
up by the core variable of the emergent theory. After finding
the core variable, the subsequent research and the generated
theory are delimited to the core variable and to what is related to
the core variable – the theory thus becomes a theory around the
core variable. In other words, as the first stage of research, the
main concern and its recurrent resolution of those being
studied has to be conceptualized or summed up and explained
by one concept, which becomes the core variable. The core
variable has to be allowed to emerge from the systematic
treatment of the data during the research process, and should
in no way be preconceived, and this is accomplished by
adhering to the second hallmark. A fitting name has to be given
to the concept that emerges as the core variable. By its naming,
the core variable represents that particular behaviour pattern
that is highly important for the participants, but also
problematic. It is what drives and directs these people’s
behaviour. The core variable is that particular concept that is
most related to the other concepts of the emerging theory. The
core variable is also that concept of the theory that explains
most of the variation in the data or in the studied behaviour.
The problem of “numerous equally justifiable interpretations of
the data” is minimized by finding the core variable.
Consequently, the research deliberately sets out to follow the
agenda of those being studied, the substantive interest
relevancy of those being studied, and not any preconceived
agenda of some professional research community or individual
researchers, or their deemed professional interest relevancy.
This is also the second hallmark of the methodology.

The second hallmark of classic GT has been referred to as
“staying open and letting patterns emerge from data” and its
opposite is “logically deducing, logically conjecturing,
preconceiving (and possibly testing or quantitatively verifying
some auxiliary hypotheses)”. The orthodox GT analyst does not
know a priori what he/she is looking for. Thus, much of the
induction in orthodox GT is not tantamount to ordinary induction, or to the inductive principles used by different
hermeneutic research procedures. Instead, classic GT induction
is “assumption-free” as well as “assumption-based”, but the latter only applies when these assumptions correspond to what
already has emerged as more or less stable patterns in the data
(Hartman, 2001, p. 37). This means that there is a “classic GT
form of induction”, which is different. Coupled with the first
hallmark of classic GT, this helps to keep the substantive
interests of the participants in the field of study in focus, to
avoid the compulsory, preconceived interests of the established
research community, and to focus on what actually goes on in
the field of study. In other words, the research is delimited to
what is empirically discovered to be the most relevant and
problematic for the people being studied, not what a priori is
deemed most relevant by the researcher (or by those being
studied). For the researcher this means a minimizing of
preconceptions and a suspension of prior knowledge and
understanding regarding the area of research. Sometimes it
may even be an advantage to be completely without any prior
knowledge about the area of research prior to conducting the
research. Such a statement, of course, flies in the face of
positivist, rationalist and many other research positions. Yet,
in the fairy tale, “The Emperor’s New Clothes”, it was only an
innocent and ignorant little child that could do justice to reality
by shouting out: “He is naked!”.

Due to its rationale, the classic GT methodology has no
attachment to any particular theoretical-disciplinary paradigm (Kuhn, 1996), theoretical perspective or theoretical-disciplinary
research program (Lakatos, 1970). Ontological and
epistemological positions may also contain pre-framings or
preconceptions. Due to its rationale, the classic GT
methodology is almost free of logically derived assumptions
regarding ontology and epistemology. Its basic assumptions are
limited to this: “Because man is a meaning-making creature,
social life is patterned and empirically integrated. It is only a
question of applying a rigorous and systematic method for
discovering and explaining these patterns. Thus, just do it.”
(Glaser, 2004). The classic GT methodology is for the study of
behaviour or behaviour patterns, not for the study of people or
units as such. To generalize on units or people is difficult by
any means. To generalize on behaviour is easier. Behaviour
patterns transcend the borders of units.

Classic GT methodology can be conceived as a
methodological paradigm or methodological research program,
but it is not a usual one. The methodological procedures are the
outcome of doing classic GT research on classic GT research
since the early 1960s, i.e. the methodology is itself a classic
grounded theory and thus essentially empirically generated.
That the methodology is very different does not mean that it is
better. For certain research tasks, and for certain very relevant
and necessary research tasks, it would be a very wrong choice.
For other research tasks, it could very well be an option. This is
true especially when new perspectives may emerge regarding
what actually goes on in a field of study. However, even though
new concepts and models may emerge, it does not necessarily
mean that other concepts and models are wrong. The classic GT
rationale is to increase and not to decrease methodological
diversity and options, including ontological and epistemological
options. When the classic GT rationales are stated as (1) “to
keep the main concern and its recurrent solution of those being
studied in focus” and (2) “to prevent preconceived professional
concerns from masking what actually goes on in the field of study”,
this does not mean that use of other methodologies by default
will lead to the opposite result. It may even be a strength if
many different methodologies can be applied within a given
research task. Methodological choice is not a question of
enabling a researcher to reach the “absolute truth line”, but of coming closer to it. Social life has many facets, many realities
may emerge in approaching “the truth line”, and there cannot
be any ultimate finality in any classic GT theory generation.

Of course, those being studied in classic GT research
know much more about what they do than any researcher. No
classic GT researcher can or should compete with these people
in their contextual knowing and describing. However, these
people have not conceptualized nor conceptually explained
what they do and how they accomplish it. The researcher, on
the other hand, uses his/her license to conceptualize. Thus, the
researcher can empower these people by providing them with
an empirically grounded theory that conceptually explains
what actually goes on and how they recurrently resolve their
main concern. If some changes are needed, then these people
would be empowered to accomplish them.

The Consequential Classic GT: The research
procedures and distinct terminology

The research rationale of classic GT is made operative by
the classic GT research procedures and by a distinct classic GT
terminology. With reference to the research rationale, many of
the procedures explain themselves. Firstly, it is difficult to “get
through and beyond conjecture and preconception to exactly the
underlying processes of what is going on in the resolving of the
participant’s main concern” without taking a predominantly
empirical and inductive approach in the systematic collection
and treatment of data. However, this inductive approach is not
the same as the “ordinary” inductive approach. This inductive
approach is basically “assumption-free” and only “assumption-based”
when these assumptions represent emerging stable
patterns in the data. Anything else may be preconceptions, and
preconceptions have to be minimized. Thus, a distinction may
be made between (1) deductive logic based on a priori
knowledge (which is minimized), (2) inductive logic where non-grounded
assumptions also may direct the research process,
(and which also is minimized), and (3) “the classic GT form of
induction”, where data takes the lead of the research process
and where only grounded assumptions count. Suspension of
prior knowledge and minimization of logical-deductive
elements does not mean the elimination of them; neither does it
give “objectivity”. However, it makes a big difference. The data
have also to be collected without any tainting of the
researcher’s possible preconceived notions, and this means that
the researcher starts without any predetermined or
preconceived research problem. Actually, one cannot know
what one is studying before one has had a chance to look at the
data – it has to “emerge” first. Literature reading has to wait until
the end of the research. Only the data provides the control, and
the task of the researcher is to be able to follow where the data
lead him/her (Lowe, 2005).

Secondly, it is difficult to “get through and beyond
conjecture and preconception to exactly the underlying
processes of what is going on in the resolving of the
participant’s main concern” without the specific procedure of
conceptualization by the method of constantly comparing. This
procedure of conceptualizing thus becomes the main inductive
procedure for the systematic treatment of data. The research
rationale also requires delimiting, and the procedure of
conceptualization is inherently delimiting, and the summit of
this delimiting is achieved by finding the authentic core
variable.

Possibly the most important and the most problematic
issue for any researcher who uses the methodology is
conceptualization or concept generation. To conceptualize
means to discover and to name latent patterns and
relationships between latent patterns as they emerge in the
data and are verified by interchangeable data indicators.
Further, to conceptualize means: “to discover and generate new
categories and their properties, instead of being forced to use
received concepts.” (Glaser, 1998, p. 133)

By coding (or conceptualizing or categorizing), the data
are analyzed by being cut into slices that are constantly
compared, and subsequently they may become synthesized and
put together again differently according to the “pattern fit” and
the various relationships. By coding, fitting names are given to
each stable pattern, which convey explanations regarding the
main concern and its recurrent resolution of those being
studied. This takes place in a process of data collection and
data coding that usually becomes iterative and involves much
reworking.

There are two main types of building blocks of theory.
These are substantive concepts or codes and theoretical
concepts or codes. Substantive concepts are stable latent
patterns that summarize the empirical substance of the data
and signify the underlying meaning, uniformity and/or pattern.
Theoretical codes, on the other hand, signify the relationships
between substantive codes. For substantive concepts there is a
hierarchy of levels. Any substantive concept has a level of
abstractness vis-à-vis time, place and people. The more a
particular underlying meaning, uniformity and/or pattern
represents an abstraction of time, place and people, the higher is
the concept’s conceptual level. The core variable is the
substantive concept of the theory that has the highest
conceptual level, and it is most closely related to all the other
lesser-level concepts. Sub-core variables are below the core
variable in conceptual level and very closely related to the core
variable. Categories are below sub-core variables in conceptual
level, but are related to some sub-core variables. A property is
another type of concept; it is a conceptual characteristic of a
category, sub-core variable or core variable, or a concept of a
concept. Consequently, a property has a lesser conceptual level
than the concept to which it refers. Data (qualitative or
quantitative) are contextual descriptions that are bound to
the specificity of time, place and people and are at the lowest
conceptual level. Theoretical codes are usually on a higher
conceptual level than substantive concepts, as they signify
more general phenomena (different kinds of causes, correlation,
processes with at least two stages that account for variation
over time, loops, inseparable part-wholeness structures, etc.).
(Glaser, 1978, pp. 93-115; 1992, pp. 38-39, 75-76).

A distinction is made between substantive coding and
theoretical coding. There are two types of substantive coding.
They are open coding and selective coding. Open coding is for
finding the core variable. Selective coding is applied when the
core variable has emerged and selective coding is delimited to
concepts or data fragments that are related to the core variable.
Theoretical coding is for recognizing or discovering the type of
relationships between substantive concepts.

Classic GT is a form of latent pattern analysis of
qualitative or quantitative data, but in other respects it is quite
unlike, e.g. factor analysis. It originates from multivariate
quantitative methodology (Glaser, 1998, p. 27). Yet, the
methodology does not rely on any form of measuring or any
counting. It does not rely on index construction of any kind, but
on interchangeable indicators found in the data (Glaser, 1978,
pp. 55-65). Glaser recommends that emergent categories
(different latent patterns) should not be listed during the data
work, and that data indicators should not be counted (Glaser,
1998, p.137).

The methodology is rarely used on only quantitative data,
despite the fact that doing so is far easier. It has to be high-calibre
quantitative data, and such data on behaviour may be costly to
obtain. When the methodology is used on qualitative data, the
use of it has to be entirely technology-free (Glaser, 2003, pp. 17-44).
Apart from mere writing purposes, the use of special computer
software for coding or for sorting of categories or coded data is
not recommended. Use of computer software may lead to a
built-in pre-framing, incompatibility regarding forced choices,
as well as incompatibility regarding flexibility, pacing and
attention to what goes on in the data.

In the next section, more will be explained about classic
GT procedures and terminology.

The Consequential Classic GT Stages of Research

Because focus is on behaviour patterns that transcend the
limits of individual units, the data are collected by theoretical
sampling and not by statistical or representative sampling. In
the beginning phase of theoretical sampling, the differences
among the sampled units are maximized. Analysis and
synthesis of the data then determines what unit to sample
next. The data should be collected without any tainting of the
researcher’s possible preconceived notions from pre-existing
theory, and the significance of the data should never be prejudged,
for example, by assuming that variables such as age,
sex, income, size, type of business, etc. are important. When the
interview is used in data collection, ungrounded or
predetermined questions should be avoided. Instead, the
interviewee should just be encouraged to talk freely about
his/her main concern and its recurrent solving. This may be
done in different ways, depending on what the interviewer
finds appropriate in the given context. When the core variable
has been revealed, more grounded questions may be asked.
Audio or video recording is not recommended during interviews, and it may not be a good idea to take notes during interviews either. This may inhibit the interviewee from giving genuine and original
data. Instead, the data may be recorded afterwards, and the
coding of it should begin immediately. (Glaser, 2001, pp. 165-
184).

The procedural stages of the research are generally
sequential, but once the research process begins, they are often
conducted simultaneously or serendipitously according to the
requirements of the particular research. Following the
preparatory stage of not preconceiving the problem, and the
data collection stage, an overview of the subsequent stages is as
follows (Simmons, 2002):

As mentioned, there are two procedural stages of
substantive coding, open coding and selective coding. Common
to them is the procedure of constant comparative analysis. This
means constantly comparing or relating data or data incidences
(line by line) to emerging concepts (ideas), then relating concepts
(ideas) to other concepts (ideas) or their properties.

Open coding, which has the purpose of finding the core
variable, allows coding of anything and everything in the data.
The analyst asks three general questions of the data. The first is: “What is this data a study of?” This ultimately leads to the discovery of the core variable that subsequently becomes the focus of the research. The next question is: “What category or property of a category does this incident indicate?” (This encourages thinking conceptually and avoiding contextualizing or “story-telling”.) The third question is: “What is actually happening in the data?” (This alerts the analyst to possible theoretical codes.)

The next procedure of selective coding is carried out when
the core variable and its major dimensions and properties have
been discovered. Selective coding means delimiting the coding
to concepts or data fragments related to the core variable, but
in other respects the procedures are the same while in the
process of constantly comparing. Theoretical coding is to
recognize or discover how the substantive concepts may relate
to each other as hypotheses to be integrated into a theory.
Theoretical coding is facilitated by the procedure of sorting (see
below).

The procedure of memo-writing is a must in a classic
grounded theory study. Memos are the “theorizing write-up” of
ideas about substantive codes and their relationships. The
writing of memos triggers insight and new ideas, and provides
a record of grounding. While coding gives conceptual familiarity
with the data, emergence happens during memo-writing. Data
are always available, and can be analyzed at any time, while
ideas are fragile. They should be written down at the earliest
possible moment. Memos are always modifiable as more is
discovered about the topic. Data collection, analysis (coding),
sorting, and memo-writing are ongoing and overlap. (Glaser,
1978, pp. 82-92; Glaser, 1998, pp. 177-186). Conceptual
familiarity with what conceptually occurs in the data has to
reach a certain threshold before insight can strike gradually or
suddenly or in abundance – or in other words: before emergence
of concepts can occur. It requires theoretical sensitivity and
creativity, but hardly more logic than what can be summoned
by a small child in solving a jigsaw. Activation of more complex
logic than that can easily trigger logical elaboration, and when
an analyst relies on logical elaborations and deductions instead
of what the data conceptually tell, he/she has actually
abandoned the methodology. However, in theoretical sampling,
a bit of logic is used in deciding where to take the next sample. In
theoretical coding, prior knowledge and logical understanding
of as many theoretical codes as possible will be helpful. This
means that while classic GT is predominantly inductive
regarding the research area and the research problem, it is also
a specified inductive-deductive mix.

The procedure of sorting refers not to data sorting, but to
conceptual sorting of memos and accompanying data. By
default, it also involves constant comparing. As explained, this
has to be done manually, and a pair of scissors and a number of
paper boxes may be useful. Sorting may become appropriate at
any time during the course of the research. The final sort
frames or constitutes the first draft of the write-up.

Once the researcher feels confident in his/her theory,
he/she can begin to analyze and integrate relevant existing
literature into it. A classic GT comparative literature review
examines and compares the concepts rather than the contexts
from which the data came. Contextual literature without
conceptual relatedness is not integrated, but non-contextual
literature (i.e. from other disciplines) should be integrated if
relatedness is found. Such a comparison may modify the
theory, and it may of course also add to or correct the pre-existing
literature. Usually, it is difficult to find relatedness in
contextual literature. Consequently, the literature review is
usually short.

The key issue comes down to the methodology’s as well as
the researcher’s capability to reveal a credible theory from the
data that explains with parsimony and scope. This means the
capability to make allowance for and to trigger the emergence
of concepts that (1) fit to the data, (2) work to explain, and are
(3) relevant for those being studied. Yet, there is also a fourth criterion for assessment. This criterion probably applies to all research, since “research” literally means to “search again”. A generated
orthodox GT is “asymptotic” in the sense that it approaches
what goes on, but most likely, it will never reach any ultimate
or final “truth line”. Further research, involving new data, may
bring it closer to the ultimate “truth line” or the asymptote.
Therefore, a generated classic GT is modifiable. It should be
open to modification, and consequently fit as a tool for learning.
(Glaser, 1992, p. 116).

The Challenges for a Novice Classic GT Researcher

There is no reason to expect that it is easier for a beginner
to use these procedures than it is for a beginner to use
advanced quantitative-statistical procedures in research. Yet,
the innate and required abilities to learn these different sets of
procedures may be very different. While attempting to achieve
autonomy in the use of the classic GT methodology, the novice
classic GT researcher has to relinquish all theoretical-disciplinary autonomy over the research process, and to surrender this autonomy and control to the data. This cannot be done without humility and without considerable tolerance for extended periods of confusion, while not controlling “as usual”.
The task of the researcher is to follow where the data might
lead him/her while conceptualizing by constantly comparing,
memo-writing, sorting, etc. From this relinquishing of
autonomy, another kind of autonomy has to emerge. This is
researcher autonomy as the researcher gradually learns to use
the research procedures as prescribed. Such autonomy is not
obtained without accomplishing a major research project.
However, this is a description of a good outcome. A different
outcome is quite possible if no qualified methodological
coaching is available, and the need for such coaching may be
underestimated. The need to emphasize the classic GT research
procedures and stages of research as necessary requirements
for fulfilling the classic GT research rationale may also have
been underestimated. These relationships are fundamental for
fully understanding whether or not classic GT is the right
methodology to choose for a given research task and research
purpose, and also for understanding the methodology.

The suspension of prior knowledge and the keeping of
preconceptions in check will usually lead to long periods of
seeming deadlock, confusion, even depression, while no stable
patterns are seen in the data. In such a situation it becomes
tempting to find another solution than “to discover the core
variable first” for solving the problem of “many equally
justifiable interpretations of the data”. A pre-framed
professional concern or preconceived theoretical perspective
may replace the role of the core variable.

In such a situation it may also become an option to apply
the different GT procedures that are prescribed by, e.g.,
Strauss and Corbin (1990; 1998) as an alternative. The
Strauss-Corbin version of GT also applies a core variable, but
this core variable is found at a later stage of research to sum up
or integrate the findings. (Strauss and Corbin, 1998, pp. 143-
161). This core variable does not have the role of delimiting the study from its start in order to solve the problem of “many equally
justifiable interpretations of the data”. Furthermore, the
Strauss-Corbin version of GT applies the procedures of “axial
coding” and the “consequential/conditional matrix” (ibid., pp.
123-142, 181-199). These represent a different coding paradigm
that replaces the role of theoretical coding, sorting and partly
substantive coding in classic GT, and the role of the “classic GT
form of induction”. This coding paradigm is more restricted. It
favours the generation of concepts that fit within a narrow
range of theoretical codes. These are mostly the theoretical
codes of symbolic interactionism or the stimulus-organism-response
model (ibid., p. 128). As opposed to this, the classic GT
researcher has to be open to the emergence of any type of
theoretical code, and their number may range between 40 and
several hundred (Glaser, 2005, pp. 17-30).

If the researcher needs to pre-frame his/her study, to
predefine the core variable, or to define the core variable at the
end of the study, or to use a given theoretical perspective as a
substitute for finding the core variable as the first stage of
research, or does not want to use “the classic GT form of
induction”, then classic GT definitely will be a wrong choice of
methodology.

A Simpler Approach to Comparing the Methodologies

Detailed explanations of the many methodological
differences are of course necessary, and are especially valuable
when provided by the methodological pioneers. Barney Glaser
(1992) has given his own detailed account of the differences
between classic GT and the version of GT that has been
prescribed by Strauss and Corbin (1990). Glaser’s critique can
easily be misunderstood. Glaser does not claim that classic GT
is a better methodology. Glaser just concludes that the Strauss-
Corbin version of GT is fundamentally different from classic GT
methodology, and that this very different methodology should
be referred to by a different name:

It is a “new” conceptual method, uniquely suited to
qualitative research, that simply uses the grounded
theory name, with the author having no realization of
what grounded theory was in the first place – what it
was in goals, methodology, freedom, level of
abstraction, constant comparison, naturalism,
emergence, trust and care about what the participants
perceive and what their problems are. (Glaser, 1992,
pp. 123-124).

Jan Hartman (2001) has also provided a detailed account of the differences between these two different “grounded theory” approaches. In Hartman’s view, perhaps the most important idea behind grounded theory, as it was conceived by
Glaser and Strauss (1967), is that the theory that is generated
has to emerge without being influenced by a priori theoretical
assumptions, and that all elements in the theory have to be
grounded in data. Hartman concludes that the Corbin-Strauss
GT procedures will not always be able to fulfil this original
intention behind grounded theory. (Hartman, 2001, pp. 41-42).
This also means that the de facto rationale of the Corbin-
Strauss GT methodology is different from classic GT rationale.

In this article, the two “hallmarks” of classic GT have been used to explain the classic GT research rationale. Jointly these two “hallmarks” justify the pivotal role of the core variable in solving the problem of “multiple equally justified interpretations”, the role of the very different “classic GT form of induction” to prevent preconceptions and to facilitate grounding, and the role of the procedure of “conceptualizing while constantly comparing” while applying the “classic GT form of induction” for the detection of stable latent patterns in the data. When this frame is used for comparing methodologies, the fundamental difference between classic GT and logical-deductive or hypothetical-deductive approaches is obvious. The fundamental differences between classic GT and other mainly inductive-qualitative or hermeneutic research approaches, as well, do not need further elaboration.

The first and second hallmarks of classic GT, i.e. the role of the core variable and the very different “classic GT form of induction”, are enough to highlight a fundamental difference. That many of these other methodologies also use procedures for coding and comparing of qualitative data, as well as memo-writing, does not eradicate this difference. Because the “classic GT form of induction” and the role of the core variable differ from other inductive-qualitative approaches, the classic GT procedures for coding, constantly comparing, memo-writing and sorting are applied very differently. To assume that procedures with the same name mean equivalent procedures only leads to confusion. Because of the differences between the classic GT and the Strauss-Corbin sets of research procedures, these two sets of research procedures could lead to the emergence of dissimilar core variables and dissimilar sets of substantive concepts within the same area of research.

The Role of Symbolic Interactionism

Many authors have linked symbolic interactionism with
Glaser’s classic GT. There are many examples, and it is beyond
the scope of this article to comment on them (Alvesson &
Skoldberg, 2000; Denzin & Lincoln, 2000; Creswell, 1998;
Morse, 1994). It has even been stated that symbolic interactionism is the foundational philosophy of the original or
classic GT. If this were true, this would mean that any
prospective classic GT research had to start with a
preconceived or predefined theoretical perspective, namely the
perspective of symbolic interactionism. If this were true, classic
GT would be inconsistent and hence meaningless. Dr. Glaser
has carefully explained that symbolic interactionism is not the
foundational theoretical perspective of classic GT. Classic GT is
a general inductive methodology that presumes no discipline or
theoretical perspective or data type (Glaser, 2005, pp. 141-160).
In his book from 1998, Dr. Glaser gives an account of how his acquaintance with the Chicago school of symbolic interactionism through Anselm Strauss gave him “a chance to analyze qualitative data by applying my quantitative ideas to qualitative data”. It also gave him a chance to fully absorb the notion that man is a meaning-making animal (Glaser, 1998, p. 32). This may have been an important step for a
researcher, who previously had been accustomed to
quantitative research procedures, but this does not mean
adherence to the methodological and theoretical perspective of
symbolic interactionism. However, the axial coding paradigm of
the Strauss-Corbin version of GT is directed towards some pre-
selected theoretical codes (Strauss and Corbin, 1998, p. 128),
and these are quite compatible with symbolic interactionism.

Some Examples that Highlight the Difference

The difference between classic GT and other versions of
GT can be illustrated by some examples. Frederic Lee has
made some attempts to apply GT methodology within the
context of macroeconomics (Lee, 2002a, p. 4; Lee, 2002b; Lee,
2005). However, Lee’s research problem is entirely set within
the paradigm of post-Keynesian economics and heterodox
economics without any focus on what is the most important and
problematic for those being studied. This means that classic GT
will be unsuited for Lee’s research task and research purpose,
and consequently, Lee applies another version of GT.

One example of a GT study in business that
deliberately avoids classic GT is Tomas Brytting’s study of
“Organizing in the small growing firm” (Brytting, 1991). About
the core variable Brytting writes: “The study’s ‘aspect’ or ‘core variable’ was set at the outset: ‘organizing processes in small firms’. An analysis à la Glaser would not have defined that core variable until later on in the research process. With this study’s data, Glaser might have ended up with a theory about sensemaking in the small firm…/…My view in this study has been that generation of theory might benefit from the same systematic and cumulative ambition that guides the testing of theory.” (Ibid., pp. 209-210). Due to Brytting’s research purposes, another version of GT was a more fitting choice for him.
However, Brytting’s understanding of a core variable has
nothing to do with the core variable in classic GT, and it does
not correspond entirely to the meaning of the core variable in
the Strauss-Corbin version of GT. Brytting preconceives the
notion of “sensemaking”, and “organizing processes in small
firms” is just his general research topic.

In her book, “Grounded Theory in Management Research”, Karen Locke (2001) explains the use of the Corbin-
Strauss version of GT. However, it is remarkable that she does
not take Dr. Glaser’s clear position seriously. Dr. Glaser states
that the Corbin & Strauss version of GT is an entirely different
methodology. (Ibid., p. 71). Locke labels both as grounded
theory. Consequently, her readers do not obtain any clarity
regarding the difference between these two research
methodologies. Neither do her readers obtain any clarity
regarding the classic GT research rationale and the
consequential classic GT research procedures and stages of
research. For example, Locke misses the pivotal role of the core variable in classic GT and she does not mention the procedure of sorting. She also states: “Certainly, the school of thought, namely symbolic interactionism, that informed the understanding of social reality expressed in grounded theory’s research practices, appears to have been left behind.” (Ibid., p.
viii). Thus, for Locke, correct use of GT means to view and treat
the data through the “glasses” of one particular theoretical
perspective, namely the perspective of symbolic interactionism.
Avoidance of any such pre-framing is part of the classic GT
research rationale. This may be the clearest difference between
classic GT and other versions of GT.

Conclusion

When the essential elements of classic GT are used as a
frame of reference, a simpler and yet concise comparison of
classic GT and seemingly similar methodologies can be
achieved. The essential elements are: the first hallmark of
classic GT, [“to keep the main concern and its recurrent
solution of those being studied in focus”], the finding of the
consequential core variable as the first stage of research, and
the subsequent and consequential delimiting of the research to
the core variable. These elements minimize the problem of
“many equally justifiable interpretations of the data”.

The Corbin-Strauss version of GT finds a substitute
solution to this problem. This solution is not necessarily an
inferior one. It solves the problem of “many equally justifiable
interpretations of the data” by viewing and treating the data
through the “lens” of a restricted range of possible theoretical
codes and hence pre-selected theoretical perspectives and
possibly also a predetermined professional concern.
Consequently, there is no need to find the core variable as the
first stage of research, or any need or urgency to find it at all.

The second hallmark of classic GT [“to prevent any preconceived professional concerns from masking what actually goes on in the field of study”] cannot apply in the same way, or apply
at all, in the Corbin-Strauss version of GT. This second
hallmark is tantamount to the “classic GT form of induction”,
and it is inconsistent with the axial coding paradigm of the
Strauss-Corbin version of GT. As a consequence, the procedures
of conceptualizing (coding) have to be applied differently in the
Corbin-Strauss version of GT.

Because the Corbin-Strauss version of GT finds a
substitute solution to the problem of “many equally justifiable
interpretations of the data”, a user of this methodology need not endure long periods of seeming deadlock, confusion, even
depression, while no stable patterns are seen in the data. It will
always be easier to interpret the data through the “glasses” of a
pre-determined theoretical perspective, and this will ultimately
yield the findings of a standard solution. To deem this solution inferior, however, is pointless.

Author

Olavur Christiansen
Department of Social Science
University of Faroe Islands
J.C. Svabosgoeta 7
FO-100 Torshavn, Faroe Islands
Email: OlavurC@setur.fo

References

Alvesson, Mats and Skoldberg, Kaj (2000), Reflexive Methodology: New Vistas for Qualitative Research, Sage Publications, London.

Brytting, Tomas (1991), Organizing in the small growing firm,
a grounded theory approach, Published Ph.D.
dissertation, Stockholm School of Economics,
Stockholm.

Christiansen, Ólavur (2005), The theory of “opportunizing” and the sub-process of “conditional befriending”, Journal of Business & Economics Research, Vol. 3, No 4, pp. 73-88.

Christiansen, Ólavur (2006), Opportunizing: A classic grounded theory study on business and management, Grounded Theory Review: An International Journal, Vol. 6, No 1, pp. 109-133.

Creswell, John W. (1998), Qualitative Inquiry and Research
Design: Choosing among Five Traditions, Sage
Publications, London.

Denzin, Norman K. and Lincoln, Yvonna S. (Eds.), (2000),
Handbook of Qualitative Research, Sage Publications,
London.

Glaser, Barney G. (1978), Theoretical Sensitivity: Advances in the Methodology of Grounded Theory, The Sociology Press, Mill Valley, CA.

Glaser, Barney G. (1992), Emergence vs Forcing: Basics of
Grounded Theory Analysis, Sociology Press, Mill Valley,
CA.

Glaser, Barney G. (1998), Doing Grounded Theory: Issues and
Discussions, Sociology Press, Mill Valley, CA.

Glaser, Barney G. (2001), The Grounded Theory Perspective:
Conceptualization Contrasted with Description,
Sociology Press, Mill Valley, CA.

Glaser, Barney G. (2003), The Grounded Theory Perspective II:
Description’s Remodeling of Grounded Theory
Methodology, Sociology Press, Mill Valley, CA.

Glaser, Barney (2004), Glaser’s explanations at a seminar in London, April 2004.

Glaser, Barney G. (2005), Grounded Theory Perspective III:
Theoretical Coding, Sociology Press, Mill Valley, CA.

Glaser, Barney G. and Strauss, Anselm L. (1967), The
Discovery of Grounded Theory: strategies for
qualitative research, Aldine De Gruyter, New York.

Hartman, Jan (2001), Grundad teori, teorigenerering på
empirisk grund, Studentlitteratur, Lund, Sweden.

Kuhn, Thomas S. (1996), The Structure of Scientific
Revolutions, Third Edition, The University of Chicago
Press, Chicago.

Lakatos, Imre (1970), Falsification and the Methodology of
Scientific Research Programmes. In Lakatos and
Musgrave (eds.), 1970, Criticism and the Growth of
Knowledge, Cambridge University Press. Cambridge,
UK.

Lee, Frederic S. (2002a), Post Keynesian Price Theory,
Cambridge University Press, Cambridge, UK.

Lee, Frederic S. (2002b), Theory creation and the
methodological foundation of Post Keynesian
economics, Cambridge Journal of Economics, Vol. 26,
pp. 789-804.

Lee, Frederic S. (2005), Grounded Theory and Heterodox
Economics, The Grounded Theory Review: An
International Journal, Vol. 4, No 2, pp. 95-116.

Locke, Karen (2001), Grounded Theory in Management
Research, Sage Publications, London.

Lowe, Andy (2005), Trust in Emergence. Keynote presentation
delivered to the 3rd International Qualitative Research Convention, Johor Bahru, Malaysia, August 23rd, hosted by Universiti Teknologi Malaysia.

Morse, Janice M. (Ed.), (1994), Critical Issues in Qualitative
Research Methods, Sage Publications, London.

Simmons, Odis E. (2002), Summary of the stages of a GT
research, unpublished paper.

Simmons, Odis E. (1995), Illegitimate use of the “grounded
theory” title, pp. 163-169 in Barney G. Glaser (Ed.),
1995, Grounded Theory 1984-1994, Volume One,
Sociology Press, Mill Valley, CA.

Strauss, Anselm and Corbin, Juliet (1990), Basics of
Qualitative Research. Grounded Theory Procedures and
Techniques, Sage Publications, Newbury Park, CA.

Strauss, Anselm and Juliet Corbin (1998), Basics of Qualitative
Research, Second Edition. Techniques and Procedures
for Developing Grounded Theory, Sage Publications,
London.
