Remodeling Grounded Theory

By Barney G. Glaser Ph.D., Hon. Ph.D. with the assistance of
Judith Holton

Abstract

This paper outlines my concerns with Qualitative Data Analysis’ (QDA)
numerous remodelings of Grounded Theory (GT) and the subsequent eroding
impact. I cite several examples of the erosion and summarize essential
elements of classic GT methodology. It is hoped that the article will clarify my
concerns with the continuing enthusiasm but misunderstood embrace of GT by
QDA methodologists and serve as a preliminary guide to novice researchers
who wish to explore the fundamental principles of GT.

Introduction

The difference between the particularistic, routine, normative data we all garner
in our everyday lives and scientific data is that the latter is produced by a
methodology. This is what makes it scientific. This may sound trite, but it is just
the beginning of many complex issues. Whatever methodology is chosen to render the ensuing research scientific has many implicit and explicit problems.
It implies a certain type of data collection, the pacing and timing for data
collection, a type of analysis and a specific type of research product.

In the case of qualitative data, the explicit goal is description. The clear issue
articulated in much of the literature regarding qualitative data analysis (QDA)
methodology is the accuracy, truth, trustworthiness or objectivity of the data.
This worry over the accuracy of the data focuses on its subjectivity, its interpretative nature, its plausibility, its voice and its constructivism. Achieving accuracy is always worrisome with a QDA methodology.

These are a few of the problems of description. Other QDA problems include
pacing of data collection, the volume of data, the procedure and rigor of data
analysis, generalizability of the unit findings, the framing of the ensuing analysis
and the product. These issues and others are debated at length in the
qualitative research literature. Worrisome accuracy of qualitative data description
continually concerns qualitative researchers and their audiences. I have
addressed these problems at length in “The Grounded Theory Perspective:
Conceptualization Contrasted with Description” (Glaser, 2001).

In this paper I will take up the conceptual perspective of classic Grounded
Theory (GT). (In some of the research literature, classic GT methodology has
also been termed Glaserian GT although I personally prefer the term “classic”
as recognition of the methodology’s origins.) The conceptual nature of classic
GT renders it abstract of time, place and people. While grounded in data, the
conceptual hypotheses of GT do not entail the problems of accuracy that plague
QDA methods.

The mixing of QDA and GT methodologies has the effect of downgrading and
eroding the GT goal of conceptual theory. The result is a default remodeling of
classic GT into just another QDA method with all its descriptive baggage. Given
the ascending focus on QDA by sheer dint of the number of researchers
engaged in qualitative analysis labeled as GT, the apparent merger between the
two methodologies results in default remodeling to QDA canons and techniques.
Conceptual requirements of GT methodology are easily lost in QDA problems of accuracy, types of data, constructivism, participant voice and data collection rigor according to positivistic representative requirements, however couched in a flexibility of approach (see Lowe, 1997). The result is a blocking of classic GT methodology and the loss of its power to transcend the strictures of worrisome accuracy (the prime concern of QDA methods) and to produce conceptual theory that explains fundamental social patterns within the substantive focus of inquiry.

I will address some, but not all, of the myriad remodeling blocks to classic GT analysis brought on by lacing it with QDA descriptive methodological requirements. My goal is to alleviate the bane on good GT analysis brought on by those senior QDA researchers open to no other method, least of all the GT method. I hope to relieve GT of the excessive scientism brought on it by those worried about accuracy and what is “real” data when creating a scientific product. I hope to give explanatory strength to Ph.D. dissertation level students so they can stand their GT ground when struggling in the face of misapplied QDA critique by their seniors and supervisors.

I wish to remind people, yet again, that classic GT is simply a set of integrated
conceptual hypotheses systematically generated to produce an inductive theory
about a substantive area. Classic GT is a highly structured but eminently flexible
methodology. Its data collection and analysis procedures are explicit and the
pacing of these procedures is, at once, simultaneous, sequential, subsequent,
scheduled and serendipitous, forming an integrated methodological “whole” that
enables the emergence of conceptual theory as distinct from the thematic
analysis characteristic of QDA research. I have detailed these matters in my
books “Theoretical Sensitivity” (Glaser, 1978), “Basics of Grounded Theory
Analysis” (Glaser, 1992), “Doing Grounded Theory” (Glaser, 1998a), and “The
Grounded Theory Perspective” (Glaser, 2001). Over the years since the initial
publication of “Discovery of Grounded Theory” (Glaser & Strauss, 1967), the
transcendent nature of GT as a general research methodology has been
subsumed by the fervent adoption of GT terminology and selective application
of discrete aspects of GT methodology into the realm of QDA research
methodology. This multi-method, cherry-picking approach, while obviously acceptable to QDA, is not compatible with the requirements of GT methodology.

Currently it appears to be very popular in QDA research substantive and
methodological papers to label QDA as GT for the rhetorical legitimating effect
and then to critique its various strategies as somewhat less than possible or
effective; then further, to sanctify the mix of methods as one method. Classic GT
is not what these “adopted QDA” usages would call GT. These researchers do
not realize that while often using the same type of qualitative data, the GT and
QDA methods are sufficiently at odds with each other as to be incapable of
integration. Each method stands alone as quite legitimate. The reader is to keep
in mind that this paper is about GT and how to extract it from this remodeling. It
does not condemn QDA in any way. QDA methods are quite worthy, respectable
and acceptable. As I have said above, the choice of methodology to render
research representations about qualitative data as scientific is the researcher’s
choice. But there is a difference between received concepts, problems and
frameworks imposed on data by QDA methods and GT’s focus on the
generation and emergence of concepts, problems and theoretical codes. The
choice of methodology should not be confused, lumped or used piece-meal if
GT is involved. To do so is to erode the conceptual power of GT.

As such, GT procedures and ideas are used to legitimate and buttress routine QDA methodology. Considering the inundation and overload of QDA dictums, “words” and assumed requirements on GT methodology, the reader will see that it is hard both to assimilate and to withstand this avalanche on GT methodology. The assault is so strong and well meaning that many—particularly novice researchers—do not know, nor realize, that GT is being remodeled by default.

The view of this paper is that the researcher who has to achieve a GT product
to move on with his or her career and skill development is often blocked by the
confusion created through this inappropriate mixing of methods and the
attendant QDA requirements thus imposed. Undoing the blocks to GT by this
default remodeling will not be an easy task given the overwhelming confusion
that has resulted and seems destined to continue to grow.

I will deal with as many of the blocks as I see relevant but certainly not all. If I repeat, it will be from different vantage points to undo QDA remodeling in the service of advancing the GT perspective. I will hit hard that GT deals with the data as it is, not what QDA wishes it to be or, more formally, what QDA preconceives to be accurate and forcefully conceptualizes. This requires honesty about taking all data as it comes, figuring it out and then conceptualizing it. I have written at length on “all is data” and on forcing in “Doing Grounded Theory” (Glaser, 1998a).

As I deal with this escalating remodeling of GT to QDA requirements, my hope
is to free GT up to be as originally envisioned. In “Theoretical Sensitivity” I
wrote: “The goal of grounded theory is to generate a conceptual theory that accounts for a pattern of behavior which is relevant and problematic for those involved. The goal is not voluminous description, nor clever verification.” (Glaser, 1978, p. 93)

QDA Blocking of GT

This paper has a simple message. GT is a straightforward methodology. It is a
comprehensive, integrated and highly structured, yet eminently flexible process
that takes a researcher from the first day in the field to a finished written theory.
Following the full suite of GT procedures based on the constant comparative method results in a smooth, uninterrupted emergent analysis and the generation
of a substantive or formal theory. When GT procedures are laced with the
exhaustive, abundant requirements of QDA methodology, GT becomes
distorted, wasting large amounts of precious research time and derailing the
knowledge—hence grounding—of GT as to what is really going on. The
intertwining of GT with preconceived conjecture, preconceptions, forced
concepts and organization, logical connections and before-the-fact professional
interest defaults GT to a remodeling of GT methodology to the status of a mixed
methods QDA methodology. This leads to multiple blocks on conceptual GT.

The word “analysis” is a catchall word for what to do with data. It is “scientized”
up, down and sideways in QDA methodologies catching up GT analysis in its
wake. QDA leads to particularistic analysis based on discrete experiences while
blocking the abstract idea of conceptualizing latent patterns upon which GT is
based. When GT becomes laced with QDA requirements, it is hard to follow to
the point of confusion. Theory development is confused with QDA description
thereby blocking GT generation of conceptual theory.

GT has clear, extensive procedures. When brought into QDA, GT abstraction is
neglected in favor of accuracy of description—the dominant concern of QDA
methodology—and GT acquires the QDA problem of worrisome accuracy—an
irrelevant concern in GT. To repeat, GT methodology is a straightforward
approach to theory generation. To spend time worrying about its place in QDA
methods and science is just fancy, legitimating talk, but the result is the
defaulting of GT to the confusion of QDA analysis.

Creswell in his book “Qualitative Inquiry and Research Design” (1998) lumps GT
into comparisons with phenomenology, ethnography, case study and
biographical life history. The result of the lumping is a cursory default remodeling
of GT to a “kind” of QDA. This lumping of GT with other QDA methods prevents
GT from standing alone as a transcending general research methodology. The
criteria of Creswell’s continuum organize methods according to when theory is
used in research, varying from before the study begins to post-study. By study,
he means data collection and structuring questions. This is a very weak
gradation for discerning the difference between QDA methods and GT
methodology. Creswell clearly does not discern the difference between
generating theory from data collection and generating theory that applies to the
data once collected. Both come during and after data collection, but are very
differently sourced. The result is a lumping and confusion of GT with QDA.

Creswell (1998, p.86) says:

“At the most extreme end of the continuum, toward the ‘after’ end, I
place grounded theory. Strauss and Corbin (1990) are clear that one
collects and analyzes data before using theory in a grounded theory
study. This explains, for example, the women’s sexual abuse study by
Morrow and Smith (1995) in which they generate the theory through
data collection, pose it at the end, and eschew prescribing a theory at
the beginning of the study. In my own studies, I have refrained from
advancing a theory at the beginning of my grounded theory research,
generated the theory through data collection and analysis, posed the
theory as a logic diagram and introduced contending and contrasting
theory with the model I generate at the end of my study (Creswell &
Brown 1992, Creswell and Urbom 1997).”

Creswell may be stating a fundamental tenet of GT—begin with no
preconceived theory and then generate one during the analysis (unless he
meant applying an extant theory). As a distinguishing item of GT, however, it is
barely a beginning, leaving the reader with no knowledge of how generating is
done, because the assumption is that it is done by routine QDA. Contrasting the
generated theory with extant other theories to prove, improve or disprove one or
the other neglects or ignores constantly comparing the theories for category and
property generation. This contrasting with other theories also prevents
modifying the GT generated theory using the other theory as a kind of data.
Both constant comparing and modifying are vital tenets of GT.

GT may or may not be mentioned in a QDA methodological discussion, but its
procedures frequently are. As such, constant comparative analysis, problem
emergence, theoretical sampling, theoretical saturation, conceptual emergence,
memoing, sorting, etc. become laced with QDA requirements thereby defaulting
their rigorous use to a QDA burden. This virtual subversion of GT results in
complex confusion of an otherwise simple methodology for novice researchers.
The researcher is blocked and no longer freed by the power and autonomy
offered by GT to arrive at new emergent, generated theory. The ability to be
honest about what exactly is the data is consequently distorted by the
unattainable quest for QDA accuracy. For example, Kathryn May unwittingly
erodes the GT methodology in QDA fashion when describing the cognitive
processes inherent in data analysis.

“Doing qualitative research is not a passive endeavor. Despite current perceptions and students’ prayers, theory does not magically emerge
from data. Nor is it true that, if only one is patient enough, insight
wondrously enlightens the researcher. Rather, data analysis is a
process that requires astute questioning, a relentless search for
answers, active observation, and accurate recall. It is a process of
piecing together data, of making the invisible obvious, of recognizing the
significant from the insignificant, of linking seemingly unrelated facts
logically, of fitting categories one with another, and of attributing
consequences to antecedents. It is a process of conjecture and
verification, of correction and modification, of suggestion and defense. It
is a creative process of organizing data so that the analytic scheme will
appear obvious.” (May, 1994, p.10)

Dr. May engages in descriptive capture in QDA fashion and attacks the main tenet of GT, that theory can emerge. She is lost in accurate fact research,
which is moot for GT. She prefers to force the data, making it obey her
framework. She does not acknowledge the constant comparative method by
which theory emerges from all data. Again, GT is defaulted to routine QDA.

Similarly, this Ph.D. student—in her e-mail cry to me for help—wanted to do a
GT dissertation but was caught up in QDA and descriptive capture.

“I need some guidance. I’m on wrong track—I don’t care about the main
concerns of clinical social workers in private practice. I care about the
main concerns of anyone attempting to contextualize practice. Maybe
the issue is that I’m interested in an activity regardless of the actor. If I
ask these questions I have no doubt that main concerns will emerge as
well as attempts to continually resolve them. This I care about.” (E-mail
correspondence, Jan 2002)

She is caught by the QDA approach to force the data for a professional concern.
She wants to use GT procedures in service of a QDA forcing approach, which
defaults GT. GT does not work that way, but the prevalence of QDA would have
her think that way. Later, under my guidance, she let the main concern emerge
and did an amazingly good dissertation on binary deconstruction between social
worker and client.

The GT problem and core variable must emerge and it will. I have seen it
hundreds of times. Later, when the GT’s main concern emerges and is
explained in a generated theory, it will have relevance for professional concerns.
Starting before emergence with the professional interest problem is very likely to
result in research with little or no relevance in GT—just routine QDA description
with “as if” importance.

Here is a good example of extensive lacing of GT by QDA needs. The confusion
of QDA requirements and GT procedures, in this example, makes it hard to
follow and clearly erodes GT by default remodeling.

“Comprehension is achieved in grounded theory by using tape-recorded, unstructured interviews and by observing participants in their daily lives. However, the assumptions of symbolic interactionism that underlie grounded theory set the stage for examining process, for identifying stages and phases in the participant’s experience. Symbolic interaction purports that meaning is socially constructed, negotiated and changes over time. Therefore the interview process seeks to elicit a participant’s story, and this story is told sequentially as the events being reported unfold. Comprehension is reached when the researcher has interviewed enough to gain in-depth understanding.” (Morse, 1994, p. 39)

In fact, GT does not require tape-recorded data. Field notes are preferable. GT
uses all types of interviews and, as the study proceeds, the best interview style
emerges. It is not underpinned by symbolic interaction, nor by constructed data. GT
uses “all as data,” of which these are just one kind of data. GT does not
preconceive the theoretical code of process. There are over 18 theoretical
coding families of which process is only one. In GT, its relevance must emerge;
it is not presumed. Interviews lead to many theoretical codes. Participant stories
are moot. Patterns are sought and conceptualized. GT does not search for
description of particularistic accounts. All data are constantly compared to
generate concepts.

Morse continues her description of GT:

“Synthesis is facilitated by adequacy of the data and the processes of analysis. During this phase the researcher is able to create a generalized story and to determine points of departure, of variation in this story. The process of analysis begins with line-by-line analysis to identify first level codes. Second-level codes are used to identify significant portions of the text and compile these excerpts into categories. Writing memos is key to recording insight and facilitates, at an early stage, the development of theory.” (Morse, 1994, p. 39)

It is, indeed, hard to recognize GT procedures in this quote by Morse.
“Adequacy of data” and a “generalized story” smack of worrisome accuracy and
descriptive capture, which are pure QDA concerns. They do not relate to GT
procedures. GT fractures the story in the service of conceptualization. Her
approach to line-by-line analysis is a bare reference to the constant comparative
process, but that is all. Her references to first level, second level codes, portions
of text and compiling excerpts into categories are far from the constant
comparative method designed to generate conceptual categories and their
properties from the outset of data collection and analysis. Writing memos in GT has to do with the immediate recording of generated theoretical conceptual ideas grounded in data, not the mystical—perhaps conjectural—insights to which Morse refers.

Morse continues with her description of GT:

“As synthesis is gained and the variation in the data becomes evident,
grounded theorists sample according to the theoretical needs of the
study. If a negative case is identified, the researcher, theoretically, must
sample for more negative cases until saturation is reached when
synthesis is attained.” (Morse, 1994, p. 39)

Again, finding GT procedures in this description is hard. There is always
variation in the data. GT is concerned with generating a multivariate conceptual
theory—not data variation for QDA. In GT, seeking negative cases is not a
procedure. This is more likely to be preconceived forcing. GT seeks comparative
incidents by theoretical sampling. The purpose in sampling is to generate
categories and their properties. The GT researcher does not know in advance
what will be found. Incidents sampled may be similar or different, positive or
negative. Morse’s reference to saturation does not imply conceptual saturation;
rather, it anticipates simple redundancy without conceptual analysis.

Morse continues:

“Theorizing follows from the processes of theoretical sampling.
Typologies are constructed by determining two significant characteristics
and sorting participants against each characteristic on a 2×2 matrix.
Diagramming is used to enhance understanding and identifying the
basic social process (BSP) that accounts for most of the variation in the
data.” (Morse, 1994, p. 39)

Theorizing in GT is an emergent process generated by continuous cycling of the
integrated processes of collecting, coding and conceptual analysis with the
results written up constantly in memos. Theoretical sampling is just one source
of grounding during the constant comparative method. Preconceiving theoretical
codes such as typologies or basic social processes (BSPs) is not GT. In GT,
relevant theoretical codes emerge in conceptual memo sorting and could be
“whatever.” While the fourfold property space is a good tool, when emergent, for
conceptualizing types (see Glaser & Strauss, “Awareness of Dying,” 1965), it is
not for placing or sorting participants, a priori, nor for counting them. This is
strictly routine, preconceived QDA descriptive capture, not GT.

Morse finishes:

“As with the methods previously discussed, recontextualization is determined by
the level of abstraction attained in the model development. Whereas substantive
theory is context bound, formal theory is more abstract and may be applicable
to many settings or other experiences.” (Morse, 1994, p. 34)

This statement is totally wrong for GT, but it addresses the usual QDA quandary of trying to generalize a description of a unit. In contrast, GT substantive theory always has general implications and can easily be applied to other substantive areas by the constant comparative method of modifying theory. For example, by comparing incidents and modifying the substantive theory of milkmen who engage in cultivating housewives for profit and recreation, a GT of cultivation can apply easily to doctors cultivating clients to build a practice, thereby expanding the original substantive theory to include cultivating down instead of cultivating up the social scale. Formal theory is generated by many such diverse area comparisons done in a concerted way to generate a formal theory of cultivating for recreation, profit, client building, help, donations, etc.

Context must emerge as a relevant category or as a theoretical code like all
other categories in a GT. It cannot be assumed as relevant in advance. As one
applies substantive theory elsewhere or generates formal theory, context—when
relevant—will emerge.

These quotes clearly lump GT into the multi-method QDA camp with the result being default remodeling by erosion of classic GT methodology. Nowhere does Morse refer to the GT procedures of delimiting at each phase of generating, theoretical completeness, conceptual saturation, core variable analysis, open to selective coding, memo banks, analytic rules, theoretical sorting, the writing up, reworking and resorting of memo piles, emergent problem, interchangeability of indices and theoretical (not substantive) coding. The effect of such default remodeling is a great loss of essential GT procedures, blocked by the imposition of QDA worrisome accuracy requirements.

GT requires following its rigorous procedures to generate a theory that fits,
works, is relevant and readily modifiable. When it is adopted, co-opted, and
corrupted by QDA research, a close look at the work often shows that the QDA
researcher is tinkering with the GT method. He or she brings it into a QDA
research design to comply with the strictures and professional expectations of
the dominant paradigm. Getting some kind of product with a few concepts
rescues the QDA research, since the QDA description alone does not suffice.
Then, the GT label is used to legitimate the QDA research.

GT stands alone as a conceptual theory generating methodology. It is a general
methodology. It can use any data, but obviously the favorite data, to date, is
qualitative data. Ergo GT is drawn into the QDA multi-method world and eroded
by consequence, however unwittingly. This muddling of methods and procedures (see Baker, Wuest, & Stern, 1992) may do a tinkering rescue job, but the result is that GT is remodeled by default. GT becomes considered, wrongly, as an
interpretative method, a symbolic interaction method, a constructionist method,
a qualitative method, a describing method, a producer of worrisome facts, a
memoing method, an interview or field method and so forth. It is clear that this
tinkering by QDA researchers indicates they are too derailed by QDA to learn
systematic GT procedures. At best, a few GT procedures are borrowed out of
context.

The authors quoted above are typical of many trying to place GT somewhere in the
QDA camp. First they lace it with some QDA requirements and ideas, which
they then use to lump GT into QDA multi-method thought. Lumping GT in as a
QDA methodology simply does not apply and, indeed, blocks good GT while the
default remodeling of GT into another QDA rages on. Lumping erodes GT. In the
remainder of this article, I will try to show how GT stands alone on its own, as a
conceptualizing methodology. My goal will be to bring out the classic GT
perspective on how GT analysis is done—to lay this method bare—and in the
bargain to show how QDA blocks, as I have said, GT generation and product
proof.

Grounded Theory Procedures

When not laced and lumped with QDA requirements, GT procedures are fairly
simple. The blocking problems come with the method mixing. I have already
written in detail much about GT procedures in “Discovery of Grounded Theory”
(Glaser & Strauss, 1967), “Theoretical Sensitivity” (Glaser, 1978), “Doing
Grounded Theory” (Glaser, 1998a), “Basics of Grounded Theory Analysis”
(Glaser, 1992), “More Grounded Theory Methodology” (Glaser, 1994), and “The
Grounded Theory Perspective” (Glaser, 2001), all by Sociology Press. I have
also published many examples of a “good” GT analysis—”Examples of
Grounded Theory” (Glaser, 1993), “Grounded Theory 1984 to 1994” (Glaser,
1995), “Gerund Grounded Theory” (Glaser, 1998b)—and have given many
references in my books.

The GT product is simple. It is not a factual description. It is a set of carefully
grounded concepts organized around a core category and integrated into
hypotheses. The generated theory explains the preponderance of behavior in a
substantive area with the prime mover of this behavior surfacing as the main
concern of the primary participants. I have said over and over that GT is not
findings, not accurate facts and not description. It is just straightforward
conceptualization integrated into theory—a set of plausible, grounded
hypotheses. It is just that—no more—and it is readily modifiable as new data
come from whatever source—literature, new data, collegial comments, etc. The
constant comparative method weaves the new data into the sub-conceptualization.
What is important is to use the complete package of GT procedures as an
integrated methodological whole.

The following is a summary of the essential elements of GT methodology. Bear in mind, when reading this summary, that the goal of GT is conceptual theory abstract of time, place and people. The goal of GT is NOT the QDA quest for accurate description.

Theoretical sensitivity

The ability to generate concepts from data and to relate them according to
normal models of theory in general, and theory development in sociology in
particular, is the essence of theoretical sensitivity. Generating a theory from data
means that most hypotheses and concepts not only come from the data, but are
systematically worked out in relation to the data during the course of the
research. A researcher requires two essential characteristics for the
development of theoretical sensitivity. First, he or she must have the personal
and temperamental bent to maintain analytic distance, tolerate confusion and
regression while remaining open, trusting to preconscious processing and to
conceptual emergence. Second, he/she must have the ability to develop
theoretical insight into the area of research combined with the ability to make
something of these insights. He/she must have the ability to conceptualize and
organize, make abstract connections, visualize and think multivariately. The first
step in gaining theoretical sensitivity is to enter the research setting with as few
predetermined ideas as possible—especially logically deduced, a priori hypotheses. The research problem and its delimitation are discovered. The pre-framework efforts of QDA block this theoretical sensitivity.

Getting started

A good GT analysis starts right off with regular daily data collecting, coding and
analysis. The start is not blocked by a preconceived problem, a methods chapter
or a literature review. The focus and flow is immediately into conceptualization
using the constant comparative method. The best way to do GT is to just do it. It
cannot fail as the social psychological world of structure, culture, social
interaction, social organization etc. goes on irrespective. There always is a main
concern and there always is a prime mover. As an open, generative and
emergent methodology, GT provides an honest approach to the data that lets
the natural organization of substantive life emerge. The GT researcher listens to
participants venting issues rather than encouraging them to talk about a subject
of little interest. The mandate is to remain open to what is actually happening and not to start filtering data through preconceived hypotheses and biases; it is to listen and observe and thereby discover the main concern of the participants in the field and how they resolve this concern. The forcing of preconceived notions (an initial professional problem, or an extant theory and framework) is suspended in the service of seeing what will emerge conceptually by constant
comparative analysis. When QDA requires this preconception, GT is rendered
non-emergent through coding and memoing as the researcher tries to follow a
non-emergent problem.

All is data

GT stands alone as a conceptual theory generating methodology. It can use any
data, but obviously the favorite data to date is qualitative. While interviews are
the most popular, GT works with any data—”all is data”—not just one specific
data. It is up to the GT researcher to figure out what data they are getting. The
data may be baseline, vague, interpreted or proper-line. The data is not to be
discounted as “not objective,” as “subjective,” “obvious,” “constructed,” etc, as we
fine in QDA critiques. There is always a perception of a perception as the
conceptual level rises.We are all stuck with a “human” view of what is going on
and hazy concepts and descriptions about it. GT procedures sharpen the
generated concepts systematically.

Use of the literature

It is critical in GT methodology to avoid unduly influencing the preconceptualization
of the research through extensive reading in the substantive
area and the forcing of extant theoretical overlays on the collection and analysis
of data. To undertake an extensive review of literature before the emergence of
a core category violates the basic premise of GT—that being, the theory
emerges from the data not from extant theory. It also runs the risk of clouding
the researcher’s ability to remain open to the emergence of a completely new
core category that has not figured prominently in the research to date, thereby thwarting theoretical sensitivity. Practically, it may well result in the
researcher spending valuable time on an area of literature that proves to be of
little significance to the resultant GT. Instead, GT methodology treats the
literature as another source of data to be integrated into the constant
comparative analysis process once the core category, its properties and related
categories have emerged and the basic conceptual development is well
underway. The pre-study literature review of QDA is a waste of time and a derailing of relevance for the GT study.

Theoretical coding

The conceptualization of data through coding is the foundation of GT
development. Incidents articulated in the data are analyzed and coded, using
the constant comparative method, to generate initially substantive, and later
theoretical, categories. The essential relationship between data and theory is a
conceptual code. The code conceptualizes the underlying pattern of a set of
empirical indicators within the data. Coding gets the analyst off the empirical
level by fracturing the data, then conceptually grouping it into codes that then
become the theory that explains what is happening in the data. A code gives the
researcher a condensed, abstract view with scope of the data that includes
otherwise seemingly disparate phenomena. Substantive codes conceptualize
the empirical substance of the area of research. Theoretical codes
conceptualize how the substantive codes may relate to each other as
hypotheses to be integrated into the theory. Theoretical codes give integrative
scope, broad pictures and a new perspective. They help the analyst maintain the
conceptual level in writing about concepts and their interrelations.

Open coding

It is at the beginning, with open coding—and a minimum of preconception—that the analyst is most tested: tested as to trust in himself and in the grounded method, as to skill in using the method, and as to the ability to generate codes and find relevance. The process begins with line-by-line open coding of the data to
identify substantive codes emergent within the data. The analyst begins by
coding the data in every way possible—”running the data open.” From the start,
the analyst asks a set of questions of the data—”What is this data a study of?”
“What category does this incident indicate?” “What is actually happening in the
data?” “What is the main concern being faced by the participants?” and “What
accounts for the continual resolving of this concern?” These questions keep the
analyst theoretically sensitive and transcending when analyzing, collecting and
coding the data. They force him/her to focus on patterns among incidents that
yield codes and to rise conceptually above detailed description of incidents. The
analyst codes for as many categories as fit successive, different incidents, while
coding into as many categories as possible. New categories emerge and new
incidents fit into existing categories.
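For readers who think in code, a minimal bookkeeping sketch of open coding follows, in Python. It is purely illustrative: the function, the field-note fragments and the codes attached to them are invented for this example, and the conceptual act of naming what an incident indicates remains entirely the analyst's.

```python
from collections import defaultdict

def open_code(coded_incidents):
    """Group incidents (indicators) under every code the analyst assigned.

    coded_incidents: pairs of (incident_text, [codes the analyst saw in it]).
    Running the data open means one incident may sit in many categories,
    and a new category appears whenever a new code is assigned.
    """
    categories = defaultdict(list)
    for incident, codes in coded_incidents:
        for code in codes:
            categories[code].append(incident)
    return categories

# Hypothetical fragment of line-by-line coding of a field note:
notes = [
    ("nurse checks the chart before answering the family", ["information control"]),
    ("doctor deflects a prognosis question with small talk",
     ["information control", "hope preserving"]),
]
for category, incidents in open_code(notes).items():
    print(category, "->", len(incidents), "indicator(s)")
```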

Open coding allows the analyst to see the direction in which to take the study by
theoretical sampling before he/she has become selective and focused on a
particular problem. Thus, when he/she does begin to focus, he/she is sure of
relevance. The researcher begins to see the kind of categories that can handle
the data theoretically, so that he/she knows how to code all data, ensuring the
emergent theory fits and works. Open coding allows the analyst the full range of
theoretical sensitivity as it allows him/her to take chances on trying to generate codes
that may fit and work.

Line-by-line coding forces the analyst to verify and saturate categories, minimizes missing an important category and ensures the grounding of categories in the data beyond impressionism. The result is a rich, dense theory
with the feeling that nothing has been left out. It also corrects the forcing of “pet”
themes and ideas, unless they have emergent fit. The analyst must do his/her
own coding. Coding constantly stimulates ideas. The preplanned coding efforts
of routine QDA to suit the preconceived professional problem easily remodel GT
by stifling its approach.

Theoretical sampling

Theoretical sampling is the process of data collection for generating theory
whereby the analyst jointly collects, codes and analyses the data and decides
what data to collect next and where to find them, in order to develop the theory
as it emerges. The process of data collection is controlled by the emerging
theory, whether substantive or formal. Beyond the decisions concerning initial
collection of data, further collection cannot be planned in advance of the
emerging theory. Only as the researcher discovers codes and tries to saturate
them by looking for comparison groups do both (1) the codes and their properties and (2) where to collect data on them emerge. By identifying
emerging gaps in the theory, the analyst will be guided as to next sources of
data collection and interview style. The basic question in theoretical sampling is
what groups or subgroups does one turn to next in data collection—and for what
theoretical purpose? The possibilities of multiple comparisons are infinite and so
groups must be chosen according to theoretical criteria. The criteria—of
theoretical purpose and relevance—are applied in the ongoing joint collection
and analysis of data associated with the generation of theory. As such, they are
continually tailored to fit the data and are applied judiciously at the right point
and moment in the analysis. In this way, the analyst can continually adjust the
control of data collection to ensure the data’s relevance to the emerging theory.

Clearly this approach to data collection done jointly with analysis is far different
from the typical QDA preplanned, sequential approach to data collection and
management. Imposing the QDA approach on GT would block it from the start.
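The joint cycle just described can be pictured schematically, as in the hypothetical Python sketch below. Every callable parameter stands in for an analyst judgment that cannot be computed; the sketch only shows how the emerging theory, not a prior plan, controls what is collected next.

```python
def theoretical_sampling(first_source, collect, code_and_compare,
                         find_gaps, choose_next_source):
    """Schematic of jointly collecting, coding and analyzing data.

    The callables stand in for the analyst's judgments: what the data say,
    where the theory is thin, and which group to turn to next and why.
    """
    source, memos = first_source, []
    while True:
        data = collect(source)             # field notes, interviews, documents
        memos += code_and_compare(data)    # constant comparison -> theoretical memos
        gaps = find_gaps(memos)            # emerging gaps in the theory
        if not gaps:                       # categories saturated: stop sampling
            return memos
        source = choose_next_source(gaps)  # next group, for a theoretical purpose
```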

Constant comparative method

The constant comparative method enables the generation of theory through
systematic and explicit coding and analytic procedures. The process involves
three types of comparison. Incidents are compared to incidents to establish
underlying uniformity and its varying conditions. The uniformity and the
conditions become generated concepts and hypotheses. Then, concepts are
compared to more incidents to generate new theoretical properties of the
concept and more hypotheses. The purpose is theoretical elaboration, saturation
and verification of concepts, densification of concepts by developing their
properties and generation of further concepts. Finally, concepts are compared to
concepts. The purpose is to establish the best fit of many choices of concepts to
a set of indicators, the conceptual levels between the concepts that refer to the
same set of indicators and the integration into hypotheses between the
concepts, which becomes the theory. Comparisons in QDA research are
between far more general ideas, leading to categories that are not tightly grounded.
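As a rough illustration of these three comparison types, the following Python sketch models them as data shapes. The concept name, indicators and hypothesis are invented, and the comparisons themselves remain conceptual acts of the analyst, not computations.

```python
# 1. Incident to incident: comparing indicators until an underlying
#    uniformity is seen and named, which generates a concept.
concept = {
    "name": "cultivating",                   # the named uniformity
    "indicators": ["incident A", "incident B"],
    "properties": [],                        # theoretical properties, still empty
}

# 2. Concept to further incidents: each new indicator either saturates the
#    concept or generates a new theoretical property of it.
def compare_incident(concept, incident, new_property=None):
    concept["indicators"].append(incident)
    if new_property is not None:
        concept["properties"].append(new_property)

compare_incident(concept, "incident C", new_property="varies by social distance")

# 3. Concept to concept: establishing best fit to a shared set of indicators
#    and integrating concepts into hypotheses, which become the theory.
hypothesis = ("cultivating", "increases with", "client dependence")
```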

Core variable

As the researcher proceeds to compare incident to incident in the data, then
incidents to categories, a core category begins to emerge. This core variable,
which appears to account for most of the variation around the concern or
problem that is the focus of the study, becomes the focus of further selective
data collection and coding efforts. It explains how the main concern is
continually resolved. As the analyst develops several workable coded
categories, he/she should begin early to saturate as much as possible those
that seem to have explanatory power. The core variable can be any kind of
theoretical code—a process, a condition, two dimensions, a consequence, a
range and so forth. Its primary function is to integrate the theory and render it
dense and saturated. It takes time and much coding and analysis to verify a
core category through saturation, relevance and workability. The criteria for
establishing the core variable within a GT are that it is central, relating to as
many other categories and their properties as possible and accounting for a
large portion of the variation in a pattern of behavior. The core variable recurs
frequently in the data and comes to be seen as a stable pattern that is more
and more related to other variables. It relates meaningfully and easily with other
categories. It has clear and grabbing implications for formal theory. It is
completely variable and has carry through in the emerging theory, enabling the
analyst to get through the analyses of the processes that he/she is working on
by its relevance and explanatory power. Core variable conceptual theory is far beyond QDA description or conceptual description, which are unending since
they are not tied down to a conceptual scheme. A reversion to QDA clearly
blocks this necessary theoretical completeness.

Selective coding

Selective coding means to cease open coding and to delimit coding to only
those variables that relate to the core variable in sufficiently significant ways as
to produce a parsimonious theory. Selective coding begins only after the analyst
is sure that he/she has discovered the core variable. QDA researchers have
never figured out the exact purpose and techniques of selective coding. Often
they selectively code from the start with preconceived categories.

Delimiting

Subsequent data collection and coding are thereby delimited to that which is
relevant to the emergent conceptual framework. This selective data collection
and analysis continues until the researcher has sufficiently elaborated and
integrated the core variable, its properties and its theoretical connections to
other relevant categories.

Integrating a theory around a core variable delimits the theory and thereby the
research project. This delimiting occurs at two levels—the theory and the
categories. First the theory solidifies, in the sense that major modifications
become fewer and fewer as the analyst compares the next incidents of a
category to its properties. Later modifications are mainly on the order of
clarifying the logic, taking out non-relevant properties, integrating elaborating
details of properties into the major outline of interrelated categories and—most
important—reduction. Reduction occurs when the analyst discovers underlying
uniformity in the original set of categories or their properties and then
reformulates the theory with a smaller set of higher-level concepts. The second
level of delimiting the theory is a reduction in the original list of categories for
coding. As the theory grows, becomes reduced, and increasingly works better
for ordering a mass of qualitative data, the analyst becomes committed to it.
This allows the researcher to pare down the original list of categories for
collecting and coding data, according to the present boundaries of the theory.
The analyst now focuses on one category as the core variable and only
variables related to the core variable will be included in the theory. The list of
categories for coding is further delimited through theoretical saturation. Since
QDA researchers focus on full description, and no core variable conceptual
analysis, delimiting does not occur in QDA research. It just goes on and on, with tiny empirical topics draining both researcher and audience.

Interchangeability of indicators

GT is based on a concept-indicator model of constant comparisons of incidents
(indicators) to incidents (indicators) and, once a conceptual code is generated,
of incidents (indicators) to emerging concept. This forces the analyst into
confronting similarities, differences and degrees in consistency of meaning
between incidents (indicators), generating an underlying uniformity which in turn
results in a coded category and the beginnings of properties of it. From the
comparisons of further incidents (indicators) to the conceptual codes, the code
is sharpened to achieve its best fit while further properties are generated until
the code is verified and saturated.

Conceptual specification, not definition, is the focus of GT. The GT concept-indicator model requires concepts and their dimensions to earn their way into the theory by systematic generation from data. Changing incidents (indicators) and
thereby generating new properties of a code can only go so far before the
analyst discovers saturation of ideas through interchangeability of indicators.
This interchangeability produces, at the same time, the transferability of the
theory to other areas by linking to incidents (indicators) in other substantive or
sub-substantive areas that produce the same category or properties of it.
Interchangeability produces saturation of concepts and their properties, not
redundancy of description as some QDA methodologists would have it (see
Morse, 1995, p.147).

Pacing

Generating GT takes time. It is above all a delayed action phenomenon. Little
increments of coding, analyzing and collecting data cook and mature and then
blossom later into theoretical memos. Significant theoretical realizations come
with growth and maturity in the data, and much of this is outside the analyst’s
awareness until preconscious processing becomes conscious. Thus the analyst
must pace himself, exercise patience and accept nothing until something
happens, as it surely does. Surviving the apparent confusion is important. This requires that the analyst take whatever amount of quality time the discovery process requires and that he/she learn to take this time in a manner consistent with his/her own temporal nature as an analyst—the personal pacing. Rushing or forcing the process will shut down the analyst's creativity and conceptual abilities, exhausting his/her energy and leaving the researcher empty and the theory thin and incomplete. In QDA work researchers are paced
sequentially through the program and framework, and often driven to long
periods of no product and exhaustion. To overlay this QDA program on GT
severely remodels GT to its deficit.

Memoing

Theory articulation is facilitated through an extensive and systematic process of
memoing that parallels the data analysis process in GT. Memos are theoretical
notes about the data and the conceptual connections between categories. The
writing of theoretical memos is the core stage in the process of generating
theory. If the analyst skips this stage by going directly to sorting or writing up,
after coding, he/she is not doing GT.

Memo writing is a continual process that leads naturally to abstraction or
ideation—continually capturing the “frontier of the analyst’s thinking” as he/she
goes through data and codes, sorts and writes. It is essential that the analyst
interrupts coding to memo ideas as they occur if he/she is to reap the subtle
reward of the constant input from reading the data carefully, asking the above
questions and coding accordingly. Memos help the analyst to raise the data to a
conceptual level and develop the properties of each category that begin to
define them operationally. Memos present hypotheses about connections
between categories and/or their properties and begin to integrate these
connections with clusters of other categories to generate the theory. Memos
also begin to locate the emerging theory with other theories with potentially
more or less relevance.

The basic goal of memoing is to develop ideas (codes) with complete freedom
into a memo fund that is highly sortable. Memo construction differs from writing
detailed description. Although typically based on description, memos raise that
description to the theoretical level through the conceptual rendering of the
material. Thus, the original description is subsumed by the analysis. Codes
conceptualize data. Memos reveal and relate by theoretically coding the
properties of substantive codes—drawing and filling out analytic properties of
the descriptive data.
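To make the idea of a highly sortable memo fund concrete, here is a minimal Python sketch of a memo record. All field names are hypothetical; the record only supports retrieval, while the hand sorting of memos (see "Sorting and writing up" below) remains the analyst's conceptual work.

```python
from dataclasses import dataclass, field

@dataclass
class Memo:
    """One theoretical note; every field name here is a hypothetical choice."""
    idea: str                                        # the idea as it occurred
    concepts: list = field(default_factory=list)     # categories/properties it relates
    theoretical_code: str = ""                       # e.g. condition, consequence
    grounded_in: list = field(default_factory=list)  # indicators behind the idea

memo_fund = [
    Memo(idea="cultivating intensifies when repeat business is at stake",
         concepts=["cultivating", "client control"],
         theoretical_code="condition",
         grounded_in=["milkman route notes, week 3"]),
]

# Pulling every memo touching one concept is what keeps the fund sortable;
# deciding where each memo fits in the theory stays the analyst's hand work.
on_cultivating = [m for m in memo_fund if "cultivating" in m.concepts]
```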

Early on memos arise from constant comparison of indicators to indicators, then
indicators to concepts. Later on memos generate new memos, reading literature
generates memos, sorting and writing also generate memos—memoing is never
done! Memos slow the analyst’s pace, forcing him/her to reason through and verify categories and their integration and fit, relevance and work for the theory. In this
way, he/she does not prematurely conclude the final theoretical framework and
core variables.

Comparative reasoning in memos—by constant comparisons—undoes
preconceived notions, hypotheses, and scholarly baggage while at the same
time constantly expanding and breaking the boundaries of current analyses.
Memos are an excellent source of directions for theoretical sampling—they point
out gaps in existing analyses and possible new related directions for the
emerging theory. Clearly the preconceived approach and framework of QDA
research is in conflict with the freedom of memoing. The conflict is most often
resolved by the preponderance of QDA research and GT loses this vital aspect.

Sorting and writing up

Throughout the constant comparative coding process, the researcher has been
capturing the emergent ideation of substantive and theoretical categories in the
form of memos. Once the researcher has achieved theoretical saturation of the
categories, he/she proceeds to review, sort and integrate the numerous memos
related to the core category, its properties and related categories. The sorted
memos generate a theoretical outline, or conceptual framework, for the full
articulation of the GT through an integrated set of hypotheses.

Ideational memos are the fund of GT. Theoretical sorting of the memos is the
key to formulating the theory for presentation or writing. Sorting is essential—it
puts the fractured data back together. With GT, the outline for writing is simply
an emergent product of the sorting of memos. There are no preconceived
outlines. GT generates the outline through the sorting of memos by the sorting
of the categories and properties in the memos into similarities, connections and
conceptual orderings. This forces patterns that become the outline.

To preconceive a theoretical outline is to risk logical elaboration. Instead,
theoretical sorting forces the “nitty gritty” of making theoretically discrete
discriminations as to where each idea fits in the emerging theory. Theoretical
sorting is based on theoretical codes. The theoretical decision about the precise
location of a particular memo—as the analyst sees similarities, connections and
underlying uniformities—is based on the theoretical coding of the data that is
grounding the idea.

If the analyst omits sorting, the theory will be linear, thin and less than fully
integrated. Rich, multi-relation, multivariate theory is generated through sorting.
Without sorting, a theory lacks the internal integration of connections among
many categories. With sorting, data and ideas are theoretically ordered. Sorting
is conceptual sorting, not data sorting. Sorting provides theoretical
completeness. Sorting generates more memos—often on higher conceptual
levels—furthering and condensing the theory. It integrates the relevant literature
into the theory, sorting it with the memos.

Sorting also has a conceptual, zeroing-in capacity. The analyst soon sees where
each concept fits and works, its relevance and how it will carry forward in the
cumulative development of the theory. Sorting prevents over-conceptualization
and pre-conceptualization, since these excesses fall away as the analyst zeros in on
the most parsimonious set of integrated concepts. Thus, sorting forces
ideational discrimination between categories while relating them, integrating
them and preventing their proliferation. The constant creativity of sorting memos
prevents the use of computer sorting as used in QDA work.

Analytic rules developed during sorting

While theoretical coding establishes the relationship among variables, analytic
rules guide the construction of the theory as it emerges. They guide the
theoretical sorting and subsequent writing of the theory. Analytic rules detail
operations, specify foci, delimit and select use of the data and concepts, act as
reminders of what to do and keep track of and provide the necessary discipline
for sticking to and keeping track of the central theme as the total theory is
generated.

There are several fundamental analytic rules. First, sorting can start anywhere. It
will force its own beginning, middle, and end for writing. The important thing is to
start. Trying conceptually to locate the first memos will force the analyst to start
reasoning out the integration. Once started, the analyst soon learns where ideas
are likely to integrate best and sorting becomes generative and fun. Start with
the core variable and then sort all other categories and properties only as they
relate to the core variable. This rule forces focus, selectivity and delimiting of the
analysis. Theoretical coding helps in deciding and in figuring out the meaning of
the relation of a concept to the core variable. This theoretical code should be
written and sorted into the appropriate pile with the substantive code. Once
sorting on the core variable begins, the constant comparisons are likely to
generate many new ideas, especially on theoretical codes for integrating the
theory. Stop sorting and memo! Then, sort the memo into the integration.

The analyst carries forward to subsequent sorts the use of each concept from
the point of its introduction into the theory. The concept is illustrated only when it
is first introduced to develop the imagery of its meaning. Thereafter, only the
concept is used, not the illustration. All ideas must fit in somewhere in the
outline or the integration must be changed or modified. This is essential for, if
the analyst ignores fitting in all categories, he/she will break out of the theory
too soon and necessary ideas and relations will not be used. This rule is based
on the assumption that the social world is integrated and the job of the analyst
is to discover it. If he/she cannot find the integration, he/she must re-sort and reintegrate
the concepts to fit better. The analyst moves back and forth between
outline and ideas as he/she sorts, forcing underlying patterns, integrations and
multivariate relations between the concepts. The process is intensely generative,
yielding many theoretical coding memos to be resorted into the outline. Again it
cannot be done by the simple code and retrieve of computer sorting.

Sorting forces the analyst to introduce an idea in one place and then establish
its carry forward when it is necessary to use it again in other relations. When in
doubt about a place to sort an idea, put it in that part of the outline where the
first possibility of its use occurs, with a note to scrutinize and pass forward to
the next possible place. Theoretical completeness implies theoretical coverage
as far as the study can take the analyst. It requires that, in cutting off the study,
he/she explains with the fewest possible concepts and with the greatest possible
scope, as much variation as possible in the behavior and problem under study.
The theory thus explains sufficiently how people continually resolve their main
concern with concepts that fit, work, have relevance and are saturated.

Summary

Always keep in mind that GT methodology is itself a GT that emerged from
doing research on dying patients in 1967. It was discovered, not invented. It is a
sure thing for researchers to cast their fate with. It was not thought up as a
proffered approach to doing research based on conjectural “wisdoms” from
science, positivism or naturalism. It is not a concoction based on logical
“science” literature telling us how science ought to be.

GT gives the social psychological world a rhetoric—a jargon to be sure—but
one backed up by systematic procedures. It is not an empty rhetoric, but
unfortunately it often takes time for GT procedures to catch up to rhetoric with
“grab.” Part of the delayed learning is the remodeling—hence blocking—by QDA
requirements, especially the accuracy quest.

One promise is that the abstraction of GT from data—generating GT—does
away with the problems of QDA that are “scientized” on and on. As the GT
researcher (especially a Ph.D. student) does GT analysis that produces a
substantive, conceptual theory with general implications—not descriptive
findings—he or she will advisably steer clear of the quicksand of the descriptive
problems. QDA problems are numerous. A short list of these would include
accuracy, interpretation, construction, meaning, positivistic canons and
naturalistic canons of data collection and analysis of unit samples, starting with
preconceived structured interviews right off, sequencing frameworks,
preconceived professional problems, pet theoretical codes, and so on. The list
is long, the idea is clear.

“Minus mentorees” should be cautious, in their aloneness, about seeking too
much guidance from “one book read” mentors and the intrusive erosion that
results as these mentors try to make sense of GT in their QDA context. They
should seek help from people who have written a GT book.
———

The time for GT to explain and be applied to “what is going on” means leaving behind the onslaught of QDA methodologies, which so erode and remodel it. Evert Gummesson says it clearly in his recent paper, “Relationship Marketing and the New Economy: It’s Time for De-Programming” (2002). What Gummesson says about marketing applies equally to nursing, medicine, education, social work and other practicing professions as well as academic work.

“Today’s general textbooks perpetuate the established marketing management epic from the 1960s with the new just added as extras. It is further my contention that marketing education has taken an unfortunate direction and has crossed the fine line between education and brainwashing. The countdown of a painful—but revitalizing—process of deprogramming has to be initiated.

What do we need in such a situation? A shrink? No, it is less sophisticated than that. All we need is systematic application of common sense, both in academe and in corporations. We need to use our observational capacity in an inductive mode and allow it to receive the true story of life, search for patterns and build theory. Yes, theory. General marketing theory that helps us put events and activities into a context. This is all within the spirit of grounded theory, widespread in sociology but little understood by marketers. My interpretation of a recent book on the subject by Glaser (2001) is as follows: ‘take the elevator from the ground floor of raw substantive data and description to the penthouse of conceptualization and general theory. And do this without paying homage to the legacy of extant theory.’ In doing this, complexity, fuzziness and ambiguity are received with cheers by the researchers and not shunned as unorderly and threatening as they are by quantitative researchers. Good theory is useful for scholars and practicing managers alike.” (Gummesson, 2002, p. 132)

I trust that this paper demonstrates how freedom from QDA requirements will
allow unfettered GT procedures to result in generated theory that fulfills
Gummesson’s vision.

Authors

Barney G. Glaser Ph.D., Hon. Ph.D.
The Grounded Theory Institute
P.O. Box 400
Mill Valley, CA 94942
USA

Judith A. Holton
10 Edinburgh Drive
Charlottetown, PE C1A 3E8
Canada

Correspondence
Tel: 415 388 8431
Fax: 415 381 2254
E-mail: bglaser@speakeasy.net
judith@islandtelecom.com

References

Baker, Cynthia, Wuest, Judith, & Stern, Phyllis (1992). Method Slurring: The Phenomenology/Grounded Theory Example. Journal of Advanced Nursing, 17, 1355-1360.

Creswell, John W. (1998). Qualitative Inquiry and Research Design. Thousand Oaks, CA:
Sage.

Glaser, Barney G. (1978). Theoretical Sensitivity: Advances in the Methodology of
Grounded Theory. Mill Valley, Ca.: Sociology Press.

Glaser, Barney G. (1992). Basics of Grounded Theory Analysis. Mill Valley, Ca.: Sociology
Press.

Glaser, Barney G. (Ed.) (1993). Examples of Grounded Theory. A Reader. Mill Valley, Ca.:
Sociology Press.

Glaser, Barney G. (Ed.) (1994). More Grounded Theory Methodology. A Reader. Mill
Valley, Ca.: Sociology Press.

Glaser, Barney G. (Ed.) (1995). Grounded Theory 1984 to 1994. Mill Valley, Ca.:
Sociology Press.

Glaser, Barney G. (1998a). Doing Grounded Theory. Issues and Discussions. Mill Valley,
Ca.: Sociology Press.

Glaser, Barney G., with the assistance of W. Douglas Kaplan (Ed.) (1998b). Gerund
Grounded Theory: The Basic Social Process Dissertation. Mill Valley, Ca.: Sociology
Press.

Glaser, Barney G. (2001). The Grounded Theory Perspective: Conceptualization
Contrasted with Description. Mill Valley, Ca.: Sociology Press.

Glaser, Barney G., & Strauss, Anselm L. (1965). Awareness of Dying. Chicago: Aldine
Publishing Co.

Glaser, Barney G., & Strauss, Anselm L. (1967). Discovery of Grounded Theory. Mill
Valley, Ca.: Sociology Press.

Gummesson, Evert (2002). Relationship Marketing and the New Economy: It’s Time for De-Programming. Journal of Services Marketing, 16(7), 585-589.

Lowe, Andy (1997). Managing the Post-Merger Aftermath: Default Remodeling. Dept. of Marketing, University of Strathclyde (Grounded Theory Review).

May, Kathryn (1994). The Case for Magic in Method. In Janice Morse (Ed.), Critical
Issues in Qualitative Research Methods (pp.10-22). Thousand Oaks, CA: Sage.

Morse, Janice (1994). “Emerging from the Data”: Cognitive Processes of Analysis in Qualitative Research. In Janice Morse (Ed.), Critical Issues in Qualitative Research Methods (pp. 23-41). Thousand Oaks, CA: Sage.

Morse, Janice (1995). Editorial. Qualitative Health Research, 5(2), 147-149.
