Alvita Nathaniel, Ph.D., FNP-BC, FAANP
In the U.S., research involving human subjects must be approved by an Institutional Review Board (IRB), which evaluates research protocols for the purpose of protecting human subjects. This paper offers a brief history of the development of public policy that guides institutional review boards in the U.S. and commentary on the responsibilities of a grounded theory researcher applying for approval of a research study.
An institutional review board (IRB) is a formally constituted committee that approves and monitors biomedical and behavioral research with the purpose of protecting the rights and welfare of research participants. An IRB performs scientific, ethical, and regulatory oversight functions. In the U.S., it is common for grounded theorists to experience frustration with the IRB protocol submission process. Facets of the application process may seem rigid, redundant, and inapplicable. Review board members may not seem to understand or appreciate qualitative methods, and delays are common. In addition, a conglomeration of disparate policies and procedures, coupled with a variety of types of review boards, creates a system that defies description. Nevertheless, a researcher who understands public policy and the responsibilities of institutional review boards can learn to develop research applications that are approved quickly.
Created to protect the rights of human subjects, institutional review boards’ policies and procedures flow from ethical principles and two critical 20th-century documents. The ethical considerations of harm versus benefit, privacy, confidentiality, respect for persons, truthfulness, and autonomy undergird the protection of human research participants. These principles began to be codified during the Nuremberg trials (October 1946 – April 1949), convened in response to atrocities committed by Nazi-era German physicians in the name of medical research. Developed by the panel of international judges overseeing the Nuremberg Military Tribunals with the assistance of physician consultants (Shuster, 1997), the code served as a set of principles against which the experiments in the concentration camps could be judged (Burkhardt & Nathaniel, 2008). Subsequently, the Nuremberg Code became a blueprint for the Declaration of Helsinki. In 1964, the World Medical Association developed this declaration, addressed primarily to physicians, as a “statement of ethical principles for medical research involving human subjects….” (World Medical Association, 1964). In the years that followed, governments began to develop regulations based upon ethical principles, the Nuremberg Code, and the Declaration of Helsinki.
In 1962, the Kefauver-Harris Bill expanded the principles of the Nuremberg Code by ensuring greater drug safety in the United States. Enacted after thalidomide was found to have caused severe birth defects, the Kefauver-Harris Bill 1) empowered the Food and Drug Administration (FDA) to ban drug experiments on humans until safety tests had been completed on animals, 2) required drug manufacturers and researchers to submit adverse reaction reports to the FDA, 3) required drug advertising to include complete information about risks and benefits, and 4) required informed consent from clinical study participants (First Clinical Research, 2010).
In 1966, the U.S. Surgeon General issued a policy statement entitled Clinical Research and Investigation Involving Human Beings in the form of a memorandum to the heads of institutions conducting research with Public Health Service grants (Sparks, 2002). The policy, which stipulated that all human subject research must be preceded by independent review, was the origin of IRBs in the U.S. (Sparks, 2002). Other public policies followed the Surgeon General’s memorandum.
An important longitudinal study began before the Surgeon General’s policy statement and continued for many years afterwards. Every country has profound stories about violations of human rights during research studies; in the U.S., the Tuskegee Study of Untreated Syphilis in the Negro Male is one of the most “horrendous” examples of research that disregarded basic ethical principles. The study started in 1932, when the U.S. Public Health Service and the Tuskegee Institute began recording the natural progression of untreated syphilis. Conducted without informed consent, the study initially involved 600 Black men: 399 with syphilis and a disease-free control group (U.S. Centers for Disease Control and Prevention, 2009). The men were told that they were being treated for “bad blood,” a colloquial phrase used to describe obscure ailments and fatigue. Led to believe they were being treated, the men were never given adequate treatment. Even after penicillin was found to cure syphilis in the 1940s, researchers decided to forego treatment so they could continue to study the progress of untreated syphilis. The men were never given the choice to withdraw from the study. In exchange for participating, they received free medical exams, free meals, and burial insurance. The research continued for 40 years, until 1972, when a public outcry condemned the study (Jones, 1981). The public outcry surrounding the Tuskegee study influenced subsequent policies designed to protect human subjects (Tuskegee University, n.d.).
When it was signed into law in 1974, the National Research Act created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Under the direction of the Secretary of the U.S. Department of Health, Education, and Welfare, the Commission issued the highly acclaimed Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research (1979). The Belmont Report serves as a basis for subsequent laws, rules, and regulations and consists of three basic elements: 1) a discussion of the boundaries between practice and research; 2) a discussion of the three basic ethical principles of respect for persons, beneficence, and justice that undergird all other considerations; and 3) a discussion of specific applications of the ethical principles with regard to informed consent, assessment of risks and benefits, and selection of research participants (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979). The U.S. government codified the Belmont Report in the form of Title 45, Code of Federal Regulations, Part 46 (45 CFR 46). Regulations stipulated in this policy apply to all research involving human subjects (Protection of Human Subjects, rev. 2009).
Another sweeping set of regulations affecting research arose from health care legislation. In 1996, the U.S. Congress enacted the Health Insurance Portability and Accountability Act (HIPAA). This act directed the Secretary of Health and Human Services to develop comprehensive standards to protect the privacy and security of individually identifiable personal health information (Health insurance reform: Security standards, 2002; Standards for Privacy of Individually Identifiable Health Information: Final rule, 2002). The legislation evolved at a time when an unprecedented number of Americans were unable to obtain health care and health insurance; its main purposes were to improve the portability and continuity of health insurance coverage, to combat waste and fraud in health care delivery, and to simplify the administration of health insurance. HIPAA was designed to protect individually identifiable health information and to set standards for the security of electronic protected health information. The HIPAA Privacy Rule requires health care providers and health insurers to obtain additional documentation from researchers before disclosing personal health information for research and to scrutinize researchers’ requests for access to health information more closely. The HIPAA Security Rule provides standards for the security of electronic health information. Privacy and security regulations are stringent and have far-reaching implications that spill into research policy. Although some research organizations are not officially regulated by HIPAA, most IRBs require all investigators to complete HIPAA training and to follow HIPAA regulations, even if health insurance is not involved, research participants are not patients, and no health information is gathered.
In addition to the milestone documents and policies above, U.S. government agencies continue to refine policies for research involving human subjects. Each agency, such as the Food and Drug Administration, the Department of Agriculture, the National Institutes of Health, and the Department of Defense, specifies rules and regulations for research within its domain. All institutional review boards are bound by each of these sets of regulations and thus must follow a complex myriad of policies. For example, the policy manual for one academic health center IRB specifies that its procedures comply with the following regulations: U.S. Department of Health & Human Services Office for Human Research Protections (OHRP) IRB guidelines; the Federal Policy for the Protection of Human Subjects (45 CFR 46); the Federal Food, Drug, and Cosmetic Act; the Medical Device Amendments of 1976; the Safe Medical Devices Act of 1990; the Medicare Manual; the FDA Investigational Device Exemptions Manual; the American Society of Hospital Pharmacists, Inc. Guidelines for the Use of Investigational Drugs in Organized Health Care Settings; FDA Protection of Human Subjects regulations (21 CFR 50); FDA IRB Review and Approval (21 CFR 56); the Health Insurance Portability and Accountability Act (HIPAA); and the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) standards. Whew!
In addition to a complex mix of very specific policies and procedures, the various types of IRBs themselves are confusing, and similarities among them are haphazard since there is no umbrella organization that encourages standardization. Some IRBs evaluate the scientific merit of applications; others defer scientific recommendations to institutional scientific review boards or to research experts in individual departments. Some IRBs are part of large academic health center hospital and university collaborations, and some serve small, private educational institutions. In addition, with the advent of very large multi-center clinical trials, central IRBs, which are not affiliated with individual institutions, have emerged. Because each academic institution is responsible for assuring the safety of research participants, multi-center clinical trial protocols approved by central IRBs continue to be reviewed to some degree by local institutional review boards.
Institutional review board membership is highly regulated. According to Title 21 of the Code of Federal Regulations (21 CFR 56), each IRB must have at least five members with varying backgrounds who are sufficiently qualified through their experience and expertise. Membership must be diverse, with consideration of race, gender, and cultural background, and with sensitivity to cultural issues. Each member must possess the competence to ascertain the acceptability of proposed research activities and to understand applicable law, standards of professional conduct, and standards of practice. Each IRB must include at least one member whose primary concerns are scientific and at least one whose primary concerns are nonscientific. Each IRB must also include at least one member who is not affiliated with the institution (Food and Drug Administration, 2010). Inasmuch as the protection of research participants’ rights emerged primarily from medical research, most regulatory bodies and many IRBs are dominated by physicians and other health care professionals.
One might wonder why busy professionals agree to serve on institutional review boards. In addition to understanding very fine distinctions of ethics and complex research policy, IRB members must be able to read lengthy research protocols and make critical decisions about highly specialized scientific research. At any given meeting, an IRB member might review protocol applications for previously untested surgical procedures, clinical drug trials, medical devices, survey research, use of large databases, quantitative and qualitative behavioral studies, and other types of research. Members must be knowledgeable about the potential harms and benefits of various research interventions and procedures, well versed in policy, attentive to literacy considerations, and sensitive to the ethical implications of every facet of the research process.
Stringently controlled by laws and regulations, IRBs deal most often with quantitative studies in which attention to objective detail is imperative and every bit of data must be managed and controlled. With all of this in mind, it is easy to understand why an IRB might stumble over a research proposal for a grounded theory study. It is no wonder that questions on research protocol applications may seem inapplicable to grounded theory studies or that IRB members have questions about the grounded theory method. Grounded theory is based upon emergence and induction rather than deduction and hypothesis testing. It flows from a paradigm that is alien to most IRB members. Accustomed to focusing meticulous attention on every detail of a study, institutional review boards want to know what the researcher is testing, how it will be measured, what interview questions will be asked, where the research will take place, how many “subjects” will be needed, and on and on…. These questions help IRBs to understand quantitative studies but are frustrating for grounded theorists, who enter scholarly inquiry with open minds, seeking to understand processes and structures from the perspective of their research participants.
This is not to suggest that IRB members in the U.S. are opposed to grounded theory research. IRB members are highly qualified scientists with dedication to research and the capacity to learn about unfamiliar research methods. It is therefore up to the researcher to take responsibility and help the institutional review board understand the grounded theory proposal. The researcher should anticipate questions and concerns and provide a scientific rationale (based upon the classic grounded theory literature) for each element of the research. For example, when asked to provide a list of interview questions, the theorist should explain that grounded theory seeks to understand a problem and its solution from the participants’ perspectives and that providing a list of preconceived questions would block emergence and thus distort the “findings.” In fact, at the beginning of the research process, the researcher may not even know what the problem is. The researcher should offer scholarly resources to support these assertions and provide an opening interview question intended to induce “spill.” Having furnished a rationale for the grounded theory research process, the researcher will find that an IRB is more likely to approve the research protocol quickly.
In conclusion, the IRB process in the U.S. is highly regulated and complex. Geared toward protecting research participants, institutional review boards must review many types of research. The grounded theory researcher who anticipates questions and concerns and addresses them in the initial research protocol application is much more likely to be successful.
Alvita Nathaniel, PhD, FNP-BC, FAANP
West Virginia University
Email: anathaniel@hsc.wvu.edu
Burkhardt, M., & Nathaniel, A. K. (2008). Ethics & issues in contemporary nursing (3rd ed.). Clifton Park, NY: Delmar.
First Clinical Research. (2010). Clinical research milestones. Retrieved September 24, 2010, from http://firstclinical.com/milestones/?page=9&date=All&anchor=word
Institutional Review Boards, 21 C.F.R. § 56 (2010).
Jones, J. (1981). Bad blood: The Tuskegee syphilis experiment: A tragedy of race and medicine. New York, NY: The Free Press.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research.
Nuremberg Code. (October 1946 – April 1949). Trials of war criminals before the Nuremberg Military Tribunals (Vol. 2, pp. 181-182). Washington, DC: U.S. Government Printing Office.
Shuster, E. (1997). Fifty years later: The significance of the Nuremberg Code. New England Journal of Medicine, 337(20), 1436-1440.
Sparks, J. (2002). Timeline of laws related to the protection of human subjects. Bethesda, MD: National Institutes of Health. Retrieved from http://history.nih.gov/about/timelines_laws_human.html#1966
Tuskegee University. (n.d.). Research ethics: The Tuskegee syphilis study. Retrieved October 18, 2010, from http://www.tuskegee.edu/global/story.asp?s=1207598
U.S. Centers for Disease Control and Prevention. (2009). U.S. Public Health Service syphilis study at Tuskegee. Retrieved October 18, 2010, from http://www.cdc.gov/tuskegee/timeline.htm
Health insurance reform: Security standards of 2002, 45 C.F.R. §§ 160, 162, 164, Federal Register 68(34).
Standards for Privacy of Individually Identifiable Health Information: Final rule of 2002, 45 C.F.R. §§ 160, 164, Federal Register 67(157).
Protection of Human Subjects, rev. 2009, 45 C.F.R. § 46 (Government Printing Office, 2009).
World Medical Association. (1964). Declaration of Helsinki: Ethical principles for medical research involving human subjects.
Glaser, B. G. (1992). Emergence vs. forcing. Mill Valley, CA: Sociology Press.
Glaser, B. G. (Ed.). (1995). Grounded theory, 1984-1994 (Vols. 1-2). Mill Valley, CA: Sociology Press.
Glaser, B. G. (2008). Doing quantitative grounded theory. Mill Valley, CA: Sociology Press.
Glaser, B. G. (2009). Jargonizing: Using the grounded theory vocabulary. Mill Valley, CA: Sociology Press.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. New York: Aldine.