Research Methodology

Published 17 Feb 2017

Given the high standards that research must meet today, and the expansion of quantitative methodology over the years, students increasingly need to take research methods and statistics courses. Yet curricula seem to leave less and less space for research design, statistics, and measurement. At some institutions, students are required to take only one statistics class; at such institutions, the choice of topics covered is enormously critical. Vital components of conducting research are awareness of the most contemporary statistical theory and applications and a readiness to change. As Johnson notes, “it is vital that professors of educational research and statistics obtain the newest developments in methodology and statistics so that they can transfer this information to their students” (Johnson 34-35).

Unfortunately, teaching unsuitable or outdated statistical techniques can lead students to underdevelop their educational and scientific work. This is especially a problem for those conducting serious theses, dissertations, and other research studies. As is well known, the teaching of statistics has changed little over the last decades. Moreover, there is evidence that a large proportion of published studies are seriously flawed, containing procedural, analytical, and interpretational errors (Onwuegbuzie 110-116). As Onwuegbuzie notes, some of these flaws stem from graduate-level instruction in which statistical methods are taught as a chain of scheduled steps rather than as an interactive and reflective process. Such graduate-level curricula reduce students’ exposure to statistical theory and applications, and they can also perpetuate diverse inaccurate and misleading misconceptions about the nature of research. Other causes of unsuccessful research training are the growing number of statistics instructors teaching outside their areas of expertise and a failure, unwillingness, or even refusal to recognize that statistical techniques that were popular in past years may no longer be of value (Cangelosi 26-27).

Examples of unsuitable statistical practices circulated in many statistics courses include: (1) not providing proof that statistical assumptions were checked prior to conducting quantitative analyses; (2) not discussing sample size considerations; (3) inappropriate treatment of multivariate data; (4) failure to report reliability indices for either prior or present samples; and (5) no control of the Type I error rate and failure to report effect sizes (Onwuegbuzie 121-127). While conducting research, students must keep in mind that the function of the research shapes the decisions a statistical practitioner must make. The approach instructors use to teach statistics is a function of their philosophical orientation, as well as their experience with diverse methodologies (Carver 42).
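Two of the malpractices listed above — omitting effect sizes and ignoring the Type I error rate across multiple comparisons — have concrete remedies. The Python sketch below is purely illustrative (the cited authors prescribe no particular software, and the sample data are invented): it computes Cohen's d, a common standardized effect size, and applies the Holm step-down adjustment, which controls the familywise Type I error rate when several hypotheses are tested at once.

```python
import statistics

def cohens_d(a, b):
    """Standardized mean difference between two groups,
    using the pooled sample standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

def holm_adjust(p_values):
    """Holm step-down adjusted p-values: controls the familywise
    Type I error rate across a set of simultaneous tests."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Multiply the k-th smallest p-value by (m - k + 1), then
        # enforce monotonicity and cap at 1.
        running_max = max(running_max, min(1.0, (m - rank) * p_values[i]))
        adjusted[i] = running_max
    return adjusted

# Hypothetical scores from two groups, and raw p-values from three tests.
group_a = [5.1, 4.9, 5.4, 5.0, 5.3]
group_b = [4.2, 4.5, 4.1, 4.4, 4.3]
effect = cohens_d(group_a, group_b)
adjusted = holm_adjust([0.01, 0.04, 0.03])
```

A report following Onwuegbuzie's advice would state the effect size alongside each test and use the adjusted, not the raw, p-values when several comparisons are made.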

Philosophical and Theoretical Background

Shadish argues that the research and evaluation of data is not merely applied social science, and he appeals to problems peculiar to evaluation and research. However, these problems arise not only in the evaluation of data but whenever one tries to apply social science at all. They stem, then, not from perverse peculiarities of research but from the manifest failure of much of mainstream social science, and from the identifiable reasons for that failure (Shadish 20-26).

When the practice of evaluation and research was initiated in the 1960s, the shift toward a postindustrial information-based economy, and corresponding ways of thinking, had just begun. The succession of dominant ideas within the field may reflect not just a progressive maturation of thinking through experience, but an evolved adaptation to a changing social environment (Shadish 27-28).

It has been claimed that the so-called rational model of social decision-making is now dead and, with it, the role of the program evaluator as an intelligence agent (or, at least, information provider) for the defunct rational policymakers. But what precisely does that mean? Does it mean that decisions are now somehow to be made irrationally? If so, precisely what kind of information is now required for the making of irrational decisions?
In some sense, calling research unsuccessful is a subjective judgment. In labeling social or student research unsuccessful, we do not think anyone really believes that modern humans have somehow lost their rational faculties in planning complex social strategy or design. By implication, researchers and evaluators are not exclusively beholden to any monolithic set of rules, objectives, or policymakers. Their clientele is the entire social network involved in and influenced by the social program, not just some specially empowered individual. What matters most in research is the credibility and accuracy of the results, not adherence to a standard or predefined methodology (Kaplan 81-85).

Neither research evaluation nor, more broadly construed, evaluation itself constitutes a field of inquiry. Evaluation is an enterprise aimed at deciding the worth of various activities, and that enterprise comes equipped with a variety of assumptions and methods. As Shadish notes, we can evaluate anything in our research. But if evaluation as an intellectual or professional activity is defined too broadly, it loses most of its usefulness in focusing attention and effort. A central set of issues exists with respect to evaluation (Shadish 30).

Threats during Research Process

The following aspects of the researcher as a possible threat during the research process will be addressed in this discussion: the researcher’s mental and other discomfort, which could threaten the truth value of the data obtained and of the information derived from data analyses; the researcher not being sufficiently prepared to conduct the field research; not being able to do member checking on findings; conducting inappropriate interviews; not including demographic data in the description of the results; not being able to analyze interviews in depth; and describing the research methodology and results in a superficial manner (Hyman 28-29).

The researcher’s mental and other discomfort could pose a threat to the truth value of the data obtained and of the information derived from data analyses: In certain circumstances the research topic can be so close to the researcher’s own frame of reference and experience that all efforts to bracket and intuit are fruitless. In the authors’ research, one researcher could not conduct interviews with families in which a child was terminally ill. As soon as she was confronted with the intense pain of families in the process of saying goodbye to their loved one, she could not bear it; the reason was her own unresolved pain over her brother’s suicide. Supervisors must be alert, because postgraduate candidates quite often select research topics close to their own unresolved pain and experiences.

The researcher not being sufficiently prepared to conduct the field research: This includes, among other things, not checking the context and culture of the respondents; being dressed inappropriately; and insensitivity to possible problems with technical apparatus when recording data in the field, such as tape recorders with dysfunctional batteries. In the authors’ own experience with the project on the termination of pregnancy, one of the researchers went to conduct an interview with an extremely poor woman in a far-off rural community. Although the researcher came from the same tribe, speaking the same language and dialect of Xhosa, she had climbed the socio-economic ladder and arrived at the interview site in a luxurious car, wearing expensive jewelry and clothes. Needless to say, despite her efforts, the interview was not successful (Hays 71-73).

The researcher not being able to do member checking on findings: After being interviewed, the respondents expressed the wish not to be contacted in any way by the researchers. They had told their story about being involved in the termination of a pregnancy and did not want to be reminded of the pain again. In the research on the termination of pregnancy, almost all of the adolescents expressed the wish never to see the researchers again. They made it very clear that it was not that they disliked them, but that this experience was a chapter of their lives that they wanted to close and forget as soon as possible. To counteract this very real problem, the authors decided to conduct more interviews than the number usually deemed to yield saturated data; in this way they captured the lived experience as repeated themes. They also conducted interviews in similar contexts in other provinces of South Africa so that they could describe the adolescents’ lived experience as richly and densely as possible (Hays 74-76).

The researcher conducting inappropriate interviews: This aspect includes researcher bias; leading questions; defocusing of the researcher, leading to insensitivity and to interviews too short to capture the richness of the investigated phenomenon; and conducting therapy instead of research. In the authors’ own research, even a trained and well-experienced interviewer became so involved with the research topic that she could not bracket her own experiences. This happened during a focus group interview with students about their experiences of a technology laboratory: when the students started to share their experiences with the Lego blocks, the interviewer began to share her own children’s experiences of Lego. As a result, the interviews could not be used in the research (Hopkins 57-59).

The researcher not including demographic data in the description of the results: This confuses readers of the research report because they cannot understand the context in which the research was conducted. Without a dense description of a project’s context, other researchers may be unable to transfer its findings to their own projects. For example, results from a project in a rural tribal area, where customary law rejects the evacuated fetus and allows it no place in the tribal graveyard, cannot be transferred to a higher socio-economic family from a Western background in which the teenage girl is allowed to mourn the loss of the terminated life. If the context were described very clearly, however, transfer would be possible within that specific context (Hopkins 60-64).

The researcher not being able to analyze interviews in depth: Possible reasons include the researcher spending insufficient time with, and not being immersed in, the data; an insufficient presentation of the storyline; and the researcher analyzing the data with preconceived ideas. In one investigation, “The ethical conflict of registered nurses relating to termination of teenage pregnancy,” the question posed by the interviewer was “Tell me about your experiences when you had to attend to a teenager who requested a termination of pregnancy?” One of the aims of this investigation was “to explore and describe the lived-experiences, thoughts, perceptions, feelings, behaviors and viewpoints of a nurse practitioner that is a mother of a teenage daughter, regarding a teenager who terminates pregnancy. This would also shed light on the possible conflict that these nurses would experience as their personal beliefs are challenged by the obligation to perform a service” (Hyman 30). Two main themes emerged: participants felt that registered nurses had multiple role expectations of themselves during their involvement in termination of pregnancy by teenagers; and participants expressed concern over the need for improved communication between parents and children regarding sexuality and reproductive issues. Clearly, there is no logical coherence between the various aspects of this research project, and the researcher could not have analyzed the data in depth. It is equally clear that the researcher’s own preconceived ideas contaminated the project as a whole, and more specifically the conduct of the interview and the subsequent analysis of the data.

The researcher describing the research methodology and research results in a superficial manner: The research methodology is not described in a justified and logical way that would allow other researchers to replicate the research in similar contexts. Regarding the results, the data have not been reduced enough; the themes have not been described in enough depth for the reader to understand their meaning; and the respondents’ quotations do not support the themes described. All of this contributes to a lack of richness in the description of the phenomenon (Hyman 31-33).

Possible Measures to Ensure Trustworthiness

To address the abovementioned threats to trustworthiness researchers can apply the following criteria and accompanying strategies: truth-value through credibility; applicability through transferability; consistency through dependability; and neutrality through confirmability. Different actions can be taken to apply these strategies.

Actions that can be taken to ensure credibility include: prolonged and varied field experience; reflexivity (keeping a field journal of notes on paper and tape recordings); and triangulation through multiple researchers, multiple data-collection methods, multiple contexts, and multiple data sources. Member checking has to be done in an unconventional manner by utilizing similar respondents in similar contexts. Peer examination can be carried out through regular team meetings to monitor progress and justify the research process. Threats regarding interviewing can be addressed by having the researchers monitor audio-taped and transcribed interviews. The authority of the researchers can be established through workshops on qualitative research methodology, pilot interviews, and continuous discussion of the research findings. Structural coherence can be addressed by utilizing cognitive strategies such as bracketing and intuiting (Hopkins 69-72).

The threats to transferability can be addressed by describing the respondents within their specific demographic contexts; and by giving a dense and rich description of the results so that the respondents’ voices could be heard.
Dependability can be ensured by an audit of the research process, with specific reference to stepwise replication of the interviews, and by having multiple researchers participate in the research (that is, by triangulation and peer review). Data reduction can take place by applying code-recode procedures, with the researchers and independent coders holding consensus discussions. Confirmability can be ensured by providing a trail of evidence for co-researchers to follow, checking whether they would arrive at similar conclusions; this includes monitoring that the researchers apply triangulation and remain reflexive throughout the research process (Hopkins 73-74).
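The code-recode procedures and consensus discussions between researchers and independent coders can also be given a quantitative check. The source prescribes consensus discussions rather than any particular statistic, so the Python sketch below is only one illustrative option: Cohen's kappa, a standard measure of chance-corrected agreement between two coders who assigned theme labels to the same interview segments. The theme labels here are invented for illustration.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two independent coders
    who each assigned one theme label per interview segment."""
    if len(coder_a) != len(coder_b) or not coder_a:
        raise ValueError("coders must label the same non-empty segments")
    n = len(coder_a)
    # Observed proportion of segments on which the coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical theme codes for six interview segments.
coder_1 = ["pain", "role", "pain", "comm", "role", "pain"]
coder_2 = ["pain", "role", "comm", "comm", "role", "pain"]
kappa = cohens_kappa(coder_1, coder_2)
```

Values near 1 indicate strong agreement; segments on which the coders disagree are exactly the ones to bring to the consensus discussion.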

Works Cited:

  • Cangelosi, J. S. Designing tests for evaluating student achievement. New York: Longman, 1990.
  • Carver, R. P. The case against statistical significance testing, revisited. The Journal of Experimental Education, 61, 1993.
  • Hays, W. L. Statistics (3rd ed.). New York: Holt, Rinehart & Winston, 1981.
  • Hopkins, K. D. Conducting Research. Englewood Cliffs, NJ: Prentice Hall, 2002.
  • Hyman, R. Diversity of research techniques. Psychological Bulletin, 118, 1995.
  • Johnson, C. W. A multiple comparison procedure. Educational Research. Journal of the American Statistical Association, 50, 1985.
  • Kaplan, R. M. Philosophy of research: Principles, applications, and issues. Pacific Grove, CA: Brooks/Cole. 1996.
  • Onwuegbuzie, S. A simple sequentially projective multiple test procedure. Research Errors. Scandinavian Journal of Statistics, 6, 1979.
  • Shadish, T. Research and evaluation of data. Educational Researcher, 26(5), 1991.