What must a researcher do in order to ensure the quality of his/her instrument?

https://doi.org/10.3916/escuela-de-autores-174

Author: Águeda Delgado – Translation: Erika-Lucia Gonzalez-Carrion

A central aspect of any research project, one that largely determines the quality of the results, is the choice of method and of the instruments used for data collection. Based on the research objectives and hypotheses (if any), the researcher determines the type of research (documentary, experimental, field…), the approach (quantitative, qualitative or mixed) and the scope of the research (exploratory, descriptive, correlational…) and, from these, the methods, techniques and instruments that establish the direction the study will take, how the data will be collected and in what depth.


However, the choice of method does not always depend on the will of the researcher, but is often restricted by his or her possibilities and limitations, as shown in this previous entry.

Data collection, for its part, depends not only on the method or techniques selected but also on the sources from which the data will be obtained and the instruments chosen or designed for this purpose. The latter must meet three essential requirements: reliability, validity and objectivity.

– Reliability refers to the degree to which an instrument produces consistent and coherent results.

– Validity, in general, refers to the degree to which an instrument actually measures the variable it is intended to measure.

– Objectivity is the degree to which the instrument is permeable to the influence of the biases of the researchers who administer, score, and interpret it.

Both reliability and validity are assessed with specific techniques and statistics. If any of the three requirements fails, the instrument is not useful for carrying out the study. It is therefore important, when selecting the instrument(s) for data collection, to follow a systematic procedure rather than improvising, and to know very well the variable to be measured, as well as the theory and practice that support it, so as not to produce instruments of low validity or reliability.

Remember that all these aspects must be reported in detail in the “Material and method” section of our article, justifying the choice of method and instruments and demonstrating that our tool meets the stated requirements: we must show that the research was carried out with scientific rigor and that the results we have arrived at are therefore valid.

How to Determine the Validity and Reliability of an Instrument
By: Yue Li


Validity and reliability are two important factors to consider when developing and testing any instrument (e.g., content assessment test, questionnaire) for use in a study. Attention to these considerations helps to ensure the quality of your measurement and of the data collected for your study.

Understanding and Testing Validity

Validity refers to the degree to which an instrument accurately measures what it intends to measure. Three common types of validity for researchers and evaluators to consider are content, construct, and criterion validities.

  • Content validity indicates the extent to which items adequately measure or represent the content of the property or trait that the researcher wishes to measure. Review by subject matter experts in the area or field you are studying is often a good first step in instrument development to assess content validity.
  • Construct validity indicates the extent to which a measurement method accurately represents a construct (e.g., a latent variable or phenomenon that cannot be measured directly, such as a person’s attitude or belief) and produces observations distinct from those produced by a measure of another construct. Common methods to assess construct validity include, but are not limited to, factor analysis, correlation tests, and item response theory models (including the Rasch model).
  • Criterion-related validity indicates the extent to which the instrument’s scores correlate with an external criterion (i.e., usually another measurement from a different instrument) either at present (concurrent validity) or in the future (predictive validity). A common measure of this type of validity is the correlation coefficient between the two measures.
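As a minimal sketch of the criterion-related validity check described above, the correlation coefficient between two sets of scores can be computed directly. The scores below are hypothetical, standing in for a new instrument and an established criterion measure administered at the same time:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical data: scores on a new instrument and on an established
# criterion measure, collected concurrently (concurrent validity).
new_instrument = [12, 15, 11, 18, 16, 14, 19, 10]
criterion      = [48, 55, 45, 70, 62, 53, 74, 41]

r = pearson_r(new_instrument, criterion)
print(round(r, 2))  # a high positive r supports criterion-related validity
```

A strong positive correlation with the criterion supports concurrent validity; the same calculation against a criterion measured later would speak to predictive validity.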

Often, when developing, modifying, and interpreting the validity of a given instrument, rather than viewing or testing each type of validity individually, researchers and evaluators test for evidence of several different forms of validity collectively (e.g., see Samuel Messick’s work regarding validity).

Understanding and Testing Reliability

Reliability refers to the degree to which an instrument yields consistent results. Common measures of reliability include internal consistency, test-retest, and inter-rater reliabilities.

  • Internal consistency reliability examines whether the scores of individual items on an instrument are consistent with the scores of the set of items, or subscale, to which they belong; a subscale typically consists of several items intended to measure a single construct. Cronbach’s alpha is one of the most common methods for checking internal consistency reliability. Group variability, score reliability, number of items, sample size, and difficulty level of the instrument can also affect the Cronbach’s alpha value.
  • Test-retest reliability measures the correlation between scores from one administration of an instrument to another, usually after an interval of 2 to 3 weeks. Unlike pre-post tests, no treatment occurs between the first and second administrations when testing test-retest reliability. A similar type of reliability, called alternate forms, involves using slightly different forms or versions of an instrument to see whether the different versions yield consistent results.
  • Inter-rater reliability checks the degree of agreement among raters (i.e., those completing items on an instrument). More than one rater is commonly involved when, for example, several people conduct classroom observations using an observation protocol, or score an open-ended test using a rubric or other standard protocol. Kappa statistics, correlation coefficients, and the intra-class correlation (ICC) coefficient are some of the commonly reported measures of inter-rater reliability.
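Two of the statistics named above, Cronbach’s alpha for internal consistency and Cohen’s kappa for inter-rater agreement, can be sketched in a few lines. All data below are hypothetical, purely for illustration:

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list per item, each holding one score per respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total per respondent
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

def cohen_kappa(rater_a, rater_b):
    """Agreement between two raters on categorical codes, corrected for chance."""
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    p_chance = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical 4-item Likert subscale answered by 6 respondents
subscale = [
    [4, 3, 5, 2, 4, 3],
    [4, 2, 5, 3, 4, 3],
    [3, 3, 4, 2, 5, 2],
    [5, 3, 5, 2, 4, 4],
]
alpha = cronbach_alpha(subscale)
print(round(alpha, 2))

# Hypothetical codes assigned by two observers to the same 8 events
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
kappa = cohen_kappa(rater_a, rater_b)
print(round(kappa, 2))
```

A commonly used rule of thumb treats alpha above 0.70 as acceptable internal consistency, though the appropriate threshold depends on the stakes of the measurement.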

Developing a valid and reliable instrument usually requires multiple iterations of piloting and testing, which can be resource intensive. Therefore, when available, I suggest using already established valid and reliable instruments, such as those published in peer-reviewed journal articles. However, even when using these instruments, you should re-check validity and reliability using the methods of your study and your own participants’ data before running additional statistical analyses. This process will confirm that the instrument performs as intended in your study with the population you are studying, even if your purpose and population are identical to those for which the instrument was initially developed. Below are a few additional, useful readings to further inform your understanding of validity and reliability.

Resources for Understanding and Testing Validity and Reliability

  • American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1985). Standards for educational and psychological testing. Washington, DC: Authors.
  • Bond, T. G., & Fox, C. M. (2001). Applying the Rasch model: Fundamental measurement in the human sciences. Mahwah, NJ: Lawrence Erlbaum.
  • Cronbach, L. (1990). Essentials of psychological testing. New York, NY: Harper & Row.
  • Carmines, E., & Zeller, R. (1979). Reliability and validity assessment. Beverly Hills, CA: Sage Publications.
  • Messick, S. (1987). Validity. ETS Research Report Series, 1987: i–208. doi:10.1002/j.2330-8516.1987.tb00244.x
  • Liu, X. (2010). Using and developing measurement instruments in science education: A Rasch modeling approach. Charlotte, NC: Information Age.


How do you ensure quality in research?

9 Strategies to Enhance Quality of Data in Online Research include:

  • Recruiting the right participants
  • Ensuring participant attention
  • Verifying participant demographics
  • Screening or discouraging dishonest survey-takers
  • Avoiding non-naive participants
  • Ensuring that participants fully understand the survey’s language

Why do you think researchers need to consider qualities of a good research instrument?

The data that is collected is only as good as the instrument that collects it. A poorly designed instrument will lead to bad data, which will lead to bad conclusions. Therefore, developing a good instrument is the most important part of conducting a high-quality research study.

What must you ensure in preparing for your research instrument?

When conducting research, you need to prepare and implement the appropriate instrument to gather the data you need. When preparing an instrument, you must ensure that it is valid and reliable. An instrument is valid when it directly answers or addresses your research questions.

How can a researcher make sure that the instrument is reliable and valid?

How to ensure validity and reliability in your research: the reliability and validity of your results depend on creating a strong research design, choosing appropriate methods and samples, and conducting the research carefully and consistently.