CLEP TIG - Principles of Marketing
Validity
Validity is a characteristic of a particular use of the
test scores of a group of examinees. If the scores are
used to make inferences about the examinees’
knowledge of a particular subject, the validity of the
scores for that purpose is the extent to which those
inferences can be trusted to be accurate.
One type of evidence for the validity of test scores is
called content-related evidence of validity. It is
usually based upon the judgments of a set of experts
who evaluate the extent to which the content of the
test is appropriate for the inferences to be made
about the examinees’ knowledge. The committee
that developed the CLEP Principles of Marketing
examination selected the content of the test to reflect
the content of Principles of Marketing courses at
most colleges, as determined by a curriculum survey.
Since colleges differ somewhat in the content of the
courses they offer, faculty members are urged to
review the content outline and the sample questions
to ensure that the test covers core content
appropriate to the courses at their college.
Another type of evidence for test-score validity is
called
criterion-related evidence of validity. It
consists of statistical evidence that examinees who
score high on the test also do well on other measures
of the knowledge or skills the test is being used to
measure. Criterion-related evidence for the validity
of CLEP scores can be obtained by studies
comparing students’ CLEP scores with the grades
they received in corresponding classes, or other
measures of achievement or ability. CLEP and the
College Board conduct these studies, known as the
Admitted Class Evaluation Service (ACES), at the
request of individual colleges that meet certain
criteria. Please contact CLEP for more information.
Reliability
The reliability of the test scores of a group of
examinees is commonly described by two statistics:
the reliability coefficient and the standard error of
measurement (SEM). The reliability coefficient is
the correlation between the scores those examinees
get (or would get) on two independent replications
of the measurement process. The reliability
coefficient is intended to indicate the
stability/consistency of the candidates’ test scores,
and is often expressed as a number ranging from
.00 to 1.00. A value of .00 indicates total lack of
stability, while a value of 1.00 indicates perfect
stability. The reliability coefficient can be interpreted
as the correlation between the scores examinees
would earn on two forms of the test that had no
questions in common.
Statisticians use an internal-consistency measure to
calculate the reliability coefficients for the CLEP
exam.¹
This involves looking at the statistical
relationships among responses to individual
multiple-choice questions to estimate the reliability
of the total test score. The SEM is an estimate of the
amount by which a typical test-taker’s score differs
from the average of the scores that a test-taker would
have gotten on all possible editions of the test. It is
expressed in score units of the test. Intervals
extending one standard error above and below the
true score for a test-taker will include 68 percent of
that test-taker’s obtained scores. Similarly, intervals
extending two standard errors above and below the
true score will include 95 percent of the test-taker’s
obtained scores. The standard error of measurement
is inversely related to the reliability coefficient. If the
reliability of the test were 1.00 (if it perfectly
measured the candidate’s knowledge), the standard
error of measurement would be zero.
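The internal-consistency measure named in the footnote, Kuder-Richardson 20 (KR-20), can be computed from a matrix of scored item responses: it compares the sum of the item-level variances against the variance of the total scores. The response matrix below is a tiny synthetic example, not CLEP data.

```python
# KR-20 on a small synthetic response matrix.
# Rows are test-takers, columns are items (1 = correct, 0 = incorrect).
from statistics import pvariance

responses = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
    [1, 1, 1, 1, 0],
]

k = len(responses[0])                     # number of items
n = len(responses)                        # number of test-takers
totals = [sum(row) for row in responses]  # each test-taker's raw score

# p_i = proportion answering item i correctly; item variance is p_i * (1 - p_i)
p = [sum(row[i] for row in responses) / n for i in range(k)]
pq_sum = sum(pi * (1 - pi) for pi in p)

total_var = pvariance(totals)             # variance of the total scores

# KR-20: (k / (k - 1)) * (1 - sum(p_i * q_i) / total_var)
kr20 = (k / (k - 1)) * (1 - pq_sum / total_var)
print(round(kr20, 3))  # → 0.577
```

Real exams have far more items and test-takers, which is why operational reliability estimates (like the 0.89 reported below) are much higher than this toy value.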
An additional index of reliability is the conditional
standard error of measurement (CSEM). Since
different editions of this exam contain different
questions, a test-taker’s score would not be exactly
the same on all possible editions of the exam. The
CSEM indicates how much those scores would vary.
It is the typical distance of those scores (all for the
same test-taker) from their average. A test-taker’s
CSEM on a test cannot be computed, but by using
the data from many test-takers, it can be estimated.
The CSEM estimate reported here is for a test-taker
whose average score, over all possible forms of the
exam, would be equal to the recommended C-level
credit-granting score.
Scores on the CLEP examination in Principles of
Marketing are estimated to have a reliability
coefficient of 0.89. The standard error of measurement
is 3.38 scaled-score points. The conditional standard
error of measurement at the recommended C-level
credit-granting score is 3.74 scaled-score points.
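The reported figures can be tied together with the standard relationship SEM = SD × √(1 − reliability). The implied scaled-score standard deviation below is backed out from the reported values, not stated in the guide, and the true score of 50 is used purely as an illustration of the 68% and 95% bands described above.

```python
# Relating the reported reliability and SEM figures.
import math

reliability = 0.89  # reported reliability coefficient
sem = 3.38          # reported standard error of measurement (scaled-score points)

# SEM = SD * sqrt(1 - reliability), so the implied scaled-score
# standard deviation can be backed out (an inference, not a
# figure stated in the guide):
implied_sd = sem / math.sqrt(1 - reliability)
print(round(implied_sd, 1))  # → 10.2

# Bands around a hypothetical true score of 50 (illustrative only):
true_score = 50
print((round(true_score - sem, 2), round(true_score + sem, 2)))          # ≈68% band
print((round(true_score - 2 * sem, 2), round(true_score + 2 * sem, 2)))  # ≈95% band
```

This also makes the inverse relationship concrete: as the reliability coefficient approaches 1.00, the factor √(1 − reliability) approaches zero, and so does the SEM.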
¹ The formula used is known as Kuder-Richardson 20, or KR-20, which is equivalent
to a more general formula called coefficient alpha.