Page 118 - DEDU504_EDUCATIONAL_MEASUREMENT_AND_EVALUATION_ENGLISH
Educational Measurement and Evaluation
(iv) more critical standards of test forms and quality of items;
(v) more rigorous statistical analysis than the usual informal objective tests; and
(vi) derivation of the set of norms.
9.1 Construction and Validation of Items
One problem for developers of standardised tests is that the content or syllabus must serve
groups of students whom the developers have not themselves taught. The developers must
therefore be reasonably sure that the content selected for test items is the content likely to have
received instructional emphasis. The selection of content must also be general enough to fit any
school situation where the course is taught. The difficulty in selecting content arises from the
nature of the
knowledge, understandings, skills, concepts or applications to be tested. In subjects where
instructional objectives are clearly stated in terms of intended learning outcomes, it is easier to
develop test items that sample the content adequately, as in mathematics where facts and skills
are well known. In other fields, where knowledge, skills, attitudes and other outcomes are of a
more indefinite nature, it is more difficult to validate the test items. Validity of test content in
most of the subjects is difficult to establish by acceptable objective means and statistical methods.
It is practically impossible in some cases. Particular validation procedures may be effective and
acceptable in one subject and completely unsuitable in another. That is why developers of
standardised tests have used different types of validation procedures in different subjects.
Selection of Test Items
Principles and guidelines for constructing objective test items have already been discussed.
Therefore, the methods used to construct items for standardised tests, and the actual construction
of test items on different outcomes of learning, are only briefly discussed here.
(i) Validity of a Test : It depends on : (a) validity of the content in general, and (b) validity of
the individual test items of which the test is composed. The validity of the items before they
are actually administered depends on the ability of the test constructor to :
(i) select the right form of the objective items; and
(ii) skill to construct the items on the pre-decided intended learning outcomes identified
while designing the test, besides avoiding the various types of weaknesses highlighted in
the earlier chapter on construction of objective-type questions.
Real evidence about item validities is found only by :
(i) actual administration of the test in preliminary form to a large group of typical pupils,
representative of the population, and
(ii) detailed statistical analysis of results item-wise.
On the basis of this analysis many items are rejected, modified, revised or replaced. For this
reason the preliminary form of the test may contain many more items than are envisaged in
the final form.
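The item-wise statistical analysis described above is commonly based on two standard indices: the difficulty index (the proportion of pupils answering an item correctly) and the discrimination index (the difference in difficulty between high-scoring and low-scoring criterion groups, often the top and bottom 27% of the try-out sample). The following Python sketch illustrates these computations; the function names and the try-out data are invented for demonstration.

```python
# Illustrative item analysis for a preliminary (try-out) form of a test.
# The formulas are the standard difficulty and discrimination indices;
# the response data and group sizes below are hypothetical.

def item_difficulty(responses):
    """Difficulty index p: proportion of pupils answering the item correctly."""
    return sum(responses) / len(responses)

def item_discrimination(upper, lower):
    """Discrimination index D = p_upper - p_lower, comparing the
    high-scoring and low-scoring criterion groups."""
    return item_difficulty(upper) - item_difficulty(lower)

# Hypothetical try-out data for one item: 1 = correct, 0 = wrong.
all_pupils = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
upper_group = [1, 1, 1, 1, 1]   # top scorers on the whole test
lower_group = [1, 0, 0, 1, 0]   # bottom scorers on the whole test

p = item_difficulty(all_pupils)                    # 6/10 = 0.6, moderate difficulty
D = item_discrimination(upper_group, lower_group)  # 1.0 - 0.4 = 0.6

# A common rule of thumb: retain items with D of about 0.30 or higher and
# p near 0.5; revise or reject items falling outside such limits.
print(p, D)
```

Items whose indices fall outside acceptable limits are the ones rejected, modified or replaced after the preliminary administration, which is why the try-out form carries many more items than the final form.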
(ii) Objectivity : It is so important an element in the reliability of measurement that one
cannot think of a standardised test made up of items which are not characterised by
objectivity. Usually objectivity is determined by the form of test item the framer uses.
Though it is difficult to decide which form of objective technique best fits the subject to
be tested, the choice is usually settled by experimentation and previous experience.
(iii) Content Analysis : This is the next problem once the instructional area to be covered is decided. It
refers to the content elements like terms, facts, concepts, principles, processes and other
generalisations or intended outcomes that form the basis of item writing. These elements
are to be stated in some objective form that reflects the nature of instructional objectives or
112 LOVELY PROFESSIONAL UNIVERSITY