Friday, February 14, 2014

Blog 5 - Constructed-response and Fixed-response


Describe the differences between constructed-response and fixed-response assessments. When would you use each type of assessment in eLearning? Why?

As stated in blog #4 (http://maj-eln.blogspot.com/2014/02/blog-4-pros-and-cons-of-constructed.html), there are two different types of constructed response test items -- completion items and essay items. A completion item is often known as a “fill-in-the-blank” item; the student simply completes the sentence. In an essay item (short or long format), the student provides a narrative response to the test item. In either case, the student must enter or write out the answer rather than select it.
Fixed-response test items prompt the student to select an answer from a set of response options. The most common types of fixed-response items are multiple-choice and true-false; variations include matching, ranking, multiple true-false, and embedded-choice items.

Time Considerations

Students can answer more multiple-choice questions in a given period of time than constructed response items: generally one multiple-choice item per minute and two true-false items per minute (Oosterhof, Conrad, & Ely, 2008). Fixed-response is therefore a great way to quickly assess students at the beginning of a concept to measure prior knowledge, or to quickly check for understanding in the middle or at the end of a concept.
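
To put that arithmetic in planning terms, here is a minimal Python sketch, assuming the per-minute rates above hold; the function and rate names are invented for illustration, not taken from any testing tool:

# Rough planning arithmetic based on the per-item rates cited above.
# The rates (1 multiple-choice item/min, 2 true-false items/min) come
# from Oosterhof, Conrad, & Ely (2008); everything else is illustrative.

RATE = {"multiple_choice": 1.0, "true_false": 2.0}  # items per minute

def items_that_fit(minutes, item_type):
    """Estimate how many items of one type fit in a testing window."""
    return int(minutes * RATE[item_type])

print(items_that_fit(15, "multiple_choice"))  # 15 items in a 15-minute check
print(items_that_fit(15, "true_false"))       # 30 items in the same window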

For both types of assessment, it takes time to write a well-crafted test item that measures the intended performance objective. The teacher’s challenge with constructed response items is to word them so that there is only one correct answer, since a loosely worded prompt can invite multiple defensible answers. A teacher may take only a few minutes to draft a constructed response item, such as a short answer or essay, but must make sure the question measures the instructional objectives and the scoring plan is well-defined. Creating a well-defined scoring plan can be time consuming.
A fixed-response test item also takes time to write, since the teacher must create an appropriate stem and response options. The response options should measure not only what the student knows; the distractors should also help the teacher identify the student’s thought process. For example, when creating a multiple-choice item about the American Revolution, the teacher might include some prominent historical figures from the time who were not directly associated with the Revolutionary War. All the wrong answers are plausible, but not valid for that particular test item.

With online assessments, scoring can be instant. With fixed-response items, computers can quickly score and grade a student’s test. Scoring is consistent and objective because the computer applies the same answer key to every response without the teacher’s involvement. The computer checks the student’s selection (e.g., A, B, C for multiple-choice, or true/false), assigns the point value for a correct answer, and the student gets immediate feedback on how they did on their test.
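
A minimal sketch of that kind of automatic scoring in Python, assuming a simple answer key; the item IDs, point values, and function name are hypothetical, not any particular testing platform’s API:

# Minimal fixed-response scorer: compare each selection to the key and
# total the points. Item IDs and point values are made up for illustration.
ANSWER_KEY = {1: "B", 2: "A", 3: "True"}
POINTS     = {1: 2,   2: 2,   3: 1}

def score_fixed_response(responses):
    """Return (earned, possible) for a dict of {item_id: selection}."""
    earned = sum(POINTS[i] for i, sel in responses.items()
                 if ANSWER_KEY.get(i) == sel)
    return earned, sum(POINTS.values())

earned, possible = score_fixed_response({1: "B", 2: "C", 3: "True"})
print(f"{earned}/{possible}")  # instant feedback: 3/5
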
Constructed response items can also be scored using technology. A response textbox is part of the test item; the student types the answer into the provided textbox, and the program automatically scores and grades the response using a letter-recognition (character-matching) algorithm. Since a student can make typing mistakes while the response is still recognizable, the teacher has the ability to override the computer’s score and assign partial credit (points) following the test item’s scoring plan.
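
The exact matching algorithm is not specified, so as one plausible sketch, the Python below uses a standard-library string-similarity comparison (difflib) as a stand-in for whatever recognition the testing software actually performs; the thresholds deciding between auto-scoring and teacher review are assumptions:

import difflib

def score_blank(student_answer, correct_answer,
                auto_threshold=0.95, review_threshold=0.8):
    """Auto-score a fill-in-the-blank; flag near-misses for teacher review."""
    ratio = difflib.SequenceMatcher(
        None, student_answer.strip().lower(),
        correct_answer.strip().lower()).ratio()
    if ratio >= auto_threshold:
        return "correct"
    if ratio >= review_threshold:    # recognizable despite typing mistakes
        return "teacher review"      # teacher may override / award partial credit
    return "incorrect"

print(score_blank("photosynthesis", "photosynthesis"))  # correct
print(score_blank("photosynthesys", "photosynthesis"))  # teacher review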

The scoring plan should be clearly defined. Oosterhof, Conrad, and Ely (2008, p. 92) state that a scoring plan should have three characteristics: (1) the total number of points assigned to the item based on its importance relative to other items, (2) the specific attributes to be evaluated in students’ responses, and (3) for each attribute, the criteria for awarding points, including partial credit. A scoring plan is essential for constructed response items (especially short answer or essay).
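
Those three characteristics map naturally onto a small data structure. Here is a sketch in Python; the essay item, attribute names, and point values are invented for illustration:

# A scoring plan per Oosterhof, Conrad, & Ely's three characteristics:
# (1) total points, (2) attributes to evaluate, (3) criteria per attribute.
# The item text, attributes, and point values are hypothetical.
scoring_plan = {
    "item": "Explain one cause of the American Revolution.",
    "total_points": 6,  # weight relative to other items on the test
    "attributes": [
        {"name": "identifies a valid cause",   "points": 2,
         "partial_credit": "1 pt if cause is vague but relevant"},
        {"name": "supports cause with detail", "points": 3,
         "partial_credit": "1-2 pts for incomplete support"},
        {"name": "clear organization",         "points": 1,
         "partial_credit": "none"},
    ],
}

# The attribute points should add up to the item's total.
assert scoring_plan["total_points"] == sum(
    a["points"] for a in scoring_plan["attributes"])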

Student Knowledge

On a fixed-response item, students who know something of the subject have a better chance of answering correctly than they would on a constructed response item. Part of this is due to guessing (discussed later in this blog) and part can be attributed to recognition of terms or concepts.

Immediate student feedback is possible with both fixed-response and constructed response items. As mentioned above, scoring is done by the technology, so the student can get immediate feedback on fixed-response test items: the student selects “A” and immediately learns whether it is right or wrong. For a constructed response, the student can still receive feedback, but it may be delayed if the teacher must review the test item, whether because the computer’s letter-to-letter matching failed on a fill-in-the-blank response or because there is a scoring plan that requires teacher review.
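
Put together, the feedback flow reduces to a routing decision: score and respond immediately when the computer can grade with confidence, otherwise queue the item for the teacher. A minimal sketch, with hypothetical item-type labels:

def route_feedback(item_type, exact_match=True):
    """Decide whether feedback can be immediate or must wait for review."""
    if item_type in ("multiple_choice", "true_false"):
        return "immediate"                # computer checks selection vs. key
    if item_type == "fill_in_the_blank" and exact_match:
        return "immediate"                # letter-to-letter match succeeded
    return "delayed (teacher review)"     # scoring plan applied by hand

print(route_feedback("true_false"))                            # immediate
print(route_feedback("fill_in_the_blank", exact_match=False))  # delayed (teacher review)
print(route_feedback("essay"))                                 # delayed (teacher review)
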
Both fixed-response and constructed response items can measure declarative and procedural knowledge, what Webb refers to as Level 1 (http://www.aps.edu/rda/documents/resources/Webbs_DOK_Guide.pdf) and Bloom’s Taxonomy refers to as remembering (http://ww2.odu.edu/educ/roverbau/Bloom/blooms_taxonomy.htm). A well-crafted test item of either type can measure various procedural knowledge capabilities. Where both formats fall short is in the higher knowledge areas that involve more complex skills, such as problem solving. So, based on the type of information the teacher is looking for about the student’s mastery of the subject, it may be necessary to use both fixed-response and constructed response items.

Measurable Items

Both types of test items can be used in a variety of subject areas (e.g., English, math, science, social studies). A teacher can easily write a constructed response item on the factual knowledge of space exploration and write fixed-response multiple-choice items on the same topic. However, neither format may be suitable for other subject areas such as music, or even for certain concepts in science and math. In music, for example, a teacher cannot write a constructed response or fixed-response item for Arizona’s music standard Strand 1: Create, Concept 2: Playing instruments, alone and with others, music from various genres and diverse cultures (http://www.azed.gov/standards-practices/files/2011/09/music.pdf). Likewise, neither format can address multi-step science problems or extended math computations. Other test item formats must be used for these concepts.

Fixed-response items are susceptible to guessing. For example, on a four-alternative test item, the student has a 25% chance of selecting the correct answer by guessing; on a true-false item, the chance is 50%. Test reliability increases when multiple-choice, alternate-choice, and essay test items appear in the same assessment.
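
Those per-item odds compound across a whole test. The short Python calculation below, a sketch using an arbitrary 10-item test, shows the chance of reaching 7 correct purely by blind guessing:

from math import comb

def p_at_least(k, n, p):
    """Probability of k or more correct out of n items by blind guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance of scoring 7/10 or better by pure guessing:
print(round(p_at_least(7, 10, 0.25), 4))  # 4-option multiple choice: ~0.0035
print(round(p_at_least(7, 10, 0.50), 4))  # true-false: ~0.1719
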
Because fixed-response items are quick to answer, one test can include many of them, which allows more adequate sampling of the content. Compared to short answer or essay items, fixed-response therefore allows for better sampling of content.

Format

It goes without saying, but both types of test items need to be free of grammatical errors and extraneous wording. A constructed response item must be written so that there is only a single answer or a homogeneous set of acceptable responses. Multiple-choice fixed-response items must have a stem that clearly presents the problem to be addressed, and the grammar of each option must be consistent with the stem.

The teacher (that is, the test writer) must keep in mind the reading level of the students. Is the student being assessed on reading skills or on knowledge of the learning objective? Using vocabulary and wording above the student’s reading level prevents the educator from knowing whether the student missed the objective because of reading ability or because of the subject matter.
Constructed response may be more suitable for younger students, since they can simply “fill in the blank” as they read on the computer. Fixed-response may be more challenging for younger students because there is often much more reading involved. Even so, neither of these testing options may be appropriate for younger students whose reading skills have not yet fully developed.

Conclusion

In an eLearning environment, the educator needs to use both fixed-response and constructed response test items, but it is important to use each appropriately. When the goal is to see whether students can perform a realistic task, such as those required in the workforce, they should not be asked multiple-choice or true/false questions; an extended constructed response lets them demonstrate how they organize and communicate their thoughts.

 
Work Cited
Oosterhof, A., Conrad, R.-M., & Ely, D. P. (2008). Assessing Learners Online. Upper Saddle River, NJ: Merrill/Prentice Hall.
