As stated in blog #5 (http://maj-eln.blogspot.com/2014/02/blog-5-constructed-response-and-fixed.html), there are differences between constructed-response and fixed-response assessments. Constructed-response items, often known as "fill-in-the-blank" items, require the student to enter or write out an answer, whereas fixed-response items ask the student to select an answer from a set of response options. The most common types of fixed-response items are multiple-choice and true-false items, but variations include matching, ranking, multiple true-false, and embedded-choice items. (Refer to blog #5 for further differences between these two response types.)
Both of these test construction options have benefits in the eLearning environment. With both constructed-response and fixed-response items, the student enters an answer and the computer can automatically evaluate whether the test item was answered correctly and assign the appropriate score. The drawback with constructed-response items is that computer programs often use a scoring algorithm based on letter-by-letter matching: if the student misspells a word or inserts an extra space, the computer marks the answer as incorrect. A way around this is to give the teacher the ability to override the computer's score and assign the appropriate points following the test item's scoring plan. In the online assessment program I'm familiar with (Galileo, www.ati-online.com), this is an easy process: I can see which students have an incorrect answer, quickly evaluate each response, and enter the appropriate points.
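To make that concrete, here is a minimal Python sketch of the kind of matching logic described above. It is not Galileo's actual algorithm; the function name and similarity threshold are my own illustration. The idea is to normalize spacing and case, and to flag near-misses (likely misspellings) for teacher review rather than marking them wrong outright:

    import difflib

    def auto_score(response, answer_key, tolerance=0.85):
        """Score a fill-in-the-blank response as 'correct', 'incorrect', or 'review'."""
        # Normalize case and extra whitespace so "  Full  Moon " matches "full moon".
        norm = " ".join(response.lower().split())
        key = " ".join(answer_key.lower().split())
        if norm == key:
            return "correct"
        # Close to the key (a likely misspelling): queue it for the teacher
        # instead of marking it incorrect outright.
        if difflib.SequenceMatcher(None, norm, key).ratio() >= tolerance:
            return "review"
        return "incorrect"

    print(auto_score("  Waxing  Gibbous", "waxing gibbous"))  # correct
    print(auto_score("waxing gibous", "waxing gibbous"))      # review
    print(auto_score("full moon", "waxing gibbous"))          # incorrect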
In addition to a
“fill-in-the-blank” type of constructed-response, a teacher may create a short
or long essay to assess the student’s mastery of the stated performance
objective. A short essay should be able to be answered in less than 10 minutes (Oosterhof, Conrad, & Ely, 2008). “How does
the Moon’s appearance change during a four-week lunar cycle?” is an example of
a short essay. An example of a long
essay test item is “What important contributions has space exploration
contributed our everyday life? Provide at least two of these contributions.
Explain why and how each contribution has impacted everyday life.”
There can be many fixed-response test items on a single test, so a teacher can assess many instructional objectives, covering both declarative and procedural knowledge. The computer scores these items quickly and accurately. Unfortunately, a student can easily guess and/or cheat on this type of test.

Even though a constructed-response item measures an instructional objective more directly, only so many essay questions/prompts can fit on a single test. Another problem with essay-style test items in an eLearning environment is that students must type their answers, and they might not be good typists. Students need to know how they are being assessed when answering such prompts, so a scoring plan should be provided to them upfront. The scoring plan must be well defined, not only for the student but also for the teacher (or whoever is reviewing and scoring the student's response). High-stakes assessments, such as state standardized tests, include long written responses that computer algorithms can help score, but human intervention is still required. For a formative assessment, the teacher often must manually review and score each student's response, which can be a time-consuming process with a large class. Additionally, inter-rater reliability is a factor. If four teachers give the same test containing items that must be manually scored, will all four faithfully follow the defined scoring plan? Should the teachers share the scoring (e.g., teacher A scores teacher B's tests)? Will the scoring rotation delay feedback to the students? Providing the student with timely feedback is important, not only in an eLearning environment but also in the traditional "brick and mortar" environment.
Speaking of feedback, the student can get immediate feedback
on each test item or at the end of the assessment. However, the test creator must ensure that
the feedback is appropriate. For example, if a constructed-response item asks "What is 2 plus 3?" and the student answers "5," the computer can display "Great!" or "You've answered correctly." If the student answers incorrectly, a message such as "Sorry, the answer is 5" can be displayed, along with an explanation of why. Providing only correct/incorrect feedback can communicate misleading information to students who select the correct answer but for the wrong reason. Alternatively, a final score can be provided to the student at the end of the test.
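As a minimal sketch of this per-item feedback logic (the function and wording are my own, not taken from any particular assessment program):

    def item_feedback(response, answer_key, explanation):
        """Return immediate feedback text for one test item."""
        if response.strip() == answer_key:
            return "You've answered correctly."
        # On a miss, give the key AND the reasoning, not just "incorrect".
        return f"Sorry, the answer is {answer_key}. {explanation}"

    print(item_feedback("6", "5", "2 plus 3 means counting 3 past 2: 3, 4, 5."))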
The teacher must keep time constraints in mind. A constructed-response test is going to take longer for the student to complete than a fixed-response test. A teacher must decide when it is better to administer a test online with all the scoring conducted electronically and when teacher intervention is needed. Creating a test online takes time, and how much depends on the teacher's experience with test construction and with their learning management system.
Some assessments are not suitable for eLearning, such as those that need to measure a performance-based skill (e.g., dancing). A teacher needs to balance constructed- and fixed-response test types against those that are more performance based. Performance-based items will need to be teacher-graded against a clearly defined scoring plan.
When a test is created, the teacher should consider placing constructed-response items, especially essay-type items, at the end of the test. This way the student can answer the easier items first and the more challenging questions later. A written test should open with a variety of fixed-response items followed by the more challenging constructed-response items.
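As a small sketch of that sequencing (the item kinds and dictionary structure are hypothetical, not from any particular LMS), the ordering can be automated by sorting on item type:

    # Easier, computer-scored kinds first; essays last.
    KIND_ORDER = {"multiple-choice": 0, "true-false": 0, "fill-in": 1, "essay": 2}

    def order_test(items):
        """Sort test items so fixed-response items precede constructed-response ones."""
        return sorted(items, key=lambda item: KIND_ORDER[item["kind"]])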
The test creator needs to strike a balance between computer-graded and teacher-graded assessment items. This balance helps not only the educator but also the student. With a mixture of the two, the computer can quickly and accurately grade fixed-response and "fill-in-the-blank" constructed-response items, leaving the teacher time to review only the incorrectly scored constructed-response items and to review and score the short- and long-essay constructed-response items. Students progress from easier to more challenging test items as they work through the different item types on their test. It is important that both types of assessments provide the students with clear, meaningful, and timely feedback.
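Putting these ideas together, here is one more hedged Python sketch (again using a hypothetical item structure, with "kind", "response", and "answer_key" fields of my own invention) of how a finished test might be split between the computer and the teacher:

    def route_items(items):
        """Split answered items into auto-scored results and a teacher review queue."""
        auto_scored, teacher_queue = [], []
        for item in items:
            if item["kind"] == "essay":
                teacher_queue.append(item)             # always human-scored
            elif item["response"] == item["answer_key"]:
                auto_scored.append((item, "correct"))
            elif item["kind"] == "fill-in":
                teacher_queue.append(item)             # possible misspelling: let the teacher decide
            else:
                auto_scored.append((item, "incorrect"))
        return auto_scored, teacher_queue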
Work Cited
Oosterhof, A., Conrad, R.-M., & Ely, D. P. (2008). Assessing Learners Online. Upper Saddle River, NJ: Merrill/Prentice Hall.