Tuesday, February 4, 2014

Blog 4 - Pros and Cons of Constructed-response


Describe the various types of constructed-response assessments. What are the advantages and disadvantages of using these types of assessments? Include pros and cons of making the exam as well as grading and giving feedback.
In a written assessment, there are two categories of test items: constructed-response and fixed-response. With a constructed-response item, which includes the completion and essay formats, students enter or write their response rather than selecting an answer from a set of options, as with multiple-choice. Most educators and students are accustomed to multiple-choice or true-false tests; those are fixed-response items.
There are two types of constructed-response items: completion items and essay items. A completion item is often known as a “fill-in-the-blank” item; the student typically completes a sentence by supplying a missing word or phrase. With an essay item, the student provides a narrative response to the test item.

Developing Assessments: A Guide to Multiple Choice, Constructed-Response, Thematic Essays, and Document Based Questions (http://www.edteck.com/michigan/guides/Assess_guide.pdf) lays out the foundations of creating test items (e.g., aligning the test to school district standards, assessing a variety of cognitive levels, using authentic materials, and assessing a range of skills). The document also provides guidelines for writing and scoring constructed-response test items.

The completion format has three advantages: (1) ease of construction, (2) student-generated answers, and (3) the ability to include many items in one test. It also has disadvantages: (1) it is largely limited to measuring recall of information, what Webb's Depth of Knowledge calls Level 1 (http://www.aps.edu/rda/documents/resources/Webbs_DOK_Guide.pdf) and Bloom's Taxonomy calls remembering (http://ww2.odu.edu/educ/roverbau/Bloom/blooms_taxonomy.htm), and (2) scoring errors occur more often than with objectively scored items.
Completion Format Advantages and Limitations

Advantage: Ease of construction
  • Readily measures recall of information
  • No detailed scoring plan is required
Limitation: Limited to measuring recall of information
  • Often does not measure procedural knowledge

Advantage: The student generates the answer
  • Students must produce the answer themselves rather than simply recognize it (with selected-response formats, students do not always have to solve the problem presented)
  • Minimizes guessing (by comparison, the probability of guessing correctly on a four-option multiple-choice item is .25)
  • Reliability is better than multiple-choice
Limitation: Scoring errors can occur
  • A variety of responses can be acceptable, unlike multiple-choice, true-false, or other alternate-choice formats, where the answer choices are fixed
  • The test item must be carefully constructed to ensure that the student's response matches the desired response (especially when the test is scored electronically)
  • Errors can occur when responses are scored electronically

Advantage: Many items can be included in one test
  • More adequate sampling of content (broader sampling of content requires including a larger number of test items)
  • Increased generalizability of test scores




According to Oosterhof, Conrad, and Ely (2008, p. 88), when writing completion items, it is important for the educator to apply the following eight criteria:
  1. Does this item measure the specified skill?
  2. Is the reading skill required by this item below the students’ ability?
  3. Will only a single or very homogeneous set of responses provide a correct response to the item?
  4. Does the item use grammatical structure and vocabulary that is different from that contained in the source of instruction?
  5. If the item requires a numerical response, does the question state the unit of measure to be used in the answer?
  6. Does the blank represent a key word?
  7. Are blanks placed at or near the end of the item?
  8. Is the number of blanks sufficiently limited?


Scoring the completion format is less objective than scoring other test item formats (e.g., multiple-choice or true/false), since the student supplies their own response. The educator's challenge is to write completion items so that there is only one correct answer, because otherwise there can be multiple acceptable answers. The scoring plan should include the correct answer and, when applicable, a list of other acceptable alternatives. The scoring plan ensures that the educator scores consistently; it is not fair to accept an answer as correct on one student's test and reject the same answer on another student's test.
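To make the idea of a completion-item scoring plan concrete, here is a minimal sketch in Python (the item names, answers, and the score_completion helper are invented for illustration and are not from Oosterhof, Conrad, and Ely): each item lists its correct answer plus any acceptable alternatives, and every student's response is checked against the same key.

# Hypothetical completion-item scoring plan: each item records the preferred
# answer and any acceptable alternatives, so all students are judged the same way.
scoring_plan = {
    "item_1": {"correct": "photosynthesis", "alternatives": []},
    "item_2": {"correct": "25 grams", "alternatives": ["25 g", "25g"]},
}

def score_completion(item_id, response):
    """Return 1 if the response matches the correct answer or an accepted
    alternative (ignoring case and surrounding spaces), otherwise 0."""
    plan = scoring_plan[item_id]
    accepted = [plan["correct"], *plan["alternatives"]]
    normalized = response.strip().lower()
    return int(any(normalized == a.strip().lower() for a in accepted))

print(score_completion("item_2", "25 G"))         # 1: an accepted alternative
print(score_completion("item_2", "twenty-five"))  # 0: not in the scoring plan

Writing the alternatives into the plan before grading begins is what keeps the decision about what counts as correct consistent across students, whether the checking is done by hand or electronically.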

Essay items have a number of strengths over the completion format: they (1) measure instructional objectives more directly, (2) allow the educator to gain insight into the student's thinking, (3) are less time-consuming to construct, and (4) present a more realistic task to the student. Their limitations are that they (1) provide less adequate sampling of the assessed content, (2) raise reliability issues in how the item is scored, and (3) involve a time factor.
Essay Test Item Advantages and Limitations

Advantage: Measures instructional objectives more directly
  • Measures the behavior called for in the performance objective
Limitation: Less adequate sampling of assessed content
  • Students must take time to read and answer each item, so the test cannot cover all of the learned content
  • A single broad essay question should not be expected to cover a large percentage of the skills
Advantage: Student insight
  • Measures higher-level cognitive objectives
  • The student selects, organizes, and integrates information in a logical way
  • Measures the student's mastery of the content rather than their writing skills; if writing skills are assessed, the writing score should be reported separately
Limitation: Reliability issues
  • Educator bias in scoring can affect test reliability
  • Different teachers may score the same response differently
  • Educator scoring fatigue (papers at the top of the pile may be scored differently than those at the end)
  • Scores can be influenced by the educator's expectations of the student
  • Writing conventions and presentation can affect the score (although handwriting is not a factor in an online test)
Advantage: Less time-consuming to construct
  • Constructing the test itself takes less time
  • Time must still be spent to ensure an accurate scoring plan
Limitation: Time factor
  • The educator must take time to read and score the test (even if automatic scoring is available)
  • If other educators also score the tests (for inter-rater reliability), additional time is needed for them to read and score
  • Producing a well-defined scoring plan is time-consuming
  • Students must take time to read and answer each item (each item should be answerable within about 10 minutes)
Advantage: Realistic task
  • In the workforce, people are not asked to perform tasks through multiple-choice or true/false questions; they must organize and communicate their thoughts

According to Oosterhof, Conrad, and Ely (2008, p. 96), when writing essay items, it is important for the educator to apply the following six criteria:
  1. Does this item measure the specified skill?
  2. Is the level of reading skill required by this item below the learners’ ability?
  3. Will all or almost all students answer this item in less than 10 minutes?
  4. Will the scoring plan result in different readers (scorers) assigning similar scores to a given student’s response?
  5. Does the scoring plan describe a correct and complete response?
  6. Is the item written in such a way that the scoring plan will be obvious to knowledgeable learners?



An educator needs to set aside time to score an essay test, whether or not technology is used to score the test automatically; even with automation, the educator should review the students' answers. Educators may find essay tests easier to prepare, since fewer questions are included in a test, but they need to consider not just the test-writing component but the scoring component as well.

Students should have access to the scoring plan before answering the essay item. This ensures that they have clear expectations of what is required and provides guidance for responding to the essay. Consistency when scoring an essay item is important, ensuring that all answers are given the correct point value, so the scoring plan should be clearly defined. Oosterhof, Conrad, and Ely (2008, p. 92) state that a scoring plan should have three characteristics: (1) the total number of points assigned to the item, based on its importance relative to other items, (2) the specific attributes to be evaluated in students' responses, and (3) for each attribute, the criteria for awarding points, including partial credit.
The use of rubrics (either holistic or analytic) is important. An analytic scoring plan describes each required element, the points the student receives for it, and the point values and criteria for partially correct answers; an overall score is then assigned by combining the element scores. A holistic approach considers the student's answer as a whole: there is no single correct response, and the focus is on overall quality and understanding of the content and skills (http://www.uni.edu/chfasoa/analyticholisticrubrics.pdf).
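As a small illustration of the analytic approach, the Python sketch below (with attribute names and point values invented for illustration rather than taken from the cited text) treats the scoring plan as a set of attributes, each with a maximum point value, and computes the overall score by summing the points awarded per attribute, partial credit included.

# Hypothetical analytic scoring plan for one essay item: each attribute has a
# maximum point value, and partial credit can be awarded per attribute.
analytic_plan = {
    "identifies the main concept": 4,
    "supports the answer with evidence": 4,
    "organizes the response logically": 2,
}

def score_essay(points_awarded):
    """Sum the points awarded for each attribute, capped at that attribute's maximum."""
    total = 0
    for attribute, maximum in analytic_plan.items():
        total += min(points_awarded.get(attribute, 0), maximum)
    return total

print(score_essay({
    "identifies the main concept": 4,        # full credit
    "supports the answer with evidence": 2,  # partial credit
    "organizes the response logically": 1,   # partial credit
}))  # 7 out of a possible 10

A holistic rubric, by contrast, would assign a single overall score to the response rather than summing scores attribute by attribute.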

Work Cited
Oosterhof, A., Conrad, R.-M., & Ely, D. P. (2008). Assessing Learners Online. Upper Saddle River, NJ: Merrill/Prentice Hall.
