DC Field | Value | Language |
dc.contributor.author | Ashagre, Endale | - |
dc.date.accessioned | 2016-07-02T07:07:45Z | - |
dc.date.available | 2016-07-02T07:07:45Z | - |
dc.date.issued | 2009-08 | - |
dc.identifier.uri | http://hdl.handle.net/123456789/2088 | - |
dc.description.abstract | It is widely believed that “assessment drives curriculum”. Hence, it can be argued that if the
quality of teaching, training, and learning is to be upgraded, assessment is the obvious starting
point. Instructors employ a variety of assessment tools in order to get an overview of students’
performance. Among these, multiple choice quizzes and tests are the most reliable and
commonly used assessment tools. A dependable multiple choice item does not simply emerge;
it requires thorough assessment and continuous refinement. This implies the need to examine
item quality, within the context in which the item is employed, through different mechanisms.
One way to deal with this is through item analysis, a statistical procedure that combines
methods for evaluating the important characteristics of test items, such as their difficulty,
discrimination, and distractor effectiveness. Accordingly, the purpose of this study is to
examine sample exam papers administered by the different departments of St. Mary’s
University College and subsequently forward appropriate feedback on how to improve
multiple choice items. The study employed both qualitative and quantitative analysis.
Quantitatively, items were examined using basic item analysis statistics, including the Item
Difficulty and Discrimination indices and the Point-Biserial Correlation, as well as Frequency
Counts and Percentages. A total of seven hundred sixty-one exam papers, comprising 234
items from nine courses, were considered. To supplement the results obtained from these
quantitative data, items were qualitatively reviewed against the basic guidelines of multiple
choice item writing. Results of the study indicated that the majority (83%) of the items
examined had moderate difficulty (difficulty index .20 < p < .80), and more than half of the
items discriminated effectively (Discrimination index ≥ .20 for 72% of items; Point-Biserial
Correlation ≥ .20 for 52%). On the other hand, the poorly performing items identified by the
quantitative analysis were found to violate the basic principles of multiple choice test item
writing. | en_US |
dc.description.sponsorship | St. Mary's University | en_US |
dc.language.iso | en | en_US |
dc.publisher | St. Mary's University | en_US |
dc.subject | Test Construction Skills, St. Mary’s University College | en_US |
dc.title | Improving Test Construction Skills through Item Analysis: The Case of St. Mary’s University College | en_US |
dc.type | Article | en_US |
Appears in Collections: | Proceedings of the 7th National Conference on Private Higher Education Institutions (PHEIs) in Ethiopia |
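The basic item analysis statistics named in the abstract (item difficulty p, discrimination index D, and the point-biserial correlation) can be sketched in a few lines. Below is a minimal Python illustration; the 0/1 response matrix, the 27% upper/lower grouping convention, and all function names are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of the item analysis statistics named in the abstract.
    # Assumes dichotomously scored (0/1) responses; the 27% upper/lower
    # grouping for D is a common convention, not taken from the paper.
    import numpy as np

    def item_difficulty(item_scores):
        """Proportion of examinees answering the item correctly (p)."""
        return item_scores.mean()

    def discrimination_index(item_scores, total_scores, fraction=0.27):
        """Difference in p between the upper and lower scoring groups (D)."""
        n = int(len(total_scores) * fraction)
        order = np.argsort(total_scores)          # ascending by total score
        lower, upper = order[:n], order[-n:]
        return item_scores[upper].mean() - item_scores[lower].mean()

    def point_biserial(item_scores, total_scores):
        """Correlation between the 0/1 item score and the criterion score."""
        rest = total_scores - item_scores         # exclude the item itself
        p = item_scores.mean()
        q = 1 - p
        mean_correct = rest[item_scores == 1].mean()
        return (mean_correct - rest.mean()) / rest.std() * np.sqrt(p / q)

    # Hypothetical example: 6 examinees x 3 items, scored 0/1.
    responses = np.array([[1, 1, 0],
                          [1, 0, 1],
                          [1, 1, 1],
                          [0, 0, 1],
                          [1, 1, 1],
                          [0, 0, 0]])
    totals = responses.sum(axis=1)
    for j in range(responses.shape[1]):
        item = responses[:, j]
        print(f"item {j+1}: p={item_difficulty(item):.2f}, "
              f"D={discrimination_index(item, totals):.2f}, "
              f"r_pb={point_biserial(item, totals):.2f}")

Under the thresholds reported in the abstract, an item with .20 < p < .80 and D or r_pb at or above .20 would count as moderately difficult and effectively discriminating.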