  1. PRESCRIPTIVE AND ANALYTIC TECHNIQUES FOR QUALITY ASSURANCE IN THE MEASUREMENT OF EDUCATIONAL ACHIEVEMENT BY DR MAMMAN MUSA DEPARTMENT OF SCIENCE EDUCATION AHMADU BELLO UNIVERSITY, ZARIA A DISCUSSION PAPER PRESENTED AT A WORKSHOP FOR DEANS, DIRECTORS AND HEADS OF DEPARTMENT HELD AT POSTGRADUATE AUDITORIUM, A. B. U. ZARIA ON 20TH JULY, 2016

  2. INTRODUCTION One of the most important traditional functions of university academic staff is to impart knowledge to students through lecturing. This earned such staff the name "lecturer". Owing to a series of developments in education and technology, the lecturer's role has shifted drastically from that of a lecturer to that of a facilitator of learning (Norton, 2009). The final activity of a lecturer is to assess his or her students and award grades.

  3. INTRODUCTION CONTD 1. This assessment component was among the "seven golden rules for university and college lecturers" postulated by Ellington (2000). 2. The Quality Assurance Agency (QAA, 2010) conducted an audit of 123 institutions; assessment was among the areas needing major improvement. 3. The National Student Survey (2007) indicated that students were least satisfied with their assessment by their lecturers. 4. In the process of preparing this paper, a six-item questionnaire (Appendix A) was administered to 700 randomly selected students. About 75% of the respondents stated that the grades given by their lecturers do not reflect their true effort or ability.

  4. INTRODUCTION CONTD One aim of this paper is to provide lecturers with prescriptive and analytic techniques that could improve their assessment practices. • The prescriptive techniques are guidelines or suggestions that are believed to produce good results. • The analytic techniques, on the other hand, are mathematical techniques that use formulae to provide indices that could lead to qualitative assessment of students for sound decision making.

  5. SOME BASIC CONCEPTS • Tests are educational measurement tools deployed to assess students' performance. Tests are usually used for continuous assessment (CA) or for feedback. • Examinations are sets of educational instruments (tests and non-tests) used to measure students' performance or achievement. • Measurement is the process of assigning numerals to objects, events or traits of persons according to rules (Stevens, 1951).

  6. SOME BASIC CONCEPTS CONTD • Evaluation is a process that determines the effects of the curriculum on the teaching and learning of students. Evaluation can be said to have three components: quantitative, qualitative and value judgement.

  7. Prescriptive Techniques for Qualitative Assessment • Using Bloom's Taxonomy: • Knowledge: Recognition or recall of specific materials • Comprehension: Grasping the meaning of materials • Application: Using information in concrete situations • Analysis: Breaking down materials into parts • Synthesis: Putting together parts to form a whole • Evaluation: Judging the value of materials and methods for a given purpose.

  8. 2. Good Planning: • Determine the Purpose of the Test: Is the test for placement, diagnostic, formative or summative purposes? • Determine the Format of the Test: Is it an essay, multiple choice, true or false, fill-in-the-blank or matching test, a combination of these, or a CBT? • Determine the Number of Items to be Contained in the Test, with an appropriate time to complete the test. • Develop a Table of Specification, as illustrated in the sketch below.
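
A table of specification (test blueprint) spreads items across content topics and cognitive levels in proportion to their weights. The Python sketch below illustrates the idea; the topics, weights and 40-item total are illustrative assumptions, not figures from this paper.

# Minimal table-of-specification sketch: allocate test items across
# topics and Bloom levels in proportion to their weights.
# Topics, weights and the 40-item total are illustrative assumptions;
# rounding may make row totals differ slightly from the target.
topic_weights = {"Topic A": 0.40, "Topic B": 0.35, "Topic C": 0.25}
level_weights = {"Knowledge": 0.30, "Comprehension": 0.25,
                 "Application": 0.20, "Analysis": 0.15,
                 "Synthesis": 0.05, "Evaluation": 0.05}
total_items = 40

for topic, tw in topic_weights.items():
    # Number of items for this topic at each cognitive level
    row = {level: round(total_items * tw * lw)
           for level, lw in level_weights.items()}
    print(topic, row)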

  9. 3. Different Types of Tests: • The Essay Test • Multiple Choice Test • True-False Test • Short-Answer Test (Completion or Fill-in-the-Blank) • CBT

  10. ANALYTIC TECHNIQUES • Use of the Psychometrics of Tests: • Validity index indicates the extent to which a test serves the purpose for which it is intended. • Reliability index indicates the extent to which the results generated by the test are consistent from one measurement session to another (see the sketch after this slide). • Difficulty index of a test is its level of difficulty; a good test should possess an appropriate level of difficulty.
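
The paper does not prescribe a particular reliability coefficient; as a minimal sketch, the Kuder-Richardson formula 20 (KR-20), a standard reliability index for tests scored right/wrong, can be computed as follows. The 0/1 score matrix is hypothetical.

# Minimal KR-20 reliability sketch for a dichotomously scored test.
# Rows = students, columns = items; the data are hypothetical.
scores = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
]

k = len(scores[0])                      # number of items
n = len(scores)                         # number of students
totals = [sum(row) for row in scores]   # each student's total score
mean = sum(totals) / n
var_total = sum((t - mean) ** 2 for t in totals) / n  # population variance

# p = proportion answering an item correctly, q = 1 - p
pq_sum = 0.0
for i in range(k):
    p = sum(row[i] for row in scores) / n
    pq_sum += p * (1 - p)

kr20 = (k / (k - 1)) * (1 - pq_sum / var_total)
print(round(kr20, 2))   # 0.65 for this hypothetical matrix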

  11. ANALYTIC TECHNIQUES • Discrimination index of a test is the ability of the test to distinguish between good and poor students; it should also be at an appropriate level (see the sketch below).
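
A minimal sketch of computing the difficulty and discrimination indices for a single item, assuming hypothetical student scores and a simple upper-half/lower-half grouping (27% extreme groups are a common alternative):

# Minimal sketch of item difficulty and discrimination indices.
# Each tuple holds a student's total test score and whether that
# student answered the item under analysis correctly (1) or not (0).
# All data are hypothetical.
students = [
    (48, 1), (45, 1), (44, 1), (40, 1), (38, 0),
    (35, 1), (33, 0), (30, 0), (28, 0), (25, 0),
]

# Difficulty index p: proportion of all students answering correctly.
p = sum(c for _, c in students) / len(students)

# Discrimination index D: proportion correct in the upper group minus
# proportion correct in the lower group (here, top and bottom halves).
ranked = sorted(students, key=lambda s: s[0], reverse=True)
g = len(ranked) // 2
upper = sum(c for _, c in ranked[:g]) / g
lower = sum(c for _, c in ranked[-g:]) / g
D = upper - lower

print(round(p, 2), round(D, 2))   # p = 0.5, D = 0.6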

  12. 2. Final Course Scores Too Low EXAMPLE: 32, 34, 25, 40, 36, 38, 45, 28, 35, 30, 44, 32, 44, 29, 31, 43, 46, 46, 37, 33. If the scores of a particular course are too low for no justifiable reason, such scores have to be normalized by the formula T = 10Z + 50, where T is the normalized score and Z = (raw score − mean)/standard deviation.

  13. Final Course Scores Too Low • The transformed scores then become: 43, 46, 33, 55, 49, 52, 63, 37, 47, 40, 62, 43, 62, 40, 42, 60, 65, 65, 51, 45.
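
A minimal sketch of this T = 10Z + 50 transformation, assuming the sample standard deviation (the slides do not state which convention is used, so some rounded values may differ by a point from those listed). The same function applies to the high scores treated on the next slide.

# Minimal T = 10Z + 50 normalization sketch for the low scores above.
# Assumes the sample standard deviation; some rounded values may
# differ by a point from the slide depending on the convention used.
raw = [32, 34, 25, 40, 36, 38, 45, 28, 35, 30,
       44, 32, 44, 29, 31, 43, 46, 46, 37, 33]

n = len(raw)
mean = sum(raw) / n                                        # 36.4
sd = (sum((x - mean) ** 2 for x in raw) / (n - 1)) ** 0.5  # sample stdev

t_scores = [round(10 * (x - mean) / sd + 50) for x in raw]
print(t_scores)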

  14. 3. Final Course Results Too High EXAMPLE: 62, 74, 65, 70, 76, 88, 75, 68, 75, 60, 74, 62, 64, 69, 81, 73, 66, 76, 77, 90. If the scores of a particular course are too high for no justifiable reason, the scores have to be normalized by the same formula: T = 10Z + 50, where T is the normalized score and Z = (raw score − mean)/standard deviation.

  15. Final Course Results Too High • The transformed scores then become: 39, 54, 43, 49, 57, 72, 56, 47, 56, 36, 54, 39, 41, 47, 64, 53, 44, 57, 58, 75.

  16. CONCLUSION • An objective assessment is a function of the quality of the instrument that measures the educational outcomes. • It is always good teaching that produces good results, and good teaching hinges on setting teachable, learnable and achievable educational objectives. • Lecturers should employ tests of various types to cater for all aspects of educational objectives.

  17. CONCLUSION • It is recommended that lecturers use both the prescriptive and analytic techniques whenever the need arises. • It is a case of examination malpractice to donate marks in an attempt to raise performance, and equally so to subtract marks in an attempt to moderate the results. The best practice is to transform or normalize the scores.

  18. THANK YOU ALL FOR LISTENING

  19. REFERENCES • Alonge, M.F. and Ojerinde, A. (1987). Guessing Tendency and Differential Performance of Secondary School Students on Mathematics Achievement Tests. Journal of Science Teachers Association, 25(1 & 2), 56-63. • Alonge, M.F. (2003). Assessment and Examination: The Pathways to Development. Inaugural Lecture at University of Ado-Ekiti, 21st August. • Bachman, L.F. (1990). Fundamental Considerations in Language Testing. Oxford: Oxford University Press. • Bloom, B.S. (1956). Taxonomy of Educational Objectives: The Classification of Educational Goals: Handbook I, Cognitive Domain. N.Y.; Toronto: Longmans. • Bodmann, S.M. and Robinson, D.H. (2004). Speed and performance differences among computer-based and paper-pencil tests. Journal of Educational Computing Research, 31, 51-60. • Brown, F.G. (1970). Principles of Educational and Psychological Testing. Illinois: The Dryden Press Inc. • Carlbring, P., et al. (2007). Internet vs. paper and pencil administration of questionnaires commonly used in panic/agoraphobia research. Computers in Human Behavior, 23, 1421-1434. • Cronbach, L.J. (1989). Essentials of Psychological Testing. Fourth Edition. N.Y.: Harper & Row. • Cronk, B.C. and West, J.L. (2002). Personality research on the Internet: A comparison of Web-based and traditional instruments in take-home and in-class settings. Behavior Research Methods, Instruments and Computers, 34(2), 177-180. • DiLalla, D.L. (1996). Computerized administration of the Multidimensional Personality Questionnaire. Assessment, 3, 365-374. • Dillon, A. (1994). Designing usable electronic text: Ergonomic aspects of human information usage. London: Taylor & Francis.

  20. REFERENCES • Ellington, H. (2000). How to Become an Excellent Tertiary-Level Teacher. Journal of Further and Higher Education, 24(3). • Ford, B.D., Vitelli, R., and Stuckless, N. (1996). The effects of computer versus paper-and-pencil administration on measures of anger and revenge with an inmate population. Computers in Human Behavior, 12, 159-166. • Fouladi, R.T., McCarthy, C.J., and Moller, N.P. (2002). Paper-and-pencil or online? Evaluating mode effects on measures of emotional functioning and attachment. Assessment, 9, 204-215. • Fox, S. and Schwartz, D. (2002). Social desirability and controllability in computerized and paper-and-pencil personality questionnaires. Computers in Human Behavior, 18, 389-410. • George, C.E., Lankford, J.S., and Wilson, S.E. (1992). The effects of computerized versus paper-and-pencil administration on negative affect. Computers in Human Behavior, 8, 203-209. • Goldberg, A., Russell, M., and Cook, A. (2003). The effect of computers on student writing: A meta-analysis of studies from 1992 to 2002. Journal of Technology, Learning, and Assessment, 2, 1-51. [online]. Available from: http://www.jtla.org [Accessed 15 June 2006]. • Gould, J.D. (1981). Composing letters with computer-based text editors. Human Factors, 23, 593-606. • King, W.C. and Miles, E.W. (1995). A quasi-experimental assessment of the effect of computerizing noncognitive paper-and-pencil measurements: A test of measurement equivalence. Journal of Applied Psychology, 80, 643-651. • Kobak, K.A., Reynolds, W.M., and Greist, J.H. (1993). Development and validation of a computer-administered version of the Hamilton Anxiety Scale. Psychological Assessment, 5, 487-492. • Lankford, J.S., Bell, R.W., and Elias, J.W. (1994). Computerized versus standard personality measures: Equivalency, computer anxiety, and gender differences. Computers in Human Behavior, 10, 497-510. • McCoy, S., et al. (2004). Electronic versus paper surveys: Analysis of potential psychometric biases. In: Proceedings of

  21. REFERENCES • the IEEE 37th Hawaii International Conference on System Sciences. Washington, DC: IEEE Computer Society, 80265C. • Mayes, D.K., Sims, V.K., and Koonce, J.M. (2001). Comprehension and workload differences for VDT and paper-based reading. International Journal of Industrial Ergonomics, 28, 367-378. • Merten, T. and Ruch, W. (1996). A comparison of computerized and conventional administration of the German versions of the Eysenck Personality Questionnaire and the Carroll Rating Scale for Depression. Personality and Individual Differences, 20, 281-291. • National Student Survey (2007). http://www.unitstats.com (last accessed 20th November, 2007). • Norton, A. (2009). Assessing Student Learning. In Fry, H., Ketteridge, S. and Marshall, S. (Eds), A Handbook for Teaching and Learning in Higher Education (3rd Edition). New York: Routledge Taylor and Francis Group. • Noyes, E.S. (1963). Essay and Objective Test in English. College Board Review, 49, 7. • Noyes, J.M., Garland, K.J., and Robbins, E.L. (2004). Paper-based versus computer-based assessment: Is workload another test mode effect? British Journal of Educational Technology, 35, 111-113. • Pinsoneault, T.B. (1996). Equivalency of computer-assisted and paper-and-pencil administered versions of the Minnesota Multiphasic Personality Inventory-2. Computers in Human Behavior, 12, 291-300. • Puhan, G. and Boughton, K.A. (2004). Evaluating the comparability of paper and pencil versus computerized versions of a large-scale certification test. In: Proceedings of the annual meeting of the American Educational Research Association (AERA). San Diego, CA: Educational Testing Service. • Rosenfeld, R., et al. (1992). A computer-administered version of the Yale-Brown Obsessive-Compulsive Scale. Psychological Assessment, 4, 329-332. • Russell, M. (1999). Testing on computers: A follow-up study comparing performance on computer and paper. Education Policy Analysis Archives, 7, 20. • Steer, R.A., et al. (1994). Use of the computer-administered Beck Depression Inventory and Hopelessness Scale with psychiatric patients. Computers in Human Behavior, 10, 223-229. • Stevens, S.S. (1951). Mathematics, Measurement and Psychophysics. In Stevens, S.S. (Ed.), Handbook of Experimental Psychology. New York: John Wiley and Sons.

  22. REFERENCES • van de Vijver, F.J.R. and Harsveld, M. (1994). The incomplete equivalence of the paper-and-pencil and computerized versions of the General Aptitude Test Battery. Journal of Applied Psychology, 79, 852-859. • Vansickle, T.R. and Kapes, J.T. (1993). Comparing paper-pencil and computer-based versions of the Strong-Campbell Interest Inventory. Computers in Human Behavior, 9, 441-449. • Vispoel, W.P. (2000). Computerized versus paper-and-pencil assessment of self-concept: Score comparability and respondent preferences. Measurement and Evaluation in Counseling and Development, 33, 130-143. • Wastlund, E., et al. (2005). Effects of VDT and paper presentation on consumption and production of information: Psychological and physiological factors. Computers in Human Behavior, 21, 377-394. • Williams, J.E. and McCord, D.M. (2006). Equivalence of standard and computerized versions of the Raven Progressive Matrices test. Computers in Human Behavior, 22, 791-800. • Ziefle, M. (1998). Effects of display resolution on visual performance. Human Factors, 40, 554-568.
