Modeling Student Evaluations of Written Texts and Author Characteristics as a Function of Errors

Keywords: college students, writing assessment, factor analysis, rater bias, writing instruction, peer assessment

Abstract

Writers are often evaluated by their audiences, and these evaluations may target both the text and the authors themselves. Building on prior research on writing assessment and error perception, this study examines how these processes are related. College students rated four essays: one containing no errors, one containing lower-severity errors, one containing higher-severity errors, and one containing both error types. Ratings addressed qualities of the written text (e.g., conventions, ideas, organization, cohesion, and voice) and characteristics of the author (e.g., creativity, intelligence, warmth, and kindness). Exploratory factor analysis revealed latent constructs underlying these ratings. One construct, writing quality and skill, comprised text characteristics along with the author's intellectual attributes (e.g., intelligence and topic knowledge). A second construct, author personality, comprised interpersonal attributes (e.g., kindness and loyalty). The two constructs were positively correlated. The results indicate that students formed holistic impressions of text quality and of authors rather than fragmented judgments about individual aspects. Judgments about authors' personal characteristics may point to implicit biases. Students also attended more to lower-severity errors than to higher-severity ones. Implications of rater bias and rater training for writing assessment are discussed.


Author Biographies

Joshua Wilson, University of Delaware

School of Education, Assistant Professor

Melissa Patchan, West Virginia University

Learning Sciences and Human Development, Assistant Professor

Dandan Chen, The American Board of Anesthesiology

Psychometrics and Research, Psychometrician


Published
2020-06-30
How to Cite
Roscoe, R., Wilson, J., Patchan, M., Chen, D., & Johnson, A. (2020). Modeling student evaluations of written texts and author characteristics as a function of errors. Journal of Language and Education, 6(2), 147-164. https://doi.org/10.17323/jle.2020.10316
Section
Original Research