Automated Essay Scoring

  • Human Language Technologies
  • Categories: Computers & Internet
  • Language: English (Translation Services Available)
  • Publication date:
  • Pages: 314
  • Retail Price: (Unknown)
  • Size: 190mm × 234mm
  • Words: (Unknown)
  • Text Color: Black and white

Request for Review Sample

By submitting this request through our website, you are applying to evaluate this book. If the application is approved, you may read the electronic edition of the book online.

Copyright Usage Application

Special Note:
Submitting this request means you agree to inquire about this book through RIGHTOL and undertake, for 18 months, not to inquire about it through any other third party, including but not limited to authors, publishers, and other rights agencies. Otherwise, we have the right to terminate your use of Rights Online and our cooperation, and to require a penalty of no less than 1,000 US dollars.


Description

This book discusses the state of the art of automated essay scoring, its challenges and its potential. One of the earliest applications of artificial intelligence to language data (along with machine translation and speech recognition), automated essay scoring has evolved to become both a revenue-generating industry and a vast field of research, with many subfields and connections to other NLP tasks. In this book, we review the developments in this field against the backdrop of Ellis Page's seminal 1966 paper titled "The Imminence of Grading Essays by Computer."

Part 1 establishes what automated essay scoring is about, why it exists, where the technology stands, and what some of the main issues are. In Part 2, the book presents guided exercises to illustrate how one would go about building and evaluating a simple automated scoring system, while Part 3 offers readers a survey of the literature on different types of scoring models, the aspects of essay quality studied in prior research, and the implementation and evaluation of a scoring engine. Part 4 offers a broader view of the field, inclusive of some neighboring areas, and Part 5 closes with a summary and discussion.
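The pipeline described for the guided exercises in Part 2 — extract features from an essay, fit a model to human scores, and evaluate agreement — can be sketched in a few lines. The sketch below is not the book's actual code; it uses a single deliberately simple feature (essay length), a one-variable least-squares fit, and quadratic weighted kappa (QWK), the standard agreement metric in essay scoring. All data shown is toy data.

```python
def word_count(essay: str) -> int:
    """A single, deliberately simple feature: essay length in words."""
    return len(essay.split())

def fit_line(xs, ys):
    """One-variable least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return slope, my - slope * mx

def predict(slope, intercept, x, lo, hi):
    """Map the feature value to an integer score on the scale [lo, hi]."""
    return max(lo, min(hi, round(slope * x + intercept)))

def quadratic_weighted_kappa(a, b, lo, hi):
    """Chance-corrected agreement between two sets of integer ratings.

    1.0 means perfect agreement; disagreements are penalized by the
    squared distance between the two scores.
    """
    n = hi - lo + 1
    obs = [[0] * n for _ in range(n)]       # observed score pairs
    for x, y in zip(a, b):
        obs[x - lo][y - lo] += 1
    total = len(a)
    hist_a = [sum(row) for row in obs]      # marginal counts, rater a
    hist_b = [sum(obs[i][j] for i in range(n)) for j in range(n)]
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            w = (i - j) ** 2 / (n - 1) ** 2           # quadratic weight
            num += w * obs[i][j]                       # observed disagreement
            den += w * hist_a[i] * hist_b[j] / total   # expected by chance
    return 1.0 - num / den

# Toy training data: (essay, human score on a 1-4 scale).
train = [("ok", 1),
         ("a bit longer essay", 2),
         ("this essay develops its point in more words", 3),
         ("a long essay that keeps going and develops several points fully", 4)]
xs = [word_count(essay) for essay, _ in train]
ys = [score for _, score in train]
slope, intercept = fit_line(xs, ys)
preds = [predict(slope, intercept, x, 1, 4) for x in xs]
print(quadratic_weighted_kappa(preds, ys, 1, 4))
```

A real system would of course use many features (and, in current practice, learned representations), held-out evaluation data, and comparison against human-human agreement — which is exactly the ground the book's Parts 2 and 3 cover.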

This book grew out of a week-long course on automated evaluation of language production at the North American Summer School for Logic, Language, and Information (NASSLLI), attended by advanced undergraduates and early-stage graduate students from a variety of disciplines. Teachers of natural language processing, in particular, will find that the book offers a useful foundation for a supplemental module on automated scoring. Professionals and students in linguistics, applied linguistics, educational technology, and other related disciplines will also find the material here useful.

Author

Beata Beigman Klebanov, Educational Testing Service
Beata Beigman Klebanov is a Senior Research Scientist at Educational Testing Service, Princeton, NJ. She specializes in the development of language technology for education, in the subfields of reading and writing. She has led projects on developing automated methods for assessing the quality of arguments, topic development, and use of figurative language, and has worked on methods for estimating text complexity and predicting the rate of oral reading of a given text. She has also studied the effect of noise in language data on the performance of statistical models, as well as characteristics of class vs. test performance. She is the principal investigator behind Relay Reader™, an innovative technology to support the development of reading fluency.
Dr. Beigman Klebanov’s research has appeared in leading journals, such as Computational Linguistics, Transactions of the ACL, ACM Transactions on Speech and Language Processing, Journal of AI in Education, Language Testing, and the Journal of Educational Psychology, as well as in the proceedings of top-tier conferences such as the Association for Computational Linguistics’ annual meetings (ACL), the Learning Analytics and Knowledge conferences (LAK), and the annual meetings of the National Council on Measurement in Education (NCME). She has co-organized a series of ACL workshops and shared tasks on the processing of metaphor and other types of figurative language. Beata is currently serving as an action editor for the Transactions of the ACL journal and has served as an area chair or senior area chair for the NAACL/ACL conferences in 2019–2022.

Nitin Madnani, Educational Testing Service
Nitin Madnani is a Distinguished Research Engineer in the AI Research Labs at the Educational Testing Service (ETS) in Princeton. His NLP adventures began with an elective course on computational linguistics he took while studying computer architecture at the University of Maryland, College Park. As a Ph.D. student at the Institute of Advanced Computer Studies (UMIACS), he worked on automated document summarization, statistical machine translation, and paraphrase generation. After earning his Ph.D. in 2010, he joined the NLP & Speech research group at ETS where he led—and continues to lead—a wide variety of projects that use NLP to build useful educational applications and technologies. Examples include mining Wikipedia revision history to correct grammatical errors, automatically detecting organizational elements in argumentative discourse, creating a service-based, polyglot framework for implementing robust, high-performance automated scoring & feedback systems, and building the first-ever, fully open-source, comprehensive evaluation toolkit for automated scoring.
Dr. Madnani’s work has appeared in leading journals such as Computational Linguistics, Transactions of the ACL, ACM Transactions on Speech and Language Processing, ACM Transactions on Intelligent Systems and Technology, Machine Translation, the Journal of Writing Analytics, and the Journal of Open Source Software. His research has also appeared in the proceedings of top-tier conferences such as the Association for Computational Linguistics’ annual meeting series (ACL, NAACL, EACL, EMNLP), Learning Analytics and Knowledge, Learning @ Scale, and the annual meetings of the American Educational Research Association (AERA) and the National Council on Measurement in Education (NCME). Nitin is currently serving as an action editor for the Transactions of the ACL (TACL) journal, an executive board member of the ACL Special Interest Group on Building Educational Applications (SIGEDU), and the Chief Information Officer for ACL.
He has served as senior area chair, area chair, or a member of the organizing committee for the NAACL/ACL/EMNLP series of conferences since 2017.

Contents

Preface
Building an Automated Essay Scoring System
From Lessons to Guidelines / Models
Generic Features
Genre- and Task-Specific Features
Automated Scoring Systems: From Prototype to Production
Evaluating for Real-World Use
Automated Feedback
Automated Scoring of Content
Automated Scoring of Speech
Fooling the System: Gaming Strategies
Looking Back, Looking Ahead
Definitions-in-Context
Index
References
Authors' Biographies

© 2024 RIGHTOL All Rights Reserved.