Human–Automated Judgment Learning: Enhancing Interaction with Automated Judgment Systems
This chapter presents human–automated judgment learning (HAJL), a methodology for investigating human interaction with automated judges that can inform training and design. After introducing HAJL, the chapter describes the experimental task and experimental design used as a test case for investigating HAJL's utility. It then reports idiographic results representative of the insights HAJL can provide, along with a nomothetic analysis of the experimental manipulations, and closes with conclusions about HAJL's utility. The results demonstrated HAJL's ability not only to capture individual judgment achievement, interaction with an automated judge, and understanding of an automated judge, but also to identify the mechanisms underlying these performance measures, including cognitive control, knowledge, conflict, compromise, adaptation, and actual and assumed similarity. In addition, the results highlight the many factors involved in designing effective human–automated judge interaction, which require detailed methods for measurement and analysis.