Evaluation for operational IR applications: generalizability and automation
2013 (English). In: LivingLab '13: Proceedings of the 2013 Workshop on Living Labs for Information Retrieval Evaluation, New York: Association for Computing Machinery (ACM), 2013, pp. 11-12. Conference paper (Refereed)
Black-box information retrieval (IR) application evaluation allows practitioners to measure the quality of their IR applications. Instead of evaluating individual components, e.g. the search engine alone, the complete IR application, including the user's perspective, is evaluated. The evaluation methodology is designed to be applicable to operational IR applications. The black-box evaluation methodology could be packaged into an evaluation and monitoring tool, making it usable for industry stakeholders. Such a tool should lead practitioners through the evaluation process and maintain the results of both manual and automatic tests. This paper shows that the methodology is generalizable despite the high diversity of IR applications. The main challenges in automating tests are simulating tasks that require intellectual effort and handling different visualizations of the same concept.
Place, publisher, year, edition, pages
New York: Association for Computing Machinery (ACM), 2013. pp. 11-12.
Keywords: Information retrieval, application evaluation, user perception
Research subject: Man-Machine-Interaction (MMI)
Identifiers
URN: urn:nbn:se:su:diva-97726
DOI: 10.1145/2513150.2513160
ISBN: 978-1-4503-2420-5
OAI: oai:DiVA.org:su-97726
DiVA: diva2:679956
Conference: ACM International Conference on Information and Knowledge Management (CIKM 2013), October 27 - November 1, 2013, Burlingame, CA, USA