RMIT University

An analysis of machine translation errors on the effectiveness of an Arabic-English QA system

conference contribution
posted on 2024-10-31, 10:19 authored by Azzah Al-Maskari, Mark Sanderson
The aim of this paper is to investigate how much the effectiveness of a Question Answering (QA) system is affected by the performance of Machine Translation (MT) based question translation. Nearly 200 questions were selected from TREC QA tracks and run through a question answering system, which answered 42.6% of them correctly in a monolingual run. The questions were then translated manually from English into Arabic and translated back into English using an MT system, then re-applied to the QA system, which answered only 10.2% of the translated questions. An analysis of which types of translation error affected which questions concluded that factoid questions are less prone to translation error than other question types.
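The round-trip evaluation described in the abstract reduces to an accuracy comparison between the monolingual and translated runs. The sketch below illustrates that comparison using only the figures reported above; the question count per run is derived from the reported percentages and "nearly 200 questions", so the absolute numbers are assumptions, not values from the paper:

```python
# Sketch of the accuracy comparison behind the round-trip MT evaluation.
# Counts are reconstructed from the abstract's percentages and are illustrative only.

def accuracy(correct: int, total: int) -> float:
    """Fraction of questions answered correctly."""
    return correct / total

total_questions = 200                                  # "nearly 200" TREC questions (assumed exact here)
monolingual_correct = round(0.426 * total_questions)   # 42.6% correct in the monolingual run
round_trip_correct = round(0.102 * total_questions)    # 10.2% correct after English->Arabic->English MT

drop = accuracy(monolingual_correct, total_questions) - accuracy(round_trip_correct, total_questions)
print(f"monolingual accuracy: {accuracy(monolingual_correct, total_questions):.1%}")
print(f"round-trip accuracy:  {accuracy(round_trip_correct, total_questions):.1%}")
print(f"absolute drop:        {drop:.1%}")
```

The comparison makes the headline result concrete: MT-based question translation costs the system roughly three quarters of its correct answers.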

History

Start page

9

End page

14

Total pages

6

Outlet

Proceedings of the EACL Workshop on Multilingual Question Answering (MLQA06)

Editors

Anselmo Penas and Richard Sutcliffe

Name of conference

EACL Workshop on Multilingual Question Answering (MLQA06)

Publisher

Association for Computational Linguistics (ACL)

Place published

East Stroudsburg, PA, USA

Start date

2006-04-04

End date

2006-04-04

Language

English

Copyright

© April 2006, Association for Computational Linguistics

Former Identifier

2006021730

Esploro creation date

2020-06-22

Fedora creation date

2011-10-28
