This is a page about Thresholded Minimum Error Rate Training (TMERT). Minimum Error Rate Training (MERT) is a method widely used for tuning model parameters in applications such as machine translation. TMERT adds a threshold to the MERT process, making it possible to abstain from answering any particular input. The example implementation here can be used to learn weights and thresholds for question answering that maximize the C@1 score (and it could be modified to optimize other similar measures). It is available under the LGPL 2.1, and was used as part of our question answering system for CLEF 2013.
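To illustrate the idea of thresholded abstention, here is a minimal sketch (not the tmert.py implementation itself) of computing C@1 and picking a score threshold below which the system withholds its answer. It assumes the standard C@1 definition from the QA4MRE evaluations, C@1 = (n_R + n_U * n_R / n) / n, where n_R is the number of correct answers, n_U the number of unanswered questions, and n the total; the helper names and the simple exhaustive threshold search are illustrative assumptions.

```python
def c_at_1(predictions):
    """C@1 over a non-empty list where each entry is True (correct),
    False (wrong), or None (unanswered). Unanswered questions earn
    partial credit proportional to accuracy on the answered ones."""
    n = len(predictions)
    n_r = sum(1 for p in predictions if p is True)   # correct answers
    n_u = sum(1 for p in predictions if p is None)   # abstentions
    return (n_r + n_u * n_r / n) / n

def best_threshold(scored):
    """scored: list of (confidence, is_correct) pairs.
    Try each observed confidence as a threshold; answers scoring
    below it are withheld. Returns (threshold, c_at_1)."""
    # Baseline: answer everything (threshold of negative infinity).
    best = (float("-inf"), c_at_1([correct for _, correct in scored]))
    for t in sorted({s for s, _ in scored}):
        preds = [correct if s >= t else None for s, correct in scored]
        best = max(best, (t, c_at_1(preds)), key=lambda x: x[1])
    return best
```

For example, with two confident correct answers and two low-confidence wrong ones, such as [(0.9, True), (0.8, True), (0.3, False), (0.2, False)], withholding the two low-scoring answers raises C@1 from 0.5 to 0.75.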
tmert.py (v. 1.1)
The script was developed mainly by Graham Neubig and Philip Arthur at the Nara Institute of Science and Technology. If you would like more details about the script, or would like to cite it in your research, please reference:
Inter-Sentence Features and Thresholded Minimum Error Rate Training: NAIST at CLEF 2013 QA4MRE
Philip Arthur, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura
Conference and Labs of the Evaluation Forum (CLEF). September 2013.
If you have any questions about the script, please feel free to ask Graham by contacting neubig at gmail dot com.