I need to compare two text classifiers – one human, one machine – each assigning multiple tags from an ontology. We have an initial corpus of ~700 records tagged by both classifiers. The goal is to measure the ‘value added’ by the human. However, we don’t yet have any ground truth data (i.e. agreed annotations).
Any ideas on how best to approach this problem in a commercial environment (i.e. quickly, simply, with minimum fuss), or indeed what’s possible?
I thought of measuring the absolute delta between the two tag profiles (regardless of polarity) to give a ceiling on the value added, and/or comparing the profile of tags assigned by each human coder against the centroid to give a crude measure of inter-coder agreement (and hence of the difficulty of the task). But neither really measures the ‘value added’ that I’m looking for, so I’m sure there must be better solutions.
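To make those two measures concrete, here is a minimal Python sketch. It assumes each record’s tags are held as a set of strings; the tag names and example data are purely hypothetical, and the centroid here is simply the mean per-tag count across coders.

```python
from collections import Counter

def tag_profile(records):
    """Aggregate tag counts across a corpus; each record is a set of tags."""
    profile = Counter()
    for tags in records:
        profile.update(tags)
    return profile

def absolute_delta(profile_a, profile_b):
    """Sum of per-tag absolute differences between two profiles (ignores polarity)."""
    all_tags = set(profile_a) | set(profile_b)
    return sum(abs(profile_a[t] - profile_b[t]) for t in all_tags)

def deltas_from_centroid(coder_profiles):
    """Crude inter-coder agreement: each coder's absolute delta from the mean profile."""
    all_tags = set().union(*coder_profiles)
    n = len(coder_profiles)
    centroid = {t: sum(p[t] for p in coder_profiles) / n for t in all_tags}
    return [sum(abs(p[t] - centroid[t]) for t in all_tags) for p in coder_profiles]

# Hypothetical example: three records tagged by the machine and by the human
machine_tags = [{"finance", "risk"}, {"health"}, {"finance"}]
human_tags   = [{"finance", "risk", "regulation"}, {"health"}, {"policy"}]

print(absolute_delta(tag_profile(machine_tags), tag_profile(human_tags)))
```

As the sketch makes clear, both measures operate on aggregate tag counts rather than per-record decisions, which is part of why neither captures ‘value added’ directly.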
Suggestions, anyone? Or is this as far as we can go without ground truth data?
