Earlier this week I had the privilege of attending the Text Analytics Summit Europe at the Royal Garden Hotel in Kensington. Some of you may of course recognise this hotel as the base for Justin Bieber’s recent visit to London, but sadly (or is that fortunately?) he didn’t join us. Next time, maybe…
Still, the event was highly enjoyable, and served as visible testament to the increasing maturity of the industry. When I did my PhD in natural language processing some *cough* years ago there really wasn’t a lot happening outside of academia – the best you’d get in mentioning ‘NLP’ to someone was an assumption that you’d fallen victim to some new age psychobabble. So it’s great to see the discipline finally ‘going mainstream’ and enjoying attention from a healthy cross section of society. Sadly I wasn’t able to attend the whole event, but here are a few of the standouts for me:
First up was Richard Heale of the Thoughtware Group, who came all the way from Australia to give us an entertaining overview of the field. It’s not easy summarising a field as diverse and complex as TA within half an hour, but I thought Richard did an excellent job. I also enjoyed the talk by Meta Brown, who discussed the challenges of cross-lingual text analytics, and confided in us how far ahead of the US Europe is in this respect. A self-effacing American… Whatever next!
This was followed by Alessandro Zanasi, who in his time has both founded a major text analytics company (TEMIS) and successfully negotiated his way round the labyrinthine R&D funding structures of the European Commission. I am not sure which feat deserves the greater recognition…
I also enjoyed the talk by Matt Anderson of Aviva, who showed how text analytics techniques can be applied within the context of customer experience management, and that of John McConnell of Analytical People, whose talk raised issues of how teams are structured, and the role of linguistic expertise within them. I recall the infamous quote by Fred Jelinek that “Every time I fire a linguist, the performance of our speech recognition system goes up”. But judging by the sentiment at this event, the species is not quite so endangered. On the contrary, most attendees recognised the value of linguistic expertise alongside domain expertise in developing complete TA solutions, and the prospect of automating these skills remains somewhat distant. Perhaps I should paraphrase Fred and say that every time you hire a linguist, the performance of your text analytics goes up.
Another standout was the talk by Luca Toldo of Merck, who very articulately made the case for open, transparent, quantitative benchmarking in text analytics, citing the TREC framework as an example of one such approach. I am very sympathetic to this point of view, as many of the advances in information retrieval (and of course speech recognition and a host of other language processing tasks) over the last decade or two are directly attributable to the adoption of an open and shared framework for evaluating algorithms and systems. However, persuading skeptical or under-resourced vendors to see the value in engaging in such an undertaking is another matter.
So all in all a great event, and well done to George Kiley and co. at Text Analytics News for putting it together. It was also interesting to see a significant number of independent TA consultants at the event – a sign of growing liquidity in the TA talent market place, at last!
I’ve yet to find out what the plans are for next year’s event, but more than likely I’ll be there. In the meantime, hope to see you at our next London Text Analytics MeetUp!
