Syntactic N-grams as machine learning features for natural language processing

Grigori Sidorov, Francisco Velasquez, Efstathios Stamatatos, Alexander Gelbukh, Liliana Chanona-Hernández

Research output: Contribution to journal › Article › peer review

219 Citations (Scopus)

Abstract

In this paper we introduce and discuss the concept of syntactic n-grams (sn-grams). Sn-grams differ from traditional n-grams in the way they are constructed, i.e., in which elements are considered neighbors. In the case of sn-grams, neighbors are determined by following syntactic relations in syntactic trees, rather than by taking words in their surface order; that is, sn-grams are constructed by following paths in syntactic trees. In this manner, sn-grams allow syntactic knowledge to be brought into machine learning methods, although prior parsing is necessary for their construction. Sn-grams can be applied in any natural language processing (NLP) task where traditional n-grams are used. We describe how sn-grams were applied to authorship attribution. As baselines, we used traditional n-grams of words, part-of-speech (POS) tags, and characters; three classifiers were applied: support vector machines (SVM), naive Bayes (NB), and the J48 tree classifier. Sn-grams give better results with the SVM classifier.
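The construction described in the abstract can be illustrated with a minimal sketch. The code below assumes a dependency tree encoded as head indices (with -1 for the root) and extracts word sn-grams by walking head-to-dependent paths; it is an illustration of the idea, not the authors' implementation.

```python
# Minimal sketch of syntactic n-gram (sn-gram) extraction.
# Assumption: the dependency parse is given as parallel lists of words
# and head indices (head == -1 marks the root); this encoding and the
# function name are illustrative, not from the paper.

def extract_sn_grams(words, heads, n):
    """Return all length-n word paths that follow head->dependent arcs."""
    # Build child lists from the head indices.
    children = {i: [] for i in range(len(words))}
    for i, h in enumerate(heads):
        if h >= 0:
            children[h].append(i)

    sn_grams = []

    def walk(node, path):
        path = path + [words[node]]
        if len(path) == n:
            sn_grams.append(tuple(path))
            return
        for child in children[node]:
            walk(child, path)

    # Paths may start at any node, so every subtree contributes sn-grams.
    for start in range(len(words)):
        walk(start, [])
    return sn_grams

# Toy sentence "dog bites man": "bites" is the root, "dog" and "man"
# are its dependents, so the sn-2-grams follow tree arcs, not word order.
print(extract_sn_grams(["dog", "bites", "man"], [1, -1, 1], 2))
# -> [('bites', 'dog'), ('bites', 'man')]
```

Note that the contiguous bigram ("dog", "bites") of the surface text never appears: sn-grams pair each head with its dependents, which is how syntactic structure enters the feature set.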

Original language: English
Pages (from-to): 853-860
Number of pages: 8
Journal: Expert Systems with Applications
Volume: 41
Issue: 3
DOI
Status: Published - 2014

