Semantic Representations in Text Data


Semantic Representations in Text Data is a scholarly article published in 2018 in the ''International Journal of Grid and Distributed Computing''. The main subjects of the publication include information retrieval, natural language processing, artificial intelligence, the Semantic Web, and computer science. Automatic text-mining processes and other sophisticated natural language processing systems need realistic representations of texts/documents that embed semantics efficiently. All such representations rest on the notion that every piece of data contains different explanatory factors (attributes). In this article, the authors exploit these explanatory factors to study and compare various semantic representation methods for text documents.

The article critically reviews recent trends in semi-supervised semantic representations, covering cutting-edge methods in distributed representations such as embeddings. It gives a broad, synthesized description of the various forms of text representation, presented in chronological order from bag-of-words (BoW) models to the most recent embedding-learning methods. Taken together, its findings provide valuable pointers for researchers looking to work in the field of semantic representations. In addition, the article argues for developing a model that learns universal embeddings in unsupervised or semi-supervised settings, incorporates contextual as well as word-order information, uses language-independent features, and remains feasible for large datasets.
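The contrast the article draws between sparse BoW vectors and dense embedding-based representations can be sketched in a minimal example. This is an illustration, not code from the article: the documents, vocabulary, and toy embedding values below are invented for demonstration (real embeddings would be learned, e.g. by a word2vec-style model).

```python
# Sketch: sparse bag-of-words vector vs. a dense document vector built
# by averaging toy word embeddings. All data here is illustrative.
from collections import Counter

docs = ["the cat sat on the mat", "the dog sat"]

# Bag-of-words: one dimension per vocabulary word, word counts as values.
vocab = sorted({w for d in docs for w in d.split()})

def bow(doc):
    counts = Counter(doc.split())
    return [counts[w] for w in vocab]

# Toy 2-dimensional "embeddings" (invented values; in practice learned).
toy_embeddings = {w: [float(i), float(len(w))] for i, w in enumerate(vocab)}

def avg_embedding(doc):
    words = doc.split()
    dims = zip(*(toy_embeddings[w] for w in words))
    return [sum(d) / len(words) for d in dims]

print(bow(docs[0]))            # sparse: one entry per vocabulary word
print(avg_embedding(docs[0]))  # dense: fixed length regardless of vocabulary
```

The BoW vector grows with the vocabulary and ignores word order and meaning; the averaged-embedding vector stays at a fixed dimensionality and (with learned embeddings) places semantically similar documents near each other, which is the shift the article traces.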

Related Works