We present SW2V (Senses and Words to Vectors), a new model that simultaneously learns embeddings for both words and senses: sense embeddings emerge from a joint training phase that exploits knowledge from both text corpora and semantic networks. Word and sense embeddings are therefore represented in the same vector space.
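Because words and senses share one vector space, a word vector can be compared directly against sense vectors, for example to rank the senses of an ambiguous word by their similarity to a context word. The sketch below illustrates this with hypothetical toy vectors and made-up sense identifiers (the `bank_bn:…`-style keys are assumptions for illustration; they are not SW2V's actual inventory):

```python
import numpy as np

# Toy embeddings: words and senses in one shared space.
# Sense identifiers below are hypothetical, chosen only for this example.
embeddings = {
    "bank":       np.array([0.70, 0.69, 0.10]),
    "bank_fin":   np.array([0.72, 0.65, 0.05]),  # financial-institution sense
    "bank_river": np.array([0.10, 0.20, 0.97]),  # river-bank sense
    "money":      np.array([0.68, 0.70, 0.08]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words and senses are directly comparable in the shared space:
sim_financial = cosine(embeddings["money"], embeddings["bank_fin"])
sim_river = cosine(embeddings["money"], embeddings["bank_river"])
print(sim_financial > sim_river)  # the context word "money" favors the financial sense
```

This comparability across the two kinds of vectors is what the joint training phase buys: no post-hoc mapping between separate word and sense spaces is needed.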
Data and Code
Currently available files for download:
Massimiliano Mancini, Jose Camacho-Collados, Ignacio Iacobacci and Roberto Navigli.
Embedding Words and Senses Together via Joint Knowledge-Enhanced Training.
arXiv preprint arXiv:1612.02703, 2016.