A Novel Approach to a Semantically-Aware Representation of Items (NASARI): semantic vector representations for BabelNet synsets and Wikipedia pages in several languages.
NASARI provides large coverage of concepts and named entities and has proven useful for many Natural Language Processing tasks, such as monolingual and cross-lingual Semantic Similarity and Word Sense Disambiguation, on which it has achieved state-of-the-art results on several standard benchmarks.
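Sense vectors of this kind are often compared with a rank-based measure rather than plain cosine. Below is a minimal sketch of a Weighted Overlap comparison, assuming toy sparse vectors given as `{dimension: weight}` dicts (the function and data layout are illustrative, not the released NASARI format):

```python
def weighted_overlap(v1, v2):
    """Rank-based similarity between two sparse vectors given as
    {dimension: weight} dicts. Dimensions shared by both vectors
    contribute more when they are highly ranked in each vector."""
    overlap = set(v1) & set(v2)
    if not overlap:
        return 0.0
    # rank of each dimension within its own vector (1 = highest weight)
    rank1 = {d: r for r, d in enumerate(sorted(v1, key=v1.get, reverse=True), 1)}
    rank2 = {d: r for r, d in enumerate(sorted(v2, key=v2.get, reverse=True), 1)}
    num = sum(1.0 / (rank1[d] + rank2[d]) for d in overlap)
    # normalize by the best possible score for this many shared dimensions
    den = sum(1.0 / (2 * i) for i in range(1, len(overlap) + 1))
    return num / den
```

The measure is 1.0 for identical rankings and 0.0 for vectors with no shared dimensions.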
NASARI website

SW2V is a neural model that jointly learns continuous representations for words and word senses in a shared vector space.

We present SW2V (Senses and Words to Vectors), a new model which simultaneously learns embeddings for both words and senses as an emerging feature by exploiting knowledge from both text corpora and semantic networks in a joint training phase. Word and sense embeddings are therefore represented in the same vector space.
SW2V website

SensEmbed is a knowledge-based approach for obtaining continuous representations for individual word senses.

We propose a multi-faceted approach that transforms word embeddings to the sense level and leverages knowledge from a large semantic network for effective semantic similarity measurement.
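One simple way to use sense-level vectors for word-to-word similarity is a closest-senses strategy: the similarity of two words is the maximum cosine over all pairs of their sense vectors. A toy sketch (the sense ids and vectors are illustrative):

```python
import numpy as np

def cos(a, b):
    """Cosine similarity between two dense vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def closest_sense_similarity(senses1, senses2):
    """Word similarity as the maximum cosine over all pairs of sense
    vectors; each argument maps sense id -> np.ndarray."""
    return max(cos(v1, v2) for v1 in senses1.values() for v2 in senses2.values())
```

This lets an ambiguous word match another word on its most compatible sense rather than on a single averaged vector.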
SensEmbed website

DefIE is an approach to large-scale Information Extraction (IE) based on a syntactic-semantic analysis of textual definitions.

Given a large corpus of definitions, DefIE leverages syntactic dependencies to reduce data sparsity, then disambiguates the arguments and content words of the relation strings, and finally exploits the resulting information to organize the acquired relations hierarchically. The output is a high-quality knowledge base consisting of several million automatically acquired semantic relations.
DefIE website

KB-Unify is an approach for integrating the output of different Open Information Extraction systems into a single unified and fully disambiguated knowledge repository.

The unification algorithm consists of three main steps: (1) disambiguation of relation argument pairs via a sense-based vector representation and a large unified sense inventory; (2) ranking of semantic relations according to their degree of specificity; (3) cross-resource relation alignment and merging based on the semantic similarity of domains and ranges.
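Step (1) can be pictured as nearest-neighbour linking: each relation argument is mapped to the sense in the unified inventory whose vector best matches the argument's representation. A toy sketch (the inventory, ids and vectors here are illustrative, not the actual resource):

```python
import numpy as np

def link_argument(arg_vec, inventory):
    """Return the sense id from the unified inventory whose vector is
    most cosine-similar to the argument's vector representation.
    `inventory` maps sense id -> np.ndarray."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(inventory, key=lambda sid: cos(arg_vec, inventory[sid]))
```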
KB-Unify website

Babelfy is a unified approach to multilingual Word Sense Disambiguation and Entity Linking.

Entity Linking (EL) and Word Sense Disambiguation (WSD) both address the lexical ambiguity of language. But while the two tasks are closely related, they differ in a fundamental respect: in EL the textual mention can be linked to a named entity which may or may not contain the exact mention, whereas in WSD there is a perfect match between the word form (or rather, its lemma) and a suitable word sense.
We present Babelfy, a unified graph-based approach to EL and WSD based on a loose identification of candidate meanings coupled with a densest subgraph heuristic which selects high-coherence semantic interpretations.
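Babelfy's actual heuristic operates on a graph of candidate meanings enriched with semantic signatures; as an illustration of the underlying densest-subgraph idea, here is the classic greedy approximation that repeatedly removes the lowest-degree vertex and keeps the densest subgraph seen along the way:

```python
def densest_subgraph(adj):
    """Greedy densest-subgraph approximation: repeatedly drop the
    lowest-degree vertex and return the intermediate subgraph with the
    highest edge density |E|/|V|. `adj` maps each vertex to a set of
    neighbours (undirected graph)."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    best, best_density = set(), -1.0
    while adj:
        edges = sum(len(ns) for ns in adj.values()) / 2
        density = edges / len(adj)
        if density >= best_density:
            best, best_density = set(adj), density
        v = min(adj, key=lambda u: len(adj[u]))  # lowest-degree vertex
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return best
```

On a candidate-meaning graph, the surviving high-density core corresponds to a mutually coherent set of interpretations.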
Babelfy website

Align, Disambiguate, and Walk (ADW) is a WordNet-based approach for measuring semantic similarity of arbitrary pairs of lexical items, from word senses to full texts. The approach leverages random walks on semantic networks for modeling lexical items.
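The random-walk modelling can be sketched as Personalized PageRank: walks restart at the seed senses of the lexical item, and the resulting stationary distribution serves as its semantic signature (a toy power-iteration sketch; ADW's actual networks and comparison measure differ):

```python
import numpy as np

def semantic_signature(adj, seeds, alpha=0.85, iters=50):
    """Personalized PageRank over a semantic network: random walks
    restart at the seed senses, and the stationary distribution is the
    item's signature. `adj` maps node -> list of neighbours."""
    nodes = sorted(adj)
    idx = {n: i for i, n in enumerate(nodes)}
    # column-stochastic transition matrix over the network
    M = np.zeros((len(nodes), len(nodes)))
    for n, ns in adj.items():
        for m in ns:
            M[idx[m], idx[n]] = 1.0 / len(ns)
    restart = np.zeros(len(nodes))
    for s in seeds:
        restart[idx[s]] = 1.0 / len(seeds)
    p = restart.copy()
    for _ in range(iters):
        p = alpha * (M @ p) + (1 - alpha) * restart
    return dict(zip(nodes, p))
```

Two lexical items are then compared by comparing their signatures, e.g. with cosine or a rank-based overlap measure.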

ADW website

Structural Semantic Interconnections (SSI) is a knowledge-based algorithm for Word Sense Disambiguation based on Structural Pattern Recognition.
SSI requires no training and has been extensively evaluated on ontology learning, gloss disambiguation and open-text word sense disambiguation.
Given a set of words in context and a lexical knowledge base, obtained by integrating a number of dictionaries, annotated corpora and collocation resources, SSI outputs a semantic graph including the chosen senses and the semantic interconnections between them.
SSI online

TermExtractor is a software package for the extraction of relevant terms consensually referred to in a specific domain. The application takes as input a corpus of domain documents in any format (plain text files, Word documents, PDFs, etc.), parses the documents, and extracts a list of syntactically plausible terms.
Two entropy-based measures, called Domain Relevance and Domain Consensus, are then used to select only those terms which are relevant to the domain of interest or consensually referred to throughout the documents. This is achieved with the aid of a set of contrastive corpora from different domains.
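A sketch of the two measures under simple assumptions: Domain Relevance as the term's probability in the target corpus relative to its maximum probability across target and contrastive corpora, and Domain Consensus as the entropy of the term's distribution across the target documents (the deployed formulas may differ in detail):

```python
import math

def domain_relevance(term, target_corpus, contrastive_corpora):
    """Probability of the term in the target corpus, normalized by its
    highest probability across all corpora. Each corpus is a dict
    mapping term -> frequency."""
    def p(corpus):
        total = sum(corpus.values())
        return corpus.get(term, 0) / total if total else 0.0
    probs = [p(target_corpus)] + [p(c) for c in contrastive_corpora]
    return probs[0] / max(probs) if max(probs) > 0 else 0.0

def domain_consensus(doc_freqs):
    """Entropy of the term's frequency distribution across the target
    documents: high when the term is used evenly across documents,
    zero when it appears in a single document."""
    total = sum(doc_freqs)
    probs = [f / total for f in doc_freqs if f > 0]
    return -sum(p * math.log2(p) for p in probs)
```

A term is kept when it is both far more frequent in the domain corpus than in the contrastive ones and spread consistently across the domain documents.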
TermExtractor online

Glossextractor is a web tool for the automated acquisition of a glossary for an input terminology. Starting from domain terms, the software extracts relevant glosses from a number of resources (dictionary and glossary definitions, definitions within texts, etc.).
Glossextractor online

TAV (TAxonomy Validator) is a visual tool for the validation of taxonomies. The tool allows the visualization and browsing of OWL ontologies. Authorized users can perform concept editing (creation, deletion, hypernymy change, addition of conceptual relations, etc.) and ontology download.
TAV online

Valido is a visual tool for supporting the validator in the difficult task of assessing the quality and suitability of sense annotations.
Given a set of words in context (e.g. a sentence from a corpus), Valido applies the SSI algorithm and shows the validator the resulting semantic interconnections supporting a choice of word senses.
Valido online