
ELMo (Embeddings from Language Models)

ELMo (Embeddings from Language Models) is a deep contextualized word representation technique developed by researchers at the Allen Institute for Artificial Intelligence in 2018. ELMo uses a deep bidirectional language model (biLM) to generate word embeddings that are sensitive to the context in which each word is used.

Traditional word embeddings such as word2vec and GloVe assign a single, static vector to each word in the vocabulary, independent of the context in which the word appears, so "bank" receives the same vector in "river bank" and "bank loan". In contrast, ELMo generates a dynamic representation for each word based on its context: a deep bidirectional language model reads both the preceding and following words in a sentence, and the word's embedding is computed from the model's internal states.
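Concretely, ELMo forms a word's embedding as a task-specific weighted sum of the biLM's layer activations: ELMo_k = γ · Σ_j s_j · h_{k,j}, where the s_j are softmax-normalized learned scalars and γ is a learned scale. The sketch below illustrates only this combination step with NumPy and random activations; it is not the AllenNLP API, and the shapes and function name are illustrative assumptions.

```python
import numpy as np

def elmo_combine(layer_states, w, gamma=1.0):
    """Combine biLM layer activations into one embedding per token.

    layer_states: (num_layers, seq_len, dim) array of hidden states,
                  standing in for a real biLM's per-layer outputs.
    w:            (num_layers,) unnormalized layer weights (learned in
                  practice; fixed here for illustration).
    gamma:        learned scalar that rescales the combined embedding.
    """
    s = np.exp(w - w.max())
    s = s / s.sum()  # softmax over layers -> weights s_j
    # Weighted sum over the layer axis: gamma * sum_j s_j * h[j]
    return gamma * np.tensordot(s, layer_states, axes=1)  # (seq_len, dim)

# Toy example: 3 biLM layers, 4 tokens, 5-dimensional states.
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4, 5))
emb = elmo_combine(h, w=np.zeros(3))  # equal weights -> mean over layers
print(emb.shape)  # (4, 5): one context-dependent vector per token
```

With `w` all zeros the softmax weights are uniform, so the result is simply the average of the three layers; during downstream training the weights shift toward whichever layers (syntactic lower layers vs. semantic upper layers) help the task most.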


by Devansh Shukla

"AI Tamil Nadu formely known as AI Coimbatore is a close-Knit community initiative by Navaneeth with a goal to offer world-class AI education to anyone in Tamilnadu for free."