word embeddings
Projects with this topic
Evaluation of various deep learning models for sentiment analysis. The dataset contains 194,439 Amazon reviews for cell phones and accessories, taken from https://jmcauley.ucsd.edu/data/amazon/. Use the "reviewText" and "overall" fields from this file. The goal is to predict the rating from the review text, modeled as a multi-class classification problem (a minimal training sketch follows the list below).
• Split: take the first 70% of the dataset for training, the next 10% for validation/development, and the remaining 20% for testing.
• Recurrent neural networks:
• RNNs: train a unidirectional RNN with L layers. Vary the number of layers (1, 2, 3, 4) and the layer size (20, 50, 100, 200). Report accuracy on the test set.
• LSTMs: train a unidirectional LSTM with L layers. Vary the number of layers (1, 2, 3, 4) and the layer size (20, 50, 100, 200). Report accuracy on the test set.
• BiLSTMs: train a bidirectional LSTM with L layers. Vary the number of layers (1, 2, 3, 4) and the layer size (20, 50, 100, 200). Report accuracy on the test set.
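A minimal sketch of the kind of setup this evaluation implies, assuming PyTorch; the file name reviews.json, the whitespace tokenizer, and all hyperparameters are illustrative assumptions rather than part of the project.

```python
# Sketch only: unidirectional LSTM classifier over review text (set bidirectional=True for the
# BiLSTM runs, or swap nn.LSTM for nn.RNN for the plain-RNN runs). Paths and settings are assumed.
import json
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import Dataset, DataLoader

class ReviewDataset(Dataset):
    def __init__(self, texts, ratings, vocab):
        self.texts, self.ratings, self.vocab = texts, ratings, vocab
    def __len__(self):
        return len(self.texts)
    def __getitem__(self, i):
        ids = [self.vocab.get(w, 1) for w in self.texts[i].lower().split()]  # 1 = <unk>
        return torch.tensor(ids), torch.tensor(self.ratings[i] - 1)          # ratings 1..5 -> classes 0..4

def collate(batch):
    seqs, labels = zip(*batch)
    return pad_sequence(seqs, batch_first=True), torch.stack(labels)         # pad with index 0

class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=100, layers=2, classes=5, bidirectional=False):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.rnn = nn.LSTM(emb_dim, hidden, num_layers=layers,
                           batch_first=True, bidirectional=bidirectional)
        self.fc = nn.Linear(hidden * (2 if bidirectional else 1), classes)
    def forward(self, x):
        out, _ = self.rnn(self.emb(x))
        return self.fc(out[:, -1, :])  # last time step (padding/masking omitted for brevity)

rows = [json.loads(line) for line in open("reviews.json")]   # assumed JSON-lines dump of the dataset
texts = [r["reviewText"] for r in rows]
ratings = [int(r["overall"]) for r in rows]
n = len(rows)
tr, va = int(0.7 * n), int(0.8 * n)                          # 70% train / 10% val / 20% test split points
vocab = {"<pad>": 0, "<unk>": 1}
for t in texts[:tr]:
    for w in t.lower().split():
        vocab.setdefault(w, len(vocab))

train_dl = DataLoader(ReviewDataset(texts[:tr], ratings[:tr], vocab),
                      batch_size=64, shuffle=True, collate_fn=collate)
model = LSTMClassifier(len(vocab), hidden=100, layers=2)     # vary layers (1-4) and hidden (20/50/100/200)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for x, y in train_dl:                                        # one pass shown; loop over epochs in practice
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```

The same loop, evaluated on the held-out 20% (texts[va:]), would produce the test accuracies the grid above asks for.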
Word2vec, FastText, and GloVe embedding accuracy comparison using analogy, vector-oriented reasoning, synonym, and conjugation datasets from multiple sources.
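As an illustration of how such a comparison is typically scored, here is a minimal sketch using gensim; the file name embeddings.vec is a placeholder, and GloVe vectors would first need converting to word2vec text format before loading this way.

```python
# Sketch only: scoring a pretrained embedding file on analogy / vector-reasoning tasks with gensim.
# "embeddings.vec" is a placeholder; the analogy file is the Google questions-words.txt bundled with gensim.
from gensim.models import KeyedVectors
from gensim.test.utils import datapath

vectors = KeyedVectors.load_word2vec_format("embeddings.vec", binary=False)

# Vector-oriented reasoning: king - man + woman should rank queen highly.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# Analogy accuracy, reported overall and per section (capital-common-countries, gram7-past-tense, ...).
score, sections = vectors.evaluate_word_analogies(datapath("questions-words.txt"))
print(f"overall analogy accuracy: {score:.3f}")
```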
A Data Semantics project.
This is the repository for my final-year project, titled "Deep Learning of Word Embeddings for Semantic Search".