Wolfram Research

Task Type: Feature Extraction (27 items)

BERT Trained on BookCorpus and Wikipedia Data

Represent text as a sequence of vectors

BioBERT Trained on PubMed and PMC Data

Represent text as a sequence of vectors

BPEmb Subword Embeddings Trained on Wikipedia Data

Represent words or subwords as vectors

Clinical Concept Embeddings Trained on Health Insurance Claims, Clinical Narratives from Stanford and PubMed Journal Articles

Represent a clinical concept as a vector

ConceptNet Numberbatch Word Vectors V17.06

Represent words as vectors

ConceptNet Numberbatch Word Vectors V17.06 (Raw Model)

Represent words as vectors

DistilBERT Trained on BookCorpus and English Wikipedia Data

Represent text as a sequence of vectors

ELMo Contextual Word Representations Trained on 1B Word Benchmark

Represent words as contextual word-embedding vectors

GloVe 100-Dimensional Word Vectors Trained on Tweets

Represent words as vectors

GloVe 100-Dimensional Word Vectors Trained on Wikipedia and Gigaword 5 Data

Represent words as vectors

GloVe 200-Dimensional Word Vectors Trained on Tweets

Represent words as vectors

GloVe 25-Dimensional Word Vectors Trained on Tweets

Represent words as vectors

GloVe 300-Dimensional Word Vectors Trained on Common Crawl 42B

Represent words as vectors

GloVe 300-Dimensional Word Vectors Trained on Common Crawl 840B

Represent words as vectors

GloVe 300-Dimensional Word Vectors Trained on Wikipedia and Gigaword 5 Data

Represent words as vectors

GloVe 50-Dimensional Word Vectors Trained on Tweets

Represent words as vectors

GloVe 50-Dimensional Word Vectors Trained on Wikipedia and Gigaword 5 Data

Represent words as vectors

GPT2 Transformer Trained on WebText Data

Generate text in English and represent text as a sequence of vectors

GPT Transformer Trained on BookCorpus Data

Generate text in English and represent text as a sequence of vectors

OpenFace Face Recognition Net Trained on CASIA-WebFace and FaceScrub Data

Represent a facial image as a vector

Pose-Aware Face Recognition in the Wild Nets Trained on CASIA WebFace Data

Represent a facial image as a vector

Pre-trained Distilled BERT Trained on BookCorpus and English Wikipedia Data

Represent text as a sequence of vectors

ResNet-101 for 3D Morphable Model Regression Trained on CASIA WebFace Data

Represent a facial image as a vector

ResNet-101 Trained on Augmented CASIA-WebFace Data

Represent a facial image as a vector

RoBERTa Trained on BookCorpus, English Wikipedia, CC-News, OpenWebText and Stories Datasets

Represent text as a sequence of vectors

SciBERT Trained on Semantic Scholar Data

Represent text as a sequence of vectors

VGGish Feature Extractor Trained on YouTube Data

Represent sounds as a sequence of vectors
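Each entry above can be loaded in the Wolfram Language by its listed title via NetModel and applied directly as a feature extractor. A minimal sketch using one of the GloVe entries from this list (the output shape assumes the net's standard token encoder, which produces one 100-dimensional vector per word):

```wl
(* Load a word-embedding net from this list by its exact title *)
net = NetModel[
  "GloVe 100-Dimensional Word Vectors Trained on Wikipedia and Gigaword 5 Data"];

(* Applying the net to a string yields a sequence of vectors, one per token *)
vectors = net["king queen woman man"];
Dimensions[vectors]  (* one 100-dimensional vector per word *)
```

The same pattern applies to the other modalities listed here: the text nets (BERT, GPT-2, RoBERTa, etc.) map a string to a sequence of vectors, the face nets map an image to a single vector, and VGGish maps audio to a sequence of vectors.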