- Do As I Can, Not As I Say (SayCan): Grounding Language in Robotic Affordances
- MSR-VTT: A Large Video Description Dataset for Bridging Video and Language
- Socratic Models project page: https://socraticmodels.github.io/
- CLIP: Connecting Text and Images: “We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the ‘zero-shot’ capabilities of GPT-2 and GPT-3.” (A zero-shot classification sketch follows this list.)
- GPT-3: Language Models are Few-Shot Learners
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
- Wav2CLIP: Learning Robust Audio Representations From CLIP
- Socratic Models paper (arXiv:2204.00598), p. 12: https://arxiv.org/pdf/2204.00598.pdf#page=12&org=google
- Socratic Models paper (arXiv:2204.00598), p. 13: https://arxiv.org/pdf/2204.00598.pdf#page=13&org=google
- Socratic Models paper (arXiv:2204.00598), p. 5: https://arxiv.org/pdf/2204.00598.pdf#page=5&org=google
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
- Elicit: The AI Research Assistant
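The CLIP entry above describes zero-shot classification: score an image against the plain-text names of the candidate categories, with no task-specific training. Below is a minimal sketch of that idea, assuming the Hugging Face transformers implementation and the public openai/clip-vit-base-patch32 checkpoint; the image file photo.jpg and the label set are placeholders, and none of this code comes from the linked papers.

```python
# Zero-shot image classification with CLIP: a sketch, assuming
# `pip install torch transformers pillow` and a local image "photo.jpg".
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# "Providing the names of the visual categories to be recognized":
# the candidate labels are just text prompts.
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
image = Image.open("photo.jpg")

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
probs = logits.softmax(dim=-1)[0]

for label, p in zip(labels, probs):
    print(f"{label}: {p.item():.3f}")
```

Swapping in a different label list repurposes the same model for a new benchmark, which is the ‘zero-shot’ behavior the blurb compares to GPT-2 and GPT-3.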