- PIGLeT: Language Grounding Through Neuro-Symbolic Interaction in a 3D World
- Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language
- Housekeep: Tidying Virtual Households using Commonsense Reasoning
- LID: Pre-Trained Language Models for Interactive Decision-Making
- Do As I Can, Not As I Say (SayCan): Grounding Language in Robotic Affordances
- Inner Monologue: Embodied Reasoning through Planning with Language Models
- ViNG: Learning Open-World Navigation with Visual Goals
- CLIP: Connecting Text and Images. A neural network that efficiently learns visual concepts from natural language supervision; it can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the ‘zero-shot’ capabilities of GPT-2 and GPT-3 (see the sketch after this list).
- GPT-3: Language Models are Few-Shot Learners
- LM-Nav project page: https://sites.google.com/view/lmnav
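
The CLIP entry above describes its zero-shot interface: a classification task is defined purely by naming the categories to recognize. As a rough illustration, here is a minimal sketch using the openai/CLIP package (`pip install git+https://github.com/openai/CLIP.git`); the image path and category names are hypothetical placeholders, not from any of the papers listed.

```python
# Minimal sketch of CLIP zero-shot classification.
# Assumptions: the openai/CLIP package is installed, and "landmark.jpg"
# is an illustrative image path chosen for this example.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# The "benchmark" is defined simply by naming the visual categories.
categories = ["a stop sign", "a fire hydrant", "a glass building", "a picnic table"]
text = clip.tokenize([f"a photo of {c}" for c in categories]).to(device)

image = preprocess(Image.open("landmark.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

# Cosine similarity between the image embedding and each text embedding,
# scaled and softmaxed into per-category probabilities.
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

for category, p in zip(categories, probs[0].tolist()):
    print(f"{category}: {p:.3f}")
```

The "a photo of {c}" template follows the prompting convention from the CLIP paper; swapping in different category strings re-targets the classifier with no retraining.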