“Foundational Challenges in Assuring Alignment and Safety of Large Language Models”, Usman Anwar, Abulhair Saparov, Javier Rando, Daniel Paleka, Miles Turpin, Peter Hase, Ekdeep Singh Lubana, Erik Jenner, Stephen Casper, Oliver Sourbut, Benjamin L. Edelman, Zhaowei Zhang, Mario Günther, Anton Korinek, Jose Hernandez-Orallo, Lewis Hammond, Eric Bigelow, Alexander Pan, Lauro Langosco, Tomasz Korbak, Heidi Zhang, Ruiqi Zhong, Seán Ó hÉigeartaigh, Gabriel Recchia, Giulio Corsi, Alan Chan, Markus Anderljung, Lilian Edwards, Yoshua Bengio, Danqi Chen, Samuel Albanie, Tegan Maharaj, Jakob Foerster, Florian Tramer, He He, Atoosa Kasirzadeh, Yejin Choi, David Krueger (2024-04-15):

This work identifies 18 foundational challenges in assuring the alignment and safety of large language models (LLMs). These challenges are organized into three categories: scientific understanding of LLMs, development and deployment methods, and sociotechnical challenges.

Based on these challenges, we pose 200+ concrete research questions.