Links
- “Sam Altman Confronted a Member over a Research Paper That Discussed the Company, While Directors Disagreed for Months about Who Should Fill Board Vacancies”, Metz et al 2023
- “How OpenAI’s Bizarre Structure Gave 4 People the Power to Fire Sam Altman”, Dave 2023
- “Summon a Demon and Bind It: A Grounded Theory of LLM Red Teaming in the Wild”, Inie et al 2023
- “Scalable and Transferable Black-Box Jailbreaks for Language Models via Persona Modulation”, Shah et al 2023
- “Specific versus General Principles for Constitutional AI”, Kundu et al 2023
- “Beyond Memorization: Violating Privacy Via Inference With Large Language Models”, Staab et al 2023
- “Lost in the Middle: How Language Models Use Long Contexts”, Liu et al 2023
- “Language Models Don’t Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting”, Turpin et al 2023
- “A Radical Plan to Make AI Good, Not Evil”, Knight 2023
- “Constitutional AI: Harmlessness from AI Feedback”, Bai et al 2022
- “The Perception of Rhythm in Language”, Cutler 1994
Miscellaneous
- https://marginalrevolution.com/marginalrevolution/2023/01/ai-passes-law-and-economics-exam.html
- https://nostalgebraist.tumblr.com/post/728556535745232896/claude-is-insufferable
- https://thezvi.wordpress.com/2023/07/25/anthropic-observations/
- https://twitter.com/IntuitMachine/status/1678870325600108545
- https://twitter.com/LouisKnightWebb/status/1724510794514157668
- https://twitter.com/OwainEvans_UK/status/1636580251676585986
- https://twitter.com/OwainEvans_UK/status/1636581594642403328
- https://twitter.com/OwainEvans_UK/status/1636605571637055488
- https://twitter.com/OwainEvans_UK/status/1636762386085605376
- https://www.lesswrong.com/posts/R3eDrDoX8LisKgGZe/sum-threshold-attacks?commentId=yqCkCQLkkaCnZCukJ
- https://www.vox.com/future-perfect/23794855/anthropic-ai-openai-claude-2
- https://xmarquez.github.io/GPTDemocracyIndex/GPTDemocracyIndex.html
Link Bibliography
- https://www.nytimes.com/2023/11/21/technology/openai-altman-board-fight.html : “Sam Altman Confronted a Member over a Research Paper That Discussed the Company, While Directors Disagreed for Months about Who Should Fill Board Vacancies”, Cade Metz, Tripp Mickle, Mike Isaac
- https://www.wired.com/story/openai-bizarre-structure-4-people-the-power-to-fire-sam-altman/ : “How OpenAI’s Bizarre Structure Gave 4 People the Power to Fire Sam Altman”, Paresh Dave
- https://arxiv.org/abs/2305.04388 : “Language Models Don’t Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting”, Miles Turpin, Julian Michael, Ethan Perez, Samuel R. Bowman
- https://www.wired.com/story/anthropic-ai-chatbots-ethics/ : “A Radical Plan to Make AI Good, Not Evil”, Will Knight