- See Also
- Links
- “Universal Self-Consistency for Large Language Model Generation”, Chen et al 2023
- “PEARL: Personalizing Large Language Model Writing Assistants With Generation-Calibrated Retrievers”, Mysore et al 2023
- “Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations”, Hong et al 2023
- “Can GPT Models Be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on Mock CFA Exams”, Callanan et al 2023
- “Beyond Memorization: Violating Privacy Via Inference With Large Language Models”, Staab et al 2023
- “GeoLLM: Extracting Geospatial Knowledge from Large Language Models”, Manvi et al 2023
- “Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models”, Zhou et al 2023
- “Using Large Language Models for Qualitative Analysis Can Introduce Serious Bias”, Ashwin et al 2023
- “Embers of Autoregression: Understanding Large Language Models Through the Problem They Are Trained to Solve”, McCoy et al 2023
- “The Cambridge Law Corpus: A Corpus for Legal AI Research”, Östling et al 2023
- “A Boy Saw 17 Doctors over 3 Years for Chronic Pain. ChatGPT Found the Diagnosis”, Holohan 2023
- “Are Large Language Models a Threat to Digital Public Goods? Evidence from Activity on Stack Overflow”, Rio-Chanona et al 2023
- “Distilling Large Language Models for Biomedical Knowledge Extraction: A Case Study on Adverse Drug Events”, Gu et al 2023
- “Explaining Competitive-Level Programming Solutions Using LLMs”, Li et al 2023
- “Lost in the Middle: How Language Models Use Long Contexts”, Liu et al 2023
- “Evaluating Superhuman Models With Consistency Checks”, Fluri et al 2023
- “Artificial Artificial Artificial Intelligence: Crowd Workers Widely Use Large Language Models for Text Production Tasks”, Veselovsky et al 2023
- “Don’t Want Students to Rely on ChatGPT? Have Them Use It: It’s Easy to Forget How Little Students and Educators Understand Generative AI’s Flaws. Once They Actually Try It Out, They’ll See That It Can’t Replace Them”, Howell 2023
- “Iterative Translation Refinement With Large Language Models”, Chen et al 2023
- “Can Large Language Models Democratize Access to Dual-use Biotechnology?”, Soice et al 2023
- “Do GPTs Produce Less Literal Translations?”, Raunak et al 2023
- “The False Promise of Imitating Proprietary LLMs”, Gudibande et al 2023
- “Learning to Generate Novel Scientific Directions With Contextualized Literature-based Discovery”, Wang et al 2023
- “How Language Model Hallucinations Can Snowball”, Zhang et al 2023
- “Generative AI at Work”, Brynjolfsson et al 2023
- “Performance of ChatGPT on Free-response, Clinical Reasoning Exams”, Strong et al 2023
- “How Well Do Large Language Models Perform in Arithmetic Tasks?”, Yuan et al 2023
- “Limitations of Language Models in Arithmetic and Symbolic Induction”, Qian et al 2022
- “Can GPT-3 Write an Academic Paper on Itself, With Minimal Human Input?”, GPT-3 et al 2022 (page 2)
- Sort By Magic
- Miscellaneous
- Link Bibliography
See Also
Links
“Universal Self-Consistency for Large Language Model Generation”, Chen et al 2023
“PEARL: Personalizing Large Language Model Writing Assistants With Generation-Calibrated Retrievers”, Mysore et al 2023
“Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations”, Hong et al 2023
“Can GPT Models Be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on Mock CFA Exams”, Callanan et al 2023
“Beyond Memorization: Violating Privacy Via Inference With Large Language Models”, Staab et al 2023
“GeoLLM: Extracting Geospatial Knowledge from Large Language Models”, Manvi et al 2023
“Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models”, Zhou et al 2023
“Using Large Language Models for Qualitative Analysis Can Introduce Serious Bias”, Ashwin et al 2023
“Embers of Autoregression: Understanding Large Language Models Through the Problem They Are Trained to Solve”, McCoy et al 2023
“The Cambridge Law Corpus: A Corpus for Legal AI Research”, Östling et al 2023
“A Boy Saw 17 Doctors over 3 Years for Chronic Pain. ChatGPT Found the Diagnosis”, Holohan 2023
“Are Large Language Models a Threat to Digital Public Goods? Evidence from Activity on Stack Overflow”, Rio-Chanona et al 2023
“Distilling Large Language Models for Biomedical Knowledge Extraction: A Case Study on Adverse Drug Events”, Gu et al 2023
“Explaining Competitive-Level Programming Solutions Using LLMs”, Li et al 2023
“Lost in the Middle: How Language Models Use Long Contexts”, Liu et al 2023
“Evaluating Superhuman Models With Consistency Checks”, Fluri et al 2023
“Artificial Artificial Artificial Intelligence: Crowd Workers Widely Use Large Language Models for Text Production Tasks”, Veselovsky et al 2023
“Don’t Want Students to Rely on ChatGPT? Have Them Use It: It’s Easy to Forget How Little Students and Educators Understand Generative AI’s Flaws. Once They Actually Try It Out, They’ll See That It Can’t Replace Them”, Howell 2023
“Iterative Translation Refinement With Large Language Models”, Chen et al 2023
“Can Large Language Models Democratize Access to Dual-use Biotechnology?”, Soice et al 2023
“Do GPTs Produce Less Literal Translations?”, Raunak et al 2023
“The False Promise of Imitating Proprietary LLMs”, Gudibande et al 2023
“Learning to Generate Novel Scientific Directions With Contextualized Literature-based Discovery”, Wang et al 2023
“How Language Model Hallucinations Can Snowball”, Zhang et al 2023
“Generative AI at Work”, Brynjolfsson et al 2023
“Performance of ChatGPT on Free-response, Clinical Reasoning Exams”, Strong et al 2023
“How Well Do Large Language Models Perform in Arithmetic Tasks?”, Yuan et al 2023
“Limitations of Language Models in Arithmetic and Symbolic Induction”, Qian et al 2022
“Can GPT-3 Write an Academic Paper on Itself, With Minimal Human Input?”, GPT-3 et al 2022 (page 2)
Sort By Magic
Annotations sorted by machine learning into inferred 'tags'. This provides an alternative way to browse: instead of by date order, one can browse in topic order. The 'sorted' list has been automatically clustered into multiple sections & auto-labeled for easier browsing.
Beginning with the newest annotation, it uses the embedding of each annotation to attempt to create a list of nearest-neighbor annotations, creating a progression of topics. For more details, see the link.
languageapplications
llm-applications
languagemodels
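The "progression of topics" described above can be sketched as a greedy nearest-neighbor walk over annotation embeddings. This is a minimal illustration, not the site's actual clustering code: it assumes embeddings are given as row vectors, starts from the newest annotation (index 0), and repeatedly steps to the most cosine-similar unvisited annotation.

```python
import numpy as np

def sort_by_similarity(embeddings):
    """Greedy nearest-neighbor ordering of annotations (hypothetical sketch).

    Starting from the newest annotation (index 0), repeatedly move to the
    closest unvisited annotation by cosine similarity, yielding an ordering
    that drifts smoothly from topic to topic rather than jumping by date.
    """
    n = len(embeddings)
    order = [0]
    remaining = set(range(1, n))
    while remaining:
        cur = embeddings[order[-1]]
        # pick the unvisited annotation most similar to the current one
        best = max(remaining, key=lambda i: float(
            np.dot(cur, embeddings[i]) /
            (np.linalg.norm(cur) * np.linalg.norm(embeddings[i]) + 1e-9)))
        order.append(best)
        remaining.remove(best)
    return order
```

Boundaries between the auto-labeled sections would then fall wherever consecutive similarity drops sharply; the labels themselves come from a separate auto-labeling step.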
Miscellaneous
- https://automated.beehiiv.com/p/aiimmunity-challenge-lessons-clinical-research-exam
- https://chat.openai.com/share/25124525-0bad-4c13-ae5a-ae4beac60360
- https://davidabell.substack.com/p/playing-around-with-machine-translation
- https://dropbox.tech/machine-learning/prompt-injection-with-control-characters-openai-chatgpt-llm
- https://jxnl.github.io/instructor/blog/2023/11/05/chain-of-density/
- https://openai.com/blog/function-calling-and-other-api-updates#function-calling
- https://restofworld.org/2023/ai-revolution-outsourced-workers/
- https://twitter.com/kenshinsamurai9/status/1662510532585291779
- https://www.ft.com/content/9aeb482d-f781-45c0-896f-38fdcc912139
- https://www.integrity-research.com/ai-fails-insider-trading-test/
- https://www.nytimes.com/2023/06/08/business/khan-ai-gpt-tutoring-bot.html
- https://www.reddit.com/r/ChatGPT/comments/15et6f2/well_i_got_what_i_asked_for/
- https://www.vice.com/en/article/5d93p3/what-happens-when-you-ask-ai-to-control-your-life
Link Bibliography
- https://arxiv.org/abs/2310.08678: “Can GPT Models Be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on Mock CFA Exams”, Callanan et al 2023
- https://arxiv.org/abs/2310.06213: “GeoLLM: Extracting Geospatial Knowledge from Large Language Models”, Rohin Manvi, Samar Khanna, Gengchen Mai, Marshall Burke, David Lobell, Stefano Ermon
- https://arxiv.org/abs/2310.04406: “Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models”, Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, Yu-Xiong Wang
- https://arxiv.org/abs/2309.12269: “The Cambridge Law Corpus: A Corpus for Legal AI Research”, Östling et al 2023
- https://arxiv.org/abs/2307.06439#microsoft: “Distilling Large Language Models for Biomedical Knowledge Extraction: A Case Study on Adverse Drug Events”, Gu et al 2023
- https://arxiv.org/abs/2305.15717: “The False Promise of Imitating Proprietary LLMs”, Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, Dawn Song
- https://arxiv.org/abs/2305.13534: “How Language Model Hallucinations Can Snowball”, Muru Zhang, Ofir Press, William Merrill, Alisa Liu, Noah A. Smith
- https://www.medrxiv.org/content/10.1101/2023.03.24.23287731.full: “Performance of ChatGPT on Free-response, Clinical Reasoning Exams”, Strong et al 2023
- https://arxiv.org/abs/2304.02015#alibaba: “How Well Do Large Language Models Perform in Arithmetic Tasks?”, Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang
- 2022-gpt3.pdf#page=2: “Can GPT-3 Write an Academic Paper on Itself, With Minimal Human Input?”, GPT-3, Almira Osmanovic-Thunström, Steinn Steingrimsson