- See Also
- Links
- “Introducing Microsoft 365 Copilot—your Copilot for Work”, Spataro 2023
- “ProofNet: Autoformalizing and Formally Proving Undergraduate-Level Mathematics”, Azerbayev Et Al 2023
- “CodeBERTScore: Evaluating Code Generation With Pretrained Models of Code”, Et Al 2023
- “Google Is Asking Employees to Test Potential ChatGPT Competitors, including a Chatbot Called ‘Apprentice Bard’”, Elias 2023
- “An Analysis of the Automatic Bug Fixing Performance of ChatGPT”, Sobania Et Al 2023
- “Connor Leahy on Aliens, Ethics, Economics, Memetics, and Education § GPT-4”, 2023
- “General Availability of Azure OpenAI Service Expands Access to Large, Advanced AI Models With Added Enterprise Benefits”, Boyd 2023
- “SantaCoder: Don’t Reach for the Stars!”, Et Al 2023
- “TrojanPuzzle: Covertly Poisoning Code-Suggestion Models”, Et Al 2023
- “ERNIE-Code: Beyond English-Centric Cross-lingual Pretraining for Programming Languages”, Et Al 2022
- “The Stack: 3 TB of Permissively Licensed Source Code”, Et Al 2022
- “PAL: Program-aided Language Models”, Et Al 2022
- “Programming Possibility: Kevin Scott on AI’s Impact on Cognitive Work”, Hoffman & Scott 2022
- “Challenging BIG-Bench Tasks (BBH) and Whether Chain-of-Thought Can Solve Them”, Et Al 2022
- “Vote-K: Selective Annotation Makes Language Models Better Few-Shot Learners”, Et Al 2022
- “Repair Is Nearly Generation: Multilingual Program Repair With LLMs”, Et Al 2022
- “Language Models Can Teach Themselves to Program Better”, Et Al 2022
- “Efficient Training of Language Models to Fill in the Middle”, Et Al 2022
- “PanGu-Coder: Program Synthesis With Function-Level Language Modeling”, Et Al 2022
- “CodeT: Code Generation With Generated Tests”, Et Al 2022
- “Can Large Language Models Reason about Medical Questions?”, Liévin Et Al 2022
- “Craft an Iron Sword: Dynamically Generating Interactive Game Characters by Prompting Large Language Models Tuned on Code”, Et Al 2022
- “Code Translation With Compiler Representations”, Et Al 2022
- “Repository-Level Prompt Generation for Large Language Models of Code”, Et Al 2022
- “Productivity Assessment of Neural Code Completion”, Et Al 2022
- “End-to-end Symbolic Regression With Transformers”, Et Al 2022
- “PaLM: Scaling Language Modeling With Pathways”, Chowdhery Et Al 2022
- “A Conversational Paradigm for Program Synthesis”, Et Al 2022
- “Evaluating the Text-to-SQL Capabilities of Large Language Models”, Et Al 2022
- “Expectation vs. Experience: Evaluating the Usability of Code Generation Tools Powered by Large Language Models”, Vaithilingam Et Al 2022
- “PolyCoder: A Systematic Evaluation of Large Language Models of Code”, Et Al 2022
- “Pop Quiz! Can a Large Language Model Help With Reverse Engineering?”, Et Al 2022
- “Text and Code Embeddings by Contrastive Pre-Training”, Et Al 2022
- “Neural Language Models Are Effective Plagiarists”, 2022
- “Deep Symbolic Regression for Recurrent Sequences”, d’Ascoli Et Al 2022
- “Discovering the Syntax and Strategies of Natural Language Programming With Generative Language Models”, Et Al 2022
- “A Neural Network Solves and Generates Mathematics Problems by Program Synthesis: Calculus, Differential Equations, Linear Algebra, and More”, Et Al 2021
- “WebGPT: Improving the Factual Accuracy of Language Models through Web Browsing”, Hilton Et Al 2021
- “WebGPT: Browser-assisted Question-answering With Human Feedback”, Nakano Et Al 2021
- “Few-Shot Semantic Parsing With Language Models Trained On Code”, 2021
- “Scaling Language Models: Methods, Analysis & Insights from Training Gopher”, Rae Et Al 2021
- “Jigsaw: Large Language Models Meet Program Synthesis”, Et Al 2021
- “Can Pre-trained Language Models Be Used to Resolve Textual and Semantic Merge Conflicts?”, Zhang Et Al 2021
- “Solving Probability and Statistics Problems by Program Synthesis”, Tang Et Al 2021
- “Solving Linear Algebra by Program Synthesis”, 2021
- “Automatic Program Repair With OpenAI’s Codex: Evaluating QuixBugs”, 2021
- “GenLine and GenForm: Two Tools for Interacting With Generative Language Models in a Code Editor”, Jiang Et Al 2021
- “An Empirical Cybersecurity Evaluation of GitHub Copilot’s Code Contributions”, Et Al 2021
- “Learning C to X86 Translation: An Experiment in Neural Compilation”, Armengol-Estapé & O’Boyle 2021
- “Program Synthesis With Large Language Models”, Austin Et Al 2021
- “TAPEX: Table Pre-training via Learning a Neural SQL Executor”, Et Al 2021
- “Evaluating Large Language Models Trained on Code”, Chen Et Al 2021
- “Research Recitation: A First Look at Rote Learning in GitHub Copilot Suggestions”, 2021
- “Microsoft and OpenAI Have a New A.I. Tool That Will Give Coding Suggestions to Software Developers”, 2021
- “SymbolicGPT: A Generative Transformer Model for Symbolic Regression”, Et Al 2021
- “Measuring Coding Challenge Competence With APPS”, Hendrycks Et Al 2021
- “Improving Code Autocompletion With Transfer Learning”, Et Al 2021
- “Learning Autocompletion from Real-World Datasets”, Et Al 2020
- “GraphCodeBERT: Pre-training Code Representations With Data Flow”, Et Al 2020
- “CoCoNuT: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair”, Et Al 2020
- “TransCoder: Unsupervised Translation of Programming Languages”, Et Al 2020
- “GPT-3 Random Sample Dump: JavaScript Tutorial”, GPT-3 2020
- “IntelliCode Compose: Code Generation Using Transformer”, Et Al 2020
- “Deep Learning for Symbolic Mathematics”, Lample & Charton 2019
- “CodeSearchNet Challenge: Evaluating the State of Semantic Code Search”, Et Al 2019
- “BERTScore: Evaluating Text Generation With BERT”, Et Al 2019
- “Seq2SQL: Generating Structured Queries from Natural Language Using Reinforcement Learning”, Et Al 2017
- “Learning to Superoptimize Programs”, Et Al 2017
- “DeepCoder: Learning to Write Programs”, Balog Et Al 2016
- “OpenAI API Alchemy: Smart Formatting and Code Creation”
- “Transformer-VAE for Program Synthesis”
- Wikipedia
- Miscellaneous
- Link Bibliography
See Also
Links
“Introducing Microsoft 365 Copilot—your Copilot for Work”, Spataro 2023
“Introducing Microsoft 365 Copilot—your copilot for work”, 2023-03-26 (similar; bibliography)
“ProofNet: Autoformalizing and Formally Proving Undergraduate-Level Mathematics”, Azerbayev Et Al 2023
“ProofNet: Autoformalizing and Formally Proving Undergraduate-Level Mathematics”, 2023-02-24 (similar; bibliography)
“CodeBERTScore: Evaluating Code Generation With Pretrained Models of Code”, Et Al 2023
“CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code”, 2023-02-10 (similar)
“Google Is Asking Employees to Test Potential ChatGPT Competitors, including a Chatbot Called ‘Apprentice Bard’”, Elias 2023
“Google is asking employees to test potential ChatGPT competitors, including a chatbot called 'Apprentice Bard'”, 2023-01-31 (similar; bibliography)
“An Analysis of the Automatic Bug Fixing Performance of ChatGPT”, Sobania Et Al 2023
“An Analysis of the Automatic Bug Fixing Performance of ChatGPT”, 2023-01-20 (similar; bibliography)
“Connor Leahy on Aliens, Ethics, Economics, Memetics, and Education § GPT-4”, 2023
“Connor Leahy on Aliens, Ethics, Economics, Memetics, and Education § GPT-4”, 2023-01-19 (similar)
“General Availability of Azure OpenAI Service Expands Access to Large, Advanced AI Models With Added Enterprise Benefits”, Boyd 2023
“General availability of Azure OpenAI Service expands access to large, advanced AI models with added enterprise benefits”, 2023-01-16 (similar; bibliography)
“SantaCoder: Don’t Reach for the Stars!”, Et Al 2023
“SantaCoder: don’t reach for the stars!”, 2023-01-09 (similar)
“TrojanPuzzle: Covertly Poisoning Code-Suggestion Models”, Et Al 2023
“TrojanPuzzle: Covertly Poisoning Code-Suggestion Models”, 2023-01-06 (similar)
“ERNIE-Code: Beyond English-Centric Cross-lingual Pretraining for Programming Languages”, Et Al 2022
“ERNIE-Code: Beyond English-Centric Cross-lingual Pretraining for Programming Languages”, 2022-12-13 (similar)
“The Stack: 3 TB of Permissively Licensed Source Code”, Et Al 2022
“The Stack: 3 TB of permissively licensed source code”, 2022-11-20 (similar; bibliography)
“PAL: Program-aided Language Models”, Et Al 2022
“PAL: Program-aided Language Models”, 2022-11-18 (similar)
“Programming Possibility: Kevin Scott on AI’s Impact on Cognitive Work”, Hoffman & Scott 2022
“Programming Possibility: Kevin Scott on AI’s Impact on Cognitive Work”, 2022-10-18 (similar; bibliography)
“Challenging BIG-Bench Tasks (BBH) and Whether Chain-of-Thought Can Solve Them”, Et Al 2022
“Challenging BIG-Bench Tasks (BBH) and Whether Chain-of-Thought Can Solve Them”, 2022-10-17 (similar; bibliography)
“Vote-K: Selective Annotation Makes Language Models Better Few-Shot Learners”, Et Al 2022
“Vote-K: Selective Annotation Makes Language Models Better Few-Shot Learners”, 2022-09-05 (similar; bibliography)
“Repair Is Nearly Generation: Multilingual Program Repair With LLMs”, Et Al 2022
“Repair Is Nearly Generation: Multilingual Program Repair with LLMs”, 2022-08-24 (similar)
“Language Models Can Teach Themselves to Program Better”, Et Al 2022
“Language Models Can Teach Themselves to Program Better”, 2022-07-29 (similar)
“Efficient Training of Language Models to Fill in the Middle”, Et Al 2022
“Efficient Training of Language Models to Fill in the Middle”, 2022-07-28 (similar)
“PanGu-Coder: Program Synthesis With Function-Level Language Modeling”, Et Al 2022
“PanGu-Coder: Program Synthesis with Function-Level Language Modeling”, 2022-07-22 (similar)
“CodeT: Code Generation With Generated Tests”, Et Al 2022
“CodeT: Code Generation with Generated Tests”, 2022-07-21 (similar)
“Can Large Language Models Reason about Medical Questions?”, Liévin Et Al 2022
“Can large language models reason about medical questions?”, 2022-07-17 (similar; bibliography)
“Craft an Iron Sword: Dynamically Generating Interactive Game Characters by Prompting Large Language Models Tuned on Code”, Et Al 2022
“Craft an Iron Sword: Dynamically Generating Interactive Game Characters by Prompting Large Language Models Tuned on Code”, 2022-07-05 (similar)
“Code Translation With Compiler Representations”, Et Al 2022
“Code Translation with Compiler Representations”, 2022-06-30 (similar)
“Repository-Level Prompt Generation for Large Language Models of Code”, Et Al 2022
“Repository-Level Prompt Generation for Large Language Models of Code”, 2022-06-26 (similar)
“Productivity Assessment of Neural Code Completion”, Et Al 2022
“Productivity Assessment of Neural Code Completion”, 2022-05-13 (similar; bibliography)
“End-to-end Symbolic Regression With Transformers”, Et Al 2022
“End-to-end symbolic regression with transformers”, 2022-04-22 (similar)
“PaLM: Scaling Language Modeling With Pathways”, Chowdhery Et Al 2022
“PaLM: Scaling Language Modeling with Pathways”, 2022-04-05 (similar; bibliography)
“A Conversational Paradigm for Program Synthesis”, Et Al 2022
“A Conversational Paradigm for Program Synthesis”, 2022-03-25 (similar)
“Evaluating the Text-to-SQL Capabilities of Large Language Models”, Et Al 2022
“Evaluating the Text-to-SQL Capabilities of Large Language Models”, 2022-03-15 (similar)
“Expectation vs. Experience: Evaluating the Usability of Code Generation Tools Powered by Large Language Models”, Vaithilingam Et Al 2022
“Expectation vs. Experience: Evaluating the Usability of Code Generation Tools Powered by Large Language Models”, 2022-03-06 (similar; bibliography)
“PolyCoder: A Systematic Evaluation of Large Language Models of Code”, Et Al 2022
“PolyCoder: A Systematic Evaluation of Large Language Models of Code”, 2022-02-26 (similar)
“Pop Quiz! Can a Large Language Model Help With Reverse Engineering?”, Et Al 2022
“Pop Quiz! Can a Large Language Model Help With Reverse Engineering?”, 2022-02-02 (similar)
“Text and Code Embeddings by Contrastive Pre-Training”, Et Al 2022
“Text and Code Embeddings by Contrastive Pre-Training”, 2022-01-24 (similar; bibliography)
“Neural Language Models Are Effective Plagiarists”, 2022
“Neural Language Models are Effective Plagiarists”, 2022-01-19 (similar)
“Deep Symbolic Regression for Recurrent Sequences”, d’Ascoli Et Al 2022
“Deep Symbolic Regression for Recurrent Sequences”, 2022-01-12 (similar)
“Discovering the Syntax and Strategies of Natural Language Programming With Generative Language Models”, Et Al 2022
“Discovering the Syntax and Strategies of Natural Language Programming with Generative Language Models”, 2022-01-06 (backlinks; similar)
“A Neural Network Solves and Generates Mathematics Problems by Program Synthesis: Calculus, Differential Equations, Linear Algebra, and More”, Et Al 2021
“A Neural Network Solves and Generates Mathematics Problems by Program Synthesis: Calculus, Differential Equations, Linear Algebra, and More”, 2021-12-31 (similar)
“WebGPT: Improving the Factual Accuracy of Language Models through Web Browsing”, Hilton Et Al 2021
“WebGPT: Improving the factual accuracy of language models through web browsing”, 2021-12-16 (similar; bibliography)
“WebGPT: Browser-assisted Question-answering With Human Feedback”, Nakano Et Al 2021
“WebGPT: Browser-assisted question-answering with human feedback”, 2021-12-16 (similar; bibliography)
“Few-Shot Semantic Parsing With Language Models Trained On Code”, 2021
“Few-Shot Semantic Parsing with Language Models Trained On Code”, 2021-12-16 (similar)
“Scaling Language Models: Methods, Analysis & Insights from Training Gopher”, Rae Et Al 2021
“Scaling Language Models: Methods, Analysis & Insights from Training Gopher”, 2021-12-08 (similar; bibliography)
“Jigsaw: Large Language Models Meet Program Synthesis”, Et Al 2021
“Jigsaw: Large Language Models meet Program Synthesis”, 2021-12-06 (similar)
“Can Pre-trained Language Models Be Used to Resolve Textual and Semantic Merge Conflicts?”, Zhang Et Al 2021
“Can Pre-trained Language Models be Used to Resolve Textual and Semantic Merge Conflicts?”, 2021-11-23 (similar; bibliography)
“Solving Probability and Statistics Problems by Program Synthesis”, Tang Et Al 2021
“Solving Probability and Statistics Problems by Program Synthesis”, 2021-11-16 (backlinks; similar; bibliography)
“Solving Linear Algebra by Program Synthesis”, 2021
“Solving Linear Algebra by Program Synthesis”, 2021-11-16 (backlinks; similar)
“Automatic Program Repair With OpenAI’s Codex: Evaluating QuixBugs”, 2021
“Automatic Program Repair with OpenAI’s Codex: Evaluating QuixBugs”, 2021-11-06 (backlinks; similar)
“GenLine and GenForm: Two Tools for Interacting With Generative Language Models in a Code Editor”, Jiang Et Al 2021
“GenLine and GenForm: Two Tools for Interacting with Generative Language Models in a Code Editor”, 2021-09-07 (similar; bibliography)
“An Empirical Cybersecurity Evaluation of GitHub Copilot’s Code Contributions”, Et Al 2021
“An Empirical Cybersecurity Evaluation of GitHub Copilot’s Code Contributions”, 2021-08-20 (similar)
“Learning C to X86 Translation: An Experiment in Neural Compilation”, Armengol-Estapé & O’Boyle 2021
“Learning C to x86 Translation: An Experiment in Neural Compilation”, 2021-08-17 (similar)
“Program Synthesis With Large Language Models”, Austin Et Al 2021
“Program Synthesis with Large Language Models”, 2021-08-16 (similar)
“TAPEX: Table Pre-training via Learning a Neural SQL Executor”, Et Al 2021
“TAPEX: Table Pre-training via Learning a Neural SQL Executor”, 2021-07-16 (similar)
“Evaluating Large Language Models Trained on Code”, Chen Et Al 2021
“Evaluating Large Language Models Trained on Code”, 2021-07-07 (similar)
“Research Recitation: A First Look at Rote Learning in GitHub Copilot Suggestions”, 2021
“Research recitation: A first look at rote learning in GitHub Copilot suggestions”, 2021-07 (similar)
“Microsoft and OpenAI Have a New A.I. Tool That Will Give Coding Suggestions to Software Developers”, 2021
“Microsoft and OpenAI have a new A.I. tool that will give coding suggestions to software developers”, 2021-06-29 (similar)
“SymbolicGPT: A Generative Transformer Model for Symbolic Regression”, Et Al 2021
“SymbolicGPT: A Generative Transformer Model for Symbolic Regression”, 2021-06-27 (backlinks; similar)
“Measuring Coding Challenge Competence With APPS”, Hendrycks Et Al 2021
“Measuring Coding Challenge Competence With APPS”, 2021-05-20 (backlinks; similar)
“Improving Code Autocompletion With Transfer Learning”, Et Al 2021
“Improving Code Autocompletion with Transfer Learning”, 2021-05-12 (similar)
“Learning Autocompletion from Real-World Datasets”, Et Al 2020
“Learning Autocompletion from Real-World Datasets”, 2020-11-09 (similar)
“GraphCodeBERT: Pre-training Code Representations With Data Flow”, Et Al 2020
“GraphCodeBERT: Pre-training Code Representations with Data Flow”, 2020-09-17 (similar)
“CoCoNuT: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair”, Et Al 2020
“CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair”, 2020-07-01 (backlinks; similar)
“TransCoder: Unsupervised Translation of Programming Languages”, Et Al 2020
“TransCoder: Unsupervised Translation of Programming Languages”, 2020-06-05 (similar)
“GPT-3 Random Sample Dump: JavaScript Tutorial”, GPT-3 2020
“GPT-3 random sample dump: JavaScript tutorial”, 2020-05-28 (similar)
“IntelliCode Compose: Code Generation Using Transformer”, Et Al 2020
“IntelliCode Compose: Code Generation Using Transformer”, 2020-05-16 (similar)
“Deep Learning for Symbolic Mathematics”, Lample & Charton 2019
“Deep Learning for Symbolic Mathematics”, 2019-12-02 (similar)
“CodeSearchNet Challenge: Evaluating the State of Semantic Code Search”, Et Al 2019
“CodeSearchNet Challenge: Evaluating the State of Semantic Code Search”, 2019-09-20 (similar)
“BERTScore: Evaluating Text Generation With BERT”, Et Al 2019
“BERTScore: Evaluating Text Generation with BERT”, 2019-04-21 (backlinks; similar)
“Seq2SQL: Generating Structured Queries from Natural Language Using Reinforcement Learning”, Et Al 2017
“Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning”, 2017-08-31 (backlinks; similar)
“Learning to Superoptimize Programs”, Et Al 2017
“Learning to superoptimize programs”, 2017-02-23 (similar)
“DeepCoder: Learning to Write Programs”, Balog Et Al 2016
“DeepCoder: Learning to Write Programs”, 2016-11-07 (similar)
“OpenAI API Alchemy: Smart Formatting and Code Creation”
“Transformer-VAE for Program Synthesis”
Wikipedia
Miscellaneous
- https://andrewmayneblog.wordpress.com/2023/03/23/chatgpt-code-interpreter-magic/
- https://beta.openai.com/docs/guides/embeddings/code-search-using-embeddings
- https://mullikine.github.io/posts/nlsh-natural-language-shell/
- https://nitter.moomoo.me/AlexTamkin/status/1567956315208830976
- https://nitter.moomoo.me/ArtirKel/status/1588245580160983040
- https://nitter.moomoo.me/ArtirKel/status/1588246269385838594
- https://nitter.moomoo.me/BHolmesDev/status/1587788026637336576
- https://nitter.moomoo.me/PerksPlus0001/status/1631372820709253120
- https://nitter.moomoo.me/ThePrimeagen/status/1628047727866126336
- https://nitter.moomoo.me/ThomasMiconi/status/1569408502447374336
- https://nitter.moomoo.me/amanrsanger/status/1631029716550549504
- https://nitter.moomoo.me/ccanonne_/status/1639848150495301633
- https://nitter.moomoo.me/d_feldman/status/1549607411845152770
- https://nitter.moomoo.me/fabianstelzer/status/1572571003804614657
- https://nitter.moomoo.me/goodside/status/1614089728890130435
- https://nitter.moomoo.me/moreisdifferent/status/1612489352105365511
- https://nitter.moomoo.me/negamuhia/status/1569616507256115205
- https://nitter.moomoo.me/oegerikus/status/1610945035888955392
- https://nitter.moomoo.me/patrickmineault/status/1591874392279351297
- https://nitter.moomoo.me/perrymetzger/status/1632004276883947520
- https://nitter.moomoo.me/scottleibrand/status/1430753899460194310
- https://nitter.moomoo.me/sergeykarayev/status/1569377881440276481
- https://nitter.moomoo.me/sergeykarayev/status/1569571367833714688
- https://nitter.moomoo.me/thisiswrenn/status/1523182708385452032
- https://nitter.moomoo.me/zswitten/status/1631190068970012675
- https://old.reddit.com/r/GPT3/comments/106t5gv/compressing_prompt_text_with_lossless_compression/
- https://tagide.com/education/writing-a-tokenizer-with-chatgpt/
- https://towardsdatascience.com/can-chatgpt-write-better-sql-than-a-data-analyst-f079518efab2
- https://towardsdatascience.com/codex-by-openai-in-action-83529c0076cc
- https://www.lesswrong.com/posts/ib9bfyJiz4FLuHDQs/openai-codex-first-impressions
- https://www.lesswrong.com/posts/ux93sLHcqmBfsRTvg/gpt-can-write-quines-now-gpt-4
- https://www.nytimes.com/2021/09/09/technology/codex-artificial-intelligence-coding.html
- https://www.shawnmatthewcrawford.com/balloons-the-balloon-clicker-game.html
- https://www.theverge.com/2021/8/10/22618128/openai-codex-natural-language-into-code-api-beta-access
Link Bibliography
- https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/: “Introducing Microsoft 365 Copilot—your Copilot for Work”, Jared Spataro
- https://arxiv.org/abs/2302.12433: “ProofNet: Autoformalizing and Formally Proving Undergraduate-Level Mathematics”, Zhangir Azerbayev, Bartosz Piotrowski, Hailey Schoelkopf, Edward W. Ayers, Dragomir Radev, Jeremy Avigad
- https://www.cnbc.com/2023/01/31/google-testing-chatgpt-like-chatbot-apprentice-bard-with-employees.html: “Google Is Asking Employees to Test Potential ChatGPT Competitors, including a Chatbot Called 'Apprentice Bard'”, Jennifer Elias
- https://arxiv.org/abs/2301.08653: “An Analysis of the Automatic Bug Fixing Performance of ChatGPT”, Dominik Sobania, Martin Briesch, Carol Hanna, Justyna Petke
- https://azure.microsoft.com/en-us/blog/general-availability-of-azure-openai-service-expands-access-to-large-advanced-ai-models-with-added-enterprise-benefits/: “General Availability of Azure OpenAI Service Expands Access to Large, Advanced AI Models With Added Enterprise Benefits”, Eric Boyd
- https://arxiv.org/abs/2211.15533: “The Stack: 3 TB of Permissively Licensed Source Code”
- https://greylock.com/greymatter/kevin-scott-ai-programming-possibility/: “Programming Possibility: Kevin Scott on AI’s Impact on Cognitive Work”, Reid Hoffman, Kevin Scott
- https://arxiv.org/abs/2210.09261#google: “Challenging BIG-Bench Tasks (BBH) and Whether Chain-of-Thought Can Solve Them”
- https://arxiv.org/abs/2209.01975: “Vote-K: Selective Annotation Makes Language Models Better Few-Shot Learners”
- https://arxiv.org/abs/2207.08143: “Can Large Language Models Reason about Medical Questions?”, Valentin Liévin, Christoffer Egeberg Hother, Ole Winther
- https://arxiv.org/abs/2205.06537#github: “Productivity Assessment of Neural Code Completion”
- https://arxiv.org/abs/2204.02311#google: “PaLM: Scaling Language Modeling With Pathways”
- 2022-vaithilingam.pdf: “Expectation vs. Experience: Evaluating the Usability of Code Generation Tools Powered by Large Language Models”, Priyan Vaithilingam, Tianyi Zhang, Elena Glassman
- https://arxiv.org/abs/2201.10005#openai: “Text and Code Embeddings by Contrastive Pre-Training”
- https://openai.com/blog/webgpt/: “WebGPT: Improving the Factual Accuracy of Language Models through Web Browsing”, Jacob Hilton, Suchir Balaji, Reiichiro Nakano, John Schulman
- https://arxiv.org/abs/2112.09332#openai: “WebGPT: Browser-assisted Question-answering With Human Feedback”
- https://arxiv.org/abs/2112.11446#deepmind: “Scaling Language Models: Methods, Analysis & Insights from Training Gopher”
- https://arxiv.org/abs/2111.11904#microsoft: “Can Pre-trained Language Models Be Used to Resolve Textual and Semantic Merge Conflicts?”, Jialu Zhang, Todd Mytkowicz, Mike Kaufman, Ruzica Piskac, Shuvendu K. Lahiri
- https://arxiv.org/abs/2111.08267: “Solving Probability and Statistics Problems by Program Synthesis”, Leonard Tang, Elizabeth Ke, Nikhil Singh, Nakul Verma, Iddo Drori
- 2021-jiang.pdf: “GenLine and GenForm: Two Tools for Interacting With Generative Language Models in a Code Editor”, Ellen Jiang, Edwin Toh, Alejandra Molina, Aaron Donsbach, Carrie Cai, Michael Terry