Large Language Models

Title | Materials | References
Large Language Models | Slides |
Architectures | Slides | [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14]
Generation | Slides, Materials | [5] [15] [16] (sampling sketch below)
Instruction Tuning | Slides |
RLHF | Slides | [17] [18] [19]
DPO | Slides | [17] [20] (loss sketch below)
Tasks and Datasets | Slides | [21] [22] [23] [24] [25] [26] [27] [28] [29]
Efficient LLM Training and Inference | Slides |
Sequence Parallelism | Slides | [30] [31] [32] [33]
Paged Attention | Slides | [34] [35] [36]
Speculative Decoding | Slides | [37] [38] (decoding-loop sketch below)
Open-Source Infrastructure for LLMs | Slides | [39] [40] [41] [42] [43] [44]
Tool Use | Slides | [45] [46] [47]
Structured Outputs | Slides |
Constrained Decoding | Slides | [48] [49] [50]
Long Context | Slides | [51] [52] [53]
Retrieval Augmented Generation | Slides | [54] [55] [56] [57] [58]
Structured Dialogues | Slides | [59] [60] [10] [61] [62] [63] [64] [65] [66]
Limitations of LLMs | Slides | [67] [68] [69] [70]
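
The sampling papers in the Generation row ([15], [16]) both reduce to truncation rules applied to the next-token distribution before sampling. A minimal NumPy sketch of the two rules follows; the function names and toy logits are illustrative, not taken from any library.

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def top_p_filter(probs, p=0.9):
    """Nucleus sampling [15]: keep the smallest set of tokens whose
    cumulative probability reaches p, then renormalize."""
    order = np.argsort(probs)[::-1]                 # tokens by descending prob
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
    mask = np.zeros_like(probs)
    keep = order[:cutoff]
    mask[keep] = probs[keep]
    return mask / mask.sum()

def min_p_filter(probs, p_base=0.1):
    """Min-p sampling [16]: keep tokens with prob >= p_base * max prob,
    so the truncation threshold scales with the model's confidence."""
    mask = np.where(probs >= p_base * probs.max(), probs, 0.0)
    return mask / mask.sum()

rng = np.random.default_rng(0)
probs = softmax(rng.normal(size=50))                # toy next-token distribution
token = rng.choice(50, p=top_p_filter(probs))       # or min_p_filter(probs)
```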
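
For the DPO row, the objective from [20] fits in a few lines once the summed log-probabilities of the chosen and rejected responses are in hand. This is a sketch under that assumption; the argument names are hypothetical.

```python
import math

def dpo_loss(pi_w, pi_l, ref_w, ref_l, beta=0.1):
    """DPO objective [20]: -log sigmoid(beta * margin), where the margin
    measures how much more the policy (relative to a frozen reference
    model) prefers the chosen response y_w over the rejected y_l.
    Inputs are summed log-probs log pi(y|x) of each full response."""
    margin = beta * ((pi_w - ref_w) - (pi_l - ref_l))
    return math.log1p(math.exp(-margin))            # -log sigmoid(margin)

# Example: the policy has moved toward the chosen response, so loss < log 2.
print(dpo_loss(pi_w=-10.0, pi_l=-12.0, ref_w=-11.0, ref_l=-11.5))
```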
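
The Speculative Decoding row's core idea [37] is an accept/reject rule: a cheap draft model proposes up to k tokens, the target model scores them in one pass, and each proposal x is kept with probability min(1, p(x)/q(x)). A minimal sketch with toy stand-in distributions; the real algorithm conditions every distribution on the growing prefix, which is elided here.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 100

def toy_dist(seed):
    """Stand-in for a model's next-token distribution at one position."""
    z = np.exp(np.random.default_rng(seed).normal(size=VOCAB))
    return z / z.sum()

def speculative_step(target_dist, draft_dist, k=4):
    """One round of the rule from [37]: accept each draft token x with
    prob min(1, p(x)/q(x)); replace the first rejection with a sample
    from the residual max(p - q, 0), which keeps the overall output
    distributed exactly as the target model's."""
    out = []
    for i in range(k):
        p, q = target_dist(i), draft_dist(i)
        x = rng.choice(VOCAB, p=q)                  # draft proposal
        if rng.random() < min(1.0, p[x] / q[x]):
            out.append(x)                           # target accepts the token
        else:
            residual = np.maximum(p - q, 0.0)
            out.append(rng.choice(VOCAB, p=residual / residual.sum()))
            break                                   # stop at first rejection
    return out

print(speculative_step(lambda i: toy_dist(i), lambda i: toy_dist(i + 1000)))
```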

References

  1. PaLM: Scaling Language Modeling with Pathways. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, et al. 2022.
  2. Gemini: A Family of Highly Capable Multimodal Models. Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, et al. 2023.
  3. Mistral 7B. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, et al. 2023.
  4. Mixtral of Experts. Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, et al. 2024.
  5. Improving Language Understanding by Generative Pre-Training. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever. 2018.
  6. Attention Is All You Need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, et al. 2017.
  7. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. 2018.
  8. Physics of Language Models: Part 3.1, Knowledge Storage and Extraction. Zeyuan Allen-Zhu, Yuanzhi Li. 2023.
  9. Language Models are Unsupervised Multitask Learners. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever. 2019.
  10. Language Models are Few-Shot Learners. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, et al. 2020.
  11. https://commoncrawl.org/
  12. The Pile: An 800GB Dataset of Diverse Text for Language Modeling. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, et al. 2020.
  13. Mamba: Linear-Time Sequence Modeling with Selective State Spaces. Albert Gu, Tri Dao. 2023.
  14. Efficiently Modeling Long Sequences with Structured State Spaces. Albert Gu, Karan Goel, Christopher Ré. 2021.
  15. The Curious Case of Neural Text Degeneration. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, Yejin Choi. 2019.
  16. Turning Up the Heat: Min-p Sampling for Creative and Coherent LLM Outputs. Minh Nguyen, Andrew Baker, Clement Neo, Allen Roush, Andreas Kirsch, Ravid Shwartz-Ziv. 2024.
  17. Training language models to follow instructions with human feedback. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, et al. 2022.
  18. Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs. Arash Ahmadian, Chris Cremer, Matthias Gallé, Marzieh Fadaee, Julia Kreutzer, Olivier Pietquin, et al. 2024.
  19. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Ronald J. Williams. 1992.
  20. Direct Preference Optimization: Your Language Model is Secretly a Reward Model. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, Chelsea Finn. 2023.
  21. DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, Matt Gardner. 2019.
  22. PIQA: Reasoning about Physical Commonsense in Natural Language. Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, Yejin Choi. 2019.
  23. Measuring Massive Multitask Language Understanding. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, Jacob Steinhardt. 2020.
  24. Training Verifiers to Solve Math Word Problems. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, et al. 2021.
  25. WinoGrande: An Adversarial Winograd Schema Challenge at Scale. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi. 2019.
  26. Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, et al. 2022.
  27. AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models. Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, et al. 2023.
  28. Evaluating Large Language Models Trained on Code. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, et al. 2021.
  29. Program Synthesis with Large Language Models. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, et al. 2021.
  30. Ring Attention with Blockwise Transformers for Near-Infinite Context. Hao Liu, Matei Zaharia, Pieter Abbeel. 2023.
  31. Sequence Parallelism: Long Sequence Training from System Perspective. Shenggui Li, Fuzhao Xue, Chaitanya Baranwal, Yongbin Li, Yang You. 2021.
  32. Reducing Activation Recomputation in Large Transformer Models. Vijay Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, et al. 2022.
  33. DISTFLASHATTN: Distributed Memory-efficient Attention for Long-context LLMs Training. Dacheng Li, Rulin Shao, Anze Xie, Eric P. Xing, Xuezhe Ma, Ion Stoica, Joseph E. Gonzalez, Hao Zhang. 2023.
  34. Efficient Memory Management for Large Language Model Serving with PagedAttention. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, et al. 2023.
  35. PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information Funneling. Zefan Cai, Yichi Zhang, Bofei Gao, Yuliang Liu, Tianyu Liu, Keming Lu, Wayne Xiong, Yue Dong, et al. 2024.
  36. GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints. Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, Sumit Sanghai. 2023.
  37. Fast Inference from Transformers via Speculative Decoding. Yaniv Leviathan, Matan Kalman, Yossi Matias. 2022.
  38. Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads. Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, Tri Dao. 2024.
  39. https://github.com/pytorch/torchtune
  40. https://github.com/vllm-project/vllm
  41. https://huggingface.co/models
  42. https://lmsys.org/
  43. https://ollama.com/
  44. https://github.com/ggerganov/llama.cpp
  45. Toolformer: Language Models Can Teach Themselves to Use Tools. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, et al. 2023.
  46. AnyTool: Self-Reflective, Hierarchical Agents for Large-Scale API Calls. Yu Du, Fangyun Wei, Hongyang Zhang. 2024.
  47. The Llama 3 Herd of Models. Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, et al. 2024.
  48. Synchromesh: Reliable code generation from pre-trained language models. Gabriel Poesia, Oleksandr Polozov, Vu Le, Ashish Tiwari, Gustavo Soares, Christopher Meek, et al. 2022.
  49. Guiding LLMs The Right Way: Fast, Non-Invasive Constrained Generation. Luca Beurer-Kellner, Marc Fischer, Martin Vechev. 2024.
  50. Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search. Chris Hokamp, Qun Liu. 2017.
  51. Long Context Compression with Activation Beacon. Peitian Zhang, Zheng Liu, Shitao Xiao, Ninglu Shao, Qiwei Ye, Zhicheng Dou. 2024.
  52. RoFormer: Enhanced Transformer with Rotary Position Embedding. Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, Yunfeng Liu. 2021.
  53. Extending Context Window of Large Language Models via Positional Interpolation. Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian. 2023.
  54. Reading Wikipedia to Answer Open-Domain Questions. Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes. 2017.
  55. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, et al. 2020.
  56. REALM: Retrieval-Augmented Language Model Pre-Training. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang. 2020.
  57. Improving language models by retrieving from trillions of tokens. Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, et al. 2021.
  58. In-Context Retrieval-Augmented Language Models. Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham. 2023.
  59. Vision Transformers Need Registers. Timothée Darcet, Maxime Oquab, Julien Mairal, Piotr Bojanowski. 2023.
  60. Massive Activations in Large Language Models. Mingjie Sun, Xinlei Chen, J. Zico Kolter, Zhuang Liu. 2024.
  61. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, et al. 2022.
  62. Self-Consistency Improves Chain of Thought Reasoning in Language Models. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, et al. 2022.
  63. Tree of Thoughts: Deliberate Problem Solving with Large Language Models. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan. 2023.
  64. ReAct: Synergizing Reasoning and Acting in Language Models. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao. 2022.
  65. Reflexion: Language Agents with Verbal Reinforcement Learning. Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, Shunyu Yao. 2023.
  66. Generative Verifiers: Reward Modeling as Next-Token Prediction. Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, Rishabh Agarwal. 2024.
  67. ChatGPT is bullshit. Michael Townsen Hicks, James Humphries, Joe Slater. 2024.
  68. Large Language Models Cannot Self-Correct Reasoning Yet. Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, Denny Zhou. 2023.
  69. Dissociating language and thought in large language models. Kyle Mahowald, Anna A. Ivanova, Idan A. Blank, Nancy Kanwisher, Joshua B. Tenenbaum, et al. 2023.
  70. Physics of Language Models: Part 1, Learning Hierarchical Language Structures. Zeyuan Allen-Zhu, Yuanzhi Li. 2023.