Advances in Deep Learning

Getting Started
Welcome to Advances in Deep Learning (Slides)
Introduction
Structures of Deep Networks (Slides)
Training Deep Networks (Slides)
Modern GPU Architectures (Slides, Materials)
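
Before the training-focused units below, it may help to fix ideas with a minimal supervised training step. The following PyTorch sketch is illustrative only and not taken from the course slides; the architecture, optimizer choice, and input sizes are placeholder assumptions.

    import torch
    from torch import nn

    # Placeholder classifier: 784 inputs (e.g. flattened 28x28 images), 10 classes.
    model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
        # One gradient step: forward pass, loss, backward pass, parameter update.
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        return loss.item()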
Advanced Training
Training Large Models (Slides)
Mixed Precision Training (Slides) [1]
Distributed Training (Slides) [2] [3]
Zero Redundancy Training (Slides) [4]
Low-Rank Adapters (Slides) [5]
Quantization (Slides) [6] [7] [8]
Quantized Low-Rank Adapters (Slides) [9]
Low-Rank Projections (Slides) [10]
Checkpointing (Slides) [11]
FlashAttention (Slides) [12] [13] [14]
Open-Source Infrastructure for Model Training (Slides, Materials) [15] [16]
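
To make the adapter units above concrete ("Low-Rank Adapters" [5] and its quantized variant [9]), here is a minimal LoRA-style layer in PyTorch: the pretrained weight is frozen and only a low-rank update is trained. This is a sketch under assumed conventions (class name, initialization, rank, and scaling), not a reference implementation from the course.

    import torch
    from torch import nn

    class LoRALinear(nn.Module):
        # Computes base(x) + (alpha / r) * x A^T B^T, with A and B low-rank [5].
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)  # freeze the pretrained weight (and bias)
            self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no update at start
            self.scale = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

    # Usage: wrap an existing projection, then train only lora_a and lora_b.
    layer = LoRALinear(nn.Linear(1024, 1024), r=8)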
Generative Models
Generative Models (Slides)
Variational Autoencoders (Slides) [17]
Generative Adversarial Networks (Slides) [18] [19] [20]
Flow-Based Models (Slides) [21] [22] [23]
Auto-Regressive Generation (Slides) [24] [25] [26] [27]
Vector Quantization (Slides) [28] [29] [30]
DALL-E (Slides) [28] [31] [32] [33] [34] [35] [36]
Diffusion Models (Slides) [37] [38] [39]
Latent Diffusion and State-of-the-Art Models (Slides) [40] [31] [41] [42] [43] [44] [45] [46]
Which Generative Model Should I Use? (Slides)
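
As a taste of the diffusion units above, the following sketch shows one DDPM-style training step in the spirit of Ho et al. [37]: sample a timestep, corrupt the clean input with Gaussian noise according to the cumulative schedule, and train the network to predict that noise. The model(x_t, t) signature and the alphas_cumprod tensor are assumptions made for illustration.

    import torch
    import torch.nn.functional as F

    def ddpm_loss(model, x0: torch.Tensor, alphas_cumprod: torch.Tensor) -> torch.Tensor:
        # x0: clean images (B, C, H, W); alphas_cumprod: 1-D noise schedule of length T.
        b = x0.shape[0]
        t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
        a_bar = alphas_cumprod[t].view(b, 1, 1, 1)          # per-sample noise level
        eps = torch.randn_like(x0)                          # the noise to predict
        x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps  # forward process q(x_t | x_0)
        return F.mse_loss(model(x_t, t), eps)               # regress the injected noise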
Large Language Models
Large Language Models (Slides)
Architectures (Slides) [47] [48] [49] [50] [51] [52] [53] [54] [55] [56] [57] [58] [59] [60]
Generation (Slides, Materials) [51] [61] [62]
Instruction Tuning (Slides)
RLHF (Slides) [63] [64] [65]
DPO (Slides) [63] [66]
Tasks and Datasets (Slides) [67] [68] [69] [70] [71] [72] [73] [74] [75]
Efficient LLM Training and Inference (Slides)
Sequence Parallelism (Slides) [76] [77] [78] [79]
Paged Attention (Slides) [80] [81] [82]
Speculative Decoding (Slides) [83] [84]
Open-Source Infrastructure for LLMs (Slides) [85] [86] [87] [88] [89] [90]
Tool Use (Slides) [91] [92] [93]
Structured Outputs (Slides)
Constrained Decoding (Slides) [94] [95] [96]
Long Context (Slides) [97] [98] [99]
Retrieval Augmented Generation (Slides) [100] [101] [102] [103] [104]
Structured Dialogues (Slides) [105] [106] [56] [107] [108] [109] [110] [111] [112]
Limitations of LLMs (Slides) [113] [114] [115] [116]
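
Many of the generation and decoding units above ultimately hinge on how the next token is drawn from the model's output distribution. Below is a minimal nucleus (top-p) sampling sketch in the spirit of Holtzman et al. [61]; the function name and the assumption of a 1-D logits vector are illustrative, not from the course materials.

    import torch

    def sample_top_p(logits: torch.Tensor, p: float = 0.9) -> torch.Tensor:
        # Sample from the smallest set of tokens whose cumulative probability exceeds p.
        probs = torch.softmax(logits, dim=-1)
        sorted_probs, sorted_idx = torch.sort(probs, descending=True)
        cumulative = torch.cumsum(sorted_probs, dim=-1)
        mask = cumulative - sorted_probs > p  # tokens ranked entirely outside the nucleus
        sorted_probs[mask] = 0.0
        sorted_probs = sorted_probs / sorted_probs.sum()  # renormalize over the nucleus
        return sorted_idx[torch.multinomial(sorted_probs, 1)]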

References

  1. Mixed Precision Training. Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, et al. 2017.
  2. GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism. Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Mia Xu Chen, Dehao Chen, HyoukJoong Lee, et al. 2018.
  3. GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding. Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, et al. 2020.
  4. ZeRO: Memory Optimizations Toward Training Trillion Parameter Models. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, Yuxiong He. 2019.
  5. LoRA: Low-Rank Adaptation of Large Language Models. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, et al. 2021.
  6. 8-Bit Approximations for Parallelism in Deep Learning. Tim Dettmers. 2015.
  7. 8-bit Optimizers via Block-wise Quantization. Tim Dettmers, Mike Lewis, Sam Shleifer, Luke Zettlemoyer. 2021.
  8. The case for 4-bit precision: k-bit Inference Scaling Laws. Tim Dettmers, Luke Zettlemoyer. 2022.
  9. QLoRA: Efficient Finetuning of Quantized LLMs. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer. 2023.
  10. GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection. Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, Yuandong Tian. 2024.
  11. Training Deep Nets with Sublinear Memory Cost. Tianqi Chen, Bing Xu, Chiyuan Zhang, Carlos Guestrin. 2016.
  12. FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness. Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, Christopher Ré. 2022.
  13. FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning. Tri Dao. 2023.
  14. FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision. Jay Shah, Ganesh Bikshandi, Ying Zhang, Vijay Thakkar, Pradeep Ramani, Tri Dao. 2024.
  15. https://github.com/ray-project/ray
  16. https://github.com/Lightning-AI/pytorch-lightning
  17. Auto-Encoding Variational Bayes. Diederik P. Kingma, Max Welling. 2013.
  18. Generative Adversarial Networks. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, et al. 2014.
  19. Large Scale GAN Training for High Fidelity Natural Image Synthesis. Andrew Brock, Jeff Donahue, Karen Simonyan. 2018.
  20. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, et al. 2016.
  21. Variational Inference with Normalizing Flows. Danilo Jimenez Rezende, Shakir Mohamed. 2015.
  22. Density estimation using Real NVP. Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio. 2016.
  23. Glow: Generative Flow with Invertible 1x1 Convolutions. Diederik P. Kingma, Prafulla Dhariwal. 2018.
  24. WaveNet: A Generative Model for Raw Audio. Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, et al. 2016.
  25. Long Video Generation with Time-Agnostic VQGAN and Time-Sensitive Transformer. Songwei Ge, Thomas Hayes, Harry Yang, Xi Yin, Guan Pang, David Jacobs, Jia-Bin Huang, Devi Parikh. 2022.
  26. Lossless Image Compression through Super-Resolution. Sheng Cao, Chao-Yuan Wu, Philipp Krähenbühl. 2020.
  27. Practical Full Resolution Learned Lossless Image Compression. Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, Luc Van Gool. 2018.
  28. Neural Discrete Representation Learning. Aaron van den Oord, Oriol Vinyals, Koray Kavukcuoglu. 2017.
  29. Taming Transformers for High-Resolution Image Synthesis. Patrick Esser, Robin Rombach, Björn Ommer. 2020.
  30. Language Model Beats Diffusion -- Tokenizer is Key to Visual Generation. Lijun Yu, José Lezama, Nitesh B. Gundavarapu, Luca Versari, Kihyuk Sohn, David Minnen, et al. 2023.
  31. Zero-Shot Text-to-Image Generation. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, et al. 2021.
  32. https://insightcivic.s3.us-east-1.amazonaws.com/language-models.pdf
  33. Simulating 500 million years of evolution with a language model. Thomas Hayes, Roshan Rao, Halil Akin, Nicholas J. Sofroniew, Deniz Oktay, et al. 2024.
  34. Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning. Piyush Sharma, Nan Ding, Sebastian Goodman, Radu Soricut. 2018.
  35. YFCC100M: The New Data in Multimedia Research. Bart Thomee, David A. Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, et al. 2015.
  36. Generating Long Sequences with Sparse Transformers. Rewon Child, Scott Gray, Alec Radford, Ilya Sutskever. 2019.
  37. Denoising Diffusion Probabilistic Models. Jonathan Ho, Ajay Jain, Pieter Abbeel. 2020.
  38. Generative Modeling by Estimating Gradients of the Data Distribution. Yang Song, Stefano Ermon. 2019.
  39. Diffusion Models Beat GANs on Image Synthesis. Prafulla Dhariwal, Alex Nichol. 2021.
  40. High-Resolution Image Synthesis with Latent Diffusion Models. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. 2021.
  41. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, et al. 2022.
  42. Hierarchical Text-Conditional Image Generation with CLIP Latents. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen. 2022.
  43. CCM: Adding Conditional Controls to Text-to-Image Consistency Models. Jie Xiao, Kai Zhu, Han Zhang, Zhiheng Liu, Yujun Shen, Yu Liu, Xueyang Fu, Zheng-Jun Zha. 2023.
  44. Adding Conditional Control to Text-to-Image Diffusion Models. Lvmin Zhang, Anyi Rao, Maneesh Agrawala. 2023.
  45. One-step Diffusion with Distribution Matching Distillation. Tianwei Yin, Michaël Gharbi, Richard Zhang, Eli Shechtman, Fredo Durand, William T. Freeman, et al. 2023.
  46. Diffusion Models: A Comprehensive Survey of Methods and Applications. Ling Yang, Zhilong Zhang, Yang Song, Shenda Hong, Runsheng Xu, Yue Zhao, Wentao Zhang, Bin Cui, et al. 2022.
  47. PaLM: Scaling Language Modeling with Pathways. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, et al. 2022.
  48. Gemini: A Family of Highly Capable Multimodal Models. Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, et al. 2023.
  49. Mistral 7B. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, et al. 2023.
  50. Mixtral of Experts. Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, et al. 2024.
  51. Improving Language Understanding by Generative Pretraining. Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever. 2018.
  52. Attention Is All You Need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, et al. 2017.
  53. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova. 2018.
  54. Physics of Language Models: Part 3.1, Knowledge Storage and Extraction. Zeyuan Allen-Zhu, Yuanzhi Li. 2023.
  55. Language Models are Unsupervised Multitask Learners. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever. 2019.
  56. Language Models are Few-Shot Learners. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, et al. 2020.
  57. https://commoncrawl.org/
  58. The Pile: An 800GB Dataset of Diverse Text for Language Modeling. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, et al. 2020.
  59. Mamba: Linear-Time Sequence Modeling with Selective State Spaces. Albert Gu, Tri Dao. 2023.
  60. Efficiently Modeling Long Sequences with Structured State Spaces. Albert Gu, Karan Goel, Christopher Ré. 2021.
  61. The Curious Case of Neural Text Degeneration. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, Yejin Choi. 2019.
  62. Turning Up the Heat: Min-p Sampling for Creative and Coherent LLM Outputs. Minh Nguyen, Andrew Baker, Clement Neo, Allen Roush, Andreas Kirsch, Ravid Shwartz-Ziv. 2024.
  63. Training language models to follow instructions with human feedback. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, et al. 2022.
  64. Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs. Arash Ahmadian, Chris Cremer, Matthias Gallé, Marzieh Fadaee, Julia Kreutzer, Olivier Pietquin, et al. 2024.
  65. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Ronald J. Williams. 1992.
  66. Direct Preference Optimization: Your Language Model is Secretly a Reward Model. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, Chelsea Finn. 2023.
  67. DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, Matt Gardner. 2019.
  68. PIQA: Reasoning about Physical Commonsense in Natural Language. Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, Yejin Choi. 2019.
  69. Measuring Massive Multitask Language Understanding. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, Jacob Steinhardt. 2020.
  70. Training Verifiers to Solve Math Word Problems. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, et al. 2021.
  71. WinoGrande: An Adversarial Winograd Schema Challenge at Scale. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi. 2019.
  72. Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, et al. 2022.
  73. AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models. Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, et al. 2023.
  74. Evaluating Large Language Models Trained on Code. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, et al. 2021.
  75. Program Synthesis with Large Language Models. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, et al. 2021.
  76. Ring Attention with Blockwise Transformers for Near-Infinite Context. Hao Liu, Matei Zaharia, Pieter Abbeel. 2023.
  77. Sequence Parallelism: Long Sequence Training from System Perspective. Shenggui Li, Fuzhao Xue, Chaitanya Baranwal, Yongbin Li, Yang You. 2021.
  78. Reducing Activation Recomputation in Large Transformer Models. Vijay Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, et al. 2022.
  79. DISTFLASHATTN: Distributed Memory-efficient Attention for Long-context LLMs Training. Dacheng Li, Rulin Shao, Anze Xie, Eric P. Xing, Xuezhe Ma, Ion Stoica, Joseph E. Gonzalez, Hao Zhang. 2023.
  80. Efficient Memory Management for Large Language Model Serving with PagedAttention. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, et al. 2023.
  81. PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information Funneling. Zefan Cai, Yichi Zhang, Bofei Gao, Yuliang Liu, Tianyu Liu, Keming Lu, Wayne Xiong, Yue Dong, et al. 2024.
  82. GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints. Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, Sumit Sanghai. 2023.
  83. Fast Inference from Transformers via Speculative Decoding. Yaniv Leviathan, Matan Kalman, Yossi Matias. 2022.
  84. Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads. Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, Tri Dao. 2024.
  85. https://github.com/pytorch/torchtune
  86. https://github.com/vllm-project/vllm
  87. https://huggingface.co/models
  88. https://lmsys.org/
  89. https://ollama.com/
  90. https://github.com/ggerganov/llama.cpp
  91. Toolformer: Language Models Can Teach Themselves to Use Tools. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, et al. 2023.
  92. AnyTool: Self-Reflective, Hierarchical Agents for Large-Scale API Calls. Yu Du, Fangyun Wei, Hongyang Zhang. 2024.
  93. The Llama 3 Herd of Models. Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, et al. 2024.
  94. Synchromesh: Reliable code generation from pre-trained language models. Gabriel Poesia, Oleksandr Polozov, Vu Le, Ashish Tiwari, Gustavo Soares, Christopher Meek, et al. 2022.
  95. Guiding LLMs The Right Way: Fast, Non-Invasive Constrained Generation. Luca Beurer-Kellner, Marc Fischer, Martin Vechev. 2024.
  96. Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search. Chris Hokamp, Qun Liu. 2017.
  97. Long Context Compression with Activation Beacon. Peitian Zhang, Zheng Liu, Shitao Xiao, Ninglu Shao, Qiwei Ye, Zhicheng Dou. 2024.
  98. RoFormer: Enhanced Transformer with Rotary Position Embedding. Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, Yunfeng Liu. 2021.
  99. Extending Context Window of Large Language Models via Positional Interpolation. Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian. 2023.
  100. Reading Wikipedia to Answer Open-Domain Questions. Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes. 2017.
  101. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, et al. 2020.
  102. REALM: Retrieval-Augmented Language Model Pre-Training. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang. 2020.
  103. Improving language models by retrieving from trillions of tokens. Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, et al. 2021.
  104. In-Context Retrieval-Augmented Language Models. Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham. 2023.
  105. Vision Transformers Need Registers. Timothée Darcet, Maxime Oquab, Julien Mairal, Piotr Bojanowski. 2023.
  106. Massive Activations in Large Language Models. Mingjie Sun, Xinlei Chen, J. Zico Kolter, Zhuang Liu. 2024.
  107. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, et al. 2022.
  108. Self-Consistency Improves Chain of Thought Reasoning in Language Models. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, et al. 2022.
  109. Tree of Thoughts: Deliberate Problem Solving with Large Language Models. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan. 2023.
  110. ReAct: Synergizing Reasoning and Acting in Language Models. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao. 2022.
  111. Reflexion: Language Agents with Verbal Reinforcement Learning. Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, Shunyu Yao. 2023.
  112. Generative Verifiers: Reward Modeling as Next-Token Prediction. Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, Rishabh Agarwal. 2024.
  113. ChatGPT is bullshit. Michael Townsen Hicks, James Humphries, Joe Slater. 2024.
  114. Large Language Models Cannot Self-Correct Reasoning Yet. Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, Denny Zhou. 2023.
  115. Dissociating language and thought in large language models. Kyle Mahowald, Anna A. Ivanova, Idan A. Blank, Nancy Kanwisher, Joshua B. Tenenbaum, et al. 2023.
  116. Physics of Language Models: Part 1, Learning Hierarchical Language Structures. Zeyuan Allen-Zhu, Yuanzhi Li. 2023.