# Transformers

## Transformer Encoder & Decoder

```mermaid
---
config:
  flowchart:
    rankSpacing: 20
---
flowchart TB
    x[fa:fa-flask x]:::input
    MHA(Multi-Head Attention)
    add1((+))
    ln1(LayerNorm):::normalization
    mlp(MLP):::linear
    add2((+))
    ln2(LayerNorm):::normalization
    y[y]:::output
    x --> |k| MHA
    x --> |q| MHA
    x --> |v| MHA
    MHA --> add1
    x --> add1
    add1 --> ln1
    ln1 --> mlp
    mlp --> add2
    ln1 --> add2
    add2 --> ln2 --> y
```

The encoder block wraps multi-head attention with LayerNorm, an FFN (the MLP above), and skip connections (which were supposed to be introduced in part 2); a code sketch of this block follows below.

- Encoder-only example: BERT
- Decoder-only (GPT family): a causal mask restricts each position to attend only to earlier positions, also during training
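
Below is a minimal sketch of the block in the diagram, assuming PyTorch; the class name `EncoderBlock`, the sizes `d_model`, `n_heads`, `d_ff`, and the GELU activation inside the MLP are illustrative choices, not taken from the original.

```python
# Minimal post-LN encoder block matching the diagram above (PyTorch assumed).
# Names (EncoderBlock, d_model, n_heads, d_ff) are illustrative, not from the source.
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8, d_ff: int = 2048):
        super().__init__()
        self.mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln1 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(          # the "MLP" / FFN node in the diagram
            nn.Linear(d_model, d_ff),
            nn.GELU(),
            nn.Linear(d_ff, d_model),
        )
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # q, k, v all come from x (self-attention), then skip connection + LayerNorm
        attn_out, _ = self.mha(x, x, x, need_weights=False)
        h = self.ln1(x + attn_out)         # add1 --> ln1
        return self.ln2(h + self.mlp(h))   # mlp, add2 --> ln2 --> y


x = torch.randn(2, 16, 512)               # (batch, sequence, d_model)
y = EncoderBlock()(x)
print(y.shape)                             # torch.Size([2, 16, 512])
```

This follows the post-LN arrangement shown in the diagram (LayerNorm after each residual add); many modern implementations instead place the LayerNorm before each sub-layer (pre-LN).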
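
For the decoder-only case, the causal mask can be sketched as a boolean upper-triangular matrix, again assuming PyTorch; `causal_mask` and `seq_len` are illustrative names.

```python
# Causal (look-ahead) mask for a decoder-only model (PyTorch assumed).
import torch

def causal_mask(seq_len: int) -> torch.Tensor:
    # True above the diagonal = positions that must NOT be attended to.
    return torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

mask = causal_mask(4)
print(mask)
# tensor([[False,  True,  True,  True],
#         [False, False,  True,  True],
#         [False, False, False,  True],
#         [False, False, False, False]])

# nn.MultiheadAttention accepts such a boolean mask as attn_mask, so token i can
# only attend to tokens 0..i, also during (teacher-forced) training.
```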