Now we have covered vectors, which are 1-D arrays, and matrices, which are 2-D arrays. But how do we represent general $N$-D arrays? These are represented as tensors.

## Tensors

A tensor $\mathbf{X}$ is an $N$-D array
$$
\mathbf{X}\in \mathbb{R}^{d_1 \times d_2 \times \cdots \times d_{N}},
$$
where $d_{i}$ denotes the size of the $i$-th dimension.

Examples:

- A vector $\mathbf{v}\in\mathbb{R}^{n}$ is a 1-dimensional tensor;
- A matrix $\mathbf{M}\in\mathbb{R}^{n\times m}$ is a 2-dimensional tensor;
- An image $\mathbf{I}\in\mathbb{R}^{H\times W \times C}$ is a 3-dimensional tensor with height $H$, width $W$, and $C=3$ color channels;
- A video $\mathbf{V}\in\mathbb{R}^{T\times H\times W \times C}$ is a 4-dimensional tensor with time $T$, height $H$, width $W$, and color channels $C$.

## Representing Data as Tensors

### torch.Tensor

A `torch.Tensor` is a data container. It has some important properties, such as:

- `shape` - the size of the tensor along each dimension;
- `dtype` - the data type (e.g. `torch.float32`, `torch.int64`, `torch.bool`, etc.);
- `ndim` - the number of dimensions, namely `ndim = len(shape)`.

### Tensors with all elements being zero

```python
import torch

# A 1-d all-zero tensor (vector) with 10 elements
a = torch.zeros(10)
print(f"{type(a)=}")
print(f"{a=}")
print(f"{a.shape=}")
print(f"{a.ndim=}")
```

Output:

```
type(a)=<class 'torch.Tensor'>
```
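To tie the image example $\mathbf{I}\in\mathbb{R}^{H\times W\times C}$ back to the `shape`, `dtype`, and `ndim` properties above, here is a minimal sketch; the sizes (4, 5, 3) and the variable names `img` and `mask` are illustrative assumptions, not values from the original.

```python
import torch

# An all-zero tensor shaped like a tiny RGB image: H=4, W=5, C=3.
# (Sizes and variable names are illustrative only.)
img = torch.zeros(4, 5, 3)
print(f"{img.shape=}")   # torch.Size([4, 5, 3])
print(f"{img.dtype=}")   # torch.float32, PyTorch's default floating-point dtype
print(f"{img.ndim=}")    # 3, i.e. len(img.shape)

# The dtype can also be chosen explicitly when creating a tensor
mask = torch.zeros(4, 5, dtype=torch.bool)
print(f"{mask.dtype=}")  # torch.bool
```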