
Tensors in PyTorch

A PyTorch tensor is conceptually the same as a NumPy array: it knows nothing about deep learning, computational graphs, or gradients, and is just a generic n-dimensional array used for arbitrary numeric computation. The biggest difference between a NumPy array and a PyTorch tensor is that a PyTorch tensor can run on either the CPU or the GPU. To run operations on the GPU, create the tensor on a CUDA device, as in the snippet below:

import torch

# run on the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 32, 100, 10, 2

# create a random input tensor on the chosen device
x = torch.randn(N, D_in, device=device, dtype=torch.float)

In the example above, x can be thought of as a random feature tensor fed as input to a model. In this article we will see how to create tensors and look at their attributes and the operations they support.

How to create a Tensor?

You can create a tensor using some simple lines of code as shown below.




import torch
V_data = [1, 2, 3, 4, 5]
V = torch.tensor(V_data)
print(V)

Output: 

tensor([1, 2, 3, 4, 5])
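Note that torch.tensor() infers the dtype from the Python data: integers become 64-bit integers by default, and you can override this explicitly. A minimal sketch illustrating this (the variable names here are just for illustration):

import torch

# dtype is inferred from the Python data: ints become torch.int64
V = torch.tensor([1, 2, 3, 4, 5])
print(V.dtype)      # torch.int64

# the dtype can also be set explicitly
F = torch.tensor([1, 2, 3, 4, 5], dtype=torch.float32)
print(F.dtype)      # torch.float32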

You can also create a tensor of random data with a given dimensionality like:




import torch
  
x = torch.randn((3, 4, 5))
print(x)

Output:

tensor([[[ 0.8332, -0.2102,  0.0213,  0.4375, -0.9506],
         [ 0.0877, -1.5845, -0.1520,  0.3944, -0.7282],
         [-0.6923,  0.0332, -0.4628, -0.9127, -1.4349],
         [-0.3641, -0.5880, -0.5963, -1.4126,  0.5308]],

        [[ 0.4492, -1.2030,  2.5985,  0.8966,  0.4876],
         [ 0.5083,  1.4515,  0.6496,  0.3407,  0.0093],
         [ 0.1237,  0.3783, -0.7969,  1.4019,  0.0633],
         [ 0.4399,  0.3827,  1.2231, -0.0674, -1.0158]],

        [[-0.2490, -0.5475,  0.6201, -2.2092,  0.8405],
         [ 0.1684, -1.0118,  0.7414, -3.3518, -0.3209],
         [ 0.6543,  0.1956, -0.2954,  0.1055,  1.6523],
         [-0.9872, -2.0118, -1.6609,  1.4072,  0.0632]]])

You can also create tensors using the following functions:




import torch
  
z = torch.zeros([3, 3], dtype=torch.int32)
print(z)

Output:  

tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]], dtype=torch.int32)




import torch
  
z = torch.ones([3, 3])
print(z)

Output:

tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]])

torch.full() returns a tensor of the given size filled with fill_value. Its syntax is:

Syntax: torch.full(size, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)

The syntax of torch.full_like(), which creates a filled tensor shaped like an existing one, is:

Syntax: torch.full_like(input, fill_value, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, memory_format=torch.preserve_format)




import torch
  
# example of torch.full()
newTensor = torch.full((4, 3), 3.14, dtype=torch.float32)
print(newTensor)

Output:

tensor([[3.1400, 3.1400, 3.1400],
        [3.1400, 3.1400, 3.1400],
        [3.1400, 3.1400, 3.1400],
        [3.1400, 3.1400, 3.1400]])




import torch

# example of torch.full_like(); newTensor comes from the
# torch.full() example above
x = torch.full_like(newTensor, 3.24)
print(x)

Output: 

tensor([[3.2400, 3.2400, 3.2400],
        [3.2400, 3.2400, 3.2400],
        [3.2400, 3.2400, 3.2400],
        [3.2400, 3.2400, 3.2400]])

Here a new tensor is returned with the same size and dtype as newTensor, which was returned by torch.full() in the previous example.

Tensor attributes:

Every tensor (torch.Tensor) has torch.dtype, torch.device, and torch.layout attributes.
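A minimal sketch of inspecting these three attributes on a freshly created tensor:

import torch

t = torch.randn(2, 2)

# every tensor carries these three attributes
print(t.dtype)     # torch.float32 (the default floating-point dtype)
print(t.device)    # cpu (unless the tensor was created on a GPU)
print(t.layout)    # torch.strided (the default dense layout)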

Example:




torch.device('cuda:0')

Output : 

device(type='cuda', index=0)

If the device ordinal is not present, this object will always represent the current device for the device type, even after torch.cuda.set_device() is called.
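A short sketch illustrating the difference between a device with and without an ordinal (this runs even without a GPU, since it only constructs device objects):

import torch

# with an explicit ordinal, the device is pinned to that GPU
d0 = torch.device('cuda:0')
print(d0.type, d0.index)    # cuda 0

# without an ordinal, index is None and the object follows the
# current CUDA device at the time a tensor is actually allocated
d = torch.device('cuda')
print(d.type, d.index)      # cuda None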

torch.layout expresses the memory layout of a tensor; the two layouts discussed here are:

1. torch.strided: Represents dense tensors and is the most commonly used memory layout. Each strided tensor has an associated torch.Storage, which holds its data; the tensor provides a multi-dimensional, strided view of that storage. The stride of an array (also referred to as increment, pitch, or step size) is the number of locations in memory between the beginnings of successive array elements, measured in bytes or in units of the size of the array's elements. The stride cannot be smaller than the element size but can be larger, indicating extra space between elements. Strides are a list of integers: the k-th stride is the jump in memory necessary to go from one element to the next in the k-th dimension of the tensor. This concept makes it possible to perform many tensor operations efficiently.

Let’s run some example snippets:




x = torch.tensor([[1, 2, 3, 4], [5, 7, 8, 9]])
print(x.stride())

Output:

(4, 1)
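To see why strides matter, note that transposing a tensor just swaps its strides rather than copying any data; the sketch below (using data_ptr() to compare storage addresses) illustrates this:

import torch

x = torch.tensor([[1, 2, 3, 4], [5, 7, 8, 9]])

# transposing swaps the strides instead of moving any data
y = x.t()
print(y.stride())                     # (1, 4)

# both tensors share the same underlying storage
print(x.data_ptr() == y.data_ptr())   # True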

2. torch.sparse_coo_tensor: Used to store tensors in the sparse coordinate (COO) format. In COO format, the specified elements are stored as tuples of element indices and the corresponding values.




import torch

i = [[0, 1, 1],
     [2, 0, 2]]

v = [3, 4, 5]
s = torch.sparse_coo_tensor(i, v, (2, 3))
print(s)

Output: 

tensor(indices=tensor([[0, 1, 1],
                       [2, 0, 2]]),
       values=tensor([3, 4, 5]),
       size=(2, 3), nnz=3, layout=torch.sparse_coo)
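To verify which positions the indices refer to, you can convert the sparse tensor back to a dense one with to_dense():

import torch

i = [[0, 1, 1],
     [2, 0, 2]]
v = [3, 4, 5]
s = torch.sparse_coo_tensor(i, v, (2, 3))

# materialize the sparse tensor as an ordinary dense tensor
print(s.to_dense())
# tensor([[0, 0, 3],
#         [4, 0, 5]])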

Tensor operations:

You can add two tensors element-wise, just like matrix addition.




x = torch.tensor([1., 2., 3.])
y = torch.tensor([4., 5., 6.])
z = x + y
print(z)

Output: 

tensor([5., 7., 9.])
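The other arithmetic operators behave the same way, operating element-wise; a minimal sketch:

import torch

x = torch.tensor([1., 2., 3.])
y = torch.tensor([4., 5., 6.])

# multiplication and subtraction are also element-wise
print(x * y)    # tensor([ 4., 10., 18.])
print(y - x)    # tensor([3., 3., 3.])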




You can concatenate tensors with torch.cat(). By default it concatenates along the first axis (rows), so the column counts must match:

x_1 = torch.randn(2, 5)
y_1 = torch.randn(3, 5)

# by default, torch.cat concatenates along axis 0 (rows)
z_1 = torch.cat([x_1, y_1])
print(z_1)

Output: 

tensor([[ 0.5761,  0.6781,  0.1621,  0.4986,  0.3410],
        [-0.8428,  0.2510, -0.2668, -1.1475,  0.5675],
        [-0.2797, -0.0699,  2.8936,  1.8260,  2.1227],
        [ 1.3765, -0.0939, -0.3774, -0.3834,  0.0682],
        [ 2.3666,  0.0904,  0.7956,  1.2281,  0.5561]])

To concatenate along columns instead, pass the axis as the second argument:




x_2 = torch.randn(2, 3)
y_2 = torch.randn(2, 5)
  
# second argument specifies which axis to concat along
z_2 = torch.cat([x_2, y_2], 1)
print(z_2)

Output:

tensor([[ 0.5818,  0.7047,  0.1581,  1.8658,  0.5953, -0.9453, -0.6395, -0.7106],
        [ 1.2197,  0.8110, -1.6072,  0.1463,  0.4895, -0.8226, -0.1889,  0.2668]])




You can reshape a tensor with the view() method, which returns a view of the same underlying data:

x = torch.randn(2, 3, 4)
print(x)

# reshape to 2 rows, 12 columns
print(x.view(2, 12))

Output:

tensor([[[ 0.4321,  0.2414, -0.4776,  1.6408],
         [ 0.9085,  0.9195,  0.1321,  1.1891],
         [-0.9267, -0.1384,  0.0115, -0.4731]],

        [[ 0.7256,  0.6990, -1.7374,  0.6053],
         [ 0.0224, -1.2108,  0.1974,  0.0655],
         [-0.6182, -0.0797,  0.2603, -1.3280]]])
tensor([[ 0.4321,  0.2414, -0.4776,  1.6408,  0.9085,  0.9195,  0.1321,  1.1891,
         -0.9267, -0.1384,  0.0115, -0.4731],
        [ 0.7256,  0.6990, -1.7374,  0.6053,  0.0224, -1.2108,  0.1974,  0.0655,
         -0.6182, -0.0797,  0.2603, -1.3280]])
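view() can also infer one dimension automatically if you pass -1 for it; a short sketch:

import torch

x = torch.randn(2, 3, 4)

# -1 tells view() to infer that dimension from the others
print(x.view(2, -1).shape)    # torch.Size([2, 12])
print(x.view(-1).shape)       # torch.Size([24])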




torch.argmax() returns the index of the maximum element; without a dim argument, the index refers to the flattened tensor:

x = torch.randn(3, 3)
print((x, torch.argmax(x)))

Output: 

(tensor([[ 1.9610, -0.7683, -2.6080],
        [-0.3659, -0.1731,  0.1061],
        [ 0.8582,  0.6420, -0.2380]]), tensor(0))
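Passing a dim argument makes argmax return one index per slice along that dimension instead of a single flattened index; a minimal sketch:

import torch

x = torch.randn(3, 3)

# one index per row: the position of each row's maximum
print(torch.argmax(x, dim=1))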




Similarly, torch.argmin() returns the index of the minimum element of the flattened tensor:

x = torch.randn(3, 3)
print((x, torch.argmin(x)))

Output: 

(tensor([[ 0.9838, -1.2761,  0.2257],
        [-0.4754,  1.2677,  1.1973],
        [-1.2298, -0.5710, -1.3635]]), tensor(8))
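Since the returned index refers to the flattened tensor, you can recover the 2-D position with divmod by the number of columns; a small sketch (this works because a contiguous tensor is laid out row-major):

import torch

x = torch.randn(3, 3)

# flattened index -> (row, column) for a contiguous row-major tensor
flat = torch.argmin(x).item()
row, col = divmod(flat, x.size(1))
print(row, col, x[row, col])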
