```lua
module = nn.LookupTable(nIndex, sizes)
```
or
```lua
module = nn.LookupTable(nIndex, size1, [size2], [size3], ...)
```
This layer is a particular case of a convolution, where the width of the convolution would be 1.
When calling `forward(input)`, it assumes `input` is a 1D tensor filled with indices. Indices start at 1 and can go up to `nIndex`. For each index, it outputs a corresponding Tensor of the size specified by `sizes` (a `LongStorage`) or `size1 x size2 x ...`.
The output tensors are concatenated, generating a `size1 x size2 x ... x sizeN x n` tensor, where `n` is the size of the `input` tensor.
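
For concreteness, a minimal sketch of this concatenation, assuming the multi-size constructor and the `size1 x size2 x n` output layout described above (other Torch versions may lay the output out differently):

```lua
require 'nn'

-- 10 entries, each mapping to a 2x3 tensor (per the constructor form above).
local module = nn.LookupTable(10, 2, 3)
local input  = torch.Tensor{1, 5, 1, 7}  -- 4 indices

-- Expected per the text: a 2x3x4 tensor, one 2x3 slice per input index.
print(module:forward(input):size())
```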
When only `size1` is provided, this is equivalent to doing the following matrix-matrix multiplication in an efficient manner:

```
M P
```

where `M` is a 2D matrix of size `size1 x nIndex` containing the parameters of the lookup-table, and `P` is a 2D matrix where each column vector `i` is a zero vector except at index `input[i]`, where it is `1`.
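
A small sketch of that equivalence, assuming the `size1 x n` output layout described above; `M` is recovered column by column from the module itself rather than from its internal weight storage, whose layout is not specified here:

```lua
require 'nn'

local nIndex, size1 = 5, 2
local module = nn.LookupTable(nIndex, size1)
local input  = torch.Tensor{2, 4, 2}

-- Recover M (size1 x nIndex): column i is the tensor looked up for index i.
local M = torch.Tensor(size1, nIndex)
for i = 1, nIndex do
   M:select(2, i):copy(module:forward(torch.Tensor{i}))
end

-- Build P (nIndex x n): column i is a one-hot vector at input[i].
local n = input:size(1)
local P = torch.zeros(nIndex, n)
for i = 1, n do
   P[input[i]][i] = 1
end

-- M*P should match the lookup's output (the same size1 x n tensor).
print(torch.mm(M, P))
print(module:forward(input))
```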
Example:
```lua
-- a lookup table containing 10 tensors of size 3
module = nn.LookupTable(10, 3)

input = torch.Tensor(4)
input[1] = 1; input[2] = 2; input[3] = 1; input[4] = 10;
print(module:forward(input))
```
Outputs something like:
```
-0.1784  2.2045 -0.1784 -0.2475
-1.0120  0.0537 -1.0120 -0.2148
-1.2840  0.8685 -1.2840 -0.2792
[torch.Tensor of dimension 3x4]
```
Note that the first column vector is the same as the 3rd one!