deepctr_torch.layers.core module

class deepctr_torch.layers.core.Conv2dSame(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)[source]

TensorFlow-like ‘SAME’ convolution wrapper for 2D convolutions

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself instead of this method, since the former takes care of running the registered hooks while the latter silently ignores them.
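
A minimal usage sketch of the documented constructor (the batch size, channel counts, input size, and stride below are illustrative choices). With ‘SAME’ padding, the spatial output size is ceil(input_size / stride), matching TensorFlow's behavior:

```python
import torch

from deepctr_torch.layers.core import Conv2dSame

# Illustrative sizes: batch of 4, 3 input channels, 7x7 feature map.
conv = Conv2dSame(in_channels=3, out_channels=8, kernel_size=3, stride=2)
x = torch.randn(4, 3, 7, 7)  # (batch, channels, height, width)
y = conv(x)                  # 'SAME' padding: output H/W = ceil(7 / 2) = 4
print(y.shape)               # torch.Size([4, 8, 4, 4])
```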

class deepctr_torch.layers.core.DNN(inputs_dim, hidden_units, activation='relu', l2_reg=0, dropout_rate=0, use_bn=False, init_std=0.0001, dice_dim=3, seed=1024, device='cpu')[source]

The Multi-Layer Perceptron (MLP)

Input shape
  • nD tensor with shape: (batch_size, ..., input_dim). The most common situation would be a 2D input with shape (batch_size, input_dim).
Output shape
  • nD tensor with shape: (batch_size, ..., hidden_units[-1]). For instance, for a 2D input with shape (batch_size, input_dim), the output would have shape (batch_size, hidden_units[-1]).
Arguments
  • inputs_dim: input feature dimension.
  • hidden_units: list of positive integers; its length sets the number of layers and each entry gives the number of units in that layer.
  • activation: Activation function to use.
  • l2_reg: float between 0 and 1. L2 regularizer strength applied to the kernel weights matrix.
  • dropout_rate: float in [0,1). Fraction of the units to dropout.
  • use_bn: bool. Whether to use BatchNormalization before the activation.
  • seed: A Python integer to use as random seed.
forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself instead of this method, since the former takes care of running the registered hooks while the latter silently ignores them.
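
A minimal usage sketch built from the documented signature; the batch size, input dimension, and hidden layer sizes are illustrative choices:

```python
import torch

from deepctr_torch.layers.core import DNN

# Two hidden layers of 64 and 32 units on a 16-dimensional input.
mlp = DNN(inputs_dim=16, hidden_units=[64, 32], activation='relu',
          dropout_rate=0.5, use_bn=True)
x = torch.randn(32, 16)  # (batch_size, input_dim)
out = mlp(x)             # 16 -> 64 -> 32
print(out.shape)         # torch.Size([32, 32])
```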

class deepctr_torch.layers.core.LocalActivationUnit(hidden_units=(64, 32), embedding_dim=4, activation='sigmoid', dropout_rate=0, dice_dim=3, l2_reg=0, use_bn=False)[source]
The LocalActivationUnit used in DIN, with which the representation of user interests varies adaptively given different candidate items.
Input shape
  • A list of two 3D tensors with shapes: (batch_size, 1, embedding_size) and (batch_size, T, embedding_size)
Output shape
  • 3D tensor with shape: (batch_size, T, 1).
Arguments
  • hidden_units: list of positive integers; its length sets the number of attention-net layers and each entry gives the number of units in that layer.
  • activation: Activation function to use in attention net.
  • l2_reg: float between 0 and 1. L2 regularizer strength applied to the kernel weights matrix of attention net.
  • dropout_rate: float in [0,1). Fraction of the units to dropout in attention net.
  • use_bn: bool. Whether to use BatchNormalization before the activation in the attention net.
  • seed: A Python integer to use as random seed.
References
  • [Zhou G, Zhu X, Song C, et al. Deep interest network for click-through rate prediction[C]//Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2018: 1059-1068.](https://arxiv.org/pdf/1706.06978.pdf)
forward(query, user_behavior)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself instead of this method, since the former takes care of running the registered hooks while the latter silently ignores them.
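
A minimal usage sketch; the batch size of 8, behavior sequence length T=10, and embedding_dim=4 are illustrative choices. The unit produces one attention score per behavior step for the given candidate item:

```python
import torch

from deepctr_torch.layers.core import LocalActivationUnit

att = LocalActivationUnit(hidden_units=(64, 32), embedding_dim=4)
query = torch.randn(8, 1, 4)      # candidate item: (batch_size, 1, embedding_size)
behavior = torch.randn(8, 10, 4)  # user behaviors: (batch_size, T, embedding_size)
scores = att(query, behavior)     # one score per behavior step
print(scores.shape)               # torch.Size([8, 10, 1])
```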

class deepctr_torch.layers.core.PredictionLayer(task='binary', use_bias=True, **kwargs)[source]
Arguments
  • task: str, "binary" for binary logloss or "regression" for regression loss.
  • use_bias: bool. Whether to add a bias term.
forward(X)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance itself instead of this method, since the former takes care of running the registered hooks while the latter silently ignores them.
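
A minimal usage sketch, assuming that for task='binary' the layer squashes raw logits into (0, 1) probabilities via a sigmoid; the example logits are illustrative:

```python
import torch

from deepctr_torch.layers.core import PredictionLayer

head = PredictionLayer(task='binary', use_bias=True)
logits = torch.tensor([[-2.0], [0.0], [1.5], [4.0]])  # illustrative raw scores
probs = head(logits)  # for 'binary', outputs lie in (0, 1); shape is preserved
print(probs.shape)    # torch.Size([4, 1])
```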