deepctr_torch.models.basemodel module

Author:
Weichen Shen, weichenswc@163.com
zanshuxun, zanshuxun@aliyun.com
class deepctr_torch.models.basemodel.BaseModel(linear_feature_columns, dnn_feature_columns, l2_reg_linear=1e-05, l2_reg_embedding=1e-05, init_std=0.0001, seed=1024, task='binary', device='cpu', gpus=None)[source]
compile(optimizer, loss=None, metrics=None)[source]
Parameters:
  • optimizer – String (name of optimizer, e.g. "sgd", "adam", "adagrad", "rmsprop") or a torch.optim optimizer instance.
  • loss – String (name of objective function) or an objective function.
  • metrics – List of metrics to be evaluated by the model during training and testing. Typically you will use metrics=['accuracy'].
evaluate(x, y, batch_size=256)[source]
Parameters:
  • x – Numpy array of test data (if the model has a single input), or list of Numpy arrays (if the model has multiple inputs).
  • y – Numpy array of target (label) data (if the model has a single output), or list of Numpy arrays (if the model has multiple outputs).
  • batch_size – Integer or None. Number of samples per evaluation step. If unspecified, batch_size will default to 256.
Returns:

Dict contains metric names and metric values.
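
To illustrate the shape of that return value, the snippet below builds the same kind of metric-name-to-value dict by hand with numpy for a synthetic binary-classification batch; the metric names are examples, and this is not deepctr_torch's internal implementation:

```python
import numpy as np

# Synthetic labels and predicted probabilities for four samples.
y_true = np.array([0, 1, 1, 0], dtype=float)
y_pred = np.array([0.1, 0.8, 0.6, 0.3], dtype=float)

# Binary cross-entropy (log loss), clipped for numerical stability.
eps = 1e-7
logloss = -np.mean(y_true * np.log(y_pred + eps)
                   + (1 - y_true) * np.log(1 - y_pred + eps))

# The kind of dict evaluate() returns: metric name -> metric value.
result = {"binary_crossentropy": float(logloss),
          "accuracy": float(np.mean((y_pred > 0.5) == y_true))}
```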

fit(x=None, y=None, batch_size=None, epochs=1, verbose=1, initial_epoch=0, validation_split=0.0, validation_data=None, shuffle=True, callbacks=None)[source]
Parameters:
  • x – Numpy array of training data (if the model has a single input), or list of Numpy arrays (if the model has multiple inputs). If input layers in the model are named, you can also pass a dictionary mapping input names to Numpy arrays.
  • y – Numpy array of target (label) data (if the model has a single output), or list of Numpy arrays (if the model has multiple outputs).
  • batch_size – Integer or None. Number of samples per gradient update. If unspecified, batch_size will default to 256.
  • epochs – Integer. Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided. Note that in conjunction with initial_epoch, epochs is to be understood as “final epoch”. The model is not trained for a number of iterations given by epochs, but merely until the epoch of index epochs is reached.
  • verbose – Integer. 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch.
  • initial_epoch – Integer. Epoch at which to start training (useful for resuming a previous training run).
  • validation_split – Float between 0 and 1. Fraction of the training data to be used as validation data. The model will set apart this fraction of the training data, will not train on it, and will evaluate the loss and any model metrics on this data at the end of each epoch. The validation data is selected from the last samples in the x and y data provided, before shuffling.
  • validation_data – tuple (x_val, y_val) or tuple (x_val, y_val, val_sample_weights) on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. validation_data will override validation_split.
  • shuffle – Boolean. Whether to shuffle the order of the batches at the beginning of each epoch.
  • callbacks – List of deepctr_torch.callbacks.Callback instances to apply during training and validation (if validation data is provided). See [callbacks](https://tensorflow.google.cn/api_docs/python/tf/keras/callbacks). Now available: EarlyStopping, ModelCheckpoint.
Returns:

A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable).
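
The validation_split rule above ("selected from the last samples, before shuffling") can be sketched with plain numpy; this mirrors the documented behaviour but is not deepctr_torch's internal code:

```python
import numpy as np

# Eight samples; with validation_split=0.25 the validation slice is
# taken from the END of the data, before any shuffling.
x = np.arange(8)
y = np.arange(8)

validation_split = 0.25
split_at = int(len(x) * (1.0 - validation_split))  # index 6 of 8

x_train, x_val = x[:split_at], x[split_at:]
y_train, y_val = y[:split_at], y[split_at:]
# x_val holds the last 25% of the samples: [6, 7]
```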

predict(x, batch_size=256)[source]
Parameters:
  • x – The input data, as a Numpy array (or list of Numpy arrays if the model has multiple inputs).
  • batch_size – Integer. If unspecified, it will default to 256.
Returns:

Numpy array(s) of predictions.
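
The batching behaviour of predict() can be sketched as follows: the input is processed in chunks of batch_size and the per-batch outputs are concatenated, so the result always has one row per input sample. The helper and model function below are illustrative stand-ins, not deepctr_torch's internals:

```python
import numpy as np

def batched_predict(model_fn, x, batch_size=256):
    """Apply model_fn to x in chunks of batch_size and concatenate."""
    outputs = [model_fn(x[i:i + batch_size])
               for i in range(0, len(x), batch_size)]
    return np.concatenate(outputs, axis=0)

# Stand-in "model": sums each row, returning one value per sample.
x = np.random.rand(10, 3)
preds = batched_predict(lambda batch: batch.sum(axis=1, keepdims=True),
                        x, batch_size=4)
# preds has shape (10, 1): one prediction per sample, regardless of
# how the 10 samples were split into batches of 4.
```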