LinearRegressor.SGD constructor

LinearRegressor.SGD(
  DataFrame trainData,
  String targetName, {
  int iterationLimit = iterationLimitDefaultValue,
  LearningRateType learningRateType = learningRateTypeDefaultValue,
  InitialCoefficientsType initialCoefficientType = initialCoefficientsTypeDefaultValue,
  double initialLearningRate = initialLearningRateDefaultValue,
  double decay = decayDefaultValue,
  int dropRate = dropRateDefaultValue,
  double minCoefficientUpdate = minCoefficientsUpdateDefaultValue,
  double lambda = lambdaDefaultValue,
  bool fitIntercept = fitInterceptDefaultValue,
  double interceptScale = interceptScaleDefaultValue,
  bool collectLearningData = collectLearningDataDefaultValue,
  DType dtype = dTypeDefaultValue,
  int? randomSeed,
  Matrix? initialCoefficients,
})

Linear regression with Stochastic Gradient Descent optimization and L2 regularization
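
Below is a minimal usage sketch. The dataset, column names, and parameter values are made up for illustration, and the predict call is assumed from the regressor's public API; training itself happens right inside the constructor call.

import 'package:ml_algo/ml_algo.dart';
import 'package:ml_dataframe/ml_dataframe.dart';

void main() {
  // A tiny in-memory dataset; 'price' is the target column.
  final trainData = DataFrame([
    ['square_meters', 'rooms', 'price'],
    [35, 1, 100],
    [52, 2, 145],
    [78, 3, 210],
    [95, 4, 260],
  ]);

  // The model is fitted right in the constructor call.
  final regressor = LinearRegressor.SGD(
    trainData,
    'price',
    iterationLimit: 200,
    initialLearningRate: 1e-3,
    lambda: 0.01,
    fitIntercept: true,
    randomSeed: 3,
  );

  // Predict for unseen observations (features only, no target column).
  final newData = DataFrame([
    ['square_meters', 'rooms'],
    [60, 2],
  ]);

  print(regressor.predict(newData));
}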

Parameters:

trainData A DataFrame with the observations used by the regressor to learn the coefficients of the predicting hyperplane. It must contain the targetName column.

targetName The name of the target column, i.e. the column that contains the observation labels.

iterationLimit The maximum number of fitting iterations. Used as a convergence criterion for the optimization algorithm. Default value is 100.

initialLearningRate The initial learning rate, which defines how fast gradient descent-based optimizers converge. Default value is 1e-3.

decay The value defining how quickly the learning rate decreases. Applicable only to the LearningRateType.timeBased, LearningRateType.stepBased, and LearningRateType.exponential strategies.

dropRate The number of learning iterations after which the learning rate is decreased. Applicable only to the LearningRateType.stepBased strategy; it is ignored by the other learning rate strategies.

minCoefficientUpdate The minimum distance between the coefficient vectors of two consecutive iterations. Used as a convergence criterion for the optimization algorithm: if the difference between the two vectors is small enough, there is no reason to continue fitting. Default value is 1e-12.

lambda The L2 regularization coefficient, used to prevent the regressor from overfitting. The greater the value of lambda, the more the coefficients are regularized. An extremely large lambda may shrink the coefficients to almost nothing, while too small a lambda may result in coefficients with excessively large absolute values.

randomSeed A seed value passed to the random number generator used by stochastic optimizers. It is ignored if the solver is not stochastic. Keep in mind that every run of a stochastic regressor with the same parameters but an unspecified randomSeed produces different results. To make the results reproducible, define randomSeed.
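
For example, fixing randomSeed makes two otherwise identical runs reproducible. This is only a sketch with toy data; the coefficients getter is assumed here purely to compare the two fitted models.

import 'package:ml_algo/ml_algo.dart';
import 'package:ml_dataframe/ml_dataframe.dart';

void main() {
  final data = DataFrame([
    ['x', 'y'],
    [1, 2.1],
    [2, 3.9],
    [3, 6.2],
    [4, 7.8],
  ]);

  // Same data, same parameters, same seed: the two models are expected
  // to end up with identical coefficients.
  final first = LinearRegressor.SGD(data, 'y', randomSeed: 42);
  final second = LinearRegressor.SGD(data, 'y', randomSeed: 42);

  print(first.coefficients);
  print(second.coefficients);
}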

fitIntercept Whether or not to fit an intercept term. Default value is false. In a 2-dimensional space the intercept is the bias of the line relative to the X-axis.

interceptScale A value defining the scale of the intercept term.

learningRateType A value defining the strategy for the learning rate behaviour throughout the fitting process.
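
For instance, a step-based schedule combines learningRateType with the decay and dropRate parameters described above. The following is a sketch with arbitrary values, not a recommended configuration.

import 'package:ml_algo/ml_algo.dart';
import 'package:ml_dataframe/ml_dataframe.dart';

void main() {
  final data = DataFrame([
    ['x', 'y'],
    [1, 3],
    [2, 5],
    [3, 7],
    [4, 9],
  ]);

  // Step-based schedule: the learning rate starts at 1e-2 and is lowered
  // according to `decay` after every `dropRate` iterations.
  final regressor = LinearRegressor.SGD(
    data,
    'y',
    learningRateType: LearningRateType.stepBased,
    initialLearningRate: 1e-2,
    decay: 0.5,
    dropRate: 10,
    iterationLimit: 100,
    randomSeed: 1,
  );

  print(regressor.coefficients);
}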

initialCoefficientType Defines how the coefficients are autogenerated for the first iteration of optimization. By default, all autogenerated coefficients are equal to zero. If initialCoefficients is provided, this parameter is ignored.

initialCoefficients Coefficients to be used in the first iteration of the optimization algorithm. initialCoefficients should have a length equal to the number of features in trainData.
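
A sketch of passing custom initial coefficients, assuming the Matrix type comes from the ml_linalg package and that a single-column matrix with one value per feature is expected:

import 'package:ml_algo/ml_algo.dart';
import 'package:ml_dataframe/ml_dataframe.dart';
import 'package:ml_linalg/matrix.dart';

void main() {
  final data = DataFrame([
    ['x1', 'x2', 'y'],
    [1, 2, 5],
    [2, 3, 8],
    [3, 4, 11],
  ]);

  // Two features in the data, so two starting values are supplied.
  final startCoefficients = Matrix.column([0.5, 0.5]);

  final regressor = LinearRegressor.SGD(
    data,
    'y',
    initialCoefficients: startCoefficients,
    randomSeed: 1,
  );

  print(regressor.coefficients);
}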

collectLearningData Whether or not to collect learning data, for instance the cost function value at each iteration. Collecting this data significantly affects performance. If collectLearningData is true, one may access the costPerIteration getter to evaluate the learning process more thoroughly.
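
For example, one might inspect the learning process like this (a sketch with toy data; costPerIteration is the getter mentioned above):

import 'package:ml_algo/ml_algo.dart';
import 'package:ml_dataframe/ml_dataframe.dart';

void main() {
  final data = DataFrame([
    ['x', 'y'],
    [1, 2],
    [2, 4],
    [3, 6],
    [4, 8],
  ]);

  final regressor = LinearRegressor.SGD(
    data,
    'y',
    iterationLimit: 50,
    collectLearningData: true,
    randomSeed: 1,
  );

  // The cost value recorded at each iteration of the fitting process.
  print(regressor.costPerIteration);
}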

dtype The data type for all numeric values used by the algorithm. It can affect the performance and accuracy of the computations. Default value is DType.float32.

Implementation

factory LinearRegressor.SGD(
  DataFrame trainData,
  String targetName, {
  int iterationLimit = iterationLimitDefaultValue,
  LearningRateType learningRateType = learningRateTypeDefaultValue,
  InitialCoefficientsType initialCoefficientType =
      initialCoefficientsTypeDefaultValue,
  double initialLearningRate = initialLearningRateDefaultValue,
  double decay = decayDefaultValue,
  int dropRate = dropRateDefaultValue,
  double minCoefficientUpdate = minCoefficientsUpdateDefaultValue,
  double lambda = lambdaDefaultValue,
  bool fitIntercept = fitInterceptDefaultValue,
  double interceptScale = interceptScaleDefaultValue,
  bool collectLearningData = collectLearningDataDefaultValue,
  DType dtype = dTypeDefaultValue,
  int? randomSeed,
  Matrix? initialCoefficients,
}) =>
    initLinearRegressorModule().get<LinearRegressorFactory>().create(
          fittingData: trainData,
          targetName: targetName,
          optimizerType: LinearOptimizerType.gradient,
          iterationsLimit: iterationLimit,
          learningRateType: learningRateType,
          initialCoefficientsType: initialCoefficientType,
          initialLearningRate: initialLearningRate,
          decay: decay,
          dropRate: dropRate,
          minCoefficientsUpdate: minCoefficientUpdate,
          lambda: lambda,
          regularizationType: RegularizationType.L2,
          fitIntercept: fitIntercept,
          interceptScale: interceptScale,
          randomSeed: randomSeed,
          batchSize: 1,
          initialCoefficients: initialCoefficients,
          isFittingDataNormalized: false,
          collectLearningData: collectLearningData,
          dtype: dtype,
        );