This page evaluates the test results for each forecast method and identifies the best fit method.

You can select which product and customer data displays on the page with the context selectors on the top-right of the page. To search for a product or customer, enter a search term or reference ID in the Find... fields.

If you edit these parameters in the Edit Parameters card, the parameters change at a global level and apply to all product-customer combinations. The global level changes are reflected in the Effective Parameters card.

Parameter | Value
Forecast Error Basis

The calculation for the forecast error is the absolute forecast error divided by the selected divisor. This parameter specifies the divisor used in the calculation. 

To change the parameter, select the cell next to Forecast Error Basis and select a parameter from the dropdown:

  • Actuals: Divides the forecast error by history.
  • Forecast: Divides the forecast error by the forecast.
  • Minimum Error: Divides the forecast error by either the history or the forecast (whichever gives the lowest error). This measure helps to prevent behaviors that can introduce bias into forecasts.
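The three divisor options above can be sketched as follows. This is an illustrative example, not the product's actual implementation, and the function name and inputs are hypothetical.

```python
# Illustrative sketch of the Forecast Error Basis options: the absolute
# forecast error is divided by the chosen divisor to give a percentage.
def forecast_error_pct(actual, forecast, basis):
    abs_error = abs(forecast - actual)
    if basis == "Actuals":
        divisor = actual                      # divide by history
    elif basis == "Forecast":
        divisor = forecast                    # divide by the forecast
    elif basis == "Minimum Error":
        # whichever divisor gives the lowest error, i.e. the larger value
        divisor = max(actual, forecast)
    else:
        raise ValueError(f"unknown basis: {basis}")
    return 100.0 * abs_error / divisor

# Example: actual demand 80, forecast 100.
print(forecast_error_pct(80, 100, "Actuals"))        # 25.0
print(forecast_error_pct(80, 100, "Forecast"))       # 20.0
print(forecast_error_pct(80, 100, "Minimum Error"))  # 20.0
```

Note how the Minimum Error basis caps the percentage at the lower of the other two results, which is why it helps prevent divisor-driven bias.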
MAPE/Accuracy

Defines whether to display the Mean Absolute Percentage Error (MAPE) or Forecast Accuracy (defined as 100% minus MAPE).

To change this parameter, select the cell next to MAPE / Accuracy and select either MAPE or Accuracy from the dropdown.
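The relationship between the two display options can be sketched as below. This is a standard MAPE definition shown for illustration; the product's exact calculation may differ (for example, in how it applies the Forecast Error Basis above).

```python
# Illustrative sketch: Forecast Accuracy is defined as 100% minus MAPE.
def mape(actuals, forecasts):
    """Mean Absolute Percentage Error over paired periods (divisor: actuals)."""
    errors = [abs(f - a) / a for a, f in zip(actuals, forecasts)]
    return 100.0 * sum(errors) / len(errors)

def accuracy(actuals, forecasts):
    return 100.0 - mape(actuals, forecasts)

hist = [100, 200, 400]
fcst = [110, 180, 400]
print(round(mape(hist, fcst), 2))      # 6.67
print(round(accuracy(hist, fcst), 2))  # 93.33
```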

Lead Time Offset

This parameter specifies the number of planning periods to use as the lead time offset for forecast error calculations. For example: for a forecast made in Week 1, select a Lead Time Offset of 2 to measure that forecast against Week 3.

To change the number, select the cell next to Lead Time Offset and select a number from the dropdown.

Periods to Sum

This parameter specifies the number of planning periods to sum in the calculation of MAPE. For example: for a forecast calculated in Week 1, select a Lead Time Offset of 2 and a Periods to Sum of 3 to measure the forecast made in Week 1 offset to Week 3, and then summed over Weeks 3, 4, and 5.

Note: This parameter references the Initialization Periods set in D2000 - Global Stat Parameters and D2200 - Stat Optimization Parameters. The test window references the initialization period set for the forecast method and deducts that number of periods when producing a forecast. For example, if the Number of Tests is set to 10 and the initialization period is set to 6, the forecast is based on the difference between the two values: 4 in this case.
Number of Tests

This parameter specifies the number of forecast calculations to be tested.

These parameters are set by a system administrator at the global level and apply to all product-customer combinations.

If you edit a parameter in the Edit Parameters card, the change applies at a global level to all product-customer combinations. The global level changes are reflected in the Effective Parameters card.

See Edit Parameters above for details about each setting.

This card shows the forecast basis input from which the statistical forecasts are calculated. 

If there is more than one forecast basis (for example, the demand forecast basis for customer-product A was set to Corrected History, and customer-product B was set to Pure Demand), the Mixed Data Check is highlighted in pink. 

The forecast basis must be the same for all customer-product combinations. This gives you an accurate view of statistical forecasts when you select a customer hierarchy level that's higher than customer-product at leaf level.

This card shows the C-Var figures calculated for the selected customer-product combination.

Parameter | Value
C-Var (Mean)

A system-calculated value for the selected product-customer combination. It's calculated based on C-Var defined as the standard deviation divided by the mean of pure history.

C-Var (Trend)

A system-calculated value for the selected product-customer combination. It's calculated based on C-Var defined as the standard error divided by the mean of the trendline.

C-Var (Residual)

A system-calculated value for the selected product-customer combination. It's calculated based on the difference in standard deviations of actual values versus predicted values.
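C-Var (Mean) is the standard coefficient of variation, which can be sketched as below. This is an illustration only; it assumes a population standard deviation, and the product may use a sample standard deviation or a different convention.

```python
# Illustrative sketch of C-Var (Mean): standard deviation of pure
# history divided by its mean (the coefficient of variation).
# Assumes population standard deviation; the product's convention may differ.
import statistics

def c_var_mean(pure_history):
    return statistics.pstdev(pure_history) / statistics.mean(pure_history)

history = [90, 110, 100, 120, 80]
print(round(c_var_mean(history), 3))  # 0.141
```

A higher C-Var indicates more volatile history relative to its average level, which generally makes the series harder to forecast.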

The parameters that display in the View Settings card are described below:

Parameter | Value
View Years History

The number of years of history to view. If a user enters more than the maximum history available, the maximum available is shown.

View Years Future

The number of future years to view. If a user enters more than the maximum available, the maximum available is shown.
View Period Type

Fiscal year or rolling year. To change the period type, select the cell next to View Period Type and select either: 

  • Fiscal Year: Shows the current fiscal year as a minimum. 
  • Rolling Year: Always shows the current period as a minimum. Select additional years to extend the view range from this minimum position.

This table displays the test results for each of the statistical forecast methods.

To identify the best-fit method for the selected customer-product combination, the table ranks each method by how closely its forecast matches the historical demand.

The columns in the table are described below:

Parameter | Value
MAPE/Accuracy (%)

Displays the system-calculated MAPE or accuracy value.

RMSE

Displays the system-calculated root mean square error value from the completed tests.

Bias (%)

Displays the system-calculated bias value from the completed tests.

Rank RMSE

Gives the rank order of the statistical methods, with a rank of 1 being best fit; that is, the method that gives the lowest RMSE value.

Can Test

This column displays which of the methods could be tested. A selected checkbox means that the method has been tested. Insufficient historical demand data might prevent methods from being tested.

Best Fit

This column displays which of the statistical forecast methods the system has identified as best fit when comparing the results from all methods.

Disable Method

Select the checkboxes in this column to disable a statistical forecast method.

EF Disable Method

This column displays the disabled method as confirmed and effective.
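The RMSE, Bias, and Rank RMSE columns can be sketched with standard definitions as below. These are textbook formulas and hypothetical data for illustration; the product's exact calculations and method names may differ.

```python
# Illustrative sketch of the test-result metrics and RMSE-based ranking.
import math

def rmse(actuals, forecasts):
    """Root mean square error over paired periods."""
    return math.sqrt(sum((f - a) ** 2 for a, f in zip(actuals, forecasts))
                     / len(actuals))

def bias_pct(actuals, forecasts):
    # Positive bias = over-forecasting on average.
    return 100.0 * sum(f - a for a, f in zip(actuals, forecasts)) / sum(actuals)

actuals = [100, 120, 110]
results = {                              # hypothetical method results
    "Moving Average":        [105, 115, 120],
    "Exponential Smoothing": [101, 119, 111],
}

# Rank RMSE: rank 1 (best fit) goes to the method with the lowest RMSE.
ranked = sorted(results, key=lambda m: rmse(actuals, results[m]))
best_fit = ranked[0]
print(best_fit)                                        # Exponential Smoothing
print(round(bias_pct(actuals, results[best_fit]), 2))  # 0.3
```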

Use this graph to compare the Testing Window, Effective Adjusted History, Forecast, Best Fit Method, and Best Fit Method Sum.

To see a graphical comparison of the forecasts, use the context selector to add a further statistical forecast to the graph. Select a method from the context selector at the bottom of the chart.