Tobacco has long been the backbone of Zimbabwe’s agricultural economy and ranks as the second most important cash crop after food crops. Of late, tobacco yield has been diminishing in Zimbabwe. A systematic study of tobacco yield was undertaken to formulate appropriate strategies to address this diminishing trend. The researchers focused on time series analysis of tobacco yield (1980-2018) using autoregressive integrated moving average (ARIMA) models to forecast yield for 2019-2023. The ARIMA model showed that tobacco production would be 1511.78 kg/hectare by the end of 2023. The researchers assumed that total tobacco yield reflects the estimated total national production in Zimbabwe. The study employed the Box-Jenkins methodology in building the ARIMA model. Data analysis was conducted using R software, and ARIMA (1, 1, 0) was identified as the best model. Model diagnostics were performed to ensure validity. The predicted total yield displayed a downward slope, characterised by slight changes to the overall decreasing trend during the forecast years. Achieving larger planted areas and higher productivity depends on the timely provision of adequate inputs, the education and training of farmers, soil conservation and reclamation, and supportive government policies on tobacco production. Further studies should consider qualitative approaches with relevant key stakeholders to solicit information on the drivers behind the observed trends.
Zimbabwe is the largest grower of tobacco in Africa and the fourth largest in the world. Tobacco is Zimbabwe’s principal agricultural export, accounting for 60% of total agricultural production. In 2017, tobacco accounted for 11% of the country’s gross domestic product, and three million of Zimbabwe’s sixteen million residents depend on tobacco products for their livelihood.
Until recently, Zimbabwe had experienced steady economic growth. Zimbabwe’s total exports of tobacco increased by 40 percent between the 1981-1983 and 1996-1998 seasons. Although the share of tobacco in total agricultural exports declined from its peak of 78 percent in 1992, it continued to account for more than 55 percent of total agricultural exports during 1996-1998. Increases in both planted area and yields have contributed to a significant increase in tobacco output over the past decades: comparing the three-year average for 1980-82 with 1998-2000, yield increased by about 29 percent, from 1 900 kg/ha to 2 510 kg/ha. Most commercial tobacco farmers practice a five-year rotation; the rotation crops are an integral component of the overall land use system and help provide a steady cash flow. More than 80 percent of all horticultural exports, for example, are grown on tobacco farms and were first developed using tobacco revenue. Although not as advanced as large-scale commercial growers, most small-scale commercial farmers still produce at a reasonably high level and enjoy good access to basic equipment, including ox ploughs and carts, hand sprayers, sufficient barn space for curing tobacco, and baling equipment. The three main types of tobacco grown in Zimbabwe are flue-cured, burley and oriental. Of these, flue-cured is by far the most important and is generally produced in the better rainfall areas to the north and east of Harare. The northern regions produce a Virginia type of tobacco, whereas growers in the east produce a thicker, slower-developing type used for blended cigarettes. Since cigarette production in Zimbabwe is on a small scale, the major activities in the tobacco industry are the growing, curing, and subsequent handling and distribution of tobacco leaf. The country does not have a large tobacco manufacturing industry and produces only enough cigarettes to supply domestic demand and provide a relatively small volume for export.
Consequently, 98 percent of all tobacco production is exported. All tobacco grown in Zimbabwe is sold as unprocessed green leaf on auction floors in several parts of the country. Tobacco production has thus provided an economic base for farmers to develop other production opportunities. It generates considerable rural employment and supports other parts of the economy, including input supply, transportation services, coal mining, hospitality during the auction season, and other consumer services.
This highlights the need to forecast crop yield, as it supports planning of future support in terms of inputs such as fertilizers, pesticides and seeds, as well as agricultural extension services, loans, and insurance. Yield determination is of paramount importance, as it helps farmers reduce their losses and obtain the best prices for their crops. Because yield data can be expressed as a time series, ARIMA models can be used to forecast future tobacco yield volumes. The autoregressive integrated moving average (ARIMA) model, first introduced by Box and Jenkins in 1976, is a technique for analysing time series data that uses historical data to identify the general trend and bases future predictions on the results of that analysis [1].
Time series data can be generated from measurements of biological, physical, economic, or environmental phenomena of interest. [2] argued that the three main goals of time series analysis are forecasting, modelling and characterisation. [3] indicated that time series analysis is a general tool of dominant practical interest in many disciplines, allowing one to discover, with some margin of error, the future values of a series from its past values. Many studies have fitted ARIMA models in the agriculture sector around the world for different types of agricultural crops. Yields for crops such as maize [4], tea [5], sugarcane [6], wheat [7], vegetables [8] and oilseeds [9], among many others, have been modelled and predicted using ARIMA models. According to [10], ARIMA models have also received increasing application in different sectors across the world, including exchange rates, economic growth, weather, stock market indices, sales, transport, earthquakes, terrorist attacks, and loan defaults, among many others. [11], for example, used ARIMA models for stock price prediction using data from the New York Stock Exchange. [12] emphasised that ARIMA models, like any other predictive models used in forecasting, have limitations in prediction accuracy, yet they are widely used for forecasting the future successive values of a time series. [13] added new dimensions to the evolution of this literature: they introduced a univariate filtering model, an ARIMA (0, 1, 2), to best represent crop yield series.
[14] used an ARIMA model to forecast different types of seasonal rice production in Bangladesh, concluding that ARIMA models give good forecasting results for short-term analysis. [15] used ARIMA models to forecast pigeon pea production in India. [12] conducted a research study using a time series ARIMA forecasting model for predicting sugarcane production in India, using secondary data collected from 1950 to 2012 to predict the five following years. They selected the model because it assumes and accounts for the non-zero autocorrelation between successive values of the time series. ARIMA (1, 1, 2) was found to be the best model, having the minimum Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) values. The study also statistically tested and validated that the successive residuals (forecast errors) of the fitted ARIMA series were uncorrelated and that the residuals appeared normally distributed with mean zero and constant variance. They concluded that the fitted model predicted an increase in sugarcane production for 2013, then a fall in 2014, with the subsequent years up to 2017 indicating an overall increase in production. [16] applied an ARIMA model to 60 years of time series data (from 1950 to 2010) to forecast the annual productivity of 34 selected agricultural products in India. The validity of the model was verified with model selection criteria such as minimum AIC and lowest MAPE (Mean Absolute Percentage Error) values.
[17] analysed several forecasting techniques for evaluating crop yield assessments in Ghana to provide useful information for decision-making about Ghana’s economy. They compared models such as damped-trend linear exponential smoothing, ARMA models, and simple and double exponential smoothing on each district’s data, and found the ARMA model more robust (independent of cyclical length) and preferable to the other models.
[18] worked on modelling and forecasting wheat production in India from 1961-2013, comparing different models (parametric regression, exponential smoothing and ARIMA) against the observed trend of wheat production. ARIMA (1, 1, 0) was found to be the best model on the basis of several goodness-of-fit criteria (Root Mean Squared Error, Mean Absolute Percentage Error, Mean Absolute Error, Mean Squared Error, Akaike Information Criterion, Schwarz’s Bayesian Information Criterion and R-squared), and a value of 100.271 million tonnes was forecast for the 2017-18 season. They also examined the independence and normality assumptions through the run test and the Shapiro-Wilk test, respectively.
The question of whether crop forecasts are necessary comes up from time to time. While different types of forecasting models have been used for crop production, ARIMA has proved to produce sound forecasts; hence there is a need for tobacco forecasting in Zimbabwe, as tobacco is one of the country’s most valuable cash crops. This research aims to assess the trend of tobacco yield in Zimbabwe over the past years and to develop a mathematical model of future tobacco production using a combination of AR and MA processes, analysing historical data from 1980-2018 to predict annual tobacco yield in Zimbabwe from 2019 to 2023.
The researchers used a quantitative research design to understand relationships between variables in the time series analysis of tobacco yield in Zimbabwe, using secondary data from the Zimbabwe National Statistics Agency (ZIMSTAT) from 1980 to 2018. The data consist of actual area planted, crop reaped and yield per hectare of the tobacco crop on commercial farms, A1 and A2 farms, resettlement farms, and communal farms. The empirical and inferential statistics were presented in tables, graphs, and narrative form. The researchers manipulated the data and gave it meaning through descriptive and inferential statistics, enabling the research to reach sound conclusions about which predictive models are best to include in the algorithm.
2.1. Time Series Analysis
Time series analysis helps us to understand the underlying naturalistic process and the pattern of change over time, or to evaluate the effects of either planned or unplanned activities. Any data on variables collected at successive, equal intervals over a certain period form a time series. A time series can be defined as a collection of random variables indexed according to the order in which they are obtained in time. A time series model for the observed data {Yt} is a specification of the joint distributions (or possibly only the means and covariances) of a sequence of random variables {Yt} of which the observed data {yt} is postulated to be a realization. In forecasting, time series plots should be visualised to identify the pattern or trend exhibited by the data and so arrive at the best model.
2.2. Time Series Components
There are four components of a time series: trend, seasonal, cyclical, and irregular variation. According to [19], time series analysis can isolate each component and quantify the extent to which each influences the form of the observed data, and forecasting can project the underlying pattern into the future. Time series plots can reveal patterns such as randomness, trends, level shifts, periods or cycles, unusual observations, or a combination of these [20].
The trend component of a time series shows the overall long-term direction of the data, whether downward or upward, in each period in a predictable manner. The trend might be linear or non-linear depending on the variables considered. The seasonal component exists when the series exhibits regular fluctuations based on the seasons; seasonal variation occurs at a specific period, such as monthly, quarterly, or yearly. The cyclical component exhibits oscillatory movements of the trend that span more than one year: the data show rises and falls that are not periodic but repeat over a longer life span. Any variation not explained by the trend, seasonal, and cyclical components is called the random or irregular component: disturbances that are not predictable, as they do not follow the general pattern exhibited in the time series data.
2.3. Assumptions of Time Series Analysis
When dealing with time series it is important to test for stationarity before moving forward with any type of analysis. In modelling time series, we assume that the data are stationary, that is, the mean, variance, and autocorrelation structure do not change over time. To avoid spurious results when forecasting, we should ensure that the data do not exhibit any trend or seasonality. A plot of the time series can show whether the data are stationary, and as an informal check one can partition the data into two sets and compare their means and variances to see whether they change over time. Non-stationary data can be transformed into stationary data either by differencing or by log transformation. To check stationarity formally, we use unit root tests (distinguishing difference stationarity from trend stationarity), in particular the Augmented Dickey-Fuller (ADF) test.
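The informal split-and-compare check and the differencing transformation described above can be sketched in a few lines. The study itself used R; the Python below is purely illustrative, and the function names are the authors' of this sketch, not part of any library:

```python
import statistics

def difference(series, d=1):
    """Apply d-th order differencing: W_t = Y_t - Y_{t-1}, repeated d times."""
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

def crude_stationarity_check(series):
    """Informal check from the text: split the series in two and compare
    the halves' means and variances. Large shifts suggest non-stationarity.
    (Not a substitute for a formal unit-root test such as the ADF test.)"""
    mid = len(series) // 2
    first, second = series[:mid], series[mid:]
    return (abs(statistics.mean(first) - statistics.mean(second)),
            abs(statistics.variance(first) - statistics.variance(second)))

# A trending series is non-stationary; first differencing removes the trend.
trend = [2.0 * t for t in range(40)]
mean_shift, _ = crude_stationarity_check(trend)  # large shift between halves
flat = difference(trend)                         # every element equals 2.0
```

After one difference the trending series is constant, which is why a single difference (d = 1) often suffices in practice.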
Another assumption is that the data follow a normal distribution; its violation may result in wrong parameter estimates or forecasts. Generally, histograms, stem-and-leaf plots, box plots, percent-percent (P-P) plots, quantile-quantile (Q-Q) plots, plots of the empirical cumulative distribution function and other variants of probability plots are most commonly applied when checking the normality assumption [21]. Apart from these graphical checks, analytical tests based on the empirical distribution function (EDF) may be used, such as the Kolmogorov-Smirnov test and the Shapiro-Wilk test.
The disturbances or residuals should be independent, with no autocorrelation. The Durbin-Watson test is used to check for positive autocorrelation in the residuals. Another method is to plot the residuals against the fitted values; authors argue that if the model is correct, this plot should be structureless. A further tool is to plot the autocorrelation function (ACF) of the residuals, which should show no significant terms, although about 1 lag in 20 is expected to fall outside the approximate bounds of ±2/√n by chance alone, where n is the number of observations in the time series.
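The sample ACF and its approximate significance bound can be computed directly. This is an illustrative Python sketch (the study used R, where `acf()` does this); the helper names are this sketch's own:

```python
import math

def acf(x, max_lag):
    """Sample autocorrelation r_k of a series for lags 1..max_lag."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x)
    return [sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / c0
            for k in range(1, max_lag + 1)]

def bound(n):
    """Approximate 95% significance bound for a white-noise ACF; about
    1 lag in 20 is expected to exceed +/- 2/sqrt(n) by chance alone."""
    return 2.0 / math.sqrt(n)

# With 39 annual observations (1980-2018) the bound is roughly 0.32,
# so residual autocorrelations within +/- 0.32 are unremarkable.
limit = bound(39)
```

In residual diagnostics one compares each element of `acf(residuals, m)` against `±bound(n)` and looks for values outside the band.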
The variance of residuals should have constant variance and it can be checked by plotting the residuals scatter plot. It should exhibit the rectangular shape around zero horizontal levels without showing any trends.
2.4. Time Series Models
A moving average is an average of a specific number of time series values around each value of t in the series, except for the first few and last few terms. It is one of the techniques used for smoothing in time series analysis as well as in forecasting, and it is only used on a series that does not have a trend. An example of a moving average series of order q, MA(q), is:
$Y_t = c_0 + a_t - \theta_1 a_{t-1} - \theta_2 a_{t-2} - \cdots - \theta_q a_{t-q}$   (1)
where c0 is a constant, at is a white noise series, and θ1, θ2, …, θq are model parameters (Tsay, 2010).
An autoregression is a time series model that uses previous observations to predict future observations. An example of an AR(p) model of order p is:
$Y_t = \phi_0 + \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + a_t$   (2)
where ϕ0 is the constant term, ϕ1, …, ϕp are model parameters, and at is assumed to be a white noise series.
The ARMA model is used to explain weakly stationary stochastic time series and is a combination of the AR(p) and MA(q) models. An example of ARMA(p, q) is given below:
$Y_t = \phi_0 + \sum_{i=1}^{p} \phi_i Y_{t-i} + a_t - \sum_{j=1}^{q} \theta_j a_{t-j}$   (3)
An ARMA model combines the ideas of AR and MA models into a compact form so that the number of parameters used is kept small, achieving parsimony in parameterization 22.
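As a concrete illustration of equations (1)-(3), the recursion below generates a synthetic ARMA(1,1) series. This is an illustrative Python sketch with arbitrary parameter values (the study itself used R); the function name is this sketch's own:

```python
import random

def simulate_arma11(phi0, phi1, theta1, n, seed=42):
    """Generate Y_t = phi0 + phi1*Y_{t-1} + a_t - theta1*a_{t-1} with
    Gaussian white noise a_t. Setting theta1 = 0 gives a pure AR(1)
    series; setting phi1 = 0 gives a pure MA(1) series."""
    rng = random.Random(seed)
    y, a_prev = [phi0 / (1 - phi1)], 0.0  # start at the process mean
    for _ in range(n - 1):
        a = rng.gauss(0.0, 1.0)
        y.append(phi0 + phi1 * y[-1] + a - theta1 * a_prev)
        a_prev = a
    return y

series = simulate_arma11(phi0=1.0, phi1=0.5, theta1=0.3, n=200)
```

Simulating from a candidate model like this is a common way to sanity-check that its ACF/PACF behaviour matches the data being modelled.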
The main difference between the ARMA and ARIMA models is that ARIMA integrates a differencing step for non-stationary data so that the stationarity assumption is satisfied. An ARIMA model is said to be unit-root non-stationary because its AR polynomial has a unit root, and a conventional approach for handling unit-root non-stationarity is differencing (Tsay, 2010). If the first difference Wt = Yt − Yt−1 = (1 − B)Yt, or the higher-order difference Wt = (1 − B)d Yt, of a non-stationary series follows a stationary ARMA(p, q) process, then Yt is called an ARIMA(p, d, q) process, where p is the order of the AR process, d is the number of differences required for the series to become stationary, and q is the order of the moving average process.
$(1 - \phi_1 B - \cdots - \phi_p B^p)\,(1 - B)^d\, Y_t = c + (1 - \theta_1 B - \cdots - \theta_q B^q)\, a_t$   (4)
The seasonal ARIMA model incorporates both non-seasonal and seasonal factors in a multiplicative model: SARIMA(p, d, q)(P, D, Q)s. [23] proposed the following model when dealing with a time series that contains seasonal fluctuations:
$\phi_p(B)\,\Phi_P(B^s)\,(1 - B)^d (1 - B^s)^D\, Y_t = \theta_q(B)\,\Theta_Q(B^s)\, a_t$   (5)
where $Y_t$ is the observed value at time t, $a_t$ is the white-noise value at time t, d is the order of differencing, $\phi_p(B)$ is the ordinary autoregressive component of order p, $\theta_q(B)$ is the ordinary moving average component of order q, s is the number of seasons in a year, D is the order of seasonal differencing, and $\Phi_P(B^s)$ and $\Theta_Q(B^s)$ are the seasonal autoregressive and moving average components of orders P and Q at lag s. According to Box & Jenkins (1976), the operator polynomials are:
$\phi_p(B) = 1 - \phi_1 B - \phi_2 B^2 - \cdots - \phi_p B^p$   (6)
$\theta_q(B) = 1 - \theta_1 B - \theta_2 B^2 - \cdots - \theta_q B^q$   (7)
$\Phi_P(B^s) = 1 - \Phi_1 B^s - \cdots - \Phi_P B^{Ps}, \qquad \Theta_Q(B^s) = 1 - \Theta_1 B^s - \cdots - \Theta_Q B^{Qs}$   (8)
Box-Jenkins analysis [23] refers to a systematic method of identifying, fitting, checking, and using autoregressive integrated moving average (ARIMA) time series models, and the method is appropriate for series of at least 30 observations. The three iterative steps employed when conducting a time series analysis with ARIMA models are model identification through analysis of historical data, parameter estimation of the unknown model parameters, and diagnostic checking of the residuals to determine model adequacy [20].
“Identification of the appropriate ARIMA model requires skills obtained by experience” [20]. [23] postulates the following summary table on how to identify the model.
The value of p is found using the partial autocorrelations of the stationary data: if the PACF cuts off after a few lags, the last lag with a large value is the estimated value of p, and if it does not cut off then p = 0 (Box & Jenkins, 1976). The value of q is found using the autocorrelations of the stationary data: if the ACF cuts off after a few lags, the last lag with a large value is the estimated value of q (Box & Jenkins, 1976). For an “ARIMA (p, d, q) model, the autocorrelation function will be a mixture of exponential decay and damped sine waves after the first q − p lags” [20, 23].
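The identification rules above can be expressed as a small decision sketch. This is a simplification for illustration (real identification also compares AIC/BIC across candidate models, as the study later does with auto.arima in R), and the function name is this sketch's own:

```python
def identify_model(acf_pattern, pacf_pattern):
    """Tentative model class from correlogram behaviour, following the
    Box-Jenkins summary rules described in the text.

    Each argument is ("cuts_off", lag) or ("tails_off", None).
    """
    acf_kind, acf_lag = acf_pattern
    pacf_kind, pacf_lag = pacf_pattern
    if acf_kind == "tails_off" and pacf_kind == "cuts_off":
        return f"AR({pacf_lag})"   # PACF cut-off lag estimates p
    if acf_kind == "cuts_off" and pacf_kind == "tails_off":
        return f"MA({acf_lag})"    # ACF cut-off lag estimates q
    if acf_kind == "tails_off" and pacf_kind == "tails_off":
        return "ARMA(p, q): choose orders by AIC/BIC"
    return "white noise: no AR or MA terms needed"
```

For example, an ACF that tails off combined with a PACF that cuts off after lag 1 points to an AR(1) structure for the (differenced) series.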
[20] postulates that several methods, such as the method of moments, maximum likelihood, and least squares, can be employed to estimate the parameters of the tentatively identified model. Since most ARIMA models are non-linear, one may choose maximum likelihood estimation once the values of p, d and q have been found, and backcasting may be used to estimate the initial residuals (Box & Jenkins, 1976).
Model adequacy is checked through residual analysis of both the AR and MA components to determine whether the fitted model is adequate. The residuals or disturbances of the model should behave as a white noise process [20]. If the model is adequate, the scatter plot of the residuals should exhibit a rectangular shape and must not show any trends. If the model is appropriate, the residual sample autocorrelation function should have no identifiable structure [20]. Statistical tests such as the approximate chi-square test and the Ljung-Box test may be used to check model adequacy [20]. Once an adequate model is fitted, it can be used for forecasting.
The time series plot of total yield for the period 1980 to 2015 was examined to check whether the data were stationary before performing any statistical test. The plot shows a sharp increase in 1984, a drop around 1986, a stable trend from 1995-1998 followed by another sharp increase in yield in 1998, and an overall decreasing pattern thereafter up to 2015. The series is non-stationary, as evidenced by the absence of constant variation within the dataset; an overall decreasing trend was observed, as presented in Figure 1.
The ADF test was performed (Table 2) to test for stationarity; the p-value obtained was 0.6106 (> 0.05), so we fail to reject the null hypothesis and conclude that the data are non-stationary. The researchers therefore moved to the next step of differencing the time series data.
It was observed that after first differencing the data became stationary (Figure 2), as both the mean and variance became constant. Therefore, no further differencing was needed (d = 1) for the ARIMA (p, d, q) model, as the data revolved around zero.
A further test for stationarity of the first-differenced data was conducted (Table 3). Since the p-value is less than 0.05, we reject the null hypothesis and conclude that the data are stationary in mean and variance after first differencing.
The main goal of this stage is to determine the autoregressive and moving average terms in order to arrive at the identified ARIMA (p, d, q) model. The correlograms for the differenced data were examined and plotted, as presented in Figure 3 and Figure 4.
From the behaviour of the auto-correlation function (ACF) in Figure 3 and the partial auto-correlation function (PACF) in Figure 4, together with the single difference applied to the data (d = 1), the tentative model is ARIMA (1, 1, 0) with no seasonality, that is, an AR term of order p = 1 and no MA term (q = 0). The researchers further used the auto.arima function in R to confirm the best model, as it computes the maximum likelihood estimates together with the AIC and BIC criteria. The best model was confirmed to be ARIMA (1, 1, 0).
3.2. Parameter Estimation
The next step determines the parameters of the autoregressive and moving average terms included in the fitted model.
As expected, the model has d = 1, representing differencing of order 1; there is no additional differencing in the best-fit model. The best-fit model has an AR term of order 1 (p = 1) and an MA term of order 0 (q = 0), with a standard error of 0.145 on the AR coefficient.
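As a rough sketch of what this estimation stage does, the AR(1) coefficient of the differenced series can be obtained from its lag-1 autocorrelation (a method-of-moments / Yule-Walker estimate). R's arima() and auto.arima instead use maximum likelihood and compare AIC/BIC, so the Python below is illustrative only and the function name is this sketch's own:

```python
def fit_ar1(w):
    """Yule-Walker estimate of phi for an AR(1) fitted to w (here, the
    first-differenced yield series): phi-hat = c_1 / c_0, the lag-1
    autocovariance divided by the variance."""
    n = len(w)
    mean = sum(w) / n
    c0 = sum((v - mean) ** 2 for v in w) / n
    c1 = sum((w[t] - mean) * (w[t + 1] - mean) for t in range(n - 1)) / n
    return c1 / c0
```

For a well-behaved stationary series the Yule-Walker and maximum likelihood estimates are typically close; the reported standard error comes from the likelihood-based fit.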
3.3. Model Diagnosis
Figure 5 shows that the residuals resemble a white noise structure, as they fluctuate around a mean of zero with constant variance. The structure of the residuals indicates that the fitted model is stationary.
The histogram of residuals shown in Figure 6 is bell-shaped, resembling a normal distribution for the model residuals.
The normal Q-Q plot helps to determine whether the dependent variable is normally distributed by plotting quantiles (i.e., percentiles) of our distribution against a theoretical distribution. Figure 7 shows that the residuals follow a normal distribution, as the plotted points lie generally along a straight line.
The correlogram in Figure 8 and the partial correlogram in Figure 9 do not show any structural pattern, which confirms that there is no serial autocorrelation; no significant lags exceed the limits from lag 1 to lag 17. The fitted model has residuals that are independent and identically distributed and uncorrelated with the variables of the model.
The Ljung-Box test was conducted to test for serial correlation with the following hypotheses:
Ho: There is no serial autocorrelation in the time series.
H1: There is serial autocorrelation in the time series.
The p-value of 0.7032 (> 0.05) indicates that the residuals are independent; we fail to reject the null hypothesis and conclude that there is no serial autocorrelation in the fitted model.
3.4. Forecasting
The researchers forecasted future tobacco yield in Zimbabwe. Figure 10 shows the predicted yields, with a stable but slightly decreasing trend. The predicted annual forecasts are summarised in Table 6 with 90% confidence intervals. The forecasts show that there is room for increasing total yield if correct measures are implemented.
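For a drift-free ARIMA(1,1,0), the point-forecast recursion is simple: each future difference is phi times the previous one, and the differences are accumulated back onto the last observed level. The Python sketch below is illustrative only (the study's forecasts come from R's fitted model, with proper prediction intervals), and the function name is this sketch's own:

```python
def forecast_arima110(y, phi, horizon):
    """Point forecasts from a drift-free ARIMA(1,1,0): the forecast of
    each future difference shrinks by a factor phi, and the differences
    are integrated back onto the last observed level. Omits the
    prediction intervals, which widen with the horizon."""
    level = y[-1]
    diff = y[-1] - y[-2]
    out = []
    for _ in range(horizon):
        diff = phi * diff   # next forecast difference
        level += diff       # integrate back to the level
        out.append(level)
    return out
```

Because the forecast differences decay geometrically, the path flattens out near the last observed yield after a few steps, which is consistent with the stable, slightly changing trajectory reported for 2019-2023.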
The study showed that the tobacco industry has the potential to grow: although the forecasted values display a decreasing trend, this can be reversed if correct strategic measures are implemented by relevant key stakeholders, ranging from government and private institutions to individual farmers. The findings reveal that the overall trend of tobacco yield is decreasing, an indicator to the agricultural sector of inefficient strategies and possibly deteriorating soil health resulting from insufficient crop rotation, which would in turn slow economic growth, as tobacco is the backbone of Zimbabwe’s economy. The time series analysis succeeded in building a model from historical data, which was then used to forecast annual tobacco yield up to the year 2023. The researchers recommend that other scholars repeat the forecasting exercise with different methods, such as emerging artificial neural networks (machine learning), in order to make comparisons and identify the best model(s). ZIMSTAT and the Tobacco Industry and Marketing Board (TIMB) may use the forecasted values, as well as the general trend projected by the ARIMA model, for strategic planning. Agricultural sector policy makers and strategic advisors should implement policies that help increase yield and should continuously revise those policies as new technologies emerge, thus boosting the economy’s GDP.
There is no conflict of interest to be reported by the authors. We thank the ZIMSTAT team for availing the historical tobacco yield data used in the analysis.
[1] Edbrooke J. (2017). Time Series Modelling Technique Analysis for Enterprise Stress Testing. Doctoral Dissertation, Imperial College London.
[2] Weigend AS, Gershenfeld NA. (1994). Time Series Prediction: Forecasting the Future and Understanding the Past. Reading: Addison-Wesley. (Chapter 1).
[3] Tealab A. (2018). Time Series Forecasting Using Artificial Neural Networks Methodologies: A Systematic Review. Future Computing and Informatics Journal, 3(2), pp. 334-340.
[4] Badmus AM, Ariyo OS. (2011). Forecasting Cultivated Areas and Production of Maize in Nigeria using ARIMA Model. Asian Journal of Agricultural Science, 3(3), pp. 171-176.
[5] Dhekale BS, Sahu PK, Vishwajith KP, Mishra P, Noman MD. (2014). Modeling and Forecasting Tea Production in West Bengal. 10(2), pp. 94-103.
[6] Sankar TJ, Pushpa P. (2019). Design and Development of Time Series Analysis for Saccharum Officinarum Production in India. Journal of Composition Theory, 12(9), pp. 203-211.
[7] Amin W, Amanullah M, Akbar A. (2014). Time Series Modeling for Forecasting Wheat Production for Pakistan. The Journal of Plant and Animal Sciences, 24(5), pp. 1444-1451.
[8] Arivarasi R, Madhavhi G. (2015). Time Series Analysis of Vegetable Production and Forecasting Using ARIMA Model. Asian Journal of Science and Technology, 6(10), pp. 1844-1848.
[9] Mithiya D, Datta L, Mandal K. (2019). Time Series Analysis and Forecasting of Oilseeds Production in India: Using Autoregressive Integrated Moving Average and Group Method of Data Handling-Neural Network. Asian Journal of Agricultural Extension, Economics and Sociology, 30(2), pp. 1-14.
[10] Aslam F, Salman A, Jan I. (2019). Predicting Wheat Production in Pakistan by Using an Artificial Neural Network Approach. Sarhad Journal of Agriculture, 35(4), pp. 1054-1062.
[11] Adebiyi AA, Adewumi AO, Ayo CK. (2014). Comparison of ARIMA and Artificial Neural Networks Models for Stock Price Prediction. Journal of Applied Mathematics, 2014.
[12] Manoj K, Madhu A. (2014). An Application of Time Series ARIMA Forecasting Model for Predicting Sugarcane Production in India. Faculty of Economic Sciences, 9(1), pp. 81-94.
[13] Goodwin BK, Ker AP. (1998). Nonparametric Estimation of Crop Yield Distributions: Implications for Rating Group-Risk Crop Insurance Contracts. American Journal of Agricultural Economics, 80(1), pp. 139-153.
[14] Hamjah MA. (2014). Rice Production Forecasting in Bangladesh: An Application of Box-Jenkins ARIMA Model. Mathematical Theory and Modeling, 4(4), pp. 1-11.
[15] Rachana W, Suvarna M, Sonal G. (2010). Use of ARIMA Model for Forecasting Pigeon Pea Production in India. International Review of Business and Finance.
[16] Padhan PC. (2012). Application of ARIMA Model for Forecasting Agricultural Productivity in India. Journal of Agriculture and Social Science, 8(2), pp. 50-56.
[17] Choudhury A, Jones J. (2014). Crop Yield Prediction Using Time Series Models. Journal of Economics and Economic Education Research, 15(3), pp. 53.
[18] Dasyam R, Pal S, Rao VS, Bhattacharyya B. (2015). Time Series Modeling for Trend Analysis and Forecasting Wheat Production of India. International Journal of Agriculture, Environment and Biotechnology, 8(2), pp. 303.
[19] Bee Dagum E, Bianconcini S. (2016). Time Series Components.
[20] Montgomery DC, Jennings C, Kulahci M. (2015). Introduction to Time Series Analysis and Forecasting. 2nd Edition. John Wiley and Sons. pp. 18-19.
[21] Das KR, Imon AHMR. (2017). A Brief Review of Tests for Normality. American Journal of Theoretical and Applied Statistics, 5, pp. 5-12.
[22] Tsay RS. (2010). Analysis of Financial Time Series. 3rd Edition. pp. 15-18.
[23] Box GEP, Jenkins G. (1970). Time Series Analysis: Forecasting and Control. San Francisco: Holden-Day. pp. 240.
Published with license by Science and Education Publishing, Copyright © 2022 Tichaona W. Mapuwei, Jenias Ndava, Mellissa Kachaka and Brain Kusotera
This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit
https://creativecommons.org/licenses/by/4.0/