Article
Unobserved Components Models
Joanne Ercolani
Unobserved components models (UCMs), sometimes referred to as structural time-series models, decompose a time series into its salient time-dependent features. These typically characterize the trending behavior, seasonal variation, and (nonseasonal) cyclical properties of the time series. The components are usually specified stochastically so that they can evolve over time, for example, to capture changing seasonal patterns. Among many other features, the UCM framework can incorporate explanatory variables, allowing outliers and structural breaks to be captured, and can deal easily with daily or weekly effects and calendar issues such as moving holidays.
UCMs are easily cast in state space form. This enables the application of Kalman filter algorithms, through which maximum likelihood estimation of the structural parameters is obtained, optimal predictions are made about the future state vector and the time series itself, and smoothed estimates of the unobserved components can be determined. The stylized facts of the series are then established and the components can be illustrated graphically, so that one can, for example, visualize the cyclical patterns in the time series or examine how the seasonal patterns change over time. If required, these characteristics can be removed, so that the data can be detrended, seasonally adjusted, or have business cycles extracted, without the need for ad hoc filtering techniques. Overall, UCMs have an intuitive interpretation and yield results that are simple to understand and communicate to others. Factoring in its competitive forecasting ability, the UCM framework is hugely appealing as a modeling tool.
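To make the workflow described in the abstract concrete, the sketch below fits a basic UCM to a synthetic monthly series using the UnobservedComponents class in statsmodels, which implements the state space form and Kalman filter machinery. The particular component choices (local linear trend, period-12 seasonal, damped stochastic cycle), the simulated data, and the parameter values are illustrative assumptions, not a specification taken from the article.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic monthly series: trend + seasonal pattern + noise (illustrative data only).
rng = np.random.default_rng(0)
n = 240
t = np.arange(n)
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=0.5, size=n)

# A basic structural model: stochastic local linear trend, a stochastic
# period-12 seasonal component, and a damped stochastic cycle.
model = sm.tsa.UnobservedComponents(
    y,
    level="local linear trend",
    seasonal=12,
    cycle=True,
    stochastic_cycle=True,
    damped_cycle=True,
)
results = model.fit(disp=False)  # maximum likelihood estimation via the Kalman filter

# Smoothed estimates of the unobserved components.
trend = results.level.smoothed
seasonal = results.seasonal.smoothed
cycle = results.cycle.smoothed

# Seasonal adjustment and out-of-sample forecasts follow directly.
seasonally_adjusted = y - seasonal
forecasts = results.forecast(steps=24)

print(results.summary())
```

Explanatory variables could be added through the exog argument, and plot_components() on the results object draws the estimated trend, seasonal, and cycle for visual inspection.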
Article
Predictive Coding Theories of Cortical Function
Linxing Preston Jiang and Rajesh P.N. Rao
Predictive coding is a unifying framework for understanding perception, action, and neocortical organization. In predictive coding, different areas of the neocortex implement a hierarchical generative model of the world that is learned from sensory inputs. Cortical circuits are hypothesized to perform Bayesian inference based on this generative model. Specifically, the Rao–Ballard hierarchical predictive coding model assumes that the top-down feedback connections from higher to lower order cortical areas convey predictions of lower-level activities. The bottom-up, feedforward connections in turn convey the errors between top-down predictions and actual activities. These errors are used to correct current estimates of the state of the world and generate new predictions. Through the objective of minimizing prediction errors, predictive coding provides a functional explanation for a wide range of neural responses and many aspects of brain organization.
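The error-correction dynamics described above can be written down compactly. The following is a minimal, linear two-level sketch in NumPy of Rao–Ballard-style predictive coding: each level's state is corrected by gradient descent on squared prediction errors, with top-down predictions and bottom-up error signals. The dimensions, learning rates, random inputs, and the omission of priors and nonlinearities are simplifications for illustration, not the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions of the input and the two latent levels (arbitrary choices for illustration).
n_input, n1, n2 = 16, 8, 4

# Top-down generative weights: level 1 predicts the input, level 2 predicts level-1 activity.
U1 = rng.normal(scale=0.1, size=(n_input, n1))
U2 = rng.normal(scale=0.1, size=(n1, n2))

def infer(I, U1, U2, steps=50, lr=0.05):
    """Estimate latent states r1, r2 by gradient descent on the prediction errors."""
    r1 = np.zeros(n1)
    r2 = np.zeros(n2)
    for _ in range(steps):
        e0 = I - U1 @ r1   # bottom-up error: input vs. level-1 prediction
        e1 = r1 - U2 @ r2  # error between level-1 state and the top-down prediction
        r1 += lr * (U1.T @ e0 - e1)  # correct r1 using errors from below and above
        r2 += lr * (U2.T @ e1)       # correct r2 using the error it failed to predict
    return r1, r2, e0, e1

def learn(I, U1, U2, lr_w=0.01):
    """Hebbian-like weight updates that further reduce the prediction errors."""
    r1, r2, e0, e1 = infer(I, U1, U2)
    U1 = U1 + lr_w * np.outer(e0, r1)
    U2 = U2 + lr_w * np.outer(e1, r2)
    return U1, U2

# Train on random inputs; in practice these would be natural image patches.
for _ in range(200):
    I = rng.normal(size=n_input)
    U1, U2 = learn(I, U1, U2)
```

In this sketch the feedforward terms U1.T @ e0 and U2.T @ e1 play the role of the error-carrying bottom-up connections, while U1 @ r1 and U2 @ r2 are the top-down predictions, so both inference and learning amount to minimizing prediction error.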