The origins of modern technological change provide the context necessary to understand present-day technological transformation, to investigate the impact of the new digital technologies, and to examine the phenomenon of digital disruption of established industries and occupations. How these contemporary technologies will transform industries and institutions, or serve to create new industries and institutions, will unfold in time. The implications of the relationships between these pervasive new forms of digital transformation and the accompanying new business models, business strategies, innovation, and capabilities are being worked through at global, national, corporate, and local levels. Whatever the technological future holds, it will be defined by continual adaptation, perpetual innovation, and the search for new potential. Presently, the world is experiencing the impact of waves of innovation created by the rapid advance of digital networks, software, and information and communication technology systems that have transformed workplaces, cities, and whole economies. These digital technologies are converging and coalescing into intelligent technology systems that facilitate and structure our lives. Through creative destruction, digital technologies fundamentally challenge the existing routines, capabilities, and structures by which organizations presently operate, adapt, and innovate. In turn, digital technologies stimulate a higher rate of both technological and business model innovation, moving from producer innovation toward more user-collaborative and open-collaborative innovation. However, as dominant global platform technologies emerge, some impending dilemmas associated with the concentration and monopolization of digital markets become salient. The extent of the contribution made by digital transformation to economic growth and environmental sustainability requires a critical appraisal.
The current discontent with the dominant macroeconomic theory paradigm, known as Dynamic Stochastic General Equilibrium (DSGE) models, calls for an appraisal of the methods and strategies employed in studying and modeling macroeconomic phenomena using aggregate time series data. The appraisal pertains to the effectiveness of these methods and strategies in accomplishing the primary objective of empirical modeling: to learn from data about phenomena of interest. The co-occurring developments in macroeconomics and econometrics since the 1930s provide the backdrop for the appraisal, with the Keynes vs. Tinbergen controversy at center stage. The overall appraisal is that the DSGE paradigm gives rise to estimated structural models that are both statistically and substantively misspecified, yielding untrustworthy evidence that contributes very little, if anything, to real learning from data about macroeconomic phenomena. A primary contributor to the untrustworthiness of evidence is the traditional econometric perspective of viewing empirical modeling as curve-fitting (structural models), guided by impromptu error term assumptions, and evaluated on goodness-of-fit grounds. Regrettably, excellent fit is neither necessary nor sufficient for the reliability of inference and the trustworthiness of the ensuing evidence. Recommendations on how to improve the trustworthiness of empirical evidence revolve around a broader model-based (non-curve-fitting) modeling framework that attributes cardinal roles to both theory and data without undermining the credibility of either source of information. Two crucial distinctions hold the key to securing the trustworthiness of evidence. The first distinguishes between modeling (specification, misspecification testing, and respecification) and inference, and the second between a substantive (structural) model and a statistical model (the probabilistic assumptions imposed on the particular data).
This enables one to establish statistical adequacy (the validity of these assumptions) before relating it to the structural model and posing questions of interest to the data. The greatest enemy of learning from data about macroeconomic phenomena is not the absence of an alternative and more coherent empirical modeling framework, but the illusion that foisting highly formal structural models on the data can give rise to such learning just because their construction and curve-fitting rely on seemingly sophisticated tools. Regrettably, applying sophisticated tools to a statistically and substantively misspecified DSGE model does nothing to restore the trustworthiness of the evidence stemming from it.
The political economy of protection is a field within economics, but it has significant overlap with its sister discipline, political science. For a political economy of protection, one needs at a minimum two types of economic agents: political decision makers who provide protection, and economic agents who are protected or even actively seek protection. The typical political economy scenario leads to an economic outcome that is not Pareto-optimal: from a general welfare perspective, the political interaction is not desirable. An important task of political economy research is to explain why and how political interaction takes place. For the first part of the question, it appears clear that if protection is actively sought, the protection seeker intends to benefit from these activities. However, if the policymakers were truly interested in Pareto optimality and welfare maximization, they would refuse to protect. Hence a crucial assumption in the political economy literature is that the politicians’ objective function differs from the general welfare function. For the second part of the question, theoretical political economy models consider either the election campaign phase, when politicians are eager to win a majority of votes (preelection models), or the phase when the politicians have been elected and may benefit from the spoils associated with holding office (postelection models). Whereas in the election phase politicians have an incentive to cater to the interests of that part of the electorate considered pivotal for the election outcome, in the postelection phase they may be open to, for example, special interest group (SIG) influences from which they derive utility. A first wave of theoretical political economy models originates from the 1980s. Building on these early advances, more elaborate models have been proposed. The most prominent one is the Grossman–Helpman protection for sale (PfS) model.
It delivers a postelection general equilibrium framework of trade policy determination. In this common agency model, industry interest groups act as principals and offer the government a menu of contracts of campaign contributions in exchange for trade policy. The PfS model predicts that industries that lobby for protection will obtain trade protection in equilibrium, whereas nonlobbying industries will face import subsidies. Numerous papers have evaluated the PfS model empirically and found that the implied weight on contributions in the governmental welfare function and the implied share of the population represented by lobbies are both very high. Remedies for this surprising result exist, but it has also been argued that the empirical regularities found may be spurious. At the beginning of the 21st century, the majority of the political economy literature is still theoretical, but better data availability increasingly offers the opportunity to test theoretical results empirically. A number of challenges remain for the political economy literature, however. In particular, more work is required to better understand policymaker interests. Moreover, incorporating political economy aspects into the new trade theory models that allow for intra-industry trade and firm diversity appears to be a promising avenue for future research.