
High-Resolution Thunderstorm Modeling

Summary and Keywords

Since the dawn of the digital computing age in the mid-20th century, computers have been used as virtual laboratories for the study of atmospheric phenomena. The first simulations of thunderstorms captured only their gross features, yet required the most advanced computing hardware of the time. The following decades saw exponential growth in computational power that was, and continues to be, exploited by scientists seeking to answer fundamental questions about the internal workings of thunderstorms, the most devastating of which cause substantial loss of life and property throughout the world every year.

By the mid-1970s, the most powerful computers available to scientists contained, for the first time, enough memory and computing power to represent the atmosphere containing a thunderstorm in three dimensions. Prior to this time, thunderstorms were represented primarily in two dimensions, which implicitly assumed an infinitely long cloud in the missing dimension. These earliest state-of-the-art, fully three-dimensional simulations revealed fundamental properties of thunderstorms, such as the structure of updrafts and downdrafts and the evolution of precipitation, while still only roughly approximating the flow of an actual storm due to computing limitations.

In the decades that followed these pioneering three-dimensional thunderstorm simulations, new modeling approaches were developed that included more accurate ways of representing winds, temperature, pressure, friction, and the complex microphysical processes involving solid, liquid, and gaseous forms of water within the storm. Further, these models could be run at higher resolution than previous studies due to the steady growth of available computational resources described by Moore’s law, the observation that computing power doubles roughly every two years. The resolution of thunderstorm models increased to the point where features on the order of a couple hundred meters could be resolved, allowing small but intense features such as downbursts and tornadoes to be simulated within the parent thunderstorm. As model resolution increased further, so did the amount of data produced by the models, which presented a significant challenge to scientists trying to compare their simulated thunderstorms to observed thunderstorms. Visualization and analysis software was developed and refined in tandem with improved modeling and computing hardware, allowing the simulated data to be brought to life and allowing direct comparison to observed storms. In 2019, the highest resolution simulations of violent thunderstorms are able to capture processes such as tornado formation and evolution, which are found to include the aggregation of many small, weak vortices with diameters of dozens of meters, features which simply cannot be simulated at lower resolution.

Keywords: numerical weather prediction, thunderstorm modeling, high performance computing, tornadoes, severe weather, big data

Why Are Thunderstorms Studied?

Each year, thunderstorms bring rain, hail, lightning, damaging winds, and flooding to regions spanning much of the globe. The strongest thunderstorms typically occur when the atmosphere is unstable, containing large amounts of near-surface moisture and abundant environmental wind shear. A forcing mechanism that causes upward motion in such an environment can trigger the formation of a tall cumuliform cloud that achieves self-sustaining vertical motion, resulting in a thunderstorm. The vertical motion within these storms is forced primarily by the buoyancy that results from the release of latent heat due to phase changes of water (condensation, freezing, and deposition). Clouds will rise freely upward so long as the cloudy air is lighter (more buoyant) than the air surrounding the cloud; for the most unstable atmospheres, this can occur over depths well exceeding 10 km, from near the Earth’s surface to the lower stratosphere. The type and strength of thunderstorm that occurs at any given time and place has been found to be a strong function of the pre-storm atmospheric conditions, with the most dangerous storms forming when the atmosphere exhibits deep layers of instability and wind shear. Thunderstorm types include short-lived “ordinary” thunderstorms, longer-lived multicellular storms, quasi-linear convective systems including squall lines and bow echoes, and supercell thunderstorms, the long-lived rotating storms that produce the most devastating tornadoes. While thunderstorms are primarily known for the damage they cause due to lightning, winds, hail, and tornadoes, collections of thunderstorms known as mesoscale convective complexes can produce heavy rainfall over many thousands of square kilometers, bringing rain during dry summer months in the continental mid-latitudes and helping to recharge reservoirs and aquifers.

Collecting useful observational measurements of thunderstorm properties is a significant challenge. Thunderstorms can contain large amounts of wind shear, large hail, and frequent lightning. In-situ observations within thunderstorms conducted with research aircraft can offer only a tiny sampling of storm properties along the aircraft’s path. Surface measurements during thunderstorms are often quite incomplete due to a lack of sufficient instrumentation beneath the storms as well as the difficulty and expense of deploying networks of portable observational research platforms to study them. Numerical models offer a powerful approach toward understanding thunderstorms by simulating their behavior over space and time, whereby model variables describing the spatiotemporal properties of thunderstorms can be visualized and analyzed with computers.

Some of the first applications of the earliest computers were attempts at forecasting the weather. As computers became more powerful throughout the 1960s and 1970s, scientists began to use them as virtual laboratories for the study of atmospheric phenomena including thunderstorms. This article focuses on the use of numerical models to simulate the strongest thunderstorms (primarily supercell thunderstorms, which produce the strongest tornadoes), with an emphasis on research that required (or requires) the most powerful available computing resources. Our understanding of thunderstorms has been profoundly influenced by the use of computers to simulate their behavior. Prior to the 1960s, theory and observations served as the dominant approaches toward understanding the atmosphere. During the 1960s, some scientists interested in better understanding thunderstorms began to utilize computers to conduct short, crude simulations of simple convection such as buoyant thermals. Over time, as computers became faster and the amount of computer memory increased, scientists utilized these newer machines to increase the sophistication of their models in order to simulate thunderstorms in three dimensions.

The first three-dimensional (3D) simulations of thunderstorms were conducted in the mid-1970s. These simulations, while very crude when compared to simulations in the late 2010s, ushered in the modern era of high-resolution thunderstorm modeling where entire thunderstorms can be simulated at high enough resolution to resolve tornadoes and downbursts and the processes involved in their formation, maintenance, and decay. As supercomputers continue to increase in speed, memory, and storage capacity, researchers must adapt their models or create new models to harness this power. These new models will produce simulations that contain an increasing amount of physical realism when compared to observed storms, and these simulations, when analyzed, will answer questions that would otherwise be impossible to answer utilizing only observations and mathematical theory.

Storm Modeling and the Computing Revolution

The history of modern thunderstorm research begins in 1947 with the Thunderstorm Project, the first large-scale field project focused specifically on understanding the behavior of thunderstorms in the United States (Braham, 1996; Byers & Braham, 1949). This federally sponsored program was initiated just at the end of the Second World War and was designed with the primary goal of better understanding thunderstorms and the specific hazards they posed to military and commercial aircraft, several of which had crashed while encountering thunderstorms. Specially modified aircraft were flown through thunderstorms at different altitudes where winds, pressure, temperature, and hydrometeors (rain, snow, graupel, and hail) were measured. In the 1950s, radar, which had served primarily as a way to detect enemy aircraft during the war, began to be used to detect hydrometeors in thunderstorm clouds. Radar continues to serve as the most effective remote-sensing technology for the detection of winds and hydrometeors in thunderstorms.

The National Science Foundation (NSF) was founded by the United States government in 1950, with the NSF-sponsored National Center for Atmospheric Research (NCAR) established in Boulder, Colorado, in 1959. By the early 1950s, computers were starting to be used with the goal of improving weather forecasting, but only at the synoptic scale, and with decidedly mixed results (Persson, 2005a,b,c; Shuman, 1989). At this time the possibilities of numerical weather prediction were only just beginning to be realized, and the filtered equation sets and numerical techniques applicable to thunderstorm-scale simulations had not yet been developed. The following decades would see an exponential growth in computer-processing power, memory and memory bandwidth, and storage, and these computing advancements were mirrored by an increase in model resolution and sophistication to exploit this power. The advancement of our understanding of thunderstorms since the 1950s has gone hand in hand with the development of improved observational data sources as well as the steady increase of computational power that could be applied toward modeling weather phenomena.

Numerical Model Properties

In order to understand the profound impact computing technology has had on the nature of numerical simulations of thunderstorms, it is useful to first discuss the basic properties of an atmospheric cloud model used by scientists to study them. A cloud model is a computer application that simulates, using known laws and approximations, the time-dependent behavior of a cloud and the atmosphere containing and surrounding the cloud. Properties forecast in numerical models of thunderstorms typically include the wind, temperature, pressure, turbulent kinetic energy, and bulk properties of water vapor, cloud water, cloud ice, graupel and/or hail, and snow. The most realistic cloud models are 3D, allowing for motion in all known spatial dimensions (East/West, North/South, up/down) as occurs in real clouds. A 3D cloud model will typically forecast atmospheric properties utilizing a mesh, or grid, that describes the Cartesian locations of all of the grid volumes (typically rectangular prisms) that comprise the full volume being forecast by the 3D model. These combined grid volumes together make up the model domain, the full volume that contains the model’s “known universe.”

For a Cartesian model on an orthogonal mesh, the lengths of the edges of the grid volumes are referred to as the grid spacing with the three components represented as ∆x, ∆y, and ∆z. One of the consequences of the finite difference numerical modeling approach is that the grid spacing greatly determines the fidelity of the simulation. The first 3D thunderstorm models utilized horizontal grid spacings on the order of 2–3 km with vertical grid spacings on the order of 1 km and contained domains on the order of 20–40 km in the horizontal and around 10 km in the vertical. This would be considered a very low-resolution simulation in modern times; there are too few grid volumes to span the thunderstorm to create a clear picture, much like the first digital cameras with only tens of thousands of pixels could not create a high-fidelity image comparable to a multi-megapixel camera. When discussing cloud model simulations, resolution (and hence grid spacing) is critical because throughout each grid volume in a cloud model, wind, temperature, pressure, and microphysical variables involving cloud, rain, snow, graupel, and hail are constant; no further subdivision is possible without further increasing the model resolution and altering the model mesh. Hence, resolution strongly determines the fidelity, or physical realism, of the simulation, so long as the model contains physical parameterizations that are appropriate for that resolution.
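
To make the relationship between grid spacing, domain size, and computational cost concrete, the following minimal Python sketch computes the number of grid volumes and a rough memory footprint for two illustrative configurations. The domain sizes, grid spacings, and the assumption of roughly 15 single-precision 3D arrays are assumptions chosen for illustration, not the specifications of any particular model.

```python
# Illustrative sketch: how grid spacing and domain size set the number of
# grid volumes and (roughly) the memory footprint of a cloud model.
# The specific numbers are assumptions, not taken from any particular model.

def grid_size(domain_km, spacing_m):
    """Number of grid volumes along one dimension."""
    return int(domain_km * 1000 / spacing_m)

def total_volumes(dx_m, dy_m, dz_m, lx_km, ly_km, lz_km):
    nx = grid_size(lx_km, dx_m)
    ny = grid_size(ly_km, dy_m)
    nz = grid_size(lz_km, dz_m)
    return nx * ny * nz

# A 1970s-style configuration: ~2 km horizontal and ~1 km vertical spacing
# over a 40 km x 40 km x 10 km domain.
n_1970s = total_volumes(2000, 2000, 1000, 40, 40, 10)

# A late-2010s-style configuration: 30 m spacing assumed uniform over a
# 120 km x 120 km x 20 km domain (actual simulations use stretched meshes,
# so real grid counts are somewhat smaller).
n_2010s = total_volumes(30, 30, 30, 120, 120, 20)

# Assume ~15 three-dimensional arrays of 4-byte floats per grid volume.
bytes_per_volume = 15 * 4
print(f"1970s-style grid: {n_1970s:,} volumes, "
      f"~{n_1970s * bytes_per_volume / 1e6:.1f} MB")
print(f"2010s-style grid: {n_2010s:,} volumes, "
      f"~{n_2010s * bytes_per_volume / 1e9:.1f} GB")
```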

Models that simulate thunderstorms require parameterizations in order to include precipitation and incorporate the effects of friction, subgrid scale kinetic energy, and radiation. Parameterizations are model algorithms that handle physical processes that are either too complex or too poorly understood to model using solvable fundamental equations, such as the interaction of countless numbers of hydrometeors that occurs in thunderstorms. Further, because thunderstorm models cover a limited domain, boundary conditions must be specified on all six faces of the model domain’s volume. The surface boundary condition in particular has a dramatic effect on the evolution of simulated thunderstorms, and the most appropriate choice of surface boundary condition remains an open question in 2019.

The first cloud models were not 3D, however; clouds were either represented as “blobs” of air that can only rise and fall and contain no horizontal variation (one-dimensional [1D] model), or contained symmetry about a vertical axis (axisymmetric two-dimensional [2D] model) or about a vertically oriented plane (slab-symmetric 2D model). These first models were developed under the constraints of the hardware available at the time; fully 3D models could not have been developed and tested until the computing technology matured. Cloud model simulations containing fewer than three dimensions are obviously unphysical, but an enormous amount of valuable and insightful research was conducted under these constraints in the early days of cloud modeling. Further, some atmospheric phenomena do lend themselves well to a 2D approach. For instance, a downburst formed from a thunderstorm cloud in a quiescent environment may be approximated to be axisymmetric; a squall line, which does not vary appreciably along its length, can be represented fairly well in a 2D slab symmetric model. However, in both of these cases, symmetry is enforced and truly physically realistic flow is almost always impossible to achieve.

In the early 21st century, a research-class 3D cloud model forecasts the winds, temperature, pressure, humidity, turbulent kinetic energy, and the mixing ratio of liquid and solid cloud water, rain, snow, graupel, and/or hail over its domain. Models with more advanced microphysics parameterizations will forecast additional variables related to the microphysics (such as number concentration). The cloud and rain field output of such a model, when visualized, will bear a very strong resemblance to an observed thunderstorm. Before discussing the state of the science of 3D high-resolution thunderstorm modeling in the late 2010s, research involving the modeling of thunderstorms will be approached historically, beginning with the first simulations in the 1960s.

The Early Years of Storm Modeling

Before computers could be programmed to forecast the weather, the set of equations to be solved first needed to be chosen, as well as numerical methods to approximate the solution. Charney (1955) acknowledged that, despite encouraging results found in recent attempts to solve the quasi-geostrophic equations for atmospheric flow, solutions to the primitive equations were not yet possible due to numerical instability. Smagorinsky (1958) made one of the first attempts to create a framework for the stable numerical integration of the so-called primitive equations that describe the physics of the atmosphere. Lilly (1962) developed a 2D model to simulate buoyant convection in a dry atmosphere and achieved the twin goals of numerical stability and qualitatively correct numerical results in some of his experiments; however, many shortcomings associated with stability, accuracy, and lack of computing capability to improve results were identified. Utilizing a set of equations different from Lilly (1962), Ogura (1962) reported similar encouraging results with a dry axisymmetric 2D model; he later introduced moisture into his model and simulated the formation of a cloud in a conditionally unstable atmosphere (Ogura, 1963). Orville (1964) applied Ogura’s equation set toward the simulation of upslope mountain flow by modifying the bottom boundary to include a “mountain” with a 45 degree slope, and later included water vapor to measure the effects of latent heating on the solution (Orville, 1965). Many of these early simulations suffered from computational instabilities that caused the simulation to degrade after short integrations. This issue was remedied by Arakawa (1966), who presented a set of equations and numerical techniques that overcame some of these instabilities and showed that long-term integrations of simulated 2D convection could be conducted without failing. This work was instrumental in paving the way toward the fully stable 3D simulations that were to arrive in the 1970s.

A major challenge to the field of cloud modeling is the handling of microphysical processes involving clouds and hydrometeors (rain, snow, and graupel/hail). A real thunderstorm contains a myriad of particles of different shapes, sizes, fall speeds, and phases, and these particles interact with each other and their surrounding environment in countless ways. A finite difference cloud model cannot hope to capture these processes, as each grid volume, rather than containing interacting particles, represents each microphysical property as a single bulk variable (e.g., mixing ratio and number concentration) that is constant throughout the volume. Despite this and other simplifications, the parameterization of microphysics tends to be one of the most computationally expensive parts of a cloud model owing to the many complex source and sink terms that transfer water substance from one microphysical variable to another. The first models containing both cloud and rain were 1D, only simulating along the vertical axis (e.g., Srivastava, 1967). In one of the most influential studies in the field of cloud modeling, Kessler (1969) developed a system of equations, suitable for incorporation into a 3D cloud model, that described the formation of cloud and rain and the fallout of rain. Kessler’s work was the first to carefully address conservation issues regarding all forms of water (in this case, water vapor, cloud water, and rain). Despite the lack of ice and the inevitable presence of unphysically supercooled liquid water, Kessler’s microphysics have been programmed into many cloud models of varying sophistication throughout the years. Kessler’s scheme and other similar routines are referred to in the literature as “warm rain” microphysics because their equations describing cloud and rain only properly represent precipitating convection that does not involve the ice phase. In thunderstorm simulations, where temperatures colder than −40 °C are commonly found aloft, such “warm rain” microphysics routines will produce liquid cloud and rain that is unphysically cold!
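
As a flavor of what a warm-rain parameterization looks like in code, the sketch below implements two Kessler-style conversion terms, autoconversion and accretion. The rate coefficients and cloud water threshold are representative values of the kind commonly quoted for Kessler-type schemes and are assumptions here, not the constants of any specific cloud model.

```python
import numpy as np

# Minimal sketch of two warm-rain conversion terms in the spirit of
# Kessler (1969): autoconversion of cloud water to rain once cloud water
# exceeds a threshold, and accretion (collection) of cloud water by rain.
# The coefficients below are representative illustrative values only.

K1 = 1.0e-3    # autoconversion rate coefficient [1/s] (assumed)
QC0 = 5.0e-4   # cloud water autoconversion threshold [kg/kg] (assumed)
K2 = 2.2       # accretion rate coefficient (assumed)

def warm_rain_tendencies(qc, qr):
    """Return (dqc/dt, dqr/dt) from autoconversion and accretion only.

    qc, qr : cloud water and rain water mixing ratios [kg/kg] (numpy arrays)
    """
    auto = K1 * np.maximum(qc - QC0, 0.0)            # cloud -> rain above threshold
    accr = K2 * qc * np.maximum(qr, 0.0) ** 0.875    # rain collects cloud water
    dqc_dt = -(auto + accr)
    dqr_dt = auto + accr
    return dqc_dt, dqr_dt

# Example: a grid volume with 1.2 g/kg of cloud water and 0.1 g/kg of rain.
dqc, dqr = warm_rain_tendencies(np.array([1.2e-3]), np.array([1.0e-4]))
print(dqc, dqr)  # total condensate is conserved between categories: dqc + dqr == 0
```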

Pioneering work by Rutledge and Hobbs (1983) and Lin, Farley, and Orville (1983) included the addition of ice substance (cloud ice, snow, and graupel) to numerical model equation sets, which resulted in much more realistic storm structure and evolution when compared to observations. The use of Kessler microphysics in cloud modeling studies is rare in the 2010s, having been supplanted by microphysical routines that incorporate cloud ice, snow, graupel and hail in addition to liquid cloud and rain.

The 3D Revolution

By the end of the 1960s, many of the major stability issues that had plagued early numerical attempts had been worked out, and the first rain-producing clouds were being simulated in 2D. Computing technology, though still quite primitive, was steadily advancing to the point where 3D simulations were possible over limited domains (Deardorff, 1970). It must be emphasized that the cloud modeling work up to this point had been done in a 2D framework, either axisymmetric or slab symmetric. In both cases, these simplifications arose due to the lack of available computing power. From 1964 to 1969, the CDC-6600, designed by Seymour Cray, was the fastest computer in the world. Approximately 50 were made, one of which was delivered to NCAR in December 1965. The machine, which was one of the first solid-state general computing systems, was programmed by feeding punch cards (containing Fortran 66 code) into a card reader, with output sent to a plotter or to magnetic tape. One needed to be physically present to use the machine, and it was not until the mid-1970s, following the development of a dedicated network connection as part of the ARPANET, that remote access to supercomputing facilities became feasible. The ARPANET was a precursor to the internet that connected research facilities in the United States with 50 kilobit per second dedicated connections, allowing remote access to computing facilities as well as the ability to display model output to a researcher’s local terminal. The CDC-6600 contained up to 982 kilobytes of memory and could execute up to 3 megaFLOPS (millions of floating point operations per second). Its successor, the CDC-7600, ran about five times faster than the 6600 and had four times as much memory. These two machines served as the primary workhorses for cloud modeling spanning the mid-1960s to mid-1970s.

In the early 1970s, improvements and refinements to 2D models continued, setting the stage for the upcoming 3D revolution. More sophisticated treatments of the microphysical processes involving rain formation and its subsequent falling, breaking, and evaporation were included in the 2D model of Takeda (1971). Run with 1 km horizontal and 500 m vertical grid spacing, his simulations in sheared environments indicated that, for a range of atmospheric stability and shear profiles, a self-sustaining cloud could be simulated. Wilhelmson and Ogura (1972) found that small changes to the equation set of Ogura and Phillips (1962) removed the need to use a computationally expensive iterative solution for calculating saturation vapor pressure, while providing nearly identical model results. Steiner (1973) developed one of the first 3D models of shallow convection, examining the behavior of buoyant thermals in sheared environments. His simulations did not contain any water substance, however, and the conclusion that vertical motions in 3D are stifled by environmental shear would be modified significantly once the effects of condensation were included in 3D convective models. Schlesinger (1973a,b) developed a 2D anelastic model containing liquid water microphysics and conducted experiments exploring the role of environmental moisture and shear on storm morphology as well as the sensitivity of simulations to various microphysical parameters, finding that long-lived storms could be maintained in highly sheared environments so long as low-level environmental moisture was sufficient. Wilhelmson (1974) presented one of the first 3D thunderstorm simulations containing liquid water microphysics. His simulation was run utilizing isotropic grid spacing of 600 m on a 3D mesh with dimensions of 64 x 33 x 25, containing a total of roughly 53,000 grid volumes and utilizing mirror symmetry about the x axis. The limited domain size (38 km horizontal extent) and the use of doubly periodic boundary conditions precluded the ability to model the full life cycle of a realistic storm, however. In comparison to 2D simulations in identical environments, Wilhelmson found that the 3D thunderstorm developed faster and lasted longer than its 2D counterpart before decaying, emphasizing the need for fully 3D modeling to properly represent deep moist convection. Schlesinger (1975) converted his 2D model to 3D and included directional shear in his first “toy” 3D simulations (so-called due to the modest 11 x 11 x 8 grid) run with horizontal grid spacing of 3.2 km and vertical spacing of 700 m. Schlesinger’s 3D model contained open lateral boundary conditions, an improvement over doubly periodic boundary conditions, allowing for flow in and out of the model’s lateral boundaries. These early simulations were among the first to explore the morphology of storms in directionally sheared environments. Subsequent 3D simulations by Klemp and Wilhelmson (1978b) and Schlesinger (1980) revealed the process of thunderstorm splitting into right- and left-moving cells, and found that the right-moving cell was favored in environments with strong low-level shear typical of environments associated with severe weather in the United States.

Klemp and Wilhelmson (1978a) presented supercell simulation results from their newly developed 3D cloud model, which was a significant improvement over previously reported efforts. The so-called “Klemp–Wilhelmson model” would be used in many subsequent studies throughout the 1970s and 1980s. The 3D model incorporated Kessler-type microphysics, open lateral boundary conditions, and a turbulence closure scheme to better handle subgrid kinetic energy. Their use of a prognostic equation for pressure was seen as advantageous over solving a diagnostic elliptic equation that implicitly assumes infinitely fast sound waves; however, it required the use of an extremely small time step in order to achieve computational stability. A novel “time splitting” technique addressed this by separating out the terms responsible for sound waves and integrating only those terms on a small time step that guaranteed stability, an approach that was also used in the newly developed Colorado State RAMS model (Pielke et al., 1992). The RAMS model was used in early cloud modeling studies that explored orographic clouds and mesoscale convective systems, and was one of the first models to include the ice phase in simulations of thunderstorms (Cotton & Tripoli, 1978; Cotton, Tripoli, Rauber, & Mulvihill, 1986; Tripoli & Cotton, 1989).
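
The time-splitting idea can be illustrated with a toy one-dimensional linear system in which a slow forcing is held fixed over the large time step while the acoustically active terms are sub-stepped with a smaller, stable time step. The Python sketch below is a schematic of that idea only, with all parameter values assumed for illustration; it is not a reproduction of the Klemp–Wilhelmson or RAMS formulations.

```python
import numpy as np

# Toy illustration of split-explicit ("time splitting") integration:
# acoustic terms are sub-stepped with a small, stable time step while a
# slow forcing is held fixed over the large step.

nx, dx = 200, 100.0            # 20 km periodic domain, 100 m grid spacing
cs, rho0 = 300.0, 1.0          # sound speed [m/s], reference density [kg/m3]
dt_large = 2.0                 # large (advective) time step [s]
n_small = 12                   # acoustic sub-steps per large step (assumed)
dt_small = dt_large / n_small  # cs * dt_small / dx = 0.5 -> stable

x = np.arange(nx) * dx
u = np.zeros(nx)               # velocity perturbation
p = np.zeros(nx)               # pressure perturbation
forcing = 1.0e-3 * np.exp(-((x - 10000.0) / 1500.0) ** 2)  # slow, fixed forcing

def ddx(f):
    """Centered difference on a periodic domain."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

for step in range(100):        # 100 large steps = 200 s of integration
    # Slow tendency evaluated once per large step and held fixed.
    slow_tend_u = forcing
    # Forward-backward acoustic sub-stepping within the large step.
    for _ in range(n_small):
        u = u + dt_small * (-(1.0 / rho0) * ddx(p) + slow_tend_u)
        p = p + dt_small * (-rho0 * cs**2 * ddx(u))

print("max |u| =", np.abs(u).max(), " max |p| =", np.abs(p).max())
```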

Cray and the Dawn of the Supercomputing Era

In 1972, Seymour Cray, the architect of the CDC-6600 and 7600 that had served as the primary workhorses for cloud modeling research, established Cray Research, Inc., in Chippewa Falls, Wisconsin. The first Cray-1 supercomputer was sold in 1976, and Cray quickly established itself as the leading supercomputer manufacturer in the world, a title it would hold until the 1990s. Cray computers exhibited significantly faster “number crunching” capabilities and larger amounts of memory and disk storage than previous machines. These machines contained vector processors that could apply a mathematical calculation to several values in an array concurrently, an early form of parallel processing that served to speed up the earliest cloud models.

The first Cray supercomputer was delivered to NCAR in 1976 and would be in service for 12 years. Atmospheric scientists lucky enough to obtain access could connect to it remotely rather than being required to be physically present, as had been the case in the early 1970s. Despite the promising advances in computing technology exemplified by Cray’s success, supercomputers in the early 1980s were almost entirely out of reach to university researchers, who desperately needed them, due to either the cost of, or complete lack of, access. In mid-1983, about 75 supercomputers existed across the world, the majority of which were Cray-1s (Smarr et al., 1983). While NCAR provided access to researchers, only three universities in the United States had machines on site, and it was this clear deficiency in basic computational research capability that led to the development of a proposal to the National Science Foundation by scientists at the University of Illinois to create an on-site national center for scientific computing. The National Center for Supercomputing Applications (NCSA) would be established in 1986, the same year its first Cray supercomputer went online. NCSA provided not just supercomputing access, but additional software and services to assist scientists from diverse fields in utilizing the hardware (and continues to do so in 2019). Remote access allowed scientists to edit, develop, and compile cloud model code and submit jobs to the Cray through a queuing system. Plotted output residing on the computer could also be graphically displayed on the researcher’s office computer using the dedicated network connection. With the proliferation of the Internet (many research universities were connected in the 1980s, years before it was generally available), it became standard practice to access supercomputing facilities, such as those available at NCAR or NCSA, over a remote network connection using software such as NCSA Telnet. Advances in operating system technology, the development of more sophisticated Fortran compilers, and the greater availability of supercomputing resources advanced many scientific fields during the 1990s.

Supercells and Supercomputers

Many of the fundamental properties of supercell thunderstorms were identified during the 1980s, and this was largely the result of 3D simulations of supercells. Several 3D supercell modeling studies published at this time greatly advanced our understanding of storm structure, dynamics, and morphology (e.g., Droegemeier & Wilhelmson, 1985; Klemp & Rotunno, 1983; Klemp, Wilhelmson, & Ray, 1981; Rotunno & Klemp, 1982, 1985; Schlesinger, 1980; Weisman & Klemp, 1982, 1984; Wilhelmson & Klemp, 1978, 1981). Throughout the 1980s, 3D simulations were conducted on grids with horizontal grid spacings on the order of 1 km and vertical spacings typically about 500 m. A typical simulation would span a domain on the order of 40 km by 40 km in the horizontal, extending to ten or more kilometers in the vertical. These domains resulted in grids on the order of 50 x 50 x 30 (75,000) grid points. (The highest-resolution thunderstorm simulations conducted in the late 2010s contain tens to hundreds of billions of grid volumes, six to seven orders of magnitude more than these early simulations.) It was widely recognized at the time that this resolution was inadequate to capture important smaller-scale flows that were known to be present in supercells (such as tornadoes) and that liquid-only microphysics was unphysical, but computers simply did not yet exist that had enough memory and processing power to conduct such a simulation. Despite these limitations, carefully constructed numerical experiments of thunderstorms throughout the 1980s and 1990s revealed a tremendous amount of insight into thunderstorm structure and morphology, such as the nature of supercell splitting, the origin of low-level supercell mesocyclone rotation, and the nature of thunderstorm cold pools. A technique for the creation of a vertically “stretched” grid was provided in Wilhelmson and Chen (1982), where the vertical grid spacing ∆z is a function of height. Most future studies involving deep moist convection would use a stretched vertical grid, typically with the highest resolution (smallest ∆z) at or near ground level and with ∆z increasing monotonically with height in order to “focus” resolution near the ground. This approach, however, can result in “pancake” grids with large aspect ratios (∆x/∆z), and highly anisotropic meshes are known to affect the parameterization of the turbulent kinetic energy, which can result in spurious energy piling up at high wave numbers, detrimentally affecting the simulation (Nishizawa, Yashiro, Sato, Miyamoto, & Tomita, 2015).
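
The idea of a vertically stretched mesh can be sketched with a simple geometric stretching function in which each grid layer is thicker than the one below it by a constant factor. This is a generic illustration with assumed parameter values, not the specific formulation of Wilhelmson and Chen (1982).

```python
import numpy as np

# Illustrative geometrically stretched vertical grid: the smallest grid
# spacing is at the surface and each successive layer is thicker by a
# constant stretch factor. Parameter values are assumed for illustration.

def stretched_grid(dz0, stretch, z_top):
    """Return layer thicknesses (dz) and interface heights (z) up to z_top."""
    dz, z = [], [0.0]
    thickness = dz0
    while z[-1] < z_top:
        dz.append(thickness)
        z.append(z[-1] + thickness)
        thickness *= stretch
    return np.array(dz), np.array(z)

# 20 m spacing at the ground, 6% stretching per level, 20 km deep domain.
dz, z = stretched_grid(dz0=20.0, stretch=1.06, z_top=20000.0)
print(f"{len(dz)} levels; dz ranges from {dz[0]:.0f} m to {dz[-1]:.0f} m")

# Aspect ratio near the surface (where dz is smallest) for a 100 m horizontal
# grid spacing: large values of dx/dz indicate a "pancake" (anisotropic) mesh.
print("aspect ratio dx/dz at the lowest level:", 100.0 / dz[0])
```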

One of the most significant examples of the power and promise of computing technology during this time was the video “A Study of the Evolution of a Numerically Modeled Severe Storm,” produced by a team of scientists at NCSA in 1989 (Wilhelmson et al., 1989, 1990). This six-minute-long video, nominated for an Academy Award, explored the structure of a 3D simulated supercell using the most advanced commercial software and hardware available at the time. The video featured animations of the cloud and rain rendered in transparent isosurfaces and colored slices, as well as colored Lagrangian tracers. That it took four scientific animators one person-year to develop the video following the execution of the model emphasizes the challenge in producing high-quality, scientifically meaningful animations of 3D thunderstorm simulations. In 2019, this video remains one of the most insightful, high-quality visualizations of a full thunderstorm, despite the many visualization software options that later became available to scientists.

The desire to study the process of tornadogenesis within a simulated supercell was a motivator for the development of nested grid capability in cloud models (Klemp & Rotunno, 1983). By nesting a higher-resolution grid within a coarser grid, one could, for instance, explore the development of a tornado without running the entire simulation at the resolution of the fine grid, something not computationally feasible. Wicker and Wilhelmson (1995) simulated a tornadic supercell with two meshes, the finest with 120 m horizontal spacing, and Grasso and Cotton (1995) did the same using three meshes with the finest containing 111 m horizontal spacing. In both studies, the higher resolution mesh was introduced prior to tornado formation in order to reduce the amount of computer time. Even with this approach, however, the tornadoes in both simulations were marginally resolved. These simulations revealed the importance of baroclinically generated horizontal vorticity in strengthening a supercell’s low-level mesocyclone, as well as the potential role of downdrafts as initiation mechanisms for tornado formation. They also demonstrated the utility of nested grids to incorporate different scales of motion, although spurious error growth along nest boundaries was a common problem (Wicker & Wilhelmson, 1995). Finley, Cotton, and Pielke (2001) initialized a RAMS model simulation with synoptic data but incorporated six nested meshes, with the innermost 100 m horizontal grid focused on a supercell that produced two tornadoes. This simulation was notable in that it did not utilize the “warm bubble” mechanism to trigger a storm in a horizontally homogeneous environment, as was typically done in idealized supercell simulations. It is worth noting that while similarities were found between these three numerical studies, no clear picture had emerged regarding supercell tornado genesis and maintenance, partly due to inadequate resolution; in fact, it is common to describe vortices in these and other similar simulations as “tornado-like vortices” (TLVs) due to their being marginally resolved. However, these simulations resolved important features in the tornadic region of the supercell and provided encouragement that a much clearer picture would emerge as models were improved and simulations were run at higher resolution, enabled by the steady increase in computing power made possible by supercomputers.

Severe weather outbreaks involving supercells provide a significant forecasting challenge to operational meteorologists who, throughout the late 20th and early 21st centuries, have increasingly relied upon numerical weather prediction models for forecast guidance. The Center for Analysis and Prediction of Storms (CAPS) was formed at the University of Oklahoma in 1988 with the goal of addressing the incredible challenge of providing operational forecasting of severe weather, with the Advanced Regional Prediction System (ARPS) model serving as its centerpiece (Johnson, Bauer, Riccardi, Droegemeier, & Xue, 1994; Xue, Wang, Gao, Brewster, & Droegemeier, 2003). ARPS was able to forecast five hours in advance the location and intensity of supercells that were observed in the U.S. Plains on May 24, 1996, demonstrating the utility of a weather model to sufficiently predict severe mesoscale phenomena in an operational setting (Sathye, Xue, Bassett, & Droegemeier, 1997).

The Use of Ice Microphysics in 3D Thunderstorm Simulations

An important advancement during the early 1980s was the development of equations that described the time-dependent microphysical characteristics of ice hydrometeors and the development of 2D models of thunderstorms utilizing so-called ice microphysics (e.g., Lin et al., 1983). Kessler’s (1969) scheme and its variants allowed, necessarily, the occurrence of unphysically cold supercooled cloud water and rain, with rain being the only hydrometeor allowed to fall to the ground as precipitation. These simplifications resulted in distorted cloud anvils lacking the expected structure, partly due to the comparatively slow fall speed of snow as opposed to rain, and typically produced thunderstorm outflow that was too cold compared to observations. It was well known at the time that supercells contained an abundance of cloud ice, snow, graupel, and hail that needed to be accounted for in simulations. Although ice microphysics had been developed on paper in the early 1980s, incorporating it into 3D cloud models would have resulted in code that could not run on the hardware of the time due to the significant increase in both computation and memory usage it would have required. Ice microphysics routines required new prognostic equations to be solved, with memory allocations for additional 3D floating point arrays required for snow, ice, and graupel mixing ratios. Due to the notoriously computationally expensive calculations inherent to microphysics, a simulation that fit in memory on the computers of the time would have taken a prohibitively long time to integrate. Early 3D simulations of thunderstorms containing ice microphysics (e.g., Knupp, 1989; Straka & Anderson, 1993) utilized grid spacings on the order of 500 m and were used to study the formation of downdrafts in thunderstorms, where processes such as the melting of hail were responsible for a significant amount of negative buoyancy that forced strong downdrafts known as downbursts. In the following decades, 3D simulations of thunderstorms would be run with a variety of cloud microphysics options, with single-moment ice microphysics replacing liquid-water-only microphysics over time, and multi-moment ice microphysics routines generally supplanting single-moment routines. It has been widely recognized that, prior to the use of multi-moment ice microphysics routines (with prognostic equations for variables beyond mixing ratios, such as hydrometeor number concentrations) in cloud models, simulated thunderstorm cold pools were biased cold compared to observations. The implications of this cold bias are known to be significant, as a colder cold pool will propagate faster and interact with environmental wind shear differently than a warmer one (Rotunno, Klemp, & Weisman, 1988; Weisman & Rotunno, 2004).
The use of more physically correct microphysics parameterizations should aid future discovery regarding supercell tornado behavior. Baroclinically generated horizontal vorticity forms due to horizontal gradients of density that arise from differing amounts of cooling occurring throughout and adjacent to the storm’s cold pool, cooling that is caused by the evaporation of rain and the melting and sublimation of hail. This horizontal vorticity is known to be a primary source of vertical rotation in the bottom kilometer or two of supercell updrafts, and it may also serve as a primary source of the vorticity found within tornadoes as cold pool air is abruptly tilted upward into the lowest region of the tornado (Orf, Wilhelmson, Lee, Finley, & Houston, 2017).

High-Resolution Cloud Modeling in the Age of Distributed-Memory Supercomputing

A Word About “High Resolution” as a Descriptor

Within the field of atmospheric modeling, the term “high resolution” is used ostensibly to indicate a certain level of fidelity “beyond the norm.” It is generally true that increasingly higher resolution is desirable in research on thunderstorms; strong local gradients of winds, temperature, and pressure are typical within real thunderstorms, requiring high resolution to capture the nature of these abrupt changes. Further, features that occur within thunderstorms such as downbursts, tornadoes, near-ground inflow, and outflow layers are much smaller than the storm in which they are occurring, requiring resolution well beyond that which captures only the storm’s basic features. In 2019 it is possible to conduct, on consumer desktop hardware, 3D simulations of thunderstorms that would have been impossible on any computer of the 20th century. A consumer motherboard for a home or office computer could contain a processor with up to 20 cores and 128 GB of memory, with each core exhibiting a clock speed of around 3 GHz, providing around 1 TFLOP of performance. A single Graphics Processing Unit (GPU) could easily add several additional TFLOPs of computing to a desktop computer, providing performance on par with the world’s fastest supercomputers at the turn of the century.

One consequence of this breathtaking increase in computing power is that “high resolution” as a descriptor has become meaningless: a “high-resolution” simulation of a thunderstorm in 1985 may have been modeled with 500 m grid spacing, while the same descriptor would be properly applied to a simulation in 2019 with 50 m grid spacing—a thousandfold increase in the number of grid volumes (and hence computer memory) required. This has resulted in the literature being peppered with descriptions of “high,” “very-high,” “ultra-high,” “super-high,” and even “giga-” resolution simulations to distinguish them from previous, coarser simulations. The ambiguity of these descriptors can only be removed by clearly specifying the grid spacing and the phenomenon being simulated, and it is hoped that, due to the eventual exhaustion of adjectives to indicate yet higher resolution, modelers will instead indicate grid spacing and let the reader decide the nature of the resolution!

The Transition to the Era of Supercomputing Centers

During the 1990s Cray began to share the stage with new computing architectures whose performance began to eclipse that of its signature single vector processor design, and the turn of the century marked rapid developments in supercomputing technology and access. It became necessary to add additional independent processors to supercomputers rather than optimizing or speeding up a single processor. Multiprocessor machines would introduce new challenges for programmers, as most of these new architectures demanded significant modification to existing code (or development of new code) to exploit all of the processors efficiently.

One difficult but inexpensive computing option available to modelers in the 1990s was made possible by the development of transputer technology. Transputers were an inexpensive type of microprocessor, manufactured in the 1980s by the U.K. corporation Inmos, designed to serve as individual computing units, or nodes (standalone computing entities hooked to a high-speed network), that could be assembled into a massively parallel computer capable of conducting many calculations concurrently. Jacob and Anderson (1992) reported on the creation and use of the Wisconsin Model Engine (WME), a 192-node, massively parallel supercomputer that was used by John Anderson’s group at the University of Wisconsin to conduct high-resolution simulations of the ocean and atmosphere, including microbursts, damaging downdrafts that form within thunderstorms (Anderson, Orf, & Straka, 1990; Orf, Anderson, & Straka, 1996).

One of the most exciting computer architectures at this time was the Connection Machine, the CM-2 and CM-5 supercomputers housed at NCSA in the mid-1990s. These were among the first commercially available massively parallel supercomputers, containing up to 1,024 processors, each of which had its own memory space, similar to the WME. These machines were among the first to require message passing in order to communicate data between the individual nodes. To simplify matters for programmers, Thinking Machines Corporation, the manufacturer of the Connection Machine, released its own Fortran compiler with extensions that, unlike on the WME, “hid” the underlying message passing, allowing programmers to code simple expressions involving 3D arrays for executing what otherwise would have been complex parallel operations. (This approach is similar to the single-expression whole-array operations that were added in the Fortran 90 standard.) Taking advantage of this new architecture, and using a modified version of the Klemp–Wilhelmson model, Lee and Wilhelmson (1997a) presented simulations of a line of weak tornadoes that formed from convection growing along a wind shear boundary. This work and companion studies (Lee & Wilhelmson, 1997b, 2000) explored the nature of non-supercell tornadoes, which tend to be shorter lived and weaker than those formed in supercells, typically forming beneath rapidly growing cumulonimbi as the result of intense vortex stretching.

While the CM-5 was the last machine built by Thinking Machines, it heralded the future of individual shared-memory multiprocessor computers as well as distributed-memory supercomputers made up of identical nodes. Distributed-memory supercomputing continues to be the norm in 2019, where frequent sharing of data between the individual nodes during model integration is required in order to solve the equations that define the future state of the cloud. The emergence of distributed-memory supercomputing required the development of algorithms and libraries to manage the efficient passing of data between nodes. The Message Passing Interface (MPI) standard was developed to address this issue, and soon MPI libraries were available to programmers, who could use their routines in cloud models to handle data exchange between compute nodes. With MPI, a cloud model could be written to take advantage of as many processors as were available; however, the frequent exchange of data between compute nodes required to update the model state can result in a situation where adding more compute nodes actually slows the model down due to communication bottlenecks. It requires significant forethought and effort to construct a distributed-memory cloud model that scales well to tens of thousands or more nodes. This recognition has resulted in the development of parallelized open-source community cloud models such as ARPS (Xue, Wang, Gao, Brewster, & Droegemeier, 2003), WRF (Skamarock et al., 2008), and CM1 (Bryan & Fritsch, 2002) that greatly benefit the cloud modeling community, providing modifiable “virtual laboratories” for the study of thunderstorms and other weather phenomena.
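
To give a flavor of the message passing a distributed-memory cloud model performs every time step, the sketch below uses the mpi4py bindings to MPI to exchange one-cell-wide "halo" (ghost) regions between neighboring ranks in a simplified one-dimensional domain decomposition. Real models decompose the domain in two or three dimensions and exchange many variables, so this is an illustration of the communication pattern rather than any particular model's code.

```python
from mpi4py import MPI
import numpy as np

# Minimal sketch of a halo (ghost-cell) exchange in a 1D domain
# decomposition, the kind of communication a distributed-memory cloud
# model performs every time step.

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 8                                   # interior points per rank
field = np.full(n_local + 2, float(rank))     # +2 ghost cells, filled with rank id

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Send the rightmost interior cell to the right neighbor while receiving
# into the left ghost cell, then the reverse; Sendrecv avoids deadlock.
comm.Sendrecv(sendbuf=field[n_local:n_local + 1], dest=right,
              recvbuf=field[0:1], source=left)
comm.Sendrecv(sendbuf=field[1:2], dest=left,
              recvbuf=field[n_local + 1:n_local + 2], source=right)

print(f"rank {rank}: left ghost={field[0]}, right ghost={field[n_local + 1]}")
```

Run with, for example, mpirun -n 4 python halo_demo.py (the script name is hypothetical); after the exchange, each rank's ghost cells hold the values supplied by its neighbors.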

Recognizing the lack of shared computing resources available to researchers in the United States, the National Science Foundation solicited the development of a system composed of supercomputers at sites across the United States, connected by a fast network, that would be used by the U.S. research community. The TeraGrid, which became operational in 2004, dramatically increased access to supercomputing facilities for U.S. scientists. The TeraGrid established the model that is still used in 2019; it was later replaced by the follow-on eXtreme Science and Engineering Discovery Environment (XSEDE) project, which provides access to modern supercomputing facilities. XSEDE and other similar projects have made supercomputing much more accessible to researchers across the globe, although access to large portions of the most powerful computers, required for the highest resolution thunderstorm simulations, remains extremely competitive. In the 2010s, the National Science Foundation has funded initiatives such as the Blue Waters project (Bode et al., 2013; Kramer, Butler, Bauer, Chadalavada, & Mendes, 2014), providing access to a petascale machine to scientists who can convince a panel that their proposed work demands the most powerful hardware available and that they have the resources to use such a machine efficiently.

Bryan, Wyngaard, and Fritsch (2003) conducted a series of numerical experiments with the CM1 cloud model, written by NCAR scientist George Bryan and made publicly available, that indicated that grid spacings of about 100 m or less were required in order for a cloud model to “perform as designed” in the simulation of deep moist convection, showing that the inertial subrange was not properly resolved in coarser simulations and that the solutions did not converge. Fiori, Parodi, and Siccardi (2010) similarly found that supercell simulations require grid spacing of about 200 m as a minimum resolution to properly converge, and that this only occurs when a sophisticated enough turbulence closure method is used. Despite these and other studies that indicate the importance of using sufficient resolution to study thunderstorms, in the late 2010s simulations of idealized supercells are regularly reported in the literature run at grid spacings of 250 m or greater. While much important work continues to be done at these coarser resolutions, as is evident from the huge body of work that dates back to the first 3D simulations of thunderstorms, re-conducting many of these simulations at higher resolution often results in different interpretations of the simulation. There can be significant complications to running very large numerical simulations, including being able to create a model that scales efficiently to large node counts, learning how to take advantage of the latest hardware topologies, and dealing with an exponentially increasing data load from model output. However, these types of challenges have always been surmountable. It is clear that it will require many thunderstorm simulations run with much finer grids, with spacings on the order of 10 m (or finer, as tornadoes and tornado-like vortices have been observed in thunderstorms with diameters as small as 10 m, requiring grid spacing on the order of 2 m to adequately resolve), before scientists can hope to understand the nature of tornado genesis, maintenance, and decay well enough to be able to create numerical forecasts of tornado paths from thunderstorms that have yet to form or are in the stages of formation.

The Path to Fully-Resolved Tornadoes in Simulated Supercells

The exponential increase in computing power available to researchers resulted in breakthrough simulations in the early 21st century in which simulated supercells produced strong tornadoes (Noda & Niino, 2005, 2010; Xue, 2004). These studies utilized the ARPS model with the environmental conditions of the oft-simulated May 20, 1977, Del City supercell. These simulations improved upon Wicker and Wilhelmson (1995) and Grasso and Cotton (1995) by eliminating the introduction of nested grids and running the entire simulation at much higher resolution (grid spacings on the order of 60 m). The ARPS simulations utilized Kessler microphysics, however, and it would be a decade before simulations at comparable resolution were conducted with more physically proper ice microphysics and in domains large enough to avoid problems associated with interaction between the storm and the model’s lateral boundaries over the storm’s full lifespan.

An issue that remains a topic of debate is the use of “free-slip” boundary conditions in simulations of thunderstorms. The free-slip surface boundary condition eliminates the effects of surface friction with the model’s bottom boundary, which in all simulations discussed thus far is perfectly flat, an obviously idealized approximation to the real Earth’s surface. It is well understood that friction with the Earth’s surface plays a key role in the life cycle of extratropical cyclones, also shaping the structure of the atmospheric boundary layer. In a cloud model, friction is typically introduced by the inclusion of a drag coefficient that reduces the horizontal winds at the model’s lowest grid level. Its inclusion in cloud models results in more physically realistic thunderstorm outflow profiles, such as the leading edges of gust fronts (Mitchell & Hovermale, 1977). However, the anomalously strong, local winds in the tornadic region of a supercell thunderstorm do not necessarily lend themselves well to this approach, which implicitly assumes a logarithmic wind profile beneath the lowest scalar level of models that use an Arakawa C mesh, such as CM1. In cases where insufficient vertical grid spacing results in the effects of surface friction being felt over too deep a layer, the cold pool structure of simulated supercells is modified, and this can have a cascading effect on the storm morphology in the tornadic region of the storm. Further, surface friction has been identified as being of key importance in the formation of simulated tornadoes in ARPS simulations of supercells and mesoscale convective systems (Roberts, Xue, Schenkman, & Dawson, 2016; Schenkman, Xue, & Dawson, 2016; Schenkman, Xue, & Hu, 2014; Schenkman, Xue, & Shapiro, 2012), and recent high-resolution simulations of downbursts and downburst-producing thunderstorms require surface drag in order to obtain realistic wind profiles near the ground (Orf, Kantor, & Savory, 2012). Friction is also found to be, somewhat counterintuitively, necessary for the strongest near-surface winds found in real tornadoes, and this is supported by theory and numerical simulation (Davies-Jones, 2015). As thunderstorm simulations continue to be run on finer and finer grids, the effects of surface friction parameterization should be reduced, as sub-meter vertical grid spacings will allow the physically proper no-slip surface condition to be used effectively. Currently, there is no clear consensus on what surface boundary condition is most appropriate for contemporary idealized supercell simulations (Markowski, 2016, 2018).
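
The bulk-drag ("semi-slip") approach described above can be sketched in a few lines; the drag coefficient and lowest-layer depth below are assumed values chosen for illustration, not those used in any of the cited studies.

```python
import numpy as np

# Illustrative bulk-drag ("semi-slip") surface friction applied to the
# lowest model level: the surface stress is proportional to Cd * |V| * (u, v),
# and its divergence over the depth of the lowest layer decelerates the flow.
# Cd and the layer depth are assumed values for illustration only.

CD = 1.0e-3        # dimensionless drag coefficient (assumed)
DZ_LOWEST = 20.0   # depth of lowest model layer [m] (assumed)

def surface_drag_tendency(u, v):
    """Return (du/dt, dv/dt) at the lowest model level from surface drag."""
    speed = np.sqrt(u**2 + v**2)
    dudt = -CD * speed * u / DZ_LOWEST
    dvdt = -CD * speed * v / DZ_LOWEST
    return dudt, dvdt

# Example: 40 m/s near-surface inflow is decelerated by about 0.08 m/s^2.
print(surface_drag_tendency(np.array([40.0]), np.array([0.0])))
```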

Orf, Wilhelmson, Lee, Finley, and Houston (2017) presented results from a 30 m grid spacing free-slip supercell simulation run in a domain spanning 120 km x 120 km x 20 km and containing 1.8 billion grid points. The simulated supercell, run in the environment that produced the May 24, 2011, El Reno, OK, tornadic supercell, produced a similar long-track EF5-strength tornado. This work was noteworthy in that data over a large domain were saved at very high temporal frequency (up to every 1 model second) and visualized and animated in insightful ways (Orf, 2019; Orf et al., 2014; Orf, Wilhelmson, & Wicker, 2016). Further, they used an isotropic (∆x = ∆y = ∆z) mesh over the bottom 10 km of a 60 km x 60 km region centered on the supercell, utilizing vertical stretching from 10 to 20 km AGL and horizontal stretching for 60 km along the periphery of the domain. A feature they dubbed the Streamwise Vorticity Current (SVC) was identified within the near-surface cold pool of the storm’s forward flank, corresponding to a significant drop in pressure below 2 km as well as a lowering toward the ground of extremely strong updraft winds. Unpublished simulations in which surface friction was turned on at the beginning of the simulation produced significantly different results in which no long-track EF5 tornado occurred, in contrast to Schenkman et al. (2012) and Roberts et al. (2016), who found that friction was required to produce their strongest tornadoes. Using a different environment but otherwise identical parameters as in Orf et al. (2017), Finley (2018) found a similar relationship between the SVC and a strong, long-lived tornado in a simulation conducted using the April 27, 2011, southeastern United States outbreak environment as the model’s base state. Figure 1 contains a still image from a 15 m isotropic simulation of the May 24, 2011, violently tornadic supercell described in Orf et al. (2017), providing an example of state-of-the-art thunderstorm modeling and visualization in the late 2010s. In this image, features such as subtornadic vortices, a wall cloud and turbulent tail cloud, and rain shafts in the storm’s rear flank are resolved in fine detail, as well as a violent multiple-vortex tornado pendant from a barrel-like cloud that bears a strong resemblance to a real storm.


Figure 1. A still frame from a 15 m isotropic simulation containing 8 billion grid zones (Orf, 2017). The cloud and rain fields are volume rendered, and the surface is shaded by density perturbation from the base state, with blue indicating the negatively buoyant surface air found within the storm’s cold pool. Fine-scale features are evident in this image, including turbulent rain shafts in the storm’s rear flank, diffuse rain in the forward flank, a multiple-vortex cyclonic tornado with attendant wall cloud, an anticyclonic funnel cloud, and a turbulent tail cloud along the storm’s forward flank.
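The mesh configuration described above, with uniform isotropic spacing through the storm-bearing depth and stretching above it, can be sketched for the vertical coordinate as follows. The 30 m inner spacing and the 10 km and 20 km depths follow the description in the text, but the geometric stretching function and its ratio are assumptions for illustration only, not the stretching actually used in the cited simulations.

```c
#include <stdio.h>

/*
 * Illustrative vertical grid: uniform spacing from the surface to the top
 * of the isotropic region, then geometrically stretched spacing up to the
 * model top.  The stretching ratio below is an assumed example value.
 */
int main(void)
{
    const double dz0   = 30.0;      /* inner-mesh spacing (m)            */
    const double z_iso = 10000.0;   /* top of isotropic region (m)       */
    const double z_top = 20000.0;   /* model top (m)                     */
    const double ratio = 1.05;      /* assumed geometric stretch factor  */

    double z = 0.0, dz = dz0;
    int k = 0;
    while (z < z_top) {
        if (z >= z_iso)
            dz *= ratio;            /* stretch above the isotropic layer */
        z += dz;
        k++;
    }
    printf("roughly %d vertical levels for this example configuration\n", k);
    return 0;
}
```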

Mashiko and Niino (2017) conducted a tornado-resolving simulation at 10 m horizontal grid spacing utilizing the Japan Meteorological Agency Nonhydrostatic Model (JMANHM; Saito et al., 2006), incorporating the effects of surface friction and using a vertical mesh that stretched from 10 to 50 m spacing up to the model top of 10.8 km. Their simulation contained a multiple-vortex tornado exhibiting vertical winds as high as 50 m s⁻¹ at 10 m above ground level. Utilizing CM1 with parameters similar to those of Orf et al. (2017), including free-slip boundary conditions of the “zero surface strain” type used in both studies, Yao et al. (2018) simulated a supercell with 25 m horizontal grid spacing and 20 m vertical grid spacing below 1 km, using atmospheric conditions adjacent to an observed EF4-producing supercell that occurred in Jiangsu Province, China, on June 23, 2016. (Another approach to free-slip is “zero surface strain gradient,” in which the vertical derivative of the surface strain is set to zero; the latter boundary condition did not result in a long-track EF5 tornado in unpublished simulations otherwise identical to those of Orf et al., 2017.)
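The distinction between these two free-slip formulations can be illustrated with a ghost level below the surface on a vertically staggered grid. The discrete forms below are a minimal sketch under the assumption that the relevant strain component is the vertical shear of the horizontal wind; they are not taken from CM1 or any other model.

```c
#include <stdio.h>

/*
 * Sketch of two free-slip lower boundary conditions on a vertically
 * staggered grid with a ghost level u[0] below the surface and the lowest
 * interior level at u[1].  Here the surface "strain" is assumed to be the
 * vertical shear du/dz (an illustrative interpretation).
 */

/* "Zero surface strain": du/dz = 0 at the surface, so the ghost value
 * mirrors the lowest interior value. */
void free_slip_zero_strain(double *u)
{
    u[0] = u[1];
}

/* "Zero surface strain gradient": d(du/dz)/dz = 0 at the surface, so the
 * shear just below the surface equals the shear just above it, which
 * amounts to linear extrapolation into the ghost level. */
void free_slip_zero_strain_gradient(double *u)
{
    u[0] = 2.0 * u[1] - u[2];
}

int main(void)
{
    double u[4] = { 0.0, 12.0, 15.0, 17.0 };  /* ghost level plus three interior levels */

    free_slip_zero_strain(u);
    printf("zero strain:          u_ghost = %.1f\n", u[0]);

    free_slip_zero_strain_gradient(u);
    printf("zero strain gradient: u_ghost = %.1f\n", u[0]);
    return 0;
}
```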

As has been the case since the 1960s, rapidly changing computer technology continues to provide new opportunities for simulating thunderstorms at higher resolutions with increasingly sophisticated physics parameterizations for hydrometeors, friction, radiation, and turbulent kinetic energy. The recent advent of graphics processing units (GPUs) as computational accelerators in supercomputers is having a significant impact across all fields of computational science, including atmospheric science (e.g., Leutwyler, Fuhrer, Lapillonne, Lüthi, & Schär, 2016; Schalkwijk, Griffith, Post, & Jonker, 2012; Vanderbauwhede & Takemi, 2013). New programming models such as OpenACC and CUDA have been developed to help scientists exploit these massively parallel devices. Summit, the world’s fastest supercomputer in late 2018, obtains 95% of its computing power from GPUs. Programmers must modify their code to utilize GPUs on hybrid computing platforms, and this can be a time-consuming process, as large sections of code often must be refactored in order to run efficiently.
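As a simple illustration of the directive-based approach, the sketch below offloads a stencil-style loop to a GPU with OpenACC. The array names and the smoothing operation are hypothetical and are not drawn from any particular cloud model; without an OpenACC-capable compiler the directive is simply ignored and the loop runs on the CPU.

```c
#include <stdio.h>
#include <stdlib.h>

/*
 * Minimal sketch of offloading a loop to a GPU with OpenACC directives.
 * The fields and the simple 1-2-1 smoothing update are illustrative only.
 */
int main(void)
{
    const int n = 1 << 20;
    float *a = malloc(n * sizeof(float));
    float *b = malloc(n * sizeof(float));
    for (int i = 0; i < n; i++) { a[i] = (float)i; b[i] = 0.0f; }

    /* The compiler generates a GPU kernel for this loop and handles the
     * host/device transfers declared in the data clauses. */
    #pragma acc parallel loop copyin(a[0:n]) copy(b[0:n])
    for (int i = 1; i < n - 1; i++)
        b[i] = 0.25f * (a[i - 1] + 2.0f * a[i] + a[i + 1]);

    printf("b[10] = %f\n", b[10]);
    free(a);
    free(b);
    return 0;
}
```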

The “Big Data” Problem

Doubling the resolution of a 3D model results in 8 (2³) times more grid volumes and requires the model’s time step to be halved in order to maintain computational stability. The resulting factor of roughly 16 more calculations (eight times more grid volumes, each updated twice as often) underscores the need for a steady progression in supercomputer capability over the coming decades in order to continue to advance knowledge in the atmospheric (and other) sciences. Moving data between memory and disk (input/output, or I/O) continues to be significantly slower than moving data within computer memory, and I/O can become a severe bottleneck in increasingly high-resolution simulations on modern supercomputers. While I/O bandwidth and storage capacity have both increased exponentially since the 1970s, the time it takes to complete a simulation in which data is saved frequently to disk can be overwhelmingly dominated by I/O. This time can be greatly reduced by writing code that utilizes some form of parallel I/O, either by writing multiple files concurrently on the fast shared file systems commonly found on supercomputers, or by using routines in which multiple processors write to shared files concurrently. I/O continues to be a significant bottleneck for researchers who must save large amounts of data frequently, which is often necessary to fully analyze a high-resolution simulation with rapidly evolving flows. Approaches for handling this issue range from the in-situ approach (Bauer et al., 2016), in which visualization and analysis are done “live” while the simulation is running on the supercomputer, to the post hoc approach of saving large quantities of compressed data to disk for later analysis and visualization. Lossy floating-point compression can dramatically reduce (by one or two orders of magnitude) the amount of data written to disk at the expense of some floating-point precision, but the required accuracy of the saved data can be specified up front with algorithms such as ZFP (Lindstrom, 2014; Orf, 2017). It is likely that future supercomputers will involve topologies quite different from those found in 2019, in part because I/O, communication, and processing do not all scale in the same way as simulations increase in size, and because frequent, large-scale “saving data to disk” may no longer be feasible using current techniques and hardware topologies.
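A back-of-the-envelope calculation makes this scaling explicit. The sketch below simply tabulates the grid-volume and total-work factors for a few refinement ratios; it ignores changes in parallel efficiency and I/O and is illustrative only.

```c
#include <stdio.h>

/*
 * Cost scaling for refining a 3D explicit model by a factor r in each
 * spatial direction: grid volumes grow as r^3 and the stable time step
 * shrinks by r, so the total work grows as roughly r^4.
 */
int main(void)
{
    for (int r = 2; r <= 8; r *= 2)
        printf("refine by %d: %4d x more grid volumes, %5d x more work\n",
               r, r * r * r, r * r * r * r);
    return 0;
}
```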

Due to the nature of modern distributed-memory supercomputing architectures, where simulations can span thousands of nodes, models with naive approaches to I/O can spend the vast majority of their time writing data to disk rather than doing calculations, a situation that indicates poor scaling and one that is generally not tolerated on supercomputers. Data from large, complicated simulations can take years to fully analyze, making a purely in-situ approach impractical for such studies. Self-describing scientific data formats such as netCDF and HDF are useful for storing model output data, and parallel methods for writing these file formats have been developed.
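As one example of such a parallel method, the sketch below writes a single distributed array to one shared HDF5 file using the library’s MPI-IO driver. The file name, dataset name, and one-dimensional decomposition are hypothetical, and the code assumes an HDF5 installation built with parallel support; it is a minimal sketch rather than the output code of any particular model.

```c
#include <mpi.h>
#include <hdf5.h>
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each rank owns a contiguous slab of a 1D field (assumed layout). */
    const hsize_t nx_local = 1024;
    hsize_t dims_global[1] = { nx_local * (hsize_t)nprocs };
    hsize_t dims_local[1]  = { nx_local };
    hsize_t offset[1]      = { nx_local * (hsize_t)rank };

    float *data = malloc(nx_local * sizeof(float));
    for (hsize_t i = 0; i < nx_local; i++)
        data[i] = (float)rank;                 /* placeholder values */

    /* Open one shared file collectively through the MPI-IO driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("field.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* Create the full-size dataset; each rank writes its own hyperslab. */
    hid_t filespace = H5Screate_simple(1, dims_global, NULL);
    hid_t dset = H5Dcreate(file, "u", H5T_NATIVE_FLOAT, filespace,
                           H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    hid_t memspace = H5Screate_simple(1, dims_local, NULL);
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, offset, NULL,
                        dims_local, NULL);

    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);  /* collective write */
    H5Dwrite(dset, H5T_NATIVE_FLOAT, memspace, filespace, dxpl, data);

    H5Dclose(dset); H5Sclose(memspace); H5Sclose(filespace);
    H5Pclose(dxpl); H5Pclose(fapl); H5Fclose(file);
    free(data);

    if (rank == 0)
        printf("wrote %llu values to field.h5\n",
               (unsigned long long)dims_global[0]);
    MPI_Finalize();
    return 0;
}
```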

Scientists wishing to minimize the I/O overhead in high-resolution thunderstorm simulations have several options. If only a small portion of the domain is being studied, data from that region and its immediate surroundings alone may be saved, reducing the amount of data written to disk. Researchers may choose to save only the model’s prognostic variables, calculating diagnostic quantities on the fly during later analysis. Data compression can be applied to 3D floating-point arrays before they are written to disk, with decompression occurring on the fly when the data is later read back. Data compression may be lossless (where the original data is perfectly reconstructed) or lossy (where some accuracy is sacrificed in order to reduce the number of bytes written to disk). Lossless compression algorithms typically cannot achieve very large compression ratios because of the requirement of bit-perfect reconstruction. As an example of the effectiveness of lossy ZFP compression, domain-wide compression ratios of over 200:1 for microphysical variables have been achieved while preserving enough accuracy for analysis and visualization in simulations containing billions of grid zones (Orf, 2017). Specifying the required accuracy up front allows the algorithm to compress only as much as is needed to achieve that accuracy, resulting in compression ratios greatly exceeding those of lossless methods (Lindstrom, 2014).
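The fixed-accuracy mode described above can be sketched with ZFP’s C API roughly as follows. The field dimensions, tolerance value, and helper function are illustrative, and the call sequence follows the library’s documented simple-usage pattern rather than the I/O code of any particular cloud model.

```c
#include <stdlib.h>
#include <stdio.h>
#include "zfp.h"

/* Compress a 3D float field in ZFP's fixed-accuracy mode; returns the
 * compressed size in bytes (0 on failure) and hands back the buffer. */
static size_t compress_field(float *data, size_t nx, size_t ny, size_t nz,
                             double tolerance, void **out_buffer)
{
    zfp_field  *field = zfp_field_3d(data, zfp_type_float, nx, ny, nz);
    zfp_stream *zfp   = zfp_stream_open(NULL);
    zfp_stream_set_accuracy(zfp, tolerance);     /* absolute error bound */

    size_t bufsize    = zfp_stream_maximum_size(zfp, field);
    void  *buffer     = malloc(bufsize);
    bitstream *stream = stream_open(buffer, bufsize);
    zfp_stream_set_bit_stream(zfp, stream);
    zfp_stream_rewind(zfp);

    size_t compressed = zfp_compress(zfp, field);

    zfp_field_free(field);
    zfp_stream_close(zfp);
    stream_close(stream);

    *out_buffer = buffer;
    return compressed;
}

int main(void)
{
    const size_t nx = 64, ny = 64, nz = 64;      /* assumed small test field */
    float *q = malloc(nx * ny * nz * sizeof(float));
    for (size_t i = 0; i < nx * ny * nz; i++)
        q[i] = (float)i / (float)(nx * ny * nz); /* placeholder values */

    void *buf = NULL;
    size_t bytes = compress_field(q, nx, ny, nz, 1e-3, &buf);
    printf("compressed %zu bytes to %zu bytes\n",
           nx * ny * nz * sizeof(float), bytes);

    free(buf);
    free(q);
    return 0;
}
```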

The Road Ahead

The rapid advancement of supercomputing technology and numerical model development has resulted in the ability to conduct simulations of thunderstorms that bear a striking resemblance to observations. Despite this encouraging trend, much work remains in order to answer fundamental questions about the formation and evolution of damaging thunderstorm phenomena such as tornadoes, downbursts, and flash flooding. While idealized simulations of thunderstorms provide critical information on these features, the ability to predict their behavior accurately remains one of the grandest forecasting challenges. The road ahead will undoubtedly involve the use of ensembles of simulations to obtain better probabilistic forecasts of severe weather events and to provide useful information on the internal variability of thunderstorm simulations in an idealized framework. As resolution further increases, the inclusion of more realistic surface features, including accurate topography, orography, and the natural and built environment, will be necessary. In order to properly simulate the damage that supercell tornadoes cause to the built environment, structure and debris models will need to be coupled to thunderstorm simulations. Machine learning will be used to improve upon existing parameterizations of cloud microphysics, and computationally intensive bin microphysics models will also become more common. In the future, operational numerical weather prediction models that forecast on the continental scale will be run at resolutions of hundreds of meters and will contain hundreds of vertical levels, and these models will regularly produce the full spectrum of observed thunderstorm types, including tornado-producing supercells. Ensembles of operational model simulations run at sub-kilometer resolutions could provide probabilistic forecasts of thunderstorms and of the potential for these storms to form tornadoes. In order for future high-resolution numerical weather prediction simulations involving deep moist convection to be of operational use, the models will need to be initialized with an accurate, very high-resolution set of initial conditions, and it is likely that these conditions will be supplied primarily by weather satellites and other forms of remote sensing technology that can quickly scan wide volumes of the atmosphere. As always, advances in our understanding of thunderstorms will come from advances in the three main pillars of meteorological research: observations, theory, and numerical modeling.

Acknowledgments

Figure 1 was produced using the open source VisIt software on the Blue Waters supercomputer. The author’s current research is supported by the Cooperative Institute for Meteorological Satellite Systems at the University of Wisconsin—Madison, and the National Science Foundation. One anonymous reviewer provided useful feedback that helped improve the scope and organization of the article. The author would especially like to thank Dr. Robert Schlesinger for taking time to discuss his personal experience in conducting some of the first 3D numerical simulations of thunderstorms, and the immense challenges faced by modelers at the dawn of the modern computing age.

Further Reading

Cotton, W. R., Bryan, G., & van den Heever, S. C. (2010). Storm and cloud dynamics. Cambridge, MA: Academic Press.

Doswell, C. A., III. (2001). Severe convective storms (Meteorological Monographs). Boston, MA: American Meteorological Society.

Markowski, P., & Richardson, Y. (2011). Mesoscale meteorology in midlatitudes. Hoboken, NJ: John Wiley & Sons.

References

Anderson, J. R., Orf, L. G., & Straka, J. M. (1990). A 3-D model system for simulating thunderstorm microburst outflows. Meteorology and Atmospheric Physics, 49(1–4), 125–131.

Arakawa, A. (1966). Computational design for long-term numerical integration of the equations of fluid motion: Two-dimensional incompressible flow. Part I. Journal of Computational Physics, 1(1), 119–143.

Bauer, A. C., Abbasi, H., Ahrens, J., Childs, H., Geveci, B., Klasky, S., . . . Bethel, E. W. (2016). In situ methods, infrastructures, and applications on high performance computing platforms. Computer Graphics Forum, 35(3), 577–597.

Bode, B., Butler, M., Dunning, T., Hoefler, T., Kramer, W., Gropp, W., & Hwu, W.-M. (2013). The Blue Waters super-system for super-science. In J. S. Vetter (Ed.), Contemporary high performance computing: From petascale toward exascale (pp. 339–366). London, U.K.: Chapman and Hall.

Braham, R. R. (1996). The Thunderstorm Project: 18th Conference on Severe Local Storms luncheon speech. Bulletin of the American Meteorological Society, 77(8), 1835–1846.

Bryan, G. H., & Fritsch, J. M. (2002). A benchmark simulation for moist nonhydrostatic numerical models. Monthly Weather Review, 130(12), 2917–2928.

Bryan, G. H., Wyngaard, J. C., & Fritsch, J. M. (2003). Resolution requirements for the simulation of deep moist convection. Monthly Weather Review, 131(10), 2394–2416.

Byers, H. R., & Braham, R. R., Jr. (1949). The thunderstorm [Technical report]. Washington, D.C.: United States Department of Commerce.

Charney, J. (1955). The use of the primitive equations of motion in numerical prediction. Tellus, 7(1), 22–26.

Cotton, W. R., & Tripoli, G. J. (1978). Cumulus convection in shear flow—Three-dimensional numerical experiments. Journal of Atmospheric Sciences, 35(8), 1503–1521.

Cotton, W. R., Tripoli, G. J., Rauber, R. M., & Mulvihill, E. A. (1986). Numerical simulation of the effects of varying ice crystal nucleation rates and aggregation processes on orographic snowfall. Journal of Applied Meteorology and Climatology, 25(11), 1658–1680.

Davies-Jones, R. (2015). A review of supercell and tornado dynamics. Atmospheric Research, 158–159, 274–291.

Deardorff, J. W. (1970). A numerical study of three-dimensional turbulent channel flow at large Reynolds numbers. Journal of Fluid Mechanics, 41(2), 453–480.

Droegemeier, K. K., & Wilhelmson, R. B. (1985). Three-dimensional numerical modeling of convection produced by interacting thunderstorm outflows. Part I: Control simulation and low-level moisture variations. Journal of Atmospheric Sciences, 42(22), 2381–2403.

Finley, C. A. (2018). High-resolution simulation of a violent tornado in the 27 April 2011 outbreak environment. Paper presented at the 29th Conference on Severe Local Storms, Stowe, VT: American Meteorological Society.

Finley, C. A., Cotton, W. R., & Pielke, R. A. (2001). Numerical simulation of tornadogenesis in a high-precipitation supercell. Part I: Storm evolution and transition into a bow echo. Journal of Atmospheric Sciences, 58(13), 1597–1629.

Fiori, E., Parodi, A., & Siccardi, F. (2010). Turbulence closure parameterization and grid spacing effects in simulated supercell storms. Journal of Atmospheric Sciences, 67(12), 3870–3890.

Grasso, L. D., & Cotton, W. R. (1995). Numerical simulation of a tornado vortex. Journal of Atmospheric Sciences, 52(8), 1192–1203.

Jacob, R., & Anderson, J. (1992). Do-it-yourself massively parallel supercomputer does useful physics. Computers in Physics, 6(3), 244–251.

Johnson, K. W., Bauer, J., Riccardi, G. A., Droegemeier, K. K., & Xue, M. (1994). Distributed processing of a regional prediction model. Monthly Weather Review, 122(11), 2558–2572.

Kessler, E. (1969). On the distribution and continuity of water substance in atmospheric circulations. In E. Kessler (Ed.), Meteorological monographs (Vol. 10, pp. 1–84). Boston, MA: American Meteorological Society.

Klemp, J. B., & Rotunno, R. (1983). A study of the tornadic region within a supercell thunderstorm. Journal of Atmospheric Sciences, 40(2), 359–377.

Klemp, J. B., & Wilhelmson, R. B. (1978a). The simulation of three-dimensional convective storm dynamics. Journal of Atmospheric Sciences, 35(6), 1070–1096.

Klemp, J. B., & Wilhelmson, R. B. (1978b). Simulations of right- and left-moving storms produced through storm splitting. Journal of Atmospheric Sciences, 35(6), 1097–1110.

Klemp, J. B., Wilhelmson, R. B., & Ray, P. S. (1981). Observed and numerically simulated structure of a mature supercell thunderstorm. Journal of Atmospheric Sciences, 38(8), 1558–1580.

Knupp, K. R. (1989). Numerical simulation of low-level downdraft initiation within precipitating cumulonimbi: Some preliminary results. Monthly Weather Review, 117(7), 1517–1529.

Kramer, W., Butler, M., Bauer, G., Chadalavada, K., & Mendes, C. (2014). High-performance parallel I/O (pp. 17–31). London, U.K.: Chapman and Hall/CRC.

Lee, B. D., & Wilhelmson, R. B. (1997a). The numerical simulation of nonsupercell tornadogenesis. Part II: Evolution of a family of tornadoes along a weak outflow boundary. Journal of Atmospheric Sciences, 54(19), 2387–2415.

Lee, B. D., & Wilhelmson, R. B. (1997b). The numerical simulation of non-supercell tornadogenesis. Part I: Initiation and evolution of pretornadic misocyclone circulations along a dry outflow boundary. Journal of Atmospheric Sciences, 54(1), 32–60.

Lee, B. D., & Wilhelmson, R. B. (2000). The numerical simulation of nonsupercell tornadogenesis. Part III: Parameter tests investigating the role of CAPE, vortex sheet strength, and boundary layer vertical shear. Journal of Atmospheric Sciences, 57(14), 2246–2261.

Leutwyler, D., Fuhrer, O., Lapillonne, X., Lüthi, D., & Schär, C. (2016). Towards European-scale convection-resolving climate simulations with GPUs: A study with COSMO 4.19. Geoscientific Model Development, 9(9), 3393–3412.

Lilly, D. K. (1962). On the numerical simulation of buoyant convection. Tellus A, 14(2).

Lin, Y.-L., Farley, R. D., & Orville, H. D. (1983). Bulk parameterization of the snow field in a cloud model. Journal of Applied Meteorology and Climatology, 22(6), 1065–1092.

Lindstrom, P. (2014). Fixed-rate compressed floating-point arrays. IEEE Transactions on Visualization and Computer Graphics, 20(12), 2674–2683.

Markowski, P. (2018). A review of the various treatments of the surface momentum flux in severe storms simulations: Assumptions, deficiencies, and alternatives. Paper presented at the 29th Conference on Severe Local Storms, Stowe, VT: American Meteorological Society.

Markowski, P. M. (2016). An idealized numerical simulation investigation of the effects of surface drag on the development of near-surface vertical vorticity in supercell thunderstorms. Journal of Atmospheric Sciences, 73(11), 4349–4385.

Mashiko, W., & Niino, H. (2017). Super high-resolution simulation of the 6 May 2012 Tsukuba supercell tornado: Near-surface structure and its evolution. SOLA, 13, 135–139.

Mitchell, K. E., & Hovermale, J. B. (1977). A numerical investigation of the severe thunderstorm gust front. Monthly Weather Review, 105(5), 657–675.

Nishizawa, S., Yashiro, H., Sato, Y., Miyamoto, Y., & Tomita, H. (2015). Influence of grid aspect ratio on planetary boundary layer turbulence in large-eddy simulations. Geoscientific Model Development, 8(10), 3393–3419.

Noda, A. T., & Niino, H. (2005). Genesis and structure of a major tornado in a numerically-simulated supercell storm: Importance of vertical vorticity in a gust front. SOLA, 1, 5–8.

Noda, A. T., & Niino, H. (2010). A numerical investigation of a supercell tornado: Genesis and vorticity budget. Journal of the Meteorological Society of Japan. Series II, 88(2), 135–159.

Ogura, Y. (1962). Convection of isolated masses of a buoyant fluid: A numerical calculation. Journal of Atmospheric Sciences, 19(6), 492–502.

Ogura, Y. (1963). The evolution of a moist convective element in a shallow, conditionally unstable atmosphere: A numerical calculation. Journal of Atmospheric Sciences, 20(5), 407–424.

Ogura, Y., & Phillips, N. A. (1962). Scale analysis of deep and shallow convection in the atmosphere. Journal of Atmospheric Sciences, 19(2), 173–179.

Orf, L. (2017). The use of ZFP lossy compression in tornado-resolving thunderstorm simulations. AGU 2017 Annual Meeting.

Orf, L. (2019). Leigh Orf—Simulation and visualization of thunderstorms, tornadoes, and downbursts.

Orf, L., Kantor, E., & Savory, E. (2012). Simulation of a downburst-producing thunderstorm using a very high-resolution three-dimensional cloud model. Journal of Wind Engineering and Industrial Aerodynamics, 104–106, 547–557.

Orf, L., Wilhelmson, R., Lee, B., & Finley, C. (2014). Genesis and maintenance of a long-track EF5 tornado embedded within a simulated supercell. Paper presented at the 24th Conference on Severe Local Storms, Madison, WI: American Meteorological Society.

Orf, L., Wilhelmson, R., Lee, B., Finley, C., & Houston, A. (2017). Evolution of a long-track violent tornado within a simulated supercell. Bulletin of the American Meteorological Society, 98(1), 45–68.

Orf, L., Wilhelmson, R., & Wicker, L. (2016). Visualization of a simulated long-track EF5 tornado embedded within a supercell thunderstorm. Parallel Computing, 55, 28–34.

Orf, L. G., Anderson, J. R., & Straka, J. M. (1996). A three-dimensional numerical analysis of colliding microburst outflow dynamics. Journal of Atmospheric Sciences, 53(17), 2490–2511.

Orville, H. D. (1964). On mountain upslope winds. Journal of Atmospheric Sciences, 21(6), 622–633.

Orville, H. D. (1965). A numerical study of the initiation of cumulus clouds over mountainous terrain. Journal of Atmospheric Sciences, 22(6), 684–699.

Persson, A. (2005a). Early operational Numerical Weather Prediction outside the USA: An historical introduction. Part I: Internationalism and engineering NWP in Sweden, 1952–69. Meteorological Applications, 12(2), 135–159.

Persson, A. (2005b). Early operational Numerical Weather Prediction outside the USA: An historical introduction. Part II: Twenty countries around the world. Meteorological Applications, 12(3), 269–289.

Persson, A. (2005c). Early operational Numerical Weather Prediction outside the USA: An historical introduction. Part III: Endurance and mathematics–British NWP, 1948–1965. Meteorological Applications, 12(4), 381–413.

Pielke, R. A., Cotton, W. R., Walko, R. L., Tremback, C. J., Lyons, W. A., Grasso, L. D., . . . Copeland, J. H. (1992). A comprehensive meteorological modeling system—RAMS. Meteorology and Atmospheric Physics, 49(1), 69–91.

Roberts, B., Xue, M., Schenkman, A. D., & Dawson, D. T. (2016). The role of surface drag in tornadogenesis within an idealized supercell simulation. Journal of Atmospheric Sciences, 73(9), 3371–3395.

Rotunno, R., & Klemp, J. (1985). On the rotation and propagation of simulated supercell thunderstorms. Journal of Atmospheric Sciences, 42(3), 271–292.

Rotunno, R., & Klemp, J. B. (1982). The influence of the shear-induced pressure gradient on thunderstorm motion. Monthly Weather Review, 110(2), 136–151.

Rotunno, R., Klemp, J. B., & Weisman, M. L. (1988). A theory for strong, long-lived squall lines. Journal of Atmospheric Sciences, 45(3), 463–485.

Rutledge, S. A., & Hobbs, P. (1983). The mesoscale and microscale structure and organization of clouds and precipitation in midlatitude cyclones. VIII: A model for the “seeder–feeder” process in warm-frontal rainbands. Journal of Atmospheric Sciences, 40(5), 1185–1206.

Saito, K., Fujita, T., Yamada, Y., Ishida, J.-I., Kumagai, Y., Aranami, K., . . . Yamazaki, Y. (2006). The operational JMA nonhydrostatic mesoscale model. Monthly Weather Review, 134(4), 1266–1298.

Sathye, A., Xue, M., Bassett, G., & Droegemeier, K. (1997). Parallel weather modeling with the advanced regional prediction system. Parallel Computing, 23(14), 2243–2256.

Schalkwijk, J., Griffith, E. J., Post, F. H., & Jonker, H. J. J. (2012). High-performance simulations of turbulent clouds on a desktop PC: Exploiting the GPU. Bulletin of the American Meteorological Society, 93(3), 307–314.

Schenkman, A. D., Xue, M., & Dawson, D. T., II. (2016). The cause of internal outflow surges in a high-resolution simulation of the 8 May 2003 Oklahoma City tornadic supercell. Journal of Atmospheric Sciences, 73(1), 353–370.

Schenkman, A. D., Xue, M., & Hu, M. (2014). Tornadogenesis in a high-resolution simulation of the 8 May 2003 Oklahoma City supercell. Journal of Atmospheric Sciences, 71(1), 130–154.

Schenkman, A. D., Xue, M., & Shapiro, A. (2012). Tornadogenesis in a simulated mesovortex within a mesoscale convective system. Journal of Atmospheric Sciences, 69(11), 3372–3390.

Schlesinger, R. E. (1973a). A numerical model of deep moist convection: Part I. Comparative experiments for variable ambient moisture and wind shear. Journal of Atmospheric Sciences, 30(5), 835–856.

Schlesinger, R. E. (1973b). A numerical model of deep moist convection: Part II. A prototype experiment and variations upon it. Journal of Atmospheric Sciences, 30(7), 1374–1391.

Schlesinger, R. E. (1975). A three-dimensional numerical model of an isolated deep convective cloud: Preliminary results. Journal of Atmospheric Sciences, 32(5), 934–957.

Schlesinger, R. E. (1980). A three-dimensional numerical model of an isolated thunderstorm. Part II: Dynamics of updraft splitting and mesovortex couplet evolution. Journal of Atmospheric Sciences, 37(2), 395–420.

Shuman, F. G. (1989). History of numerical weather prediction at the National Meteorological Center. Weather and Forecasting, 4(3), 286–296.

Skamarock, W., Klemp, J., Dudhia, J., Gill, D., Barker, D., Wang, W., . . . Duda, M. (2008). A description of the Advanced Research WRF Version 3 [Technical report].

Smagorinsky, J. (1958). On the numerical integration of the primitive equations of motion for baroclinic flow in a closed region. Monthly Weather Review, 86(12), 457–466.

Smarr, L., Kogut, J., Kuck, D., Wilhelmson, R., Wolynes, P., Hess, K., . . . McMeeking, R. (1983). A center for scientific and engineering supercomputing [Proposal].

Srivastava, R. C. (1967). A study of the effect of precipitation on cumulus dynamics. Journal of Atmospheric Sciences, 24(1), 36–45.

Steiner, J. T. (1973). A three-dimensional model of cumulus cloud development. Journal of Atmospheric Sciences, 30(3), 414–435.

Straka, J. M., & Anderson, J. R. (1993). Numerical simulations of microburst-producing storms: Some results from storms observed during COHMEX. Journal of Atmospheric Sciences, 50(10), 1329–1348.

Takeda, T. (1971). Numerical simulation of a precipitating convective cloud: The formation of a “long-lasting” cloud. Journal of Atmospheric Sciences, 28(3), 350–376.

Tripoli, G. J., & Cotton, W. R. (1989). Numerical study of an observed orogenic mesoscale convective system. Part 1: Simulated genesis and comparison with observations. Monthly Weather Review, 117(2), 273–304.

Vanderbauwhede, W., & Takemi, T. (2013). An investigation into the feasibility and benefits of GPU/multicore acceleration of the weather research and forecasting model. In 2013 International Conference on High Performance Computing Simulation (HPCS) (pp. 482–489).

Weisman, M. L., & Klemp, J. B. (1982). The dependence of numerically simulated convective storms on vertical wind shear and buoyancy. Monthly Weather Review, 110(6), 504–520.

Weisman, M. L., & Klemp, J. B. (1984). The structure and classification of numerically simulated convective storms in directionally varying wind shears. Monthly Weather Review, 112(12), 2479–2498.

Weisman, M. L., & Rotunno, R. (2004). “A theory for strong long-lived squall lines” revisited. Journal of Atmospheric Sciences, 61(4), 361–382.

Wicker, L. J., & Wilhelmson, R. B. (1995). Simulation and analysis of tornado development and decay within a three-dimensional supercell thunderstorm. Journal of Atmospheric Sciences, 52(15), 2675–2703.

Wilhelmson, R. (1974). The life cycle of a thunderstorm in three dimensions. Journal of Atmospheric Sciences, 31(6), 1629–1651.

Wilhelmson, R., Brooks, H., Jewett, B., Shaw, C., Wicker, L., & Coauthors. (1989). Study of a numerically modeled severe storm.

Wilhelmson, R., & Ogura, Y. (1972). The pressure perturbation and the numerical modeling of a cloud. Journal of Atmospheric Sciences, 29(7), 1295–1307.

Wilhelmson, R. B., & Chen, C.-S. (1982). A simulation of the development of successive cells along a cold outflow boundary. Journal of Atmospheric Sciences, 39(7), 1466–1483.

Wilhelmson, R. B., Jewett, B. F., Shaw, C., Wicker, L. J., Arrott, M., Bushell, C. B., . . . Yost, J. B. (1990). A study of the evolution of a numerically modeled severe storm. International Journal of Supercomputing Applications, 4(2), 20–36.

Wilhelmson, R. B., & Klemp, J. B. (1978). A numerical study of storm splitting that leads to long-lived storms. Journal of Atmospheric Sciences, 35(10), 1974–1986.

Wilhelmson, R. B., & Klemp, J. B. (1981). A three-dimensional numerical simulation of splitting severe storms on 3 April 1964. Journal of Atmospheric Sciences, 38(8), 1581–1600.

Xue, M. (2004). Tornadogenesis within a simulated supercell storm. Paper presented at the 22nd Severe Local Storms Conference. Hyannis, MA: American Meteorological Society.

Xue, M., Wang, D., Gao, J., Brewster, K., & Droegemeier, K. K. (2003). The Advanced Regional Prediction System (ARPS), storm-scale numerical weather prediction and data assimilation. Meteorology and Atmospheric Physics, 82(1), 139–170.

Yao, D., Xue, H., Yin, J., Sun, J., Liang, X., & Guo, J. (2018). Investigation into the formation, structure, and evolution of an EF4 tornado in East China using a high-resolution numerical simulation. Journal of Meteorological Research, 32(2), 157–171.