1-10 of 10 Results

  • Keywords: simulation modeling

Article

Computer simulations can be defined in three categories: computational modeling simulations, human-computer simulations, and computer-mediated simulations. These categories are defined primarily by the roles that computers and humans take in the implementation of the simulation. The literature on the use of simulations in the international studies classroom considers under what circumstances and in what ways the use of simulations creates pedagogical benefits when compared with other teaching methods. But another issue to consider is under what circumstances and in what ways the use of computers can add (or subtract) pedagogical value when compared to other methods for implementing simulations. There are five alleged benefits of using simulation: encouraging cognitive and affective learning, enhancing student motivation, creating opportunities for longer-term learning, increasing personal efficiency, and promoting student-teacher relations. Moreover, in regard to the use of computer simulations, there is a set of good practices to consider. The first good practice emerges out of a realization of the unequal level of access to technology. The second emerges from a clear understanding of the strengths and weaknesses of a computer-assisted simulation. The final and perhaps most fundamental good practice emerges from the idea that computers, and technology more generally, are not ends in themselves but a means to help instructors reach a set of pedagogical goals.

Article

Gretchen J. Van Dyke

The United Nations and the European Union are extraordinarily complex institutions that pose considerable challenges for international studies faculty who work to expose their students to the theoretical, conceptual, and factual material associated with both entities. One way that faculty across the academic spectrum are bringing the two institutions “alive” for their students is by utilizing in-class and multi-institutional simulations of both the UN and the EU. Model United Nations (MUN) and Model European Union simulations are experiential learning tools used by an ever-increasing number of students. The roots of Model UN simulations can be traced to the student-led Model League of Nations simulations that began at Harvard University in the 1920s. Current secondary school MUN offerings include two initiatives, Global Classrooms and the Montessori Model United Nations (Montessori-MUN). Compared to the institutionalized MUNs, Model EU programs are relatively young. There are three long-standing, distinct, intercollegiate EU simulations in the United States: one in New York, one in the Mid-Atlantic region, and one in the Midwest. As faculty continue to engage their students with Model UN and Model EU simulations, new scholarship is expected to continue documenting their experiences while emphasizing the value of active and experiential learning pedagogies. In addition, future research will highlight new technologies as critical tools in the Model UN and Model EU preparatory processes and offer quantitative data that supports well-established qualitative conclusions about the positive educational value of these simulations.

Article

Pieter van Baal and Hendriek Boshuizen

In most countries, non-communicable diseases have overtaken infectious diseases as the most important causes of death. Many non-communicable diseases that were previously lethal have become chronic, and this has changed the healthcare landscape in terms of treatment and prevention options. Currently, a large part of healthcare spending is targeted at curing and caring for the elderly, who have multiple chronic diseases. In this context, prevention plays an important role, as there are many risk factors amenable to prevention policies that are related to multiple chronic diseases. This article discusses the use of simulation modeling to better understand the relations between chronic diseases and their risk factors, with the aim of informing health policy. Simulation modeling sheds light on important policy questions related to population aging and priority setting. The focus is on the modeling of multiple chronic diseases in the general population and on how to consistently model the relations between chronic diseases and their risk factors by combining various data sources. Methodological issues in chronic disease modeling and how these relate to the availability of data are discussed. Here, a distinction is made between (a) issues related to the construction of the epidemiological simulation model and (b) issues related to linking outcomes of the epidemiological simulation model to economically relevant outcomes such as quality of life, healthcare spending, and labor market participation. Based on this distinction, several simulation models that link risk factors to multiple chronic diseases are discussed in order to explore how these issues are handled in practice. Recommendations for future research are provided.
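As a rough illustration of the kind of epidemiological simulation model discussed here, the following Python sketch follows a closed cohort in which one risk factor (smoking) raises the incidence of two chronic diseases, which in turn raise mortality. It is a minimal sketch under assumed parameters: the prevalence, baseline probabilities, and relative risks are invented for demonstration and are not estimates from the article.

    # Toy cohort microsimulation: one risk factor linked to two chronic diseases.
    # All rates and relative risks are illustrative assumptions, not published estimates.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    smoker = rng.random(n) < 0.25                 # assumed smoking prevalence
    has_chd = np.zeros(n, dtype=bool)             # coronary heart disease
    has_copd = np.zeros(n, dtype=bool)            # chronic obstructive pulmonary disease
    alive = np.ones(n, dtype=bool)

    base = {"chd": 0.004, "copd": 0.002, "death": 0.01}   # annual baseline probabilities
    rr = {"chd": 2.0, "copd": 5.0}                        # assumed relative risks for smokers

    for year in range(30):                                # 30 annual cycles
        p_chd = base["chd"] * np.where(smoker, rr["chd"], 1.0)
        p_copd = base["copd"] * np.where(smoker, rr["copd"], 1.0)
        has_chd |= alive & (rng.random(n) < p_chd)
        has_copd |= alive & (rng.random(n) < p_copd)
        p_death = base["death"] * (1 + has_chd + has_copd)  # comorbidity raises mortality
        alive &= rng.random(n) >= p_death

    print("alive after 30 years:", alive.mean())
    print("share of survivors with both diseases:", (has_chd & has_copd)[alive].mean())

Linking such epidemiological output to economically relevant outcomes would then amount to attaching, for example, assumed annual costs or quality-of-life weights to each disease state.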

Article

Gabriele Gramelsberger

Climate and simulation have become interwoven concepts during the past decades because, on the one hand, climate scientists should not experiment with the real climate and, on the other hand, societies want to know how the climate will change in the coming decades. Both in silico experiments for a better understanding of climatic processes and forecasts of possible futures can be achieved only by using climate models. The article investigates the possibilities and problems of model-mediated knowledge for science and societies. It explores historically how climate became a subject of science and of simulation, what kind of infrastructure is required to apply models and simulations properly, and how model-mediated knowledge can be evaluated. In addition to an overview of the diversity and variety of models in climate science, the article focuses on quasiheuristic climate models, with an emphasis on atmospheric models.

Article

Ravi Bhavnani and David Sylvan

On several occasions in the past 70 years, simulation as a research program in foreign policy analysis (FPA) rose, then fell. We begin by defining what we mean by simulations before reviewing the two main streams of simulation work in FPA: human-based and computer-based, with the latter itself comprising two streams. We conclude with a speculative discussion of what happened to simulation in FPA and what the future may hold.

Article

Helena Sofia Rodrigues and Manuel José Fonseca

In the context of epidemiology, an epidemic is defined as the spread of an infectious disease to a large number of people, in a given population, within a short period of time. In the marketing field, a message is viral when it is broadly sent and received by the target market through person-to-person transmission. This communication strategy is currently viewed as an evolution of word of mouth under the influence of information technologies, and it is called viral marketing. The similarity between an epidemic and the viral marketing process is notable, yet the critical factors behind this communication strategy’s effectiveness remain largely unknown. A literature review specifying some techniques and examples to optimize the use of viral marketing is therefore useful. There are advantages and disadvantages to using social networks for the reproduction of viral information, and it is very hard to predict whether a campaign will become viral. However, there are some techniques for improving advertising/marketing communication that viral campaigns have in common and that can be used to produce a better communication campaign overall. It is believed that the mathematical models used in epidemiology could be a good way to model marketing communication in a specific field. Indeed, the epidemiological SIR (Susceptible-Infected-Recovered) model helps to reveal the effects of a viral marketing strategy. A comparison between the disease parameters and their marketing counterparts, together with simulations using Matlab software, explores the parallels between a virus and the viral marketing approach.
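To make the stated parallel concrete, the sketch below integrates the SIR equations for a hypothetical campaign in Python (used here in place of the Matlab simulations mentioned in the abstract); the transmission and drop-out rates are arbitrary assumptions chosen only for illustration.

    # Minimal SIR sketch of a viral marketing campaign (illustrative parameters only).
    # S: share of the market unaware of the message, I: actively sharing it, R: no longer sharing.
    import numpy as np
    from scipy.integrate import solve_ivp

    def sir(t, y, beta, gamma):
        s, i, r = y
        return [-beta * s * i,             # contacts that pass the message on
                beta * s * i - gamma * i,  # new spreaders minus those who lose interest
                gamma * i]                 # spreaders who stop sharing

    beta, gamma = 0.4, 0.1                 # assumed transmission and drop-out rates
    y0 = [0.99, 0.01, 0.0]                 # initial fractions of the target market
    sol = solve_ivp(sir, (0, 60), y0, args=(beta, gamma), t_eval=np.linspace(0, 60, 200))
    print("peak share of active spreaders:", round(sol.y[1].max(), 3))

In this reading, the ratio beta/gamma plays the role of the average number of new recipients each spreader reaches before losing interest; the campaign spreads widely only if that ratio exceeds one.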

Article

In this article, the concepts and background of regional climate modeling of the future Baltic Sea are summarized, and state-of-the-art projections, climate change impact studies, and challenges are discussed. The focus is on projected oceanographic changes in a future climate. However, as these changes may have a significant impact on biogeochemical cycling, nutrient load scenario simulations in future climates are briefly discussed as well. The Baltic Sea is special compared to other coastal seas: it is a tideless, semi-enclosed sea with a large freshwater and nutrient supply from a partly heavily populated catchment area, it has a long response time of about 30 years, and, in the early 21st century, it is warming faster than any other coastal sea in the world. Hence, policymakers have requested the development of nutrient load abatement strategies under a changing climate. For this purpose, large ensembles of coupled climate–environmental scenario simulations based upon high-resolution circulation models were developed to estimate changes in water temperature, salinity, sea-ice cover, sea level, oxygen, nutrient, and phytoplankton concentrations, and water transparency, together with uncertainty ranges. Uncertainties in scenario simulations of the Baltic Sea are considerable. Sources of uncertainty are global and regional climate model biases, natural variability, and unknown greenhouse gas emission and nutrient load scenarios. Unknown early 21st-century and future bioavailable nutrient loads from land and atmosphere, together with the experimental setup of the dynamical downscaling technique, are perhaps the largest sources of uncertainty for marine biogeochemistry projections. These high uncertainties could potentially be reduced through investments in new multi-model ensemble simulations built on better experimental setups, improved models, and more plausible nutrient loads. The development of community models for the Baltic Sea region with improved performance, as well as of commonly coordinated scenario simulation experiments, is recommended.

Article

William Joseph Gutowski and Filippo Giorgi

Regional climate downscaling has been motivated by the objective of understanding how climate processes not resolved by global models can influence the evolution of a region’s climate and by the need to provide climate change information to other sectors, such as water resources, agriculture, and human health, on scales poorly resolved by global models but where impacts are felt. There are four primary approaches to regional downscaling: regional climate models (RCMs), empirical statistical downscaling (ESD), variable resolution global models (VARGCM), and “time-slice” simulations with high-resolution global atmospheric models (HIRGCM). Downscaling using RCMs is often referred to as dynamical downscaling to contrast it with statistical downscaling. Although there have been efforts to coordinate each of these approaches, the predominant effort to coordinate regional downscaling activities has involved RCMs. Initially, downscaling activities were directed toward specific, individual projects. Typically, there was little similarity between these projects in terms of focus region, resolution, time period, boundary conditions, and phenomena of interest. The lack of coordination hindered evaluation of downscaling methods, because sources of success or problems in downscaling could be specific to model formulation, the phenomena studied, or the method itself. This prompted the organization of the first dynamical-downscaling intercomparison projects in the 1990s and early 2000s. These programs, and several others that followed, provided coordination focused on an individual region and an opportunity to understand sources of differences between downscaling models, while overall illustrating the capabilities of dynamical downscaling for representing climatologically important regional phenomena. However, coordination between programs was limited. Recognition of the need for further coordination led to the formation of the Coordinated Regional Downscaling Experiment (CORDEX) under the auspices of the World Climate Research Programme (WCRP). Initial CORDEX efforts focused on establishing and applying a common framework for carrying out dynamically downscaled simulations over multiple regions around the world. This framework has now become an organizing structure for downscaling activities worldwide. Further efforts under the CORDEX program have strengthened the program’s scientific motivations, such as assessing added value in downscaling, regional human influences on climate, coupled ocean–land–atmosphere modeling, precipitation systems, extreme events, and local wind systems. In addition, CORDEX is promoting expanded efforts to compare the capabilities of all downscaling methods for producing regional information. These efforts are motivated in part by the scientific goal of thoroughly understanding regional climate and its change, and by the growing need for climate information to assist climate services for a multitude of climate-impacted sectors.

Article

Nick Malleson, Alison Heppenstall, and Andrew Crooks

Since the earliest geographical explorations of criminal phenomena, scientists have come to the realization that crime occurrences can often be best explained by analysis at local scales. For example, the works of Guerry and Quetelet—which are often credited as being the first spatial studies of crime—analyzed data that had been aggregated to regions roughly comparable to US states. The next seminal work on spatial crime patterns came from the Chicago School in the 20th century and increased the spatial resolution of analysis to the census tract (an American administrative area that is designed to contain approximately 4,000 individual inhabitants). With the availability of higher-quality spatial data, as well as improvements in the computing infrastructure (particularly with respect to spatial analysis and mapping), more recent empirical spatial criminology work can operate at even higher resolutions; the “crime at places” literature regularly highlights the importance of analyzing crime at the street segment or at even finer scales. These empirical realizations—that crime patterns vary substantially at micro places—are well grounded in the core environmental criminology theories of routine activity theory, the geometric theory of crime, and the rational choice perspective. Each theory focuses on the individual-level nature of crime, the behavior and motivations of individual people, and the importance of the immediate surroundings. For example, routine activity theory stipulates that a crime is possible when an offender and a potential victim meet at the same time and place in the absence of a capable guardian. The geometric theory of crime suggests that individuals build up an awareness of their surroundings as they undertake their routine activities, and it is where these areas overlap with crime opportunities that crimes are most likely to occur. Finally, the rational choice perspective suggests that the decision to commit a crime is partially a cost-benefit analysis of the risks and rewards. To properly understand or model the processes described by these three theories, it is important to capture the motivations, awareness, rationality, immediate surroundings, etc., of the individual and to include a highly disaggregate representation of space (i.e., “micro-places”). Unfortunately, one of the most common methods for modeling crime, regression, is poorly suited to capturing these dynamics. As with most traditional modeling approaches, regression models represent the underlying system through mathematical aggregations. The resulting models are therefore well suited to systems that behave in a linear fashion (e.g., where a change in model input leads to a predictable change in the model output) and where low-level heterogeneity is not important (i.e., we can assume that everyone in a particular group of people will behave in the same way). However, as alluded to earlier, the crime system does not necessarily meet these assumptions. To really understand the dynamics of crime patterns, and to be able to properly represent the underlying theories, it is necessary to represent the behavior of the individual system components (i.e., people) directly. For this reason, many scientists from a variety of different disciplines are turning to individual-level modeling techniques such as agent-based modeling.
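As an illustration of that individual-level representation, the short Python sketch below encodes routine activity theory directly: a crime is recorded whenever an offender and a target converge on the same cell with no guardian present. The grid size, agent counts, and random-walk movement rule are arbitrary assumptions used only to show the bottom-up logic, not a calibrated crime model.

    # Toy agent-based model of routine activity theory on a grid (all parameters assumed).
    import random
    from collections import Counter

    random.seed(1)
    SIZE = 20                                    # 20 x 20 grid of micro-places

    def random_cell():
        return (random.randrange(SIZE), random.randrange(SIZE))

    def step(agents):
        # each agent moves to a random neighbouring cell (its routine activity)
        return [((x + random.choice([-1, 0, 1])) % SIZE,
                 (y + random.choice([-1, 0, 1])) % SIZE) for x, y in agents]

    offenders = [random_cell() for _ in range(15)]
    targets = [random_cell() for _ in range(60)]
    guardians = [random_cell() for _ in range(25)]

    crimes = Counter()
    for t in range(500):
        offenders, targets, guardians = step(offenders), step(targets), step(guardians)
        guarded, target_cells = set(guardians), set(targets)
        for cell in offenders:
            if cell in target_cells and cell not in guarded:
                crimes[cell] += 1                # convergence without a capable guardian

    print("total crimes:", sum(crimes.values()))
    print("busiest micro-places:", crimes.most_common(3))

Extending such a sketch, one could give offenders anchor points and awareness spaces, following the geometric theory of crime, so that hot spots emerge where routine activity paths overlap with opportunities.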

Article

Political systems involve citizens, voters, politicians, parties, legislatures, and governments. These political actors interact with each other and dynamically alter their strategies according to the results of their interactions. A major challenge in political science is to understand the dynamic interactions between political actors and extrapolate from the process of individual political decision making to collective outcomes. Agent-based modeling (ABM) offers a means to comprehend and theorize the nonlinear, recursive, and interactive political process. It views political systems as complex, self-organizing, self-reproducing, and adaptive systems consisting of large numbers of heterogeneous agents that follow a set of rules governing their interactions. It allows the specification of agent properties and of the rules governing agent interactions in a simulation, in order to observe how micro-level processes generate macro-level phenomena. It forces researchers to make the assumptions surrounding a theory explicit, facilitates the discovery of extensions and boundary conditions of the modeled theory through what-if computational experiments, and helps researchers understand dynamic processes in the real world. Agent-based models have been built to address critical questions in political decision making, including why voter turnout remains high, how party coalitions form, how voters’ knowledge and emotion affect election outcomes, and how political attitudes change through a campaign. These models illustrate the use of ABM in explicating the assumptions and rules of theoretical frameworks, simulating repeated execution of these rules, and revealing emergent patterns and their boundary conditions. While ABM has limitations in external validity and robustness, it provides political scientists with a bottom-up approach to studying a complex system by clearly defining the behavior of various actors, and it can generate theoretical insights into political phenomena.
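As a minimal illustration of this bottom-up logic, the Python sketch below implements a simple bounded-confidence rule for attitude change during a campaign: two randomly chosen agents move closer only if their attitudes are already sufficiently similar. The population size, confidence bound, and step size are assumptions chosen for demonstration, not a model taken from the studies discussed.

    # Toy agent-based sketch of attitude change during a campaign (parameters assumed).
    import random

    random.seed(42)
    N, BOUND, STEP, ROUNDS = 200, 0.2, 0.25, 20_000
    attitudes = [random.random() for _ in range(N)]     # initial attitudes on a 0-1 scale

    for _ in range(ROUNDS):
        i, j = random.sample(range(N), 2)               # two agents interact
        if abs(attitudes[i] - attitudes[j]) < BOUND:    # they listen only if close enough
            shift = STEP * (attitudes[j] - attitudes[i])
            attitudes[i] += shift
            attitudes[j] -= shift

    # macro-level outcome emerging from micro-level interactions: distinct opinion clusters
    clusters = len({round(a, 1) for a in attitudes})
    print("attitude clusters after the campaign:", clusters)

Re-running such what-if experiments with different confidence bounds shows how the same micro-level rule can yield consensus, polarization, or fragmentation at the macro level, the kind of boundary-condition exploration described above.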