1-20 of 24 Results

  • Keywords: simulations

Article

Computer simulations can be defined in three categories: computational modeling simulations, human-computer simulations, and computer-mediated simulations. These categories of simulations are defined primarily by the role computers take and by the role humans take in the implementation of the simulation. The literature on the use of simulations in the international studies classroom considers under what circumstances and in what ways the use of simulations creates pedagogical benefits when compared with other teaching methods. But another issue to consider is under what circumstances and in what ways the use of computers can add (or subtract) pedagogical value when compared to other methods for implementing simulations. There are six alleged benefits of using simulations: encouraging cognitive learning, encouraging affective learning, enhancing student motivation, creating opportunities for longer-term learning, increasing personal efficiency, and promoting student-teacher relations. Moreover, in regard to the use of computer simulations, there is a set of good practices to consider. The first good practice emerges out of a realization of the unequal level of access to technology. The second good practice emerges from a clear understanding of the strengths and weaknesses of a computer-assisted simulation. The final and perhaps most fundamental good practice emerges from the idea that computers and technology more generally are not ends in themselves, but a means to help instructors reach a set of pedagogical goals.

Article

Although instructors are increasingly adopting the practices of online engagement in the field of international studies, there are few discussions in the disciplinary literature of its methods, advantages, and disadvantages. Online engagement can be considered as a type of class participation that takes place on the Internet. It refers to engagement between groups of students and an instructor, as well as engagement among students. Online engagement activities can be integrated into fully online courses, or they may supplement in-class participation in traditional courses. There are five common methods that can be used to create online engagement among students: online discussion boards, class blogs, social networking sites such as Twitter and Facebook, wikis, and online simulations. Each of these has its advantages and disadvantages. For each there are case studies in the literature, and best practices can be summarized. Online engagement should not be technology-driven; rather, it should be integrated with the course content and learning outcomes. Instructors should craft assignments in ways that encourage creative and critical thinking, and should take into account the particular problems that arise in the absence of face-to-face interaction. Online engagement activities should be chosen to mitigate some of the issues with traditional classroom activities, and/or develop novel skills that are relevant to the 21st-century economy. These activities should be accessible to all—including, but not restricted to, students with disabilities. Instructors and institutions should also be aware of ethical and legal issues, such as privacy and the ownership of the data that users generate through online engagement activities.

Article

Pieter van Baal and Hendriek Boshuizen

In most countries, non-communicable diseases have overtaken infectious diseases as the most important causes of death. Many non-communicable diseases that were previously lethal diseases have become chronic, and this has changed the healthcare landscape in terms of treatment and prevention options. Currently, a large part of healthcare spending is targeted at curing and caring for the elderly, who have multiple chronic diseases. In this context, prevention plays an important role, as there are many risk factors amenable to prevention policies that are related to multiple chronic diseases. This article discusses the use of simulation modeling to better understand the relations between chronic diseases and their risk factors with the aim of informing health policy. Simulation modeling sheds light on important policy questions related to population aging and priority setting. The focus is on the modeling of multiple chronic diseases in the general population and how to consistently model the relations between chronic diseases and their risk factors by combining various data sources. Methodological issues in chronic disease modeling and how these relate to the availability of data are discussed. Here, a distinction is made between (a) issues related to the construction of the epidemiological simulation model and (b) issues related to linking outcomes of the epidemiological simulation model to economically relevant outcomes such as quality of life, healthcare spending, and labor market participation. Based on this distinction, several simulation models are discussed that link risk factors to multiple chronic diseases in order to explore how these issues are handled in practice. Recommendations for future research are provided.
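To make the kind of risk-factor-to-disease linkage described above more concrete, the following minimal microsimulation sketch tracks individuals whose exposure to a single risk factor raises their annual probability of developing a chronic disease, which in turn raises their mortality. It is an illustrative toy under stated assumptions, not a model from the article; all parameter values and function names are hypothetical.

```python
import random

# Minimal microsimulation sketch (illustrative parameters only):
# each individual carries a risk factor that raises the annual
# probability of developing a chronic disease, which in turn
# raises annual mortality.

BASE_INCIDENCE = 0.01      # annual disease incidence without the risk factor
RISK_RATIO = 2.0           # relative risk for exposed individuals
BASE_MORTALITY = 0.02      # annual mortality without disease
DISEASE_MORTALITY = 0.05   # annual mortality with disease

def simulate_person(exposed, years=40):
    """Return (years lived, years lived with disease) for one individual."""
    diseased = False
    lived = with_disease = 0
    for _ in range(years):
        incidence = BASE_INCIDENCE * (RISK_RATIO if exposed else 1.0)
        if not diseased and random.random() < incidence:
            diseased = True
        mortality = DISEASE_MORTALITY if diseased else BASE_MORTALITY
        if random.random() < mortality:
            break
        lived += 1
        with_disease += diseased
    return lived, with_disease

def run(n=10_000, exposure_rate=0.3):
    """Average years lived and years lived with disease over a synthetic cohort."""
    results = [simulate_person(random.random() < exposure_rate) for _ in range(n)]
    mean_years = sum(r[0] for r in results) / n
    mean_disease_years = sum(r[1] for r in results) / n
    return mean_years, mean_disease_years

if __name__ == "__main__":
    print(run())
```

Extending such a sketch toward the multi-disease models the article discusses would mean adding several correlated diseases, calibrating transition probabilities to epidemiological data, and attaching quality-of-life and cost weights to each health state.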

Article

Carolyn M. Shaw and Amanda Rosen

Simulations and games have been used in the international studies classroom for over fifty years, producing a considerable body of literature devoted to their study and evolution. From the earliest use of these techniques in the classroom, instructors have sought to identify and characterize the benefits of these tools for student learning. Scholars note, in particular, the value of simulations and games in achieving specific learning objectives that are not easily conveyed through lecture format. More recent writings have focused on what specific lessons can be conveyed through different types of exercises and have included detailed descriptions or appendices so that others can use these exercises. As simulations and games have become more widely incorporated into the classroom, a growing body of literature has provided instructions on how to custom design simulations to fit instructors’ specific needs. Although initial evaluations of the effectiveness of simulations were weakened by flaws in research design, sampling, or other methodological problems, newer studies have become more sophisticated. Rather than simply arguing that simulations are (or are not) a better teaching tool than traditional class formats, there is greater recognition that simulations are simply one technique of many that can promote student learning. Scholars, however, are still seeking to understand under what conditions simulations and games are especially beneficial in the classroom.

Article

Gretchen J. Van Dyke

The United Nations and the European Union are extraordinarily complex institutions that pose considerable challenges for international studies faculty who work to expose their students to the theoretical, conceptual, and factual material associated with both entities. One way that faculty across the academic spectrum are bringing the two institutions “alive” for their students is by utilizing in-class and multi-institutional simulations of both the UN and the EU. Model United Nations (MUN) and Model European Union simulations are experiential learning tools used by an ever-increasing number of students. The roots of Model UN simulations can be traced to the student-led Model League of Nations simulations that began at Harvard University in the 1920s. Current secondary school MUN offerings include two initiatives, Global Classrooms and the Montessori Model United Nations (Montessori-MUN). Compared to the institutionalized MUNs, Model EU programs are relatively young. There are three long-standing, distinct, intercollegiate EU simulations in the United States: one in New York, one in the Mid-Atlantic region, and one in the Midwest. As faculty continue to engage their students with Model UN and Model EU simulations, new scholarship is expected to continue documenting their experiences while emphasizing the value of active and experiential learning pedagogies. In addition, future research will highlight new technologies as critical tools in the Model UN and Model EU preparatory processes and offer quantitative data that supports well-established qualitative conclusions about the positive educational value of these simulations.

Article

Aidan Moran, Nick Sevdalis, and Lauren Wallace

At first glance, there are certain similarities between performance in surgery and that in competitive sports. Clearly, both require exceptional gross and fine motor ability and effective concentration skills, and both are routinely performed in dynamic environments, often under time constraints. On closer inspection, however, crucial differences emerge between these skilled domains. For example, surgery does not involve directly antagonistic opponents competing for victory. Nevertheless, analogies between surgery and sport have contributed to an upsurge of research interest in the psychological processes that underlie expertise in surgical performance. Of these processes, perhaps the most frequently investigated in recent years is motor imagery (MI), the cognitive simulation skill that enables us to rehearse actions in our imagination without engaging in the physical movements involved. Research on motor imagery training (MIT; also called motor imagery practice, MIP) has important theoretical and practical implications. Specifically, at a theoretical level, hundreds of experimental studies in psychology have demonstrated the efficacy of MIT/MIP in improving skill learning and skilled performance in a variety of fields such as sport and music. The most widely accepted explanation of these effects comes from “simulation theory,” which postulates that executed and imagined actions share some common neural circuits and cognitive mechanisms. Put simply, imagining a skill activates some of the brain areas and neural circuits that are involved in its actual execution. Accordingly, systematic engagement in MI appears to “prime” the brain for optimal skilled performance. At the practical level, as surgical instruction has moved largely from an apprenticeship model (the so-called see one, do one, teach one approach) to one based on simulation technology and practice (e.g., the use of virtual reality equipment), there has been a corresponding growth of interest in the potential of cognitive training techniques (e.g., MIT/MIP) to improve and augment surgical skills and performance. Although these cognitive training techniques suffer from both conceptual confusion (e.g., with regard to the clarity of key terms) and inadequate empirical validation, they offer considerable promise in the quest for a cost-effective supplementary training tool in surgical education. Against this background, it is important for researchers and practitioners alike to explore the cognitive psychological factors (such as motor imagery) that underlie surgical skill learning and performance.

Article

Philip M. Ouellette and David Wilkerson

Recent technological advances have revolutionized the way we teach, learn, and practice social work. Due to increases in educational costs and the need for students to maintain family and work responsibilities, an increasing number of social work programs have turned to today’s advances in technology to deliver their courses and programs. This change has resulted in the creative use of new multimedia tools and online pedagogical strategies to offer distance web-based educational programming. With increases in technology-supported programs, recent research studies have identified a number of areas needing further investigation to ensure that quality distance education programs are developed.

Article

Ravi Bhavnani and David Sylvan

On several occasions in the past 70 years, simulation as a research program in foreign policy analysis (FPA) rose, then fell. We begin by defining what we mean by simulations before reviewing the two main streams of simulation work in FPA: human-based and computer-based, with the latter itself comprising two streams. We conclude with a speculative discussion of what happened to simulation in FPA and what the future may hold.

Article

Political systems involve citizens, voters, politicians, parties, legislatures, and governments. These political actors interact with each other and dynamically alter their strategies according to the results of their interactions. A major challenge in political science is to understand the dynamic interactions between political actors and extrapolate from the process of individual political decision making to collective outcomes. Agent-based modeling (ABM) offers a means to comprehend and theorize the nonlinear, recursive, and interactive political process. It views political systems as complex, self-organizing, self-reproducing, and adaptive systems consisting of large numbers of heterogeneous agents that follow a set of rules governing their interactions. It allows the specification of agent properties and rules governing agent interactions in a simulation to observe how micro-level processes generate macro-level phenomena. It forces researchers to make the assumptions underlying a theory explicit, facilitates the discovery of extensions and boundary conditions of the modeled theory through what-if computational experiments, and helps researchers understand dynamic processes in the real world. Agent-based models have been built to address critical questions in political decision making, including why voter turnout remains high, how party coalitions form, how voters’ knowledge and emotion affect election outcomes, and how political attitudes change through a campaign. These models illustrate the use of ABM in explicating assumptions and rules of theoretical frameworks, simulating repeated execution of these rules, and revealing emergent patterns and their boundary conditions. While ABM has limitations in external validity and robustness, it provides political scientists with a bottom-up approach to studying a complex system: by clearly defining the behavior of various actors, researchers can generate theoretical insights into political phenomena.
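As a deliberately simplified illustration of the micro-to-macro logic described above, the sketch below places voters on a ring and lets each agent decide whether to turn out based on its neighbors' behavior plus a small random "civic duty" impulse; aggregate turnout then emerges from these individual rules. The rules and parameter values are illustrative assumptions, not drawn from any of the published models the abstract mentions.

```python
import random

# Minimal agent-based model sketch (illustrative, not a published model):
# voters sit on a ring; each round, an agent votes if enough of its
# neighbours voted last round, or with a small random "civic duty" chance.

N_AGENTS = 1000
NEIGHBOURS = 2          # neighbours considered on each side
THRESHOLD = 0.5         # fraction of voting neighbours needed to trigger voting
DUTY_PROB = 0.05        # chance of voting regardless of neighbours

def step(voted):
    """Apply the micro-level decision rule once to every agent."""
    new = []
    for i in range(N_AGENTS):
        neighbours = [voted[(i + d) % N_AGENTS]
                      for d in range(-NEIGHBOURS, NEIGHBOURS + 1) if d != 0]
        social = sum(neighbours) / len(neighbours) >= THRESHOLD
        new.append(social or random.random() < DUTY_PROB)
    return new

def run(rounds=50):
    """Return the macro-level turnout that emerges after repeated interaction."""
    voted = [random.random() < 0.3 for _ in range(N_AGENTS)]  # ~30% initial turnout
    for _ in range(rounds):
        voted = step(voted)
    return sum(voted) / N_AGENTS

if __name__ == "__main__":
    print(f"final turnout: {run():.2f}")
```

Varying THRESHOLD or DUTY_PROB and rerunning is the kind of what-if computational experiment the abstract describes: the explicit micro rules make it clear which assumption drives the aggregate outcome.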

Article

The planetary boundary layer of Mars is a crucial component of the Martian climate and meteorology, as well as a key driver of the surface-atmosphere exchanges on Mars. As such, it is explored by several landers and orbiters; high-resolution atmospheric modeling is used to interpret the measurements by those spacecraft. The planetary boundary layer of Mars is particularly influenced by the strong radiative control of the Martian surface and, as a result, features a more extreme version of planetary boundary layer phenomena occurring on Earth. In daytime, the Martian planetary boundary layer is highly turbulent, mixing heat and momentum in the atmosphere up to about 10 kilometers from the surface. Daytime convective turbulence is organized as convective cells and vortices, the latter giving rise to numerous dust devils when dust is lifted and transported in the vortex. The nighttime planetary boundary layer is dominated by stable-layer turbulence, which is much less intense than in the daytime, and slope winds in regions characterized by uneven topography. Clouds and fogs are associated with the planetary boundary layer activity on Mars.

Article

In the years following the Second World War, the U.S. government played a prominent role in the support of basic scientific research. The National Science Foundation (NSF) was created in 1950 with the primary mission of supporting fundamental science and engineering, excluding medical sciences. Over the years, the NSF has operated from the “bottom up,” keeping close track of research around the United States and the world while maintaining constant contact with the research community to identify ever-moving horizons of inquiry. In the 1950s the field of meteorology was something of a poor cousin to the other branches of science; forecasting was considered more of a trade than a discipline founded on sound theoretical foundations. Realizing the importance of the field to both the economy and national security, the NSF leadership made a concerted effort to enhance understanding of the global atmospheric circulation. The National Center for Atmospheric Research (NCAR) was established to complement ongoing research efforts in academic institutions; it has played a pivotal role in providing observational and modeling tools to the emerging cadre of researchers in the disciplines of meteorology and atmospheric sciences. As understanding of the predictability of the coupled atmosphere-ocean system grew, the field of climate science emerged as a natural outgrowth of meteorology, oceanography, and atmospheric sciences. The NSF played a leading role in the implementation of major international programs such as the International Geophysical Year (IGY), the Global Weather Experiment, the World Ocean Circulation Experiment (WOCE), and Tropical Ocean Global Atmosphere (TOGA). Through these programs, understanding of the coupled climate system comprising atmosphere, ocean, land, ice sheets, and sea ice greatly improved. Consistent with its mission, the NSF supported projects that advanced fundamental knowledge of forcing and feedbacks in the coupled atmosphere-ocean-land system. Research projects have included theoretical, observational, and modeling studies of the following: the general circulation of the stratosphere and troposphere; the processes that govern climate; the causes of climate variability and change; methods of predicting climate variations; climate predictability; development and testing of parameterization of physical processes; numerical methods for use in large-scale climate models; the assembly and analysis of instrumental and/or modeled climate data; data assimilation studies; and the development and use of climate models to diagnose and simulate climate variability and change. Climate scientists work together on an array of topics spanning time scales from the seasonal to the centennial. The NSF also supports research on the natural evolution of the earth’s climate on geological time scales with the goal of providing a baseline for present variability and future trends. The development of paleoclimate data sets has resulted in longer-term data for evaluation of model simulations, analogous to the evaluation using instrumental observations. This has enabled scientists to create transformative syntheses of paleoclimate data and modeling outcomes in order to understand the response of the longer-term and higher-magnitude variability of the climate system that is observed in the geological records.
The NSF will continue to address emerging issues in climate and earth-system science through balanced investments in transformative ideas, enabling infrastructure and major facilities to be developed.

Article

Shuiqing Yin and Deliang Chen

Weather generators (WGs) are stochastic models that can generate synthetic climate time series of unlimited length whose statistical properties are similar to those of observed time series for a location or an area. WGs can infill missing data, extend the length of climate time series, and generate meteorological conditions for unobserved locations. Since the 1990s WGs have become an important spatial-temporal statistical downscaling methodology and have been playing an increasingly important role in climate-change impact assessment. Although the majority of the existing WGs have focused on simulation of precipitation for a single site, more and more WGs that consider correlations among multiple sites and multiple variables (including precipitation and nonprecipitation variables such as temperature, solar radiation, wind, humidity, and cloud cover) have been developed for daily and sub-daily scales. Various parametric, semi-parametric, and nonparametric WGs have shown the ability to represent the mean, variance, and autocorrelation characteristics of climate variables at different scales. Two main methodologies have been developed for applications under a changing climate: the change-factor approach and WGs conditioned on large-scale dynamical and thermodynamical weather states. However, the rationality and validity of the assumptions underlying both methodologies need to be carefully checked before they can be used to project future climate change at the local scale. Further, the simulation of extreme values by existing WGs needs to be improved. WGs assimilating multisource observations from ground observations, reanalysis, satellite remote sensing, and weather radar for the continuous simulation of two-dimensional climate fields based on mixed physics-based and stochastic approaches deserve further efforts. An inter-comparison project on a large ensemble of WG methods may be helpful for the improvement of WGs. Due to the applied nature of WGs, their future development also requires inputs from decision-makers and other relevant stakeholders.
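A common single-site design of the kind described above pairs a two-state, first-order Markov chain for wet/dry occurrence with a gamma distribution for wet-day amounts (a Richardson-type generator). The sketch below illustrates that structure; the transition probabilities and gamma parameters are illustrative placeholders, not values fitted to any station.

```python
import random

# Minimal single-site weather generator sketch (Richardson-type):
# precipitation occurrence follows a two-state first-order Markov chain,
# and wet-day amounts are drawn from a gamma distribution.
# All parameter values are illustrative, not fitted to observations.

P_WET_GIVEN_DRY = 0.25   # P(wet today | dry yesterday)
P_WET_GIVEN_WET = 0.60   # P(wet today | wet yesterday)
GAMMA_SHAPE = 0.8        # shape of the wet-day rainfall distribution
GAMMA_SCALE = 6.0        # scale (mm) of the wet-day rainfall distribution

def generate_precipitation(n_days, wet_yesterday=False):
    """Return a synthetic daily precipitation series (mm)."""
    series = []
    for _ in range(n_days):
        p_wet = P_WET_GIVEN_WET if wet_yesterday else P_WET_GIVEN_DRY
        wet_today = random.random() < p_wet
        amount = random.gammavariate(GAMMA_SHAPE, GAMMA_SCALE) if wet_today else 0.0
        series.append(amount)
        wet_yesterday = wet_today
    return series

if __name__ == "__main__":
    sample = generate_precipitation(365)
    wet_days = sum(1 for x in sample if x > 0)
    print(f"wet days: {wet_days}, annual total: {sum(sample):.1f} mm")
```

In practice the parameters would be estimated from station records (often month by month), and the change-factor approach mentioned above would perturb them according to climate-model projections before generating future series.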

Article

With the current rapid growth of cities and the move toward the development of both sustainable and resilient infrastructure systems, it is vital for the structural engineering community to continue to improve their knowledge in earthquake engineering to limit infrastructure damage and the associated social and economic impacts. Historically, the development of such knowledge has been accomplished through the deployment of analytical simulations and experimental testing. Experimental testing is considered the most accurate tool by which local behavior of components or global response of systems can be assessed, assuming the test setup is realistically configured and the experiment is effectively executed. However, issues of scale, equipment capacity, and availability of research funding continue to hinder full-scale testing of complete structures. On the other hand, analytical simulation software is limited to solving specific types of problems and in many cases fails to capture complex behaviors, failure modes, and collapse of structural systems. Hybrid simulation has emerged as a potentially accurate and efficient tool for the evaluation of the response of large and complex structures under earthquake loading. In hybrid (experiment-analysis) simulation, part of a structural system is experimentally represented while the rest of the structure is numerically modeled. Typically, the most critical component is physically represented. By combining a physical specimen and a numerical model, the system-level behavior can be better quantified than by modeling the entire system purely analytically or testing only a component. This article discusses the use of hybrid simulation as an effective tool for the seismic evaluation of structures. First, the chronological development of hybrid simulation is presented with an overview of some of the previously conducted studies. Second, an overview of a hybrid simulation environment is provided. Finally, a hybrid simulation application example on the response of steel frames with semi-rigid connections under earthquake excitations is presented. The simulations included a full-scale physical specimen for the experimental module of a connection, and a 2D finite element model for the analytical module. It is demonstrated that hybrid simulation is a powerful tool for advanced assessment when used with appropriate analytical and experimental realizations of the components and that semi-rigid frames are a viable option in earthquake engineering applications.
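At its core, a hybrid simulation advances a time-stepping integration in which the restoring force of the physically tested substructure is measured at each step and fed back into the numerical model. The single-degree-of-freedom sketch below shows that loop with an explicit central-difference integrator; measure_restoring_force() is a hypothetical stand-in (a linear spring) for the laboratory measurement, and the mass, damping, stiffness, and ground motion are illustrative values, not taken from the study described above.

```python
import math

# Minimal hybrid-simulation loop sketch for one degree of freedom:
# mass and damping are handled numerically, while the restoring force at
# each step would, in a real test, be measured from the physical specimen.

M = 1000.0           # mass (kg), numerical substructure
C = 500.0            # damping (N·s/m), numerical substructure
K_SPECIMEN = 2.0e5   # used only by the stand-in "specimen" below
DT = 0.01            # integration time step (s)

def measure_restoring_force(displacement):
    """Stand-in for the force measured on the physical specimen."""
    return K_SPECIMEN * displacement

def ground_acceleration(t):
    """Illustrative ground motion: a short sine pulse (m/s^2)."""
    return 2.0 * math.sin(2.0 * math.pi * t) if t < 2.0 else 0.0

def run(n_steps=1000):
    """Central-difference time stepping with the 'measured' restoring force."""
    u_prev, u = 0.0, 0.0          # displacements at steps i-1 and i
    history = []
    for i in range(n_steps):
        t = i * DT
        r = measure_restoring_force(u)      # feedback from the "specimen"
        p = -M * ground_acceleration(t)     # effective seismic force
        u_next = ((p - r) * DT**2
                  + 2.0 * M * u
                  - (M - C * DT / 2.0) * u_prev) / (M + C * DT / 2.0)
        history.append(u_next)
        u_prev, u = u, u_next
    return history

if __name__ == "__main__":
    peak = max(abs(x) for x in run())
    print(f"peak displacement: {peak:.4f} m")
```

In an actual test, the displacement command u_next would be imposed on the specimen by actuators and the returned force would replace the linear-spring stand-in, which is what lets the loop capture nonlinear behavior that is hard to model analytically.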

Article

David Kaufman and Alice Ireland

Simulations provide opportunities to extend and enhance the practice, feedback, and assessment provided during teacher education. A simulation is a simplified but accurate, valid, and dynamic model of reality. A simulation allows users to encounter problem situations, test decisions and actions, experience the results, and modify behavior cost-effectively and without risking harm. Simulations may or may not be implemented using digital technologies but increasingly take advantage of them to provide more realism, flexibility, access, and detailed feedback. Simulations have many advantages for learning and practice, including the ability to repeat scenarios with specific learning objectives, practice for longer periods than are available in real life, use trial and error, experience rare or risky situations, and measure outcomes with validated scoring systems. For skills development, a simulation’s outcome measures, combined with debriefing and reflection, serve as feedback for a formative assessment cycle of repeated performance practice and improvement. Simulations are becoming more common in preservice teacher education for skills such as lesson planning and implementation, classroom management, ethical practice, and teaching students with varying learning needs. Preservice teachers can move from theory into action, with more practice time and variety than would be available in limited live practicum sessions and without negatively affecting vulnerable students. While simulations are widely accepted in medical and health education, examples in teacher education have often been research prototypes used in experimental settings. These prototypes and newer commercial examples demonstrate the potential of simulations as a tool for both preservice and in-service teacher education. However, cost, simulation limitations, and a lack of rigorous evidence as to their effectiveness have slowed their widespread adoption.

Article

Nick Malleson, Alison Heppenstall, and Andrew Crooks

Since the earliest geographical explorations of criminal phenomena, scientists have come to the realization that crime occurrences can often be best explained by analysis at local scales. For example, the works of Guerry and Quetelet—which are often credited as being the first spatial studies of crime—analyzed data that had been aggregated to regions approximately similar to US states. The next major seminal work on spatial crime patterns came from the Chicago School in the 20th century and increased the spatial resolution of analysis to the census tract (an American administrative area that is designed to contain approximately 4,000 individual inhabitants). With the availability of higher-quality spatial data, as well as improvements in the computing infrastructure (particularly with respect to spatial analysis and mapping), more recent empirical spatial criminology work can operate at even higher resolutions; the “crime at places” literature regularly highlights the importance of analyzing crime at the street segment or at even finer scales. These empirical realizations—that crime patterns vary substantially at micro places—are well grounded in the core environmental criminology theories: routine activity theory, the geometric theory of crime, and the rational choice perspective. Each theory focuses on the individual-level nature of crime, the behavior and motivations of individual people, and the importance of the immediate surroundings. For example, routine activity theory stipulates that a crime is possible when an offender and a potential victim meet at the same time and place in the absence of a capable guardian. The geometric theory of crime suggests that individuals build up an awareness of their surroundings as they undertake their routine activities, and it is where these areas overlap with crime opportunities that crimes are most likely to occur. Finally, the rational choice perspective suggests that the decision to commit a crime is partially a cost-benefit analysis of the risks and rewards. To properly understand or model the decisions described by these three theories, it is important to capture the motivations, awareness, rationality, immediate surroundings, etc., of the individual and include a highly disaggregate representation of space (i.e., “micro-places”). Unfortunately, one of the most common methods for modeling crime, regression, is poorly suited to capturing these dynamics. As with most traditional modeling approaches, regression models represent the underlying system through mathematical aggregations. The resulting models are therefore well suited to systems that behave in a linear fashion (e.g., where a change in model input leads to a predictable change in the model output) and where low-level heterogeneity is not important (i.e., we can assume that everyone in a particular group of people will behave in the same way). However, as alluded to earlier, the crime system does not necessarily meet these assumptions. To really understand the dynamics of crime patterns, and to be able to properly represent the underlying theories, it is necessary to represent the behavior of the individual system components (i.e., people) directly. For this reason, many scientists from a variety of different disciplines are turning to individual-level modeling techniques such as agent-based modeling.
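To show the kind of individual-level model the passage points toward, the following minimal agent-based sketch encodes routine activity theory directly: offenders, targets, and guardians take random walks on a grid, and a crime is recorded whenever an offender and a target converge on a cell with no guardian present. The grid size, agent counts, and movement rules are illustrative assumptions, not a published model.

```python
import random
from collections import Counter

# Minimal agent-based sketch of routine activity theory (illustrative only):
# a crime occurs when an offender and a target share a cell with no guardian.

GRID = 20
N_OFFENDERS, N_TARGETS, N_GUARDIANS = 10, 50, 30

def random_cell():
    return (random.randrange(GRID), random.randrange(GRID))

def move(cell):
    """One step of a lazy random walk on a wrapped grid."""
    x, y = cell
    dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)])
    return ((x + dx) % GRID, (y + dy) % GRID)

def run(steps=1000):
    """Return a per-cell count of simulated crimes (micro-place concentration)."""
    offenders = [random_cell() for _ in range(N_OFFENDERS)]
    targets = [random_cell() for _ in range(N_TARGETS)]
    guardians = [random_cell() for _ in range(N_GUARDIANS)]
    crimes = Counter()
    for _ in range(steps):
        offenders = [move(c) for c in offenders]
        targets = [move(c) for c in targets]
        guardians = [move(c) for c in guardians]
        guarded = set(guardians)
        target_cells = set(targets)
        for cell in offenders:
            if cell in target_cells and cell not in guarded:
                crimes[cell] += 1   # convergence in the absence of a guardian
    return crimes

if __name__ == "__main__":
    print("top crime micro-places:", run().most_common(5))
```

Even this toy produces crime counts that vary cell by cell, which is the micro-place heterogeneity that aggregate regression models tend to smooth away.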

Article

Robert J. Beck and Henry F. Carey

The international law (IL) course offers a unique opportunity for students to engage in classroom debate on crucial topics ranging from the genocide in Darfur to the Israeli–Palestinian issue and peace processes in Sri Lanka. A well-designed IL course can help students to appreciate their own preconceptions and biases and to develop a more nuanced and critical sense of legality. During the Cold War, IL became increasingly marginalized as a result of the perceived failure of international institutions to avert World War II and the concurrent ascent of realism as the predominant theoretical paradigm of international relations. Over the past two decades, however, as IL’s profile has soared considerably, political scientists and students have taken a renewed interest in the subject. Today, IL teaching/study remains popular in law schools. As a general practice, most instructors of IL, whether in law schools or undergraduate institutions, begin their course designs by selecting readings on basic legal concepts and principles. Once the basic subject matter and associated reading assignments have been determined, instructors typically move on to develop their syllabi, which may cover a variety of topics such as interdisciplinary methods, IL theory, cultural relativism, formality vs. informality, identity politics, law and economics/public choice, feminism, legal realism, and reformism/modernism. There are several innovative approaches for teaching IL, including moot courts, debates, simulations, clinical learning, internships, legal research training, and technology-enhanced teaching. Another important component of IL courses is assessment of learning outcomes, and a typical approach is to administer end-of-semester essay-based examinations.

Article

Jeffrey S. Lantis, Kent J. Kille, and Matthew Krain

The literature on active teaching and learning in international studies has developed significantly in recent decades. The philosophy behind active teaching approaches focuses on the goal of empowering students and promoting knowledge retention through engagement and experiential learning. Teacher-scholars in many different disciplines have contributed to a wide and increasingly deep literature on teaching with purpose. They identify best practices, including the importance of designing exercises that have clear educational objectives, exploring examples and alternative ways of engaging students, detailing clear procedures, and implementing assessment protocols. Examples of popular and successful active teaching and learning approaches include teaching with case studies and problem-based learning in international studies, where students confront the complexities of an issue or puzzle, and reason through potential solutions. Other instructors employ structured debates in the classroom, where students are assigned common reading materials and then develop arguments on one side or another of the debate in order to critically examine issues. More teachers are engaging students through use of alternative texts like literature and films, where reading historical narratives, memoirs, or even graphic novels may help capture student interest and promote critical thinking and reflection. In addition, simulations and games remain very popular—from simple in-class game theory exercises to semester-long role-playing simulations of international diplomacy. Studies show that all of these approaches, when implemented with clear educational objectives and intentionality, can promote student learning, interest, and retention of knowledge and perspectives. Finally, teacher-scholars have begun to embrace the importance of assessment and thoughtful reflection on the effectiveness of active teaching and learning techniques for the international studies classroom. Evidence regarding the achievement of learning outcomes, or potential limitations, can help inform improvements in experiential learning program design for future iterations.

Article

In this article, the concepts and background of regional climate modeling of the future Baltic Sea are summarized and state-of-the-art projections, climate change impact studies, and challenges are discussed. The focus is on projected oceanographic changes in future climate. However, as these changes may have a significant impact on biogeochemical cycling, nutrient load scenario simulations in future climates are briefly discussed as well. The Baltic Sea is special compared to other coastal seas as it is a tideless, semi-enclosed sea with large freshwater and nutrient supply from a partly heavily populated catchment area and a long response time of about 30 years, and as it is, in the early 21st century, warming faster than any other coastal sea in the world. Hence, policymakers request the development of nutrient load abatement strategies in future climate. For this purpose, large ensembles of coupled climate–environmental scenario simulations based upon high-resolution circulation models were developed to estimate changes in water temperature, salinity, sea-ice cover, sea level, oxygen, nutrient, and phytoplankton concentrations, and water transparency, together with uncertainty ranges. Uncertainties in scenario simulations of the Baltic Sea are considerable. Sources of uncertainties are global and regional climate model biases, natural variability, and unknown greenhouse gas emission and nutrient load scenarios. Unknown early 21st-century and future bioavailable nutrient loads from land and atmosphere and the experimental setup of the dynamical downscaling technique are perhaps the largest sources of uncertainties for marine biogeochemistry projections. The high uncertainties might potentially be reducible through investments in new multi-model ensemble simulations that are built on better experimental setups, improved models, and more plausible nutrient loads. The development of community models for the Baltic Sea region with improved performance and common coordinated experiments of scenario simulations is recommended.

Article

William Joseph Gutowski and Filippo Giorgi

Regional climate downscaling has been motivated by the objective to understand how climate processes not resolved by global models can influence the evolution of a region’s climate and by the need to provide climate change information to other sectors, such as water resources, agriculture, and human health, on scales poorly resolved by global models but where impacts are felt. There are four primary approaches to regional downscaling: regional climate models (RCMs), empirical statistical downscaling (ESD), variable resolution global models (VARGCM), and “time-slice” simulations with high-resolution global atmospheric models (HIRGCM). Downscaling using RCMs is often referred to as dynamical downscaling to contrast it with statistical downscaling. Although there have been efforts to coordinate each of these approaches, the predominant effort to coordinate regional downscaling activities has involved RCMs. Initially, downscaling activities were directed toward specific, individual projects. Typically, there was little similarity between these projects in terms of focus region, resolution, time period, boundary conditions, and phenomena of interest. The lack of coordination hindered evaluation of downscaling methods, because sources of success or problems in downscaling could be specific to model formulation, phenomena studied, or the method itself. This prompted the organization of the first dynamical-downscaling intercomparison projects in the 1990s and early 2000s. These programs and several that followed provided coordination focused on an individual region and an opportunity to understand sources of differences between downscaling models while overall illustrating the capabilities of dynamical downscaling for representing climatologically important regional phenomena. However, coordination between programs was limited. Recognition of the need for further coordination led to the formation of the Coordinated Regional Downscaling Experiment (CORDEX) under the auspices of the World Climate Research Programme (WCRP). Initial CORDEX efforts focused on establishing and performing a common framework for carrying out dynamically downscaled simulations over multiple regions around the world. This framework has now become an organizing structure for downscaling activities around the world. Further efforts under the CORDEX program have strengthened the program’s scientific motivations, such as assessing added value in downscaling, regional human influences on climate, coupled ocean–land–atmosphere modeling, precipitation systems, extreme events, and local wind systems. In addition, CORDEX is promoting expanded efforts to compare capabilities of all downscaling methods for producing regional information. These efforts are motivated in part by the scientific goal of thoroughly understanding regional climate and its change and by the growing need for climate information to assist climate services for a multitude of climate-impacted sectors.

Article

Donald Edwards

Crayfish are decapod crustaceans that use different forms of escape to flee from different types of predatory attacks. Lateral and Medial Giant escapes are released by giant interneurons of the same name in response to sudden, sharp attacks from the rear and front of the animal, respectively. A Lateral Giant (LG) escape uses a fast rostral abdominal flexion to pitch the animal up and forward at very short latency. It is succeeded by guided swimming movements powered by a series of rapid abdominal flexions and extensions. A Medial Giant (MG) escape uses a fast, full abdominal flexion to thrust the animal directly backward, and is also followed by swimming that moves the animal rapidly away from the attacker. More slowly developing attacks evoke Non-Giant (NG) escapes, which have a longer latency, are varied in the form of abdominal flexion, and are directed initially away from the attacker. They, too, are followed by swimming away from the attacker. The neural circuitry for LG escape has been extensively studied and has provided insights into the neural control of behavior, synaptic integration, coincidence detection, electrical synapses, behavioral and synaptic plasticity, neuroeconomical decision-making, and the modulatory effects of monoamines and of changes in the animal’s social status.