1-20 of 23 Results

  • Keywords: scale

Article

Sumerian-Akkadian mythology reaches back to the earliest lists of gods in the third millennium bce and preoccupied Mesopotamian intellectuals for more than 2,000 years. This overview describes four major moments in the earlier phases of that history, each putting in place a different type of cosmic building block: ontologies, infrastructures, genealogies, and interfaces. These four phases stretch from the first mythological narratives in the mid-third millennium down to the late second and first millennia bce, when Mesopotamian materials were reconfigured and adapted for cuneiform scribal traditions in northern Mesopotamia, Syria, and the Levant. Rather than limiting itself to late, somewhat heterodox recompilations such as the Enuma Elish or the Baal Epic, this contribution argues that the most important and long-lived features of the mythological tradition in Mesopotamia came into existence between 2500 and 1500 bce. Like the poetry of a particular language or the usual turns of phrase in a family, the mythology embedded in a particular culture or civilization provides decisive clues to the central concerns of that society. These clues are indirect hints at most, constrained by the need to transmit specific textual materials (mythologems, proverbs, or narratives), while at the same time producing the local pragmatic effects that they are thought to achieve. Surprisingly, then, mythological materials are also usually quite susceptible to translation, giving the unknowing reader the impression that things were not so very different four thousand years ago in ancient Iraq. If we adopt a definition of myth that limits our quarry to “stories about deities that describe how the basic structures of reality came into existence,” excluding thereby …

Article

Steven L. McMurtry, Susan J. Rose, and Lisa K. Berger

Accurate measurement is essential for effective social work practice, but doing it well can be difficult. One solution is to use rapid assessment instruments (RAIs), which are brief scales that typically require less than 15 minutes to complete. Some are administered by practitioners, but most are self-administered on paper or electronically. RAIs are available for screening, initial assessment, monitoring of service progress, and outcome evaluation. Some require author permission, others are sold commercially, and many more are free and in the public domain. Selection of an RAI should be based first on its psychometric strength, including content, concurrent, and known-groups validity, as well as on types of reliability such as internal consistency, but practical criteria such as readability are also important. And when used in practice settings, RAIs should be part of a well-rounded measurement plan that also includes behavioral observations, client logs, unobtrusive measures, and other approaches.
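As an illustration of the internal-consistency criterion mentioned above, the short sketch below computes Cronbach's alpha for a hypothetical five-item RAI. The client responses and the 0–4 item scoring are invented for illustration and are not taken from any instrument discussed in the article.

```python
import numpy as np

# Hypothetical responses: 6 clients x 5 items, each item scored 0-4.
# The numbers are invented purely to illustrate the calculation.
scores = np.array([
    [4, 3, 4, 4, 3],
    [2, 2, 1, 2, 2],
    [3, 3, 3, 2, 3],
    [0, 1, 0, 1, 1],
    [4, 4, 3, 4, 4],
    [1, 2, 2, 1, 1],
])

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score

# Cronbach's alpha, a common index of internal consistency.
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Conventional rules of thumb treat values above roughly 0.7 or 0.8 as acceptable for screening use, although thresholds vary by purpose and population.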

Article

Peter Wenner and Pernille Bülow

Homeostatic plasticity refers to a collection of mechanisms that function to homeostatically maintain some feature of neural function. The field began with the view that homeostatic plasticity exists predominantly for the maintenance of spike rate. However, it has become clear that multiple features undergo some form of homeostatic control, including network activity, burst rate, or synaptic strength. There are several different forms of homeostatic plasticity, which are typically triggered following perturbations in activity levels. Homeostatic intrinsic plasticity (HIP) appears to compensate for the perturbation with changes in membrane excitability (voltage-gated conductances); synaptic scaling is thought to be a multiplicative increase or decrease of synaptic strengths throughout the cell following an activity perturbation; presynaptic homeostatic plasticity is a change in probability of release following a perturbation to postsynaptic receptor activity. Each form of homeostatic plasticity can be different in terms of the mechanisms that are engaged, the feature that is homeostatically regulated, the trigger that initiates the compensation, and the signaling cascades that mediate these processes. Homeostatic plasticity is often described in development, but can extend into maturity and has been described in vitro and in vivo.
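To make the multiplicative character of synaptic scaling concrete, here is a deliberately simplified toy sketch, not a biophysical model and not drawn from the article: every synaptic weight on a cell is multiplied by the same slowly adjusting factor until a firing-rate set point is approximately restored, so the relative differences between synapses are preserved. The rate model, set point, and adjustment rate are all invented for illustration.

```python
import numpy as np

# Toy illustration of multiplicative synaptic scaling (all values invented).
rng = np.random.default_rng(1)
w = rng.uniform(0.2, 1.0, size=10)   # synaptic strengths onto one cell
w0 = w.copy()
target_rate = 5.0                    # homeostatic set point (arbitrary units)
gain = 1.0                           # maps summed drive to firing rate in this toy model

for _ in range(500):
    rate = gain * w.sum()            # crude stand-in for the cell's activity level
    # Multiplicative update: every synapse is scaled by the same factor,
    # preserving the relative differences between synapses.
    w *= 1 + 0.01 * (target_rate - rate) / target_rate

print("firing rate after scaling:", round(gain * w.sum(), 2))
print("synaptic ratios preserved:", np.allclose(w / w[0], w0 / w0[0]))
```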

Article

Dimitris Korobilis and Davide Pettenuzzo

Bayesian inference in economics is primarily perceived as a methodology for cases where the data are short, that is, not informative enough to obtain reliable econometric estimates of quantities of interest. In these cases, prior beliefs, such as the experience of the decision-maker or results from economic theory, can be explicitly incorporated into the econometric estimation problem and improve the resulting estimates. In contrast, in fields such as computer science and signal processing, Bayesian inference and computation have long been used for tackling challenges associated with ultra-high-dimensional data. Such fields have developed several novel Bayesian algorithms that have gradually been established in mainstream statistics, and they now have a prominent position in machine learning applications in numerous disciplines. While traditional Bayesian algorithms are powerful enough to allow for estimation of very complex problems (for instance, nonlinear dynamic stochastic general equilibrium models), they are not able to cope computationally with the demands of rapidly growing economic data sets. Bayesian machine learning algorithms are able to provide rigorous and computationally feasible solutions to various high-dimensional econometric problems, thus supporting modern decision-making in a timely manner.
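As a small illustration of how prior information can stabilize estimation when predictors outnumber observations, the sketch below computes the posterior mean of a linear regression under a normal shrinkage ("ridge-type") prior. The data-generating process and the prior precision are invented for illustration, and the example is far simpler than the DSGE and machine-learning applications discussed in the article.

```python
import numpy as np

# Bayesian linear regression with a normal shrinkage prior (illustrative).
# Model: y = X b + e, e ~ N(0, s2 I); prior: b ~ N(0, (s2 / tau) I).
rng = np.random.default_rng(0)
n, p = 50, 100                             # fewer observations than predictors
b_true = np.zeros(p)
b_true[:5] = [2.0, -1.5, 1.0, 0.5, -0.5]   # only a few predictors matter
X = rng.normal(size=(n, p))
y = X @ b_true + rng.normal(size=n)

tau = 5.0                                  # prior precision: larger means stronger shrinkage
# Posterior mean under the conjugate normal prior. OLS is not even defined here,
# because X'X is singular when p > n; the prior makes the problem well posed.
b_post = np.linalg.solve(X.T @ X + tau * np.eye(p), X.T @ y)

print("five largest posterior-mean coefficients (absolute value):",
      np.round(np.sort(np.abs(b_post))[-5:], 2))
```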

Article

Jaakko Kauko, Janne Varjo, and Hannele Pitkänen

The quality of education has been a central matter of global debate in the new millennium. The global trend supports test-based accountability models and increasing national data collection as techniques for supporting and increasing quality in education. In contrast, a central feature of the Finnish education system runs counter to the global trend: it does not have strong top-down quality control mechanisms. The historical development of the Finnish model shows strong continuity, which has stood up against global quality and evaluation policy flows. The evolution of the Finnish “model” dates back centuries. The foundations of the Finnish quality system can be traced to participation in international comparative learning studies developing national capacity, the inspection of folk education supporting the tradition of nationally coordinated external evaluation, and the local supervision of folk schools through school boards emphasizing local provision and the quality control of evaluation. These developments culminated during the 1990s with the radical deregulation and decentralization of education governance. The current model is partly unarticulated. However, it is clearly distinguishable: in comprehensive schools (primary and lower secondary), ensuring quality is entrusted to education providers and schools. They are expected to conduct self-evaluation regularly. There are no national standardized tests, and sample-based testing for development purposes forms the core of evaluation data. Only the main evaluation results are published, making school rankings impossible. Yet there is large variation in how the quality of education is approached and evaluated in Finland’s more than 300 municipalities. Significantly, the central government has no direct means to control the quality of local education. Its impact is indirect, working through efforts to foster and promote a quality evaluation culture in schools and municipalities. Furthermore, international cooperation and participation in international large-scale assessments have been unable to politicize the national education development discourse. This somewhat uncoordinated yet economical and teacher-friendly quality system raises interesting questions for further research: is this only a Finnish peculiarity developed in a specific historical context, or does it make possible critical theoretical and societal conclusions that question the dominant global test-based quality trends? The buffering of international accountability-based testing and the swimming against the global quality evaluation flow are built on (a) the compartmentalization of international tests; (b) the fact that national coordination came to see a deregulated system as a necessity and a virtue and was long fragmented across different evaluation functions; and (c) the important role the local level has played historically in upholding and evaluating the quality of education.

Article

Space has always animated world politics, but three spatial orientations are striking. First, the Westphalian orientation deems space a sovereign power container. Second, the scalar orientation takes recourse to the local, regional, national, and global spaces in which world politics is played out. Third, the relational orientation deems space a (re)produced, sociohistorically contingent phenomenon that changes according to the humans occupying it and the thought, power, and resources flowing through it. Under this latter orientation, space is lived, lived in, and lived through. Whilst relationality, to a degree, calls into question the received wisdoms of International Relations (IR), the fixity of sovereignty and territory remains. The three orientations coexist, reflecting the “many worlds” humankind occupies.

Article

Mark Gibney, Linda Cornett, Peter Haschke, Reed M. Wood, and Daniel Arnon

Although every violation of international human rights law standards is both deplorable and illegal, one of the major advances in the social sciences has been the development of measures of comparative state practice. The oldest of these is the Political Terror Scale (PTS), which provides an ordinal measure of physical integrity violations carried out by governments or those associated with the state. Providing data from the mid-1970s to the present, the PTS scores the human rights practices of more than 190 countries on a scale of 1–5, with 1 representing “best practices” and 5 indicating gross and systematic violations. There are two different sources for these scores: the U.S. State Department Country Reports on Human Rights Practices and the Amnesty International Annual Report. Although human rights have traditionally been associated only with the state, individuals can also be denied human rights protection by non-state actors. To measure this, the Societal Violence Scale (SVS) has been created to analyze three sources of physical integrity violations: the individual; corporate or criminal gang activity; and armed groups. As globalization proceeds apace, states have an increased influence on human rights protection in other countries. Unfortunately, human rights data, such as the PTS, analyze only the domestic practices of states. In an effort to better understand the full extent of a state’s human rights performance, the Extraterritorial Obligations (ETO) Report is currently being constructed. The ETO Report will provide an important analysis of states’ human rights performance when they act outside their own territorial borders.

Article

Kevin Corcoran

This entry reviews the uses of scales and instruments in social work practice, including their use for diagnosis and evidencing treatment necessity, as methods for monitoring client progress, and as outcome measures of clinical significance. A resource list for locating scales and instruments is provided.

Article

Melanie C. Green and Kaitlin Fitzgerald

Narrative transportation theory focuses on the causes and consequences of an individual being immersed in a story, or transported into a narrative world. Transportation refers to the feeling of being so absorbed in a story that connection to the real world is lost for some time; it includes cognitive engagement, emotional experience, and the presence of mental imagery. This experience is a key mechanism underlying narrative influence on recipients’ attitudes and beliefs, particularly in combination with enjoyment and character identification. Narrative persuasion through transportation has been demonstrated with a wide variety of topics, including health, social issues, and consumer products. Transportation can occur across media (through written, audio, or video narratives) and for both factual and fictional stories. It is typically measured with a self-report scale, which has been well validated (Green & Brock, 2000). Transportation is conceptually similar to flow (Csikszentmihalyi, 1990) and presence (Klimmt & Vorderer, 2003), although both flow and presence pertain more to being immersed in an experience in general than in a narrative specifically. While individuals are transported, their mental systems and capacities become concentrated on events occurring in the story, causing them to lose track of time, lack awareness of the surrounding environment, and experience powerful emotions as a result of their immersion in the narrative. Transported recipients may also lose some access to real-world knowledge, making them more likely to adapt their real-world beliefs and behaviors to be more consistent with the story to which they are exposed. Transportation theory suggests several mechanisms to explain this phenomenon, including reduced counterarguing, connections with characters, heightened perceptions of realism, the formation of vivid mental imagery, and emotional engagement. Personality factors can also affect the extent of transportation: narrative recipients vary in transportability, or their dispositional tendency to become transported, and they may be influenced differently by narratives due to differences in their need for affect (individuals high in need for affect are more likely to be transported into narratives). Additional factors such as story quality and points of similarity between the reader and the story can also influence transportation.

Article

Shuiqing Yin and Deliang Chen

Weather generators (WGs) are stochastic models that can generate synthetic climate time series of unlimited length with statistical properties similar to those of observed time series for a location or an area. WGs can infill missing data, extend the length of climate time series, and generate meteorological conditions for unobserved locations. Since the 1990s, WGs have become an important spatial-temporal statistical downscaling methodology and have been playing an increasingly important role in climate-change impact assessment. Although the majority of existing WGs have focused on simulation of precipitation for a single site, more and more WGs have been developed for daily and sub-daily scales that consider correlations among multiple sites and multiple variables, including precipitation and nonprecipitation variables such as temperature, solar radiation, wind, humidity, and cloud cover. Various parametric, semi-parametric, and nonparametric WGs have shown the ability to represent the mean, variance, and autocorrelation characteristics of climate variables at different scales. Two main methodologies have been developed for applications under a changing climate: the change factor approach and WGs conditioned on large-scale dynamical and thermodynamical weather states. However, the rationality and validity of the assumptions underlying both methodologies need to be carefully checked before they can be used to project future climate change at the local scale. In addition, simulation of extreme values by existing WGs needs to be improved. WGs that assimilate multisource observations from ground stations, reanalysis, satellite remote sensing, and weather radar for the continuous simulation of two-dimensional climate fields, based on mixed physics-based and stochastic approaches, deserve further effort. An inter-comparison project on a large ensemble of WG methods may be helpful for the improvement of WGs. Due to the applied nature of WGs, their future development also requires input from decision-makers and other relevant stakeholders.
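A minimal single-site illustration of the classic parametric approach used in many WGs: daily precipitation occurrence follows a two-state, first-order Markov chain, and wet-day amounts are drawn from a gamma distribution. The transition probabilities and gamma parameters below are invented for illustration; a real WG would estimate them from observed series, typically by month or season.

```python
import numpy as np

# Toy single-site daily precipitation generator (parameter values are illustrative).
rng = np.random.default_rng(42)
p_wet_given_dry = 0.25                 # P(wet today | dry yesterday)
p_wet_given_wet = 0.65                 # P(wet today | wet yesterday)
gamma_shape, gamma_scale = 0.8, 8.0    # wet-day amounts in mm (assumed values)

n_days = 365 * 30                      # generate a 30-year synthetic series
wet = np.zeros(n_days, dtype=bool)
for t in range(1, n_days):
    p = p_wet_given_wet if wet[t - 1] else p_wet_given_dry
    wet[t] = rng.random() < p          # first-order Markov chain for occurrence

precip = np.zeros(n_days)
precip[wet] = rng.gamma(gamma_shape, gamma_scale, size=wet.sum())

print("wet-day frequency:", round(wet.mean(), 3))
print("mean wet-day amount (mm):", round(precip[wet].mean(), 2))
```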

Article

Yasar Kondakci

It is generally understood that a stable external environment around educational organizations is a thing of the past. In the 21st century, educational organizations operate in highly volatile environments, and various political, economic, social, demographic, and ecological forces are putting pressure on these organizations to change their structural and functional characteristics. Educational change as a field of research is a relatively new area, and metalevel thinking about educational change has largely been inspired by theories and models borrowed from the broader field of organization science. The broader field possesses a multitude of theories and models of change, but the same theoretical and practical plurality is not evident for educational change. However, there has always been a convergence of ideas between educational change and organizational change. As a result, educational change scholars and practitioners have borrowed models and theories from the broader field of organization science. Parallel to the understanding in organization science, educational change interventions reflect a planned change understanding. In planned change, an external force triggers the process, the intervention introduces the change, and the process is then terminated. Although different models rely on different steps to depict the process, these three phases delineate the planned change process. Many change models point to political, economic, social, or ecological forces of change for organizations. However, educational organizations face more specific and unique forces of change. Global student achievement comparison programs (e.g., the Program for International Student Assessment), inequities in education, the Organization for Economic Cooperation and Development’s (OECD) 21st-century skills, science, technology, engineering, and mathematics (STEM) movements, trends in the internationalization of education, and political conflicts around the world are putting pressure on the structures and functions of education systems and schools. Despite a conceptual plurality and richness in practical models, both organizational and educational change experience a high failure rate, which results in human, financial, and managerial issues for educational organizations. Considering the high failure rate in educational change, it is argued that conceptual and practical issues exist in educational change approaches. A broad review of both educational and organizational change suggests policy borrowing, a political rationale dominating educational change, a static organizational perspective, a loss of sight of the whole organization, and the ignoring of the human side of change as the main issues in change interventions. Treating change as a top-down, planned, stage-based, hierarchical, and linear phenomenon, conceiving it as an extraordinary practice in the life of organizations, and perceiving it as the involvement of a select group in the organization are some of the common problems in the dominant approach to change. These criticisms suggest a need for a fundamental shift in the conceptualization of change, which in turn suggests a shift in the ontology of change. According to the alternative understanding of change (i.e., continuous change), change is a small-scale, bottom-up, ongoing, cumulative, and improvisational process. The new understanding provides valuable insights into the conceptualization and practice of change. The continuous change perspective offers insights into the missing aspects of change implementation rather than suggesting a wholesale replacement of the planned change perspective.

Article

Syed Abdul Hamid

Health microinsurance (HMI) has been used around the globe since the early 1990s for financial risk protection against health shocks in poverty-stricken rural populations in low-income countries. However, there is much debate in the literature on its impact on financial risk protection. There is also no clear answer to the critical policy question about whether HMI is a viable route to provide healthcare to the people of the informal economy, especially in rural areas. Findings show that HMI schemes are heavily concentrated in low-income countries, especially in South Asia (about 43%) and East Africa (about 25.4%). India accounts for 30% of HMI schemes. Bangladesh and Kenya also have a substantial number of schemes. There is some evidence that HMI increases access to healthcare or utilization of healthcare. One strand of the literature shows that HMI provides financial protection against the costs of illness to its enrollees by reducing out-of-pocket payments and/or catastrophic spending. In contrast, a large body of methodologically rigorous literature shows that HMI fails to provide financial protection against health shocks to its clients. Some studies in the latter group even find that HMI contributes to a decline in financial risk protection. These findings seem logical, as high copayments and a lack of a continuum of care are common. The findings also show that scale and dependence on subsidy are the major concerns. Low enrollment and low renewal are common concerns of the voluntary HMI schemes in South Asian countries. In addition, the declining trend of donor subsidies makes the HMI schemes supported by external donors more vulnerable. These challenges and constraints restrict the scale and profitability of HMI initiatives, especially those that are voluntary. Consequently, the existing organizations may cease HMI activities. Overall, although HMI can increase access to healthcare, it fails to provide financial risk protection against health shocks. The existing HMI practices in South Asia, especially in the HMIs owned by nongovernmental organizations and microfinance institutions, are not a viable route to provide healthcare to the rural population of the informal economy. However, HMI schemes may play some supportive role in the implementation of a nationalized scheme, if there is one. There is also concern about the institutional viability of HMI organizations (e.g., ownership and management efficiency). Future research may address this issue.
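To make the catastrophic-spending outcome concrete, the sketch below computes a common headcount measure: the share of households whose out-of-pocket health payments exceed a fixed share of total household expenditure. The 10% threshold and the household figures are illustrative assumptions, not data from the studies reviewed.

```python
import numpy as np

# Toy calculation of catastrophic health expenditure incidence (invented data).
total_expenditure = np.array([1200, 800, 1500, 600, 2000, 900, 700])  # per household
oop_health = np.array([100, 150, 60, 120, 90, 200, 30])               # out-of-pocket health spending

threshold = 0.10                                    # 10% of total expenditure (assumed cutoff)
catastrophic = oop_health / total_expenditure > threshold
print("catastrophic headcount ratio:", round(catastrophic.mean(), 2))  # share of households
```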

Article

Wouter van Atteveldt, Kasper Welbers, and Mariken van der Velden

Analyzing political text can answer many pressing questions in political science, from understanding political ideology to mapping the effects of censorship in authoritarian states. This makes the study of political text and speech an important part of the political science methodological toolbox. The confluence of increasing availability of large digital text collections, plentiful computational power, and methodological innovations has led to many researchers adopting techniques of automatic text analysis for coding and analyzing textual data. In what is sometimes termed the “text as data” approach, texts are converted to a numerical representation, and various techniques such as dictionary analysis, automatic scaling, topic modeling, and machine learning are used to find patterns in and test hypotheses on these data. These methods all make certain assumptions and need to be validated to assess their fitness for any particular task and domain.
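A minimal sketch of the first steps described here: converting texts to a numerical bag-of-words representation and applying a dictionary analysis. The toy documents and the tiny "economy" dictionary are invented for illustration; real applications would use validated dictionaries or the scaling, topic-model, and machine-learning methods the article surveys, followed by validation against human coding.

```python
from collections import Counter
import re

# Toy corpus and a hypothetical issue dictionary (both invented for illustration).
docs = [
    "The budget deficit and unemployment dominated the debate.",
    "The minister praised the new school curriculum and teachers.",
    "Taxes, inflation, and jobs: the economy is the campaign issue.",
]
economy_dictionary = {"budget", "deficit", "unemployment", "taxes", "inflation", "jobs", "economy"}

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())    # crude lowercase word tokenizer

for doc in docs:
    bag = Counter(tokenize(doc))                  # bag-of-words representation of one text
    score = sum(count for word, count in bag.items() if word in economy_dictionary)
    print(f"economy mentions = {score}: {doc}")
```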

Article

Most urban residents in high-income countries obtain piped and treated water for drinking and domestic use from centralized utility-run water systems. In low- and middle-income countries (LMICs), however, utilities work alongside myriad other service providers that deliver water to hundreds of millions of city-dwellers. Hybrid modes of water delivery in urban areas in low- and middle-income countries are systems in which a variety of state and nonstate actors contribute to the delivery of water to households, schools, healthcare facilities, businesses, and government offices. Historically, the field has evolved to include within-utility networks and outside-the-utility provision mechanisms. Utilities service the urban core through network connections, while nonstate, smaller-scale providers supplement utility services both inside and outside the piped network. The main reform waves since the 1990s—privatization and corporatization—have done little to alter the hybrid nature of provision. Numerous case studies of nonutility water providers suggest that they are imperfect substitutes for utilities. They reach millions of households with no access to piped water, but the water they deliver tends to be of uncertain quality and is typically far more expensive than utility water. Newer work on utility-provided water and utility reforms has highlighted the political challenges of private sector participation in urban water; debates have also focused on the importance of contractual details such as tariff structures and investor incentives. New research has produced numerous studies on LMICs on the ways in which utilities extend their service areas and service types through explicit and implicit relationships with front-line water workers and with supplemental nonstate water suppliers. From the nonutility perspective, debates animated by questions of price and quality, the desirability or possibility of regulation, and the compatibility (or lack thereof) between reliance on small-scale water providers and the human right to safe water, are key areas of research. While understanding the hybrid nature of water delivery is essential for responsible policy formulation and for understanding inequalities in the urban sphere, there is no substitute for the convenience and affordability of universal utility provision, and no question that research on the conditions under which particular types of reforms can improve utility provision is sorely needed.

Article

The intelligence test consists of a series of exercises designed to measure intelligence. Intelligence is generally understood as mental capacity that enables a person to learn at school or, more generally, to reason, to solve problems, and to adapt to new (challenging) situations. There are many types of intelligence tests depending on the kind of person (age, profession, culture, etc.) and the way intelligence is understood. Some tests are general; others focus on evaluating language skills, memory, abstract and logical thinking, or abilities in a wide variety of areas, such as recognizing and matching implicit visual patterns. Scores may be presented as an IQ (intelligence quotient), as a mental age, or simply as a point on a scale. Intelligence tests are instrumental in ordering, ranking, and comparing individuals and groups. The testing of intelligence started in the 19th century and became a common practice in schools and universities, psychotechnical institutions, courts, asylums, and private companies on an international level during the 20th century. It is generally assumed that the first test was designed by the French scholars A. Binet and T. Simon in 1905, but the historical link between testing and experimenting points to earlier tests, such as the word association test. Testing was practiced and understood in different ways, depending not only on the time but also on the concrete local (cultural and institutional) conditions. For example, in the United States and Brazil, testing was immediately linked to race differences and eugenic programs, while in other places, such as Spain, it was part of an attempt to detect “feebleness” and to grade students at certain schools. Since its beginnings, the intelligence test has received harsh criticism and triggered massive protests. The debate played out in the mass media, leading to the infamous “IQ test wars.” Thus, nowadays, psychologists are aware of the inherent danger of cultural discrimination and social marginalization, and they are more careful in the promotion of intelligence testing. In order to understand the role the intelligence test plays in today’s society, it is necessary to explore its history with the help of well-documented case studies. Such studies show how the testing practice was employed in national contexts and how it was received, used, or rejected by different social groups or professionals. Current historical research adopts a more inclusive perspective, moving away from a narrative focused on the role testing played in North America. New work has appeared that explores how testing took place in different national and cultural environments, such as Russia (the former Soviet Union), India, Italy, the Netherlands, Sweden, Argentina, Chile, and many other places.

Article

Latin American transnational social movements (TSMs) are key actors in debates about the future of global governance. Since the 1990s, they have played an important role in creating new organizational fora to bring together civil society actors from around the globe. In spite of this relevance, the literature on social movements from the region focuses primarily—and often exclusively—on the domestic arena. Nevertheless, there is an increasingly influential body of scholarship from the region, which has contributed to relevant theoretical debates on how actors overcome collective action problems in constructing transnational social movements and how they articulate mobilization efforts at the local, national and international scales. The use of new digital technologies has further blurred the distinction among scales of activism. It has become harder to tell where interpretative frames originate, to trace diffusion paths across national borders, and to determine the boundaries of movements. At the same time, there are important gaps in the literature, chief among them the study of right-wing transnational networks.

Article

José Luis Pinto-Prades, Arthur Attema, and Fernando Ignacio Sánchez-Martínez

Quality-adjusted life years (QALYs) are one of the main health outcome measures used to make health policy decisions. It is assumed that the objective of policymakers is to maximize QALYs. Since the QALY weights life years according to their health-related quality of life, it is necessary to calculate those weights (also called utilities) in order to estimate the number of QALYs produced by a medical treatment. The methodology most commonly used to estimate utilities is to present standard gamble (SG) or time trade-off (TTO) questions to a representative sample of the general population. It is assumed that, in this way, utilities reflect public preferences. Two different assumptions should hold for utilities to be a valid representation of public preferences. One is that the standard (linear) QALY model has to be a good model of how subjects value health. The second is that subjects should have consistent preferences over health states. When the main assumptions of the popular linear QALY model are examined, most of them do not hold; a modification of the linear model can be a tractable improvement. This suggests that utilities elicited under the assumption that the linear QALY model holds may be biased. In addition, the second assumption, namely that subjects have consistent preferences that are estimated by asking SG or TTO questions, does not seem to hold. Subjects are sensitive to features of the elicitation process (like the order of questions or the type of task) that should not matter in order to estimate utilities. The evidence suggests that the questions (TTO, SG) that researchers ask members of the general population produce response patterns that do not agree with the assumption that subjects have well-defined preferences when asked to estimate the value of health states. Two approaches can deal with this problem. One is based on the assumption that subjects have true but biased preferences, so that true preferences can be recovered from biased ones; this approach is valid as long as the theory used to debias is correct. The second approach is based on the idea that preferences are imprecise. In practice, national bodies use utilities elicited using TTO or SG under the assumptions that the linear QALY model is a good enough representation of public preferences and that subjects’ responses to preference elicitation methods are coherent.
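For readers unfamiliar with the mechanics, the sketch below shows how a TTO response is converted into a utility under the linear QALY model and how QALYs are then computed for a hypothetical treatment profile. The indifference point, durations, and utilities are invented for illustration, and the calculation deliberately ignores discounting and the biases discussed in the article.

```python
# Hypothetical TTO question: living 10 years in health state h is judged
# equivalent to living 7 years in full health. Under the linear QALY model
# the implied utility of state h is u(h) = x / t.
t, x = 10.0, 7.0
u_h = x / t
print("TTO utility of state h:", u_h)      # 0.7 under these assumed answers

# QALYs for a hypothetical treatment profile: years spent in health states
# with the given utilities (no discounting, purely illustrative).
durations = [2.0, 3.0, 5.0]
utilities = [0.5, u_h, 0.9]
qalys = sum(d * u for d, u in zip(durations, utilities))
print("QALYs:", qalys)                     # 2*0.5 + 3*0.7 + 5*0.9 = 7.6
```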

Article

Ching-mu Chen and Shin-Kun Peng

For research investigating why economic activities are distributed unevenly across geographic space, new economic geography (NEG) provides a general equilibrium-based and microfounded approach to modeling a spatial economy characterized by a large variety of economic agglomerations. NEG emphasizes how agglomeration (centripetal) and dispersion (centrifugal) forces interact to generate observed spatial configurations and uneven distributions of economic activity. However, numerous economic geographers prefer to reserve the term new economic geographies for the vigorous and diversified academic outputs inspired by the institutional-cultural turn of economic geography. Accordingly, the term geographical economics has been suggested as an alternative to NEG. Approaches for modeling a spatial economy through the use of a general equilibrium framework have not only rendered existing concepts amenable to empirical scrutiny and policy analysis but also drawn economic geography and location theories from the periphery to the center of mainstream economic theory. Reduced-form empirical studies have attempted to test certain implications of NEG. However, due to NEG’s simplified geographic settings, the developed NEG models cannot be easily applied to observed data. The recent development of quantitative spatial models based on the mechanisms formalized by previous NEG theories has been a breakthrough in building an empirically relevant framework for implementing counterfactual policy exercises. If quantitative spatial models can connect with observed data in an empirically meaningful manner, they can enable the decomposition of key theoretical mechanisms and afford specificity in the evaluation of the general equilibrium effects of policy interventions in particular settings. In the several decades since its proposal, NEG has been criticized for its parsimonious assumptions about the economy across space and time. Existing challenges therefore call for theoretical and quantitative models built on new microfoundations pertaining to the interactions between economic agents across geographical space and the relationship between geography and economic development.
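As a concrete, heavily simplified illustration of the general-equilibrium logic described here, the sketch below solves the short-run equilibrium (nominal wages, price indices, and real wages for a fixed spatial allocation of workers) of a standard two-region core-periphery model in the spirit of Krugman (1991). The parameter values, trade costs, worker allocation, and the damped fixed-point iteration are all illustrative assumptions, not material from the article.

```python
import numpy as np

# Short-run equilibrium of a two-region core-periphery model (illustrative sketch).
mu, sigma = 0.4, 5.0                     # expenditure share on manufactures, elasticity of substitution
T = np.array([[1.0, 1.5],                # iceberg trade costs T[r, s] (assumed values)
              [1.5, 1.0]])
lam = np.array([0.7, 0.3])               # mobile (manufacturing) workers by region
phi = np.array([0.5, 0.5])               # immobile (agricultural) workers by region

w = np.ones(2)                           # manufacturing wages, solved by fixed-point iteration
for _ in range(5000):
    Y = mu * lam * w + (1 - mu) * phi                                      # regional incomes
    P = (lam * (w * T.T) ** (1 - sigma)).sum(axis=1) ** (1 / (1 - sigma))  # manufacturing price indices
    w_new = ((Y * T ** (1 - sigma) * P ** (sigma - 1)).sum(axis=1)) ** (1 / sigma)  # wage equation
    if np.max(np.abs(w_new - w)) < 1e-10:
        w = w_new
        break
    w = 0.5 * (w + w_new)                # damped update for numerical stability

omega = w * P ** (-mu)                   # real wages, the signal behind agglomeration and dispersion
print("nominal wages:", w.round(4))
print("price indices:", P.round(4))
print("real wages:   ", omega.round(4))
```

Iterating further on the worker allocation in response to real-wage differences is what generates the agglomeration versus dispersion outcomes the abstract refers to.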

Article

Japan is one of the world’s leading marine fishing nations in globalized industrial fisheries, yet the mainstay of the national fishing industry continues to be small-scale fisheries with their own set of cultural and environmental heritage. The cultural tradition of the Japanese fishing communities still preserves the various ways of understanding local weather, which are mainly based on landscape perception and forecasting knowledge. The prediction of weather conditions for a given location and time is part of a long-established historical tradition related to the need for an “easy” understanding of the climatic and maritime environment. It encompasses a variety of practical experiences, skillful reasoning strategies, and cultural values concerning indigenous environmental knowledge, decision-making strategies, and habitual applications of knowledge in everyday life. Japanese traditional forecasting culture interfaces with modern meteorological forecasting technologies to generate a hybrid knowledge, and offers an example of the complex dialogue between global science and local science. Specifically, interpretations and meteorological observations of local weather are modes of everyday engagement with the weather that exhibit a highly nuanced ecological sophistication and continue to offer a critical discourse on the cultural, environmental, and social context of Japanese small-scale fisheries. Indigenous weather understanding is bound up with community-based cultural heritage—religious traditions, meteorological classifications, proverbs, traditional forecasting models, and selective incorporation or rejection of scientific forecasting data—that offers a general overview of the interaction between community know-how, sensory experience, skills, and cultural practices.

Article

Denzil G. Fiebig and Hong Il Yoo

Stated preference methods are used to collect individual-level data on what respondents say they would do when faced with a hypothetical but realistic situation. The hypothetical nature of the data has long been a source of concern among researchers as such data stand in contrast to revealed preference data, which record the choices made by individuals in actual market situations. But there is considerable support for stated preference methods as they are a cost-effective means of generating data that can be specifically tailored to a research question and, in some cases, such as gauging preferences for a new product or non-market good, there may be no practical alternative source of data. While stated preference data come in many forms, the primary focus in this article is data generated by discrete choice experiments, and thus the econometric methods will be those associated with modeling binary and multinomial choices with panel data.
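To make the final sentence concrete, the sketch below simulates responses to a hypothetical discrete choice experiment and recovers the preference parameters with a conditional (multinomial) logit estimated by maximum likelihood. The design dimensions and the "true" coefficients are invented for illustration, and the model ignores panel features such as respondent-level preference heterogeneity that the methods discussed in the article would address.

```python
import numpy as np
from scipy.optimize import minimize

# Simulate a stated-choice data set: each respondent completes several choice
# tasks, each offering three alternatives described by two attributes.
rng = np.random.default_rng(0)
n_resp, n_tasks, n_alt, n_attr = 300, 8, 3, 2     # assumed design dimensions
beta_true = np.array([1.0, -0.5])                 # assumed "true" preference weights

X = rng.normal(size=(n_resp * n_tasks, n_alt, n_attr))        # attribute levels
utility = X @ beta_true + rng.gumbel(size=(n_resp * n_tasks, n_alt))
y = utility.argmax(axis=1)                                    # chosen alternative per task

def neg_loglik(beta):
    v = X @ beta                                   # systematic utilities
    v -= v.max(axis=1, keepdims=True)              # subtract row max for numerical stability
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)      # conditional logit probabilities
    return -np.log(p[np.arange(len(y)), y]).sum()

result = minimize(neg_loglik, np.zeros(n_attr), method="BFGS")
print("estimated coefficients:", result.x.round(3))            # should be close to beta_true
```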