1–20 of 55 Results for: Keywords: measurement

Article

Jennifer L. Magnabosco

Throughout history, measuring outcomes has been a goal and priority in the human services. This entry chronicles the history of outcomes measurement in the human services in the United States and discusses present-day outcome measurement activities as well as trends and some of the key areas for outcomes measurement in several human service domains.

Article

Jarred Gallegos, Julie Lutz, Emma Katz, and Barry Edelstein

The assessment of older adults is quite challenging in light of the many age-related physiological and metabolic changes, increased number of chronic diseases with potential psychiatric manifestations, the associated medications and their side effects, and the age-related changes in the presentation of common mental health problems and disorders. A biopsychosocial approach to assessment is particularly important for older adults due to the substantial interplay of biological, psychological, and social factors that collectively produce the clinical presentation faced by clinicians. An appreciation of age-related and non-normative changes in cognitive skills and sensory processes is particularly important both for planning the assessment process and for interpreting findings. The assessment of older adults is unfortunately plagued by a paucity of age-appropriate assessment instruments, as most instruments have been developed with young adults. This paucity is an impediment to reliable and valid assessment. Notwithstanding that caveat, comprehensive and valid assessment of older adults can be accomplished through an understanding of the interaction of age-related factors that influence the experience and presentation of psychiatric disorders, and an appreciation of the strengths and weaknesses of the assessment instruments that are used to achieve valid and reliable assessments.

Article

Rasmus Benestad

The Barents Sea is a region of the Arctic Ocean named after one of its first known explorers (1594–1597), Willem Barentsz from the Netherlands, although there are accounts of earlier explorations: the Norwegian seafarer Ottar rounded the northern tip of Europe and explored the Barents and White Seas between 870 and 890 CE, a journey followed by a number of Norsemen; Pomors hunted seals and walruses in the region; and Novgorodian merchants engaged in the fur trade. These seafarers were probably the first to accumulate knowledge about the nature of sea ice in the Barents region; however, scientific expeditions and the exploration of the climate of the region had to wait until the invention and employment of scientific instruments such as the thermometer and barometer. Most of the early exploration involved mapping the land and the sea ice and making geographical observations. There were also many unsuccessful attempts to use the Northeast Passage to reach the Bering Strait. The first scientific expeditions involved F. P. Litke (1821–1824), P. K. Pakhtusov (1834–1835), A. K. Tsivol’ka (1837–1839), and Henrik Mohn (1876–1878), who recorded oceanographic, ice, and meteorological conditions. The scientific study of the Barents region and its climate has been spearheaded by a number of campaigns. There were four generations of the International Polar Year (IPY): 1882–1883, 1932–1933, 1957–1958, and 2007–2008. A British polar campaign was launched in July 1945 with Antarctic operations administered by the Colonial Office, renamed as the Falkland Islands Dependencies Survey (FIDS); it included a scientific bureau by 1950. It was rebranded as the British Antarctic Survey (BAS) in 1962 (British Antarctic Survey History leaflet). While BAS had its initial emphasis on the Antarctic, it has also been involved in science projects in the Barents region. The most dedicated mission to the Arctic and the Barents region has been the Arctic Monitoring and Assessment Programme (AMAP), which has commissioned a series of reports on the Arctic climate: the Arctic Climate Impact Assessment (ACIA) report, the Snow Water Ice and Permafrost in the Arctic (SWIPA) report, and the Adaptive Actions in a Changing Arctic (AACA) report.

The climate of the Barents Sea is strongly influenced by warm waters from the Norwegian Current bringing heat from the subtropical North Atlantic. The region is 10°C–15°C warmer than the average temperature at the same latitude, and a large part of the Barents Sea is open water even in winter. It is roughly bounded by the Svalbard archipelago, northern Fennoscandia, the Kanin Peninsula, Kolguyev Island, Novaya Zemlya, and Franz Josef Land, and is a shallow ocean basin, which constrains physical processes such as currents and convection. To the west, the Greenland Sea forms a buffer region with some of the strongest temperature gradients on Earth between Iceland and Greenland. The combination of a strong temperature gradient and westerlies influences air pressure, wind patterns, and storm tracks. The strong temperature contrast between sea ice and open water in the northern part sets the stage for polar lows, as well as heat and moisture exchange between ocean and atmosphere. Glaciers on the Arctic islands generate icebergs, which may drift in the Barents Sea subject to wind and ocean currents. The land encircling the Barents Sea includes regions with permafrost and tundra.
Precipitation comes mainly from synoptic storms and weather fronts; it falls as snow in the winter and rain in the summer. The land area is snow-covered in winter, and rivers in the region drain the rainwater and meltwater into the Barents Sea. Pronounced natural variations in the seasonal weather statistics can be linked to variations in the polar jet stream and Rossby waves, which result in clustering of storm activity and blocking high-pressure systems. The Barents region is subject to rapid climate change due to a “polar amplification,” and observations from Svalbard suggest that the past warming trend ranks among the strongest recorded on Earth. The regional change is reinforced by a number of feedback effects, such as receding sea-ice cover and influx of mild moist air from the south.

Article

Quality assurance (QA) is a widely accepted management function intended to ensure that services provided to consumers meet agreed-upon standards. Standards come from professional organizations, evidence-based practices, and public policies that specify outcomes for consumers. QA systems consist of measurement, comparison of findings to standards, and feedback to practitioners and managers. Emerging but limited research indicates that QA can be an effective strategy for improving outcomes for consumers.
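As a rough illustration of the measurement, comparison-to-standards, and feedback cycle described above, the following Python sketch checks hypothetical service indicators against agreed-upon thresholds; the indicator names and standards are invented for the example and do not come from the entry.

```python
# Minimal sketch of a QA check: compare measured service indicators against
# agreed-upon standards and report feedback. Indicators and thresholds are
# hypothetical examples, not values from the entry.

STANDARDS = {
    "follow_up_within_7_days": 0.90,   # share of clients contacted within 7 days
    "client_satisfaction": 4.0,        # mean rating on a 1-5 scale
}

def qa_feedback(measured: dict[str, float]) -> list[str]:
    """Compare measured values to standards and flag shortfalls."""
    messages = []
    for indicator, standard in STANDARDS.items():
        value = measured.get(indicator)
        if value is None:
            messages.append(f"{indicator}: no measurement recorded")
        elif value < standard:
            messages.append(f"{indicator}: {value:.2f} below standard {standard:.2f}")
        else:
            messages.append(f"{indicator}: {value:.2f} meets standard {standard:.2f}")
    return messages

if __name__ == "__main__":
    for line in qa_feedback({"follow_up_within_7_days": 0.82, "client_satisfaction": 4.3}):
        print(line)
```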

Article

Bill Nugent

Measurement is a fundamentally important component of social work research. This entry briefly covers two important notions in psychometrics: reliability and validity. Reliability concerns errors of measurement, and validity concerns the accuracy of the inferences made from the scores produced by a measurement procedure. Both norm-referenced and criterion-referenced measurement procedures are discussed.
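As a concrete companion to the reliability concept mentioned above, the following Python sketch computes Cronbach's alpha, a widely used index of internal-consistency reliability; the response matrix is fabricated for illustration and is not drawn from the entry.

```python
# Illustrative sketch: Cronbach's alpha for a multi-item scale.
# The response matrix below is made up for the example.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```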

Article

Trustworthy measurement is essential to make inferences about people and events, as well as to make scientific inquiries and comprehend human behaviors. Measurement is used for validating and building theories, substantiating research endeavors, contributing to science, and supporting a variety of applications. Sport and exercise psychology is a theoretical and practical domain derived from two domains: psychology and kinesiology. As such, the measurement methods used by scientists and practitioners relate to the acquisition of motor skills (i.e., genetics and environment–deliberate practice), physiological measures (e.g., heart rate, heart rate variability, breathing amplitude and frequency, galvanic skin response, and electrocardiogram), and psychological measures including introspective instruments in the form of questionnaires, interviews, and observations. Sport and exercise psychology entails the measurement of motor performance (e.g., time trials, one repetition maximum tests), cognitive development (e.g., knowledge base and structure, deliberate practice, perception-cognition, attention, memory), social aspects (e.g., team dynamics, cohesion, leadership, shared mental models, coach-performer interaction), the self (e.g., self-esteem, self-concept, physical self), affective and emotional states (e.g., mood, burnout), and psychological skills (e.g., imagery, goal-setting, relaxation, emotion regulation, stress management, self-talk, and pre-performance routine). Sport and exercise psychologists are also interested in measuring the affective domain (e.g., quality of life, affect/emotions, perceived effort), psychopathological states (e.g., anxiety, depression), the cognitive domain (e.g., executive functioning, information processing, decision making, attention, academic achievements, cognition and aging), social-cognitive concepts (e.g., self-efficacy, self-control, motivation), and biochemical markers of human functioning (e.g., genetic factors, hormonal changes). The emergence of neuroscientific methods has ushered in new methodological tools (e.g., electroencephalogram, fMRI) to assess central markers (brain systems) linked to performance, learning, and well-being in sport and exercise settings. Altogether, the measures in the sport and exercise domain are used to establish linkages among the emotional, cognitive, and motor systems.
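As a small illustration of the physiological measures listed above, the Python sketch below derives mean heart rate and RMSSD (a common heart rate variability index) from a hypothetical series of RR intervals; the values are invented for the example.

```python
# Illustrative sketch: two simple heart-related measures computed from a
# made-up series of RR intervals (milliseconds between successive beats).
import numpy as np

rr_ms = np.array([812, 798, 825, 840, 803, 818, 830, 809])  # hypothetical data

mean_hr_bpm = 60000.0 / rr_ms.mean()            # average heart rate in beats per minute
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))   # RMSSD, a common HRV index

print(f"mean HR ~ {mean_hr_bpm:.1f} bpm, RMSSD ~ {rmssd:.1f} ms")
```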

Article

Michael P. Leiter

Engagement has continued to develop as a positive construct in organizational psychology. Initially defined as employees’ identification with their work, work engagement became understood as a configuration of vigor, dedication, and absorption that motivates exceptional work performance. Although generally viewed as a positive construct, engagement may have a dark side in giving work excessive importance in employees’ lives. There has been some debate regarding the specific qualities that define engagement and the extent to which engagement is an enduring trait in contrast to a varying response to situational constraints and opportunities. These concerns are reflected in the measures of engagement, of which the most widely used is the Utrecht Work Engagement Scale (UWES). The Job Demands/Resources Model has structured much of the research work on engagement in recent years, leading to initiatives to enhance engagement by improving the quality and variety of resources available to employees at work. Within this domain, job crafting appears to provide a means through which individuals or groups may broaden their opportunities to participate in engaging activities while reducing the range of drudgery inherent in their work.

Article

Johabed G. Olvera and Claudia N. Avellaneda

As one of the reforms supported by the New Public Management movement, Performance Management Systems (PMSs) have been implemented worldwide, across various policy areas and different levels of government. PMSs require public organizations to establish clear goals, measure indicators of these goals, report this information, and, ultimately, link it to strategic decisions aimed at improving agency performance. Thus, the components of any PMS include: (1) strategic planning; (2) data collection and analysis (performance measurement); and (3) data utilization for decision-making (performance management). However, the degree of adoption and implementation of PMS components varies across both countries and levels of government. Therefore, in understanding the role of PMSs in public administration, it is important to recognize that the drivers explaining the adoption of PMS components may differ from those explaining their implementation. Although the goal of any PMS is to boost government performance, the existing empirical evidence assessing the impact of PMSs on organizational performance reports mixed results and suggests that the implementation of PMSs may generate some unintended consequences. Moreover, while worldwide there is a steady increase in the adoption of performance metrics, the same cannot be said about the use of these metrics in decision-making or performance management. Research on the drivers of adoption and implementation of PMSs in developing countries is still lacking.

Article

Steven L. McMurtry, Susan J. Rose, and Lisa K. Berger

Accurate measurement is essential for effective social work practice, but doing it well can be difficult. One solution is to use rapid assessment instruments (RAIs), which are brief scales that typically require less than 15 minutes to complete. Some are administered by practitioners, but most are self-administered on paper or electronically. RAIs are available for screening, initial assessment, monitoring of service progress, and outcome evaluation. Some require author permission, others are sold commercially, and many more are free and in the public domain. Selection of an RAI should be based first on its psychometric strength, including content, concurrent, and known-groups validity, as well as on types of reliability such as internal consistency, but practical criteria such as readability are also important. When used in practice settings, RAIs should be part of a well-rounded measurement plan that also includes behavioral observations, client logs, unobtrusive measures, and other approaches.

Article

Mahesh K. Nalla, Gregory J. Howard, and Graeme R. Newman

One common claim about crime is that it is driven in particular ways by development. Whereas the classic civilization thesis asserts that development will yield declining crime rates, the conflict tradition in criminology as well as the modernization school expect rises in crime rates, although for different reasons. Notwithstanding a raft of empirical investigations into the matter, an association between development and crime has not been consistently demonstrated. The puzzling results in the literature may be owing to the challenges in conceptualizing and operationalizing development. They are also almost certainly attributable to the serious problems related to the cross-national measurement of crime. Given the current state of knowledge and the prospects for future research, evidence reportedly bearing on the development and crime relationship should be received with ample caution and skepticism. Refinements in measurement practices and research strategies may remedy the extant situation, but for now the relationship between development and crime is an open and complicated question.

Article

Most applied researchers in macroeconomics who work with official macroeconomic statistics (such as those found in the National Accounts, the Balance of Payments, national government budgets, labor force statistics, etc.) treat data as immutable rather than subject to measurement error and revision. Some of this error may be caused by disagreement or confusion about what should be measured. Some may be due to the practical challenges of producing timely, accurate, and precise estimates. The economic importance of measurement error may be accentuated by simple arithmetic transformations of the data, or by more complex but still common transformations to remove seasonal or other fluctuations. As a result, measurement error is seemingly omnipresent in macroeconomics. Even the most widely used measures such as Gross Domestic Product (GDP) are acknowledged to be poor measures of aggregate welfare as they omit leisure and non-market production activity and fail to consider intertemporal issues related to the sustainability of economic activity. But even modest attempts to improve GDP estimates can generate considerable controversy in practice. Common statistical approaches to allow for measurement errors, including most factor models, rely on assumptions that are at odds with common economic assumptions, which imply that measurement errors in published aggregate series should behave much like forecast errors. Fortunately, recent research has shown how multiple data releases may be combined in a flexible way to give improved estimates of the underlying quantities. Increasingly, the challenge for macroeconomists is to recognize the impact that measurement error may have on their analysis and to condition their policy advice on a realistic assessment of the quality of their available information.
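As a deliberately simplified illustration of the idea that multiple data releases can be combined to improve estimates of an underlying quantity, the Python sketch below forms a precision-weighted average of hypothetical growth-rate vintages; the figures and assumed error variances are invented, and the combination methods cited in the entry are considerably more flexible than this.

```python
# Illustrative sketch only: combining successive releases (vintages) of the
# same quarterly growth rate with precision (inverse-variance) weights.
# Values and assumed error variances are hypothetical.
releases = [
    {"vintage": "first estimate",  "value": 0.6, "error_var": 0.09},
    {"vintage": "second estimate", "value": 0.4, "error_var": 0.04},
    {"vintage": "third estimate",  "value": 0.5, "error_var": 0.02},
]

weights = [1.0 / r["error_var"] for r in releases]
combined = sum(w * r["value"] for w, r in zip(weights, releases)) / sum(weights)

print(f"precision-weighted estimate of underlying growth: {combined:.2f}%")
```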

Article

The study of economic development in African countries faces a basic problem: the lack of reliable data and statistics. These problems with economic data are best addressed under three main categories: design, capacity, and politics. The focus is on gross domestic product (GDP), but the relevance goes beyond GDP statistics because the GDP aggregate requires economic data on all sectors of the economy, including expenditures and consumption, which provides the basis for discussing levels and trends in monetary poverty as well as for obtaining a correct count of the total population. The problem of design refers to the fact that many, if not most, of the statistical categories used by international organizations disseminating statistics on social and economic affairs were designed for developed countries. Indicators such as “unemployment” work better in a formalized labor market. The problems of design translate into problems of capacity. Because most economic transactions in informal economies are not recorded, they are not reported as a rule, and capturing them therefore requires far more resources. Many statistics that are readily available from administrative sources in higher-income countries must be collected through more expensive surveys in low-income countries. Budgets for official services may already be constrained, leaving statistical offices with fewer resources than they need to collect these data. Finally, the politics of data matter. When statistics are based on missing data, the estimates will invariably be soft, and therefore malleable. Thus, when incentives are clearly identifiable before reporting or aggregation, the final estimates may be biased.

Article

Edward J. Mullen, Jennifer L. Bellamy, and Sarah E. Bledsoe

This entry describes best practices as they are used in social work. The term best practices originated in the organizational management literature in the context of performance measurement and quality improvement, where best practices are defined as the preferred technique or approach for achieving a valued outcome. Identification of best practices requires measurement, benchmarking, and identification of processes that result in better outcomes. The identification of best practices requires that organizations put in place quality data collection systems, quality improvement processes, and methods for analyzing and benchmarking pooled provider data. Through this process, organizational learning and organizational performance can be improved.

Article

Research methods in lifespan development include single-factor designs that either follow a single cohort of individuals over time or compare age groups at a single time point. The two basic types of studies involving the manipulation of the single factors of age, cohort, and time of measurement are longitudinal and cross-sectional. Each of these has advantages and disadvantages, but both are characterized by limitations because they cannot definitively separate the joint influences of age, cohort, and time of measurement. The third group of designs, sequential designs, involves manipulation of two or more levels of each factor to permit inferences to be drawn that separate personal from social aging. The theoretical problems involved in both the single-factor and sequential designs combine with practical issues to present lifespan developmental researchers with a number of choices in approaching the variables of interest. The theoretical problems include the inevitable linking of personal with social aging, particularly evident in single-factor designs, and the fact that selective attrition leads to the differential availability of increasingly select older samples. Practical problems include the need to assign participants to appropriate age intervals and such clerical issues as the need to track participants in follow-up investigations. Researchers must also be aware of methodological issues related to task equivalence across individuals of different ages and the need to covary for potential confounds that could lead to differences across groups of participants due to such factors as education and health status. The increasing recognition of the need to address these issues is leading to a body of literature that reflects the growing sophistication of the field along with the more widespread availability of sophisticated analytic methods. As these improvements continue to raise the level of scholarship in the field, there will be a greater understanding of both ontogenetic change and the influence of context on development from childhood through later life.
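The confound noted above follows from a simple identity: age equals time of measurement minus birth cohort, so fixing any two of the three determines the third. The short Python sketch below, using made-up cohorts and measurement times, makes that dependency explicit.

```python
# Illustrative sketch of the age-period-cohort dependency: once birth cohort
# and time of measurement are chosen, age is determined (age = time - cohort),
# so the three factors cannot be varied independently. Values are hypothetical.
from itertools import product

cohorts = [1950, 1970, 1990]   # birth years
times = [2010, 2020]           # times of measurement

for cohort, time in product(cohorts, times):
    age = time - cohort        # fixing any two determines the third
    print(f"cohort {cohort}, measured {time} -> age {age}")
```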

Article

Real-time response measurement (RTR), sometimes also called continuous response measurement (CRM), is a computerized survey tool in which members of a political audience use electronic input devices to continuously record their short-term perceptions while being exposed to campaign messages. Combining RTR data with information about the message content allows for tracing viewers’ impressions back to single arguments or nonverbal signals of a speaker and, therefore, showing which kinds of arguments or nonverbal signals are most persuasive. In the context of applied political communication research, RTR is used by political consultants to develop persuasive campaign messages and prepare candidates for participating in televised debates. In addition, TV networks use RTR to identify crucial moments of televised debates and sometimes even display RTR data during their live debate broadcasts. In academic research, most RTR studies deal with the persuasive effects of televised political ads and especially televised debates, sometimes including hundreds of participants rating candidates’ performances during live debate broadcasts. In order to capture features of human information processing, RTR measurement is combined with other data sources like content analysis, traditional survey questionnaires, qualitative focus group data, or psychophysiological data. These studies answer various questions on the effects of campaign communication, including which elements of verbal and nonverbal communication explain short-term perceptions of campaign messages, which predispositions influence voters’ short-term perceptions of campaign messages, and the extent to which voters’ opinions are explained by short-term perceptions versus long-term predispositions. In several such studies, RTR measurement has proven to be reliable and valid; it appears to be one of the most promising research tools for future studies on the effects of campaign communication.
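As a minimal sketch of how RTR data can be linked back to message content, the Python example below averages per-second dial ratings within coded message segments; the timestamps, segment labels, and ratings are all hypothetical.

```python
# Illustrative sketch: averaging per-second RTR dial ratings within coded
# message segments so audience response can be traced back to individual
# arguments. All timestamps, labels, and ratings are invented for the example.
from statistics import mean

# second -> mean dial rating across participants (0-100 scale), hypothetical
ratings = {0: 48, 1: 51, 2: 55, 3: 62, 4: 66, 5: 58, 6: 44, 7: 41}

# content-analysis codes: (start_second, end_second_inclusive, segment label)
segments = [(0, 2, "greeting"), (3, 5, "economic argument"), (6, 7, "attack on opponent")]

for start, end, label in segments:
    segment_mean = mean(ratings[s] for s in range(start, end + 1))
    print(f"{label}: mean rating {segment_mean:.1f}")
```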

Article

Field plots are often used to obtain experimental data (soil loss values corresponding to different climate, soil, topographic, crop, and management conditions) for predicting and evaluating soil erosion and sediment yield. Plots are used to study physical phenomena affecting soil detachment and transport, and their sizes are determined according to the experimental objectives and the type of data to be obtained. Studies on interrill erosion due to rainfall impact and overland flow need a small plot width (2–3 m) and length (<10 m), while studies on rill erosion require plot lengths greater than 6–13 m. Sites must be selected to represent the range of uniform slopes prevailing in the farming area under consideration. Plots equipped to study interrill and rill erosion, like those used for developing the Universal Soil Loss Equation (USLE), measure erosion from the top of a slope where runoff begins; they must be wide enough to minimize the edge or border effects and long enough to develop downslope rills. Experimental stations generally include bounded runoff plots of known area, slope steepness, slope length, and soil type, from which both runoff and soil loss can be monitored. Once the boundaries defining the plot area are fixed, collecting equipment must be used to catch the plot runoff. A conveyance system (H-flume or pipe) carries total runoff to a unit sampling the sediment and a storage system, such as a sequence of tanks, in which sediments are accumulated. Simple methods have been developed for estimating the mean sediment concentration of all runoff stored in a tank by using the vertical concentration profile measured on a side of the tank. When a large number of plots are equipped, the sampling of suspension and consequent oven-drying in the laboratory are highly time-consuming. For this purpose, a sampler that can extract a column of suspension, extending from the free surface to the bottom of the tank, can be used. For large plots, or where runoff volumes are high, a divisor that splits the flow into equal parts and passes one part into a storage tank as a sample can be used. Examples of these devices include the Geib multislot divisor and the Coshocton wheel. Specific equipment and procedures must be employed to detect the soil removed by rill and gully erosion. Because most of the soil organic matter is found close to the soil surface, erosion significantly decreases soil organic matter content. Several studies have demonstrated that the soil removed by erosion is 1.3–5 times richer in organic matter than the remaining soil. Soil organic matter facilitates the formation of soil aggregates, increases soil porosity, and improves soil structure, facilitating water infiltration. The removal of organic matter content can influence soil infiltration, soil structure, and soil erodibility.
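As a back-of-the-envelope illustration of how a divisor-based sample is scaled up to plot soil loss, the Python sketch below combines a sampled sediment concentration, the stored runoff volume, the divisor fraction, and the plot area; every number is hypothetical and the calculation is a simplification of field practice.

```python
# Illustrative sketch: scaling a sampled sediment concentration up to plot soil
# loss when a divisor stores only a known fraction of total runoff.
# All numbers are hypothetical.
sample_concentration_g_per_l = 3.2   # mean sediment concentration in the tank
stored_runoff_l = 150.0              # runoff volume actually stored
divisor_fraction = 1.0 / 9.0         # fraction of total runoff routed to the tank
plot_area_m2 = 22.0 * 1.8            # bounded plot area (length x width)

total_runoff_l = stored_runoff_l / divisor_fraction
soil_loss_g = sample_concentration_g_per_l * total_runoff_l
soil_loss_t_per_ha = (soil_loss_g / 1e6) / (plot_area_m2 / 1e4)

print(f"soil loss for the event: {soil_loss_t_per_ha:.2f} t/ha")
```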

Article

Gerald Cliff and April Wall-Parker

As far back as the 19th century, statistics on reported crime have been relied upon as a means to understand and explain the nature and prevalence of crime (Friedrichs, 2007). Measurements of crime help us understand how much of it occurs on a yearly basis, where it occurs, and the costs to our society as a whole. Studying crime statistics also helps us understand the effectiveness of efforts to control it by tracking arrests and convictions. Analysts can tell whether it is increasing or decreasing relative to other possible mitigating factors such as the economy or unemployment rates in a community. Politicians can point to crime statistics to define a problem or indicate a success. Sociologists can study the ups and downs of crime rates and any number of other variables in the society such as education, employment rates, ethnic demographics, and a long list of other factors thought to affect the rate at which crime is committed. Property value is affected by the crime rates in a given neighborhood, and insurance rates are said to fluctuate with the ups and downs of crime. Analyzing any criminal act’s prevalence, cost to society, impact on victims, potential preventive measures, correction strategies, and even the characteristics of perpetrators and victims has provided valuable insights and a wealth of useful information in society’s efforts to combat violent/index crimes. This information has only been possible because there is little disagreement as to exactly what constitutes a criminal act when discussing violent or property crimes or what has come to be grouped under the catch-all heading of “street crime”; this is decidedly not the case with crimes included under the white-collar crime umbrella.

Article

Kevin Corcoran

This entry reviews the uses of scales and instruments in social work practice, including scales and instruments for diagnosis and evidencing treatment necessity, as methods for monitoring client progress, and as outcome measures of clinical significance. A resource list for locating scales and instruments is provided.

Article

Vulnerability is complex because it involves many characteristics of people and groups that expose them to harm and limit their ability to anticipate, cope with, and recover from harm. The subject is also complex because workers in many disciplines such as public health, psychology, geography, and development studies (among others) have different ways of defining, measuring, and assessing vulnerability. Some of these practitioners focus on the short-term identification of vulnerability, so that maps and lists of people living “at risk” can be generated and used by authorities. Others are more concerned with reasons why some people are more vulnerable when facing a hazard or threat than others. Professionals working at the scale of localities are interested in methods that bring out residents’ own knowledge of hazards and help them to cooperate with each other to find ways of reducing risk. There are some interpretations of vulnerability that seek its root cause in the creation of risk by political and economic systems that make investment and locational decisions for the benefit of small elites without regard for how these decisions affect the majority. Finally, whatever success there may be in treating vulnerability in any of the ways just mentioned, it will always be a part of the human condition, and this fact in itself is puzzling.

Article

Gawon Cho, Giancarlo Pasquini, and Stacey B. Scott

The study of human development across the lifespan is inherently about patterns across time. Although many developmental questions have been tested with cross-sectional comparisons of younger and older persons, understanding of development as it occurs requires a longitudinal design, repeatedly observing the same individual across time. Development, however, unfolds across multiple time scales (i.e., moments, days, years) and encompasses both enduring changes and transient fluctuations within an individual. Measurement burst designs can detect such variations across different timescales and disentangle the patterns of variation associated with each. Measurement burst designs are a special type of longitudinal design in which multiple “bursts” of intensive (e.g., hourly, daily) measurements are embedded in a larger longitudinal (e.g., monthly, yearly) study. The hybrid nature of these designs allows researchers to address questions not only of cross-sectional comparisons of individual differences (e.g., do older adults typically report lower levels of negative mood than younger adults?) and longitudinal examinations of intraindividual change (e.g., as individuals get older, do they report lower levels of negative mood?) but also of intraindividual variability (e.g., is negative mood worse on days when individuals have experienced an argument compared to days when an argument did not occur?). Researchers can leverage measurement burst designs to examine how patterns of intraindividual variability unfolding over short timescales may exhibit intraindividual change across long timescales in order to understand lifespan development. The use of measurement burst designs provides an opportunity to collect more valid and reliable measurements of development across multiple time scales throughout adulthood.
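As a minimal sketch of the nested data structure a measurement burst design produces, the Python example below arranges hypothetical daily diary records within bursts within a person and computes two quantities of interest: mood differences tied to daily events (intraindividual variability) and mean mood per burst (intraindividual change). Column names and values are invented for the example.

```python
# Illustrative sketch: days nested within bursts nested within a person, and
# two summaries a measurement burst design supports. Data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "person":   [1, 1, 1, 1, 1, 1],
    "burst":    [1, 1, 1, 2, 2, 2],        # e.g., yearly bursts
    "day":      [1, 2, 3, 1, 2, 3],        # e.g., daily diary entries
    "argument": [0, 1, 0, 1, 0, 0],        # daily event indicator
    "neg_mood": [2.1, 3.4, 2.0, 3.0, 2.2, 2.1],
})

# intraindividual variability: mood on argument vs. non-argument days
event_means = df.groupby(["person", "argument"])["neg_mood"].mean()

# intraindividual change: mean mood per burst, compared across bursts
burst_means = df.groupby(["person", "burst"])["neg_mood"].mean()

print(event_means, burst_means, sep="\n\n")
```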