Article

Sampling refers to the process used to identify and select cases for analysis (i.e., a sample) with the goal of drawing meaningful research conclusions. Sampling is integral to the overall research process, as it has substantial implications for the quality of research findings. Inappropriate sampling techniques can lead to problems of interpretation, such as drawing invalid conclusions about a population. Whereas sampling in quantitative research focuses on maximizing how well a chosen sample statistically represents a population, sampling in qualitative research generally focuses on the complete representation of a phenomenon of interest. Because of this core difference in purpose, many sampling considerations differ between qualitative and quantitative approaches despite a shared general purpose: careful selection of cases to maximize the validity of conclusions. Achieving generalizability, the extent to which observed effects from one study can be used to predict the same and similar effects in different contexts, drives most quantitative research. Obtaining a representative sample with characteristics that reflect a targeted population is critical to making accurate statistical inferences, which is core to such research. Such samples are best acquired through probability sampling, a procedure in which all members of the target population have a known and random chance of being selected. However, probability sampling techniques are uncommon in modern quantitative research because of practical constraints; non-probability sampling, such as by convenience, is now normative. When sampling this way, special attention should be given to the statistical implications of issues such as range restriction and omitted variable bias. In either case, careful planning is required to estimate an appropriate sample size before the start of data collection. In contrast to generalizability, transferability, the degree to which study findings can be applied to other contexts, is the goal of most qualitative research. This approach is more concerned with providing rich information to readers and less concerned with making broad, generalizable claims. Similar to quantitative research, choosing a population and sample is critical in qualitative research because it helps readers determine the likelihood of transfer, yet representativeness is not as crucial. Sample size determination in qualitative research differs drastically from that in quantitative research: it should occur during data collection, as an ongoing process in search of saturation, which focuses on achieving theoretical completeness rather than maximizing the quality of statistical inference. Theoretically speaking, although quantitative and qualitative research have distinct statistical underpinnings that should drive different sampling requirements, in practice both heavily rely on non-probability samples, and the implications of non-probability sampling are often not well understood. Although non-probability samples do not automatically generate poor-quality data, incomplete consideration of case selection strategy can harm the validity of research conclusions. The nature and number of cases collected must be determined cautiously to respect research goals and the underlying scientific paradigm employed. Understanding the commonalities and differences in sampling between quantitative and qualitative research can help researchers better identify high-quality research designs across paradigms.
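
To make these contrasts concrete, here is a minimal sketch in Python (assuming the standard library random module and the statsmodels power routines; the population frame, seed, effect size, and sample sizes are hypothetical illustrations, not recommendations). It contrasts probability sampling with convenience sampling and performs an a priori sample size estimate for a two-group comparison:

    import random
    from statsmodels.stats.power import TTestIndPower

    # Hypothetical sampling frame of 10,000 population members (IDs only).
    population = list(range(10_000))
    random.seed(42)

    # Probability sampling: every member has a known, equal chance of selection.
    probability_sample = random.sample(population, k=200)

    # Convenience (non-probability) sampling, caricatured as taking the first 200
    # members encountered; inclusion probabilities are unknown, so range restriction
    # and omitted variable bias deserve special attention.
    convenience_sample = population[:200]

    # A priori sample size planning: participants per group needed to detect an
    # assumed effect of d = 0.3 with 80% power at alpha = .05.
    n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.80)
    print(round(n_per_group))  # required n per group (about 175 under these assumptions)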

Article

Joel Koopman and Nikolaos Dimotakis

Experience sampling is a method aimed primarily at examining within-individual covariation of transient phenomena using repeated measures. It can be applied to test nuanced predictions of extant theories and can provide insights that are otherwise difficult to obtain. It does so by examining the phenomena of interest close to where they occur, thus avoiding issues with recall and similar concerns. Alternatively, the experience sampling method (ESM) can be used to collect highly reliable data for investigating between-individual phenomena. A number of decisions need to be made when designing an ESM study. Study duration and intensity (that is, total days of measurement and total assessments per day) represent a tradeoff between data richness and participant fatigue that needs to be carefully weighed. Other scheduling options need to be considered, such as triggered versus scheduled surveys. Researchers also need to be aware of the generally high potential cost of this approach, as well as the monetary and nonmonetary resources required. The intensity of this method also requires special consideration of the sample and the context. Proper screening is invaluable; ensuring that participants and their context are applicable and appropriate to the design is an important first step. The next step is ensuring that the surveys are planned in a way compatible with the sample, and that they are designed to appropriately and rigorously collect data that can accomplish the aims of the study at hand. Furthermore, ESM data typically require careful consideration of how the data will be analyzed and how results will be interpreted. Proper attention to analytic approaches (typically multilevel) is required. Finally, when interpreting results from ESM data, one must not forget that the effects typically represent processes that occur continuously across individuals’ working lives; effect sizes thus need to be considered with this in mind.
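
To illustrate the duration and intensity tradeoff, here is a minimal Python sketch (the 10-day duration, five signals per day, working-hours window, and minimum gap are hypothetical design choices, not prescriptions) that generates a random signal schedule of the kind an ESM study might use:

    import random
    from datetime import date, datetime, time, timedelta

    random.seed(1)
    N_DAYS, SIGNALS_PER_DAY = 10, 5           # study duration and daily intensity
    DAY_START, DAY_END = time(9, 0), time(17, 0)
    MIN_GAP = timedelta(minutes=60)           # spacing to limit participant fatigue

    def day_schedule(day):
        """Draw SIGNALS_PER_DAY random survey times within the day's window."""
        window_start = datetime.combine(day, DAY_START)
        window_seconds = (datetime.combine(day, DAY_END) - window_start).total_seconds()
        while True:  # resample until all signals are at least MIN_GAP apart
            times = sorted(window_start + timedelta(seconds=random.uniform(0, window_seconds))
                           for _ in range(SIGNALS_PER_DAY))
            if all(later - earlier >= MIN_GAP for earlier, later in zip(times, times[1:])):
                return times

    first_day = date(2024, 3, 4)  # hypothetical start date
    schedule = {first_day + timedelta(days=d): day_schedule(first_day + timedelta(days=d))
                for d in range(N_DAYS)}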

Article

James A. Muncy and Alice M. Muncy

Business research is conducted by both businesspeople, who have informational needs, and scholars, whose field of study is business. Though some of the specifics of how research is conducted differ between scholarly research and applied research, the general process they follow is the same. Business research is conducted in five stages. The first stage is problem formation, where the objectives of the research are established. The second stage is research design. In this stage, the researcher identifies the variables of interest and possible relationships among those variables, decides on the appropriate data source and measurement approach, and plans the sampling methodology. It is also within the research design stage that the role time will play in the study is determined. The third stage is data collection. Researchers must decide whether to outsource the data collection process or collect the data themselves. Data quality issues must also be addressed during collection. The fourth stage is data analysis. The data must be prepared and cleaned. Statistical packages or programs such as SAS, SPSS, STATA, and R are used to analyze quantitative data; for qualitative data, coding, artificial intelligence, and/or interpretive analysis are employed. The fifth stage is the presentation of results. In applied business research, the results are typically limited in their distribution and must address the immediate problem at hand. In scholarly business research, the results are intended to be widely distributed through journals, books, and conferences. As a means of quality control, scholarly research usually goes through a double-blind review process before it is published.
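
As a small, hedged illustration of the data-preparation and analysis stages (a Python sketch; the file name, column names, and seven-point satisfaction scale are hypothetical), cleaning a quantitative data set and running a simple comparison might look like this:

    import pandas as pd
    from scipy import stats

    # Hypothetical survey export with "satisfaction" (1-7 scale) and "region" columns.
    df = pd.read_csv("survey_responses.csv")

    # Data preparation and cleaning: drop incomplete cases and out-of-range scores.
    df = df.dropna(subset=["satisfaction", "region"])
    df = df[df["satisfaction"].between(1, 7)]

    # A simple analysis: compare mean satisfaction between two regions.
    north = df.loc[df["region"] == "north", "satisfaction"]
    south = df.loc[df["region"] == "south", "satisfaction"]
    t_stat, p_value = stats.ttest_ind(north, south, equal_var=False)
    print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")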

Article

A limited dependent variable (LDV) is an outcome or response variable whose value is either restricted to a small number of (usually discrete) values or limited in its range of values. The first type of LDV is commonly called a categorical variable; its value indicates the group or category to which an observation belongs (e.g., male or female). Such categories often represent different choice outcomes, where interest centers on modeling the probability each outcome is selected. An LDV of the second type arises when observations are drawn about a variable whose distribution is truncated, or when some values of a variable are censored, implying that some values are wholly or partially unobserved. Methods such as linear regression are inadequate for obtaining statistically valid inferences in models that involve an LDV. Instead, different methods are needed that can account for the unique statistical characteristics of a given LDV.
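
As an illustration of the first type of LDV (a minimal Python sketch using simulated data and a statsmodels logit model; the coefficients and sample size are arbitrary), a logit model keeps predicted probabilities inside the range that a linear regression can violate:

    import numpy as np
    import statsmodels.api as sm

    # Simulated binary choice outcome: the probability of choosing option A
    # depends on a single predictor x (all values below are illustrative).
    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))
    y = rng.binomial(1, p)

    # Model the probability that each outcome is selected with a logit model.
    X = sm.add_constant(x)
    fit = sm.Logit(y, X).fit(disp=0)
    print(fit.params)            # estimated intercept and slope
    print(fit.predict(X)[:5])    # fitted choice probabilities, all within (0, 1)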

Article

Rand R. Wilcox

Inferential statistical methods stem from the distinction between a sample and a population. A sample refers to the data at hand. For example, 100 adults may be asked which of two olive oils they prefer. Imagine that 60 say brand A. But of interest is the proportion of all adults who would prefer brand A if they could be asked. To what extent does 60% reflect the true proportion of adults who prefer brand A? There are several components to inferential methods. They include assumptions about how to model the probabilities of all possible outcomes. Another is how to model outcomes of interest. Imagine, for example, that there is interest in understanding the overall satisfaction with a particular automobile given an individual’s age. One strategy is to assume that the typical response, Y, given an individual’s age, X, is given by Y = β₀ + β₁X, where the slope, β₁, and the intercept, β₀, are unknown constants, in which case a sample would be used to make inferences about their values. Assumptions are also made about how the data were obtained. Was this done in a manner for which random sampling can be assumed? There is even an issue related to the very notion of what is meant by probability. Let μ denote the population mean of Y. The frequentist approach views probabilities in terms of relative frequencies, and μ is viewed as a fixed, unknown constant. In contrast, the Bayesian approach views μ as having some distribution that is specified by the investigator. For example, it may be assumed that μ has a normal distribution. The point is that the probabilities associated with μ are not based on the notion of relative frequencies, and they are not based on the data at hand. Rather, the probabilities associated with μ stem from judgments made by the investigator. Inferential methods can be classified into three types: distribution free, parametric, and non-parametric. The meaning of the term “non-parametric” depends on the situation, as will be explained. The choice between parametric and non-parametric methods can be crucial for reasons that will be outlined. To complicate matters, the number of inferential methods has grown tremendously during the last 50 years. Even for goals that may seem relatively simple, such as comparing two independent groups of individuals, there are numerous methods that may be used. Expert guidance can be crucial in terms of understanding what inferences are reasonable in a given situation.
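
Using the olive oil example above (a Python sketch; the Wilson interval and the uniform Beta(1, 1) prior are illustrative choices, not prescriptions from the article), the frequentist and Bayesian views can be contrasted numerically:

    from scipy import stats
    from statsmodels.stats.proportion import proportion_confint

    k, n = 60, 100  # 60 of 100 sampled adults prefer brand A

    # Frequentist view: the population proportion is a fixed, unknown constant;
    # a 95% confidence interval quantifies uncertainty due to random sampling.
    low, high = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"95% confidence interval: ({low:.3f}, {high:.3f})")

    # Bayesian view: the proportion itself has a distribution. With an assumed
    # uniform Beta(1, 1) prior, the posterior is Beta(61, 41).
    posterior = stats.beta(k + 1, n - k + 1)
    print("95% credible interval:", posterior.ppf([0.025, 0.975]).round(3))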