Research Methods in Sport and Exercise Psychology
Summary and Keywords
Research methods in sport and exercise psychology are embedded in the domain’s network of methodological assumptions, historical traditions, and research themes. Sport and exercise psychology is a unique domain that derives and integrates concepts and terminologies from both of its parent domains, psychology and kinesiology. Thus, the research methods used to study the main concerns and interests of sport and exercise psychology represent the domain’s intellectual properties.
The main methods used in the sport and exercise psychology domain are the (a) experimental, (b) psychometric, (c) multivariate correlational, (d) meta-analytic, (e) idiosyncratic, and (f) qualitative approaches. Each of these research methods tends to fulfill a distinguishable research purpose in the domain and thus enables the generation of evidence that is not readily gleaned through other methods. Although the six research methods represent a sufficient diversity of the methods available in sport and exercise psychology, they must be viewed as a starting point for researchers interested in the domain. Other research methods (e.g., the case study, Bayesian inference, and psychophysiological approaches) exist and hold potential to advance the domain of sport and exercise psychology.
Research methods in any domain are not an independent entity but rather are embedded in the domain-related network of methodological assumptions, historical traditions, and research themes (Martens, 1987; Mir & Watson, 2000). This perspective is reflected in the organization of this article. Specifically, we start by clarifying key terminologies and by expressing our methodological position in the domain of sport and exercise psychology. Following this stage, we briefly overview the domain’s history and name research themes that facilitate specific research questions. After laying out the background, we introduce research methods selected to best represent the domain’s intellectual properties. It is beyond the article’s scope and our intention to offer a complete account of research methods in sport and exercise psychology. Thus, the article must be viewed as an introduction to the topic, and interested readers are encouraged to explore other relevant sources, such as those cited here.
Sport and exercise psychology is “the scientific study of people and their behaviors in sport and exercise activities and the practical application of that knowledge” (Weinberg & Gould, 2011, p. 4). A scientific domain requires a methodological framework, which is a distinct concept from a method. Methodology refers to “an intricate set of ontological and epistemological commitments that a researcher brings to his/her work,” whereas method is “the actual set of techniques and procedures used to collect and analyze data” (Prasad, 1997, p. 103). These definitions convey at least three aspects of consideration. First, methodology is a higher and broader concept than method in science. Second, method refers to explicit research techniques/procedures, particularly measurement and statistical analysis. Finally, methodology involves implicit philosophical standpoints specified by ontological and epistemological beliefs. The first two aspects are transparent in their points, but the last one is challenging.
Although subject to further debate, we maintain in this article that researchers in sport and exercise psychology generally adopt Donald Campbell’s methodological position. In Methodology and Epistemology for Social Science, Campbell (1988) proposed a methodological perspective of ontological realism and epistemological relativism. The tradition of ontological realism can be traced to Kant’s argument for the existence of an a priori reality (see Logan, 2015). Ontological realism holds that objects’ existence is independent of their theorizations. That is, entities can be treated as valid when they continually explain various phenomena (Tooley, 1987). For instance, gravity is unobservable, but its existence can be justified because it explicates many observations (e.g., objects falling in an earthward direction). Epistemological relativism denotes the point that researchers can never study phenomena in a neutral fashion, and thus the generated knowledge cannot be value free (Mahoney, 1976). Because knowledge is always theory dependent, the approximation of knowledge toward reality/truth relies on theoretical progress, or what Kuhn (1962) called a paradigm shift.
A methodology of ontological realism and epistemological relativism carries multiple benefits when addressing methods in sport and exercise psychology. The first benefit comes simply from specifying the methodology. Because researchers follow their methodological assumptions in their thinking, explicating these assumptions reduces misinterpretations in scientific communication (Machlup, 1978). Second, the adoption of ontological realism safeguards the coherence of a domain’s collective work. If researchers in a domain do not share an ontological reality, their work cannot be systematically accumulated due to the incomparability of what is researched (Mir & Watson, 2000). Lastly, researchers assuming epistemological relativism view research methods critically because they believe that research methods are not value free. Such a critical perspective is particularly valuable when viewing quantitative methods because the values associated with quantitative methods tend to be only implicitly expressed (Good, 1976). For instance, assigning a unit of milliseconds to a reaction time measurement conveys a value in interpreting time, because alternative interpretations, such as a logarithmic transformation, are available. Tukey (1977) was perhaps alert to this issue when describing variable transformation as variable re-expression. Next, we provide a brief overview of the domain’s history with reference to its research methods, which constitute the core of the present article.
Brief History and Research Themes
The parent fields of sport and exercise psychology are kinesiology (formerly called physical education) and psychology (Weinberg & Gould, 2011). Starting in the 1890s, psychologists and physical educators became interested in applying psychology to sport and physical training (Kornspan, 2012). For example, Norman Triplett, a psychologist from Indiana University, is credited with conducting the first experimental study in sport and exercise psychology (Weinberg & Gould, 2011). Triplett (1898) observed a social phenomenon in which cyclists tended to ride faster when riding with others than when riding alone. During the period 1920–1940, experimental laboratory methods became popular in sport and exercise psychology with the establishment of dedicated laboratories across the world (Kornspan, 2012). For instance, Coleman Griffith helped create the first sport and exercise psychology laboratory in North America at the University of Illinois in 1925 (Kroll & Lewis, 1970). In addition to experimental methods, research interest in identifying elite athletes using psychometric methods also grew substantially during the same period (Kornspan, 2012). Fuchs (2009) recounted that Babe Ruth, then a baseball superstar with the New York Yankees, received a set of psychological tests (e.g., reaction time and intelligence) in the Columbia University psychology laboratory after a baseball game during the 1921 season. Unsurprisingly, Ruth excelled in most tests compared to the population average (Fullerton, 1921).
Sport and exercise psychology developed quickly in the second half of the 20th century and beyond, while separating itself from related domains such as motor learning and control (Weinberg & Gould, 2011). Meanwhile, the domain’s research methods were diversified by an applied concern with performance enhancement in sport and physical training (Kornspan, 2012). For instance, some researchers were influenced by Yerkes and Dodson’s (1908) work (i.e., the inverted-U law relating arousal and performance) and sought to understand individualized profiles of the arousal–performance relationship using probabilistic methods, later categorized as the idiosyncratic approach (e.g., Kamata, Tenenbaum, & Hanin, 2002). Other researchers followed Maslow’s (1968) work on peak experience and studied a similar construct termed the flow state in sport and exercise psychology (e.g., Csikszentmihalyi, 1990; Jackson & Marsh, 1996). Due to the experiential nature of the flow state, research on flow often includes both qualitative (e.g., interview) and quantitative (e.g., psychometric) methods. Yet other researchers considered the performance enhancement issue from a social perspective by focusing on group performance (e.g., Carron, Bray, & Eys, 2002). Such research tends to involve the measurement of multiple team characteristics, and thus multivariate statistical techniques like structural equation modeling became an appropriate modeling tool. As a result of the rapid development of sport and exercise psychology, evidence on specific topics has accumulated at an ever-increasing speed, allowing data synthesis methods (e.g., meta-analysis) to appear and prosper.
Nowadays sport and exercise psychology is a distinctive domain with its own organizations and journals. Although sport and exercise psychology journals are not identical to one another in mission, they share the domain’s clear research themes. Specifically, sport and exercise psychology research revolves around two themes: (a) how psychological factors affect physical behaviors and motor performance, and (b) how physical activity engagement affects psychological development and well-being (Weinberg & Gould, 2011). Given the domain’s research themes, brief history, and methodological assumptions, research methods are discussed next.
Six categories of research method in sport and exercise psychology are introduced here, including the (a) experimental approach, (b) psychometric approach, (c) multivariate correlational approach, (d) meta-analytic approach, (e) idiosyncratic approach, and (f) qualitative approach. The six categories were selected because of their important roles in the domain’s history and because each of them satisfies a distinctive research purpose.
Experimental Approach

The first category of research method in sport and exercise psychology is the experimental approach. When using this method, the research purpose is to make causal inferences (Shadish, Cook, & Campbell, 2002). That is, researchers conduct experiments to answer questions such as whether X (i.e., an independent variable) causes Y (i.e., a dependent variable). Knowing whether a change in one variable causes a change in another variable is considered a challenge in science. Testing hypotheses via the experimental approach is a viable solution to challenges such as causal inference (Rubin, 1974). An experiment incorporates features, such as a control group and random allocation, so that, if Y demonstrates a measurable effect, X is a more plausible cause than all other covariates (Shadish, Cook, & Campbell, 2002). The interest in studying causal relationships in sport and exercise psychology is no different from that of other scientific domains. However, representative method features exist in the domain when answering domain-specific questions.
For example, one causal question concerns whether video presentations using 3-dimensional (3D) technology result in decision-making enhancement in sports. Findings from several experimental studies consistently indicated that skilled athletes made better decisions (regarding accuracy and latency) than their less skilled counterparts (for a review see Tenenbaum, 2003). However, some researchers criticized the evidence for low external validity because participants were not acting in real situations but watching pre-recorded videos in a laboratory room prior to making decisions (Abernethy, Thomas, & Thomas, 1993; Mann, Williams, Ward, & Janelle, 2007). Others also argued that the low external validity dampened the expertise effect on decision-making in sport (Williams, Davids, Burwitz, & Williams, 1992). Because 3D technology incorporates binocular disparity, accommodation, and convergence techniques in presenting spatial cues, 3D videos are viewed as more authentic visual representations than traditional video clips (Yang et al., 2012). This makes 3D technology a plausible means of enhancing the external validity of experiments, and it thereby holds potential to enhance decision-making in sports.
Liu et al. (2017) conducted an experimental study to test whether the application of 3D technology to video presentations would enhance tennis players’ anticipatory decision making. The study incorporated at least three method features that tend to be considered by researchers working on related topics. First, the study involved a temporal-occlusion task in measuring tennis players’ decision making. The task used a video simulation technique in which video scenarios were produced from a returner’s perspective to re-create a customary view of returning a serve in tennis (see Williams & Ward, 2003). All the video scenarios were edited to start at the moment the server tosses the ball and to turn completely black at the moment of racket–ball contact. Because no task-relevant information was presented after the frame of racket–ball contact, the task was temporally occluded. In the task, participants judged the serve direction by pressing labeled keys on a keyboard as quickly and as accurately as possible, and their decision-making accuracy (in percentage correct) and reaction time (RT) were measured.
Moreover, Liu et al. (2017) included a placebo condition in addition to a control condition of watching traditional video clips and a condition of watching 3D video clips. Although 3D technology could enhance visual representations, it required those watching 3D video clips to wear glasses. Wearing glasses introduced a procedural difference between the 3D condition and the control condition, and it could lead to perceptual differences (e.g., a darker video display; Read & Bohr, 2014). To render this difference unlikely to confound the results, a placebo condition was necessary in the experimental design. Liu et al.’s results revealed that wearing 3D glasses shortened decision-making RTs without affecting accuracy. This finding helped clarify previous findings that 3D is associated with shorter decision-making RTs but similar accuracy compared to the control condition (e.g., Hohmann, Obelöer, Schlapkohl, & Raab, 2016).
Finally, Liu et al. (2017) randomly allocated participants to the three groups according to Schulz’s (1996) recommendations. Specifically, they used a randomizing tool (i.e., Excel) to generate a random sequence and concealed the sequence until group assignment. Although random allocation is a hallmark of the experimental approach, it can be difficult to achieve when conducting experiments with high-level athletes. This is because athletes tend to maintain a rigid schedule, and they constitute a small proportion of the population. Therefore, researchers in sport and exercise psychology are often required to make adaptations in their experiments. For example, Hohmann et al. (2016) used the training schedule of a women’s handball team to form two training conditions (a 3D video training condition and a traditional video training condition) on decision making and found a training benefit with 3D compared to traditional video footage. However, a threat to causal inference is associated with Hohmann et al.’s group allocation strategy: An experiment missing random allocation bears increased risks of reaching biased conclusions, and thus its design is quasi-experimental (Shadish et al., 2002). That is, the observed training benefit could have been caused by factors other than the video footage, such as pre-training differences between the conditions.
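The allocation logic described above can be sketched in a few lines of code. The Python function below is a hypothetical illustration, not Liu et al.’s actual Excel procedure: it builds a balanced sequence of group labels, shuffles it, and is meant to be generated in advance and kept concealed from experimenters until each participant is assigned.

```python
import random

def make_allocation_sequence(n_participants, groups, seed=None):
    """Generate a balanced, concealed random allocation sequence.

    Each group label appears (roughly) equally often; the shuffled
    sequence should be stored away from experimenters until each
    participant's turn for assignment (Schulz-style concealment).
    """
    rng = random.Random(seed)
    # Repeat group labels so each condition appears equally often.
    sequence = [groups[i % len(groups)] for i in range(n_participants)]
    rng.shuffle(sequence)
    return sequence

# Example: 12 participants, three conditions as in Liu et al. (2017).
sequence = make_allocation_sequence(12, ["3D", "placebo", "control"], seed=42)
print(sequence)
```

With 12 participants and three conditions, each condition receives exactly four participants, although the order in which they are assigned is unpredictable.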
Beyond measurement and design, using appropriate analyses increases the likelihood of making correct causal inferences in the experimental approach. The importance of selecting the statistical analysis is shown in another causal question in sport and exercise psychology: Does physical exercise enhance cognitive functioning? Experimental studies probing this question are frequently randomized controlled trials with pretest–posttest cognitive measurement. For example, in a study investigating whether physical exercise increases students’ academic achievement, Davis et al. (2011) randomly allocated 171 overweight children to three conditions: two exercise conditions (differing in exercise dosage) and a passive control condition. In addition, Davis and colleagues measured the children’s academic achievement before and after a 13-week exercise program. The causal inference in the pretest–posttest experimental design could be made through two types of statistical analysis (Vickers & Altman, 2001). The first type, analysis of covariance (ANCOVA), enables group comparison on the posttest score while treating the pretest score as a covariate. The alternative, gain score analysis (GSA), uses the gain score (i.e., posttest minus pretest) in comparing conditions. Although both ANCOVA and GSA share a similar set of statistical assumptions, they often lead to disparate conclusions. This is historically known as “Lord’s paradox” (Lord, 1967).
Liu, Lebeau, and Tenenbaum (2016) recently summarized the conclusion of Lord’s paradox: as long as a study follows the experimental approach, researchers should prefer ANCOVA to GSA in analyzing pretest–posttest data. ANCOVA is preferable to GSA because it offers higher testing power and more reliable effect estimates (e.g., Miller & Chapman, 2001). Liu and colleagues also found empirical evidence supporting the conclusion of Lord’s paradox by reviewing 33 randomized controlled trials published between 1996 and 2015. Specifically, a dependence pattern between pretest group differences and statistical testing conclusions was revealed among GSA studies: Statistically significant findings were more likely to appear in GSA studies where the control subjects outperformed exercise subjects on cognitive tasks at pretest. Given the conclusion of Lord’s paradox, it is unfortunate that 27 of the 33 studies in Liu and colleagues’ review adopted GSA rather than ANCOVA. The reason may be that researchers sometimes unwittingly select data analysis strategies based on data patterns (Gelman & Loken, 2013). It is thereby important to call for the use of ANCOVA instead of GSA in analyzing pretest–posttest datasets within the experimental approach.
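The divergence at the heart of Lord’s paradox can be made concrete with a small numerical sketch. GSA implicitly fixes the pretest-to-posttest slope at 1, whereas ANCOVA estimates the pooled within-group slope from the data; when the groups differ at pretest, the two adjustments disagree. The scores below are hypothetical and serve only to illustrate the arithmetic, not any cited study’s data.

```python
def mean(xs):
    return sum(xs) / len(xs)

def within_group_slope(groups):
    """Pooled within-group regression slope of posttest on pretest."""
    num = den = 0.0
    for pre, post in groups:
        mp, mq = mean(pre), mean(post)
        num += sum((x - mp) * (y - mq) for x, y in zip(pre, post))
        den += sum((x - mp) ** 2 for x in pre)
    return num / den

def gsa_effect(pre1, post1, pre2, post2):
    """Gain score analysis: group difference in (posttest - pretest)."""
    return (mean(post1) - mean(pre1)) - (mean(post2) - mean(pre2))

def ancova_effect(pre1, post1, pre2, post2):
    """ANCOVA-adjusted group difference: the posttest difference
    corrected by the within-group slope times the pretest difference."""
    b = within_group_slope([(pre1, post1), (pre2, post2)])
    return (mean(post1) - mean(post2)) - b * (mean(pre1) - mean(pre2))

# Hypothetical scores; the control group starts higher at pretest.
exercise_pre, exercise_post = [10, 12, 14, 16], [13, 14, 17, 18]
control_pre, control_post = [14, 16, 18, 20], [15, 17, 18, 21]

print("GSA effect:   ", gsa_effect(exercise_pre, exercise_post, control_pre, control_post))
print("ANCOVA effect:", ancova_effect(exercise_pre, exercise_post, control_pre, control_post))
```

With these made-up data, GSA yields an effect of 1.75 while ANCOVA yields 1.45: the same dataset, two defensible analyses, and two different answers, which is exactly the pattern Lord (1967) described.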
In the previous sections, several method examples of the experimental approach were introduced against the background of domain-specific questions. Although the experimental approach remains a valuable method category in sport and exercise psychology, it is limited by its associated assumptions. For example, one limitation of the experimental approach is that all the experimental groups are assumed to be equivalent to each other on all variables. Rubin (1974) argued that randomly allocating participants provides researchers with a convenient statistical defense of group equivalence in an experiment. Nevertheless, the groups of a given experiment can be nonequivalent even when random allocation is properly implemented. Another limitation of the experimental approach is its linear and reductionist reasoning. This limitation is mirrored by the assumptions of the linear statistical models frequently used to analyze experimental data. These assumptions include at least (a) linearity between the independent variable and the dependent variable, (b) independence of data points, (c) normality of observed variables, (d) residual normality and homogeneity, and (e) correct model specification (see Cohen, Cohen, West, & Aiken, 2003). Finally, experiments are limited by their measurements when those measurements bear insufficient reliability and validity. The establishment of measurement reliability and validity requires a unique method approach. A second category of research method, the psychometric approach, is described next.
Psychometric Approach

In sport and exercise psychology, researchers predominantly choose classical test theory to guide their psychometric work, and only a few scholars have explored alternative theories such as item-response theory (Zhu, 2012).1 However, regardless of the theoretical guidance, developing valid measurement tools is always the reason for using psychometric methods. Validity refers to the evidence-based inferences that the observed score (which reflects an interaction between the measurement tool and the sample of participants) represents the targeted construct (for a review, see Flake, Pek, & Hehman, 2017). According to the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014), the process of validating a measurement consists of long-term efforts rather than a one-shot study design. The long-term efforts of measurement validation span at least three phases: (a) substantive, (b) structural, and (c) external.2
The substantive phase concerns specifying an instrument’s theoretical background so that past literature can be connected in defining the measured construct. Therefore, this phase is critical for generating the instrument’s items. A well-developed instrument at the substantive phase is associated with face validity and content validity. Face validity evidence comes from laypersons judging the instrument to measure the desired construct, and content validity evidence is identical to face validity evidence except that the judgments come from content experts (Vaughn & Daniel, 2012). The structural phase features quantitative analyses examining the instrument’s structure and items. Quantitative analyses, such as factor analysis (including exploratory factor analysis and confirmatory factor analysis) and various reliability estimates (e.g., coefficient α), help in testing the factor structure and the internal consistency of the measure. Internal consistency is critical evidence for an instrument’s reliability. Because reliability reflects the degree to which an instrument is free from errors (resulting from random factors), it is a prerequisite for validity and can be used to infer validity (McDonald, 1999). Lastly, during the external phase, researchers place the construct (operationalized by the instrument) in a larger nomological network by testing how it relates to other constructs. Researchers can thus glean evidence about the convergent validity and discriminant validity of the instrument. Convergent validity and discriminant validity can be inferred from correlational evidence in that the correlations between conceptually related (or converging) measures should be higher than those between conceptually distinguishable (or diverging) measures (Vaughn & Daniel, 2012).
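For readers unfamiliar with internal consistency estimates, the computation of coefficient α can be sketched directly from item-level scores: α scales up the proportion of total-score variance not attributable to the sum of the item variances. The snippet below is a minimal illustration with hypothetical responses; in practice researchers rely on dedicated statistical software.

```python
def variance(xs):
    """Sample variance (denominator n - 1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """Coefficient alpha. `items` is a list of per-item score lists:
    one inner list per item, one entry per respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    sum_item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

# Hypothetical responses to a three-item subscale (five respondents).
items = [
    [2, 3, 4, 4, 5],
    [3, 3, 4, 5, 5],
    [2, 4, 4, 4, 5],
]
print(round(cronbach_alpha(items), 3))  # -> 0.935
```

Because the three hypothetical items covary strongly across respondents, the resulting α (about .94) would be read as high internal consistency; near-zero inter-item covariance would drive α toward zero.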
In sport and exercise psychology, researchers follow identical principles in validating measurements of a wide array of constructs (see Tenenbaum, Eklund, & Kamata, 2012). The constructs range from perception centered, such as self-efficacy and executive functioning, to affect based, such as burnout and flow, to motivation oriented, such as intrinsic/extrinsic motivation, and many more. Due to the similarity across these measures regarding their validation process, only one measure from sport settings is described to represent the domain in introducing the psychometric approach; for an example representing exercise settings, readers are referred to the psychometric work on measuring the construct of behavioral habit (e.g., Gardner, Abraham, Lally, & de Bruijn, 2012; Verplanken & Orbell, 2003).
Competitive anxiety is a construct of interest to sport and exercise psychology researchers. Developing a valid measurement of competitive anxiety has thus become a distinctive research area in sport and exercise psychology. Prior to 1990, the measurement of competitive anxiety in sport primarily involved signs of somatic anxiety (Smith, Smoll, & Schutz, 1990). Based on a multidimensional theory of anxiety (Morris & Liebert, 1973), which treats somatic anxiety and cognitive anxiety as separable dimensions within the concept of anxiety, Smith et al. (1990) developed the Sport Anxiety Scale (SAS). Smith and colleagues’ results consistently demonstrated a three-factor structure in the SAS when high school and collegiate athletes were recruited as participants. The three factors gave rise to three subscales of the SAS: somatic anxiety, worry, and concentration disruption. Whereas the somatic anxiety subscale represents the somatic anxiety dimension, the other two subscales reflect the cognitive anxiety dimension. Although the SAS was a useful tool to measure competitive anxiety, it was only validated for use among high school and collegiate athletes. The factor structure of the SAS broke down when it was tested in children aged 9 to 12 years (e.g., Smith, Smoll, & Barnett, 1995). Smith, Smoll, Cumming, and Grossbard (2006) subsequently validated a revised version of the SAS, the Sport Anxiety Scale-2 (SAS-2), by extending the measurement of competitive anxiety downward on the age continuum to children as young as nine years old. Smith and colleagues’ development of the SAS-2 exemplifies the three stages of measurement validation.
At the substantive stage of developing the SAS-2, Smith et al. (2006) first hypothesized reasons for the problems of the SAS when administered to children aged between 9 and 12. One reason Smith and colleagues found was that, based on a Flesch–Kincaid readability assessment (Harrison, 1980), many items in the SAS had reading levels above the ninth grade, and thus they posed difficulty for younger children’s reading comprehension. In addition, some SAS items in both the worry and concentration disruption subscales had confounding parts in their content. For example, one worry item reads, “I’m concerned about performing poorly,” and one item from the concentration disruption subscale reads, “I’m concerned I won’t be able to concentrate.” The “I’m concerned” stem in both items may have reduced the semantic distinctiveness between the items. Therefore, to rectify these potential problems, Smith and colleagues generated new items for each of the three SAS subscales by removing confounding item content and by maintaining item readability at a level suitable for readers aged eight years and above. This step ensured that both domain experts and intended laypersons agreed with what the SAS-2 intended to measure, thus gleaning both content validity and face validity evidence.
At the structural stage, Smith et al. (2006) performed factor analysis and calculated coefficient α for the observed SAS-2 scores. The scores were derived from multiple samples varying in their representation of the intended athletic population: 277 9- and 10-year-old children, 418 11- and 12-year-old children, 342 13- and 14-year-old children, and 593 college freshmen. Results of factor analysis for all the samples supported an identical three-factor structure of the SAS-2, suggesting factorial invariance across the tested age range (i.e., from nine years old to college age). Additionally, the α coefficients estimated from the observed SAS-2 scores indicated that reasonable reliability was achieved in measuring competitive anxiety.
Finally, Smith et al. (2006) made substantial efforts at the external stage to validate the SAS-2. To obtain evidence for convergent validity and divergent validity, Smith and colleagues correlated the observed SAS-2 scores of competitive anxiety with scores from the original SAS and with several other theoretically relevant constructs. These constructs included coach-initiated motivational climate, social desirability, achievement-related goals, and self-esteem. The pattern of the correlational evidence supported that the SAS-2 is a valid measure of competitive anxiety when considered in a broader conceptual background. For instance, Smith and colleagues administered both the SAS and the SAS-2 to a college-age sample and correlated the two scales. Results showed that the correlation coefficients between the same subscales were higher than those between different subscales. As another example, self-esteem has consistently exhibited a negative relation to competitive anxiety in past research (e.g., Brown, 1998), and Smith and colleagues likewise obtained negative correlation estimates between competitive anxiety and self-esteem.
To summarize, developing valid measurement tools is an ongoing research endeavor that requires researchers to seek evidence from multiple sources by various means. Although the endeavor can be roughly separated into linear stages of measurement validation, the process is actually iterative and interactive. From this perspective, Smith and colleagues provided a good example of how to apply psychometric methods to measuring competitive anxiety. Although we only focused on the SAS-2 in this section, the psychometric research on competitive anxiety should be viewed on a larger scale by including the development of both the SAS and the SAS-2, in addition to the evidentiary contributions made in other projects that involved the SAS/SAS-2. A good measurement is only a start in research because measurement enables researchers to study how the measured construct relates to other constructs. Many researchers are asking questions concerning the relationships among multiple constructs. This trend facilitates the growth of research methods under another category: the multivariate correlational approach.
Multivariate Correlational Approach
The introduction of research methods under the multivariate correlational approach focuses on path analysis and its more advanced version based on the notion of latent variables, structural equation modeling (SEM). SEM has become increasingly popular in the behavioral and social sciences, including sport and exercise psychology. This is because SEM enables researchers to conduct model testing in circumstances that cannot be well served by other methods (Bentler, 1986). Specifically, SEM demonstrates advantages in modeling multivariate correlational relationships by merging multiple regression analysis and factor analysis (McIntosh, in press). SEM goes beyond multiple regression analysis by considering measurement errors in predictor-like variables and by enabling the examination of multiple criterion-like variables at the same time. In addition, unlike factor analysis, which does not offer significance tests of the overall factor model or of individual factor loadings, SEM generates significance test results for the overall model’s fit to the data and for individual parameter estimates.
Broadly, SEM consists of two components: the measurement model and the structural model (Kline, 2011). In the measurement model, researchers specify and test hypothesized relationships among observed variables and latent variables. The measurement model is often used as an independent method, known as confirmatory factor analysis (CFA), to examine the factor structure in questionnaire-type data. A good model–data fit in the measurement model not only generates validity evidence for the measurement but also supports using the extracted latent variables for further modeling (for a review, see Morin, Myers, & Lee, in press). In the structural model, researchers specify and test hypothesized relationships (which are often directional, based on theories and temporal order, just as in multiple regression analysis) among latent variables (Kline, 2011). Because many SEM applications aim to test a priori hypotheses about the structural model, the measurement model is frequently the step preceding the structural model. Recently, McIntosh (in press) introduced the Bayesian approach to SEM applications and viewed it as a vital opportunity for the sport and exercise domain. The Bayesian approach allows previous data to be utilized in SEM model testing, thus elevating the importance of published data to the domain. In sport and exercise psychology, SEM applications are conducive to answering important research questions. Two representative cases of this method are described next.
One question asked by researchers in sport settings is what environmental and dispositional factors may account for athletes’ burnout. Burnout is a chronic state of ill-being associated with long-term exposure to stress (Maslach, Schaufeli, & Leiter, 2001). For athletes, the state of burnout is related to sport competition, and it is considered a syndrome (Eklund & Cresswell, 2007). The syndrome is characterized by three dimensions of experience in sport settings: a reduced sense of accomplishment, sport devaluation, and emotional and physical exhaustion (Raedeke & Smith, 2001). Barcza-Renner, Eklund, Morin, and Habeeb (2016) applied SEM to investigate a multivariate relationship centering on athlete burnout. The data were collected from 487 NCAA Division I swimmers who were facing conference championship meets in three weeks, and they consisted of measurements of athlete burnout, type of motivation (i.e., autonomous, controlled, amotivation), athlete perfectionism (i.e., self-oriented versus socially prescribed), and coaching behavior (i.e., controlling use of rewards, negative conditional regard, intimidation, and excessive personal control). The multivariate correlational relationships among 10 latent variables (i.e., three from motivation, two from perfectionism, one from burnout, and four from coaching behavior) thus became readily testable using SEM. Guided by theory and past evidence, Barcza-Renner and colleagues hypothesized that coaching behaviors would affect athlete burnout through two sequential mediators: athlete perfectionism and motivation.
Prior to testing the hypothesized structural model, Barcza-Renner et al. (2016) ran a CFA model by specifying the 10 latent variables and letting them correlate freely with one another. Their specification of the measurement model was based on previous studies investigating the factor structures of identical measurement instruments. For instance, the specification of the measurement model for athlete burnout adopted a bifactor procedure based on Cresswell and Eklund (2005). That is, each of the burnout items was specified to load simultaneously on a global athlete burnout construct and on one of the three dimensions of the athlete burnout syndrome (e.g., sport devaluation). Barcza-Renner and colleagues’ measurement model was supported by the results of multiple model-data fit indices (see Biddle et al., 2001 for a review of model fit indices in SEM), indicating that the extraction of the 10 latent variables together with the observed variables reasonably accounted for the data. Barcza-Renner and colleagues subsequently tested the structural model based on their hypothesis. However, because mediating effects can be either direct or indirect, multiple nested structural models were fitted to the data to select a winning model that was both relatively parsimonious and well fitting. The results suggested that specific types of coaching behavior exerted differential influences on athlete burnout, either directly or indirectly through the mediators of perfectionism and motivation. Overall, the application of SEM in Barcza-Renner and colleagues’ study helped explore the hypothesized effects of environmental (i.e., coach) and dispositional (i.e., perfectionism) factors on athlete burnout, although one cannot draw causal inferences from the findings given the study’s cross-sectional nature.
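The model-data fit indices mentioned above can be illustrated with a small computation. The sketch below, in Python, computes two widely reported chi-square-based indices, RMSEA and CFI, from hypothetical chi-square values (not figures from Barcza-Renner et al.):

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation, computed from a model's
    chi-square statistic, its degrees of freedom, and the sample size."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    """Comparative fit index: the model's improvement over the baseline
    (independence) model, bounded between 0 and 1."""
    d_model = max(chi2 - df, 0.0)
    d_base = max(chi2_base - df_base, d_model)
    return 1.0 - d_model / d_base if d_base > 0 else 1.0

# Hypothetical values: model chi-square of 150 on 100 df with N = 487
print(round(rmsea(150, 100, 487), 3))      # → 0.032 (below the common .06 cutoff)
print(round(cfi(150, 100, 2000, 120), 3))  # → 0.973 (above the common .95 cutoff)
```

The .06 and .95 cutoffs cited in the comments are conventional guidelines, not strict rules; fit evaluation in practice weighs several indices together.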
A second question asked by researchers in exercise settings is what cognitive functions are subsumed under the concept of executive functioning (EF), because physical exercise seems to produce moderate-to-large EF improvements (see Colcombe & Kramer, 2003). EF is regarded as a higher-order construct that regulates attentional functions and is associated with the functioning of the frontal cortex (Baddeley, 1986; Norman & Shallice, 1986). Despite such complexity, the measurement of EF in relevant research tends to come from a single task (e.g., the Wisconsin Card Sorting Test). This reality raises researchers’ concerns over the “task impurity problem”: any single task claimed to assess EF is impure in that it may also require cognitive skills/functions subserved by non-frontal cortical regions (Phillips, 1997). To clarify EF’s conceptual and functional reference, Miyake et al. (2000) applied SEM to model data from 137 college students who performed a set of 14 tasks relevant to EF. Their CFA results indicated that EF captures three distinguishable yet related cognitive functions: (a) updating/monitoring the contents of working memory, (b) shifting between mental sets represented by different task requirements, and (c) inhibiting prepotent responses. Given this functional structure of EF, Miyake and colleagues further explored how each of the three EF functions predicted several popular EF tasks using structural models. For example, only the shifting (but not the updating or inhibiting) function of EF predicted performance on the Wisconsin Card Sorting Test. By applying SEM, Miyake and colleagues helped advance the understanding of EF’s cognitive functions, and they partially resolved the task impurity issue by specifying the most influential cognitive component(s) in performing a given EF task.
Although SEM is a valuable tool for studying multivariate correlational relationships, its appropriate implementation requires caution regarding several interrelated issues. First, some of the labels attached to SEM can be inaccurate. For example, SEM is often referred to as “causal modeling,” and the measurement model as “confirmatory factor analysis.” These terms may mislead researchers into falsely believing that using SEM supports causal inferences specified by directional paths in the structural model (Guttman, 1976). This labeling issue is related to a second issue: the inability of SEM to differentiate statistically equivalent models. In SEM, the statistically equivalent models of a given model are those that produce an identical fit to the data (MacCallum, Wegener, Uchino, & Fabrigar, 1993). This limitation demystifies “causal modeling,” because a structural model with a directional path from latent variable X to latent variable Y shows no statistical difference from a model re-specifying the path from Y to X. A final issue in SEM application relates to the general difficulty of obtaining longitudinal data. Longitudinal data can help rule out some statistically equivalent models, given the principle that temporally ensuing events cannot causally affect their precedents. Therefore, research using SEM (e.g., latent growth modeling) for analyzing longitudinal data is encouraged. Thus far, all the method approaches presented herein have served to answer direct empirical questions, the studies of which are categorized as primary studies. The literature also includes secondary studies, which aim to answer questions based on evidence from primary studies. A fourth method approach, the meta-analytic approach, applied in secondary studies, is introduced next.
Gene Glass (1976) coined the term meta-analysis (literally, the analysis of analyses) when naming a research method for quantitatively reviewing results from related studies. By analyzing results from different studies on a shared substantive theme, meta-analysis enables researchers to examine whether the collective evidence is coherent and, if not, which attributes account for the differences (Becker & Ahn, 2012). Therefore, meta-analysis enables researchers to generalize the validity of results across situations varying in contextual factors, such as measurement choice, treatment implementation type, and sample (Hunter & Schmidt, 2004). Conducting a meta-analysis consists of at least five steps, as summarized by Cooper (2009): (a) problem formation, (b) data collection, (c) data evaluation, (d) data analysis, and (e) public presentation.
At the problem formation step, the researcher(s) conducting a meta-analysis must specify a research question. An appropriate question targets a substantive topic of balanced breadth: broad enough to be addressed by at least several studies, yet narrow enough to remain manageable. To estimate the number of studies available on the topic, researchers advance to the second step, data collection. Data collection includes not only searching various sources (e.g., databases) for studies matching predetermined keywords but also gathering data from those studies. Whereas the study-searching part of data collection often interacts with problem formation, the data-gathering part drives researchers to clarify the definition of each studied variable and to develop a reliable process (e.g., a manual and/or training) for coding the data. The third meta-analytic step is data evaluation, in which researchers appraise the quality of the included studies; frequently coded quality variables include study design, sample size, and treatment implementation, to name a few. At the fourth step, data analysis (for details, see Borenstein, Hedges, Higgins, & Rothstein, 2009), researchers estimate an “overall” effect size for the research question from all the included studies. Additionally, they may find that between-study differences in the effect size can be explained by certain variables (i.e., moderators). Finally, researchers must present the meta-analytic results to the public. We refrain from adding details here but refer readers to the Moher, Liberati, Tetzlaff, Altman, and PRISMA Group (2009) statement regarding the public presentation of meta-analyses.
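The core of the data-analysis step can be sketched with a minimal fixed-effect pooling routine. The effect sizes and sampling variances below are hypothetical, and real meta-analyses would typically also fit random-effects models (Borenstein et al., 2009); the Q statistic computed here is the usual starting point for probing between-study heterogeneity and, by extension, moderators:

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted pooling of study effect sizes, plus
    Cochran's Q statistic indexing between-study heterogeneity."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    return pooled, se, q

# Hypothetical standardized mean differences (d) and variances from three studies
pooled, se, q = fixed_effect_pool([0.5, 0.3, 0.4], [0.04, 0.02, 0.05])
print(round(pooled, 3), round(q, 3))  # → 0.374 0.684
```

A large Q relative to its degrees of freedom (number of studies minus one) would suggest that the studies do not share one common effect, motivating moderator analyses.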
In sport and exercise psychology, many research questions can be answered through meta-analytic applications. Two meta-analytic studies are introduced as examples here. The first concerns the validity of Borg’s Ratings of Perceived Exertion (RPE). The RPE scale measures people’s perceived exertion during physical activity and was created as a proxy indicator of exercise intensity (see Borg, 1973). In research, RPE is contrasted with physiological indicators of physical exertion, such as heart rate, oxygen consumption, and lactic acid accumulation. Although RPE was expected to correlate highly (.80–.90 for Pearson’s r) with physiological indicators, inconsistencies were noted across studies regarding how strongly RPE correlates with these physiological measures (Noble & Robertson, 1996).
Chen, Fan, and Moe (2002) conducted a meta-analysis to explore this inconsistency. Specifically, they meta-analytically reviewed the Pearson correlation between RPE and each of six physiological measures of exertion: heart rate, %VO2max, VO2, blood lactate concentration, respiration rate, and ventilation. In the study-searching part of the data collection, Chen and colleagues sought all the English-language articles (published and unpublished) available at the time. Their search yielded 437 studies to screen for inclusion eligibility. Chen and colleagues ultimately included 64 studies after applying several screening criteria. For instance, one criterion was that a study must report at least one Pearson correlation between RPE and a physiological measure; a second was that the Pearson correlation(s) must be estimated at the level of the individual participant, not at the group level. In the data-gathering part of data collection, 169, 36, 31, 27, 19, and 12 correlation coefficients were collected from the 64 studies between RPE and heart rate, blood lactate concentration, VO2, ventilation, %VO2max, and respiration rate, respectively. Additionally, several study features were coded as testable moderators, such as gender and exercise type. Chen and colleagues considered exercise type because, although RPE had been applied across exercise types (e.g., swimming), the RPE scale was developed exclusively for cycling tasks (e.g., Borg, 1973). Chen and colleagues’ data evaluation step involved a three-category coding system (i.e., excellent, satisfactory, poor); they later tested whether study inconsistencies could also be attributed to study quality, treating quality as a moderator.
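Pooling Pearson correlations of the kind Chen and colleagues collected is commonly done through Fisher’s z transform, which stabilizes the sampling variance before averaging. The sketch below illustrates the idea with hypothetical correlations and sample sizes; the actual analytic choices in Chen et al. (2002) may differ:

```python
import math

def pool_correlations(rs, ns):
    """Pool Pearson correlations via Fisher's z transform, weighting
    each study by n - 3 (the inverse of z's sampling variance)."""
    zs = [math.atanh(r) for r in rs]      # r -> z
    weights = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    return math.tanh(z_bar)               # back-transform to the r metric

# Hypothetical RPE-heart rate correlations from three studies
print(round(pool_correlations([0.83, 0.55, 0.62], [20, 45, 30]), 3))  # → 0.644
```

Because the z transform is nonlinear, this pooled value can differ noticeably from a naive average of the raw correlations, especially when the correlations are large.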
After data analysis, Chen et al. (2002) found that the overall Pearson correlations between RPE and heart rate, blood lactate concentration, VO2, ventilation, %VO2max, and respiration rate were estimated to be .62, .57, .63, .61, .64, and .72, respectively. In addition, statistically significant moderators were identified. For instance, exercise type moderated the strength of the correlation between RPE and heart rate (HR): a closer examination using multiple comparison tests revealed that the RPE-HR correlation in swimming (i.e., .83) was higher than those in other exercise types. In sum, Chen and colleagues’ study quantitatively explored the inconsistencies across studies regarding the correlation between Borg’s RPE and physiological measures. Their evidence indicated that the overall Pearson correlations were not as strong as researchers had previously thought, and that several contextual factors could moderate the size of the correlation.
The second meta-analytic study concerns the phenomenon of the Quiet Eye (QE) in sport settings. QE refers to
the final fixation or tracking gaze that is located on a specific location or object in the visuo-motor workspace within 3° of visual angle for a minimum of 100ms. The onset of the QE occurs prior to the final movement in the task, and the offset occurs when the gaze deviates off the object or location by more than 3° of visual angle for a minimum of 100ms, [and] therefore the QE can carry through and beyond the final movement of the task.
(Vickers, 2007, p. 280)
Evidence on QE grew steadily after J. N. Vickers’s publications in the 1990s (e.g., Vickers, 1996). Unlike studies that focus on gaze behaviors occurring during sport performance, QE studies emphasize the performance-predicting power of the last visual fixation prior to movement initiation. In a recent meta-analysis, Lebeau et al. (2016) synthesized the QE studies to answer the general question of whether QE duration affects sport performance. The study illustrated how detailed research questions can be formulated through the study-searching process. Specifically, Lebeau and colleagues separated the meta-analytic review into two syntheses based on the designs of the included studies: 27 non-intervention studies entered Synthesis 1, which addressed whether skilled athletes use longer QE than less skilled ones and whether longer QE durations are associated with more successful trials within individuals; nine intervention studies entered Synthesis 2, which addressed whether QE can be prolonged through training to enhance athletic performance. When gathering data from the QE studies, Lebeau and colleagues coded QE and/or performance using the Cohen’s d family of effect sizes, and they coded several hypothesized moderators. One moderator, for instance, concerned whether QE was measured on an absolute scale (in ms) or on a relative scale (QE duration divided by the period comprising QE duration and movement execution time). Beyond these data, Lebeau and colleagues coded 12 design characteristics (e.g., reporting participant inclusion/exclusion criteria) of the QE studies so that they could evaluate each study’s quality. Overall, the authors’ coding reached sufficient inter-coder reliability.
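The Cohen’s d family of effect sizes mentioned above can be computed from group summary statistics. The values below are hypothetical QE durations, not data from Lebeau et al. (2016); the Hedges’s g shown is the standard small-sample bias correction applied to d:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def hedges_g(d, n1, n2):
    """Small-sample bias correction of Cohen's d."""
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

# Hypothetical QE durations (ms): expert vs. near-expert groups
d = cohens_d(980, 150, 20, 820, 170, 20)
print(round(d, 2), round(hedges_g(d, 20, 20), 2))  # → 1.0 0.98
```

By common rules of thumb, d values around 0.2, 0.5, and 0.8 are labeled small, moderate, and large, which is the vocabulary used to summarize the syntheses below.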
Lebeau et al.’s (2016) Synthesis 1 revealed a large effect whereby skilled athletes exhibited longer QE durations than less skilled athletes and, within individuals, a moderate effect when comparing successful trials with unsuccessful ones on QE duration. Synthesis 2 indicated that QE training had a large effect on increasing QE duration, which was accompanied by a sizable improvement in performance. Collectively, Lebeau et al.’s (2016) meta-analysis generated evidence supporting a causal linkage between QE duration and sport performance. It also implied an applied value of QE training for performance enhancement in sports.
In sum, the meta-analytic approach can be a valuable tool for research in sport and exercise psychology. Like the previously introduced research methods in the domain, it allows researchers to create population-wide knowledge. For instance, the conclusions of Lebeau et al. (2016) can be generalized to the athletic populations represented by the samples of the included studies. Nevertheless, sport and exercise psychology researchers’ interests are not limited to population-wide knowledge. Sometimes they aim to describe fluctuations in the emotions, cognitions, and performance of single performers across a given time phase. In such situations, another research method, the idiosyncratic approach, is worth researchers’ consideration.
The idiosyncratic approach represents a probabilistic technique developed to extend Hanin’s (2000) conceptualization of the individual zone of optimal functioning (IZOF) model, which highlights the connection between affective states (e.g., emotion, mood) and athletic performance. Kamata, Tenenbaum, and Hanin (2002) first showed that the idiosyncratic method could help estimate an individual affect-related performance zone (IAPZ) profile curve by regressing a graded performance variable on an affect variable in an ordinal logistic model. The affect variable can be measured either physiologically (e.g., heart rate) or through self-report (e.g., arousal level). A core assumption of the IAPZ profile conceptualization is that within-individual generalizations are of foremost importance (Molden & Dweck, 2006). By delineating IAPZ profile charts, the idiosyncratic approach fits the research interest of identifying one’s IZOF by studying the intra-individual variability of both affect and performance. To that extent, the idiosyncratic approach also bears applied value for practitioners, because the identified optimal performance zones are frequently the goals of performance-enhancement techniques in sport psychology, such as self-talk and imagery (Tenenbaum, Edmonds, & Eccles, 2008).
The implementation of the idiosyncratic approach requires a sequence of operations. To help illustrate these operations, a tutorial article was published in the Journal of Clinical Sport Psychology (Johnson, Edmonds, Kamata, & Tenenbaum, 2009). The major steps include (a) collecting and recoding an individual’s performance-affect data in a given athletic task, (b) running an ordinal logistic regression to estimate model parameters, (c) using the fitted model to infer IAPZs, and (d) utilizing the IAPZs in IZOF-based research and practice. To further clarify the idiosyncratic approach, one sport study investigating IAPZs in elite archers is described.
Filho, Moraes, and Tenenbaum (2008) applied the idiosyncratic approach to the investigation of the IZOF in three elite archers. Data collection lasted an entire season, and the data consisted of arousal level and pleasure (based on the affect grid), heart rate (HR), and archery performance. Because the idiosyncratic approach consists of regressing a performance variable on an affect variable, Filho and colleagues’ data allowed them to establish three IAPZ profiles: arousal-performance, pleasure-performance, and HR-performance. To prepare the data for logistic regression, Filho et al. (2008) recoded each instance of an archer’s performance according to its percentile in the data. Specifically, the optimal, moderate, and poor performance categories were assigned to performance above the 66th percentile, between the 34th and 66th percentiles, and below the 33rd percentile, respectively. This three-level system (i.e., optimal, moderate, and poor) was further converted into a five-level system by utilizing information from the affect variable. That is, for any instance of suboptimal performance (i.e., moderate or poor), the associated affect measure must fall either above or below the mean of all the affect measures associated with the optimal performance category. This comparative affect information helped further categorize each instance of suboptimal performance. For example, if the affect measure associated with an instance of poor performance fell below the criterion, the poor performance was recoded as Poor/Below (P/B); if the affect measure associated with another instance of poor performance fell above the criterion, that poor performance was recoded as Poor/Above (P/A). Therefore, a given performance variable was transformed into a five-level ordinal variable: Poor/Below, Moderate/Below, Optimal, Moderate/Above, and Poor/Above.
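The recoding just described can be sketched in code. The percentile-rank convention, the tie-handling (affect equal to the criterion is coded Below), and the requirement of at least one optimal trial are assumptions of this illustration rather than details reported by Filho et al. (2008):

```python
from statistics import mean

def recode_five_level(performances, affects):
    """Recode raw performance scores into the five ordinal categories
    (P/B, M/B, O, M/A, P/A) used before fitting the ordinal logistic model."""
    n = len(performances)

    def pct_rank(score):
        # Percentile rank: share of scores at or below this score
        return 100 * sum(s <= score for s in performances) / n

    # First pass: three-level coding from percentile cutoffs
    base = []
    for s in performances:
        p = pct_rank(s)
        base.append("O" if p > 66 else "M" if p >= 34 else "P")

    # Mean affect over optimal trials is the Below/Above criterion
    criterion = mean(a for a, c in zip(affects, base) if c == "O")
    return [c if c == "O" else f"{c}/{'A' if a > criterion else 'B'}"
            for c, a in zip(base, affects)]

# Hypothetical (performance, affect) trials for one archer
trials = [(9, 5), (8, 5), (7, 5), (6, 5), (5, 6), (4, 3), (3, 7), (2, 2), (1, 4)]
print(recode_five_level([p for p, _ in trials], [a for _, a in trials]))
# → ['O', 'O', 'O', 'O', 'M/A', 'M/B', 'P/A', 'P/B', 'P/B']
```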
The remaining steps of the idiosyncratic approach include running ordinal logistic regressions (for details, see Johnson et al., 2009), generating graphs from the data and regression model output, and making use of the findings. Filho et al. (2008) drew the arousal-performance IAPZ curve for each of the three archers in the 70m shooting event. The IAPZ curves showed idiosyncratic differences in the range of arousal associated with optimal archery performance. Filho and colleagues were able to delineate the IAPZ profile charts for all the archers so that the archers’ season-long performance fluctuations were visually available. This information about performance fluctuation carried applied implications. For instance, although all three archers demonstrated clear performance fluctuations throughout the season in the arousal-performance IAPZ profile charts, each exhibited a unique pattern: the performance of one archer fluctuated mainly in the over-arousal range, that of a second archer mostly in the under-arousal range, and that of the third across the whole range of arousal. Given such information and a performance-enhancement intention, the first archer should apply relaxation techniques, the second archer should learn energizing techniques, and the third archer should enhance awareness of arousal level throughout the season (for a review of psychological skills in sport, see Williams, 2010).
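Once the five-level variable is fitted with a cumulative-logit (proportional odds) model, an IAPZ curve plots each category’s probability across affect levels. The sketch below computes those probabilities at a single arousal value, using illustrative thresholds and slope rather than parameters fitted to real data:

```python
import math

def iapz_probabilities(x, thresholds, slope):
    """Category probabilities at affect level x under a cumulative-logit
    model: P(Y <= k) = logistic(tau_k - slope * x), with differences of
    adjacent cumulative probabilities giving the category probabilities."""
    def logistic(t):
        return 1 / (1 + math.exp(-t))

    cums = [logistic(t - slope * x) for t in thresholds] + [1.0]
    probs = [cums[0]] + [cums[k] - cums[k - 1] for k in range(1, len(cums))]
    return probs  # ordered: P/B, M/B, O, M/A, P/A

# Illustrative parameters: four thresholds separate the five ordered zones
probs = iapz_probabilities(5.0, thresholds=[-2.0, 0.0, 2.0, 4.0], slope=0.6)
print([round(p, 3) for p in probs])  # → [0.007, 0.041, 0.222, 0.462, 0.269]
```

Evaluating this function over a grid of affect values and shading the most probable category at each point reproduces, in spirit, the zone boundaries shown in IAPZ profile charts.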
Although studies applying the idiosyncratic approach have mostly targeted sport topics, work focusing on the intra-individual variability between affect and exercise behaviors does exist in exercise settings. In particular, Williams (2008) proposed a conceptual framework wherein affective response to exercise is used to predict exercise adherence in an idiosyncratic process. The framework draws strengths from both dual-mode theory and hedonic theory: dual-mode theory (Ekkekakis, 2003; Ekkekakis, Hall, & Petruzzello, 2005) posits how exercise intensity shapes affective response via cognitive (e.g., appraisal) and interoceptive (e.g., ventilatory drive) factors, and hedonic theory (see Williams, 2008 for a review) holds that affective response to exercise predicts future exercise adherence through anticipated affective response. Therefore, Williams’s conceptual framework suggests a plausible way of individualizing exercise prescriptions to maximize future exercise adherence. Preliminary evidence from longitudinal studies (e.g., Williams et al., 2008) supports Williams’s conceptual framework, although exercise studies applying methods such as the idiosyncratic approach used in sport are still missing.
The difficulties of applying the idiosyncratic method to exercise settings are multifaceted. First, even with hedonic theory, it remains unclear how to obtain a single measure of affective response to exercise for predicting exercise adherence. For instance, affective responses at both the peak and the end of exercise are likely to influence an individual’s overall affective evaluation of a given exercise bout (see Kahneman, Fredrickson, Schreiber, & Redelmeier, 1993). If so, how should the two affective responses be modeled into one overall affective response for use in the idiosyncratic method? Second, similar to the case of affective response, the concept of exercise is also hard to represent concisely, owing to variability beyond exercise intensity, such as the duration and mode of exercise. Third, the quantification of exercise relies heavily on retrospective reports, which can be biased (Shiffman et al., 1997). Fourth, unlike the affect-performance association measured in idiosyncratic sport studies, the measurements of affective response to exercise and of exercise adherence are separated by a longer time interval, which makes the predictive relationship more susceptible to confounding factors.
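One candidate answer to the first difficulty is to combine a within-bout affect series using the peak-end heuristic cited above. The sketch below averages the most extreme and the final ratings; treating the absolute extreme as the “peak,” and simple averaging rather than some weighted scheme, are assumptions of this illustration, not prescriptions from the literature:

```python
def peak_end_affect(affect_series):
    """Summarize a within-exercise affect series (e.g., feeling-scale ratings
    from -5 to +5) as the average of its most extreme ('peak') and final
    ('end') values, following the peak-end heuristic."""
    peak = max(affect_series, key=abs)  # most intense rating, by magnitude
    end = affect_series[-1]
    return (peak + end) / 2

print(peak_end_affect([2, 5, 3, 1]))   # → 3.0
print(peak_end_affect([-4, 2, 1]))     # → -1.5 (a negative peak dominates)
```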
Overall, the idiosyncratic approach is a viable tool for understanding the intra-individual association between affect and performance in sport, and potentially between affective response to exercise and exercise adherence. It is thus a distinct method approach compared with those introduced previously, which primarily help investigate inter-individual variability. Despite this difference, however, all the research methods introduced so far share a fundamental premise: revealing truth in the domain demands the quantification of what is researched. Although the dominance of quantitative research in sport and exercise psychology is unquestionable, the existence of qualitative research should be appreciated and acknowledged (Smith & Sparkes, in press). The next section reflects this general position.
First and foremost, we must clarify that, given the scope of this article, we do not intend to present a complete picture of qualitative research in sport and exercise psychology. The concept of qualitative research is broader than what we mean by the “qualitative approach,” which can be loosely regarded as the method component of qualitative research. Although qualitative researchers are reluctant to define qualitative research, for good reasons (see Smith & Sparkes, in press), a working definition will help readers get a sense of the term.
Qualitative research is a situated activity which locates the observer in the world. Qualitative research consists of a set of interpretive, material practices that make the world visible. These practices transform the world. They turn the world into a series of representations, including fieldnotes, interviews, conversations, photographs, recordings and memos to the self. At this level, qualitative research involves an interpretive, naturalistic approach to the world. This means that qualitative researchers study things in their natural settings, attempting to make sense of or interpret phenomena in terms of the meanings people bring to them.
(Denzin & Lincoln, 2011, p. 3)
Qualitative research methods offer researchers a set of diverse options, enabling them to understand issues in an enriched manner (Smith & Sparkes, 2016a). Specifically, the options come from choosing the interpretive lens, the form of data collection, and the type of data analysis. Regarding data collection, the working definition of qualitative research already provides a good explanation. The interpretive lenses include but are not limited to case study (Hodge & Sharp, 2016), narrative inquiry (Papathomas, 2016), phenomenology (Smith, 2016), ethnography (Atkinson, 2017), critical theory (Patton, 2002), and community-based participatory action research (Schinke & Blodgett, 2016). Each interpretive lens provides a specific way for researchers to construe meanings from the data, and it must also fit the purpose of a given study. For instance, ethnography fits the aim of understanding the culture of a particular group from the group members’ angle(s). This interpretive lens has been applied in cultural sport psychology research (Blodgett, Schinke, McGannon, & Fisher, 2015), whose central tenet is to consider individuals as “saturated with cultural meanings and social norms” in sport settings (Ryba & Schinke, 2009, p. 264). Lastly, like data collection and the interpretive lens, qualitative data analysis offers several methods, including content analysis, phenomenological analyses, framework analysis, narrative analyses, conversation analysis, discourse analyses, and thematic analysis (see Smith & Sparkes, 2016b for details). For example, thematic analysis, a “method for identifying patterns (‘themes’) in a data set, and for describing and interpreting the meaning and importance of those” (Braun, Clarke, & Weate, 2016, p. 191), is considered the most popular qualitative analysis in sport and exercise psychology (Smith & Sparkes, in press).
The popularity of, and significance attached to, the qualitative approach is growing in sport and exercise psychology. Culver, Gilbert, and Sparkes (2012) compared the percentage of qualitative publications in three domain-specific North American journals between the 1990s and the 2000s and concluded that the percentage in the 2000s had increased by 68% over that in the 1990s. Additionally, Culver and colleagues found that the number of different authors publishing qualitative research in the three journals also increased significantly from the 1990s to the 2000s. Beyond formal investigations, the trend of valuing qualitative research is also evidenced by the addition of a first-ever chapter on qualitative research to the new edition of the Handbook of Sport Psychology (Tenenbaum & Eklund, in press). To show the strength of the qualitative approach in sport and exercise psychology research, two qualitative studies are described next.
The first study concerns understanding what “pleasure” means for older adults who engage in regular exercise. Realizing that the concept of pleasure is under-researched within health and health-related areas, and that pleasure is important for driving and maintaining exercise behaviors, Phoenix and Orr (2014) adopted an interpretive paradigm and a narrative social constructionist approach to help them interpret the meaning of pleasure. The study involved 51 older adults who were over 60 years old and self-identified as exercisers. Data collection included life-history interviews and photography-based exercises. In the photography-based exercise, 27 participants completed sessions in which their exercise processes were photographed by the researchers. The photos were then used to elicit discussions between researchers and participants about the participants’ exercising moments. To analyze the data, Phoenix and Orr employed a type of thematic analysis (i.e., categorical-content analysis) to identify central themes regarding what pleasure meant for participants inside and outside exercise settings.
Four distinct types of pleasure emerged in Phoenix and Orr’s (2014) study: (a) sensory pleasures, (b) documented pleasures, (c) the pleasure of immersion, and (d) the pleasure of habitual action. Sensory pleasures are associated with positive sensory stimulation during exercise, such as the smell of nice perfumes in dancing and the feeling of water in swimming. Documented pleasures are a type of temporally expansive gratification resulting from the process and outcome of documenting exercise. For instance, one participant showed the researchers his decades-long training log recording his engagement with multiple types of exercise. Phoenix and Orr interpreted a moment of documented pleasure from this participant, who seemed to relive the past in the present during the recounting. The pleasure of immersion comes from a sense of body-mind unification during exercise, which enabled participants to escape from, reflect on, and/or gain perspective on important issues in their lives. Lastly, the pleasure of habitual action comes from satisfying the need to maintain regular physical activity as a habit. For example, a participant recalled:
It’s purely that I feel it’s [Tai Chi] better for me . . . Some mornings you wake up and you think “oh, I feel a bit grotty this morning,” you know. And I think no, once you’ve missed it once, the temptation is to say every time I feel a bit grotty, that I won’t do it. And in fact, I always feel great afterwards, I feel absolutely wonderful afterwards and then I feel happy that I made myself go.
(Phoenix & Orr, 2014, p. 98)
Overall, the study outlined the dimensions of pleasure and their relations to exercise engagement, especially among physically active older adults. The four types of pleasure offer insights for future theorizing about emotion in exercise behaviors. Perhaps most important, such findings would be difficult to capture in research using quantitative methods. To further showcase research using qualitative methods, a second study, which investigated a talent-development program from the perspective of cultural sport psychology, is described.
In a case study, Henriksen, Stambulova, and Roessler (2010) followed an ecological approach to examine the talent-development system of the successful Danish national 49er sailing team. Unlike studies focusing on individual athletes, Henriksen and colleagues were interested in the role of culture in athletic talent development. In their proposed model, they highlighted various dimensions of influence in cultivating sports talent. Specifically, the model consisted of three temporal categories (i.e., past, present, and future), two domains (i.e., athletic and non-athletic), and two levels (i.e., micro- and macro-). Cultural influence (e.g., organizational culture and the national sports federation) was thus reflected in the macro-level factors throughout time and across domains. Multiple forms of data (i.e., semi-structured interviews, in situ observation, and document analysis) were collected from several groups of interest (i.e., athletes, coaches, and team managers). Both deductive and inductive processes were involved in analyzing the data under the interpretive guidance of ethnography. As a result, important points were revealed regarding how cultural aspects affect the growth of sailing athletes on the team.
First, even though the Danish athletes were experiencing a shortage of resources (e.g., financial support) compared to counterparts from other nations, they compensated for this weakness by strengthening an overall culture named the Danish model. The model can be summarized in the words of one athlete: “Every country wants to be the best. At the same time, their athletes compete among themselves. We are the only nation that has chosen to solve this dilemma by working together. And I am damned proud of that” (p. 219). Moreover, the details of the Danish model are embodied in six interconnected assumptions held by everyone on the team: (a) an individual must be responsible for his or her own excellence, (b) team success is a precondition for individual success, (c) elite/older athletes have both priority team status and a duty to help pre-elite/younger members, (d) everyone should always strive for improvement, (e) optimal performance comes from concentrating on processes instead of outcomes, and (f) everybody improves and benefits from sharing knowledge and cooperating with others. Lastly, within this culture of openness and collaboration, autonomy was deemed a key attribute of future elite athletes in sailing. This point is epitomized in the following observation note:
While having a cup of coffee, a group of Danish sailors discussed the dismissal of the American coach of a Danish elite boat (not a 49er). The sailors agreed that, with him as a coach, the crew had performed well. But rather than teaching the crew to analyze and make decisions, the coach had told them exactly how to do things. An elite sailor from a different type of boat commented: “that way they will never learn how to handle things for themselves and make their own decisions on the water. Weather conditions change all the time. A coach like that can push them a bit of the way, but he will never be able to take them all the way to the top.” The other athletes agreed.
(Henriksen et al., 2010, p. 219)
As shown by the preceding exemplar studies, research using the qualitative approach advances the domain intellectually. A final note on the qualitative approach concerns the critique that qualitative results lack statistical-probabilistic (or enumerative) generalizability. However, as Smith and Sparkes (in press) pointed out, it is inappropriate to expect statistical-probabilistic generalizability from results generated with a qualitative approach. Instead, researchers should attend to naturalistic generalizability (Stake, 1995) and transferable generalizability (Tracy, 2010). Naturalistic generalizability occurs when readers of the research resonate with the findings by reflecting on direct or vicarious life experiences, and transferable generalizability occurs whenever people in one setting consider adopting something (identified in the research) used by people in another setting.
In this chapter, we summarized six research methods in sport and exercise psychology against the backdrop of the domain’s methodological assumptions, history, and research themes. Each of the six methods helps researchers achieve a distinguishable research purpose, and the methods tend to complement one another. Before concluding the chapter, we would like to share a few thoughts.
First, it is worth noting that the chapter is both strengthened and limited by the method definition adopted in its opening sections. The definition highlights data collection and analysis procedures and therefore provides a clear platform for organizing the research methods. However, this framework limits the possibility of addressing research methods that are difficult to characterize by naming their data collection and analysis procedures. For instance, case study is itself an important research method. Yin (2014) defined it as follows: “A case study investigates a contemporary phenomenon within its real-life context; when the boundaries between the phenomenon and context are not clearly evident; and in which multiple sources of evidence are used” (p. 23). We therefore could not include case study as a separate method, because of its phenomenon-bounded nature and its readiness to incorporate the multiple methods already covered in the current chapter.
Second, we believe it is time for quantitative researchers in sport and exercise psychology to consider alternative statistics/inferences in their inquiries, such as the Bayesian approach. Although we acknowledge the current dearth of training opportunities in Bayesian statistics within sport and exercise psychology, there are many good resources, such as books (Almond, Mislevy, Steinberg, Yan, & Williamson, 2015; Gelman et al., 2014; Gelman & Hill, 2007) and fast-developing free open-source software (Wagenmakers et al., 2018). A recent publication in the Journal of Sports Sciences offers a good example of applying Bayesian statistics/inferences to the study of baseball players’ sensorimotor skills (see Klemish et al., 2018).
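To make the Bayesian approach concrete, consider the following toy sketch (our own illustration with invented numbers, not drawn from Klemish et al., 2018): an athlete’s free-throw success probability is estimated with a conjugate beta-binomial model, in which a prior belief is updated to a closed-form posterior after observing data.

```python
from math import sqrt

# Hypothetical data: an athlete makes 36 of 50 free throws (illustrative only).
successes, attempts = 36, 50

# Weakly informative Beta(2, 2) prior on the success probability.
prior_a, prior_b = 2.0, 2.0

# Conjugate update: the posterior is Beta(a + successes, b + failures).
post_a = prior_a + successes
post_b = prior_b + (attempts - successes)

# Posterior mean and standard deviation of a Beta(a, b) distribution.
post_mean = post_a / (post_a + post_b)
post_sd = sqrt(post_a * post_b /
               ((post_a + post_b) ** 2 * (post_a + post_b + 1)))

print(f"Posterior: Beta({post_a:.0f}, {post_b:.0f})")   # Beta(38, 16)
print(f"Mean = {post_mean:.3f}, SD = {post_sd:.3f}")    # Mean = 0.704, SD = 0.062
```

Unlike a frequentist point estimate, the posterior distribution directly expresses uncertainty about the parameter, and the prior makes the analyst’s assumptions explicit; richer models of this kind are what software such as JASP automates (Wagenmakers et al., 2018).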
Lastly, as authors of a methods article, we wish to acknowledge the significance of psychophysiological measurements and neuro-stimulation techniques for sport and exercise psychology. Measurements such as electroencephalography (EEG)/event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI), and techniques such as transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS), are worth considering for developing a future unified picture of human functioning in sport and exercise settings. Although a review of these approaches is beyond the scope of this chapter, interested readers are highly encouraged to explore relevant resources (e.g., Amaro & Barker, 2006; Boniface & Ziemann, 2009; Luck, 2014; Ullsperger & Debener, 2010).
We would like to thank Dr. Amanda L. Rebar and an anonymous reviewer for their helpful comments and suggestions.
Abernethy, B., Thomas, K. T., & Thomas, J. T. (1993). Strategies for improving understanding of motor expertise (or mistakes we have made and things we have learned!!). Advances in Psychology, 102, 317–356.
Almond, R. G., Mislevy, R. J., Steinberg, L. S., Yan, D., & Williamson, D. M. (2015). Bayesian networks in educational assessment. New York, NY: Springer.
Amaro, E., & Barker, G. J. (2006). Study design in fMRI: Basic principles. Brain and Cognition, 60(3), 220–232.
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Atkinson, P. (2017). Thinking ethnographically. London, UK: SAGE.
Baddeley, A. D. (1986). Working memory. New York, NY: Oxford University Press.
Barcza-Renner, K., Eklund, R. C., Morin, A. J., & Habeeb, C. M. (2016). Controlling coaching behaviors and athlete burnout: Investigating the mediating roles of perfectionism and motivation. Journal of Sport and Exercise Psychology, 38(1), 30–44.
Becker, B. J., & Ahn, S. (2012). Synthesizing measurement outcomes through meta-analysis. In G. Tenenbaum, R. C. Eklund, & A. Kamata (Eds.), Measurement in sport and exercise psychology (pp. 153–168). Champaign, IL: Human Kinetics.
Bentler, P. (1986). Structural modeling and Psychometrika: An historical perspective on growth and achievements. Psychometrika, 51, 35–51.
Biddle, S. J., Markland, D., Gilbourne, D., Chatzisarantis, N. L., & Sparkes, A. C. (2001). Research methods in sport and exercise psychology: Quantitative and qualitative issues. Journal of Sports Sciences, 19(10), 777–809.
Blodgett, A. T., Schinke, R. J., McGannon, K. R., & Fisher, L. A. (2015). Cultural sport psychology research: Conceptions, evolutions and forecasts. International Review of Sport and Exercise Psychology, 8, 24–43.
Boniface, S., & Ziemann, U. (2009). Plasticity in the human nervous system: Investigations with transcranial magnetic stimulation. Cambridge, UK: Cambridge University Press.
Borenstein, M., Hedges, L. V., Higgins, J., & Rothstein, H. R. (2009). Introduction to meta-analysis. Chichester, UK: Wiley.
Braun, V., Clarke, V., & Weate, P. (2016). Using thematic analysis in sport and exercise research. In B. Smith & A. C. Sparkes (Eds.), Routledge handbook of qualitative research in sport and exercise (pp. 191–205). London, UK: Routledge.
Borg, G. A. V. (1973). Perceived exertion: A note on “history” and methods. Medicine and Science in Sports, 5, 90–93.
Brown, J. D. (1998). The self. New York, NY: McGraw-Hill.
Campbell, D. T. (1988). Methodology and epistemology for social science: Selected papers. Chicago, IL: University of Chicago Press.
Carron, A. V., Bray, S. R., & Eys, M. A. (2002). Team cohesion and team success in sport. Journal of Sports Sciences, 20(2), 119–126.
Chen, M. J., Fan, X., & Moe, S. T. (2002). Criterion-related validity of the Borg ratings of perceived exertion scale in healthy individuals: A meta-analysis. Journal of Sports Sciences, 20(11), 873–899.
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Routledge.
Colcombe, S., & Kramer, A. F. (2003). Fitness effects on the cognitive function of older adults: A meta-analytic study. Psychological Science, 14, 125–130.
Cooper, H. M. (2009). Research synthesis and meta-analysis: A step-by-step approach (4th ed.). Thousand Oaks, CA: SAGE.
Cooper, H. M. (2015). Research synthesis and meta-analysis: A step-by-step approach (5th ed.). Thousand Oaks, CA: SAGE.
Cresswell, S. L., & Eklund, R. C. (2005). Motivation and burnout among top amateur rugby players. Medicine and Science in Sports and Exercise, 37, 469–477.
Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York, NY: Harper & Row.
Culver, D. M., Gilbert, W., & Sparkes, A. (2012). Qualitative research in sport psychology journals: The next decade 2000–2009 and beyond. The Sport Psychologist, 26(2), 261–281.
Davis, C. L., Tomporowski, P. D., McDowell, J. E., Austin, B. P., Miller, P. H., Yanasak, N. E., & Naglieri, J. A. (2011). Exercise improves executive function and achievement and alters brain activation in overweight children: A randomized, controlled trial. Health Psychology, 30, 91.
Denzin, N., & Lincoln, Y. (2011). Introduction: The discipline and practice of qualitative research. In N. Denzin & Y. Lincoln (Eds.), Handbook of qualitative research (pp. 1–19). London, UK: SAGE.
Edmonds, W. A., Johnson, M. B., Tenenbaum, G., & Kamata, A. (2012). Idiosyncratic measures in sport. In G. Tenenbaum, R. C. Eklund, & A. Kamata (Eds.), Measurement in sport and exercise psychology (pp. 81–90). Champaign, IL: Human Kinetics.
Ekkekakis, P. (2003). Pleasure and displeasure from the body: Perspectives from exercise. Cognition & Emotion, 17(2), 213–239.
Ekkekakis, P., Hall, E. E., & Petruzzello, S. J. (2005). Variation and homogeneity in affective responses to physical activity of varying intensities: An alternative perspective on dose–response based on evolutionary considerations. Journal of Sports Sciences, 23(5), 477–500.
Eklund, R. C., & Cresswell, S. L. (2007). Athlete burnout. In G. Tenenbaum & R. C. Eklund (Eds.), Handbook of sport psychology (3rd ed., pp. 621–641). Hoboken, NJ: John Wiley.
Filho, E. S., Moraes, L. C., & Tenenbaum, G. (2008). Affective and physiological states during archery competitions: Adopting and enhancing the probabilistic methodology of individual affect-related performance zones (IAPZs). Journal of Applied Sport Psychology, 20(4), 441–456.
Flake, J. K., Pek, J., & Hehman, E. (2017). Construct validation in social and personality research: Current practice and recommendations. Social Psychological and Personality Science, 8(4), 370–378.
Fuchs, A. H. (1998). Psychology and ‘the babe.’ Journal of the History of the Behavioral Sciences, 34(2), 153–165.
Fuchs, A. H. (2009). Psychology and baseball: The testing of Babe Ruth. In C. D. Green & L. T. Benjamin, Jr. (Eds.), Psychology gets in the game: Sport, mind, and behavior, 1880–1960 (pp. 144–167). Lincoln, NE: University of Nebraska Press.
Fullerton, H. S. (1921). Why Babe Ruth is greatest home-run hitter. Popular Science Monthly, 99(4), 19–21.
Gardner, B., Abraham, C., Lally, P., & de Bruijn, G. J. (2012). Towards parsimony in habit measurement: Testing the convergent and predictive validity of an automaticity subscale of the Self-Report Habit Index. International Journal of Behavioral Nutrition and Physical Activity, 9, 102.
Gelman, A., & Hill, J. (2007). Data analysis using regression and multilevel/hierarchical models. Cambridge, UK: Cambridge University Press.
Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be a problem, even when there is no ‘Fishing Expedition’ or ‘p-Hacking’ and the research hypothesis was posited ahead of time. Technical Report, Department of Statistics, Columbia University.
Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., & Rubin, D. B. (2014). Bayesian data analysis. Boca Raton, FL: CRC Press.
Glass, G. V. (1976). Primary, secondary, and meta-analysis of research. Educational Researcher, 5(10), 3–8.
Good, I. (1976). The Bayesian influence, or how to sweep subjectivism under the carpet. In C. A. Hooker & W. Harper (Eds.), Foundations of probability theory, statistical inference, and statistical theories of science (Vol. 2, pp. 125–174). Dordrecht, The Netherlands: D. Reidel.
Guttman, L. (1976). What is not what in statistics. The Statistician, 26, 81–107.
Hanin, Y. L. (2000). Emotions in sport. Champaign, IL: Human Kinetics.
Harrison, C. (1980). Readability in the classroom. Cambridge, UK: Cambridge Educational.
Henriksen, K., Stambulova, N., & Roessler, K. K. (2010). Holistic approach to athletic talent development environments: A successful sailing milieu. Psychology of Sport and Exercise, 11(3), 212–222.
Hodge, K., & Sharp, L.-A. (2016). Case studies. In B. Smith & A. C. Sparkes (Eds.), Routledge handbook of qualitative research in sport and exercise (pp. 62–74). London, UK: Routledge.
Hohmann, T., Obelöer, H., Schlapkohl, N., & Raab, M. (2016). Does training with 3D videos improve decision-making in team invasion sports? Journal of Sports Sciences, 34(8), 746–755.
Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park, CA: SAGE.
Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park, CA: SAGE.
Jackson, S. A., & Marsh, H. W. (1996). Development and validation of a scale to measure optimal experience: The Flow State Scale. Journal of Sport & Exercise Psychology, 18(1), 17–35.
Johnson, M. B., Edmonds, W. A., Kamata, A., & Tenenbaum, G. (2009). Determining individual affect-related performance zones (IAPZs): A tutorial. Journal of Clinical Sport Psychology, 3(1), 34–57.
Kahneman, D., Fredrickson, B. L., Schreiber, C. A., & Redelmeier, D. A. (1993). When more pain is preferred to less: Adding a better end. Psychological Science, 4(6), 401–405.
Kamata, A., Tenenbaum, G., & Hanin, Y. L. (2002). Individual zone of optimal functioning (IZOF): A probabilistic estimation. Journal of Sport and Exercise Psychology, 24(2), 189–208.
Klemish, D., Ramger, B., Vittetoe, K., Reiter, J. P., Tokdar, S. T., & Appelbaum, L. G. (2018). Visual abilities distinguish pitchers from hitters in professional baseball. Journal of Sports Sciences, 36(2), 171–179.
Kline, R. (2011). Principles and practice of structural equation modeling (3rd ed.). New York, NY: Guilford.
Kline, R. B. (2015). Principles and practice of structural equation modeling. New York, NY: Guilford Press.
Kornspan, A. S. (2012). History of sport and performance psychology. In S. Murphy (Ed.), The Oxford handbook of sport and performance psychology (pp. 3–23). Oxford, UK: Oxford University Press.
Kroll, W., & Lewis, G. (1970). America’s first sport psychologist. Quest, 13(1), 1–4.
Krumlinde-Sundholm, L., Holmefur, M., Kottorp, A., & Eliasson, A. C. (2007). The Assisting Hand Assessment: Current evidence of validity, reliability, and responsiveness to change. Developmental Medicine & Child Neurology, 49(4), 259–264.
Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago, IL: University of Chicago Press.
Lebeau, J. C., Liu, S., Sáenz-Moncaleano, C., Sanduvete-Chaves, S., Chacón-Moscoso, S., Becker, B. J., . . . Tenenbaum, G. (2016). Quiet eye and performance in sport: A meta-analysis. Journal of Sport and Exercise Psychology, 38(5), 441–457.
Liu, S., Lebeau, J.-C., & Tenenbaum, G. (2016). Does exercise improve cognitive performance? A conservative message from Lord’s paradox. Frontiers in Psychology, 7, 1092.
Liu, S., Ritchie, J., Sáenz-Moncaleano, C., Ward, S. K., Paulsen, C., Klein, T., . . . Tenenbaum, G. (2017). 3D technology of Sony Bloggie has no advantage in decision-making of tennis serve direction: A randomized placebo-controlled study. European Journal of Sport Science, 17(5), 603–610.
Logan, B. (2015). Immanuel Kant’s Prolegomena to any future metaphysics in focus. New York, NY: Routledge.
Lord, F. M. (1967). A paradox in the interpretation of group comparisons. Psychological Bulletin, 68(5), 304–305.
Luck, S. J. (2014). An introduction to the event-related potential technique. Cambridge, MA: MIT Press.
MacCallum, R. C., Wegener, D. T., Uchino, B. N., & Fabrigar, L. R. (1993). The problem of equivalent models in applications of covariance structure analysis. Psychological Bulletin, 114, 185–199.
Machlup, F. (1978). What is meant by methodology: A selective survey of the literature. In F. Machlup (Ed.), Methodology of economics and other social science (pp. 1–15). New York, NY: Academic Press.
Mahoney, M. J. (1976). Scientist as subject: The psychological imperative. Cambridge, MA: Ballinger.
Mann, D. T., Williams, A. M., Ward, P., & Janelle, C. M. (2007). Perceptual-cognitive expertise in sport: A meta-analysis. Journal of Sport and Exercise Psychology, 29(4), 457–478.
Martens, R. (1987). Science, knowledge, and sport psychology. The Sport Psychologist, 1(1), 29–55.
Maslach, C., Schaufeli, W. B., & Leiter, M. P. (2001). Job burnout. Annual Review of Psychology, 52, 397–422.
Maslow, A. H. (1968). Toward a psychology of being (2nd ed.). New York, NY: Van Nostrand Reinhold.
McDonald, R. P. (1999). Test theory: A unified treatment. Mahwah, NJ: Lawrence Erlbaum.
McIntosh, C. I. (in press). Doing SEM Bayesian-style: New opportunities for sport and exercise psychology. In G. Tenenbaum & R. C. Eklund (Eds.), Handbook of sport psychology. Hoboken, NJ: John Wiley.
Miller, G. A., & Chapman, J. P. (2001). Misunderstanding analysis of covariance. Journal of Abnormal Psychology, 110(1), 40–48.
Mir, R., & Watson, A. (2000). Strategic management and the philosophy of science: The case for a constructivist methodology. Strategic Management Journal, 21, 941–953.
Miyake, A., Friedman, N. P., Emerson, M. J., Witzki, A. H., Howerter, A., & Wager, T. D. (2000). The unity and diversity of executive functions and their contributions to complex ‘frontal lobe’ tasks: A latent variable analysis. Cognitive Psychology, 41(1), 49–100.
Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & PRISMA Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(7), e1000097.
Molden, D. C., & Dweck, C. S. (2006). Finding ‘meaning’ in psychology: A lay theories approach to self-regulation, social perception, and social development. The American Psychologist, 61, 192–203.
Morin, A. J., Myers, N. D., & Lee, S. (in press). Modern factor analytic techniques: Bifactor models, exploratory structural equation modeling (ESEM) and Bifactor—ESEM. In G. Tenenbaum & R. C. Eklund (Eds.), Handbook of sport psychology. Hoboken, NJ: John Wiley.
Morris, L. W., & Liebert, R. M. (1973). Effects of negative feedback, threat of shock, and level of trait anxiety on the arousal of two components of anxiety. Journal of Consulting Psychology, 20, 321–326.
Noble, B. J., & Robertson, R. J. (1996). Perceived exertion. Champaign, IL: Human Kinetics.
Norman, D. A., & Shallice, T. (1986). Attention to action: Willed and automatic control of behavior. In R. J. Davidson, G. E. Schwartz, & D. Shapiro (Eds.), Consciousness and self-regulation: Advances in research and theory (Vol. 4, pp. 1–18). New York, NY: Plenum.
Papathomas, A. (2016). Narrative inquiry: From cardinal to marginal and back? In B. Smith & A. C. Sparkes (Eds.), Routledge handbook of qualitative research in sport and exercise (pp. 37–48). London, UK: Routledge.
Patton, M. Q. (2002). Qualitative research and evaluation methods. Thousand Oaks, CA: SAGE.
Phillips, L. H. (1997). Do ‘frontal tests’ measure executive function? Issues of assessment and evidence from fluency tests. In P. Rabbitt (Ed.), Methodology of frontal and executive function (pp. 191–213). Hove, UK: Psychology Press.
Phoenix, C., & Orr, N. (2014). Pleasure: A forgotten dimension of physical activity in older age. Social Science & Medicine, 115, 94–102.
Prasad, P. (1997). Systems of meaning: Ethnography as a methodology for the study of information technologies. In A. Lee, J. Liebenau, & J. Degross (Eds.), Information systems and qualitative research (pp. 101–108). Boston, MA: Springer.
Raedeke, T. D., & Smith, A. L. (2001). Development and preliminary validation of an athlete burnout measure. Journal of Sport & Exercise Psychology, 23, 281–306.
Read, J. C., & Bohr, I. (2014). User experience while viewing stereoscopic 3D television. Ergonomics, 57, 1140–1153.
Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66(5), 688–701.
Russell, J. A., Weiss, A., & Mendelsohn, G. A. (1989). Affect grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57, 493–502.
Ryba, T. V., & Schinke, R. J. (2009). Methodology as a ritualized eurocentrism: Introduction to the special issue. International Journal of Sport and Exercise Psychology, 7, 263–274.
Schinke, R. J., & Blodgett, A. T. (2016). Embarking on community based participatory action research: A methodology that emerges from (and in) communities. In B. Smith & A. C. Sparkes (Eds.), Routledge handbook of qualitative research in sport and exercise (pp. 88–99). London, UK: Routledge.
Schulz, K. F. (1996). Randomised trials, human nature, and reporting guidelines. The Lancet, 348(9027), 596–598.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
Shiffman, S., Hufford, M., Hickcox, M., Paty, J. A., Gnys, M., & Kassel, J. D. (1997). Remember that? A comparison of real-time versus retrospective recall of smoking lapses. Journal of Consulting and Clinical Psychology, 65(2), 292–300.
Smith, B., & Sparkes, A. C. (2016a). Introduction: An invitation to qualitative research. In B. Smith & A. C. Sparkes (Eds.), Routledge handbook of qualitative research in sport and exercise (pp. 1–7). New York, NY: Routledge.
Smith, B., & Sparkes, A. (2016b). Routledge handbook of qualitative research in sport and exercise. London, UK: Routledge.
Smith, B., & Sparkes, A. C. (in press). Qualitative research. In G. Tenenbaum & R. C. Eklund (Eds.), Handbook of sport psychology. Hoboken, NJ: John Wiley.
Smith, J. (2016). Interpretative phenomenological analysis in sport and exercise: Getting at experience. In B. Smith & A. C. Sparkes (Eds.), Routledge handbook of qualitative research in sport and exercise (pp. 219–229). London, UK: Routledge.
Smith, R. E., Smoll, F. L., & Barnett, N. P. (1995). Reduction of children’s sport performance anxiety through social support and stress-reduction training for coaches. Journal of Applied Developmental Psychology, 16, 125–142.
Smith, R. E., Smoll, F. L., Cumming, S. P., & Grossbard, J. R. (2006). Measurement of multidimensional sport performance anxiety in children and adults: The Sport Anxiety Scale-2. Journal of Sport and Exercise Psychology, 28(4), 479–501.
Smith, R. E., Smoll, F. L., & Schutz, R. W. (1990). Measurement and correlates of sport-specific cognitive and somatic trait anxiety: The Sport Anxiety Scale. Anxiety Research, 2, 263–280.
Stake, R. E. (1995). The art of case study research. London, UK: SAGE.
Tenenbaum, G. (2003). Expert athletes: An integrated approach to decision making. In J. L. Starkes & K. A. Ericsson (Eds.), Expert performance in sports: Advances in research on sport expertise (pp. 191–218). Champaign, IL: Human Kinetics.
Tenenbaum, G., Edmonds, W. A., & Eccles, D. W. (2008). Emotions, coping strategies, and performance: A conceptual framework for defining affect-related performance zones. Military Psychology, 20(S1), S11–S37.
Tenenbaum, G., & Eklund, R. C. (in press). Handbook of sport psychology (4th ed.). Hoboken, NJ: John Wiley.
Tenenbaum, G., Eklund, R. C., & Kamata, A. (2012). Measurement in sport and exercise psychology. Champaign, IL: Human Kinetics.
Tooley, M. (1987). Causation: A realist approach. New York, NY: Oxford University Press.
Tracy, S. J. (2010). Qualitative quality: Eight ‘big tent’ criteria for excellent qualitative research. Qualitative Inquiry, 16, 837–851.
Triplett, N. (1898). The dynamogenic factors in pacemaking and competition. American Journal of Psychology, 9(4), 507–533.
Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley.
Ullsperger, M., & Debener, S. (2010). Simultaneous EEG and fMRI: Recording, analysis, and application. Oxford, UK: Oxford University Press.
Vaughn, B. K., & Daniel, S. R. (2012). Conceptualizing validity. In G. Tenenbaum, R. C. Eklund, & A. Kamata (Eds.), Measurement in sport and exercise psychology (pp. 33–39). Champaign, IL: Human Kinetics.
Verplanken, B., & Orbell, S. (2003). Reflections on past behavior: A self-report index of habit strength. Journal of Applied Social Psychology, 33(6), 1313–1330.
Vickers, A. J., & Altman, D. G. (2001). Analysing controlled trials with baseline and follow up measurements. BMJ, 323(7321), 1123–1124.
Vickers, J. N. (1996). Visual control when aiming at a far target. Journal of Experimental Psychology: Human Perception and Performance, 22(2), 342–354.
Vickers, J. N. (2007). Perception, cognition, and decision training: The quiet eye in action. Champaign, IL: Human Kinetics.
Wagenmakers, E. J., Love, J., Marsman, M., Jamil, T., Ly, A., Verhagen, J., . . . Meerhoff, F. (2018). Bayesian inference for psychology. Part II: Example applications with JASP. Psychonomic Bulletin & Review, 25(1), 58–76.
Weinberg, R. S., & Gould, D. (2011). Foundations of sport and exercise psychology (5th ed.). Champaign, IL: Human Kinetics.
Williams, A. M., Davids, K., Burwitz, L., & Williams, J. G. (1992). Perception and action in sport. Journal of Human Movement Studies, 22, 147–204.
Williams, A. M., & Ward, P. (2003). Perceptual expertise: Development in sport. In J. L. Starkes & K. A. Ericsson (Eds.), Expert performance in sports: Advances in research on sport expertise (pp. 219–249). Champaign, IL: Human Kinetics.
Williams, D. M. (2008). Exercise, affect, and adherence: An integrated model and a case for self-paced exercise. Journal of Sport and Exercise Psychology, 30(5), 471–496.
Williams, D. M., Dunsiger, S., Ciccolo, J. T., Lewis, B. A., Albrecht, A. E., & Marcus, B. H. (2008). Acute affective response to a moderate-intensity exercise stimulus predicts physical activity participation 6 and 12 months later. Psychology of Sport and Exercise, 9(3), 231–245.
Williams, J. M. (2010). Applied sport psychology: Personal growth to peak performance. New York, NY: McGraw-Hill.
Yang, S. N., Schlieski, T., Selmins, B., Cooper, S. C., Doherty, R. A., Corriveau, P. J., & Sheedy, J. E. (2012). Stereoscopic viewing and reported perceived immersion and symptoms. Optometry and Vision Science, 89, 1068–1080.
Yerkes, R. M., & Dodson, J. D. (1908). The relation of strength of stimulus to rapidity of habit-formation. Journal of Comparative Neurology, 18(5), 459–482.
Yin, R. K. (2014). Case study research: Design and methods (5th ed.). Thousand Oaks, CA: SAGE.
Zhu, W. (2012). Measurement practice in sport and exercise psychology. In G. Tenenbaum, R. C. Eklund, & A. Kamata (Eds.), Measurement in sport and exercise psychology (pp. 9–21). Champaign, IL: Human Kinetics.
(1.) Classical test theory, also named classical true-score theory, was developed to estimate the measurement precision of a test score. In the theory, a test score is assumed to include a true score and an error, and mathematical solutions are derived to solve the problem generically known as test reliability (McDonald, 1999).
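The classical model in footnote 1 can be made concrete with a small simulation (entirely synthetic numbers of our own construction, not drawn from any cited study): observed scores are generated as true scores plus independent error, and reliability is recovered as the ratio of true-score variance to observed-score variance.

```python
import random

random.seed(7)

# Simulate 10,000 examinees under the classical model X = T + E,
# with true scores T ~ N(50, 10^2) and independent errors E ~ N(0, 5^2).
true_sd, error_sd = 10.0, 5.0
true_scores = [random.gauss(50, true_sd) for _ in range(10_000)]
observed = [t + random.gauss(0, error_sd) for t in true_scores]

def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Theoretical reliability: var(T) / (var(T) + var(E)) = 100 / 125 = 0.80.
theoretical = true_sd**2 / (true_sd**2 + error_sd**2)

# Empirical estimate from the simulated sample; approaches 0.80 as n grows.
empirical = variance(true_scores) / variance(observed)
print(f"theoretical = {theoretical:.2f}, empirical ≈ {empirical:.2f}")
```

The simulation illustrates why a noisier test (larger error variance) yields lower reliability: the true-score variance is unchanged, but it makes up a smaller share of the observed-score variance.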