
Experience Sampling in Lifespan Developmental Methodology

Summary and Keywords

Experience-sampling methodology (ESM) captures everyday events and experiences during, or shortly after, their natural occurrence in people’s daily lives. It is typically implemented with mobile devices that participants carry with them as they pursue their everyday routines, and that signal participants multiple times a day throughout several days or weeks to report on their momentary experiences and situation. ESM provides insights into short-term within-person variations and daily-life contexts of experiences, which are essential aspects of human functioning and development. ESM also can ameliorate some of the challenges in lifespan-developmental methodology, in particular those imposed by age-comparative designs. Compared to retrospective or global self-reports, for example, ESM can reduce potential non-equivalence of measures caused by age differences in the susceptibility to retrospective memory biases. Furthermore, ESM maximizes ecological validity compared to studies conducted in artificial laboratory contexts, which is a key concern when different age groups may differentially respond to unfamiliar situations. Despite these strengths, ESM also bears significant challenges related to potential sample selectivity and selective sample attrition, participants’ compliance and diligence, measurement reactivity, and missing responses. In age-comparative research, these challenges may be aggravated if their prevalence varies depending on participants’ age. Applications of ESM in lifespan methodology therefore require carefully addressing each of these challenges when planning, conducting, and analyzing a study, and this article provides practical guidelines for doing so. When adequately applied, experience sampling is a powerful tool in lifespan-developmental methodology, particularly when implemented in long-term longitudinal and cross-sequential designs.

Keywords: experience sampling, ambulatory assessment, age-group comparisons, mobile phones, development, ecological validity, selectivity, measurement reactivity, practical guideline

Introduction

Experience sampling refers to the capturing of experiences—such as events, behaviors, feelings, or thoughts—as they naturally occur in everyday life. Three distinctive characteristics set experience sampling apart from other research approaches: First, of interest are experiences as they spontaneously occur in participants’ real-life contexts (as opposed to, for example, experiences that researchers induce in artificial laboratory contexts). Second, these experiences are captured in the moment of their occurrence or shortly thereafter (as opposed to, for example, retrospective reconstructions of past experiences in questionnaires, interviews, or end-of-day diaries). Third, experiences are sampled multiple times within short time spans (as opposed to, for example, single-time assessments or longitudinal assessments with long measurement intervals). Such measurement burst (or micro-longitudinal) designs typically involve several daily assessments across a few days or weeks, yielding a series of snapshots of participants’ everyday experiences within a circumscribed period.

The term “experience sampling” was initially coined by Mihaly Csikszentmihalyi and colleagues (Hektner, Schmidt, & Csikszentmihalyi, 2007) and is mostly used to refer to the acquisition of repeated self-reports of momentary experiences, which is the focus of the present article. Numerous technological solutions are available to implement experience sampling. Typically, participants are given mobile electronic devices, such as mobile phones or tablet computers, which they carry with them while they pursue their normal daily routines. These devices signal participants when to respond, display questions on the participants’ momentary situation and experiences, and record the participants’ responses.

To capture the multiple facets of naturally unfolding psychological processes, their covariates, and contexts, experience sampling can be combined with other ambulatory assessments. Examples include ambulatory monitoring of physiological parameters such as heart rate (Wrzus, Müller, Wagner, Lindenberger, & Riediger, 2013), skin conductance (Doberenz, Roth, Wollburg, Maslowski, & Kim, 2011), or hormone concentrations (Harden et al., 2016). Other possibilities include the ambulatory assessment of everyday physical activities (Ebner-Priemer, Koudela, Mutz, & Kanning, 2013), interpersonal behaviors (Schmid Mast, Gatica-Perez, Frauendorfer, Nguyen, & Choudhury, 2015), or performance in cognitive tasks completed in everyday contexts (Riediger et al., 2014). Experience sampling can also be combined with the recording of ambient environmental parameters such as sound (Mehl, 2017) or geographical location (e.g., geo-tracking; see Epstein et al., 2014). This article will focus particularly on the sampling of self-reported momentary experiences in participants’ daily lives.

Experience sampling has been successfully applied with participants from diverse age groups, including elementary-school children (Leonhardt, Könen, Dirk, & Schmiedek, 2016), adolescents (Klipker, Wrzus, Rauers, & Riediger, 2017), and adults of different age groups including very old adults (Riediger & Freund, 2008), and it has also been used in samples with a wide age range of participants (Riediger, Schmiedek, Wagner, & Lindenberger, 2009). It represents an important tool in lifespan developmental methodology because it offers compelling strengths, both from a conceptual and from a methodological perspective. Experience sampling, however, also entails methodological challenges, some of which may be exacerbated in age-comparative research. Careful consideration of both its benefits and challenges is therefore necessary when planning, conducting, and analyzing experience-sampling studies. The purpose of this article is to provide an overview of the state of the art of experience sampling in lifespan developmental methodology. Following a discussion of the conceptual and methodological strengths of experience sampling, the challenges that this method entails for lifespan researchers are elaborated. After that, the pragmatics of experience-sampling studies in lifespan psychology are discussed. The authors outline how the benefits of the method can be exploited and its potential pitfalls can be addressed and minimized.

Strengths of Experience Sampling as a Research Tool in Lifespan Developmental Methodology

A compelling conceptual benefit of experience sampling results from its micro-longitudinal design (i.e., the frequent repetition of measurements with brief time intervals between them). In contrast to cross-sectional age-group comparisons or macro-longitudinal studies with long measurement intervals between assessments, this makes short-term processes and dynamics within individuals accessible to scientific investigation. Awareness is increasing that considering such within-person dynamics provides insights beyond understanding between-person differences (Molenaar, 2004) and that within-person dynamics constitute an essential aspect of human functioning and development (Li, Huxhold, & Schmiedek, 2004; Nesselroade & Salthouse, 2004; Nesselroade & Molenaar, 2010).

For example, adolescents report comparatively more varying affective experiences than children or adults (Riediger & Klipker, 2014), whereas emotional experiences reported by older adults tend to fluctuate less than those reported by younger adults (Brose, De Roover, Ceulemans, & Kuppens, 2015; Röcke & Brose, 2013). Such within-person variations in affective experiences have been linked to processes in other domains of life, such as endocrine functioning (Klipker, Wrzus, Rauers, Boker, & Riediger, 2017), physiological activation (Riediger et al., 2014), cognitive performance (Brose, Schmiedek, Koval, & Kuppens, 2015; Riediger, Wrzus, Schmiedek, Wagner, & Lindenberger, 2011), and depressive symptoms (Brose et al., 2015). Insights like these underscore the importance of considering short-term within-person processes in lifespan developmental research.

To fully exploit the potential of experience-sampling methodology in future lifespan developmental research, it would be desirable to combine micro- with macro-longitudinal designs (i.e., to conduct multiple experience-sampling measurement bursts with the same sample throughout the years), thus obtaining information about change and variability on different time scales. Such long-term longitudinal studies, repeating experience-sampling phases within the same sample over longer periods of time, are only just beginning to emerge (Carstensen et al., 2011; Riediger, 2018). An even more powerful design would additionally include new cohorts of participants within the same initial age range at each longitudinal wave of assessment. Such cross-sequential designs (Schaie, 1994) provide unique opportunities for disentangling effects that are due to within-person change over time from those that are due to historical influences or derive from participants’ reactivity to the repeated measurements. The authors are not aware of any current study employing experience sampling in a cohort-sequential design. Given the multiple benefits of such a design, however, it is to be hoped that future research will soon fill this void.

Another advantage of experience sampling is that experiences are assessed in close temporal proximity to their occurrence. This may minimize response biases when compared with retrospective and global self-reports (Riediger & Rauers, 2014). For example, retrospective measures require participants to report psychological experiences they went through earlier in time, such as how nervous they felt at a certain point in the past. Such retrospective reports are subject to memory loss and biases, imposing limits on how accurately people can retrospectively report specific experiences. Global measures (such as the question of how nervous one typically feels), in turn, require people to aggregate their typically varying experiences over time. These aggregations, too, are prone to biases. Consequently, global and retrospective self-reports may yield different findings than self-reports of the same experiences obtained in the moment of their occurrence (Miron-Shatz, Stone, & Kahneman, 2009).

Differential reliance on episodic versus semantic knowledge representations contributes to these differences (Robinson & Clore, 2002b). Episodic memory represents specific experiences one had in particular situations. Such episodic representations, however, are evanescent for many psychological processes: that is, they are only accessible for a limited time period after the experience has occurred (Robinson & Clore, 2002a). After that, participants have to recruit information from semantic memory, which comprises their subjective theories or beliefs about experiences and situations. Semantic representations integrate information from multiple influences, such as the person’s self-concept, internalized cultural norms and expectations, or motivational preferences. The immediacy of the measurement in experience sampling makes episodic representations accessible and helps to capture experiences as they were actually lived. Retrospective and global self-reports, in contrast, are more prone to be influenced by participants’ semantic memory. This is evident, for example, in findings showing that women tend to describe themselves as being more emotional than men in global self-reports (thus reflecting culturally prescribed gender role expectations in Western societies), whereas no such gender differences were evident in experience samples of actual momentary emotional experiences from the same participants (Barrett, Robin, Pietromonaco, & Eyssell, 1998).

Considering such methodological implications is particularly important in lifespan research with age-heterogeneous samples. Age-related differences in fluid-cognitive capacities and motivational preferences may make participants from different age groups differentially susceptible to biases in retrospective and global self-reports (Klumb & Baltes, 1999), limiting the equivalence of measures across age groups (Brose & Ebner-Priemer, 2015; Riediger & Rauers, 2014). Adult age-related decline in episodic memory, for instance, may predispose older adults more than younger individuals to recruit semantic rather than episodic memory when responding to retrospective self-reports. In addition, semantic representations may differ between age groups due to age-related differences in motivational predispositions, self-concepts, or age-graded cultural norms, which may yield age differences in retrospective or global reports of experiences even when there were no actual experiential differences at the moment of occurrence. There is, for example, evidence that when retrospectively reconstructing emotional experiences, older adults tend to overestimate their past positive affective experiences more than younger adults, and younger adults tend to overestimate their past negative experiences more than older adults (Ready, Weinberger, & Jones, 2007). For age-mixed samples, experience sampling can therefore reduce potential methodological artifacts that may either conceal existing age differences in experiences or produce spurious age effects. Of course, careful exploration of measurement equivalence across age groups is nevertheless an indispensable step in analyzing experience-sampling data from age-heterogeneous samples (Knight & Zerr, 2010).

Another important strength of experience sampling derives from the fact that it collects information on spontaneously occurring experiences within the natural context of the participants’ day-to-day lives. This maximizes the external validity of the assessment compared to assessments in artificial laboratory contexts (Schwarz, 2007). Contemporary electronic devices for experience-sampling assessments provide the additional methodological benefit of allowing close monitoring of participants’ adherence to the measurement scheme in the absence of experimenters. Again, the maximization of ecological validity may be particularly relevant from the perspective of lifespan methodology. There is evidence, for example, that age-related differences observed in well-controlled laboratory contexts are not always reflected in measurements obtained in everyday life situations (Riediger et al., 2014). Maximizing external validity, of course, comes at the cost of limiting control over factors that may influence the phenomenon under study. Study designs should therefore include careful assessment of potential covariates of the target phenomenon under study.

By assessing everyday life processes, experience sampling also provides unique opportunities to understand experiences and behaviors in their ecological context: that is, to provide insight into the role of individuals’ everyday surroundings for the target phenomena under study, such as individuals’ social partners, locations, or everyday hassles and uplifts. Some studies have further maximized this particular advantage by obtaining experience samples not only from single individuals but also from multiple individuals belonging to a social unit in a time-synchronized manner. In Rauers, Blanke, and Riediger (2013), for example, dyadic experience sampling with younger and older adult romantic partners brought unique insights into the role of context (here, the momentary presence or absence of one’s partner) for adult age differences in emotional competencies.

In short, experience sampling brings immense conceptual and methodological assets to lifespan research. These include the possibilities to gain insight into the short-term within-person dynamics of psychological experiences and into the role of daily-life contexts for the target phenomena under study. The immediacy of the measurement and the fact that it takes place in the participants’ natural environments represent further methodological advantages of experience sampling. Nonetheless, experience sampling also involves some challenges to lifespan researchers, as discussed in the next section.

Challenges of Experience Sampling in Lifespan Developmental Research

Like every method, experience sampling also carries significant challenges that need to be considered when planning and implementing a study (Scollon, Kim-Prieto, & Diener, 2003). Of these challenges, four stand out as particularly critical from the perspective of lifespan developmental methodology.

First, the burden for the participants—the required commitment of time and effort—is considerable (Cain, Depp, & Jeste, 2009). Participants are requested to interrupt their activities several times daily throughout several days or weeks to report about their momentary situation and experiences. The use of electronic assessment devices may represent an additional barrier to participation, especially for individuals with low confidence in handling the technical devices. Additionally, some individuals may be more likely than others to forget, misplace, or lose the device, which results in missing data. These demands imposed on participants can potentially impair the representativeness both of the initially recruited sample and of the sample that completes the entire study protocol (Klumb, Elfering, & Herre, 2009). In other words, certain types of individuals could be more or less likely to be represented in the sample from the beginning (initial sample selectivity), or to drop out during the study interval (selective sample attrition). Both types of selectivity are more likely for studies with high participant burden.

In age-heterogeneous samples, this limitation is aggravated if sample selectivity and attrition differ between age groups, or if they are differentially associated with variables of interest depending on participants’ age. For example, typical middle-aged adults with full-time jobs and underage children are difficult to recruit for experience-sampling studies, while middle-aged adults who do not have to juggle multiple responsibilities at work and at home are more willing to participate, even though they account for a smaller proportion of the population of middle-aged adults. Similarly, older adults of above-average health and mobility are more likely to participate in experience-sampling studies than older adults with poorer health and mobility. Furthermore, selectivity effects caused by the necessity to handle electronic assessment devices may be larger among older than younger individuals, as the latter are typically more accustomed to using mobile technologies. Motives to participate in experience-sampling studies may also vary between age groups. Reimbursement for study participation, for example, often seems a main reason for children, adolescents, and young adults to participate in experience-sampling studies, making more affluent individuals in these age groups less likely to participate. In middle-aged and older age groups, in contrast, intrinsic motives may be more likely, such as personal interest, the wish to contribute to science, or the social contact with the experimenters. Careful consideration of sample selectivity is therefore paramount in experience sampling (Klumb et al., 2009).

Potential age differences in the reasons for participating also relate to a second challenge of experience sampling in lifespan methodology—namely, that study participation occurs in the absence of experimenters and hence without obvious external control over participants’ compliance and diligence. In laboratory studies, the circumscribed test setting and the presence of an experimenter provide obvious external cues for reinforcing diligent adherence to study instructions. In experience-sampling studies, however, participants have to sustain task motivation throughout the entire study phase with limited help from such external motivators. Modern mobile technology allows researchers to monitor whether participants completed their scheduled assessments, and using such monitoring opportunities is highly advisable (Hoppmann & Riediger, 2009). Even if items are responded to in time, however, diligence in responding may vary, from carefully choosing the response option that best describes one’s momentary situation to arbitrarily checking off responses. From a lifespan perspective, this concern may intensify if participants from different age groups differ in their self-regulated (intrinsic) study motivation. Eliciting and maintaining participants’ motivation to carefully adhere to study instructions, and monitoring participants’ task diligence, are therefore essential for successful experience-sampling studies in lifespan psychology.

A third potential problem in using experience sampling involves reactivity effects. These can occur if a scientific method changes the phenomenon that it investigates (Barta, Tennen, & Litt, 2013; Scollon et al., 2003). Reactivity can be a problem in social and behavioral studies in general, but it may be aggravated in experience-sampling research. For example, repeatedly asking participants how tired they are may increase awareness of fatigue or may shift the meaning participants assign to the response scale. Furthermore, study participation could influence people’s behaviors (e.g., participants may discontinue a conversation to tend to a prompt to complete the assessment instrument, or even modify their daily routine during the experience-sampling phase). It is therefore crucial to emphasize during the initial instruction that participants should not deviate from their typical routines while taking part in the study. Additionally, developmental researchers should test for potential age-differential reactivity effects (see discussion of trend analyses in section “Addressing Challenges of Potential Measurement Reactivity”).

Another challenge of experience sampling concerns incomplete datasets due to missing responses (Black, Harel, & Matthews, 2013). It is not realistic to expect that all participants of a sample will be available for all scheduled experience-sampling occasions. There may be situations when participants cannot (or do not want to) complete scheduled assessments, so as not to interrupt their current activity (e.g., when driving or during a doctor’s appointment). Again, in age-comparative research, this challenge may become more pronounced if the likelihood and timing of missing responses systematically differ between age groups. Working adults, for example, may be less available for completing scheduled experience samples during particularly busy times of the day, leading to a smaller and more selective collection of experience samples than that collected for non-working younger adults or retired older participants. A similar concern can apply to children and adolescents when school regulations prevent them from responding to assessments during school hours. Researchers need to address the issue of incomplete datasets both when planning their study and when analyzing their data.

To summarize, despite its indisputable strengths, experience sampling also poses important challenges for lifespan researchers. The relatively high demands on, and burden for, participants may increase the likelihood of selective sample recruitment and selective sample attrition. Conducting the study in participants’ everyday life contexts may be associated with decreased participant compliance and diligence. Repeating assessment occasions multiple times within a relatively short period of time, finally, may change the phenomenon under study and also enhance the likelihood of incomplete datasets due to missing responses. Paying particular attention to these challenges is paramount for lifespan researchers when planning and analyzing experience-sampling studies.

Conducting Experience-Sampling Studies in Lifespan Developmental Research

The remainder of this article provides practical guidelines for employing experience sampling as a research tool in lifespan-developmental methodology. Criteria that indicate whether experience sampling is a suitable method for one’s research questions are discussed, and recommendations for addressing the four challenges of experience sampling introduced in the previous section are provided. The article then turns to the issues of planning the time windows and schedules for experience-sampling assessments, of determining the sample size and number of assessment occasions, and of choosing suitable experience-sampling technology for one’s project. It concludes with a brief outlook on what to consider when analyzing experience-sampling data.

Deciding if Experience Sampling Is a Suitable Approach for One’s Research Question

An important first step before planning an experience-sampling study is to decide whether experience sampling is indeed a suitable means for accomplishing one’s research goals. Generally speaking, experience sampling is a powerful tool for describing (a) the prevalence of experiences, behaviors, events, and contextual characteristics as they occur in daily life and in natural environments and (b) the naturally occurring variation and co-variation of these experiences, behaviors, events, and contextual characteristics over time (both within and between individuals). The latter includes the possibility to investigate within-person short-term fluctuations or changes in experiences and behaviors. Depending on the studied phenomenon, such fluctuations may either reflect people’s flexibility and adaptability, or indicate their instability and vulnerability.

Research questions that resonate with one or more of the themes outlined above lend themselves to experience-sampling methodology. In addition, the following checklist questions are helpful in deciding whether experience sampling should be the method of choice: (a) Is episodic (versus semantic) knowledge representation of primary interest? (b) Is the phenomenon of interest characterized by within-person short-term variability (versus stability)? (c) Is it accessible to participants’ introspection? (d) Is ecological validity (at the expense of strict experimental control) paramount? If the answers to these questions are affirmative, experience sampling is a fitting and persuasive tool to pursue one’s research goals. In contrast, experience sampling is less suitable if the primary goal of the study is to investigate questions of causality. Correlational associations between variables observed in experience-sampling data do not constitute causal evidence (Foster, 2010). Correlational patterns observed in experience-sampling data may, however, still be useful to inform hypotheses about causality to be tested in future studies using experimental manipulation.

In addition, researchers should consider whether experience sampling is feasible for the target population under study. Is it to be expected that individuals from that population will have the time to commit themselves to diligent study participation? Will they be able to comply with study instructions in their everyday lives, in the absence of direct control? Do they meet the cognitive, sensory, and motoric requirements of study participation (e.g., to understand the study procedure and instructions, to hear the signals requesting completion of the assessment instrument, or to handle the assessment device)? Experience sampling may still be applicable when doubts arise regarding one or more of these questions, but researchers should then take measures to account for these concerns when planning their study (e.g., by specifying inclusion/exclusion criteria for participation, providing help for participants throughout the study phase, or by choosing technical devices that match the participants’ sensory and motoric abilities).

Dealing With Challenges of Experience Sampling in Lifespan Research

Researchers should be aware of the methodological challenges of experience sampling in lifespan developmental methodology and adopt appropriate measures to address them. In this section, strategies are offered to address each of the aspects elaborated above in the section on challenges of experience sampling in lifespan developmental research.

Addressing the Challenges of Sample Selectivity and Selective Sample Attrition

Sample recruitment strategies should be chosen to minimize potential self-selection biases that would result in limited sample representativeness. If at all feasible, representative samples drawn from residents’ registries are preferable to convenience samples. Furthermore, in-depth analyses of potential sample selectivity and selective sample attrition should be planned from the beginning. Care should be taken to record relevant information already during recruitment (e.g., socio-demographic characteristics or other factors that might influence the target phenomenon under study). These data can then be used in selectivity analyses to investigate whether individuals who agreed to participate differed systematically from individuals who were contacted but declined study participation, and whether participants who dropped out during the study differed significantly from those who completed the entire study protocol (Black et al., 2013). Analyses should also address the question of whether sample selectivity varied depending on participants’ age or other characteristics that may be relevant when evaluating the generalizability of the findings.
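As an illustration, the following minimal sketch compares participants who completed the full protocol with those who dropped out on baseline characteristics recorded during recruitment. The file and column names (recruitment_records.csv, completed, age, education_years) are hypothetical placeholders, not part of any specific study.

```python
# Minimal selectivity-analysis sketch: do completers differ from drop-outs on
# baseline characteristics recorded at recruitment? All names are hypothetical.
import pandas as pd
from scipy import stats

recruits = pd.read_csv("recruitment_records.csv")  # one row per contacted person
completers = recruits[recruits["completed"] == 1]
dropouts = recruits[recruits["completed"] == 0]

for var in ["age", "education_years"]:
    t, p = stats.ttest_ind(completers[var], dropouts[var], nan_policy="omit")
    print(f"{var}: completers M = {completers[var].mean():.1f}, "
          f"drop-outs M = {dropouts[var].mean():.1f}, t = {t:.2f}, p = {p:.3f}")
```

Analogous comparisons can be run between individuals who agreed to participate and those who declined, and interactions with age can be tested to probe age-differential selectivity.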

Furthermore, researchers should consider measures to minimize participant drop-out during the study. Care should be taken to implement reasonable study characteristics (e.g., regarding the number of measurement occasions and the length of the individual assessments). Finding a careful balance between what is scientifically desirable and what can be expected of participants, without exceeding the level of commitment that they can keep up for the entire study period, is essential. In addition, establishing and maintaining personal contact with the participants during the study is helpful, both for minimizing sample attrition and for helping prevent additional problems arising from low participant compliance and diligence. In the authors’ experience, both types of problems can be minimized by enhancing participants’ motivation, as discussed in the next section.

Addressing Challenges of Participants’ Compliance and Diligence

To maximize participants’ compliance and task diligence, even in the absence of direct control instances, researchers should ask themselves what they can do to support and maintain participants’ study motivation. Particularly helpful are means that spark participants’ interest in, identification with, and commitment to the study, as well as their self-confidence in being able to complete the study and handle the assessment device. Especially important are carefully prepared instruction sessions that are tailored to the participants at hand. Instruction sessions should be conducted in small groups, the size of which should be adjusted to the complexity of the study protocol and participants’ individual needs, which may vary with age. For young adults, a maximum of four to six participants per group is advisable. For children and older adults, the authors recommend individual instruction sessions. Sufficient time should be scheduled for practicing handling the electronic assessment device and for making sure that the study procedure and task instructions are understood. Providing participants with short written summaries of the content covered during the instruction session can prove helpful.

Further means of fostering participants’ motivation include personal contact (ideally always with the same member[s] of the study team), easy and low-threshold ways for participants to contact the research staff in the event of problems or questions, and professionally designed, age-appropriate, and engaging study materials. Information about the scientific relevance of the study can also motivate participants, particularly adults. Care should be taken, however, not to inform participants about specific research questions or hypotheses before they have completed the study protocol, in order to avoid influencing their response behavior. Interested participants can be offered information about the study results or feedback on their own data after completion of the data collection, if this is feasible. Furthermore, adequate reimbursement to acknowledge participants’ efforts can be motivating. Reimbursement schemes can be scaled so that participants receive a bonus incentive if they complete a predefined number of measurement occasions.

Researchers should also implement measures that allow them to monitor participants’ compliance with the study instructions. The authors highly recommend use of experience-sampling technology that records the time of responding to the items of the experience-sampling instrument and that allows restricting the time window during which participants can complete a scheduled measurement occasion (e.g., up to 15 minutes after a prompt). Deviations from the measurement schedule, as well as missing and partially missing responses to experience-sampling measurement occasions, provide an indication of participants’ study compliance. Task diligence is more difficult to observe objectively than compliance with measurement schedules. Screening responses and response patterns for implausible or aberrant patterns is advisable to identify potential cases of careless responding. Response times that are considerably faster or slower than the average response time of that participant may be suspicious, as may implausible response patterns or stereotypic response behaviors that are atypical for that participant (e.g., if the response option that can be chosen fastest is invariantly selected for all items of a given assessment). Such unusual characteristics may serve as a basis for marking individuals or measurement occasions as suspicious. Importantly, however, these characteristics do not provide solid proof that the data are invalid, and suspicious data should therefore not simply be discarded. Instead, the authors recommend repeating the analyses with and without the individuals and measurement occasions in question. This way, researchers reduce the possibility that their results exclusively or partly rely on invalid data.
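One way such screening could be implemented is sketched below. The sketch assumes a long-format data file with hypothetical column names (participant_id, response_time_s, and item responses prefixed with item_) and flags occasions with atypical response speed or fully invariant responding for later control analyses.

```python
# Sketch of screening for potentially careless responding; file and column
# names are hypothetical, and the thresholds are illustrative choices.
import pandas as pd

esm = pd.read_csv("esm_long.csv")
item_cols = [c for c in esm.columns if c.startswith("item_")]

# Flag occasions whose total response time deviates strongly from that
# person's own average (here: more than 3 within-person standard deviations).
z = esm.groupby("participant_id")["response_time_s"].transform(
    lambda x: (x - x.mean()) / x.std(ddof=0)
)
esm["flag_speed"] = z.abs() > 3

# Flag occasions on which the same response option was chosen for every item.
esm["flag_invariant"] = esm[item_cols].nunique(axis=1) == 1

suspicious = esm[esm["flag_speed"] | esm["flag_invariant"]]
print(f"{len(suspicious)} of {len(esm)} occasions flagged for control analyses")
```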

Another important consideration regarding study compliance and diligence pertains to the question of whether to record participants’ responses in an anonymous or identifiable manner. Anonymous recording means that experimenters cannot trace responses back to the participant providing them. Advantages of anonymous recording of responses are that no extra measures to ensure adherence to data protection regulations are necessary and that self-presentational response biases due to social desirability are reduced. On the other hand, however, the non-accountability of anonymous data recording has the disadvantage of increasing the likelihood of non-compliant and careless responding (non-accountability effect; see Meade & Craig, 2012).

Identifiable recording of responses, in contrast, means that experimenters can link data to the respective participants. In line with data protection statutes, identifiable responses typically have to be recorded in a pseudonymous form: that is, an abstract unique identification code that can be linked to the participants’ identity is used to distinguish data records in the dataset. The key linking the identification code to participants’ identities needs to be password protected, and complex provisions have to be made to ensure that password access as well as data storage adhere to data protection regulation. Identifiable responding counteracts the non-accountability effect and supports compliance and diligence of participants. It may, however, encourage self-presentational tendencies (e.g., socially desirable responding; see Meade & Craig, 2012) more than anonymous data collection. The advantages of pseudonymous recording typically outweigh its disadvantages. In particular, and provided suitable assessment technology is used, it allows online monitoring of participants’ response compliance during the study interval and potential interventions in the case of irregularities. This is particularly helpful when working with children and adolescent participants but also with other individuals for whom sustaining self-regulated motivation to comply with study instructions throughout a long period of time may be challenging. Furthermore, pseudonymous data recording is imperative for linking participants’ individual datasets across repeated assessments in macro-longitudinal or cohort-sequential designs, which constitute particularly potent applications of experience sampling in lifespan developmental methodology.

Addressing Challenges of Potential Measurement Reactivity

An important means of dealing with potential measurement reactivity is identifying and statistically controlling for potential trends in participants’ responses: that is, systematic shifts in their response behavior throughout the study interval. Trends can be cumulative in a linear or nonlinear fashion (i.e., when the frequency or intensity of reported experiences changes throughout the study interval) and/or cyclic (e.g., when participants’ responses differ systematically depending on the time of day or on whether assessments were obtained on weekdays or weekends). Such time-related trends can be identified, for example, by comparing goodness-of-fit indices between series of multilevel regression models with and without time-in-study variable(s) as predictor(s) of momentary experience-sampling reports (for details, see Singer & Willet, 2003). If systematic time-related trends are observed, they can be statistically controlled for, either by de-trending data (Ruppert, Wand, & Carroll, 2003; for an exemplary practical implementation, see Brose, Schmiedek, Lövdén, & Lindenberger, 2011), or by including relevant time-in-study variable(s) as covariates (Singer & Willet, 2003; for an exemplary practical implementation, see Dirk & Schmiedek, 2016).
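As a minimal illustration of such a model comparison, the following sketch (using the Python statsmodels package, with hypothetical variable names such as momentary_fatigue and day_in_study) tests whether adding a linear time-in-study predictor improves the fit of a multilevel model.

```python
# Sketch of checking for a time-in-study trend by comparing nested multilevel
# models; models are fit with ML (reml=False) so their likelihoods are comparable.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

esm = pd.read_csv("esm_long.csv")

# Null model: person-specific intercepts only.
m0 = smf.mixedlm("momentary_fatigue ~ 1", esm,
                 groups=esm["participant_id"]).fit(reml=False)
# Trend model: adds a linear day-in-study effect.
m1 = smf.mixedlm("momentary_fatigue ~ day_in_study", esm,
                 groups=esm["participant_id"]).fit(reml=False)

lr = 2 * (m1.llf - m0.llf)  # likelihood-ratio statistic, 1 df
print(f"LR = {lr:.2f}, p = {chi2.sf(lr, df=1):.3f}")
# A significant trend suggests keeping day_in_study as a covariate or de-trending.
```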

If potential longer-term intervention effects of study participation on experiences and behaviors are expected, implementation of classical treatment-control group designs is advisable. In the simplest design, participants are randomly assigned to either a treatment or a control condition. In the treatment condition, participants complete (one or more) assessments both before (pre-) and after (post-) taking part in the experience-sampling phase. The control group participates in the pre- and post-assessments but not in the experience-sampling phase. Significant differences between treatment and control groups in pre-to-post change of assessments indicate significant intervention effects of the experience-sampling phase. The authors have not observed such longer-term intervention effects in any of their applications of experience sampling (all with German samples), none of which explicitly aimed at modifying participants’ behaviors or experiences. For example, compared to control groups, and controlling for pre-experience assessments, no post-experience-sampling differences were evident regarding longer-term exercise adherence in exercise beginners who recorded their exercise-related experiences several times a day throughout three weeks (Riediger & Freund, 2004), or in empathic accuracy among romantic partners who repeatedly judged their partner’s affect several times a day throughout 15 days (Rauers et al., 2013; for further examples, see Barta et al., 2013). Given their complexity and resource intensiveness, treatment-control-group designs are therefore advisable primarily when demonstrating intervention effects is among the aims of the study (Kaplan & Stone, 2013).
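If such a design is implemented, the group-by-time comparison can be carried out, for example, with an analysis-of-covariance logic; the minimal sketch below assumes one row per participant with hypothetical columns pre_score, post_score, and group.

```python
# Sketch of testing for an intervention effect of the experience-sampling phase
# in a pre/post treatment-control design (hypothetical file and column names).
import pandas as pd
import statsmodels.formula.api as smf

prepost = pd.read_csv("prepost.csv")  # one row per participant
model = smf.ols("post_score ~ pre_score + C(group)", prepost).fit()
print(model.summary())  # the C(group) coefficient estimates the intervention effect
```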

Addressing Challenges of Missing Data in Experience-Sampling Studies

Researchers should minimize the likelihood of systematically missing data and differences between age groups in the number of completed experience-sampling measures. Again, suitable measures to this end will differ depending on the target populations. For example, for school-aged participants, researchers could attempt to obtain the school’s approval allowing study participation during school hours. Missing data, however, cannot be completely prevented in experience-sampling studies and should therefore be adequately considered in the data analyses (see section “Analyzing Data from Experience-Sampling Studies”). In addition, researchers should refer to earlier studies with similar study populations to estimate how many missing measurements to expect in their study. Calculations of the necessary number of assessment occasions during the experience-sampling phase (see section “Power Considerations in Experience-Sampling Research: Planning the Sample Size and the Number of Experience-Sampling Occasions”) should consider this estimated loss. An alternative way to address this challenge is to implement flexible assessment schedules, with additional assessment occasions in the case of missing measurements. For example, the authors have conducted various studies with three experience-sampling phases that consisted of three consecutive days with six daily experience-sampling occasions each, and breaks of six days between experience-sampling phases. For each experience-sampling day on which a given participant completed fewer than five of the six experience-sampling occasions, the respective experience-sampling phase for that person was prolonged by one day. This flexible procedure ensured that a sufficient number of completed measurement occasions were available for all participants of the study (Blanke, Riediger, & Brose, 2018; Riediger & Freund, 2008; Riediger et al., 2011).
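The prolongation rule just described can be expressed as a simple function; the sketch below is purely illustrative and not part of any particular experience-sampling software.

```python
# Illustrative sketch of the flexible scheduling rule: one extra assessment day
# per planned day on which fewer than the required number of occasions were completed.
def extra_days(completed_per_day, required=5):
    """Number of extra assessment days to append to a participant's phase."""
    return sum(1 for completed in completed_per_day if completed < required)

# Example: three planned days; on day 2 only 3 of 6 occasions were completed.
print(extra_days([6, 3, 5]))  # -> 1
```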

Planning the Experience-Sampling Schedule: Time Windows and Sampling Schemes for Daily Assessments

Two decisions have to be made when planning the experience-sampling schedule for participants: During which time window throughout the day will experience samples be obtained? And at what times will specific experience-sampling occasions be scheduled?

In many cases, daily routines and rhythms vary substantially between participants, particularly (but not only) in age-heterogeneous samples. Scheduling measurement occasions at times when participants are still, or already, asleep (or otherwise generally unavailable for assessments) should be avoided, as this may not only aggravate the problem of incomplete datasets with missing responses but also undermine participants’ study motivation. To avoid this problem, routinely asking participants to choose a personalized daily time window for the experience-sampling assessments according to their habitual wake-up time has proven effective in past research (Klipker et al., 2017; Luong, Wrzus, Wagner, & Riediger, 2016; Rauers et al., 2013; Wrzus, Wagner, & Riediger, 2016).

Within that time window for assessments, experience samples can be scheduled according to different sampling methods, which include signal-contingent, interval-contingent, event-contingent, and context-contingent sampling, as well as any combinations of the above. Which assessment schedule is most appropriate in a given study context depends on the specific research question at hand, the prevalence of the particular experience under study, and on feasibility considerations.

Signal-contingent sampling is the most frequently used sampling scheme. Here, the electronic assessment device signals participants when to respond to the experience-sampling instrument. These signals are scheduled at varying, (pseudo-)random time points within the measurement time window, such that participants do not know when the next signal will occur. For example, the authors often use a signal-contingent measurement scheme in which personalized 12-hour time windows for assessments are broken down into six two-hour time slots. Within each of these two-hour slots, one experience-sampling occasion is scheduled randomly, with the added provision that adjacent experience samples are at least 15 minutes apart. Signal-contingent sampling has the advantages of maximizing the everyday representativeness of the measurements and of minimizing the likelihood that participants change their daily routine in expectation of the assessment. It is the best-suited sampling approach for measuring experiences that occur frequently and fluctuate throughout the day, such that within-person variation can be adequately represented by the scheduled experience samples.
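A minimal sketch of this scheduling logic is shown below. The slot length, number of slots, and minimum gap are the values used in the example above; the implementation details are illustrative rather than those of any specific software.

```python
# Sketch of a signal-contingent daily schedule: one (pseudo-)random prompt per
# two-hour slot of a personalized 12-hour window, with adjacent prompts at least
# 15 minutes apart. Times are minutes after the participant's chosen start time.
import random

def daily_schedule(n_slots=6, slot_minutes=120, min_gap=15):
    while True:  # resample until the minimum-gap constraint is satisfied
        prompts = [slot * slot_minutes + random.randrange(slot_minutes)
                   for slot in range(n_slots)]
        if all(b - a >= min_gap for a, b in zip(prompts, prompts[1:])):
            return prompts

print(daily_schedule())  # e.g., [37, 152, 301, 404, 598, 703]
```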

Interval-contingent sampling, in contrast, involves scheduling assessments at predetermined time points during the day. Compared to signal-contingent sampling, this requires less flexibility from participants, but it comes at the cost of comparatively less representative experience samples and a higher likelihood that participants adjust their daily activities in expectation of the assessment. Interval-contingent sampling can nevertheless be appropriate when the predetermined time points of assessment are of primary conceptual interest (e.g., when the research question pertains to circadian rhythms or daily routines at particular times during the day).

Event-contingent sampling refers to sampling schemes in which participants complete experience-sampling assessments whenever a pre-specified event occurs (e.g., a social interaction, an interpersonal conflict). This sampling scheme can be appropriate when experiences that surround relatively infrequent events are of interest. It requires, however, a high degree of proactivity and attentiveness on the part of the participants who have to self-initiate completion of the experience-sampling instrument. Researchers have limited possibilities to control the comprehensiveness and compliance of participants’ reports (e.g., whether all relevant events were indeed reported and whether the experience-sampling reports were indeed made immediately after the events occurred).

Context-contingent sampling is a similar sampling scheme, although it allows more control on the part of the researchers. Here, experience-sampling assessments are automatically triggered when additional recording of context characteristics signals the occurrence of a pre-defined event (e.g., increase in heart rate, being in a particular geographical area). A prerequisite for this sampling scheme is that experience sampling is combined with another ambulatory measurement technology that participants wear and that (semi-)continuously records the relevant context variable (e.g., ambulatory physiological assessments or geo-tracking). To date, technological solutions for implementing experience-sampling studies do not yet routinely include algorithms that allow for context-dependent sampling. Consequently, only a few applications of this sampling scheme with custom-made software are available to date (Gustafson et al., 2014), despite its profound methodological advantages over self-initiated event-contingent sampling.

Highly desirable future advances in experience-sampling technology include innovations that allow researchers to flexibly combine these various scheduling schemes and to routinely bring together experience sampling with other forms of ambulatory assessments as a prerequisite for context-contingent sampling.

Power Considerations in Experience-Sampling Research: Planning the Sample Size and the Number of Experience-Sampling Occasions

Power analyses serve to estimate how many observations will be needed to statistically detect a hypothesized effect size (e.g., a mean difference or a regression slope) at a given level of significance, provided that this effect indeed exists. Whereas conducting an a priori power analysis is advisable for any study, it is especially advisable for experience-sampling studies that require considerable time, effort, and financial resources. Power analysis can help reduce such costs by planning a study such that unnecessary oversampling is avoided (or by abandoning a study plan if the required sample cannot be realized within the researcher’s budget).

Power analysis for experience-sampling designs relies on the same general logic as for other designs: It requires the researcher to make assumptions regarding several parameters, among them the expected size of the effect, the expected variation of the outcome variable, and the expected residual variance that cannot be explained by the assumed effect. Ideally, all these assumptions are estimated based on empirical evidence from past studies or pilot data. Different from simpler between-person designs, power analysis for experience-sampling designs requires researchers to consider multiple sources of variation for each of these entities, at least distinguishing within-person variation (e.g., how much are people expected to vary over time regarding a given variable?) from between-person variation (e.g., how much is this variable expected to vary between participants?). Depending on the nature of their hypotheses, researchers may choose to enhance the power of the study by increasing the number of persons included in the study, the number of observations made per person, or both. As mentioned above, the planned number of observations per person should be decided following additional considerations regarding participant burden.

Over the last two decades, various procedures, tools, and recommendations for power analysis for experience-sampling designs have evolved (Bolger & Laurenceau, 2013c). Free software tools such as PinT (Bosker, Snijders, & Guldemond, 2003) or Optimal Design (Raudenbush et al., 2011) support power analysis for multilevel designs. Simulation methods have also been recommended to calculate power in experience-sampling designs (Bolger, Stadler, & Laurenceau, 2012).
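The following sketch outlines such a simulation-based power estimate for a within-person effect in a persons-by-occasions design, using the Python statsmodels package. All parameter values are illustrative assumptions and should be replaced with estimates from pilot or published data.

```python
# Simulation-based power sketch for a within-person effect in a multilevel design.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def simulate_power(n_persons=60, n_occasions=30, b_within=0.15,
                   sd_intercept=1.0, sd_slope=0.10, sd_resid=1.0,
                   n_sims=200, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        person = np.repeat(np.arange(n_persons), n_occasions)
        x = rng.normal(size=n_persons * n_occasions)          # within-person predictor
        u0 = rng.normal(0, sd_intercept, n_persons)[person]   # random intercepts
        u1 = rng.normal(0, sd_slope, n_persons)[person]       # random slopes
        y = u0 + (b_within + u1) * x + rng.normal(0, sd_resid, x.size)
        df = pd.DataFrame({"id": person, "x": x, "y": y})
        fit = smf.mixedlm("y ~ x", df, groups=df["id"],
                          re_formula="~x").fit(reml=False)
        hits += fit.pvalues["x"] < alpha
    return hits / n_sims

print(simulate_power())  # proportion of simulations detecting the assumed effect
```

Varying n_persons and n_occasions in such a simulation shows how power depends on adding participants versus adding occasions per participant, which is precisely the trade-off discussed above.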

Choosing an Experience-Sampling Technology

Up until the beginning of the 21st century, it was not uncommon for paper-and-pencil questionnaires to be used in experience-sampling research. Technological advances have rendered this approach obsolete, and it is now inadvisable for two reasons. First, there is ample evidence demonstrating a high risk of participant non-compliance when a paper-and-pencil format is used for ambulatory assessments (Stone, Shiffman, Schwartz, Broderick, & Hufford, 2002). Second, various solutions for the digital recording of experience-sampling responses are available that allow precise control over participants’ study compliance and assessment schedules. These also include low-cost options (e.g., free experience-sampling software, use of participants’ own mobile phones as assessment devices).

Generally, choosing a study technology involves decisions regarding both hardware and software. In terms of hardware, current studies typically use either mobile phones or tablet computers as assessment instruments. The least costly variant is to use participants’ own mobile phones. This, however, comes with the disadvantage that the display of items and the accuracy of recording of some information (e.g., response latencies) are not standardized across participants. If possible, the authors therefore recommend using the same type of assessment instrument for all participants. When choosing the hardware for one’s study, particular attention should be paid to its appropriateness for the designated target population(s). Particularly when working with children and older adults, careful piloting of devices is necessary to ensure that display contrast, font size, signal loudness, and so forth can be adjusted according to the sensory requirements of the sample, and that participants can manually operate the assessment device without difficulties (e.g., via touch screen, keyboard, or buttons).

Available experience-sampling software solutions include freeware as well as commercial software. Some researchers also employ custom-made solutions. The field of experience-sampling solutions is rapidly growing and reflects the ongoing advances in mobile technologies. Selecting the appropriate software for one’s study should be based on thorough research of the available options at the time the study is planned. At the time of writing this article, an up-to-date overview of available free and commercial software and hardware is available, for example, at the website of the Society for Ambulatory Assessment. Choosing the appropriate experience-sampling software typically requires weighing the features desired for one’s study against pragmatic and budgetary considerations. To aid this decision, Table 1 summarizes software features that have proven helpful in the authors’ past research (implemented in a custom-made solution). Even though currently available free and commercial solutions do not yet routinely offer all of these features, this list may be helpful for researchers comparing the feasibility of different software options for their study and for software developers seeking to further refine their solutions according to researchers’ needs.

Table 1. Selection of Potentially Helpful Features of Experience-Sampling Software

  • easy setup and adjustment of experience-sampling items

  • availability of different item formats (e.g., single-choice versus multiple-choice items)

  • possibility to specify item trees (i.e., conditional presentation of items depending on participants’ responses to preceding items; for instance, when questions on momentary social partners are shown only when participants indicate that another person is present, and are otherwise skipped; see the sketch following this table)

  • possibility for individualized adjustment of assessment schedules (e.g., personalized time windows for assessments)

  • possibility for scheduling of assessment-free days between experience-sampling days

  • possibility for scheduling additional individual assessment days if the number of missing measurements on a given day exceeds a pre-defined criterion

  • specification of a limited time window during which participants can complete the assessment after a prompt (to avoid retrospective completion and deviation from the sampling scheme)

  • immediate and data-secure upload of responses from the local devices to a central server

  • possibility for online monitoring of participants’ compliance to the experience-sampling measurement scheme

  • possibility for scheduling reminder signals if participants do not respond to a prompt

  • recording of the date, time of day, and response latencies for each item of the experience-sampling instrument (e.g., for plausibility checks)
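To illustrate the item-tree feature listed above, the following sketch represents conditional item presentation as a simple data structure. It is a generic example, not the configuration format of any particular experience-sampling software; all item identifiers and texts are hypothetical.

```python
# Sketch of an "item tree": the social-partner question is shown only if the
# participant indicated that another person is present.
items = [
    {"id": "alone", "text": "Is anyone with you right now?",
     "options": ["yes", "no"]},
    {"id": "partner", "text": "Who is with you?",
     "options": ["partner", "family", "friends", "other"],
     "show_if": {"alone": "yes"}},
]

def next_items(responses):
    """Return the items to present, given the responses collected so far."""
    return [item for item in items
            if all(responses.get(k) == v for k, v in item.get("show_if", {}).items())
            and item["id"] not in responses]

print([i["id"] for i in next_items({"alone": "yes"})])  # -> ['partner']
print([i["id"] for i in next_items({"alone": "no"})])   # -> []
```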

Analyzing Data From Experience-Sampling Studies

Prior to the statistical analyses, researchers should plan for an extended phase of data cleaning that is necessary to enhance the reliability, validity, and power of their data (for an overview, see McCabe, Mack, & Fleeson, 2013). Part of this process is checking for and dealing with missing values (Black et al., 2013). Missing values are not necessarily a problem if appropriate statistical approaches to deal with them are employed (Enders, 2010; Graham, 2009). Conclusions drawn from the data, however, may be invalid if there are too many missing values, or if observations are not missing at random. In developmental studies, it is particularly important to test whether missing values are systematically related to participants’ age. Additionally, researchers should check for, and mark, individuals or occasions that give rise to suspicion about careless responding, to be considered in control analyses (as described in the section on participants’ compliance and diligence).
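Whether missed occasions are systematically related to participants’ age can be probed, for instance, with a logistic regression of a per-occasion missingness indicator on age. The sketch below assumes a long-format file with one row per scheduled occasion and hypothetical column names (completed, age).

```python
# Sketch of testing for age-related missingness (hypothetical names throughout).
import pandas as pd
import statsmodels.formula.api as smf

scheduled = pd.read_csv("scheduled_occasions.csv")  # one row per scheduled occasion
scheduled["missed"] = 1 - scheduled["completed"]

fit = smf.logit("missed ~ age", scheduled).fit()
print(fit.summary())  # a significant age effect indicates age-related missingness
```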

As mentioned before, an important strength of the experience-sampling method is the possibility to investigate within-person variation and co-variation of processes, as well as respective between-person differences. To exploit this potential, and to avoid false conclusions from the data, researchers need to employ adequate statistical models that distinguish within-person from between-person levels of analysis and that consider statistical interdependencies deriving from repeatedly observing individuals over time and with varying intervals between measurement occasions. Common approaches for modeling such interdependencies build on multilevel regression or structural equation frameworks (Singer & Willet, 2003; Walls & Schafer, 2006). In addition to analyzing whether a given variable differs on average between individuals, these approaches also allow testing whether associations between variables differ between persons. Adequate model specifications also require attention to within-person data dependencies that derive from the timing of measurement occasions (e.g., adjacent measurements of one person may be more similar to each other than more distal measurements of that person). More complex dependencies may arise from unequally spaced time intervals between experience-sampling observations, or from extended time intervals between experience-sampling phases (Bolger & Laurenceau, 2013b). Additional interdependencies between individuals need to be considered when investigating couples, families, or other social groups (Bolger & Laurenceau, 2013a; Laurenceau & Bolger, 2013; Nestler, Grimm, & Schönbrodt, 2015).
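As a minimal illustration, the sketch below separates within-person from between-person variance by person-mean centering a predictor and fits a multilevel model with a random slope for the within-person component (Python statsmodels; variable names such as stress and negative_affect are hypothetical).

```python
# Sketch of within/between-person decomposition and a multilevel model.
import pandas as pd
import statsmodels.formula.api as smf

esm = pd.read_csv("esm_long.csv")

person_mean = esm.groupby("participant_id")["stress"].transform("mean")
esm["stress_between"] = person_mean                  # between-person component
esm["stress_within"] = esm["stress"] - person_mean   # within-person component

model = smf.mixedlm("negative_affect ~ stress_within + stress_between",
                    esm, groups=esm["participant_id"],
                    re_formula="~stress_within").fit()
print(model.summary())  # separate within- and between-person associations
```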

Conclusions

Experience sampling brings both important strengths and significant challenges to lifespan-developmental methodology. If these challenges are adequately addressed, experience sampling represents a powerful research tool. This article provides practical guidelines for conducting experience-sampling research in lifespan-developmental psychology. The field of lifespan psychology will particularly profit from implementing experience sampling within long-term longitudinal and cohort-sequential designs and from combining it with other ambulatory measures more routinely in the future.

References

Barrett, L. F., Robin, L., Pietromonaco, P. R., & Eyssell, K. M. (1998). Are women the ‘more emotional’ sex? Evidence from emotional experiences in social context. Cognition and Emotion, 12, 555–578.

Barta, W. D., Tennen, H., & Litt, M. D. (2013). Measurement reactivity in diary research. In M. R. Mehl & T. S. Conner (Eds.), Handbook of research methods for studying daily life (pp. 108–123). New York, NY: Guilford Press.

Black, A. C., Harel, O., & Matthews, G. (2013). Techniques for analyzing intensive longitudinal data with missing values. In M. R. Mehl & T. S. Conner (Eds.), Handbook of research methods for studying daily life (pp. 339–356). New York, NY: Guilford Press.

Blanke, E. S., Riediger, M., & Brose, A. (2018). Pathways to happiness are multidirectional: Associations between state mindfulness and everyday affective experience. Emotion, 18, 202–211.

Bolger, N., & Laurenceau, J.-P. (2013a). Design and analysis of intensive longitudinal studies of distinguishable dyads. In N. Bolger & J.-P. Laurenceau (Eds.), Intensive longitudinal methods: An introduction to diary and experience sampling research (pp. 143–171). New York, NY: Guilford Press.

Bolger, N., & Laurenceau, J.-P. (2013b). Modeling the within-subject causal process. In N. Bolger & J.-P. Laurenceau (Eds.), Intensive longitudinal methods: An introduction to diary and experience sampling research (pp. 69–101). New York, NY: Guilford Press.

Bolger, N., & Laurenceau, J.-P. (2013c). Statistical power for intensive longitudinal designs. In N. Bolger & J.-P. Laurenceau (Eds.), Intensive longitudinal methods: An introduction to diary and experience sampling research (pp. 197–227). New York, NY: Guilford Press.

Bolger, N., Stadler, G., & Laurenceau, J.-P. (2012). Power analysis for intensive longitudinal studies. In M. R. Mehl & T. S. Conner (Eds.), Handbook of research methods for studying daily life (pp. 285–320). New York, NY: Guilford Press.

Bosker, R. J., Snijders, T. A. B., & Guldemond, H. (2003). PinT: Power in two-level designs: Estimating standard errors of regression coefficients in hierarchical linear models for power calculations: User’s manual (Version 2.1).

Brose, A., De Roover, K., Ceulemans, E., & Kuppens, P. (2015). Older adults’ affective experiences across 100 days are less variable and less complex than younger adults’. Psychology and Aging, 30, 194–208.

Brose, A., & Ebner-Priemer, U. (2015). Ambulatory assessment in the research on aging: Contemporary and future applications. Gerontology, 61, 372–380.

Brose, A., Schmiedek, F., Koval, P., & Kuppens, P. (2015). Emotional inertia contributes to depressive symptoms beyond perseverative thinking. Cognition and Emotion, 29, 527–538.

Brose, A., Schmiedek, F., Lövdén, M., & Lindenberger, U. (2011). Normal aging dampens the link between intrusive thoughts and negative affect in reaction to daily stressors. Psychology and Aging, 26, 488–502.

Cain, A. E., Depp, C. A., & Jeste, D. V. (2009). Ecological momentary assessment in aging research: A critical review. Journal of Psychiatric Research, 43, 987–996.

Carstensen, L. L., Turan, B., Scheibe, S., Ram, N., Ersner-Hershfield, H., Samanez-Larkin, G. R., . . . Nesselroade, J. R. (2011). Emotional experience improves with age: Evidence based on over 10 years of experience sampling. Psychology and Aging, 26, 21–33.

Dirk, J., & Schmiedek, F. (2016). Fluctuations in elementary school children’s working memory performance in the school context. Journal of Educational Psychology, 108, 722.

Doberenz, S., Roth, W. T., Wollburg, E., Maslowski, N. I., & Kim, S. (2011). Methodological considerations in ambulatory skin conductance monitoring. International Journal of Psychophysiology, 80, 87–95.

Ebner-Priemer, U. W., Koudela, S., Mutz, G., & Kanning, M. K. (2013). Interactive multimodal ambulatory monitoring to investigate the association between physical activity and affect. Frontiers in Psychology, 3, 596.

Enders, C. K. (2010). Applied missing data analysis. New York, NY: Guilford Press.

Epstein, D. H., Tyburski, M., Craig, I. M., Phillips, K. A., Jobes, M. L., Vahabzadeh, M., . . . Preston, K. L. (2014). Real-time tracking of neighborhood surroundings and mood in urban drug misusers: Application of a new method to study behavior in its geographical context. Drug and Alcohol Dependence, 134, 22–29.

Foster, E. (2010). Causal inference and developmental psychology. Developmental Psychology, 46, 1454–1480.

Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549–576.

Gustafson, D. H., McTavish, F. M., Chih, M.-Y., Atwood, A. K., Johnson, R. A., Boyle, M. G., . . . Dillenburg, L. (2014). A smartphone application to support recovery from alcoholism: A randomized clinical trial. JAMA Psychiatry, 71, 566–572.

Harden, K. P., Wrzus, C., Luong, G., Grotzinger, A., Bajbouj, M., Rauers, A., . . . Riediger, M. (2016). Diurnal coupling between testosterone and cortisol from adolescence to older adulthood. Psychoneuroendocrinology, 73, 79–90.

Hektner, J. M., Schmidt, J. A., & Csikszentmihalyi, M. (2007). Experience sampling method: Measuring the quality of everyday life. Thousand Oaks, CA: SAGE.

Hoppmann, C., & Riediger, M. (2009). Ambulatory assessment in lifespan psychology: An overview of current status and new trends. European Psychologist, 14, 98–108.

Kaplan, R. M., & Stone, A. A. (2013). Bringing the laboratory and clinic to the community: Mobile technologies for health promotion and disease prevention. Annual Review of Psychology, 64, 471–498.

Klipker, K., Wrzus, C., Rauers, A., Boker, S. M., & Riediger, M. (2017). Within-person changes in salivary testosterone and physical characteristics of puberty predict boys’ daily affect. Hormones and Behavior, 95, 22–32.

Klipker, K., Wrzus, C., Rauers, A., & Riediger, M. (2017). Hedonic orientation moderates the association between cognitive control and affect reactivity to daily hassles in adolescent boys. Emotion, 17, 497–508.

Klumb, P. L., & Baltes, M. M. (1999). Time use of old and very old Berliners: Productive and consumptive activities as functions of resources. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences, 54, S271–S278.

Klumb, P., Elfering, A., & Herre, C. (2009). Ambulatory assessment in industrial/organizational psychology: Fruitful examples and methodological issues. European Psychologist, 14, 120–131.

Knight, G. P., & Zerr, A. A. (2010). Informed theory and measurement equivalence in child development research. Child Development Perspectives, 4, 25–30.

Laurenceau, J.-P., & Bolger, N. (2013). Analyzing intensive longitudinal data from dyads. In M. R. Mehl & T. S. Conner (Eds.), Handbook of research methods for studying daily life (pp. 285–320). New York, NY: Guilford Press.

Leonhardt, A., Könen, T., Dirk, J., & Schmiedek, F. (2016). How differentiated do children experience affect? An investigation of the within- and between-person structure of children’s affect. Psychological Assessment, 28, 575–585.

Li, S.-C., Huxhold, O., & Schmiedek, F. (2004). Aging and attenuated processing robustness. Gerontology, 50, 28–34.

Luong, G., Wrzus, C., Wagner, G. G., & Riediger, M. (2016). When bad moods may not be so bad: Valuing negative affect is associated with weakened affect–health links. Emotion, 16, 387–401.

McCabe, K. O., Mack, L., & Fleeson, W. (2013). A guide for data cleaning in experience sampling studies. In M. R. Mehl & T. S. Conner (Eds.), Handbook of research methods for studying daily life (pp. 407–422). New York, NY: Guilford Press.

Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17, 437–455.

Mehl, M. R. (2017). The electronically activated recorder (EAR): A method for the naturalistic observation of daily social behavior. Current Directions in Psychological Science, 26, 184–190.

Miron-Shatz, T., Stone, A. A., & Kahneman, D. (2009). Memories of yesterday’s emotions: Does the valence of experience affect the memory-experience gap? Emotion, 9, 885–891.

Molenaar, P. C. (2004). A manifesto on psychology as idiographic science: Bringing the person back into scientific psychology, this time forever. Measurement, 2, 201–218.

Nesselroade, J. R., & Molenaar, P. C. M. (2010). Emphasizing intraindividual variability in the study of development over the lifespan. In W. F. Overton (Ed.), The handbook of lifespan development: Cognition, biology, and methods across the lifespan (Vol. 1, pp. 30–54). Hoboken, NJ: Wiley.

Nesselroade, J. R., & Salthouse, T. A. (2004). Methodological and theoretical implications of intraindividual variability in perceptual-motor performance. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences, 59, 49–55.

Nestler, S., Grimm, K. J., & Schönbrodt, F. D. (2015). The social consequences and mechanisms of personality: How to analyse longitudinal data from individual, dyadic, round-robin and network designs. European Journal of Personality, 29, 272–295.

Raudenbush, S. W., Spybrook, J., Congdon, R., Liu, X., Martinez, A., Bloom, H., . . . Hill, C. (2011). Optimal design plus empirical evidence (Version 3.0).

Rauers, A., Blanke, E. S., & Riediger, M. (2013). Everyday empathic accuracy in younger and older couples: Do you need to see your partner to know his or her feelings? Psychological Science, 24, 2210–2217.

Ready, R. E., Weinberger, M. I., & Jones, K. M. (2007). How happy have you felt lately? Two diary studies of emotion recall in older and younger adults. Cognition and Emotion, 21, 728–757.

Riediger, M. (2018). Ambulatory assessment in survey research: The multi-method ambulatory assessment project. In M. Erlinghagen, K. Hank, & M. Kreyenfeld (Eds.), Innovation und Wissenstransfer in der empirischen Sozial- und Verhaltensforschung: Festschrift für Gert G. Wagner [Innovation and knowledge transfer in empirical social and behavioral research: Festschrift for Gert G. Wagner] (pp. 85–100). Frankfurt, Germany: Campus Verlag.

Riediger, M., & Freund, A. M. (2004). Interference and facilitation among personal goals: Differential associations with subjective well-being and persistent goal pursuit. Personality and Social Psychology Bulletin, 30, 1511–1523.

Riediger, M., & Freund, A. M. (2008). Me against myself: Motivational conflict and emotional development in adulthood. Psychology and Aging, 23, 479–494.

Riediger, M., & Klipker, K. (2014). Emotion regulation in adolescence. In J. J. Gross (Ed.), Handbook of emotion regulation (2nd ed., pp. 187–202). New York, NY: Guilford Press.

Riediger, M., & Rauers, A. (2014). Do everyday affective experiences differ throughout adulthood? A review of ambulatory-assessment evidence. In P. Verhaeghen & C. Hertzog (Eds.), The Oxford handbook of emotion, social cognition, and everyday problem solving during adulthood (pp. 61–82). Oxford, UK: Oxford University Press.

Riediger, M., Schmiedek, F., Wagner, G., & Lindenberger, U. (2009). Seeking pleasure and seeking pain: Differences in pro- and contra-hedonic motivation from adolescence to old age. Psychological Science, 20, 1529–1535.

Riediger, M., Wrzus, C., Klipker, K., Müller, V., Schmiedek, F., & Wagner, G. G. (2014). Outside of the laboratory: Associations of working-memory performance with psychological and physiological arousal vary with age. Psychology and Aging, 29, 103–114.

Riediger, M., Wrzus, C., Schmiedek, F., Wagner, G. G., & Lindenberger, U. (2011). Is seeking bad mood cognitively demanding? Contra-hedonic orientation and working-memory capacity in everyday life. Emotion, 11, 656–665.

Robinson, M. D., & Clore, G. L. (2002a). Belief and feeling: Evidence for an accessibility model of emotional self-report. Psychological Bulletin, 128, 934–960.

Robinson, M. D., & Clore, G. L. (2002b). Episodic and semantic knowledge in emotional self-report: Evidence for two judgment processes. Journal of Personality and Social Psychology, 83, 198–215.

Röcke, C., & Brose, A. (2013). Intraindividual variability and stability of affect and well-being. The Journal of Gerontopsychology and Geriatric Psychiatry, 26, 185–199.

Ruppert, D., Wand, M. P., & Carroll, R. J. (2003). Semiparametric regression. Cambridge series in statistical and probabilistic mathematics 12. Cambridge, UK: Cambridge University Press.

Schaie, K. W. (1994). Developmental designs revisited. In S. H. Cohen & H. W. Reese (Eds.), Life-span developmental psychology (pp. 45–64). Hillsdale, NJ: Erlbaum.

Schmid Mast, M., Gatica-Perez, D., Frauendorfer, D., Nguyen, L., & Choudhury, T. (2015). Social sensing for psychology: Automated interpersonal behavior assessment. Current Directions in Psychological Science, 24, 154–160.

Schwarz, N. (2007). Retrospective and concurrent self-reports: The rationale for real-time data capture. In A. A. Stone, S. Shiffman, A. A. Atienza, & L. Nebeling (Eds.), The science of real-time data capture (pp. 11–26). New York, NY: Oxford University Press.

Scollon, C. N., Kim-Prieto, C., & Diener, E. (2003). Experience sampling: Promises and pitfalls, strengths and weaknesses. Journal of Happiness Studies, 4, 5–34.

Singer, J. D., & Willett, J. B. (2003). Applied longitudinal data analysis: Modeling change and event occurrence. New York, NY: Oxford University Press.

Stone, A. A., Shiffman, S., Schwartz, J. E., Broderick, J. E., & Hufford, M. R. (2002). Patient non-compliance with paper diaries. British Medical Journal, 324, 1193–1194.

Walls, T. A., & Schafer, J. L. (2006). Models for intensive longitudinal data. Oxford, UK: Oxford University Press.

Wrzus, C., Müller, V., Wagner, G. G., Lindenberger, U., & Riediger, M. (2013). Affective and cardiovascular responding to unpleasant events from adolescence to old age: Complexity of events matters. Developmental Psychology, 49, 384–397.

Wrzus, C., Wagner, G. G., & Riediger, M. (2016). Personality-situation transactions from adolescence to old age. Journal of Personality and Social Psychology, 110, 782–799.