Attrition Rate Research Paper

Short abstract

Loss to follow-up can greatly affect the strength of a trial's findings, but most reports do not give readers enough information to judge the potential effects.

The main evaluative strength of randomised controlled trials is that the randomised groups are generally balanced in all characteristics, with any imbalance occurring by chance. However, during many trials participants are lost to follow-up. Such attrition prevents a full intention to treat analysis from being carried out and can introduce bias.1,2 Attrition can also occur when participants have missing data at one or more time points. We argue that researchers need to be more explicit about loss to follow-up, especially when rates are high.

Effects of attrition

Attrition can introduce bias if the characteristics of people lost to follow-up differ between the randomised groups. Such loss matters for bias only if the differing characteristics are correlated with the trial's outcome measures. However, attrition is not a black and white issue: there is no specific level of loss to follow-up at which attrition-related bias becomes an acknowledged problem. Schulz and Grimes argue that loss to follow-up of 5% or lower is usually of little concern, whereas a loss of 20% or greater means that readers should be concerned about the possibility of bias; losses between 5% and 20% may still be a source of bias.3 For the purposes of this article we do not differentiate between loss to follow-up and missing data. Nor do we consider exclusions made by trial investigators; although exclusion is justified in some cases,3 it is generally ill advised.1,2

Reporting of attrition

In a review of trials published in four general medical journals in 2002, 54% (71) of the 132 trials had some loss to follow-up for the main analysis.4 Among these trials the median percentage loss was 7% (minimum 0.08%, maximum 48%; interquartile range 2-18%). These data suggest that potentially problematic loss to follow-up occurs in many trials, even those published in high quality general journals.4

The standard practice for reporting trials, as encouraged by CONSORT,5 is to include a table describing the baseline characteristics of the trial participants. This table provides useful information on all participants and confirms the success of the randomisation process. However, if there has been loss to follow-up, information from the whole sample may not adequately describe the analysed sample or accurately reflect the comparability of the trial groups. We therefore suggest that it is informative to present baseline characteristics separately for the participants whose data have been analysed and for those lost to follow-up. This would give a clearer picture of the subsample not included in an analysis and may help to indicate potential attrition bias. As an example we have taken data from a recently published randomised trial.
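
For trialists with access to the individual participant data, such a table is straightforward to produce. The short sketch below (our illustration, not part of any trial report) shows one way to do it with pandas; the data frame and the column names "arm", "lost_to_followup", and the baseline covariates are hypothetical.

import pandas as pd

def baseline_by_attrition(df, covariates):
    """Summarise baseline covariates by trial arm and follow-up status."""
    grouped = df.groupby(["arm", "lost_to_followup"])[covariates]
    summary = grouped.mean().round(2)   # mean of each baseline covariate per subgroup
    summary["n"] = grouped.size()       # subgroup sizes alongside the means
    return summary

# Example with made-up data:
df = pd.DataFrame({
    "arm": ["intervention", "intervention", "control", "control"] * 3,
    "lost_to_followup": [False, True, False, True] * 3,
    "age": [78, 81, 76, 80, 79, 83, 77, 75, 82, 84, 78, 76],
    "previous_fracture": [0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1],
})
print(baseline_by_attrition(df, ["age", "previous_fracture"]))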

Example trial

A trial of hip protectors for preventing hip fracture is typical of many studies in that about 20% of participants were lost to follow-up from routine data collection.6 The authors dealt with this problem for the main outcome (hip fracture) by accessing the general practice records of non-responders. However, for secondary outcomes (such as quality of life) this was not possible. Therefore, reports of these secondary outcomes are at a high risk of bias from attrition. We can assess whether these outcomes may be affected by attrition bias by comparing rates of loss to follow-up between the arms of the trial as well as by examining the baseline characteristics of participants who were lost to follow-up and the characteristics of those remaining.

The trial had a small but significant difference in attrition rates between the two arms (372 (28%) lost in the intervention group and 619 (22%) in the control group, P = 0.001). This is the first indication of a potential problem (table). Because different proportions of participants left the two arms, the chance that the participants remaining in one group are no longer comparable with those remaining in the other is increased.
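
The reported comparison can be reproduced, at least approximately, with a standard test of two proportions. In the sketch below the numbers lost are those reported above, but the group denominators (roughly 1330 and 2810) are only back-calculated from the quoted percentages rather than taken from the trial report, so treat them as assumptions.

import numpy as np
from scipy.stats import chi2_contingency

lost = np.array([372, 619])                 # lost to follow-up: intervention, control
n = np.array([1330, 2810])                  # assumed group sizes (approximate)
table = np.column_stack([lost, n - lost])   # rows: arms; columns: lost, retained

chi2, p, dof, expected = chi2_contingency(table)
print(f"attrition by arm: {lost / n}")      # ~0.28 vs ~0.22
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")    # a small p value, consistent with a significant difference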

Table 1

Baseline characteristics of all participants in a trial evaluating hip protectors, those lost to follow-up, and those remaining in the trial to the end.6 Values are percentages (numbers) unless stated otherwise

The table includes the baseline characteristics of all the participants. Not surprisingly, the baseline characteristics of the whole sample are well balanced, as we would expect through random allocation. However, these data cannot tell us whether the sample of women included in the analysis is balanced between the two treatment groups.

Of more interest is a comparison of the baseline characteristics of those who left the study and those who remained. The table shows, as might be expected, that the between-group differences among those lost to follow-up tend to be larger than any chance differences at baseline. For example, more volunteers, people with poor or fair health, and people with a previous fracture were lost from the control group than from the intervention group. This differential attrition produces varying changes in the characteristics of the participants remaining in the trial compared with all participants at baseline (table). This information is useful for the reader, but providing information about the participants who did not contribute to the analysis may be just as informative.

Interpreting results

The internal validity of a trial's results partly depends on the between-group balance in prognostic characteristics of those who remain in the trial. In addition, important imbalances that are not readily apparent in the analysed groups may become apparent when we examine the between-group characteristics of those lost to follow-up.

To clarify this point we carried out some simulation work to assess the effect of attrition bias on baseline characteristics and the type 1 error rate. The simulations used a population of trials, each with 630 participants, with 10% biased attrition in one arm and 10% random attrition in the other. Although the type 1 error rate was substantially increased, such attrition did not always lead to an apparent imbalance in the baseline characteristics of the participants remaining in the trial. Thus, assessment of the characteristics of those lost to follow-up may be particularly important. Further work is needed to examine how this issue is influenced by sample size and other factors.
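
The sketch below is not the authors' simulation code but a minimal reconstruction of the set-up described above: trials of 630 participants with no true treatment effect, 10% random attrition in one arm, and 10% attrition driven by a prognostic covariate in the other. The strength of the covariate and the exact drop-out mechanism are our assumptions.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

def one_trial(n=630, drop=0.10, cov_effect=1.0):
    n_arm = n // 2
    # Prognostic covariate and outcome; no treatment effect in either arm.
    x_a, x_b = rng.normal(size=n_arm), rng.normal(size=n_arm)
    y_a = cov_effect * x_a + rng.normal(size=n_arm)
    y_b = cov_effect * x_b + rng.normal(size=n_arm)

    # Arm A: random attrition -- drop 10% completely at random.
    keep_a = rng.permutation(n_arm) >= int(drop * n_arm)

    # Arm B: biased attrition -- drop the 10% with the highest covariate values.
    keep_b = x_b < np.quantile(x_b, 1 - drop)

    # Completers-only comparison of outcomes between arms.
    _, p = ttest_ind(y_a[keep_a], y_b[keep_b])
    return p < 0.05

rejections = np.mean([one_trial() for _ in range(5000)])
print(f"Empirical type 1 error rate: {rejections:.3f}")  # well above the nominal 0.05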

The data in the table would allow trialists to make a qualitative judgment about whether an important predictor variable has become more imbalanced since randomisation. They could then decide whether to carry out a sensitivity analysis treating a variable, such as previous fracture, as a covariate. However, since such judgment is subjective, especially for smaller sample sizes, a statistical test could be useful.

Statistical testing

The use of statistical tests in this context is complex, and there are arguments for and against testing. On the basis of current knowledge, we suggest it is not useful to test statistically for differences. A further reason for avoiding statistical testing stems from the arguments Altman and others have put forward against baseline testing of the total randomised group: imbalance of a predictor variable may still bias the study results, even if the imbalance does not reach conventional levels of significance.7-10

Decisions about covariates are normally made before the start of the trial. However, because attrition bias cannot always be anticipated, information on differential attrition is relevant at the analysis stage. Nevertheless, adjusting for variables that have not been specified in advance is poor statistical practice and may introduce bias.11 As Altman suggests, if serious attrition bias is suspected, the analysis should be carried out as originally planned, with perhaps a second analysis adjusting for the new covariate.11 Furthermore, the data presented can only include observed baseline variables: unrecorded or unknown variables may also be imbalanced.
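
A minimal sketch of that two-step approach is given below, using logistic regression in statsmodels. The data are simulated and the variable names (fracture, arm, previous_fracture) are hypothetical; this illustrates the principle only and is not the analysis of the hip protector trial.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "arm": rng.integers(0, 2, n),                # 1 = intervention, 0 = control
    "previous_fracture": rng.integers(0, 2, n),  # covariate suspected of becoming imbalanced
})
# Simulated binary outcome that depends on the covariate but not on the arm.
linear_predictor = -2.0 + 1.0 * df["previous_fracture"]
df["fracture"] = rng.binomial(1, 1 / (1 + np.exp(-linear_predictor)))

# 1. Analysis as originally planned (unadjusted).
planned = smf.logit("fracture ~ arm", data=df).fit(disp=0)
# 2. Secondary sensitivity analysis adjusting for the suspect covariate.
adjusted = smf.logit("fracture ~ arm + previous_fracture", data=df).fit(disp=0)

print(planned.params["arm"], adjusted.params["arm"])  # compare the two estimates of the arm effect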

Implications

Baseline tables are a useful way of assessing the study sample, but the characteristics of the sample may change during the study, especially through attrition. As the hip protector example shows, the baseline characteristics of those lost to follow-up can differ between trial arms. And since attrition commonly occurs in trials,1,2,4 such differences could also be common. We know that missing data make it more difficult to carry out a true intention to treat analysis. Yet, because the baseline characteristics of those lost to follow-up and of those whose data are analysed are rarely reported, it is almost impossible to identify the effect of attrition on the study sample as a whole, and therefore on the result of the randomised controlled trial.

Questions arise about when these modified baseline tables could and should be used. We suggest that information on the participants included in the main analysis of a paper is of interest, especially if attrition is high. Missing data are more common for secondary outcome measures, as researchers often focus on collecting data on the primary outcome. Authors could consider presenting a table showing baseline characteristics of those who were and were not analysed when reporting secondary outcomes. The table might look different for different analyses of the same study. Although this table would require increased journal space, the information is arguably more useful than that in a standard baseline table. Since many journals publish online, these tables could be made available to interested readers in the electronic version only. Alternatively, the information might be incorporated into the flow diagram recommended by CONSORT.5

Summary points

Loss to follow-up can lead to bias in randomised trials

Imbalance resulting from this attrition is often hidden

Baseline characteristics of participants lost to follow-up and those included in the analysis should be reported separately

Assessment of the effect of differences between groups on the results is mainly subjective

Notes

Contributors and sources: JCD, DJT, and CEH are all involved in the design, implementation, and analysis of randomised controlled trials. JCD and DJT were responsible for the concept, design, and drafting of the article. CEH had statistical input. JCD is the guarantor.

Competing interests: None declared.

References

1. Tierney JF, Stewart LA. Investigating patient exclusion bias in meta-analysis. Int J Epidemiol 2005;34:79-87.

2. Hollis S, Campbell F. What is meant by intention to treat analysis? Survey of published randomised controlled trials. BMJ 1999;319:670-4.

3. Fergusson D, Aaron SD, Guyatt G, Hebert P. Post-randomisation exclusions: the intention to treat principle and excluding patients from analysis. BMJ 2002;325:652-4.

4. Hewitt CE, Hahn S, Torgerson DJ, Watson J, Bland M. Adequacy and reporting of allocation concealment: review of recent trials published in four general medical journals. BMJ 2005;330:1057-8.

5. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised controlled trials. Lancet 2001;357:1191-4.

6. Birks YF, Porthouse J, Addie C, Loughney K, Saxon L, Baverstock M, et al. Randomized controlled trial of hip protectors among women living in the community. Osteoporos Int 2004;15:701-6.

7. Altman DG. Comparability of randomised groups. Statistician 1985;34:125-36.

8. Senn S. Testing for baseline balance in clinical trials. Stat Med 1994;13:1715-26.

9. Senn S. Covariate imbalance and random allocation in clinical trials. Stat Med 1989;8:467-75.

10. Roberts C, Torgerson DJ. Baseline imbalance in randomised controlled trials. BMJ 1999;319:185.

11. Altman DG. Adjustment for covariate imbalance. In: Armitage P, Colton T, eds. Encyclopaedia of biostatistics. 2nd ed. Chichester: John Wiley, 2005:1273-8.



Abstract

Objective To examine attrition variables in randomized controlled trials of cognitive behavioral interventions for children with chronic illnesses. Methods We examined attrition rates reported in 40 randomized cognitive behavioral intervention studies published in six pediatric research journals during 2002–2007. The interventions were limited to children with a chronic medical condition, such as asthma, obesity, arthritis, diabetes, cancer, sickle cell disease, and cystic fibrosis. Results The mean rate of enrollment refusal was 37% (range 0–75%). The mean attrition rate was 20% (range 0–54%) at initial follow-up and 32% (range 0–59%) at extended follow-up. Of the reviewed articles, 40% included a CONSORT diagram. Conclusions Strategies that can be used to limit attrition include tailoring recruitment to the study population, providing personalized feedback, maintaining consistent study procedures, providing incentives, and using intensive tracking measures. There is a need for standardized definitions and reporting of attrition rates in randomized cognitive behavioral intervention studies.

attrition, pediatric, chronic illness, cognitive behavioral intervention, randomized controlled trial

Attrition, or the loss of eligible participants, is a significant threat to the internal, external, and statistical validity of intervention studies (Harris, 1998; Marcellus, 2004). Attrition may compromise internal validity by altering the random composition of groups and their equivalence (Kazdin, 1999). External validity may be compromised because attrition can limit the generalizability of results to only those who are retained in a study; for instance, retained participants may be more persistent or more adherent, or have other characteristics that differ from those who drop out. Attrition may also compromise statistical validity by reducing sample size and power or by systematically altering the variability within samples. Across various categories of intervention studies, reported attrition rates range from 5% to 70%, and bias is thought to be a significant concern if the attrition rate exceeds 20% (Harris, 1998; Marcellus, 2004). An earlier review specific to behavioral medicine outpatient intervention studies for people with a chronic health condition found that attrition rates ranged from 10% to 59%, with a mean of 33% (Davis & Addis, 1999).

A major problem with understanding the potential threat of attrition is that it often goes unreported. Sifers, Puddy, Warren, and Roberts (2002) reviewed 260 empirical studies published in 1997 in four major pediatric or child psychology journals (Journal of Pediatric Psychology, Journal of Clinical Child Psychology, Child Development, and Journal of Abnormal Child Psychology) and found that across these four journals only 19.6–36.2% of the articles (mean = 28%) reported information on attrition. With the adoption of the Consolidated Standards of Reporting Trials (CONSORT) statement by many journals (including this one in 2002), one might expect higher rates of reporting of attrition and of reasons for participant drop-out. Kane, Wang, and Garrard (2007) reviewed the quality of reporting of randomized controlled trials in the Journal of the American Medical Association (JAMA; which had adopted the CONSORT guidelines) and the New England Journal of Medicine (NEJM; which had not) before and after the CONSORT guidelines were released. They found that JAMA showed more consistent improvements in all aspects of trial reporting; for example, the proportion of JAMA reports stating the number completing a study rose from 36% pre-CONSORT to 88% post-CONSORT, compared with 24% and 50%, respectively, for the NEJM. However, Kane et al. (2007) also noted that varying definitions of attrition were used and that reasons for attrition were coded differently across the studies.

To foster consistency in the reporting of attrition rates, we have superimposed the different types of attrition defined in the literature onto the CONSORT diagram, which tracks the flow of participants through a study (Figure 1). Enrollment refusal occurs when participants who are otherwise eligible either refuse to participate or cannot complete study requirements. Baseline attrition occurs when eligible participants have agreed to participate and signed an informed consent form but do not complete baseline data collection and are therefore not randomized to a study arm. Post-randomization attrition occurs when participants do not receive the allocated intervention, prematurely discontinue the intervention, or do not complete follow-up measures after receiving the intervention (Zebracki et al., 2003). Note that in Figure 1 we have further divided post-randomization attrition into attrition during the intervention and attrition during follow-up in order to better categorize these different forms of attrition. Attrition due to missing data occurs when participants are excluded from study analyses because of incomplete, inaccurate, or missing data (Ahern & Le Brocque, 2005). Because most studies do not report attrition due to missing data (possibly because participants who do not complete follow-up measures overlap with those missing data), we focus on baseline and post-randomization attrition.
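
The sketch below (our illustration, not taken from the paper) simply maps these definitions onto a participant's flow record; the record fields are hypothetical names.

from dataclasses import dataclass

@dataclass
class FlowRecord:
    eligible: bool
    consented: bool
    completed_baseline: bool     # participants are randomized only if baseline is complete
    received_intervention: bool
    completed_followup: bool

def attrition_category(r: FlowRecord) -> str:
    if not r.eligible:
        return "not eligible (excluded before enrollment)"
    if not r.consented:
        return "enrollment refusal"
    if not r.completed_baseline:
        return "baseline attrition (not randomized)"
    if not r.received_intervention:
        return "post-randomization attrition: during intervention"
    if not r.completed_followup:
        return "post-randomization attrition: during follow-up"
    return "completed"

print(attrition_category(FlowRecord(True, True, True, False, False)))
# -> post-randomization attrition: during intervention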

In addition to standard definitions of attrition and use of the CONSORT flowchart to track participants throughout a study, it is also important to understand why potential participants refuse to participate or drop out at various phases of a study. Reasons for, or predictors of, attrition are usually determined by asking participants why they refused or dropped out and/or by using available data (such as demographic or disease-related information) to compare those who consent and complete a study with those who refuse to enroll or drop out. Standardized reporting methods are essential for comparing attrition rates across intervention studies and determining predictors of attrition. Thus, the primary purpose of this paper is to examine attrition rates reported in randomized cognitive behavioral treatment (CBT) studies for children and adolescents with chronic medical conditions in six pediatric or health psychology journals. Predictors of attrition and recommendations for minimizing attrition are also discussed.
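
For the second approach, one common and simple option is to model drop-out status from baseline information. The sketch below uses logistic regression on simulated data with invented variable names; it is not the method of any particular study reviewed here.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "age": rng.normal(12, 3, n),                # child's age in years
    "disease_severity": rng.integers(1, 4, n),  # 1 = mild ... 3 = severe
})
# Simulated drop-out that becomes more likely with milder disease.
p_drop = 1 / (1 + np.exp(-(-0.5 - 0.8 * (df["disease_severity"] - 2))))
df["dropped_out"] = rng.binomial(1, p_drop)

# Compare completers with non-completers on baseline variables in one model.
model = smf.logit("dropped_out ~ age + disease_severity", data=df).fit(disp=0)
print(model.params)    # sign of each coefficient hints at who tends to drop out
print(model.pvalues)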

Methods

Database

We examined attrition rates reported for randomized cognitive behavioral interventions, published during the years 2002–2007, in six pediatric or health psychology journals: Pediatrics (n = 9), Journal of Pediatrics (n = 5), Children's Health Care (n = 6), Journal of Pediatric Psychology (n = 15), Health Psychology (n = 4), and Journal of Clinical Psychology in Medical Settings (n = 1). Each journal issue printed between January 2002 and November 2007 was reviewed by hand for eligible articles. In addition to hand-review, an electronic search was performed within each of the six journals using the following key terms: randomly assigned; cognitive behavioral therapy; cognitive behavioral intervention; randomized and behavior; intervention and behavior; chronic and intervention.

Articles were included in the analyses if the following criteria were met: (a) the study was a randomized controlled trial; (b) it used a cognitive and/or behavioral intervention (e.g., education, exercise, cognitive behavioral therapy); and (c) the target population was children or adolescents with a chronic medical condition, such as asthma, obesity, arthritis, diabetes, cancer, sickle cell disease, or cystic fibrosis. Studies of children with mental health disorders or developmental disabilities, such as depression, autism, or ADHD, were excluded, as were published abstracts.

Coding Procedures

Eligible articles were coded by the first author using a 10-point worksheet. To determine the accuracy of coding, 25% of the 40 eligible articles (10 articles, at least one from each journal) were randomly selected using the SPSS 15.0 random number generator and independently coded by the second author. Agreement was counted when both coders identified the same number of participants or the same information in an article, or both judged it to be absent. Percent agreement is provided for each of the 10 coded variables. Fleiss and Cohen (1973) suggest that the intraclass correlation coefficient (ICC) is mathematically equivalent to the weighted kappa for ordinal data; intraclass correlation coefficients (Model 2, individual agreement; Shrout & Fleiss, 1979) and raw agreement were therefore calculated for ordinal variables one through six: (1) number of eligible participants (ICC = 1.0; 80%), (2) number of participants enrolled (ICC = .97; 80%), (3) number of participants who completed the baseline assessment (ICC = 1.0; 90%), (4) number of participants who were randomized (ICC = 1.0; 90%), (5) number of participants who completed the intervention (ICC = 1.0; 80%), and (6) number of participants who completed follow-up assessments at each follow-up (ICC = .97; 85%). Cohen's (1960) kappa statistic and raw agreement were calculated for the categorical variables (seven through ten), coded as yes or no: (7) whether a CONSORT diagram or comparable flow chart was provided (kappa = 1.0; 100%), (8) whether differences between completers and noncompleters were reported (kappa = 1.0; 100%), (9) whether reasons for refusal or drop-out were provided (kappa = .80; 90%), and (10) whether incentives for participation were discussed (kappa = .29; 70%). The overall ICC (n = 63 discrete data points) was .76 and the overall kappa (n = 40 discrete data points) was .78. Overall percent agreement, determined by dividing the total number of matches (n = 90) by the potential number of matches (n = 103) across the 10 coded variables, was 87%. Discrepancies were resolved through consensus.
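
The sketch below shows how the categorical agreement statistics can be computed; the two coders' ratings are invented for illustration and are not the study data. Cohen's kappa is taken from scikit-learn, and raw agreement is simply the proportion of exact matches.

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical yes/no codes from two coders for one categorical variable
# (e.g., "CONSORT diagram provided?") across ten articles.
coder1 = np.array(["yes", "yes", "no", "yes", "no", "yes", "no", "no", "yes", "yes"])
coder2 = np.array(["yes", "yes", "no", "yes", "no", "no", "no", "no", "yes", "yes"])

raw_agreement = np.mean(coder1 == coder2)   # proportion of exact matches
kappa = cohen_kappa_score(coder1, coder2)   # chance-corrected agreement

print(f"raw agreement = {raw_agreement:.0%}, kappa = {kappa:.2f}")
# Overall percent agreement in the paper was computed the same way:
# total matches / potential matches, e.g. 90 / 103 = 87%.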

Results

Table 1 presents attrition data for each of the 40 reviewed articles. Enrollment refusal rate is presented as the number of participants who signed an informed consent form (i.e., enrolled) over the number of participants who were eligible to participate in the study, along with percent enrollment refusal. Baseline attrition is presented as the number of participants who completed the baseline assessment over the number of participants who enrolled in the study, along with percent attrition. Post-randomization attrition during intervention is presented as the number of participants who remained in the study during the intervention period over the number of participants who were randomized to a study arm, along with percent attrition. Post-randomization attrition during follow-up is presented as the number of participants who completed the follow-up assessment over the number of participants who were randomized, along with percent attrition.
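
As a worked example of how these percentages follow from the reported counts, the short sketch below reproduces three figures from the Krishna et al. (2003) row of Table 1.

def refusal_pct(enrolled, eligible):
    """Percent of eligible participants who did not enroll."""
    return 100 * (eligible - enrolled) / eligible

def attrition_pct(completed, denominator):
    """Percent lost relative to the relevant denominator (enrolled or randomized)."""
    return 100 * (denominator - completed) / denominator

# Krishna et al. (2003): 246 enrolled of 249 eligible; 228 completed baseline of
# 246 enrolled; 163 completed the 3-month follow-up of 246 randomized.
print(round(refusal_pct(246, 249)))     # 1  -> "246/249 (1)"
print(round(attrition_pct(228, 246)))   # 7  -> "228/246 (7)"
print(round(attrition_pct(163, 246)))   # 34 -> "3 month 163/246 (34)"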

Table 1.

Randomized Cognitive Behavioral Intervention Studies for Children with Chronic Medical Conditions (2002–2007)

Research study | No. enrolled/No. eligible [Refusal (%)] | No. baseline/No. enrolled [Attrition (%)] | No. intervention/No. randomized [Attrition (%)] | No. follow-up/No. randomized [Attrition (%)] | CONSORT diagram | Noncompleter differences | Reasons for refusal or drop-out | Incentives reported
Pediatrics
Krishna et al. (2003) 246/249 (1) 228/246 (7) 228/246 (7) 3 month 163/246 (34) 12 month 101/246 (59) No Not analyzed Too busy, not interested, too much hassle No 
Powers et al. (2005) 10/14 (29) 10/10 (0) 9/10 (10) 3 month 9/10 (10) 12 month 9/10 (10) Yes Not analyzed Time constraints No 
Cabana et al. (2006) 971/2270 (57) 964/971 (1) 101/101 (0) 12 month 807/971 (17) Yes Not analyzed None reported Yes: CME and $50/year. 
Daley, Copeland, Wright, Roalfe, and Wales (2006) 81/132 (39) 81/81 (0) 81/81 (0) 8 week 75/81 (7) 14 week 71/81 (12) 28 week 71/81 (12) Yes Not analyzed Not interested Yes: £37.50 
Gorelick et al. (2006) 352/618 (43) 352/352 (0) 352/352 (0) 6 month 273/352 (22) Yes Public insurance, ethnic minority None reported No 
McPherson, Glazebrook, Forster, James, and Smyth (2006) 101/163 (38) 101/101 (0) 99/101 (2) 1 month 99/101 (2) 6 month 90/101 (11) Yes No differences found Family concerns No 
Sockrider et al. (2006) 464 464/464 (0) 464/464 (0) 14 day 214/464 (54) 9 month 218/464 (53) No Not analyzed None reported No 
Chan et al. (2007) 120/126 (5) 120/120 (0) 120/120 (0) 54 week 102/120 (14) Yes Not analyzed Unanticipated move Yes: volunteer 
Golley, Magarey, Baur, Steinbeck, and Daniels (2007) 115/193 (40) 111/115 (3) 68/75 (9) 6 month 57/75 (24) 12 month 91/111 (18) Yes Older, higher BMI None reported No 
Journal of Pediatrics
Lee et al. (2002) 28 28/28 (0) 25/28 (11) 10 week 25/28 (11) 66 week 24/28 (14) No Not analyzed Pain; medical complication No 
Kelly et al. (2004) 25 25/25 (0) 20/20 (0) 8 week 20/20 (0) No Not analyzed None reported No 
Watts et al. (2004) 21 21/21 (0) 14/14 (0) 8 week 14/14 (0) 16 week 14/14 (0) No Not analyzed None reported Yes: volunteer 
Balagopal et al. (2005) 21 21/21 (0) 15/15 (0) 3 month 15/15 (0) No Not analyzed None reported Yes: payment 
Stark et al. (2006) 65/194 (66) 58/65 (11) 52/65 (20) 6 month 49/65 (25) 12 month 49/65 (25) No Older, low, and high medication use Too busy, too far, too many appointments, not interested, intervention not necessary No 
Children's Health Care
Applegate et al. (2003) 124/124 (0) 124/124 (0) 123/124 (1) 0 day 122/124 (2) No Not analyzed Patient death, hospitalization No 
Powers et al. (2003) 12/13 (8) 12/12 (0) 9/12 (25) 1 year 8/12 (33) No No differences found Work demands, change in family situation, child illness, new baby No 
Herrera, Johnson, and Steele (2004) 75/79 (5) 75/75 (0) 46/50 (8) 10 week 46/50 (8) No Not analyzed None reported No 
Krishna, Balas, Francisco, and Konig (2006) 246/1000 (75) 235/246 (4) 235/246 (4) 3/12 month 228/246 (7) No Milder disease, male, young Too busy, too far, follow-up time too great, medical and technical problems No 
Abram et al. (2007) 81/105 (23) 81/81 (0) 81/81 (0) 3 month 50/81 (38) 6 month 66/81 (19) Yes Not analyzed None reported No 
Schwartz, Radcliffe, and Barakat (2007) 58/102 (43) 49/58 (16) 41/49 (16) 2 month 41/49 (16) No Not analyzed Too busy, not interested, started new treatment No 
Journal of Pediatric Psychology
Brown et al. (2002) 111/144 (23) 101/111 (9) 95/101 (6) 3 month 91/101 (10) 12 month 93/101 (8) No Male None reported Yes: $75 
Madsen, Roisman, and Collins (2002) 224 224/224 (0) 224/224 (0) Not reported No Not analyzed None reported Yes: volunteer 
Davis, Quittner, Stack, and Yang (2004) 47/48 (2) 47/47 (0) 47/47 (0) 3 month 47/47 (0) 6 month 22/22 (0) No Not analyzed Lack of access to computer Yes: $15 
Klosky et al. (2004) 79/79 (0) 79/79 (0) 79/79 (0) 0 day 79/79 (0) No N/A N/A No 
Koontz, Short, Kalinyak, and Noll (2004) 24/26 (8) 24/24 (0) 24/24 (0) 24/24 (0) No N/A None reported No 
Ellis et al. (2005) 38/47 (19) 31/38 (18) 26/31 (16) 9 month 23/31 (26) Yes Not analyzed Disagreement regarding direction of therapy; low engagement in therapy No 
Kazak et al. (2005) 19/47 (60) 19/19 (0) 16/19 (16) 2 month 17/19 (11) Yes No differences found Not interested, unwilling to leave child, scheduling conflict, overwhelmed No 
Robins, Smith, Glutting, and Bishop (2005) 86/103 (17) 86/86 (0) 77/86 (10) 3 month 70/86 (19) 6–12 month 69/86 (20) Yes Not analyzed Too busy, too far, wanted to be in treatment arm, started additional therapy Yes: $25 
Stark et al. (2005) 65/194 (66) 57/65 (12) 52/65 (20) 8 weeks 49/65 (25) Yes Older, low, and high medication use Too busy, could not provide food diary No 
Connelly, Rapoff, Thompson, and Connelly (2006) 41/50 (26) 37/41 (10) 36/37 (3) 3 month 31/37 (16) Yes Not analyzed Too busy, technology difficulties Yes: $50 
Hicks, Baeyer, and McGrath (2006) 72/83 (13) 47/72 (35) 42/47 (11) 1 month 37/47 (21) 3 month 32/47 (32) Yes No differences found None reported No 
Warner et al. (2006) 61/180 (66) Not reported 55/61 (10) 1 month 50/61 (18) No No differences found None reported Yes: $15 or $45 
Wysocki et al. (2006) 104/388 (73) 104/104 (0) 104/104 (0) 6 month 92/104 (12) Yes Lower SES, living with single parent None reported Yes: $100 or $200 
Ellis et al. (2007) 144/182 (21) 127/144 (12) 111/127 (13) 7 month 110/127 (13) No Not analyzed Too busy, parental disinterest; did not think intervention helpful No 
Goldfield et al. (2007) 30/30 (0) 30/30 (0) 30/30 (0) 8 week 30/30 (0) No N/A N/A Yes: Park and transport 
Health Psychology
Dahlquist, Pendly, Landthrip, Jones, and Steuber (2002) 31/44 (30) 31/31 (0) 29/29 (0) 8 week 29/29 (0) No Not analyzed Intervention not necessary Yes: Prizes 
Rapoff et al. (2002) 54/90 (40) 54/54 (0) 54/54 (0) 52 week 34/54 (37) No Less disease activity Taken off medications No 
Epstein, Palach, Kilanowski, and Baynor (2004) 72/72 (0) 63/72 (12) 61/63 (3) 6 month 61/63 (3) 12 month 60/63 (5) Yes Not analyzed None reported No 
Liossi, White, and Hatira (2006) 45/49 (8) 45/45 (0) 45/45 (0) 1 month 45/45 (0) 6 month 45/45 (0) Yes Not analyzed Families too distressed, felt intervention not necessary No 
Journal of Clinical Psychology in Medical Settings
Ellis et al. (2004) 38/47 (19) 31/38 (18) 27/31 (13) 6 month 25/31 (19) No No differences found None reported No 
