• Sarco(endo)plasmic reticulum ATPase is a molecular partner of Wolfram syndrome 1 protein, which negatively regulates its expression.

      Zatyka, Malgorzatta; Xavier, Gabriela Da Silva; Bellomo, Elisa A.; Leadbeater, Wendy; Astuti, Dewi; Smith, Joel; Michelangeli, Francesco; Rutter, Guy A.; Barrett, Timothy G.; University of Birmingham, Imperial College London, (Oxford University Press, 01/02/2015)
      Wolfram syndrome is an autosomal recessive disorder characterized by neurodegeneration and diabetes mellitus. The gene responsible for the syndrome (WFS1) encodes an endoplasmic reticulum (ER)-resident transmembrane protein that is involved in the regulation of the unfolded protein response (UPR), intracellular ion homeostasis, cyclic adenosine monophosphate production and regulation of insulin biosynthesis and secretion. In this study, single cell Ca(2+) imaging with fura-2 and direct measurements of free cytosolic ATP concentration ([ATP]CYT) with adenovirally expressed luciferase confirmed a reduced and delayed rise in cytosolic free Ca(2+) concentration ([Ca(2+)]CYT), and additionally, diminished [ATP]CYT rises in response to elevated glucose concentrations in WFS1-depleted MIN6 cells. We also observed that sarco(endo)plasmic reticulum ATPase (SERCA) expression was elevated in several WFS1-depleted cell models and primary islets. We demonstrated a novel interaction between WFS1 and SERCA by co-immunoprecipitation in Cos7 cells and with endogenous proteins in human neuroblastoma cells. This interaction was reduced when cells were treated with the ER stress inducer dithiothreitol. Treatment of WFS1-depleted neuroblastoma cells with the proteasome inhibitor MG132 resulted in reduced accumulation of SERCA levels compared with wild-type cells. Together these results reveal a role for WFS1 in the negative regulation of SERCA and provide further insights into the function of WFS1 in calcium homeostasis.
    • The sarcoplasmic-endoplasmic reticulum Ca(2+)-ATPase (SERCA) is the likely molecular target for the acute toxicity of the brominated flame retardant hexabromocyclododecane (HBCD).

      Al-Mousa, Fawaz; Michelangeli, Francesco; University of Birmingham, UK (Elsevier, 01/01/2014)
Hexabromocyclododecane (HBCD) is a widely utilised brominated flame retardant (BFR). It has been shown to bio-accumulate within organisms, including man, and possibly cause neurological disorders. The acute neurotoxicity of HBCD, and of six other unrelated BFRs, was assessed in SH-SY5Y human neuroblastoma cells by 24 h viability assays, and HBCD proved to be the most lethal (LC50 = 3 μM). In addition, these BFRs were also assessed for their potency at inhibiting the sarcoplasmic-endoplasmic reticulum Ca(2+) ATPase (SERCA) derived from the SH-SY5Y cells, and again HBCD was the most potent (IC50 = 2.7 μM). The data for the other BFRs tested showed a direct correlation (correlation coefficient 0.94) between their potencies at inducing cell death and at inhibiting the Ca(2+) ATPase, indicating that SERCA is likely to be the molecular target for acute toxicity. Mechanistic studies of HBCD on the Ca(2+) ATPase suggest that it affects ATP binding and phosphorylation, as well as the E2 to E1 transition step.
    • Saturated Fatty Acid Intake as a Risk Factor for Cardiovascular Disease in Healthy Caucasian Adults from Western Populations

      Thomas, Patricia; Mushtaq, Sohail; University of Chester (Cambridge University Press, 2014-02-02)
ABSTRACT Background: Cardiovascular disease (CVD) is the leading cause of premature death globally (WHO, 2010). For over 50 years, saturated fatty acids (SFA) have been implicated as a main dietary risk factor for CVD. National guidelines therefore recommend limiting SFA to <10% of total daily energy intake (COMA [1]). However, recent literature has begun to question this advice owing to contrary evidence showing SFA not to be a risk factor for CVD (Hoenselaar [2]). This study’s aim was to investigate the relationship between SFA and CVD to assess whether recommendations should be made to review national guidelines. Method: A systematic review and meta-analysis were conducted. Electronic research databases were searched using variations of the keywords “saturated fatty acids” and “cardiovascular disease”. Articles were included only if they had a randomised controlled trial (RCT) or prospective cohort (PC) study design. Additionally, participants had to meet the following criteria: Caucasian, non-smokers, normal BMI, classed as healthy, no pre-existing CVD-related conditions, not taking cholesterol-altering drugs and no inborn errors of lipid metabolism. Articles were also only included if they were conducted in western populations, in an attempt to standardise environmental factors. In the PCs, only data adjusted for these factors were included. Articles were assessed for quality using the Jadad et al. [3] scoring/CASP tool, and for confounding variables, risk of bias and homogeneity. Results: A total of 411 articles were identified. Eight articles were included after exclusion for duplication, study design, not meeting full inclusion criteria, low quality, confounding variables, high risk of bias and heterogeneity. Of these, 4 were RCTs and 4 were PCs, together including 193,409 participants (192,686 female, 723 male). RCT and PC data were analysed separately.
For the RCTs, LDL-cholesterol concentration after the high/low SFA intervention was used as a functional biomarker of CVD risk. For the PCs, the number of CVD-related events in the low/high SFA diet groups was used as the marker of CVD risk. In the RCT meta-analysis there was a standardised mean difference (95% CI) of -0.94 (-1.17, -0.71) (p<0.00001) favouring the low SFA diet to decrease the risk of CVD. In the PC meta-analysis, a risk ratio (95% CI) of 1.00 (0.64, 1.58) (p=1.00) showed there to be no statistically significant relationship between SFA and CVD. Sensitivity analyses predominantly showed no change in outcome. Discussion: RCT outcomes favoured a low SFA diet for lowering CVD risk, whereas the PC outcome showed no relationship. Although these outcomes differed, together they indicate that SFA does not increase CVD risk in western Caucasian adults. However, further research is needed before recommending a review of national guidelines. These findings are consistent with other systematic reviews/meta-analyses, e.g. Skeaff and Miller [4]. Conclusion: From the studies included, SFA does not increase CVD risk in affluent Caucasian adults.
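The prospective-cohort pooling described above can be illustrated with a minimal fixed-effect, inverse-variance sketch. The study counts below are hypothetical, and the published analysis may well have used random-effects methods or dedicated software; this shows only the general log-scale risk-ratio pooling that underlies such a meta-analysis.

```python
import math

def log_rr_and_se(events_a, n_a, events_b, n_b):
    """Risk ratio of group A vs group B on the log scale, with the
    standard large-sample approximation to its standard error."""
    log_rr = math.log((events_a / n_a) / (events_b / n_b))
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    return log_rr, se

def pooled_risk_ratio(studies):
    """Fixed-effect inverse-variance pooling of per-study risk ratios,
    returning the pooled ratio and its 95% confidence interval."""
    pairs = [log_rr_and_se(*s) for s in studies]
    weights = [1 / se ** 2 for _, se in pairs]
    log_pooled = sum(w * lr for (lr, _), w in zip(pairs, weights)) / sum(weights)
    se_pooled = 1 / math.sqrt(sum(weights))
    ci = (math.exp(log_pooled - 1.96 * se_pooled),
          math.exp(log_pooled + 1.96 * se_pooled))
    return math.exp(log_pooled), ci

# Hypothetical (events, n) pairs for low-SFA vs high-SFA arms of four cohorts
studies = [(12, 400, 11, 410), (30, 1500, 33, 1490),
           (8, 250, 7, 260), (55, 5000, 52, 4900)]
rr, (ci_low, ci_high) = pooled_risk_ratio(studies)
```

A pooled ratio near 1.00 with a confidence interval spanning 1, as reported in the abstract, indicates no detectable association.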
    • Scenario-Led Habitat Modelling of Land Use Change Impacts on Key Species

      Geary, Matthew; Fielding, Alan H.; McGowan, Philip J. K.; Marsden, Stuart J.; University of Chester, Manchester Metropolitan University; Newcastle University (PLOS, 16/11/2015)
Accurate predictions of the impacts of future land use change on species of conservation concern can help to inform policy-makers and improve conservation measures. If predictions are spatially explicit, predicted consequences of likely land use changes could be accessible to land managers at a scale relevant to their working landscape. We introduce a method, based on open source software, which integrates habitat suitability modelling with scenario-building, and illustrate its use by investigating the effects of alternative land use change scenarios on landscape suitability for black grouse Tetrao tetrix. Expert opinion was used to construct five near-future (twenty years) scenarios for the 800 km² study site in upland Scotland. For each scenario, the cover of different land use types was altered by 5–30% from 20 random starting locations and changes in habitat suitability assessed by projecting a MaxEnt suitability model onto each simulated landscape. A scenario converting grazed land to moorland and open forestry was the most beneficial for black grouse, and ‘increased grazing’ (the opposite conversion) the most detrimental. Positioning of new landscape blocks was shown to be important in some situations. Increasing the area of open-canopy forestry caused a proportional decrease in suitability, but suitability gains for the ‘reduced grazing’ scenario were nonlinear. ‘Scenario-led’ landscape simulation models can be applied in assessments of the impacts of land use change both on individual species and also on diversity and community measures, or ecosystem services. A next step would be to include landscape configuration more explicitly in the simulation models, both to make them more realistic, and to examine the effects of habitat placement more thoroughly. In this example, the recommended policy would be incentives on grazing reduction to benefit black grouse.
    • Season-long increases in perceived muscle soreness in professional rugby league players: role of player position, match characteristics and playing surface

      Fletcher, Ben D.; Twist, Craig; Haigh, Julian; Brewer, Clive; Morton, James P.; Close, Graeme L.; Liverpool John Moores University; University of Chester (Taylor and Francis, 14/09/2015)
Rugby League (RL) is a high-impact collision sport characterised by repeated sprints and numerous high-speed impacts, and consequently players often report immediate and prolonged muscle soreness in the days after a match. We examined muscle soreness after matches during a full season to understand the extent to which match characteristics influence soreness. Thirty-one elite Super League players provided daily measures of muscle soreness after each of the 26 competitive fixtures of the 2012 season. Playing position, phase of the season, playing surface and match characteristics were recorded for each match. Muscle soreness peaked at day 1 and was still apparent at day 4 post-game, with no attenuation in the magnitude of muscle soreness over the course of the season. Neither playing position, phase of the season nor playing surface had any effect on the extent of muscle soreness. Playing time and total number of collisions were significantly correlated with higher ratings of muscle soreness, especially in the forwards. These data indicate the absence of a repeated bout effect or ‘contact adaptations’ in elite rugby players, with soreness present throughout the entire season. Strategies must now be implemented to deal with the physical and psychological consequences of prolonged feelings of pain.
    • Seasonality in birth rate in two 19th century North Wales parishes

      Lewis, Stephen J.; Glenn, Janine; Chester College of Higher Education (2000)
An understanding of the basic patterns in the seasonality of birth rate may be useful in certain fertility techniques. The use of artificial birth control, however, has had the effect of masking the influence of underlying biometeorological factors. To elucidate trends in birth seasonality in the absence of current social factors, a study using nineteenth-century parish records was undertaken. Analysis of 5,905 births recorded in the baptismal records of the parishes of Hawarden and Northop between the years 1837 and 1886 revealed a significant seasonal trend, with a peak occurring in the spring. Further analysis showed a significant positive correlation between the (standardized) numbers of births in each month and the mean day length (hours between sunrise and sunset) of the putative month of conception.
    • Seasonality of Plasmodium falciparum transmission: a systematic review

      Reiner, Robert C.; Geary, Matthew; Atkinson, Peter M.; Smith, David L.; Gething, Peter W.; 1 Fogarty International Center, National Institutes of Health, Bethesda, MD, USA 2 Department of Epidemiology and Biostatistics, Indiana University School of Public Health, Bloomington, IN, USA 3 Department of Entomology, University of California, Davis, CA, USA 4 Department of Biological Sciences, University of Chester, Chester, UK 5 Faculty of Science and Technology, Engineering Building, Lancaster University, Lancaster LA1 4YR, UK 6 Faculty of Geosciences, University of Utrecht, Heidelberglaan 2, Utrecht, 3584 CS, The Netherlands 7 School of Geography, Archaeology and Palaeoecology, Queen’s University Belfast, Belfast BT7 1NN, Northern Ireland, UK 8 Geography and Environment, University of Southampton, Highfield, Southampton SO17 1BJ, UK 9 Center for Disease Dynamics, Economics and Policy, Washington, DC, USA 10 Spatial Ecology and Epidemiology Group, Department of Zoology, University of Oxford, Oxford, UK (BioMed Central, 15/09/2015)
Background Although Plasmodium falciparum transmission frequently exhibits seasonal patterns, the drivers of malaria seasonality are often unclear. Given the massive variation in the landscape upon which transmission acts, intra-annual fluctuations are likely influenced by different factors in different settings. Further, the presence of potentially substantial inter-annual variation can mask seasonal patterns; it may be that a location has “strongly seasonal” transmission and yet no single season ever matches the mean, or synoptic, curve. Accurate accounting of seasonality can inform efficient malaria control and treatment strategies. In spite of the demonstrable importance of accurately capturing the seasonality of malaria, the data required to describe these patterns are not universally accessible, and as such localized and regional efforts at quantifying malaria seasonality are disjointed and not easily generalized. Methods The purpose of this review was to audit the literature on seasonality of P. falciparum and quantitatively summarize the collective findings. Six search terms were selected to systematically compile a list of papers relevant to the seasonality of P. falciparum transmission, and a questionnaire was developed to catalogue the manuscripts. Results and discussion 152 manuscripts were identified as relating to the seasonality of malaria transmission, deaths due to malaria or the population dynamics of mosquito vectors of malaria. Among these, there were 126 statistical analyses and 31 mechanistic analyses (some manuscripts did both). Identified relationships between temporal patterns in malaria and climatological drivers of malaria varied greatly across the globe, with different drivers appearing important in different locations.
Although commonly studied drivers of malaria such as temperature and rainfall were often found to significantly influence transmission, the lags between a weather event and a resulting change in malaria transmission also varied greatly by location. Conclusions The contradictory results of studies using similar data and modelling approaches from similar locations, as well as the confounding nature of climatological covariates, underline the importance of a multi-faceted modelling approach that attempts to capture seasonal patterns at both small and large spatial scales. Keywords: Plasmodium falciparum; Seasonality; Climatic drivers
    • Second generation tyrosine kinase inhibitors prevent disease progression in high-risk (high CIP2A) chronic myeloid leukaemia patients.

      Lucas, Claire; Harris, Robert; Holcroft, Alison; Scott, Laura; Carmell, Natasha; McDonald, Elizabeth; Polydoros, Fotis; Clark, Richard (Nature, 13/03/2015)
High cancerous inhibitor of PP2A (CIP2A) protein levels at diagnosis of chronic myeloid leukaemia (CML) are predictive of disease progression in imatinib-treated patients. It is not known whether this is true in patients treated with second generation tyrosine kinase inhibitors (2G TKI) from diagnosis, or whether 2G TKIs modulate the CIP2A pathway. Here, we show that patients with high diagnostic CIP2A levels who receive a 2G TKI do not progress, unlike those treated with imatinib (P < 0.0001). 2G TKIs induce more potent suppression of CIP2A and c-Myc than imatinib. The transcription factor E2F1 is elevated in high CIP2A patients, and following 1 month of in vivo treatment, 2G TKIs suppress E2F1 and reduce CIP2A; these effects are not seen with imatinib. Silencing of CIP2A, c-Myc or E2F1 in K562 cells or CML CD34+ cells reactivates PP2A, leading to BCR-ABL suppression. CIP2A increases proliferation, and this is reduced only by 2G TKIs. Patients with high CIP2A levels should be offered 2G TKI treatment in preference to imatinib. 2G TKIs disrupt the CIP2A/c-Myc/E2F1 positive feedback loop, leading to lower disease progression risk. These data support the view that CIP2A inhibits PP2Ac, stabilising E2F1 and creating a CIP2A/c-Myc/E2F1 positive feedback loop, which imatinib cannot overcome.
    • The sedentary office: an expert statement on the growing case for change towards better health and productivity

      Buckley, John P.; Hedge, Alan; Yates, Thomas; Copeland, Robert J.; Loosemore, Michael; Hamer, Mark; Bradley, Gavin; Dunstan, David W.; University Centre Shrewsbury (BMJ, 01/06/2015)
      An international group of experts convened to provide guidance for employers to promote the avoidance of prolonged periods of sedentary work. The set of recommendations was developed from the totality of the current evidence, including long-term epidemiological studies and interventional studies of getting workers to stand and/or move more frequently. The evidence was ranked in quality using the four levels of the American College of Sports Medicine. The derived guidance is as follows: for those occupations which are predominantly desk based, workers should aim to initially progress towards accumulating 2 h/day of standing and light activity (light walking) during working hours, eventually progressing to a total accumulation of 4 h/day (prorated to part-time hours). To achieve this, seated-based work should be regularly broken up with standing-based work, the use of sit–stand desks, or the taking of short active standing breaks. Along with other health promotion goals (improved nutrition, reducing alcohol, smoking and stress), companies should also promote among their staff that prolonged sitting, aggregated from work and in leisure time, may significantly and independently increase the risk of cardiometabolic diseases and premature mortality. It is appreciated that these recommendations should be interpreted in relation to the evidence from which they were derived, largely observational and retrospective studies, or short-term interventional studies showing acute cardiometabolic changes. While longer term intervention studies are required, the level of consistent evidence accumulated to date, and the public health context of rising chronic diseases, suggest initial guidelines are justified. We hope these guidelines stimulate future research, and that greater precision will be possible within future iterations.
    • Selected physiological, perceptual and physical performance changes during two bouts of prolonged high intensity intermittent running separated by 72 hours

      Dobbin, Nicholas; Lamb, Kevin L.; Twist, Craig; University of Chester (Lippincott, Williams & Wilkins, 01/12/2017)
This study investigated the effects of performing a second 90-minute intermittent running protocol 72 hours after an initial trial on selected physiological, perceptual, and sprint running measures. Eight subelite soccer players provided measures of isokinetic muscle function, countermovement jump (CMJ), 10-m sprinting, and muscle soreness before, and at 0, 24, 48, and 72 hours after, a 90-minute intermittent high-intensity running bout (IHIR-1). A second 90-minute IHIR bout (IHIR-2) was performed 72 hours after the first. Heart rates, ratings of perceived exertion (RPE), blood lactate concentration [Bla], and 10-m sprint times were recorded periodically during both IHIR bouts. Analysis of effects revealed that in the 72-hour period after IHIR-1, there were most likely increases in muscle soreness and likely to very likely deteriorations in CMJ, 10-m sprint, and isokinetic muscle function. During IHIR-2, heart rates (possibly to likely) and [Bla] (possibly to very likely) were lower than in IHIR-1, whereas RPE remained unchanged. Sprint times during IHIR-2 were also likely to very likely higher than in IHIR-1. It was evident that these team sport players, exposed to repeated bouts of prolonged high-intensity running within 72 hours, downregulated their sprint performances in the second bout despite no change in perceived effort. These findings have implications for managing training and match loads during periods of intense scheduling.
    • Self-efficacy for temptations is a better predictor of weight loss than motivation and global self-efficacy: Evidence from two prospective studies among overweight/obese women at high risk of breast cancer.

      Armitage, Christopher J.; Wright, Claire E.; Parfitt, Gaynor; Pegington, Mary; Donnelly, Louise S.; Harvie, Michelle N.; University of Manchester; University of Chester; University of South Australia; University Hospital South Manchester (Elsevier, 03/02/2014)
      OBJECTIVES: Identifying predictors of weight loss could help to triage people who will benefit most from programs and identify those who require additional support. The present research was designed to address statistical, conceptual and operational difficulties associated with the role of self-efficacy in predicting weight loss. METHODS: In Study 1, 115 dieting overweight/obese women at high risk of breast cancer were weighed and completed questionnaires assessing motivation, global self-efficacy and self-efficacy for temptations. The main outcome measure was weight, measured 3-months post-baseline. Study 2 was identical (n=107), except changes in psychological variables were computed, and used to predict weight 6-months post-baseline. RESULTS: In Study 1, self-efficacy for temptations was a significant predictor of weight loss at 3-month follow-up. In Study 2, improved self-efficacy for temptations between baseline and four-weeks was predictive of lower weight at 6 months. CONCLUSION: The key finding was that self-efficacy for temptations, as opposed to motivation and global self-efficacy, was predictive of subsequent weight loss. PRACTICE IMPLICATIONS: The implication is that augmenting dieters' capability for dealing with temptations might boost the impact of weight loss programs.
    • Semi-automated time-motion analysis of senior elite rugby league

      Sykes, Dave; Twist, Craig; Hall, Shayne; Nicholas, Ceri; Lamb, Kevin L.; University of Chester ; University of Chester ; ProZone Group Ltd ; University of Chester ; University of Chester (University of Wales Institute, Cardiff, 2009)
      The aim of this study was to examine the movement demands of senior elite rugby league with consideration of the impact of player position and match phase.
    • Semi-automated time-motion analysis of senior elite rugby league

      Sykes, Dave; Twist, Craig; Hall, Shayne; Lamb, Kevin L.; University of Chester ; ProZone Group Ltd ; University of Chester ; University of Chester (2009-09)
    • Semi-automated time-motion analysis of senior elite rugby league

      Sykes, Dave; Twist, Craig; Hall, Shayne; Nicholas, Ceri; Lamb, Kevin L.; University of Chester ; University of Chester ; ProZone Group Ltd ; University of Chester ; University of Chester (2008-09)
    • Service evaluation of parent education in the Blacon Sure Start area

      Pearson, Charlotte; Thurston, Miranda; University College Chester (University College Chester, 2004-09)
      This report evaluates part of the Sure Start in Blacon.
    • Severe acute malnutrition and HIV in children in Malawi

      Fergusson, Pamela L. (University of Liverpool (University of Chester), 2009-07)
Sub-Saharan Africa is more affected by the HIV epidemic than any other region of the world. At the same time, malnutrition remains a major public health concern. HIV and malnutrition are interlinked, both epidemiologically and physiologically, contributing to high mortality and poor growth and development of children in sub-Saharan Africa. This thesis aims to explore the impact of HIV on the treatment and care of children with severe acute malnutrition (SAM) in Malawi. The thesis will investigate mortality and nutritional recovery in HIV-infected and uninfected children with SAM; HIV infection and nutritional status in carers of children with SAM; and caregiver perspectives on quality of care for children with SAM. The study is based on a prospective cohort study of 454 children with SAM and a meta-analysis of 17 relevant studies; a cross-sectional study of 322 carer-child pairs; and a qualitative study using a grounded theory approach.
    • A simple procedure for investigating differences in sexual dimorphism between populations

      Lewis, Stephen J.; Chester College of Higher Education (Oxbow Books (for The Osteoarchaeological Research Group), 1997)
Although sexual dimorphism has a strong genetic component in many animals, external factors may alter its expression - enhancing or diminishing it depending on the parameter measured and the type of influence experienced. A measure of sexual dimorphism may be used, therefore, to characterise a whole population and the factors acting upon it. Differences between populations for such factors may then be investigated by comparing sexual dimorphisms and may be more informative than merely comparing population means. A quick and relatively simple technique which provides a coefficient of the relationship between a continuous variable and another which is dichotomous, such as sex, is the point-biserial correlation. This is a less frequently described extension of the commonly used Pearson product-moment correlation. The point-biserial correlation coefficients can be calculated for a given parameter and compared to determine whether the same sexual dimorphism is evident in different samples. If it is not, some factor influencing one or other population, as a whole, may require further investigation. The full procedure, which can be performed without the need for statistical tables, and the necessary formulae are described. This method, in its generalised form, may also be applied to the study of bilateral asymmetry.
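The comparison step the abstract describes, testing whether two samples show the same dimorphism, is commonly carried out with Fisher's z-transformation of the two coefficients; the sketch below assumes that approach (the paper's own formulae may differ), and the coefficients and sample sizes are hypothetical.

```python
import math

def fisher_z(r):
    """Fisher z-transformation of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def compare_correlations(r1, n1, r2, n2):
    """Two-sided z test for the difference between two independent
    correlation coefficients, e.g. point-biserial dimorphism
    coefficients from two skeletal samples."""
    z = (fisher_z(r1) - fisher_z(r2)) / math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from the standard normal
    return z, p

# Hypothetical stature-dimorphism coefficients from two populations
z, p = compare_correlations(0.62, 80, 0.45, 95)
```

A non-significant p here would indicate that the two populations express the same degree of dimorphism for that parameter.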
    • A simple procedure for testing for differences in sexual dimorphism between populations

      Lewis, Stephen J.; Chester College of Higher Education (InterStat, 2004)
Using point-biserial correlation, an approach to the biological problem of comparing differences in sexual dimorphism between populations is presented. In particular, a way of deriving the point-biserial correlation coefficient from summary statistics (means, standard deviations and sample sizes) is given. This bypasses the need for access to raw data.
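The summary-statistics derivation described above can be sketched as follows: the total sum of squares is rebuilt from the two groups' means, standard deviations and sample sizes, and the point-biserial coefficient then follows. The stature data are hypothetical, and this is an illustrative reconstruction rather than the paper's exact formulae; the raw-data Pearson calculation is included only to show the two routes agree.

```python
from statistics import mean, stdev

def point_biserial_from_summary(m1, s1, n1, m0, s0, n0):
    """Point-biserial correlation from per-group summary statistics
    (group means, sample standard deviations and sample sizes)."""
    n = n1 + n0
    # Total sum of squares = within-group SS + between-group SS
    ss_total = ((n1 - 1) * s1 ** 2 + (n0 - 1) * s0 ** 2
                + (n1 * n0 / n) * (m1 - m0) ** 2)
    s_total = (ss_total / (n - 1)) ** 0.5
    return (m1 - m0) / s_total * (n1 * n0 / (n * (n - 1))) ** 0.5

# Hypothetical stature data; males coded 1, females coded 0
males = [172.0, 180.5, 175.2, 169.8, 178.1]
females = [160.3, 165.7, 158.9, 163.2]
r_summary = point_biserial_from_summary(mean(males), stdev(males), len(males),
                                        mean(females), stdev(females), len(females))

# The same coefficient as a Pearson correlation on the raw 0/1-coded data
xs = males + females
ys = [1.0] * len(males) + [0.0] * len(females)
mx, my = mean(xs), mean(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
r_pearson = cov / (sum((x - mx) ** 2 for x in xs)
                   * sum((y - my) ** 2 for y in ys)) ** 0.5
```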
    • Skeletal changes during catch-up growth - suggestions from a rat model

      Lewis, Stephen J.; Chester College of Higher Education (2001)
The removal of a growth-limiting influence, such as undernutrition, is frequently followed by a period of growth at a rate greater than normally expected for chronological age. This, so-called 'catch-up growth', tends to return affected individuals to their original growth trajectory. How catch-up growth is controlled or regulated remains unclear. Histological studies of the proximal tibial growth plate in rats which had been undernourished by half-feeding between the 56th and 70th days (post partum) showed them to be thinner than those of controls. Upon inspection, the chondrocytes also appeared to be flatter in profile and less numerous. However, during the catch-up period, when food was again allowed ad libitum, previously undernourished rats showed wider growth plates with rounder and more numerous chondrocytes than controls. It was noted that such changes were similar to those that accompany sectioning of the periosteum and the release of growth restraint that results. A study of the flexibility of the sacro-iliac joint in the same animals suggested that it was less flexible following undernutrition but more so during the catch-up period, by comparison with controls. This was in contrast to a progressive loss of flexibility shown by controls during the same period and would appear to result from changes in the characteristics of the fibrous connective tissues associated with the joint. If this reflects a more generalized change in such connective tissues, but particularly the periosteum, ligaments and joint capsules, this may represent a means whereby skeletal growth rate during the catch-up period may be influenced in a co-ordinated manner.
    • Slowing the Reconstitution of W′ in Recovery With Repeated Bouts of Maximal Exercise

      Chorley, Alan; Bott, Richard; Marwood, Simon; Lamb, Kevin L.; University of Chester; Liverpool Hope University (Human Kinetics, 01/02/2019)
      Purpose: This study examined the partial reconstitution of the work capacity above critical power (W′) following successive bouts of maximal exercise using a new repeated ramp test, against which the fit of an existing W′ balance (W'bal) prediction model was tested. Methods: Twenty active adults, consisting of trained cyclists (n = 9; age 43 [15] y, V˙ O2max 61.9 [8.5] mL·kg−1·min−1) and untrained cyclists (n = 11; age 36 [15] y, V˙ O2max 52.4 [5.8] mL·kg−1·min−1) performed 2 tests 2 to 4 d apart, consisting of 3 incremental ramps (20 W·min−1) to exhaustion interspersed with 2-min recoveries. Results: Intratrial differences between recoveries demonstrated significant reductions in the amount of W′ reconstituted for the group and both subsets (P < .05). The observed minimal detectable changes of 475 J (first recovery) and 368 J (second recovery) can be used to monitor changes in the rate of W′ reconstitution in individual trained cyclists. Intertrial relative reliability of W′ reconstitution was evaluated by intraclass correlation coefficients for the group (≥.859) and the trained (≥.940) and untrained (≥.768) subsets. Absolute reliability was evaluated with typical error (TE) and coefficient of variation (CV) for the group (TE ≤ 559 J, CV ≤ 9.2%), trained (TE ≤ 301 J, CV ≤ 4.7%), and untrained (TE ≤ 720 J, CV ≤ 12.4%). Conclusions: The reconstitution of W′ is subject to a fatiguing effect hitherto unaccounted for in W'bal prediction models. Furthermore, the W'bal model did not provide a good fit for the repeated ramp test, which itself proved to be a reliable test protocol.