Articles

Read the latest articles relevant to your clinical practice, including exclusive insights from Healthed surveys and polls.

By reading selected clinical articles, you earn CPD in the Educational Activities (EA) category whenever you click the “Claim CPD” button and follow the prompts. 

Dr Daman Langguth

Research in rheumatoid arthritis (RA) over the past 10 years has gained significant ground in both pathophysiological and clinical understanding. It is now known that early aggressive therapy within the first three months of the development of joint symptoms decreases the chance of developing severe disease, both clinically and radiologically. To enable this early diagnosis, considerable effort has been made to discover serological markers of disease.

Around 80% of RA patients become rheumatoid factor positive (IgM RF), though this can take many years to occur. In other words, IgM RF (hereafter called RF) has low sensitivity in the early stages of RA. Furthermore, patients with other inflammatory diseases (including Sjögren’s syndrome and chronic viral and bacterial infections) may also be positive for RF, so RF has relatively low specificity for RA. RF is therefore not an ideal test for the early detection and confirmation of RA.

There has been an ongoing search for an autoantigen in RA over the past 30 years. It has long been known that senescent cells display antigens not present on other cells, and that RA patients may make antibodies against them. This was first reported in 1964 with the anti-perinuclear factor (APF) antibodies directed against senescent buccal mucosal cells, but the test was challenging to perform and interpret. These cells were later found to contain filament aggregating protein (filaggrin). Subsequently, in 1979, antibodies directed against keratin (anti-keratin antibodies, AKA) in senescent oesophageal cells were discovered. In 1994, another antibody, named anti-Sa, was found to react against modified vimentin in mesenchymal cells. In the late 1990s, antibodies directed against citrullinated peptides were ‘discovered’. In fact, we now know that all of the aforementioned antibodies detect similar antigens.

When cells grow old, some of their structural proteins undergo citrullination under the direction of cellular enzymes: arginine residues undergo deimination to form the non-standard amino acid citrulline. Citrullinated peptides fit better into the HLA-DR4 molecules that are strongly associated with RA development, severity and prognosis. Many types of citrullinated peptides are present in the body, both inside and outside joints. Sera from individual RA patients contain antibodies that react against different citrullinated peptides, but no individual’s antibodies react against all possible citrullinated peptides. Thus, to improve the sensitivity of citrullinated peptide assays, cyclic citrullinated peptides (CCP) have been artificially generated to mimic a range of conformational epitopes present in vivo. It is these artificial peptides that are used in the second-generation anti-CCP assays. Sullivan Nicolaides Pathology uses the Abbott Architect assay, which is standardised against the Axis-Shield (Dundee, UK) second-generation CCP assay.

False positive CCP antibodies have recently been reported in acute viral (e.g. EBV, HIV) and some atypical bacterial (e.g. Q fever) seroconversions. These antibodies may persist for a few months after seroconversion but do not predict inflammatory arthritis in these individuals.

Anti-CCP assays

CCP antibodies alone give a sensitivity of around 66% in early RA, similar to RF, though they have a much higher specificity of >95% (compared with around 80% for RF). The combination of anti-CCP and RF tests is now considered the ‘gold standard’ in the early detection of RA. Combining RF with anti-CCP enables approximately 80% of RA patients (i.e. 80% sensitivity) to be detected in the early phase (less than six months’ duration) of the disease.

The presence of anti-CCP antibodies has also been shown to predict which RA patients will go on to develop more severe joint disease, both radiologically and clinically, and they appear to be a better marker of disease severity than RF. Anti-CCP antibodies can be present prior to the development of clinical disease, and thus may predict the development of RA in patients with uncharacterised recent-onset inflammatory arthritis.

At present, it is not known whether monitoring the level of these antibodies will be useful as a marker of disease control, though some data in patients treated with biologic agents (e.g. etanercept, infliximab) suggest they may be. It has not been determined whether the absolute level of CCP antibodies allows further disease risk stratification. Our pathology laboratories report CCP antibodies quantitatively: normal is less than 5 U/mL, with a reporting range of up to 2000 U/mL.
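To put those sensitivity and specificity figures in context, here is a worked example using the standard positive predictive value (PPV) formula. The 20% pre-test probability is an assumption chosen purely for illustration; only the sensitivities and specificities come from the text above.

    \[
    \text{PPV} = \frac{\text{sensitivity} \times \text{prevalence}}{\text{sensitivity} \times \text{prevalence} + (1 - \text{specificity}) \times (1 - \text{prevalence})}
    \]

For anti-CCP (sensitivity 0.66, specificity 0.95) at an assumed pre-test probability of 0.20, PPV = 0.132 / (0.132 + 0.05 × 0.80) ≈ 0.77, whereas for RF alone (specificity around 0.80) the same calculation gives PPV = 0.132 / (0.132 + 0.20 × 0.80) ≈ 0.45. It is the higher specificity of anti-CCP that drives its greater positive predictive value in early disease.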
References
  1. ACR Position statement on anti-CCP antibodies. http://www.rheumatology.org/publications hotline/1003anticcp.asp
  2. Forslind K, Ahlmen M, Eberhardt K, et al. Prediction of radiologic outcome in early rheumatoid arthritis in clinical practice: role of antibodies to citrullinated peptides (anti-CCP). Ann Rheum Dis 2004; 63: 1090-5.
  3. Huizinga TWJ, Amos CI, van der Helm-van Mil AHM, et al. Refining the complex rheumatoid arthritis phenotype based on specificity of the HLA-DRB1 shared epitope for antibodies to citrullinated proteins. Arthritis Rheum 2005; 52: 3433-8.
  4. Lee DM, Schur PH. Clinical utility of the anti-CCP assay in patients with rheumatic disease. Ann Rheum Dis 2003; 62: 870-4.
  5. Van Gaalen FA, Linn-Rasker SP, van Venrooij WJ, et al. Autoantibodies to cyclic citrullinated peptides predict progression to rheumatoid arthritis in patients with undifferentiated arthritis. Arthritis Rheum 2004; 50: 709-15.
  6. Zendman AJW, van Venrooij WJ, Pruijn GJM. Use and significance of anti-CCP autoantibodies in rheumatoid arthritis. Rheumatology 2006; 46: 20-5.

General Practice Pathology is a new regular column, each edition authored by an Australian expert pathologist on a topic of particular relevance and interest to practising GPs. The authors provide this editorial free of charge, as part of an educational initiative developed and coordinated by Sonic Pathology.
Dr Linda Calabresi

Looks like there is yet another reason to rethink the long-term use of proton pump inhibitors. And this one is a doozy.

According to a new study, recently published in the BMJ journal Gut, the long-term use of PPIs is linked to a more than doubling of the risk of developing stomach cancer. And before you jump to the reasonable conclusion that these patients might have had untreated Helicobacter pylori, this 2.4-fold increase in gastric cancer risk occurred in patients who had had H. pylori but had been successfully treated more than 12 months previously. What’s more, the risk increased proportionally with the duration and dose of PPI use, which the Hong Kong authors said suggested a cause-effect relationship. No such increased risk was found among patients who took H2 receptor antagonists.

While the study was observational, the large sample size (more than 63,000 patients with a history of effective H. pylori treatment) and the relatively long duration of follow-up (median 7.6 years) lend validity to the findings.

The link between H. pylori and gastric cancer has been known for decades, and eradicating H. pylori has been shown to reduce the risk of developing gastric cancer by 33-47%. However, the study authors said, it is also known that a considerable proportion of these individuals go on to develop gastric cancer even after the bacteria have been successfully eradicated. “To our knowledge, this is the first study to demonstrate that long-term PPI use, even after H. pylori eradication therapy, is still associated with an increased risk of gastric cancer,” they said.

By way of explanation, the researchers note that gastric atrophy is considered a precursor to gastric cancer. While gastric atrophy is a known sequela of chronic H. pylori infection, it could also be worsened and maintained by the profound acid suppression associated with PPI use, which could be why the risk persisted even after the infection had been treated.

Bottom line? According to the study authors, doctors need to ‘exercise caution when prescribing long-term PPIs to these patients even after successful eradication of H. pylori.’

Ref: Gut 2017; 0: 1-8. doi: 10.1136/gutjnl-2017-314605

Dr Linda Calabresi

Self-harm among teenagers is on the increase, a new study confirms, and frighteningly it is younger girls who appear most at risk. According to a population-based UK study, the annual incidence of self-harm among girls aged 13-16 increased by an incredible 68% between 2011 and 2014, from 46 per 10,000 to 77 per 10,000. The research, based on analysis of electronic health records from over 670 general practices, also found that girls were three times more likely to self-harm than boys among the almost 17,000 young people (aged 10-19 years) studied.

The importance of identifying these patients and implementing effective interventions was highlighted by the other major finding of this study. “Children and adolescents who harmed themselves were approximately nine times more likely to die unnaturally during follow-up, with especially noticeable increases in risks of suicide…, and fatal acute alcohol and drug poisoning,” the BMJ study authors said.

And if you were to think this might be a problem unique to the UK, the researchers in their article actually referred to an Australian population-based cohort study published five years ago, which found that 8% of adolescents aged less than 20 years reported harming themselves at some time.

The UK study also showed that the likelihood of referral was lowest in the areas that were most deprived, even though these were the areas where the incidence was highest - an example of the ‘inverse care law’, where the people most in need get the least care. While the link between social deprivation and self-harm might be understandable, the researchers were at a loss to explain the recent sharp increase in incidence among 13-16 year old girls in particular.

What they could say is that by analysing general practice data rather than inpatient hospital data, an additional 50% of self-harm episodes in children and adolescents were identified. In short, a self-harming teenager is much more likely to engage with their GP than to present to a hospital service. And even though, as the study authors concede, there is little evidence to guide the most effective way to manage these children and adolescents, the need for GPs to identify these patients and intervene early is imperative.

“The increased risks of all cause and cause-specific mortality observed emphasise the urgent need for integrated care involving families, schools, and healthcare provision to enhance safety among these distressed young people in the short term, and to help secure their future mental health and wellbeing,” they concluded.

BMJ 2017; 359: j4351 doi: 10.1136/bmj.j4351

Shomik Sengupta

Bladder cancer affects almost 3,000 Australians each year and causes more than 1,000 deaths, yet it often has a lower profile than other types of cancer such as breast, lung and prostate. The rate at which Australians are diagnosed with bladder cancer has decreased over time, which means the death rate has fallen too, although at a slower rate. This has led to an increase in the so-called mortality-to-incidence ratio, a key statistic that measures the proportion of people with a cancer who die from it. For bladder cancer this went up from 0.3 (about 30%) in the 1980s to 0.4 (40%) in 2010 (compared with 0.2 for breast and colon cancer and 0.8 for lung cancer). While the relative survival (survival compared with a healthy individual of similar age) for most other cancers has improved in Australia, for bladder cancer it has decreased over time.
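For readers unfamiliar with the statistic, the mortality-to-incidence ratio is simply annual deaths divided by annual new diagnoses. The worked figures below use the article’s rounded numbers and are indicative only.

    \[
    \text{MIR} = \frac{\text{deaths per year}}{\text{new diagnoses per year}}, \qquad 0.4 \times 3000 \approx 1200 \text{ deaths per year}
    \]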

Who gets bladder cancer?

Australia’s anti-smoking measures and effective quitting campaigns have led to a progressive reduction in smoking rates over the last 25 years. This is undoubtedly one key reason behind the observed decline in bladder cancer diagnoses over time. Environmental risk factors are thought to be more important than genetic or inherited susceptibility when it comes to bladder cancer. The most significant known risk factor is cigarette smoking. Bladder cancer risk also increases with exposure to chemicals such as dyes and solvents used in industries like hairdressing, printing and textiles. Appropriate workplace safety measures are crucial to minimising exposure, but the increased risk of occupational bladder cancer remains an ongoing problem. Certain medications, such as the chemotherapy drug cyclophosphamide, and pelvic radiation therapy have also been linked to bladder cancer. Patients who have had such treatment need to be specifically checked for the main symptoms and signs of bladder cancer, such as blood in urine. Men develop bladder cancer about three times as often as women. In part, this may have to do with the fact that men are exposed more to the risk factors. Conversely, women have a relatively poorer survival from bladder cancer compared to men. The reasons for this are unclear, but may partly relate to difficulties in diagnosis.

How is bladder cancer diagnosed?

At present, unlike other cancers such as breast cancer that can be picked up on mammograms, bladder cancer can’t be diagnosed at a stage where there are no symptoms. The usual symptoms that lead to the diagnosis of bladder cancer are blood in the urine (haematuria) or irritation during urination, such as frequency and burning. But these symptoms are quite common and, in most instances, caused by relatively benign problems such as infections, urinary stones or enlargement of the prostate.

So the key to bladder cancer diagnosis is for suspicious symptoms to be quickly and appropriately assessed by a doctor. Haematuria, in particular, always needs to be considered a serious symptom and investigated further. Up to 20% of patients with blood in the urine will turn out to have bladder cancer. Even if the bleeding occurs transiently, it could still be the first symptom that leads to the earliest possible diagnosis of bladder cancer. It shouldn’t be ignored, since delayed diagnosis of bladder cancer is known to worsen treatment outcomes.

Unfortunately, delays in the investigation of blood in the urine are well known to occur, and particular subgroups such as women and smokers tend to experience the greatest delays. Recent studies from Victoria and Western Australia have shown that some Australian patients experience significant and concerning delays in the investigation of urinary bleeding. Multiple factors contribute to such delays, including public perception and anxiety, lack of referral from general practitioners, and administrative and resourcing limitations at hospitals.

Patients reporting blood in their urine should be referred for scans such as an ultrasound or computerised tomography (CT) to assess the kidneys. They should also have their bladder examined internally (cystoscopy) using a fibre-optic instrument known as a cystoscope. Cystoscopy, a procedure usually performed by urologists (medical specialists in urinary tract surgery), remains the gold standard for diagnosing bladder cancer. Although diagnostic scans can help detect some bladder cancers, they have significant limitations in detecting certain types of tumours.

What happens if cancer is detected?

If a bladder cancer is noted on cystoscopy, it is removed and/or destroyed using instruments that can be passed into the bladder alongside the cystoscope. These procedures can be carried out at the same setting or subsequently, depending on available instruments and anaesthesia. The cancerous tissue removed is examined by a pathologist to confirm the diagnosis. This also provides additional information such as the stage of the cancer (how deep it has spread) and grade (based on appearance of the cancer cells), which help determine further management.

Are there any new developments?

Given that cystoscopy is an invasive procedure, there has been considerable effort to develop a non-invasive test, usually focusing on markers in the urine that can indicate the presence of cancer. To date, none of these have been reliable enough to obviate the need for cystoscopy.
Additionally, to enhance the ability to detect small bladder cancers, cystoscopy using blue light of a certain wavelength (360-450 nm) can be combined with the administration of a fluorescent marker (hexaminolevulinate) that highlights the cancerous tissue. While this approach does lead to the detection of more cancers, the resulting clinical benefit remains uncertain.

At present, immediate and appropriate investigation of suspicious symptoms, especially haematuria, using a combination of radiological scans and cystoscopy, remains the best means of diagnosing bladder cancer in an accurate and timely manner.

Shomik Sengupta, Professor of Surgery, Eastern Health Clinical School, Monash University. This article was originally published on The Conversation. Read the original article.
A/Prof Ken Sikaris

Blood tests for iron status are among the most commonly requested in clinical medicine. This is largely justified by the prevalence of iron deficiency, combined with the relatively common genetic condition haemochromatosis. In Australia, iron deficiency, defined by the Royal College of Pathologists of Australasia (RCPA) as a ferritin level below 30 ug/L, affects only 3.4% of men but 22.3% of women, according to the Australian Bureau of Statistics survey in 2011-2012. The issue is particularly marked in premenopausal women (16-44 years), 34.1% of whom are iron deficient. This is not surprising when nutrition surveys show that 40% of premenopausal women have inadequate dietary iron intake.

Despite this high prevalence, screening with iron studies is not currently recommended in any demographic. While many hospitals include a ferritin in the shared-care antenatal panel, most antenatal guidelines assume that an FBE will detect iron deficiency (which is probably wrong). Anaemia is a late stage of iron deficiency and ideally not a stage we should be waiting for. While it is true that microcytosis of red cells is often found in iron deficiency, this is unreliable because:
  • thalassaemia also causes microcytosis and
  • vegetarians usually also have B12 deficiency, which causes macrocytosis that ‘cancels out’ the low mean cell volume.
A high red cell distribution width (RDW), reflecting unevenness of red cell size, is a more sensitive indicator of early iron deficiency. The association between B12 deficiency and iron deficiency, especially in vegetarians, is so important that clinicians should always think of one when the other is detected.

It is estimated that one in eight Australians carries a predisposition to haemochromatosis. It is most common in British/Celtic peoples (C282Y and H63D are the common HFE gene mutations). When two HFE heterozygotes have children, one in four of the offspring will be homozygous, so roughly (1/8 × 1/8 × 1/4 =) 1 in 256 Australians is homozygous, but only about half develop disease. This may be because many have been protected from iron overload through diet or blood loss, such as blood donation. Even at a ‘disease’ prevalence of 1:400 to 1:500, haemochromatosis is a relatively common condition with significant potential morbidity that must be considered, especially in relatives (first-degree relatives can be gene tested without iron studies).

‘Iron overload’ is a little more awkward to define than iron deficiency. Serum ferritin levels above the population norms are not necessarily harmful, but if we waited for serum ferritin to reach dangerous levels (eg >1000 ug/L), we would not be preventing the sequelae of iron overload, which include liver disease as well as a higher risk of cardiovascular disease and premature arthropathy. Most laboratories therefore use upper clinical decision limits for ferritin of between 200 and 500 ug/L as a sensitive early warning of possible haemochromatosis. Should a high ferritin level be confirmed, gene testing can be rebated according to Medicare Benefits Schedule (MBS) requirements.

So far I have been discussing serum ferritin as the marker of iron stores; however, clinicians in Australia commonly request ‘iron studies’. Ferritin is the storage protein for iron that ‘leaks’ out of cells and most accurately reflects cellular iron stores. What, then, is the value of the other two measurements? Serum iron is a poor measure of iron status (we probably shouldn’t report it at all): it depends on meals, on the time of day (lower in the afternoon) and, most importantly, on the concentration of the protein that chaperones iron in the circulation, serum transferrin. Patients with higher transferrin levels will generally have higher serum iron levels. What matters is how much iron the transferrin is carrying, and this is calculated as the ‘transferrin saturation’ (a ratio of serum iron to transferrin). Transferrin is typically at least 10% saturated and uncommonly more than 45% saturated; levels outside this range support iron deficiency and iron overload respectively. While the transferrin saturation calculation corrects some of the unreliability of serum iron, saturation is still affected by diet, supplements and diurnal variation.

Clinicians in Australia are used to requesting the full iron studies panel. This is useful in suspected iron overload because, in haemochromatosis, the earliest change is a high transferrin saturation, which may be found years before the ferritin rises above the upper decision limit. A confirmed elevation of transferrin saturation also allows haemochromatosis gene testing to be MBS rebated.
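As a worked illustration of that ratio (the conversion from transferrin concentration to total iron-binding capacity, TIBC, is an approximation that varies slightly between laboratories, so the numbers are indicative only):

    \[
    \text{Transferrin saturation (\%)} = \frac{\text{serum iron (umol/L)}}{\text{TIBC (umol/L)}} \times 100, \qquad \text{TIBC (umol/L)} \approx 25 \times \text{transferrin (g/L)}
    \]

For example, a serum iron of 18 umol/L with a transferrin of 2.5 g/L gives a TIBC of about 62 umol/L and a saturation of roughly 29%, comfortably inside the typical 10-45% range.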
Iron deficiency can be identified by a low serum ferritin (less than 30 ug/L), and the rest of the iron studies may also be altered, with a low transferrin saturation and higher transferrin levels. However, a low transferrin saturation is non-specific (eg diet, afternoon samples), as is a high transferrin (eg oral contraceptive use, pregnancy). Unfortunately, some patients are misdiagnosed with iron deficiency on the basis of these non-specific tests even when the ferritin is clearly normal, and there is discussion of removing the ability to request serum iron and transferrin when looking for iron deficiency because of the potential harms of misinterpretation.

For clinicians there are even more important confounders than these physiological effects on serum iron and transferrin saturation. When inflammation (the ‘acute phase reaction’) is present, the body hides away its iron stores: iron release falls (low serum iron), transferrin production falls (it is a negative acute phase reactant) and, because iron is no longer being mobilised, it accumulates in cells (ferritin rises as if it were an acute phase protein). All the iron studies are therefore unreliable in the presence of inflammation, and if inflammation is suspected a serum CRP is the most sensitive and specific test to detect it. Otherwise, all we can say (as sketched in the example after this list) is:
  • if the ferritin is below 30 ug/L in the presence of inflammation, there must be iron deficiency; and
  • if the ferritin rises above 100 ug/L in the presence of inflammation, there was probably enough iron around anyway.
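A minimal sketch of those two rules of thumb, written as a hypothetical helper function (the ferritin thresholds are the ones quoted above; the function name and the CRP cut-off are illustrative assumptions, not a validated clinical algorithm):

    def interpret_ferritin(ferritin_ug_l, crp_mg_l, crp_cutoff=5.0):
        """Illustrative only: apply the two rules of thumb quoted above.

        ferritin_ug_l -- serum ferritin in ug/L
        crp_mg_l      -- C-reactive protein in mg/L, used here as the marker of inflammation
        crp_cutoff    -- assumed CRP threshold for 'inflammation present'
        """
        inflamed = crp_mg_l > crp_cutoff
        if ferritin_ug_l < 30:
            # Below the RCPA cut-off: iron deficient, whether inflamed or not.
            return "iron deficient"
        if inflamed and ferritin_ug_l > 100:
            # Ferritin well above 100 ug/L despite inflammation: stores probably adequate.
            return "iron stores probably adequate"
        if inflamed:
            # Ferritin 30-100 ug/L with inflammation: iron studies are unreliable here.
            return "indeterminate - consider repeating once inflammation settles"
        return "iron deficiency not demonstrated"

    # Example: a ferritin of 60 ug/L with a CRP of 40 mg/L is indeterminate.
    print(interpret_ferritin(60, 40))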
There is a test, called ‘soluble transferrin receptor’, that helps separate true iron deficiency from the effects of inflammation in anaemic patients with inflammatory disorders, but it is not covered by the Medicare Benefits Schedule, although the RCPA has made a submission to government.
General Practice Pathology is a new regular column, each edition authored by an Australian expert pathologist on a topic of particular relevance and interest to practising GPs. The authors provide this editorial free of charge, as part of an educational initiative developed and coordinated by Sonic Pathology.

Dr Linda Calabresi

A case history recently published in the BMJ highlights one of those uncommon but very diagnosable conditions, if you just spot the clues.

According to the French authors, the 62-year-old man presented with a history of recurrent oral ulcers sometimes accompanied by laryngitis and conjunctivitis. During one of these episodes he had developed an acute fever, a sore throat on swallowing and laryngitis; he sought medical attention and was prescribed ibuprofen and clarithromycin. Two days later, the man developed conjunctivitis, erosions of the mucous membrane of the mouth and skin lesions. Not surprisingly, the man’s attending doctors thought he had Stevens-Johnson syndrome and sent him to hospital.

Full examination showed painful diffuse erosions of the mucous membranes not only of the oral cavity but also of the nose, the epiglottis and the glans. The skin lesions were noted to be target lesions involving three raised concentric red rings, and they were found on the trunk, lower limbs and scrotum. He was febrile, fatigued and eating was painful. Diagnostic tests showed a raised CRP but little else. The skin biopsy showed a dense lichenoid lymphocytic infiltrate.

So did he have Stevens-Johnson syndrome? Apparently not. The target lesions with their three concentric rings and the widespread oral, ocular and genital mucous membrane erosions are in fact suggestive of erythema multiforme, and specifically, because more than one mucous membrane was involved, the more severe form: erythema multiforme major. The authors did concede that erythema multiforme is frequently confused with Stevens-Johnson syndrome, and even toxic epidermal necrolysis (TEN), which are life-threatening conditions. The features that helped distinguish this as a case of erythema multiforme rather than the other, more serious alternatives were:
    • the previous episodes of oral ulcers, sometimes with laryngitis and conjunctivitis. Even though erythema multiforme is rare, some 40% of the people who do get it experience multiple recurrences, often triggered by the herpes simplex virus.
    • erythema multiforme is generally a post-infectious disease, most commonly triggered by herpes simplex (which was tricky in this case, as viral cultures from the patient’s mouth were negative), whereas 85% of Stevens-Johnson syndrome and toxic epidermal necrolysis cases are drug-induced.
    • erythema multiforme usually begins with systemic symptoms such as fever and then mucosal involvement; the skin lesions typically appear later. In Stevens-Johnson syndrome and toxic epidermal necrolysis the severe cutaneous reaction is usually the first sign of the condition, occurring four to 28 days after taking the offending drug.
    • finally, the skin lesions themselves are different. As in this case, the typical skin lesions of erythema multiforme are three raised concentric rings that usually respond to topical steroids and oral antihistamines. In Stevens-Johnson syndrome and toxic epidermal necrolysis the lesions are ‘atypical targets with two concentric rings and purpuric macules that evolve into blisters and skin that detaches with finger friction (Nikolsky sign).’
And what happened to this patient? According to the case report, he wound up spending eight days in hospital, treated with enteral nutrition, topical steroids and steroid mouthwashes. All the skin and mucous membrane lesions healed and he fully recovered. Interestingly, he did have minor relapses annually for a number of years, but these weren’t severe enough to warrant any further treatment.

Ref: BMJ 2017; 359 doi: https://doi.org/10.1136/bmj.j3817

Prof Shaun Roman

With the release of a new TV series based on Margaret Atwood’s The Handmaid’s Tale, and a recent study claiming male sperm counts are decreasing globally, fertility is in the spotlight. Many want to know whether the dystopian future Atwood created, in which the world has largely become infertile, is in fact possible. And are we on our way there already?

What this latest study found

The recent paper that hit headlines all over the world highlighted the issue of declining sperm numbers in Western men. The study is a meta-analysis, which gathers together similar studies and combines the results. Each of the studies in the analysis has different men assessed at different times by different researchers. This means, as a whole, it is not as powerful as a study examining the same men over time. And many of the individual studies assessed have their own problems.

So is fertility actually declining?

The current estimate is that Western men produce 50 million sperm per millilitre of ejaculate, which is lower than previously. However, only one sperm is needed to fertilise an egg, so 50 million sperm per ml suggests human males don’t have a problem just yet. There are data indicating that below 40 million sperm per ml there is a linear relationship between sperm numbers and the probability of pregnancy. The World Health Organisation (WHO) suggests 15 million sperm per ml is the minimum to be considered fertile. That minimum is based on men who have successfully fathered a child in the previous 12 months; by definition, 5% of those fertile men have counts below 15 million per ml and are still able to reproduce.

For females, the issue that needs to be understood is that there is already a small window of time in which women are fertile, and this is effectively shrinking as women become more educated and career-focused. Women have their highest number of eggs when they are still a fetus in their mother’s womb. About one sixth of those eggs are left at birth, and by puberty the number is 500,000 or less. From puberty until 37 years of age there is a steady decline from 500,000 to 25,000 eggs. After 37 years the rate of decline increases, and by menopause (average age of 51 in the US) only about 1,000 eggs remain. It’s important to realise these are average numbers; there is no guarantee a woman will have 25,000 eggs at 37.

The other issue is quality. Chromosomal problems (such as Down syndrome, where a person has three copies of chromosome 21 instead of two) increase with maternal age. IVF is seen as a way of rescuing fertility, but the often-quoted success rate of 41.5% applies to women younger than 35, and measures pregnancies, not live births. By 40 years of age that success rate is 22%, and by 43 years it is 5%.
In short, the situation for women is not great, but the numbers are not changing with time (estimates of fertility from 1600 to 1950 don’t differ).

What is affecting fertility today?

The key determinant of women’s fertility is education - not an individual’s education but that of the community as a whole. If a community becomes more educated, its fertility declines, as educated women are less likely to have children in their youth. Choosing to delay having a child is not the only issue, though. Lifestyle choices matter. We know smoking, alcohol and obesity all affect the number and quality of eggs a woman has. Because a female has all the eggs she will ever have while she is still in her mother’s womb, a mother’s smoking will affect those eggs. Smoking in pregnancy is declining slowly (from 15% in 2009 to 11% in 2014) but is still very high in the Indigenous population (45%).
Smoking and alcohol are said to be major factors affecting male sperm numbers, but the evidence is limited by the nature of the studies. The effects of obesity and stress have the clearest evidence; for example, increased levels of anxiety and stress have been associated with lower sperm counts. Life stress (defined as two or more stressful events in the last 12 months) has been found to have an effect, but not job stress.

For men, the numbers themselves represent a blunt measure of fertility. It’s the quality of the sperm produced that’s of concern. The WHO minimum is that only 4% of a man’s sperm need to be of good appearance for him to be considered fertile. It’s not really possible to tell which of many factors may be influencing sperm appearance.

Problems with studying fertility

While we can talk about what research says on fertility, there are a few inherent problems with researching in this field. Most of the data we have on sperm count come from two sources: men attending an infertility clinic, and those undergoing a medical prior to military service. The first is restricted to those who likely already have a problem. The second is limited to one age group. Meta-analyses, which combine the results from lots of studies, are limited to those all using the same tools and approaches so they can be compared. As a result, a large meta-analysis that suggested smoking is detrimental was limited to men attending an infertility clinic, which would indicate many of them are likely to be infertile anyway. Another big study used conscripts in the US and Europe but failed to find an association between fertility and alcohol consumption. This is because it only assessed the alcohol consumed the week prior to the medical - and most recruits probably wouldn’t be out drinking in the days leading up to their medical.

So could we become extinct?

The reproduction rate is below that required for total population replacement in the US, Australia, and many other countries. But the human population in total is still growing as it ages.
The start of this millennium also represented the time when births for women aged 30-34 overtook those in the 25-29 age group, and the 35-39 age group overtook the 20-24 age group. Teenage pregnancy (15-19 years) is now level with older mums (40-44) in Australia.

The quality of the sperm and egg is more important than the numbers. While we are still investigating what quality means for future generations, we do know that infertility is a predictor of increased death rates. Men diagnosed with infertility had a higher risk of developing diabetes, ischaemic heart disease, alcohol abuse and drug abuse.

Ultimately it’s not a numbers game but a quality game. This is true not just for the chances of having a child but of having a healthy child. More immediately, fertility is a predictor of general health. While it does not appear that we are going to become extinct soon (at least not through reproductive failure), sperm quality could be a signal of wider health problems and should be investigated further.

Shaun Roman, Senior Lecturer, University of Newcastle. This article was originally published on The Conversation. Read the original article.
Dr Krissy Kendall

Infertility, defined as the inability of a couple to conceive after at least 12 months of regular, unprotected sex, affects about 15% of couples worldwide. Several factors can lead to infertility but, specifically in men, infertility has been linked to lower levels of antioxidants in semen. This exposes sperm to an increased risk of damage from chemically reactive species containing oxygen. These reactive oxygen species are naturally involved in various pathways essential for normal reproduction, but uncontrolled and excessive levels result in damage to cells (or “oxidative stress”). This can affect semen health and damage the DNA carried in the sperm, leading to the onset of male infertility.

Can supplements improve sperm health?

Antioxidants have long been used to manage male infertility, as they can help alleviate the detrimental effects of reactive oxygen species and oxidative stress on sperm health. Generally speaking, studies have shown favourable effects with supplementation, but results have been rather inconsistent due to large variations in study design, antioxidant formulations and dosages. Several lab studies have reported beneficial effects of antioxidants such as vitamins E and C on sperm motility and DNA integrity (the absence of breaks or nicks in the DNA), but these results haven’t been replicated in humans. There is some research suggesting six months of supplementation with vitamin E and selenium can increase sperm motility and the percentage of healthy, living sperm, as well as pregnancy rates. Other studies have found improvements in sperm volume, DNA damage and pregnancy rate following treatment with the supplements l-carnitine (an amino acid), Coenzyme Q10 and zinc. But there seems to be an equal number of studies showing no improvements in sperm motility, sperm concentration, the size or shape of sperm, or other measures. Perhaps it’s this inconsistency in results, and the overall desire to improve fertility rates, that has led some companies to create their own sperm-saving cocktails.

The research behind Menevit

Menevit is a male fertility supplement aimed at promoting sperm health. It’s a combination of antioxidants, including vitamins C and E, zinc, folic acid, and selenium, formulated to maintain sperm health.
The makers of Menevit claim the antioxidants it contains can help maintain normal sperm numbers, improve sperm swimming, improve sperm-egg development, and protect against DNA damage. To date, there has been only one published study conducted on the actual product, and its lead author is also the inventor of the product.

In that study, after three months of supplementation, participants taking Menevit recorded a statistically significant improvement in pregnancy rate compared with the control group (38.5% versus 16%), but no significant changes in egg fertilisation or embryo quality were detected between the two groups. At first glance these findings may seem promising, but a few things warrant attention. As mentioned, the principal investigator of the study is also the inventor of the product, something many would argue is a conflict of interest. The study also reported no improvements in DNA integrity or sperm motility, the two most cited benefits of supplementing with antioxidants. Furthermore, the study looked at who was pregnant three months later, not who actually gave birth to a child.

The dosages used in the Menevit product are also much lower than those used in previous studies. For example, significant improvements in total sperm count have been observed following 26 weeks of supplementation with folic acid and zinc, but that study used 66mg of zinc (compared with 25mg in Menevit) and 5mg of folic acid (compared with 500 micrograms in Menevit). It’s hard to say you would get the same results from the lower doses. And studies showing improvements in sperm motility and DNA integrity following vitamin E and selenium supplementation used much larger doses than those found in Menevit: the dosage of vitamin E in previous studies has ranged from 600 to 1,490 international units, while Menevit has 400 international units, and the dose of selenium studied was 225 micrograms, compared with only 26 micrograms in the Menevit product.
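To make that gap concrete, the small script below simply restates the doses quoted above and computes the ratios; the figures come from the article, and the script itself is only an illustrative convenience.

    # Doses quoted in the article: amounts used in the positive trials versus Menevit.
    # For vitamin E, the trials used a range (600-1,490 IU); the lower bound is used here.
    trial_vs_menevit = {
        # nutrient: (dose in earlier studies, dose in Menevit, unit)
        "zinc":       (66,   25,  "mg"),
        "folic acid": (5000, 500, "micrograms"),
        "vitamin E":  (600,  400, "IU"),
        "selenium":   (225,  26,  "micrograms"),
    }

    for nutrient, (study_dose, menevit_dose, unit) in trial_vs_menevit.items():
        ratio = study_dose / menevit_dose
        print(f"{nutrient}: studies used {study_dose} {unit}, Menevit contains "
              f"{menevit_dose} {unit} (study dose is about {ratio:.1f}x higher)")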

Your best bet for healthy sperm

Before you stock up on every antioxidant out there, take a quick look at your lifestyle. Sperm health can be affected by unhealthy lifestyle factors like poor diet, alcohol consumption, smoking, and stress.
Following a diet comprised of whole foods (not packaged, processed foods), avoiding excessive consumption of alcohol, engaging in regular physical activity, and not smoking can go a long way when it comes to improving the health of your sperm.

As for sperm supplements such as Menevit, there’s a great deal of research that still needs to be done before we can say for sure it’s a worthwhile investment.

Krissy Kendall, Lecturer of Exercise and Sports Science, Edith Cowan University. This article was originally published on The Conversation. Read the original article.
Prof Louise Newman

The Australian newspaper recently reported the royal commission investigating institutional child sex abuse was advocating psychologists use “potentially dangerous” therapy techniques to recover repressed memories in clients with a history of trauma. The reports suggest researchers and doctors are speaking out against such practices, which risk implanting false memories in the minds of victims.

The debate about the nature of early trauma memories and their recovery isn’t new. Since Sigmund Freud developed the idea of “repression” – where people store away memories of stressful childhood events so they don’t interfere with daily life – psychologists and legal practitioners have been arguing about the nature of memory and whether it’s possible to create false memories of past situations.

Recovery from trauma for some people involves recalling and understanding past events. But repressed memories, where the victim remembers nothing of the abuse, are relatively uncommon, and there is little reliable evidence about their frequency in trauma survivors. According to reports from clinical practice and experimental studies of recall, most patients can at least partially recall events, even if elements of these have been suppressed.

What are repressed memories?

Freud introduced the concept that child abuse is a major cause of mental disorders such as hysteria, also known as conversion disorder. People with these disorders could lose bodily functions, such as the ability to move one of their limbs, following a stressful event. The concept of repressing traumatic memories was part of this model. Repression, as Freud saw it, is a fundamental defensive process where the mind forgets, or places elsewhere, events, thoughts and memories we cannot acknowledge or bear.

Freud also suggested that if these memories weren’t recalled, it could result in physical or mental symptoms. He argued symptoms of a mental disorder can be a return of the repressed memories, or a symbolic way of communicating a traumatic event. An example would be suddenly losing the ability to speak when someone has a terrible memory of trauma they feel unable to disclose.

This idea of hidden traumas and their ability to influence psychological functioning despite not being recalled or available to consciousness has shaped much of our current thinking about symptoms and the need to understand what lies behind them. Those who accept the repression interpretation argue children may repress memories of early abuse for many years and that these can be recalled when it’s safe to do so. This is variously referred to as traumatic amnesia or dissociative amnesia. Proponents accept that repressed traumatic memories can be accurate and can be used in therapy to recover memories and build up an account of early experiences.

False memory and the memory wars

Freud later withdrew his initial ideas about abuse underlying mental health disorders. He instead drew on his belief in the child’s commonly held sexual fantasies about their parents, which he said could influence the formation of memories that did not mean actual sexual behaviour had taken place. This may have been Freud caving in to the social pressures of his time.

This interpretation lent itself to the false memory hypothesis. Here the argument is that memory can be distorted, sometimes even by therapists, and this can influence the experience of recalling memories, resulting in false memories. Those who hold this view oppose therapy approaches based on uncovering memories and believe it’s better to focus on recovery from current symptoms related to trauma. This group point out that emotionally traumatic events can be more vividly remembered than non-traumatic ones, so it doesn’t follow that such events would be repressed. They remain sceptical about reclaimed memories and even more so about therapies based on recall – such as recovered memory therapy and hypnosis.

The 1990s saw the height of these memory wars, as they came to be known, between proponents of repressed memory and those of the false memory hypothesis. The debate was influenced by increasing awareness and research on memory systems in academic psychology and an attitude of scepticism about therapeutic approaches focused on encouraging recall of past trauma. In 1992, the parents of Jennifer Freyd, who had accused her father of sexual assault, founded the False Memory Syndrome Foundation. The parents maintained Jennifer’s accusations were false and encouraged by recovered memory therapy. While the foundation has claimed false memories of abuse are easily created by therapies of dubious validity, there is no good evidence of a “false memory syndrome” that can be reliably defined, nor any evidence of how widespread the use of these types of therapies might be.

An unhelpful debate

Both sides do agree that abuse and trauma during critical developmental periods are related to both biological and psychological vulnerability. Early trauma creates physical changes in the brain that predispose the individual to mental disorders in later life. Early trauma has a negative impact on self-esteem and the ability to form trusting relationships. The consequences can be lifelong.

A therapist’s role is to help abuse survivors deal with these long-term consequences and gain better control of their emotional life and interpersonal functioning. Some survivors will want to have relief from ongoing symptoms of anxiety, memories of abuse and experiences such as nightmares. Others may express the need for a greater understanding of their experiences and to be free from feelings of self-blame and guilt they may have carried from childhood. Some individuals will benefit from longer psychotherapies dealing with the impact of child abuse on their lives. Most therapists use techniques such as trauma-focused cognitive behavioural therapy, which aren’t aimed exclusively at recovering memories of abuse.

The royal commission has heard evidence of the serious impact of being dismissed or not believed when making disclosures of abuse and seeking protection. The therapist should be respectful and guided by the needs of the survivor.
Right now, we need to acknowledge child abuse on a large scale and develop approaches for intervention. It may be time to move beyond these memory wars and focus on the impacts of abuse on victims; impacts greater than the direct symptoms of trauma. It’s vital psychotherapy acknowledges the variation in responses to trauma and the profound impact of betrayal in abusive families. Repetition of invalidation and denial should be avoided in academic debate and in clinical approaches.

Louise Newman, Director of the Centre for Women’s Mental Health at the Royal Women’s Hospital and Professor of Psychiatry, University of Melbourne. This article was originally published on The Conversation. Read the original article.
Dr Linda Calabresi

New guidelines suggest excising a changing skin lesion after one month.

As with facing an exam you haven’t studied for, or finding yourself naked in a public place, missing a melanoma diagnosis is the stuff of nightmares for most GPs. In a condition where the prognosis can vary dramatically according to a fraction of a millimetre, the importance of early detection is well known and keenly felt by clinicians.

According to new guidelines published in the MJA, Australian doctors’ ability to detect classical melanomas early has been improving, as evidenced by both the average thickness of the tumour when it is excised and the improved mortality rates associated with these types of tumours. Unfortunately, however, atypical melanomas are still proving a challenge. Whether they are nodular, occurring in an unusual site or lacking the classic pigmentation, atypical melanomas are still not being excised until they are significantly more advanced, and consequently the prognosis associated with these lesions remains poor.

As a result, a Cancer Council working group has revised the clinical guidelines on melanoma, in particular focusing on atypical presentations. The upshot of their advice? If a patient presents with any skin lesion that has been changing or growing over the course of a month, that lesion should be excised. The Australian guideline authors suggest that in addition to assessing lesions according to the ABCD criteria (asymmetry, border irregularity, colour variegation, and diameter >6 mm), we should add EFG (elevated, firm and growing) as independent indicators of possible melanoma. “Any lesion that is elevated, firm and growing over a period of more than one month should be excised or referred for prompt expert opinion,” they wrote.

In their article, the working group do acknowledge that it is not always a delayed diagnosis that is to blame for atypical melanomas commonly being more advanced when excised. Some of these tumours, such as the nodular and desmoplastic subtypes, can grow very rapidly. “These subtypes are more common on chronically sun-damaged skin, typically on the head and neck and predominantly in older men,” the authors said. However, the most important common denominator of melanomas is that they are changing, they concluded. A history of change, preferably with some documentation of that change such as photographic evidence, should be enough to raise the treating doctor’s index of suspicion. “Suspicious raised lesions should be excised rather than monitored,” they concluded.

Ref: MJA Online 9.10.17 doi: 10.5694/mja17.00123

Dr Danbee Kim

Lately, some neuroscientists have been struggling with an identity crisis: what do we believe, and what do we want to achieve? Is it enough to study the brain’s machinery, or are we missing its larger design?

Scholars have pondered the mind since Aristotle, and scientists have studied the nervous system since the mid-1800s, but neuroscience as we recognize it today did not coalesce as a distinct study until the early 1960s. In the first ever Annual Review of Neuroscience, the editors recalled that in the years immediately after World War II, scientists felt a “growing appreciation that few things are more important than understanding how the nervous system controls behavior.” This “growing appreciation” brought together researchers scattered across many well-established fields – anatomy, physiology, pharmacology, psychology, medicine, behavior – and united them in the newly coined discipline of neuroscience.

It was clear to those researchers that studying the nervous system needed knowledge and techniques from many other disciplines. The Neuroscience Research Program at MIT, established in 1962, brought together scientists from multiple universities in an attempt to bridge neuroscience with biology, immunology, genetics, molecular biology, chemistry, and physics. The first ever Department of Neurobiology was established at Harvard in 1966 under the direction of six professors: a physician, two neurophysiologists, two neuroanatomists, and a biochemist. The first meeting of the Society for Neuroscience followed a few years later, where scientists from diverse fields met to discuss and debate nervous systems and behavior, using any method they thought relevant or optimal.

These pioneers of neuroscience sought to understand the relationship between the nervous system and behavior. But what exactly is behavior? Does the nervous system actually control behavior? And when can we say that we are really “understanding” anything?
Behavioral questions
It may sound pedantic or philosophical to worry about definitions of “behavior,” “control,” and “understanding.” But for a field as young and diverse as neuroscience, dismissing these foundational discussions can cause a great deal of confusion, which in turn can bog down progress for years, if not decades. Unfortunately for today’s neuroscientists, we rarely talk about the assumptions that underlie our research.

“Understanding,” for instance, means different things to different people. For an engineer, to understand something is to be able to build it; for a physicist, to understand something is to be able to create a mathematical model that can predict it. By these definitions, we don’t currently “understand” the brain – and it’s unclear what kind of detective work might solve that mystery.

Many neuroscientists believe that the detective work consists of two main parts: describing in great detail the molecular bits and pieces of the brain, and causing a reliable change in behavior by changing something about those bits and pieces. From this perspective, behavior is an easily observable phenomenon – one that can be used as a measurement.

But since the beginning of neuroscience, a vocal and persistent minority has argued that detective work of this kind, no matter how detailed, cannot bring us closer to “understanding” the relationship between the nervous system and behavior. The dominant, granular view of neuroscience contains several problematic assumptions about behavior, the dissenters say, in an argument most recently made earlier this year by John Krakauer, Asif Ghazanfar, Alex Gomez-Marin, Malcolm MacIver, and David Poeppel in a paper called “Neuroscience Needs Behavior: Correcting a Reductionist Bias.”

Source: Massive

Prof Sally Ferguson

Today, the “beautiful mechanism” of the body clock, and the group of cells in our brain where it all happens, have shot to prominence. The 2017 Nobel Prize in Physiology or Medicine has been awarded to Jeffrey C. Hall, Michael Rosbash and Michael W. Young for their work describing the molecular cogs and wheels inside our biological clock.

In the 18th century an astronomer by the name of Jean-Jacques d’Ortous de Mairan noted his plants opening and closing their leaves with the cycle of light and dark, with the leaves opening towards the sun. Being an inquisitive chap, he placed the plants in constant darkness and observed that the daily opening and closing of the leaves continued even in the absence of sunlight – indicative of an internal clock. Subsequent work by others also showed innate daily rhythms in other animals and plants, but the location and inner workings of the biological timing system remained a mystery.
The discovery of a misfiring gene that resulted in disrupted daily rhythms in fruit flies (the unsung heroes of the story) gave the first hint. Over several years, Hall, Rosbash and Young uncovered the machinery of the biological clock. It’s in your genes.

From the Latin circa (“about”) and diem (“a day”), circadian rhythms are internally driven cycles in all living things - including humans - that continue in the absence of external time cues. The sleep/wake cycle is one daily rhythm; core body temperature is another. While we have known since de Mairan that physiological systems are controlled internally, the way the clock works was a mystery.

The biological clock’s cycle is generated by a feedback loop. Genes are activated and trigger the production of proteins. When protein levels build up to a critical threshold in the cells, the genes are switched off. The proteins then degrade over time to a point that allows the genes to switch back on, starting the cycle again. This takes about 24 hours. But it isn’t just one gene doing all the work: Hall, Rosbash and Young found that many genes, proteins and regulators are involved in the complex machinery that keeps us ticking. Some molecules control the activation of genes, some are involved in the translation of light information from the eyes, and some govern the clock’s stability and precision, ensuring that it keeps ticking and remains in sync with the external environment.

While we already knew that an internally generated cycle existed, Hall, Rosbash and Young described the mechanisms by which the cycle is created and maintained at the molecular level. As a result of this work we now understand how internal rhythms remain synchronised with each other and with the external environment. We are starting to understand the range of health challenges experienced by those who have to work against their internal clocks, such as shift workers. We can predict times of the day and night when alertness and performance are likely to be impaired, and thus control the health and safety risks.
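As a purely illustrative aside, the feedback loop described above can be caricatured in a few lines of code: a ‘gene’ that is switched off once its own protein passes an upper threshold, and switched back on once the protein has degraded below a lower one. The thresholds and rates are invented for readability, not biological measurements; with these toy numbers one full cycle takes on the order of a day.

    def simulate_clock(hours=120, production=1.0, decay=0.9,
                       switch_on_below=2.0, switch_off_above=7.0):
        protein, gene_on, trace = 0.0, True, []
        for hour in range(hours):
            if gene_on:
                protein += production       # active gene: protein accumulates
            protein *= decay                # protein degrades every hour
            if gene_on and protein > switch_off_above:
                gene_on = False             # protein shuts its own gene off
            elif not gene_on and protein < switch_on_below:
                gene_on = True              # protein has degraded away: gene restarts
            trace.append((hour, round(protein, 2), gene_on))
        return trace

    # Printing the trace shows protein rising and falling in a repeating cycle.
    for hour, protein, gene_on in simulate_clock():
        print(hour, protein, "gene on" if gene_on else "gene off")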
And we can explain why, on the first morning after the start of daylight saving, waking up is so much harder. But don’t worry: the beautiful mechanism of your biological clock is designed to make adjustments based on the information it gets from the external environment, and those molecules will have you resynchronised in just a couple of days.

Sally Ferguson, Research Professor, CQUniversity Australia. This article was originally published on The Conversation. Read the original article.