Articles

Read the latest articles relevant to your clinical practice, including exclusive insights from Healthed surveys and polls.

By reading selected clinical articles, you earn CPD in the Educational Activities (EA) category whenever you click the “Claim CPD” button and follow the prompts. 

Dr Jenny Robson

Schistosomiasis, also known as bilharzia, is the second most prevalent tropical disease after malaria and a leading cause of morbidity in many parts of the world. It is not uncommon in Australia because many travellers visit endemic areas and swim or bathe in freshwater lakes and streams; Lake Kariba and Lake Malawi in Africa are commonly implicated. Immigrants and refugees from bilharzia-endemic countries may also present with untreated infection. With increasing travel to, and migration from, Africa and the Americas, knowledge of the dangers of schistosomiasis and the means of avoiding it is essential.

Schistosomiasis is caused by trematodes of the genus Schistosoma. The principal schistosomes of medical importance, S japonicum, S mansoni and S mekongi (intestinal schistosomiasis) and S haematobium (urinary schistosomiasis), infect people who enter water in which infected snails (the intermediate hosts) are living. The larval cercariae shed by the snail actively penetrate unbroken skin and develop into schistosomulae, which migrate through the lungs to the liver, where they mature into adults. Female worms lay eggs that pass through the vessels and tissues to the lumen of the gut or bladder (depending on where the worms localise). A proportion of eggs escape from the host and may be found in faeces or urine. The host's immune response to eggs that become lodged in the tissues is largely responsible for disease (Figure 1).

Geographic distribution

This is governed by the distribution of the intermediate host snail.

S haematobium: Africa, Middle East, India (Maharashtra only)
S japonicum: Philippines, Indonesia (Sulawesi only), parts of China
S mansoni: Africa, Middle East, some Caribbean islands, parts of South America (Brazil, Surinam, Venezuela)
S mekongi: Laos and Cambodia
S intercalatum: 10 countries within the rainforest belt of West Africa

At-risk groups

Owing to the absence of suitable snail hosts, transmission cannot occur in Australia, so a history of overseas travel or residence is essential for this diagnosis. Chronic schistosomiasis is more likely to be seen in migrants and refugees from endemic areas. In Australia, non-human trematodes, whose definitive hosts are freshwater and marine birds, may cause schistosomal dermatitis (cercarial dermatitis, or swimmer's itch). Onset is usually within 15 minutes of skin contact with cercariae.

Clinical presentation

Disease due to schistosomiasis depends on the infecting species and the intensity of infection. Acute schistosomiasis occurs two to 12 weeks post infection, with symptoms lasting from one day to a month or more; recurrence of symptoms two to three weeks later is common. Between 40% and 95% of individuals not previously exposed to infection develop symptoms, which include fever, malaise, headache, abdominal pain, diarrhoea and urticaria. Many have eosinophilia. After the initial acute onset, most become asymptomatic, although those with S haematobium infection may develop microscopic or macroscopic haematuria. Rare complications result from ectopic deposition of eggs in the spinal cord and brain. Most travellers are only lightly infected and are therefore often asymptomatic and unlikely to develop the severe manifestations of chronic schistosomiasis. Severe disease occurs in patients with heavy and prolonged infection: hepatosplenomegaly, portal hypertension, ascites and oesophageal varices may result from intestinal schistosomiasis, while frank haematuria with varying degrees of impairment of the urinary bladder and ureters may occur with S haematobium infection.

Diagnosis

The prepatent period of S japonicum, S mansoni and S mekongi is 6-8 weeks, and for S haematobium 10-12 weeks. Examination of faeces or urine before this time often yields false negative results. Similarly with serology, testing too early may produce false negatives; antibodies become detectable only slightly before eggs do. Eosinophilia (greater than 0.60 × 10⁹/L) is present in up to 80% of patients with infection; however, its absence does not exclude infection.
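To make the timing concrete, here is a minimal sketch in Python of the earliest dates at which testing is likely to be informative, based on the prepatent periods quoted above (the exposure date is hypothetical):

    from datetime import date, timedelta

    exposure = date(2018, 1, 1)  # hypothetical date of freshwater contact

    # Eggs of S japonicum, S mansoni and S mekongi appear at 6-8 weeks,
    # and of S haematobium at 10-12 weeks; testing earlier risks false negatives.
    stool_or_urine_from = exposure + timedelta(weeks=8)
    haematobium_urine_from = exposure + timedelta(weeks=12)

    print(stool_or_urine_from)     # 2018-02-26
    print(haematobium_urine_from)  # 2018-03-26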

Parasitologic examination

Diagnosis is by demonstration of eggs of S japonicum, S mansoni and S mekongi in faeces, or eggs of S haematobium in urine. At least two stool or urine specimens should be submitted for examination over a period of 10 days. Whilst eggs may be found in urine collected at any time of day, there is some evidence of a diurnal periodicity, with peak excretion around midday; collection of the terminal portion of urine passed between noon and 2 pm is therefore recommended. Schistosome eggs can also be demonstrated in rectal snips or bladder biopsies, and the viability of eggs can be assessed if the biopsies are received fresh.

Serologic examination

At our laboratory, antibodies are detected by enzyme immunoassay (EIA) using purified S mansoni egg antigen. Antibodies to this antigen may be undetectable in the prepatent period, which lasts 8-10 weeks. The test detects genus-specific antibodies, so in the absence of a diagnosis based on egg identification, travel history provides the best assessment of the likely species.

Interpretation

Parasitologic: Faeces are concentrated (modified formalin-ethyl acetate) and urine is either centrifuged or filtered; all of the concentrate or sediment is examined. Because of the low sensitivity of these techniques, a negative faecal or urine examination does not exclude schistosomiasis. Microscopic examination of eggs enables the species of parasite to be determined. At least two examinations on different days are recommended.

Serologic: Schistosome serology cannot distinguish between past and current infection, nor differentiate the infecting species, and recent infections may be serologically negative. Clinical history and further investigations should be considered when establishing the diagnosis.

Preventative measures

Cercariae can burrow through the mucosa of the mouth as well as through unbroken skin. All fresh water in endemic areas should be considered suspect, although snails tend to live in slow-flowing and stagnant waters rather than in rapids and fast-flowing waters. If freshwater contact is unavoidable, bathing water should be heated to 50°C for five minutes or treated with iodine or chlorine, as for the treatment of drinking water. Water can also be strained through paper filters, or allowed to stand for 2-3 days before use, which exceeds the usual life span of the cercariae; the container must, of course, be kept free of snails. High waterproof boots or hip waders are recommended if wading through streams or swamps, and it is wise to carry a pair of rubber gloves to protect the hands when contact with fresh water is anticipated. Vigorous towel drying, and rubbing alcohol onto exposed skin immediately after contact with untreated water, may also help reduce cercarial penetration. Vegetables should be well cooked and salads avoided, as these may have been washed in contaminated water, allowing cercariae to attach themselves to the leaves.

Treatment

Praziquantel (Biltricide) 20 mg/kg bodyweight every four hours for 2-3 doses, depending upon the species, is recommended; in travellers, this is likely to achieve cure rates in the order of 90%. Tablets are scored, come in a 600 mg strength and are dispensed six per pack. In patients at risk of chronic disease, such as refugees and migrants, it is important to be aware of the complications that may arise from chronic infection: liver fibrosis, portal hypertension and its sequelae, and colorectal malignancy in the intestinal forms; and obstructive uropathy, superimposed bacterial infection, infertility and possibly bladder cancer in the urinary form.
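As a worked example of the weight-based dosing above (the 60 kg patient is hypothetical):

    # Hypothetical patient: 60 kg adult, praziquantel 20 mg/kg per dose.
    weight_kg = 60
    dose_mg = 20 * weight_kg               # 1200 mg per dose
    tablets_per_dose = dose_mg / 600       # 2.0 scored 600 mg tablets
    course_tablets = tablets_per_dose * 3  # 6.0 tablets for a three-dose course
    print(dose_mg, tablets_per_dose, course_tablets)  # 1200 2.0 6.0

For this patient a three-dose course conveniently uses exactly one six-tablet pack; the scored tablets allow rounding to the nearest half tablet for other weights.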

Follow-up

Follow-up schistosomiasis serology is recommended 12 to 36 months after treatment, and its interpretation may differ between immigrants and returned travellers. Travellers may show a more rapid serological decline post-treatment because of a shorter duration of infection and a lower parasite burden, while immigrants may even show a rise in titre within the first 6-12 months post-treatment. Persisting titres should not automatically prompt retreatment; that decision should be based on symptoms, parasite identification or eosinophilia. Viable eggs may continue to be excreted for up to one month after successful treatment, and non-viable, degenerate eggs can be found in tissue biopsies for years after infection.
General Practice Pathology is a new regular column, each edition authored by an Australian expert pathologist on a topic of particular relevance and interest to practising GPs. The authors provide this editorial free of charge as part of an educational initiative developed and coordinated by Sonic Pathology.
Dr Linda Calabresi

Scarlet fever is on the rise. According to the latest issue of The Lancet Infectious Diseases, cases of scarlet fever in the UK reached a 50-year high last year, with a sevenfold increase in new cases in the last five years. Similar increases have been reported in a number of Asian countries including Vietnam, China, South Korea and Hong Kong. But public health authorities remain perplexed as to why the disease appears to be making a comeback. Detailed analysis of the causative organism shows different strains of the strep bacteria have been responsible for the UK and Asian outbreaks, so it is unclear whether the outbreaks are linked at all, or whether the resurgence has to do with external factors such as the immune status of the population or environmental influences. So far Australia appears yet to be affected by this increased incidence, but experts are warning we should not be complacent. Unlike in England, scarlet fever is not a notifiable disease in this country except in WA, and even in the UK, data suggest marked under-reporting. Scarlet fever is highly contagious and usually affects children under the age of 10, although it can occur in adults as well. While the bacterial infection, caused by Streptococcus pyogenes or group A Streptococcus (GAS), was a common cause of death in the 1800s, these days it is readily treated with antibiotics, usually penicillin. However, failure to recognise the condition and treat it appropriately can lead to complications such as pneumonia, and liver and kidney damage. Children with the infection typically experience sore throat, headache and fever, along with the characteristic papular pink-red rash that feels like sandpaper and the so-called strawberry tongue. Diagnosis is usually made via a throat swab. In an accompanying comment, Australian infectious diseases researchers Professor Mark Walker and Stephan Brouwer from the University of Queensland said, “Scarlet fever epidemics have yet to abate in the UK and northeast Asia. Thus, heightened global surveillance for the dissemination of scarlet fever is warranted.” In other words, be alert, people! Ref: Lancet Infect Dis 2017. Published online November 27, 2017. http://dx.doi.org/10.1016/S1473-3099(17)30693-X Comment: http://dx.doi.org/10.1016/S1473-3099(17)30694-1

Dr Perminder Sachdev

When people think of lithium, it’s usually to do with batteries, but lithium also has a long history in medicine. Lithium carbonate, or lithium salt, is mainly used to treat and prevent bipolar disorder. This is a condition in which a person experiences significant mood swings from highs that can tip into mania to lows that can plunge into depression. More recently, though, lithium has been explored as a potential preventive therapy for dementia. A recent paper even led some to question whether we should start putting lithium in drinking water to lower population dementia rates.
But despite early studies linking lithium to better cognitive function, there is currently not enough evidence to start using it as a preventive dementia strategy.

Lithium’s medical history

Lithium is a soft, light-silver metal present in many water systems, which means humans have always been exposed to it. Its concentration in water ranges from undetectable to very high, especially in geothermal waters and oil-gas field brines. The high concentration of lithium in some natural springs led to their waters being credited with healing properties, and in the 19th century lithium water was used to treat gout and rheumatism, with little objective evidence of any benefit. Early attempts to treat diseases such as kidney stones with higher doses of lithium often led to lithium toxicity – potentially irreversible damage to the kidneys and brain. The landmark event in the medical history of lithium was a 1949 paper by Australian psychiatrist John Cade in the Medical Journal of Australia, which demonstrated its benefit in bipolar disorder, then known as manic-depressive illness. The psychiatric community took some time to absorb this finding – the US regulator, the Food and Drug Administration, only approved lithium for use in 1970. After that, lithium as a drug transformed psychiatric practice, especially in the treatment and prevention of bipolar disorder, and led to extensive research into the mechanisms of lithium in the brain.

How lithium affects the brain

We don’t know exactly how lithium works, but we know it helps the way brain cell connections remodel themselves, usually referred to as synaptic plasticity. It also protects brain neurons by controlling cellular pathways, such as those involved in oxidative stress (where the brain struggles to control toxins) and inflammation. Animal studies have shown that long-term treatment with lithium leads to improvement in memory and learning. These observations led to studies of lithium’s protective effects on brain neurons in bipolar patients who had been taking it for a long time. One of these was a review of more than 20 studies, seven of which examined dementia rates in patients with mood disorders (such as bipolar) being treated with standard therapeutic doses of lithium. Five of these studies showed lithium treatment was related to low dementia rates. The review looked at four randomised controlled trials (comparing one group of patients on lithium with a group taking a placebo). These examined lithium’s effects on cognitive impairment (such as memory loss) or dementia over six to 15 months. One study did not show a statistically significant benefit on cognition but showed a biologically positive effect on the levels of a protein that promotes nerve cell growth. The other three showed statistically significant, albeit modest, beneficial effects of lithium on cognitive decline.

Lithium in water

A number of epidemiological studies – which track patterns and causes of diseases in populations – have linked lithium concentrations in drinking water with rates of psychiatric disease. In the above-mentioned review, nine out of 11 studies found an association between trace-dose lithium (doses low enough in drinking water that they are not detectable in the blood of people consuming it) and low rates of suicide and, less commonly, homicide, mortality and crime. More recently, researchers in Denmark conducted a nationwide study linking dementia rates, based on hospital records for people aged 50-90, with likely lithium exposure, estimated from lithium levels in the waterworks predominantly supplying the region where each person lived. People who developed dementia had been exposed to lower mean levels of lithium in their water than those who did not: 11.5 micrograms (µg) per litre compared with 12.2µg per litre. The Danish population is geographically stable, the health record linkage is excellent for such studies, and the reliability and validity of dementia diagnoses in Danish health registers are high. But the study had a number of limitations. The lithium intake was based on sampling of waterworks that provide water to only 42% of the population, and the sampling was done for only four years (2009-2013) and extrapolated to a lifetime. Many potential additional variables were not considered: a major source of lithium is diet, for instance, and some bottled water contains lithium, which the study did not take into account. An intriguing aspect of the results, for which no explanation was given, was that the relationship wasn't linear. That is, lower doses (5.1-10µg per litre) increased the risk of dementia by about 20%, whereas exposure to levels over 15µg per litre reduced the risk by about the same amount.

We’re not there yet

Observational studies (which draw conclusions by observing a sample of the population rather than running an experiment) have considerable merit in the epidemiology of dementia, but have sometimes led to blind alleys. Aluminium is a useful example, with its role in dementia still unclear after several decades of observations. A concern is that lithium may take the same path.
Lithium was once widely used as an elixir and even as a salt substitute, but was discredited because of lack of effectiveness, marked toxicity and early deaths. We must wait for more observational studies, with the rigour such studies warrant, before we start clinical tests of its effects in drinking water. We must also study the potential harmful effects of lithium on the thyroid and the kidney, as these organs bear the brunt of lithium's long-term harms. For now, there is insufficient evidence to add lithium to the drinking water. Perminder Sachdev, Scientia Professor of Neuropsychiatry, Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, UNSW. This article was originally published on The Conversation. Read the original article.
Dr Linda Calabresi

New US guidelines are the most aggressive yet in terms of targets for blood pressure control. Put out by the American College of Cardiology and the American Heart Association, and published in JAMA, the guidelines recommend we now consider anyone with a BP of 120/80 mmHg or above as having abnormal blood pressure. People with a systolic between 120 and 129 mmHg whose diastolic is still below 80 mmHg are considered to have elevated BP, while those with a systolic of 130-139 mmHg or a diastolic of 80-89 mmHg should now be classified as having stage 1 hypertension. An accompanying editorial estimates this reclassification will result in a 14% increase in the US population recognised as having hypertension. But before clinicians start reaching for the script pad, the guidelines recommend stage 1 hypertension be initially treated with non-pharmacological therapies – basically addressing the factors that most likely pushed the blood pressure up to start with: lose weight, exercise more, reduce salt intake, cut down on alcohol. The exception is the group of patients whose absolute 10-year CVD risk is 10% or more. In these cases, it's gloves off. The less-than-130/80 target for high-risk patients is very similar to Australian guidelines; what's different is that this is now a recommended target for everyone. The new US guidelines recommend everyone with a BP over 140/90 mmHg be treated with medication (preferably two agents) regardless of their absolute CV risk, whereas our Heart Foundation says to persist with lifestyle changes in people with very low CV risk and no other comorbidities until the 160/100 mmHg mark is reached. The other new development in the US guidelines is the recommendation to use BP measurements from ambulatory or home BP monitoring both to confirm a diagnosis of hypertension and to titrate therapy, which is in keeping with Australian recommended practice. The US guidelines were developed by an expert committee after examining the current evidence and conducting a series of systematic reviews looking at key clinical questions. “From a public health perspective, considering the high population-attributable risk of CVD associated with hypertension, the potential benefits of tighter control of hypertension are substantial,” the guideline authors wrote. However, they do acknowledge that such an aggressive approach carries risks, especially in the elderly. “Although studies do suggest that lower BP is better for most patients, including those older than 75 years, the balance of the potential benefits of hypertension management and medication costs, adverse effects, and polypharmacy must be considered for each individual patient,” they said. Ref: JAMA. Published online November 20, 2017. doi:10.1001/jama.2017.18706
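For readers who like the cut-points spelled out, here is a minimal sketch of the new classification in Python (the function name and the decision to report only four categories are ours, not the guideline's):

    def classify_bp(systolic: int, diastolic: int) -> str:
        """Classify clinic BP per the 2017 ACC/AHA categories described above."""
        if systolic >= 140 or diastolic >= 90:
            return "stage 2 hypertension"
        if systolic >= 130 or diastolic >= 80:
            return "stage 1 hypertension"
        if systolic >= 120:
            return "elevated"
        return "normal"

    print(classify_bp(124, 78))  # elevated
    print(classify_bp(132, 78))  # stage 1 hypertension
    print(classify_bp(145, 92))  # stage 2 hypertension

Note that either an elevated systolic or an elevated diastolic is enough to move a patient up a category, which is what drives the estimated 14% increase in people labelled hypertensive.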

Dr Lee Price

High-sensitivity (HS) troponin measurement in the emergency room/hospital setting is now widely established in Australia and is being recommended for widespread implementation in the USA. Lower cut-offs within the normal range may find value as a single determinant for exclusion purposes in the acute emergency ward setting; however, because HS troponin may be elevated in a number of non-coronary cardiac conditions, a rise and/or fall in the level is usually required for diagnosis of a coronary infarct.1 In unstable angina pectoris, the troponin level may be normal, as may an ECG recording if the patient is pain free at the time. Two articles in the Medical Journal of Australia published in the past three years have addressed the issues and problems surrounding ordering of the test in general practice.1,2 In both articles the authors agree that there are times when a single measurement of HS troponin can be useful clinically, and times when it can be counterproductive. Firstly, it is agreed that a patient with classical features of the acute coronary syndrome (ACS), with or without ECG findings, who has had pain in the 24 hours prior to assessment should be referred urgently to an emergency centre without troponin measurement. The turnaround time for an urgent troponin in most acute hospitals is of the order of 60 minutes or less; in the community private pathology scenario, turnaround time for a troponin result, even when treated as urgent, could be anywhere from four to 12 hours. That usually means the result is only available after hours, when the ordering clinician is frequently unavailable to receive or act on it. A troponin can be useful in the general practice setting if the patient has had atypical chest pain with a low but not negligible likelihood of ACS, or if the patient has been pain and symptom free for 24 hours with a normal ECG. After an infarct, troponin can remain elevated for over a week. For the laboratory, an abnormal troponin from an urgent request requires phoning the result to the clinician. This may be after hours – even after midnight. Usually the context of the result is known only to the requesting clinician. If a requesting clinician is unavailable to receive the result after hours, the patient will usually be contacted by a pathologist or emergency services; after-hours doctor services are often uninterested in receiving or acting on critical results such as troponin. In summary, there is a place for troponin measurement in general practice. Elevated levels are not uncommon due to causes other than ACS, and turnaround time for a result may be much longer when the sample is collected in a collection centre than in the hospital setting. When ordering an urgent troponin, please ensure that the laboratory has a valid after-hours contact number. References 1. Aroney CA, Cullen L. Appropriate use of serum troponin testing in general practice: a narrative review. MJA 2016; 205(2): 91-94. 2. Marshall GA, Wijeratne NG, Thomas D. Should general practitioners order troponin tests? MJA 2014; 201: 155-157.
General Practice Pathology is a new regular column, each edition authored by an Australian expert pathologist on a topic of particular relevance and interest to practising GPs. The authors provide this editorial free of charge as part of an educational initiative developed and coordinated by Sonic Pathology.

Dr Linda Calabresi

At a time when there is increasing pressure on GPs not to prescribe antibiotics, a new primary care study endorsing their role in the early treatment of uncomplicated UTI makes a welcome change. The trial, recently published in The BMJ, showed that early antibiotic treatment for a lower UTI not only significantly shortened the duration of symptoms, it also reduced the risk of the patient developing pyelonephritis. However, the researchers stopped short of recommending that all women with lower UTI symptoms commence antibiotics at first presentation. In light of rising rates of antibiotic resistance among UTI-causing bacteria, and the fact that little harm came to the women who were originally in the NSAID group but were eventually put on antibiotics, they effectively suggest a ‘just in case’ script. “[A] strategy of selectively deferring rather than completely withholding antibiotic treatment may be preferable for uncomplicated lower UTI,” they said. The only caveat they suggested to this strategy was for women with lower UTI symptoms and a CRP greater than 10mg/L, who appeared, in post hoc analysis, to have a greater likelihood of developing pyelonephritis and might therefore benefit from immediate antibiotics – although this would need further research, they suggested. The Swiss study, a randomised, double-blind trial, involved more than 250 women who presented to their GP with symptoms of an uncomplicated lower UTI and were found to have leucocytes, nitrites or both on a urine dipstick test. The women were randomised to receive either norfloxacin or the NSAID diclofenac. The choice of norfloxacin as the antibiotic, which does seem a little like using a hammer to crack a nut, was based on pre-determined high susceptibility rates in this Swiss population, and diclofenac was the NSAID chosen because it had the same dosing regimen as the norfloxacin. Overall, symptoms were gone after a median of two days in the antibiotic group but lasted twice as long in the NSAID group, with the majority of women in the NSAID group eventually needing antibiotics. Also of note, 5% of women in the NSAID group developed pyelonephritis compared with none in the antibiotic group. So even though research suggests we can safely withhold antibiotics in a number of self-limiting bacterial diseases such as acute otitis media, sinusitis and traveller’s diarrhoea, we should perhaps reconsider that strategy for treating UTIs, the study authors suggest. BMJ 2017; 359: j4784. http://dx.doi.org/10.1136/bmj.j4784

Diana Lucia

Abstaining from alcohol during preconception and pregnancy is usually considered to be the woman’s responsibility. The main concern surrounding alcohol exposure during pregnancy relates to well-established evidence that exposed newborns can develop a range of behavioural, physical and cognitive disabilities later in life. But recent research is also pointing to a link between alcohol and poor sperm development, meaning the onus is on expectant fathers too. A growing number of studies show biological fathers who drink alcohol may have a significant role in causing health problems in their children: paternal alcohol consumption has negative effects at all levels of the male reproductive system, as well as being linked to altered neurological, behavioural and biochemical outcomes in subsequent generations.

Men and risky drinking

In Australia, men consume alcohol at high or risky levels on a regular basis. National health guidelines recommend no more than two standard drinks on any day. According to the National Alcohol and Drug Knowledgebase, Australian men usually drink more alcohol than women: males are twice as likely as females to consume more than two standard drinks per day on average over a 12-month period (24% compared with 9.8%). And about a third of males reported exceeding, on a monthly basis, the guideline not to drink more than five standard drinks on a single occasion.

Booze and swimmers

These figures are alarming given the compelling evidence about the impact of excessive, chronic or binge alcohol consumption on sperm, semen quality, fertility and child health.
Animal studies have shown a single dose of ethanol delivered directly into the stomach (equivalent to a human binge) damages the testis, including the cells essential for sperm formation. In another experimental study, sperm health and fertility were assessed in male rats after administration of alcohol into the stomach for ten weeks. The results confirmed alcohol significantly reduced sperm concentration and the ability of the sperm to move properly, and none of the rats exposed to alcohol fertilised the females, despite confirmation of successful mating. Many other non-human studies have shown similar results, suggesting ethanol can damage sperm and fertility. Studies in humans support these findings. A recent study of 1,221 young Danish men (18-28 years of age) tracked alcohol consumption in the week preceding the study to determine its effects on semen quality (volume, concentration, total count and shape). The results showed sperm concentration, total sperm count and the percentage of sperm with normal shape worsened the more the men drank. This association was observed in men reporting at least five units of alcohol in a typical week, but was most pronounced in men with a typical intake of more than 25 units a week. This suggests even modest habitual alcohol consumption of more than five units a week can negatively affect semen quality.
A recent review of studies and meta-analysis of population data replicated many of these findings. The main results showed daily alcohol intake at moderate to high levels had a detrimental effect on semen volume and normal shape.

The effects on children

Limited studies have tracked the drinking patterns of fathers around the time of conception and the subsequent health outcomes of the child, but rodent models have shown changes in offspring weight and development, learning and activity, anxiety-related behaviours, and molecular and physiological effects. One study reported that women whose partners consumed ten or more drinks per week prior to conception had a two- to five-fold increased risk of miscarriage compared with those whose partners did not drink during preconception. Other studies provide preliminary evidence that paternal preconception alcohol use is associated with acute leukaemia at high-level use, heart malformation with daily use, microcephaly with low to moderate use, and effects on fetal growth and mild cognitive impairments.

How can alcohol affect kids before they’re born?

The exact mechanism by which alcohol alters developing sperm and the later health outcomes of the foetus is not yet fully understood. It’s been suggested alcohol can change the micro-environment within the testes, altering the development and maturation of the sperm. It’s also been suggested alcohol can influence sperm by creating genetic alterations and epigenetic marks, meaning changes to gene expression occur without changes to the underlying DNA sequence. These epigenetic marks can be transferred at the time of fertilisation, subsequently altering the molecular makeup of the early embryo, leading to alterations in foetal development and the potential to impair offspring health. The biggest hurdle for researchers now is continuing to translate findings from the basic sciences into more sophisticated research in humans. The next stage is to identify patterns of alcohol use by men during the preconception period and their effects on foetal and childhood outcomes in the Australian context. But most importantly, we need to realise decisions about alcohol use during the preconception period are not the sole responsibility of women. We need to be talking to men about these issues to ensure healthy outcomes for the baby. Diana Lucia, PhD candidate, Neuroscience, School of Biomedical Sciences, The University of Queensland, and Karen Moritz, Professor, The University of Queensland. This article was originally published on The Conversation. Read the original article.
Dr Linda Calabresi

There has been a lot of noise around opioid use lately, particularly in the States, where it has been declared a public health emergency. While concerted efforts are being made to ensure that patients experiencing chronic pain are not also left to deal with opioid addiction, in cases of severe, acute pain most doctors would consider pain relief the priority and opioids the gold standard. Well, it seems that too may need a rethink. According to a new randomised controlled trial just published in JAMA, an oral ibuprofen/paracetamol combination works just as well at reducing pain, such as that from a suspected fractured arm, as a range of oral opioid combinations including oxycodone and paracetamol. The US researchers randomised over 400 patients who presented to emergency with moderate to severe arm or leg pain, severe enough to warrant investigation by imaging, to receive either an oral paracetamol/ibuprofen combination or one of three opioid combination analgesics: oxycodone/paracetamol, hydrocodone/paracetamol or codeine/paracetamol. Two hours after ingestion there were no statistically significant or clinically important differences in pain reduction between the four groups. A limitation of the study was that it didn’t compare adverse effects; however, the study authors said their findings support the use of the paracetamol/ibuprofen combination as an alternative to oral opioid analgesics, at least in cases of severe arm or leg pain. Their findings also contradict the long-held idea that non-opioid painkillers are less effective than opioids, an idea that has underpinned the WHO pain ladder that has guided clinicians managing both cancer and non-cancer pain since 1986. Even though most scripts for opioids are written in the community, previous research has shown that long-term opiate use is higher among patients who were initially treated in hospital. “Typically, treatment regimens that provide adequate pain reduction in the ED setting are used for pain management at home,” an accompanying editorial stated. “[This trial] provides important evidence that nonopioid analgesia can provide similar pain reduction as opioid analgesia for selected patients in the ED setting.” What’s more, the effectiveness of this paracetamol and ibuprofen combination for moderate to severe pain may also translate to more widespread use for acute pain in other clinical conditions traditionally treated with opioid medication; however, this would need further investigation, the editorial author concluded. Ref: JAMA 2017; 318(17): 1661-1667. doi:10.1001/jama.2017.16190; JAMA 2017; 318(17): 1655-1656

Dr Daman Langguth

Research in rheumatoid arthritis (RA) over the past 10 years has gained significant ground in both pathophysiological and clinical understanding. It is now known that early aggressive therapy within the first three months of the development of joint symptoms decreases the chance of developing severe disease, both clinically and radiologically. To enable this early diagnosis, there has been considerable effort to discover serological markers of disease. Around 80% of RA patients become rheumatoid factor positive (IgM RF), though this can take many years to occur; in other words, IgM RF (hereafter called RF) has low sensitivity in the early stages of RA. Furthermore, patients with other inflammatory diseases (including Sjögren’s syndrome and chronic viral and bacterial infections) may also be positive for RF, so RF has a relatively low specificity for RA. The RF is, therefore, not an ideal test for the early detection and confirmation of RA. There has been an ongoing search for an auto-antigen in RA over the past 30 years. It has long been known that senescent cells display antigens not present on other cells, and that RA patients may make antibodies against them. This was first reported in 1964 with the anti-perinuclear factor (APF) antibodies directed against senescent buccal mucosal cells, but the test was challenging to perform and interpret. These cells were later found to contain filament aggregating protein (filaggrin). Subsequently, in 1979, antibodies directed against keratin (anti-keratin antibodies, AKA) in senescent oesophageal cells were discovered, and in 1994 another antibody, named anti-Sa, was found to react against modified vimentin in mesenchymal cells. In the late 1990s, antibodies directed against citrullinated peptides were ‘discovered’. In fact, we now know that all of the aforementioned antibodies detect similar antigens. When cells grow old, some of their structural proteins undergo citrullination under the direction of cellular enzymes: arginine residues are deiminated to form the non-standard amino acid citrulline. Citrullinated peptides fit better into the HLA-DR4 molecules that are strongly associated with RA development, severity and prognosis. Many types of citrullinated peptides are present in the body, both inside and outside joints, and sera from individual RA patients contain antibodies that react against different citrullinated peptides, though no individual’s antibodies react against all possible citrullinated peptides. Thus, to improve the sensitivity of the citrullinated peptide assays, cyclic citrullinated peptides (CCP) have been artificially generated to mimic a range of conformational epitopes present in vivo. It is these artificial peptides that are used in the second-generation anti-CCP assays. Sullivan Nicolaides Pathology uses the Abbott Architect assay, which is standardised against the Axis-Shield (Dundee, UK) second-generation CCP assay. False positive CCP antibodies have recently been reported in acute viral (e.g. EBV, HIV) and some atypical bacterial (Q fever) seroconversions. The antibodies may be present for a few months after seroconversion, but do not predict inflammatory arthritis in these individuals.

Anti-CCP assays

CCP antibodies alone give a sensitivity of around 66% in early RA, similar to RF, but have a much higher specificity of >95% (compared with around 80% for RF). The combination of anti-CCP and RF tests is now considered the ‘gold standard’ in the early detection of RA: combining the two enables approximately 80% of RA patients (i.e. 80% sensitivity) to be detected in the early phase (less than six months’ duration) of the disease. The presence of anti-CCP antibodies has also been shown to predict which RA patients will go on to develop more severe joint disease, both radiologically and clinically, and they appear to be a better marker of disease severity than RF. Anti-CCP antibodies have also been shown to be present before the development of clinical disease, and thus may predict the development of RA in patients with uncharacterised recent-onset inflammatory arthritis. At present, it is not known whether monitoring the level of these antibodies will be useful as a marker of disease control, though some data in patients treated with biologic agents (e.g. etanercept, infliximab) suggest they may be. Nor has it been determined whether the absolute levels of CCP antibodies allow further disease risk stratification. Our pathology laboratories report CCP antibodies quantitatively: normal is less than 5 U/mL, with a reportable range of up to 2000 U/mL.
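To illustrate what the higher specificity buys in practice, here is a minimal sketch in Python of a post-test probability calculation, assuming the sensitivity and specificity quoted above and a purely hypothetical 20% pre-test probability of RA in a patient with early inflammatory arthritis:

    # Assumptions: sensitivity 0.66, specificity 0.95 (quoted above);
    # the 20% pre-test probability is hypothetical, for illustration only.
    sens, spec, pretest = 0.66, 0.95, 0.20

    lr_pos = sens / (1 - spec)           # positive likelihood ratio ~13.2
    pre_odds = pretest / (1 - pretest)   # 0.25
    post_odds = pre_odds * lr_pos        # ~3.3
    post_prob = post_odds / (1 + post_odds)

    print(f"LR+ = {lr_pos:.1f}, post-test probability = {post_prob:.0%}")  # ~77%

On the same assumptions, RF’s roughly 80% specificity (with similar sensitivity) gives an LR+ of only about 3.3 and a post-test probability of roughly 45%, which is why a positive anti-CCP result is so much more informative than a positive RF.

References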
  1. ACR position statement on anti-CCP antibodies. http://www.rheumatology.org/publications hotline/1003anticcp.asp.
  2. Forslind K, Ahlmen M, Eberhardt K, et al. Prediction of radiologic outcome in early rheumatoid arthritis in clinical practice: role of antibodies to citrullinated peptides (anti-CCP). Ann Rheum Dis 2004; 63: 1090-5.
  3. Huizinga TWJ, Amos CI, van der Helm-van Mil AHM, et al. Refining the complex rheumatoid arthritis phenotype based on specificity of the HLA-DRB1 shared epitope for antibodies to citrullinated proteins. Arthritis Rheum 2005; 52: 3433-8.
  4. Lee DM, Schur PH. Clinical utility of the anti-CCP assay in patients with rheumatic disease. Ann Rheum Dis 2003; 62: 870-4.
  5. Van Gaalen FA, Linn-Rasker SP, van Venrooij WJ, et al. Autoantibodies to cyclic citrullinated peptides predict progression to rheumatoid arthritis in patients with undifferentiated arthritis. Arthritis Rheum 2004; 50: 709-15.
  6. Zendman AJW, van Venrooij WJ, Pruijn GJM. Use and significance of anti-CCP autoantibodies in rheumatoid arthritis. Rheumatology 2006; 46: 20-5.

General Practice Pathology is a new regular column, each edition authored by an Australian expert pathologist on a topic of particular relevance and interest to practising GPs. The authors provide this editorial free of charge as part of an educational initiative developed and coordinated by Sonic Pathology.
Dr Linda Calabresi

Looks like there is yet another reason to rethink the long-term use of proton pump inhibitors. And this one is a doozy. According to a new study recently published in the BMJ journal Gut, long-term use of PPIs is linked to a more than doubling of the risk of developing stomach cancer. And before you jump to the reasonable conclusion that these patients might have had untreated Helicobacter pylori, this 2.4-fold increase in gastric cancer risk occurred in patients whose H. pylori had been successfully treated more than 12 months previously. What’s more, the risk increased proportionally with the duration and dose of PPI use, which the Hong Kong authors said suggests a cause-effect relationship. No such increased risk was found among patients who took H2 receptor antagonists. While the study was observational, the large sample size (more than 63,000 patients with a history of effective H. pylori treatment) and the relatively long duration of follow-up (median 7.6 years) lend validity to the findings. The link between H. pylori and gastric cancer has been known for decades, and it has been shown that eradicating H. pylori reduces the risk of developing gastric cancer by 33-47%. However, the study authors said, it is also known that a considerable proportion of these individuals go on to develop gastric cancer even after successful eradication of the bacteria. “To our knowledge, this is the first study to demonstrate that long-term PPI use, even after H. pylori eradication therapy, is still associated with an increased risk of gastric cancer,” they said. By way of explanation, the researchers note that gastric atrophy is considered a precursor to gastric cancer. While gastric atrophy is a known sequela of chronic H. pylori infection, it could also be worsened and maintained by the profound acid suppression associated with PPI use, which could be why the risk persisted even after the infection had been treated. Bottom line? According to the study authors, doctors need to ‘exercise caution when prescribing long-term PPIs to these patients even after successful eradication of H. pylori’. Ref: Gut 2017; 0:1-8. doi:10.1136/gutjnl-2017-314605

Dr Linda Calabresi

Self-harm among teenagers is on the increase, a new study confirms, and frighteningly it is younger girls who appear most at risk. According to a population-based UK study, the annual incidence of self-harm increased by an incredible 68% between 2011 and 2014 among girls aged 13-16, from 46 per 10,000 to 77 per 10,000. The research, based on analysis of electronic health records from over 670 general practices, also found that girls were three times more likely to self-harm than boys among the almost 17,000 young people (aged 10-19 years) studied. The importance of identifying these patients and implementing effective interventions was highlighted by the study’s other major finding. “Children and adolescents who harmed themselves were approximately nine times more likely to die unnaturally during follow-up, with especially noticeable increases in risks of suicide…, and fatal acute alcohol and drug poisoning,” the BMJ study authors said. And lest you think this might be a problem unique to the UK, the researchers referred to an Australian population-based cohort study published five years ago, which found that 8% of adolescents aged under 20 years reported having harmed themselves at some time. The UK study also showed that the likelihood of referral was lowest in the most deprived areas, even though these were the areas where the incidence was highest – an example of the ‘inverse care law’, whereby the people in most need get the least care. While the link between social deprivation and self-harm might be understandable, the researchers were at a loss to explain the recent sharp increase in incidence among 13-16 year old girls in particular. What they could say is that by analysing general practice data rather than inpatient hospital data, an additional 50% of self-harm episodes in children and adolescents were identified. In short, a self-harming teenager is much more likely to engage with their GP than to appear at a hospital service. And even though, as the study authors concede, there is little evidence to guide the most effective management of these children and adolescents, the need for GPs to identify these patients and intervene early is imperative. “The increased risks of all cause and cause-specific mortality observed emphasise the urgent need for integrated care involving families, schools, and healthcare provision to enhance safety among these distressed young people in the short term, and to help secure their future mental health and wellbeing,” they concluded. BMJ 2017; 359: j4351. doi: 10.1136/bmj.j4351
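For readers who want to sanity-check the headline figure, the quoted rise follows directly from the two incidence rates (a quick sketch in Python):

    # Incidence among girls aged 13-16 rose from 46 to 77 per 10,000 (2011-2014).
    before, after = 46, 77
    relative_increase = (after - before) / before
    print(f"{relative_increase:.1%}")  # 67.4%, in line with the ~68% reported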

Shomik Sengupta

Bladder cancer affects almost 3,000 Australians each year and causes thousands of deaths, yet it often has a lower profile than other cancers such as breast, lung and prostate. The rate at which Australians are diagnosed with bladder cancer has decreased over time, which means the death rate has fallen too, although at a slower rate. This has led to an increase in the so-called mortality-to-incidence ratio, a key statistic that measures the proportion of people with a cancer who die from it. For bladder cancer this went up from 0.3 (about 30%) in the 1980s to 0.4 (40%) in 2010 (compared with 0.2 for breast and colon cancer and 0.8 for lung cancer). And while the relative survival (survival compared with a healthy individual of similar age) for most other cancers has improved in Australia, for bladder cancer it has decreased over time.
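As a rough illustration of how the mortality-to-incidence ratio works (the death count below is back-calculated from the figures above, not taken from registry data):

    # Mortality-to-incidence ratio (MIR) = annual deaths / annual new diagnoses.
    new_cases = 3000   # "almost 3,000 Australians each year"
    deaths = 1200      # hypothetical figure, implied by an MIR of 0.4
    mir = deaths / new_cases
    print(f"MIR = {mir:.1f} ({mir:.0%} of those diagnosed die of the disease)")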

Who gets bladder cancer?

Australia’s anti-smoking measures and effective quitting campaigns have led to a progressive reduction in smoking rates over the last 25 years. This is undoubtedly one key reason behind the observed decline in bladder cancer diagnoses over time. Environmental risk factors are thought to be more important than genetic or inherited susceptibility when it comes to bladder cancer. The most significant known risk factor is cigarette smoking. Bladder cancer risk also increases with exposure to chemicals such as dyes and solvents used in industries like hairdressing, printing and textiles. Appropriate workplace safety measures are crucial to minimising exposure, but the increased risk of occupational bladder cancer remains an ongoing problem. Certain medications, such as the chemotherapy drug cyclophosphamide, and pelvic radiation therapy have also been linked to bladder cancer. Patients who have had such treatment need to be specifically checked for the main symptoms and signs of bladder cancer, such as blood in urine. Men develop bladder cancer about three times as often as women. In part, this may have to do with the fact that men are exposed more to the risk factors. Conversely, women have a relatively poorer survival from bladder cancer compared to men. The reasons for this are unclear, but may partly relate to difficulties in diagnosis.

How is bladder cancer diagnosed?

At present, unlike cancers such as breast cancer that can be picked up on mammograms, bladder cancer can’t be diagnosed before symptoms appear. The usual symptoms that lead to the diagnosis of bladder cancer are blood in the urine (haematuria) or irritation during urination, such as frequency and burning. But such symptoms are quite common and, in most instances, caused by relatively benign problems such as infections, urinary stones or enlargement of the prostate. So the key to bladder cancer diagnosis is for suspicious symptoms to be quickly and appropriately assessed by a doctor. Haematuria, in particular, always needs to be considered a serious symptom and investigated further: up to 20% of patients with blood in the urine will turn out to have bladder cancer. Even if the bleeding is transient, it could still be the first symptom that leads to the earliest possible diagnosis of bladder cancer. It shouldn’t be ignored, since delayed diagnosis of bladder cancer is known to worsen treatment outcomes. Unfortunately, delays in the investigation of blood in urine are well known to occur, and particular subgroups, such as women and smokers, tend to experience the greatest delays. Recent studies from Victoria and Western Australia have shown that some Australian patients have significant and concerning delays in the investigation of urinary bleeding. Multiple factors contribute to such delays, including public perception and anxiety, lack of referral from general practitioners, and administrative and resourcing limitations at hospitals. Patients reporting blood in their urine should be referred for scans such as an ultrasound or computerised tomography (CT) to assess the kidneys. They should also have their bladder examined internally (cystoscopy) using a fibre-optic instrument known as a cystoscope. Cystoscopy, a procedure usually performed by urologists (medical specialists in urinary tract surgery), remains the gold standard for diagnosing bladder cancer. Although diagnostic scans can help detect some bladder cancers, they have significant limitations in detecting certain types of tumours.

What happens if cancer is detected?

If a bladder cancer is noted on cystoscopy, it is removed and/or destroyed using instruments passed into the bladder alongside the cystoscope. These procedures can be carried out in the same sitting or subsequently, depending on available instruments and anaesthesia. The cancerous tissue removed is examined by a pathologist to confirm the diagnosis. This also provides additional information, such as the stage of the cancer (how deep it has spread) and grade (based on the appearance of the cancer cells), which helps determine further management.

Are there any new developments?

Given that cystoscopy is an invasive procedure, there has been considerable effort to develop a non-invasive test, usually focusing on markers in the urine that can indicate the presence of cancer. To date, none of these have been reliable enough to obviate the need for cystoscopy.
Additionally, to enhance the detection of small bladder cancers, cystoscopy using blue light of a certain wavelength (360-450 nm) can be combined with the administration of a fluorescent marker (hexaminolevulinate) that highlights the cancerous tissue. While this approach does lead to the detection of more cancers, the resulting clinical benefit remains uncertain. At present, immediate and appropriate investigation of suspicious symptoms, especially haematuria, using a combination of radiological scans and cystoscopy, remains the best means of diagnosing bladder cancer in an accurate and timely manner. Shomik Sengupta, Professor of Surgery, Eastern Health Clinical School, Monash University. This article was originally published on The Conversation. Read the original article.