Articles

Read the latest articles relevant to your clinical practice, including exclusive insights from Healthed surveys and polls.

By reading selected clinical articles, you earn CPD in the Educational Activities (EA) category whenever you click the “Claim CPD” button and follow the prompts. 

Dr Linda Calabresi

At first read, the study results seemed disappointing. Yet another promising premise fails to deliver when it comes to actual proof. But the researchers aren’t ready to give up on this hypothesis just yet. In fact, commentators on the study say the results offer ‘great hope’ and represent ‘a major leap forward.’

The SPRINT MIND study, recently published in JAMA, investigated whether intensive blood pressure control (to a systolic of less than 120mmHg) worked better than standard blood pressure control (SBP <140mmHg) at reducing the risk of mild cognitive impairment and dementia. This randomised controlled trial was a component of the well-publicised Systolic Blood Pressure Intervention Trial (SPRINT), which looked at the effect of more intensive blood pressure control on cardiovascular and renal outcomes, in addition to cognitive function, in over 9000 people without a history of diabetes or stroke.

Basically, what this study showed was that intensive blood pressure control to a target of less than 120mmHg did not reduce the incidence of probable dementia compared with lowering BP to a target of less than 140mmHg. Depressing, yes? No, say the study authors.

Firstly, the study demonstrated no ill-effects of intensive BP lowering – an issue of concern for some, who worried that lowering the BP could decrease cerebral perfusion and thereby harm cognitive function. In fact, the study authors showed quite the opposite was true. The intervention actually helped protect cognitive ability. “This is the first trial, to our knowledge, to demonstrate an intervention that significantly reduces the occurrence of [mild cognitive impairment], a well-established risk factor for dementia, as well as the combined occurrence of [mild cognitive impairment] or dementia,” they said.
The study authors suggest the lack of benefit in dementia may be due to the fact the SPRINT study was terminated early, following the demonstration of benefit of intensive BP control on cardiovascular outcomes and all-cause mortality. Because of this shortened time frame, and the fact that there were fewer than expected cases of dementia, they suggest the study may have been underpowered to show a result for lowering the risk of dementia. They also note there were fewer cases of dementia in the intensive treatment group than in the standard treatment group (7.2 vs 8.6 cases per 1000 patient-years), even though this difference wasn’t statistically significant. We cannot know whether this trend would have reached statistical significance had the intervention continued.

An accompanying editorial views the study and its results with a good deal of positivity. “For older adults, almost all of whom have concern about being diagnosed with Alzheimer Disease and related dementia, [this study] offers great hope,” the US epidemiologist, Dr Kristine Yaffe, said. She points out that this is a readily modifiable risk factor, and that we should be accelerating our efforts to investigate whether this, along with other vascular health interventions such as physical activity, can indeed prevent dementia, building on the positive results of this study. “The SPRINT MIND study may not be the final approach for prevention of Alzheimer disease or other cognitive impairment but it represents a major leap forward in what has emerged as a marathon journey.”
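As a rough back-of-the-envelope illustration (not a calculation reported by the trial), the dementia rates quoted above imply a rate ratio of about 0.84 – a roughly 16% relative reduction that did not reach statistical significance:

```python
# Back-of-the-envelope: rate ratio implied by the reported dementia rates
# (7.2 vs 8.6 cases per 1000 patient-years). Illustrative only; the trial
# reported this difference as not statistically significant.
intensive_rate = 7.2  # cases per 1000 patient-years, intensive arm
standard_rate = 8.6   # cases per 1000 patient-years, standard arm

rate_ratio = intensive_rate / standard_rate
relative_reduction = 1 - rate_ratio

print(f"rate ratio = {rate_ratio:.2f}")               # 0.84
print(f"relative reduction = {relative_reduction:.0%}")  # 16%
```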

Reference

Yaffe, K. Prevention of Cognitive Impairment With Intensive Systolic Blood Pressure Control. JAMA [Internet]. 2019 Jan 28. DOI: 10.1001/jama.2019.0008 [Epub ahead of print]
Suzanne Mahady

Bowel cancer mostly affects people over the age of 50, but recent evidence suggests it’s on the rise among younger Australians. Our study, published recently in Cancer Epidemiology, Biomarkers and Prevention, found the incidence of bowel cancer, which includes colon and rectal cancer, has increased by up to 9% in people under 50 from the 1990s until now. Our research examined all recorded cases of bowel cancer from the past 40 years in Australians aged 20 and over.

Previous studies assessing bowel cancer incidence in young Australians have also documented an increase in the younger age group. This trend is also being seen internationally. A study from the United States suggests an increase in bowel cancer incidence in people aged 54 and younger, with rectal cancer incidence increasing by 3.2% annually from 1974 to 2013 among those aged 20-29.

Bowel cancer is predicted to be the third most commonly diagnosed cancer in Australia this year. In 2018, Australians had a one in 13 chance of being diagnosed with bowel cancer by their 85th birthday.

Our study also found bowel cancer incidence is falling in older Australians. This is likely, in part, to reflect the efficacy of the National Bowel Cancer Screening Program, targeted at those aged 50-74. Bowel cancer screening reduces cancer incidence, by detecting and removing precancerous lesions, as well as reducing mortality by detecting existing cancers early. This is important, as bowel cancer has a good cure rate if discovered early. From 2010 to 2014, a person diagnosed with bowel cancer had a nearly 70% chance of surviving the next five years. Survival is more than 90% for people whose bowel cancer is detected at an early stage.

That is why screening is so effective – and we have previously predicted that if coverage rates in the National Bowel Cancer Screening Program can be increased to 60%, around 84,000 lives could be saved by 2040. This would represent an extraordinary success.
In fact, bowel screening has potential to be one of the greatest public health successes ever achieved in Australia.

Why the increase in young people?

Our study wasn’t designed to identify why bowel cancer is increasing among young people. However, there are some factors that could underpin our findings. The increase in obesity parallels that of bowel cancer, and large population-based studies have linked obesity to increased cancer risk. Unhealthy lifestyle behaviours, such as increased intake of highly processed foods (including meats), have also been associated with increased bowel cancer risk. High-quality studies are needed to explore this role further. Alcohol is also thought to contribute to an increased risk of bowel cancer.

So, should we be lowering the screening age in Australia to people under the age of 50? Evaluating a cancer screening program for the general population requires a careful analysis of the potential benefits, harms and costs. A recent Australian study modelled the trade-offs of lowering the screening age to 45. It showed more cancers would potentially be detected. But there would also be more colonoscopy-related harms, such as perforation (tearing), in an extremely small proportion of people who require further evaluation after screening. A lower screening age would also increase the number of colonoscopies to be performed in the overstretched public health system, and could therefore have the unintended consequence of lengthening colonoscopy waiting times for people at high risk.

How to reduce bowel cancer risk

One of the most common symptoms of bowel cancer is rectal bleeding. So if you notice blood when you go to the toilet, see your doctor to have it checked out. A healthy lifestyle – including adequate exercise, avoiding smoking, limiting alcohol intake and eating well – remains the most important way to reduce cancer risk. Aspirin may also lower the risk of cancer, but should be discussed with your doctor because of the potential for side effects, including major bleeding.

Most importantly, we need to ensure eligible Australians participate in the current evidence-based screening program. Only 41% of the population in the target 50-74 age range completed their poo tests in 2015-2016. The test is free, delivered by post and able to be self-administered.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
Dr Linda Calabresi

It would be a brave doctor who would ignore the warning ‘allergic to penicillin’ when deciding which antibiotic to prescribe for a patient. But according to a new review published recently in JAMA, despite up to 10% of the population reporting allergies to penicillin, few have clinically significant reactions. “Although many patients report they are allergic to penicillin, clinically significant IgE-mediated or T lymphocyte-mediated penicillin hypersensitivity is uncommon (<5%),” the US review authors said.

And the issue is an important one. As the authors point out, not only will patients labelled as having a penicillin allergy be given alternative antibiotics that are more likely to fail and cause side-effects, but the use of these alternatives increases the risk of antimicrobial resistance developing. For all these reasons, the researchers propose that all patients labelled as having an allergy to penicillins be re-evaluated.

As a starting point, a comprehensive history should be taken. And while the reviewers acknowledge that, to date, no allergy questionnaires have been validated in terms of defining risk levels, there are plenty of features in a history that can give a clue as to whether a person could safely be offered skin prick testing or a drug challenge. Broadly speaking, patients with a history of a minor rash that was not significantly itchy and developed days into the course of the antibiotic are considered low-risk. This is opposed to people with a history of developing a very pruritic rash within minutes to hours of taking the drug (which tends to indicate an IgE-mediated reaction), or people who experienced significant blistering and/or skin desquamation after taking penicillin (which generally represents a severe T-cell-mediated reaction). Among those patients whose rash history suggests they are at low risk, other factors should be considered before attempting a challenge.
“Even in the context of low-risk allergy history, patients with unstable or compromised haemodynamic or respiratory status and pregnant patients should be considered as having at least a moderate-risk history,” they said. However, patients whose penicillin allergy history included non-allergic-type symptoms, such as gastrointestinal symptoms, or patients who have only a family history of penicillin allergy, should be considered low-risk.

Once the patient has been assessed as being at low risk of an acute allergic reaction, the study authors suggest they be given amoxicillin under medical observation. “For penicillin allergy, administration of 250mg of amoxicillin with one hour of observation demonstrates penicillin tolerance,” they said. Should the patient tolerate this dose of amoxicillin, it can be concluded that all beta-lactams can be administered safely, and the issue of cross-reactivity (between penicillin and cephalosporins, which occurs in about 2% of truly penicillin-allergic people) is rendered irrelevant.

Patients considered at moderate risk of an allergic reaction to penicillin – namely those with a history of urticaria or mild pruritic rashes but no anaphylaxis – should be considered for skin-prick testing. Only those with a negative skin prick test should be considered for an oral drug challenge. People with a history of high-risk reactions – usually anaphylaxis – should not be skin-prick tested or challenged. They might be considered for desensitisation programs, but only in select circumstances and under the close supervision of a specialist.

All in all, the authors advocate health professionals not simply take the label of ‘allergic to penicillin’ as gospel. “Evaluation of penicillin allergy has substantial benefits for patients by allowing improved antimicrobial choice for treatment and prophylaxis,” they concluded.

Reference

Shenoy ES, Macy E, Rowe T, Blumenthal KG. Evaluation and Management of Penicillin Allergy: A Review. JAMA. 2019 Jan 15; 321(2): 188-99. DOI: 10.1001/jama.2018.19283
Eloise Stephenson

Ross River virus is Australia’s most common mosquito-borne disease. It infects around 4,000 people a year and, despite being named after a river in North Queensland, is found in all states and territories, including Tasmania. While the disease isn’t fatal, it can cause debilitating joint pain, swelling and fatigue lasting weeks or even months. It can leave sufferers unable to work or look after children, and is estimated to cost the economy A$2.7 to A$5.6 million each year. There is no treatment or vaccine for Ross River virus; the only way to prevent infection is to avoid mosquito bites.

Mosquitoes pick up the disease-causing pathogen by feeding on an infected animal. The typical transmission cycle involves mosquitoes moving the virus between native animals, but occasionally an infected mosquito will bite a person. If this occurs, the mosquito can spread Ross River virus to the person.

Animal hosts

Ross River virus has been found in a range of animals, including rats, dogs, horses, possums, flying foxes, bats and birds. But marsupials – kangaroos and wallabies in particular – are generally better than other animals at amplifying the virus under experimental infection and are therefore thought to be “reservoir hosts”. The virus circulates in the blood of kangaroos and wallabies for longer than other animals, and at higher concentrations. It’s then much more likely to be picked up by a blood-feeding mosquito.

Dead-end hosts

When we think of animals and disease we often try to identify which species are good at transmitting the virus to mosquitoes (the reservoir hosts). But more recently, researchers have started to focus on species that get bitten by mosquitoes but don’t transmit the virus. These species, known as dead-end hosts, may be important for reducing transmission of the virus. With Ross River virus, research suggests birds that get Ross River virus from a mosquito cannot transmit the virus to another mosquito. If this is true, having an abundance of birds in and around our urban environments may reduce the transmission of Ross River virus to animals, mosquitoes and humans in cities.

Other reservoir hosts?

Even in areas with high rates of Ross River virus in humans, we don’t always find an abundance of kangaroos and wallabies. So there must be other factors – or animals yet to be identified as reservoirs or dead-end hosts – playing an important role in transmission. Ross River virus is prevalent in the Pacific Islands, for instance, where there aren’t any kangaroos or wallabies. One study of blood donors in French Polynesia found that 42.4% of people tested had previously been exposed to the virus. The rates are even higher in American Samoa, where 63% of people had been exposed. It’s unclear whether the virus has only recently started circulating in these islands or has been circulating there longer, and what animals have been acting as hosts.

What about people?

Mosquitoes can transmit some viruses, such as dengue and Zika, between people quite easily. But the chance of a mosquito picking up Ross River virus when biting an infected human is low, though not impossible. The virus circulates in our blood at lower concentrations and for shorter periods of time compared with marsupials. Of the humans infected with Ross River virus, around 30% will develop symptoms of joint pain and fatigue (and sometimes a rash) three to 11 days after exposure, while some may not experience any symptoms until three weeks after exposure.

To reduce your risk of contracting Ross River virus, take care to cover up when you’re outdoors at sunset and wear repellent in outdoor environments where mosquitoes and wildlife may be frequently mixing.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
Dr Linda Calabresi

While it appears the message about risky drinking is getting through to younger Australians, baby boomers are as bad as ever. According to a research letter appearing in the latest edition of the MJA, the proportion of 55-70-year-olds who could be classed as high-risk drinkers has risen over the last 15 or so years. The South Australian researchers say this is in ‘stark contrast to the significant decrease in risky drinking among people aged between 12-24 years during the same period.’ And while they do emphasise that by far the majority of older Australians (over 80%) are abstainers or drink at low-risk levels, the proportional increase of those now in the high-risk category (from 2.1% in 2004 to 3.1% in 2016) represents an additional 400,000 at-risk individuals – significant in anyone’s language.

The findings were based on secondary analyses of data from the National Drug Strategy Household Surveys conducted in 2004, 2007, 2010, 2013 and 2016. Interestingly, the researchers defined the risk categories on the basis of the maximum number of standard alcoholic drinks consumed on a single occasion over the course of a month. Low-risk drinkers were those who never consumed more than four drinks in a single session, risky drinkers drank 5-10 drinks in one session at least once a month, and high-risk drinkers had drunk 11 or more drinks in a session at least once a month. It’s a slightly different means of assessment to the more common approach of asking about average daily alcohol intake, and appears more likely to detect the binge drinker – or your classic ‘social drinker.’ As the letter authors point out, detecting problem drinking in this age group is especially important, as this cohort is particularly vulnerable to a range of alcohol-related adverse events, from falls to diabetes.
Once again, the researchers are looking to GPs to detect those at risk from drinking among our baby boomer patient population and to initiate evidence-based interventions, such as short, opportunistic counselling and information sessions. But they recognise this isn’t always easy. “To facilitate early identification of problem drinking and early intervention, educating health care professionals about patterns and drivers of alcohol consumption by older people should be a priority,” the authors said. Perhaps using the study’s categorisation technique of the maximum number of drinks consumed in a single session might go some way to detecting those at risk.

Reference

Roche AM, Kostadinov V. Baby boomers and booze: we should be worried about how older Australians are drinking. Med J Aust. 2019; 210(1): 38-9. DOI: 10.5694/mja2.12025. Available from: https://www.mja.com.au/journal/2019/210/1/baby-boomers-and-booze-we-should-be-worried-about-how-older-australians-are
Martyn Lloyd Jones

For many years, experts in the field of drug policy in Australia have known existing policies are failing. Crude messages (calls for total abstinence: “just say no to drugs”) and even cruder enforcement strategies (harsher penalties, criminalisation of drug users) have had no impact on the use of drugs or the extent of their harmful effects on the community.

Whether we like it or not, drug use is common in our society, especially among young people. In 2016, 43% of people aged 14 and older reported they had used an illicit drug at some point in their lifetime, and 28% of people in their twenties said they had used illicit drugs in the past year. The use of MDMA (the active ingredient in ecstasy) is common and increasing among young people. In the last three months alone, five people have died as a result of using illicit drugs at music festivals and many more have been taken to hospital.

The rigid and inflexible attitudes of current policy-makers contrast dramatically with the innovative approaches to public health policy for which Australia was once renowned. Since the 1970s, many highly successful campaigns have improved road safety, increased immunisation rates in children and helped prevent the spread of blood-borne virus infections. The wearing of seatbelts was made compulsory throughout Australia in the early 1970s. Random breath testing and the wearing of helmets by bike riders were introduced in the 1980s. These measures alone have saved many thousands of lives. The introduction of needle exchange and methadone treatment programs in the late 1980s and, more recently, widespread access to effective treatments for hepatitis C have dramatically reduced the health burden from devastating infections such as HIV and the incidence of serious liver disease. Each of these programs had to overcome vigorous and sustained hostility from opponents who argued they would do more harm than good. But in all cases the pessimists were proved wrong.
Safety measures on the roads did not cause car drivers and bike riders to behave more recklessly. The availability of clean needles did not increase intravenous drug use. Easier access to condoms did not lead to greater risk-taking and more cases of AIDS.

We believe – along with many other experts in the field – that, as was the case for these earlier programs, the evidence presently available is sufficient to justify the careful introduction of trials of pill testing around Australia. Specifically, we support the availability of facilities allowing young people, at venues or events where drug taking is acknowledged to be likely, to seek advice about the substances they’re considering ingesting. These facilities should include tests for the presence of known toxins or contaminants, to help avert the dangerous effects they may produce. Such a program should be undertaken in addition to, and not instead of, other strategies to discourage or deter young people from taking illicit drugs.

Although pill testing has been widely and successfully applied in many European countries over a 20-year period, it has to be admitted the evidence about the degree of its effectiveness remains incomplete. That’s why any program in Australia should be linked to a rigorously designed data collection process to assess its impact and consequences. However, we do know that the argument that pill testing programs will increase drug use and its associated harms is very unlikely to be true. Most people seeking advice about the constituents of their drugs will not take them if advised they contain dangerous contaminants. And false reassurance about safety is easily avoided through careful explanation and detailed information. The opportunity to provide face-to-face advice to young people about the risks of drug taking is one of the great strengths of pill testing programs.
Over the last half century we have learnt that public health programs have to use multiple strategies and deliver messages carefully tailored for different audiences. What works to combat the harms associated with drug-taking in prisons is different from what works for specific cultural groups or for young people attending music festivals. The available evidence suggests pill testing is an effective and useful approach to harm minimisation in this last group. We believe it has the capacity to decrease ambulance calls to festival-goers, help change behaviour and save lives.

It has taken until now for pill testing techniques to be developed to a level where they can identify the constituents in analysed samples with sufficient precision, reliability and speed. These techniques, and the range of substances for which they can test, will continue to improve over time. On the basis of experience gained in the UK, Europe and Australia, it’s clear pill testing is now feasible and practicable.

The members of the Australasian Chapter of Addiction Medicine within the Royal Australasian College of Physicians are the main clinical experts in the field of addiction medicine in this country. Together with the Australian Medical Association and many prominent members of the community with experience in this area, we feel this is the time for pill testing to be introduced, albeit in careful and controlled circumstances. We believe this position is also supported by peer users, concerned families, and past and present members of police forces across Australia.

The fact the “War on Drugs” has failed does not mean we should give up. There are many new weapons available to us, as we have learnt from the successful public health campaigns of the past. Pill testing will not abolish all the harms associated with drug taking but, if handled carefully, carries the likelihood of reducing them significantly.
Martyn Lloyd Jones, Honorary Senior Lecturer, University of Melbourne, and Paul Komesaroff, Professor of Medicine, Monash University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Dr Linda Calabresi

Finally, we’ve got some robust evidence to answer the question: is ondansetron safe to take for morning sickness? Published in JAMA, a very large retrospective study involving over 1.8 million pregnancies, almost 90,000 of which included exposure to ondansetron in the first trimester, has found that taking the drug did not increase the risk of cardiac malformations or congenital malformations overall. However, first-trimester ondansetron was associated with a very small increased risk of oral clefts (three additional cases per 10,000 women treated). Interestingly, the increased risk for oral clefts was confined to cleft palate; there was no evidence of an increased risk of cleft lip.

The information will be eagerly received by the thousands of pregnant women who experience severe nausea and vomiting, and the clinicians who care for them, many of whom have been prescribing ondansetron because of its effectiveness, despite the lack of detailed safety data. “Although not formally approved for the treatment of nausea and vomiting during pregnancy, ondansetron, a 5-HT3 receptor antagonist, has rapidly become the most frequently prescribed drug for nausea and vomiting during pregnancy in the United States because of its perceived superior antiemetic effects and improved adverse effect profile compared with treatment alternatives,” the study authors said. “In 2014, an estimated 22% of pregnant women used ondansetron in the United States,” they said.

The major strengths of this study lie in the size of the cohort and the fact that the information on ondansetron exposure was based on filled prescriptions, thereby negating any possible recall bias. Both these factors are particularly important given how rare these abnormalities are and how many possible confounders there could be.
As for limitations, just because a prescription has been filled doesn’t always mean the medication has been taken; but even if exposure was lower than calculated, such misclassification would tend to dilute, rather than exaggerate, the estimated risk. There is also the possibility that some other unrecognised factor was involved, especially since all the women in the study were insured through Medicaid and therefore included a higher percentage of women from disadvantaged communities. However, given the detailed information collected on these women and their pregnancies, and the multiple analyses conducted on the data, the likelihood of unmeasured confounders affecting the findings was thought to be low.

Overall, the results of this study should provide reassurance for clinicians and pregnant women, according to an accompanying editorial written by a US obstetrician and gynaecologist. “As clinicians and pregnant women engage in informed, shared decision-making surrounding treatment for nausea and vomiting, the current information is important for contextualising risks in light of the potential benefits,” he concluded.

References

Huybrechts KF, Hernández-Díaz S, Straub L, Gray KJ, Zhu Y, Patorno E, et al. Association of Maternal First-Trimester Ondansetron Use With Cardiac Malformations and Oral Clefts in Offspring. JAMA. 2018 Dec 18; 320(23): 2429-37. DOI: 10.1001/jama.2018.18307

Haas DM. Helping Pregnant Women and Clinicians Understand the Risk of Ondansetron for Nausea and Vomiting During Pregnancy. JAMA. 2018 Dec 18; 320(23): 2425-6. DOI: 10.1001/jama.2018.19328
Daryl Efron and Harriet Hiscock

The rate of medications dispensed for attention-deficit hyperactivity disorder (ADHD) in children aged 17 and under increased by 30% between 2013-14 and 2016-17. The Australian Atlas of Healthcare Variation, released today, shows around 14,000 prescriptions were dispensed per 100,000 children aged 17 and under in 2016-17, compared with around 11,000 in 2013-14. The atlas for 2016-17 also showed some areas had a high dispensing rate of around 34,000 per 100,000, while the area with the lowest rate was around 2,000 per 100,000 – a 17-fold difference. This difference is much lower than in 2013-14, when the highest rate was 75 times the lowest.

For decades, people have been concerned too many children could be diagnosed with ADHD and treated with medications. We are conducting a study called the Children’s Attention Project, following 500 children recruited through Melbourne schools. So far, we have found only one in four children who met full ADHD criteria were taking medication at age ten. So it looks like, if anything, more children with ADHD should be referred for assessment and consideration of management options.
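Using the rounded figures quoted above (illustrative arithmetic only, not taken from the atlas itself), the fold-difference between the highest- and lowest-dispensing areas works out as follows:

```python
# Illustrative arithmetic using the rounded 2016-17 dispensing rates quoted
# above (ADHD prescriptions per 100,000 children aged 17 and under).
highest_rate = 34_000  # highest-dispensing area (approximate)
lowest_rate = 2_000    # lowest-dispensing area (approximate)

fold_difference = highest_rate / lowest_rate
print(f"{fold_difference:.0f}-fold difference")  # 17-fold difference
```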

How many kids are medicated?

ADHD is the most common neurodevelopmental disorder of childhood – the prevalence is around 5% in Australia. Children with ADHD have great difficulty staying focused, are easily distracted and have poor self-control. Many are also physically hyperactive, especially when they are young. To be diagnosed, children need to have major problems from their ADHD symptoms both at home and at school. These include learning difficulties, behavioural problems and trouble making friends. Young people with ADHD are more likely to fail school, have lower quality of life, experience substance abuse issues and teenage pregnancy, or end up in prison.

Medication can make a big difference to these children’s lives. While there are many ways to help children with ADHD, stimulant medication is the most effective treatment. All international clinical guidelines recommend it for children with significant ADHD that persists after non-medication approaches have been offered. Our previous research found that about 80% of children diagnosed with ADHD by a paediatrician (the main medical specialty that manages ADHD) in Australia are treated with medication.

The atlas shows the proportion of children and adolescents who had at least one ADHD medication prescription dispensed was 1.5% in 2013-14 and 1.9% in 2016-17. This is similar to the prevalence of stimulant medication prescription found in Australian studies over the past 15 years. It sits between the US (high) and Europe (low), and is not excessive given the prevalence of the condition. The Children’s Attention Project found those with the most severe symptoms were more likely to be prescribed medications, as were those from families of lower socioeconomic status. Other Australian studies have found similar results. This is not surprising, as ADHD does appear to be more common in children from socioeconomically disadvantaged families.
Our research suggests that disadvantaged families in Australia appear to be able to access services for ADHD, at least in metropolitan centres.

Why does it vary between areas?

The atlas finding of considerable regional variation in the prescribing of stimulant medications in Australia has been identified in previous studies and needs to be better understood. Some variation in health care is normal and good, but too much suggests there may be a problem with the quality of care or access to care. For example, greater prescribing in regional areas may reflect a lack of timely access to non-pharmacological services. We need to keep watching this space, monitoring rates and regional variation of medication use.

A landmark study in the US, published in 1999, compared medication with intensive parent and teacher behaviour training. The children who received medication had a much greater reduction in ADHD symptoms. But medication is only one consideration in ADHD. Other supports are also important. Behavioural therapies can help reduce anxiety and behaviour problems in children with ADHD and improve relationships with parents and teachers. However, accessing psychologists can be hard for many families. While Medicare rebates are available for up to ten sessions per year, costs can still be a barrier. In our research, Victorian parents reported out-of-pocket costs of up to A$200 per session with a psychologist. ADHD is not considered a disability under the National Disability Insurance Scheme, so families are not eligible for funding packages.

Further research is needed to better understand the factors influencing access to care for Australian children with ADHD, and why there is such variation in rates of prescribing between regions. We also need to ensure children across Australia get equitable access to non-medication management. We need evidence-based clinical guidelines relevant to the Australian healthcare system, which is quite different from those of the UK and US.
This work must include adult ADHD, which is an emerging area with a raft of clinical and service system complexities. This article is republished from The Conversation under a Creative Commons license. Read the original article.
Dr Linda Calabresi

Women with a normal BMI can no longer tick off weight as breast cancer risk factor, US researchers say. According to their study, published in JAMA Oncology, it’s body fat that increases the risk even if the woman falls into a healthy weight range. The study was in fact a secondary analysis of the Women’s Health Initiative clinical trial along with observational study cohorts involving almost 3500 post-menopausal, healthy BMI women who at baseline had their body fat analysed (by DXA) and were then followed up for a median duration of 16 years. What the researchers discovered was that women in the highest quartile for total body fat and trunk fat mass were about twice as likely to develop ER-positive breast cancer. “In this long-term prospective study of postmenopausal women with normal BMI, relatively high body fat levels were associated with an elevated risk of invasive breast cancers,” the study authors wrote. Perhaps less surprisingly, the analysis also found that the breast cancer risk increased incrementally as the body fat levels increased. “We found a 56% increase in the risk of developing ER-positive breast cancer per 5-kg increase in trunk fat, despite a normal BMI,” they said. The proposed mechanism that explains why high body fat levels increase the risk of breast cancer is much the same as the known mechanism that explains the link between obesity and breast cancer risk. People with high body fat levels tend to have adipocyte hypertrophy and cell death, which means the adipose tissue is chronically although sub-clinically inflamed. This inflammation triggers the production of a number of factors including an increased ratio of oestrogens to androgens, which is believed to predispose to the development of oestrogen-dependent breast cancer. Basically, the study authors believe these women with high body fat but normal BMI are ‘metabolically obese’ even though they do not fit the standard definition of obese. 
And while using DXA to determine body fat levels is highly accurate, such an assessment is rarely used in everyday practice. Most doctors look only at BMI measurements, or they may also assess waist measurement, which has variable sensitivity in terms of diagnosing excess body fat. Consequently, the researchers say, many non-overweight women who are at increased risk of breast cancer because of their high adiposity may be going unrecognised. So where does that leave us? Here the study authors were less definitive. The link between body fat and breast cancer is clear but, they say, more research is needed to determine the most appropriate management for this cohort of women with high body fat levels and normal BMI. “Future studies are needed to determine whether interventions that reduce fat mass, such as diet and exercise programs or medications including aromatase inhibitors, might lower the elevated risk of breast cancer in this population with normal BMI,” they concluded.

Reference

Iyengar NM, Arthur R, Manson JE, Chlebowski RT, Kroenke CH, Peterson L, et al. Association of Body Fat and Risk of Breast Cancer in Postmenopausal Women With Normal Body Mass Index: A Secondary Analysis of a Randomized Clinical Trial and Observational Study. JAMA Oncol. 2018 Dec 6. DOI: [10.1001/jamaoncol.2018.5327] [Epub ahead of print]
Dr Amanda Henry

Women often wonder what the “right” length of time is after giving birth before getting pregnant again. A recent Canadian study suggests 12-18 months between pregnancies is ideal for most women. But the period between pregnancies, and whether a shorter or longer period poses risks, is still contested, especially when it comes to other factors such as a mother’s age. It’s important to remember that in high-income countries most pregnancies go well regardless of the gap in between.

What is short and long?

The time between the end of the first pregnancy and the conception of the next is known as the interpregnancy interval. A short interpregnancy interval is usually defined as less than 18 months to two years. The definition of a long interpregnancy interval varies – with more than two, three or five years all used in different studies. Most studies look at the difference every six months in the interpregnancy interval makes. This means we can see whether there are different risks between a very short period in between (less than six months) versus just a short period (less than 18 months). Most subsequent pregnancies, particularly in high-income countries like Australia, go well regardless of the gap. In the recent Canadian study, the risk of mothers having a severe complication varied between about one in 400 to about one in 100 depending on the interpregnancy interval and the mother’s age. The risk of stillbirth or a severe baby complication varied from just under 2% to about 3%. So overall, at least 97% of babies and 99% of mothers did not have a major issue. Some differences in risk of pregnancy complications do seem to be related to the interpregnancy interval. Studies of the next pregnancy after a birth show that:

What about other factors?

How much of the differences in complications are due to the period between pregnancies versus other factors such as a mother’s age is still contested. On the one hand, there are biological reasons why a short or a long period in between pregnancies could lead to complications. If the gap is too short, mothers may not have had time to recover from the physical stressors of pregnancy and breastfeeding, such as pregnancy weight gain and reduced vitamin and mineral reserves. They may also not have completely recovered emotionally from the previous birth experience and demands of parenthood. If the period between pregnancies is quite long, the body’s helpful adaptations to the previous pregnancy, such as changes in the uterus that are thought to improve the efficiency of labour, might be lost. However, many women who tend to have a short interpregnancy interval also have characteristics that make them more at risk of pregnancy complications to start with – such as being younger or less educated. Studies do attempt to control for these factors. The recent Canadian study took into account the number of previous children, smoking and the previous pregnancy outcomes, among other things. Even so, they concluded that risks of complications were modestly increased with a lower-than-six-month interpregnancy period for older women (over 35 years) compared to a 12-24-month period. Other studies, however, including a 2014 West Australian paper comparing different pregnancies in the same women, have found little evidence of an effect of a short interpregnancy interval.

So, what’s the verdict?

Based on 1990s and early 2000s data, the World Health Organisation recommends an interpregnancy interval of at least 24 months. The more recent studies would suggest that this is overly restrictive in high-resource countries like Australia. Although there may be modestly increased risks to mother and baby of a very short gap (under six months), the absolute risks appear small. For most women, particularly those in good health with a previously uncomplicated pregnancy and birth, their wishes about family spacing should be the major focus of decision-making. In the case of pregnancy after miscarriage, there appears even less need for restrictive recommendations. A 2017 review of more than 1 million pregnancies found that, compared to an interpregnancy interval of six to 12 months or over 12 months, an interpregnancy interval of less than six months had a lower risk of miscarriage and preterm birth, and did not increase the rate of pre-eclampsia or small babies. So, once women feel ready to try again for pregnancy after miscarriage, they can safely be encouraged to do so.
Dr David Kanowski

Short or tall stature is considered to be height below the 3rd or above the 97th percentile respectively. Abnormal growth velocity, showing on serial height measurements, is also an important finding. Growth charts based on the US NHANES study are available from www.cdc.gov/growthcharts/charts.htm. Copies of growth charts, together with height velocity and puberty charts, are available at the Australasian Paediatric Endocrine Group (APEG) website, https://apeg.org.au/clinical-resources-links/growth-growth-charts/. Local Australian growth charts are currently not available. The height of the parents should be considered in evaluating the child. Expected final height can be calculated from the parents’ heights as follows:
  • For boys: expected final height = mean parental height + 6.5cm
  • For girls: expected final height = mean parental height – 6.5cm
Assessment of bone age (hand/wrist) is also useful. With familial short or tall stature, bone age matches chronological age. Conversely, in a child with pathological short stature, bone age is often well behind chronological age, and may continue to fall further behind if the disease is untreated. The stage of puberty is relevant, as it will affect the likely final height. A short child who is still pre-pubertal (with unfused epiphyses) is more likely to achieve an adequate final height than one in late puberty.
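The mid-parental height calculation above is simple arithmetic and can be sketched as follows; the function name and the example heights are illustrative only, not part of any clinical software:

```python
def expected_final_height(mother_cm: float, father_cm: float, sex: str) -> float:
    """Mid-parental expected final height in cm.

    Boys: mean parental height + 6.5 cm; girls: mean parental height - 6.5 cm.
    """
    mean_parental = (mother_cm + father_cm) / 2
    offset = 6.5 if sex == "male" else -6.5
    return mean_parental + offset

# Example: mother 165 cm, father 180 cm (mean parental height 172.5 cm)
print(expected_final_height(165, 180, "male"))    # 179.0 cm
print(expected_final_height(165, 180, "female"))  # 166.0 cm
```

This gives a point estimate only; in practice the child's measured height is interpreted against percentile charts and bone age, as described above.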

Short stature

Causes to consider include:
  • Malnutrition, the commonest cause worldwide
  • Chronic disease, for example, liver/renal failure, chronic inflammatory diseases
  • Growth hormone deficiency, with/without other features of hypopituitarism
  • Other endocrinopathies, for example, hypothyroidism, (rarely) Cushing’s syndrome
  • Genetic/syndromic causes, for example, Down, Turner, Noonan, Prader-Willi syndromes
  • Depression or social deprivation should also be considered
  • Idiopathic short stature is a diagnosis of exclusion
Appropriate initial screening investigations can include liver and renal function tests, blood count, iron studies, thyroid function tests, coeliac disease screen, urinalysis (including pH) and karyotype. Other specialised tests may be needed, based on suspicion. In the lower range, IGF-1 shows considerable overlap between normal and abnormal levels, especially in the setting of poor nutrition. Small children tend to have low levels, regardless of whether growth hormone deficiency is the underlying cause. Random growth hormone levels vary widely because of pulsatile secretion and are also not a reliable test. Therefore, unless there is a clear underlying genetic or radiological diagnosis associated with clearly low IGF-1, stimulation testing is typically required to formally diagnose growth hormone deficiency and may be essential for funding of growth hormone treatment.

Tall stature

Causes include:
  • Chromosomal abnormalities, for example, Klinefelter syndrome (qv), XYY syndrome
  • Marfan syndrome
  • Homocystinuria
  • Hyperthyroidism
  • Growth hormone excess (see Acromegaly; Growth hormone; Insulin-like growth factor-1 (IGF-1))
  • Precocious puberty
  • Other syndromic causes, for example, Sotos, Beckwith-Wiedemann syndromes
  • Familial tall stature (predicted final height should match mid-parental height)
Investigation of stature is a specialised area and early discussion with a paediatric endocrinologist is indicated if there is clinical concern, for example, height below the 3rd percentile at age five, slow growth (crossing two percentile lines away from the median), significant height/weight discrepancy (more than two centile lines), suspected/confirmed metabolic or genetic abnormality, or clinical evidence of malnutrition or marked obesity.

References

  1. Cohen P, Rogol AD, Deal CL, Saenger P, Reiter EO, Ross JL, et al. Consensus statement on the diagnosis and treatment of children with idiopathic short stature: a summary of the Growth Hormone Research Society, the Lawson Wilkins Pediatric Endocrine Society, and the European Society for Paediatric Endocrinology workshop. J Clin Endocrinol Metab. 2008 Nov; 93(11): 4210-7. DOI: [10.1210/jc.2008-0509]
  2. Nwosu BU, Lee MM. Evaluation of short and tall stature in children. Am Fam Physician. 2008 Sep 1; 78(5): 597-604. Available from: www.aafp.org/afp/2008/0901/p597.pdf.
  General Practice Pathology is a regular column, each edition authored by an Australian expert pathologist on a topic of particular relevance and interest to practising GPs. The authors provide this editorial free of charge as part of an educational initiative developed and coordinated by Sonic Pathology.
Dr Linda Calabresi

Giving children with acute gastroenteritis probiotics will not help them recover more quickly, according to two large randomised controlled trials. At least if the probiotic includes Lactobacillus rhamnosus. The research, published in the New England Journal of Medicine, provides solid evidence against the adjunctive treatment, which, as the study authors point out, has been recommended by many health professionals and authoritative bodies. “Many experts consider acute infectious diarrhoea to be the main indication for probiotic use,” they said. However, the two studies, both conducted on children aged three months to four years with a less than 72-hour history of acute vomiting and diarrhoea, failed to show any benefit of taking a five-day course of the probiotics. One of the studies, conducted across six tertiary paediatric centres in Canada, involved almost 900 children with acute gastroenteritis randomly assigned to receive either a combination probiotic (L. rhamnosus and L. helveticus) or placebo. The other very similar study, this one involving US centres, included 970 children with gastroenteritis and tested the effectiveness of giving the single probiotic Lactobacillus rhamnosus against placebo. The results of the two trials, using almost identical outcome measures, were the same – the probiotics did not make a difference. “Neither trial showed a significant difference in the duration of diarrhoea and vomiting, the number of unscheduled visits to a health provider or the duration of day-care absenteeism,” an accompanying editorial concluded. The role of probiotics in the management of gastroenteritis in children has been an area of controversy and contradiction not only among individual specialists but also among different expert bodies, with guideline recommendations varying from “not recommended” by the Centers for Disease Control and Prevention to “strongly recommended” by the European Society for Pediatric Gastroenterology, Hepatology and Nutrition. 
But now, it appears this grey area has become very black and white. “Taken together, neither of these large, well-done trials provides support for the use of probiotics containing L. rhamnosus to treat moderate-severe gastroenteritis in children,” the editorial stated. The caveat, of course, is that this evidence, while robust, only applies to this particular probiotic. There might still be probiotics out there that do make a difference. The editorial author referred to a recent large randomised controlled trial conducted in rural India that found giving healthy newborns the probiotic L. plantarum in the first few days of life was associated with a significantly lower risk of sepsis and lower respiratory tract infection in the subsequent two months. So while these studies might appear to be the nail in the coffin for L. rhamnosus-containing probiotics, it is still a case of ‘watch this space’ with regard to the role of probiotics more generally.

References

  1. Schnadower D, Tarr PI, Casper TC, Gorelick MH, Dean JM, O'Connell KJ, et al. Lactobacillus rhamnosus GG versus Placebo for Acute Gastroenteritis in Children. N Engl J Med. 2018 Nov 22; 379(21): 2002-14. DOI: 10.1056/NEJMoa1802598
  2. Freedman SB, Williamson-Urquhart S, Farion KJ, Gouin S, Willan AR, Poonai N, et al. Multicenter Trial of a Combination Probiotic for Children with Gastroenteritis. N Engl J Med. 2018 Nov 22; 379(21): 2015-26. DOI: 10.1056/NEJMoa1802597
  3. LaMont JT. Probiotics for Children with Gastroenteritis. N Engl J Med. 2018 Nov 22; 379(21): 2076-77. DOI: 10.1056/NEJMe1814089