Economic evaluation is a tool in which evidence about the costs and benefits (outputs, impacts and/or outcomes) of programs* is gathered and compared in order to identify those that represent ‘best buys’.
The NSW Treasury Policy and Guidelines: Evaluation (TPG22-22)1 sets out mandatory requirements, recommendations and guidance for NSW General Government Sector agencies and other government entities to plan for and conduct the evaluation of policies, projects, regulations and programs. In addition, the NSW Health Guide to Measuring Value2 provides specific guidance about measuring improvements across the quadruple aim of value-based healthcare at NSW Health as part of monitoring and evaluation. This guide should be read in conjunction with both of these NSW Government resources.
The purpose of this guide is to assist NSW Health staff in engaging an independent evaluator for economic evaluations, particularly in relation to population health programs. The guide includes information to assist with decisions on whether an independent evaluator should be engaged, and considerations specific to the development of economic evaluation plans. Processes for engaging an independent evaluator, which are equally relevant to economic evaluations, are outlined in the companion document Planning and Managing Program Evaluations: A Guide.3
Economic evaluation essentially compares the costs and benefits of the program in question (new or existing) to an alternative program; it is dependent on the availability of information on the costs and effectiveness of a program. In comprehensive program evaluations, process, outcome and economic evaluation should be integrated, with all evaluation components planned at the same time as the development of the intervention. Ideally, economic data and other evaluation data are collected simultaneously; however, retrospectively collected cost data can be used along with evidence of effectiveness drawn from the literature or from a previous or retrospective outcome evaluation.
In framing an economic evaluation, the nature of the comparison to be undertaken, the perspective of the analysis, and the timeframe for analyses, need to be considered.
There are six commonly used forms of economic evaluation: cost-minimisation analysis, cost-effectiveness analysis, cost-efficiency analysis, cost-utility analysis, cost-consequences analysis and cost-benefit analysis. NSW Treasury recommends cost-benefit analysis (CBA) as the preferred approach for evaluating NSW government programs because it captures social and environmental impacts, as well as economic impacts.
Opportunities to collect cost data, including direct and indirect costs, and cost offsets, should be acted upon at an early stage. Options for collection of data on health care utilisation include self-reported data, data linkage and previously published cost information.
In conducting an economic evaluation, other issues need to be considered, particularly in relation to population health programs: predicting, through the use of economic models, the costs and outcomes that occur beyond the period in which they can be directly observed; discounting future costs and outcomes; and conducting sensitivity analysis to account for uncertainty.
Ultimately the economic evaluation needs to be designed to meet its primary purpose (i.e. to inform the investment decision at hand). It is important for the team engaging the independent evaluator to set the appropriate question, identify the key parameters for the evaluation and facilitate evaluators’ access to appropriate data on costs and outcomes. Evidence from an economic evaluation should be considered alongside other evidence (e.g. equity considerations) in making investment decisions.
* In this guide the word ‘program’ is used interchangeably with ‘initiative’. The NSW Treasury Policy and Guidelines: Evaluation (TPG22-22) defines an initiative as a program, policy, strategy, service, project, or any series of related events. Initiatives can vary in size and structure, from a small initiative at a single location, to a series of related events delivered over a period, to whole-of-government reforms with many components delivered by different agencies or governments. This guide also uses the term ‘intervention’ as an alternative to ‘program’.
NSW Health is committed to the development of evidence-based policies and programs and the ongoing review and evaluation of existing programs. This guide has been developed to support NSW Health staff in engaging an independent evaluator for economic evaluations of health programs, particularly those in population health.
Economic evaluation is a tool in which evidence about the costs and benefits (outputs, impacts and/or outcomes) of programs is gathered and compared in order to identify those that represent ‘best buys’. The most commonly cited definition of economic evaluation is that it is the “comparative analysis of alternative courses of action in terms of both their costs and consequences”.4 In an era in which health care resources are increasingly stretched, the use of this type of evidence is important in ensuring that health care investments are optimised to achieve value for money.†
This guide should be read in conjunction with Planning and Managing Program Evaluations: A Guide, a companion document from the Evidence and Evaluation Guidance Series of the Population and Public Health Division.3 The Planning and Managing Program Evaluations guide promotes a proactive, planned and structured approach to engaging an independent evaluator and includes information on when and how to engage an evaluator, and how to make the most of the results. It draws on the Treasury Evaluation Policy and Guidelines and NSW Government Evaluation Toolkit, which outline the requirements, and suggested processes, for suitable evaluation of NSW public programs to assess their effectiveness, value for money and continued relevance, and to improve transparency in decision making.1,7
In addition, the NSW Health Guide to Measuring Value provides specific guidance about measuring improvements across the quadruple aim of value-based healthcare at NSW Health as part of monitoring and evaluation.
The Planning and Managing Program Evaluations guide3 outlines two major steps:
This guide to Engaging an Independent Evaluator for Economic Evaluations does not duplicate, but will rather cross-reference, the information provided in the Planning and Managing Program Evaluations guide. It provides additional information to help decide whether an independent evaluator should be engaged for the economic evaluation and considerations specific to the development of economic evaluation plans (Appendix 2). In particular, it contains information that will assist with the considerations outlined in Figure 1.
The guide is not intended to provide comprehensive information about how to conduct economic evaluations. There are a number of textbooks and reference materials that are available for this purpose which are outlined at the end of the document. Rather, the aim of this guide is to give decision makers an appreciation of the circumstances in which they may benefit from commissioning of an economic evaluation and to provide information to enable this to be done effectively.
† Economic evaluations are different from ‘cost of illness’ or ‘burden of disease’ studies that aggregate the cost to society associated with a particular disease. Such studies typically produce findings that ‘Disease X costs the Australian community $Y per year.’ These studies can attract attention to a problem and can be effective advocacy tools but are of limited value in informing the allocation of resources. It is also worth distinguishing economic evaluation studies from priority setting exercises such as program budgeting and marginal analysis (PBMA) and option appraisals.5,6 Priority setting exercises address broad allocative efficiency questions by examining how best to allocate across a range of program areas from a given budget, and typically utilise economic evaluation evidence from a range of studies to do so. For instance, a priority setting exercise in PBMA may entail a funding agency looking at how much it invests across a portfolio of disease prevention and curative services. From an economic perspective the imperative would be to prioritise those programs that yield the greatest benefit for a given level of resources and in this respect cost-effectiveness evidence across all potential areas of spending needs to be taken into consideration. This guide focuses on the commissioning of individual economic evaluation studies; the evidence generated from these studies is potentially of use in broader priority setting initiatives.
Note that, for external evaluations, some of these decisions will be made by the team engaging the independent evaluator prior to calling for a request for tender (RFT) for the evaluation. Other decisions may be made by the successful evaluator, in consultation with the team engaging the independent evaluator. Which decisions are made before and after the call for applications will depend on the level of economic expertise in the team engaging the independent evaluator.
Whether or not a program should be formally evaluated will depend on factors such as the size of the program (including its scope and level of funding), its strategic significance, the degree of risk, resources available, timing and degree of program complexity. Appendix 1 outlines issues for consideration in deciding whether or not a program should be formally evaluated and whether an external evaluator is required. An Executive sponsor with appropriate delegation will need to approve the conduct of any proposed evaluation and the associated allocation of resources.
Additional considerations in deciding whether an economic evaluation should be undertaken include: whether it will provide the right type of economic evidence to support the investment decision at hand; whether there is already good, relevant economic evidence available; whether evidence of program feasibility and effectiveness is, or will be, available; how important economic evidence is for the investment decision to be made, given other considerations such as equity; the level of upfront investment; whether there are plans for scaling up the program and whether it will be possible to obtain the data required for the economic evaluation.
By enabling choices to be made across alternative programs, economic evaluation is a tool for guiding rational investment decisions in population health. Some of the investment decisions (i.e. questions) that population health decision makers are likely to encounter will be well informed by an economic evaluation.
This guide focuses on investment decisions that may be addressed by economic evaluations and considerations relevant to these evaluations.
An initial review may be useful to see if available economic evidence is sufficient to inform the investment decision at hand. However, as described elsewhere in this guide, economic evaluations are best framed around a specific investment decision or question, should use locally relevant cost data where possible, and be based on an incremental analysis in which the comparator usually reflects current practice. Thus, economic evaluations are not designed to achieve a high degree of generalisability. When translating economic evidence from other settings, it is therefore important to account for differences in costs, practice or service variations, population characteristics and the nature of the comparator. Also, the findings from previous studies need to be adjusted for inflation. The tasks of making these adjustments to existing published evidence are not inconsequential; therefore, while there may not be a need to conduct a new economic evaluation, work will nonetheless be required to extrapolate from the available evidence to the question at hand.
It is important to determine whether the program is in fact feasible and effective and therefore potentially worth investing in. There are a number of aspects of feasibility that may influence whether a decision maker will want to invest in a program. For example, is there the capability (human capital, resources, skills etc.) to implement the intervention? Is the intervention affordable, i.e. is the necessary funding available (as distinct from whether it represents value for money)? Will the intervention be acceptable to the target population? These issues can be assessed through a process evaluation. Economic evaluation should not be undertaken if a comprehensive process evaluation has not been, or will not be, done.
Economic evaluations are highly dependent on the availability of evidence of program effectiveness. This may be based on either:
Economic evaluation is primarily about evaluating efficiency. There are two types of efficiency that are of importance: technical efficiency and allocative efficiency. Technical efficiency refers to the maximum output/outcome obtained for a given program from a given set of resources. Allocative efficiency is about the optimal allocation of resources across a portfolio of programs so as to achieve the maximisation of benefits for that portfolio. Thus, allocative efficiency focuses on whether better outcomes can be achieved by investing more in one program and less in another. Economic evaluations generally focus on technical efficiency, although cost-benefit analysis can also address questions of allocative efficiency. Cost-utility analysis can also address allocative efficiency, although only when health outcomes are the only outcome of interest across the mix of programs being considered for investment.
In some cases, the rationale for the program in question may be based on an objective, or objectives, other than value for money. For example, equity may be an over-riding criterion for providing a program that seeks to improve health outcomes for certain disadvantaged populations.
In most cases, economic evaluations promote efficiency but do not address equity. Equity refers to fairness. Economic evaluations determine the program that maximises health gain at least cost (i.e. efficiency) for the respective population as a whole. However, population health programs often target specific groups of people (e.g. men or women, people with different socioeconomic status, Aboriginal or Torres Strait Islander populations or other social/ethnic groups), where there are inequalities in health compared to the general population. There is often a trade-off between efficiency and equity, because the most efficient program (i.e. the one that provides the most health gain overall) is not always the most equitable, as programs targeting marginalised groups may require more resources to implement and may not be as effective. Hence decision makers need to assess the results of an economic evaluation alongside other data on equity in order to ascertain a more complete picture of the social impact and investment case for a program. Although there are methods available for incorporating equity considerations alongside an economic evaluation (see Appendix 3), in practice these are rarely deployed.8-10
Agencies are expected to prioritise evaluation, including economic evaluation, of larger, more strategic and/or risky programs.1 Sometimes, a policy or program in question involves little or no investment of resources, such as the enactment of public regulations or a new tax on tobacco. On the face of it, there seems to be little scope for economic evaluation of such interventions as they may appear to be ‘free’ or indeed revenue generating. However, it needs to be recognised that such programs potentially have implications for downstream costs to individuals, the community and the government (e.g. costs savings from reduced hospitalisations for chronic diseases, costs involved with law enforcement of the new tax). In these instances, if an economic evaluation is undertaken, the challenge is in capturing relevant costs and consequences.
Another issue to consider, which is not directly addressed in economic evaluation, is the scalability of the program. Once a program has been shown to be effective or cost-effective, it can be rolled out to a wider population than the one in which the original evaluation was carried out. The challenge is to assess how well the evaluation evidence gathered during the formal evaluation can be generalised to the program once it is scaled up. For example, will capacity constraints, such as a lack of staff, undermine the ability of governments to scale up the program? This is important in ascertaining whether the outcomes of a program that has been shown to be effective and cost-effective on a small scale will successfully translate into population-wide health improvements (see the Evidence and Evaluation Guidance Series publication Increasing the Scale of Population Health Interventions: A Guide).11
Scalability and plans for scaling up a program may be important in the consideration of investment options, but may also inform the decision whether or not to undertake an economic evaluation, as they relate to the size of the investment and the strategic importance of the program.
Data collection in itself can be a significant impost and expense, and good quality data are essential for high quality economic evaluations. It is important to consider early the types of data likely to be needed for an economic evaluation and whether these data are likely to be available and accessible, or alternatively collectable, and affordable.
Engaging an external evaluation consultant is important where there is a need for special evaluation expertise and/or an independent assessment of the program. It is more likely that the expertise required to conduct a high-quality economic evaluation will need to be sourced from outside the team engaging the independent evaluator than is the case for evaluations of program implementation and effectiveness.
An evaluation plan that is agreed in consultation with stakeholders can help ensure a clear, shared understanding of the purpose of an evaluation and its process (see element 3 in Appendix 2). For external evaluations, elements of the economic evaluation plan will form the basis for a request for tender (RFT) document and a contract with the successful evaluator (see Section 6.1 of the companion document Planning and Managing Program Evaluations: A Guide). Note that not all of the information required for a comprehensive economic evaluation plan may be known when preparing the RFT, and the successful tenderer may add value to the plan. In addition, external economic expertise may need to be sourced at, or prior to, the development of the RFT for the evaluation, depending on the level of economic expertise in the team engaging the independent evaluator. An external agency could provide advice on a draft RFT or could be commissioned, as a first step, to develop an economic evaluation options paper.
The decision of how much to invest in an economic evaluation in monetary terms should be taken on a case-by-case basis, given the different aims, size, perspective and scope of each program of interest. One of the key drivers of costs associated with an economic evaluation is data collection, which may include linkage to patient-level healthcare utilisation data (e.g. hospital records, Medicare records). Similarly, if quality-adjusted life years (QALYs)‡ were the outcome of interest and the economic evaluation required collection of data on QALYs (as opposed to obtaining data from the relevant literature), then one needs to take into account the costs involved in administering the questionnaire (potentially at different points in time), including the associated staff costs. Other drivers of the cost of an economic evaluation include, but are not limited to, modelling (if required) of the economic evaluation into the future and conducting systematic reviews of relevant evidence. A rough estimate of cost for a program evaluation is around 10% of the program costs;12 around 20-40% of these evaluation costs should be set aside for the economic evaluation.
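The budgeting rule of thumb above (evaluation around 10% of program costs, with 20-40% of that for the economic evaluation) can be sketched as a quick calculation. The program cost below is hypothetical:

```python
def evaluation_budget(program_cost, eval_share=0.10, econ_share=(0.20, 0.40)):
    """Rough budgeting rule: evaluation ~10% of program costs, with
    20-40% of the evaluation budget set aside for the economic component."""
    eval_cost = program_cost * eval_share
    econ_low = eval_cost * econ_share[0]
    econ_high = eval_cost * econ_share[1]
    return eval_cost, econ_low, econ_high

# Hypothetical $2 million program
total_eval, econ_low, econ_high = evaluation_budget(2_000_000)
```

On these assumed figures, the evaluation budget would be around $200,000, with $40,000-$80,000 of that for the economic evaluation.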
The approach to engaging an independent evaluator for economic evaluations should follow the same processes as for other evaluations (see Appendix 2). Ideally, the economic evaluation should be planned at the same time as the development of the intervention and a data collection strategy developed to enable economic data (alongside other evaluation data) to be collected concurrently with the implementation of the program.
This guide identifies and explains considerations specific to the development of economic evaluation plans.
When engaging an independent evaluator for an economic evaluation, the prospective evaluators will need to be guided in framing the analysis. This involves a number of tasks:
The perspective of the economic evaluation is the point of view from which the costs and benefits of the program are to be analysed. The economic evaluation analysis can be conducted from a range of perspectives, including, but not limited to, the agency that funded the program, the health sector, other government sectors such as housing or education, the public sector more generally, particular population subgroups or communities that the program is targeting, or in its broadest form, the societal perspective which takes into account all the costs and benefits accrued by whomever is affected by the program.
While it would be ideal to take a societal perspective,13 in practice collecting all relevant cost and benefit information is costly and very time-consuming; for this reason, the health sector is the most commonly used perspective for health economic evaluations. The choice of perspective can influence the conclusions drawn from the economic evaluation.
The team engaging the independent evaluator will need to make a choice regarding the perspective. Relevant questions are:
Whichever perspective is chosen, it is important to ensure that all important costs and benefits are captured within it. The choice of perspective thus dictates the data collection strategy and, in particular, the types of costs that are to be estimated in the evaluation.
By way of illustration, the implementation of a population health program could see costs and benefits fall upon a range of different parties, since the achievement of population health outcomes often depends on action in non-health sectors. A healthy eating information program at school, for example, would impose costs on the education department and generate benefits for the health sector. The results of an economic evaluation will differ depending on the perspective taken. Although the team engaging the independent evaluator may only be interested in their own particular perspective, a danger is that a program deemed to be cost-effective through the lens of a single agency may only achieve this due to a shifting of costs onto other parties. As such, even if a single agency perspective is the most relevant to the investment decision at hand, it is good practice to supplement this primary analysis with secondary analyses that look at alternative perspectives such as ‘whole of government’, ‘health sector’ and ‘societal’. This will help untangle issues of cost-shifting from those of efficiency.
Economic evaluation is essentially a comparative analysis between two or more different options – usually a new intervention versus the status quo or ‘do nothing different’ option. The comparator is generally intended to reflect current practice or what was historically done prior to the program of interest. Ultimately the question that needs to be addressed in defining the comparator is ‘what would be in place if the program in question did not exist?’ It is important that the comparator is realistic. If the comparator is based on an unfavourable account of current practice, the evaluation will generate results that potentially overstate the added value and cost effectiveness of the new program.
The timeframe represents the period over which evidence of costs and outcomes will be collected. Deciding on a timeframe requires the team engaging the independent evaluator to identify the potential health outcomes associated with the program, how long the program needs to be implemented to exert enough influence to achieve these outcomes, and the length of time over which these outcomes are likely to accrue. It is important to note that the costs and benefits of some population health programs can extend many years after the program has concluded. In such circumstances it may not be possible to rely completely on primary data and the health economic evaluation will need to use modelling techniques to extrapolate costs and outcomes into the future.
Six forms of economic evaluation applied in population health are summarised below and then described in more detail later. Drummond et al. (2005)4 provides further reading on these economic evaluation techniques (except cost-efficiency analysis); examples of each technique from the literature are provided in the Economic evaluation techniques section. NSW Treasury recommends cost-benefit analysis as the preferred approach for evaluating NSW government programs because it captures social and environmental impacts, as well as economic impacts.
Cost-minimisation analysis (CMA)
Outcomes measured in: Monetary units
Description: Outcomes are assumed to be equal between alternatives and thus are not assessed; the relative costs of the programs are compared.
Strengths: Simplest of all forms of economic evaluation.
Limitations: There are very limited circumstances where the assumption of equal health outcomes can be made.

Cost-effectiveness analysis (CEA)†
Outcomes measured in: Natural health units
Strengths: Enables comparison of programs using the same health outcomes.
Limitations: Limited to a single dimension of effectiveness, so it does not capture the multidimensional outcomes of most population health programs.

Cost-efficiency analysis
Outcomes measured in: Service outputs
Description: A modification of CEA where the benefits are service outputs rather than health outcomes; the focus is on minimising the cost per unit of output.
Limitations: Does not consider the potential impact on health outcomes.

Cost-utility analysis (CUA)†
Outcomes measured in: QALYs/DALYs‡
Description: Estimates costs in monetary terms, with benefits expressed as either QALYs or DALYs.
Strengths: A common outcome measure is provided so that different programs can be compared.
Limitations: There are multiple methods for evaluating quality of life, which could affect results; population health programs also have benefits beyond those captured in a QALY or DALY.

Cost-consequences analysis (CCA)
Outcomes measured in: Natural health units, potentially across multiple outcomes
Description: A modification of CEA where all important outcomes are profiled so that none may be overlooked.
Strengths: Ensures all outcomes of importance are assessed.
Limitations: Difficult to determine whether a program is effective if some outcomes improve and others deteriorate.

Cost-benefit analysis (CBA)
Outcomes measured in: Monetary units
Description: Values and compares all of the costs (C) and benefits (B) of programs in equivalent monetary terms; an intervention is considered efficient if B − C > 0 or B/C > 1.
Strengths: Comparability across programs that generate different types of benefits, inside or outside of the health sector.
Limitations: Difficulty in assigning a monetary value to the benefits of a program.
† Further details regarding how to interpret cost-effectiveness and cost-utility results are provided in Appendix 4.
‡ See cost-utility analysis and key definitions for details on QALYs and DALYs.
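As a minimal sketch, the CBA decision rule described above (an intervention is considered efficient if B − C > 0, equivalently B/C > 1) can be expressed as follows; the monetary figures are hypothetical:

```python
def cost_benefit(benefits, costs):
    """Net benefit (B - C) and benefit-cost ratio (B / C).
    The intervention is considered efficient if B - C > 0, i.e. B / C > 1."""
    return benefits - costs, benefits / costs

# Hypothetical figures: $1.5m of monetised benefits against $1.0m of costs
net_benefit, bcr = cost_benefit(benefits=1_500_000, costs=1_000_000)
efficient = net_benefit > 0  # equivalently, bcr > 1
```

Here the net benefit is $500,000 and the benefit-cost ratio is 1.5, so on these assumed figures the intervention would be considered efficient.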
Clark et al. have outlined a simple flow diagram to assist in choosing the appropriate economic evaluation technique for different situations.14 This has been adapted for use in this guide (Figure 2). The particular technique to be chosen should be determined by the nature of the program alternatives under consideration for investment.
Other factors, such as the availability of relevant and reliable data, the resources assigned to the economic evaluation, the requirements of those commissioning the economic evaluation, the feasibility of the research, and the decisions that will be made using the results of the evaluation might also influence the choice of technique.
If a decision is made not to go ahead with an economic evaluation, at the very least the major elements of the program should be costed (referred to as costing in Figure 2) to provide information for program management.
A cost-minimisation analysis (CMA) is conducted when the comparison involves two or more programs (usually including a status quo option) in which the outcomes are assumed to be, or have been demonstrated to be, equivalent and thus the comparison is made solely on the basis of cost. The program which accrues the lowest cost would be the most desirable from an economic perspective. CMA is quite a narrow form of analysis and should be undertaken with caution as the assumption of equivalent outcomes is often difficult to justify.
Mariño et al. undertook a cost-minimisation analysis comparing a new community-based oral health promotion program versus usual practice among immigrant older adults in Melbourne, Australia.15 The intervention program incorporated oral health seminars; one-to-one oral hygiene sessions demonstrating tooth brushing and dental flossing; and the provision of relevant oral health products. Usual practice was non-tailored one-on-one chairside oral hygiene instruction at a public dental clinic over 6 weeks. The outcome of interest (assumed equal between the two groups) was a reduction in gingival bleeding.
The cost-minimisation analysis found that the community-based intervention would cost $69.65 per participant, whereas the chairside instruction would cost $401.85. The program would therefore result in a saving of $332.20 per person in favour of the community-based intervention over a six-week period.
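Using the per-participant costs reported in this example, the cost-minimisation logic (outcomes assumed equal, so simply choose the lowest-cost option) can be sketched as:

```python
# Per-participant costs from the Mariño et al. example; outcomes are
# assumed equal, so CMA selects the lowest-cost option.
cost_per_participant = {
    "community_based": 69.65,
    "chairside_usual_practice": 401.85,
}

preferred = min(cost_per_participant, key=cost_per_participant.get)
saving = (cost_per_participant["chairside_usual_practice"]
          - cost_per_participant["community_based"])
```

This reproduces the reported result: the community-based program is preferred, with a saving of $332.20 per person.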
A cost-effectiveness analysis (CEA) is carried out when programs being compared are similar to the extent that their outcomes can be valued in the same units of health gain. Typically, cost-effectiveness analysis produces an incremental cost-effectiveness ratio presented in terms of a cost-per-unit of health outcome gained relative to the comparator (e.g. incremental cost per case prevented or incremental cost per life year gained). This is the most common form of economic evaluation in health. Its advantage is that it provides a fairly transparent means of comparing the costs and outcomes of interventions. However, a potential weakness of CEA is the lack of comparability of the relative value of health outcomes across different programs (e.g. incremental cost per fall prevented compared to incremental cost per death averted).
A variation of CEA is cost-efficiency analysis. It compares options in terms of cost relative to a common measure of output (e.g. client visited, service delivered, procedure performed). It differs from conventional CEA because its focus is on service outputs rather than health outcomes. The objective of cost-efficiency analysis is to achieve the lowest cost per unit of output, the assumption being that potential differences in health outcomes between options either do not exist, are difficult to measure or are irrelevant to the question at hand. In health economics this category of evaluation tends to be grouped under CEA.
The Healthy Beginnings Trial by Hayes et al. set out to determine the cost-effectiveness of an early-childhood obesity prevention program delivered to families in socioeconomically disadvantaged areas of Sydney, Australia.16 The economic evaluation was a complete-case analysis (i.e. including only participants with complete data for the length of the study) of the costs and cost-effectiveness of the intervention during the intervention phase, up to age 2 years only. The perspective was that of the health care funder.
Height and weight were measured for the infant patients at 2 years of age to calculate comparative body mass index (BMI). The direct costs of delivering the intervention over 2 years included staff time, vehicle purchase, vehicle running costs for home visits, costs of training community nurses, educational materials, and equipment costs of scales and portable stadiometers. Downstream costs due to healthcare utilisation by participants were assessed through analyses of deidentified claims details for individual patients under the Medicare Benefits Schedule (MBS) and Pharmaceutical Benefits Scheme (PBS), and data linkage to the NSW Admitted Patient Data Collection (for hospitalisations) and the NSW Emergency Department Data Collection (for emergency presentations).
A discount rate of 5% per year was used.
The cost of the intervention over 2 years was $1,309 per child. The mean (95% Confidence Interval) costs of other healthcare, over the first 2 years of life, were $2,706 ($2,238–$3,175) in the intervention group and $2,582 ($2,199–$2,964) for usual care. The incremental cost-effectiveness ratio (ICER) was $4,230 per unit of BMI avoided based on results from the trial. Under a more realistic model of intervention delivery with shorter travel times for home visits, the ICER was $2,697 per unit of BMI avoided.
Cost-utility analysis (CUA) is one means of addressing a limitation of cost-effectiveness analysis: namely, its limited comparability based on its reliance on a single, program-specific measure of outcome. CUA uses either quality-adjusted life years (QALYs) or disability-adjusted life years (DALYs) as outcome measures and these can be employed as a means of comparing across diverse sets of programs.
QALYs are a measure of health outcomes in which life expectancy, in terms of life years, is weighted by an index of quality of life and measured on a scale in which 1 represents full health and zero represents health states equivalent to death. For instance, if an intervention results in a 10-year gain in life expectancy but the quality of life of each of those life-years is valued at 0.5, then the QALY gain over the 10-year period is deemed to be 5. Cost-utility analysis enables comparison of diverse interventions because it accounts for both length and quality of life. It also enables comparison across programs which are focused on different areas of population health as the benefits are measured in the same units (QALYs or DALYs).
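The QALY arithmetic in the example above is a simple weighting of life years by quality of life. The following Python sketch restates it as a worked illustration (the figures are those of the example, not from any study):

```python
def qalys_gained(life_years: float, quality_weight: float) -> float:
    """QALYs = life years gained x quality-of-life weight.

    The weight is on a scale where 1 represents full health and
    0 represents health states equivalent to death.
    """
    return life_years * quality_weight

# The example from the text: a 10-year gain in life expectancy,
# with each year valued at a quality weight of 0.5.
print(qalys_gained(10, 0.5))  # -> 5.0
```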
QALYs are recommended for use in economic evaluations of health regulatory programs in guidelines produced by the Pharmaceutical Benefits Advisory Committee (PBAC) in Australia and the National Institute for Health and Clinical Excellence (NICE) in the UK.
There are a number of methods available for assessing quality of life for the purposes of estimating QALYs. The method generally recommended for health technology assessment (including by the Australian PBAC) is the use of a multi-attribute utility index (MAUI) such as the EuroQol 5D (EQ-5D), Health Utilities Index Mark 3 (HUI3), SF-6D or the Assessment of Quality of Life (AQoL), which are questionnaires used to generate preference-based measures of health status and health-related quality of life to estimate QALYs in economic evaluations. In principle there is no reason why these measures cannot be used in population health, although in practice it is unlikely that changes in these outcomes will be seen within the limited timeframe of most studies. Further information on these measures can be found in the Useful Resources section.
DALYs are a measure of overall disease burden, which is expressed as the number of years that are lost due to ill-health, disability or early death. They were developed by the World Health Organization (WHO) primarily to enable assessments of the global burden of diseases. A DALY can be thought of as the equivalent of one lost year of ‘healthy life’. DALYs can be measured as the sum of the years of life lost (YLL) due to premature mortality in the population and the years of life lost due to disability (YLD) for people living with the health condition or its consequences. The sum of DALYs measured across a population is the gap between the current health status of the population and the ‘ideal’ situation where the entire population lives to an advanced age, free of ill-health or disability. However, DALYs have also been used in economic evaluation whereby the disability weights used in the assessment of DALYs for a particular disease are used to weight years of life lost. Such weights are based on pre-assigned values generated by the WHO and operate in much the same manner as the quality of life weights used to assess QALYs, except in reverse, where the DALYs are assessed in terms of the number averted, while QALYs are assessed in terms of the number gained.
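The DALY components described above (YLL plus YLD) can be sketched as a short calculation. The case numbers, disability weight and duration below are hypothetical, invented for illustration; they are not WHO figures:

```python
def yld(cases: float, disability_weight: float, avg_duration_years: float) -> float:
    """Years lived with disability: cases x disability weight x average duration."""
    return cases * disability_weight * avg_duration_years

def dalys(years_of_life_lost: float, years_lived_with_disability: float) -> float:
    """DALYs = YLL (premature mortality) + YLD (time lived in ill health)."""
    return years_of_life_lost + years_lived_with_disability

# Hypothetical condition: 1,200 YLL from premature deaths, plus
# 500 cases x disability weight 0.3 x 4 years lived with the condition.
burden = dalys(1_200, yld(500, 0.3, 4))
print(burden)  # -> 1800.0
```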
Dalziel et al. conducted a modelled cost-utility analysis in New Zealand to determine whether a physical activity counselling program was cost-effective in general practice.17 The cost-utility of the intervention was compared with “usual care” (assumed to be the patient being routinely seen in primary care).
The economic evaluation took a health system perspective, with the effectiveness of the program based on published trial data of 878 inactive patients who presented to general practice, with costs collected as part of the trial. The trial was over a period of 12 months, with a Markov Model developed to extrapolate over an individual’s lifetime. The main outcome measure was cost per QALY gained. The incremental cost-utility of the program was NZ $2,053 per QALY.
The study found that if decision makers were willing to pay at least NZ $2,000 per QALY, the program was likely to be better value for money than usual care.
Lal et al. conducted a cost-utility analysis, from a health system perspective, using a deterministic model to assess the impact of tobacco control programs on costs and health.18 The analysis was a cost-effectiveness study evaluating the impact of a call-back counselling service for smoking cessation (which included multiple counselling sessions and self-help materials) as part of the Quitline in Queensland, Western Australia and the Northern Territory, compared to current practice. Current practice was defined as provision of counselling if requested through the initial call to Quitline.
The cost-utility analysis assessed the potential impact of varying tobacco control interventions on costs and health using data from a similar counselling service in Victoria and the literature. Varying estimates of efficacy and cost from these sources were used and current practice was used as the comparator. The outcome measure was disability-adjusted life years (DALYs) averted over a lifetime. Costs were obtained and adapted from the Victorian study which included telephone counsellors, team leaders, recruitment of smokers by GPs and counselling sessions with smokers. Costs and benefits were adjusted to 2010 Australian dollars, with a discount rate of 3%.
The introduction of call-back counselling for smoking cessation in Quitline achieved net cost savings because the cost offsets were greater than the cost of the intervention. The study found that even when the cost offsets (the projected healthcare costs that would have been incurred in the absence of the intervention) were excluded, the cost per quitter was $773 and the incremental cost-effectiveness ratio was $294 per DALY.
Cost-consequences analysis (CCA) recognises that there are often multiple outcomes from an intervention, which may include a range of health and/or non-health benefits. This form of economic evaluation may appeal to population health decision makers due to the multi-dimensional outcomes of their programs. CCA involves estimating changes resulting from an intervention across each type of outcome, measured in their natural units. This type of evaluation is particularly useful for interventions where, in addition to health gain, an objective may be to initiate other valuable changes within an organisation or community,19 for example, the encouragement of volunteer activity through a health promotion program.
The general limitation of CCA is that because it uses multiple measures of outcome it does not always provide decision makers with a clear indication on whether or not to invest. It is often employed as a supplement rather than as an alternative to approaches such as CEA or CUA, which reduce the evaluation to a single numerical value (i.e. a cost-effectiveness or cost-utility ratio). Ideally, a CCA is conducted with a prespecified protocol outlining the outcomes (or ‘consequences’) of interest, along with the rationale for their inclusion.20
Moss et al. performed a cost-consequence analysis of providing women with mild gestational diabetes mellitus with dietary advice, blood glucose monitoring and insulin therapy as needed, compared with routine pregnancy care, using data from a multicentre randomised clinical trial in Australia.21
Primary clinical outcomes were perinatal deaths, serious perinatal complications, admission to neonatal nursery, jaundice requiring phototherapy, induction of labour and caesarean delivery. Economic costs measured were outpatient and inpatient hospital costs.
The results showed that for every 100 women who were offered the intervention in addition to routine obstetric care, $53,985 additional direct costs were incurred at the hospital and $6,251 additional costs were incurred by women and their families. There were 2.2 fewer babies who experienced serious perinatal complications and 1.0 fewer babies experiencing perinatal death for every 100 women. The study found that the additional costs associated with achieving reductions in perinatal mortality and serious complications were justified.
A cost-benefit analysis (CBA) is the broadest form of economic evaluation and is typically carried out using a societal perspective (i.e. including costs and benefits to all individuals and agencies in society). Like cost-consequences analysis, cost-benefit analysis may be of particular value in population health where programs often seek to achieve a diverse set of outcomes. The defining characteristic of cost-benefit analysis is that it values the benefits of programs in monetary terms. Strictly, all costs and benefits should be included; however, studies labelled as cost-benefit analyses often measure only those costs and benefits that can be easily monetised, missing relevant outcomes that are not amenable to such valuation and thereby creating bias in the evaluation.
As costs and benefits are valued in the same (monetary) units, the advantage of cost-benefit analysis is that it provides a simple decision rule for decision makers: if benefits to society exceed the costs to society, then the program should be funded and vice versa (although other factors such as feasibility and equity may need to be considered). In relation to health programs this means undertaking the potentially contentious task of valuing lives saved or other dimensions of health in dollar terms (see Appendix 5 for discussion of methods used to derive such monetary values).
Wang et al. conducted a cost-benefit analysis, from a public health perspective, of physical activity using bike or pedestrian trails to reduce health care costs associated with inactivity in Lincoln, Nebraska, USA.22
The cost of construction and annual maintenance of five bike/pedestrian trails was obtained from the city’s Recreational Trails Census Report and the literature. The trails were assumed to last for 30 years, with the construction costs allocated evenly over that period. The annual cost of using the trails, which included construction and maintenance, was US$209.28 per user. The direct health benefit was measured using the estimated difference in direct medical costs between active persons and their inactive counterparts. Using the National Medical Expenditure Survey, the difference was estimated to be US$564.
Sensitivity analysis was conducted using worst and best-case scenarios for key parameters (construction and maintenance of trails, equipment and travel costs, direct health benefit, the life of the trails).
The benefit-cost ratios ranged from 1.65 to 13.40 with an average of 2.94. This study showed that, on average, every US$1 invested in trails returned US$2.94 in direct medical benefit.
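A benefit-cost ratio is simply monetised benefit divided by cost. The sketch below applies the per-user figures quoted above; note that this crude calculation will not exactly reproduce the study’s averaged ratio, which reflected its scenario analyses and sensitivity ranges:

```python
def benefit_cost_ratio(benefit_per_user: float, cost_per_user: float) -> float:
    """A ratio above 1 means each dollar spent returns more than a dollar of benefit."""
    return benefit_per_user / cost_per_user

# Crude per-user calculation from the figures quoted above: US$564 direct
# medical benefit per active person vs US$209.28 annual cost per trail user.
# The study's own ratios (1.65 to 13.40, average 2.94) reflect additional
# scenario assumptions, so this single figure is illustrative only.
ratio = benefit_cost_ratio(564.00, 209.28)
```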
The accuracy and usefulness of cost data can be substantially improved if methods for its collection are planned prior to program implementation, for example, the development of surveys or diaries for recording costs and the processes surrounding ethics approval and consent to release data. This allows the collection of cost data to be built into program delivery.
It is important when costing an intervention to consider all the types of costs that may be incurred that are relevant to the intervention.
Costs can be categorised into four types:
Guidance on how to collect direct and indirect costs associated with large-scale health programs can be found in Issues in the Costing of Large Projects in Health and Healthcare.23
There are three methods by which data on healthcare utilisation for the purposes of assessing cost offsets can generally be obtained. These methods may also provide information about indirect costs and non-health cost offsets:
Often economic modelling is required within health economic evaluation as a means of generating estimates of long-term costs and benefits. Notably, this is standard practice in Australia for demonstrating the cost-effectiveness of health technologies for listing on the PBS and MBS. There are three main reasons why evidence from individual studies may need to be augmented with external evidence (such as through literature review) to enable modelled estimates of the costs and health benefits of an intervention to:
A number of techniques are available to carry out the modelling, including decision tree analysis, Markov modelling and Monte Carlo simulation. These involve the consolidation of multiple sources of evidence and as such, the validity of economic models is constrained by the quality of data available. Given the health benefits of population health programs are likely to be long term and subject to multiple influences, economic modelling has the advantage of being able to capture some of this complexity to an extent not possible through individual studies. Further information on these techniques can be found in the Useful Resources section.
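As a minimal illustration of one of these techniques, the Python sketch below runs a hypothetical three-state Markov cohort model (Well, Ill, Dead) over annual cycles. The states and transition probabilities are invented for illustration and do not come from any published model; in a real evaluation they would be drawn from trial data and the literature:

```python
# Hypothetical annual transition probabilities; each row sums to 1.
transitions = {
    "Well": {"Well": 0.90, "Ill": 0.08, "Dead": 0.02},
    "Ill":  {"Well": 0.10, "Ill": 0.75, "Dead": 0.15},
    "Dead": {"Well": 0.00, "Ill": 0.00, "Dead": 1.00},
}

def run_cohort(start: dict, cycles: int) -> dict:
    """Propagate the cohort's state shares through a number of annual cycles."""
    state = dict(start)
    for _ in range(cycles):
        nxt = {s: 0.0 for s in state}
        for s, share in state.items():
            for t, p in transitions[s].items():
                nxt[t] += share * p
        state = nxt
    return state

# Start the whole cohort in "Well" and extrapolate 20 one-year cycles,
# as a model might do to extend 12-month trial results over a lifetime.
final = run_cohort({"Well": 1.0, "Ill": 0.0, "Dead": 0.0}, cycles=20)
```

In practice, costs and QALY weights would be attached to each state and accumulated (with discounting) across cycles to produce the modelled cost-effectiveness estimates.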
Discounting is an adjustment made to the value of costs and outcomes occurring in the future and is standard practice in economic evaluation. One rationale for discounting is based on the assumption that society places a lower value on events that occur in the future than those that occur in the present (in terms of both costs and outcomes). That is, people would rather enjoy benefits now than defer them to the future. In practice, both costs and outcomes should be discounted, for both the intervention and the comparator program.
The cost-effectiveness of population health programs is often particularly sensitive to discounting, and the rate that is used, as outcomes could occur many years in the future.
For example, Torgerson and Raftery demonstrated the effects of discounting on the cost-effectiveness of hip fracture prevention.27 The undiscounted cost-effectiveness ratio for 10 years of hormone replacement therapy was estimated at £7,362 per QALY, whereas at a 6% discount rate, the discounted cost-effectiveness ratio was estimated at £42,374 per QALY. NSW Treasury recommends a 5% discount rate (in real terms).13 The recommended discount rate can differ according to the country or state in which the economic evaluation is conducted (e.g. the Australian Department of Health uses 5%28 and the UK has a recommended discount rate of 3.5% for both costs and benefits). Different discount rates should be tested in sensitivity analyses to determine whether they have an impact on the results. NSW Treasury recommends sensitivity testing of discount rates at 3% and 7% (in real terms) to test how robust the results are at these different rates.13
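As a worked illustration of the arithmetic, discounting divides a future value by (1 + r)^t, where r is the discount rate and t the number of years into the future. The sketch below uses an invented $10,000 benefit to show how NSW Treasury’s central 5% rate and the recommended 3%/7% sensitivity rates change its present value:

```python
def present_value(amount: float, rate: float, years: float) -> float:
    """Discount a future cost or benefit back to its present value."""
    return amount / (1 + rate) ** years

# A hypothetical $10,000 health benefit occurring 20 years from now,
# at the 3%, 5% and 7% (real) rates discussed in the text.
for r in (0.03, 0.05, 0.07):
    print(f"{r:.0%}: ${present_value(10_000, r, 20):,.2f}")
```

The further into the future a benefit occurs, and the higher the rate, the smaller its present value, which is why long-horizon population health programs are so sensitive to the rate chosen.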
All economic evaluations are subject to uncertainty. Assessing the impact of uncertainties on the results of an economic evaluation is therefore considered standard practice. Sensitivity analysis is conducted to test whether the results change substantially when the values of underlying variables, or assumptions made in the economic evaluation, are varied. It also identifies which variables contribute most to the uncertainty around the results of the economic evaluation.
One-way sensitivity analysis explores the impact on results if an assumed parameter is adjusted. For example, would a program remain a cost-effective intervention if the discount rate was varied from 5% to 10%? Other parameters that could be tested are:
More advanced sensitivity analyses can be conducted with the availability of individual-level data combined with modelling approaches. Probabilistic sensitivity analysis (PSA) is an example of such a technique and is described in detail by Briggs et al.29 PSA is now part of the guidance provided by the National Institute for Health and Care Excellence in the UK.
The fundamental reason for engaging an independent evaluator for an economic evaluation is to inform health policy and program decisions for the benefit of the NSW public. To this end the report of the evaluation should contain key inclusions such as those recommended by the International Society for Pharmacoeconomics and Outcomes Research in their checklist for reporting standards for health economic evaluations.30
A comprehensive report will allow readers, including the team engaging the independent evaluator, to assess whether:
Within the description of the economic techniques included in this guide, examples of typical results arising from the technique have been provided, along with information to assist in the interpretation of the results.
Depending on the level of economic expertise in the team engaging the independent evaluator, it may be prudent to seek independent economic advice on the quality of the evaluation report and the interpretation of the findings.
It is important to emphasise that economic evaluations provide evidence around whether a program of interest is worth investing in compared to alternatives. To this end, economic evaluations can provide a rational framework for decisions about investments. However, evidence from an economic evaluation should be considered alongside other evidence in making investment decisions, such as information on program feasibility and effectiveness, and equity considerations which may be relevant to the investment decision of interest.
§ Centre for Epidemiology and Evidence. Planning and Managing Program Evaluations: A Guide. Evidence and Evaluation Guidance Series, Population and Public Health Division. Sydney: NSW Ministry of Health; 2023.
# Ideally a program logic model should be developed in the program planning phase.
Cookson et al.†† describe three approaches to providing evidence on equity considerations that can be used alongside an economic evaluation:
This approach is the least costly and easiest to do as it does not involve the generation of any new quantitative evidence. Instead, it requires an outline and review of relevant equity considerations and background information that might be useful to decision-makers.
Background information may include patterns and causes of the type of health inequality being studied, information on the effects of similar interventions on inequality in other settings and the views of stakeholders on how important reducing a health inequality is compared to other potential uses of scarce resources that would benefit a population.
This approach looks at the impact the intervention is likely to have on health inequalities. As generation of new quantitative evidence is required, it is more complex than reviewing background information on equity. Here, standard evaluation methods can be used to determine the effectiveness or cost-effectiveness of an intervention across equity-relevant subgroups (e.g. socioeconomic status, ethnicity, age or gender).
Tugwell et al.* have proposed a method using existing epidemiological studies. One difficulty that may arise when using this approach is that some trials or studies may not examine the effect of an intervention on particular subgroups but rather the average effect on the general study population.
Where resources are available, simulation modelling can be used by combining data on existing patterns of health inequality and data on cost-effectiveness of the intervention for particular subgroups.
The aim of this approach is to estimate the opportunity cost of a particular equity consideration, determined by examining what the population forgoes in order to pursue it. Cookson et al.†† provide the example of the quality-adjusted life years (QALYs) that would be forgone in order to pursue an equity option instead of a QALY-maximising option.
An advantage of this approach is that it is flexible and can be used to answer other questions beyond equity considerations.
A disadvantage of this approach is that it does not look at benefits, only the cost of the equity consideration. This approach can be applied using standard methods of cost-effectiveness analysis.
†† Cookson R, Drummond M, Weatherly H. Explicit incorporation of equity considerations into economic evaluation of public health interventions. Health Economics, Policy and Law 2009; 4(2): 231–45.
* Tugwell P, de Savigny D, Hawker G, Robinson V. Applying clinical epidemiological methods to health equity: the equity effectiveness loop. BMJ 2006; 332(7537): 358–61.
In both cost-effectiveness analysis and cost-utility analysis, the program of interest is compared to an alternative in terms of costs and benefits. An incremental cost-effectiveness ratio (ICER) can be calculated which incorporates both variables of interest into one unit: the difference in costs between the program and its comparator, divided by the difference in their effects.
An ICER can be interpreted as the net cost for an additional unit of benefit. For example, in a cost-effectiveness study conducted for a falls prevention program, the findings could be presented as an incremental $10,000 per fall prevented, or in a cost-utility analysis, as an incremental $10,000 per QALY gained. A program with a lower ICER is deemed preferable to one with a higher ICER. However, in Australia there is no explicitly stated threshold for what is defined as cost-effective, as other relevant factors (equity, feasibility, affordability, the degree of uncertainty around the cost-effectiveness results, etc) need to be considered when making a decision. A review of submissions made to the Pharmaceutical Benefits Advisory Committee (PBAC) between 1991 and 1996 found the cost-effectiveness threshold lay between $37,000 and $69,000 per extra life year gained.§§
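The ICER arithmetic can be sketched as follows. The falls-program figures are hypothetical, chosen only to produce a $10,000-per-fall-prevented result of the kind described above:

```python
def icer(cost_new: float, cost_comparator: float,
         effect_new: float, effect_comparator: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra unit of benefit."""
    return (cost_new - cost_comparator) / (effect_new - effect_comparator)

# Hypothetical falls program: costs $150,000 vs $50,000 for usual care,
# and prevents 25 falls vs 15 -> $10,000 per additional fall prevented.
print(icer(150_000, 50_000, 25, 15))  # -> 10000.0
```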
Further information about interpreting cost-effectiveness results is available in the Useful Resources section.
Willingness to pay estimates provide a measure of the economic benefit arising from participation in a program. The approach is based on the premise that the value of a program is reflected in how much consumers are willing to pay for it; this assumes, of course, that consumers are well informed about the merits of the program in question. Willingness to pay estimates, regardless of how they are elicited, tend to be related to individuals’ ability to pay; when applied to the valuation of health, they therefore tend to value health gains to the rich more highly than gains to the poor. Willingness to pay estimates can be generated through either revealed preference or stated preference methods:
The human capital approach involves the valuation of health based on its contribution to individuals’ economic production. Production is generally valued by wage rates, based on the assumption that such rates reflect individuals’ contribution to production. As such, an intervention that increases life expectancy (such that an individual gains 10 working years) would be valued by the wage paid to that person over that period, subject to appropriate discounting. Although potentially useful in cost-benefit analyses, this approach has most commonly been used in the health economics literature within ‘burden of disease studies’ in which production losses generally form a significant component of the measured economic burdens to society, along with the costs of treatment. A general criticism of the human capital approach is the equity implications associated with valuing health according to income.
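The human capital valuation described above is a discounted stream of wages over the working years gained. The sketch below uses an invented annual wage and the 5% rate discussed earlier; it is illustrative only:

```python
def human_capital_value(annual_wage: float, working_years: int,
                        discount_rate: float) -> float:
    """Value of production gained: the discounted sum of annual wages
    over the additional working years an intervention provides."""
    return sum(annual_wage / (1 + discount_rate) ** t
               for t in range(1, working_years + 1))

# Hypothetical: an intervention gains an individual 10 working years
# at $60,000 per year, discounted at 5% per year.
value = human_capital_value(60_000, 10, 0.05)
```

Because each year’s wage is discounted, the result is noticeably less than the undiscounted $600,000, illustrating how discounting interacts with the human capital approach.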