Patient Satisfaction in New Zealand
Wednesday 22 October 2008
It is exactly eight years since the Sector Accountability & Funding Directorate (then known as the Crown Company Monitoring & Advisory Unit, or CCMAU) published the "Patient Satisfaction Survey Guidelines 2000". The report, which embodied the collaborative effort of several Ministry of Health staff and a team of public hospital Quality Managers and Customer Services personnel, described the newly proposed Inpatient and Outpatient questionnaires and explained in great detail the "Best Practice" methodology to be used by all New Zealand public hospitals so that they could monitor patient satisfaction accurately and reliably. In an accompanying letter, the then Minister of Health, Hon Annette King, said that (these guidelines) would:
* Improve the statistical robustness of survey results and the consistency with which DHBs can apply them
* Expand the base of the patient populations being surveyed
* Focus the questions asked on the key determinants of patient satisfaction, from the patients' perspective
The question that needs to be asked now is, has the implementation of the new survey gone to plan? And have we been able to increase the statistical robustness and the usefulness of the statistics obtained? More to the point, and keeping in mind the issues raised in previous publications (Zwier & Clarke 1999; 2001), are we now in a position to use the data to better understand and/or increase patient satisfaction?
In an attempt to answer this question, we have analysed the survey results submitted by each DHB to the Directorate over the last eight years. This database, which presently contains 210,000 inpatient and 231,000 outpatient records from 21 New Zealand DHBs, incorporates patient satisfaction ratings on 17 inpatient and 15 outpatient items respectively. It presents New Zealand with a treasure trove of information, both from the perspective of statistical analyses and from the potential use that can be made of it to further improve our patient satisfaction ratings.
To permit analyses of specific aspects of care, the inpatient questionnaire asks questions about patient perceptions of the Emergency Department, the availability of staff, the manner in which they were treated by staff (did they receive enough information, did the staff treat them with dignity and respect?), their opinion of the hospital's facilities (safety & security, cleanliness, food), discharge procedures, and the adequacy of communication between the different departments involved in their care.
The outpatient questionnaire covers the usual topics such as the patients' perceptions of the appointment system, the manner in which they were treated by staff (did they receive enough information, did staff ask permission to treat the patient?), their opinion of the clinic's facilities (e.g. cleanliness), the adequacy of communication between different departments involved in their care and their satisfaction with the organisation of their care with other service providers.
The present overview is divided into two separate sections:
1. an assessment of the reliability and validity of the questionnaire, and
2. an analysis of the results of the survey data using ESPRI software.
This unavoidably sketchy overview concludes with a recommendation regarding future requirements.
1. How reliable and valid is the data?
Whilst a thorough analysis of the data must necessarily be deferred to a later date, it may be of some interest to carry out a preliminary investigation into the reliability and validity of the present survey. If the survey were found to be severely lacking, a great deal of effort would have been made to no avail, and the public could rightly accuse the government of wasting good public hospital money.
Is the prescribed method implemented?
When we ask whether the DHBs are surveying their patient population using the method prescribed in the Patient Survey Guidelines, it is clear that some do but most don't. Table 1 shows that only Bay of Plenty and Taranaki consistently achieve the minimum number of required questionnaires returned by patients. Three other DHBs, i.e. Auckland, Canterbury and Nelson Marlborough, achieve this some of the time. Several DHBs, i.e. Capital & Coast, Otago and Waikato, miss out altogether on supplying the required quarterly data, while West Coast submits fewer than a dozen questionnaires each quarter and might as well not have participated. Even when DHBs send out a sufficiently large number of questionnaires, the response rate is in most cases quite low. Excluding obvious errors such as that made by Hutt, which in the first quarter of this year recorded sending out 600 questionnaires and receiving 609 responses, the average response rate among these DHBs is around 35% (see Table 1).
Table 1 Over and under target numbers and response rate by DHB
                     2007 Q3        2007 Q4        2008 Q1        2008 Q2
DHB                  O/U    Resp    O/U    Resp    O/U    Resp    O/U    Resp
Auckland             -25    28%     26     32%     90     34%     156    37%
Bay of Plenty        441    31%     385    33%     511    37%     390    29%
Canterbury           50     46%     -25    38%     24     43%     23     43%
Capital & Coast      86     33%     86     33%
Counties Manukau     -103   22%     -98    22%     9      26%     -73    20%
Hawke's Bay          -150   31%     -161   31%     -132   35%     -158   33%
Hutt                 -41    54%     -41    53%     278    107%    150    85%
Lakes                -49    31%     -94    30%     -34    33%     -32    33%
Mid Central          -114   44%     -75    51%     -116   44%     -74    51%
Nelson Marlborough   13     45%     -6     42%     -37    38%     27     46%
Northland            10     42%     -31    38%     -44    36%     -2     39%
Otago                -109   44%     -112   43%     -70    49%
South Canterbury     -129   42%     -131   33%     -97    45%     -97    42%
Southland            -149   36%     -111   43%     -156   35%     -134   39%
Tairawhiti           -188   26%     -171   24%     -154   24%     -194   25%
Taranaki             68     42%     29     36%     18     35%     23     36%
Waikato                     35%     36     40%
Wairarapa            -114   39%     -118   38%     -101   43%     -117   39%
Waitemata            -128   34%     -159   34%     -142   32%     -156   34%
West Coast           1      1%      12     9%
Whanganui            -68    43%     -31    41%     -50    41%     -51    39%
Furthermore, the bias in the sample caused by self-selection (older and European patients are more likely to respond than younger and Maori/Pacific patients) has led in virtually all cases to unrepresentative samples: older and European patients are over-represented, and younger and Maori/Pacific patients are under-represented.
Yet disappointingly, the agency charged with monitoring the implementation, i.e. the Sector Accountability and Funding Directorate of the Ministry of Health, which is responsible for funding, monitoring and ensuring the sector is compliant with accountability expectations, has taken no action to rectify these shortcomings. Consequently, for most DHBs the number of questionnaires used to calculate the quarterly patient satisfaction scores is insufficient, and the detailed reporting done by the Ministry of Health (e.g. in the quarterly DHB Hospital Benchmark Information Report) is shaky at best.
Instead of encouraging the DHBs to improve their performance and increase their sample sizes, the Directorate last week issued a directive to all DHBs that data on the make-up of the patient population (age, sex, ethnicity) was no longer required; the reason given was that the information wasn't used anyway. That this makes it impossible to check the extent to which samples accurately represent patient populations appears to have been regarded as unimportant.
But does this mean that the results of the nationwide patient survey are totally unreliable and worthless? What happens when we examine the reliability and validity of the data?
Reliability
Across the board, and on a scale where 1=very poor and 5=very good, average patient satisfaction ratings for inpatient services range from 3.74 (quality of hospital food) to 4.56 (treating the patient with dignity and respect). For outpatient services, the scores range from 4.33 (informing the outpatient about how long they would have to wait) to 4.52 (treating the patient with dignity and respect).
The scores are well distributed and have relatively large standard deviations ranging from 14% to 32%. The relatively smaller standard deviations on items measuring patients' rating on being treated with "dignity and respect" suggest the high scores are unanimously endorsed whereas, conversely, large standard deviations on items measuring satisfaction with hospital food (inpatients) and waiting times (outpatients) demonstrate that there is considerable variability across the 21 DHBs on these measures of quality.
To determine the reliability of the inpatient and outpatient questionnaires, we calculated the most commonly used measure of internal consistency, Cronbach's alpha. The value of alpha can range between 0 and 1: a set of items with an alpha above .60 is usually considered internally consistent, while an alpha above .80 signifies very high reliability.
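For readers wishing to reproduce this check, Cronbach's alpha can be computed directly from item-level ratings. The sketch below uses only the Python standard library; the ratings and the "communication" construct are illustrative, not taken from the survey data.

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for a set of questionnaire items.

    responses: one list of item ratings (e.g. 1-5) per respondent,
    with no missing values.
    """
    k = len(responses[0])  # number of items in the construct
    item_vars = [pvariance([r[i] for r in responses]) for i in range(k)]
    total_var = pvariance([sum(r) for r in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Illustrative ratings for a hypothetical three-item "communication" construct
ratings = [[5, 5, 4], [4, 4, 4], [3, 4, 3], [5, 4, 5], [2, 3, 2]]
print(round(cronbach_alpha(ratings), 2))  # high alpha: the items move together
```

When the items measure the same underlying construct, each respondent's ratings rise and fall together, the variance of the summed score dominates the item variances, and alpha approaches 1.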
Following Nelson et al (1989), who assessed the reliability and validity of the 68-item "Patient Judgement System" (PJS), we also measured the alpha statistic of the New Zealand Inpatient & Outpatient Questionnaires. Although the New Zealand questionnaires were not constructed to assess patient satisfaction on a set of dimensions (as does the 68-item PJS), our results show that on measures that gauge satisfaction among inpatients with specific aspects of treatment such as communication (i.e. providing explanation and information), adopting a personal approach and facets of organising patient care, unexpectedly high alpha levels of 0.87, 0.85 and 0.84 were achieved. Similar Cronbach alpha levels were achieved when constructs such as "explanation" and "a personal approach" were analysed among outpatient ratings.
Another method by which one can assess the reliability of a survey instrument is a test-retest reliability analysis. Test-retest estimates are obtained by repeating the measurement with the same questionnaire under conditions as nearly equivalent as possible. However, as it is not possible to re-administer the questionnaire to the same patients three months later, we instead compared the average absolute difference between the mean scores of consecutive quarters.
The results show extremely small changes in the average scores from one period to the next. If we compare the entire sample in this manner, the difference among inpatients and outpatients over comparable calendar quarters is less than half a percent. Without even taking into account the possibility that some of these differences are caused by actual changes in the delivery process, this stability of measurement provides further support for the reliability of the measures.
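This stability check amounts to a simple calculation over the run of quarterly means; a minimal sketch, using hypothetical quarterly figures rather than the survey data:

```python
def quarter_to_quarter_shift(quarterly_means):
    """Average absolute change in mean satisfaction between consecutive
    quarters; small values indicate a stable (reliable) measure."""
    diffs = [abs(b - a) for a, b in zip(quarterly_means, quarterly_means[1:])]
    return sum(diffs) / len(diffs)

# Hypothetical quarterly mean scores on the 1-5 scale
means = [4.41, 4.43, 4.40, 4.42]
print(round(quarter_to_quarter_shift(means), 3))  # a small shift, well under 1%
```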
Validity
Further analyses focussing on the annual period ending June 2008 show that there is substantial variability across the DHBs on all items in both questionnaires. These statistically significant differences between the DHBs (many at p<.01, all at p<.05) provide some support for the validity of the items used.
In the absence of a set of different scales all measuring the same construct, the best example of convergent validity must be the way in which all items are in some way or another associated with the one general validity indicator variable, namely an item which relates directly to the patient's overall satisfaction with his or her treatment.
The results indicate that, among inpatients, the "overall satisfaction" item is most highly correlated with the availability of hospital staff (item 13; r=0.68). Among outpatients, overall satisfaction is most strongly correlated with items inquiring about the quality of information given to patients on how to manage their condition after the visit to the clinic (item 14; r=0.64) and the extent to which care was coordinated with other healthcare providers such as GPs or midwives (item 12; r=0.63).
It is reassuring to note that the highest correlations were found between items that measured closely related aspects of patient care. For instance, among inpatients, information given by ED staff on (a) the patient's condition and (b) length of waiting time (items 1 and 2) were very strongly correlated (r=0.77). Among outpatients, the high correlation (r=0.70) between (a) approval of the effort exerted by staff to make an appointment time that suited the patient and (b) satisfaction with the appointment time itself (items 1 and 2) was most revealing.
Conversely, discriminant validity of the nation-wide patient survey is shown by the very low correlations between items such as satisfaction with the quality of hospital food and informed consent (r=0.26). Similarly, among outpatients, a low correlation was evident between the item measuring satisfaction with waiting time and cleanliness (r=0.29).
As the survey clearly distinguishes between items that ought to correlate with one another and items between which one would not expect to find a strong association, these findings provide additional empirical support for the validity of these items.
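The convergent and discriminant checks above rest on nothing more exotic than Pearson correlations between pairs of item ratings; a minimal sketch, with illustrative ratings rather than survey data:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two lists of ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical paired ratings for two closely related items
overall_satisfaction = [5, 4, 5, 3, 4, 2, 5, 4]
staff_availability = [5, 4, 4, 3, 5, 2, 5, 3]
print(round(pearson_r(overall_satisfaction, staff_availability), 2))
```

Items measuring closely related aspects of care should yield high r values, while conceptually unrelated items (food quality versus informed consent, say) should yield low ones.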
2. What do the results tell us?
Keeping in mind that the sample size is not sufficiently large to analyse the data on a quarterly basis, and acknowledging the lack of representativeness caused by self-selection of respondents, we can nevertheless have a look at the characteristics of the sample.
Age and sex
The inpatient sample from the latest 12 month period ending June 2008 consists of 23,431 patients: 12,539 female and 10,892 male. Figure 1 shows that the distribution of age between the two sexes is disproportionate, owing to the greater percentage of childbearing women in the 24-44 year age bracket.
Figure 1 Distribution of age and sex in the sample
Ethnicity
Across the board, 80% of these inpatients are European, 10% are Maori, 2% are Pacific Island and 2% are of Asian origin. Figure 2 shows that Maori and Pacific Island patients are disproportionately represented in the lower age bands, while European patients make up 94% of the over 85 year old age group.
Figure 2 Distribution of age and ethnic group in the sample
Comparing the distribution of non-European inpatients across all DHBs (West Coast is excluded because of its very small sample size), it is evident that Otago has the smallest and Counties Manukau the largest percentage of non-European inpatients (see Figure 3).
Figure 3 Distribution of non-European patients across DHBs
Satisfaction as a function of demographic variables
Before we can try to answer the question "How satisfied are New Zealand patients?", we need to check how patient satisfaction relates to these demographic variables. As expected, patient satisfaction rates are a function of age, sex and ethnic group. For instance, Figure 4 shows that age is strongly correlated with satisfaction: older patients are more likely to express greater satisfaction than are younger patients (p<.01).
Figure 4 Distribution of overall satisfaction as a function of age
Similarly, patient satisfaction correlates with the patient's sex (males are more likely to express satisfaction; p<.01) and ethnicity (European patients are more likely to report greater satisfaction; p<.01). Thus it is no surprise that hospitals with proportionally more female patients, more non-European patients and a younger population will tend to have lower patient satisfaction rates than hospitals serving older, predominantly European and male populations. Comparisons between DHBs will have to take this into account to be of any use.
The best way therefore to make appropriate and valid comparisons is either to apply a post-stratification weighting method (i.e. weighting each response using inverted selection probabilities multiplied by the ratios of expected to observed counts) or by confining one's analysis to a subset of the database, e.g. a specific age or ethnic group or sex.
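A minimal sketch of the post-stratification weighting described above, using hypothetical ethnic-group counts (both the group names and the numbers are illustrative, not drawn from the survey database):

```python
def post_strat_weights(sample, population):
    """Post-stratification weight per group: the group's share of the
    patient population divided by its share of the achieved sample, so
    over-represented groups are down-weighted and vice versa."""
    n_sample = sum(sample.values())
    n_pop = sum(population.values())
    return {g: (population[g] / n_pop) / (sample[g] / n_sample) for g in sample}

# Hypothetical counts by ethnic group: achieved sample vs. known population
sample_counts = {"European": 800, "Maori": 100, "Pacific": 20, "Asian": 20}
population_counts = {"European": 700, "Maori": 150, "Pacific": 40, "Asian": 50}
weights = post_strat_weights(sample_counts, population_counts)
print(weights["Maori"])  # under-represented group receives a weight above 1
```

Each response is then multiplied by its group's weight before means are calculated, so the weighted sample mirrors the known patient population.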
Another issue is the difference in size between New Zealand hospitals and District Health Boards. There is sufficient evidence to indicate that larger city hospitals, with their more complicated booking systems, more complex case management, more departments, more facilities and physically larger grounds, tend to achieve lower patient satisfaction than smaller country hospitals. To provide a level playing field when comparing patient satisfaction rates, the New Zealand Patient Satisfaction Index applies a weighting that takes into account differences in patient profile between the various DHBs and compares satisfaction ratings between DHBs of approximately similar size.
Patient satisfaction in New Zealand
Now we are in a better position to answer the question: "How satisfied are New Zealand patients?"
Contrary to what you may have read in the many newspaper articles about discontented hospital patients, the analysis of the 23,166 inpatients who answered the general "overall satisfaction" question during the most recent 12 month period shows that 64% are very satisfied and a further 24% are satisfied. This suggests that, across the board, 88% of all inpatients say they had a good hospital experience. Only 8% of inpatients rate their satisfaction as merely "average", while 3% express overall dissatisfaction.

Similarly, of the 27,384 outpatients who answered the same question about their overall satisfaction with outpatient services and facilities, 67% indicate that their satisfaction is "very good" and a further 24% reply with "good". This means that almost nine out of ten outpatients are positive about their treatment by the outpatient services. Yet 6% rate their satisfaction as "average" and only 2% are dissatisfied (1% respond with "poor" and another 1% with "very poor").

When we inquire whether these percentages have increased or decreased over time, we find that, while overall inpatient satisfaction has not changed much, there has been a large and significant improvement in outpatient satisfaction over the last eight years. This is illustrated in the control chart shown in Figure 5, which plots the "Upper Control Limit" and the "Lower Control Limit" of the series over the last 32 quarterly periods.
Figure 5 Control chart showing overall outpatient satisfaction in New Zealand
The Upper and Lower control limits will vary depending on the variation from quarter to quarter: the greater the variation, the wider the space between the limits. These control limits represent three standard deviations on either side of the distribution.
For any increase in satisfaction to be significant, the combined percentage of "very good" and "good" responses must be greater than the Upper Control Limit. Conversely, any real decrease in satisfaction can only occur when the series dips below the Lower Control Limit. The (rather arbitrary) partition of the graph represents the pre- and post-years of Helen Clark's second term as PM.
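The control limits themselves are straightforward to compute from a run of quarterly figures; a minimal sketch, with hypothetical percentages rather than the published series:

```python
from statistics import mean, pstdev

def control_limits(series, n_sigma=3):
    """Centre line and lower/upper control limits (mean +/- 3 sigma)
    for a run of quarterly satisfaction percentages."""
    centre = mean(series)
    sigma = pstdev(series)
    return centre - n_sigma * sigma, centre, centre + n_sigma * sigma

# Hypothetical quarterly "% good or very good" figures
quarters = [86.0, 87.5, 88.0, 87.0, 88.5, 89.0, 88.0, 89.5]
lcl, centre, ucl = control_limits(quarters)
print(round(lcl, 1), round(centre, 1), round(ucl, 1))
```

A new quarter's figure above the upper limit signals a real improvement; one below the lower limit signals a real decline; anything in between is ordinary quarter-to-quarter noise.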
This increase in outpatient satisfaction has been particularly evident in the smaller DHBs such as Hawke's Bay, Lakes, South Canterbury, Tairawhiti, Taranaki and Wairarapa. But if the patient survey was only able to show general satisfaction rates, any analysis would be rather limited and would not be able to show progress on specific aspects of care or identify which issues should be addressed. Having data available that stretches back to Sept 2000 allows us to ask questions such as "What was the impact on patient satisfaction when new facilities were built for inpatients?" For example, what happened to patient satisfaction with cleanliness of facilities at Auckland DHB when the new city hospital was opened in October 2003?
Figure 6 shows that after a short period of adjustment, there was a substantial and statistically significant (p<.05) increase in satisfaction with cleanliness in the years following the move to the new facilities.
Figure 6 Inpatient satisfaction with cleaning at ADHB
3. Conclusion
Based on the current dataset of some 210,000 inpatient and 231,000 outpatient records from 21 New Zealand DHBs, there is a myriad of questions that can be answered. Examples are:
* Which DHBs have experienced an increase and which a decrease in overall patient satisfaction?
* In what area(s) of patient care have the increases/decreases been most salient?
* What strategies to improve patient satisfaction have proved effective and which relatively ineffective?
We believe that a very careful comparison between hospitals on all 32 measures of patient satisfaction has the potential to generate extremely valuable information that can be used to increase patient satisfaction throughout New Zealand.
Gerard Zwier PhD
Managing Director
Health Services Consumer Research Limited
www.hscr.co.nz
Publishers of the www.healthconncetion.co.nz directory

References
1. CCMAU. Patient Satisfaction Survey Guidelines. Unpublished Crown Company Monitoring & Advisory Unit report, June 2000.
2. Nelson EC, Hays RD, Larson C, Batalden PB. The Patient Judgment System: reliability and validity. QRB, June 1989.
3. Zwier G, Clarke D. How well do we monitor patient satisfaction? Problems with the nation-wide patient survey. New Zealand Medical Journal 1999; 112 (1097): 371-5.
4. Zwier G, Clarke D. Benchmarking patient satisfaction: do we have a level playing field? New Zealand Health and Hospital, January-February 2001: 21-24.