This note explains technical improvements to beatonbenchmarks that provide a better, faster, and more value-for-money service.
In the beginning
beatonbenchmarks relies on the clients of professional services firms sharing their perceptions and experiences of various firms in each of the markets surveyed (1). Starting in 2003, the sample for beatonbenchmarks was obtained largely from the membership lists of a number of professional associations. These respondents first provided feedback about the associations that supplied their details, with a subset also providing ratings of one or more professional services firms with which they had recent experience.
As the reach of beatonbenchmarks grew, a number of firms began to submit their client databases for inclusion in the research. These firm-supplied contacts were initially in the minority, and they differed in several significant ways from the association-supplied contacts that made up the majority of the sample. They were more likely than association-supplied contacts to have direct experience with professional services firms, and more likely to provide feedback on one or more of those firms’ performance. They also tended to give slightly different ratings under some circumstances, which necessitated small adjustments when the two datasets were combined.
At the time, with association-supplied data still considered the ‘norm’, ratings were classified as “independent” or “dependent” according to the origin of the sample. A rating was treated as independent if the respondent was provided by an association, or by any firm other than the one being rated; it was treated as dependent only when the respondent was supplied by the rated firm itself. Based on this designation, weighting and scaling methods were used to bring firm-supplied responses into alignment with those supplied by associations. This system introduced an extra level of complexity to the data processing, but under the conditions at the time, with the majority of records sourced from the associations, it ensured comparability across the two types of respondent.
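The note does not specify beaton’s actual weighting formula, but the idea of aligning “dependent” ratings with the “independent” baseline can be sketched as follows. All field names, the scaling rule (matching group means), and the sample data are illustrative assumptions, not beaton’s method.

```python
# Hypothetical sketch: scale "dependent" ratings (clients supplied by
# the rated firm itself) so their mean matches the mean of
# "independent" ratings. Field names and data are illustrative only.

def align_ratings(ratings):
    """Rescale dependent scores so the group means coincide."""
    independent = [r["score"] for r in ratings if r["source"] == "independent"]
    dependent = [r["score"] for r in ratings if r["source"] == "dependent"]
    if not independent or not dependent:
        return ratings  # nothing to align
    factor = (sum(independent) / len(independent)) / (sum(dependent) / len(dependent))
    return [
        {**r, "score": r["score"] * factor} if r["source"] == "dependent" else r
        for r in ratings
    ]

sample = [
    {"source": "independent", "score": 7.0},
    {"source": "independent", "score": 8.0},
    {"source": "dependent", "score": 9.0},  # supplied by the rated firm
    {"source": "dependent", "score": 8.0},
]
aligned = align_ratings(sample)
```

In this sketch the dependent group’s mean (8.5) is pulled down to the independent mean (7.5); a production system would likely also weight by sample size and market, which is omitted here.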
A changing landscape
Over the subsequent years, as more firms became directly involved in the research, the ratio of association- to firm-supplied respondents continued to shift. The shift was accelerated by the introduction of the Client Choice Awards in 2005, to which well over 100 firms submit complete databases of their clients as a condition of entry. With the growing success of the Awards, professional services firms supplied more and more databases. The consequence was an increasing number of clients’ contact details being supplied by more than one firm – meaning that they were available from multiple sources, even without inclusion in an association database.
As of 2015, a majority of the total sample’s more than 500,000 records – 67% – was supplied by firms participating directly in the beatonbenchmarks research via the Client Choice Awards. Associations alone supplied only 33%. Looking specifically at ratings of firm performance, the proportions were even more skewed: 85% of firm ratings were from clients available on one or more firm databases, while just 15% came from association-only sources. This difference arises because firm-supplied respondents are more likely to provide ratings of the firms that they use, and more likely to rate multiple firms, than their association-supplied counterparts.
Specifically, in 2015:
51% of firm-supplied respondents rated at least one firm, compared with just 15% of association-only respondents, and
32% of firm-supplied respondents rated 2 or 3 different firms, compared with just 8% of association-only respondents.
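A quick back-of-envelope check shows how these two propensities combine with the 67%/33% sample split to produce the skew in rating sources. This simplified calculation considers only whether a respondent rates at least one firm (it ignores multiple ratings per respondent), so it approximates rather than reproduces the reported figure.

```python
# Approximate share of firm ratings coming from firm-supplied
# respondents, using the 2015 figures quoted in the text.

firm_share, assoc_share = 0.67, 0.33   # share of the total sample
firm_rate, assoc_rate = 0.51, 0.15     # proportion rating at least one firm

firm_raters = firm_share * firm_rate       # 0.3417 of the sample
assoc_raters = assoc_share * assoc_rate    # 0.0495 of the sample
firm_rating_share = firm_raters / (firm_raters + assoc_raters)

print(f"{firm_rating_share:.0%}")  # roughly 87%, close to the ~85% reported
```

The small gap between this estimate and the reported 85% is expected, since the calculation does not account for respondents rating two or three firms.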
In summary, firm-supplied respondents have come to make up an ever larger share of the sample over time, and these respondents tend to give more ratings of firms than association-supplied respondents do. This shift is schematically represented in the diagram.
As the primary objective of the research is to obtain robust client ratings of firm performance, this shift means that the mechanisms once required to adjust firm-supplied ratings to match association-supplied have become less relevant – as have the association-supplied records themselves.
At the same time as completing this review of sample and ratings composition, beaton has engaged in a wide-ranging consultation process with firms, aimed at improving the strategic relevance and value of the beatonbenchmarks product to those firms’ c-suites. This initiative further underscored the need for a simplified approach to the collection, analysis, and explanation of beatonbenchmarks in 2016 and beyond.
Looking forward: Technical improvements to beatonbenchmarks
From 2016 onwards, beatonbenchmarks will source respondents exclusively from firms’ databases of clients. This is in line with the primary objective of obtaining client feedback on firm performance, and is the natural consequence of the 13-year trend towards firm-supplied rather than association-supplied respondents.
One benefit of this rationalisation is that all respondents will now come from comparable sources. This removes the need for the complex systems of weighting and scaling that were previously required to reconcile ratings from different and fluctuating types of respondents.
Like any methodological change, this adjustment will cause a shift in results when compared to previous years. However, an extensive review of historical beatonbenchmarks data indicates that the changes are for the most part minor. In general, scores will shift down slightly, but this shift occurs across the market, such that trends and relative positions are maintained: it does not substantially alter the overall picture presented by the data. The conclusions that can be drawn from the data remain the same, while the scores themselves are both reliable, i.e. likely to be repeated on re-test, and valid, i.e. accurately representing the survey respondents’ perceptions.
To preserve internal consistency, longitudinal tracking will be re-created by back-dating the new method over the preceding four (or more, when required) years. This means that the scores shown for previous years will differ from those reported previously, but any changes observed in the new tracking charts can be regarded as “true” shifts, not artefacts of the change in method. Where this occurs in beatonbenchmarks reports, a full explanation will be provided.
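The back-dating step amounts to re-scoring each historical year under the new sourcing rule, so the whole tracking series is computed the same way. The sketch below shows the idea under stated assumptions: field names, the scoring rule (a simple mean), and the data are all hypothetical, as the note does not describe the real pipeline.

```python
# Illustrative sketch of back-dating: recompute each historical year's
# score using only firm-supplied responses, so the tracking series is
# internally consistent with the new method. All data is hypothetical.

def rebuild_tracking(responses_by_year):
    """Recompute each year's mean score from firm-supplied responses only."""
    tracking = {}
    for year, responses in sorted(responses_by_year.items()):
        firm_only = [r["score"] for r in responses if r["source"] == "firm"]
        tracking[year] = sum(firm_only) / len(firm_only) if firm_only else None
    return tracking

history = {
    2013: [{"source": "firm", "score": 7.2}, {"source": "association", "score": 7.8}],
    2014: [{"source": "firm", "score": 7.4}, {"source": "firm", "score": 7.0}],
}
print(rebuild_tracking(history))
```

Because every year is scored by the same rule, a movement between 2013 and 2014 in the rebuilt series reflects a genuine shift rather than the change of method.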
Better, faster, more value-for-money benefits for clients
The key benefits of this new streamlined approach may be summarised as follows:
An improved concentration of high-quality respondents – those in a position to rate one or more firms based on their own direct experience, which is the core objective of the research.
A simplified analysis process with all respondents derived from comparable sources.
Together, these changes mean a more efficient collection and production process in 2016 and beyond, with the savings passed on to clients subscribing to beatonbenchmarks. For more information about this and other improvements to beatonbenchmarks, click here.
(1) Currently the beatonbenchmarks service is available in Australia and New Zealand for larger professional services firms in accountancy, consulting engineering, intellectual property, law, and management consulting.
This post is also available as a PDF. Click Note on technical improvements to beatonbenchmarks.
Irene Rix describes herself as a ‘data tamer’. She is a highly skilled and versatile data analyst who helps mere mathematical mortals turn millions of data points into practical, actionable insights.
Irene consults on how to bring the Beaton Benchmarks product suite to Beaton’s clients in ways that contribute to their profitable growth, are easy to understand, and deliver more value-for-money.