Determining KOL Status – Are We Overpaying HCPs?

Key Opinion Leader (KOL) evaluation models have long been used by pharmaceutical companies to justify paying premium fees to HCPs. However, companies sometimes rely on KOL models with significant practical limitations, which can render the models unreliable and produce an inaccurate assessment of an HCP’s medical expertise. Furthermore, in the absence of a more nuanced assessment, these models may yield results that compromise a company’s Fair Market Value (FMV) evaluations, since FMV measures only the market value of different types and levels of medical expertise. From a company’s perspective, faulty KOL models open the door to potential overpayment of HCPs.

Existing Evaluation Models and Their Limitations

Evaluation models based on point systems currently dominate the landscape of KOL assessment tools. KOL status is determined either by the cumulative number of points associated with specific criteria or by the number of checkbox criteria an HCP meets. These models have the advantage of being easy to create and implement, but they also carry numerous limitations that increase the likelihood of overpayment. These include a bias toward breadth of recognition, a selection vs. expertise bias, and a subjectivity bias.
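
To make the mechanics concrete, the sketch below shows how a typical points-based model tallies a cumulative score and maps it to a tier. The criteria, point values, and thresholds are hypothetical illustrations, not drawn from any particular company’s model.

  # Minimal sketch of a points-based KOL scoring model.
  # All criteria, point values, and tier thresholds are hypothetical.
  POINT_VALUES = {
      "society_leadership_role": 3,
      "guideline_committee_member": 3,
      "peer_reviewed_publication": 2,
      "conference_presentation": 2,
      "advisory_board_membership": 1,
  }

  # Cumulative thresholds map a total score to a KOL tier.
  TIER_THRESHOLDS = [(12, "national"), (7, "regional"), (3, "local")]

  def score_hcp(criteria_counts):
      """Sum points across all categories, then assign a tier."""
      total = sum(POINT_VALUES.get(name, 0) * count
                  for name, count in criteria_counts.items())
      for threshold, tier in TIER_THRESHOLDS:
          if total >= threshold:
              return total, tier
      return total, "not a KOL"

  # A broadly active HCP with average achievement in many categories...
  print(score_hcp({"conference_presentation": 3, "advisory_board_membership": 2,
                   "peer_reviewed_publication": 2, "society_leadership_role": 1}))  # (15, 'national')
  # ...can outrank a focused expert with a strong publication record.
  print(score_hcp({"peer_reviewed_publication": 4}))  # (8, 'regional')

Note the final two calls: the broadly active HCP reaches national status on sheer accumulation, while the focused researcher does not. This is precisely the breadth-of-recognition bias discussed below.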

Breadth of Recognition Bias

The breadth of recognition bias is one of the most notable limitations of points-based models. In practice, many KOL evaluation models overinflate the expertise of some HCPs while discounting that of others. HCPs who participate in a broad range of professional activities can more easily accumulate points toward KOL status compared to HCPs who choose to be more selective in their professional focus.

While it is certainly possible for breadth of recognition to correlate with high levels of medical expertise, that is not always the case. For example, HCPs with an average level of expertise across a number of categories may still receive national KOL status simply by amassing points in many categories. A clear risk of overpayment exists in these situations.

Conversely, these models tend to undervalue the expertise of HCPs who are active in a smaller number of professional categories. In many instances, high levels of achievement in only a couple of categories can indicate high levels of overall expertise. This inherent lack of flexibility may, for instance, result in a lower score for a potential national KOL who has strong research and publication credentials but has not actively sought leadership roles within medical associations. Such KOLs end up being penalized for choosing to develop expertise in a way better suited to their personalities and professional interests.

The issue of flexibility also matters when assessing the employment record of KOLs, given that some have a more academic focus and others a more clinical one. Points-based models may not reflect such nuances, failing to incorporate enough flexibility to capture the expertise of many qualified experts.

Selection vs. Expertise Bias

In addition to the problem of recognition bias, a number of KOL evaluation models incorporate criteria that don’t directly measure medical expertise and that focus too narrowly on the relevance of the HCP to the company’s business. Including these “selection” criteria in a KOL model can seriously weaken its ability to effectively measure expertise.

Such criteria include participation in company registration trials, knowledge of the company’s offerings, or experience in a specific research area. These factors can certainly be important from a selection perspective when companies recruit HCPs for their speaker or consultant bureaus. However, while these selection criteria are often associated with high levels of expertise (companies regularly use top KOLs for significant projects), they are not a measure of expertise and should not be used to justify premium FMV rates.

One common but less obvious example of “selection” criteria is expertise in a very focused area of research. Here, the HCP is clearly an expert in a particular area that is closely aligned with a company’s product or research interests, but he or she may not be recognized as having broader expertise. Simply put, is this a big fish in a big pond or a big fish in a small pond? FMV should provide premium fees to the biggest fish in the big pond.

Ultimately, KOL status should be an objective measure of an HCP’s level of expertise, and the associated premium FMV rates should apply to any consulting engagement for which he or she is hired. However, in some cases, when working with a non-KOL who is the expert in a narrow field, companies may be forced to pay premium fees for a particular service. This is a classic application of an FMV exception process and should not be used as a basis for awarding a higher KOL status.

Subjectivity Bias

Historically, KOL evaluation methodologies have also included criteria that, while clearly relevant to evaluating an HCP’s expertise, do not easily lend themselves to objective measurement. This creates opportunities for subjective interpretation of the criteria based on the views of specific individuals or the needs of the company. The absence of objective ways to measure success can cause inconsistency in KOL tiering outcomes and lead to potential allegations of favoritism.

Often the problem is related to a lack of specificity. In some cases, companies include overly broad, catch-all criteria such as “recognized by peers as leading doctor in the field.” In practice, terms such as “extensive,” “well recognized,” “frequent,” “leading institution,” and even “professor” can have different meanings to different reviewers, leading to inconsistency in scoring and the potential for overpaying HCPs.

For example, simply stating that an HCP should have an “extensive” authorship record can be problematic. HCPs with a similar publication history may receive different scores because reviewers hold different views about the number and type of publications needed to meet the criterion. Depending on the interpretation of “extensive,” this could lead to an HCP being over-tiered and overpaid.

Approaches for Effective KOL Evaluation

Bearing in mind the potential FMV compliance pitfalls that companies face with their current KOL evaluation models, a successful KOL evaluation model should:

  • Use objective criteria grounded in the peer judgment of a physician’s medical expertise
  • Define quantifiable and well-documented criteria
  • Be flexible enough to account for the different professional paths of the HCPs

Peer-recognition based criteria ensure that the assessment of someone’s expertise is grounded in the judgment of his or her peers rather than being based on the opinions of the company or individual reviewer. This provides the objectivity and foundation for an effective KOL evaluation model.

As a result, much of the “selection bias” inherent in a number of current models is eliminated. The use of objective criteria is also valuable in demonstrating independence in the face of any regulatory review, as KOL status reflects the recognition of other physicians rather than the needs of the company.

Quantifiable and easily documented criteria based on information primarily derived from CVs should underpin all KOL evaluations. These criteria support consistency in the evaluation process while clear documentation (i.e. CVs) provides transparency to the process and a ready audit trail if necessary. Additional information such as citation records or influence mapping exercises can be incorporated, but the value of the additional information may not justify the added complexity.

To ensure that the criteria are quantifiable and objective, a supplementary guidance document can provide consistency in the interpretation of key terms such as “regular,” “frequent,” “extensive,” “leading,” “top,” “major,” and even “professor” (e.g. “professor” could include adjunct, clinical, assistant, or associate professors). In addition, the guidance document can specify relevant time periods during which individual criteria may be assessed, as well as specific numbers or ranges for those criteria that lend themselves to counts.
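
As an illustration, such a guidance document could be captured as structured data so that every reviewer applies the same definitions; the counts, look-back windows, and qualifying titles below are illustrative assumptions rather than recommended standards.

  # Illustrative excerpt of a scoring guidance document expressed as data.
  # All counts, look-back windows, and title lists are hypothetical examples.
  GUIDANCE = {
      "extensive_authorship": {
          "definition": "peer-reviewed articles as first or senior author",
          "minimum_count": 20,
          "lookback_years": 10,
      },
      "frequent_speaker": {
          "definition": "invited talks at national or international congresses",
          "minimum_count": 5,
          "lookback_years": 3,
      },
      "professor": {
          "qualifying_titles": ["professor", "clinical professor", "adjunct professor",
                                "associate professor", "assistant professor"],
      },
  }

  def meets_authorship_criterion(qualifying_papers_in_window):
      """Apply the quantified definition rather than a reviewer's reading of 'extensive'."""
      rule = GUIDANCE["extensive_authorship"]
      return qualifying_papers_in_window >= rule["minimum_count"]

With definitions pinned down in this way, two reviewers scoring the same CV should reach the same conclusion about whether a criterion is met.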

Remaining flexible constitutes the third and final hallmark of a successful evaluation model. Companies can achieve this objective by establishing high standards and rigorous criteria in distinct categories. In addition, companies should refrain from simply adding up a cumulative score, which can potentially lead to overvaluing an HCP’s expertise.

By focusing on a small handful of evaluation categories (e.g. employment, publications, etc.) and setting rigorous standards within each category, companies can award KOL status to HCPs who have not achieved recognition in every single category. The nimbleness of this approach allows companies to evaluate physicians with non-traditional backgrounds who tend to be undervalued under more traditional points-based models.
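
A minimal sketch of such a category-based model, using hypothetical categories and thresholds, might look as follows: KOL status is awarded when rigorous standards are met in a minimum number of categories, with no cumulative point total involved.

  # Sketch of a flexible, category-based model: national KOL status requires
  # meeting a rigorous standard in at least two categories, not a point total.
  # Categories, thresholds, and CV field names are hypothetical.
  CATEGORY_STANDARDS = {
      "employment": lambda cv: cv.get("years_at_academic_medical_center", 0) >= 10,
      "publications": lambda cv: cv.get("first_or_senior_author_papers", 0) >= 20,
      "research": lambda cv: cv.get("trials_as_principal_investigator", 0) >= 5,
      "society_leadership": lambda cv: cv.get("national_society_officer_roles", 0) >= 1,
  }

  CATEGORIES_REQUIRED = 2  # strong achievement in any two categories suffices

  def is_national_kol(cv):
      categories_met = sum(1 for standard in CATEGORY_STANDARDS.values() if standard(cv))
      return categories_met >= CATEGORIES_REQUIRED

  # A researcher with strong publications and trial leadership qualifies even
  # without society leadership roles, so focused experts are not penalized.
  print(is_national_kol({"first_or_senior_author_papers": 30,
                         "trials_as_principal_investigator": 8}))  # True

The design choice here is that high achievement in a subset of categories, rather than breadth across all of them, drives the outcome.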

Lastly, flexible models have an important practical dimension. They are often able to yield an accurate evaluation of an HCP’s credentials even when only a partial CV or bio is available.

Conclusions

While companies regularly identify KOLs and pay premium rates to those identified as KOLs, the underlying evaluation processes may contain a number of potential biases. Poorly designed KOL evaluation processes can lead to overpayment of HCPs.

The most effective processes will be grounded in the peer judgment of an HCP’s medical expertise – a method that provides a strong, objective, and independent foundation for KOL evaluation. In addition, the underlying criteria should be quantifiable and clearly documented to ensure consistent application across all HCPs. Finally, criteria should be set at levels strongly associated with regional- or national-level expertise. By avoiding weak criteria, companies can rely on a more limited number of criteria to justify KOL status and avoid the biases associated with cumulative point models.