Is physician-reported data better than patient-reported outcomes? (Example of prostate cancer)

Reason for inclusion: An accurate assessment of treatment-related complications is missing in follow-ups.

Practice-changing statement: Physician assessment + patient-reported outcomes, collected on a continuous basis, will do more to define the extent of problems. We also need more quantitative measures.

Page 1

  • Secondary data analysis describes the use of any cohort generated through the collection and reporting of patient-specific information before the design of any particular research study.
  • Patient-specific information may also be collected from health insurance billing claims. However, in claims data, researchers must use billing codes generated from patient encounters.
  • hematuria in NSQIP is directly captured in a predetermined data field whereas using Medicare claims data, complications are indirectly accounted for by diagnosis codes (e.g., gross hematuria) or procedure codes (e.g., cystoscopy with fulguration)

This formal review opens a Pandora’s box of problems. Whom will you trust, especially for a deeper dive into the analysis of patient outcomes: the patients themselves, or the person doing the coding?
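The direct-versus-indirect capture distinction in the bullets above can be sketched in code. This is a minimal illustration only: the code lists below are hypothetical placeholders standing in for real diagnosis/procedure code sets, which would need to be verified before any actual use.

```python
# Sketch: inferring a complication indirectly from claims codes,
# versus reading it directly from a registry field (as in NSQIP).
# The code sets below are illustrative placeholders, not vetted ICD/CPT lists.

HEMATURIA_DX_CODES = {"R31.0"}    # e.g., a gross-hematuria diagnosis code
HEMATURIA_PROC_CODES = {"52214"}  # e.g., cystoscopy-with-fulguration procedure

def hematuria_from_claims(claim_codes):
    """Indirect capture: flag hematuria if any qualifying
    diagnosis or procedure code appears in the patient's claims."""
    codes = set(claim_codes)
    return bool(codes & (HEMATURIA_DX_CODES | HEMATURIA_PROC_CODES))

def hematuria_from_registry(record):
    """Direct capture: the registry stores the outcome as its own field."""
    return record["hematuria"]

# A patient whose hematuria was managed without any billable encounter
# generates no qualifying claim, so the claims-based flag misses it:
print(hematuria_from_claims(["N40.0"]))              # False (missed)
print(hematuria_from_registry({"hematuria": True}))  # True
```

The point of the sketch: the claims-based flag is only as good as the code list and the billing behaviour behind it, whereas the registry field is captured by design.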

Page 2

  • selecting the correct codes is challenging.
  • These data sets with large numbers of patients also facilitate examination of toxicity outcomes over time, accounting for the learning curve often associated with new technologies.
  • Claims data are more likely to capture severe conditions which require surgical intervention and/or those in which the patient persistently pursues the health problem with the provider(s) who ultimately code the condition. In cases of low sensitivity, misses can also be due to unbilled services (bedside bladder irrigation).
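The low-sensitivity point above can be made concrete with a toy calculation. All numbers here are invented purely for illustration: claims are assumed to capture only the severe cases that generated a billed intervention.

```python
def sensitivity(claims_flags, true_flags):
    """Fraction of true complications that the claims data detect."""
    true_positives = sum(c and t for c, t in zip(claims_flags, true_flags))
    actual_positives = sum(true_flags)
    return true_positives / actual_positives

# Ten patients with a true complication; claims capture only the three
# severe cases that led to billed surgical intervention (hypothetical data).
truth  = [True] * 10
claims = [True, True, True, False, False, False, False, False, False, False]
print(sensitivity(claims, truth))  # 0.3
```

Unbilled services (such as bedside bladder irrigation) fall into the `False` group here even though the complication genuinely occurred.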

This also has a major limitation: it views treatment-related side effects through the prism of a SINGLE country! Multiply these issues across various heterogeneous populations and the ramifications are huge.

  • The primary genitourinary complication reported was anastomotic stricture, which decreased by almost 40% over the study period.

These are “false pretences”. It could well be that the incidence of side effects is being underreported, or that patients are non-compliant, especially if there is no mechanism to cross-check what the surgeon has reported against what the patients are actually suffering from. (Hint: radiation always looks better where the patients are ideal candidates for surgery!)

 

  • There is also less patient attrition with claims-data analysis, especially Medicare data, for which changes in insurance coverage are uncommon. In contrast, for studies which collect patient-reported outcomes, missing data can be a common issue that threatens study validity.
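The missing-data concern in the bullet above compounds quickly across repeated surveys. A toy calculation, assuming (purely for illustration) an independent 80% response rate at each survey wave:

```python
# Complete-case fraction shrinks multiplicatively with each survey wave.
response_rate = 0.80
for waves in (1, 2, 3, 4):
    complete = response_rate ** waves
    print(f"{waves} wave(s): {complete:.0%} complete cases")
# 1 wave(s): 80% ... 3 wave(s): 51% ... 4 wave(s): 41%
```

Even a respectable per-wave response rate leaves barely half the cohort with complete longitudinal PRO data by the third follow-up, which is exactly the validity threat the authors describe.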

These are relative issues. I am not sure how the claims data are processed, but I can imagine that the process is very amorphous and extremely prone to individual judgement, especially if the coders are not specialists.

  • Patient-reported outcomes (PRO) data sources are created through the distribution of survey materials to patients. Multiple disease-specific instruments exist to quantify symptom severity and disease-specific health-related quality of life in prostate cancer
  • Coding practices are subjective, lack guidelines, can be dependent on the specific electronic health record system used at each facility, and may be delegated to billing managers − all of these factors contribute to inconsistent classification.


Page 3

 

  • First, pretreatment expectations have been shown to affect post-treatment assessment of quality of life, with those experiencing worse complications than expected reporting significantly worse quality of life.16 Studies that require patients to recall their functional status over a prolonged period of time are subject to an additional error: in recall, patients can overestimate or underestimate their pretreatment function as compared to their actual self-reported baseline.
  • A major advantage is that PRO data often reveal a higher prevalence (higher sensitivity) of complications than administrative claims data or physician-reported toxicity.
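The recall error in the first bullet can be quantified by comparing each patient's recalled baseline against the baseline they actually reported before treatment. A minimal sketch with invented scores on an arbitrary 0-100 scale:

```python
def recall_bias(recalled_baseline, actual_baseline):
    """Per-patient difference between recalled and recorded baseline.
    Positive = patient overestimated pretreatment function in recall;
    negative = underestimated."""
    return [r - a for r, a in zip(recalled_baseline, actual_baseline)]

actual   = [70, 85, 60, 90]   # recorded before treatment (hypothetical)
recalled = [80, 85, 50, 95]   # recalled after treatment (hypothetical)
biases = recall_bias(recalled, actual)
print(biases)                     # [10, 0, -10, 5]
print(sum(biases) / len(biases))  # 1.25 (mean bias)
```

A study that only collects the recalled baseline has no way to compute this difference, which is precisely why prospective baseline collection matters.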

This is the most important factor showing why you shouldn’t rely completely on the “surveys” alone.

  • PRO data are often analysed in scales − and the granular scores facilitate statistical analysis to compare outcomes across groups and examine changes over time. These scales allow for the characterisation of both functional symptoms as well as assessments of bother.
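Instruments of this kind typically rescale item responses linearly to 0-100 and average them into a domain score. The sketch below is a simplified illustration of that idea only; the actual scoring rules of any specific instrument (EPIC, IPSS, etc.) differ and must be taken from its scoring manual.

```python
def item_to_0_100(response, min_resp, max_resp):
    """Linearly rescale a raw item response to the 0-100 range."""
    return 100 * (response - min_resp) / (max_resp - min_resp)

def domain_score(responses, min_resp=1, max_resp=5):
    """Mean of rescaled items = domain score (here, higher = better)."""
    scaled = [item_to_0_100(r, min_resp, max_resp) for r in responses]
    return sum(scaled) / len(scaled)

# Two hypothetical patients answering four 1-5 items in one domain:
print(domain_score([5, 4, 5, 4]))  # 87.5
print(domain_score([2, 1, 2, 3]))  # 25.0
```

Because the result is a continuous score rather than a yes/no flag, group comparisons and change-over-time analyses become straightforward, which is the granularity advantage the bullet describes.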

They are totally missing the concept of recall bias! Accounting for it remains the cornerstone of accurate assessment. Therefore, a regular mechanism for symptom diagnosis/management is required.

  • Limitations of PRO data include missing data (e.g., patient non- response or drop-out), patient accuracy, and reproducibility.

Page 4

  • Physician-graded toxicity is often considered an “objective” measure of treatment-related toxicity, in contrast to patient-reported outcomes which are sometimes viewed as being “subjective.” Physicians are also able to infer causation in a patient’s symptoms, screening out issues unrelated to treatment (such as diarrhoea due to infection rather than radiotherapy).
  • Multiple studies have consistently shown that clinicians typically report fewer symptoms overall,27,28 of a lower severity29,30 than patients. There are several potential reasons for this, including inaccurate patient reporting of complications (e.g., patients being hesitant to tell their physicians about all of their complications), failure by the physician to attribute symptoms to the disease or treatment, as well as general limitations due to limited clinical encounter time and communication barriers.
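The clinician-patient discordance described above can be summarised by pairing the two reports for each patient. A sketch with invented grades on a 0-4 severity scale (in the style of CTCAE grading, but the data are entirely hypothetical):

```python
def underreport_rate(clinician_grades, patient_grades):
    """Fraction of patients for whom the clinician's toxicity grade
    is lower than the patient's own report of the same symptom."""
    pairs = list(zip(clinician_grades, patient_grades))
    lower = sum(c < p for c, p in pairs)
    return lower / len(pairs)

# Hypothetical paired toxicity grades for six patients (0 = none, 4 = severe):
clinician = [0, 1, 0, 2, 1, 0]
patient   = [1, 1, 2, 2, 3, 1]
print(underreport_rate(clinician, patient))  # 0.666... (4 of 6 patients)
```

Routinely computing this kind of paired comparison is one way to implement the cross-check between what the physician records and what the patient experiences.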

I personally feel that there should be a mix of the patient survey and physician assessment.