National Cancer Database: The Past, Present, and Future of the Cancer Registry and Its Efforts to Improve the Quality of Cancer Care

(The highlights appear in standard text as bulleted lists while my comments appear as block quotes).

Page 1

  • In those early days, hospital registrars would comb through paper charts to abstract tumor cases into hospital registries. During the annual Call for Data, years of completed cases were submitted to the NCDB by mailing floppy disks.
  • This collection covers 75 different diseases and over 72% of newly diagnosed patients in the United States.

Page 2

  • All patients in the database are considered for eligibility for each quality measure, and compliance is computed by a rule-based algorithm using dozens of variables from a given patient record. Over the last several years, the NCDB has grown to provide 23 quality measures in our technology platform, covering 10 disease sites.
  • Registrars and clinicians will be able to monitor patient alerts, quality measure performance, and national and regional quality measure comparisons more easily and effectively on an integrated platform.
  • The Rapid Quality Reporting System (RQRS) is linked to real-time patient alerts.
  • This will become even more relevant as national efforts toward interoperability combine with artificial intelligence techniques to make hospital registrars more efficient and productive than they already are today. The NCDB will be able to quickly utilize these advances as they become feasible.
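The rule-based compliance computation described in the first highlight can be sketched in a few lines. This is a hypothetical illustration only: the measure (adjuvant therapy within 90 days of surgery), the field names, and the records are my invented assumptions, not actual NCDB measure definitions.

```python
from datetime import date

def eligible(record):
    """A patient is considered for this (hypothetical) measure only if basic criteria hold."""
    return (
        record.get("site") == "breast"
        and record.get("stage") in {"I", "II", "III"}
        and record.get("surgery_date") is not None
    )

def compliant(record, max_days=90):
    """Compliance rule: adjuvant therapy started within `max_days` of surgery."""
    start = record.get("therapy_start_date")
    if start is None:
        return False
    return (start - record["surgery_date"]).days <= max_days

# Invented patient records for illustration
records = [
    {"site": "breast", "stage": "II",
     "surgery_date": date(2023, 1, 10), "therapy_start_date": date(2023, 3, 1)},
    {"site": "breast", "stage": "III",
     "surgery_date": date(2023, 2, 1), "therapy_start_date": None},
    {"site": "lung", "stage": "I",
     "surgery_date": date(2023, 4, 5), "therapy_start_date": date(2023, 5, 1)},
]

cohort = [r for r in records if eligible(r)]
rate = sum(compliant(r) for r in cohort) / len(cohort)
print(f"eligible: {len(cohort)}, compliance rate: {rate:.0%}")  # eligible: 2, compliance rate: 50%
```

The real algorithm reportedly uses dozens of variables per record; the point of the sketch is only that eligibility and compliance are separate rule sets applied to every patient.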

I still hold that these are “moon-shots”. Unless the system is properly integrated into electronic medical records and built on agreed “open standards”, it will be impossible to enforce standards at all. Furthermore, the technology will be limited by the apprehensions of end users: clinicians wary of being “watched” may even gain a perverse incentive to underrate or overrate a patient’s symptoms, for example.

One way out is to apply AI specifically to look for patterns of symptoms during the treatment course and identify outliers in disease presentation. Sadly, it is often all too easy to reconstruct a patient’s identity from “anonymous” data.
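The pattern-of-symptoms idea can be illustrated with a minimal outlier check on per-patient symptom severity. The scores and the one-standard-deviation threshold (loose on purpose for this tiny toy dataset) are invented assumptions, not a clinical method.

```python
import statistics

# Invented symptom-severity scores recorded over a treatment course
severity_by_patient = {
    "A": [2, 3, 2, 3, 2],
    "B": [2, 2, 3, 2, 3],
    "C": [8, 9, 8, 9, 8],   # unusually severe presentation
}

# Mean severity per patient, then flag patients far from the cohort mean
means = {p: statistics.mean(s) for p, s in severity_by_patient.items()}
mu = statistics.mean(means.values())
sigma = statistics.pstdev(means.values())

outliers = [p for p, m in means.items() if abs(m - mu) > sigma]
print("flagged:", outliers)  # flagged: ['C']
```

Real outlier detection over registry data would need far richer features and robust statistics, but the shape of the task is the same: summarize each patient's course, then flag those far from the cohort.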

  • In 2010 the Participant User File (PUF) program began, creating de-identified, patient-level data files for each disease site, which are made available to investigators affiliated with hospitals participating in the NCDB.
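De-identifying a patient-level file is harder than removing names, which is why re-identification from “anonymous” data is a real worry. A minimal k-anonymity check over quasi-identifiers makes the point; the fields (`age_group`, `zip3`) and rows here are invented and are not actual PUF variables.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size when records are grouped by their quasi-identifier values."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Invented de-identified rows for illustration
puf_rows = [
    {"age_group": "60-69", "zip3": "606", "stage": "II"},
    {"age_group": "60-69", "zip3": "606", "stage": "III"},
    {"age_group": "70-79", "zip3": "606", "stage": "I"},  # unique combination
]

k = k_anonymity(puf_rows, ["age_group", "zip3"])
print(f"k = {k}")  # k = 1: at least one patient is uniquely identifiable by these fields
```

When k is 1, a single combination of seemingly harmless attributes picks out one patient, and linkage against any outside dataset carrying the same attributes can re-identify them.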

This is a great achievement. Comparing the burden of the disease against disease management protocols ensures greater compliance with “standardisation”. I may feel awkward about the notion of “standard” patients, as no two patients respond to medicines or radiation in equal measure; nevertheless, it is important to push the idea, because only then will we have better “personalised care”. Currently, we have only institution-based claims about “outcomes”.

  • As the American Joint Committee on Cancer evolves how cancer stage is defined and how often staging definitions are updated and made available, we will be able to update our data collection practices, including prognostic factors, in real time to support clinical advancement.

This is an important issue. We need a mechanism to identify the prognostic factors (in addition to other factors, for example those related to smoking cessation).

 

Page 3

  • The NCDB will also be able to utilize advances in technologies powering the platform (from artificial intelligence approaches like natural language processing and machine learning, to interoperability and a growing “internet of things” and personal devices) and continue to evolve as a cancer registry and platform for improving the quality of cancer care across the United States.
  • Additionally, a Comparative Effectiveness Research platform could more efficiently be used to extend randomized clinical trial research into communities across the country in real time by tracking relative risks across diverse patient demographics and communities as new science disseminates into clinical best practices.
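Tracking relative risks across cohorts, as the second highlight proposes, reduces at its core to a ratio of incidence proportions. A sketch, with counts invented purely for illustration:

```python
def relative_risk(events_a, total_a, events_b, total_b):
    """RR = (events_a / total_a) / (events_b / total_b)."""
    return (events_a / total_a) / (events_b / total_b)

# e.g. recurrence under a new protocol vs. the established one (invented counts)
rr = relative_risk(events_a=12, total_a=200, events_b=20, total_b=200)
print(f"relative risk = {rr:.2f}")  # 0.06 / 0.10 = 0.60
```

The hard part of the platform is not this arithmetic but assembling comparable cohorts across diverse demographics and communities in real time, which is where the data-volume and interoperability concerns below bite.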

This is an overoptimistic projection. The volume of data required will be huge, bringing the additional headache of managing (and securing) it. Interoperability will again pose significant issues, because the corporations involved will want to extract their “toll-tax”. I’d keep this, in layman’s terms, as a “thing to do”: a desirable projection of events. Whether it can be done is an entirely different issue altogether.