Physicians always believe they do a good job with Quality. But then year-end rolls along, your Registry reports MACRA quality scores that just don't make sense ... and no one can explain what happened or find anything to do about it.
It should be clear at that point that something needs to happen "next time around". The questions will often be "who is in charge?" and "what actions should we take?".
Our RCM Collaboration creates a team with a well-organized program, and regular, timely data.
We see it all the time. Most Quality Shortfalls don't come from lack of compliance.
In our Monthly MACRA reviews with physicians, we routinely find someone using a phrase that is slightly different from what a Quality Measure expects. Or notes documented in an area of the chart where systems and coders are not looking.
Some of these shortfalls can be corrected after-the-fact. But anything that is a part of the permanent patient record needs to be kept as originally recorded. And when these inconsistencies impact a "met / not met" quality criterion, scores drop.
So the earlier you can start analyzing shortfalls, the more you catch. Halfway through the year is better than year-end ... and even earlier makes a big difference.
Would you make any other investment (time or money), without knowing ROI? Annual changes to the MACRA financial model make projections tough ... but MACRA is that rare CMS program that can add reimbursement.
Our sophisticated MIPS P&L ties MACRA scores into budgeted financial targets and variances.
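To make the idea of tying scores to budgeted targets concrete, here is a toy sketch of a single MIPS P&L line. The linear score-to-adjustment mapping, the performance threshold, the ±9% cap and the revenue figures are all illustrative assumptions; actual CMS adjustment factors are set annually and are not linear.

```python
# Toy MIPS P&L sketch: maps a MIPS final score to a hypothetical Part B
# payment adjustment, then compares against a budgeted target.
# All parameters below are illustrative placeholders, not CMS figures.

def projected_adjustment(mips_score: float,
                         performance_threshold: float = 75.0,
                         max_adjustment_pct: float = 9.0) -> float:
    """Toy linear interpolation: scores at the threshold earn 0%;
    scores at 0 or 100 earn the full negative or positive cap."""
    if mips_score >= performance_threshold:
        span = 100.0 - performance_threshold
        return max_adjustment_pct * (mips_score - performance_threshold) / span
    return -max_adjustment_pct * (performance_threshold - mips_score) / performance_threshold

def pnl_variance(mips_score: float, part_b_revenue: float,
                 budgeted_adjustment_pct: float) -> dict:
    """One P&L line: projected adjustment vs. the budgeted target."""
    actual_pct = projected_adjustment(mips_score)
    return {
        "projected_pct": round(actual_pct, 2),
        "projected_dollars": round(part_b_revenue * actual_pct / 100, 2),
        "variance_dollars": round(
            part_b_revenue * (actual_pct - budgeted_adjustment_pct) / 100, 2),
    }

print(pnl_variance(mips_score=85, part_b_revenue=1_000_000,
                   budgeted_adjustment_pct=2.0))
```

The point of the variance column is timing: a mid-year score projection that runs below the budgeted adjustment is actionable; the same number at year-end is not.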
Not all quality measures contribute ROI, even with a high MIPS score. Our concierge team isolates the measures with the best chance of improving MACRA ROI, based on current achievement, attractiveness of benchmarks, practice relevance and probability of improvement.
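A ranking along those four criteria can be sketched as a simple weighted score. The field names, weights and sample measures below are illustrative assumptions, not a published methodology.

```python
# Hypothetical measure-prioritization sketch: score each candidate CQM on
# the four criteria named above, with made-up weights, and rank them.
from dataclasses import dataclass

@dataclass
class MeasureCandidate:
    measure_id: str
    current_achievement: float   # 0-1: how close to "met" today
    benchmark_headroom: float    # 0-1: room to earn decile points
    practice_relevance: float    # 0-1: share of physicians it touches
    improvement_odds: float      # 0-1: judged probability of improvement

# Illustrative weights; a real concierge review would tune these.
WEIGHTS = {"current_achievement": 0.20, "benchmark_headroom": 0.30,
           "practice_relevance": 0.25, "improvement_odds": 0.25}

def roi_priority(m: MeasureCandidate) -> float:
    return sum(getattr(m, field) * w for field, w in WEIGHTS.items())

candidates = [
    MeasureCandidate("CMS122", 0.6, 0.8, 0.9, 0.7),  # sample values only
    MeasureCandidate("CMS165", 0.9, 0.2, 0.5, 0.3),
]
best_first = sorted(candidates, key=roi_priority, reverse=True)
print([m.measure_id for m in best_first])
```

Note the design choice: a measure the practice already aces (high achievement, little headroom) ranks below one with realistic room to climb benchmark deciles.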
Coders are fluent in clinical language and CQM nuances. They beat technology for completeness and accuracy.
And they identify the errors and trends that cause artificially low scores.
We give your coders the right focus for the right improvements.
Clinical language is wonderfully precise, but documentation phrases can fall out of sync with all the conditions required for each quality measure. Smart coders can find nomenclature discrepancies and help our consulting team share "the best language" with clinical staff.
Another place we often find these nomenclature discrepancies is in comparing the details between high-scoring and low-scoring physicians. The problems are far more likely to be ones of language than of compliance.
EHR technology can be pretty darn flexible and complex. Small discrepancies in where in the chart a particular provider documents a particular clinical circumstance can put good information out of the reach of day-to-day coding. A great coding analyst, collaborating with a great MIPS consultant, can identify and standardize these variables. The result is greater accuracy in measuring CQM "criterion met".
For some quality measures, simply documenting why an action was taken (or not taken) can make a coder more accurate. Periodic review of CQM specifications, involving Coders, Physicians and MIPS Concierge can help assure that the best measures are being evaluated, while keeping clinical staff efficiently focused on appropriate patient care.
At the beginning of each MIPS reporting year, we support a formal MIPS CQM Kickoff.
First our MIPS Consultant will identify potentially relevant measures, from all available sources (EHR, Registry and even QCDR). Next, the MIPS Consultant, Coder and Clinical Leadership will winnow the list down to those that have good scoring benchmarks, and reflect active best practices across most of the physician population.
For the final measure list, the Coding team prepares education and support materials for the clinical staff. Where appropriate, Coders should prepare macros or dot phrases that make documentation efficient and consistent.
The goal, and the net result, should be a set of CQMs for which clinical staff do little or nothing outside their normal practices ... and good CQM scores fall out naturally.
Coding always starts with a selection of Encounters, matched against each Quality Measure's Initial Patient Population requirements (the Denominator). Each Encounter needs to be tagged to all relevant CQMs for the current year, with a first pass based on CPT, ICD-10 and Demographics. A simple error in Denominator selection can have a dramatic impact on the quality score for any measure.
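That first-pass tagging step can be sketched as a simple filter. The measure specifications below are toy placeholders, not real CQM value sets; real Initial Patient Population logic also involves published value sets, measurement-period timing and exclusions.

```python
# Minimal sketch of first-pass Denominator tagging: match each Encounter
# to candidate CQMs on CPT, ICD-10 and demographics only.
# Codes and age ranges below are illustrative, not official specifications.
from dataclasses import dataclass

@dataclass
class Encounter:
    encounter_id: str
    cpt_codes: set
    icd10_codes: set
    patient_age: int

@dataclass
class MeasureSpec:
    cqm_id: str
    denominator_cpt: set      # qualifying encounter CPT codes
    denominator_icd10: set    # qualifying diagnosis codes
    min_age: int = 0
    max_age: int = 200

def tag_encounter(enc: Encounter, specs: list) -> list:
    """Return the CQM ids whose first-pass Denominator criteria match."""
    return [s.cqm_id for s in specs
            if (enc.cpt_codes & s.denominator_cpt)
            and (enc.icd10_codes & s.denominator_icd10)
            and s.min_age <= enc.patient_age <= s.max_age]

specs = [
    MeasureSpec("CMS122", {"99213", "99214"}, {"E11.9"}, min_age=18, max_age=75),
    MeasureSpec("CMS165", {"99213", "99214"}, {"I10"}, min_age=18, max_age=85),
]
enc = Encounter("E-001", {"99214"}, {"E11.9", "I10"}, patient_age=60)
print(tag_encounter(enc, specs))
```

An Encounter missed here never reaches "met / not met" evaluation at all, which is why a Denominator error distorts the score more than any single documentation miss.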
Only after we have been through the Data Quality steps of standardization, nomenclature, IT and Measure Selection will we know the data are complete, clean and accurate. And if scores fall in line, the job is done.
Once reliable reporting quality has been achieved, Medical Management can step up the process of physician training, protocol implementation, and peer analysis.