(and neither is “kind of” following it)

Healthcare data that is not completely and accurately reported, but is attested to as being so, always raises the question: should plan leadership have known this data was not reported appropriately? Exactly what do you say to a regulator if you were the certifier, or if you supplied the data for certification and it was not thoroughly vetted and was therefore inaccurate: “Oops”?

Let’s be clear: self-assessment by health plans of data completeness and accuracy is explicit in 42 C.F.R. §422.504(l), Certification of Data that Determine Payment, and in 64 F.R. 61893-61900, OIG’s Compliance Program Guidance for Medicare+Choice Organizations Offering Coordinated Care Plans. Failure to validate the completeness and accuracy of reported data is a significant compliance concern and can develop into a legal issue under the civil False Claims Act (“FCA”), which prohibits knowingly presenting a false claim or knowingly making a false record or statement material to a false claim. The 2009 Fraud Enforcement and Recovery Act (“FERA”) expanded FCA liability to include the “knowing” retention of overpayments of government funds, and the Affordable Care Act requires that overpayments be reported and repaid within 60 days of identification. Let’s face it: if you cannot demonstrate reasonable care, meaning that you validated reported data before certifying it and acted when mistakes were discovered, “they’ve got you”.

So, how does plan leadership ensure reporting completeness and accuracy compliance? Babel Health suggests an informed, holistic approach that addresses the following questions:

  • How do you determine whether all providers are submitting claims data, and whether that data is complete?
    – Are capitated provider submissions evaluated for reasonable volume?
    – Are you studying your denied claim activity for things like “righteous” denials and overturned denials? Do you know the financial impact of an administratively denied claim versus unreported, high-risk diagnoses used for risk scoring?
    – Do you have a way to discover gaps in diagnoses that were previously, but are not currently, reported, and a standard method to communicate that information to providers?
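The diagnosis-gap check above can be sketched in a few lines. This is an illustrative example only; the data shapes and field names (member IDs mapped to sets of diagnosis codes) are assumptions, not the layout of any particular claims system.

```python
# Hypothetical sketch: flag diagnoses reported in a prior period but absent in
# the current period, so they can be queued for provider outreach and review.

def find_diagnosis_gaps(prior, current):
    """prior/current: dicts mapping member_id -> set of diagnosis codes."""
    gaps = {}
    for member_id, prior_codes in prior.items():
        missing = prior_codes - current.get(member_id, set())
        if missing:
            gaps[member_id] = sorted(missing)
    return gaps

prior_year = {"M001": {"E11.9", "I10"}, "M002": {"J44.9"}}
current_year = {"M001": {"I10"}, "M002": {"J44.9"}}
print(find_diagnosis_gaps(prior_year, current_year))  # {'M001': ['E11.9']}
```

A real implementation would also track the reporting provider for each missing code, so the outreach lands on the right desk.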

  • How do you know if all reportable encounter data is being extracted from your source system(s)?
    – Do you perform a comparison between reported expenses and encounter submissions? Can you account for all expenses (reportable and non-reportable)?
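The expense-to-encounter comparison can be sketched as a simple variance report. The category names, tolerance threshold, and dollar figures below are hypothetical, chosen only to show the shape of the reconciliation.

```python
# Illustrative sketch: compare ledger expense totals with the expense
# represented by submitted encounters, and surface material variances.

def reconcile_expenses(ledger_totals, encounter_totals, tolerance=0.01):
    """Both args: dicts mapping expense category -> dollar amount.
    Returns categories whose gap exceeds the tolerance ratio."""
    variances = {}
    for category, ledger_amt in ledger_totals.items():
        encounter_amt = encounter_totals.get(category, 0.0)
        diff = ledger_amt - encounter_amt
        if ledger_amt and abs(diff) / ledger_amt > tolerance:
            variances[category] = diff
    return variances

ledger = {"inpatient": 125_000.00, "professional": 80_000.00}
encounters = {"inpatient": 125_000.00, "professional": 61_500.00}
print(reconcile_expenses(ledger, encounters))  # {'professional': 18500.0}
```

Any category showing a variance is a starting point for the “can you account for all expenses” question: the gap should be explainable as non-reportable expense, or it represents unsubmitted encounters.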

  • Are all reportable encounter records being submitted?
    – Do you reconcile the records “input to” with the “output from” your encounter system? How much is left on the table, and why?
    – Were you able to get the record accepted for one program (RAPS), but not another (EDPS)?
    – Are you able to see the status (accepted, rejected, pre-submission errors, awaiting submission, awaiting response, etc.) of all records at a glance in a dashboard?
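The input-to-output reconciliation and the at-a-glance status view above reduce to a tally over record statuses. This sketch assumes a flat list of records with a single status field; the status labels follow the list in the bullet and are illustrative, not a standard vocabulary.

```python
# Hypothetical sketch: count encounter records by lifecycle status so that
# "input to" vs "output from" counts can be reconciled at a glance.
from collections import Counter

def status_summary(records):
    """records: iterable of dicts with a 'status' field."""
    counts = Counter(r["status"] for r in records)
    submitted = counts.get("accepted", 0) + counts.get("rejected", 0)
    left_on_table = sum(counts.values()) - submitted
    return counts, left_on_table

records = [
    {"id": 1, "status": "accepted"},
    {"id": 2, "status": "rejected"},
    {"id": 3, "status": "pre-submission error"},
    {"id": 4, "status": "awaiting submission"},
]
counts, pending = status_summary(records)
print(dict(counts), "not yet submitted:", pending)  # not yet submitted: 2
```

The same tally run per program (RAPS vs. EDPS) answers the “accepted for one program but not another” question.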

  • Are all pre-submission errors being resolved? Who is working what? What is the turnaround time and success rate? Are you staffed properly?
    – Are unresolvable errors being identified and researched for root cause? Are the root causes being addressed?

  • Are duplicate records being identified before submission? Do you know whether duplicate submissions have been accepted, overstating your expenses and risk, that need to be deleted?
  • Are all reported submissions being accepted? Are all rejections being resolved?
    – Is rejection repair and re-submission part of routine processes, or crisis-managed when deadlines approach? What is in your backlog awaiting review and repair at this moment?
    – Are the rejections that matter most being resolved first? How are those being discovered and organized for priority attention?
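Pre-submission duplicate screening, raised a few bullets above, can be sketched as keying each record on the fields that define uniqueness. The key fields chosen here are assumptions for illustration, not a statement of CMS duplicate-check logic.

```python
# Illustrative sketch: detect duplicate encounter records before submission by
# building a tuple key from the fields assumed to define uniqueness.

def find_duplicates(records, key_fields=("member_id", "provider_id",
                                         "date_of_service", "procedure")):
    seen = set()
    duplicates = []
    for rec in records:
        key = tuple(rec.get(f) for f in key_fields)
        if key in seen:
            duplicates.append(rec)  # second and later occurrences only
        else:
            seen.add(key)
    return duplicates

batch = [
    {"member_id": "M001", "provider_id": "P9",
     "date_of_service": "2023-04-01", "procedure": "99213"},
    {"member_id": "M001", "provider_id": "P9",
     "date_of_service": "2023-04-01", "procedure": "99213"},
]
print(len(find_duplicates(batch)))  # 1
```

Catching the duplicate here, before submission, avoids the later cleanup of deleting accepted duplicates that overstate expenses and risk.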

  • Are claim adjustments well managed? What is your overturned and correction rate?
    – Are encounter submissions voided and replaced in a timely and accurate manner?

  • Are your risk adjustment results what you expected? Did you set expectations to begin with?
    – How close are you to your predicted financial scores?

  • Do you have a mock Risk Adjustment Data Validation (RADV) audit procedure to ensure you can validate the data that you are being paid on?
  • What activities based on these questions will make the biggest impact on the financial and compliance bottom line?

“Risk” will take on a whole new meaning if plan leadership does not ask these questions and ensure there are responsible team members equipped to perform and measure these activities. How do you stack up? Need any help? If so, give us a call and one of our subject matter experts will give you a hand.

    Deb Kircher, Vice President, Operations