The Missing/Suspect Charges report, part 1

Management asked for a report, to be emailed to the office coordinators twice daily, showing any qualifying visits in the past 24 hours that were missing a service charge. They also wanted it to show any “new patient” charges that should have been “established patient” and vice-versa. Finally, they wanted a copy sent to the practice managers on a weekly basis, showing any outstanding items from the past 90 days. This would allow the practices to stay on top of their charges and hopefully recoup some of the money they were hemorrhaging due to inattentiveness and human error.

Based on what shows up on this report every day, I wish I was being paid on commission.

I’m going to cover this report in three parts. In this post I’ll walk you through the Missing Charges stored procedure. Part two will be the Suspect Charges procedure, and then finally we’ll look at bringing it all together in Crystal Reports and Logicity.

cus_RPT_MissingCharges is a long script, too long to post here in its entirety. So download a copy right here and follow along. I’ll post snippets where necessary.

This procedure takes four input parameters. @StartDate and @EndDate are usually passed the current date, but I made them input parameters for flexibility. @LocationString is a pipe-separated list of facility abbreviations, corresponding to LOCREG.ABBREVNAME. (I’ll explain why I went pipe-delimited when we get to part three.) This parameter allows us to report on a single practice or any collection of them. Finally, @Weekly tells us whether this is the daily version of the report, or the one that gets sent to the practice managers every week. We’ll see the differences as we get into the script.
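In outline, the procedure header looks something like this (a sketch; the datatypes shown are representative):

```sql
-- Sketch of the procedure header; datatypes are representative.
CREATE PROCEDURE dbo.cus_RPT_MissingCharges
    @StartDate      DATETIME,        -- usually passed the current date
    @EndDate        DATETIME,        -- usually passed the current date
    @LocationString VARCHAR(MAX),    -- pipe-separated LOCREG.ABBREVNAME values, e.g. 'ABC|DEF'
    @Weekly         BIT              -- 0 = daily report, 1 = weekly practice-manager version
AS
```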

The first thing I do is roll @StartDate back to 4:00pm the previous day, and set @EndDate forward to the end of its day.
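In T-SQL terms, that adjustment amounts to something like this (a sketch; the exact expressions in the script may differ):

```sql
-- Roll @StartDate back to 4:00pm on the previous day.
SET @StartDate = DATEADD(HOUR, 16, CAST(CAST(DATEADD(DAY, -1, @StartDate) AS DATE) AS DATETIME));

-- Push @EndDate out to the last DATETIME tick of its day (23:59:59.997).
SET @EndDate = DATEADD(MILLISECOND, -3, CAST(CAST(DATEADD(DAY, 1, @EndDate) AS DATE) AS DATETIME));
```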

Now, I know there’s going to be a SQL Server guru reading this who throws his hands up and yells, “You’re manipulating input parameters? But what about parameter sniffing?!?” Yes, I know, but I tested the script with separate variables and it didn’t improve performance to any degree I could notice. As for the rest of you, who don’t know what that means: either Google it or don’t worry about it. Or both, which is what I did.

Next we need three table variables. @Locations is going to contain all the facility abbreviations passed into @LocationString after we parse them out. @OrderCodes will contain a list of CPT Codes we need to look for when identifying charges. And @DocTypes will contain the document types we’re interested in; namely, office visits, nurse visits, Coumadin visits, and physicals.

You should be able to see from the code how @OrderCodes and @DocTypes are populated. For @Locations, I have a table-valued function called cus_fnParseDelimitedList() that converts a character-separated string to a table of values.
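For reference, the declarations and the @Locations load look roughly like this (a sketch; column names and widths are illustrative, and the actual CPT code and document type inserts are longer):

```sql
DECLARE @Locations  TABLE (Abbrev  VARCHAR(15));   -- LOCREG.ABBREVNAME values
DECLARE @OrderCodes TABLE (Code    VARCHAR(10));   -- CPT codes to look for
DECLARE @DocTypes   TABLE (DocType VARCHAR(50));   -- qualifying document types

-- Parse the pipe-delimited facility list into rows.
INSERT INTO @Locations (Abbrev)
SELECT Element
FROM dbo.cus_fnParseDelimitedList(@LocationString, '|');
```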

There are any number of string-parser functions out there on the Web, so I haven’t bothered to post a copy of mine. (Correction, see the comments section.) Also, I’m embarrassed by the fact that I forgot to credit the original author in my code, and I don’t fully understand how it works. So there’s that. If you really want to see it, leave a comment. Otherwise, let’s just take it as read that those LOCREG.ABBREVNAME values are now in the @Locations table variable.

Next we have a block of code that makes sure the report doesn’t choke in the event @LocationString is no good. Basically we just want to send one dummy record back to the report so everyone’s happy.
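The guard amounts to something like this (a sketch; the dummy column list is illustrative):

```sql
-- If the location string didn't parse into anything usable, return a
-- single dummy row so the report renders instead of erroring out.
IF NOT EXISTS (SELECT 1 FROM @Locations)
BEGIN
    SELECT 'No matching facilities found' AS PatientName;
    RETURN;
END
```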

Now we have two temporary tables, #Docs and #Appends, to hold all relevant documents and appends. Why did I use temporary tables here and table variables earlier? I’m afraid I don’t have a good answer to that. There is absolutely nothing in this script that makes one a better choice over the other. I generally try to be consistent, and I tend to prefer table variables because you don’t have to worry about dropping them. What can I say, I must have had a heavy lunch that day. You do it however you like.

The INSERT into #Docs is admittedly a bit ugly. As we tested and fine-tuned this report, there was a lot of “oh no, don’t count those” and “only include these here during a full moon” and so on. As a result, it’s a bit of a Frankenstein. But we can make sense of it if we just break it down.

Zoinks! Okay, so we’re collecting SDIDs, which as you know are document identifiers. We INNER JOIN to @DocTypes to ensure we only get those four types of documents. Via LOCREG, we INNER JOIN to @Locations so we only get documents from the facilities passed in via the @LocationString parameter. We LEFT JOIN to cus_vExcludePatients in order to filter out test patients.

On to the WHERE clause. We constrain our data to the given date range, use XID to make sure we’re only pulling top-level documents, and exclude anything that’s unsigned. We also want to exclude anything that came in via LinkLogic, questionnaires, worker’s comp forms, and new resident assessments. (These were all “oh, exclude this too” requests that came in during the testing process.)

We want to confirm that there’s actually an appointment or visit we can link this document to. If it’s the weekly version of the report, they wanted INRs excluded (I have no idea why, I know better than to ask sometimes). AND ep.PId IS NULL ensures that no test patients sneak in.

Finally we want to be sure this document isn’t tied to a canceled or hidden appointment. There’s probably a way I could have done this in my JOINs rather than using the subquery, but it would have required a LEFT JOIN, because sometimes they forget to link the document to the appointment, and any unlinked documents would therefore have been excluded from the report. That in turn would have prevented me from putting any conditions relating to the Appointments table in my WHERE clause, because when you combine a LEFT JOIN with a WHERE condition that requires a value in that table to be anything but NULL, you essentially end up with an INNER JOIN. If what I just said caused you to raise a skeptical brow, either Google it or ask me to write a separate post about it. Trusting me is also a fine option.
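Boiled down, then, the INSERT has roughly this shape (heavily abbreviated; join conditions and column names beyond the ones discussed above are illustrative):

```sql
INSERT INTO #Docs (SDID)
SELECT d.SDID
FROM DOCUMENT d
    INNER JOIN @DocTypes  dt  ON dt.DocType = d.DOCTYPE       -- only the four visit types
    INNER JOIN LOCREG     loc ON loc.LOCID  = d.LOCOFCARE     -- join columns illustrative
    INNER JOIN @Locations l   ON l.Abbrev   = loc.ABBREVNAME  -- only the requested facilities
    LEFT  JOIN cus_vExcludePatients ep ON ep.PId = d.PID      -- test patients, if any
WHERE d.CLINICALDATE BETWEEN @StartDate AND @EndDate
  AND ep.PId IS NULL                -- no test patients sneak in
  AND NOT EXISTS (                  -- not tied to a canceled or hidden appointment
        SELECT 1
        FROM   Appointments a
        WHERE  a.AppointmentsId = d.AppointmentsId
          AND  a.Canceled = 1)      -- flag column illustrative
-- ...plus the XID top-level test, the signed-document test, and the
-- various "oh, exclude this too" conditions described above.
```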

Once you get your head around how that works, the next block that populates #Appends should be self-explanatory. We gather the appends because if the note is already signed when they catch the missing charge, they’re going to have to attach it to an append.
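In shape, it’s just this (a sketch; the parent-link column name is illustrative):

```sql
-- For each qualifying document, collect the SDIDs of its appends.
INSERT INTO #Appends (ParentSDID, AppendSDID)
SELECT doc.SDID, a.SDID
FROM #Docs doc
    INNER JOIN DOCUMENT a ON a.PSDID = doc.SDID;  -- append rows point back at their parent
```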

So now we have all of our qualifying documents in two neat little temp tables. Each of these SDIDs should have an associated charge in the ORDERS table, either via the top-level document or one of its appends. If it doesn’t, it ends up on the report and I get a 5% cut. I wish.

I don’t think we need to paste the entire final block here, because it should be pretty clear what’s going on. #Docs is our starting point; that contains all the relevant SDIDs for the facility and time frame. Everything we JOIN to it is just to pull data for the report. It’s the WHERE clause that runs the show:

It’s best to look at this from the inside out. Consider that each document is represented by d.SDID. So that derived table down near the bottom, the one we’ve aliased as “docs”, contains that SDID, along with the SDIDs of all of its appends. We’re INNER JOINing docs to ORDERS by SDID, so as to find all of the orders associated with that document and its appends. We INNER JOIN ORDERS to @OrderCodes because we only want to look for service charges, not tests or referrals, etc. And we add a few other filters in there because, as before, some things came through the testing phase that they told me I needed to exclude. I don’t have to understand why, I just have to know how to get them out of there.

By using WHERE NOT EXISTS, we end up with a list of documents without service charges, and whatever information they require about those documents. Works a charm.
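Stripped to its skeleton, the final query looks something like this (a sketch; the report columns and a few names are illustrative):

```sql
SELECT d.SDID  -- ...plus whatever patient/provider/facility columns the report needs
FROM #Docs d
    -- JOINs that decorate each row with report data go here
WHERE NOT EXISTS (
    SELECT 1
    FROM ( SELECT d.SDID AS SDID            -- the document itself...
           UNION ALL
           SELECT ap.AppendSDID             -- ...plus all of its appends
           FROM   #Appends ap
           WHERE  ap.ParentSDID = d.SDID ) docs
        INNER JOIN ORDERS      o ON o.SDID = docs.SDID  -- orders on the doc or its appends
        INNER JOIN @OrderCodes c ON c.Code = o.CODE     -- service charges only
);
```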

I want to digress a bit before wrapping up part one, and talk about my inconsistent comparison usage. Sometimes I write NOT [field] = [value]. Other times, [field] <> [value], or even [field] != [value]. Each of these will result in the same execution plan, but I admit I could stand to be more consistent. If I had a team of coworkers looking at my code, I’m sure I would be. But I’m all on my lonesome here, and I’m often flying by the seat of my pants. I encourage you to do better.
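For the record, all three spellings are interchangeable (the column and value here are illustrative):

```sql
-- Logically identical predicates; SQL Server compiles all three the same way.
WHERE NOT o.CODE = '99213'
WHERE o.CODE <> '99213'
WHERE o.CODE != '99213'
```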

In part two, we look at the Suspect Charges procedure.

4 thoughts on “The Missing/Suspect Charges report, part 1”

  1. Beverly

    Regarding “The Missing/Suspect Charges report – Part I”.
    Can you send me the pattern matching function?
    How does this function know what Table Name to select from? Is that internal to the function definition?


    1. Ron Moses Post author

      Hi Beverly. I’ll paste the function here in my reply if that works for you. The function doesn’t select from any table name. You pass it a string and the delimiter, and it passes back a table with the values.
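      Something along these lines (a sketch of the common XML-split pattern; note it assumes the list contains no XML-special characters like & or <):

      ```sql
      -- Table-valued function: split @List on @Delimiter via an XML cast.
      CREATE FUNCTION dbo.cus_fnParseDelimitedList (@List VARCHAR(MAX), @Delimiter CHAR(1))
      RETURNS TABLE
      AS
      RETURN
      (
          SELECT t.c.value('.', 'VARCHAR(255)') AS Element
          FROM ( SELECT CAST('<e>' + REPLACE(@List, @Delimiter, '</e><e>') + '</e>' AS XML) AS x ) a
          CROSS APPLY a.x.nodes('/e') t(c)
      );
      ```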

      Hopefully that formats properly here in the comments section. So for example, when we pass in our @LocationString parameter to the stored procedure, that’s a pipe-delimited string that looks something like ‘ABC|DEF|GHI|JKL’. When we call dbo.cus_fnParseDelimitedList(@LocationString,'|'), we get back a table with those four values in a column named Element.

      I’ll admit I don’t fully understand this function, and I didn’t write it. I haven’t worked much with XML, so I’m just glad it works. You may also need to run GRANT SELECT ON dbo.cus_fnParseDelimitedList TO public afterward. Thanks for reading!

    1. Ron Moses Post author

      Hi Cole.

      Yes I have. There’s a potential window of error you need to account for, but you can get there.

      The path is:

      DOCUMENT.AppointmentsId > Appointments.AppointmentsId
      Appointments.PatientVisitId > PatientVisit.PatientVisitId

      Now you’re at the Visit, which is your door to the billing side.

      The one thing you need to be aware of is, if the user forgets to attach the visit document to the visit, the link between DOCUMENT and Appointments won’t work.

      Does that help?

      EDIT: It occurs to me I could have been a bit more “SQL-y” in my response. Let me clarify:
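      Something like this, using LEFT JOINs so unlinked documents still show up (the SELECT list is illustrative):

      ```sql
      SELECT d.SDID, pv.PatientVisitId
      FROM DOCUMENT d
          LEFT JOIN Appointments a  ON a.AppointmentsId  = d.AppointmentsId
          LEFT JOIN PatientVisit pv ON pv.PatientVisitId = a.PatientVisitId;
      -- If the user never attached the document to the visit,
      -- d.AppointmentsId will be NULL and that row never reaches billing.
      ```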

