This is the second post in a two-part series that discusses healthcare predictive and propensity modeling and selecting the optimal analytics partner to support your growth and engagement efforts. The first post in this series shares five best practices in healthcare propensity modeling.
In our last post, we talked about big data, healthcare, and predictive modeling: How can you leverage your data, analytics, and modeling to get the “biggest bang” for your marketing dollars?
However, as we mentioned, the million-dollar question is: With so many variables (and so much data) at play, how can healthcare marketers ensure they are effectively leveraging propensity and other predictive modeling?
We provided five guidelines for consideration when you commence predictive modeling:
- Define target/goal
- Use best data
- Use multiple data sources and most appropriate analytics
- Ensure data are vetted/validated
- Deploy validated analytics and employ follow-up testing
If you decide to use predictive modeling, you must ensure you are engaging with an HCRM partner that can support best-in-class analytics. There are several components to consider when doing your due diligence on a potential partner. For example, some provide in-house modeling and analytics, sometimes as a menu of options and other times as custom services for a fee. Others outsource their modeling and analytics to (usually) industry-agnostic companies; there are also many smaller, boutique/niche analytics companies that provide some specialization.
That being said, because healthcare data are unique, and how organizations engage with patients is becoming more sophisticated, first and foremost you want to make sure a potential partner has experience applying healthcare data, is invested in ongoing and current analytics, and provides access to results that allow for direct application of results.
Specifically, ask the following 5 critical questions:
1. What is your experience with healthcare data, and how do you leverage clinical, diagnostic, and insurance information?
2. Are your data and models dynamic and refreshed/rebuilt regularly?
3. What is the model vetting and validation process?
4. How do you use patient and non-patient data elements?
5. What tools and applications are available for leveraging predictive modeling and other analytics?
Clinical, Diagnostic, Insurance and Other Health Data
In healthcare, targets for modeling should be clinically based. For example, there are statistically significant differences between a “bariatric surgery” target group defined by clinical and diagnostic codes as opposed to a target group defined using demographic, sociographic, and socioeconomic data alone.
Many modelers do not use clinical and diagnostic information (or lack the healthcare acumen to do so). Without it, you lose the predictive strength that comorbidity and family history add to the modeling and analytics; when included, they result in more robust models.
At the very least, if your potential vendor does not use clinical, diagnostic, and insurance information, find out how you can still access and use these fields in the marketing process as filters and preselects for list pulls, geo-targeting, and messaging.
In sum, when it comes to healthcare marketing, the more clinically based the models, the better defined your targets and goals, and ultimately the more successful your campaign and engagement outcomes.
Key Takeaway: Ensure that a vendor leverages clinical and diagnostic, as well as insurance information, in modeling exercises.
Dynamic versus Static Models and the “Best” Data Elements
Dynamic models (and analytics) are refreshed on a regular basis. Two important variables are how often models are refreshed and how often they are rebuilt. Refreshing refers to updating the patient data on a continuous basis, with two goals: ensuring new patient and consumer records are appended with model scores so all individuals can be considered for marketing outreach, and flagging patients with a change of status (e.g., they have had bariatric surgery) accordingly.
Different vendors have different refresh protocols, usually based on factors such as the cadence of the data feed (of patient and/or consumer data), frequency and/or percentage of new records, and specific stipulations in the agreement. However, it is suggested that data minimally be refreshed every 30-90 days.
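The refresh step described above can be sketched in a few lines. This is a hypothetical illustration only: the record fields, the placeholder scoring function, and the suppression flag are assumptions, not any vendor's actual implementation.

```python
# Hypothetical sketch of a refresh step: append model scores to new records
# and flag patients whose status changed (e.g., already had the procedure).
def refresh(records, score_fn, converted_ids):
    for r in records:
        if r.get("score") is None:       # new record: append a model score
            r["score"] = score_fn(r)
        if r["id"] in converted_ids:     # status change: suppress from outreach
            r["suppress"] = True
    return records

records = [
    {"id": 1, "score": 0.8},
    {"id": 2, "score": None},            # new patient, not yet scored
]
# Placeholder scoring function; a real one would apply the propensity model.
refreshed = refresh(records, score_fn=lambda r: 0.5, converted_ids={1})
print(refreshed)
```

The key design point is that a refresh touches both sides of the list: newly arrived records get scored, and converted patients get flagged so they are not marketed to again.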
Static models are what many refer to as “canned” models. They tend to be less specific and are not as statistically robust as dynamic models. The Type I error rates (false positives) are much higher for static models, and a consequence of this is unknowingly using “bad” leads, which over time can be very expensive. A key way to reduce CPL (cost per lead) spend is to reduce or eliminate the “bad” leads.
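The cost of those "bad" leads is easy to quantify. The sketch below uses made-up spend, lead, and false-positive numbers purely to show how the effective CPL, counted against genuine leads only, grows as the false-positive rate rises.

```python
# Hypothetical illustration: how false positives inflate cost per lead (CPL).
# All numbers below are invented for demonstration.

def effective_cpl(total_spend, leads, false_positive_rate):
    """CPL counted against genuine leads only."""
    good_leads = leads * (1 - false_positive_rate)
    return total_spend / good_leads

spend = 50_000   # campaign spend in dollars
leads = 1_000    # total leads generated

# A lower false-positive rate (as with a dynamic model) means a lower true CPL.
print(effective_cpl(spend, leads, 0.10))  # 10% false positives -> ~$55.56/lead
print(effective_cpl(spend, leads, 0.40))  # 40% false positives -> ~$83.33/lead
```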
Model rebuilding is the practice of completely rebuilding predictive models from the ground up. As a rule of thumb, clinical/diagnostic propensity models, as well as other predictive models (e.g., churn, best payer, optimal channel) should also be rebuilt at least annually. Related, find out what the rebuild protocol includes; e.g., rebuilding clinical models should consider the latest clinical research in addition to identifying any new best behavioral or preference indicators. Finally, refreshing and rebuilding models may be something a vendor charges extra for; make sure you understand what is included and advocate for including these best practices as part of the subscription agreement.
Key Takeaway: Dynamic modeling and analytics, refreshed on a regular basis, are always preferred. Ensure the vendor is refreshing models at least every 30-90 days. In addition, models should be rebuilt annually.
Vetting and Validation
This was item number 4 in our last post, "Five Best Practices in Healthcare Propensity Modeling" – and it is equally important here, so we are repeating it. At this point, you will know if your potential vendor is using clinical and diagnostic information in their model-building and analytics, especially in the creation of the target event group (e.g., individuals who have received a hip replacement or bariatric surgery – and thus the target group on which the model/analytics are based).
If they are, that is great, but there is one more step. You want to ask whether the modeling and analytics have been vetted by clinical experts. As we discussed previously, best practice is to ensure the clinical/diagnostic event groups are vetted by Physician Clinical Boards and other experts to confirm the validity of the codes (e.g., ICD-10 and ICD-9) and other clinical attributes utilized for clinically oriented predictive analytics.
In addition to clinical vetting, a potential vendor should be able to speak to additional model validation processes and procedures. For example, statistically, model validation can/should include approaches such as standard use of validation samples, hold-out groups and sampling with and without replacement.
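The two statistical approaches named above, a hold-out sample and resampling with replacement (the bootstrap), can be sketched as follows. This is a minimal illustration on synthetic data using scikit-learn; the logistic regression model and the features are stand-ins, not any vendor's actual propensity model.

```python
# Minimal sketch of two validation approaches: a hold-out sample and
# bootstrap resampling (sampling with replacement). Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# Hold-out validation: fit on 70% of the data, score on the untouched 30%.
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
holdout_auc = roc_auc_score(y_hold, model.predict_proba(X_hold)[:, 1])

# Bootstrap: resample the hold-out set with replacement to gauge how
# stable the model's AUC is, yielding an approximate confidence interval.
rng = np.random.default_rng(0)
aucs = []
for _ in range(200):
    idx = rng.integers(0, len(y_hold), len(y_hold))
    aucs.append(roc_auc_score(y_hold[idx], model.predict_proba(X_hold[idx])[:, 1]))

print(f"hold-out AUC: {holdout_auc:.3f}, bootstrap 95% interval: "
      f"({np.percentile(aucs, 2.5):.3f}, {np.percentile(aucs, 97.5):.3f})")
```

A vendor should be able to describe procedures of this kind, and show how the numbers hold up on fresh data, rather than quoting performance measured only on the data the model was trained on.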
Event group segmentation and profiling, both before and after campaigns, along with breakdowns of responders (converters as opposed to hand-raisers), are also part of the family of modeling and analytics validation procedures.
Key Takeaway: Propensity modeling, as is the case with any advanced analytics, requires subject matter expertise. It is important to ensure the clinical inputs and statistical approaches applied are sound – or, minimally, are regularly reviewed by reputable 3rd parties – and that a variety of testing approaches are employed in creation and maintenance of the models. You need solid verification that a vendor uses several statistical and other modeling and analytics validation practices.
Use All Data Elements
Does your potential vendor leverage all the data elements available? We already discussed clinical, diagnostic, and insurance information from patient files. What about past marketing responses and previous points of contact? Are patient tenure and loyalty considered? How many of the appended consumer data elements are actually used? Does appended data include more than basic demographics? For example, many marketers use standardized socio-demographic and socio-economic segmentation schemes (e.g., Experian Mosaics) for creating prospect lists and messaging. Does your potential vendor do the same?
Critically important are some of the more proprietary data elements that are available, such as new movers, deceased lists, and new pregnancies, as well as self-reported ailment and behavioral data. Equally important is ensuring 3rd party information is up-to-date and applied to the database correctly. Specifically, ask if they are using standard NCOA and CASS certification with any/all of their appended data elements, and inquire about their matching logic (the easiest proof may be documented match rates and references from current clients).
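Match rates are simply the share of your records that the vendor can link to appended data. The sketch below is illustrative only: the records and the crude name-plus-street match key are hypothetical, and real vendors use far more sophisticated, NCOA/CASS-certified matching logic.

```python
# Illustrative only: a naive matching-logic check and match-rate calculation.
# Records and the normalization rule are hypothetical.

def normalize(record):
    """Crude match key: lowercased name + street, whitespace collapsed."""
    return " ".join((record["name"] + " " + record["street"]).lower().split())

patients = [
    {"name": "Jane Doe",  "street": "12 Oak St"},
    {"name": "John Roe",  "street": "9 Elm Ave"},
    {"name": "Ann Smith", "street": "4 Pine Rd"},
]
appended = {normalize(r) for r in [
    {"name": "JANE DOE",  "street": "12 Oak  St"},   # case/spacing differ
    {"name": "Ann Smith", "street": "4 Pine Rd"},
]}

matched = sum(normalize(p) in appended for p in patients)
print(f"match rate: {matched / len(patients):.0%}")  # 2 of 3 records match
```

Asking how a vendor computes this number, and what keys and fuzziness their matching uses, reveals a lot about how correctly 3rd party data will land on your database.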
Below is an example of the top 10 statistical drivers (variables) from a bariatric surgery predictive algorithm. Note that patient, consumer, and 3rd party variables are ALL represented in the set of top drivers.
Key Takeaway: Not all data elements will be used in the modeling and analytics. However, experienced vendors know the data well, especially the uniqueness of the healthcare data, and understand which variables work best in specific modeling and analytics schemes. Ask a vendor for examples of model results so that you can confirm that predictors include a variety of data elements: individual attributes, household elements, clinical data points, etc.
Tools and Applications to Optimally Leverage the Modeling and Analytics
One of the biggest problems in marketing campaigns is messaging that is not aligned with the specific prospect target groups. Once you have a good handle on items 1-4, ask what tools and applications a potential vendor offers for leveraging modeled data and other analytics. You need to create targeted lists, produce data summaries, automate the mailing/emailing and other channel flows as much as possible, perform campaign response analytics, incorporate control/test groups, and carry out many other post-modeling and analytics tasks for your targeted campaigns.
There is a wide range of features and services provided by vendors including stand-alone platforms, in-house and on-line data deployment and access tools, and campaign management that ranges from fully automatic to fully manual. Ask your vendor what options they provide in terms of visualizations and market intelligence and whether these are standard deliverables or an extra charge.
Strong market intelligence tools can help you leverage scored data by allowing you to quickly and visually analyze the impact of various preselects and filters on the counts and quality of scored prospects.
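The preselect-and-filter analysis described above amounts to slicing a scored prospect file and checking the resulting counts. The pandas sketch below uses hypothetical column names and values purely to show the mechanics a market intelligence tool automates.

```python
# Minimal sketch (hypothetical columns and values) of applying preselects
# and filters to a scored prospect file, as a market intelligence tool might.
import pandas as pd

prospects = pd.DataFrame({
    "propensity_score": [0.91, 0.85, 0.42, 0.77, 0.66],
    "zip":              ["06103", "06103", "06105", "06106", "06103"],
    "age":              [52, 47, 35, 61, 58],
})

# Preselects: high-scoring prospects aged 45+ in the targeted ZIP codes.
selected = prospects[
    (prospects["propensity_score"] >= 0.70)
    & (prospects["age"] >= 45)
    & (prospects["zip"].isin(["06103", "06106"]))
]
print(len(selected), "of", len(prospects), "prospects pass the filters")
```

Seeing how each added filter changes both the count and the score distribution of the remaining prospects is exactly the trade-off (volume versus quality) these tools let you evaluate visually.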
Another strong option a potential vendor may offer is real-time reporting that lets you track response rates, test-cell results, and campaign ROI. Such a tool may also alert you to potential problems and opportunities, allowing you to make faster, data-driven decisions and to adapt your campaigns with the speed and flexibility needed to keep your marketing efforts aligned with your marketing goals.
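The response-rate and ROI calculations such reporting surfaces are straightforward; what the tooling adds is doing them continuously and per test cell. The figures below are invented solely to show the arithmetic on a model-targeted cell versus a control cell.

```python
# Illustrative ROI check on hypothetical test-cell results (invented numbers).
cells = {
    "model-targeted": {"mailed": 5000, "responders": 150,
                       "revenue": 90_000, "cost": 10_000},
    "control":        {"mailed": 5000, "responders": 60,
                       "revenue": 36_000, "cost": 10_000},
}

for name, c in cells.items():
    rate = c["responders"] / c["mailed"]          # response rate per cell
    roi = (c["revenue"] - c["cost"]) / c["cost"]  # return on the cell's spend
    print(f"{name}: response rate {rate:.1%}, ROI {roi:.1f}x")
```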
Finally, make sure your prospective vendor knows what you do now, what you would like to do moving forward, and what your long-term goals are for modeling, analytics, list creation, and optimal campaign execution and management.
Key Takeaway: You want to have a good understanding of your business/marketing models in relation to the tools and applications you will want to optimally leverage. Does your budget allow for new tools to increase automation? Are you spending money on tools and features you will never use? Spend some time weighing the various vendor options using both current and desired states, and making sure any decisions align with your budget, business rules, and ultimate goals.
With an ideal vendor, modeling and analytics are optimized, and you have full transparency into not only your data, but the sources of your data and how all of your data elements factor into the multivariate analyses you are using to optimize your marketing lists.
As we mentioned previously, predictive modeling is most effective and efficient when employed in an ongoing lifecycle, and you want to track, measure, and analyze the effect of the models and analytics on the overall success of your campaigns. A solid vendor will allow you to easily gain insights you can use to improve subsequent campaigns.
In our two pieces, we provided five guidelines for consideration when you commence predictive modeling as well as five critical factors to consider when selecting the best partner to support your modeling and analytics efforts.
Now that you know the 'whys' behind modeling best practices, you can feel more confident asking vendors the critical questions (and DO IT!). At the end of the day, this due diligence will pay off as you employ the tools, solutions, and partners that will support efforts to increase your ROMI, using the best resources possible, while at the same time reducing spend.
As Director of Analytics at Evariant, Bill’s focus is on design, execution, and implementation of optimal analytics. Maximizing ROI as a result of multivariate predictive modeling is a primary goal. Bill has been a presenter at major marketing, analytics, and academic conferences including the DMA, AMA, APA, APHA, and GSA. His background as an experimental and health psychologist has included teaching as well as several years of clinical work. His specialty areas include older adults, community-based research, HIV/AIDS, sexual behavior and risk, environmental stress, influenza and pneumonia vaccination, depression and mental health, medication and health literacy. Dr. Disch earned his Ph.D. in experimental psychology from the University of Rhode Island, with a specialty in quality of life and well-being, and mixed-methods (quant, qual, mixed).