Staying Fresh

At its core, a predictive model is a snapshot in time. Whether that snapshot was taken one week or one year ago, it is the basis of your predictions, and just like a printed photograph, it can become less relevant over time. This is not dissimilar to a written policy or business process: creating it is only the first step, and the real work is in regularly reviewing and updating it. It is critically important that predictive models be monitored, evaluated, and refreshed on a regular basis. Unfortunately, only a small number of top-level fundraising teams operate under this paradigm.

Predictive Modeling is a Process, Not a Project

Organizations that employ predictive modeling most effectively understand its ongoing, iterative nature: it is more process than project. Every predictive model loses effectiveness over time, regardless of its original accuracy or strength. Understanding what drives that loss of effectiveness is core to building a road map for sustainable impact.

As donors increase their giving, give for the first time, attend events, open emails, and perform other recordable behaviors, a predictive model’s “snapshot” of your data becomes less and less reflective of the true population. Think of it as a family portrait taken before a birth or a marriage: the old portrait is no longer an accurate representation of the family. Change in behavior is a primary driver of models losing efficacy, so developing a guideline for responding to it is essential.

Today, we also see many organizations maturing in their perspective on data; in many ways, it is becoming our most valuable resource. Each year, most nonprofits record more data than the year before, which means more data is available than when their models were first built. Typical examples include event attendance, social media data, and online giving days. Additional data may improve model accuracy or capture more nuance, and it should therefore be considered when reviewing model relevance.

As organizations mature, their overall constituency grows. Some sectors, such as healthcare and NGOs, can see substantial year-over-year increases in constituent counts, and that growth directly affects a model’s reach. While we want models to accurately reflect behavior using current data, we also need coverage across the full constituency. If an organization adds 12,000 new records and they do not have predictive modeling scores, that presents a significant gap in its line of sight for tactical and strategic decision making. Model coverage should be comprehensive.

Refreshing Predictive Models

A predictive model can be refreshed or rebuilt at any time. The more appropriate question is, “When would rebuilding or refreshing a model have an impact on the business?” Set aside time and calendar considerations and let significant benchmarks in the data outline this process. BWF recommends following a 20/20/20 guideline for model rebuilding: rebuilding before these thresholds are reached may not have significant impact, and waiting too long may diminish real-time insight.

Speed of Data: The first 20 reflects a 20 percent change in available data. As mentioned previously, organizations are dramatically increasing the variety and amount of data they track and align with constituent records. While more data will not always mean a more accurate model, when a significant amount of new information becomes available it is useful to rebuild models with it. Consider the number of independent variables that were included in your last model. When that number increases by at least 20 percent, we recommend a review, as sketched below.
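
As a rough illustration, the check below flags a model for review once the pool of available predictors has grown by 20 percent or more since the last build. Only the 20 percent threshold comes from the guideline above; the function name and the variable counts are hypothetical.

```python
def data_growth_review_due(vars_at_last_build: int,
                           vars_available_now: int,
                           threshold: float = 0.20) -> bool:
    """Flag a rebuild review when available predictors grow past the threshold."""
    growth = (vars_available_now - vars_at_last_build) / vars_at_last_build
    return growth >= threshold

# Hypothetical example: 45 variables at the last build, 58 available today
print(data_growth_review_due(45, 58))  # True (about 29 percent growth)
```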

Speed of Growth: As nonprofit constituencies grow, so does the need to cover new records with key predictive models. The impact, however, may not be as direct as it seems. If a nonprofit’s constituency grows from 100,000 to 120,000 constituents over a two-year window, the impact is often felt at the very top of the file in indirect ways. Most new constituents will score low in any model (limited history, giving, and data), but they will shift percentile rankings. If your prospect development team focuses on the top 1 percent of the major gift model, a 20 percent change in file size would grow that group from 1,000 to 1,200 records and would impact major gift prospecting, as the arithmetic below shows. A significant change in growth warrants a predictive modeling rebuild.
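
The arithmetic in that example is simple enough to make explicit. A minimal sketch, using the hypothetical 100,000-to-120,000 file above:

```python
def top_pool_size(constituents: int, top_fraction: float = 0.01) -> int:
    """Size of a percentile-based prospect pool, e.g. the top 1 percent."""
    return int(constituents * top_fraction)

before = top_pool_size(100_000)  # 1,000 records in the top 1 percent
after = top_pool_size(120_000)   # 1,200 records after 20 percent file growth
print(after - before)            # 200 additional prospects to qualify and assign
```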

Speed of Business: This is by far the most commonly observed yet most varied threshold for a modeling rebuild: a significant (greater than 20 percent) change in the population being modeled. While organizations can add data and add constituents, it is far more common that a model becomes less effective because the target population has changed. A university grows from 400 to 500 leadership annual donors, or a hospital runs a successful donor acquisition campaign; both represent a significant shift in the population the model was built upon, signaling that the model may be less current and effective. The speed of business, meaning how quickly actionable intelligence is needed to inform strategic decisions, often varies by gift type and size. Many organizations build new acquisition gift models for every direct mail campaign. Conversely, it may take three to four years to see even a 20 percent increase in planned giving prospects, and that model commonly has a longer shelf life. The sketch below checks all three 20/20/20 triggers together.
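
For completeness, here is a sketch that evaluates all three triggers at once; aside from the university example’s 400-to-500 shift, the input figures are hypothetical.

```python
def rebuild_triggers(vars_then: int, vars_now: int,
                     file_then: int, file_now: int,
                     target_then: int, target_now: int,
                     threshold: float = 0.20) -> dict:
    """Report which 20/20/20 triggers meet the rebuild threshold."""
    def change(then: int, now: int) -> float:
        return (now - then) / then

    checks = {
        "speed of data": change(vars_then, vars_now),
        "speed of growth": change(file_then, file_now),
        "speed of business": change(target_then, target_now),
    }
    return {name: pct for name, pct in checks.items() if pct >= threshold}

# 45 -> 50 variables, 100,000 -> 110,000 records, 400 -> 500 target donors
print(rebuild_triggers(45, 50, 100_000, 110_000, 400, 500))
# {'speed of business': 0.25} -> the population shift alone signals a review
```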

External Factors: Major changes in the environment external to your organization must also be taken into account when evaluating whether a model has reached the end of its lifecycle. A major economic shift, such as the Great Recession, can have significant implications for your constituents’ likelihood to support your organization. Similarly, elections and other political events may present unique opportunities to your organization and can be strong indicators that your models are in need of refreshing.

Time to Update: Now What?

The “how” of a predictive model refresh can be trickier than it sounds. Should your organization simply revise an existing model or rebuild it from scratch? Is your model static, or is it scored dynamically on a nightly basis? For all organizations, the question of refreshing predictive models must first start with a re-evaluation of the business objective: What was the need for modeling in the first place? Is that need still valid? Has it changed? You should also evaluate whether additional models are needed on top of the existing ones, and whether building them is a higher priority than refreshing what you already have.

All of this can be done by creating a monitoring and evaluation plan to periodically review your model’s effectiveness and to quantify your model lifecycles. At the very least, your plan should cover the following questions.

Monitoring. Are your models still reasonably accurate based on their starting points? How stable are your models over time? Are they prone to large shifts in scoring? Do new records score as expected based on known data?
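
One common way to quantify the score stability described above is the population stability index (PSI), which compares the distribution of model scores at build time against the current distribution. PSI is an industry convention rather than anything prescribed here, so treat this sketch, and its rule-of-thumb cutoffs, as assumptions.

```python
import numpy as np

def psi(baseline_scores, current_scores, bins: int = 10) -> float:
    """Population stability index across score bins; higher means more drift.
    Common rules of thumb: below 0.10 is stable, above 0.25 is a significant shift."""
    combined = np.concatenate([baseline_scores, current_scores])
    edges = np.histogram_bin_edges(combined, bins=bins)
    base_pct = np.histogram(baseline_scores, bins=edges)[0] / len(baseline_scores)
    curr_pct = np.histogram(current_scores, bins=edges)[0] / len(current_scores)
    # Clip to avoid log(0) in sparsely populated bins
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Simulated score distributions, for illustration only
rng = np.random.default_rng(0)
at_build = rng.beta(2, 5, 10_000)      # scores when the model was built
today = rng.beta(2.5, 5, 10_000)       # scores after donor behavior has shifted
print(round(psi(at_build, today), 3))  # a modest PSI suggests limited drift
```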

Evaluation. What is the model’s ROI? Is the model still producing results for your organization? If the model’s goal is to identify prospects for certain types of gifts, how effective is the model? What gains in efficiency result from the model? Are prospect researchers more effective and efficient when utilizing the model? What is the organizational impact of the model?
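
A sketch of one way to put a number on those evaluation questions, using simulated data; “lift” in the top decile is a generic gains measure, not a BWF-specific metric:

```python
import numpy as np

def top_decile_lift(scores: np.ndarray, outcomes: np.ndarray) -> float:
    """How much more often the top 10 percent of scores convert vs. the whole file."""
    cutoff = np.quantile(scores, 0.90)
    return outcomes[scores >= cutoff].mean() / outcomes.mean()

# Simulated file: 50,000 records, outcomes loosely correlated with score
rng = np.random.default_rng(1)
scores = rng.random(50_000)
outcomes = (rng.random(50_000) < 0.02 + 0.08 * scores).astype(float)
print(round(top_decile_lift(scores, outcomes), 2))  # roughly 1.6x the base rate
```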

For most nonprofit organizations, reviewing your monitoring and evaluation plan every 3–6 months is probably sufficient to stay ahead of the naturally occurring degradation experienced by all predictive models.

To learn more about the analytics services offered by BWF, or to discuss your predictive modeling needs with a member of the BWF Decision Science team, please contact us at decisionscience@bwf.com or 800.921.0111. Together, we transform philanthropy.