
Next Best Action: From Rules Engines to Relational Intelligence

Your rules engine has 200 rules. Your customers have 200 million possible states. The gap between the two is where revenue leaks, churn happens, and customers receive tone-deaf offers at the worst possible moment.

TL;DR

  • Rules engines cover 50-200 scenarios out of 4,000+ possible actions per customer. The revenue opportunity lives in the 3,800 action combinations that rules never explore.
  • 70% of customers report annoyance from irrelevant outreach. Rules engines send renewal offers during billing disputes because they cannot weigh thousands of signals simultaneously.
  • ML-based NBA makes four simultaneous predictions: what to offer, when to offer it, through which channel, and whether to act at all. Each draws on the full relational context across all CRM tables.
  • DoorDash saw 1.8% engagement lift across 30M users. Databricks saw 5.4x conversion lift on account expansion. Both from relational ML seeing the full customer graph, not from better rules.
  • KumoRFM collapses the 6-12 month NBA pipeline into a single predictive query. New products, channels, and customer segments are handled without rebuilding the pipeline.

A telecom company sends a premium plan upgrade offer to a customer who called support three times last week about billing errors. A bank sends a credit card cross-sell email to a customer who just reported fraud on their existing card. A retailer sends a "we miss you" discount to a customer who placed an order yesterday.

These are not hypothetical examples. They happen every day at companies with sophisticated marketing stacks and dedicated personalization teams. The problem is not that these companies lack data. It is that their decisioning systems cannot use it.

Next best action is the discipline of determining the right interaction with the right customer at the right time through the right channel. Most implementations fall dramatically short of this promise because they rely on rules that encode what a human thinks should happen, rather than models that learn what actually works.

customer_state (snapshot)

| customer_id | product | contract_end | last_support | NPS | recent_activity |
| --- | --- | --- | --- | --- | --- |
| C-4401 | Pro Plan | 45 days | Billing error (3 days ago) | 6 | Browsed competitor comparison |
| C-4402 | Enterprise | 380 days | None in 90 days | 9 | Invited 2 team members |
| C-4403 | Basic | 12 days | Feature request (7 days ago) | 7 | Downloaded API docs |
| C-4404 | Pro Plan | 210 days | Onboarding (60 days ago) | 8 | Usage dropped 40% this week |
| C-4405 | Enterprise | 90 days | Escalation (1 day ago) | 4 | VP called account manager |

Five customers, five different states. A rules engine sees contract_end and triggers a renewal offer for C-4403. It misses that C-4401 needs support resolution before any sales touch, and C-4404 needs a usage check-in.

rules_engine_actions vs optimal_actions

| customer_id | rules_engine_action | optimal_action | gap |
| --- | --- | --- | --- |
| C-4401 | Renewal offer (45d rule) | Support resolution first | Tone-deaf upsell during frustration |
| C-4402 | No action (no rule fires) | Expansion offer for team plan | Missed $48K expansion opportunity |
| C-4403 | Renewal offer (12d rule) | Feature demo + renewal | Generic offer vs personalized |
| C-4404 | No action (no rule fires) | Usage check-in call | Silent churn risk undetected |
| C-4405 | Renewal offer (90d rule) | Executive retention call | Sales email to an angry VP |

The worst outcomes stand out: C-4401 receives a sales pitch during a billing dispute, and C-4405 receives an automated renewal email while their VP is on the phone with account management about a service failure.

How rules engines approach NBA

A typical rules-based NBA system works like this: business analysts define 50 to 200 rules in a decisioning platform (Pega, Salesforce, Adobe). Each rule has a condition and an action. If contract expiry is within 30 days, trigger renewal offer. If last purchase was more than 60 days ago, trigger win-back campaign. If customer segment is "high value" and product holding is "basic," trigger upgrade offer.

Rules are prioritized. Conflict resolution logic determines which action wins when multiple rules fire. Suppression rules prevent over-contact. Channel preferences route the action to email, push, SMS, or call center.
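The mechanics above can be sketched in a few lines. This is an illustrative toy, not any vendor's engine; the rule names, thresholds, and profile fields are assumptions for the example:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]  # predicate over the flat customer profile
    action: str
    priority: int  # lower number wins conflict resolution

# Hypothetical rules modeled on the examples above.
RULES = [
    Rule("renewal_30d", lambda c: c["contract_days_left"] <= 30, "renewal_offer", 1),
    Rule("winback_60d", lambda c: c["days_since_purchase"] > 60, "winback_campaign", 2),
    Rule("upgrade_hv", lambda c: c["segment"] == "high_value" and c["product"] == "basic",
         "upgrade_offer", 3),
]

def decide(customer: dict, suppressed: set) -> Optional[str]:
    """Fire all matching rules, resolve conflicts by priority, apply suppression."""
    fired = [r for r in RULES if r.condition(customer)]
    for rule in sorted(fired, key=lambda r: r.priority):
        if rule.action not in suppressed:  # suppression prevents over-contact
            return rule.action
    return None  # no rule fires: the customer gets nothing, relevant or not

print(decide({"contract_days_left": 12, "days_since_purchase": 10,
              "segment": "mid", "product": "pro"}, suppressed=set()))
# Only contract_days_left influenced the decision; every other signal was ignored.
```

Note that `decide` consults at most three fields per rule. Everything else in the customer's state is invisible to the decision, which is exactly the limitation the next section describes.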

This approach has four structural limitations.

1. Rules encode known patterns only

Every rule was written by a human who observed a pattern and codified it. The system cannot discover patterns that no human has articulated. It cannot learn that customers who browse competitor comparison pages on mobile devices between 9-11 PM are in a specific decisioning state that calls for a specific response. The rules reflect what the business already knows. The value of ML is finding what the business does not know.

2. Rules cannot weigh thousands of signals

A human can hold 5-10 conditions in a rule. A customer's state is defined by thousands of signals: recent transactions, support history, product usage, marketing response history, life events, channel preferences, time of day, device, location, and the behavior of similar customers. No rules engine can weigh all of these simultaneously. ML models can.

3. Timing is treated as a schedule, not a prediction

Rules engines trigger actions on fixed schedules: 30 days before renewal, 60 days after last purchase. But the optimal timing varies per customer. Some customers should receive a renewal offer 45 days out. Others should receive it 15 days out. The difference depends on engagement patterns, competitive exposure, and historical response behavior that rules cannot capture.

4. No feedback loop

When a rule-based action fails (the customer ignores the offer, or worse, churns after receiving it), the rules do not update themselves. Someone has to analyze the failure, hypothesize what went wrong, and manually adjust the rules. This cycle takes weeks to months. ML models learn from every interaction outcome automatically.

What ML-based NBA actually decides

An ML-based NBA system makes four simultaneous predictions for each customer, each drawing on different parts of the relational data.

What to offer

The model predicts acceptance probability for every possible offer given the customer's current state. This is not collaborative filtering ("customers who bought X also bought Y"). It is a prediction conditioned on the full relational context: the customer's product holdings, usage patterns, support history, tenure, and the outcomes of similar offers to similar customers in similar states.

When to offer it

Timing models predict the optimal window for each action. A retention offer sent too early feels premature. The same offer sent too late arrives after the customer has already decided to leave. The model learns timing patterns from historical data: which engagement signals precede successful interventions, and how far in advance the signal appears.
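A crude bucketed version of this idea can be sketched as follows, assuming a toy history of (segment, send-time, outcome) records; a production timing model is learned, not tabulated, but the principle is the same: the best window differs by customer group.

```python
from collections import defaultdict

# Hypothetical history: (segment, days_before_renewal_when_offered, accepted)
history = [
    ("heavy_user", 45, True), ("heavy_user", 45, True), ("heavy_user", 15, False),
    ("light_user", 45, False), ("light_user", 15, True), ("light_user", 15, True),
]

def best_offer_window(history):
    """Accept rate per (segment, timing) bucket; pick the best timing per segment."""
    stats = defaultdict(lambda: [0, 0])  # (segment, days) -> [accepts, sends]
    for segment, days, accepted in history:
        stats[(segment, days)][0] += int(accepted)
        stats[(segment, days)][1] += 1
    best = {}
    for (segment, days), (acc, n) in stats.items():
        rate = acc / n
        if segment not in best or rate > best[segment][1]:
            best[segment] = (days, rate)
    return {seg: days for seg, (days, _) in best.items()}

print(best_offer_window(history))  # {'heavy_user': 45, 'light_user': 15}
```

A fixed 30-day schedule would mistime the offer for both segments in this toy data.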

Through which channel

Channel preference is not a static attribute. It varies by action type, time of day, and customer state. A customer who responds to email for informational content but prefers push notifications for time-sensitive offers has a context-dependent channel preference. The model learns these patterns from the interaction history across all channels.

Whether to act at all

Sometimes the best action is no action. A customer who is happily engaged and on a good trajectory should be left alone. Over-contact is a real risk: McKinsey found that 70% of customers say they get annoyed by irrelevant outreach. The model includes "do nothing" as an action and selects it when the predicted value of all other actions is below a threshold.
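Put together, action selection reduces to an expected-value argmax with "do nothing" as the fallback. The probabilities, values, contact cost, and threshold below are illustrative assumptions, not figures from the article:

```python
# Hypothetical per-action predictions for one customer: acceptance probability
# and the value of an acceptance.
predictions = {
    "renewal_offer":  {"p_accept": 0.04, "value": 1200.0},
    "cross_sell":     {"p_accept": 0.01, "value": 300.0},
    "retention_call": {"p_accept": 0.30, "value": 2100.0},
}
CONTACT_COST = 25.0   # assumed cost of one more touch (fatigue, unsubscribe risk)
ACT_THRESHOLD = 0.0   # below this expected value, "do nothing" wins

def next_best_action(predictions):
    """Score each action as expected value minus contact cost; allow no action."""
    scored = {a: p["p_accept"] * p["value"] - CONTACT_COST
              for a, p in predictions.items()}
    best_action = max(scored, key=scored.get)
    if scored[best_action] <= ACT_THRESHOLD:
        return "do_nothing", 0.0
    return best_action, scored[best_action]

print(next_best_action(predictions))  # ('retention_call', 605.0)
```

Raise `CONTACT_COST` or `ACT_THRESHOLD` and the low-probability cross-sell drops out first, which is the over-contact guard the paragraph above describes.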

Rules-based NBA

  • 50-200 human-written rules with fixed conditions
  • Timing based on fixed schedules (30 days before renewal)
  • Channel preference as a static customer attribute
  • No feedback loop; manual rule updates quarterly
  • Cannot weigh more than 5-10 signals per decision

ML-based NBA on relational data

  • Evaluates all possible actions against full customer state
  • Predicts optimal timing per customer based on engagement patterns
  • Learns context-dependent channel preferences from interaction data
  • Continuous learning from every interaction outcome
  • Weighs thousands of signals across all CRM tables simultaneously

Why relational data matters for NBA

Most ML-based NBA systems still operate on flat customer profiles with engineered features: total spend, days since last purchase, number of products held, support ticket count. This is better than rules but still loses the relational signal.

Consider what the relational data reveals that a flat profile cannot.

Action-outcome sequences

The model sees the full history of actions taken and outcomes observed: which offers were presented, through which channels, at what times, and what the customer did next. This is a temporal graph connecting customers to actions to outcomes. The model learns that sending a cross-sell offer within 48 hours of a support resolution has a 3x higher acceptance rate than sending the same offer during a support wait. Flat models aggregate these into "number of offers accepted" and lose the sequence.

action_history: Customer C-4401

| date | action_taken | channel | customer_state_at_time | outcome |
| --- | --- | --- | --- | --- |
| Feb 1 | Upsell offer: Premium Plan | Email | NPS 8, no open tickets | Accepted |
| Feb 15 | Feature webinar invite | Email | NPS 8, using new features | Attended |
| Mar 1 | Billing error reported by customer | Support | — | Ticket opened |
| Mar 2 | Upsell offer: Add-on module | Email | NPS 6, open billing ticket | Ignored |
| Mar 4 | Renewal reminder | Email | NPS 6, open billing ticket | Unsubscribed from emails |

Note the reversal: the same customer who accepted an upsell in February ignored one in March and then unsubscribed. The difference: an unresolved billing ticket. The rules engine cannot condition actions on support state because it does not join the marketing and support tables.

flat_customer_profile (what the rules engine sees)

| customer_id | product | contract_days_left | offers_accepted | offers_sent | accept_rate |
| --- | --- | --- | --- | --- | --- |
| C-4401 | Pro Plan | 45 | 2 | 5 | 40% |
| C-4402 | Enterprise | 380 | 1 | 3 | 33% |

The flat profile shows C-4401 has a 40% accept rate. It does not show that the last two offers were sent during an open support ticket and both failed. The trajectory flipped from receptive to hostile, but the aggregate hides it.
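The point is easy to demonstrate. Using illustrative numbers consistent with C-4401's profile above (five offers, two accepted), conditioning each send on support state at the time flips the picture:

```python
# C-4401's offer history, illustrative: (had_open_ticket_at_send, accepted)
offers = [(False, True), (False, True), (False, False), (True, False), (True, False)]

def accept_rate(rows):
    return sum(accepted for _, accepted in rows) / len(rows)

# Flat profile: one aggregate number, sequence lost.
print(f"flat accept rate: {accept_rate(offers):.0%}")      # 40%

# Relational view: split by support state at send time.
no_ticket = [r for r in offers if not r[0]]
open_ticket = [r for r in offers if r[0]]
print(f"no open ticket:   {accept_rate(no_ticket):.0%}")   # 67%
print(f"open ticket:      {accept_rate(open_ticket):.0%}") # 0%
```

The 40% aggregate averages a receptive customer with a hostile one; the conditional rates recover the trajectory.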

Peer influence

Customers connected through the same account, household, or referral network influence each other's responses. When one member of a household upgrades, the probability that other members upgrade increases by 40-60%. The relational model sees these connections and factors peer state into the action decision.

household_network (account: Acme Corp)

| user_id | role | plan | action_last_30d | outcome |
| --- | --- | --- | --- | --- |
| U-101 | Admin | Enterprise | Upgraded to Enterprise+ | Accepted |
| U-102 | Developer | Pro | No action taken | — |
| U-103 | Developer | Pro | No action taken | — |
| U-104 | Analyst | Basic | No action taken | — |

U-101 (Admin) just upgraded. Historical data shows that when an admin at a multi-user account upgrades, team members have a 47% probability of upgrading within 30 days. The model surfaces U-102, U-103, and U-104 as expansion targets. The rules engine sees four independent users.
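The peer-influence signal is, at bottom, a join the rules engine never performs. A minimal sketch, mirroring the roster above with assumed field names:

```python
# Hypothetical account roster, mirroring the household_network table.
users = [
    {"user_id": "U-101", "account": "acme", "role": "Admin",     "upgraded_30d": True},
    {"user_id": "U-102", "account": "acme", "role": "Developer", "upgraded_30d": False},
    {"user_id": "U-103", "account": "acme", "role": "Developer", "upgraded_30d": False},
    {"user_id": "U-104", "account": "acme", "role": "Analyst",   "upgraded_30d": False},
]

def expansion_targets(users):
    """Flag users whose account admin recently upgraded (peer-influence signal)."""
    admin_upgraded = {u["account"] for u in users
                      if u["role"] == "Admin" and u["upgraded_30d"]}
    return [u["user_id"] for u in users
            if u["account"] in admin_upgraded and not u["upgraded_30d"]]

print(expansion_targets(users))  # ['U-102', 'U-103', 'U-104']
```

A per-user rule sees each row in isolation and never computes `admin_upgraded` at all.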

Product interaction effects

Some products complement each other; some substitute. The model learns from the product co-occurrence graph that customers holding Product A and Product B respond differently to an offer for Product C than customers holding only Product A. This interaction effect requires traversing the customer-product-transaction graph and cannot be captured by counting product holdings.
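A toy demonstration with assumed history records: the same offer for Product C performs differently depending on whether Product B is also held, a split that a simple count of product holdings cannot express.

```python
# Hypothetical offer history: (holds_A, holds_B, accepted_offer_for_C)
history = [
    (True, True, True), (True, True, True), (True, True, False),
    (True, False, False), (True, False, False), (True, False, True),
]

def accept_rate(rows):
    return sum(r[2] for r in rows) / len(rows)

a_and_b = [r for r in history if r[0] and r[1]]
a_only  = [r for r in history if r[0] and not r[1]]
print(f"holds A and B: {accept_rate(a_and_b):.0%}")  # 67%
print(f"holds A only:  {accept_rate(a_only):.0%}")   # 33%
```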

Real-world outcomes

DoorDash deployed relational ML for next-best-action decisioning across 30 million users and saw a 1.8% engagement lift. That sounds modest until you calculate the revenue impact: at DoorDash's scale, a 1.8% engagement lift translates to hundreds of millions in annual GMV.

Databricks used the same approach for account expansion decisions and saw a 5.4x conversion lift. The model predicted which accounts were ready for expansion offers, which products to recommend, and when to present the offer, all from the relational account-usage-support graph.

These results are driven by the same dynamic: the relational context carries more signal than any flat profile, and the model explores the full action space rather than the limited set of scenarios that rules cover.

PQL Query

```
PREDICT best_action, best_channel, best_timing
FOR EACH customers.customer_id
WHERE customers.status = 'active'
```

Predict the optimal action, channel, and timing for every active customer. The model evaluates all possible actions against the full customer state: product holdings, support history, usage trends, contract timeline, and peer behavior.

Output

| customer_id | action | channel | timing | expected_value |
| --- | --- | --- | --- | --- |
| C-4401 | Support follow-up | Phone | Today | +$2,100 (retention) |
| C-4402 | Team plan expansion | Email | This week | +$48,000 (expansion) |
| C-4403 | Feature demo + renewal | In-app + Email | Tomorrow | +$8,400 (renewal) |
| C-4404 | Usage check-in | CSM call | Today | +$12,000 (save) |
| C-4405 | Executive retention call | VP-to-VP call | Immediate | +$240,000 (save) |

From pipeline to query

Building an ML-based NBA system traditionally requires 6-12 months: data extraction from multiple systems, feature engineering across CRM and transactional tables, model training for each action type, a serving layer for real-time scoring, and an orchestration layer for action selection and suppression.

KumoRFM collapses this into a predictive query. The foundation model is pre-trained on billions of relational patterns and understands the universal dynamics of customer decisioning: recency effects, action-response patterns, channel preferences, and temporal receptivity. You connect your database, specify the action space, and the model predicts the optimal action for each customer.

The shift is not just faster. It is structurally different. Instead of a team of data scientists building and maintaining separate models for each action type, you have a single foundation model that reasons across the full relational context. New products, new channels, and new customer segments are handled without rebuilding the pipeline.

If your NBA system is powered by rules, your customers are receiving the same 50-200 treatments that every other customer in their segment receives. The value is in the other 3,800 actions that rules never explore. That is where ML on relational data operates.

Frequently asked questions

What is next best action?

Next best action (NBA) is a customer decisioning strategy that determines the optimal interaction with each customer at any given moment. It goes beyond product recommendations by considering the full context: what to offer, when to offer it, through which channel, and whether to offer anything at all. The goal is to maximize customer lifetime value, not just the next transaction.

How do rules-based NBA systems work?

Rules-based NBA systems use decision trees defined by business analysts: if the customer's contract expires in 30 days, send a renewal offer; if they haven't purchased in 60 days, send a win-back campaign; if their balance exceeds $50,000, offer premium services. These rules are static, handle a limited number of scenarios (typically 50-200 rules), and cannot discover new patterns or adapt to changing customer behavior without manual updates.

Why do rules-based NBA systems underperform?

Rules engines handle the obvious scenarios but miss the nuanced ones. They cannot learn that customers who contacted support twice in a week and browsed competitor comparison pages should receive a retention call, not a cross-sell email. They cannot learn that the optimal time to send a renewal offer varies by customer segment and engagement pattern. They process conditions sequentially and cannot weigh thousands of signals simultaneously the way ML models can.

How does relational ML improve next best action?

Relational ML sees the full customer graph: transactions, support interactions, product usage, marketing responses, account relationships, and temporal sequences. It learns patterns like: customers in this segment who received a specific offer within 48 hours of a support interaction churned at 2x the rate. These multi-table, temporal patterns are what determine the truly optimal action, and they are invisible to rules engines and flat-table models alike.

What results do companies see from ML-based NBA?

Companies that move from rules-based to ML-based NBA typically see 15-30% improvement in offer acceptance rates, 20-40% reduction in customer churn, and 10-25% increase in revenue per customer. DoorDash reported a 1.8% engagement lift across 30 million users. The gains come from both better action selection and better timing: knowing not just what to offer but exactly when the customer is most receptive.

See it in action

KumoRFM delivers predictions on relational data in seconds. No feature engineering, no ML pipelines. Try it free.