Kumo vs DataRobot: Relational Foundation Model vs AutoML

DataRobot automates model selection. Kumo automates feature discovery. DataRobot needs a flat feature table as input. Kumo reads raw relational tables directly. This is not a marginal improvement - it eliminates the 80% bottleneck that DataRobot leaves entirely manual.

TL;DR

  • On the SAP SALT enterprise benchmark, KumoRFM scores 91% accuracy vs 75% for PhD data scientists with XGBoost and 63% for LLM+AutoML - with zero feature engineering and zero training time.
  • DataRobot is an AutoML platform that automates model selection and hyperparameter tuning - the last 20% of the ML pipeline. It still requires a manually built flat feature table as input, leaving the 80% feature engineering bottleneck (12.3 hours and 878 lines of code per task) untouched.
  • KumoRFM is a relational foundation model that reads raw relational tables directly, discovering multi-hop predictive patterns across tables without any feature engineering. It automates the full pipeline, not just the modeling step.
  • On RelBench benchmarks, KumoRFM zero-shot achieves 76.71 AUROC vs AutoML + manual features at ~64-66 AUROC. The 10+ point gap comes from features the model discovers in relational structure that a flat table never contains.
  • At scale (20 prediction tasks), the DataRobot approach costs $650K-$900K/year including the data science team needed for feature engineering. The Kumo approach costs $80K-$120K/year - an 85% cost reduction.

DataRobot is one of the most recognized names in enterprise machine learning. Since 2012, it has built a comprehensive AutoML platform used by thousands of organizations to automate model selection, hyperparameter tuning, and deployment. It has real strengths: a polished UI, strong model monitoring, broad ecosystem integrations, and a track record with regulated industries.

But DataRobot is an AutoML platform. And AutoML, by design, solves one specific problem: given a flat feature table, find the best model. It does not solve the problem that consumes 80% of data science time - converting raw relational data into that flat feature table in the first place.

This is not a criticism of DataRobot's engineering. It is a description of AutoML's architecture. Every AutoML tool - DataRobot, H2O, Google AutoML, SageMaker Autopilot - takes a pre-built feature table as input. None of them can read a relational database directly. None of them discover features from table joins and multi-hop relationships. None of them eliminate the feature engineering bottleneck.

Kumo takes a different approach entirely. Instead of automating model selection on a feature table someone else built, KumoRFM reads raw relational tables directly and discovers predictive patterns across the full relational structure. This is the difference between optimizing a step in the pipeline and eliminating the pipeline.

The headline result: SAP SALT benchmark

Before diving into detailed comparisons, here is the result that matters most. The SAP SALT benchmark is an enterprise-grade evaluation where real business analysts and data scientists attempt prediction tasks on SAP enterprise data. It measures how accurately different approaches predict real business outcomes (customer behavior, demand patterns, operational metrics) on production-quality enterprise databases with multiple related tables.

SAP SALT enterprise benchmark

Approach | Accuracy | What it means
LLM + AutoML | 63% | Language model generates features, AutoML selects model
PhD Data Scientist + XGBoost | 75% | Expert spends weeks hand-crafting features, tunes XGBoost
KumoRFM (zero-shot) | 91% | No feature engineering, no training, reads relational tables directly

SAP SALT benchmark: KumoRFM outperforms expert data scientists by 16 percentage points and LLM+AutoML by 28 percentage points. Zero feature engineering. Zero training. The model reads raw enterprise tables and predicts.

This is not a marginal improvement. KumoRFM scores 91% where PhD-level data scientists with weeks of feature engineering and hand-tuned XGBoost score 75%. The 16 percentage point gap is the value of reading relational data natively instead of flattening it into a single table.

Kumo vs DataRobot comparison

Dimension | DataRobot | Kumo (KumoRFM)
Data input | Single flat feature table (CSV, dataframe) | Raw relational tables connected by foreign keys
Feature engineering | Manual - user must build all features before upload | Automatic - model discovers features from relational structure
Multi-table support | None - requires pre-joined flat table | Native - reads multiple tables and discovers cross-table patterns
Time to first prediction | Weeks (feature engineering) + hours (AutoML training) | ~1 second (zero-shot) to minutes (fine-tuned)
Accuracy on relational data | ~64-66 AUROC (limited by manual features) | 76.71 AUROC zero-shot, 81.14 fine-tuned
Explainability | Feature importance on provided features | Feature importance across discovered relational patterns
Snowflake integration | Import/export via connectors | Native Snowflake-based processing, no data movement
Pricing model | Per-seat + compute licensing | Per-prediction-task, no per-seat fees
Pipeline maintenance | Feature pipelines + model retraining + monitoring | No feature pipelines to maintain
Best for | Single-table problems, mature feature stores, Kaggle-style tasks | Multi-table relational data, fast iteration, teams without dedicated DS

Head-to-head comparison across 10 dimensions. The key difference is not model quality - DataRobot builds strong models on the data it receives. The difference is what data it receives.

What DataRobot does well

DataRobot has earned its market position. A fair comparison requires acknowledging where the platform genuinely excels.

  • Model selection and tuning. DataRobot's core competency is testing dozens of algorithms (XGBoost, LightGBM, neural networks, SVMs, ensembles) and automatically selecting the best performer with optimized hyperparameters. On a clean, well-engineered feature table, it consistently outperforms manual model selection by 2-4 AUROC points.
  • Model monitoring and governance. DataRobot offers mature MLOps capabilities: drift detection, accuracy tracking, challenger models, compliance documentation, and audit trails. For regulated industries, these are valuable features.
  • Ecosystem breadth. DataRobot integrates with most enterprise data platforms, BI tools, and deployment targets. It supports a wide range of problem types (classification, regression, time series, image, text) within the AutoML framework.
  • Accessibility. DataRobot's visual interface makes model building accessible to analysts and citizen data scientists who may not write code. For organizations where the bottleneck is modeling skill rather than data preparation, this is useful.

What DataRobot requires you to do manually

DataRobot's input is a flat feature table. Everything that happens before that table exists is your responsibility. For enterprise data that lives in relational databases, this is the majority of the work.

  • Table joins. Your customer data spans customers, orders, products, interactions, support tickets, and payment tables. Someone writes the SQL to join them. For 5 tables with temporal constraints, this is easily 100+ lines of SQL.
  • Aggregation computation. DataRobot cannot compute avg_order_value_last_90d, support_tickets_last_30d, or product_return_rate_by_category. Each aggregation must be pre-computed and added as a column to the flat table.
  • Temporal feature engineering. Trends, seasonality, and behavioral sequences (purchase frequency accelerating, engagement declining over 6 weeks) must be manually encoded as numeric features. DataRobot sees a static snapshot, not a temporal sequence.
  • Multi-hop pattern encoding. If a customer's churn risk depends on the satisfaction scores of other customers who bought the same products, that three-hop relationship (customer → orders → products → other customers' reviews) must be manually computed and flattened into a single column.
  • Feature iteration. When the first model underperforms, the data scientist goes back and engineers more features. This iteration loop - build features, train model, evaluate, build more features - averages 3-4 cycles per task.
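The first two bullets - the joins and the pre-computed aggregation columns - can be sketched in miniature with Python's stdlib sqlite3. The schema and column names here (customers, orders, avg_order_value_last_90d) are illustrative, and a fixed cutoff date stands in for a real rolling 90-day window:

```python
import sqlite3

# Toy version of the manual flat-table prep DataRobot needs: join raw tables
# and pre-compute each aggregation as its own column. Schema is illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (customer_id TEXT, segment TEXT);
    CREATE TABLE orders (customer_id TEXT, order_value REAL, order_date TEXT);
    INSERT INTO customers VALUES ('C-1', 'enterprise'), ('C-2', 'smb');
    INSERT INTO orders VALUES
        ('C-1', 120.0, '2024-05-01'),
        ('C-1',  80.0, '2024-05-20'),
        ('C-2',  30.0, '2024-02-01');
""")

# One engineered column (average order value inside a fixed window standing in
# for "last 90 days"); a real pipeline repeats this for dozens of features
# across 5+ tables, which is where the 100+ lines of SQL come from.
flat = con.execute("""
    SELECT c.customer_id,
           c.segment,
           AVG(CASE WHEN o.order_date >= '2024-04-01' THEN o.order_value END)
               AS avg_order_value_last_90d
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.customer_id
    GROUP BY c.customer_id, c.segment
    ORDER BY c.customer_id
""").fetchall()

for row in flat:
    print(row)  # one flat row per customer, ready for CSV upload
```

This is precisely the step KumoRFM skips: it receives the customers and orders tables as-is and discovers the aggregation windows itself.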

What the flat table misses vs. the relational model (lead scoring example)

Signal | Visible in flat table (DataRobot) | Visible in relational model (Kumo)
Total emails opened | Yes - single column: emails_opened = 7 | Yes - plus sequence, recency, and response time patterns
Content progression | No - only total page views | Yes - Blog > Case study > API docs > Pricing (buying signal)
Multi-threaded engagement | No - aggregated to one row | Yes - 4 contacts from 3 departments active on this account
Similar account outcomes | No - no cross-entity joins | Yes - accounts with similar profile closed at 73% win rate
Firmographic momentum | No - static company size only | Yes - company raised Series B 30 days ago, hiring 12 engineers
Product engagement depth | No - boolean feature_used = true | Yes - tried 3 integrations, API call volume increased 4x this week

A concrete lead scoring example. The flat table DataRobot receives captures simple counts. The relational model captures the behavioral patterns, sequences, and cross-entity signals that actually predict conversion.

DataRobot workflow

  • Data scientist writes SQL to join 5+ tables (2-4 hours)
  • Data scientist computes aggregations and temporal features (4-6 hours)
  • Data scientist iterates on features 3-4 times (4-6 hours)
  • Upload flat table to DataRobot
  • DataRobot runs AutoML: tests 20+ models, tunes hyperparameters (1-2 hours)
  • Deploy best model, maintain feature pipeline ongoing

Kumo workflow

  • Connect Kumo to your data warehouse (one-time setup)
  • Write a PQL query defining what you want to predict
  • KumoRFM reads raw tables, discovers features, returns predictions
  • Zero feature engineering, zero model selection, zero pipeline code
  • Time to first prediction: ~1 second (zero-shot)
  • No feature pipeline to maintain

Benchmark results: RelBench

The RelBench benchmark provides an apples-to-apples comparison across 7 databases, 30 prediction tasks, and 103 million rows. These are real relational datasets - not pre-flattened Kaggle tables - which is why the gap between approaches is so stark.

AUROC (Area Under the Receiver Operating Characteristic curve) measures how well a model distinguishes between positive and negative outcomes. An AUROC of 50 means random guessing. An AUROC of 100 means perfect prediction. In practice, moving from 65 to 77 AUROC is a significant improvement - it means the model correctly ranks a true positive above a true negative 77% of the time instead of 65%. For fraud detection, that difference can mean catching 40% more fraud with the same false positive rate. For churn prediction, it means identifying at-risk customers weeks earlier.
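The pairwise-ranking interpretation of AUROC can be made concrete in a few lines. This toy example uses synthetic scores (not benchmark data) and a hypothetical auroc helper implementing the standard definition: the fraction of positive/negative pairs the model ranks correctly, counting ties as half.

```python
def auroc(pos_scores, neg_scores):
    """AUROC as the probability that a random positive outranks a random negative."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Two positives, two negatives: 3 of the 4 pos/neg pairs are ranked correctly.
positives = [0.9, 0.6]  # model scores for true positives
negatives = [0.7, 0.3]  # model scores for true negatives
print(auroc(positives, negatives))  # 0.75
```

A model at 0.77 simply wins more of these pairwise comparisons than a model at 0.65 - which is what the RelBench numbers below are measuring.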

RelBench benchmark results

Approach | AUROC | Feature engineering time | Lines of code | What is automated
LightGBM + manual features | 62.44 | 12.3 hours per task | 878 | Nothing - fully manual pipeline
AutoML (DataRobot-class) + manual features | ~64-66 | 10.5 hours per task | 878 | Model selection and tuning only
KumoRFM zero-shot | 76.71 | ~1 second | 0 | Feature discovery + model + inference
KumoRFM fine-tuned | 81.14 | Minutes | 0 | Full pipeline + task-specific adaptation

KumoRFM zero-shot outperforms the AutoML approach by 10+ AUROC points with zero feature engineering. The gap is not about model quality - it is about the features the model discovers in the raw relational structure.

The 2-4 point improvement from LightGBM to AutoML reflects the value of better model selection. The 10+ point improvement from AutoML to KumoRFM reflects the value of better features - features that exist in the relational structure but never make it into the flat table. DataRobot cannot close this gap by building a better model, because the signals are not in the data it receives.

PQL Query

PREDICT churn_90d
FOR EACH customers.customer_id
WHERE customers.segment = 'enterprise'

One PQL query replaces the entire DataRobot pipeline: the SQL joins, the feature engineering code, the feature iteration cycles, and the AutoML model selection. KumoRFM reads the raw customers, orders, products, support_tickets, and payments tables directly.

Output

Customer ID | Churn prob (Kumo) | Churn prob (AutoML) | Delta
C-4401 | 0.87 | 0.72 | +15 points (Kumo detects declining multi-product engagement)
C-4402 | 0.12 | 0.31 | Kumo correctly lower (stable cross-department usage)
C-4403 | 0.93 | 0.58 | +35 points (Kumo sees support escalation + similar account churn pattern)
C-4404 | 0.08 | 0.11 | Both correctly low (healthy account)
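PQL can also express temporal aggregation targets directly - for example, predicting whether a customer will place any order in the next 90 days. The query below is a sketch based on Kumo's published query-language conventions; treat the exact aggregation syntax as an assumption rather than quoted documentation.

```
PREDICT COUNT(orders.*, 0, 90, days) > 0
FOR EACH customers.customer_id
```

As with the churn query, there is no feature table behind this: the model reads the raw orders and customers tables at prediction time.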

The cost comparison at scale

The accuracy gap matters. But for most enterprises, the cost gap is what changes the decision. Despite broad evaluation of AutoML tools across enterprises, adoption as a primary ML workflow remains limited. Industry analysts consistently report that a large share of ML models never reach production. The reason is not model quality - it is the cost and complexity of the feature engineering pipeline that AutoML still demands.

Total cost of ownership (20 prediction tasks, annual)

Cost dimension | DataRobot approach | Kumo approach | Savings
Feature engineering labor | 246 hours ($61,500) | 0 hours | $61,500
DataRobot / Kumo platform license | $150K-$250K | $80K-$120K | $70K-$130K
Data science team (feature pipelines) | 3-4 FTEs ($450K-$600K) | 0.5 FTE ($75K) | $375K-$525K
Pipeline maintenance (annual) | 520 hours ($130K) | 20 hours ($5K) | $125K
Time to new prediction task | 2-4 weeks | Minutes | 99%+ reduction
Total annual cost | $650K-$900K | $80K-$120K | ~85% savings

The 85% cost savings come almost entirely from eliminating feature engineering labor and pipeline maintenance - work that DataRobot's AutoML does not automate.
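The labor line items follow from simple per-task arithmetic. A quick sketch - the $250/hour fully loaded rate is an assumption inferred from the figures themselves ($61,500 / 246 hours), not a number stated in the source:

```python
# Back-of-envelope arithmetic behind the labor rows of the TCO comparison.
TASKS = 20
FE_HOURS_PER_TASK = 12.3       # feature engineering hours per task (RelBench figure)
MAINT_HOURS_PER_YEAR = 520     # annual pipeline maintenance hours
HOURLY_RATE = 250              # assumed fully loaded rate, $/hour (inferred)

fe_hours = round(TASKS * FE_HOURS_PER_TASK, 1)       # 246.0 hours
fe_cost = round(fe_hours * HOURLY_RATE)              # 61500 -> the $61,500 labor row
maint_cost = MAINT_HOURS_PER_YEAR * HOURLY_RATE      # 130000 -> the $130K maintenance row
print(fe_hours, fe_cost, maint_cost)
```

The remaining savings come from the headcount delta (3-4 FTEs vs 0.5 FTE), which dominates the total.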

When to choose DataRobot

DataRobot is a strong platform in specific scenarios. Choose DataRobot when:

  • Your data is already in a single flat table. If you have a well-curated CSV or dataframe with all the features you need, DataRobot's AutoML will find the best model efficiently. No multi-table joins or feature engineering required.
  • You have a mature feature store. If your organization has invested years in building and maintaining a comprehensive feature store, DataRobot can use those features effectively. The feature engineering cost is already paid.
  • You need broad problem-type coverage. DataRobot supports image classification, NLP, time series forecasting, and other problem types beyond tabular prediction. If your use cases span these modalities, DataRobot's breadth is valuable.
  • Kaggle-style benchmarking. For single-table competitions or internal model bake-offs where the feature table is provided, DataRobot is an efficient way to find the best model quickly.

When to choose Kumo

Kumo solves a different problem than DataRobot. Choose Kumo when:

  • Your data lives in multiple relational tables. Customers, orders, products, interactions, support tickets - if your predictive signals span table boundaries, Kumo discovers them automatically. DataRobot requires you to flatten them first.
  • You do not have a large data science team. If you cannot dedicate 3-4 FTEs to feature engineering and pipeline maintenance, Kumo eliminates that requirement entirely. A single ML engineer or analyst can operate the platform.
  • Speed to production matters. KumoRFM delivers predictions in approximately 1 second (zero-shot) versus weeks for the DataRobot pipeline. When business conditions change quickly, the ability to stand up a new prediction task in minutes is a competitive advantage.
  • You need maximum accuracy on relational data. The 10+ AUROC point gap between AutoML and KumoRFM on relational benchmarks translates directly to business outcomes: more fraud caught, fewer false positives, better-targeted campaigns, lower churn.
  • You want to scale prediction tasks. Going from 1 to 20 prediction tasks with DataRobot means 20 separate feature engineering pipelines. With Kumo, it means 20 PQL queries against the same connected data - marginal cost near zero.

KumoRFM was built by the team behind the ML systems at Pinterest, Airbnb, and LinkedIn: Vanja Josifovski (CEO, former CTO at Airbnb and Pinterest), Jure Leskovec (Chief Scientist, Stanford professor, co-creator of GraphSAGE), and Hema Raghavan (Head of Engineering, former Sr. Director at LinkedIn). Backed by Sequoia Capital.

Frequently asked questions

What is the main difference between Kumo and DataRobot?

DataRobot is an AutoML platform that automates model selection and hyperparameter tuning on a pre-built flat feature table. Kumo uses a relational foundation model (KumoRFM) that reads raw relational tables directly, discovering predictive patterns across multiple tables without any manual feature engineering. DataRobot automates the last 20% of the ML pipeline. Kumo automates the full pipeline including the 80% that DataRobot leaves manual.

Does DataRobot handle multi-table relational data?

DataRobot requires a single flat feature table as input. If your data lives in multiple relational tables (customers, orders, products, interactions), someone must write the SQL joins, compute aggregations, and flatten everything into one row per entity before DataRobot can use it. DataRobot cannot discover multi-hop patterns across tables or preserve temporal sequences from raw relational data.

Is DataRobot accurate on relational data?

DataRobot is effective at selecting and tuning models on the features it receives. On single-table or well-engineered flat datasets, it performs well. However, on relational data benchmarks like RelBench, AutoML approaches with manually engineered features score approximately 64-66 AUROC, while KumoRFM zero-shot achieves 76.71 AUROC. The gap is not about model selection quality but about the features the model never sees.

When should I choose DataRobot over Kumo?

DataRobot is a strong choice when your data is already in a single flat table, when you have a mature feature store with curated features, or when you need broad AutoML ecosystem integrations. It excels at Kaggle-style classification tasks on pre-engineered data and has a well-developed model monitoring and deployment infrastructure.

How much does DataRobot cost compared to Kumo for relational prediction tasks?

For an organization running 20 prediction tasks on relational data, the AutoML approach (including DataRobot licensing plus the data science team required for feature engineering) costs approximately $650K-$900K per year. A foundation model approach with Kumo costs $80K-$120K per year, representing roughly 85% cost savings. The difference comes almost entirely from eliminating the manual feature engineering that DataRobot still requires.

Can I migrate from DataRobot to Kumo?

Yes. Because Kumo reads raw relational tables directly, migration does not require rebuilding feature pipelines. You connect Kumo to your data warehouse (Snowflake, BigQuery, Databricks), define your prediction tasks in PQL (Predictive Query Language), and get predictions immediately. The feature engineering code you maintained for DataRobot becomes unnecessary. Many organizations see their first predictions within hours of connecting their data.

See it in action

KumoRFM delivers predictions on relational data in seconds. No feature engineering, no ML pipelines. Try it free.