Can a Small Language Model Predict Kernel Latency, Memory, and Model Accuracy from Code? A New Regression Language Model (RLM) Says Yes


Researchers from Cornell and Google introduce a unified Regression Language Model (RLM) that predicts numeric outcomes directly from code strings (GPU kernel latency, program memory usage, even neural network accuracy and latency) without hand-engineered features. A 300M-parameter encoder–decoder initialized from T5Gemma achieves strong rank correlations across heterogeneous tasks and languages, using a single text-to-number decoder that emits digits with constrained decoding.

What exactly is new?

  • Unified code-to-metric regression: One RLM predicts (i) peak memory from high-level code (Python/C/C++ and more), (ii) latency for Triton GPU kernels, and (iii) accuracy and hardware-specific latency from ONNX graphs, by reading raw text representations and decoding numeric outputs. No feature engineering, graph encoders, or zero-cost proxies are required.
  • Concrete results: Reported correlations include Spearman ρ ≈ 0.93 on APPS LeetCode memory, ρ ≈ 0.52 for Triton kernel latency, ρ > 0.5 average across 17 CodeNet languages, and Kendall τ ≈ 0.46 across five classic NAS search spaces, competitive with and in some cases surpassing graph-based predictors.
  • Multi-objective decoding: Because the decoder is autoregressive, the model conditions later metrics on earlier ones (e.g., accuracy → per-device latencies), capturing practical trade-offs along Pareto fronts; a toy illustration follows below.
https://arxiv.org/abs/2509.26476
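
To make the multi-objective point concrete, here is a minimal sketch (plain Python; the (accuracy, latency) tuples are invented stand-ins for samples drawn from the RLM's numeric decoder) of filtering jointly decoded metrics down to a Pareto front:

```python
# Minimal sketch: filtering jointly sampled (accuracy, latency) pairs to a
# Pareto front. The candidate tuples below are made up; in practice they
# would come from sampling the RLM's autoregressive numeric decoder.

def pareto_front(points):
    """Keep points not dominated by another (higher accuracy, lower latency)."""
    front = []
    for acc, lat in points:
        dominated = any(
            a >= acc and l <= lat and (a, l) != (acc, lat)
            for a, l in points
        )
        if not dominated:
            front.append((acc, lat))
    return sorted(front)

# Hypothetical sampled predictions for candidate architectures: (accuracy, ms).
samples = [(0.71, 12.0), (0.74, 15.5), (0.69, 9.8), (0.74, 18.2), (0.76, 21.0)]
print(pareto_front(samples))
# -> [(0.69, 9.8), (0.71, 12.0), (0.74, 15.5), (0.76, 21.0)]
```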

Why is this important?

Performance prediction pipelines in compilers, GPU kernel selection, and NAS often rely on bespoke features, syntax trees, or GNN encoders that are brittle to new ops and languages. Treating regression as next-token prediction over numbers standardizes the stack: tokenize inputs as plain text (source code, Triton IR, ONNX), then decode calibrated numeric strings digit by digit with constrained sampling. This reduces maintenance cost and improves transfer to new tasks via fine-tuning.
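
As a loose illustration of the constrained-sampling idea (a toy sketch, not the paper's actual tokenizer; the real scheme emits sign/exponent/mantissa tokens, while this demo uses a simpler decimal grammar), the decoder masks the vocabulary at each step so only tokens that keep the partial output a valid number survive:

```python
# Toy sketch of constrained numeric decoding: greedy decoding where, at each
# step, only tokens allowed by a simple number grammar (sign, digits, at most
# one decimal point) are considered. The "model" is a stub returning random
# scores; a real RLM would supply logits over its vocabulary.
import random

VOCAB = ["+", "-", ".", "<eos>"] + [str(d) for d in range(10)]

def allowed(prefix):
    """Tokens that keep the partial string a valid signed decimal."""
    if not prefix:                      # must start with a sign
        return {"+", "-"}
    ok = {str(d) for d in range(10)}
    if any(ch.isdigit() for ch in prefix):
        ok.add("<eos>")                 # may stop once we have a digit
        if "." not in prefix:
            ok.add(".")                 # at most one decimal point
    return ok

def fake_logits(prefix):
    """Stub standing in for the RLM decoder's next-token scores."""
    return {tok: random.random() for tok in VOCAB}

def decode(max_len=8):
    prefix = []
    while len(prefix) < max_len:
        scores = fake_logits(prefix)
        tok = max(allowed(prefix), key=lambda t: scores[t])  # greedy over mask
        if tok == "<eos>":
            break
        prefix.append(tok)
    return "".join(prefix)

random.seed(0)
print(decode())  # always a well-formed signed decimal string
```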

Data and benchmarks

  • Code-Regression dataset (HF): Curated to support code-to-metric tasks spanning APPS/LeetCode runs, Triton kernel latencies (KernelBook-derived), and CodeNet memory footprints.
  • NAS/ONNX suite: Architectures from NASBench-101/201, FBNet, Once-for-All (MB/PN/RN), TwoPath, HiAML, Inception, and NDS are exported to ONNX text to predict accuracy and device-specific latency.
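
If you want to poke at the data yourself, something like the following should work with the Hugging Face datasets library (the repository ID and split name below are placeholders; check the dataset card for the real ones):

```python
# Hypothetical loading sketch for the Code-Regression dataset via Hugging Face
# `datasets`. The repo ID is a placeholder -- substitute the actual ID from
# the dataset card linked in the post.
from datasets import load_dataset

ds = load_dataset("your-org/code-regression")  # placeholder repo ID
print(ds)                       # available splits and sizes
example = ds["train"][0]        # assumes a "train" split exists
print(example.keys())           # e.g. a source-code string and numeric target
```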

How does it work?

  • Backbone: Encoder–decoder with a T5Gemma encoder initialization (~300M params). Inputs are raw strings (code or ONNX). Outputs are numbers emitted as sign/exponent/mantissa digit tokens; constrained decoding enforces valid numerals and supports uncertainty via sampling (a rough sketch of this numeric tokenization follows this list).
  • Ablations: (i) language pretraining accelerates convergence and improves Triton latency prediction; (ii) emitting numbers via the decoder outperforms MSE regression heads even with y-normalization; (iii) learned tokenizers specialized for ONNX operators increase effective context; (iv) longer contexts help; (v) scaling to a larger Gemma encoder further improves correlation with sufficient tuning.
  • Training code: The regress-lm library provides text-to-text regression utilities, constrained decoding, and multi-task pretraining/fine-tuning recipes.
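
To give a feel for the sign/exponent/mantissa representation mentioned in the backbone bullet, here is my own rough reconstruction (the paper's actual scheme may differ in base, digit count, and special tokens) of how a float could round-trip through such tokens:

```python
# Rough reconstruction of a sign/exponent/mantissa digit tokenization for
# floats. Token layout and precision are assumptions for illustration only.
import math

MANTISSA_DIGITS = 4

def float_to_tokens(y):
    sign = "<+>" if y >= 0 else "<->"
    y = abs(y)
    exp = 0 if y == 0 else math.floor(math.log10(y)) + 1  # y = m * 10^exp, m in [0.1, 1)
    mant = 0 if y == 0 else y / 10 ** exp
    digits = f"{mant:.{MANTISSA_DIGITS}f}"[2:]            # digits after "0."
    # (rounding edge cases where mant rounds up to 1.0 are ignored here)
    return [sign, f"<E{exp}>"] + [f"<{d}>" for d in digits]

def tokens_to_float(tokens):
    sign = 1.0 if tokens[0] == "<+>" else -1.0
    exp = int(tokens[1][2:-1])
    digits = "".join(t[1] for t in tokens[2:])
    return sign * int(digits) / 10 ** len(digits) * 10 ** exp

tokens = float_to_tokens(0.5234)   # e.g. a Spearman correlation
print(tokens)                      # ['<+>', '<E0>', '<5>', '<2>', '<3>', '<4>']
print(tokens_to_float(tokens))     # 0.5234
```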

Stats that matter

  • APPS (Python) memory: Spearman ρ > 0.9.
  • CodeNet (17 languages) memory: average ρ > 0.5; strongest languages include C/C++ (~0.74–0.75).
  • Triton kernels (A6000) latency: ρ ≈ 0.52.
  • NAS ranking: average Kendall τ ≈ 0.46 across NASNet, Amoeba, PNAS, ENAS, and DARTS; competitive with FLAN and GNN baselines.
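
Since every headline number above is a rank correlation, evaluating a predictor like this is straightforward; a minimal sketch with SciPy, using toy arrays in place of real measurements and predictions:

```python
# Minimal evaluation sketch: the headline metrics are rank correlations,
# computed here with SciPy on toy predicted/true values.
from scipy.stats import spearmanr, kendalltau

true_latency = [1.2, 3.4, 2.2, 5.1, 4.0]   # toy measured values
pred_latency = [1.0, 3.9, 2.5, 4.8, 4.1]   # toy model predictions

rho, _ = spearmanr(true_latency, pred_latency)
tau, _ = kendalltau(true_latency, pred_latency)
print(f"Spearman rho = {rho:.2f}, Kendall tau = {tau:.2f}")
# Both are 1.00 here: the toy predictions rank all five items correctly.
```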

Key Takeaways

  1. Unified code-to-metric regression works. A single ~300M-parameter T5Gemma-initialized model ("RLM") predicts (a) memory from high-level code, (b) Triton GPU kernel latency, and (c) model accuracy + device latency from ONNX, directly from text with no hand-engineered features.
  2. The evaluation shows Spearman ρ > 0.9 on APPS memory, ≈ 0.52 on Triton latency, > 0.5 average across 17 CodeNet languages, and Kendall τ ≈ 0.46 on five NAS spaces.
  3. Numbers are decoded as text with constraints. Instead of a regression head, the RLM emits numeric tokens under constrained decoding, enabling multi-metric, autoregressive outputs (e.g., accuracy followed by multi-device latencies) and uncertainty via sampling.
  4. The Code-Regression dataset unifies APPS/LeetCode memory, Triton kernel latency, and CodeNet memory; the regress-lm library provides the training/decoding stack.

Our Comments

It is very interesting how this work reframes performance prediction as text-to-number generation: a compact T5Gemma-initialized RLM reads source code (Python/C++), Triton kernels, or ONNX graphs and emits calibrated numerics via constrained decoding. The reported correlations (APPS memory ρ > 0.9, Triton latency on an RTX A6000 ≈ 0.52, NAS Kendall τ ≈ 0.46) are strong enough to matter for compiler heuristics, kernel pruning, and multi-objective NAS triage without bespoke features or GNNs. The open dataset and library make replication easy and lower the barrier to fine-tuning on new hardware or languages.


Check out the Paper, GitHub Page and Dataset Card. Feel free to check out our GitHub Page for Tutorials, Codes and Notebooks. Also, feel free to follow us on Twitter and don't forget to join our 100k+ ML SubReddit and Subscribe to our Newsletter. Are you on Telegram? Now you can join us on Telegram as well.

