Learning Parametric Distributions from Samples and Preferences

Abstract

Recent advances in language modeling have underscored the role of preference feedback in enhancing model performance. This paper investigates the conditions under which preference feedback improves parameter estimation in classes of continuous parametric distributions. In our framework, the learner observes pairs of samples from an unknown distribution together with their relative preferences, which depend on the same unknown parameter. We show that preference-based M-estimators achieve a better asymptotic variance than sample-only M-estimators, and that deterministic preferences improve it further. Leveraging the hard constraints revealed by deterministic preferences, we propose an estimator whose estimation error scales as $O(1/n)$, a significant improvement over the $\Theta(1/\sqrt{n})$ rate attainable from samples alone. We then establish a lower bound matching this accelerated rate, up to dimension- and problem-dependent constants. While the assumptions underpinning our analysis are restrictive, they are satisfied by notable cases such as Gaussian and Laplace distributions with preferences based on the log-probability reward.
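
To make the deterministic-preference mechanism concrete, below is a minimal Python sketch for the Gaussian location model, one of the cases the abstract names. Under the log-probability reward, the sample closer to the true mean $\theta$ is deterministically preferred, so each pair $(x, x')$ reveals on which side of the midpoint $(x + x')/2$ the parameter lies; intersecting these half-line constraints yields a feasible interval whose width shrinks at rate $O(1/n)$, versus $\Theta(1/\sqrt{n})$ for the empirical mean. The constraint-intersection estimator here is an illustrative reconstruction under these assumptions, not necessarily the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta=1.3, sigma=1.0, n=10_000):
    # n preference pairs: two i.i.d. Gaussian samples per pair.
    x = rng.normal(theta, sigma, size=n)
    xp = rng.normal(theta, sigma, size=n)

    # Deterministic preference under the log-probability reward:
    # the sample with higher log-density, i.e. closer to theta, wins.
    prefer_x = np.abs(x - theta) < np.abs(xp - theta)

    # Each pair constrains theta relative to the midpoint (x + x')/2:
    # if the larger sample is preferred, theta > midpoint; otherwise
    # theta < midpoint. The estimator uses only observable quantities.
    mid = (x + xp) / 2
    winner = np.where(prefer_x, x, xp)
    loser = np.where(prefer_x, xp, x)
    is_lower = winner > loser  # preferred sample is the larger one
    lower = mid[is_lower].max() if is_lower.any() else -np.inf
    upper = mid[~is_lower].min() if (~is_lower).any() else np.inf

    # Constraint-intersection estimate: center of the feasible interval.
    theta_pref = (lower + upper) / 2

    # Sample-only baseline: empirical mean of all 2n samples.
    theta_mle = np.concatenate([x, xp]).mean()

    return abs(theta_pref - theta), abs(theta_mle - theta)

for n in (100, 1_000, 10_000):
    err_pref, err_mle = simulate(n=n)
    print(f"n={n:>6}: preference error {err_pref:.2e}  "
          f"sample-only error {err_mle:.2e}")
```

Increasing $n$ by a factor of 10 should shrink the preference-based error by roughly that factor, while the sample-only error shrinks only by about $\sqrt{10}$, matching the $O(1/n)$ versus $\Theta(1/\sqrt{n})$ separation described above.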

Publication
International Conference on Machine Learning
Marc Jourdan
Post-Doctoral Researcher at EPFL in the TML lab.

Gizem Yüce
PhD Student

Nicolas Flammarion
Tenure-track Assistant Professor