# Convergence Rates of Gradient Descent and MM Algorithms for Generalized Bradley-Terry Models

```bibtex
@article{Vojnovic2019ConvergenceRO,
  title   = {Convergence Rates of Gradient Descent and MM Algorithms for Generalized Bradley-Terry Models},
  author  = {Milan Vojnovic and Seyoung Yun and Kaifang Zhou},
  journal = {ArXiv},
  year    = {2019},
  volume  = {abs/1901.00150}
}
```

We show tight convergence rate bounds for gradient descent and MM algorithms for maximum likelihood estimation and maximum a posteriori probability estimation of a popular Bayesian inference method for generalized Bradley-Terry models. This class of models includes the Bradley-Terry model of paired comparisons, the Rao-Kupper model of paired comparisons with ties, the Luce choice model, and the Plackett-Luce ranking model. Our results show that MM algorithms have the same convergence rates as…
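For context, gradient ascent for Bradley-Terry maximum likelihood estimation works on the log-likelihood in log-parameters. The sketch below is illustrative only and is not taken from the paper; the `wins` count-matrix format, learning rate, and iteration count are assumptions.

```python
import numpy as np

def bt_gradient_descent(wins, lr=0.1, n_iter=500):
    """Gradient ascent on the Bradley-Terry log-likelihood in
    log-parameters theta_i = log w_i (illustrative sketch).
    wins[i, j] = number of times item i beat item j."""
    n = wins.shape[0]
    W = wins.sum(axis=1)            # total wins of each item
    N = wins + wins.T               # total comparisons per pair
    theta = np.zeros(n)
    for _ in range(n_iter):
        e = np.exp(theta)
        p = e[:, None] / (e[:, None] + e[None, :])  # P(i beats j)
        np.fill_diagonal(p, 0.0)
        # dL/dtheta_i = W_i - sum_j N_ij * P(i beats j)
        grad = W - (N * p).sum(axis=1)
        theta += lr * grad
        theta -= theta.mean()       # fix the translation invariance
    return theta
```

The mean-centering step handles the model's invariance to a common shift of all log-parameters, which is what makes a normalization (or a prior, in the MAP setting) necessary.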

#### References

Showing 1–10 of 54 references.

Convergence Rates of Gradient Descent and MM Algorithms for Bradley-Terry Models

- Mathematics, Computer Science
- AISTATS
- 2020

We present tight convergence rate bounds for gradient descent and MM algorithms for maximum likelihood (ML) estimation and maximum a posteriori probability (MAP) estimation of a popular Bayesian…

MM algorithms for generalized Bradley-Terry models

- Mathematics
- 2003

The Bradley-Terry model for paired comparisons is a simple and much-studied means to describe the probabilities of the possible outcomes when individuals are judged against one another in pairs. Among…
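The MM iteration this reference studies can be sketched in a few lines. The following is a minimal illustration of Hunter-style MM updates for the Bradley-Terry MLE, assuming a `wins` count matrix and a fixed iteration budget (both choices are this sketch's, not the paper's):

```python
import numpy as np

def bt_mm(wins, n_iter=100):
    """MM iterations for the Bradley-Terry MLE (illustrative sketch).
    wins[i, j] = number of times item i beat item j."""
    n = wins.shape[0]
    W = wins.sum(axis=1)          # total wins of each item
    N = wins + wins.T             # total comparisons per pair
    w = np.ones(n)
    for _ in range(n_iter):
        # w_i <- W_i / sum_{j != i} N_ij / (w_i + w_j)
        denom = N / (w[:, None] + w[None, :])
        np.fill_diagonal(denom, 0.0)
        w = W / denom.sum(axis=1)
        w /= w.sum()              # fix the scale invariance
    return w
```

Each update maximizes a minorizing surrogate of the log-likelihood in closed form, so every iteration increases the likelihood without any step-size tuning.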

Efficient Bayesian Inference for Generalized Bradley–Terry Models

- Computer Science, Mathematics
- 2010

It is shown here that iterative minorization-maximization (MM) algorithms can be reinterpreted as special instances of expectation-maximization (EM) algorithms associated with suitable sets of latent variables that allow for simple Gibbs samplers for Bayesian inference.

Asymptotics when the number of parameters tends to infinity in the Bradley-Terry model for paired comparisons

- Mathematics
- 1999

We are concerned here with establishing the consistency and asymptotic normality of the maximum likelihood estimator of a merit vector (u_0, ..., u_t), representing the merits of t+1 teams…

Minimax-optimal Inference from Partial Rankings

- Computer Science, Mathematics
- NIPS
- 2014

It is shown that even if one applies the mismatched maximum likelihood estimator that assumes independence (on pairwise comparisons that are now dependent due to rank-breaking), minimax optimal performance is still achieved up to a logarithmic factor.

Accelerated Spectral Ranking

- Computer Science
- ICML
- 2018

This paper designs a provably faster spectral ranking algorithm, called accelerated spectral ranking (ASR), that is also consistent under the MNL/BTL models, and gives the first general sample complexity bounds for recovering the parameters of the MNL model from multiway comparisons under any (connected) comparison graph.

A Statistical Convergence Perspective of Algorithms for Rank Aggregation from Pairwise Data

- Mathematics, Computer Science
- ICML
- 2014

This paper shows that, under a 'time-reversibility' or Bradley-Terry-Luce (BTL) condition on the distribution, the rank centrality (PageRank) and least squares (HodgeRank) algorithms both converge to an optimal ranking.
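The rank centrality idea mentioned here scores items by the stationary distribution of a comparison Markov chain. A minimal sketch, under the assumption of a `wins` count matrix and a simple power-iteration solver (both choices are illustrative, not from the paper):

```python
import numpy as np

def rank_centrality(wins, d=None):
    """Stationary distribution of the rank-centrality Markov chain
    (illustrative sketch). wins[i, j] = times item i beat item j."""
    n = wins.shape[0]
    N = wins + wins.T
    # Empirical win fractions: p[i, j] = fraction of times i beat j.
    p = np.where(N > 0, wins / np.maximum(N, 1), 0.0)
    if d is None:
        d = n                     # normalization >= max degree
    # Move from i to j with probability (fraction j beats i) / d.
    P = p.T / d
    np.fill_diagonal(P, 0.0)
    P = P + np.diag(1.0 - P.sum(axis=1))   # self-loops fill the rest
    # Power iteration for the stationary distribution.
    pi = np.ones(n) / n
    for _ in range(200):
        pi = pi @ P
    return pi / pi.sum()
```

Intuitively, the chain leaves strong items rarely (few items beat them), so stationary mass accumulates on high-quality items.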

Estimation from Pairwise Comparisons: Sharp Minimax Bounds with Topology Dependence

- Computer Science, Mathematics
- J. Mach. Learn. Res.
- 2016

This work considers parametric ordinal models for pairwise comparison data involving a latent vector w* ∈ R^d that represents the "qualities" of the d items being compared; this class of models includes the two most widely used parametric models: the Bradley-Terry-Luce (BTL) and the Thurstone models.

Fast and Accurate Inference of Plackett-Luce Models

- Computer Science
- NIPS
- 2015

It is shown that the maximum-likelihood estimate of models derived from Luce's choice axiom can be expressed as the stationary distribution of a Markov chain, and a new spectral algorithm is formulated that is significantly more accurate than previous ones for the Plackett-Luce model.

Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning

- Computer Science, Mathematics
- SIAM J. Optim.
- 2015

This work proposes an incremental majorization-minimization scheme for minimizing a large sum of continuous functions, a problem of utmost importance in machine learning, and presents convergence guarantees for nonconvex and convex optimization when the upper bounds approximate the objective up to a smooth error.