TY - GEN
T1 - An Efficient Pessimistic-Optimistic Algorithm for Stochastic Linear Bandits with General Constraints
AU - Liu, Xin
AU - Li, Bin
AU - Shi, Pengyi
AU - Ying, Lei
N1 - Funding Information:
Acknowledgment: This work has been supported in part by NSF CNS-2001687, CNS-2002608, and CNS-2152657.
Publisher Copyright:
© 2021 Neural information processing systems foundation. All rights reserved.
PY - 2021
Y1 - 2021
N2 - This paper considers stochastic linear bandits with general nonlinear constraints. The objective is to maximize the expected cumulative reward over horizon T subject to a set of constraints in each round τ ≤ T. We propose a pessimistic-optimistic algorithm for this problem, which is efficient in two aspects. First, the algorithm yields Õ((K^0.75/δ + d)√τ) (pseudo) regret in round τ ≤ T, where K is the number of constraints, d is the dimension of the reward feature space, and δ is Slater's constant; and zero constraint violation in any round τ > τ′, where τ′ is independent of horizon T. Second, the algorithm is computationally efficient. Our algorithm is based on the primal-dual approach in optimization and includes two components. The primal component is similar to unconstrained stochastic linear bandits (our algorithm uses the linear upper confidence bound algorithm (LinUCB)). The computational complexity of the dual component depends on the number of constraints, but is independent of the sizes of the contextual space, the action space, and the feature space. Thus, the computational complexity of our algorithm is similar to LinUCB for unconstrained stochastic linear bandits.
AB - This paper considers stochastic linear bandits with general nonlinear constraints. The objective is to maximize the expected cumulative reward over horizon T subject to a set of constraints in each round τ ≤ T. We propose a pessimistic-optimistic algorithm for this problem, which is efficient in two aspects. First, the algorithm yields Õ((K^0.75/δ + d)√τ) (pseudo) regret in round τ ≤ T, where K is the number of constraints, d is the dimension of the reward feature space, and δ is Slater's constant; and zero constraint violation in any round τ > τ′, where τ′ is independent of horizon T. Second, the algorithm is computationally efficient. Our algorithm is based on the primal-dual approach in optimization and includes two components. The primal component is similar to unconstrained stochastic linear bandits (our algorithm uses the linear upper confidence bound algorithm (LinUCB)). The computational complexity of the dual component depends on the number of constraints, but is independent of the sizes of the contextual space, the action space, and the feature space. Thus, the computational complexity of our algorithm is similar to LinUCB for unconstrained stochastic linear bandits.
UR - http://www.scopus.com/inward/record.url?scp=85124642873&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85124642873&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85124642873
T3 - Advances in Neural Information Processing Systems
SP - 24075
EP - 24086
BT - Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
A2 - Ranzato, Marc'Aurelio
A2 - Beygelzimer, Alina
A2 - Dauphin, Yann
A2 - Liang, Percy S.
A2 - Wortman Vaughan, Jenn
PB - Neural information processing systems foundation
T2 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Y2 - 6 December 2021 through 14 December 2021
ER -