Ashok Vardhan Makkuva
Title
Cited by
Year
Optimal transport mapping via input convex neural networks
A Makkuva, A Taghvaei, S Oh, J Lee
International Conference on Machine Learning, 6672-6681, 2020
Cited by 47 · 2020
Learning one-hidden-layer neural networks under general input distributions
W Gao, AV Makkuva, S Oh, P Viswanath
The 22nd International Conference on Artificial Intelligence and Statistics …, 2019
Cited by 27 · 2019
Equivalence of additive-combinatorial linear inequalities for Shannon entropy and differential entropy
AV Makkuva, Y Wu
IEEE Transactions on Information Theory 64 (5), 3579-3589, 2018
Cited by 22 · 2018
Breaking the gridlock in mixture-of-experts: Consistent and efficient algorithms
A Makkuva, P Viswanath, S Kannan, S Oh
International Conference on Machine Learning, 4304-4313, 2019
Cited by 13 · 2019
Barracuda: the power of ℓ-polling in proof-of-stake blockchains
G Fanti, J Jiao, A Makkuva, S Oh, R Rana, P Viswanath
Proceedings of the twentieth ACM international symposium on mobile ad hoc …, 2019
Cited by 11 · 2019
Reed-Muller subcodes: Machine learning-aided design of efficient soft recursive decoding
MV Jamali, X Liu, AV Makkuva, H Mahdavifar, S Oh, P Viswanath
2021 IEEE International Symposium on Information Theory (ISIT), 1088-1093, 2021
Cited by 7 · 2021
Learning in gated neural networks
A Makkuva, S Oh, S Kannan, P Viswanath
International Conference on Artificial Intelligence and Statistics, 3338-3348, 2020
Cited by 6 · 2020
On additive-combinatorial affine inequalities for Shannon entropy and differential entropy
AV Makkuva, Y Wu
2016 IEEE International Symposium on Information Theory (ISIT), 1053-1057, 2016
Cited by 6 · 2016
KO codes: inventing nonlinear encoding and decoding for reliable wireless communication via deep-learning
AV Makkuva, X Liu, MV Jamali, H Mahdavifar, S Oh, P Viswanath
International Conference on Machine Learning, 7368-7378, 2021
Cited by 4 · 2021
Event-driven stochastic approximation
VS Borkar, N Sahasrabudhe, M Ashok Vardhan
Indian Journal of Pure and Applied Mathematics 47 (2), 291-299, 2016
2016
Articles 1–10