Yoav Levine
Stanford University
Verified email at mail.huji.ac.il
Title · Cited by · Year
In-context retrieval-augmented language models
O Ram, Y Levine, I Dalmedigos, D Muhlgay, A Shashua, K Leyton-Brown, ...
Transactions of the Association for Computational Linguistics 11, 1316-1331, 2023
Cited by 318 · 2023
Quantum entanglement in deep learning architectures
Y Levine, O Sharir, N Cohen, A Shashua
Physical Review Letters 122 (6), 065301, 2019
Cited by 265 · 2019
Deep autoregressive models for the efficient variational simulation of many-body quantum systems
O Sharir, Y Levine, N Wies, G Carleo, A Shashua
Physical Review Letters 124 (2), 020503, 2020
Cited by 248 · 2020
SenseBERT: Driving some sense into BERT
Y Levine, B Lenz, O Dagan, D Padnos, O Sharir, S Shalev-Shwartz, ...
Proceedings of the 58th Annual Meeting of the Association for Computational …, 2020
Cited by 230 · 2020
Fundamental limitations of alignment in large language models
Y Wolf, N Wies, O Avnery, Y Levine, A Shashua
arXiv preprint arXiv:2304.11082, 2023
Cited by 149 · 2023
Deep learning and quantum entanglement: Fundamental connections with implications to network design
Y Levine, D Yakira, N Cohen, A Shashua
6th International Conference on Learning Representations (ICLR), 2018
Cited by 142 · 2018
Parallel context windows for large language models
N Ratner, Y Levine, Y Belinkov, O Ram, I Magar, O Abend, E Karpas, ...
arXiv preprint arXiv:2212.10947, 2022
Cited by 62 · 2022
MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning
E Karpas, O Abend, Y Belinkov, B Lenz, O Lieber, N Ratner, Y Shoham, ...
arXiv preprint arXiv:2205.00445, 2022
Cited by 61 · 2022
PMI-Masking: Principled masking of correlated spans
Y Levine, B Lenz, O Lieber, O Abend, K Leyton-Brown, M Tennenholtz, ...
9th International Conference on Learning Representations (ICLR), 2021
Cited by 60 · 2021
Limits to depth efficiencies of self-attention
Y Levine, N Wies, O Sharir, H Bata, A Shashua
Advances in Neural Information Processing Systems 33 (NeurIPS), 2020
Cited by 53* · 2020
The learnability of in-context learning
N Wies, Y Levine, A Shashua
Advances in Neural Information Processing Systems 36, 2024
Cited by 51 · 2024
Generating benchmarks for factuality evaluation of language models
D Muhlgay, O Ram, I Magar, Y Levine, N Ratner, Y Belinkov, O Abend, ...
arXiv preprint arXiv:2307.06908, 2023
Cited by 46 · 2023
Analysis and design of convolutional networks via hierarchical tensor decompositions
N Cohen, O Sharir, Y Levine, R Tamari, D Yakira, A Shashua
arXiv preprint arXiv:1705.02302, 2017
Cited by 44 · 2017
Standing on the shoulders of giant frozen language models
Y Levine, I Dalmedigos, O Ram, Y Zeldes, D Jannai, D Muhlgay, Y Osin, ...
arXiv preprint arXiv:2204.10019, 2022
Cited by 43 · 2022
The inductive bias of in-context learning: Rethinking pretraining example design
Y Levine, N Wies, D Jannai, D Navon, Y Hoshen, A Shashua
10th International Conference on Learning Representations (ICLR), 2022
Cited by 33 · 2022
Benefits of depth for long-term memory of recurrent networks
Y Levine, O Sharir, A Shashua
6th International Conference on Learning Representations (ICLR) workshop, 2018
Cited by 32* · 2018
Tensors for deep learning theory: Analyzing deep learning architectures via tensorization
Y Levine, N Wies, O Sharir, N Cohen, A Shashua
Tensors for Data Processing, 215-248, 2022
Cited by 28* · 2022
Human or Not? A gamified approach to the Turing test
D Jannai, A Meron, B Lenz, Y Levine, Y Shoham
arXiv preprint arXiv:2305.20010, 2023
Cited by 22 · 2023
Sub-task decomposition enables learning in sequence to sequence tasks
N Wies, Y Levine, A Shashua
11th International Conference on Learning Representations (ICLR), 2023
Cited by 22 · 2023
Which transformer architecture fits my data? A vocabulary bottleneck in self-attention
N Wies, Y Levine, D Jannai, A Shashua
International Conference on Machine Learning, 11170-11181, 2021
Cited by 20 · 2021