You can also find my articles on my Semantic Scholar profile.
Uri Shaham, Maor Ivgi, Avia Efrat, Jonathan Berant, Omer Levy. Published as an arXiv preprint
ZeroSCROLLS is a suite of datasets that require synthesizing information over long texts. The benchmark includes ten natural language tasks across multiple domains, including summarization, question answering, aggregated sentiment classification and information reordering.
Maor Ivgi, Oliver Hinder, Yair Carmon. Published in ICML (2023)
DoG is a tuning-free dynamic SGD step-size formula, backed by strong theoretical guarantees and empirically shown across many domains and model architectures to match well-tuned SGD with a best-practice learning-rate schedule.
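The core idea can be sketched in a few lines: at step t, the step size is the largest distance traveled so far from the initial point, divided by the root of the accumulated squared gradient norms. This is a toy illustration on a simple quadratic, not the paper's implementation; the function and parameter names are mine.

```python
import numpy as np

def dog_sgd(grad_fn, x_init, steps=200, r_eps=1e-4, eps=1e-8):
    """Toy gradient descent with a DoG ("Distance over Gradients") step size:
    eta_t = max_{i<=t} ||x_i - x_0|| / sqrt(sum_{i<=t} ||g_i||^2).
    r_eps is a tiny initial movement radius so the very first step is nonzero."""
    x = np.asarray(x_init, dtype=float).copy()
    x_start = x.copy()
    max_dist = r_eps      # running max of distance from the starting point
    grad_sq_sum = 0.0     # running sum of squared gradient norms
    for _ in range(steps):
        g = np.asarray(grad_fn(x), dtype=float)
        grad_sq_sum += float(g @ g)
        max_dist = max(max_dist, float(np.linalg.norm(x - x_start)))
        eta = max_dist / (np.sqrt(grad_sq_sum) + eps)
        x = x - eta * g
    return x

# Minimize f(x) = ||x||^2 (gradient 2x) with no learning rate to tune
x_final = dog_sgd(lambda x: 2 * x, [3.0, -4.0])
```

Note how the step size starts tiny (governed by `r_eps`) and grows automatically as the iterates move away from the initialization, with no schedule specified by the user.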
Maor Ivgi, Uri Shaham, Jonathan Berant. Published in TACL 2023, to be presented at ACL 2023
Can short-range LMs perform long-range reasoning? They can!
In this work, we propose the SLiding-Encoder and Decoder (SLED), which leverages existing battle-tested encoder-decoder LMs to tackle long-range NLU tasks.
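The sliding-window idea can be illustrated with the span arithmetic alone: split a long token sequence into overlapping windows, encode each window independently with the short-range encoder, and keep only each window's core (discarding the overlapping context) before concatenating for the decoder. The sketch below computes those spans; the chunk and context sizes are illustrative and need not match the paper's defaults.

```python
def sliding_spans(n_tokens, chunk_size=256, context=32):
    """Compute (window_start, window_end, keep_start, keep_end) tuples for
    sliding-window encoding. Each window carries `context` extra tokens on
    each side; after encoding, only the core [keep_start, keep_end) offsets
    within the window are kept, so the kept cores tile the input exactly."""
    core = chunk_size - 2 * context   # tokens each window contributes
    spans = []
    start = 0                         # global index of this window's core
    while start < n_tokens:
        w_start = max(0, start - context)
        w_end = min(n_tokens, start + core + context)
        keep_start = start - w_start  # offset of the core inside the window
        keep_end = keep_start + min(core, n_tokens - start)
        spans.append((w_start, w_end, keep_start, keep_end))
        start += core
    return spans

spans = sliding_spans(1000)
```

Concatenating the kept cores in order reconstructs every input position exactly once, which is what lets the decoder cross-attend over a full-length representation built from short-range encodings.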
Maor Ivgi, Yair Carmon, Jonathan Berant. Published in Findings of EMNLP 2022
Scaling laws are undoubtedly fascinating, but can they be harnessed for efficient model design? In this work, we explore their usefulness across a variety of language understanding tasks, and show that in some cases, they can!
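The basic mechanics of such an extrapolation fit a power law to small-scale measurements and evaluate it at a larger scale. The snippet below is a generic illustration on synthetic data, not the paper's estimator or its datasets.

```python
import numpy as np

# Synthetic "loss vs. model size" points obeying loss = a * N^(-b) plus noise
rng = np.random.default_rng(0)
sizes = np.array([1e6, 3e6, 1e7, 3e7, 1e8])
losses = 50.0 * sizes ** -0.3 * np.exp(rng.normal(0.0, 0.01, sizes.size))

# A power law is linear in log-log space: log L = log a - b * log N,
# so a least-squares line fit recovers the exponent and prefactor.
slope, intercept = np.polyfit(np.log(sizes), np.log(losses), 1)
a_fit, b_fit = np.exp(intercept), -slope

# Extrapolate the fitted law to a model size we never trained
predicted_loss = a_fit * 1e9 ** -b_fit
```

The fit recovers the exponent from the small-scale points, and the extrapolated loss at 1e9 parameters falls below every measured point, as the power law dictates.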
Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, Omer Levy. Published in EMNLP 2022
SCROLLS is a suite of datasets that require synthesizing information over long texts. The benchmark includes seven natural language tasks across multiple domains, including summarization, question answering, and natural language inference.
Amirata Ghorbani, Dina Berenbaum, Maor Ivgi, Yuval Dafna, and James Zou. Published in MDPI (vol. 13), 2021
A novel way to visualize not only the importance of each feature in tabular data, but also the semantic meaning and relationships of features.
Maor Ivgi and Jonathan Berant. Published in EMNLP 2021
While many works have established that modern transformer-based NLP models are not robust, this work is all about regaining that lost robustness.
Maor Ivgi, Yaniv Benny, Avichai Ben-David, Jonathan Berant, and Lior Wolf. Published in ICIP 2021
A novel approach to gradually generate realistic image layouts from scene-graphs by attending to all objects in the generated layout simultaneously.