Data programming: Creating large training sets, quickly. AJ Ratner, CM De Sa, S Wu, D Selsam, C Ré. Advances in Neural Information Processing Systems, 3567-3575, 2016. Cited by 277.

Incremental knowledge base construction using DeepDive. J Shin, S Wu, F Wang, C De Sa, C Zhang, C Ré. Proceedings of the VLDB Endowment International Conference on Very Large …, 2015. Cited by 206.

Global convergence of stochastic gradient descent for some non-convex matrix problems. C De Sa, C Ré, K Olukotun. International Conference on Machine Learning, 2332-2341, 2015. Cited by 144.

Taming the wild: A unified analysis of Hogwild-style algorithms. CM De Sa, C Zhang, K Olukotun, C Ré. Advances in Neural Information Processing Systems, 2674-2682, 2015. Cited by 125.

Understanding and optimizing asynchronous low-precision stochastic gradient descent. C De Sa, M Feldman, C Ré, K Olukotun. Proceedings of the 44th Annual International Symposium on Computer …, 2017. Cited by 97.

Representation tradeoffs for hyperbolic embeddings. C De Sa, A Gu, C Ré, F Sala. Proceedings of Machine Learning Research 80, 4460, 2018. Cited by 95.

High-accuracy low-precision training. C De Sa, M Leszczynski, J Zhang, A Marzoev, CR Aberger, K Olukotun, ... arXiv preprint arXiv:1803.03383, 2018. Cited by 55.

DeepDive: Declarative knowledge base construction. C De Sa, A Ratner, C Ré, J Shin, F Wang, S Wu, C Zhang. ACM SIGMOD Record 45 (1), 60-67, 2016. Cited by 54.

Generating configurable hardware from parallel patterns. R Prabhakar, D Koeplinger, KJ Brown, HJ Lee, C De Sa, C Kozyrakis, ... ACM SIGPLAN Notices 51 (4), 651-665, 2016. Cited by 50.

Parallel SGD: When does averaging help? J Zhang, C De Sa, I Mitliagkas, C Ré. arXiv preprint arXiv:1606.07365, 2016. Cited by 48.

Have abstraction and eat performance, too: Optimized heterogeneous computing with parallel patterns. KJ Brown, HJ Lee, T Romp, AK Sujeeth, C De Sa, C Aberger, K Olukotun. 2016 IEEE/ACM International Symposium on Code Generation and Optimization …, 2016. Cited by 47.

DeepDive: Declarative knowledge base construction. C Zhang, C Ré, M Cafarella, C De Sa, A Ratner, J Shin, F Wang, S Wu. Communications of the ACM 60 (5), 93-102, 2017. Cited by 38.

Ensuring rapid mixing and low bias for asynchronous Gibbs sampling. C De Sa, K Olukotun, C Ré. JMLR Workshop and Conference Proceedings 48, 1567, 2016. Cited by 37.

Accelerated stochastic power iteration. C De Sa, B He, I Mitliagkas, C Ré, P Xu. Proceedings of Machine Learning Research 84, 58, 2018. Cited by 35.

Improving neural network quantization without retraining using outlier channel splitting. R Zhao, Y Hu, J Dotzel, C De Sa, Z Zhang. arXiv preprint arXiv:1901.09504, 2019. Cited by 31.

A kernel theory of modern data augmentation. T Dao, A Gu, AJ Ratner, V Smith, C De Sa, C Ré. Proceedings of Machine Learning Research 97, 1528, 2019. Cited by 23.

SWALP: Stochastic weight averaging in low-precision training. G Yang, T Zhang, P Kirichenko, J Bai, AG Wilson, C De Sa. arXiv preprint arXiv:1904.11943, 2019. Cited by 20.

The convergence of stochastic gradient descent in asynchronous shared memory. D Alistarh, C De Sa, N Konstantinov. Proceedings of the 2018 ACM Symposium on Principles of Distributed Computing …, 2018. Cited by 19.

A formal framework for probabilistic unclean databases. C De Sa, IF Ilyas, B Kimelfeld, C Ré, T Rekatsinas. arXiv preprint arXiv:1801.06750, 2018. Cited by 18.

Gaussian quadrature for kernel features. T Dao, CM De Sa, C Ré. Advances in Neural Information Processing Systems, 6107-6117, 2017. Cited by 18.