In-datacenter performance analysis of a tensor processing unit
  NP Jouppi, C Young, N Patil, D Patterson, G Agrawal, R Bajwa, S Bates, ...
  Proceedings of the 44th annual international symposium on computer …, 2017. Cited by 4684.

Neural network compute tile
  O Temam, R Narayanaswami, H Khaitan, DH Woo
  US Patent 9,710,265, 2017. Cited by 57.

Neural network instruction set architecture
  R Narayanaswami, DH Woo, O Temam, H Khaitan
  US Patent 9,836,691, 2017. Cited by 37.

Neural network accelerator with parameters resident on chip
  O Temam, H Khaitan, R Narayanaswami, DH Woo
  US Patent 10,504,022, 2019. Cited by 35.

Neural network instruction set architecture
  R Narayanaswami, DH Woo, O Temam, H Khaitan
  US Patent 11,379,707, 2022. Cited by 29.

Neural network instruction set architecture
  R Narayanaswami, DH Woo, O Temam, H Khaitan
  US Patent 9,959,498, 2018. Cited by 20.

Neural network compute tile
  O Temam, R Narayanaswami, H Khaitan, DH Woo
  US Patent 10,175,980, 2019. Cited by 16.

Virtualizing external memory as local to a machine learning accelerator
  LJ Madar III, T Fadelu, H Khaitan, R Narayanaswami
  US Patent 11,176,493, 2021. Cited by 15.

Accessing data in multi-dimensional tensors using adders
  O Temam, H Khaitan, R Narayanaswami, DH Woo
  US Patent 10,534,607, 2020. Cited by 8.

Alternative loop limits for accessing data in multi-dimensional tensors
  O Temam, H Khaitan, R Narayanaswami, DH Woo
  US Patent 10,248,908, 2019. Cited by 8.

Neural network compute tile
  O Temam, R Narayanaswami, H Khaitan, DH Woo
  US Patent 11,422,801, 2022. Cited by 7.

Accessing data in multi-dimensional tensors using adders
  O Temam, H Khaitan, R Narayanaswami, DH Woo
  US Patent 9,946,539, 2018. Cited by 7.

Neural network weight distribution using a tree direct-memory access (DMA) bus
  H Khaitan
  US Patent App. 17/030,051, 2022. Cited by 4.

Accessing prologue and epilogue data
  O Temam, H Khaitan, R Narayanaswami, DH Woo
  US Patent 10,802,956, 2020. Cited by 4.

Hardware double buffering using a special purpose computational unit
  O Temam, H Khaitan, R Narayanaswami, DH Woo
  US Patent 10,496,326, 2019. Cited by 4.

Virtualizing external memory as local to a machine learning accelerator
  LJ Madar III, T Fadelu, H Khaitan, R Narayanaswami
  US Patent App. 17/507,188, 2022. Cited by 3.

Neural network accelerator with parameters resident on chip
  O Temam, H Khaitan, R Narayanaswami, DH Woo
  US Patent 11,501,144, 2022. Cited by 2.

Hardware double buffering using a special purpose computational unit
  O Temam, H Khaitan, R Narayanaswami, DH Woo
  US Patent 10,175,912, 2019. Cited by 2.

Accessing prologue and epilogue data
  O Temam, H Khaitan, R Narayanaswami, DH Woo
  US Patent 10,108,538, 2018. Cited by 2.

Neural network accelerator with parameters resident on chip
  O Temam, H Khaitan, R Narayanaswami, DH Woo
  US Patent 11,727,259, 2023.