Souvik Kundu
Research Scientist, Intel Labs; Ph.D., University of Southern California
Verified email at intel.com - Homepage
Title
Cited by
Year
Spike-thrift: Towards energy-efficient deep spiking neural networks by limiting spiking activity via attention-guided compression
S Kundu, G Datta, M Pedram, PA Beerel
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021
104, 2021
DNR: A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of DNNs
S Kundu, M Nazemi, PA Beerel, M Pedram
Proceedings of the 26th ASP-DAC 2021, 344-350, 2021
60, 2021
HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training With Crafted Input Noise
S Kundu, M Pedram, PA Beerel
IEEE/CVF International Conference on Computer Vision (ICCV 2021), 5209-5218, 2021
57, 2021
Pre-Defined Sparsity for Low-Complexity Convolutional Neural Networks
S Kundu, M Nazemi, M Pedram, KM Chugg, PA Beerel
IEEE Transactions on Computers 69 (7), 1045-1058, 2020
50, 2020
Training Energy-Efficient Deep Spiking Neural Networks with Single-Spike Hybrid Input Encoding
G Datta, S Kundu, PA Beerel
IJCNN 2021, 2021
37, 2021
P2M: A Processing-in-Pixel-in-Memory Paradigm for Resource-Constrained TinyML Applications
S Kundu, G Datta, Z Yin, RT Lakkireddy, PA Beerel, A Jacob, ARE Jaiswal
Nature Scientific Reports, 2022
30*, 2022
ACE-SNN: Algorithm-Hardware Co-Design of Energy-efficient & Low-Latency Deep Spiking Neural Networks for 3D Image Recognition
G Datta, S Kundu, A Jaiswal, PA Beerel
Frontiers in Neuroscience, 2022
29*, 2022
AttentionLite: Towards Efficient Self-Attention Models for Vision
S Kundu, S Sundaresan
ICASSP 2021, 2021
27, 2021
Analyzing the Confidentiality of Undistillable Teachers in Knowledge Distillation
S Kundu, Q Sun, Y Fu, M Pedram, P Beerel
Advances in Neural Information Processing Systems (NeurIPS 2021) 34, 2021
23, 2021
Learning to Linearize Deep Neural Networks for Secure and Efficient Private Inference
S Kundu, S Lu, Y Zhang, J Liu, PA Beerel
International Conference on Learning Representations (ICLR), 2023
22, 2023
A highly parallel FPGA implementation of sparse neural network training
S Dey, D Chen, Z Li, S Kundu, KW Huang, KM Chugg, PA Beerel
2018 International Conference on ReConFigurable Computing and FPGAs …, 2018
20, 2018
Towards Low-Latency Energy-Efficient Deep SNNs via Attention-Guided Compression
S Kundu, G Datta, M Pedram, PA Beerel
2nd Sparse Neural Networks Workshop (co-located with ICML 2022), 2021
19, 2021
pSConv: A Pre-defined Sparse Kernel Based Convolution for Deep CNNs
S Kundu, S Prakash, H Akrami, PA Beerel, KM Chugg
2019 57th Annual Allerton Conference on Communication, Control, and …, 2019
19*, 2019
Revisiting Sparsity Hunting in Federated Learning: Why does Sparsity Consensus Matter?
S Kundu*, S Babakniya*, S Prakash, Y Niu, S Avestimehr
Transactions on Machine Learning Research (TMLR), 2023
17*, 2023
PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices
Y Hu, C Imes, X Zhao, S Kundu, PA Beerel, SP Crago, JP Walters
2022 25th Euromicro Conference on Digital System Design (DSD), 298-307, 2022
15*, 2022
BMPQ: Bit-Gradient Sensitivity Driven Mixed-Precision Quantization of DNNs from Scratch
S Kundu, S Wang, Q Sun, PA Beerel, M Pedram
Design Automation and Test in Europe (DATE) 2022, 2021
14, 2021
Overcoming Resource Constraints in Federated Learning: Large Models Can Be Trained with only Weak Clients
Y Niu, S Prakash, S Kundu, S Lee, S Avestimehr
Transactions on Machine Learning Research (TMLR), 2023
13*, 2023
P2M-DeTrack: Processing-in-Pixel-in-Memory for Energy-efficient and Real-Time Multi-Object Detection and Tracking
S Kundu, G Datta, Z Yin, J Mathai, Z Liu, Z Wang, M Tian, S Lu, ...
VLSI-SoC 2022, 2022
13*, 2022
Understanding of Emotion Perception from Art
D Bose, K Somandepalli, S Kundu, R Lahiri, J Gratch, S Narayanan
ICCV CLVL Workshop 2021, 2021
12, 2021
Attention-based Image Upsampling
S Kundu, H Mostafa, SN Sridhar, S Sundaresan
arXiv preprint, 2020
12, 2020