Multimodal deep learning. J Ngiam, A Khosla, M Kim, J Nam, H Lee, AY Ng. Proceedings of the 28th International Conference on Machine Learning (ICML), 2011. Cited by 3931.
Sample-level deep convolutional neural networks for music auto-tagging using raw waveforms. J Lee, J Park, KL Kim, J Nam. SMC, 220-226, 2017. Cited by 228.
SampleCNN: End-to-end deep convolutional neural networks using very small filters for music classification. J Lee, J Park, KL Kim, J Nam. Applied Sciences 8 (1), 150, 2018. Cited by 173.
Multi-level and multi-scale feature aggregation using pretrained convolutional neural networks for music auto-tagging. J Lee, J Nam. IEEE Signal Processing Letters 24 (8), 1208-1212, 2017. Cited by 157.
Deep learning for audio-based music classification and tagging: Teaching computers to distinguish rock from Bach. J Nam, K Choi, J Lee, SY Chou, YH Yang. IEEE Signal Processing Magazine 36 (1), 41-51, 2018. Cited by 127.
Sample-level CNN architectures for music auto-tagging using raw waveforms. T Kim, J Lee, J Nam. ICASSP, 366-370, 2018. Cited by 115.
A Classification-Based Polyphonic Piano Transcription Approach Using Learned Feature Representations. J Nam, J Ngiam, H Lee, M Slaney. ISMIR, 175-180, 2011. Cited by 110.
Learning Sparse Feature Representations for Music Annotation and Retrieval. J Nam, J Herrera, M Slaney, JO Smith III. ISMIR, 565-570, 2012. Cited by 102.
Systems and methods for evaluating strength of an audio password. LH Kim, J Nam, E Visser. US Patent 10,157,272, 2018. Cited by 99.
Raw waveform-based audio classification using sample-level CNN architectures. J Lee, T Kim, J Park, J Nam. Machine Learning for Audio Signal Processing Workshop, NIPS, 2017. Cited by 94.
Comparison and analysis of SampleCNN architectures for audio classification. T Kim, J Lee, J Nam. IEEE Journal of Selected Topics in Signal Processing 13 (2), 285-297, 2019. Cited by 90.
Representation learning of music using artist labels. J Park, J Lee, J Park, JW Ha, J Nam. ISMIR, 717-724, 2018. Cited by 76.
EMOPIA: a multi-modal pop piano dataset for emotion recognition and emotion-based music generation. HT Hung, J Ching, S Doh, N Kim, J Nam, YH Yang. ISMIR, 318-325, 2021. Cited by 75.
Melody Extraction on Vocal Segments Using Multi-Column Deep Neural Networks. S Kum, C Oh, J Nam. ISMIR, 819-825, 2016. Cited by 75.
Systems and methods for audio signal processing. E Visser, LH Kim, Y Guo, J Nam. US Patent App. 13/828,415, 2013. Cited by 73.
Joint detection and classification of singing voice melody using convolutional recurrent neural networks. S Kum, J Nam. Applied Sciences 9 (7), 1324, 2019. Cited by 72.
Zero-shot learning for audio-based music classification and tagging. J Choi, J Lee, J Park, J Nam. ISMIR, 67-74, 2019. Cited by 52.
VirtuosoNet: A Hierarchical RNN-based System for Modeling Expressive Piano Performance. D Jeong, T Kwon, Y Kim, K Lee, J Nam. ISMIR, 908-915, 2019. Cited by 52.
Graph neural network for music score data and modeling expressive piano performance. D Jeong, T Kwon, Y Kim, J Nam. ICML, 3060-3070, 2019. Cited by 52.
Alias-suppressed oscillators based on differentiated polynomial waveforms. V Välimäki, J Nam, JO Smith, JS Abel. IEEE Transactions on Audio, Speech, and Language Processing 18 (4), 786-798, 2010. Cited by 43.