Jike Chong
Verified email at andrew.cmu.edu
Title
Cited by
Year
Efficient parallelization of H.264 decoding with macro block level scheduling
J Chong, N Satish, B Catanzaro, K Ravindran, K Keutzer
2007 IEEE International Conference on Multimedia and Expo, 1874-1877, 2007
Cited by 119 · 2007
Parallel scalability in speech recognition
K You, J Chong, Y Yi, E Gonina, CJ Hughes, YK Chen, W Sung, K Keutzer
IEEE Signal Processing Magazine 26 (6), 124-135, 2009
Cited by 73 · 2009
Data-parallel large vocabulary continuous speech recognition on graphics processors
J Chong, Y Yi, A Faria, N Satish, K Keutzer
Proceedings of the 1st Annual Workshop on Emerging Applications and Many …, 2008
Cited by 69 · 2008
A fully data parallel WFST-based large vocabulary continuous speech recognition on a graphics processing unit.
J Chong, E Gonina, Y Yi, K Keutzer
Interspeech 2009, 1183-1186, 2009
Cited by 58 · 2009
Extensible and scalable time triggered scheduling
W Zheng, J Chong, C Pinello, S Kanajan, A Sangiovanni-Vincentelli
Fifth International Conference on Application of Concurrency to System …, 2005
Cited by 52 · 2005
Classification, customization, and characterization: Using MILP for task allocation and scheduling
A Davare, J Chong, Q Zhu, DM Densmore, AL Sangiovanni-Vincentelli
Systems Research, 2006
Cited by 42 · 2006
Belief propagation by message passing in junction trees: Computing each message faster using GPU parallelization
L Zheng, O Mengshoel, J Chong
arXiv preprint arXiv:1202.3777, 2012
Cited by 36 · 2012
Efficient On-The-Fly Hypothesis Rescoring in a Hybrid GPU/CPU-based Large Vocabulary Continuous Speech Recognition Engine.
J Kim, J Chong, IR Lane
INTERSPEECH, 1035-1038, 2012
Cited by 28 · 2012
Opportunities and challenges of parallelizing speech recognition
J Chong, G Friedland, A Janin, N Morgan, C Oei
Proceedings of the 2nd USENIX conference on Hot topics in parallelism …, 2010
Cited by 27 · 2010
Acceleration of market value-at-risk estimation
M Dixon, J Chong, K Keutzer
Proceedings of the 2nd Workshop on High Performance Computational Finance, 1-8, 2009
Cited by 25 · 2009
Method and system for parallel statistical inference on highly parallel platforms
J Chong, Y Yi, EI Gonina
US Patent 8,566,259, 2013
Cited by 23 · 2013
Efficient automatic speech recognition on the GPU
J Chong, E Gonina, K Keutzer
GPU Computing Gems Emerald Edition, 601-618, 2011
Cited by 21 · 2011
Apparatus and method for sharing a functional unit execution resource among a plurality of functional units
J Chong, C Olson, GF Grohoski
US Patent 7,353,364, 2008
Cited by 21 · 2008
Exploring recognition network representations for efficient speech inference on highly parallel platforms.
J Chong, E Gonina, K You, K Keutzer
INTERSPEECH, 1489-1492, 2010
Cited by 19 · 2010
Parallelizing speaker-attributed speech recognition for meeting browsing
G Friedland, J Chong, A Janin
2010 IEEE International Symposium on Multimedia, 121-128, 2010
Cited by 18 · 2010
Scalable HMM based inference engine in large vocabulary continuous speech recognition
J Chong, K You, Y Yi, E Gonina, C Hughes, W Sung, K Keutzer
2009 IEEE International Conference on Multimedia and Expo, 1797-1800, 2009
Cited by 16 · 2009
Monte Carlo–based financial market value-at-risk estimation on GPUs
MF Dixon, T Bradley, J Chong, K Keutzer
GPU Computing Gems Jade Edition, 337-353, 2012
Cited by 15 · 2012
Pattern-oriented application frameworks for domain experts to effectively utilize highly parallel manycore microprocessors
J Chong
University of California, Berkeley, 2010
Cited by 13 · 2010
Methods for hybrid GPU/CPU data processing
I Lane, J Chong, J Kim
US Patent 9,558,748, 2017
Cited by 12 · 2017
A parallel implementation of Viterbi training for acoustic models using graphics processing units
S Buthpitiya, I Lane, J Chong
2012 Innovative Parallel Computing (InPar), 1-10, 2012
Cited by 11 · 2012