YF Zhang, Q Wen, C Fu, X Wang, Z Zhang, L Wang, R Jin. Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models. arXiv preprint arXiv:2406.08487, 2024.
X Wang, T Zhou, Q Wen, J Gao, B Ding, R Jin. CARD: Channel Aligned Robust Blend Transformer for Time Series Forecasting. The Twelfth International Conference on Learning Representations, 2024. Cited by 5.
YF Zhang, W Chen, Z Zhu, D Qin, L Sun, X Wang, Q Wen, Z Zhang, et al. Addressing Concept Shift in Online Time Series Forecasting: Detect-then-Adapt. arXiv preprint arXiv:2403.14949, 2024.
YF Zhang, W Yu, Q Wen, X Wang, Z Zhang, L Wang, R Jin, T Tan. Debiasing Large Visual Language Models. arXiv preprint arXiv:2403.05262, 2024. Cited by 7.
X Wang, S Zhang, Z Qing, Z Zuo, C Gao, R Jin, N Sang. HyRSM++: Hybrid Relation Guided Temporal Set Matching for Few-Shot Action Recognition. Pattern Recognition 147, 110110, 2024. Cited by 14.
Q Wen, W Chen, L Sun, Z Zhang, L Wang, R Jin, T Tan. OneNet: Enhancing Time Series Forecasting Models under Concept Drift by Online Ensembling. Advances in Neural Information Processing Systems 36, 2024. Cited by 12.
Z Ma, W Wang, T Zhou, C Chen, B Peng, L Sun, R Jin. FusionSF: Fuse Heterogeneous Modalities in a Vector Quantized Framework for Robust Solar Power Forecasting. arXiv preprint arXiv:2402.05823, 2024.
PS Niu, T Zhou, X Wang, L Sun, R Jin. Attention as Robust Representation for Time Series Forecasting. arXiv preprint arXiv:2402.05370, 2024. Cited by 2.
Y Zhao, T Zhou, C Chen, L Sun, Y Qian, R Jin. Sparse-VQ Transformer: An FFN-Free Framework with Vector Quantization for Enhanced Time Series Forecasting. arXiv preprint arXiv:2402.05830, 2024.
T Zhou, P Niu, L Sun, R Jin. One Fits All: Power General Time Series Analysis by Pretrained LM. Advances in Neural Information Processing Systems 36, 43322-43355, 2023. Cited by 144.
YF Zhang, X Wang, T Zhou, K Yuan, Z Zhang, L Wang, R Jin, T Tan. Model-Free Test-Time Adaptation for Out-of-Distribution Detection. arXiv preprint arXiv:2311.16420, 2023. Cited by 1.
T Zhou, P Niu, X Wang, L Sun, R Jin. One Fits All: Universal Time Series Analysis by Pretrained LM and Specially Designed Adaptors. arXiv preprint arXiv:2311.14782, 2023. Cited by 3.
J Zhou, P Wang, J Tang, F Wang, Q Liu, H Li, R Jin. What Limits the Performance of Local Self-Attention? International Journal of Computer Vision 131 (10), 2516-2528, 2023. Cited by 1.
Z Guo, R Jin, J Luo, T Yang. FeDXL: Provable Federated Learning for Deep X-Risk Optimization. International Conference on Machine Learning, 11934-11966, 2023. Cited by 3.
Y Zhang, X Wang, K Jin, K Yuan, Z Zhang, L Wang, R Jin, T Tan. AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation. International Conference on Machine Learning, 41647-41676, 2023. Cited by 23.
W Xue, T Zhou, Q Wen, J Gao, B Ding, R Jin. CARD: Channel Aligned Robust Blend Transformer for Time Series Forecasting. arXiv preprint arXiv:2305.12095, 2023. Cited by 3.
Z Qing, S Zhang, Z Huang, Y Xu, X Wang, C Gao, R Jin, N Sang. Self-Supervised Learning from Untrimmed Videos via Hierarchical Consistency. IEEE Transactions on Pattern Analysis and Machine Intelligence 45 (10 …), 2023. Cited by 4.
M Yan, H Xu, C Li, J Tian, B Bi, W Wang, X Xu, J Zhang, S Huang, et al. Achieving Human Parity on Visual Question Answering. ACM Transactions on Information Systems 41 (3), 1-40, 2023. Cited by 7.
Z Qing, Z Huang, S Zhang, M Tang, C Gao, R Jin, MH Ang, N Sang. ParamCrop: Parametric Cubic Cropping for Video Contrastive Learning. IEEE Transactions on Multimedia 25, 9002-9014, 2023. Cited by 3.
T Zhou, P Niu, X Wang, L Sun, R Jin. Power Time Series Forecasting by Pretrained LM. arXiv preprint arXiv:2302.11939, 2023. Cited by 2.