Commit: new
AtlasWang committed Oct 3, 2024
1 parent fe9a287 commit b1d7204
Showing 4 changed files with 16 additions and 12 deletions.
17 changes: 12 additions & 5 deletions index.html
@@ -171,19 +171,26 @@ <h2>News</h2>
<b style="color:rgb(68, 68, 68)"><i>If you are here to seek "TL;DR"...</i></b>
<ul style="margin-bottom:5px">
<li><b style="color:rgb(71, 71, 71)">Question</b>: What's a quick read to get a snapshot of VITA's current research focus? </li>
- <li><b style="color:rgb(71, 71, 71)">Short Answer</b>: We pick five papers that represent our recent flavors as "chef's choice". The list will change over time: <a href="https://arxiv.org/pdf/2403.03507.pdf">[GaLore, ICML 2024]</a>, <a href="https://openreview.net/pdf?id=9vKRhnflAs">[Flextron, ICML 2024]</a>, <a href="https://instantsplat.github.io/">[InstantSplat, arXiv 2024]</a>, <a href="https://streamingt2v.github.io/">[StreamingT2V, arXiv 2024]</a>, <a href="https://openreview.net/pdf?id=ctPizehA9D">[Heavy-Hitter Oracle, NeurIPS 2023]</a>
+ <li><b style="color:rgb(71, 71, 71)">Short Answer</b>: We pick five papers that represent our recent flavors as "chef's choice". The list will change over time: <a href="https://largespatialmodel.github.io/">[Large Spatial Model, NeurIPS 2024]</a>, <a href="https://arxiv.org/pdf/2403.03507.pdf">[GaLore, ICML 2024]</a>, <a href="https://openreview.net/pdf?id=9vKRhnflAs">[Flextron, ICML 2024]</a>, <a href="https://instantsplat.github.io/">[InstantSplat, arXiv 2024]</a>, <a href="https://openreview.net/pdf?id=ctPizehA9D">[Heavy-Hitter Oracle, NeurIPS 2023]</a>
</li>
- <li><b style="color:rgb(71, 71, 71)">Long Answer</b>: Please refer to our (still brief) <a href="research.html">Research Agenda</a>. Also, please consider <a href="https://twitter.com/VITAGroupUT?ref_src=twsrc%5Etfw" class="twitter-follow-button" data-show-count="false">following @VITAGroupUT</a><script async src="https://platform.twitter.com/widgets.js" charset="utf-8" data-size="large"></script>
+ <li><b style="color:rgb(71, 71, 71)">Long Answer</b>: Please refer to our (still brief) <a href="research.html">Research Agenda</a>. Please also consider <a href="https://twitter.com/VITAGroupUT?ref_src=twsrc%5Etfw" class="twitter-follow-button" data-show-count="false">following @VITAGroupUT</a><script async src="https://platform.twitter.com/widgets.js" charset="utf-8" data-size="large"></script>
</li>
<li><b style="color:rgb(255, 0, 0)">Since May 2024, Dr. Wang has been on leave from UT Austin to serve as the full-time Research Director for XTX Markets. You can read more <a href="https://www.linkedin.com/feed/update/urn:li:activity:7191830305933004801/">[here]</a>.</b></li>
</ul>

<b style="color:rgb(68, 68, 68)">[Oct. 2024]</b>
<ul style="margin-bottom:5px">
<li> 1 TMLR (amortized 3D Gaussians) accepted</li>
<li>Our group co-organized the MICCAI 2024 Challenge on <a href="https://bionlplab.github.io/2024_MICCAI_CXRLT/">Long-tailed, Multi-label, and Zero-shot Classification on Chest X-rays (CXR-LT)</a></li>
</ul>

<b style="color:rgb(68, 68, 68)">[Sep. 2024]</b>
<ul style="margin-bottom:5px">
<li>8 NeurIPS'24 (LightGaussian + expressive Gaussian avatar + Read-ME + Found in the Middle + Large Spatial Model + transformer training dynamics + Diffusion4D + AlphaPruning) accepted</li>
<li>1 NeurIPS Datasets & Benchmarks Track'24 (Model-GLUE) accepted</li>
<li> 1 IEEE Trans. PAMI (symbolic visual RL) accepted</li>
<li>Our group co-organized the ECCV 2024 <a href="https://dd-challenge-main.vercel.app/">"Sometimes Less is More: the 1st Dataset Distillation Challenge"</a></li>
</ul>

<b style="color:rgb(68, 68, 68)">[Aug. 2024]</b>
@@ -206,7 +213,7 @@ <h2>News</h2>
<li> 1 VLDB'24 (data privacy in LLMs) accepted</li>
<li> 1 IROS'24 (3DGS SLAM) accepted</li>
<li> 1 Communications Medicine (clinical SSL for echocardiography) accepted</li>
- <li> Ph.D. dissertation of VITA alumni Dr. <a href="https://chenwydj.github.io/">Wuyang Chen</a> is selected to receive the INNS Doctoral Dissertation Award, and the iSchools Doctoral Dissertation Award</li>
+ <li> Ph.D. dissertation of VITA alumnus Dr. <a href="https://chenwydj.github.io/">Wuyang Chen</a> has been selected to receive the INNS Doctoral Dissertation Award and the iSchools Doctoral Dissertation Award</li>
<li> We thank AICoffeeBreak for the very cool video <a href="https://www.youtube.com/watch?v=VC9NbOir7q0&ab_channel=AICoffeeBreakwithLetitia">[YouTube]</a> highlighting our latest work, GaLore (ICML'24 Oral) <a href="https://arxiv.org/pdf/2403.03507">[Paper]</a> <a href="https://github.com/jiaweizzhao/GaLore">[Code]</a> <a href="https://huggingface.co/blog/galore">[Hugging Face]</a></li>
<li>Our group co-organized the CVPR 2024 Workshop and Prize Challenge on <a href="https://cvpr2024ug2challenge.github.io/">Bridging the Gap between Computational Photography and Visual Recognition (UG2+)</a></li>

@@ -317,7 +324,7 @@ <h2>News</h2>
</li>
<li> Dr. Wuyang Chen will join CS@Simon Fraser as an Assistant Professor (starting Fall 2024), after another year of postdoc at UC Berkeley
</li>
- <li> Dr. Junru Wu joins Google Research, New York as a Research Engineer
+ <li> Dr. Junru Wu joins Google DeepMind, New York as a Research Engineer
</li>
<li> Dr. Haotao Wang joins Qualcomm AI Research, San Diego as a Research Scientist
</li>
@@ -335,7 +342,7 @@ <h2>News</h2>
<li> 1 MICCAI'23 (multi-label long-tail chest X-ray) accepted </li>
<li> 1 AutoML-Conf'23 (neural architecture "no free lunch") accepted </li>
<li> 1 TMLR (partial graph transfer learning) accepted</li>
<li> Our group co-organized the ICLR 2023 Workshop on <a href="https://www.sparseneural.net/">Sparsity in Neural Networks: On Practical Limitations and Tradeoffs between Sustainability and Efficiency</a></li>
</ul>


Expand Down
1 change: 1 addition & 0 deletions publication.html
@@ -162,6 +162,7 @@ <h2>Journal Paper</h2>
<li>W. Zheng*, S. Sharan*, Z. Fan*, K. Wang*, Y. Xi*, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Symbolic Visual Reinforcement Learning: A Scalable Framework with Object-Level Abstraction and Differentiable Expression Search”</b><br>IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024. <a href="https://arxiv.org/abs/2212.14849">[Paper]</a> <a href="https://github.com/VITA-Group/DiffSES">[Code]</a></li>
<li> H. Yang*, Y. Liang, X. Guo, L. Wu, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Pruning Before Training May Improve Generalization, Provably”</b><br> Journal of Machine Learning Research (JMLR), 2024. <a href="">[Paper]</a> <a href="">[Code]</a></li>
<li> H. Yang*, Z. Jiang*, R. Zhang, Y. Liang, and Z. Wang<br> <b style="color:rgb(71, 71, 71)">“Neural Networks with Sparse Activation Induced by Large Bias: Tighter Analysis with Bias-Generalized NTK”</b><br> Journal of Machine Learning Research (JMLR), 2024. <a href="">[Paper]</a> <a href="">[Code]</a></li>
<li> D. Xu*, Y. Yuan, M. Mardani, S. Liu, J. Song, Z. Wang, and A. Vahdat<br> <b style="color:rgb(71, 71, 71)">“AGG: Amortized Generative 3D Gaussians for Single Image to 3D”</b><br>Transactions on Machine Learning Research (TMLR), 2024. <a href="https://arxiv.org/abs/2401.04099">[Paper]</a> <a href="https://ir1d.github.io/AGG/">[Code]</a></li>
<li> G. Holste*, M. Lin, R. Zhou, F. Wang, L. Liu, Q. Yan, S. Tassel, K. Kovacs, E. Chew, Z. Lu, Z. Wang, and Y. Peng<br> <b style="color:rgb(71, 71, 71)">“Harnessing the power of longitudinal medical imaging for eye disease prognosis using Transformer-based sequence modeling”</b><br>npj Digital Medicine, 2024. <a href="https://www.nature.com/articles/s41746-024-01207-4">[Paper]</a> <a href="">[Code]</a></li>
<li> G. Holste*, Y. Zhou, S. Wang, A. Jaiswal, M. Lin, S. Zhuge, Y. Yang, D. Kim, T. Nguyen-Mau, M. Tran, J. Jeong, W. Park, J. Ryu, F. Hong, A. Verma, Y. Yamagishi, C. Kim, H. Seo, M. Kang, L. Celi, Z. Lu, R. Summers, G. Shih, Z. Wang, and Y. Peng<br> <b style="color:rgb(71, 71, 71)">“Towards Long-tailed, Multi-label Disease Classification from Chest X-ray”</b><br>Medical Image Analysis, 2024. <a href="https://www.sciencedirect.com/science/article/abs/pii/S136184152400149X?CMX_ID=&SIS_ID=&dgcid=STMJ_219742_AUTH_SERV_PA&utm_acid=216299604&utm_campaign=STMJ_219742_AUTH_SERV_PA&utm_in=DM481041&utm_medium=email&utm_source=AC_">[Paper]</a> <a href="https://bionlplab.github.io/2024_MICCAI_CXRLT/">[Code]</a></li>
<li> G. Li, D. Hoang*, K. Bhardwaj, M. Lin, Z. Wang, and R. Marculescu<br> <b style="color:rgb(71, 71, 71)">“Zero-Shot Neural Architecture Search: Challenges, Solutions, and Opportunities”</b><br> IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024. <a href="https://arxiv.org/abs/2307.01998">[Paper]</a> <a href="https://github.com/SLDGroup/survey-zero-shot-nas">[Code]</a></li>
5 changes: 3 additions & 2 deletions research.html
@@ -227,14 +227,15 @@ <h4>Theme 2: Optimization in Modern ML - Learning to Optimize, Black-box optimiz

<h4>Theme 3: Generative Vision - 3D/4D/Video Synthesis, and Related Applications</h4>
<p>
- Our group's earlier (pre-2021) work includes several influential algorithms for GAN-based image enhancement and editing “in the wild”. More recently (post-2021), we push the boundaries of generative AI for visual tasks, with a focus on 3D/4D reconstruction (<a href="https://arxiv.org/abs/2403.20309">InstantSplat</a>, <a href="https://arxiv.org/abs/2311.17245">LightGaussian</a>, <a href="https://arxiv.org/abs/2312.00451">FSGS</a>, & <a href="https://arxiv.org/abs/2211.16431">NeuralLift-360</a>), novel view synthesis (<a href="https://openreview.net/forum?id=xE-LtsE-xx">GNT</a> & <a href="https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136820712.pdf">SinNeRF</a>), and video generation (<a href="https://arxiv.org/abs/2403.14773">StreamingT2V</a> & <a href="https://arxiv.org/abs/2303.13439">Text2Video-Zero</a>).
+ Our group's earlier (pre-2021) work includes several influential algorithms for GAN-based image enhancement and editing “in the wild”. More recently (post-2021), we push the boundaries of generative AI for visual tasks, with a focus on 3D/4D reconstruction (<a href="https://largespatialmodel.github.io/">LSM</a>, <a href="https://arxiv.org/abs/2403.20309">InstantSplat</a>, <a href="https://arxiv.org/abs/2311.17245">LightGaussian</a>, & <a href="https://arxiv.org/abs/2211.16431">NeuralLift-360</a>), novel view synthesis (<a href="https://openreview.net/forum?id=xE-LtsE-xx">GNT</a> & <a href="https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136820712.pdf">SinNeRF</a>), and video generation (<a href="https://arxiv.org/abs/2403.14773">StreamingT2V</a> & <a href="https://arxiv.org/abs/2303.13439">Text2Video-Zero</a>).
</p>


<b style="color:rgb(68, 68, 68)"><i>Selected Notable Works:</i></b>
<li>Z. Fan*, J. Zhang, W. Cong*, P. Wang*, R. Li, K. Wen, S. Zhou, A. Kadambi, Z. Wang, D. Xu, B. Ivanovic, M. Pavone, and Y. Wang, <b style="color:rgb(71, 71, 71)">“Large Spatial Model: Real-time Unposed Images to Semantic 3D”</b>, Advances in Neural Information Processing Systems (NeurIPS), 2024. <a href="">[Paper]</a> <a href="https://largespatialmodel.github.io/">[Code] </a></li>
<li>Z. Fan*, K. Wang*, K. Wen, Z. Zhu*, D. Xu*, and Z. Wang, <b style="color:rgb(71, 71, 71)">"LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS”</b>, Advances in Neural Information Processing Systems (NeurIPS), 2024. (Spotlight) <a href="https://arxiv.org/abs/2311.17245">[Paper]</a> <a href="https://github.com/VITA-Group/LightGaussian">[Code] </a> </li>
<li>Z. Fan*, W. Cong*, K. Wen, K. Wang*, J. Zhang, X. Ding, D. Xu, B. Ivanovic, M. Pavone, G. Pavlakos, Z. Wang, and Y. Wang, <b style="color:rgb(71, 71, 71)">"InstantSplat: Unbounded Sparse-view Pose-free Gaussian Splatting in 40 Seconds”</b>, arXiv preprint arXiv:2403.20309, 2024. <a href="https://arxiv.org/abs/2403.20309">[Paper]</a> <a href="https://instantsplat.github.io/">[Code] </a> </li>
<li>R. Henschel, L. Khachatryan, D. Hayrapetyan, H. Poghosyan, V. Tadevosyan, Z. Wang, S. Navasardyan, and H. Shi, <b style="color:rgb(71, 71, 71)">"StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text”</b>, arXiv preprint arXiv:2403.14773, 2024. <a href="https://arxiv.org/abs/2403.14773">[Paper]</a> <a href="https://github.com/Picsart-AI-Research/StreamingT2V">[Code] </a> </li>
<li>L. Khachatryan, A. Movsisyan, V. Tadevosyan, R. Henschel, Z. Wang, S. Navasardyan, and H. Shi, <b style="color:rgb(71, 71, 71)">"Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators”</b>, IEEE International Conference on Computer Vision (ICCV), 2023. (Oral) <a href="https://arxiv.org/abs/2303.13439">[Paper]</a> <a href="https://github.com/Picsart-AI-Research/Text2Video-Zero">[Code] </a> <em> (Commercialized as <a href="https://picsart.com/blog/post/introducing-your-new-favorite-unhinged-ai-tool-ai-gif-generator">Picsart AI GIF generator</a>)</em></li>
<li>D. Xu*, Y. Jiang*, P. Wang*, Z. Fan*, Y. Wang*, and Z. Wang, <b style="color:rgb(71, 71, 71)">"NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360◦ Views”</b>, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. (Highlight) <a href="https://arxiv.org/abs/2211.16431">[Paper]</a> <a href="https://vita-group.github.io/NeuralLift-360/">[Code] </a> </li>
<li>M. Varma*, P. Wang*, X. Chen*, T. Chen*, S. Venugopalan, and Z. Wang, <b style="color:rgb(71, 71, 71)">"Is Attention All That NeRF Needs?”</b>, International Conference on Learning Representations (ICLR), 2023. <a href="https://openreview.net/forum?id=xE-LtsE-xx">[Paper]</a> <a href="https://github.com/VITA-Group/GNT">[Code]</a> </li>
5 changes: 0 additions & 5 deletions resource.html
@@ -159,11 +159,6 @@ <h2>Open Calls for Papers/Participation</h2>
<div class="trend-entry d-flex">
<div class="trend-contents">
<ul>
- <li>ECCV <a href="https://dd-challenge-main.vercel.app/">"Sometimes Less is More: the 1st Dataset Distillation Challenge"</a>, Milan, Italy, Sep 2024</li>

- <li>MICCAI Challenge on <a href="https://bionlplab.github.io/2024_MICCAI_CXRLT/">Long-tailed, Multi-label, and Zero-shot Classification on Chest X-rays (CXR-LT)</a>, Morocco, Oct 2024</li>


<li>NeurIPS <a href="https://llm-pc.github.io/">LLM Privacy Challenge</a>, Vancouver, Dec 2024</li>

<li>NeurIPS Workshop on <a href="https://genai4health.github.io/">GenAI for Health: Potential, Trust and Policy Compliance</a>, Vancouver, Dec 2024</li>
