Zhangyang (Atlas) Wang
Assistant Professor (Fall 2017 - ) [CV]
- Machine Learning and Computer Vision Researcher [Google Scholar] [GitHub]
- Ph.D., ECE@UIUC, 2016; B.S., EEIS@USTC, 2012
Department of Computer Science and Engineering
(Adjunct) Department of Electrical and Computer Engineering
Texas A&M University, College Station, TX
Address: 328C H.R. Bright Building, College Station, TX, 77843-3112
Phone: +1 979-845-7977
Email (preferred): firstname.lastname@example.org
Most Recent Updates
- 2019 (so far): Our group has published 3 NeurIPS papers, 5 ICCV, 1 ICML, 2 CVPR, 1 MICCAI, 2 AAAI, 1 Bioinformatics, 1 IEEE TIP, 1 IEEE TMI, and more. We also won the 3rd prize in the ICCV'19 WIDER Challenge (Track 4).
- 2018: Our group published 2 NeurIPS papers, 1 ICLR, 1 ICML, 1 AISTATS, 1 ECCV, 1 KDD, 1 IJCAI, 1 AAAI, 3 IEEE TIP, and more. We also won two challenges (winner of the CVPR'18 UG2 challenge; 2nd prize in ECCV'18 ChaLearn Track 3).
[See more in News]
- Multiple openings for Ph.D. and visiting students (scroll down to the page bottom).
- Call for Participation & Papers: CVPR 2020 Workshop and Prize Challenge: Bridging the Gap between Computational Photography and Visual Recognition (UG2+) [Website TBA]
I live in the blessed world of machine learning and computer vision. My research interests constantly evolve; below are just some recent topics I work on. I always stay open to being intellectually excited and inspired by new things.
[A] Enhancing Deep Learning Robustness, Efficiency, and Privacy
I seek to build deep learning solutions that go well beyond accurate data-driven predictors. In my view, an ideal model should at least: (1) be provably robust to perturbations and attacks (and therefore trustworthy); (2) be efficient and hardware-friendly (for deployment on practical platforms); and (3) be designed to respect individual privacy and fairness.
- Robustness: We are keen on improving and/or certifying model robustness, under both "standard adverse conditions" (input domain shifts) [CVPR'16, ICCV'17, AAAI'18, IJCAI'18, IEEE TIP'18, etc.], and "adversarial perturbations" (malicious input attacks) [NeurIPS'19]. We are also interested in uncertainty quantification as another powerful tool for "knowing when to fail" [AISTATS'19, AAAI'20].
- Efficiency: We conducted pioneering work on energy-efficient training of deep models [NeurIPS'19]. For inference, we look at reducing model size [ICML'18], energy cost [NeurIPS'18 workshop, IEEE JSTSP'19, AAAI'20], and memory cost [CVPR'19]. We actively collaborate with hardware experts to pursue efficient algorithm-hardware co-design.
- Privacy: We address privacy leakage risks that arise when sharing training data [arXiv'19, ECCV'18] or trained models, using techniques from adversarial learning and differential privacy. We recently released PA-HMDB51, the first privacy-preserving video recognition dataset.
[B] Deep Learning for Optimization, and Optimization for Deep Learning
- How to utilize deep learning to accelerate classical model-based optimization, for solving inverse problems or even more complicated optimization, with theoretical guarantees [NeurIPS'18, ICLR'19, ICML'19].
- How to design and train better deep models, by extensively drawing on tools and techniques derived from or inspired by classical optimization [CVPR'16, AAAI'16, KDD'18, NeurIPS'18, etc.].
- I am increasingly enthusiastic about the broader picture of AutoML, including neural architecture search (NAS) and learning to learn (L2L), as special and powerful tools for solving complicated tasks and intractable optimization. Our latest works include [ICCV'19, NeurIPS'19] and a few others under submission.
[C] Applications: Computer Vision and Interdisciplinary Problems
- I spent much of my Ph.D. working on low-level computer vision (image enhancement and restoration). I still keep an active research profile here, e.g., [ICCV'17, IEEE TIP'18, ICCV'19, arXiv'19, WACV'20], to name just a few.
- On high-level computer vision, I have recently worked on semantic segmentation [CVPR'19], UAV-based visual perception and control [ICCV'19], re-identification [ICCV'19, WACV'20], and style recognition & transfer [ACM MM'15, ICCV'19]. I previously worked on image clustering, hashing, and perceptual assessment.
- I have a tremendous interest in exploiting machine learning to solve scientific and societal challenges. Through interdisciplinary collaborations, we strive to make an impact in the fields of bioinformatics [Bioinformatics'19], geoscience [Remote Sensing'19, CVPR'19 workshop], medical imaging [IEEE TMI'19, MICCAI'19], and healthcare [THSE'18].
VITA is gratefully sponsored by (details available upon request):
- Government: NSF (4 grants), DoD (4 grants; including DARPA and ARL)
- Industry: Adobe, NEC Labs America, USAA, Chevron, Varian Medical Systems, MoodMe, Kuaishou, Walmart
- University: TAMU X-Grant (2018), T3-Grant (2019), PESCA (2019)
Notes to Prospective Students
- I am always looking for strong Ph.D. students, in every semester. Research assistantships (RAs) will be provided. Interested candidates should email me their CV, transcripts, and a brief research statement.
- TAMU is a great place for AI/ML/CV research. TAMU is most renowned for its world-class College of Engineering (ranked 11th by US News 2017). According to csrankings.org, in 2018 the TAMU CSE department was ranked 26th nationwide across all CS research areas, and 22nd in the specific field of AI.
- I firmly believe in the value of two things:
- a truly deep understanding of your problem of interest - don’t naively plug and play "hot" tools;
- a solid background in, and a true passion for, mathematics - I constantly benefit from digging deeper into matrix analysis, optimization, and statistical learning.
- Ultimately, nothing is more important than true enthusiasm for and devotion to research.
- I am hands-on and work very closely with every one of my students. I also provide strong support for my students' internship, visiting, and collaboration opportunities.
- We welcome highly motivated M.S. students and undergraduates at TAMU to explore research with us. Self-funded visiting students/scholars are welcome to apply.