Zhangyang (Atlas) Wang
Address: 328C H.R. Bright Building, College Station, TX, 77843-3112
Phone: +1 979-845-7977
Email (preferred): firstname.lastname@example.org
Most Recent Updates
- 2019 (as of 03/25): Our group has 10 papers accepted so far, including 2 CVPR, 1 Bioinformatics, 1 IEEE TIP, 1 IEEE TMI, etc.
- 2018: Our group has 20+ papers accepted, including 2 NeurIPS, 1 ICLR, 1 ICML, 1 AISTATS, 1 ECCV, 1 KDD, 1 IJCAI, 1 AAAI, 3 IEEE TIP, etc. We also won 2 challenges in CVPR'18 and ECCV'18.
[See more in News]
- Multiple openings for Ph.D. and visiting students (scroll down to the page bottom).
- Call for Participation + Papers: ICCV 2019 Workshop and Challenge on Real-World Face and Object Recognition from Low-Quality Images and Videos (FOR-LQ). [Website coming soon]
- Call for Participation + Papers: CVPR 2019 UG2+ Workshop and Prize Challenge: Bridging the Gap between Computational Photography and Visual Recognition. $60,000 prize for winners! [Website]
- Call for Papers: the 4th International Workshop on Biomedical infOrmatics with Optimization and Machine learning (BOOM), co-located with IJCAI'19. [Website] [CFP]
- Call for Participation: ICIP 2019 Grand Challenge on Realistic Single Image Recovering in Adverse Weathers [Website]
I live in the blessed fields of machine learning and computer vision. My research interests constantly evolve: below are just some recent topics I work on. I always stay open to being intellectually inspired by new things.
[A] Enhancing Deep Learning Robustness, Efficiency, and Privacy
At a high level, I pursue the concept of "co-design": jointly optimizing multiple desired aspects throughout the machine learning lifecycle (from model selection and architecture design, to training, to deployment). A good example would be building a simultaneously accurate, (provably) robust, efficient, and privacy-preserving deep learning model. My latest enthusiasms are devoted to examining three aspects:
- Robustness: We are keen on improving and/or certifying model robustness, under both "standard adverse conditions" (input domain shifts) [CVPR'16, ICCV'17, AAAI'18, IJCAI'18, IEEE TIP'18, etc.], and "adversarial perturbations" [arXiv'19]. We are also interested in the capability of uncertainty quantification [AISTATS'19].
- Efficiency: We look at reducing model size [ICML'18], inference energy cost [NeurIPS'18 workshop], and inference memory cost [CVPR'19]. We are actively collaborating with hardware experts [Rice EiC lab] to pursue efficient algorithm-hardware co-design.
- Privacy: We address privacy leakage risks that arise in the sharing of training data [ECCV'18] and the release of trained models, using techniques from adversarial learning and differential privacy.
[B] Deep Learning for Optimization, and Optimization for Deep Learning
- How to utilize deep learning to accelerate classical model-based optimization for solving inverse problems, with theoretical guarantees [NeurIPS'18, ICLR'19].
- How to design and train better deep models by drawing extensively on tools and techniques derived from, or inspired by, classical optimization [CVPR'16, AAAI'16, KDD'18, NeurIPS'18, etc.].
- I am increasingly interested in neural architecture search (NAS) and learning to learn (L2L), as special tools for solving intractable optimization problems. We recently submitted a few works in this direction.
[C] Applications: Computer Vision and Interdisciplinary Works
- I spent much of my Ph.D. working on low-level computer vision (image enhancement and restoration), and I still keep an active research line here, e.g., [ICCV'17, IEEE TIP'18].
- In high-level computer vision, I have recently worked on semantic segmentation [CVPR'19], person re-identification, and robust object detection [ACM MM'16]. I previously worked on image clustering and hashing.
- I have a tremendous interest in exploiting machine learning to solve scientific and societal challenges. Through interdisciplinary collaborations, we strive to make an impact in the fields of bioinformatics [Bioinformatics'19], geoscience [Remote Sensing'19], medical imaging [IEEE TMI'19], healthcare [THSE'18], and even artistic design [ACM MM'18, ACM MM'15].
Notes to Prospective Students
- I am always looking for strong Ph.D. students, every semester. Research assistantships (RAs) will be provided. Interested candidates should email me their CV, transcripts, and a brief research statement.
- TAMU is a great place for AI/ML/CV research. TAMU is most renowned for its world-class College of Engineering (ranked 11th by US News 2017). According to csrank.org, in 2018 the TAMU CSE department ranked 26th nationwide across all CS research areas, and 22nd in the specific field of AI.
- I firmly believe in the value of two things:
- a truly deep understanding of your problem of interest - don’t naively plug and play "hot" tools;
- a solid background in, and a true passion for, mathematics - I constantly benefit from digging deeper into matrix analysis, optimization, and statistical learning.
- Eventually, nothing is more important than a true enthusiasm and devotion to research.
- I am hands-on and work very closely with every one of my students. I also provide strong support to my students for internship, visiting, and collaboration opportunities.
- We welcome highly motivated M.S. students and undergraduates at TAMU to explore research with us. Self-funded visiting students/scholars are welcome to apply.