Zhangyang (Atlas) Wang

Assistant Professor (Fall 2017 - present) [CV]

  • Machine Learning and Computer Vision Researcher [Google Scholar] [GitHub]
  • Ph.D., ECE@UIUC, 2016; B.S., EEIS@USTC, 2012
  • Recent Honors: IBM Faculty Research Award (2020), TEES Young Faculty Fellow (2020), TAMU Engineering Genesis Award (2019), ICCV/CVPR/ECCV Challenge Prizes (2019, 2018), ...

Department of Computer Science and Engineering

(Adjunct) Department of Electrical and Computer Engineering

Texas A&M University, College Station, TX

Address: 328C H.R. Bright Building, College Station, TX, 77843-3112

Email: atlaswang@tamu.edu

Most Recent Updates


  • 2020 (so far): Our group has published 3 CVPR, 1 AISTATS, 1 ISCA, 1 IEEE TIP, and more. I also received an IBM Faculty Research Award and was named a TEES Young Faculty Fellow (TAMU’s premier junior engineering faculty award).
  • 2019: Our group published 4 ICLR, 3 NeurIPS, 5 ICCV, 1 ICML, 2 CVPR, 1 MICCAI, 2 AAAI, 1 Bioinformatics, 1 IEEE TIP, 1 IEEE TMI, and more. We won the 3rd prize in the ICCV'19 WIDER Challenge Track 4, and I also received a TAMU Engineering Genesis Award.
  • 2018: Our group published 1 ICLR, 2 NeurIPS, 1 ICML, 1 AISTATS, 1 ECCV, 1 KDD, 1 IJCAI, 1 AAAI, 3 IEEE TIP, and more. We also won 2 challenge prizes (winner of the CVPR'18 UG2 Challenge; 2nd prize in ECCV'18 ChaLearn Track 3).

[See more in News]

Open Calls


  • Multiple openings for Ph.D. and visiting students (scroll down to the bottom of this page).
  • Call for Participation & Papers: ECCV 2020 Workshop on Real-world Recognition from Low-quality Inputs and 1st Tiny Object Detection Challenge (RLQ-TOD) [Website]
  • Call for Papers: IJCAI 2020 International Workshop on Biomedical infOrmatics with Optimization and Machine learning (BOOM) [Website]
  • Call for Participation & Papers: CVPR 2020 Workshop and Prize Challenge: Bridging the Gap between Computational Photography and Visual Recognition (UG2+) [Website]


Research Narrative

I live in the blessed world of machine learning and computer vision. My research interests constantly evolve: below are just some of the topics I have recently worked on. I always stay open to being intellectually excited and inspired by new things.


[A] As Goals -- Enhancing Deep Learning Robustness, Efficiency, and Privacy

I seek to build deep learning solutions that go well beyond accurate, data-driven predictors. IMHO, an ideal model should at least: (1) be robust to perturbations and attacks (and therefore trustworthy); (2) be efficient and hardware-friendly (for deployment on practical platforms); and (3) be designed to respect individual privacy and fairness.

    • Robustness: We are keen on improving deep model robustness, under both "standard adverse conditions" (input domain shifts) [CVPR'16, ICCV'17, AAAI'18, IJCAI'18, IEEE TIP'18, IEEE TIP'20, etc.] and "adversarial input attacks" [NeurIPS'19, ICLR'20, CVPR'20] (see the attack sketch after this list). We are also interested in uncertainty quantification for "knowing when to fail" [AISTATS'19, AAAI'20, AISTATS'20], and in developing new robustness evaluation metrics [ICLR'20].
    • Efficiency: We have conducted pioneering work on energy-efficient training of deep models [ICLR'20, NeurIPS'19]. For inference, we look at reducing model size [ICML'18], energy cost [NeurIPS'18 workshop, IEEE JSTSP'19, AAAI'20], and memory cost [CVPR'19]. We actively collaborate with hardware experts to pursue efficient algorithm-hardware co-design [ISCA'20].
    • Privacy: We address the privacy leakage risks that arise in sharing training data [arXiv'19, ECCV'18] and in sharing trained models, using techniques from adversarial learning and differential privacy. We released the first privacy-preserving video recognition dataset, PA-HMDB51.
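To make "adversarial input attacks" concrete, below is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch. The names `model`, `x`, `y`, and the budget `eps` are hypothetical placeholders (a trained classifier, a normalized image batch, and its labels); this is a generic illustration, not the specific method of any paper above.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step FGSM: perturb x in the direction that most increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take one signed-gradient ascent step on the loss, then clamp the
    # result back to the valid [0, 1] image range.
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```

A robust model should keep its prediction stable on `fgsm_attack(model, x, y)`; stronger multi-step variants (e.g., PGD) simply iterate this step with projection.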


[B] As Toolkits -- Automated Machine Learning (AutoML), and Learning-Augmented Optimization

I am increasingly enthusiastic about the rising and important field of AutoML, both in consolidating its theoretical underpinnings and in broadening its practical applicability. State-of-the-art ML systems consist of complex pipelines, with choices of model architectures, algorithms, and hyperparameters, as well as other configuration details, to be tuned for optimal performance. They often further need to be co-designed under multiple goals and constraints. I consider AutoML a powerful tool and a central hub for addressing those design challenges tremendously faster and better, as the toy sketch right below illustrates.
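Here is a deliberately tiny, hypothetical example of the kind of joint pipeline tuning that AutoML automates, using off-the-shelf scikit-learn random search; real AutoML systems explore far larger design spaces with smarter search strategies.

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# A toy dataset and a two-stage pipeline (preprocessing + classifier).
X, y = make_classification(n_samples=500, random_state=0)
pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])

# Jointly sample an algorithmic choice (kernel) and a hyperparameter (C).
search = RandomizedSearchCV(
    pipe,
    {"clf__C": loguniform(1e-2, 1e2), "clf__kernel": ["rbf", "linear"]},
    n_iter=20,
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```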

    • Neural Architecture Search (NAS): We are primarily interested in unleashing the potential of NAS on more complicated model types, task scenarios, and data modalities, which reveal magnitudes of research challenges that easily defeat existing NAS algorithms developed for simpler models and "toy" tasks. We developed the first NAS for GANs [ICCV'19], and the state-of-the-art NAS for real-time segmentation [ICLR'20].
    • Learning-Augmented Optimization: A fast-rising subfield of AutoML investigates using ML to develop data/task-specific optimization algorithms, a.k.a. learning to learn or learning to optimize. But this field is still immature due to two bottlenecks, both of which we seek to address:
        • Theory: While classic optimization results often provide worst-case guarantees, limited theory exists for such learned optimizers. Our recent works pioneer establishing convergence (rates) and simplifying parameter complexity, e.g., for convex problems such as LASSO [NeurIPS'18, ICLR'19] (see the LISTA-style sketch after this list), and for plug-and-play optimization with learned priors [ICML'19]. We recently studied the "generalizability" of learned optimizers to "unseen optimization problems" for the first time [arXiv'20].
        • Practice: Existing learned optimizers mostly study continuous, differentiable optimization problems. To enlarge their practical scope, especially for handling intractable optimization, we introduced learned optimizers to Bayesian swarm optimization [NeurIPS'19] and to training graph networks [CVPR'20], both for the first time. Besides, our earlier works suggested the benefit of incorporating optimization-inspired building blocks into designing better deep models [CVPR'16, AAAI'16, KDD'18, NeurIPS'18, etc.].
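As a concrete instance of learning to optimize, below is a minimal LISTA-style sketch in PyTorch, where a few ISTA iterations for LASSO are unrolled into a network with learnable weights. The dimensions `m`, `n` and the layer count are hypothetical, and this simplified illustration of the general idea should not be read as the exact model from the papers above.

```python
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    # Proximal operator of the L1 norm: the core nonlinearity of ISTA.
    return torch.sign(x) * torch.relu(torch.abs(x) - theta)

class LISTA(nn.Module):
    """Unrolls K ISTA iterations for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    replacing the hand-derived matrices with learnable ones."""

    def __init__(self, m, n, n_layers=8):
        super().__init__()
        self.We = nn.Linear(m, n, bias=False)  # learned stand-in for A^T / L
        self.S = nn.Linear(n, n, bias=False)   # learned stand-in for I - A^T A / L
        self.theta = nn.Parameter(0.1 * torch.ones(n_layers))  # per-layer thresholds
        self.n_layers = n_layers

    def forward(self, b):
        x = soft_threshold(self.We(b), self.theta[0])
        for k in range(1, self.n_layers):
            x = soft_threshold(self.We(b) + self.S(x), self.theta[k])
        return x

# Training minimizes ||LISTA(b) - x_star||^2 over instances (b, x_star) sampled
# from the target problem family, so the unrolled network becomes an optimizer
# specialized to that distribution of LASSO problems.
```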


[C] As Applications -- Computer Vision and Interdisciplinary Problems

Sponsorship

VITA is gratefully sponsored by the following (details available upon request):

  • Government: NSF (4 grants), DoD (4 grants; including DARPA and ARL)
  • Industry: IBM Research, Adobe, Microsoft, NEC Labs, Chevron, Walmart, Kuaishou, USAA, Varian Medical Systems, MoodMe
  • University: TAMU X-Grant (2018), T3-Grant (2019), PESCA (2019)

Notes to Prospective Students


  • I am always looking for truly strong Ph.D. students, every semester. Research assistantships (RAs) will be provided. Interested candidates should email me their CV, transcripts, and a brief research statement.
  • TAMU is a great place for AI/ML/CV research. TAMU is most renowned for its world-class College of Engineering (ranked 11th by US News in 2017). According to csrankings.org, in 2018 the TAMU CSE department ranked 26th nationwide across all CS research areas, and 22nd in the specific field of AI.
  • I firmly believe in the value of two things:
      1. a truly deep understanding of your problem of interest - don’t naively plug and play "hot" tools;
      2. a solid background in and a true passion for mathematics - I constantly benefit from digging deeper into matrix analysis, optimization, and statistical learning.
  • Ultimately, nothing is more important than true enthusiasm for and devotion to research.
  • I am hands-on and work very closely with every one of my students. I also provide strong support to my students for internship, visiting, and collaboration opportunities.
  • We welcome highly motivated M.S. students and undergraduates at TAMU to explore research with us. Self-funded visiting students/scholars are also welcome to apply.