Sarah Tan
October 28, 2016

WiML 2016 talks are online at https://www.periscope.tv/wimlworkshop

Welcome to WiML 2016! For a printable version of the information below, see our program book: https://docs.google.com/document/d/18ssboiAJTkF-smbH274gODdZXVD7MnX_tUQG5OnfQGY. Our poster session (Mon Dec 5, 1.30-3.30pm) is open to WiML and NIPS attendees.

Sunday, Dec 4

12.00 – 14.00 – Registration desk open. Entrance Hall (enter from Entrance C)

14.00 – 19.00 – Workshop on Effective Communication by Katherine Gorman of Talking Machines and Amazon (Optional). Invitation-only, RSVP required

16.00 – 18.00 – Amazon Panel & Networking (Optional). Invitation-only, RSVP required

17.00 – 19.00 – Facebook Lean-In Circles (Optional). Invitation-only, RSVP required

19.15 – 22.00 – WiML Dinner (Optional). Separate registration required. Dedicated to Amazon

22.00 – 23.30 – OpenAI Happy Hour (Optional). Invitation-only, RSVP required

Monday, Dec 5

All events are held in Rooms 111 and 112, level P1, CCIB except for the poster session, which takes place in Area 5+6+7+8, level P0.

07.00 – 08.00 – Registration and Breakfast. Dedicated to Microsoft and OpenAI. Registration desk at Entrance Hall (enter from Entrance C); Breakfast in Rooms 111 and 112, level P1

08.00 – 08.05 – Opening Remarks

08.05 – 08.40 – Invited Talk: Maya Gupta, Google Research. Designing Algorithms for Practical Machine Learning. [Abstract] [Video]

08.40 – 08.55 – Contributed Talk: Maithra Raghu, Cornell Univ / Google Brain. On the Expressive Power of Deep Neural Networks. [Abstract] [Video]

08.55 – 09.10 – Contributed Talk: Sara Magliacane, VU Univ Amsterdam. Ancestral Causal Inference. [Abstract] [Video] [Slides]

09.10 – 09.15 – Break

09.15 – 10.15 – Research Roundtables (Coffee served until 9.40am). Dedicated to Apple and Facebook

10.15 – 10.50 – Invited Talk: Suchi Saria, Johns Hopkins Univ. Towards a Reasoning Engine for Individualizing Healthcare. [Abstract] [Video]

10.50 – 11.05 – Contributed Talk: Madalina Fiterau, Stanford Univ. Learning Representations from Time Series Data through Contextualized LSTMs. [Abstract] [Video]

11.05 – 11.10 – Break

11.10 – 11.25 – Contributed Talk: Konstantina Christakopoulou, Univ Minnesota. Towards Conversational Recommender Systems. [Abstract] [Video] [Slides]

11.25 – 12.00 – Invited Talk: Anima Anandkumar, Amazon / UC Irvine. Large-Scale Machine Learning through Spectral Methods: Theory & Practice. [Abstract] [Video] [Slides]

12.00 – 13.00 – Career & Advice Roundtables

13.00 – 13.30 – Lunch and Poster Setup. Dedicated to DeepMind and Google

13.30 – 15.30 – Poster Session (Coffee served until 2pm). Open to WiML and NIPS attendees. Dedicated to our Silver Sponsors: Capital One, D.E. Shaw, Intel, Twitter. Area 5+6+7+8, level P0; Round 1: 1.40pm – 2.30pm; Round 2: 2.30pm – 3.20pm; Poster Removal: 3.20pm – 3.30pm

15.30 – 15.45 – Raffle and WiML Updates: Tamara Broderick, MIT and Sinead Williamson, UT Austin. [Video]

15.45 – 16.00 – Contributed Talk: Amy Zhang, Facebook. Using Convolutional Neural Networks to Estimate Population Density from High Resolution Satellite Images. [Abstract] [Video]

16.00 – 16.35 – Invited Talk: Jennifer Chayes, Microsoft Research. Graphons and Machine Learning: Estimation of Sparse Massive Networks. [Abstract] [Video]

16.35 – 16.40 – Closing Remarks

 

NIPS Main Conference (NIPS registration required)

17.00 – NIPS Opening Remarks. Area 1 + 2, level P0

Invited Talks

Jennifer Chayes, Microsoft Research
Graphons and Machine Learning: Estimation of Sparse Massive Networks

[Video]

Abstract: There are numerous examples of sparse massive networks, including the Internet, WWW and online social networks. How do we model and learn these networks? In contrast to conventional learning problems, where we have many independent samples, it is often the case for these networks that we can get only one independent sample. How do we use a single snapshot today to learn a model for the network, and hence predict a similar, but larger network in the future? In the case of relatively small or moderately sized networks, it’s appropriate to model the network parametrically, and attempt to learn these parameters. For massive networks, a non-parametric representation is more appropriate. I review the theory of graph limits (graphons), developed over the last decade, to describe limits of dense graphs and, more recently, sparse graphs of unbounded degree, such as power-law graphs. I then show how to use these graphons to give consistent estimators of non-parametric models of sparse networks, and moreover how to do this in a way that protects the privacy of individuals on the network.
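
To make the graphon idea concrete, here is a minimal numpy sketch (our illustration, not code from the talk) of the dense-graphon sampling model: each vertex i draws a latent position u_i uniformly from [0,1], and edge {i,j} appears independently with probability W(u_i, u_j). The function name and the example graphon are hypothetical choices; the sparse networks the talk addresses additionally rescale the edge density with n.

```python
import numpy as np

def sample_from_graphon(W, n, rng=None):
    """Sample an n-vertex undirected graph from a graphon W: [0,1]^2 -> [0,1]."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n)                  # latent vertex positions u_i
    P = W(u[:, None], u[None, :])            # edge probabilities W(u_i, u_j)
    upper = np.triu(rng.uniform(size=(n, n)) < P, k=1)  # independent edges, no self-loops
    A = upper.astype(int)
    return A + A.T                           # symmetric adjacency matrix

# Example: the product graphon W(x, y) = x * y (a hypothetical choice)
A = sample_from_graphon(lambda x, y: x * y, n=200)
```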

Bio: Jennifer Tour Chayes is Distinguished Scientist, Managing Director and Cofounder of Microsoft Research New England and Microsoft Research New York City. Before joining Microsoft in 1997, Chayes was for many years Professor of Mathematics at UCLA. Chayes is the author of over 130 academic papers and the inventor of over 30 patents. Her research areas include phase transitions in discrete mathematics and computer science, structural and dynamical properties of self-engineered networks, graph theory, graph algorithms, algorithmic game theory, and computational biology. Chayes is one of the inventors of the theory of graph limits, which is widely used for machine learning of massive networks. Chayes holds a BA in biology and physics from Wesleyan, where she graduated first in her class, and a PhD in mathematical physics from Princeton. She did postdoctoral work in the Mathematics and Physics Departments at Harvard and Cornell. She is the recipient of an NSF Postdoctoral Fellowship, a Sloan Fellowship, the UCLA Distinguished Teaching Award, and the ABI Women of Vision Leadership Award. She has twice been a member of the IAS in Princeton. Chayes is a Fellow of the American Association for the Advancement of Science, the Fields Institute, the Association for Computing Machinery, and the American Mathematical Society, and an Elected Member of the American Academy of Arts and Sciences. She is the winner of the 2015 John von Neumann Lecture Award, the highest honor of the Society for Industrial and Applied Mathematics. Chayes received an Honorary Doctorate from Leiden University in 2016.

Maya Gupta, Google Research
Designing Algorithms for Practical Machine Learning

[Video]

Abstract: Machine learning is now widely used in industry, and more and more surprising real-world challenges are being discovered. I’ll highlight a few of these open problems as well as some example solutions, focusing on interpretability, churn, efficiency, train/test sampling, and fairness.

Bio: Maya Gupta founded and runs the GlassBox Machine Learning R&D Group at Google, focusing on designing and delivering human-friendly machine learning solutions. Gupta joined Google Research in 2012. Before Google, Gupta was a professor at the University of Washington for ten years, after doing her PhD at Stanford University with Bob Gray and Rob Tibshirani. She has also worked for Ricoh’s California Research Lab, NATO’s Undersea Research Center, Hewlett Packard R&D, and AT&T Labs, and founded and runs the jigsaw puzzle company Artifact Puzzles.

Anima Anandkumar, Amazon / UC Irvine
Large-Scale Machine Learning through Spectral Methods: Theory & Practice

[Video] [Slides]

Abstract: Most learning problems can be cast as optimization tasks which are non-convex. Developing fast and guaranteed approaches for solving non-convex problems is a grand challenge. I will show how spectral optimization can reach the globally optimal solution for many learning problems despite being non-convex. This includes unsupervised learning of latent variable models, training neural networks and reinforcement learning of partially observable Markov decision processes. It involves spectral decomposition of moment matrices and tensors. Tensors are rich structures that can encode higher order relationships in data. In practice, tensor methods yield enormous gains both in running times and learning accuracy over traditional methods such as variational inference. I will end the talk with ongoing efforts to run spectral methods at scale on AWS infrastructure.
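
As a concrete illustration of the spectral machinery (a minimal sketch under simplifying assumptions, not the talk’s implementation): for a symmetric, orthogonally decomposable 3-way tensor T ≈ Σ_i λ_i v_i ⊗ v_i ⊗ v_i, the tensor power iteration repeatedly applies the map u → T(I, u, u) to recover one component at a time. In practice T would be an empirical moment tensor, whitened first; the function names below are ours.

```python
import numpy as np

def tensor_power_method(T, n_iter=100, rng=None):
    """Recover one (eigenvalue, eigenvector) pair of a symmetric,
    orthogonally decomposable 3-way tensor via power iteration."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        u = np.einsum('abc,b,c->a', T, u, u)      # the map u -> T(I, u, u)
        u /= np.linalg.norm(u)
    lam = np.einsum('abc,a,b,c->', T, u, u, u)    # eigenvalue T(u, u, u)
    return lam, u

# Synthetic check: two orthogonal components with weights 2 and 1.
v1, v2 = np.eye(3)[0], np.eye(3)[1]
outer3 = lambda v: np.einsum('a,b,c->abc', v, v, v)
T = 2.0 * outer3(v1) + 1.0 * outer3(v2)
lam, v = tensor_power_method(T)                   # typically finds lam ~ 2.0, v ~ v1
T -= lam * outer3(v)                              # deflate, then repeat to find v2
```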

Bio: Anima Anandkumar is a principal scientist at Amazon Web Services, and is currently on leave from UC Irvine, where she is an associate professor. Her research interests are in the areas of large-scale machine learning, non-convex optimization and high-dimensional statistics. In particular, she has been spearheading the development and analysis of tensor algorithms. She is the recipient of several awards such as the Alfred P. Sloan Fellowship, Microsoft Faculty Fellowship, Google Research Award, ARO and AFOSR Young Investigator Awards, NSF CAREER Award, Early Career Excellence in Research Award at UCI, Best Thesis Award from the ACM SIGMETRICS society, IBM Fran Allen PhD Fellowship and several best paper awards. She has been featured in a number of forums such as the Quora ML session, Huffington Post, Forbes, O’Reilly Media, and so on. She received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from Cornell University in 2009. She was a postdoctoral researcher at MIT from 2009 to 2010, an assistant professor at UC Irvine between 2010 and 2016, and a visiting researcher at Microsoft Research New England in 2012 and 2014.

Suchi Saria, Johns Hopkins Univ
Towards a Reasoning Engine for Individualizing Healthcare

[Video]

Abstract: Healthcare is in the early stages of a digital revolution. In this talk, I will give my perspective on how advances in machine intelligence are likely to play a critical role in optimizing the delivery of healthcare. At its core, the fundamental computational challenges are to integrate the diversity of noisy measurements that are collected on an individual over time, and to provide estimates of the individual’s future trajectory in order to facilitate decision making. I will describe one or two example directions where there is opportunity for exciting work. I’m also giving a tutorial later that day on this topic, so you’re welcome to join if interested (NIPS tutorial registration needed).
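
One classical building block for the trajectory-estimation problem the abstract describes is state-space filtering: maintain a belief over a latent state and update it as each noisy measurement arrives. Below is a minimal one-dimensional Kalman filter sketch, offered as a generic illustration of this idea rather than the speaker’s method; the model and parameter names are our assumptions.

```python
import numpy as np

def kalman_filter_1d(y, q=0.1, r=1.0):
    """Local-level model: x_t = x_{t-1} + N(0, q);  y_t = x_t + N(0, r).
    Returns the filtered estimate of the latent trajectory x."""
    x, p = y[0], 1.0                  # initial state estimate and its variance
    xs = []
    for obs in y:
        p = p + q                     # predict: uncertainty grows by process noise
        k = p / (p + r)               # Kalman gain: trust in the new measurement
        x = x + k * (obs - x)         # correct the estimate toward the observation
        p = (1.0 - k) * p
        xs.append(x)
    return np.array(xs)

# Example: denoise a vitals-like signal sampled over time
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
y = np.sin(t) + rng.normal(scale=0.5, size=t.size)
x_hat = kalman_filter_1d(y)           # smoothed estimate of the underlying trend
```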

Bio: Suchi Saria is an assistant professor of computer science, health policy and statistics at Johns Hopkins University. Her research interests are in statistical machine learning and “precision” healthcare. Specifically, her focus is in designing novel data-driven computing tools for optimizing care delivery. Her work is being used to drive electronic surveillance for reducing adverse events in the inpatient setting and to individualize disease management in complex, chronic diseases. She received her PhD from Stanford University with Prof. Daphne Koller. Her work has received recognition in the form of two cover articles in Science Translational Medicine (2010, 2015), paper awards by the Association for Uncertainty in Artificial Intelligence (2007) and the American Medical Informatics Association (2011), an Annual Scientific Award by the Society of Critical Care Medicine (2014), a Rambus Fellowship (2004-2010), an NSF Computing Innovation Fellowship (2011), and competitive awards from the Gordon and Betty Moore Foundation (2013) and Google Research (2014). In 2015, she was selected by IEEE Intelligent Systems for its “AI’s 10 to Watch” list. In 2016, she was selected as a DARPA Young Faculty awardee and to Popular Science’s “Brilliant 10”.

WiML Updates

Tamara Broderick, MIT & Sinead Williamson, UT Austin

[Video]

Bio: Tamara Broderick is the ITT Career Development Assistant Professor in the Department of Electrical Engineering and Computer Science at MIT. She is a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), the Center for Statistics, and the Institute for Data, Systems, and Society (IDSS). She completed her Ph.D. in Statistics with Professor Michael I. Jordan at the University of California, Berkeley in 2014. Previously, she received an AB in Mathematics from Princeton University (2007), a Master of Advanced Study for completion of Part III of the Mathematical Tripos from the University of Cambridge (2008), an MPhil by research in Physics from the University of Cambridge (2009), and an MS in Computer Science from the University of California, Berkeley (2013). Her recent research has focused on developing and analyzing models for scalable Bayesian machine learning—especially Bayesian nonparametrics. She has been awarded the ISBA Lifetime Members Junior Researcher Award, the Savage Award (for an outstanding doctoral dissertation in Bayesian theory and methods), the Evelyn Fix Memorial Medal and Citation (for the Ph.D. student on the Berkeley campus showing the greatest promise in statistical research), the Berkeley Fellowship, an NSF Graduate Research Fellowship, and a Marshall Scholarship.

Bio: Sinead Williamson is an Assistant Professor of Statistics at the University of Texas at Austin, in the IROM Department and the Division of Statistics and Scientific Computation. She obtained her PhD from the Computational and Biological Learning group at the University of Cambridge, and spent two years as a postdoc in the SAILING laboratory at Carnegie Mellon University. She is interested in nonparametric Bayesian methods for use in machine learning applications. The nonparametric Bayesian paradigm is an elegant and flexible approach for modeling complex data of unknown latent dimensionality. In particular, she is interested in dependent nonparametric processes — distributions over collections of measures indexed by values in some covariate space. Such models are appropriate for spatio-temporally variable data, and for sharing information between related tasks. She is also interested in the development of fast and scalable inference algorithms for Bayesian nonparametric models.

Contributed Talks

Konstantina Christakopoulou, Univ of Minnesota; Joint work with Filip Radlinski and Katja Hofmann
Towards Conversational Recommender Systems
Madalina Fiterau, Stanford Univ
Learning Representations from Time Series Data through Contextualized LSTMs
Sara Magliacane, VU Univ Amsterdam; Joint work with Tom Claassen and Joris Mooij
Ancestral Causal Inference
Maithra Raghu, Cornell Univ / Google Brain; Joint work with Ben Poole, Jon Kleinberg, Surya Ganguli and Jascha Sohl-Dickstein
On the Expressive Power of Deep Neural Networks
Amy Zhang, Facebook; Joint work with Xianming Liu, Tobias Tiecke and Andreas Gros
Using Convolutional Neural Networks to Estimate Population Density from High Resolution Satellite Images

Research Roundtables (9.15-10.15am)

Table 1: Deep learning I – Katja Hofmann, Microsoft Research, Oriol Vinyals, DeepMind

Table 2: Deep learning II – Junli Gu, Tesla, Sergio Guadarrama, Google Research, Niv Sundaram, Intel

Table 3: Reinforcement learning – Emma Brunskill, Carnegie Mellon / Stanford, Yisong Yue, Caltech

Table 4: Bayesian methods I – Barbara Engelhardt, Princeton, Lamiae Azizi, University of Sydney

Table 5: Bayesian methods II – Ferenc Huszar, Twitter / Magic Pony

Table 6: Graphical models – Margaret Mitchell, Google Research, Danielle Belgrave, Imperial College London

Table 7: Learning theory – Cynthia Rush, Columbia University, Corinna Cortes, Google Research

Table 8: Statistical inference and estimation – Katherine M. Kinnaird, Brown University, Alessandra Tosi, Mind Foundry, Oxford

Table 9: Optimization – Anima Anandkumar, Amazon / UC Irvine, Puja Das, Apple

Table 10: Neuroscience – Irina Higgins, DeepMind, Jascha Sohl-Dickstein, Google Brain

Table 11: Robotics – Raia Hadsell, DeepMind, Julie Bernauer, NVIDIA

Table 12: Natural language processing I – Catherine Breslin, Amazon, Olivia Buzek, IBM Watson

Table 13: Natural language processing II – Pallika Kanani, Oracle Labs, Ana Peleteiro Ramallo, Zalando, Aline Villavicencio, Federal University of Rio Grande do Sul, Brazil

Table 14: Healthcare/biology applications – Tania Cerquitelli, Politecnico di Torino, Jennifer Healey, Intel

Table 15: Music applications – Luba Elliott, iambicai, Kat Ellis, Amazon Music, Emilia Gomez, Universitat Pompeu Fabra, Barcelona

Table 16: Social science applications – Allison Chaney, Princeton University, Isabel Valera, Max Planck Institute for Software Systems

Table 17: Fairness, accountability, transparency in machine learning – Sarah Bird, Microsoft, Ekaterina Kochmar, University of Cambridge

Table 18: Computational sustainability – Erin LeDell, H2O.ai, Jennifer Dy, Northeastern University

Table 19: Computer vision – Judy Hoffman, Stanford University, Manohar Paluri, Facebook

Table 20: Human-in-the-Loop Learning – Been Kim, Allen Institute for AI / Univ of Washington, Saleema Amershi, Microsoft Research

Career & Advice Roundtables (12.00-1.00pm)

Table 1: Machine Learning @Amazon: Jumpstarting your career in industry – Anima Anandkumar, Catherine Breslin, Enrica Maria Fillipi

Table 2: Careers@Apple – Meriko Borogove, Anh Nguyen

Table 3: Machine Learning @DeepMind: Research in industry vs. academia – Nando De Freitas, Viorica Patraucean, Kimberly Stachenfeld

Table 4: Machine Learning @Facebook: Sponsorship vs. Mentorship Throughout Your Career – Angela Fan, Amy Zhang, Christy Sauper, Natalia Neverova, Manohar Paluri

Table 5: Machine Learning @Google: Industrial Research and Academic Impact – Corinna Cortes, Google

Table 6: Machine Learning and Deep Learning @Microsoft – Christopher Bishop, Mir Rosenberg, Anusua Trivedi

Table 7: Delivering phenomenal customer experiences with Machine Learning @Capital One – Jennifer Hill, Marcie Apelt

Table 8: Networking I – Olivia Buzek, IBM Watson, Jennifer Healey, Intel

Table 9: Networking II – Pallika Kanani, Oracle Labs, Been Kim, Allen Institute for AI / Univ of Washington

Table 10: Work/Life Balance (academia) – Namrata Vaswani, Iowa State University, Beka Steorts, Duke University

Table 11: Work/Life Balance (industry) I – Yuanyuan Pao, Lyft, Antonio Penta, United Technologies Research Centre, Ireland

Table 12: Work/Life Balance (industry) II – Kat Ellis, Amazon Music, Puja Das, Apple

Table 13: Choosing between academia/industry I – Katherine M. Kinnaird, Brown University, Jascha Sohl-Dickstein, Google Brain

Table 14: Choosing between academia/industry II – Sarah Bird, Microsoft, Oriol Vinyals, DeepMind

Table 15: Life with Kids – Jenn Wortman Vaughan, Microsoft Research, Julie Bernauer, NVIDIA

Table 16: Getting a job (academia)  I – Jennifer Chayes, Microsoft Research, Yisong Yue, Caltech

Table 17: Getting a job (academia) II – Tamara Broderick, MIT, Cynthia Rush, Columbia University

Table 18: Getting a job (industry) I – Anne-Marie Tousch, Criteo, Sergio Guadarrama, Google Research

Table 19: Getting a job (industry) II – Margaret Mitchell, Google Research, Erin LeDell, H2O.ai

Table 20: Doing a postdoc – Cristina Savin, IST Austria / NYU, Judy Hoffman, Stanford University

Table 21: Doing research in industry – Junli Gu, Tesla, Samy Bengio, Google Brain

Table 22: Keeping up with academia while in industry – Irina Higgins, DeepMind, Alessandra Tosi, Mind Foundry, Oxford

Table 23: Surviving graduate school – Allison Chaney, Princeton University, Viktoriya Krakovna, DeepMind

Table 24: Seeking funding: fellowships and grants – Aline Villavicencio, Federal University of Rio Grande do Sul, Brazil, Danielle Belgrave, Imperial College London

Table 25: Establishing collaborations – Barbara Engelhardt, Princeton University, Ekaterina Kochmar, University of Cambridge

Table 26: Joining startups – Alyssa Frazee, Stripe, Ferenc Huszar, Twitter / Magic Pony

Table 27: Scientific communication – Katherine Gorman, Talking Machines, Ana Peleteiro Ramallo, Zalando

Table 28: Building your professional brand – Luba Elliott, iambicai, Lamiae Azizi, The University of Sydney

Table 29: Commercializing your research – Katherine Boyle, General Catalyst, Zehan Wang, Twitter / Magic Pony

Table 30: Long-term career planning – Inmar Givoni, Kindred.ai, Jennifer Dy, Northeastern University

WiML 2016 Poster Session

Monday, Dec 5, 1.30pm to 3.30pm, Area 5+6+7+8, level P0, open to WiML and NIPS attendees

200+ posters covering theory, methodology, and applications of machine learning will be presented in 2 rounds.
Accepted posters
Accepted posters (with abstracts)
Abstracts listed here are for archival purposes and do not constitute proceedings for this workshop.

Information for poster presenters:

Posters for both rounds should be set up 1-1.40pm and removed 3.20-3.30pm. Each poster board is shared by 2-3 presenters. Please check the program book for your round number and poster number, then look for that number, prefixed with ‘W’ (e.g. W1, W2), in the poster room.

Poster size: up to 37.9 inches width and 35.8 inches height (or 96.3 cm x 91.0 cm), portrait or landscape.

Copyright © Women in Machine Learning 2016