Invited Talks and Tutorials
Unifying Speakers
Michael Ernst (University of Washington, USA)
Natural language is a programming language
A powerful, but limited, way to view software is as source code alone. Treating a program as a sequence of instructions enables it to be formalized and makes it amenable to mathematical techniques such as abstract interpretation and model checking. This is a limited way to view a program, because a program consists of much more than a sequence of instructions. Developers make use of test cases, documentation, variable names, program structure, the version control repository, and more. I argue that it is time to take the blinders off of software analysis tools: tools should use all these artifacts to deduce more powerful and useful information about the program.
Researchers are beginning to make progress towards this vision. This paper gives, as examples, four results that find bugs and generate code by applying Natural Language Processing to English that appears in or with source code.
- Comparing observed error messages to text in the user manual to determine whether the error message is adequate.
- Using dictionary similarity among variable names to discover when variables are used incorrectly.
- Creating test oracles (assert statements) from English descriptions of behavior (Javadoc documentation); see the sketch below.
- Translating user queries into executable bash commands.
These four approaches rely on four different NLP techniques: document similarity, word semantics, parse trees, and neural networks. Initial results suggest that by paying attention to these rich sources of information, we can produce better software analysis tools and make programmers more productive. Many more opportunities exist, and I urge the community to grasp them.
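For illustration only (this example is not taken from the talk), the following minimal sketch shows the kind of test oracle such a tool might derive from a Javadoc comment; the Stack class, its push method, and the generated assertion are all hypothetical.

    // Hypothetical example: a Javadoc sentence and the kind of test oracle
    // an NLP-based tool might derive from it. All names are invented.
    import java.util.Objects;

    class Stack<T> {
        private final java.util.ArrayList<T> items = new java.util.ArrayList<>();

        /**
         * Pushes an element onto the stack.
         *
         * @param item the element to push; must not be null
         * @throws NullPointerException if {@code item} is null
         */
        public void push(T item) {
            items.add(Objects.requireNonNull(item));
        }
    }

    class StackOracleTest {
        public static void main(String[] args) {
            // Oracle derived from the "@throws NullPointerException if item is null"
            // sentence: calling push(null) must raise NullPointerException.
            Stack<String> s = new Stack<>();
            boolean threw = false;
            try {
                s.push(null);
            } catch (NullPointerException expected) {
                threw = true;
            }
            assert threw : "push(null) should throw NullPointerException";
        }
    }

(Run with assertions enabled, e.g. java -ea StackOracleTest.)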
This is joint work with Arianna Blasi, Juan Caballero, Sergio Delgado Castellanos, Alberto Goffi, Alessandra Gorla, Victoria Lin, Deric Pang, Mauro Pezzè, Irfan Ul Haq, Kevin Vu, Luke Zettlemoyer, and Sai Zhang.
Bio: Michael D. Ernst is a Professor in the Computer Science & Engineering department at the University of Washington. His research aims to make software more reliable, more secure, and easier (and more fun!) to produce. His primary technical interests are in software engineering, programming languages, type theory, security, program analysis, bug prediction, testing, and verification. Ernst's research combines strong theoretical foundations with realistic experimentation, with an eye to changing the way that software developers work.
Ernst is an ACM Fellow (2014) and received the inaugural John Backus Award (2009) and the NSF CAREER Award (2002). His research has received an ACM SIGSOFT Impact Paper Award (2013), 8 ACM Distinguished Paper Awards (FSE 2014, ISSTA 2014, ESEC/FSE 2011, ISSTA 2009, ESEC/FSE 2007, ICSE 2007, ICSE 2004, ESEC/FSE 2003), an ECOOP 2011 Best Paper Award, honorable mention in the 2000 ACM doctoral dissertation competition, and other honors. In 2013, Microsoft Academic Search ranked Ernst #2 in the world in software engineering research contributions over the past 10 years.
Kim G. Larsen (Aalborg University, Denmark)
Validation, Synthesis and Optimization for Cyber-Physical Systems
The growing complexity of Cyber-Physical Systems increasingly challenges existing methods and techniques. What is needed is a new generation of scalable tools for model-based learning, analysis, synthesis and optimization, based on a mathematically sound foundation that enables trade-offs between functional safety and quantitative performance. In this talk I will provide highlights of theoretical developments and industrial applications of the tool UPPAAL and illustrate how recent branches of the UPPAAL tool suite are making significant progress by combining model checking and machine learning techniques.
Bio: Kim G. Larsen is a professor in the Department of Computer Science at Aalborg University within the Distributed, Embedded and Intelligent Systems Unit and director of the ICT-competence center CISS, Center for Embedded Software Systems. He is also director of the Sino-Danish Basic Research Center IDEA4CPS, the Danish Innovation Network InfinIT, as well as the newly founded innovation research center DiCyPS: Data Intensive Cyber Physical Systems. In 2015, he won an ERC Advanced Grant with the project LASSO for learning, analysis, synthesis and optimization of cyber-physical systems. His research interests include modeling, verification, and performance analysis of real-time and embedded systems, with applications and contributions to concurrency theory and model checking. In particular, he is a prime investigator of the tool UPPAAL and co-founder of the company UP4ALL International. In 2013 he was the co-recipient of the CAV Award for the work on UPPAAL as "the foremost model checker for real-time systems". Kim G. Larsen became Honorary Doctor (Honoris causa) at Uppsala University, Sweden, in 1999, and at ENS Cachan, France, in 2007. Also in 2007 he became Knight of the Order of the Dannebrog. In 2012 he became Honorary Member of Academia Europaea. He is a life-long member of the Royal Danish Academy of Sciences and Letters, Copenhagen, and a member of the Danish Academy of Technical Sciences, for which he served as vice-chairman of the group on Electro- and Information Systems. In 2016 he was appointed INRIA International Chair for a five-year period, and he won the prestigious industrial Grundfos Award 2016.
FoSSaCS Invited Speaker
Joël Ouaknine (MPI-SWS, Germany, and University of Oxford, UK)
Fundamental Algorithmic Problems and Challenges in Dynamical and Cyber-Physical Systems
Dynamical and cyber-physical systems, whose continuous evolution is subject to differential equations, permeate vast areas of engineering, physics, mathematics, and computer science. In this talk, I consider a selection of fundamental algorithmic problems for such systems, such as reachability, invariant synthesis, and controllability. Although the decidability and complexity of many of these problems are open, some partial and conditional results are known, occasionally resting on certain number-theoretic hypotheses such as Schanuel's conjecture. More generally, the study of algorithmic problems for dynamical and cyber-physical systems draws from an eclectic array of mathematical tools, ranging from Diophantine approximation to algebraic geometry. I will present a personal and select overview of the field and discuss areas of current active research.
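As one concrete instance of the reachability questions mentioned above (chosen here purely for illustration, not necessarily the emphasis of the talk), consider the classical Orbit Problem for discrete-time linear dynamical systems:

    \[
      \textbf{Orbit Problem:}\quad \text{given } A \in \mathbb{Q}^{d \times d} \text{ and } x, y \in \mathbb{Q}^{d},
      \ \text{decide whether } \exists\, n \in \mathbb{N} \ \text{such that } A^{n} x = y .
    \]

This point-to-point version is known to be decidable (indeed in polynomial time, by Kannan and Lipton), whereas natural generalisations, such as reaching a higher-dimensional target subspace, are connected to long-standing open questions like the Skolem Problem.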
Bio: Joël Ouaknine is a Scientific Director at the Max Planck Institute for Software Systems in Saarbrücken, Germany, where he leads the Foundations of Algorithmic Verification group. He also holds secondary appointments at Oxford University and Saarland University. His research interests span a range of topics broadly connected to algorithmic verification and theoretical computer science. His group's recent focus has been on decision, control, and synthesis problems for linear dynamical systems (both continuous and discrete), making use, among others, of tools from number theory, Diophantine geometry, and real algebraic geometry. Other interests include the algorithmic analysis of real-time, probabilistic, and infinite-state systems (e.g. model-checking algorithms, synthesis problems, complexity), logic and applications to verification, automated software analysis, and concurrency.
Prior to joining MPI-SWS, Joël worked as an academic in the Computer Science Department at Oxford University from 2004 to 2016, becoming Full Professor in 2010. He earned a BSc and MSc in Mathematics from McGill University, and received his PhD in Computer Science from Oxford in 2001. He subsequently did postdoctoral work at Tulane University and Carnegie Mellon University, and has held visiting professorships at the École Normale Supérieure de Cachan, France. In both 2007 and 2008 he received an Outstanding Teaching Award from Oxford University, and the following year he was awarded an EPSRC Leadership Fellowship, enabling him to focus (almost) exclusively on research for a period of five years. He is the recipient of the 2010 Roger Needham Award, given annually "for a distinguished research contribution in Computer Science by a UK-based researcher within ten years of his or her PhD", and in 2015 was awarded an ERC Consolidator Grant to carry out research in dynamical systems.
TACAS Invited Speaker
Dino Distefano (Facebook and Queen Mary University of London, UK)
The Facebook Infer Static Analyser
Infer is an open-source static analyser developed at Facebook. Originally based on Separation Logic, Infer has lately evolved from a specific tool for heap-manipulating programs to a general framework which facilitates the implementation of new static analyses.
In this talk, I will report on the Infer team's experience of applying our tool to Facebook mobile code, each day helping thousands of engineers to build more reliable and secure software. Moreover, I will discuss the team's current effort to turn Infer into a static analysis platform for research and development useful both to academic researchers and industrial practitioners.
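As a small, hypothetical illustration (not code from the talk), the following Java method contains the kind of potential null dereference that a heap-aware static analyser such as Infer is designed to report; the class and method names are invented.

    // Hypothetical example of a bug pattern that static analysers such as
    // Infer are designed to flag. All names are invented for illustration.
    import java.io.File;

    class AvatarLoader {
        // File.listFiles() returns null when the path is not a directory,
        // so the unchecked dereference below is a potential NullPointerException.
        static int countAvatars(File dir) {
            File[] files = dir.listFiles();
            return files.length; // possible null dereference reported here
        }

        // Null-checked variant that such a report would typically lead to.
        static int countAvatarsSafely(File dir) {
            File[] files = dir.listFiles();
            return (files == null) ? 0 : files.length;
        }
    }

On a simple Java example, the analysis is typically invoked along the lines of infer run -- javac AvatarLoader.java; the exact command depends on the build system (see the Infer documentation).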
Bio: Dino Distefano is Engineering Manager at Facebook and Professor of Software Verification at Queen Mary University of London. He is a former Royal Academy of Engineering Research Fellow. His research interests include programming languages, static analysis, logic, and program verification.
In 2009 he co-founded Monoidics Ltd, a London-based high-tech start-up providing automatic software verification to safety critical industries. Monoidics was acquired by Facebook in 2013. He has co-developed several software tools for program analysis and verification. The latest tool, the Facebook Infer program analyser, helps developers to identify bugs before software is shipped to users.
Dino is the recipient of the Roger Needham Award 2012, given annually "for a distinguished research contribution in Computer Science by a UK-based researcher within ten years of his or her PhD". He received the Royal Academy of Engineering Silver Medal 2014, given annually "for an outstanding personal contribution to UK engineering by an early to mid-career engineer resulting in market exploitation". In 2016 he was the co-recipient of the CAV Award "for the development of Separation Logic and for demonstrating its applicability in the automated verification of programs that mutate data structures".
Unifying Public Lecture
Serge Abiteboul (INRIA, ENS Paris, France)
Issues in Ethical Data Management
Data science holds incredible promise of improving people's lives, accelerating scientific discovery and innovation, and bringing about positive societal change. Yet, if not used responsibly, this technology can propel economic inequality, destabilize global markets and affirm systemic bias. In this talk, we consider issues such as violation of data privacy, or biases in data analysis. We discuss desirable properties in data analysis such as fairness, transparency, or diversity. A goal of the talk is to draw the attention of the computer science community to the important emerging subject of responsible data management and analysis. We will present our perspective on the issue, and motivate research directions.
Bio: Serge Abiteboul is a researcher at Inria, the Institut National de Recherche en Informatique et Automatique, and is currently a member of the DI Lab at the École Normale Supérieure, Paris. He obtained his Ph.D. from the University of Southern California, and a "thèse d'Etat" from the University of Paris-Sud. He was a Lecturer at the École Polytechnique and Visiting Professor at Stanford and Oxford University. He has been Chaire Professor at Collège de France in 2011-12 and Francqui Chair Professor at Namur University in 2012-2013. He co-founded the company Xyleme in 2000.
Serge Abiteboul has received the ACM SIGMOD Innovation Award in 1998, the EADS Award from the French Academy of sciences in 2007, the Milner Award in 2013, and was PI of the Webdam ERC (2008-2013). He became a member of the French Academy of Sciences in 2008, and a member of the Academy of Europe in 2011. His research work focuses mainly on data, information and knowledge management, particularly on the Web.
Invited Tutorials
Véronique Cortier (CNRS research director at Loria, Nancy, France)
Secure composition of security protocols
Cryptographic protocols aim at securing communications over insecure networks like the Internet. Over the past decades, numerous decision procedures and tools have been developed to automatically analyse security protocols. The field has now reached a good level of maturity, with efficient techniques for protocols analysed in isolation. However, protocols are typically built using several sub-protocols, and tools do not scale well for complex protocols. In this tutorial, we will present formal models for security protocols and explore when it is possible to securely (and formally) compose protocols in several scenarios, such as long-term key sharing, symmetric and asymmetric key distribution, and secure channels.
Bio: Véronique Cortier is a CNRS research director at Loria (Nancy, France). In 2003, she received her Ph.D. in Computer Science from the École Normale Supérieure de Cachan, of which she is also a graduate. Her research focuses on the formal verification of security protocols, in particular e-voting, using formal techniques such as first-order logic and rewriting.
She has co-authored more than 80 publications on these topics. In 2010, she was awarded an ERC starting grant and in 2015, she received the INRIA - Académie des Sciences young researcher award.
Kenneth McMillan (Microsoft Research Redmond, USA)
Compositional Testing
While full formal proof of complex systems remains a challenge, formal methods can readily be applied in practice to improve the performance of testing in exposing design errors. A good example of this is compositional testing. In this methodology, each component of a system is given a formal specification, and it is proved formally that these specifications guarantee system-level correctness. The components are then rigorously tested against their formal specifications. This approach has the advantages of unit testing in covering component behaviors, while at the same time exposing all system-level errors to testing. Moreover, it can expose latent bugs in components that are not stimulated in the given system but may occur when the component is re-used in a different environment. This tutorial will cover the basics of compositional testing by example, introduce the available tools, and discuss industrial applications.
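As a minimal, hypothetical sketch of the idea in plain Java (not a specific tool or methodology from the tutorial), a component can be wrapped in an executable version of its specification and then exercised by unit-level tests; the Counter interface and its specification below are invented for illustration.

    // Minimal sketch of compositional testing: a component is checked against
    // an executable specification at unit-test time. Entirely hypothetical;
    // real methodologies use richer specification languages and tools.

    interface Counter {
        void increment();
        int value();
    }

    class SimpleCounter implements Counter {
        private int count = 0;
        public void increment() { count++; }
        public int value() { return count; }
    }

    // Executable specification: value() is never negative, and increment()
    // increases value() by exactly one. System-level correctness would be
    // argued (separately) from component specifications like this one.
    class CheckedCounter implements Counter {
        private final Counter impl;
        CheckedCounter(Counter impl) { this.impl = impl; }

        public void increment() {
            int before = impl.value();
            impl.increment();
            assert impl.value() == before + 1 : "increment must add exactly one";
        }

        public int value() {
            int v = impl.value();
            assert v >= 0 : "value must be non-negative";
            return v;
        }
    }

    class CounterSpecTest {
        public static void main(String[] args) {
            // The unit test drives the component through its checked specification
            // (run with assertions enabled: java -ea CounterSpecTest).
            Counter c = new CheckedCounter(new SimpleCounter());
            for (int i = 1; i <= 5; i++) {
                c.increment();
            }
            assert c.value() == 5;
            System.out.println("specification checks passed");
        }
    }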
Bio: Ken McMillan is a principal researcher at Microsoft Research in Redmond, Washington. He works in formal verification, primarily in model checking for hardware and software. He holds a BS in electrical engineering from the University of Illinois at Urbana (1984), an MS in electrical engineering from Stanford (1986) and a Ph.D. in computer science from Carnegie Mellon (1992). He is the author of the book "Symbolic Model Checking" and of the SMV symbolic model checking system. For his work in model checking, he has received the ACM Doctoral Dissertation Award, the SRC Technical Excellence Award, the ACM Paris Kanellakis Award, the Allen Newell Award from Carnegie Mellon, and the Computer-Aided Verification Conference Award. He was formerly a member of the technical staff at AT&T Bell Laboratories and a fellow at Cadence research labs.