Researcher interests: Creating complex software is difficult, so people have started using machine learning (ML) to generate software instead of writing it manually. The problem: ML-produced software is opaque to humans, and it is difficult to certify that it always behaves correctly. In safety-critical systems, e.g. autonomous vehicles, this raises serious concerns. As part of my research, I seek to develop techniques and tools that will allow us to formally reason about deep neural networks (a specific kind of ML-produced software), and in particular to verify that they are correct.

Research on neural network verification is still at an early phase, and there are many challenges to overcome. First, there is the matter of scalability: the verification problem is NP-complete, i.e. worst-case exponential in the size of the network, so it is challenging to apply verification to large networks. Our research group is addressing this difficulty by applying state-of-the-art techniques from the fields of verification, linear programming, and SMT solving.

Another interesting topic is that of applications: verification of neural networks is a useful hammer, and we are just beginning to discover all the potential nails. One project where we have successfully applied this technique is ACAS Xu, an airborne collision-avoidance system for drones currently being developed by the US Federal Aviation Administration (FAA). The ACAS Xu system includes 45 deep neural networks, and it is very important that these networks behave correctly so as to avoid potential mid-air collisions. Using verification, we were able to formally prove that this is indeed the case (for a list of desired properties), and in one case we even detected a bug. Another interesting application is robustness to adversarial examples, where we use verification to prove that small perturbations to a network's inputs do not cause it to make mistakes.
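To make the robustness idea concrete, here is a minimal sketch of one simple way to certify local robustness: interval bound propagation, which soundly over-approximates a ReLU network's outputs over an L-infinity ball around an input. This is an illustrative, incomplete technique, not the complete SMT-based method (Reluplex) developed in the research above; the tiny network and its weights are hypothetical.

```python
# Sketch: certifying local adversarial robustness via interval bound
# propagation (IBP). Sound but incomplete: a True answer is a proof,
# a False answer means "unknown". The network below is a toy example
# with made-up weights, not one of the ACAS Xu networks.

def interval_affine(lo, hi, W, b):
    """Propagate the input box [lo, hi] through x -> W x + b."""
    new_lo, new_hi = [], []
    for row, bias in zip(W, b):
        l = h = bias
        for w, xl, xh in zip(row, lo, hi):
            if w >= 0:
                l += w * xl
                h += w * xh
            else:          # negative weight flips which bound is worst-case
                l += w * xh
                h += w * xl
        new_lo.append(l)
        new_hi.append(h)
    return new_lo, new_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps bound-wise onto the box."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

def certify_robust(x, eps, layers, label):
    """True iff every input within eps (L-inf) of x provably keeps
    `label` as the top-scoring output of the network."""
    lo = [xi - eps for xi in x]
    hi = [xi + eps for xi in x]
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:      # ReLU on hidden layers only
            lo, hi = interval_relu(lo, hi)
    # Robust if the label's worst case beats every rival's best case.
    return all(lo[label] > hi[j] for j in range(len(lo)) if j != label)

# Toy 2-2-2 network (hypothetical weights): one hidden ReLU layer.
layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),   # hidden layer
    ([[2.0, 0.0], [0.0, 1.0]], [0.0, 0.0]),    # output layer
]
print(certify_robust([1.0, 0.0], 0.1, layers, label=0))  # → True
```

Because the interval abstraction loses dependencies between neurons, it can fail to prove properties that actually hold; complete approaches such as SMT-based verification resolve exactly those cases, at a higher computational cost.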
DEPARTMENT: Computer Science
Contact Business Development: Anna Pellivert

Selected Publications

Assaf Marron, Aviran Sadon, Gera Weiss, Guy Katz
On-the-Fly Construction of Composite Events in Scenario-Based Modeling Using Constraint Solvers (2019) | Proceedings of the 7th International Conference on Model-Driven Engineering and Software Development
Clark Barrett, Corina Păsăreanu, Divya Gopinath, Guy Katz
DeepSafe: A Data-Driven Approach for Assessing Robustness of Neural Networks (2018) | International Symposium on Automated Technology for Verification and Analysis
Efficient Distributed Execution of Multi-component Scenario-Based Models (2018) | Communications in Computer and Information Science
Clark Barrett, David Dill, Guy Katz, Nicholas Carlini
Ground-Truth Adversarial Examples (2018) | arXiv: Learning
Assaf Marron, David Harel, Guy Katz, Rami Marelly
Wise Computing: Toward Endowing System Development with Proactive Wisdom (2018) | IEEE Computer
Clark Barrett, Justin Gottschlich, Guy Katz, Kyle Julian, Lindsey Kuper, Mykel Kochenderfer
Toward Scalable Verification for Safety-Critical Deep Networks (2018) | arXiv preprint arXiv:1801.05950
Alain Mebsout, Andrew Reynolds, Burak Ekici, Cesare Tinelli, Chantal Keller, Clark Barrett, Guy Katz
SMTCoq: A Plug-In for Integrating SMT Solvers into Coq (2017) | Computer Aided Verification - 29th International Conference
Clark Barrett, David Dill, Guy Katz, Kyle Julian, Mykel Kochenderfer
Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks (2017) | Computer Aided Verification
Finally, a peek inside the 'black box' of machine learning systems (2017) | Computation & Data, Technology & Society
Assaf Marron, Daniel Gritzner, David Harel, Joel Greenyer, Guy Katz, Shlomi Steinberg
Distributing Scenario-Based Models: A Replicate-and-Project Approach (2017) | 5th International Conference on Model-Driven Engineering and Software Development
Clark Barrett, David Dill, Guy Katz, Kyle Julian, Mykel Kochenderfer
Towards Proving the Adversarial Robustness of Deep Neural Networks (2017) | FVAV@iFM
Clark Barrett, Corina Păsăreanu, Divya Gopinath, Guy Katz
DeepSafe: A Data-Driven Approach for Checking Adversarial Robustness in Neural Networks (2017) | arXiv preprint arXiv:1710.00486

Contact for more information:

Anna Pellivert
Manager BD
+972-2-6586697