Top A-Tier Publications*

S3AI has produced a number of widely regarded publications, which have appeared at the following esteemed conferences and journals:

2021

NeurIPS

ICLR

  • Hubert Ramsauer, Bernhard Schäfl, Johannes Lehner, Philipp Seidl, Michael Widrich, Thomas Adler, Lukas Gruber, Markus Holzleitner, Milena Pavlović, Geir Kjetil Sandve, Victor Greiff, David Kreil, Michael Kopp, Günter Klambauer, Johannes Brandstetter, Sepp Hochreiter, “Hopfield Networks is All You Need”, International Conference on Learning Representations (ICLR 2021).

IJCNN**

  • Cinà, A.E., Vascon, S., Demontis, A., Biggio, B., Roli, F. and Pelillo, M., “The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?”, 2021 International Joint Conference on Neural Networks (IJCNN), 2021, pp. 1-8, doi: 10.1109/IJCNN52387.2021.9533557 and http://arxiv.org/abs/2103.12399, Software: https://github.com/Cinofix/beta_poisoning

2022

NeurIPS

  • Pintor, M., Demetrio, L., Sotgiu, A., Demontis, A., Carlini, N., Biggio, B., & Roli, F. (2022). Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples. NeurIPS 2022.

IJCAI

  • Bernhard A. Moser, Michal Lewandowski, Somayeh Kargaran, Battista Biggio, Werner Zellinger, Christoph Koutschan: Tessellation-Filtering ReLU Neural Networks, IJCAI 2022, https://doi.org/10.24963/ijcai.2022/463

Neurocomputing

  • F. Crecchi, M. Melis, A. Sotgiu, D. Bacciu, and B. Biggio. FADER: Fast adversarial example rejection. Neurocomputing, 470:257–268, 2022, https://doi.org/10.1016/j.neucom.2021.10.082
  • Zhang, M. Kumar, W. Ding, X. Li, and J. Yu, Variational learning of deep fuzzy theoretic nonparametric model, Neurocomputing, vol. 506, pp. 128-145, 2022.

IEEE TFS

  • M. Kumar, W. Zhang, L. Fischer and B. Freudenthaler, Membership-Mappings for Practical Secure Distributed Deep Learning, IEEE Transactions on Fuzzy Systems, 2023 (early access), https://doi.org/10.1109/TFUZZ.2023.3235440

The NeurIPS conference is arguably the most prestigious conference in the field of deep learning, and its review process is highly competitive. Nevertheless, three of our papers were accepted there in 2021 and a further one in the reporting period.

"Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples" concerns the safety evaluation of models against adversarial attacks: it provides a novel basis for debugging such evaluations through the analysis of attack failures, and shows how to exploit these insights for defence.

Related to this publication, the Neurocomputing paper "FADER: Fast adversarial example rejection" addresses the computational efficiency of defence methods against adversarial examples. We propose a novel optimized approach for fast adversarial example rejection that makes it possible to control the runtime complexity of adversarial example detectors without sacrificing classification accuracy on either clean or adversarial data.

The IJCAI paper "Tessellation-Filtering ReLU Neural Networks" introduces a novel computational-geometric approach to reconstructing geometric characteristics of the decision surface at a given data point. The computational feasibility of this problem is challenging because the high dimensionality causes exponential complexity. We tackle this challenge, first, by identifying a computationally tractable geometric structure in the neighbourhood of a given point by exploiting the hypercube of activation patterns and, second, by employing a probabilistic approach that samples random points within this structure to determine geometric primitives characterizing the whole structure (in terms of non-convexity). The generality of our topological characterization paves the way towards obtaining mathematical insight and evidence for practically relevant questions such as adversarial vulnerability and generalization capability.

The IEEE TFS paper "Membership-Mappings for Practical Secure Distributed Deep Learning" considers the problem of privacy-preserving distributed deep learning, where data privacy is protected by fully homomorphic encryption. The approach leverages fuzzy membership-mappings for data representation learning on fully homomorphically encrypted data: globally convergent and robust variational membership-mappings are introduced to build local deep models, and these local models are combined in a robust and flexible manner by means of fuzzy attributes into a global model that can be evaluated homomorphically in an efficient manner. The resulting membership-mappings-based privacy-preserving distributed deep learning method is accurate, practical, and scalable.

Finally, the Neurocomputing paper "Variational learning of deep fuzzy theoretic nonparametric model" presents an alternative approach to deep learning based on representing a mapping through a fuzzy set, such that a deep model can be learned analytically via a variational optimization technique.
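To make the debugging idea of the "Indicators of Attack Failure" paper more concrete, the following is a minimal sketch (our illustration, not the authors' implementation) of how a loss trace recorded during an attack run might be screened for tell-tale failure patterns; the function name, thresholds, and the three indicators shown are illustrative assumptions.

```python
import numpy as np

def screen_attack_trace(loss_trace, eps=1e-6):
    """Flag common optimization failures in an adversarial attack run.

    loss_trace: 1-D array of the attack objective per iteration
    (lower = closer to a successful adversarial example). Purely
    illustrative; the paper defines a richer set of indicators.
    """
    loss_trace = np.asarray(loss_trace, dtype=float)
    indicators = {}
    # Non-converging optimization: the loss barely improved overall.
    indicators["no_progress"] = loss_trace[-1] > loss_trace[0] - eps
    # Unstable optimization: the loss oscillates instead of decreasing.
    diffs = np.diff(loss_trace)
    indicators["oscillating"] = np.mean(diffs > 0) > 0.5
    # Stalled optimization: the last iterations show no change at all,
    # e.g. because gradients vanished (a hint of gradient masking).
    tail = loss_trace[-10:]
    indicators["stalled"] = np.allclose(tail, tail[0], atol=eps)
    return indicators

# Example: a trace that decreases, then flat-lines -> "stalled" fires.
trace = np.concatenate([np.linspace(1.0, 0.4, 30), np.full(15, 0.4)])
print(screen_attack_trace(trace))
```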
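To illustrate the runtime-control idea behind FADER, the sketch below implements a prototype-based rejection detector whose cost scales with a fixed prototype budget. The class name, the random prototype selection, and the threshold calibration are our simplifications under assumption; the published method differs (see the DOI above).

```python
import numpy as np

class PrototypeRejector:
    """Distance-based rejection with a fixed prototype budget.

    Detector runtime scales with n_prototypes, which is the knob
    the FADER paper turns; everything else here is a simplification.
    """

    def __init__(self, n_prototypes=10, gamma=1.0):
        self.n_prototypes = n_prototypes
        self.gamma = gamma

    def fit(self, features, threshold_quantile=0.05):
        # Pick a random subset of training features as prototypes
        # (the paper learns them; random choice keeps the sketch short).
        rng = np.random.default_rng(0)
        idx = rng.choice(len(features), self.n_prototypes, replace=False)
        self.prototypes = features[idx]
        # Calibrate the rejection threshold on clean training data.
        scores = self.score(features)
        self.threshold = np.quantile(scores, threshold_quantile)
        return self

    def score(self, features):
        # RBF similarity to the closest prototype; low = unfamiliar input.
        d2 = ((features[:, None, :] - self.prototypes[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2).max(axis=1)

    def accept(self, features):
        return self.score(features) >= self.threshold
```

Shrinking the prototype budget trades detection fidelity for speed, which is the lever the paper analyses.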
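The following sketch conveys, for a toy one-hidden-layer ReLU network, the flavour of the probabilistic probing described for "Tessellation-Filtering ReLU Neural Networks": sample points whose activation pattern stays within a small Hamming distance of a reference pattern (a slice of the hypercube of activation patterns) and use midpoint tests as evidence of non-convexity. The names W and b and all parameters are hypothetical, and this is not the paper's algorithm, which handles deeper networks.

```python
import numpy as np

def relu_pattern(W, b, x):
    """On/off pattern of the hidden ReLU units at input x."""
    return W @ x + b > 0

def probe_structure(W, b, x, k=1, radius=0.5, n_samples=500, seed=0):
    """Monte-Carlo probe of the union of linear regions whose pattern
    differs from x's in at most k units. Such a union need not be
    convex; the midpoint test counts violations as non-convexity
    evidence. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    ref = relu_pattern(W, b, x)
    in_structure = lambda p: np.sum(relu_pattern(W, b, p) != ref) <= k
    # Sample random points around x and keep those inside the structure.
    pts = x + radius * rng.standard_normal((n_samples, x.size))
    inside = np.array([p for p in pts if in_structure(p)])
    violations = 0
    n_pairs = min(200, len(inside) * (len(inside) - 1) // 2)
    for _ in range(n_pairs):
        i, j = rng.choice(len(inside), size=2, replace=False)
        if not in_structure(0.5 * (inside[i] + inside[j])):
            violations += 1
    return len(inside), violations

# Toy usage with random weights for an 8-unit hidden layer in 2-D.
W = np.random.default_rng(1).standard_normal((8, 2))
print(probe_structure(W, np.zeros(8), x=np.ones(2)))
```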
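As a crude illustration of how the membership-mappings paper combines local models via fuzzy attributes, the sketch below weights each party's local prediction with a normalized Gaussian membership; the homomorphic-encryption layer the paper adds is omitted, and all names are our own assumptions.

```python
import numpy as np

def fuzzy_weights(x, centers, gamma=1.0):
    """Membership of input x to each party's local data region
    (Gaussian memberships, normalized). Purely illustrative."""
    d2 = ((centers - x) ** 2).sum(axis=1)
    w = np.exp(-gamma * d2)
    return w / w.sum()

def global_predict(x, local_models, centers):
    """Combine local model predictions with fuzzy attribute weights.
    The paper evaluates such a combination under fully homomorphic
    encryption; here everything runs in the clear for readability.
    """
    w = fuzzy_weights(x, centers)
    preds = np.array([m(x) for m in local_models])
    return w @ preds

# Example: three parties, each with a simple linear local model.
rng = np.random.default_rng(0)
centers = rng.standard_normal((3, 4))
models = [lambda x, w=w: float(w @ x) for w in rng.standard_normal((3, 4))]
print(global_predict(rng.standard_normal(4), models, centers))
```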

* See https://scholar.google.es/citations?view_op=top_venues&hl=en&vq=eng_artificialintelligence for the top 20 conferences and journals in AI.
** In the meantime, IJCNN's ranking has dropped and it no longer appears in the list above.