Peer-reviewed Journals and Contents

  • Mirsky, Y., Demontis, A., Kotak, J., Shankar, R., Gelei, D., Yang, L., Zhang, X., Pintor, M., Lee, W., Elovici, Y., et al. (2022). The threat of offensive AI to organizations. Computers & Security, 103006, https://doi.org/10.1016/j.cose.2022.103006
  • Kravchik, M., Demetrio, L., Biggio, B., & Shabtai, A. (2022). Practical Evaluation of Poisoning Attacks on Online Anomaly Detectors in Industrial Control Systems. Computers & Security, 122, 102901, https://doi.org/10.1016/j.cose.2022.102901
  • Grosse, K., Lee, T., Biggio, B., Park, Y., Backes, M., & Molloy, I. (2022). Backdoor smoothing: Demystifying backdoor attacks on deep neural networks. Computers & Security, 120, 102814, https://doi.org/10.1016/j.cose.2022.102814
  • Melis, M., Scalas, M., Demontis, A., Maiorca, D., Biggio, B., Giacinto, G., & Roli, F. (2022). Do gradient-based explanations tell anything about adversarial robustness to Android malware? International Journal of Machine Learning and Cybernetics, 13, 217–232, https://doi.org/10.1007/s13042-021-01393-7
  • Crecchi, F., Melis, M., Sotgiu, A., Bacciu, D., & Biggio, B. (2022). FADER: Fast adversarial example rejection. Neurocomputing, 470, 257–268, https://doi.org/10.1016/j.neucom.2021.10.082
  • Pintor, M., Angioni, D., Sotgiu, A., Demetrio, L., Demontis, A., Biggio, B., & Roli, F. (2023). ImageNet-Patch: A dataset for benchmarking machine learning robustness against adversarial patches. Pattern Recognition, 134, 109064, https://doi.org/10.1016/j.patcog.2022.109064, https://github.com/pralab/ImageNet-Patch
  • Kumar, M., Zhang, W., Fischer, L., & Freudenthaler, B. (2023). Membership-Mappings for Practical Secure Distributed Deep Learning. IEEE Transactions on Fuzzy Systems (early access), https://doi.org/10.1109/TFUZZ.2023.3235440
  • Kumar, M. (2023). Differentially private transferable deep learning with membership-mappings. Advances in Computational Intelligence, 3, https://doi.org/10.1007/s43674-022-00049-5
  • Zhang, Kumar, M., Ding, W., Li, X., & Yu, J. (2022). Variational learning of deep fuzzy theoretic nonparametric model. Neurocomputing, 506, 128–145

Non-peer-reviewed Journals and Contents

  • M.-C. Dinu, M. Holzleitner, M. Beck, D.H. Nguyễn, A. Huber, H. Eghbal-zadeh, B.A. Moser, S.V. Pereverzyev, S. Hochreiter, W. Zellinger, Ensemble Learning for Domain Adaptation by Importance Weighted Least Squares, RICAM-Report 2022-10, https://www.ricam.oeaw.ac.at/files/reports/22/rep22-10.pdf

Conferences / Workshops

  • M. Kumar, B. Moser, L. Fischer, and B. Freudenthaler, Towards Practical Secure Privacy-Preserving Machine (Deep) Learning with Distributed Data, In: Kotsis, G., et al. (Eds.): Database and Expert Systems Applications - DEXA 2022 Workshops. DEXA 2022. Communications in Computer and Information Science, vol. 1633. Springer, Cham, https://doi.org/10.1007/978-3-031-14343-4_6
  • Bernhard A. Moser, Michal Lewandowski, Somayeh Kargaran, Battista Biggio, Werner Zellinger, Christoph Koutschan: Tessellation-Filtering ReLU Neural Networks, IJCAI 2022, https://doi.org/10.24963/ijcai.2022/463
  • Anton Ponomarchuk, Christoph Koutschan, and Bernhard Moser: Unboundedness of Linear Regions of Deep ReLU Neural Networks, DEXA AISys Workshop, 2022, https://doi.org/10.1007/978-3-031-14343-4_1
  • Martin Gauch, Maximilian Beck, Thomas Adler, Dmytro Kotsur, Stefan Fiel, Hamid Eghbal-zadeh, Johannes Brandstetter, Johannes Kofler, Markus Holzleitner, Werner Zellinger, Daniel Klotz, Sepp Hochreiter and Sebastian Lehner, Few-Shot Learning by Dimensionality Reduction in Gradient Space, Conference on Lifelong Learning Agents (CoLLAs 2022), PMLR 199:1043-1064, 2022.
  • Manuel Kauers, Christoph Koutschan, Guessing with little data. In: Proceedings of the International Symposium on Symbolic and Algebraic Computation (ISSAC), pp. 83–90, 2022. ACM, New York, USA.
  • Pintor, M., Demetrio, L., Sotgiu, A., Demontis, A., Carlini, N., Biggio, B., & Roli, F. (2022). Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples. In Advances in Neural Information Processing Systems (NeurIPS 2022).
  • Bieringer, L., Grosse, K., Backes, M., Biggio, B., & Krombholz, K. (2022). Industrial practitioners' mental models of adversarial machine learning. In Eighteenth Symposium on Usable Privacy and Security (SOUPS 2022) (pp. 97-116).