Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education

 
   



Academic Talks, 2018


Adversarial Machine Learning: Attacker's Strategy in Android Malware Detection via Graphlet Sampling

Time: June 19, 2018, 2:30 PM    Venue: Room 313, Computer Building, Jiulonghu Campus

Abstract:

Cybersecurity defense and malware detection schemes increasingly use machine-learning-based signature and vulnerability detection to relieve human experts of the tedious and subjective task of manually defining features. However, this inevitably devolves into the cat-and-mouse game seen in many facets of security: attackers have traditionally evaded signatures and heuristics, and they evade statistical models too. In this talk, Feng Li offers some background on the academic security community's attempts to understand how to break and fix machine-learning-based cybersecurity systems. He will present the design of an Android malware detection scheme built on graphlet-sampling-based machine learning. With the context of this case study, he will discuss several possible strategies attackers may use to evade detection or poison the machine learning. These sophisticated attackers clearly motivate the need to study adversarial machine learning (ML) in cybersecurity.

Android systems are widely used in mobile and wireless distributed systems. However, with the popularity of Android-based smartphones and tablets comes the rampancy of Android-based malware. We first introduce our design of a novel topological-signature-based ML scheme for Android apps, using the function call graphs (FCGs) extracted from their Android App PacKages (APKs). Specifically, by leveraging recent advances in graphlet mining, the proposed method fully captures the invocator-invocatee relationship in local neighborhoods of an FCG. Using real benign-app and malware samples, we demonstrate that our method, ACTS (App topological signature through graphlet Sampling), can detect malware and identify malware families robustly and efficiently. Using the context of this learning-based cybersecurity scheme, we then switch to the attackers' point of view and explore their strategy space for countering the ML design. We will discuss possible strategies in adversarial data manipulation for attackers to evade classification, poison the ML model, and/or violate the privacy of users of the learning-based cybersecurity scheme.
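To illustrate the kind of topological signature the talk describes, the following is a minimal, hypothetical sketch (not the speaker's actual ACTS implementation): it enumerates connected 3-node induced subgraphs ("graphlets") of a toy function call graph and turns their counts into a normalized feature vector. The toy graph, the crude degree-sequence "signature", and the exhaustive enumeration (where ACTS would sample) are all illustrative assumptions.

```python
from itertools import combinations

# Toy directed function call graph (hypothetical): caller -> set of callees.
FCG = {
    "main": {"parse", "render"},
    "parse": {"log"},
    "render": {"log"},
    "log": set(),
}

def induced_edges(g, nodes):
    """Directed edges of the subgraph induced by `nodes`."""
    ns = set(nodes)
    return [(u, v) for u in ns for v in g.get(u, ()) if v in ns]

def is_connected(nodes, edges):
    """Weak-connectivity check on the induced subgraph."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return len(seen) == len(nodes)

def signature(nodes, edges):
    """Crude graphlet label: the sorted (in-degree, out-degree) sequence.
    A real graphlet miner would classify by isomorphism type instead."""
    indeg = {n: 0 for n in nodes}
    outdeg = {n: 0 for n in nodes}
    for u, v in edges:
        outdeg[u] += 1
        indeg[v] += 1
    return tuple(sorted((indeg[n], outdeg[n]) for n in nodes))

def graphlet_vector(g, k=3):
    """Count connected k-node graphlets and normalize to frequencies.
    For large FCGs, random sampling would replace this exhaustive loop."""
    counts = {}
    for nodes in combinations(g, k):
        edges = induced_edges(g, nodes)
        if edges and is_connected(nodes, edges):
            sig = signature(nodes, edges)
            counts[sig] = counts.get(sig, 0) + 1
    total = sum(counts.values())
    return {s: c / total for s, c in counts.items()}
```

A classifier would then be trained on such frequency vectors from benign and malicious APKs; an evasion-minded attacker, in turn, would try to perturb the call graph (e.g., by inserting dummy calls) to shift these frequencies without changing the malware's behavior.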

Speaker Bio:

Feng Li is the Chair and an Associate Professor in the Department of Computer Information & Graphics Technology within the Purdue School of Engineering and Technology at Indiana University-Purdue University Indianapolis (IUPUI). His current research interests include cybersecurity, mobile computing and wireless networks, cloud and distributed computing, privacy protection in social networks, and security and privacy in machine learning. Dr. Li regularly publishes in scholarly journals, conference proceedings, and book chapters. He served as Publication Co-Chair and Organization Committee Member for the 2015 ACM Conference on Computer and Communications Security (CCS) and the 2013 IEEE International Conference on Distributed Computing Systems (ICDCS), and as TPC Chair for the National Workshop for REU Research in Networking and Systems (REUNS) in 2014, 2016, and 2017. He has been a Technical Program Committee member for the IEEE International Conference on Computer Communications (INFOCOM 2010-2019) and many other international conferences.

   

Copyright © Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education. All rights reserved.