The Georgia Institute of Technology Student Security Seminar, sponsored by the School of Cybersecurity and Privacy, is an opportunity for Georgia Tech faculty and students working in computer security and privacy to present their research and to host guest speakers.
The seminar is currently on hiatus, and information here is from Fall 2022.
To receive regular updates on upcoming talks, as well as Zoom links to join them virtually, please subscribe to the mailing list.
If you’re interested in presenting a talk, please contact us with a date and talk proposal.
Founder/Organizer: Pradyumna Shome
Email Address: pradyumna.shome@gatech.edu
Location: Vinings, 10th Floor, Coda, 756 West Peachtree St NW, Atlanta, GA 30332
Time: Wednesdays at 12pm ET
Mailing List: https://mailman.cc.gatech.edu/mailman/listinfo/scp-security-seminar
Upcoming Talks
Past Talks
Abstract
We are approaching one decade of (public) Rowhammer research. Do you remember that one time Rowhammer was used to hijack a journalist’s phone remotely? Or that other time when Rowhammer was used to build a 200K USD exploit chain at Pwn2Own? How about that leaked proof-of-concept that was found in a popular exploit kit available on the black market…? While scientific papers on the topic make you feel the apocalypse is near, industry - and the real world - often seem less pessimistic. Who is right? What is going wrong? And where is the disconnect coming from? In this talk, I investigate and share my perspectives on both worlds.
Biography
Victor van der Veen is an engineer in Qualcomm’s Product Security Group. Before joining Qualcomm, he obtained his PhD in the VUSec group at Vrije Universiteit Amsterdam. He was among the first to publicly report Rowhammer bit flips in mobile devices. At Qualcomm, he continued his work on this fundamental issue in modern DRAM. In his ongoing attempts to bring academia and industry closer together, he helped some of our best next-generation scientists to publish their seminal Rowhammer research. Although he is currently trying to move to a different field, his past is starting to catch up and Rowhammer keeps coming back at him.
Abstract
This talk discusses the top ways threat actors are continuing to innovate and evolve across the threat landscape. Across ransomware, supply chain exploits, zero-day attacks, business email compromise, and other evolving risks, threat actors aren’t slowing down. In this session, Col. (USA, Retired) Hensley, former Director of the Army’s Global Network Operations and Security Center, shares keen insights into the nature of today’s threats and vulnerabilities.
Biography
Barry Hensley is the Chief Threat Intelligence Officer of Secureworks, currently responsible for the Secureworks Counter Threat Unit (CTU) Security Research Group and the global Incident Response and Adversary Emulation teams. Before joining Secureworks, Colonel (US Army, Ret.) Barry Hensley was the Director of the Army's Global Network Operations and Security Center (AGNOSC). While at the AGNOSC, he was responsible for directing the operations and defense of the Army's portion of the Global Information Grid (GIG), which serves over 1.2 million users. He served in various leadership positions in the communications and information security career field throughout his 24+ year Army career, including assignments with United States Special Operations Command and deployments to Saudi Arabia, Kuwait, Somalia, and Iraq. He commanded the 57th Signal Battalion in support of the Multi-National Force – Iraq (MNF–I) as part of Operation IRAQI FREEDOM (OIF). He holds a BBA in Information Systems from Georgia Southern University and an M.S. in Telecommunication from the University of Colorado, and is a graduate of the National War College. He was named the 2009 Georgia Southern University Alumnus of the Year for the College of Information Technology, and Federal Computer Week named him a 2008 "Federal 100" winner, a select group of top executives in the federal IT industry.
Abstract
JavaScript cross-platform frameworks are becoming increasingly popular. They help developers conveniently build cross-platform applications from a single JavaScript codebase. Recent security reports showed that several high-profile cross-platform applications (e.g., Slack, Microsoft Teams, and GitHub Atom) suffered injection issues, often introduced through cross-site scripting (XSS) or embedded untrusted remote content such as ads. These injections open security holes for remote web attackers and cause serious security risks, such as allowing injected malicious code to run arbitrary local executables on victim devices (referred to as XRCE attacks). However, until now, XRCE vectors, behaviors, and root causes were rarely studied and understood. Although cross-platform framework developers and the community responded quickly by offering multiple security features and suggestions, these mitigations were proposed empirically, with unknown effectiveness. In this paper, we conduct the first systematic study of the XRCE vulnerability class in the cross-platform ecosystem. We first build a generic model for different cross-platform applications to reduce their semantic and behavioral gaps. We use this model to (1) study XRCE by comprehensively defining its attack scenarios, surfaces, and behaviors, and (2) investigate the state-of-the-art defenses and verify their weaknesses against XRCE attacks. Our study of 640 real-world cross-platform applications shows that, despite the availability of existing defenses, XRCE widely affects the cross-platform ecosystem: 75% of applications may be impacted by XRCE, including Microsoft Teams. (3) Finally, we propose XGuard, a novel defense technology that automatically mitigates all XRCE variants derived from the XRCE behaviors we identify.
Biography
Feng is a 4th-year PhD student working with Prof. Wenke Lee. His research interests are in program analysis and software security. Specifically, he applies program analysis to software supply-chain ecosystems to detect novel vulnerabilities and mitigate potential exploits. Feng has published three works at top-tier academic conferences such as IEEE S&P.
Abstract
Operational networks commonly rely on machine learning models for many tasks, including detecting anomalies, inferring application performance, and forecasting demand. Yet model accuracy can degrade due to concept drift, whereby the relationship between the features and the target prediction changes. Mitigating concept drift is thus an essential part of operationalizing machine learning models in the context of networking---or regression models in general. Unfortunately, as we show, concept drift cannot be sufficiently mitigated by frequently retraining models on newly available data, and doing so can even degrade model accuracy further. In this paper, we characterize concept drift in a large cellular network for a major metropolitan area in the United States. We find that concept drift occurs across many important key performance indicators (KPIs), independently of the model, training set size, and time interval---thus necessitating practical approaches to detect, explain, and mitigate it. To do so, we develop Local Error Approximation of Features (LEAF) and demonstrate its effectiveness on a variety of KPIs and models. LEAF detects drift, explains the features and time intervals that most contribute to drift, and mitigates drift using forgetting and over-sampling. We evaluate LEAF against industry-standard mitigation approaches (notably, periodic retraining) with more than four years of cellular KPI data. Our initial tests with a major cellular provider in the US show that LEAF is effective on complex, real-world data, consistently outperforming periodic and triggered retraining while reducing costly retraining operations.
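The detect-and-mitigate loop the abstract describes can be illustrated with a minimal sketch (hypothetical function names and thresholds; LEAF's actual local-error-approximation machinery is far more involved): drift is flagged when recent prediction error departs from a baseline, and the training set is rebuilt by forgetting stale samples and over-sampling recent ones.

```python
import statistics

def detect_drift(errors, baseline_window=30, recent_window=10, factor=2.0):
    """Flag concept drift when the model's recent mean error substantially
    exceeds its baseline mean error (a simple stand-in for drift detection)."""
    if len(errors) < baseline_window + recent_window:
        return False
    baseline = statistics.mean(errors[:baseline_window])
    recent = statistics.mean(errors[-recent_window:])
    return recent > factor * max(baseline, 1e-9)

def build_training_set(samples, forget_before, oversample_after, k=3):
    """Mitigate drift with the two techniques the abstract names: forget
    samples older than `forget_before`, and over-sample (k copies) samples
    newer than `oversample_after`. Each sample carries a timestamp `t`."""
    kept = [s for s in samples if s["t"] >= forget_before]
    return [s for s in kept for _ in range(k if s["t"] >= oversample_after else 1)]
```

In practice the detector would run per-KPI and per-feature; this sketch only captures the overall control flow of detect, then forget and over-sample.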
Biography
Shinan Liu is a 4th-year Ph.D. student in the Computer Science Department at the University of Chicago, advised by Prof. Nick Feamster. He is interested in networked systems, security, explainable AI, and measurement. Typical scenarios he has explored include cellular networks, the Internet of Things, and cyber-physical systems. Shinan has published in USENIX Security, and his past work was reported by Forbes, The Wall Street Journal, ACM TechNews, and more. Shinan is also a recipient of the Daniels Fellowship.
Abstract
Machine learning models are becoming increasingly powerful and are actively deployed in security- and privacy-critical systems; however, this shift has inevitably led to abusive machine learning applications that are lucrative for adversaries and harmful to users. Notably, deepfake-generated content is increasingly used in social profiles to construct artificial personas that spread disinformation or perform social engineering attacks on other users in online social networks. Many of these victims are attempting to understand and navigate these security and privacy threats for the first time, requiring adjustments in their behavior and responsibilities. In this talk, I will discuss how end-users perceive this novel threat and attempt to distinguish deepfake social profiles from genuine, human-crafted ones. Through this work, we will discuss the implications for content moderators, social media platforms, and future defenses.
Biography
Jaron Mink is a PhD candidate in the Computer Science Department at the University of Illinois Urbana-Champaign. He received his Bachelor of Science (magna cum laude) in Computer Science from the University of California, Los Angeles in 2019. Jaron investigates computer security and privacy threats, focusing on users’ perception and mitigation of emerging concerns. His work has appeared in venues such as CHI, USENIX Security, IEEE S&P, and WWW, and has been covered by Scientific American and The 21st Show. He is a recipient of the NSF Graduate Research Fellowship (GRFP). Jaron also serves as a consultant to Partnership on AI, investigating ways to better anticipate AI risks. He has spent two summers working with faculty at the Max Planck Institute for Security and Privacy (2021) and the Max Planck Institute for Software Systems (2022).
Abstract
Internet censorship is widespread, impacting citizens of many countries around the world. Recent work has developed techniques that can perform widespread, longitudinal measurements of global Internet manipulation remotely, focusing largely on the scale of censorship measurements with minimal attention to reproducibility and consistency. In this work we explore the role packet headers (e.g., source IP address and source port) play in DNS censorship. By performing a large-scale measurement study building on the techniques deployed by previous and current censorship measurement platforms, we find that the choice of ephemeral source port and local source IP address (e.g., x.x.x.7 vs. x.x.x.8) influences routing, which in turn influences DNS censorship. We show that 37% of IPs across 56% of ASes measured show some change in censorship behavior depending on source port and local source IP. This behavior is frequently all-or-nothing, where the choice of header can result in no observable censorship. Such behavior mimics and could be misattributed to geolocation error, packet loss, or network outages. The scale of censorship differences can more than double depending on the lowest 3 bits of the source IP address, consistent with known router load-balancing techniques. We also observe smaller-scale censorship variation where only a few domains experience censorship differences based on packet parameters. We lastly find that these variations are persistent; packet retries do not control for the observed variation. Our results point to the need for methodological changes in future DNS censorship measurement, which we discuss.
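The load-balancing interaction can be sketched with a toy hash-based ECMP path selector (the additive hash and the addresses below are illustrative stand-ins; real routers use vendor-specific, often CRC-based, hashes over the flow tuple): because the hash mixes in the source address, varying only the lowest 3 bits of the local source IP can walk a flow across different paths, each of which may traverse different censorship infrastructure.

```python
import ipaddress

def toy_ecmp_path(src_ip, src_port, dst_ip, dst_port, n_paths=8):
    """Pick a next hop from a flow tuple the way a hash-based ECMP router
    might. The additive 'hash' is a deliberately simple stand-in for
    vendor hash functions, but it shows how low source-IP bits and the
    ephemeral source port perturb the chosen path."""
    key = (int(ipaddress.ip_address(src_ip)) + src_port
           + int(ipaddress.ip_address(dst_ip)) + dst_port)
    return key % n_paths

# Varying only the lowest 3 bits of the local source IP (x.x.x.0 .. x.x.x.7)
# spreads the flow across all 8 toy paths, consistent with the abstract's
# finding that censorship can differ with exactly those bits.
paths = {toy_ecmp_path(f"192.0.2.{i}", 40000, "198.51.100.53", 53) for i in range(8)}
```

Since the toy hash is additive modulo 8, the eight low-bit variants land on eight distinct paths; a real CRC-based hash would scatter them less predictably, but the measurement consequence is the same.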
Biography
Abhishek Bhaskar is a 4th-year PhD student at SCP working under Dr. Paul Pearce. His research explores the impact of router load balancing on various aspects of network security and measurement. Before beginning his PhD at Georgia Tech, Abhishek obtained his Master's degree from Syracuse University and subsequently worked at GrammaTech.
Abstract
Microarchitectural attacks are side/covert channel attacks which enable leakage/communication as a direct result of hardware optimizations. Secure computation on modern hardware thus requires hardware-software contracts which include in their definition of software-visible state any microarchitectural state that can be exposed via microarchitectural attacks. Defining such contracts has become an active area of research. In this talk, we will present leakage containment models (LCMs)—novel axiomatic hardware-software contracts which support formally reasoning about the security guarantees of programs when they run on particular microarchitectures. Our first contribution is an axiomatic vocabulary for formally defining LCMs, derived from the established axiomatic vocabulary used to formalize processor memory consistency models. Using this vocabulary, we formalize microarchitectural leakage—focusing on leakage through hardware memory systems—so that it can be automatically detected in programs. To illustrate the efficacy of LCMs, we first demonstrate that our leakage definition faithfully captures a sampling of (transient and non-transient) microarchitectural attacks from the literature. Next, we develop a static analysis tool, called Clou, which automatically identifies microarchitectural vulnerabilities in programs given a specific LCM. We use Clou to search for Spectre gadgets in benchmark programs as well as real-world crypto-libraries (OpenSSL and Libsodium), finding new instances of leakage. To promote research on LCMs, we design the Subrosa toolkit for formally defining and automatically evaluating/comparing LCM specifications.
Biography
Nicholas Mosier is a 3rd-year PhD student at Stanford University advised by Caroline Trippel. His research focuses on developing Spectre detection and mitigation techniques that are scalable, efficient, and comprehensive. He is broadly interested in hardware and software security and enjoys bug hunting on the side.
Abstract
Web applications (apps) provide a wide array of utilities that are being abused by malware authors as a replacement for attacker-deployed C&C servers. Stopping this Web App-based Command and Control (WACC) requires collaboration between Incident Responders (IRs) and web app providers. However, little research has been done to prove that WACC malware are prevalent enough to warrant such an investment. To this end, we designed Marcea, a malware analysis pipeline to study the prevalence of WACC. Marcea revealed 487 WACC malware in 72 families abusing 30 web apps over the last 15 years. Our research uncovered the number of WACC malware increased by 5.5 times since 2020 and that 86% did not need to connect to an attacker-deployed C&C server. Our study uncovered patterns indicating how specific web apps attract or disincentivize WACC malware. Moreover, web app engagement data collected by Marcea suggests that these malware are active enough to produce up to 5,844,144 access points. To date, we have used Marcea to collaborate with the web app providers to take down 70% of the active WACC malware.
Biography
Mingxuan Yao is a fourth-year Ph.D. student in the School of Electrical & Computer Engineering (ECE) at the Georgia Institute of Technology, under the guidance of Professor Brendan Saltaformaggio in the Cyber Forensics Innovation (CyFI) Lab. He completed his Master's degree in Cybersecurity before that. His research interests lie in cyber attack forensics and binary analysis techniques. His current research focuses on cyber threats that abuse reputable web services, adopting novel strategies to boost the analysis process.
Abstract
We give the first examples of public-key encryption schemes which can be proven to achieve multi-challenge, multi-user CCA security via reductions that are tight in time, advantage, and memory. Our constructions are obtained by applying the KEM-DEM paradigm to variants of Hashed ElGamal and the Fujisaki-Okamoto transformation that are augmented by adding uniformly random strings to their ciphertexts and/or keys. Our proofs for the augmented ECIES version of Hashed-ElGamal make use of a new computational Diffie-Hellman assumption wherein the adversary is given access to a pairing to a random group, which we believe may be of independent interest.
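The KEM-DEM composition and the "augmented" ciphertext structure can be sketched as follows. This is a structural toy only, with hypothetical names: it performs no public-key operations and makes no security claims, whereas the paper's actual constructions build on Hashed ElGamal and the Fujisaki-Okamoto transformation.

```python
import hashlib, os

def toy_kem_encaps():
    """Structural stand-in for a KEM: a real scheme derives the session key
    from public-key operations; here we just sample it, and model the
    paper's augmentation by attaching a uniformly random string that would
    ride along in the ciphertext."""
    session_key = os.urandom(32)
    random_pad = os.urandom(16)  # the added uniformly random string
    return session_key, random_pad

def keystream(key, n):
    """Expand a key into an n-byte stream via hash-of-counter blocks."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def dem_encrypt(key, msg):
    """DEM: encrypt the payload under the KEM's session key."""
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

dem_decrypt = dem_encrypt  # an XOR stream cipher is its own inverse
```

The full ciphertext in the KEM-DEM pattern is the pair (encapsulation, DEM ciphertext); here the encapsulation is reduced to the random pad, since the toy has no public key to encapsulate against.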
Biography
Akshaya Kumar is a first-year Ph.D. student in Computer Science at the Georgia Institute of Technology's School of Cybersecurity and Privacy where she is advised by Professor Joseph Jaeger. Her research interests include cryptography, information security, and generally, theoretical computer science. Her most recent work focuses on provable security in the memory-aware setting. Her paper on memory-tight proofs for public key encryption schemes was recently accepted at Asiacrypt 2022. She is a part of the Association for Women in Mathematics (AWM), an initiative that promotes women in mathematics.
Abstract
The discovery of the Spectre attack in 2018 sent shockwaves through the computer industry, affecting processor vendors, OS providers, programming language developers, and more. Because web browsers execute untrusted code while potentially accessing sensitive information, they were considered prime targets for attacks and underwent significant changes to protect users from speculative execution attacks. In particular, the Google Chrome browser adopted the strict site isolation policy, which prevents leakage by ensuring that content from different domains is not shared in the same address space. The perceived level of risk that Spectre poses to web browsers stands in stark contrast with the paucity of published demonstrations of the attack. Before mid-March 2021, there was no public proof-of-concept demonstrating leakage of information that is otherwise inaccessible to an attacker. Moreover, Google's Leaky.page, the only proof-of-concept that can read such information at the time of writing, is severely restricted to only a subset of the address space and does not perform cross-website accesses. In this paper, we demonstrate that the absence of published attacks does not indicate that the risk is mitigated. We present Spook.js, a JavaScript-based Spectre attack that can read from the entire address space of the attacking webpage. We further investigate the implementation of strict site isolation in Chrome, and demonstrate limitations that allow Spook.js to read sensitive information from other webpages. We further show that Spectre adversely affects the security model of extensions in Chrome, demonstrating leaks of usernames and passwords from the LastPass password manager. Finally, we show that the problem also affects other Chromium-based browsers, such as Microsoft Edge and Brave.
Biography
Jason Kim is a second-year Ph.D. student advised by Prof. Daniel Genkin at Georgia Tech's School of Cybersecurity and Privacy. Jason's research lies at the intersection of side-channel attacks arising from CPU microarchitecture and how they can be exploited from web browsers. His ultimate goal is to harden web browsers against leaking secrets: billions of people browse the internet on a daily basis and handle sensitive or personal information on the web, yet browsers automatically execute untrusted code served from websites as soon as a user visits the site. Prior to Georgia Tech, Jason graduated from the University of Michigan in 2021 with a Bachelor's in Computer Science. He is an author and presenter of Spook.js, which was published at the 2022 IEEE Symposium on Security and Privacy.
Abstract
In this paper, we perform the first multifaceted measurement study investigating the widespread insecure practices employed by tertiary education institutes (TEIs) around the globe when offering WPA2-Enterprise Wi-Fi services. The security of such services critically hinges on two aspects: (1) the connection configuration on the client side; and (2) the TLS setup on the authentication servers. Weaknesses in either can leave users susceptible to credential theft. Typically, TEIs provide their users either manual instructions or pre-configured profiles (e.g., eduroam CAT). To study the security of configurations, we present a framework in which each configuration is mapped to an abstract security label drawn from a strict partially ordered set. We first used this framework to evaluate the configurations supported by the user interfaces (UIs) of mainstream operating systems (OSs), and discovered many design weaknesses. We then considered 7045 TEIs in 54 countries/regions, and collected 7275 configuration instructions from 2061 TEIs. Our analysis showed that the majority of these instructions lead to insecure configurations, and nearly 86% of those TEIs can suffer from credential theft on at least one OS. We also analyzed a large corpus of pre-configured eduroam CAT profiles and discovered several misconfiguration issues that can negatively impact security. Finally, we evaluated the TLS parameters used by authentication servers of thousands of TEIs and discovered perilous practices, such as the use of expired certificates, deprecated versions of TLS, weak signature algorithms, and suspected cases of private key reuse among TEIs. Our findings have been responsibly disclosed to the relevant stakeholders, and many have already been positively acknowledged.
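The strict-partial-order idea can be illustrated with a toy two-check label (a hypothetical simplification; the paper's actual label set covers many more configuration aspects): some configuration pairs are strictly comparable, while others, strong on one check and weak on the other, are incomparable, which is exactly why the order is partial rather than total.

```python
def label(validates_cert, checks_server_name):
    """A toy security label for a client configuration: does it validate
    the server certificate, and does it check the expected server name?"""
    return (validates_cert, checks_server_name)

def strictly_weaker(a, b):
    """a < b iff a is no stronger than b on every check and strictly
    weaker on at least one. Labels such as (True, False) and (False, True)
    are incomparable under this relation."""
    return all(x <= y for x, y in zip(a, b)) and a != b
```

A configuration mapped to a label strictly weaker than the prescribed one would be flagged as insecure; incomparable labels require a policy decision rather than a mechanical comparison.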
Biography
Man Hong Hue is a first-year Ph.D. student in Computer Science at the Georgia Institute of Technology (School of Cybersecurity and Privacy). His research focuses on network security, Internet measurement, and usable security, with the goal of detecting and addressing large-scale security threats while accounting for human factors. He obtained a Bachelor's degree in Information Engineering from the Chinese University of Hong Kong (CUHK) in 2020. Before joining Georgia Tech, he worked with Prof. Sze Yiu Chau at CUHK and collaborated with Prof. Omar Chowdhury and Prof. Endadul Hoque. His work on the security of WPA2-Enterprise and PKCS#1 v1.5 implementations was published at the ACM Conference on Computer and Communications Security (CCS) in 2021.
Abstract
The popularity of JavaScript has led to a large ecosystem of third-party packages available via the npm software package registry. The open nature of npm has boosted its growth, providing over 800,000 free and reusable software packages. Unfortunately, this open nature also causes security risks, as evidenced by recent incidents of single packages that broke or attacked software running on millions of computers. This paper studies security risks for users of npm by systematically analyzing dependencies between packages, the maintainers responsible for these packages, and publicly reported security issues. Studying the potential for running vulnerable or malicious code due to third-party dependencies, we find that individual packages could impact large parts of the entire ecosystem. Moreover, a very small number of maintainer accounts could be used to inject malicious code into the majority of all packages, a problem that has been increasing over time. Studying the potential for accidentally using vulnerable code, we find that lack of maintenance causes many packages to depend on vulnerable code, even years after a vulnerability has become public. Our results provide evidence that npm suffers from single points of failure and that unmaintained packages threaten large code bases. We discuss several mitigation techniques, such as trusted maintainers and total first-party security, and analyze their potential effectiveness. This talk is based on a USENIX Security 2019 paper by Markus Zimmermann, Cristian-Alexandru Staicu, Cam Tenny, and Michael Pradel.
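The "individual packages could impact large parts of the ecosystem" analysis boils down to computing transitive reverse dependencies. A minimal sketch over a hypothetical miniature registry (package names are made up for illustration):

```python
from collections import deque

def transitive_dependents(dep_graph, package):
    """dep_graph maps each package to the packages it depends on.
    Return every package that directly or transitively depends on
    `package` -- i.e., everything one compromised package could impact."""
    # Invert the graph: who depends on whom.
    reverse = {}
    for pkg, deps in dep_graph.items():
        for d in deps:
            reverse.setdefault(d, set()).add(pkg)
    # Breadth-first search over reverse-dependency edges.
    seen, queue = set(), deque([package])
    while queue:
        cur = queue.popleft()
        for dependent in reverse.get(cur, ()):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

# Hypothetical miniature registry: compromising the low-level `util`
# package impacts both its direct and indirect dependents.
graph = {"app": ["framework"], "framework": ["util"], "util": [], "other": []}
```

The maintainer-reach analysis in the paper is the same computation run from the union of all packages a given maintainer account controls.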
Biography
Pradyumna Shome is a PhD student in Computer Science at the Georgia Institute of Technology, researching hardware security and microarchitectural side-channel attacks. His research has been published at ISCA and won an Honorable Mention at the Intel Hardware Security Academic Award. Prior to his PhD, he graduated with a BS in Computer Science from the University of Illinois Urbana-Champaign, advised by Christopher W. Fletcher, and then worked as a Software Engineer at Meta. He served on the Shadow Program Committee for the IEEE Symposium on Security & Privacy as its sole undergraduate member.

Credit to the Stanford Systems Seminar for the website!