The Georgia Institute of Technology Student Security Seminar, sponsored by the School of Cybersecurity and Privacy, is an opportunity for Georgia Tech faculty and students working in computer security and privacy to present their research and to host guest speakers.
To receive regular updates on upcoming talks, as well as Zoom links to join them virtually, please subscribe to the mailing list.
If you’re interested in presenting a talk, please contact us with a date and talk proposal.
Founder/Organizer: Pradyumna Shome
Email Address: email@example.com
Location: Vinings, 10th Floor, Coda, 756 West Peachtree St NW, Atlanta, GA 30332
Time: Wednesdays at 12pm ET
Abstract: We are approaching one decade of (public) Rowhammer research. Do you remember that one time Rowhammer was used to hijack a journalist’s phone remotely? Or that other time when Rowhammer was used to build a 200K USD exploit chain at Pwn2Own? How about that leaked proof-of-concept that was found in a popular exploit kit available on the black market…? While scientific papers on the topic make you feel the apocalypse is near, industry, and the real world, often seem less pessimistic. Who is right? What is going wrong? And where is the disconnect coming from? In this talk, I investigate and share my perspectives on both worlds.
Biography: Victor van der Veen is an engineer in Qualcomm’s Product Security Group. Before joining Qualcomm, he obtained his PhD in the VUSec group at Vrije Universiteit Amsterdam. He was among the first to publicly report Rowhammer bit flips in mobile devices. At Qualcomm, he continued his work on this fundamental issue in modern DRAM. In his ongoing attempts to bring academia and industry closer together, he helped some of our best next-generation scientists to publish their seminal Rowhammer research. Although he is currently trying to move to a different field, his past is starting to catch up and Rowhammer keeps coming back at him.
Abstract: This talk discusses the top ways threat actors are continuing to innovate and evolve across the threat landscape. Across ransomware, supply chain exploits, zero-day attacks, business email compromise, and other evolving risks, threat actors aren’t slowing down. In this session, Col. (USA, Retired) Hensley, former Director of the Army’s Global Network Operations and Security Center, shares keen insights into the nature of today’s threats and vulnerabilities.
Biography: Barry Hensley is the Chief Threat Intelligence Officer of Secureworks and is currently responsible for the Secureworks Counter Threat Unit (CTU) Security Research Group and the global Incident Response and Adversary Emulation teams. Before joining Secureworks, Colonel (US Army, Ret.) Barry Hensley was the Director of the Army's Global Network Operations and Security Center (AGNOSC). While at the AGNOSC, he was responsible for directing the operations and defense of the Army's portion of the Global Information Grid (GIG), which consists of over 1.2 million users. He served in various leadership positions within the communications and information security career field throughout his 24+ year Army career, including assignments with United States Special Operations Command and deployments to Saudi Arabia, Kuwait, Somalia, and Iraq. He commanded the 57th Signal Battalion in support of the Multi-National Force – Iraq (MNF–I) as part of Operation IRAQI FREEDOM (OIF). He holds a BBA in Information Systems from Georgia Southern University and an M.S. in Telecommunications from the University of Colorado, and is a graduate of the National War College. He was named the 2009 Georgia Southern University Alumnus of the Year for the College of Information Technology and was named by Federal Computer Week as a 2008 "Federal 100" winner, a select group of top executives in the Federal IT industry.
Biography: Feng is a 4th-year PhD student working with Prof. Wenke Lee. His research interests are in program analysis and software security. Specifically, he applies program analysis to software supply chain ecosystems to detect novel vulnerabilities and mitigate potential exploits. Feng has published three works at top-tier academic conferences such as IEEE S&P.
Abstract: Operational networks commonly rely on machine learning models for many tasks, including detecting anomalies, inferring application performance, and forecasting demand. Yet model accuracy can degrade due to concept drift, whereby the relationship between the features and the target prediction changes. Mitigating concept drift is thus an essential part of operationalizing machine learning models in the context of networking, or regression models in general. Unfortunately, as we show, concept drift cannot be sufficiently mitigated by frequently retraining models using newly available data, and doing so can even degrade model accuracy further. In this paper, we characterize concept drift in a large cellular network for a major metropolitan area in the United States. We find that concept drift occurs across many important key performance indicators (KPIs), independently of the model, training set size, and time interval, thus necessitating practical approaches to detect, explain, and mitigate it. To do so, we develop Local Error Approximation of Features (LEAF) and demonstrate its effectiveness on a variety of KPIs and models. LEAF detects drift, explains the features and time intervals that most contribute to drift, and mitigates drift using forgetting and over-sampling. We evaluate LEAF against industry-standard mitigation approaches (notably, periodic retraining) with more than four years of cellular KPI data. Our initial tests with a major cellular provider in the US show that LEAF is effective on complex, real-world data. LEAF consistently outperforms periodic and triggered retraining while reducing costly retraining operations.
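The core idea of drift detection can be illustrated with a toy sketch: track a model's prediction errors over time and flag drift when recent error grows well beyond the baseline. This is only a stand-in for intuition, not LEAF's actual local error approximation; the function name, window sizes, and threshold below are invented for illustration.

```python
from statistics import mean

def detect_drift(errors, baseline_window=50, recent_window=10, factor=2.0):
    """Toy drift detector: flag drift when the mean error over the most
    recent window exceeds `factor` times the baseline mean error.
    (Illustrative only; not the LEAF algorithm from the talk.)"""
    if len(errors) < baseline_window + recent_window:
        return False  # not enough history to compare
    baseline = mean(errors[:baseline_window])
    recent = mean(errors[-recent_window:])
    return recent > factor * baseline

# A stable error stream is not flagged; one that degrades late is.
stable = [1.0] * 60
drifting = [1.0] * 50 + [3.5] * 10
print(detect_drift(stable))    # False
print(detect_drift(drifting))  # True
```

A real system would also need to explain *which* features drive the drift and decide how to retrain, which is precisely where the paper's contribution lies.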
Biography: Shinan Liu is a 4th-year Ph.D. student in the Computer Science Department at the University of Chicago, advised by Prof. Nick Feamster. He is interested in networked systems, security, explainable AI, and measurement. Typical scenarios he has explored include cellular networks, the Internet of Things, and cyber-physical systems. Shinan has published papers at USENIX Security, and his past work has been reported by Forbes, The Wall Street Journal, ACM TechNews, and more. Shinan is also a recipient of the Daniels Fellowship.
Abstract: Machine learning models are becoming increasingly powerful and are actively deployed in security- and privacy-critical systems; however, this shift has inevitably led to abusive machine learning applications that are lucrative for adversaries and harmful to users. Notably, deepfake-generated content has been increasingly used in social profiles to construct artificial personas that serve disinformation or perform social engineering attacks on other users in online social networks. Many of these victims are attempting to understand and navigate these security and privacy threats for the first time, requiring adjustments in their behavior and responsibilities. In this talk, I will discuss how end-users perceive the novel threat of deepfake social profiles and how well they can distinguish them from genuine, human-crafted ones. Through this work, we will discuss the implications for content moderators, social media platforms, and future defenses.
Biography: Jaron Mink is a PhD candidate in the Computer Science Department at the University of Illinois Urbana-Champaign. He received his Bachelor of Science (magna cum laude) in Computer Science from the University of California, Los Angeles in 2019. Jaron investigates computer security and privacy threats and focuses on users’ perception and mitigation of emerging concerns. His work has appeared in venues such as CHI, USENIX Security, IEEE S&P, and WWW, and has been reported on by Scientific American and The 21st Show. He is a recipient of the NSF Graduate Research Fellowship (GRFP). Jaron also serves as a consultant to the Partnership on AI, investigating ways to better anticipate AI risks. He has spent two summers working with faculty at the Max Planck Institute for Security and Privacy (2021) and the Max Planck Institute for Software Systems (2022).
Abstract: Internet censorship is widespread, impacting citizens of hundreds of countries around the world. Recent work has developed techniques that can perform widespread, longitudinal measurements of global Internet manipulation remotely, focusing largely on the scale of censorship measurements with minimal attention to reproducibility and consistency. In this work, we explore the role packet headers (e.g., source IP address and source port) play in DNS censorship. Through a large-scale measurement study building on the techniques deployed by previous and current censorship measurement platforms, we find that the choice of ephemeral source port and local source IP address (e.g., x.x.x.7 vs. x.x.x.8) influences routing, which in turn influences DNS censorship. We show that 37% of IPs across 56% of ASes measured show some change in censorship behavior depending on source port and local source IP. This behavior is frequently all-or-nothing, where the choice of header can result in no observable censorship. Such behavior mimics, and could be misattributed to, geolocation error, packet loss, or network outages. The scale of censorship differences can more than double depending on the lowest 3 bits of the source IP address, consistent with known router load balancing techniques. We also observe smaller-scale censorship variation where only a few domains experience censorship differences based on packet parameters. We lastly find that these variations are persistent; packet retries do not control for the observed variation. Our results point to the need for methodological changes in future DNS censorship measurement, which we discuss.
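The load-balancing effect the abstract describes can be sketched with a toy model of hash-based ECMP routing: routers commonly hash a flow's 5-tuple to pick one of several equal-cost next hops, so flipping low source-IP bits or the ephemeral port can move a probe onto a different path and past a different middlebox. The hash below is an arbitrary stand-in (real routers use vendor-specific, often proprietary, hash functions), and the eight-path count simply mirrors the "lowest 3 bits" observation.

```python
import hashlib

def ecmp_bucket(src_ip, dst_ip, src_port, dst_port, n_paths=8):
    """Toy hash-based ECMP: map a flow's addressing fields to one of
    n_paths equal-cost next hops. Illustrative only; real routers use
    vendor-specific hashes over varying header fields."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_paths

# Two DNS probes differing only in the last octet of the source IP
# (x.x.x.7 vs x.x.x.8) may hash to different paths, and thus may
# traverse different censorship infrastructure.
b7 = ecmp_bucket("192.0.2.7", "198.51.100.1", 40000, 53)
b8 = ecmp_bucket("192.0.2.8", "198.51.100.1", 40000, 53)
print(b7, b8)
```

Because the bucket is deterministic per flow, repeated retries of the *same* probe keep taking the same path, which matches the paper's finding that retries do not control for the variation.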
Biography: Abhishek Bhaskar is a 4th-year PhD student at SCP working under Dr. Paul Pearce. His research explores the impact of router load balancing on various aspects of network security and measurement. Before beginning his PhD at Georgia Tech, Abhishek obtained his Master's degree from Syracuse University and subsequently worked at GrammaTech.
Abstract: Microarchitectural attacks are side/covert channel attacks that enable leakage/communication as a direct result of hardware optimizations. Secure computation on modern hardware thus requires hardware-software contracts which include, in their definition of software-visible state, any microarchitectural state that can be exposed via microarchitectural attacks. Defining such contracts has become an active area of research. In this talk, we will present leakage containment models (LCMs): novel axiomatic hardware-software contracts which support formally reasoning about the security guarantees of programs when they run on particular microarchitectures. Our first contribution is an axiomatic vocabulary for formally defining LCMs, derived from the established axiomatic vocabulary used to formalize processor memory consistency models. Using this vocabulary, we formalize microarchitectural leakage, focusing on leakage through hardware memory systems, so that it can be automatically detected in programs. To illustrate the efficacy of LCMs, we first demonstrate that our leakage definition faithfully captures a sampling of (transient and non-transient) microarchitectural attacks from the literature. Next, we develop a static analysis tool, called Clou, which automatically identifies microarchitectural vulnerabilities in programs given a specific LCM. We use Clou to search for Spectre gadgets in benchmark programs as well as real-world crypto libraries (OpenSSL and Libsodium), finding new instances of leakage. To promote research on LCMs, we design the Subrosa toolkit for formally defining and automatically evaluating and comparing LCM specifications.
Biography: Nicholas Mosier is a 3rd-year PhD student at Stanford University advised by Caroline Trippel. His research focuses on developing Spectre detection and mitigation techniques that are scalable, efficient, and comprehensive. He is broadly interested in hardware and software security and enjoys bug hunting on the side.
Abstract: Web applications (apps) provide a wide array of utilities that are being abused by malware authors as a replacement for attacker-deployed C&C servers. Stopping this Web App-based Command and Control (WACC) requires collaboration between Incident Responders (IRs) and web app providers. However, little research has been done to show that WACC malware are prevalent enough to warrant such an investment. To this end, we designed Marcea, a malware analysis pipeline, to study the prevalence of WACC. Marcea revealed 487 WACC malware samples in 72 families abusing 30 web apps over the last 15 years. Our research uncovered that the number of WACC malware samples has increased 5.5-fold since 2020 and that 86% did not need to connect to an attacker-deployed C&C server. Our study also uncovered patterns indicating how specific web apps attract or disincentivize WACC malware. Moreover, web app engagement data collected by Marcea suggests that these malware are active enough to produce up to 5,844,144 access points. To date, we have used Marcea to collaborate with web app providers to take down 70% of the active WACC malware.
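To make the WACC idea concrete, here is a toy heuristic (not Marcea's actual pipeline, which the abstract does not detail): a sample whose network traffic touches only well-known web apps, with no attacker-registered domain in sight, is a candidate for web-app-based command and control. The allow-list and function name below are invented for illustration.

```python
# Hypothetical allow-list of popular web apps; the study's actual set of
# 30 abused apps is not reproduced here.
WEB_APPS = {"pastebin.com", "telegram.org", "dropbox.com", "github.com"}

def uses_only_webapp_c2(contacted_domains):
    """Toy WACC heuristic: return True when a sample contacted at least
    one domain and every contacted domain is a well-known web app,
    i.e., it never needed an attacker-deployed C&C server."""
    domains = set(contacted_domains)
    return bool(domains) and domains <= WEB_APPS

print(uses_only_webapp_c2(["pastebin.com", "dropbox.com"]))    # True
print(uses_only_webapp_c2(["pastebin.com", "evil-c2.example"]))  # False
```

Real detection is of course far harder: benign software also talks to these services, so a pipeline like Marcea must analyze *how* the app is used, not merely *that* it is contacted.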
Biography: Mingxuan Yao is a fourth-year Ph.D. student in the School of Electrical & Computer Engineering (ECE) at the Georgia Institute of Technology, under the guidance of Professor Brendan Saltaformaggio in the Cyber Forensics Innovation (CyFI) Lab. Before that, he completed his Master's degree in Cybersecurity. His research interests lie in cyber attack forensics and binary analysis techniques. His current research focuses on cyber threats that abuse reputable web services, adopting novel strategies to accelerate the analysis process.
Abstract: We give the first examples of public-key encryption schemes which can be proven to achieve multi-challenge, multi-user CCA security via reductions that are tight in time, advantage, and memory. Our constructions are obtained by applying the KEM-DEM paradigm to variants of Hashed ElGamal and the Fujisaki-Okamoto transformation that are augmented by adding uniformly random strings to their ciphertexts and/or keys. Our proofs for the augmented ECIES version of Hashed ElGamal make use of a new computational Diffie-Hellman assumption wherein the adversary is given access to a pairing to a random group, which we believe may be of independent interest.
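For readers unfamiliar with the KEM-DEM paradigm the abstract builds on, the following is a deliberately insecure toy sketch: a Diffie-Hellman key encapsulation (in the spirit of Hashed ElGamal) produces a symmetric key, which a simple hash-keystream cipher then uses to encrypt the message. The 64-bit group, key derivation, and cipher are all simplifications, and the paper's random-string augmentation and Fujisaki-Okamoto transform are omitted entirely.

```python
import hashlib
import secrets

# Toy parameters: 2^64 - 59 is prime, but far too small to be secure.
P = 2**64 - 59
G = 5

def keygen():
    """Toy DH keypair: secret exponent sk, public value g^sk mod p."""
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def kem_encap(pk):
    """KEM: fresh ephemeral exponent r; ciphertext g^r, key H(pk^r)."""
    r = secrets.randbelow(P - 2) + 1
    ct = pow(G, r, P)
    key = hashlib.sha256(str(pow(pk, r, P)).encode()).digest()
    return ct, key

def kem_decap(sk, ct):
    """Recover the same key from the KEM ciphertext: H(ct^sk)."""
    return hashlib.sha256(str(pow(ct, sk, P)).encode()).digest()

def dem(key, msg):
    """Toy DEM: XOR with a SHA-256 counter-mode keystream (no integrity)."""
    stream = b""
    counter = 0
    while len(stream) < len(msg):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(m ^ s for m, s in zip(msg, stream))

sk, pk = keygen()
kem_ct, k_enc = kem_encap(pk)
ciphertext = dem(k_enc, b"hello, hashed ElGamal")
k_dec = kem_decap(sk, kem_ct)
print(dem(k_dec, ciphertext))  # b'hello, hashed ElGamal'
```

The point of the paper is not this composition itself (which is standard) but proving its security with reductions that are tight in memory as well as time and advantage.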
Biography: Akshaya Kumar is a first-year Ph.D. student in Computer Science at the Georgia Institute of Technology's School of Cybersecurity and Privacy, where she is advised by Professor Joseph Jaeger. Her research interests include cryptography, information security, and, more generally, theoretical computer science. Her most recent work focuses on provable security in the memory-aware setting. Her paper on memory-tight proofs for public-key encryption schemes was recently accepted at Asiacrypt 2022. She is a part of the Association for Women in Mathematics (AWM), an initiative that promotes women in mathematics.
Biography: Jason Kim is a second-year Ph.D. student advised by Prof. Daniel Genkin at Georgia Tech's School of Cybersecurity and Privacy. Jason's research focuses on side-channel attacks arising from CPU microarchitecture and how they can be exploited from web browsers. His ultimate goal is to harden web browsers against leaking secrets: billions of people browse the internet daily and handle sensitive or personal information on the web, yet browsers automatically execute untrusted code served from websites as soon as a user visits a site. Prior to Georgia Tech, Jason graduated from the University of Michigan in 2021 with a Bachelor's in Computer Science. He is an author and presenter of Spook.js, which was published at the 2022 IEEE Symposium on Security and Privacy.
Abstract: In this paper, we perform the first multifaceted measurement study to investigate the widespread insecure practices employed by tertiary education institutes (TEIs) around the globe when offering WPA2-Enterprise Wi-Fi services. The security of such services critically hinges on two aspects: (1) the connection configuration on the client side; and (2) the TLS setup on the authentication servers. Weaknesses in either can leave users susceptible to credential theft. Typically, TEIs provide their users with either manual instructions or pre-configured profiles (e.g., eduroam CAT). To study the security of configurations, we present a framework in which each configuration is mapped to an abstract security label drawn from a strict partially ordered set. We first used this framework to evaluate the configurations supported by the user interfaces (UIs) of mainstream operating systems (OSs) and discovered many design weaknesses. We then considered 7045 TEIs in 54 countries/regions and collected 7275 configuration instructions from 2061 TEIs. Our analysis showed that the majority of these instructions lead to insecure configurations, and that nearly 86% of those TEIs can suffer from credential theft on at least one OS. We also analyzed a large corpus of pre-configured eduroam CAT profiles and discovered several misconfiguration issues that can negatively impact security. Finally, we evaluated the TLS parameters used by the authentication servers of thousands of TEIs and discovered perilous practices, such as the use of expired certificates, deprecated versions of TLS, weak signature algorithms, and suspected cases of private key reuse among TEIs. Our long list of findings has been responsibly disclosed to the relevant stakeholders, many of whom have already positively acknowledged them.
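The server-side checks described above can be sketched as a simple audit over already-parsed TLS parameters. The record layout, field names, and reference date below are invented for illustration; the study itself derives such data from live scans of TEI authentication servers.

```python
from datetime import date

# Versions and signature algorithms widely considered deprecated/weak.
DEPRECATED_TLS = {"SSLv3", "TLSv1.0", "TLSv1.1"}
WEAK_SIG_ALGS = {"md5WithRSAEncryption", "sha1WithRSAEncryption"}

def audit_server(record, today=date(2023, 3, 1)):
    """Toy audit of one authentication server's TLS parameters, flagging
    the classes of perilous practice reported in the study. The `record`
    dict layout is a hypothetical stand-in for parsed scan output."""
    findings = []
    if record["tls_version"] in DEPRECATED_TLS:
        findings.append("deprecated TLS version")
    if record["cert_not_after"] < today:
        findings.append("expired certificate")
    if record["sig_alg"] in WEAK_SIG_ALGS:
        findings.append("weak signature algorithm")
    return findings

server = {"tls_version": "TLSv1.0",
          "cert_not_after": date(2021, 6, 30),
          "sig_alg": "sha256WithRSAEncryption"}
print(audit_server(server))  # ['deprecated TLS version', 'expired certificate']
```

Detecting private key reuse, the fourth practice the abstract mentions, requires comparing public keys *across* servers rather than auditing each record in isolation.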
Biography: Man Hong Hue is a first-year Ph.D. student in Computer Science at the Georgia Institute of Technology (School of Cybersecurity and Privacy). His research focuses on network security, Internet measurement, and usable security. His goal is to detect and address large-scale security threats and issues while taking human factors into account. He obtained a Bachelor's in Information Engineering from the Chinese University of Hong Kong (CUHK) in 2020. Before joining Georgia Tech, he worked with Prof. Sze Yiu Chau at CUHK and collaborated with Prof. Omar Chowdhury and Prof. Endadul Hoque. His work on the security of WPA2-Enterprise and PKCS#1 v1.5 implementations was published at the ACM Conference on Computer and Communications Security (CCS) in 2021.
Biography: Pradyumna Shome is a PhD student in Computer Science at the Georgia Institute of Technology, researching hardware security and microarchitectural side-channel attacks. His research has been published at ISCA and won an Honorable Mention at the Intel Hardware Security Academic Award. Prior to his PhD, he graduated with a BS in Computer Science from the University of Illinois Urbana-Champaign, advised by Christopher W. Fletcher, and then worked as a Software Engineer at Meta. He served on the Shadow Program Committee for the IEEE Symposium on Security & Privacy as its sole undergraduate member.
Credit to the Stanford Systems Seminar for the website!