ACM Conference on Computer and Communications Security 2020
This year's ACM CCS was hosted virtually in gather.town, a 2D platform reminiscent of the old Pokémon games: every attendee is an avatar that can walk between the session rooms and plenary rooms.
Machine Learning and Security: The Good, The Bad, and The Ugly
Wenke Lee, Georgia Institute of Technology
Machine learning has been used in security since the 1980s, starting with anomaly detection and nowadays using deep learning. It is especially useful in intrusion detection: human-classified threats can serve as training data for a machine learning model. Machine learning is now a standard approach for security vendors and research (e.g. user authentication and malware analysis). Recently, it has also been shown that machine learning models can be attacked and fooled.
The good: machine learning is a way to automate the defense part of security in terms of analyzing malware etc.
The bad: adversarial machine learning provides an automatic way to circumvent the machine learning mechanisms used in defense. Additionally, attackers can use machine learning to simplify their task when attacks require many complicated steps and decisions about which steps to take first. Machine learning has been shown to improve password brute-forcing, the camouflaging of malware network traffic, and phishing. One of the best-known uses of machine learning in attacks is deepfakes. In general, machine learning can increase the camouflage, persistence, and generalization of attacks.
Attackers can use ML to generate suspicious network traffic that mistrains the intrusion detection system. Another example: by changing a couple of pixels, a stop sign can be classified as a maximum speed limit sign (JPEG compression actually foils this particular attack).
Security people know that we are in a perpetual cycle where attackers adapt to the defenses and defenses are adapted to the attacks; the same applies to machine learning. We also need to design robust machine learning models and make sure that humans remain in the loop, both during training and during deployment. Critical decisions should still be made by humans.
To keep humans in the loop, models should be explainable. For image classification, for example, there is a tool called SHAP that highlights which parts of the image were used in the classification, so that a human can verify the result.
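SHAP is built on Shapley values from cooperative game theory: a feature's attribution is its average marginal contribution to the model output over all feature subsets. The sketch below is my own toy illustration of that underlying idea (it is not the SHAP library, which uses efficient approximations); it computes exact Shapley values for a tiny model by brute force over subsets, masking absent features with a baseline.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, baseline, x):
    """Exact Shapley values for a small feature set: each feature's
    weighted average marginal contribution over all subsets of the
    other features (brute force, so only feasible for few features)."""
    n = len(x)

    def value(subset):
        # Features outside the subset are replaced by baseline values.
        masked = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(masked)

    phis = []
    for i in range(n):
        phi = 0.0
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for s in combinations(others, k):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (value(set(s) | {i}) - value(set(s)))
        phis.append(phi)
    return phis

# Toy linear model: each attribution equals weight * (x_i - baseline_i).
model = lambda v: 2 * v[0] + 3 * v[1] - v[2]
vals = shapley_values(model, baseline=[0, 0, 0], x=[1, 1, 1])
assert all(abs(p - w) < 1e-9 for p, w in zip(vals, [2, 3, -1]))
```

For a linear model the attributions recover the weights exactly, which is why the sanity check above works; for nonlinear models the subsets genuinely matter.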
The ugly: it is hard to prioritize all the incidents that a model flags. Also, we should avoid defenses that rely solely on machine learning.
Realistic Threats and Realistic Users: Lessons from the Election
Alex Stamos, Stanford University ([email protected])
The Stamos hierarchy of the actual bad stuff that happens online to real people: the vast majority is abuse rather than information security. Abuse is a technically correct use of a product that nonetheless harms people, and it cannot be solved by computer scientists alone. The most important problems in information security are password reuse, lack of patching, configuration errors, and old app vulnerabilities. The stuff discussed at ACM CCS has very little impact. How do we reframe our security research to tackle the actual big problems?
Stamos works with the Election Integrity Partnership. Each small county is in charge of protecting against attacks from state adversaries. The partnership is integrated directly with social media and has handled over 1000 incidents. They saw three themes: election day challenges, post-election uncertainty, and red waves/blue shifts. Examples of misinformation are claims that votes were switched from one candidate to another (which is extremely rare) and data-entry errors at media organizations being presented as evidence of inconsistencies. Sharpie-gate, even though it is untrue, showed that misinformation spreads far faster than factual responses to it. This is very different from 2016, when foreign state actors used botnets to amplify messages; in 2020 actual influencers spread this misinformation.
Misinformation is fueled by misunderstanding of computer security. The overall impression among the public, fed by all the news of ransomware and the continuous patching of their computers, is that every piece of software is broken all the time. In essence, security has created a priesthood, so that people cannot reason about computer security for themselves.
Why is software still so bad? We still have not solved memory bugs, and the bug density in critical software is still very high. How do we stop building security tightropes? We need to make security systems resistant to failure. How are we going to understand and prevent harms while respecting privacy? Cryptography needs to build an intellectual framework for this tradeoff. How do we train the next generation to make different mistakes? Security classes should be mandatory in the undergraduate degree, and there should also be courses on the abuse side of computer systems (e.g. building a moderation bot for Slack).
Session 1D: Applied Cryptography and Cryptanalysis
How do we make primality tests robust? There have been ways to construct composite numbers that fool cryptographic libraries into thinking they are prime, causing insecure values to be used in cryptographic operations. This stems from the fact that cryptographic libraries use fast primality tests that are not one hundred percent accurate. There are multiple techniques for fast primality tests; they usually check trivial divisibility by small primes before running the slower probabilistic test. The authors improve the fast primality test in OpenSSL by checking fewer small primes before doing a provable primality test. Lastly, they also make recommendations on how to improve cryptographic APIs.
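The structure shared by such tests can be sketched as follows (illustrative Python, not OpenSSL's actual code): trial division by a few small primes followed by Miller-Rabin rounds. The tunable knobs, how many small primes and how many rounds, are exactly where the speed/accuracy tradeoff that the paper discusses lives; adversarially chosen composites exploit configurations with too few rounds or fixed bases.

```python
import random

SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

def is_probable_prime(n, rounds=64):
    """Trial division by a few small primes, then Miller-Rabin with
    random bases. Error probability per round is at most 1/4, so 64
    rounds make a false 'prime' astronomically unlikely."""
    if n < 2:
        return False
    for p in SMALL_PRIMES:
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

Random bases matter: 3215031751, for instance, is a strong pseudoprime to the fixed bases 2, 3, 5, and 7, so a test hardwired to those bases would wrongly accept it.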
ProMACs: Progressive and Resynchronizing MACs for Continuous Efficient Authentication of Message Streams
Instead of adding a MAC to each message, make the MAC dependent on all previous packets, so that one MAC can validate all previous packets. They then split this dependent MAC over multiple packets, which has a similar effect to using a truncated MAC. They have 4 requirements: direct authentication of each message, lower communication cost, a higher security level than truncated MACs, and the possibility of resynchronization. Their construct holds the previous states and allows resynchronization. As long as the message loss is not close to zero, this construct is a lot better than truncated MACs.
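A minimal sketch of the chaining idea (the class and function names are mine, not the paper's, and resynchronization is omitted): each short per-packet tag is computed over an internal state that has absorbed every previous message, so verifying the latest tag also vouches for the whole stream.

```python
import hashlib
import hmac

class ProgressiveMAC:
    """Toy progressive MAC: the state chains over all messages seen so
    far, and each packet carries a short tag over that state."""
    def __init__(self, key, tag_bytes=4):
        self.key = key
        self.tag_bytes = tag_bytes        # short tag, like a truncated MAC
        self.state = b"\x00" * 32

    def tag(self, message):
        self.state = hashlib.sha256(self.state + message).digest()
        full = hmac.new(self.key, self.state, hashlib.sha256).digest()
        return full[: self.tag_bytes]

def verify_stream(key, packets):
    """Recompute the chained tags. A forged or modified packet anywhere
    invalidates every subsequent tag, unlike independent truncated MACs."""
    mac = ProgressiveMAC(key)
    return all(hmac.compare_digest(mac.tag(m), t) for m, t in packets)

key = b"\x01" * 16
sender = ProgressiveMAC(key)
packets = [(m, sender.tag(m)) for m in [b"hello", b"world", b"!"]]
assert verify_stream(key, packets)
```

The security gain over plain truncated MACs comes from exactly this chaining: forging one 4-byte tag is easy in isolation, but here the forgery must also survive every later tag in the stream.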
ECDSA is one of the most popular signature schemes today. Attacks usually work by discovering a small part of the secret key and then using the hidden number problem to find the rest. The authors present a novel cache attack on Montgomery scalar multiplication in OpenSSL and make improvements to the Fourier analysis. Their attack uses a Flush+Reload cache side channel. They use this information to perform Bleichenbacher's attack: quantify the modular bias of the randomness and find a candidate secret key that leads to that bias. One key takeaway is that even less than 1 bit of leakage can break cryptosystems.
These encryption schemes are used for video streaming on the internet. A naive approach can allow attackers to arbitrarily extend streams; current work adds a "finished" flag to the nonce to stop this. But how do you allow random-access decryption without needing to know the rest of the ciphertext? They develop nOAE2, which allows random-access decryption and multiple users.
Session 2D: Mobile Security
Fake base stations (FBS) are used to send spam messages to mobile phones. Even for phones that use 3G and above, a downgrade attack can force them onto vulnerable 2G. The authors categorize spam messages and find 7,884 spam campaigns. Spam is most common in provincial population hubs. By looking at sending frequency they find that different spam campaigns can use the same FBS, and the customer service contacts for these campaigns are also often shared.
Android malware usually repackages known APKs and adds malicious payloads. App virtualization has made this a threat again. Their system first detects whether app virtualization is used and then determines whether user consent has been obtained.
Deploying Android Security Updates: An Extensive Study Involving Manufacturers, Carriers, and End Users
The Android software update ecosystem is fragmented and complicated. This work looks at security updates and when they are adopted by Google, by device manufacturers, by carriers, and by users. Manufacturer delays have not changed much in the past decade. User delay is nearly negligible.
Android One, Project Treble, and Project Mainline are efforts to decrease the manufacturer delay. Project Treble does not decrease the manufacturer delay enough, but it does improve upgrades to the latest Android version.
Security updates should include CVEs, and the relationship with carriers should not be a black box.
App-in-app allows sub-apps to be installed in a host app. Examples of host apps are WeChat and TikTok; sub-apps include banking apps etc. There is a sub-app permission check library to help host apps. The problem with app-in-app is that it cannot reuse the permission model, user interface model, or lifecycle management of the main OS. This work found inconsistencies between permissions in the sub-app API and the system API. They also find unclear OS security requirements, such as access to Bluetooth scanning. And if there are discrepancies between different platforms, what do you do in the sub-app API?
Session 3E: Fuzzing/Trusted Execution Environments
SQUIRREL: Testing Database Management Systems with Language Validity and Coverage Feedback
Database management systems are diverse and complex. Their system, Squirrel, generates syntactically correct queries and adopts feedback mechanisms to prioritize interesting queries. It uses an intermediate representation to build a dependency graph and generate semantically valid queries, for example only querying an entry that actually exists in the database. Their tool has been used to confirm 63 bugs in existing systems.
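The dependency idea can be illustrated with a toy generator (my own sketch, not Squirrel's actual IR or mutation engine): by tracking which tables and columns have been created, every emitted query references only objects that exist, so the fuzzer's inputs survive the parser and semantic checks and reach deeper DBMS code.

```python
import random
import sqlite3

def generate_queries(rng, n=5):
    """Toy dependency-aware query generation: remember created
    tables/columns so later queries only reference existing objects."""
    tables = {}   # table name -> list of column names
    queries = []
    for _ in range(n):
        if not tables or rng.random() < 0.4:
            # Create a fresh table and record it in the dependency state.
            name, cols = f"t{len(tables)}", ["a", "b"]
            tables[name] = cols
            defs = ", ".join(c + " INT" for c in cols)
            queries.append(f"CREATE TABLE {name} ({defs})")
        else:
            # Only query tables and columns that actually exist.
            name = rng.choice(sorted(tables))
            col = rng.choice(tables[name])
            queries.append(f"SELECT {col} FROM {name} WHERE {col} > {rng.randint(0, 9)}")
    return queries

# Every generated query executes without syntax or semantic errors.
rng = random.Random(0)
db = sqlite3.connect(":memory:")
for q in generate_queries(rng, 8):
    db.execute(q)
```

A generator without this state would mostly produce queries that the DBMS rejects immediately, wasting fuzzing cycles on the parser instead of the execution engine.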
FREEDOM: Engineering a State-of-the-Art DOM Fuzzer
BlackMirror: Preventing Wallhacks in 3D Online FPS Games
Cheaters are bad for games, and there is a constant arms race between anti-cheating techniques and cheaters. A wallhack is a way to see behind a wall (i.e. see invisible state). To solve this, invisible state should be stored in a secure region and kept away from the GPU. BlackMirror uses Intel SGX to prevent this type of cheating: it performs the visibility test inside the enclave and only then sends the data to the GPU for rendering. This defeats the wallhack.
Cache-in-the-Middle (CITM) Attacks : Manipulating Sensitive Data in Isolated Execution Environments
Isolated execution environments (IEEs) on an Arm system create a protected area in the normal world without putting it in the secure world. The problem is that these IEEs depend on insecure caches, unlike the secure cores. One attack they mention leaks data via the cache coherence protocol between L1 caches. Another attack fools an IEE into reading an attacker-written value from the insecure cache instead of from secure memory. A malicious OS can also read secret data directly from the cache. Countermeasures include secure cache attributes, cache cleaning operations, and enforcing secure cache attributes.
Session 4C: Kernel Security
PDiff: Semantic-based Patch Presence Testing for Downstream Kernels
Vendors might not patch vulnerabilities on time, and they do not say which patches they have applied. The goal is a tool that detects which patches are present in a kernel. Challenges: patches may be only a minor change, there may be variance between the mainstream and downstream versions, and vendors might use non-default configurations. Their tool considers semantic-level properties of patch-affected regions and removes patch-unrelated code. It takes a pre-patch reference, a post-patch reference, and a target, and returns whether the target is closer to the pre-patch or the post-patch version. The tool has a lower false-negative rate than the state of the art and no false positives. It can be used to analyze which vendors are not patching their kernels.
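The three-way decision can be sketched like this (a toy of my own: PDiff's real signatures are semantic summaries of patch-affected code, whereas plain sets of features stand in for them here): compare the target kernel's signature to both references and report whichever side it resembles more.

```python
def patch_presence(pre_sig, post_sig, target_sig):
    """Toy three-way comparison: is the target closer to the
    pre-patch or the post-patch reference signature?"""
    def jaccard(a, b):
        # Set similarity: |intersection| / |union|.
        return len(a & b) / len(a | b) if a | b else 1.0

    to_pre = jaccard(target_sig, pre_sig)
    to_post = jaccard(target_sig, post_sig)
    return "patched" if to_post > to_pre else "unpatched"

# Hypothetical signatures: the patch adds a bounds check, and the
# downstream target also carries unrelated extra code.
pre = {"memcpy(dst, src, n)", "return 0"}
post = {"if (n > cap) return -EINVAL", "memcpy(dst, src, n)", "return 0"}
target = {"if (n > cap) return -EINVAL", "memcpy(dst, src, n)", "return 0", "log()"}
assert patch_presence(pre, post, target) == "patched"
```

The point of comparing against both references, rather than matching the patch alone, is robustness: downstream noise (the extra `log()` above) lowers both similarities equally, but the relative ordering still reveals the patch.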
Elastic objects contain a length field and a content buffer. By changing the length, a user can obtain a kernel pointer, which defeats KASLR. The severity of this attack was previously unknown. They use static analysis to find vulnerable elastic objects, and they find that many kernel exploits rely on them: breaking KASLR, stack canaries, etc. They propose a new defense mechanism that mitigates the threat by isolating elastic objects into individual cache zones.
iDEA: Towards Static Analysis on the Security of Apple Kernel Drivers
Apple kernel drivers are implemented as C++ classes. Previous C++ recovery tools do not work on Apple code because of its unique programming model: for example, Apple removes class constructors and vtable entries, and it is hard to find the entry points. Their tool found 35 new zero-days and received 5 CVEs in Apple kernels.
Exaggerated Error Handling Hurts! An In-Depth Study and Context-Aware Detection
There are multiple ways to handle errors, including panicking, logging, and specific error values. Previous work has looked at adequacy and completeness, but not whether the handling matches the severity of the error. Their tool flags exaggerated handling, for example warnings that can leak information. Causing a panic when not necessary is especially bad, because it affects availability.
Session 5B: Secure Messaging and Key Exchange
Oracle Simulation: A Technique for Protocol Composition with Long Term Shared Secrets
The goal is to make the proof of a composed protocol small, reusable, and modular, using a top-down approach. If an attacker can simulate a part of a protocol, then that part can be ignored in the attack. Their attacker A tries to break P while having access to Q. P and Q may share a secret, but they get around this by creating an oracle O. This oracle contains all previously signed messages, and the attack succeeds if the attacker can produce a signature of a message not already known to the oracle. They apply this proof technique to SSH and show that it improves on previous proof techniques.
The Signal Private Group System and Anonymous Credentials Supporting Efficient Verifiable Encryption
All users need to know all others in the group, because each message is sent to each user individually. Signal has a distributed membership mechanism to preserve privacy. They solve consistency by keeping an encrypted membership list on the server; users access this encrypted list using a zero-knowledge proof. This way, the server cannot change the list.
They use Schnorr proofs for zero knowledge and an ElGamal-based encryption of elliptic curve group elements. Importantly, the authenticated encryption is deterministic. They create a new algebraic MAC to encode elliptic curve group elements.
Post-quantum TLS without handshake signatures
TLS 1.3 performs an ephemeral Diffie-Hellman key exchange authenticated by a signature. Previous work has replaced these primitives with post-quantum versions. This paper argues that, for performance, you should not use post-quantum signatures but post-quantum KEMs (key encapsulation mechanisms) instead. They avoid the extra round trip needed to obtain the server public key by preemptively sending data. They call their solution KEMTLS. For the ephemeral key exchange an IND-1CCA-secure KEM suffices; for server authentication they use an IND-CCA-secure KEM. They explore different scenarios optimizing for different requirements: using SIKE needs fewer bytes of communication, but it increases delay a lot.
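The core trick, authentication by decapsulation instead of by signature, can be sketched with a toy KEM. The KEM below is built from classical finite-field Diffie-Hellman purely for illustration; KEMTLS would use a post-quantum KEM such as Kyber or SIKE, and the real protocol also covers the ephemeral exchange and key schedule, which are omitted here.

```python
import hashlib
import secrets

# Toy KEM from finite-field Diffie-Hellman (illustration only).
P = 2**127 - 1   # a Mersenne prime; fine for a sketch, not for real use
G = 3

def kem_keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def kem_encaps(pk):
    """Encapsulate: fresh randomness, ciphertext, and a shared key
    derivable only by the holder of the secret key."""
    r = secrets.randbelow(P - 2) + 1
    ct = pow(G, r, P)
    shared = hashlib.sha256(str(pow(pk, r, P)).encode()).digest()
    return ct, shared

def kem_decaps(sk, ct):
    return hashlib.sha256(str(pow(ct, sk, P)).encode()).digest()

# KEMTLS-style authentication: the client encapsulates to the server's
# long-term public key (from the certificate); only the real server can
# decapsulate, so deriving the same key implicitly authenticates it.
server_sk, server_pk = kem_keygen()       # long-term key in the certificate
ct, client_key = kem_encaps(server_pk)    # client -> server
server_key = kem_decaps(server_sk, ct)    # implicit server authentication
assert client_key == server_key
```

The performance argument follows from this shape: the server never computes a post-quantum signature during the handshake, and authentication costs one KEM ciphertext instead of a (typically much larger) post-quantum signature.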
Post-compromise security (PCS) provides security for future messages assuming the attacker is inactive for a while. Signal-based protocols are expected to have PCS. In their experiments they clone a device B as C, let A and B exchange messages to heal the protocol, then switch B off and let A and C communicate directly. They found that 2 out of the 10 apps broke PCS. In terms of fault detection, 1 app locked the user out, 3 apps showed an error, and 6 apps continued as usual.
The app does not know whether C is an attacker or simply a user with state loss. To detect clones, they modify Signal's double ratchet to be desynchronization-tolerant, using message counters as well as MACs to handle full state loss.
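The counter idea can be sketched as follows (my own simplified toy, not the paper's modified double ratchet: there is no ratcheting here, just one shared key): each party MACs a strictly increasing send counter, and because a clone resumes from old state, it inevitably reuses a counter value the honest peer has already seen.

```python
import hashlib
import hmac

class CloneDetectingChannel:
    """Toy channel: authenticated, strictly increasing message
    counters let the receiver notice a peer resuming from old state."""
    def __init__(self, key):
        self.key = key
        self.send_ctr = 0
        self.highest_seen = -1

    def send(self, payload):
        self.send_ctr += 1
        msg = f"{self.send_ctr}|{payload}".encode()
        tag = hmac.new(self.key, msg, hashlib.sha256).digest()
        return self.send_ctr, payload, tag

    def receive(self, ctr, payload, tag):
        expected = hmac.new(self.key, f"{ctr}|{payload}".encode(),
                            hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("bad MAC")
        if ctr <= self.highest_seen:
            # A clone resumed from old state and reused a counter.
            raise ValueError("counter regression: possible clone")
        self.highest_seen = ctr

key = b"k" * 32
alice, bob = CloneDetectingChannel(key), CloneDetectingChannel(key)
bob.receive(*alice.send("hi"))
clone = CloneDetectingChannel(key)   # clone restarts from stale state
try:
    bob.receive(*clone.send("i am also alice"))   # reuses counter 1
except ValueError as e:
    print(e)   # counter regression: possible clone
```

The hard part the paper addresses, and this sketch does not, is telling this apart from an honest device that lost its state, which is why their design also needs MACs that survive full state loss.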
Session 6C: Side Channels
InSpectre: Breaking and Fixing Microarchitectural Vulnerabilities by Formal Analysis
There are lots of variants of Spectre, and more keep being found. They create a model in which instructions are translated into abstract micro-operations, together with a new out-of-order semantics. They formally define what reasonable speculation is (the example they show is store-to-load forwarding) and the desired properties for secure speculation. They find that out-of-order execution without speculation already violates these properties. They can model all variants of Spectre and find 3 new variants.
Speculative Probing: Hacking Blind in the Spectre Era
They start with a single buffer overflow, in the presence of code-reuse and Spectre mitigations, and combine the buffer overflow with speculative execution. Their attacker is blind, i.e. without any exploitable code next to the buffer. Current mitigations do not stop this attack. Code page probing is one example of such an attack. They find many usable indirect branches in current Linux kernels and show a proof of concept against Linux, for which they got a CVE.
Deja Vu: Side-Channel Analysis of Mozilla's NSS
NSS is used for internet security (TLS) and crypto. Previous work has found vulnerabilities in pretty much all parts of crypto libraries. They applied automated SCA vulnerability detection to NSS and found 5 different vulnerabilities, covering DSA, ECDSA, and RSA on different devices. They discovered a DSA remote timing attack that leaked the nonce length via timing, a timing attack in ECDSA nonce padding, an ECDSA multiplication that was not constant time, a microarchitectural attack on scalar recoding for ECDSA, and an attack on RSA where a non-constant-time function was used during key generation. They helped Mozilla fix all the side channels they found.
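Why nonce length leaks, and how padding fixes it, can be shown with a textbook sketch (my own illustration, not NSS's code): a naive double-and-add loop runs once per bit of the nonce, so nonces with leading zero bits finish measurably faster, and that length information is exactly what hidden-number-problem attacks consume.

```python
def double_and_add(k):
    """Count point operations in a textbook double-and-add loop:
    the loop runs bit_length(k) times, so shorter scalars are faster."""
    ops = 0
    for bit in bin(k)[2:]:
        ops += 1          # one doubling per bit
        if bit == "1":
            ops += 1      # plus an addition for set bits
    return ops

# A nonce with leading zero bits takes visibly fewer operations.
short_nonce = 1 << 200          # ~200-bit nonce in a 256-bit group
full_nonce = (1 << 255) | 1     # full-length nonce
assert double_and_add(short_nonce) < double_and_add(full_nonce)

# Classic countermeasure sketch: add the group order q (or 2q) so the
# scalar always has exactly bit_length(q) + 1 bits, fixing the loop length.
def padded(k, q):
    k2 = k + q
    return k2 if k2.bit_length() == q.bit_length() + 1 else k + 2 * q
```

The padding works because the scalar multiple is unchanged modulo q while the bit length becomes constant; the ECDSA nonce-padding bug mentioned above is precisely a case where a countermeasure of this kind was itself not implemented in constant time.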
SGX stores enclave data in the EPC (enclave page cache). Previous work has shown side-channel and speculative-execution vulnerabilities in SGX. The conventional defense is oblivious RAM (ORAM), but it is slow. They use an FPGA as external storage, physically separated from the CPU. TrustOre is a system that securely sets up the FPGA and creates a secure interface between the FPGA and the enclave. Their system trusts the FPGA manufacturer, which embeds keys in the non-volatile part of the FPGA, and the FPGA storage is attestable by the enclave. Their communication guarantees constant packet length, constant response time, and a constant address access pattern; throughput is constant independent of block size. Their system is over 100 times faster than ORAM.