Secure Systems Group

Blog for Helsinki Area Secure Systems Collective.

Monday, June 27, 2016

Off the hook: A New Privacy-Friendly Phishing Protection Add-on

Phishing attacks are those in which unsuspecting users are lured to a rogue website that mimics a legitimate one and are fooled into giving up their login credentials or other sensitive information to the rogue site. Phishing has long been a serious threat to ordinary users on the Internet. Current solutions for steering users away from phishing websites are typically server-based and have several drawbacks. They impact user privacy and are not effective against attackers who can adaptively serve different content to different clients. Moreover, while they might warn a user about a phishing website, they offer no guidance as to how the user can find the real website that they intended to visit in the first place.
To address these limitations, we have developed Off the hook, a phishing website protection add-on. Off the hook is implemented entirely on the client side. Consequently, it is privacy-preserving and able to detect adaptive attacks. It is fast and efficient at detecting phishing websites, and it can also identify the target of a phishing website. The add-on displays warning messages in real time as a user is browsing the Web.


Challenges in Phishing Detection: Phishing websites do not exploit any technical flaw but take advantage of the inability of ordinary users to effectively link a physical identity (e.g., a person or a company) to an electronic identity (e.g., a website or an email address). Phishers mostly use social engineering techniques to impersonate an identity and lure their victims. Given their non-technical nature, protecting against phishing attacks is challenging. In addition, phishers use tricks such as serving different content (legitimate/phishing) to different client requests in order to defeat current centralized detection systems that crawl the Web from a single location. Blacklists of URLs are built based on this centralized analysis; when a user visits a webpage, its URL is checked against these lists in order to warn the user about phishing websites. This requires users to effectively disclose their entire browsing history to the centralized anti-phishing services, which raises a serious privacy concern.

An example phishing page

Control and Constraints: A phisher mimics the look and feel of a target website. However, he does not have complete freedom in doing so. Some attributes of a webpage (as displayed in a browser) are beyond the control of the phisher, while others impose certain constraints on him. We modeled these aspects of control and constraints in the features we chose for Off the hook, our machine-learning-based detection system. By assessing the consistency of the information contained in controlled/uncontrolled and constrained/unconstrained elements of a webpage, Off the hook is able to identify phishing websites and their targets effectively. Off the hook outperforms several state-of-the-art methods in terms of accuracy and speed, while distinguishing itself in its privacy-friendliness and its resilience to adaptive attacks. In addition, the identification of a phishing webpage relies solely on data extracted from the webpage itself and on no external source of information, which makes it language- and brand-independent and allows a client-side-only implementation.
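
To make the idea concrete, here is a minimal Python sketch of what such consistency features might look like. The feature names, the regex-based parsing and the heuristics are ours for illustration only; they are not Off the hook's actual feature set or model.

```python
# A minimal sketch (not Off the hook's actual features): compute a few consistency
# features from a page's URL and HTML, then hand them to whatever classifier has
# been trained offline and shipped with the add-on.
import re
from urllib.parse import urlparse

def consistency_features(url: str, html: str) -> dict:
    """Compare terms the phisher cannot freely choose (the registered domain)
    against terms he must include to mimic the target (title text, link hosts)."""
    host = urlparse(url).hostname or ""
    registered = host.split(".")[-2] if host.count(".") >= 1 else host  # crude eTLD+1 guess

    m = re.search(r"<title>(.*?)</title>", html, re.I | re.S)
    title_terms = set(re.findall(r"[a-z]{3,}", (m.group(1) if m else "").lower()))

    link_hosts = {urlparse(href).hostname or ""
                  for href in re.findall(r'href="(https?://[^"]+)"', html, re.I)}
    external = {h for h in link_hosts if h and not h.endswith(host)}

    return {
        "title_mentions_domain": registered.lower() in title_terms,
        "external_link_ratio": len(external) / max(len(link_hosts), 1),
        "num_password_fields": len(re.findall(r'type="password"', html, re.I)),
    }

# Example: on a phishing page the URL's domain rarely matches the brand terms in
# the title and links, while a password field is almost always present.
print(consistency_features("http://secure-login.example.net/bank",
                           '<title>Example Bank</title><input type="password">'))
```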

Add-on: Off the hook is implemented as a browser add-on that analyses visited webpages in real time. If a webpage is identified as phishing, interaction with the website is disabled and an interstitial warning message pops up in less than half a second. The user is then presented with several choices, including the option of being redirected to the identified likely target, going back to a search engine, or proceeding to the website with the option to whitelist it. The user interface has been refined using a cognitive walkthrough exercise in order to provide intelligible and efficient warnings that protect unsavvy users.

Interstitial warning page from Off the hook

The Off the hook add-on is available for Mozilla Firefox and Google Chrome, and is supported on Windows, Mac OS X and Ubuntu. Download and installation instructions are available on our project page.

More detailed information about the phishing detection and target identification methodology can be found in our ICDCS paper and our technical report. Implementation details for the add-on and phishing warnings are available in our ICDCS demo paper.

This work is supported by the Academy of Finland (via the Contextual Security project) and the Intel Collaborative Research Institute for Secure Computing.

Thursday, May 26, 2016

HAIC

Today, the Heads of the CS departments at Aalto University and the University of Helsinki made a joint announcement establishing the Helsinki-Aalto Center for Information Security (HAIC). HAIC is intended to serve as the main focal point for research and education in the broad areas of information security & privacy at both universities. Initially, HAIC will focus on selecting top incoming MSc students who choose to specialize in information security in our MSc programs.

This is an important and timely milestone. The NordSecMob joint degree program has been a resounding success during the past decade in training top-notch students from Finland and abroad at Aalto University. Many of them have gone on to work in Finnish industry. Some have continued with doctoral studies, both at Aalto University and at other top universities in Europe. One of the reasons for NordSecMob's success is its scholarship program. But NordSecMob is coming to an end. At the same time, Finnish universities will introduce tuition fees for students from outside the EU/EEA area. This is what makes HAIC timely: both universities have allocated some scholarships for MSc students; this will help us attract and train similar levels of top-quality students in information security.

Over time, we plan to extend HAIC in several ways. The first and most important among them is to reach out to our industrial partners in Finland and invite them to sponsor more scholarships under the auspices of HAIC. The quality and international recognition of infosec research at Aalto University and the University of Helsinki have risen significantly in the past few years. But the trend in government funding for higher education and research in Finland is unmistakably downwards. Finnish industry stepping up its support is therefore crucial for ensuring that our research and education continue to excel and expand. I am looking forward to a dialogue with industry partners.

Monday, February 22, 2016

OmniShare: Encrypted Cloud Storage for all your Devices

Cloud storage services like Dropbox and Google Drive are widely used, but security and privacy are often cited as serious concerns. One solution is to encrypt data on the client devices before it is uploaded, but this introduces a "key distribution" problem - how do you share the decryption key with all your devices? Some services sidestep this issue by deriving these keys from your password, but this does not offer much security since passwords are notoriously easy to guess. To overcome this challenge, we have developed OmniShare, the first system to combine client-side encryption using high-entropy keys with a suite of secure, yet intuitive, key distribution mechanisms. Based on your devices' capabilities, OmniShare automatically selects the best mechanism to transfer your decryption key securely between your devices. All you have to do is scan a QR code or bring your devices close enough for them to communicate over an ultrasonic channel. OmniShare also allows you to share individual encrypted files with other users. OmniShare is open source software and a beta version is currently available (for closed beta testing) on Windows and Android.


How secure is your cloud storage? 

Cloud storage services, such as Dropbox and Google Drive, are increasingly being used by individuals and businesses. A 2015 EU survey showed that at least one in every five people in Europe use cloud storage services. The two main benefits cited by users are the abilities to 1) use files from several devices or locations and 2) easily share files with other users. However, of the respondents who chose not to use cloud services, 44% said that security and privacy were important concerns.

Who has your data? All major cloud storage providers use a variety of good security measures to protect your data. For example, they encrypt the data in transit as it is uploaded and downloaded, and they also encrypt the data at rest while it is stored on their servers. However, the cloud providers themselves still have access to this data. Even if the provider is completely trustworthy, this still increases the risk of your data falling into the wrong hands. For example, if the provider suffers a data breach, will your data be secure? Some of the provider's staff have legitimate access to the data for development or maintenance purposes, but what could a disgruntled employee do with this access? Depending on where the cloud provider is located, could the provider be legally forced to disclose your data?

Client-side encryption: Encrypting data on the users' devices before uploading it to the cloud is an effective way to mitigate these risks. However, users want this data to be accessible from all their devices. For example, if Alice encrypts a file on her PC and uploads it to the cloud, she also wants to access it from her smartphone. If this file is encrypted, her smartphone must have (or be able to obtain) the relevant decryption key. Naturally, these keys should not be managed by the cloud service provider due to the risks described above. So now we have a key distribution problem: how can Alice securely distribute her decryption key to all her devices?



The problem with passwords: Current encrypted storage services like SpiderOak and Tresorit sidestep the key distribution problem by deriving keys from the user's password using a deterministic password-based key derivation function (PBKDF). Both Alice's PC and smartphone can derive the same key from Alice's password. However, it is well-known that human-chosen passwords are relatively easy to guess, so this approach does not provide very strong security guarantees. To avoid deriving keys from weak passwords, services like Viivo, BoxCryptor, and Sookasa use additional servers to manage and distribute keys, but this adds cost and introduces new vulnerabilities.
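
As a rough illustration of why this approach works, and why it is risky, here is a small Python sketch of password-based key derivation. The salt, iteration count and password list are illustrative and not any product's actual parameters.

```python
# Sketch: two devices derive the same key from the same password and salt.
import hashlib

def derive_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations, dklen=32)

salt = b"per-account-salt"           # typically stored alongside the ciphertext
key_on_pc    = derive_key("hunter2", salt)
key_on_phone = derive_key("hunter2", salt)
assert key_on_pc == key_on_phone     # deterministic: no key distribution needed

# The flip side: an attacker who obtains the salt and ciphertext can run the
# same derivation over a password list -- the key is only as strong as the password.
for guess in ["123456", "password", "hunter2"]:
    if derive_key(guess, salt) == key_on_pc:
        print("password guessed:", guess)
        break
```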

To address these challenges, we have developed a new system called OmniShare. Here is a two-minute summary of what OmniShare is. Read on for a longer explanation.

OmniShare runs as a client-side app on each of your devices. When you upload a file to your cloud storage (e.g. Dropbox) using OmniShare, the app automatically encrypts the file with a strong (i.e. high entropy) key. Since the files are encrypted on your own device, there is no longer a risk of your cloud provider losing, leaking or disclosing your data. The encryption keys are securely generated and stored on your own device. For additional security, these keys can be kept in a hardware-backed key-store or protected by a Trusted Execution Environment (TEE).
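
The core client-side step can be sketched in a few lines of Python using the third-party cryptography package. This is an illustration of the approach, not OmniShare's actual code.

```python
# A sketch of client-side encryption with a randomly generated (high-entropy) key,
# using the third-party 'cryptography' package; not OmniShare's implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

file_key = AESGCM.generate_key(bit_length=256)   # high-entropy key, created and kept on the device
plaintext = b"contents of the file to protect"   # stand-in for the file bytes

nonce = os.urandom(12)                           # must be unique per encryption
ciphertext = nonce + AESGCM(file_key).encrypt(nonce, plaintext, None)

# Only 'ciphertext' is handed to the cloud client (e.g. dropped into the synced
# folder); 'file_key' never leaves your devices.
assert AESGCM(file_key).decrypt(ciphertext[:12], ciphertext[12:], None) == plaintext
```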



Key distribution: To solve the key distribution problem, OmniShare provides a suite of mechanisms to securely transfer your keys between your devices. These mechanisms are based on the insight that you can bring your own devices physically close together in order to establish an out-of-band (OOB) communication channel. For example, if one of your devices has a screen and the other has a camera, OmniShare can display a QR code on the first device and ask you to scan it from the second. If one device has a speaker and the other has a microphone, OmniShare can pair your devices using ultrasonic communication (the same technique that is used in the Google Chromecast devices). OmniShare automatically chooses the best mechanism based on your devices' capabilities. The majority of the communication takes place via the cloud storage service itself, but the local OOB communication between devices ensures that the key is transferred to the correct device. This approach is both usable and secure and, as you can see, avoids the problem with passwords.
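
To give a flavour of how an out-of-band channel can bootstrap key transfer, here is a hedged Python sketch using X25519 and AES-GCM from the cryptography package. The message flow, labels and parameters are ours for illustration and are not OmniShare's actual protocol.

```python
# Sketch of OOB-assisted key transfer (not OmniShare's actual protocol): the new
# device shows an ephemeral public key as a QR code; the already-authorized device
# scans it, wraps the storage key to that public key and sends the wrapped key
# through the cloud folder. Scanning the QR code is what guarantees the key can
# only be unwrapped by the intended device.
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

RAW = (serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# New device: generate an ephemeral key pair; the public key goes into the QR code.
new_priv = X25519PrivateKey.generate()
qr_payload = new_priv.public_key().public_bytes(*RAW)

# Authorized device: scan the QR code, derive a wrapping key, wrap the storage key.
auth_priv = X25519PrivateKey.generate()
shared = auth_priv.exchange(X25519PublicKey.from_public_bytes(qr_payload))
wrap_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"omnishare-sketch").derive(shared)
storage_key = os.urandom(32)                        # the key that protects your files
nonce = os.urandom(12)
wrapped = nonce + AESGCM(wrap_key).encrypt(nonce, storage_key, None)
# 'wrapped' plus the authorized device's public key is uploaded via the cloud.

# New device: download 'wrapped', derive the same wrapping key, recover the storage key.
shared2 = new_priv.exchange(X25519PublicKey.from_public_bytes(auth_priv.public_key().public_bytes(*RAW)))
unwrap_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"omnishare-sketch").derive(shared2)
assert AESGCM(unwrap_key).decrypt(wrapped[:12], wrapped[12:], None) == storage_key
```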



Sharing: To support the full range of cloud storage functionality, OmniShare also allows you to share selected files with your friends and colleagues. The files remain encrypted throughout this process, and OmniShare securely transfers the decryption keys to the other user. Again, OmniShare provides a suite of intuitive mechanisms for achieving this, including: Bluetooth, Near Field Communication (NFC), and ultrasonic communication.

Open source security: OmniShare is open source software available under the Apache 2.0 license. It currently exists for Windows and Android, and an iOS version is in the works. We plan to open a closed beta test soon.

OmniShare was chosen as the overall winner in the MAPPING Privacy via IT Security competition and was demonstrated at CeBIT 2016.

Here is a video explaining OmniShare.

You can find more information and register to participate in the beta test from our official project page: https://ssg.aalto.fi/projects/omnishare/

We have also published a technical report on arXiv describing the full details of this system: http://arxiv.org/abs/1511.02119

For a brief overview of OmniShare, have a look at our slides from CeBIT 2016: https://ssg.aalto.fi/omnishare/download/OmniShare-Overview.pdf

Development of OmniShare was supported by the Academy of Finland (via the CloSe project) and the Intel Collaborative Research Institute for Secure Computing.

OmniShare was initially developed as part of Nguyen Hoang Long's MSc thesis.

Monday, February 15, 2016

Zero-effort authentication: useful but difficult to get right


At the forthcoming 2016 Network and Distributed System Security Symposium (NDSS) we present a case study on designing systems that are simultaneously secure and easy to use. Transparent ("zero-effort") authentication is a very desirable goal, but designing and analyzing such schemes correctly is a challenge. At the 2014 IEEE Security and Privacy Symposium, a top conference on information security and privacy, researchers from Dartmouth presented a simple and compelling scheme called ZEBRA for continuously but transparently re-authenticating users. Through systematic experiments, we show that a hidden design assumption in ZEBRA can be exploited by a clever attacker. More technical details are available in the extended research report and on our project page.

[With co-author and guest blogger Nitesh Saxena (University of Alabama at Birmingham)]

Computing systems are pervasive. We need to authenticate ourselves to them all the time. All of us know the tales of woe trying to manage zillions of passwords. But unwieldy authentication and access control systems are not limited to computing -- we struggle with multiple keys, loyalty cards and so on. This is what makes attempts to design transparent "zero-effort" authentication systems very compelling. The adjectives "transparent" and "zero effort" refer to the fact that the system is able to authenticate users without requiring them to do anything special just for the purpose of authentication. There has been a lot of work in trying to get the design of zero-effort authentication systems right and the security community has been making progress. But getting usability and security right is a difficult challenge.

Transparent de-authentication: Two years ago, researchers from Dartmouth College described a system called ZEBRA at the IEEE Security and Privacy Symposium, one of the top four venues for academic security research. ZEBRA was intended to solve the problem of prompt, user-friendly deauthentication. Think about what happens when you stop interacting with your phone or PC and walk away. What you really want is for your device to lock itself or log you out promptly. This is what deauthentication is: the means to defend against accidental or intentional misuse of your device by someone else when you walk away while leaving it open.

Typically, deauthentication is based on idle time limits: if you have not interacted with your device for a period exceeding some pre-set time limit, it will deauthenticate you. But therein lies the rub: if the time limit is too short, you will be annoyed at having to log in frequently and unnecessarily; but if the time limit is too long, you risk someone else using your unattended device. Transparent zero-effort schemes have been tried before. One is BlueProximity, which allows you to pair your Bluetooth-enabled phone with your PC. Thereafter your PC will use Bluetooth to sense your phone's presence in the vicinity and will automatically lock if it can no longer sense your phone. BlueProximity itself is not very secure (an attacker who knows the Bluetooth address of your phone can easily break it), but it is an example of the class of zero-effort authentication solutions that rely on proximity detection via short-range wireless channels. Another prominent example of this class is the collection of "keyless access" schemes found in high-end cars. These systems have many problems: they are vulnerable to relay attacks, and they are not effective in situations where people step away from their devices (and would thus want to be deauthenticated) but do not walk far enough away for their devices to stop sensing their presence -- such situations are common in offices with cubicles, or at workstations in hospital wards and factories.
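
For intuition, here is a toy Python sketch of a BlueProximity-style lock loop. The helpers phone_rssi() and lock_screen() are hypothetical stand-ins for the real Bluetooth query and OS lock call, and the threshold and grace period are guesses.

```python
# Toy sketch of proximity-based locking; phone_rssi() and lock_screen() are
# hypothetical placeholders, not real library calls.
import time
from typing import Optional

RSSI_THRESHOLD = -70   # dBm; "near enough" -- a guess that varies per environment
GRACE_PERIOD = 10      # seconds the phone may go unseen before locking

def phone_rssi() -> Optional[int]:
    """Hypothetical: signal strength of the paired phone, or None if not visible."""
    return None

def lock_screen() -> None:
    """Hypothetical: lock the workstation."""

def proximity_monitor() -> None:
    last_seen = time.time()
    while True:
        rssi = phone_rssi()
        if rssi is not None and rssi > RSSI_THRESHOLD:
            last_seen = time.time()
        elif time.time() - last_seen > GRACE_PERIOD:
            lock_screen()
        time.sleep(1)
        # Weakness: in a cubicle or a hospital ward, stepping a few metres away
        # usually keeps the phone above the threshold, so the terminal never locks.
```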


The ZEBRA approach to deauthentication
ZEBRA: This is where ZEBRA comes in, with a very simple and elegant solution to the problem. In ZEBRA, every user is required to wear on his/her dominant hand a Bluetooth-enabled "bracelet" equipped with an accelerometer and a gyroscope. As a ZEBRA user, when you first log into a device ("terminal" in ZEBRA terminology), it establishes a secure connection to your bracelet. Thereafter, while you interact with your terminal, e.g., using its keyboard or mouse, the bracelet sends the measurements generated by your interactions over to the terminal, which then uses a machine learning classifier to map them into a sequence of predicted interactions. Now your terminal has two different views of the same phenomenon of you interacting with it: the first is the sequence of interactions it can directly observe via its peripherals like the keyboard and mouse; the second is the sequence of predicted interactions inferred from the measurements sent over by the bracelet. Now you can perhaps see the logic behind ZEBRA: if the two sequences match, ZEBRA can conclude that the person interacting with it is the same person who is wearing the right bracelet for the current login session; if the sequences diverge, ZEBRA can promptly deauthenticate the current login session. ZEBRA calls this "Bilateral Recurring Authentication" -- you can think of it as the Rashomon approach to deauthentication! It is simple, elegant, and, unlike biometric authentication, does not depend on the particular behaviour of the user being authenticated.
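
A highly simplified Python sketch of the comparison logic looks something like the following. The window size, threshold and interaction labels are illustrative, not the values used in the ZEBRA paper.

```python
# Simplified sketch of ZEBRA-style bilateral comparison over a sliding window.
from collections import deque

WINDOW = 20         # compare the last 20 perceived interactions
THRESHOLD = 0.6     # deauthenticate when fewer than 60% of them match

observed = deque(maxlen=WINDOW)    # interactions the terminal sees via keyboard/mouse
predicted = deque(maxlen=WINDOW)   # interactions inferred from the bracelet's motion data

def on_interaction(terminal_event: str, bracelet_event: str) -> bool:
    """Called whenever the terminal perceives an interaction.
    Returns False when the session should be deauthenticated."""
    observed.append(terminal_event)
    predicted.append(bracelet_event)
    matches = sum(o == p for o, p in zip(observed, predicted))
    return len(observed) < WINDOW or matches / WINDOW >= THRESHOLD

# e.g. a genuine user: on_interaction("typing", "typing") -> True, session continues;
# a run of mismatches ("typing" vs "scrolling", ...) drives the ratio below THRESHOLD.
```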

Hunting ZEBRA: Is ZEBRA secure? To evaluate this, the authors of ZEBRA conducted experiments in which participants were asked to act as attackers. A researcher, acting as the victim, left a terminal ("terminal A") with an open login session, moved to another terminal ("terminal B") and started interacting with it. The job of the "attacker" was to watch what the victim was doing on terminal B, mimic the same interactions on terminal A, and try to avoid being logged out of terminal A by ZEBRA. Security 101 teaches you not to underestimate the capabilities of attackers. Realizing this, the ZEBRA authors arranged to give a generous handicap to their attackers: the victims actually announced aloud what they were about to do ("typing", "scrolling", etc.). The experiments showed that ZEBRA was able to deauthenticate all attackers even with this extra help.

Evaluating security of ZEBRA experimentally


We realized that there was a problem here: in the experiments, the "attackers" were asked to mimic everything that the victim was doing; is that the best strategy for an attacker? It turned out that ZEBRA performed its bilateral comparison only when the terminal perceived interactions via its peripherals. But the terminal being attacked (terminal A) is under the control of the attacker. We therefore conjectured that a clever attacker does not need to mimic everything the victim does but can opportunistically choose to mimic only those interactions where he has a good chance of passing ZEBRA's bilateral comparison test!
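
The conjecture can be expressed in a few lines of Python; the interaction labels and the set of "easy" interactions are illustrative.

```python
# Sketch of the opportunistic strategy: the attacker controls what terminal A
# perceives, so he can simply skip the interactions he cannot mimic reliably.
from typing import Optional

EASY_TO_MIMIC = {"typing"}      # e.g. the keyboard-only attacker

def naive_attacker(victim_action: str) -> Optional[str]:
    return victim_action        # mimic everything; mistimed scrolls and clicks cause mismatches

def opportunistic_attacker(victim_action: str) -> Optional[str]:
    # Doing nothing means terminal A perceives nothing, so ZEBRA makes no
    # comparison at all for that interaction; the match ratio is then computed
    # only over the events the attacker is confident about.
    return victim_action if victim_action in EASY_TO_MIMIC else None
```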

To test our theory, we needed an implementation of ZEBRA. Since the authors of ZEBRA did not have an implementation to share with us, we built our own end-to-end implementation from scratch using standard off-the-shelf Android Wear smartwatches. We then ran a series of systematic experiments modeling different types of opportunistic attackers. For example, our opportunistic keyboard-only attacker ignores everything except keyboard interactions (since he can do a lot of damage to all modern operating systems using only the keyboard). In our experiments, the test participants were victims while our researchers played the role of attackers.

Opportunistic attackers can circumvent ZEBRA

Our results show that while ZEBRA is indeed effective against naive attackers, it can be circumvented by opportunistic attackers. In the figure above, the graph on the left shows that ZEBRA deauthenticates all naive attackers. The graph on the right shows that opportunistic attackers evaded detection 40% of the time. The red and blue lines correspond to two different configurations of ZEBRA.

The fundamental problem in the design of ZEBRA was that the deauthentication decision was subject to an input source (interactions on terminal A) that was under the control of the attacker! We can see this as a case of tainted input, which is familiar to computer scientists. Well-known defenses against tainted input can be adapted to strengthen the design of ZEBRA.

Summary: There are two messages here:
  • Transparent, "zero-effort" authentication, which works by taking the user out of the loop of authentication decisions, is important. The need for it is demonstrated by already available production solutions that attempt to provide zero-effort authentication. But designing them correctly is subtle and difficult.
  • Modeling the adversary correctly and realistically is a fundamental challenge in the analysis of security schemes, especially when novel approaches for balancing usability and security are introduced, as in the case of ZEBRA.
We showed that under a more realistic adversary model, ZEBRA, as it was originally designed, is not secure against malicious attackers. It is still an effective defense against accidental misuse. The fundamental problem in the design of ZEBRA is a case of tainted input -- we are currently developing ways of improving ZEBRA.

More detailed information can be found in our NDSS paper or our extended research report. All figures in this post are taken from our paper/report.

This work was supported by the Academy of Finland (via the Contextual Security project) and the US National Science Foundation.

Tuesday, October 27, 2015

Practical attacks against 4G (LTE) access network protocols

(Countries with commercial LTE service as of December 7, 2014, shaded in red. Source: Wikipedia)

4G mobile communication networks, also known as LTE (Long Term Evolution), are widely deployed. How good is LTE security? Ravishankar Borgaonkar, a postdoc in my research group, and Altaf Shaik, a doctoral student at TU Berlin, discovered several surprising lapses in 4G security standards and baseband chipsets, which they will demonstrate at the forthcoming T2 Security Conference 2015 and Black Hat Europe 2015. We will also be presenting an academic paper on the topic at the prestigious Internet Society NDSS conference in February 2016. More technical details are in our arXiv report (see our project page).

Security in mobile communication networks has been improving with every generation. 3G introduced mutual authentication, which made it a lot harder to mount fake base station attacks (such as those used in IMSI catchers). 4G tightened up many signaling protocols by requiring authentication and integrity protection for them. The generally held belief has been that 4G security is robust and, in particular, that fake base station attacks are difficult to mount.

Over the last several months, Ravishankar Borgaonkar and Altaf Shaik, working with me, Valtteri Niemi (University of Helsinki) and Jean-Pierre Seifert (TU Berlin), painstakingly experimented with several LTE devices in a laboratory setting and uncovered two classes of vulnerabilities:

Leaking information about user locations: Already when 2G (GSM) networks were being designed, location privacy was considered important. When a mobile device attaches to a network, it is given a temporary identifier (known as a TMSI - Temporary Mobile Subscriber Identity). All the signaling messages between the mobile device and the network will thereafter refer only to the TMSI, rather than to the user's permanent identifiers (such as phone numbers or IMSIs - International Mobile Subscriber Identities). TMSIs are random and updated frequently (e.g., whenever the mobile device moves to a new location area). The idea is that an attacker passively monitoring radio communication would not be able to link TMSIs to permanent identifiers or track the movements of a given user (since his TMSIs change over time). A couple of years ago, Denis Foo Kune and Yongdae Kim showed that an attacker in a 2G (GSM) network can trigger a paging request (by sending a silent text message or by initiating and quickly terminating a call) to a target user with a given phone number. Paging requests contain TMSIs. The attacker can thereby link TMSIs to phone numbers.

We discovered that paging requests can be triggered in a new and surprising manner -- via social network messaging apps. For example, if someone who is not your Facebook friend sends you an instant message, Facebook will silently put it in the "Other" folder as a spam protection mechanism (unless the spammer pays Facebook 1€!). If you have Facebook Messenger installed on your LTE smartphone, incoming messages, including those destined to the Other folder, will trigger a paging request, allowing a passive attacker to link your TMSI to your Facebook identity and track your movements. To make matters worse, we noticed that TMSIs are not changed sufficiently frequently -- in one urban area TMSIs assigned by multiple mobile carriers persisted up to three days! In other words, once the attacker knows your TMSI, he can passively track your movements for up to three days.
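
Conceptually, the linking step is just a set intersection over repeated trials, as the following Python sketch shows. The capture format, the timing window and the TMSI values are made up for illustration.

```python
# Sketch of linking a TMSI to a known identity: send a silent trigger (e.g. a
# messaging-app message) at chosen times and keep only the TMSIs that are paged
# shortly after every trigger.
def tmsis_paged_within(capture, t_trigger, window=5.0):
    """capture: list of (timestamp, tmsi) pairs sniffed from the paging channel."""
    return {tmsi for ts, tmsi in capture if t_trigger <= ts <= t_trigger + window}

def link_tmsi(captures_and_triggers):
    """captures_and_triggers: list of (capture, trigger_time) for repeated trials."""
    candidates = None
    for capture, t_trigger in captures_and_triggers:
        paged = tmsis_paged_within(capture, t_trigger)
        candidates = paged if candidates is None else candidates & paged
    return candidates   # ideally shrinks to a single TMSI after a few trials

# Example with fabricated captures: only TMSI 0xA1B2 is paged after every trigger.
trials = [
    ([(10.2, 0xA1B2), (11.0, 0x33F0), (40.0, 0x9C01)], 10.0),
    ([(70.5, 0xA1B2), (71.3, 0x9C01)], 70.0),
    ([(130.9, 0xA1B2), (131.1, 0x7777)], 130.0),
]
print(link_tmsi(trials))   # {41394}, i.e. {0xA1B2}
```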

An active attacker using a fake base station can do even better. LTE access network protocols incorporate various reporting mechanisms that allow the network to troubleshoot faults, recover from failures and assist mobiles in handovers. For example, after a failed connection, a base station can ask an LTE device to provide a failure report with all sorts of measurements including which base stations are seen by the device and with what signal strength. Once an attacker grabs such a report, he can use the information to triangulate the device's location. In fact, at least one device we tested even reported its exact GPS location. Failure recovery mechanisms are essential to the reliable operation of large mobile networks -- LTE designers had a difficult design trade-off between potential loss of user privacy and ensuring network reliability.

Denial of service: Imagine that you travel to a different country but your mobile subscription does not include roaming. When your phone tries to connect to the network at your destination, it will be rejected with an appropriate "cause number" (such as "ROAMING NOT ALLOWED"). Your phone will dutifully accept this directive and will not attempt to connect again until you re-initialize the device (e.g., by rebooting it). This is so that your device does not waste its battery in vain by repeatedly trying to connect to a network only to be rejected. It is also a way to minimize unnecessary signaling over the air. In other words, this design decision was also motivated by a desire to trade off in favor of reliability and performance. You can guess the rest: we show, for example, that an attacker can deny 4G and 3G services to a 4G device, thereby effectively downgrading it to 2G. Once downgraded, the device is open to all the legacy 2G vulnerabilities. The parameter negotiation during the LTE connection set-up process is also vulnerable to bidding-down attacks by a man-in-the-middle who can fool an LTE device and an LTE network into concluding that they can only communicate using 2G.
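
The following Python sketch is a deliberately simplified model of why a single spoofed reject message has such a lasting effect. The cause label and the exact handling are illustrative, not the precise 3GPP-specified behaviour.

```python
# Simplified model: the baseband remembers the reject cause and stops trying the
# barred networks until the device is re-initialized.
class Baseband:
    def __init__(self):
        self.allowed = {"4G", "3G", "2G"}          # radio technologies the phone will try

    def handle_reject(self, cause: str):
        # For "permanent" causes the device is told not to retry, to save battery
        # and signalling -- which is exactly what the attacker exploits.
        if cause == "4G/3G services not allowed":  # illustrative cause label
            self.allowed -= {"4G", "3G"}

    def reboot(self):
        self.allowed = {"4G", "3G", "2G"}          # cleared only on reboot / re-initialization

phone = Baseband()
phone.handle_reject("4G/3G services not allowed")  # sent once by the fake base station
print(phone.allowed)                               # {'2G'} -- downgraded until the user reboots
```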

The equipment for our attacks (a USRP board and antennas, along with open source software openLTE) is inexpensive and readily available. It costs a little over 1000 €!

Why do these vulnerabilities exist in the first place? A likely reason is that designing a complex system like LTE is a complicated task that calls upon the designers to make several difficult trade-offs (such as the privacy vs. availability trade-off we mentioned above). The trade-offs embedded in LTE were made in the late 2000s when it was being designed. Several years on, the equilibrium points in those trade-offs have shifted, making attacks like ours practical and economically viable for a potential attacker. We recommend that future standardization incorporate the agility to revisit such trade-offs in the face of such changes. Software-defined networking will make it easier to build in such agility.

We discovered the vulnerabilities and attacks earlier this year. We notified affected manufacturers and carriers during June and July. We have been following the standard responsible disclosure practices and waited till the end of the 90-day waiting period before publicising the attacks. We have also been working with relevant standards organizations to address these vulnerabilities. Several manufacturers have already issued patches. 3GPP, the body that standardizes LTE, has initiated several specification changes to address the vulnerabilities we discovered.

Our arXiv technical report has more technical details (see our project page).