iPhone Hacks – Should Apple Have Seen It Coming?

In another article I summarized the series of events that led to a potentially huge number of iOS devices being taken over by malicious actors. As more information about these incidents comes to light, one particularly interesting question arises: to what extent is Apple to blame?

Fast Reaction

Let’s start with the good news. As Project Zero researcher Ian Beer writes, his team informed Apple about two of the exploits on February 1st, 2019. Apple reacted within six days and released an emergency update (iOS 12.1.4) on February 7th. This short reaction time is exemplary (especially compared to Microsoft – it recently took them more than 90 days to fix a critical Windows vulnerability reported by Project Zero, which resulted in Google disclosing the vulnerability as previously announced).

Sloppy Quality Assurance?

However, this is where Apple’s exemplary behavior ends. Again according to Ian Beer, Project Zero identified severe mistakes on Apple’s part that allowed the attackers to circumvent its security measures. Since Apple declined to comment on the exploits, his and his colleagues’ analyses are taken as the only reliable source of knowledge here.

The researchers identified three different bugs that a more thorough quality assurance process could have prevented. First, they investigated the vulnerability used in the exploit chain targeting iOS 11–11.4.1. Apparently, a regression in libxpc was the root cause [1]:

It’s difficult to understand how this error could be introduced into a core IPC library that shipped to end users. While errors are common in software development, a serious one like this should have quickly been found by a unit test, code review or even fuzzing. It’s especially unfortunate as this location would naturally be one of the first ones an attacker would look, as I detail below.

This is a serious accusation; one would expect the largest tech company by revenue to write unit tests for all functions in critical libraries shipped to millions of devices.

The analysis of the other vulnerabilities yields similarly devastating results. Regarding the vulnerability used against iOS 12–12.1 (a sandbox escape also involving XPC), the researchers write [2]:

It [the vulnerability] is the kernel bug used here which is, unfortunately, easy to find and exploit (if you don’t believe me, feel free to seek a second opinion!).


Similar to iOS Exploit Chain 3 [the one analyzed previously], it seems that testing and verification processes should have identified this exploit chain.

The worst finding, however, comes from the analysis of the vulnerability in the “vouchers” feature [3]:

It [the code snippet] remained in the codebase and on all iPhones since 2014, reachable from the inside of any sandbox. You would have triggered it though if you had ever tried to use this code and called [the function] with a valid voucher. Within those four years, it’s almost certain that no code was ever written to actually use the [function], despite it being reachable from every sandbox.

It was likely never called once, not during development, testing, QA or production (because otherwise it would have caused an immediate kernel panic and forced a reboot). I can only assume that it slipped through code review, testing and QA.


In other words: vulnerable code was pushed to the production repository without ever being used. Not in a single test, not from a single other line of code. This is the kind of mistake a free static code analysis tool would immediately report.


I vividly remember a lecture on formal methods in computer science in which the instructor presented the destruction of the Ariane 5 rocket as an example of a software bug that caused massive financial damage ($370 million) [4].

Just as the Ariane 5 explosion serves as an example of bad software design, I believe the mistake in Apple’s voucher feature demonstrates the tremendous importance of a solid continuous integration pipeline. Especially in long-lived software projects, it is impossible for individual developers to stay on top of such mistakes without the help of automated code scanners.


  1. https://googleprojectzero.blogspot.com/2019/08/in-wild-ios-exploit-chain-3.html
  2. https://googleprojectzero.blogspot.com/2019/08/in-wild-ios-exploit-chain-4.html
  3. https://googleprojectzero.blogspot.com/2019/08/in-wild-ios-exploit-chain-5.html
  4. https://en.wikipedia.org/wiki/Cluster_(spacecraft)#Launch_failure
Bernhard Knasmüller on Software Development