
So What Is the Coruna Exploit?
The Coruna iOS exploit framework is a new and powerful exploit kit targeting Apple iPhone models running iOS 13.0 (released in September 2019) up through version 17.2.1 (released in December 2023). It was identified by Google Threat Intelligence Group (GTIG) and iVerify. The kit contained five full iOS exploit chains and a total of 23 exploits. What makes it notable is its comprehensive collection of iOS exploits, the most advanced of which use non-public exploitation techniques and mitigation bypasses.
GTIG has been tracking this exploit since 2025, and at first, that threw me off a bit. Like, you mean to tell me this has just been out in the wild for a whole year without any major reporting on it? But that’s exactly what they do. There’s another Google report from August 29, 2024 that states, “Today, we’re sharing that Google’s Threat Analysis Group (TAG) observed multiple in-the-wild exploit campaigns, between November 2023 and July 2024, delivered from a watering hole attack on Mongolian government websites.” Now, I’m still new to cybersecurity and threat intelligence, so I don’t know if there are procedures around exploit discovery that require extended monitoring before a campaign is understood well enough to report. To be honest, that kinda makes sense as I say it out loud, so maybe there’s some truth to that assumption. These specific campaigns first delivered an iOS WebKit exploit affecting iOS versions older than 16.6.1 and, later, a Chrome exploit chain against Android users running versions m121 to m123. These were n-day exploits for which patches were already available, but they would still be effective against unpatched devices. Google assessed that, “with moderate confidence, the campaigns were linked to the Russian government-backed actor APT29”.
This leads me back to the Coruna exploit, because it seems like government-backed security vendors have become more and more careless about who they sell their exploits to. That’s right, folks: commercial spyware is sold to governments and other brokers. And it’s becoming more common that, once spyware or an exploit capability is sold, control over the end customer is lost. Brokers can’t be trusted with these capabilities, and business-to-business transactions across the spyware market are highly unregulated. Now, this lack of control helped launch discussions about the responsible use of spyware and about aligning on a formal voluntary framework for its use, called the Pall Mall Process. But those discussions are ongoing, and the economic pressure on spyware companies to return a profit means these tools are being sold to an ever broader array of organizations. Some things just shouldn’t be driven by the constant need for a return on investment, and at the end of the day, capitalism is to blame for this industry getting sloppy with its handling of exploits.
Google is actually at the forefront of reporting on the slippery slope we’re on when it comes to the unchecked commercial surveillance industry, and there is a great report you can read here → “Buying Spying”. I highly recommend the read regardless of your interest in cybersecurity, because whether you like it or not, these leaks and unethical sales of spyware affect all of us. So I want to elaborate on the definitions of these attacks and exploits.
0-day Exploits
A 0-day is a vulnerability or security hole in a computer system unknown to its developers or anyone capable of mitigating it. Until the vulnerability is remedied, threat actors can exploit it in a zero-day exploit or zero-day attack.
The term "zero-day" originally referred to the number of days since a new piece of software was released to the public, so "zero-day software" was software obtained by hacking into a developer's computer before release. Eventually, the term was applied to the vulnerabilities that allowed this hacking, and to the number of days that the vendor has had to fix them. Vendors who discover a vulnerability may create patches or advise workarounds to mitigate it, though users need to deploy that mitigation to eliminate the vulnerability on their own systems. Zero-day attacks are especially severe threats precisely because no patch exists when exploitation begins.
Watering Hole
A watering hole attack is a strategy in which the attacker guesses or observes which websites an organization's users frequently visit, then compromises one or more of those websites to distribute malware.
Eventually, some members of the targeted group will become infected. Attackers looking for specific information may only target users coming from a specific IP address, which also makes these attacks harder to detect and research. The name is derived from predators in the natural world, who wait for an opportunity to attack their prey near watering holes. The attack strategy was named in an RSA blog post in 2012.
These are just two of the many types of attacks and exploits that threat actors use to obtain confidential information or credentials from their targets. If you’re interested in learning about more common attacks, you can check out this article from Fortinet → https://www.fortinet.com/resources/cyberglossary/types-of-cyber-attacks, where they go over the 20 most common attacks and exploits.
Initial Discovery: The Commercial Surveillance Vendor Role
In February 2025, GTIG captured parts of an iOS exploit chain used by a customer of a surveillance company. The exploits were integrated into a previously unseen JavaScript framework that used simple but unique JavaScript obfuscation techniques.
The framework starts with a fingerprinting module that collects a variety of data points to determine whether the device is real and which specific iPhone model and iOS version it is running. Based on the collected data, it loads the appropriate WebKit remote code execution (RCE) exploit, followed by a pointer authentication code (PAC) bypass, as seen in Figure 2 from the deobfuscated JavaScript.
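To make that fingerprint-then-dispatch step concrete, here's a rough sketch of version-gated payload selection. To be clear, this is purely illustrative: the table, names, and logic are my own assumptions about how such a dispatcher could work, not the framework's actual code.

```javascript
// Hypothetical sketch of a fingerprint-to-payload dispatcher.
// The table below is illustrative; entries and names are made up.
const exploitTable = [
  { min: "17.0", max: "17.2.1", payload: "webkit_rce_cve_2024_23222" },
  { min: "16.0", max: "16.7.4", payload: "webkit_rce_legacy" },
];

// Compare dotted version strings numerically, e.g. "17.10" > "17.2".
function cmpVersion(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const d = (pa[i] || 0) - (pb[i] || 0);
    if (d !== 0) return d;
  }
  return 0;
}

// Pick the payload whose version range covers the fingerprinted device;
// return null (i.e., bail out quietly) if no exploit applies.
function selectPayload(iosVersion) {
  const hit = exploitTable.find(
    (e) => cmpVersion(iosVersion, e.min) >= 0 && cmpVersion(iosVersion, e.max) <= 0
  );
  return hit ? hit.payload : null;
}
```

The point of the sketch is the design: by bailing out with no payload on unknown or patched versions, the framework avoids wasting exploits and revealing itself on devices it can't compromise.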
At that time, GTIG recovered the WebKit RCE delivered to a device running iOS 17.2 and determined it was CVE-2024-23222, a vulnerability previously identified as a zero-day that was addressed by Apple on Jan. 22, 2024, in iOS 17.3, without crediting any external researchers. The image below shows the beginning of the RCE exploit, exactly how it was delivered in the wild, with GTIG’s annotations.
I’m gonna throw in a shameless plug from my Hack w/ Me Episode 2: Search Skills:
Because I used one of the specialized databases I learned about, the Common Vulnerabilities and Exposures (CVE) database, to pull up the record for this vulnerability. As previously mentioned, the record is CVE-2024-23222, and as you can see below, it was fixed with the iOS 17.3, iPadOS 17.3, macOS Sonoma 14.3, and tvOS 17.3 updates.
The last update on the record is dated 2024-06-12, so I’m not sure whether that is when the OS updates came out or when the record itself was last revised after the initial patches. Either way, I can assume most people are safe from this attack moving forward. But apparently there are still many users, both in the US and abroad, running older OS versions for one reason or another.
The Coruna Exploit Kit is In The Wild
This is a huge issue, and the fact that these exploits are being funded by and built for government entities should concern all of us. Google’s report doesn’t explicitly name the original CSV customer that deployed Coruna, but iVerify, which also analyzed a version of Coruna it obtained from one of the infected Chinese sites, suggests the code may well have started life as a hacking kit built for or purchased by the US government. Google and iVerify both note that Coruna contains multiple components previously used in a hacking operation known as “Triangulation,” which was discovered targeting the Russian cybersecurity firm Kaspersky in 2023 and which the Russian government claimed was the work of the NSA. The US government didn’t respond to Russia’s claim, and you can be damn sure that if they DIDN’T have any involvement in “Triangulation,” they would make it known.
iVerify also noted that the code appears to have been originally written by English-speaking coders: “It's highly sophisticated, took millions of dollars to develop, and it bears the hallmarks of other modules that have been publicly attributed to the US government." They added, “This is the first example we’ve seen of very likely US government tools—based on what the code is telling us—spinning out of control and being used by both our adversaries and cybercriminal groups.”
So here we are again: another extremely sophisticated exploit, leaked from the US government. I say another because this isn’t the first time this has happened. Back in 2017, EternalBlue, a Windows-hacking tool, was stolen from the NSA (National Security Agency) and leaked to the world, leading to its use in catastrophic cyberattacks, including North Korea's WannaCry worm and Russia's NotPetya attack. We can most certainly expect something of the same caliber to be developed and deployed over the next couple of years. Even Google stated, “Beyond these identified exploits, multiple threat actors have now acquired advanced exploitation techniques that can be reused and modified with newly identified vulnerabilities.”
The loosely regulated industry is a problem in itself. iVerify’s cofounder, Rocky Cole, points to the industry of brokers that may pay tens of millions of dollars for zero-day hacking techniques they can resell for espionage, cybercrime, or cyberwar. Notably, Peter Williams, an executive of US government contractor Trenchant, was recently sentenced to seven years in prison for selling hacking tools to the Russian zero-day broker Operation Zero from 2022 to 2025. Williams’ sentencing memo notes that Trenchant sold hacking tools to the US intelligence community as well as others in the “Five Eyes” group of English-speaking governments: the US, UK, Australia, Canada, and New Zealand. So they are just double-dipping on contracts and reselling these dangerous toolkits to whoever is willing to shell out the money for them. You can imagine how slippery this slope can actually get.
Spyware Kill Chains Explained
Here is a good explanation of what exploit chains are and why they are the main way attackers get their spyware onto a target’s device.
As the security design of devices has progressed, attackers have to use exploit chains rather than a single exploit to remotely install spyware onto a target’s device. An exploit chain is made up of several exploits “chained” or linked together, and often includes three or four different 0-day exploits. Generally, the exploits fall into three types: initial remote code execution (RCE), sandbox escape (SBX), and local privilege escalation (LPE). Information leaks are sometimes used to assist exploitation within the chain as well.
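The chaining idea can be sketched as a staged pipeline where each stage only runs if the previous one succeeded. This is a toy model of the RCE → SBX → LPE progression, nothing here does anything real, and all names are hypothetical:

```javascript
// Toy model of an exploit chain: each stage requires the state produced
// by the previous one, mirroring RCE -> sandbox escape -> privilege escalation.
const chain = [
  { name: "RCE", run: (state) => ({ ...state, codeExec: true }) },
  { name: "SBX", run: (state) => (state.codeExec ? { ...state, outsideSandbox: true } : null) },
  { name: "LPE", run: (state) => (state.outsideSandbox ? { ...state, root: true } : null) },
];

// Run stages in order; the whole chain fails if any single stage fails.
function runChain(stages) {
  let state = {};
  for (const stage of stages) {
    state = stage.run(state);
    if (state === null) return { success: false, failedAt: stage.name };
  }
  return { success: true, state };
}
```

This is also why patching matters so much: breaking any one link (say, fixing the WebKit RCE) leaves the later stages unreachable, even if those bugs remain unpatched.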
For spyware to be successful, it has to gather data without alerting the user. Government customers want to gather data from a user’s device, such as reading messages on their phone or accessing their browser history. However, by design, a single application on a mobile device does not have the privileges needed to access all other applications or data on the device. Each application requires the user to explicitly grant permission to access data; otherwise, any downloaded game would be able to read all of the device’s messages or even its browser history. This barrier between applications is referred to as a sandbox.
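That permission barrier can be modeled in a few lines. Here's a toy sandbox where each app can only read the data categories it has been explicitly granted; the apps, data, and grant table are all hypothetical:

```javascript
// Toy sandbox model: reads are checked against an explicit grant table.
// Everything here is illustrative; real mobile sandboxes are far richer.
const device = {
  data: { messages: ["hey"], browserHistory: ["example.com"] },
  grants: { messagingApp: ["messages"] }, // a downloaded game gets no grants
};

function readData(app, category) {
  const allowed = (device.grants[app] || []).includes(category);
  if (!allowed) throw new Error(`${app}: access to ${category} denied by sandbox`);
  return device.data[category];
}
```

Spyware that simply called `readData` for everything would be denied (or would have to ask the user, revealing itself), which is exactly why attackers exploit bugs to escape the sandbox instead.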
An application requesting permission to access data could alert the user to unusual activity and possibly reveal the presence of the spyware. Instead, CSVs have to exploit vulnerabilities in the device to break out of sandboxes and gain additional privileges. Technology companies have added additional layers of security to increase the difficulty of exploitation. Installing spyware and accessing all the data on a device requires the highest level of privilege, referred to as “root privilege”. Exploit chains often contain local privilege escalation exploits to gain the root privilege needed to install the spyware and access the user’s data. Below is a good visualization of how exploit chains/spyware kill chains work from top to bottom.
Conclusion
I highly recommend reading both Google’s published article & iVerify’s published article about this mass iOS exploit. Google, in particular, goes into the technical detail of every exploit used in this particular exploit chain and exactly how it works. It can’t be overstated how well-engineered the framework surrounding the exploit kit is; the exploit pieces all connect naturally and are combined using common utility and exploitation frameworks. They dissect the kit and explain all the unique actions it performs from start to finish. This article was just to explain the issue at hand and the exploit that got out of hand. Hopefully, we don’t have to experience another WannaCry-level catastrophe, but from how it’s looking, we most likely need to prepare for the worst.
If you want to keep up with my work or want to connect as peers, check out my social links below and give me a follow!
* 🦋 Bluesky
* ▶️ Youtube
* 💻 Github
* 👾 Discord
By Digital Dopamine