Officials in South Korea’s National Tax Service stood behind a table of seized hardware wallets, prepping for a victory lap — cameras rolling, proud to show the public that crypto‑enabled tax dodging had consequences. The photos went out in high resolution. On social media, people zoomed in — and found the handwritten seed phrase for one of the wallets sitting in plain view. Within hours, an unknown actor drained roughly 4.8 million dollars’ worth of tokens from a wallet the state believed it controlled.
Nothing “got hacked” in the traditional sense. The failure was cultural and procedural, not cryptographic. A secret that should have been treated as the master key to a vault was handled like stage dressing for a press shot. That distinction—between technical strength and human practice—is where this story matters far beyond crypto. It shows how quickly organizations can undermine their own security when culture, incentives, and communications workflows are out of sync with the sensitivity of what they’re handling.
The photo that drained a wallet
According to local coverage and subsequent analysis, the sequence was painfully simple. The NTS held a press conference to announce it had seized crypto assets from tax delinquents. Among the props: hardware wallets and a sheet containing a recovery phrase placed next to them on the table. The images, distributed to media and posted online, were crisp enough for anyone to read and transcribe the phrase.
An opportunistic observer did exactly that. They used the exposed phrase to reconstruct the wallet, sent in a small amount of cryptocurrency to pay transaction fees, and then emptied the wallet of approximately 4 million PRTG tokens — about 6.4 billion won, or 4.8 million US dollars at the time. Later, another actor reportedly moved remaining funds from a follow‑up address to a wallet that had already been flagged for phishing activity.
Cryptography did its job. The system behaved exactly as designed: whoever controls the key controls the funds. What failed was the institution’s understanding of what that “key” represented and how ruthlessly the outside world would act the moment it appeared in public.
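To make concrete why a handwritten phrase is the whole ballgame: standard wallets follow BIP‑39, where the mnemonic deterministically produces the wallet seed, and every private key and address flows from that seed. A minimal sketch of the derivation (assuming standard BIP‑39; the specifics of the NTS wallet are not public):

```python
import hashlib
import unicodedata

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    """Derive the 64-byte wallet seed from a BIP-39 mnemonic phrase.

    Per the BIP-39 spec: PBKDF2-HMAC-SHA512, 2048 rounds, with the
    NFKD-normalized mnemonic as the password and "mnemonic" plus an
    optional passphrase as the salt. All private keys and addresses
    are then derived deterministically from this seed.
    """
    password = unicodedata.normalize("NFKD", mnemonic).encode()
    salt = unicodedata.normalize("NFKD", "mnemonic" + passphrase).encode()
    return hashlib.pbkdf2_hmac("sha512", password, salt, 2048)

# Anyone who can read the words can reproduce the seed exactly.
# (Example mnemonic from the BIP-39 English test vectors.)
phrase = "legal winner thank year wave sausage worth useful legal winner thank yellow"
print(bip39_seed(phrase).hex())
```

There is no rate limit, no account recovery, and no server to revoke: the photo of the words is the wallet.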
When culture treats secrets like props
It is tempting to file this away as a uniquely crypto problem. It is not. The dynamics are the same ones that leak API keys in screenshots, expose credentials in Git commits, and publish production URLs in release notes. The specific artifact changes; the cultural pattern does not.
Several forces showed up in this incident:
- Showmanship over substance. Communications staff were under pressure to demonstrate impact. A table full of recognizable devices makes for a better story than a text‑only press release. No one stopped to ask whether those objects carried embedded secrets.
- Lack of shared mental models. To the custody or investigations team, a seed phrase is a live weapon. To the PR team, it may look like random words on a notepad. Without a shared language for risk, people make very different assumptions about what is safe to show.
- Blurred ownership. Who owns the decision to put sensitive artifacts in front of cameras? Legal? Security? Communications? In many organizations, there is no clear answer, so the most enthusiastic function wins by default.
These pressures exist in every sector. Marketing wants screenshots of real dashboards. Sales wants to show “authentic” customer configurations. Engineering wants to share code snippets on GitHub and in conference talks. If there is not a strong security culture framing what is out‑of‑bounds, sheer enthusiasm will eventually drag secrets into public view.
The master‑key problem in every system
What makes this class of mistake so unforgiving is that it tends to involve master keys — artifacts that collapse layered defenses into a single point of failure.
In the NTS case, the handwritten mnemonic phrase was functionally equivalent to a hardware wallet’s private key. Possessing it allowed an outsider to bypass:
- Physical custody of the hardware.
- Any PIN or biometric protection on the device.
- Whatever internal controls the agency believed it had around seized assets.
This same pattern shows up in other recent incidents:
- A compromised private key allowed attackers to drain millions directly from IoTeX vaults without needing to break the protocol itself.
- Threat reports describe attackers combing public repositories, issue trackers, and screenshots for exposed API keys and credentials that grant full access to databases, payment gateways, or cloud consoles.
In each case, sophisticated technical stacks are undone by a single leaked secret. The cultural error is thinking of those secrets as “just another string” rather than as the effective ownership token for an entire system.
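The combing described above is cheap to automate. A toy sketch of the kind of pattern matching attackers (and defensive secret scanners) run over public text — the two regexes here are simplified illustrations, not a production ruleset:

```python
import re

# Simplified, illustrative patterns. Real scanners ship hundreds of
# rules plus entropy checks; these two reflect commonly published
# key formats.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\bapi[_-]?key['\"]?\s*[:=]\s*['\"]([A-Za-z0-9_\-]{20,})['\"]"
    ),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs found in text."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

# A config snippet pasted into a public issue tracker:
snippet = 'config = {"api_key": "sk_test_EXAMPLEKEYEXAMPLEKEY1234"}'
print(scan(snippet))
```

The same approach works on text extracted from screenshots via OCR, which is why “it was only in an image” offers no protection.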
Security culture lives in non‑technical teams
If security only lives inside the SOC or in the heads of engineers, organizations will keep replaying this story with new props. The people most likely to surface secrets into public view often sit outside the technology function.
Communications, PR, marketing, investor relations, and even HR all create artifacts that leave the building:
- Press photos and video from events, labs, and data centers.
- Product screenshots and architecture diagrams in launch decks.
- Social posts showing “day in the life” scenes from operations floors or support centers.
Without an ambient understanding that “certain things simply never appear in public,” these teams will optimize for storytelling and aesthetics, not defensive posture. The NTS photos were not malicious; they were proud. That is what makes them dangerous.
Building culture here looks less like another slide deck and more like:
- Shared threat stories. Using concrete, recent incidents (like this one) to show non‑technical staff how small visual details can translate into multi‑million‑dollar losses.
- Visual do‑not‑cross lines. Making it explicit that recovery phrases, QR codes, terminal windows with production URLs, internal dashboards, and customer PII are never to appear in any external asset.
- Default security review. Treating photos, videos, and major public artifacts like code releases: they pass through someone empowered to say “no.”
When non‑technical teams feel ownership of protecting the organization’s “crown jewels,” they become allies rather than unintentional attackers.
Processes that assume someone will eventually slip
The other lesson from this case is that humans will make mistakes even in well‑intentioned cultures. Good security design assumes that at some point, a sensitive secret will escape its intended boundary.
For high‑impact assets, that implies a few broad principles:
- Separation of duties. The people responsible for showcasing wins should not also be the ones managing raw secrets. In the NTS scenario, that would mean communications never handles unredacted seed phrases or live hardware; they work from safe replicas or staged photos pre‑cleared by security.
- Pre‑defined escalation paths. If someone notices that a press photo or social post might have exposed something sensitive, they need a clear, blame‑light way to raise the alarm and trigger assessment and rotation.
- Design for partial exposure. Moving toward systems where no single phrase, key, or screenshot represents unilateral control: hardware security modules, multi‑party computation, split‑knowledge procedures, or just well‑segmented access patterns.
These patterns apply as much to database connection strings, internal signing keys, and identity provider secrets as they do to crypto wallets. Over time, they shift organizations from “we hope nothing leaks” to “we expect something will, and we’ve limited the blast radius.”
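“Design for partial exposure” has a classic concrete form: Shamir secret sharing, where a secret is split so that any k of n shares reconstruct it, but fewer than k reveal nothing. A toy sketch over a prime field (illustrative only; real custody setups use audited libraries and HSMs):

```python
import secrets

# Toy Shamir secret sharing: split a secret so any k of n shares
# recover it; a single leaked share carries no information.
PRIME = 2**127 - 1  # prime field large enough for a 16-byte secret

def split(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Split secret into n shares; any k of them recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares: list[tuple[int, int]]) -> int:
    """Recover the secret via Lagrange interpolation at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

secret = secrets.randbelow(PRIME)
shares = split(secret, k=2, n=3)
print(combine(shares[:2]) == secret)  # any two shares suffice
```

Had the NTS phrase been one share of a 2‑of‑3 split held by separate teams, the press photo would have exposed a useless string instead of a wallet.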
Incentives, not just instructions
Policies alone rarely stand up to the combined pull of convenience, status, and speed. In the NTS case, the desire for a compelling narrative — “look at all this crypto we seized” — overpowered whatever vague guidance might have existed about sensitive information handling.
Aligning culture with security means changing incentives:
- Reward caution publicly. Celebrate when a campaign is delayed or redesigned because someone noticed a potential leak in an image or screenshot. Treat that as a win, not an annoyance.
- Make security review part of the definition of done. A release or announcement is not “shipped” until it has passed a minimal sensitivity check, the same way it must pass legal review or brand guidelines.
- Avoid scapegoating individuals. When incidents happen, focus on process and structure rather than vilifying the person who clicked “upload.” People mirror what the system rewards and tolerates.
If individuals believe that raising a concern will slow them down or that mistakes will be punished harshly, they will either hide issues or race ahead. Neither posture helps.
The uncomfortable truth: the internet is watching
One detail that stands out in this episode is how quickly the outside world acted. There was no weeks‑long reconnaissance or advanced exploit chain. A stranger zoomed in on a press photo, copied twelve words, and exercised the control that phrase conferred faster than the issuing institution could react.
That speed matters for culture. It underscores a few realities security leaders should say out loud:
- Anything that can be captured in a photo or screenshot should be treated as eventually public.
- There is always someone in your audience who both understands what they are looking at and is willing to act on it, whether for profit, curiosity, or notoriety.
- Control in digital systems follows keys and tokens, not job titles or legal documents. If you have not embedded that idea across the organization, people will keep making decisions that assume paper ownership equals technical control.
Once those truths are acknowledged, the question shifts from “How do we make sure this never happens?” to “How do we make it hard to happen, quick to notice, and limited in impact when it does?”
From “we’re secure” to “we act securely”
The South Korean incident will likely be remembered in crypto circles as the case where a government leaked a seed phrase and lost $4.8M, but it deserves a broader reading. It is a story about how organizations with real capabilities and authority can still be undone by gaps between their technical systems and their human systems.
Security, in that sense, is less about how strong your cryptography is and more about how your culture treats the things that unlock it. If master keys are stage props, if screenshots are unexamined, if external storytelling is divorced from internal risk, the math will not save you.
The more useful takeaway is also the most humbling: assume that at some point, someone in your organization will put the wrong thing in front of a camera, a slide deck, or a repo. Then build a culture, and a set of practices, that treats that inevitability not as a distant edge case but as a central design constraint.