- Attackers can use LLM APIs to generate phishing pages in real time inside a victim’s browser.
- Each victim may see a different JavaScript payload, which makes detection harder.
- The technique reduces static artifacts that security tools typically scan and block.
- Stronger browser monitoring, workplace LLM controls, and better AI guardrails are needed.
For years, the big promise of generative AI was personalization at scale. Websites that rebuild themselves instantly for every visitor. Content that adapts to your device, your location, your habits, even your intent. It sounded futuristic, ambitious, and a little unrealistic.
That future is arriving in a place few people expected to see it first. Cybercrime.
According to new research shared by Palo Alto Networks Unit 42, attackers can use legitimate large language model (LLM) APIs to assemble phishing pages on the spot, right inside a victim’s browser.
The idea is both clever and worrying because it removes the usual signs defenders look for. There is no traditional phishing kit being downloaded. No obvious malicious payload sitting on the page. Nothing static to grab, scan, and blacklist.
Instead, the page becomes a staging area. It looks harmless at first glance, then quietly asks an LLM to generate the “real” attack in real time.
How dynamic phishing pages are built on demand
The method Unit 42 describes reads like a proof of concept, but the ingredients are already common in modern attacks.
Here’s the simplified flow. A victim clicks a link and lands on what appears to be a benign webpage. It does not display suspicious content or embed a classic phishing template. It can even look like an empty placeholder or a basic landing page. That is the point.
Once the page loads, it sends carefully designed prompts to a legitimate LLM service through an API. The prompts request JavaScript code that can generate a realistic login page or a credential capture form. The LLM responds with code, and that code is then assembled and executed directly inside the browser.
The result is a working phishing site that appears only after the page loads and only for that user.
This is where the technique becomes particularly dangerous. Because the code is generated on demand, every visitor can receive a different version. That means defenders are no longer dealing with a single repeatable payload. They are dealing with endless variations, each one slightly rewritten, reshaped, and rearranged.
Traditional detection tools struggle when there is nothing consistent to fingerprint.
Why this matters even if it is “just” a concept
Unit 42 reportedly has not seen this exact method at scale in the wild. But it also made clear that this is not science fiction. The building blocks already exist and are being used in other ways.
LLMs are already being used to generate obfuscated JavaScript, even if that generation currently happens offline rather than in the victim’s browser. Runtime code execution is a familiar tactic on compromised systems. AI-assisted malware development is increasing, and so are campaigns tied to ransomware, espionage, and credential theft.
Put those pieces together and the direction is obvious. Phishing is moving toward a model where the attacker delivers less and generates more. The fewer artifacts they leave behind, the harder it becomes to trace, block, or prove what happened.
It also changes the economics of phishing. Historically, criminals reused the same kits and templates because it was fast. But reuse is also what made phishing easier to detect. If AI can create a fresh page every time, attackers get the speed and the variety.
That is a bad trade for defenders.
What security teams should do next
Even though the technique avoids static payloads, Unit 42 argues detection is still possible. It just needs to evolve.
One option is improved browser-based crawling and behavioral analysis. Instead of scanning a page for known bad code, defenders can look for suspicious behavior such as unusual prompt-like requests, dynamic script assembly, strange execution patterns, and unexpected outbound connections.
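To make that concrete, here is a minimal sketch of the kind of behavioral scoring a sandboxed crawler could apply after rendering a page. The event names, risk weights, and the list of LLM API hosts are illustrative assumptions for this article, not details from Unit 42’s tooling.

```python
# Minimal sketch: score a rendered page based on the suspicious combination
# described above (contacts an LLM API, assembles/executes code, injects a form).
# Hosts, event names, and weights are assumptions, not a vetted detection rule.

from dataclasses import dataclass

# Hypothetical set of hosts a monitoring crawler would treat as LLM API endpoints.
LLM_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

@dataclass
class PageEvent:
    kind: str    # e.g. "outbound_request", "dynamic_eval", "form_injected"
    detail: str  # hostname for requests, script hash for eval, field type for forms

def score_page(events: list[PageEvent]) -> int:
    """Return a rough risk score for one rendered page."""
    contacted_llm = any(
        e.kind == "outbound_request" and e.detail in LLM_API_HOSTS for e in events
    )
    ran_dynamic_code = any(e.kind == "dynamic_eval" for e in events)
    injected_form = any(e.kind == "form_injected" for e in events)

    score = 0
    if contacted_llm:
        score += 3
    if ran_dynamic_code:
        score += 2
    # The pattern this article describes: LLM traffic plus on-the-fly code execution,
    # followed by a credential-style form appearing in the DOM.
    if contacted_llm and ran_dynamic_code:
        score += 5
    if ran_dynamic_code and injected_form:
        score += 5
    return score

if __name__ == "__main__":
    observed = [
        PageEvent("outbound_request", "api.openai.com"),
        PageEvent("dynamic_eval", "sha256:ab12"),
        PageEvent("form_injected", "password input"),
    ]
    print(score_page(observed))  # high score -> flag the page for review
```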
Another recommendation is limiting unsanctioned LLM use in workplaces. This is not a magic fix, but it reduces risk by cutting off one easy path attackers might exploit. If employees can freely access random AI services from corporate devices, it becomes harder to control what data is exposed and what tools are being used behind the scenes.
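As one illustration of that control, here is a minimal sketch of an egress check a forward proxy or browser extension could apply, assuming a hypothetical allowlist of company-sanctioned AI services. The host names are placeholders, not a recommended blocklist.

```python
# Minimal sketch: allow traffic to LLM APIs only when the destination is a
# sanctioned service. Host lists here are illustrative assumptions; a real
# deployment would source them from organizational policy.

from urllib.parse import urlparse

SANCTIONED_AI_HOSTS = {
    "copilot.internal.example.com",  # hypothetical company-approved assistant
}

KNOWN_LLM_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def allow_request(url: str) -> bool:
    """Block requests to known LLM API hosts unless explicitly sanctioned."""
    host = urlparse(url).hostname or ""
    if host in KNOWN_LLM_API_HOSTS:
        return host in SANCTIONED_AI_HOSTS
    return True  # non-LLM traffic is handled by other policy layers

print(allow_request("https://api.openai.com/v1/chat/completions"))  # False
print(allow_request("https://example.com/"))                        # True
```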
The researchers also highlight a bigger issue: guardrails. Many AI platforms have protections designed to prevent malicious use, but careful prompt engineering can still work around them. If a phishing page can be generated through “safe” requests that are framed as normal web development tasks, then the guardrails are not strong enough.
The takeaway is uncomfortable but clear. Defenders need to assume attackers will keep pushing LLMs into the workflow, not just to write better scam emails, but to generate the entire attack surface dynamically.
Phishing is no longer just about convincing language. It is becoming a live, adaptive system.
