BSides (Security BSides) is a community-driven information security conference series held around the world.
It emphasizes openness, accessibility, and the free exchange of knowledge, welcoming researchers, engineers, students, and anyone interested in security. BSides events are known for fostering diverse perspectives, hands-on technical presentations, and lively discussions. With a grassroots, low-commercial approach, BSides encourages new ideas and experimentation.
BSides Tokyo embraces this spirit and provides a space where Japan’s security community can learn, connect, and grow together.
Event Overview. Details are subject to change.
Day Session
- Date: Saturday, May 16, 2026
- Time: Time: 9:50 AM – 6:40 PM (Doors open at 9:00 AM)
- Venue: Internet Initiative Japan Inc.
Night Session (After-Party)
- Date: Saturday, May 16, 2026
- Time: 8:00 PM – 10:30 PM (Doors open at 7:50 PM)
- Venue: JAM ORCHESTRA, 4-3 Rokubancho, Chiyoda-ku, Tokyo 102-0085, GEMS Ichigaya 2F
Ticket Information
- Day Session Ticket: ¥2,000 (Early Bird: through April 12) / ¥3,000 (Regular: through May 16)
- Night Session Ticket: ¥8,000 (Early Bird: through April 12) / ¥10,000 (Regular: through May 16)
Please purchase your tickets from the link below:
https://peatix.com/event/4825609/view
Important Information
- QR codes required for entry will be distributed at the 2nd-floor registration desk. As they are required for both entry and re-entry, please keep them safe and do not lose them until the end of the event.
- If you move to floors other than the event venue or the designated smoking area on the 2nd floor, you may be unable to return to the venue due to building security restrictions. Please be very careful when moving between floors.
- No cloakroom (luggage storage) will be provided. Please manage your belongings, including valuables, at your own responsibility.
- Power outlets are provided at each seat.
- Guest Wi-Fi will be provided within the venue. Connection details will be announced at the venue on the day of the event.
- Whether photography is permitted during a session will be announced by the speaker before the start of each session.
- The organizers will only provide water. Please prepare other beverages yourself (bringing your own alcohol is permitted).
- Lunch will not be provided. Please use nearby restaurants or bring your own meal.
- Trash bins will be placed in the venue, and the organizers will collect and dispose of the waste. To ensure a clean environment and smooth operation, please cooperate with accurate waste sorting according to the specified rules.
- When posting photos on social media that include other attendees, please ensure you have their permission and show consideration for their privacy.
News
About ticket sales for BSides Tokyo 2025
The ticket page for BSides Tokyo 2025 is now open! Tickets are on sale now, so please purchase yours today. ...
Sending the results of the first round o...
We have sent out the results of the first round of CFPs. CFPs that were not selected in the first round will continue to...
The speaker has been decided
After careful consideration, we have selected 11 presentation this year. Thank you very much for the many submissions.</...
Venue
Internet Initiative Japan Inc.
Iidabashi Grand Bloom, 2-10-2 Fujimi, Chiyoda-ku, Tokyo 102-0071, JAPAN
schedule

Opening remarks


Jan Michael Alcantara
Jan Michael is a senior threat researcher in Netskope Threat Labs working on threat hunting and replication, detection validation and cloud application abuse research. As an active contributor to the security community, he consistently publishes thought leadership on emerging malware and phishing trends, with his research frequently cited by top-tier cybersecurity publications.
He has previously worked as an incident responder for one of the top 4 banks in Australia and a senior systems engineer in Trend Micro. He is an advisory board member of GIAC as a forensic analyst.
AttackGPT: Making LLM-Generated Malware Operational with Self-Healing and Strategic Model Routing
Is malicious code dead? Can malware now contain only text-based prompts without any malicious logic, relying on LLMs as autonomous malware authors that generate malicious code at the moment of execution? We explore this shift from “stored code” to “on-demand synthesis” and share the results of our testing.
To transition this concept from a research curiosity into an operational threat, we developed a framework capable of navigating the inherent friction of AI generation. Introducing AttackGPT: a post-exploit modular C2 framework that bridges the gap between theoretical AI risk and operational reality. To overcome the LLM’s code hallucination and guardrails, we employed strategic model routing and a self-healing loop. Through this architecture, AttackGPT achieves the capability to generate bespoke payload.
We will demonstrate the system in action: identifying execution failures in real-time, feeding logs back to the LLM for autonomous debugging, and successfully materializing functional attack chains that didn’t exist seconds prior. Can malware now contain only just text-based prompts without any malicious logic, relying on LLMs to generate malicious code during execution?


Taiga Shirakura
Taiga Shirakura has been researching security since his student days and currently works at Mitsui Bussan Secure Directions, Inc. (MBSD), conducting web application security assessments and penetration testing. He enjoys vulnerability research as a hobby and has obtained multiple CVEs in web frameworks by investigating gaps between specifications and implementations in HTTP headers, URLs, and related areas.
Simple Request Blind Spots Overlooked by Web Frameworks - Four Pitfalls in Modern CSRF Protection
Origin header validation combined with Simple Request checking is a modern CSRF protection technique. However, many web frameworks that implemented this approach had validation flaws due to misunderstanding the detailed browser specifications. There is a gap between how Simple Requests are explained in the context of CORS and how browsers actually send requests—a gap that you cannot notice just by reading MDN or technical books.
In this talk, I focus on Content-Type validation flaws and demonstrate vulnerabilities I discovered in several popular web frameworks, along with the actual requests browsers send.
I will present four patterns of validation flaws, explain the pitfalls that even framework developers overlooked in modern CSRF protection, and describe correct implementation practices.


Theo Webb
I am a Security Engineer at GMO Cybersecurity by Ierae, Inc., specializing in malware analysis and research, as well as software development for our security products.
I joined GMO Ierae in February 2025. Prior to that, I founded and built a tech startup, graduated from university in Japan, and started self-studying infosec in 2023. I am particularly interested in reverse engineering, system internals, and low-level programming. I gave a lightning talk at JSAC 2026, and I occasionally share C-related projects on my GitHub.
GPUGate: Repo Squatting and OpenCL Anti-Analysis to Deliver HijackLoader
In early September 2025, we observed a new malware campaign in which attackers hijacked the official GitHub Desktop repository to distribute a multi-stage loader disguised as the GitHub Desktop installer. This loader, dubbed GPUGate, cleverly uses a GPU-based API called OpenCL to evade sandbox and VM-based analysis, and to obscure the real decryption key from analysts. In our case, this forced analysis onto a physical machine with a GPU, where I could debug the loader, understand its inner workings, and recover the correct decryption key.
In this talk, I will provide a detailed explanation of how OpenCL works, and how it can be abused to evade sandbox analysis and hinder static decryption, using techniques observed in this campaign. You will learn how to spot and work around these techniques and gain a deeper understanding of OpenCL-based malware.
I will also discuss our research into the initial delivery technique (which I dubbed repo squatting). This is enabled by GitHub’s fork-network commit visibility, which allows attackers to “squat” under an official repository’s namespace via commit hashes. I will show how similar platforms are affected, and share the additional delivery technique the attackers used during the December 2025 to January 2026 activity.
Lunch


Yu Chiao-Lin
Chiao-Lin Yu (Steven Meow) is a Staff Red Team Security Threat Researcher at Trend Micro Taiwan. He holds professional certifications including OSCE³, OSCP, CRTO, CRTP, CARTP, CESP-ADCS, and LPT. He has previously spoken at DEF CON (USA), CCC (Germany), BSides Tokyo, HITCON Training, and CYBERSEC. He has disclosed over 50 CVE vulnerabilities affecting major vendors such as VMware, NEC, D-Link, and Zyxel. His recent research focuses on using AI for offensive security and investigating cross-border scam ecosystems — he has been actively breaching phishing-as-a-service infrastructure across Asia through vulnerability research and AI-powered OSINT. He specializes in red teaming, web security, IoT, and cats🐱.
My AI Partner Fell Down a Rabbit Hole — Accidentally Uncovered a Criminal Food Chain Across Asia
It started with a scam message. The rabbit hole went deeper than expected.
Using AI-assisted vulnerability research and 0-day exploitation, we breached phishing-as-a-service (PhaaS) platforms and traced criminal infrastructure across Taiwan, Malaysia, Japan, Hong Kong, and Russia. We mapped 80+ infrastructure hosts impersonating 8 major Taiwanese brands, extracted thousands of victim records, and reverse-engineered how fraud groups deploy and scale their operations — with AI doing the heavy lifting on OSINT automation.
Inside the compromised scam infrastructure, we discovered webshells that weren’t ours. The platform developers had shipped pre-backdoored installation packages — a supply chain attack targeting their own criminal customers. Meanwhile, a separate threat actor had deployed PHP SEO parasitic implants (link179 campaign) with 4-layer obfuscation, hijacking the already-compromised scam sites to run large-scale SEO poisoning — including fake Japanese shopping sites and phishing pages targeting Japanese users — alongside Russian bank fraud.
Each layer exploits the one below — with its own infrastructure, its own automation, and its own victims. And the impact reaches directly into Japan.
This talk traces the full descent: how one researcher, with AI as a partner, dug through a criminal ecosystem layer by layer — and found a food chain where nobody is safe, not even the criminals themselves.


Ryo Minakawa
Ryo is a malware and intelligence analyst at NTT DOCOMO BUSINESS, Inc., where he is currently responsible for attack surface and vulnerability information management at NTTCom-SIRT.
Some of his research has been presented at JSAC (2023, 2024), BSides Las Vegas 2024, AVTOKYO 2024, and Botconf 2025.
Ghost in the 7‑Zip: The Shadow of Residential Proxies Creeping into Your Life
In January 2026, a security incident involving a fraudulent 7-Zip installer caused a significant stir in Japan. Analysis of sample submissions and query patterns on VirusTotal confirms that a wide range of organizations across the country were impacted by this campaign.
In this talk, I will dissect the attack campaign utilizing the fake 7-Zip installer and explore related operations linked to the same threat actor. I will specifically focus on the residential proxy networks being built behind these attacks and detail the methodologies used to investigate them. Furthermore, while fake installer incidents are often oversimplified as mere “stepping stone” compromises, I will provide a clear, step-by-step perspective on the technical evolution of the attack and the real-world consequences as the operation unfolds.



Jack Sessions
Ray Goh
Cheaters Leave Footprints: Forensics of Cheats in Modern Competitive Games
Cheating in modern competitive games is no longer a gameplay issue!
It is a digital forensics problem. Using popular multiplayer titles in Asia such as League of Legends, Valorant, and Apex Legends, this talk examines how cheats behave like malware implants and how they leave persistent forensic artefacts in memory, disk, and telemetry.
The session focuses on post incident analysis rather than cheat development, showing how cheating activity can be reconstructed after the fact, how claims of innocence can be validated or disproven, and why anti-cheat engineering closely mirrors modern endpoint detection and response systems.
Break


Yusuke Nakajima
Joined the NTT DATA Group in 2019, initially working in sales, providing solutions such as image processing and natural language processing. In April 2023, transferred to the company’s CSIRT unit, NTTDATA-CERT, where engaged in incident response, threat hunting, IoC collection and distribution, as well as enhancing CSIRT operations through LLM-based automation. Also has a strong interest in offensive security, including C2 framework development, OSS vulnerability research, and participation in bug bounty programs. Presented at conferences such as BSides Tokyo 2025, Black Hat USA 2025 Arsenal, HITCON 2025 and JSAC 2025. CISSP, OSDA, OSTH.
The Dark Side of Autonomy: Exploiting DFIR Agents Through Adversarial Manipulation
In recent years, DFIR tools have increasingly integrated Large Language Models (LLMs) to automate analysis and reporting. This study shows that attackers can target DFIR agents by exploiting prompt injection through boundary perturbation of structured data—injecting closing and opening delimiters to trick parsers into interpreting data as new instructions—a format long assumed to be resistant to such prompt-injection-based manipulation. Crucially, this is not a tool-specific issue but a broader risk inherent to integrating DFIR tools with autonomous LLM agents. Although I contacted Google because their tools were also affected, they responded that the issue does not meet the criteria for a security bug.
By embedding malicious instructions into routine forensic artifacts such as logs and scheduled tasks, adversaries can cause agents to misinterpret data as commands, resulting in three outcomes: Hide, Mislead, and Exploit.
This is, to my knowledge, the first demonstration of structured-data injection attacks in LLM-integrated DFIR environments. The study also proposes practical defense-in-depth measures, including least-privilege design, strict structured-output validation, and human-in-the-loop oversight to ensure safe automated workflows.
This presentation offers organizations a foundation for rethinking how much autonomy to grant LLM-driven DFIR agents and where human supervision must remain essential.


Gabriel Rodrigues de Oliveira
I am a researcher and penetration tester at https://hakaisecurity.io/
I have one year of experience in the offensive security field.
CPTS certified.
My friends call me “Texugo” because one of them once said I look like one.
Who protect the defender?
In SIEM/XDR architectures, the Master is the king and the agents obey. But what happens when the hierarchy is reversed?
In this talk, we will dissect CVE-2026-25769, a critical Insecure Deserialization vulnerability in Wazuh that allows a compromised Worker to achieve remote code execution on the Master.
We will analyze the impact of this vulnerability from an APT perspective and walk through the entire vulnerability discovery process.


Anna Ohori
Anna is a university student majoring in Information Science. She is an alumna of the Security Camp National Convention 2024 (Threat Analysis Class) and previously served as a tutor for the program. Her research primarily focuses on the empirical analysis of attack tools and attacker decision-making, supported by hands-on verification. She is currently an intern at Powder Keg Technologies, where she works on technologies for evaluating detection by EPP/EDR products and online detection/evasion services.
Is VirusTotal Really an Attacker’s Ally? — An Empirical Analysis of VT vs. Local EDR and What the Sample-Sharing Ecosystem Reveals About Detection Blind Spots and Opportunities for Attackers
VirusTotal (VT) is widely used by defenders to analyze and evaluate suspicious files through its 70+ scanning engines and sample-sharing ecosystem. At the same time, it has long been assumed that attackers avoid VT because uploaded samples may be shared. However, an analysis of internal chat logs from the Black Basta ransomware group, leaked in 2025, reveals a more complex reality. While internal discussions warned against uploading samples to VT, the logs also show that “undetected on VT” was used as a factor in deciding whether to proceed with an attack. Why do attackers continue to use VT despite being aware of the risks of sample sharing? This talk examines that question through empirical data.
In this session, I present an analysis of the Black Basta logs alongside a chronological observation of custom samples incorporating evasion techniques across VT and local EPP/EDR environments for seven major security products. This analysis highlights cases in which detection coverage remains uneven even long after upload, and shows that the relationship between being “undetected on VT” and being “undetected in local endpoint environments” varies significantly across vendors.
Furthermore, I propose two new metrics to quantify this discrepancy: VDG (Vendor Detection Gap) and SSR (Safety-side Rate). I discuss how attackers may strategically use VT based on these characteristics, and how defenders should assess the reliability of VT results. The presentation will also include a live demo of “VDG-Tracker,” a continuous observation tool designed to help practitioners interpret VT data more effectively.
Break


Seiga Ueno
Seiga joined NTT DOCOMO BUSINESS, Inc. as a new graduate in 2025. As an Offensive Security Engineer, he researches adversary techniques from an attacker’s perspective and supports Red Team operations.
LLM-Based Natural Language Steganography for Covert C2 Communications
This presentation explores a potential future threat model for C2 (Command and Control) communication from an attacker’s perspective, with a focus on the implications for defenders. As monitoring of C2 traffic becomes increasingly sophisticated, simply encrypting payloads may no longer be sufficient to avoid detection. In some cases, high-entropy or otherwise unnatural data can itself become a signal for defenders.
Against this backdrop, the talk focuses on natural language steganography using large language models (LLMs): embedding hidden information into seemingly ordinary text. Through a proof of concept (PoC), it examines how commands and host status information could be concealed in natural language to enable covert C2 communication. It also discusses the possibility of using public platforms such as social media or websites as communication channels, and considers how defenders might detect and analyze this kind of activity when content inspection alone is no longer enough.


Mitsuaki (Mitch) Shiraishi
Mitsuaki was a speaker at BSides Tokyo 2018 and has served as a lecturer for the Security Camp National Program from 2023 to 2025. In 2016, he established Japan’s first Red Team operations service at Secureworks Japan, where he was responsible for service design and project delivery as the technical lead. Since 2022, he has been with a major global cybersecurity firm, leading the launch of its domestic Red Team services while also participating in international Red Team projects as a member of the global team.
OSEE/OSCE/OSCP/GREM/CARTP/CHMRTS/CISSP/CISA/Information Security Specialist/Software Development Engineer/Master of Technology Management (Professional)
A Proposal for TLPT 2.0: Decoupling “Knowing the Enemy” from “Knowing Yourself”
This session proposes an evolutionary update to the Threat-Led Penetration Test (TLPT) framework—the standard for cyberattack simulations in the financial sector for Japan—to further enhance its practical effectiveness.
The current TLPT framework follows a linear path: “Threat Intelligence (TI) → Attack Scenario Creation → Cyberattack Exercise.” However, I believe that the rigid requirement to couple “detailed attack scenarios based on organization-specific TI” with the “execution of the exercise” unnecessarily restricts the flexibility and impact of these projects.
Fundamentally, Threat Intelligence and cyberattack exercises can be conducted independently. Forcing them together via mandatory detailed scenarios creates several unnatural constraints:
- Threat Intelligence becomes a “one-shot” effort specifically for the TLPT, rather than a continuous process.
- Detailed attack scenarios are not a prerequisite for conducting effective cyberattack exercises.
- Focusing primarily on scenario-based execution can undermine the core essence of a red team exercise: the discovery of previously unknown attack paths.
To address these issues while upholding the core philosophy of TLPT, I propose a redesign based on the following principles:
- Decouple Threat Intelligence from Attack Simulation.
- Treat Threat Intelligence as a continuous activity. Financial institutions should regularly develop attack scenarios based on this intelligence to verify the necessity of their own countermeasures.
- Simplify the requirements for starting an attack simulation. Only the definition of the “Start” (Attack Origin) and “Goal” (Target) should be mandatory, leaving detailed scenarios as an optional element.
Drawing on my experience leading Japan's first full-scale Red Team services since 2016 and conducting simulations for numerous global and domestic organizations, I will explain the background of this proposal, focusing on:
- Threat Intelligence as a means of “Knowing the Enemy.”
- Cyberattack Simulation as a means of “Knowing Yourself.”
- Observations on why these two are currently so tightly coupled in the existing TLPT framework.
Attendees will gain a clear understanding of the structural issues in the current TLPT and walk away with design guidelines for a new, improved framework. This session aims to spark constructive dialogue among Red Team and TI providers, regulators, CISOs, and security professionals involved in the TLPT ecosystem.


Kuniyasu Suzaki
Professor at the Institute of Information Security (IISEC), Graduate School. He maintained the Japanese version of KNOPPIX (1CD Linux) from 2003 to 2012. His current research interests include confidential computing based on Trusted Execution Environments (TEE) and trusted computing based on the Trusted Platform Module (TPM).
Remote Attestation Sample for Cloud Confidential Computing
I report remote lesson learned by attestation sample codes for cloud confidential computing services. https://github.com/iisec-suzaki/cloud-ra-sample
Remote attestation is essential for confidential computing because it guarantees the genuineness of CPU hardware and the integrity and origin of software. However, it is not widely adopted due to its configuration complexity and because it is often treated as an optional security feature.
This talk explains practical and easy ways to deploy and use remote attestation, along with its importance. The samples cover major cloud confidential computing services (Azure, AWS, GCP, and Sakura Internet) and architectures (Intel SGX, Intel TDX, AMD SEV-SNP, and AWS Nitro). I will demonstrate the differences in setup and verification processes across cloud providers.
In addition, I will briefly explain differences in how each cloud handles virtual TPM (vTPM) and Secure Boot, as well as variations in remote attestation APIs, including evidence formats and verification workflows.
Finally, I will share key insights to help avoid common pitfalls and strengthen remote attestation practices across different cloud environments.

