BSides (Security BSides) is a community-driven information security conference series held around the world.

It emphasizes openness, accessibility, and the free exchange of knowledge, welcoming researchers, engineers, students, and anyone interested in security. BSides events are known for fostering diverse perspectives, hands-on technical presentations, and lively discussions. With a grassroots, low-commercial approach, BSides encourages new ideas and experimentation.

BSides Tokyo embraces this spirit and provides a space where Japan’s security community can learn, connect, and grow together.


Event Overview. Details are subject to change.

Day Session

Night Session (After-Party)

Ticket Information

Important Information

News

About ticket sales for BSides Tokyo 2025

The ticket page for BSides Tokyo 2025 is now open! Tickets are on sale now, so please purchase yours today. ...

Sending the results of the first round o...

We have sent out the results of the first round of CFPs. CFPs that were not selected in the first round will continue to...

The speaker has been decided

After careful consideration, we have selected 11 presentation this year. Thank you very much for the many submissions.</...

All News >

Venue

Internet Initiative Japan Inc.

Iidabashi Grand Bloom, 2-10-2 Fujimi, Chiyoda-ku, Tokyo 102-0071, JAPAN

schedule

09:00 - 09:50

Door Open and Registration

09:50 - 10:00

Opening remarks

10:00 - 10:30

Jan Michael Alcantara

Jan Michael is a senior threat researcher in Netskope Threat Labs working on threat hunting and replication, detection validation and cloud application abuse research. As an active contributor to the security community, he consistently publishes thought leadership on emerging malware and phishing trends, with his research frequently cited by top-tier cybersecurity publications.
He has previously worked as an incident responder for one of the top 4 banks in Australia and a senior systems engineer in Trend Micro. He is an advisory board member of GIAC as a forensic analyst.

AttackGPT: Making LLM-Generated Malware Operational with Self-Healing and Strategic Model Routing

Is malicious code dead? Can malware now contain only text-based prompts without any malicious logic, relying on LLMs as autonomous malware authors that generate malicious code at the moment of execution? We explore this shift from “stored code” to “on-demand synthesis” and share the results of our testing.

To transition this concept from a research curiosity into an operational threat, we developed a framework capable of navigating the inherent friction of AI generation. Introducing AttackGPT: a post-exploit modular C2 framework that bridges the gap between theoretical AI risk and operational reality. To overcome the LLM’s code hallucination and guardrails, we employed strategic model routing and a self-healing loop. Through this architecture, AttackGPT achieves the capability to generate bespoke payload.

We will demonstrate the system in action: identifying execution failures in real-time, feeding logs back to the LLM for autonomous debugging, and successfully materializing functional attack chains that didn’t exist seconds prior. Can malware now contain only just text-based prompts without any malicious logic, relying on LLMs to generate malicious code during execution?

10:30 - 11:00

Taiga Shirakura

Taiga Shirakura has been researching security since his student days and currently works at Mitsui Bussan Secure Directions, Inc. (MBSD), conducting web application security assessments and penetration testing. He enjoys vulnerability research as a hobby and has obtained multiple CVEs in web frameworks by investigating gaps between specifications and implementations in HTTP headers, URLs, and related areas.




Simple Request Blind Spots Overlooked by Web Frameworks - Four Pitfalls in Modern CSRF Protection

Origin header validation combined with Simple Request checking is a modern CSRF protection technique. However, many web frameworks that implemented this approach had validation flaws due to misunderstanding the detailed browser specifications. There is a gap between how Simple Requests are explained in the context of CORS and how browsers actually send requests—a gap that you cannot notice just by reading MDN or technical books.

In this talk, I focus on Content-Type validation flaws and demonstrate vulnerabilities I discovered in several popular web frameworks, along with the actual requests browsers send.

I will present four patterns of validation flaws, explain the pitfalls that even framework developers overlooked in modern CSRF protection, and describe correct implementation practices.

11:00 - 11:30

Theo Webb

I am a Security Engineer at GMO Cybersecurity by Ierae, Inc., specializing in malware analysis and research, as well as software development for our security products.

I joined GMO Ierae in February 2025. Prior to that, I founded and built a tech startup, graduated from university in Japan, and started self-studying infosec in 2023. I am particularly interested in reverse engineering, system internals, and low-level programming. I gave a lightning talk at JSAC 2026, and I occasionally share C-related projects on my GitHub.

GPUGate: Repo Squatting and OpenCL Anti-Analysis to Deliver HijackLoader

In early September 2025, we observed a new malware campaign in which attackers hijacked the official GitHub Desktop repository to distribute a multi-stage loader disguised as the GitHub Desktop installer. This loader, dubbed GPUGate, cleverly uses a GPU-based API called OpenCL to evade sandbox and VM-based analysis, and to obscure the real decryption key from analysts. In our case, this forced analysis onto a physical machine with a GPU, where I could debug the loader, understand its inner workings, and recover the correct decryption key.

In this talk, I will provide a detailed explanation of how OpenCL works, and how it can be abused to evade sandbox analysis and hinder static decryption, using techniques observed in this campaign. You will learn how to spot and work around these techniques and gain a deeper understanding of OpenCL-based malware.

I will also discuss our research into the initial delivery technique (which I dubbed repo squatting). This is enabled by GitHub’s fork-network commit visibility, which allows attackers to “squat” under an official repository’s namespace via commit hashes. I will show how similar platforms are affected, and share the additional delivery technique the attackers used during the December 2025 to January 2026 activity.

11:30 - 13:00

Lunch

13:00 - 13:30

Yu Chiao-Lin

Chiao-Lin Yu (Steven Meow) is a Staff Red Team Security Threat Researcher at Trend Micro Taiwan. He holds professional certifications including OSCE³, OSCP, CRTO, CRTP, CARTP, CESP-ADCS, and LPT. He has previously spoken at DEF CON (USA), CCC (Germany), BSides Tokyo, HITCON Training, and CYBERSEC. He has disclosed over 50 CVE vulnerabilities affecting major vendors such as VMware, NEC, D-Link, and Zyxel. His recent research focuses on using AI for offensive security and investigating cross-border scam ecosystems — he has been actively breaching phishing-as-a-service infrastructure across Asia through vulnerability research and AI-powered OSINT. He specializes in red teaming, web security, IoT, and cats🐱.

My AI Partner Fell Down a Rabbit Hole — Accidentally Uncovered a Criminal Food Chain Across Asia

It started with a scam message. The rabbit hole went deeper than expected.

Using AI-assisted vulnerability research and 0-day exploitation, we breached phishing-as-a-service (PhaaS) platforms and traced criminal infrastructure across Taiwan, Malaysia, Japan, Hong Kong, and Russia. We mapped 80+ infrastructure hosts impersonating 8 major Taiwanese brands, extracted thousands of victim records, and reverse-engineered how fraud groups deploy and scale their operations — with AI doing the heavy lifting on OSINT automation.

Inside the compromised scam infrastructure, we discovered webshells that weren’t ours. The platform developers had shipped pre-backdoored installation packages — a supply chain attack targeting their own criminal customers. Meanwhile, a separate threat actor had deployed PHP SEO parasitic implants (link179 campaign) with 4-layer obfuscation, hijacking the already-compromised scam sites to run large-scale SEO poisoning — including fake Japanese shopping sites and phishing pages targeting Japanese users — alongside Russian bank fraud.

Each layer exploits the one below — with its own infrastructure, its own automation, and its own victims. And the impact reaches directly into Japan.

This talk traces the full descent: how one researcher, with AI as a partner, dug through a criminal ecosystem layer by layer — and found a food chain where nobody is safe, not even the criminals themselves.

13:30 - 14:00

Ryo Minakawa

Ryo is a malware and intelligence analyst at NTT DOCOMO BUSINESS, Inc., where he is currently responsible for attack surface and vulnerability information management at NTTCom-SIRT.

Some of his research has been presented at JSAC (2023, 2024), BSides Las Vegas 2024, AVTOKYO 2024, and Botconf 2025.




Ghost in the 7‑Zip: The Shadow of Residential Proxies Creeping into Your Life

In January 2026, a security incident involving a fraudulent 7-Zip installer caused a significant stir in Japan. Analysis of sample submissions and query patterns on VirusTotal confirms that a wide range of organizations across the country were impacted by this campaign.

In this talk, I will dissect the attack campaign utilizing the fake 7-Zip installer and explore related operations linked to the same threat actor. I will specifically focus on the residential proxy networks being built behind these attacks and detail the methodologies used to investigate them. Furthermore, while fake installer incidents are often oversimplified as mere “stepping stone” compromises, I will provide a clear, step-by-step perspective on the technical evolution of the attack and the real-world consequences as the operation unfolds.

14:00 - 14:30

Jack Sessions

"bio = {
  "name": "Jack Sessions",
  "role": ["cybersecurity", "DFIR", "mobile forensics", "Rapper"],
  "tool_of_choice": "python",
  "special_move": "turning logs into timelines",
  "hates": "mystery GUIs and 'trust me bro' evidence",
  "talks_about": ["Android", "iOS", "Linux", "cheaters"],
  "status": "still waiting for evidence to stop lying"
}
END"

Ray Goh

Volatility 3 - caffeineaddict profile scan
Offset Field Value
0x000001 name caffeineaddict / Ray Goh
0x000002 role DFIR / endpoint forensics / IR
0x000003 next_stage Cloudflare Threat Detection & IR
0x000004 fuel Monster Energy (unsponsored)

ANOMALY: cheaters_leave_footprints
NOTE:    artefacts persist after process termination
WARNING: user denial inconsistent with timeline.


Cheaters Leave Footprints: Forensics of Cheats in Modern Competitive Games

Cheating in modern competitive games is no longer a gameplay issue!

It is a digital forensics problem. Using popular multiplayer titles in Asia such as League of Legends, Valorant, and Apex Legends, this talk examines how cheats behave like malware implants and how they leave persistent forensic artefacts in memory, disk, and telemetry.

The session focuses on post incident analysis rather than cheat development, showing how cheating activity can be reconstructed after the fact, how claims of innocence can be validated or disproven, and why anti-cheat engineering closely mirrors modern endpoint detection and response systems.

14:30 - 15:00

Break

15:00 - 15:30

Yusuke Nakajima

Joined the NTT DATA Group in 2019, initially working in sales, providing solutions such as image processing and natural language processing. In April 2023, transferred to the company’s CSIRT unit, NTTDATA-CERT, where engaged in incident response, threat hunting, IoC collection and distribution, as well as enhancing CSIRT operations through LLM-based automation. Also has a strong interest in offensive security, including C2 framework development, OSS vulnerability research, and participation in bug bounty programs. Presented at conferences such as BSides Tokyo 2025, Black Hat USA 2025 Arsenal, HITCON 2025 and JSAC 2025. CISSP, OSDA, OSTH.


The Dark Side of Autonomy: Exploiting DFIR Agents Through Adversarial Manipulation

In recent years, DFIR tools have increasingly integrated Large Language Models (LLMs) to automate analysis and reporting. This study shows that attackers can target DFIR agents by exploiting prompt injection through boundary perturbation of structured data—injecting closing and opening delimiters to trick parsers into interpreting data as new instructions—a format long assumed to be resistant to such prompt-injection-based manipulation. Crucially, this is not a tool-specific issue but a broader risk inherent to integrating DFIR tools with autonomous LLM agents. Although I contacted Google because their tools were also affected, they responded that the issue does not meet the criteria for a security bug.

By embedding malicious instructions into routine forensic artifacts such as logs and scheduled tasks, adversaries can cause agents to misinterpret data as commands, resulting in three outcomes: Hide, Mislead, and Exploit.

This is, to my knowledge, the first demonstration of structured-data injection attacks in LLM-integrated DFIR environments. The study also proposes practical defense-in-depth measures, including least-privilege design, strict structured-output validation, and human-in-the-loop oversight to ensure safe automated workflows.

This presentation offers organizations a foundation for rethinking how much autonomy to grant LLM-driven DFIR agents and where human supervision must remain essential.

15:30 - 16:00

Gabriel Rodrigues de Oliveira

I am a researcher and penetration tester at https://hakaisecurity.io/ I have one year of experience in the offensive security field. CPTS certified. My friends call me “Texugo” because one of them once said I look like one.






Who protect the defender?

In SIEM/XDR architectures, the Master is the king and the agents obey. But what happens when the hierarchy is reversed?
In this talk, we will dissect CVE-2026-25769, a critical Insecure Deserialization vulnerability in Wazuh that allows a compromised Worker to achieve remote code execution on the Master. We will analyze the impact of this vulnerability from an APT perspective and walk through the entire vulnerability discovery process.

16:00 - 16:30

Anna Ohori

Anna is a university student majoring in Information Science. She is an alumna of the Security Camp National Convention 2024 (Threat Analysis Class) and previously served as a tutor for the program. Her research primarily focuses on the empirical analysis of attack tools and attacker decision-making, supported by hands-on verification. She is currently an intern at Powder Keg Technologies, where she works on technologies for evaluating detection by EPP/EDR products and online detection/evasion services.



Is VirusTotal Really an Attacker’s Ally? — An Empirical Analysis of VT vs. Local EDR and What the Sample-Sharing Ecosystem Reveals About Detection Blind Spots and Opportunities for Attackers

VirusTotal (VT) is widely used by defenders to analyze and evaluate suspicious files through its 70+ scanning engines and sample-sharing ecosystem. At the same time, it has long been assumed that attackers avoid VT because uploaded samples may be shared. However, an analysis of internal chat logs from the Black Basta ransomware group, leaked in 2025, reveals a more complex reality. While internal discussions warned against uploading samples to VT, the logs also show that “undetected on VT” was used as a factor in deciding whether to proceed with an attack. Why do attackers continue to use VT despite being aware of the risks of sample sharing? This talk examines that question through empirical data.

In this session, I present an analysis of the Black Basta logs alongside a chronological observation of custom samples incorporating evasion techniques across VT and local EPP/EDR environments for seven major security products. This analysis highlights cases in which detection coverage remains uneven even long after upload, and shows that the relationship between being “undetected on VT” and being “undetected in local endpoint environments” varies significantly across vendors.

Furthermore, I propose two new metrics to quantify this discrepancy: VDG (Vendor Detection Gap) and SSR (Safety-side Rate). I discuss how attackers may strategically use VT based on these characteristics, and how defenders should assess the reliability of VT results. The presentation will also include a live demo of “VDG-Tracker,” a continuous observation tool designed to help practitioners interpret VT data more effectively.

16:30 - 17:00

Break

17:00 - 17:30

Seiga Ueno

Seiga joined NTT DOCOMO BUSINESS, Inc. as a new graduate in 2025. As an Offensive Security Engineer, he researches adversary techniques from an attacker’s perspective and supports Red Team operations.






LLM-Based Natural Language Steganography for Covert C2 Communications

This presentation explores a potential future threat model for C2 (Command and Control) communication from an attacker’s perspective, with a focus on the implications for defenders. As monitoring of C2 traffic becomes increasingly sophisticated, simply encrypting payloads may no longer be sufficient to avoid detection. In some cases, high-entropy or otherwise unnatural data can itself become a signal for defenders.
Against this backdrop, the talk focuses on natural language steganography using large language models (LLMs): embedding hidden information into seemingly ordinary text. Through a proof of concept (PoC), it examines how commands and host status information could be concealed in natural language to enable covert C2 communication. It also discusses the possibility of using public platforms such as social media or websites as communication channels, and considers how defenders might detect and analyze this kind of activity when content inspection alone is no longer enough.

17:30 - 18:00

Mitsuaki (Mitch) Shiraishi

Mitsuaki was a speaker at BSides Tokyo 2018 and has served as a lecturer for the Security Camp National Program from 2023 to 2025. In 2016, he established Japan’s first Red Team operations service at Secureworks Japan, where he was responsible for service design and project delivery as the technical lead. Since 2022, he has been with a major global cybersecurity firm, leading the launch of its domestic Red Team services while also participating in international Red Team projects as a member of the global team.

OSEE/OSCE/OSCP/GREM/CARTP/CHMRTS/CISSP/CISA/Information Security Specialist/Software Development Engineer/Master of Technology Management (Professional)

A Proposal for TLPT 2.0: Decoupling “Knowing the Enemy” from “Knowing Yourself”

This session proposes an evolutionary update to the Threat-Led Penetration Test (TLPT) framework—the standard for cyberattack simulations in the financial sector for Japan—to further enhance its practical effectiveness.

The current TLPT framework follows a linear path: “Threat Intelligence (TI) → Attack Scenario Creation → Cyberattack Exercise.” However, I believe that the rigid requirement to couple “detailed attack scenarios based on organization-specific TI” with the “execution of the exercise” unnecessarily restricts the flexibility and impact of these projects.

Fundamentally, Threat Intelligence and cyberattack exercises can be conducted independently. Forcing them together via mandatory detailed scenarios creates several unnatural constraints:

  • Threat Intelligence becomes a “one-shot” effort specifically for the TLPT, rather than a continuous process.
  • Detailed attack scenarios are not a prerequisite for conducting effective cyberattack exercises.
  • Focusing primarily on scenario-based execution can undermine the core essence of a red team exercise: the discovery of previously unknown attack paths.

To address these issues while upholding the core philosophy of TLPT, I propose a redesign based on the following principles:
  1. Decouple Threat Intelligence from Attack Simulation.
  2. Treat Threat Intelligence as a continuous activity. Financial institutions should regularly develop attack scenarios based on this intelligence to verify the necessity of their own countermeasures.
  3. Simplify the requirements for starting an attack simulation. Only the definition of the “Start” (Attack Origin) and “Goal” (Target) should be mandatory, leaving detailed scenarios as an optional element.

Drawing on my experience leading Japan's first full-scale Red Team services since 2016 and conducting simulations for numerous global and domestic organizations, I will explain the background of this proposal, focusing on:
  • Threat Intelligence as a means of “Knowing the Enemy.”
  • Cyberattack Simulation as a means of “Knowing Yourself.”
  • Observations on why these two are currently so tightly coupled in the existing TLPT framework.

Attendees will gain a clear understanding of the structural issues in the current TLPT and walk away with design guidelines for a new, improved framework. This session aims to spark constructive dialogue among Red Team and TI providers, regulators, CISOs, and security professionals involved in the TLPT ecosystem.
18:00 - 18:30

Kuniyasu Suzaki

Professor at the Institute of Information Security (IISEC), Graduate School. He maintained the Japanese version of KNOPPIX (1CD Linux) from 2003 to 2012. His current research interests include confidential computing based on Trusted Execution Environments (TEE) and trusted computing based on the Trusted Platform Module (TPM).





Remote Attestation Sample for Cloud Confidential Computing

I report remote lesson learned by attestation sample codes for cloud confidential computing services. https://github.com/iisec-suzaki/cloud-ra-sample

Remote attestation is essential for confidential computing because it guarantees the genuineness of CPU hardware and the integrity and origin of software. However, it is not widely adopted due to its configuration complexity and because it is often treated as an optional security feature.

This talk explains practical and easy ways to deploy and use remote attestation, along with its importance. The samples cover major cloud confidential computing services (Azure, AWS, GCP, and Sakura Internet) and architectures (Intel SGX, Intel TDX, AMD SEV-SNP, and AWS Nitro). I will demonstrate the differences in setup and verification processes across cloud providers.

In addition, I will briefly explain differences in how each cloud handles virtual TPM (vTPM) and Secure Boot, as well as variations in remote attestation APIs, including evidence formats and verification workflows.

Finally, I will share key insights to help avoid common pitfalls and strengthen remote attestation practices across different cloud environments.

18:30 - 18:40

Closing remarks

18:40 - 20:00

Networking and Door Close

20:00 - 22:30

After Party