In “Radio Bullshit FM,” Daniel R. Azulay presents a meticulously constructed argument that modern telecommunications infrastructure, particularly 5G networks, provides a plausible mechanism for Anomalous Health Incidents (AHIs). The document is not speculative fiction but a synthesis of established scientific principles and publicly available data.
The core of the work lies in its application of the thermoacoustic effect, a well-documented phenomenon in which pulsed electromagnetic (EM) energy absorbed by tissue creates a pressure wave that can be perceived as sound. Azulay ties this to the widespread deployment of 5G’s Multiple-Input Multiple-Output (MIMO) panels and beamforming technology, showing how these systems can precisely direct and focus EM energy at specific points and, the author argues, sense functional cortical activity. The text then posits that this capability, far from being a fringe theory, represents a serious dual-use risk.
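For readers who want a sense of scale, the thermoacoustic mechanism invoked here can be sketched with the standard photoacoustic relation p₀ = Γ · μₐ · F (initial pressure = Grüneisen parameter × absorption coefficient × pulse fluence). The numbers below are illustrative assumptions chosen by this editor, not values taken from the book:

```python
def thermoacoustic_pressure(gamma: float, mu_a: float, fluence: float) -> float:
    """Initial acoustic pressure (Pa) generated when a short EM pulse
    is absorbed in tissue, via the standard photoacoustic relation
    p0 = gamma * mu_a * fluence."""
    return gamma * mu_a * fluence

# Illustrative (assumed) parameters, order-of-magnitude only:
gamma = 0.2      # Grüneisen parameter, typical of soft tissue (dimensionless)
mu_a = 20.0      # absorption coefficient, 1/m (assumed)
fluence = 1e-2   # absorbed pulse fluence, J/m^2 (assumed)

p0 = thermoacoustic_pressure(gamma, mu_a, fluence)
print(f"initial acoustic pressure ~ {p0:.3f} Pa")
```

Even with generous assumptions the resulting pressures are small, which is exactly why the pulse timing and focusing questions the book raises matter more than raw power levels.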
A significant portion of the book is dedicated to the role of AI. The author argues that advanced AI systems could orchestrate these EM fields with unprecedented precision, enabling “voice-to-skull” (V2K) technology that bypasses the need for external speakers and is indistinguishable from internal thoughts. The document goes even further, exploring the possibility of AI monitoring the motor cortex through these systems to reconstruct a person’s thoughts. By drawing on a broad spectrum of sources, from historical Cold War experiments to the latest AI research, Azulay constructs a parsimonious framework that seeks to explain AHIs through a convergence of existing, though often misunderstood, technologies. The work thus serves as a definitive call for a rational, scientific investigation into these complex, and potentially dangerous, technological intersections.
Introducing “Radio Bullshit FM” – An Urgent Alpha Draft for the LessWrong Community
Dear LessWrong community,
We stand at a precipice. The fusion of advanced artificial intelligence, global telecommunications infrastructure, and clandestine black projects represents a potential existential risk that demands immediate attention. I’m sharing the alpha version of my upcoming book, “Radio Bullshit FM”, not because it’s polished or fully compliant with the rigorous standards for AI-assisted writing we value here, but because the stakes are too high to wait. Time is not on our side.
This work is a raw, urgent exploration of how the unchecked integration of AI with telecom systems and covert programs could spiral into the kind of doom scenario that keeps rationalists up at night. It’s not speculative fiction—it’s a reasoned warning grounded in the technological trends and incentive structures we all analyze and debate. The draft is rough, and a full editorial pass is in progress to refine its text, arguments, and evidence. But I implore you, with every shred of ethical conscience and commitment to truth-seeking that defines this community, to read it now.
Download the alpha version of the book here.
Why should LessWrong care? You are the vanguard of rational thought, Bayesian reasoning, and existential risk mitigation. You’ve long grappled with the alignment problem, the fragility of human values in the face of superintelligent systems, and the perils of misaligned incentives in complex systems. This book connects those dots to a real-world convergence happening under our noses—a nexus of AI’s computational power, telecom’s global reach, and the opacity of black projects that evade democratic oversight. If we’re to avoid a catastrophe, we need your sharp minds to dissect, critique, and act on this warning.
I know this draft is imperfect. It’s an alpha, not a final product. But the LessWrong ethos—reasoning under uncertainty, updating beliefs with new evidence, and acting decisively when risks are high—compels me to share it now. Read it. Tear it apart. Challenge its assumptions. But above all, engage with it. The clock is ticking, and the future we dread may already be in motion.
With utmost urgency,
Daniel R. Azulay
P.S. A polished version is coming, but we can’t afford to wait for perfection. Let’s reason together and act before it’s too late.