Building the methods, frameworks, and audit systems needed to evaluate knowledge claims with rigor and independence in an AI-mediated world.
As AI fundamentally changes how we learn and make sense of the world, we face a critical risk: systems that prioritize consensus over rigor, suppressing valid dissent and eroding our capacity for independent thought. We exist to ensure AI serves as a partner in inquiry, not a replacement for reasoning.
Developing rigorous frameworks to audit information systems for logical consistency and structural integrity, safeguarding cognitive liberty against algorithmic bias.
Providing seed funding and grants to frontier researchers whose work aligns with our vision of AI-enhanced inquiry that strengthens human reasoning.
Creating open-source methodologies and digital tools that democratize sophisticated analytical capabilities and accelerate productive paradigm shifts.
We believe AI offers an extraordinary opportunity to see patterns and implications individual minds might miss. Our goal is to consciously shape this transition, ensuring technology serves the deepest human values of truth-seeking and collective wisdom.
The Future of Inquiry Institute is actively recruiting research fellows to join four live projects. Each fellowship pairs a motivated researcher with AI-powered infrastructure, a defined methodology, and a clear publication pathway. We welcome applicants from adjacent disciplines — intellectual curiosity and epistemic integrity matter more than field-specific credentials.
Frontier AI models correctly identify logical gaps in high-prestige scientific theories — and then ignore them under default conditions. This project documents that failure mode, builds a live multi-model demonstration platform, and publishes the methodology as a replicable protocol.
Signal Beyond introduces AI-generated control conditions into mediumship research, replacing adversarial skepticism with a structurally sound epistemic audit. A medium who consistently outperforms an LLM baseline produces a result that cannot be explained away by confirmation bias.
Over 1.25 million words of meticulously preserved dream journals — hundreds of entries coded for precognition and verified against independent witnesses and physical evidence. AI-driven analysis is now making this corpus scientifically legible for the first time.
In collaboration with the Near Death Experience Research Foundation — the world's largest NDE database — this project applies phase-locked statistical methodology to thousands of cardiac arrest self-reports, holding them to the same evidential standards already accepted in mainstream medical research.