01.02.2026
Request for Proposals
admin
Building AI-uplifted systems for improved human collective agency
Future of Life Foundation
1. Introduction
Overview
The Future of Life Foundation (FLF) is soliciting proposals for new large-scale projects or new organizations to develop systems for improved human reasoning and coordination, uplifted by AI tools that keep human understanding and decision-making firmly at the center. We seek ambitious initiatives that can meaningfully advance humanity’s collective competence on high-stakes decisions and actions. There are three award types: design studies ($25K-$50K), projects ($50K-$250K), and organization building ($500K+).
Notes:
- This RFP is invitation-only. If you know others who should be invited, please contact FLF rather than sharing this document directly.
- We may make awards in the form of grants, contracts, FLF fiscal sponsorship, or a hybrid (e.g. for entities that plan to incorporate during the award period).
- In the interests of moving faster, the content of this RFP represents a map of how we expect things to go, rather than a complete specification. We may adapt the process in reaction to information coming in. Similarly, if there’s a part of this that you think doesn’t make sense, please let us know!
Context
Civilization and technology have radically improved the human condition. Nonetheless, the world sometimes goes in directions which essentially nobody would prefer – nuclear arms races, unexpected financial crashes, predatory marketing, or ubiquitous political misinformation. This is superficially puzzling: outcomes are determined by people’s actions, so why don’t we simply avoid the bad outcomes? It’s a challenge to our collective competence.
We can identify four key problem areas where humanity systematically fails:
First, failures to understand what is real and true (epistemics). Experts and the public alike struggle to determine facts, evaluate evidence, and understand reality – especially in complex domains where stakes are high and information is contested.
Second, failure to develop shared goals (collective reasoning and deliberation). Groups struggle to surface, iterate, and combine their preferences. We lack effective processes for discovering collective wisdom and will, leading to decision-making that satisfies no one.
Third, failure to connect interventions to goals (prediction and modeling). Even when we know what we want, we struggle to anticipate the consequences of our actions. Our ability to forecast and plan effectively remains limited, particularly for novel situations.
Fourth, failure to take effective coordinated collective action. Multipolar dynamics create perverse incentives – race dynamics, tragedy of the commons, coordination failures. Even when goals align and paths are clear, execution fails due to problems of trust and commitment.
Motivation
New platforms, institutions, and methods, bolstered by the right technology, could help address these failures. In particular, the rise of modern AI systems unlocks new prospects for improvement across all four problem areas.
AI systems and AI-enhanced tools can support finding and trusting truth and understanding – helping people locate reliable information and sources, and giving them valid reasons to trust what they learn. This builds epistemic foundations for better decision-making.
They can support deliberation and collective wisdom to formulate shared goals – surfacing and synthesizing diverse preferences and values, enabling productive group reasoning, and discovering collective will and common ground.
They can support development of good plans and potential actions – improving forecasting and scenario planning, connecting interventions to likely outcomes, modeling the consequences of different approaches, and helping us avoid catastrophic errors of judgment.
And they can support coordinated action – surfacing common interests and groups supporting them, defusing race dynamics and multipolar traps, enabling commitment and trust mechanisms, preventing narrowly-interested cliques from exploiting less coordinated groups.
We believe the world is radically underinvested in these beneficial applications of AI. Many people have not yet had the space to take these prospects seriously. This situation calls for ambitious and creative efforts. With sufficiently good tools, we might steer away from a world in which humans are increasingly irrelevant, toward one with deep institutional competence and individual empowerment.
2. Priority Areas
FLF has identified several current priority areas where we believe AI-enhanced systems could transform human reasoning and coordination. These areas address some of the problems outlined above and are not mutually exclusive – we welcome proposals that bridge multiple domains, or extend beyond the descriptions as written. We also welcome proposals that help improve collective human agency in different ways than described below. Further details on our thinking are provided in the appendices.
Epistemic Systems
Epistemic Virtue Evaluation: We seek proposals for creating evaluations to measure and improve the epistemic competence and trustworthiness of large-scale AI systems. This includes developing benchmarks, leaderboards, and assessment frameworks that can guide AI development toward greater transparency, rigor, good-faith inquiry, and impartiality – ultimately improving the epistemic foundations that support both individual and collective reasoning.
Full Epistemic Stack: We are interested in proposals for building integrated infrastructure that makes knowledge provenance transparent and traversable. When anyone encounters a claim – in news, research, social media, or AI outputs – they should be able to access its complete epistemic genealogy: chains of citations, original sources, methodological assumptions, and reliability scores. This raises the floor of public epistemic hygiene and enables better-informed decisions.
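As a purely hypothetical illustration of what an "epistemic genealogy" might look like in software (the class, field names, and reliability scores below are invented for this sketch, not a reference to any existing system), a claim can be modeled as a node whose supporting sources are traversed recursively:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A claim plus pointers to the claims/sources that support it."""
    text: str
    source: str                      # e.g. a DOI, URL, or dataset ID
    reliability: float               # 0.0-1.0, from some scoring process
    supports: list["Claim"] = field(default_factory=list)

def trace_genealogy(claim: Claim, depth: int = 0) -> list[str]:
    """Walk the citation chain behind a claim, top-level claim first."""
    lines = [f"{'  ' * depth}{claim.text} [{claim.source}, r={claim.reliability}]"]
    for parent in claim.supports:
        lines.extend(trace_genealogy(parent, depth + 1))
    return lines

# A claim in a news article, backed by a study, backed by raw data:
data = Claim("raw trial data", "doi:10.0000/data", 0.9)
study = Claim("drug reduces symptoms", "doi:10.0000/study", 0.8, [data])
article = Claim("new drug works", "news.example/article", 0.6, [study])

for line in trace_genealogy(article):
    print(line)
```

A real system would of course need shared identifiers, methodological metadata, and a defensible reliability-scoring process; the sketch only shows the traversable chain-of-citations structure the paragraph describes.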
Tools for Situational Awareness and Planning: We are interested in proposals for tools that help relatively well-informed actors stay even better informed about the global situation and priorities. This could include horizon monitoring for emerging threats, scenario planning for dealing with advancing technology, or red-teaming for strategic plans. This could raise the ceiling for sensible global decision-making around the big challenges we expect to face in the coming years.
Coordination Systems
We seek proposals for AI-enhanced platforms and systems in five categories, each applicable across multiple high-stakes arenas (international relations, governance institutions, technology development, and public discourse):
Connecting and Networking Platforms: Systems that help parties discover and connect with relevant counterparties based on shared concerns, interests, needs, or capabilities. Key technologies include semantic indexing, clustering, and search for matching open-ended specifications.
Collective Deliberation Platforms: Systems that surface, iterate, and synthesize diverse preferences and values to discover group wisdom and will. Key technologies include conversational AI for rich elicitation and tools for synthesizing collective positions.
Negotiation and Bargaining Platforms: Systems that facilitate agreements despite real disagreements, limited discussion capacity, and trust challenges. These could support international agreements on AI governance, enable mutual oversight conditions between developers, or break governance impasses.
Assurance Tech: Methods and platforms that help groups of people overcome collective action barriers, whether through technologically supported threshold commitment mechanisms or other methods.
Governance Design and Implementation Platforms: Systems that help design and manage institutions to avoid unintended consequences like power concentration, corruption, or inefficacy. Key technologies include simulation/testbeds, mechanism design tools, and governance templates.
The above platform types can be combined into multi-function systems, and individual platforms may serve diverse use cases across multiple arenas. See more [here].
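To make the threshold commitment idea mentioned under Assurance Tech concrete: in an assurance contract, pledges bind only once total support crosses a preset threshold, so no party risks acting alone. The sketch below is a minimal, hypothetical model of that mechanism (the class and method names are invented for illustration, not any existing platform's API):

```python
class AssuranceContract:
    """Pledges activate only if total support reaches the threshold."""

    def __init__(self, threshold: float):
        self.threshold = threshold
        self.pledges: dict[str, float] = {}

    def pledge(self, party: str, amount: float) -> None:
        # Conditional commitment: nothing is owed unless activation occurs.
        self.pledges[party] = self.pledges.get(party, 0.0) + amount

    def activated(self) -> bool:
        return sum(self.pledges.values()) >= self.threshold

    def settle(self) -> dict[str, float]:
        """Collect pledges if the threshold is met; otherwise collect nothing."""
        if self.activated():
            return dict(self.pledges)
        return {party: 0.0 for party in self.pledges}

contract = AssuranceContract(threshold=100.0)
contract.pledge("lab_a", 40.0)
contract.pledge("lab_b", 35.0)
print(contract.activated())   # False: 75 < 100, so no one is committed
contract.pledge("lab_c", 30.0)
print(contract.activated())   # True: 105 >= 100, pledges now activate
```

The design point is that conditionality removes the first-mover risk that usually blocks collective action: each party's commitment only takes effect in the world where enough others have also committed.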
The Coordination Lab Concept: We also seek proposals for a centralized R&D environment developing the underlying technologies for these platforms – testbeds, evaluation frameworks, core algorithms, and infrastructure that multiple applications can build upon. This lab would combine theoretical research with learning from real applied experiments, potentially offering open-source components that an ecosystem of applications can leverage.
3. Award Typology
FLF offers three award types scaled by scope and organizational maturity. Each award type has specific requirements.
4. Evaluation Criteria
Applications will be evaluated based on the following criteria:
- Alignment with Priorities: Relevance to the priority areas outlined in Section 2 and/or the objectives outlined in Section 1, with clear connection to addressing failures in understanding truth, developing shared goals, connecting interventions to goals, or enabling coordinated action.
- Ambitious Impact Potential: Scale of potential impact and clarity of theory of change. Does this have a credible path to addressing high-stakes problems? Is it innovative and original? This includes realistic distribution prospects: is there a credible pathway to adoption by relevant stakeholders, understanding of target users and how to reach them, and appropriate sequencing? (Note: the proposed activities won’t always themselves constitute very ambitious impact potential, but they should meaningfully contribute to an initiative that does.)
- Team Capabilities: Team qualifications, relevant experience, and track record. For organization-building awards, strength of leadership team and their history working together.
- Technical Feasibility and Plan: Is this primarily an execution challenge with a clear technical path, or are there major research questions with unknown answers? How well-conceived is the approach? What are the key risks and how are they mitigated?
- Relation to Broader Ecosystem: How does this complement or build on existing work? Connection to testbeds, arenas, and other initiatives in the space.