Philippe’s moment begins with a sense of possibility.
In the early 2010s, discussions around accountability to affected populations were gaining momentum. At the same time, social media platforms were spreading beyond the West. For Philippe and others working on accountability, this convergence felt promising. Technology appeared to offer a way to rebalance power, to create more direct forms of engagement, and to give people affected by conflict and crisis a stronger voice.
The idea was simple and compelling. If people could speak more directly to humanitarian organisations, programmes would become more responsive. Listening would become easier. Participation would increase.
What Philippe and his colleagues encountered, however, was that listening was not a neutral act.
As engagement through digital platforms expanded, so did the risks that came with it. People were being asked to speak in spaces they did not control, through technologies they did not own, and in contexts where the consequences of being visible were unevenly distributed. Issues of privacy, data protection, surveillance, and unintended exposure surfaced, often faster than humanitarian organisations were equipped to understand them.
The shift for Philippe came when it became clear that good intentions were not enough. Creating channels for participation without understanding the full chain of consequences could place people at risk, even when the aim was inclusion and accountability.
Listening, in this context, was not simply about openness or responsiveness. It required a deeper understanding of how data moves, how information can be used, and how vulnerability changes the meaning of consent. What felt empowering from a distance could feel coercive or dangerous on the ground.
This realisation changed how Philippe approached technology in humanitarian action. Rather than asking how quickly new tools could be deployed, the question became whether their risks were understood well enough to justify their use at all. Efficiency and innovation no longer stood on their own. They had to be weighed against the severity of harm, the likelihood of misuse, and the asymmetry of power between institutions and the people they serve.
In response, Philippe helped initiate work that brought technologists, ethicists, legal experts, and humanitarian practitioners into the same conversation. Policies such as Do No Digital Harm and later the ICRC’s AI framework emerged from this approach, not as abstract principles, but as attempts to slow down decision-making in situations where speed itself could create harm.
Philippe’s moment is not about rejecting technology or participation. It is about recognising that accountability cannot be reduced to mechanisms for listening. Without understanding the consequences of being heard, listening can become another way of shifting risk onto those least able to absorb it.
In this sense, understanding did not make decisions easier. It made them heavier. It changed what responsible action looks like when innovation, speed, and inclusion collide with the realities of power and vulnerability.