A socio-cognitive approach to trust can help us envisage a notion of networked trust for multi-agent systems (MAS) composed of different interacting agents. Within this framework, the issue is to evaluate whether a socio-cognitive analysis of trust can apply to interactions between human and autonomous agents. Two main arguments support two alternative hypotheses. The first suggests that only reliance applies to artificial agents, because the predictability of agents' digital interaction is viewed as an absolute value and human relation is judged a necessary requirement for trust. The second suggests that trust may apply to autonomous agents, because the predictability of agents' interaction is viewed only as a relative value: the digital normativity that grows out of the communication process between interacting agents in MAS has always dealt with some unpredictable outcomes (_reduction of uncertainty_). Furthermore, the human touch is not judged a necessary requirement for trust. From this perspective, a different notion of trust is elaborated: trust is no longer conceived only as a relation between interacting agents but, rather, as a relation between cognitive states of control and lack of control (_double bind_).