In the case of a large decentralized system like Ethereum or Polkadot, it is fairly easy to deal with malevolent or malfunctioning agents and to gain people's trust. But I wonder what this looks like in the case of Robonomics and the robots themselves? I doubt there will be multiple observers verifying that the robots work as expected. How are you going to deal with that issue? Or do you assume it will be marginal and shouldn't damage the project?
In Robonomics over Ethereum there is an implemented mechanism of observers that verify that a liability has been executed. You can read more about this in the whitepaper, par. 5.5.
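For illustration only, here is a minimal sketch of the observer idea: several independent observers attest to the result of an executed liability, and it is accepted only if a quorum agrees. The names and data shapes below are assumptions for the example, not the actual Robonomics interface.

```python
from collections import Counter

# Hypothetical helper (not the real Robonomics API): accept a robot's
# reported result only if at least `quorum` observers independently
# attest to the same result hash.
def confirm_liability(result_hash: str, observer_reports: dict, quorum: int) -> bool:
    votes = Counter(observer_reports.values())  # tally attestations per hash
    return votes.get(result_hash, 0) >= quorum

# Two of three observers agree with the robot's reported hash,
# so a quorum of 2 is reached and the liability is confirmed.
reports = {"obs_a": "Qm123", "obs_b": "Qm123", "obs_c": "Qmbad"}
confirm_liability("Qm123", reports, quorum=2)  # True
```

The point of the scheme is that no single observer is trusted: a faulty or malicious observer (like `obs_c` above) cannot block or forge confirmation on its own.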
In Robonomics over Substrate this feature has not been implemented yet.