Is it possible to have a setup where a simple computer with little computing power can ask a herd of supercomputers in the cloud to calculate something, and somehow check that the returned result is likely to be correct? In this project we determine the state of the art on this question.
Essentially this boils down to verifying computations without re-executing them. In this project we conduct a literature survey with the aim of producing a new survey paper on this topic. The applicability of verified computation to the design, development and operation of three broad areas will be investigated. Demonstrators will be constructed to experimentally test the guarantee of correctness of an outsourced computation. The results of each demonstrator will be evaluated qualitatively and quantitatively. Future developments that challenge the value of verified computation in cyber resilience will be identified, and conclusions and actionable recommendations will be documented.
The main starting point of this project is the earlier survey by Walfish and Blumberg (https://dl.acm.org/doi/10.1145/2641562), which combines results from complexity theory and cryptography. Its main finding is that the client can be given a guarantee of correctness for the entire computation, a guarantee that makes no assumptions about the platform performing it (other than cryptographic hardness assumptions) and that applies to general-purpose computations. The server runs a prover that returns a proof alongside the results; the client runs a verifier that checks the proof efficiently and probabilistically. If the entire computation was executed correctly, the client accepts; if there is any error, the client rejects with high probability.
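To make the accept/reject behaviour concrete, a classic (much simpler) instance of checking a result faster than recomputing it is Freivalds' algorithm for verifying a matrix product. This is not the general-purpose protocol machinery covered in the survey, only an illustrative sketch of the same idea: the verifier spends O(n^2) work per round on a random spot-check instead of the O(n^3) needed to redo the multiplication, and a wrong answer survives each round with probability at most 1/2.

```python
import random

def mat_vec(M, v):
    # Multiply matrix M (a list of rows) by vector v.
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def freivalds(A, B, C, rounds=30):
    """Probabilistically check whether A times B equals C.

    Each round draws a random 0/1 vector r and compares A(Br) with Cr,
    costing O(n^2) instead of the O(n^3) of recomputing the product.
    An incorrect C passes all rounds with probability at most 2**-rounds.
    """
    n = len(C)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
            return False  # caught an inconsistency: reject
    return True  # all spot-checks passed: accept

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]  # correct product A @ B
C_bad  = [[19, 22], [43, 51]]  # one wrong entry

print(freivalds(A, B, C_good))  # True
print(freivalds(A, B, C_bad))   # False, with overwhelming probability
```

The asymmetry mirrors the outsourcing setting above: whoever computed C did the expensive work, while the checker only needs cheap randomized tests and accepts a small, tunable probability of being fooled.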