C1: A Speedup Theorem (a technique for lower bounds)
Why is it important?
I am interested in techniques for proving lower bounds, and the paper [1] introduces an elegant one.
A Quick Review of the Idea
Any LCL problem Π (see this post) with deterministic time complexity T(n) can be reduced to a sequence of problems Π = Π_0, Π_1, Π_2, … with time complexities T(n), T(n) − 1, T(n) − 2, …. Therefore, if we can prove that the sequence stops at some problem Π_k, i.e., that Π_k cannot be solved in 0 rounds, then the round complexity of Π is at least k + 1.
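The bookkeeping behind this argument can be written as a short derivation (the notation Π_i and T(n) is mine, matching the standard round-elimination setup):

```latex
\[
\Pi = \Pi_0 \;\longrightarrow\; \Pi_1 \;\longrightarrow\; \cdots \;\longrightarrow\; \Pi_k,
\qquad T(\Pi_i) = T(n) - i .
\]
\[
\Pi_k \text{ not } 0\text{-round solvable}
\;\Longrightarrow\; T(n) - k \ge 1
\;\Longrightarrow\; T(n) \ge k + 1 .
\]
```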
The main idea is simple, but it is not easy to implement. To overcome the difficulties, several techniques are proposed in [1].
Let us start with the simplest case, i.e., a single step of the reduction: from a problem with complexity T(n) to one with complexity T(n) − 1.
Strictly speaking, there is no particular relationship between a problem and its reduced version, because an algorithm can do anything on an arbitrary graph. To make the reduction work, we require the input graph to satisfy a property called t-independence. To this end, we require that the girth of the input graph is large enough in terms of t (e.g., at least 2t + 2, so that every radius-t neighborhood is a tree).
t-independence
The strict definition is tedious. Intuitively speaking, a graph is t-independent if, for every edge {u, v}, the extension of u is independent of the extension of v, where the extension of an endpoint u over the edge {u, v} is the set of nodes within distance t of u when the edge {u, v} is ignored.
This definition hints at the following speedup theorem. Fix a node v: no matter which edge incident to v we choose, the extensions of the other endpoints are independent of each other, which means that we can execute a procedure on each of them in parallel!
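To make the two observations above concrete, here is a small self-contained sketch (the graph, the choice t = 2, and the function name are my own illustrations, not code from [1]): the extension of an endpoint is computed by a BFS that ignores the chosen edge, and on a tree the extensions of the neighbors of a fixed node are pairwise disjoint, so a procedure can run on each of them independently.

```python
from collections import deque

def extension(adj, u, v, t):
    """Nodes within distance t of u when the edge {u, v} is ignored
    (one reading of the intuitive definition of an extension)."""
    seen = {u}
    frontier = deque([(u, 0)])
    while frontier:
        x, d = frontier.popleft()
        if d == t:
            continue
        for y in adj[x]:
            if {x, y} == {u, v} or y in seen:
                continue  # skip the ignored edge and visited nodes
            seen.add(y)
            frontier.append((y, d + 1))
    return seen

# A "spider" tree: center 0 with three legs 0-1-2, 0-3-4, 0-5-6.
adj = {0: [1, 3, 5], 1: [0, 2], 2: [1], 3: [0, 4], 4: [3], 5: [0, 6], 6: [5]}
t = 2

# The two extensions of the edge {0, 1} never see each other
# once the edge is ignored.
print(extension(adj, 1, 0, t))  # {1, 2}
print(extension(adj, 0, 1, t))  # {0, 3, 4, 5, 6}

# Fix the center v = 0: the extensions of its neighbors (over their
# shared edge with v) are pairwise disjoint, so procedures on them
# do not conflict and can run "in parallel".
exts = [extension(adj, w, 0, t) for w in adj[0]]
print(all(a.isdisjoint(b) for i, a in enumerate(exts) for b in exts[i + 1:]))  # True
```

On a general graph the same disjointness holds as long as the girth is large enough relative to t, which is exactly why the girth requirement appears above.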
The biggest headache in distributed computing is the conflicts caused by parallel and distributed computation. Thanks to this nice property, we can avoid such conflicts and speed up our algorithms.
To be continued…
Reference
[1] Brandt, S. An automatic speedup theorem for distributed problems. In Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing (PODC 2019), pp. 379–388.