Toolbox

C1: A Speedup Theorem (a technique for lower bounds)

by Xianbin

Why is it important?

I am interested in any technique for showing lower bounds. The paper [1] introduces an elegant way of proving them.

A Quick Review of the Idea

Any LCL problem $\Pi$ (see this post) with deterministic time complexity $T(n,\Delta)$ can be reduced to a sequence of problems $\Pi_1 \to \Pi_2 \to \cdots$ with time complexities $T(n,\Delta)-1, T(n,\Delta)-2, \cdots$. Therefore, if we can prove that this sequence stops at some problem $\Pi_{t+1}$, then the round complexity of $\Pi$ is at least $t$.
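Spelled out, the counting behind this is a simple contradiction argument (a sketch, under the assumption that the one-round speedup can be applied to every problem in the chain):

$$
\text{complexity}(\Pi_i) \;\le\; T(n,\Delta) - i \qquad \text{for every } i \ge 1.
$$

In particular, if $T(n,\Delta) \le t$, then $\Pi_{T(n,\Delta)}$ would be solvable in zero rounds. So if we can show that none of $\Pi_1, \dots, \Pi_t$ (nor $\Pi$ itself) admits a zero-round algorithm, then $T(n,\Delta) > t$, which gives the claimed lower bound of $t$.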

The main idea is simple, but it is not easy to implement. To overcome the difficulties of implementing this idea, several techniques are proposed in [1].

Let us start from the simplest case, i.e., from $\Pi$ to $\Pi_1$.

Strictly speaking, there is no particular relationship between $\Pi$ and $\Pi_1$, because an algorithm can do anything on an arbitrary graph. To make the reduction work, we require the input graph to satisfy a property called $t$-independence. To this end, we need the girth of the input graph to be at least $2t+2$.

t-independence

The strict definition is boring. Intuitively speaking, a graph $G$ is $t$-independent if, for any two edges $e_1 = (u_1,v_1), e_2 = (u_2,v_2) \in E(G)$, the extension sets of $e_1$ are independent of the extension sets of $e_2$, where the extension of an edge $e=(u,v)$ towards $v$ is $\text{EXT}^t_v(e) = N^t(v)\setminus N^t(e)$ with $N^t(e) = N^t(v)\cap N^t(u)$.
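To make the extension sets concrete, here is a minimal sketch (my own illustration, not code from [1]); the BFS helper, the function names, and the choice of a $6$-cycle with $t=2$ are all assumptions made for the example. It computes $N^t(\cdot)$ by breadth-first search and then the extension of an edge towards one of its endpoints.

```python
# Minimal sketch: extension sets of an edge, following the informal
# definition above. Names and the example graph are illustrative only.
from collections import deque

def ball(adj, source, t):
    """N^t(source): all nodes within distance t of source."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        x = queue.popleft()
        if dist[x] == t:
            continue
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return set(dist)

def extension(adj, u, v, t):
    r"""EXT^t_v(e) = N^t(v) \ N^t(e) for e = (u, v), with N^t(e) = N^t(u) ∩ N^t(v)."""
    Nu, Nv = ball(adj, u, t), ball(adj, v, t)
    return Nv - (Nu & Nv)  # equivalently Nv - Nu

# Example: the 6-cycle, whose girth 6 equals 2t+2 for t = 2.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(extension(adj, 0, 1, 2))  # extension of edge (0,1) towards 1: {3}
print(extension(adj, 1, 0, 2))  # extension of edge (0,1) towards 0: {4}
```

The two prints show the part of the $2$-neighborhood that only one endpoint of the edge can see; note that the example's girth $6$ meets the requirement of at least $2t+2$ from above. Intuitively, on such high-girth graphs these pieces do not constrain each other, which is the independence the definition asks for.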

This definition already hints at the speedup theorem that follows. Fix a node $u\in V(G)$: no matter which edge $e$ incident to $u$ we choose, the extensions of these edges are independent of each other, which means that we can execute procedures on them in parallel!

The biggest headache in distributed computing is the conflicts caused by parallel and distributed computation. Thanks to this nice property, we can speed up our algorithms.

To be continued…

Reference

[1] Brandt, S., 2019. An automatic speedup theorem for distributed problems. In Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing (PODC), pp. 379–388.