Grocery on Distributed Algorithms

C1: Private Coins and Public Coins


This post is based on the paper [1].

Communication complexity is an important tool in theoretical computer science. This series of posts aims to help anyone reading it understand how communication complexity works.

Take-Home Message:

We can use \(O(\log n)\) bits of communication to set up a common random string!

Definition

Consider two parties, Alice and Bob. Alice gets an n-bit string \(x\in X\) and Bob gets an n-bit string \(y\in Y\). They play a collaborative (not adversarial) game whose goal is to output \(f(x, y)\), where \(f: X\times Y\to \{0,1\}\). We ignore local computation costs. Note that Alice and Bob do not know each other's input; they determine the output by exchanging messages.

Let us consider probabilistic protocols, in which the players may toss coins. We use \(C_P(x,y)\) to denote the expected number of bits exchanged by a probabilistic protocol \(P\) on input \((x,y)\). The complexity of a probabilistic protocol \(P\) is \(C(P) = \max_{(x,y)}C_P(x,y)\).

There are two kinds of randomized protocols: protocols with public coins (all players see one common random string) and protocols with private coins (each player tosses his/her own coins). For simplicity, we call the former COMMON (public) and the latter PRIVATE.
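To make the model concrete, here is a minimal sketch (a standard illustration, not from [1]) of a COMMON protocol for the equality function \(EQ(x,y)=1\) iff \(x=y\): with a common random string, \(k\) bits of communication give error at most \(2^{-k}\), whereas any deterministic protocol needs \(n\) bits.

```python
import random

# Sketch of a public-coin protocol for EQ(x, y) = 1 iff x == y.
# Both players see the same random string r. Alice sends the single
# parity bit <x, r> mod 2; Bob compares it with <y, r> mod 2.
# If x != y, the two parities differ with probability 1/2, so k
# repetitions give error at most 2^{-k} using only k bits of communication.

def parity(s: str, r: list) -> int:
    """Inner product of bit string s with random 0/1 vector r, mod 2."""
    return sum(int(b) & c for b, c in zip(s, r)) % 2

def public_coin_eq(x: str, y: str, k: int = 20) -> bool:
    assert len(x) == len(y)
    for _ in range(k):
        # Public coins: both players see the same r "for free".
        r = [random.randint(0, 1) for _ in range(len(x))]
        if parity(x, r) != parity(y, r):
            return False            # differing parities certify x != y
    return True                     # wrong only with probability <= 2^{-k}

print(public_coin_eq("1011", "1011"))  # True
print(public_coin_eq("1011", "1010"))  # False with probability >= 1 - 2^{-20}
```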

Question:

What is the connection between these two kinds of protocols?
We let

\[\mathcal{P}_\epsilon^{com}(f) = \{ P \in \textup{COMMON} \mid \forall (x,y) \in X\times Y:\ \mathbb{P}(P(x,y) \neq f(x,y)) \leq \epsilon \}\]

\[\mathcal{P}_\epsilon^{pri}(f) = \{ P \in \textup{PRIVATE} \mid \forall (x,y) \in X\times Y:\ \mathbb{P}(P(x,y) \neq f(x,y)) \leq \epsilon \}\]

We define \(C^{com}_\epsilon(f) = \min \{ C(P): P \in \mathcal{P}_\epsilon^{com}(f)\}\) and \(C^{pri}_\epsilon(f) = \min \{ C(P): P \in \mathcal{P}_\epsilon^{pri}(f)\}\).

That is to say, \(C^{com}_\epsilon(f)\) is the best complexity that a COMMON probabilistic protocol can achieve if its error probability is at most \(\epsilon\) (and similarly for \(C^{pri}_\epsilon(f)\)).

\(\textbf{Theorem 1}\). Let \(0 < \epsilon < 1/2\) and \(0<\delta \leq 1\). Then \(C^{pri}_{(1+\delta)\epsilon}(f) = O\left(C^{com}_\epsilon(f)+ \log \frac{n}{\epsilon \delta}\right)\).

Before showing the proof, let us look at what Theorem 1 means.

It says that the gap between private-coin and public-coin protocols is only \(O(\log n)\) bits. Amazing! Also, notice that it is very simple to transform a PUBLIC protocol that uses only \(O(\log n)\) random bits into a PRIVATE one: one player tosses \(O(\log n)\) private coins and sends the resulting string to the other, costing \(O(\log n)\) bits of communication. Both players then run the PUBLIC protocol, treating this shared string as the common randomness (see the sketch below).
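Here is a minimal sketch of this simulation. The function `run_public` and its signature are hypothetical placeholders standing for a public-coin protocol that consumes only `seed_bits` = \(O(\log n)\) random bits.

```python
import random

# Sketch: simulating a public-coin protocol with private coins.
# Alice draws the seed with her PRIVATE coins and sends it to Bob,
# after which both players can treat it as a common random string.

def simulate_with_private_coins(x, y, run_public, seed_bits):
    seed = random.getrandbits(seed_bits)  # Alice's private coin tosses
    # Alice -> Bob: seed_bits extra bits on top of the protocol's own cost.
    extra_cost = seed_bits
    return run_public(x, y, seed), extra_cost
```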

Therefore, to prove Theorem 1, we only need to prove that every PUBLIC protocol can be replaced by one that uses only \(O(\log \frac{n}{\epsilon \delta})\) random bits.

Formally, we need to prove the following lemma.

\(\textbf{Lemma 1}\). Let \(P\in \mathcal{P}_\epsilon^{com}(f)\) with \(0< \epsilon < 1/2\). Then for any \(0<\delta \leq 1\) there is a protocol \(P^* \in \mathcal{P}^{com}_{(1+\delta)\epsilon}(f)\) such that \(C(P^*) = O(C(P))\) and \(P^*\) uses only \(O(\log \frac{n}{\epsilon \delta})\) random bits.

\(\textbf{Proof}\) (sketch). Once the public random string is fixed, \(P\) becomes deterministic, so we may view \(P\) as a probability distribution over deterministic protocols. Pick \(\ell\) public random strings independently and uniformly at random; this gives a finite collection of deterministic protocols \(\{P_1,\ldots, P_\ell \}\), each defined by one fixed random string. By the Chernoff–Hoeffding bound, for any fixed input \((x,y)\), only an exponentially small fraction of such collections err on \((x,y)\) with probability greater than \((1+\delta)\epsilon\); a union bound over all \(2^{2n}\) inputs then shows that, for a suitable \(\ell\), some collection is good for every input simultaneously. The protocol \(P^*\) selects a protocol from \(\{P_1,\ldots, P_\ell\}\) uniformly at random (just as if the players selected the same public random string from a set of \(\ell\) strings and then ran the PUBLIC protocol with that common random string) and runs it, so \(P^*\) errs with probability at most \((1+\delta)\epsilon\). Selecting among \(\ell\) protocols uses \(O(\log \ell)\) random bits.
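To make the counting step concrete, here is the standard calculation (a back-of-the-envelope sketch; the constant \(c\) depends on the exact form of the Chernoff bound used). For a fixed input \((x,y)\), let \(Z_i = 1\) if \(P_i\) errs on \((x,y)\), so \(\mathbb{E}\big[\frac{1}{\ell}\sum_i Z_i\big] \leq \epsilon\), and

\[\mathbb{P}\left(\frac{1}{\ell}\sum_{i=1}^{\ell} Z_i > (1+\delta)\epsilon\right) \leq e^{-c\,\delta^2\epsilon\,\ell}.\]

The union bound over all \(2^{2n}\) inputs succeeds as soon as \(2^{2n}\cdot e^{-c\,\delta^2\epsilon\,\ell} < 1\), which holds for some \(\ell = O\big(\frac{n}{\delta^2 \epsilon}\big)\). Hence \(O(\log \ell) = O(\log \frac{n}{\epsilon \delta})\) random bits suffice to pick \(P_i\).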

  1. A little advice (here, a short shared random string) can be very helpful.

Reference

[1] Newman, Ilan. “Private vs. common random bits in communication complexity.” Information Processing Letters 39.2 (1991): 67–71.