Grocery on Distributed Algorithms

C1: Distributional Disjointness

By Xianbin

This post is based on [1]. For a distribution \(\mu\), we use \(D^\mu_\epsilon(F)\) to denote the \(\epsilon\)-error distributional communication complexity of \(F\), that is, the minimum cost of a deterministic protocol \(\Pi\) such that \(P_{(X,Y)\sim\mu}[\Pi(X,Y)=F(X,Y)] \geq 1-\epsilon\). In other words, it is the cost of the best deterministic protocol that gives the correct answer for \(F\) on a subset of the input domain of \(\mu\)-measure at least \(1-\epsilon\).
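In symbols, the two phrasings above describe the same quantity:

\[D^\mu_\epsilon(F) \;=\; \min\Big\{\,\mathrm{cost}(\Pi)\;:\;\Pi \text{ deterministic},\ \Pr_{(X,Y)\sim\mu}\big[\Pi(X,Y)\neq F(X,Y)\big]\leq \epsilon\,\Big\}.\]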

We say that \(DISJ(X, Y) = 1\) if and only if \(X \cap Y = \emptyset\). We will prove the following theorem.

Theorem 1

\(\textbf{Theorem 1}\). There exists a distribution \(\mu\) such that \(D^\mu_\epsilon(DISJ_n)\geq \Omega(n)\) for any \(\epsilon < 0.01\).

\(\color{red}\textbf{Notice}\) that \(\mu\) is not a product distribution (i.e., a distribution of the form \(\mu(x,y) = \mu_X(x)\mu_Y(y)\)), because \(X\) and \(Y\) are correlated through the common special element \(i\). Therefore, Theorem 1 does not contradict the \(\tilde O(\sqrt{n})\) upper bound for DISJ under product distributions.

\(\textbf{Proof}\). We first give the distribution \(\mu\). Let \(\color{blue} n = 4t- 1\) and let \(\pi = (S_X, S_Y, \{i\})\) be a uniformly random partition of \([n]\) such that \(\lvert S_X\rvert = \lvert S_Y\rvert = 2t-1\) (and \(i\in[n]\)). Given \(\pi\), the inputs of the two players, Alice and Bob, are generated as follows. With probability \(3/4\), Alice's input \(X\) is a uniformly random \(t\)-subset of \(S_X\) and Bob's input \(Y\) is an independent uniformly random \(t\)-subset of \(S_Y\); in this case neither input contains \(i\). With probability \(1/4\), \(X\) is a uniformly random \(t\)-subset of \(S_X\cup\{i\}\) containing \(i\), and \(Y\) is an independent uniformly random \(t\)-subset of \(S_Y\cup\{i\}\) containing \(i\). For simplicity, we use \(X_0\) to denote an input \(X\) such that \(i \not\in X\) and \(X_1\) to denote an input \(X\) such that \(i\in X\). Similarly, we define \(Y_0\) and \(Y_1\).
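For concreteness, here is a minimal Python sketch of the sampling procedure just described; the function name `sample_mu` and the use of the standard `random` module are only for illustration.

```python
import random

def sample_mu(t):
    """Sample (X, Y, pi) from the hard distribution mu with n = 4t - 1."""
    n = 4 * t - 1
    universe = list(range(n))
    random.shuffle(universe)
    # Random partition pi = (S_X, S_Y, {i}) with |S_X| = |S_Y| = 2t - 1.
    S_X, S_Y, i = universe[: 2 * t - 1], universe[2 * t - 1 : 4 * t - 2], universe[-1]
    if random.random() < 0.75:
        # Type J (probability 3/4): neither input contains i, so X and Y are disjoint.
        X = set(random.sample(S_X, t))
        Y = set(random.sample(S_Y, t))
    else:
        # Type I (probability 1/4): both inputs contain i, so X and Y intersect exactly in {i}.
        X = {i} | set(random.sample(S_X, t - 1))
        Y = {i} | set(random.sample(S_Y, t - 1))
    return X, Y, (set(S_X), set(S_Y), i)

# Quick sanity check: any intersection is either empty or exactly {i}.
X, Y, (S_X, S_Y, i) = sample_mu(t=8)
assert X & Y in (set(), {i})
```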

In the above distribution \(\mu\), there are only two types of inputs: pairs \((X_1,Y_1)\) in which both players hold \(i\) (we use \(I\) to denote the set of such inputs), and pairs \((X_0,Y_0)\) in which neither player holds \(i\) (we use \(J\) to denote such inputs). Inputs in \(J\) are disjoint, while inputs in \(I\) intersect exactly in \(\{i\}\), and by construction \(\mu(J) = 3/4\) and \(\mu(I) = 1/4\).

The following lemma is the key to proving this theorem.

\(\textbf{Lemma 1.}\) Given any rectangle \(R\), we have \(\color{blue} \mu(R\cap I) \geq \alpha \mu(R\cap J) - 2^{-\delta n}\) where \(\alpha,\delta > 0\) are some constants.

Suppose \(D^\mu_\epsilon(DISJ_n) = k\). A deterministic protocol of cost \(k\) partitions \(\{0,1\}^n \times \{0,1\}^n\) into at most \(2^k\) disjoint rectangles, on each of which its output is constant. Let \(\color{blue} R_1,R_2,\ldots,R_\ell\) denote the \(\color{blue} \text{rectangles on which the protocol outputs } 1\) (i.e., announces "disjoint"), where \(\ell \leq 2^k\). Because we allow the protocol an \(\epsilon\) error under \(\mu\), we have \(\mu(\cup_{i=1}^\ell R_i \cap J) \geq \frac{3}{4} - \epsilon\) and \(\mu(\cup_{i=1}^\ell R_i \cap I) \leq \epsilon\). (Recall that \(\mu(J) = 3/4\) and \(\mu(I)=1/4\); every correctly answered input of \(J\) lies in some \(R_i\), while every input of \(I\) lying in some \(R_i\) is an error, since the correct answer on \(I\) is \(0\).)

Thus, we have

\[\epsilon \geq \mu\Big(\bigcup_{i=1}^\ell (R_i\cap I)\Big) = \sum_{i=1}^\ell \mu(R_i\cap I) \geq \sum_{i=1}^\ell \big(\alpha\, \mu(R_i \cap J) - 2^{-\delta n}\big) \geq \alpha\Big(\frac{3}{4}-\epsilon\Big) - 2^{k-\delta n}\]

So, \(k\geq \Omega(n)\).
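To spell out the last step: the chain above rearranges to

\[2^{k-\delta n} \;\geq\; \alpha\Big(\frac{3}{4}-\epsilon\Big) - \epsilon,\]

and as long as \(\epsilon\) is small enough relative to the constant \(\alpha\) from Lemma 1 that the right-hand side is a positive constant, taking logarithms gives \(k \geq \delta n - O(1) = \Omega(n)\).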

Proof of Lemma 1.

\(\color{blue}{\text{Step 1> Generate random input } (x_0,y_0), (x_1,y_1)}\)

Let \(\pi:=(S_X,S_Y,\{i\})\) be the random partition of \([n]\) described above. Recall that, by construction, once we condition on \(\pi\) and on the type of the input (\(I\) or \(J\)), \(X\) and \(Y\) are chosen independently.

Given \(\pi\), we define the following. Let \(R = \mathbb{X}\times \mathbb{Y}\) be a rectangle where \(\mathbb{X,Y}\subseteq 2^{[n]}\).

\[\begin{aligned} p_{X}(\pi)&:=\mathcal{P}[X\in \mathbb{X} \mid \pi] \\ p_{X_0}(\pi)&:= \mathcal{P}[X\in \mathbb{X} \mid \pi, i\not \in X] \\ p_{X_1}(\pi)&:=\mathcal{P}[X\in \mathbb{X} \mid \pi, i\in X]\\ p_{Y}(\pi)&:=\mathcal{P}[Y\in \mathbb{Y} \mid \pi] \\ p_{Y_0}(\pi)&:= \mathcal{P}[Y\in \mathbb{Y} \mid \pi, i\not \in Y]\\ p_{Y_1}(\pi)&:=\mathcal{P}[Y\in \mathbb{Y} \mid \pi, i\in Y] \end{aligned}\]

Then, since \(R\) is a rectangle and \(X\) and \(Y\) are independent conditioned on \(\pi\) and on the type of the input, we have

\[\begin{aligned} \mathcal{P}[(X_0,Y_0)\in \mathbb{X}\times \mathbb{Y}] &:= \mathcal{P}[(X,Y)\in \mathbb{X}\times \mathbb{Y} \mid (X,Y)\in J] = \mathbb{E}_\pi[p_{X_0}(\pi)p_{Y_0}(\pi)]\\ \mathcal{P}[(X_1,Y_1)\in \mathbb{X}\times \mathbb{Y}] &:= \mathcal{P}[(X,Y)\in \mathbb{X}\times \mathbb{Y} \mid (X,Y)\in I] = \mathbb{E}_\pi[p_{X_1}(\pi)p_{Y_1}(\pi)] \end{aligned}\]

Now, let us rewrite the two quantities appearing in Lemma 1 in terms of these probabilities.

\[\begin{aligned} \mu(I\cap R) = \frac{1}{4}\mathbb{E}_\pi[p_{X_1}(\pi)p_{Y_1}(\pi)]\\ \mu(J\cap R) = \frac{3}{4}\mathbb{E}_\pi[p_{X_0}(\pi)p_{Y_0}(\pi)] \end{aligned}\]
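To see where the factors \(1/4\) and \(3/4\) come from, condition on the partition and on the type of the input; for instance, for \(I\):

\[\mu(I\cap R) \;=\; \Pr[(X,Y)\in I]\cdot \mathbb{E}_\pi\big[\mathcal{P}[X\in\mathbb{X}\mid \pi, i\in X]\,\mathcal{P}[Y\in\mathbb{Y}\mid \pi, i\in Y]\big] \;=\; \frac{1}{4}\,\mathbb{E}_\pi[p_{X_1}(\pi)p_{Y_1}(\pi)],\]

and the identity for \(J\) is obtained in the same way with the factor \(3/4\) and the pair \((p_{X_0},p_{Y_0})\).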

To prove Lemma 1, we need to relate \(\mathbb{E}_\pi[p_{X_1}(\pi)p_{Y_1}(\pi)]\) to \(\mathbb{E}_\pi[p_{X_0}(\pi)p_{Y_0}(\pi)]\). Intuitively speaking, the two quantities should be close, and the connection is not hard to find.

\(\color{blue}{\text{Step 2> Define Bad Events}}\)

To build the connection, we define bad events as follows.

\(\textbf{Bad Event}\). We say a partition \(\pi=(S_X, S_Y,\{i\})\) is \(X\)-bad if

\[\color{blue} \begin{aligned} p_{X_1}(\pi) < \frac{1}{3}p_{X_0}(\pi) - 2^{-\delta n} \text{ (1)} \end{aligned}\]

where \(\delta>0\) is a small constant to be fixed later. We define \(Y\)-bad symmetrically (with \(\mathbb{Y}\), \(p_{Y_0}\), \(p_{Y_1}\) in place of \(\mathbb{X}\), \(p_{X_0}\), \(p_{X_1}\)), and we say that \(\pi\) is Bad if it is \(X\)-bad or \(Y\)-bad.

\(\textbf{Claim 1}\). Let \(Bad(\pi)\) be the indicator of the event \(\pi=(S_X,S_Y,\{i\})\) being Bad. We have \(\color{blue} \mathbb{E}_\pi[p_{X_0}(\pi)p_{Y_0}(\pi)Bad(\pi)] \leq \frac{4}{5} \mathbb{E}_\pi[p_{X_0}(\pi)p_{Y_0}(\pi)] \text{ (2)}\)

Now, using Inequality (2) together with the fact that (1) (and its \(Y\)-analogue) fails whenever \(Bad(\pi)=0\), we have

\[\begin{aligned} \mathcal{P}[(X_1,Y_1)\in R] &= \mathbb{E}_{\pi}[p_{X_1}(\pi) p_{Y_1}(\pi)] \\ &\geq \mathbb{E}_{\pi}[p_{X_1}(\pi) p_{Y_1}(\pi)(1-Bad(\pi))]\\ &\geq \mathbb{E}_{\pi}\Big[\Big(\frac{p_{X_0}(\pi)}{3}-2^{-\delta n}\Big) \Big(\frac{p_{Y_0}(\pi)}{3}-2^{-\delta n}\Big)(1-Bad(\pi))\Big]\\ &\geq \alpha' \,\mathbb{E}_\pi[p_{X_0}(\pi)p_{Y_0}(\pi)]-2^{-\Omega(n)} \\ &= \alpha'\,\mathcal{P}[(X_0,Y_0)\in R] - 2^{-\Omega(n)} \end{aligned}\]
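The last inequality deserves one more sentence. On a good partition (i.e., when \(Bad(\pi)=0\)), either both \(p_{X_0}(\pi)\) and \(p_{Y_0}(\pi)\) are at least \(3\cdot 2^{-\delta n}\), in which case both bracketed factors are nonnegative and

\[\Big(\frac{p_{X_0}}{3}-2^{-\delta n}\Big)\Big(\frac{p_{Y_0}}{3}-2^{-\delta n}\Big)\;\geq\; \frac{p_{X_0}p_{Y_0}}{9}-2^{-\delta n},\]

or one of them is smaller than \(3\cdot 2^{-\delta n}\), in which case \(\frac{p_{X_0}p_{Y_0}}{9}\leq 2^{-\delta n}\) and the bound \(p_{X_1}p_{Y_1}\geq \frac{p_{X_0}p_{Y_0}}{9}-2^{-\delta n}\) holds trivially. Combining this with Claim 1, which guarantees \(\mathbb{E}_\pi[p_{X_0}p_{Y_0}(1-Bad(\pi))]\geq \frac{1}{5}\mathbb{E}_\pi[p_{X_0}p_{Y_0}]\), one may take \(\alpha' = \frac{1}{45}\) with the constants used here.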

Therefore,

\[4\mu(I\cap R) \geq \frac{4\alpha'}{3}\mu(J\cap R)-2^{-\Omega(n)}\\\]

Thus,

\[\mu(I\cap R) \geq \frac{\alpha'}{3}\mu(J\cap R)-2^{-\Omega(n)}\\ =\alpha\,\mu(J\cap R)-2^{-\Omega(n)},\]

where \(\alpha := \alpha'/3\). This proves Lemma 1.

Appendix

Proof of Claim 1.

We will use the following fact: \(p_{X_0}(\pi) + p_{X_1}(\pi) = 2p(\pi)\), where \(p(\pi)\) denotes the probability that a uniformly random \(t\)-subset of \(S_X\cup\{i\}\) belongs to \(\mathbb{X}\). (A uniformly random \(t\)-subset of the \(2t\)-element set \(S_X\cup\{i\}\) contains \(i\) with probability exactly \(1/2\).) To prove Claim 1, we first prove the following Claim 1.1.

\(\textbf{Claim 1.1}\). For any set \(S_Y \subseteq [n]\) with \(\lvert S_Y \rvert = 2t - 1\), \(\mathcal{P}[\pi \text{ is } X \text{-bad} \mid S_Y] < 1/5\).

\(\textbf{Proof}\). We may assume \(p(\pi) \geq 2^{-\delta n}\); otherwise \(p_{X_0}(\pi) \leq 2p(\pi) < 2\cdot 2^{-\delta n}\), the right-hand side of (1) is negative, and \(\pi\) cannot be \(X\)-bad, so there is nothing to prove. Note that \(p(\pi)\) depends only on \(S_Y\), since \(S_X\cup\{i\} = [n]\setminus S_Y\).

Recall that the set of possible inputs for Alice under \(\pi\) is \(X = \{\mathcal{X}: \mathcal{X}\subseteq [n], \lvert \mathcal{X}\rvert=t, \mathcal{X}\subseteq S_X \cup {\color{red} \{i\}}\}\).

We define \(X' = \{\mathcal{X}: \mathcal{X}\subseteq [n], \lvert \mathcal{X}\rvert=t, \mathcal{X}\subseteq S_X \cup {\color{red}\emptyset} \}\), i.e., the possible inputs that avoid \(i\).

Let \(s\) be a uniformly random element of \(\mathbb{X}\cap X\), i.e., a uniformly random possible input of Alice that lies in the rectangle's row set. Since \(\lvert X'\rvert = \binom{2t-1}{t} = \frac{1}{2}\binom{2t}{t} = \frac{1}{2}\lvert X\rvert\) and \(p(\pi) = \frac{\lvert \mathbb{X}\cap X\rvert}{\binom{2t}{t}}\), we have

\[p_{X_0}(\pi) =\mathcal{P}[ X\in \mathbb{X} \mid \pi, i\not \in X] =\frac{\lvert \mathbb{X}\cap X' \rvert}{\binom{2t-1}{t}} = 2\cdot\frac{\lvert \mathbb{X}\cap X' \rvert}{\lvert \mathbb{X}\cap X \rvert}\cdot \frac{\lvert \mathbb{X}\cap X\rvert}{\binom{2t}{t}} =2p(\pi)\,\mathcal{P} [i\not \in s].\]

Similarly, we get that

\[p_{X_1}(\pi) = 2p(\pi)\mathcal{P} [i \in s]\]
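Both identities boil down to the fact that exactly half of the \(t\)-subsets of the \(2t\)-element set \(S_X\cup\{i\}\) contain \(i\):

\[\binom{2t-1}{t-1} \;=\; \binom{2t-1}{t} \;=\; \frac{1}{2}\binom{2t}{t},\]

so conditioning on \(i\in X\) (or on \(i\notin X\)) doubles the relative weight of each remaining set, which is where the factor \(2p(\pi)\) comes from.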

Combining with the definition of \(X\)-bad, we get that

\[\mathcal{P}[i\in s] < \frac{1}{3}\mathcal{P}[i\not \in s] - \frac{2^{-\delta n}}{2p(\pi)}\]

In particular \(\mathcal{P}[i\in s] < \frac{1}{3}\big(1-\mathcal{P}[i\in s]\big)\), so \(\color{blue} \mathcal{P}(i\in s) < 1/4. \text{ (3) }\)

Now let \(\{i_1,\ldots,i_{2t}\}\) denote \([n]\setminus S_Y\) and let \(\textbf{s}=(s_1,\ldots,s_{2t})\) be the indicator vector of \(s\), i.e., \(s_j = 1\) if and only if \(i_j \in s\).

We prove the claim by contradiction. Suppose that, given \(S_Y\), the partition \(\pi\) is \(X\)-bad with probability at least \(1/5\). Since \(\pi\) is determined by which of the \(2t\) elements \(i_1,\ldots,i_{2t}\) plays the role of \(i\), at least a \(1/5\) fraction of these choices make \(\pi\) \(X\)-bad, and for each such choice \(i=i_j\), Inequality (3) gives \(\mathcal{P}[s_j = 1] < 1/4\). For the remaining coordinates we only use the trivial bound of one bit of entropy.

Consider the random vector \(\textbf{s}\); we bound its entropy from above as follows.

\[\textbf{H(s)}=\textbf{H}(s_1,\ldots,s_{2t})\leq \sum_{j=1}^{2t} \textbf{H}(s_j) \\\leq \frac{2t}{5}\textbf{H}(1/4) + \frac{8t}{5}\textbf{H}(1/2) < 1.93 t\]

where \(\textbf{H}(q)\) denotes the entropy of a Bernoulli(\(q\)) bit.

On the other hand,

\[\textbf{H(s)} = \log \lvert \mathbb{X}\cap X\rvert = \log\big( p(\pi) \tbinom{2t}{t}\big) \geq \log \big( 2^{-\delta n} \tbinom{2t}{t}\big) \geq \log \Big( \frac{2^{2t}}{c\sqrt{t}}\, 2^{-\delta n}\Big) = t(2-4\delta - o(1)),\]

where we used \(p(\pi)\geq 2^{-\delta n}\), the standard estimate \(\binom{2t}{t}\geq \frac{2^{2t}}{c\sqrt t}\) for a constant \(c\), and \(n = 4t-1\).

By choosing \(\delta\) to be a sufficiently small constant, this lower bound exceeds the upper bound \(1.93t\), which is a contradiction. So Claim 1.1 is proved.
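Here is a small, self-contained Python check of the two numerical bounds used above; the helper `h2` is defined inline and nothing here depends on any library beyond `math`.

```python
from math import log2

def h2(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

# Upper bound: at least a 1/5 fraction of the 2t coordinates satisfy P[s_j = 1] < 1/4,
# the remaining coordinates contribute at most one bit each.
upper_per_t = 2 * ((1 / 5) * h2(1 / 4) + (4 / 5) * 1.0)  # H(s) <= upper_per_t * t

# Lower bound: H(s) = log |X ∩ row family| >= log(2^{-delta n} C(2t, t)) ~ (2 - 4*delta) t.
delta = 0.01
lower_per_t = 2 - 4 * delta                              # up to o(1) terms

print(upper_per_t)  # about 1.925, indeed below 1.93
print(lower_per_t)  # 1.96, above 1.93, so the two bounds contradict each other
```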

Finally, we can prove Claim 1.

Let \(Bad_X(\pi)\) and \(Bad_Y(\pi)\) be the indicators of \(\pi\) being \(X\)-bad and \(Y\)-bad, respectively. Since \(Bad(\pi) \leq Bad_X(\pi) + Bad_Y(\pi)\), by symmetry it suffices to prove \(\mathbb{E}_\pi[p_{X_0}(\pi)p_{Y_0}(\pi)Bad_X(\pi)] \leq \frac{2}{5}\mathbb{E}_\pi[p_{X_0}(\pi)p_{Y_0}(\pi)]\).

When we fix \(S_Y\), we also fix \(S_X\cup \{i\} = [n]\setminus S_Y\). Therefore we fix \(p(\pi)\) (because \(\color{blue} S_X\cup \{i\}\) is fixed) and \(p_{Y_0}(\pi)\) (because it depends only on \(\color{blue} S_Y\): conditioned on \(i\notin Y\), the input \(Y\) is a uniform \(t\)-subset of \(S_Y\)).

It therefore suffices to prove the bound conditioned on each fixed \(S_Y\). Write \(c := p_{Y_0}(\pi)\) and \(r := p(\pi)\), both of which are determined by \(S_Y\). Then

\[\begin{aligned} \mathbb{E}_\pi[p_{X_0}(\pi)p_{Y_0}(\pi)Bad_X(\pi) \mid S_Y] &=c\cdot \mathbb{E}_\pi[p_{X_0}(\pi)Bad_X(\pi) \mid S_Y]\\ &\leq 2c\,\mathbb{E}_\pi[p(\pi)Bad_X(\pi) \mid S_Y]\\ &=2cr\,\mathbb{E}_\pi[Bad_X(\pi)\mid S_Y]\\ &=2cr\,\mathcal{P}[Bad_X(\pi)=1\mid S_Y]\\ &\leq \frac{2cr}{5}\\ &=\frac{2}{5}c\, \mathbb{E}_\pi[p_{X_0}(\pi) \mid S_Y]\\ &=\frac{2}{5}\mathbb{E}_\pi[p_{X_0}(\pi)p_{Y_0}(\pi) \mid S_Y] \end{aligned}\]

Here the first inequality uses \(p_{X_0}(\pi)\leq 2p(\pi)\), the second uses Claim 1.1, and the second-to-last equality uses \(\mathbb{E}_\pi[p_{X_0}(\pi)\mid S_Y] = 2r\cdot \frac{1}{2t}\sum_{j=1}^{2t}\mathcal{P}[i_j\notin s] = 2r\big(1-\tfrac{t}{2t}\big) = r\), since \(\lvert s\rvert = t\) and \(i\) is uniform over \(\{i_1,\ldots,i_{2t}\}\) given \(S_Y\).
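Unfolding the remaining steps: averaging the conditional bound over \(S_Y\) and then adding the symmetric bound for \(Bad_Y(\pi)\) gives exactly Inequality (2):

\[\mathbb{E}_\pi[p_{X_0}p_{Y_0}Bad_X] \;=\; \mathbb{E}_{S_Y}\Big[\mathbb{E}_\pi[p_{X_0}p_{Y_0}Bad_X \mid S_Y]\Big] \;\leq\; \frac{2}{5}\,\mathbb{E}_\pi[p_{X_0}p_{Y_0}],\]

\[\mathbb{E}_\pi[p_{X_0}p_{Y_0}Bad] \;\leq\; \mathbb{E}_\pi[p_{X_0}p_{Y_0}(Bad_X+Bad_Y)] \;\leq\; \frac{2}{5}\,\mathbb{E}_\pi[p_{X_0}p_{Y_0}] + \frac{2}{5}\,\mathbb{E}_\pi[p_{X_0}p_{Y_0}] \;=\; \frac{4}{5}\,\mathbb{E}_\pi[p_{X_0}p_{Y_0}].\]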

So, Claim 1 is proved, which completes the proof of Lemma 1 and of Theorem 1.

References

[1]. Razborov, Alexander A. “On the distributional complexity of disjointness.” International Colloquium on Automata, Languages, and Programming. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990.

[2]. Kushilevitz, Eyal, and Noam Nisan. Communication Complexity. Cambridge University Press, 1996.