Grocery on Distributed Algorithms

C1: Compress Interactive Communication (1)


It’s much more important today to be able to become an expert in a brand-new field in nine to twelve months than to have studied the right thing a long time ago. — Eric Jorgenson.

This post is based on [1, 2].

We start with an example.

Example

\(\textbf{Example E1}\). Alice holds a string \(X \in \{0,1\}^n\), and the goal is for Bob to obtain a uniformly random string \(X'\) that differs from \(X\) in exactly \(k\) coordinates. The naive protocol E1 has Alice sample such an \(X'\) herself and send all \(n\) bits.

\(\textbf{Question}\). Can we use fewer bits of communication to achieve the same goal?

\(\textbf{Answer}\). Yes. The following protocol simulates E1 with fewer than \(n\) bits of communication.

  1. Alice and Bob use public randomness to sample a set \(S \subseteq [n]\) of size \(k\) as well as a uniformly random \(n\)-bit string \(W\).
  2. Alice computes the number \(k'\) of coordinates \(i\in S\) such that \(W_i \neq X_i\).
  3. Alice samples a uniformly random subset \(T' \subseteq [n]\setminus S\) of size \(k-k'\) (note that \(k-k' > 0\) with high probability).
  4. Alice constructs the following string \(X'\): if \(i\in S\), then \(X'_i = W_i\); if \(i \in T'\), then \(X'_i = 1 - X_i\); otherwise, \(X'_i = X_i\). Alice sends to Bob the part of \(X'\) that he does not already know.

Since Bob knows \(S\) and \(W\), the \(k\) bits of \(X'\) on the coordinates in \(S\) need not be sent, so the number of bits sent by Alice is \(n-k\). Next, let us check correctness: \(X'\) differs from \(X\) in exactly \(k\) coordinates. For the \(k'\) coordinates \(i \in S\) with \(W_i \neq X_i\), we have \(X'_i = W_i = 1 - X_i\); for the \(k-k'\) coordinates \(i \in T'\), we have \(X'_i = 1 - X_i\); and \(X'_i = X_i\) everywhere else. Moreover, since \(S\) is a uniformly random \(k\)-subset, the set of flipped coordinates is uniformly distributed over all \(k\)-subsets of \([n]\), so \(X'\) has the same distribution as the string produced by E1.
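
To make the protocol concrete, here is a small Python sketch (only an illustration, not code from [1, 2]; the function name `compress_e1` and the parameters `n = 32`, `k = 8` are made up for the example):

```python
import random

def compress_e1(x, k, rng):
    """One run of the protocol: returns (bits_sent_by_alice, x_prime).

    x is Alice's n-bit string (a list of 0/1 values); the goal is for Bob
    to end up with a string x_prime that differs from x in exactly k
    coordinates.  The public randomness (S, W) is drawn here once and is
    visible to both parties.
    """
    n = len(x)
    # Step 1: public randomness, known to both Alice and Bob.
    S = set(rng.sample(range(n), k))
    W = [rng.randint(0, 1) for _ in range(n)]

    # Step 2: Alice counts the coordinates in S where W already disagrees with x.
    k_prime = sum(1 for i in S if W[i] != x[i])

    # Step 3: Alice picks k - k' additional coordinates outside S to flip.
    T_prime = set(rng.sample([i for i in range(n) if i not in S], k - k_prime))

    # Step 4: Alice builds x' and sends only the n - k coordinates outside S;
    # Bob fills in the coordinates inside S from the public string W himself.
    x_prime = [W[i] if i in S else (1 - x[i] if i in T_prime else x[i])
               for i in range(n)]
    bits_sent = [x_prime[i] for i in range(n) if i not in S]  # n - k bits
    return bits_sent, x_prime

rng = random.Random(0)
n, k = 32, 8
x = [rng.randint(0, 1) for _ in range(n)]
bits_sent, x_prime = compress_e1(x, k, rng)
print(len(bits_sent))                           # 24 = n - k bits communicated
print(sum(a != b for a, b in zip(x, x_prime)))  # Hamming distance is exactly 8 = k
```

Bob reconstructs \(X'\) by placing the \(n-k\) received bits on the coordinates outside \(S\) and filling the coordinates in \(S\) with the public string \(W\).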

From the above example, we can see that we must be very careful about how public randomness is accounted for when proving communication lower bounds.

Next, we introduce two important concepts.

Definitions

\(\textbf{Definition}[\text{External Information Cost}]\). Given a distribution \(\mu\) on \((X, Y)\) and a protocol \(\pi\), denote by \(\pi(X,Y)\) the public randomness and the messages exchanged during the protocol. The external information cost is \(\mathsf{IC^o_\mu (\pi)} := I(XY; \pi(X,Y))\).

For a particular function \(f\), the external information cost of \(f\) w.r.t. the distribution \(\mu\) is the infimum of \(\mathsf{IC^o_\mu (\pi)}\) over all protocols \(\pi\) that compute \(f\) on inputs drawn from \(\mu\) with probability at least \(2/3\).

Intuitively speaking, the external information cost is the minimum amount of information that an external observer learns about the inputs of the two parties, Alice and Bob, by watching a protocol that computes \(f\).
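
As a quick sanity check (a toy case, not an example from [1, 2]): suppose \(X\) is a uniform bit, \(Y\) is independent of \(X\), and the protocol \(\pi\) consists of Alice sending the single message \(M = X\) with no public randomness. Then

\[
\mathsf{IC^o_\mu (\pi)} = I(XY; \pi(X,Y)) = I(XY; X) = H(X) = 1 \text{ bit},
\]

so an external observer learns Alice's input completely.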

\(\textbf{Definition}[\text{Internal Information Cost}]\). Given a distribution \(\mu\) on \((X, Y)\) and a protocol \(\pi\), denote by \(\pi(X,Y)\) the public randomness and the messages exchanged during the protocol. The internal information cost is \(\mathsf{IC^i_\mu (\pi)} := I(X; \pi(X,Y) \mid Y) + I(Y; \pi(X,Y) \mid X)\).

It is the total information that each party learns about the other party's input through the messages and the public randomness.
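
Continuing the toy protocol above (Alice sends the single message \(M = X\), no public randomness), the internal cost depends on how correlated the inputs are under \(\mu\). If \(X\) and \(Y\) are independent uniform bits, then

\[
\mathsf{IC^i_\mu (\pi)} = I(X; M \mid Y) + I(Y; M \mid X) = 1 + 0 = 1 \text{ bit},
\]

whereas if \(\mu\) is supported on \(X = Y\), Bob already knows the message and

\[
\mathsf{IC^i_\mu (\pi)} = I(X; X \mid Y) + I(Y; X \mid X) = 0 + 0 = 0 \text{ bits},
\]

while the external cost stays \(1\) bit. This gap points in exactly the direction of the inequality in Lemma 1 below.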

Basic Lemmas

Let \(M\) be the messages sent during an execution of the protocol \(\pi\), and let \(R\) be the public randomness of the protocol.

\(\textbf{Lemma 1}\). The external information cost is at least the internal information cost, i.e., \(I(X; M\mid Y R) + I(Y; M\mid X R) \leq I(XY; M\mid R)\).
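
As a small numerical check of Lemma 1 on the toy distribution above (\(X = Y\) a uniform bit, the only message is \(M = X\), and no public randomness \(R\)), here is a hedged Python sketch; the helper functions are written just for this example:

```python
# Minimal numerical check of Lemma 1 on a toy protocol (assumptions: X = Y is
# a uniform bit, the single message is M = X, and there is no public
# randomness R).  This is only a sketch, not code from [1, 2].
from collections import defaultdict
from math import log2

def entropy(p):
    """Shannon entropy of a distribution given as {outcome: prob}."""
    return -sum(q * log2(q) for q in p.values() if q > 0)

def marginal(joint, idx):
    """Marginal of the coordinates in idx from a joint {tuple: prob}."""
    out = defaultdict(float)
    for outcome, q in joint.items():
        out[tuple(outcome[i] for i in idx)] += q
    return out

def mutual_information(joint, a_idx, b_idx):
    """I(A; B), where A and B are coordinate lists of the joint distribution."""
    return (entropy(marginal(joint, a_idx)) + entropy(marginal(joint, b_idx))
            - entropy(marginal(joint, a_idx + b_idx)))

def conditional_mi(joint, a_idx, b_idx, c_idx):
    """I(A; B | C) = I(A; BC) - I(A; C)."""
    return (mutual_information(joint, a_idx, b_idx + c_idx)
            - mutual_information(joint, a_idx, c_idx))

# Joint distribution of (X, Y, M): X = Y uniform, M = X.
joint = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
X, Y, M = [0], [1], [2]

internal = conditional_mi(joint, X, M, Y) + conditional_mi(joint, Y, M, X)
external = mutual_information(joint, X + Y, M)
print(f"internal IC = {internal:.3f} bits, external IC = {external:.3f} bits")
```

The output is `internal IC = 0.000 bits, external IC = 1.000 bits`, consistent with the inequality.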

Reference

[1]. Barak, Boaz, et al. “How to compress interactive communication.” Proceedings of the Forty-Second ACM Symposium on Theory of Computing, 2010.

[2]. Rao, Anup, and Amir Yehudayoff. Communication Complexity: and Applications. Cambridge University Press, 2020.


\(\color{red} \text{To be continued \ldots}\)