Toolbox

C1: Compress Interactive Communication (1)

Reminder: This post contains 3163 words · 10 min read · by Xianbin

It’s much more important today to be able to become an expert in a brand-new field in nine to twelve months than to have studied the right thing a long time ago. — Eric Jorgenson.

This post is based on [1, 2].

We start with examples.

Examples

$\textbf{Question}$. Can we use fewer bits of communication to achieve the same goal?

$\textbf{Answer}$. There exists a protocol with fewer than $n$ bits of communication that simulates E1.

  1. Alice and Bob use public randomness to sample a set $S \subseteq [n]$ of size $k$, as well as a uniformly random $n$-bit string $W$.
  2. Alice computes the number $k'$ of coordinates $i \in S$ such that $W_i \neq X_i$.
  3. Alice samples a uniformly random subset $T' \subseteq [n] \setminus S$ of size $k - k'$ (note that $k - k' > 0$ with high probability).
  4. Alice constructs the following string $X'$: if $i \in S$, then $X'_i = W_i$; if $i \in T'$, then $X'_i = 1 - X_i$; otherwise $X'_i = X_i$. Alice sends to Bob the part of $X'$ that Bob does not already know.

Since Bob knows $S$ and $W$, the $k$ bits of $X'$ indexed by $S$ need not be sent, so the number of bits sent by Alice is $n - k$. Next, let us check correctness. We only need to focus on the $k$ selected coordinates. On the coordinates $i \in S$ with $W_i \neq X_i$ (there are $k'$ of them), we have $X'_i = W_i = 1 - X_i$, so $X'$ already differs from $X$ there without Alice sending anything. On the remaining $k - k'$ coordinates of $S$, $W$ agrees with $X$, and the missing flips are supplied instead by $T'$, where $X'_i = 1 - X_i$. Hence $X'$ differs from $X$ in exactly $k$ coordinates, and the string Bob reconstructs from $W$ on $S$ and the received bits is identical to Alice's $X'$.
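Below is a minimal Python sketch of the protocol above, assuming the interpretation just described (the function name `simulate_protocol` and the parameter choices are mine, not from the post). It checks that Bob reconstructs Alice's $X'$ from only $n - k$ transmitted bits and that $X'$ differs from $X$ in exactly $k$ coordinates.

```python
import random

# A minimal sketch of the protocol above (steps 1-4). The function name and
# parameter choices are illustrative assumptions, not from the original post.

def simulate_protocol(X, k, rng):
    """Run the protocol once on Alice's n-bit string X.

    Returns (bits_sent, X_prime_bob), where X_prime_bob is the string Bob
    reconstructs from the shared randomness and Alice's message.
    """
    n = len(X)

    # Step 1: public randomness shared by Alice and Bob.
    S = set(rng.sample(range(n), k))               # random set S of k coordinates
    W = [rng.randint(0, 1) for _ in range(n)]      # uniformly random n-bit string W

    # Step 2: Alice counts the coordinates i in S where W_i != X_i.
    k_prime = sum(1 for i in S if W[i] != X[i])

    # Step 3: Alice samples T' outside S with |T'| = k - k'.
    outside = [i for i in range(n) if i not in S]
    T = set(rng.sample(outside, k - k_prime))

    # Step 4: Alice builds X' and sends only the coordinates outside S (n - k bits).
    X_prime = [W[i] if i in S else (1 - X[i] if i in T else X[i]) for i in range(n)]
    message = {i: X_prime[i] for i in range(n) if i not in S}

    # Bob reconstructs X': he uses W on S and the received bits elsewhere.
    X_prime_bob = [W[i] if i in S else message[i] for i in range(n)]
    return len(message), X_prime_bob


rng = random.Random(0)
n, k = 20, 5
X = [rng.randint(0, 1) for _ in range(n)]
bits_sent, X_prime = simulate_protocol(X, k, rng)
print(bits_sent)                                   # n - k = 15 bits sent
print(sum(a != b for a, b in zip(X, X_prime)))     # Hamming distance exactly k = 5
```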

From the above example, we can see that we should be very careful about the randomness used when proving a lower bound.

Next, we introduce two important concepts.

Definitions

$\textbf{Definition}$ [External Information Cost]. Given a distribution $\mu$ on $X, Y$ and a protocol $\pi$, denote by $\pi(X,Y)$ the public randomness and the messages exchanged during the protocol. The external information cost is $\mathsf{IC}^{\circ}_{\mu}(\pi) := I(XY; \pi(X,Y))$.

For a particular function $f$, the external information cost of $f$ with respect to the distribution $\mu$ is the infimum of $\mathsf{IC}^{\circ}_{\mu}(\pi)$ over all protocols $\pi$ that compute $f$ on inputs drawn from $\mu$ with probability at least $2/3$.

Intuitively speaking, the external information cost is the minimum number of bits of information that an external observer learns about the inputs of the two parties, Alice and Bob, from any protocol that computes $f$.
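For instance (a standard toy example, not taken from [1, 2]): in the one-round protocol $\pi$ in which Alice simply sends her entire input $X$ to Bob, with no public randomness, we have $\pi(X,Y) = X$ and hence $\mathsf{IC}^{\circ}_{\mu}(\pi) = I(XY; X) = H(X)$; an outside observer learns Alice's whole input.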

$\textbf{Definition}$ [Internal Information Cost]. Given a distribution $\mu$ on $X, Y$ and a protocol $\pi$, denote by $\pi(X,Y)$ the public randomness and the messages exchanged during the protocol. The internal information cost is $\mathsf{IC}^{i}_{\mu}(\pi) := I(X; \pi(X,Y) \mid Y) + I(Y; \pi(X,Y) \mid X)$.

It is the information that each party learns about the other party's input through the messages and the public randomness.
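Continuing the toy example above, for the protocol in which Alice sends $X$, the internal cost is $\mathsf{IC}^{i}_{\mu}(\pi) = I(X; X \mid Y) + I(Y; X \mid X) = H(X \mid Y) + 0 = H(X \mid Y)$. When $Y$ is correlated with $X$ under $\mu$, this is strictly smaller than the external cost $H(X)$, consistent with Lemma 1 below.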

Basic Lemmas

Let $M$ be the messages exchanged in an execution of the protocol $\pi$ and let $R$ be the public randomness of the protocol.

$\textbf{Lemma 1}$. The external information cost is at least the internal information cost, i.e., $I(X; M \mid YR) + I(Y; M \mid XR) \leq I(XY; M \mid R)$.
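Since the proof is deferred, here is a small numerical sanity check of Lemma 1 (my own sketch; the helper `conditional_mi` and the toy two-round protocol below are illustrative assumptions, not from [1, 2]). It fixes a random input distribution $\mu$ and a deterministic two-round protocol with no public randomness, and verifies that the internal information of the transcript is at most the external information.

```python
import math
import random
from collections import defaultdict

# Toy numerical check of Lemma 1 (illustrative sketch, not from [1, 2]): fix a
# random input distribution mu on (X, Y) and a random deterministic two-round
# protocol (Alice sends a = f(X), Bob replies b = g(Y, a), no public randomness),
# then verify  I(X; M | Y) + I(Y; M | X) <= I(XY; M)  for the transcript M = (a, b).

def conditional_mi(joint, a_idx, b_idx, c_idx):
    """I(A; B | C), computed by brute force from a joint distribution.

    `joint` maps outcome tuples to probabilities; a_idx, b_idx, c_idx are tuples
    of coordinate positions selecting A, B and C inside each outcome tuple.
    """
    def marginal(idx):
        m = defaultdict(float)
        for t, p in joint.items():
            m[tuple(t[i] for i in idx)] += p
        return m

    p_abc = marginal(a_idx + b_idx + c_idx)
    p_ac = marginal(a_idx + c_idx)
    p_bc = marginal(b_idx + c_idx)
    p_c = marginal(c_idx)
    total = 0.0
    for t, p in joint.items():
        a = tuple(t[i] for i in a_idx)
        b = tuple(t[i] for i in b_idx)
        c = tuple(t[i] for i in c_idx)
        total += p * math.log2(p_abc[a + b + c] * p_c[c] / (p_ac[a + c] * p_bc[b + c]))
    return total


rng = random.Random(1)
X_vals, Y_vals = range(4), range(4)

# Random distribution mu and a random deterministic protocol (f, g).
weights = {(x, y): rng.random() for x in X_vals for y in Y_vals}
Z = sum(weights.values())
f = {x: rng.randint(0, 1) for x in X_vals}                        # Alice: a = f(x)
g = {(y, a): rng.randint(0, 1) for y in Y_vals for a in (0, 1)}   # Bob: b = g(y, a)

# Joint distribution over (x, y, m) with transcript m = (f(x), g(y, f(x))).
joint = {(x, y, (f[x], g[(y, f[x])])): w / Z for (x, y), w in weights.items()}

X_i, Y_i, M_i = (0,), (1,), (2,)
internal = conditional_mi(joint, X_i, M_i, Y_i) + conditional_mi(joint, Y_i, M_i, X_i)
external = conditional_mi(joint, (0, 1), M_i, ())
print(internal, external, internal <= external + 1e-9)            # expect True
```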

References

[1]. Barak, Boaz, et al. “How to compress interactive communication.” Proceedings of the forty-second ACM symposium on Theory of computing. 2010.

[2]. Rao, Anup, and Amir Yehudayoff. Communication Complexity: and Applications. Cambridge University Press, 2020.


$\color{red}{\text{To be continued}\ldots}$