Grocery on Distributed Algorithms

C1: Multi-party Communication Model Applications (3): Welfare maximization with limited interaction

Reminder: This post contains 5370 words · 16 min read · by Xianbin

This post is based on [1].

Problem

There are \(n\) bidders, and each bidder has a set of wanted items. The goal is to maximize the welfare. This problem can be abstracted as a (one-sided) maximum matching problem.

Model

The model considered here is the number-in-hand (NIH) multiparty communication complexity model with a shared blackboard. In this model, there are \(n\) players with inputs \((x_1,\ldots,x_n)\). Each player (bidder) is associated with a node \(u\in L=[n]\) of some bipartite graph \(G = (L, R, E)\), and the input of a player is the set of edges incident on its node. The goal is to compute a maximum matching.

The players communicate in some fixed number of rounds \(r\). In each round, every player simultaneously writes at most a bounded number of bits on the shared blackboard, which is visible to all players. At the end of the \(r\) rounds, a referee outputs the result, which is completely determined by the transcript (the content of the blackboard).
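A minimal sketch of this communication model in Python. The function `run_protocol` and its signature are illustrative assumptions (not from [1]); each player is modeled as a closure over its own input that maps the current blackboard to a message:

```python
def run_protocol(players, referee, rounds, bandwidth):
    """Sketch of the shared-blackboard NIH model: in each round, every
    player simultaneously appends a message of at most `bandwidth` bits,
    computed from its own input and the current blackboard; the referee's
    output depends only on the final transcript."""
    board = []                        # the shared blackboard (transcript)
    for _ in range(rounds):
        # simultaneity: all messages are computed from the *same* board
        msgs = [p(board) for p in players]
        assert all(len(m) <= bandwidth for m in msgs)
        board.append(msgs)            # now visible to every player
    return referee(board)
```

The point of the lower bound is that for the hard instances below, no choice of `players`/`referee` with small `bandwidth` and few `rounds` can output a large matching.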

The input is a distribution!

A hard distribution for \(r\)-round protocols

The distribution is defined recursively.

Initially, \(r = 0\): \(G^0 = (L^0, R^0, E^0)\), where \(L^0\) is a set of \(n_0\) bidders and \(R^0\) is a set of \(m_0\) items, with \(n_0 = m_0 = \ell^5\). The set of edges \(E^0\) is obtained by selecting a uniformly random permutation from bidders to items. The above determines the initial distribution \(\mu_0\).

Notice that in \(G^0\), there is a perfect matching!
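Sampling from \(\mu_0\) can be sketched in a few lines (the function name is illustrative):

```python
import random

def sample_G0(n0):
    """Sample from mu_0: a uniformly random perfect matching between
    n0 bidders and n0 items (so every bidder has degree d_0 = 1)."""
    items = list(range(n0))
    random.shuffle(items)             # uniformly random permutation
    return {bidder: items[bidder] for bidder in range(n0)}
```

Since the edge set is a permutation, every sampled instance contains a perfect matching, as noted above.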

In the following, we only describe the case \(r=1\) (it is easy to see how to construct the graph inductively from this case).

\(r = 1\), \(G^1 = (L^1, R^1, E^1)\):

Vertices:

The set of bidders is \(L^1 = \bigcup_{i=1}^{n_0^4} B_i\), where \(\lvert B_i \rvert = n_0\), so \(n_1 = n_0^5\).

The set of items is \(R^1 = \bigcup_{j=1}^{n_0^4+\ell \cdot n_0^2}T_j\), where \(\lvert T_j \rvert = m_0\), so \(m_1 = (n_0^4+\ell \cdot n_0^2)m_0\).

Edges:

There are \(n_0^4+\ell \cdot n_0^2\) blocks of items \(T_j\).

1) We randomly select \(\ell n_0^2\) blocks of items and call them the fooling blocks. For each \(u\in B_i\), we select \(d_0\) edges into each fooling block \(T_j\), where \(d_0\) is the degree in \(G^0\).

2) For the remaining \(n_0^4\) item-block indices (those not used by fooling blocks), we take a random bijection \(\sigma\) from the bidder-block indices to these indices. We connect the entire block \(B_i\) to the hidden block \(T_{\sigma(i)}\) where \(\sigma: i \to j\), using the distribution \(\mu_0\), i.e., the induced subgraph on \((B_i, T_{\sigma(i)})\) is a fresh copy of \(G^0\).
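The two steps above can be sketched as follows. This is a simplified sketch with \(d_0 = 1\) (matching \(G^0\)); `sample_mu0` is a stand-in for drawing a fresh previous-round instance (a bidder-to-item map within one block), and all names are illustrative:

```python
import random

def build_r1_instance(n0, ell, sample_mu0):
    """Sketch of the 1-round hard instance: n0**4 bidder blocks B_i and
    n0**4 + ell*n0**2 item blocks T_j.
    - ell*n0**2 randomly chosen item blocks are fooling blocks: every
      bidder picks one random item in each of them, independently.
    - Each remaining (hidden) item block T_{sigma(i)} is wired to bidder
      block B_i by a fresh previous-round instance drawn from mu_0."""
    num_bidder_blocks = n0 ** 4
    num_item_blocks = n0 ** 4 + ell * n0 ** 2
    blocks = list(range(num_item_blocks))
    random.shuffle(blocks)
    fooling = set(blocks[: ell * n0 ** 2])      # fooling item blocks
    hidden = blocks[ell * n0 ** 2 :]            # one per bidder block
    sigma = {i: hidden[i] for i in range(num_bidder_blocks)}

    edges = []  # tuples (bidder block, bidder, item block, item)
    for i in range(num_bidder_blocks):
        # hidden block: embed a previous-round instance
        for u, v in sample_mu0(n0).items():
            edges.append((i, u, sigma[i], v))
        # fooling blocks: product marginals, d_0 = 1 edge per block
        for j in fooling:
            for u in range(n0):
                edges.append((i, u, j, random.randrange(n0)))
    return edges, sigma, fooling
```

Note how a bidder's edges into its hidden block and into any fooling block look alike locally; only the joint structure inside the hidden block differs.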

In summary, we use the hidden blocks of items and the corresponding blocks \(B_i\) to embed the previous-round instance (hiding the previous instance). The added edges of each vertex \(u \in B_i\) go to the fooling blocks of items, which increases its degree by \(\ell n_0^2 d_0\) (that is, \(d_0\) edges into each of the \(\ell n_0^2\) fooling blocks).

To put it more specifically: there are \(n_0^4\) bidder blocks \(B_i\) and \(n_0^4\) hidden blocks of items (carrying the previous-round instance drawn from the previous distribution \(\mu_0\)). Also, there are \(\ell \cdot n_0^2\) fooling blocks of items, and each bidder randomly selects \(d_0\) edges into each fooling block, using the product of the marginals of \(\mu_0\):

\[\mu'_0 := (\mu_0 \mid u_1) \times \cdots \times(\mu_0\mid u_{n_0})\]

(Notice that each vertex has \(d_0\) incident edges to a block of items.)

Think about why we call such a distribution embedded!

Also, we find that

There is a perfect matching in each \(G^r\) (each bidder has an item).

The intuition of this distribution

The players cannot distinguish the fooling blocks from the hidden blocks.

Put more formally, we have:

Notations:

For each block \(B_i\), let \(Z_i = \{ \sigma(i), a_1,\ldots,a_{\ell n_r^2}\}\) be the set of indices of the item blocks to which \(B_i\) is connected (so \(\lvert Z_i \rvert = \ell n_r^2 + 1\)).

Let \(\tau_i: Z_i \to [\ell n_r^2+1]\) be the bijection that maps each index in \(Z_i\) to its rank (\(\tau_i^{-1}(1)\) is the smallest index in \(Z_i\), and so on). For a bidder \(u \in B_i\), we use \(G^u_j\) to denote the set of edges of \(u\) into the block \(T_{\tau_i^{-1}(j)}\).

\(\textbf{Lemma}.\) \(\mathbb{I}[\mathcal{T}_u : J_i \mid Z_i] = 0\),

where \(\mathcal{T}_u = \{G^u_1,\ldots, G^u_{\ell n_r^2+1}\}\) and \(J_i = \tau_i(\sigma(i))\).

This lemma follows from the fact that the fooling edges of \(u\) are drawn from the product of the marginals of \(\mu_0\): a single bidder's edges have the same distribution in every candidate block, whether fooling or hidden.
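In entropy terms, a one-line sketch of why the lemma holds (under the single-bidder view described above; notation as in the lemma):

```latex
% Conditioned on Z_i, the tuple (G^u_1, \ldots, G^u_{\ell n_r^2 + 1})
% has the same joint law for every value of J_i, since the hidden
% position contributes the marginal (\mu_0 \mid u), identical to the
% fooling positions. Hence the tuple is independent of J_i, and
\mathbb{I}[\mathcal{T}_u : J_i \mid Z_i]
  = H[J_i \mid Z_i] - H[J_i \mid \mathcal{T}_u, Z_i] = 0 .
```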

The final embedded instance

After \(r\) levels of the construction, any \(r\)-round algorithm must solve maximum matching on this instance. There are approximately \(n^{4/5}\) independent \((r-1)\)-round instances, i.e., hidden blocks of items, and approximately \(n^{2/5}\) independent fooling blocks of items.

Let us prove the lower bound

Main Idea

When \(r=0\), the referee (reading the shared blackboard) has no information, so the only thing the referee can do is guess. The expected size of the obtained matching is then \(1/n \cdot n = 1\). On the plain instance \(G^0\), one round of communication, i.e., each node sending its incident edge to the shared blackboard, would let the referee reconstruct the entire graph and output the perfect matching. That is the reason we need an embedded instance.
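The \(1/n \cdot n = 1\) claim is easy to sanity-check empirically (a hypothetical helper, not from [1]): each bidder is matched correctly with probability \(1/n\), so by linearity of expectation the matching size is \(1\) regardless of \(n\).

```python
import random

def expected_guess_size(n, trials=20000):
    """Empirical check: a referee who guesses a uniformly random
    assignment against a uniformly random hidden perfect matching gets,
    in expectation, n * (1/n) = 1 correctly matched bidder."""
    total = 0
    for _ in range(trials):
        hidden = list(range(n))
        random.shuffle(hidden)        # the unknown perfect matching
        guess = list(range(n))
        random.shuffle(guess)         # the referee's blind guess
        total += sum(h == g for h, g in zip(hidden, guess))
    return total / trials
```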

In the \((r=1)\)-round instance, the degree of each node is much larger than \(\ell\), which means that no player can send its whole set of edges to the blackboard. Also, as mentioned before, a player cannot distinguish the fooling blocks from the hidden blocks (which hide the perfect matching). Therefore, for each level of the instance, one round is spent just determining the location of the hidden blocks.

We cannot hope to guess all hidden blocks in one round: a direct calculation shows that this probability is vanishingly small.
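A back-of-the-envelope version of that calculation (an illustrative estimate, not the computation in [1]): each bidder block has \(\ell n_0^2 + 1\) candidate item blocks, of which one is hidden, and the \(n_0^4\) guesses are independent.

```python
from math import log10

def log10_prob_guess_all(n0, ell):
    """log10 of the probability of guessing, for every one of the n0**4
    bidder blocks, which of its ell*n0**2 + 1 candidate item blocks is
    the hidden one, with uniform independent guesses per block."""
    return -(n0 ** 4) * log10(ell * n0 ** 2 + 1)
```

Already for \(n_0 = 10\) and \(\ell = 2\) this probability is below \(10^{-23000}\).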

Now it is clear how to show the lower bound. First, we relate the first-round message to the distribution of the next-level instance: by construction, the first message reveals little information about the embedded instance. Then, recursively, we bound the expected size of the matching obtained after each round.

\(\textbf{Theorem}\). The expected size of the matching output by an \(r\)-round protocol is at most

\[5n_r \sum_{k=0}^{r-1}\frac{1}{\sqrt{n_k}} + 1\]

By the above theorem, the expected size of the output matching is at most

\[5 n_r^{1-1/5^{r+1}}\]

Since the maximum matching has size \(n_r\), obtaining a (near-)maximum matching requires \(\Omega(\log \log n)\) rounds.

For the formal proof, please refer to [1].

Reference

[1] Alon, N., Nisan, N., Raz, R. and Weinstein, O. Welfare maximization with limited interaction. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS), pp. 1499–1512. IEEE.