
C1: Multi-party Communication Model Applications (3): Welfare maximization with limited interaction

by Xianbin

This post is based on [1].

Problem

There are $n$ bidders and each bidder has a set of wanted items. The goal is to maximize the welfare. This problem can be abstracted as a (one-sided) maximum matching problem.

Model

The model considered here is the number-in-hand (NIH) multiparty communication model with a shared blackboard. In this model, there are $n$ players with inputs $(x_1,\ldots,x_n)$. Each player (bidder) is associated with a node $u \in L = [n]$ of a bipartite graph $G = (L, R, E)$, and the input of a player is the set of edges incident to its node. The goal is to compute a maximum matching.

The players communicate in some fixed number of rounds $r$, and in each round the players simultaneously write at most $\ell$ bits each on the shared blackboard, which is visible to all players. At the end of the $r$ rounds, a referee outputs the result, which is completely determined by the transcript (the content of the blackboard).
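As a rough illustration of the model, here is a minimal Python sketch of an $r$-round blackboard protocol, assuming messages are represented as bit strings; the names (`run_protocol`, `player_message`, `referee_output`) are placeholders of mine, not anything from [1].

```python
from typing import Callable, List

def run_protocol(inputs: List[object],
                 player_message: Callable[[int, object, str], str],
                 referee_output: Callable[[str], object],
                 rounds: int, ell: int):
    """Skeleton of an r-round NIH blackboard protocol (illustrative only).

    In each round every player sees only its own input and the current
    blackboard and writes at most `ell` bits; the referee's output depends
    only on the final transcript.
    """
    blackboard = ""
    for _ in range(rounds):
        # All players write simultaneously, based on the previous blackboard.
        messages = [player_message(i, x, blackboard) for i, x in enumerate(inputs)]
        assert all(len(m) <= ell for m in messages), "each message is at most ell bits"
        blackboard += "".join(messages)
    return referee_output(blackboard)
```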

The input is drawn from a distribution (this is a distributional lower bound)!

A hard distribution for $r$-round protocols

The distribution is defined recursively.

Initially, for $r = 0$, $G^0 = (L^0, R^0, E^0)$, where $L^0$ is a set of $n_0$ bidders and $R^0$ is a set of $m_0$ items, with $n_0 = m_0 = \ell^5$. The set of edges $E^0$ is obtained by selecting a uniformly random perfect matching (a random permutation from bidders to items). This determines the initial distribution $\mu_0$.

Notice that in $G^0$, there is a perfect matching!
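For concreteness, a small sketch of sampling from $\mu_0$ (the function name `sample_mu0` is my own, not from [1]):

```python
import random

def sample_mu0(ell):
    """Sample the base instance G^0: n0 = m0 = ell**5 bidders/items and a
    uniformly random perfect matching from bidders to items."""
    n0 = ell ** 5
    items = list(range(n0))
    random.shuffle(items)  # a uniformly random permutation
    # Bidder i is connected exactly to item items[i].
    return {bidder: [items[bidder]] for bidder in range(n0)}
```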

In the following, we only describe the case $r = 1$ (it is easy to see how to construct the graph inductively from this case).

$r = 1$, $G^1 = (L^1, R^1, E^1)$:

Vertices:

The set of bidders is $L^1 = \bigcup_{i=1}^{n_0^4} B_i$, where $\lvert B_i \rvert = n_0$, so $n_1 = n_0^5$.

The set of items is $R^1 = \bigcup_{j=1}^{n_0^4 + \ell \cdot n_0^2} T_j$, where $\lvert T_j \rvert = m_0$, so $m_1 = (n_0^4 + \ell \cdot n_0^2)\, m_0$.

Edges:

There are $n_0^4 + \ell \cdot n_0^2$ blocks of items $T_j$.

1) We randomly select $\ell n_0^2$ blocks of items and let them be the fooling blocks. For each $u \in B_i$, we select $d_0$ edges into each one of these fooling blocks $T_j$, where $d_0$ is the degree in $G^0$.

2) For the remaining $n_0^4$ indices (those not used by the fooling blocks), we pick a random bijection $\sigma$ from bidder blocks to these indices. We connect the block $B_i$ to the hidden block $T_{\sigma(i)}$ using the distribution $\mu_0$, i.e., the subgraph induced on $(B_i, T_{\sigma(i)})$ is a copy of $G^0$.

In summary, the hidden blocks of items and the corresponding bidder blocks $B_i$ are used to embed the previous-round instance (hiding it inside the larger graph). The additional edges of each vertex $u \in B_i$ go to the fooling blocks of items, which adds $\ell n_0^2 \cdot d_0$ to its degree. (A small sampling sketch is given after this construction.)

More specifically, there are $n_0^4$ bidder blocks $B_i$ and $n_0^4$ hidden blocks of items (on which the previous-round instances are embedded using the previous distribution $\mu_0$). Also, there are $\ell \cdot n_0^2$ fooling blocks of items, and each bidder randomly selects $d_0$ edges into each of the fooling blocks, using the product of the per-bidder marginals of $\mu_0$:

$\mu'_0 := (\mu_0 \mid u_1) \times \cdots \times (\mu_0 \mid u_{n_0})$

(Notice that each vertex has $d_0$ incident edges into each block of items it is connected to.)
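To make the construction concrete, here is a sketch that samples the edges of a single bidder block $B_i$ in the $r = 1$ instance, assuming $d_0 = 1$ (the degree in $G^0$); the function and variable names are mine, and building the full instance would repeat this for every block with a random bijection $\sigma$.

```python
import random

def sample_block_edges(ell, fooling_indices, hidden_index):
    """Edges of one bidder block B_i in the r = 1 construction (a sketch).

    Each of the n0 bidders in B_i gets its mu_0 edge into the hidden item
    block T_{sigma(i)}, plus d0 independent edges into every fooling block,
    drawn from the product of the per-bidder marginals of mu_0.
    Items inside each block are numbered 0..n0-1.
    """
    n0, d0 = ell ** 5, 1
    # The embedded previous-round instance mu_0: a random perfect matching
    # between B_i and the hidden block.
    matched = list(range(n0))
    random.shuffle(matched)
    edges = {}
    for u in range(n0):
        adj = [(hidden_index, matched[u])]             # edge into the hidden block
        for j in fooling_indices:                      # edges into each fooling block
            adj += [(j, random.randrange(n0)) for _ in range(d0)]
        edges[u] = adj
    return edges

# Usage for one block (indices chosen at random, purely for illustration):
ell = 2
n0 = ell ** 5
num_item_blocks = n0 ** 4 + ell * n0 ** 2
fooling = set(random.sample(range(num_item_blocks), ell * n0 ** 2))
hidden = next(j for j in range(num_item_blocks) if j not in fooling)
block = sample_block_edges(ell, fooling, hidden)
```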

Think about why we call such a distribution embedded!

Also, we find that

There is a perfect matching in each $G^r$ (every bidder can be matched to an item).

The intuition behind this distribution

The players cannot distinguish the fooling blocks from the hidden blocks.

Put more formally, we have

Notation:

For each block $B_i$ (at the general level of the construction, with $n_r$ playing the role of $n_0$), let $Z_i = \{ \sigma(i), a_1, \ldots, a_{\ell n_r^2} \}$ be the set of indices of the item blocks that $B_i$ is connected to (the hidden one and the $\ell n_r^2$ fooling ones).

Let $\tau_i: Z_i \to [\ell n_r^2+1]$ be the order-preserving bijection ($\tau_i^{-1}(1)$ is the smallest index in $Z_i$, and so on). We use $G^i_j$ to denote the subgraph induced on $(B_i, T_{\tau_i^{-1}(j)})$, and $G^u_j$ for the edges of a single player $u \in B_i$ into $T_{\tau_i^{-1}(j)}$.

$\textbf{Lemma}.$ For every player $u \in B_i$, $\mathrm{I}[\mathcal{T}_u : J_i \mid Z_i] = 0$,

where $\mathcal{T}_u = \{ G^u_1, \ldots, G^u_{\ell n_r^2+1} \}$ and $J_i = \tau_i(\sigma(i))$.

This lemma follows from the fact that the edges into the fooling blocks are drawn from the product of the per-player marginals of $\mu_0$: from the point of view of a single player $u$, its edges into the hidden block are distributed exactly like its edges into a fooling block.
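A one-line derivation of the lemma under this observation (standard mutual-information identities; this is my paraphrase, not the proof in [1]):

$\mathrm{I}[\mathcal{T}_u : J_i \mid Z_i] = \mathrm{H}[J_i \mid Z_i] - \mathrm{H}[J_i \mid \mathcal{T}_u, Z_i] = 0,$

since, conditioned on $Z_i$, the edges of $u$ into each of the $\ell n_r^2 + 1$ blocks are independent samples from the same per-player marginal of $\mu_0$, so $\mathcal{T}_u$ is independent of $J_i$ and $\mathrm{H}[J_i \mid \mathcal{T}_u, Z_i] = \mathrm{H}[J_i \mid Z_i]$.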

The final embedded instance

After constructing the $r$-round instance, any $r$-round protocol has to solve maximum matching on it. Since $n_r = n_{r-1}^5$, there are approximately $n^{4/5}$ independent $(r-1)$-round instances, i.e., hidden blocks of items, and approximately $n^{2/5}$ fooling blocks of items (here $n = n_r$).

Let us prove the lower bound

Main Idea

When $r = 0$, the referee (who sees only the empty blackboard) has no information. The only thing the referee can do is guess. The expected size of the guessed matching is then $1/n \cdot n = 1$: each of the $n$ guessed edges is correct with probability $1/n$. If we used the $0$-round instance with one round of communication, i.e., each player writing its incident edge on the shared blackboard, the referee would know the entire graph and output the perfect matching. That is the reason we need an embedded instance.
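A quick empirical check of this $1/n \cdot n = 1$ calculation (a throwaway simulation of mine, not part of [1]):

```python
import random

def expected_guess_size(n, trials=20000):
    """A referee with no information guesses a uniformly random perfect
    matching; on average about 1 of its n guessed edges is correct."""
    total = 0
    for _ in range(trials):
        truth = list(range(n)); random.shuffle(truth)   # hidden matching
        guess = list(range(n)); random.shuffle(guess)   # referee's blind guess
        total += sum(t == g for t, g in zip(truth, guess))
    return total / trials

print(expected_guess_size(100))  # prints a value close to 1.0
```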

In the $(r=1)$-round instance, the degree of each node is much larger than $\ell$, which means that a player cannot write its whole set of edges on the blackboard. Also, as mentioned before, a player cannot distinguish the fooling blocks from the hidden block (which hides the perfect matching). Therefore, for each level of the instance, essentially one round is needed just to locate the hidden blocks.

We cannot simply guess all the hidden blocks in one round, because a direct calculation shows that the probability of guessing them correctly is extremely small.

Now it is clear how to show the lower bound. First, we relate the first message to its influence on the distribution at the next level: by the construction, the first message reveals little information about the embedded second-level instance. Then, recursively, we bound the expected size of the matching obtained after each round.

$\textbf{Theorem}.$ The expected size of the matching output by an $r$-round protocol on the $r$-round instance is at most

$5 n_r \sum_{k=0}^{r-1} \frac{1}{\sqrt{n_k}} + 1.$

By the above theorem, the expected size of the matching obtained is at most

$5\, n_r^{1-1/5^{r+1}}.$
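A rough sketch of how this follows from the theorem, using $n_k = \ell^{5^{k+1}}$ (so $n_0 = \ell^5$ and $\ell = n_r^{1/5^{r+1}}$); constants are not tracked carefully here:

$5 n_r \sum_{k=0}^{r-1} \frac{1}{\sqrt{n_k}} + 1 \;\le\; O\!\left(\frac{n_r}{\sqrt{n_0}}\right) \;=\; O\!\left(n_r\, \ell^{-5/2}\right) \;\le\; 5\, n_r\, \ell^{-1} \;=\; 5\, n_r^{1-1/5^{r+1}},$

where the first inequality uses that the $n_k$ grow doubly exponentially, so the $k = 0$ term dominates the sum.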

Because the maximum matching has size $n_r$, any $r$-round protocol loses a factor of roughly $n_r^{1/5^{r+1}}$ compared to the optimum; hence, to obtain a (near-)maximum matching, $\Omega(\log \log n)$ rounds are required.
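For intuition about the $\Omega(\log \log n)$ bound, here is a tiny calculation (my own illustration) of the smallest $r$ for which the corollary's bound stops ruling out a constant-factor approximation:

```python
import math

def min_rounds_for_ratio(n, ratio=2.0):
    """Smallest r such that 5 * n**(1 - 1/5**(r+1)) >= n / ratio, i.e. the
    bound above no longer rules out a `ratio`-approximate matching."""
    r = 0
    # 5 * n^(1 - 1/5^(r+1)) < n/ratio  <=>  n^(1/5^(r+1)) > 5 * ratio
    while math.log(n) / 5 ** (r + 1) > math.log(5 * ratio):
        r += 1
    return r

for n in [10**3, 10**6, 10**30, 10**200]:
    print(n, min_rounds_for_ratio(n))  # 0, 1, 2, 3 -- grows like log(log n)
```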

For the formal proof, please refer to [1].

Reference

[1] Alon, N., Nisan, N., Raz, R., and Weinstein, O. Welfare maximization with limited interaction. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS), pp. 1499–1512. IEEE, 2015.