Mechanism Design via Differential Privacy (Hoda Heidari)

27 Sep

The field of Mechanism Design studies the design and analysis of algorithms that are robust to strategic manipulation of their inputs by selfish agents. Algorithmic mechanism design mostly aims at “truthful” mechanisms, meaning mechanisms designed so that truthfully reporting her value is a dominant strategy for each user. Truthfulness simplifies the analysis, since users have no incentive to game the mechanism for a higher expected utility. In many settings, however, exact truthfulness is too strong a requirement, and insisting on it prevents the mechanism from having other useful properties.

Privacy concerns can be viewed as a form of strategic play: participants worry that the specific values of their inputs may result in noticeably different outcomes and utilities. While results from Mechanism Design can yield interesting privacy-preserving algorithms, this paper looks at the converse direction and shows that strong privacy guarantees, such as those given by differential privacy, can enrich the field of Mechanism Design.

In this paper we see that differential privacy yields a relaxation of the truthfulness assumption: the incentive to lie about a value is not zero, but it is tightly controlled. This approximate truthfulness also provides guarantees in the presence of collusion and under repeated applications of the mechanism. Moreover, it allows us to design mechanisms for problems that cannot be handled under strict truthfulness.

The paper first introduces the use of differential privacy as a solution concept for mechanism design. The authors show that differentially private mechanisms are approximately dominant-strategy truthful under arbitrary player utility functions. More precisely, any mechanism M satisfying \epsilon-differential privacy makes truth telling an (e^{\epsilon} - 1)-dominant strategy for every utility function mapping Range(M) to [0, 1].
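The (e^{\epsilon} - 1) bound follows almost immediately from the definition of differential privacy; a short sketch (stated for a discrete output range):

```latex
% Let M be \epsilon-differentially private, u : Range(M) \to [0,1] a utility
% function, D the truthful input, and D' a neighboring input in which one
% player misreports her value.
\mathbb{E}[u(M(D'))]
  = \sum_{r} u(r)\,\Pr[M(D') = r]
  \le \sum_{r} u(r)\, e^{\epsilon}\,\Pr[M(D) = r]
  = e^{\epsilon}\,\mathbb{E}[u(M(D))]
  \le \mathbb{E}[u(M(D))] + (e^{\epsilon} - 1),
```

where the last step uses u \le 1, so \mathbb{E}[u(M(D))] \le 1. Misreporting thus gains at most e^{\epsilon} - 1 in expected utility, no matter what the utility function is.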

They also show that such mechanisms are resilient to coalitions, i.e., players cannot collude to improve their utilities by much. The guarantees also compose across repeated runs: if the mechanism is run several times, players cannot lie in early rounds to obtain a noticeably better utility in later rounds. Note that these guarantees are approximate: incentives to deviate are present, though arbitrarily small, as controlled by the parameter \epsilon.

Secondly, the paper expands the applicability of differential privacy by presenting the exponential mechanism, which applies in much more general settings. The authors show that the mechanism is very unlikely to produce undesirable outputs, and they prove guarantees on the quality of its output, even for quality functions that are not robust to additive noise and for output spaces that do not permit perturbation at all.
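The exponential mechanism selects an outcome r with probability proportional to exp(\epsilon q(D, r)/2\Delta), where q is a quality score with sensitivity \Delta. A minimal sketch for a finite outcome set (the function and parameter names here are mine, not the paper's):

```python
import math
import random

def exponential_mechanism(outcomes, quality, epsilon, sensitivity=1.0):
    """Sample an outcome r with probability proportional to
    exp(epsilon * quality(r) / (2 * sensitivity))."""
    scores = [quality(o) for o in outcomes]
    best = max(scores)
    # Shift scores by their maximum before exponentiating, for numerical
    # stability; this rescales all weights equally and leaves the
    # sampling distribution unchanged.
    weights = [math.exp(epsilon * (s - best) / (2 * sensitivity))
               for s in scores]
    r = random.uniform(0, sum(weights))
    for outcome, w in zip(outcomes, weights):
        r -= w
        if r <= 0:
            return outcome
    return outcomes[-1]
```

For large \epsilon the mechanism concentrates on near-optimal outcomes; for small \epsilon it approaches uniform sampling, which is where the privacy comes from.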

Finally, they apply this general mechanism to several problems in unlimited supply auctions and pricing, namely single-commodity pricing, attribute auctions, and structurally constrained pricing problems. Although auctions permit offering different prices to different players, all of the results in this work use a single price and are therefore envy-free.

\textbf{Unlimited Supply Auctions.} The digital goods auction problem involves a set of n bidders. Each bidder has a private value for the good at hand, and the auctioneer has an unlimited supply of the good. The bidders submit bids in [0, 1], and the auctioneer determines who receives the good and at what price. Let OPT denote the optimal fixed-price revenue that the auctioneer could earn from the submitted bids. By applying the exponential mechanism with revenue as the quality score, the authors show that there is a mechanism for the digital goods auction problem giving \epsilon-differential privacy, with expected revenue at least OPT - 3\ln{(e + \epsilon^2 OPT n)}/\epsilon.
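Concretely, this instantiates the exponential mechanism with fixed-price revenue as the score. A sketch under my own assumptions (a uniform candidate price grid, which stands in for the paper's discretization; changing one bid in [0, 1] changes the revenue at any price by at most 1, so the score has sensitivity 1):

```python
import math
import random

def dp_fixed_price_auction(bids, epsilon, grid=100):
    """Choose a single sale price via the exponential mechanism,
    scoring each candidate price by the fixed-price revenue it earns.
    Bids lie in [0, 1], so revenue has sensitivity 1."""
    prices = [i / grid for i in range(1, grid + 1)]

    def revenue(p):
        # Everyone bidding at least p buys at price p.
        return p * sum(1 for b in bids if b >= p)

    scores = [revenue(p) for p in prices]
    best = max(scores)
    # Shift by the max score for numerical stability (distribution unchanged).
    weights = [math.exp(epsilon * (s - best) / 2) for s in scores]
    r = random.uniform(0, sum(weights))
    for p, w in zip(prices, weights):
        r -= w
        if r <= 0:
            return p
    return prices[-1]
```

The revenue theorem above says the expected revenue of this kind of mechanism trails OPT only by an additive logarithmic term.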

A comparable result, which applies machine learning to random samples of the users to arrive at a truthful mechanism, gives expected revenue OPT - O(\sqrt{OPT}).

\textbf{Attribute Auctions.} The digital goods attribute auction problem adds public, immutable attributes to each participant. The output of a mechanism for this problem describes a market segmentation of the attribute space, together with a price for each market segment. Let OPT_k denote the optimal revenue over segmentations into k markets, and SEG_k the number of segmentations of the specific bidders at hand into k markets. The result presented for this problem is that there is a mechanism for the digital goods attribute auction problem giving \epsilon-differential privacy, with expected revenue at least \max_{k}{(OPT_k - O(OPT_k^{2/3} + \ln{(e + \epsilon^{k+1} OPT_k SEG_k n^{k+1})}/\epsilon))}.

\textbf{Constrained Pricing Problems.} Here each bidder submits a bid for each of k different kinds of products, and the mechanism is constrained to sell at most one kind of product, at a fixed price. The authors show that there is a mechanism for this problem giving 2\epsilon-differential privacy, with expected revenue at least OPT - 3\ln{(e + \epsilon^2 OPT km)}/\epsilon, where m is the number of items sold in OPT.

