Efficient approximately fair item allocation


When allocating objects among people with different preferences, two major goals are Pareto efficiency and fairness. Since the objects are indivisible, a fair allocation may not exist. For example, when there is a single house and 2 people, every allocation of the house will be unfair to one person. Therefore, several common approximations have been studied, such as maximin-share fairness, envy-freeness up to one item, proportionality up to one item, and equitability up to one item. The problem of efficient approximately fair item allocation is to find an allocation that is both Pareto-efficient and satisfies one of these fairness notions. The problem was first presented in 2016 and has attracted considerable attention since then.

Setting

There is a finite set of objects, denoted by M. There are n agents. Each agent i has a value-function Vi, that assigns a value to each subset of objects. The goal is to partition M into n subsets, X1,...,Xn, and give each subset Xi to agent i, such that the allocation is both Pareto-efficient and approximately fair. There are various notions of approximate fairness.

Efficient approximately envy-free allocation

An allocation is called envy-free if every agent believes that the value of his/her share is at least as high as that of any other agent. It is called envy-free up to one item (EF1) if, for every two agents i and j, there is some single item whose removal from the bundle of j would eliminate i's envy of j. Formally, for every two agents i and j with Xj nonempty, there exists an item g in Xj such that Vi(Xi) ≥ Vi(Xj \ {g}).
Some early algorithms could find an approximately fair allocation that satisfies a weak form of efficiency, but not full Pareto-efficiency (PE).
This raised the question of finding an allocation that is both PE and EF1.
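For additive valuations, the EF1 condition can be checked directly. A minimal sketch (the specific objects and values are hypothetical):

```python
# Hypothetical additive valuations: value[i][o] = agent i's value for object o.
value = [
    {"house": 8, "car": 4, "tv": 1},
    {"house": 6, "car": 5, "tv": 3},
]

def bundle_value(i, bundle):
    """Additive value of agent i for a set of objects."""
    return sum(value[i][o] for o in bundle)

def is_ef1(allocation):
    """EF1: for every pair (i, j), removing the object of j's bundle
    that i values most leaves i non-envious of j."""
    for i in range(len(allocation)):
        for j in range(len(allocation)):
            if i == j or not allocation[j]:
                continue
            best_removal = max(value[i][o] for o in allocation[j])
            if bundle_value(i, allocation[i]) < bundle_value(i, allocation[j]) - best_removal:
                return False
    return True

print(is_ef1([{"house"}, {"car", "tv"}]))          # → True
print(is_ef1([set(), {"house", "car", "tv"}]))     # → False
```

Because removing the single most-valued object is the best case for the envious agent, it suffices to test that one removal per pair.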

Maximum Nash Welfare rule

Caragiannis, Kurokawa, Moulin, Procaccia, Shah and Wang were the first to prove the existence of a PE+EF1 allocation. They proved that, when all agents have positive additive utilities, every allocation that maximizes the product of utilities (the maximum Nash welfare) is EF1. Since an allocation maximizing the product of utilities is clearly PE, the existence of a PE+EF1 allocation follows.
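On small instances, the max-product rule can be computed by exhaustive search, which makes the theorem easy to experiment with. A sketch with hypothetical values (real instances require the approximation algorithms discussed below):

```python
from itertools import product
from math import prod

values = [[3, 1, 2],
          [1, 4, 1]]  # values[i][o]: hypothetical additive valuations

def max_nash_welfare(values):
    """Exhaustively search all n^m assignments of objects to agents and
    return one maximizing the product of utilities (Nash welfare)."""
    n, m = len(values), len(values[0])
    best_nw, best_assignment = -1, None
    for assignment in product(range(n), repeat=m):
        utils = [sum(values[i][o] for o in range(m) if assignment[o] == i)
                 for i in range(n)]
        nw = prod(utils)
        if nw > best_nw:
            best_nw, best_assignment = nw, assignment
    return best_assignment, best_nw

assignment, nw = max_nash_welfare(values)
print(assignment, nw)  # → (0, 1, 0) 20: agent 0 gets objects 0 and 2
```

The search space is n^m, which is why the exact rule is only practical for tiny instances; the hardness results below explain why no efficient exact algorithm is known.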
While the max-product allocation has desirable properties, it cannot be computed in polynomial time: finding a max-product allocation is NP-hard and even APX-hard. This led to various works attempting to approximate the maximum product, with improving approximation factors:
  • Cole and Gkatzelis presented a 2.89-factor approximation.
  • Anari, Gharan, Saberi and Singh presented an e-factor approximation.
  • Cole, Devanur, Gkatelis, Jain, Mai, Vazirani and Yazdanbod presented a 2-factor approximation.
However, these approximations do not guarantee EF1. Some more recent algorithms guarantee both approximate max-product and fairness:
  • Barman, Krishnamurthy and Vaish present an algorithm that guarantees PE, (1+ε)-EF1 and a 1.45-approximation to the max product, in pseudopolynomial time.
  • Garg and McGlaughlin present an algorithm that guarantees PE, PROP1, and a 2-approximation to the max product, in strongly polynomial time.
  • Caragiannis, Gravin and Huang present an algorithm that guarantees EFX, PROP1, and a 2.9-approximation to the max product, by discarding some goods.
The max-product solution is particularly appealing when the valuations are binary:
  • Amanatidis, Birmpas, Filos-Ratsikas, Hollender and Voudouris prove that with binary valuations the max-product solution is not only EF1 but also EFX. This holds whenever the value of each item can take one of two values - not necessarily 0 or 1. With general additive valuations, max-product does not imply EFX but implies a natural generalization of it.
  • Halpern, Procaccia, Psomas and Shah prove that with binary valuations the max-product rule with lexicographic tie-breaking can be computed in polynomial time, and it is also group-strategyproof.

Non-additive valuations

If the agents' utilities are not additive, the max-product solution is not necessarily EF1; but if the agents' utilities are at least submodular, the max-product solution satisfies a weaker property called Marginal-Envy-Freeness except-1-item (MEF1): each agent i values his own bundle at least as much as the marginal utility he would gain from the bundle of any other agent j with one item removed. Formally, for every two agents i and j there exists an item g in Xj such that Vi(Xi) ≥ Vi(Xi ∪ (Xj \ {g})) − Vi(Xi).
Similar approximations have been found for more general utility functions:
  • Bei, Garg, Hoefer and Mehlhorn and Anari, Mai, Gharan and Vazirani study markets with multiple units of each item-kind, where the valuations are piecewise-linear concave. This means that the utility of a bundle with different item-kinds is the sum of utilities for each single item-kind, but for each item-kind, the valuation has decreasing marginal utilities. They give a 2-approximation to the max-product.
  • Ortega studies a multi-unit market with binary valuations. He proves that the egalitarian rule is Lorenz dominant, unique in utilities, and group-strategyproof.
  • Garg, Hoefer and Mehlhorn study budget-additive valuations - a subclass of submodular utilities. They give a constant-factor approximation to the max-product in time polynomial in the input size and 1/ε.
  • Benabbou, Chakraborty, Igarashi and Zick study submodular utilities with binary marginal gains. They prove that, with such valuations, both the max-product and the leximin allocations are EF1 and maximize the utilitarian welfare.
  • Babaioff, Ezra and Feige also study submodular utilities with binary marginal gains. They present a deterministic truthful mechanism that outputs a Lorenz dominant allocation, which is hence EFX and max-product.

Mixed valuations

Aleksandrov and Walsh show that, with "mixed manna" (items that may be goods for some agents and chores for others), maximizing the product of utilities does not ensure EF1. They also prove that an EFX3 allocation may not exist even with identical utilities. However, with tertiary utilities, EFX and PO allocations, or EFX3 and PO allocations, always exist; and with identical utilities, EFX and PO allocations always exist. For these cases there are polynomial-time algorithms.

Increasing price algorithm

Barman, Krishnamurthy and Vaish presented a pseudo-polynomial time algorithm for finding PE+EF1 allocations for positive additive valuations. They proved the following results.
  • The algorithm finds a PE+EF1 allocation in time polynomial in m, n and vmax, where m is the number of objects, n is the number of agents, and vmax is the largest value of an item.
  • The same algorithm provides a 1.45 approximation to the maximum Nash welfare.
  • The algorithm also proves the existence of an allocation that is both EF1 and fractionally Pareto optimal (fPO).

Basic concepts

Their algorithm is based on the notion of competitive equilibrium in a Fisher market. It uses the following concepts.
  • Approximate EF1 allocation: Given a constant e > 0, an allocation is e-EF1 if it satisfies the EF1 condition up to a multiplicative factor of (1+e). Formally, for every two agents i and j there exists an item g in Xj such that (1+e)·Vi(Xi) ≥ Vi(Xj \ {g}).
  • Price-vector: a vector p assigning a price to each object.
  • Bang-for-buck ratio: for an agent i and an object o, the ratio of the agent's value for the object to the object's price: vi,o / po.
  • Maximum bang-for-buck (MBB) set: for an agent i, the set of objects maximizing his bang-for-buck ratio.
  • MBB allocation: an allocation in which each agent receives only objects from his MBB set. In an MBB allocation, each agent maximizes his utility given his budget. The first welfare theorem implies that an MBB allocation is fractionally Pareto optimal (fPO).
  • Price-envy-free (pEF) allocation: an allocation X in which, for every two agents i and j, the price of Xi is at least as large as the price of Xj. This implies that all incomes are identical. Moreover, an allocation that is both MBB and pEF is envy-free, since each agent maximizes his utility given his income, and all other agents have the same income.
  • Price-envy-free-except-one (pEF1) allocation: an allocation in which, for every two agents i and j, the price of Xi is at least as large as the price of Xj without its most expensive object. This does not imply that the incomes are identical. However, an allocation that is both MBB and pEF1 is also EF1.
  • e-pEF1 allocation, for some constant e > 0: an allocation in which, for every two agents i and j, the product (1+e)·p(Xi) is at least as large as the price of Xj without its most expensive object. Note that the e-pEF1 condition is weaker when e is larger: a pEF1 allocation is e-pEF1 for every e > 0, but the opposite is not true. An allocation that is both MBB and e-pEF1 is also e-EF1.
  • Least-spender: given an allocation and a price-vector, the agent i for whom the price p(Xi) is smallest. Note that an allocation is e-pEF1 iff the e-pEF1 condition is satisfied for the least-spender.
  • MBB hierarchy of agent i: a tree structure built in the following way.
  • * Put agent i at the root.
  • * Connect to agent i all objects in his MBB set.
  • * Connect to each object o the agent owning it in X, if that agent is not yet in the tree.
  • * Connect to each agent at level 1 all objects in his MBB set that are not yet in the tree.
  • * Continue adding agents and objects alternately in a similar way: for each agent, add his MBB objects; for each object, add its owning agent.
  • Violator: an agent h who violates the pEF1 condition w.r.t. the least-spender i, i.e., the price of Xh, even after its most expensive object is removed, is higher than p(Xi). Similarly, an e-violator is an agent who violates the e-pEF1 condition w.r.t. the least-spender.
  • Path-violator: an agent h who appears in the MBB hierarchy of the least-spender i and partially violates the pEF1 condition w.r.t. i. In more detail: suppose there is a path, along the edges of the MBB hierarchy, from the least-spender i to an object o, and then an edge from object o to agent h. If p(Xh \ {o}) > p(Xi), then h is a path-violator. Note that every violator is a path-violator, but the opposite is not true. Similarly, if p(Xh \ {o}) > (1+e)·p(Xi), then h is an e-path-violator.
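The bang-for-buck and MBB concepts above translate directly into code. A sketch with hypothetical agents, values and prices:

```python
# Hypothetical valuations v[agent][object] and a price-vector p.
v = {"a": {"x": 6, "y": 3}, "b": {"x": 4, "y": 4}}
p = {"x": 3.0, "y": 1.5}

def bang_for_buck(agent, obj):
    """Ratio of the agent's value for the object to the object's price."""
    return v[agent][obj] / p[obj]

def mbb_set(agent):
    """Objects maximizing the agent's bang-for-buck ratio."""
    best = max(bang_for_buck(agent, o) for o in p)
    return {o for o in p if bang_for_buck(agent, o) == best}

def is_mbb_allocation(allocation):
    """An allocation is MBB if each agent holds only objects from his MBB set."""
    return all(bundle <= mbb_set(agent) for agent, bundle in allocation.items())

print(mbb_set("a"))  # both objects give agent "a" a ratio of 2
print(is_mbb_allocation({"a": {"x"}, "b": {"y"}}))  # → True
```

Here agent "b" has MBB set {y} (ratio 4/1.5 beats 4/3), so any MBB allocation gives "b" only object y.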

Algorithm

Given a parameter e, the algorithm aims to find an allocation that is both fPO and 3e-pEF1. It proceeds in several phases.
Phase 1: Construct an initial MBB allocation and price-vector.
  • One way to do it is to give each object o to the agent i who values it the most, and set the price of o to vi,o. This guarantees that, for each agent, the bang-for-buck of all objects in his bundle is exactly 1, and the bang-for-buck of all objects in other bundles is at most 1. Hence the allocation is MBB, hence it is also fPO.
  • If the allocation is 3e-pEF1, return it; otherwise proceed to Phase 2.
Phase 2: Remove price-envy within the MBB hierarchy:
  • Construct the MBB hierarchy of the least-spender, given the current allocation and price-vector.
  • For L = 1, 2, ...:
  • * For each agent h in level L of the tree:
  • **If h is an e-path-violator along a path i → ... → h' → o → h, then transfer object o from agent h to agent h'. Restart Phase 2.
  • Once there are no more e-path-violators:
  • * If the allocation is 3e-pEF1, return it; otherwise proceed to Phase 3.
Phase 3: Increase the prices. Increase the prices of all objects in the MBB hierarchy by the same multiplicative factor r, until one of the following three things happens:
  1. The identity of the least-spender changes. This may happen if some agent outside the hierarchy becomes the least-spender. In this case we restart at Phase 2.
  2. A new object gets added to the MBB hierarchy. This may happen since, when the prices of objects in the hierarchy increase by a factor r, the BB ratios of objects in the hierarchy decrease by r, while the BB ratios of objects outside the hierarchy remain the same. Therefore, when r is sufficiently large, some outside object will become MBB for some hierarchy agent. In this case, too, we restart at Phase 2.
  3. The current allocation X becomes 3e-pEF1. This may happen since, when the prices of objects in the hierarchy increase, the income of the least-spender increases, while the income of agents outside the hierarchy remains constant. Therefore, when r is sufficiently large, it is possible that the 3e-pEF1 condition is satisfied w.r.t. the least-spender. In this case, we return the allocation X and the new price-vector p.
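Phase 1 can be sketched directly: giving each object to an agent who values it most, at a price equal to that value, makes every held object have bang-for-buck exactly 1 for its owner, so the allocation is MBB. A minimal sketch (the input values are hypothetical):

```python
def initial_mbb_allocation(values):
    """Phase 1 sketch: give each object o to an agent who values it most,
    and set its price to that maximum value. values[i][o] are additive."""
    n, m = len(values), len(values[0])
    bundles = [set() for _ in range(n)]
    prices = [0] * m
    for o in range(m):
        winner = max(range(n), key=lambda i: values[i][o])
        bundles[winner].add(o)
        prices[o] = values[winner][o]
    return bundles, prices

bundles, prices = initial_mbb_allocation([[3, 1], [2, 5]])
print(bundles, prices)  # → [{0}, {1}] [3, 5]
```

With these prices, every object has bang-for-buck at most 1 for every agent, and exactly 1 for its owner, which is what makes the starting allocation MBB (and hence fPO).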

Proof of correctness

First, assume that the above algorithm is executed on an instance in which all values are powers of (1+e), for some fixed e > 0.
  • The first challenge is to prove that the algorithm terminates. It can be proved that, when all valuations are powers of (1+e), the algorithm terminates in pseudopolynomial time. Note that for each agent i and object o, the rounded value V′i,o is bounded between Vi,o and (1+e)·Vi,o.
  • By the previous paragraph, the resulting allocation is fPO and 3e-EF1 with respect to the rounded instance.
  • For every e sufficiently small, an allocation that is fPO for the rounded instance is PO for the original instance.
  • By combining the 3e-EF1 guarantee for the rounded instance with the bound on the rounding, we get that the returned allocation satisfies the following approximate-EF1 condition: for every two agents i and j there exists an item g in Xj such that (1+e)·(1+3e)·Vi(Xi) ≥ Vi(Xj \ {g}).
  • For sufficiently small e, the product (1+e)·(1+3e) is at most 1+7e. Therefore, an allocation that is 3e-EF1 for the rounded instance is 7e-EF1 for the original instance.
  • Since the original valuations are integers, when e is sufficiently small, a 7e-EF1 allocation is also EF1.
  • Thus, the resulting allocation is PO+EF1 for the original instance.
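The rounding step used in the proof (each value rounded up to a power of 1+e) can be sketched as follows; the function name is hypothetical:

```python
import math

def round_up_to_power(value, eps):
    """Round a positive value up to the nearest power of (1+eps).
    The result v2 satisfies value <= v2 <= (1+eps)*value."""
    exponent = math.ceil(math.log(value, 1 + eps))
    return (1 + eps) ** exponent

v2 = round_up_to_power(10, 0.1)
assert 10 <= v2 <= 1.1 * 10  # rounded value stays within a (1+eps) factor
```

This is exactly the bound used above: the rounded instance distorts every value by at most a multiplicative (1+e), which is why the fairness guarantee degrades only from 3e to 7e.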

Generalized Adjusted Winner

Aziz, Caragiannis, Igarashi and Walsh extended the condition of EF1 to mixed valuations. They presented a generalization of the adjusted winner procedure for finding a PO+EF1 allocation for two agents in polynomial time.

Efficient approximately proportional allocation

An allocation of objects is proportional if every agent values his/her share at least 1/n of the value of all items. It is called proportional up to one item (PROP1) if, for every agent i, there is some single item whose addition to the bundle of i would bring its value to at least 1/n of the total. Formally, for all i:
  • ∃ g ∉ Xi : Vi(Xi ∪ {g}) ≥ Vi(M)/n.
The PROP1 condition was introduced by Conitzer, Freeman and Shah in the context of fair public decision making. They proved that, in this case, a PE+PROP1 allocation always exists.
Since every EF1 allocation is PROP1, a PE+PROP1 allocation exists in indivisible item allocation too; the question is whether such allocations can be found by faster algorithms than the ones for PE+EF1.
Barman and Krishnamurthy presented a strongly polynomial-time algorithm for finding a PE+PROP1 allocation for goods.
Branzei and Sandomirskiy extended the condition of PROP1 to chores (items with negative utilities). Formally, for all i:
  • ∃ g ∈ Xi : Vi(Xi \ {g}) ≥ Vi(M)/n.
They presented an algorithm for finding a PE+PROP1 allocation of chores. The algorithm is strongly polynomial-time if either the number of objects or the number of agents is fixed.
Aziz, Caragiannis, Igarashi and Walsh extended the condition of PROP1 to mixed valuations. In this setting, an allocation is called PROP1 if, for each agent i, if we remove one negative item from i's bundle, or add one positive item to i's bundle, then i's utility is at least 1/n of the total. Their Generalized Adjusted Winner algorithm finds a PE+EF1 allocation for two agents; such an allocation is also PROP1.
Aziz, Moulin and Sandomirskiy presented a strongly polynomial-time algorithm for finding an allocation that is fractionally PE and PROP1, with general mixed valuations, even if the number of agents or objects is not fixed, and even if the agents have different entitlements.
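Under additive valuations, the PROP1 condition for goods can be checked directly. A sketch (the values are hypothetical):

```python
def is_prop1(values, allocation):
    """PROP1 for goods: each agent i reaches 1/n of his total value
    after adding (at most) one object from outside his bundle.
    values[i][o] are additive; allocation[i] is a set of object indices."""
    n, m = len(values), len(values[0])
    for i in range(n):
        own = sum(values[i][o] for o in allocation[i])
        outside = [values[i][o] for o in range(m) if o not in allocation[i]]
        best_addition = max(outside, default=0)
        # Compare n*(own + best addition) with the total to avoid fractions.
        if n * (own + best_addition) < sum(values[i]):
            return False
    return True

print(is_prop1([[4, 1, 1], [1, 1, 4]], [{0}, {1, 2}]))       # → True
print(is_prop1([[4, 4, 4], [1, 1, 1]], [set(), {0, 1, 2}]))  # → False
```

As with EF1, adding the single most-valued outside object is the best case, so one check per agent suffices.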

Efficient approximately equitable allocation

An allocation of objects is called equitable if the subjective value of all agents is the same. The motivation for studying this notion comes from experiments showing that human subjects prefer equitable allocations to envy-free ones. An allocation is called equitable up to one item (EQ1) if, for every two agents i and j, there is some single item whose removal from the bundle of j would make the subjective value of i at least that of j. Formally, for all i, j:
  • ∃ g ∈ Xj : Vi(Xi) ≥ Vj(Xj \ {g}).
A stronger notion is equitable up to any item (EQx): for every two agents i and j, if any single item is removed from the bundle of j, then the subjective value of i is at least that of j:
  • ∀ g ∈ Xj : Vi(Xi) ≥ Vj(Xj \ {g}).
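Both conditions can be checked directly under additive valuations. A sketch (hypothetical values): EQ1 only needs the removal of j's most valuable item to succeed, while EQx must survive the removal of any single item.

```python
def subjective(values, i, bundle):
    """Agent i's additive value for a bundle of object indices."""
    return sum(values[i][o] for o in bundle)

def is_eq1(values, allocation):
    """EQ1: for every i, j, removing SOME item from j's bundle brings
    j's subjective value down to at most i's; dropping j's most valuable
    item is the best case, so testing it alone suffices."""
    n = len(values)
    for i in range(n):
        for j in range(n):
            if i == j or not allocation[j]:
                continue
            best_drop = max(values[j][o] for o in allocation[j])
            if subjective(values, i, allocation[i]) < subjective(values, j, allocation[j]) - best_drop:
                return False
    return True

def is_eqx(values, allocation):
    """EQx: the inequality must hold after removing ANY single item of j's bundle."""
    n = len(values)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for o in allocation[j]:
                if subjective(values, i, allocation[i]) < subjective(values, j, allocation[j]) - values[j][o]:
                    return False
    return True

print(is_eq1([[1, 1, 1], [2, 2, 2]], [{0}, {1, 2}]))  # → False: 1 < 4 - 2
print(is_eqx([[1, 1], [5, 5]], [{0}, {1}]))           # → True
```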
EQx allocations were first studied by Gourves, Monnot and Tlilane, who used a different term: "near jealousy-free". They proved that a partial EQx allocation always exists, even with the additional requirement that the union of all allocated goods is a basis of a given matroid. They used an algorithm similar to the envy-graph procedure. Suksompong proved that an EQ1 allocation exists even with the additional requirement that all bundles must be contiguous subsets of a line.
Freeman, Sikdar, Vaish and Xia proved the following stronger results:
  • When all valuations are strictly positive, a PE+EQx allocation always exists, and there is a pseudopolynomial time algorithm that finds a PE+EQ1 allocation.
  • When some valuations may be zero, even a PE+EQ1 allocation may not exist, and deciding whether a PE+EQ/EQ1/EQx exists is strongly NP-hard.
  • A PE+EQ1+EF1 allocation may not exist. Deciding whether it exists is strongly NP-hard in general, but polynomial-time solvable with binary valuations.

Algorithms for a small number of agents

Bredereck, Kaczmarczyk, Knop and Niedermeier study a setting where there are few agents and few item-types, the utility per item-type is upper-bounded, but there can be many items of each type. For this setting, they prove the following meta-theorem: given an efficiency criterion E and a fairness criterion F, if the number of agents and the number of item-types are fixed, then it is possible to decide in polynomial time whether there exists an allocation that is both E-efficient and F-fair, as long as E and F satisfy the following properties:
  • Fairness: the question of whether an F-fair allocation exists can be modeled by an integer linear program with a fixed dimension.
  • Efficiency: the question of whether there exists an allocation that E-dominates another given allocation can be modeled by an integer program whose dual tree-depth, and the absolute value of the largest coefficient, are upper-bounded by some function of the fixed parameters.
Then, they prove that several common fairness and efficiency criteria satisfy these properties, including:
  • Fairness notions: envy-freeness, EF1, EFx, graph-EF, egalitarian, maximin-share, and average-group-envy-freeness.
  • Efficiency notions: Pareto-efficiency, graph Pareto-efficiency, and group Pareto-efficiency. An allocation X is called k-group-Pareto-efficient (GPEk) if there is no other allocation Y that is at least as good for all groups of size k, and strictly better for at least one group of size k. Note that GPE1 is equivalent to Pareto-efficiency. GPEn is equivalent to a utilitarian-maximal allocation, since for the grand group of size n, the arithmetic-mean utility is equivalent to the sum of all agents' utilities. For all k, GPEk+1 implies GPEk.
The runtime of their algorithm is polynomial in the input size times a factor that depends on d, the number of variables in the resulting ILP.
They later developed more practical algorithms for some of these problems.