
32:17
QUESTION

33:01
Best to type your question here

33:25
QUESTION: main theorem says there exists r, but does it work for every fixed positive r?

35:00
It works for every fixed positive r provided r is smaller than some value r₀ depending on the distance to a maximal lottery we would like to achieve.

35:13
Q: a) Are there any issues with path dependence? b) The process reminds me of genetic algorithms. Is that impression wrong?

35:26
Question: The convergence statement is in Cesàro average. So this allows for arbitrarily long excursions away from the limit lottery, arbitrarily far in the future, as long as the time-average converges. Correct? In your statement “the urn distribution is close to a maximal lottery most of the time”, the phrase “most of the time” means a set of times of Cesàro density 1, correct? Final question: is the proof obtained by applying the Perron-Frobenius theorem to obtain the stationary probability measure for a Markov chain defined by the majority margin matrix, plus the Birkhoff ergodic theorem?

37:45
The existence of a stationary distribution is obtained from the P-F theorem. However, most of the work is determining the stationary distribution.
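As generic background for this answer (a toy sketch with our own names and an illustrative matrix, not the chain from the talk): when Perron-Frobenius applies, the stationary distribution of a finite Markov chain can be approximated by simple power iteration.

```python
def stationary(P, iters=2000):
    """Approximate the stationary distribution of a row-stochastic matrix P.
    Assumes P is irreducible and aperiodic, so Perron-Frobenius guarantees a
    unique stationary distribution, which power iteration converges to."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy 3-state chain (illustrative only):
P = [[0.5, 0.25, 0.25],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]
pi = stationary(P)
# pi sums to 1 and satisfies pi = pi P up to floating-point error
```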

38:16
OK thanks.

38:59
Close to a maximal lottery most of the time means: given any δ > 0 and τ > 0, eventually the distribution in the urn is δ-close to a maximal lottery in all but a τ-fraction of the time.
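This "all but a τ-fraction" notion can be checked empirically on a trajectory. A minimal sketch (our own toy dynamics with shrinking noise, standing in for the real urn process):

```python
import random

def tv(p, q):
    """Total variation distance between two distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def fraction_close(trajectory, target, delta):
    """Fraction of rounds at which the distribution is delta-close to target."""
    return sum(1 for p in trajectory if tv(p, target) <= delta) / len(trajectory)

# Toy trajectory: distributions jittering around a target lottery, with noise
# shrinking over time (a stand-in for the urn process, not the real dynamics).
random.seed(0)
target = [0.5, 0.3, 0.2]
trajectory = []
for t in range(1, 2001):
    noisy = [max(x + random.uniform(-1, 1) / t, 0.0) for x in target]
    s = sum(noisy)
    trajectory.append([x / s for x in noisy])

frac = fraction_close(trajectory, target, delta=0.05)
# frac is close to 1: delta-close in all but a small fraction of rounds
```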

39:19
QUESTION: Didn’t Laslier and Laslier show that “2 is not enough”?

40:50
Yes, the process can (and with probability 1 will) have long excursions far away from the maximal lottery. However, these excursions happen very rarely.

41:32
Can any linear program be coded as a maximal lottery?
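For context on the LP connection behind this question: a maximal lottery can be characterized as a maximin strategy of the symmetric zero-sum game whose payoff matrix is the majority-margin matrix M, and such games are solvable by linear programming. Since M is antisymmetric the game has value 0, so p is maximal iff pᵀM ≥ 0 componentwise. A minimal verification sketch (our own toy example, not code from the talk):

```python
def is_maximal_lottery(p, M, tol=1e-9):
    """Check the maximin characterization: p^T M >= 0 in every column."""
    n = len(M)
    return all(sum(p[i] * M[i][j] for i in range(n)) >= -tol for j in range(n))

# Majority margins for a 3-alternative Condorcet cycle (a beats b beats c beats a):
M = [[0, 1, -1],
     [-1, 0, 1],
     [1, -1, 0]]

uniform = [1/3, 1/3, 1/3]
# For this cycle the uniform lottery is maximal ...
assert is_maximal_lottery(uniform, M)
# ... while a degenerate lottery on a single alternative is not:
assert not is_maximal_lottery([1.0, 0.0, 0.0], M)
```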

48:12
QUESTION: Can this be interpreted as saying that “fair” deliberation is good for achieving a fair result? Or something similar.

48:21
QUESTION: Do you have any bounds on how long it takes until you start being close to ML often?

49:54
Question — is myopic choice equivalent to sophisticated choice via backward induction?

50:47
Dominik: One can obtain statements like “given enough iterations, it is likely that the distribution in the urn is close to a maximal lottery”. The number of iterations required for a given distance δ behaves like 1/δ.

51:23
QUESTION from Peter van Emde Boas: How fast is the convergence compared to other LP-solving methods?

52:52
Sean: I am not aware of a connection to backward induction. By myopic we mean that voters only consider the effect of their choice on the winner of the current round.

01:15:41
Question: Could we interpret the weights as the importance of making the *correct* judgment on the issue? This would take an epistemic perspective, by assuming that judgments can be (objectively) correct or incorrect.

01:16:37
A better way to think about the weight of issue k is as the importance of agreeing with the majority opinion on issue k.

01:17:25
If issue k has larger weight than issue j, then it is more important to satisfy a majority on issue k than an (equally sized) majority on issue j.
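To make the role of the weights concrete, here is a toy sketch (invented numbers and names, not from the talk): an additive majority rule selects the admissible truth-value vector maximizing the total weight of the issues on which it agrees with the issue-wise majority.

```python
def additive_majority(admissible, majority, weights):
    """Pick the admissible vector maximizing the total weight of issues on
    which it agrees with the issue-wise majority (ties broken by list order)."""
    def score(x):
        return sum(w for a, m, w in zip(x, majority, weights) if a == m)
    return max(admissible, key=score)

# Toy agenda on 3 issues where the issue-wise majority (1, 1, 0) is inadmissible:
admissible = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
majority = (1, 1, 0)

# Shifting weight between issues changes which majorities get satisfied:
heavy_first = additive_majority(admissible, majority, (3, 1, 1))  # (1, 1, 1)
heavy_last = additive_majority(admissible, majority, (1, 1, 3))   # (0, 0, 0)
```

With most of the weight on issue 0, the rule sides with the majority there; with the weight on issue 2, it flips to the opposite corner of the admissible set.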

01:18:52
This *might* be connected to an epistemic interpretation via some kind of CJT (Condorcet jury theorem) argument. But it is not immediate, because there are multiple issues, voters' opinions (and errors) across issues might be correlated, etc.

01:22:30
QUESTION: Can the dependencies Marcus mentions be handled by placing weights on certain Boolean combinations of issues, rather than restricting weight assignments to individual issues?

01:24:06
Interesting idea. This might work, but it is definitely outside of our framework, because what you suggest would not be “additively separable” across issues, whereas our additive majority rules *are* additively separable.

01:24:40
It is also possible that what you are proposing is equivalent to recoding the JA problem in a larger judgement space where these Boolean combinations are represented explicitly as separate “issues”.
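The recoding idea can be sketched concretely (a hypothetical illustration of ours, not a construction from the talk): append the truth value of a Boolean combination, say the conjunction of issues i and j, as an explicit extra issue, so that a weight can then be placed on it.

```python
def add_conjunction(admissible, i, j):
    """Recode the agenda: append the conjunction of issues i and j as an
    explicit extra issue on every admissible truth-value vector."""
    return [x + (x[i] & x[j],) for x in admissible]

# Toy agenda on 2 free issues:
admissible = [(0, 0), (0, 1), (1, 0), (1, 1)]
recoded = add_conjunction(admissible, 0, 1)
# recoded == [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
```

Note that the new column is functionally determined by the first two, which is exactly the kind of redundancy that the thickness condition discussed below is sensitive to.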

01:27:13
But if you did this recoding, you would probably violate the “thickness” condition which appears on the current slide.

01:30:59
QUESTION: could you give some examples of a "well-behaved" domain X (distal and rugged)?

01:31:37
Thickness means, intuitively, that… there are not too many redundancies among issues?

01:32:29
To be precise, it means that there are no linear dependencies in the family of admissible truth value vectors.

01:32:54
I think that’s a version of “yes.”

01:33:28
A linear dependency would be a fairly strong form of redundancy. Weaker forms of redundancy are OK. For example, the truth values of some issues could be functionally determined by the truth values of other issues, and in general that would not violate thickness.
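One plausible way to operationalize this reading of thickness (our own sketch): check that the admissible truth-value vectors span the full issue space, i.e. that no nonzero linear functional vanishes on all of them. That amounts to a rank computation on the matrix whose rows are the admissible vectors.

```python
def rank(rows):
    """Rank of a matrix given as a list of rows (Gaussian elimination)."""
    m = [list(map(float, r)) for r in rows]
    rk, cols = 0, len(rows[0])
    for c in range(cols):
        piv = next((i for i in range(rk, len(m)) if abs(m[i][c]) > 1e-9), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(len(m)):
            if i != rk and abs(m[i][c]) > 1e-9:
                f = m[i][c] / m[rk][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk += 1
    return rk

# Admissible vectors spanning all of R^3: no linear dependency among issues.
thick = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
# Every admissible vector here satisfies x0 - x1 = 0, a linear dependency.
thin = [(0, 0, 1), (1, 1, 0), (1, 1, 1)]
# rank(thick) == 3, rank(thin) == 2
```

On the `thin` family the functional x0 - x1 vanishes everywhere, so the rank drops below the number of issues; a functional determination like "issue 2 is the conjunction of issues 0 and 1" need not cause such a drop, matching the remark above that weaker redundancies are OK.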

01:34:02
Question: [Better to state orally. If time permits.]