Collective knowledge is easy — Ignorance is hard


A parable

Consider the following parable:

A student applies to a large, moderately prestigious university for a graduate scholarship to undertake a PhD. The student is very good: the Department to which she has applied thinks she is the best student in the current field of applicants.

Scholarships are awarded by a central committee, not by individual Departments, so this Department needs to submit a ranking of its prospective students and await the determination of the committee as to how many scholarships it will get. The Department usually gets at least one scholarship, so presumably, this exceptional student is practically guaranteed to be made an offer, right?

Unfortunately, no. If the student refuses the scholarship, there is no guarantee that the Department will be able to offer a scholarship to its second-ranked candidate. Instead, it will go to the next-ranked candidate on the central committee’s list. So the Department now faces a strategic problem: should it rank this student, who is arguably “too good” for the Department, highly or not? If the student is likely to have offers from elsewhere, and is likely to accept one of those offers, then it is not in the Department’s interest to rank the student highly. So the Department – if it is sufficiently cynical – is likely to try to assess the student’s level of interest in the program, and may even use the student’s exceptional quality as a reason not to rank her highest among the prospective students.

The central committee, however, to the extent that it wants to enhance the quality of the university as a whole by admitting the best possible students, wants the departments to submit perfectly meritocratic rankings, which will ensure that the best students get scholarships, regardless of which department they are applying to.

This story is not too far removed from what happened in the recent past at my University. Mercifully, the procedure has been greatly improved. But similar issues are almost certainly still occurring, not just in universities but in all sorts of institutions. It is a typical case of an institutional design that gives rise to perverse incentives. In this case, the incentive is to strategically misrepresent the Department’s beliefs. While it is in some sense rational for an individual department to avoid ranking highly an excellent student who is unlikely to accept, if every department follows that strategy, the average quality of graduate students is likely to fall below what the university could achieve.
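To make the department’s reasoning concrete, here is a toy expected-value calculation. The numbers and the simplifying assumptions (one scholarship slot, a declined offer passing to the committee’s list rather than to the department’s next candidate) are invented for illustration, not drawn from any actual policy:

```python
# Toy illustration (invented numbers): why a cynical department might
# demote its strongest applicant under the committee mechanism described above.

p_star_accepts = 0.3    # star student likely has better offers elsewhere
p_second_accepts = 0.9  # second-ranked student is keen to come

quality_star = 10.0     # department's valuation of each student
quality_second = 7.0

# Expected quality the department captures under each ranking strategy,
# assuming it gets one slot and a declined offer is lost to the department.
ev_rank_star_first = p_star_accepts * quality_star        # 3.0
ev_rank_second_first = p_second_accepts * quality_second  # 6.3

print(f"Rank star first:   expected quality = {ev_rank_star_first:.1f}")
print(f"Rank second first: expected quality = {ev_rank_second_first:.1f}")
# The cynical ranking wins for the department, even though truthful
# rankings across all departments would serve the university better.
```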

Diagnosing collective failure

As it happens, there is an extensive body of work in economics on how to optimally design institutions that allocate scarce resources (like donor organs, medical residencies, and school places) so as to avoid these sorts of problems. Most members of economics departments in typical universities are reasonably familiar with this work – at least they know that it exists, even if they don’t know the exact measures recommended – but the universities themselves are apparently blind to it.
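One well-known tool from that literature is the deferred-acceptance algorithm of Gale and Shapley, used in residency and school-choice matching; I am not claiming it is the exact fix for the scholarship case, but it illustrates the design principle of making truthful reporting a dominant strategy for the proposing side. A minimal sketch, with invented names and preferences, and assuming one-to-one matching with complete preference lists:

```python
def deferred_acceptance(student_prefs, school_prefs):
    """Student-proposing deferred acceptance (Gale-Shapley), one-to-one version.

    student_prefs: {student: [schools in order of preference]}
    school_prefs:  {school: [students in order of preference]}
    Returns a stable matching {school: student}.
    """
    # Lower rank number = more preferred by the school.
    rank = {s: {stu: i for i, stu in enumerate(prefs)}
            for s, prefs in school_prefs.items()}
    free = list(student_prefs)                        # students not yet tentatively held
    next_choice = {stu: 0 for stu in student_prefs}   # index of next school to propose to
    held = {}                                         # school -> tentatively held student

    while free:
        stu = free.pop()
        if next_choice[stu] >= len(student_prefs[stu]):
            continue                                  # list exhausted; stays unmatched
        school = student_prefs[stu][next_choice[stu]]
        next_choice[stu] += 1
        current = held.get(school)
        if current is None:
            held[school] = stu
        elif rank[school][stu] < rank[school][current]:
            held[school] = stu                        # school trades up
            free.append(current)                      # displaced student re-enters the pool
        else:
            free.append(stu)                          # rejected; will propose to next choice
    return held

# Example (invented): both students prefer econ; econ prefers alice.
match = deferred_acceptance(
    {"alice": ["econ", "phil"], "bob": ["econ", "phil"]},
    {"econ": ["alice", "bob"], "phil": ["alice", "bob"]},
)
print(match)  # {'econ': 'alice', 'phil': 'bob'}
```

The relevant property is that, in the student-proposing version, no student can do better by misreporting their preferences; an analogous strategy-proof allocation rule for scholarships would remove the department’s incentive to game its rankings.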

Here is my question: how should we describe the sort of failure that is occurring in stories like the one above? Here are two possibilities:

  1. A failure of knowledge. While many individuals in the institution know about the problem of strategic misrepresentation and the means to rectify it, the institution as a whole is ignorant. The relevant sort of organisation required for a collective to know these facts is lacking.
  2. A practical failure. While the institution knows what the problem is and how to fix it, it is somehow failing to translate its belief into optimal action. It might be weak-willed, hypocritical, or simply too incompetent to act on its knowledge.

Having noted these possibilities, it is far from obvious to me which is the better interpretation. What would the difference between these interpretations amount to?

One thought: ask ourselves different counterfactual questions about the circumstances under which the institution would behave optimally. For instance, if the epistemic interpretation is correct, perhaps we could say:

If some individuals in the organisation were to have more accurate beliefs about the problem and its solution (but we held their basic values fixed), then the organisation would act optimally.

And if the pragmatic interpretation were correct, perhaps we could say:

If some individuals in the organisation had different values (or habits, or incentives, or willpower, or …), but we held everyone’s beliefs about the problem fixed, then the organisation would act optimally.

I can immediately think of at least one sort of case that doesn’t exactly fit these analyses, but I don’t want to try to offer a more complicated analysis.[1] Rather, I’ve got some reason to doubt that this is a fruitful approach at all.

Collective knowledge

Alexander Bird, writing about the conditions under which a social collective has knowledge, adopts a similar approach, arguing that we need to understand social knowledge as a functional analogue of individual knowledge.[2] He suggests that the functional role of individual knowledge is roughly this: when our cognitive faculties are operating reliably, they produce true beliefs, and those beliefs are then used as the proper input to reasoning processes. These beliefs, when all is going well, are knowledge.

He elaborates this to suggest that, to be doing something analogous to individual knowing, a social collective should have:

  1. characteristic propositional outputs of its inquiring activities
  2. truth filters on those outputs
  3. use of those outputs as input to social action

This account looks promising as an account of success. For instance, I take it as clear that my University knows a lot about government regulations regarding higher education. And clearly, it has characteristic outputs of its inquiries – minutes of meetings, policy documents, webpages, emails, mental states of key officials – that are propositional. It has filters on these propositions: incorrect descriptions of the regulations are identified and discarded. And it acts on these outputs: the University strategically responds to the incentives created by the government regulations.

But now consider the example of failure with which I began. Is the university producing propositional outputs of the right sort? Well, there are documents produced by academics at the university that make true claims about incentive compatibility, mechanism design, and the problem of strategic voting. But on Bird’s account, I don’t really know whether this counts: certainly these propositions are produced in very different locations from the propositions about government regulations. So perhaps the university is failing at stage (1).

Alternatively, we might think the University has passed stage (1), and is failing at stage (3): it produces the right sorts of propositions in the right sort of structural locations, but does not act on them. But failure at stage (3) does not look like a failure of knowledge per se; it is a practical failure to act on what is known.

So we have got back to much the same question I started with: is the university ignorant, or is it incompetent? I’m tempted to think the contrast is much less sharp than we ordinarily assume, especially in collective cases.

(Aside: Notice that Bird says that a collective knows when it produces true propositions and acts on them. The acting makes them belief-like. Their truth makes them knowledge. But what if the collective produces representations of true propositions without ever acting on them? That suggests the collective would have knowledge without belief. (!))

Functionalism provides success analogues, not pathology analogues

I suggest that what is happening here is symptomatic of a broader phenomenon regarding functionalism. The functionalism in the present case is a functionalism about agency: a collective is an agent if it does what ordinary agents do, namely act in ways that effectively realise its desires, given its beliefs. A collective is knowledgeable if it does what a knowledgeable agent does: it has beliefs it is capable of acting on, and the beliefs on which it acts are reliably true.

Functionalism about knowledge or agency means that we can recognise agency and knowledge in entities very unlike ourselves. We can imagine robot agents or knowledgeable computer systems, and various animals other than humans may qualify as agents and knowers too. But just because in a given case we have specified a functional analogue of success, it does not follow that we will be able to identify any functional analogues of failure. Consider a less lofty example: circulation in a biological organism. Circulation involves making sure that the molecules involved in respiration, both as input and output (oxygen, carbon dioxide), are supplied to and removed from the relevant locations in the body, so that respiration can proceed efficiently. So understood, circulation occurs not only in vertebrates, which have organs like hearts, but also in invertebrates like worms, which can achieve sufficient circulation from the overall contraction and movement of their bodies. Circulation even occurs in plants, which need to move water, carbon dioxide, oxygen, and sugars through their biomass to keep the entire organism alive.

But now consider some of the ways that circulation can fail in humans. We can have heart failure, or we can have an aortic dissection, or we can have atherosclerosis, etc. These ways in which circulation can fail are not specified in purely functional terms, but in mechanical terms: the concepts involve particular organs failing in specific ways. Consequently, we cannot find ready analogues of these failures in other species, like invertebrates and trees. There is perhaps an analogue of atherosclerosis for plants, but the analogy is strained at best; and heart failure, given that plants and worm-like invertebrates achieve circulation without a heart at all, is clearly something that can only occur in vertebrates.

Something similar happens in the case of collective agency. We think of our failures of agency in terms of our characteristic “organs” of agency: beliefs can go awry (epistemic failure) and decisions can go awry (practical failure). Although collectives manage to do something that resembles the functional package of agency overall, they achieve this without any clear, isolatable analogue of beliefs, intentions, and the like. So we cannot identify any analogous failure of those “organs”, even if we are confident that the package as a whole has failed. Collectives can be agents, and they can be subject to failures of agency. But their failures may not be classifiable as either ignorance or weakness of will.


  1. Regarding the epistemic criterion: it seems that this is too inclusive. It might be that the Vice Chancellor of the university (roughly analogous to a CEO) is hypocritically maintaining the current policy, because of some corrupt values. But this corruption is only possible because nobody with oversight over the VC knows about the problem. If somebody with oversight (e.g. the Chancellor of the university – roughly analogous to the chair of a corporate board of directors) knew what was going on (understood the problem, understood the means to address it), the VC would act differently. This fits the epistemic counterfactual (if people believed differently, the organisation would act better), yet it seems less like an epistemic failing than a pragmatic one – a sort of hypocrisy.  ↩

  2. Bird 2010. Bird also makes the surprising claim that social knowledge does not supervene on the mental states of individuals. His counterexamples involve cases where discoveries are made, then published, then forgotten, but later rediscovered. He claims social knowledge persists through the interim, even though the individual mental states of the entire society, during the period of forgetting, are indistinguishable from the mental states in a society where the discovery was never made. I suggest that the supervenience claim might be saved if we were willing to grant a sufficiently strong extended mind hypothesis. The mental states of people who live in worlds with access to scientific journals are different from those of people who live without those journals, even if the brain states are identical.  ↩