This is a draft of a paper on the prospects for a realist-friendly solution to the problem of creeping minimalism, and the constraints any such solution must satisfy.
This is a paper written as part of the
Theology, Science, and Knowledge project, which looks at the epistemology of divine predicates in Maimonides and al-Ghazali.
Realists about normativity often face epistemological challenges. Here I develop such a challenge which relies on the notion of
meta-semantic risk.
What we ethically ought to do can be vague, and in such cases the requirement that the best motivations are
de re motivations is implausible.
In debates about whether theological predications are univocal with ordinary predications (and hence mean the same thing) or are merely analogous (and hence have a different but related meaning), discussion has focused on whether theological predications on either account can be
true. I illustrate how the issue also connects with the question of whether theological predications can be
known, drawing on some arguments of medieval thinkers, and applying their insights to contemporary accounts of theological predication.
The Metaphysical Conception of Realism (forthcoming in Meaning, Decision, and Norms: Themes from the work of Allan Gibbard)
I argue that realism can be best understood as a claim about the fundamentality of a domain. This sheds light on the nature of quasi-realist views, and where they depart from full-blown versions of realism.
I argue that anti-realists do not gain any epistemological advantages over their realist counterparts simply by grounding normative facts in facts about an agent's mental states.
In the late 13th century, Duns Scotus put the theory of Divine Illumination to rest. Roughly, this was the view that we could not have knowledge via sensation unless God regularly intervened in our mental lives in certain ways. This paper discusses one of Scotus's main arguments against Divine Illumination, showing that it employs a principle that is very similar to 'anti-luck' principles in contemporary epistemology. This makes Scotus's argument remarkably resistant to criticisms from contemporary scholars, and also sheds light on how to understand contemporary applications of anti-luck epistemology in certain problem cases.
Sometimes it is vague what one ethically ought to do. The question I ask in this paper is whether there is a sense of 'ought' (perhaps an ought of practical rationality) which determinately yields verdicts about what to do in some of these borderline ethical cases. I outline an approach to the rational 'ought' which requires one to
maximize expected moral value in cases of ethical vagueness. I close by using this approach to show that the rational requirements on action in vague cases differ substantially given various alternative theories of what vagueness is.
Proponents of evolutionary debunking arguments in ethics often help their case by drawing analogies with familiar examples of epistemic luck. I ask whether epistemic luck is in the end helpful to the debunker's case, or whether the support from luck-based considerations derives from a conflation of epistemically benign and epistemically problematic forms of luck. I conclude that the answer to this question hinges on the precise details of the correct empirical theory of the evolutionary origins of ethical belief.
Scepticism (with John Hawthorne; in The Oxford Handbook of Epistemology and Theology)
To what extent are theological questions knowable? We outline some tools for addressing this question by first giving some plausible structural constraints on knowledge. Then we use these constraints to explore the relationship between the possibility (or impossibility) of theological knowledge and various issues including private interpretation, faith, the problem of evil, religious diversity, and morally good action.
This paper explains how reference magnetism, when grounded in metaphysical fundamentality, provides a solution to the Moral Twin Earth argument from Horgan and Timmons.
Expressivism, even of a quasi-realist variety, has metaphysical commitments that differ from realism, since it has different consequences for the naturalness or eliteness of the normative.
Frank Jackson's supervenience argument against non-naturalism fails, because its premises entail that non-naturalism is false even under the supposition that the normative
fails to supervene on the natural. But non-naturalism is clearly
true under such a supposition. Jackson's premises are too strong, and I develop an under-explored approach on which non-naturalists can reject the conjunction of these premises.
Whither Anankastics? (with Alex Silk; in Philosophical Perspectives)
So-called 'anankastic conditionals' (such as
If you want to go to Harlem, you ought to take the A-train) have been used to motivate a variety of substantive views about obligation and linguistic views about the semantics of
ought. We isolate the key structural features of anankastics and argue that these same features are present in a wide range of constructions that do not contain deontic modals. We conclude that this shows that anankastics cannot be used to motivate the substantive linguistic and normative views in question.
Philosophers have debated whether
realism about ethics is true, and have asked whether ethics is
objective. But less attention has been directed to what realism and objectivity are. I outline some issues for existing substantive views on these questions, and sketch answers to the questions of how and whether theorizing about realism and objectivity ought to be pursued.
The theoretical benefits of analyzing the modal operators □ and ◇ in quantificational terms have been
assumed to come with an ontological cost. The cost is an ontology of “possible worlds”, which may be either the concrete worlds of David Lewis, or else some kind of abstract entity. I show how, with two independently motivated resources, we can reject the assumption that the benefits of quantificational analyses require this ontology. The resources in question are primitive second-order quantifiers, which bind variables in predicate-position and have no analysis in terms of first-order
quantifiers, and a hyperintensional connective like 'in virtue of'.
Much of contemporary experimental philosophy involves taking surveys of 'folk' concepts. The results of these surveys are often claimed to be surprising, and treated as evidence that the relevant folk intuitions cannot be predicted from the 'armchair'. We conducted an experiment to test these claims, and found that a solid majority of philosophers could predict even results that were claimed to be surprising in the literature. We discuss some methodological implications as well as some possible explanations for the common surprisingness claims.
Some expressivists (most notably, Simon Blackburn) have claimed that expressivists can adopt a piecemeal strategy
to show that all of the sentences realists accept are consistent with expressivism. I argue that the project cannot be carried out in the way Blackburn describes. This is because Blackburn's claims about the meaning of the realist's sentences commit him to claims about the meaning of the parts of these sentences: commitments the realist does not share.
Is it possible that
relative naturalness (in Lewis's sense) is
perfectly natural? I outline an argument that it is not, which is inspired by Sider's 'Purity' constraint. Distinguishing between what I call 'horizontal' and 'vertical' conceptions of naturalness, however, shows that Purity does not necessarily rule out perfectly natural relative naturalness.