We have come to the end! The last seminar of the year and of the AT project at Fuller. For those of you who have been with us from the beginning, this is something like 60 seminars with speakers from around the country and the globe. It might be a fitting bookend in that our very first seminar was given by Oliver Crisp…who obviously works right here at Fuller, and our last seminar was given by Dr. Ben Myers…from all the way in the land down under, where he is the Director of the Millis Institute in Australia. Ben is well known in the theology blogosphere as the driving force behind Faith and Theology, one of the very first and most well-known biblioblogs.

Let me offer a brief summary of his talk followed by a bit of an expanded summary of the main sections. I think the main idea can be put like this: talking about the relationship between theology and science, and how that relates to human nature, can be tricky. Some folks in the past thought that the human being is a microcosm of the whole cosmos, which is an interesting thought that might be helpful for today.

Myers offered a summary in three parts focusing on Francis Bacon, some Greek thinkers (mostly Plato), and Augustine. Here is my paraphrase of Myers’ summaries:

Part one: In the Instauratio Magna, Francis Bacon says many interesting things about education and society. One of the most interesting things he writes is when he allegorically interprets the 6 days of the creation week to mirror the proper way humans acquire knowledge. Bacon also talked about how he thought of humans as a microcosm of the universe. Which leads to…

Part two: This idea that humans are a microcosm of the cosmos pervades ancient Greek philosophy, especially in Plato. Plato says many interesting things, but in Republic and Timaeus he says that humans are a microcosm of the universe.


Brief interlude: the early church fathers said this too.


Part three: Pivoting now to Augustine, turns out like Bacon and Plato, he too says some interesting things about how humans are a microcosm of the cosmos. And, also like Bacon, he does some interesting figurative interpreting of the creation week in his commentary on Genesis against the Manicheans. Also, remember some of those weird bits in Confessions and City of God? Augustine layers a bunch of stuff onto this figurative interpretation of the creation week: the history of the world, an individual human’s stages of age, an individual human’s stages of spiritual growth, and even Augustine’s own life.


Epilogue: Mash it all up together and the thought is that a human is a microcosm of creation and the order of creation is how humans know creation.

What I don’t know after hearing Myers’ summaries is whether any of this is right. I really like Plato, so I’m super-sympathetic just to saying (like Digory in the Narnia books), “It’s all in Plato, all in Plato!” But, at the end of the day, just painting a picture of how some folks in the past thought things might be doesn’t tell me that this is actually how things are. Perhaps the idea is that if one paints the picture, then just the aesthetic appeal of the picture will be proof of its accuracy. In the classical world, goodness, truth, and beauty all end up coinciding (…in the One, yada yada yada Plotinus, etc.). So the beauty of a picture really is truth-indicative. If this really is how Myers’ argument was supposed to function, then I suppose I find myself both convinced and not convinced. Convinced because I have had about 20 years of feeling an appeal toward Platonic views of the world. But also not convinced because I have not yet been assured my draw to Plato is not due to weird idiosyncratic vices in my intellect.

So, I’ll keep pondering this and, if the idea really is, “It’s all in Plato,” I’d say, perhaps Myers would just like to join me in reading some Plato together.

As the penultimate seminar speaker of the entire Analytic Theology Project, we were delighted to welcome Fuller’s own Dr. Kutter Callaway. In his talk, “Experimental Theology: Theological Anthropology in Conversation with the Sciences,” Callaway explored directly the “conversational” aspect of our research project, viz., what we mean by deploying this term, how it describes the normal use/misuse of varying disciplines in an interdisciplinary project, and what sort of method might better be used to go beyond mere conversation. To this end, Callaway offers “experimental theology” as a method of theological inquiry that yields (or hopefully yields) a “collaborative convergence” between the theological discipline and the scientific one (particularly the psychological sciences insofar as the research is about the human person).

Here’s a brief review for why Callaway thinks that going beyond conversation is needed. Far too often, argues Callaway, researchers in theology and the psychological sciences distort or cherry pick in unfaithful ways from the disciplines with which they’re supposed to be in conversation. Part of the issue, it seems, is that conversation doesn’t require native fluency. Compare: the massive qualitative difference between the sort of conversation one has with one’s Spanish teacher as a first-year Spanish student and the sort of conversation between two native Spanish speakers. There are idioms, turns of phrase, pronunciations, speed, thought-patterns, and so on that a non-native speaker will miss. Yet, both of these are conversations. The point is this: for real collaborative convergence, a project that sees conversation as a starting point rather than an end, theologians and psychological scientists need to become native speakers in each other’s disciplines. For, if they don’t, occasions of ignorant misuse of data/theories/doctrines lie in wait.

At least, this is my gloss on Callaway’s concern. And, if this is in the neighborhood, I think he’s got a point. I’m reminded of first-year Greek students who, once having learned how to read some Koine Greek, often criticize this or that New Testament translation (ones having been put together by Greek experts) for “mistranslating” a particular verse, all the while completely unaware of the idiomatic turns of phrase native Greek writers deployed, of which the experts know and for which they take account. First-year students in many disciplines know just enough to be dangerous. I take it that Callaway is concerned that this is the sort of danger that lurks at the doorsteps of those theologians and scientists who settle merely for being conversational in the “other” discipline.

The call for cross-disciplinary expertise is noted. I think it’s good and needed. I’m worried it’s not feasible, for all but a few geniuses, given how specialized disciplines have become. But that’s a point for another time. With this all-too-brief summary of Callaway’s talk, I wish to raise a concern.

My concern is this: experimental theology seems prima facie to commit a category mistake. Here’s what I mean: in his talk, Callaway suggests (as an example of interdisciplinary abuse) that James K. A. Smith’s arguments that Christian liturgies are better suited than secular liturgies to form good humans are empirically impoverished. That is, that there is no empirical evidence for Smith’s claims. Suppose Callaway is right. What follows from this? I’m not sure anything does; for one can’t have empirical evidence for that which isn’t empirical. What’s not empirical about Smith’s project? What good humans are. “Good” is amenable to metaphysical investigation only; that’s in the domain of philosophy, not empirical science. That x is “good” is a metaphysical claim about the quality of x. And metaphysics is categorically not in the domain of investigation of the empirical sciences. So, the complaint against Smith seems, in my view, to commit an error of first philosophy. And, if one is confused about the proper methods of investigation vis-à-vis carving reality at the joints, one is bound to make all sorts of methodological mistakes (and elementary philosophical ones).

There’s a worse problem here to which the first problem points: making mistakes of first philosophy inevitably leads to mistakes in theology. Theology, and the proper study thereof, comes only after one masters carving nature at the joints (see the Medieval Trivium). For that is the far easier task, says the theologian. God is a very complicated subject, vastly more so than His incredibly complicated creation. If one can’t even get in the neighborhood of figuring out how properly to investigate the creation, one’s not going to do very well at getting a grip on the Creator. At least, this is what I’d contend. And, I think this is a large point of the AT project, generally. Theological mistakes and sloppiness begin where theologians are deficient in first principles, viz., philosophy (particularly metaphysics).

What does this have to do with experimental theology, particularly as it relates to theological anthropology? Well, theology and theological questions almost always trade on metaphysical questions (or epistemological questions). These questions seem amenable to empirical study only in tertiary areas: e.g., what does a human brain look like when one’s praying? (Is this a theological anthropological question or just a psychological one?) But Callaway thinks that deploying ET can help empirically ground theological research. To wit (as above) he suggests that Smith’s claims can be experimentally tested. But this strikes me as plainly false, again because it’s a category mistake. Smith is making a claim about the teleological (and axiological) end of humans. Science cannot even, in principle, evaluate the nature of the good, let alone what it is for a human to be good or to achieve her teleological end. To do such an experiment presupposes a philosophical move and a theological one: it takes first what a good approximation of goodness is and then figures out one of a number of theological questions: e.g., is God “good” in a way univocal with the philosophical good? Does God design things to a good and fitting end, and so on? There’s no scientific experiment to answer such questions. At best, the psychologist could observe phenomena to see if the phenomena correspond with a theologian’s speculations. For example, suppose we think that Christian liturgies make one more theologically virtuous (e.g., abounding in acts of charity). We might be able to examine whether those who go through liturgy in fact do more acts of charity (a category of act filled out only through philosophical and theological research). But notice that this doesn’t actually deliver any theological answers.
For, even if those who go through Christian liturgies don’t in fact do more charitable acts than those going through “secular” liturgies, this doesn’t at all tell us if Smith’s hypothesis is correct. For we have no way of evaluating whether a person in Christian liturgy is a Christian, whether he/she is being made holy, and so on. All the science could tell us is the phenomena of what happened. It can’t in principle tell us what ought to happen or what’s going on behind the phenomena. It can’t settle the theological question: is Christian liturgy better suited to help humans achieve their teleological ends?

These are some of my worries. I suppose the pithiest way of putting my worry is this: it seems false that ET is doing theology, for theology is not the kind of thing amenable to scientific experimentation.

J. T. Turner is a Research Associate on Fuller Theological Seminary’s Analytic Theology Project for the 2016 – 2017 academic year. He holds a PhD from The University of Edinburgh, a ThM from Erskine College and Seminary, and an MA and BS from Liberty University. Turner’s current research projects include writing a book on the metaphysics of afterlife in Christian theology, and work on constructing an analytic theology of what some biblical theologians call “holistic eschatology.”

Much of Christian teaching is not empirically verifiable — for instance, the doctrines of the Trinity, Incarnation, and Atonement certainly are not — but for some time the claim has been made that at least one core doctrine is. Various figures in recent Christian history have affirmed that Christian teaching about Original Sin is empirically verifiable, though they have not made great efforts to prove their point. This is where Dr. Jesse Couenhoven’s paper comes in. I will summarize the argument here very succinctly and raise a worry about its cogency.

The doctrine of Original Sin, as Dr. Couenhoven defines it, claims that human nature as it currently exists is affected by a certain sickness and distortion. It is not natural for human beings to be sinful, but they are all born with a kind of illness of their nature, such that they are incapable of living lives of righteousness and justice as they ought. This is true for children as much as for adults: all humans are affected by this condition.

As empirical evidence of this teaching, Dr. Couenhoven brings forth two findings of recent psychological research. On the one hand, there are experiments that suggest certain fundamental asymmetries of moral evaluation: when people make mistakes, they are likelier to excuse themselves and downplay them, attributing them to situational factors beyond their control, whereas the mistakes of others are taken to be signs of poor character, representative in some way of the kind of persons those others are. This seemingly innate, self-serving inclination towards injustice is taken to evidence a sickness of the human person of the sort that Original Sin describes.

On the other hand, there are the situationist experiments that suggest that human behavior is more a result of highly contingent situational factors rather than lasting character traits belonging to the person in some semi-permanent manner. One such experiment found that people were more likely to help someone who had dropped a stack of papers on the street if they had just previously found a small coin in a phone booth. Rather than indicating that character as such does not exist, however, Dr. Couenhoven interprets these experiments also to show that developing genuinely virtuous character is immensely difficult for us because of a fundamental weakness or sickness of our nature, as the doctrine of Original Sin affirms.

Dr. Couenhoven’s paper is more detailed and sophisticated than this brief summary can capture, but I think I have said enough thus far to raise an important objection against his project. More precisely, I am of the opinion that any empirical evidence taken into consideration for “verifying” the doctrine of Original Sin will be underdetermined, precisely because Original Sin is a metaphysical and not natural doctrine. Original Sin affirms that there is such a thing as human nature, a stable, metaphysical principle that belongs to every human being and defines her as such — something that no amount of empirical observation can ever verify because it is not strictly speaking observable. The experiments brought forth by Dr. Couenhoven as evidence do not justify the conclusion that human nature itself is affected by some kind of moral sickness, as Original Sin affirms, rather than merely that the conditions in which human beings exist are such as to contribute to the development of these diverse moral weaknesses. Granting the results of the psychological experiments mentioned above, it is nevertheless possible that human beings, just as they exist in the actual world, might be upright if external conditions were somehow different.

Indeed, it is difficult to empirically verify anything, let alone a robust metaphysical doctrine like the Christian teaching of Original Sin, because scientific experimentation is never perfectly passive observation of phenomena; there is also a positive, speculative element in which various assumptions and hypothetical suppositions are made which are not themselves verified and, in many cases, cannot be. For this reason, I am less than optimistic about the prospect of empirically verifying Original Sin. I would rather say that various aspects of human moral experience, including some which have been experimentally observed in controlled conditions, are such as to be compatible with, even suggestive of, what Christians teach about the fallenness of human nature.

Steven Nemes is a PhD student at Fuller Theological Seminary whose research primarily concerns philosophical theology.

According to the Cognitive Science of Religion (CSR), we now have empirical support that humans are naturally inclined to interpret their environment religiously. What should we do with this discovery? In other words, what is Cognitive Science of Religion and how should it inform theological method? These questions provided the outline for Dr. Myron Penner’s recent lecture to the Analytic Theology group at Fuller Seminary. This blog briefly reviews some of the highlights of Dr. Penner’s talk and then asks questions about its third section. You can watch his talk here: facebook.com/analytictheology/videos/2040669446199165/

Dr. Penner opens his talk by explaining how CSR emerged as a fourth way of accounting for religious experiences, after three previous etiological accounts (divine causation, Freudian, and Marxist). What CSR has going for it is that it can be (and has been) grounded in an expansive body of empirical research.

In part two, Penner gives a helpful introduction to the basics of CSR and whether it counts as evidence for theism (a topic he has written on elsewhere). The third part of his presentation addresses the relationship between CSR and theological methodology by proposing an analogy between theological and scientific laws. It is here that one of the more interesting points of his presentation arises; namely, the question of theological method and authority.

Penner queries which sources of theology should serve as a norm when answering theological questions. His answer: authoritative sources should be (a) domain-specific and (b) without hierarchy when placed alongside other primary sources (i.e., scripture, reason, science). He labels these two norms “Epistemic Pluralism.” Penner wants us to treat each source as authoritative for providing epistemic justification for some claims but not others. Only in specific domains may reason, experience, tradition, or scripture be given greater theological weight than the other sources. “Theology becomes disordered when we expect epistemic authority X to provide data about a domain that is beyond its epistemic pay grade,” says Penner. A sort of theological reductionism results if we aren’t careful.

Penner closes his talk on CSR with an illustration of two gardens; both appear overgrown. One is deliberately a sort of wild expression of nature in an urban setting. The other is genuinely abandoned and overgrown. Penner asks whether we should view the cognitive “tool-set” discovered by CSR like the deliberate garden (i.e. the cognitive “tools” are a feature allowed by God) or the unkempt garden (i.e. these cognitive “tools” are a bug in our mental system). His final slide implies that he tentatively sides with the former verdict.


The presentation as a whole had the feel of four potentially free-standing presentations combined into one. While very informative, this makes the talk as a whole hard to evaluate. We will thus limit our comments to part three of his presentation.

Part three was not about CSR but was instead a straightforward proposal on theological method. Penner suggests, first, that there should be no hierarchy among potential theological sources of authority outside of certain domains (i.e., across the board); second, that inside those domains, only certain sources should be given authority.

At first glance this seems reasonable. Most people today are willing to accept that the Bible is not trying to be a science book; science should be given authority on questions of science. However, Penner’s domain specificity seems like it would run into a problem any time two domains appear to speak clearly on the same topic. How do we adjudicate such cases? Think, for example, of origins debates, abortion, or gender roles. We can’t suggest that all theological questions fall into only one domain, can we? The above examples fall into at least two. How then does the principle of “domain specificity” aid us in these boundary-blurring questions? If Penner says each question falls into one and only one domain, how does he decide this? This is right where theological method matters most!

Second, there is a legitimate worry about our honesty when deciding what topics Scripture intends to speak about. J. P. Moreland suggests that in Western Christianity, each time a biblical mandate falls out of fashion culturally, theologians and biblical studies scholars suddenly discover that the Bible never intended to speak authoritatively on that issue. He states that he has always found the timing of such “discoveries” ironic. As my colleague Steven Nemes is wont to ask, “Is there room for scandal in our theological method? Is there room for God to flat out challenge the way we think the world should go?” The second worry, then, is whether “reason” in Penner’s model is susceptible to bias right where we need clarity most.

Finally, what is one to do if error distorts a source of authority in its own domain? If Christians are supposed to submit their theological conclusions to science (where science speaks clearly), then the clockwork picture of the universe provided by the mechanical sciences of the 18th century would likely rule out special divine action during those centuries. Today we know that this picture of science was inaccurate. Given the 20th century’s new view of physics, with its implications of ontological indeterminacy in quantum mechanics, the 18th century’s ban on divine action seems mistaken. Some theologians in the 18th century were ready to affirm divine action, but many ruled it out on scientific grounds. The domain-specific method seems to shackle us to error whenever error has crept into a domain to which scriptural interpretation is supposed to give way.

The flurry of questions above is not intended to imply that Penner’s proposal on theological method won’t work. Penner has hit a hot topic, however. Questions of authority and adjudication in theological method are just as crucial as ever. Minimally, our comments highlight the complications that await Penner’s model in real-world application. Specific case studies would be helpful. Whether we like his conclusions or not, proposals of this sort are the kind of conceptual tools we need as we incorporate cross-domain discoveries like those turned up by new research in CSR, neuroscience, cosmology, and epigenetics.

Jesse Gentile is a new PhD student at Fuller Seminary studying systematic theology. He has interests in theological anthropology, epistemology, ethics, technology, and pretty much everything else. Jesse is the father of two awesome elementary school kids and is the husband of Ella (who works as a wills and trusts attorney). He regularly does itinerant preaching among Plymouth Brethren assemblies throughout the U.S. Jesse holds degrees in Biblical Studies, Philosophy, and Instructional Design.

Kendrick Lamar, the Los Angeles based rapper, achieved a feat toward which many artists strive: a #1 Billboard charting hit. What made this hit unique was its subject matter—Humility—a subject not typically associated with rap music. In this song K-Dot (as Kendrick is affectionately known) enjoins other rappers to “sit down, be humble,” all while he raps about the greatness of his own skills. In doing so Kendrick raps humbly.

As a reader, you might be confused why I would say that Kendrick is being humble in this song. In fact, it sounds like he is being prideful! Under one common definition of what humility is, however, such an act would in fact be considered humble. This is the view often called the proper self-assessment view. According to this view, what it means to be humble is to have proper self-assessment, including proper assessment of one’s strengths and limitations. When it comes to rap, the subject of the song, Kendrick Lamar properly assesses himself as the best rapper currently in the rap scene, and he recognizes the very few rap limitations he has. Thus, if he’s correct in his self-assessment, he can be completely humble even while bragging about his skills. There are, however, other accounts of humility under which such bravado would not count as a humble act. For example, on the dependence account of humility, one recognizes oneself as dependent upon others. Or, on the limitations-owning account, one has a willingness to attend to the fact that one is unable to do certain things.

In a recent lecture, Dr. Josh Blander (The King’s College, New York) argued that the standard accounts of humility, like those I just described, are problematic because they may not be compatible with the claim that humility is a divine characteristic. So what is humility in Blander’s view?

HumilityB = A disposition to give little or no regard to perceived status or position for the sake of some good, particularly the highest good, of another.

Blander’s account of humility emphasizes that humility involves a willingness to serve others, especially if they lack or are perceived to lack status or position or some good with which they can reciprocate the act of service that was directed towards them. This account has the benefit, unlike the limitations owning account or the dependence account, that one can ascribe humility to God. (It seems to me that one could ascribe the proper self-assessment view to God, but for some people that might generate some unhappy consequences.)

Setting aside the fact that one might be able to say that God is humble in the traditional account of humility—the proper self-assessment view—Blander highlights an important issue: Divine Humility. Blander’s view, however, is not without some difficulties.

One such worry was raised by a number of seminar participants: Is this view exegetically sound? Blander derives his view primarily from Philippians 2, which the NRSV subtitles “Imitating Christ’s Humility.” So, in one sense, Blander is correct to highlight this passage for his purposes. However, an issue that needs more careful attention is the history of exegesis of this passage. Is this passage’s subject primarily the Logos asarkos or ensarkos? For Blander’s argument to work, Philippians 2 needs to be about the Logos asarkos, not even the Logos incarnandus. These are issues that need to be worked through carefully in order for humility, on Blander’s account, to be a purely divine virtue.

Another issue that could be raised is that HumilityB is not actually humility; it is just a species of love. In fact, HumilityB nearly perfectly fits two classic texts dealing with God’s love.

For God so loved the world that he gave his only Son, so that everyone who believes in him may not perish but may have eternal life. (John 3:16)

For while we were still weak, at the right time Christ died for the ungodly. Indeed, rarely will anyone die for a righteous person—though perhaps for a good person someone might actually dare to die. But God proves his love for us in that while we still were sinners Christ died for us. (Romans 5:6-8)

Both of these texts are primarily about God’s love. Both texts highlight God’s willingness to send the Son to die for the good of those who have nothing to offer God, namely sinners. In these two verses, we see that God has the “disposition to give little or no regard to perceived status or position for the sake of some good, particularly the highest good, of another.” What needs to be clarified, in Blander’s account of humility, is why such passages should be seen as exemplifications of humility, rather than love, despite the fact that these passages explicitly state that God’s actions here spring primarily from love.

Despite these two hurdles that Blander’s view faces, the presentation has prompted me to think more carefully about the nature of humility and whether or not we can genuinely speak of God’s humility. For what it’s worth, I am still attracted to the proper self-assessment view. Under this view, Kendrick Lamar, the greatest rapper today, and God, the greatest possible being, can both be humble in recognizing their greatness.

Christopher Woznicki is a PhD student in the Analytic Theology Project at Fuller Theological Seminary. He received a MA in Theology from Fuller Theological Seminary and a BA in Philosophy from UCLA. Christopher has written several journal and encyclopedia articles on Jonathan Edwards.


Coming in with home court advantage at our April 25th Analytic Theology seminar was Fuller’s own Veli-Matti Kärkkäinen (affectionately known around Fullerland as “VMK”). VMK proffered an exposition of—his term—“multidimensional monism” (MDM) as an account of human nature. Folks seem to love the word “monism” around this land of Fuller, where the work of such proponents as Nancey Murphy, Joel Green, Warren Brown, and Brad Strawn is well known for articulating some kind of monism. Some are happy to adopt a description of their views from the realm of philosophy such as “non-reductive physicalism,” a term VMK eschews. None have a snappier name for a view of human nature than Brown and Strawn’s “Complex Emergent Developmental Relational Linguistic Neurophysiologicalism,” but we can’t all be in marketing.

When it comes to discussions of the ontology of human beings there are some fairly easy diagnostic questions one can ask of someone to elicit one’s view:

1. Is everything that exists physical?
2. Are human beings essentially immaterial souls?
3. Are human beings entirely physical?

If you answer “yes” to the first question, you are a physicalist, but also probably not a Christian because Christians hold that God exists and God is not physical. If you answer “yes” to the second question, you are probably a substance dualist, holding that humans are essentially immaterial souls, but are typically connected to a human body. If you answer “yes” to the third question, you are also a physicalist, but you might answer “no” to the first question so you could think that God is not physical, but humans are.

Substance dualism and pure physicalism (or “reductive” physicalism…because everything about the human can be “reduced” to the physical) are two competing views on the nature of humans. But there is also a strange DMZ between these two extremes wherein wander non-reductive physicalists, hylemorphists, Complex Emergent Developmental Relational Linguistic Neurophysiologicalists, and, apparently, multidimensional monists. The wanderers of this realm want to maintain—with physicalists—a monism of a sort, but—with dualists—want to hold that not everything about the human is physical.

The problem with this DMZ, however, is that depending upon what angle one glances at the zone, these wanderers can look a lot like a resident of one of the two borderlands. This is the impression I had of VMK’s view. At times, he would say something about the corrupting Hellenistic influence on the Christian notion of the immaterial soul and I’d think he is leaning toward physicalism. At other times, he would say something like there is “more to human than just physical” and morality and ethics “calls for more than the material,” and I’d think he is leaning toward dualism.

What VMK wants to keep as paramount are a few principles: that there is an “integral connection” between brain events and one’s mental life, that Scripture does not require dualism or any particular ontology of the human being, that the ultimate reality is God and God is spirit, and that we should not downplay neuroscience. To the diagnostic questions, I think, VMK would say, 1: “no”; 2: “maybe/kinda/not sure”; and 3: “no.” So, if this is right, it seems to me that he is already about 80% of the way to the dualist side. VMK explicitly says he endorses property dualism, the predominant form of which is non-reductive physicalism (…don’t even ask me how a “monism” can be labeled a “dualism”) and that he accepts the causal power of the mental on the physical.

Is MDM still monist despite such dualist leanings? Well, the jury is still out. However, I conveniently failed to mention some of the more exotic wanderers in the DMZ between substance dualism and physicalism: these include your idealists, your Type F Monists, and your [insert ghoulish voice] panpsychists. These are tricky views to articulate, and the ink continues to flow in trying to do so. It might be that VMK’s MDM property dualism view can be identified with one of them. The wanderers of the DMZ between dualism and physicalism want to have their cake and eat it too.

Given all that VMK says about the existence and irreducibility of the mental, why not just be a substance dualist? Why continue to insist on being a monist when all indications point to dualism? My impression is that VMK is worried about “spooky” substance dualism: the kind of distorted neo-Platonist Gnosticism that holds the physical to be the locus of all that is horrible in the world (see here for an interesting take on this view in contemporary Evangelicalism). But, of course, that isn’t a necessary entailment of a substance dualist position. The dualist can just as easily describe an “integral connection” between the soul and the body, the brain and the mental life, without untoward ramifications (for a recent treatment, see Joshua Farris’s The Soul of Theological Anthropology). So, to VMK, I say, “bring some of those worries with you, but cross the border and come on over to the dualist side!”


One plausible implication of a thoroughly physicalist understanding of human beings is that humans do not have the sort of robust freedom that libertarians about free will say is needed for moral responsibility. Libertarians believe that, minimally, a person performs a free action, A, in a specific circumstance, C, only if she could have done otherwise (i.e., chosen not-A) within C. The motivation for this claim is that, necessarily, a person is free only insofar as whether she acts or refrains from acting is up to her. Now, consider what some think the findings of “science” entail: humans are nothing other than physical organisms, the movements and actions of which are traceable ultimately to the fundamental “laws of nature.” Gluons and protons, quarks and atoms, and so on behave as these sorts of things behave, in a mindless, law-like manner, owing what they do at any given time t to the laws of nature and the events of the past. Since humans are nothing other than various assortments of fundamental physical particles behaving as they do, human actions are explained fully by the events of the past and the laws of nature; what humans do is a product of what their individual material components, in combination, do. That’s a rough gloss on the matter, anyway. And it looks like there’s a puzzle here: is what “science” says about the human being true and, if so, does it undermine moral responsibility?

To address this question, Dr. Timothy O’Connor, Professor of Philosophy at Baylor University (though soon to return to Indiana University Bloomington), came to speak at our Analytic Theology Seminar on April 18. In so doing, he offered at least two points of worry. The first is what I understood to be a riff on the so-called “Consequence Argument,” a sketch of which is as follows:

Assume that physical determinism is true: the conjunction of the laws of nature and the events of the past entails all of the events of the future.

  1. A human person is not responsible for the laws of nature. [Assumption]
  2. A human person is not responsible for the events of the past. [Assumption]
  3. If a human person is neither responsible for the laws of nature nor the events of the past, then neither is she responsible for the conjunction of the laws of nature and the events of the past.
  4. A human person’s actions are entailed by the conjunction of the laws of nature and the events of the past. [Entailment of determinism]
  5. A human person is not responsible for her actions. [From 1 – 4]
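For readers who want the argument’s skeleton laid bare, here is one standard regimentation. It follows van Inwagen’s formulation of the Consequence Argument rather than O’Connor’s exact presentation, so treat it as my gloss: let N(p) abbreviate “p is true and no one has, or ever had, a choice about whether p,” let L be the laws of nature, P a complete description of the past, and A the proposition that the person performs her action.

```latex
\begin{align*}
1.&\quad N(L) && \text{no one has a choice about the laws of nature} \\
2.&\quad N(P) && \text{no one has a choice about the past} \\
3.&\quad N(P \land L) && \text{from 1, 2, by agglomeration} \\
4.&\quad \Box\big((P \land L) \rightarrow A\big) && \text{physical determinism} \\
5.&\quad N(A) && \text{from 3, 4, by the transfer rule } N(p),\ \Box(p \rightarrow q) \vdash N(q)
\end{align*}
```

The transfer rule in step 5 is the controversial piece: it says that if no one has a choice about p, and p necessitates q, then no one has a choice about q.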

O’Connor points out the worrying implication: if we want to take moral responsibility seriously, the Consequence Argument puts us on the horns of a dilemma. Either we accept the purported deliverances of science (that physical determinism is true) and find some way to hold people morally responsible who are not responsible for their actions, or else we insist that (5) is false and identify a premise to reject. But, given physicalism and physical determinism, the premises seem unassailable. To what, then, shall we object?

I understand O’Connor to have argued that the arguments for physical determinism, the assumption driving the Consequence Argument, are unconvincing. And this is for at least two reasons. First, it’s not clear that the physical sciences do suggest that future events are entailed in the way physical determinism claims, even if that is the majority report among physicists. Second, qualia are, at least sometimes, part of the explanation of human actions. ‘Qualia’ are phenomenal experiences: the “what it’s like” of human consciousness. For example: there’s a “what it’s like” to feeling drowsy enough to decide to get up from one’s chair to make coffee. The felt experience of drowsiness, so goes the argument, features in the explanation of why one might make coffee. Importantly, qualia resist bottom-up physical explanation. Indeed, they are a central and load-bearing difficulty in the physicalist’s so-called “Hard Problem” of consciousness (though van Inwagen might remind us that it’s not easy to explain consciousness for non-physicalists either). And if this is correct, if qualia—things that aren’t reducible to physical events—are part of the explanation of human actions, then physical determinism is false. It’s not the case that every future event is an entailment of the laws of nature and the events of the past, because some events aren’t physical events. Here’s the upshot: perhaps some of these undetermined events are human actions for which humans are morally responsible.

For my part, while I have theological motivations that lean me away from libertarian accounts of freedom—that is, compatibilist accounts (on which determinism of some variety and human freedom are compatible) don’t strike me as theologically counterintuitive in the way they strike many thinkers—I find the Consequence Argument persuasive. It’s hard to deny its premises, given physical determinism. Add to this that O’Connor provided some reasons to think that even the purported scientific evidence for determinism is underdetermining. If so, I’m inclined to think that physical determinism, at least, is false.

J. T. Turner is a Research Associate on Fuller Theological Seminary’s Analytic Theology Project for the 2016 – 2017 academic year. He holds a PhD from The University of Edinburgh, a ThM from Erskine College and Seminary, and an MA and BS from Liberty University. Turner’s current research projects include writing a book on the metaphysics of afterlife in Christian theology, and work on constructing an analytic theology of what some biblical theologians call “holistic eschatology.”


A person who spends some significant amount of time studying philosophy and theology very quickly learns that language can be tricky. It is quite difficult to speak clearly and precisely about the things with which these domains are concerned. For that reason, clarity and precision of expression are two centrally important virtues of any work of analytic philosophy or theology. Tim Pawl’s presentation in this week’s Analytic Theology seminar, titled “The Metaphysics of the Incarnation: Christ’s Human Nature,” attempts to bring some clarity and precision into theological discourse about the humanity of Christ.

His lecture was concerned with the following questions. How should theologians and philosophers speak about Christ’s human nature? More specifically, is His human nature to be conceived abstractly, along broadly Platonic lines (in the contemporary/analytic sense of “Platonic”), or concretely, as more or less identifiable with his human body? Moreover, is it possible to make predications of Christ’s human nature without falling into Nestorianism? For example, if we say that Christ is both passible and impassible, does that commit us to the existence of two persons, the passible human person and the impassible divine person?

In response to the first question, Pawl suggests that the debate between abstract and concrete conceptualizations of Christ’s human nature is not as theologically substantial as it might initially seem. If a person were to opt for the Platonic interpretation of Christ’s nature, she would still have to grant that He had an ensouled human body that exemplified that abstract nature, which gestated in the womb of Mary, which consumed food and produced spittle, which was beaten and crucified, etc. If she opts for the concrete interpretation of Christ’s human nature, she simply identifies the nature with that ensouled body, while still granting that it perfectly exemplified the abstract nature of a human being. The difference is more one of words than of substance. In any case, we can use the phrase “human element” to refer to the ensouled human body of Christ, leaving open the question of whether or not we identify Christ’s human nature with His human element.

Moving on from this matter, then, the question is then raised as to whether it is possible to make substantial predications of Christ’s human element, or whether it is strongly ineffable. A person may think that there is a danger of falling into Nestorianism when making predications of the human element: after all, if we say that the Logos is impassible, on the one hand, and that the human element which belonged to Him suffered, on the other, does not that require us, on pain of contradiction, to admit that the human element is a distinct person capable of suffering?

Pawl responds to this issue by distinguishing different modes of predication. He claims that there is no universal principle of apt predication by which we can distinguish between predicates which do not apply to Christ’s human nature and those which do. Instead, he opts for a careful consideration of the language of the Ecumenical Councils. There we find that some predicates apply to Christ’s nature and others to Him as person. For example, the Councils of the Church predicate of Christ’s human nature that it was hanged and pierced (physical predicates), that it makes judgments and statements (intellectual predicates), and that it wills (volitional predicates). On the other hand, the Councils forbid the predication of “hypostasis” or “person” to the human nature of Christ. How can we understand this?

Pawl introduces the notion of “supposit” (or “suppositum”) into the discussion in order to distinguish between three different modes of predication. Some predicates — to which Pawl gives the unfortunate designation “suppository” — require that their subject be a supposit. Other “non-suppository” predicates do not require that their subject be a supposit. Finally, there are “faculty” predicates, which are appropriate only for subjects that are faculties of a certain sort. “Man” is an example of a suppository predicate: to call something a “man” is to call it a supposit with a human nature. Thus, conciliar Christianity rejects the notion that “man” is predicable of Christ’s human nature because His human nature is not a supposit or substance or hypostasis — it is anhypostatic, not belonging to any person when considered in itself, and enhypostatized once it is assumed by Christ (though Pawl does not make explicit reference to this distinction). On the other hand, both “passible” and “impassible” are predicable of Christ because they are not suppository predicates. On Pawl’s analysis, “passible” means “possesses a nature that can be causally affected,” whereas “impassible” means “possesses a nature that cannot be causally affected.” Thus, Christ is both passible and impassible because He possesses a nature which can be causally affected, namely His human nature, and one which cannot be so affected, namely His divine nature.

Returning in the end to the earlier discussion about abstract vs. concrete conceptualizations of Christ’s human nature, Pawl proposes a “formula of union” that should prove acceptable to proponents of either view: The Word, the Second Person of the Trinity, became incarnate in the human element of Christ — that flesh-and-blood composite of body and soul. As incarnate, the Word hung on a cross, bled, and willed the salvation of souls — all in the suppository sense of those terms. Moreover, Jesus’s human element, too, bled, was hung on a cross, and willed the salvation of souls — all in the non-suppository sense of these predicates. The distinction between “suppository” and “non-suppository” predication helps us to understand how these phrases can be true without requiring Nestorianism.

Importantly, Pawl emphasizes that the distinction between “suppository” and “non-suppository” predication is already recognized as a feature of ordinary language. If I cut my hand, it is equally appropriate to say that it is bleeding and that I am bleeding, but in the first case the predicate is applied not to a supposit but to a part of one, and in the second case it is applied to the supposit itself. The implication, then, is that the predication of different terms of Christ, such as “passible” and “impassible,” can be understood along the same lines as the predication of a term of a part of me that would apply equally to me as a supposit. Christ’s natures are understood as “parts,” in a sense, which can be isolated in speech and of which various terms can be predicated that are not equally applicable to both natures. At the same time, the predicates can be applied equally well to Christ the supposit. So Christ’s human nature can be said to suffer, and so can Christ Himself; Christ’s divine nature can be said to be impassible, and so can Christ Himself.

I am very sympathetic both to Tim Pawl’s methodological commitment to the language and vision of the Ecumenical Councils and to the distinction he proposes between various forms of predication for the sake of clarifying their theological language. I also appreciate his effort to move past the debate about competing conceptions of Christ’s human nature in order to get at the heart of the matter. I think that his proposals regarding the interpretation of various predicates adequately recognize the unique case of the Incarnation: the fact that in the God-man we have a perfectly singular phenomenon which calls for a careful reconsideration of the way in which we use predicative language. Indeed, precisely because Christ has two natures, in contradistinction to every other possible subject of predication, new possibilities of language open up for us, so that we can say, without contradiction, “Christ the impassible God suffered and died on the Cross.”

Steven Nemes is a PhD student at Fuller Theological Seminary whose research primarily concerns philosophical theology.

In our first blog, we summarized Helen De Cruz’s recent presentation of a Transmission Model of original sin. We also looked at a possible problem with her attempt to reappropriate Joseph Henrich’s use of the Price equation. However, at the end of blog one, we granted her this key move in mirroring Henrich’s use of the equation. Doing so allows us to move forward in her presentation to a bigger problem, which we address here.

During Part Five of her presentation, De Cruz explains how she follows Henrich’s use of the Price equation in attempting to model a Social Transmission account of sin. His study attempts to explain the loss of artifact-making abilities among indigenous Tasmanians between the last ice age and the arrival of Europeans. His answer has to do with the fact that the Tasmanians were cut off from the rest of Australia by rising sea levels. This lowered the number of people that indigenous Tasmanians could interact with in their generation-by-generation pattern of learning to make advanced tools. As a population shrinks, fewer and fewer people are trying to learn to make advanced tools, and, as a result, fewer and fewer people are coming up with innovations on tool usage. As the population shrinks further, they even lose the ability to make tools they had known how to make in previous generations. This brings us to the second part of how De Cruz uses the Price equation. We will, however, need to explain more of what Henrich does in his paper in order to make clear where De Cruz takes a wrong turn.

De Cruz’s presentation makes clear that she follows Henrich’s steps by simplifying her Price equation from ∆z̅ = Cov(f,z) + E(f∆z) to ∆z̅ = zh – z̅ + Δzh, just as he does. Readers need not worry about the math involved in the simplification; Henrich gives the details on why he can do this in Appendix A of his article. More importantly, Henrich knew that his study had to account for mistakes that Tasmanians made in their attempts to learn how to make tools from the best exemplars in their community. While the best tool maker has the highest z score (represented by zh), other people in the community do less well and earn lower z scores. However, once in a while, someone gets lucky and actually comes up with an innovation on the skill they were trying to copy from zh. This person then becomes the new zh. That skill gets spread through the community and the culture’s abilities evolve upward over time.

Henrich’s paper captures the influence that mistakes and innovations have on the evolution of a population by translating the Price equation into this simpler equation: Δz̅ = –α + β(γ + ln(N)). De Cruz applies the exact same modification in her talk. Again, don’t worry about the math. (Do note, in passing, that the ability to make this simplification from the original Price equation rests exactly on our granting De Cruz the assumption, discussed in Part One of our blog, that everyone follows the highest exemplar. Without that assumption she can’t move on to this simplified equation.)

While the equation Δz̅ = –α + β(γ + ln(N)) may look intimidating, the concepts at play are easy to understand. In the equation, α is a number that represents how difficult some skill is to learn; it captures the mistakes that are made in the learning process. The larger α is, the more it pushes down the population’s rate of change over time (Δz̅). In plain English, the harder a thing is to learn, the slower that skill will spread through the population over time. That is not all that is at play, however. β represents the lucky guesses and innovations people make when they try to learn the same skill, and it raises the rate of change in the average skill of a population over time (Δz̅). These β-driven changes are rare, though. However, the larger a population of skill learners is, the more frequently such lucky changes will emerge. This boost from the β term (thanks to population growth) can overcome the negative effect of the mistakes made in mimicking a skill (α). The size of the population is represented by N in the right-hand side of Henrich’s equation: Δz̅ = –α + β(γ + ln(N)).
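To make the population effect concrete, here is a minimal Python sketch of Henrich’s simplified equation. The parameter values are my own illustrative choices, not Henrich’s estimates:

```python
import math

def delta_z_bar(alpha, beta, gamma, n):
    """Henrich's simplified cultural Price equation: the per-generation
    change in a population's average skill level."""
    return -alpha + beta * (gamma + math.log(n))

# Illustrative values: a fairly hard skill (alpha) and rare innovations (beta).
alpha, beta, gamma = 2.0, 0.5, 0.1

# A small, isolated population loses the skill; larger populations gain it.
for n in [40, 1000, 10000]:
    print(n, delta_z_bar(alpha, beta, gamma, n))
```

With these numbers, Δz̅ is negative at N = 40 and grows steadily more positive as N increases, which is just Henrich’s Tasmania story in miniature.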

At last, we arrive at the other problem with De Cruz’s use of the Price equation. The gist of Henrich’s entire study is that the larger the population (N), the higher Δz̅ will be, and vice versa. His argument was that when sea levels rose, the Tasmanians were cut off from the rest of the Australian population of learners who were trying to figure out skills and coming up with innovations. Their N value in the above equation dropped below 1000, and this helps to explain why their Δz̅ dropped and they lost the ability to make tools over time. Trace your finger along the x axis in “Figure 3” below and note how the shift in population affects the change in the average z̅ of the population (y axis). Then turn your attention to how De Cruz uses this concept.


De Cruz’s presentation suggests that Δz̅ is the overall increase or decrease in the rate of conformity to the moral norms of a hominid population. For her, α represents the distorting influences of socially transmitted sin. β represents the ability of individual people to reflect on the moral norms of their community and improve them – as Schleiermacher and Rauschenbusch taught people could do. So far so good.

Now we come to something perplexing. De Cruz seems to use the equation in a way opposite to how Henrich does. During her explanation of her use of the equation, she says:

“. . .you will see that as a population size increases (and that’s what sort of cancels this out) that the distorting influences of socially transmitted sin will also increase. So the cultural Price equation predicts that we individual members of the community will inevitably be impacted by sin. That lowers the community’s moral standard. And overly large alpha values… lots of transmitted sinful behaviors will lower the average z value. So what that means is that individuals need to reflect and not critically accept all the moral ideas in their societies. They can make a difference.”[8] [Emphasis mine]

If you look back at Henrich’s equation, Δz̅ = –α + β(γ + ln(N)), you will see that the rise in population (N) is independent of α. Contra De Cruz, as population rises, α does not rise. A rise in population (N) affects only the rate of helpful innovations (β). Recall that, according to the order of operations, the figures inside the parentheses are first multiplied by β and only then added to –α.

What is the point here? The difficulty of doing something (α, the level of an individual’s failure in attempting to conform to a social norm) does not rise with the population size (N); it stays constant. According to Henrich, as population rises, and innovation (i.e. the β term) with it, the negative effects of α can actually be overcome. So, contrary to De Cruz, the Price equation as used by Henrich does not predict that individual members of the community will inevitably be impacted by sin. Ironically, if we apply Henrich’s use of the model to De Cruz’s hominid population, then the Price equation may predict just the opposite. As population (N) grows, the effects of sin (α) will be offset, because people are innovating better ways (i.e. β) to accomplish z. In other words, the overall average conformity to the population’s morality (Δz̅) will actually increase with their improvements – if the population grows.[9] If this analysis is correct, then, contrary to De Cruz, the Price equation (as modified by Henrich) doesn’t show that the Augustinian-Pelagian dichotomy is a false dichotomy. It instead seems to imply a Pelagian-like ability to overcome the downward pull of sin, such that society gets better and better over time – as a result of population growth.
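The offsetting effect can even be made precise. Setting Δz̅ = 0 in Henrich’s equation and solving for N gives a critical population size, N* = exp(α/β − γ), above which innovation outpaces the downward pull of α. This derivation is mine, applied to De Cruz’s reading of the variables, with made-up parameter values:

```python
import math

def delta_z_bar(alpha, beta, gamma, n):
    """Henrich's simplified equation: -alpha + beta*(gamma + ln N)."""
    return -alpha + beta * (gamma + math.log(n))

def critical_population(alpha, beta, gamma):
    """Solve -alpha + beta*(gamma + ln N) = 0 for N: the population size
    at which innovation exactly offsets transmission loss."""
    return math.exp(alpha / beta - gamma)

# Illustrative values (not De Cruz's or Henrich's).
alpha, beta, gamma = 2.0, 0.5, 0.1
n_star = critical_population(alpha, beta, gamma)

# Below n_star, the average (on De Cruz's reading, conformity to moral
# norms) declines each generation; above it, the average improves.
print(n_star)
```

On De Cruz’s interpretation this is the “Pelagian” point: once the community grows past N*, its average moral conformity trends upward rather than downward.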

In conclusion, it should be noted that our two blog responses to De Cruz do not detract from her excellent points on the social aspect of the transmission of sin. At least, they will furnish Dr. De Cruz with examples of the sorts of questions that may get asked about the more statistical aspect of her project. At most, they will require a reconsideration of the type of equation that is used to model how sin is transmitted through social groups.


[8] From video around 41 minutes. See: https://www.facebook.com/analytictheology/videos/2021690228097087/

[9] If none of this makes sense, simply look at Figure 3 from Henrich. As population (x axis) increases, the overall rate of accomplishing z increases. For De Cruz this would mean that as population grows, the ability to conform to the population’s moral norm grows with it.


Jesse Gentile is a new PhD student at Fuller Seminary studying systematic theology. He has interests in theological anthropology, epistemology, ethics, technology and pretty much everything else. Jesse is the father of two awesome elementary school kids and is the husband of Ella (who works as a wills and trust attorney). He regularly does itinerant preaching among Plymouth Brethren assemblies throughout the U.S. Jesse holds degrees in Biblical Studies, Philosophy, and Instructional Design.

The Statistics of Sin in Hominid Populations: Checking the Math on De Cruz’s Use of the Price Equation (Part One)

2018 marks the third year of the Analytic Theology project at Fuller. This year’s theme is theological anthropology. It would have been unfortunate if the year had passed without anyone addressing the question of humanity’s struggle with sin. Happily, this topic was boldly engaged by Dr. Helen De Cruz in a presentation titled, “Transmission of Original Sin: A Cultural Evolutionary Model.”

Dr. De Cruz’s presentation proposes a social transmission model of original sin to answer two questions in hamartiology. First, how is sin transmitted from generation to generation (i.e., the mechanism of transmission)? Second, what should we make of the doctrine of original guilt? How can humans be blameworthy, if at all, for the sins committed by humans before us? You can watch her presentation online here: https://www.facebook.com/analytictheology/videos/2021690228097087/


De Cruz’s presentation is organized into six sections. Part One introduces the topic of sin theologically by contrasting Augustinian and Eastern Orthodox approaches to original sin. Part Two presents three models of transmission: an Augustinian model, a Federalist model, and a social transmission model. De Cruz further focuses on two leading examples of social transmission models, those of Friedrich Schleiermacher and Walter Rauschenbusch. In Part Three she enumerates recent empirical studies that lend support to a social transmission model. These include research from developmental psychology and sociology on topics such as over-imitation, promiscuous normativity, and child development. Part Four of the presentation turns to paleoanthropology to locate the earliest possible point in hominid history offering evidence of the capacity for “God consciousness” necessary for moral responsibility. In other words, if a transmission model of sin is correct, at what point might the transmitting of sin have begun? In Part Five De Cruz uses the Price equation to mathematically model the spread of negative or positive behaviors throughout the early human population. She closes with implications of the model for ascribing guilt and blame to individuals.


Given our limited space, I want to focus on the more novel aspect of De Cruz’s talk: her use of the Price equation to model the transmission of sin throughout early hominid culture. In her defense, De Cruz made superb use of her presentation time to cover a wide variety of issues. She states clearly that she can only briefly explain how the Price equation portrays the Social Transmission account of sin. However, in her talk, she refers us to a 2004 article by Joseph Henrich as the place to look for an explanation of how the equation works. Having looked carefully at Henrich’s paper, I find that two significant questions arise about how she uses the equation. This first blog (Part One) will attempt to help the reader understand enough of Henrich’s project to (a) see and (b) think through a possibly faulty assumption De Cruz is making in her otherwise impressive presentation. A second blog (Part Two) will continue to unpack Henrich’s project so as to raise a possible (and more serious) mistake De Cruz may have made in her use of the Price equation.

Joseph Henrich’s 2004 article analyzes the gradual decline of tool usage by the indigenous population in Tasmania between the last ice age and the arrival of Europeans.[1] He attempts to give an account of the factors affecting this decline of more advanced cultural artifacts (e.g. the ability to make complicated hunting tools) in this population vis-à-vis other indigenous groups on the Australian mainland. At the end of the last ice age, sea levels rose, cutting off the Tasmanians from the mainland. Gradually they lost the ability to make advanced tools while mainland populations did not. To account for the decline of skills, Henrich employs a version of the Price equation, introduced by George Price in the 1970s as a way to account mathematically for the rise or fall of traits in evolving populations.[2] The equation has wide applicability in population studies and various sciences.

∆z̅ = Cov(f,z) + E(f∆z)

First, the equation, briefly. Its left-hand side, ∆z̄, is the equation’s answer: the change over time (∆; e.g. from one generation to the next) in the average ability of a population (hence the bar over the z̄) to use an advanced skill like making a special fishing net.[3] If ∆z̄ is above zero for multiple generations in a row, the skill is spreading through the population from generation to generation. If it is below zero for multiple generations in a row, the population is forgetting how to use that skill. The equation captures this sort of change over time.

Our first question deals with the first term on the right-hand side of the equation, Cov(f,z). De Cruz mimics an assumption that Henrich makes in his paper about Cov(f,z), and this assumption may not be warranted in her case. Cov(f,z) is the covariance between z and f.[4] z is a number given to each Tasmanian to represent how good he is at performing a certain skill, and f is a number representing how likely others are to mimic that person in their own attempts to learn how to perform that skill. Henrich simplifies his equation by assuming that whenever z is the highest of all the z values in the population, f can be set to a value of one. In plain English, he assumes all the Tasmanians always copied the artisan with the best skills (i.e. the highest z value). This allows him to simplify his equation to ∆z̅ = zh – z̅ + Δzh.[5]
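The collapse from the full equation to zh – z̅ + Δzh can be checked numerically. Treat f as a copying weight with population mean 1, and put all of that weight on the most skilled individual. The numbers below are toy values of my own, not Henrich’s data:

```python
# Toy check of the "copy the best exemplar" simplification of the
# Price equation.
z  = [2.0, 5.0, 3.0, 4.0]    # each learner's skill score this generation
dz = [0.1, 0.4, -0.2, 0.0]   # each learner's change while being copied

def mean(xs):
    return sum(xs) / len(xs)

def price_delta(f, z, dz):
    """Full Price equation, Cov(f, z) + E(f * dz), for copying
    weights f normalized to have population mean 1."""
    cov = mean([fi * zi for fi, zi in zip(f, z)]) - mean(f) * mean(z)
    return cov + mean([fi * di for fi, di in zip(f, dz)])

# Put all copying weight on the most skilled individual, z_h,
# normalized so that f still has mean 1.
h = z.index(max(z))
f = [float(len(z)) if i == h else 0.0 for i in range(len(z))]

full = price_delta(f, z, dz)
simplified = z[h] - mean(z) + dz[h]   # z_h - z-bar + delta z_h
print(full, simplified)               # the two agree
```

With any other spread of copying weights the two expressions come apart, which is why the “everyone copies the best” assumption matters so much in what follows.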

In De Cruz’s presentation, she assumes that z will represent the capacity to perform the overall moral norms of a hominid community, rather than the capacity to make some particular tool or use a particular fishing skill. The Price equation can accommodate that. However, she then assumes that she too can set the f value (she uses the letter c instead) to one.[6] She states in her presentation: “Assume that every member of a hominid community strives to fulfill the normative ideals of their community (the highest possible z-value in the community), zh. So, we’re not talking about some sort of absolute moral ideal but just the sort of ideal that you have within a community.”[7] Then she proceeds, like Henrich, to set her c value to one for the highest z value and to zero for any lower z value, and simplifies her equation to ∆z̅ = zh – z̅ + Δzh just as he does. Is this assumption and simplification warranted in her case?

In Henrich’s case, zh was the person in the community with the highest skill (the h subscript stands for “highest”) at performing some tool-making craft. For De Cruz, zh would have to be the person in the community with the highest display of moral norms. Now, for our question: is it right for De Cruz to assume that all hominids are interested in copying the person with the highest moral norms? It seems plausible that most hominids in a population would want to copy the best fishing-net maker or the best crawfish diver (i.e. zh). Can De Cruz assume that, analogously, all the hominids in a population would try to mimic the hominid that best displayed the moral norms of the community? The disanalogy seems significant enough to raise a shadow of doubt. The motivation to learn to make the best spear and the motivation to approximate a community’s most moral member seem too different. If this worry is right, then De Cruz is not warranted in simplifying her c (Henrich’s f) to a value of one whenever zh is plugged into Cov(f,z) and zero when any other community member is plugged in. If she is not warranted, she can’t simplify her Price equation to ∆z̅ = zh – z̅ + Δzh as Henrich does. And if she can’t simplify, then it doesn’t seem she can make use of his subsequent steps and arrive at the final conclusion about what the Price equation “predicts.”

We could conclude our discussion here. However, a second and more worrisome problem with De Cruz’s use of the Price equation begs to be addressed, and getting to it requires granting that the assumption examined above goes through. Let’s therefore conclude this post by granting De Cruz’s assumption so that we can turn to a more interesting question in a subsequent post: whether the Price equation, as used by Henrich, actually predicts a more Pelagian outcome for the transmission of sin.



[1] Henrich, Joseph. “Demography and Cultural Evolution: How Adaptive Cultural Processes Can Produce Maladaptive Losses—The Tasmanian Case.” American Antiquity 69, no. 2 (April 2004): 197–214.

[2] See Gardner, Andy. “The Price Equation.” Current Biology 18, no. 5 (March 2008): R198–R202.

[3] Imagine giving each person in a population of 1000 Tasmanians a number (e.g. 1 to 10) that represents their ability to make an advanced fishing net. Average all 1000 numbers.

[4] Covariance is simply a mathematical way of showing how two sets of data vary in relationship to each other. Imagine a table with monthly ice cream sales in the first column and average temperatures in the second column. These data points are related; they co-vary together. As temperatures rise, ice cream sales rise: they co-vary positively (i.e. their correlation, which is just covariance scaled to run from negative one to one, approaches one). If the second column instead held winter-coat sales, we would expect them to co-vary negatively (i.e. the correlation would approach negative one): as temperatures rose, coat sales would drop. The closer the value gets to zero, the less relationship there is between the data sets (e.g. masking-tape sales and temperature should give a value close to zero).

[5] See Appendix A in Henrich’s article.

[6] Remember that she uses c. Again, this is a number that indicates how likely someone is to mimic zh, the person in the community with the highest z value.

[7] Quote taken from De Cruz’s presentation slides.

Jesse Gentile is a new PhD student at Fuller Seminary studying systematic theology. He has interests in theological anthropology, epistemology, ethics, technology, and pretty much everything else. Jesse is the father of two awesome elementary school kids and is the husband of Ella (who works as a wills and trusts attorney). He regularly does itinerant preaching among Plymouth Brethren assemblies throughout the U.S. Jesse holds degrees in Biblical Studies, Philosophy, and Instructional Design.