Derek Parfit (1942-2017)

Derek Parfit.

It was with great sadness that I learned today that one of my all-time favourite philosophers died overnight, at the very start of this year. He was an odd character, certainly, but he had a fantastic mind for philosophy and went out of his comfort zone to push his field forward. For this reason he was also, in my opinion, a great academic example of collegiality, transparency, and engagement in the social and public life of academia; despite his reclusive nature, he engaged readily and frequently with the media and fellow researchers in order to get at, and share, the ‘truth’.
 
In true Parfit style, in the closing paragraphs of the third volume of On What Matters (to be published next month by OUP), he wrote: “I regret that, in a book called On What Matters, I have said so little about what matters. I hope to say more in what would be my Volume Four.”
 
Sadly, we may never read Volume Four. However, thanks to Peter Singer, who kindly shared them, we may read Parfit’s final, fitting printed words:
 
“What now matters most is how we respond to various risks to the survival of humanity. We are creating some of these risks, and discovering how we could respond to these and other risks. If we reduce these risks, and humanity survives the next few centuries, our descendants or successors could end these risks by spreading through this galaxy.
 
Life can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine. In Nietzsche’s words, there has never been such a new dawn and clear horizon, and such an open sea.
 
If we are the only rational beings in the Universe, as some recent evidence suggests, it matters even more whether we shall have descendants or successors during the billions of years in which that would be possible. Some of our successors might live lives and create worlds that, though failing to justify past suffering, would give us all, including some of those who have suffered, reasons to be glad that the Universe exists.”
 
Rest well, mighty mind.

Why animals should be treated as co-citizens

Dogs on a farm in Canada. (Photo: Martin Cathrae)

Cats, dogs, dolphins, chimps, and humans – we’re all technically animals, but do some of us deserve more rights than others? There is a tiny town in northern Spain that thinks not. In late July, the municipality of Trigueros del Valle unanimously passed a local law which officially defines cats and dogs in the town as ‘non-human residents’.

“The mayor must represent not just the human residents but must also be here for the others,” the Spanish town’s mayor told The Independent.

While it might seem a bit far-fetched, the idea that non-human animals should be given human-like rights is gaining traction in jurisdictions from India and Argentina to Romania and the United States. But what are the cultural and philosophical implications of all this? And isn’t giving ‘human-like rights’ going a bit too far? I don’t think so. In fact, I think we should be prepared to grant animals not just human-like ‘rights’ or ‘residency’ but citizenship, and to let their interests be directly represented in our governments.

Such a radical shift in thinking about non-human animals is unlikely to occur quickly, and there seem to be some clear stepping stones which will first need to be reached. One of the most pivotal steps centres on the debate over whether some animals have a level of ‘personhood’ that can be legally meaningful.

Personhood – the idea that an entity has the essential capacities of a person, like self-consciousness, intellect, and the experience of suffering and complex emotional states – comes in different forms and to varying degrees. We would not say, for instance, that a human infant was criminally liable for their own actions, even if those actions caused serious harm to another human, since we know the infant wouldn’t be properly aware of their own actions or the consequences of those actions. However, if a competent adult harmed an infant, that adult would definitely be criminally liable, since they have adequate foresight and self-awareness. In this context, then, the adult and the infant have different levels of legal personhood, and this is reflected in how the law treats them.

If non-human animals like chimpanzees or orangutans can be argued to be at least somewhat equivalent in a legal or moral sense (courtesy of their intellectual and other human-like capacities) to a human person – even on the level of a human infant – then courts could be persuaded to recognise them as non-human persons. And with the status of personhood can come great things.

The English Somerset case, which gave an African slave his freedom in 1772, was prompted by a writ of habeas corpus, a legal summons that requires the custodian of a prisoner to demonstrate before a court that their detention of the person in question is lawful. Animal rights activists in Argentina and New York have argued that the same legal summons should be employed to require a zoo or university to demonstrate their lawful detention of an orangutan or chimpanzee, respectively. The question for such cases hinges on whether the zoo or university is detaining a legally-defined person.

While these cases are currently ongoing, some politicians and scientists have already made up their minds, and it’s easy to see why. Advanced non-human animals like dolphins and chimpanzees are highly intelligent and share huge swathes of genetic heritage with us humans. They lead rich emotional lives and have human-like capacities such as self-awareness.

But personhood should not be the only game in town when it comes to thinking about animals. My pet dog is quite dull, but that shouldn’t mean I can get away with mistreating it any more than I could get away with mistreating a clever orangutan. The ability to experience suffering, therefore, seems to be important in this regard. That said, there can be indirect negative effects, or suffering, resulting from killing something as basic as an ant (even though it’s unlikely insects experience pain). Sure, stepping on one accidentally from time to time won’t cause a catastrophe, but we couldn’t live in a world without ants altogether. They, along with the rest of the insect class, form the basis of the food chain: crops couldn’t grow and cows couldn’t eat grass if it weren’t for creepy crawlies. So even suffering doesn’t seem to capture everything there is to value about animals. In a broad sense, I think most of us recognise that we ought to value our ecosystem as a whole, if not for its own sake then for the sake of our own survival.

I’m not arguing that trees or bees ought to be considered our co-citizens, however, but rather the non-human animals which form a part of our societies: companion animals like dogs and cats, and working or production animals like horses and sheep. These are the animals we have actively enlisted into the ranks of our societies for our own purposes. They form the largest modern caste worldwide and are regularly exploited for financial gain without full consideration of their welfare.

Just as we wouldn’t expect a co-citizen to work their whole life and never be given adequate time for rest and relaxation, we shouldn’t expect this of animals. In most high-income countries, we expect that our co-citizens will enjoy a basic level of provision and protection in the forms of food, medicine, and housing; the same should be true for animals as well.

What might be a touch difficult is getting hoof and paw prints on electoral votes in a meaningful manner. Indeed, it’s highly unlikely any non-human animal could be expected to understand the complexities (and absurdities) of modern politics. This doesn’t mean, however, that we shouldn’t seek to know what is in their best interests and have those interests represented in our governments’ decision-making and services. We don’t expect children to know their best interests or be able to fully care for themselves outside of their families, but we still make concerted efforts to care for children who cannot be cared for by their own families; we, as a society, take it upon ourselves to care for them and to avoid their being exploited or abused. In the same way, perhaps we ought to create policies and agencies which care for animals.

It is likely that the way we treat animals will change, and one day we might even call them our co-citizens. A few decades ago the animal rights movement seemed to some like a fringe fad, but it is now part of the mainstream. Call me barking mad, but I suspect that in a few more decades we might be talking about co-citizen adoption agencies rather than pet shops.

Friendship and partiality

Old friends sitting on a park bench in Guatemala. (Photo: Keneth Cruz)

On Monday 6 July 2015 I guest hosted BioethxChat – a weekly Twitter discussion based around bioethical issues – on the topic of friendship and partiality. We had 40 participants during the live, online discussion and a few of us continued conversing during the following days.

The discussion was organised into four topic questions, each discussed for approximately 10 or 15 minutes. These were:

  1. What is friendship? What makes a friend a true friend?
  2. Does friendship have intrinsic or merely instrumental value?
  3. Can impartialists coherently maintain true friendships or value friendships intrinsically?
  4. Is impartiality required in moral judgements? If so, to what degree? Does this change with context, e.g. public vs. private morality?

Although these questions are not bioethical in the conventional sense, their answers can be relevant to a range of bioethical dilemmas. Examples brought up in our discussion included a medical professional reporting on the malpractice of a peer who was also their friend, and medical professionals triaging patients in partial or impartial ways.

To many in the literature, and to many BioethxChat participants, friendships are something that complete or add to the persons involved. My brother once said to me that friendship is like a sculpture that two people work on. I agree, and I think that some of the most beautiful sculptures are those which can be viewed as beautiful from all sides – in the same way, friendships are most beautiful, I think, when both persons put effort into the friendship.

However, can a friendship be valued in itself, or does it need to result in something outside of itself to be worthy of value? A quintessentially consequentialist response is presumably that friendship should only be valued by what it gives rise to. But if what is being valued is not the friendship itself but rather what it gives rise to, is friendship truly being valued? Not according to arguments by Michael Stocker and others. In his paper ‘The Schizophrenia of Modern Ethical Theories’, Stocker gives the example of being bored and lonely in hospital when your friend, Smith, visits you not (as you find out) simply because he is your friend, but because he felt it was his duty (subscribing to deontology) or because he could think of no better way to improve utility (subscribing to consequentialism, specifically utilitarianism). It’s argued that Smith isn’t as true a friend as someone who visited because they valued your friendship. Many BioethxChat participants seemed to agree with this sentiment, and some said that in their close relationships and friendships, instrumental aspects were not a major part of what they valued. However, several participants noted that instrumental aspects can damage or give rise to intrinsic value, as in the case of a friend who ‘uses’ the friendship for their own benefit, or when two people begin a friendship and are only starting to develop their intrinsic valuing of one another.

My own view on this debate is that while this ‘problem of friendship’ for consequentialists (and impartialists generally) is a genuine and worrying problem, there are still options to consider. I present one such option, which I call the ‘personification solution’, in a paper recently submitted to a journal. I argue that if we agree that personhood grants some intrinsic value to a subject, and it can be shown that friendships possess some level of personhood, then friendships (and relationships generally) can be intrinsically valued by consequentialists, and perhaps other impartialists, to some degree. If consequentialists can coherently value relationships not only for their instrumental value but also for their intrinsic value, then this might allow them to engage in ‘true’ (or ‘truer’) friendships and relationships. Such a view also has broader implications for other normative perspectives on the ethics of friendship and relationships generally.

As for the ultimate roles of partiality and impartiality in ethical decision-making, at face value it seems that neither can be adopted without running into potential problems: being partial to friends and family is a natural and perhaps essential part of being human, but we can’t justifiably ignore a stranger’s dire needs. In a hospital emergency department, though, would it be right of a medical professional to see to their friend or family member’s ailment before a stranger’s? It probably depends on the ailments of both the friend or family member and the stranger, but to what degree? I’m not sure, but I know I regularly give friends and family members gifts on special occasions when I could have donated the money to charity or spent it more effectively.

The fault in our contexts

Adapted from John Green’s ‘The Fault in Our Stars’ cover.

Why does the public misunderstand the academy?

English biologist and author Richard Dawkins was in hot water recently after saying that, if given the choice, expectant mothers ought to abort their foetus if tests confirm it to have Down’s syndrome. Australian ethicist Peter Singer recently faced similar public backlash for stating that a man who committed suicide to avoid prison may have been acting “rationally”.

Should we begin dismantling the ivory tower now, or does this merely point to repairable structural faults? In this potential academic’s appraisal: the fault is in our contexts.

Context was the name of the intellectual game in 1967, or at least one French philosopher thought so. Jacques Derrida, the founder of deconstructive criticism – an approach to the study of meaning – set course for context, and yet still managed to have his own words yanked out of it. “Il n’y a pas de hors-texte” (there is no outside-text), he wrote in his seminal work, Of Grammatology. It took but a few years for critics to begin insisting he instead meant ‘Il n’y a rien en dehors du texte’ (there is nothing outside of the text). Irony couldn’t have been crueller; any full reading of the quote within its context would avoid such embarrassing mistranslations or misinterpretations.

So differing contexts can even confuse discourse between individual ivory towers (the adjoining canopy is dense, no doubt). If even professional philosophers can get lost, it’s no fault of anyone’s that the public can often misinterpret the academy, and vice-versa.

“Evolution is just a theory,” some like to remind us. Religious adherents who believe that a divine being created us, and not that we evolved – along with everything else – from common ancestors, often attempt to discredit the scientific theory of evolution by abusing linguistic contexts. In the academic, specifically scientific, context, a theory is something which robustly explains natural phenomena and is developed in response to overwhelming experimental evidence, whereas in common parlance a theory ranks only mildly above an unsubstantiated suspicion.

When a scientist uses the phrase significant difference, they mean something quite specific in terms of statistics and probability. When a philosopher says an argument appears valid, they don’t mean its premises are true or that they agree with the conclusion, just that the conclusion logically follows if the premises are true.
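
To make the statistical usage concrete – a minimal gloss of the standard frequentist convention, my own addition rather than anything from the original post – a difference is called ‘significant’ roughly when

$$ p \;=\; \Pr(\text{data at least this extreme} \mid H_0) \;<\; \alpha, \qquad \text{commonly } \alpha = 0.05, $$

where $H_0$ is the null hypothesis that no real difference exists. Nothing in that inequality says the difference is large or important – which is exactly where the scientific and everyday senses of ‘significant’ diverge.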

Such differences are not limited to the meaning of individual words or phrases, however; often they lie in the very way an idea is articulated. In the academy, one might speak purely in the abstract, or use a shocking analogy to prove some underlying logic or principle. The analogy might seem obscene, or the abstract argument, if taken literally, absurd, but all of that is superficial – what matters is the principle at work or the logic at play. Articulating concepts in this way is often necessary when they are sufficiently complex or counterintuitive, as many important ones are.

These clashes of context are often what drives humour, especially those lame two-liners: ‘Never iron a four-leaf clover. You don’t want to press your luck.’ However, when public and academic contexts collide, it can be anything but funny.

Down’s syndrome is a genetic disorder which causes developmental delay and intellectual disability. Of the many resultant medical problems that can arise in the first few years of life, heart and blood diseases are the most serious and life-threatening. The majority of children with the syndrome require early and ongoing educational and medical intervention, and most do not graduate from secondary school. Independence in adulthood is varied, though many report a good quality of life.

Dawkins would rather an expectant mother abort a foetus with Down’s syndrome and attempt another pregnancy. This may seem unduly harsh, but let’s consider the reasons one might hold this view.

First, we must agree that abortion – in principle – is morally acceptable. The stronger version of this argument relies, I think, on the concept of moral personhood: the foetus is not a person, and lacks the capacities (and therefore the status) of a person. A foetus cannot understand itself or its surroundings, for instance. (Incidentally, this sort of reasoning is transferable to the ethics of euthanasia.) Justification based on a woman’s autonomy or reproductive rights is equally common, however, and sufficient for our purposes here.

Next we should decide whether the future person will have a good life, and (if we are to be the parent of this child) whether or not we can financially, emotionally, and otherwise support the child. Even if we could reasonably predict that the foetus would eventually become a happy adult, we might still have reason to think abortion preferable in order to avoid medical risks and problems, especially in early childhood. That we prevent the future potential person from ever existing cannot be put as a mark against us, just as a couple who chooses to abstain from reproduction entirely does nothing wrong by preventing future potential persons from coming into existence.

When Singer stated that some suicides could be rational, he also took likely future events or experiences into account. A key difference in this case, however, was that these experiences were inevitable due to the current existence of a person.

None of this is new territory for Dawkins or Singer. Most recently, in July, Dawkins weathered a similar barrage of criticism after saying that act X is worse than act Y, with others mistakenly thinking this implies he must also be saying act Y is acceptable. Not so. If I were to say that “two murders are worse than one,” does it follow that I find one murder acceptable? No; I have only said what is worse than what. The problem for Dawkins in this instance was that he (necessarily, per his chosen point) used examples of rape – hardly a non-emotive issue.
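
Put formally – my own gloss, not Dawkins’s – let $W(x, y)$ mean ‘x is worse than y’ and $A(y)$ mean ‘y is acceptable’. The criticism rests on an inference that simply isn’t valid:

$$ W(X, Y) \;\nvdash\; A(Y). $$

Worse-than is a comparative relation: it orders two acts without asserting anything about the moral status of either act on its own.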

This is another common contextual hurdle for academics. They are trained in objectivity, and approach sensitive issues from a rational standpoint. Comparatively, moral knee-jerks are an in-built and natural human response; our emotions all too easily pervade our reason.

Reconciliation of these contexts won’t come easily or quickly, nor with anything less than greater public exposure of academic discourse. Our ultimate aim should be to dissolve the dichotomy altogether by way of education and more rigorous public debate. If that’s to happen, controversies like these won’t fade away; they’ll become more common. Academics and the public should therefore brace for impact.

Author’s note: For help or information on depression, or if you are experiencing mental distress, contact your local medical or mental health service.

Diversity in Philosophy

Pages from Confucius Sinarum Philosophus (China’s Philosopher Confucius), a translated and annotated edition of three of the four Confucian “Four Books”, by Prospero Intorcetta, Philip Couplet, Rougemont, and Herdtrich. Paris, 1687.

Eugene Sun Park, a former doctoral candidate in philosophy “at a well-respected department in the Midwest [of the United States]”, recently wrote about his motivation for leaving the academy.

“Philosophy is predominantly white and predominantly male,” reads the opening line.

Being quite familiar with both whiteness and maleness (and having recently dived head-first into a philosophy department from the battlements of the sciences), I thought I’d compare notes with Eugene, and add a few thoughts.

Please, when and if you read this, Eugene, don’t for a moment think I don’t sympathise with your cause. In fact, I too want philosophy enriched by non-Western thought. (Interestingly, whenever I have had reason to go looking for classical views on philosophical issues, I’ve always made a point of searching specifically for Eastern thought first, and then Western thought.)

The closest tome of philosophical meanderings to me at the time of my writing this little piece is The Oxford Textbook of Clinical Research Ethics. A weighty thing, but an excellent and unique resource – it comes highly recommended. The first page of the contributors’ list bears six names – all are male, all but one have an Anglo-Saxon name, and all hail from either US or British universities. A cursory glance through the following five pages confirms the trend.

Given how significantly women and minorities have been oppressed, though, this should come as no great surprise. But do academics truly see a mirror of themselves in the lecture hall? A quick count of the students who gave presentations at the seminar I attended today put white males in the minority (despite our white male faculty member). So perhaps times are changing – at least where I am.

While Eugene would presumably welcome such diversity, I think he’d still call it superficial to some extent. He rightly takes issue with the traditional Western focus of modern philosophical thought; there’s no denying we currently focus on Socrates more than Confucius. But maybe we only do so because of those who teach us, and those who taught them. In the student presentations today, Kant came up once, and some Islamic philosophy came up at another point, but outside of those two references, we spoke exclusively in terms of principles, duties, rights, and so on.

Such discourse – centred on arguments and ideas, not on the people who presented them or their respective cultural backgrounds – affords the young philosopher room to explore any and all ideas she can find or knows of, whether they be Western in origin or not.

Perhaps all this points to the relative progressiveness of ethics as a sub-discipline more than it points to a larger shift within philosophy, or maybe that’s a lie and all this is just wishful thinking. In any case, we’ve got a ways to go, and it’s sad that we won’t have people like Eugene with us. We ought to take note and learn from his departure.

Is morality in contradiction with our evolution?

A mother gray langur (Semnopithecus entellus) holding her infant. Photographer: Nevil Zaveri.

Humans, Homo sapiens, are primates. Our unique genetic heritage can be traced back 85 million years, to when the distinctive order of primates arose from other mammals. It took more than 82.5 million years from then for the first members of our genus, Homo, to evolve. The first species of the genus, Homo habilis, fashioned rudimentary stone tools and lived in small groups similar in size to those of modern chimpanzees. This small group size afforded two distinctive advantages: protection from predators and enhanced efficiency in food gathering. In other words, our ancestors cooperated to survive (much like we do today). However, it wasn’t until another two million years, and many evolutionary steps, later that the first anatomically modern humans evolved (between 400,000 and 250,000 years ago). During this long stretch of evolutionary time, and even in the relatively short period since, humans have evolved to become the type of animal they are today – flaws and all.

The consequence of this is that, as our societies and technologies have progressed, various traits and features which once ensured our species’ development and survival are now less relied upon or are altogether inappropriate. These include anatomical vestiges, like the recurrent laryngeal nerve, which, instead of running directly from the upper parts of the vagus nerve in the head and neck to the larynx (our voice box), makes a massive detour down to the heart and back up again. The same anatomical vestige exists in the giraffe, where the nerve travels from the top of the neck all the way down, around the heart, and back up again. This unnecessary and indirect route is all owed to our fish ancestors, in which this nerve first developed and took a very direct route. Humans also possess behavioural vestiges, like the ‘goose-bumps’ reaction we get when we are cold (which helped keep our hairy ancestors warm) and when we are frightened (which helped make our hairy ancestors look bigger, meaner, and more threatening to potential attackers).

It’s probably unimportant for us to bother changing the routes of our nerves to be more efficient, or removing now-innocuous in-built behaviours like ‘goose-bumps’, but would it be worthwhile (or even possible) to change the innate human intuitions which influence what we call morality? In the case of impartialist normative theories, I think we can consciously reason past these intuitions to some extent, and that this is worthwhile, but we are sometimes working against the grain of our biological hardwiring.

This is not always and completely the case, though. The sun-tailed monkey, Cercopithecus solatus, a fellow primate, is known to make warning calls to its group upon spotting nearby predators. However, this also draws the predator’s attention and generally increases the chance that the individual making the warning call will itself be captured as prey. Through an ethical lens, this seems like a heroic case of self-sacrifice for the good of the many. But how many, exactly, and what is their relation to the one making the sacrifice? Since the monkeys’ mean group size is 17 individuals, and these individuals both know and are closely related to one another, the act might not have the same gravitas as the archetypal, heroic self-sacrifice we might imagine of some humans – whether historical or mythical figures like William Wallace and Hercules, or more recent activists like Gandhi and Martin Luther King Jr.

This type of kin altruism exhibited by sun-tailed monkeys (and other species), whereby altruism is limited to a few known, especially related, individuals, is also seen in humans. In a psychological experiment on humans led by Jens Koed Madsen of University College London, participants held a painful skiing position for as long as they wished, and the longer they held the position, the greater the reward for a related family member. Participants held the painful position longest for those to whom they were most closely related, indicating that human altruism is affected by relatedness to the benefiting individual.
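
The standard way to formalise kin altruism – the textbook model behind such findings, though not one cited in the studies mentioned above – is Hamilton’s rule, which says an altruistic behaviour is favoured by selection when

$$ r\,b \;>\; c, $$

where $r$ is the genetic relatedness between actor and recipient, $b$ is the benefit to the recipient, and $c$ is the cost to the actor. On this model it is unsurprising that the monkeys’ groups are small and closely related, and that Madsen’s participants endured the most pain for their closest relatives: the higher $r$ is, the greater the cost an altruistic act can carry and still pay off.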

While we may have evolved to be partial to those closest to us, and impartialist theories like utilitarianism and Kantianism go against this evolutionary bias, it is helpful that we have at least some intuitive altruism (albeit not necessarily ‘true’ altruism, it being, in natural contexts, directed primarily or exclusively towards our kin, i.e., kin altruism). Sound argumentation may nevertheless enable us to direct this intuitive altruism towards a larger number of less-related individuals, like the millions around the world suffering and dying from preventable causes.

However, reason might not always win over vestigial moral intuitions, at least at first. Jonathan Haidt’s psychological experiments on moral knee-jerk reactions to what could be reasoned to be a morally acceptable instance of incest demonstrated that, at least in our initial reactions to some scenarios, our intuitions can persist in spite of being shown to be unreasonable. This could indicate that reason simply takes its time, or needs to be very convincing, to have an effect on our thinking.

Devastating the devastation of devastating arguments against religion

Roman Catholic monks of the Order of Saint Benedict singing Vespers on Holy Saturday at St. Mary’s Abbey in Morristown, New Jersey. (Source: John Stephen Dwyer)

I was once more easily drawn in by the concourse of religious-versus-secular debates one can find anywhere from social media to shopping centres. They are debates essential to human flourishing, I think, since they wrestle with the fundamental questions and presumptions of our existence and therefore our living. Though not one to normally shy away from argument, I have recently been consciously distancing myself from such conversations – especially in the online sphere.

My main reason is that the arguments are repetitive and a bore: nothing new is learned or gained by either party. An ideal debate is one which refines the positions, though I should note that this does not necessarily mean that either position need be ‘weakened’ in the mind of the debater. But, breaking my silence despite this debate’s regular and peculiar futility, I wanted to briefly reply to an article refuting some of the common arguments of atheists.

Why has this broken my silence? Quite simply, because I have noted the undeserved traction it has gained with otherwise intelligent, well-meaning Christians in my circle of acquaintances and, dare I admit, friends.

The first claim refuted in the article is that religion is the primary cause of war. Cited is the authoritative Encyclopedia of Wars, which finds that only some 7% of all recorded wars were driven by religion. This is not a particularly novel statistic to employ in one’s counterargument, and it is a simple task for even the most novice of discerning observers to pose the question: by what definition? The definitions, upon closer inspection, are highly specific and unmatched to common parlance, giving rise to the misapprehension. In effect, the Christian or religious defender deliberately oversimplifies the complexities and motives of ‘war’, reducing it to the single dimension they most commonly complain about anything else being reduced to: religiosity. At best, then, this is a misunderstanding of an academic text; at worst, another dubious hypocrisy.

Surprisingly omitted, though easily found elsewhere, is the natural extension of this bone of contention: that Stalin and Pol Pot and all these other wicked, godless men committed terrible genocide – in the name of godlessness, is the implication. (More common still is the insistence that Hitler was an atheist, despite there being no clear evidence of his personal beliefs.) Perhaps in this extension, though, is the revelation that the faithful are flocking to a peripheral issue rather than appreciating the salient point: that there is no logical pathway from irreligion to violence, but there exist many logical pathways from religion to violence. Again, the godly defender tries to have it both ways, claiming her critics are untrustworthy for their simplifications but then wavering into fly-by-night oversimplification herself. Then again, irony has never been more lost than on some believers.

Second on the list of ‘devastating’ arguments is the claim that religion’s days are numbered due to the progress of scientific inquiry. To refute this, the author quotes the growth figures of major religions (and irreligion) from the World Religion Database. Naturally, these show the rapid growth of religions in the developing world, particularly Asia. Perhaps in another bout of unappreciated irony (upon entering the realm of objective facts, there is no pseudo-philosophical meandering that can rally any serious offence or defence), the term ‘projection’ and its implications appear entirely lost on the author. These projections show the rise of specific religions in specific parts of the world precisely because those regions are experiencing specific and unmatched population growth. Hint, hint.

Perhaps what actually is devastating is the lack of family planning options in India and elsewhere, due in no small part to the archaic, poverty-binding views espoused by ‘saints’ like Mother Teresa. Perhaps what is devastating is so many people’s lack of access to education – incidentally, the one thing that repeatedly correlates with irreligiosity and less-fundamentalist religious views.

Penultimately, the history war over the ‘Dark Ages’ rears its ugly head. While the High and Late Middle Ages were quite amenable to the progress of science, acknowledgement must be made of the stagnation of the Early Middle Ages. Expand one’s study to the entire period, and with it the social and economic life of its inhabitants, and the many religiously-derived detriments become apparent. To put it plainly, the Dark Ages might not have been as dark as originally thought, but they were still dark.

The final ‘devastating’ argument presented for ridicule is the outdated Christ myth theory. How laughable indeed. In light of modern evidence it is a truly ridiculous theory, and perhaps the only one worth this author’s refutation. Though, in unknowing disparagement, even US comedian and talk-show host Bill Maher fell for this old-hat gimmick. Then again, he also spouts an unhealthy and unethical denialism of vaccine efficacy and safety, and to suggest anyone is altogether infallible would no doubt be asking for a miracle – even by religious standards.

What might have appeared to some as a brief polemic of polemics is actually a hapless list of ill-conceived and, at times, strawman defences against some of the genuine questions that bear considering for believers and non-believers alike.