To “sleep cosmologically against a rock”: the fractured—and great—mind of Fernando Pessoa birthed this phrase in his lonely and angst-ridden “factless” semi-autobiography The Book of Disquiet. Though one could argue—and quite convincingly at that—that Pessoa’s outlook on life closely resembles the criteria for clinical depression and he is thus only someone to be regarded as imitable if you dislike getting out of bed in the morning, I have always thought he captured a great truth about imagination, and therefore literature, in this phrase. Reading is a cosmological experience, one that frees you from the material technicalities of your existence and transports you to wherever in the universe you want to go. With a book, you can indeed live cosmologically against a rock, perhaps forever if the fancy takes you—but you shouldn’t.

Great literature can absolve you of the need to engage with the world, but the greatest strength of literature is its capacity to transform both you and, by extension, the world around you. Opting out of reality, as Pessoa seems to think we should, is certainly a service books can provide, but opting in—and opting in better—is the ultimate gift of great literature. Azar Nafisi, an Iranian-American academic and writer—and one of my favourite authors—knew this well. When we eventually emerge from great literature, she thinks, we do so with a fresh perspective on the world around us. She taught literature at the University of Tehran under the Islamic regime, and is best known for her enchanting and deeply poignant book Reading Lolita in Tehran. In her most recent book, The Republic of Imagination—an exploration of the link between democracy and fiction—she writes:

If my students in Iran and millions of other brave souls like Malala and Ramin risked their lives in order to preserve their individual integrity, their access to free thought and education, what will we risk to preserve our access to this Republic of Imagination? To say that only repressive regimes require art and imagination is to belittle life itself. It is not pain and brutality that engender the need to write or the desire to read. If we believe in the first three words of the constitution, “we the people,” then we know that the task of defending the right to imagination and free thought is the responsibility not just of writers and publishers but of readers, too. I am reminded of Nabokov’s statement that “readers are born free and ought to remain free.” We have learned to protest when writers are imprisoned, or when their books are censored and banned. But what about readers? Who will protect us? What if a writer publishes a book and no one is there to read it?

“Until I feared I would lose it, I never loved to read. One does not love breathing.” So says Scout in To Kill a Mockingbird, expressing the feelings of millions. We must read, and we must continue to read the great subversive books, our own and others. That right can be guaranteed only by the active participation of every one of us, citizen readers.

In this passage, Nafisi is not referring to the much-debated and contested “free speech crises” playing out very publicly—or merely purportedly, if you are in the camp doing the contesting—across college and university campuses, and much more softly in the words unspoken between friends and colleagues and in the hushes that cannot be quantified. Instead, Nafisi is talking about a danger that springs from nonchalance towards imagination and fiction, a nonchalance produced by not knowing how good you have it and nurtured by cuts in public spending and the disappearance of libraries. It is these circumstances, Nafisi thinks, that lead to the devaluing of books, art and free inquiry—and this devaluing plays out in the real world as a loss of insight and perspective. It is our duty as readers, therefore, not to allow our imaginations to dissipate in this way.

Safeguarding imagination from ideology

I think a parallel can be drawn between her imploration to readers to protect the Republic of Imagination from nonchalance and the need to protect it from ideology. By this I am not, strictly speaking, referring to censorship in art and literature—though this, of course, is a problem. Indeed, given the fortitude that led Nafisi to teach great American works of fiction to women within the Islamic Republic of Iran—at great personal risk, amid the turmoil of the politicisation and moralisation of books and fiction—so that they could access a freedom in literature denied to them in their lives under the regime, I would certainly wonder what she thinks about the fact that two of the books she speaks of—Huckleberry Finn and To Kill a Mockingbird—are being banned from curriculums as we speak, or that some publishing houses are being pressured to drop authors while others, as well as individual authors, hire censors to prevent their books from getting “cancelled”. But I need not guess. In a podcast with Iran’s Weekly Wire, Nafisi says:

It is so amazing. I am always amazed by how much of my own experiences that I am talking about now, and the ideas that I have formulated now come from Iran, and my experiences over there, both good and bad, because I experienced political correctness in its extremist form, where you were punished for thinking differently, and sometimes what you thought differently might have been terrible, but punishment itself is not enough. Political correctness usually targets things that need to be corrected, when we talk about insulting women, or insulting people of other races or nationalities, all of these are things that go back to our real values and principles in life, and they should not be taken lightly. But when you have people in schools censoring Huckleberry Finn because they call it racist, without at least having the debate within the classrooms rather than just eliminating, allowing people to genuinely experience something, and form their own ideas, then it becomes dangerous.

I think that political correctness, in the form of ideology, in the form of didactic, self righteous preaching to others, runs against imagination, because what imagination does, it puts you into the experience of all sorts of people.

Under this ideology, censorship is allowed to creep its way back into the very places people flee to from totalitarian regimes that, in banning books and censoring the arts, sought to remove their imagination—and, according to Nafisi, by extension their identity. It is typically those privileged to live in places with free expression who fail to recognise the importance, and tenuousness, of such freedom—a freedom that requires constant defence to remain in place. This can lead them to dismiss, or even demonise, those who warn against the creeping of censorship. Yet, as with Nafisi’s hypothetical book without a reader, book banning and censoring in the west, implemented by institutions rather than the state or government, is not synonymous with an inability to access literature. In the world of the internet, most societies—even if they had the inclination—would have a great deal of trouble genuinely ousting books or ideas. This is especially true in democratic countries.

A book that is removed from a school syllabus is not a book one would have trouble finding oneself; likewise, ideas deemed dangerous might be hard to share if your livelihood depends on support from certain sections of the public sphere (as in much of academia and journalism), but there is no shortage of platforms springing up on the internet for such ideas. I am more concerned, then, with the way many of us are being taught to read—especially in English departments within universities and, increasingly, in schools. There are more ways to lose our access to imagination than simply not reading great works—reading them through a (predominantly) dogmatic ideological lens will bring about such a loss just as well.

Literary Theory

The ideological lens I am referring to here is known broadly as “literary theory”—or simply, “theory”. There are different literary theories—and therefore lenses—one can choose from: queer literary theory, postcolonial literary theory, feminist literary theory, Marxist literary theory—the list goes on. In Beginning Theory: An Introduction to Literary and Cultural Theory, Peter Barry outlines the points in common between these different lenses—the comments in square brackets are mine:

    • Politics is pervasive. [Broadly speaking, this refers to the rejection of an apolitical or dispassionate reading or writing of a text; we can neither interpret a text without some kind of lens nor write a piece of literature without some kind of political context, and theory merely makes the lens utilised explicit.]
    • Language is constitutive. [Language does not merely affect our reality, but actively constructs, limits and shapes it—and therefore has the power to oppress or liberate.]
    • Truth is provisional. [There is no one truth; much of what we take for granted as “fixed” or “stable” truths are in fact socially constructed.]
    • Meaning is contingent. [The meaning in texts is not to be found by working out the “facts of the matter”; instead, meanings are constantly shifting and changing rather than being fixed and absolute.]
    • Human nature is a myth. [Within theory, human nature is held to be not only an inaccurate depiction of humanity but an oppressive one—in so far as what is deemed human nature is, purportedly, often male and “Eurocentric”.]

So, the different lenses tend to hold the above points in common and differ with regard to the oppressed group they focus on; each interprets literature so that the oppression of that group is salient—i.e., each requires the close reading of literature to find evidence of oppression and oppressive representations (even when no such reading makes sense—as was the case when Helen Pluckrose’s reading of Shakespeare’s Othello was problematised for not bringing out racist themes that, given the historical context of Shakespeare’s times, were unlikely to be there).

In the ‘70s, English departments began to morph and grow, shaking off the limits placed on them by “New Criticism”—an approach to literature that kept the focus narrow, concerned mainly with the aesthetics of the works themselves—and embraced “literary theory”. English departments thus began to delve into philosophy, history and culture, so that lectures might more closely resemble women’s studies and other identity-based fields within academia than spaces for the exploration of great writing. Lisa Schubert writes in For the “Public Good”: Contradictions in Contemporary Literary Theory:

Literary critics (by this time a no longer sufficiently defined, free-standing category, but always preceded by a further denomination: feminists, black, black feminists, lesbian, Marxist etc.) questioned not just the literature of the canon and the images therein but the very ideologies masked behind these works, the predominantly Western white male values, assumptions and ways of thinking imbued in the texts. Empowering the marginalized groups that they represented, some critics began to reject anything that resembled a “traditional” (read: white male) theoretical approach to literature. Indeed, battle lines were drawn, as critics like Barbara Smith in “Towards A Black Feminist Criticism” effectively told white male critics to keep their theoretical hands off black women’s art.

More recently, Lennard Davis, a professor at the University of Illinois at Chicago, said, in reference to the state of the modern English department:

English departments might be seen, also, as semiology departments involved in studying the signs and meanings of such things. Cultural studies, which was often housed in English, has risen and fallen as a trendy topic. Now the idea that one should study the semiology of culture is so built into the system as to be invisible to the ordinary student.

[…] To some, English (and the language to which it is linked) is seen as yoked to an oppressive history of conquest, enslavement and imperialism. Hence, another feature of the moment is decolonising the curriculum. This reshaping of the canon of literature now includes paying attention to the global south. It also means reconsidering the European basis of English culture, to the extent that foundational texts like those of Plato or Aristotle are being challenged as “white” and “Eurocentric”.

Of course, it seems neither much of a shock nor a massive indictment of an English studies course that it might be Eurocentric (unless we deem it problematic that an African studies course might be Afrocentric). Still, broadening the horizons of English departments is not necessarily a bad thing; expanding the range of authors studied, and bringing culture, philosophy and history into the study of literature, can surely make for a deeper and more interesting analysis of texts—indeed, texts in English studies courses never used to be so divorced from context in the first place. So it is not efforts to diversify and expand literature and syllabuses that are the problem, or even the reading of works through a lens of political struggle. After all, reading works from around the globe and from people with differing life experiences and perspectives is, I think, one of the best ways to expand your own world, to empathise with others, and to gain fuller and deeper perspectives on life.

It is certainly enriching and beneficial for Western students to become acquainted with thinkers and societies outside the West (especially since the West itself includes all kinds of people and cultures), be they from Iran, the land of poets—their Persian myths and histories retained and immortalised in the works of Ferdowsi, author of the Shahnameh—or any number of other places, each with its own unique history of thought and philosophy, of intellectual achievement and progression as well as struggle, war and strife. And, yes, books can be political. And, yes, books can provide lessons about morally heinous power imbalances and can make us question accepted truths and norms—Huckleberry Finn being a prime example of such a book.

However, to actively and unapologetically politicise all literature, in the manner that a “critical lens” would dictate, seems to me not only akin to sacrilege but also largely restrictive and narrow. For all the talk of diversification and deeper perspectives, the lenses available in literary theory are all suspiciously ideologically aligned, at least in so far as they all fall in line with the radical left—and any criticism of such lenses is tolerated only if it reveals some oversight of some oppression. Deconstructing texts to the point where a real, or imagined, ideology of identity is all one can find in the words laid out in front of one is a travesty for books, writers and readers alike. As movements gain traction to “decolonise the curriculum” by removing books, to read canons through a “critical lens”—note, using a “critical lens” does not mean thinking critically; it means interpreting literature through the lens of a particular ideology—and to increase diversity of skin colour but often not diversity of viewpoint, it is essential that we, as readers, retain our sense of wonder and curiosity in books, ideas and fictional landscapes, regardless of the skin colour and sex of the author and characters. One does not find groups in literature, but individuals.

If the transforming power of books lies in their ability to endow one with new perspectives on life, then we must ask ourselves: what perspectives are available to those who read only, or predominantly, through a restricted set of ideological lenses? Is life enriched when political struggle and warring identities are said to permeate all levels of interaction, from the individual to the institutional? Are we empowered when all we can see is the myriad ways we have been oppressed? Can we move forward and enact real change to create a more compassionate and fairer society when we obsessively tally up the degree to which those around us are supposedly complicit in our marginalisation simply by glancing at the colour of their skin or their sex? And, perhaps most importantly: are these lenses true? One might forgive the cynicism of critical lenses if they accurately reflected reality. (“Critical theorists” would, of course, reject my terms, suspicious as they are of notions of truth and reality.) However, stopping to question the validity of such lenses is a task too often ignored.

So it seems to me, then, that Nafisi’s invocation of our duty, as readers, not to give in to complacency can just as well be transferred to our duty now, as readers, not to give in to ideologically homogeneous prescribed reading lists, to reductive analyses of literature through vacuous identity prisms, or to the idea that we might be unable to cope with words and characters that offend or bring about discomfort. We have a duty not to suck the life out of imagination by reading literature as if it were simply a bundle of identities warring for power or a seducing instrument of western power and “cis-gendered, white, male, heteronormativity”. The vitality of a book and an author is not determined by a group identity. It cannot be reduced to a way of spreading marginalising norms, perpetuating inequality, or speaking the “lived experience” of a whole group. Reading books by authors of differing ethnicities and backgrounds should not be conflated with reading people of one particular viewpoint who happen to share a skin colour.

The imagination of Western youth is not in danger from an inability to access ideas and literature considered offensive or immoral. It is in danger from a failure to read them, from a failure to really see and imagine, and from an ideology that tells young readers that the stories they read are about—and only about—the struggle between the oppressors and the oppressed. Such a failure of imagination, if continued, amounts to the death of literature and defies the beauty and truth that can be found in our great works of fiction—and, indeed, our non-fiction. Searching books for ways to drive artificial chasms between the characters themselves, and between readers and writings, based on differing group identities is a fundamental betrayal of literature; it is the responsibility of all of us who read to explore and appreciate the differing circumstances we reside in but, more importantly, to transcend these differences by falling into the vast landscapes of our common humanity. This is one place where the transforming power of books lies.

Nafisi thinks that in books and stories we can find home; I myself have certainly found a home in her books and many others’. But what is home if not that which is familiar? And how can we find that sense of familiarity in lands that appear so fundamentally different to the ones we know?

By appreciating that books offer the greatest avenue to connection across differing nationalities, ethnicities, places and times. By appreciating that in literature we manage to find ourselves in worlds and people that should not be our own but are, nonetheless.

Isobel Marston is a student of Philosophy at the University of Southampton and Counterweight’s Content Coordinator.

The question of whether or not there are biologically based psychological and cognitive differences between the sexes is a touchy subject in much feminist literature and increasingly—as these ideas have by now far outgrown the confines of academia—in society at large. Judith Butler has gone so far as to say that sex itself is a social construct that has “no ontological status” beyond our social realities; that is, the significance of sex is socially constructed in the way we classify it. Others argue for the weaker claim that only gender is a social construct. Typically, sex is thought of as one’s biology—as determined by the chromosomes one possesses and the ways in which one’s body is made up to aid sexual reproduction—and gender as the social expression or psychological traits typically associated with a particular sex.

Suspicion of claims of psychological and intellectual differences between the sexes is understandable when placed in its historical context. It was not so long ago (and is still the case in many parts of the world today, as well as in many pockets of societies that have otherwise come far in gender equality) that females were thought of as inherently subservient, irrational, intellectually subpar and generally inferior to men. Many of these ideas and stereotypes were dubiously justified by purported biological realities and used to keep women out of fields and jobs in which they were deemed incapable of succeeding. It is not surprising, then, that to some the women’s liberatory project seems to necessitate demonstrating the falsity of these ideas and stereotypes by denying biologically based differences between the sexes and showing these differences, or ideas of differences, to be the product of socialisation. This does not have to involve a Butler-esque radical suspicion of the biological categories of male and female; it merely requires a denial that such biological differences play a significant role in gender differences.

While this route is an understandable one, it can go too far. That is, if women’s liberation is deemed to only be achievable by eradicating stereotypes, one may be tempted to conclude that this involves demonstrating that women are identical to men in all ways—and that anyone who says otherwise is morally suspect or bigoted. However, from the correct claim that some ideas regarding gender differences and actual gender differences have been the product of misinformation and socialisation it does not follow that all ideas regarding gender differences and actual gender differences are the result of misinformation and socialisation. Indeed, given the years of evolutionary history that have produced the sexually dimorphic species that we are today, it would be quite startling if there were no differences in personality traits, interests, intellectual capabilities etc. between the sexes. Our cognitive and psychological traits, after all, are not under the remit of some immaterial Cartesian substance, but our physical brains.

The notion that anyone who doubts parity across all cognitive and psychological domains of the sexes is merely serving to reinforce, or reintroduce, ungrounded stereotypes we would be better off without, is misguided. Acknowledging differences between the sexes does not constitute or necessitate the disempowerment of one sex in favour of the other; in fact, I think a reasonable case can be made that—in our current times—the denial of genuine differences can send unnecessarily disempowering messages to women.

But figuring out exactly what these differences are, the degree to which they affect real world differential outcomes and the causes of these differences is much easier said than done.

Take IQ, for example. With the proliferation of podcasters in recent years, I have found myself accessing ideas and thinkers through a medium I am unused to, in the form of videos and recordings. As an avid book reader with a lot of love for the written word and less, so I thought, for the spoken one, my steps into this world were tentative. However, I found myself becoming enamoured with certain thinkers and speakers for their eloquence and wit in the face of situations and intellectual challenges that would leave most of us stammering and red-faced. Watching people who surely fit the bill of “brilliance”—I am thinking here of the likes of the late Christopher Hitchens and, for people closer to my own age, Coleman Hughes and Alex O’Connor—can be an awe-inspiring experience, and it filled me with a keen, but clearly unrealisable, desire to emulate them.

Sometimes compounding this sense that emulation was unlikely was the realisation that almost all of those I most admired, and who filled me with that sense of awe, tended to be men. There are of course many brilliant and awe-inspiring women, some of whom are on our Counterweight staff; however, the overwhelming majority of those who seemed to fit the bill of intellectual brilliance had or have an appendage that I lack. This became increasingly apparent once I had started accessing more and more ideas through YouTube. In reading, one is less inclined to notice the sex of the author—and therefore less inclined to ponder their lack of appendage and whether such a state of affairs constitutes a hindrance to intellectual progression.

I started to worry that this might be caused by my own sexism. In conversations with a friend, we had both acknowledged that the really smart people we have met or looked up to tend to be men. Perhaps, though, it just seemed this way to us? Maybe we simply perceive men as smarter. Or maybe brilliant women just get less attention in general. Perhaps we are less likely to buy the books of brilliant women, listen to their podcasts, or invite them to debates. Perhaps we, as a society, really do have deeply ingrained biases that affect both my perception of the relative intelligence of the most intelligent men and women and the ability of brilliant women to progress.

These thoughts are quite depressing. I have always thought that men and women were of equal intelligence; both my parents have doctorates in STEM and I have precious little experience of people treating me as inferior, or assuming a lesser intelligence, due to my sex. However, the creeping suspicion started that I would never be like my heroes. This, of course, is probably true for almost everyone regardless of their sex. However, it stings a bit more when it seems like this is not the result of sheer statistical unlikelihood but the hobbling inflicted on my sex by a potentially unfair and biased society. A prejudice, in fact, that works so effectively I seemed to be sexist myself despite holding no conscious ideas of IQ differences between the sexes.

At first glance, articles and research seem to support this depressing narrative: that women, despite being equally capable, are not considered to be as brilliant as men. For example, in an article titled ‘We Are Biased to Think Men Are Smarter, and That Hurts Women’, the author writes (referring to a study on gender bias):

“The odds of referring a woman were 38.3 percent lower when the job description mentioned brilliance,” the researchers report. This same bias was found whether the person making the recommendation was male or female. …

By many metrics, women are equal, if not superior, to men in the intellectual arena. “Girls make up over half of the children in gifted and talented programs,” the researchers note. “Women graduate from college at higher rates, as well as from master’s and doctoral programs.”

Yet the underlying prejudice persists. (It helps explain why children still think of scientists as male.) That means some of our smartest citizens are not getting the opportunities they deserve, which ultimately hurts everyone.

Bian and her colleagues point to two possible ways to fight this bias: “By changing the brilliance=men stereotype, or by making this stereotype irrelevant to decisions about employment.”

From another article, discussing a study on gender stereotypes:

“Overall, STEM fields are more likely to endorse the belief that you have to be brilliant to succeed,” he [Andrei Cimpian, the study co-author] says. “But there’s variation among the STEM fields, and that variation tracks with their diversity.” A 2015 study co-authored by Cimpian found that while women were earning around half of all PhDs in fields such as molecular biology and neuroscience, less than 20% of women were earning PhDs in physics and computer science, two disciplines commonly associated with “brilliance.” By contrast, around 70% of all PhDs in the humanities such as art history and psychology were earned by women at the time.

These articles and studies all seem to point towards the same conclusion: women are unfairly considered to be less brilliant than men. So, on the brink of denouncing all of western society and lamenting the unfairness imposed on me by drawing the short end of the stick of our species’ sexual dimorphism—why could I not be a clownfish?!—I came across some interesting information. One question, suspiciously absent from these articles, needs answering: are there differences between men and women at the very top end of the IQ distribution? The answer seems to be yes. While research tends to show that average IQ differences between men and women are non-existent or small—indeed, IQ tests are usually constructed so that no questions advantage men over women or vice versa—there seem to be large differences between the sexes in the shape of the IQ distributions. Namely, there are far more men at the extreme top end of the distribution as well as at the extreme low end. This means that, yes, there are more brilliant men—if we define brilliance in terms of exceptionally high IQs. The differences, however, seem to be more pronounced at the low end of the distributions than at the high end. There is a decent amount of evidence in favour of this. See here, here, here, here and here for studies that seem to confirm the male variability hypothesis—the hypothesis that males show more variability than females in certain traits. Two of these studies are very large cross-cultural meta-analyses. Another includes two Scottish population samples of 11-year-olds, one containing around 71,000 children and the other 81,000. Another is a 30-year study with 1,173,350 test scores spanning 20 years for SAT-mathematical ability and 440,369 test scores spanning 30 years for ACT-mathematics and ACT-science.
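The distributional point above can be sketched numerically. The snippet below is purely illustrative: the means, standard deviations and cutoff are assumptions chosen for the example (IQ conventionally has mean 100 and SD 15), not figures taken from the studies cited. It shows how two normal distributions with identical means but modestly different spreads diverge sharply in the tails.

```python
import math

def normal_tail(x, mu, sigma):
    # P(X > x) for a normal distribution with mean mu and standard
    # deviation sigma, computed via the complementary error function.
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

MU = 100.0            # identical mean for both hypothetical groups
SIGMA_WIDE = 15.0     # assumed higher-variance group
SIGMA_NARROW = 14.0   # assumed lower-variance group
CUTOFF = 145.0        # an arbitrary "brilliance" threshold, 3 SDs above the mean

p_wide = normal_tail(CUTOFF, MU, SIGMA_WIDE)
p_narrow = normal_tail(CUTOFF, MU, SIGMA_NARROW)

print(f"P(IQ > {CUTOFF:.0f}) with SD {SIGMA_WIDE}: {p_wide:.5%}")
print(f"P(IQ > {CUTOFF:.0f}) with SD {SIGMA_NARROW}: {p_narrow:.5%}")
print(f"Overrepresentation ratio at the cutoff: {p_wide / p_narrow:.2f}")
```

With these illustrative numbers, the wider distribution yields roughly twice as many individuals above the cutoff despite the identical averages; the same arithmetic, mirrored, produces the overrepresentation at the low tail.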

So, is it settled? Is the male variability hypothesis confirmed, and can we finally put to rest the question of biological or genetic cognitive differences between men and women? Alas, probably not. It is clear that males are more variable than females in some countries. However, disentangling nature from nurture is an often impossible task, and it is not yet clear to what degree these IQ differences are the product of genetic variability, perhaps greater in men because of how they evolved for sexual competition, or of environmental and social factors. Indeed, it often makes little sense to pose the question as an either-or in the first place. The nature versus nurture debate is almost always a false dichotomy—the development of organisms is not so neatly divided into broad categories of biology versus environment. Further, the biological is itself malleable. And, for all the countries that show homogeneous findings of greater male variability, there are still some, albeit fewer, countries in which studies have found male and female variability in IQ to be equal, and even countries that show greater female variability. Further, male variability seems to shrink somewhat as societies progress in gender equality, suggesting that IQ differences are not the sole product of genetics.

There is some conflicting information regarding the degree to which male variability tracks gender equality. One meta-analysis finds that mathematical variability correlates with gender equity measures, another finds that higher male variability is nearly universal in countries with comparable assessments, and a further study finds that the shrinking of the male-to-female advantage at the very top end of mathematical ability (the top 5%) has more or less stopped within the last two decades. That is, it shrank rapidly when many of the barriers in the way of women were removed, and then plateaued.

Why do I always admire smart men, then? Is it because there are more men with exceptionally high IQs? And is this a reality only because of a prejudiced society, or the result of genetics, or, more likely, some combination of the two? Or perhaps it has less to do with IQ and more to do with lower average neuroticism and higher average aggression in men, helping them make it to the top of their professions in extremely competitive markets. Or perhaps it really is the result of prejudice and discrimination. In all honesty, I am unsure, and I am not convinced we have the data that will conclusively settle these issues. What I do think, however, is that finding out is important. And these issues are unnecessarily polarised, given the importance of the answers for the functioning of our society and given that working out the data and sifting through the studies is genuinely hard. It will not do, for example, to fire, or pressure into resigning, those with whom we disagree on this topic.

Imagine, for example, that it is the case that men are genetically more variable than women, leading to far more men on the top end of the IQ scale. If we ignore this, demonise those who research it or simply suggest it as a possibility, what happens? What is the cost of viewing all disparities between the sexes as the result of bias and prejudice if they are not? Without heavy social manipulation, we will never see perfectly equitable outcomes. This means that fields in which men tend to excel will always be viewed as sexist, even in a hypothetically perfect utopia of meritocracy. What does it say to a young female mathematician who is herself brilliant that she will likely be received badly in the academic world, that she will be falsely perceived as less smart than those with whom she has comparable or superior ability, that she will likely fail to progress and be deemed unsuited for jobs she is perfectly capable of doing? When we ignore differences in capability, we convince those who are capable that they will not be perceived as such. And we bolster the view of those who are not, and never were, capable that their failures are the fault of bias rather than the genetic lottery.

It is important to note that the current evidence supporting differences between the sexes at the top end of the IQ distributions does not entail that all differences of representation in academically demanding jobs are the result of these differences. Nor does it entail that women are always perceived as less brilliant because they are less brilliant. Even stereotypes that have at least some basis in reality are still stereotypes: shortcuts that are often wrongly applied to, and affect our perception of, those who do not fit them. Further, there is a very large degree of overlap between the sexes, and certainly a lot of women with high IQs. The differences we are seeing are, at the risk of repeating myself, at the very top end of the IQ distribution, which means that the majority of men and women will not be affected in the least by them.

On the other hand, let’s say that greater male variability is primarily the product of socialisation. First, this would not negate the importance of taking into account current differences in the IQ distribution: it would not help, for example, to force equitable outcomes at high levels if there are not enough women to successfully compete at those levels, regardless of what caused this skew. Instead, research would need to be done to determine how exactly these different distributions come about and what could be done to reduce them in the long term.

We must, then, be cautious of four things. First, of suppressing or denying truths that are socially unpopular, the effects of which will always play out in differential outcomes in the real world yet distort our ability to effectively deal with, and interpret, them—to the ultimate disservice of women. Second, of utilising these truths to lazily justify all differential outcomes, or to ignore the effects of bias and prejudice that can result from wrongly applying knowledge of differences in the IQ distribution to the individuals in front of us, who should be assessed only on their individual capabilities. Third, of conflating differences in IQ distribution with proof of genetic differences between men and women. And fourth, of trying to solve complex problems with surface solutions like forced equitable outcomes.

We could all do with stepping outside our echo chambers when it comes to matters such as these, which are prone to politicisation from both sides of the spectrum. Whether your echo chamber more closely resembles publications like the Guardian, the “intellectual dark web”, or something more right-wing, the likelihood is that you are getting an incomplete picture of the issues at hand, packaged up to look neater and more conclusive than the data actually allow.

Isobel Marston is Counterweight’s Content Coordinator & a student of philosophy at the University of Southampton.

On what do we disagree?

This is a question which admits many answers. We might offer the safety of vaccines, the cause of COVID-19 or the best way to achieve social justice, to name but a few points of contention in 2021. In fact, take pretty much any belief that you hold to be true and—no matter how much you think the evidence points to its unquestionable veracity—somebody will disagree. What I find interesting, however, is when topics become so emotionally charged that people apparently—and vehemently—disagree over moral issues to an authoritarian, stifle-your-opponent extent, when I am convinced their morals are really not so far apart. Our discussions around racism often work like this. Here, the more interesting questions become: on what do we agree, and why have we found ourselves disagreeing?

Before we answer these questions, let’s take a step back and explore how disagreements can serve to help or harm a given society, how the current disagreements around racism are falling into the ‘harm’ category, and why finding at least some ethical consensus in this domain is vital for the functionality of our society.

 

The social significance of disagreement

That members of the human race find themselves in constant states of disagreement with one another is indisputable (challenge me if you like; you will only be supporting my point). From genetic dispositions and personality traits to personal life experiences and even indoctrination, one will find no lack of sources to blame for such a state of affairs. Luckily, though, throughout the western world disagreement and conflict are much less likely to degenerate into violence than in times gone by. Since all of you reading this have survived thus far, and have probably had your fair share of disagreements, it seems that it is not disputes per se that are inherently dangerous and threatening. Instead, it is how differences of opinion are handled that distinguishes a society liable to degenerate into intra-societal tribalism and warfare at the drop of a hat from one that can capitalise on differing opinions to the advantage of everyone.

The latter is the ideal of a liberal society.

Liberal societies are predicated upon viewpoint diversity and utilise disagreements to keep hashing out ideas until we find the ones that work best, whilst allowing people the freedom to retain their own personal beliefs. Upholding freedom of belief is crucial for functional societies since it serves to circumvent the outbreak of violence and repression that can occur when we start to think that everyone ought to believe what we believe, and that they should be made to do so by any means necessary. Of course, the totalitarian impulse can still rear its head when it comes to sensitive moral issues; it becomes harder to respect another’s freedom of belief if, say, that belief is one in which torturing a child or killing a puppy is thought to be justifiable. But the belief, if it has not transformed into action, is still one that people have the right to hold.

In our current discussions regarding racism, we can see this totalitarian impulse raising its head.

Emotion, hysteria and endless moralising are entrenched within the topic of racism, and understandably so. But this state of affairs is hardly optimal. Why? The political orthodoxy on matters of race no longer allows for disagreement. So, not only are people being pressured to accept things that they do not believe to be true, but important social issues that require varied perspectives, if there is to be any hope of progress, are being viewed through a one-sided lens.

The justification for rigid etiquette on what can, or cannot, be said about race and racism is grounded in the moral importance of the cause. When fighting for liberation, you know your cause is morally good, you know those who oppose you are morally bad, and therefore there is no time for nuance or liberalism when lives are at stake. The problem, then, is that when people begin to feel that they are operating from the unequivocal moral high ground and that anyone who dissents is morally bankrupt, they begin to think that their own morality must be imposed onto the world and that all dissenting views must be stifled.

This, of course, is an untenable belief to enact in a liberal and free society. We all know what can happen when we ditch individual freedom and autonomy in favour of the “common good”. Consequently, finding ethical consensus can help to ameliorate totalitarian impulses by humanising those who think differently to us. So, let’s apply this to the racism debate by figuring out what we are disagreeing about, why we are disagreeing and how we might resolve at least some of those disagreements to find some ethical consensus.

 

The redefinition of terms

So, what exactly are we disagreeing about?

In the cacophony of polarised voices that has become the defining feature of our cultural times—and, indeed, our Twitter feeds—one might be forgiven for thinking that those on either side of the loudest debates are operating from ethical frameworks that share very little in common. In fact, they cannot even agree on what constitutes immoral treatment of a particular race. After all, the “woke”—a term I use with some hesitancy, given its propensity to simplify and homogenise what is in fact a wide variety of voices—are ardently against racism. The “anti-woke” also claim to be ardently against racism. It is the racism of “wokeness” that often leads people to become aggressively “anti-woke”.

What is going on here?

The explanation for such a phenomenon is already out there. It goes something like this: we do not all mean the same things by our terms anymore. Words like racism have been weaponised by a linguistic sleight of hand. The redefinition of racism has two main ramifications in that it changes a) who can actually be racist and b) what constitutes racism.

Let’s begin with (a): who can be racist?

Those who buy into Critical Race Theory (CRT) think that people without power cannot be racist, and they hold that who has power and who does not can be determined by the quantity of melanin in a person’s skin. The reasoning for such a position rests on the view that we live in a white-supremacist system: racism operates on a systemic level and thus cannot be reduced to the attitude of any one individual. Whilst a black person’s prejudice might be unpleasant, a white person’s prejudice serves to perpetuate social inequality. Consequently, the former cannot be a case of racism, whereas the latter can be. Those who use the traditional definition of racism tend to think that anyone can be racist, since they believe racism to be a racially prejudiced attitude that an individual holds. Thus, Critical Race Theorists are fine with making derogatory or stereotypical statements about race, as long as that race is white people.

Now, let’s move on to (b): what constitutes racism?

For CRT, what constitutes racism is, at the risk of sounding facetious, pretty much anything. Any unequal outcomes are racist, subtle body language is racist, things that feel racist are racist, not being actively anti-racist is racist, science is racist, believing in the principle of colour-blindness is racist. You get the point. Those who utilise the traditional definition of racism have a much more specific criterion for determining what constitutes racism: an act that is prejudiced or discriminatory against individuals on the basis of their membership in a racially defined group.

Consequently, the paradox of both sides thinking the other is racist partially parallels what is sometimes called a “merely verbal dispute”, in so far as the paradox dissolves when you realise they mean something different by their terms. Except, in this case, the dispute is clearly not merely verbal. If we were all using the same unambiguous terms, the conflict would not simply resolve itself and vanish into thin air. For at least some of the population, however, I am inclined to think that while the disagreement can be traced back to these terminological differences, it is not the result of buying into the CRT definition of racism; instead, it is born out of ignorance that such a radical redefinition has even gone on. For this group, at least, using agreed upon terms might just dissolve the dispute.

 

How the redefinition of terms misleads us

Few among us would be quick to stand in the way of a movement that advocates for social justice, yet support for such movements often comes too quickly due to a paucity of knowledge regarding the different approaches to it. Critical Social Justice (CSJ) has diametrically opposed starting assumptions and values to Liberal Social Justice. Thus, it seems likely that, for some of us, our differences would fall apart if we were working from the same brute facts, instead of accepting second-hand interpretations of reality, as conceived by CSJ activists.

Take, for example, the frenzied debates over cancel culture and its existence, or lack thereof. Some hold that it would be more aptly named “consequence culture” and is therefore morally justified (of course, the moral righteousness of a mob has dubious ethical standing if our history is anything to go by). If those who condone or dismiss cancel culture took the time to look into the unveiled circumstances—as opposed to relying on hyperbolic hit pieces—that have caused some to be fired, censured and lacking in job security due to allegations of racism or bigotry, I am sure a lot of them would think, like I do, that “consequence culture” or “cancel culture” is not something that ought to be frivolously dismissed as nothing to worry about.

After all, oftentimes those who come under the “woke” banner do not really subscribe to different theories of truth or spend their time reading obscure philosophy and arguing about the merits of differing accounts of social constructivism. Not everyone is concluding “that’s racist” because they mean something different by racism; instead, they have heard—from people who truly do have a different definition of racism—that something racist is happening. They assume it’s true and then they parrot it. Maybe they have read a scathing article denouncing all those right-wing white men intent on silencing the oppressed by circulating delusory fantasies of censorship. Since the free-speech debate is certainly not new (and can often be more delusion than reality), they assume that the current furore over cancel culture is just more of the same.

However, see here for a list of around 90 people de-platformed, cancelled or fired within British academia, and here for a list of around 100 cancelled academics in America and Canada. The sins that have led to ruined reputations, loss of jobs, de-platforming and public shaming range from positing that the police are not all inherently racist, to suggesting that college students can decide on their own Halloween costumes, to simply writing an essay about the importance of academic freedom. The problem far surpasses the realm of academia and celebrity culture; these are simply the highest-profile and easiest-to-track cases.

So, how many among those who condone cancel culture would really think that all police are racist? Or that writing an essay on academic freedom is a fireable offense? I am willing to bet not as many as it might appear. For some, our differences are not primarily moral or ideological, but informational. We are not armed with the same facts.

At a stretch, I think this point may be extended even to those who have settled on different definitions of racism.

 

How dodgy information encourages acceptance of redefined terms

Ask yourself, why might individuals have accepted the CRT conception of racism in the first place?

If we keep our answer in relation to the average person, thus excluding the enthralling nature of Foucault’s prose from the equation, we might better understand why CRT’s assessment of society has been accepted by people who are unlikely to be familiar with the academic literature of CRT.

Put simply: it provides a ready-made explanation as to why certain identity groups do not do as well as others within our society. When our social feeds were overflowing with videos of black men being brutalised by the police, some explanation was called for. Discrimination is illegal. Racist views have been deemed socially unacceptable by the majority of society. How is this happening in the 21st century?

The answer: racism works through an invisible power system, perpetuated by hidden biases, discourses and institutions. When we are bombarded with statistics showing time and time again that black people do worse than white people, it seems like racism must be everywhere, we just cannot see it. So, we listen to the “experts”. We accept redefinitions of racism because it appears unavoidable that traditional definitions of racism are not sufficient to fill the explanatory gap that has opened up.

Of course, unequal outcomes and police brutality are not nearly as black and white—pardon the pun—as they are presented in the media and in Facebook or Twitter posts (those well-known sources of accurate, objective information). Since the realities of these two topics require careful consideration, I do not have the space to go in-depth within this article. However, Coleman Hughes wrote an excellent piece in Quillette, The Racism Treadmill, on what he calls the disparity fallacy, elucidating why it is a mistake to assume that unequal outcomes are always the result of discrimination. Further, the most current evidence to date, a study conducted by Roland G. Fryer, Jr., indicates that racial bias is not an all-pervasive force that leads to the state-sanctioned killing of black men by the police. In fact, the study found no evidence of racial bias when it came to white police officers shooting black men.

Maybe if everyone knew this, then they would be less likely to accept the new definition of racism. Unfortunately, once a particular worldview has become entrenched, it is usually resistant to change on the basis of new information.

 

Cause for hope

Whilst the above paints a pretty drab picture of the discourse on race, I do think, probably naively, that not agreeing on the same facts might give us at least some reason to be optimistic. Perhaps it is not the case that our morals or principles are miles apart from one another. Perhaps we just think we know different things, and as such have different perspectives. If this is the case, then we have a lot in common. Both I and those who dismiss cancel culture think that people ought to lose their jobs if they commit objectively racist acts. Both I and the BLM supporters think that black lives matter and that racially motivated state-sanctioned killing is morally reprehensible. Both I and those who fight for (Critical) Social Justice abhor the notion of white supremacy and believe that a person’s worth is not determined by their immutable characteristics. I could go on; in the ethical domain, we have more points of consensus than we do disagreement.

To be clear, I am not trying to imply that if we all knew the same things, we would all think the same things, nor am I trying to imply that such a state of affairs is desirable. It isn’t. However, in trying to understand how we each got to our conclusions and perspectives and in trying to find points of consensus between those with radically different world views, communication is made possible once more. It’s hard to engage in real debate when you assume your opponent is bad, morally bankrupt or totally absurd.

No matter how vast and unbridgeable our divides may seem, creating caricatures of our opposition is unlikely to do us any favours. When I see the views I share being denounced as “fascist” or “racist”, I resist the urge to dehumanise my opposition in the way some of them might do to me. I bear no ill will to people who have been misled by redefinitions or people who accept redefinitions on the basis of their access to dubious “facts”. I assume they are operating from similar moral landscapes to my own until I have good reason to believe otherwise. I simply hope that in time—and with the concerted effort of the swathes of people who are interested in unearthing the truth—more people might come to see that our differences are really not so great.

 

Isobel Marston is a student of philosophy at the University of Southampton and the Content Coordinator for Counterweight.