John Forrest
Table of Contents
Chapter 1: Orientation
Why Anthropology? (Some Warnings)
Human Nature
Ethnocentrism and Cultural Relativism
Chapter 2: You are OK, How am I? Feedback Loops
Chapter 3: Traffic Jams: Optimizing and Maximizing
Chapter 4: No Free Lunch: Gifts, Reciprocity, and Transactions
Chapter 5: What’s the Difference? Emic and Etic
Chapter 6: A Good Guy with a Gun: Power and Authority
Chapter 7: Who is My Brother? Kinship Systems
Chapter 8: Till Death Us Do Part: Marriage
Chapter 9: Forbidden Fruit: Incest
Chapter 10: Neither In nor Out: Liminal Stuff
Chapter 11: Waiter, There’s a Dragonfly in My Soup: Food Taboos
Chapter 12: Do Eskimos Have 100 Words For Snow? Color Terms, Classification, and Language
Chapter 13: Black and White: What is Race?
Chapter 14: Tag, You’re It: Magic, Religion, and Science
Chapter 15: Then and Now: The Search for Origins
Chapter 16: Got Any Change? Marx versus Weber
Chapter 17: Once Upon a Time: Storytelling
Chapter 18: Is it Art? Anthropology of Aesthetics
Chapter 19: A Change is as Good as a Rest: Revitalization and Social Change
Chapter 20: Where is the Center? Who Am I?
Preface
This book is a series of essays for people who are interested in anthropology, whether they be school leavers considering possibilities for university, or simply curious minds looking for alternative ways of thinking about the world we live in. The book is not in any sense meant to be exhaustive in terms of the things that interest anthropologists, nor completely definitive in terms of the current state of play within the discipline, but, rather, a look at how the anthropological lens can be helpful in viewing the world, particularly as it relates to problems that you encounter – maybe on an everyday basis. The chapters are based on my experience as a fieldworker and lecturer in anthropology for over forty years. It is inevitably personal in parts because, like all my colleagues, I have my likes and dislikes. That is the nature of the field. There is not an anthropologist alive who will agree with everything I say here, and many will have vehement disagreements with chunks (while agreeing wholeheartedly with others). When I deal with contentious issues, I try to give alternatives to my way of thinking so that you can sort things out for yourself based on your own proclivities.
I was born in Buenos Aires, went to primary and secondary school in South Australia, and then continued on to university in England where I first encountered social anthropology as an undergraduate and post-graduate (by accident at the outset because I was studying theology at the time). Then I pursued an MA in folklore and a Ph.D. in anthropology at the University of North Carolina. My mentors were Rodney Needham at Oxford and James Peacock at UNC. From North Carolina I was hired at Purchase College, State University of New York, where I rose through the ranks from assistant professor to department chair, and now I am professor emeritus. During my time both as a doctoral candidate and teacher, my field research was primarily devoted to aesthetics, religion, and dance. I did fieldwork in Appalachia, Tidewater North Carolina, New Mexico, and various parts of western Europe. After retiring early, I devoted my time to living and doing fieldwork in Argentina, China, Italy, Myanmar, and now, Cambodia (where I also guide researchers at a university).
I give you a potted version of my personal history not so much to present my credentials as to show you that I, like all anthropologists, have a past that resonates throughout my research and writing. Many chapters draw on my own fieldwork and specialties, and I often use examples from my own experience. By drawing on my own life experience I am acting as an exemplar to show you how anthropology can be useful in your own life too.
Each chapter is self-contained, but you should read the Introduction before exploring the rest of the book, because it lays out some fundamentals that will get you oriented.
I would like to thank . . . (to be continued).
Dedication
I dedicate this work to all of my students who no doubt remember the lessons of these chapters. They have often been my teachers, and sometimes my harshest critics, as well as my loyal friends.
Chapter 1: Orientation
Every chapter in this book is designed to be self-contained and can be read independently of the others. However, if you have had little or no exposure to social or cultural anthropology before now, you should start here before hunting and pecking around the rest of the book, because there are some important things you need to know about what anthropology is, and what it can and cannot do, before you get into the meat of things, as well as some cautions concerning anthropology as a way of looking at the world (and yourself). Although you can pick and choose as you see fit after that, I have ordered the chapters with a loose logic in mind, in that a few of the later chapters build on ideas in earlier ones. If a later chapter has elements in it that benefit from earlier chapters, I give you a heads up at the time.
This chapter has three main topics:
- What anthropology is and how it can be useful.
- Why the common conception of “human nature” is bogus and misleading.
- Ethnocentrism and cultural relativism.
These topics give you a strong sense of the approach that I, and other anthropologists, take.
Why Anthropology? (Some Warnings)
In the United States, where I taught anthropology at a university for 35 years, if you tell a person you meet on the street that you are an anthropologist, they may think of you as Indiana Jones or Margaret Mead, or, more likely, they will have no idea what you do for a living. Undergraduates usually arrive at college with no experience of anthropology from secondary school, and little or no exposure to real anthropology from the media. A few students who took my general courses got interested enough in the subject to consider taking an anthropology major, but then the overdetermined question would always come up: “What can you do with a degree in anthropology?” This question is part of a larger discussion that annoys me intensely. The question assumes that undergraduate education is some kind of professional training that prepares you for certain jobs. This notion developed gradually over the twentieth century, and is, in my oh-so-humble opinion, thoroughly misguided.
My overdetermined answer to that question was usually something along the lines of: “You can be smart, well educated, and have a little more insight into the way the world works than the average person.” I understand that certain professions, such as medicine and engineering, require advanced qualifications. No argument. But . . . the world also needs smart, educated people, as it always has; people who know how to think for themselves, without being trained for a specific job or profession. That task seems to have been left in the dust these days. Furthermore, the world needs people who know something about the cultures of the world. That is where anthropology comes in. Anthropologists study world cultures, and, most importantly, recognize that they are not all the same, by any means, and do not want to be the same. We also recognize that the idea of “human nature” – ways of behaving that are genetically programmed – is nonsense. Human behavior is enormously varied and complex, and what seems normal in one culture, can seem utterly bizarre in another. Give me any example you want of a behavior that you think is human nature, and I will show you a culture where that behavior does not exist, or is looked down upon, or penalized. Human behavior is an exceedingly complex mix of history, habit, learning, and genetics, and this mix is precisely what anthropologists investigate. We ask questions such as: Why does marriage exist? Why are there incest rules? Why are there food taboos and why are they different in different cultures? Are people inherently greedy? Why are there wars? . . . and on and on.
The problem anthropologists must confront is that we have answers to these questions, but they are not simple, and, much of the time, people with no background in social science will not listen to us, or else downplay our expertise as worthless. When it comes to a problem in physics, engineering, or medicine, people routinely seek an expert because they understand that there is a body of knowledge in these fields that experts have a command of, and “regular” people do not. When it comes to cultural problems, everyone thinks they are experts simply because they live in the world. Well, you use gravity every day too; do you understand gravity? Or electricity? Or the weather? Living in the world does not make you an expert in human culture, any more than it does in the science and technology you use daily.
This book is a series of essays that address a number of cultural challenges, some of which you may not even think of as cultural, such as traffic jams. Don’t all cultures experience traffic jams? Yes, they do. But in different cultures they happen for different reasons and can be solved in different ways. Traffic in all cultures follows rules, some of them formally codified, some of them not. If you don’t know the rules, you can get into a lot of trouble quickly. Naturally, neighboring cultures will have similar rules. You can drive over the border from Italy to Switzerland and you will mostly be all right. But, if you learned to drive in New York or London, I would not recommend jumping straight into a rental car in Mandalay, Bangkok, or Phnom Penh without getting someone to drive you around first to get the hang of it. In Myanmar, for example, they drive on the right, but, because of the country’s British colonial heritage, the steering wheels are also on the right (until recently they drove on the left). They switched the one, but not the other. Care to try without any practice first? To add to such technical issues there are complicated rules of the road that are not part of the law, yet everyone follows them. Under certain (well understood) conditions in Cambodia you can drive the wrong way down a traffic lane, or drive through a red light. Neither is legal, but everyone (including the police on most days) accepts that that is how the rules work.
Cultural rules are like traffic rules, only they are vastly more complex and harder to define. Is it incestuous to have sex with your first cousin? Is it incestuous to have sex with a step-sibling who lives in the same house with you, but is not a blood relative of any sort? Questions such as these have legal answers, but these answers rarely coincide with what people think the answers are. First cousin sex (or marriage) is perfectly legal in most cultures in the world (including the majority of states in the United States), and in many cultures it is preferred. A few years ago, when I polled my students informally on whether first cousin marriage was legal in the United States, more than 80% thought it was illegal, and they believed the reasons were obvious. The reasons are far from obvious, and my students, as well as being shocked to discover that first cousin marriage was legal in most states, typically expressed an aversion to a physical relationship with a first cousin.
Why is first cousin marriage preferred in some cultures? I know why, but do you? Your answer might be that they are ignorant cultures that do not know any better, but that is just prejudice. There is no good, biological reason for avoiding first cousin marriage as a general rule. No, you will not have kids with webbed feet or three eyes if you marry your first cousin. Under certain, very specific, conditions, sustained first cousin marriage over generations can cause problems, but these conditions are rare. There are, however, plenty of cultural reasons, which vary from culture to culture, for not marrying your first cousin, or, conversely, for preferring to marry your first cousin.
Then there is the supposed “Biblical definition of marriage” which conservative politicians and preachers in the U.S. are always going on about. What exactly is that? Jacob, father to the 12 sons who originated the 12 tribes of Israel, had those sons by 2 wives and 2 concubines. Abraham married his half-sister, had a son with her, and also had a son with her maid. Is that Biblical enough for you? Adam had only one wife because there was only one woman. As soon as there were more women, men had more than one wife. Solomon had hundreds of wives, not to mention concubines whom he had sex with, but was not married to. Lot got both of his daughters pregnant. God killed Onan because he refused to father a child with his dead brother’s widow. How would you define “Biblical marriage” given all of this? It is certainly not “one man and one woman.”
The chapters in this book range over a great raft of customs in our own cultures that most of us accept as normal or natural, but they are far from it. Taken from a global perspective, those customs look truly weird to other cultures, as their customs do to us. In most cases, each culture thinks that the other culture is bizarre, and that they alone are doing things properly. “Why don’t you eat spiders? We do. They are delicious!” (They are.) Anthropologists generally do not take sides in the debate; we simply want to know why each culture does things the way they do (with some exceptions).
There was a time, in the not-too-distant past, when the huge variety of cultures in the world was of little interest to the majority of Western Europeans because travel outside of a narrowly limited zone was expensive and daunting. That was the world I was born into, although my family was well outside the norm (I had lived on three different continents by the age of seven). For the first half of the twentieth century, and all of the nineteenth century, Western Europeans and North Americans traveled to “exotic” cultures (that is, outside of Europe and North America), for the most part, because they were in the army or worked for colonial administrations, with a smattering of missionaries, and inveterate travelers roaming the world for their own purposes added to the mix. Travel abroad for enjoyment or pure interest, even to fairly local destinations, was almost unheard of, except for the rich. In the post-war era, foreign travel eventually became more affordable and much less daunting. Not only could Europeans travel to “exotic” destinations, people from those locations could travel to, and live in, Europe. Quite a number of Europeans were not so happy with the latter situation when it arose, and continue to be unhappy about it. Again, anthropology can help with some of the problems that have arisen.
While this book does not take sides, it does much more than explore the oddities of other cultures: it tries to explain why other cultures do the things that seem odd to us, and it also holds up a mirror to our own cultures and asks why we do the things that we do. Anthropology has always been interested in studying other cultures, not merely for the purposes of gathering data, but to understand ourselves better. By stripping away the false notion that we do many of the things that we do because of “human nature,” it is possible to consider changing those things if we desire. Before getting into detailed discussions, however, I want to provide you with some important cautions concerning what anthropology can and cannot do because of its inherent limitations.
First caution: all cultures, even very small ones, cover a wide range of behaviors. In Argentina, when it is available, people eat beef like there is no tomorrow. I have watched a man, of average size, polish off a kilo of beef in under 30 minutes. Even so, there are vegans in Argentina. They have a hard time of it, but they exist. One size does not fit all. Anthropologists in the past have tended to blur over diversity as it exists in various cultures, and I do not want to do that. It is, however, impossible in a book of this size and scope to stop at every turn to address every nuance of a particular culture. That is the domain of specialist ethnographies. I am going to have to, of necessity, paint in broad brush strokes. I do not believe that my major points are weakened thereby, but it does mean that some nuance in describing cultures will be lost.
Second caution: at one time, anthropologists wrote about cultures as if they were bounded and clearly identifiable entities. They talked about the Kwakiutl or the Nuer or the Tikopia as if these cultures were easily defined by geographic location, language, kinship systems, patterns of food production, art, and the like, and that these cultures had relatively fixed outer limits (with some fuzziness at the margins). Older ethnographies, that is, basic descriptions of these cultures, assumed that these cultures could be defined without too much trouble, but that notion has since been heavily contested. Think about your own culture. Can you identify its key elements and its boundaries without too much trouble? I expect not. In some ways this problem is inevitable because you are not an anthropologist. But anthropologists also find themselves in the same hole. In the past, the problem was simply glossed over, but now we are aware that talking about “Nuer culture” or “English culture” is not a straightforward enterprise. Cultures are not only quite varied internally, they also change over time, do not have clear boundaries, and are constantly influenced by other cultures.
Third caution: there is not just one way to do anthropology. The modern discipline emerged in several countries at around the same time (end of the nineteenth and early twentieth centuries) out of a welter of discussions in Europe and North America concerning how to classify world cultures. Key players in this evolution of anthropology included academics in Britain, France, Germany, North America, and a scattering of other places, and the different needs of those various academics colored the disciplines of anthropology that emerged in those individual locations. In Britain, for example, there was an emphasis on social systems and institutions, and, so, social anthropology developed as a distinctively British way of doing things. In North America, under the influence of the German geographer Franz Boas, anthropology took a quite different track. Boas wanted to include all of the human experience under the umbrella of anthropology, including history, archeology, language, physical evolution, and so forth, even though such an undertaking was immense and unwieldy. From the very beginning, at the turn of the twentieth century, American anthropology fragmented into specialized sub-fields. Cultural anthropology was the sub-field that dealt with contemporary behavior, differing from British social anthropology in that it was concerned with the underlying mental patterns of culture that led to the formation of social institutions, rather than focusing primarily on the institutions themselves. Nowadays, that distinction has been blurred, but the histories of these separate disciplines continue to inform current practice.
Fourth caution: anthropology developed out of the concerns of Euro-American universities. As such, it is intrinsically biased, even though it tries hard not to be. For example, in the next section I discuss at some length the conclusions reached by Richard Lee concerning the amount of work done by the Kalahari Desert foragers he studied, in comparison with people in the industrialized world. His basic point is perfectly legitimate: the more technology you have, the more effort you have to put into supporting that technology. But this conclusion avoids addressing the question of how to define “work.” It assumes that “work” can be equated with “making a living” or some such, but this is actually a deeply vexed question whether you live in an industrialized society or in a band of foragers. If you define your work as time spent at your job, things may seem simple enough on the surface, but they are not when you probe deeply. The old mantra for the eight-hour work day that divided the day into eight hours for work, eight hours for sleep, and eight hours for “what you will” is clearly flawed. That “what you will” part (supposedly leisure time) could be eaten into by commuting, shopping for work clothes and other work supplies, and doing any number of other activities that are work related. Anyone who has worked an eight-hour per day job knows that there are not eight leisure hours in those days. One might also ask how you define “leisure.” Lee defined “work” as time devoted to activities such as hunting and food gathering (making a living). That is, he used his own culture’s economic definition of work when studying African foragers. But how did they define “work,” if they defined it at all? One proposed solution to this dilemma is to get indigenous cultures to do their own anthropology, but I suspect you can see the inherent weaknesses in such an approach (although it is done). By teaching indigenous people how to do ethnography you have altered their way of thinking about their culture.
Fifth caution: anthropology is not only different in different countries, it has serious disagreements within those countries as well. There are two related issues here. First, anthropological theory evolves over time, and, second, fads in analysis come and go. Anthropology at the beginning of the twenty-first century is a far cry from the anthropology of one hundred years ago. We now have hundreds of university departments across the globe whereas a century ago there was a handful. Contemporary departments are churning out data at a staggering rate, and as the amount of analysis compounds, so does the disagreement about the analysis. Books that were once considered unassailable classics in the field have been contested repeatedly in recent years. A great example is Coming of Age in Samoa (1928) by Margaret Mead. Mead wanted to show that typical adolescent rebellious behavior among teenagers in the US was a result of culture and not biology (as asserted by Freud and others). Mead’s fieldwork seemed to show that because adolescent girls in Samoa had complete sexual freedom, they had no need, nor desire, for teenage rebellion. Subsequent researchers, notably Derek Freeman, found her data to be in serious error (see Freeman 1983). Her two key interviewees, when interviewed again later, admitted that they had greatly exaggerated their sexual exploits, for example, so the book’s validity has been seriously called into question, with many anthropologists taking sides pro and con (did they lie to Mead or to Freeman or both?). We must also accept the fact that anthropology is not like natural science. Fieldwork cannot be replicated accurately to test hypotheses, in the way that chemists and physicists repeat experiments, because all cultures change over time, so the results will inevitably be different when fieldwork is repeated.
Furthermore, theories in anthropology come and go. When I was a Ph.D. student in the early 1970s, Claude Lévi-Strauss and his brand of structuralism were all the rage, and I wrote a number of papers using his approach. Now structuralism has been relegated to the dustbin of old paradigms, although it still has its advocates, and we are on to new ones which will also be trashed eventually. Nonetheless, anthropology does maintain a sense that there is some bedrock underneath the shifting sands of new theory. As much as possible, in this book, I want to highlight the bedrock and not spend too much time drifting on the sands. Just take it for granted that everything I say here will be contested by someone. My statements here are all meant as preliminary points of departure for discussion, not absolute truths of anthropology carved in stone.
If you like, you can think of the chapters in this book as similar to chapters in an introductory textbook on physics. Well before you can learn about the implications of quantum mechanics, quantum field theory, and general relativity, each of which requires knowledge of highly specialized mathematics, you have to learn some basic equations formulated by the likes of Galileo and Newton. Then you can begin to understand how their theories have been expanded upon and modified, and, in a few cases, outright rejected. Likewise, if I ask you what the square root of 1 is, you may, remembering basic mathematics, say that it is 1, or, if you remember a bit more, you may say that there are two square roots of 1, namely, -1 and +1. The more mathematics you know, the more roots you will be able to add (e^0, for example, although that’s a bit of a quibble, or you can tell me that (e^πi)^2 = 1). You have to qualify or explain your answer further as you learn more mathematics. The same holds true for this introductory primer. Most of what I say here is reasonably solid stuff, but it can all be qualified and added to. Those additions and qualifications are for later detailed study if you get interested. I will give you hints along the way, but I do not want to get too complex as we start out. Nor do I want to give the impression that anthropology evolves in the same way that physical science or mathematics does. It most certainly does not.
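For readers who want to check those two asides, the arithmetic is standard complex-number fare rather than anything anthropological; in LaTeX notation:

\[
e^{\pi i} = \cos\pi + i\sin\pi = -1,
\qquad
\left(e^{\pi i}\right)^{2} = (-1)^{2} = 1,
\qquad
e^{0} = 1 .
\]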
Human Nature
When I was growing up in South Australia in the 1950s and 1960s there were often stories in the news about extraordinary aboriginal trackers who could find people who were lost in the bush, or who were wanted by the law. Kids had the habit of wandering off from campsites and getting lost in the desert, and criminals frequently fled into the bush, away from settlements, to evade the police who were chasing them. The center of Australia is formidable country, and you can get lost in a heartbeat if you are unfamiliar with the territory – even if you are close to a settlement. Ever since aboriginal trackers had found the legendary nineteenth century bushranger, Ned Kelly, when the European colonial police had repeatedly failed to locate him, enforcement agencies had used indigenous trackers for detective work. My school friends and I knew all about the guys who were called “black trackers” – they were legends. One of my mates told me once about a tracker who had followed a man wearing shoes with soft soles over flat rock for several miles. I was suitably amazed. We, and the majority of European-Australians, believed that the Australian aborigines as a group had some special powers they used in tracking: probably something genetic that set them apart from Europeans. Maybe they were born with keener eyesight, or some kind of “sixth sense.” We were pretty stupid back then.
Aboriginal peoples lived in the central desert of Australia for millennia before Europeans arrived. During all that time they did not have any domesticated plants or animals.[1] They did not need them. The well-watered lands of the east coast, where most aboriginal people lived before Europeans arrived, provided ample plants and animals to live on. I expect that even Europeans in the nineteenth century could have lived by hunting, fishing, and gathering along the east coast, if they had wanted to. The desert was quite another matter. Europeans could not have survived in those lands for very long. But aborigines thrived there living entirely by hunting wild animals and gathering natural plants. They also had plenty of water to drink. Water – in the desert, where sometimes it does not rain for years.
Aborigines had been able to live in the harsh interior of Australia for tens of thousands of years because they had accumulated an enormous store of knowledge concerning tracking animals to hunt, finding water, and locating the resources they needed to survive. There was nothing magical or genetic about their tracking abilities. They needed them to live. They were all learned. As the aboriginal groups of the desert center were increasingly controlled by Europeans who sent their children to schools in cities, put them to work on cattle ranches, and generally disrupted their traditional way of life, they lost the ability to track. A few persistent people kept the knowledge of tracking alive, and, in turn, got employed by police departments in the cities. By the time I was living in Australia, police trackers were a dying breed, and now they are all gone. The last tracker retired a few years ago. They are no more, not because their genes have changed or vanished, but because the knowledge needed to track successfully has become much scarcer, and the indigenous people of modern times are less and less inclined to share their knowledge with outsiders.
When you see someone with extraordinary ability, whether it is tracking lost individuals, composing and playing amazingly complex music, or solving impossibly difficult mathematics problems in their heads, it is easy to put it all down to some kind of natural ability. Many people believe that such “talent” is all about genetics. Johann Sebastian Bach, one of the most creative musicians of all time, came from four generations of musicians, and his sons and one grandson also became famous musicians and composers. His musical “talent” had to have been genetic, didn’t it? Not so fast. If you grow up in an environment where music is highly valued, and training is not only easy to come by from older generations within your family, but regularly enforced, you are likely to end up with more than your fair share of musicians in the family, especially if being a musician is well paid and socially prestigious. Likewise, if you have a society where poor people are regularly starving because the society denies them the resources to get adequate nutrition to live, some of them are going to steal food to survive, even if being caught stealing means the death penalty (as was the case in eighteenth-century England). People who steal to survive are not born criminals; their society makes them criminals. Yet you will read in the media, from the eighteenth century down to modern times, that poor people belong to a “criminal class” because they are born that way. These media are saying that poor people are poor, and often resort to crime, because they have bad genes.
Such oversimplified thoughts, and many more, are part of what is generally known as the “nature versus nurture” debate. When we look at any behavior, is it possible to decide whether it is caused by our genetics (nature) or our learning (nurture)? As an anthropologist I am here to tell you, there is no debate. The matter was settled a long time ago, but the information has not filtered down to the general public. For centuries, all kinds of thinkers, not just anthropologists, have debated the question of what we are born with versus what we learn. One school of thought, going all the way back to Aristotle, argues that everything we do is learned. These scholars suggest that the mind at birth is a blank slate (or tabula rasa), to be written on in any way imaginable. In more modern language we might use the term “empty notebook.”
The blank slate idea has had its followers for centuries, and, in modified form, was the basis for modern American anthropology, beginning in the early twentieth century. After all, if you take a newborn baby from England and give it to a Chinese family to raise without telling it that it was born English, it will grow up learning Chinese language (probably a very specific dialect of Chinese), will adopt Chinese customs, enjoy Chinese food, think in Chinese ways, and in every respect think of itself as Chinese, even though it may look a little different from people around it. It will not have any “English” genes lurking in the background that will make it yearn for fish and chips, keep a stiff upper lip in emergencies, or slip into an English accent now and again.
Early twentieth century anthropologists in both Britain and, especially, the United States took the position that all behavior is learned. That point of view was convenient for their purposes, and, unfortunately, goes too far in one direction. They wanted to prove that race, gender, and age, along with other attributes associated with our biology, do not force us to act in certain ways. They believed that we learn to act in those ways because our society has pushed us in certain directions, and those directions become so common and habitual that we end up thinking of them as natural, when, in fact, we have learned them. Pick any behavior you want – adolescent rebellion, mothers nurturing their children, male aggression – and there is an anthropological study to show that these behaviors are not universal. Some cultures do the opposite. Therefore, at one time, it was common for some anthropologists to conclude that those behaviors are not natural, but learned from our society, even though they may feel natural.
The counterargument to this point of view is very simple. It is possible for a behavior to be natural, but for society to push against it. After all, it is natural to want to eat to survive. That is certainly hard wired. If it were not, the species would die out (same with the drive to have sex and reproduce). Yet there are people who deliberately starve themselves to death for one reason or another. Pointing to anorexics is not proof that eating is somehow learned behavior that can be unlearned. That much is obvious. But what about more complex, and more social, examples such as sharing food? Are we driven biologically to share, or do we learn it? My mantra here and throughout the book is: THE ANSWER IS COMPLICATED. It is not one thing or the other. It is always both. There is no nature versus nurture. Both always play a part. The real question is never an either/or thing, but, rather, which one is more important at any given time, and what can be done about it? The huge additional complication is that our genetic makeup is diverse and is constantly changing. Furthermore, it can always be overridden. There is no such thing as human nature that is so hard wired that it cannot be overcome – even when you are in a crisis and have to act fast. Modern epigenetics has added a further twist: it is now clear that our basic DNA is not in itself a hard wiring system, because genes can turn on or off, or express themselves differently, in response to environmental factors, without any changes in the DNA itself.
Anthropologists used to look to modern foragers for some answers concerning our genes and behavior because they thought that contemporary foragers would be the most like our prehistoric ancestors who lived exclusively by foraging until they invented the domestication of plants and animals roughly 10,000 years ago.[2] Currently, anthropologists estimate that Homo sapiens emerged as a separate species between 200,000 and 300,000 years ago. Even taking the low estimate, modern humans lived as foragers for at least 190,000 years before they came up with domestication. Put another way, biologically modern humans have lived for about 95% of their time on earth as foragers, and only 5% eating domesticated foods. This fact is the basis for the so-called “paleo diet.”
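The percentages come straight from those estimates. Taking the low figure of 200,000 years, with domestication arriving 10,000 years ago:

\[
\frac{200{,}000 - 10{,}000}{200{,}000} = \frac{190{,}000}{200{,}000} = 0.95,
\]

that is, roughly 95% of the species’ history spent foraging and about 5% spent eating domesticated foods.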
The “paleo” bit comes from “paleolithic,” which is the era when the genus Homo, including ancestors other than modern Homo sapiens, started using tools, beginning about 2.6 million years ago. “Paleolithic” is Greek for “Old Stone Age.” The paleo diet supporters claim that the biology of our digestive system evolved when we were foragers, and is adapted to eating the foods that people from the Old Stone Age ate, and is not adapted to modern foods. They claim that to be healthy we need to return to what prehistoric foragers ate. This point of view omits the crucial fact that human evolution did not stop 2.6 million years ago, 50,000 years ago, 10,000 years ago, or even 10 years ago. Evolution does not stop.
Travelers know very well that it is possible to get sick by going to a foreign country and eating the food. They are usually all right if they do not travel too far from their home country, but more out-of-the-way destinations can cause problems. This is largely because their digestive systems are not adapted to the local food in those places. There are also likely to be pathogens in the environment that strangers are not immune to and can succumb to, but leaving that issue aside, foreigners may still get sick simply because of the strangeness of the ingredients, or unaccustomed proportions of them in the food. Locals do not get sick because they have digestive systems that have adjusted to the diet. The word “evolve” here means “adjust.” It does not mean “progress.” That is a common mistake that many (perhaps most) people make. Animals (including humans) adjust to their environments biologically – all the time.
I used to live in southwest China where people eat gigantic quantities of pork and duck fat (by common European standards). They smother their noodles in it, eat hotpots where half the liquid is pure fat, and cook their meats and vegetables in rich fats all the time. If you are not used to that level of fat you will likely get sick – immediately – and I have plenty of foreign friends who lived there who can confirm that fact. The locals never get sick, and, adding insult to injury, they do not get fat either. Their digestive systems have evolved to their diet.
Richard Lee worked in the 1960s among the !Kung San of the Kalahari who were one of the very few remaining groups in the world who, at the time, relied exclusively on foraging. Everything they owned and ate they got from the wild. Lee’s primary goal was to document their diet to see if foraging was a reliable way to live a healthy life. His results were truly amazing, and they shook the anthropological world. Until Lee’s research, one major theory about the shift from foraging to the domestication of plants and animals was that foraging was hard work with uncertain results, whereas domestication made life easier and the food supply was more stable. Along with domestication came civilization which made our lives even easier. Thomas Hobbes is famous for speculating, in the seventeenth century, that prehistoric people, living in a “state of nature” (that is, foragers), lived lives that were “solitary, poor, nasty, brutish, and short.” For Hobbes our nature drives us to be selfish, and to seek advancement for ourselves at the expense of others. It is only civilized society with the power of the law and the threat of punishment that prevents us from following our selfish natures. We overcome our natures by learning to be civilized, and being civilized makes us more comfortable than living in a state of nature. Richard Lee showed a very different picture. The foragers he studied lived long, healthy, happy lives. It was the farmers and herders around them – the “civilized” ones – who had lives that were uncertain, unhealthy, and intermittently miserable.
The mistake anthropologists and philosophers made in the past, and non-anthropologists still make, was in thinking of foraging in terms of what it would be like if we had to do it. We would be terrible at it for any number of reasons. We do not have the detailed knowledge of all the edible plants and animals that are available in the wild, for starters. We also have an extremely narrow range of foods that we consider “edible.” If you are going to live on nothing but wild foods you have to expand enormously the number of foods that you are willing to eat. You cannot be picky about eating insects, for example. You also have to be willing to move to where wild foodstuffs are, depending on the seasons. Animals go where the food is from season to season, and you have to follow them, not just to hunt them for meat, but also to eat the fruits and vegetables that the animals eat as well. That means that you have to be nomadic. Being nomadic does not mean wandering aimlessly around, as it is often characterized, but, rather, following a set pattern of seasonal migration. Because you cannot stay in one place permanently, you cannot build permanent housing, cities, and so forth. Also, and perhaps the hardest to bear for modern people, you have to limit your possessions to what you can carry conveniently, and they must all be made from wild materials: no cars, cell phones, or computers.
We would generally find this lifestyle unpleasant, but it has one gigantic benefit: foragers put very little effort into producing food and keeping themselves clothed, housed, and comfortable, in comparison with people who live off domesticated plants and animals, or live in cities. If we define work, simplistically, as effort put into making a living, then foragers do far less work than we do. It is hard to get our minds around this fact, but it ought to be obvious. We take it as “obvious” that the more technology you have, the less work you have to do. The opposite is the case. Just think about this for a minute. People who grow crops or herd animals have to work all the time. If you farm wheat there is the constant threat of too much, or too little, rain, poor soil fertility, pests, crop diseases, and on and on. If you herd cattle you have to worry about someone stealing your animals, diseases, breeding, daily milking, finding food and shelter for your animals, and all the rest of it. Whether you are farming or herding, you are constantly busy. Foragers do not have any of these problems – none.
For the !Kung, daily gathering of plant materials to eat was a sure thing, and took very little time. Think of it as every day being harvest day (minus the planting and tending). They did not have to worry about planting seeds, watering plants, fending off pests and diseases, and so forth. They just went into the bush and gathered nuts, fruits, seeds, fungi, leaves, and roots, and made their meals from them. Normally it took about 1 hour per day or less to gather enough for the day. If one plant was not particularly productive one year because of drought or disease or pests, they picked a different one. When one area was exhausted, they moved on. They let nature worry about all the things that farmers worry about. They just harvested from nature’s bounty.
The hunters worked in a similar way although it took more effort, and was not as certain. They did not have to worry about theft, animal diseases, and the like, but they still had to catch the animals, and that is not easy, even for skilled hunters. Even so, a hunting band could be expected to bring home something each time they went out (between 40 and 50 hours per month). Hunted meat was crucial because it provided needed protein and fats. Here is the catch. The hunting band as a whole would return with something every time, but not each individual hunter. Some skilled hunters might come back with something more often than others, but all of them had times when they caught nothing. To make sure that the whole group – men, women, and children – had enough meat at all times, the hunters immediately shared out whatever they brought back with all the families at the camp every time they returned from the hunt. The best hunter for the day might get a little extra, but everyone got enough.
For the !Kung, sharing was essential for survival. If you had a bad day hunting, you still had something to eat, and if you had a good day hunting, you evened things out with the others because they had shared with you when you had nothing. For reasons I have already mentioned, we cannot conclude from this study that humans in prehistory shared food by nature, because it was built into their genes through evolution – and therefore is built into ours too, because we are their descendants. We cannot even argue that all humans in prehistory were like the !Kung. Within recent history, anthropologists have studied numerous forager cultures, and many of them are nothing like the !Kung. At the turn of the twentieth century the Kwakiutl, who lived as foragers in the Pacific Northwest of North America, lived in settled villages, made elaborate blankets and rugs, built highly decorated buildings, carved totem poles, made complex art, and had all manner of tools and artefacts of considerable sophistication. They could do this because they got more than enough to live on from their rich, lush environment. They did not share either. They had an enormously complicated system of distribution of goods based on rank, privilege, and geographic location known as potlatch. Our prehistoric ancestors could have been like the !Kung or the Kwakiutl, or like neither. It is quite likely that humans in prehistory lived in many different kinds of societies, just as we do today. Whatever their societies were like, however, they did all live in societies, which means that it is fair to argue that while sharing is not hard wired into us, being social is.
Here is the crux of the matter. Saying that something is hard wired in humans is not saying much. Our behavior results from a complex mixture of three things:
- Genetics
- Individual personality
- Social conditioning
How they are mixed, in individuals as well as in people as a whole, is part of a long, ongoing discussion in biology and anthropology. Of the three parts, the genetic component is the least flexible, although it is not completely fixed and uniform. The genetic part is what some people want to call “human nature” – the part that kicks in during a crisis when other parts of our personality fail us. This idea is nonsense for all kinds of reasons. Our genetics do not have to kick in, even in the worst of circumstances.
If the engines cut out, all of a sudden, on a passenger plane I am traveling on, I do not want the pilot to fall back on “human nature” even though this situation is clearly a crisis. We could all die, including the pilot. If the pilot stays calm and level-headed, we have a chance of survival. If the pilot falls back on natural selfishness, puts on a parachute and bails out, the rest of us are doomed. You may think that this kind of selfishness is hard wired, and the fact that planes do not come equipped with parachutes might be your proof. Without the option of bailing out, the pilot must try to save the plane, if only out of selfishness. The pilot’s heart rate and blood pressure will almost certainly increase. That bit is hard wired. But training is likely to overcome panic in terms of behavior. Here is the complicated part. All pilots are different. The pilot’s training, individual personality, and genetics may all be in conflict as the plane starts to spiral down, or they may all work together to find a resolution. The archives of the US National Transportation Safety Board are filled with records of tough situations where the Board has had to determine the roles that personal factors, social factors, and genetics played. Their job is not an easy one. They have to try to pull these factors apart to determine what went wrong so that they can refine policy concerning which people to hire as pilots based on individual factors including genetics (and which ones to weed out), and how to train them for emergencies, so that the three parts will work together.
The exact nature of how the three parts work together in harmony, or in conflict, is a conversation that is too complicated for us to have right now, in large part because there is little agreement among experts. What we can agree about is that we always have choices concerning how to behave. We are not driven unavoidably by human nature. Nor are we driven strictly by culture, either. We can always change. The small catch is that while we can always change our ways, we do not usually want to change. Change is hard. What can help in the process is working out what we do because of individual desires (which may include genetics), and what we do because of social pressures. For the individual parts you will need to consult a clinical psychologist or a geneticist. As an anthropologist I can help with the social part. Our social training is very difficult to change for one critical reason: we are taught to believe that what we do because of social training is not social, but natural, and cannot change. This is the mistake I want to correct in the meat of this book.
Ethnocentrism and Cultural Relativism
Before moving on to substantive issues I want to address one more general factor: our built-in prejudices. No matter how flexible we might like to think we are, we are all subject to cultural biases, sometimes consciously but often unconsciously. Anthropologists have their cultural biases too, like everyone else, even though we strive for a degree of neutrality in assessing other cultures. Absolute neutrality is simply impossible. The best we can hope for is an awareness of our biases, and there are numerous really sneaky biases that escape our net. Ethnocentrism affects us all whether we are aware of it or not.
In its crudest form, ethnocentrism is the belief that the culture you grew up in does things the correct way, and that other cultures that do things differently are wrong. Derogatory terms, such as, “savage,” “primitive,” or “barbaric” are typically used to describe those foreign cultures. In anthropology the opposite of ethnocentrism is cultural relativism, which begins from the point of view that other cultures are not wrong in their customs, but different. Cultural relativism is a hard sell for two reasons. First, ethnocentrism is deeply rooted in all cultures. There are gradations: some cultures are much more ethnocentric than others, and cultures usually have a range of responses to other cultures from absolute hatred to simple disgust or aversion. But, ethnocentrism exists everywhere, and any behavior that is universal is very important to anthropologists given that human behavior is not hard wired.
It is not difficult to make the argument that ethnocentrism concerns the self-preservation of one’s own culture. In that case, cultural relativism runs counter to some basic cultural values. Second, even though anthropologists tout cultural relativism, there are numerous disagreements in the discipline about its possibility and applicability. Extreme cultural relativism, arguing that it is unacceptable to form any judgments about other cultures, is rare. It is easy to see people eating spiders or sheep’s eyeballs and say, “I don’t want to eat that, but if you think it is yummy, go ahead.” It is difficult to stand by and not pass judgment while a girl has her clitoris cut out (without anesthesia), or a female, firstborn baby is killed, or old people are left to die because they are too weak to travel on seasonal migrations. Anthropologists argue constantly about the limits of cultural relativism. Are there some absolute moral laws that should not be broken regardless of culture? This is not the place for a lengthy discussion on the complexities of ethnocentrism and cultural relativism, but I can lay out some basic principles to guide later chapters. Let’s take things one step at a time.
Certain kinds of blind patriotism are the most blatant forms of ethnocentrism – “My country is the best in the world, and all others are inferior, or just plain wrong” – but these are extreme cases. Ethnocentrism is usually much subtler and more veiled. Many cultures, from antiquity to the present day, have had one term to designate themselves and one for outsiders (generally with explicitly or implicitly negative undertones). In ancient Greece, non-Greek-speaking people were called οἱ βάρβαροι (hoi barbaroi), barbarians, that is, uncivilized people, and the Greeks applied the term equally to peoples such as Egyptians, Medes, and Persians, even though those peoples had highly sophisticated cultures. Being “other” meant, inevitably, being inferior. No doubt you know of other examples, perhaps from modern times. Anthropologists have recorded numerous examples of peoples having one word in their language for themselves, which can be translated as “real people” and another word for outsiders, which has the implied connotation of “inferiors.” This kind of ethnocentrism is extremely common worldwide and is easy to recognize.
It is a lot harder to recognize implicit ethnocentrism. Lee’s classification of “work” as “activities that perform an overtly economic function” is implicitly ethnocentric. That is, Lee was not being blatantly ethnocentric. He was not arguing that living in an industrial society is better than living by foraging: quite the opposite. But he did classify !Kung activities into “work” and “non-work” in a Western way, even though the !Kung did not. Anthropology has sub-fields such as economic anthropology, political anthropology, medical anthropology, aesthetic anthropology and so forth, that divide other cultures into arenas of activity that make sense in our own cultures as distinct activities, but such labels rarely make sense in indigenous contexts. Talking about “art” anthropologically is a perfect case in point. Anthropologists have studied what used to be called “primitive art” for well over a century, even though there are numerous cultures worldwide for whom the concept of “art” is strange, and many who do not even have a word for it indigenously. For generations, anthropologists have collected artefacts from cultures they have studied, taken them back home, and displayed them in museums as “art” even though indigenously they have practical purposes. I admit I am straying into extremely complex issues here, but the essential point is important.
Imagine you want to interview a restaurant chef, and you decide to conduct the interview while the chef is on the job so that you can experience cooking in action. This is the kind of exercise I often had my students conduct. Imagine that during the interview the chef uses a wooden spoon that catches your eye because it is so attractive to you: it is made of a dark wood with a complex grain, the bowl is gently curved and extremely smooth, and the handle is tapered in an interesting way. You photograph the spoon from many angles, and then compliment the chef on the beauty of the spoon. He might agree with you, or he might think you are loony. He would be justified in the latter assessment if you subsequently displayed the photos in a gallery with the title “chef’s art” when, for him, the spoon was simply one of his tools of the trade. You would be emphasizing an aspect of the spoon that has no meaning to its user. That would be a form of ethnocentrism that has been far too common in anthropology. As it happens, this kind of thing is normal in the Museum of Modern Art in New York (MoMA), where spoons, knives, typewriters, even a (small) helicopter, are on display as “art” because the museum considers them to be things that are good to look at. But MoMA has specific design criteria in mind, and the curators are not anthropologists. Is something that an anthropologist finds attractive in the course of fieldwork “art” or not? Is a practice that looks like “marriage” or “ritual” or “magic” to the anthropologist thought of as such indigenously? When a pastoralist receives 100 cows from a groom’s family because his daughter is getting married, is it fair to say that he is “selling” his daughter?
We end up having to accept the fact that terms such as “marriage,” “sell,” “ritual,” etc., are all loaded terms that acquire their meanings through socially accepted beliefs. Anthropologists call such entities “social constructs,” that is, things that acquire their meaning and value through shared beliefs. Paper money has often been used as a classic case of a social construct. The banknote itself has no intrinsic worth, but society as a whole imbues it with a particular exchange value (and that value can vary widely depending on circumstances at the time). We can all understand that kind of social construct. Things get a lot harder when we assert that all meaning is socially constructed, and we can easily disappear down a deep, dark rabbit hole following that reasoning. We must, however, accept that terms we commonly treat as basic are neither universal in meaning nor in legitimacy. Assuming that they are is ethnocentric, yet it is all too common in anthropology in general, and I fail in this regard constantly, as do all of my colleagues. Being aware of one’s inclination towards ethnocentrism is an important first step, and recognizing how deep-seated and pernicious it is, is the second.
The problem is that if we simply lump all human behavior into a giant pool of “social constructs,” then all behavior is different and unique, and cross-cultural comparison is impossible. This would be the death knell of anthropology (which some scholars would be happy to ring). In this book I am making the (admittedly troublesome) assumption that cross-cultural comparison is possible, and I do not spend too much time fretting about the underlying philosophical conundrums. Just be aware that they exist.
Cultural relativism, as a counter to ethnocentrism, has gone hand-in-hand with anthropological theory for decades, and still does. Structural-functionalism, in one guise or another, was for a long time the backbone of British social anthropology, and still persists as an undertone in much anthropological writing. The basic idea of structural-functionalism is that a culture is analogous to a living organism, with the different parts performing different functions, yet working together for the benefit of the whole. Structural-functionalists argued that the goal of the anthropologist was to figure out what the function of each “body part” was and not to comment on its method of carrying out its function. If a particular religion, no matter how odd it looks to outsiders, functions to keep a society happy and well ordered, then it “works” within that context. Criticizing or eradicating that religion would kill the culture (just as cutting out someone’s liver would kill that person). Therefore, the anthropologist’s job is not to criticize or interfere, but to simply take notes and try to assess what the purpose of certain behaviors is.
Lauriston Sharp’s article “Steel Axes for Stone-Age Australians” (1960) is a classic example of structural-functionalism in action. Sharp (an ironic name for someone interested in axes) shows that the axe in traditional Yir Yoront society functioned in a way that was completely different from its European counterpart. The Yir Yoront lived on the Cape York Peninsula in the far north-east of Australia, and until the 1930s were relatively autonomous, although they had ongoing contact with European missionaries. Traditional Yir Yoront axes were prized possessions held by clan elders and highly valued for both practical and ritual purposes. The stone axe heads were made from materials that were not available in Yir Yoront territory, but had to be traded for over long distances involving interactions with multiple neighboring groups. The axes were used primarily by women, and also young men, who had to use complex kinship networks to “borrow” them from elders when they needed them. Axes also played a vital role in indigenous rituals.
Local missionaries attracted women and young men from the Yir Yoront to the mission by offering steel axes in exchange for various activities such as attending church or performing tasks around the mission station. This seemed like a fair deal to the women and young men because the steel axes made their work a lot easier, and obviated the need to go through the complicated and tiresome negotiations necessary to obtain a traditional stone axe from one of the elders. Sharp’s surprising conclusion is that the simple exchange of steel axes for stone axes caused the rapid demise of Yir Yoront culture. The stone axe was of such central importance to the Yir Yoront that without it, all manner of social institutions, including respect for elders, kinship patterns, gender roles, alliances with neighbors, religion, and even day-to-day activities, simply fell apart. Using the analogy of the body, it was as if the heart had been cut out of Yir Yoront culture. Without its “heart” it could not survive.
Sharp’s argument is certainly oversimplified. There were numerous other encroachments on the Yir Yoront from colonists happening at the same time, so the replacement of stone with steel was not the sole cause of the culture’s demise. But his underlying point has merit. It is ethnocentric to think of the axe as having a universal meaning and purpose, and quite mistaken to think that you can swap out one kind for another without ripple effects throughout the culture. Every part of a culture is interconnected with all the other parts in ways that are not always easy to see, much like the human body. A pain radiating down your arm may have nothing to do with your arm, but may be a symptom of a heart attack; a toothache may be caused by a sinus infection. Furthermore, what looks like a sickness in a population may, in fact, be a benefit, depending on environmental conditions. Sickle cell disease is caused by a gene that produces oddly shaped red blood cells which clump when deoxygenated; a person who inherits the sickle cell gene (S) from both mother and father develops the full disease, which causes serious cardiac and respiratory problems and, historically, early death. A person who gets a normal gene (N) from one parent and a sickle cell gene (S) from the other has the milder sickle cell trait: some physical problems that a person without the sickle cell gene does not have, but survival to adulthood is the norm. Carrying the sickle cell gene is not normally a good thing. But people who carry it are resistant to the parasite that causes malaria. So, if you live in a region where malaria is endemic, and anti-malarial drugs are not available, you are better off carrying the sickle cell gene than not.
In other words, cultures have to be viewed holistically, and not just from the perspective of individual traits, good or bad. Here is where cultural relativism shines. Before passing any judgments on particular traits, we must look to see how they function in the culture as a whole. What looks dysfunctional at first blush may be vital to the health of the culture when viewed in a wider context. But what about when you have taken a good look at the whole culture? Then what? Some anthropologists argue that enough is never enough: there is always more to learn before passing judgment. Others argue that there comes a tipping point beyond which judgments are possible and some traits can be earmarked as bad or dysfunctional. I tend to lean in that direction. After all, we make judgments about our own cultures all the time, and we don’t always have all the information needed to make such judgments.
US expenditures on the military-industrial complex are immense. In fiscal year 2017, the US government spent $590 billion on the military, which was more than China, Russia, France, Germany, the UK, and Saudi Arabia spent, combined. A reasonable case can be made, and has been made by citizens of the US, that the military budget is way too high, and that the money should be spent on other things such as education, eradication of poverty, and medical benefits. I don’t disagree, but taking a step back reveals problems in the big picture. All 50 states in the US gain economic benefits from military expenditures, whether it be manufacturing jobs from the production of weapons, uniforms, and the like, profits to be made from services provided in towns with military bases, or any one of a dozen other ways that there is money to be made from the military budget. Members of the US Congress are not going to vote against budgets that benefit their local districts, out of pure self-interest (and I don’t even need to bring in the wrinkle of defense contractors spending large sums to help elect their favored candidates), and they are not going to vote against massive military budgets in general because the whole US economy depends on them. In consequence, war is all but inevitable, because you cannot have a $590 billion per annum military sitting around all the time doing nothing.
The epistle writer Saul of Tarsus (aka St Paul) faced much the same dilemma. When he converted to Christianity he pledged to be generous and kind to friend and enemy alike, and to live a life of lovingkindness, and we have his letters to prove it. Yet, in those same letters, he not only does not denounce slavery, he advocates that slaves be dutiful and uncomplaining. This is in part because he knew that slavery was vital to the health of the Roman empire, and, even though he detested much of what the empire represented, he knew that overthrowing it would lead to untold destruction and misery – for everyone.
You cannot change a part of a culture you do not like without incurring attendant changes that you may not like. That is where cultural relativism comes in, even though you end up heaping difficulty upon difficulty. Radical relativism requires you to do nothing but observe, even if you observe problems, because those “problems” may have hidden benefits. Weaker relativism says that it is all right to step in and make changes if you have assessed the risks adequately. I will leave you to make up your own mind about this issue, and also to decide for yourself what kinds of behaviors you are comfortable observing without judgment, and which ones you find universally offensive (if any). This is an area where there is major disagreement among anthropologists – when they think about it. Unfortunately, they don’t think about it enough, and they don’t spend enough time thinking about the impact of their own research on the people they study. It is impossible to observe a culture without changing it in some way.
Fortunately, this book is not about massive social or cultural change and its collateral effects. I am concerned with the problems that we all face in everyday life, and I suggest ways that these problems can be solved, or minimized, by looking at the ways other cultures handle such matters, or by applying general anthropological theory developed by observing other cultures. In the process I hope you gain insight into how anthropologists view the world, and how that view is different from what you are used to.
Chapter 2: You are OK, How am I? Feedback Loops
What if I were to ask you to list the combination of things that make you who you are? If I made such a list I could say that I am of average height, I have grey hair, and I am thin; I am a teacher, a widower, a paramedic, a brother, and a father; I am friendly, sociable, reasonably good looking (and modest !!); I sing baritone and play several musical instruments moderately well. There are other things, but that list can get us started. Only one of these features is independent of other people. I have grey hair whether you compare me with other people or not. It is a fixed part of my identity. All the others are dependent on the existence of other people. When I say I am of “average” height, I am saying I am average in comparison with other Europeans. But, right now I live in Cambodia where I am certainly not average in height. Almost all Cambodians are shorter than I am – some of them, a lot shorter. I am also not thin by Cambodian standards. Some of the other attributes I have listed have to do with how I compare to other people, and some concern how I relate to other people. I am not friendly and sociable in a vacuum, for example. The factors I want to concentrate on for the moment are teacher, widower, paramedic, brother, and father. In each case I fill the role because of how I relate to specific individuals. I cannot be those things without other people who also have a special role in relation to me.
The role I will focus on first is “father.” I am a father because I have a son. Before I had a son, I was not a father. I cannot be a father in the abstract. That kind of pair was called a “dyad” by the anthropologist Gregory Bateson. Bateson wanted to show that while dyads are made up of two separate individuals, when they interact they become one unit. He called the interaction between two individuals in a dyad a “feedback loop,” taking his terminology from engineering. The dyad is a system as a whole and cannot function well unless both parts are in constant feedback. They have limited use as isolated units.
To help explain how dyads work we can use the heating system of a house as an analogy. The system has two main components – a furnace and a thermostat – linked together in constant feedback. Both have individual existence, but when they are linked in a feedback loop their nature changes and they become a single unit. A furnace creates heat and a thermostat records the temperature, and they can do this without being connected to one another. If they are connected in a feedback loop, however, they create a new system and cease to be independent. Let’s say you set the thermostat at 70 degrees Fahrenheit (21 degrees Celsius if you live anywhere but the U.S.). The thermostat constantly checks the temperature in the house and can do one of two things. If the temperature is too low, the thermostat sends a signal to the furnace to turn on, and the furnace will send heat through the house. If the temperature is above a certain reading, the thermostat sends a signal to the furnace to turn off, and without heat from the furnace the house will begin to cool down.
The house does not stay exactly at 70 degrees. The thermostat allows a limited range of temperatures. If it is set at 70 degrees it will let the temperature fall to, perhaps, 68 degrees before it sends a signal to the furnace to turn on, and it will let the house rise to 72 degrees before it sends a signal to the furnace to turn off. Thus, over the course of a day, the house temperature will go up and down, but, if everything is working correctly, it will stay roughly in the neighborhood of 70 degrees. This situation is known as stable feedback. Communication between the thermostat and the furnace goes two ways. The thermostat sends information to the furnace whether to turn off or on, and the furnace sends heat (or not) back to the thermostat. Many human relationships exist in stable feedback, but I will get to that in a minute. First, I need to talk about unstable feedback, again, with an analogy.
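For readers who like to see the mechanics spelled out, here is a minimal Python sketch of the thermostat-and-furnace dyad, using the 68–72 degree band described above; the heating and cooling rates are made-up numbers chosen only to make the loop visible, not measurements of any real system.

    def simulate_heating(minutes=24 * 60, start_temp=65.0):
        temp = start_temp
        furnace_on = False
        readings = []
        for minute in range(minutes):
            # The thermostat's half of the loop: a signal sent to the furnace.
            if temp <= 68.0:
                furnace_on = True      # too cool: ask the furnace for heat
            elif temp >= 72.0:
                furnace_on = False     # warm enough: ask the furnace to stop
            # The furnace's half of the loop: heat (or its absence) fed back to the house.
            temp += 0.1 if furnace_on else -0.05   # made-up rates, per minute
            readings.append(round(temp, 1))
        return readings

    # The last few readings hover near 70 degrees, never exactly on it.
    print(simulate_heating()[-10:])

Run it and the temperature drifts up and down but never strays far from 70 degrees: each component keeps correcting the other, which is stable feedback in miniature.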
Consider a setup where you have a microphone sending sounds to an amplifier and then out a speaker. In this case, you need to avoid communication between the components because they will cause unstable feedback. In the usual situation, sounds go into the microphone, are sent to the amplifier where they are magnified, and then sent out of the speaker. As long as the amplified sounds do not go back into the microphone, everything is fine. But, if you point the microphone directly at the speaker you create unstable feedback. In this situation, sounds go into the microphone, are amplified, then come out of the speaker to be fed back into the microphone, amplified again, then out the speaker, back into the microphone, and round and round until you have a horrible, loud squealing sound, which you have to stop by pointing the microphone away from the speaker. Human relationships may also exist in unstable feedback.
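The microphone-and-speaker loop can be sketched the same way. The amplification factor of 1.5 below is invented; the only thing that matters is that it is greater than 1, so each trip around the loop is louder than the last.

    signal = 0.01    # a tiny initial sound picked up by the microphone
    gain = 1.5       # made-up amplification factor (anything over 1 will do)
    for trip in range(1, 21):
        signal = signal * gain    # out of the speaker and straight back into the microphone
        print(f"trip {trip} around the loop: volume {signal:.2f}")
    # The volume grows without limit until something squeals, clips, or breaks:
    # unstable feedback. Point the microphone away from the speaker and the loop is broken.

After twenty trips around the loop the tiny initial sound is more than three thousand times louder than it started. That is the squeal.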
In extremely general terms, stable feedback is good, and unstable feedback is bad whether you are talking about mechanical systems or human relationships. Stable feedback keeps systems working as they should, and unstable feedback causes them to fly out of control. There are exceptions, of course. A genocidal dictatorship might be in stable feedback, for example, and if you live in one, you might want to throw it into unstable feedback in order to cause it to break down (even though, in that case, unstable feedback could be disastrous also). For the moment, I want to deal with common situations where stable feedback is beneficial, and unstable feedback is not. Before I do that, I need to give you a little more technical information and then I will explain how understanding feedback loops in human relationships can help you.
Also in very general terms, there are two kinds of dyads when it comes to humans: complementary and symmetric. A complementary dyad exists between two people who are unequal, with one being dominant and the other subordinate. Examples include teacher/student, employer/employee, father/son, and doctor/patient. A symmetric dyad exists between two people who are roughly equal, such as co-workers, siblings, and neighbors. Both kinds of dyads always involve some kind of feedback loop. For a stable, peaceful, and predictable life (if that is what you want), they need to be in stable feedback. The big question is how to establish and maintain stable feedback. The first order of business is to recognize the various dyads in your life and to understand how they work. Let’s look at complementary dyads first.
Imagine a hypothetical, healthy economy with close to full employment. I am being overly simple in order to illustrate a point, so bear with me. In such an economy, the employer/employee relationship usually starts off in stable feedback. The employer cannot be too stingy when offering a new job because, with near full employment, potential employees have choices and will likely take the job that pays the best wage with good working conditions. An offer that is lower than all the rest will get few takers. Negotiating wages and working conditions during the hiring process can be civil, and both employer and employee will likely come out of it reasonably satisfied. All will be well as long as the economy is doing well, and the employer’s business is also doing well. The employer needs workers, and is content to keep them happy with reasonable work conditions and wages. The employees need jobs so they too are content to work hard enough in order to stay employed. It is in the interests of everyone involved to keep things stable. If the boss becomes too demanding, employees can become resentful and quit. Likewise, if the employees become too demanding, the boss can fire them. They both know the limits of what they can expect. This situation is akin to the thermostat and furnace analogy. Relations between employer and employee can fluctuate slightly, but, with a little back and forth, they stay within acceptable limits. The relationship is in stable feedback.
This kind of dyad in stable feedback can become unstable for many reasons. Suppose that our hypothetical employer and employee are doing fine, but the national economy goes sour and the business starts floundering, and unemployment increases. One common “solution” would be for the employer to gather all the employees together and explain that some of them will have to be fired because the business can no longer pay all of them. This situation could well throw the employer/employee dyad into unstable feedback. The employer can become more demanding, asking the employees to be more productive, work longer hours, and/or take pay cuts. For fear of losing their jobs, employees will likely accept the harsher conditions. They cannot afford to lose their jobs because there is widespread unemployment. Consequently, they may become more docile and accepting of worse and worse conditions. Now employer and employee are in unstable feedback, like the microphone and speaker analogy. The more the employer demands, the more the employees submit to those demands. The employer becomes more and more demanding, and the employees become more and more submissive. Eventually, as with all unstable feedback loops, something breaks. To avoid problems the system must be stabilized.
It is important to understand that in unstable feedback situations, everyone loses, not just the people at the bottom. A system in unstable feedback is in stress at all levels. If it is not corrected, one of the components will break, and with the employer/employee dyad it is not necessarily the people at the bottom. The stress of becoming more and more dominant may cause the employer to break down. Maybe the employer does not like being a demanding, tyrannical person on the job. The unstable feedback loop can force the situation, though. The trick is to see that unstable feedback is at work, and to stop it before it gets out of hand and something breaks. Mechanical feedback loops usually have safety valves built into them, so that if they flip into unstable feedback, the safety valves stop the system and prevent breakdown. Electrical circuits have fuses, boilers have pressure valves. Human systems can have safety valves too, but quite often they do not, and so they break down.
Certain rituals, or activities, in society, known as rituals of inversion, can act as social safety valves. In this case I am not necessarily talking about religious rituals. Any routine behavior that is performed according to standard protocols can be a ritual whether it is sacred or secular. Rituals of inversion reverse the normal state of affairs, typically so that a subordinate becomes dominant, and the dominant one becomes subordinate. At one time, rituals of inversion were common in Europe. In England in the Middle Ages, some cathedrals held a Boy Bishop day once a year (usually on the Feast of Holy Innocents – December 28th). The youngest choirboy was made bishop for a day, during which time he wore the bishop’s robes and mitre (looking ridiculous, of course, because they were too big). In these oversized clothes he performed a mass, and generally acted – in a foolish way – as if he were the real bishop. In the Royal Navy in my father’s time, in the 1930s and ʼ40s, the youngest midshipman was made captain for the day on Christmas Day, and everyone had to obey his orders (including the real captain). Choirboys and midshipmen were under the command of the high and mighty 364 days of the year, but for one day they got to make fun of the system, which helped defuse its power over them for a while. These rituals were their safety valves.
Rituals of inversion such as these have lost their force these days. The Boy Bishop ceremony, if it occurs at all now, is a mere token ceremony, with the choirboy putting on the bishop’s mitre and sitting in his seat for a few photos for newspapers and the congregation, before things revert to normal. Having a choirboy actually say the mass was banned centuries ago as sacrilegious. It is. But back in the Middle Ages, the Catholic Church had absolute power over people (including monarchs) and could afford to let loose once in a while. Everyone benefited, and potentially unstable dyads were re-stabilized by reversing roles for a day. Without such rituals, the powerful can simply get more and more and more powerful, and the social system can be in seriously unstable feedback, and eventually break down. One can easily argue that the imbalance of power and wealth in European countries and the US nowadays indicates a system in catastrophic feedback that will eventually cause breakdown.
England broke down in the 17th century when the Puritans came to power and removed all of the social safety valves that had been in place for centuries. The Puritans fought a civil war with loyalists to the monarchy. They executed the king, Charles I, defeated the royalist army, and put in place a strict Puritan republic. If the Puritans had had anthropologists as advisers to tell them that safety valves were necessary to maintain stability, England might still be a republic. Instead, the Puritans explicitly banned all forms of entertainment, including Mock Mayors and Mock Kings (or Lords of Misrule), Boy Bishops, and all other ceremonies that involved role reversal and parodying the social system. In consequence, English society had no safety valves to stabilize it. The Puritan leadership demanded more and more work from the people, and if they complained, they were severely punished. English society was in seriously unstable feedback. It had to break down.
The English republic lasted for only 11 years, from 1649 to 1660. By 1660 England was hopelessly unbalanced, and it did not take long for the country to declare that it had had enough of life without safety valves and to welcome Charles I’s son, Charles II, as its new king. One of his first acts was to declare publicly, in notices sent all around the country and read out loud in pulpits and town squares, that Lords of Misrule, Mock Kings, etc. were not only legal, but should be encouraged. The poor were still poor, and the rich still rich. But the society was in stable feedback once again. From that time until now, England has had a monarch as the head of state. The complementary dyads between monarch and people have had to be adjusted from time to time, but they have remained stable.
You may have unstable feedback loops in your own life, at work or at home. These will certainly make your life uncomfortable, but to change things you have to first identify the feedback loops, and then figure out what to do about them. When my son was growing up, I was well aware of the possibility for unstable feedback between us. I home schooled him, and his mother died when he was a young teenager. We lived in an isolated, rural village in the Catskills of New York where we had few neighbors, and not much chance for my son to meet with friends unless we got in the car and visited other people. Much of the time it was just the two of us at home together. I was in charge, of course, but if all I did was issue orders and have him obey them, we could have easily gone into unstable feedback. I put a number of safety valves in place to prevent that happening.
Sometimes when we were out visiting friends or going to karate or band practice, we would get in the car and I would say, “You are navigator. Get us home.” I did nothing without his orders, and whatever he ordered, I did, even if it was completely wrong. I made zero comments or suggestions. If he forgot to order a turn, I continued straight. Usually things worked out just fine, but if we got lost, we got lost, and he had to figure it out for himself. Until we got home, he was completely in charge.
I did something similar on a much grander scale when he graduated from high school at 16. I offered him the chance to spend three weeks with me in Japan, all expenses paid. The rule was that he had to plan out the trip – hotels, flights, trains, sites, festivals, meals etc. – and he would be in control for the whole time we were in Japan. We sat down at our computers with a calendar, and he decided where we would go and for how long, how we would go from place to place, where we would stay, and what we would do. I did the actual bookings, and paid, of course: all the decisions were his. It took him a month to get everything organized because he had a thousand things he wanted to do, and we could not do them all. We also had a limited budget which he had to consider. It was generous, but not regal.
We had an absolutely splendid time. We began in Tokyo, going to the kabuki, having breakfast in the Tsukiji fish market, going up Tokyo Tower, playing video games in the main electronics district, visiting castles and palaces, shopping for clothes and gadgets, and doing a certain amount of general wandering around by local trains. Then we moved on to Nagoya for a famous sumo tournament, then to the main green tea region around Uji with day trips to Kyoto and the sacred mountain of Mt Atago (which was hair-raising because we had to take local buses using an impossibly complex system, with no one on the buses or at bus stops speaking English), finishing in Osaka for the annual fishermen’s festival, before heading back to Tokyo and home. I made zero decisions the entire time, and my son came back feeling empowered. When we returned, our feedback loop returned to normal, with me in charge. Our feedback loop was stable because it had safety valves built in. I will not go so far as to say that we always lived in stable feedback. Fathers and sons have their issues. We managed, and I believe he turned into a reasonably stable young man. He turned into an anthropologist too, as it happens, but that was not my fault.
That is complementary feedback. We also have to consider symmetric feedback. Spouses, siblings, and co-workers can operate according to the principles of symmetric feedback if they are genuine equals. Keeping these feedback loops stable is rather different from keeping complementary feedback loops stable. First of all, let me be clear about symmetric relationships. To be truly symmetric the two people must genuinely be equals. That is not always the case with spouses or siblings, for example. I will deal with unequal dyads in a moment. I want to focus on true equals right now. True equals can either co-operate or compete. Generally speaking, co-operation is potentially stable, and competition is potentially unstable. This fact should be obvious. When two people co-operate on a problem, the feedback between them is mutually beneficial and mutually reinforcing. When two people compete, the feedback can easily get out of control with each half of the pair doing more and more and more to outdo the other. Whether a symmetric dyad is in stable or unstable feedback is entirely in the hands of the two people in the dyad.
When I was a teenager, I once went camping in the woods with my Boy Scout troop. We were divided into two patrols. I was the leader of one patrol, and my friend Geoff was the other patrol leader. Geoff and I were at school together, and I knew him as a highly competitive boy. He was particularly good at sports and excelled as a sprinter. Every year he represented the county at the all-England championships. Our scout leader for the camp gave both of us a list of activities to accomplish with our respective patrols. Each time we finished an activity we would get points for how well we did. We could do as many or as few of the activities as we wanted. At the end of the camp, the patrol with the most points would win a prize: classic symmetric competition.
First day of camp, Geoff and I met to talk about the competition. As usual, he was all ready for battle: his patrol was going to win. I, on the other hand, was not interested in competition. I went back to my patrol and asked them how they wanted to proceed. We could spend the week working like dogs on the activities, perhaps winning, perhaps losing, or we could decide not to do any of them. They decided not to compete (maybe with a little help from me). Geoff led his patrol in many of the assigned activities, and won the prize. My patrol did a variety of things including hiking, picking and cooking wild foods, and building things out of natural materials, none of which was on the list of activities that would earn us points. I am not going to pass judgment on which patrol had the best time. We both had fun. My point is that the two patrols did not descend into unstable symmetric feedback through competition, because my patrol refused to participate. Refusing to compete is always an option in symmetric situations, although there may be negative consequences. My patrol did not win the prize. We had a good time, though. Weighing the consequences of competing, or not, can get complicated. The bottom line is that in symmetric dyads we always have a choice.
Things get complicated when we move to situations that are more than simple dyads, as is true of our lives in general. You can be in symmetric feedback with your co-workers on the job, but you are all also in complementary relationships with the boss. If the complementary relationships with the boss are stable, then the symmetric relationships between co-workers may also be stable. If the boss is not under pressure to fire someone, or the workers see no special need to curry favor with the boss, the whole system can be completely stable.
Now consider another wrinkle. Not all relationships that have the potential to be symmetric are symmetric. Husband and wife, for example, can be in a symmetric relationship, or in a complementary one. Historically in the West it was frequently complementary, with the man holding the higher social role. These days it is possible for husband and wife to live in symmetric feedback, but this situation is not universal, by any means. I have no judgment to make about a couple that wants to live in a complementary feedback loop that is stable, whether the man or the woman is the dominant one. If they are happy with the situation, I have nothing to say. If the feedback loop is not stable, we have a problem. I will remind you that when a complementary feedback loop is unstable, no one is happy: neither the dominant, nor the subordinate partner. Instability creates problems, and eventually the system breaks down, one way or another.
A husband and wife living in an unstable complementary relationship have three main choices: (1) Make the complementary relationship stable. (2) Change the complementary relationship to a (stable) symmetric one. (3) Do nothing, and let the relationship break down. I do not have firm statistical evidence to back me up, but my best guess is that #3 is the most common, #1 is next, and #2 is very rare. It does not matter which spouse is dominant and which subordinate as long as the feedback loop between them is stable, and both are comfortable with the situation. I would venture to say that a great many happy marriages work in this fashion. It is not something I ever wanted, but my values are irrelevant. At minimum, an unstable complementary relationship can be stabilized by having some safety valves in place. These will depend on individual circumstances, so it is impossible to generalize. Husband and wife can, for example, reverse roles once in a while. Safety valves are useful, but it is better if they never have to be used. To achieve stability, both partners have to recognize what is making their feedback loop unstable (which entails recognizing that the feedback loop exists, in the first place, and that it is, indeed, unstable).
If the two can communicate about the nature of the feedback loop, instead of degenerating into accusations and name calling, the loop might be stabilized. In other words, both partners have to understand that they are acting in a system as a dyad, not as individuals. Seeing the problem as the fault of one person, as an individual, is a classic mistake, and we are all prone to it because we like to think that people act as individuals. We do not. We are endlessly responding to our place in dyads.
One way of stabilizing an unstable complementary dyad is to have a discussion about the situation to try to stop the instability in the feedback loop. Another is for one partner in the dyad to see the feedback loop for what it is and refuse to participate. I do not mean, necessarily, to leave the dyad (although that is one option), but not to continue to reinforce the unstable feedback. This can be achieved in a number of ways. Not following the normally expected behavior in an unstable feedback loop is one way to stabilize it. For this to work there is one proviso that is critical: you have to be willing to accept that the relationship may break down.
Not caring whether an unstable feedback loop becomes stable or breaks down is an extremely strong position, particularly if you are in the subordinate position. For example, just after I graduated from university in England, and before I went to the United States, I taught in several secondary schools in some rough areas. They were not pleasant jobs, and I went into them with the mentality that I had learned from being a student myself. Both my students and I knew we were in complementary dyads, but there was no way for me to make those dyads unstable by forcing my dominant role, because the students simply refused to participate. If they misbehaved and I punished them by giving them extra work or making them stay after school, they did not care. Sometimes they did the extra work, sometimes they did not. It was the same either way to them, and there was no way I could make the dyads unstable.
A difficult, but effective, way of stabilizing an unstable complementary dyad is to convert it to a symmetric one. Usually this is not possible, or is not considered as an option. A marriage that starts as a complementary dyad starts that way for a reason. There can be both social and individual factors that are hard to break. When researching how many women in the US change their last names when they get married, I was surprised at how many men were deeply insistent that they would not marry a woman unless she agreed to change her name, because failure to do so was a sign she was not completely committed. Fair enough, but what about the man? What was he going to do to assure his future wife of his commitment? Looks like a complementary dyad from the start to me. Turning that relationship into a symmetric dyad is going to be very difficult if that is how it starts.
One time, I was able to turn a potentially complementary dyad into a symmetric one, but in describing the situation I am going to change some of the details to obscure the particular circumstances (a common trick of anthropologists to protect the people involved). I wanted a visa that was extremely difficult to get and involved numerous steps including an interview with a consular officer. It took me many months to get to that point, involving a number of trips to different offices, accumulating a raft of documents, and having fingerprints and photographs taken and verified officially. When I showed up for the interview, the officer presented me with a form to sign. The oath at the bottom of the form where I had to sign contained a phrase I objected to. It was not a requirement for the visa, and I am not sure why it was there. The law of the country allowed me to strike through the phrase. Before I did this, I raised my objection to the phrase. This dialogue followed:
Me: I do not want to sign this form with this phrase in place.
Officer: Well, I guess you don’t want a visa today.
Me: I do want a visa, and according to the laws of your country I do not have to accept this phrase. I can strike it out and still obtain a visa. If you deny me a visa because of this I will complain to the consul.
Officer: Oh, OK. I’ll strike it out for you.
At the outset he wanted to set up the dyad as complementary, with him dominant and me subordinate. That was the situation he was accustomed to because most people who came to him were in some kind of desperate need. I was not. By refusing to accept that the dyad had to be complementary I was able to make it symmetric. From that point on, the interview was more like a nice chat between friends, and I got my visa. I believe he had a good time too because for once he could break out of his dominant role.
An exercise you can do is to map out all the dyads in your life. When I was a university professor, and head of department, I was the dominant half of complementary dyads with my junior faculty and my students. I was the subordinate half of complementary dyads with my dean, the provost, and the president of the university. I was in symmetric dyads with my fellow department heads, and with my wife. For the most part, the dyads were stable, but not always. For me these dyads, taken all together, created a healthy balance. I was sometimes the dominant partner, sometimes the subordinate one, and sometimes I was a co-equal.
Once in a while, the symmetric dyads I was part of would go into unstable feedback, creating problems. At meetings of department heads there was always the risk of unstable competition. We always wanted more money for our departments and lower workloads. For many years this potential for instability could be defused by offering to co-operate instead. In this case it was always necessary to show that co-operating would produce better results for all involved than competition, but it did not always work, and one of the reasons I left the job was that the president at the time encouraged increased competition through workload and financial rewards for the “winners.” When my efforts to resist the president failed, and co-operation between department heads broke down, I quit. When a dyad is in unstable feedback and efforts to stabilize it fail, you can always leave the dyad. We always have choices, even if the consequences are not optimal. I lost a good salary when I quit, and I missed my students whom I liked very much.
The crucial point of this whole chapter is that we should not see unstable feedback or leaving an unstable dyad as an individual failure. It is a failure of the total system. When a microphone and a speaker are in unstable feedback it is neither a failure of the microphone nor of the speaker. Both are doing their jobs. In fact, they are in unstable feedback precisely because they are functioning correctly. They have to be repositioned to get out of unstable feedback. So do you.
Chapter 3: Traffic Jams: Optimizing and Maximizing
For thirty years I lived in the New York Catskills and commuted to my university near White Plains in Westchester County three days a week during term time. I had a variety of routes I could choose from, and I decided which one to take based on all kinds of factors including the weather, time of day, and knowledge of potential holdups caused by road repairs and the like. On a good day I could make it door to door in about one hour and forty-five minutes. On my worst day it took five hours. When you have done a commute of that kind for thirty years you come across traffic jams often enough, and they occur for all kinds of reasons. My five-hour commute was caused by a combination of snow and road accidents. Over time you learn how to deal with the traffic jams on your regular route one way or another depending on what is causing the jam. Sometimes the only thing you can do is turn off the engine and sit calmly in stalled traffic until something clears and you are moving again. At other times there are things you can do to ease the situation. In this chapter I want to talk about only one kind of traffic jam, the one caused when you are traveling in heavy traffic on a multi-lane highway and one of the lanes up ahead is taken out of service.
If the traffic is not heavy, when you are driving along at a good pace and you see a sign indicating a lane closure ahead, there is no problem. Drivers in the lane that is going to end move into another lane, and things keep moving along just fine. There may be a little slowing, but nothing serious. If the traffic is moderately heavy, there is some inevitable slowing because the flow of traffic under those conditions is similar (not identical) to the flow of liquid in a pipe. If a pipe is carrying very little water – that is, if it is not running full – a constriction at some point does not affect the flow, so long as the stream of water is narrower than the constriction. But if the pipe is completely full of water, a constriction will slow the flow. Unfortunately, this analogy works only up to a point because there is a fundamental difference between the flow of a liquid in a pipe and the flow of traffic on a highway: drivers have brains, water molecules do not.
In light and moderately heavy traffic, when drivers can easily shift into another lane when a closure is ahead, and the reduced lanes can handle the traffic with no problem, the slowing occurs because some drivers apply the brakes when they see other drivers shifting lanes. Liquid molecules don’t have brains or brakes, so their flow is determined by physics only, not by social forces. In these circumstances the flow of a liquid is more efficient than the flow of traffic. There would be fewer traffic jams if drivers had no brains and followed the laws of physics. Here’s the nub – traffic can follow the laws of physics if all the drivers agree to follow those laws. But it must be all: it takes only one driver to screw up a well-organized traffic pattern and cause a jam. The lesson we can learn from traffic jams applies to a wide variety of social situations.
Let me be very clear here. I’m not talking about official rules and laws of the road. I’m talking about rational decision making. What I want to discuss is the situation where there is heavy traffic that the highway is barely managing, and one lane ahead is closed. Often in this situation traffic does not just slow to compensate (as a liquid in a pipe does); it frequently stops completely and there are long delays. Here’s a thought experiment for you. Imagine you have a garden hose and the tap feeding it is on full blast so that water is coming out the end freely. Now take that hose and squeeze it a little bit (not a lot) with your fingers. You’d expect the water coming out the end to reduce a little bit: right? You would not expect the water flow to turn into a few drops or a tiny trickle. Yet, that is exactly what happens when traffic flow is constricted by a lane closure. The flow of traffic through the constriction is reduced to a trickle, and traffic backs up at the point of the lane closure. Why? That’s where anthropology comes in.
The decisions that drivers make turn an orderly flow of traffic into a clogged mess. Let’s examine those decisions. The thing about water flowing in a hose is that the individual water molecules are not making any decisions. They all get through an unexpected constriction in the hose, but they are all slowed down just a little. They are all slowed down equally because they are following the laws of physics, not making decisions. If drivers could act like water molecules in a hose, every car would be slowed just a little, but everyone would get through in an orderly manner. The way this could, in theory, work would be for every driver to see the warning sign indicating a lane closure ahead, immediately slow down, and for the drivers in the lane that is due to close to merge over into a lane that is going to remain open. This would be a reasonable imitation of water molecules. As with water molecules, one or two drivers would be a little more inconvenienced than the majority because of difficulty merging at a particular point. But, in general, the flow would continue – without drivers having to stop and start. They would all have to go a little slower – that’s all. My point is that they all have to go a little slower from the outset. Every single driver must be in agreement.
Here I need to introduce two technical terms: optimizing and maximizing. In an optimizing situation, everyone works together for the good of the whole. Everyone benefits (or loses) more or less equally when a problem arises because everyone works together for the benefit of the whole. Everyone is a “winner” (or “loser”) but gains (and losses) are spread out evenly. In a maximizing situation, individuals compete to gain the maximum benefit for themselves at the expense of others. There are some “winners” and some “losers.” Traffic jams can be resolved either by optimizing or by maximizing. Typically, in the US, maximizing makes the traffic situation worse for most drivers, and better for a precious few. Let’s look at maximizing first, because it is the usual state of affairs – certainly the one I have experienced the most.
You are driving along in moderately heavy traffic and you see the sign about a lane closure ahead. You have a choice, especially if you are in the lane that is going to close. A good percentage of drivers will see the sign and attempt to move over into a lane that is not closing. But a few will continue in the closing lane, especially if the drivers ahead of them are moving over, because that means that the lane they are in is becoming emptier, allowing them to get to the point of closure faster – while those in the other lanes are moving at a crawl, if at all. If the traffic has slowed to a virtual standstill at this point, a few drivers may even move over on to the hard shoulder to try to get ahead of the stalled traffic. In anthropological terms, the few drivers who are trying to beat all the others to the place where the lane has closed, and, therefore, get ahead of everyone else in getting through it, are called “maximizers” because they want to maximize their benefit, and do not care that their actions are slowing everyone else down more. By speeding ahead to the closure, they force all the drivers in the long line, in the next lane over that is still open, to stop, while they move into it so as to be able to carry on. These few maximizing drivers inconvenience the vast majority of other drivers who have to stop to let them in. It is not just the driver who lets in the maximizing driver who is inconvenienced, it is the whole line of traffic, perhaps stretching back miles, that is delayed more than it need be if there were no maximizers gumming up the system.
The optimizing system for traffic is the one I described before where the drivers are all acting like water molecules in a hose; all slowing down a little to let everyone merge over from the closing lane, but all getting through in an orderly – but slightly slowed – manner. There are cultures in the world where optimizing traffic patterns are the norm, but this is not usually the case in the United States where I have the greatest experience. People in the United States are generally averse to optimizing. It is a culture where competition and individual gains (at the expense of others), are considered normal – natural even. Even if, by chance, a traffic pattern is optimizing, it takes only one maximizer to come along and mess it up. For an optimizing system to work, absolutely everyone must be in agreement. This means that there must be one of two conditions in effect, preferably both together.
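To make the contrast concrete, here is a toy model of the two patterns in Python. It is not a traffic-engineering simulation; the numbers – two seconds per car through the bottleneck while traffic keeps rolling, six extra seconds of stop-and-go every time a maximizer forces a way in at the closure point – are invented purely to illustrate the argument.

    def average_wait(num_cars=100, num_maximizers=0):
        ROLLING = 2        # seconds per car through the bottleneck while traffic keeps rolling
        STOP_START = 6     # extra seconds of stop-and-go each time a maximizer forces a way in
        clock = 0
        waits = []
        for car in range(num_cars):
            clock += ROLLING
            if car < num_maximizers:    # maximizers race down the emptying lane and arrive first
                clock += STOP_START     # the open lane has to stop to let each one in
            waits.append(clock)
        return sum(waits) / num_cars    # average seconds spent in the queue

    print("everyone optimizes:", average_wait(num_maximizers=0), "seconds on average")
    print("five maximizers:   ", average_wait(num_maximizers=5), "seconds on average")

With no maximizers, every driver waits a modest and roughly equal amount. Add a handful of maximizers and, in this toy model, they get through fastest of all while the average wait for everyone behind them climbs: a few winners, many losers.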
First, optimizing works best when everyone in the system knows everyone else personally. In the modern, crowded, urbanized world, the drivers in a traffic pattern do not know one another. In consequence, it is quite easy to ignore the interests of the other drivers and concentrate only on your own interests. This is as much a question of personal temperament and belief, in large anonymous societies, as it is a cultural norm. If every driver in a traffic pattern knew every other driver in that pattern, there could be negative consequences later for selfish behavior while in that pattern. Every driver would have to weigh immediate, short-term, gains against long-term penalties. In this situation there is still no guarantee that drivers would avoid maximizing, but optimizing is more likely if another, second, condition also obtains.
The second condition for optimizing to work is that the entire culture must value optimizing over maximizing. This condition makes it possible for every driver in a difficult traffic situation to come to agreement, without needing to communicate with one another, concerning the best outcome. Naturally, what the optimal outcome is, is based on experience. If you have encountered enough jams, you know what to do without need of communication. If the culture is one that places a high value on optimizing in general, then the individuals in that culture will work to find optimal solutions to specific problems, such as traffic jams. Anyone who wants to break the cultural norms of optimizing, and try to maximize in a difficult situation, will be punished. Optimizing cultures have to have penalties in place to prevent people from maximizing, because if they do not, maximizing will always win out.
Let me repeat that: optimizing must be strictly enforced, otherwise maximizing by even a single individual will bring it down. I mentioned the !Kung of the Kalahari in chapter 1 as an example of a culture that shared food as a norm. They were not only a sharing culture, but also an optimizing culture. The !Kung knew from long experience that a foraging band of about 100 adults was the optimal number. With many more than 100 they exhausted the local resources too quickly and had to move too often; with many fewer than 100, the hunters could not spread their workload evenly enough to ensure that the band was regularly fed. Small maximizing segments would die off. Consequently, optimizing was built into the whole culture of the !Kung. If a young hunter made a big kill and decided to come home and brag about it, the other hunters ridiculed and shamed him. They might even ostracize him, because attempting to maximize could be fatal to the entire band. In this way, young hunters learned to be humble, and to share as a norm. To generalize, maximizing can be penalized, and optimizing can be enforced, but only if the culture wants things that way.
It is possible to enforce optimizing in situations that cause traffic jams when some drivers want to maximize. But the culture as a whole must agree on the outcome, otherwise enforcement will not work. I know I sound a little like a therapist when I say that, but in a way the situations are similar, except I am dealing with cultural therapy rather than personal therapy. Right now, many people in the United States are frustrated by jams caused by maximizers, but for the most part they simply complain and live with the situation, rather than trying to find alternatives. One highly aggressive way to stop maximizing would be to have traffic police stationed at regular intervals from the point of closure back to where the traffic is flowing at normal speed. These police would not only indicate that it was time for drivers in the lane that is about to close to move over, but would also indicate that everyone needed to slow down, both allowing drivers from the closing lane to move over safely, and also anticipating the inevitable slowing of traffic flow. If a driver insisted on speeding up in the closing lane to maximize by getting ahead of the slowing traffic, or acted in some other maximizing way, there would be severe penalties. Would you optimize in traffic if the penalty for maximizing were a $50,000 fine or loss of a driver’s license for 5 years? I imagine you would. But not everyone would. Some selfish billionaire could decide that the fine was worth it. Also, if the penalty system were not absolutely foolproof, would-be maximizers would be tempted to take the risk. Either way, external enforcement, no matter how strict, cannot work fully unless the majority in the culture is in agreement. In any case, enforcement of this kind would be enormously expensive, and would be applicable in only a small number of traffic situations.
In some traffic jams of this sort, individual drivers take the initiative to try to enforce optimization. One of these methods is for drivers in open lanes to refuse to let drivers merge over from the closed lane if they wait until the last minute. You will see this happen quite often on highways in the US. The lane closes, and the cars in the open lanes drive so close together, bumper-to-bumper, that there is no room to squeeze in from the closed lane. This works until one of the drivers in an open lane stops and lets a driver in from the closed lane. Another strategy you will see is for a driver in one of the open lanes, often an 18-wheeler truck, to shift over into the closing lane but maintain the same speed as the open lanes. Drivers in the closing lane are forced to merge into an open lane, or trail behind the driver in the closing lane. Either way, they cannot maximize. For this strategy to work, an individual driver must be willing to enforce optimizing by moving into the closing lane and holding firm. This can be tough for a driver in a small car who has to put up with a lot of horn honking and angry looks from the maximizers behind. Also, the maximizers may still try to squeeze through by moving on to the hard shoulder or finding some other place to pass and continue in the closing lane, which is now empty and looks oh-so inviting to maximizers. Furthermore, the vehicle blocking maximizers from behind cannot prevent maximizers in front.
In some future time, this kind of traffic jam may be solved when drivers become obsolete, and all vehicles are driven automatically. If the vehicles are being controlled by a centralized computer, traffic flow can be managed by software that has optimizing built in. If the computer detects a lane closure up ahead that has the potential to cause a jam, it can slow all the vehicles equally and cause all the traffic in the closing lane to move over into an open lane. With egos taken out of the equation, the traffic can act exactly like water in a hose. This technology is actually available now, and is used to prevent jams in some railway systems. My guess is that even if the technology becomes available in the US in the future, there will be objections, and some people will press for the ability to override the system and maximize.
It’s likely that everyone will agree that emergency vehicles must be given priority. That is the situation now. Ambulances and fire trucks are allowed to exceed the speed limit, ignore red lights, block traffic, and travel the wrong way down one-way streets without penalty. They are not permitted to cause accidents in the process, but if they create jams that is just too bad. In this case, maximizers win and that is all there is to it. I suspect, also, that some road users will insist on being able to maximize in an optimized traffic pattern, not because they have emergencies to get to, but because optimizing simply does not sit well with them. They will ask that an override mechanism be installed in their vehicles so that they can beat the system. Of course, buying and using an override mechanism will come at a price, maybe even a big price. This will be an acceptable situation for many people, as a parallel example from the contemporary world indicates.
Amusement parks are becoming increasingly popular in the US, with the consequence that the popular rides have incredibly long waiting lines. You pay for a ticket that allows you to go on unlimited rides, but you spend most of your day standing in line and only a few minutes actually on the rides. In a worst-case scenario, you spend 9+ hours at the park: 8 hours waiting in line (2 hours each for 4 rides), 40 minutes (or less) actually riding, and the remaining time walking between rides and grabbing something to eat. This is obviously an extreme case, but 60- to 90-minute waits are routine in the high season at many parks. If you desperately hate lines and have the ability to go to the parks in the low season, your experience will be more enjoyable because the lines will be much, much shorter, and you can go on many more rides. But if you must go in the high season, a number of parks have instituted a priority ticketing system whereby you can automatically go to the head of the line (under certain conditions). For the privilege of maximizing in this way you pay more for your ticket – a lot more. The park owners have to charge a lot more for priority tickets, not only because they want to make yet more money, but also because they need to limit the number of maximizers. If every person who went to the park bought a priority ticket, the system would break down. A maximizing system is based on the notion that there are winners and losers, and in a capitalist culture, this kind of system is taken as normal – natural even. If you are a loser you just have to sigh and say, “that’s how it works.” People who have more money get the better deal.
I am not going to take sides and say that optimizing is better than maximizing or vice versa, even though I have a personal preference. There are advantages and disadvantages to both sides. At one time – long ago – it would have been fair to say, even if the statement is grossly oversimplified, that people who favored the left wing of politics were more in favor of optimizing, and the right wing favored maximizing. This is not remotely true any more. To be in favor of a maximizing culture, you have to believe that you have a chance, even a remote one, of being a “winner.” Maybe that culture holds up stories of people who have become “winners” through brains, hard work, and sheer determination. If you believe those stories, you must equally accept that if you end up a “loser,” it is because you did not try hard enough.
Christianity as it is laid out in the gospels is an inherently optimizing system, but even in the first century there were writers who had trouble with the notion. Admittedly there is a kind of reverse maximizing in the sayings of Jesus, such as, “The first shall be last,” suggesting that there is a hierarchy in heaven, and it is the reverse of the system on earth. Underneath that idea, however, there is the general concept that you cannot work your way into heaven, because a place in heaven is not based on rewards. Rich and poor, saint and sinner, all get a place at the table if they decide to love God. Conversely, if you do not love God, you get turned away no matter how exemplary your life has been, because a place in heaven is not based on merit or hard work.
Historically, churches have not been able to cope with a Christian doctrine that denies maximizing. The Catholic church has been in the vanguard in this respect for centuries. This was one of the aspects of the Catholic church that led to the Protestant Reformation (although there were many others). Dante’s Divine Comedy depicts deeper and deeper circles of Hell where sinners are confined based on the gravity of their sins, and higher and higher spheres of Paradise based on the quality of the virtues of those who merit salvation. He was wry enough to place quite a few popes in Hell, but you get the general idea. Both Hell and Paradise, in Dante’s worldview, are maximizing systems, with only a precious few making it to the very top, or the very bottom. Your place in the next life is determined by what kind of effort you put out in this life. Protestants are not immune to this kind of thinking either. Some evangelical Christians in the US believe that pastors get to wear a special crown in Paradise, and some claim that being a pastor is an automatic ticket in. You paid the extra price, so you get to go to the head of the line.
What is happening here is that religious doctrines are mirroring cultural values. Maximizing cultures prefer maximizing religions. Maximizing cultures prefer maximizing everything: business, politics, religion, amusement parks, traffic. Maximizing cultures believe that maximizing is natural – the way the world works: people by nature want to maximize. The common popular mistake that evolution is about a species maximizing, at the expense of other species, comes from a maximizing mentality. The phrase, “survival of the fittest” did not come from Darwin although it is commonly attributed to him, and maximizers accept the idea as a law of nature. But that is not how evolution works. Some species (including humans) have survived, even flourished, because they function using optimizing systems. Their best chance of survival is in co-operating rather than competing. Ants and honeybees are classic optimizers. The individuals in their colonies are all hard wired to work for the benefit of the whole (and ants on a tight trail never have traffic jams). It could be that humans have a genetic proclivity towards optimizing, but human optimizing is not at all like that of ants and honeybees. In humans, optimizing must be enforced because maximizing is always a threat, and optimizing is by no means hard wired.
We have to face the fact that optimizing cultures, such as the !Kung desert foragers, have to suppress maximizing when it arises. If a hunter comes home with a big kill and starts bragging about it and acting superior, the other members of the band ridicule and shame him. Maximizing is a threat to the whole band. Does that mean that maximizing is hard wired? I think not. I suspect that people in all societies exhibit a range of behaviors, and it is not possible to say definitively where those behaviors originate. Some people like optimizing, some like maximizing. It is up to the culture as a whole to decide whether it will tolerate maximizing or not. Western capitalist cultures not only tolerate maximizing, but generally treat it as the only way to be. If you like optimizing, what can you do when you live in a maximizing culture?
The first factor we must come to terms with is that attempting to change a whole culture is next to impossible, and probably not desirable either. Maximizing in the West is here to stay. If you want to create and live in a wholly optimizing community, the best you are likely to achieve is building a small, isolated community of optimizers within the larger maximizing culture. Even this effort is probably doomed to failure. In the 19th century a string of so-called Utopian socialists from Europe, such as Robert Owen, Henri de Saint-Simon, Charles Fourier, and their followers, attempted to build optimizing communities in the United States, including New Harmony (1825), Brook Farm (1841), La Reunion (1855), and the North American Phalanx (1843). They all collapsed within a few years of their founding. Religious communities in the 19th century, such as the Shakers and Old Order Amish, also worked on optimizing principles and were highly successful for a period. The Shakers all but died out because they were strictly celibate, and the Old Order Amish became a maximizing culture (of an unusual sort). Hippies in the 1960s also tried to build optimizing communes, but most did not last.
Short of Armageddon or some massive Apocalypse, the modern world is going to consist of maximizing cultures for the foreseeable future, with a few, small, optimizing communities tucked away in isolated spots. You might be able to optimize on a small scale, however. School and family are probably the best units to work on. I had a mentor once who used optimizing principles in his family of seven. Here is a small example of his principles. When the family had something delectable, such as a chocolate cake, to divide up, any family member could do the dividing. With something like a round cake, making seven equal portions was difficult. The rule was that if you were the one doing the cutting, you got the last piece chosen. If you wanted your “fair” share, therefore, you had to work hard to make the pieces approximately even.
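If you want to see why the rule works, here is a minimal sketch in Python. The portion sizes are invented for illustration: if the other six family members each take the biggest remaining piece, the cutter is left with the smallest, so cutting evenly is the cutter’s best strategy.

```python
# Why "the cutter gets the last piece chosen" enforces optimizing:
# if everyone else picks the biggest remaining piece, the cutter ends up
# with the smallest one, so the cutter's best strategy is to cut evenly.
# The portion sizes below are invented for illustration.

def cutters_share(portions):
    """Return the piece left for the cutter after the others pick greedily."""
    return min(portions)  # everyone else grabs a larger piece first

greedy_cut = [0.25, 0.15, 0.15, 0.15, 0.10, 0.10, 0.10]  # cutter hoped for 0.25
even_cut = [1 / 7] * 7

print(round(cutters_share(greedy_cut), 3))  # 0.1   -> greed backfires
print(round(cutters_share(even_cut), 3))    # 0.143 -> even cutting pays best
```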
I taught numerous classes in university in the United States, and in one where optimizing was a major topic I tried, now and again, to get the students to optimize the class. They never agreed. At my university it was my duty to assign a grade, A to F, to each individual student, which customarily led to classic maximizing. Once in a while, I suggested that we optimize the grading system. To do so, I would assign the same grade to all the class members based on how I rated their overall effort as a group, not as individuals. The ambitious maximizers in the class immediately rejected the idea, thinking that I would simply average the class’s efforts, and they would lose out because the slackers would bring the class average down. That was not how I envisaged the system. Slacking under my rules would have inherent penalties. For example, following the rules of foraging bands, a persistent slacker could be publicly shamed in class. If the system worked well, it would also be in the interests of the ambitious students to help the weaker students. They could hold sessions outside of class to go over lectures and reading materials, benefitting everyone. In 35 years of teaching in university I never had any formal class take me up on the offer, although I did teach a few voluntary seminars that worked in this way.
There is also a version of optimizing arbitration that some courts in both Europe and the United States use for conflicts between parties. Under this system, the arbitrator hears out both parties and then orders them to go off separately and decide on a solution they would be content with. They must not communicate with one another, and their final solutions are delivered, sealed, to the arbitrator. The arbitrator then reads the two solutions and picks one of them. Under this system, optimizing by both parties is the best strategy. Let’s say that the two parties are a car dealer and a customer in disagreement about the breakdown of a new car. The customer was driving to an important business meeting, 100 miles from his home, when the car engine malfunctioned, leaving him stranded in the middle of nowhere while the engine was fixed. Consequently, he missed the meeting, which cost him a potentially lucrative deal, and he had to stay overnight in a hotel while the car was repaired. The car mechanic was not able to determine the cause of the breakdown. It could have been a defect in the engine or the result of poor maintenance. The car dealer and the customer go to court thinking to maximize their situations.
In the maximizing situation, the customer believes he should be paid for the car repair, the hotel room, and some damages for the loss of the business deal. The car dealer who sold him the car believes that he owes the customer nothing on the grounds that the malfunction was caused by poor maintenance, that is, the breakdown was the customer’s fault. The judge sends them to optimizing arbitration because there is no practical way to determine where the fault lies. If you were one of the parties what would you submit as your offer? For the sake of argument, let’s say the car dealer is offering $0 in compensation, and the customer wants $20,000. With no opportunity for collusion between the two parties, they are at the mercy of the arbitrator, who is going to pick only one of the two offers. The arbitrator cannot compromise, and will choose whichever offer strikes them as the more reasonable. If either party submits a maximized offer, they run the risk of losing completely. They must optimize in order to have any hope of getting some satisfaction.
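A small simulation makes the incentive visible. Everything in it is an invented assumption for illustration: I suppose the arbitrator privately judges a fair award to be somewhere around $7,000 and picks whichever sealed offer is closer to that judgment.

```python
# A toy model of final-offer ("sealed offer") arbitration. Everything here
# is an invented illustration: the arbitrator is assumed to estimate a fair
# award of roughly $7,000 (give or take a couple of thousand) and to pick
# whichever sealed offer lies closer to that estimate.
import random

def expected_award(customer_offer, dealer_offer, trials=100_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        fair = rng.gauss(7_000, 2_000)         # arbitrator's private estimate
        if abs(customer_offer - fair) <= abs(dealer_offer - fair):
            total += customer_offer            # customer's offer is imposed
        else:
            total += dealer_offer              # dealer's offer is imposed
    return total / trials                      # customer's average award

# Maximizing: the customer demands $20,000 against the dealer's $0 offer.
print(round(expected_award(20_000, 0)))        # roughly $1,300 on average
# Optimizing: the customer asks for a moderate $8,000 instead.
print(round(expected_award(8_000, 0)))         # roughly $7,500 on average
```

Under these invented numbers, the maximized demand of $20,000 is almost always rejected in favor of the dealer’s $0, while the moderate $8,000 offer is chosen most of the time.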
Car, medical, and house insurance policies are systems in which most Westerners are accustomed to optimizing. A large number of people pay relatively small, roughly equal amounts into a general pool on the assumption that their cars will not crash, they will not need catastrophic hospitalization, and their houses will not burn down, because these are rare events. But, if these disasters do occur, the expenses are crippling. If you have insurance, your expenses are covered from the pool of money that all the people who did not suffer calamities paid into.
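The arithmetic of pooling is simple enough to sketch with invented numbers:

```python
# Back-of-the-envelope pooling arithmetic with invented numbers: 10,000
# policyholders, each with a 1-in-200 chance per year of a $50,000 loss.
policyholders = 10_000
chance_of_loss = 1 / 200
loss_size = 50_000

expected_claims = policyholders * chance_of_loss * loss_size   # $2,500,000
break_even_premium = expected_claims / policyholders           # $250 each

print(expected_claims, break_even_premium)
# Each person pays a small, predictable $250 so that the unlucky 50 or so
# are not crushed by a $50,000 bill. (Real insurers add overhead and profit.)
```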
Where do your sympathies lie on the whole? Do you tend towards optimizing or maximizing? Give this question careful thought by imagining a variety of scenarios from traffic jams to lines for roller coasters.
Chapter 4: No Free Lunch: Gifts, Reciprocity, and Transactions
Many anthropologists see reciprocity as a bedrock component of society, but reciprocity comes in a variety of flavors, so generalizing about it can lead to shallow conclusions if you are not careful. Both anthropology and sociology have had a long history of studying reciprocity in different cultures, with Marcel Mauss’s The Gift (1925) as one foundational text among many. The “norm of reciprocity” was key to the economic theories of John Locke and Adam Smith in the seventeenth and eighteenth centuries. They were interested in the ways in which free markets in Europe regulate themselves, but Mauss broadened the perspective by looking at different kinds of exchange in different cultures and concluded that reciprocity was more than just an economic reality; it was the glue that held societies together. Since Mauss’s time, the nature of reciprocity and exchange has been hotly debated, so the best I can do here is lay out the broadest of themes, because I want to know why Westerners, especially people in the United States, have a habit of casting all relationships in terms of reciprocal transactions. When I have laid some groundwork, I want to ask you whether you see your relationships in terms of exchange and transactions, and whether this habit satisfies you. If this point of view does not satisfy you, I can suggest some alternatives.
Gift giving is one kind of reciprocity that can be generalized really widely if we take an expansive view of what a “gift” is and what “exchange” is. One of the simplest forms of exchange of gifts that we are familiar with is, “I give you a gift, and you are expected to return a gift at some point.” That much is easily understood, but the rules of reciprocity can be complex: sometimes the return gift is delayed, sometimes it is immediate. That is the difference between birthday gifts and Christmas gifts. Sometimes the return gift is of equal value, sometimes not. For example, you shower your infant sons or daughters with expensive gifts for their birthdays, but if they hand-draw a card for you in crayon as their only gift on your birthday, you are happy (but you do expect something). Although reciprocal gift giving can be extremely complicated, the rules within a culture are generally well understood, even if they are rarely spelled out in writing. Some anthropologists call a relationship based on reciprocal exchange, whether the exchange is equal or not, a transactional relationship, and we are inclined to assume that most, if not all, relationships involve some sort of reciprocal transaction. Getting something for nothing does not sit right. People ask, “What’s the catch?” when they are offered something for “free,” leading to the aphorism, “There’s no such thing as a free lunch.” One way or another you are going to pay for that lunch.
Reciprocity involves the exchange of things of value, but the nuances of “exchange” and “value” get thorny very quickly. You will not find much agreement among anthropologists when it comes down to specific cases. For example, it has been customary among pastoral cultures for millennia for a groom’s family to give animals, and other things of value, to the bride’s family before and/or at the wedding. This custom is known as bridewealth. Is this an example of reciprocal exchange (cattle in exchange for women)? Anthropologists vehemently disagree.
If you want to know how venerable this practice is, look in the Bible, Genesis chapter 24. Abraham, a pastoralist with vast herds of cattle and camels, and great wealth in general, sends his servant north to his ancestral homeland, to find a wife for his son, Isaac. He sends the servant with 10 camels laden with gold, jewels, and goods of great value. When the servant meets Rebekah at the town well, he determines that she is the potential bride, and gives her some golden ornaments (a nose ring and bracelets). Then, when they go back to her house, he gives Rebekah’s brother and mother “costly gifts” as well as giving her gold, silver, jewels, and clothing. After a short sojourn in Rebekah’s home town, the servant and Rebekah return to Abraham’s dwelling for Rebekah to marry Isaac: a classic case of bridewealth in action. Was this an exchange: costly goods going to the bride’s family and a bride going to the husband’s family? In other words, did Abraham buy a wife for his son? The actions could be interpreted this way. Here is where we must be careful to avoid ethnocentrism.
On the surface it looks as if Abraham did something similar to what you and I do when we go to the store to buy a book, (although Abraham had a servant as intermediary). If I want to find, let’s say, a new detective mystery to read, I go to a book store, find the relevant section, look for my favorite authors, find a book I have not read, and inspect it briefly to see if it will keep me entertained. Once I have made a selection, I go to the cashier, pay an amount of money, and receive the book in return. In Genesis, Abraham’s servant went to a place where young, available women hang out (the town well), looked around and made a selection from the choices available (in this case a young virgin), went to her family and said he was interested in her, handed over things of great value, and went home with a woman to marry. Are the two instances similar? To say so without considering cultural context is blatant ethnocentrism. Context is everything. (That, too, should be one of your mantras).
It is never a good idea to assume that certain actions have universal meaning, but it is a very common assumption. There is a whole branch of philosophy, the philosophy of language, that addresses the question of meaning, and its twists and turns are too intricate for discussion here. I am taking as foundational (and philosophers and anthropologists alike could challenge me) the position that meaning derives from the frame of reference that you choose. So, hold up your index and little finger with your ring and middle fingers tucked under your thumb. The gesture will mean “cuckold” in Italy, “hook ’em horns” if you are a University of Texas football fan, or “six” if you live in southern China. Change the frame of reference (cultural context), and you change meaning. In the case of bridewealth, goods of value move from the groom’s family to the bride’s family, and the bride moves from her family of birth to the family of her future husband. Can we say that in this case the groom’s family is buying a bride? Anthropologists do not agree on this point.
If you buy a book and it proves to be defective, maybe there are pages missing or the spine is broken, you would have a reasonable expectation of returning it to the store and getting your money back, or a replacement book. The same appears to be true in some pastoral cultures. If the wife fails to produce children, in some cultures (not all), this can be grounds for returning her to her kin, with the expectation of getting the bridewealth back. Again, are the situations comparable? My first answer is that it is impossible to generalize about the meaning of bridewealth. You have to talk to the people involved, and in different cultures you will get different answers.
In some cultures, they will tell you that bridewealth is not about exchange at all. For exchange to operate, there has to be some sense of equivalence of items exchanged, and there also has to be some sense of permanence (as well as of ownership). When I buy a book, it is mine in perpetuity, and I have given the original owner money that is equivalent to the value of the book. I could barter cigarettes or bananas for the book in place of money, but the notion of exchange would be the same. Thus, in some pastoral cultures they will be offended if you suggest that cattle and brides are in any sense equivalent, and are “exchanged” on marriage. Rather, they will talk about the “gift” of cattle in one direction as creating one kind of bond between two kin groups, and the “gift” of a bride in the opposite direction as creating a different kind of bond. There are still “gifts” involved, but it is not fair to speak of “exchange.”
Other pastoral cultures will accept that an exchange is involved; they will not say that they are “buying” brides, but they might assent to the idea that they are “buying” the bride’s offspring (if we define “buying” loosely). Here things get yet more complicated, and issues rest on how different cultures view conception and procreation. What cannot be contested is that children come out of a woman’s womb. Paternity can always be contested, but maternity cannot. Given that a woman can incontestably claim that the children she has given birth to are her biological kin, she can also claim that they belong to her and belong with her. She could therefore, in theory, bear a child, or children, with one kin group and then return with them to her birth kin. Bridewealth makes such actions very difficult. If a woman returns to her kin, the husband’s kin can reasonably expect to get the bridewealth back (including all of the offspring of the animals). Returning the animals is next to impossible, if not downright impossible, because they would probably have, in turn, been given in bridewealth or dispersed in some other fashion (particularly among the wife’s father’s kin) – not to mention the fact that people who have been enriched by bridewealth are reluctant to give the animals up.
Perhaps a better analogy to bridewealth than buying things would be the pawning of items, or the permanent loan of objects. When you take something of value to a pawnbroker, you hand it over and get money in return, but you have not conducted an exchange. The item is still yours. You have temporarily received money, but you have not sold the item. You can get it back by paying back the pawnbroker (with interest). There is reciprocity involved, but it is not the same kind of reciprocity as simple buying and selling. When you pawn an object, you create a bond with the pawnbroker that remains in place until you redeem the object, or until you decide to keep the money and let the pawnbroker keep the object (in which case you have, de facto, sold the item).
What I am getting at in this example is that the notions of exchange, gifts, and reciprocity are far from straightforward, and vary greatly according to circumstances, and cultural expectations. Delayed reciprocity is an extremely common way of creating and maintaining social bonds. In the community in Tidewater, North Carolina where I did my doctoral fieldwork, all of the residents of the village kept running accounts with one or other of the two general stores, where they bought simple consumables such as milk, bread, beer, petrol, etc. Commonly, locals popped into a store, bought a few items, and asked the owner to put the total on their bill (which was kept in one of a series of notebooks in a box by the cash register). On payday, or at some convenient time, the residents would go to the store to buy something, but also with the express purpose of paying off most (not all) of the bill. The store owners were happy to run credit in this fashion, and did not charge interest, because the line of credit created a bond between customer and owner, ensuring the customers’ loyalty.
The store that I frequented employed a young man to pump petrol, but it was quite common for customers to pump their own, come into the store to report how much they had bought, and have the cashier mark it in the credit book. This was an honor system because the total recorded on the pump was not mechanically reported inside the store, and could easily be returned to zero after pumping the petrol. One time when I was hanging out at the store, which was a common place for older village men to sit and talk endlessly (a boon for a young fieldworker), a young man from the village drove up, pumped some petrol, and came in to report the amount for credit. At that point the owner confronted him, accusing him of routinely underreporting the total, that is, telling him to his face that he was a thief. Things got heated, and concluded when the young man asked what his credit total was, pulled out a huge roll of cash, paid his bill, and never returned. This incident pointed out two things to me. First, running a line of credit was not a necessity, although on occasion it was, and there were a few notorious examples of people in the village who ran up enormous debts with no ability to pay them. The young man had the money in his pocket to pay for petrol any time he wanted, but it was more efficient for him to report the amount of petrol he had pumped and leave, rather than pulling out his money, paying on the spot, waiting for change, and all the other things involved in paying with cash. Second, the line of credit was an overt act of bonding. For someone running a line of credit in one store, it would have been unethical to shop in the other store in the village (and everyone would notice the betrayal of trust). When the young man paid his bill in full, his clear message, which everyone understood, was “I am not going to shop here again.” The bond was broken.
Thus, in general, delayed reciprocity creates bonds and a sense of obligation that immediate reciprocity does not. When you buy a magazine in a shop, pay your money, and leave, the reciprocity is instant and the bond is transient. When you buy an expensive gift for a boyfriend or girlfriend, and your birthday is 8 months hence, you expect reciprocity, but it will be delayed. This particular kind of gift giving does not create strong bonds, but there is an underlying obligation of some sort. If you break up a few months later, however, don’t expect a birthday present. Gifts of this kind create bonds that are not the most important bonds in a relationship.
Reciprocity in long term relationships, such as marriage, is multifaceted. Whether a marriage is balanced in expectations between spouses (symmetric), or one partner has some measure of control over the other (complementary) (see chapter 2), there is always some reciprocity at work. The nature of reciprocity in marriages, as well as between parents and children, has always fascinated me because it is created by cultural expectations, and need not exist at all. Before I talk about specific relationships I need to broaden the general definition of reciprocity. However, as I broaden the definition, you may notice that the broader the concept gets, the looser it becomes, and you will have to decide for yourself whether it loses its meaning entirely if it gets too broad.
When it comes to material things, reciprocity is a relatively simple notion given that the idea was spawned by economics and grew in anthropology within roughly the same domain. In Stone Age Economics (1972), the anthropologist Marshall Sahlins divides reciprocity into three broad types: balanced reciprocity, in which the exchange of things of value is immediate and no social bond is created in the transaction (such as buying a pack of cigarettes); negative reciprocity, in which each party haggles about the value of an exchange item in order to get the best deal (as is extremely common outside of the developed world); and generalized reciprocity, in which the market value of the items exchanged is underplayed in relation to the social and symbolic value of exchanging the items at all (such as with birthday gifts). Generalized reciprocity is supposedly the realm of “true gifts,” that is gifts which are ostensibly altruistic.
In Toward an Anthropological Theory of Value (2001), Sahlins’ doctoral student David Graeber, who was also my student as an undergraduate, suggests what ought to be obvious, namely, that generalized reciprocity and balanced reciprocity have more in common than people want to admit. Sheldon Cooper in The Big Bang Theory rails on several occasions against giving Christmas gifts because he hates not knowing what other people’s gifts to him are and, therefore, cannot gauge what to buy them of equivalent monetary value. He is constantly upended, however, when people buy him things of immense personal value (to him), but little monetary value. Here lies the twist. Graeber prefers to talk about “open” and “closed” reciprocity, in which open reciprocity is based more on mutual commitment to a relationship than on the actual market value of things exchanged, but is always in danger of ending should the exchange become overtly balanced, as is the case with closed reciprocity. Openness in reciprocity typically involves not counting costs, and, therefore, involves no debt that can be canceled by balancing the reciprocity.
Look, however, at general Christmas gift giving in the US and the UK, which is quite obviously a matter of calculated financial exchange, to the point where it is common to include the sales receipt along with the gift so that the receiver can exchange it at the place of purchase for something of equal value if not satisfied with it. The day after Christmas is one of the busiest days for stores in the US, with a large share of the transactions being returns of unwanted gifts. Furthermore, people select the cost of gifts to others based on their perceived importance to the giver. A parent or sibling is going to get something of much greater value than a work colleague or boss. The message, not particularly veiled or subtle, is that you can put a dollar figure on the value of a relationship. But can you?
There is much more going on with gift giving, at Christmas or otherwise, than the mercenary assessment of what relationships are worth. Couples, families, or office colleagues may agree on a maximum spending limit on gifts at Christmas, which is not an attempt to minimize the importance of the relationship, but to curb runaway competition or simply avoid unnecessary excess. In this case there is a sense that the gifts are a token of the relationship’s existence, and not an assessment of the price put on it. The endlessly debated question in anthropology is whether the reciprocal exchange of gifts creates and maintains social bonds, or whether gifts symbolize, in tangible form, social bonds that are not inherently tangible and that spring from other factors besides reciprocity in material exchange. As in all questions of this nature, it is not one or the other.
We all recognize that factors such as kinship, love, obligation, and social duty, which are decidedly non-material, are of immense importance in developing and maintaining many types of relationships. I have a relationship with my son that is occasionally manifested in gift giving, and other forms of tangible exchange, but it is founded on our biological kinship (and maintained by myriad social interactions), not on things of material value. Until his late teens, he had no money of his own, so I would give him money at Christmas to be able to buy gifts for my wife and myself. That is scarcely the reciprocal exchange of gifts (given that I paid for my gifts to him and his gifts to me), but he did have to “invest” time and effort into choosing gifts he believed I would like. Here lies the crux of the matter. Are time and effort things that can be invested or exchanged? Did my son add “value” to the gifts he gave us through his careful choice of them? At first blush you might say, yes, but let’s break it down a bit more.
In the labor market, time and effort are obviously commodities in the same way that bananas and screwdrivers are. “Time is money” is the great capitalist maxim. By this token, anything that has an exchange value is a commodity. Working for an hourly wage brings this message home forcibly. The monetary value of your time working for an hourly wage is determined by the market value of the skill you bring to the job. From there it is perhaps a little too easy to slip into the mindset of thinking that your value as a person can be equated to what you are capable of earning. This way of thinking is driven by the notion that “value” (of anything, tangible or intangible) is ultimately assessed in dollars and cents, and, more to the point, that all intangible things can be treated as commodities. Hence, by that way of thinking, you can “exchange” love, hate, respect, friendship, etc., in analogous ways to exchanging birthday presents, and rules of reciprocity apply every bit as much as with tangible goods. Do you see a problem with that way of thinking?
I do want to be crystal clear here. I am not saying that we can, or do, implicitly or explicitly, put a monetary value on everything from watermelons to love. That would be an absurd assertion. What I am saying is that it is easy to treat the nature of relationships as “transactions” if we are not careful, and we often do – some more than others. That is at the heart of what I am calling a “transactional relationship.” Marriage in the US is quite obviously transactional, even though the transactional part gets glossed over at the outset. When you attend a wedding in the US you will hear numerous formulaic and impromptu things about love, devotion, commitment, and so forth, and very little talk of transactions. True, there can be mutual vows concerning the giving and receiving of love and honor, in sickness and in health, for richer or for poorer, and all the rest of it, betokened by the “exchange” of rings, but this is all surface talk. Under this surface talk are the hard and immutable laws of the state. Marriage in the US is governed by contract law.
When a couple wishes to marry in the US, they go to a state office (usually the county clerk’s office) and apply for a marriage license. This license is an official contract. The couple signs the contract, and it is witnessed by the clerk (or a representative). The couple brings the license/contract to the officiant who performs the wedding ceremony, and, at some point, the officiant signs it and has the document also signed by two witnesses, usually the best man and the maid/matron of honor (but it can be anyone who is at the wedding). The signatures of bride, groom, clerk, officiant, and witnesses are all that is necessary – legally – for the couple to be married. Commonly, when I acted as officiant at weddings in my church, I had the witnesses sign the marriage license before the ceremony because things could get hectic afterwards, and it was not always easy to find the witnesses at that time to sign when they were caught up in wedding photos and general celebration. According to the letter of the law, therefore, the bride and groom were legally married before the ceremony began (although things were not fully legal until I had filed the license with the clerk’s office). All this legal stuff is buried beneath heaps of flowers, kisses, and champagne; but it is there.
Transactional law comes to the surface, forcibly, when problems in the marriage arise. Ask anyone who has been through a divorce. Divorce courts don’t go back and review the wedding vows, or poems and scriptural passages read at the wedding ceremony, to decide on the terms of divorce. They deal in the cold, hard, transactional, facts of alimony, child support, and division of property. They are the foundational, legal, facts of marriage in the eyes of the state. What about in the eyes of the couple? There things get gnarly, because individual cases vary enormously. Some marriages are preceded by a pre-nuptial contract, usually insisted on by the party with considerable assets, knowing that divorce can end in the equal division of property, or worse, and in the full knowledge that marriage is a transactional affair. Most do not, but it is not uncommon for a couple to spend time before (and after) the wedding, giving careful thought to what each is bringing to the marriage, with a sense that it is “fair” to balance what they are bringing – that is, to think in transactional terms. They may not put a monetary value on the things they are trying to balance, and would probably find it abhorrent to do so, but the transactional nature of the marriage is still there.
Some anthropologists, such as Graeber, think that broadening the concepts of exchange and reciprocity in this way generalizes them to the point of meaninglessness. I don’t think so. I would argue that, in the US, the realities of the marketplace create an ethos that pervades all social relations. I am not claiming some sort of universal principle here, but arguing that the idea is strongly pervasive. In Lord I’m Coming Home (1988), I make the case that in Tidewater, North Carolina, in the 1970s, a female head of house saw it as a marital obligation to take care of duties inside the house, while the male head took care of duties outside the house (with a grey area in the immediate vicinity of the house). This principle was unstated, but firmly understood. I very quickly became aware of the rules, because I lodged with a widow who had a widowed daughter, and within a short space of time they adopted me as the surrogate man of the house because I could mow the lawn, chop firewood, paint, dig the garden, hunt and fish, and do all the other jobs that they had hired men to do before I arrived. The household became suitably “balanced” on my arrival and my adoption of male duties.
You do not need to look very deeply into couples therapy, self-help books, and the cascade of advice on YouTube, to see that a transactional model of marriage is normal in the US. It is also common in parent/child relationships. No end of disagreements arise in universities across New York State because it is illegal for professors to discuss the progress of students with their parents. In fact, students are under no obligation to report their grades to their parents, but a large number of parents insist upon it – and will then often call professors to argue about the grades (or, once in a while, ask what they can do to help). Mercifully, the law prohibits all such conversations, and serves the purpose of muting transactional relations between parent and child. Despite the law, the transactional relations exist – put in place by the parents, of course, not the children. When the children are minors, the law is on the parents’ side (sort of). Parents have certain legal obligations towards their children which they can be taken to court over if they ignore them. Parents are required to school their children up to a certain age, for example. What counts as schooling is negotiable, but not the principle. Thus, parents who send their children to school expect regular progress reports from the school, and may reward or punish their children based on the result: a transactional relationship. Likewise, it is common for parents to expect their children to do chores around the house, considered to be a reciprocal response to being housed and fed.
At this point you can raise all manner of exceptions and qualifications to what I have said. Don’t bother; I can do it for you. All I am saying is that transactional reciprocity underlies more relationships than we care to acknowledge. So, instead of quibbling with me, take a look at relationships in your life, past and present, and consider how transactional they were or are. Then ask yourself: Are you happy with this state of affairs? I am going to be thoroughly culturally relative about this point, and not tell you about my relationships, nor what kind of relationships I prefer. I will simply point out that relationships need not be transactional. We get caught up in thinking that they have to be because that’s what our culture tells us, and constantly reinforces, not only by laws, but by therapy, movies, self-help books, and the advice of friends.
One alternative to transactional relationships is what some anthropologists call incorporative relationships. Such relationships do not deal in transactions – tangible or intangible – and do not count costs. The bond exists in these relationships regardless of any reciprocity. There can be giving with no receiving and no expectation of receiving anything. I am not talking about unequal reciprocity here, where one partner gives a great deal, and the other gives very little. There may, indeed, be things given and received in an incorporative relationship, but they are neither expected nor required, and no balance sheet is kept. You can be forgiven if you roll your eyes at this point and refuse to accept that such relationships are possible. “No, no, no,” you say, “there’s reciprocity hidden down there somewhere.” I would argue that you think that way because you have been raised in a transactional culture, and it is very difficult for you to think otherwise.
We are right to be cynical when something in the marketplace is offered to us “free.” There is always a catch. Some restaurant owners teach their servers to say that the bread offered to the table is “included” rather than that it is “free” because there is no situation in which it is acceptable to sit at a table, eat a basket of bread and nothing else, and then walk out without paying. Nor is it acceptable to go into a supermarket, pour 10% of the washing powder out of a box that is marked “10% free” and take it out of the store without paying. In such cases, “free” is not strictly honest. It is a word being used as bait that disguises the cost you actually pay for the item. What about love? Can it be given for free, or is there always a catch? Is it possible to have social bonds without some form of reciprocity?
There is a meme that circulated on social media that I liked, and makes an important point about reciprocity. “If I give you a dollar, and you give me a dollar, we started with one dollar each and we still have one dollar each. If I give you an idea and you give me an idea, we started off with one idea each and now we have two ideas each.” Some intangible things – ideas, love, fellowship, commitment – are not reducible to transactions unless you want them to be. Work relationships can easily become transactional relationships because they take place in an environment where economic factors may dominate. But with marriage and parenting, economic factors need not be the dominant values, and we have to examine why these relationships are so often cast in transactional terms. If we return to the bridewealth example we can see that the apparent exchange of things of “value” (animals and brides) is only seen as an exchange, an example of reciprocity, if the culture wants to see it that way. The notion of “value” itself is socially constructed.
In your own life it is possible that there are relationships that are transactional that suffer from being seen in that way. Or it is possible that the transactional nature of them is not spelled out clearly enough. Examine the relationships in your own life. Are they all transactional, and, if so, what is being exchanged and how do you count costs? Are the exchanges equal or unequal? If some of them are not transactional, what is their nature? What preserves those relationships in the absence of exchange? Are they weaker or stronger than transactional ones? Why?
My point is that Euro-American relationships can be dominated by a transactional ethic because it is so culturally prevalent, but they need not be. You can challenge that ethic, but you have to be aware that it exists, and that there are alternatives. You may also decide that you are quite happy with the way things are.
Chapter 5: What’s the Difference? Emic and Etic.
When you learn a foreign language, you have to come to terms with the fact that some of the sounds of the new language do not correspond to sounds in English (or whatever your first language is), and they can be difficult to master. With practice you can do better than at the outset, but you are unlikely to ever sound like a native speaker. That’s just a fact of life we all live with. What is more problematic, and our starting point for this chapter, is that in many languages there are some sounds that are close to each other, but different – and you can hear the difference – but the differences do not matter in English. They do, however, matter in the other language. Take the sound of the letter “p” in English, for example. You can pronounce it with a little puff of air coming out as you say it, or without the puff of air. Linguists call the puff of air “aspiration,” so the “p” can be aspirated or unaspirated. Hold your hand in front of your mouth and say the word “paper” naturally. You should be able to feel a puff of air on the first “p” and not on the second – the first is aspirated, the second is not. The difference does not make a difference to the meaning of the word, though. Practice saying “paper” with both p’s aspirated and then both p’s unaspirated. You are still saying the same word; you are referring to the same thing (or, “denoting” the same thing in linguist-speak).
There are many differences in sounds in English that do not make a difference to meaning. The letter “t” can similarly be pronounced aspirated or unaspirated and it makes no difference. We can hear the difference when it is clearly enunciated, but in normal speech we don’t notice it because it is not important to the meaning we are conveying. In Khmer (the official language of Cambodia), on the other hand, whether you use aspirated or unaspirated “p” or “t” or “j” or “k” (and several vowels) you can completely change the meaning of a word, and you have to pay careful attention to the difference. The sounds of a language that make a difference to meaning are called that language’s phonemes. Linguists tend to disagree about the number of phonemes in English, from 35 to 44, largely because they disagree about how to classify vowels, and because regional accents can complicate matters. I am not going to worry too much about that disagreement. What does need to be firmly understood, however, is that phonemes are not the same as letters. Because of the quirky conventions of English spelling, some letters (for example, “c” and “g”) can be sounded in different ways, and some letters (for example, “k” and “q”) are unnecessary because other letters are pronounced in the same way. There are also some sounds that do not have a letter to themselves, but have to be represented by clusters of letters (for example, “sh” and “ch”). When coming to grips with phonemes, therefore, don’t think in terms of letters, think in terms of sounds.
We identify the phonemes of a language by taking minimal pairs: two words that differ by only one sound, where that difference in sound makes a difference in meaning. Take the word-pair “bat” and “pat.” They differ by only one sound but they have different meanings. Therefore, /b/ and /p/ are different phonemes in English. Linguists conventionally enclose phonemes between slash marks, and use quotation marks to indicate letters (although they have other conventions for technical discussions). The study of the phonemes of a language is known as phonemics. We can also study the sounds that people produce in articulating their language without regard to the basic semantic meanings of words. Let’s go back to the normal pronunciation of “paper” as an example. The fact that the first /p/ is aspirated and the second is not can be detected, but the difference in sound has no semantic importance. Linguists call the analysis of the sounds that humans can produce, irrespective of meaning, phonetics, in contrast with phonemics – the study of differences in sound that create differences in meaning.
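The minimal-pair test is mechanical enough to write down as a small procedure. Here is a sketch in Python that works on phoneme sequences rather than spellings (the rough transcriptions are mine, purely for illustration):

```python
# A minimal pair differs by exactly one phoneme. Working with phoneme
# sequences rather than spellings (as the chapter insists), here is a small
# checker. The transcriptions below are rough and purely illustrative.

def is_minimal_pair(word_a, word_b):
    """True if the two phoneme sequences differ in exactly one position."""
    if len(word_a) != len(word_b):
        return False
    differences = sum(1 for a, b in zip(word_a, word_b) if a != b)
    return differences == 1

bat = ["b", "ae", "t"]
pat = ["p", "ae", "t"]
pit = ["p", "i", "t"]

print(is_minimal_pair(bat, pat))  # True  -> /b/ and /p/ are separate phonemes
print(is_minimal_pair(bat, pit))  # False -> two sounds differ, so no conclusion
```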
Let me be clear. We are talking about semantic meaning here – not the myriad of nuances created by subtle changes in vocal tone. If you shout “PAPER !!!” to a newsagent, or reply “paper” in a quivering, timid voice when the newsagent asks you what you want, there are subtleties of intention in the way you are voicing the words, but you are denoting the same object. Denoting is the domain of semantics. Let’s not get lost in the Byzantine maze of complexities associated with anthropological and philosophical linguistics – remember, broad strokes. Phonetics is the study of differences in sounds in general, and phonemics is the study of differences that make a difference in semantic meaning. “Differences that make a difference” is the key concept here.
The anthropological linguist Kenneth Pike, in Language in Relation to a Unified Theory of the Structure of Human Behavior (1954/55 and 1967), generalized the idea to the study of human behavior by detaching the suffixes “-emic” and “-etic” to make the new words “emic” and “etic” – the first referring to the study of human behavior from the perspective of a cultural insider, and the second from the perspective of an outside observer. Put another way, an emic study of a culture is a study of the meaning of activities within that culture as understood by people who were raised in that culture, whereas an etic study is an overall view of that culture, looking at activities without regard to meaning as it is understood by insiders.
We can take as an example the difference between men appearing in public with their shoulders and knees bared versus having them covered. In Britain and the US this is a difference that has little or no cultural meaning. In Cambodia (where I now live) it is a difference that has major significance. Men in Britain and the US have no problem with wearing tank tops and shorts on a hot day in order to keep cool, whereas in Cambodia, where it is hot every day of the year, men typically wear short-sleeved shirts and long trousers in public. Even construction workers, and other outdoor manual laborers whose work makes them hot, do not usually wear clothes that expose their shoulders and knees. This is because in Cambodian culture, the shoulders and knees have sensitive social meanings. It is considered disrespectful to expose them. You will be denied entry to pagodas, monasteries, or royal palaces if the skin of your knees or shoulders is visible. Cambodians know this without having to be told, but tourists sometimes get into trouble. It’s not enough to cover your shoulders with a scarf or shawl, either. They have to be covered by a fitted piece of clothing. Here we have differences that make a difference. For Cambodians, shoulders covered with a shawl and bare shoulders are the same; shoulders covered with a shirt and shoulders covered with a shawl are different. This is an emic statement because I am referring to shoulders as they are defined by Cambodians. If I say that bare shoulders, shoulders covered by a shawl, and shoulders covered by a shirt are all different, I am making an etic statement, without imputing meaning to any of them. I am simply noting that they are different.
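The shoulders example can be laid out as a small piece of Python to make the contrast concrete. The three etic categories and their emic grouping follow the description above; the labels are my own:

```python
# The shoulders example as data. The three etic categories are simply
# observed differences; the emic mapping records which differences make a
# difference to Cambodians, as described above. The labels are my own.

etic_categories = ["bare", "covered_by_shawl", "covered_by_shirt"]

# Emically (for Cambodians): a shawl does not count as covering.
emic_cambodia = {
    "bare": "exposed",
    "covered_by_shawl": "exposed",
    "covered_by_shirt": "covered",
}

def same_emically(a, b, emic_map):
    """Two etic categories are emically 'the same' if they share one meaning."""
    return emic_map[a] == emic_map[b]

print(same_emically("bare", "covered_by_shawl", emic_cambodia))              # True
print(same_emically("covered_by_shawl", "covered_by_shirt", emic_cambodia))  # False
```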
Over the years, various anthropologists have characterized the emic/etic distinction in a number of ways that part company with Pike’s original intention. In the 1980s there were a number of public debates sponsored by the American Anthropological Association between Pike and anthropologists who had famously (or infamously) adopted the emic/etic distinction and used it in ways he disapproved of. I attended one packed debate between Pike and Marvin Harris that mostly served to show that Harris’ interpretation was misguided, as are many others. The emic/etic distinction has sometimes been characterized as a subjective (emic) versus objective (etic) distinction, but this characterization is problematic in all kinds of ways that I will not get into here. All interpretations that diverge from Pike’s are problematic, so I will stick with his: an emic cultural analysis seeks to understand the meaning of behaviors in a culture from the point of view of members of that culture, and an etic analysis makes observations and draws conclusions that do not depend on an insider’s perspective but rely, instead, on the observer’s interpretations (that is, the analysis is ethnocentric).
When you learn a new language, especially one that is nothing like your first language, you begin with phonetics, but you have to shift to phonemics as you learn more. As your vocabulary in the new language deepens, you learn to distinguish between words by noticing differences in sounds that make a difference to the meanings of the words. Make a single change in sound in Khmer and instead of saying “I understand Khmer language” you can end up saying “I know about Cambodian food.” When we generalize from the phonemic/phonetic distinction to the emic/etic distinction, it is reasonable to argue that anthropological fieldwork is an endeavor that begins with etic observation with the intention of moving to emic understanding over time. Not all anthropologists accept the terminology, but there is reasonably uniform agreement that fieldwork’s overarching goal is to understand the workings (and meanings) of cultures from the insiders’ point of view. Taking this approach, we can say that a major component of anthropology is to “translate” the meanings of another culture into the meanings of our own.
It would be really ridiculous to say that English is intrinsically the best language in the world, and that all others are inferior in some ways. Yet, there are plenty of people who will say that English culture is the best in the world and all others are inferior. Anthropologists try to understand other cultures emically, not just because other cultures are interesting, but also because we can interact with other cultures better the more we understand them from the inside. We can also avoid misunderstandings, and we can understand our own culture better in the process. We can, for example, see that there are ways that other cultures do things that are better than the way we do things, or, minimally, that there are alternatives to the way we do things. Studying other cultures mimics studying other languages in this respect.
Sometimes we adopt words or phrases from other languages into English because they express something that we want to express precisely, but there is no exact English expression that captures our meaning. Recipes in English talk about cooking pasta or vegetables “al dente” which is Italian, and literally means, “to the tooth” but has come to mean in Italian, and, hence, in English, “cooked just to the point that the food is done, but still retaining plenty of bite to it.” Or there is the French expression, “l’esprit de l’escalier,” literally, “the wit of the staircase,” meaning “a clever riposte to a statement made by someone that you did not think of at the time, but you thought of after you had left the room and it was too late.” In both cases, the foreign language expression is much more concise than anything you might say in English. English has the ability to express these meanings, but foreign expressions are more efficient, so we adopt the foreign expression.
Foreign languages can also be more efficient in certain situations. Italian, for example, is, generally speaking, a better language for singing opera arias than English. Numerous words in Italian end in a vowel, and, since long notes in singing are carried by vowels and not consonants, a composer can draw out the final sound of a word in Italian without any confusion as to what the word is. But if the final word of a line sung in English is drawn out as, let’s say, roooooooooooooo we have no idea what the word is until we hear the final consonant. It could be “root” or “ruse” or “room” or “rune” or “rude” or whatever. This linguistic quirk also means that the singer has to enunciate the final consonant pointedly to make sure the meaning is clear, which can ruin an effect if the singer is trying to be soft and tender. Italian is the runaway winner in this respect. All languages are more efficient in certain spheres than in others. Likewise, all cultures are more efficient in doing certain things than in doing others.
One purpose of anthropology, and of this book, is to assess ways in which other cultures function better than our own (or, at the very least, differently) when handling certain situations. We cannot be so arrogant as to assume that our culture does everything in the best way possible, and that our way of doing things is the only way. Look at Christmas celebrations as an example. In the US, people spend huge sums on gifts, often more than they can afford, for long lists of friends, acquaintances and co-workers; malls are choked with shoppers; delivery services are worked to capacity, and there is plenty of stress to go around. Many people are not happy with this state of affairs, although, of course, some, especially merchants, are delighted. Are there alternatives?
In Argentina, expectations at Christmas are radically different. There can be some exchange of gifts, but it is hardly noticeable in many families. Malls and shops in general are no busier leading up to Christmas than at any other time of the year. The big day is Christmas Eve, not Christmas Day. No one works on Christmas Eve if it can be avoided. A few emergency and essential services operate, but restaurants, bars, kiosks, shops, offices, factories and all non-essential operations are shut tight. Nothing can persuade Argentinos to work on Christmas Eve if they do not have to. US fast-food chains in Buenos Aires have tried to be open on Christmas Eve, but no amount of extra pay will persuade employees to work on that day because they would rather be with family than earning more money. Hence, these restaurants are all shuttered along with the rest.
On Christmas Eve, or the night before, people travel to be with family and friends, and spend the day in food preparation. Pork is a common Christmas dish, and it has to be cooked outside the house on an asado (an outdoor barbecue), which takes hours to set up and operate. Then, when the pork is cooked, it has to be cooled, because the evening meal is not normally a hot meal. There are also salads and desserts to prepare, which means that everyone pitches in to help. They sit down to eat at around 10 or 11 pm, and then on the stroke of midnight they break open a bottle of champagne or sparkling wine and drink a toast to Christmas, while crowds of people flood the streets and set off fireworks. If anyone has any gifts to exchange, they do it at this point, but they are often no more than tokens, and they are not expected.
In this example, I am not comparing US culture with the Dinka or the Tikopia; I am comparing two cultures that see Christmas as an important holiday, but treat it in markedly different ways. What Christmas means, emically, to the two cultures diverges. Both have a strong concern for the gathering of family and friends, and both see a big meal as important. But the timing is different, and the focal points are not the same at all. In Argentina, a family may or may not have a Christmas tree, and decorations are minimal. Eating and drinking together is the principal endeavor. In the US, the opening of presents placed under the tree defines Christmas. As ever, you can give me plenty of quibbles and exceptions, but let us stay with broad strokes for now.
By seeing how other cultures do what we do, but in decidedly different ways, we can begin to ask why we do things the way that we do. If we are happy with the way we do things: no problem. But if we are not happy, we can learn lessons from other cultures, and maybe change things. Here is also where we have a fly in the ointment, bringing us back to emics and etics. It is very easy to document the etic differences between Christmas in the US and in Argentina. We can draw a table with a column for the US on one side and a column for Argentina on the other, and rows for presents, meal time, decorations, dinner menu, and all the rest of it, and then come up with a comprehensive list of the etic differences we observe. The emic differences are missing. You have to do fieldwork in both places to begin to uncover the emics – what those holidays mean subjectively to people in those cultures.
I am in a very fortunate situation with this example because I have lived in both Argentina and the US for long periods of time, I have done fieldwork in both cultures, and I feel perfectly at home with participating in the customs of both. I think, therefore, that I can say something intelligent about the emics of Christmas in both. Most people do not have that luxury. As it happens, I have also lived for many years in Australia, Italy, and England, and know about Christmas in those places also. I can take my pick of these places, should I care to, to match up my personal emic preferences with a culture of choice, in the same way that I can choose to speak Spanish in situations that I feel warrant it, and speak English when it is more comfortable for me. But . . . if you do not like how Christmas is celebrated where you live, what can you do? To be honest, I am not sure that there is a whole lot you can do, because in this arena you are taking on a huge chunk of your culture. Furthermore, you were raised with the emics of your own culture. Certainly, you can spend Christmas away from home in a foreign culture, but you are switching etics, not emics.
Suppose you live in Scotland and decide to celebrate Christmas one year in Barbados because you are tired of the Scottish way of celebrating. You will trade snow and ice for sand and sun; you could even eat cou-cou (cornmeal and okra) and fried flying fish for Christmas dinner instead of roast turkey and mince pies. You could trade all kinds of routine customs for new ones, and you might have a happy time of it. But, you take your emic understanding of Christmas with you. You change the etics of the situation, but the emics remain unaltered. To change your emics of Christmas you would need to migrate to Barbados and live there for a long time, altering your whole subjective sense of what Christmas means to you. You certainly cannot visit for one Christmas, enjoy what you experience, and bring the whole ethos of Barbadian Christmas back to Scotland. The tangled task is to figure out what part of your emic sense of your world is amenable to change.
The whole point of this book is to show that it is possible to change some of the things in our lives that we do not like, and we can do this on an emic level. Such changes are made possible by the fact that our emic view of the world is not monolithic. There is diversity within all cultures, including our own, so we have a degree of latitude. But we also need to accept the fact that our emic view of the world is deep-seated. Change is not always easy, although changing some emic points of view is easier than changing others. The first step has to be mapping our own emics, and then comparing them with the emics of other cultures. One huge stumbling block in this enterprise is differentiating between what we actually believe and what we think we believe. This is also a tricky point when it comes to doing fieldwork on other cultures. Delving into the meaning of actions in other cultures is fraught with difficulties anyway. Add to that the fact that why people in other cultures do what they do may not mesh with the reasons they give for doing it, and you can have a real conundrum on your hands. Instead of getting trapped in this difficulty, let us take it as a given and move instead to considering how understanding the emics of a foreign culture can be useful in our own lives.
You can visit a foreign culture with a list of do’s and don’ts, which will probably help you if you are a simple tourist, but if you have deeper business with people in another culture, a simple list is a poor substitute for having real insight into the emics of the culture. You can explain to visitors to Myanmar, for example, that they should not use their left hands to eat food, or you can tell foreigners visiting Thailand not to touch people on the head. These are just prohibitions, not emic explanations. One way that I can try to get inside the emics of such prohibitions is to look at two differences that make a difference: left versus right, and high versus low in relation to the human body. These differences that make a difference vary in meaning across SE Asian cultures, so let me focus on Cambodia since I know that culture best.
The usual statement by Cambodians is that you use the right hand to eat because the left hand is unclean (because you use the left hand to clean yourself in the toilet). Though the facts are correct, underneath the overt message is the covert sense that the left hand is intrinsically suitable for dirty jobs and the right for clean jobs. Obviously, you can scrub your left hand very well to remove any harmful things, but underlying the prohibition is the thought that the left hand is by its very nature unclean. No amount of scrubbing can remove this nature because it is built in. Likewise, you give money with the right hand and not with the left (although you can also use both hands). When you give things with the right hand only, you place your left palm on your right elbow.
The greeting gesture in Cambodia is called sampeah: the two palms of the hands placed together, fingers upright. The higher on the body the hands are held, the more respect is being shown. Sampeah at chest level is for friends, at mouth level for older people and bosses, at nose level for parents or teachers, at eyebrow level for monks and the king, and at forehead level for God. Touching other people is generally frowned upon, but touching someone on the head is deeply disrespectful because the head, especially the top of the head, is sacred. Without further information you can now infer that touching someone with your left hand (anywhere on the body) is going to be deeply offensive. When you learn the emics of a culture, your understanding of that culture goes a lot deeper than knowing some do’s and don’ts, and you can interact with people in that culture much more effectively.
Edward Hall coined the word “proxemics” – combining “prox-” (nearness) and “-emics” – and developed it in The Hidden Dimension (1966) to talk about how people use the distance between each other to convey social information non-verbally. The emics of proximity vary greatly from culture to culture, and there are many dimensions. There are the proxemics of horizontal space (what the distance from another person signifies), vertical space (what being physically higher or lower than another person signifies), territorial space (how much of the space around you you can lay claim to), and so forth. Hall called proxemic communication “hidden” because we are not usually conscious of using space to convey social messages, but we do it all the time. You can intimidate someone, or be intimate with someone, by getting very close to that person. Stop for a second and consider what “very close” means to you, or “too close.” How do you indicate to strangers that this space is “your space” and they should not invade it? What happens if they do anyway? Now add a cross-cultural dimension.
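For readers who like to see such schemes laid out, Hall divided horizontal distance into four rough zones for his middle-class American informants: intimate, personal, social, and public. Here is a minimal sketch – purely illustrative, using Hall’s approximate thresholds and in no way a measurement protocol – that classifies a distance into those zones; the real lesson of proxemics is that other cultures draw the lines elsewhere.

```python
# A purely illustrative sketch: Hall's four distance zones, using his rough
# thresholds for middle-class Americans. Other cultures draw these lines
# very differently, which is the whole point of proxemics.
ZONES = [
    (0.45, "intimate"),   # up to ~45 cm: embracing, whispering
    (1.2, "personal"),    # up to ~1.2 m: friends and family
    (3.6, "social"),      # up to ~3.6 m: acquaintances, colleagues
]

def proxemic_zone(distance_m: float) -> str:
    """Classify an interpersonal distance (in meters) into Hall's zones."""
    for threshold, zone in ZONES:
        if distance_m <= threshold:
            return zone
    return "public"  # beyond ~3.6 m: public speaking, strangers

print(proxemic_zone(0.8))  # personal
```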
Immigrants to your culture can receive a warm welcome or be shunned, or anything in between. If they intend to become citizens, there will certainly be a language test to pass and probably a civics test as well, testing knowledge of history, government, and the like. There is no emics test. What does it feel like to be Scottish, or Australian, or Canadian from the inside? Should immigrants adopt these emics if they want to be citizens? That is, should there be an emics test, and more importantly, what would it look like?
The problem with Pike’s method is that it is not obvious that culture can be treated like language. Does culture really have a “grammar” and a “vocabulary”? Cultures vary internally much more than languages do, and it may not be possible to break them into structures in the way languages can be. Furthermore, a phonemic analysis of a language is not a fluent speaker’s understanding of the language, but a tool of the linguist. Therefore, in an important way, phonemics is an outsider’s analysis even when the linguist is a native speaker. This is not a fatal problem, however. Native speakers of English have considerable differences in the way they pronounce words. A Londoner may say “’arf” whereas I say something like “hahf” and someone from Newcastle will say “haff” (for the word “half”). Our phonemics are all different, yet we are able to communicate adequately. There has to be some bedrock somewhere under the differences, otherwise people with different accents would not understand one another.
There have been numerous objections to the emic/etic distinction since the 1980s, yet it is still implicit in many ethnographic accounts of cultures to this day. In a number of following chapters I will bring up the example of riding a bicycle in cities in quite different cultures: Argentina, England, U.S., Australia, China, Myanmar, and Cambodia (Yes, I have ridden a bicycle all of my life). Each country has its own traffic laws, but these are not the same as the “rules” that bicycle riders actually follow (the emics of the road). Ride a bicycle like a Cambodian in London, and you will likely get run over. When I live in a different country, I have to learn the rules that people actually use, not the highway laws. Here we have a rough equivalent of the formal grammar of a language versus how people actually speak. Whether or not a fieldworker can map the emics of a whole culture is still a much-debated question.
Chapter 6: A Good Guy with a Gun: Power and Authority
Wayne LaPierre of the National Rifle Association in the United States famously weighed in on the gun control debate in that country, following the 2012 mass shooting at Sandy Hook Elementary School, by saying, “The only thing that stops a bad guy with a gun is a good guy with a gun.” I am going to skate over the major problems in that statement concerning such issues as how you define or identify a “good guy” or a “bad guy,” and zoom in on the central point that guns are powerful tools for imposing your will on others. LaPierre’s argument is that the only way to keep that power under control is through the use of more weapons. It is an argument based on a simplistic understanding of power, with some moralizing thrown in for good measure. So, let’s start with the nature of power.
In the most general sense, we can define power as the ability to control (or influence) the actions of others. Without too much effort, you can list numerous ways in which one person can try to control the actions of others. There are, for example, brute force, charm, appeal to emotions, financial rewards, psychological manipulation, control of information, and so forth. The first point to note is that power is relational: power involves, at minimum, a relationship between two people. You can declare yourself king of a desert island if you want, but with no other people around you have no power. In some cases, power may be simply interpersonal, but in the vast majority of cases it has a social dimension, even in very simple relations between two people. Power is, by definition, a function of relationships, not of individuals. When the power relationship has some form of social legitimacy, we call it authority. Thus, a cop has the authority to carry a gun, but a criminal does not. They have equal power, but unequal authority.
It is a grave mistake to believe that power in society and force in physics are roughly analogous, although this is a mistake that is made over and over. LaPierre’s maxim is a case in point. He is envisaging a bad guy with a gun as one kind of force that needs to be countered with an equal (and opposite) force. We are, indeed, talking about physical power here, but there is an important twist. There is much more to the situation than pure brute force. A bad guy with a gun may not be firing it. He may be threatening to fire it. His power in this situation lies in the potential of the gun, not in its actual use. If he is holding people hostage with a gun, a common response is to employ a negotiating team at the outset. The negotiating team may also have guns, and may threaten to use them too. But they are not going to storm the hostage taker with guns blazing as their first resort (one hopes). First, they will talk and negotiate. They will attempt to use different kinds of power to control the actions of the hostage taker.
Here, then, we have the first wrinkle: not all forms of power in society are equal in form, and they are not all equally effective. Numerous theorists from ancient Greek philosophers to the present day have talked about how power works in society. The social psychologists John R. P. French and Bertram Raven, in “The Bases of Social Power” (1959), came up with a schema of five bases of power in society that has been influential in the social sciences in general. The schema has its weaknesses, but is a useful starting point. The five bases are:
- Positional power. Positional power is held by individuals by virtue of their status within a social system. To a great extent, authority is positional power. It is also sometimes called legitimate power.
- Referent power. Individuals use referent power to influence others by virtue of their interpersonal skills. Charisma is one of the best-known forms of referent power.
- Reward power. People who can distribute things of material value (as perceived by a culture), as they see fit, have reward power. The strength of reward power is directly related to the perceived value of the rewards on offer.
- Expert power. People with valued skills and information can use them to control situations, especially when it comes to employment. Expert power is unlike the others, however, in that it has a domain that is limited to the area of expertise.
- Coercive power. Coercive power relies on punishments or the threat of punishments for its effectiveness. It can involve actual physical harm, or the denial of desired rewards. Therefore, in some ways it is the complement of reward power, but it is not an exact complement.
This schema gets us started, inasmuch as it shows the nuances of power in society to a degree, but, as with all classification systems, there are cases that do not fit neatly. Think of it as an analytic tool rather than a mirror held up to social reality. Imagine a police officer has stopped a motorist for some reason and needs that motorist to get out of the car. Numerous power alternatives are available: pulling a gun or dragging the motorist out (coercive), showing a badge or stating the law (legitimate), offering a reduction in penalties (reward), and so forth. The police officer may use these methods sequentially, or several at once. The main point to grasp is that, with the exception of pure brute force, these forms of power are socially constructed. Their power lies in the value placed on them by society.
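To make the schema concrete, here is a minimal sketch – purely illustrative, and not part of French and Raven’s own presentation – that tags the traffic-stop options from the example above with the base of power each relies on. The fourth option (building rapport) is my own hypothetical addition to round out the list.

```python
# Purely illustrative: tagging the traffic-stop options from the example above
# with the base of power each relies on. "Positional" stands in for French and
# Raven's "legitimate" power, as in the list earlier in the chapter.
from enum import Enum

class PowerBase(Enum):
    POSITIONAL = "positional"  # status within a social system
    REFERENT = "referent"      # charisma, interpersonal appeal
    REWARD = "reward"          # control of things people value
    EXPERT = "expert"          # valued skills and knowledge
    COERCIVE = "coercive"      # punishment or the threat of it

traffic_stop_options = {
    "draw a weapon or drag the driver out": PowerBase.COERCIVE,
    "show a badge and cite the law": PowerBase.POSITIONAL,
    "offer a reduction in penalties": PowerBase.REWARD,
    "build rapport and talk the driver out": PowerBase.REFERENT,  # hypothetical addition
}

for action, base in traffic_stop_options.items():
    print(f"{action}: {base.value}")
```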
Because power is socially constructed, it can be resisted by changing its basis or its social meaning. In addition, the strength of power is determined by the relationship between the individuals involved and their own relative social status. Offer the average person a few thousand dollars to do something for you, and (depending on the task, of course), you will probably get what you want. Offer a multi-billionaire a few thousand dollars for the job, and your chances are a lot slimmer. In the latter case, if you want that person to do something for you, and you have nothing of material value the other person wants, you have to change the basis of power. Maybe you can blackmail that person with some incriminating evidence in your possession. In that case you have replaced reward power with informational power (control of information, which Raven later added to the schema as a sixth base).
For now, I want to focus on how you deal with power exerted over you, because I want to challenge the common statement, “I didn’t have a choice” that people are fond of using. You always have choices. Sometimes, you do not like your choices, but often you do not have the tools to see the choices that are available to you. That is where anthropology can help. I have already dealt with problems facing people in situations where they are subordinate to others in the chapter on feedback (chapter 2), and you might want to consult that as well. Here I want to concentrate on power being exerted by others, and how you can counter it if you don’t like the situation.
One “solution” is to counter power with an equivalent power, as in LaPierre’s statement. The severe weakness in this approach can be seen by using physics to continue the metaphor. A brick wall will stop a speeding car which is out of control because the brakes have failed. Chances are, however, that the car will be destroyed and the driver and passengers will be killed or seriously injured. The wall may be destroyed also. The force has been negated, but the outcome is destruction. The reason that SWAT teams do not storm into a hostage situation with guns blazing as their first choice is that, while the hostage taker may be killed, so might some of the hostages and the police. The situation will be neutralized, but the outcome is destruction. Thus, meeting coercive force with coercive force is problematic. But this is not so true of other kinds of force.
Imagine you go to a doctor who tells you that you have a tumor and that the best option is surgery to remove it. That doctor is exerting expert power over you. If you are reluctant to go under the knife, you can counter that expert power in several ways. The commonest counter is to ask for a second opinion. This option relies on the fact that expert power is not infallible. The quality of expert power depends on numerous factors, such as training and experience. It may also be too narrow for your purposes (the single biggest weakness of expert power). If you go to a surgeon with a tumor, it is likely that the surgeon will recommend surgery because that is what surgeons do, and that is what they tend to see as the solution to a medical problem. If you go to another surgeon for a second opinion, you will probably get the same answer. But if you go to a doctor who is an expert in tumors, but is not a surgeon, you might get a very different answer. This is countering expert power with expert power.
You can also be your own advocate. Even though you may not have any medical training, you can go on the internet, or read medical textbooks, to see what general medical advice is available to you. This approach is a weak one because medical expertise is voluminous and requires considerable knowledge to assess adequately. But, at minimum, it can provide you with some expert knowledge to challenge the surgeon’s decision.
Expert power is countered by expert power all the time in law courts. The prosecution and the defense in a criminal case both have the right to call expert witnesses to help their case, and experts in all fields can disagree. The jury has to decide, when experts disagree, which expert opinion is more credible. This decision may stay within the realm of expert power by having the jury compare the qualifications and experience of the experts, but it may also slide into other areas of power. Some people make a substantial income as expert witnesses, and that income increases as they become more successful in convincing juries. An expert may, therefore, also employ charisma (referent power) to convince a jury in addition to expert knowledge. Sometimes, also, people can disguise referent power as expert power, as in the classic case of the “snake oil” salesman. The actual expert power is invented in this case, but the salesman uses charm to sell the snake oil while pretending to have expert knowledge.
Advertising relies very heavily on referent power to sell products. A company employs celebrities, attractive people, or people who are charismatic in some way, to endorse their products, hoping that you will be charmed by the personality into buying their product. Companies are, therefore, constantly vying for charismatic people to endorse their products, thus, countering referent power with referent power. You can be seduced by this referent power, or you can counter it with expert power. You can add reward power to the mix also. You can investigate clinical trials conducted on two different analgesics and determine that one is shown to be more effective than the other, even though they are both promoted by sexy actors. Or, you can determine that both are equally effective, but one is much cheaper than the other. There are numerous ways to counter one type of power with a different type of power. But, before attempting to counter someone trying to use power over you, you first have to determine what kind of power they are using.
Unfortunately, referent power is not always susceptible to a counter with expert power, as both politics and political history teach us. Hitler came to power in part through the constant repetition of demonstrable lies. His lies were easy to expose, but his followers chose to ignore the truth and accept his lies because he had immense charismatic power. Modern politicians often use exactly the same ploy. They are quite happy to disseminate lies, knowing that if a large percentage of the population believes those lies, their personal appeal will win the day. I am not going to name names, but I will point out that in the US, a number of actors have won elections, in large part (not entirely), because they are celebrities. It is difficult to counter extreme referent power in the political arena with expert power. In fact, the US has a history of countering referent power with coercive power. Several charismatic US presidents have been assassinated.
Quite often people will try to disguise one kind of power as another, and it is your job to see beneath this ploy if you wish to counter that power. You could say that a person who enters a bank intending to rob it, carrying a realistic-looking toy gun, is technically using coercive power, or rather a social construction of coercive power. If the people in the bank are convinced that the gun is real, then for all intents and purposes this is a case of coercive power. But if anyone in the bank realizes that the gun is a toy, then the situation changes. The robber was actually using referent power all along, but the people in the bank thought they were being controlled by coercive power. This case highlights the importance of belief when it comes to the exercise of power. In fact, in the case of a genuine armed robbery, the victims have to be convinced that the robbers will use their weapons, otherwise the robbers have no power. Actually shooting someone will do the trick, but robbers can also use referent power to convince people of their seriousness.
Another way to counter the use of power is to understand its limits. Obviously, the coercive power of a gun runs out if the gun runs out of bullets. Reward power is exhausted when things of value run out. Positional power is an interesting case because so much is dependent on the nature of the position occupied by the person wielding that power. Both reward power and coercive power may come into play to reinforce positional power, but it can exist independently of them. In many societies, an aging member of a family can command respect and honor simply by virtue of age without resort to rewards or punishment. The culture expects the family to show respect by virtue of position alone. But, what are the limits of this power? Positional power is the form of power that is most amenable to abuse, but there are ways to counter such abuse. For this you need insight into its limits, which vary greatly according to the source of the power.
Police officers, parents, teachers, bosses, and army officers all wield positional power, but its nature and limits vary considerably. This is the realm of authority. All authority has limits. An army officer has the authority to order soldiers to do any number of things, from marching to the point of exhaustion to cleaning toilets. There are, however, some things that an officer cannot legitimately order. In the US army, for example, all soldiers have the right to disobey an order that is illegal. This is a case where expert power supersedes positional power. If an officer orders the massacre of civilians, the soldiers have a right to refuse the order, and if the soldiers carry out the order and are brought to trial to answer for it, they cannot plead that they were following orders as their defense. Here things get tricky because the soldiers may fear being shot by their own officer, or intimidated by coercive force in some other way, if they refuse the order. That is, the officer is not using positional power on its own. Hence, courts martial have had mixed results.
In other, less dangerous, situations, expert power can be more effective against positional power. For example, there are things that a police officer can ask of a stopped motorist, and things the officer does not have the right to ask for. In most states in the US, the officer can ask for the driver’s license and the car’s registration. These documents identify the driver and verify the vehicle’s ownership. Beyond that things get murkier. The officer cannot legally require the driver to open closed or locked parts of the car without probable cause. If the officer claims probable cause, the driver can counter with evidence to the contrary, or by citing the law. This is another example of using expert power to counter positional power.
Depending on the circumstances, it is also possible to use positional power to counter positional power. This very much depends on how the positional power is constructed. In most, if not all, Euro-American countries, a great deal of positional power derives from one’s position in a hierarchy. The flow of the power in a hierarchy may be complex, but I will leave that topic aside for the moment. A typical hierarchy is pyramidal, with one person at the top, and then as one goes down the hierarchy, the number of individuals increases and their power decreases. In a feudal system, the king sits at the top, and under the king are nobles who owe loyalty to the king, and expect loyalty from those beneath them in the hierarchy. The nobles control knights, and the knights control peasants. You can change the titles, and some feudal systems have more layers, but the idea is clear. Positional power in hierarchies tends to flow from the top down (quibbles coming later).
History teaches us that feudal systems were constantly subject to the abuse of positional power. When the abuses became intolerable there were numerous recourses, with varying chances of success. When their feudal overlords became onerously oppressive, peasants sometimes rebelled – countering positional power with coercive power. They were rarely successful because their coercive force was not normally adequate to combat the coercive force that the knights and nobles had at their disposal. History is littered with peasant revolts and slave revolts that ended with the execution of the ringleaders and a return to business as usual with little or no redress for the abuses. When people much higher up the hierarchy rebelled, the results were often different because they had more coercive power, which they usually combined with referent power to convince others to join them. What rarely occurred was the use of positional power to challenge positional power because the hierarchy did not usually allow it. In hierarchies that we are more familiar with, using positional power to challenge positional power is a more readily available option.
If your boss loads you with more work than you can handle or habitually tells you to do things that are outside your job description, there are a number of things you can do. You can complain, suck it up, or quit. If complaining to the boss is ineffective, tolerating the situation or quitting may seem like the only other options, but most hierarchies have a mechanism within the system for objecting to abuses. That is, there can be checks on positional power using positional power. One such option is to go to your boss’s boss and complain. Some hierarchies allow for this option, others do not. The metaphor taken from the military is “the chain of command” and in some hierarchies, including the military, the chain of command can be rigid. If complaints about your boss have to go through your boss to reach higher levels, then you probably stand no chance of rectifying your problems. If you can go up the chain without going through your boss, you may be more successful.
Some hierarchies have officers that are, to one degree or another, outside the hierarchy. In some jurisdictions, complaints of abuses of power by police go to an internal review board, and in some cases to an external one, or both. External review gets outside of the positional power structure entirely, whereas internal review is part of the hierarchy. If you want to lodge a complaint about police brutality, do you think you will have more success filing the complaint with Internal Affairs, itself staffed by police and under the authority of police, or with a civilian complaint board that is independent of the police hierarchy? Some cities have an office of ombudsman whose job it is to address complaints filed by citizens against government agencies. The degree to which they are effective depends on how they fit within the government hierarchy. The effectiveness of an agency in monitoring and correcting abuses of positional power depends on the faith that the wider society places in the agency as much as on reward power, coercive power, or any other form of power at its command. In any case, all of those forms of power are granted to the agency by society. Authority is socially constructed, but we tend to forget that fact.
Resorting to positional power to reinforce positional power is one way that bosses can maintain abuses, but it can also backfire. Once I had a job teaching English at a private school in China and my boss was unusually abusive of her power. She constantly gave me more work, changed my schedule, asked me to cover for classes on my days off, and generally made the job unpleasant. For a time I let it all go because the job paid well, and I needed the money at that time. But then she went too far and we had a big fight in the hallway, which continued via email. She thought she had the upper hand because my visa was dependent on being employed at the school. So, she went to the owner to complain about my insubordination, and asked him to threaten to fire me if I did not get in line. She did not have the power to fire me directly. Instead the owner fired her because he valued my services over hers. The school had just begun a new program teaching university graduates the English necessary to pass standardized tests for entry into graduate degree programs in the US, and I was an exceptionally rare commodity because I had had a lot of experience teaching for those exams. Almost no one in the city had those qualifications, and offering such courses was extremely lucrative for the school. At the time, no other school in the city could offer the courses, because no one could teach them. The owner could easily replace the school’s principal, but could not replace me. Thus, although I was low down in the hierarchy, I had expert power that trumped positional power.
In sum, to challenge positional power you have to know how it works, and this is the purview of social science in general, and anthropology in particular, because in many cases hierarchies deliberately hide the source of their power so that it is not challenged, or else challenges are ineffective. Positional power rarely exists in isolation. Some hierarchies are supported with coercive power, such as feudal systems and the military, some by expert power, such as universities (at least in theory) and hospitals, and some by referent power. But it takes no time at all to pull such an analysis apart when you think of situations in your own life where one sort of power ought to dominate, yet another sort wins out. Maybe you had a colleague promoted over you because that person sucked up to the boss, even though you were more competent? Politicians in the US pour millions of dollars into election advertising, frequently relying on image over substance.
Universities and schools have fairly straightforward hierarchies that present us with some interesting social facts. In my experience as a university professor, undergraduates have very little understanding of how university hierarchies work. This is because the hierarchy has very little impact on their lives. They also have very little understanding of what their lecturers do outside of teaching because attending class and passing exams is what matters most to them. We can generalize this by saying that people at the bottom of hierarchies are likely to know little about the upper echelons of the hierarchy they are in because their day-to-day lives are mostly concerned with the people immediately above them. As you climb the hierarchy, you have to deal with people both above you and below you, and, sometimes, you have to have detailed knowledge of the whole hierarchy. To be promoted at a U.S. university you are typically reviewed by a committee of colleagues, then their recommendation goes up the hierarchy, perhaps to your dean, then provost, then president, to be approved at each level. Each of these levels has different concerns that could be educational, financial, institutional, or even personal, and these concerns reflect that level’s place in the hierarchy. Knowing what these different levels care about can influence how individuals present themselves for promotion. These levels can, indeed, make decisions that can have a major impact on the lives of undergraduates also, but undergraduates’ lack of knowledge of the hierarchy makes it difficult for them to effect changes when they are negatively impacted. The power that people at the bottom of a hierarchy have lies in their sheer numbers, which can be utilized if they work together, and not if they are divided.
I have already mentioned peasant and slave revolts as a way of challenging positional power. Historically these revolts almost always failed because of the coercive power, in the form of brute force, that those at the top commanded, yet this must be tempered by the recognition that the brute force itself was supplied by people at the bottom of the hierarchy, just from a different part of it. Kings, nobles, and knights together are vastly outnumbered by the people below them in a feudal system, and if those people were completely united there could be no effective resistance. Loyalty, a form of referent power, was necessary to divide the people at the bottom into rebels and loyalists, so that the rebels could be defeated.
The complexity of hierarchies is sometimes deliberately managed to prevent people at the bottom from uniting. Imagine an organization with a president who has several vice presidents, each of whom has a department divided into sections headed by a section chief, with each section divided into different functions carried out by workers in units under the direction of a unit head. Such an organization may have hundreds, if not thousands, of workers at the bottom of the hierarchy, but if they are all split up into units and sections, they may not feel the need, or have the ability, to work together to bargain for better pay and work conditions. The workers in the individual units may have no interest in, or knowledge of, the conditions in other units, making it difficult for all workers at the bottom of the hierarchy to work together to improve conditions. Such narrowing of vision focuses the individual worker’s attention on climbing the hierarchy, not improving conditions for everyone across the board.
A generalized workers’ union can counter such institutionalized fragmentation and segregation by bringing all the workers of similar rank together to bargain for better conditions. Collectively they can use the coercive power of going on strike, but they must do it collectively, and, hence, they must be organized. Trade unions were an outgrowth of the Industrial Revolution, but soldiers and sailors have periodically exerted the coercive power of mutiny for centuries to combat the abuse of positional power. An enduring question in anthropology is why people with collective interests do not act collectively more often when they feel oppressed within hierarchies. This question strikes at the heart of how the positional power of hierarchies works.
In the history books you will read that Napoleon won the battle of Marengo, Wellington beat Napoleon at the battle of Waterloo, Washington defeated the British, Grant defeated Lee, etc. Really? Did one person win each of these battles? Were they really battles between only two or three men? Of course not. Hundreds of thousands of soldiers were involved, and without those hundreds of thousands, the battles would not have been fought at all. Giving all the credit to the person at the top of the hierarchy greatly overestimates the importance of that one person. Yet we do it all the time, and it is this social construction of the importance of the person at the top, along with the sense that power flows – naturally – from the top down, that keeps positional power functioning. There is an alternative. Power can flow from the bottom up.
The Protestant Reformation in sixteenth-century Europe rocked more than the Church; it shook the political and social world. One of the movements that came out of the Reformation was Presbyterianism, which proved to be a fundamental challenge to existing power hierarchies. In a Presbyterian system the members of a church elect elders (presbyters, from the Greek), and the elders as a body make decisions concerning the church, including choosing a pastor. This was the complete inverse of the Catholic Church, where the pope sat supreme at the top. He created cardinals, and below them were archbishops and bishops and so on down the hierarchy. Power flowed from the apex to the base. In the Presbyterian system, power begins at the base and flows upwards – in theory, at least. Many modern constitutional democracies use this Presbyterian model, and politicians talk about their “base,” meaning the mass of people at the bottom who support them. Without a base, these politicians have no power.
Absolute monarchs were deeply troubled by Presbyterianism, which surfaced in England under Elizabeth I, brought there by Protestants who had fled to the Continent during Mary’s persecutions and who picked up some nasty habits whilst there. The monarchy and the Presbyterians fought a civil war in the following century when Charles I asserted that kings were appointed by God. The Presbyterians and their parliamentary allies begged to differ, and Charles was executed after they won. In the end, though, many Presbyterians and other dissenters moved to North America, and top-down hierarchy survived in the Church of England and English government. The Presbyterian model was modified to become constitutional democracy in the US.
The study of political systems was a dominant part of British social anthropology in the early to mid-twentieth century, begun largely because the British government was interested in political control in areas it had colonized, although the relationship between the government and anthropology soured. Classics include African Political Systems (1940), edited by Meyer Fortes and E. E. Evans-Pritchard, Political Systems of Highland Burma (1954) by Edmund Leach, and Politics, Law and Ritual in Tribal Society (1965) by Max Gluckman. Gluckman, who was born in South Africa, was thoroughly and loudly opposed to British colonialism. In the US, the government co-opted anthropologists to aid in understanding the power structures and cultures of defeated nations in the aftermath of World War II, and Ruth Benedict’s The Chrysanthemum and the Sword (1946) was one product of this quest. Political Anthropology (1966) edited by Marc Swartz, Victor Turner and Arthur Tuden coalesced the field as a sub-discipline, which subsequently flourished.
Since the 1960s, the anthropological study of power has changed significantly, in large part under the influence of Pierre Bourdieu, who introduced the notion of “social fields” within which individuals vie for dominance based on their “cultural capital,” which includes education, intellect, style of speech, and dress, among other things. Thus, simply speaking in a certain way or wearing a particular uniform can be enough to exude authority. For Bourdieu these social fields are hierarchically ordered, with economic power typically at the top. Bourdieu has, however, come in for some serious criticism, notably from British social anthropologists, because his concepts, especially cultural capital, are only vaguely defined; they have been adopted by numerous anthropologists according to their own proclivities, and hence tend to mean whatever you want them to mean.
Let me leave aside the complexities of Bourdieu for now and ask the more general question: What is it about hierarchies? Why do we find them everywhere? Look at your own life. How many hierarchies are you part of and where do you sit in them? What are your aspirations within each of those hierarchies? How do you expect to achieve those aspirations? I was the head of anthropology at my college for 20 years, but I did not achieve that position because of my prestige within anthropology. All the senior members of the department were much more well-known within anthropology than I was. I was elected because the other senior members did not want the job (because it involves endless paperwork and committee meetings), so I ended up with a great deal of authority (positional power), but very little coercive power, at the outset, to accomplish anything. I was also the pastor of a church for many years within the Presbyterian system. To become pastor I had to be elected by the elders of a church, and, even though as pastor I was the moderator (president) of session (council of elders), I did not set the agenda, nor did I have a vote when it came to making decisions. In both cases, my ability to get things done in the way I wanted them done, had to be managed through persuasion (referent power).
What Bourdieu emphasized, and I want to underscore, is that power is not what it seems on the surface. The schema I started with is simply an academic tool, and not an especially precise one. It points out, as much as anything, that the exercise of power is complex, and its mechanisms are frequently hidden from the people involved. The power that one person wields over another may be a complicated mix of charm, money, threat, and prestige, not necessarily separated out as such. Why do you vote for a particular politician, or obey your boss, or do what you are told by a teacher, or respect an elder (if you do)? In each case there is usually not a simple answer. In families, a certain degree of positional power accrues naturally by getting older, but that power can be enhanced or diminished by numerous other factors. A rich elder can command more respect than a poor one because that person can choose who will inherit the wealth. Hierarchies and networks of power and authority are driven by a great many factors. At the end of the day, knowing that the bases of power, however defined, are socially constructed is a vital component of manipulating them.
Chapter 7: Who is My Brother? Kinship Systems
The study of family and kinship is a huge component of anthropology, and the literature on it is vast. How people identify relatives varies greatly across cultures, and I can only scratch the surface of this deep subject in this chapter. One of the first things you discover when you try to learn a language from a culture way outside of Europe – from Asia, for example – is that you have to learn (different) words for, perhaps, older brother and younger brother, or maternal grandmother and paternal grandmother, or aunt by blood and aunt by marriage. Why do we lump certain relatives all together in one category, yet are very careful to be specific about others? When you tell me that your cousin visited you yesterday, I do not know whether that person is male or female, a cousin on your mother’s side or father’s side, a first cousin or second cousin, etc. When you tell me your brother visited you, I can be much more certain of the relationship – although not completely certain. It might be a half-brother, step-brother, or someone else lumped into the “brother” category. Nonetheless, the range is much more limited. Except under rare circumstances, you and your brother share a relationship with one or both parents (and your brother is male). How you are related to a person you call cousin is much more complicated because there are so many more ways that a cousin can be related to you.
Typically, European languages have very specific words for members of their nuclear family, and only generic ones for relatives outside the nuclear family. This is not true in all cultures; in fact, it is an unusual system worldwide. In some Polynesian cultures, people that we, in English, call “brother” or “cousin” (male) are all called by the same term. The difference between a cousin and a sibling is not marked in their languages. Among the !Kung San of the Kalahari, a man calls “sister” any woman who has the same name as his sister, regardless of biological relationship (that is, he uses the same term for her as for his biological sister), and it is incestuous for him to marry her. Mandarin Chinese distinguishes far more relatives than English does, with separate words for elder brother and younger brother, maternal grandmother and paternal grandmother, and so on.
We have to be a little careful here. Just because people use the same word in their language for people related to them in different ways does not mean that the differences are not important or that they are not recognized. In our kinship system we call women “aunt” whether they are related to us on our mother’s side or our father’s side, and whether they are our mother’s or father’s sisters, or our mother’s or father’s brother’s wife. Even though the kinship name is the same (“aunt”), and despite the fact that we know exactly how they are related to us, we don’t mark the difference linguistically. We do not have a term “aunt-in-law” for women who are our aunts by marriage as we do for sisters-in-law or mothers-in-law. Why? Good question. Asking such a question gets to the heart of the anthropological study of kinship.
All cultures have a technical vocabulary for discussing kinship, and, as I will repeat many times, any practice that is universal is worthy of study because it means that it is central to being human. I am going to talk a great deal about how kin are named in different cultures, and the implications of such systems of naming, but I also want to give a giant caveat at the outset. The technical terms in a language are not necessarily the terms that people actually use. There is a term for father-in-law in Chinese, but my Chinese daughter-in-law calls me the Chinese equivalent of “daddy” and I call her “daughter.” When she is talking to others about me, she usually calls me “father-in-law” to avoid confusion with her biological father, but she never uses the term when addressing me directly. This is standard practice in China. In English, when talking about her, I usually call her “my son’s wife” not “my daughter-in-law,” but I call her “daughter” to her face. If I were interviewed by an anthropologist, and I knew nothing about kinship analysis, when questioned about my kin I would probably call her “daughter-in-law” or label her as such on a chart, because that’s the correct term, even though I almost never use the term in speech. Anthropologists in the past did not take such variations in usage into account, and built theories, sometimes very elaborate ones, based solely on the technical kinship terms. We must avoid this mistake, but we can still theorize some social meanings based on the technical terms.
A standard fieldwork task in anthropology is to draw a person’s kindred, that is, a chart listing all the people related to that person (either by blood or marriage). A kindred chart is a representation of kin as they relate to one person, usually labeled EGO. Here is my kindred diagram (abbreviated), based solely on memory; it is the kindred chart I have in my head, not what I could draw if I did genealogical research.
(Figure 00 here)
I use standard anthropological symbols which should not be hard to understand: triangle for male, circle for female, = for marriage, and a symbol struck through to indicate an end (a person struck through is dead, a marriage struck through has ended), plus assorted symbols explained in the key. What I have done on this chart is to give the technical terms for all of my kin, but also the kin terms I actually use (which are sometimes the same, sometimes different). Several things stand out. First, some kin terms have diminutives, and some do not. I call some of my aunts, “auntie” and some “aunt.” I call all my uncles, “uncle.” Second, I do not always use a diminutive when it is available to me. I use diminutives only when I feel a particular closeness to a relative. Third, I use diminutives only in conversation with close family members; I do not use them when referring to my kin to outsiders. Fourth, my use of kin terms has changed over the years. In middle age I found it too baby-ish to use diminutives, but I am (sort of) comfortable with them now. Fifth, I don’t use kin terms at all for some relatives when talking to certain other relatives (notably my sisters). So, for example, I call one of my relatives, auntie Ruth, when I am talking to my sisters, but I call her daughter, Sally, not cousin Sally.
I will return to the anthropology of formal kinship systems later, but for now let’s stick with personal information. Using my kindred as your model for the basic idea, draw your own kindred. That is, if you feel up to it. It can be a rewarding exercise. Here’s the most important part of this exercise; do everything from memory. Draw the chart based on things you actually recall, not things you can look up, or what you can ask others about. This chart should be a visualization of what you keep in your head. You may need a big sheet of paper, and you may need to make several attempts because at the outset you will not leave enough space inside the diagram to fit everyone in. You may also need to be creative if some of your relatives have unusual relationships. Start with yourself and mark it EGO (me). Then work out starting with siblings, then parents and children, then grandparents, aunts and uncles, their children and so forth. It will take a long time. When you have put people on your diagram, mark under each one what you call them (noting all the alternatives – as I have done). Now – very important – do you see any patterns?
When I started teaching kinship, I drew my own kindred on the board, from memory, without having ever done it before, as a simple exercise to show my students how to do it. A couple of features popped out that had not occurred to me before drawing the chart. The most salient to me was that there were numerous divorces on my mother’s side; in fact, everyone in my generation and my mother’s generation on that side was divorced, and none on my father’s side. The perennial question then followed: WHY? A plausible answer came to me because I knew my family’s history, but my students did not. So, I asked them to suggest possible reasons for the pattern. Give it a shot before reading on. My first answer is probably not the whole story, but it’s a very good start, and it involves a single variable. To find a single, simple explanation you have to figure out what might be different between my father’s family and my mother’s family. My father and mother were four years apart in age, and all the members of their generation, and all the members of my generation, on both sides, fit within a narrow enough age range. Thus, age was not the factor. On the other side of the coin, we are all over the map when it comes to levels of education and careers. So that cannot be the single, simple variable. My father and mother were different in one significant way, and this difference colored my upbringing and my vision of who my parents were. Give up? Read on.
My father was born in Glasgow in Scotland, and my mother was born in Eastbourne on the south coast of England. If you are not from the UK, this distinction may not mean much to you, but if you know anything about Scotland and England you will know that the differences perceived by members of the individual cultures are substantial. There are also substantial similarities, but when members of each come face to face, it is the differences that are noticeable. My father’s family were Scottish Presbyterians in the Calvinist tradition and my mother’s family were raised in the Church of England tradition. That is a shorthand way of saying that my father’s family were solidly lowland Scots, and my mother’s were run-of-the-mill southern English. All of my cousins and their children on my father’s side were born and raised in Glasgow, and most of them still live there. All of my maternal cousins and their children were born and raised in the south of England and still live there. Thus, it is possible to draw a line down the middle of my kindred with my maternal kin on one side and my paternal kin on the other, and we can mark one side Scottish and the other English.
It is not correct to jump straight to the conclusion that being English leads to a high prevalence of divorce, while being Scottish leads to stability in marriage. That is going too far. It is, however, quite correct to say that a high divorce rate is correlated (in my family) with being born and raised in England. Correlation is not causation, although the two are frequently confused. To show that being English leads to a higher divorce rate requires a great deal more data and hypothesis building and testing, but finding a correlation is a start. That every single person related to me by blood on my mother’s side in her generation and my generation has been divorced, while no one on my father’s side in the same generations has, is unlikely to be coincidence. Patterns such as this one are the cornerstone of anthropological inquiry. Data from one family is merely anecdotal, of course. You have to collect much more data from multiple families before you can develop a workable hypothesis. I will be the first to admit that my family is quirky.
Another hypothesis you might generate is that divorce is contagious. In my maternal family, one person got divorced, the results were not disastrous, so others with problems in their marriages followed suit. In my paternal family, no one got divorced so the contagion never started. Here you can also start a quibble. My nuclear family (where divorce was prevalent) is not, strictly speaking, on one side or the other. My sisters and I are related equally to my father and my mother. I can counter that argument by saying that when we lived in the UK we lived in southern England, and we had a much closer association with my mother’s kin than my father’s kin (and still do). This fact is demonstrated by looking again at my kindred.
From memory I can trace my maternal kin back to my great grandparents (both my grandmother’s and my grandfather’s parents). I can name all of my maternal grandparents and their siblings, and can also tell you a great deal about their lives, even though I met only a few of them, because my mother had a huge collection of photographs of her kin, and would sometimes pull them out on a Sunday afternoon when I was a small boy and tell stories about them. My sisters and I still have all those photos. My father was extremely taciturn about his family, and I rarely met his kin. In consequence I know almost nothing about them. In anthropological terms I would, therefore, call my kinship knowledge matrifocal, that is, focused on my mother’s side, and I would hypothesize that my nuclear family was more heavily influenced in kinship matters by the maternal than by the paternal side. Here I will introduce you to an abbreviated table of kin terms.
| Matri – (related to mother) | — focal (focused on) |
| Patri – (related to father) | — lateral (side) |
| Bi – (both mother and father) | — lineal (descent and inheritance) |
| Ambi – (either mother or father) | — local (residence) |
This is a small sampling of kin terms, but it will be enough for present purposes. It is also an area of endless debate among anthropologists, so I will simply give you a cursory explanation to get us on our way. The table is mix and match. Any term in the left column can be attached to any term in the right column, resulting in a convenient word that allows anthropologists to talk about kinship patterns without a lot of explanation. Thus, instead of saying “kin on my mother’s side” I can say “matrilateral kin,” or, instead of saying “when a couple marries in this culture they live with the father’s kin” I can say that the culture is patrilocal. Now let us move to more general kinship matters.
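To see how mechanically the mix and match works, here is a small sketch in Python. It is my own illustration, not a standard anthropological tool: it simply pairs every prefix with every suffix from the table and prints the resulting terms with rough glosses.

```python
# A sketch of the mix-and-match kin-term table above. The glosses are
# simplified, and the pairing is purely mechanical.

prefixes = {
    "matri": "related to the mother",
    "patri": "related to the father",
    "bi":    "related to both mother and father",
    "ambi":  "related to either mother or father",
}

suffixes = {
    "focal":   "focus of attention",
    "lateral": "side of the family",
    "lineal":  "descent and inheritance",
    "local":   "residence after marriage",
}

for prefix, prefix_gloss in prefixes.items():
    for suffix, suffix_gloss in suffixes.items():
        print(f"{prefix + suffix:12} {suffix_gloss}, {prefix_gloss}")

# For example, "matrilateral" comes out as "side of the family, related
# to the mother" (kin on the mother's side), and "patrilocal" as
# "residence after marriage, related to the father" (the couple lives
# with the husband's kin).
```

Not every one of the sixteen combinations is equally common in the anthropological literature, but each one is at least interpretable, which is the point of the table.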
Anthropologists discovered in the nineteenth century that the way that European languages name kin is not the norm in the rest of the world; in fact, it is rather unusual. Look at figure 00. This is how Europeans typically name kin. To avoid linguistic complications, anthropologists use numbers rather than names for kin, so that kin in the diagram with the same number have the same kin term.
Figure 00 here
This system used to be called the Eskimo kin naming system, but now is more usually called the Inuit system to take account of the fact that “Eskimo” is an outsider term, whereas “Inuit” is the term the Indigenous people prefer for themselves. In textbooks you will find both terms used. Any term you use is going to be plagued with problems. Looking at the diagram you see that kin terms differentiate generations, with special terms inside the nuclear family, and very general terms outside the nuclear family. Gender is generally distinguished as well, but in English the gender of “cousin” is unstated (it is specified in most other European languages, in part because the gender of nouns overall is of much greater importance). Matrilateral and patrilateral kin are not distinguished at all, and, in the parental generation, no distinction is made between aunts and uncles who are biological kin and those who are kin by marriage.
The Khmer use this system also, but with a wrinkle. In the Khmer language (reflecting Khmer cultural practices), there are different words for older siblings and younger siblings. Thus, there are also different words for uncles and aunts who are older than ego’s father or mother, and those who are younger. Things are further complicated by the fact that those kin terms are regularly used for people who are not related to ego at all, but who are intimate in some ways. Social rank, while currently in flux, is still very strongly influenced by relative age, with the eldest commanding the greatest respect.
My first Khmer language teachers were both around my son’s age, and it took almost a month for them to settle on a term of address for me because, by virtue of my age, they should have used a respectful term, but normally with students (even slightly older ones) they used a subordinate term. This decision was important because in the Khmer language there are multiple ways to express the second person “you,” and with a person you interact with all the time it is not polite to use a generic form of “you.” Instead you use a kin term or status term or personal name. My students in the US used to use my first name, which would have been fine with me in Cambodia, but my Khmer teachers would not dream of it. Using a first name with a man of my age and social status is unthinkable, and using a kin term would have been too intimate. We settled on an awkward compromise.[3] My quirky experience aside, the age-rank system in Khmer culture complicates kin terms within the Inuit system they use, and points out that not all kinship systems that are grouped together are identical. English does not distinguish male and female cousins, but Romance languages do. Even so we can say that the French and Italians use the Inuit system. But how different is different enough to classify a kinship system as a different system entirely?
Until Lewis Henry Morgan did kinship studies of the Iroquois, the Inuit system was thought to be obvious, or normal, or even universal, but Morgan discovered something startling. The Iroquois used the same term for a father’s brother as for father, and used the same term for father’s brother’s children as for brothers and sisters. This discovery opened up the whole field of kinship studies. You can see some of the Iroquois system in this figure:
Figure 00 here.
What you notice is that ego’s parents’ siblings of the same gender (father’s brother, mother’s sister) have the same kin term as parents, and their children have the same kin terms as ego’s siblings. Anthropologists call these kinds of cousins, parallel cousins. The children of siblings of the opposite gender (father’s sister and mother’s brother) are called cross cousins, and they have a different kin term from siblings, quite often a term similar to “in-law” or “husband/wife” in cultures where cross-cousin marriage is encouraged. I deal with issues of marriage expectations and incest in separate chapters. For now, I just want to note that kin terms carry connotations of expected behavior. Calling two kin who are related to you biologically in different ways by the same kin term implies that you expect to relate to them in similar ways – not identical, but with an understanding that the relationship is equal in certain fundamental ways.
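To make the parallel/cross distinction concrete, here is a minimal sketch in Python – an illustration of the logic just described, not a piece of ethnographic software. A first cousin is linked to ego through one of ego’s parents and that parent’s sibling; comparing the genders of those two siblings tells you which kind of cousin you have.

```python
# Classify a first cousin as "parallel" or "cross" from the genders of
# the two linking siblings: ego's parent and that parent's sibling.

def cousin_type(linking_parent: str, parents_sibling: str) -> str:
    """Both arguments are 'M' or 'F'."""
    return "parallel" if linking_parent == parents_sibling else "cross"

# Father's brother's children: same-gender link, so parallel cousins,
# called by the same term as siblings in the Iroquois system.
print(cousin_type("M", "M"))   # parallel

# Mother's brother's children: opposite-gender link, so cross cousins,
# potential marriage partners where cross-cousin marriage is favored.
print(cousin_type("F", "M"))   # cross
```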
The Iroquois kinship system is found in widely different cultures all over the world, in parts of sub-Saharan Africa, Melanesia, and south Asia for example. The cultures that use this system tend to prefer cross cousin marriage, which promotes constant ties between lineages or clans over time. The Yąnomamö of the Amazon rainforest, for example, routinely marry cross cousins, while it is forbidden to marry parallel cousins. It is quite common for men to marry each other’s sisters, creating double bonds between their lineages. A man uses the term roughly equivalent to “wife” for his female cross cousins and “brother-in-law” for his male cross cousins, reflecting expected behavior. Such lineage alliances are important for long-term survival because lineages control vital resources, including land. The reason that it is not productive to marry a parallel cousin is that they belong to the same lineage as you, so you are not strengthening an alliance between different lineages.
The Yąnomamö marry cross cousins bilaterally. For example, two men from different lineages will marry each other’s sisters. Their children will be cross cousins, so they can marry and have children, and so on indefinitely. The lineages become more and more stitched together via cross cousin marriage. Lineages and lineage ties are important in cultures where lineages control resources, but their importance lessens when the lineage’s control of resources lessens. This may lead you to believe that kinship is determined by economic values, but this is not always the case. Resources need not have a direct economic value. They may involve rank, prestige, and power as well.
The open question in anthropology concerns what happens when the social values of a culture change due to changing historical circumstances. Do kinship systems change as well to accommodate new circumstances? If a group stops practicing cross-cousin marriage, will they abandon the Iroquois kinship system? If the nuclear family becomes extinct in Europe, will kin terms change? The answer is not simple. Kin terms can persist even though their utility has weakened, but new terms can also be introduced. The percentage of single-parent households is growing in the United States, and the percentage of traditional nuclear families – husband and wife plus their offspring – is shrinking. Getting exact statistics is difficult because the reporting agencies use different definitions of the nuclear family. If we define a nuclear family as a married couple and their biological offspring, then it now falls below 50% in the US. It is still the most common arrangement, but no longer the majority. Single-parent households now make up 27% (according to census data reported in 2016), more than tripling since 1960. The statistics do not normally separate out families made up of married couples living with their children when the children are from earlier marriages (sometimes called blended families); these are commonly lumped in with nuclear families.
What the confusion in the terminology of reportage indicates is that the nuclear family is the enduring model regardless of shifting circumstances. A man and woman who are not married but live together with their biological children are usually counted as a nuclear family, as is a married couple living with children from earlier marriages. They are very different kinds of families, however, and may have different ways to refer to kin within the family. Here, once again, there is a complication arising from formal usage versus daily usage. In the blended family, for example, all the children may call the married male “dad,” but when talking about him to outsiders may refer to him as “father” or “stepfather” (or “stepdad”) depending on a variety of circumstances. That is, despite kaleidoscopic changes in the exact nature of family relationships, classic kin terms persist.
When it comes to reportage, the terms used may be more wishful thinking than a reflection of actual practice. The United Nations Statistics Division (UNSD), for example, defines a nuclear family as any household containing a parent (one or two) and their children, without other family members, such as grandparents or aunts and uncles. So, by that definition, the nuclear family includes single-parent households, and in consequence allows (unthinking) analysts to claim that the nuclear family is as popular as ever, even though the structure of the family is rapidly changing. Now that same-sex marriage is legal in many places, a nuclear family can consist of same-sex partners and their adopted children. By the UNSD definition, this family is the same as a single woman living with her biological offspring from multiple fathers.
Here we have a problem that plagues much social analysis in general, and is of particular concern to the anthropological study of kinship. Is the Khmer kinship system the same as the English system because they both differentiate between parents and siblings on the one hand, and aunts, uncles, and cousins on the other, or are they different because the Khmer system insists on delineating (as classes) older and younger siblings, while the English one does not? The answer depends on what characteristics you consider to be essential to the definition of the Inuit kinship system, and which characteristics are simply secondary variations. Study of the Iroquois kinship system was dogged by such questions once comparative analysis showed that systems that had at one time all been lumped together as Iroquois were, in fact, markedly different, so different that one of them, the Tamil system, is now considered its own distinct system.
Leaving aside the immense technical debates, anthropologists have argued that understanding kinship systems is important because they lie at the heart of our social structure. We don’t just use kin terms for people who are related to us by blood or marriage, but for all manner of people who are not related to us in any way, and we use the language of kinship to define a whole variety of relationships, and of statuses. So, “who is my brother?” In strict biological terms, my brother is a male child of my parents, with extensions allowed for half-brothers and step brothers. But the definition of “brother” certainly does not end there. Catholic monks are called brothers, and many trade unions and similar societies are known as brotherhoods. Freemasons refer to each other as brothers. No doubt you can list many more examples. What do all these “brothers” have in common? There is something about being a brother that is fundamental to the social fabric. There are expectations in behavior that stretch well beyond biological kin, but that behavior is defined in kinship terms.
The kin terms that we use radiate into the whole of society, and they are quite specific in application and meaning. Kin terms from the nuclear family resonate especially for us. If you say, “he is like a brother to me” or “she is like a mother to me” I will get the basic idea of the relationship without needing the details spelled out. If you say, “she is like a cousin to me” I have no idea what you are talking about. Cousins are kin, but their roles are not very well defined, so they cannot be generalized to other social relations very easily. Getting more general, we can talk about a company that is the parent company, or a sister company, to the one I work for. Maybe my contract has a grandfather clause in it. There is something fundamental to the structure of society about kin terms. That means that if a culture perceives that family relationships are changing, the people in that culture may see those changes as a threat, not just to individual families, but to the whole culture. That is why politicians in the US often run on a platform of “family values.”
Think about why some people are bitterly opposed to same-sex marriage and/or same-sex adoption. These policies do not require them to change their own families. But these people merge the idea of “family values” with “cultural values,” or maybe even “universal values.” They do not understand that the ideal of a nuclear family consisting of mother, father, and offspring living as a unit in their own household is not the only archetype in the world, nor even one that many cultures recognize or want. Cultures that group parallel cousins together with siblings (as insiders) and view cross cousins as potential marriage partners think their family structure is the best. Family structures work in conjunction with other social variables within culture, and changes in family structures change the culture (and changes in culture change family structures).
When the Chinese government enacted the one-child policy in an effort to curb massive population growth, little thought was given to what the policy would do to family structure, because stemming runaway population growth was a top priority to the exclusion of other considerations. Thus, from 1979 to 2013, that is, 34 years, or one generation, the vast majority of children born in China had no siblings. There were some exceptions in rural areas (if the firstborn was a girl), and among ethnic minorities (and for people who had enough money to pay stiff fines). But growing up with no siblings was the experience for a whole generation of Han Chinese. There is mountainous discussion, both inside and outside China, concerning the demographic effects of the policy, but scant discussion of the internal effects of the policy within families.
It is generally believed that the one-child policy has had some effect on how women are raised now, because when families had multiple children, the girls were relegated to inferior status and boys were favored. Now, families with a single girl shower as much attention on her as they once did only on boys. Single children also carry much more social responsibility than children in large families once did. It is expected that children will take care of their parents and grandparents when they get old, but with the one-child policy you end up with what is called the 4-2-1 syndrome: 4 grandparents, 2 parents, and only 1 child responsible for taking care of all of them in old age. I know from my own experience living and working in China that there is immense pressure on women in their 20s to make a “good” marriage. I would hardly call my experience teaching university students a well-defined fieldwork project, but I did have many conversations with young women, all of whom lived at home, all completely funded by their parents, but expected to go on dates that their parents arranged, and expected to marry, by 30, a man approved of by their parents. If love were in the mix that was a plus, but the main criteria for acceptability of a marriage partner were financial: does the potential husband have a well-paying job, and does he have job security and obvious prospects for advancement? I will deal more with marriage in chapter 00, but, for now, note that a culture with single-child households has pressures that cultures with other family structures do not.
What constitutes an “ideal” family, thus, is dependent on a number of cultural factors that change over time. You may believe that your notion of what an ideal family is comes from individual desires and circumstances, but it should be clear at this point that cultural factors play a major part in your decision making. In Euro-American culture it is very common to be raised in one nuclear family, then, on marriage (or before), to leave one’s birth family and set up a new household that becomes a new nuclear family. That’s how it operated in my family. My mother and father left home and raised a family; when I reached 19, I left home, eventually married, and raised a family; and now my son is married, living separately from me with his wife, and plans on raising a family. Meanwhile, I live alone. I am happy with the situation. I do not expect my son to look after me when I get older. His wife, however, who is Han Chinese, does expect to look after her parents in old age.
Chapter 8: Till Death Us Do Part: Marriage
Anthropologists used to claim that marriage was found in all cultures (because marriage is fundamental to kinship, and kinship is universal), but they had trouble coming up with a definition of marriage that covered the kaleidoscope of possibilities. At one time marriage was defined as a socially recognized union between a male and a female (sometimes very loosely considered as such by the culture), which carried formally recognized rights and obligations between the partners, as well as obligations towards their offspring, primarily concerning inheritance. It was also understood that marriage typically formed bonds between kinship groups, and not just between individuals. What those bonds are and how they are played out can be exceedingly varied. Marriages between heirs to kingdoms could have monumental political implications, such as when Ferdinand of Aragon and Isabella of Castile married, effectively creating a powerful new state out of the union of their kingdoms. Or a marriage in the contemporary US might involve little more than deciding which family to go to for Christmas and which for Thanksgiving. Regardless, the fates of two families become entwined in ways that they were not prior to the marriage, even when the couple are blood relatives.
Because any definition of marriage cross-culturally has to be vague in order to be universally applicable, some contemporary anthropologists treat such definitions as useless for all intents and purposes. So, the question becomes: is marriage truly universal? I am going to be a bit craven and skate over the technical details of that discussion very lightly. Each part of the old definition of marriage, which was hammered out from the 1920s to the 1950s, has been contested, so that now there is not a single component of the definition that is rock solid. At one time the definition specified that the two partners were male and female, but same-sex marriages, ghost marriages, and other such twists destroyed that part. The definition did not specify that both partners had to be alive, which is fortunate because marriage to a dead person has been formally legalized in France, China, and Sudan. In France, for the marriage to proceed, certain stringent requirements must be met, but there is an acceptance by the government that there can be good reasons for marriages to dead people. Remember, we are talking about French people in the 21st century – not 19th century Polynesians or sub-Saharan pastoralists!
In some cultures, while marriage to a dead person is far from common, it can occur when the legitimacy of offspring is vital for inheritance. Take the hypothetical case of a woman who is about to be married and is pregnant, but her fiancé dies before the wedding. If she gives birth without being married, the child is illegitimate, and if the laws of the culture require that only legitimate children can be the father’s heirs, the child is disinherited. If a marriage takes place despite the groom being dead, the newborn becomes legitimate in the eyes of the law, and can inherit the father’s estate. Legitimacy and inheritance were also once seen as critical components of marriage, but these too have since been contested using examples of cultures where inheritance is not tied to legitimacy.
Rather than get bogged down in the intricacies of inheritance cross-culturally, and other issues related to the definition of marriage, I am going to take a rather cavalier approach to what marriage is that avoids the claim of absolute cultural universality, yet gets to the heart of the matter for current purposes. The vast majority of cultures in the world have some kind of ceremony that legally binds two people (commonly a man and a woman) permanently into a social unit, with resultant rights and expectations. There are plenty of quibbles here, but it will get us moving forward. The first task is to establish some vocabulary so we can see the array of possibilities. Here is a chart to help:
| poly – (many) | – gamy (marriages) |
| bi – (two) | – gyny (wives) |
| mono – (one) | – andry (husbands) |
This is not entirely workable as a mix and match schema, but there are many words we can form by attaching a prefix in the left column to a suffix in the right. There is a symmetry in the chart in that poly- can be attached to all the suffixes, and -gamy can be attached to all the prefixes. The importance of the table is that it makes a crucial distinction between types of polygamy (multiple marriages simultaneously): polygyny, which describes a man with multiple wives, and polyandry, which describes a woman with multiple husbands. They are very different kinds of polygamy! More cultures permit polygyny than outlaw it, while polyandry is very rare. Statistically (inasmuch as we can get accurate data), monogamy is the most common type of marriage even where polygamy is legal.
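The same prefix-and-suffix game can be sketched in code, but here the point is the asymmetry: only some combinations are standard words. The whitelist below is my own hand-picked list of the conventional terms, so treat it as illustrative.

```python
# The marriage-term table is only partly mix-and-match: poly- combines
# with every suffix, and -gamy with every prefix, but a word such as
# "monoandry" is not in standard use. The whitelist is illustrative.

prefixes = {"poly": "many", "bi": "two", "mono": "one"}
suffixes = {"gamy": "marriage(s)", "gyny": "wife/wives", "andry": "husband(s)"}

standard_terms = {"polygamy", "polygyny", "polyandry", "bigamy", "monogamy"}

for prefix, prefix_gloss in prefixes.items():
    for suffix, suffix_gloss in suffixes.items():
        word = prefix + suffix
        if word in standard_terms:
            print(f"{word:10} {prefix_gloss} {suffix_gloss}")

# Output: polygamy (many marriages), polygyny (many wives),
# polyandry (many husbands), bigamy (two marriages), monogamy (one marriage).
```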
In the Biblical record, polygyny is normal from Genesis down to the earliest kings of Israel and Judah as recorded in the books of Samuel and Kings, although historians and archeologists do not accept the Bible’s documents as historically accurate. In fact, they even deny the existence of many (though not all) of the characters from Adam to the later kings of Judah. But the declaration that notable patriarchs and kings such as Jacob, David, and Solomon had many wives reflects normal practice in the Middle East in ancient times. Having many wives was a symbol of power, prestige, influence, and wealth, and is not a great testament to the modern evangelical assertion that “one man and one woman” is the “Biblical definition of marriage.” One very good question to ask is why monogamy is the most common form of marriage in societies that allow polygyny. There is, of course, no single answer to the question, but financial considerations are a common factor. Having multiple wives can be expensive.
You will note that love is conspicuously absent from the anthropological definition of marriage because it was not even the common denominator in the history of marriage in Europe, let alone cross-culturally. Nor are sexual privileges and exclusivity universal by any means. The common denominator is that the bond of marriage has social consequences, which, of course, is not saying a whole lot. There are all kinds of social bonds, and marriage creates one of them – with different properties in different cultures. In cultures such as the US and Britain, the vast majority of marriages are between people who have no biological relationship. Anthropologists call bonds and relationships created by blood kinship consanguineal, and those created by marriage affinal (and affinal relatives are called affines). Both consanguineal and affinal relationships carry legal and/or social obligations, but the relationships begin in different ways. Consanguineal relationships begin at birth and are based on who your parents are; you have no say in the matter. Affinal relationships are formed by a social and/or legal contract, and there is some room to maneuver.
A marriage is usually established by a rite of passage. Most rites of passage that are widespread cross-culturally concern life stages, particularly birth, puberty, and death. You have no choice about going through these life stages, although the nature and importance of the rites associated with them vary considerably. Marriage is the one rite of passage that is virtually universal, that is not associated with a biological life stage, and about which the participants have some choice. In consequence, marriage rites are pointedly imbued with symbolism concerning the nature of the transition that is taking place, and the bonds that are being formed.
Let us focus, for the moment, on the Christian wedding ceremony as it existed traditionally in the West, particularly in Britain and the United States. Nowadays people do all manner of things and consider it a marriage ceremony. You can hang out on the beach in casual clothes and bare feet, and get married with the waves lapping at your feet, or you can do it while skydiving. It has all been done. Even so, at the core of Western weddings are a number of fundamental concepts that have existed for hundreds of years, that are not affected by the fact that the bride is wearing a parachute instead of a white tulle gown.
At heart, the wedding ceremony is a rite of passage, but who is undergoing transition? At first blush we want to say that they are both getting married. Yes, they are. But who is making all the changes? Who is going through transition? A moment’s reflection shows that traditionally it is the woman, not the man, going through transition, and even though now things are somewhat different, the symbolism of this fact endures. The key question is why does the symbolism endure? I will get to that in a minute. I am going to leave aside the myriad variations of wedding ceremonies around the world, and focus on one which should be reasonably familiar.
The absolutely traditional Protestant and Catholic marriage ceremony, performed for centuries in the United States, Britain and the British colonies, quite clearly removes a woman from the control of her father and places her under the control of her husband (and until quite recently this was a legal and financial fact in these and many other countries). Let’s look at some of the details of that traditional ceremony.
- The wedding takes place in the bride’s home town – that is, her father’s residence – conventionally in the church where she grew up.
- She begins the ceremony with her father’s last name and ends it with her husband’s last name.
- She changes her status from Miss to Mrs. When she is Miss Smith she is the daughter of Mr Smith, and when she marries Mr Jones she becomes Mrs Jones. She never has a name in her own right, only a name (and title) in relation to a man. In strictly formal terms, if she marries James Jones, she becomes Mrs James Jones. The groom does not change names.
- The bride is the only member of the wedding party to wear white, the color that worldwide symbolizes transition.
- Traditionally the bride wore a wedding ring after the wedding and the groom did not. The placing of the ring on the bride’s finger during the ceremony did not have a counterpart for the groom.
- The bride is walked down the aisle by her father and handed over to the groom who is waiting for her. Until recently the service at that moment (or soon thereafter) had the words from the officiant – “who gives this woman?” Traditionally the bride’s father said, “I do” and then stepped back.
- The traditional Anglican vows were that each promised, in turn, to love and honor the other, but only the bride additionally promised to obey the groom. The groom made no such promise.
- The bride enters the church on the arm of her father, and leaves on the arm of her husband.
In a nutshell, the woman was being passed from one man to another. Nowadays the ceremony has been altered, but significant portions remain. Why has it not been completely overhauled? I would suggest that a powerful reason is that there is cultural resistance to complete equality of men and women, rather than the more usual argument that tradition has a habit of lagging behind reality. The resistance to change is not just about nostalgia. Women are still subordinate to men in many ways, and a good many men want to keep it that way. Let’s take the traditional sequence, step by step, and look at what has changed and what has not.
- The location of the ceremony is quite mutable these days, and the couple often has a say in where it is performed. There was a fad for a while of having weddings in exotic locations, making them feel as much like holidays as rituals. This fad is fading mainly because it can put a financial strain on the guests. Furthermore, couples these days live where they work, which could be a great distance from where either was raised. Thus, location is more about convenience than tradition.
- Changing family names for the woman still happens in the vast majority of cases in the English-speaking world. In the United States the percentage of women keeping the family name they were born with has hovered around 20% since the 1960s, when it first became an issue. As a small linguistic note, “maiden” name is a euphemism for “virgin” name, although that etymological fact is largely forgotten. Family name is a more neutral term for the name on your birth certificate. The excuse often offered for the name change for the bride, if one need be given, is that if the parents have two different names, their children have to choose which to take, or have hyphenated names. Sometimes, in the United States, a married woman hyphenates her birth name with her husband’s name, placing her birth name first, so that the husband’s name is still technically her last name, and the children take his name not the hyphenated name. There are many flavors, but 80% of women change their last names to their husbands’ on marriage.
- Titles are, to a degree, linked to name changes. A woman who does not change her last name cannot take Mrs as a title. There was a strong movement from the 1960s onward for all women to use the title Ms, and using Ms has become far more widespread than keeping one’s birth surname has. It is not really possible to cite statistics in this case because we are talking about daily usage, which can change depending on the people and the circumstances. Regardless, men do not have to make this choice – ever. A man’s title before and after marriage does not change.
- The bride wearing white has not changed. The meaning of the symbolism has changed a little though. White certainly still represents purity (and hypothetical virginity), but I seriously doubt that guests at a wedding these days think that the whiteness of the bride’s dress is a guarantee of her sexual innocence. Nonetheless, the fact that the wedding dress is white and very special, makes the bride stand out. Everyone comments on how the bride looks; the groom’s outfit is nowhere near as important.
- The post-war era saw the emergence of symmetry in ring giving in the wedding ceremony. Traditionally only the bride wore a wedding ring, usually alongside the engagement ring, but equality asserted itself here. I have no statistics, but I’d guess that the majority of men wear wedding rings now. The one-ring ceremony is making a comeback, however. Obviously, the traditional ring for the bride only was a symbol of ownership by the husband, a symbolism muted by having rings for both.
- “Who gives this woman?” was retained for quite some time in the Anglican prayer book, but the father’s response of “I do” was struck by the 1960s. Thereafter, the priest simply asked, and the father stepped back and sat down. Even the question is not used much any more, but getting rid of it entirely took some time. I do not expect that the tradition of the father walking the bride down the aisle to the waiting groom is ever going to change much, yet this is the clearest symbol of all symbols in the ceremony that the bride is changing ownership from father to husband. The social reality is certainly different, though. Why cling to the old symbols? The groom is not led in by his mother; he still just waits for his bride to be led in, making the bride the center of attention, and not the couple.
- The “obey” bit for women was struck from prayer books in the 1970s. The asymmetry was clearly anachronistic and had to go.
- As with the bride being handed from father to groom, there is still a sense that the woman always needs to be on the arm of a man. I have officiated at weddings where the bride walks down the aisle by herself, but they are rare.
It could be argued, and frequently is, that these symbols are merely symbols and mean nothing these days. Anthropologically speaking, there is no such thing as a “mere” symbol. Symbols have meaning, otherwise they are not symbols. Of course, symbols can change meaning, but they cannot lose meaning entirely. My argument is that they retain a great deal of their traditional meaning because that meaning lingers within culture. Women are not equal to men today. When they are fully equal the symbols will change. Let us consider what could be done to have a rite of passage that expressed gender equality.
The bride and groom could enter the ceremonial space as individuals from different sides, both wearing special clothes that distinguish them from the congregation. They could exchange symmetrically equal vows and rings, and then leave the ceremonial space together. In this way, neither bride nor groom is singled out as making a change. The question of names after the ceremony remains an issue. The most egalitarian solution would be for the couple to choose a new married name that they would both adopt after they were married. This is an option in some states in the US, but it rarely happens. There is also the question of titles. Mister and Miss could be co-opted for men and women who are not married, but then there would have to be a new set of titles for married men and women: perhaps, Master and Missus. So, Mister Blue and Miss Yellow are not married, but after they marry each other they become Master and Missus Green.
I hope you are not laughing too hard at these proposals, but you know they are not going to occur any time soon. They are not going to happen because the culture is not ready for the changes. Exactly how many Modern Groom magazines are there? How many groom’s stores? How big is the groom industry? There is, of course, a wedding industry concerned with the ceremony and its accompaniments as a whole, but the focus is still on the bride. Individual couples can have an egalitarian ceremony, and can make (some) choices about names after the ceremony and for their children, but individual acts of this sort do not change the culture as a whole. The changes that have occurred to downplay the gross gender inequality of the ceremony were made because they were so evidently outdated. The fact that to this day 80% of women take on their husbands’ family name by choice is instructive. They do not have to change names, as was once true, yet they consciously make the choice to change. The change follows a woman for the rest of her life. On many official forms, such as passport applications and other government documents, there is a space to list “previous names.”
Changing names after marriage creates a great many more problems than does keeping the same name. After marriage, a woman who changes her name has to get a new passport, driver’s license etc. and has to change her name on bank accounts, insurance policies, and whatnot. It is a gigantic rigmarole that the man does not have to go through. And, if she wants to return to her unmarried name after a divorce, she has to apply for a legal name change in most states in the U.S. and has to repeat the whole rigmarole of changing documents, yet again. Why do women still overwhelmingly want to change their names, and why do so many men insist on the change for their wives? There are no official records kept on men insisting that their future wives change their names, but anecdotal evidence shows that some men have strong opinions on the issue.
A major reason given for changing names concerns the naming of offspring. If the wife changes her name to her husband’s, there is no problem, but if both parents retain their family names they have to decide what the last names of the children are going to be. One solution is to give the children double-barreled names, combining the father’s and mother’s last names, but there are two weaknesses to this solution. The first is that the second “barrel” of a double name has historically held greater weight, and is, more often than not, the father’s family name. The more intractable problem concerns the naming of children whose parents have retained their unmarried names and both already have double-barreled names. Do the children have quadruple-barreled names? You do not need to be a whizz at mathematics to see what problems that “solution” would create after very few generations. There is a clear solution that would be gender neutral and which has both historical and cross-cultural precedent, but the culture as a whole has to desire the change.
In both Anglo-Saxon and Norse cultures in the Middle Ages, a person’s last name was not a family name but could be one of several designations. It could be one’s trade (Smith, Gardener, Forrester, Farmer), or one’s place of origin/residence (London, French, Townsend), for example. These were not family names but, rather, individual designations that were fluid. Another possibility was to have a last name that was an indication of one’s father’s or mother’s name. Father’s name has persisted in current family names like Johnson, Jackson, or Davidson, but mother’s name has all but disappeared; Megson and Babson are rarities. Patronymic naming of boys was once common. Under that scheme, a boy called David born to a man named Michael would be called David Michaelson. If he had a son called John, he would be called John Davidson . . . and so on. In Scandinavian cultures the suffix -dotter could be added for girls, but historically this custom was patronymic also (e.g. Hansdotter, Christiansdotter). However, it could also be matronymic (e.g. Helgadóttir). This custom suggests a logical solution to the name change issue.
It is perfectly feasible, and logical, for couples to retain their unmarried names upon marriage, and to assign patronymic names to their sons and matronymic names to their daughters. This system would privilege neither husband nor wife. Let us imagine that John and Mary marry. They have a son whom they name David and a daughter whom they name Emma. The son’s full name would be David Johnson and the daughter’s would be Emma Marysdaughter. The matronymic can be a bit of a mouthful, but Scandinavians manage. David’s male offspring would be surnamed Davidson, and Emma’s daughters, Emmasdaughter. Problem solved – sort of. The first snag would be getting everyone to agree to the change and then making it. There would be tremendous resistance from all quarters, including the 80% of women who like making the name change under the current system (or who are, at minimum, agreeable to it), men who like how the current system makes them feel, people who are proud of their family name, and so forth. The second snag is that this system would make genealogical inquiry extremely difficult because there would be no family name to anchor research. For these and a host of other reasons, such a switch is not going to happen.
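As a sketch of how the proposed scheme would run over the generations, here is a small Python illustration. The names Anna and Tom below are hypothetical spouses I have added to carry the example forward; the suffix spellings follow the English forms used above.

```python
# A sketch of the gender-symmetric naming proposal: sons take a
# patronymic from the father's given name, daughters a matronymic from
# the mother's given name. Names and suffix spellings are illustrative.

def child_surname(father_given: str, mother_given: str, child_gender: str) -> str:
    if child_gender == "M":
        return father_given + "son"
    return mother_given + "sdaughter"

# John and Mary's children:
print(child_surname("John", "Mary", "M"))   # Johnson -> David Johnson
print(child_surname("John", "Mary", "F"))   # Marysdaughter -> Emma Marysdaughter

# The next generation follows the same rule (Anna and Tom are
# hypothetical spouses of David and Emma):
print(child_surname("David", "Anna", "M"))  # Davidson
print(child_surname("Tom", "Emma", "F"))    # Emmasdaughter
```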
Perhaps more than anywhere else in culture, marriage customs symbolize and reinforce status and roles based on gender. Let us now look at a classic description of marriage among the Nuer by E.E. Evans-Pritchard (1951). There are some cautions to bear in mind when using Evans-Pritchard’s ethnographic descriptions of the Nuer. First, they are old (from the 1930s and 40s), but they are still considered classics in social anthropology. Second, they cannot be taken as descriptions of some imagined “pristine” Nuer culture. The Nuer were heavily affected by European colonization in the nineteenth century, and Evans-Pritchard represented the colonizers when conducting fieldwork. Third, Nuer marriage customs vary as widely as European ones, so I am being somewhat simplistic. On the other hand, I am simply taking Evans-Pritchard’s descriptions, and not getting involved in his analyses. I would also note that Evans-Pritchard himself quite openly stated that understanding the culture and minds of indigenous Africans was impossible. All analysis is going to reflect the culture and interests of the anthropologist and should be treated as such.
The Nuer are a Nilotic, sub-Saharan people living mostly in the Sudan whose traditional mode of life is herding: cattle primarily, as well as sheep and goats. Every part of a cow has a domestic use after it is slaughtered, as do all of its products while it is alive: milk, blood, urine, dung. Cows are central to daily life and social relations, and are fundamental to ritual and religious activities. A man’s wealth is measured in terms of the number of cattle he owns, and the Nuer language is littered with words that refer to cows.
Nuer social organization is based on lineages that follow the male line (called patrilineages), and one of the longstanding questions in anthropology is how a culture based on patrilineages ensures that a boy born to a woman who belongs to a different patrilineage from his father’s becomes a full member of his father’s patrilineage and not his mother’s. The ceremonies surrounding marriage are of central importance to the Nuer in this regard. Unlike European weddings, Nuer marriage ceremonies center on members of the bride’s and groom’s patrilineages, and not on the bride and groom themselves. Furthermore, the whole process of getting married is a long drawn out affair, with negotiations concerning the cattle to be transferred from the groom’s family to the bride’s family as the most elaborate centerpiece. By Evans-Pritchard’s account, a Nuer wedding has several distinct parts:
- Betrothal
- Wedding ceremony
- Consummation
- Firstborn
- Wife’s relocation
- Spoon ceremony
The entire process can take several years to complete, with each step along the way strengthening the union. The steps are not always carried out in the same order, nor are individual events within the steps. Many of the ritual components are ignored by the majority of people in attendance, although they must be completed.
Unmarried Nuer men and women have ample opportunity to meet prospective spouses at dances in surrounding villages, at which flirting and physical contact are normal and expected. Thus, in many cases, young men and women come to some sort of agreement about the possibility of marriage. Otherwise, men who want to find a wife circulate around villages dressed in a manner that indicates they are looking for a bride, accompanied by close friends or relatives. Either way, there comes a point when the couple determine that they wish to be married, and this decision initiates the betrothal.
Before the betrothal ceremony takes place, members of the bride’s and groom’s patrilineages have come to a very general agreement as to the number of cattle that will be given by the groom’s family to the bride’s family (generically called “bridewealth” by anthropologists) in the course of the marriage. For the betrothal ceremony, the bridegroom and his kin travel to the bride’s village with a token number of cattle (between 3 and 10 head), representing a kind of downpayment, or earnest money, for the full bridewealth. They tether the cattle outside the village, and enter in war formation, performing mock warrior activities. The bride’s male kin respond in kind, followed by general dancing, which continues well into the night. Before midnight, the bride’s father sacrifices an ox, most of the meat being distributed amongst the groom’s relatives. The sacrifice is accompanied by formal speeches from the bride’s kin; the groom’s kin are not involved. In the morning there is more dancing and feasting, the groom performs a ceremony with his bride’s mother, and then the groom’s party departs. Henceforth, the couple are referred to as husband and wife, and all the respectful attitudes owed to in-laws are observed, but the couple is prevented from having sex (as much as such prohibitions are possible). The marriage could still fall apart at this point.
Depending on the wealth of the groom’s family, the wedding ceremony can take place a few weeks after the betrothal, or several years may intervene. The bride’s brothers do not like long delays of this sort, because they are likely to want the bridewealth paid as quickly as possible so that they can use the cattle to get married themselves. The betrothal cattle cannot be used for this purpose until after the wedding because they have not been fully given, and if the marriage does not proceed, they have to be given back. Between the betrothal and the wedding, bridewealth negotiations continue between the bride’s and groom’s kin, and on the day of the wedding itself, the groom’s elder kin go to the bride’s village to continue bridewealth discussions. These involve not only the number of cattle to be given, but also how they will be distributed among the bride’s kin. Later in the day the groom and his younger kin and friends enter the village and there is a mock combat between them and the girls of the bride’s village. This is followed by a ritual between the groom and the bride’s mother, and then dancing commences.
Around the time the dancing commences, the elders on both sides signal their approval of the marriage, and the ritual specialist of the bride’s patrilineage delivers a long, formalized speech, followed by an equivalent speech given by the ritual specialist of the groom’s patrilineage. These specialists are not involved in the bridewealth negotiations since their role is purely ceremonial. Their speeches may or may not be followed by speeches from the bride’s and groom’s kin. Chanting and dancing follow, during which there is a small, but important, ritual involving the kin on both sides, led by the bride’s father. The bride’s kin sacrifice a wedding ox in the evening or on the following morning, and, as with the betrothal ox, the bulk of the meat is distributed among the groom’s kin. Some is eaten on the spot, but most is taken back to the groom’s village where it is shared out and eaten. This ceremony does not conclude the marriage, but it is overwhelmingly certain that the union will proceed from this point on. The bride still has the option to back out, however. It is rare for the bride to refuse to go forward, but it can happen if she has met another man she prefers; the marriage does not break down at this point over unsatisfactory bridewealth negotiations.
The third ceremony, the consummation, is pivotal in ensuring that the marriage will continue. Until this point, the bride can still attend dances and flirt with other men, but afterwards she is prevented from doing so by her kin, because subsequently her husband can accuse her of adultery and demand compensation. The groom’s kin can also create an ugly scene if she is found at a neighborhood dance after the consummation. The consummation formalizes the marriage, even though there is probably still bridewealth outstanding, and there are important formalities to follow.
The groom’s young kin and friends go to the bride’s village and take her and her female kin and friends back to the groom’s village. There are several rites and activities, performed primarily by the women of both villages, including a formal “consummation” of the marriage (initially a ritual). The bride is not expected to be a virgin, and may even be pregnant at this point. Nonetheless, the couple retires to a hut amid ritual activities, and later emerges for more ritual. Three rites must be performed afterwards: a sacrifice of an ox by the groom’s kin, a ritual hand washing by the bride and her friends, and the shaving of the bride’s head. One of the groom’s kin shaves her head, and she is stripped of all her clothing and ornaments. She is given clothing and ornaments by the groom’s kin, and is now a member of the groom’s patrilineage, and no longer of her father’s patrilineage. In all legal and social respects she is now a wife, but the marriage proceedings are not concluded. However, from this point on, the bridewealth cattle belong to the wife’s natal patrilineage (her father’s kin) and need not be returned if things fall apart later.
After the consummation, the wife returns to her village where her parents prepare a hut for her. However, she generally sleeps with her unmarried female kin except when her husband visits. He is not expected to be seen in the village, and so comes at night secretly. He sleeps with his wife in her private hut and leaves before dawn. She then goes about her business, ostensibly as if she were not married. The husband’s nightly visits are supposed to be secret but, of course, everyone knows about them. If he is caught in the village after dawn he has to forfeit his spears to his mother-in-law, which is shameful. The birth of a child is eagerly anticipated, not least because the bridewealth cattle, while now owned by the wife’s patrilineage, cannot be redistributed without complications until a child is born. When a first child is born, the mother and child remain in the wife’s village until it is weaned, although it is taken once to the husband’s family for a domestic ritual. When the child is weaned, the husband goes to his wife’s village and asks his parents-in-law to take his wife back to his village. This is a formality, and is never denied even though it is likely that some of the bridewealth has not been paid. Her parents give her a horn spoon and gourd symbolizing that she will now eat porridge in her husband’s village, and she returns with him. Thus, the marriage is finally completed.
Note that among the Nuer, many years elapse between the beginning of the wedding rituals and the end, and the bride and groom are rarely the center of attention. Key events revolve around cattle and patrilineages, not the couple being married. In my chapter on reciprocity (chapter 00), I talked a little about bridewealth, and it should be clear from my brief description of Nuer marriage that paying bridewealth is not like buying a pack of cigarettes. We are talking about years of haggling and disputing, and years of the wife living in her parents’ village after she is married. As Evans-Pritchard asserts, getting inside the mind of a Nuer is impossible, and looking upon bridewealth as simply buying a bride is certainly ethnocentric. Nuer marriage involves numerous steps and countersteps in ritual, sacrifice, gifts, and general activities. There is not complete symmetry in the steps, but it is also not fair to think of them in terms of reciprocity, delayed or otherwise. You might think, instead, of acts of transition accompanied by negotiable obligations. If you said that the groom’s family compensates the bride’s family for the loss of a daughter with cattle, you would be closer to the mark. Compensation is not the same as purchase.
Your job now is to compare Nuer marriage and marriage customs that you are familiar with. How are they the same and how are they different? We cannot generate a universal definition of marriage out of only two disparate examples, but we have a starting point. Both are concerned with creating a bond that did not exist before, and which is expected to be permanent. Offspring are expected to be one of the outcomes of the union – probably more so among the Nuer than in Anglo-American culture, but, nonetheless a significant factor. What else? I have actually left out major components of Euro-American weddings, including the wedding reception and the honeymoon, as well as the engagement, showers, bachelor party, rehearsal dinner and so forth. When you add them all together the whole thing can be as protracted and complex as Nuer weddings.
Chapter 9: Forbidden Fruit: Incest
All cultures have some form of incest rules. They are not the same from culture to culture, however. It is never correct to say, “Such-and-such culture practices incest.” No culture in the world condones incest. Incest taboos (of some sort) are completely universal and always have been. The correct statement should be, “Such-and-such culture practices incest according to the rules of my culture.” The ancient pharaohs of Egypt could marry and have children with their sisters. Whether this actually happened, or how often, is a matter of debate because the records are sketchy, and kinship terms are not always clear. But we do know that Cleopatra VII (yes, that Cleopatra) married both her brothers. Whether she had sex with them is doubtful, and if she had, it would not have been incest, because it was legal. The most important starting point, which even anthropologists get wrong sometimes, is that incest rules concern sex, not marriage. The two are entangled, of course, because marriage typically involves sex (though not always), but incest is incest, whether the couple is married or not. Anthropologists tend to conflate the two because incest laws are commonly expressed in documents, or rules, that define legal and illegal marriage partners, not sex partners. The sex part is implied when talking about marriage rules, but a distinction needs to be made to clarify what the rules are actually about.
There are many speculations concerning why incest rules exist, but none is totally satisfactory. Sigmund Freud famously speculated that incest rules are the cornerstone of society, and he may have had a point about that even though his hypothesis is based on a fairy tale that he conjured up out of nowhere. The fact that some kind of incest taboo is a cultural universal tells us that incest rules are very important to humans. We can take that for granted. Food and sex are necessary for the survival of humanity, so it is no surprise that both are surrounded in every culture by complex rules and taboos, written and unwritten. I deal with food taboos in another chapter (chapter 00). Sex is my topic here.
There are four common reasons given for the universality of incest taboos, and the first two are quite widespread in popular consciousness (to the point of seeming like “common sense”):
1. Genetics
2. Natural aversion
3. Family disruption
4. Social cohesion
The most common popular reason given for the existence of incest rules is the genetic one (followed by, or coupled with, natural aversion). Practice incest and you will produce deformed babies. This is, quite simply, false, and misleading as well. Here we must distinguish between incest and inbreeding. Incest can be defined as sex between two people who are perceived as too closely related for relations to be permissible, whereas inbreeding is the coupling of individuals with strong biological links. Let’s be clear about what is meant by “perceived closeness” when it comes to incest. In some societies sex between two people may be classified as incest even though they are not biologically related at all, because they are perceived as being too closely related for sex between them to be acceptable. Here are the forbidden sexual relations laid out in the Qur’an for a Muslim man:
- his father’s wife (whether his mother or not), his mother-in-law, a woman from whom he has nursed and the children of this woman
- either parent’s sister
- his sister, his half-sister, a woman who has nursed from the same woman as he, his wife’s sister while he is still married
- the child of a sibling
- his daughter, his stepdaughter (if the marriage to her mother has been consummated), his daughter-in-law
The perceived closeness in this list includes many blood kin, but it also includes people related by marriage, and people who are not necessarily kin at all. This fact by itself rules out the genetic argument as a blanket justification.
The genetic argument was made famous by charting the genealogy of Queen Victoria, who carried the gene linked to hemophilia B and passed it down to a number of boys in the extended royal family (see figure 00), in which cousin marriage was common (a kind of inbreeding that was not incestuous). Hemophilia B is a recessive trait that is carried by a gene on the X chromosome. Women have two X chromosomes, so if they have the hemophilia gene on one X chromosome and a normal gene on the other, they will be “carriers” of the gene but will not have hemophilia, because the gene has to be present on both chromosomes for hemophilia to occur. The normal gene “blocks” the recessive gene. Men have an X and a Y chromosome, and because the Y chromosome is so much shorter than the X chromosome, it does not have paired sites for all the genes on the X chromosome. So, if a man inherits the hemophilia gene on his mother’s X chromosome, he will not have a matching normal gene on the Y chromosome from his father to block it, and so will have hemophilia.
Figure 00 Queen Victoria’s descendants.
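For readers who like to see the odds spelled out, here is a minimal sketch of that inheritance pattern in code (my own illustration, with made-up labels such as “X_H” and “X_h”; it is not a piece of genetics software). It simply lists the four equally likely combinations a carrier mother and an unaffected father can produce.

```python
# A minimal sketch of X-linked recessive inheritance as described above.
# "X_H" = an X chromosome with the normal gene; "X_h" = an X chromosome
# carrying the hemophilia allele. These labels are shorthand for this
# illustration only.
import itertools

mother = ["X_H", "X_h"]   # a carrier, like Queen Victoria
father = ["X_H", "Y"]     # an unaffected father

for from_mother, from_father in itertools.product(mother, father):
    child = (from_mother, from_father)
    if "Y" in child:
        # A son has no second X, so a single X_h is enough to cause hemophilia.
        sex = "son"
        status = "hemophilia" if "X_h" in child else "unaffected"
    else:
        # A daughter needs X_h on both X chromosomes to be affected;
        # one copy makes her a carrier, like her mother.
        sex = "daughter"
        if child.count("X_h") == 2:
            status = "hemophilia"
        elif "X_h" in child:
            status = "carrier"
        else:
            status = "unaffected"
    print(f"{sex} {child}: {status}")
```

The four outcomes are equally likely, so on average half the sons of a carrier mother have hemophilia and half the daughters are carriers, which is why the condition kept reappearing among the boys of the extended royal family.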
European noble families are known historically for their inbreeding, and certain genetic problems show up because of it. So let’s look more closely at inbreeding itself. Figure 00 charts degrees of consanguinity. You count the generations up from yourself to the common ancestor that you and a proposed spouse share, and then count the generations down from that ancestor to the spouse. These degrees correlate roughly with how much DNA you are likely to share with a relative, although the correlation is not mathematically exact.
If we plot parts of this chart on a table showing degrees of relationship and expected percentage of shared genes we get:
| Degree | Relationship | Shared Genes |
|--------|--------------|--------------|
| 0 | identical twins | 100% |
| 1 | parent–offspring | 50% |
| 2 | full siblings | 50% |
| 2 | grandparent–grandchild | 25% |
| 2 | half siblings | 25% |
| 3 | aunt/uncle–nephew/niece | 25% |
| 3 | great-grandparent–great-grandchild | 12.5% |
| 4 | first cousins | 12.5% |
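The percentages in the table follow a simple rule of thumb: each generational link halves the expected share of DNA, and relatives connected through two common ancestors (both parents, or both grandparents) get double the share of those connected through one. Here is a minimal sketch of that arithmetic (an illustration of the rule of thumb only; real percentages vary around these averages because chromosomes recombine unpredictably):

```python
# A rough illustration of the expected-shared-DNA arithmetic behind the table.
# "links" is the degree column from the table: the number of parent-child steps
# separating the two relatives (counted up to the common ancestor and back down
# for collateral kin). Each link halves the expected share; relatives who share
# two common ancestors (e.g. both grandparents) get double the single-ancestor figure.

def expected_shared(links: int, common_ancestors: int = 1) -> float:
    """Approximate expected fraction of DNA shared between two relatives."""
    return common_ancestors * 0.5 ** links

examples = [
    ("parent-offspring",        expected_shared(1)),     # 50%
    ("full siblings",           expected_shared(2, 2)),  # 50% (two shared parents)
    ("grandparent-grandchild",  expected_shared(2)),     # 25%
    ("half siblings",           expected_shared(2)),     # 25% (one shared parent)
    ("aunt/uncle-nephew/niece", expected_shared(3, 2)),  # 25%
    ("first cousins",           expected_shared(4, 2)),  # 12.5%
    ("third cousins",           expected_shared(8, 2)),  # roughly 0.78%, a figure we will meet again
]

for relationship, share in examples:
    print(f"{relationship}: {share:.2%}")
```

The identical-twin row is the one exception to the rule of thumb: twins share all their DNA, with no halving at all.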
There is no precise definition of inbreeding, in humans or in animals; rather, there are degrees of inbreeding, calculated by the approximate percentage of shared genes between the couple. How much inbreeding is tolerated depends on the culture. In the Catholic Church, up to four degrees of separation is too close, but in the Church of England only up to three degrees is too close. The former deems first cousin marriage incestuous; the latter does not. In this example, inbreeding and incest are interconnected, but there are many other kinds of incest.
Connecting incest rules with the potential problems that can arise from inbreeding is fraught with difficulties. Certainly, inbreeding is associated with the concentration of genes, but this can be a good thing or a bad thing. Long before anything was known about genes, breeders were inbreeding stock deliberately to produce traits they wanted. This is the whole idea behind producing pedigree cattle, dogs, cats, or pigeons. The trouble is that inbred stocks can also carry genes you don’t want, and inbreeding can result in lower rates of fertility and shorter life spans. Either way, genetic problems with inbreeding, though a factor, cannot be a major reason for incest taboos.
Nor can you plausibly speculate that cultures, long before anything was known about genetics, looked at the offspring of inbred couples, decided that inbreeding was a bad idea because of the unhealthy children it produced, and so invented an incest taboo. Inbreeding cannot create genetic problems out of nowhere. For inbreeding to result in traits you don’t want, the parents have to carry the genes for those traits in the first place. You can’t produce children with six fingers unless someone in the gene pool has the gene for six fingers and descendants persistently inbreed so that the gene is concentrated. Furthermore, first cousin marriage by itself does not generally concentrate genes to any dangerous extent.
You can point to hemophilia in Queen Victoria’s descendants, or to the famous Habsburg jaw found throughout the royalty of medieval and Renaissance Europe into modern times, as examples of problems connected to persistent first cousin marriage, but these are rarities. First cousin marriage is common in many cultures around the world, and physical anomalies are unusual in them, because first cousins share only 12.5% of their genes. The Habsburg king Charles II of Spain not only had a Habsburg jaw so severe that he had difficulty chewing, but was also mentally disabled and infertile. He was not, however, just the product of repeated first cousin marriage. His pedigree was so deeply inbred that his ancestor Joanna of Castile shows up in his family tree fourteen times, and it has been speculated that his genetic makeup was much the same as if his parents had been full siblings.
Inbreeding of one sort or another is preferred in cultures, or segments of cultures, where families have resources they would rather protect than share. Marry outside a group and you dilute your resources (negative), but you broaden your gene pool (positive). Marry inside a group and you concentrate your resources (positive), but you run the risk of concentrating your genes, which could be good or bad depending on what your gene pool looks like. It becomes a balancing act. In any case, these examples concern incest that is associated with inbreeding, but incest rules apply to many couples who are not biologically related at all. Therefore, even if genetics plays a part in establishing incest rules, it is not the whole story by a long shot. Why is sex between step-siblings counted as incest in many cultures, when they have no biological relationship whatsoever?
In many cultures it is incestuous for men and women to have sex if they have been raised in the same household, even if they are not biologically related. Sigmund Freud, in Totem and Taboo (1918), argued that the formulation of incest rules was the beginning of human society. He claimed that incest rules were the foundation of all cultural rules. His line of reasoning was based on an imaginary prehistoric patriarchal family in which the father kept his daughters for his own sexual partners and so his jealous sons, who wanted to have sex with their sisters, killed him. The sons then felt so guilty about what they had done that they vowed to seek sexual partners outside the family. While this model is way too speculative for modern scholars, Freud was on to something important. Sex within families can lead to jealousies.
The family disruption theory is a functional explanation for the incest taboo. By functional I mean that the theory assumes that the taboo exists because it has positive consequences for society. This theory holds that sexual relations between members of the same, or related, households can cause jealousies and rivalries that are destructive for the households involved. Therefore, such relations are taboo. Consider, for example, the mother-son incest rule, which, as far as I know, is universal and strongly held. Leviticus requires the death penalty for mother-son incest, and in Greek legend, when Oedipus discovered that his sexual partner, Jocasta, was his mother, he blinded himself and she committed suicide.
Among the possible reasons for a mother-son incest taboo is the fact that sexual relations between a mother and her son would drive a wedge between the son and his father. The son, being the weaker of the two males, might then be in great peril of harm from his father. Freud had much to say about this hypothesized pattern in his writings on the Oedipus complex, of course. According to family disruption theory, a society that allows mother-son sexual relations will quickly see its sons marginalized or destroyed by jealous fathers, and will not thrive. It is possible to extend this argument to include other household members. Brother-sister sexual relations, for example, might set up excessive rivalry among the siblings left out of the relationship. For Freud’s theory to apply, you must begin from the stance that family members are naturally attracted to one another, and it is incest rules, rigorously applied, that push against this natural attraction.
There is some anecdotal support for Freud’s position, although it is not conclusive. There is a hypothesis known as Genetic Sexual Attraction (GSA) that proposes that people who are strongly biologically related are naturally sexually attracted to one another. The main “evidence” for this hypothesis is the existence of marriages or relationships between siblings who were raised separately, met as adults without being aware that they were siblings, and were instantly attracted on meeting. This is pretty flimsy evidence, and can easily be dismissed as the result of random chance. The family disruption hypothesis contends that family members are sexually attracted to one another, but that incest rules push against this attraction for the sake of family cohesion: a classic case of arguing that nurture is used in culture to defeat nature (and it carries with it all the problems that I raised in chapter 00 about looking at nature and nurture as competitors). Freud based his belief that boys are sexually attracted to their mothers on his own experience of being aroused by seeing his mother getting dressed. One case of a clearly confused man is hardly evidence for all of humanity.
On the other side of the coin, some people have argued for a family aversion theory, which holds that people who are raised together are naturally disgusted by the idea of having sex with one another. This is usually called the Westermarck Effect after its principal early proponent, Edvard Westermarck, who set it out in his 1891 book, The History of Human Marriage. This theory does have wider applicability than the genetic theory, because it applies to all family members, whether they are biologically related or not. But it also has several weaknesses. The simplest counterargument is that if people were naturally averse to sex with family members, there should be no need for incest laws – certainly not ones as widespread and as strongly enforced as the ones we find globally. There must be more to incest rules than that.
The Westermarck Effect has its supporters and detractors because the clinical evidence is complex and inconclusive. For example, long-term studies of children raised together communally on kibbutzim in Israel showed a much lower than average tendency to marry one another, but there are many explanations for this phenomenon besides the Westermarck Effect. Studies have focused on marriage rather than sexual attraction, so it is possible that children raised together on kibbutzim are, indeed, attracted to one another, but that social pressures steer them away from forming attachments. Because we are dealing with humans and not lab rats, we cannot ethically perform controlled experiments. We have to use data collected by the vagaries of circumstance, and all manner of factors can creep in that taint the data and compromise conclusions.
Freud gets a tiny amount of support from recent studies that have suggested that people are attracted to partners who look like their parents (or themselves). It is possible that this evidence has more to do with narcissism than genetic attraction, because one of the key experiments involved asking subjects to rate photographs of people for attractiveness, and several of these photos were digitally altered images of the subjects themselves. Subjects routinely rated the altered images of themselves higher than the other images. So, we may be attracted to people who look like our parents because our parents look like us. The hypothesis here is that we are attracted to family members because they look like us, but incest rules kick in to prevent inbreeding or family disruption, so we look farther afield, yet still end up marrying stand-ins for our parents. The clinical evidence, though inconclusive, does seem to support the claim that chosen sexual partners resemble parents more often than would be expected by random chance.
All of these speculations suffer from the same weakness: incest rules frequently extend well beyond people in the same household, or even the same community, as the Muslim example shows. Catholic law, for example, long prohibited sexual/marital relationships between even third cousins without a special dispensation. Such taboos cannot possibly be linked to jealousy and household disruption, nor to genetic sexual attraction. Third cousins share approximately 0.78% of their DNA, and are not likely to be raised in the same or related households. I barely know who my second cousins are, let alone my third cousins.
The fourth general explanation for incest taboos, namely that they force ties with outside groups and thus create strong networks beyond the immediate family and kin, while also minimizing inbreeding to a degree, is favored by many anthropologists, although it comes in a number of flavors. This model stresses the fact that communities are stronger when they have multiple ties to other communities. If a community has no outside ties, it has no one else to call on in times of stress such as famine, drought, or warfare. There are many ways to forge links with other communities, such as trade relationships, but marriage bonds between members of different communities are arguably the strongest and longest-lasting way to unite the two. One way to promote marriage outside the community is to ban sexual relations, and hence marriage, within it. These approaches to incest taboos are favored by anthropologists because they emphasize social factors over biological and psychological ones. This fact should ring warning bells for you, however. Maybe there is a bias here. Do anthropologists favor social explanations simply because they are social scientists? Very good question. When all you have is a hammer, does everything you see look like a nail?
Before I get into the meat of things, I have to introduce a new term: exogamy. It is a combination of the prefix “exo-” (outside of) and the suffix “-gamy” (marriage), and has the counterpart “endogamy” (“endo-” means “inside of”). Exogamy and incest are related, but not identical, terms. They often get a bit muddled in anthropological research because many languages do not have different words for “incest” versus “exogamy.” Rules of exogamy and endogamy apply to social groups, and the members of these groups need not have any biological or kin ties with one another other than membership in the group. Members of clans, for example, all claim descent from a common ancestor who may be an animal or god or an inanimate object. If that clan practices exogamy, its members must marry outside the clan. Marrying a clan member may not be classified locally as incest, but it is prohibited nonetheless. Here we often have a problem of language because clan members may conceive of themselves as biologically related even though they are not related in Western genetic terms. However, some marriages outside the clan may also be classified as incestuous even though they are exogamous. It is possible, for example, for an aunt or uncle to be a member of a different clan from your own. If aunt or uncle marriage is prohibited by the laws of incest, you cannot marry one of them even though you would be obeying the laws of exogamy.
Anthropologists interested in kinship often speak about incest rules in terms of building alliances, but are they talking about incest or exogamy? They do not always make a clear distinction. Freud’s ideas about incest rules and social cohesion are also a bit vague. His simple notion was that incest rules forge bonds between otherwise unrelated families, but rules of exogamy are usually much better defined than that, and the alliances that they create can be much more targeted. The leader in what has become known as “alliance theory” was the French anthropologist Claude Lévi-Strauss, who claimed that incest taboos are, in effect, prohibitions against endogamy, and that through exogamy ties are built between lineages – but not just any old lineages. The basic theory, as laid out in The Elementary Structures of Kinship (1949), takes off from Marcel Mauss’s study of reciprocity in The Gift (see chapter 00).
Through exogamy, households or lineages form relationships through marriage that strengthen social solidarity. Lévi-Strauss views certain kinds of marriage as an exchange of women as “gifts” (in the broadest sense) between two social groups. Lévi-Strauss argues, following Mauss, that,
exchange in primitive societies consists not so much in economic transactions as in reciprocal gifts, that these reciprocal gifts have a far more important function than in our own, and that this primitive form of exchange is not merely nor essentially of an economic nature but is what [Mauss] aptly calls “a total social fact”, that is, an event which has a significance that is at once social and religious, magic and economic, utilitarian and sentimental, jural and moral. (Lévi-Strauss 1969:52)
Lévi-Strauss differentiates between what he calls “primitive” society and “complex” society in analyzing the nature and meaning of incest and marriage regulation, and this dichotomy is now seen as misleading or unhelpful. It is true that we can observe patterns of intermarriage in cultures outside the contemporary Euro-American world that show much more regulation than we are used to, but dividing cultures into primitive and complex is inaccurate and misses some key points.
Lévi-Strauss was especially interested in persistent cousin marriage and how this created either direct or indirect exchanges that united distinct lineages. Going into too much detail about Lévi-Strauss’s analysis will confuse you very quickly, so I will just give you a taste. If men in lineage A always marry women from lineage B, and men from lineage B always marry women from lineage A, Lévi-Strauss sees the practice as a form of exchange of women between lineages. Bonds are tighter if a man from lineage A marries a woman from lineage B and in return his sister marries his brother-in-law as diagrammed in Figure 00:
Figure 00
In the diagram, 1 marries 4, and 1’s sister (labelled 2) marries 4’s brother (labelled 3). In Lévi-Strauss’s terms, men of different lineages exchange sisters, thus uniting the lineages in a double bond expressed in the exchange. Both couples have children, and those children are cousins (actually double cousins, because their father’s sister is their aunt and their mother’s brother is their uncle). If ego (one of the children in the diagram) marries 6, and 5 marries 7, you have a system of cousin marriage that can continue indefinitely, constantly binding the lineages together. The system is driven by incest and exogamy rules, but they are carefully laid out so that a man is not free to marry just anyone he wants. The rule essentially says: “Marry someone outside your immediate family, but don’t go too far away.” That way inheritable resources are kept within a narrowly defined set of social groups. Something of this sort was going on within the royal families of pre-modern Europe.
I won’t get you caught up too much with the technicalities of Lévi-Strauss’s alliance theory. It gets very complicated, very fast, and you need not be troubled with the details. It has been challenged in a number of ways, primarily because it reduces incest and marriage to mathematical formulas, and realities on the ground are more convoluted than simple mathematics. What do I do if I want to marry a man’s sister, but I don’t have a sister to “give” in return, for example? Underneath such challenges to specific social theories of incest, however, is an important general idea which I will call “cooperation theory”: avoid sex with kin that are too close to you, biologically or socially, and you widen your network of people who you can count on in times of need. This theory sidesteps the problems of both biological (genetic) and psychological (family avoidance) theories, because it does not operate only on biologically related kin or people raised together. It sets up a class of people who you can already count on in emergencies and tells you to look outside that class for marriage partners.
Cooperation theory is obviously highly situational or circumstantial. Anthropologists argue that incest rules are most likely to be extensive (ruling out a wide circle of kin, not just the closest) in situations where having the widest possible group of related people to call upon in times of stress is critical to group survival. Conversely, in contexts where some groups have resources they wish to protect and not share with others, such as royal families, sex and marriage among close family relations are tolerated and sometimes promoted. On this view incest rules are not fixed, but change depending on social and environmental circumstances, which seems to be the case.
Cooperation theory suggests that incest rules require a delicate balancing act between the need to keep resources within the family and the need to have outside allies to count on when necessary. Finding sexual/marital partners far afield increases the human resources you can call upon, but it also diffuses your resources over a wider area. While you can call on all of your kin by marriage in crisis, they can also call on you when necessary. If, however, you find your partners close to home, you consolidate and protect your resources. But you limit the socio-spatial range in which you can call on others in a crisis.
The Babylonian Exile of the Judeans (later called Jews), from 587/6 to 538 BCE, provides an interesting case study. When Judah rebelled against the Babylonian king, called Nebuchadnezzar in the Bible (more correctly, Nebuchadrezzar II), he crushed the nation of Judah and deported its upper class – priests, scholars, and nobles – to Babylon, where they were settled. In Babylon they had a choice. They could all live together in isolation and maintain their culture and religion, or they could assimilate with Babylonians. Both courses were open to them. If they lived in isolation, they preserved what it meant to be Judean, but they were cut off from the riches of the empire. If they assimilated into Babylonian society, they gained the power of larger social networks, but lost their culture. Some did one, some the other.
It is my contention, following a number of Biblical scholars, that the narratives in Genesis, and much of the law, although transmitted for hundreds of years orally, were written down and/or assembled into sacred books during the Babylonian Exile. It is strikingly obvious that both the Genesis narratives and the law have much to say about sex, marriage, and incest. To be simplistic, I can reduce these laws about sex and marriage in the Torah to a single statement: “If you have limited options, marry close kin (even very close kin) rather than marrying a foreigner.” Marrying close kin preserves the culture, marrying foreigners dilutes it. The Judeans living in exile in Babylon who preserved their culture by remaining isolated, survived as a culture; those who assimilated were largely swallowed up and vanished (although some survived).
The question of whether the narratives in Genesis have any factual, historical basis is irrelevant. What matters is the significance the stories held for the people who told them at the time – and ever after. A good tale affects behavior whether it is true or not. Genesis is curiously silent about where Adam and Eve’s sons got their wives, because the logic of the situation was not important. When Cain was exiled after killing his brother, he wandered off, found wives, and started a lineage. If you follow strict logic, his wives must have been his sisters, but Genesis is not interested in logic. Cain got some wives – end of story. Later descendants of Adam and Eve are more intriguing.
Abraham married his half-sister (same father, different mothers), his son Isaac married his first cousin once removed (the child of his first cousin), and Isaac’s son Jacob married sisters who were his first cousins, as well as double cousins (related on both his mother’s and his father’s sides). When God destroyed Sodom and Gomorrah, and Lot and his two daughters thought they were the only people left on earth, the daughters got Lot drunk and had sex with him, producing powerful sons. Sure, they had to get him drunk, presumably because he was not happy with the situation; but the daughters were all right with it, and the Bible does not condemn the practice. Desperate times call for desperate measures. If you are living in exile in a strange land and your choice is to marry a foreigner or marry your sister, marry your sister. That way you preserve your culture, even though you might want to marry someone a little less biologically close to you, and even though creating ties with a powerful foreigner may seem attractive. Preserving culture trumps personal gain.
When the dust has settled, you can see that no single theory of the origins and purposes of incest taboos fits all the data. For me, the natural aversion theory is a non-starter. Why ban something you are naturally averse to? It’s like banning people from flying like Superman. The inbreeding-avoidance explanation and the explanation based on rules that force cooperation with people outside the immediate family are clearly the strongest contenders, and there is no reason to pit one against the other as mutually exclusive competing theories. They probably both play a part, with different weights in different cultures, depending on historical circumstances and current realities. This leads me to my key question, with a small twist: which people, or groups of people, do you consider “too close” to have a partnership with?
To answer this question I am going to propose an experiment, which strays a little bit away from incest. I am going to call it the Goldilocks experiment. On a sheet of paper place a dot in the center and label it EGO. Then draw a circle around it and label it TOO CLOSE. Then draw two more circles (see figure 00). Label the outer circle TOO FAR, and label the middle circle JUST RIGHT. Now fill in the circles. Who is too close – that is, who do you consider an incestuous partner? So, parents, siblings . . . who else? Next consider the “too far” circle. You could call this anti-incest if you like. Maybe you would place someone here who is too old, or certainly someone too young. Who else? Someone not of your ethnic or religious background? Someone who does not speak your language? As it happens, there are a few cultures that require marriage to someone whose native language is not your own (linguistic exogamy). The Goldilocks group – just right – sits in the middle circle. No one is going to see this diagram, so you can be brutally honest.
Figure 00
One of the strengths of cooperation theory is that it explains the wide range of incest taboos from culture to culture. The theory treats incest rules as a spectrum of possibilities and where you fall on this spectrum depends on a number of cultural circumstances. So here is another question for you: Do we need incest laws in the modern world given that we have no need to guard resources or seek allies via marriage alliances? Careful – this is not as easy a question to answer as you might think.
Chapter 10: Neither in nor out: liminal stuff
The word “liminal” is used often by anthropologists, but it is not common in everyday speech because it was invented in the twentieth century as a technical term in social science. It is an extremely important concept in cultural anthropology, especially in the analysis of rites of passage, and it is the one concept that my beginning students are quite often excited by above all others because it is so rich in application, yet so new to them. “Liminal” means on the boundary between two things (from a Latin word meaning “a threshold”). Whenever we create contrasting categories – day/night, black/white, true/false – there are always things that are neither one nor the other. Dusk is neither day nor night; it is in between. Anthropologists call the things, places, or times that sit on the border (threshold) between categories liminal. Liminal things can be both especially powerful and especially dangerous. Weird things might appear, or strange things might happen, at dusk (between day and night) or at midnight (between one day and the next). Being able to manipulate the liminal is a common source of power in all cultures.
Analysis of the liminal can wander all over the map, so I am going to start with a basic area that has been productive in anthropological theory, and work from there. Rites of passage have been the subject of anthropological inquiry since fieldwork was first made a part of the discipline, over one hundred years ago, in large part because they are so prominent. In general speech people talk about “rites of passage” in a sloppy way, meaning just about any activity that changes a person’s status, but in anthropology the term is much more rigidly defined. The “passage” referred to is the passage from one life stage to another. The two obvious ones are birth and death. The trinity, known as “hatch, match, and dispatch” by clergy, is completed with marriage. Marriage is the odd one out here because it is a matter of choice. Another rite of passage that is common in many cultures, but rare or muted in the UK and the US, is a puberty rite that signals the passage from childhood to adulthood. The puberty rite is less a matter of choice than marriage, and the timing is less rigid than birth and death. But it is an inevitable function of growing up, whereas marriage is an option.
The passage to womanhood for girls is much more easily marked biologically than the passage to manhood for boys because of the obvious onset of menstruation. In consequence, the puberty rites of passage for girls are usually markedly different from those for boys. For the moment, I will concentrate on boys. The term “rite of passage” was popularized by the French anthropologist Arnold van Gennep in his book Les Rites de Passage (1909). His analysis was augmented, and thereby made better known in Britain and the United States, by Victor Turner, who trained in social anthropology at Manchester University. I mention Manchester only because, when anthropology was coming of age in the first half of the twentieth century, certain universities were associated with key players in the discipline. The Manchester department of anthropology was founded by Max Gluckman, who had his own slant on how fieldwork and analysis should be carried out. There was a time, now passed, when you could tell who had trained an anthropologist, and which school he or she was from, simply by reading their work without a name attached.
Because the transition from boyhood to manhood cannot be clearly marked or signaled by a biological transformation, having a rite of passage to nail down the transition can be advantageous to a culture. Having members of the culture in an ambiguous state between childhood and adulthood for too long is not always good for them or the culture (although some people make a lot of money out of the ambiguity). Think about when you became a man or woman in your own eyes and in the eyes of your culture (or, if you are still young, when this will happen).
Now that I am 67 years old, there is no question that I am a man in everyone’s eyes, but pinpointing when I became a man is difficult. I tend to think of my first day as an undergraduate at Oxford University as my (soft) rite of passage, because in those days the college staff all referred to me as “sir” from the moment I arrived, and my tutors all called me Mr Forrest. This was the first time in my life that this had happened. I had passed some important milestones before that day, such as being able to drink alcohol in a pub legally, get a driver’s license, and vote, but the outward markers of adulthood at Oxford symbolized a critical turning point for me. Those markers did not extend to my dealings with everyone in society, however, and my experience was certainly not the norm. Some boys left school at 14 and became apprentices, some left after A-levels at 18 and started working, and many went their various ways on diverse paths. There was no single event that we all passed through and emerged from as men.
I believe that not having a puberty ritual that marks the transition to adulthood is problematic, but having one does not necessarily solve any problems in the modern world. The Jewish bar mitzvah ceremony that marks the coming of age for boys at age 13 brings with it certain adulthood rights and responsibilities within the Jewish temple tradition, but in the wider world it counts for very little. In Babylonian Aramaic, the term “bar mitzvah” (which literally means “son of commandment” or, more generally, “a person subject to the law”) refers to a male who has passed through a rite of passage, but in modern English it has come to mean the ceremony itself. Before the ceremony, the boy’s father must answer to God for the boy’s sins, but afterwards the boy is responsible himself, and the father is absolved. Also, after the ceremony the bar mitzvah can be counted in a minyan (the number of men needed for certain ceremonies), can be called upon to lead prayers or read from the Torah, can testify in rabbinical court, and can be married according to Jewish law.
Within the confines of relatively closed Orthodox communities, the bar mitzvah ceremony is a big deal, but in Reform and Conservative communities, not to mention among secular Jews, the ceremony counts for little. A 13-year-old is still a boy in the eyes of secular courts and the community as a whole. He cannot drink alcohol legally, vote, get a driver’s license, or get married. He is also, more often than not, going to be tried in juvenile court for criminal offenses, and his record will be sealed at the age of 18. Eighteen is a much more pervasive age of majority in the secular world, but even then there is a divide between being considered a man legally and being considered a man in reality.
When I taught English at a university in China a few years ago there was an exercise in their textbook that was highly revealing. It was designed to teach English words for different ages, such as, toddler, infant, baby, adolescent, middle-aged, and so forth. The task was to draw a table, listing all the words that describe the stages of life from birth to death, and putting a numerical value beside each word. The results were both instructive and hilarious. You might try this yourself. The first thing you will notice is that it is difficult, if not impossible, to mark a definitive age range for each category. Even “teenager” is problematic, although it shouldn’t be. The “teens” run from thirteen to nineteen. Simple enough. But “teenager” is more of a social category than a mathematical one. A married nineteen-year-old with a baby is hardly a teenager. What about “adolescent” or “toddler” or “middle-aged”?
My students puzzled over the exercise for well over 40 minutes and when they were finished, I went around the class asking each student for their lists. They were all different, although some were close. There was a certain amount of furious erasing and rewriting as student followed student. Then I put my list on the board, and I had to quickly stop them from erasing their lists and copying mine. It was so engrained in them that there was one right answer to questions, and the teacher was always right (and would test them at the end of the term) that they could not grasp the notion that words for life stages were flexible, and that no matter how many words you have, you will never be able to pin them down exactly. That is where rites of passage can help.
Some life transitions are not terribly important. Middle age to old age is something I think about, and because I draw what is sometimes called an old-age pension, you could call me “old” at 67. No one calls me “old,” though, and the only time I call myself “old” is when I jokingly refer to myself as an “old git.” The transition is slow and unimportant to me. The transition from boy to man was a big deal to me, and it would have helped if there had been a rite of passage to mark the transition. The problem, as seen with the bar mitzvah ceremony, is that in modern, developed countries, there are numerous points of transition on the way from childhood to adulthood, and there is no single one that is the key point.
Van Gennep and, later, Turner argued that because the passage from one life stage to another can be deeply ambiguous and, therefore, troubling, a prescribed, socially accepted ritual that formalizes the transition from one stage to the next can help contain and control the ambiguity. Van Gennep coined the term “liminal” to designate people who are neither one thing nor the other while they undergo transition. For van Gennep the centerpiece of rites of passage was the liminal (that is, transitional) phase, when the individuals concerned are protected and segregated from the rest of society:
I propose to call the rites of separation from a previous world, preliminal rites, those executed during the transitional stage liminal (or threshold) rites, and the ceremonies of incorporation into the new world postliminal rites. (Van Gennep 1960:21).
Turner kept the idea of the liminal, but changed the names of the stages to separation, segregation, and re-integration, and also expanded on the analysis. Nowadays Turner’s work is better known in the English-speaking world than van Gennep’s but their ideas are congruent. I will use Turner’s vocabulary here because I think it is clearer. Parts of the analysis here derive from Turner and van Gennep, and parts are my own interpretation.
The first period, the period of separation, is usually marked by a ceremony that is attended by a significant number of community members, led by a specialist. The people undergoing transition are set apart from the rest of the community in a number of ways: special dress, special symbols, special statements, and the like. The second period, the period of segregation, or the liminal period, is a time of isolation from the community for the participants when they are taught, or learn by themselves, the rights and responsibilities of their new status. At this point they are neither one thing nor the other. At the third period, the period of re-integration, the participants return to their community with their new status, and are greeted by members of the community in a way that befits their new role in society.
Turner wrote extensively on the rites of passage of the Ndembu, a division of the Lunda peoples living in what is now northwestern Zambia, and focused especially on the puberty rites that transformed boys into men (Turner 1969). The outward, visible symbol of manhood among the Ndembu is circumcision. At a certain point, a community determines that there are too many uncircumcised pubescent boys in the village and holds a circumcision ritual. The circumcision ritual is public, and boys prepare themselves for it ahead of time with individual trials of endurance, because the circumcision is performed without anesthetic. Showing no signs of fear or pain is an important test of manhood, and is keenly observed. The ritual is accompanied by village feasting and dancing. This is the period of separation.
The newly circumcised boys are subsequently taken to a place of isolation by village elders where they remain until their wounds have healed. This is the period of transition, or liminal period. At this point they are not quite boys, not quite men: they are liminal. During their seclusion, they are instructed by the elders in a variety of aspects of life that are the sole prerogatives of men. It is a time of intense communality and bonding for the newly circumcised. They are given special names, and there is no status differentiation between them during their isolation: for this period they are all equals. For the rest of their lives they will have a particular bond with the men who shared in their seclusion.
When the wounds have healed, and the initiates have completed their instruction, they return to their village as men. They have been learning their new roles in seclusion, but the rest of the village has also been adjusting to the idea that the former boys will be returning as men. They are greeted back in the village with feasting and dancing: the period of re-integration. From this point onwards, they must be treated as men, even though they are only a month older than when they were treated as boys. They are still young, of course, and still have a lot to learn and experience, but their status is fundamentally different.
Turner’s analysis of Ndembu ritual is complex, and I won’t go into detail here. Turner emphasizes the importance of symbols of ambiguity during the liminal phase, and also the importance of the color white, a common symbol of purity worldwide. You can think of white in numerous ways: as a blank slate, as protection against impurity at a dangerous time, and so forth. Ambiguity is potentially powerful, but also potentially dangerous. Think about rites of passage that you are familiar with. What symbols of ambiguity are prominent? How does the color white feature?
For most people, the liminal phase during transition is exhilarating, but temporary, and they are happy to pass through to the other side and settle into their new statuses. A few people, however, find the liminal phase captivating, and prefer to stay in it indefinitely. Do you know someone, or know of someone, who is permanently neither a boy nor a man? Not being alive and not being dead – that is, being in a persistent coma – is not a choice, but it is a state of permanent liminality. It is a constantly troubling state, however: friends and relatives would prefer one thing or the other, and, in consequence, tend to keep the person in isolation. Being liminal, whether by choice or through circumstance, has its advantages and drawbacks.
The concept of the liminal can be more widely generalized, although, by so doing, there is a risk of loss of meaning. Take any two contrasting categories – male/female, natural/artificial, day/night. Both sides of the contrast have clear meanings within our culture, and if they were only clear-cut and completely separate, the world would be nicely ordered and stable. There would be no uncertainty. But in each case that I have listed, there are things that are not one thing or the other, and because they disturb the order, they are potentially dangerous, but also potentially powerful. They are liminal.
Male and female is a particularly fraught contrast in the contemporary US. You might think that male and female is a simple biological fact of life, but you are wrong in two ways. Certainly, most people are born either with a pair of X chromosomes, and are biologically female, or with one X and one Y chromosome, and are biologically male. But some people are born with an X chromosome and no pairing chromosome, either X or Y. Some are born with multiple Y chromosomes, some with multiple X as well as Y chromosomes. So XO, XXY, XYY, XYYY, etc. are all possibilities. Normally, a fetus with a Y chromosome (whether one or more than one) develops male genitalia, and those without a Y develop female genitalia. But biology is complex, and it is not the whole story.
Some people are born with both male and female genitalia. They used to be called hermaphrodites, but the term had unfortunate connotations, so “intersex” is now the preferred term. Are people with both male and female genitalia male or female? At birth, some intersex individuals are assigned as either male or female, and one set of genitalia is surgically removed. Because newborns are minors, the parents have the choice of whether the child has surgery or not, and they are usually not in a position to make an educated decision – neither are the hospital staff. To put it bluntly, parents and doctors are usually freaked out. “Freak” is the appropriate verb here, because intersex babies are usually treated as freaks.
In Freaks: Myths and Images of the Secret Self, Leslie Fiedler (1978) suggests that people classified as freaks actually challenge social categories concerning the nature of being human. In effect, they are liminal beings – not one thing, not another. Conjoined twins, for example, confuse the category of “individual” – depending on the degree to which they are conjoined (especially if it is impossible to separate them), they are neither one person nor two people. The bearded lady is a woman, but has male characteristics. Some sideshows had fetuses in jars of formaldehyde that purported to be baby mermaids: unborn babies that were half human and half fish, and the like. At one time, freaks were major attractions at circus sideshows, but such shows are a thing of the past now that computer-generated imagery in movies can produce any oddity that can be imagined.
Parents of a baby born with both male and female genitalia who assign one sex or the other to the baby usually claim they are doing so in order to give the child a “normal” life, and they may not even tell the child about the surgery that removed parts of the genitalia. You might want to ask yourself whether the surgery is to help the child cope, or to help the parents cope (or both). Being stereotypically male or female is very important to a huge swathe of the Euro-American population. Archetypes of good looks and appropriate behavior for male versus female are fundamental to advertising and the media in general. Confusing these deliberately opposed categories is potentially troubling. Yet these categories go far beyond the dictates of biology, and, as such, are now seen by anthropologists as both biological realities and social constructs. Little boys are encouraged to play with dump trucks and erector sets, while girls are given dolls to care for, and pretty clothes. In this way they are steered towards roles that are socially acceptable for them. In the 1960s feminist scholars started drawing a distinction between sex (biological information) and gender (social information), and this distinction is now mainstream. The baseline assumption is that biological sex is more or less fixed, that gender is extremely flexible, and that it is gender that society cares about more than the simple realities of biological sex.
Because gender is inherently flexible, bending the rules of what constitutes appropriate behavior for a man or a woman is quite possible, but doing so threatens the very foundations of a culture precisely because gender rules are driven by culture, not by biology. Paul was really confused about this issue when he wrote in 1 Corinthians 11:14, “Does not nature itself teach you that if a man wears long hair it is a disgrace for him?” The Greek word for “nature” here makes it clear that Paul thought biology was the key issue, but that cannot be the case. If men were biologically meant to have short hair, and women long hair, it would not be possible for men to grow their hair long. Length of hair is a cultural choice, not a biological fact, and what long hair signifies changes over time and from culture to culture. Paul was not an anthropologist. What counts as feminine versus masculine also changes over time and from culture to culture, although some biological facts have a part to play.
Women give birth and can breast feed, for example, so that certain social roles can be guided by these biological facts, just as men tend to have greater upper body strength and can run faster on average, and their roles can take advantage of this biology. But biology is not destiny. Women can develop strength and stamina to rival many men, and men can feed babies with bottles. The simplest path need not be the only path. Why gender roles become fixed and clung to as immutable facts of life is a good question. Crossing gender “lines” has vocal detractors. But, there are some safety valves built into Euro-American culture. No matter how comfortable men or women are with their stereotyped gender roles, there is apparently an element of envy on both sides because neither side wants to be as pigeon-holed as society dictates.
You frequently hear, in popular culture, talk about a man expressing his “feminine side” or a woman expressing her “masculine side.” You might hear a woman say that she likes men who are comfortable with their “feminine side,” for example. This talk does complicate the issue a little, but only a little. There is still, underlying it all, a notion that there are “feminine” qualities – caring, nurturing, attention to looks, etc. – and “masculine” qualities – toughness, stoicism, emotional coolness, etc. – and that to be a “man’s man” or a “woman’s woman” (maybe a girly-girl), you have to emphasize one side and suppress the other. One solution that would please me would be to get rid of gender roles altogether. But, despite the fact that they are social constructs and not biologically determined realities, they are here to stay. However, creating socially constructed opposing sides that are not written in stone also creates problems. The liminal can come to the rescue.
Some people who are uncomfortable with the gender roles assigned them by society openly flout them. But they run grave risks. There are men who want to wear frothy wigs, dresses, high heels, and makeup, but doing so in the average town will certainly raise eyebrows. However, they can do so without drawing undue attention to themselves in parts of London, New York, and San Francisco. We can call these zones “liminal spaces,” that is, places where liminality is acceptable on a regular basis. But these liminal spaces are carefully circumscribed and marked off from the rest of “regular” society, not unlike the seclusion spaces for Ndembu initiates during the liminal phase of their puberty rites – except that such liminal spaces are permanent. People who want to be liminal are safe in these spaces, as long as they stay within them.
There can also be liminal times, when people are safe to indulge in favored liminality. Halloween in the United States is one such time. At Halloween it is acceptable to wear any costume you choose – in fact, the more outrageous, the better. Costumes cover the waterfront, of course, but you are always going to find a number of liminal characters: men dressed as women and women dressed as men is standard fare. On New Year’s Eve, similar practices are condoned in some places. Not all costumes at Halloween or on New Year’s Eve are liminal, by any means, but participants have “permission” to be liminal if they choose. This is because Halloween and New Year’s Eve are liminal times of the year: the year is undergoing its own transition.
The liminality of New Year’s Eve is easy to understand. The old year is ending and the new year is beginning. That is, the year is in transition. The whole idea is a cultural invention, of course. January 1st need not be the first day of the new year, and often was not in the past, and there is no reason to celebrate one particular day as opposed to another. Nonetheless, to many people New Year’s Eve is an important time: they look back on the year past, and they look forward to the year coming. Sometimes they make resolutions, or make other kinds of decisions about how the coming year will be different. If you strip away the concept of a calendar, then one day is the same as another. But if you divide time into cycles of years, then the point at which one cycle ends and the next begins holds meaning. Your birthday is just another day, but we say that on that day you are a year older, not just one day older. Midnight on New Year’s Eve is like the world’s birthday: it is not just one day older, but a year older. We take time, which is linear, and make cycles out of it. Our own birthdays are important to us, but they are individual. Marking a new year involves the whole community, and, therefore, can be special for everyone at once.
When one year ends and another begins is completely arbitrary. January 1st has become the first day of the new year for most cultures worldwide these days, but it was not always so, and many cultures still have their own days when the year flips: Chinese, Jewish, Muslim, Hindu, Buddhist, etc. January 1st was the first day of the year in the ancient Roman calendar, but in Medieval Europe regions chose all manner of dates to mark the turn of the year, including March 1st, March 25th, Easter Sunday, September 1st, and December 25th. In England, years were counted from the date the monarch assumed the crown. Beginning in 1582, with the advent of the Gregorian calendar, regions shifted to January 1st, but it took many centuries for the change to become widespread in Europe, and then the rest of the world.
In the Celtic world the year changed not once but twice: at Beltane (roughly April 30th/May 1st) and Samhain (roughly October 31st/November 1st). There were actually four seasonal festivals, but these two divided the year into summer and winter. Beltane was the beginning of summer, when animals were driven to summer pasture, and Samhain was the beginning of winter, when animals were taken from their summer pastures to winter quarters. A great deal of nonsensical speculation has been made in modern times about how widespread the customs at Beltane and Samhain were and what they were, mostly by neopagans who want Celtic ritual to be more of an organized religion than the historical and archeological evidence suggests it was. For now, I will simply note that Samhain is documented in abundant Celtic literature dating to the 10th century CE as a time when the boundary between the human world and the Otherworld (which has various names and various descriptions) was especially thin, so that otherworldly creatures could enter our world. Very clearly a liminal time of the year.
It does not take much of a leap of imagination to see that some customs from Samhain bled into Halloween and that they traveled with Irish immigrants to North America. Unfortunately for you, this is one of my areas of specialization, but fortunately for you I will avoid rambling on about it at length for the moment. What the Samhain customs were and how they mutated into Halloween customs is highly contentious, and modern neopagans are apt to draw conclusions that are not warranted by the historical data, and to get quite rabid with scholars, such as myself, who insist on academic standards of inquiry. What we are reasonably justified in saying is that October 31st/November 1st has been earmarked as a liminal time, giving people permission to be liminal for a limited period.
The bottom line is that cultures create bounded categories – times, places, people, things – to create a sense of order. But the world is not as orderly as these categories suggest. Day and night are clearly distinct, but dawn and dusk muddle them up for a little while. Male and female are mostly clear cut, but there are people who do not fit neatly into either category. Some people are clearly children, while others are clearly adults, but some people sit between the two groups. There are many, many ways that cultures can handle these things that lie between two categories, and they do. The question that some anthropologists ask – and you should ask yourself – is: which categories are particularly important to a culture, and why? Why are some liminal things more disturbing than others?
If we accept Shakespeare at face value, dusk was an especially troubling time for the Elizabethan English. Ghosts and witches might appear at dusk, and mysterious things might happen. Judging by the numerous bills passed by state and federal governments in the United States, transgender and intersex individuals are deeply troubling to many legislators at present. Hallucinogenic drugs, which blur the distinction between real and unreal, are banned in many countries (yet in some cultures they are an important component of religious ceremonies). What makes them so troubling that they must be made illegal?
Chapter 11: Waiter there’s a dragonfly in my soup: Food Taboos
I have a slide presentation that I give periodically in different countries which I (slightly ironically) call “strange foods.” Some of the images are my own from my travels, and some are taken from the internet. The foods cover just about every climate on every continent and have one thing in common: each food is a delight in some cultures, and abhorrent in others. At the time I put the slide show together I was living in Kunming, in Yunnan province in southwest China, and near where I lived was a wet market. This is a food market with fruits, vegetables, and meats of all kinds on sale, plus cooked dishes to eat there or take away, but it is called a “wet” market because there are numerous shallow tanks scattered around filled with live fish that you can select from. They also had live ducks, chickens, frogs, snakes, and crabs as well as dogs and cats, and assorted rodents and insects. No one was completely clear with me whether the dogs and cats were on sale as food or as pets, but grilled dog was, indeed, on sale in one street in town at night. The stall that caught my attention the first time I was in the market was a table stacked with several sets of wax cells from a bee hive, each cell containing a live bee larva. The seller plucked larvae out of the cells for customers with chopsticks, and would either fry them on the spot as a snack, or sell them live in a bag to take home and cook.
The bee larvae got me musing about what cultures treat as good to eat and what they find abhorrent. My slide show had images of cheese crawling with maggots, chocolate-covered spiders, whole fruit bats in broth, stinky tofu, and whatnot, and I have shown it to students in a number of countries including China, Italy, and Myanmar. I am interested to see what foods look good to them, and which ones turn them off. There is not complete agreement among my classes – ever – but trends emerge. My Chinese students were not big on cheese of any kind, but were particularly revolted by moldy cheeses, such as Stilton and Roquefort, and casu marzu, a Sardinian sheep’s milk cheese crawling with live maggots, almost made them ill to look at. My Italian students thought all the cheeses, including the casu marzu, looked yummy and made them hungry. Once when I showed a photo of a bright yellow soup with whole dragonflies floating in it to my Chinese students, one of them said, “My grandmother makes that for autumn festival. It’s delicious.”
Some food preferences are rooted in biology, but most of them are products of culture. Dairy products are not wildly popular in China because a significant percentage of the population has one degree or another of lactose intolerance. That is simple biology. But why should it be illegal to sell horse meat for human consumption in one country, and have thriving horse butchers in neighboring countries? There are undoubtedly many factors at play here, but simple cultural preferences are key. In the US, organ meats are not popular, but throughout Europe there are many specialty dishes made with them. Caen, in Normandy, has an entire confraternity of chefs who specialize in cooking the four chambers of a cow’s stomach into tripes à la mode de Caen, and down the road in La Ferté-Macé they make a specialty dish of tripe en brochette. Heart, liver, kidneys, thymus gland, intestines, brains, and what have you, are scarfed down by some people and revolt others. Why?
Here we must distinguish between simple food preferences and formalized food taboos. I want to start with the latter and then move from them to cultural preferences in general. The formal food taboos that are most commonly known are linked to religious practices: beef for Hindus, and pork for Jews and Muslims. Anthropologists have spent some time considering why these taboos exist, with mixed results. Marvin Harris, for example, who believes that environmental and material factors determine the shape (and apparent oddities) of a culture, argues in Cows, Pigs, Wars, and Witches: The Riddles of Culture (1974) that these and other food taboos historically made economic sense. When you read Harris, you may first be beguiled by his arguments, but if you start to think critically you will find more questions than answers. He argues that Hindus in India do not slaughter cattle because they are more valuable to them as traction animals for cultivating their fields than as meat (as well as bestowing other benefits when alive). Fair enough, but how is it that neighboring Muslim farmers slaughter cattle routinely without obvious harm to their economies?
Pork is far from the only taboo food in Jewish tradition. Jews are not supposed to eat lobster, oysters, ostrich, and frogs either. Why not? A (dubious) case can be made for why raising pigs in the highlands of Judah in ancient times was economically unworkable, but where do shellfish and frogs fit into this theory? Furthermore, we can legitimately ask why these food taboos persist outside of the lands where they were first promulgated. Jews in ancient Jerusalem might have had sound economic reasons for not raising pigs – maybe – but what about Jews in Berkshire in England or Wisconsin in the US? Pigs thrive in those places. Why does changing time and place not change food customs if they are rooted in environmental circumstances? Here we can start by distinguishing between how a food taboo began and what it means now.
Judaic law concerning what foods are acceptable to eat (clean) and which foods are taboo (unclean) is, arguably, the most complex and most complete system of dietary laws in the world. The law also details what foods you can mix together, and which ones you cannot, how you must kill and butcher them, and the plates and utensils you must use to serve them. It is exceptionally comprehensive. Therefore, singling out pork, while you might be able to make an interesting biological or environmental case for its exclusion, is missing the point completely. People who are not Jewish and not familiar with kashrut (Jewish dietary law), often single out pork when thinking about what Jews can and cannot eat because they are thinking in the narrowest of terms. They are simply casting a mental eye over the meat selection in a market, and pork pops out as taboo. Let’s leave aside, for the moment, the fact that all the meat is taboo unless slaughtered in the correct manner and must be eaten off certain plates. Pork pops out because the average supermarket shopper is not normally choosing between beef and alligator or ostrich or rabbit, yet the latter three are all taboo according to kashrut. The average shopper may, on the other hand, be choosing between beef and pork sausages, and so might want to know why one is acceptable and the other taboo: they are both delicious. We need not make the same mistake. We can start by asking: “Why is Jewish dietary law so detailed and so comprehensive?” Then we can ask why it exists at all, and why particular foods are included or excluded.
I have argued at length elsewhere (Forrest n.d.) that the dietary laws of Leviticus were fully codified during the time of the Babylonian Exile (597 to 538 BCE), based on laws that were being solidified in the century before, specifically in the kingdom of Judah (the region around Jerusalem). In the seventh century BCE, Judah faced threats of domination from Assyria and Egypt, which it had been partially successful in staving off by hunkering down and paying tribute (taxes) when asked. Its northern neighbor, the kingdom of Israel, chose to resist Assyria militarily, and, in consequence, was crushed by the Assyrians in 722 BCE, with the bulk of the population being carted off to parts unknown (the so-called Lost Tribes of Israel), and some of them fleeing to Judah.
Judah chose to pay tribute rather than fight, and, consequently, was left intact. When Babylon rose to replace Assyria, Judah at first kept hunkering down, but then, under king Josiah, who came to the throne around 640 BCE, nationalist sentiment centered on the priesthood and, when temple scholars became the dominant political faction, Judah chose to resist. Josiah challenged Egyptian forces at the battle of Megiddo (giving us the word Armageddon) in 609 BCE, and was killed. His sons were short-lived as rulers, ultimately following in dad’s footsteps and being crushed by the Babylonian empire in 586 BCE, with the bulk of priests, nobles, and scholars transported to Babylon and Jerusalem left in ruins.
Shifting whole cultures that were rebellious, lock, stock, and barrel, had been the policy of the Assyrian empire, and Babylon followed suit. The policy has been an effective means of controlling ethnic populations down to the present day. In the nineteenth century, the US government displaced numerous Native American groups from their indigenous homelands, and in the twentieth century Stalin did the same in the Soviet Union. When an entire people is forced out of its homeland to a strange location, it is difficult for them to survive unless they adopt the customs of the new people around them and assimilate to the conditions that they find there. As such, they are likely to be much less troublesome than in their homeland. That was the whole purpose behind the forced mass migrations in the US and the Soviet Union.
The Israelites who were forcibly relocated by Assyria are lost to history because they assimilated. The Judeans who were relocated to Babylon were aware of the perils of assimilation, because of Israel’s example, and hit on a new strategy that was safer than military resistance: keep absolutely to themselves in language, in worship, in dress, in appearance, and in food regulations. By creating incredibly complex food laws, they made it virtually impossible to dine with outsiders.
Not being able to eat with strangers is a potent weapon in the war against assimilation. People who eat together form friendships in ways that are more personal than through simple meetings around town, or business transactions. Over dinner, you relax, tell stories, and generally have a good time. Eat in one another’s houses a few times, and you are firm friends; prevent people from eating together and you have an obstacle to becoming friends. Having arcane rules about what you are allowed to eat does not stop strangers coming to your house, but it absolutely prevents you from going to theirs to eat. That kind of asymmetry is obviously going to make creating friendships difficult. People are not going to keep coming to your house and eating your food if they cannot reciprocate. This attitude goes a long way towards explaining the basis of the food laws.
Marvin Harris’ argument focuses on the potential destructiveness of keeping pigs in the arid lands around Judea in ancient times before the Babylonian Exile. In fact, he does not mention the Exile at all. His point is that the environment was so harsh that any scrap of land that was suitable for growing crops on was put to that use, and the cereals, fruits, and vegetables grown on that land were more efficiently used directly as human food, rather than as pig fodder. It’s a simple fact of nutrition that pigs eat what humans eat, but in the process of making meat and fat for us to eat there is a tremendous amount of waste. Humans can eat what is fed to the pigs and, thus, eliminate the waste. Goats, sheep, and cows are a completely different matter. They eat grass, leaves, and foliage that humans cannot digest, and turn them into meat, fat, and milk which humans can eat. They can be turned out to graze on the lands that are unfit for agriculture and make those lands productive.
Harris makes a fair point: keeping pigs in an arid land is inefficient. But you can keep them in urban environments, feeding them on scraps and kitchen waste. If you make cheese or butter you can give the pigs the whey along with the scraps. Give them anything that you don’t want to eat. That way, nothing goes to waste, not even spoiled food, and they, like cows and goats, turn the inedible into meat and fat. Win-win. Turning to archeology (e.g. Sapir-Hen et al, 2013), we discover that the picture is complicated, and Harris should have checked here first before idly speculating. The archeology of the Iron Age Levant (roughly 1200 – 587 BCE), that is, the region incorporating the kingdoms of Israel and Judah, and the coastal area occupied by Philistines from the Aegean, reveals that urban populations kept domestic pigs for centuries in many parts. This conclusion is based on the appearance of the bones of domesticated pigs (not wild boars) in assemblages in multiple layers. But there is an important twist.
Pig bones are prevalent in urban parts of Israel and Philistia, but much less so in Judah. Why? If Israelites could, and did, keep domestic pigs in urban areas, why didn’t Judeans in Jerusalem keep them? Actually, they did, just not in anything like the numbers found among their neighbors. One answer could be that keeping sheep, goats, and cattle for meat and other animal products was more efficient in Judah, because the region was rocky and arid. That would bolster Harris’ position, but it is not the whole answer by any means. The ancient Judeans could have kept pigs in limited numbers in urban areas, but, as time went on, they chose to do so less and less. This cannot be a simple matter of economics and ecology. Something else was going on. The speculation by Israel Finkelstein and colleagues (e.g. Finkelstein 2007) is that, by the time of Josiah, identity politics mattered a great deal.
Josiah’s priests and scholars were intent on forming a national identity for Judah that marked the kingdom apart from its neighbors. Their goal was political: to build a strong and independent nation that could resist outside domination. One monumentally important strategy in nation building, then and now, is to create a unique national identity that marks you off as distinct from your neighbors, focusing on visible signs such as clothing, grooming, language, and food. Therefore, foodways that developed out of necessity – cows over pigs – become symbols of the new national identity. It’s a matter of us versus them: they eat pork, we don’t. This strategy stood the exiled Judeans in good stead when they were facing assimilation in Babylon, and it has served their descendants well ever since. Jews are different because they do not eat pork. Of course, the law is much, much more complex than a simple taboo on pork, but it is a significant component in lands where pork is common.
I believe that there is another feature of this taboo that is more ideological than ecological. If people are going to adopt certain symbols as a component of their national identity, they need to assign meanings to those symbols that go beyond simple necessity. You can’t simply say, “We are who we are because we eat beef and not pork because cattle are easier to raise than pigs.” That’s a lame stance to take. My argument is that the temple priests, in particular, took old narratives about their founding fathers – what anthropologists call “culture heroes” – and crafted them into a continuous narrative, the book of Genesis, that extolled their virtues, and decried their wrongdoing, and specifically linked their virtues to rearing animals in the wilderness. The founding fathers, called the patriarchs, namely, Abraham, Isaac, and Jacob, were particularly notable for keeping cattle, goats and sheep. Their tales are shot through with elements concerning how cattle herders are strong, independent, courageous and crafty. They live by their wits and stamina in wild places and grow strong, but people who live in cities (and raise pigs) get fat and lazy (like pigs). Ideology replaces necessity.
The temple priests needed to ground their ideology about food in a manner that went well beyond cows versus pigs: it had to be universal. The image of the alpha male cowboy was a good one, and worked well, but it was only a small, albeit vital, aspect of a complete reworking of Judean national identity. They were intent on showing that all animals were either clean or unclean for a compelling, universal reason that stretched back to the dawn of time. Enter the creation narrative in Genesis 1. On days 2 and 3 of creation, God created three distinct zones: land, sea, and sky, and on days 5 and 6 he populated them. The key term used repeatedly in the narrative is “separation” – God separated the waters above and the waters below on day 2, and separated the seas and the dry land on day 3. The verb “separate” here translates the Hebrew root בדל, “to divide or set apart,” and the same idea of being set apart lies at the heart of the root קדש, which means “holy.” Holy things are separated from ordinary things, and a holy people are separated from “ordinary” people – like Israelites separate from Philistines, or from Babylonians.
Put Genesis 1 and Leviticus together – both documents created by the priestly class – and you have a complete worldview. There are three zones, and three kinds of animals in those zones. Those that fit the zones well are clean, those that do not fit well, or cross zones routinely, are unclean. Here we come to the big question: “What does ‘fit well’ mean?” With animals that cross the boundaries there is no problem. Amphibians that live equally well in water and on land are taboo, as are birds that cannot fly. So, don’t eat frogs or ostriches. So far, so good. They do not fit well. They are the rebels of the animal kingdom. But what about the animals that stay where they belong? Which are clean, and which are unclean? The simple answer is: the ones we prefer (symbolically). The next order of business is to find defining characteristics that mark off what we eat from what we don’t eat that have universal application. Here the priests faltered a little, but did a halfway decent job.
“We eat sheep, goats, and cattle, but we do not eat pigs – that’s who we are. If you eat pigs you are not one of us.” The trick is to make sense of this mantra in ideological terms. The priests appear to have had two governing principles lurking below the surface – sometimes spelled out explicitly, sometimes not:
- Animals are clean to eat if they move the way they should in their natural environment.
- Animals are clean to eat only if they do not eat other animals.
Thus, you have three zones – land, sea, and air. Land animals should walk, sea animals should swim, and air animals should fly. Parts of this ideology are easy to work out. For example, ostriches can’t fly and, therefore, are not fit to eat. Lobsters and crabs walk, and mollusks and bivalves don’t move at all, so they are off the menu. Land animals present a bit of a problem in this regard, and my speculation is that the priestly reasoning was somewhat after the fact. That is, they asked the question, “How do the animals that we do eat walk?” with the answer, “On split hooves.” – therefore, the “correct” way of walking on land is on split (cloven) hooves. Not walking at all rules out snakes and things that crawl or hop, and not having split hooves rules out horses and camels, which were much more valuable as traction animals and beasts of burden than as food. Besides, camels are notoriously difficult to breed, so using the young for food is counterproductive if you are trying to increase your herd.
The second rule, concerning what animals eat, completes the picture. Land animals must chew their cud – that is, they must be ruminants. Ruminants have special digestive systems that allow them to eat grass and other foliage, the outward sign of which is chewing their cud. Air animals cannot eat either live or dead flesh to be clean, ruling out hawks and crows, but allowing chickens, ducks, doves, and quail. Unless you are an underwater diver you are unlikely to know much about what fish eat, but an encounter with a shark will give you a small idea. By requiring fish to have scales to be clean to eat, you eliminate sharks and other carnivorous fish, to an extent, but not entirely.
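If it helps to see the logic of these two principles laid out mechanically, here is a minimal sketch in Python. It is purely illustrative – my own toy encoding of the scheme described above, not a statement of kashrut – and the function name and example animals are my inventions.

```python
# A toy model (mine, not the text of Leviticus) of the two governing principles:
# an animal is clean if it moves the way its zone dictates and does not eat flesh.

def is_clean(zone, locomotion, eats_flesh, cloven_hooves=False,
             chews_cud=False, fins_and_scales=False):
    """Classify an animal as clean (True) or unclean (False) under the simplified scheme."""
    if eats_flesh:                      # second principle: no eaters of other animals
        return False
    if zone == "land":                  # land animals should walk on split hooves and ruminate
        return locomotion == "walks" and cloven_hooves and chews_cud
    if zone == "water":                 # water animals should swim, marked by fins and scales
        return locomotion == "swims" and fins_and_scales
    if zone == "air":                   # air animals should fly
        return locomotion == "flies"
    return False

# Illustrative cases (my own examples):
print(is_clean("land", "walks", False, cloven_hooves=True, chews_cud=True))  # cow -> True
print(is_clean("land", "walks", False, cloven_hooves=True))                  # pig -> False (no cud)
print(is_clean("water", "walks", False))                                     # lobster -> False (walks)
print(is_clean("air", "flies", True))                                        # hawk -> False (eats flesh)
print(is_clean("air", "walks", False))                                       # ostrich -> False (cannot fly)
```

The point of the sketch is simply that two short rules, applied zone by zone, reproduce most of the judgments discussed above; the exceptions and the lists (birds, insects) are where the priests had to improvise.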
I am not suggesting that the 7th century BCE priests were attempting to be ancient versions of Linnaeus, trying to build a rational, scientific taxonomy. Clearly they were not. With land and sea animals they did come up with simple rules for classifying clean animals – cloven hooves and cud chewing for the former, fins and scales for the latter. With animals of the air, they got a bit stuck. They eliminated birds that ate carrion, but for the rest they simply made lists rather than finding an underlying principle. Then there are insects. By Jewish law, most insects are unclean, but kashrut makes an exception for ones that can fly and hop. There are four clean species mentioned in Leviticus, but they are now impossible to identify. They seem to be species of locusts and/or grasshoppers. It has been suggested that these species were included because they were important sources of protein for the poor who could not afford to keep animals.
In the final analysis, I am saying that the priests were codifying the actual food preferences of the Judeans that made them different from their neighbors. These preferences may have been rooted in ecology, but the priestly codes turned them into an ideology. By moving from practicalities to abstractions, the priests made the food laws portable and permanent. “Wherever I travel, you will know me as a Judean by the foods I eat, and the ones I avoid.” Given that this ideology has worked for over two and a half millennia, I’d say they found a winner.
What this analysis all comes down to is that looking for, or proposing, a single reason why Jews forbade eating pork in the past is a non-starter. There were multiple reasons at the outset, and there still are. In the 1830s a parasite was discovered to be the cause of trichinosis. A decade later, undercooked meat, especially pork, was discovered to be the vector, and in the next few decades this fact was accepted by the entire medical community. Hooray, hailed the rabbis. Our priests in the ancient past knew about how pork makes you sick and so they banned it. Not so fast. It is only undercooked pork that is the problem. In fact, all undercooked meats carry risks. Chicken can give you salmonella, and beef, mutton, and goat can give you anthrax, yet the priests declared all of those meats clean to eat. In general, the argument from medical reasons for food taboos is a poor one, especially given that food taboos differentiate one culture from another. If one foodstuff made people reliably sick, all cultures would avoid it.
Even if we accept the (dubious) premise that ancient Judean priests had an understanding that undercooked pork can make people sick we cannot argue that this fact is the sole reason for the taboo. Once again, it singles out pork. Why are rabbits taboo also? They are plentiful, cheap, and delicious, and they will not make you sick if you take proper precautions. In fact, pork will not make you sick if you cook it thoroughly. If the ancient concern was health, then methods of cooking would be more prominent in the law than which animals you can and cannot eat. Elsewhere in the law there are obvious health concerns, such as, how to deal with leprosy and skin diseases, so if health were the prime reason for food taboos, there would have been explicit mention of it.
If I can cast my thoughts here into a generalized rule, I would say that looking for single, simple causes for cultural norms, whether it be food taboos or incest prohibitions or kinship patterns or whatever, is a mistake. Culture is just not that simple, and history complicates things further. A food taboo might work well for one reason at a certain point in time, but it may work well for a completely different reason at a later time. Nonetheless, I believe that Judaic Law has always had cultural identity in the forefront when upholding food taboos.
Let’s now turn to food preferences that are not codified in the way that Judaic Law is. I have likes and dislikes, and I am sure you do too. What foods do you avoid, and why? When my son was growing up, we had one household rule: I would not force him to eat anything he did not like, but he could not refuse to eat something without at least trying it. If on the first taste he did not like it, he did not have to eat more – but he had to have one bite. Now, as an adult (and an anthropologist), he orders duck feet and pig’s stomach if he sees them on a menu, but will not eat lentils or anything made with mushrooms or eggs. Yes, cooking for him as a boy was a challenge. I eat pretty much everything that is put in front of me, but I have certain things I avoid: not specific foods, but foods with a certain texture. Foods that have a soft, almost watery, texture, such as, junket or silky tofu, I find unappetizing. I don’t know why. These are matters of personal tastes, however, and that is not the domain of anthropology. The subject becomes a topic for anthropology only when your tastes mirror values throughout your culture.
Cultural values concerning particular foods can be a difficult issue to assess, but we can look to statistics of consumption and legal regulations and prohibitions for insight. For this exercise horse meat seems like a fair test case. It is legal to sell horse meat for consumption in most parts of the English-speaking world, but good luck trying to find a horse butcher in Devon or Mississippi. There is a strong cultural taboo against eating horse flesh in some countries, regardless of the legalities. In 2013 there was a scandal in the UK and Ireland because ground horse meat was being sold in a number of products by several distributors. The problem was not that horse meat was being sold: it is legal to do so. The problem was that the meat was being sold as beef. The center of the scandal was mislabeling, but one reason the meat was mislabeled was that people in those countries would not eat meat from horses if they knew that that was what they were eating. Yet, hop across the channel and you will find horse meat in supermarkets all over France, or go farther afield to Italy where the people eat almost 1 kilo of horse meat per person per annum.
In 732, pope Gregory III banned Catholics from eating horse flesh, condemning the practice as a filthy pagan habit of Germanic peoples. All was well until the tenth century when Olaf Tryggvason ascended the throne of Norway in 995. He set his sights on purging his realm of Norse gods and establishing Christianity, including in Iceland, which, though independently governed, was settled largely by Norwegians and closely tied to Norway. He sent a native Icelander, Stefnir Thorgilsson, who had been living in Norway, back to Iceland to carry out the conversion, but his mission was a dismal failure because his idea of persuasion was to violently smash idols and temples, which the Icelanders did not appreciate. In 997, Olaf sent the more moderate Thangbrand as missionary to Iceland, and for two years he achieved some success. But there were some sticking points, a major one being the church’s ban on eating horse flesh.
Finally, in the year 1000, the governing body of the Icelanders agreed to arbitration between the factions for and against Christianization. Thorgeir Thorkelsson was chosen as mediator, acceptable to both sides as a fair and reasonable man. Thorkelsson spent a day and a night under a fur covering in contemplation, and then delivered his decision: Iceland would be Christianized as long as the Catholic church allowed Icelanders to continue certain customs, which included eating horse meat. The church agreed, but once Catholicism was firmly established, the ban on eating horse was re-instated. For some cultures, being able to eat horses is a big deal, while for others banning the practice matters just as much.
As with the Jewish taboo on pork, you are not going to find a single, simple reason why some cultures think eating horses is a great pleasure, while others are disgusted by the practice. You might also be surprised to learn that in some countries where it is virtually impossible to find horse meat for sale for human consumption, such as Canada, the US, and the UK, horse slaughter and butchery is, or was, routine. In these countries, they either sell the meat to zoos to be fed to their carnivores, or they sell it for human consumption overseas. There is good reason not to eat such meat. Horses that are slaughtered in countries where the meat is not eaten locally, are, more often than not, fed or injected with chemicals that are not approved for human consumption, and their presence in the meat is difficult to detect. The last slaughterhouse for horses in the US closed in 2007 after accusations of cruel treatment of the animals, and such accusations continue against Canadian slaughterhouses. Thus, a taboo against horse flesh as food can seem either practical or humane. But that is not the whole story.
Horse meat is not kosher, but Muslim laws are equivocal. The equivocation stems from the fact that the laws of halal (Muslim dietary laws) are not as systematic as kashrut. Pigs are forbidden under halal, but which other animals are acceptable to eat, and which are not, varies between Islamic sects and geographical regions. Horse meat has been considered halal among Muslim Turks and Persians for centuries, but only among small groups of Muslims in North Africa. In fact, horse meat consumption is high in most parts of Muslim central Asia – especially Kazakhstan and Kyrgyzstan – as well as in neighboring (largely Buddhist) Mongolia, and Kazakhstan and Mongolia are among the world’s leading producers of horse meat. Archeological evidence strongly suggests that horses were first domesticated in the central Asian steppes, where previously wild horses had been hunted for meat. Their domestication was certainly for riding and traction, but also for consumption. The strong association of horses with the cultural identity and practices of peoples such as the Mongols predates the establishment of Islam in the region by millennia. It would have been a tough sell to convert these peoples to Islam if the process of conversion involved banning horse meat (much like the conversion of Icelanders to Christianity).
What you cannot argue, as Harris does endlessly concerning cattle in India, is that horses were so important to these cultures that they created food taboos to prevent their slaughter. In fact, the exact opposite appears to be the case, not only in central Asia, but in Europe as well. There is some evidence that Germanic peoples in Europe treated horses as deities, and they sacrificed them and ate them because they were gods, not in spite of the fact. No doubt this is one of the reasons pope Gregory banned horse consumption. It was not just a “filthy” habit, it was tied to pagan sacrifice in a significant way. Banning horse meat consumption was implicitly a ban on horse sacrifice, and its attendant beliefs, that were anathema to Christianity.
Here we are stuck with a conundrum. Modern people in the English-speaking world have a generalized taboo against eating animals that can be treated as pets (or friends of a sort). Dogs and cats fit that category, and so do horses. Rabbits occupy a grey area in between cuddly and delicious. Cows, sheep, goats, chickens, and pigs are not normally kept as pets. I have known a few people who have had a pet chicken or a pet sheep, but they are certainly not the norm (they are considered a bit weird by their neighbors). If we can tease out a rule here it would be that dogs, cats, and horses are part of the household, and you don’t eat family members. I know it sounds like an absurd question, but, why don’t you eat family members? In some cultures in New Guinea and South America, eating family members – the human kind – is a reverential act performed after they die. By so doing, they remain part of you even after death.
You could say that “you are what you eat” is a widely accepted maxim across the world. The critical point here is determining what you think you are eating. This idea was best expressed by Claude Lévi-Strauss: “Les espèces sont choisies non comme bonnes à manger, mais comme bonnes à penser.” (Species are chosen, not because they are good to eat, but because they are good to think.) The French is a bit tricky, but more informally we might translate “bonnes à penser” as “good to think about.” Looser still: “We choose certain animals to eat, not because they are delicious, but because they symbolize something important to us.” Maybe eating an animal you think of as strong will make you strong, and eating an animal you think of as weak will make you weak. By that reasoning, eating an animal you think of as a god will imbue you with the qualities of a god. Unfortunately, by the same reasoning, eating your friend (human or animal) who is adorable ought to make you adorable. What else is going on here? “You are what you eat” is not the whole story.
Chapter 12: Do Eskimos Have 100 Words For Snow? Color Terms, Classification, and Language.
You were probably taught in school at a young age that the rainbow is made up of seven colors – red, orange, yellow, green, blue, indigo, and violet – and maybe you were given a handy mnemonic to remember them in order: “Richard Of York Gave Battle In Vain” is the one I was taught. But physicists know that there are not just seven colors in the rainbow (what they call the visible light spectrum). The spectrum is continuous, and the number of colors within it that we can tell apart is enormous; nor are spectral colors anywhere near the only colors we can perceive. Colors can be mixed in innumerable ways, so that the possibilities are vast – effectively limitless for everyday purposes. English also has a vast array of words for colors, and many of these colors, such as brown and gold, are not even in the visible spectrum. Neither are black and white.
The study of the perception of color, and words for colors cross-culturally, was brought to the attention of anthropologists by Brent Berlin and Paul Kay in Basic Color Terms: Their Universality and Evolution (1969). The book is still central to the anthropological study of color perception and language with both supporters and detractors to this day. Their key assumption was that all humans, with properly functioning eyesight, see colors physically in the same way. What they believed they demonstrated experimentally was that all humans also process what they see in the same way. Thus, while different languages break up visible colors into different numbers of categories, the categories are completely predictable. For example, if a language has only three basic color words, they will be dark, light, and red – invariably. There is no language that has only three color words and they translate as dark, light, and blue. Berlin and Kay’s research opens up a world of questions concerning how people process information: how people think. Berlin and Kay focused on how people see colors and how they describe them, but the implications of their research go much, much deeper than puzzling about words for colors in different languages. They were saying that there are some fundamental ways that people think that are universal, and do not change according to what language you speak. They are hard wired. Let’s look at how they came to that startling conclusion.
First, Berlin and Kay limited their research to what they defined as “basic color terms.” The basic color term was their unit of research. We could also call them “basic color categories.” Obviously English has hundreds, if not thousands, of words for colors, but Berlin and Kay were interested only in basic categories, not words in general. If you are reading critically, you will see a potential problem in this methodology, right from the start. How do you define a “basic” color category? They identified eleven possible basic color categories: white, black, red, green, yellow, blue, brown, purple, pink, orange, and grey. To be considered a basic color category according to Berlin and Kay, the term for the color in a language has to meet these criteria:
1. It must be monolexemic and monomorphemic. That is a fancy way of saying that the term has to be a single word (e.g. blue, and not light blue), and the word has to have only one meaning component (e.g. greenish has two meaning components: green (the color) and -ish (sort of)).
2. The word cannot be subsumed under a more general color term (e.g. crimson is excluded because it is a type of red).
3. The word must be universally applicable, and not be restricted to a small class of objects (e.g. blond(e) is restricted to hair, wood, and beer).
4. The word cannot be an individual invention, but must be a normal, everyday word known to all speakers of the language.
In the case of doubtful possibilities, the following criteria applied:
5. The term in question must be able to be modified in the same way that other basic color terms can be (e.g. you can apply -ish to green, blue, and pink, but not to avocado).
6. A color term that is also the name of an object that characteristically has that color is suspect, (e.g. gold, silver, avocado, bone, and ivory).
7. Recent foreign loan words are suspect (e.g. chartreuse).
Right from the start you may have your doubts about this methodology. For example, why does orange make the list, but silver does not? The word “orange” in English comes from the color of the fruit (check an etymological dictionary if you don’t believe me). How recent is “recent” when it comes to loan words? What is wrong with “avocado-ish” or “salmon-ish”? Can’t “-ish” or “-y” be attached to most color words? Maybe you already believe that Berlin and Kay’s methodology is flawed from the outset, but let’s press on.
Berlin and Kay suggested, after testing native speakers of twenty languages from diverse language families, that color terms appeared in languages in an evolutionary sequence. Tests involved giving participants a selection of color chips from the Munsell Color System (the definitive system for classifying colors at the time) and asking them to make piles of them. Each pile had to contain all the chips that the participants considered to be the same color, and only chips of that one color. The separate piles had to be different. Some participants made two piles, some three, and so forth. Berlin and Kay found that if the languages of the test subjects had fewer than the maximum eleven color categories, the color terms in the language followed a reasonably fixed evolutionary pattern. This pattern is as follows. You should read the list as cumulative.
1. If a language has only two basic color terms, they are roughly equivalent to dark (or cool) versus light (or warm). The distinction is sometimes translated as “black” and “white,” but that translation misses the point. Translating the terms as “warm” versus “cool” or “dark” versus “light” is much closer to a binary distinction most English speakers understand and closer to what the indigenous words in those languages actually denote.
2. If a language has three terms, then the third term is always a term for red.
3. If a language has four terms, then the fourth is a term for either green or yellow (but not both).
4. If a language has five terms, then it has terms for both green and yellow.
5. If a language has six terms, then the sixth is a term for blue.
6. If a language has seven terms, then the seventh is a term for brown.
7. If a language has eight or more terms, then it has terms for purple, pink, orange and/or grey (with no particular preference or sequence).
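For readers who like to see a pattern stated as an explicit rule, here is a minimal sketch in Python of the implicational hierarchy in the list above. It is my own toy encoding for illustration – the list and function names are inventions, and collapsing the last four categories into a single slot is a simplification of point 7.

```python
# A toy encoding (mine, not Berlin and Kay's) of the cumulative hierarchy above:
# given how many basic color terms a language has, which categories are predicted.

HIERARCHY = [
    "dark/cool", "light/warm",     # every language has at least these two
    "red",                         # 3rd term
    "green or yellow",             # 4th term (one of the pair)
    "yellow or green",             # 5th term (the other of the pair)
    "blue",                        # 6th term
    "brown",                       # 7th term
    "purple, pink, orange, grey",  # 8th to 11th terms, in no fixed order
]

def predicted_categories(n_terms):
    """Return the categories the hierarchy predicts for a language with n_terms (2-11)."""
    if not 2 <= n_terms <= 11:
        raise ValueError("the scheme covers languages with 2 to 11 basic color terms")
    # beyond seven terms the remaining four categories arrive in no particular order,
    # so they are collapsed into the single final slot here
    return HIERARCHY[:min(n_terms, 8)]

print(predicted_categories(3))   # ['dark/cool', 'light/warm', 'red']
print(predicted_categories(6))   # ends with 'blue'
```

The sketch makes the claim’s strength obvious: knowing nothing but the number of basic terms in a language, you are supposed to be able to predict which categories it has.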
Berlin and Kay also noted that participants always picked what they called “focal hues.” That is, for each color category, participants, regardless of language, picked virtually identical shades in the Munsell color system as the best representative of the color category, say “red” for example. They argued, therefore, that “red” is not simply a linguistic term, it actually corresponds to something biologically basic to all humans. Here we get to the crux of the matter (as well as the difficulties in accepting Berlin and Kay at face value). They are saying that if a group speaks a language with only three color terms, whether they live in the Amazon rain forest, the highlands of New Guinea or a desert in Africa, those terms will be dark, light, and red (or warm, cold, and red), and what people in all of those cultures call “red,” even though they live thousands of miles apart, and have no contact whatsoever, will be best represented by an almost identical shade of red (and the same shade that you will pick as the best example of red).
Berlin and Kay are making the case that there is something universal about color perception and classification, and, as such, they and their kind are called universalists. This idea reminds me of a question that people raise once in a while, namely, “Is what I see as the color blue, the same color that you see, or in your head do you see what I call ‘red,’ but you call it blue?” It is actually a meaningless question, although it does raise additional questions concerning how much we can truly know about the internal workings of other people’s minds, both in our own cultures and in others. Berlin and Kay believe they have the answer to that meaningless question. They are making the remarkable assertion that people all over the world – except those with obvious limitations, such as color blindness – not only see the world in the same way physically, but also process it mentally in much the same way. Houston, I think we have a problem. If they are right, a giant chunk of anthropological theory needs to be scrapped.
The physics of light does not alter wherever you are in the world, and the bio-physics of the human eye probably does not differ substantially from culture to culture either, at least for people with clear vision. Light enters the eye through the pupil, passes through the lens, and hits light sensitive cells on the retina. Admittedly, there are some differences in pigmentation of the eye from region to region, and other structural differences that impact how light is received by the eye, but let’s put those issues aside for the moment and accept the idea that light passing into the eye and on to the retina is the same process everywhere. Now let us consider what happens when the information registered by the retina leaves the eye via the optic nerve and is processed by the brain. Processing information is the realm of cognition, as opposed to perception. Berlin and Kay are claiming, not only that light is perceived physically by all peoples in all cultures in the same way, but also that the way that the information is processed by the brain is the same for all cultures (with varying degrees of specificity). That second claim needs some investigation.
Berlin and Kay are saying that there is a fixed and universal evolution of color terms in all the languages of the world because everyone processes information about color in the same way, no matter what language they speak. Perception comes first, and is basic, and language describing that perception comes afterwards. No matter what culture you belong to, the way you think about color will be roughly the same, with varying degrees of precision. Thus, all cultures differentiate between light and dark colors, and if a culture wants to be a bit more specific than distinguishing between light and dark, they will inevitably pick red as the third color category because the mechanics of perception drive mental processes, and, therefore, drive the way language works. Well . . . a healthy number of anthropologists (not to mention philosophers, psychologists, and neuro-physiologists) do not agree with this conclusion.
On first encountering Berlin and Kay’s work, you might stop and think about black, white, and red in your own culture, and recall the times you see those colors used together. Maybe you will think of warning signs with a black icon on a white background, such as a smoldering cigarette or a person walking, surrounded by a red circle and crossed through with a red line. Black, white, and red are apparently much more effective as warnings on signs than the same warning on a sign in blue, purple, and pink. Black, white, and red seem to be primal. Likewise, traffic signals are red, yellow, and green, the third, fourth, and fifth color terms – equally primal (as long as we do not quibble too much about whether the yellow light is “really” yellow: the color is between red and green on the visible spectrum). Red, yellow, and green were used first for railway signals in the nineteenth century, and then adopted for road traffic signals, because they were obviously distinctive and different. You might, therefore, come to the conclusion that Berlin and Kay are on to something, until you stop and think about what they are proposing.
Berlin and Kay are saying that there is a universal progression in the development of language, and this development is based on fixed human biology: in the brain, not just in the eye. At first, there was not much dissent in the anthropological community, but when the implications of Berlin and Kay’s conclusions were fully grasped there was considerable blowback. Berlin and Kay’s research was aimed directly at the work in anthropological linguistics of Edward Sapir and Benjamin Lee Whorf, who have become immortalized in what is commonly called the Sapir-Whorf hypothesis, which is not really one hypothesis, but a number of related ones that cannot be reduced to a single bald statement. In fact, Sapir and Whorf never worked together nor published together, and the term “Sapir-Whorf hypothesis” was coined later by one of Sapir’s students and was never endorsed by either man. “Whorfian hypotheses” (plural) is more accurate given that Whorf was the leading scholar in the area. What passes as the Sapir-Whorf hypothesis these days has two forms, a strong version and a weak version (neither of which was endorsed by Whorf or Sapir):
- According to the strong version, language determines thought, and linguistic categories limit and determine cognitive categories.
- According to the weak version, linguistic categories and usage only influence thought and decisions.
We would probably put Whorf in the “strong” camp, and Sapir in the “weak” camp nowadays. Whorf gave a number of examples of how language influences thought and action, one of which is fairly commonly known. He claimed that the Inuit had a great many words for snow because snow is an ever-present reality for them, and they have to interact with snow in different ways: food, travel, hunting, housing, etc. Thus, their different words guide how they perceive snow and how they respond to the different types. English has only one word for snow, so we perceive it differently from the Inuit. This claim eventually got inflated into a commonly expressed cliché that “the Eskimo have 100 words for snow” or whatever number pops into the speaker’s head at the time. We can trace this (demonstrably false) assertion back to Franz Boas, who did initial fieldwork on Baffin Island, and through him to Whorf. Boas states in Handbook of American Indian Languages:
Another example of the same kind [like “water” in English], the words for SNOW in Eskimo, may be given. Here we find one word, aput, expressing SNOW ON THE GROUND; another one, qana, FALLING SNOW; a third one, piqsirpoq, DRIFTING SNOW; and a fourth one, qimuqsuq, A SNOWDRIFT. (Boas 1911:25-26).
Whorf followed this assertion with:
We [English speakers] have the same word for falling snow, snow on the ground, snow hard packed like ice, slushy snow, wind-driven snow — whatever the situation may be. To an Eskimo, this all-inclusive word would be almost unthinkable….(Whorf 1940:247).
This idea was picked up by popular writers, such as Roger Brown in “Words and Things” and Carol Eastman in “Aspects of Language and Culture,” and they were quoted in sensationalized stories, so that by 1978, the number of Eskimo words for snow was usually given as fifty. On February 9th, 1984, an unsigned editorial in The New York Times gave the number as one hundred. Don’t put all the blame for the spreading of exaggerated claims and false stories on the internet and social media. These claims have been around a long time. The idea that Eskimos have one hundred words for snow is absurd.
For starters, there is no single language or group of people that can be labeled “Eskimo.” The word “Eskimo” is an outsider word that can be applied to a number of different circumpolar ethnic groups who speak different languages. These languages are usually grouped by linguists into the Eskimo-Aleut language family, also called Eskaleut languages, or Inuit-Yupik-Unangan. Furthermore, what even counts as a “word” in these languages is open to debate because these languages are what linguists call polysynthetic. Look at this “word” in Central Alaskan Yup’ik:
Qayarpaliqasqessaagellruaqa
It can be translated into English as, “I asked him to make a big kayak. (but actually he has not made it yet).” Is it really just one word? Technically it is a single “word” because none of its components, with the exception of “qayar” (kayak), can stand alone. They must be attached to a root. But the separate components have distinct meanings – big, ask, him etc. Are they different words or all parts of one word? If you break Eskimo-Aleut languages down into roots you find that they have very few roots for “snow” – maybe three or four – not fifty or one hundred. Having a few words for snow is no more complex than having a few words for water – ice, steam, water – as English does. The fact that we have different words for frozen-water or water-as-a-gas is not surprising and congruent with Eskimo-Aleut vocabulary. As Boas notes, we also have words for flowing-water (river), big-water (lake, ocean), floating-water (fog, cloud), and so on. When it comes to flowing-water the list is seemingly endless: freshet, rivulet, brook, stream, gill, creek, rill . . . (add your favorite).
Another famous example, explored by Whorf, concerned his work as a fire prevention inspector for a fire insurance company. While inspecting a chemical plant he noted that the facility had two rooms for storing barrels for petroleum products. One was for empty barrels, the other for full barrels. Whorf noted that employees never smoked cigarettes in the storage rooms for full barrels, but routinely smoked in the room for empty barrels. He concluded that the words “full” and “empty” carried different connotations for the workers. The word “full” carried the implication “dangerous” (i.e. don’t smoke nearby), and the word “empty” implied that the barrels were harmless (i.e. OK to smoke here). The opposite is, in fact, the case. Empty petroleum barrels may be empty of petroleum, but they are filled with fumes that are much more dangerously flammable than liquid petroleum.
What do you think? Is Whorf correct that words influence behavior, or is something else going on? When I asked my students this question, one replied, “No, the workers are idiots.” I might be a little more generous and call them “ill informed” or “lacking insight” but I agree. There is no reason to believe that the workers were influenced by a couple of adjectives. Besides, Whorf’s observation is anecdotal – one isolated, and slightly peculiar, example. His speculation is based on his preconceived ideas about how language influences behavior. For the example to carry any weight it would need to be replicated many times with many different words in many different situations.
A more complex example that Whorf used, that is still being modestly disputed, is the Hopi conception of time based on how time is expressed in the Hopi language. Whorf argued that, unlike European languages which treat the flow of time as a sequence of distinct, countable units, such as “five minutes,” or “twelve years,” Hopi language treats time as a single process that cannot be broken into individual units: Hopi has no nouns that refer to units of time. Consequently, the language does not allow the Hopi to carve time into distinct units or to think of time in terms of units at all, and, therefore, affects their behavior with respect to time. Linguists have refuted Whorf’s claims by examining modern Hopi (which has been influenced by European languages), as well as archeological evidence, (including pre-Columbian Hopi calendars), and concluded that Hopi language has always had units of time, and that the Hopi conceptualize time much as Europeans do (and always have).
I have taught English in China, Myanmar, and Cambodia, and I have always had trouble teaching (and even explaining) verb tenses in English. Let me slide over, for the moment, the technical challenges of defining and describing what a tense is linguistically and get to the heart of things. Mandarin Chinese, standard Burmese, and standard Khmer cannot easily express time in the complex ways that English does. There is no easy way to say in those languages, “By this time tomorrow I will have been pain free for two weeks,” or “He has had a difficult time adjusting to the tropical weather.” It can be done, but speakers find the expressions needed to say those things in a precise manner to be cumbersome and unnecessary. But when it comes down to it, so do a lot of English speakers. This is one of the reasons it makes more sense to study what speakers of a language actually say, or typically say, when making statements about language and behavior, rather than drawing conclusions from a technical discussion of the linguistic capabilities of a language. When was the last time you used the future perfect in a conversation? Be honest. Outside the classroom I use the future perfect in normal conversation perhaps twice a year, at the very most – sometimes never.
Yes, I have diagrams which I can put on the board that explain such things as how to talk about events that will be happening continuously in some future time in reference to now, but will be happening in the past in reference to a future point in time. But look at how convoluted that sentence is, and its exemplar: “In an hour that pot will have been boiling for 90 minutes.” Would you ever utter a sentence like that? It can be done, but do you ever do it? More importantly, do you think in terms of time as slipping and sliding around between past, present, and future, both now in the present, and also when you are thinking about the past or the future? That would be a reasonable conclusion for an observer to draw, but I doubt it reflects the reality of how you see time. It is not how I see time, and I am a native speaker of English, with a better than average command of the language.
I have lived for long periods in English-speaking countries, and also in Argentina, China, Italy, Myanmar, and Cambodia where I have a decent grasp of the local languages. Those languages use verb tenses in many different ways, different from each other, and different from English. However, I do not believe that locals in those countries view time in a way that is fundamentally different from mine. Their idea of punctuality often differs considerably from mine, but that is a different issue. Time, when speakers of these languages think about it at all, works in the same way for them as for me. This observation supports a universalist conception of language and thought, like Berlin and Kay’s, that the physical perception of experiences is the same the world over. It is the way that languages treat that perception which differs from place to place. The problem with accepting that analysis uncritically is that the English-speaking world has long arms, and its way of thinking has penetrated cultures all over the world and infected their languages. Different cultures – even vastly different ones – may all view time in the same way now because they have all been indelibly changed by English-speaking culture, and not because there is an underlying biology of perception that we all share.
Whorf and his followers, who are commonly called relativists, argue that we learn language first, as infants, and the particular language we learn shapes the way we subsequently learn how to perceive the world. For relativists, language is the primary variable and perception is dependent on language. As it happens, this stance has some (small) empirical support. If test subjects are shown a variety of paint chips on one day, and then the next day are shown an equal number of paint chips, some the same as the ones shown on the first day, some different, the participants will do a much better job identifying the chips that were the same on both days if they have names for the colors. This holds true even if the shades of the colors are only subtly different. That is, having words for colors aids in perception, discrimination, and memory of those colors.
The strong relativist position, that language determines perception, has few takers these days, but the weak relativist position, that language influences perception, is more widespread. The universalist position, that perception is controlled by our biology and determines how language works, also has few takers in pure form. Paul Kay himself has softened somewhat as more scholars from diverse fields have entered the debate, and as more experiments have been conducted and more languages included in the dataset. Kay now believes that pitting universalist and relativist positions against one another is a mistake, although he still writes like a universalist. He now talks about (almost all) languages as having three broad categories:
- Black and white
- Cool and warm
- Red
I do not want to get any deeper into the debate because it gets very technical, with, in my opinion, preconceived ideas driving the arguments. I, like many scholars, believe that Berlin and Kay’s initial experimental work found the answers they wanted to find because they were (probably unconsciously) built into their methodology. Using a Western scientific color system, and a Western definition of “basic color term” practically guaranteed that they would find what they were looking for. Their research was intrinsically ethnocentric.
Let’s leave color terms behind and look at categories that exist in other languages but are not present in English. I will start with gender and then consider count nouns. In English, living things can be referred to as “he” or “she” (or “male” and “female”) where appropriate, and all other nouns are neuter. But in many European languages, a variety of nouns are classed as masculine or feminine. Do speakers of those languages think of things that have a feminine gender as one class of things, and things with a masculine gender as another class of things? No, they do not. Nor do they consider masculine nouns to have “male” qualities and feminine nouns to have “female” qualities. Look at Italian, for example. Some nouns that denote stereotypically male occupations are feminine (e.g. la vedetta (the sentinel)) and some female occupations are masculine (e.g. il soprano). Some countries are masculine (e.g. il Belgio (Belgium), il Perù (Peru)), some are feminine (e.g. la Francia (France), la Spagna (Spain)), and there is no sense that these countries are more male or more female than others. If you are not a native speaker of one of those languages, you might be tempted to hypothesize that native speakers divide the world into male and female classes. They do not. In the list of readings at the end I have put some pieces that qualify my thoughts here, but I stand by the general statement.
The same is true of measure words, or count classifiers, in Chinese (and other Asian languages). In Chinese you cannot simply say “two cats” or “three books.” You have to use the syntax: number + measure word + noun. There are a great many measure words in Chinese, and you need to learn them early on when you are studying the language because they crop up all the time. There are about 25 that are commonly used, but there are many more. Like gender in European languages, you just have to learn which measure word goes with which noun, because the ways that these measure words classify things are strange to people not from Asia. Some are straightforward. For example, zhī is used for things that come in pairs (although, confusingly, it is also used for small animals), and shàn is used for things that open and close, such as doors and windows. Rather more difficult are piàn, used for flat objects, like cards, slices of bread, and tree leaves, which is different from miàn, for flat and smooth objects including mirrors, flags, and walls, and different again from zhāng, for flat and square or rectangular things, including tables, credit cards, tickets, paintings, and constellations. Trust me, you go loony learning which measure word goes with which noun, and it is virtually impossible to guess if you don’t know. Again, it would be a major misunderstanding of Chinese language and thought to infer from Chinese measure words that Chinese people think of tables, bus tickets, and constellations as somehow the same kind of thing. That simply is not true. We talk about a pair of trousers, a pair of scissors, a pair of shoes, and a pair of hands without thinking that trousers, scissors, shoes, and hands are all the same kind of thing. Why do we even call trousers a “pair”? They are one garment. German, French, and Dutch do not have this problem. They have words we could translate as “one trouser” for one garment.
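To make the pattern concrete, here is a minimal sketch in Python (my own illustration, not anything drawn from Chinese-language pedagogy). The classifier assignments come from the examples above, gè is the common general-purpose fallback classifier, and the nouns are glossed in English rather than written in Chinese characters.

```python
# A toy model of the Chinese "number + measure word + noun" pattern described above.
# The noun-to-classifier pairings are illustrative; real usage is richer and full of exceptions.
CLASSIFIERS = {
    "cat": "zhī",       # small animals
    "window": "shàn",   # things that open and close
    "leaf": "piàn",     # flat, thin objects
    "mirror": "miàn",   # flat, smooth objects
    "ticket": "zhāng",  # flat, rectangular objects
}

def measure_phrase(number: int, noun: str) -> str:
    """Build an English-glossed phrase in Chinese word order: number + classifier + noun."""
    classifier = CLASSIFIERS.get(noun, "gè")  # gè is the everyday general-purpose classifier
    return f"{number} {classifier} {noun}"

print(measure_phrase(2, "cat"))     # "2 zhī cat"   -- i.e. "two cats"
print(measure_phrase(3, "ticket"))  # "3 zhāng ticket"
```

The point of the lookup table is exactly the point made above: the pairings have to be memorized, and nothing about the nouns themselves tells you which classifier they take.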
What this discussion all comes down to is that looking at a language and inferring how people who speak that language perceive the world, or think about the world, is not useful based on that information alone. You have to live with the people and speak the language to begin to understand how they think. The tremendously complicated problem is sorting out how people in other cultures classify the world (or even if they classify the world), and if they have an underlying logic to their thinking that can be determined – and if it is like our own.
What it all comes down to, I believe, is that there is a certain confusion between correlation and causation. There is no question that different groups of people think about their world in fundamentally different ways. There is also no question that different groups of people speak fundamentally different languages. It would not surprise me in the slightest to learn that how they perceive the world and how they describe their perceptions in words are correlated. But which comes first: perception or language? Does the perception create the language or does the language create the perception? The answer is probably, neither and both. I do not believe there is a simple flow of causation from one to the other.
To conclude, let’s take a topic of endless discussion: things you put in your body. Here is a list for you: caffeine, nicotine, marijuana, cocaine, aspirin, morphine, fat, chicken, fish, milk, cheese, bread, rice, pasta, tomatoes, wine, beer, vodka, lettuce, salt, sugar, Prozac, Viagra, bacon, eggs, insulin. You get the idea. Add to the list if you want. Now, sort these items into categories. They could be legal versus illegal, good for you versus bad for you, or they could be more fine-grained such as really good for you, good for you, neutral, bad for you, really bad for you, or something else along those lines. Or the categories could concern how you get them into your body, such as eat, drink, and smoke. You decide on the categories.
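If you want to see how quickly the choice of categories does the real work, here is one deliberately debatable way of sorting a handful of the items, sketched in Python; the category assignments are my own and are exactly the sort of thing you should argue with.

```python
# One possible (and arguable) sorting of a few items from the list, by how they enter the body.
CATEGORIES = {
    "eat": {"rice", "cheese", "bacon"},
    "drink": {"wine", "milk", "caffeine"},   # caffeine usually arrives dissolved in a drink
    "smoke": {"nicotine", "marijuana"},      # nicotine usually arrives in tobacco smoke
    "swallow as a pill": {"aspirin", "Prozac"},
    "inject": {"insulin"},
}

def categorize(item: str) -> str:
    for category, members in CATEGORIES.items():
        if item in members:
            return category
    return "uncategorized"

for item in ["rice", "caffeine", "insulin", "salt"]:
    print(f"{item}: {categorize(item)}")  # salt comes out "uncategorized" -- where would you put it?
```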
Chapter 13: Black and White: What is Race?
For almost a century, anthropologists have argued that there is no biological basis for the category that Westerners (and other cultures) call “race.” By common US definitions, Barack Obama had a White mother and a Black father. Why is he commonly spoken of as Black? Why isn’t he White? Or Grey? Biologically he is a mix. Everyone is a biological mix. Race is not a biological fact; it is a culturally determined category. There is no genetic test that can unequivocally divide people into races, even if you decide on a very large number of races to divide up the people of the world. Genetically we are all on a continuously graded spectrum.
One of my professors at the University of North Carolina used to give a lecture on race in his Introduction to Anthropology class that began with him asking the students to list their categories of race in columns or boxes in their notebooks. Then he would show fifty or so slides of individuals from around the world and ask the students to sort them into their racial categories. It was a trick exercise, of course. His first few slides would conform to well-known stereotypes: maybe he’d start with a Masai, a Dane, and a Korean. Southerners in those days, who made up the vast majority of the student body, usually began with three categories: Negroid, Caucasian, and Mongoloid (or equivalent). You could call them Black, White, and Yellow, if you like (I am being deliberately simplistic). Then he would throw in an Apache and a Pakistani. Those students who had only three categories usually added Red and Brown at that point. With those five – Black, White, Yellow, Brown, Red (you get the idea, even though I am still being simplistic) – the majority were satisfied they had all their bases covered. Then things unraveled.
Are SE Asians brown or yellow? What about Bolivian mestizos? Or Inuit? Or Arabs? What do you do with a blond Australian Aborigine? Of course, I have oversimplified things by using color as the identifier, but that is in part because skin color gets used more often than any other common marker of race. In Argentina we use rubio (white), moreno (brown), and negro (black), most commonly (with some extra bits). In reality, the students in North Carolina all had in mind a complex cluster of physical traits such as eye shape, hair color, hair quality, facial features etc. for identifying race, but that cluster had skin color as its focal identifying point. What the exercise pointed up, as much as anything else, was that they had had very limited experience of people. Their racial categories were based on the people they encountered daily. This was the American South of the early 1970s, where segregated schools, water fountains, bus seats, and restaurants were a thing of the very recent past, and they had all known segregation first hand. Race was an ever-present reality for them, but it had extremely limited application as an identifier. What mattered to them was Black versus White. All other racial categories were, at best, curiosities with minimal application to their daily lives.
For almost exactly a century, from the Emancipation Proclamation of 1863 to the Civil Rights Act of 1964, segregation and discrimination on the basis of race were a legal fact of life in great swathes of the US South. The key issue was how an individual was assigned a race, which had profound consequences, from where you could sit on a bus and which water fountain you could drink from, to what kind of education you could receive and whether or not you could own property or hold certain jobs. What are commonly known as “blood quantum” rules might apply, but there was also the “one drop rule.” The one drop rule, a refinement of blood quantum rules, states that if any of your ancestors was Black, you are Black, no matter how far back in your ancestry this occurred. The underlying principle is that you are assigned to the race with the lower social status if you have any “blood” of that race, no matter how little. In 1822, Virginia passed a law stating that a person was not White if he or she had one Black grandparent. In blood quantum terms that person would be one-quarter Black (one out of four grandparents). Clearly biology is not a factor here, but things got even more restrictive. In 1910 the legislature changed the standard to one-sixteenth, meaning that if an individual had a single great-great-grandparent who was Black, that individual was Black. In 1924, even that extraordinarily restrictive standard was scrapped under the Racial Integrity Act, and a person was legally defined as “colored” (that is, Black) for classificatory and legal purposes if the individual had any African ancestry: the one drop rule.
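The arithmetic behind these thresholds can be restated in one line (my restatement of the prose above, not a legal formula): a single ancestor g generations back accounts for a fraction of one’s ancestry equal to

```latex
\left(\tfrac{1}{2}\right)^{g}, \qquad\text{so}\qquad
\left(\tfrac{1}{2}\right)^{2} = \tfrac{1}{4}\ \text{(one grandparent)}, \qquad
\left(\tfrac{1}{2}\right)^{4} = \tfrac{1}{16}\ \text{(one great-great-grandparent)}.
```

The one drop rule then throws the exponent away entirely: any nonzero fraction, however many generations back, triggers the classification.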
You can see that the one drop rule has little, or nothing, to do with biology. It is a cultural rule that serves as a window on how people think about race, and how meaningless the concept is, both in actual practice and in everyday life. The underlying assumption, not always expressed overtly, is that one’s race determines one’s behavior, and the slightest drop of blood from an “inferior” race supersedes all the blood from a “superior” race.
Thinking that races could be ordered according to ability was initiated in the eighteenth century by Carl Linnaeus and other taxonomists. Linnaeus divided Homo sapiens into the continental varieties (or races) of europaeus, asiaticus, americanus, and afer, each associated with one of the four ancient humors: sanguine, melancholic, choleric, and phlegmatic, respectively. Homo sapiens europaeus was supposedly active, acute, and adventurous, whereas Homo sapiens afer was crafty, lazy, and careless. Linnaeus’ taxonomy, aided by the ordering of peoples in Genesis chapter 10 (the sons of Noah), was used as a justification of slavery of people of African descent by European colonists.
Johann Friedrich Blumenbach’s MD thesis, De generis humani varietate nativa (On the Natural Variety of Mankind), published in 1775, was extremely influential in discussions about race. He proposed five major divisions: the Caucasoid race, the Mongoloid race, the Ethiopian race (later called Negroid), the American Indian race, and the Malayan race. He did not propose any hierarchy among the races, and noted that there are no fixed boundaries between them, but, rather, that they shade gradually from one to another where they are adjacent. Thus, even in the eighteenth century, taxonomists had problems with rigid definitions of race (“you cannot mark out the limits between them,” he wrote). The general popular consensus, however, followed Linnaeus and not Blumenbach.
The United States Census Bureau officially recognizes six racial categories: White American, Black or African American, American Indian and Alaska Native, Asian American, Native Hawaiian and Other Pacific Islander, and people of two or more races. There is also a category called “some other race” used in the census and other surveys, but this is not an official category. The Bureau also classifies Americans as either “Hispanic or Latino” or “Not Hispanic or Latino,” but it identifies Hispanic and Latino Americans as an ethnicity (not a race) distinct from others. On the 2000 census long form there was an “Ancestry Question” which extends to all ethnicities, and thus can include Jewish and Arab as well as Polish or Italian or Irish, etc.
Here the rubber hits the road. Are Jews, Hispanics, Arabs, and the like, distinct races or not? For a time, mostly in the twentieth century, la Raza (literally, “the Race” – capitalized) was used by activists in the Americas, and is still used by some people of Mexican heritage living in the United States, to demarcate people of Hispanic heritage. The term has now largely been dropped in favor of Hispanidad (Hispanic), because it was clear, almost from the outset, that the people included in the grouping had language in common, but not much else, and certainly not physical features. The term was originally short for la raza española, introduced by Faustino Rodríguez-San Pedro y Díaz-Argüelles in 1913 with his proposal for a secular fiesta de la raza española (Spanish-race Festival) on October 12th to replace Columbus Day celebrations, which were widespread at the time across the Americas; October 12th, 1492, being the date that Columbus first made landfall.
“Are Jews White?” is another question that gets raised from time to time. Hitler and the Nazis certainly did not think so. Nazi “scientific” classification of races tended to be vague, because there is nothing scientific about it. The supposedly superior Aryan race was the Nazi archetype of White-ness, and Jews did not qualify. Jews purportedly sprang from “near-Asian” or Levantine (inferior) stock. The farther east from Germany you went, the worse things got. Slavs and Romani (Gypsies) came from bloodlines rooted in regions east of the prime Nordic and Alpine regions, the purported heartlands of the best of Aryan stock, and, hence, were inferior, but Jews originated even farther east. The Nazi conception of the Aryan race arose from earlier proponents of a supremacist conception of the race as described by racial theorists of the nineteenth century.
Nazi racial theorist Hans F. K. Günther identified the European “race” as having five sub-races: Nordic, Mediterranean, Dinaric, Alpine, and East Baltic. The Nordics were the highest in the racial hierarchy amongst the five. Günther said that Germans were composed of all five European subtypes, but suggested that Nordic heritage was strongest amongst Germans. I am reminded of a book, The Races of Britain, which I found in my school library in the 1960s. It was mostly a picture book with some text accompanying the photos, based on an 1885 book of the same name by John Beddoe (https://archive.org/details/racesofbritainco00bedd/page/n7). The original is more text heavy than the book I first saw, with lithographs rather than photographs, and a great many tables indicating distribution. I recommend that you browse the original using the link I have given, and, if nothing else, look at the incredible specificity in the classification. Supposedly people from Boston, on the coast in Lincolnshire, are distinguishable from people from Lincoln, farther inland. This is a laughable idea, even for the nineteenth century, when travel, although more accessible than in previous centuries, was still quite limited by modern standards.
Hitler classified the British as Aryans of an inferior sort because they comprised only 60% Nordic stock, leavened, no doubt, by Viking and Danish invaders adding to the Angle and Saxon blood of lowland German conquerors, before the Norman conquest, but diluted by Celts and Romans before that. His idea was to boost the Nordic and Alpine “blood” in German stock, and to eradicate people, such as Slavs, Gypsies, and Jews who could “contaminate” that blood. At the root of this all was the notion that behavior was driven by biology. What is usually forgotten is that Hitler and the Nazis got their ideas on race from theorists in the United States, where racial eugenics, the practice of sterilizing people with “inferior” blood, so that they could not “contaminate” the general population, had been practiced for some time.
The US eugenics movement was driven by the biological determinist theories of Francis Galton from the 1880s. Based on studies of the British aristocracy, Galton argued that they owed their elite positions in society to superior breeding, and that the lower classes were of inferior stock (a misapplication of Darwinian theory). US eugenicists, following Galton, argued that selective breeding could be used to direct the evolution of the human species, and, of course, believed in the genetic superiority of Nordic and Germanic (including Anglo-Saxon) populations. The US eugenicists promoted strict, selective immigration policies, anti-miscegenation laws, and the forcible sterilization of the poor, disabled and “unfit.”
Eugenics was widely accepted in the U.S. academic community and subsequently spread to Germany. California eugenicists routinely sent their literature to German scientists and medical professionals. By 1933, California had subjected more people to forceful sterilization than all other U.S. states combined. The forced sterilization program engineered by the Nazis was partly inspired by California’s. The Rockefeller Foundation helped develop and fund various German eugenics programs, including the one that Josef Mengele worked in before he went to Auschwitz.
The California eugenics leader C. M. Goethe bragged to a colleague, after returning from Germany in 1934 where 5,000 people per month were being sterilized:
You will be interested to know that your work has played a powerful part in shaping the opinions of the group of intellectuals who are behind Hitler in this epoch-making program. Everywhere I sensed that their opinions have been tremendously stimulated by American thought … I want you, my dear friend, to carry this thought with you for the rest of your life, that you have really jolted into action a great government of 60 million people. (http://www.newsreview.com/sacramento/darkness-on-the-edge-of/content?oid=27587 )
The African American historian and sociologist W. E. B. Du Bois argued that there is a difference between “racialism,” the philosophical position that races exist, and “racism,” the argument that one race is superior to other races. Du Bois was also a firmly committed eugenicist, believing that only about 10% of African Americans were biologically fit to reproduce. Whether you are a racialist or a racist (which seems to me to be a distinction without much of a difference), you believe that races exist, and that different races have different stereotypical characteristics. These differences may be nothing more than physical differences, but it is quite common to believe that there are characteristic behaviors that go along with the physical differences. So, where did the idea of race as a physical way of grouping people come from in the first place? A complete answer to that question is much too complicated to detail here, but I can point in the general direction. It is an extremely old idea and is widespread historically. A third-century Han dynasty historian in China describes “barbarians” as having blond hair and green eyes like “the monkeys from which they are descended” (Gossett 1997:4).
The ancient Greeks classified all non-Greeks as barbarians (although the connotation is not quite the same as the modern one), and attributed differences in the physical appearance of populations to their environment, especially climate and geography. Barbarian status was not fixed, however. One could become Greek by adopting Greek culture. Hippocrates of Kos, the great physician, ascribed personality traits to populations based on where they lived. Warm climates led to people who were indolent and unwilling to work, whereas colder climates produced vigilant and industrious workers. Rugged mountainous terrains led to enterprising and warlike peoples, but well-watered lands produced civil, gentle populations.
Distinguishing “us” from “them” is universal in one form or another. Organizing “them” into categories has, historically, been very much a matter of what kinds of peoples you have experience of. The ancient Middle East was (and still is) a crossroads of cultures, with peoples from Asia and Africa in direct contact (and conflict), and peoples from Europe adding to the mix. Genesis 10 separates out three distinct groups based on the three sons of Noah – Shem, Ham, and Japheth – who were the sole survivors on earth of the Flood (along with Noah and their wives), and were responsible for repopulating the earth when the waters had cleared. It is conventional to place Shem in Asia, Ham in Africa, and Japheth in Europe, but that is simplifying things a little too much. The ancient Near East, according to Genesis, was a melting pot of all three. If you are interested you should investigate studies of Genesis 10, and I will give you some readings at the end of the chapter. You need to be a little careful though, because interpretations of Genesis 10 are deeply colored by the biases of the authors.
Drastically simplifying for the sake of brevity, descendants of Shem originally populated the area to the north and northeast of the Levant. One descendant, Terah, came out of Mesopotamia and settled immediately to the north of Canaan, and his son, Abraham, migrated south to the land occupied by Canaanites. Canaanites were descendants of Ham, and related to peoples in Egypt and north Africa (Canaan and Egypt were brothers). According to Genesis 9, Ham saw Noah naked when Noah was drunk, and Noah cursed him for it, but it was Canaan who bore the curse. He was to be the servant of the descendants of Shem, whereas the descendants of Japheth were to be foreigners, but good guys. What you see here is a window onto the actual political struggles of the times, cast in a certain light by the descendants of Abraham (that is, the tribes of Israel). Their vision was that the descendants of Ham were the indigenous peoples of the Levant (related to north Africans), but their land rights were forfeit to the descendants of Shem. The descendants of Japheth, such as the Kittim (from Cyprus), were trading partners who were not a threat.
The taxonomy in Genesis 10 gets many things wrong in actual historical terms. For example, it places the Philistines as descendants of Ham, whereas modern archeology tells us that they are likely Aegean in origin. For Genesis, what matters is how you relate to “us.” The Philistines are our persistent enemy occupying land that belongs to “us”; therefore they must be descended from a hostile branch of Noah’s family (that is, Ham). All of this classification of peoples (goyim in Hebrew) is cast in terms of lineage, not race. As such, it is closer to a concept of ethnicity than of race. People are ethnically related if they share customs, especially language, rather than because they look alike. In modern terms we would say that ethnicity is a cultural term, and race is a biological one. It is anachronistic to apply terms such as “ethnicity” and “race” to the theorizing of ancient peoples, because lineage, “blood,” customs, language, and physical appearance were all jumbled together, but lineage was the key factor in classifying people.
You cannot say that the descendants of Shem, Ham, and Japheth are different races biologically as they are defined by Genesis. The lineages of each son are quite distinct from each other, and physical characteristics are almost never mentioned. It was not until much later in history, when European voyages of discovery led to colonization (which included slavery and the oppression of colonized peoples), that notions of race superseded lineage and ethnicity as means of classifying peoples. In words of one syllable, the biological concept of race is a tool of oppression, has been for several hundred years, and remains so. But by replacing the concept of race with the concept of ethnicity (or culture), you are not out of the woods. Comparing groups of people using any scheme of classification is going to run into problems. The mere grouping of people is problematic, as anthropology has discovered.
To be fair, when I walk down the streets of Phnom Penh or Mandalay or Kunming or La Paz, I might as well have “foreigner” tattooed on my forehead. Not so in Buenos Aires or Adelaide or Mantua or New York. Why is that? There are certainly cultural qualities in the mix, such as my clothing and general demeanor, but it mostly concerns my physical appearance. I have seen people in China bump into things because they were staring at me rather than looking where they were going. Tuk-tuk drivers in Cambodia endlessly hassle me to take rides with them but have no interest in local people (even though locals use them much more than foreigners). There is a price range for locals and a higher one for foreigners (which can be bargained for, but is still higher for foreigners than the local rate even when they are beaten down, and even when you bargain in Khmer), so getting a fare from a foreigner is always a bonus.
My physical appearance stands out in some countries and not in others. You can think of this as a simple fact of biology, but it really concerns the cultural rules that determine who “looks like me” and who does not. Where are the boundaries between “same” and “different”? Once you start classifying things that exist on a continuum you are going to have problems with the boundary zones between categories. I discussed this problem in chapter 00, in terms of life stages, but the analysis applies to race and culture as well.
If you take West Africans out of their homelands as slaves and dump them down in a colony of people from Britain in North America, you are not going to have any trouble sorting out the Africans from the British. Now, draw a line on a map starting from Karachi in Pakistan and keep heading east through India, Bangladesh, and Myanmar to Yunnan in SW China. Imagine taking a journey along that line. You will cross national borders along the way, but where are the borders between cultures or between races (or ethnicities)? You won’t find them because they do not exist. You will, however, find specific zones, or heartlands, where certain common physical features, languages, and customs cluster. The great mistake that early twentieth-century anthropologists made was in treating those heartlands as diagnostic of race and of culture, and leaving aside the blurry areas at the margins. They also did not take into account sufficiently the massive effects that colonialism had had on indigenous peoples, nor the ways in which cultures exchanged ideas (and intermarried). That said, the existence of those heartlands is important.
Draw another line on a map from Madrid to Berlin. Now imagine traveling along that line, sampling local dishes along the way. Like race/ethnicity and culture along the line from Pakistan to China, you will find that there is no clear boundary line that marks the transition from Spanish cuisine to French, or from French to German. In fact, you will find a number of dishes along the route that are almost identical, such as a breaded and fried veal cutlet. The names will be different, but the dishes are essentially the same. Nonetheless, the lack of boundaries and the existence of common foods, does not mean that we cannot legitimately talk about Spanish, German, and French cuisines as distinctive and different. They are. We just have to think in terms of a mix of core ideas at the center that dilute as you move outward, rather than in terms of rigid and bounded definitions.
With this kind of thinking we are on shaky ground when it comes to pinpointing the origin and defining characteristics of a cuisine, culture, or ethnicity. When it comes to race there is no ground under us at all. People vary genetically in gradations from place to place with zero identifiable boundaries. D. J. Witherspoon and colleagues make a bold (and often contested) claim that there is more genetic variation within populations than between them. They sampled the DNA of populations from Africa, east Asia, and Europe, found more variability within the populations than between them, and concluded that there is no reliable genetic test for identifying place of origin (Witherspoon et al. 2007). Simply put: biological race has no scientific basis – period. This should give you pause when you consider investing money in a DNA testing protocol which purports to be able to determine your ancestry.
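If you want an intuition for what “more variability within the populations than between them” looks like, here is a minimal, purely illustrative simulation in Python. The two allele frequencies (0.50 and 0.55), the number of loci, and the crude scoring are invented toy numbers, not real genetic data, and the sketch is mine, not Witherspoon’s method.

```python
import random

def simulate_population(freq, n=1000, loci=100):
    # Score each individual by how many of their loci carry a given variant,
    # drawn independently with probability freq (a deliberately crude model).
    return [sum(1 for _ in range(loci) if random.random() < freq) for _ in range(n)]

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

pop_a = simulate_population(0.50)  # hypothetical population A
pop_b = simulate_population(0.55)  # hypothetical population B, slightly different frequency

within = (variance(pop_a) + variance(pop_b)) / 2  # average spread inside each group
between = variance(pop_a + pop_b) - within        # extra spread contributed by the group difference
print(f"within-group variance:  {within:.1f}")    # roughly 25 with these toy numbers
print(f"between-group variance: {between:.1f}")   # roughly 6 with these toy numbers
```

The specific numbers are beside the point; the shape of the result is what matters. Even though the two simulated groups differ on average, most of the spread sits inside each group, which is the pattern described above for real human populations.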
Chapter 14: Tag, You’re It: Magic, Religion, and Science
Here is a somewhat dated UC Berkeley course catalogue description of an anthropology course, “Magic, Religion, and Science”:
We are now in a world where science has set itself up as the supreme arbiter of rationality. This outward peace conceals a great inward struggle and transformation. Anthropologists from Frazer to Lévi-Strauss have attempted to trace the genealogy of science and have found its foundations, paradoxically, in the realms of primitive magic and comparative religion. Their absorbing studies have shown how notions of physical causation, empirical observation, or rational deduction – the mainstay of science – are equally prevalent in natural magic. Ideas of genesis, order, or chaos, derive their reference and significance from religion. Divergent beliefs about rituals provoke sharp disputes on the efficacy of miracles. This course invites you to examine the boundaries and interrelations of fundamental categories called Magic, Religion and Science. . . .The goal of the course is to rethink, in critical depth, those aspects of knowledge, faith and action, which do not fit accepted categories and explore their implications for contemporary social life.
There’s a certain amount of sloppy thinking in this piece, but the germ of what I want to say in this chapter is contained in it. I don’t think it is paradoxical that contemporary science has its roots in magic and religion: just the opposite. I think all three are deeply entwined. The distinction between magic, religion, and science is a construct of modern academia. Most people in most eras and most cultures have not made such a distinction, consciously. Most people go about their daily lives doing what works for them without worrying too much about why it works or what the underlying reasoning is. Experience, not explanation, is usually what counts. The science versus religion pseudo-debate does come to the fore a great deal these days, however, and, if nothing else, I will show in this chapter that it is a phony debate based on dubious premises (on both “sides”).
Magic, religion, and science became entrenched as three distinct categories in anthropology in the nineteenth century as part of general evolutionary thinking, and they broadly match modern conceptions of the different ways of thinking about how things work, although not for the same reasons. General evolutionists in the nineteenth century argued that there were inevitable stages in the progression of all cultures – with names such as savagery, barbarism, and civilization – and each stage in this cultural evolution was associated with certain kinship systems, technology, political structures, and belief systems (in strict progression from magic to religion to science). Thus, magic was the most primitive worldview for understanding how things work, religion was one step up from magic, and science was on the top step.
Magic, religion, and science used as distinct intellectual categories (without those names) can be dated back to the sixteenth and seventeenth centuries in European history, during periods commonly known as the Reformation and the Enlightenment, although “enlightenment” is certainly a loaded term suggesting, “We finally saw the light, and started using science instead of religion to address questions concerning how the world works.” It’s probably closer to the mark to say that some intellectuals in those periods started teasing apart the realms of human endeavor where science could be profitable, and those where religion and magic worked better. Isaac Newton, one of the great pillars of scientific thinking in seventeenth-century England, was both a devout Christian and an alchemist, as well as a brilliant scientist and mathematician. He most definitely did not believe that science was the answer to every question. Indeed, I would venture to say that in every culture, historical or modern, what we can academically pull apart as magic and religion and science all play a part. Our first order of business, therefore, is to come up with a reasonable way of distinguishing them, realizing that our exercise is an intellectual one, not a pragmatic or empirical one, and that we’re not talking about watertight categories, but, rather, focal points within ways of thinking that sprawl around in overlapping ways. At heart, I would argue that all three seek to reduce human stress by different methods, based on different ideas of what works and why it works. Underlying all of them are complex notions of control (or quite deliberate avoidance of it).
First, we have to distinguish between natural and supernatural – itself an area of heated debate. Very simply (too simply) we can say that the supernatural is that which cannot be explained by natural causes. The big problem is to define “natural.” There really are no satisfactory answers here. Usually something to the effect of “according to the laws of physics” gets offered. But that’s just circular. You might just as well say “according to the laws of nature.” What are the laws of nature? That’s the billion dollar question, and that’s exactly what science seeks to discover. A more productive question might be: “What are the limits of scientific inquiry?” and “Can magic or religion help when science fails?”
Scientific method attempts to codify the laws of nature by repeated experimentation and verification. Thus, for example, we have Newton’s law of universal gravitation boiled down from countless observations. From high school physics I still remember the informal statement of the law: “all bodies in the universe attract all other bodies in the universe with a force proportional to the product of their masses and inversely proportional to the square of their distance apart,” or, even more informally, “things attract each other more strongly the bigger they are, and less strongly the farther they are apart.” So, we remain firmly attached to the earth because it is very big and we are very close to it. In the absence of forces pushing against gravity we stay on the ground. Newton’s law has been superseded by Einstein’s general theory of relativity, but it still works pretty well here on earth in everyday life (as do all of Newton’s laws of physics). If I see someone floating off the ground, I assume there is some force involved that is countering gravity. Gravity is natural. If a person is floating in the air and there is no natural force countering gravity, that would be a supernatural event. People don’t hover above the ground without help. It’s a question of whether the help is natural or supernatural. Contemporary magicians perform what appear to be acts of levitation on their assistants, but we all know that what appears to be a supernatural act is a trick using physics, that is, natural forces, not supernatural ones. But . . . does the supernatural exist?
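For readers who want the formal version of that informal statement, the law is usually written as follows (standard physics, nothing specific to this book):

```latex
F = G\,\frac{m_1 m_2}{r^{2}}, \qquad G \approx 6.674 \times 10^{-11}\ \mathrm{N\,m^{2}\,kg^{-2}}.
```

Plugging in a 70 kg person, the mass of the earth (about 5.97 × 10^24 kg), and the earth’s radius (about 6.37 × 10^6 m) gives a force of roughly 690 newtons, which is simply that person’s everyday weight: the formula and ordinary experience agree.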
Because the existence of the supernatural runs counter to many people’s understanding of modern science, they dismiss it as non-existent – preferring to rest their faith on science alone. That’s fine with me as long as they understand that this is their act of faith and not pure rationality. We could get into a long philosophical wrangle here about the efficacy of the scientific method, the limits to observation, indeterminacy, inductive versus deductive logic, Occam’s razor, the hypothetico-deductive model and all the rest of it, but I’ll leave you to research that yourself. All we need to know to press forward is that some worldviews require a belief in the supernatural. Whether you believe in it or not is irrelevant. You may be curious to know why some people believe in the supernatural – miracles, God (or gods), angels, demons, and other supernatural beings – or, conversely, you might want to know why some people do not. I’ll leave that up to you. The supernatural, by definition, is not open to investigation using logic or scientific reasoning or standards of evidence. The supernatural is outside scientific reasoning, but that does not mean that it does not exist.
We can distinguish, still being overly simplistic, between science on the one hand, and magic and religion on the other, by saying that science seeks natural causes for events, but magic and religion sometimes invoke the supernatural (not always). The difference between magic and religion is fraught with problems because the two can often be found in conjunction. My hopelessly inadequate heuristic method is to say that magic involves a kind of cause and effect, and religion does not. Magical thinking and practice derive from the belief that the world, natural and supernatural, is connected in a gigantic whole: everything affects everything. Acting on one thing in one place, therefore, can affect other things in other places. The important point is that human intention plays no part in this system. If you step on the root of a sacred tree that is forbidden it does not matter whether you did it accidentally or deliberately, the consequences are the same. Western superstition fits into this framework. I doubt many people break mirrors intentionally, but they get seven years of bad luck whether they intended to or not.
Within religious belief systems, intention matters, although what counts as intention is not monolithic. Both the Hebrew Bible and the Greek Bible are replete with stories of people getting what they wanted from God because their hearts were pure, or not getting what they wanted because they had bad motives. Islam, Buddhism, Hinduism, Sufism, etc. all follow suit in some fashion or another. They all have a moral code setting out good versus bad things, with consequences for each. Individuals are active agents whose motives influence outcomes. But here is also where my heuristic method breaks down a little. Buddhism is most definitely a religion, yet, like what I have defined as magic, it believes that everything is connected to everything else, and that everything can influence everything else. According to certain foundational beliefs within Buddhism, human intention plays a vital role in connecting with the interconnected whole, but intention has to be abandoned at some point: the “self” has to be abandoned and intention along with it.
It is a sad, but simple, truth that when people in general, and anthropologists in particular, start a sentence with “Religion is . . .” that sentence is inevitably wrong or incomplete. In popular discourse this happens because when people say “Religion is . . .” they typically mean, “The religion that I am most familiar with is . . .” They are not thinking about Siberian shamanism or Amazonian animism, but, rather about the Baptist church or Jewish temple they went to as children and extrapolating from there to all religion, without adequate knowledge. One would like to think that anthropologists are immune to this kind of ethnocentrism, but, sadly, they are not entirely free of it. Religion can end up being a catchall category for “the things that people do that have no scientific justification.” If a group performs a ritual before planting crops and the anthropologist can find no justification for the act in terms of promoting strong plant growth, it gets marked down as a religious (or magical) act. Therefore, you have an automatic division between what the anthropologist determines “actually” works (according to the laws of science), versus what the people believe works (religion and magic).
What has been bundled together worldwide as “religious” acts by anthropologists, is enormously varied, and within cultures there are always people who are strong believers (maybe even set aside as skilled practitioners), ordinary believers (of many flavors), and non-believers. Yet when you read ethnographic descriptions of religion in a culture, you are usually left with the impression that all people in the culture have the same beliefs. This is simply not true. At the risk of being proven wrong (which is a distinct possibility) I would venture to say that every culture throughout history has housed at least one religious skeptic, if not downright atheist. And, there are some religions that tolerate atheism as a reasonable position to hold.
Over the years, Christianity and Judaism have blown hot and cold on magic as I have defined it here. One of the major components of the Protestant Reformation was to strip what Reformers conceived of as magic from the church, including such practices as purifying with holy water, wearing sacred medallions for good fortune, touching relics of saints, mechanically reciting prayers as if they were magical incantations, getting time off in Purgatory through the performance of certain acts (usually financial), and so on. As much as anything else, the Reformers complained that most of these practices were moneymakers for the church preying on the gullible. Here we have a marked division between what the Catholic church said (and says) officially about certain practices, and what everyday people believe. Let’s delve a little deeper into classic anthropological discussions of magic.
James George Frazer in The Golden Bough (1906-1915), laid down the evolutionary theory that magic precedes religion, which precedes science in the development of cultures, and gave definitions for each that are consistent with the ones I have given here. He further divided magic into sympathetic (or imitative) magic – “like produces like” – and contagious magic – physical contact has permanent magical effects. Frazer’s whole theoretical framework is no longer considered legitimate within anthropology, but parts persist.
Frazer’s great, and irremediable, flaw was to take practices from around the world that had been reported by travelers or were recorded in documents of various ages, and assemble them into a gigantic compendium with no consideration for cultural or historical context. As was normal in his day, he did no fieldwork and did not assess the reliability of his sources. His overarching aim was to show that the story of a culture hero who was born of a virgin, performed miracles during his lifetime, was killed and buried, but came back to life, was the underpinning of all religions in agricultural societies, and it mirrored the cycle of the seasons: birth in the spring, growth in the summer, and harvest (death) in the autumn, followed by a winter of anxiety, relieved by new birth the next spring. That is, Christianity is no more than one variant of a universal religious narrative.
One of life’s great ironies is that Bronislaw Malinowski, pioneer in detailed participant-observer fieldwork, was led to the study of anthropology through reading Frazer while recuperating from an illness (at least, according to his own account), yet his intensive fieldwork methods that followed his conversion from philosophy to anthropology upended Frazer’s methods and his conclusions. To make sense of magic and religion we need detailed studies of individual practices in context, not grand sprawling speculations. Yet, the latter continue to be popular with people with little knowledge of anthropology. In the 1970s and 1980s, Joseph Campbell had a following for his books on myth that did little more than expand on Frazer in a general way. Campbell was a literary scholar, not an anthropologist, and while Frazer was rejected in the early twentieth century by anthropologists, his work lived on with writers and artists who were more enamored of his imagery than the legitimacy of his theory. Thus, Campbell was able to build on Frazer to popular acclaim from a public seeking simple answers to complex questions. Building a magnificent mansion on disastrously crumbling foundations is not a good plan.
You will have to decide for yourself whether there is a magical or religious consciousness that is somehow universal throughout humanity or not. It is beguiling to think that we are all brothers and sisters under the kaleidoscopic variety: all deeply searching for the same thing in different ways. Or, are we all fundamentally different? Are Odin, Yahweh, Allah, Zeus, Vishnu, etc., all manifestations of a universal human yearning for a supreme being who has all the answers to life’s mysteries, or are they all different, with different meanings and purposes?
Teasing out magic, religion, and science, as I have done here – for pedagogic purposes – may serve no greater function than intellectual neatness at the expense of genuine profundity. Let’s take Frazer’s conception of contagious magic as a small case study. Contagious magic entails the belief that when you touch something or somebody, its properties, natural or supernatural, impact you in some way. There are a great many instances of such belief worldwide. In the Catholic church, for example, it has long been held that the physical remains, or relics, of saints have the capacity to perform miracles, and that touching them, or touching something that has touched them, can have miraculous power. In classic Judaism, if a person touches something that is deemed unclean (pork, a corpse, a menstruating woman), that person becomes unclean and must be purified to become clean again. Nor can you touch something that has touched something unclean. You cannot eat off plates that have served pork, for example, even if they have been thoroughly sterilized, because the uncleanness is not physical but spiritual. Within Buddhist tradition touching the ground with bare feet is a fundamental component of a great many rituals. In many Buddhist traditions you must remove all foot covering when entering temples because only your skin can touch the ground inside them. Furthermore, Buddhist monks are not supposed to touch other people, especially women, and other people are not supposed to touch them, unless in an emergency, such as a medical situation.
Thinking superficially about touching cross-culturally in this way you could be led to believe that there is something universally magical about touch. But as soon as you get into particulars you can see that they are all different actions with fundamentally different meanings. The Jewish prohibitions about touching unclean things concerns the bad consequences that will result, whereas the Catholic advocacy of touching relics concerns the potentially good results. You can argue that they are two sides of the same coin, but the specific emphasis is quite different. Touching the ground and touching other people in Buddhist tradition are in themselves different kinds of acts, and they have very different meanings, and have little or no relation to touching in the Judeo-Christian tradition. Lumping all acts of touching under one theoretical umbrella is sloppy thinking. Ultimately, lumping behaviors under the headings of magic, religion, and science is sloppy thinking also, although we do not have to throw out the divisions completely.
Anthropologists have long noted that modern natural science is very much akin to what is called magic in that it rests on the notion of inevitable cause and effect, but does not accept the existence of the supernatural realm as a component of causality, and has no place for intentionality. If you place zinc in hydrochloric acid you will get hydrogen and zinc chloride whether you do it for malicious or benign reasons. Action at a distance is also a contested difference between magic and science. The physicist will tell you that the farthest stars from earth at the fringes of the universe exert a gravitational force on the earth, but they are so impossibly far away that the force is negligible, and it takes an incredibly long time for the force to work. It is there, though. Magical practitioners, on the other hand, might try to convince you that they can affect the behavior of others at a distance – instantaneously – using magical spells and incantations.
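For the record, the zinc reaction mentioned above balances as follows (standard chemistry; the outcome is the same whatever the experimenter intends):

```latex
\mathrm{Zn} + 2\,\mathrm{HCl} \longrightarrow \mathrm{ZnCl_{2}} + \mathrm{H_{2}}\uparrow
```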
The question remains, if magic is simply bad science why do people continue to practice it? One answer lies in part in understanding when magic is practiced, what its purposes are, and how it views causality. According to Bronislaw Malinowski, people use magic commonly when the outcome of an action is in doubt. For example, foragers need magic much more when they hunt animals than when they collect fruits and plants. Gathering is based on extensive knowledge of local biology, and, in addition, nuts and berries don’t run away when you try to pick them. Its outcome is reasonably assured. Animals, on the other hand, can easily make themselves scarce when humans are in the vicinity, and don’t hold still to be killed.
“Even so,” the scientist says, “surely magic does not work and should have been abandoned long ago.” Anthropologists and others, however, have shown repeatedly that, under the right conditions, magic does work although not necessarily for the reasons the practitioners claim. In “Baseball Magic,” George Gmelch (1978) runs through a string of superstitious practices in baseball from outfielders tagging second base on the way to the dugout to lucky amulets and ritualized behaviors with no rational link to desired outcomes (for example, a pitcher not shaving to preserve a winning streak). The thing is that both pitching and hitting in baseball are subject to a slew of non-quantifiable factors. A batter can be in a slump even though his mechanics are fine; a pitcher can pitch extremely well and still lose the game. The players resort to magic to get an edge because the game (hitting a speeding round ball with a round bat) is uncertain, and a lot rides on being successful (including big salaries and bigger egos). There’s a great deal you can do “scientifically” to improve your game, but there’s still an awful lot left to chance. Naturally this situation leads to stress, which can definitely be harmful. A batter in a slump is not helped by stress at the plate. So, if he can reduce his stress, his chances of hitting safely go up. Therefore, if magic reduces stress, it helps the batter. In other words, magic can work. The trick is, of course, that you have to believe in it for it to work. Skepticism kills magic (according to this analysis).
What anthropology reveals time and again is that when people have a problem, they do the things that they believe will work for them. People in all cultures, at one point or another in their lives, will likely resort to what I have loosely defined as magic or religion or science (or some combination), depending on the situation and personal beliefs. Bearing in mind that this is an intellectual exercise with all manner of weaknesses, I can break down magic, religion and science according to this tabulation:
| | Supernatural | Cause & Effect | Intention Matters |
| Magic | Yes | Yes | No |
| Religion | Yes | No | Yes |
| Science | No | Yes | No |
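As an illustration of how tidy – and how leaky – the tabulation is, here is a toy Python encoding of it. The function is my own sketch of the chapter’s heuristic, not a serious analytical tool, and as the Buddhism discussion above shows, plenty of real cases refuse to fit.

```python
# A toy encoding of the three-way heuristic tabulated above (illustrative only).
def classify(supernatural: bool, cause_and_effect: bool, intention_matters: bool) -> str:
    if supernatural and cause_and_effect and not intention_matters:
        return "magic"
    if supernatural and not cause_and_effect and intention_matters:
        return "religion"
    if not supernatural and cause_and_effect and not intention_matters:
        return "science"
    return "does not fit the scheme"  # which is exactly the chapter's caveat

print(classify(True, True, False))   # magic
print(classify(True, False, True))   # religion
print(classify(False, True, False))  # science
print(classify(True, True, True))    # does not fit the scheme (compare the Buddhism case above)
```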
Imagine you have an ongoing chest pain that is severe at times. What would you do? Maybe you’d do nothing at first, but if it persists, I expect you would go to the doctor. It is possible you could say a prayer when you first get the pain and before you see the doctor, or maybe hold your lucky amulet. If the pain goes away, you might take this as proof that prayer or the amulet work, even though there is no scientific reason that they should. If you see a doctor you could get a diagnosis of, let’s say, acid reflux, and be prescribed medication and dietary limitations, with an estimated 95% chance that these will be successful in reducing or eliminating the pain. A scientifically minded person will take the medication and follow the diet with the reasonable expectation (95%) that the pain will go away. Chances are that prayers or an amulet will not make a reappearance.
Now imagine that you go to the doctor and, after extensive tests, you are told that you have a virulent form of lung cancer with a 95% chance of dying within 6 months. Then what? Do you simply accept your fate, trusting the rational judgment of science? Do you simply follow the treatments recommended by your doctor, hoping that you are in the lucky 5%, or do you resort to a little magic or religion to increase your chances? You can be honest; this is not an exam and no one is going to grade your answer. Or we can be more neutral and ask, what people in general do. When my wife was diagnosed with cancer and given not a 95% chance, but a 99.9999% chance of dying within a year, she did all the treatments that doctors offered for a while, but then gave them up when they were clearly not working and resigned herself to her fate. She was an anthropologist and knew all about the realms of magic, religion, and science, and chose to believe science. Nonetheless, friends and family constantly sent suggestions that had no basis in science whatsoever. Why do you think they did that? My immediate answer is that some people will never give up hope, even when modern science says there is no reason for hope. Lurking in the background is the, obviously correct, belief that modern science does not know everything. It has limits. You can either push the limits, or else try something that is outside the scientific worldview.
We know with an absolute certainty that the scientific consensus can be completely wrong. For centuries, scientists around the world were convinced that the earth was the center of the universe, and the sun, moon, planets, and stars all revolved around the earth. Observation of the sun, moon and stars appeared to confirm that belief, but the movement of the planets caused problems because they do not move smoothly around the earth. They move in one direction most of the time, but, from time to time, they move in the opposite direction for a short while, and then they return to moving in the old direction. Astronomers came up with numerous theories to explain the planetary motions, but they were never satisfactory. Copernicus (and others) hit on the idea that the sun was at the center of the solar system, and all the planets revolved around it, with only the moon revolving around the earth. But he proposed this theory because it made the equations work better, not because he had an explanation for why the earth moves, and what keeps it in orbit.
It was not until Isaac Newton came up with his laws of gravity and motion that science had the answer. The first law of motion says that, left to itself, the earth would keep moving in a straight line at constant velocity, while the law of gravity continually pulls it towards the sun. The two effects are almost exactly in balance, so the earth neither spins off into space nor crashes into the sun, but instead orbits it. That explanation worked for a long time until Einstein upended Newton’s ideas of motion and gravity with his theories of special and general relativity. And so it goes on. Shortly after Einstein’s revelations, quantum mechanics upset the applecart yet again, and there seems to be no end in sight. Check out quantum entanglement if you like puzzles. Einstein called it “spooky action at a distance” because it made no sense and could not be explained. The short version is that two particles that are entangled can be separated by great distances, yet with no obvious means of communication between them will act in completely connected ways. Do something to one of the particles and the other will respond in predictable ways even though the two particles have no way of communicating with one another. Spooky, eh? What other mysteries are waiting to be solved?
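To put Newton’s balance of inertia and gravity into symbols, here is a minimal sketch (my own illustration, not part of the original discussion) for the idealized case of a perfectly circular orbit, in which the sun’s gravitational pull supplies exactly the acceleration needed to keep the earth turning rather than flying off in a straight line:

$$\frac{G M_{\odot} m}{r^{2}} \;=\; \frac{m v^{2}}{r} \qquad\Longrightarrow\qquad v \;=\; \sqrt{\frac{G M_{\odot}}{r}}$$

Here G is the gravitational constant, M⊙ the mass of the sun, m the mass of the earth, r the earth–sun distance, and v the orbital speed. The earth’s real orbit is slightly elliptical, which is why the balance is only “almost” exact.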
Chapter 15: Then and Now: The Search for Origins
When I was a university professor in the United States, I could rely on getting a phone call annually from some local, young, eager beaver reporter wanting to write a piece about an upcoming Friday 13th, usually opening with the deathlessly original question, “What is the origin of the superstition?” Anyone who knows me knows not to ask me such a question unless they have a lot of time on their hands. To begin with, seeking the “origin” of just about any custom is a pointless exercise for all manner of reasons, and I have certainly spilled a lot of ink on the subject (see Forrest 1999). It is possible to pin down a few customs to a specific time, date, person, and/or event, but they are rarities. Bonfire Night in England on the 5th of November is one such oddity.
We know that Guy Fawkes was part of a plot to blow up the Houses of Parliament when the king was there, and that Fawkes was discovered with barrels of gunpowder and matches in the cellars around midnight on the 4th of November, but because the plot was supposed to be carried out when the king was opening Parliament on the 5th, November 5th was chosen as the date of commemoration, and has remained so to this day. The exact year (1605), the names of the main conspirators (apart from Fawkes), what they did before and after the plot was revealed, and a thousand other details are known to historians, although not usually remembered by participants in celebrations. Nonetheless, revelers remember Fawkes, make bonfires, let off fireworks, and have a jolly time. In this case, the origin of the custom is really straightforward. Christmas trees, Halloween jack-o-lanterns, Friday the 13th, walking under a ladder, morris dancing, Maypoles, Easter eggs, etc. etc. are another matter entirely.
The vast majority of common customs and superstitions do not have such simple explanations as Bonfire Night, yet there are plenty of spurious “explanations” that you will find sprinkled around. These customs come down to us through a long process of evolution over time with no single origin to point to, and precious little in the way of documentary evidence of them in history. One question that has always fascinated me is, “Why does a supposed origin for a custom matter to you?” What difference would it make to you if I could prove that Friday the 13th is a commemoration of the Last Supper when there were 13 people present (the 12 apostles plus Jesus), added to the fact that Jesus was crucified the next day: a Friday? There is not a shred of evidence that this origin story is true, but what if it were? Would it make the slightest difference to how you treated the day?
I call one branch of thinking about these things the origin-as-essence school. People who believe in origin-as-essence believe that what something once was, it always will be (Forrest 1999). Thus, some people in the US refuse to celebrate Halloween because they believe it originated in devil worship (or something like that). This is complete nonsense, but, even if it did – which it didn’t – it is not a devil’s holiday now. Things change. Why do some people think it is bad luck to walk under ladders? Because they do. End of story. But some people will tell you that ladders are reminders of the ladders that condemned people had to climb up the gallows as a prelude to being hanged. They will also claim that 13 is an unlucky number because the hangman’s ladder had 13 rungs, or there were 13 steps from the cart to the ladder – or whatever. It’s all made up.
The best that studious enquirers can come up with concerning Friday the 13th is that the superstition is not documented before the late nineteenth century. That century has a lot to answer for. You have to break the custom down into two components: Friday as an unlucky day, and 13 as an unlucky number. Put them together and you come up with an unlucky combination. Treating certain days as inauspicious has a venerable history in any number of cultures. The ancient Greeks and Romans did it, so did the Inca, Aztec, Babylonians, Chinese, and many others. Which days and what numbers are unlucky varies from place to place. The Chinese consider the number 4 to be unlucky and in Italy Friday 17th is an unlucky day. Superstitions stick and there are no obvious reasons why. There is considerable evidence that humans like to find order where there is none, and that may be part of the explanation. As a general rule humans don’t like disorder or uncertainty.
In that sense, origin stories can provide order where none exists. But there is much more to origin stories than this simple explanation. I can understand speculating about the origins of customs, and coming up with a fanciful fiction to fill the void. For many people, “No one knows,” or “It is impossible to know,” is not satisfying, so any answer that seems remotely plausible can feel better than nothing. What both puzzles and fascinates me is why people believe origin stories that are clearly, and easily proven to be, false. I am especially interested in why, when presented with two origin stories, one correct but banal, and the other wrong but interesting, many people will prefer the wrong one – even when they know the truth. I am interested because some people (including many politicians), make a living out of telling false stories, which people will believe even when given clear evidence that the stories are false. Here’s a simple case in point.
You will sometimes hear that the word “marmalade” is a corruption of the French, “Marie est malade.” Supposedly, so the (false) story goes, Mary Queen of Scots, when she was in exile in France, had frequent headaches. When she was afflicted, she found that a sweet concoction of oranges that her chef made helped relieve the pain, so the call would go out from her bedchamber “Marie est malade” (Mary is sick), and the sweet orange dish was produced. Over time “Marie est malade” became “marmalade.” This is absolute rubbish, and a flick through an etymological dictionary will tell you the truth. The Oxford English Dictionary will tell you that “marmalade” first appeared in English in 1480, and Mary Queen of Scots was born in 1542. So, even without knowing the correct history of the word, its origin with Mary is an obvious fabrication. The word entered English from French, marmelade, derived from a Galician word, marmelada. The root is marmelo, a quince, so marmelada was once quince jam. If you know any Spanish or Italian you will know that cognates of “marmalade” still mean “jam” in those languages. The word was specifically applied to citrus preserves in English in the seventeenth century, when citrus fruits were common enough to be used to make what we now call marmalade. Mary died in 1587. Thus, there is no way that she can be remotely connected to marmalade; the word appeared before she was born, and was first applied specifically to orange marmalade well after she died.
The first time I heard the “Marie est malade” story, I checked a dictionary and found the truth. Anyone can do the same – but they don’t. The story just gets repeated as a genuine origin story. Once I heard Michael Caine repeat the tale in an interview with Michael Parkinson when he was supposedly displaying his erudition, and no one corrected him. I expect, instead, people watching the interview accepted the story as true and repeated it. Why won’t demonstrably false origin stories die? Why didn’t Michael Caine bother to look in a dictionary? A simple internet search will turn up the false stories about the relationship between Alexander Fleming and Winston Churchill that circulate endlessly. Why won’t such stories die?
I spent over thirty years documenting the history of morris dancing in England, which I eventually published as the definitive history of the dance from its earliest appearance in the fifteenth century to the mid-eighteenth century, at which point it had evolved into the dances you can see performed today (Forrest 1999). If you are from England, you have probably seen morris dancing. If not, I cannot give you a simple description – sorry. Check out a YouTube video if you are curious. It is a traditional kind of team dancing that takes many forms depending on the geographic location and the historical period you are talking about. In the late nineteenth and early twentieth centuries, dances from four regions of England – the South Midland counties of Oxfordshire and Gloucestershire, the northwestern counties of Cheshire and Lancashire, the Welsh border counties, and East Anglia – were recorded from a few surviving dancers from once-thriving teams, as well as from a few teams that were still performing. Subsequently new teams sprang up, hoping to revive and keep alive what they saw as an ancient heritage of England, dying because of the ravages to traditional villages and traditional ways of life caused by the Industrial Revolution.
No one knew where morris dancing came from or what it was about, but there was plenty of speculation, some of it stretching almost as far back as written documentation of the dance. Cecil Sharp, one of the most prolific collectors of dances, initially believed that the dance came from Morocco because “morris” is probably a cognate of “Moorish.” However, he changed his tune later on, and began insisting that morris dancing was home grown, the descendant of pre-Christian pagan dances performed to ensure the fertility of the soil and crops. He got the idea from Reformation era documents that condemned the dances as pagan, and he subsequently set about demonstrating the pagan origins of the dances by pointing to features of the dances that had a “pagan ritual” quality. Morris dancers to this day call their dances “ritual” dances.
I started dancing fifty years ago, and from my very first years I was taught by older dancers that morris dancing had its origins in pagan ritual. I believed them, but I was not satisfied by their vague stories and wanted to know more. So, I read everything I could get my hands on, but there was not much available to a sixth-former using only public libraries in south Buckinghamshire. Then I went to Oxford University as an undergraduate and had the vast resources of the Bodleian Library at my disposal: printed and manuscript. I combed the documents for five years, continuing as a Ph.D. candidate and then as a university professor, amassing an archive of thousands of sources that ended up covering, not only England, but a great swathe of Europe and large areas of South and North America, because dances of a similar character (called by cognate names, such as morisque, moresca, etc.) were far flung.
The ineluctable conclusion I came to was that morris dancing did not have a single place and time of origin. Rather, a kaleidoscope of dances had sprung up in numerous places in Europe around the fourteenth century, created by Europeans in Spain and other regions along the Mediterranean in imitation of the dances of the Moors (or what were thought to be Moorish dances); the Europeans’ actual intent, mimicked in the dances, was to drive the Moors out of Europe. By various means, these highly diverse dances radiated out across Europe, and were taken, primarily by Spanish conquistadors, to the Americas and other Spanish colonies. The dances arrived in England in the fifteenth century, and for a number of years were performed in royal courts. Thence they were adopted by town guilds, or performed on stage, or used by churches as attractions for money-making festivals. In the process, the dances evolved and adapted to their environments. What you can see today bears almost no resemblance to the early dances of the royal courts.
During the Reformation in England, a great many customs of the English Catholic church were outlawed, including holding annual festivals with feasting, drinking (beer brewed and sold by the churches), and morris dancing. These customs were decried as “pagan” by Puritan propagandists, by which they meant Roman, Rome being the center of Catholicism. Rome, for these Puritans, was “pagan” because they viewed a great deal of Catholic practice as inherited from (pagan) ancient Rome. For them, “pagan” and “Catholic” were synonyms. Calling morris dancing “pagan” was a way of saying that it was a custom associated with the Catholic church and needed to be abolished.
Fast forward to the twentieth century, and you have folklorists such as Cecil Sharp, who read this Puritan propaganda, and uncritically believed that “pagan” meant “originating in pre-Christian Britain.” This notion fit both the anthropology of the day and the nationalist agendas of the likes of Sharp. In particular, the anthropologist Edward Burnett Tylor (1832-1917), one of the founders of anthropology in Britain, proposed that some modern customs that seem odd or obscure are actually “survivals” from earlier eras (much as the appendix in humans is a survival from an earlier stage of human evolution). He writes that survivals are:
processes, customs, and opinions, and so forth, which have been carried on by force of habit into a new state of society different from that in which they had their original home, and they thus remain as proofs and examples of an older condition of culture out of which a newer has been evolved (Tylor 1920:16)
Sharp cannot be blamed for taking Tylor’s theory at face value. Sharp was a musician, not a trained anthropologist, and Tylor’s perspective was state-of-the-art anthropology at the turn of the twentieth century when Sharp was collecting and publishing dances. The theory fit the narrative he concocted. What he understood about British “pagans” (which was next to nothing and had no documentary support) was that they practiced rituals to ensure good crops and to scare away evil spirits. Morris dancers wore pads of bells on their legs, and waved handkerchiefs or clashed sticks when they danced, which Sharp interpreted as “survivals” from pagan rituals that originally were meant to banish evil spirits. Other individual movements in the dances once had “magical” intent, but all of these ritual meanings were lost on nineteenth century dancers who carried on the tradition in an unthinking manner, because they had always done the dances in the same way.
There are two fundamental problems with Sharp’s reasoning. First, Tylor’s “doctrine of survivals” has been debunked numerous times, and is, at best, elitist. The most obvious critique is that Tylor provides no explanation for why some customs that appear to have no overt purpose, survive intact for centuries. The implicit assumption is that peasants do not think too much about such things and just continue to repeat what they have learned, generation after generation, until the original purpose of a custom is completely lost, yet they blindly keep on doing it, because they always have. Meanwhile, the self-designated “more creative” elements in society (that is, the rich who know how to promote themselves) move on. Second, I have been at pains to point out, in excruciating detail, that many of the elements in the dances that Sharp took to be survivals of ancient ritual, such as waving handkerchiefs and clashing sticks, have not been part of the dances for very long: two hundred years at most. In fact, the dances performed in the sixteenth century bear absolutely no resemblance to dances of the nineteenth. The only common element that appears across the centuries is the wearing of bells, but even this practice evolved considerably. Royal dancers of the sixteenth century wore costumes that were festooned with bells, sometimes hundreds of them, whereas some modern dancers in Lancashire wear bells on their clogs only. Furthermore, historically and cross-culturally, dancers who are clearly not morris dancers have sometimes worn bells (or waved scarves or handkerchiefs).
What extant documents show very clearly is that morris dancing arrived in England (from somewhere – possibly Spain or Italy) in the fifteenth century, went through a constant evolution for two centuries, changing drastically depending on venue and financial support, and then settled into a few, regionally distinct, styles in the late seventeenth century. Its heyday was the late eighteenth and early nineteenth centuries, before the tradition suffered a severe loss of interest and manpower during the Industrial Revolution. Sharp found the last gasps of a dying tradition which he revitalized, in large part by claiming that morris dancing was a truly English, ancient custom, and that restoring it to its former glory was a patriotic act. He made it clear to the men he trained to dance, all gentlemen of means, that the dance was not fun and games, but a ritual performance to be taken seriously.
Sharp’s conception of morris dancing as a survival of pagan ritual is stubbornly persistent among dancers and onlookers, even though my researches have meticulously shown that this origin story is false. Why? I have questioned numerous dancers who believe the false tale of pagan ritual origins in preference to my story, despite the fact that my version has mountains of evidence to support it and their version has none. They like the false narrative because it’s a better story than mine – at least, in their eyes. They want to believe that their dances were once danced (in some form or other) by Anglo-Saxon pagan ancestors, rather than by Italian noble courtiers. One story fits their self-image better than the other. This conclusion leads to my general conjecture that it is more comfortable to believe something that is untrue but accords with one’s worldview than to accept a story which is true but challenges one’s worldview. There is more to it than this simple conjecture, but let me pass over the complexities for the moment, and investigate a more all-encompassing narrative – how the world got here in the first place, and how we came to populate it.
Modern science has one narrative concerning the origins of the universe and of humans, and classic Christianity has another. The consensus among astrophysicists and biologists is that the universe was “born” approximately 13.799 billion years ago (give or take 0.021 billion years), when a super-dense primordial singularity expanded outwards very fast – the Big Bang – and eventually settled into the universe we have today. Physical anthropologists have trouble with precise dates, but peg the emergence of modern humans – Homo sapiens – to somewhere between 250,000 and 400,000 years ago, as the end product of a long line of ancestral humans, whose ancestry parted ways with apes in stages, the last fork in the road (between ancestral humans and ancestral chimpanzees) occurring between 4 and 7.5 million years ago. Classic Christian theology states that God created the world in 6 days, and created humans on the 6th day. Efforts to date the first day of creation in Genesis vary quite a bit depending on certain assumptions made in the calculations, but they tend to fall somewhere around 6,000 years ago. There’s a big difference between roughly 14 billion years (Big Bang age of the universe) and 6,000 years (Biblical age of the universe).
Choosing between modern science and the Bible is not quite as clear cut as you might think. You may believe that there is no contest: modern science is right, and the Bible is wrong. Even though I am an ordained Presbyterian minister, I come down firmly on the side of modern science as do most of my colleagues in the clergy. I do have some professional quibbles about some of the details of evolution, but I am comfortable with the broad strokes. There are, however, plenty of people who support the Biblical stance entirely, and reject modern science. Here the issue of belief versus empirical evidence takes an intriguing twist because there are limits to what modern science can achieve in the way of proof. This is not to say that anything goes: far from it. It is perfectly possible to show that a theory is wrong, but it is not possible to show that one is 100% correct. That is why it is called a theory and not a law (as was the norm at one time). Scientists now realize that they always need some wiggle room. Sometimes theories get tinkered with a little bit, sometimes they get overhauled completely.
Isaac Newton’s “laws” of motion work fine on earth under local conditions, but they fail when dealing with motion on a cosmic or sub-atomic scale. The Big Bang Theory (the real one, not the television show) explains a great deal about the universe including the prevalence of hydrogen, the background radiation, the movement of stars and galaxies, and so forth, but it still has some puzzles, such as what triggered the Big Bang in the first place and what the ultimate end of the universe is, if it has an end. It also does not, and cannot, address the possibility of the existence of other universes and other riddles that physicists dream up in their spare time, and science fiction writers play with. I am not saying that this wiggle room puts contemporary physics on a par with Genesis: it does not. What I am saying is that, when it comes to the origin of the universe, one story is clearly wrong, and one story is on the right track to the best of our knowledge – for now. As such, the question of the origin of the universe is similar to my case for the origin of morris dancing. My answer is horribly incomplete (because of incomplete data) but seems to be on the right track, whereas the origin in pagan ritual is clearly wrong – yet people still believe it.
A significant number of people believe that the Genesis story of creation is correct. Likewise, some people believe the earth is flat. Some people do not believe that gravity is real. In 2017, Gallup polled people in the US on the following question:
Which of the following statements comes closest to your views on the origin and development of human beings — (human beings have developed over millions of years from less advanced forms of life, but God guided this process, human beings have developed over millions of years from less advanced forms of life, but God had no part in this process, (or) God created human beings pretty much in their present form at one time within the last 10,000 years or so)? (Source: https://news.gallup.com/poll/21814/evolution-creationism-intelligent-design.aspx)
The results were (1) Evolution plus God: 38% (2) Evolution minus God: 19% (3) Creation by God alone: 38% (4) No opinion 5%. In sum, 76% of people in the US believe God put humans on earth, evenly split between God doing it by guiding evolution and God creating us wholesale, and less than one in five believes that evolution is an unguided process. That 19% are in line with the biological community’s consensus whose main evolutionary tenet is that natural selection is guided by environmental circumstances, not by some preordained plan. But twice that number believe that the Genesis account of creation is correct, and believe that modern biology (and by extension, modern physics) is wrong. I expect that the 38% who believe that God guided evolution also believe that God guided the Big Bang, but that is just a conjecture on my part.
Contemporary mainstream physical science does not allow the supernatural of any kind to enter its theories. A number of people wholeheartedly accept that position, but there are two opposite responses to it that are both radical, yet consistent: one is to reject the supernatural, the other is to reject science. There are middle grounds as well. The 38% who want God to be the guiding hand behind evolution are trying to somehow juggle science and the supernatural in a way that is not thought through at all. Another middle position is to let science deal with the natural world and let religion deal with the supernatural.
There are many, many questions to ask about people’s positions on God and science, which we cannot pursue here. For me the most significant question concerns why some people cling to the idea that the earth is only a few thousand years old when there is so much evidence to the contrary. Potassium-argon and argon-argon radiometric dating can determine the ages of rocks that are millions, and even billions, of years old. How can a person, on the one hand, use a smartphone that relies on fundamental theories in physics, and, on the other hand, deny the utility of those same fundamental theories of physics when it comes to dating the earth?
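As a rough illustration (mine, not part of the original text) of how such dating works: a radioactive parent isotope decays into a daughter isotope at a known rate, so measuring how much daughter has accumulated relative to the parent that remains gives the age of the rock. In the simplest case, assuming the rock contained none of the daughter isotope when it formed:

$$t \;=\; \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D}{P}\right), \qquad \lambda \;=\; \frac{\ln 2}{t_{1/2}}$$

where P is the amount of parent isotope remaining, D the amount of daughter isotope produced, and t_{1/2} the half-life (about 1.25 billion years for potassium-40). Real potassium-argon work adds corrections – for instance, only a fraction of decaying potassium-40 becomes argon – but the principle is just this decay clock.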
The simple, but incomplete, answer is that many (perhaps most) people have little knowledge of physics, as well as little comprehension of the distances and, most especially, the time spans involved when talking about the evolution of the universe and of the earth. They rely more on the everyday experience of the senses, and have trouble grasping the distance involved in, say, a light year, the vastness of the universe, or the immensity of the time span from the Big Bang to now. I freely admit that I cannot either, even though I accept the reality. I know that the earth rotates on its axis and that, for practical purposes, the sun is fixed; yet I see the sun rise every morning and move across the sky in the course of a day. I rarely, if ever, think of the earth as in motion even though I know it is. My senses trump my scientific knowledge. I cannot force myself to experience the earth moving and the sun standing still. Nor can I grasp what a billion or even a million years is like. I have no experience to draw on. I can get my mind around a century, but even a millennium is a foggy notion in terms of what it feels like, and how dramatic the changes in society can be over that time span.
Likewise, it is hard for many people to look at chimpanzees or other apes and see them as (very) distant cousins, even though there is plenty of evidence that they are. Sequencing of the human and chimpanzee genomes reveals that we share roughly 99% of our DNA. That number is rather deceptive, however, since the 1% that is unique to humans pervades the entire genome in ways that are not fully understood. Even so, it is hard to imagine an all-knowing, all-powerful creator making the stars, sun, and moon, forming the seas and continents, creating plants and animals in all different shapes and sizes, and then, at the last minute, when creating humans – the supposed pinnacle of creation – running out of ideas, and building them on the same basic model as a certain kind of mammal. Evolution that creates different species over time, via natural selection, seems like a much more reasonable explanation for the shared DNA.
You may accept the biological conception of human evolution also, but what impact does it have on your daily life? If I ask you to name your favorite mammal, it is highly unlikely that you will think of your mother (or any other human) as your first pick. Chances are that you won’t think of humans at all. What about your favorite vertebrate or your favorite warm-blooded animal? There again, your father or your best friend are not likely to come to mind first. Even though Homo sapiens sits on a biological classification chart along with all the other animals, when it comes to everyday life it is common to think of other humans as “us” and the rest of the animal kingdom as “them.” I know you can quibble about aspects of this statement, but you grasp my basic point.
We each have a worldview that gets shaped in a number of ways – by our culture, by those close to us, by our experiences, by our schooling – and that worldview is not easily shaken or changed. It is bedrock for us. Some components of that worldview are relatively trivial and can be changed without too much harm, but there are some components that are really deep-seated and cannot be altered unless something profound happens. When hard evidence butts up against the deep-seated components we have to make some decisions. These are the main choices when our worldview and empirical evidence collide:
1. Accept the evidence and change worldviews
2. Challenge the evidence and maintain the original worldview
3. Find a way to make the evidence and the original worldview compatible
4. Keep the evidence and worldview in separate compartments
5. Accept both the evidence and worldview
Any of these options is possible, but some are more likely than others. It depends on how important the component in your worldview is to you. Option 1 can happen, but it is drastic. With something like the evidence for the origins of the universe, it might cause a person to lose faith in religion. Option 2 is certainly very common. Creationists have their own pseudo-science to challenge conventional science so as to maintain their belief in the Bible. Option 3 is also common, although it can involve some strange mental gymnastics. Saying that God directed evolution is absurd. It’s like saying that God determines the winner of the lottery each week. Yet, apparently, it’s what 38% of people in the US believe. Options 4 and 5 are probably the most common, although that is simply my conjecture. Some people, when they are told that they believe both A and B, and that A and B are contradictory, will simply shrug and not think about it. It should come as no surprise that people hold contradictory beliefs. I would argue that holding contradictory beliefs is essentially human. It’s one of the things that separates us from computers.
Now I want to get back to origins and to my key questions. Why do we celebrate the origins of so many things and why do we question the origins of so many things? What is it about origins that is so important? Birthdays, wedding anniversaries, national days such as Independence Day, Founder’s Day, or Constitution Day, or any kind of anniversary are a certain kind of celebration of origins. Why are they important? Are they important? First, consider all the reasons why such dates are not as vital as points of origin as we want to believe. In the US, for example, the 4th of July, Independence Day, is a really big deal. But it is not the date when the US gained independence. It was not even, strictly speaking, the day when the colonies declared independence in 1776. What happened on that date was one action among many, both before and after, that culminated in the Treaty of Paris of 1783, which formally ended the Revolutionary War and gave the colonies their freedom. Picking one date is entirely arbitrary. Is your birthday your real origin point: the day you “came into the world”? Why don’t you celebrate the day you got your first job, or the day you got your driver’s license, or your first day of school? In terms of what a job or a driver’s license means for your life, they are much more important than the day on which you were born.
Chapter 16: Got Any Change? Marx versus Weber
Anthropologists have had an enduring interest in what causes cultures to change. “Why did humans first start domesticating plants and animals?” is a big one with no consensus as yet, possibly because the answers to the question differ in different locations. I’d be really surprised to discover that ancient Mesopotamians started growing wheat for the same reason that ancient Mesoamericans started growing hot peppers. Why did the Industrial Revolution occur and why did it start in the eighteenth century in Britain? Historians have their answers to these questions, but anthropologists do as well, and they do not always mesh particularly well. In the midst of the Industrial Revolution in England, Karl Marx asked the deathless question, “Where did capitalism come from?”
Marx asked the question because Industrial capitalism in Britain at the time was a decidedly mixed blessing. Technology changed at a rapid rate such that people had amenities they never imagined a generation before: travel became available to all classes (by canal, road, rail, and ship), sanitation improved, and some people had social choices they could not have dreamed of a century earlier. What is more, some (a few) humbly-born people with a vision and sheer personal drive became fabulously rich by turning their ideas into industrial reality. Meanwhile, urban centers developed into enclaves of the rich surrounded by grindingly poor slums. Masses who were thrown out of work on the land because of agricultural advances migrated to cities where they remained unemployed, or underemployed and underpaid, and discontent was rampant. Marx wanted to know how the situation arose in the first place, and what could be done to rectify it. Good question.
In the midst of the conflicting opinions concerning the value of Industrial capitalism in the mid-nineteenth century, a number of theorists tried to get below the surface realities to see if they could figure out what the mechanisms were that were driving change in society. Answers came in many forms, but for the purposes of this chapter, two stand out: (a) change happens because someone in society has a good idea and puts it into practice. (b) change happens because of social forces, not because people desire it.
When I lectured on the rise of domestication of plants and animals in my classes, I often asked my students to give me ideas on why it happened before presenting the commonly proposed hypotheses of anthropologists (none of which is definitively accepted). Almost invariably one student would suggest that the domestication of plants came about because in prehistory some brilliant person noticed that when you planted a seed, it germinated and grew, and subsequently people started to experiment with growing crops. This cannot be true for any number of reasons. The most obvious follow-up question is: why would people want to put effort into growing plants when they already grow on their own? Students who come up with that idea are thinking about how they are taught that technological change happens in the modern world. Someone has a bright idea, develops it and markets it. Why not project that notion back into the deep, dark past?
The intrinsic problem with thinking that change happens because people want it to happen is that we can come up with multiple counterexamples. Think about your own life. Have all the changes in your life happened simply because you wanted them to happen? If you answered yes, you are an exceptional person. Changes happen in our lives for any number of reasons: external pressures, accidents, and dumb luck are in the mix, with conscious choice trailing a long way behind. Isn’t the same likely to be the case (or more so) for whole societies? Nonetheless, at the start of the nineteenth century, when some of the negative aspects of the Industrial Revolution were starting to bite, a number of social theorists, including Henri de Saint-Simon, Charles Fourier, Étienne Cabet, Robert Owen, and others, thought that they could create societies that would benefit everyone, and several of them built experimental socialist communities from the ground up – mostly in the United States where land to build new communities was available. Most of these communities collapsed within a few years.
Karl Marx branded these theorists “Utopian socialists” and, while he used a few of their theories, he denied their basic premise that society could be improved by figuring out what the current flaws in society were and designing a better model – like building a more efficient loom or designing a better plough. Marx pointed to the failures of their intentional communities, and theorized that social change does not happen through deliberate design but because of historical forces that could be studied and possibly manipulated in the future. Thus, in his magnum opus, Capital, he set out to show that capitalism was the end product of a long chain of historical events, characterized by class struggle, and would eventually be overturned by a revolution of the workers, who had been thoroughly crushed and oppressed by the bourgeoisie.
Marx’s theories of social change, never mind the multitudinous ways in which they have been interpreted, are far too complex for me to get into in detail here, but I’ll put some readings for you at the end if you are interested. The main thing to bear in mind is that what Marx actually wrote and what people popularly think he wrote (and believed) are animals from different planets – and that includes people who call themselves Marxists. In particular, the Soviet Union under Lenin and Stalin, and China under Mao are not in any sense a manifestation of Marx’s particular vision of history.
Nowadays, Marx’s (not always the same as “Marxist”) perspective is usually called “historical materialism” or a materialist conception of history. In this usage, “materialism” and “materialist” do not refer to a love of “things”; rather, they name a theory concerning social change in history: change occurs in history because the ways that people produce what they need for survival (the means and mode of production) evolve in predictable ways, and these changes in the means and mode of production bring with them attendant changes in the ways societies are organized and in their worldviews. That is, the combination of a society’s technological and productive capacity along with the social relations of production (who does what) fundamentally determines society’s organization and development. The ways that a society is structured mirror its economic activity.
Let’s take as an example the shift from foraging to the domestication of plants and animals in Mesopotamia. There is no dispute in anthropology that at one time in history all humans survived by eating fruits and vegetables they could gather and animals they could hunt or fish. The common assumption (now considerably modified in anthropology) is that foraging is necessarily nomadic because foragers have to follow the seasonal changes in plants, the migration of animals, and the availability of water. Because they are nomadic, they cannot live in permanent homes, cannot create a technology (a toolkit) that is too heavy or bulky to be carried easily from place to place, and they live in small bands because they have limited food resources. These small bands have little division of labor because everyone is involved in the same occupations: gathering plant materials and killing animals for food. The main division is along gender lines: women tend to be gatherers, and men tend to be hunters – but this division is fluid.
Domestication of plants changes everything (domestication of animals is a different story). When you plant crops, you are forced to settle down because tending domesticated plants is a full time, sedentary job. Crops need planting, cultivating, weeding, watering, harvesting, and storage. All kinds of social changes occur as a result. People who plant crops build permanent housing in villages which can have greater populations than foraging bands. Farmers not only store harvested crops for their own subsistence, they can also store a surplus (which foragers cannot do because they are nomadic and cannot carry surpluses around with them). Farmers also need to defend their stored crops against theft from neighbors. Their neighbors can include groups who tend domesticated herds. Herders do not have to settle in villages because their animals are mobile, and, especially in marginal lands, they need to be moved periodically for seasonal fodder. In many parts of the Fertile Crescent (the roughly crescent-shaped lands stretching from Egypt to the plains between the Tigris and Euphrates (Mesopotamia = “between rivers”)), the farmers took the fertile plains for planting crops, and the animal herders exploited the marginal lands that were not suitable for agriculture. Thus, with domestication comes a division of labor based on production.
Farmers and herders produce different kinds of food that are insufficient in themselves for long-term survival. Put simplistically, herders need bread and farmers need meat. So, they have to trade (or find some means of exchange). The invention of bread making is an interesting tale all by itself, but it is too complex to go into in detail here. (Hint: brewing beer is tangled up in it). The basics are that the wild grains of Mesopotamia were not suitable for bread making, but the domesticated cereals were. Bread making leads to the invention of ovens (the technology of sedentary people), which, in turn, can be used for firing pottery, and things evolve from there. Herders kill animals for a living, and, therefore, they are not only used to killing, they have the technology of killing ready at hand. Herders can trade with farmers, or they can steal from them. Herders have weapons and are mobile, thus stealing looks like a good option: grab what you want under force and then run away. In turn, farmers have to build defenses around their villages, develop their own technology of warfare, and learn the arts of war.
You can see where I am going with this. Changes in the means and mode of production create changes in social structure. Over time, farming villages become towns which evolve into cities. As societies become larger and more complex, they need increasingly complex systems of government. Surplus production allows classes to develop that are not food producers. Some people who are not producers become rulers who control the production of the society. The rulers need soldiers to defend their cities and themselves. They also need people to record their trade of surpluses, so you get systems of counting (leading to mathematics) and systems of recording transactions (leading to writing). As cities become more sophisticated in protecting themselves, animal herders become more sophisticated in their ways to defeat the farmers in cities – or, farmers and herders join forces in trade and warfare.
I am being overly simple in order to give you the basic flavor of historical materialism, but, of course, different cultures in different regions of the world developed in markedly different ways. Historical materialism was quite popular in anthropology at one time, although its influence is now fading. The last vigorous champion of historical materialism (or cultural materialism) was Marvin Harris, and his theories do not find many takers these days. The trouble is that the broad strokes of cultural/historical materialism seem, at first blush, to be on target, but the devil is in the details. What makes one society develop in certain ways, whereas another society, facing similar challenges, develops in completely different ways? Is it differences in the environment? Or pure chance? Or what? Why did urbanized Mesopotamians develop complex systems of writing, but only relatively rudimentary calendars, whereas the urbanized Inca, Maya, and Aztecs developed extremely complex calendars, but writing systems that ranged from the elaborate (the Maya) to none at all (the Inca)? Enter Max Weber.
Weber’s most well-known book is The Protestant Ethic and the Spirit of Capitalism which is a direct challenge to Marx and historical materialism. Whereas Marx argues that changes in material circumstances lead to changes in the way cultures think, Weber looks at the ways cultures think and argues that when the basic mode of thinking in a culture changes, its material circumstances change. He argues that the ideas that came out of the Protestant Reformation, when they are melded together, result in capitalism. That is, ideas come first, and cultural change is the outcome. Weber saw the theology of John Calvin (1509 – 1564) as the prime mover, with Martin Luther (1483 – 1546) and the Pietists adding bits. His leading question was: “Why did industrial capitalism develop in Protestant countries first, and not in Catholic countries?” This had been an enduring question on the minds of social theorists since the end of the nineteenth century. Weber’s answer was that the underlying ideas of Protestantism engendered capitalism whereas those of Catholicism could not have done so.
Weber’s argument is as follows. At the heart of Calvin’s theology is the concept of predestination which is based on a logical understanding of the Bible (which is the infallible word of God). The Bible says that God is all powerful and all knowing. “All knowing” means that God knows everything – past, present, and future. God knows the beginning of the world and the end. These events are documented in the Bible (the ending parts being rather cryptic). If God knows all about the future, God also knows about the destinies of every individual. In particular, God knows who is destined for heaven and for hell. There is no other logical conclusion if God is all knowing. But, what about free will? Can’t we choose our own destinies freely? Yes and no. You are free to choose, but God already knows what your choice will be. Tricky.
Contrast this with the standard Catholic doctrine of sin, repentance, and absolution. Catholic doctrine teaches that we are born as sinners, and continue in sin all of our lives unless we do something to get rid of the sin. If we remain in sin we are doomed to hell, but if we repent of our sins and receive absolution, we are rewarded with heaven. The problem is that we keep sinning, so we have to keep repenting and getting absolution. The good part is that the system gives total assurance to the sinner: if you receive absolution and then die, you will go to heaven. The not so good part is that you may die before you receive absolution for the most recent set of sins. Nonetheless, absolute assurance of heaven is within your grasp if you follow the rules. Not so for Calvinism.
Calvin eliminated confession, repentance, and absolution conducted by the church from the play book. Forgiveness, according to Calvin, comes from God only, not from any human institution. Furthermore, you do not get your report card until the Day of Judgment. Your entire life is weighed in the balance on the Final Day. Until that Day you cannot know your fate. However, God already knows. Your fate is predetermined. Weber argued that the doctrine of predestination with no escape clauses or safety valves, and no way of knowing whether you are going to heaven or hell – for eternity – engendered enormous stress among ordinary people. They needed a way to escape the stress.
Here’s a hypothetical situation to help understand Calvin’s doctrine of predestination and its effect on people according to Weber. Imagine you live in a world where there is a seminar that everybody has to take before they begin their careers. The seminar is pass/fail, and if you do not take it, you automatically fail. If you pass, your entire work life is wonderful: you are paid well, your work is enjoyable, and you have plenty of free time and vacations. If you fail, your life is miserable: you live in poverty, work is hard, and you never get any time off. On the first day of the seminar your teacher arrives and gives you a big book that contains all the information you need for passing the class. The book is long and complicated, but a few sections are highlighted as really important. You are told that you will not get any grades during the seminar to let you know how you are doing. You will just receive notification of whether you have passed or failed at the end of the seminar, and your teacher chooses when to give you the final test. Finally, the teacher tells you this: “I have a personality file on each of you, and I already know who will pass and who will fail. So . . . good luck!” What would you do?
You could take the tack that since your teacher already knows whether you will pass or fail there is no point in working at all. WRONG !! The book does say that hard work is not an absolute guarantee of success, but it also says that not working at all will guarantee failure. In addition, the book tells a lot of stories about people who took the seminar years ago and gives examples of the things they did to pass. But the stories are confusing. Some people did some really awful things, but pulled out a passing mark eventually by turning things around. Therefore, you might think that it would be all right to do nothing at first, but then work really hard later on. The problem with that approach is that you do not know when you will get the test. The teacher may come and give you the test while you are still goofing off, and then you are sure to fail. I know this is a silly thought experiment, but take a moment to consider what your best strategy would be given that the consequences of failing are truly dire. (Hint: Studying hard from the first day is one strategy).
According to Weber, Calvinists decided that, faced with the stressful reality of predestination, the only way to reduce the stress and anxiety of uncertainty was to live a life of calm assurance that you were destined for heaven, because even anxiety was a sign that you were not. The dilemma was figuring out how to be calm and assured. Here Martin Luther and the Pietists come into play. Luther followed the writings of Paul in 1 Corinthians 12 which say that we all have a gift from God and they are all different. Even so, these gifts are all important to the world, and they all work in harmony. He gives the ancient analogy of the body. Some people are hands, others are feet, others are eyes. They all have different functions, but the body needs them all. The hand cannot decide to be a foot or an eye, it is a hand. Its job is to be good at what hands do and not try to be an eye or a foot, or even be jealous of eyes and feet.
Luther turned this passage into his doctrine of vocation. According to this doctrine, we are all born with a gift, and it is our job to discover that gift and then work to improve it. According to Weber, this doctrine gave the Calvinists some hope. If they worked hard at their vocations, they were on the right track to salvation. This is sometimes called the Protestant work ethic. There is a small catch, though. If you find what you are good at and work diligently, you will not only get better and better, you will also get richer and richer because more and more people will want to buy what you make or hire your services. Here the Pietists (and other Protestants) had the final piece to the puzzle: a warning. If you take all the money that you have earned through hard work and spend it on frivolous things such as big houses, fancy clothes, and big parties, you will certainly go to hell because these are the works of the Devil. You have no choice but to invest your profits back into your work, which means that your business continues to grow and you end up with: CAPITAL. In other words, the Protestant ethic is the spirit of capitalism. The one leads to the other.
So now we have two opposing views of how culture change occurs. Marx is saying that there are forces driving changes in the material means and mode of production which, in turn, drive all manner of other cultural changes including religion, whereas Weber is saying that the world of ideas in a culture evolves, and those ideas, in their turn, drive changes in the material circumstances of the culture. Both general theoretical perspectives have their supporters and their critics to this day. There are plenty of Marxist anthropologists, but materialism as the engine of history and culture change does not find much favor in the discipline these days without serious modification. Likewise, you will find Weberians knocking around, but Weber’s ideas on Protestantism and capitalism took a big hit when it was pointed out that capitalism flourished in fourteenth-century Italy, and Renaissance Italian bankers were most decidedly not Protestants. Do I need to repeat my mantra? IT’S COMPLICATED.
Sometimes I think of the nineteenth century in Europe as the “evolution century” (when I am not using other, less flattering, terms). Thoughts about evolution popped up everywhere. Of course, there’s Darwin and his theory of biological evolution through natural selection, but you also have the Grimm brothers looking at the evolution of languages, Charles Lyell making sense of the evolution of rocks in geology, and others. In the mix, anthropology spawned general theories of the evolution of culture. Lewis Henry Morgan, whose work heavily influenced both Marx and Freud, argued for universal stages of cultural evolution from savagery through barbarism to civilization, echoing the European three-stage archeological model of stone age, bronze age, and iron age.
One theory that can explain everything has an appeal. Newton’s three laws of motion purportedly explain all motion in the universe. And, they are so simple: force equals mass multiplied by acceleration, for example (second law). We call this method “reductionism,” because you “reduce” complicated masses of data to simple rules. Many anthropologists have been seduced by reductionism, and an equal number have resisted the temptation. Social science is not natural science. Herein lies our dilemma. Do we want to emulate physics and find overarching laws that govern all cultures, or not? If not, then maybe anthropology is not a science at all. What keeps us in the science column is that we keep looking for order and patterns in culture. Just because previous attempts at reducing all of human behavior to simple laws have been crude and valueless does not mean that the whole enterprise was wasted energy. Anthropology evolves too. We have learnt from our mistakes (well – some of us have). It was important to play around with grand theories of cultural evolution in the nineteenth century because they led to different approaches. In North America, Franz Boas looked at these approaches and suggested an alternative: look at the specifics of individual cultures and see how they have evolved – individually – using all the tools in the box: history, linguistics, biology, fieldwork, and whatever else works to make sense of how people are the way they are.
When I try to apply Marx and Weber I tend to use “and” rather than “or.” Both approaches have strengths and weaknesses. What about a symbiotic model, employing both, where you have changes occurring in the sphere of ideas that influence material changes, and, also, changes in the material sphere that influence ideas? This way, instead of having one variable that dominates all of the other variables, you have multiple places in culture where change occurs, and each sphere influences every other sphere mutually.
We are still left with the problem of figuring out why change happens at all – or why it doesn’t happen – no matter where it occurs. If you are not careful you can get caught up in determinism. There are various kinds of determinism on offer, some of which argue that one variable (religion, economics, climate, geography, etc.) is the foundational variable, and change in other areas stems from changes in the key variable. It used to be a common argument that humans started domesticating plants because populations grew beyond their ability to be sustained by foraging. In this case, population growth is the key variable and food production is the determined variable. Unfortunately for this position, archeology generally shows that domestication occurs first, and then population growth explodes. Another argument was that climate change reduced the availability of plants and animals, so people were forced to domesticate to increase production. Archeological evidence does not support this hypothesis universally either.
Domestication of plants and animals occurred independently in at least five different locations, maybe more, and the reasons for the change were likely not the same in all five. The one that has intrigued me the most is the domestication of wheat in the Fertile Crescent, and one hypothesis (which is now contested) has always attracted me because it involves a symbiosis between humans and plants. Wild wheat is a grass, and, like all grasses, when it is ripe, the rachis—the central stalk that holds the seed head together—shatters so that the seeds can disperse themselves. Without hulls they germinate rapidly. This naturally useful brittleness doesn’t suit humans, who prefer to collect the wheat seeds on the plant and then strip them off at home, rather than scratch around in the dirt to find seeds that have dispersed themselves.
Domestication occurred in the Stone Age when foragers had only stone tools to harvest the wheat. You can imagine that cutting grass stalks is going to jolt them considerably, so that if the seeds are only loosely attached they are going to fly off. But if they are firmly attached they can be gathered and taken home intact. In this way, wild wheat with firmly attached seeds is going to be gathered more frequently, will predominate in the kitchen, and will be more useful overall. But because it cannot disperse itself for the coming season (the seeds are too tightly attached), some will have to be saved and deliberately planted the next season. Thus, you have a form of selective breeding in which certain (rare) genetic varieties of grass/wheat that have difficulty self-seeding are selected for, with the downside that they have to be planted to concentrate their stocks. Subsequently, farmers could select for greater grain size and other useful traits.
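For readers who like to see the logic of that last step spelled out, here is a minimal toy sketch, written in Python, of how harvesting alone can favor the rare tough-rachis variant. It is my own illustration, not an archaeological model: the starting frequency and the harvesting probabilities are invented numbers, and the only assumption doing the work is that next season’s crop is sown entirely from the seed that made it into the harvest basket.

# A toy model of harvest-driven selection for the rare "tough rachis" variant.
# All numbers are invented for illustration; only the direction of change matters.

def simulate(generations=10, tough_start=0.01,
             p_harvest_tough=0.6, p_harvest_brittle=0.1):
    """Track the frequency of tough-rachis wheat among replanted seed.

    Tough-rachis plants keep their seeds attached, so more of their seed ends
    up in the harvest basket; brittle-rachis plants mostly shatter before or
    during cutting. If next season's crop is sown from the harvested seed,
    the tough variant spreads even though no one is consciously "breeding".
    """
    freq = tough_start
    history = [freq]
    for _ in range(generations):
        harvested_tough = freq * p_harvest_tough
        harvested_brittle = (1 - freq) * p_harvest_brittle
        freq = harvested_tough / (harvested_tough + harvested_brittle)
        history.append(freq)
    return history

if __name__ == "__main__":
    for generation, frequency in enumerate(simulate()):
        print(f"generation {generation:2d}: tough-rachis frequency = {frequency:.3f}")

Run with these made-up numbers, the tough-rachis variant climbs from one percent of the field to the overwhelming majority within a handful of generations. That is the nub of the symbiosis argument: the selection is a by-product of how people harvested and replanted, not of any plan.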
The big mystery is that humans began domesticating wheat 10,000 years ago, but they gathered wild wheat for 9,000 years before that. Why did they change? It cannot be that some bright spark said, “Hey, I have an idea. Let’s harvest only the wheat that stays attached to the stalk because it will be less work for us – oh – wait a minute – that means we have to plant the seeds later because they do not disperse themselves. Oh well! Let’s try domestication and see what happens.” I am going to call that the “bright spark” theory of culture change and junk it immediately. Something must have occurred 10,000 years ago (and not before) that forced the change.
The bright spark theory stems from a view of history that believes that great men (they are usually men) once in a while have great ideas or do great things that change the world. Alexander the Great conquered the known world and established an empire, the Duke of Wellington won the battle of Waterloo and ushered in a new order in Europe . . . etc. Really? They did this all by themselves? How remarkable. I wouldn’t be surprised if this was the way you were taught history – key dates, key people, and key events (1066, William the Conqueror, battle of Hastings; 1492, Christopher Columbus, colonization of the Americas). In the twentieth century, social history, the study of the lives of ordinary people, took hold in Europe and acts as a counterweight to “bright spark” theories, as did the anthropological study of cultures. But we cannot agree on where cultural change comes from. We do know that in science and mathematics, new and fruitful ideas can occur to many people at roughly the same time because of a convergence of ideas that all point in the same direction. Isaac Newton and Gottfried Leibniz got into a fierce argument about who came up with calculus first, and the modern consensus is that they both arrived at the method independently at the same time. Charles Darwin came up with evolution through natural selection, but kept quiet about it for fear of the furor it would cause in the church until Alfred Russel Wallace started communicating with him about his own theory of evolution, which he had come up with independently of Darwin. Hugo de Vries, and others, at the turn of the twentieth century proposed a theory of inheritance via genes, only to discover that Gregor Mendel had already published detailed work on genetics in 1866 and that, at the time, no one had paid attention.
Well and good, but that’s natural science. What about culture? If cultural change does not occur because of bright sparks, or environmental changes, or social stress, or whatever, why does it occur? As a slightly tangential issue I am interested in why people think they can stop cultural change, or why they think stopping it is a good idea. All cultures change, all the time. Rates of change and areas of change vary, but all cultures change all the time: end of story. Marx, more than Weber, was interested in change because he saw the negative consequences of the Industrial Revolution which was a massive cultural change, and was concerned about what would happen next. Marx (and Engels) used a materialist theory of cultural change to provoke and control change; Weber did not.
[Note to reviewers: This chapter replaces the original one on storytelling, dropping much of the technical details about folk tales, etc., but retaining the analysis of “home” as expressed in tales and other media. Instead of analyzing the formal structure of tales, it deals more with the complexity of the concept of home, and how it figures in identity formation. When I first wrote the chapter it had all manner of branches, including the drive for ethnic groups to have an autonomous nation, and the like, but I have pared it down to some basics, and used other chapters to deal with nationalism, ethnicity, etc. The chapter may still feel a little unfocused, and I would welcome suggestions concerning cuts or additions. I realize, for example, that the opening section is too long and needs to be trimmed. It is a topic I am working on for a MS of reflexive ethnography: Being Argentino, Becoming Porteño]
Chapter 12: Lord, I’m Coming Home: Home and Identity
I have lived most of my life (64 of 72 years at time of writing) as an immigrant or foreigner in numerous countries. I was born in Buenos Aires and my actual legal name is Juan Alejandro Forrest de Sloper – and I have a birth certificate to prove it. My mother was thoroughly English and was furious that the Argentine government would not allow me to have an English name. It had to be a Spanish name and it had to be from an approved list. My father wanted to name me Roderick Seyton because he was Scottish – of Shetland heritage (his mother was from Lerwick) – and wanted to cement my bona fides as a son of the tartan. Rodrigo would have been the closest to Roderick but that was not an approved name back then, and Seyton was completely out of bounds. So, he and my mother settled for Juan which is the Spanish for John, my father’s name (and his father’s), and Alejandro is Spanish for his brother’s name, Alexander. My last name is a combination of my father’s and mother’s family names.
As soon as she was able, my mother had me christened John Alexander at St Andrew’s Presbyterian Church in Buenos Aires, and kept the baptismal certificate with my birth certificate as proof that I was really John Alexander despite what the Argentine government said, and when I reached adulthood she passed them both on to me. It was her firm conviction that the baptismal ceremony counted as a legal name change, which is not strictly true, but it passed muster when I was growing up.
Several years after I was born, Eva Perón died and my father was afraid of the potential for armed revolution (which did, indeed, come about). There was soon violence in the streets, and my father told me of a time when he had to shelter me in a doorway in a shopping district when gunfire erupted. As soon as possible, he booked passage for the family to England. As a native-born citizen of Argentina I had to have my own passport, and I was granted a one-year exit visa to “visit” my parents’ homeland. From there my travels began and I did not return to Argentina until 2008.
From England the family emigrated to South Australia where I spent the bulk of my childhood – all of my primary school years and a big chunk of my secondary school years. Being Argentino was not much of an issue in Australia because I spoke English. I had a marked Southern English accent and so I was classified as a Pommie – one of numerous classes of New Australians (that is, immigrants) at the time. My friends at school were predominantly immigrants like me, or the children of immigrants, courtesy of Australia’s aggressive immigration policies to attract skilled workers (my father was a chemistry teacher). One or two of my classmates were from England, but the bulk were from continental Europe with a significant percentage from the Soviet bloc.
We returned to England in the year I was due to take my first public exams in Australia, and I completed my secondary schooling at a grammar school in Buckinghamshire. As soon as I arrived at an English school, I was pegged as an Aussie because my accent had metamorphosed over the years from choir boy to sheep shearer. I did my undergraduate work at Oxford University, where I was introduced to social anthropology via friends who were students in the department. They and I were more interested in folklore than classic social anthropology, but anthropology worked as an umbrella that embraced folklore – sort of.
After a couple of years of teaching in secondary schools in England I enrolled in an M.A. program at the University of North Carolina at Chapel Hill where, once again, I was the immigrant – this time English. My fate had crystallized – I would always be taken for a member of the last culture I had stayed in. When I did my doctoral fieldwork in the coastal swamps of North Carolina I was universally treated as English, as I was also when I took up a job as assistant professor at a State University of New York campus where I stayed for 30 years.
I could have continued at S.U.N.Y. forever. I had tenure, so my job was secure for as long as I cared to keep it up, and I had risen to a position of strength – head of department, known figure on campus, and all the rest of it. But when my wife died of cancer and my son went off to university, the bottom fell out of my life. I soldiered on for a while, but I was just going through the motions. In the end I quit my job and bought a one-way ticket to Buenos Aires. I was 58 years old.
Entering the arrivals hall at Ezeiza airport was a bewildering experience after a long, cramped flight. I had not made any arrangements for accommodations ahead of time, the airport is a good hour’s drive from the center of the city, and I was both hungry and thirsty. So, I had a great deal to sort out. Yet, as I looked around at the bustle of the terminal, heard the Argentine dialect of Spanish all around me, saw the way people were dressed and how they acted, and listened to tango music flooding the air, all I could think was: I AM HOME!!! Given that I had left Argentina as a small boy and spent almost all of my life living in English-speaking countries, you’d be forgiven for thinking this to be a strange sentiment. But it was a profound feeling of welcome and security even in the midst of being faced with a laundry list of things to do to get settled. I knew this place – deeply – and I felt a sense of inner peace.
Although I grew up in Australia, the imprint of my Argentine background was ever present. My father had a mate and bombilla that had a special place in our house. These are what all Argentinos use to share yerba mate – every single day. His mate (gourd) was covered in leather and decorated with images by branding (very gaucho). It also smelled richly of yerba mate. My mother had only two cookbooks: a copy of Mrs Beeton’s Household Management, given to her by her parents on her wedding day, which she used sparingly for English recipes, and a copy of El Libro de Doña Petrona (in Spanish) which my father bought in Buenos Aires. Doña Petrona was a celebrity chef in Buenos Aires for decades. My mother cooked the classic milanesa (breaded veal cutlets, aka wiener schnitzel) on occasion, and my father often cooked tuco (Argentine spaghetti sauce), and estofado (meat stew with tomatoes) once in a while. My father’s speech was peppered with Spanish (pronounced in an Argentine way), and his bookcase was filled with Spanish classics – Cervantes, El Cid, etc. The first page of my stamp album contained Argentine stamps, most of which had images of Evita on them. Shards of my Argentine identity were embedded in my soul. It took my return to Buenos Aires to activate them, and to feel a sense that I had returned home.
What does it mean to be home? To answer that question we have to delve deeply into the complexities of the word itself, and also into the relationship between the concept of home and personal/social identity. The Oxford English Dictionary lists multiple meanings for the noun “home.” Some of them are:
- The residence of a family.
- The place where one was brought up.
- A domestic setting.
- A family or social unit occupying a house.
- A refuge or sanctuary where one belongs or feels at ease.
- One’s own country or native land.
- A residential institution providing care, comfort, accommodation, and treatment.
Without too much musing I am sure that you can come up with numerous other uses. There are certain meanings in this list that overlap and some that are, in a sense, foundational, such as notions of comfort, origin, and family, and, most importantly for our purposes in this chapter, identity.
All of my life people have asked me, “Where are you from?” and for a long time I puzzled over how to answer them. My accent, whether I am speaking English or Spanish, confuses people when they meet me for the first time. Most of the people I know who have changed the countries where they live continue to speak in the same accent that I have always associated with them. For example, I met a friend from my old grammar school in Phnom Penh a few years ago, and I had not spoken with him in 50 years. He had an East London accent when we were schoolmates and he sounds exactly the same now even though he moved all over England after school and then moved to Germany where he normally speaks German. Likewise, a former flatmate from my university days, who grew up in Liverpool, has lived in Nairobi for at least 30 years, and yet his Scouse accent has not blunted at all.
Probably because I moved around so much in my childhood and early adulthood, my accent has drifted with the tides. Wherever I am, I am always thought of as “other” in some way, although locating that “otherness” is not immediately obvious. My accent also sometimes shifts depending on who I am talking to. When I am with Australians, the sheep shearer is much stronger than when I am with my Oxford University friends, for example. I can also make an effort to emphasize a particular accent under certain circumstances. When I taught in New York, I deliberately shifted the pronunciation of my vowels to be closer to ways of speaking in the US northeast because my students had difficulty understanding me otherwise.
Before I returned to Argentina, when I was asked “Where are you from?” (when traveling abroad) I always answered, “I live in _______.” This answer was often followed up with the question, “But where are you from originally?” which I typically answered with, “I grew up in South Australia.” I did not have a good answer that would satisfy them because I did not feel as if I belonged anywhere. That situation changed dramatically when I returned to Buenos Aires. In Argentina I found my home, and, hence, a fixed identity. Now when I am asked where I am from, I immediately reply, “I am from Argentina.” In Asia I often have to add “where Messi (or Maradona) is from” as a clarification because people’s geography here is a bit hazy but they all know football heroes. This clarification also speaks volumes about the nature of identity.
Identity is a complex, constantly debated, issue in anthropology as well as in psychology. In anthropology the notion of identity is tied to concepts such as ethnicity, race, nationality, and culture, which, in turn, are deeply entwined with definitions of “home.” When my Chinese friends say that they are going “home,” they rarely mean that they are going back to their apartments, but, rather, that they are going to visit their parents or to the town where they grew up. A great many young Chinese grow up in smaller towns outside of the urban centers, but then migrate to the big cities for university and ultimately for work. On major holidays, such as Chinese New Year or Autumn Festival, when they get a week off work, it is customary for them to return to their “home” towns, and they talk of these trips as “going home.” Their identities are in many ways (not all) connected to their sense of home, their sense of belonging.
The concept of “home,” and the need to return home, is quite clearly deeply resonant in Euro-American cultures, but how this concept is expressed within those cultures is as complex as the concept itself. Perhaps the oldest extant tale of homecoming is the Odyssey, reputed to have been composed in the 8th or 7th centuries BCE and reaching canonical form by the mid-6th. The epic tells of Odysseus’ 10-year journey home to Ithaca from the Trojan War only to discover on his return that he is presumed dead and his wife Penelope is surrounded by suitors. He arrives in disguise in order to test Penelope’s loyalty to him, and, upon being satisfied, proceeds to slaughter all the suitors. Not quite the stereotypical “happily ever after” ending, but it does place homecoming itself front and center. No matter what obstacles Odysseus faces on his return (and every single one of his companions dies on the journey), it is his duty to return home from the wars. Home is where he belongs.
The very final act of the Odyssey, after Odysseus has killed all the suitors and reunited with his wife, is to visit his aging father, Laertes, whom he finds tending his orchard. There Odysseus recalls his childhood, how his father blessed him with gifts of fruit trees and grape vines, and how his father nurtured him growing up. All rolled into one is the sense of home as the seat of family, marriage, childhood, learning, pride, and love. But . . . to fully appreciate all of those qualities, the hero must journey away from home, face many challenges, and then return triumphant. It is a useful exercise for you to list all the different ways in which this trope plays out in different arenas.
You can think, perhaps, of tales such as Jack and the Beanstalk. At the outset, Jack and his mother live in dire poverty but by climbing the magic beanstalk, finding treasures in the giant’s castle, and slaying the giant, Jack is able to return home victorious. The tale has a number of variants, the oldest of which portray Jack as a thief and killer of dubious moral character, whereas later versions cast the giant as an evil villain. Regardless, the significance of the tale remains the same. The whole point of the story is that the central character is physically weak but cunning and is ultimately able to return home to his mother in triumph. In many ways, baseball is a sports version of Jack and the Beanstalk.
Baseball players begin their journey at home [plate]. While they are at home it is their job to protect home from aggression by outsiders. If the pitcher pitches a ball directly over home, it is the duty of the batter to hit it away or else be penalized. If batters can hit the ball far enough away from home, they can run to first base where they are safe (for the moment). Either by cunning (stealing bases) or with help from other batters (hits or sacrifices), batters on base can run from base to base, but in the process they are always in danger from players on the other team. The primary goal is to return home. Running the bases is pointless unless the runner gets back home, to the cheers of teammates. Almost all other well-known sports – various forms of football, hockey, basketball, lacrosse, etc. – have linear goals: the idea is to drive forward from one end of a field to the other. Baseball, and allied sports such as softball and rounders, give players a home that they leave, enemies intent on causing them harm while they are away from home, and the ultimate goal of returning home. The version of hide and seek that I used to play as a boy in Australia had a place known as home, which all the players left in order to hide while one seeker counted slowly. Then the hiding ones had to get back to home (and shout “home free”) before being caught by the seeker. Which trope works best for you – driving relentlessly forward to achieve a goal, or leaving home adventurously with the ultimate purpose of finding your way home?
Whether the structure of a sport is linear or circular, there is always going to be a home team who have both a home field and a home field advantage (based on being cheered on by home supporters). Rivalries between home teams within a home town are based on identity. In both Liverpool and Glasgow there are rival football teams whose fans identify as either Catholic (Everton and Celtic) or Protestant (Liverpool F.C. and Rangers) and whose heritage can be traced back to either Ireland or Great Britain. In Buenos Aires the great rivalry is between River and Boca Juniors. Both were originally formed in the docklands of the city by dock workers and construction workers, but early in its history River moved from La Boca, first to Recoleta and then to Belgrano, both upscale barrios, whilst Boca Juniors stayed in La Boca which is a lower-class neighborhood. In consequence, fans tend to view themselves and rival fans in class terms.
Social identity, much like individual identity, is fluid and has many components. Inasmuch as the two can be separated, individual identity concerns how you see yourself as a person (or, more complexly, how you construct your sense of self through the ways other people see you as an individual), whereas social identity concerns the social groups you identify with. Of course, there is considerable overlap between the two kinds of identity and their separation owes more to academic analysis than to some tangible boundary between them that can be defined. The idea of identity itself is hard to define and is a much-contested sphere. In other chapters I take up specific identities such as racial and ethnic identity, gender identity, and so forth (chapters 00). Here my focus is on how we use the attributes of “home” to construct identity.
One way to start the discussion of home is to investigate what it means to be homeless. The word “homeless” in simple terms means “lacking a home,” but the context in which it is used is determined by the way “home” is being construed. In everyday speech in urbanized nations in Europe and the US, “homeless” normally means “not having a regular domicile” in which case the “domicile” is thought of as a permanent structure of some sort (including mobile homes and the like). The definition has to be flexible to accommodate the likes of retired couples who travel around the country in camper vans. They are not homeless. They have a reliable place where they can sleep, eat, wash, and store their belongings. The camper van is a place of security even though the couples may not call it “home.”
Likewise, nomadic and migratory peoples are not homeless even though they lack permanent structures. They have a well-established territory that they range across, and they are confident in their ability to make use of that land for necessities. In their case, “home” is a looser term than that used by sedentary peoples, but it still applies. Nomadic peoples have a home region rather than a fixed abode. It is only when they are forced off this home territory that the term “homeless” becomes (marginally) applicable. You cannot call the !Kung San (Bushmen) of the Kalahari or the Bambuti (Pygmies) of the Congo homeless, even though when they live as foragers they are constantly on the move. They have to be nomadic because their food and water resources change locations annually, and so they have well-known migratory patterns – yet within a “home” territory.
Traveling people, such as the Roma (gypsies), are a curious category halfway between migratory foragers and sedentary urbanites. Nowadays, most Roma (in Europe, Australia, and the Americas) are sedentary, but for centuries they were migratory. My maternal great-great grandparents were Romanichal (English Roma). They spoke Romani and lived in a horse-drawn vardo (gypsy caravan). One of their sons, my great-grandfather, worked in his youth as a trapeze artist with a circus – circus and carnival jobs being traditional for Romanichals – but on one of his circus stops in Oxford he met a didicoy (non-Rom) woman, married her, and eventually settled down in a house there. At the time, mid-nineteenth century, it was not common for Romanichals to settle in a fixed home, but when my great-grandfather married a didicoy he effectively left Roma culture behind, because exogamy (marrying outside the group) was deeply frowned upon – and still is in many Rom communities. My mother often spoke fondly of her “gypsy heritage” and I take pride in it also.
For centuries, traveling people fulfilled vital social functions at a time when moving about the countryside was difficult. The advent of canals and railways in the nineteenth century made inter-regional travel much simpler than before, but, prior to that, traveling people moved from town to town on reasonably regular annual circuits, trading goods and animals, doing seasonal agricultural work, repairing household items, and providing diversions, such as circus acts and fortune-telling. As it became easier for townsfolk to move about and find their necessary services elsewhere (and more reliably), the options for travelers diminished considerably. About 40% of twenty-first-century Romanichals still travel and live in caravans, typically working in seasonal jobs or trading. But nowadays they are mostly viewed by sedentary people with suspicion as potential thieves or vagabonds with evil intent – people without a permanent home are to be feared and frowned upon. The same is true of Roma across Europe, especially in the former Soviet bloc. As many as 500,000 Roma are estimated to have been killed in pogroms and death camps during the Nazi Holocaust.
In the modern world there are multiple disadvantages to the traveling life. Traveling children cannot attend school on a regular basis and, therefore, are likely to receive little in the way of formal education, opportunities for making money are limited, and suspicion of criminality hangs over them. At one time in England, if something went missing it was proverbial to say, “the gypsies must have taken it.” Not having a homeland and a home government also means that the Roma have no centralized voice to defend themselves, and, thus, can easily be used as scapegoats, either as a group or as individuals – something that Jews know from their own centuries of persecution worldwide. But, unlike Jews, who were given a partial resolution with the creation of the state of Israel after the Second World War, Roma do not have a “home” land – and do not want one. They do not have a central narrative, religious or otherwise, of a home that they were once forced to leave and which they yearn to return to. Their traveling is their identity.
Enforced homelessness has been a weapon of choice for hegemonic empires for centuries, and it continues to be used to this day. In the late 8th century BCE, the northern kingdom of Israel came into conflict with the Neo-Assyrian empire over governmental control and taxation, and the Assyrians responded by crushing Israel and deporting its surviving citizens to other parts of the empire. These deported Israelites are now spoken of as the “Lost Tribes of Israel” because they have disappeared from history. Losing their home meant losing their identity.
The southern kingdom of Judah managed to weather the storms with the Assyrians, but then fell afoul of the Assyrians’ successor state, the Neo-Babylonian empire, in the early 6th century BCE. The result was that Judah was crushed and the elite class, which included priests, scribes, and nobility, were deported en masse to Babylon. The temple in Jerusalem, and other significant sites, were razed to the ground. The Judeans who were deported to Babylon had several choices: they could live in the city of Babylon and (potentially) assimilate or they could live in their own enclaves, isolated from Babylonian culture. The Babylonian Exiles are not lost to history, as the tribes of Israel were, but they could have been. Jerusalem and the Temple (and Mount Zion) were absolutely central to Judean identity, such that without them they had lost the physical/geographical locus of what it meant to be a Judean. Their solution, which I have written about at length (Forrest 2021), was to shift their sense of identity as a people from a physical location to the written word, the Torah, which was portable, and, through copying, indestructible. Home was no longer defined as a place but as an idea.
The Judeans, or Jews as they became called, were able to return to Judah at the end of the 6th century, when the Persian empire supplanted Babylon, and the Persian ruler, Cyrus, decreed that conquered peoples could live in their homelands as long as they obeyed the rules. Some returned and some did not. The Exiles who stayed in Babylon established a permanent home for themselves and produced a series of Torah commentaries that remain monumentally important for the development of Judaism. The descendants of this Babylonian community are now much reduced in numbers because of the numerous conflicts in Iraq (where Babylon is located), but they still exist. The returnees were able to build a Second Temple and restore Jerusalem to its former glory. But the identity of the Judeans/Jews had been utterly transformed by their time in Babylon, such that Jerusalem, while remaining central to Jews ideologically, was no longer absolutely critical as a physical home.
The Persians were replaced by the Greeks and the Greeks by the Romans, so that Judah had very few periods when it was free from domination by hegemonic powers. Yet the Judeans fiercely clung to their identity in a way that few other states could under Roman rule, and the Romans tended to be wary of encroaching too strongly on that identity for fear of rebellion. This hands-off approach worked for a while, but in 66 CE a major revolt broke out, which was severely crushed by Vespasian and Titus (both of whom became emperors), and in 70 CE the Temple was destroyed. Jews continued to live in the region, but subsequent rebellions into the 2nd century CE brought major reprisals, the renaming of the province as Syria Palestina, and the mass dispersal of Jews, including a large contingent that moved back to Babylon, where they prospered for centuries.
By the time of the destruction of the Second Temple, Jews had already been living for several centuries throughout the Middle East, Europe, and North Africa, which meant that “home” and “home land” were no longer linked to Jewish identity in an absolutely direct/physical way. Israel, Judah, and Jerusalem remained as critical focal points because their locations are deeply embedded in Hebrew scriptures. But these sites were more significant as ideas than as geographical locations: spiritual homes. For many centuries Jerusalem and surrounds were controlled by Islamic empires, making them physically inaccessible to the huge majority of Jews worldwide. Even so, they remained, and remain to this day, spiritual homes for Jews. Going home is as much a psychological idea as a physical one, rooted in deeply significant narratives.
Traditional Jewish history is an endless cycle of a people being forced away from their “home” in the Middle East and returning to it after a period of exile. Abraham and his descendants settled in Canaan, but they ultimately ended up as slaves in Egypt. Their return to the “Promised Land” under Moses and then Joshua is marked by the defining feast of Passover. Then they were forced into exile in Babylon by Nebuchadnezzar, and returned a generation later. Then the Romans provoked a new Diaspora which continues to this day, but is ameliorated by the creation of the nation of Israel in 1948, which Jews are encouraged to think of as home. One concluding prayer at Passover is L’Shana Haba’ah B’Yerushalayim (“Next year in Jerusalem”), and the Talmud is replete with exhortations to all Jews that it is their duty to live in Israel. This commandment is divisive, however, in that many sects of Judaism believe that the return to Jerusalem can occur only when the temple is rebuilt, ushering in the era of the Messiah. Either way, Israel is home – the place of final return.
In the nineteenth and twentieth centuries, dislocation of “troublesome” peoples from their homes was a pivotal tactic employed by dominant classes in powerful nations. The point of this dislocation is to assert the power of the dominant groups and to destroy the identities of the subordinate ones through assimilation into the dominant identity.
The Lenape in North America were subject to forced removal from their ancestral lands in what is now New Jersey, New York, and Pennsylvania as early as colonial times as settlers sought more and more arable lands. But the most devastating disruptions occurred in the nineteenth century, with the relocation of the southern Unami Lenape to Oklahoma being the most significant for a number of reasons. The Lenape were an Eastern Woodland population, and Unami was the southern dialect of Lenape (Munsee being northern). All aspects of their culture, from food production to clothing and ritual, were rooted in their geographic environment. They hunted in the woodlands and fished in the rivers and ocean. They felled trees to make small dwellings as well as the central Big House for each village. The Big House was the ceremonial center for Lenape populations with deep roots in Lenape religion and creation narratives. And, they farmed the land extensively, planting the famous Three Sisters – maize, beans, and squash.
Nora Thompson Dean (1907 – 1984), also known in Unami as Weenjipahkihelexkwe (“Touching Leaves Woman”) was one of the last fluent speakers of the southern Unami dialect of the Lenape language (her brother was the last), and she was an activist in preserving the traditions of Unami culture. Several of my students worked with her in Oklahoma in the 1970s until her death, and brought her to New York to visit ancestral lands. She was deeply moved to see the ocean and the woodlands, and performed numerous ceremonies to honor the spirits of her ancestors whom she believed still resided in the region, and were lonely (and might fade from existence without the proper attention). For her, returning home was a defining moment in her life.
Over the course of the nineteenth and twentieth centuries, the Unami in Oklahoma had incrementally lost a great deal of their core identity because the geographic environment was unsuitable for their cultural needs. They could not build Big Houses because they lacked the timber to do so, and were rapidly losing the technical knowledge to build one. With the loss of the Big House came the loss, not only of the ritual core of Unami culture, but also of specialized knowledge. Traditionally, Big Houses in the northeast were periodically refurbished and rebuilt, with the older generations passing on the construction techniques to the younger. Big Houses had to be built from only wood – no metal or other extraneous materials were allowed – which meant that their construction required technically difficult joinery skills. The older generations passed on this knowledge to the younger in small steps, annually. Without the necessary wood easily available to build Big Houses in Oklahoma, those skills were lost within a generation.
The annual Big House ceremony was a 12-day affair with participants gathering from a wide area. The rituals involved chanting and singing (in Lenape), dancing, traditional foods, and the use of carved images and other paraphernalia, all of which gradually disappeared over the years. The last full Big House ceremony was held in 1924, although there were loose attempts at a partial revival in 1944 and 1945. Imagine trying to have a traditional US Thanksgiving dinner in Cambodia where it is hard/impossible to get turkey or pumpkins, and no one has an oven. You might be able to evoke some of the spirit of the event, but the central elements that give it meaning are missing. Much the same happened with Unami culture when it was torn from its home. It can be argued that Unami culture would have changed anyway, under the forces of modernization, even if the people had not been forced off their land. No argument. But the types of changes and the manner in which they came about would have been worlds apart. In Oklahoma, the relocated Unami lost their language, their ceremonies, and multiple traditional patterns of life. Compare their situation with the Puebloan peoples of New Mexico, with whom I conducted fieldwork on traditional dances in 1993/94.
The territory that is now New Mexico was explored by Spanish colonists from New Spain (Mexico) in the mid-sixteenth century and colonized by the end of the century. Relations between Puebloans and Spanish colonists were uneasy, but stable, throughout much of the seventeenth century, but then a severe drought that led to widespread famine, combined with the imprisonment and execution of a number of Puebloan ritual specialists by Spanish authorities, led to a well-orchestrated rebellion in 1680. Around 400 Spanish men, women, and children were killed, and the remaining 2,000 were allowed to flee south. In 1692, the government of New Spain sent a force up to New Mexico to take back control and initially was able to come to terms with the Pueblo leaders without the use of force. It took until the end of the century, however, for calm to be restored, and there was bloodshed as Puebloans put up resistance to Spanish control. But, in the long run, the Pueblo Revolt gained the Puebloans a degree of independence from Spanish attempts to quash their culture, and, especially, their religion. Franciscan priests, on their return, allowed indigenous religion to exist side-by-side with Catholicism – as is still the case to this day.
When the Treaty of Guadalupe Hidalgo of 1848 ceded New Mexico to the United States at the conclusion of the Mexican-American War, the status quo continued. Puebloan peoples continued to occupy the sites they had lived in for centuries, farm the land, and practice indigenous religions. Over time, numerous changes came about as a result of modernization, so that now the pueblos have electricity, running water, internet, etc., and these technological changes have had a massive impact on the way of life for Puebloans, much as they have had in the rest of the world. But, Puebloan peoples still speak their indigenous languages, kiva ceremonies continue, and a raft of traditional habits have been preserved, all because they have been able to maintain a sense of “home.” The contrast between the fates of the Unami and the Puebloans could not be clearer.
So far, we have ranged over a number of different ideas of “home” and how the concept is linked to personal, social, and national identity. You can continue to deepen this analysis by examining what “home” means to you. For me it has numerous meanings that all contribute to my sense of identity. I can think of “home” as the place where I was born, my familial home, my lineal home, the place where I raised my son with my wife, where I live now, and so forth. In Argentina, one of my nicknames is gitano (gypsy) because I have traveled all over the world – thus uniting my personal history with my matrilineage. When I am asked, “Where are you from?” I can legitimately claim my Argentino heritage, but it is only one strand of a complex sense of self.
In the U.S., the sense of the importance of returning “home” is deeply resonant. High schools and churches often have a homecoming weekend, and there is a popular notion that being “home for the holidays” is to be expected. Within that same context, popular movies cover the trials and tribulations of returning home on special occasions, exposing the fault lines in identity that such actions make manifest. Home is a trope that covers a slew of meanings and identities across history and cultures, some of which I have teased apart here, especially in the context of returning home. As such, the chapter fits into what is generally called “interpretive” anthropology, the opposite of “scientific” anthropology. Interpretive anthropology does not try to reduce complex social phenomena to a few basic rules – as physical science does – but, instead, does the exact opposite: takes a seemingly simple event or expression, and shows how its analysis expands ever outward in wider and wider circles of meaning, in much the same way that a literary scholar probes a poem to extract increasingly complex meanings.
It is now common for some anthropologists to analyze cultural activities in ways that equate them with texts, and even speak of a “text-in-context” method. But is this any more than a skilled academic exercise? What use is it? Very good questions. I have never been a big fan of interpretive anthropology because its methods are vague and untestable. But this approach does attempt to address the question of why certain peoples do things in their own special ways. If you have any aspirations to effect social change, then getting to the heart of why people are acting in certain ways is a crucial first step, especially if these behaviors appear on the surface to be counterproductive. The question Why? is the most fundamental and most important question in all spheres of inquiry. In truth, it is the only question. But . . . how you go about answering the question is never easy and will always be contentious.
Chapter 17: Once Upon a Time: Storytelling
Once upon a time I had a student who told me that he never read fiction because he was only interested in facts. Back then I said nothing because I pictured the ensuing debate and decided it was not likely to bear fruit. I never did follow up in later years, and now I can’t follow up because I have forgotten his name. I filed away the exchange in my “strange things students believe” mental file and let it drop. What I should have asked at the time (give me some slack – I was a young assistant professor) was: “What do you think a ‘fact’ is?” Unless I am sadly mistaken, he probably meant empirical facts such as the boiling point of water or the year Beethoven was born: things that can be verified easily. Furthermore, I doubt that the only things he read were lists of empirical facts. That would be thoroughly tedious. I expect he preferred to read books that were grounded in empirical facts, rather than stories about people who never existed or events that never happened.
Bookstores and publishers divide books into fiction and non-fiction, but that is not a terribly useful way to classify books. They make the distinction for practical reasons, not because they have deep philosophical convictions about fact and fiction. If you have read Moby Dick, you will know that Melville has a story to tell about Ahab and the white whale (that never happened), but the tale is heavily interlaced with chapters concerning navigation, and winds, and economics of whaling, and on and on and on. These chapters are not fiction at all. By the same token, I have read more than my fair share of books classified as non-fiction that are riddled with falsehoods and half-truths.
“What is truth?” Pilate asked Jesus at his trial (if we can believe John’s gospel), but he did not wait for an answer. Smart man. But, indeed – what is truth? If Pilate had waited for an answer, he and Jesus would probably be arguing until they were both old men. The question leads in endless directions. The path I want to take in this chapter is that while fictional stories are not true accounts of events, they contain truths; if they did not, no one would read them. The events in Shakespeare’s Macbeth bear almost no relation to the events in the life of the historical Macbeth. But, we do not go to a performance of Macbeth to learn historical facts; we go to see the tragic consequences of blind ambition. We go to the movies to see what befalls bad guys in the end, or how love works. Fiction explores the nature of human relationships within a framework that we know is not real, but we expect the stories to tell us something that is true about those kinds of relationships. In critiquing those stories we can hold them up to a legitimate standard of truth concerning how relationships work, and ask whether those stories helped us understand those relationships better.
One of the huge advantages of storytelling is that it can make the abstract concrete. I expect that if Pilate had waited for an answer, Jesus would have said, “Let me tell you a story.” In Luke 10 a lawyer asks Jesus how to gain eternal life and Jesus replies, “You shall love the Lord your God with all your heart, with all your soul, with all your strength, and with all your mind; and your neighbor as yourself.” The lawyer then asks, “Who is my neighbor?” and Jesus replies, “Let me tell you a story” and tells the tale of the Good Samaritan. When we have abstract ideas put into concrete story form, they are easier to understand and process than when presented to us in bald statements. They are also more engaging. We like stories.
The story of Snow White has always struck me as a tale with an underlying message that is so blatant that it is almost embarrassing, yet when I used to analyze it in class, some of my students were shocked at what I had to say (and would frequently argue with me). For the moment I will just focus on three elements: the queen, her stepdaughter, and the mirror. Every morning the queen looks in the mirror and asks, “Who is the most beautiful woman in the world?” The mirror is “magic” and so every day replies, “You are.” Stop right there. What does that scene remind you of? Don’t you look in the mirror every morning? Don’t you want to know, “Do I look good today?” All mirrors are magic; they all tell you what you want to hear (or show you what you want to see) – until one day they don’t. When you get to be my age, you’ll know that there is a day when you look in the mirror and say, “When did I get to look so old?” All right, I am being over-general, but I think my point is legitimate: if we are not careful, mirrors can stoke our vanity. The queen in Snow White is a classic narcissist, and the tale is a clear warning about where narcissism leads (spoiler alert – your own destruction).
Here is a translation of the beginning of the original Grimm tale:
Once upon a time in midwinter, when the snowflakes were falling like feathers from heaven, a queen sat sewing at her window, which had a frame of black ebony wood. As she sewed she looked up at the snow and pricked her finger with her needle. Three drops of blood fell into the snow. The red on the white looked so beautiful that she thought to herself, “If only I had a child as white as snow, as red as blood, and as black as the wood in this frame.”
Soon afterward she had a little daughter who was as white as snow, as red as blood, and as black as ebony wood, and therefore they called her Little Snow-White. And as soon as the child was born, the queen died.
A year later the king took himself another wife. She was a beautiful woman, but she was proud and arrogant, and she could not stand it if anyone might surpass her in beauty. She had a magic mirror. Every morning she stood before it, looked at herself, and said:
Mirror, mirror, on the wall,
Who in this land is fairest of all?
To this the mirror answered:
You, my queen, are fairest of all.
Then she was satisfied, for she knew that the mirror spoke the truth.
Snow-White grew up and became ever more beautiful. When she was seven years old she was as beautiful as the light of day, even more beautiful than the queen herself.
One day when the queen asked her mirror:
Mirror, mirror, on the wall,
Who in this land is fairest of all?
It answered:
You, my queen, are fair; it is true.
But Snow-White is a thousand times fairer than you.
The queen took fright and turned yellow and green with envy. From that hour on whenever she looked at Snow-White her heart turned over inside her body, so great was her hatred for the girl. The envy and pride grew ever greater, like a weed in her heart, until she had no peace day and night.
(Translation source: https://www.pitt.edu/~dash/grimm053.html)
The tale sets up two kinds of mothers: nice and nasty. In general, the biological mothers in the Grimms’ tales are nice and the stepmothers are nasty, but I think you can see that the biological mother/stepmother distinction is a simple literary (or storytelling) device that makes you think, “Stepmother is the nasty mother.” Notice that the king (the father) appears only once at the very beginning of the tale; after that he vanishes. This is a tale specifically about mothers and daughters.
When Snow White is a little girl, she is not a threat to her mother, but as soon as she begins to mature, she becomes a rival. One day the mother looks in the mirror and it tells her, “Your beauty is fading because you are aging, and your daughter’s beauty is just beginning.” Nice mothers accept this reality as a fact of life; nasty mothers get jealous. The consequences of the jealousy are dire because time marches on regardless of feelings. Though the queen tries to get rid of Snow White, she cannot, and in the end it is the queen who dies. Moral: accept the cycle of life or your emotions will destroy you.
I am not saying that this is all there is to the tale of Snow White, but it is a major theme. Bruno Bettelheim, in The Uses of Enchantment, explores Snow White and a host of other tales. Bettelheim probably plagiarized his ideas from A Psychiatric Study of Myths and Fairy Tales: Their Origin, Meaning, and Usefulness (1963, 1974 rev. ed.) by Julius Heuscher, so, if you want to study further, take your pick. There are plenty of folk tales that warn of the dangers of copying someone else’s work without giving credit! There are numerous tales analyzed in both books. I will move on to Jack and the Beanstalk, mainly because I want to talk about baseball.
Jack and the Beanstalk is an example of the general theme of “home” found in numerous Grimm tales. Jack starts out at home with nothing but some magic beans, and a good heart. When the beans grow, he leaves home, climbs the beanstalk and finds a place that is filled with dangers, but also has safe places to hide to be protected from the dangers. With quick wits, Jack scampers from safe place to safe place, all the while being chased down by the giant, until eventually he is able to return home with treasure, and everyone is happy (except the giant). I should not need to spell it out. Baseball is Jack and the Beanstalk in sports clothing. The batter starts at home (with nothing but his wits), leaves home and goes from safe place to safe place, with the ultimate goal of returning home victorious.
This kind of analysis is called interpretive anthropology, akin to literary analysis. The reigning monarch of interpretive anthropology for a long time was Clifford Geertz, who looked upon interpretive anthropology as a process of peeling back layers upon layers of meaning in culture, a process he called “thick description,” a term borrowed from the philosopher Gilbert Ryle. Geertz himself points out that the only test of the validity of an interpretive analysis is plausibility, and plausibility is not much of a test given that a con man’s tale rings true (Geertz 1973). Interpretive anthropology may be a big con – as Geertz’ detractors are quite happy to remind you. I would not argue with Geertz’ critics who claim that his interpretations are ethnocentric, but that is because anthropology is ethnocentric. We take other cultures and interpret them in terms we understand. Anthropology is the product of North American and European culture. Ethnocentrism is built in. For the purposes of this chapter, this criticism is irrelevant because I am turning interpretive anthropology on our own culture.
I would like you to consider what tropes show up repeatedly in media and what they say about the culture that consumes those media. Home is patently a major one. The baseball/Jack and the Beanstalk image of the hero leaving home, facing danger, and returning home triumphant is only one form: there are others, because our culture has as a major component the notion that you grow up, leave home, and start your own home. Characters who do not leave home are mocked as failures (Howard in The Big Bang Theory, Wayne in Wayne’s World, Tripp (as well as Demo and Ace) in Failure to Launch, etc.). Yet, returning “home” for the holidays once you have created a new home for yourself is a complicated wrinkle. Even though you have launched yourself into the world and created your own home, there is still an obligation to return “home” to your parents for big holidays. These stories always have happy endings, but the veiled message of all the complications that arise along the way is that you are better off in your new home than staying with your parents.
Happy endings are also monumentally popular. “Happily ever after” has been around for several centuries, but has not always been the only choice. In Shakespeare, and stretching back to ancient Greece, there is a balance between comedy and tragedy. Both kinds of plays have humorous moments and serious moments, so that we should not think of them in terms of the modern meanings of “comedy” and “tragedy.” Rather, we should focus on their beginnings and endings. They are binary opposites in this sense. Tragedy begins with an orderly world that descends into chaos (and death), whereas comedy begins in chaos, which eventually resolves into order (usually marriage). The Merchant of Venice is a mostly serious play, yet Shakespeare called it a comedy because its characters are mired in chaos, but eventually all the chaos is resolved and the play ends in marriages and a version of happily ever after.
When a Shakespeare play is titled “The Tragedy of (fill in the blank)” you know that the named person will die at the end, as well as a host of characters associated with that person. You go to see the play for the first time, not because the death of the main character at the end is a surprise, but to be entertained by the specific details. Tragic heroes have tragic flaws that cause their downfall: Othello (jealousy), Macbeth (ambition), Romeo (impulsiveness), Hamlet (indecisiveness). Comedic heroes are not quite so easily classified, but they all champion courage, honesty, loyalty, and steadfastness in one way or another. Thus, both types of plays are pushing life lessons: be jealous, ambitious, impulsive, or indecisive, and you will fail; be courageous, honest, loyal, and steadfast, and you will succeed. Whether these sentiments pass the truth test is highly debatable, but we can probably agree that these values were important in Elizabethan England, which is why they were the dominant themes for Shakespeare.
What you may notice concerning Shakespearean tragedy and comedy is that the struggle of good against evil is not prominent in either. Villains are routinely punished, but the tragic heroes, who are not evil people, go down with them. Evil is a secondary issue: human weakness is the main focus. In contemporary popular media, however, the struggle of good against evil is front and center. Why? If you think about “good” and “evil” at all you know that personifying them as opposing forces exemplified by “good people” and “bad people” is ridiculously simple-minded. Yet movie franchises like Star Wars perpetuate the stereotypes. Good is light, and evil is dark. Once in a while, characters change sides (for the sake of plot twists), but the sides are there and clearly defined. Superheroes are not tragic heroes. They all have their flaws, but they fight to overcome them because they are good, and they are working for the good against evil.
Why are media obsessed with happy endings nowadays? Good always triumphs over evil. Couples who are meant to be together will overcome all obstacles. Hard work and persistence will pay off in the end. You can be anything you want to be. Add your favorite trope to the list. None of these statements is remotely true as a generalization, so why are they enduringly popular as story structures? When you watch a romantic comedy, you know before it starts that you will meet a man and a woman who are the central characters. They may fall in love at the start, they may not. But you know that throughout the movie there will be scenes where they are obviously destined for each other, but the plot will lead you through mounting difficulties until towards the ending it will look as if they cannot be together, then, hey-presto, something unexpected happens and they are together (and they lived happily ever after) – THE END. As with Shakespeare, the interest is in the details. I get that. But what makes that underlying structure satisfying to us? Why have we lost interest in classic tragedy and now cling only to classic comedy – meaning comedy in the Shakespearean sense of “things work out in the end” (not a story that is amusing)? That is, Star Wars (especially the original trilogy) is comedy in the Shakespearean sense.
The Russian philologist Vladimir Propp, in the Morphology of the Folktale (1928), tried to demonstrate that there were only a few basic plots carried out by a few basic characters, and that all folktales could be reduced to these plots and characters. This work largely went unnoticed outside of Russia until it was translated into other languages in the 1950s and 1960s, when it had a huge impact on literary analysis and media studies, and it still does. Here is a sampling.
Plot devices:
ABSENTATION: A member of the hero’s community or family leaves the security of the home environment. This may be the hero themselves, or some other relation that the hero must later rescue. This division of the cohesive family injects initial tension into the storyline. This may serve as the hero’s introduction, typically portraying them as an ordinary person.
RETURN: The hero travels back to their home.
INTERDICTION: A forbidding edict or command is passed upon the hero (‘don’t go there’, ‘don’t do this’). The hero is warned against some action.
VIOLATION of INTERDICTION: The prior rule is violated; the hero ignores the command or forbidding edict. Whether committed by the hero (by accident or temper), by a third party, or by a foe, the violation generally leads to negative consequences. The villain enters the story via this event, although not necessarily confronting the hero. They may be a lurking and manipulative presence, or might act against the hero’s family in the hero’s absence.
Propp lists 31 plot devices, many of them paired as above, so that the plot can be laid out as a series of initiating actions which are resolved at the end. For example, the hero sets out on a journey and is told not to do something. Of course, he does what he is not supposed to and in the end returns home successful.
Stock characters:
The villain — an evil character that creates struggles for the hero.
The dispatcher — any character who illustrates the need for the hero’s quest and sends the hero off. This often overlaps with the princess’s father.
The helper — a typically magical entity that comes to help the hero in their quest.
The princess or prize, and often her father — the hero deserves her throughout the story but is unable to marry her as a consequence of some evil or injustice, perhaps the work of the villain. The hero’s journey is often ended when he marries the princess, which constitutes the villain’s defeat.
The donor — a character that prepares the hero or gives the hero some magical object, sometimes after testing them.
The hero — the character who reacts to the dispatcher and donor characters, thwarts the villain, resolves any lack or wrongs and weds the princess.
The false hero — a Miles Gloriosus figure who takes credit for the hero’s actions or tries to marry the princess.
You can see that Propp is turning the study of folktales into something akin to natural science. Isaac Newton took all kinds of motion – falling objects, planets, cannon balls, horses pulling carriages, ice skating, bowling, etc. – and reduced all the complexity to three laws of motion plus a law of gravity that can explain every one of these motions: all of them. This method is called “reductionism.” If you want to aim a cannon or shoot a rocket into space, these laws, obtained through reductionism, are invaluable. Can reductionism be applied to storytelling? As always, the answer is Yes and No. You can reduce stories to basic plot devices and stock characters, obviously, and the process may be helpful in a crude way to find the common features in a kaleidoscope of stories. Then what? One possibility is that you find that the same basic story repeats itself over and over and over in a culture, and you can then ask why that basic story resonates so much. Does the story reveal something basic in the culture?
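To make the reductionist move concrete, here is a minimal sketch in Python (my own toy example, not Propp’s notation or data) showing how two tales with very different surface details collapse to the same ordered list of plot functions once the details are stripped away.

# Two tales recorded as (plot function, surface detail) pairs.
tale_1 = [("ABSENTATION", "the woodcutter's daughter leaves home"),
          ("INTERDICTION", "'do not stray from the path'"),
          ("VIOLATION of INTERDICTION", "she strays and the wolf appears"),
          ("RETURN", "she is rescued and returns home")]

tale_2 = [("ABSENTATION", "the farm boy leaves his home planet"),
          ("INTERDICTION", "'you are not ready to face him'"),
          ("VIOLATION of INTERDICTION", "he confronts the villain anyway"),
          ("RETURN", "he returns to the rebels in triumph")]

def skeleton(tale):
    """Strip away the particulars and keep only the sequence of plot functions."""
    return [function for function, _detail in tale]

print(skeleton(tale_1) == skeleton(tale_2))  # True: same underlying structure, different details

The equality test is Propp’s claim in miniature: at the level of plot functions the two tales are identical, and everything that makes each one worth telling lives in the second element of every pair.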
Ruth Benedict, who was a student of Franz Boas at Columbia, thought that all cultures could be divided into two categories: Apollonian and Dionysian. In Patterns of Culture (1934) she follows Nietzsche’s model of the Apollonian and Dionysian, laid out in The Birth of Tragedy (1872), to divide all cultures into one or the other. This is, of course, another type of reductionism. The Apollonian, exemplified by the SW pueblos of North America, is characterized by a desire for harmony, a preference for problems to be resolved by the community acting as a whole, and a suppression of individual (heroic) action. Dionysian cultures, as exemplified by the Kwakiutl of the NW coast of North America, are, by contrast, enamored of struggle and chaos, and love outrageous, heroic figures.
If you want to be super-reductionist you can put this all together and say that Dionysian cultures will tell stories in the tragic vein (heroic struggle leads to chaos), and Apollonian cultures will prefer the comedic (creative action produces order from disorder). This kind of analysis can be really seductive at first, but I know you are going to have a “wait-a-minute” moment sooner or later: “that can’t be right – what about . . . ?” And, you would be correct. The devil is in the details. Hamlet is not just about a hero who cannot make up his mind, a flaw that leads to his destruction. Nor is he simply a hero who sets out on a quest which is dogged by misfortune. He has a voice, and it is a unique voice. His indecision is summed up early in the plot by a famous speech. The speech does not go something like, “Should I kill myself or not? Tough question. On the one hand . . .” No – he says, “To be or not to be . . .” and we remember it because it is the perfect summation of his particular dilemma in a highly specific way of speaking.
We now have our own dilemma. Should we follow the reductionist path with Propp and Benedict, finding basic underlying structures, or should we be interpretive like Geertz, peeling off layer by layer to reveal riches at every level? We need not limit interpretation in that fashion either. We can pile one interpretation on another, making the analysis yet richer and more complex. The infuriating answer is that we should probably do both, rather than pitch our tent in one camp or the other. In our training as doctoral students we typically get pushed on to one side or the other, and, if we are not careful, we end up making our careers out of that one point of view. The trouble is that advancement in doctoral training and professional careers usually requires us to take a theoretical position and stick to it.
You will notice that in several chapters in this book I present opposing points of view without preferring one over the other. In part that is because this book is an introduction to ideas, but it is also in part because there are times when seemingly conflicting theories can apply. Both reductionist and interpretive methods of analyzing tales can work depending on what your reasons for analyzing them are. If you tell me that you really like Groundhog Day, and I ask you to explain why, it is not very helpful to tell me that the story of “man meets woman, woman rejects man, man succeeds in marrying her in the end” (that is, the comedic form) really appeals to you. All you are telling me is that a certain basic structure resonates with you. But, why Groundhog Day in particular? Why not Sleepless in Seattle or Notting Hill? What specific details in Groundhog Day appeal to you over the other movies? When I know the particulars, we can have an intelligent conversation, maybe teasing out layers of meaning from the movie’s different scenes. We can also ask precise questions, such as, “How many days was Phil in Punxsutawney in total?” “How long would it take for him to become a skilled ice carver, piano player, and also develop medical and language skills?” “Why doesn’t Phil escape one morning before the blizzard hits?” These are questions relating only to that movie, not to all movies in the overall genre. If, on the other hand, you say that you really like romantic comedies in general, then we can have a discussion, but it will be a different discussion. That discussion will be rooted in fundamental structural and philosophical elements, such as, true love, destiny, and happy endings, that are common to all romantic comedies.
There is a good case to be made that the kind of reductionism that Benedict uses in Patterns of Culture is flawed, and I would agree. It is not legitimate to boil down a culture to a few defining features, and it is absolutely not legitimate to have two categories of culture only. The normal distribution (the bell curve) that is the mainstay of the statistical analysis of any human variable (most people clustering in the middle with smaller and smaller numbers of people at the edges), has to apply to members of a culture. In Benedict’s description of the SW pueblos she suggests that all members of a pueblo work together to promote peace and harmonious relations. But that cannot be true. Maybe most of the members work that way, but I can guarantee that you will find at least one or two misfits who do not like the norms. What is more, there are bound to be times when the normal social structures break down. In 1680, the bulk of the pueblos rebelled against Spanish colonizers, killed 400 of them and drove the remaining 2,000 out. Does that sound like peace-loving Apollonians to you? Sounds more like Dionysians to me. On the whole, pueblo peoples are peaceable and calm of spirit, but they can be pushed too far.
Back in Benedict’s day, anthropologists glibly talked about Tikopia culture, or Trobriand culture, or Kwakiutl culture as if they were bounded and monolithic. Cracks in that way of thinking about cultures appeared quite early on, and nowadays you’d be hard pressed to find an anthropologist talking in those terms. E.B. Tylor wrote the classic exposition of what culture is, one that stood for a very long time: “Culture, or civilization, taken in its broad, ethnographic sense, is that complex whole which includes knowledge, belief, art, morals, law, custom, and any other capabilities and habits acquired by man as a member of society.” (Tylor 1871)
Now we tend to examine local populations from an internal frame of reference, but we also situate them in a larger social, political, and economic context. James Clifford and George Marcus are particularly noted for their work in this regard (Clifford and Marcus 1986). There is now, for example, multi-sited ethnography, as discussed in George Marcus’ article, “Ethnography In/Of the World System: the Emergence of Multi-Sited Ethnography” which uses traditional methodology in various locations both spatially and temporally to gain greater insight into the impact of world-systems on local and global communities.
While these critiques of the concept of culture are fair, they need not hinder us too much when analyzing our own culture. When I taught Benedict’s work in New York I used to ask my students, “Is US culture Apollonian or Dionysian?” It was a trick question, of course, to get them mulling over the limitations of thinking in such simplistic terms. Yet, the concepts are not completely worthless either. Maybe ask instead, “What parts of the culture are Apollonian/Dionysian and when?” or “When do stories about heroes leaving home to seek their fortune and then returning home triumphant resonate with you?”
Chapter 18: Is it Art? Anthropology of Aesthetics
I assume that you have been to your share of art galleries and museums. Maybe you have even studied some art history. If so, you were probably taught that European art went through several identifiable phases to get where it is today. There are plenty of general texts on the history of European art following a standard line. Kenneth Clark’s 1969 television series, Civilisation, which can now be found on YouTube, is a painless way to get the traditional viewpoint if you are not aware of it. Briefly, the Middle Ages in Europe, sometimes spoken of as the Dark Ages, particularly in the early phases after the Fall of Rome, were great for architecture, but not so good for visual art. You get soaring Gothic cathedrals that are inspiring, but the visual art is flat, lacking perspective, and only very crudely representational. Then, wonder of wonders, the Renaissance breaks forth in Italy. Artists rediscover the sculptures of ancient Greece and Rome, and are inspired by them to turn back to their emphasis on a representation of the human form that matches what is found in the natural world, with the addition of studies of anatomy to improve the depiction of bones and muscles, plus a greater concern with the whole kaleidoscope of human emotions and actions, with scientific perspective added on as a bonus.
From the Renaissance onwards, European art progressed through the Baroque with its intense concern with dramatic lighting, through the Rococo with its highly ornamental and theatrical style of decoration combined with asymmetry, and an interest in trompe l’oeil frescoes that trick the eye into seeing three dimensions where there are only two, into the Romantic era and its worship of nature, producing serene landscapes, violent storms at sea, and animals at war and peace. Then the Industrial Revolution exploded everything with the dominance of materialism, and the invention of photography. Photography took away the absolute need for the rich and powerful to have enduring likenesses for posterity painted in oils, and created an opening for artists to explore new modes of art.
All right, that’s the thumbnail version that even contemporary art historians will take issue with, but it is an enduringly common approach. Clark stops in the early twentieth century. I’m not sure whether he thought that European art went to hell in a hand basket at that point, or whether he simply lost interest, but it is in the nineteenth century that anthropology lends a hand. By mid-nineteenth century, artists were seeking new ways to be original in their art which led to experiments with light and color, based on the premise that light and the perception of light are not fixed realities, but ever-changing variables. Hence, Monet in 1892/93 painted a series of thirty images of the façade of Rouen cathedral under changing natural lighting conditions at different times of the day and year, to show that our perception of objects is not static, and that lighting alters not only perception but mood.
Following on from Monet’s Impressionism, we get a raft of departures from classical representation including Pointillism, Cubism, and Futurism, fracturing old ideas concerning representation, and leading ultimately to completely Abstract art with no pretense of representing anything. I expect you are familiar with the likes of Mondrian’s mature works, and maybe you have wondered what the point is of painting canvas after canvas of colored squares and rectangles, and even wondered out loud: “Is it art?” To answer that question, you have to define “art” in the first place. There we have a problem that can be both egocentric and ethnocentric. You also have the problem of determining whether a piece of art is any good, or not.
Also in the nineteenth century, anthropologists (and archeologists) were stuffing European museums with artefacts from outside the Euro-American mainstream. At first, the masks, sculptures, fetishes, totem poles, and whatnot were seen as curiosities, and, at best, were called “primitive art” fitting into a universal pattern of the evolution of art and culture. European art’s evolution was driven by the twin urges of “improvement” and originality. “Primitive art” was seen as the kind of art that Europeans might have produced in ancient times, before they could do better. This attitude mirrored nineteenth-century anthropological theories of the general evolution of all cultures in which there was hypothesized a universal progression from primitive through barbaric to civilized (that is, us), so that if we wanted to know what our primitive ancestors were like, all we had to do was observe contemporary cultures living in primitive or barbaric conditions. Thus, it was asserted, European art before the evolution of modern improvement and sophistication, would have looked like the artefacts from Africa or Oceania in anthropological museums – crude and unsophisticated. The drive for originality in European art, coupled with the changing theories of what art could be, changed that perception.
At the turn of the twentieth century, artists started seeing these museum artefacts as “raw” and “energetic” and “authentic,” rather than as bad art or unskilled art, and began studying them in order to copy their supposed appeal to the emotions. Of course, they had no idea what these representations and designs actually signified in their original cultural contexts, but they sparked an interest that resulted in a number of new directions for the European art market. Pablo Picasso’s Les Demoiselles d’Avignon (1907), for example, shows five naked prostitutes with fairly typical angular body shapes in modernist style, but two of them have faces roughly resembling possibly Oceanic, Iberian, or African masks (his sources are much debated). This painting (figure 00) is widely considered to be the dawn of cubism because of the marked angularity of the bodies, but the masks/faces signal an interest in “primitive” art as well. Originally, Picasso had painted all the faces in the same manner, but had subsequently changed two of them to reflect the “primitive” style apparently because he had seen masks in a museum and was inspired by their form to rework his original vision. Subsequently, Picasso produced a number of paintings using “primitive art” for inspiration (in his so-called “African period”) and a branch of Primitivism entered the European art scene.
In part to counter the popular (mis)conception of museum artefacts, by artists and others, as “savage” and “crude” but “honest,” Franz Boas published Primitive Art in 1927. It was a best seller in its day, and has remained in print ever since, not least because it is chock full of images. I taught a class on the anthropology of art and aesthetics for twenty years, using Primitive Art as an assigned text, and the visual arts students who took the course were captivated by the images (much more than the text, unfortunately). These students still held on to the idea that the Western art market that they were training to be part of was bourgeois, biased, and inauthentic, while non-Western images were fresh and authentic. My job was to lead them down a more nuanced path via Boas’ analysis, whilst also pointing out his errors.
The statues and masks from Africa and Oceania that European artists (and Boas) were concerned with were certainly not produced as art in a Western sense: objects produced to be displayed and admired for their aesthetic value alone. But they were treated as such in Europe. Putting these objects into European museums transformed them into “art” from a Western perspective (objects to be looked at and admired), whereas in their original context that was far from their purpose. Primitive Art addresses a number of questions concerning what the producers of “primitive art” were attempting to achieve with the objects which they produced. One of Boas’ prime insights was that the makers were neither incompetent nor incapable of doing better. Rather, they had sensibilities that were totally different from Western artistic conventions. For example, he argued that in North American northwest coast two-dimensional representations of animals there were two governing principles at work. One was that all the key features of the animal had to be present; the other was that these features could be displayed symbolically, and did not have to “look like” the actual features of the living animal, nor be ordered exactly according to their position on the actual animal. One common convention was to represent the animal splayed open so that the entire surface of the animal could be seen at once (figure 00). Boas thought this style of representation may have been inspired by seeing animal skins spread out for drying. More symbolic representations, on blankets for example (figure 00), had multiple indigenous interpretations. They were frequently said to tell stories, but the “reading” of the stories could be quite different from observer to observer.
Before pursuing specific topics, Boas made the crucial distinction between art and aesthetics. Primarily, for Boas, art was a product of human action. Thus, a sunset or a landscape could be beautiful (that is, aesthetically pleasing), but neither was art. Art had to have three qualities: it had to be made by humans, it had to involve skill in production, and it had to be aesthetically pleasing. Boas focused mainly on visual art, but he did also include performing arts, literature, and even food in his general category of art. One of his fundamental concerns was to break the ethnocentrism of the European colonial vision of non-Western art as inherently inferior. He was also intent on showing that European artists’ conceptions of such works were misguided. They were not “raw” or “immediate” or any such thing, but followed their own indigenous rules of production and appreciation, in the same way that Western art did; the rules were simply different, and understandable when broken into basic analytic categories, such as, form, symbolism, pattern, representation, and style.
The bulk of Primitive Art is taken up with pursuing these analytic categories in detail, with copious illustrations to explain the points. For his times, Boas’ agenda was simple. The nineteenth century theory that all cultures went through the same evolutionary phases (such as, savage to barbaric to civilized) had been overturned by Boas and replaced with the concept of cultural specificity: that is, each culture was an integrated whole, subject to its own unique historical and environmental conditions making it what it was, and not evolving in specific directions according to fixed laws. The nineteenth century evolutionary model seemed to work for many cultural systems such as technology and kinship, but art did not appear to fit well into any theory.
Art conformed to the various theories of human development of the nineteenth and twentieth centuries only in the most convoluted way. Let’s take technology as a simple example. Given limited archeological information, it had been easy in the nineteenth century to craft an evolutionary sequence from stone tools to bronze tools to iron tools because the technological sequence is obvious. Stone tools can be made with nothing but other stones to knock off chips to make an edge: not even fire is needed. The control of fire makes the smelting of metals possible – first of metals with low melting points such as copper and its alloys, and then of iron, which requires much higher temperatures than copper to smelt and work. Along with increasing complexity of technology comes an increase in complexity of society from hunting bands to village organization to cities. When you aggregate a number of cultural features together – government, kinship, technology, economic systems, religion, and so forth – a satisfying, but false, evolutionary sequence from simple to complex can emerge. The trouble is that a satisfyingly general evolutionary sequence for art could not be articulated.
One possibility was that the earliest humans had carved simple geometric shapes which had evolved into more sophisticated representations over time. But the archeological evidence did not support this theory wholeheartedly. Some cultures had moved in the opposite direction from more representative forms to more geometric ones. The “simple” solution to this academic conundrum was to leave art out of the equation altogether. Although Boas advanced the anthropological analysis of art and aesthetics considerably at the dawn of modern anthropology, the topic remained decidedly secondary for almost half a century. When I proposed studying the aesthetics of everyday life for my Ph.D. fieldwork in 1978, several members of my doctoral committee objected that the subject was trivial and unimportant, and when I applied for jobs with new Ph.D. in hand there was not a single job advertised for a specialist in art and aesthetics. Fortunately, a college that blended traditional liberal arts with conservatories of the arts took pity on me, and I found a home. Since then, the anthropology of art has blossomed although it remains a minor interest within the discipline.
The failure of aesthetics to attract attention in anthropology is both surprising and obvious: surprising, because aesthetic sensibilities are human universals; obvious, because aesthetic behavior is notoriously difficult to document. Aesthetic behavior is based on the human senses, which, of course, are universal. The difficulty arises because sensory experiences are deeply personal and subjective, and in some cases (smell in particular) our vocabulary for communicating about sensation is limited. At least with sight and sound we have photography, videography, and audio recording for documentation, but with taste, smell, and touch, we have to rely on crude descriptions that are not remotely adequate. Yet smell is arguably the most evocative of our senses. Catch a whiff of an aroma that you have not smelled in twenty years and you are instantly transported to a time and place where that smell was important to you or that has strong associations for you. Why, then, have we virtually no smell words to describe those scents? We have words such as “reek” and “stench” for bad smells, but they are extremely general, as are “perfume” or “aroma” for the good ones. It’s no great mystery, therefore, that tastes and smells don’t make it into standard ethnographic descriptions of cultures. We don’t have the words for them.
The ancient Greeks, notably Plato, divided human experiences into noeta (things that could be conceptualized) and aestheta (things that could be perceived with the senses). For Plato, the distinction was of critical importance, because the things that we can conceptualize can be perfect, but the things that we perceive with our senses are prone to error. The noeta included mathematics, logic, and geometry, as well as theoretical articulations of systems of government, economy, and kinship. The aestheta were the way those conceptions played out in the world around us. Thus, we can conceive of a circle in mathematical terms such that it is purely theoretical and can be manipulated in absolutely perfect ways. The center of a circle is a point in space with no dimensions; the radius of a circle is a line drawn from the center to the circumference. Both the radius and the circumference have one dimension only. When we draw a circle it is imperfect because we cannot draw lines with one dimension only. They have to have some width, otherwise we could not see them; and we certainly cannot see a dimensionless point. When ideas are turned into sensory objects, they become imperfect (or, in Platonic terms, they lack truth). Ironically, the aestheta also lack the beauty of the noeta for the same reasons. Real objects are never as beautiful as imagined ones.
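For readers who like to see the Platonic point written out, here is a minimal formulation in mathematical notation (my own gloss, not Plato’s): the ideal circle is a set of dimensionless points, whereas anything we actually draw is a band with some thickness.

% The ideal (noetic) circle: a set of dimensionless points with center (a, b) and radius r.
\[
  C = \{\, (x, y) \in \mathbb{R}^2 \;:\; (x - a)^2 + (y - b)^2 = r^2 \,\}
\]
% Any drawn (aesthetic) circle is instead a band of some thickness \varepsilon > 0.
\[
  C_{\varepsilon} = \Bigl\{\, (x, y) \;:\; \bigl|\sqrt{(x - a)^2 + (y - b)^2} - r\bigr| \le \tfrac{\varepsilon}{2} \,\Bigr\}
\]

The ideal set C has no width at all, which is exactly why no pencil, brush, or chisel can ever produce it.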
Modern mathematicians and physical scientists are not so far from the Platonic model in that they see beauty in their findings when they can abstract them from real things in the day-to-day world. A pure mathematician will tell you that there is beauty, sometimes elegance, in a new theorem; when a physicist discovers a regularity (what used to be called a law) governing all interactions of a certain kind, it is declared a thing of beauty. Such perfect laws, or mathematical theorems, underpin the messy, sensual world around us where the laws and theorems get all tangled up together and lead to confusion.
Ancient Greek artists took these Platonic ideals of noetic beauty and sought, as best they could, to translate them into real objects. They took such mathematical concepts as the Golden Mean, the equilateral triangle, and the circumscribed square, and tried to sculpt human figures that embodied these ideals in such a way that they expressed beauty. They were doomed to failure, of course, but they kept pursuing the goal.
In the course of time, we dropped the word “noeta,” and “aesthetic” came to mean more than simply “sensory.” For us, the aesthetic is both sensory and beautiful. The Platonic division is still with us in some form, however. In the modern world, it is common to regard the scientific reduction of empirical observations to equations as the highway to truth, and all other inquiries as interesting – perhaps – but secondary. When nineteenth-century anthropologists created theories that explained the evolution of all cultures, which allowed them to place every living culture on a fixed evolutionary sequence, they were following a scientific model. Hence, anthropology was classed as a social science. The problem was, and is, that it is unethical to conduct controlled experiments on living populations, so we have to make do with data collected from fieldwork, and try to find patterns in it.
Finding globally applicable patterns in kinship, systems of government and economy, religion, marriage, and so forth has had its problems, but there have been some positive outcomes, nonetheless. Some kind of scientific cause and effect appears to be at work some of the time. For example, when a culture changes its mode of production from nomadic foraging to sedentary agriculture all manner of possibilities open up. Nomadic foragers build open fires to cook food and fire pottery, but sedentary agriculturalists can build permanent ovens that get hotter than open fires, and with them they can fire durable pottery and smelt ores (as well as bake yeast bread, and roast pieces of meat). Likewise, statistical correlations can produce insight now and again. For example, cultures under population pressure are more likely to tolerate homosexuality than those that are not. Suicides rise in a population under economic stress. These are genuinely social variables, not personal or psychological ones. That is, you may be able to predict the rise in suicide rates in a community based on economic variables (or other quantifiable variables) with some accuracy, but you cannot predict exactly which individuals will commit suicide.
Because certain variables within cultures (economics, technology, political systems) are relatively easy to document, and even quantify, they have historically taken center stage over aesthetics. You can collect and document a boatload of aesthetic artefacts or record a raft of tunes and songs from a culture, but then what? In the first half of the twentieth century, few anthropologists cared to devote much of their attention to the arts as central to their investigations of other cultures (there were a few). Ethnographies might have a chapter at the end that was a catch-all miscellany of information about material or performing arts with little beyond basic descriptions. Academic papers about the arts in other cultures tended to be afterthoughts written by anthropologists with little or no training in the arts who happened to have stumbled on something interesting, but their concern lay in how these arts could be used to elucidate issues concerning economics, trade, government, religion and such, because those areas were what the anthropologists in question actually knew something about and cared about. Questions such as, “Why are humans aesthetic animals?” “Why do cultures favor certain art styles?” “Why do some people become skilled artists?” were treated as beyond the scope of anthropological investigation or unknowable.
Boas certainly threw in the towel when it came to uncovering why certain art styles are the way they are and why they are so persistent in cultures. He did not feel it necessary to define what an art style is or how to recognize one – European or otherwise – assuming that we all understood the term. After all, if I talk about the Dutch Baroque, the Italian Renaissance, or Cubism you get the general idea of what I mean by an art style even though defining a particular style may be difficult. Boas devotes a whole chapter of Primitive Art to style in art, but the general issue of how to define what an art style is gets lost in detailed descriptions of specific styles. He does show clearly that we can identify certain elements of a particular style that are fixed (in minute detail), whether they be form, pattern, ornamentation, symbolism, or whatever, but he is more interested in showing that certain formal elements of a particular style are limitations imposed on the artist based on the use of the object in question or the material it is composed of, rather than fully coming to grips with why particular art styles are the way they are. He does note at the end of the chapter that art styles within one culture may vary depending on the technological processes involved, but he also notes that two cultures that have the same technologies for weaving, basketmaking, or pottery can, nonetheless produce markedly different styles of ornamentation. His answer to the question, “Why are the styles different?” is “We’ll probably never know.”
Collections of artefacts, music, and dance continued unabated through the first half of the twentieth century, but little analysis followed. Composers had been collecting examples of folk music and non-traditional performances since the end of the nineteenth century, but their purpose was to import them into their compositions to breathe new life into stale ideas rather than to examine the music analytically. They did note that singers and musicians who were not classically trained did things that broke the rules for concert musicians. You probably know, for example, that classical Western concert music is based on a cycle of keys that can be either major or minor. With no sharps or flats in the scale you have the key C Major or A Minor. C Major is C D E F G A B C and A Minor is A B C D E F G A. Folk singers in England, Hungary, Germany, and elsewhere, sang in scales that broke this mold. They could sing in scales D to D, G to G, or E to E effortlessly. They also sometimes sang using notes that were not on the piano (called microtones). Jazz performers know these as “blue notes.” This “rule breaking” was considered to be inspiring, challenging even, but it did not provoke much in the way of academic discourse.
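If you want to see the point about scales concretely, here is a minimal sketch in Python (my own illustration, not drawn from the ethnomusicological literature): the same seven white-key notes, begun on different starting notes, produce different patterns of whole steps (two semitones) and half steps (one semitone), which is why a tune running D to D or E to E sounds like neither C Major nor A Minor.

# Semitone positions of the white keys within one octave.
SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}
WHITE_KEYS = ["C", "D", "E", "F", "G", "A", "B"]

def step_pattern(start_note):
    """Return the whole/half-step pattern of the white-key scale begun on start_note."""
    i = WHITE_KEYS.index(start_note)
    octave = WHITE_KEYS[i:] + WHITE_KEYS[:i] + [start_note]  # one full octave, e.g. D ... D
    return [(SEMITONES[b] - SEMITONES[a]) % 12 for a, b in zip(octave, octave[1:])]

for note in ["C", "A", "D", "G", "E"]:
    print(note, "to", note, ":", step_pattern(note))
# C to C: [2, 2, 1, 2, 2, 2, 1] (major); A to A: [2, 1, 2, 2, 1, 2, 2] (minor);
# D, G, and E each give yet another pattern -- the "rule-breaking" scales of the folk singers.

The microtones mentioned above do not appear here at all, of course, because the twelve-notes-per-octave grid of the piano, which this sketch assumes, has no place for them.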
In the post-war years things changed. In the anthropology of dance, a number of women, trained as dancers, took Ph.D.s in anthropology and began a systematic analysis of dance cross-culturally. They joined forces with dance historians to form the Committee on Research in Dance (CORD) which held annual meetings and produced Dance Research Journal. As such, anthropologists of dance have been isolated from mainstream anthropology, although they do participate in other conferences. The anthropology of music, under the umbrella of “ethnomusicology” has also had a bit of a split personality. Interest in non-Western music is still fostered by composers for their own purposes, but in the 1960s and 1970s the study of music from a cultural standpoint became increasingly popular. It too remains isolated from mainstream anthropology largely because its study can be highly technical and often requires advanced musical training.
The anthropological analysis of the visual arts got a boost in the 1950s and 1960s from psychologists interested in the hypothesis that a culture’s art style was a visual representation of its inner workings. They were saying that in the same way that an individual’s dreams are a window into that person’s unconscious world, a culture’s “dreams” (that is, its art) are a window into the collective unconscious – the bedrock ideas – of that culture. In 1961 John L. Fischer published “Art Styles as Cultural Cognitive Maps,” which was subsequently anthologized multiple times as a classic study in the anthropology of art.
Fischer took four variables in the visual arts of different cultures:
Repetition of motifs
Symmetry
Empty space
Boundaries around individual motifs
He then ran a statistical correlation analysis between them and measures of these cultures’ positions on a social scale running from more egalitarian to more hierarchical, following the earlier work of Herbert Barry III, who worked on the relationship between the severity of child socialization and art styles (Barry 1957). Fischer’s hypothesis was that the more egalitarian a culture was, the more likely its art would involve repetition, symmetry, empty space, and enclosed figures. You can think of his hypothesis as akin to a Rorschach (inkblot) test in which the psychologist attempts to see into the unconscious mind of the patient based on how the patient interprets visual images. Fischer was arguing that a culture’s art is a representation of deeply held beliefs concerning how society should work. The heavy use of repeated motifs, for example, represented the fact that in an egalitarian society, all the members of the society are essentially the same in terms of their roles.
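To give a feel for the mechanics (though emphatically not Fischer’s actual data), here is a minimal sketch in Python with invented codings: each culture gets a score for how many of the four art-style traits appear in its art and a score for how hierarchical it is, and the test simply asks whether the two sets of scores move together.

from statistics import correlation  # Pearson's r; available in Python 3.10+

# Invented codings: 1 = trait present in the culture's art, 0 = absent.
# Order of traits: repetition, symmetry, empty space, enclosed figures.
art_traits = {
    "culture_A": [1, 1, 1, 1],
    "culture_B": [1, 1, 0, 1],
    "culture_C": [0, 1, 0, 0],
    "culture_D": [0, 0, 0, 0],
}
# Invented social scores: 0 = strongly egalitarian ... 3 = strongly hierarchical.
hierarchy = {"culture_A": 0, "culture_B": 1, "culture_C": 2, "culture_D": 3}

trait_totals = [sum(traits) for traits in art_traits.values()]   # 4, 3, 1, 0
social_scores = [hierarchy[name] for name in art_traits]         # 0, 1, 2, 3
print(correlation(trait_totals, social_scores))                  # strongly negative with these numbers

With these made-up numbers the correlation comes out strongly negative, which is the shape of result Fischer’s hypothesis predicts: the more hierarchical the culture, the fewer of the four traits in its art. The real methodological trouble lies not in the arithmetic but in how cultures and artworks get coded in the first place, which is where the next paragraph picks up.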
Fischer also ran tests on other social variables such as marriage type and residence patterns and, likewise, found statistical correlations. At best we can say that his findings were provocative, and, indeed, his kind of analysis was followed by others in different arts; but his underlying assumptions and methodology are deeply flawed. Fischer himself readily admitted, in a long footnote, that dividing societies into a simple dichotomy of egalitarian versus hierarchical oversimplified complex data, and that all societies sat on a continuum between the poles of egalitarian and hierarchical, and that most clustered somewhere in the middle (as you would expect from a normal distribution). This admission alone should have told him that his findings were useless, or, to be kinder, of limited utility.
Also in the 1960s, Alan Lomax, who followed his father John Lomax in recording traditional dance and music styles in the United States (and subsequently in Britain), began what he called the Cantometrics Experiment (statistical correlation testing between social variables and aspects of song style), followed later by the Choreometrics Experiment (social variables and dance style) (Lomax 1968). Like Fischer’s work with visual art, Lomax’ methodology is flawed even though occasional insights are potentially useful. In song analysis, for example, he found a correlation between the severity of child rearing and vocal tension in adult singers. In dance he found correlations between styles of dance and aspects of daily labor. These findings mesh with intuitive interpretations that had been knocking around for a long time, and that had largely been debunked before Lomax took up the cause, particularly because it could easily be shown that there is a great deal more variation in song and dance styles in individual cultures than is represented in Lomax’ samples.
At around the same time that Fischer and Lomax were producing their statistical correlations, Robert Plant Armstrong published The Affecting Presence: An Essay in Humanistic Anthropology (1971), as a counter to quantifying aesthetics in culture, yet still amenable to the idea that aesthetic and social values were closely linked. Armstrong focused on human affect: the realm of human feelings and emotions. Such a focus takes as basic that aesthetic products have as one of their primary goals stirring feelings in the observer and asks “How do aesthetic products stir positive affect in the observer?” His answer is that all aspects of social life in a culture, aesthetic and otherwise, have affecting qualities, and there is a degree of uniformity between them. Armstrong takes two scales – extensive versus intensive, and continuous versus discontinuous – and asserts that within particular cultures all behaviors cluster in identifiable zones. You have to read the whole book to understand Armstrong’s complexity and nuances, but I can give you some of the flavor. Extensive versus intensive refers to the human body in how it moves and how it is represented. A more extensive culture represents the human figure with arms and legs extended away from the trunk, and its daily activities involve extending the limbs and keeping external objects at a distance. A more intensive culture represents the limbs drawn in tightly to the trunk, and its daily activities keep external objects close to the body. He compares SE Asian cultures (extensive), represented by Java, with West African cultures (intensive), represented by the Yoruba in a number of areas including competitive sports, visual arts, dance, eating styles, and housing. His claim is that each culture is remarkably consistent across these social arenas. Eating with chopsticks is extensive; eating with the fingers is intensive. Shadow puppets with elongated limbs are extensive; sculptures representing people and gods with all extremities contiguously attached to the main body are intensive. Boxing is extensive; wrestling is intensive. You get the point. The Javanese fit the first listed and the Yoruba, the second. Javanese culture is extensive and Yoruba culture is intensive.
Armstrong’s second scale of continuous versus discontinuous can involve equally diverse variables based on whether something is smoothly and continuously variable or broken into discretely identifiable units. Porridge or blended soups are continuous dishes; bacon, eggs, mushrooms, sausage, and toast is a discontinuous one. Dances may be freely and continuously flowing, or broken into individual steps and motions. The next task is to put the two scales together, and you end up with a matrix of possibilities. Eating porridge with the fingers is intensive-continuous, and eating stir-fried meat and vegetables with chopsticks is extensive-discontinuous. Armstrong is trying to show that the aesthetic realm is part of a much larger whole of affective style. He is not terribly far away from the post-modern claim that all movement is dance, all speech is poetry, all action is theater, etc.
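A final sketch, again in Python and again my own toy coding rather than Armstrong’s, shows what such a matrix looks like in practice: each observed behavior is placed on both axes, and the claim to be tested is that a single culture’s behaviors pile up in one cell.

from collections import defaultdict

# Invented observations coded on Armstrong's two scales. The first two codings
# come from the examples above; the last two are my own guesses, added only to
# fill out the matrix.
observations = [
    ("eating porridge with the fingers",       "intensive", "continuous"),
    ("eating stir-fried food with chopsticks", "extensive", "discontinuous"),
    ("sculpture with limbs held to the trunk", "intensive", "discontinuous"),
    ("shadow puppets with elongated limbs",    "extensive", "discontinuous"),
]

matrix = defaultdict(list)
for behavior, body_axis, flow_axis in observations:
    matrix[(body_axis, flow_axis)].append(behavior)

for cell, behaviors in sorted(matrix.items()):
    print(cell, "->", behaviors)

Whether the behaviors of any real culture actually cluster so neatly in one cell is, of course, exactly what critics of this kind of reductionism doubt.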
Whatever approach you take to aesthetics, the question remains, “Why are humans aesthetic animals at all?” What difference does it make whether your meals are elaborately prepared to appeal to sight, sound, smell, and taste, or whether you eat your food in much the same way as you fill your car’s petrol tank (as simply and efficiently as possible so as to be able to get back on the road quickly)? Or, take Michelangelo. Why did he spend every waking moment perfecting his visual arts, yet gobble his meals like a pig, as quickly as possible, without a moment’s thought for what he was eating, so that he could get back to his work? Why do people/cultures value elaboration in some branches of aesthetics over others?
Even though it is still a marginal sub-field, the anthropology of art and aesthetics is now in healthier shape than it was forty years ago when I was starting out, but the enduring questions are no closer to being answered than they were then. Alfred Gell rounded out the twentieth century with Art and Agency: An Anthropological Theory of Art (1998), formulating a theory that was influential for a time, although it focuses on artifacts rather than the arts in general. At the time “agency” was the buzzword du jour in anthropology, and Gell explained how objects acquire it by drawing on what philosophers call abductive reasoning (which can get rather vague philosophically). Gell was using the concept of abduction as it was used by the philosopher Charles Peirce, who was a key player in the development of a philosophy of signs leading to a whole school of semiotics (the study of signs). Abduction differs from deduction and induction in that the viewer makes inferences concerning what is observed by virtue of its likeness to something already known. Gell believes that humans are attracted to certain objects because of their technical virtuosity (mirroring Boas), and they enter into a personal relationship with those objects, evoking basic emotions. The art acts as a mediator between the underlying idea expressed and the viewer: it achieves agency.
This analysis can get us mired in the depths of philosophy very quickly, and so I will try to escape before that happens. It can also come close to saying that art is a language if you are not careful in your interpretation. If art were a language then the artist would be making art in much the same way that a speaker says sentences, and the viewer would be observing the art in the same way that a hearer interprets sentences. Thus, artists commonly talk about what they are trying to “say” in their art, and will use clichés such as, “if I could put my ideas into words, I would not need to be an artist.” This assumes that the artist is “saying” something, and the viewer is receiving the “message.” This is not Gell’s point. Art is not a language. It is a mode of aesthetic expression that has three components: the maker, the object, and the viewer. The viewer sets up a relationship with the object, not with the maker. The maker’s intentions are only marginally relevant to the relationship established between the viewer and the object.
We are, unfortunately, no closer to explaining why elaborating objects and movements aesthetically is a human universal. Why do we like beautiful things? Why do different cultures elaborate different aspects of their lives: some passionate about wood carving, others about dancing, and so on? Because this is my specialty, I have the habit of believing that if we can crack the puzzle of aesthetic behavior, we will be a lot further down the road in understanding human behavior than by simply studying economic or political behavior, because the aesthetic taps directly into the emotions.
Chapter 19: A Change is as Good as a Rest: Revitalization and Social Change
Classic ethnographies of the first half of the twentieth century tended to present a somewhat static view of the cultures that they described. They used what we call the “ethnographic present,” that is, they wrote in the present tense about customs and values even though the culture could well have changed since the original fieldwork was conducted. Anthropologists of the day knew perfectly well that cultures undergo change as a matter of course, and some, notably Julian Steward, made cultural change a central interest, yet ethnographies still tended to be “snapshots” rather than “videos.” My mantra, “All cultures are always changing,” is worth remembering, although I must add some qualifications to the general idea.
Not everything in a culture undergoes change all the time and change occurs at different rates in different cultures. In this chapter I am concerned with one kind of change only: abrupt and rapid change that extends over all, or most, aspects of life. We use the umbrella term “revitalization” for this kind of change, following the work of Anthony F.C. Wallace who used the term in that sense in his 1956 paper, “Revitalization Movements” (Wallace 1956). Wallace noted that there were many different types of revitalization movements depending on the nature of the change proposed or desired (and depending on the nature of the original culture), but underneath all the differences was a well-defined sequence of events (assuming the movement was successful). This sequence was:
I. Period of generally satisfactory adaptation to a group’s social and natural environment.
II. Period of increased individual stress. While the group as a whole is able to survive through its accustomed cultural behavior, changes in the social or natural environment frustrate efforts of many people to obtain normal satisfactions of their needs.
III. Period of cultural distortion. Changes in the group’s social or natural environment drastically reduce the capacity of accustomed cultural behavior to satisfy most persons’ physical and emotional needs.
IV. Period of revitalization: (1) reformulation of the cultural pattern; (2) its communication; (3) organization of a reformulated cultural pattern; (4) adaptation of the reformulated pattern to better meet the needs and preferences of the group; (5) cultural transformation; (6) routinization, when the adapted reformulated cultural pattern becomes the standard cultural behavior for the group.
V. New period of generally satisfactory adaptation to the group’s changed social and/or natural environment.
I can dispense with periods I to III fairly quickly because they are simply leading up to IV, the period that I would like to focus on. Its steps are my major interest here.
Wallace’s assumption is that cultures tend to adapt well to their social and environmental circumstances (period I), and do not consider major changes until something disruptive occurs (periods II and III). I would say that this is a dubious premise, but, even if we accept it, we still need to ask, “How much distress in a culture is so much (or too much) that it moves towards drastic revitalization?” The answer tends to be circular – when revitalization begins, the stress on the culture is too much to bear.
Without question, the most serious stressor on cultures outside of Europe for hundreds of years was colonization by European nations. Indigenous cultures, worldwide, were taken over, primarily by violence, and forced to accept changes that benefitted the colonizers. Albert Memmi, in The Colonizer and the Colonized (Memmi 1965), argues that colonized peoples have two choices: either accept the colonial situation and adapt to it, or resist. Both acceptance and resistance usually lead to revitalization movements, but they are quite different on the surface.
In the early twentieth century, anthropologists did not pay anywhere near enough attention to the cultural disruption caused by colonization, and, in many cases, were blissfully unaware that the cultures that they thought of as pristine had been drastically changed in response to colonization. Now we know better. Take, for example, the indigenous cultures of the central plains of North America. When European colonists from the American northeast made direct contact with them, the horse was a central feature of many of these cultures, and they were assumed to be pristine horse-cultures. But, horses had been introduced to the North American southwest by Spanish colonists where their utility was quickly understood by groups who were foragers, hunting on foot. From there, the horse quickly diffused up the Great Plains causing massive cultural changes among groups such as the Blackfoot, Arapaho, Cheyenne, Comanche, Crow, Kiowa, and Lakota, who in modern popular consciousness are imagined as horse riding warriors and hunters. Before the arrival of horses, the North American bison that proliferated across the Great Plains was an important source of protein, but only in limited supplies. Once they had horses, indigenous groups held major annual hunts to procure meat for the long winter months and skins for trading. Such gatherings were also used to settle disputes, decide political strategies, plan raiding parties, and conduct trade.
The Comanche were the first group to realize the potential of the horse, and were so successful at breeding and using horses that they became the dominant group in the Great Plains south of the Arkansas River by the 1730s. Competing groups soon followed suit. By the nineteenth century, the average Comanche family owned 35 horses, when 5 or 6 would have been sufficient for household needs. Herds of that size exacted a toll on grazing land and required constant care and attention. Thus, formerly egalitarian societies became divided by wealth, and there was a negative impact on the role of women. The richest men could have several wives as well as captive slaves to manage their possessions, especially horses.
There are many documented cultural changes of this sort, intentional or unintentional, caused by colonization, but such cultural transformations do not qualify as revitalization movements. Revitalization is a different animal. Revitalization is a deliberate attempt to restructure a culture in the face of stress that cannot be coped with using the means currently at the culture’s disposal. Wallace divides revitalization movements into three broad types:
Three varieties have been distinguished already on the basis of differences in choice of identification: movements which profess to revive a traditional culture now fallen into desuetude; movements which profess to import a foreign cultural system; and movements which profess neither revival nor importation, but conceive that the desired cultural end-state, which has never been enjoyed by ancestors or foreigners, will be realized for the first time in a future Utopia. The Ghost Dance, the Xosa Revival, and the Boxer Rebellion are examples of professedly revivalistic movements; the Vailala Madness (and other cargo cults) and the Taiping Rebellion are examples of professedly importation movements. Some formulations like Ikhnaton’s monotheistic cult in old Egypt and many Utopian programs, deny any substantial debt to the past or to the foreigner, but conceive their ideology to be something new under the sun, and its culture to belong to the future. These varieties, however, are ideal types. A few movements do correspond rather closely to one type or another but many are obvious mixtures. (Wallace 1956:275-276)
Typically, a single leader brings a radically new message to the people, and the whole culture undergoes transformation. This brings us to period IV in Wallace’s model: the actual process of revitalization. Here Wallace makes some good points, but the whole analysis is too general to be much help in analyzing specific cases. Let’s break the steps of revitalization down. I’ll give you my slight reworking of Wallace’s original, not exactly his wording, but catching the general spirit, I believe:
(1) New vision for the culture.
Wallace focuses on religious revitalization movements but does not ignore secular, political ones. He does, however, say that this first phase is normally conceived by a single person usually as a result of a vision, dream or hallucinatory experience. The single person part is fairly accurate (not entirely), but the supernatural vision part is not at all accurate when it comes to strictly secular movements such as the Russian and Chinese revolutions. Secular revitalization movements can be grounded purely in intellectual ideologies. Regardless, Wallace’s point is that a revitalization movement is not just a piecemeal analysis of things that are wrong in a culture by a charismatic leader who then fixes them (or suggests practical remedies). A revitalization movement, by his definition, starts with a radical new vision of what the whole culture should, or could, be. The entire perception of what is basic to a culture – what he calls the “mazeway” – is reformulated via the vision of a single individual.
(2) Preaching
The originator of the new vision for a culture – its prophet, if you will – begins disseminating it either to large crowds, or to a group of disciples who themselves preach it, or both. New converts are offered special protection from the ills befalling them, and are promised that both they and society will benefit from the new movement.
(3) Organization
The new movement develops a rudimentary structure: leader, small inner circle of close disciples, and a mass of followers. The message of the leader becomes codified into small, easily remembered phrases and sayings, and the leader is increasingly viewed by close disciples and followers alike as the fountain of truth and salvation, sometimes even conceived of as a supernatural entity.
(4) Adaptation
As the movement progresses and grows it may meet formal resistance or simply obstacles to implementing its mandates as they are originally formulated. Therefore, adjustments are made so that the ideals of the movement and social realities and practicalities can be satisfactorily meshed.
(5) Cultural transformation
When enough followers are on board with the movement, especially if they are important members of the former status quo, the culture shifts from its former “mazeway” to the new one. At this point, the movement may fail because it proves to be unworkable, or it may succeed.
(6) Routinization
If the movement succeeds, it becomes the new status quo, and its ideology becomes the standard for the culture (and may, in the long run, become the target of a new revitalization movement when what was once a set of new, exciting, and revolutionary ideas has grown stale and ineffective in the face of new stressors).
You can research various revitalizations in history and you may start to see the validity of Wallace’s analysis on a general level. You will also see that the steps in the revitalization process are so general that they verge on the meaningless. And there are many questions that he does not answer. Why do some movements succeed while others fail? How do single charismatic leaders emerge to coalesce a movement? Why do movements arise instead of a population simply finding pragmatic solutions to their problems as they occur? I believe, for example, that neither the United States nor the United Kingdom will undergo a nationwide revitalization within my lifetime, nor yours either. But, why not? Stressors abound. Both countries have endemic poverty, low wages, underemployment, racism, socio-economic class inequality, and rampant dissatisfaction with their governments. They seem ripe for radical overhaul, yet they limp along following the time-honored status quo amid a lot of muttering and grumbling. What makes them immune to massive revitalization? I don’t see any signs of revolution in either place (barring some minor ripples in out-of-the-way places that are easily stomped out by the powers that be).
To get down to specifics in testing Wallace’s analysis, I want to look at two revitalization movements, one from the Hebrew Bible (the Deuteronomistic movement), and one from the Greek Bible (Christianity), to examine what works and what does not work when applying Wallace’s perspective. I will readily admit that my reading of the rise of the Deuteronomists from Biblical texts is only one of many (albeit a standard one) in terms of timing, people involved, and the sequence of events, and if you do not like this example, pick another. I chose the Deuteronomists because they initiated one revitalization movement that, when it became routinized, was the target of another movement, Christianity. First, a caution. The Hebrew Bible is not reliable history in the modern sense, but we can interpret it if we use extra-Biblical written sources and contemporary archeology judiciously. What follows here is my interpretation based on Biblical scholars and archeologists whom I trust. Many, many, many scholars and laity will disagree because of deeply held religious beliefs. Sorry – this is my book.
The Biblical history tells us that there were two kingdoms in the Middle East in ancient times: Israel and Judah. The history also tells us that for a brief period under David and Solomon, they were united, but afterwards they split into separate kingdoms. The United Monarchy is likely an invention of later scribes, but the existence of the two kingdoms – Israel in the north, and Judah in the south (centered on Jerusalem) – is well attested. Israel was far larger and more prosperous than Judah, and was coveted by neighboring empires, including Assyria and Egypt, as a source of revenue and as a land route. Judah did have some wealth but it was a hilly backwater in comparison with Israel.
In the late eighth century BCE, Assyria asserted its dominance over the region as its empire expanded. In 721 BCE, Israel put up resistance to this expansion and was completely crushed. The population was either deported to other parts of the Assyrian empire (the so-called Lost Tribes of Israel) or fled south to Judah. Some of the poorer classes probably remained in the area, but were mixed with other ethnic groups who were moved in by the Assyrians, and became a rather mixed group – eventually called Samaritans. Judah, realizing that Assyria was too strong to resist, knuckled under as a vassal state and paid tribute to the empire, and, thus, was spared Israel’s fate. The vast bulk of the Hebrew Bible was written in Judah after the destruction of Israel. The refugees from Israel did, however, bring oral and written narratives with them which were merged with Judean texts.
Fast forward to 640 BCE, when there was a crisis in Judah. The king, Amon, was murdered in a coup attempt, but his loyalists suppressed the coup, executed the ringleaders, and put Amon’s eight-year-old son, Josiah, on the throne. This series of events coincided with a rapid, and largely unanticipated, decline in Assyria’s power, and, at the time, Egypt, Assyria’s main rival for dominance in the region, was also in a weakened state. Therefore, the loyalists to Josiah had a golden opportunity to assert the strength and independence of Judah, and, in consequence, they initiated a program of revitalization. According to 2 Kings, chapters 22 and 23, in 622 BCE Josiah ordered some repairs to the Temple, and, in the process, the high priest Hilkiah “discovered” The Book of the Law in the rubble. Modern scholars are divided over whether this Book was an early version of Deuteronomy or something else, but there is a general consensus among Biblical scholars that it was a pious forgery, by a scribe or coterie of scribes, planted in the repair work for Hilkiah to discover.
The Book was the cornerstone of a proposed revitalization of religion and society in Judah, envisaged by a (now unidentifiable) cadre of nobles, priests and scholars who have become known as the Deuteronomists. Their goal was to elevate Josiah to the stature of Messiah (that is, anointed one), a sacred king who would lead Judah to triumph over all enemies, and establish an unassailable kingdom centered on Jerusalem. Numerous texts were produced – histories, prophecies, and poetry – all pointing to a king in the Davidic line being the chosen one who would usher in a new age of peace and prosperity for Judah. The resultant revitalization was a combination of Messianism (renewal by a sacred ruler) and nativism (purifying the culture by ridding it of foreign influences and restoring old customs and traditions).
Josiah read the Book out loud in public in Jerusalem, and then set about following its provisions by destroying all the “foreign” cults in Judah (such as Ba’al, Asherah, and astral worship) and executing their priests, clearing all “foreign” images and symbols from the temple in Jerusalem, and generally purging any religious practice not prescribed in the Book. Yahweh was to be the only god worshipped. The sacred texts did not go so far as to say that Yahweh was the only god, but they did indicate that he was supreme over all others, and that not giving him strict devotion was fatal. The histories, known as Deuteronomic histories, especially the book of Kings, made one point crystal clear: when kings obeyed Yahweh, they prospered, and when they fell away from Yahweh, they were doomed. Israel had fallen to Assyria because its monarchs had not been faithful to Yahweh. Archeology shows us that the Deuteronomic analysis of history was not strictly accurate – some apostate kings of Israel had done very well for themselves – but the key point that Israel failed because of a lack of faith in Yahweh alone fell on willing ears (ideology trumped fact).
The Book placed the first Passover and the flight from Egypt (when Yahweh delivered the people from the bondage of the Pharaoh) at the center of the history of the people of Israel and Judah, and held that commemorating it annually was vital to the wellbeing of the people. Thus, Josiah ordered a massive Passover to be celebrated in Jerusalem, and the book of Kings (2 Kings 23:21-23) seems to suggest that this celebration was new to worship in Judah. Thereafter, it was ordained that Passover be celebrated annually and that the faithful gather in Jerusalem for its celebration whenever they could.
Thus, you have all the hallmarks of a classic revitalization process: a sacred leader with a vision (even though he was not the creator of the vision), a teaching period by the leader (the reading of the Book), and then a period of radical change following the teaching, ultimately becoming the new normal. In Wallace’s terms, the revitalization of Judah under Josiah was successful in that it set in place a strictly codified, radically new order for Judah, but it was not a political success. Josiah was emboldened by the revitalization of Judah to flex his muscles and defy the pharaoh, Necho, in a battle on the Plain of Megiddo (Armageddon, the site of the final battle in the book of Revelation, takes its name from the Hebrew for the Mount of Megiddo). Josiah was defeated and killed in battle. The new order had been established, however, and it guided the culture for centuries.
For over five centuries, the territory, religion, and political structure of Judah (or Judea) went through numerous radical changes as the region was successively controlled by Babylon, Persia, Greece, and Rome (with glimpses of semi-autonomy now and again when the major empires were weak). In the first century CE, Rome was the imperial master, and around 30 CE a preacher, whose Aramaic name was Joshua, appeared on the scene, at first in the northern region of Galilee, and later in Jerusalem. Joshua, transliterated (badly) into Greek, is Jesus. The movement inaugurated by Joshua of Nazareth was another classic revitalization movement, but its energy took a major left turn before it was completed.
One has to be careful treating the Greek Bible as history, because the supposedly historical bits (the gospels and the Acts of the Apostles) were written at least a generation after the events described in them, and were not written by eyewitnesses. Some of the accounts are demonstrably false, such as the birth in Bethlehem, because the gospel writers wanted the events of the life of Joshua/Jesus to mesh with Messianic prophecy. The original Messianic stuff was probably tailored to fit Josiah, but, when he crashed and burned, the concept of Messiah became more abstract and future-oriented. Consequently, we have to sift through dubious material and pull out only those parts that are not obviously biased towards a particular theology, are not supernatural, and provide a consistent historical picture.
Stripped of miracles and controversial actions, the message preached by Jesus around 30 CE, as reported in various sayings in the oldest gospels, was that classic Judaism was the one true religion at heart, but centuries of legalism had corrupted it to the point where the underlying message had been almost completely lost, and Jews needed to get back on track by looking under the maze of laws and seeing what the fundamentals were. These fundamentals were love God, and love your neighbor – THE END. Then, of course, the quibblers came in on cue and asked, “Who is my neighbor?” “What is love?” etc., and Jesus answered them, usually with a parable. Jesus was clearly a charismatic leader who was able to speak to large crowds and amass a huge following because he broke his message down into simple stories that everyone could relate to. He simplified everything in Judaism so that common people did not need to consult lawyers and priests to understand the basics. Consequently, he built a huge number of followers among the ordinary people, and a small clique of powerful enemies among the lawyers and priests whom he was threatening to put out of business.
Thus far, Jesus followed Wallace’s steps: have a vision, build an inner circle of disciples, spread the word. Then the movement nearly came unraveled (or, at least, went sideways). Jesus was arrested following a Passover meal with his inner circle, tried overnight, and executed the following day. Leaving aside the supernatural elements of the following days described in the gospels for now, we can conjecture that Jesus’ execution left his followers in a panic. At the time, Rome was deeply worried about the turmoil in that part of the empire: Judeans just would not knuckle under, and disturbances were routinely crushed in a savage manner – only to spring up again in different guise.
Jesus’ inner core of disciples, notably John, James, and Peter, coordinated to keep the revitalization of Judaism going in Jerusalem, but were met with constant resistance from the temple priests and also the general population. Nonetheless, they formed communities that worked, ate, and worshipped together, pooling resources in what is now thought of as primitive socialism. This would be Wallace’s period of adaptation. The next step, cultural transformation, never happened, because in 70 CE, Rome, tired of the endless rebellion in Palestine, sent a major force to crush the Jews, destroy the temple, and disperse the population. Case closed, as far as Rome was concerned, and case closed for the revitalization of Judaism in Jerusalem. There was a small catch, however.
Saul of Tarsus, a staunch member of the Sanhedrin (the ruling Temple faction) and vehement enemy of the new Jesus movement, which at the time was called The Way, had had a conversion experience in the process of harassing Jesus’ followers and had become as vocal an advocate of the new vision of Judaism as any of the disciples in Jerusalem. When he met with the disciples in Jerusalem – possibly on two occasions – they all agreed on a division of labor. The original disciples would work primarily in Jerusalem, and Saul (renamed Paul) would travel in Asia Minor and Europe, preaching to Jews in those parts. His original mission was not to preach to non-Jews because he was carrying a message of revitalization of Judaism. This made sense. Imagine someone coming to your town and preaching a message of a new kind of Hinduism. You would probably be nonplussed or bewildered, because you are not familiar with the old kind, and have no idea why it needs revitalization. Such a preacher would likely not get much traction at the outset unless he started with Hindus in your community, and then those revitalized Hindus reached out to others.
Paul was part of the adaptation phase of revitalization – in a major way. The message of Jesus – “love God and love your neighbor” – is all well and good as a philosophical core (even when elaborated on philosophically), but how do you put it into practice? Paul was a nuts-and-bolts guy who addressed practicalities, and wrote numerous letters to the communities that he had established explaining how to organize churches, how to worship, and how to pay the bills. Paul completed the phases of revitalization started by Jesus by adapting the message to local circumstances, leading ultimately to cultural transformation and routinization. There’s still a catch.
Paul started with the Jews in the cities he visited, but the populations who ended up being most receptive to his preaching were non-Jews. In that sense, Christianity had ceased to be a revitalization of Judaism, and had become a more general movement of cultural transformation. In some of the cities where Paul created churches, notably Rome, the movement gained ground, and in others it died. The movement continued to face considerable resistance for several centuries and continued to adapt in the face of this resistance. It did, however, eventually become the status quo in the Roman empire under Constantine (c.272 – 337) and later emperors.
As you can see from this brief account of two significant revitalization efforts, Wallace’s general analysis sort of works and sort of doesn’t. In both cases, the charismatic leader died at the point of initial mass enthusiasm for the movement in the face of violent opposition from the powers that be who were invested in the old status quo. These deaths were the catalyst for major adaptations of the movements, which led to their ultimate acceptance. Why didn’t the movements die when the charismatic leaders died? In both cases there were specific external factors that caused the movements to strengthen rather than disband. But specific factors are not built into Wallace’s model. He does not address the issue of how stress in a culture tips the balance from incremental change to revitalization, and why some movements succeed and others fail.
Wallace is the victim of using the norms of natural science for social science. He wants to be able to describe regularity within the general category of social change by comparing specific situations and seeing if he can reduce all the myriad differences to a few concrete rules, in the way that Newton took all kinds of motion – falling objects, planets in space, cannon balls – and condensed them into a set of equations that would explain them all neatly and precisely. Social science cannot do this kind of exercise, partly because the variables are complex, but mostly because these variables cannot be reduced to straightforward mathematical quantities, in turn because they are poorly defined. How do you define social stress and how do you quantify it? Is social stress one variable or many? What scale do you measure it on? If we cannot define our variables rigorously, and we can’t, we are not going to get anywhere trying to reduce these variables to comprehensive rules.
At this point you might believe I am simply throwing in the towel; but I am not. While so much of what Wallace says is too general to be of much use, not all of it is. While it is true that we cannot use his analysis to predict where and when revitalization movements will start, and whether or not they will be successful, we can say something about their consequences when they occur. For me the most significant, and potentially depressing, conclusion that Wallace came to is that revitalization movements have a limited lifespan. Whether they succeed or fail, they end. Either the old order takes back hold, or the new order settles into a routine pattern. Major social change cannot perpetuate itself indefinitely, because it is not a comfortable position to be in, and because new ideas are finite.
At present we are living in times when technological change happens quite speedily, it seems, although not all advertised change is real change. There have not been radical changes in automobile design in 100 years. Getting bigger, smoother, and faster are not fundamental changes in design. Running on solar power would be a radical change, but, despite the availability of the technology, there are no solar cars being mass-produced by manufacturers. The introduction of the personal computer, the internet, and smartphones is a major change. These devices have altered how we communicate with one another and how we access data, and, in turn, they have caused some shifts in social interaction. None of these changes are on the scale of revitalization movements, so we can adjust to them. They are not asking us to fundamentally question the meaning of life, why we are here, or the purpose of existence. If a prophet came along saying that we and the planet are doomed if we do not reject our current technology and return to working the land with no electricity, no smartphones, no computers, no cars – nothing that creates destructive pollution – that would be a revitalization movement.
That prophet would have to offer more than a message of fear, doom, and gloom, however. The message would have to be one that proposed how the new life would clearly be better than the current one, which is a mighty hard sell. And it would also have to offer substitutes for all the institutions and customs that we currently find appealing. I do not see anything remotely like that on the horizon, although there are plenty of prophets of doom concerning climate change, overpopulation, pollution, and poverty. Revitalization movements have to offer solutions that are appealing, and not just statements of how bad things are at the moment.
Revitalization movements, furthermore, cannot envisage an indefinite future where things are always being turned upside down for the sake of change. Revitalization is always uncomfortable for some, if not all, members of a culture. In many cases, people die in the process. Revitalization movements must end in a steady state at some stage. Wallace’s period 6, routinization, is absolutely necessary. Many Protestant churches, created out of revitalization within the Catholic church, have as their motto “reformed and always reforming” which is both a mistranslation of a phrase attributed to Augustine of Hippo and a ludicrous concept. If you know anything about the Presbyterian church in the United States, which has this motto, you will know that it has not had a revitalization in a very long time. Its leaders sit around and endlessly debate finer points in the rule book (The Book of Order), but that is all. That is not “reforming.” That is merely tinkering at the edges. The church is hemorrhaging members annually at an alarming rate to the point where it is possible to predict when the church will collapse, but no one is looking for solutions that involve revitalization. In any potentially revitalizing situation, innovators are going to butt heads with vested interests. I would say, therefore, that Wallace does offer something of a blueprint for change, but gets tripped up by specifics.
Chapter 20: Where is the Center? Who Am I?
In one way or another we are all egocentric and ethnocentric; we can’t help it. Anthropology is ethnocentric, much as it likes to believe otherwise. Being sympathetic to people from other cultures, even accepting the reasonableness of their beliefs, does not mean being able to get inside their heads and think the way they do. You start from what you know and move out from there to try to understand new things. Learning new things may change you in the process, but you are still the center. It cannot be any other way. The best we can do is recognize our egocentrism and ethnocentrism and make adjustments for them. As a small qualifier I would add that by “egocentric” I do not mean that we are all selfish, but only that our “selves” – our worldviews – are the filters through which all information is processed.
You were probably taught somewhere in your schooling that there is a difference between objective and subjective statements, or between fact and opinion, and unless you had a philosophically minded teacher, this point of view remained unchallenged. I’m sorry to break it to you, but there is no such thing as a completely objective statement or an absolutely true one. All statements reflect a point of view and exist within a frame of reference. They cannot be made independently of context. I can take a simple set of statements to illustrate my general point:
“This tea is not hot enough.”
“This tea’s temperature is 89°C.”
Both statements can be made concerning the same cup of tea, but they are not the same kind of statement. The first is usually called a subjective statement or a matter of opinion, whereas the second is usually called an objective statement, or a statement of fact. This notion needs modification. There is no question that the first statement could include “in my opinion” or “from my point of view” at the end, to make it clear that “not hot enough” is not some kind of absolute judgment, but refers to how I like my tea, not how everyone likes their tea. Fair enough. No quarrel. What may be less clear is that the second statement is also subjective, although its subjectivity is of a different kind from the first.
The cardinal rule is that you cannot observe anything without altering it somehow. Physicists know this, but it took anthropologists (and philosophers) a long time to understand the implications of this rule. With the cup of tea it’s fairly easy to understand. You need a thermometer to measure the temperature of the tea, and a thermometer takes some heat from the tea in order to make the measurement. Certainly, it is not much heat, but it is some. What is the objective temperature of the tea (that is, without observing it in any way)? That information is unknowable. You can calculate ways of compensating for the amount of heat the thermometer extracts, but that process relies on the accuracy of your algorithm of compensation. Besides, how do you know that your thermometer is accurate? When did you last have it tested? Can you trust the testing process, anyway? Maybe you will try to sneak around the problem by arguing that you can know when water is at 100°C because that is the boiling point of water. So – boil a kettle and you know that the water is at 100°C. Nice try, no cigar. For water to boil at exactly 100°C, it must be free of dissolved solids and must be at 1 standard atmosphere (101 325 Pa) air pressure – exactly. You still have to make observations and apply your trusted theory.
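To make the compensation idea concrete, here is a minimal sketch, in a few lines of Python, of the kind of back-correction I have in mind. Every number in it (the mass of the tea, the specific heat, the thermometer’s heat capacity) is a hypothetical placeholder, and the calculation assumes a single idealized heat exchange with no other losses. It illustrates the point rather than giving a recipe: the “true” temperature is always an inference that depends on trusting the assumed figures.

def corrected_temperature(reading_c, thermometer_initial_c,
                          tea_mass_g=250.0, tea_specific_heat=4.18,
                          thermometer_heat_capacity=2.0):
    """Estimate the tea's temperature before the thermometer touched it.

    Hypothetical figures: the thermometer (heat capacity in J/degC) warms from its
    initial temperature up to the reading, drawing that heat from the tea (mass in
    grams, specific heat in J per gram per degC), with no other losses assumed.
    """
    heat_absorbed = thermometer_heat_capacity * (reading_c - thermometer_initial_c)  # joules
    tea_heat_capacity = tea_mass_g * tea_specific_heat  # joules per degC
    return reading_c + heat_absorbed / tea_heat_capacity

# The correction is tiny but not zero, and it is only as good as the assumed numbers.
print(round(corrected_temperature(reading_c=89.0, thermometer_initial_c=20.0), 3))  # about 89.132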
The temperature of water is a trivial example, but it makes my point. Observations interfere with the observed. When it comes to social behavior, the interference is far from trivial. How would you describe a dinner party? If you are not there it is one kind of party, but to describe it you have to rely on the observations of others (which will be from their subjective point of view). As soon as you arrive, it is a different party because you have become part of the social situation. Even if you sit in the corner and do nothing but observe, you are still there, and still affecting the others at the party (having someone sitting silent in the corner may make the diners self-conscious – more so, if you are taking notes). Imagine what kind of interruption it is to have an anthropologist enter a culture for a year or so, ask questions, take notes, and generally get in the way. What is the culture like in the absence of the anthropologist? As unknowable as the dinner party you did not attend.
Not only is a community disrupted by the presence of the anthropologist as a matter of course, but the data collected by that anthropologist are also corrupted in any number of ways. Informants lie or are uninformed, as numerous case studies show. Napoleon Chagnon famously reported that when he was in the process of documenting kinship among the Yąnomamö, he had to toss out most of his early notes because his informants were making up names for their own amusement (Chagnon 1968). Derek Freeman interviewed some of Margaret Mead’s informants on Samoa, and they reported lying about their sexual activities (Freeman 1983). In Mead’s case, Freeman argues that her whole argument about adolescence and sexuality in Coming of Age in Samoa falls apart when the inadequacy of the data is revealed.
The quality of information from key informants has also often been called into question. In Two Crows Denies It (Barnes 1984), R.H. Barnes shows that the Omaha kinship system, enshrined in anthropological kinship studies (and the bane of beginning students because it is so counterintuitive), may not have been accurately reported originally – hence, “Two Crows denies it.” That is, Two Crows was asked whether the kinship information originally reported was accurate, and he replied, “Nope.” Not all members of a culture see their culture in the same way, or have equal access to the same information. Who the anthropologist relies on for information is going to color the whole study. In the early twentieth century, when reporting on cultures whose languages were not well documented, anthropologists often had to rely on bilingual speakers as their access points to informants within the culture. Sometimes these speakers became key informants themselves, but being bilingual meant that they lived on the margins of the culture, and they may have seen access to an anthropologist as a personal benefit.
Most importantly, anthropologists bring with themselves all manner of ethnocentric baggage. Colin Turnbull’s studies of the Mbuti (Turnbull 1961) and the Ik (Turnbull 1972) are a complete contrast, partly because these peoples led such different lives, but mostly because of Turnbull’s contrasting attitudes towards each of them. He clearly had a great time with the Mbuti, and a terrible time with the Ik. In fact, his dedication of The Mountain People was “to the Ik, whom I learned not to hate.” Hardly an “objective” stance towards the Ik. But Turnbull can be forgiven in that he exposed something we all know: anthropologists pick and choose their field sites for personal reasons. One of my fellow students at UNC asserted that anthropologists all choose field sites in places where they think they will feel “at home” because they are so uncomfortable in their own cultures. I know that is not a fair generalization overall, but we all pick field sites because we have agendas. As such, fieldwork is inherently egotistical and ethnocentric.
Subsequent to Turnbull’s apparent meltdown with the Ik, which got us all thinking about what it was we were trying to achieve, George Marcus and James Clifford produced the anthology Writing Culture (1986), which made a splash at the time, and opened up an array of questions concerning the nature of fieldwork and writing ethnography. They invoked Michel Foucault’s critique of literature in asking, “Is there such an animal as ethnographic truth (or objectivity)?” In one of the essays, “From the Door of His Tent,” Renato Rosaldo points out that in classic ethnographic writing, the anthropologist makes a brief appearance in the introduction, and then disappears, suggesting that the ethnography is a description of THE TRUTH about a culture, as opposed to one anthropologist’s perspective on those people.
Marcus and Clifford’s critique of ethnographic writing opened the door for reflexive ethnography, an approach that self-consciously places the fieldworker within the field situation when writing about it. This approach was not entirely novel. Richard Lee’s “Eating Christmas in the Kalahari” (Lee 1969) and Laura Bohannan’s “Shakespeare in the Bush” (1966) had been knocking around for some time, had been reprinted and anthologized several times, and were mainstays of Intro to Anthropology. These and other popular essays had firmly situated the anthropologist in the field situation, as had Never in Anger: Portrait of an Eskimo Family by Jean L. Briggs (1970). But, at the time, these examples were exceptions whereas Marcus and Clifford’s volume suggested a complete rethinking of the anthropological enterprise.
Fieldwork changes fieldworkers. In every doctoral department there is a divide between those students who have completed their fieldwork and those who are in the planning stages. The difference is palpable. Those who have not engaged in fieldwork tend to be loaded to the gills with theory, ideals, and hopeful expectations, whereas those who have returned from lengthy fieldwork tend to be more cautious and reserved; sometimes they have a “you’ll learn” look when talking to their juniors. Many, such as myself, do not even return to their departments full time, but go off and write their dissertations in isolation. The fieldwork, not the conferring of the degree, is their rite of passage. Until Writing Culture, the transformational effects of doing fieldwork were not much written about or discussed. The focus had been on the data, not the fieldworker.
After Writing Culture, there were a few reflexive ethnographies and papers produced that intertwined what the fieldworker was experiencing with what the informants in the field situation were doing and saying. Just as you cannot make sense out of what you see through a microscope until you understand how a microscope works and what its limitations are (an electron microscope cannot be used to observe electrons), you cannot make sense of field data without detailed knowledge of the fieldworker. Unfortunately, Writing Culture led to much more theoretical and philosophical hand wringing among anthropologists about the nature of truth and objectivity in anthropological data than to genuinely reflexive ethnographies allowing the reader to see the fieldworker completely embedded in the field data. Deidre Sklar’s Dancing with the Virgin (2001) is sometimes held up as an exemplar, and, by chance, I wrote a review of it when it was published (Forrest 2006), and I found it problematic.
Here’s the problem. If I use a mercury thermometer or a digital thermometer or an infrared thermometer to test the temperature of my tea then, unless one of them is faulty, I am going to get close to the same reading from each of them. If I ask a friend to test the temperature using different thermometers, I am going to get roughly the same reading. The objective truth of the temperature is unknowable, but a usable approximation within reasonable tolerances is possible, and I don’t need to know with excruciating accuracy. Ethnography is of a completely different order. Fieldworkers bring a ton of personal baggage with them from their own backgrounds, their career aspirations, their past interactions and relationships, and the like, and they encounter informants who are helpful or unhelpful, knowledgeable or ignorant, field situations that are comfortable or uncomfortable, and a myriad of circumstances that cannot be predicted or controlled. Even so, I believe that there is a broad consensus within the discipline that under this welter of difficulties there is a discoverable core that can be agreed upon. Otherwise there would be no anthropology.
I once had an idea, which I presented to my fieldwork class, that I optimistically called “stereo fieldwork.” We have stereoscopic vision because we have two eyes that see almost identical images, but the images are slightly different because our eyes see from slightly different angles. Likewise, we have stereo hearing because we have two ears that hear slightly different things from slightly different angles. Why could we not do stereo fieldwork by having two fieldworkers observe the same field situations side by side? One of my students gave the obvious answer. Stereo vision and hearing work because the brain takes the two signals it receives and interleaves them into a single sensation. How would you take two sets of field observations and interleave them into one stereo report? Answer: you can’t. You are always going to have two separate reports. In fact, it would be most instructive to have two fieldworkers of radically different backgrounds and theoretical perspectives work side by side and then compare their reports to see just how much they differ. Fieldwork inevitably produces egocentric (and ethnocentric) data, but how much does this fact distort the data?
At the moment anthropology is in a kind of post-just-about-everything phase: post-colonial, post-structural, post-modern, post-Foucaultian etc. etc. Slapping “post-” in front of the particular theoretical perspective you are objecting to is not particularly useful or enlightening. Nor is it always an honest approach. The functionalism and structural-functionalism of British social anthropology were supposedly debunked decades ago, yet they are not absolutely dead. We cannot speak of “post-functionalism” because, while the term may not be invoked, you’ll still read ethnographic accounts talking about the organic structures of communities and the ways in which they operate to the benefit of the culture.
In criticizing Evans-Pritchard’s ethnographic method of discourse in (over)generalizing behaviors, Rosaldo notes:
Notions of Nuer character often emerge through the analysis of social structure. Thus the people appear portrayed as “primitive” in their social conformity and their lack of individuality. The narrator, following a disciplinary norm, verbally represents the people with the group noun (the Nuer) or the masculine pronoun (he) rather than with more individuating personal names. One frequently, for example, encounters statements in the distanced normalizing mode of discourse, such as the following: “When a man feels that he has suffered an injury there is no authority to whom he can make a complaint and from whom he can obtain redress, so he at once challenges the man who has wronged him to a duel and the challenge must be accepted” . . . Evans-Pritchard fails to discuss, for example, what happens when, or how it happens that, a man does not take up the challenge. Surely a Nuer man has a certain leeway in deciding whether or not he has been wronged. Only someone pathologically oversocialized could follow such programmed normative commands without exception or contextual qualification. (Rosaldo 1986:94)
Rosaldo commits the same fault he blames Evans-Pritchard for: failing to make allowances for context. He does admit that Evans-Pritchard is “following a disciplinary norm” but does not follow through with this thought. That is, if questioned on this, or any other normative claim about Nuer behavior, Evans-Pritchard would certainly have confessed that all norms have exceptions. Evans-Pritchard’s original audience would have taken this for granted. The more salient point is whether in writing about cultures it is ever acceptable to generalize in this fashion. I would argue that it is, but not all the time.
If you visit Cambodia, I might caution you about tuk-tuk drivers. One hard and fast rule is not to get into the tuk-tuk without agreeing on the price first, and (at time of writing) a good rule of thumb is to calculate US $1 per kilometer. It’s a good idea to offer somewhat less than this at first, but you will probably end up in that neighborhood eventually. Under no circumstances get into a tuk-tuk without negotiating the price first because you will be overcharged at your destination. At this point you are screwed: no one will come to your defense – not other tuk-tuk drivers, not bystanders, and certainly not the police. If you try to argue, or, God forbid, start a fight, every Cambodian within earshot will come to the aid of the driver. Have there ever been exceptions to this generalization? Maybe, although I’ve never heard of one. Exactly how much you will be overcharged if you do not negotiate ahead of time is not certain, but you will be overcharged and the driver will not budge. I can say with confidence that you ignore my advice at your peril. In this sense my statement mirrors Evans-Pritchard’s and I see nothing wrong with that.
Where we get into trouble is generalizing when it is not warranted. If certain circumstances are not easily generalized, we need to say so, but we do not have to abandon the entire ethnographic enterprise because it is complicated. Nor is it fair to give up because terms and situations cannot be rigidly defined. Philosophy has played around with these issues for centuries, and anthropologists would do well to be aware of the issues, but not throw in the towel because the lines are not clear-cut, nor start playing around with philosophical conundrums when they are ill-equipped to do so.
Epistemology, the philosophical study of knowledge and belief, has been around for millennia, and philosophers continue to argue about where knowledge comes from, about the nature of proof, truth, and justification, and about such questions as “How do we know what we know?” I get caught up in their puzzles once in a while, but I do not wake up in the morning and ask myself whether my waking state is real or an illusion; I get on with my day. I have habitual ways of behaving that get me through the day, and I am not forever questioning them. Nor is anyone else, not even philosophers.
If I wanted to, I could write down for you a detailed analysis of the rules for riding a bicycle safely and efficiently in New York City, Oxford (England), Mantua (Italy), Kunming (China), Mandalay (Myanmar), and Phnom Penh (Cambodia) because I have ridden a bicycle for long periods in each of those cities, and, through experience, I have figured out what works and what doesn’t work. An epistemologist can challenge numerous aspects of how I know what I know, how accurate my knowledge is and all the rest of it, but the simple fact is that I can get around efficiently on a bicycle in all those cities. Philosophical wrangling about how I know what I know is a waste of time.
Learning the rules of an entire culture is a lot more complex than figuring out how to ride a bicycle in that culture, and it is possible to be mistaken or have incomplete information. But the task is not completely impossible, even if a philosopher wants to take issue with how the task is carried out, how you know what you know, and the limitations of that knowledge. If a philosopher told you that you should stop using the word “blue” because on the visible spectrum there are no clear boundaries between green, blue, and indigo, and because people identify any number of different shades as “blue,” would you comply and stop calling things blue? I seriously doubt it. Blue is a useful concept even though rigid definitions are hard to come by.
If you know anything about the history of philosophy, you would probably call me a pragmatist (not with an upper-case “P”). That is, I am concerned with what works and what does not work. Physical scientists (the ones who think about these things) do not waste much time with epistemology, fretting about the fact that their knowledge has limitations and that their theories are always going to be incomplete. They just go with what works – for now. Newton’s laws of motion have been completely overhauled by Einstein’s equations, but they work well enough for playing billiards, even for the motion of the planets. Euclid’s geometry has a major hole in it when it comes to parallel lines, but engineers and architects don’t worry about that when building bridges. The square root of minus 1 or the kinds of infinity that are hypothesized to exist put us in the realm of the absurdly abstract, yet they are concepts that work well enough for solving certain mathematical puzzles, which, in turn, have practical uses (sometimes).
I can hypothesize concerning the nature of the Asian cultures I am familiar with and use those hypotheses to explain why traffic patterns in those cultures are the way they are, and my hypotheses might be wrong or incomplete or sloppy at the margins of those cultures. But if you take those hypotheses as guides for how to ride a bicycle safely in those cultures you will find that they work. I can also explain to you how to get on a bus in China at a crowded bus stop, and how to get through a regional airport to your departure gate when you have limited time. What I have to accept about such knowledge, gained from painful experience, is that it is framed in terms of the way I think. That is, it is egocentric and ethnocentric. For example, I could say to you, “When you encounter a crowd of Chinese people who are all clamoring for the same thing (getting on a bus, checking in at an airport, negotiating traffic), assume that they will all put their own needs ahead of everyone else’s needs.” That may be an inaccurate statement of what is going through their heads at the time, but it is a good description of what those situations feel like and how to negotiate them. I have translated the experiences I have had into terms that make sense to me. I can also check my analysis by asking Chinese people if it makes sense to them. Such checking is not foolproof, though.
You may be a native speaker of English, but that does not mean that you understand how the language works, even though you use it correctly in speaking and writing. Why is “black, French, big dog” incorrect, and “big, black, French dog” correct? You know that the first phrase is wrong and the second is right because you have internalized the rules of grammar, even though you cannot always explain those rules to someone else. To make matters worse, the rules change, and there can be disagreement. What about “If I were you” versus “If I was you”? Technically, the first is correct and the second, incorrect, but you will hear both used by native speakers, perhaps with equal frequency (I have not done a statistical survey). Does telling native speakers that they are using their own language incorrectly make any sense? If enough native speakers use the incorrect form as a matter of preference, it will become the correct form.
Anthropologists are somewhat like linguists in that they are attempting to capture the basic framework of rules that cultures live by. We can argue all day and all night about whether this is a philosophically justified enterprise, whether it is a worthy enterprise, or whether it is even possible, but such arguments will not alter the fact that people do it all the time – just as people learn new languages all the time. I will never be mistaken for a native Khmer speaker, nor will I ever have a remotely universal grasp of the language’s shape and nuances. But daily I am getting better at using the language in everyday situations, and native speakers understand me. I have devised my own system for using classifiers correctly – words attached to numbers of things to indicate what kinds of things they are – because I have not been able to find one written down anywhere, and my system generally works fine. I do occasionally wonder why books, cattle, and enemy soldiers all have the same classifier, but I know that they do, and I use the classifier correctly (most of the time). My system works even though it is not a “native” system. It is ethnocentric (and egocentric), but I am not worried about that. I am only concerned with what works for me in gaining fluency.
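To show the sort of homemade system I mean, here is a hypothetical sketch in Python of a classifier cheat sheet. The groupings echo the point above (books, cattle, and enemy soldiers sharing one classifier), but the labels, the example categories, and the noun-number-classifier ordering are placeholders for illustration, not a description of actual Khmer.

# A hypothetical learner's lookup table: noun categories mapped to classifier labels.
# The labels are placeholders, not real Khmer words.
CLASSIFIER_BY_CATEGORY = {
    "book": "CLF_A",
    "cow": "CLF_A",            # shares a classifier with books, as noted above
    "enemy_soldier": "CLF_A",  # likewise
    "person": "CLF_B",
    "house": "CLF_C",
}

def counted_phrase(noun, number, category):
    """Build a counted phrase as noun-number-classifier (an assumed ordering).

    The point is only that a lookup table lets a learner count things acceptably
    without knowing why the categories group the way they do.
    """
    classifier = CLASSIFIER_BY_CATEGORY.get(category, "CLF_DEFAULT")
    return f"{noun} {number} {classifier}"

print(counted_phrase("book", 3, "book"))  # book 3 CLF_A
print(counted_phrase("cow", 2, "cow"))    # cow 2 CLF_A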
Now the big question: why do anthropology at all? There are many answers to that question. Mine is that I want to understand other cultures better because I want to understand myself better. I, like everyone, am a complex mix of genetic imperatives, personal experiences, and socialization. I can do something to modify all three in various ways if I wish to (if I am not happy with them), and I can use different academic disciplines to understand each. Anthropology helps me understand the socialization part. It does this, in part, by showing me how people in other cultures are socialized differently. To be fair, I said that my personality is a complex mix of three components, and it is not necessarily possible to tease them apart because they are so deeply intertwined. Nonetheless, I can identify some behaviors that reflect my socialization into larger cultural values, and I can modify some of them if I choose, but to do so I have to understand their place in society.
How I refer to my kin has to do with how the culture in which I was raised views family relationships. I have special names for my nuclear family (sister, father, mother) and different names for kin outside the nuclear family (aunt, uncle, cousin). Knowing that other cultures do things differently is not of much use to me. I am not about to start calling all my female cousins “sister” just because this is the norm in other cultures. There would be no point. However, this information leads me to look at the nuclear family more closely. Why do we live in nuclear families and why are they so important to us?
I was raised in a nuclear family. When I reached adulthood, I left home, got married, and started my own nuclear family in my own house. My son, when he reached adulthood, left home, married, and now lives with his wife in their own apartment. Probably they will have children and raise them in a nuclear household. Neither of us gave a second thought to why we lived this way at the time, although we are now both anthropologists and could challenge that mode of thinking (and living) if we cared to. We are not genetically programmed to live in nuclear families, even though on the surface it looks that way. Nor is the nuclear family the more “advanced” mode of living, although that was once the anthropological theory (in the nineteenth century). We can now see that the nuclear family is an economic unit of consumption that benefits modern capitalism better than other modes – even though it developed before modern capitalism. We cannot say that cause and effect is at work here, but we can see capitalism and the nuclear family co-evolving (in Euro-American cultures).
It is the norm for each nuclear family to have a stove and oven, a washing machine, a vacuum cleaner, etc. and also perhaps a lawnmower, dishwasher, clothes drier, one or more cars, microwave oven, etc. etc. In the 1970s I lived on a cul-de-sac in Chapel Hill, North Carolina, in a new development of new houses, where every man was out on a Saturday morning mowing his lawn at roughly the same time, each with his own lawnmower. Then we would each pack up our lawnmowers for the week. That was five lawnmowers used for about one hour per week for five lawns. If we had had a single communal lawnmower, we could have taken turns using it and saved money. The idea was unthinkable, even though I floated it once. Each household wanted to be an independent unit. Lawnmower shops benefitted. Three of us worked at the same university and went to work at around the same hours, yet we each took our own cars to and from work. Car and petrol sales benefitted.
When I worked at a university near New York City, many of the faculty did, indeed, carpool. Driving a car in Manhattan is a major expense, and there was no efficient public transport from the city to my university. When enough social and economic factors intervene, the norms of nuclear family consumption can be overridden. But there is a price to pay. You lose flexibility and independence. The faculty at my university were endlessly on the phone negotiating difficulties concerning carpooling.
Thinking anthropologically, I can decide whether I want to live in a nuclear family or not because I now realize that it is a choice based on my socialization. I can also analyze how the nuclear family unit is built into the fabric of the societies I have been a member of in order to determine what changes are possible for me, should I choose to make those changes, and what it would take to make those changes. In other words, learning about how other cultures work makes me a (potentially) more active agent when it comes to those aspects of my life that are governed by how I was socialized. Is this a good thing? I can give you my answer if you ask. What is yours?
References Cited
Barnes, R.H.
1984
Two Crows Denies It: A History of Controversy in Omaha Sociology. U. Nebraska Press.
Benedict, Ruth
1934
Patterns of Culture.
1946
The Chrysanthemum and the Sword. Boston, MA: Houghton Mifflin
Berlin, Brent and Paul Kay
1969
Basic Color Terms: Their Universality and Evolution. Berkeley: U. California Press
Boas, Franz.
1904
“The History of Anthropology” Science, 20 (512): 513-524.
1911
Handbook of American Indian Languages. Washington DC.
1927
Primitive Art.
Briggs, Jean L.
1970
Never in Anger: Portrait of an Eskimo Family. Cambridge MA: Harvard UP.
Chagnon, Napoleon
1968
Yąnomamö: The Fierce People. New York: Holt, Rinehart and Winston.
Clifford, James and George E. Marcus (eds.).
1986
Writing Culture: The Poetics and Politics of Ethnography. Berkeley: University of California Press.
Evans-Pritchard, E.E.
1951
Kinship and Marriage Among the Nuer. Oxford: Clarendon Press.
Fiedler, Leslie
1978
Freaks: Myths and Images of the Secret Self. Anchor
Finkelstein, Israel
2007
The Quest for the Historical Israel. Society of Biblical Literature.
Forrest, John
1988
Lord I’m Coming Home: Everyday Aesthetics in Tidewater, North Carolina. Cornell U.P.
1999
The History of Morris Dancing: 1458-1750. Toronto and Cambridge.
n.d.
The Genesis Option. Unpublished MS.
Fortes, Meyer and E. E. Evans-Pritchard (eds)
1940
African Political Systems. Oxford: OUP.
Frazer, James George
1906-1915
The Golden Bough: A Study in Magic and Religion. 3rd ed. 12 vols. London: Macmillan.
Freeman, Derek
1983
Margaret Mead and Samoa: The making and unmaking of an anthropological myth. Cambridge: Harvard University Press
French, J. and Raven, B.
1959
“The Bases of Social Power.” In Studies in Social Power, D. Cartwright (ed.): 150-167. Ann Arbor, MI: Institute for Social Research.
Freud, Sigmund
1918
Totem and Taboo: Resemblances Between the Psychic Lives of Savages and Neurotics. A. A. Brill (trans.). New York: Moffat.
Geertz, Clifford
1973
The Interpretation of Cultures. New York: Basic.
Gell, Alfred
1998
Art and Agency: An Anthropological Theory of Art. Oxford: OUP.
Gluckman, Max
1965
Politics, Law and Ritual in Tribal Society. Oxford: Blackwell.
Gmelch, George
1978
“Baseball Magic.” Human Nature
Gossett, Thomas F.
1997
Race: The History of an Idea in America. New York: Oxford University Press.
Graeber, David
2001
Toward an Anthropological Theory of Value: The False Coin of Our Own Dreams. New York: Palgrave.
Hall, Edward
1966
The Hidden Dimension. Garden City, New York: Doubleday.
Harris, Marvin
1974
Cows, Pigs, Wars, and Witches: The Riddles of Culture. New York: Random.
Heuscher, Julius
1963
A Psychiatric Study of Myths and Fairy Tales: Their Origin, Meaning, and Usefulness. Springfield, IL: Thomas.
Leach, Edmund
1954
Political Systems of Highland Burma: A Study of Kachin Social Structure. Harvard UP.
Lévi-Strauss, Claude
1949
Les structures élémentaires de la parenté. (The Elementary Structures of Kinship). Mouton.
Lomax, Alan
1968
Folk Song Style and Culture. New Brunswick: Transaction.
Mauss, Marcel
1925
Essai sur le don: Forme et raison de l’échange dans les sociétés archaïques (The Gift). L’Année Sociologique.
Mead, Margaret
1928
Coming of Age in Samoa: A Psychological Study of Primitive Youth for Western Civilization. New York: William Morrow.
Memmi, Albert
1965
The Colonizer and the Colonized. Boston: Beacon
Pike, Kenneth
1967
Language in Relation to a Unified Theory of the Structure of Human Behavior. The Hague: Mouton.
Propp, Vladimir
1958
The Morphology of the Folktale.
Sahlins, Marshall
1972
Stone Age Economics. New York: de Gruyter.
Sapir-Hen, Lidar et al.
2013
“Pig Husbandry in Iron Age Israel and Judah: New Insights Regarding the Origin of the ‘Taboo’”
Zeitschrift des Deutschen Palästina-Vereins 129(1): 1-20.
Sharp, Lauriston
1952
“Steel Axes for Stone-Age Australians.” Human Organization 11(2): 17-22.
Sklar, Deidre
2001
Dancing with the Virgin: Body and Faith in the Fiesta of Tortugas, New Mexico.
Swartz, Marc, Victor Turner, and Arthur Tuden (eds)
1966
Political Anthropology. Chicago: Aldine.
Turnbull, Colin
1961
The Forest People. Simon and Schuster
1972
The Mountain People. Cambridge University Press
Turner, Victor
1969
The Ritual Process: Structure and Anti-Structure. Chicago: Aldine.
Tylor, E. B.
1920 [1871]
Primitive Culture. New York: Putnams.
Van Gennep, Arnold
1960
Rites of Passage. Monika Vizedom and Gabrielle Caffee (trans.). London and Henley: Routledge and Kegan Paul.
Wallace, Anthony F. C.
1956
“Revitalization Movements.” American Anthropologist 58: 264-281.
Weber, Max
1930 [1905]
The Protestant Ethic and the Spirit of Capitalism. Talcott Parsons trans. London & Boston: Unwin.
Westermarck, Edvard
1891
The History of Human Marriage. London: Macmillan.
Whorf, Benjamin Lee
1940
“Science and Linguistics.” MIT Technology Review 42:229–231, 247-248
Witherspoon, D.J., S. Wooding, A.R. Rogers, et al.
2007
“Genetic Similarities Within and Between Human Populations.” Genetics 176(1): 351-359.
[1] New archeological evidence seems to point to the possibility of a small amount of domestication of plants in eastern Australia prior to the arrival of Europeans. So far, this evidence is fragmentary and disputed, and European colonists found zero evidence of the indigenous domestication of plants and animals for food.
[2] We now call hunters and gatherers “foragers” because they do a lot more than hunting to get animal products to eat.
[3] They call me lokkrou, which roughly translates as “teacher” in the 2nd person. Not brilliant because I am their student, but it works because it accords me respect. I used their given names.