Last week there was a lively discussion here on MT of the problem of sexual harassment that occurs in the field, where anthropologists do their work. The 'field' is sometimes a laboratory, but the particular problem discussed related to the field 'out there' more remote from the university, often far from urban areas and importantly often in other countries.
The discussion concerned many aspects of how we know the extent and diversity of the problem. That question didn't get resolved, but the real problem now at hand is what to do about it, or, more cogently, the fact that something should be done about it.
How to address such a topic in a way that gains consensus that is more than pro-forma agreement to bureaucratic documents filed away to guard against lawsuits, and that achieves real compliance, is not so obvious. Acceptable sexual behavior is subtle and culturally variable, and not everyone agrees on what the rules should be, though there is no disagreement that assault, including rape, is unacceptable.
So, if people really care about this subject, as it clearly seems they should, then what is needed is some way to formulate policies and procedures that might actually work. Discussions about how awful the problem is are fine, but realistically implementable ways to constrain behavior in unusual, hard-to-monitor settings are what need attention. If the ongoing discussion since the Anthropology meetings has brought attention to this issue, great. It clearly needs to continue.
15 comments:
Thanks Ken.
A great start to moving forward would be for us all to avoid discussing the issue as if it's just another science project, contemplating it in academic/theoretical terms, when it's about real people who've been really abused and who are really our friends and colleagues in anthropology (that is, if they were able to continue in anthropology after the abuse).
Questioning the sampling, considering science's inability to measure true prevalence, and pointing out that humans do nasty things everywhere does nothing to address the problem; it only piles pain on the victims, whose life-changing experiences are being diminished by their own colleagues musing about such things.
Done tastefully, pointing out that sexual abuse is part of the human condition is fine, but that sentiment has long been used to directly and indirectly support abusers.
Here are the prelim findings which should have been linked in the post about it on Friday: Reports of harassment and abuse in the field
I'm interested in seeing how Clancy's team moves forward with this and makes recommendations to us once the study is complete.
(not Friday, but Thursday it should say ^)
Our point today was to get unstuck from the argument about the methods, which was going nowhere and is irrelevant to the objective that needs to be kept in mind.
The notion that abuse is part of the human condition is, in a sense, the very reason that, in my view, a solution will not be easy to reach.
I can't relate details at all, but I think we all know of people (mainly men) who think that pressuring 'no' to mean 'yes' is perfectly alright behavior and mandated by Darwinian biology.
That's why my view is that a document that people sign as a requirement for, say, getting a grant or being on a field project, may not be enough, especially without some form of enforcement, but we know how complicated accusations and response to them can be.
If I had the answers, I'd not be blathering around here as I am, but would be proposing something.
I think everyone's on the same page here, really. The point Dan and now Ken have been trying to make is that the problem is obvious, real, raw, painful, and the field needs to address it. And kudos to Clancy and her team for making this an important subject of conversation, and clearly compelling many to think about what to do about it.
It was Clancy's talk, and her report of preliminary results, that introduced 'science' into the issue, with graphs of incidence and prevalence. If it's not legitimate to ask whether these are valid or reliable figures, that's odd.
Nancy Howell wrote a book about the dangers of fieldwork, without any statistics at all. In that case, and it would seem in the case of sexual abuse in the field, given that it's so difficult to produce accurate figures, the point can be made just fine without them. And without the distraction of arguing over the numbers.
I'm sorry this has gotten contentious. It shouldn't be. As far as I can tell, we all agree. There's a problem. People have been hurt. Something should be done.
I once had a copy of Howell's book, but I can't find it anymore. It was about physical dangers of field work, and provided a kind of guide as to how to avoid or minimize them, something like that.
I think it did not include topics like sexual abuse, but in any case it might be interesting to ask whether policies to prepare field workers have actually been adopted in a formal or documentable sense, and whether they have made a difference.
If so, it could provide a model for how to draft something; if not, a lesson.
Holly, if this is going to be treated as a scientific problem, why should we not ask the same scientific questions about it as we would about any other scientific problem? Because it's so important?
I said my piece. We have a fundamental disagreement that won't be resolved, since the study is a study and uses numbers and percentages, and that's not going away (and they said it's being packaged as a paper to submit for peer review), so the foundation of our disagreement won't go away. I should probably, as "odd" as I sound, just bow out now.
I think if we really want to improve the situation we need to change our own acceptable notions of field behavior, and I include myself and my own field experiences in with the problem. There are simply a lot of things we allow in the field because it is "the field" that were they in our lab or our classroom we would never even remotely consider.
First on the list is probably the near-ubiquitous and casual association of alcohol with the field. "Beer hours" and social gatherings are not unusual graduate lab experiences, but they seldom match the degree of sponsored and encouraged drinking that takes place at some field locations.
Universal approaches towards dealing with the problems are always going to be difficult to come by given the unique and specific circumstances of every field location. But encouraging safer environments by not intentionally promoting unsafe ones through massive provisioning of alcohol is a good start.
Probably any set of standards and expectations like this would help, at least somewhat. Getting acceptance, much less enforcement, if that's the right word, is going to be a lot harder.
I wonder how different this is, in terms of standards and compliance, from rules about, say, how research funds are used in the field (is everything spent legit in the US and in the other country, for example?) or bringing home artifacts if such is prohibited (i.e., small mementos packed away in one's luggage), and that sort of thing. Or less than rigorously applied informed consent in, say, studies of small indigenous villages.
If things are not 'universal' then how is the general understanding, and agreement to be achieved? I think this is the challenge. But I don't do this kind of fieldwork, so it's for those who do to join in the discussion.
When choosing a bed and breakfast, or a hotel, or such, we probably all go to TripAdvisor, or some similar site, to take advantage of the experience of others. It seems to me, if you really want to do something about this problem, the way to do so is to give potential students/field workers information about potential field sites before they go. That way they can make informed decisions about whether it is a good environment.
I know of a grad student who was advised by her supervisor not to do field work in the most obvious place because of the reputation of the site. That is good information, and she was fortunate to have a conscientious adviser.
This information must be known about every established field site out there, collectively. The way this sort of information is shared in every internet market is in user reviews of products or services. Academics are already reviewed this way by students (ratemyprofessor.com) for their classroom performance. Perhaps a similar anonymous marketplace for field sites would allow students to make informed choices.
...of course anonymity in a small field like anthropology is hard to come by, and this is a level of scrutiny I doubt most people would be comfortable with (it would also be subject to abuse, etc.).
Good thoughts!
I'm all for going directly to the sources: Asking the people who've experienced this how we can and should improve things and also using their stories to make recommendations. Pretty sure that's part of Clancy et al's project aims.
I want to respond to this curious comment: "It was Clancy's talk, and her report of preliminary results, that introduced 'science' into the issue, with graphs of incidence and prevalence. If it's not legitimate to ask whether these are valid or reliable figures, that's odd."
Since I must assume that all the bloggers here who have commented extensively on our study were at the talk themselves or read the blog post or reviewed the freely available slides carefully, I make this point for other readers: as was and has been repeatedly stated, we presented preliminary data from our sample. Period. We do not characterize our findings as being reflective of incidence or prevalence on any scale beyond that of our sample. It would be irresponsible to do so, just as it is irresponsible to suggest that we have done so. This is also one of the reasons we have not jumped immediately to making recommendations for culture change and enforcement. Based on our data, I can assure you that our figures are indeed "valid" and "reliable." Do they validly and reliably represent what is happening beyond what respondents reported to us? Who knows? And we don't suggest that they do. So the continuing questioning of this is, for lack of a better word, odd.
Let me offer an example. I study marmoset placentas. Actually, truth be told, I have studied a very teeny tiny subset of all of the marmoset placentas in space and time. Not all of them, just a really tiny small amount of them. That's what that "n" means in my tables. And the way scientific research is constructed means that because I have reported what my sample size is and have provided the reader with my sample descriptives, I am allowed to talk about "marmoset placentas" and not have to constantly defend the validity and reliability of *really* only talking about this terribly small number of marmoset placentas. Similarly, we presented our findings in the context of what the preliminary sample was (note that this was also very clearly reported in the original sources but not here in the blog for some reason): self-selected predominantly female, white, straight, US citizens. We are quite aware that this preliminary sample is unlikely to be globally representative of anthropologists, of field scientists, of women, of white people, of whatever. It seems odd (there's that word again) that MT (with the exception of Holly) finds this so troubling and confusing.
I guess we're just going to have to agree to disagree on the statistical aspects of your work. Your marmoset placentas are not self-selected, nor, I assume, are they selected for a characteristic that only some marmoset placentas have. So they are a representative sample of marmoset placentas. What you say about your sample, anyone else working with marmoset placentas can assume pertains to their sample as well (in a probabilistic sampling sense). Unless your marmosets happen to be, say, an inbred, localized population; in that case, you can only generalize to that population.
As far as we can tell, from the presentation, blog posts and the comments you've all left here, because your study sample is self-selected, and therefore respondents may have a special interest in answering your survey, your sample represents your sample and that's all, in the sense that we can't infer characteristics of the world of field workers generally from your data (your results might be representative in that sense, but one can't tell from the self-selected nature of the survey). In that sense, your statistics can say nothing about the presumed population of inference (people at risk of sexual abuse in the field).
Let's pick an extreme example -- let's say 100% of your sample reports abuse. You report that in your paper, say. But, what does it mean about the problem generally? It's impossible to say. Perhaps every person ever abused in the field answered your survey, and no one who didn't had any such experience. That's extreme and highly unlikely, but you have no way to know from the way you selected your sample. The issues with self-selected samples, and representative sampling, are not issues that we've made up.
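The worry about self-selection in the comment above can be made concrete with a toy calculation. All of the numbers below are hypothetical, chosen only to illustrate the mechanism: if people who have had the experience are more likely to answer a voluntary survey than people who have not, the rate observed in the sample can differ substantially from the rate in the population.

```python
# Toy sketch of self-selection bias (all numbers hypothetical).
population = 10_000
true_rate = 0.10                      # assumed true prevalence in the population: 10%

affected = int(population * true_rate)    # 1,000 people with the experience
unaffected = population - affected        # 9,000 people without it

# Assumed response probabilities: those with the experience are far more
# motivated to answer the survey than those without.
p_respond_affected = 0.50
p_respond_unaffected = 0.05

respondents_affected = affected * p_respond_affected        # 500 respondents
respondents_unaffected = unaffected * p_respond_unaffected  # 450 respondents

sample_rate = respondents_affected / (respondents_affected + respondents_unaffected)
print(f"true prevalence: {true_rate:.0%}")   # 10%
print(f"rate in sample:  {sample_rate:.0%}") # about 53%, over five times the true rate
```

With these made-up response probabilities, the survey would report roughly 53% prevalence against a true rate of 10%, and nothing in the sample itself reveals the size of the gap. This is why a self-selected sample can describe its respondents accurately while saying nothing reliable about the wider population of inference.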
But, the real point we were hoping to make with our posts is that that doesn't matter! You've got anecdotal evidence that there's a problem, and that's plenty. Nobody seems to question that, in the discussions that were held, nor that it's important.
If you don't do any statistics at all you'll have already done a major service. You've brought attention to a serious problem, and people are talking about it. Including, from your own and other comments here, people talking about how to address the problem, in some way that might actually work.
In any case, Julienne, we don't want to be in an endless loop about the statistical issues, when the real, important issues are clear enough.
Anne, I agree that the loop is counterproductive. But your comments questioning the validity and reliability (your words) of our statistics belie your repeated assertions that "statistics don't matter." If they don't, then why the continued criticism of them? Our findings are perfectly representative of our sample, which we have repeated over and over and over and over again is self-selected and thus not necessarily representative of any other sample, exactly as you also suggest in your example above.
While my marmoset example may not be exactly apropos, it still illustrates that by reporting the nature and limitations of a sample, one is generally allowed to say something about the findings. And the placentas I study are from captive weirdos that aren't inbred but perhaps quite different from other captive weirdo populations in ways we don't yet know, and since we don't know everything about what shapes placental (or any other) phenotypes, we can only assume that what we have found is generalizable to other populations. And I don't think anyone knows how valid that assumption - which grounds almost all the work that most of us do - really is, even in samples that are supposedly free from selection bias.
The main issue I take exception to is the suggestion that we have generalized our preliminary findings to any population beyond those who responded to the survey, an erroneous perspective most of the bloggers here (who are also colleagues in our field) seem invested in promoting. The purpose of our study - again, stated clearly in the presentation - was NOT to quantify the "world of field workers generally". To continue to make this argument is indeed odd and disappointing, but it's your blog and you are free to promote your viewpoint. We will indeed be in disagreement about it, but I am glad that our hard work in designing and implementing a survey that to date has had several hundred responses and scores of carefully analyzed interviews has generated so much passionate discourse.