Yesterday’s post on the tension between curatorial/service-y intellectual work and straight-up analytical work was intentionally kept rather general, both for broader appeal and because I’m still trying to work out a consistent approach to these questions. Today I’ll be a bit more specific, so consider this fair warning.1 I want to show how geospatial scholarship can be a sort of ground zero for these kinds of discussions and debates about curating and analysis.2
Whether factually true or not, I got the sense that I was one of the only people at the NEH-sponsored Institute for Enabling Geospatial Scholarship at UVA’s Scholars’ Lab who had never used Google Earth as more than a fun little toy.3 I’m still pretty uncomfortable with the interface, I have no idea what the program does, and I really can’t figure out what role it plays in my life. Most succinctly: finding the URL for the following link is the most work I have ever done with the Google Earth API.
On the other hand, I was probably in the top quintile among participants in the use of ArcGIS. So how is it that I’m reasonably proficient, by humanities student standards, with ArcGIS, but completely covered in thumbs regarding Google Earth? Curating vs. analyzing. Webmapping vs. mapping (for the argument).
While trying to figure out, during the proposal process, whether my approach to literature was crazy or mainstream, I spent a lot of time trying to find projects similar to mine online. Much of the work I found resembled Google Lit Trips, a persistent whipping boy of mine. Google Lit Trips is a curated repository of data from multiple contributors (so, collaborative) leaning on the Google Earth API, available largely to enhance the experience of the encoded texts for K–12 readers. In other words, it features service- and pedagogy-oriented work that leaves the work of analysis and argument to the reader. There are many projects like this online, and we were shown several over the course of the Institute, either as in-the-world examples or by the “curators” themselves.4
I have no epistemological quarrel with Google Lit Trips (despite the fact that they’re getting darned close to my datasets!) or with similar projects, like (the now defunct?!?) Gutenkarte.5 At this time I have not made any use of them, as I prefer (and have the luxury of being able) to collect my own data and code my own representations of it. Furthermore, though our initial actions are similar–we make note of where things happen in texts–the scope of the following step is dramatically different. Google Lit Trips compiles that data into a public fly-through that helps readers orient themselves within a text. Gutenkarte scatters place names from a text across a publicly accessible map. I, on the other hand, process the data (in private) to try to make a (public) argument with it. I play not only the role of the service but also that of the sole reader, who then transforms into analyzer and broadcasts the results of analysis to the public of my dissertation committee.
My argument with Google Lit Trips, and why it’s “my whipping boy,” is that I find it limits general expectations of what geospatial scholarship in the humanities can do.6 I don’t want it to be the case that saying “I do geospatial work on novels” comes to be understood as “I wrote my own version of Google Lit Trips.” This is why the tension between curating and analyzing I remarked on yesterday is still not entirely resolved.
To me, in fact, putting the data out there by itself, as, let’s say, a table of geolocated and page-referenced events, is almost irresponsible, since I feel an obligation to make use of my training to provide some kind of analysis. Sure, anyone can read a .kmz displayed on Google Earth, just like anyone can read a novel. But close reading a novel is a skill that, presumably, adds some kind of value that justifies adding the extra layer of the literary scholar to the interpretation of a text. The same is true of a map, which is basically what a Google Earth fly-through is. I have been trained to close read maps, and I think that’s a skill worth sharing.7
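For concreteness, such a table of geolocated, page-referenced events might look like the following sketch. The field names and rows here are invented placeholders for illustration, not my actual dataset:

```python
import csv
import io

# Hypothetical layout for a table of geolocated, page-referenced events.
# Field names and all rows are invented for illustration only.
raw = """novel,page,event,lat,lon
Novel A,12,arrival by ferry,40.7128,-74.0060
Novel A,87,factory strike,40.4406,-79.9959
Novel B,3,train departure,41.8781,-87.6298
"""

events = list(csv.DictReader(io.StringIO(raw)))

# Publishing the table as-is would be the "service" step; slicing and
# measuring it is where the analysis begins.
novel_a = [row for row in events if row["novel"] == "Novel A"]
```

Handing a reader this file (or its .kmz equivalent) is the service; nothing in it yet argues anything.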
But it gets even more complex. Not only have I been trained to read maps, but I’ve been trained to make them. And then I’ve been trained to augment the data on them (“geoprocess”) in order to answer questions. Once I introduce other analytical tools (network analysis, geostatistical analysis), not only does the range of possible questions I can ask (and subsequently try to answer) explode, but the answers become much more… precise. I can start talking about “confidence” and “significance.” And then I can generate arguments, which lead to chapters, monographs, etc.
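To gesture at what “confidence” and “significance” mean here: the average nearest-neighbour measure (roughly what underlies the Clark–Evans ratio familiar from spatial statistics) can be sketched in pure Python. This is a flat-plane toy on random points, with no edge correction, not my actual workflow or data:

```python
import math
import random

def clark_evans(points, area):
    """Average nearest-neighbour ratio R with a z-score, after Clark &
    Evans (1954): R < 1 suggests clustering, R > 1 dispersion, and
    |z| > 1.96 is significant at the 5% level under complete spatial
    randomness. Toy version: flat plane, no edge correction."""
    n = len(points)
    r_obs = sum(
        min(math.hypot(x1 - x2, y1 - y2)
            for j, (x2, y2) in enumerate(points) if i != j)
        for i, (x1, y1) in enumerate(points)
    ) / n
    density = n / area
    r_exp = 0.5 / math.sqrt(density)       # expected mean NN distance
    se = 0.26136 / math.sqrt(n * density)  # standard error under CSR
    return r_obs / r_exp, (r_obs - r_exp) / se

random.seed(42)
scatter = [(random.random(), random.random()) for _ in range(200)]
clump = [(0.5 + random.random() * 0.02, 0.5 + random.random() * 0.02)
         for _ in range(200)]

R_s, z_s = clark_evans(scatter, area=1.0)  # near 1: random-looking
R_c, z_c = clark_evans(clump, area=1.0)    # far below 1: clustered
```

The z-score is what lets the argument move from “this looks clustered to me” to “this clustering is significant at the 5% level.”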
At some point in this chain of events, however, I stopped being interested in webmapping, in providing a service for people to “explore” the data I’ve accumulated. I turned inward, engaging in my own play of buffers, directional distributions, and nearest neighbor calculations to see what I could learn from that play. None of this looks anything like “so you wrote your own version of Google Lit Trips,” which is why I want to encourage a high enough profile for it, so that people don’t underestimate the usefulness of geospatial work in the humanities.
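A buffer, the simplest of those geoprocessing moves, can be faked on a flat plane in a few lines. The coordinates and radius below are arbitrary inventions; real GIS buffers operate on projected geometries:

```python
import math

# A toy buffer query on a flat plane (real GIS buffers use projected
# coordinates; the 0.5-unit radius here is an arbitrary illustration).
def within_buffer(events, anchors, radius):
    """Keep events lying within `radius` of at least one anchor point."""
    return [e for e in events
            if any(math.hypot(e[0] - ax, e[1] - ay) <= radius
                   for ax, ay in anchors)]

anchors = [(0.0, 0.0), (10.0, 10.0)]
events = [(0.3, 0.3), (5.0, 5.0), (10.2, 9.9)]
near = within_buffer(events, anchors, radius=0.5)
```

Counting what falls inside such buffers around, say, landmarks or other characters’ events is one small move in the private “play” described above.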
But this, then, explains why I’ve never used the Google Earth API, and it’s why I only found out about Mapnik for the first time last week. Making web-accessible, pretty, interactive maps has never been the focus of my work. In fact, my maps are pointedly offline, ugly, and frozen, which they have to be for when I start processing them.8 And even now, I’m not sure I have a great interest in changing my approach. So, at the Institute, I was less interested in the discussions of making webmaps aesthetically appealing than in finding out if there were free ways to recreate the ArcGIS Spatial Statistics toolbox with something like Quantum GIS (signs point to no). It’s also why what I was possibly most excited about at the Institute–as far as “I can use this in my work immediately!” value is concerned–was learning that Open GeoDa had become publicly available.
On the other hand, back when I was planning my dissertation proposal, and back when that project included geocoding the events of a couple dozen novels, I always assumed that, when I was done geocoding, I would make the data available publicly. Maybe not as an interactive webmap, but still. Why not, after all? I can’t claim proprietary control over mere facts that I collected (as we learned during the fair use presentation at the Institute!). The number of novels I’ll geocode (and the depth to which I’m coding them) has greatly shrunk as my project has changed, however, to the point where I stopped thinking about the data I’m collecting as of any public utility.
In other words, I always assumed some sort of service-y aspect to my dissertation work in addition to the analytical; I was already pursuing an argument via curation, via play, via iterative exploration of my own dataset. I just wasn’t calling it that, and once I stopped considering my data to be of interest to the public, I even stopped thinking about it as a potential site of service.
But that tension… that tension remains, since now I’m saying basically that I would leave the data available in my wake. That is, I’ll continue building a monograph, and as a side benefit, anyone with a web browser can see where all the activity in Dos Passos’s U.S.A. occurred. As a result, service becomes the cherry on top, which feels kind of wrong. Or, at least, a copout.
I think I’ll end here, despite the fact that my argument feels like it has run out of steam more than concluded. These pieces I’m writing this week–most everything I’m writing about the Institute–are going to be half-baked, since it’s more a question of relating my response to the events of the Institute to my own interests than of presenting finished, tidy thoughts. Oh well.
- I know, I promised “fieldwork vs. armchairwork,” but that will come later! [↩]
- Ooh! maybe this will be “fieldwork vs. armchairwork”! [↩]
- A variant of this arose during our Twitter argument over cartographical aesthetics. I was forwarded links to Mapnik, which I had never even heard of before. As I’ll show, I’m still not sure how I’ll ever use it, though I’m glad to know about it. [↩]
- I don’t know that the people involved would agree to such a designation, which is why I isolate it in quotes. [↩]
- Gutenkarte’s domain has expired. Here’s MetaCarta’s description of it: “Ever read a book, and wondered where the heck it took place? With Gutenkarte, we combine books with maps to show where a story is taking place.” [↩]
- This part is a bit strawmanny, but I think it’s an important discussion to have, and I sort of regret shying away from it at the Institute. [↩]
- A quick example: everyone who has taken a statistics course has probably done some kind of exercise showing that humans are awful at detecting or creating randomness. This is true on the spatial plane, too. Eyes are bad at separating clusters from random distributions, so untrained (non-skeptical) eyes can often straight-up misread a map. That may actually be easier to do than misreading a novel. [↩]
- I pretty them up a bit when I prepare them for public presentation. And by “frozen” here, I mean in contrast to “flat,” which is a distinction I’ve tackled elsewhere. [↩]