It’s difficult to overstate our society’s fascination with Artificial Intelligence (AI). From the millions of people who tuned in every week for the new HBO show Westworld to home assistants like Amazon’s Echo and Google Home, Americans fully embrace the notion of “smart machines.” As a peculiar apex of our ability to craft tools, smart machines are revolutionizing our lives at home, at work, and in nearly every other facet of society.
We often envision true AI as resembling us – both in body and mind. The Turing Test has evolved in the collective imagination from a machine that can fool you over the phone to one that can fool you in front of your eyes. Indeed, modern conceptions of AI bring to mind Ex Machina’s Ava and Westworld’s “Hosts,” which are so like humans in both behavior and appearance that they are truly indistinguishable from us. However, it seems a bit self-centered to me to assume that a being who equals us in intelligence should also look like us. Though it is perhaps a fitting assessment from a species that gave itself the biological moniker of “wise man.” At any rate, it’s probably clear to computer scientists and exobiologists alike that “life” doesn’t necessarily need to resemble what we know it as. Likewise, “person” need not represent what we know it as.
Though we often take for granted that humans are persons, humans are not exempt from questions surrounding personhood. Indeed, what it means to be a person remains largely unsettled, even though we speak casually of “people” and “persons.” Just as it’s important to ask whether other beings might ever be persons, it is also important to ask whether humans are ever not persons. In this pursuit, it’s crucial to separate the concept of personhood from notions of respect, love, and importance. That is to say, while personhood may demand respect, love, and importance, something need not be a person to command them.
When the concept of personhood in humans comes into discussion, it is inevitably punted to the medical community, often in the context of abortion and end-of-life care. When does the heart first beat? When can a fetus feel pain? When does the brain begin, or stop, producing electrical activity? There is no doubt that advancements in our understanding of human physiology have enlightened discourse on what it means to be both a human and a person. However, the question of personhood is all too often debated solely within Western medical frames. This conflation of physiology and personhood is the same issue that was discussed in my previous post on primate personhood and will be revisited in my next post on artificial intelligence. To escape this quandary we need to consider factors outside of physiology that matter to personhood, such as the social.
Invited post by: Sally A. Applin (@AnthroPunk on Twitter)
I recently finished my Ph.D. As a present, a friend of mine gave me a hand. Not help, which he had given throughout the process, but rather a battery-powered automated hand, cut off at the wrist, similar to Thing, the Addams Family’s servant from TV and film. In part of my thesis, and in my research on automation, I’ve looked to Thing as a metaphor for IoT software automation. Thing, on TV, is a trusted friend who builds relationships with family members and can negotiate with others on their behalf. In fiction, and in the representation of fiction, Thing works beautifully and embodies what a smart agent could be: it is aware of its surroundings, it builds trust, it connects people. Thing is a keeper of local knowledge. The Applin and Fischer (2013) Thing agent is a software construct that uses deontic logic to encourage and support human agency, building trust in a relationship-based context. The hand my friend gave me moved on a fixed path for several seconds, and then stopped until its button was pushed again. It looked like Thing, but it was only a physical representation, a simulation of physical form. In automation, data collection is not the same as building relationships, and community knowledge cannot easily be derived from quantitative Big Data. This is one of the more serious problems with Amazon Go.
Amazon Go is a grocery store concept that allows people who have activated the Amazon Go app on their mobile phones to walk through an “authentication” turnstile into an Amazon Go supermarket. Once inside, people can “grab” what groceries they want or need and walk out the door without needing to check out, because Amazon’s “computer vision, sensor fusion, and deep learning” will calculate what people take and charge them accordingly via the app. Amazon Go has a video on its website that explains all of this, showing people “grabbing and going” with their groceries, stuffing them into bags or just holding onto them, and walking out. In the Amazon Go video, no one is shown talking to anyone else.
Last month, a New York Post article about video games being like “digital heroin” for kids caused a bit of an uproar. The article describes a young boy losing interest in reading and baseball in favor of Minecraft, throwing increasingly frequent tantrums until, late one night, his mother finds him in a catatonic state. Many have disputed the article as based on suspect evidence, and even as a plug for the author’s addiction recovery center, noting the human tendency to treat new technologies—especially those used by children—with hysteria. It’s just the latest in the “screen time” debates.
But beyond the scaremongering, what do screen time and immersion in digital worlds actually mean in terms of child rearing?
[Savage Minds welcomes guest blogger Sara Perry.]
On Friday my colleague, Dr Colleen Morgan, and I will be co-delivering a paper at the University of Bradford’s Archaeologies of Media and Film conference in Bradford, UK. For anyone not familiar with the still-emerging field of “media archaeology,” this is an exciting event, featuring some of its pivotal thinkers (e.g. Jussi Parikka, Thomas Elsaesser), and a diversity of researchers discussing everything from 19th century stereoscopy to statistical diagrams and animated GIFs. As the organisers stated in their Call for Papers, the conference is a gathering of various interests, all converging on “an approach that examines or reconsiders historical media in order to illuminate, disrupt and challenge our understanding of the present and future.”
Colleen and I are talking on the last day, in the last block of parallel sessions, in a line-up of speakers who appear to be the only other archaeologists at the event. While I’ll delve into the details of “media archaeology” in a subsequent post, it is notable that archaeologists effectively never feature in this stream of enquiry. Rarely do archaeologists or heritage specialists attempt to overtly insert themselves into the media archaeological discourse (Pogacar 2014 is arguably one exception), and neither do media archaeologists typically reach out to archaeology for intellectual or methodological contributions (but see Mattern 2012, 2013; Nesselroth-Woyzbun 2013). Indeed, the media archaeological literature has explicitly distanced itself from archaeology, with the editors of one keystone volume writing:
“Media archaeology should not be confused with archaeology as a discipline. When media archaeologists claim that they are ‘excavating’ media—cultural phenomena, the word should be understood in a specific way. Industrial archaeology, for example, digs through the foundations of demolished factories, boarding-houses, and dumps, revealing clues about habits, lifestyles, economic and social stratifications, and possibly deadly diseases. Media archaeology rummages textual, visual, and auditory archives as well as collections of artifacts, emphasizing both the discursive and the material manifestations of culture. Its explorations move fluidly between disciplines…” (Huhtamo and Parikka 2011).
I’ve been curious about this trend of archaeology-free media archaeology for a while now, particularly after attending Decoding the Digital last year at the University of Rochester (see Matthew Tyler-Jones’ excellent review of the meeting in two parts: I and II). At this conference, one of the attendees with an obvious media archaeological bent lamented the difficulties of studying abandoned virtual worlds wherein direct identification of human beings was essentially impossible (for all that was left in these worlds were fleeting digital traces). The implication was that few methodologies were available to negotiate this seemingly hopeless interrogative exercise.
About a year ago I wrote a long post that discussed both my general approach to working with academic PDFs and the specific Apple (OS X/iOS) software I use to manage my own workflow: Sente. I still consider Sente a kind of gold standard for reference management software, but a couple of things about it lead me to regularly check out the competition. One is that it only works on Apple products, and many of my students are Windows users. The other is that, even on the Mac, it does not work within the web browser itself, but forces you to launch the app and use its own built-in browser, which always interrupts my workflow. In my last post I mentioned a few other issues and briefly surveyed the competition; however, my current work environment has me on a Windows 7 computer, so I decided to look again at the alternatives, especially cross-platform ones. The first one I tried was ReadCube, but it just didn’t meet my needs: it didn’t do a very good job of capturing citation information (I had lots of errors in my metadata), and the iPad app was too limited. Another service, however, turned out to be more promising: Paperpile. I thought I’d write a short post about how I’m using it.
*North American Dialogue; with apologies in advance for acronym abundance
Savage Minds welcomes guest blogger Lindsay A. Bell
I recently became the Associate Editor of North American Dialogue (NAD). Part of the AAA Wiley-Blackwell basket of goodies, NAD is the peer reviewed journal of the Society for the Anthropology of North America (SANA). I was brought on to help with the journal’s “brand issues”: namely, its recent conversion to a peer reviewed publication and its history of being, um, well, CUNY-centric. I am pretty excited about working with SANA on NAD. As a relatively recent section of the AAA, SANA has done much in the way of establishing anthropologies of North America as politically and theoretically important. As the incoming Associate Editor, I am hoping to pick your savage minds about publishing, social media, and related issues. In particular, for those of you whose work is North American (and we mean that as broadly as possible), what would you like to see from this publication? And from the digital gurus in the crowd, I want to hear how, or whether, social media should be used to draw a broader public to scholarly work.
[This is an invited post by Lavanya Murali Proctor. Lavanya is a linguistic and cultural anthropologist who believes that the academic class system is incompatible with the principles and ethics of anthropology, and therefore we can—and should—be at the frontlines of this battle. She lives online at @anthrocharya].
Many contingent faculty have noted that the AAAs are very expensive, and therefore exclude those who cannot afford to go—a fairly large number of anthropologists. At the Chicago meetings, I spoke to a few members of the AAA governance on this issue. They said that the AAA aims to increase accessibility, broadly defined. This is no bad thing, considering that the meetings are inaccessible in a variety of ways to a variety of people, problems that anthropologists rehash every year (for example, they are unaffordable for adjuncts and hard to navigate for anthropologists with disabilities). The focus, in increasing accessibility, is on media and technology.
The question I’d like to throw open to the readership of this blog is this: do you have any suggestions for participatory media technologies that can be used at the meetings that would allow those currently excluded to be included as presenters and collaborators and not just audiences (within the parameters of limited bandwidth)?
[The following is an invited post by Jay Ruby. Jay has been exploring the relationship between cultures and pictures for over forty years. His research interests revolve around the application of anthropological insights to the production and comprehension of photographs, film, and television. For the past three decades, he has conducted ethnographic studies of pictorial communication among several U.S. communities.]
I first became interested in documentary and ethnographic film in the 1960s and witnessed a profound technological change motivated by the need some filmmakers had to create a new cinematic form. It occurred in two places almost simultaneously – France and the U.S. Filmmakers wanted lightweight 16mm cameras with sync sound that needed no lighting and only a small crew for location shoots. In 1960, Drew Associates – Bob Drew, Albert Maysles, and D.A. Pennebaker – jerry-rigged a fairly lightweight 16mm camera attached to a synced tape recorder and made the first American Direct Cinema film, Primary (Dave Saunders, Direct Cinema: Observational Documentary and the Politics of the Sixties, London: Wallflower Press, 2007). With its grainy, wobbly, sometimes out-of-focus images and often-garbled sound, the film radically altered how some U.S. documentarians made movies. While interest in observational-style films was relatively short-lived among U.S. documentarians, some European anthrofilmmakers still consider it the best way to make films (see Anna Grimshaw and Amanda Ravetz’s 2009 Observational Cinema: Anthropology, Film, and the Exploration of Social Life, Indiana University Press).
[The following is an “invited post” by Dr. Sarah Hillewaert. Sarah is an Assistant Professor of Linguistic Anthropology at the University of Toronto. Her work focuses on shifting notions of personhood and the changing linguistic and material practices of youth in (coastal) Kenya.]
On Saturday, September 21st, 2013, an upscale shopping center in Nairobi, Kenya became the target of a ruthless siege. A group of gunmen, their estimated number ranging between 6 and 15, entered the Westgate Mall and opened fire on bewildered shoppers, indiscriminately killing men, women, and children. A few hours into the siege, Al-Shabaab – a Somali Islamist group with ties to Al-Qaeda – claimed the Westgate attack, not through a video delivered to a major television network, nor through an official statement by Al-Shabaab’s leader, Ahmed Godane, but via a tweet on the organization’s Twitter account. The militants’ use of social media, and of Twitter in particular, would feature centrally in the international media’s coverage of the attack. This preoccupation with Al-Shabaab’s use of new media technology, and the concern it was able to create, revealed much more about our apprehension toward the unexpected linkages and similarities social media create than it did about Al-Shabaab’s international reach. The media coverage of the Westgate siege illustrated how we laud the “power” of social media when it generates desirable similarities; unanticipated linkages, however, need to be explained away. A focus on “outliers” or “extremists,” or the identification of practices that fit our social imaginary, then restores the familiar distance between “us” and “them.”
Some of you may be aware of the productivity cult known as “Getting Things Done” (GTD). Although I find the full-blown GTD approach doesn’t really fit well with an academic lifestyle (what’s the use of “contexts” when your work follows you everywhere?), reading about GTD taught me a few basic principles that make me feel less stressed out by allowing me to focus better on the work at hand. I mention GTD because I intend to use it as a framework to discuss reference management software, especially Sente for the iPad, which recently got a significant upgrade. This review consists of three sections: (1) applying GTD principles to academic reading with Sente; (2) comments on new features and continued limitations in the latest version of Sente for the iPad; and (3) other options for reading and managing references on the iPad.
Recently Kieran Healy posted a link on Twitter to a co-citation graph he’d made to try to understand what philosophers “have been talking about for the last two decades?” He also posted a nice poster he made from this data [PDF]. I reposted these and mentioned that it would be great to have something similar for anthropology. The internet being the wonderful place that it is, I shortly had my wish, courtesy of Jonathan Goodwin.
This chart isn’t as clean as Kieran’s – and probably has too much data (four journals going back to 1973), but Jonathan has helpfully provided instructions for how he did it in case anyone is interested in pursuing it further. I’d love to be able to create separate charts for each of the various sub-disciplines in anthropology, but that might be harder to do since they often appear in the same journals. Still, hopefully some interesting insights can be gleaned from this kind of data. If you are able to do anything with this, let us know in the comments!
UPDATE: Jonathan made a new, lower-density, chart for just 1998-to-the-present.
UPDATE: And a new one, with a chronological slider.
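For anyone curious to try this with their own data: underneath the graph software, a co-citation analysis reduces to counting how often pairs of works appear together in the same bibliography. Here is a minimal sketch in Python of that counting step (the reference lists and names below are invented for illustration; Jonathan’s actual charts were built from journal citation data going back to 1973):

```python
from itertools import combinations
from collections import Counter

def cocitation_counts(bibliographies):
    """Count how often each pair of works is cited together.

    `bibliographies` is a list of reference lists, one per citing
    article; each reference is any hashable identifier.
    """
    pairs = Counter()
    for refs in bibliographies:
        # Every unordered pair of references within one bibliography
        # counts as a single co-citation event.
        for a, b in combinations(sorted(set(refs)), 2):
            pairs[(a, b)] += 1
    return pairs

# Toy example: three articles with overlapping bibliographies
bibs = [
    ["Geertz 1973", "Bourdieu 1977", "Foucault 1977"],
    ["Geertz 1973", "Bourdieu 1977"],
    ["Bourdieu 1977", "Foucault 1977"],
]
counts = cocitation_counts(bibs)
print(counts[("Bourdieu 1977", "Geertz 1973")])  # cited together twice
```

The resulting pair counts are the edge weights of the co-citation graph; tools like Gephi or D3 then handle the layout and filtering that make posters like Kieran’s readable.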
Very soon Sente will be releasing a major update to the PDF rendering engine in their iPad app. When they do, I will revisit Sente with an in-depth review of an app that has evolved a lot since I last wrote about it. Till then, here is a quick list of seven lesser-known, but invaluable, apps for doing research on your iPad:
If you look through the archives of Savage Minds you will find a lot of posts that are seemingly unformatted. Most of these are by Rex, who was an early fan of Markdown, a “text-to-HTML conversion tool for web writers” developed by John Gruber. Unfortunately, the plugin we were using to make those posts appear pretty was sucking up a lot of server resources, so we disabled it until we could find something better. There are probably better options out there now, but we haven’t looked at them. I personally write my blog posts in raw HTML and never saw the advantage of learning Markdown… until now.
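To give a flavor of what that “text-to-HTML conversion” means in practice, here is a toy converter handling only a tiny subset of the syntax. This is purely an illustrative sketch, not Gruber’s actual Markdown.pl, which handles far more (lists, links, code blocks, and so on):

```python
import re

def mini_markdown(text):
    """Convert a tiny subset of Markdown to HTML.

    Handles only #-style headings, **bold**, and *italic* --
    a toy illustration of the idea, nothing more.
    """
    html_lines = []
    for line in text.splitlines():
        # ATX-style headings: 1-6 leading '#' characters
        m = re.match(r"(#{1,6})\s+(.*)", line)
        if m:
            level = len(m.group(1))
            line = f"<h{level}>{m.group(2)}</h{level}>"
        else:
            # Bold before italic, so ** is consumed before *
            line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
            line = re.sub(r"\*(.+?)\*", r"<em>\1</em>", line)
            line = f"<p>{line}</p>" if line else line
        html_lines.append(line)
    return "\n".join(html_lines)

print(mini_markdown("## Tools We Use\nMarkdown makes **bold** claims."))
```

The appeal for bloggers is exactly this: you write plain, readable text, and the ugly angle brackets are generated for you.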
Before I go on, a word of warning. Usually I only write my “Tools We Use” posts about software I feel confident about. That means it is bug-free, already has all the promised features, and can be easily used even by those who are less tech-savvy (with a bit of effort). However, some (but not all) of the tools discussed in this post aren’t really ready for prime time.
So what changed? Why did I come around to Markdown (MD)? Well, the main thing for me was my discovery of FoldingText. I know a lot of academics, Rex included, really like Scrivener (“the first and only word processing program designed specifically for the messy, non-linear way writers really work”), but despite trying really hard to like it, it just never “clicked” for me, mainly because I don’t like how it works as an outliner. FoldingText, on the other hand, is a great outliner. Yes, the current version is still missing some important features one would expect from an outliner, but I already love it. In this post I will write a bit about why I like FoldingText so much, as well as some of the other MD tools I’ve found helpful, including a way of writing PowerPoint-style presentations in MD and a new proposed syntax for annotating documents in MD. All this and more after the fold…
Obama may have gaffed, and neoliberal assistant editors at Fox News and the Republican National Committee may have exploitatively edited, repurposed, and exaggerated the speech, but it was Wall Street Journal writer L. Gordon Crovitz who mistook the misedits as evidence of US executive branch internet revisionism. Crovitz, ex-publisher of the Journal, ex-executive at Dow Jones, and social media start-up entrepreneur, attacked President Obama’s statement that the internet was funded and engineered by the federal government. “It’s an urban legend that the government launched the Internet,” he idiosyncratically declared. The crux of Crovitz’s argument focused on Robert Taylor, who ran the ARPAnet, a US DARPA project that connected computer networks to computer networks. Taylor, according to Crovitz, stated that this proto-internet “was not an Internet.” And therefore, most importantly for Crovitz, this meant that President Obama was dead wrong: Taylor, a federal employee at the time, did not help to invent the internet. The internet, on this account, was made not by publicly funded engineers but by private hands. Crovitz’s twist on the accepted story is that Taylor later made a different internet, Ethernet, at Xerox PARC, where he worked after DARPA. And it was Ethernet that became the internet.