Workshop: “Introduction to iOS Programming”

Recently, I attended the “Introduction to iOS Programming” workshop, led by Jeremy March from the CUNY Digital Fellows. I must say that, before this event, I had no knowledge of iOS or Android applications at all, so I found this overview of the app-development workflow very useful. In addition, the resources provided by Jeremy March and the Digital Fellows will surely be beneficial in the near future.

When it comes to the decision-making process, we first need to ask ourselves which platform is most appropriate for our purposes. Web applications are hard to create for a variety of reasons: they are not easy to code, they have to support many different screen sizes, and their storage is limited, since data can only be saved in the cloud. Comparing iOS with Android, even though the latter spans a wider spectrum of devices, we were told that it makes more sense to lean towards Apple’s operating system.

We were then introduced to Xcode; more precisely, to the languages and target platforms it supports. Jeremy March illustrated how iOS development is currently transitioning from Objective-C to Swift. Even though Objective-C provides more libraries to draw on, Swift includes many more modern features for safety and performance. As for the target platforms (iPhone and iPad), the choice simply depends on one’s own project.

Once we had learned some general characteristics of Xcode and iOS programming, we had the chance to explore and play around with a few sample projects. On the one hand, I did find myself somewhat lost with the directions at the beginning of the actual app-building process; it was a lot of brand-new information, and I believe some prior programming knowledge would have helped. On the other hand, I personally found the experience very enlightening. Overall, it was absolutely positive to take my first steps in this field with the help of the materials and resources provided. I definitely look forward to being part of more workshops connected to iOS programming next semester.


Humanities and space

What I found most relevant in last week’s readings for our class is the relation they establish between space and the humanities. At first sight, and basing my thoughts on what I had previously learned and how I had previously studied the humanities, I was sure that space had nothing to do with the humanities. I did not see a possible relation between the notion of space and the notion of the humanities; they seemed very distant to me, and, above all, I could not even imagine how a spatial approach could be useful for studying the humanities, and literature in particular. But then Franco Moretti’s book on one side, and HyperCities by Todd Presner, David Shepard, and Yoh Kawano on the other, opened my eyes and made me discover a new, fundamental approach to the study of literature.

Graphs, Maps, Trees by Franco Moretti was fundamental for me in discovering space as one of the constituents of literature, and also as a tool that can be abstractly employed to analyze and understand it. In the “Maps” chapter, space is analyzed in its basic meaning: what surrounds a situation described in a book, usually understood as the set of natural or artificial elements among which the book’s fictional protagonists live their lives. Moretti provides a description of the countryside landscape surrounding different tales by an English writer to show how the spatial disposition of objects related to human life can help us understand the kind of narration we are reading (and, consequently, the literary genre it belongs to and its historical and literary context). He analyzes the relationships between the stories told by different authors and the spatial position of their characters’ lives to show how historical context can influence the way a writer conceives, and depicts, his fictional world (usually as a reaction to what happens in a precise historical moment).
If this can be related to the “basic” idea of space we already have stored in our minds, the “Graphs” and “Trees” chapters provide an idea of space which, being more abstract, is used as an instrument for approaching literary patterns schematically and logically (Moretti often uses the word ‘algorithm’ in this book). In these chapters he does not directly treat space as the setting surrounding the characters of a book; instead, he focuses on spatial instruments that let him create models of literature that are diachronic and synchronic at the same time. This is clearest, for me, when Moretti talks about trees: he imagines these instruments as capable of representing a system of forces operating together at the same time, and of making this clear to us by schematizing it in a two-dimensional diagram. So, once more, I understood the relevance of space and the spatial dimension for the study of literature.

What HyperCities gave me, instead, was a new sense of space as a connection of thousands of different events and constituents. “Lexicon” and “The Humanities in the Digital Humanities” provided, first of all, a methodological introduction to the purposes of the book and, above all, to the place of the humanities in the digital humanities and in this hypertextual context. Indeed, the HyperCities projects described here embed, in their spatial dimension, possibilities for entering non-spatial dimensions. The maps of Berlin and Rome, for instance, allow us to see in a two-dimensional representation the changes that time has worked on buildings, streets, and spaces. And, even more important, the social media maps provide a dynamic representation of specific events. If one of the goals of the digital humanities is collaboration and cooperation, these social maps can surely be considered collaborative and cooperative projects, and they provide their viewers with a more accurate representation of an event over time.

So, to conclude, the possibility of integrating time into map representations is surely fundamental for creating better models of events. As Moretti taught us, literature too can be considered an event, with a development through time: maps, and space, must not be considered only static, synchronic representations, but also models for the development of an event through time. They give us an instantaneous view of the different aspects of the event we are analyzing: they are not simply representations, but models of real events depicted in a two-dimensional space.

Works cited:

Moretti, Franco. Graphs, Maps, Trees: Abstract Models for a Literary History. London: Verso, 2005. Print.

Presner, Todd, David Shepard, and Yoh Kawano. HyperCities: Thick Mapping in the Digital Humanities. Cambridge, MA: Harvard University Press, 2014. Print.


Data Visualization – Presented by Micki Kaufman

Micki Kaufman, former class visitor and creator of Quantifying Kissinger, recently hosted a data visualization workshop on October 31st, where she not only showed how to set projects up in Gephi and Tableau but also demonstrated the power of smaller text-analysis tools like AntConc and MALLET. Her own project deals with thousands of documents and the constraints of originally undefined metadata. How do you create raw data from documents? And how do you visualize it in useful ways that spark humanities-based questions?
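As a rough illustration of that first step, turning plain documents into countable data, here is a minimal Python sketch of the kind of word-frequency pass a tool like AntConc automates (the corpus/ folder and its .txt files are hypothetical):

```python
# Tokenize a folder of plain-text files and count word frequencies,
# a first pass at turning documents into "raw data."
import re
from collections import Counter
from pathlib import Path

counts = Counter()
for doc in Path("corpus").glob("*.txt"):   # hypothetical corpus folder
    text = doc.read_text(encoding="utf-8").lower()
    counts.update(re.findall(r"[a-z]+", text))

# The most frequent words become rows of data that a tool like
# Gephi or Tableau could then visualize.
for word, n in counts.most_common(20):
    print(f"{word}\t{n}")
```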


Guide to Live Tweeting for Academic Conferences

On Oct. 20, I attended the Live Tweeting for Academic Conferences workshop hosted by the CUNY Graduate Social Media Fellows in anthropology, earth and environmental science, English, music, philosophy, and urban education. One might say, “Tweeting? We all know how to do that.” Yet personal tweeting is different from tweeting for a particular event. Simply put, the latter requires basic planning that runs in parallel to the conference.

Live tweeting is the documentation of an event by Twitter users; it includes sharing quotations, images, and external links. Tweets from different users are usually linked to each other by an event-based hashtag. Today, live tweeting is quite essential, since it brings a wider audience to the conference. Especially for those who cannot attend, it presents an opportunity for contribution: they can comment, ask questions, or just retweet the content. In this sense, live tweeting creates a communication platform for people interested in the same topic, whether they are inside or outside the event venue. As the Social Media Fellows highlighted, live tweeting should be planned as a particular session of the conference, and according to them its organization divides into three periods: before, during, and after.

The first step in the “before the event” phase is creating and promoting a hashtag. It could be an abbreviation of the event name, or refer directly to the conference theme. It’s important to check the availability of the hashtag on Twitter, in order to avoid confusion with other causes, and it is also useful to check the hashtags of previous events; if it is an annual event, for instance, it’s helpful to mark the year. Moreover, the conference hashtag should appear on print materials such as posters, booklets, and flyers. It is also effective to announce the official hashtag during the welcoming talk and to have panel moderators repeat it.

The following step is setting up a tweeting team, and the CUNY Social Media Fellows suggest four basic roles: Preparatory Researcher (compiles pre-drafted tweets beforehand and builds an archive of Twitter handles and hashtags to use throughout the conference), Master Tweeter (the person designated to craft tweets), Designated Retweeter (retweets using the hashtag), and Post-Conference Curator (curates and showcases highlights after the event). Having multiple tweeters helps avoid being overwhelmed, because it is exhausting to quote and/or rephrase speakers’ statements in 140 characters, hashtag included. Another important point is the tweeters’ knowledge of the conference theme: familiarity with the topic provides a filter for selecting what to tweet. It is also important to consider the risk of tweeting too much, which might cause unfollows.

The third task in the “before the event” phase is preliminary research on the speakers: their institutions and previous work, and their relevant tweets, handles, and external links. The resulting file is called a master document, and it is very helpful for finding needed information quickly. It is also practical to schedule tweets for particular announcements (such as “the second panel will start after lunch”), though it is important to stay alert for any changes in the event program.
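A master document could live in any spreadsheet, but to make the idea concrete, here is a hypothetical sketch of one as structured data in Python (all names, handles, and the hashtag are invented for illustration):

```python
# A minimal, hypothetical "master document": one record per speaker,
# with handle, institution, and a pre-drafted tweet ready to send.
speakers = [
    {
        "name": "Dr. Jane Doe",                      # invented speaker
        "handle": "@jdoe",                           # invented handle
        "institution": "Example University",
        "drafted_tweet": "Up next: Dr. Jane Doe (@jdoe) on digital archives #conf16",
    },
]

def lookup(handle):
    """Find a speaker record quickly during a live session."""
    return next((s for s in speakers if s["handle"] == handle), None)

print(lookup("@jdoe")["drafted_tweet"])
```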

The final step concerns technological infrastructure: checking the wifi connection, power outlets, computers, and mobile devices, and taking some photos before the conference to share during live tweeting. The fellows highly recommend laptops, which make it easier to write quickly.

After setting everything up, it’s time for live tweeting, and there are things tweeters should pay attention to during the event. First of all, tweeters should stay alert during the talks and use their own judgment to pick out the important parts to tweet. The fellows suggest taking notes on the talk in a notepad and paraphrasing the content in 140 characters. Attribution is also quite important during live tweeting, in order to connect tweets to other Twitter users through handles and hashtags. Another tip from the fellows is to use images as much as possible; in my opinion, it would also be helpful to build an image library beforehand, including images of the speakers, their books, flyers, the conference team, the event venue, and so on. During a live tweeting session your follower count usually increases, and it is good practice to follow new followers back, but vetting the accounts is a must: a new follower could be a bot, or someone irrelevant to your event. It is also very effective to promote your hashtag during live tweets via retweets. The fellows suggest lining up some retweeters among scholars, the conference team, and the speakers; before the event you can ask for their support in retweeting and keep them informed about your tweets. These retweeters are mostly people who also tweet often during events.

The final phase of live tweeting comes after the event: curating all the tweets and writing a history of the event through them. My interpretation is that this amounts to a conference report on Twitter. Storify, for instance, is a digital tool for creating and presenting such a curation.

The workshop also offered a look at Twitter vocabulary: abbreviations that accelerate communication among tweeters by connecting them via hashtags. If you are new to Twitter, you can use a Twittionary to pick up the tweeting language.

Above all, the fellows gave advice on applications that help moderate an account easily and quickly, such as Hootsuite and TweetDeck. In both you can follow multiple feeds at once: the newsfeed, mentions, direct messages, and so on. These applications also let you manage your other social media accounts, such as Facebook and Instagram.

The Live Tweeting workshop was very fruitful for me; it provided a framework for developing a social media session for a conference, focusing on key concepts of tweeting that go beyond creating interesting content to the organizational work that increases a hashtag’s impact. Millions of hashtags appear every day, and if you do not do sufficient outreach, there is always a risk of going unnoticed.

By following this link you can find the CUNY Social Media Fellows’ guide, which includes links for further reading.

 


So you want to make a map: Starting a GIS project

I recently attended the workshop “So you want to make a map: Starting a GIS project” with Javier Otero Peña and Kelsey Chatlost. Although I have always had an interest in mapping projects, I have never attempted one myself, mostly because I didn’t know where to start. This workshop, despite some of its flaws, pointed me in the right direction by providing what I thought was an accessible introduction to QGIS, a map-making program.

We began with some questions: Do you really need a map? What type of map do you need? How / Where are you going to get / produce additional data you may need? Although this section was somewhat tedious, it did open up a space in which I could think about the potential uses maps could serve in my project, or whether a map was even worth my time pursuing.

We then covered some basic terminology, the most important of which seemed to be the term “data.” Data, at least in a map project, can be divided into two separate categories: raster data (grids of pixel values, such as satellite imagery or elevation models) and vector data (points, lines, and polygons with associated attributes). In either case, one needs a dataset, that is, a spreadsheet / attribute table / database containing the information you would like to render as, or on top of, a map.

This led to a discussion about layers. I found that they work much like layers in Photoshop, in that each layer can contain only one type of data: either vector or raster. And with all this information, we were finally able to begin our own map experiment!
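The workshop worked through QGIS’s graphical interface, but as a rough sketch of the same vector/raster distinction in code, here is what loading one layer of each type might look like with the geopandas and rasterio Python libraries (the file names are invented; each object would correspond to one layer in a map project):

```python
# A sketch of loading one vector layer and one raster layer in Python.
import geopandas as gpd
import rasterio

# Vector data: geometries (points, lines, polygons) plus an attribute table.
neighborhoods = gpd.read_file("neighborhoods.shp")   # hypothetical shapefile
print(neighborhoods.head())    # the attribute table
print(neighborhoods.crs)       # the coordinate reference system

# Raster data: a grid of pixel values, such as elevation or satellite imagery.
with rasterio.open("elevation.tif") as src:          # hypothetical raster
    band = src.read(1)         # first band as a NumPy array
    print(src.width, src.height, src.crs)
```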

Although it was an enjoyable experience, especially for a new learner like me, there were some things that I felt could be improved. First and foremost, the time allotted to actually making a map was insufficient; I found that we spent too much time on the introductory material (definitions, what a map can do, and so on). Somewhat paradoxically, another issue was the speed of the directions once we began the map-making. I often found myself looking at other people’s computers, in the vain hope that I could see what they were doing and catch up. This was somewhat alleviated by the PowerPoint that Otero Peña and Chatlost sent out after the workshop, but I believe the workshop would have benefited from more detailed attention to the actual process of making a map. This isn’t a total dismissal of the workshop’s procedure, just an observation about what would have made my experience better.


Walled Gardens and Websites in Boxes: Planned Obsolescence, “Texts”

“The Internet hates walled gardens,” Kathleen Fitzpatrick writes in the “Texts” chapter of Planned Obsolescence, and this reality highlights some of the failures of digital publishing to acknowledge and facilitate the communal readings of texts (117). Certain file formats—namely, PDF, EPUB, and Kindle—often encourage the reproduction of the look and feel of the print book while the economics surrounding these file formats often isolate readers in ways that the economics of the print book did, and do, not.

With the PDF, the encouragement to reproduce the look and feel of the print book seems clear enough: this format fixes text into place, as typesetters do, freezing it for printing and portability and not much else. In “Texts,” Fitzpatrick writes that “these documents are, until printed, like paper under glass: mostly unmarkable, resisting interaction with an active reader or with other such documents in the network” (93). It comes as no surprise, then, that, before delivering the finalized version of a book to a printer, typesetters export that book to PDF (from software such as InDesign or Quark). With EPUB and Kindle files, however, the encouragement to reproduce the look and feel of the print book operates in a more subtle fashion. In fact, both formats rely on HTML, CSS, and some basic metadata (mostly captured in the Dublin Core standard)—all core web technologies. Yet, EPUB and Kindle documents remain discrete entities, unconnected to larger networks of text; they become “websites in boxes” as I once heard a conference speaker call them. Furthermore, these digital documents are often rendered via skeuomorphic software. Consider, for example, the way in which you can “flip” through pages in iBooks—an older version of that software actually featured a “flipping” sound effect. Even something as necessary and useful as the EPUB 3 Structural Semantic Vocabulary encourages publishers to partition and code their files using epub:type values such as frontmatter or pagebreak (IDPF). As Fitzpatrick argues, “we remain tied to thinking about electronic texts in terms of print-based, or, more specifically, codex-based, models” (94).
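Since both formats are, under the hood, bundles of web files, it is easy to look inside one. A minimal Python sketch (assuming a hypothetical local file named book.epub) makes the “website in a box” point concrete:

```python
# An EPUB really is a "website in a box": a ZIP archive of XHTML,
# CSS, and metadata. This sketch lists the contents of one.
import zipfile

with zipfile.ZipFile("book.epub") as epub:   # hypothetical file
    for name in epub.namelist():
        print(name)   # e.g. mimetype, META-INF/container.xml, *.xhtml, *.css

    # container.xml points to the package file, which holds the
    # Dublin Core metadata (title, creator, language, and so on).
    print(epub.read("META-INF/container.xml").decode("utf-8"))
```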

Perhaps more concerning, however, are the economics of the digital publishing environment in which these file formats reside. While PDF, EPUB, and Kindle documents operate as discrete entities, they often operate as discrete entities tied to one reader or one device through digital rights management (DRM). Intended to thwart piracy, DRM takes many forms, from prohibiting the transfer of a file to watermarking a file with a reader’s name, that seem to roll back some of the communal benefits of the print book. For example, while a reader can share a purchased print book with a friend, she or he might not be able to do so with an ebook with restrictive DRM. Similarly, while publishers might sell print books to libraries with liberal terms of use, they might license ebooks to libraries with more stringent terms of use and expiration dates, potentially depriving readers of access to these digital documents via public institutions. Meanwhile, the Kindle file format remains proprietary and intended to keep Amazon customers within the confines of Amazon. All of this is not to suggest that the economics that have surrounded, and do surround, the print book are necessarily ideal. Instead, I want to suggest that the economics surrounding these file formats might frustrate and isolate readers in new ways.

How, though, can we avoid the walled gardens of these file formats and create texts that neither mimic the look and feel of print books nor operate within problematic economic systems? For Fitzpatrick, acknowledging the communal nature of texts might be a good start: “developers of textual technologies would do well to think about ways to situate those texts within a community, and to promote communal discussion and debate within those texts’ frames” (107). A platform like CommentPress does this, she argues, by encouraging “the social interconnections of authors and readers” and, ultimately, implying that publishing remains an ongoing process conducted across time and texts (119–120). Meanwhile, a more recent project like EPUB Zero, which explores reading the EPUB format in browsers and freeing it from dependency on reading software, might encourage us to rethink the nature of digital texts while acknowledging that all texts operate within networks and should, therefore, be portable and shareable among readers (Cramer).1 Ultimately, Fitzpatrick’s ability to address some of the shortcomings of digital publishing and gesture, through an understanding of textual scholarship, towards a more open, interconnected environment seems like the most compelling part of this chapter.

Notes

1. Planned Obsolescence does not mention this project. Instead, I have mentioned it here as another, more recent (from 2015) example.

Works Cited

Cramer, Dave. “EPUB Zero.” GitHub, 30 November 2015, https://github.com/dauwhe/epub-zero/blob/gh-pages/readme.md.

Fitzpatrick, Kathleen. Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York University Press, 2011.

IDPF (International Digital Publishing Forum). “EPUB 3 Structural Semantics Vocabulary,” 5 October 2016, https://idpf.github.io/epub-vocabs/structure/.


Planned Obsolescence: Peer Review

In the first chapter of Planned Obsolescence, Kathleen Fitzpatrick discusses peer review. She begins with a summary of its history, tracing it back to seventeenth-century censorship: knowledge production was relegated to various “societies” (such as the Royal Society of London), which were funded, in part, by the state, and peer review was therefore a tool that allowed the state to police what could and could not be considered “knowledge.” However, according to Fitzpatrick, this isn’t how peer review functions in the present:

Gradually…scholarly societies facilitated a transition in scientific peer review from state censorship to self-policing, allowing them a degree of autonomy but simultaneously creating…a disciplinary technology, one that produces the conditions of possibility for the academic disciplines that it authorizes (21).

Peer review is now a self-policing enterprise, meaning that academics create and perpetuate the conditions that determine what is and is not “scholarship.” They are, in essence, slaves to their own system, a system which, as Fitzpatrick points out, is no longer sustainable. This unsustainability has numerous causes: academic publication is no longer profitable, more and more work is being done online, and, perhaps most importantly, the very nature of “authority” is being undermined by the Internet, where the production of knowledge is often crowdsourced.

This creates the following problem, as Fitzpatrick notes: “The production of knowledge is the academy’s very reason for being, and if we cling to an outdated system for establishing and measuring authority while the nature of authority is shifting around us, we run the risk of becoming increasingly irrelevant to contemporary culture’s dominant ways of knowing” (17).

This notion of relevancy is, at least for me, where one could provide some pushback. Doesn’t privileging relevancy imply that academia had/has some sort of ongoing relationship with the public? I don’t believe that this is necessarily true, at least historically, nor, dare I say, in the present. Isn’t it rather the case, as we’ve discovered by tracing the beginnings of peer review to “societies,” that academia has always been a space cut off from the rest of the world? We may, in fact, be arguing for a relevancy that never existed, which isn’t to say that Fitzpatrick’s proposals are worthless. But wouldn’t it be more honest to say that embracing alternative methods of knowledge production is first and foremost a way for academia to gain a relevancy it never had?

Fitzpatrick also often uses the terms consensus and dissensus, buzzwords taken from the philosophical project of Jacques Rancière. The internet is a site of dissensus, which means that no one opinion (theoretically) can ever gain precedence over all other opinions. This, according to Fitzpatrick, benefits peer review, in that it can provide multiple perspectives from which the writer can draw, essentially democratizing scholarship. The problem with dissensus, however, is that it can just as easily lead to a terrible consensus (one can think of the dissensus of a portion of America, which led to the consensus around a presidential nominee: Trump). Fitzpatrick seems to acknowledge this:

As we think about peer-to-peer review, it will be important to consider the ways that network effects bring out both the best and the worst in the communities they connect, and the kinds of vigilance that we must bring to bear in guarding against the potential reproduction of the dominant, often exclusionary ideological structures of the Internet within the engagement between scholars and readers online.

Doesn’t this imply, however, that all dissensus isn’t created equal, that what Fitzpatrick is actually advocating for is in fact a dissensus with certain limitations, i.e. consensus? Can there be a truly sustained dissensus, one which leads to a fully democratic scholarship? Wouldn’t such a sustained dissensus be the end of scholarship?

Works Cited

Fitzpatrick, Kathleen. “Peer Review.” Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York: New York University Press, 2011. 15-49. Print.


Planned Obsolescence, Preservation

In the fourth chapter of Dr. Kathleen Fitzpatrick’s Planned Obsolescence, titled “Preservation,” Dr. Fitzpatrick addresses a sort of anxiety about the future of texts produced in the age of networked publishing systems. This anxiety rests on the underlying assumption that printed books are tangible/material/durable while digital texts and data are insubstantial/ephemeral/fragile. Dr. Fitzpatrick dismantles this binary by highlighting the ways in which printed material degrades and how recoverable digital data can be.

[In my current job at a conservation studio, I can attest to the fragility of paper (or any other material). Even acid-free paper can develop all sorts of unforeseen problems: accumulations of dirt, mold, salt, foxing (a sort of rust), etc. It can be really gross but also fascinating, but I digress…]

I am simplifying, very generally, her nuanced disagreement with these assumptions about printed material and digital texts in order to get to a key difference, which, Fitzpatrick explains, “has to do with our understandings of those forms” (123).

Centuries of theory and practice have produced a culture and infrastructure devoted to the preservation of printed material. Dr. Fitzpatrick argues that we now need to quickly and carefully “develop practices appropriate to the preservation of our digital heritage” (123), a task that requires the “collective insight and commitment of libraries, presses, scholars, and administrators” (125). One challenge in establishing preservation practices in this age of digital production is the multiplicity of systems that host digital projects, which makes developing a unified approach difficult. Dr. Fitzpatrick argues that three preservation issues must be addressed: the development of commonly held standards for markup, the provision of appropriate metadata, and the continued allowance of access.

What was most compelling to me was the way Dr. Fitzpatrick highlights the significance of social systems in addressing these issues. In discussing standards, she notes that open source software is supported by committed development communities. When discussing metadata, she argues that future classification systems should reflect expert opinion as well as user experience. In the section on access, Dr. Fitzpatrick discusses two preservation programs, LOCKSS and Portico, with regard to their models of installation, collection, distribution of material, and user experience.

Overall, this chapter demonstrates how productive a shift in focus can be. Certainly, issues of technique and best practices for digital preservation need to be considered, and Dr. Fitzpatrick discusses them. At the same time, she shifts the focus from the (im)material aspects of digital publications, a framing that can be fatalistic and reductive, to the social aspects of preservation. Establishing communities devoted to digital preservation benefits host institutions and serves the public good.

Source:

Fitzpatrick, Kathleen. “Preservation.” Planned Obsolescence: Publishing, Technology, and the Future of the Academy. New York: New York University Press, 2011. 121-154. Print.


Planned Obsolescence: The University

In Kathleen Fitzpatrick’s final section, she tackles the “unstable economic model” under which university publishing operates (155). The university press is unfortunately in a strange state of limbo: it operates for the university and is subsidized by the university, but it also serves a purpose outside the university. Fitzpatrick is aware that this section would easily be subject to the most scrutiny, because most of what she covers is speculative. Besides going into costs and history, she offers a few different models under which a new university press system could operate. However, she is aware of the scale of what she is proposing, calling it a “broad reconsideration of the press’s relationship to the institution’s core mission” (172). Commercial presses exist on a different playing field, where funding and business occur under the auspices of conglomerates. For Fitzpatrick, the university press needs to move out of this state of limbo and become a cornerstone of the university’s advancement of knowledge.



Inspiration from TED Talks

I receive a weekly email from TED Talks every Saturday morning, and depending on how busy I am catching up on all the things I wasn’t able to do during the week, I try to make (a little) time to browse the newest talks. Not too many Saturdays ago, I was feeling a little burnt out on work and school (and the ever-present big data project we’re supposed to be thinking about), so I turned to TED Talks for some inspiration.

The first talk I viewed, given by Amit Sood, introduces the audience to “Every piece of art you’ve ever wanted to see up close and searchable.” Amit is the director of Google’s Cultural Institute & Art Project, and his talk got me thinking about how a large-scale collection of images can be used to make new stories and connections across the various places where the works are located! The project has been done in coordination with many of the world’s greatest art institutions (including museums, archives, and other foundations) and is a shining example of how access to valuable cultural artifacts can be shared extensively with everyone, regardless of their proximity or their ability to physically (or monetarily) get to a museum. Although I’m most likely not going to use images as my large data set (I’m thinking more along the lines of gathering data about my library’s book collection), I was impressed by the many visualizations displayed and the possible uses for this kind of data set.

A big bang metaphor was used regularly throughout this talk and it reminded me fondly of the many times I’ve gone to the AMNH’s Hayden Planetarium.  At minute 3:51, Amit states,

“So let’s zoom. We start from this one object [the Venus of Berekhat Ram, one of the oldest known objects in the world, made around 233,000 years ago, found in the Golan Heights, and currently residing at the Israel Museum in Jerusalem]. What if we zoomed out and actually tried to experience our own cultural big bang? What might that look like? This is what we deal with on a daily basis at the Cultural Institute — over six million cultural artifacts curated and given to us by institutions, to actually make these connections. You can travel through time, you can understand more about our society through these. You can look at it from the perspective of our planet, and try to see how it might look without borders, if we just organized art and culture. We can also then plot it by time, which obviously, for the data geek in me, is very fascinating. You can spend hours looking at every decade and the contributions in that decade and in those years for art, history and cultures. We would love to spend hours showing you each and every decade, but we don’t have the time right now. So you can go on your phone and actually do it yourself.”

Amit Sood speaks at TED2016 – Dream, February 15–19, 2016, Vancouver Convention Center, Vancouver, Canada. Photo: Bret Hartman / TED

Amit and his co-presenter, Cyril Diagne, a professor of interaction design at ECAL (Lausanne, Switzerland) and an artist in residence at the Google Cultural Institute Lab (Paris, France), gave the audience a demonstration of how these images, brought together on such a massive scale, allow a user to travel through time and across great distances to make connections among the six million objects in the collection. I’ve just downloaded the app for my phone, and I plan to play with it quite a bit!

After being inspired by Amit’s talk, I wanted to find a more literary-themed talk and project. I searched for “library” and found among the top hits a talk entitled Jean-Baptiste Michel + Erez Lieberman Aiden: What we learned from 5 million books, which was given a few years before Amit’s talk.

Apparently, I am most impressed by Google-enabled or Google-sponsored projects, or at least that’s what TED Talks has presented me with! I promise, I am not a loyal devotee of all things Google, but I do recognize their strides in contributing to the availability of information online, and how pretty and user-friendly they’ve made it all. I was not aware of the Ngram Viewer prior to this talk, so at the very least I learned of one more tool available to me. The presenters’ humor alone makes this short talk worth viewing. I enjoyed their discussion of how awesome it would be to read all the published books in the known world in order to form an understanding of a massive corpus, and how absolutely impractical that is! They also poke a little fun at the practice of close reading, which of course has its own merits; but with access to millions of digitized works, distant reading is opening up new means for awesomeness to be had! They make an excellent case for undertaking such large-scale endeavors.
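For readers curious about what the Ngram Viewer computes, its core is just a relative frequency per year. Here is a toy Python sketch under assumed inputs (the counts.csv file, its columns, and the sample word are all hypothetical):

```python
# An ngram-viewer-style computation in miniature: the relative
# frequency of one word per year across a corpus of word counts.
import csv
from collections import defaultdict

hits = defaultdict(int)     # occurrences of the target word, per year
totals = defaultdict(int)   # total word occurrences, per year

with open("counts.csv", newline="") as f:   # hypothetical: year,word,count
    for row in csv.DictReader(f):
        year, word, count = int(row["year"]), row["word"], int(row["count"])
        totals[year] += count
        if word == "telegraph":             # hypothetical sample word
            hits[year] += count

for year in sorted(totals):
    print(year, hits[year] / totals[year])
```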

Many of the digital humanities project ideas I have feel awesome yet terribly impractical, and mostly insurmountable with my current skill set. I’m slowly realizing, though, that many projects are practical, and that #gcpraxis16 is designed to move us from frightened big-data novices into seasoned practitioners. I am looking forward to, and inspired by, the process ahead.

 
