We have a zine union catalog project communications plan in a shared document you can comment on. Feedback wanted!
Detail of Kitty Magick Zine from RAHrchive
The last week has mostly seen Greg put the finishing touches on the basics of the front-end. On my side, I’ve been building some queries and fixing minor bugs that I’ve already found. As of now, the system for logging in and out works completely and there’s only a “My Account” button in the nav bar when you’re logged in.
Coming up, I’m going to be taking over most of the work and actually incorporating the back-end database calls with the front-end. I’ve been waiting to do this until Greg finishes the majority of the front-end. Since I’m probably going to need to make changes, I didn’t want to mess with what he was working with.
Not much for me this week. But that’s definitely going to change next week.
Last week, I focused mainly on drafting the invitation letter we will send to the beta testers of end/line, with information about the date and format of the test. This task is not as simple as it might appear, because it required considering many different aspects of the test planning.
I created a list of issues and then discussed them with Michael and Tom, who helped me better define our test. First of all, the testers come from all over the world, so we have to deal with different time zones; this could be a problem if we wanted them all to do the test at the same time. To solve this first issue, we decided to ask them to work independently, so time zones will not affect their participation.
A second issue concerned the form of assistance we will offer during the test. We considered different options, such as a help desk via e-mail, or more interactive support such as a Google Hangout, through which testers could talk directly with a member of the team to resolve problems or issues.
Finally, we also decided to provide some useful tools for their feedback. Since feedback is the most relevant information we need to collect, and since the testers have voluntarily agreed to spend time on our behalf, we will provide forms with a series of questions to answer, along with space for comments. In this way, we can collect more reliable and comparable data to employ in building end/line.
We’re finally approaching the final stretch for the initial prototype layouts. This week saw two upgrades: a list of aesthetic changes and information page changes from the group, and a few more skeletons to be fleshed out for the rest of the site. It’s really coming together and it’s nice to slowly see the initial product take form. I honestly just wish I had a bit more time to really sit down and learn a lot more about advanced development tips but unfortunately, end/line has a deadline.
The aesthetic changes involved a change of fonts. The paragraph and list elements have been changed to a bit more of a readable font (Merriweather Sans) and the header elements have been changed to a similar font to the logo (Space Mono). We chose to avoid using the font included in the logo because we didn’t want to have to deal with buying it. Google Fonts is an incredible repository of available fonts and they even have an easy way to reference them as URL based stylesheets to quickly integrate them.
The page changes saw the index page having an updated main copy via TL, small copy changes to the encoding example, and the example being shifted to the left to help mobile view options and alignment. The about page changes featured a few changes to some of the bios in their respective modals. The news page has now become the area where the blog lives: the posts have been migrated over, and eventually I’d like to integrate a scrollable link feature to keep the article titles up-top and the content below to clean things up.
Regarding the skeletons, I made a small change to the “upload” page for aesthetics, and skeletoned out the search, comparison, general encode, and settings pages. I now have to regroup with Brian and see what he needs to make the functions on the pages work, including the search and main poem pages. I also need to see if there’s a way to have alternate navbars, or to include an element on the right for non-logged-in users, and how to block pages through the routing. We are going to reconvene on this Wednesday, but for now, the aesthetic changes are going to be pushed to the main site soon.
While community management and development work has continued, per plan, on end/line, I’ve started to feel some anxieties about the reception and humanistic purpose of this project. First, as Iuri explains in his post on the subject, not only are there many TEI projects, but a fair number of them are also committed to poetry and poetics. As the humanistic markup standard, TEI underlies many digital editions, such as British Women Romantic Poets, 1789–1832 and the Emily Dickinson Archive. Obviously, end/line will not be a digital edition, but its reliance on TEI may cause some confusion (e.g., “We use TEI to build digital editions, so why does end/line ask us to create accounts and compare encodings?”). Increasingly, however, projects like TAPAS aim to provide platforms for collaborative TEI work. How will end/line operate in this space? Reiterating the pedagogical purpose of end/line and its limited scope, then, seems like something I need to emphasize when working with the community management team.
Second, and relatedly, end/line, regardless of the form it assumes by the end of this semester, needs users. We can build it—and that involves a good deal of time and effort—but others will not necessarily come to use it. We’ve started to generate some interest from individuals in the TEI community, but we’ll need to attend to their concerns and needs as we begin beta testing in a few weeks. Does end/line promote or inhibit the pedagogies of markup, poetry and poetics, and their intersections? As Jojo explained to me, after class on Wednesday, TEI practitioners want more TEI practitioners. In fact, it’s necessary to have more in order to build digital editions and promote the collaborative interchange of texts. If this project allows professors to teach effectively the encoding of poetic texts, then it could fulfill this desire to have more students, humanists, and scholars capable of using markup standards to perform other work.
During this Wednesday’s class, we’ll meet with Kate Singer of Mount Holyoke, and I’m hopeful that her background, teaching both TEI and poetry and poetics, will help us refine our thinking and our communication of that thinking in our scholarly outreach campaign. We’re building a prototype of a platform, now we need to understand, more fully, its humanistic import and how to encourage others to use it.
Over the past week, the community management and development teams have diverged to work more deeply on their responsibilities, with the project director and manager facilitating everything in between. On the community management team, Michael has continued to build our Twitter presence (we decided to favorite and retweet more digital humanists discussing areas that might relate, tangentially, to our work) and has worked on a style guide to ensure consistent communications across our platforms. Iuri, meanwhile, has begun to think about how best to invite interested individuals to beta testing sessions in the coming weeks. This thinking involves determining what we’ll ask from beta testers, how we’ll support them during testing, and how best to collect their feedback after testing. On the development team, Brian has connected the database to the front-end and completed some important login and signup tasks; he has also written an XML validation script for user-submitted encodings. Working on the front-end, Greg has continued to build templates, and hone the CSS, for all pages; feel free to read the commit history for more details on all development work.
Of course, even as these two teams have diverged, there have been some intersections between community management and development work, and Tom, as the project lead, has facilitated these. For example, Tom reviewed the first iteration of endlineproject.org with the community management team. They edited some copy, suggested some styling changes, and recommended that we close our Commons blog and move everything to our “News” section; these changes are now with Greg to incorporate into the next iteration of the site. As we build the site’s core functionalities, meanwhile, the community management team will begin to test the development team’s work. There will be points of convergence, in other words, even as each team operates more deeply in their areas of expertise.
This week, we’ll continue our pre-planned work, but we’ll also take a more humanistic turn when we discuss the project with Kate Singer of Mount Holyoke. We hope that her experience teaching TEI in the classroom can inform the project (and that she can notify us of anything missing or awry at the moment). Furthermore, Tom has gathered a collection of ten English-language poems (from the Renaissance to modernist periods; five written by women, five written by men; and all published before 1923) for the team to practice TEI encoding. Hopefully, this helps us ground the project in its humanistic origins and start populating the site with some texts and TEI.
The zinecat project is moving slowly forward, chugging along so to speak. There are several prominent issues I have encountered that I’d like to review here at the outset.
First, what drew me to the project was the assumed ease of development through the use of Collective Access (CA). While it is undoubtedly easier than building from scratch, it’s considerably more complicated than I anticipated. The primary roadblock is the way in which CA handles metadata mapping. Simply conceptualizing the mapping process required hand-holding from more experienced users. It appears that mapping two XML files to one another is the primary way configuration is achieved, which, to me, seems strange at best. Admittedly, I have never used services similar to the one we are trying to build and don’t know if something more streamlined exists. For the most part, CA is an excellent sandlot for the zine union catalog (ZUC); I simply need more time to acclimate to its inner workings before I can really swim through its interface and mapping configs naturally.
Eric and Milo have really been essential in explaining many important details to me and the rest of the team. Trudging through the CA forums and wiki would have made even a project of relatively small scale (as I would argue zinecat certainly is) painstakingly long. The most apparent example of their contribution was familiarizing the team with metadata mapping.
A Summary of Mapping in CA:
The initial install of CA required a schema profile that tells CA exactly what it will be working with. There are objects, entities, collections, places, and so on, as well as relationships between these elements. For example, the ZineCore profile defines a relationship whereby “objects” are a limited array of possible types (physical zine, digital zine, audio zine, flyer, ephemera) and “entities” are the creators of the objects with which they are associated. The schema profile used for the installation lays the foundation for mapping, and additional rules can be added in the post-installation process according to the needs and wants of the developer. These additions are implemented with a mapping file that provides a layout for how to interpret data sets being uploaded to the database. All subsequent uploads of data sets must then follow the layout that this mapping file establishes. Essentially, the mapping file dictates “column one is the object identifier; column two is the object_type; column three is the entity,” and so on. Subsequent data uploads must then organize their metadata according to these rules, wherein column one is a number identifying a particular item, column two is what type of object it is, and column three is the author/creator.
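The column-to-field idea described above can be sketched in a few lines of JavaScript. This is purely illustrative: it is not Collective Access’s actual mapping format or API, and all names here are made up for the example.

```javascript
// Illustrative only: a simplified stand-in for the column-mapping idea,
// NOT Collective Access's real mapping format or importer API.
const mapping = ['object_id', 'object_type', 'entity']; // column order the rule set dictates

// Apply the mapping to one uploaded row (columns in the agreed order).
function mapRow(row, mapping) {
  const record = {};
  mapping.forEach((field, i) => {
    record[field] = row[i]; // column i becomes the field named mapping[i]
  });
  return record;
}

const row = ['zine-0042', 'physical zine', 'Jane Doe'];
console.log(mapRow(row, mapping));
// → { object_id: 'zine-0042', object_type: 'physical zine', entity: 'Jane Doe' }
```

The point is simply that the mapping file fixes an agreed column order once, and every later upload is interpreted against that same order.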
The end-of-semester objective is for us to have created multiple mapping rule sets for various profiles that libraries are already using, thus allowing them to easily upload their metadata to the ZUC database.
Second, I did not expect to be the sole developer of a project. I welcome the challenge and opportunity to learn. As the last class made clear, one of the essential skills I now have to learn is to manage digital projects in a way that ensures continuity and security of data. I am currently in the process of learning how to create a local clone of the ZUC through Git so that I am able to make changes to the site’s code without fear of irreparable damage. Using Git for this purpose will also make the development process open and visible to be used or adapted for other projects.
I believe this is currently my most important task because its completion cements all the work that has been invested towards building the ZUC. Using Git as a significant part of my data management plan will greatly increase the longevity of this project and will, in turn, ensure that it will continue to be developed and expanded beyond the scope of this semester.
The end-of-March objective is to create a clone of the ZUC as a repository on Git where I can make changes in a safe environment, to create an SSH connection through which I can push successful changes to the live site, and to develop a more comprehensive plan of what role Git will play in the ZUC data management plan.
This introduction to Git and this tutorial on managing a live site through Git are my current resources. I will reach out to Jojo and others for further assistance. My objectives prior to re-evaluating the need for Git implementation were to develop location hierarchies within the ZUC and to begin creating a mapping rule-set for the Denver library metadata standard. I believe that these goals may have to wait until the Git-specific objective is completed unless another team member chooses to prioritize them into their schedule.
While Greg has been working on the front-end of end/line, I’ve been continuing with the back-end. Over the last week, I built the infrastructure for user accounts and logging in and out. I used something called passport.js, which covers setting up sessions and cookies and all that good stuff. I’m waiting until I talk with Greg today before I start to incorporate it into the front-end.
Other than that, I also wrote (what I think will be) the final version of the XML validation script. I tested it against examples of TEI that I found on Wikipedia. As long as there are no other crazy aspects of TEI that I don’t know about, we should be good there. If there are, it’s easy enough to add another pattern.
The script works by testing each tag against three patterns of how XML should be written. These are:
<[word]>
<[word] [word]="[word maybe with numbers]">
<[word] [word]:[word]="[word maybe with numbers]">
If a tag doesn’t follow any of those patterns, it will fail, and the script reports which tag has the error. To do this, I used a JavaScript regular expression. This was the first time I’ve written my own, so that was fun. Once that check passes, the script makes sure there is a matching closing tag; that’s just the typical </[word]> tag.
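A minimal sketch of this kind of check, assuming patterns along the lines listed above (the actual end/line script may use different regular expressions):

```javascript
// Sketch of tag validation against the three open-tag shapes described above.
// Assumed patterns; the real end/line script may differ.
const openPatterns = [
  /^<\w+>$/,              // <word>
  /^<\w+ \w+="\w+">$/,    // <word word="word, maybe with numbers">
  /^<\w+ \w+:\w+="\w+">$/ // <word word:word="word, maybe with numbers">
];

// An opening tag is valid if it matches at least one pattern.
function isValidOpenTag(tag) {
  return openPatterns.some((pattern) => pattern.test(tag));
}

// Closing tags are just </word>.
function isValidCloseTag(tag) {
  return /^<\/\w+>$/.test(tag);
}

console.log(isValidOpenTag('<lg type="stanza">')); // true
console.log(isValidOpenTag('<lg type=stanza>'));   // false: attribute value unquoted
console.log(isValidCloseTag('</lg>'));             // true
```

`<lg>` here is just a familiar TEI element used for illustration; the checker itself knows nothing about TEI vocabulary, only tag shape.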
I’m pretty confident that the script works for every possibility. Now I just need to work with Greg to get it implemented on the site.
After plenty of tutorial time and playing around, I am getting the swing of the EJS integration with the bootstrap pages I’m creating. The focus for this week was two things – the first was taking what I already fleshed out in bootstrap, and putting it into an EJS/Express/Node environment. After a recommendation from Brian and fiddling around with the partials and pages, I was able to get the first part of what I did running. The second focus for this week was having the page up by Sunday at least partially fleshed out. After getting the copy, sample encodings, bios, and logo from the group, I began creating the information pages.
The about page features a one-off paragraph, our logo, and our group members. There’s an FAQ section I currently have commented out in the EJS HTML, but it won’t go live until we flesh out the questions. The logo currently sits right-aligned with the text, and each group member has their own personal modal. We will worry about personal customizations of these modals later; for now, they just have the name, role, and biography provided.
The news page is customized like the regular two-column portfolio style featured on the index page. Each row features two news stories in reverse chronological order. Each new news story will have to be added manually, though eventually we could write a script to handle additions by drawing from the blog on the Commons. As for the contact page, it’s a quick message that features our email and Twitter account.
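As a hypothetical sketch of that layout logic, posts could be sorted newest-first and chunked into two-story rows before being handed to the template. All names and dates here are illustrative, not the actual site code:

```javascript
// Illustrative sketch: group news posts into two-column rows, newest first.
// Not the actual end/line news-page code; names and dates are invented.
function buildNewsRows(posts) {
  const sorted = [...posts].sort((a, b) => new Date(b.date) - new Date(a.date));
  const rows = [];
  for (let i = 0; i < sorted.length; i += 2) {
    rows.push(sorted.slice(i, i + 2)); // each row holds up to two stories
  }
  return rows;
}

const posts = [
  { title: 'Beta testing plans', date: '2017-03-20' },
  { title: 'First prototype',    date: '2017-03-06' },
  { title: 'Fonts and pages',    date: '2017-03-13' },
];
console.log(buildNewsRows(posts).map((row) => row.map((p) => p.title)));
// → [ [ 'Beta testing plans', 'Fonts and pages' ], [ 'First prototype' ] ]
```

A template would then render each inner array as one two-column row, with an odd post count leaving a half-filled final row.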
Two of the larger items that had to be fleshed out from scratch were the terms of service and the privacy policy. These are important because we need to remove ourselves from any liability arising from the platform; this is a web app, not just an information page. Both were initially drawn from Hypothesis’s and modified to fit our platform. After the group checked them over, they now sit at the bottom as footer items and appear as a notification in the sign-up modal.
I’m feeling much better about this project now that we have something up and running (even if it’s just information). The next steps for me are not only playing around with the styles per the group’s recommendations, but also creating skeletons of the user pages. The non-logged-in views have been created; now it’s time to create the pages for the actual platform.
Firstly, this past week our team has been working on reinstalling Collective Access in order to set up the back-end of our online platform. This time, the process was fully successful.
Secondly, we managed to upload part of the first metadata set from the Queer Zine Archive Project (QZAP). Even though we will continue adding data as we have more time to work on it, we decided to begin with a small batch of twenty-five cataloged zines just to familiarize ourselves with the importing procedure and make it visible on the site.
Thirdly, the team also learned how to map the .xml metadata we had at our disposal from the xZINECOREx GitHub repository, since CA requires every user to run a mapping .xlsx file before uploading any datasets. The process of matching our metadata with the appropriate map was complex, to say the least. Thankfully, Milo Miller, co-founder of QZAP, gave us a hand with this issue.
The first prototype for the landing page of the Zine Union Catalog site can be found here. From there, one can access both the Zine Union Catalog Facebook page and its Twitter account with 226 followers. We decided to make available a link to the ZUC Google Site as well, in case someone might find that information helpful when developing their own projects in the future.
Earlier this week the team also completed the “About” page with general information about the ZUC project and about each of its members.
Going forward, our goal is to continue adding metadata catalogs and apply certain changes to the front-end to make the information more easily accessible and visually appealing.