p. 766
Visualizations of library data have been used to:
• reveal relationships among subject areas for users
• illuminate circulation patterns
• suggest titles for weeding
• analyze citations and map scholarly communications
Each unit of data analyzed can be described as topical, asking “what.”6
• What is the number of courses offered in each major and minor?
• What is expended in each subject area?
• What is the size of the physical collection in each subject area?
• What is student enrollment in each area?
• What is the circulation in specific areas for one year?
libraries, if they are to survive, must rethink their collecting and service strategies in radical and possibly scary ways, and must do so sooner rather than later. Anderson predicts that, in the next ten years, the “idea of collection” will be overhauled in favor of “dynamic access to a virtually unlimited flow of information products.” My note: in essence, the fight between Mark Vargas and the Acquisitions/Cataloguing people.
The library collection of today is changing, affected by many factors, such as demand-driven acquisitions, access, streaming media, interdisciplinary coursework, ordering enthusiasm, new areas of study, political pressures, vendor changes, and the individual faculty member following a focused line of research.
subject librarians may see opportunities in looking more closely at the relatively unexplored “intersection of circulation, interlibrary loan, and holdings.”
Using Visualizations to Address Library Problems
the difference between graphical representations of environments and knowledge visualization, which generates graphical representations of meaningful relationships among retrieved files or objects.
Exhaustive lists of data visualization tools include:
• the DIRT Directory (http://dirtdirectory.org/categories/visualization)
• Kathy Schrock’s educating through infographics (www.schrockguide.net/infographics-as-an-assessment.html)
• Dataviz list of online tools (www.improving-visualisation.org/case-studies/id=5)
A video by Eugene O’Loughlin (National College of Ireland) is very helpful for composing the charts: https://youtu.be/4FyImh2G7N0.
p. 771 By looking at the data (my note: by visualizing the data), more questions are revealed. The visualizations provide greater comprehension than the two-dimensional “flatland” of the spreadsheets, in which valuable questions and insights are lost in the columns and rows of data.
By looking at data visualized in different combinations, library collection development teams can clearly compare important considerations in collection management: expenditures and purchases, circulation, student enrollment, and course hours. Library staff and administrators can make funding decisions or begin dialog based on data free from political pressure or from the influence of the squeakiest wheel in a department.
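My note: as a rough sketch of how such a combined view could be built, the following Python/pandas lines merge per-subject measures and chart them together. The file names (circulation.csv, enrollment.csv, expenditures.csv) and their columns are hypothetical, purely for illustration:

# Sketch: combine hypothetical per-subject data sets and plot them side by side.
import pandas as pd
import matplotlib.pyplot as plt

# Each hypothetical CSV has a 'subject' index plus one measure column.
circ = pd.read_csv('circulation.csv', index_col='subject')     # column: checkouts
enroll = pd.read_csv('enrollment.csv', index_col='subject')    # column: students
spend = pd.read_csv('expenditures.csv', index_col='subject')   # column: dollars

# Join the three measures on subject so they can be compared in one table.
combined = circ.join([enroll, spend])
print(combined)

# One small bar chart per measure makes imbalances across subjects easy to spot.
combined.plot(kind='bar', subplots=True, layout=(3, 1), legend=False)
plt.tight_layout()
plt.show()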
What is the shell and what does it do? A language close to the computer; fast.
What is “bash”? Basic commands: cd, ls.
The shell’s job is to be a translator between the user and the binary code – the middleman. There are several types of shells, with slight differences; the one natively installed on macOS and Unix is bash, the Bourne-again shell.
bash commands: cd – change directory; ls – list; ls -F. If a command is unclear, check man ls (the manual for ls); the colon in the lower left corner tells you that you can scroll, and q quits. ls -ltr gives a long listing sorted by modification time, in reverse order.
“Arguments” are colloquially referred to by different names: options, flags, parameters.
cd .. – move up one directory; pwd – print the current working directory; cd data_shell/ – go down one directory.
cd ~ – brings me all the way up to the home directory; ~ expands to $HOME (a universally defined environment variable).
The default behavior of cd (with no argument) is to go to the home directory.
The core shell commands accept many of the same option letters (flags).
$ du -h . – gives me the sizes of the files (human-readable); Ctrl+C to stop.
$ clear – clears the entire screen; scroll up to go back to previous output.
man history; $ history – lists previous commands; !pwd reruns the most recent pwd command; $ history | grep history (piping).
$ cat (and the file name) – prints the file to standard output.
$ cat ../
+++++++++++++++
how to edit and delete files
To create a new folder: $ mkdir – make directory.
Text editors – nano, vim (Unix text editors). $ nano draft.txt; Ctrl+O to save, Ctrl+X to exit.
$ vim – press the Esc key, then on the command line type :wq (write and quit) or just :q (quit).
$ mv draft.txt ../data . (move files)
To remove: $ rm. For a directory: $ rm -r thesis/ (rm alone will not delete a directory); see $ man rm.
To copy files: $ cp. $ touch – touches the file (updates its timestamp), creating it if it is new.
C and C++: used by the instructor for scripting purposes in microbiology. Python’s functionality can be extended with libraries and packages, such as NumPy and SciPy (numeric and scientific Python). What about Python for academic libraries?
Exiting Python: quit(). Python expects the opening and closing parentheses.
A new terminal session is needed after installation (Anaconda 5.0.1).
Python 3 is a complete redesign, not only an update.
Python is object-oriented, and I can define the objects.
pandas creates its own types of objects (which we use to model the data), and those are called DataFrames.
A method is attached to data (an object) that already exists – that is the difference from a standalone function.
data.info() is a method – it is called with parentheses, even when given no arguments,
whereas
data.columns is an attribute – accessed without parentheses.
print(data.T) – transpose of the DataFrame; not easy in Excel, but very easy in Python.
data = pandas.read_csv('/Users/plamen_local/Desktop/data/gapminder_gdp_oceania.csv', index_col='country')
data.loc['Australia'].plot()
plt.xticks(rotation=10)
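My note: put together as a complete, runnable sketch (assuming the Gapminder CSV from the workshop data is in the working directory; adjust the path as needed):

import pandas as pd
import matplotlib.pyplot as plt

# Load the Gapminder GDP data for Oceania, indexing rows by country name.
data = pd.read_csv('gapminder_gdp_oceania.csv', index_col='country')

data.info()            # method: prints a summary of the DataFrame
print(data.columns)    # attribute: the column labels
print(data.T)          # transpose of the DataFrame

# Plot one country's GDP values and tilt the x-axis labels for readability.
data.loc['Australia'].plot()
plt.xticks(rotation=10)
plt.show()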
ggplot2 is the most well-known plotting library (in R).
xelatex is a PDF engine. reST (reStructuredText) is a markup language like Markdown. Google which PDF engine works best with Jupyter.
For loops. Any computer language will have the concept of a “for” loop. In Python: 1. whenever we create a “for” loop, that line must end with a single colon;
2. indentation: anything inside the “for” loop, including any “if” statement, gets indented (see the short example below).
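My note: a minimal example showing both rules (the list of years is made up):

years = [2015, 2016, 2017]        # made-up sample data
for year in years:                # the "for" line ends with a single colon
    if year >= 2016:              # the "if" inside the loop is indented
        print(year, 'is recent')
    else:
        print(year, 'is older')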
large corporations are designed to work with sustaining technologies. They excel at knowing their market, staying close to their customers, and having a mechanism in place to develop existing technology. Conversely, they have trouble capitalizing on the potential efficiencies, cost-savings, or new marketing opportunities created by low-margin disruptive technologies.
10. The Virtualized Library: A Librarian’s Introduction to Docker and Virtual Machines
This session will introduce two major types of virtualization, virtual machines using tools like VirtualBox and Vagrant, and containers using Docker. The relative strengths and drawbacks of the two approaches will be discussed along with plenty of hands-on time. Though geared towards integrating these tools into a development workflow, the workshop should be useful for anyone interested in creating stable and reproducible computing environments, and examples will focus on library-specific tools like Archivematica and EZPaarse. With virtualization taking a lot of the pain out of installing and distributing software, alleviating many cross-platform issues, and becoming increasingly common in library and industry practices, now is a great time to get your feet wet.
(One three-hour session)
11. Digital Empathy: Creating Safe Spaces Online
User research is often focused on measures of the usability of online spaces. We look at search traffic, run card sorting and usability testing activities, and track how users navigate our spaces. Those results inform design decisions through the lens of information architecture. This is important, but doesn’t encompass everything a user needs in a space.
This workshop will focus on the other component of user experience design and user research: how to create spaces where users feel safe. Users bring their anxieties and stressors with them to our online spaces, but informed design choices can help to ameliorate that stress. This will ultimately lead to a more positive interaction between your institution and your users.
The presenters will discuss the theory behind empathetic design, delve deeply into using ethnographic research methods – including an opportunity for attendees to practice those ethnographic skills with student participants – and finish with the practical application of these results to ongoing and future projects.
(One three-hour session)
14. ARIA Basics: Making Your Web Content Sing Accessibility
https://dequeuniversity.com/assets/html/jquery-summit/html5/slides/landmarks.html
Are you a web developer, or do you create web content? Do you add dynamic elements to your pages? If so, you should be concerned with making those dynamic elements accessible and usable to as many people as possible. One of the most powerful tools currently available for making web pages accessible is ARIA, the Accessible Rich Internet Applications specification. This workshop will teach you the basics for leveraging the full power of ARIA to make great accessible web pages. Through several hands-on exercises, participants will come to understand the purpose and power of ARIA and how to apply it for a variety of different dynamic web elements. Topics will include semantic HTML, ARIA landmarks and roles, expanding/collapsing content, and modal dialogs. Participants will also be taught some basic use of the screen reader NVDA for use in accessibility testing. Finally, the lessons will also emphasize learning how to keep on learning as HTML, JavaScript, and ARIA continue to evolve and expand.
Participants will need a basic background in HTML, CSS, and some JavaScript.
(One three-hour session)
18. Learning and Teaching Tech
Tech workshops pose two unique problems: finding skilled instructors for that content, and instructing that content well. Library hosted workshops are often a primary educational resource for solo learners, and many librarians utilize these workshops as a primary outreach platform. Tackling these two issues together often makes the most sense for our limited resources. Whether a programming language or software tool, learning tech to teach tech can be one of the best motivations for learning that tech skill or tool, but equally important is to learn how to teach and present tech well.
This hands-on workshop will guide participants through developing their own learning plan, reviewing essential pedagogy for teaching tech, and crafting a workshop of their choice. Each participant will leave with an actionable learning schedule, a prioritized list of resources to investigate, and an outline of a workshop they would like to teach.
(Two three-hour sessions)
23. Introduction to Omeka S
Omeka S represents a complete rewrite of Omeka Classic (aka the Omeka 2.x series), adhering to our fundamental principles of encouraging use of metadata standards, easy web publishing, and sharing cultural history. New objectives in Omeka S include multisite functionality and increased interaction with other systems. This workshop will compare and contrast Omeka S with Omeka Classic to highlight our emphasis on 1) modern metadata standards, 2) interoperability with other systems including Linked Open Data, 3) use of modern web standards, and 4) web publishing to meet the goals of medium- to large-sized institutions.
In this workshop we will walk through Omeka S Item creation, with emphasis on LoD principles. We will also look at the features of Omeka S that ease metadata input and facilitate project-defined usage and workflows. In accordance with our commitment to interoperability, we will describe how the API for Omeka S can be deployed for data exchange and sharing between many systems. We will also describe how Omeka S promotes multiple site creation from one installation, in the interest of easy publishing with many objects in many contexts, and simplifying the work of IT departments.
(One three-hour session)
24. Getting started with static website generators
Have you been curious about static website generators? Have you been wondering who Jekyll and Hugo are? Then this workshop is for you.
But this article isn’t about setting up a domain name and hosting for your website. It’s for the step after that, the actual making of that site. The typical choice for a lot of people would be to use something like WordPress. It’s a one-click install on most hosting providers, and there’s a gigantic market of plugins and themes available to choose from, depending on the type of site you’re trying to build. But not only is WordPress a bit overkill for most websites, it also gives you a dynamically generated site with a lot of moving parts. If you don’t keep all of those pieces up to date, they can pose a significant security risk and your site could get hijacked.
The alternative would be to have a static website, with nothing dynamically generated on the server side. Just good old HTML and CSS (and perhaps a bit of Javascript for flair). The downside to that option has been that you’ve been relegated to coding the whole thing by hand yourself. It’s doable, but you just want a place to share your work. You shouldn’t have to know all the idiosyncrasies of low-level web design (and the monumental headache of cross-browser compatibility) to do that.
Static website generators are tools used to build a website made up only of HTML, CSS, and JavaScript. Static websites, unlike dynamic sites built with tools like Drupal or WordPress, do not use databases or server-side scripting languages. Static websites have a number of benefits over dynamic sites, including reduced security vulnerabilities, simpler long-term maintenance, and easier preservation.
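My note: to make the idea concrete, here is a minimal Python sketch of what a static site generator does. The folder names (pages/, site/) and the plain-text page format are made up for illustration; this is not how Jekyll or Hugo actually work internally:

# Sketch of a tiny static site generator: wrap each plain-text page in an HTML template.
import pathlib

TEMPLATE = """<!doctype html>
<html><head><title>{title}</title></head>
<body><h1>{title}</h1>{body}</body></html>"""

src = pathlib.Path('pages')   # hypothetical folder of .txt source pages
out = pathlib.Path('site')    # output folder of ready-to-serve HTML
out.mkdir(exist_ok=True)

for page in src.glob('*.txt'):
    text = page.read_text()
    body = '<p>' + text.replace('\n\n', '</p><p>') + '</p>'
    (out / (page.stem + '.html')).write_text(TEMPLATE.format(title=page.stem, body=body))
    print('built', page.stem + '.html')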
In this hands-on workshop, we’ll start by exploring static website generators, their components, some of the different options available, and their benefits and disadvantages. Then, we’ll work on making our own sites, and for those that would like to, get them online with GitHub pages. Familiarity with HTML, git, and command line basics will be helpful but are not required.
(One three-hour session)
26. Using Digital Media for Research and Instruction
To use digital media effectively in both research and instruction, you need to go beyond just the playback of media files. You need to be able to stream the media, divide that stream into different segments, provide descriptive analysis of each segment, order, re-order and compare different segments from the same or different streams and create web sites that can show the result of your analysis. In this workshop, we will use Omeka and several plugins for working with digital media, to show the potential of video streaming, segmentation and descriptive analysis for research and instruction.
(One three-hour session)
28. Spark in the Dark 101 https://zeppelin.apache.org/
This is an introductory session on Apache Spark, a framework for large-scale data processing (https://spark.apache.org/). We will introduce high-level concepts around Spark, including how Spark execution works and its relationship to the other technologies for working with Big Data. Following this introduction to the theory and background, we will walk workshop participants through hands-on usage of spark-shell, Zeppelin notebooks, and Spark SQL for processing library data. The workshop will wrap up with use cases and demos for leveraging Spark within cultural heritage institutions and information organizations, connecting the building blocks learned to current projects in the real world.
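My note: for a flavor of the hands-on portion in Python (PySpark), here is a minimal sketch; the checkouts.csv file and its columns are hypothetical:

# Sketch: load a CSV of library checkouts into Spark and query it with Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('library-demo').getOrCreate()

# Hypothetical file with columns such as: title, branch, checkout_date
df = spark.read.csv('checkouts.csv', header=True, inferSchema=True)
df.createOrReplaceTempView('checkouts')

# Ten most-borrowed titles, computed with plain SQL over the distributed DataFrame.
top = spark.sql(
    'SELECT title, COUNT(*) AS loans FROM checkouts '
    'GROUP BY title ORDER BY loans DESC LIMIT 10')
top.show()
spark.stop()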
(One three-hour session)
29. Introduction to Spotlight https://github.com/projectblacklight/spotlight http://www.spotlighttechnology.com/4-OpenSource.htm
Spotlight is an open source application that extends the digital library ecosystem by providing a means for institutions to reuse digital content in easy-to-produce, attractive, and scholarly-oriented websites. Librarians, curators, and other content experts can build Spotlight exhibits to showcase digital collections using a self-service workflow for selection, arrangement, curation, and presentation.
This workshop will introduce the main features of Spotlight and present examples of Spotlight-built exhibits from the community of adopters. We’ll also describe the technical requirements for adopting Spotlight and highlight the potential to customize and extend Spotlight’s capabilities for their own needs while contributing to its growth as an open source project.
(One three-hour session)
31. Getting Started Visualizing your IoT Data in Tableau https://www.tableau.com/
The Internet of Things is a rising trend in library research. IoT sensors can be used for space assessment, service design, and environmental monitoring. IoT tools create lots of data that can be overwhelming and hard to interpret. Tableau Public (https://public.tableau.com/en-us/s/) is a data visualization tool that allows you to explore this information quickly and intuitively to find new insights.
This full-day workshop will teach you the basics of building your own IoT sensor using a Raspberry Pi (https://www.raspberrypi.org/) in order to gather, manipulate, and visualize your data.
All are welcome, but some familiarity with Python is recommended.
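My note: as a rough sketch of the Python side, readings could be logged to a CSV that Tableau Public can open. The read_temperature() function below is a hypothetical stand-in for whatever sensor library your Raspberry Pi setup uses:

# Sketch: append timestamped sensor readings to a CSV for later visualization in Tableau.
import csv
import random
import time
from datetime import datetime

def read_temperature():
    # Hypothetical stand-in for a real sensor call; returns a fake reading.
    return 20 + random.random() * 5

with open('readings.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    for _ in range(10):                  # ten sample readings
        writer.writerow([datetime.now().isoformat(), read_temperature()])
        time.sleep(1)                    # one reading per second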
(Two three-hour sessions)
32. Enabling Social Media Research and Archiving
Social media data represents a tremendous opportunity for memory institutions of all kinds, be they large academic research libraries or small community archives. Researchers from a broad swath of disciplines have a great deal of interest in working with social media content, but they often lack access to datasets or the technical skills needed to create them. Further, it is clear that social media is already a crucial part of the historical record, in areas ranging from events in your local community to national elections. But attempts to build archives of social media data are largely nascent. This workshop will be both an introduction to collecting data from the APIs of social media platforms, as well as a discussion of the roles of libraries and archives in that collecting.
Assuming no prior experience, the workshop will begin with an explanation of how APIs operate. We will then focus specifically on the Twitter API, as Twitter is of significant interest to researchers and hosts an important segment of discourse. Through a combination of hands-on exercises and demos, we will gain experience with a number of tools that support collecting social media data (e.g., Twarc, Social Feed Manager, DocNow, Twurl, and TAGS), as well as tools that enable sharing social media datasets (e.g., Hydrator, TweetSets, and the Tweet ID Catalog).
The workshop will then turn to a discussion of how to build a successful program enabling social media collecting at your institution. This might cover a variety of topics including outreach to campus researchers, collection development strategies, the relationship between social media archiving and web archiving, and how to get involved with the social media archiving community. This discussion will be framed by a focus on ethical considerations of social media data, including privacy and responsible data sharing.
Time permitting, we will provide a sampling of some approaches to social media data analysis, including Twarc Utils and Jupyter Notebooks.
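My note: as a small taste of that kind of analysis, the Python sketch below counts hashtags in a file of collected tweets. It assumes a hypothetical tweets.jsonl file in the one-JSON-object-per-line format that tools such as twarc produce:

# Sketch: count the most common hashtags in a line-delimited JSON file of tweets.
import json
from collections import Counter

counts = Counter()
with open('tweets.jsonl') as f:
    for line in f:
        tweet = json.loads(line)
        # Twitter's JSON keeps hashtags under entities -> hashtags -> text.
        for tag in tweet.get('entities', {}).get('hashtags', []):
            counts[tag['text'].lower()] += 1

for tag, n in counts.most_common(10):
    print(n, '#' + tag)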
Eaton, M. E. (2017). Seeing Library Data: A Prototype Data Visualization Application for Librarians. Journal of Web Librarianship, 11(1), 69–78. Retrieved from http://academicworks.cuny.edu/kb_pubs
Visualization can increase the power of data, by showing the “patterns, trends and exceptions”
Librarians can benefit when they visually leverage data in support of library projects.
Nathan Yau suggests that exploratory learning is a significant benefit of data visualization initiatives (2013). We can learn about our libraries by tinkering with data. In addition, handling data can also challenge librarians to improve their technical skills. Visualization projects allow librarians to not only learn about their libraries, but to also learn programming and data science skills.
The classic voice on data visualization theory is Edward Tufte. In Envisioning Information, Tufte unequivocally advocates for multi-dimensionality in visualizations. He praises some incredibly complex paper-based visualizations (1990). This discussion suggests that the principles of data visualization are strongly contested. Although Yau’s even-handed approach and Cairo’s willingness to find common ground are laudable, their positions are not authoritative or the only approach to data visualization.
a web application that visualizes the library’s holdings of books and e-books according to certain facets and keywords. Users can visualize whatever topics they want, by selecting keywords and facets that interest them.
Primo X-Services API. JSON, Flask, a very flexible Python web micro-framework. In addition to creating the visualization, SeeCollections also makes this data available on the web. JavaScript is the front-end technology that ultimately presents data to the SeeCollections user. JavaScript is a cornerstone of contemporary web development; a great deal of today’s interactive web content relies upon it. Many popular code libraries have been written for JavaScript. This project draws upon jQuery, Bootstrap and d3.js.
To give SeeCollections a unified visual theme, I have used Bootstrap. Bootstrap is most commonly used to make webpages responsive to different devices
D3.js facilitates the binding of data to the content of a web page, which allows manipulation of the web content based on the underlying data.
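My note: the general shape of such a back end can be sketched in a few lines of Flask. This is only an illustration of the pattern (a JSON endpoint that a d3.js front end can fetch), not Eaton’s actual SeeCollections code; the /api/holdings route and the sample data are made up:

# Sketch: serve holdings counts as JSON for a JavaScript/d3.js front end.
from flask import Flask, jsonify

app = Flask(__name__)

# In the real application this data would come from the Primo X-Services API.
SAMPLE_HOLDINGS = [
    {'subject': 'History', 'books': 1200, 'ebooks': 450},
    {'subject': 'Biology', 'books': 800, 'ebooks': 900},
]

@app.route('/api/holdings')
def holdings():
    return jsonify(SAMPLE_HOLDINGS)

if __name__ == '__main__':
    app.run(debug=True)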
According to the email below, library faculty are asked to provide their feedback regarding the qualifications for a possible faculty line at the library.
In the fall of 2013, during a faculty meeting attended by the then library dean and during a discussion of an article provided by the dean, it was established that leading academic libraries in this country are seeking to break the mold of the “library degree” and seek fresh ideas for the reinvention of the academic library by hiring faculty with more diverse (degree-wise) backgrounds.
Is this still the case at the SCSU library? The “democratic” search for the answer to this question does not yield productive results, considering that the majority of the library faculty are “reference” librarians and they “democratically” overturn the votes of those who want to see this library brought up to 21st-century standards, seeking instead more “reference” bodies for duties which even the same reference librarians have recognized as obsolete.
It seems that the majority of the SCSU library faculty are “purists” when it comes to seeking professionals with a broader background (other than library, or even just “reference,” skills).
In addition, most of the current SCSU librarians are opposed to a second degree – as in acquiring more qualification, versus seeking just another diploma. There is a certain attitude of stagnation / intellectual incest, where new ideas are not generated and old ideas are dressed in “new attire” to look innovative and/or 21st-century.
Last but not least, a consistent complaint about workforce shortages (the attrition politics of the university’s reorganization contribute to the power of such complaints) fuels the requests for reference librarians, and, instead of looking for new ideas, new approaches, and new work responsibilities, the library reorganization conversation deteriorates into squabbles over positions among different departments.
Most importantly, the narrow-sightedness of being stuck in traditional work descriptions prevents most of the librarians from seeing potential allies and disruptors. E.g., the insistence on the supremacy of “information literacy” leads SCSU librarians to the erroneous conclusion of the exceptionality of information literacy and to the disregard of multi[meta] literacies, thus depriving the entire campus of necessary 21st-century skills such as visual literacy, media literacy, technology literacy, etc.
Simultaneously, as mentioned above about potential allies and disruptors, the SCSU librarians insist on their “domain,” and if they are not capable of leading meta-literacies instruction, they will also not allow and/or support others to do so.
Considering the observations above, the following qualifications must be considered:
According to the information in this blog post: https://blog.stcloudstate.edu/ims/2016/06/14/technology-requirements-samples/
for the past year and a half, academic libraries have been hiring specialists with the following qualifications and for the following positions (bolded and/or in red). Here are some highlights: Positions
digital humanities
Librarian and Instructional Technology Liaison
library Specialist: Data Visualization & Collections Analytics
Qualifications
Advanced degree required, preferably in education, educational technology, instructional design, or MLS with an emphasis in instruction and assessment.
Programming skills – demonstrated experience with one or more metadata and scripting languages (e.g., Dublin Core, XSLT, Java, JavaScript, Python, or PHP)
Data visualization skills
multi [ meta] literacy skills
Data curation, helping students working with data
Experience with website creation and design in a CMS environment and accessibility and compliance issues
Demonstrated a high degree of facility with technologies and systems germane to the 21st century library, and be well versed in the issues surrounding scholarly communications and compliance issues (e.g. author identifiers, data sharing software, repositories, among others)
Bilingual
Provides and develops awareness and knowledge related to digital scholarship and research lifecycle for librarians and staff.
Experience developing for, and supporting, common open-source library applications such as Omeka, ArchiveSpace, Dspace,
Responsibilities Establishing best practices for digital humanities labs, networks, and services
Assessing, evaluating, and peer reviewing DH projects and librarians
Actively promote TIGER or GRIC related activities through social networks and other platforms as needed.
Coordinates the transmission of online workshops through Google Hangouts
Script metadata transformations and digital object processing using BASH, Python, and XSLT
liaison consults with faculty and students in a wide range of disciplines on best practices for teaching and using data/statistical software tools such as R, SPSS, Stata, and MatLab.
In response to the form attached to the Friday, September 29, email regarding St. Cloud State University Library Position Request Form:
Title
Digital Initiatives Librarian
Responsibilities:
TBD, but generally:
– works with faculty across campus on promoting digital projects and other 21st-century projects. Works with the English Department faculty on positioning the SCSU library as an equal participant in the digital humanities initiatives on campus
Works with the Visualization lab to establish the library as the leading unit on campus in interpretation of big data
Works with academic technology services on promoting library faculty as the leading force in the pedagogical use of academic technologies.
Quantitative data justification
this is a moot requirement for an innovative and useful library position. It may apply to a traditional request, such as another “reference” librarian, but there cannot be a quantitative data justification for an innovative position, as explained to Keith Ewing in 2015. In order to accumulate such data, the position must have been functioning for at least six months.
Qualitative justification: Please provide qualitative explanation that supports need for this position.
Numerous 21st-century academic tendencies are currently scattered across campus and are the subject of political/power battles rather than a venue for campus collaboration and cooperation. Such a position can seek to establish the library as the natural hub for “sandbox” activities across campus. It can seek to redirect digital initiatives on this campus away from political gains by administrators and move the generation and accomplishment of such initiatives to the rightful owners and primary stakeholders: faculty and students.
Currently, there are no additional facilities and resources required. Existing facilities and resources, such as the visualization lab and open-source and free applications, can be used to generate the momentum of faculty working together toward a common goal, such as digital humanities.
10-week eCourse Beginning Tuesday, September 5, 2017
For today’s librarian, the ability to adapt to new technology is not optional. Programming—the process of using computer language to generate commands that instruct a computer to perform specific functions—is at the core of all computer technology. A foundation in programming helps you understand the inner workings of all of the technologies that drive libraries now—from integrated library systems to Web pages and databases.
In this Advanced eCourse, you can go from having little to no programming knowledge to being familiar with coding in several different computer languages. Steve Perry—an experienced LIS instructor and programmer—will teach you in his lectures what you need to get started, and then the readings and exercises will give you practical programming experience, particularly as it relates to a library environment. Languages covered will include HTML, CSS, JavaScript, PHP, and others. You do not need any programming experience or special software to participate in this eCourse.
Participants who complete this Advanced eCourse will receive an SJSU iSchool/ALA Publishing Advanced Certificate of Completion.
We are looking for a free alternative to Microsoft Access. We have looked at Base which is part of LibreOffice and OpenOffice. However, as far as we can determine, Base does not allow us to import a CSV file into the database as a table. Such a feature would be important to us as we frequently need to import text files.
We would like to be able to query the database using SQL.
Microsoft Access supports Visual Basic for Applications (VBA). We would like a database that works with C#, Java, or JavaScript in the same way.
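My note: one candidate worth testing is SQLite, which is free, speaks SQL, and has drivers for C#, Java, and JavaScript. The Python sketch below (standard library only) illustrates the CSV-import requirement by loading a hypothetical patrons.csv into a SQLite table and querying it with SQL; the resulting library.db file could then be opened from C#, Java, or JavaScript as well:

# Sketch: import a CSV file into a SQLite table and query it with SQL.
import csv
import sqlite3

conn = sqlite3.connect('library.db')

# Hypothetical CSV with a header row: name, department, email
# (assumes the header names are valid SQL identifiers)
with open('patrons.csv', newline='') as f:
    reader = csv.reader(f)
    header = next(reader)
    conn.execute('CREATE TABLE IF NOT EXISTS patrons (%s)' % ', '.join(header))
    conn.executemany(
        'INSERT INTO patrons VALUES (%s)' % ', '.join('?' for _ in header),
        reader)
conn.commit()

# Plain SQL, much like a query in Access.
for row in conn.execute('SELECT department, COUNT(*) FROM patrons GROUP BY department'):
    print(row)
conn.close()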
+++++++++++++++++++++ Defining my interests. Narrowing a topic. How do I collect information? How do I search for information?
How do we search for “serious” information?
Google and Google Scholar
Semantic Scholar; Microsoft Academic Search; Academicindex.net; ProQuest Dialog; Quetzal; arXiv;
Basic electronic (library) search information and strategies. Library research services (5 min)
Using the library database, do a search on a topic of your interest.
Compare the results of your search. Make an attempt to refine the search.
Retrieve the following information about the book of interest: is it relevant to your topic (check the subjects); is it timely (check the publication date); is it available?
Strategies for conducting advanced searches (setting up filters and search criteria)