May
2022
Digital Literacy for St. Cloud State University
Finch, J. F., & Flenner, A. (2016). Using Data Visualization to Examine an Academic Library Collection. College & Research Libraries, 77(6), 765-778.
p. 766
Visualizations of library data have been used to:
• reveal relationships among subject areas for users
• illuminate circulation patterns
• suggest titles for weeding
• analyze citations and map scholarly communications
Each unit of data analyzed can be described as topical, asking “what.”
• What is the number of courses offered in each major and minor?
• What is expended in each subject area?
• What is the size of the physical collection in each subject area?
• What is student enrollment in each area?
• What is the circulation in specific areas for one year?
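Several of these topical questions reduce to a group-and-sum over exported library records. As a minimal sketch (the record format and all numbers are invented for illustration, not taken from the article), the last question, circulation per subject area for one year, could be answered like this:

```python
from collections import defaultdict

# Hypothetical one-year circulation export: (subject_area, checkouts) rows.
# All values are invented for illustration.
records = [
    ("Biology", 120), ("Biology", 85),
    ("History", 60), ("History", 40),
    ("Chemistry", 95),
]

def circulation_by_area(records):
    """Sum checkouts per subject area, ready to feed a bar chart."""
    totals = defaultdict(int)
    for area, checkouts in records:
        totals[area] += checkouts
    return dict(totals)

totals = circulation_by_area(records)
```

The resulting totals dictionary is the shape most charting tools mentioned in the article (Excel, Plotly, d3.js) expect for a simple bar chart of circulation by area.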
libraries, if they are to survive, must rethink their collecting and service strategies in radical and possibly scary ways and to do so sooner rather than later. Anderson predicts that, in the next ten years, the “idea of collection” will be overhauled in favor of “dynamic access to a virtually unlimited flow of information products.” My note: in essence, the fight between Mark Vargas and the Acquisition/Cataloguing people
The library collection of today is changing, affected by many factors, such as demand-driven acquisitions, access, streaming media, interdisciplinary coursework, ordering enthusiasm, new areas of study, political pressures, vendor changes, and the individual faculty member following a focused line of research.
subject librarians may see opportunities in looking more closely at the relatively unexplored “intersection of circulation, interlibrary loan, and holdings.”
Using Visualizations to Address Library Problems
The authors distinguish between graphical representations of environments and knowledge visualization, which generates graphical representations of meaningful relationships among retrieved files or objects.
Exhaustive lists of data visualization tools include:
• the DIRT Directory (http://dirtdirectory.org/categories/visualization)
• Kathy Schrock’s educating through infographics (www.schrockguide.net/infographics-as-an-assessment.html)
• Dataviz list of online tools (www.improving-visualisation.org/case-studies/id=5)
Visualization tools explored for this study include Plotly, Microsoft Excel, the Python programming language, D3.js (a JavaScript library for creating documents based on data), and Tableau Public©.
A tutorial by Eugene O’Loughlin of the National College of Ireland is very helpful in composing the charts: https://youtu.be/4FyImh2G7N0.
p. 771 By looking at the data (my note – by visualizing the data), more questions are revealed. The visualizations provide greater comprehension than the two-dimensional “flatland” of the spreadsheets, in which valuable questions and insights are lost in the columns and rows of data.
By looking at data visualized in different combinations, library collection development teams can clearly compare important considerations in collection management: expenditures and purchases, circulation, student enrollment, and course hours. Library staff and administrators can make funding decisions or begin dialog based on data free from political pressure or from the influence of the squeakiest wheel in a department.
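As a hedged sketch of the side-by-side comparison described above (the subject areas, dollar amounts, and enrollments are invented; the article does not publish its raw data), per-student ratios put expenditures and circulation for differently sized departments on a comparable footing:

```python
# Invented collection-management figures per subject area.
areas = {
    "Biology": {"spend": 12000, "circ": 800, "enroll": 400},
    "History": {"spend": 6000,  "circ": 300, "enroll": 150},
}

def collection_ratios(areas):
    """Normalize spending and circulation by enrollment so small and
    large departments can be compared on the same chart."""
    return {
        name: {
            "spend_per_student": m["spend"] / m["enroll"],
            "circ_per_student": m["circ"] / m["enroll"],
        }
        for name, m in areas.items()
    }

ratios = collection_ratios(areas)
```

On these invented numbers, History spends more per student (40.0 vs. 30.0) even though its absolute budget is half of Biology’s: exactly the sort of pattern a visualization makes immediately visible and raw spreadsheet columns hide.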
+++++++++++++++
more on data visualization for the academic library in this IMS blog
https://blog.stcloudstate.edu/ims?s=data+visualization
Eaton, M. E. (2017). Seeing Library Data: A Prototype Data Visualization Application for Librarians. Journal of Web Librarianship, 11(1), 69–78. Retrieved from http://academicworks.cuny.edu/kb_pubs
Visualization can increase the power of data, by showing the “patterns, trends and exceptions”
Librarians can benefit when they visually leverage data in support of library projects.
Nathan Yau suggests that exploratory learning is a significant benefit of data visualization initiatives (2013). We can learn about our libraries by tinkering with data. In addition, handling data can also challenge librarians to improve their technical skills. Visualization projects allow librarians to not only learn about their libraries, but to also learn programming and data science skills.
The classic voice on data visualization theory is Edward Tufte. In Envisioning Information, Tufte unequivocally advocates for multi-dimensionality in visualizations. He praises some incredibly complex paper-based visualizations (1990). This discussion suggests that the principles of data visualization are strongly contested. Although Yau’s even-handed approach and Cairo’s willingness to find common ground are laudable, their positions are not authoritative or the only approach to data visualization.
a web application that visualizes the library’s holdings of books and e-books according to certain facets and keywords. Users can visualize whatever topics they want, by selecting keywords and facets that interest them.
SeeCollections retrieves data from the Primo X-Services API as JSON and is built with Flask, a very flexible Python web micro-framework. In addition to creating the visualization, SeeCollections also makes this data available on the web. JavaScript is the front-end technology that ultimately presents data to the SeeCollections user. JavaScript is a cornerstone of contemporary web development; a great deal of today’s interactive web content relies upon it. Many popular code libraries have been written for JavaScript. This project draws upon jQuery, Bootstrap and d3.js.
To give SeeCollections a unified visual theme, I have used Bootstrap. Bootstrap is most commonly used to make webpages responsive to different devices
D3.js facilitates the binding of data to the content of a web page, which allows manipulation of the web content based on the underlying data.
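Eaton’s actual SeeCollections code is not reproduced in the article; a minimal sketch of the back-end step it describes, shaping holdings records into the JSON a d3.js data join can bind to, might look like this (the record fields and the function name are invented for illustration):

```python
import json

# Invented holdings records standing in for Primo X-Services API results.
holdings = [
    {"title": "Data Points", "format": "book", "subject": "Statistics"},
    {"title": "Envisioning Information", "format": "book", "subject": "Design"},
    {"title": "The Functional Art", "format": "ebook", "subject": "Statistics"},
]

def facet_counts(holdings, facet):
    """Count holdings per facet value and emit the
    [{"key": ..., "count": ...}] records a d3 data join expects."""
    counts = {}
    for item in holdings:
        counts[item[facet]] = counts.get(item[facet], 0) + 1
    return json.dumps(
        [{"key": k, "count": v} for k, v in sorted(counts.items())]
    )
```

In a Flask app, an endpoint would return this string with the application/json content type for the JavaScript front end to fetch and bind.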
https://events.educause.edu/courses/2019/a-thousand-words-and-a-picture-storytelling-with-data
Part 1: March 13, 2019 | 1:00–2:30 p.m. ET
Part 2: March 20, 2019 | 1:00–2:30 p.m. ET
Part 3: March 27, 2019 | 1:00–2:30 p.m. ET
A picture is worth a thousand words, but developing a data picture worth a thousand words involves careful thought and planning. IT leaders often need to share their story and vision for the future with campus partners and campus leadership, and delivering that message in a compelling way is no small task. This session will take participants through the process of constructing their story: how to (and how not to) incorporate data and anecdotes effectively, how to design clear data visualizations, and how to present their story with confidence.
During this course, participants will:
NOTE: Participants will be asked to complete assignments in between the course segments that support the learning objectives stated below and will receive feedback and constructive critique from course facilitators on how to improve and shape their work.
Leah Lang, Director of Analytics Services, EDUCAUSE
Leah Lang leads EDUCAUSE Analytics Services, a suite of data services, products, and tools that can be used to inform decision-making about IT in higher education. The foundational service in this suite is the EDUCAUSE Core Data Services (CDS), higher education’s comprehensive IT benchmarking data service.
+++++++++++++
more Educause webinars in this IMS blog
https://blog.stcloudstate.edu/ims?s=educause+webinar
Mar 04 – Mar 31, 2019
Delivery Mode : Asynchronous Workshop
Levels : Beginner, Intermediate
Eligible for Online Teaching Certificate elective : No
Data visualization is about presenting data visually so we can explore and identify patterns in the data, analyze and make sense of those patterns, and communicate our findings. In this course, you will explore those key aspects of data visualization, and then focus on the theories, concepts, and skills related to communicating data in effective, engaging, and accessible ways.
This will be a hands-on, project-based course in which you will apply key data visualization strategies to various data sets to tell specific data stories using Microsoft Excel or Google Sheets. Practice data sets will be provided, or you can utilize your own data sets.
Week 1: Introduction and Tool Setup
Week 2: Cognitive Load and Pre-Attentive Attributes
Week 3: Selecting the Appropriate Visualization Type
Week 4: Data Stories and Context
Learning Objectives:
Upon completion of this course, you will be able to create basic data visualizations that are effective, accessible, and engaging. In support of that primary objective, you will:
Prerequisites
Basic knowledge of Microsoft Excel or Google Sheets is required to successfully complete this course. Resources will be included to help you with the basics should you need them, but time spent learning the tools is not included in the estimated time for completing this course.
What are the key takeaways from this course?
Who should take this course?
+++++++++++
more on digital storytelling in this IMS blog
https://blog.stcloudstate.edu/ims?s=digital+storytelling
more on data visualization in this IMS blog
https://blog.stcloudstate.edu/ims?s=data+visualization
Here are things that can help you build a bridge from your current methods to effective data storytelling:
A few bonus tips to make your data visualizations really pop:
Henry Hwangbo http://usblogs.pwc.com/emerging-technology/the-future-of-collaboration-large-scale-visualization/
More data doesn’t automatically lead to better decisions. A shortage of skilled data scientists has hindered progress towards translation of information into actionable business insights. In addition, traditionally dense spreadsheets and linear slideshows are ineffective to present discoveries when dealing with Big Data’s dynamic nature. We need to evolve how we capture, analyze and communicate data.
Large-scale visualization platforms have several advantages over traditional presentation methods. They blur the line between the presenter and audience to increase the level of interactivity and collaboration. They also offer simultaneous views of both macro and micro perspectives, multi-user collaboration and real-time data interaction, and a limitless number of visualization possibilities – critical capabilities for rapidly understanding today’s large data sets.
Visualization walls enable presenters to target people’s preferred learning methods, thus creating a more effective communication tool. The human brain has an amazing ability to quickly glean insights from patterns – and great visualizations make for more efficient storytellers.
Grant: Visualizing Digital Scholarship in Libraries and Learning Spaces
Award amount: $40,000
Funder: Andrew W. Mellon Foundation
Lead institution: North Carolina State University Libraries
Due date: 13 August 2017
Notification date: 15 September 2017
Website: https://immersivescholar.org
Contact: immersivescholar@ncsu.edu
Project Description
NC State University, funded by the Andrew W. Mellon Foundation, invites proposals from institutions interested in participating in a new project for Visualizing Digital Scholarship in Libraries and Learning Spaces. The grant aims to 1) build a community of practice of scholars and librarians who work in large-scale multimedia to help visually immersive scholarly work enter the research lifecycle; and 2) overcome technical and resource barriers that limit the number of scholars and libraries who may produce digital scholarship for visualization environments and the impact of generated knowledge. Libraries and museums have made significant strides in pioneering the use of large-scale visualization technologies for research and learning. However, the utilization, scale, and impact of visualization environments and the scholarship created within them have not reached their fullest potential. A logical next step in the provision of technology-rich, visual academic spaces is to develop best practices and collaborative frameworks that can benefit individual institutions by building economies of scale among collaborators.
The project contains four major elements:
Work Summary
This call solicits proposals for block grants from library or museum systems that have visualization installations. Block grant recipients can utilize funds for ideas ranging from creating open source scholarly content for visualization environments to developing tools and templates to enhance sharing of visualization work. An advisory panel will select four institutions to receive awards of up to $40,000. Block grant recipients will also participate in the initial priority setting workshop and the culminating symposium. Participating in a block grant proposal does not disqualify an individual from later applying for one of the grant-supported scholar-in-residence appointments.
Applicants will provide a statement of work that describes the contributions that their organization will make toward the goals of the grant. Applicants will also provide a budget and budget justification.
Activities that can be funded through block grants include, but are not limited to:
Funding for operational expenditures, such as equipment, is not allowed for any grant participant.
Application
Send an application to immersivescholar@ncsu.edu by the end of the day on 13 August 2017 that includes the following:
Selection and Notification Process
An advisory panel made up of scholars, librarians, and technologists with experience and expertise in large-scale visualization and/or visual scholarship will review and rank proposals. The project leaders are especially keen to receive proposals that develop best practices and collaborative frameworks that can benefit individual institutions by building a community of practice and economies of scale among collaborators.
Awardees will be selected based on:
Awardees will be required to send a representative to an initial meeting of the project cohort in Fall 2017.
Awardees will be notified by 15 September 2017.
If you have any questions, please contact immersivescholar@ncsu.edu.
–Mike Nutt Director of Visualization Services Digital Library Initiatives, NCSU Libraries
919.513.0651 http://www.lib.ncsu.edu/do/visualization
By Rhea Kelly 11/07/16
Researchers at Carnegie Mellon University’s CyLab Security and Privacy Institute have developed a new tool for analyzing network traffic and identifying cyber attacks. The tool uses data visualization to make it easier for network analysts to see key changes and patterns generated by distributed denial of service attacks, malware distribution networks and other malicious network traffic.
The researchers presented the tool last week at the IEEE Symposium on Visualization for Cybersecurity in Baltimore, MD.
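The article does not describe the CMU tool’s internals, but its core idea, making abrupt changes in traffic stand out to an analyst, can be hedged into a tiny sketch: flag time buckets whose request count jumps far above the running average (the threshold, function name, and data are invented for illustration):

```python
def flag_spikes(counts, factor=3.0):
    """Return indices of time buckets whose request count exceeds
    `factor` times the mean of all preceding buckets, i.e. the abrupt
    surges a DDoS visualization is meant to make visible."""
    flagged = []
    for i in range(1, len(counts)):
        baseline = sum(counts[:i]) / i  # running average so far
        if counts[i] > factor * baseline:
            flagged.append(i)
    return flagged
```

A real analyst-facing tool would plot the counts over time and highlight the flagged buckets visually rather than just list their indices.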
+++++++++++++++++++
more on cybersecurity in this IMS blog
https://blog.stcloudstate.edu/ims?s=cybersecurity
http://www.rubedo.com.br/2016/08/38-great-resources-for-learning-data.html
https://www.import.io/post/8-fantastic-examples-of-data-storytelling/
Data storytelling is the realization of great data visualization. We’re seeing data that’s been analyzed well and presented in a way that someone who’s never even heard of data science can get it.
Google’s Cole Nussbaumer provides a friendly reminder of what data storytelling actually is: straightforward, strategic, elegant, and simple.
++++++++++++++++++++++
more on text and data mining in this IMS blog
https://blog.stcloudstate.edu/ims?s=data+mining
LITA announcement. Date: Thursday, June 30, 2016, Time: 10am-11:30am (EDT), Platform: WebEx. Registration required.
a critically important means of uncovering patterns of intellectual practice and usage that have the potential for illuminating facets and perspectives in research and scholarship that might otherwise not be noted. At the same time, challenges exist in terms of project management and support, licensing and other necessary protections.
Confirmed speakers include: Audrey McCulloch, Executive Director, ALPSP; Michael Levine-Clark, Dean of Libraries, University of Denver; Ellen Finnie, Head, Scholarly Communications and Collections Strategies, Massachusetts Institute of Technology; and Jeremy Frey, Professor of Physical Chemistry, Head of Computational Systems Chemistry, University of Southampton, UK.
Audrey McCulloch, Chief Executive, Association of Learned Professional and Society Publishers (ALPSP) and Director of the Publishers Licensing Society
Text and Data Mining: Library Opportunities and Challenges
Michael Levine-Clark, Dean and Director of Libraries, University of Denver
As scholars engage with text and data mining (TDM), libraries have struggled to provide support for projects that are unpredictable and tremendously varied. While TDM can be considered a fair use, in many cases contracts need to be renegotiated and special data sets created by the vendor. The unique nature of TDM projects makes it difficult to plan for them, and often the library and scholar have to figure them out as they go along. This session will explore strategies for libraries to effectively manage TDM, often in partnership with other units on campus and will offer suggestions to improve the process for all.
Michael Levine-Clark, the Dean and Director of the University of Denver Libraries, is the recipient of the 2015 HARRASOWITZ Leadership in Library Acquisitions Award. He writes and speaks regularly on strategies for improving academic library collection development practices, including the use of e-books in academic libraries, the development of demand-driven acquisition models, and implications of discovery tool implementation.
Library licensing approaches in text and data mining access for researchers at MIT
Ellen Finnie, Head, Scholarly Communications & Collections Strategy, MIT Libraries
This talk will address the challenges and successes that the MIT libraries have experienced in providing enabling services that deliver TDM access to MIT researchers, including:
· emphasizing TDM in negotiating contracts for scholarly resources
· defining requirements for licenses for TDM access
· working with information providers to negotiate licenses that work for our researchers
· addressing challenges and retooling to address barriers to success
· offering educational guides and workshops
· managing current needs v. the long-term goal: TDM as a reader’s right
Ellen Finnie is Head, Scholarly Communications & Collections Strategy in the MIT Libraries. She leads the MIT Libraries’ scholarly communications and collections strategy in support of the Libraries’ and MIT’s objectives, including in particular efforts to influence models of scholarly publishing and communication in ways that increase the impact and reach of MIT’s research and scholarship and which promote open, sustainable publishing and access models. She leads outreach efforts to faculty in support of scholarly publication reform and open access activities at MIT, and acts as the Libraries’ chief resource for copyright issues and for content licensing policy and negotiations. In that role, she is involved in negotiating licenses to include text/data mining rights and coordinating researcher access to TDM services for licensed scholarly resources. She has written and spoken widely on digital acquisitions, repositories, licensing, and open access.
Jeremy Frey, Professor of Physical Chemistry, Head of Computational Systems Chemistry, University of Southampton, UK
Text and Data Mining (TDM) facilitates the discovery, selection, structuring, and analysis of large numbers of documents/sets of data, enabling the visualization of results in new ways to support innovation and the development of new knowledge. In both academia and commercial contexts, TDM is increasingly recognized as a means to extract, re-use and leverage additional value from published information, by linking concepts, addressing specific questions, and creating efficiencies. But TDM in practice is not straightforward. TDM methodology and use are fast changing but are not yet matched by the development of enabling policies.
This webinar provides a review of where we are today with TDM, as seen from the perspective of the researcher, library, and licensing-publisher communities.
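As a toy illustration of the first TDM steps Frey describes (structuring and analyzing large numbers of documents), counting term frequencies across a corpus is the classic starting point; the corpus and stopword list below are invented for illustration:

```python
import re
from collections import Counter

# Minimal, invented stopword list for the sketch.
STOPWORDS = frozenset({"the", "of", "and", "a", "in"})

def term_frequencies(documents):
    """Tokenize each document, drop stopwords, and count terms
    across the whole corpus."""
    counts = Counter()
    for doc in documents:
        for token in re.findall(r"[a-z']+", doc.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return counts

corpus = ["The mining of text", "text and data mining"]
freqs = term_frequencies(corpus)
```

Real TDM pipelines add licensing-compliant harvesting, entity linking, and statistics on top of this counting step, which is where the negotiation challenges discussed in the webinar come in.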