Visualizations of library data have been used to:
• reveal relationships among subject areas for users
• illuminate circulation patterns
• suggest titles for weeding
• analyze citations and map scholarly communications
Each unit of data analyzed can be described as topical, asking “what.”
• What is the number of courses offered in each major and minor?
• What is expended in each subject area?
• What is the size of the physical collection in each subject area?
• What is student enrollment in each area?
• What is the circulation in specific areas for one year?
Libraries, if they are to survive, must rethink their collecting and service strategies in radical and possibly scary ways, and do so sooner rather than later. Anderson predicts that, in the next ten years, the “idea of collection” will be overhauled in favor of “dynamic access to a virtually unlimited flow of information products.” My note: in essence, the fight between Mark Vargas and the Acquisition/Cataloguing people.
The library collection of today is changing, affected by many factors: demand-driven acquisitions, access, streaming media, interdisciplinary coursework, ordering enthusiasm, new areas of study, political pressures, vendor changes, and the individual faculty member following a focused line of research.
Subject librarians may see opportunities in looking more closely at the relatively unexplored “intersection of circulation, interlibrary loan, and holdings.”
Using Visualizations to Address Library Problems
the difference between graphical representations of environments and knowledge visualization, which generates graphical representations of meaningful relationships among retrieved files or objects.
Exhaustive lists of data visualization tools include:
• the DIRT Directory (http://dirtdirectory.org/categories/visualization)
• Kathy Schrock’s educating through infographics (www.schrockguide.net/infographics-as-an-assessment.html)
• the Dataviz list of online tools (www.improving-visualisation.org/case-studies/id=5)
By looking at the data (my note: by visualizing the data), more questions are revealed (p. 771). The visualizations provide greater comprehension than the two-dimensional “flatland” of the spreadsheets, in which valuable questions and insights are lost among the columns and rows of data.
By looking at data visualized in different combinations, library collection development teams can clearly compare important considerations in collection management: expenditures and purchases, circulation, student enrollment, and course hours. Library staff and administrators can make funding decisions or begin dialog based on data free from political pressure or from the influence of the squeakiest wheel in a department.
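As a minimal sketch of the kind of comparison a collection-development team might make before visualizing, the snippet below derives two ratios from the metrics named above. All figures and subject names are invented for illustration.

```python
# A minimal sketch (hypothetical figures) of comparing collection-management
# metrics by subject area before plotting them.
subjects = {
    # subject: (expenditures $, circulation, student enrollment)
    "Biology": (42_000, 3_100, 540),
    "History": (18_000, 2_450, 310),
    "Nursing": (55_000, 1_200, 720),
}

for name, (spend, circ, enrolled) in subjects.items():
    # Cost per checkout and checkouts per enrolled student are two ratios
    # a team might chart side by side when weighing funding decisions.
    print(f"{name}: ${spend / circ:.2f} per checkout, "
          f"{circ / enrolled:.1f} checkouts per student")
```

Seeing the same subjects ranked differently by the two ratios is exactly the kind of question-raising the visualizations above are credited with.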
Bibliographic data analysis with Zotero and NVivo
Bibliographic Analysis for Graduate Students, EDAD 518, Fri/Sat, May 15/16, 2020
This session will not be about qualitative research (QR) only, but rather about a modern, twenty-first-century approach toward the analysis of your literature review in Chapter 2.
However, the computational approach to qualitative research is not much different from the computational approach to quantitative research; you need to be versed in both. Thus, familiarity with NVivo for qualitative research and with SPSS for quantitative research should be pursued by any doctoral student.
Please use this link to install NVivo on your computer. Even if we were not in quarantine and you were able to use the licensed NVivo software on campus, you would most probably have used the trial version anyway, for the convenience of working on your dissertation from home. The trial version is fully functional on your computer for 14 days, so calculate the time you will be using it and mind the date of installation and your subsequent work.
For the purpose of this workshop, please install NVivo on your computer on the morning of Saturday, May 16, so we can work together in NVivo during the day and you can continue using the software for the next two weeks.
Please familiarize yourself with the two articles assigned in the EDAD 815 D2L course content “Practice Research Articles”:
Brosky, D. (2011). Micropolitics in the School: Teacher Leaders’ Use of Political Skill and Influence Tactics. International Journal of Educational Leadership Preparation, 6(1). https://eric.ed.gov/?id=EJ972880
The snapshots are replaced with snapshots from NVivo, version 12, which we will be using in our course and for our dissertations.
Concept of bibliographic data
Bibliographic data is an organized collection of references to published literature, including journal and magazine articles, newspaper articles, conference proceedings, reports, and government and legal publications. Bibliographic data is important for writing the literature review of a research project. This data is usually saved and organized in reference managers like Mendeley or EndNote. NVivo provides the option to import bibliographic data from these tools directly: one can import an EndNote library or a Mendeley library into NVivo. Similar to interview transcripts, one can represent and analyze bibliographic data using NVivo. To start with bibliographic data representation, this section previews the processing of a literature review in NVivo.
Importing bibliographical data
Bibliographic data is imported from Mendeley, EndNote, and other such reference managers or applications that are supported by NVivo. Bibliographic data here refers to material in the form of articles, journals, or conference proceedings. Factors common to all of these data are the author’s name and year of publication. Therefore, NVivo imports and arranges these data with titles built from the author’s name and year of publication. The process of importing bibliographic data is presented in the figures below.
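To make concrete what such an export contains, here is a sketch that parses a minimal RIS file, the plain-text reference format that EndNote and Mendeley can both produce, and pulls out the author and year fields that NVivo uses to title imported records. The parser and the sample record are illustrative only, not NVivo's own import code.

```python
# Sketch: parsing a minimal RIS export to see the author (AU) and year (PY)
# fields that reference managers carry and NVivo titles records by.
def parse_ris(text):
    records, current = [], {}
    for line in text.splitlines():
        # RIS lines look like "TY  - JOUR": a two-letter tag, "  - ", a value.
        if len(line) < 5 or line[2:5] != "  -":
            continue
        tag, value = line[:2], line[6:].strip()
        if tag == "ER":                     # "ER" terminates a record
            records.append(current)
            current = {}
        else:
            current.setdefault(tag, []).append(value)
    return records

sample = """\
TY  - JOUR
AU  - Brosky, D.
PY  - 2011
TI  - Micropolitics in the School
ER  -
"""
recs = parse_ris(sample)
print(recs[0]["AU"], recs[0]["PY"])
```

In practice you never parse this by hand for NVivo; the point is only to show which fields travel with each reference when you import a library.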
Select the appropriate data from the external folder.
Coding strategies for literature review
Coding is the process of identifying important parts or patterns in the sources and organizing them into theme nodes. Sources, in the case of a literature review, include material in the form of PDFs. That means a literature review in NVivo requires grouping information from PDF files into theme nodes. Nodes do not directly create content for the literature review; they simply present ideas to help in framing it. Nodes can be created on the basis of the theme of a study, its results, its major findings, or any other important information from the study. After creating nodes, code the information from each of the articles into its respective nodes.
NVivo allows coding the articles for preparing a literature review. Articles have a tremendous amount of text and information in forms such as graphs and, more importantly, articles are in PDF format. Since NVivo does not allow editing PDF files, apply manual coding in the case of a literature review. There are two strategies for coding articles in NVivo:
Code the text of PDF files into a new Node.
Code the text of PDF file into an existing Node. The procedure of manual coding in literature review is similar to interview transcripts.
Case nodes for articles are created by author name or year of publication.
For example, if there are multiple articles by the same author, create a case node with that author’s name and attach all of the articles to it. In the figure below, five articles by the same author, Mr. Toppings, have been selected together to group into a case node. Preparing case nodes like this makes it effortless to search information by different authors’ opinions when writing the empirical review in the literature.
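The grouping a case node performs can be pictured as filing records under a key, here the author's name. The article titles below are invented stand-ins for the five Toppings articles in the figure.

```python
from collections import defaultdict

# Sketch of what an author-named case node does: collect every article by
# one author under that author's name (titles are invented).
articles = [
    ("Toppings", "Article A"),
    ("Toppings", "Article B"),
    ("Brosky", "Micropolitics in the School"),
]

case_nodes = defaultdict(list)
for author, title in articles:
    case_nodes[author].append(title)

print(case_nodes["Toppings"])   # everything filed under one author
```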
NVivo queries for the literature review
Apart from coding on themes, evidence, authors, or opinions in different articles, run different queries based on the aim of the study. NVivo contains different types of search tools that help to find information in and across different articles. For the purposes of a literature review, this section presents a brief overview of word frequency search, text search, and coding queries in NVivo.
Word frequency in NVivo allows searching for different words in the articles. In the case of a literature review, use word frequency to search for a word. This will help to find what different authors have stated about the word in their articles. Run word frequency on all types of sources and exclude words that are not useful for writing the literature review.
For example, run a word frequency query with a limit of the 100 most frequent words. This will help in assessing whether any of these words provide new information for the literature (figure below).
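What the word-frequency query computes can be sketched in a few lines: tokenize the text, drop the words you have excluded as not useful, and count the rest. The sample text and stopword list are invented; NVivo does this over all imported sources at once.

```python
import re
from collections import Counter

# Sketch of a word-frequency query: the N most frequent words in a body of
# text, with excluded "stop" words removed. Sample text stands in for the
# imported articles.
text = ("Scaffolding supports learning. Scaffolding strategies help "
        "teachers support student learning in the classroom.")

stopwords = {"the", "in", "help"}           # words excluded as not useful
words = [w for w in re.findall(r"[a-z]+", text.lower())
         if w not in stopwords]

top = Counter(words).most_common(5)         # NVivo's limit, e.g. 100 words
print(top)
```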
Text search is a more elaborate tool than word frequency search in NVivo. It allows NVivo to search for a particular phrase or expression in the articles. Also, NVivo gives the opportunity to make a node out of a text search if a particular word, phrase, or expression is found useful for the literature.
For example, conduct a text search query to find the word “scaffolding” in the articles. In this case, NVivo will provide all the words, phrases, and expressions related to this word across all the articles (Figures 8 & 9). The difference between text search and word frequency is that text search generates the texts, sentences, and phrases related to the queried word, rather than just counts.
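The contrast with word frequency can be shown in a short sketch: a text search returns the passages containing the queried word, not a count. The three sample sentences are invented.

```python
import re

# Sketch of a text-search query: return the sentences containing the
# queried word, unlike word frequency, which only counts occurrences.
corpus = [
    "Scaffolding gives students temporary support.",
    "Word frequency only counts occurrences.",
    "Teachers remove the scaffolding as students gain independence.",
]

# Case-insensitive stem match: hits "Scaffolding", "scaffolds", etc.
query = re.compile(r"scaffold\w*", re.IGNORECASE)
hits = [sentence for sentence in corpus if query.search(sentence)]
print(hits)
```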
Apart from text search and word frequency search, NVivo also provides the option of a coding query. In a literature review, a coding query helps to discover the intersection between two nodes. As mentioned previously, nodes contain the information from the articles, and it is possible that two nodes contain similar sets of information. A coding query condenses this information into a two-way table that represents the intersection between the selected nodes.
For example, in the figure below, the researcher has searched the intersection between three nodes, namely academic, psychological, and social, on the basis of three attributes, namely quantitative, qualitative, and mixed research. This coding query is performed to learn which of the selected theme nodes have all types of attributes. The coding matrix in the figure below shows that academic has all three research attributes (quantitative, qualitative, and mixed), whereas psychological has only two (quantitative and mixed).
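The two-way table a coding matrix produces is just a cross-count of theme nodes against attribute values. The coded items below are invented to reproduce the pattern described above (academic intersects all three attributes; psychological only two).

```python
# Sketch of a coding-matrix result: counting intersections of theme nodes
# and attribute values as a two-way table. Coded items are invented.
codings = [  # (theme node, research-type attribute)
    ("academic", "quantitative"), ("academic", "qualitative"),
    ("academic", "mixed"),
    ("psychological", "quantitative"), ("psychological", "mixed"),
    ("social", "qualitative"),
]

matrix = {}
for theme, attr in codings:
    matrix.setdefault(theme, {}).setdefault(attr, 0)
    matrix[theme][attr] += 1

# "academic" intersects all three attributes; "psychological" only two.
print(sorted(matrix["academic"]))
```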
In this way, coding queries help researchers to generate intersections between two or more theme nodes. This also simplifies the patterns in the qualitative data for writing the literature review.
Please do not hesitate to contact me with questions or suggestions before, during, or after our workshop, and with ANY questions and suggestions you may have about your Chapter 2 and, particularly, about your literature review:
Dr. Sivaprakasam and I are developing a microcredentialing system for your class.
The “library” part has several components:
One badge for your ability to use the databases and find reliable scientific information in your field (required)
submit your results in the respective D2L assignment folder. A badge will be issued to you after the assignment is graded
One badge for completing the quiz based on the information from this library instruction (required)
a badge will be issued to you automatically after successful completion of the quiz
One badge for your ability to use social media for serious, reliable scientific research (required)
submit your results in the respective D2L assignment folder. A badge will be issued to you after the assignment is graded
One badge for using the D2L “embedded librarian” widget to contact the librarian with questions regarding your class research (one of two optional)
A badge will be issued to you after your post with your email or any other contact information is submitted
One badge for helping a class peer with their research (one of two optional)
submit your results in the respective D2L assignment folder. A badge will be issued to you after the assignment is graded
Collecting two of the required badges and one of the optional badges lets you earn the superbadge “Mastery of Library Instruction.”
The superbadge brings points toward your final grade.
Once you acquire the badges, Dr. Sivaprakasam will reflect your achievement in D2L Grades.
If you are building a LinkedIn portfolio, here are directions to upload your badges in your LinkedIn account using Badgr:
News and media literacy (and the lack thereof) is not very different from information literacy
An “information literate” student is able to “locate, evaluate, and effectively use information from diverse sources.” See more About Information Literacy.
How does information literacy help me?
Every day we have questions that need answers. Where do we go? Whom can we trust? How can we find information to help ourselves? How can we help our family and friends? How can we learn about the world and be a better citizen? How can we make our voice heard?
Standard 1. The information literate student determines the nature and extent of the information needed
Standard 2. The information literate student accesses needed information effectively and efficiently
Standard 3. The information literate student evaluates information and its sources critically and incorporates selected information into his or her knowledge base and value system
Standard 4. The information literate student, individually or as a member of a group, uses information effectively to accomplish a specific purpose
Standard 5. The information literate student understands many of the economic, legal, and social issues surrounding the use of information and accesses and uses information ethically and legally
Project Information Literacy
A national, longitudinal research study based at the University of Washington’s iSchool, compiling data on how college students seek and use information.
Developing Your Research Topic/Question
Research always starts with a question. But the success of your research also depends on how you formulate that question. If your topic is too broad or too narrow, you may have trouble finding information when you search. When developing your question/topic, consider the following:
Is my question one that is likely to have been researched and for which data have been published? Believe it or not, not every topic has been researched and/or published in the literature.
Be flexible. Consider broadening or narrowing the topic if you are getting a limited number or an overwhelming number of results when you search. In nursing it can be helpful to narrow by thinking about a specific population (gender, age, disease or condition, etc.), intervention, or outcome.
Discuss your topic with your professor and be willing to alter your topic according to the guidance you receive.
Getting Ready for Research
Library Resources vs. the Internet
How (where from) do you receive information about your professional interests?
Advantages/disadvantages of using Web Resources
Evaluating Web Resources
Google or similar; Yahoo, Bing
Reddit, Digg, Quora
Become a member of professional organizations and use their online information
Use the SCSU library page to access online databases
Building Your List of Keywords
Why Keyword Searching?
Why not just type in a phrase or sentence like you do in Google or Yahoo!?
Because most electronic databases store and retrieve information differently than Internet search engines.
A database searches fields within a collection of records. These fields include the information commonly found in a citation, plus an abstract (if available) and subject headings. Search engines search web content, which is typically the full text of sources.
The bottom line: you get better results in a database by using effective keyword search strategies.
To develop an effective search strategy, you need to:
determine the key concepts in your topic and
develop a good list of keyword synonyms.
Why use synonyms?
Because there is more than one way to express a concept or idea. You don’t know if the article you’re looking for uses the same expression for a key concept that you are using.
Consider: Will an author use:
Hypertension or High Blood Pressure?
Teach or Instruct?
Therapy or Treatment?
Don’t get “keyword lock!” Be willing to try a different term as a keyword. If you are having trouble thinking of synonyms, check a thesaurus, dictionary, or reference book for ideas.
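The synonym lists above translate directly into the Boolean search strings most library databases accept: OR joins synonyms within one concept, AND joins the concepts. A small sketch (the helper function is illustrative, not any database's API):

```python
# Sketch: turning keyword synonym lists into a Boolean search string
# (OR within a concept, AND between concepts, quotes around phrases).
def build_query(*concepts):
    groups = []
    for synonyms in concepts:
        # Multi-word terms are quoted so the database treats them as phrases.
        terms = [f'"{t}"' if " " in t else t for t in synonyms]
        groups.append("(" + " OR ".join(terms) + ")")
    return " AND ".join(groups)

query = build_query(["hypertension", "high blood pressure"],
                    ["therapy", "treatment"])
print(query)
```

Pasting a string like this into a database search box finds articles that use either expression for each concept, which is exactly why the synonym list matters.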
How to find the SCSU Library Website
SCSU online databases
SCSU Library Web page
Basic Research Skills
Locating and Defining a Database
Database Searching Overview:
You can search using the SCSU library online databases by choosing:
Identifying a Scholarly Source
CINAHL, MEDLINE, PubMed, Health Source: Consumer Edition, Health Source: Nursing/Academic Edition
Arts & Humanities Citation Index
How do you evaluate a source of information to determine if it is appropriate for academic/scholarly use? There is no set “checklist” to complete, but below are some criteria to consider when you are evaluating a source.
Does the author cite reliable sources?
How does the information compare with that in other works on the topic?
Can you determine if the information has gone through peer-review?
Are there factual, spelling, typographical, or grammatical errors?
Who do you think the authors are trying to reach?
Is the language, vocabulary, style, and tone appropriate for the intended audience?
What are the audience demographics? (age, educational level, etc.)
Are the authors targeting a particular group or segment of society?
Who wrote the information found in the article or on the site?
What are the author’s credentials/qualifications for this particular topic?
Is the author affiliated with a particular organization or institution?
What does that affiliation suggest about the author?
Is the content current?
Does the date of the information directly affect the accuracy or usefulness of the information?
What is the author’s or website’s point of view?
Is the point of view subtle or explicit?
Is the information presented as fact or opinion?
If opinion, is the opinion supported by credible data or informed argument?
Is the information one-sided?
Are alternate views represented?
Does the point of view affect how you view the information?
What is the author’s purpose or objective: to explain, provide new information or news, entertain, persuade, or sell?
Does the purpose affect how you view the information presented?
Copyright and Fair Use
Author Rights and Publishing & Finding Author Instructions for Publishing in Scholarly Journals
Higher education institutions are experiencing radical change, driven by greater accountability, stronger competition, and increased internationalization. They prioritize student success, competitive research, and global reputation. This has significant implications for library strategy, space, structures, partnerships, and identity. Strategic responses include refocusing from collections to users, reorganizing teams and roles, developing partnerships, and demonstrating value. Emphasis on student success and researcher productivity has generated learning commons buildings, converged service models, research data management services, digital scholarship engagement, and rebranding as partners. Repositioning is challenging, with the library no longer perceived as the heart of the campus but institutional leadership often holding traditional perceptions of its role.
The third Library 2.019 mini-conference, “Emerging Technology,” will be held online (and for free) on Wednesday, October 30th, from 12:00–3:00 pm US Pacific Daylight Time (click for your own time zone).
Tomorrow’s technologies are shaping our world today, revolutionizing the way we live and learn. Virtual Reality, Augmented Reality, Artificial Intelligence, Machine Learning, Blockchain, Internet of Things, Drones, Personalization, the Quantified Self. Libraries can and should be the epicenter of exploring, building and promoting these emerging techs, assuring the better futures and opportunities they offer are accessible to everyone. Learn what libraries are doing right now with these cutting-edge technologies, what they’re planning next and how you can implement these ideas in your own organization.
This is a free event, being held live online and also recorded. REGISTER HERE
The type of data: Wikipedia. The dangers of learning from Wikipedia. How individuals can organize to mitigate some of these dangers. Wikidata, algorithms.
IBM Watson, an AI system, uses Wikipedia, with algorithms making sense of it.
YouTube videos of conspiracy theories are debunked by using Wikipedia.
semantic relatedness, Word2Vec
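Word2Vec itself requires a trained model, but the idea it rests on can be sketched in a few lines: words are represented as vectors, and relatedness is measured by cosine similarity between them. The three-dimensional vectors below are invented toy values, not real embeddings.

```python
import math

# Toy sketch of semantic relatedness: words as vectors, relatedness as
# cosine similarity. Real Word2Vec vectors have hundreds of dimensions;
# these three-dimensional ones are invented for illustration.
vectors = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.90],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# "king" comes out far more related to "queen" than to "apple".
print(cosine(vectors["king"], vectors["queen"]) >
      cosine(vectors["king"], vectors["apple"]))
```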
How do the algorithms work: a large body of unstructured text; the algorithm picks specific words.
Lots of AI learns about the world from Wikipedia. The neutral-point-of-view policy: Wikipedia asks editors to present views as proportionally as possible. Wikipedia biases: 1. gender bias (only 20–30% are women).
ConceptNet: debias along different demographic dimensions.
Citation analysis also gives an idea about biases: the localness of sources cited in spatial articles; structural biases.
geolocation on Twitter by County. predicting the people living in urban areas. FB wants to push more local news.
Danger (bias) #3: Wikipedia search results vs. the Wikipedia knowledge panel.
Collective action against tech: Reddit; boycotts of FB and Instagram.
Data labor: the primary resources these companies have are posts, images, reviews, etc.
Boycott vs. data strike (data not being made available for algorithms in the future). The GDPR in the EU applies to all historical data, similar to the CA Consumer Privacy Act. One can do a data strike without a data boycott. General vs. homogeneous (a group with shared identity) boycott.
The Wikipedia SPAM policy is obstructing new editors, and that hits communities such as women.
How to access at different levels; methods and methodological concerns; ethical concerns; legal concerns.
TweetDeck for advanced Twitter searches. Quoting and likes are relevant but not enough; sometimes a screenshot is needed.
Social listening platforms: Crimson Hexagon, Parse.ly, Sysomos. Not yet academic platforms: tools to set up queries and visualization, but the algorithms, data samples, etc. are difficult to inspect. Open-source tools: Urbana's Social Media Macroscope, with SMILE (Social Media Intelligence and Learning Environment) to collect data from Twitter and Reddit; within the platform users can query Twitter and create trend analysis and sentiment analysis. Voxgov (subscription service for analyzing political social media).
Graduate-level and faculty research: accessing SM data at large scale via web scraping & APIs. Twitter APIs; JavaScript, Python, etc. Gnip Firehose API ($); Web Scraper Chrome plugin (an easy tool; Python- and R-based tools also exist); Twint (Twitter scraper).
Facepager (open source), if you are not a Python or R coder: structure and download the data sets.
TAGS: archiving in Google Sheets, using the Twitter API. Anything older than 7 days is not available, so harvest every week.
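Because the standard search API only reaches back about seven days, a TAGS-style archive must harvest at least weekly or data is lost. A small sketch of checking whether that window has been missed (function name and dates are illustrative):

```python
from datetime import date, timedelta

# Sketch: with a ~7-day search window, any gap between harvests longer
# than the window means permanently missed tweets.
def harvest_overdue(last_harvest, today, window_days=7):
    return (today - last_harvest) > timedelta(days=window_days)

# A 9-day gap exceeds the 7-day window; a 6-day gap does not.
print(harvest_overdue(date(2019, 10, 1), date(2019, 10, 10)))
print(harvest_overdue(date(2019, 10, 1), date(2019, 10, 7)))
```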
Social Feed Manager (GW University): Justin Littman, with Stanford. Install on a server, but it allows much more.
Legal concerns: copyright (public info, but not beyond what is copyrighted). The fair-use argument is strong, but you cannot publish the data; you can analyze it under fair use. Contracts supersede copyright (terms of service/use); licensed data through the library.
Methods: sampling concerns (Tufekci, 2014: questions for SM research). SM data is a good set for studying SM itself, but for other fields? Not according to her. Hashtag studies: self-selection bias. Twitter as a model organism: over-represented in academic studies.
Methodological concerns: scope of access (lack of historical data); mechanics of platform and context: retweets are not necessarily endorsements.
Ethical concerns: public info, so IRB requires no informed consent; the right to be forgotten; anonymized data is often still traceable.
Table discussion: digital humanities and journalism are interested, but too narrow. Tools are still difficult to find and operate. Context of the visuals. How to spread across a variety of majors and classes. Controversial events are more likely to be deleted.
Takedowns, lies, and corrosion: what is a librarian to do? Trolls, takedowns.
Development-kit circulation: familiarity with the Oculus Rift resulted in less reservation. A downturn also.
An experience station. clean up free apps.
question: spherical video, video 360.
Safety issues: policies? Instructional perspective: curating. WI people: user testing. Touch controllers are more intuitive than an Xbox controller. Retail Oculus Rift.
Apps: Sketchfab, 3D model viewer; .obj or .stl files. Medium, Tilt Brush.
College of Liberal Arts at the U has their VR, 3D print set up.
Penn State (Paul, librarian; kinesiology, anatomy programs), Information Science and Technology: an immersive-experiences lab for video 360.
CALIPHA: part of it is XR Libraries. Libraries equal education. Content provider LifeLiqe, a STEM library of AR and VR objects. https://www.lifeliqe.com/
counting how many times students use electronic library resources or visit in person, and comparing that to how well the students do in their classes and how likely they are to stay in school and earn a degree. And many library leaders are finding a strong correlation, meaning that students who consume more library materials tend to be more successful academically.
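The "strong correlation" these libraries report is typically a Pearson correlation between per-student library-use counts and a success measure such as GPA. A sketch with invented paired values, computing Pearson's r from its definition:

```python
import math

# Sketch of the comparison described above: library interactions per
# student vs. GPA, measured with Pearson's r. All values are invented.
uses = [2, 5, 9, 14, 20]           # library interactions per student
gpas = [2.4, 2.8, 3.0, 3.3, 3.6]   # matching GPAs

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson_r(uses, gpas), 2))   # strongly positive on this toy data
```

Correlation of this kind is what motivates the changes below, though it does not by itself establish that library use causes the better outcomes.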
carefully tracking how library use compares to other metrics, and it has made changes as a result, like moving the tutoring center and the writing lab into the library. Those moves were designed not only to lure more people into the stacks, but to make seeking help more socially acceptable for students who might have been hesitant.
a partnership between the library, which knows what electronic materials students use, and the technology office, which manages other campus data such as usage of the course-management system. The university is doing a study to see whether library usage there also equates to student success.
Inclusion of 3D Artifacts into a Digital Library: Exploring Technologies and Best Practice Techniques
The IUPUI University Library Center for Digital Scholarship has been digitizing and providing access to community and cultural heritage collections since 2006. Varying formats include: audio, video, photographs, slides, negatives, and text (bound, loose). The library provides access to these collections using CONTENTdm. As 3D technologies become increasingly popular in libraries and museums, IUPUI University Library is exploring the workflows and processes as they relate to 3D artifacts. This presentation will focus on incorporating 3D technologies into an already established digital library of community and cultural heritage collections.