Searching for "java script"

academic library collection data visualization

Finch, J. F., & Flenner, A. (2016). Using Data Visualization to Examine an Academic Library Collection. College & Research Libraries, 77(6), 765-778.

http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dllf%26AN%3d119891576%26site%3dehost-live%26scope%3dsite

p. 766
Visualizations of library data have been used to:
• reveal relationships among subject areas for users
• illuminate circulation patterns
• suggest titles for weeding
• analyze citations and map scholarly communications

Each unit of data analyzed can be described as topical, asking “what.”6
• What is the number of courses offered in each major and minor?
• What is expended in each subject area?
• What is the size of the physical collection in each subject area?
• What is student enrollment in each area?
• What is the circulation in specific areas for one year?

libraries, if they are to survive, must rethink their collecting and service strategies in radical and possibly scary ways and to do so sooner rather than later. Anderson predicts that, in the next ten years, the “idea of collection” will be overhauled in favor of “dynamic access to a virtually unlimited flow of information products.”  My note: in essence, the fight between Mark Vargas and the Acquisition/Cataloguing people

The library collection of today is changing, affected by many factors, such as demand-driven acquisitions, access, streaming media, interdisciplinary coursework, ordering enthusiasm, new areas of study, political pressures, vendor changes, and the individual faculty member following a focused line of research.

subject librarians may see opportunities in looking more closely at the relatively unexplored “intersection of circulation, interlibrary loan, and holdings.”

Using Visualizations to Address Library Problems

the difference between graphical representations of environments and knowledge visualization, which generates graphical representations of meaningful relationships among retrieved files or objects.

Exhaustive lists of data visualization tools include:
• the DIRT Directory (http://dirtdirectory.org/categories/visualization)
• Kathy Schrock’s educating through infographics (www.schrockguide.net/infographics-as-an-assessment.html)
• Dataviz list of online tools (www.improving-visualisation.org/case-studies/id=5)

Visualization tools explored for this study include Plotly, Microsoft Excel, the Python programming language, D3.js (a JavaScript library for creating documents based on data), and Tableau Public©.

A video by Eugene O’Loughlin, National College of Ireland, is very helpful for composing the charts and is found here: https://youtu.be/4FyImh2G7N0.

p. 771 By looking at the data (my note – by visualizing the data), more questions are revealed. The visualizations provide greater comprehension than the two-dimensional “flatland” of the spreadsheets, in which valuable questions and insights are lost in the columns and rows of data.

By looking at data visualized in different combinations, library collection development teams can clearly compare important considerations in collection management: expenditures and purchases, circulation, student enrollment, and course hours. Library staff and administrators can make funding decisions or begin dialog based on data free from political pressure or from the influence of the squeakiest wheel in a department.
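My note: a minimal sketch of that kind of side-by-side comparison (the subject areas and figures below are invented for illustration; the article itself worked with tools such as Excel, Plotly, Python, and D3.js):

import matplotlib.pyplot as plt
import numpy as np

# hypothetical subject-area figures standing in for real expenditure/circulation/enrollment data
subjects = ['History', 'Biology', 'Music', 'Nursing']
expenditures = [12000, 18000, 6000, 15000]   # dollars spent per subject area
circulation = [3400, 2100, 900, 2800]        # checkouts per year
enrollment = [450, 620, 180, 540]            # students enrolled per area

x = np.arange(len(subjects))
width = 0.25
fig, ax = plt.subplots()
ax.bar(x - width, expenditures, width, label='Expenditures ($)')
ax.bar(x, circulation, width, label='Circulation')
ax.bar(x + width, enrollment, width, label='Enrollment')
ax.set_xticks(x)
ax.set_xticklabels(subjects)
ax.set_title('Collection measures by subject area (invented data)')
ax.legend()
plt.show()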

+++++++++++++++
more on data visualization for the academic library in this IMS blog
https://blog.stcloudstate.edu/ims?s=data+visualization

Software Carpentry Workshop

Minnesota State University Moorhead – Software Carpentry Workshop

https://www.eventbrite.com/e/minnesota-state-university-moorhead-software-carpentry-workshop-registration-38516119751

Reservation code: 680510823  Reservation for: Plamen Miltenoff

Hagen Hall – 600 11th St S – Room 207 – Moorhead

pad.software-carpentry.org/2017-10-27-Moorhead

http://www.datacarpentry.org/lessons/

https://software-carpentry.org/lessons/

++++++++++++++++

Friday

Instructors: Jeff – certified; Bash, Python – and John

http://bit.do/msum_swc

https://ntmoore.github.io/2017-10-27-Moorhead/

What is the shell and what does it do? A language close to the computer; fast.

What is “bash”? Basic commands: cd, ls

The shell’s job is to be a translator between you and the binary code – a middleman. There are several types of shells, with slight differences; the one natively installed on Mac and Unix is bash, the Bourne-again shell.

bash commands: cd – change directory; ls – list; ls -F. If it does not work: man ls (the manual for ls); the colon in the lower left corner tells you that you can scroll; q to quit; ls -ltr

“Arguments” are colloquially referred to by different names: options, flags, parameters.

cd .. – move up one directory; pwd – print the current working directory; cd data_shell/ – go down one directory

cd ~ – brings me all the way up to the home directory; $HOME (a universally defined variable)

The default behavior of cd, with no argument, is to go to the home directory.

The core shell commands accept the same kinds of options (single letters).

$ du -h . – gives me the size of the files (human-readable); Ctrl+C to stop

$ clear – clears the entire screen; scroll up to go back to previous commands

$ man history; $ history; ! re-runs a command from the history (e.g. !pwd re-runs the last pwd); $ history | grep history (piping)

$ cat <filename> – prints the file to standard output

$ cat ../

+++++++++++++++
how to edit and delete files

to create a new folder: $ mkdir <directory-name> – make directory

text editors – nano, vim (Unix text editors). $ nano draft.txt – Ctrl+O to save, Ctrl+X to exit.
$ vim – press the Esc key, then in the command line type :wq (write and quit) or just :q

$ mv draft.txt ../data (move files)

to remove: $ rm thesis/ (for details: $ man rm)

copy files: $ cp <source> <destination>; $ touch <file> – touches the file, creates it if new

remove: $ rm – be careful, anything run with sudo is dangerous. Bash profile: cp -i (interactive – prompts before overwriting)

* – wildcard, stands in for any (truncated) part of a name; $ ls analyzed (lists the analyzed directory)

Stack Overflow website

+++++++++++++++++

head command: $ head basilisk.dat (check only the first several lines of a large file)

$ for filename in basilisk.dat unicorn.dat     (making a loop – multiline)
> do     (expecting an action)
> head -n 3 $filename     (-n sets the number of lines; 3 means display the first three lines of each file)
> done

for doing repetitive functions

also

$ for filename in *.dat ; do head -n 3 $filename ; done

$ for filename in *.dat ; do echo $filename ; head -n 3 $filename ; done

$ echo $filename (print statement)

how to loop

$ for filename in *.dat ; do echo $filename ; echo head -n 3 $filename ; done     (putting echo in front of head prints the command instead of running it – a way to preview what the loop will do)

Ctrl+C (or Cmd+. on a Mac) to get out of the loop

http://swcarpentry.github.io/shell-novice/02-filedir/

also

$ for filename in *.dat
> do
> echo $filename
> head -n 10 $filename | tail -n 20     (take the first ten lines, then the last twenty lines of that)
> done

$ for filename in *.dat
> do
> echo $filename
> done

$ for filename in *.dat
> do
> cp $filename orig_$filename
> done

$ history > something.else     (redirect the history output into a file)

$ head something.else

+++++++++++++

another function: word count

$ wc *.pdb     (word count; .pdb – protein databank files)

$ head cubane.pdb

If I don’t know how to read the output: $ man wc

the difference between “*” and “?”: * matches any number of characters, ? matches exactly one character

$ wc -l *.pdb

$ wc -l *.pdb > lengths.txt

$ cat lengths.txt

$ for fil in *.txt
> do
> wc -l $fil
> done

By putting a $ sign in front of a name, the shell uses that variable’s value rather than the literal text.

++++++++++++

$ nano middle.sh – the entire point of the shell is to automate

$ bash middle.sh – run the program (the executable) middle.sh

rwx – rwx – rwx (owner – group – anybody): read, write, execute permissions

bash middle.sh

$ file middle.sh

$PATH – the search path variable

$ echo $PATH | tr ":" "\n"

/usr/local/bin

/usr/bin

/bin

/usr/sbin

/sbin

/Applications/VMware Fusion.app/Contents/Public

/usr/local/munki

$ export PATH=$PWD:$PATH

(this is to make sure that the latest version of Python is the one that runs)

$ ls ~     (list the home directory)
$ ls -a ~     (also include hidden files)

$ touch .bash_profile .bashrc

$ history | grep PATH

   19   echo $PATH

   44  echo #PATH | tr “:” “\n”

   45   echo $PATH | tr “:” “\n”

   46   export PATH=$PWD:$PATH

   47  echo #PATH | tr “:” “\n”

   48   echo #PATH | tr “:” “\n”

   55  history | grep PATH

 

wc -l "$@" | sort -n     ("$@" encompasses everything – it will process every single file in the list of arguments)

 

$ chmod +x middle.sh (make it executable)

 

$ find . -type d     (find only directories, recursively)

$ find . -type f (files, instead of directories)

$ find . -name '*.txt'     (find files by name; don't forget the single quotes)

$ wc -l $(find . -name '*.txt')     – when searching among directories on different levels

$ find . -name '*.txt' | xargs wc -l     – same as above; two ways to do one and the same thing

+++++++++++++++++++

Saturday

Python

Link to the Python Plotting : https://swcarpentry.github.io/python-novice-gapminder

The instructor’s background is C and C++; he uses Python for scripting purposes in microbiology. Libraries and packages alongside Python can extend its functionality: NumPy and SciPy (numeric and scientific Python). Python for academic libraries?

Getting out of Python: quit() – Python expects the opening and closing parentheses.

A new terminal is needed after installation. Anaconda 5.0.1.

Python 3 is a complete redesign, not just an update.

http://swcarpentry.github.io/python-novice-gapminder/setup/

Jupyter crashes in Safari; open it in Chrome instead (possibly a rendering-engine issue).

https://swcarpentry.github.io/python-novice-gapminder/01-run-quit/

To start Python in the terminal: $ python

>>> variable = 3

>>> variable + 10

several data types.

Jupyter notebooks are stored in JSON format.

Command mode vs. edit mode. A code cell is the gray box; a text cell is plain text.

Markdown syntax – the format used when working with Git and GitHub. Search for an explanation in https://swcarpentry.github.io/python-novice-gapminder/01-run-quit/

hackMD https://hackmd.io/ (use your GitHub account)

Pandoc – translates between different document formats. https://pandoc.org/

print is a function

In what cases would I run my data through Python instead of SPSS?

Python is a 0-based language; it starts counting with 0 – like Java, C, P…

atom_name = 'helium '
print(atom_name[0])                  (string slicing and indexing is tricky)

atom_name = 'helium '
print(atom_name[0:6])
vs
atom_name = 'helium '
print(atom_name[7])                (index 7 is past the end of 'helium ' – Python raises an IndexError rather than guessing how to slice it)
The syntax of a Python slice is start : end : step (count by).
String versus list: a string is in single quotes, a list will have brackets.
Strings allow me to work with more than single values – for example, reverse the string:
atom_name = 'helium lithium beryllium'
print(atom_name[::-1])
muillyreb muihtil muileh                (the output: the string reversed)
atom_name = 'helium'
len(atom_name)                       (6 – and note that names are case sensitive: Atom_name would be a different variable)
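A minimal sketch pulling the slicing rules above together (start : end : step), using the same example string; the index comments assume 0-based counting:

atom_name = 'helium lithium beryllium'
print(atom_name[0:6])      # 'helium'  – from index 0 up to, but not including, index 6
print(atom_name[7:14])     # 'lithium' – the second word starts at index 7
print(atom_name[::2])      # every second character from start to end
print(atom_name[::-1])     # step -1 walks backwards: the whole string reversed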
to clean the memory, restart the kernel
Objects in Python have different types; a value adopts a class – the class is inherent in its definition.
print(type('42'))      (Python tells me that it is a string)
print(type(42))        (tells me that it is an integer, int)
LaTeX
To combine an integer and a letter: print(str(1) + 'A')
Converting a string to an integer: print(1 + int('55'))     (operands must all be the same type)
translation table. numerical representation of a string
float
print('half is', 1 / 2.0)
Built-in functions and help.
print is a function; length (len) is a function; also type, str, int, max, round, …
Python does not explain well why the code breaks.
ASCII character set – built-in Python conversion.
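A minimal sketch of that built-in conversion, using ord() and chr():

print(ord('A'))             # 65 – the numeric code point for 'A'
print(chr(65))              # 'A' – the character for code point 65
print(chr(ord('A') + 1))    # 'B'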
libraries – package: https://swcarpentry.github.io/python-novice-gapminder/06-libraries/
function “import”
Saturday afternoon
reading .CSV in Python
http://swcarpentry.github.io/python-novice-gapminder/files/python-novice-gapminder-data.zip
**For windows users only: set up git https://swcarpentry.github.io/workshop-template/#git 
Python is object-oriented, and I can define my own objects.
Python libraries create their own types of objects (which model our data); pandas’ tabular type is called a “DataFrame.”
A method is attached to data that already exists (an object) – that is the difference from a standalone function.
data.info() is a method – called with parentheses, and here it takes no arguments –
whereas
data.columns is an attribute: data that already lives on the object, accessed without parentheses.
print(data.T)     (transpose – not easy in Excel, but very easy in Python)
print(data.describe())
/Users/plamen_local/anaconda3/lib/python3.6/site-packages/pandas/__init__.py
%matplotlib inline – telling the Jupyter notebook to show plots inline

import pandas
import matplotlib.pyplot as plt   # needed for plt.xticks below

data = pandas.read_csv('/Users/plamen_local/Desktop/data/gapminder_gdp_oceania.csv', index_col='country')
data.loc['Australia'].plot()
plt.xticks(rotation=10)
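A minimal follow-up sketch (same CSV path as above) that gathers the earlier DataFrame-inspection notes – info(), columns, T, describe() – into one runnable cell:

import pandas

data = pandas.read_csv('/Users/plamen_local/Desktop/data/gapminder_gdp_oceania.csv', index_col='country')
data.info()                # method: prints a summary of columns, dtypes, and memory use
print(data.columns)        # attribute: the column labels
print(data.T)              # the transposed DataFrame – countries become columns
print(data.describe())     # method: summary statistics for each numeric column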

ggplot2 is the most well-known plotting library (in R).

XeLaTeX is a PDF engine. reST (reStructuredText) is a markup like Markdown. Google what the best PDF engine to use with Jupyter is.

For loops. Any computer language will have the concept of a “for” loop. In Python: 1. whenever we create a “for” loop, that line must end with a single colon;

2. indentation – the body of the loop, including any “if” statement inside the “for” loop, gets indented.
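A minimal sketch of those two rules – the colon at the end of the “for” line and the indented body with an “if” inside; the list of gas names is just an illustration:

gases = ['helium', 'neon', 'argon', 'krypton']
for gas in gases:               # rule 1: the "for" line ends with a colon
    if gas.startswith('k'):     # rule 2: the body, including the "if", is indented
        print(gas, 'starts with k')
    else:
        print(gas)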

Is disruption positive or negative?

Disruptive Technology Definition | Investopedia

Disruptive innovation – Wikipedia

What is disruptive technology? – Definition from WhatIs.com

large corporations are designed to work with sustaining technologies. They excel at knowing their market, staying close to their customers, and having a mechanism in place to develop existing technology. Conversely, they have trouble capitalizing on the potential efficiencies, cost-savings, or new marketing opportunities created by low-margin disruptive technologies.

http://whatis.techtarget.com/definition/disruptive-technology
disruptive technologies

++++++++++++

5 Top Technologies for Digital Disruption

+++++++++++

Network technology, disruptive innovation and the future from Mark Smithers

+++++++++++

Disruptive Innovation in Higher Education (full course slides) from City Vision University

++++++++++
more on disruptive technologies in this IMS blog
https://blog.stcloudstate.edu/ims?s=disruptive+technologies

code4lib 2018

Code4Lib February 2018

http://2018.code4lib.org/

2018 Preconference Voting

10. The Virtualized Library: A Librarian’s Introduction to Docker and Virtual Machines
This session will introduce two major types of virtualization, virtual machines using tools like VirtualBox and Vagrant, and containers using Docker. The relative strengths and drawbacks of the two approaches will be discussed along with plenty of hands-on time. Though geared towards integrating these tools into a development workflow, the workshop should be useful for anyone interested in creating stable and reproducible computing environments, and examples will focus on library-specific tools like Archivematica and EZPaarse. With virtualization taking a lot of the pain out of installing and distributing software, alleviating many cross-platform issues, and becoming increasingly common in library and industry practices, now is a great time to get your feet wet.

(One three-hour session)

11. Digital Empathy: Creating Safe Spaces Online
User research is often focused on measures of the usability of online spaces. We look at search traffic, run card sorting and usability testing activities, and track how users navigate our spaces. Those results inform design decisions through the lens of information architecture. This is important, but doesn’t encompass everything a user needs in a space.

This workshop will focus on the other component of user experience design and user research: how to create spaces where users feel safe. Users bring their anxieties and stressors with them to our online spaces, but informed design choices can help to ameliorate that stress. This will ultimately lead to a more positive interaction between your institution and your users.

The presenters will discuss the theory behind empathetic design, delve deeply into using ethnographic research methods – including an opportunity for attendees to practice those ethnographic skills with student participants – and finish with the practical application of these results to ongoing and future projects.

(One three-hour session)

14. ARIA Basics: Making Your Web Content Sing Accessibility

https://dequeuniversity.com/assets/html/jquery-summit/html5/slides/landmarks.html
Are you a web developer or create web content? Do you add dynamic elements to your pages? If so, you should be concerned with making those dynamic elements accessible and usable to as many as possible. One of the most powerful tools currently available for making web pages accessible is ARIA, the Accessible Rich Internet Applications specification. This workshop will teach you the basics for leveraging the full power of ARIA to make great accessible web pages. Through several hands-on exercises, participants will come to understand the purpose and power of ARIA and how to apply it for a variety of different dynamic web elements. Topics will include semantic HTML, ARIA landmarks and roles, expanding/collapsing content, and modal dialog. Participants will also be taught some basic use of the screen reader NVDA for use in accessibility testing. Finally, the lessons will also emphasize learning how to keep on learning as HTML, JavaScript, and ARIA continue to evolve and expand.

Participants will need a basic background in HTML, CSS, and some JavaScript.

(One three-hour session)

18. Learning and Teaching Tech
Tech workshops pose two unique problems: finding skilled instructors for that content, and instructing that content well. Library hosted workshops are often a primary educational resource for solo learners, and many librarians utilize these workshops as a primary outreach platform. Tackling these two issues together often makes the most sense for our limited resources. Whether a programming language or software tool, learning tech to teach tech can be one of the best motivations for learning that tech skill or tool, but equally important is to learn how to teach and present tech well.

This hands-on workshop will guide participants through developing their own learning plan, reviewing essential pedagogy for teaching tech, and crafting a workshop of their choice. Each participant will leave with an actionable learning schedule, a prioritized list of resources to investigate, and an outline of a workshop they would like to teach.

(Two three-hour sessions)

23. Introduction to Omeka S
Omeka S represents a complete rewrite of Omeka Classic (aka the Omeka 2.x series), adhering to our fundamental principles of encouraging use of metadata standards, easy web publishing, and sharing cultural history. New objectives in Omeka S include multisite functionality and increased interaction with other systems. This workshop will compare and contrast Omeka S with Omeka Classic to highlight our emphasis on 1) modern metadata standards, 2) interoperability with other systems including Linked Open Data, 3) use of modern web standards, and 4) web publishing to meet the goals medium- to large-sized institutions.

In this workshop we will walk through Omeka S Item creation, with emphasis on LoD principles. We will also look at the features of Omeka S that ease metadata input and facilitate project-defined usage and workflows. In accordance with our commitment to interoperability, we will describe how the API for Omeka S can be deployed for data exchange and sharing between many systems. We will also describe how Omeka S promotes multiple site creation from one installation, in the interest of easy publishing with many objects in many contexts, and simplifying the work of IT departments.

(One three-hour session)

24. Getting started with static website generators
Have you been curious about static website generators? Have you been wondering who Jekyll and Hugo are? Then this workshop is for you

My note: https://opensource.com/article/17/5/hugo-vs-jekyll

But this article isn’t about setting up a domain name and hosting for your website. It’s for the step after that, the actual making of that site. The typical choice for a lot of people would be to use something like WordPress. It’s a one-click install on most hosting providers, and there’s a gigantic market of plugins and themes available to choose from, depending on the type of site you’re trying to build. But not only is WordPress a bit overkill for most websites, it also gives you a dynamically generated site with a lot of moving parts. If you don’t keep all of those pieces up to date, they can pose a significant security risk and your site could get hijacked.

The alternative would be to have a static website, with nothing dynamically generated on the server side. Just good old HTML and CSS (and perhaps a bit of Javascript for flair). The downside to that option has been that you’ve been relegated to coding the whole thing by hand yourself. It’s doable, but you just want a place to share your work. You shouldn’t have to know all the idiosyncrasies of low-level web design (and the monumental headache of cross-browser compatibility) to do that.

Static website generators are tools used to build a website made up only of HTML, CSS, and JavaScript. Static websites, unlike dynamic sites built with tools like Drupal or WordPress, do not use databases or server-side scripting languages. Static websites have a number of benefits over dynamic sites, including reduced security vulnerabilities, simpler long-term maintenance, and easier preservation.

In this hands-on workshop, we’ll start by exploring static website generators, their components, some of the different options available, and their benefits and disadvantages. Then, we’ll work on making our own sites, and for those that would like to, get them online with GitHub pages. Familiarity with HTML, git, and command line basics will be helpful but are not required.

(One three-hour session)

26. Using Digital Media for Research and Instruction
To use digital media effectively in both research and instruction, you need to go beyond just the playback of media files. You need to be able to stream the media, divide that stream into different segments, provide descriptive analysis of each segment, order, re-order and compare different segments from the same or different streams and create web sites that can show the result of your analysis. In this workshop, we will use Omeka and several plugins for working with digital media, to show the potential of video streaming, segmentation and descriptive analysis for research and instruction.

(One three-hour session)

28. Spark in the Dark 101 https://zeppelin.apache.org/
This is an introductory session on Apache Spark, a framework for large-scale data processing (https://spark.apache.org/). We will introduce high level concepts around Spark, including how Spark execution works and its relationship to the other technologies for working with Big Data. Following this introduction to the theory and background, we will walk workshop participants through hands-on usage of spark-shell, Zeppelin notebooks, and Spark SQL for processing library data. The workshop will wrap up with use cases and demos for leveraging Spark within cultural heritage institutions and information organizations, connecting the building blocks learned to current projects in the real world.

(One three-hour session)

29. Introduction to Spotlight https://github.com/projectblacklight/spotlight
http://www.spotlighttechnology.com/4-OpenSource.htm
Spotlight is an open source application that extends the digital library ecosystem by providing a means for institutions to reuse digital content in easy-to-produce, attractive, and scholarly-oriented websites. Librarians, curators, and other content experts can build Spotlight exhibits to showcase digital collections using a self-service workflow for selection, arrangement, curation, and presentation.

This workshop will introduce the main features of Spotlight and present examples of Spotlight-built exhibits from the community of adopters. We’ll also describe the technical requirements for adopting Spotlight and highlight the potential to customize and extend Spotlight’s capabilities for their own needs while contributing to its growth as an open source project.

(One three-hour session)

31. Getting Started Visualizing your IoT Data in Tableau https://www.tableau.com/
The Internet of Things is a rising trend in library research. IoT sensors can be used for space assessment, service design, and environmental monitoring. IoT tools create lots of data that can be overwhelming and hard to interpret. Tableau Public (https://public.tableau.com/en-us/s/) is a data visualization tool that allows you to explore this information quickly and intuitively to find new insights.

This full-day workshop will teach you the basics of building your own IoT sensor using a Raspberry Pi (https://www.raspberrypi.org/) in order to gather, manipulate, and visualize your data.

All are welcome, but some familiarity with Python is recommended.

(Two three-hour sessions)

32. Enabling Social Media Research and Archiving
Social media data represents a tremendous opportunity for memory institutions of all kinds, be they large academic research libraries or small community archives. Researchers from a broad swath of disciplines have a great deal of interest in working with social media content, but they often lack access to datasets or the technical skills needed to create them. Further, it is clear that social media is already a crucial part of the historical record in areas ranging from events in your local community to national elections. But attempts to build archives of social media data are largely nascent. This workshop will be both an introduction to collecting data from the APIs of social media platforms, as well as a discussion of the roles of libraries and archives in that collecting.

Assuming no prior experience, the workshop will begin with an explanation of how APIs operate. We will then focus specifically on the Twitter API, as Twitter is of significant interest to researchers and hosts an important segment of discourse. Through a combination of hands-on and demos, we will gain experience with a number of tools that support collecting social media data (e.g., Twarc, Social Feed Manager, DocNow, Twurl, and TAGS), as well as tools that enable sharing social media datasets (e.g., Hydrator, TweetSets, and the Tweet ID Catalog).

The workshop will then turn to a discussion of how to build a successful program enabling social media collecting at your institution. This might cover a variety of topics including outreach to campus researchers, collection development strategies, the relationship between social media archiving and web archiving, and how to get involved with the social media archiving community. This discussion will be framed by a focus on ethical considerations of social media data, including privacy and responsible data sharing.

Time permitting, we will provide a sampling of some approaches to social media data analysis, including Twarc Utils and Jupyter Notebooks.

(One three-hour session)

data visualization for librarians

Eaton, M. E. (2017). Seeing Library Data: A Prototype Data Visualization Application for Librarians. Journal of Web Librarianship, 11(1), 69–78. Retrieved from http://academicworks.cuny.edu/kb_pubs

Visualization can increase the power of data, by showing the “patterns, trends and exceptions”

Librarians can benefit when they visually leverage data in support of library projects.

Nathan Yau suggests that exploratory learning is a significant benefit of data visualization initiatives (2013). We can learn about our libraries by tinkering with data. In addition, handling data can also challenge librarians to improve their technical skills. Visualization projects allow librarians to not only learn about their libraries, but to also learn programming and data science skills.

The classic voice on data visualization theory is Edward Tufte. In Envisioning Information, Tufte unequivocally advocates for multi-dimensionality in visualizations. He praises some incredibly complex paper-based visualizations (1990). This discussion suggests that the principles of data visualization are strongly contested. Although Yau’s even-handed approach and Cairo’s willingness to find common ground are laudable, their positions are not authoritative or the only approach to data visualization.

a web application that visualizes the library’s holdings of books and e-books according to certain facets and keywords. Users can visualize whatever topics they want, by selecting keywords and facets that interest them.

Primo X-Services API. JSON, Flask, a very flexible Python web micro-framework. In addition to creating the visualization, SeeCollections also makes this data available on the web. JavaScript is the front-end technology that ultimately presents data to the SeeCollections user. JavaScript is a cornerstone of contemporary web development; a great deal of today’s interactive web content relies upon it. Many popular code libraries have been written for JavaScript. This project draws upon jQuery, Bootstrap and d3.js.
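A minimal sketch of that Flask-plus-JSON pattern (not the actual SeeCollections code; the route name and holdings counts below are invented): a small endpoint that serves collection data for the JavaScript front end (jQuery, Bootstrap, d3.js) to visualize.

from flask import Flask, jsonify

app = Flask(__name__)

# hypothetical counts standing in for data retrieved from the Primo X-Services API
HOLDINGS = {'history': 1200, 'biology': 950, 'music': 430}

@app.route('/api/holdings')
def holdings():
    # return the counts as JSON so the front-end JavaScript can bind and plot them
    return jsonify(HOLDINGS)

if __name__ == '__main__':
    app.run(debug=True)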

To give SeeCollections a unified visual theme, I have used Bootstrap. Bootstrap is most commonly used to make webpages responsive to different devices

D3.js facilitates the binding of data to the content of a web page, which allows manipulation of the web content based on the underlying data.

 

scsu library position proposal

Please email completed forms to librarydeansoffice@stcloudstate.edu no later than noon on Thursday, October 5.

According to the email below, library faculty are asked to provide their feedback regarding the qualifications for a possible faculty line at the library.

  1. In the fall of 2013, during a faculty meeting attended by the then library dean and during a discussion of an article provided by the dean, it was established that leading academic libraries in this country are seeking to break the mold of the “library degree” and seek fresh ideas for the reinvention of the academic library by hiring faculty with more diverse (degree-wise) backgrounds.
  2. Is this still the case at the SCSU library? The “democratic” search for the answer to this question does not yield productive results, considering that the majority of the library faculty are “reference” librarians and they “democratically” overturn the votes of those who want this library brought up to 21st-century standards, seeking instead more “reference” bodies for duties which even the same reference librarians have recognized as obsolete.
    It seems that the majority of the SCSU library are “purists” in the sense of seeking professionals with broader background (other than library, even “reference” skills).
    In addition, most of the current SCSU librarians are opposed to a second degree – as in acquiring more qualification, versus seeking just another diploma. There is a certain attitude of stagnation / intellectual incest, where new ideas are not generated and old ideas are dressed in “new attire” to look innovative and/or 21st-century.
    Last but not least, a consistent complaint about workforce shortages (the attrition politics of the university’s reorganization contribute to the power of such complaints) fuels the requests for reference librarians and, instead of looking for new ideas, new approaches, and new work responsibilities, the library reorganization conversation deteriorates into squabbles for positions among different departments.
    Most importantly, the narrow-sightedness of being stuck in traditional work descriptions impairs most of the librarians’ ability to see potential allies and disruptors. E.g., the insistence on the supremacy of “information literacy” leads SCSU librarians to the erroneous conclusion of the exceptionality of information literacy and the disregard of multi[meta] literacies, thus depriving the entire campus of necessary 21st-century skills such as visual literacy, media literacy, technology literacy, etc.
    Simultaneously, as mentioned above about potential allies and disruptors, the SCSU librarians insist on their “domain” and if they are not capable of leading meta-literacies instructions, they would also not allow and/or support others to do so.
    Considering the observations above, the following qualifications must be considered:
  3. According to the information in this blog post:
    https://blog.stcloudstate.edu/ims/2016/06/14/technology-requirements-samples/
    for the past year and ½, academic libraries are hiring specialists with the following qualifications and for the following positions (bolded and / or in red). Here are some highlights:
    Positions
    digital humanities
    Librarian and Instructional Technology Liaison

library Specialist: Data Visualization & Collections Analytics

Qualifications

Advanced degree required, preferably in education, educational technology, instructional design, or MLS with an emphasis in instruction and assessment.

Programming skills – Demonstrated experience with one or more metadata and scripting languages (e.g. Dublin Core, XSLT, Java, JavaScript, Python, or PHP)
Data visualization skills
multi [ meta] literacy skills

Data curation, helping students working with data
Experience with website creation and design in a CMS environment and accessibility and compliance issues
Demonstrated a high degree of facility with technologies and systems germane to the 21st century library, and be well versed in the issues surrounding scholarly communications and compliance issues (e.g. author identifiers, data sharing software, repositories, among others)

Bilingual

Provides and develops awareness and knowledge related to digital scholarship and research lifecycle for librarians and staff.

Experience developing for, and supporting, common open-source library applications such as Omeka, ArchiveSpace, Dspace,

 

Responsibilities
Establishing best practices for digital humanities labs, networks, and services

Assessing, evaluating, and peer reviewing DH projects and librarians
Actively promote TIGER or GRIC related activities through social networks and other platforms as needed.
Coordinates the transmission of online workshops through Google Hangouts. Scripts metadata transformations and digital object processing using Bash, Python, and XSLT

liaison consults with faculty and students in a wide range of disciplines on best practices for teaching and using data/statistical software tools such as R, SPSS, Stata, and MatLab.

 

In response to the form attached to the Friday, September 29, email regarding St. Cloud State University Library Position Request Form:

 

  1. Title
    Digital Initiatives Librarian
  2. Responsibilities:
    TBD, but generally:
    – works with faculty across campus on promoting digital projects and other 21st-century projects. Works with the English Department faculty on positioning the SCSU library as an equal participant in the digital humanities initiatives on campus
  • Works with the Visualization lab to establish the library as the leading unit on campus in interpretation of big data
  • Works with academic technology services on promoting library faculty as the leading force in the pedagogical use of academic technologies.
  1. Quantitative data justification
    this is a moot requirement for an innovative and useful library position. It can apply to a traditional request, such as another “reference” librarian. There cannot be a quantitative data justification for an innovative position, as explained to Keith Ewing in 2015. In order to accumulate such data, the position must be functioning for at least six months.
  2. Qualitative justification: Please provide qualitative explanation that supports need for this position.
    Numerous 21st-century academic tendencies are currently scattered across campus and are a subject of political/power battles rather than a venue for campus collaboration and cooperation. Such a position can seek to establish the library as the natural hub for “sandbox” activities across campus. It can seek a redirection away from using digital initiatives on this campus for political gains by administrators, and move the generation and accomplishment of such initiatives to their rightful owners and primary stakeholders: faculty and students.
    Currently, no additional facilities or resources are required. Existing facilities and resources, such as the visualization lab and open-source and free applications, can be used to generate the momentum of faculty working together toward a common goal, such as, e.g., digital humanities.

 

 

 

 

intro computer programming

Intro to Computer Programming
with Steve Perry

10-week eCourse  Beginning Tuesday, September 5, 2017

For today’s librarian, the ability to adapt to new technology is not optional. Programming—the process of using computer language to generate commands that instruct a computer to perform specific functions—is at the core of all computer technology. A foundation in programming helps you understand the inner workings of all of the technologies that drive libraries now—from integrated library systems to Web pages and databases.

In this Advanced eCourse, you can go from having little to no programming knowledge to being familiar with coding in several different computer languages. Steve Perry—an experienced LIS instructor and programmer—will teach you in his lectures what you need to get started, and then the readings and exercises will give you practical programming experience, particularly as it relates to a library environment. Languages covered will include HTML, CSS, JavaScript, PHP, and others. You do not need any programming experience or special software to participate in this eCourse.

Participants who complete this Advanced eCourse will receive an SJSU iSchool/ALA Publishing Advanced Certificate of Completion.

+++++++++++++++
more on coding in this IMS blog
https://blog.stcloudstate.edu/ims?s=coding

microsoft access alternatives

per LITA listserv

We are looking for a free alternative to Microsoft Access. We have looked at Base which is part of LibreOffice and OpenOffice. However, as far as we can determine, Base does not allow us to import a CSV file into the database as a table. Such a feature would be important to us as we frequently need to import text files.

We would like to be able to query the database using SQL.

Microsoft Access supports Visual Basic for Applications (VBA). We would like a database that works with C#, Java, or JavaScript in the same way.
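My note – one possible sketch, an approach not mentioned in the thread and offered only as an illustration: Python’s built-in sqlite3 module can load a CSV file into a table and then be queried with plain SQL. The file names and columns below are hypothetical.

import csv
import sqlite3

conn = sqlite3.connect('library.db')          # hypothetical database file
cur = conn.cursor()
cur.execute('CREATE TABLE IF NOT EXISTS items (title TEXT, call_number TEXT, year INTEGER)')

with open('items.csv', newline='', encoding='utf-8') as f:   # hypothetical CSV export
    reader = csv.DictReader(f)
    rows = [(r['title'], r['call_number'], int(r['year'])) for r in reader]
cur.executemany('INSERT INTO items VALUES (?, ?, ?)', rows)
conn.commit()

# query with SQL
for (title,) in cur.execute('SELECT title FROM items WHERE year > 2010 ORDER BY title'):
    print(title)
conn.close()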

++++++++++++++++++++++++
https://help.libreoffice.org/Common/Importing_and_Exporting_Data_in_Base

Gnumeric. https://portableapps.com/apps/office/gnumeric_portable

 

history Becker

Digital Literacy and History

Plamen Miltenoff – http://web.stcloudstate.edu/pmiltenoff/faculty/
with Heather Abrahamson, Becker High School Social Studies, 763-261-4501 (Ext. 3507)
9:50-11:15; 11:20-11:45; 12:20-1:20
link to this blog entry: https://blog.stcloudstate.edu/ims/2017/05/01/history-becker/
short link – http://bit.ly/histbecker

+++++++++++++++++++++
list of web sites with images for the students’ projects:

  • Holocaust

https://www.ushmm.org/collections/the-museums-collections/about/photo-archives

http://www.jewishvirtuallibrary.org/holocaust-photographs

https://go.fold3.com/holocaust_records/

https://www.wienerlibrary.co.uk/Photographs

https://www.thoughtco.com/large-collection-of-holocaust-pictures-1779703

http://www.yadvashem.org/yv/en/holocaust/resource_center/item.asp?gate=4-2

http://www.history.com/topics/world-war-ii/the-holocaust/pictures/holocaust-concentration-camps/poland-auschwitz-birkenau-death-camp

  • Cold War

http://www.gettyimages.com/photos/cold-war

http://www.coldwar.org/museum/photo_gallery.asp

http://www.cnn.com/2014/03/04/world/gallery/cold-war-history/

http://time.com/3879870/berlin-wall-photos-early-days-cold-war-symbol/

http://digitalarchive.wilsoncenter.org/theme/cold-war-history

http://archive.millercenter.org/academic/dgs/primaryresources/cold_war

  • others

http://www.loc.gov/pictures/

http://www.gettyimages.com/editorialimages/archival

https://www.archives.gov/research/alic/reference/photography.html

 

+++++++++++++++++++++
Defining my interests. Narrowing a topic. How do I collect information? How do I search for information?

How do we search for “serious” information?

https://www.google.com/; https://scholar.google.com/ (3 min); http://academic.research.microsoft.com/; http://www.dialog.com/; http://www.quetzal-search.info; http://www.arXiv.org; http://www.journalogy.com/
  • Digg, Reddit , Quora, Medium,
http://digg.com/, https://www.reddit.com/, https://www.quora.com/; StackExchange http://stackexchange.com/; Kngine.com; AskScience https://www.reddit.com/r/askscience/, and similar; https://medium.com/ (5 min)
YouTube, SlideShare https://www.slideshare.net/  and similar https://www.slideshare.net/search/slideshow?searchfrom=header&q=modern+history
  • Professional organization and social media
(10 min)
Wikipedia https://en.wikipedia.org/wiki/Modern_history
blogs, listservs http://www.bestcollegesonline.com/blog/100-awesome-blogs-for-history-junkies/
Facebook  history
Twitter  twitter
LinkedIn Groups https://www.linkedin.com/groups/my-groups  
team work using your social media accounts (e.g. Facebook, Twitter), search for information related to your topic of interest (5 min)

  • Other search engines
https://www.semanticscholar.org/
  • University Library Search
(20 min)
every university library has subject guides for different disciplines; here are the ones from SCSU: http://stcloud.lib.mnscu.edu/subjects/guide.php?subject=HIST-WOR. Kahoot game (5 min)
basic electronic (library) search information and strategies. Library research services (5 min)

using the library database, do a search on a topic of your interest.

compare the returns on your search. make an attempt to refine the search.

retrieve the following information about the book of interest: is it relevant to your topic (check the subjects); is it timely (check the published date); is it available

 books
Strategies for conducting advanced searches (setting up filters and search criteria)
Articles and databases (10 min)  
Kahoot competition use your smart phones to find the best researcher among you
https://play.kahoot.it/#/k/c376c27a-d39a-4825-8541-1c1ae728e1bc
https://play.kahoot.it/#/k/5e6d126f-be4d-47d0-9b6e-dfc3f2c90e61
https://play.kahoot.it/#/k/89706729-3663-4ec3-a351-173bf1bf4ed7
history:
https://play.kahoot.it/#/k/7510e6d8-170f-4c0c-b7bd-6d7dd60c3f6e
Reference and Facts
Streaming and Video http://www.stcloudstate.edu/library/research/video.aspx
Journal Title and Citation Finder
should more info be needed and/or a “proper” session with a reference librarian be requested: http://stcloud.lib.mnscu.edu/subjects/guide.php?subject=EDAD-D
Institutional Repository http://repository.stcloudstate.edu/
  • additional academic resources
Academia.edu and ResearchGate

academia


  • VR tour SCSU library
http://bit.ly/360lib and http://bit.ly/360lib2;  http://bit.ly/VRlib (15 min)

  • bibliographic tools
Refworks https://www.refworks.com/refworks2/default.aspx?r=authentication::init&
Zotero, Mendeley, Endnote
Fast and easy bibliographic tools: https://blog.stcloudstate.edu/ims/2013/12/06/bibliographic-tools-fast-and-easy/
 Primary and secondary sources video

++++++++++++++++++
more on history in this IMS blog
https://blog.stcloudstate.edu/ims?s=history
