This case study of Indiana University’s e-text initiative reports on students’ actual use of and engagement with digital textbooks.
In a typical semester, students read more in the first four weeks and less in later weeks except during major assessment times; in a typical week, most reading occurs between 5:00 p.m. and 2:00 a.m. from Monday to Thursday, indicating that students use e-texts mainly as a self-study resource.
Highlighting was the markup feature most used by students, whereas use of the other interactive markup features (shared notes, questions, and answers) was minimal, perhaps because of students’ lack of awareness of these features.
The study found that higher engagement with e-texts (reading and highlighting) correlated with higher course grades.
Although cost savings are often cited as a key advantage of electronic textbooks (also known as e-textbooks, or simply e-texts), e-texts also provide powerful markup and interaction tools. For these tools to improve student learning, however, students and instructors must actually adopt them.
The Indiana University e-texts program, which began in 2009, has four primary goals:
Drive down the cost of materials for students
Provide high-quality materials of choice
Enable new tools for teaching and learning
Shape the terms of sustainable models that work for students, faculty, and authors
To date, student savings on textbooks amount to $21,673,338. However, we recognize that many students do not pay the full list price for paper textbooks when they purchase online, buy used copies, or recoup some of their costs when they resell their texts after the semester is over.
Therefore, we divide the calculated savings by two and report that total as a more accurate representation of student savings. Consequently, we claim that students have saved about $11 million since IU’s e-texts program started in spring 2012.
In addition to printing through the e-text platform, students can purchase a print-on-demand (PoD) copy of an e-text for an additional fee.
One downside of e-texts is that students lease their textbook for a limited time instead of owning it. This lease generally lasts a semester or six months, and students lose their access afterwards. However, with IU’s e-text model, students get access to the textbook before the first day of class and maintain their access until they graduate from Indiana University. That is, students can go back to the e-texts after their course to review or reference the content in the book. This could be especially important if the e-text course is a prerequisite for another course.
Vicky Steeves (@VickySteeves) is the first Research Data Management and Reproducibility Librarian.
Reproducibility is made much more challenging by computers and by the dominance of the closed-source operating systems and analysis software researchers use. Ben Marwick wrote a great piece called ‘How computers broke science – and what we can do to fix it’ which details a bit of the problem. Basically, computational environments affect the outcome of analyses (Gronenschild et al. (2012) showed that the same data and analyses gave different results between two versions of macOS), and they are exceptionally hard to reproduce, especially when license terms don’t allow it. Additionally, programs encode data incorrectly and studies draw erroneous conclusions as a result; for example, Microsoft Excel encodes gene names as dates, an error that affects roughly one-fifth of published gene lists in leading genome journals.
Technology to capture computational environments, workflow, provenance, data, and code is hugely impactful for reproducibility. It’s been the focus of my work in supporting an open source tool called ReproZip, which packages all computational dependencies, data, and applications in a single distributable package that others can reproduce across different systems. There are other tools that fix parts of this problem: Kepler and VisTrails for workflow/provenance, Packrat for saving the specific R packages in use at the time a script is run so that updates to dependencies won’t break it, Pex for generating executable Python environments, and o2r for executable papers (including data, text, and code in one).
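ReproZip itself works by tracing an experiment’s system calls and bundling everything it touches. As a much simpler illustration of one corner of the problem, here is a minimal Python sketch (my own toy example, not ReproZip’s mechanism) that snapshots the interpreter, operating system, and installed packages so collaborators can at least compare computational environments:

```python
# Toy environment snapshot: record interpreter, OS, and package versions
# so another researcher can diff their environment against yours.
import json
import platform
import sys
from importlib import metadata  # standard library in Python 3.8+

def snapshot_environment(path="environment.json"):
    """Write a JSON snapshot of the current computational environment."""
    env = {
        "python": sys.version,
        "os": platform.platform(),
        "packages": {
            dist.metadata["Name"]: dist.version
            for dist in metadata.distributions()
        },
    }
    with open(path, "w") as f:
        json.dump(env, f, indent=2, sort_keys=True)

if __name__ == "__main__":
    snapshot_environment()
```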
An introduction to digital badges and a brief history
Simply put, a digital badge is an indicator of accomplishment or skill that can be displayed, accessed, and verified online. These badges can be earned in a wide variety of environments, an increasing number of which are online.
The anatomy of digital badges
In addition to the image-based design we think of as a digital badge, badges carry metadata that communicates the details of the badge to anyone wishing to verify it or learn more about the context of the achievement it signifies.
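For a rough sense of what that metadata can look like, here is an assertion loosely modeled on the Open Badges vocabulary; every identifier and URL below is a hypothetical placeholder:

```python
# An illustrative badge assertion, loosely modeled on Open Badges;
# all names and URLs are hypothetical placeholders.
import json

assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://badges.example.edu/assertions/1234",
    "recipient": {"type": "email", "identity": "learner@example.edu"},
    "badge": "https://badges.example.edu/badges/data-literacy",  # the BadgeClass
    "issuedOn": "2017-09-01T00:00:00Z",
    "verification": {"type": "hosted"},  # verified by fetching the hosted JSON
}

print(json.dumps(assertion, indent=2))
```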
The many functions of digital badges
Just like their real-world counterparts, digital badges serve a wide variety of purposes depending on the issuing body and the individual. For the most part, badges’ functions can be bucketed into one of five categories.
Badges are issued by individual organizations that set the criteria for what constitutes earning a badge. They’re most often issued through an online credential or badging platform.
Criticism of digital badges
There are various arguments to be made against the implementation of digital badges, including the common issuance of seemingly “meaningless” badges.
The future of digital badges
With the rise of online education and the increasing availability of high-quality massive open online courses, there will be an increasing need for verifiable digital badges and digital credentials.
Digital resource sets available through MnPALS Plus
Two sets of open access, free digital resources that may be of interest to students and faculty have been added to SCSU’s online catalog (MnPALS Plus).
Open Textbook Library (a project of the University of Minnesota)
(appears in Collection drop-down menu as “Univ of Mn Open Textbook Library”)
“Open textbooks are textbooks that have been funded, published, and licensed to be freely used, adapted, and distributed. These books have been reviewed by faculty from a variety of colleges and universities to assess their quality. These books can be downloaded for no cost, or printed at low cost. All textbooks are either used at multiple higher education institutions; or affiliated with an institution, scholarly society, or professional organization.”
For more information, see https://open.umn.edu/opentextbooks/
“Ebooks Minnesota is an online ebook collection for all Minnesotans. The collection covers a wide variety of subjects for readers of all ages, and features content from our state’s independent publishers, including some of our best literature and nonfiction.”
For more information, see https://mndigital.org/projects/ebooks-minnesota
These resources are included in any search done in the online catalog. To view or search one of these collections specifically, go to the Advanced Search in MnPALS Plus and select the desired collection from the Collection dropdown. Users can add search terms, or just click “Find” without entering any search terms to see the entire collection.
10. The Virtualized Library: A Librarian’s Introduction to Docker and Virtual Machines
This session will introduce two major types of virtualization, virtual machines using tools like VirtualBox and Vagrant, and containers using Docker. The relative strengths and drawbacks of the two approaches will be discussed along with plenty of hands-on time. Though geared towards integrating these tools into a development workflow, the workshop should be useful for anyone interested in creating stable and reproducible computing environments, and examples will focus on library-specific tools like Archivematica and EZPaarse. With virtualization taking a lot of the pain out of installing and distributing software, alleviating many cross-platform issues, and becoming increasingly common in library and industry practices, now is a great time to get your feet wet.
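For a small taste of why containers appeal for reproducible environments, here is a minimal sketch using the Docker SDK for Python (the docker package, assuming it is installed and a Docker daemon is running); the pinned image gives every user the same environment regardless of host OS:

```python
# Run a throwaway container from a pinned image and capture its output.
import docker

client = docker.from_env()  # connect to the local Docker daemon

output = client.containers.run(
    "python:3.11-slim",         # pinned image = a known, shared environment
    ["python", "--version"],
    remove=True,                # delete the container when it exits
)
print(output.decode().strip())  # e.g. "Python 3.11.9"
```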
(One three-hour session)
11. Digital Empathy: Creating Safe Spaces Online
User research is often focused on measures of the usability of online spaces. We look at search traffic, run card sorting and usability testing activities, and track how users navigate our spaces. Those results inform design decisions through the lens of information architecture. This is important, but doesn’t encompass everything a user needs in a space.
This workshop will focus on the other component of user experience design and user research: how to create spaces where users feel safe. Users bring their anxieties and stressors with them to our online spaces, but informed design choices can help to ameliorate that stress. This will ultimately lead to a more positive interaction between your institution and your users.
The presenters will discuss the theory behind empathetic design, delve deeply into using ethnographic research methods – including an opportunity for attendees to practice those ethnographic skills with student participants – and finish with the practical application of these results to ongoing and future projects.
(One three-hour session)
14. ARIA Basics: Making Your Web Content Sing Accessibility
(One three-hour session)
18. Learning and Teaching Tech
Tech workshops pose two unique problems: finding skilled instructors for the content, and teaching that content well. Library-hosted workshops are often a primary educational resource for solo learners, and many librarians use these workshops as a primary outreach platform. Tackling these two issues together often makes the most sense for our limited resources. Whether it’s a programming language or a software tool, learning tech in order to teach tech can be one of the best motivations for picking up that skill or tool, but it is equally important to learn how to teach and present tech well.
This hands-on workshop will guide participants through developing their own learning plan, reviewing essential pedagogy for teaching tech, and crafting a workshop of their choice. Each participant will leave with an actionable learning schedule, a prioritized list of resources to investigate, and an outline of a workshop they would like to teach.
(Two three-hour sessions)
23. Introduction to Omeka S
Omeka S represents a complete rewrite of Omeka Classic (aka the Omeka 2.x series), adhering to our fundamental principles of encouraging the use of metadata standards, easy web publishing, and sharing cultural history. New objectives in Omeka S include multisite functionality and increased interaction with other systems. This workshop will compare and contrast Omeka S with Omeka Classic to highlight our emphasis on 1) modern metadata standards, 2) interoperability with other systems including Linked Open Data, 3) use of modern web standards, and 4) web publishing to meet the goals of medium- to large-sized institutions.
In this workshop we will walk through Omeka S Item creation, with emphasis on LoD principles. We will also look at the features of Omeka S that ease metadata input and facilitate project-defined usage and workflows. In accordance with our commitment to interoperability, we will describe how the API for Omeka S can be deployed for data exchange and sharing between many systems. We will also describe how Omeka S promotes multiple site creation from one installation, in the interest of easy publishing with many objects in many contexts, and simplifying the work of IT departments.
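As a rough sketch of what that data exchange can look like, the snippet below reads items from an Omeka S installation’s REST API using Python’s requests library; the base URL is a hypothetical placeholder, and the exact JSON-LD fields may vary with version and configuration:

```python
# Read public items from an Omeka S REST API (hypothetical site URL).
import requests

BASE = "https://omeka.example.edu"  # placeholder Omeka S installation

resp = requests.get(f"{BASE}/api/items", params={"per_page": 5})
resp.raise_for_status()

for item in resp.json():
    # Omeka S returns JSON-LD; dcterms:title is a list of value objects.
    titles = item.get("dcterms:title", [])
    label = titles[0]["@value"] if titles else "(untitled)"
    print(item["o:id"], label)
```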
(One three-hour session)
24. Getting started with static website generators
Have you been curious about static website generators? Have you been wondering who Jekyll and Hugo are? Then this workshop is for you.
But this workshop isn’t about setting up a domain name and hosting for your website. It’s for the step after that, the actual making of the site. The typical choice for a lot of people would be to use something like WordPress. It’s a one-click install on most hosting providers, and there’s a gigantic market of plugins and themes available to choose from, depending on the type of site you’re trying to build. But not only is WordPress a bit of overkill for most websites, it also gives you a dynamically generated site with a lot of moving parts. If you don’t keep all of those pieces up to date, they can pose a significant security risk and your site could get hijacked.
In this hands-on workshop, we’ll start by exploring static website generators, their components, some of the different options available, and their benefits and disadvantages. Then we’ll work on making our own sites, and, for those who would like to, getting them online with GitHub Pages. Familiarity with HTML, git, and command line basics will be helpful but is not required.
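To make the idea concrete ahead of time, here is a toy Python sketch of what a static site generator does at its core; real tools like Jekyll and Hugo add layouts, themes, and asset pipelines on top, and the directory names here are arbitrary:

```python
# Toy static site generator: turn text files into standalone HTML pages.
from pathlib import Path

TEMPLATE = """<!doctype html>
<html><head><title>{title}</title></head>
<body><h1>{title}</h1><p>{body}</p></body></html>"""

def build(src_dir="content", out_dir="_site"):
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for src in Path(src_dir).glob("*.txt"):
        # Treat the first line as the title, the rest as the body.
        title, _, body = src.read_text().partition("\n")
        html = TEMPLATE.format(title=title.strip(), body=body.strip())
        (out / f"{src.stem}.html").write_text(html)

if __name__ == "__main__":
    build()
```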
(One three-hour session)
26. Using Digital Media for Research and Instruction
To use digital media effectively in both research and instruction, you need to go beyond mere playback of media files. You need to be able to stream the media, divide that stream into different segments, provide descriptive analysis of each segment, order, re-order, and compare different segments from the same or different streams, and create websites that can show the results of your analysis. In this workshop, we will use Omeka and several plugins for working with digital media to show the potential of video streaming, segmentation, and descriptive analysis for research and instruction.
(One three-hour session)
28. Spark in the Dark 101 https://zeppelin.apache.org/
This is an introductory session on Apache Spark, a framework for large-scale data processing (https://spark.apache.org/). We will introduce high-level concepts around Spark, including how Spark execution works and its relationship to other technologies for working with Big Data. Following this introduction to the theory and background, we will walk workshop participants through hands-on usage of spark-shell, Zeppelin notebooks, and Spark SQL for processing library data. The workshop will wrap up with use cases and demos for leveraging Spark within cultural heritage institutions and information organizations, connecting the building blocks learned to current projects in the real world.
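For a flavor of the hands-on portion, here is a minimal PySpark sketch (the file and column names are hypothetical) that loads a CSV of circulation records and queries it with Spark SQL:

```python
# Load a CSV of checkouts and answer a question about it with Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("LibraryDataDemo").getOrCreate()

# Hypothetical export: one row per checkout, with a "branch" column.
checkouts = spark.read.csv("checkouts.csv", header=True, inferSchema=True)
checkouts.createOrReplaceTempView("checkouts")

# The same query runs whether the data is megabytes or terabytes.
spark.sql("""
    SELECT branch, COUNT(*) AS n
    FROM checkouts
    GROUP BY branch
    ORDER BY n DESC
    LIMIT 10
""").show()

spark.stop()
```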
(One three-hour session)
29. Introduction to Spotlight https://github.com/projectblacklight/spotlight http://www.spotlighttechnology.com/4-OpenSource.htm
Spotlight is an open source application that extends the digital library ecosystem by providing a means for institutions to reuse digital content in easy-to-produce, attractive, and scholarly-oriented websites. Librarians, curators, and other content experts can build Spotlight exhibits to showcase digital collections using a self-service workflow for selection, arrangement, curation, and presentation.
This workshop will introduce the main features of Spotlight and present examples of Spotlight-built exhibits from the community of adopters. We’ll also describe the technical requirements for adopting Spotlight and highlight the potential for institutions to customize and extend Spotlight’s capabilities for their own needs while contributing to its growth as an open source project.
(One three-hour session)
31. Getting Started Visualizing your IoT Data in Tableau https://www.tableau.com/
The Internet of Things is a rising trend in library research. IoT sensors can be used for space assessment, service design, and environmental monitoring. IoT tools create lots of data that can be overwhelming and hard to interpret. Tableau Public (https://public.tableau.com/en-us/s/) is a data visualization tool that allows you to explore this information quickly and intuitively to find new insights.
This full-day workshop will teach you the basics of building your own IoT sensor using a Raspberry Pi (https://www.raspberrypi.org/) in order to gather, manipulate, and visualize your data.
All are welcome, but some familiarity with Python is recommended.
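As a hint of the data-gathering step, the sketch below logs timestamped readings to a CSV file that Tableau Public can import; the sensor is simulated with random values, and on a real Raspberry Pi you would read an attached sensor instead:

```python
# Log simulated sensor readings to a CSV that Tableau can import.
import csv
import random
import time
from datetime import datetime

def log_readings(path="sensor_log.csv", samples=10, interval_s=2):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "temperature_c"])  # header row for Tableau
        for _ in range(samples):
            reading = round(random.uniform(20.0, 24.0), 2)  # stand-in for a sensor
            writer.writerow([datetime.now().isoformat(), reading])
            time.sleep(interval_s)

if __name__ == "__main__":
    log_readings()
```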
(Two three-hour sessions)
32. Enabling Social Media Research and Archiving
Social media data represents a tremendous opportunity for memory institutions of all kinds, be they large academic research libraries or small community archives. Researchers from a broad swath of disciplines have a great deal of interest in working with social media content, but they often lack access to datasets or the technical skills needed to create them. Further, it is clear that social media is already a crucial part of the historical record in areas ranging from events in your local community to national elections. But attempts to build archives of social media data are largely nascent. This workshop will be both an introduction to collecting data from the APIs of social media platforms and a discussion of the roles of libraries and archives in that collecting.
Assuming no prior experience, the workshop will begin with an explanation of how APIs operate. We will then focus specifically on the Twitter API, as Twitter is of significant interest to researchers and hosts an important segment of public discourse. Through a combination of hands-on exercises and demos, we will gain experience with a number of tools that support collecting social media data (e.g., Twarc, Social Feed Manager, DocNow, Twurl, and TAGS), as well as tools that enable sharing social media datasets (e.g., Hydrator, TweetSets, and the Tweet ID Catalog).
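For a sense of what this looks like in practice, here is a minimal sketch using Twarc, one of the tools named above, to collect tweets from the Twitter search API; the credentials, query, and output file are placeholders:

```python
# Collect tweets matching a query and save the full JSON for each one;
# keeping complete records supports later sharing as tweet-ID datasets.
import json
from twarc import Twarc

t = Twarc(
    consumer_key="...",          # placeholder Twitter API credentials
    consumer_secret="...",
    access_token="...",
    access_token_secret="...",
)

with open("tweets.jsonl", "w") as out:
    for i, tweet in enumerate(t.search("#code4lib")):
        out.write(json.dumps(tweet) + "\n")
        if i >= 99:              # stop after 100 tweets for the demo
            break
```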
The workshop will then turn to a discussion of how to build a successful program enabling social media collecting at your institution. This might cover a variety of topics including outreach to campus researchers, collection development strategies, the relationship between social media archiving and web archiving, and how to get involved with the social media archiving community. This discussion will be framed by a focus on ethical considerations of social media data, including privacy and responsible data sharing.
Time permitting, we will provide a sampling of some approaches to social media data analysis, including Twarc Utils and Jupyter Notebooks.
How data is produced, collected, and analyzed; making all kinds of data and information accessible.
Asking good questions and finding good answers; sharing findings in meaningful ways. This is where digital literacy overshadows information literacy, a fact that the SCSU library does not understand: besides teaching students how to find and evaluate data, I also teach them how to communicate effectively using electronic tools.
Connecting people, tools, and resources and making things easier for everybody; building collaborative, open, and interdisciplinary communities.
Robust data and computational literacies; developing workshops, projects, and events to practice new skills, and positioning the library as the interdisciplinary nexus.
What are data? By definition: items of information, facts, traces of content and form. At a higher level, a conceptual discussion about data in terms of social effects: metadata capturing information about the world and about social, political, and economic changes. Move away from mystical conceptions of data; there is nothing objective about data.
The emergence of the IoT: digital meets physical. Cyber-physical systems; smart objects driven by industry; the proliferation of sensors and smart devices.
What does privacy look like? What does net neutrality mean in an IoT world? Libraries must restructure: collaborate across institutions on collections of data in open and participatory ways, and put the IoT in the hands of people who make and break things (the speaker is a makerspace aficionada).
Make-and-break-things hackathons: use cheap devices such as the Arduino and the Raspberry Pi.
Data literacy programs with higher-level conceptual exploration; libraries empower the campus in data collection. Data science norms: store and share data in existing repositories and even catalogs. Commercial services store and connect data but are very restrictive, which is why libraries must be involved.
Linked data and dark data
Linked data: drawing connections among online data, most of which is locked away. Linked data uses metadata to link related information in ways computers can understand.
Libraries should take advantage of linked data, which opens opportunities for semantics, natural language processing, and so on. If hidden data is relevant to our communities, it is a library responsibility to provide it; librarians as community data practitioners.
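A tiny illustration of the idea, using the rdflib Python library (assumed installed; the URIs are hypothetical placeholders): a book, its author, and the machine-followable link between them, expressed as triples.

```python
# Describe a book and its author as linked data triples, then print Turtle.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")  # placeholder vocabulary

g = Graph()
book = URIRef("http://example.org/book/moby-dick")
author = URIRef("http://example.org/person/melville")

g.add((book, RDF.type, EX.Book))
g.add((book, EX.title, Literal("Moby-Dick")))
g.add((book, EX.author, author))        # the link a computer can follow
g.add((author, RDF.type, FOAF.Person))
g.add((author, FOAF.name, Literal("Herman Melville")))

print(g.serialize(format="turtle"))     # returns a str in rdflib 6+
```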
Dark data: massive data that cannot be analyzed by relational processing, or data that did not yield significant findings but might be valuable for other researchers; one person’s trash is another person’s treasure. Preserving data and providing access to information; collaborating with researchers across disciplines to help decide what is worth keeping, what to discard, and how to study it.
The rich learning experience of working with linked and dark data enables fresh perspectives and teaches how to work with data architecture; a foundation for data literacy programming.
In the age of Big Data, there is an abundance of free or cheap data sources available to libraries about their users’ behavior across the many components that make up their web presence. Data from vendors, data from Google Analytics or other third-party tracking software, and data from user testing are all things libraries have access to at little or no cost. However, just as many students become overloaded when they do not know how to navigate the many information sources available to them, many libraries can become overloaded by the continuous stream of data pouring in from these sources. This session will aim to help librarians understand 1) what sorts of data their library already has (or easily could have) access to about how their users use their various web tools, 2) what that data can and cannot tell them, and 3) how to use the datasets they are collecting in a holistic manner to help them make design decisions. The presentation will feature examples from the presenters’ own experience of incorporating user data in decisions related to the design of the Bethel University Libraries’ web presence.
Lack of fear; changing the mindset.
Deep collaboration, both within and across consortia.
Don’t rely on vendor solutions; change the mindset.
Development = opportunity (versus development as “work”).
PALNI (the Private Academic Library Network of Indiana) serves private higher education.
3D virtual pictures of disaster areas; unlocking digital information so that it is accessible to all people who might be interested.
They opened the maps of Kathmandu to the local community, which came up with strategies to recover: democracy in action.
I can’t stop thinking that the keynote speaker’s efforts are a mere follow-up to what Naomi Klein explains in her Shock Doctrine (http://www.naomiklein.org/shock-doctrine): a government seeks reasons to destroy another country or area, and then NGOs from the same country go in to remedy the disasters.
A question from a librarian from the U about the use of drones. My note: why did the SCSU library have to give up its drone?
The Douglas County Libraries model: too resource-intensive to continue.
Marmot Library Network
ILS (integrated library system) shared with other counties, with the same server for the entire consortium. They have a programmer who customized VuFind, the open source discovery layer, from the community edition into VuFind Plus; instead of using the ILS public access catalog, they use the VuFind interface.
Califa’s Enki: a single-access ebook collection for public libraries. They purchase ebooks from publishers and also use the VuFind interface, but it is not integrated with the library catalogs. The Kansas public library system went from OverDrive to VuFind. The California State Library is funding this effort for the time being.
HarperCollins is too cumbersome to work with, which is the reason to avoid them.
Security issues: some of the material was being sent over FTP and was immediately moved to SFTP.
Decisions: use internal resources only; if not, Amazon.
Contracted programmers were used for the pilot, but they lacked the ability to see the larger picture; eventually a full-time person was hired instead of outsourcing. RDA-compliant MARC records.
Metadata formats: ONIX, spreadsheets, MARC.
Decision about whom to start with: public or academic libraries.
An attempt to keep pricing down.
Their own agreement with the customers, separate from the agreement with the publisher.
Current development: web-based online reading, shared consortial collections, and SIP2 authentication.