EDUCAUSE ELI: Technology Procurement for Accessibility (PDF)
Despite general agreement among institutional leaders that they are obligated to provide accessible technology, efforts at many colleges and universities to fulfill that promise are often ad hoc, incomplete, or not fully implemented. Including accessibility requirements or guidance in institutional policies and practices for how technology is procured is one way for colleges and universities to demonstrate a commitment to ensuring equal access to information, programs, and activities and to comply with applicable legal requirements.
Due to decentralized purchasing and contracting practices, as well as the growing ecosystem of easy-to-deploy learning apps, applications and services are often deployed with little or no oversight from an accessibility perspective.
At George Mason University, the university counsel, purchasing office, libraries, and IT services are collaborating to establish purchasing guidelines that ensure all IT purchases are reviewed for accessibility and conform to explicit standards and guidelines. The California State University system has developed system-wide vendor accessibility requirements, as well as an Equally Effective Alternate Access Plan (EEAAP) to address accessibility barriers while the product development team remediates those barriers (which are outlined in a product Accessibility Roadmap). Penn State University updated its policy for accessibility of electronic and information technology to reflect evolving standards and new best practices. The University of Washington uses a step-by-step checklist, including suggested language for contracts, to help users across campus ensure accessibility compliance in technology acquisitions. The University of Wisconsin–Madison tells stakeholders that they “must consider accessibility early and throughout the process as one of the criteria for [technology] acquisition.” As part of a process of “growing a culture of access,” Wichita State University has developed an in-depth Foundations of Accessibility course for staff and a technology audit rubric, among other tools.
Consistent adherence to accessibility policies for technology purchases can be challenging because some technologies might need to be deployed even though they are not fully accessible.
Campus policies allowing decentralized technology purchases can create gray areas where buyers may be uncertain about—or may not even be aware of—their responsibilities to ensure that such purchases comply with institutional accessibility policies.
Changes in pedagogical practice that ensure broader adoption of accessible technology are tangible demonstrations of enhanced institutional awareness. Broader adoption of the principles of Universal Design for Learning may stimulate more institutions to be intentional about policies that ensure accessible technology purchases.
The EDUCAUSE Learning Initiative has just launched its 2018 Key Issues in Teaching and Learning Survey, so vote today: http://www.tinyurl.com/ki2018.
Each year, the ELI surveys the teaching and learning community in order to discover the key issues and themes in teaching and learning. These top issues provide the thematic foundation for all of our conversations, courses, and publications for the coming year. Longitudinally, they also provide a way to track the evolving discourse in the teaching and learning space. More information about this annual survey can be found at https://www.educause.edu/eli/initiatives/key-issues-in-teaching-and-learning.
ACADEMIC TRANSFORMATION (Holistic models supporting student success, leadership competencies for academic transformation, partnerships and collaborations across campus, IT transformation, academic transformation that is broad, strategic, and institutional in scope)
ACCESSIBILITY AND UNIVERSAL DESIGN FOR LEARNING (Supporting and educating the academic community in effective practice; intersections with instructional delivery modes; compliance issues)
ADAPTIVE TEACHING AND LEARNING (Digital courseware; adaptive technology; implications for course design and the instructor’s role; adaptive approaches that are not technology-based; integration with LMS; use of data to improve learner outcomes)
COMPETENCY-BASED EDUCATION AND NEW METHODS FOR THE ASSESSMENT OF STUDENT LEARNING (Developing collaborative cultures of assessment that bring together faculty, instructional designers, accreditation coordinators, and technical support personnel; credit for real-world experience)
DIGITAL AND INFORMATION LITERACIES (Student and faculty literacies; research skills; data discovery, management, and analysis skills; information visualization skills; partnerships for literacy programs; evaluation of student digital competencies; information evaluation)
EVALUATING TECHNOLOGY-BASED INSTRUCTIONAL INNOVATIONS (Tools and methods to gather data; data analysis techniques; qualitative vs. quantitative data; evaluation project design; using findings to change curricular practice; scholarship of teaching and learning; articulating results to stakeholders; just-in-time evaluation of innovations). Here is my bibliographical overview on Big Data (scroll down to “Research literature”): https://blog.stcloudstate.edu/ims/2017/11/07/irdl-proposal/
EVOLUTION OF THE TEACHING AND LEARNING SUPPORT PROFESSION (Professional skills for T&L support; increasing emphasis on instructional design; delineating the skills, knowledge, business acumen, and political savvy for success; role of inter-institutional communities of practices and consortia; career-oriented professional development planning)
FACULTY DEVELOPMENT (Incentivizing faculty innovation; new roles for faculty and those who support them; evidence of impact on student learning/engagement of faculty development programs; faculty development intersections with learning analytics; engagement with student success)
GAMIFICATION OF LEARNING (Gamification designs for course activities; adaptive approaches to gamification; alternate reality games; simulations; technological implementation options for faculty)
INSTRUCTIONAL DESIGN (Skills and competencies for designers; integration of technology into the profession; role of data in design; evolution of the design profession (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims/2017/10/04/instructional-design-3/); effective leadership and collaboration with faculty)
INTEGRATED PLANNING AND ADVISING FOR STUDENT SUCCESS (Change management and campus leadership; collaboration across units; integration of technology systems and data; dashboard design; data visualization (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims?s=data+visualization); counseling and coaching advising transformation; student success analytics)
LEARNING ANALYTICS (Leveraging open data standards; privacy and ethics; both faculty and student facing reports; implementing; learning analytics to transform other services; course design implications)
LEARNING SPACE DESIGNS (Makerspaces; funding; faculty development; learning designs across disciplines; supporting integrated campus planning; ROI; accessibility/UDL; rating of classroom designs)
MICRO-CREDENTIALING AND DIGITAL BADGING (Design of badging hierarchies; stackable credentials; certificates; role of open standards; ways to publish digital badges; approaches to meta-data; implications for the transcript; personalized learning transcripts and blockchain technology (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims?s=blockchain))
MOBILE LEARNING (Curricular use of mobile devices (here previous blog postings on this issue: …))
MULTI-DIMENSIONAL TECHNOLOGIES (Virtual, augmented, mixed, and immersive reality; video walls; integration with learning spaces; scalability, affordability, and accessibility; use of mobile devices; multi-dimensional printing and artifact creation)
NEXT-GENERATION DIGITAL LEARNING ENVIRONMENTS AND LMS SERVICES (Open standards; learning environment architectures (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims/2017/03/28/digital-learning/); social learning environments; customization and personalization; OER integration; intersections with learning modalities such as adaptive, online, etc.; LMS evaluation, integration and support)
ONLINE AND BLENDED TEACHING AND LEARNING (Flipped course models; leveraging MOOCs in online learning; course development models; intersections with analytics; humanization of online courses; student engagement)
OPEN EDUCATION (Resources, textbooks, content; quality and editorial issues; faculty development; intersections with student success/access; analytics; licensing; affordability; business models; accessibility and sustainability)
PRIVACY AND SECURITY (Formulation of policies on privacy and data protection; increased sharing of data via open standards for internal and external purposes; increased use of cloud-based and third party options; education of faculty, students, and administrators)
WORKING WITH EMERGING LEARNING TECHNOLOGY (Scalability and diffusion; effective piloting practices; investments; faculty development; funding; evaluation methods and rubrics; interoperability; data-driven decision-making)
10. The Virtualized Library: A Librarian’s Introduction to Docker and Virtual Machines
This session will introduce two major types of virtualization: virtual machines, using tools like VirtualBox and Vagrant, and containers, using Docker. The relative strengths and drawbacks of the two approaches will be discussed, along with plenty of hands-on time. Though geared towards integrating these tools into a development workflow, the workshop should be useful for anyone interested in creating stable and reproducible computing environments, and examples will focus on library-specific tools like Archivematica and ezPAARSE. With virtualization taking a lot of the pain out of installing and distributing software, alleviating many cross-platform issues, and becoming increasingly common in library and industry practices, now is a great time to get your feet wet.
(One three-hour session)
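My note: the workshop itself works through the command-line tools, but as a minimal sketch of the container idea, here is the same “run a disposable, reproducible environment” workflow driven from Python with Docker’s SDK (assumes Docker is running and `pip install docker`; the images are just examples):

```python
# Minimal sketch: running throwaway containers from Python via the Docker SDK.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a short-lived container; the image is pulled automatically if missing.
output = client.containers.run("alpine:3", ["echo", "hello from a container"], remove=True)
print(output.decode())

# Long-running services work the same way, e.g. a web server mapped to port 8080.
web = client.containers.run("nginx:alpine", detach=True, ports={"80/tcp": 8080})
print(web.name, web.status)
web.stop()
web.remove()
```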
11. Digital Empathy: Creating Safe Spaces Online
User research is often focused on measures of the usability of online spaces. We look at search traffic, run card sorting and usability testing activities, and track how users navigate our spaces. Those results inform design decisions through the lens of information architecture. This is important, but doesn’t encompass everything a user needs in a space.
This workshop will focus on the other component of user experience design and user research: how to create spaces where users feel safe. Users bring their anxieties and stressors with them to our online spaces, but informed design choices can help to ameliorate that stress. This will ultimately lead to a more positive interaction between your institution and your users.
The presenters will discuss the theory behind empathetic design, delve deeply into using ethnographic research methods – including an opportunity for attendees to practice those ethnographic skills with student participants – and finish with the practical application of these results to ongoing and future projects.
(One three-hour session)
14. ARIA Basics: Making Your Web Content Sing Accessibility
https://dequeuniversity.com/assets/html/jquery-summit/html5/slides/landmarks.html
Are you a web developer, or do you create web content? Do you add dynamic elements to your pages? If so, you should be concerned with making those dynamic elements accessible and usable to as many people as possible. One of the most powerful tools currently available for making web pages accessible is ARIA, the Accessible Rich Internet Applications specification. This workshop will teach you the basics of leveraging the full power of ARIA to make great accessible web pages. Through several hands-on exercises, participants will come to understand the purpose and power of ARIA and how to apply it to a variety of different dynamic web elements. Topics will include semantic HTML, ARIA landmarks and roles, expanding/collapsing content, and modal dialogs. Participants will also be taught some basic use of the NVDA screen reader for accessibility testing. Finally, the lessons will also emphasize learning how to keep on learning as HTML, JavaScript, and ARIA continue to evolve and expand.
Participants will need a basic background in HTML, CSS, and some JavaScript.
(One three-hour session)
18. Learning and Teaching Tech
Tech workshops pose two unique problems: finding skilled instructors for that content, and instructing that content well. Library-hosted workshops are often a primary educational resource for solo learners, and many librarians utilize these workshops as a primary outreach platform. Tackling these two issues together often makes the most sense for our limited resources. Whether the subject is a programming language or a software tool, learning tech in order to teach tech can be one of the best motivations for acquiring that skill or tool, but it is equally important to learn how to teach and present tech well.
This hands-on workshop will guide participants through developing their own learning plan, reviewing essential pedagogy for teaching tech, and crafting a workshop of their choice. Each participant will leave with an actionable learning schedule, a prioritized list of resources to investigate, and an outline of a workshop they would like to teach.
(Two three-hour sessions)
23. Introduction to Omeka S
Omeka S represents a complete rewrite of Omeka Classic (aka the Omeka 2.x series), adhering to our fundamental principles of encouraging use of metadata standards, easy web publishing, and sharing cultural history. New objectives in Omeka S include multisite functionality and increased interaction with other systems. This workshop will compare and contrast Omeka S with Omeka Classic to highlight our emphasis on 1) modern metadata standards, 2) interoperability with other systems including Linked Open Data, 3) use of modern web standards, and 4) web publishing to meet the goals of medium- to large-sized institutions.
In this workshop we will walk through Omeka S Item creation, with emphasis on LoD principles. We will also look at the features of Omeka S that ease metadata input and facilitate project-defined usage and workflows. In accordance with our commitment to interoperability, we will describe how the API for Omeka S can be deployed for data exchange and sharing between many systems. We will also describe how Omeka S promotes multiple site creation from one installation, in the interest of easy publishing with many objects in many contexts, and simplifying the work of IT departments.
(One three-hour session)
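My note: the Omeka S REST API mentioned above is plain JSON-LD over HTTP, so a few lines of Python are enough to pull items out of an installation. A minimal sketch (the site URL is hypothetical; key_identity/key_credential are per-user API keys, needed only for non-public data):

```python
# Minimal sketch: reading items from an Omeka S site via its REST API.
import requests

BASE = "https://example.org/omeka-s"          # hypothetical installation URL
params = {
    "key_identity": "YOUR_KEY_IDENTITY",      # per-user API key (optional
    "key_credential": "YOUR_KEY_CREDENTIAL",  # for public data)
    "per_page": 25,
}

resp = requests.get(f"{BASE}/api/items", params=params)
resp.raise_for_status()
for item in resp.json():                      # each item is a JSON-LD object
    print(item["o:id"], item.get("o:title") or "(untitled)")
```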
24. Getting started with static website generators
Have you been curious about static website generators? Have you been wondering who Jekyll and Hugo are? Then this workshop is for you.
But this workshop isn’t about setting up a domain name and hosting for your website. It’s for the step after that, the actual making of that site. The typical choice for a lot of people would be to use something like WordPress. It’s a one-click install on most hosting providers, and there’s a gigantic market of plugins and themes available to choose from, depending on the type of site you’re trying to build. But not only is WordPress a bit overkill for most websites, it also gives you a dynamically generated site with a lot of moving parts. If you don’t keep all of those pieces up to date, they can pose a significant security risk and your site could get hijacked.
The alternative would be to have a static website, with nothing dynamically generated on the server side. Just good old HTML and CSS (and perhaps a bit of JavaScript for flair). The downside to that option has been that you’ve been relegated to coding the whole thing by hand yourself. It’s doable, but you just want a place to share your work. You shouldn’t have to know all the idiosyncrasies of low-level web design (and the monumental headache of cross-browser compatibility) to do that.
Static website generators are tools used to build a website made up only of HTML, CSS, and JavaScript. Static websites, unlike dynamic sites built with tools like Drupal or WordPress, do not use databases or server-side scripting languages. Static websites have a number of benefits over dynamic sites, including reduced security vulnerabilities, simpler long-term maintenance, and easier preservation.
In this hands-on workshop, we’ll start by exploring static website generators, their components, some of the different options available, and their benefits and disadvantages. Then, we’ll work on making our own sites, and for those who would like to, get them online with GitHub Pages. Familiarity with HTML, git, and command line basics will be helpful but is not required.
(One three-hour session)
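My note: under the hood, most static site generators boil down to “markdown in, templated HTML out.” A toy sketch of that core loop in Python (assumes `pip install markdown jinja2`; the content/ and site/ folder names are made up; Jekyll and Hugo add themes, front matter, feeds, and much more on top of this idea):

```python
# Toy static site generator: render every .md file in content/ into site/.
import pathlib

import markdown                # pip install markdown
from jinja2 import Template    # pip install jinja2

PAGE = Template("""<!doctype html>
<html><head><title>{{ title }}</title></head>
<body>{{ body }}</body></html>""")

out = pathlib.Path("site")
out.mkdir(exist_ok=True)

for src in pathlib.Path("content").glob("*.md"):
    body = markdown.markdown(src.read_text(encoding="utf-8"))
    html = PAGE.render(title=src.stem, body=body)
    (out / f"{src.stem}.html").write_text(html, encoding="utf-8")
    print("built", f"{src.stem}.html")
```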
26. Using Digital Media for Research and Instruction
To use digital media effectively in both research and instruction, you need to go beyond just the playback of media files. You need to be able to stream the media, divide that stream into different segments, provide descriptive analysis of each segment, order, re-order and compare different segments from the same or different streams and create web sites that can show the result of your analysis. In this workshop, we will use Omeka and several plugins for working with digital media, to show the potential of video streaming, segmentation and descriptive analysis for research and instruction.
(One three-hour session)
28. Spark in the Dark 101 https://zeppelin.apache.org/
This is an introductory session on Apache Spark, a framework for large-scale data processing (https://spark.apache.org/). We will introduce high-level concepts around Spark, including how Spark execution works and its relationship to other technologies for working with Big Data. Following this introduction to the theory and background, we will walk workshop participants through hands-on usage of spark-shell, Zeppelin notebooks, and Spark SQL for processing library data. The workshop will wrap up with use cases and demos for leveraging Spark within cultural heritage institutions and information organizations, connecting the building blocks learned to current projects in the real world.
(One three-hour session)
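My note: the same ideas carry over directly to PySpark. A minimal sketch of loading library data and querying it with Spark SQL (assumes a local Spark installation; the circulation CSV and its columns are hypothetical):

```python
# Minimal PySpark sketch: load a (hypothetical) circulation CSV and query it
# with Spark SQL. Run with spark-submit or inside a Zeppelin notebook.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("library-demo").getOrCreate()

# Hypothetical file with columns: item_id, branch, checkout_date
loans = spark.read.csv("circulation.csv", header=True, inferSchema=True)
loans.createOrReplaceTempView("loans")

busiest = spark.sql("""
    SELECT branch, COUNT(*) AS checkouts
    FROM loans
    GROUP BY branch
    ORDER BY checkouts DESC
""")
busiest.show(10)

spark.stop()
```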
29. Introduction to Spotlight https://github.com/projectblacklight/spotlight http://www.spotlighttechnology.com/4-OpenSource.htm
Spotlight is an open source application that extends the digital library ecosystem by providing a means for institutions to reuse digital content in easy-to-produce, attractive, and scholarly-oriented websites. Librarians, curators, and other content experts can build Spotlight exhibits to showcase digital collections using a self-service workflow for selection, arrangement, curation, and presentation.
This workshop will introduce the main features of Spotlight and present examples of Spotlight-built exhibits from the community of adopters. We’ll also describe the technical requirements for adopting Spotlight and highlight the potential for adopters to customize and extend Spotlight’s capabilities for their own needs while contributing to its growth as an open source project.
(One three-hour session)
31. Getting Started Visualizing your IoT Data in Tableau https://www.tableau.com/
The Internet of Things is a rising trend in library research. IoT sensors can be used for space assessment, service design, and environmental monitoring. IoT tools create lots of data that can be overwhelming and hard to interpret. Tableau Public (https://public.tableau.com/en-us/s/) is a data visualization tool that allows you to explore this information quickly and intuitively to find new insights.
This full-day workshop will teach you the basics of building your own IoT sensor using a Raspberry Pi (https://www.raspberrypi.org/) in order to gather, manipulate, and visualize your data.
All are welcome, but some familiarity with Python is recommended.
(Two three-hour sessions)
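My note: the gather-then-visualize loop can be as simple as logging readings to a CSV file that Tableau Public imports directly. A sketch, assuming a DHT22 temperature/humidity sensor and the Adafruit_DHT library (the sensor model, GPIO pin, and file name are all assumptions):

```python
# Sketch: log temperature/humidity from a DHT22 on a Raspberry Pi to a CSV
# that Tableau Public can import. Assumes `pip install Adafruit_DHT` and a
# DHT22 sensor wired to GPIO pin 4 (sensor, pin, and filename are assumptions).
import csv
import time
from datetime import datetime

import Adafruit_DHT

SENSOR, PIN = Adafruit_DHT.DHT22, 4

with open("readings.csv", "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "temperature_c", "humidity_pct"])
    while True:
        humidity, temperature = Adafruit_DHT.read_retry(SENSOR, PIN)
        if humidity is not None and temperature is not None:
            writer.writerow([datetime.now().isoformat(),
                             round(temperature, 1), round(humidity, 1)])
            f.flush()       # keep the file current during long logging runs
        time.sleep(60)      # one reading per minute
```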
32. Enabling Social Media Research and Archiving
Social media data represents a tremendous opportunity for memory institutions of all kinds, be they large academic research libraries or small community archives. Researchers from a broad swath of disciplines have a great deal of interest in working with social media content, but they often lack access to datasets or the technical skills needed to create them. Further, it is clear that social media is already a crucial part of the historical record in areas ranging from events in your local community to national elections. But attempts to build archives of social media data are largely nascent. This workshop will be both an introduction to collecting data from the APIs of social media platforms, as well as a discussion of the roles of libraries and archives in that collecting.
Assuming no prior experience, the workshop will begin with an explanation of how APIs operate. We will then focus specifically on the Twitter API, as Twitter is of significant interest to researchers and hosts an important segment of discourse. Through a combination of hands-on exercises and demos, we will gain experience with a number of tools that support collecting social media data (e.g., Twarc, Social Feed Manager, DocNow, Twurl, and TAGS), as well as tools that enable sharing social media datasets (e.g., Hydrator, TweetSets, and the Tweet ID Catalog).
The workshop will then turn to a discussion of how to build a successful program enabling social media collecting at your institution. This might cover a variety of topics including outreach to campus researchers, collection development strategies, the relationship between social media archiving and web archiving, and how to get involved with the social media archiving community. This discussion will be framed by a focus on ethical considerations of social media data, including privacy and responsible data sharing.
Time permitting, we will provide a sampling of some approaches to social media data analysis, including Twarc Utils and Jupyter Notebooks.
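My note: of the tools mentioned above, Twarc is probably the quickest way in from Python. A minimal sketch of collecting search results as line-oriented JSON, ready for the sharing and analysis tools listed earlier (the API keys come from a Twitter developer account; the search term is just an example):

```python
# Minimal twarc sketch: collect tweets matching a search into a .jsonl file,
# one JSON tweet per line. Keys come from a Twitter developer account; the
# search term is just an example.
import json

from twarc import Twarc    # pip install twarc

t = Twarc("CONSUMER_KEY", "CONSUMER_SECRET",
          "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

with open("tweets.jsonl", "w") as out:
    for tweet in t.search("#code4lib"):
        out.write(json.dumps(tweet) + "\n")
```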
Augmented reality can be described as experiencing the real world with an overlay of additional computer generated content. In contrast, virtual reality immerses a user in an entirely simulated environment, while mixed or merged reality blends real and virtual worlds in ways through which the physical and the digital can interact. AR, VR, and MR offer new opportunities to create a psychological sense of immersive presence in an environment that feels real enough to be viewed, experienced, explored, and manipulated. These technologies have the potential to democratize learning by giving everyone access to immersive experiences that were once restricted to relatively few learners.
In Grinnell College’s Immersive Experiences Lab http://gciel.sites.grinnell.edu/, teams of faculty, staff, and students collaborate on research projects, then use 3D, VR, and MR technologies as a platform to synthesize and present their findings.
Downsides:
There is relatively little research about the most effective ways to use these technologies as instructional tools. Combined, these factors can be disincentives for institutions to invest in the equipment, facilities, and staffing that can be required to support these systems. AR, VR, and MR technologies raise concerns about personal privacy and data security. Further, at least some of these tools and applications currently fail to meet accessibility standards. The user experience in some AR, VR, and MR applications can be intensely emotional and even disturbing (my note: but can also be used for empathy literacy).
These technologies can immerse users in recreated, remote, or even hypothetical environments as small as a molecule or as large as a universe, allowing learners to experience “reality” from multiple perspectives.
Free eBook: Incorporate Accessibility into Your eLearning
Accessibility from the Ground Up, by Pamela S. Hogle, offers dozens of tips for building eLearning content that more learners can use in additional ways.
Build in ease of access and assistance for people who are new to technology, are English-language learners, have trouble distinguishing colors, need high-contrast designs, are hard of hearing, or use screen readers to navigate or access text—all learners will benefit. Accessible content is easier for everyone to use: it is clear and easy to navigate.
A learning management system (LMS) is never the solution to every problem in education. Edtech is just one part of the whole learning ecosystem and student experience.
Therefore, the next generation digital learning environment (NGDLE), as envisioned by EDUCAUSE in 2015 … Looking at the NGDLE requirements from an LMS perspective, I view the NGDLE as being about five areas: interoperability; personalization; analytics, advising, and learning assessment; collaboration; accessibility and universal design.
Interoperability
Content can easily be exchanged between systems.
Users are able to leverage the tools they love, including discipline-specific apps.
Learning data is available to trusted systems and people who need it.
The learning environment is “future proof” so that it can adapt and extend as the ecosystem evolves.
Personalization
The learning environment reflects individual preferences.
Departments, divisions, and institutions can be autonomous.
Instructors teach the way they want and are not constrained by the software design.
There are clear, individual learning paths.
Students have choice in activity, expression, and engagement.
Analytics, Advising, and Learning Assessment
Learning analytics helps to identify at-risk students, course progress, and adaptive learning pathways.
The learning environment enables integrated planning and assessment of student performance.
More data is made available, with greater context around the data.
The learning environment supports platform and data standards.
Collaboration
Individual spaces persist after courses and after graduation.
Learners are encouraged as creators and consumers.
Courses include public and private spaces.
Accessibility and Universal Design
Accessibility is part of the design of the learning experience.
The learning environment enables adaptive learning and supports different types of materials.
Learning design includes measurement rubrics and quality control.
The core analogy used in the NGDLE paper is that each component of the learning environment is a Lego brick:
The days of the LMS as a “walled garden” app that does everything are over.
Today many kinds of amazing learning and collaboration tools (Lego bricks) should be accessible to educators.
We have standards that let these tools (including an LMS) talk to each other. That is, all bricks share some properties that let them fit together.
Students and teachers sign in once to this “ecosystem of bricks.”
The bricks share results and data.
These bricks fit together; they can be interchanged and swapped at will, with confidence that the learning experience will continue uninterrupted.
Any “next-gen” attempt to completely rework the pedagogical model and introduce a “mash-up of whatever” to fulfill this model would fall victim to the same criticisms levied at the LMS today: there is too little time and training to expect faculty to figure out the nuances of implementation on their own.
The Lego metaphor works only if we’re talking about “old school” Lego design — two-, three-, and four-post bricks that neatly fit together. Modern edtech is a lot more like modern Lego. There are wheels and rocket launchers and belts and all kinds of amazing pieces that work well with each other, but only when they are configured properly. A user cannot simply stick together different pieces and assume they will work harmoniously in creating an environment through which each student can be successful.
As the NGDLE paper states: “Despite the high percentages of LMS adoption, relatively few instructors use its more advanced features — just 41% of faculty surveyed report using the LMS ‘to promote interaction outside the classroom.'”
But this is what the next generation LMS is good at: being a central nervous system — or learning hub — through which a variety of learning activities and tools are used. This is also where the LMS needs to go: bringing together and making sense of all the amazing innovations happening around it. This is much harder to do, perhaps even impossible, if all the pieces involved are just bricks without anything to orchestrate them or to weave them together into a meaningful, personal experience for achieving well-defined learning outcomes.
Making a commitment to build easy, flexible, and smart technology
Working with colleges and universities to remove barriers to adopting new tools in the ecosystem
Standardizing the vetting of accessibility compliance (the Strategic Nonvisual Access Partner Program from the National Federation of the Blind is a great start)
Advancing standards for data exchange while protecting individual privacy
Building integrated components that work with the institutions using them — learning quickly about what is and is not working well and applying those lessons to the next generation of interoperability standards
Letting people use the tools they love [SIC] and providing more ways for nontechnical individuals (including students) to easily integrate new features into learning activities
My note: something just refused to be accepted at SCSU
Technologists are often very focused on the technology, but the reality is that the more deeply and closely we understand the pedagogy and the people in the institutions — students, faculty, instructional support staff, administrators — the better suited we are to actually making the tech work for them.
++++++++++++++++++++++
Under the Hood of a Next Generation Digital Learning Environment in Progress
The challenge is that although 85 percent of faculty use a campus learning management system (LMS),[1] a recent Blackboard report found that, out of 70,000 courses across 927 North American institutions, 53 percent of LMS usage was classified as supplemental (content-heavy, low interaction) and 24 percent as complementary (one-way communication via content/announcements/gradebook).[2] Only 11 percent were characterized as social, 10 percent as evaluative (heavy use of assessment), and 2 percent as holistic (balanced use of all previous). Our first-year experience (FYE) course required innovating beyond the supplemental course-level LMS to create a more holistic cohort-wide NGDLE in order to fully support the teaching, learning, and student success missions of the program. The key design goals for our NGDLE were to:
Create a common platform that could deliver a standard curriculum and achieve parity in all course sections using existing systems and tools and readily available content
Capture, store, and analyze any generated learner data to support learning assessment, continuous program improvement, and research (see the xAPI sketch after this list)
Develop reports and actionable analytics for administrators, advisors, instructors, and students
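My note: in practice, “capture learner data” usually means emitting standards-based activity statements. As an illustration only (not necessarily what this program built), here is what sending a single xAPI statement to a learning record store could look like; the LRS endpoint, credentials, and IDs are all hypothetical:

```python
# Illustration: send one xAPI ("Experience API") statement to a learning
# record store (LRS) over HTTP Basic auth. The endpoint, credentials, and
# activity IDs are hypothetical, not a specific institution's stack.
import requests

statement = {
    "actor": {"mbox": "mailto:student@example.edu", "name": "Sample Student"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "https://example.edu/fye/module-1",
               "definition": {"name": {"en-US": "FYE Module 1"}}},
}

resp = requests.post(
    "https://lrs.example.edu/xapi/statements",    # hypothetical LRS endpoint
    json=statement,
    auth=("lrs_user", "lrs_password"),
    headers={"X-Experience-API-Version": "1.0.3"},
)
resp.raise_for_status()
print("stored statement id:", resp.json()[0])     # the LRS returns a list of ids
```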
Whether you’re flipping your courses, creating videos to help your students understand specific concepts or recording lectures for exam review, these tips can help you optimize your production setup on a tight budget.
1) Speak Into the Microphone
2) Reconsider Whether You Want to be a Talking Head
Record your video and upload it to YouTube. YouTube will apply its machine transcription to the audio as a starting point. Then you can download the captions into your caption editor and improve on the captions from there. Afterward, you can delete the video from YouTube and add it to your institution’s platform.
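My note: the download step can be scripted. A sketch using youtube-dl’s Python interface to fetch only the machine-generated caption track for later editing (assumes `pip install youtube_dl`; the video URL is a placeholder):

```python
# Sketch: grab only YouTube's auto-generated captions for a video, skipping
# the video download itself. The URL is a placeholder.
import youtube_dl    # pip install youtube_dl

opts = {
    "skip_download": True,        # captions only, no video file
    "writeautomaticsub": True,    # the machine-generated track
    "subtitleslangs": ["en"],
    "subtitlesformat": "vtt",     # WebVTT, accepted by most caption editors
    "outtmpl": "%(title)s.%(ext)s",
}

with youtube_dl.YoutubeDL(opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])
```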