disrupting education with technology

Nancy Bailey: Disrupting Education with Technology is Unhealthy for Children

https://dianeravitch.net/2018/11/18/nancy-bailey-disrupting-education-with-technology-is-unhealthy-for-children/

Veteran teacher Nancy Bailey warns about the danger that technology poses to child development. 

Technology is a helpful tool, but it won’t provide that sense of stability. It’s a cold machine. School districts push technology over teachers. They don’t stop to think about what it will mean to children and their development.

the idea that instruction should be disrupted using technology is putting students and the country at risk. It destroys the public school curriculum that has managed to educate the masses for decades.

Early childhood teachers express concern that tech is invading preschool education. We know that free play is the heart of learning.

But programs like Waterford Early Learning advertise online instruction including assessment for K-2. Their Upstart program advertises an “at-home, online kindergarten readiness program that gives 4- and 5-year-old children early reading, math, and science lessons.”

++++++++++++++++++
more on technology in education in this IMS blog
https://blog.stcloudstate.edu/ims?s=technology+education

more on Clay Christensen disruption theory in this IMS blog:
https://blog.stcloudstate.edu/ims/2016/12/19/clayton-christensen-disruption-theory/

sound and the brain

What Types of Sound Experiences Enable Children to Learn Best?

https://www.kqed.org/mindshift/46824/what-types-of-sound-experiences-enable-children-to-learn-best
At Northwestern’s Auditory Neuroscience Lab, Kraus and colleagues measure how the brain responds when various sounds enter the ear. They’ve found that the brain reacts to sound in microseconds, and that brain waves closely resemble the sound waves.
Making sense of sound is one of the most “computationally complex” functions of the brain, Kraus said, which explains why so many language and other disorders, including autism, reveal themselves in the way the brain processes sound. The way the brain responds to the “ingredients” of sound—pitch, timing and timbre—is a window into brain health and learning ability.

Practical suggestions for creating space for “activities that promote sound-to-meaning development,” whether at home or in school:
Reduce noise. Chronic background noise is associated with several auditory and learning problems: it contributes to “neural noise,” wherein brain neurons fire spontaneously in the absence of sound; it reduces the brain’s sensitivity to sound; and it slows auditory growth.
Read aloud. Even before kids are able to read themselves, hearing stories told by others develops vocabulary and builds working memory; to understand how a story unfolds, listeners need to remember what was said before.
Encourage children to play a musical instrument. “There is an explicit link between making music and strengthening language skills, so that keeping music education at the center of curricula can pay big dividends for children’s cognitive, emotional, and educational health.” Two years of music instruction in elementary and even secondary school can trigger biological changes in how the brain processes sound, which in turn affects language development.
Listen to audiobooks and podcasts. Well-told stories can draw kids in and build attention skills and working memory. The number and quality of these recordings has exploded in recent years, making it that much easier to find a good fit for individuals and classes.
Support learning a second language. Growing up in a bilingual environment causes a child’s brain to manage two languages at once.
Avoid white noise machines. In an effort to soothe children to sleep, some parents set up sound machines in bedrooms. These devices, which emit “meaningless sound,” as Kraus put it, can interfere with how the brain develops sound-processing circuitry.
Use the spread of technology to your advantage. Rather than bemoan the constant bleeping and chirping of everyday life, much of it the result of technological advances, welcome the new sound opportunities these developments provide. Technologies that shrink the globalized world enable second-language learning.

++++++++++++
More on the brain in this IMS blog
https://blog.stcloudstate.edu/ims?s=brain

coding ethics unpredictability

Franken-algorithms: the deadly consequences of unpredictable code

by  Thu 30 Aug 2018 

https://www.theguardian.com/technology/2018/aug/29/coding-algorithms-frankenalgos-program-danger

Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again”, they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and inured to real time monitoring as they can be – are in principle predictable and interrogable, these ones are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call these “frankenalgos” – though Mary Shelley couldn’t have made this up.
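The interaction problem shows up even before any learning is involved. A toy sketch in Python (hypothetical sellers, not from the article, with multipliers like those reported in the well-known Amazon “Making of a Fly” book-pricing incident): each repricing rule is individually simple and predictable, yet the coupled pair runs away.

```python
# Toy model of two interacting repricing algorithms.
# Each rule alone is trivially predictable; neither says "explode",
# yet the coupled system does. (Hypothetical sellers.)

def seller_a(rival_price: float) -> float:
    # undercut the rival very slightly
    return round(rival_price * 0.9983, 2)

def seller_b(rival_price: float) -> float:
    # price just above the rival (e.g. relying on better reputation)
    return round(rival_price * 1.27059, 2)

price_a = price_b = 35.00
for day in range(40):  # 40 repricing cycles
    price_a = seller_a(price_b)
    price_b = seller_b(price_a)
# after 40 cycles both prices are in the hundreds of thousands
```

Each seller’s rule is a fixed, “dumb” algorithm; unpredictability emerges only from the interaction, which is the article’s point in miniature.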

Twenty years ago, George Dyson anticipated much of what is happening today in his classic book Darwin Among the Machines. The problem, he tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable – and that what is predictable can be controlled. Both assumptions turn out to be wrong.

“It’s proceeding on its own, in little bits and pieces,” he says. “What I was obsessed with 20 years ago that has completely taken over the world today are multicellular, metazoan digital organisms, the same way we see in biology, where you have all these pieces of code running on people’s iPhones, and collectively it acts like one multicellular organism.

“There’s this old law called Ashby’s law that says a control system has to be as complex as the system it’s controlling, and we’re running into that at full speed now, with this huge push to build self-driving cars where the software has to have a complete model of everything, and almost by definition we’re not going to understand it. Because any model that we understand is gonna do the thing like run into a fire truck ’cause we forgot to put in the fire truck.”

Walsh believes this makes it more, not less, important that the public learn about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect. When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting: “I would suggest the problem is that algorithm now means any large, complex decision making software system and the larger environment in which it is embedded, which makes them even more unpredictable.” A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in tech, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs. Where there are choices to be made, that’s where ethics comes in.

our existing system of tort law, which requires proof of intention or negligence, will need to be rethought. A dog is not held legally responsible for biting you; its owner might be, but only if the dog’s action is thought foreseeable.

model-based programming, in which machines do most of the coding work and are able to test as they go.

As we wait for a technological answer to the problem of soaring algorithmic entanglement, there are precautions we can take. Paul Wilmott, a British expert in quantitative analysis and vocal critic of high frequency trading on the stock market, wryly suggests “learning to shoot, make jam and knit”.

The venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine’s Hippocratic oath, to instruct computing professionals to do no harm and consider the wider impacts of their work.

+++++++++++
more on coding in this IMS blog
https://blog.stcloudstate.edu/ims?s=coding

AI and China education

China’s children are its secret weapon in the global AI arms race

China wants to be the world leader in artificial intelligence by 2030. To get there, it’s reinventing the way children are taught

despite China’s many technological advances, in this new cyberspace race, the West had the lead.

Xi knew he had to act. Within twelve months he revealed his plan to make China a science and technology superpower. By 2030 the country would lead the world in AI, with a sector worth $150 billion. How? By teaching a generation of young Chinese to be the best computer scientists in the world.

Today, the US tech sector has its pick of the finest minds from across the world, importing top talent from other countries – including from China. Over half of Bay Area workers are highly-skilled immigrants. But with the growth of economies worldwide and a Presidential administration hell-bent on restricting visas, it’s unclear that approach can last.

In the UK the situation is even worse. Here, the government predicts there’ll be a shortfall of three million employees for high-skilled jobs by 2022 – even before you factor in the immigration crunch of Brexit. By contrast, China is plotting a homegrown strategy of local and national talent development programs. It may prove a masterstroke.

In 2013 the city’s teenagers gained global renown when they topped the charts in the PISA tests administered every three years by the OECD to see which country’s kids are the smartest in the world. Aged 15, Shanghai students were on average three full years ahead of their counterparts in the UK or US in maths and one-and-a-half years ahead in science.

Teachers, too, were expected to be learners. Unlike in the UK, where, when I began to teach a decade ago, you might be working on full-stops with eleven-year-olds then taking eighteen-year-olds through the finer points of poetry, teachers in Shanghai specialised not only in a subject area, but also an age-group.

Shanghai’s success owed a lot to Confucian tradition, but it fitted precisely the best contemporary understanding of how expertise is developed. In his book Why Don’t Kids Like School? cognitive scientist Dan Willingham explains that complex mental skills like creativity and critical thinking depend on our first having mastered the simple stuff. Memorisation and repetition of the basics serve to lay down the neural architecture that creates automaticity of thought, ultimately freeing up space in our working memory to think big.

Seung-bin Lee, a seventeen-year-old high school graduate, told me of studying fourteen hours a day, seven days a week, for the three years leading up to the Suneung, Korea’s fearsome SAT-style exam taken by all school leavers on a single Thursday each November, for which all flights are grounded so as not to break students’ concentration during the 45 minutes of the English listening paper.
Korea’s childhoods were being lost to a relentless regime of studying, crushed in a top-down system that saw them as cyphers rather than kids.

A decade ago, we consoled ourselves that although kids in China and Korea worked harder and did better on tests than ours, it didn’t matter. They were compliant, unthinking drones, lacking the creativity, critical thinking or entrepreneurialism needed to succeed in the world. No longer. Though there are still issues with Chinese education – urban centres like Shanghai and Hong Kong are positive outliers – the country knows something that we once did: education is the one investment on which a return is guaranteed. China is on course to becoming the first education superpower.

Troublingly, where education in the UK and US has been defined by creativity and independent thinking – Shanghai teachers told me of visits to our schools to learn about these qualities – our direction of travel is now away from those strengths and towards exams and standardisation, with school-readiness tests in the pipeline and UK schools minister Nick Gibb suggesting kids can beat exam stress by sitting more of them. Centres of excellence remain, but increasingly, it feels, we’re putting our children at risk of losing out to the robots, while China is building on its strong foundations to ask how its young people can be high-tech pioneers. They’re thinking big – we’re thinking of test scores.

soon “digital information processing” would be included as a core subject on China’s national graduation exam – the Gaokao – and pictured classrooms in which students would learn in cross-disciplinary fashion, designing mobile phones for example, in order to develop design, engineering and computing skills. Focusing on teaching kids to code was short-sighted, he explained. “We still regard it as a language between human and computer.” (My note: they are practically implementing Finland’s attempt to rebuild curricula)

“If your plan is for one year,” went an old Chinese saying, “plant rice. If your plan is for ten years, plant trees. If your plan is for 100 years, educate children.” Two and a half thousand years later, chancellor Guan Zhong might update his proverb, swapping rice for bitcoin and trees for artificial intelligence, but I’m sure he’d stand by his final point.

+++++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=artificial+intelligence

more on China education in this IMS blog
https://blog.stcloudstate.edu/ims/2018/01/06/chinas-transformation-of-higher-education/

practical about VR and AR in schools

Beyond the Hype: 5 Ways to Think About Virtual and Augmented Reality in Schools

By Jenny Abamu     Feb 7, 2017

https://www.edsurge.com/news/2017-02-07-beyond-the-hype-5-ways-to-think-about-virtual-and-augmented-reality-in-schools

1. Ask Yourself: Why VR or AR?

AR and VR are mediums for the transmission of information, and many people will judge these mediums by the content produced within them. For educators seeking buy-in from administrators and other colleagues, it is critical to justify why their content requires new reality media.

2. Just Dive In

VR and AR sit on the Gartner Hype Cycle’s “slope of enlightenment”—meaning the technology is just entering public acceptance.

Given the newness of these mediums, it is no surprise that few curricular resources exist to support courses around VR and AR. Professional development sessions on new reality tools are almost non-existent, which means educators seeking to use virtual or augmented reality simply need to dive into the subjects.

3. Go Beyond Storytelling

Studies using VR demonstrate the ‘Proteus Effect’—taking on the psychology of inhabiting a different body and unconsciously changing our behavior to conform to it (learning empathy through VR)

4. Master the Machines

“The equipment matters. If there is a latency between the computer and the VR set, that can cause a lot of problems.”
With VR equipment ranging from about $15 to $600, educators will have to check the budget or start writing grant proposals to gain access to the higher-quality machines.

5. Understand Your Students’ Needs

described as a “quantum shift” in the way we interact, learn and experience.

+++++++++++++
more on VR and AR in schools in this IMS blog
https://blog.stcloudstate.edu/ims?s=virtual+rality+education

challenges ed leaders technology

The Greatest Challenge Facing School Leaders in a Digital World

By Scott McLeod     Oct 29, 2017

https://www.edsurge.com/news/2017-10-29-the-greatest-challenge-facing-school-leaders-in-a-digital-world

the Center for the Advanced Study of Technology Leadership in Education – CASTLE

Vision

If a school’s reputation and pride are built on decades or centuries of “this is how we’ve always done things here,” resistance from staff, parents, and alumni to significant changes may be fierce. In such institutions, heads of school may have to steer carefully between deeply ingrained habits and the need to modernize the information tools with which students and faculty work

Too often, when navigating faculty or parental resistance, school leaders and technology staff make reassurances that things will not have to change much in the classroom or that slow baby steps are OK. Unfortunately, this results in a different problem, which is that schools have now invested significant money, time, and energy into digital technologies but are using them sparingly and seeing little impact. In such schools, replicative uses of technology are quite common, but transformative uses that leverage the unique affordances of technology are quite rare.

many schools fail to proceed further because they don’t have a collective vision of what more transformative uses of technology might look like, nor do they have a shared understanding of and commitment to what it will take to get to such a place. As a result, faculty instruction and the learning experiences of students change little or not at all.

These schools have taken the time to involve all stakeholders—including students—in substantive conversations about what digital tools will allow them to do differently compared with previous analog practices. Their visions promote the potential of computing devices to facilitate all of those elements we now think of as essential 21st-century capacities: confidence, curiosity, enthusiasm, passion, critical thinking, problem-solving, and self-direction. Technology doesn’t simply support traditional teaching—it transforms it for deeper thinking and gives students more agency over their own learning.

Fear

Another prevalent issue preventing technology change in schools is fear—fear of change, of the unknown, of letting go of what we know best, of being learners again. But it’s also a fear of letting kids have wide access to the Internet with the possibility of cyberbullying, access to inappropriate material, and exposure to online predators or even excessive advertising. Fears, of course, need to be surfaced and addressed.

The fear drives some schools to ban cellphones, disallow students and faculty from using Facebook, and lock down Internet filters so tightly that useful websites are inaccessible. They prohibit the use of Twitter and YouTube, and they block blogs. Some educators see these types of responses as principled stands against the shortcomings and hassles of digital technologies. Others see them as rejections of the dehumanization of the education process by soulless machines. Often, however, it’s just schools clinging to the past and elevating what is comfortable or familiar over the potential of technology to help them better deliver on their school missions.

Heads of school don’t have to be skilled users themselves to be effective technology leaders, but they do have to exercise appropriate oversight and convey the message—repeatedly—that frequent, meaningful technology use in school is both important and expected. Nostalgia aside, there is no foreseeable future in which the primacy of printed text is not superseded by electronic text and multimedia. When nearly all information is digital or online, multi-modal and multimedia, accessed by mobile devices that fit in our pockets, the question should not be whether schools prepare students for a digital learning landscape, but rather how.

Control

Many educators aren’t necessarily afraid of technology, but they are so accustomed to heavily teacher-directed classrooms that they are leery about giving up control—and can’t see the value in doing so.

Although most of us recognize that mobile computers connected to the Internet may be the most powerful learning devices yet invented—and that youth are learning in powerful ways at home with these technologies—allowing students to have greater autonomy and ownership of the learning process can still seem daunting and questionable.

The “beyond” is particularly important. When we give students some voice in and choice about what and how they learn, we honor basic human needs for autonomy, we enhance students’ interest and engagement, and we truly actualize our missions of preparing lifelong learners.

The goal of instructional transformation is to empower students, not to disempower teachers. While instructor unfamiliarity with digital technologies, inquiry- or problem-based teaching techniques, or deeper learning strategies may result in some initial discomfort, these challenges can be overcome with robust support.

Support

A few workshops here and there rarely result in large-scale changes in implementation.

teacher-driven “unconferences” or “edcamps,” at which educators propose and facilitate discussion topics, can be powerful mechanisms for fostering professional dialogue and learning. Similarly, some schools offer voluntary “Tech Tuesdays” or “appy hours” to foster digital learning among interested faculty.

In addition to existing IT support, technology integration staff, or librarians/media specialists, some schools have student technology teams that are on call for assistance when needed.

A few middle schools and high schools go even further and assign teachers their own individual student technology mentors. These student-teacher pairings last all school year and comprise the first line of support for educators’ technology questions.

As teachers, heads of school, counselors, coaches, and librarians, we all now have the ability to participate in ongoing, virtual, global communities of practice.

Whether formal or informal, the focus of technology-related professional learning should be on student learning, not on the tools or devices. Independent school educators should always ask, “Technology for the purpose of what?” when considering the inclusion of digital technologies into learning activities. Technology never should be implemented just for technology’s sake.

++++++++++++
more on digital literacy for EDAD in this IMS blog
https://blog.stcloudstate.edu/ims?s=digital+literacy+edad

open access symposium 2018 digital libraries

The ACM/IEEE Joint Conference on Digital Libraries in 2018 (JCDL 2018:
https://2018.jcdl.org/) will be held in conjunction with UNT Open Access
Symposium 2018 (https://openaccess.unt.edu/symposium/2018) on June 3 – 6, 2018
in Fort Worth, Texas, the rustic and artistic threshold into the American
West. JCDL welcomes interesting submissions ranging across theories, systems,
services, and applications. We invite those managing, operating, developing,
curating, evaluating, or utilizing digital libraries broadly defined, covering
academic or public institutions, including archives, museums, and social
networks. We seek involvement of those in iSchools, as well as working in
computer or information or social sciences and technologies. Multiple tracks
and sessions will ensure tailoring to researchers, practitioners, and diverse
communities including data science/analytics, data curation/stewardship,
information retrieval, human-computer interaction, hypertext (and Web/network
science), multimedia, publishing, preservation, digital humanities, machine
learning/AI, heritage/culture, health/medicine, policy, law, and privacy/
intellectual property.

General Instructions on submissions of full papers, short papers, posters and
demonstrations, doctoral consortium, tutorials, workshops, and panels can be
found at https://2018.jcdl.org/general_instructions. Below are the submission
deadlines:

• Jan. 15, 2018 – Tutorial and workshop proposal submissions
• Jan. 15, 2018 – Full paper and short paper submissions
• Jan. 29, 2018 – Panel, poster and demonstration submissions
• Feb. 1, 2018 – Notification of acceptance for tutorials and workshops
• Mar. 8, 2018 – Notification of acceptance for full papers, short papers,
panels, posters, and demonstrations
• Mar. 25, 2018 – Doctoral Consortium abstract submissions
• Apr. 5, 2018 – Notification of acceptance for Doctoral Consortium
• Apr. 15, 2018 – Final camera-ready deadline for full papers, short papers,
panels, posters, and demonstrations

Please email jcdl2018@googlegroups.com if you have any questions.

code4lib 2018

Code4Lib February 2018

http://2018.code4lib.org/

2018 Preconference Voting

10. The Virtualized Library: A Librarian’s Introduction to Docker and Virtual Machines
This session will introduce two major types of virtualization, virtual machines using tools like VirtualBox and Vagrant, and containers using Docker. The relative strengths and drawbacks of the two approaches will be discussed along with plenty of hands-on time. Though geared towards integrating these tools into a development workflow, the workshop should be useful for anyone interested in creating stable and reproducible computing environments, and examples will focus on library-specific tools like Archivematica and EZPaarse. With virtualization taking a lot of the pain out of installing and distributing software, alleviating many cross-platform issues, and becoming increasingly common in library and industry practices, now is a great time to get your feet wet.

(One three-hour session)
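My note: for a flavor of the container half of this session, a Dockerfile only a few lines long is enough to describe a reproducible environment (a generic sketch with a hypothetical script name, not from the workshop materials):

```dockerfile
# Build a small, reproducible Python environment as a container image
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "harvest_records.py"]
```

Anyone with Docker can rebuild and run the identical environment with `docker build -t myapp .` followed by `docker run myapp`, which is what makes containers attractive for distributing library tools like Archivematica.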

11. Digital Empathy: Creating Safe Spaces Online
User research is often focused on measures of the usability of online spaces. We look at search traffic, run card sorting and usability testing activities, and track how users navigate our spaces. Those results inform design decisions through the lens of information architecture. This is important, but doesn’t encompass everything a user needs in a space.

This workshop will focus on the other component of user experience design and user research: how to create spaces where users feel safe. Users bring their anxieties and stressors with them to our online spaces, but informed design choices can help to ameliorate that stress. This will ultimately lead to a more positive interaction between your institution and your users.

The presenters will discuss the theory behind empathetic design, delve deeply into using ethnographic research methods – including an opportunity for attendees to practice those ethnographic skills with student participants – and finish with the practical application of these results to ongoing and future projects.

(One three-hour session)

14. ARIA Basics: Making Your Web Content Sing Accessibility

https://dequeuniversity.com/assets/html/jquery-summit/html5/slides/landmarks.html
Are you a web developer or create web content? Do you add dynamic elements to your pages? If so, you should be concerned with making those dynamic elements accessible and usable to as many as possible. One of the most powerful tools currently available for making web pages accessible is ARIA, the Accessible Rich Internet Applications specification. This workshop will teach you the basics for leveraging the full power of ARIA to make great accessible web pages. Through several hands-on exercises, participants will come to understand the purpose and power of ARIA and how to apply it for a variety of different dynamic web elements. Topics will include semantic HTML, ARIA landmarks and roles, expanding/collapsing content, and modal dialog. Participants will also be taught some basic use of the screen reader NVDA for use in accessibility testing. Finally, the lessons will also emphasize learning how to keep on learning as HTML, JavaScript, and ARIA continue to evolve and expand.

Participants will need a basic background in HTML, CSS, and some JavaScript.

(One three-hour session)
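My note: as a taste of what ARIA landmarks do, here is a small stdlib-Python sketch (my own illustration, not workshop material) that scans an HTML fragment for landmark roles — the attributes screen readers like NVDA use to let users jump between page regions:

```python
# Scan an HTML fragment for ARIA landmark roles using only the
# standard library's html.parser.
from html.parser import HTMLParser

LANDMARK_ROLES = {"banner", "navigation", "main", "search",
                  "complementary", "contentinfo", "form", "region"}

class LandmarkScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.landmarks = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        role = dict(attrs).get("role")
        if role in LANDMARK_ROLES:
            self.landmarks.append((tag, role))

fragment = """
<div role="banner"><h1>My Library</h1></div>
<div role="navigation"><a href="/catalog">Catalog</a></div>
<div role="main"><p>Search results...</p></div>
"""
scanner = LandmarkScanner()
scanner.feed(fragment)
# scanner.landmarks now lists each landmark a screen reader could jump to
```

A page whose scanner output is empty is exactly the kind of page this workshop teaches you to fix.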

18. Learning and Teaching Tech
Tech workshops pose two unique problems: finding skilled instructors for that content, and instructing that content well. Library hosted workshops are often a primary educational resource for solo learners, and many librarians utilize these workshops as a primary outreach platform. Tackling these two issues together often makes the most sense for our limited resources. Whether a programming language or software tool, learning tech to teach tech can be one of the best motivations for learning that tech skill or tool, but equally important is to learn how to teach and present tech well.

This hands-on workshop will guide participants through developing their own learning plan, reviewing essential pedagogy for teaching tech, and crafting a workshop of their choice. Each participant will leave with an actionable learning schedule, a prioritized list of resources to investigate, and an outline of a workshop they would like to teach.

(Two three-hour sessions)

23. Introduction to Omeka S
Omeka S represents a complete rewrite of Omeka Classic (aka the Omeka 2.x series), adhering to our fundamental principles of encouraging use of metadata standards, easy web publishing, and sharing cultural history. New objectives in Omeka S include multisite functionality and increased interaction with other systems. This workshop will compare and contrast Omeka S with Omeka Classic to highlight our emphasis on 1) modern metadata standards, 2) interoperability with other systems including Linked Open Data, 3) use of modern web standards, and 4) web publishing to meet the goals of medium- to large-sized institutions.

In this workshop we will walk through Omeka S Item creation, with emphasis on LoD principles. We will also look at the features of Omeka S that ease metadata input and facilitate project-defined usage and workflows. In accordance with our commitment to interoperability, we will describe how the API for Omeka S can be deployed for data exchange and sharing between many systems. We will also describe how Omeka S promotes multiple site creation from one installation, in the interest of easy publishing with many objects in many contexts, and simplifying the work of IT departments.

(One three-hour session)

24. Getting started with static website generators
Have you been curious about static website generators? Have you been wondering who Jekyll and Hugo are? Then this workshop is for you.

My note: https://opensource.com/article/17/5/hugo-vs-jekyll

But this article isn’t about setting up a domain name and hosting for your website. It’s for the step after that, the actual making of that site. The typical choice for a lot of people would be to use something like WordPress. It’s a one-click install on most hosting providers, and there’s a gigantic market of plugins and themes available to choose from, depending on the type of site you’re trying to build. But not only is WordPress a bit overkill for most websites, it also gives you a dynamically generated site with a lot of moving parts. If you don’t keep all of those pieces up to date, they can pose a significant security risk and your site could get hijacked.

The alternative would be to have a static website, with nothing dynamically generated on the server side. Just good old HTML and CSS (and perhaps a bit of Javascript for flair). The downside to that option has been that you’ve been relegated to coding the whole thing by hand yourself. It’s doable, but you just want a place to share your work. You shouldn’t have to know all the idiosyncrasies of low-level web design (and the monumental headache of cross-browser compatibility) to do that.

Static website generators are tools used to build a website made up only of HTML, CSS, and JavaScript. Static websites, unlike dynamic sites built with tools like Drupal or WordPress, do not use databases or server-side scripting languages. Static websites have a number of benefits over dynamic sites, including reduced security vulnerabilities, simpler long-term maintenance, and easier preservation.

In this hands-on workshop, we’ll start by exploring static website generators, their components, some of the different options available, and their benefits and disadvantages. Then we’ll work on making our own sites and, for those who would like to, get them online with GitHub Pages. Familiarity with HTML, git, and command line basics will be helpful but is not required.
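The core idea of a static site generator, content files pushed through a template into plain HTML, fits in a few lines. This is only an illustrative toy, not Jekyll or Hugo (which use richer templating like Liquid or Go templates); here the first line of each .txt file is treated as the page title:

```python
import pathlib
import string

# A single page template; $title and $body are filled per page.
TEMPLATE = string.Template(
    "<!doctype html><html><head><title>$title</title></head>"
    "<body><h1>$title</h1>$body</body></html>"
)

def build_site(content_dir, output_dir):
    """Render every .txt file in content_dir to a static .html page.

    No database, no server-side code: the output is plain HTML that
    any web server (or GitHub Pages) can host as-is.
    """
    out = pathlib.Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    for src in pathlib.Path(content_dir).glob("*.txt"):
        title, _, body = src.read_text().partition("\n")
        html = TEMPLATE.substitute(title=title, body=f"<p>{body.strip()}</p>")
        (out / f"{src.stem}.html").write_text(html)
```

Rebuilding the whole site after an edit is just re-running the function, which is exactly the deploy model GitHub Pages expects.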

(One three-hour session)

26. Using Digital Media for Research and Instruction
To use digital media effectively in both research and instruction, you need to go beyond mere playback of media files. You need to be able to stream the media, divide that stream into segments, provide descriptive analysis of each segment, order, re-order, and compare segments from the same or different streams, and create websites that can show the results of your analysis. In this workshop, we will use Omeka and several plugins for working with digital media to show the potential of video streaming, segmentation, and descriptive analysis for research and instruction.
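As an illustration of the segmentation and comparison described above (a generic sketch, not the Omeka plugins' actual data model), a segment can be modeled as an annotated time span within a stream; segments then order and compare like any other records:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One annotated span of a media stream (times in seconds)."""
    stream: str
    start: float
    end: float
    note: str

    def duration(self):
        return self.end - self.start

# Segments from different streams can be re-ordered side by side,
# e.g. sorted by length for comparison:
clips = [
    Segment("lecture.mp4", 0.0, 12.5, "introduction"),
    Segment("interview.mp4", 30.0, 35.0, "key quote"),
]
clips.sort(key=Segment.duration)
```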

(One three-hour session)

28. Spark in the Dark 101 https://zeppelin.apache.org/
This is an introductory session on Apache Spark, a framework for large-scale data processing (https://spark.apache.org/). We will introduce high-level concepts around Spark, including how Spark execution works and its relationship to other technologies for working with Big Data. Following this introduction to the theory and background, we will walk workshop participants through hands-on usage of spark-shell, Zeppelin notebooks, and Spark SQL for processing library data. The workshop will wrap up with use cases and demos for leveraging Spark within cultural heritage institutions and information organizations, connecting the building blocks learned to current projects in the real world.
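As a taste of the execution model, here is the canonical Spark word-count example sketched in plain local Python (no cluster or Spark installation needed), with the corresponding PySpark RDD calls shown in a comment. Like Spark's lazy transformations, the generator stage does no work until the final aggregation runs:

```python
from collections import Counter
from itertools import chain

# In PySpark this word count would look roughly like:
#   sc.textFile(path).flatMap(str.split) \
#     .map(lambda w: (w, 1)).reduceByKey(add).collect()
# Below, each stage has a local stand-in.

def word_count(lines):
    # flatMap: split each line and flatten into one lazy word stream
    words = chain.from_iterable(line.split() for line in lines)
    # map + reduceByKey, collapsed into one local aggregation
    return Counter(words)

records = ["to be or not to be", "be spark be"]
counts = word_count(records)
```

Spark's value is that the same shape of computation runs unchanged over terabytes spread across many machines.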

(One three-hour session)

29. Introduction to Spotlight https://github.com/projectblacklight/spotlight
http://www.spotlighttechnology.com/4-OpenSource.htm
Spotlight is an open source application that extends the digital library ecosystem by providing a means for institutions to reuse digital content in easy-to-produce, attractive, and scholarly-oriented websites. Librarians, curators, and other content experts can build Spotlight exhibits to showcase digital collections using a self-service workflow for selection, arrangement, curation, and presentation.

This workshop will introduce the main features of Spotlight and present examples of Spotlight-built exhibits from the community of adopters. We’ll also describe the technical requirements for adopting Spotlight and highlight the potential for adopters to customize and extend Spotlight’s capabilities for their own needs while contributing to its growth as an open source project.

(One three-hour session)

31. Getting Started Visualizing your IoT Data in Tableau https://www.tableau.com/
The Internet of Things is a rising trend in library research. IoT sensors can be used for space assessment, service design, and environmental monitoring. IoT tools create lots of data that can be overwhelming and hard to interpret. Tableau Public (https://public.tableau.com/en-us/s/) is a data visualization tool that allows you to explore this information quickly and intuitively to find new insights.

This full-day workshop will teach you the basics of building your own IoT sensor using a Raspberry Pi (https://www.raspberrypi.org/) in order to gather, manipulate, and visualize your data.

All are welcome, but some familiarity with Python is recommended.
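As a sketch of the gathering step, the following writes timestamped readings to a CSV file that Tableau Public can import directly. The sensor here is simulated so the code runs anywhere; on a Raspberry Pi you would swap in a function that polls the real hardware:

```python
import csv
import random
from datetime import datetime, timedelta

def log_readings(path, start, n, read_sensor=lambda: random.uniform(18.0, 24.0)):
    """Write n timestamped sensor readings, one minute apart, to a CSV.

    read_sensor defaults to simulated room temperatures; on a Pi it
    would poll an actual probe. The resulting CSV loads straight into
    Tableau Public as a time series.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "temperature_c"])
        for i in range(n):
            ts = start + timedelta(minutes=i)
            writer.writerow([ts.isoformat(), round(read_sensor(), 2)])
```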

(Two three-hour sessions)

32. Enabling Social Media Research and Archiving
Social media data represents a tremendous opportunity for memory institutions of all kinds, be they large academic research libraries or small community archives. Researchers from a broad swath of disciplines have a great deal of interest in working with social media content, but they often lack access to datasets or the technical skills needed to create them. Further, it is clear that social media is already a crucial part of the historical record in areas ranging from events in your local community to national elections. But attempts to build archives of social media data are largely nascent. This workshop will be both an introduction to collecting data from the APIs of social media platforms, as well as a discussion of the roles of libraries and archives in that collecting.

Assuming no prior experience, the workshop will begin with an explanation of how APIs operate. We will then focus specifically on the Twitter API, as Twitter is of significant interest to researchers and hosts an important segment of discourse. Through a combination of hands-on exercises and demos, we will gain experience with a number of tools that support collecting social media data (e.g., Twarc, Social Feed Manager, DocNow, Twurl, and TAGS), as well as tools that enable sharing social media datasets (e.g., Hydrator, TweetSets, and the Tweet ID Catalog).
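Most of those tools wrap the same basic pattern: social media APIs return results a page at a time along with a cursor pointing at the next page. A minimal, library-free sketch of that loop (`fetch_page` is a stand-in for a real authenticated API call, which the actual platforms require):

```python
def collect_pages(fetch_page, max_pages=100):
    """Collect items from a cursor-paginated API.

    fetch_page(cursor) must return (items, next_cursor); a next_cursor
    of None signals the last page. max_pages caps the loop so rate
    limits and runaway cursors can't trap the collector.
    """
    items, cursor = [], None
    for _ in range(max_pages):
        page, cursor = fetch_page(cursor)
        items.extend(page)
        if cursor is None:
            break
    return items

# Stub standing in for a real API, serving two pages of "tweets".
def fake_api(cursor):
    pages = {None: (["tweet1", "tweet2"], "page2"),
             "page2": (["tweet3"], None)}
    return pages[cursor]
```

Tools like Twarc add authentication, rate-limit handling, and JSON persistence around essentially this loop.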

The workshop will then turn to a discussion of how to build a successful program enabling social media collecting at your institution. This might cover a variety of topics including outreach to campus researchers, collection development strategies, the relationship between social media archiving and web archiving, and how to get involved with the social media archiving community. This discussion will be framed by a focus on ethical considerations of social media data, including privacy and responsible data sharing.

Time permitting, we will provide a sampling of some approaches to social media data analysis, including Twarc Utils and Jupyter Notebooks.

(One three-hour session)

fake news and video

Computer Scientists Demonstrate The Potential For Faking Video

http://www.npr.org/sections/alltechconsidered/2017/07/14/537154304/computer-scientists-demonstrate-the-potential-for-faking-video

As a team out of the University of Washington explains in a new paper titled “Synthesizing Obama: Learning Lip Sync from Audio,” they’ve made several fake videos of Obama.

++++++++++++++++++++++++++++++++++++++

Fake news: you ain’t seen nothing yet

Generating convincing audio and video of fake events, July 1, 2017

https://www.economist.com/news/science-and-technology/21724370-generating-convincing-audio-and-video-fake-events-fake-news-you-aint-seen

It took only a few days to create the clip on a desktop computer using a generative adversarial network (GAN), a type of machine-learning algorithm.

Faith in written information is under attack in some quarters by the spread of what is loosely known as “fake news”. But images and sound recordings retain for many an inherent trustworthiness. GANs are part of a technological wave that threatens this credibility.

Amnesty International is already grappling with some of these issues. Its Citizen Evidence Lab verifies videos and images of alleged human-rights abuses. It uses Google Earth to examine background landscapes and to test whether a video or image was captured when and where it claims. It uses Wolfram Alpha, a search engine, to cross-reference historical weather conditions against those claimed in the video. Amnesty’s work mostly catches old videos that are being labelled as a new atrocity, but it will have to watch out for generated video, too. Cryptography could also help to verify that content has come from a trusted organisation. Media could be signed with a unique key that only the signing organisation—or the originating device—possesses.
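The signing idea in that last sentence can be sketched with Python's standard library. A real deployment would use public-key signatures, so that anyone can verify without holding the secret; the `hmac` version below is a simplified symmetric stand-in that shows the verify-before-trust workflow:

```python
import hashlib
import hmac

def sign_media(content: bytes, key: bytes) -> str:
    """Produce a tag binding content to the holder of key."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_media(content: bytes, key: bytes, tag: str) -> bool:
    """Check the tag; any edit to the content invalidates it."""
    return hmac.compare_digest(sign_media(content, key), tag)
```

With the key held on the originating camera or in the newsroom, a downstream viewer who trusts the key holder can detect any bit-level tampering with the footage, though not a clip that was fake before it was signed.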

+++++++++++++
more on fake news in this IMS blog
https://blog.stcloudstate.edu/ims?s=fake+news

apps for special needs students

Android

Android Apps for Learners with Autism
Android Apps for Learners with Dyslexia
Android Apps for Vision Impaired

iOS

Apps for Dyslexic Learners
Apps for Autistic Learners
Apps for The Visually Impaired
Apps for Learners with Writing Difficulties

++++++++++++++++++++++
more on special ed in this IMS blog
https://blog.stcloudstate.edu/ims?s=special+education
