Online Course | Designing a Collaborative Instructional Technology Support Model
Part 1: March 7, 2018 | 1:00–2:30 p.m. ET
Part 2: March 14, 2018 | 1:00–2:30 p.m. ET
Part 3: March 21, 2018 | 1:00–2:30 p.m. ET
Faculty need a variety of instructional technology support services, including instructional design, content development, technology, training, and assessment. They don’t want to go to one place for help, find out they’re in the wrong place, and be sent somewhere else, digitally or physically. Staff don’t want to provide help in silos or duplicate what other units are doing.
So, how can academic service providers collaborate to offer the right instructional technology support services, in the right place, at the right time, in the right way? In this course, instructional technologists, instructional designers, librarians, and instructional technology staff will learn to use the Service Center Canvas, a tool designed to answer exactly those questions.
During this course, participants will:
Explore the factors that influence how instructional technology support services are offered in higher education
Answer critical questions about how your instructional technology support services should be delivered relative to broader trends and institutional goals
Experiment with ways to prototype new services and/or new ways of delivering them
Identify potential implementation obstacles and ways to address them
NOTE: Participants will be asked to complete assignments between the course segments that support the learning objectives stated above, and will receive feedback and constructive critique from course facilitators on how to improve and shape their work.
Felix founded and leads brightspot, a strategy consultancy that reimagines places, rethinks services, and redesigns organizations on university campuses so that people are better connected to a purpose, information, and each other. Felix is an accomplished strategist, facilitator, and sense-maker who has helped transform more than 70 colleges and universities.
Adam Griff is a director at brightspot. He helps universities rethink their space, reinvent their service offerings, and redesign their organization to improve the experiences of their faculty, students, and staff, connecting people and processes to create simple and intuitive answers to complex questions. He has led projects with a wide range of higher education institutions including University of Wisconsin–Madison, University of North Carolina at Chapel Hill, and University of California, Berkeley.
“When we hear a ding or little ditty alerting us to a new text, email or Facebook post, cells in our brains likely release dopamine, one of the chemical transmitters in the brain’s reward circuitry. That dopamine makes us feel pleasure,” says David Greenfield, a psychologist and assistant clinical professor of psychiatry at the University of Connecticut.
“It’s a spectrum disorder,” says Dr. Anna Lembke, a psychiatrist at Stanford University, who studies addiction. “There are mild, moderate and extreme forms.” And for many people, there’s no problem at all.
Signs you might be experiencing problematic use, Lembke says, include these:
Interacting with the device keeps you up late or otherwise interferes with your sleep.
It reduces the time you have to be with friends or family.
It interferes with your ability to finish work or homework.
It causes you to be rude, even subconsciously. “For instance,” Lembke asks, “are you in the middle of having a conversation with someone and just dropping down and scrolling through your phone?” That’s a bad sign.
It’s squelching your creativity. “I think that’s really what people don’t realize with their smartphone usage,” Lembke says. “It can really deprive you of a kind of seamless flow of creative thought that generates from your own brain.”
Consider a digital detox one day a week
Tiffany Shlain, a San Francisco Bay Area filmmaker, and her family power down all their devices every Friday evening, for a 24-hour period.
“It’s something we look forward to each week,” Shlain says. She and her husband, Ken Goldberg, a professor in the field of robotics at the University of California, Berkeley, are very tech savvy.
A recent study of high school students, published in the journal Emotion, found that too much time spent on digital devices is linked to lower self-esteem and a decrease in well-being.
The University of Wisconsin-Stout (in Menomonie, WI) will be hosting our second “E”ffordability Summit on March 26–27. The full schedule, registration, and additional information can be found at https://effordabilitysummit2018.jimdo.com/. This is shaping up to be a really great conference, with keynotes by OTNers Michelle Reed (UT-Arlington) and Dave Ernst. In addition to a full slate of content, we will also have Daniel Williamson, Managing Director of OpenStax; Glenda Lembisz from the National Association of College Stores; Mindy Boland of ISKME/OER Commons; and many more!
A major global change in data protection law is about to hit the tech industry, thanks to the EU’s General Data Protection Regulation (GDPR). GDPR affects any company, wherever it is in the world, that handles data about European citizens. It becomes law on 25 May 2018 and, because it takes effect before Brexit, covers UK citizens as well. It’s no surprise the EU has chosen to tighten the data protection belt: Europe has long opposed the tech industry’s expansionist tendencies, particularly through antitrust suits, and is perhaps the only regulatory body with the inclination and power to challenge Silicon Valley in the coming years.
So, no more harvesting data for unplanned analytics, future experimentation, or unspecified research. Teams must have specific uses for specific data.
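The “specific uses for specific data” requirement (GDPR’s purpose-limitation principle) can be sketched in code. The following is an illustrative example only; the names (`DECLARED_PURPOSES`, `collect`) are hypothetical and not part of any real compliance library:

```python
# Illustrative sketch of purpose limitation: a team declares, up front, the
# specific purpose for each data field it intends to collect. Anything
# without a declared purpose is refused rather than stored "just in case."

DECLARED_PURPOSES = {
    "email": "account login and password recovery",
    "postal_code": "shipping cost estimation",
}

def collect(field: str, value: str, store: dict) -> None:
    """Store a value only if its field has a declared, specific purpose."""
    if field not in DECLARED_PURPOSES:
        raise ValueError(f"No declared purpose for '{field}'; cannot collect it.")
    store[field] = value
```

Under this discipline, an attempt to harvest, say, browsing history for unspecified future analytics fails loudly at collection time instead of quietly accumulating liability.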
What is digital literacy? Do you know how you can foster digital literacy through formal and informal learning opportunities for your library staff and users?
Supporting digital literacy remains an important part of library staff members’ work, but we sometimes struggle to agree on a simple, meaningful definition of the term. In this four-week eCourse, training/learning specialist Paul Signorelli begins by exploring a variety of definitions, focusing on the work of a few leading proponents of the need to foster digital literacy among people of all ages and backgrounds. He then surveys a range of digital-literacy resources, including case studies of creative approaches to digital-literacy learning opportunities for library staff and users, and a variety of digital tools that encourage further understanding of the topic.
In a consortium blockchain, the consensus process is controlled by a pre-selected group – a group of corporations, for example. The right to read the blockchain and submit transactions to it may be public or restricted to participants. Consortium blockchains are considered to be “permissioned blockchains” and are best suited for use in business.
Semi-private blockchains are run by a single company that grants access to any user who satisfies pre-established criteria. Although not truly decentralized, this type of permissioned blockchain is appealing for business-to-business use cases and government applications.
Private blockchains are controlled by a single organization that determines who can read it, submit transactions to it, and participate in the consensus process. Since they are 100% centralized, private blockchains are useful as sandbox environments, but not for actual production.
Anyone can read a public blockchain, send transactions to it, or participate in the consensus process. They are considered to be “permissionless.” Every transaction is public, and users can remain anonymous. Bitcoin and Ethereum are prominent examples of public blockchains.
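The four permission models above differ mainly in who may read the chain, submit transactions, and participate in consensus. As a minimal sketch, that taxonomy can be expressed as a lookup table; the structure and strings below are our own shorthand for the descriptions above, not any real blockchain library’s API:

```python
from enum import Enum

class Action(Enum):
    READ = "read"            # read the blockchain
    SUBMIT = "submit"        # send transactions to it
    CONSENSUS = "consensus"  # take part in validating blocks

# Illustrative permission matrix summarizing the four models described above.
PERMISSIONS = {
    "public": {
        Action.READ: "anyone",
        Action.SUBMIT: "anyone",
        Action.CONSENSUS: "anyone",
    },
    "consortium": {
        Action.READ: "public or restricted",
        Action.SUBMIT: "public or restricted",
        Action.CONSENSUS: "pre-selected members",
    },
    "semi-private": {
        Action.READ: "users meeting criteria",
        Action.SUBMIT: "users meeting criteria",
        Action.CONSENSUS: "single company",
    },
    "private": {
        Action.READ: "owner-designated",
        Action.SUBMIT: "owner-designated",
        Action.CONSENSUS: "single organization",
    },
}

def who_can(model: str, action: Action) -> str:
    """Look up which parties may perform an action under a permission model."""
    return PERMISSIONS[model][action]
```

For example, `who_can("consortium", Action.CONSENSUS)` returns the pre-selected members, while every action on a public chain is open to anyone, which is why public chains are called “permissionless.”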
Combine the superfast computational capacity of Big Compute with the oceans of specific personal information that make up Big Data, and fertile ground for computational propaganda emerges. That is how the small AI programs called bots can be unleashed into cyberspace to deliver misinformation precisely to the people who will be most vulnerable to it. These messages can be refined over and over based on how well they perform (again, in terms of clicks, likes and so on). Worst of all, this can be done semiautonomously, allowing the targeted propaganda (such as fake news stories or faked images) to spread like a virus through the communities most vulnerable to it.
According to Bolsover and Howard, viewing computational propaganda only from a technical perspective would be a grave mistake. As they explain, seeing it just in terms of variables and algorithms “plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it.”
Computational propaganda is a new thing. People just invented it. And they did so by realizing possibilities emerging from the intersection of new technologies (Big Compute, Big Data) and new behaviors those technologies allowed (social media). But the emphasis on behavior can’t be lost.
People are not machines. We do things for a whole lot of reasons, including emotions of loss, anger, fear and longing. To combat computational propaganda’s potentially dangerous effects on democracy in a digital age, we will need to focus on both its how and its why.