with Melanie Guentzel, Director of Graduate Student Services, email@example.com
when: Thursday, Sept. 19, noon to 1 PM
where: Plymouth campus on Zoom: https://minnstate.zoom.us/s/9504079826
who: new international graduate students at SCSU, including students in Engineering Management, Regulatory Affairs, and Applied Clinical Research
Access the library from a distance: https://www.stcloudstate.edu/library/
Research and Writing Tips
Digital Media Has a Misinformation Problem—but It’s an Opportunity for Teaching.
Jennifer Sparrow Dec 13, 2018
Research has shown that 50 percent of college students spend at least five hours each week on social media. These channels feed information from news outlets, private bloggers, friends and family, and myriad other sources, often curated around the user’s interests. But what really makes social media a tricky resource for students and educators alike is that most companies don’t view themselves as content publishers. That position essentially absolves social media platforms of the responsibility to monitor what their users share, allowing false, even harmful, information to circulate.
“How do we help students become better consumers of information, data, and communication?” Fluency in each of these areas is integral to 21st-century citizenry, for which we must prepare students.
In English 202C, a technical writing course, students use our Invention Studio and littleBits to practice inventing their own electronic devices, write instructions for how to construct the device, and have classmates reproduce the invention.
The proliferation of mobile devices and high-speed Wi-Fi has made video a common outlet for information-sharing. To keep up with the changing means of communication, Penn State campuses are equipped with One Button Studio, where students can learn to produce professional-quality video. With this, students must learn how to take information and translate it into a visual medium in a way that will best benefit the intended audience. They can also use the studios to hone their presentation or interview skills by recording practice sessions and then reviewing the footage.
more on digital media in this IMS blog
Data-Driven Design Is Killing Our Instincts
Valuing data over design instinct puts metrics over users
Benek Lisefski August 13, 2019
Overreliance on data to drive design decisions can be just as harmful as ignoring it. Data only tells one kind of story. But your project goals are often more complex than that. Goals can’t always be objectively measured.
Data-driven design is about using information gleaned from both quantitative and qualitative sources to inform how you make decisions for a set of users. Some common tools used to collect data include user surveys, A/B testing, site usage and analytics, consumer research, support logs, and discovery calls.
Designers justified their value through their innate talent for creative ideas and artistic execution. Those whose instincts reliably produced success became rock stars.
In today’s data-driven world, that instinct is less necessary and holds less power. But make no mistake, there’s still a place for it.
Data is good at measuring things that are easy to measure. Some goals are less tangible, but that doesn’t make them less important.
Data has become an authoritarian, one who has fired the other advisors who might have tempered his ill will. A designer’s instinct would ask, “Do people actually enjoy using this?” or “How do these tactics reflect on our reputation and brand?”
Digital interface design is going through a bland period of sameness.
Data is only as good as the questions you ask
When to use data vs. when to use instinct
Deciding between two or three options? This is where data shines. Nothing is more decisive than an A/B test to compare potential solutions and see which one actually performs better. Make sure you’re measuring long-term value metrics and not just views and clicks.
Sweating product quality and aesthetics? Turn to your instinct. The overall feeling of quality is a collection of hundreds of micro-decisions, maintained consistency, and execution with accuracy. Each one of those decisions isn’t worth validating on its own. Your users aren’t design experts, so their feedback will be too subjective and variable. Trust your design senses when finessing the details.
Unsure about user behavior? Use data rather than asking for opinions. When asked what they’ll do, customers will do what they think you want them to. Instead, trust what they actually do when they think nobody’s looking.
Building brand and reputation? Data can’t easily measure this. But we all know trustworthiness is as important as clicks (and sometimes they’re opposing goals). When building long-term reputation, trust your instinct to guide you to what’s appealing, even if it sometimes contradicts short-term data trends. You have to play the long game here.
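The A/B comparison described above can be sketched as a simple pooled two-proportion z-test: given conversion counts for two variants, it estimates how likely the observed difference is to be noise. This is a minimal, self-contained sketch; the function name and the sample numbers are invented for illustration, not taken from any particular analytics product.

```python
from math import erfc, sqrt

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion
    rates, using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:  # degenerate case: no variation at all
        return 1.0
    z = (p_a - p_b) / se
    return erfc(abs(z) / sqrt(2))  # two-sided normal tail probability

# Variant A: 100 sign-ups out of 1000 visitors; Variant B: 160 out of 1000.
p = two_proportion_p_value(100, 1000, 160, 1000)
print(f"p-value: {p:.5f}")
```

The same test applies to any binary metric, which is where the advice above about long-term value comes in: feeding it 30-day retention or repeat-purchase counts instead of raw clicks keeps the A/B decision aligned with goals that matter.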
more on big data in this IMS blog
When False Claims Are Repeated, We Start To Believe They Are True — Here’s How Behaving Like A Fact-Checker Can Help
September 12, 2019
This phenomenon, known as the “illusory truth effect”, is exploited by politicians and advertisers — and if you think you are immune to it, you’re probably wrong. In fact, earlier this year we reported on a study that found people are prone to the effect regardless of their particular cognitive profile.
A study in Cognition has found that using our own knowledge to fact-check a false claim can prevent us from believing it is true when it is later repeated. But we might need a bit of a nudge to get there.
The researchers found that participants who had focused on how interesting the statements were in the first part of the study showed the illusory truth effect.
more on Fake News in this IMS blog
Science and Technology Resources on the Internet E-learning Technologies
April L. Colosimo Associate Librarian McGill University Library & Archives
Imagine if we didn’t know how to use books – notes on a digital practices framework
the 20/60/20 model of change. The idea is that the top 20% of any group will be game for anything; they are your early adopters, always willing to try the next new thing. The bottom 20% of a group will hate everything and spend most of their time either subtly slowing things down or in open rebellion. The middle 60% are the people who can be won or lost depending on how good your plan is.
The top stream is about the sunshine and light of working with others on the internet: its advantages and pitfalls, and ways to promote prosocial discourse. The middle stream is about pragmatics, the how-tos of doing things: it starts with simple guidelines and moves toward the technical realities of licensing, content production, and tech use. The bottom stream is about the self: how to keep yourself safe, and how to have a healthy relationship with the internet from a personal perspective.
Level 1 – Awareness
Level 2 – Learning
Level 3 – Interacting and making
Level 4 – Teaching
more on digital literacy in this IMS blog
deep fake: definition
What are “deepfakes?”
That’s the nickname given to computer-created artificial videos or other digital material in which images are combined to create new footage that depicts events that never actually happened. The term originates from the online message board Reddit.
One initial use of the fake videos was in amateur-created pornography, in which the faces of famous Hollywood actresses were digitally placed onto the bodies of other performers to make it appear as though the stars themselves were performing.
How difficult is it to create fake media?
It can be done with specialized software, experts say, the same way that editing programs such as Photoshop have made it simpler to manipulate still images. And specialized software itself isn’t necessary for what have been dubbed “shallow fakes” or “cheap fakes.”
Researchers also say they are working on new ways to speed up systems aimed at helping establish when video or audio has been manipulated. But it’s been called a “cat and mouse” game in which there may seldom be exact parity between fabrication and detection.
At least one state has considered legislation that would outlaw distributing election-oriented fake videos.
more on fake news in this IMS blog
Released on Friday, the Zao app went viral as Chinese users seized on the chance to see themselves act out scenes from well-known movies using deepfake technology, which has already prompted concerns elsewhere over potential misuse.
As of Monday afternoon it remained the top free download in China, according to the app market data provider App Annie.
Concerns over deepfakes have grown since the 2016 US election campaign, which saw wide use of online misinformation, according to US investigations.
In June, Facebook’s chief executive, Mark Zuckerberg, said the social network was struggling to find ways to deal with deepfake videos, saying they may constitute “a completely different category” of misinformation than anything faced before.
more on deepfake in this IMS blog
Chinese cyberspace is one of the most surveilled and censored in the world. That includes WeChat. Owned by Tencent, one of China’s biggest companies, the chat-meets-payment app has more than 1 billion monthly users in China and now serves users outside the country, too, although it does not divulge how many. Researchers say its use abroad has extended the global reach of China’s surveillance and censorship methods.
“The intention of keeping people safe by building these systems goes out the window the moment you don’t secure them at all,” says Victor Gevers, co-founder of the nonprofit GDI Foundation, an open-source data security collective.
Every day, Gevers scans the Internet for unsecured databases, and he has exposed a large number of them, many linked to China.
more on WeChat and surveillance in this IMS blog