July 2020
Digital Literacy for St. Cloud State University
Following the March 2020 reports about Zoom privacy issues, Zoom has now acknowledged working with the Chinese government:
++++++++++++++
Unlike many other major tech platforms based in the U.S., Zoom, which is headquartered in California, has not been blocked by the Chinese government. Zoom said in a blog post that it is “developing technology over the next several days that will enable us to remove or block at the participant level based on geography,” which will allow the company “to comply with requests from local authorities when they determine activity on our platform is illegal within their borders; however, we will also be able to protect these conversations for participants outside of those borders where the activity is allowed.”
Zoom’s interference with the Tiananmen gatherings and its suspension of user accounts raised alarm among many in higher education, which increasingly depends on Zoom to operate courses remotely — including for students located within China’s borders.
Multiple scholars took to Twitter to express their worries.
PEN America, a group that advocates for free expression, condemned Zoom for shuttering the activist’s account.
This is not the first time Zoom’s links to China have come under scrutiny. In April, the company admitted that some of its user data were “mistakenly” routed through China; in response, the company announced that users of paid Zoom accounts could opt out of having their data routed through data centers in China.
An April 3 report by scholars at the University of Toronto’s Munk School of Global Affairs & Public Policy said Zoom’s research and development operations in China could make the company susceptible “to pressure from Chinese authorities.”
Zoom, whose Chinese-born CEO is a U.S. citizen, said in its latest annual report to the U.S. Securities and Exchange Commission that it had more than 700 employees at its research and development centers in China as of Jan. 31. The SEC filing notes that Zoom has a “high concentration of research and development personnel in China, which could expose us to market scrutiny regarding the integrity of our solution or data security features.”
+++++++++++++
Zoom Just Totally Caved In to China on Censorship from r/technology
++++++++++++++
more about Zoom in this IMS blog
https://blog.stcloudstate.edu/ims?s=zoom
https://www.kqed.org/mindshift/45396/whats-at-risk-when-schools-focus-too-much-on-student-data
The U.S. Department of Education emphasizes “ensuring the use of multiple measures of school success based on academic outcomes, student progress, and school quality.”
We are starting to hear more about what might be lost when schools focus too much on data. Here are five arguments against the excesses of data-driven instruction.
1) Decreased Motivation
An excessive focus on data can act as a stereotype threat, threatening students’ sense of belonging, which is key to academic motivation.
2) Helicoptering
A style of overly involved “intrusive parenting” has been associated in studies with increased levels of anxiety and depression when students reach college.
3) Commercial Monitoring and Marketing
The National Education Policy Center releases annual reports on commercialization and marketing in public schools. In its most recent report in May, researchers there raised concerns about targeted marketing to students using computers for schoolwork and homework.
Companies like Google pledge not to track the content of schoolwork for the purposes of advertising. But in reality these boundaries can be a lot more porous.
4) Missing What Data Can’t Capture
5) Exposing Students’ “Permanent Records”
In the past few years several states have passed laws banning employers from looking at the credit reports of job applicants.
Similarly, for young people who get in trouble with the law, there is a procedure for sealing juvenile records
Educational transcripts, unlike credit reports or juvenile court records, are currently considered fair game for gatekeepers like colleges and employers. These records, though, are getting much more detailed.
Badges are a mechanism to award ‘micro-credits’ online. They are awarded by an organization for an individual user, and can be either internal to a website or online community, or use open standards and shared repositories.
In open online learning settings, badges are used to provide incentives for individuals to use our resources and to participate in discussion threads.
The IBM skills gateway is an example of how open badges can be leveraged to document professional development. The EDUCAUSE microcredentialing program offers 108 digital badges in five categories (community service, expertise development, presentation and facilitation, leadership development, awards).
The Open Badge Initiative and “Digital Badges for Lifelong Learning” became the theme of the fourth Digital Media and Learning Competition, in which over 30 innovative badge systems and 10 research studies received over $5 million in funding between 2012 and 2013.
Standardization is the key to creating transferability and recognition across contexts.
In 2018, the new Open Badges 2.0 standard was released under the stewardship of IMS Global Learning Consortium.
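Under the Open Badges 2.0 standard, a badge is a set of linked JSON documents: a BadgeClass describing the achievement, and an Assertion awarding it to a recipient. As a rough sketch of that structure (all URLs, names, and dates below are invented placeholders, not real badge data), the two documents can be built like this:

```python
import json

# A minimal Open Badges 2.0 BadgeClass and hosted Assertion,
# sketched from the public spec. Every URL and name here is a
# placeholder, not a real issuer or recipient.
badge_class = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "BadgeClass",
    "id": "https://example.edu/badges/discussion-leader.json",
    "name": "Discussion Leader",
    "description": "Awarded for leading a course discussion thread.",
    "criteria": {"narrative": "Moderated at least three discussion threads."},
    "issuer": "https://example.edu/issuer.json",
}

assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://example.edu/assertions/123.json",
    # The recipient is identified (here by plain email; it can be hashed).
    "recipient": {"type": "email", "hashed": False,
                  "identity": "student@example.edu"},
    # The assertion points back at the BadgeClass by its URL.
    "badge": badge_class["id"],
    # "hosted" verification means a consumer re-fetches the id URL to check it.
    "verification": {"type": "hosted"},
    "issuedOn": "2020-07-01T00:00:00Z",
}

print(json.dumps(assertion, indent=2))
```

The linking by URL is what makes badges transferable: any backpack or viewer that understands the standard can fetch and display a badge, regardless of which platform issued it.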
Badges awarded for participation are considered less meaningful than skill-based badges. For skill-based badges, evidence of mastery must be associated with the badge, along with the evaluation criteria. Having a clear purpose, ensuring transferability, and specifying learning objectives were noted by the interviewees as the top priorities when implementing badge offerings in higher education contexts.
Sheryl Grant is a senior researcher on user experience at OpenWorks Group, a company that focuses on supporting educational web applications and mobile tools, including credentialing services. Prior to her current position, Dr. Grant was Director of Alternative Credentialing and Badge Research at HASTAC. She was part of the team that organized the ‘Badges for Lifelong Learning Competition’.
She had advice to offer for the design and implementation of digital badges. She stressed that badge systems need to be designed in a participatory manner, together with the target audience who is supposed to receive them. This allows for fair, realistic, and transparent criteria. Another crucial aspect is assessment: who will verify that the badge credentials are issued correctly? While badges can offer additional motivation, they can also diminish motivation and create a ‘race to the bottom’ if they are obtained too easily. Specifically, Dr. Grant advised using badges to reward exceptional activities and to acknowledge students who want to go above and beyond. She also gave guidelines on when to avoid issuing badges, i.e., for activities that are already graded and activities that are required.
All current UNC badging pilots used the Credly platform for issuing badges. An alternative is Badgr, the successor to the Mozilla Open Badges Backpack. The European platform Badgecraft is another repository with a fairly broad user base. The Badge Wiki project offers a comprehensive list of 23 platforms, with implementation details for each: Badge Platforms (Badge Wiki).
Designing Effective Digital Badges (https://www.amazon.com/Designing-Effective-Digital-Badges-Applications/dp/1138306134) is a hands-on guide to the principles, implementation, and assessment of digital badging systems. Informed by the fundamental concepts and research-based characteristics of effective badge design, this book uses real-world examples to convey the advantages and challenges of badging and showcases its application across a variety of contexts.
++++++++++
more on microcred in this IMS blog
https://blog.stcloudstate.edu/ims?s=microcredentialing
Anya Schiffrin, January 21, 2019
https://prospect.org/article/digital-destruction-democracy
++++++++++++
more on the issues of digital world and democracy in this IMS blog
https://blog.stcloudstate.edu/ims/2019/02/19/facebook-digital-gangsters/
https://www.edsurge.com/news/2019-01-28-4-ways-ai-education-and-ethics-will-disrupt-society-in-2019
In 2018 we witnessed a clash of titans as government and tech companies collided on privacy issues around collecting, culling and using personal data. From GDPR to Facebook scandals, many tech CEOs were defending big data, its use, and how they’re safeguarding the public.
Meanwhile, the public was amazed at technological advances like Boston Dynamics’ Atlas robot doing parkour, while simultaneously being outraged at the thought of our data no longer being ours and Alexa listening in on all our conversations.
In 2018, the National Science Foundation invested $100 million in AI research, with special support in 2019 for developing principles for safe, robust and trustworthy AI; addressing issues of bias, fairness and transparency of algorithmic intelligence; developing deeper understanding of human-AI interaction and user education; and developing insights about the influences of AI on people and society.
This investment was dwarfed by DARPA—an agency of the Department of Defense—and its multi-year investment of more than $2 billion in new and existing programs under the “AI Next” campaign. A key area of the campaign includes pioneering the next generation of AI algorithms and applications, such as “explainability” and common-sense reasoning.
Federally funded initiatives, as well as corporate efforts (such as Google’s “What If” tool) will lead to the rise of explainable AI and interpretable AI, whereby the AI actually explains the logic behind its decision making to humans. But the next step from there would be for the AI regulators and policymakers themselves to learn about how these technologies actually work. This is an overlooked step right now that Richard Danzig, former Secretary of the U.S. Navy advises us to consider, as we create “humans-in-the-loop” systems, which require people to sign off on important AI decisions.
Google invested $25 million in AI for Good, and Microsoft added an AI for Humanitarian Action program to its prior commitment. While these are positive steps, the tech industry continues to have a diversity problem.
Ryan Calo from the University of Washington explains that it matters how we talk about technologies that we don’t fully understand.
The most secure and anonymous communication tools available
David Koff, August 27, 2018
https://medium.com/s/the-firewall/down-the-security-rabbit-hole-31327f47743d
These tools are used not only to lock down your security and anonymity on the known internet, but also to access the portions of the internet that are normally hidden — “The Dark Web.”
While most of us don’t need the same high-privacy, high-security tools that confidential informants, journalists, and whistleblowers use, we should all know about these tools in case the time comes when we actually need them.
It’s also worth reminding everyone there’s no such thing as perfect digital security on the internet.
TAILS is an acronym for “The Amnesic Incognito Live System.”
TAILS is a highly secure operating system (with a host of useful applications) designed to be booted off a DVD or USB thumb drive. This not only makes TAILS easy to transport, but also ensures that TAILS can be booted and instantly useful from nearly any PC, Mac, or Chromebook. TAILS is built on Linux, a name you might recognize because it’s a popular, free, and open-source operating system that’s been available since 1991. TAILS, in particular, runs on a variant of Linux known as “Debian,” which became available in 1996.
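Before writing a TAILS image to a USB drive, it is worth verifying that the download was not corrupted or tampered with. The TAILS project distributes OpenPGP signatures for this purpose; as a simpler illustration of the same idea (the file name and checksum below are placeholders, not real release data), a downloaded image can be checked against a published checksum:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def image_matches(path: str, published_checksum: str) -> bool:
    """True if the file's SHA-256 digest equals the published checksum."""
    return sha256_of_file(path) == published_checksum.strip().lower()

# Usage (placeholder values; take the real checksum from the TAILS site):
# image_matches("tails.img", "9f86d081884c7d65...")
```

A checksum only detects corruption; to also rule out tampering by whoever hosts the download, the OpenPGP signature check the TAILS project documents is the stronger option.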
Third, and most importantly, when set up correctly, TAILS helps ensure that all of your communications — email, web browsing, chat, and more — are encrypted, anonymized, and then routed in such a way that it’s extremely difficult to detect or trace them.
If you’re wondering just how powerful these tools really are, the NSA has reportedly found many of them difficult or impossible to break. This includes:
TAILS even published a page of possible ways that its own security can be compromised.
Whonix (pronounced “HOO-nix”) is an OS focused on anonymity, privacy, and security. Like TAILS, it is built on the open-source Debian Linux OS and on Tor, the decentralized network which randomizes and segments your data transmissions.
Its unique approach to offering such well-regarded security is the creative use of two virtual machines (or VMs) running in tandem on one host computer. One of these VMs is known as the Gateway while the other is known as the Workstation.
Compared to TAILS, Whonix only provides a few free, open-source applications and those need to be set up fairly extensively. The list includes:
As public confidence declines, university budgets and investments face growing scrutiny
Negative emotions are useful indicators of both your instincts and your beliefs.
++++++++++++
more on mindfulness in this IMS blog
https://blog.stcloudstate.edu/ims?s=mindfulness
No app is safe from the Stories plague
LinkedIn confirms to TechCrunch that it plans to build Stories for more sets of users, but first it’s launching “Student Voices” just for university students in the U.S. The feature appears atop the LinkedIn home screen and lets students post short videos to their Campus Playlist.
My note: Since 2012, I have unsuccessfully tried to convince two library directors to approve a similar video “channel” on the SCSU library web page, with students’ testimonies and the ability for students to comment and provide feedback on the issues raised in the videos. Can you guess the outcome of such a proposal?
https://blog.stcloudstate.edu/ims/2018/11/03/video-skills-digital-literacy/
A LinkedIn spokesperson tells us the motive behind the feature is to get students sharing their academic experiences like internships, career fairs and class projects that they’d want to show off to recruiters as part of their personal brand.
+++++++++++
more on LinkedIn in this IMS blog
https://blog.stcloudstate.edu/ims?s=linkedin