
data-driven education

https://www.kqed.org/mindshift/45396/whats-at-risk-when-schools-focus-too-much-on-student-data

The U.S. Department of Education emphasizes “ensuring the use of multiple measures of school success based on academic outcomes, student progress, and school quality.”

We are starting to hear more about what might be lost when schools focus too much on data. Here are five arguments against the excesses of data-driven instruction.

1) Decreased Motivation

A constant focus on performance data can trigger what is known as stereotype threat, threatening students’ sense of belonging, which is key to academic motivation.

2) Helicoptering

A style of overly involved “intrusive parenting” has been associated in studies with increased levels of anxiety and depression when students reach college.

3) Commercial Monitoring and Marketing

The National Education Policy Center releases annual reports on commercialization and marketing in public schools. In its most recent report in May, researchers there raised concerns about targeted marketing to students using computers for schoolwork and homework.

Companies like Google pledge not to track the content of schoolwork for the purposes of advertising. But in reality these boundaries can be a lot more porous.

4) Missing What Data Can’t Capture

5) Exposing Students’ “Permanent Records”

In the past few years several states have passed laws banning employers from looking at the credit reports of job applicants.
Similarly, for young people who get in trouble with the law, there is a procedure for sealing juvenile records
Educational transcripts, unlike credit reports or juvenile court records, are currently considered fair game for gatekeepers like colleges and employers. These records, though, are getting much more detailed.

Alternative Credentials

Alternative Credentials: How Can Higher Education Organizations Leverage Open Badges?

By Stefanie Panke for AACE Review

https://www.aace.org/review/alternative-credentials-how-can-higher-education-organizations-leverage-open-badges/

Badges are a mechanism to award ‘micro-credits’ online. They are awarded by an organization to an individual user, and can be either internal to a website or online community, or use open standards and shared repositories.

In open online learning settings, badges are used to provide incentives for individuals to use our resources and to participate in discussion threads.

The IBM Skills Gateway is an example of how open badges can be leveraged to document professional development. The EDUCAUSE microcredentialing program offers 108 digital badges in five categories (community service, expertise development, presentation and facilitation, leadership development, awards).

The Open Badges Initiative and “Digital Badges for Lifelong Learning” became the theme of the fourth Digital Media & Learning competition, in which over 30 innovative badge systems and 10 research studies received over $5 million in funding between 2012 and 2013.

Standardization is the key to creating transferability and recognition across contexts.

In 2018, the new Open Badges 2.0 standard was released under the stewardship of IMS Global Learning Consortium.
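To give a concrete sense of what the standard specifies, here is a minimal sketch of a hosted Open Badges 2.0 assertion, written as a Python dictionary and serialized to JSON. The field names follow the published specification; the URLs, identifiers, and hash value are hypothetical placeholders, not real endpoints.

```python
import json

# Minimal sketch of a hosted Open Badges 2.0 assertion.
# Field names follow the Open Badges 2.0 specification; all URLs and
# identifiers below are hypothetical placeholders, not real endpoints.
assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://badges.example.edu/assertions/1001",  # hypothetical assertion URL
    "recipient": {
        "type": "email",
        "hashed": True,
        "salt": "s3cr3t-salt",
        # sha256 hash of the recipient's e-mail address plus the salt
        "identity": "sha256$8f4d...placeholder",
    },
    "badge": "https://badges.example.edu/badges/data-literacy",  # BadgeClass URL
    "verification": {"type": "hosted"},
    "issuedOn": "2019-02-22T00:00:00+00:00",
    "evidence": "https://badges.example.edu/evidence/1001",
}

print(json.dumps(assertion, indent=2))
```

Because the assertion, the BadgeClass it references, and the issuer profile are all plain JSON documents hosted at stable URLs, any backpack or repository that understands the standard can verify and display the badge, which is what makes the cross-context transferability described above possible.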

Badges awarded for participation are considered less meaningful than skill-based badges. For skill-based badges, evidence of mastery must be associated with the badge, along with the evaluation criteria. Having a clear purpose, ensuring transferability, and specifying learning objectives were noted by the interviewees as the top priorities when implementing badge offerings in higher education contexts.

Sheryl Grant is a senior researcher on user experience at OpenWorks Group, a company that focuses on supporting educational web applications and mobile tools, including credentialing services. Prior to her current position, Dr. Grant was Director of Alternative Credentialing and Badge Research at HASTAC. She was part of the team that organized the ‘Badges for Lifelong Learning Competition’.

Dr. Grant had advice to offer for the design and implementation of digital badges. She stressed that badge systems need to be designed in a participatory manner, together with the target audience who is supposed to receive them. This allows for fair, realistic, and transparent criteria. Another crucial aspect is the assessment portion: Who will verify that the badge credentials are issued correctly? While badges can offer additional motivation, they can also diminish motivation and create a ‘race to the bottom’ if they are obtained too easily. Specifically, Dr. Grant advised using badges to reward exceptional activities and to acknowledge students who want to go above and beyond. She also gave guidelines on when to avoid issuing badges, i.e., for activities that are already graded and activities that are required.

All current UNC badging pilots used the platform Credly for issuing badges. An alternative is Badgr, the follow-up to the Mozilla Open Badges backpack. The European platform Badgecraft is another repository with a fairly broad user base. The Badge Wiki project offers a comprehensive list of 23 platforms, with implementation details for each: Badge Platforms (Badge Wiki).

Designing Effective Digital Badges (https://www.amazon.com/Designing-Effective-Digital-Badges-Applications/dp/1138306134) is a hands-on guide to the principles, implementation, and assessment of digital badging systems. Informed by the fundamental concepts and research-based characteristics of effective badge design, this book uses real-world examples to convey the advantages and challenges of badging and showcases its application across a variety of contexts.

++++++++++
more on microcredentialing in this IMS blog
http://blog.stcloudstate.edu/ims?s=microcredentialing

Digital Destruction of Democracy

The Digital Destruction of Democracy

Anya Schiffrin, January 21, 2019

https://prospect.org/article/digital-destruction-democracy

Anya Schiffrin is an adjunct faculty member at the School of International and Public Affairs at Columbia University. She worked in Hanoi from 1997 to 1999 as the bureau chief of Dow Jones Newswires.
Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics
By Yochai Benkler, Robert Faris, & Hal Roberts
Oxford University Press
Benkler, a Harvard law professor and well-known theorist of the digital age, and his colleagues have produced an authoritative tome that includes multiple taxonomies and literature reviews as well as visualizations of the flow of disinformation.
clickbait fabricators
white supremacist and alt-right trolls
a history of the scholarship on propaganda, reminding the reader that much of the discussion began in the 1930s.
Benkler’s optimistic 2006 book, The Wealth of Networks, predicted that the Internet would bring people together and transform the way information is created and spread. Today, Benkler is far less sanguine and has become one of the foremost researchers of disinformation networks.
Fox News, Breitbart, The Daily Caller, InfoWars, and Zero Hedge
As a result, mainstream journalists repeat and amplify the falsehoods even as they debunk them.
There is no clear line, they argue, between Russian propaganda, Breitbart lies, and the Trump victory. They add that Fox News is probably more influential than Facebook.
after George Soros gave a speech in January 2018 calling for regulation of the social media platforms, Facebook hired a Republican opposition research firm to shovel dirt at George Soros.
The European Union has not yet tried to regulate disinformation (although they do have codes of practice for the platforms), instead focusing on taxation, competition regulation, and protection of privacy. But Germany has strengthened its regulations regarding online hate speech, including the liability of the social media platforms.
disclosure of the sources of online political advertising. It’s a bit toothless because, just as with offshore bank accounts, it may be possible to register which U.S. entity is paying for online political advertising, but it’s impossible to know whether that entity is getting its funds from overseas. Even the Honest Ads bill was too much for Facebook to take.

++++++++++++
more on the issues of digital world and democracy in this IMS blog
http://blog.stcloudstate.edu/ims/2019/02/19/facebook-digital-gangsters/

Education and Ethics

4 Ways AI Education and Ethics Will Disrupt Society in 2019

By Tara Chklovski     Jan 28, 2019

https://www.edsurge.com/news/2019-01-28-4-ways-ai-education-and-ethics-will-disrupt-society-in-2019

In 2018 we witnessed a clash of titans as government and tech companies collided on privacy issues around collecting, culling and using personal data. From GDPR to Facebook scandals, many tech CEOs were defending big data, its use, and how they’re safeguarding the public.

Meanwhile, the public was amazed at technological advances like Boston Dynamics’ Atlas robot doing parkour, while simultaneously being outraged at the thought of our data no longer being ours and Alexa listening in on all our conversations.

1. Companies will face increased pressure about the data AI-embedded services use.

2. Public concern will lead to AI regulations. But we must understand this tech too.

In 2018, the National Science Foundation invested $100 million in AI research, with special support in 2019 for developing principles for safe, robust and trustworthy AI; addressing issues of bias, fairness and transparency of algorithmic intelligence; developing deeper understanding of human-AI interaction and user education; and developing insights about the influences of AI on people and society.

This investment was dwarfed by DARPA, an agency of the Department of Defense, and its multi-year investment of more than $2 billion in new and existing programs under the “AI Next” campaign. A key area of the campaign includes pioneering the next generation of AI algorithms and applications, such as “explainability” and common sense reasoning.

Federally funded initiatives, as well as corporate efforts (such as Google’s “What If” tool), will lead to the rise of explainable AI and interpretable AI, whereby the AI actually explains the logic behind its decision making to humans. But the next step from there would be for the AI regulators and policymakers themselves to learn how these technologies actually work. This is an overlooked step right now that Richard Danzig, former Secretary of the U.S. Navy, advises us to consider as we create “humans-in-the-loop” systems, which require people to sign off on important AI decisions.
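As a toy illustration of what it can look like when a model “explains the logic behind its decision making,” the sketch below (not Google’s What If tool, just a minimal example assuming scikit-learn is installed) trains a shallow decision tree and prints its decision rules in human-readable form:

```python
# Minimal sketch of interpretable AI: a shallow decision tree whose learned
# rules can be printed and read by a human reviewer. Assumes scikit-learn is
# installed; the iris dataset stands in for any real decision-making problem.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(data.data, data.target)

# export_text renders the tree as nested if/else threshold rules, so a person
# "in the loop" can see why a given prediction would be made.
print(export_text(model, feature_names=list(data.feature_names)))
```

A human-in-the-loop reviewer could inspect these thresholds before signing off on a decision, which is the kind of oversight Danzig is calling for.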

3. More companies will make AI a strategic initiative in corporate social responsibility.

Google invested $25 million in AI for Good, and Microsoft added an AI for Humanitarian Action program to its prior commitment. While these are positive steps, the tech industry continues to have a diversity problem.

4. Funding for AI literacy and public education will skyrocket.

Ryan Calo from the University of Washington explains that it matters how we talk about technologies that we don’t fully understand.


college finances for waste

Students, employees scour college finances for waste, proof of unfair pay

As public confidence declines, university budgets and investments face growing scrutiny

https://hechingerreport.org/increasingly-skeptical-students-employees-want-colleges-to-show-them-the-money/ 
But seldom has this level of attention from students and employees been so focused on the finances of their own campuses. It coincides with what polls disclose is falling public confidence in higher education. And given the results, it seems likely to create more, not less, mistrust.
Higher education has become a popular public target. Fifty-eight percent of people polled by the think tank New America said colleges and universities put their own interests ahead of those of students. About the same proportion in a Public Agenda survey said colleges care mostly about the bottom line, and 44 percent said they’re wasteful and inefficient. And a Gallup poll found that more than half of Americans have only some, or very little, confidence in higher education.
“We want to see greater transparency in how they spend our money. And it is our money, most of it,” since such a large percentage of the budget comes from tuition.

Emotional Data on the Job

How to Manage Your Own Emotional Data on the Job

emotions can give you truly pertinent, useful data about business problems that need attention.

negative emotions are useful indicators of both your instincts and your beliefs:

  • If you’re feeling sad or down, you’re probably unhappy with your own behavior or the effectiveness of your response to events. You might be thinking that something has gone wrong and that it was your fault. Be careful not to leave sadness unattended or it can slide into hopelessness and the belief that you’ll never be able to make things better. Instead, let sadness prompt you to change: Look for small actions and steps to make headway and improvements.
  • If you’re angry, you may be experiencing a fairness issue of some kind, and the anger may be telling you about a sense of violation or something that needs to be set right.
  • If you’re afraid or don’t feel safe in some way, you may sense that something bad is going to happen but you’re not sure what it is or how it might damage you. Or perhaps you don’t trust upcoming events or the people involved. Your fear can alert you to do extra preparation and contingency planning so you’ll have your best shot at success.

++++++++++++
more on mindfulness in this IMS blog
http://blog.stcloudstate.edu/ims?s=mindfulness

LinkedIn and Snapchat stories

LinkedIn launches its own Snapchat Stories. Here’s why it shouldn’t have

No app is safe from the Stories plague

 LinkedIn confirms to TechCrunch that it plans to build Stories for more sets of users, but first it’s launching “Student Voices” just for university students in the U.S. The feature appears atop the LinkedIn home screen and lets students post short videos to their Campus Playlist.

My note: Since 2012, I unsuccessfully tried to convince two library directors to approve a similar video “channel” on the SCSU library web page, with students’ testimonies and the ability for students to comment / provide feedback regarding the issues raised in the videos. Can you guess the outcome of such a proposal?
http://blog.stcloudstate.edu/ims/2018/11/03/video-skills-digital-literacy/

A LinkedIn spokesperson tells us the motive behind the feature is to get students sharing their academic experiences like internships, career fairs and class projects that they’d want to show off to recruiters as part of their personal brand.

+++++++++++
more on LinkedIn in this IMS blog
http://blog.stcloudstate.edu/ims?s=linkedin

Russian Influence Operations on Twitter

Russian Influence Operations on Twitter

Summary: This short paper lays out an attempt to measure how much activity from Russian state-operated accounts released in the dataset made available by Twitter in October 2018 was targeted at the United Kingdom. Finding UK-related Tweets is not an easy task. By applying a combination of geographic inference, keyword analysis and classification by algorithm, we identified UK-related Tweets sent by these accounts and subjected them to further qualitative and quantitative analytic techniques.
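The report does not publish its code, but the keyword-analysis step it describes can be sketched roughly as follows; the keyword list and sample tweets here are illustrative placeholders, not the researchers’ actual criteria or data.

```python
import re

# Rough sketch of keyword analysis: flag tweets whose text mentions UK-related
# terms. The keyword list is illustrative only, not the report's criteria.
UK_KEYWORDS = ["uk", "britain", "british", "brexit", "london", "westminster"]
PATTERN = re.compile(r"\b(" + "|".join(UK_KEYWORDS) + r")\b", re.IGNORECASE)

def is_uk_related(tweet_text: str) -> bool:
    """Return True if the tweet text matches any UK-related keyword."""
    return bool(PATTERN.search(tweet_text))

# Hypothetical tweet records standing in for the released dataset.
tweets = [
    {"id": 1, "text": "Huge crowds in London after the attack"},
    {"id": 2, "text": "Tax cuts are the only answer"},
]
uk_tweets = [t for t in tweets if is_uk_related(t["text"])]
print(len(uk_tweets), "of", len(tweets), "tweets look UK-related")
```

In the study itself, keyword matching of this kind was combined with geographic inference and algorithmic classification before the qualitative analysis.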

We find:

  • There were three phases in Russian influence operations: under-the-radar account building, minor Brexit vote visibility, and larger-scale visibility during the London terror attacks.

  • Russian influence operations linked to the UK were most visible when discussing Islam. Tweets discussing Islam over the period of terror attacks between March and June 2017 were retweeted 25 times more often than their other messages.

  • The most widely-followed and visible troll account, @TEN_GOP, shared 109 Tweets related to the UK. Of these, 60 percent were related to Islam.

  • The topology of tweet activity underlines the vulnerability of social media users to disinformation in the wake of a tragedy or outrage.

  • Focus on the UK was a minor part of wider influence operations in this data. Of the nine million Tweets released by Twitter, 3.1 million were in English (34 percent). Of these 3.1 million, we estimate 83 thousand were in some way linked to the UK (2.7%). Those Tweets were shared 222 thousand times. It is plausible we are therefore seeing how the UK was caught up in Russian operations against the US.

  • Influence operations captured in this data show attempts to falsely amplify other news sources and to take part in conversations around Islam, and rarely show attempts to spread ‘fake news’ or influence at an electoral level.

On 17 October 2018, Twitter released data about 9 million tweets from 3,841 blocked accounts affiliated with the Internet Research Agency (IRA) – a Russian organisation founded in 2013 and based in St Petersburg, accused of using social media platforms to push pro-Kremlin propaganda and influence nation states beyond their borders, as well as being tasked with spreading pro-Kremlin messaging in Russia. It is one of the first major datasets linked to state-operated accounts engaging in influence operations released by a social media platform.

Conclusion

This report outlines the ways in which accounts linked to the Russian Internet Research Agency (IRA) carried out influence operations on social media and the ways their operations intersected with the UK.

The UK plays a reasonably small part in the wider context of this data. We see two possible explanations: either influence operations were primarily targeted at the US and British Twitter users were impacted as collateral, or this dataset is limited to US-focused operations where events in the UK were highlighted in an attempt to impact the US public, rather than a concerted effort against the UK. It is plausible that such efforts also existed but are not reflected in this dataset. Nevertheless, the data offers a highly useful window into how Russian influence operations are carried out, as well as highlighting the moments when we might be most vulnerable to them.

Between 2011 and 2016, these state-operated accounts were camouflaged. Through manual and automated methods, they were able to quietly build up the trappings of an active and well-followed Twitter account before eventually pivoting into attempts to influence the wider Twitter ecosystem. Their methods included engaging in unrelated and innocuous topics of conversation, often through automated methods, and through sharing and engaging with other, more mainstream sources of news.

Although this data shows levels of electoral and party-political influence operations to be relatively low, the day of the Brexit referendum results showed how messaging originating from Russian state-controlled accounts might come to be visible: on June 24th 2016, we believe UK Twitter users discussing the Brexit vote would have encountered messages originating from these accounts.

As early as 2014, however, influence operations began taking part in conversations around Islam, and these accounts came to the fore during the three months of terror attacks that took place between March and June 2017. In the immediate wake of these attacks, messages related to Islam and circulated by Russian state-operated Twitter accounts were widely shared, and would likely have been visible in the UK.

The dataset released by Twitter begins to answer some questions about attempts by a foreign state to interfere in British affairs online. It is notable that overt political or electoral interference is poorly represented in this dataset: rather, we see attempts at stirring societal division, particularly around Islam in the UK, as the messages that resonated the most over the period.

What is perhaps most interesting about this moment is its portrayal of when we as social media users are most vulnerable to the kinds of messages circulated by those looking to influence us. In the immediate aftermath of terror attacks, the data suggests, social media users were more receptive to this kind of messaging than at any other time.

It is clear that hostile states have identified the growth of online news and social media as a weak spot, and that significant effort has gone into attempting to exploit new media to influence its users. Understanding the ways in which these platforms have been used to spread division is an important first step to fighting it.

Nevertheless, it is clear that this dataset provides just one window into the ways in which foreign states have attempted to use online platforms as part of wider information warfare and influence campaigns. We hope that other platforms will follow Twitter’s lead and release similar datasets and encourage their users to proactively tackle those who would abuse their platforms.

+++++++++++
more on cybersecurity in this IMS blog
http://blog.stcloudstate.edu/ims?s=cybersecurity

the intellectual dark web

Nuance: A Love Story. My affair with the intellectual dark web

Meghan Daum Aug 24 https://medium.com/s/greatescape/nuance-a-love-story-ae6a14991059

the standard set of middle-class Democratic Party values: Public safety nets were a force for good, corporate greed was a real threat, civil and reproductive rights were paramount.

I remember how good it felt to stand with my friends in our matching college sweatshirts shouting “never again!” and “my body, my choice!”

(hey, why shouldn’t Sarah Palin call herself a feminist?) brought angry letters from liberals as well as conservatives.

We would all go to the mat for women’s rights, gay rights, or pretty much any rights other than gun rights. We lived, for the most part, in big cities in blue states.

When Barack Obama came into the picture, we loved him with the delirium of crushed-out teenagers, perhaps less for his policies than for being the kind of person who also listens to NPR. We loved Hillary Clinton with the fraught resignation of a daughter’s love for her mother. We loved her even if we didn’t like her. We were liberals, after all. We were family.

Words like “mansplaining” and “gaslighting” were suddenly in heavy rotation, often invoked with such elasticity as to render them nearly meaningless. Similarly, the term “woke,” which originated in black activism, was being now used to draw a bright line between those on the right side of things and those on the wrong side of things.

From the Black Guys on Bloggingheads, YouTube’s algorithms bounced me along a path of similarly unapologetic thought criminals: the neuroscientist Sam Harris and his Waking Up podcast; Christina Hoff Sommers, aka “The Factual Feminist”; the comedian turned YouTube interviewer Dave Rubin; the counter-extremist activist Maajid Nawaz; and a cantankerous and then little-known Canadian psychology professor named Jordan Peterson, who railed against authoritarianism on both the left and right but reserved special disdain for postmodernism, which he believed was eroding rational thought on campuses and elsewhere.

the sudden national obsession with female endangerment on college campuses struck me much the same way it had in the early 1990s: well-intended but ultimately infantilizing to women and essentially unfeminist.

Weinstein and his wife, the evolutionary biologist Heather Heying, who also taught at Evergreen, would eventually leave the school and go on to become core members of the “intellectual dark web.”

Weinstein talked about intellectual “feebleness” in academia and in the media, about the demise of nuance, about still considering himself a progressive despite his feeling that the far left was no better at offering practical solutions to the world’s problems than the far right.

an American Enterprise Institute video of Sommers, the Factual Feminist, in conversation with the scholar and social critic Camille Paglia — “My generation fought for the freedom for women to risk getting raped!” I watched yet another video in which Paglia sat by herself and expounded volcanically about the patriarchal history of art (she was all for it).

the brothers sat down together for a two-hour, 47-minute interview on The Rubin Report.

James Baldwin’s line, “I love America more than any other country in the world, and, exactly for this reason, I insist on the right to criticize her perpetually.”

Jordan Peterson’s Twelve Rules for Life: An Antidote to Chaos is a sort of New and Improved Testament for the purpose-lacking young person (often but not always male) for whom tough-love directives like “clean up your room!” go down a lot easier when dispensed with a Jungian, evo-psych panache.

Quillette, a new online magazine that billed itself as “a platform for free thought”

the more honest we are about what we think, the more we’re alone with our thoughts. Just as you can’t fight Trumpism with tribalism, you can’t fight tribalism with a tribe.
