
Media Literacy for GLST 195

Information Media and Digital Literacy for GLST 195: Global Society & Citizenship    

Instructor: Prof. Chuks Ugochukwu. Per the syllabus:

COURSE LEARNING OUTCOMES:
The course meets Liberal Education Program (LEP) Goal Area 8: Global Perspective, and Goal Area 9: Ethical and Civic Responsibility objectives.
Goal Area 8: Global Perspective. Objective: Develop a comparative perspective and understanding of one’s place in a global context.

Students will be able to:

  1. Describe and analyze political, economic, and cultural elements which influence relations of states and societies in their historical and contemporary dimensions.
  2. Demonstrate knowledge of cultural, social, religious and linguistic differences.
  3. Analyze specific international problems, illustrating the cultural, economic, and political differences that affect their solution.
  4. Understand the role of a world citizen and the responsibility world citizens share for their common global future.

Goal Area 9: Ethical and Civic Responsibility. Objective: Understand and evaluate ethical or civic issues and theories, and participate in active citizenship or ethical judgment.

OUR HUSKY COMPACT

Our Husky Compact is a bond shared by St. Cloud State University and its students that a SCSU education will prepare students for a life of growth and fulfillment – intellectually, professionally, and personally. When students graduate with an SCSU education, they will:

  • Think Creatively and Critically
  • Seek and Apply Knowledge
  • Communicate Effectively
  • Integrate Existing and Evolving Technologies
  • Engage as a Member of a Diverse and Multicultural World
  • Act with Personal Integrity and Civic Responsibility

+++++++++++++++++++++

Week ???: Information – Media and Digital Literacy

Most students can’t tell fake news from real news, study shows
Read more: https://blog.stcloudstate.edu/ims/2017/03/28/fake-news-3/

Module 1
Video to introduce students to the readings and expected tasks:

https://mediaspace.minnstate.edu/media/GLST+195+Module+1/1_32242qua

  1. Fake News / Misinformation / Disinformation
    1. Definitions
      1. Fake news, alternative facts
        https://blog.stcloudstate.edu/ims?s=fake+news
        https://blog.stcloudstate.edu/ims?s=alternative+facts

Mini-Assignment: After reading the information from the links above, take a minute to write out your own definitions of (1) fake news and (2) alternative facts.

      2. Misinformation vs disinformation
        https://blog.stcloudstate.edu/ims/2018/02/18/fake-news-disinformation-propaganda/

Mini-Assignment: After reading the information from the links above, take a minute to write out your own definitions of (1) misinformation and (2) disinformation. What are their main characteristics? How do they differ?

      3. Propaganda

Mini-Assignment: What is Propaganda? How do misinformation, disinformation, fake news and alternative facts fit into the process of propaganda?

      4. Conspiracy theories
        https://blog.stcloudstate.edu/ims?s=conspiracy+theories

Mini-Assignment: Using the information from the links above, can you establish the connection between conspiracy theories, propaganda, mis- and disinformation, fake news, alternative facts and social media?

      5. Bots, trolls
        https://blog.stcloudstate.edu/ims/2017/11/22/bots-trolls-and-fake-news/
        https://blog.stcloudstate.edu/ims/2020/04/30/fake-social-media-accounts-and-politicians/
        https://blog.stcloudstate.edu/ims/2020/01/20/bots-and-disinformation/

Mini-Assignment: Using the information from the links above and/or information you have collected, can you define the role of bots and trolls in social media with regard to propaganda and conspiracy theories?

      6. Clickbait
        Filter bubbles, echo chambers
        This short (8 min) video explains filter bubbles:
        https://www.ted.com/talks/eli_pariser_beware_online_filter

Mini-Assignment: Based on your own information and experience, as well as the information offered in the links above, how would you describe your own resistance to clickbait?

Assignment: which challenges do you identify with?
The Challenge of Teaching News Literacy:
https://soundcloud.com/edsurge/the-challenge-of-teaching-news-literacy

25 min podcast.

In a short paragraph, identify the issues you see as important to address in order to improve your own news literacy.
time to accomplish the assignment: ~45 min (including listening to the podcast).

  2. Why is it important to understand these processes?

Assignment: why is it important?

In a short paragraph, share your initial feelings about fake news, misinformation and disinformation. 1. Do you think it is important at all? 2. If yes, why; if not, why not? 3. If yes, what is its importance and impact?
time to accomplish the assignment: ~5-10 min

  3. How to deal with these processes
    1. How do we apply hands-on critical thinking to withstand these processes?
      1. What is critical thinking?

disciplined thinking that is clear, rational, open-minded, and informed by evidence: https://www.dictionary.com/browse/critical-thinking

    2. Ability to research

Ability to find reliable information

      1. Popular media

How to spot fake news:
https://blog.stcloudstate.edu/ims/2017/03/15/fake-news-bib/
Can machines create fake news?
https://blog.stcloudstate.edu/ims/2019/10/24/fake-news-generator/
Can machines “clean up” fake from real?
https://blog.stcloudstate.edu/ims/2020/11/16/identifying-fake-news-by-90/
What can humans do to distinguish fake from real? Consider these five factors:
https://blog.stcloudstate.edu/ims/2017/06/26/fake-news-real-news/

Considering the second factor (who published it), here is a scale to consider when evaluating the veracity of your sources:
https://blog.stcloudstate.edu/ims/2017/08/13/library-spot-fake-news/
(can you find your favorite magazine/newspaper on the graphic?)

https://blog.stcloudstate.edu/ims/2016/12/14/fake-news-2/

(can you find your favorite news organization on the graphic?)

Factcheckers/Factchecking Organizations:
https://blog.stcloudstate.edu/ims/2017/03/28/fake-news-resources/

https://blog.stcloudstate.edu/ims/2016/12/14/fake-news-2/

      2. Peer-reviewed literature

As with the assessment of popular information sources, academia requires rigorous vetting of the sources you will be using for your academic work. In the 21st century, the ability to find information in peer-reviewed journals may not be sufficient to ensure accurate and reliable use of that information in your research and writing. After selecting peer-reviewed literature, you must be able to evaluate and determine the veracity and reliability of those sources.
How do you evaluate a source of information to determine whether it is appropriate for academic/scholarly use? There is no set “checklist” to complete, but below are some criteria to consider when you are evaluating a source.

Here is a short (4 min) video introducing you to the well-known basics for evaluation of academic literature:
https://youtu.be/qUd_gf2ypk4

  1. ACCURACY
    1. Does the author cite reliable sources?
    2. How does the information compare with that in other works on the topic?
    3. Can you determine if the information has gone through peer-review?
    4. Are there factual, spelling, typographical, or grammatical errors?
  2. AUDIENCE
    1. Who do you think the authors are trying to reach?
    2. Is the language, vocabulary, style and tone appropriate for the intended audience?
    3. What are the audience demographics? (age, educational level, etc.)
    4. Are the authors targeting a particular group or segment of society?
  3. AUTHORITY
    1. Who wrote the information found in the article or on the site?
    2. What are the author’s credentials/qualifications for this particular topic?
    3. Is the author affiliated with a particular organization or institution?
    4. What does that affiliation suggest about the author?
  4. CURRENCY
    1. Is the content current?
    2. Does the date of the information directly affect the accuracy or usefulness of the information?
  5. OBJECTIVITY/BIAS
    1. What is the author’s or website’s point of view?
    2. Is the point of view subtle or explicit?
    3. Is the information presented as fact or opinion?
    4. If opinion, is the opinion supported by credible data or informed argument?
    5. Is the information one-sided?
    6. Are alternate views represented?
    7. Does the point of view affect how you view the information?
  6. PURPOSE
    1. What is the author’s purpose or objective: to explain, provide new information or news, entertain, persuade or sell?
    2. Does the purpose affect how you view the information presented?

In 2021, however, all the suggestions above may not be sufficient to distinguish a reliable source of information, even if the article made it through the peer-review process. In time, you should learn to evaluate the authors’ research methods and decide whether they are reliable. The same applies to research findings and conclusions.

Assignment: seeking reliable information

From your syllabus weekly themes (1. Food; 2. Health; 3. Energy; 4. Environment; 5. Security), choose a topic of interest.
For example: Food: raising cattle for food contributes to climate change because of methane gas; Health: COVID is the same (or not the same) as the flu; Energy: fossil energy is bad (or good) for the environment; etc.
Please consult with me (email me for a zoom appointment: pmiltenoff@stcloudstate.edu), if you need to discuss the choice and narrowing down of the topic.
Once you decide on the topic, do the research by collecting four sources of information:

The first pair of sources will be from popular media, and the two articles must take opposite approaches, arguments and understandings of the issue. For example, one article will claim fossil energy is bad for the environment, and the other will argue fossil fuel has an insignificant impact on climate change. You must be able to evaluate the veracity and the leaning of each source. A source can be a newspaper or magazine article, video (TV or social media), audio (podcasts, TV, etc.), or a presentation (PowerPoint, SlideShare, etc.).
Having trouble finding opposing resources? Feel welcome to search for your topic among these news outlets on the conservative side:
https://www.conservapedia.com/Top_Conservative_news_websites
and these on the liberal side:
https://aelieve.com/rankings/websites/category/news-media/top-liberal-websites/
In the same fashion, you will evaluate the second pair of sources, drawn from peer-reviewed journals. Each source must take a different approach, argument and understanding of the issue, and you must evaluate the robustness of each research method.

time to accomplish the assignment: ~30 min

Module 2 (video to introduce students to the readings and expected tasks)

  1. Digital Citizenship, Global Citizenship and Multiculturalism
    1. Definitions
    2. Global Citizenship
      seek global sources:

start reading:
e.g., start following and reading several news outlets from other countries; with time, you should be able to detect differences between the opinions and facts presented by those outlets and by your current sources of information:
Spiegel International (German, left-leaning)
https://www.facebook.com/spiegelinternational
Le Monde Diplomatique
https://www.facebook.com/mondediplo
El Pais (Spanish, left-leaning)
https://www.facebook.com/elpaisinenglish

Moscow Times (Russian, left-leaning)
https://www.facebook.com/MoscowTimes
The Epoch Times (Chinese, far-right)
https://www.theepochtimes.com/

Start watching (smart phone, laptop) news feeds, live or vlogs (video blogs):

Africa News
https://youtu.be/NQjabLGdP5g
Nigeria Live (you can find any other country on YouTube by typing the name of the country and adding “live”)
https://youtu.be/ATJc9LyPZj8
Al Jazeera in English
https://youtu.be/GXfsI-zZO7s

Deutsche Welle
https://www.youtube.com/user/deutschewelleenglish

BBC
https://www.youtube.com/user/bbcnews

Russia Today
https://www.youtube.com/user/RussiaToday

China Today
https://www.youtube.com/channel/UCBOqkAGTtzZVmKvY4SwdZ2g
India News
https://www.youtube.com/user/IndiaTV
You can also follow specific people’s vlogs, e.g.
Alexei Navalny’s vlog has English subtitles
https://www.youtube.com/user/NavalnyRu

France 24 Live
https://youtu.be/HeTWwH1a0CQ

 

Start listening (smart phone, laptop):
BBC
https://play.google.com/store/apps/details?id=uk.co.bbc.android.iplayerradio&hl=en_US&gl=US (Android app)

https://apps.apple.com/gb/app/bbc-sounds/id1380676511 (iOS app)

Deutsche Welle
https://play.google.com/store/apps/details?id=com.exlivinapps.deutschewelleradioappde&hl=en_US&gl=US (Android app)

https://apps.apple.com/us/developer/deutsche-welle/id305630107 (iOS app)

 

Assignment:
Global Citizenship and Multiculturalism and Information and Media Literacy

Study the following tweet feed
https://blog.stcloudstate.edu/ims/2021/02/18/facebook-google-australia/
If the information from the tweet feed is insufficient, research the issue by seeking reliable sources. (In a short paragraph, defend your choice of reliable sources.)
What do you see as the more important issue: Facebook’s stance that it can be a subject of Australian law, or the Australian government’s stance that Facebook is interfering in Australian life with its news delivery? Is Facebook a news outlet or a platform for news outlets? Does Facebook need to be regulated? By whom: does each country have to regulate Facebook, or does Facebook need to be regulated globally?

time to accomplish the assignment: ~30 min

 

Module 3 (video to introduce students to the readings and expected tasks)

  1. Assistance for work on the final project / paper

 

++++++++++++++++++++++++++

Here is a list of additional materials and readings on fake news:
https://www.ted.com/talks/eli_pariser_beware_online_filter

AI artificial intelligence

How to prevent AI from taking over the world

If we can’t teach machines to internalise human values and make decisions based on them, we must accept – and ensure – that AI is of limited use to us.

https://www.newstatesman.com/science-tech/2021/02/how-prevent-ai-taking-over-world

The so-called “Value Alignment Problem” – how to get AI to respect and conform to human values – is arguably the most important, if vexing, problem faced by AI developers today.

Stuart Russell, a leading AI scientist at Berkeley, offers an intriguing solution. Let’s design AI so that its goals are unclear. We then allow it to fill in the gaps by observing human behaviour. By learning its values from humans, the AI will make our goals its goals.

++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=artificial+intelligence

student data privacy

Russia-linked spyware found on school laptops given to children by government (via r/worldnews)

https://www.independent.co.uk/life-style/gadgets-and-tech/russia-spyware-school-laptops-b1790759.html

“Upon unboxing and preparing them, it was discovered that a number of the laptops were infected with a self-propagating network worm,” one teacher reportedly wrote.

Laptops provided to schools in order to support vulnerable children learning from home during the coronavirus pandemic have been found to contain viruses.

Teachers from a Bradford school shared details about suspicious files they found on the machines which appeared to be trying to contact Russian servers, the BBC reported.

The government has sent schools over 800,000 laptops in order to help poorer children get the support they need, but it has been roundly criticised over both the quality of the laptops and the time it takes to receive them.

+++++++++
more on student data privacy and China in this IMS blog
https://blog.stcloudstate.edu/ims/2018/10/31/students-data-privacy/

Information Overload Fake News Social Media

Information Overload Helps Fake News Spread, and Social Media Knows It

Understanding how algorithm manipulators exploit our cognitive vulnerabilities empowers us to fight back

https://www.scientificamerican.com/article/information-overload-helps-fake-news-spread-and-social-media-knows-it/

a minefield of cognitive biases.

People who behaved in accordance with them—for example, by staying away from the overgrown pond bank where someone said there was a viper—were more likely to survive than those who did not.

Compounding the problem is the proliferation of online information. Viewing and producing blogs, videos, tweets and other units of information called memes has become so cheap and easy that the information marketplace is inundated. My note: folksonomy at its worst.

At the University of Warwick in England and at Indiana University Bloomington’s Observatory on Social Media (OSoMe, pronounced “awesome”), our teams are using cognitive experiments, simulations, data mining and artificial intelligence to comprehend the cognitive vulnerabilities of social media users, and developing analytical and machine-learning aids to fight social media manipulation.

As Nobel Prize–winning economist and psychologist Herbert A. Simon noted, “What information consumes is rather obvious: it consumes the attention of its recipients.”

attention economy

Nodal diagrams representing 3 social media networks show that more memes correlate with higher load and lower quality of information shared

Our models revealed that even when we want to see and share high-quality information, our inability to view everything in our news feeds inevitably leads us to share things that are partly or completely untrue.
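My note: the dynamic described here (finite attention forcing lower-quality shares) can be illustrated with a toy simulation. This is a sketch only, not the OSoMe authors' actual model; all parameter names below are invented.

```python
import random

def simulate_feed_sharing(n_memes=1000, attention=10, trials=2000, seed=42):
    """Toy model: each meme has a hidden quality in [0, 1]. A user sees only
    `attention` randomly chosen memes per session and reshares the best one
    they saw. Even a quality-seeking user rarely encounters the best memes."""
    rng = random.Random(seed)
    qualities = [rng.random() for _ in range(n_memes)]
    best_available = max(qualities)
    shared = []
    for _ in range(trials):
        visible = rng.sample(qualities, attention)  # the crowded feed
        shared.append(max(visible))                 # share the best visible meme
    return best_available, sum(shared) / len(shared)

best, avg_shared = simulate_feed_sharing()
# avg_shared falls short of best: finite attention, not intent,
# limits the quality of what gets shared.
```

Increasing `n_memes` while holding `attention` fixed widens the gap, mirroring the article's point that a flooded information marketplace lowers the quality of what circulates.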

Frederic Bartlett
Cognitive biases greatly worsen the problem.

We now know that our minds do this all the time: they adjust our understanding of new information so that it fits in with what we already know. One consequence of this so-called confirmation bias is that people often seek out, recall and understand information that best confirms what they already believe.
This tendency is extremely difficult to correct.

Making matters worse, search engines and social media platforms provide personalized recommendations based on the vast amounts of data they have about users’ past preferences.

pollution by bots

Nodal diagrams representing 2 social media networks show that when more than 1% of real users follow bots, low-quality information prevails

Social Herding

social groups create a pressure toward conformity so powerful that it can overcome individual preferences, and by amplifying random early differences, it can cause segregated groups to diverge to extremes.

Social media follows a similar dynamic. We confuse popularity with quality and end up copying the behavior we observe.
Information is transmitted via “complex contagion”: when we are repeatedly exposed to an idea, typically from many sources, we are more likely to adopt and reshare it.
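My note: the "complex contagion" rule can be captured in a line of code. A hypothetical sketch (the threshold of three distinct sources is invented for illustration):

```python
def adopts(exposure_sources, threshold=3):
    """Complex contagion: adopt an idea only after seeing it from at least
    `threshold` distinct sources; simple contagion would need just one."""
    return len(set(exposure_sources)) >= threshold

adopts(["alice", "bob", "alice"])   # 2 distinct sources -> False
adopts(["alice", "bob", "carol"])   # 3 distinct sources -> True
```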

Twitter users with extreme political views are more likely than moderate users to share information from low credibility sources

In addition to showing us items that conform with our views, social media platforms such as Facebook, Twitter, YouTube and Instagram place popular content at the top of our screens and show us how many people have liked and shared something. Few of us realize that these cues do not provide independent assessments of quality.

programmers who design the algorithms for ranking memes on social media assume that the “wisdom of crowds” will quickly identify high-quality items; they use popularity as a proxy for quality. My note: again, ill-conceived folksonomy.

Echo Chambers
the political echo chambers on Twitter are so extreme that individual users’ political leanings can be predicted with high accuracy: you have the same opinions as the majority of your connections. This chambered structure efficiently spreads information within a community while insulating that community from other groups.
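My note: the "high accuracy" prediction mentioned here amounts to little more than a majority vote over one's connections. A minimal sketch (the network and labels below are hypothetical):

```python
from collections import Counter

def predict_leaning(user, follows, known_leanings):
    """Guess a user's political leaning as the most common label among the
    accounts they follow; the chambered structure of real networks makes
    this simple heuristic surprisingly accurate."""
    labels = [known_leanings[f] for f in follows[user] if f in known_leanings]
    return Counter(labels).most_common(1)[0][0] if labels else None

follows = {"user_a": ["acct1", "acct2", "acct3"]}
known_leanings = {"acct1": "left", "acct2": "left", "acct3": "right"}
predict_leaning("user_a", follows, known_leanings)  # -> "left"
```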

socially shared information not only bolsters our biases but also becomes more resilient to correction.

The researchers developed machine-learning algorithms to detect social bots. One of these, Botometer, is a public tool that extracts 1,200 features from a given Twitter account to characterize its profile, friends, social network structure, temporal activity patterns, language and other features. The program compares these characteristics with those of tens of thousands of previously identified bots to give the Twitter account a score for its likely use of automation.
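My note: a heavily simplified sketch of how such a detector scores an account: compare its feature vector with averaged profiles of known bots and known humans. Botometer itself uses a trained classifier over roughly 1,200 features; the three features below are invented for illustration.

```python
def bot_score(features, reference_bots, reference_humans):
    """Nearest-mean sketch of bot scoring: measure how much closer an
    account's feature vector sits to the average known-bot profile than
    to the average known-human profile. Score near 1 means bot-like."""
    def dist(v, w):
        return sum((a - b) ** 2 for a, b in zip(v, w)) ** 0.5

    def mean(vectors):
        return [sum(col) / len(vectors) for col in zip(*vectors)]

    d_bot = dist(features, mean(reference_bots))
    d_human = dist(features, mean(reference_humans))
    return d_human / (d_bot + d_human)

# Hypothetical features: [tweets_per_day / 100, follower_ratio, night_activity]
bots = [[0.9, 0.1, 0.8], [0.95, 0.05, 0.9]]
humans = [[0.1, 0.6, 0.2], [0.2, 0.5, 0.1]]
score = bot_score([0.85, 0.1, 0.85], bots, humans)  # bot-like account
```

A real system would train a classifier on labeled accounts rather than use raw distances, but the idea is the same: known bots and known humans anchor the score.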

Some manipulators play both sides of a divide through separate fake news sites and bots, driving political polarization or monetization by ads.
recently uncovered a network of inauthentic accounts on Twitter that were all coordinated by the same entity. Some pretended to be pro-Trump supporters of the Make America Great Again campaign, whereas others posed as Trump “resisters”; all asked for political donations.

a mobile app called Fakey that helps users learn how to spot misinformation. The game simulates a social media news feed, showing actual articles from low- and high-credibility sources. Users must decide what they can or should not share and what to fact-check. Analysis of data from Fakey confirms the prevalence of online social herding: users are more likely to share low-credibility articles when they believe that many other people have shared them.

Hoaxy, shows how any extant meme spreads through Twitter. In this visualization, nodes represent actual Twitter accounts, and links depict how retweets, quotes, mentions and replies propagate the meme from account to account.

Free communication is not free. By decreasing the cost of information, we have decreased its value and invited its adulteration. 

AI and ed research

https://www.scienceopen.com/document/read?vid=992eaf61-35dd-454e-aa17-f9f8216b381b

This article presents an examination of how education research is being remade as an experimental data-intensive science. AI is combining with learning science in new ‘digital laboratories’ where ownership over data, and power and authority over educational knowledge production, are being redistributed to research assemblages of computational machines and scientific expertise.

Research across the sciences, humanities and social sciences is increasingly conducted through digital knowledge machines that are reconfiguring the ways knowledge is generated, circulated and used (Meyer and Schroeder, 2015).

Knowledge infrastructures, such as those of statistical institutes or research-intensive universities, have undergone significant digital transformation with the arrival of data-intensive technologies, with knowledge production now enacted in myriad settings, from academic laboratories and research institutes to commercial research and development studios, think tanks and consultancies. Datafied knowledge infrastructures have become hubs of command and control over the creation, analysis and exchange of data (Bigo et al., 2019).

The combination of AI and learning science into an AILSci research assemblage consists of particular forms of scientific expertise embodied by knowledge actors – individuals and organizations – identified by categories including science of learning, AIED, precision education and learning engineering.

Precision education overtly uses psychological, neurological and genomic data to tailor or personalize learning around the unique needs of the individual (Williamson, 2019). Precision education approaches include cognitive tracking, behavioural monitoring, brain imaging and DNA analysis.

Expert power is therefore claimed by those who can perform big data analyses, especially those able to translate and narrate the data for various audiences. Likewise, expert power in education is now claimed by those who can enact data-intensive science of learning, precision education and learning engineering research and development, and translate AILSci findings into knowledge for application in policy and practitioner settings.

the thinking of a thinking infrastructure is not merely a conscious human cognitive process, but relationally performed across humans and socio-material strata, wherein interconnected technical devices and other forms ‘organize thinking and thought and direct action’.
As an infrastructure for AILSci analyses, these technologies at least partly structure how experts think: they generate new understandings and knowledge about processes of education and learning that are only thinkable and knowable due to the computational machinery of the research enterprise.

Big data-based molecular genetics studies are part of a bioinformatics-led transformation of biomedical sciences based on analysing exceptional volumes of data (Parry and Greenhough, 2018), which has transformed the biological sciences to focus on structured and computable data rather than embodied evidence itself.

Isin and Ruppert (2019) have recently conceptualized an emergent form of power that they characterize as sensory power. Building on Foucault, they note how sovereign power gradually metamorphosed into disciplinary power and biopolitical forms of statistical regulation over bodies and populations.
Sensory power marks a shift to practices of data-intensive sensing, and to the quantified tracking, recording and representing of living pulses, movements and sentiments through devices such as wearable fitness monitors, online natural-language processing and behaviour-tracking apps. Davies (2019: 515–20) designates these as ‘techno-somatic real-time sensing’ technologies that capture the ‘rhythms’ and ‘metronomic vitality’ of human bodies, and bring about ‘new cyborg-type assemblages of bodies, codes, screens and machines’ in a ‘constant cybernetic loop of action, feedback and adaptation’.

Techno-somatic modes of neural sensing, using neurotechnologies for brain imaging and neural analysis, are the next frontier in AILSci. Real-time brainwave sensing is being developed and trialled in multiple expert settings.

+++++++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=artificial+intelligence

women and immersive technologies

https://www.cnbc.com/2020/05/06/women-in-tech-jobs-in-artificial-intelligence-grow-amid-coronavirus.html

International Data Corporation says it expects the number of AI jobs globally to grow 16% this year.

In a new report released Wednesday, IBM found that the majority (85%) of AI professionals think the industry has become more diverse in recent years.

Of 3,200 people surveyed across North America, Europe and India, 86% said they are now confident in AI systems’ ability to make decisions without bias.

A plurality of men (46%) said they became interested in a tech career in high school or earlier, while a majority of women (53%) only considered it a possible path during their undergraduate degree or grad school.

++++++++++++++++
more on immersive technologies in this IMS blog
https://blog.stcloudstate.edu/ims?s=immersive+technologies

disruption innovation

https://www.newyorker.com/magazine/2014/06/23/the-disruption-machine

Michael Porter, a professor at the Harvard Business School. The scholar who in some respects became his successor, Clayton M. Christensen, entered a doctoral program at the Harvard Business School in 1989 and joined the faculty in 1992. Christensen was interested in why companies fail. In his 1997 book, “The Innovator’s Dilemma,” he argued that, very often, it isn’t because their executives made bad decisions but because they made good decisions, the same kind of good decisions that had made those companies successful for decades. (The “innovator’s dilemma” is that “doing the right thing is the wrong thing.”)

Christensen called “disruptive innovation”: the selling of a cheaper, poorer-quality product that initially reaches less profitable customers but eventually takes over and devours an entire industry.

Christensen has co-written books urging disruptive innovation in higher education (“The Innovative University”), public schools (“Disrupting Class”), and health care (“The Innovator’s Prescription”).

Startups are ruthless and leaderless and unrestrained, and they seem so tiny and powerless, until you realize, but only after it’s too late, that they’re devastatingly dangerous: Bang! Ka-boom! Think of it this way: the Times is a nation-state; BuzzFeed is stateless. Disruptive innovation is competitive strategy for an age seized by terror.

Replacing “progress” with “innovation” skirts the question of whether a novelty is an improvement: the world may not be getting better and better but our devices are getting newer and newer.

The word “innovate”—to make new—used to have chiefly negative connotations: it signified excessive novelty, without purpose or end.

Joseph Schumpeter, in his landmark study of business cycles, used the word to mean bringing new products to market, a usage that spread slowly, and only in the specialized literatures of economics and business.

Disruptive innovation can reliably be seen only after the fact.

Christensen has compared the theory of disruptive innovation to a theory of nature: the theory of evolution. But among the many differences between disruption and evolution is that the advocates of disruption have an affinity for circular arguments.

Like the bursting of the dot-com bubble, the meltdown didn’t dim the fervor for disruption; instead, it fuelled it, because these products of disruption contributed to the panic on which the theory of disruption thrives.

People aren’t disk drives. Public schools, colleges and universities, churches, museums, and many hospitals, all of which have been subjected to disruptive innovation, have revenues and expenses and infrastructures, but they aren’t industries in the same way that manufacturers of hard-disk drives or truck engines or drygoods are industries. Journalism isn’t an industry in that sense, either.

Historically, institutions like museums, hospitals, schools, and universities have been supported by patronage, donations made by individuals or funding from church or state. The press has generally supported itself by charging subscribers and selling advertising. (Underwriting by corporations and foundations is a funding source of more recent vintage.) Charging for admission, membership, subscriptions and, for some, earning profits are similarities these institutions have with businesses. Still, that doesn’t make them industries, which turn things into commodities and sell them for gain.

Christensen and Eyring’s recommendations for the disruption of the modern university include a “mix of face-to-face and online learning.” The publication of “The Innovative University,” in 2011, contributed to a frenzy for Massive Open Online Courses, or moocs, at colleges and universities across the country, including a collaboration between Harvard and M.I.T., which was announced in May of 2012. Shortly afterward, the University of Virginia’s panicked board of trustees attempted to fire the president, charging her with jeopardizing the institution’s future by failing to disruptively innovate with sufficient speed.

+++++++++++++++
more on Clayton Christensen in this IMS blog
https://blog.stcloudstate.edu/ims?s=clayton

data interference

APRIL 21, 2019 Zeynep Tufekci

Think You’re Discreet Online? Think Again

Because of technological advances and the sheer amount of data now available about billions of other people, discretion no longer suffices to protect your privacy. Computer algorithms and network analyses can now infer, with a sufficiently high degree of accuracy, a wide range of things about you that you may have never disclosed, including your moods, your political beliefs, your sexual orientation and your health.

There is no longer such a thing as individually “opting out” of our privacy-compromised world.

In 2017, the newspaper The Australian published an article, based on a leaked document from Facebook, revealing that the company had told advertisers that it could predict when younger users, including teenagers, were feeling “insecure,” “worthless” or otherwise in need of a “confidence boost.” Facebook was apparently able to draw these inferences by monitoring photos, posts and other social media data.

In 2017, academic researchers, armed with data from more than 40,000 Instagram photos, used machine-learning tools to accurately identify signs of depression in a group of 166 Instagram users. Their computer models turned out to be better predictors of depression than humans who were asked to rate whether photos were happy or sad and so forth.
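As a rough illustration of how such inference works, the sketch below trains a tiny logistic-regression classifier on entirely synthetic "photo feature" data (brightness and saturation values, with made-up labels). This is a hypothetical toy, not the researchers' actual model or data; it only shows how a pattern in seemingly innocuous signals can be turned into a prediction about a person.

```python
import math
import random

# Synthetic "photo features" per user: [brightness, saturation],
# plus a binary label (1 = depressed). The clusters below are
# invented for illustration, loosely echoing the study's reported
# trend that darker, less saturated photos correlated with depression.
random.seed(0)

def make_user(depressed):
    base = (0.3, 0.3) if depressed else (0.7, 0.6)
    return ([base[0] + random.gauss(0, 0.05),
             base[1] + random.gauss(0, 0.05)], depressed)

data = [make_user(i % 2) for i in range(200)]

# Train a minimal logistic-regression model with gradient descent.
w = [0.0, 0.0]
b = 0.0
for _ in range(500):
    for x, y in data:
        z = w[0] * x[0] + w[1] * x[1] + b
        p = 1 / (1 + math.exp(-z))       # predicted probability
        g = p - y                        # gradient of the log-loss
        w[0] -= 0.1 * g * x[0]
        w[1] -= 0.1 * g * x[1]
        b -= 0.1 * g

def predict(x):
    """Return True if the model infers 'depressed' from photo features."""
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b))) > 0.5

accuracy = sum(predict(x) == bool(y) for x, y in data) / len(data)
```

The point of the toy is the asymmetry Tufekci describes: the user never disclosed a diagnosis, yet a simple model trained on other people's data can infer one from incidental traces.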

Computational inference can also be a tool of social control. The Chinese government, having gathered biometric data on its citizens, is trying to use big data and artificial intelligence to single out “threats” to Communist rule, including the country’s Uighurs, a mostly Muslim ethnic group.

+++++++++++++

Zeynep Tufekci and Seth Stephens-Davidowitz: Privacy is over

https://www.centreforideas.com/article/zeynep-tufekci-and-seth-stephens-davidowitz-privacy-over

+++++++++++

Zeynep Tufekci writes about security and data privacy for the NY Times and about disinformation’s threat to democracy for WIRED.

++++++++++
more on privacy in this IMS blog
https://blog.stcloudstate.edu/ims?s=privacy

OLC Collaborate

https://onlinelearningconsortium.org/attend-2019/innovate/

schedule:

https://onlinelearningconsortium.org/attend-2019/innovate/program/all_sessions/#streamed

Wednesday

++++++++++++++++
THE NEW PROFESSOR: HOW I PODCASTED MY WAY INTO STUDENTS’ LIVES (AND HOW YOU CAN, TOO)

Concurrent Session 1

https://onlinelearningconsortium.org/olc-innovate-2019-session-page/?session=6734&kwds=

+++++++++++++

Creating A Cost-Free Course

+++++++++++++++++

Idea Hose: AI Design For People
Date: Wednesday, April 3rd
Time: 3:30 PM to 4:15 PM
Conference Session: Concurrent Session 3
Streamed session
Lead Presenter: Brian Kane (General Design LLC)
Track: Research: Designs, Methods, and Findings
Location: Juniper A
Session Duration: 45min
Brief Abstract: What happens when you apply design thinking to AI? AI presents a fundamental change in the way people interact with machines. By applying design thinking to the way AI is made and used, we can generate an unlimited number of new ideas for products and experiences that people will love and use.
https://onlinelearningconsortium.org/olc-innovate-2019-session-page/?session=6964&kwds=
Notes from the session:
design thinking: get out of old mental models; new narratives, beyond the sci-fi movies.
narrative generators: AI Design for People stream.
we need machines to make mistakes; AI even more than traditional software.
Lessons learned: don’t replace people.
creativity engines: automated creativity.
trends:
https://www.androidauthority.com/nvidia-jetson-nano-966609/
https://community.infiniteflight.com/t/virtualhub-ios-and-android-free/142837?u=sudafly
http://bit.ly/VirtualHub
Thursday
Chatbots, Game Theory, And AI: Adapting Learning For Humans, Or Innovating Humans Out Of The Picture?
Date: Thursday, April 4th
Time: 8:45 AM to 9:30 AM
Conference Session: Concurrent Session 4
Streamed session
Lead Presenter: Matt Crosslin (University of Texas at Arlington LINK Research Lab)
Track: Experiential and Life-Long Learning
Location: Cottonwood 4-5
Session Duration: 45min
Brief Abstract: How can teachers utilize chatbots and artificial intelligence in ways that won’t remove humans from the education picture? Using tools like Twine and Recast.AI chatbots, this session will focus on how to build adaptive content that allows learners to create their own heutagogical educational pathways based on individual needs.
++++++++++++++++
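The adaptive, learner-chosen pathways the abstract describes can be sketched as a minimal branching lesson tree. This is a hypothetical illustration of the idea, not the presenters' Twine or Recast.AI implementation; the node names and prompts are invented.

```python
# Each node maps to (prompt, {answer: next_node}); learners branch
# through the same content along different routes.
LESSON = {
    "start": ("Do you want a refresher on the basics? (yes/no)",
              {"yes": "basics", "no": "advanced"}),
    "basics": ("Basics module: key terms and definitions. Ready for more? (yes/no)",
               {"yes": "advanced", "no": "end"}),
    "advanced": ("Advanced module: apply the concepts to a case study. (done)",
                 {"done": "end"}),
    "end": ("Pathway complete.", {}),
}

def run(answers):
    """Walk the lesson tree with a scripted list of learner answers."""
    node, transcript = "start", []
    for answer in answers:
        prompt, branches = LESSON[node]
        transcript.append(prompt)
        node = branches.get(answer, node)  # unrecognized answers re-ask
    transcript.append(LESSON[node][0])
    return transcript

# Two learners take different routes through the same content:
novice = run(["yes", "yes", "done"])
expert = run(["no", "done"])
```

The human stays in the picture because the tree only routes learners among materials a teacher authored; nothing is generated or graded by the machine.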

This Is Us: Fostering Effective Storytelling Through EdTech & Student’s Influence As Digital Citizens
Date: Thursday, April 4th
Time: 9:45 AM to 10:30 AM
Conference Session: Concurrent Session 5
Streamed session
Lead Presenter: Maikel Alendy (FIU Online)
Co-presenter: Sky V. King (FIU Online – Florida International University)
Track: Teaching and Learning Practice
Location: Cottonwood 4-5
Session Duration: 45min
Brief Abstract: “This is Us” demonstrates how leveraging storytelling in learning engages students to communicate their authentic stories effectively, moving from consumers to creators and influencers. Strategies and examples using several edtech tools will be reviewed, addressing responsibility as a digital citizen, information and digital literacy, and online privacy.
++++++++++++++++++

Personalized Learning At Scale: Using Adaptive Tools & Digital Assistants
Date: Thursday, April 4th
Time: 11:15 AM to 12:00 PM
Conference Session: Concurrent Session 6
Streamed session
Lead Presenter: Kristin Bushong (Arizona State University )
Co-presenter: Heather Nebrich (Arizona State University)
Track: Effective Tools, Toys and Technologies
Location: Juniper C
Session Duration: 45min
Brief Abstract: Considering today’s overstimulated lifestyle, how do we engage busy learners to stay on task? Join this session to discover current efforts in implementing ubiquitous educational opportunities through customized interests and personalized learning aspirations (e.g., adaptive math tools, AI support communities, and memory management systems).
+++++++++++++

High-Impact Practices Online: Starting The Conversation
Date: Thursday, April 4th
Time: 1:15 PM to 2:00 PM
Conference Session: Concurrent Session 7
Streamed session
Lead Presenter: Katie Linder (Oregon State University)
Co-presenter: June Griffin (University of Nebraska-Lincoln)
Track: Teaching and Learning Practice
Location: Cottonwood 4-5
Session Duration: 45min
Brief Abstract: The concept of High-Impact Educational Practices (HIPs) is well known, but the conversation about transitioning HIPs online is new. In this session, contributors to the edited collection High-Impact Practices in Online Education will share current HIP research and offer ideas for participants to reflect on regarding implementing HIPs in online environments.
https://www.aacu.org/leap/hips
https://www.aacu.org/sites/default/files/files/LEAP/HIP_tables.pdf
+++++++++++++++++++++++

Human Skills For Digital Natives: Expanding Our Definition Of Tech And Media Literacy
Date: Thursday, April 4th
Time: 3:45 PM to 5:00 PM
Streamed session
Lead Presenter: Manoush Zomorodi (Stable Genius Productions)
Track: N/A
Location: Adams Ballroom
Session Duration: 1hr 15min
Brief Abstract: How can we ensure that students and educators thrive in increasingly digital environments, where change is the only constant? In this keynote, author and journalist Manoush Zomorodi shares her pioneering approach to researching the effects of technology on our behavior. Her unique brand of journalism includes deep-dive investigations into such timely topics as personal privacy, information overload, and the Attention Economy. These interactive multimedia experiments with tens of thousands of podcast listeners will inspire you to think creatively about how we use technology to educate and grow communities.

Friday

Anger Is An Energy
Date: Friday, April 5th
Time: 8:30 AM to 9:30 AM
Streamed session
Lead Presenter: Michael Caulfield (Washington State University-Vancouver)
Track: N/A
Location: Adams Ballroom
Position: 2
Session Duration: 60min
Brief Abstract: Years ago, John Lydon (then Johnny Rotten) sang that “anger is an energy.” And he was right, of course. Anger isn’t an emotion, like happiness or sadness. It’s a reaction, a swelling up of a confused urge. I’m a person profoundly uncomfortable with anger, yet I’ve found in my professional career that often my most impactful work begins in a place of anger: anger against injustice, inequality, lies, or corruption. And often it is that anger that gives me the energy and endurance to make a difference, to move the mountains that need to be moved. In this talk I want to think through our uneasy relationship with anger: how it can be helpful, and how it can destroy us if we’re not careful.
++++++++++++++++

Improving Online Teaching Practice, Creating Community And Sharing Resources
Date: Friday, April 5th
Time: 10:45 AM to 11:30 AM
Conference Session: Concurrent Session 10
Streamed session
Lead Presenter: Laurie Daily (Augustana University)
Co-presenter: Sharon Gray (Augustana University)
Track: Problems, Processes, and Practices
Location: Juniper A
Session Duration: 45min
Brief Abstract: The purpose of this session is to explore the implementation of a Community of Practice to support professional development, enhance online course and program development efforts, and foster community and engagement among full- and part-time faculty.
+++++++++++++++

It’s Not What You Teach, It’s HOW You Teach: A Story-Driven Approach To Course Design
Date: Friday, April 5th
Time: 11:45 AM to 12:30 PM
Conference Session: Concurrent Session 11
Streamed session
Lead Presenter: Katrina Rainer (Strayer University)
Co-presenter: Jennifer M McVay-Dyche (Strayer University)
Track: Teaching and Learning Practice
Location: Cottonwood 2-3
Session Duration: 45min
Brief Abstract: Learning is more effective and organic when we teach through the art of storytelling. At Strayer University, we are blending the principles of story-driven learning with research-based instructional design practices to create engaging learning experiences. This session will provide you with strategies to infuse stories into any lesson, course, or curriculum.

Conference on Digital Libraries

ACM/IEEE Joint Conference on Digital Libraries (JCDL)
June 2-6, 2019 – Urbana-Champaign, IL
Curated Knowledge. Connected People. Extraordinary Results.
UPDATED DEADLINE: January 25, 2019
JCDL welcomes interesting submissions ranging across theories, systems,
services, and applications. We invite those managing, operating, developing,
curating, evaluating, or utilizing digital libraries broadly defined, covering
academic or public institutions, including archives, museums, and social
networks. We seek involvement of those in iSchools, as well as working in
computer or information or social sciences and technologies. Multiple tracks
and sessions will ensure tailoring to researchers, practitioners, and diverse
communities including data science/analytics, data curation/stewardship,
information retrieval, human-computer interaction, hypertext (and Web/network
science), multimedia, publishing, preservation, digital humanities, machine
learning/AI, heritage/culture, health/medicine, policy, law, and privacy/
intellectual property.
Additional Topics of Interest:
In addition to the topics indicated above, the following are some of the many
topics that will be considered relevant, as long as connections are made to
digital libraries:
* Collaborative and participatory information environments
* Crowdsourcing and human computation
* Cyberinfrastructure architectures, applications, and deployments
* Distributed information systems
* Document genres
* Extracting semantics, entities, and patterns from large collections
* Information and knowledge systems
* Information visualization
* Infrastructure and service design
* Knowledge discovery
* Linked data and its applications
* Performance evaluation
* Personal digital information management
* Scientific data management
* Social media, architecture, and applications
* Social networks, virtual organizations and networked information
* User behavior and modeling
* User communities and user research
We invite submissions in many forms: short papers, long papers, panels,
posters, tutorials, and workshops. We also host a Doctoral Consortium.
Submission Deadlines:
Jan. 25, 2019 – Tutorial, workshop, full paper and short paper, and consortium
submissions
Jan. 29, 2019 – Panel, poster and demonstration submissions
Submissions are to be made in electronic format via the conference’s EasyChair
submission page. Please see the conference website for more details:
https://2019.jcdl.org/
