
K12 technology preparation

Better teaching through technology? Only with thoughtful preparation

Nov. 30, 2017

https://www.educationdive.com/news/better-teaching-through-technology-only-with-thoughtful-preparation/511896/

Dive Brief:

  • Research from the Yale Center for Teaching and Learning highlights the ups and downs of classroom tech use, contrasting the increased engagement that comes from using familiar platforms for assignments with the decreased motivation and grades that can follow from limitless internet exposure, eSchool News reports.
  • Educators must take a cautious approach to tech use, one that doesn’t make students overly reliant upon it to complete tasks and solve problems, using social networking and collaborative platforms as a means to an end rather than a be-all solution.
  • Before adopting and implementing any given piece of classroom technology, educators should consider how it will improve studying, what the possible pitfalls are and how to avoid them, how it will help meet goals or close gaps, and how it will improve workflow, according to eSchool News.

+++++++++++++
more on K12 technology in this IMS blog
https://blog.stcloudstate.edu/ims?s=k12+technology

Mac OS High Sierra

ANYONE CAN HACK MACOS HIGH SIERRA JUST BY TYPING “ROOT”

ANDY GREENBERG 11.28.17 05:47 PM

https://www.wired.com/story/macos-high-sierra-hack-root/

THERE ARE HACKABLE security flaws in software. And then there are those that don’t even require hacking at all—just a knock on the door, and asking to be let in. Apple’s macOS High Sierra has the second kind.

Malicious code running on the operating system could steal the contents of its keychain without a password.

Apple does have a bug bounty, but only for iOS, not macOS.

digital humanities

7 Things You Should Know About Digital Humanities


https://library.educause.edu/resources/2017/11/7-things-you-should-know-about-digital-humanities

Lippincott, J., Spiro, L., Rugg, A., Sipher, J., & Well, C. (2017). Seven Things You Should Know About Digital Humanities (ELI 7 Things You Should Know). Retrieved from https://library.educause.edu/~/media/files/library/2017/11/eli7150.pdf

definition

The term “digital humanities” can refer to research and instruction that is about information technology or that uses IT. By applying technologies in new ways, the tools and methodologies of digital humanities open new avenues of inquiry and scholarly production. Digital humanities applies computational capabilities to humanistic questions, offering new pathways for scholars to conduct research and to create and publish scholarship. Digital humanities provides promising new channels for learners and will continue to influence the ways in which we think about and evolve technology toward better and more humanistic ends.

As defined by Johanna Drucker and colleagues at UCLA, the digital humanities is “work at the intersection of digital technology and humanities disciplines.” An EDUCAUSE/CNI working group framed the digital humanities as “the application and/or development of digital tools and resources to enable researchers to address questions and perform new types of analyses in the humanities disciplines,” and the NEH Office of Digital Humanities supports projects that “explore how to harness new technology for humanities research as well as those that study digital culture from a humanistic perspective.” Beyond blending the digital with the humanities, there is an intentionality about combining the two that defines the field.

digital humanities can include

  • creating digital texts or data sets;
  • cleaning, organizing, and tagging those data sets;
  • applying computer-based methodologies to analyze them;
  • and making claims and creating visualizations that explain new findings from those analyses.

Scholars might reflect on

  • how the digital form of the data is organized,
  • how analysis is conducted/reproduced, and
  • how claims visualized in digital form may embody assumptions or biases.

Digital humanities can enrich pedagogy as well, such as when a student uses visualized data to study voter patterns or conducts data-driven analyses of works of literature.
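As a concrete illustration, a first data-driven pass over a work of literature can be as small as a word-frequency count. A minimal sketch in Python, assuming a plain-text copy of the work saved as novel.txt (the filename is an assumption, not from the EDUCAUSE brief):

```python
# Count word frequencies in a plain-text file (e.g., from Project Gutenberg).
import re
from collections import Counter

with open("novel.txt", encoding="utf-8") as f:
    text = f.read().lower()

words = re.findall(r"[a-z']+", text)    # crude tokenization
counts = Counter(words)

for word, n in counts.most_common(10):  # ten most frequent words
    print(f"{word}\t{n}")
```

Even a toy pass like this surfaces the kinds of claims and visualizations the bullet list above describes, and invites the same reflection on how the digital form of the data shapes the analysis.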

Digital humanities usually involves work by teams in collaborative spaces or centers. Team members might include

  • researchers and faculty from multiple disciplines,
  • graduate students,
  • librarians,
  • instructional technologists,
  • data scientists and preservation experts,
  • technologists with expertise in critical computing and computing methods, and
  • undergraduates.


downsides

  • While some disciplinary associations, including the Modern Language Association and the American Historical Association, have developed guidelines for evaluating digital projects, many institutions have yet to define how work in digital humanities fits into considerations for tenure and promotion.
  • Because large projects are often developed with external funding that is not readily replaced by institutional funds when the grant ends, sustainability is a concern. Doing digital humanities well requires access to expertise in methodologies and tools such as GIS, modeling, programming, and data visualization that can be expensive for a single institution to obtain.
  • Resistance to learning new technologies can be another roadblock, as can the propensity of many humanists to resist working in teams. While some institutions have recognized the need for institutional infrastructure (computation and storage, equipment, software, and expertise), many have not yet incorporated such support into ongoing budgets.

Digital humanities creates opportunities for undergraduate involvement in research, providing students with workplace skills such as data management, visualization, coding, and modeling. It provides new insights into policy-making in areas such as social media and demographics, and offers new means of engaging with popular culture and understanding past cultures. Evolution in this area will continue to build connections between the humanities and other disciplines, cross-pollinating research and education in areas like medicine and environmental studies. Insights about digital humanities itself will drive innovation in pedagogy and expand our conceptualization of classrooms and labs.

++++++++++++
more on digital humanities in this IMS blog
https://blog.stcloudstate.edu/ims?s=digital+humanities

Borgman data

book reviews:
https://bobmorris.biz/big-data-little-data-no-data-a-book-review-by-bob-morris
“The challenge is to make data discoverable, usable, assessable, intelligible, and interpretable, and do so for extended periods of time…To restate the premise of this book, the value of data lies in their use. Unless stakeholders can agree on what to keep and why, and invest in the invisible work necessary to sustain knowledge infrastructures, big data and little data alike will become no data.”
http://www.cjc-online.ca/index.php/journal/article/view/3152/3337
Starting from the premise that data are not natural objects with their own essence, Borgman explores the different values assigned to them, as well as their many variations according to place, time, and the context in which they are collected. It is specifically through six “provocations” that she offers a deep engagement with different aspects of the knowledge industry. These include the reproducibility, sharing, and reuse of data; the transmission and publication of knowledge; the stability of scholarly knowledge, despite its increasing proliferation of forms and modes; the very porosity of the borders between different areas of knowledge; the costs, benefits, risks, and responsibilities related to knowledge infrastructure; and finally, investment in the sustainable acquisition and exploitation of data for scientific research.
Beyond the six provocations, there is a larger question concerning the legitimacy, continuity, and durability of all scientific research—hence the urgent need for further reflection, initiated eloquently by Borgman, on the fact that “despite the media hyperbole, having the right data is usually better than having more data.”
  • Data management (pages xviii-xix)
  • Data definition (4-5 and 18-29)
p. 5 Big data and little data are only awkwardly analogous to big science and little science. Modern science, or “big science” in Derek J. de Solla Price’s terms (https://en.wikipedia.org/wiki/Big_Science), is characterized by international, collaborative efforts and by the invisible colleges of researchers who know each other and who exchange information on a formal and informal basis. Little science is the three hundred years of independent, smaller-scale work to develop theory and method for understanding research problems. Little science is typified by heterogeneous methods, heterogeneous data, and by local control and analysis.
p. 8 The Long Tail
a popular way of characterizing the availability and use of data in research areas or in economic sectors. https://en.wikipedia.org/wiki/Long_tail
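A quick way to see the shape of a long tail: in a Zipf-like distribution, the handful of items at the head account for a large share of all use, while the rest is spread thinly across the tail. A toy sketch with synthetic numbers, purely illustrative:

```python
# Item "popularity" following Zipf's law: rank-k frequency ~ 1/k.
total_items = 1000
weights = [1 / k for k in range(1, total_items + 1)]
total = sum(weights)
shares = [w / total for w in weights]

head = sum(shares[:20])   # share of use captured by the top 20 items
tail = sum(shares[100:])  # share spread across the remaining "long tail"
print(f"top 20 items: {head:.1%} of use")       # roughly half
print(f"items ranked 101+: {tail:.1%} of use")  # a thin but substantial tail
```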

  • Provocations (13-15)
  • Digital data collections (21-26)
  • Knowledge infrastructures (32-35)
  • Open access to research (39-42)
  • Open technologies (45-47)
  • Metadata (65-70 and 79-80)
  • Common resources in astronomy (71-76)
  • Ethics (77-79)
  • Research methods and data practices, and sensor-networked science and technology (84-85 and 106-113)
  • Knowledge infrastructures (94-100)
  • COMPLETE survey (102-106)
  • Internet surveys (128-143)
  • Twitter (130-133, 138-141, and 157-158)
  • Pisa Clark/CLAROS project (179-185)
  • Collecting Data, Analyzing Data, and Publishing Findings (181-184)
  • Buddhist studies (186-200)
  • Data citation (241-268)
  • Negotiating authorship credit (253-256)
  • Personal names (258-261)
  • Citation metrics (266-269)
  • Access to data (279-283)

++++++++++++++++
more on big data in education in this IMS blog
https://blog.stcloudstate.edu/ims?s=big+data

AltSchool

AltSchool shift raises concerns of profits placed over educational promise

 Nov. 8, 2017 https://www.educationdive.com/news/altschool-shift-raises-concerns-of-profits-placed-over-educational-promise/510293/

  • The announcement by Silicon Valley personalized learning startup AltSchool that it will close two of its lab school campuses in Palo Alto and New York City has some educators concerned that the company is putting profits over efforts to improve education, EdSurge reports.
  • Butler Middle School (PA) Principal Jason Huffman also told the publication that he sees the company’s struggles as parallel to those of other education reform models that didn’t live up to their promises, adding that organizations like Future Ready have made similar models and platforms for personalization available at a lower cost to schools and districts.

Education historian Diane Ravitch, a former assistant secretary of education under President George H.W. Bush, has been among the chief critics of these increasingly close ties, especially as it pertains to charter schools and voucher programs, decrying efforts to “turn us from citizens into consumers.” Public schools, she has said, should be focused around “building a sense of community, having a sense of democracy at the local level, having people from different backgrounds coming together to solve problems and learn how to be citizens.”

But AltSchool’s intentions with its lab schools and approach to developing its platform have seemed noble enough, with the company partnering with schools of varying sizes to learn how to best scale up successful approaches to personalized learning for traditional public schools. Its lab schools have been noted for eschewing traditional grade level structures and curriculum, attracting funding from the likes of Mark Zuckerberg.

Earlier this year, we visited one of the company’s partner schools — Berthold Academy, a Montessori school in Reston, VA — to find out more about what AltSchool was looking to model and how its partnerships worked, finding a promising approach in action.

+++++++++++++
more on charter schools in this IMS blog
https://blog.stcloudstate.edu/ims?s=charter

Reproducibility Librarian

Reproducibility Librarian? Yes, That Should Be Your Next Job

https://www.jove.com/blog/2017/10/27/reproducibility-librarian-yes-that-should-be-your-next-job/
Vicky Steeves (@VickySteeves) is the first Research Data Management and Reproducibility Librarian.
Reproducibility is made so much more challenging because of computers, and the dominance of the closed-source operating systems and analysis software researchers use. Ben Marwick wrote a great piece called ‘How computers broke science – and what we can do to fix it’ which details a bit of the problem. Basically, computational environments affect the outcome of analyses (Gronenschild et al. (2012) showed the same data and analyses gave different results between two versions of macOS), and are exceptionally hard to reproduce, especially when the license terms don’t allow it. Additionally, programs encode data incorrectly and studies draw erroneous conclusions: for example, Microsoft Excel encodes gene names as dates, an error that affects roughly one-fifth of published papers with gene lists in leading genome journals.
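The gene/date problem is easy to guard against at ingest. A hedged sketch in Python with pandas, assuming a hypothetical genes.csv with a symbol column and a numeric expression column: forcing string dtypes on import prevents the kind of silent type coercion that bites Excel users.

```python
import pandas as pd

# Risky: type inference may coerce identifier-like columns
# (the spreadsheet analogue turns the gene SEPT2 into a date).
df_guessed = pd.read_csv("genes.csv")

# Safer: read everything as strings, then convert only the columns
# you know are numeric. Column names here are assumptions.
df_safe = pd.read_csv("genes.csv", dtype=str)
df_safe["expression"] = pd.to_numeric(df_safe["expression"], errors="raise")
```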
Technology to capture computational environments, workflow, provenance, data, and code is hugely impactful for reproducibility. It’s been the focus of my work, in supporting an open source tool called ReproZip, which packages all computational dependencies, data, and applications in a single distributable package that others can reproduce across different systems. There are other tools that fix parts of this problem: Kepler and VisTrails for workflow/provenance, Packrat for saving specific R packages at the time a script is run so updates to dependencies won’t break it, Pex for generating executable Python environments, and o2r for executable papers (including data, text, and code in one).
ReproZip also offers a plugin for Jupyter notebooks, and a user interface to make it friendlier to folks not comfortable on the command line.
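Short of a full ReproZip trace, even a small script that records the computational environment next to the results helps. A minimal sketch using only the Python standard library (the output filename is an assumption):

```python
import json
import platform
import sys
from importlib.metadata import distributions  # stdlib on Python 3.8+

# Snapshot the interpreter, OS, and installed package versions.
packages = sorted(f"{d.metadata['Name']}=={d.version}" for d in distributions())
env = {
    "python": sys.version,
    "platform": platform.platform(),
    "packages": packages,
}

# Save alongside the analysis outputs so readers can see what produced them.
with open("environment.json", "w", encoding="utf-8") as f:
    json.dump(env, f, indent=2)
```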

I would also recommend going to conferences.

++++++++++++++++++++++++
more on big data in an academic library in this IMS blog
academic library collection data visualization

https://blog.stcloudstate.edu/ims/2017/10/26/software-carpentry-workshop/

https://blog.stcloudstate.edu/ims?s=data+library

more on library positions in this IMS blog:
https://blog.stcloudstate.edu/ims?s=big+data+library
https://blog.stcloudstate.edu/ims/2016/06/14/technology-requirements-samples/

on university library future:
https://blog.stcloudstate.edu/ims/2014/12/10/unviersity-library-future/

librarian versus information specialist

 

bad rabbit virus

Bad Rabbit cryptoware attack: New virus hits companies in Russia, Turkey, Germany & Ukraine

https://www.rt.com/news/407655-bad-rabbit-cryptoware-attack/
Kaspersky Lab advised those who do not use anti-virus products to restrict execution of certain files (C:\Windows\infpub.dat, C:\Windows\cscc.dat) and shut down the Windows Management Instrumentation (WMI) service. My note: that is like letting the wolf into the shed with the sheep.
The source of the attack remained undetermined, but earlier this month the head of Microsoft, Brad Smith, pinned the blame for it on North Korea, which allegedly used cyber tools or weapons that were stolen from the National Security Agency in the United States. The top executive, however, did not provide evidence to back his claims.

New ransomware attack hits Russia and spreads around globe

Malware WARNING: ‘Bad Rabbit’ virus causes flight delays, is YOUR PC susceptible?

http://www.express.co.uk/life-style/science-technology/870887/Bad-Rabbit-Ransomware-Malware-UK-Virus

Bad Rabbit ransomware outbreak

 https://nakedsecurity.sophos.com/2017/10/24/bad-rabbit-ransomware-outbreak/

++++++++++++
more on cybersecurity in this IMS blog
https://blog.stcloudstate.edu/ims?s=cybersecurity

cybersecurity kaspersky

Kaspersky Lab Has Been Working With Russian Intelligence

 Emails show the security-software maker developed products for the FSB and accompanied agents on raids. July 11, 2017, 4:00 AM CDT 
https://www.bloomberg.com/news/articles/2017-07-11/kaspersky-lab-has-been-working-with-russian-intelligence

WHY THE US GOVERNMENT SHOULDN’T BAN KASPERSKY SECURITY SOFTWARE

  09.04.17

https://www.wired.com/story/why-the-us-government-shouldnt-ban-kaspersky-security-software/

The General Services Administration (GSA) has ordered the removal of Kaspersky software platforms from its catalogues of approved vendors. Meanwhile, the Senate is considering a draft bill of the 2018 National Defense Acquisition Authorization (known as the NDAA, it specifies the size of and uses for the fiscal year 2018 US Defense Department budget) that would bar the use of Kaspersky products in the military.

W.H. cybersecurity coordinator warns against using Kaspersky Lab software

https://www.cbsnews.com/news/kasperksy-lab-software-suspected-ties-russian-intelligence-rob-joyce/

Kaspersky: Russia responds to US ban on software

14 September 2017 http://www.bbc.com/news/world-us-canada-41262049

 +++++++++++++++

KASPERSKY, RUSSIA, AND THE ANTIVIRUS PARADOX

 10.11.17

https://www.wired.com/story/kaspersky-russia-antivirus/

Israel and Russia’s overlapping hacks of Kaspersky complicate espionage narrative

Israel and Russia’s overlapping hacks of Kaspersky complicate espionage narrative

The whole ordeal is a nightmare for Kaspersky Lab. The company looks incompetent at preventing state-sponsored hacks in the best-case scenario and complicit with the Russian government in the worst-case scenario. However it plays out, the unfolding drama will certainly hurt the software maker’s footprint in the U.S., where Congress has already taken action to purge the government of the company’s software.

+++++++++++++
more on cybersecurity in this IMS blog
https://blog.stcloudstate.edu/ims?s=cybersecurity

code4lib 2018

Code4Lib February 2018

http://2018.code4lib.org/

2018 Preconference Voting

10. The Virtualized Library: A Librarian’s Introduction to Docker and Virtual Machines
This session will introduce two major types of virtualization, virtual machines using tools like VirtualBox and Vagrant, and containers using Docker. The relative strengths and drawbacks of the two approaches will be discussed along with plenty of hands-on time. Though geared towards integrating these tools into a development workflow, the workshop should be useful for anyone interested in creating stable and reproducible computing environments, and examples will focus on library-specific tools like Archivematica and EZPaarse. With virtualization taking a lot of the pain out of installing and distributing software, alleviating many cross-platform issues, and becoming increasingly common in library and industry practices, now is a great time to get your feet wet.

(One three-hour session)

11. Digital Empathy: Creating Safe Spaces Online
User research is often focused on measures of the usability of online spaces. We look at search traffic, run card sorting and usability testing activities, and track how users navigate our spaces. Those results inform design decisions through the lens of information architecture. This is important, but doesn’t encompass everything a user needs in a space.

This workshop will focus on the other component of user experience design and user research: how to create spaces where users feel safe. Users bring their anxieties and stressors with them to our online spaces, but informed design choices can help to ameliorate that stress. This will ultimately lead to a more positive interaction between your institution and your users.

The presenters will discuss the theory behind empathetic design, delve deeply into using ethnographic research methods – including an opportunity for attendees to practice those ethnographic skills with student participants – and finish with the practical application of these results to ongoing and future projects.

(One three-hour session)

14. ARIA Basics: Making Your Web Content Sing Accessibility

https://dequeuniversity.com/assets/html/jquery-summit/html5/slides/landmarks.html
Are you a web developer, or do you create web content? Do you add dynamic elements to your pages? If so, you should be concerned with making those dynamic elements accessible and usable to as many people as possible. One of the most powerful tools currently available for making web pages accessible is ARIA, the Accessible Rich Internet Applications specification. This workshop will teach you the basics for leveraging the full power of ARIA to make great accessible web pages. Through several hands-on exercises, participants will come to understand the purpose and power of ARIA and how to apply it for a variety of different dynamic web elements. Topics will include semantic HTML, ARIA landmarks and roles, expanding/collapsing content, and modal dialog. Participants will also be taught some basic use of the screen reader NVDA for use in accessibility testing. Finally, the lessons will also emphasize learning how to keep on learning as HTML, JavaScript, and ARIA continue to evolve and expand.

Participants will need a basic background in HTML, CSS, and some JavaScript.
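As a taste of what accessibility checking can look like outside the browser, this sketch (stdlib Python only, with a placeholder URL; not part of the workshop materials) lists explicit role attributes and the HTML5 elements that commonly map to implicit ARIA landmarks:

```python
from html.parser import HTMLParser
from urllib.request import urlopen

# HTML5 elements that typically map to implicit ARIA landmark roles.
LANDMARKS = {"header", "nav", "main", "aside", "footer", "form"}

class AriaScanner(HTMLParser):
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in LANDMARKS or "role" in attrs:
            print(tag, attrs.get("role", "(implicit landmark)"))

# Placeholder page; point this at your own site.
html = urlopen("https://example.com/").read().decode("utf-8", "replace")
AriaScanner().feed(html)
```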

(One three-hour session)

18. Learning and Teaching Tech
Tech workshops pose two unique problems: finding skilled instructors for that content, and instructing that content well. Library hosted workshops are often a primary educational resource for solo learners, and many librarians utilize these workshops as a primary outreach platform. Tackling these two issues together often makes the most sense for our limited resources. Whether a programming language or software tool, learning tech to teach tech can be one of the best motivations for learning that tech skill or tool, but equally important is to learn how to teach and present tech well.

This hands-on workshop will guide participants through developing their own learning plan, reviewing essential pedagogy for teaching tech, and crafting a workshop of their choice. Each participant will leave with an actionable learning schedule, a prioritized list of resources to investigate, and an outline of a workshop they would like to teach.

(Two three-hour sessions)

23. Introduction to Omeka S
Omeka S represents a complete rewrite of Omeka Classic (aka the Omeka 2.x series), adhering to our fundamental principles of encouraging use of metadata standards, easy web publishing, and sharing cultural history. New objectives in Omeka S include multisite functionality and increased interaction with other systems. This workshop will compare and contrast Omeka S with Omeka Classic to highlight our emphasis on 1) modern metadata standards, 2) interoperability with other systems including Linked Open Data, 3) use of modern web standards, and 4) web publishing to meet the goals of medium- to large-sized institutions.

In this workshop we will walk through Omeka S Item creation, with emphasis on LoD principles. We will also look at the features of Omeka S that ease metadata input and facilitate project-defined usage and workflows. In accordance with our commitment to interoperability, we will describe how the API for Omeka S can be deployed for data exchange and sharing between many systems. We will also describe how Omeka S promotes multiple site creation from one installation, in the interest of easy publishing with many objects in many contexts, and simplifying the work of IT departments.
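For a sense of what that data exchange can look like, here is a hedged sketch of reading items over the Omeka S REST API with Python's requests library. The base URL and key pair are placeholders; Omeka S authenticates API requests with key_identity/key_credential parameters.

```python
import requests

BASE = "https://example.org/omeka-s/api"  # placeholder installation
params = {
    "key_identity": "YOUR_KEY_IDENTITY",      # hypothetical credentials
    "key_credential": "YOUR_KEY_CREDENTIAL",
    "per_page": 5,
}

resp = requests.get(f"{BASE}/items", params=params, timeout=30)
resp.raise_for_status()
for item in resp.json():
    # dcterms:title holds Dublin Core title values in the JSON-LD output.
    titles = item.get("dcterms:title", [])
    print(item["o:id"], titles[0]["@value"] if titles else "(untitled)")
```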

(One three-hour session)

24. Getting started with static website generators
Have you been curious about static website generators? Have you been wondering who Jekyll and Hugo are? Then this workshop is for you.

My note: https://opensource.com/article/17/5/hugo-vs-jekyll

But this article isn’t about setting up a domain name and hosting for your website. It’s for the step after that, the actual making of that site. The typical choice for a lot of people would be to use something like WordPress. It’s a one-click install on most hosting providers, and there’s a gigantic market of plugins and themes available to choose from, depending on the type of site you’re trying to build. But not only is WordPress a bit overkill for most websites, it also gives you a dynamically generated site with a lot of moving parts. If you don’t keep all of those pieces up to date, they can pose a significant security risk and your site could get hijacked.

The alternative would be to have a static website, with nothing dynamically generated on the server side. Just good old HTML and CSS (and perhaps a bit of JavaScript for flair). The downside to that option has been that you’ve been relegated to coding the whole thing by hand yourself. It’s doable, but you just want a place to share your work. You shouldn’t have to know all the idiosyncrasies of low-level web design (and the monumental headache of cross-browser compatibility) to do that.

Static website generators are tools used to build a website made up only of HTML, CSS, and JavaScript. Static websites, unlike dynamic sites built with tools like Drupal or WordPress, do not use databases or server-side scripting languages. Static websites have a number of benefits over dynamic sites, including reduced security vulnerabilities, simpler long-term maintenance, and easier preservation.
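The core idea fits in a few lines: walk a folder of source files, pour each one into an HTML template, and write the result out. A toy generator in stdlib Python (the content/ and site/ directory names are assumptions), nowhere near as capable as Jekyll or Hugo but the same shape:

```python
import html
import pathlib

TEMPLATE = """<!DOCTYPE html>
<html><head><meta charset="utf-8"><title>{title}</title></head>
<body><h1>{title}</h1><pre>{body}</pre></body></html>"""

src = pathlib.Path("content")  # one .txt file per page
out = pathlib.Path("site")     # generated HTML lands here
out.mkdir(exist_ok=True)

for page in src.glob("*.txt"):
    text = page.read_text(encoding="utf-8")
    rendered = TEMPLATE.format(title=html.escape(page.stem),
                               body=html.escape(text))
    (out / f"{page.stem}.html").write_text(rendered, encoding="utf-8")
    print("built", page.stem + ".html")
```

The output is plain files, which is exactly why static sites are easy to host, cache, and preserve.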

In this hands-on workshop, we’ll start by exploring static website generators, their components, some of the different options available, and their benefits and disadvantages. Then, we’ll work on making our own sites, and for those that would like to, get them online with GitHub pages. Familiarity with HTML, git, and command line basics will be helpful but are not required.

(One three-hour session)

26. Using Digital Media for Research and Instruction
To use digital media effectively in both research and instruction, you need to go beyond just the playback of media files. You need to be able to stream the media, divide that stream into different segments, provide descriptive analysis of each segment, order, re-order and compare different segments from the same or different streams and create web sites that can show the result of your analysis. In this workshop, we will use Omeka and several plugins for working with digital media, to show the potential of video streaming, segmentation and descriptive analysis for research and instruction.

(One three-hour session)

28. Spark in the Dark 101 https://zeppelin.apache.org/
This is an introductory session on Apache Spark, a framework for large-scale data processing (https://spark.apache.org/). We will introduce high-level concepts around Spark, including how Spark execution works and its relationship to the other technologies for working with Big Data. Following this introduction to the theory and background, we will walk workshop participants through hands-on usage of spark-shell, Zeppelin notebooks, and Spark SQL for processing library data. The workshop will wrap up with use cases and demos for leveraging Spark within cultural heritage institutions and information organizations, connecting the building blocks learned to current projects in the real world.
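As a flavor of the hands-on portion, a hedged PySpark sketch: load a CSV of circulation records (the file and column names are invented for the example) and aggregate it with Spark SQL.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("library-data").getOrCreate()

# Hypothetical circulation data with a header row.
loans = (spark.read
         .option("header", "true")
         .option("inferSchema", "true")
         .csv("loans.csv"))

loans.createOrReplaceTempView("loans")
spark.sql("""
    SELECT branch, COUNT(*) AS checkouts
    FROM loans
    GROUP BY branch
    ORDER BY checkouts DESC
""").show()
```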

(One three-hour session)

29. Introduction to Spotlight https://github.com/projectblacklight/spotlight
http://www.spotlighttechnology.com/4-OpenSource.htm
Spotlight is an open source application that extends the digital library ecosystem by providing a means for institutions to reuse digital content in easy-to-produce, attractive, and scholarly-oriented websites. Librarians, curators, and other content experts can build Spotlight exhibits to showcase digital collections using a self-service workflow for selection, arrangement, curation, and presentation.

This workshop will introduce the main features of Spotlight and present examples of Spotlight-built exhibits from the community of adopters. We’ll also describe the technical requirements for adopting Spotlight and highlight the potential to customize and extend Spotlight’s capabilities for their own needs while contributing to its growth as an open source project.

(One three-hour session)

31. Getting Started Visualizing your IoT Data in Tableau https://www.tableau.com/
The Internet of Things is a rising trend in library research. IoT sensors can be used for space assessment, service design, and environmental monitoring. IoT tools create lots of data that can be overwhelming and hard to interpret. Tableau Public (https://public.tableau.com/en-us/s/) is a data visualization tool that allows you to explore this information quickly and intuitively to find new insights.

This full-day workshop will teach you the basics of building your own IoT sensor using a Raspberry Pi (https://www.raspberrypi.org/) in order to gather, manipulate, and visualize your data.
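The gathering half of that pipeline can be very small. A sketch of a sampling loop that appends timestamped readings to a CSV that Tableau Public can ingest; read_sensor() here is a stand-in for whatever driver your actual sensor uses.

```python
import csv
import random
import time
from datetime import datetime

def read_sensor():
    # Placeholder: swap in a real driver (e.g., a temperature sensor).
    return round(20 + random.random() * 5, 2)

with open("readings.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for _ in range(10):                      # ten samples for the demo
        writer.writerow([datetime.now().isoformat(), read_sensor()])
        time.sleep(1)                        # one-second sampling interval
```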

All are welcome, but some familiarity with Python is recommended.

(Two three-hour sessions)

32. Enabling Social Media Research and Archiving
Social media data represents a tremendous opportunity for memory institutions of all kinds, be they large academic research libraries or small community archives. Researchers from a broad swath of disciplines have a great deal of interest in working with social media content, but they often lack access to datasets or the technical skills needed to create them. Further, it is clear that social media is already a crucial part of the historical record in areas ranging from events in your local community to national elections. But attempts to build archives of social media data are largely nascent. This workshop will be both an introduction to collecting data from the APIs of social media platforms, as well as a discussion of the roles of libraries and archives in that collecting.

Assuming no prior experience, the workshop will begin with an explanation of how APIs operate. We will then focus specifically on the Twitter API, as Twitter is of significant interest to researchers and hosts an important segment of discourse. Through a combination of hands-on exercises and demos, we will gain experience with a number of tools that support collecting social media data (e.g., Twarc, Social Feed Manager, DocNow, Twurl, and TAGS), as well as tools that enable sharing social media datasets (e.g., Hydrator, TweetSets, and the Tweet ID Catalog).
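As a flavor of that kind of collecting, a hedged sketch using twarc's v1-era Python API: gather tweets matching a query from the Twitter search API and write them out one JSON document per line. The four keys are placeholders obtained from Twitter's developer portal.

```python
import json
from twarc import Twarc

t = Twarc("CONSUMER_KEY", "CONSUMER_SECRET",
          "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

# One tweet per line (JSONL) is the usual sharing-friendly format.
with open("tweets.jsonl", "w", encoding="utf-8") as out:
    for tweet in t.search("#code4lib"):
        out.write(json.dumps(tweet) + "\n")
```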

The workshop will then turn to a discussion of how to build a successful program enabling social media collecting at your institution. This might cover a variety of topics including outreach to campus researchers, collection development strategies, the relationship between social media archiving and web archiving, and how to get involved with the social media archiving community. This discussion will be framed by a focus on ethical considerations of social media data, including privacy and responsible data sharing.

Time permitting, we will provide a sampling of some approaches to social media data analysis, including Twarc Utils and Jupyter Notebooks.

(One three-hour session)

social media choice for nonprofit

How to Choose the Right Social Media for Your Nonprofit

October 4, 2017 Wayne Elsey

https://www.linkedin.com/pulse/how-choose-right-social-media-your-nonprofit-wayne-elsey/

The pros and cons of Facebook, Twitter, Instagram, Snapchat

+++++++++++++
more on social media in this IMS blog
https://blog.stcloudstate.edu/ims?s=social+media
