Corporate monopoly or public control? Net neutrality

Net Neutrality is just the beginning

Interview with Victor Pickard

Victor Pickard is an associate professor of communication at the University of Pennsylvania’s Annenberg School; his research focuses on internet policy and the political economy of media.

https://www.academia.edu/35305972/Net_Neutrality_Is_Just_the_Beginning

https://www.jacobinmag.com/2017/11/net-neutrality-fcc-ajit-pai-monopoly

With each new victory for the American telecommunications oligopoly, that digital optimism fades further from view.

Definition:

Net neutrality protections are essentially safeguards that prevent internet service providers (ISPs) from interfering with the internet. Net neutrality gives the FCC the regulatory authority to prevent ISPs like Comcast and Verizon from slowing down or blocking certain types of content. It also prevents them from offering what’s known as paid prioritization, where an ISP could let particular websites or content creators pay more for faster streaming and download times. With paid prioritization an ISP could shake down a company like Netflix or an individual website owner, coercing them to pay more in order to be in the fast lane.

Net neutrality often gets treated as a sort of technocratic squabble over ownership and control of internet pipes. But in fact it speaks to a core social contract between government, corporations, and the public. What it really comes down to is, how can members of the public obtain information and services, and express ourselves creatively and politically, without interference from massive corporations?

Should we think of the internet as a good, a service, an infrastructure, or something else?

It’s all of the above.

The internet has been radically privatized. It wasn’t inevitable, but through policy decisions over the years, the internet has become increasingly commodified. Meanwhile it’s really difficult to imagine living in modern society without fast internet services — it’s no longer a luxury but a necessity for everything ranging from education to health to livelihood. The “digital divide” is a phrase that sounds like it’s from the 1990s, but it’s still very relevant. Somewhere around one fifth of American households don’t have access to wireline broadband services. It’s a social problem. We should be thinking about the internet as a public service and subsidizing it to make sure that everyone has access.

In your recent book on media democracy, you discuss the rise of what you call “corporate libertarianism.” What is corporate libertarianism and how does it relate to net neutrality?

Corporate libertarianism is an ideological project whose origins lie in a core moment in the 1940s. It sees corporations as having individual freedoms, like those in the First Amendment, which they can use to shield themselves from public interest oversight and regulation. It is also often connected to the assumption that the government should never intervene in markets, and media markets in particular. (My note: Milton Friedman)

Of course, this is a libertarian mythology — the government is always involved. The question ought to be how it should be involved. Under corporate libertarianism it’s assumed that the government should only be involved in ways that enhance profit maximization for communication oligopolies.

There are clear dangers associated with vertical integration, where the company that owns the pipes is able to control the dissemination of information, and able to set the terms by which we access that information.
There have already been cases like this. In 2005, Telus, the second-largest telecommunications company in Canada, began blocking access to a server hosting a website that supported a labor strike against Telus.

Net neutrality is just one part of the story. What other regulations, policies and interventions could resist corporate control of the internet?

Roughly half of Americans live in communities that have access to only one ISP. My note: Ha ha, “pick me, pick me,” as Dory from “Finding Nemo” would say… Charter, whatever they rename themselves next, is the crass example in Central MN.

Strategies to contain and confront monopolies:

  • Break them up, and prevent monopolies and oligopolies from forming in the first place by blocking mergers and acquisitions.
  • If we’re not going to outright nationalize them, heavily regulate them and enforce some kind of social contract under which they are compelled to provide a public service in exchange for the right to operate.
  • Create public alternatives, like municipal wireless networks that can circumvent and compete with corporate monopolies. There’s a growing number of these publicly owned and governed internet infrastructures, and building more is crucial.

+++++++++++++
more on #netNeutrality in this IMS blog
https://blog.stcloudstate.edu/ims?s=netneutrality

topics for IM260

proposed topics for IM 260 class

  • Media literacy. Differentiated instruction. Media literacy guide.
    Fake news as part of media literacy. Visual literacy as part of media literacy. Media literacy as part of digital citizenship.
  • Web design / web development
    the roles of HTML5, CSS, JavaScript, PHP, Bootstrap, jQuery, React and other scripting languages and libraries. Heat maps and other usability issues; website content strategy. The Model-View-Controller (MVC) design pattern
  • Social media for institutional use. Digital curation. Social media algorithms. Etiquette and ethics. Mastodon
    I hosted a LITA webinar in the fall of 2016 (four weeks); I can accommodate any information from that webinar for the use of the IM students
  • OER and instructional designer’s assistance to book creators.
    I can cover both the “library part” (“free” OER, copyright issues, etc.) and the support / creative part of an OER book / textbook
  • Big Data. Data visualization. Large-scale visualization. Text encoding. Analytics, data mining. Unizin. Python, R in academia.
    I can introduce the students to the large idea of Big Data and its importance in light of the upcoming IoT, but also break down its importance for academia, business, etc. From infographics to heavy-duty visualization (Primo X-Services API, JSON, Flask).
  • Net neutrality, Digital Darwinism, the internet economy, and the role of the professional in such an environment
    I can introduce students to the issues, if they are unfamiliar, and/or lead a discussion on a rather controversial topic
  • Digital assessment. Digital Assessment literacy.
    I can introduce students to tools, how to evaluate and select tools and their pedagogical implications
  • Wikipedia
    a hands-on exercise on working with Wikipedia. After the session, students will be able to create Wikipedia entries, thus gaining intimate knowledge of the Wikipedia process and its information.
  • Effective presentations. Tools, methods, concepts and theories (cognitive load). Presentations in the era of VR, AR and mixed reality. Unity.
    I can facilitate a discussion among experts (your students) on selection of tools and their didactically sound use to convey information. I can supplement the discussion with my own findings and conclusions.
  • eConferencing. Tools and methods
    I can facilitate a discussion among your students on the selection and comparison of tools, and a discussion about their future and their place in an increasingly online learning environment
  • Digital Storytelling. Immersive Storytelling. The Moth. Twine. Transmedia Storytelling
    I am teaching a LIB 490/590 Digital Storytelling class. I can adapt any information from that class to the use of IM students
  • VR, AR, Mixed Reality.
    besides Mark Gill, I can facilitate a discussion, which goes beyond hardware and brands, but expand on the implications for academia and corporate education / world
  • IoT, Arduino, Raspberry Pi. Industry 4.0
  • Instructional design. ID2ID
    I can facilitate a discussion based on the Educause suggestions about the profession’s development
  • Microcredentialing in academia and corporate world. Blockchain
  • IT in K12. How to evaluate, prioritize, and select. Obsolete trends in 21st-century schools. K12 mobile learning
  • Podcasting: past, present, future. Beautiful Audio Editor.
    a definition of podcasting and delineation of similar activities; advantages and disadvantages.
  • Digital, blended (hybrid), and online teaching and learning: facilitation. Methods and techniques. Proctoring. Online students’ expectations. Faculty support. Asynchronous learning. Blended synchronous learning environments
  • Gender, race, and age in education. The digital divide. Xennials, Millennials, and Gen Z. A generational approach to teaching and learning. Young vs. old Millennials. Millennial employees.
  • Privacy, [cyber]security, surveillance. K12 cyberincidents. Hackers.
  • Gaming and gamification. Appsmashing. Gradecraft
  • Lecture capture, course capture.
  • Bibliometrics, altmetrics
  • Technology and cheating, academic dishonesty, plagiarism, copyright.
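The Model-View-Controller (MVC) design pattern named in the web design topic above can be illustrated with a minimal sketch. This is a hypothetical to-do example in Python; the class and function names are invented for illustration, not taken from any framework:

```python
# Minimal illustration of the Model-View-Controller (MVC) pattern:
# the model holds data, the view renders it, the controller mediates.

class TaskModel:
    """Model: owns the application data."""
    def __init__(self):
        self.tasks = []

    def add(self, title):
        self.tasks.append(title)

def task_view(tasks):
    """View: turns model data into presentation (here, plain text)."""
    return "\n".join(f"- {t}" for t in tasks) or "(no tasks)"

class TaskController:
    """Controller: translates user actions into model updates,
    then asks the view for the updated presentation."""
    def __init__(self, model):
        self.model = model

    def handle_add(self, title):
        self.model.add(title.strip())
        return task_view(self.model.tasks)

controller = TaskController(TaskModel())
controller.handle_add("write syllabus")
print(controller.handle_add("review MVC"))  # view now reflects both tasks
```

The point of the separation is that the plain-text view could be swapped for an HTML template without touching the model or controller.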

IRDL proposal

Applications for the 2018 Institute will be accepted between December 1, 2017 and January 27, 2018. Scholars accepted to the program will be notified in early March 2018.

Title:

Learning to Harness Big Data in an Academic Library

Abstract (200)

Research on Big Data per se, as well as on the importance and organization of the process of Big Data collection and analysis, is well underway. The complexity of the process comprising “Big Data,” however, deprives organizations of a ubiquitous “blueprint.” The planning, structuring, administration, and execution of the process of adopting Big Data in an organization, be it a corporate or an educational one, remains elusive. No less elusive is the adoption of Big Data practices among libraries themselves. Seeking the commonalities and differences in the adoption of Big Data practices among libraries may be a suitable start to help libraries transition to Big Data adoption and to restructure organizational and daily activities based on Big Data decisions.
Introduction to the problem. Limitations

The redefinition of humanities scholarship has received major attention in higher education. The advent of digital humanities challenges aspects of academic librarianship. Data literacy is a critical need for digital humanities in academia. The March 2016 Library Juice Academy Webinar led by John Russel exemplifies the efforts to help librarians become versed in obtaining programming skills, and respectively, handling data. Those are first steps on a rather long path of building a robust infrastructure to collect, analyze, and interpret data intelligently, so it can be utilized to restructure daily and strategic activities. Since the phenomenon of Big Data is young, there is a lack of blueprints on the organization of such infrastructure. A collection and sharing of best practices is an efficient approach to establishing a feasible plan for setting a library infrastructure for collection, analysis, and implementation of Big Data.
Limitations. This research can only organize the results from the responses of librarians and research into how libraries present themselves to the world in this arena. It may be able to make some rudimentary recommendations. However, based on each library’s specific goals and tasks, further research and work will be needed.

 

 

Research Literature

“Big data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it…”
– Dan Ariely, 2013  https://www.asist.org/publications/bulletin/aprilmay-2017/big-datas-impact-on-privacy-for-librarians-and-information-professionals/

Big Data is becoming an omnipresent term. It is widespread among different disciplines in academia (De Mauro, Greco, & Grimaldi, 2016). This leads to “inconsistency in meanings and necessity for formal definitions” (De Mauro et al., 2016, p. 122). Similarly to De Mauro et al. (2016), Hashem, Yaqoob, Anuar, Mokhtar, Gani, and Ullah Khan (2015) seek standardization of definitions. The main connected “themes” of this phenomenon must be identified, and the connections to library science must be sought. A prerequisite for a comprehensive definition is the identification of Big Data methods. Bughin, Chui, and Manyika (2010), Chen et al. (2012), and De Mauro et al. (2015) single out the methods needed to complete the process of building a comprehensive definition.

In conjunction with identifying the methods, volume, velocity, and variety, as defined by Laney (2001), are the three properties of Big Data accepted across the literature. Daniel (2015) defines three stages of Big Data: collection, analysis, and visualization. According to Daniel (2015), Big Data in higher education “connotes the interpretation of a wide range of administrative and operational data” (p. 910), and according to Hilbert (2013), as cited in Daniel (2015), Big Data “delivers a cost-effective prospect to improve decision making” (p. 911).

The importance of understanding the process of Big Data analytics is well understood in academic libraries. Examples of such “administrative and operational” use for cost-effective improvement of decision making are the Finch and Flenner (2016) and Eaton (2017) case studies of the use of data visualization to assess an academic library collection and restructure the acquisition process. Sugimoto, Ding, and Thelwall (2012) call for a discussion of Big Data for libraries. According to the 2017 NMC Horizon Report, “Big Data has become a major focus of academic and research libraries due to the rapid evolution of data mining technologies and the proliferation of data sources like mobile devices and social media” (Adams Becker et al., 2017, p. 38).

Power (2014) elaborates on the complexity of Big Data in regard to decision making and offers ideas for organizations on building a system to deal with Big Data. As explained by Boyd and Crawford (2012), cited in De Mauro et al. (2016), there is a danger of a new digital divide among organizations with different access to data and different abilities to process it. Moreover, Big Data forces current organizational entities to reconsider their structure and organization. The complexity of institutions’ performance under the impact of Big Data is further complicated by changes in human behavior, because, arguably, Big Data affects human behavior itself (Schroeder, 2014).

De Mauro et al. (2015) touch on the impact of Big Data on libraries. The reorganization of academic libraries in consideration of Big Data, and the handling of Big Data by libraries, stand in close conjunction with the reorganization of the entire campus and the handling of Big Data by the educational institution. In addition to the disruption posed by the Big Data phenomenon, higher education is facing global changes of an economic, technological, social, and educational character. Daniel (2015) uses a chart to illustrate the complexity of these global trends. Parallel to the Big Data developments in America and Asia, the European Union is offering access to an EU open data portal (https://data.europa.eu/euodp/home). Moreover, the Association of European Research Libraries expects, under the H2020 program, to increase “the digitization of cultural heritage, digital preservation, research data sharing, open access policies and the interoperability of research infrastructures” (Reilly, 2013).

The challenges posed by Big Data to human and social behavior (Schroeder, 2014) are no less significant than the impact of Big Data on learning. Cohen, Dolan, Dunlap, Hellerstein, and Welton (2009) propose a road map for “more conservative organizations” (p. 1492) to overcome their reservations and/or inability to handle Big Data and to adopt a practical approach to its complexity. Two Chinese researchers define deep learning as the “set of machine learning techniques that learn multiple levels of representation in deep architectures” (Chen & Lin, 2014, p. 515). Deep learning requires “new ways of thinking and transformative solutions” (Chen & Lin, 2014, p. 523). Another pair of researchers from China presents a broad overview of the various societal, business, and administrative applications of Big Data, including a detailed account and definitions of the processes and tools accompanying Big Data analytics. Their American counterparts are of the same opinion when it comes to the need to “think about the core principles and concepts that underline the techniques, and also the systematic thinking” (Provost & Fawcett, 2013, p. 58). De Mauro, Greco, and Grimaldi (2016), similarly to Provost and Fawcett (2013), draw attention to the urgent necessity to train new types of specialists to work with such data. As early as 2012, Davenport and Patil (2012), as cited in De Mauro et al. (2016), envisioned hybrid specialists able to manage both technological knowledge and academic research. Similarly, Provost and Fawcett (2013) mention the efforts of “academic institutions scrambling to put together programs to train data scientists” (p. 51). Further, Asamoah, Sharda, Zadeh, and Kalgotra (2017) share a specific plan for the design and delivery of a Big Data analytics course.
At the same time, librarians working with data acknowledge the shortcomings in the profession, since librarians “are practitioners first and generally do not view usability as a primary job responsibility, usually lack the depth of research skills needed to carry out a fully valid” data-based research (Emanuel, 2013, p. 207).

Borgman (2015) devotes an entire book to data and scholarly research, going beyond the already well-established facts regarding the importance of Big Data, its implications, and the technical, societal, and educational complications it poses. Borgman elucidates the importance of knowledge infrastructure and the necessity to understand the complexity of building such infrastructure in order to take advantage of Big Data. In a similar fashion, a team of Chinese scholars draws attention to the complexity of data mining and Big Data and the necessity to approach the issue in an organized fashion (Wu, Zhu, Wu, & Ding, 2014).

Bruns (2013) shifts the conversation away from the “macro” architecture of Big Data, the focus of Borgman (2015) and Wu et al. (2014), and ponders the influx of unprecedented opportunities for the humanities in academia with the advent of Big Data. Does the seeming omnipresence of Big Data mean for the humanities a “railroading” into “scientificity”? How will research and publishing change with the advent of Big Data across academic disciplines?

Reyes (2015) shares her “skinny” approach to Big Data in education. She presents a comprehensive structure for educational institutions to shift “traditional” analytics to “learner-centered” analytics (p. 75) and identifies the participants in the Big Data process in the organization. The model is applicable for library use.

Being new and uncharted territory, Big Data and Big Data analytics can pose ethical issues. Willis (2013) focuses on Big Data applications in education, namely the ethical questions for higher education administrators and the expectation that Big Data analytics can predict students’ success. Daries, Reich, Waldo, Young, and Whittinghill (2014) discuss rather similar issues regarding the balance between data and student privacy regulations. The privacy issues accompanying data are also discussed by Tene and Polonetsky (2012).

Privacy issues are habitually connected to security and surveillance issues. Andrejevic and Gates (2014) point out that in decision making “generated by data mining, the focus is not on particular individuals but on aggregate outcomes” (p. 195). Van Dijck (2014) goes into further detail regarding the perils posed by metadata and data to society, in particular to the privacy of citizens. Bail (2014) addresses the same issue regarding the impact of Big Data on societal issues, but underlines the leading role of cultural sociologists and their theories in the correct application of Big Data.

Library organizations have been traditional proponents of core democratic values such as the protection of privacy and the elucidation of related ethical questions (Miltenoff & Hauptman, 2005). In recent books about Big Data and libraries, ethical issues are an important part of the discussion (Weiss, 2018). Library blogs also discuss these issues (Harper & Oltmann, 2017). An academic library’s role is to educate its patrons about those values. Sugimoto et al. (2012) reflect on the need for discussion about Big Data in library and information science. They clearly draw attention to the library “tradition of organizing, managing, retrieving, collecting, describing, and preserving information” (p. 1), as well as to library and information science being “a historically interdisciplinary and collaborative field, absorbing the knowledge of multiple domains and bringing the tools, techniques, and theories” (p. 1). Sugimoto et al. (2012) sought a wide discussion within the library profession regarding the implications of Big Data for the profession, no differently from the activities in other fields (e.g., Wixom, Ariyachandra, Douglas, Goul, Gupta, Iyer, Kulkarni, Mooney, Phillips-Wren, & Turetken, 2014). A current Andrew Mellon Foundation grant for Visualizing Digital Scholarship in Libraries seeks an opportunity to view “both macro and micro perspectives, multi-user collaboration and real-time data interaction, and a limitless number of visualization possibilities – critical capabilities for rapidly understanding today’s large data sets” (Hwangbo, 2014).

The importance of the library with its traditional roles, as described by Sugimoto et al (2012) may continue, considering the Big Data platform proposed by Wu, Wu, Khabsa, Williams, Chen, Huang, Tuarob, Choudhury, Ororbia, Mitra, & Giles (2014). Such platforms will continue to emerge and be improved, with librarians as the ultimate drivers of such platforms and as the mediators between the patrons and the data generated by such platforms.

Every library needs to find its place in the large organization and in society in regard to this very new and very powerful phenomenon called Big Data. Libraries might not have the trained staff to become a leader in the process of organizing and building the complex mechanism of this new knowledge architecture, but librarians must educate and train themselves to be worthy participants in this new establishment.

 

Method

 

The study will be cleared by the SCSU IRB.
The survey will collect responses from the library population regarding its readiness to use Big Data and its actual use of Big Data. The survey URL will be sent to (academic?) libraries around the world.

Data will be processed in SPSS. Open-ended results will be processed manually. The preliminary research design presupposes a mixed-method approach.

The study will include closed-ended survey questions and open-ended questions. The first part of the study (closed-ended, quantitative questions) will be completed through an online survey. Participants will be asked to complete the survey using a link they receive through e-mail.

Mixed methods research was defined by Johnson and Onwuegbuzie (2004) as “the class of research where the researcher mixes or combines quantitative and qualitative research techniques, methods, approaches, concepts, or language into a single study” (p. 17). Quantitative and qualitative methods can be combined if used to complement each other, because the methods can measure different aspects of the research questions (Sale, Lohfeld, & Brazil, 2002).

 

Sampling design

 

  • Online survey of 10-15 questions, with 3-5 demographic questions and the rest regarding the use of tools.
  • 1-2 open-ended questions at the end of the survey to probe for a follow-up mixed-method approach (an opportunity for a qualitative study)
  • Data analysis techniques: survey results will be exported to SPSS and analyzed accordingly. The final survey design will determine the appropriate statistical approach.
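As an illustration only: the proposal designates SPSS for the quantitative step, but the frequency counts and cross-tabulations that survey analysis typically starts with can be sketched in a few lines of Python. The field names and sample responses below are invented for the example:

```python
# Hypothetical sketch of the quantitative step: frequency counts and a
# simple cross-tabulation of closed-ended survey responses.
# Field names ("library_type", "uses_big_data") and the sample data are
# invented; the proposal itself specifies SPSS for this stage.
from collections import Counter

responses = [
    {"library_type": "academic", "uses_big_data": "yes"},
    {"library_type": "academic", "uses_big_data": "no"},
    {"library_type": "public",   "uses_big_data": "no"},
    {"library_type": "academic", "uses_big_data": "yes"},
    {"library_type": "public",   "uses_big_data": "no"},
]

# Frequency count per answer — the staple of survey description.
counts = Counter(r["uses_big_data"] for r in responses)

# Cross-tabulate readiness against a demographic (library type here),
# pairing each demographic value with each answer.
crosstab = Counter(
    (r["library_type"], r["uses_big_data"]) for r in responses
)

print(counts["yes"], counts["no"])    # overall yes/no totals
print(crosstab[("academic", "yes")])  # academic libraries answering yes
```

The same tallies map directly onto SPSS frequency and crosstab procedures once the real instrument is finalized.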

 

Project Schedule

 

Complete literature review and identify areas of interest – two months

Prepare and test instrument (survey) – one month

IRB and other details – one month

Generate a list of potential libraries to distribute the survey – one month

Contact libraries; follow up and contact again if necessary (low turnaround) – one month

Collect and analyze data – two months

Write out data findings – one month

Complete manuscript – one month

Proofreading and other details – one month

 

Significance of the work 

While it has been widely acknowledged that Big Data (and its handling) is changing higher education (https://blog.stcloudstate.edu/ims?s=big+data) as well as academic libraries (https://blog.stcloudstate.edu/ims/2016/03/29/analytics-in-education/), it remains nebulous how Big Data is handled in the academic library and, respectively, how it is related to the handling of Big Data on campus. Moreover, the visualization of Big Data between units on campus remains in progress, along with any policymaking based on the analysis of such data (hence the need for comprehensive visualization).

 

This research aims to gain an understanding of: a. how librarians are handling Big Data; b. how they relate their Big Data output to the campus output of Big Data; and c. how librarians in particular and campus administration in general are tuning their practices based on the analysis.

Based on the survey returns (if there is a statistically significant return), this research might consider juxtaposing the practices of academic libraries with practices of special libraries (especially corporate libraries), public libraries, and school libraries.

 

 

References:

 

Adams Becker, S., Cummins, M., Davis, A., Freeman, A., Giesinger Hall, C., Ananthanarayanan, V., … Wolfson, N. (2017). NMC Horizon Report: 2017 Library Edition.

Andrejevic, M., & Gates, K. (2014). Big Data Surveillance: Introduction. Surveillance & Society, 12(2), 185–196.

Asamoah, D. A., Sharda, R., Hassan Zadeh, A., & Kalgotra, P. (2017). Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course. Decision Sciences Journal of Innovative Education, 15(2), 161–190. https://doi.org/10.1111/dsji.12125

Bail, C. A. (2014). The cultural environment: measuring culture with big data. Theory and Society, 43(3–4), 465–482. https://doi.org/10.1007/s11186-014-9216-5

Borgman, C. L. (2015). Big Data, Little Data, No Data: Scholarship in the Networked World. MIT Press.

Bruns, A. (2013). Faster than the speed of print: Reconciling ‘big data’ social media analysis and academic scholarship. First Monday, 18(10). Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/4879

Bughin, J., Chui, M., & Manyika, J. (2010). Clouds, big data, and smart assets: Ten tech-enabled business trends to watch. McKinsey Quarterly, 56(1), 75–86.

Chen, X. W., & Lin, X. (2014). Big Data Deep Learning: Challenges and Perspectives. IEEE Access, 2, 514–525. https://doi.org/10.1109/ACCESS.2014.2325029

Cohen, J., Dolan, B., Dunlap, M., Hellerstein, J. M., & Welton, C. (2009). MAD Skills: New Analysis Practices for Big Data. Proc. VLDB Endow., 2(2), 1481–1492. https://doi.org/10.14778/1687553.1687576

Daniel, B. (2015). Big Data and analytics in higher education: Opportunities and challenges. British Journal of Educational Technology, 46(5), 904–920. https://doi.org/10.1111/bjet.12230

Daries, J. P., Reich, J., Waldo, J., Young, E. M., Whittinghill, J., Ho, A. D., … Chuang, I. (2014). Privacy, Anonymity, and Big Data in the Social Sciences. Commun. ACM, 57(9), 56–63. https://doi.org/10.1145/2643132

De Mauro, A. D., Greco, M., & Grimaldi, M. (2016). A formal definition of Big Data based on its essential features. Library Review, 65(3), 122–135. https://doi.org/10.1108/LR-06-2015-0061

De Mauro, A., Greco, M., & Grimaldi, M. (2015). What is big data? A consensual definition and a review of key research topics. AIP Conference Proceedings, 1644(1), 97–104. https://doi.org/10.1063/1.4907823

Dumbill, E. (2012). Making Sense of Big Data. Big Data, 1(1), 1–2. https://doi.org/10.1089/big.2012.1503

Eaton, M. (2017). Seeing Library Data: A Prototype Data Visualization Application for Librarians. Publications and Research. Retrieved from http://academicworks.cuny.edu/kb_pubs/115

Emanuel, J. (2013). Usability testing in libraries: methods, limitations, and implications. OCLC Systems & Services: International Digital Library Perspectives, 29(4), 204–217. https://doi.org/10.1108/OCLC-02-2013-0009

Graham, M., & Shelton, T. (2013). Geography and the future of big data, big data and the future of geography. Dialogues in Human Geography, 3(3), 255–261. https://doi.org/10.1177/2043820613513121

Harper, L., & Oltmann, S. (2017, April 2). Big Data’s Impact on Privacy for Librarians and Information Professionals. Retrieved November 7, 2017, from https://www.asist.org/publications/bulletin/aprilmay-2017/big-datas-impact-on-privacy-for-librarians-and-information-professionals/

Hashem, I. A. T., Yaqoob, I., Anuar, N. B., Mokhtar, S., Gani, A., & Ullah Khan, S. (2015). The rise of “big data” on cloud computing: Review and open research issues. Information Systems, 47(Supplement C), 98–115. https://doi.org/10.1016/j.is.2014.07.006

Hwangbo, H. (2014, October 22). The future of collaboration: Large-scale visualization. Retrieved November 7, 2017, from http://usblogs.pwc.com/emerging-technology/the-future-of-collaboration-large-scale-visualization/

Laney, D. (2001, February 6). 3D Data Management: Controlling Data Volume, Velocity, and Variety.

Miltenoff, P., & Hauptman, R. (2005). Ethical dilemmas in libraries: an international perspective. The Electronic Library, 23(6), 664–670. https://doi.org/10.1108/02640470510635746

Philip Chen, C. L., & Zhang, C.-Y. (2014). Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Information Sciences, 275(Supplement C), 314–347. https://doi.org/10.1016/j.ins.2014.01.015

Power, D. J. (2014). Using ‘Big Data’ for analytics and decision support. Journal of Decision Systems, 23(2), 222–228. https://doi.org/10.1080/12460125.2014.888848

Provost, F., & Fawcett, T. (2013). Data Science and its Relationship to Big Data and Data-Driven Decision Making. Big Data, 1(1), 51–59. https://doi.org/10.1089/big.2013.1508

Reilly, S. (2013, December 12). What does Horizon 2020 mean for research libraries? Retrieved November 7, 2017, from http://libereurope.eu/blog/2013/12/12/what-does-horizon-2020-mean-for-research-libraries/

Reyes, J. (2015). The skinny on big data in education: Learning analytics simplified. TechTrends: Linking Research & Practice to Improve Learning, 59(2), 75–80. https://doi.org/10.1007/s11528-015-0842-1

Schroeder, R. (2014). Big Data and the brave new world of social media research. Big Data & Society, 1(2), 2053951714563194. https://doi.org/10.1177/2053951714563194

Sugimoto, C. R., Ding, Y., & Thelwall, M. (2012). Library and information science in the big data era: Funding, projects, and future [a panel proposal]. Proceedings of the American Society for Information Science and Technology, 49(1), 1–3. https://doi.org/10.1002/meet.14504901187

Tene, O., & Polonetsky, J. (2012). Big Data for All: Privacy and User Control in the Age of Analytics. Northwestern Journal of Technology and Intellectual Property, 11, [xxvii]-274.

van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society; Newcastle upon Tyne, 12(2), 197–208.

Waller, M. A., & Fawcett, S. E. (2013). Data Science, Predictive Analytics, and Big Data: A Revolution That Will Transform Supply Chain Design and Management. Journal of Business Logistics, 34(2), 77–84. https://doi.org/10.1111/jbl.12010

Weiss, A. (2018). Big data shocks: An introduction to big data for librarians and information professionals. Rowman & Littlefield Publishers. Retrieved from https://rowman.com/ISBN/9781538103227/Big-Data-Shocks-An-Introduction-to-Big-Data-for-Librarians-and-Information-Professionals

West, D. M. (2012). Big data for education: Data mining, data analytics, and web dashboards. Governance Studies at Brookings, 4, 1–0.

Willis, J. (2013). Ethics, Big Data, and Analytics: A Model for Application. Educause Review Online. Retrieved from https://docs.lib.purdue.edu/idcpubs/1

Wixom, B., Ariyachandra, T., Douglas, D. E., Goul, M., Gupta, B., Iyer, L. S., … Turetken, O. (2014). The current state of business intelligence in academia: The arrival of big data. CAIS, 34, 1.

Wu, X., Zhu, X., Wu, G. Q., & Ding, W. (2014). Data mining with big data. IEEE Transactions on Knowledge and Data Engineering, 26(1), 97–107. https://doi.org/10.1109/TKDE.2013.109

Wu, Z., Wu, J., Khabsa, M., Williams, K., Chen, H. H., Huang, W., … Giles, C. L. (2014). Towards building a scholarly big data platform: Challenges, lessons and opportunities. In IEEE/ACM Joint Conference on Digital Libraries (pp. 117–126). https://doi.org/10.1109/JCDL.2014.6970157

 

+++++++++++++++++
more on big data





Key Issues in Teaching and Learning Survey

The EDUCAUSE Learning Initiative has just launched its 2018 Key Issues in Teaching and Learning Survey, so vote today: http://www.tinyurl.com/ki2018.

Each year, the ELI surveys the teaching and learning community in order to discover the key issues and themes in teaching and learning. These top issues provide the thematic foundation or basis for all of our conversations, courses, and publications for the coming year. Longitudinally, they also provide a way to track the evolving discourse in the teaching and learning space. More information about this annual survey can be found at https://www.educause.edu/eli/initiatives/key-issues-in-teaching-and-learning.

ACADEMIC TRANSFORMATION (Holistic models supporting student success, leadership competencies for academic transformation, partnerships and collaborations across campus, IT transformation, academic transformation that is broad, strategic, and institutional in scope)

ACCESSIBILITY AND UNIVERSAL DESIGN FOR LEARNING (Supporting and educating the academic community in effective practice; intersections with instructional delivery modes; compliance issues)

ADAPTIVE TEACHING AND LEARNING (Digital courseware; adaptive technology; implications for course design and the instructor’s role; adaptive approaches that are not technology-based; integration with LMS; use of data to improve learner outcomes)

COMPETENCY-BASED EDUCATION AND NEW METHODS FOR THE ASSESSMENT OF STUDENT LEARNING (Developing collaborative cultures of assessment that bring together faculty, instructional designers, accreditation coordinators, and technical support personnel, real world experience credit)

DIGITAL AND INFORMATION LITERACIES (Student and faculty literacies; research skills; data discovery, management, and analysis skills; information visualization skills; partnerships for literacy programs; evaluation of student digital competencies; information evaluation)

EVALUATING TECHNOLOGY-BASED INSTRUCTIONAL INNOVATIONS (Tools and methods to gather data; data analysis techniques; qualitative vs. quantitative data; evaluation project design; using findings to change curricular practice; scholarship of teaching and learning; articulating results to stakeholders; just-in-time evaluation of innovations). Here is my bibliographical overview on Big Data (scroll down to “Research literature”): https://blog.stcloudstate.edu/ims/2017/11/07/irdl-proposal/

EVOLUTION OF THE TEACHING AND LEARNING SUPPORT PROFESSION (Professional skills for T&L support; increasing emphasis on instructional design; delineating the skills, knowledge, business acumen, and political savvy for success; role of inter-institutional communities of practices and consortia; career-oriented professional development planning)

FACULTY DEVELOPMENT (Incentivizing faculty innovation; new roles for faculty and those who support them; evidence of impact on student learning/engagement of faculty development programs; faculty development intersections with learning analytics; engagement with student success)

GAMIFICATION OF LEARNING (Gamification designs for course activities; adaptive approaches to gamification; alternate reality games; simulations; technological implementation options for faculty)

INSTRUCTIONAL DESIGN (Skills and competencies for designers; integration of technology into the profession; role of data in design; evolution of the design profession (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims/2017/10/04/instructional-design-3/); effective leadership and collaboration with faculty)

INTEGRATED PLANNING AND ADVISING FOR STUDENT SUCCESS (Change management and campus leadership; collaboration across units; integration of technology systems and data; dashboard design; data visualization (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims?s=data+visualization); counseling and coaching advising transformation; student success analytics)

LEARNING ANALYTICS (Leveraging open data standards; privacy and ethics; both faculty and student facing reports; implementing; learning analytics to transform other services; course design implications)

LEARNING SPACE DESIGNS (Makerspaces; funding; faculty development; learning designs across disciplines; supporting integrated campus planning; ROI; accessibility/UDL; rating of classroom designs)

MICRO-CREDENTIALING AND DIGITAL BADGING (Design of badging hierarchies; stackable credentials; certificates; role of open standards; ways to publish digital badges; approaches to meta-data; implications for the transcript; personalized learning transcripts and blockchain technology (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims?s=blockchain))

MOBILE LEARNING (Curricular use of mobile devices (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims/2015/09/25/mc218-remodel/); innovative curricular apps; approaches to use in the classroom; technology integration into learning spaces; BYOD issues and opportunities)

MULTI-DIMENSIONAL TECHNOLOGIES (Virtual, augmented, mixed, and immersive reality; video walls; integration with learning spaces; scalability, affordability, and accessibility; use of mobile devices; multi-dimensional printing and artifact creation)

NEXT-GENERATION DIGITAL LEARNING ENVIRONMENTS AND LMS SERVICES (Open standards; learning environments architectures (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims/2017/03/28/digital-learning/); social learning environments; customization and personalization; OER integration; intersections with learning modalities such as adaptive, online, etc.; LMS evaluation, integration and support)

ONLINE AND BLENDED TEACHING AND LEARNING (Flipped course models; leveraging MOOCs in online learning; course development models; intersections with analytics; humanization of online courses; student engagement)

OPEN EDUCATION (Resources, textbooks, content; quality and editorial issues; faculty development; intersections with student success/access; analytics; licensing; affordability; business models; accessibility and sustainability)

PRIVACY AND SECURITY (Formulation of policies on privacy and data protection; increased sharing of data via open standards for internal and external purposes; increased use of cloud-based and third party options; education of faculty, students, and administrators)

WORKING WITH EMERGING LEARNING TECHNOLOGY (Scalability and diffusion; effective piloting practices; investments; faculty development; funding; evaluation methods and rubrics; interoperability; data-driven decision-making)

+++++++++++
learning and teaching in this IMS blog
https://blog.stcloudstate.edu/ims?s=teaching+and+learning

code4lib 2018

Code4Lib, February 2018

http://2018.code4lib.org/

2018 Preconference Voting

10. The Virtualized Library: A Librarian’s Introduction to Docker and Virtual Machines
This session will introduce two major types of virtualization, virtual machines using tools like VirtualBox and Vagrant, and containers using Docker. The relative strengths and drawbacks of the two approaches will be discussed along with plenty of hands-on time. Though geared towards integrating these tools into a development workflow, the workshop should be useful for anyone interested in creating stable and reproducible computing environments, and examples will focus on library-specific tools like Archivematica and EZPaarse. With virtualization taking a lot of the pain out of installing and distributing software, alleviating many cross-platform issues, and becoming increasingly common in library and industry practices, now is a great time to get your feet wet.

(One three-hour session)

11. Digital Empathy: Creating Safe Spaces Online
User research is often focused on measures of the usability of online spaces. We look at search traffic, run card sorting and usability testing activities, and track how users navigate our spaces. Those results inform design decisions through the lens of information architecture. This is important, but doesn’t encompass everything a user needs in a space.

This workshop will focus on the other component of user experience design and user research: how to create spaces where users feel safe. Users bring their anxieties and stressors with them to our online spaces, but informed design choices can help to ameliorate that stress. This will ultimately lead to a more positive interaction between your institution and your users.

The presenters will discuss the theory behind empathetic design, delve deeply into using ethnographic research methods – including an opportunity for attendees to practice those ethnographic skills with student participants – and finish with the practical application of these results to ongoing and future projects.

(One three-hour session)

14. ARIA Basics: Making Your Web Content Sing Accessibility

https://dequeuniversity.com/assets/html/jquery-summit/html5/slides/landmarks.html
Are you a web developer or a web content creator? Do you add dynamic elements to your pages? If so, you should be concerned with making those dynamic elements accessible and usable to as many people as possible. One of the most powerful tools currently available for making web pages accessible is ARIA, the Accessible Rich Internet Applications specification. This workshop will teach you the basics for leveraging the full power of ARIA to make great accessible web pages. Through several hands-on exercises, participants will come to understand the purpose and power of ARIA and how to apply it for a variety of different dynamic web elements. Topics will include semantic HTML, ARIA landmarks and roles, expanding/collapsing content, and modal dialogs. Participants will also be taught some basic use of the screen reader NVDA for use in accessibility testing. Finally, the lessons will also emphasize learning how to keep on learning as HTML, JavaScript, and ARIA continue to evolve and expand.
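As a small taste of the workshop’s topics, here is a minimal, hypothetical markup sketch combining a semantic landmark with an expanding/collapsing widget; the `aria-expanded` attribute is what tells a screen reader whether the region is open:

```html
<!-- Landmark: semantic HTML gives screen-reader users a navigable region. -->
<nav aria-label="Site navigation">
  <a href="#main">Skip to main content</a>
</nav>

<!-- Expanding/collapsing content: a script that toggles the hidden
     attribute must also flip aria-expanded so assistive technology
     stays in sync with what sighted users see. -->
<button aria-expanded="false" aria-controls="details">Show details</button>
<div id="details" hidden>Extra content revealed on demand.</div>
```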

Participants will need a basic background in HTML, CSS, and some JavaScript.

(One three-hour session)

18. Learning and Teaching Tech
Tech workshops pose two unique problems: finding skilled instructors for that content, and instructing that content well. Library hosted workshops are often a primary educational resource for solo learners, and many librarians utilize these workshops as a primary outreach platform. Tackling these two issues together often makes the most sense for our limited resources. Whether a programming language or software tool, learning tech to teach tech can be one of the best motivations for learning that tech skill or tool, but equally important is to learn how to teach and present tech well.

This hands-on workshop will guide participants through developing their own learning plan, reviewing essential pedagogy for teaching tech, and crafting a workshop of their choice. Each participant will leave with an actionable learning schedule, a prioritized list of resources to investigate, and an outline of a workshop they would like to teach.

(Two three-hour sessions)

23. Introduction to Omeka S
Omeka S represents a complete rewrite of Omeka Classic (aka the Omeka 2.x series), adhering to our fundamental principles of encouraging use of metadata standards, easy web publishing, and sharing cultural history. New objectives in Omeka S include multisite functionality and increased interaction with other systems. This workshop will compare and contrast Omeka S with Omeka Classic to highlight our emphasis on 1) modern metadata standards, 2) interoperability with other systems including Linked Open Data, 3) use of modern web standards, and 4) web publishing to meet the goals of medium- to large-sized institutions.

In this workshop we will walk through Omeka S Item creation, with emphasis on LoD principles. We will also look at the features of Omeka S that ease metadata input and facilitate project-defined usage and workflows. In accordance with our commitment to interoperability, we will describe how the API for Omeka S can be deployed for data exchange and sharing between many systems. We will also describe how Omeka S promotes multiple site creation from one installation, in the interest of easy publishing with many objects in many contexts, and simplifying the work of IT departments.

(One three-hour session)

24. Getting started with static website generators
Have you been curious about static website generators? Have you been wondering who Jekyll and Hugo are? Then this workshop is for you.

My note: https://opensource.com/article/17/5/hugo-vs-jekyll

But this article isn’t about setting up a domain name and hosting for your website. It’s for the step after that, the actual making of that site. The typical choice for a lot of people would be to use something like WordPress. It’s a one-click install on most hosting providers, and there’s a gigantic market of plugins and themes available to choose from, depending on the type of site you’re trying to build. But not only is WordPress a bit overkill for most websites, it also gives you a dynamically generated site with a lot of moving parts. If you don’t keep all of those pieces up to date, they can pose a significant security risk and your site could get hijacked.

The alternative would be to have a static website, with nothing dynamically generated on the server side. Just good old HTML and CSS (and perhaps a bit of JavaScript for flair). The downside to that option has been that you’ve been relegated to coding the whole thing by hand yourself. It’s doable, but you just want a place to share your work. You shouldn’t have to know all the idiosyncrasies of low-level web design (and the monumental headache of cross-browser compatibility) to do that.

Static website generators are tools used to build a website made up only of HTML, CSS, and JavaScript. Static websites, unlike dynamic sites built with tools like Drupal or WordPress, do not use databases or server-side scripting languages. Static websites have a number of benefits over dynamic sites, including reduced security vulnerabilities, simpler long-term maintenance, and easier preservation.
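The build step such generators perform can be sketched in a few lines of Python. This is a toy illustration of the idea, not the behavior of any particular generator: every source page is wrapped in one shared template and written out as a plain HTML file.

```python
from pathlib import Path

# Shared template applied to every page; real generators (Jekyll, Hugo)
# add Markdown rendering and themes on top of this same basic step.
TEMPLATE = ("<!DOCTYPE html><html><head><title>{title}</title></head>"
            "<body><h1>{title}</h1><p>{body}</p></body></html>")

def build_site(src: Path, out: Path) -> None:
    """Turn every .txt file (first line = title, rest = body) into an .html file."""
    out.mkdir(exist_ok=True)
    for page in sorted(src.glob("*.txt")):
        title, _, body = page.read_text().partition("\n")
        (out / (page.stem + ".html")).write_text(
            TEMPLATE.format(title=title, body=body))
```

Because the output is plain files, any web server (or GitHub Pages) can host it with no database or server-side code to keep patched.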

In this hands-on workshop, we’ll start by exploring static website generators, their components, some of the different options available, and their benefits and disadvantages. Then, we’ll work on making our own sites, and for those that would like to, get them online with GitHub pages. Familiarity with HTML, git, and command line basics will be helpful but are not required.

(One three-hour session)

26. Using Digital Media for Research and Instruction
To use digital media effectively in both research and instruction, you need to go beyond just the playback of media files. You need to be able to stream the media, divide that stream into different segments, provide descriptive analysis of each segment, order, re-order and compare different segments from the same or different streams and create web sites that can show the result of your analysis. In this workshop, we will use Omeka and several plugins for working with digital media, to show the potential of video streaming, segmentation and descriptive analysis for research and instruction.

(One three-hour session)

28. Spark in the Dark 101 https://zeppelin.apache.org/
This is an introductory session on Apache Spark, a framework for large-scale data processing (https://spark.apache.org/). We will introduce high-level concepts around Spark, including how Spark execution works and its relationship to the other technologies for working with Big Data. Following this introduction to the theory and background, we will walk workshop participants through hands-on usage of spark-shell, Zeppelin notebooks, and Spark SQL for processing library data. The workshop will wrap up with use cases and demos for leveraging Spark within cultural heritage institutions and information organizations, connecting the building blocks learned to current projects in the real world.
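The transformation style Spark parallelizes can be previewed in plain Python, with no Spark installation. This sketch uses invented sample data; the comment shows the roughly equivalent chain of PySpark calls over a distributed RDD:

```python
from collections import Counter

# Toy "library data": one subject-heading string per record (invented sample).
records = [
    "Big data -- Congresses",
    "Data mining -- Study and teaching",
    "Big data -- Social aspects",
]

# Spark would express this as chained transformations, roughly:
#   sc.textFile(...).flatMap(split).map(lambda t: (t, 1)).reduceByKey(add)
# Here is the same pipeline shape in plain Python:
terms = (term.strip().lower()
         for line in records
         for term in line.split("--"))
counts = Counter(terms)

print(counts["big data"])  # -> 2 (appears in two records)
```

The point of Spark is that each stage of such a pipeline can run in parallel across a cluster, which matters once the input is far larger than one machine’s memory.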

(One three-hour session)

29. Introduction to Spotlight https://github.com/projectblacklight/spotlight
http://www.spotlighttechnology.com/4-OpenSource.htm
Spotlight is an open source application that extends the digital library ecosystem by providing a means for institutions to reuse digital content in easy-to-produce, attractive, and scholarly-oriented websites. Librarians, curators, and other content experts can build Spotlight exhibits to showcase digital collections using a self-service workflow for selection, arrangement, curation, and presentation.

This workshop will introduce the main features of Spotlight and present examples of Spotlight-built exhibits from the community of adopters. We’ll also describe the technical requirements for adopting Spotlight and highlight the potential to customize and extend Spotlight’s capabilities for their own needs while contributing to its growth as an open source project.

(One three-hour session)

31. Getting Started Visualizing your IoT Data in Tableau https://www.tableau.com/
The Internet of Things is a rising trend in library research. IoT sensors can be used for space assessment, service design, and environmental monitoring. IoT tools create lots of data that can be overwhelming and hard to interpret. Tableau Public (https://public.tableau.com/en-us/s/) is a data visualization tool that allows you to explore this information quickly and intuitively to find new insights.

This full-day workshop will teach you the basics of building your own IoT sensor using a Raspberry Pi (https://www.raspberrypi.org/) in order to gather, manipulate, and visualize your data.

All are welcome, but some familiarity with Python is recommended.
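The gathering step can be sketched as follows. This is a hypothetical example: the file name and column names are invented, and simulated readings stand in for a real Pi sensor; the result is a CSV that Tableau Public can open as a text data source.

```python
import csv
import random
from datetime import datetime, timedelta

# On a Raspberry Pi these readings would come from an attached sensor;
# here they are simulated so the CSV shape is clear.
start = datetime(2018, 1, 1)
rows = [
    {"timestamp": (start + timedelta(hours=h)).isoformat(),
     "temperature_c": round(20 + random.uniform(-2, 2), 1)}
    for h in range(24)
]

with open("sensor_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["timestamp", "temperature_c"])
    writer.writeheader()
    writer.writerows(rows)
# Tableau Public can now connect to sensor_log.csv as a text data source.
```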

(Two three-hour sessions)

32. Enabling Social Media Research and Archiving
Social media data represents a tremendous opportunity for memory institutions of all kinds, be they large academic research libraries or small community archives. Researchers from a broad swath of disciplines have a great deal of interest in working with social media content, but they often lack access to datasets or the technical skills needed to create them. Further, it is clear that social media is already a crucial part of the historical record in areas ranging from events in your local community to national elections. But attempts to build archives of social media data are largely nascent. This workshop will be both an introduction to collecting data from the APIs of social media platforms, as well as a discussion of the roles of libraries and archives in that collecting.

Assuming no prior experience, the workshop will begin with an explanation of how APIs operate. We will then focus specifically on the Twitter API, as Twitter is of significant interest to researchers and hosts an important segment of discourse. Through a combination of hands-on exercises and demos, we will gain experience with a number of tools that support collecting social media data (e.g., Twarc, Social Feed Manager, DocNow, Twurl, and TAGS), as well as tools that enable sharing social media datasets (e.g., Hydrator, TweetSets, and the Tweet ID Catalog).
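One sharing practice the tools above support is distributing tweet IDs rather than full tweets. A minimal sketch with invented sample data (the tweet objects mimic, in simplified form, the JSON shape the Twitter API returns):

```python
# Two invented tweet objects in (simplified) Twitter API JSON shape.
collected = [
    {"id_str": "123456", "full_text": "Opening keynote underway",
     "user": {"screen_name": "somelibrarian"}},
    {"id_str": "123789", "full_text": "Workshop slides posted",
     "user": {"screen_name": "anotherlibrarian"}},
]

def dehydrate(tweets):
    """Reduce collected tweets to bare IDs for sharing; tools like the
    Hydrator can later 'rehydrate' the IDs back into full tweet JSON."""
    return [t["id_str"] for t in tweets]

print(dehydrate(collected))  # -> ['123456', '123789']
```

Sharing only IDs lets deleted or protected tweets drop out on rehydration, which is one way archives address the responsible-data-sharing concerns discussed later in the session.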

The workshop will then turn to a discussion of how to build a successful program enabling social media collecting at your institution. This might cover a variety of topics including outreach to campus researchers, collection development strategies, the relationship between social media archiving and web archiving, and how to get involved with the social media archiving community. This discussion will be framed by a focus on ethical considerations of social media data, including privacy and responsible data sharing.

Time permitting, we will provide a sampling of some approaches to social media data analysis, including Twarc Utils and Jupyter Notebooks.

(One three-hour session)

VR AR MR in education

7 Things You Should Know About AR/VR/MR

https://library.educause.edu/resources/2017/10/7-things-you-should-know-about-ar-vr-mr 
Augmented reality can be described as experiencing the real world with an overlay of additional computer-generated content. In contrast, virtual reality immerses a user in an entirely simulated environment, while mixed or merged reality blends real and virtual worlds in ways through which the physical and the digital can interact. AR, VR, and MR offer new opportunities to create a psychological sense of immersive presence in an environment that feels real enough to be viewed, experienced, explored, and manipulated. These technologies have the potential to democratize learning by giving everyone access to immersive experiences that were once restricted to relatively few learners.
In Grinnell College’s Immersive Experiences Lab (http://gciel.sites.grinnell.edu/), teams of faculty, staff, and students collaborate on research projects, then use 3D, VR, and MR technologies as a platform to synthesize and present their findings.
Downsides: there is relatively little research about the most effective ways to use these technologies as instructional tools. Combined, these factors can be disincentives for institutions to invest in the equipment, facilities, and staffing that can be required to support these systems. AR, VR, and MR technologies also raise concerns about personal privacy and data security. Further, at least some of these tools and applications currently fail to meet accessibility standards. The user experience in some AR, VR, and MR applications can be intensely emotional and even disturbing (my note: but can also be used for empathy literacy).
These technologies can immerse users in recreated, remote, or even hypothetical environments as small as a molecule or as large as a universe, allowing learners to experience “reality” from multiple perspectives.

++++++++++++++++
more on VR, AR, MX in this IMS blog
https://blog.stcloudstate.edu/ims?s=virtual+reality

psychology of social networks

The Blogger’s Guide To Understanding The Psychology Of Social Networks


http://www.bloggingwizard.com/psychology-of-social-networks/

Social media is eating the world.

Facebook alone has over 1.5 billion users – nearly 50% of the entire internet’s population.

Throw in LinkedIn, Twitter, Pinterest, Instagram and region-specific social networks like VKontakte, Sina Weibo and WeChat, and you’d be hard pressed to find anyone who’s online but isn’t on social media.

What has led to the rise of these social networks? What kind of people do they attract?

What is their psychology? What kind of content do they like to consume? And most importantly for bloggers and marketers – what works, what doesn’t on social media?

Facebook has become the ‘home base’ for most people online. While they may or may not use other networks, a majority maintain a presence on Facebook.

  • Popular: Used by 72% of all adult internet users in America.
  • More women users: 77% of online female users are on Facebook.
  • Younger audience: 82% of all online users between 18-29 are on Facebook
  • USA (14%), India (9%) and Brazil (7%) form the three largest markets.

Twitter’s quick flowing ‘info stream’ attracts an audience that swings younger and is mostly urban/semi-urban.

  • Younger: Used by 37% of all online users between 18 and 29.
  • Educated: 54% of users have either graduated college, or have some college experience.
  • Richer: 54% of online adults who make over $50,000+ are on Twitter.

Instagram recently overtook Twitter to become the second largest social network. Pew estimates that 26% of all online adults in the US are on Instagram.

  • More women than men: 29% of all online women are on Instagram, vs. only 22% of all men.
  • Overwhelmingly younger: 53% of all 18-29 year olds are on Instagram.
  • Less educated: Only 24% of Instagram users are college graduates, while 31% have some college experience – fitting since its audience is largely younger.

Google+ is a mysterious beast. It is ubiquitous, yet doesn’t attract nearly a tenth of the attention as Instagram or Facebook. Some marketers swear by it, while others are busy proclaiming its death.

  • More male: 24% of all online men are active users of Google+. For women, this number is 20%.
  • Younger users: 27% of all 16-24 year olds online are active members of Google+. In contrast, only 18% and 14% of 45-54 and 55-64 year olds are active on Google+ at the moment.
  • Large non-US user base: Only 55% of Google+ users are American. 18% are Indian and 6% are Brazilian. One reason for this international user base is Android’s popularity outside the US (since Google+ is baked right into Android).
  • Even income distribution: According to GlobalWebIndex.net, 22% of people in the bottom 25% of income earners are on Google+. For the top 25% of income earners, this number is 24%, while for the middle 50% of earners it is 23%. This means that all income levels are nearly equally represented on Google+.

Pinterest’s visual nature makes it a fantastic marketing tool for B2C businesses. And it’s got the potential to drive a large amount of traffic to your blog if you have a solid strategy.

Here’s what you should know about Pinterest demographics:

  • Overwhelmingly female: 42% of all online female users are on Pinterest, vs. only 13% of men.
  • Older audience: 72% of Pinterest’s audience are 30 years or older. Only 34% are between 18 and 29. Significantly, 17% are over 65 years old.
  • Distinctly suburban: Suburban and rural users form the largest share – 29% and 30% respectively. This is distinctly different from other networks where urban users rule.
  • Higher income: Given the higher average age, Pinterest users also have higher disposable income, with 64% of all adults making $50,000+ on Pinterest.

The professional networking site LinkedIn attracts an older audience that is largely urban, wealthier, and more educated.

  • Older: Only 23% of users are between 18-29 years old. 21% are over 65 years, and 31% are between 30 and 49 years of age.
  • Urban: Very limited number of rural users – only 14%. 61% are either urban or suburban.
  • Wealthier: 75% of users earn over $50,000.
  • Highly educated: 50% of LinkedIn users are college graduates. Another 22% have some college experience.

Snapchat is the newest social network on this list, but also one of the fastest growing. Here’s what you need to know about its demographics:

  • Dominated by women: 70% of Snapchat’s users are females.
  • Overwhelmingly young: 71% of users are younger than 25.
  • Limited income: 62% earn under $50,000 – fitting given the average age of Snapchat’s users.

Here’s what you should take away from all these stats:

  • If you’re targeting younger users, stick to Instagram, Twitter and Snapchat.
  • If you’re targeting women with disposable income, head over to Pinterest.
  • For professionals with better education and income, use LinkedIn.
  • For everyone, go with Facebook.

The psychology of social media users

Facebook is a ‘closed’ network where your friends list will usually be limited to family, friends and acquaintances you’ve met in real life. Privacy is a big concern for Facebook’s users, and all posts are private by default.

This ultimately affects the way users interact with each other and with businesses on Facebook.

According to a Pew Internet study:

  • Facebook users are more trusting (since the network is closed).
  • Facebook users have more close relationships. Pew found that heavy users of the platform are more likely to have a higher number of close relationships.
  • Facebook users are politically engaged and active.

To understand why people share or follow on Twitter, researchers at Georgia Tech and UMichigan analysed over 500M tweets over 15 months. They found that the three biggest reasons why people share or follow on Twitter are:

  • Network overlap: Your network is similar to your followers’ network.
  • User tweet-RT ratio: The number of tweets vs. the number of RTs for a user.
  • Informational content: The more informative the content, the better.
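The first factor, network overlap, can be made concrete as the shared fraction of two users’ follower sets (a Jaccard index). A small sketch with invented accounts:

```python
def overlap(followers_a, followers_b):
    """Jaccard index: shared followers as a fraction of all distinct followers."""
    return len(followers_a & followers_b) / len(followers_a | followers_b)

# Invented follower sets:
you      = {"@ada", "@alan", "@grace"}
a_friend = {"@ada", "@alan", "@linus"}
stranger = {"@x1", "@x2"}

print(overlap(you, a_friend))  # -> 0.5 (2 shared out of 4 distinct)
print(overlap(you, stranger))  # -> 0.0
```

The finding above suggests that the higher this score between you and a prospective follower, the more likely the follow or retweet.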

According to one study, a person’s Pinterest boards represent his/her “ideal self” – a representation of everything the user would want to be or have. This is in opposition to Facebook, which represents the user’s “real self”.

Keep the following in mind:

  • Instead of marketing yourself on every network, pick the network whose demographics matches your target audience’s.
  • Positivity always wins – unless you’re deliberately trying to create controversy (not a good option for most non-media businesses).
  • Rules of content: Informative content on Twitter and LinkedIn, aspirational content on Instagram and Pinterest, fun/positive/uplifting content on Facebook.

+++++++++++++++++++
more on social media in this IMS blog
https://blog.stcloudstate.edu/ims?s=social+media

anonymous browsing data

‘Anonymous’ browsing data can be easily exposed, researchers reveal

https://www.theguardian.com/technology/2017/aug/01/data-browsing-habits-brokers

A similar strategy was used in 2008, Dewes said, to deanonymise a set of ratings published by Netflix to help computer scientists improve its recommendation algorithm. By comparing “anonymous” ratings of films with public profiles on IMDb, researchers were able to unmask Netflix users – including one woman, a closeted lesbian, who went on to sue Netflix for the privacy violation.
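The general linking technique is simple set overlap between an “anonymous” trace and publicly visible activity. A toy sketch with invented data (site names and profiles are made up; the rarer the shared items, the stronger the identification):

```python
# An "anonymous" browsing history and some public link-sharing profiles.
anonymous_history = {"site-a.example", "site-b.example", "rare-forum.example"}

public_profiles = {
    "alice": {"site-a.example", "news.example"},
    "bob":   {"site-a.example", "site-b.example", "rare-forum.example"},
}

def best_match(history, profiles):
    """Return the profile whose public links overlap the history most (Jaccard)."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return max(profiles, key=lambda name: jaccard(history, profiles[name]))

print(best_match(anonymous_history, public_profiles))  # -> 'bob'
```

With only a handful of distinctive sites, the match is already unambiguous – which is why researchers found “anonymous” browsing data so easy to expose at scale.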

++++++++++++++++
A hacker explains the best way to browse the internet anonymously.
https://www.facebook.com/techinsider/videos/824655787732779/ 

++++++++++++++
more on privacy in this IMS blog
https://blog.stcloudstate.edu/ims?s=privacy

safe social media

Tips Toward a Safe and Positive Social Media Experience

By Stephen Spengler 06/01/17

https://thejournal.com/articles/2017/06/01/tips-toward-a-safe-and-positive-social-media-experience.aspx

The Family Online Safety Institute recommends that parents engage in “7 Steps to Good Digital Parenting”:

1. Talk with your children.

2. Educate yourself.

3. Use parental controls. Check the safety controls on all of the Android and Apple devices that your family uses. On the iPhone, you can tap SETTINGS > GENERAL > RESTRICTIONS, and you can create a password that allows you to enable/disable apps and phone functions. On Android devices, you can turn on Google Play Parental Controls by going into the Google Play Store settings.

Also consider parental monitoring software such as NetNanny, PhoneSherriff, Norton Family Premier and Qustodio.

4. Friend and follow your children on social media. Whether it’s musical.ly, Instagram or Twitter, chances are that your children use some form of social media. If you have not already, then create an account and get on their friends list.

5. Explore, share and celebrate.

6. Be a good digital role model.

7. Set ground rules and apply sanctions. Just like chore charts or family job lists, consider using a family social media or internet safety contract. These contracts establish ground rules for when devices may be used and what children should and should not be doing on them, and they set out sanctions for breaches of the family contract. A simple internet search for "family internet contract" or "family technology contract" will produce a wealth of ideas and resources to help you implement rules and sanctions around your family's technology use. A good example of a social media contract for children can be found at imom.com/printable/social-media-contract-for-kids/.

Managing Your Digital Footprint

Your digital footprint, according to dictionary.com, is "one's unique set of digital activities, actions, and communications that leave a data trace on the internet or on a computer or other digital device and can identify the particular user or device." Digital footprints can be either passive or active. A passive digital footprint is created without your consent and is driven by the sites and apps that you visit. The data from a passive digital footprint could reveal your internet history, IP address, and location, and it is all stored in files on your device without your knowledge. An active digital footprint is more easily managed by the user. Data from an active digital footprint shows social media postings, information sharing, online purchases and activity usage.
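The passive footprint described above accrues automatically: an ordinary web server log line already records your IP address, a timestamp, the page you visited, and your browser, with no action on your part beyond loading the page. A minimal sketch, using a made-up log line in the common access-log format:

```python
# A single (invented) web server log line: even this one passive record
# exposes the visitor's IP address and the page they viewed.
log_line = '203.0.113.7 - - [01/Jun/2017:10:15:32] "GET /health-info HTTP/1.1" 200 "Mozilla/5.0"'

ip = log_line.split()[0]                      # first field is the client IP
page = log_line.split('"')[1].split()[1]      # path from the quoted request line
print(ip, page)  # → 203.0.113.7 /health-info
```

Multiply one line like this by every page on every site you visit, and the scale of the passive footprint becomes clear.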

  • Search for yourself online.
  • Check privacy settings.
  • Use strong passwords.
  • Update software.
  • Maintain your device.
  • Think before you post.
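The "use strong passwords" tip can be put into practice without a third-party tool. A minimal sketch using Python's standard-library `secrets` module, which is designed for security-sensitive randomness (the length and character set here are illustrative choices):

```python
# Generate a random password using the cryptographically secure
# standard-library `secrets` module.
import secrets
import string

def make_password(length=16):
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # prints a different random password on every run
```

A password manager that generates and stores passwords like these removes the temptation to reuse one memorable password everywhere.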

Keep These Apps on Your Radar

  • Afterschool (minimum age 17) – The Afterschool App was rejected twice from the major app stores due to complaints from parents and educators. It is a well-known app that promotes cyberbullying, sexting, and pornography, and it is filled with references to drugs and alcohol.
  • Blue Whale (minimum age 10) – IF YOU FIND THIS APP ON YOUR CHILD’S DEVICE, DELETE IT. It is a suicide challenge app that attempts to prod children into killing themselves.
  • BurnBook (minimum age 18) – IF YOU FIND THIS APP ON YOUR CHILD’S DEVICE, DELETE IT. It is a completely anonymous app for posting text, photos, and audio that promote rumors about other people. It is notorious for cyberbullying.
  • Calculator% (minimum age 4) – IF YOU FIND THIS APP ON YOUR CHILD’S DEVICE, DELETE IT. This is one of hundreds of “secret” calculator apps. This app is designed to help students hide photos and videos that they do not want their parents to see. This app looks and functions like a calculator, but students enter a “.”, a 4-digit passcode, and then a “.” again.
  • KIK (minimum age 17) – This is a communications app that allows anyone to be contacted by anyone, and it completely bypasses the device’s contacts list.
  • Yik Yak (minimum age 18) – This app is a location-based (most commonly schools) bulletin board app. It is anonymous, so users can pretend to be anyone they want. Many schools across the country have encountered cyberbullying and cyberthreats originating from this app.
  • StreetChat (minimum age 14) – StreetChat is a photo-sharing board for middle school, high school and college-age students. Members do not need to be a student in the actual school and can impersonate students in schools across the country. It promotes cyberbullying through anonymous posts and private messaging.
  • ooVoo (minimum age 13) – IF YOU FIND THIS APP ON YOUR CHILD’S DEVICE, DELETE IT. ooVoo is one of the largest video and messaging apps. Parents should be aware that ooVoo is used by predators to contact underage children. The app allows users to video chat with up to twelve people at one time.
  • Wishbone (girls) & Slingshot (boys) (minimum age 13) – Both are comparison apps that allow users to create polls, including ones that are not appropriate for children. Many of the users create polls to shame and cyberbully other children, plus there are inappropriate ads and videos that users are forced to watch via the app’s advertising engine.

+++++++++++++++++++

Texas Teen May Be Victim in ‘Blue Whale Challenge’ That Encourages Suicide

Isaiah Gonzalez, 15, found hanging from his closet after an apparent suicide, as allegedly instructed by macabre online game

http://www.rollingstone.com/culture/news/texas-teen-latest-victim-in-challenge-that-promotes-suicide-w491939

Nationally, the Associated Press reports that educators, law enforcement officers and parents have raised concerns about the challenge, though these two back-to-back deaths mark the first allegations in the United States about deaths directly linked to the online game. Internationally, suicides in Russia, Brazil, and half a dozen other countries have already been linked to the challenge.

++++++++++++++++++++
more on social media in education in this IMS blog
https://blog.stcloudstate.edu/ims?s=social+media+education

next gen digital learning environment

Updating the Next Generation Digital Learning Environment for Better Student Learning Outcomes

a learning management system (LMS) is never the solution to every problem in education. Edtech is just one part of the whole learning ecosystem and student experience.

Therefore, the next generation digital learning environment (NGDLE), as envisioned by EDUCAUSE in 2015 …  Looking at the NGDLE requirements from an LMS perspective, I view the NGDLE as being about five areas: interoperability; personalization; analytics, advising, and learning assessment; collaboration; accessibility and universal design.

Interoperability

  • Content can easily be exchanged between systems.
  • Users are able to leverage the tools they love, including discipline-specific apps.
  • Learning data is available to trusted systems and people who need it.
  • The learning environment is “future proof” so that it can adapt and extend as the ecosystem evolves.

Personalization

  • The learning environment reflects individual preferences.
  • Departments, divisions, and institutions can be autonomous.
  • Instructors teach the way they want and are not constrained by the software design.
  • There are clear, individual learning paths.
  • Students have choice in activity, expression, and engagement.

Analytics, Advising, and Learning Assessment

  • Learning analytics helps to identify at-risk students, course progress, and adaptive learning pathways.
  • The learning environment enables integrated planning and assessment of student performance.
  • More data is made available, with greater context around the data.
  • The learning environment supports platform and data standards.
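The first bullet in the analytics list, identifying at-risk students, can be sketched as a simple threshold rule over activity data. All field names, values, and cutoffs below are invented for illustration; real learning-analytics systems use richer models, but the flagging idea is the same:

```python
# Toy at-risk flagging: mark students whose average grade or recent
# login activity falls below hypothetical cutoffs. All data invented.
students = [
    {"name": "Student 1", "avg_grade": 88, "logins_last_week": 6},
    {"name": "Student 2", "avg_grade": 61, "logins_last_week": 1},
    {"name": "Student 3", "avg_grade": 74, "logins_last_week": 0},
]

def at_risk(student, grade_cutoff=70, login_cutoff=2):
    """Flag a student if either the grade or the engagement signal is low."""
    return (student["avg_grade"] < grade_cutoff
            or student["logins_last_week"] < login_cutoff)

flagged = [s["name"] for s in students if at_risk(s)]
print(flagged)  # → ['Student 2', 'Student 3']
```

Note that Student 3 is flagged on engagement alone despite a passing grade, which is exactly the kind of early signal an advisor would want before grades slip.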

Collaboration

  • Individual spaces persist after courses and after graduation.
  • Learners are encouraged as creators and consumers.
  • Courses include public and private spaces.

Accessibility and Universal Design

  • Accessibility is part of the design of the learning experience.
  • The learning environment enables adaptive learning and supports different types of materials.
  • Learning design includes measurement rubrics and quality control.

The core analogy used in the NGDLE paper is that each component of the learning environment is a Lego brick:

  • The days of the LMS as a “walled garden” app that does everything are over.
  • Today many kinds of amazing learning and collaboration tools (Lego bricks) should be accessible to educators.
  • We have standards that let these tools (including an LMS) talk to each other. That is, all bricks share some properties that let them fit together.
  • Students and teachers sign in once to this “ecosystem of bricks.”
  • The bricks share results and data.
  • These bricks fit together; they can be interchanged and swapped at will, with confidence that the learning experience will continue uninterrupted.
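The "bricks share some properties that let them fit together" idea can be sketched as a shared interface that every tool implements. The class and method names below are invented for illustration; in real learning ecosystems this role is played by standards such as IMS LTI:

```python
# Toy sketch of the Lego-brick metaphor: tools that implement a shared
# contract can be swapped freely, and the LMS acts as the hub that
# aggregates their results. All names here are hypothetical.

class Brick:
    """The shared contract every tool ('brick') agrees to."""
    def report_score(self, student: str) -> int:
        raise NotImplementedError

class QuizTool(Brick):
    def report_score(self, student):
        return 90  # pretend quiz result

class DiscussionTool(Brick):
    def report_score(self, student):
        return 75  # pretend participation score

def gradebook(bricks, student):
    """LMS-as-hub: collect results from any standards-compliant brick."""
    return {type(b).__name__: b.report_score(student) for b in bricks}

print(gradebook([QuizTool(), DiscussionTool()], "pat"))
# → {'QuizTool': 90, 'DiscussionTool': 75}
```

Because `gradebook` depends only on the `Brick` contract, a new tool can be dropped in, or an old one swapped out, without changing the hub, which is the interoperability promise of the metaphor.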

Any “next-gen” attempt to completely rework the pedagogical model and introduce a “mash-up of whatever” to fulfill this model would fall victim to the same criticisms levied at the LMS today: there is too little time and training to expect faculty to figure out the nuances of implementation on their own.

The Lego metaphor works only if we’re talking about “old school” Lego design — bricks of two, three, and four-post pieces that neatly fit together. Modern edtech is a lot more like the modern Lego. There are wheels and rocket launchers and belts and all kinds of amazing pieces that work well with each other, but only when they are configured properly. A user cannot simply stick together different pieces and assume they will work harmoniously in creating an environment through which each student can be successful.

As the NGDLE paper states: “Despite the high percentages of LMS adoption, relatively few instructors use its more advanced features — just 41% of faculty surveyed report using the LMS ‘to promote interaction outside the classroom.'”

But this is what the next generation LMS is good at: being a central nervous system — or learning hub — through which a variety of learning activities and tools are used. This is also where the LMS needs to go: bringing together and making sense of all the amazing innovations happening around it. This is much harder to do, perhaps even impossible, if all the pieces involved are just bricks without anything to orchestrate them or to weave them together into a meaningful, personal experience for achieving well-defined learning outcomes.

  • Making a commitment to build easy, flexible, and smart technology
  • Working with colleges and universities to remove barriers to adopting new tools in the ecosystem
  • Standardizing the vetting of accessibility compliance (the Strategic Nonvisual Access Partner Program from the National Federation of the Blind is a great start)
  • Advancing standards for data exchange while protecting individual privacy
  • Building integrated components that work with the institutions using them — learning quickly about what is and is not working well and applying those lessons to the next generation of interoperability standards
  • Letting people use the tools they love [SIC] and providing more ways for nontechnical individuals (including students) to easily integrate new features into learning activities

My note: something just refused to be accepted at SCSU
Technologists are often very focused on the technology, but the reality is that the more deeply and closely we understand the pedagogy and the people in the institutions — students, faculty, instructional support staff, administrators — the better suited we are to actually making the tech work for them.

++++++++++++++++++++++

Under the Hood of a Next Generation Digital Learning Environment in Progress

The challenge is that although 85 percent of faculty use a campus learning management system (LMS), a recent Blackboard report found that, out of 70,000 courses across 927 North American institutions, 53 percent of LMS usage was classified as supplemental (content-heavy, low interaction) and 24 percent as complementary (one-way communication via content/announcements/gradebook). Only 11 percent were characterized as social, 10 percent as evaluative (heavy use of assessment), and 2 percent as holistic (balanced use of all of the previous). Our FYE course required innovating beyond the supplemental course-level LMS to create a more holistic cohort-wide NGDLE in order to fully support the teaching, learning, and student success missions of the program. The key design goals for our NGDLE were to:

  • Create a common platform that could deliver a standard curriculum and achieve parity in all course sections using existing systems and tools and readily available content
  • Capture, store, and analyze any generated learner data to support learning assessment, continuous program improvement, and research
  • Develop reports and actionable analytics for administrators, advisors, instructors, and students
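To put the Blackboard percentages quoted above in absolute terms, a quick back-of-the-envelope calculation over the 70,000 courses:

```python
# Share of the 70,000 courses in each LMS usage category,
# using the percentages from the Blackboard report quoted above.
total_courses = 70_000
shares = {"supplemental": 0.53, "complementary": 0.24, "social": 0.11,
          "evaluative": 0.10, "holistic": 0.02}
counts = {category: round(total_courses * share) for category, share in shares.items()}
print(counts)
# → {'supplemental': 37100, 'complementary': 16800, 'social': 7700,
#    'evaluative': 7000, 'holistic': 1400}
```

Seen as counts, only about 1,400 of 70,000 courses made balanced, holistic use of the LMS, which underscores why the program looked beyond a course-level LMS.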

++++++++++++
more on LMS in this blog
https://blog.stcloudstate.edu/ims?s=LMS

more on learning outcomes in this IMS blog
https://blog.stcloudstate.edu/ims?s=learning+outcomes
