Searching for "student privacy"

IRDL proposal

Applications for the 2018 Institute will be accepted between December 1, 2017 and January 27, 2018. Scholars accepted to the program will be notified in early March 2018.

Title:

Learning to Harness Big Data in an Academic Library

Abstract (200 words)

Research on Big Data per se, as well as on the importance and organization of the process of Big Data collection and analysis, is well underway. The complexity of the process comprising “Big Data,” however, leaves organizations without a ubiquitous “blueprint.” The planning, structuring, administration, and execution of the process of adopting Big Data in an organization, whether corporate or educational, remain elusive. No less elusive is the adoption of Big Data practices among libraries themselves. Seeking the commonalities and differences in the adoption of Big Data practices among libraries may be a suitable start to help libraries transition to the adoption of Big Data and to the restructuring of organizational and daily activities based on Big Data decisions.
Introduction to the problem. Limitations

The redefinition of humanities scholarship has received major attention in higher education. The advent of digital humanities challenges aspects of academic librarianship. Data literacy is a critical need for digital humanities in academia. The March 2016 Library Juice Academy webinar led by John Russel exemplifies the efforts to help librarians become versed in acquiring programming skills and, correspondingly, in handling data. These are first steps on a rather long path toward building a robust infrastructure to collect, analyze, and interpret data intelligently, so it can be utilized to restructure daily and strategic activities. Since the phenomenon of Big Data is young, there is a lack of blueprints for the organization of such infrastructure. Collecting and sharing best practices is an efficient approach to establishing a feasible plan for setting up a library infrastructure for the collection, analysis, and implementation of Big Data.
Limitations. This research can only organize the results from librarians’ responses and from research into how libraries present themselves to the world in this arena. It may be able to make some rudimentary recommendations. However, based on each library’s specific goals and tasks, further research and work will be needed.


Research Literature

“Big data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it…”
– Dan Ariely, 2013 (quoted in Harper & Oltmann, 2017, https://www.asist.org/publications/bulletin/aprilmay-2017/big-datas-impact-on-privacy-for-librarians-and-information-professionals/)

Big Data is becoming an omnipresent term. It is widespread among different disciplines in academia (De Mauro, Greco, & Grimaldi, 2016). This leads to “inconsistency in meanings and necessity for formal definitions” (De Mauro et al., 2016, p. 122). Similarly to De Mauro et al. (2016), Hashem, Yaqoob, Anuar, Mokhtar, Gani, and Ullah Khan (2015) seek standardization of definitions. The main connected “themes” of this phenomenon must be identified, and the connections to Library Science must be sought. A prerequisite for a comprehensive definition is the identification of Big Data methods. Bughin, Chui, and Manyika (2010), Chen et al. (2012), and De Mauro et al. (2015) single out the methods needed to complete the process of building a comprehensive definition.

In conjunction with identifying the methods, volume, velocity, and variety, as defined by Laney (2001), are the three properties of Big Data accepted across the literature. Daniel (2015) defines three stages in Big Data: collection, analysis, and visualization. According to Daniel (2015), Big Data in higher education “connotes the interpretation of a wide range of administrative and operational data” (p. 910), and according to Hilbert (2013), as cited in Daniel (2015), Big Data “delivers a cost-effective prospect to improve decision making” (p. 911).

The importance of understanding the process of Big Data analytics is well recognized in academic libraries. Examples of such “administrative and operational” use for cost-effective improvement of decision making are the Finch & Flenner (2016) and Eaton (2017) case studies of the use of data visualization to assess an academic library collection and restructure the acquisition process. Sugimoto, Ding & Thelwall (2012) call for a discussion of Big Data for libraries. According to the 2017 NMC Horizon Report, “Big Data has become a major focus of academic and research libraries due to the rapid evolution of data mining technologies and the proliferation of data sources like mobile devices and social media” (Adams Becker et al., 2017, p. 38).

Power (2014) elaborates on the complexity of Big Data in regard to decision making and offers ideas for organizations on building a system to deal with Big Data. As explained by Boyd and Crawford (2012), and cited in De Mauro et al. (2016), there is a danger of a new digital divide among organizations with different access to data and different abilities to process it. Moreover, Big Data presses existing organizations to reconsider their structure and organization. The complexity of institutions’ performance under the impact of Big Data is further complicated by changes in human behavior, because, arguably, Big Data affects human behavior itself (Schroeder, 2014).

De Mauro et al. (2015) touch on the impact of Big Data on libraries. The reorganization of academic libraries in light of Big Data, and the handling of Big Data by libraries, stands in close conjunction with the reorganization of the entire campus and the handling of Big Data by the educational institution. In addition to the disruption posed by the Big Data phenomenon, higher education is facing global changes of an economic, technological, social, and educational character. Daniel (2015) uses a chart to illustrate the complexity of these global trends. Parallel to the Big Data developments in America and Asia, the European Union is offering access to an EU open data portal (https://data.europa.eu/euodp/home). Moreover, the Association of European Research Libraries expects the H2020 program to increase “the digitization of cultural heritage, digital preservation, research data sharing, open access policies and the interoperability of research infrastructures” (Reilly, 2013).

The challenges posed by Big Data to human and social behavior (Schroeder, 2014) are no less significant than the impact of Big Data on learning. Cohen, Dolan, Dunlap, Hellerstein, & Welton (2009) propose a road map for “more conservative organizations” (p. 1492) to overcome their reservations and/or inability to handle Big Data and to adopt a practical approach to its complexity. Two Chinese researchers define deep learning as the “set of machine learning techniques that learn multiple levels of representation in deep architectures” (Chen & Lin, 2014, p. 515) and note that it requires “new ways of thinking and transformative solutions” (Chen & Lin, 2014, p. 523). Another pair of researchers from China presents a broad overview of the various societal, business and administrative applications of Big Data, including a detailed account and definitions of the processes and tools accompanying Big Data analytics (Philip Chen & Zhang, 2014). Their American counterparts are of the same opinion when it comes to the need to “think about the core principles and concepts that underline the techniques, and also the systematic thinking” (Provost & Fawcett, 2013, p. 58). De Mauro, Greco, and Grimaldi (2016), similarly to Provost and Fawcett (2013), draw attention to the urgent necessity to train new types of specialists to work with such data. As early as 2012, Davenport and Patil (2012), as cited in De Mauro et al. (2016), envisioned hybrid specialists able to manage both technological knowledge and academic research. Similarly, Provost and Fawcett (2013) mention the efforts of “academic institutions scrambling to put together programs to train data scientists” (p. 51). Further, Asamoah, Sharda, Hassan Zadeh, & Kalgotra (2017) share a specific plan for the design and delivery of a Big Data analytics course. At the same time, librarians working with data acknowledge the shortcomings in the profession, since librarians “are practitioners first and generally do not view usability as a primary job responsibility, usually lack the depth of research skills needed to carry out a fully valid” data-based study (Emanuel, 2013, p. 207).

Borgman (2015) devotes an entire book to data and scholarly research and goes beyond the already well-established facts regarding the importance and implications of Big Data and the technical, societal, and educational impact and complications it poses. Borgman elucidates the importance of knowledge infrastructure and the necessity of understanding the complexity of building such infrastructure in order to be able to take advantage of Big Data. In a similar fashion, a team of Chinese scholars draws attention to the complexity of data mining and Big Data and the necessity of approaching the issue in an organized fashion (Wu, Zhu, Wu, & Ding, 2014).

Bruns (2013) shifts the conversation from the “macro” architecture of Big Data on which Borgman (2015) and Wu et al. (2014) focus, and ponders the influx of data and the unprecedented opportunities it creates for the humanities in academia with the advent of Big Data. Does the seemingly ubiquitous presence of Big Data mean for the humanities a “railroading” into “scientificity”? How will research and publishing change with the advent of Big Data across academic disciplines?

Reyes (2015) shares her “skinny” approach to Big Data in education. She presents a comprehensive structure for educational institutions to shift “traditional” analytics to “learner-centered” analytics (p. 75) and identifies the participants in the Big Data process in the organization. The model is applicable for library use.

Being a new and uncharted territory, Big Data and Big Data analytics can pose ethical issues. Willis (2013) focuses on Big Data applications in education, namely the ethical questions for higher education administrators and the expectation that Big Data analytics will predict students’ success. Daries, Reich, Waldo, Young, and Whittinghill (2014) discuss rather similar issues regarding the balance between data and student privacy regulations. The privacy issues accompanying data are also discussed by Tene and Polonetsky (2013).

Privacy issues are habitually connected to security and surveillance issues. Andrejevic and Gates (2014) point out that in decision making “generated by data mining, the focus is not on particular individuals but on aggregate outcomes” (p. 195). Van Dijck (2014) goes into further detail regarding the perils posed by metadata and data to society, in particular to the privacy of citizens. Bail (2014) addresses the same issue regarding the impact of Big Data on societal issues, but underlines the leading role of cultural sociologists and their theories for the correct application of Big Data.

Library organizations have been traditional proponents of core democratic values such as the protection of privacy and the elucidation of related ethical questions (Miltenoff & Hauptman, 2005). In recent books about Big Data and libraries, ethical issues are an important part of the discussion (Weiss, 2018). Library blogs also discuss these issues (Harper & Oltmann, 2017). An academic library’s role is to educate its patrons about those values. Sugimoto et al. (2012) reflect on the need for discussion about Big Data in Library and Information Science. They clearly draw attention to the library “tradition of organizing, managing, retrieving, collecting, describing, and preserving information” (p. 1), as well as to library and information science being “a historically interdisciplinary and collaborative field, absorbing the knowledge of multiple domains and bringing the tools, techniques, and theories” (p. 1). Sugimoto et al. (2012) sought a wide discussion among the library profession regarding the implications of Big Data for the profession, no differently from the activities in other fields (e.g., Wixom, Ariyachandra, Douglas, Goul, Gupta, Iyer, Kulkarni, Mooney, Phillips-Wren, & Turetken, 2014). A current Andrew Mellon Foundation grant for Visualizing Digital Scholarship in Libraries seeks an opportunity to view “both macro and micro perspectives, multi-user collaboration and real-time data interaction, and a limitless number of visualization possibilities – critical capabilities for rapidly understanding today’s large data sets” (Hwangbo, 2014).

The importance of the library with its traditional roles, as described by Sugimoto et al (2012) may continue, considering the Big Data platform proposed by Wu, Wu, Khabsa, Williams, Chen, Huang, Tuarob, Choudhury, Ororbia, Mitra, & Giles (2014). Such platforms will continue to emerge and be improved, with librarians as the ultimate drivers of such platforms and as the mediators between the patrons and the data generated by such platforms.

Every library needs to find its place in the large organization and in society in regard to this very new and very powerful phenomenon called Big Data. Libraries might not have the trained staff to become a leader in the process of organizing and building the complex mechanism of this new knowledge architecture, but librarians must educate and train themselves to be worthy participants in this new establishment.

 

Method

 

The study will be cleared by the SCSU IRB.
The survey will collect responses from the library population regarding its readiness to use Big Data and its current use of Big Data. The survey URL will be sent to (academic?) libraries around the world.

Data will be processed through SPSS. Open-ended responses will be processed manually. The preliminary research design presupposes a mixed methods approach.

The study will include the use of closed-ended survey questions and open-ended questions. The first part of the study (closed-ended, quantitative questions) will be completed through an online survey. Participants will be asked to complete the survey using a link they receive through e-mail.

Mixed methods research was defined by Johnson and Onwuegbuzie (2004) as “the class of research where the researcher mixes or combines quantitative and qualitative research techniques, methods, approaches, concepts, or language into a single study” (p. 17). Quantitative and qualitative methods can be combined, if used to complement each other, because the methods can measure different aspects of the research questions (Sale, Lohfeld, & Brazil, 2002).

 

Sampling design

 

  • Online survey of 10-15 questions, with 3-5 demographic questions and the rest regarding the use of tools.
  • 1-2 open-ended questions at the end of the survey to probe for a follow-up mixed methods approach (an opportunity for a qualitative study).
  • Data analysis techniques: survey results will be exported to SPSS and analyzed accordingly (a purely illustrative sketch of this kind of analysis follows this list). The final survey design will determine the appropriate statistical approach.
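To make the data analysis step concrete, below is a minimal, purely illustrative sketch of the kind of descriptive analysis planned. It assumes a hypothetical CSV export of the survey results with made-up column names (library_type, uses_big_data_tools, etc.) and uses Python/pandas only as an example; the actual analysis is planned in SPSS.

```python
# Illustrative sketch only: the export file and column names are hypothetical,
# and the production analysis is planned in SPSS rather than pandas.
import pandas as pd

responses = pd.read_csv("big_data_survey_export.csv")  # hypothetical survey export

# Descriptive statistics for the demographic items
print(responses[["library_type", "staff_size", "region"]].describe(include="all"))

# Frequencies for a hypothetical item on Big Data tool use
print(responses["uses_big_data_tools"].value_counts(normalize=True))

# Cross-tabulate tool use by library type to look for group differences
print(pd.crosstab(responses["library_type"],
                  responses["uses_big_data_tools"],
                  normalize="index"))
```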

 

Project Schedule

 

Complete literature review and identify areas of interest – two months

Prepare and test instrument (survey) – month

IRB and other details – month

Generate a list of potential libraries to distribute survey – month

Contact libraries. Follow up and contact again, if necessary (in case of a low response rate) – month

Collect, analyze data – two months

Write out data findings – month

Complete manuscript – month

Proofreading and other details – month

 

Significance of the work 

While it has been widely acknowledged that Big Data (and its handling) is changing higher education (https://blog.stcloudstate.edu/ims?s=big+data) as well as academic libraries (https://blog.stcloudstate.edu/ims/2016/03/29/analytics-in-education/), it remains nebulous how Big Data is handled in the academic library and, respectively, how it is related to the handling of Big Data on campus. Moreover, the visualization of Big Data between units on campus remains in progress, along with any policymaking based on the analysis of such data (hence the need for comprehensive visualization).

 

This research will aim to gain an understanding of: a. how librarians are handling Big Data; b. how they are relating their Big Data output to the campus output of Big Data; and c. how librarians in particular and campus administration in general are tuning their practices based on the analysis.

Based on the survey returns (if there is a statistically significant return), this research might consider juxtaposing the practices of academic libraries with practices of special libraries (especially corporate libraries), public libraries, and school libraries.


References:

 

Adams Becker, S., Cummins, M., Davis, A., Freeman, A., Giesinger Hall, C., Ananthanarayanan, V., … Wolfson, N. (2017). NMC Horizon Report: 2017 Library Edition.

Andrejevic, M., & Gates, K. (2014). Big Data Surveillance: Introduction. Surveillance & Society, 12(2), 185–196.

Asamoah, D. A., Sharda, R., Hassan Zadeh, A., & Kalgotra, P. (2017). Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course. Decision Sciences Journal of Innovative Education, 15(2), 161–190. https://doi.org/10.1111/dsji.12125

Bail, C. A. (2014). The cultural environment: measuring culture with big data. Theory and Society, 43(3–4), 465–482. https://doi.org/10.1007/s11186-014-9216-5

Borgman, C. L. (2015). Big Data, Little Data, No Data: Scholarship in the Networked World. MIT Press.

Bruns, A. (2013). Faster than the speed of print: Reconciling ‘big data’ social media analysis and academic scholarship. First Monday, 18(10). Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/4879

Bughin, J., Chui, M., & Manyika, J. (2010). Clouds, big data, and smart assets: Ten tech-enabled business trends to watch. McKinsey Quarterly, 56(1), 75–86.

Chen, X. W., & Lin, X. (2014). Big Data Deep Learning: Challenges and Perspectives. IEEE Access, 2, 514–525. https://doi.org/10.1109/ACCESS.2014.2325029

Cohen, J., Dolan, B., Dunlap, M., Hellerstein, J. M., & Welton, C. (2009). MAD Skills: New Analysis Practices for Big Data. Proc. VLDB Endow., 2(2), 1481–1492. https://doi.org/10.14778/1687553.1687576

Daniel, B. (2015). Big Data and analytics in higher education: Opportunities and challenges. British Journal of Educational Technology, 46(5), 904–920. https://doi.org/10.1111/bjet.12230

Daries, J. P., Reich, J., Waldo, J., Young, E. M., Whittinghill, J., Ho, A. D., … Chuang, I. (2014). Privacy, Anonymity, and Big Data in the Social Sciences. Commun. ACM, 57(9), 56–63. https://doi.org/10.1145/2643132

De Mauro, A. D., Greco, M., & Grimaldi, M. (2016). A formal definition of Big Data based on its essential features. Library Review, 65(3), 122–135. https://doi.org/10.1108/LR-06-2015-0061

De Mauro, A., Greco, M., & Grimaldi, M. (2015). What is big data? A consensual definition and a review of key research topics. AIP Conference Proceedings, 1644(1), 97–104. https://doi.org/10.1063/1.4907823

Dumbill, E. (2012). Making Sense of Big Data. Big Data, 1(1), 1–2. https://doi.org/10.1089/big.2012.1503

Eaton, M. (2017). Seeing Library Data: A Prototype Data Visualization Application for Librarians. Publications and Research. Retrieved from http://academicworks.cuny.edu/kb_pubs/115

Emanuel, J. (2013). Usability testing in libraries: methods, limitations, and implications. OCLC Systems & Services: International Digital Library Perspectives, 29(4), 204–217. https://doi.org/10.1108/OCLC-02-2013-0009

Graham, M., & Shelton, T. (2013). Geography and the future of big data, big data and the future of geography. Dialogues in Human Geography, 3(3), 255–261. https://doi.org/10.1177/2043820613513121

Harper, L., & Oltmann, S. (2017, April 2). Big Data’s Impact on Privacy for Librarians and Information Professionals. Retrieved November 7, 2017, from https://www.asist.org/publications/bulletin/aprilmay-2017/big-datas-impact-on-privacy-for-librarians-and-information-professionals/

Hashem, I. A. T., Yaqoob, I., Anuar, N. B., Mokhtar, S., Gani, A., & Ullah Khan, S. (2015). The rise of “big data” on cloud computing: Review and open research issues. Information Systems, 47(Supplement C), 98–115. https://doi.org/10.1016/j.is.2014.07.006

Hwangbo, H. (2014, October 22). The future of collaboration: Large-scale visualization. Retrieved November 7, 2017, from http://usblogs.pwc.com/emerging-technology/the-future-of-collaboration-large-scale-visualization/

Laney, D. (2001, February 6). 3D Data Management: Controlling Data Volume, Velocity, and Variety.

Miltenoff, P., & Hauptman, R. (2005). Ethical dilemmas in libraries: an international perspective. The Electronic Library, 23(6), 664–670. https://doi.org/10.1108/02640470510635746

Philip Chen, C. L., & Zhang, C.-Y. (2014). Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Information Sciences, 275(Supplement C), 314–347. https://doi.org/10.1016/j.ins.2014.01.015

Power, D. J. (2014). Using ‘Big Data’ for analytics and decision support. Journal of Decision Systems, 23(2), 222–228. https://doi.org/10.1080/12460125.2014.888848

Provost, F., & Fawcett, T. (2013). Data Science and its Relationship to Big Data and Data-Driven Decision Making. Big Data, 1(1), 51–59. https://doi.org/10.1089/big.2013.1508

Reilly, S. (2013, December 12). What does Horizon 2020 mean for research libraries? Retrieved November 7, 2017, from http://libereurope.eu/blog/2013/12/12/what-does-horizon-2020-mean-for-research-libraries/

Reyes, J. (2015). The skinny on big data in education: Learning analytics simplified. TechTrends: Linking Research & Practice to Improve Learning, 59(2), 75–80. https://doi.org/10.1007/s11528-015-0842-1

Schroeder, R. (2014). Big Data and the brave new world of social media research. Big Data & Society, 1(2), 2053951714563194. https://doi.org/10.1177/2053951714563194

Sugimoto, C. R., Ding, Y., & Thelwall, M. (2012). Library and information science in the big data era: Funding, projects, and future [a panel proposal]. Proceedings of the American Society for Information Science and Technology, 49(1), 1–3. https://doi.org/10.1002/meet.14504901187

Tene, O., & Polonetsky, J. (2012). Big Data for All: Privacy and User Control in the Age of Analytics. Northwestern Journal of Technology and Intellectual Property, 11, [xxvii]-274.

van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society; Newcastle upon Tyne, 12(2), 197–208.

Waller, M. A., & Fawcett, S. E. (2013). Data Science, Predictive Analytics, and Big Data: A Revolution That Will Transform Supply Chain Design and Management. Journal of Business Logistics, 34(2), 77–84. https://doi.org/10.1111/jbl.12010

Weiss, A. (2018). Big data shocks: An introduction to big data for librarians and information professionals. Rowman & Littlefield Publishers. Retrieved from https://rowman.com/ISBN/9781538103227/Big-Data-Shocks-An-Introduction-to-Big-Data-for-Librarians-and-Information-Professionals

West, D. M. (2012). Big data for education: Data mining, data analytics, and web dashboards. Governance Studies at Brookings, 4, 1–0.

Willis, J. (2013). Ethics, Big Data, and Analytics: A Model for Application. Educause Review Online. Retrieved from https://docs.lib.purdue.edu/idcpubs/1

Wixom, B., Ariyachandra, T., Douglas, D. E., Goul, M., Gupta, B., Iyer, L. S., … Turetken, O. (2014). The current state of business intelligence in academia: The arrival of big data. CAIS, 34, 1.

Wu, X., Zhu, X., Wu, G. Q., & Ding, W. (2014). Data mining with big data. IEEE Transactions on Knowledge and Data Engineering, 26(1), 97–107. https://doi.org/10.1109/TKDE.2013.109

Wu, Z., Wu, J., Khabsa, M., Williams, K., Chen, H. H., Huang, W., … Giles, C. L. (2014). Towards building a scholarly big data platform: Challenges, lessons and opportunities. In IEEE/ACM Joint Conference on Digital Libraries (pp. 117–126). https://doi.org/10.1109/JCDL.2014.6970157

 

+++++++++++++++++
more on big data





big data and school absence

Data Can Help Schools Confront ‘Chronic Absence’

By Dian Schaffhauser 09/22/16

https://thejournal.com/articles/2016/09/22/data-can-help-schools-confront-chronic-absence.aspx

The data was shared in June by the Office for Civil Rights, which compiled it from a 2013-2014 survey completed by nearly every school district and school in the United States. What is new is a report from Attendance Works and the Everyone Graduates Center that encourages schools and districts to use their own data to pinpoint ways to take on the challenge of chronic absenteeism.

The first reason is research that shows that missing that much school is correlated with “lower academic performance and dropping out.” Second, it also helps in identifying students earlier in the semester in order to get a jump on possible interventions.

The report offers a six-step process for using data tied to chronic absence in order to reduce the problem.

The first step is investing in “consistent and accurate data.” That’s where the definition comes in — to make sure people have a “clear understanding” and so that it can be used “across states and districts” with school years that vary in length. The same step also requires “clarifying what counts as a day of attendance or absence.”

The second step is to use the data to understand what the need is and who needs support in getting to school. This phase could involve defining multiple tiers of chronic absenteeism (at-risk, moderate or severe), and then analyzing the data to see if there are differences by student sub-population — grade, ethnicity, special education, gender, free and reduced price lunch, neighborhood or other criteria that require special kinds of intervention.
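As a rough illustration of this second step, the sketch below tallies chronic absence tiers by student sub-population. The tier cutoffs, the file name, and the column names (days_absent, days_enrolled, grade, ethnicity, frl_status) are assumptions made for the sake of the example, not figures from the report.

```python
# Hypothetical sketch of "step two": tier chronic absence and compare rates
# across sub-populations. Cutoffs and column names are illustrative assumptions.
import pandas as pd

df = pd.read_csv("attendance.csv")  # one row per student (hypothetical file)
df["absence_rate"] = df["days_absent"] / df["days_enrolled"]

def tier(rate):
    # Example cutoffs only; a district would set its own definitions
    if rate >= 0.20:
        return "severe"
    if rate >= 0.10:
        return "moderate"
    if rate >= 0.05:
        return "at-risk"
    return "not chronic"

df["tier"] = df["absence_rate"].apply(tier)

# Share of students in each tier within each sub-population
for group in ["grade", "ethnicity", "frl_status"]:
    print(pd.crosstab(df[group], df["tier"], normalize="index"))
```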

Step three asks schools and districts to use the data to identify places getting good results. By comparing chronic absence rates across the district or against schools with similar demographics, the “positive outliers” may surface, showing people that the problem isn’t unstoppable but something that can be addressed for the better.

Steps five and six call on schools and districts to help people understand why the absences are happening and to develop ways to address the problem.

The report links to free data tools on the Attendance Works website, including a calculator for tallying chronic absences and guidance on how to protect student privacy when sharing data.

The full report is freely available on the Attendance Works website.

++++++++++++++
more on big data in education in this IMS blog
https://blog.stcloudstate.edu/ims?s=data

big data

big-data-in-education-report

Center for Digital Education (CDE)

Big Data can have a real-time impact on curriculum structure, instruction delivery and student learning, permitting change and improvement. It can also provide insight into important trends that affect present and future resource needs.

Big Data: Traditionally described as high-volume, high-velocity and high-variety information.
Learning or Data Analytics: The measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs.
Educational Data Mining: The techniques, tools and research designed for automatically extracting meaning from large repositories of data generated by or related to people’s learning activities in educational settings.
Predictive Analytics: Algorithms that help analysts predict behavior or events based on data.
Predictive Modeling: The process of creating, testing and validating a model to best predict the probability of an outcome.
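For readers unfamiliar with the last two terms, here is a minimal sketch of what predictive modeling can look like in practice: a model is created on part of the data, then tested and validated on held-out data to estimate the probability of an outcome. The data file, the features, and the use of scikit-learn are assumptions made for illustration, not tools named by the CDE report.

```python
# Minimal, hypothetical sketch of predictive modeling: create, test, and validate
# a model that predicts the probability of an outcome (here, course completion).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

data = pd.read_csv("student_records.csv")                       # hypothetical data
X = data[["absence_rate", "lms_logins_per_week", "prior_gpa"]]  # hypothetical features
y = data["completed_course"]                                    # hypothetical 0/1 outcome

# Hold out a quarter of the records for validation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Validate: predicted probabilities for unseen students, plus a simple ranking metric
probabilities = model.predict_proba(X_test)[:, 1]
print("AUC on held-out data:", roc_auc_score(y_test, probabilities))
```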

Data analytics, or the measurement, collection, analysis and reporting of data, is driving decision-making in many institutions. However, because of the unique nature of each district’s or college’s data needs, many are building their own solutions.

For example, in 2014 the nonprofit company inBloom, Inc., backed by $100 million from the Gates Foundation and the Carnegie Foundation for the Advancement of Teaching, closed its doors amid controversy regarding its plan to store, clean and aggregate a range of student information for states and districts and then make the data available to district-approved third parties to develop tools and dashboards so the data could be used by classroom educators.

Tips for Student Data Privacy

Know the Laws and Regulations
There are many regulations on the books intended to protect student privacy and safety: the Family Educational Rights and Privacy Act (FERPA), the Protection of Pupil Rights Amendment (PPRA), the Children’s Internet Protection Act (CIPA), the Children’s Online Privacy Protection Act (COPPA) and the Health Insurance Portability and Accountability Act (HIPAA)
— as well as state, district and community laws. Because technology changes so rapidly, it is unlikely laws and regulations will keep pace with new data protection needs. Establish a committee to ascertain your institution’s level of understanding of and compliance with these laws, along with additional safeguard measures.
Make a Checklist
Your institution’s privacy policies should cover security, user safety, communications, social media, access, identification rules, and intrusion detection and prevention.
Include Experts
To nail down compliance and stave off liability issues, consider tapping those who protect privacy for a living, such as your school attorney, IT professionals and security assessment vendors. Let them review your campus or district technologies as well as devices brought to campus by students, staff and instructors. Finally, a review of your privacy and security policies, terms of use and contract language is a good idea.
Communicate, Communicate, Communicate
Students, staff, faculty and parents all need to know their rights and responsibilities regarding data privacy. Convey your technology plans, policies and requirements and then assess and re-communicate those throughout each year.

“Anything-as-a-Service” or “X-as-a-Service” solutions can help K-12 and higher education institutions cope with big data by offering storage, analytics capabilities and more. These include:
• Infrastructure-as-a-Service (IaaS): Providers offer cloud-based storage, similar to a campus storage area network (SAN)

• Platform-as-a-Service (PaaS): Opens up application platforms — as opposed to the applications themselves — so others can build their own applications
using underlying operating systems, data models and databases; pre-built application components and interfaces

• Software-as-a-Service (SaaS): The hosting of applications in the cloud

• Big-Data-as-a-Service (BDaaS): Mix all the above together, upscale the amount of data involved by an enormous amount and you’ve got BDaaS

Suggestions:

Use accurate data correctly
Define goals and develop metrics
Eliminate silos, integrate data
Remember, intelligence is the goal
Maintain a robust, supportive enterprise infrastructure
Prioritize student privacy
Develop bullet-proof data governance guidelines
Create a culture of collaboration and sharing, not compliance

more on big data in this IMS blog:

https://blog.stcloudstate.edu/ims/?s=big+data&submit=Search

college recruitment with Facebook ads

https://www.edsurge.com/news/2022-04-25-facebook-makes-it-cheap-to-market-to-new-students-but-it-costs-colleges-dearly

complex campaigns that crossed the boundaries of social media, like Facebook, and our own channels, like university websites or institutional email.

Facebook celebrates its 18th birthday this year. Anxiety about its ethics has been around almost since its infancy, and privacy issues surfaced as early as 2007.

According to the 2022 Edelman Trust Barometer survey of more than 36,000 people in 28 countries, only 37 percent of respondents state that they trust social media as a source for general news and information.

The implications of this for colleges and universities are twofold. We’ve aligned ourselves with a partner that is in direct opposition to the values higher education claims to hold dear: truth, curiosity, democracy, critical thinking and debate.

The public perception of higher ed has been eroding over the last two decades. Which organizations we align with—both at the institutional and industry level—matters. Would you choose an advertising or branding agency with Facebook’s track record?

automated proctoring

https://www.edsurge.com/news/2021-11-19-automated-proctoring-swept-in-during-pandemic-it-s-likely-to-stick-around-despite-concerns

A law student sued an automated proctoring company, students have complained about their use in student newspaper editorials, and professors have compared them to Big Brother.

ProctorU, which has decided not to sell software that uses algorithms to detect cheating

A recent Educause study found that 63 percent of colleges and universities in the U.S. and Canada mention the use of remote proctoring on their websites.

One reason colleges are holding onto proctoring tools, Urdan adds, is that many colleges plan to expand their online course offerings even after campus activities return to normal. And the pandemic also saw rapid growth of another tech trend: students using websites to cheat on exams.

++++++++++++++++
More on proctoring in this blog
https://blog.stcloudstate.edu/ims?s=proctoring

in house made library counters

LITA listserv exchange on “Raspberry PI Counter for Library Users”

On 7/10/20, 10:05 AM, “lita-l-request@lists.ala.org on behalf of Hammer, Erich F” <lita-l-request@lists.ala.org on behalf of erich@albany.edu> wrote:

Jason,

I think that is a very interesting project.  If I understand how it works (comparing reference images to live images), it should still work if a “fuzzy” or translucent filter were placed on the lens as a privacy measure, correct? You could even make the fuzzy video publicly accessible to prove to folks that privacy is protected.

If that’s the case, IMHO, it really is a commercially viable idea and it would have a market far beyond libraries.  Open source code and hardware designs and sales of pre-packaged hardware and support.  Time for some crowdsource funding!  🙂

Erich

On Friday, July 10, 2020 at 10:14, Jason Griffey eloquently inscribed:
I ran a multi-year project to do counting (as well as attention measurement) called Measure the Future (http://measurethefuture.net). That project is in desperate need of updating… there has been some work done on it at the University of OK libraries, but we haven’t seen their code push yet. As the code stands on GitHub, it isn’t usable… the installation is broken based on some underlying dependencies. The Univ of OK code fixes the issue, but it hasn’t been pushed yet. But if you want to see the general code and way we approached it, that is all available.

Jason

On Jul 8, 2020, 1:37 PM -0500, Mitchell, James Ray <jmitchell20@una.edu>, wrote:

Hi Kun,

I don’t know if this will be useful to you or not, but Code4Lib Journal had an article a couple years ago that might be helpful. It’s called “Testing Three Types of Raspberry Pi People Counters.” The link to the article is https://journal.code4lib.org/articles/12947

Regards,
James

My note:
In 2018, following the university president’s call for ANY possible savings, the library administrator was sent a proposal requesting information regarding the license for the current library counters and proposing to save the license money by creating an in-house Arduino counter. The blueprints for such a counter were shared (as per another LITA listserv exchange). An SCSU Physics professor’s agreement to lead the project was secured, as well as the opportunity for SCSU Physics students to develop the project as part of their individual study plans. The proposal was never addressed by either the middle or the upper management.
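For context, here is a minimal sketch of the frame-differencing idea discussed in the thread: keep a reference image of the empty entrance, blur the live frames (the “fuzzy” privacy measure Erich mentions), and count a patron whenever enough pixels change. The camera index, the thresholds, and the one-blob-equals-one-patron assumption are illustrative only; this is not the Measure the Future design or the Code4Lib article’s implementation.

```python
# Rough, illustrative sketch of a Raspberry Pi people counter based on comparing
# a reference image to live frames. Thresholds and counting logic are assumptions.
import cv2

CHANGE_THRESHOLD = 25      # per-pixel difference treated as "changed"
MIN_CHANGED_PIXELS = 5000  # how much of the frame must change to count a person

camera = cv2.VideoCapture(0)        # Pi camera module or USB webcam
ok, reference = camera.read()       # snapshot of the empty entrance
if not ok:
    raise SystemExit("Camera not available")
reference = cv2.GaussianBlur(cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY), (21, 21), 0)

count = 0
person_in_frame = False

for _ in range(100000):             # bounded loop so the sketch terminates
    ok, frame = camera.read()
    if not ok:
        break
    # Blurring doubles as the privacy-preserving "fuzzy" filter from the thread
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    diff = cv2.absdiff(reference, gray)
    _, changed = cv2.threshold(diff, CHANGE_THRESHOLD, 255, cv2.THRESH_BINARY)
    changed_pixels = cv2.countNonZero(changed)

    if changed_pixels > MIN_CHANGED_PIXELS and not person_in_frame:
        count += 1                  # a large change appeared: count one entry
        person_in_frame = True
    elif changed_pixels <= MIN_CHANGED_PIXELS:
        person_in_frame = False     # scene returned to (near) empty

camera.release()
print("People counted:", count)
```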

+++++++++++++
more on raspberry pi in this IMS blog
https://blog.stcloudstate.edu/ims?s=raspberry

more on arduino in this IMS blog
https://blog.stcloudstate.edu/ims?s=arduino

Zoom succumbs to Chinese authorities

After the March 2020 reports about Zoom privacy issues, Zoom now acknowledges working with the Chinese government:

++++++++++++++

Is Zoom Safe for Chinese Students?

Elizabeth Redden June 12, 2020

https://www.insidehighered.com/news/2020/06/12/scholars-raise-concerns-about-using-zoom-teach-about-china

Unlike many other major tech platforms based in the U.S., Zoom, which is headquartered in California, has not been blocked by the Chinese government. Zoom said in a blog post that it is “developing technology over the next several days that will enable us to remove or block at the participant level based on geography,” which will allow the company “to comply with requests from local authorities when they determine activity on our platform is illegal within their borders; however, we will also be able to protect these conversations for participants outside of those borders where the activity is allowed.”

Zoom’s interference with the Tiananmen gatherings and its suspension of user accounts raised alarm among many in higher education, which increasingly depends on Zoom to operate courses remotely — including for students located within China’s borders.

Multiple scholars took to Twitter to express their worries

PEN America, a group that advocates for free expression, condemned Zoom for shuttering the activist’s account.

This is not the first time Zoom’s links to China have come under scrutiny. In April, the company admitted that some of its user data were “mistakenly” routed through China; in response, the company announced that users of paid Zoom accounts could opt out of having their data routed through data centers in China.

An April 3 report by scholars at the University of Toronto’s Munk School of Global Affairs & Public Policy said Zoom’s research and development operations in China could make the company susceptible “to pressure from Chinese authorities.”

Zoom, whose Chinese-born CEO is a U.S. citizen, said in its latest annual report to the U.S. Securities and Exchange Commission that it had more than 700 employees at its research and development centers in China as of Jan. 31. The SEC filing notes that Zoom has a “high concentration of research and development personnel in China, which could expose us to market scrutiny regarding the integrity of our solution or data security features.”

+++++++++++++

Zoom Just Totally Caved In to China on Censorship from r/technology


++++++++++++++
more about Zoom in this IMS blog
https://blog.stcloudstate.edu/ims?s=zoom

Algorithmic Test Proctoring

Our Bodies Encoded: Algorithmic Test Proctoring in Higher Education

SHEA SWAUGER ED-TECH

https://hybridpedagogy.org/our-bodies-encoded-algorithmic-test-proctoring-in-higher-education/

While in-person test proctoring has been used to combat test-based cheating, this can be difficult to translate to online courses. Ed-tech companies have sought to address this concern by offering to watch students take online tests, in real time, through their webcams.

Some of the more prominent companies offering these services include Proctorio, Respondus, ProctorU, HonorLock, Kryterion Global Testing Solutions, and Examity.

Algorithmic test proctoring’s settings have discriminatory consequences across multiple identities and serious privacy implications. 

While racist technology calibrated for white skin isn’t new (everything from photography to soap dispensers do this), we see it deployed through face detection and facial recognition used by algorithmic proctoring systems.

While some test proctoring companies develop their own facial recognition software, most purchase software developed by other companies, but these technologies generally function similarly and have shown a consistent inability to identify people with darker skin or even tell the difference between Chinese people. Facial recognition literally encodes the invisibility of Black people and the racist stereotype that all Asian people look the same.

As Os Keyes has demonstrated, facial recognition has a terrible history with gender. This means that a software asking students to verify their identity is compromising for students who identify as trans, non-binary, or express their gender in ways counter to cis/heteronormativity.

These features and settings create a system of asymmetric surveillance and lack of accountability, things which have always created a risk for abuse and sexual harassment. Technologies like these have a long history of being abused, largely by heterosexual men at the expense of women’s bodies, privacy, and dignity.

Their promotional messaging functions similarly to dog whistle politics which is commonly used in anti-immigration rhetoric. It’s also not a coincidence that these technologies are being used to exclude people not wanted by an institution; biometrics and facial recognition have been connected to anti-immigration policies, supported by both Republican and Democratic administrations, going back to the 1990’s.

Borrowing from Henry A. Giroux, Kevin Seeber describes the pedagogy of punishment and some of its consequences in regards to higher education’s approach to plagiarism in his book chapter “The Failed Pedagogy of Punishment: Moving Discussions of Plagiarism beyond Detection and Discipline.”

My note: I have been repeating this for years.
Sean Michael Morris and Jesse Stommel’s ongoing critique of Turnitin, a plagiarism detection software, outlines exactly how this logic operates in ed-tech and higher education: 1) don’t trust students, 2) surveil them, 3) ignore the complexity of writing and citation, and 4) monetize the data.

Technological Solutionism

Cheating is not a technological problem, but a social and pedagogical problem.
Our habit of believing that technology will solve pedagogical problems is endemic to narratives produced by the ed-tech community and, as Audrey Watters writes, is tied to the Silicon Valley culture that often funds it. Scholars have been dismantling the narrative of technological solutionism and neutrality for some time now. In her book “Algorithms of Oppression,” Safiya Umoja Noble demonstrates how the algorithms that are responsible for Google Search amplify and “reinforce oppressive social relationships and enact new modes of racial profiling.”

Anna Lauren Hoffmann coined the term “data violence” to describe the impact harmful technological systems have on people and how these systems retain the appearance of objectivity despite the disproportionate harm they inflict on marginalized communities.

This system of measuring bodies and behaviors, associating certain bodies and behaviors with desirability and others with inferiority, engages in what Lennard J. Davis calls the Eugenic Gaze.

Higher education is deeply complicit in the eugenics movement. Nazism borrowed many of its ideas about racial purity from the American school of eugenics, and universities were instrumental in supporting eugenics research by publishing copious literature on it, establishing endowed professorships, institutes, and scholarly societies that spearheaded eugenic research and propaganda.

+++++++++++++++++
more on privacy in this IMS blog
https://blog.stcloudstate.edu/ims?s=privacy

10 years in ed tech

https://www.edsurge.com/news/2019-12-31-when-education-giants-stumbled-and-data-ruled

The tools that have delivered are specific, targeted solutions that are easy to use and that delight teachers and students. Simple solutions, like Read 180, which helps accelerate learning for struggling students, still deliver 20 years later, now under Houghton Mifflin Harcourt instead of Scholastic. Accelerated Reader, a product that started more than 30 years ago, still motivates kids to read.

Companies that aim to provide student data in a usable fashion, like Schoology, still provide value.

The promise of data in education is still proving itself. It has taken a while, but we’re getting to a point where data is more actionable. Renaissance just acquired Schoolzilla, which was launched in 2011, for this reason.

When it comes to devices, many kids today have access to iPads or Chromebooks. Although one-to-one computing hasn’t been as transformational as some predicted in 2010, we’ve certainly seen a huge shift

Most of these [textbook providers] companies tried to re-platform every unique product into one monolithic model, but the promise didn’t pan out—the products proved clunky and hard to use

Predictions that educators would want more assessment data to drive instruction have proven true. https://www.renaissance.com/

The prediction that digital reading would be simple and easy to implement has also proven true.

Virtual reality hasn’t panned out yet.

The rise of gaming in education was another prediction that has largely faded.

started to solve the challenge of data interoperability and portability.

Alongside that, privacy and data responsibility are still a problem

The role of the teacher, however, is still critical. Rather than take over responsibility for educating students, technology’s role should be—and increasingly is—to put multiple options into educators’ hands to easily solve different types of challenges for individual students.

 

+++++++++++++
more on technology for the last decade
https://blog.stcloudstate.edu/ims/2020/01/02/100-tech-debacles-of-the-decade/
