Searching for "big data"

Key Issues in Teaching and Learning Survey

The EDUCAUSE Learning Initiative has just launched its 2018 Key Issues in Teaching and Learning Survey, so vote today: http://www.tinyurl.com/ki2018.

Each year, the ELI surveys the teaching and learning community in order to discover the key issues and themes in teaching and learning. These top issues provide the thematic foundation for all of our conversations, courses, and publications for the coming year. Longitudinally, they also provide a way to track the evolving discourse in the teaching and learning space. More information about this annual survey can be found at https://www.educause.edu/eli/initiatives/key-issues-in-teaching-and-learning.

ACADEMIC TRANSFORMATION (Holistic models supporting student success, leadership competencies for academic transformation, partnerships and collaborations across campus, IT transformation, academic transformation that is broad, strategic, and institutional in scope)

ACCESSIBILITY AND UNIVERSAL DESIGN FOR LEARNING (Supporting and educating the academic community in effective practice; intersections with instructional delivery modes; compliance issues)

ADAPTIVE TEACHING AND LEARNING (Digital courseware; adaptive technology; implications for course design and the instructor’s role; adaptive approaches that are not technology-based; integration with LMS; use of data to improve learner outcomes)

COMPETENCY-BASED EDUCATION AND NEW METHODS FOR THE ASSESSMENT OF STUDENT LEARNING (Developing collaborative cultures of assessment that bring together faculty, instructional designers, accreditation coordinators, and technical support personnel; credit for real-world experience)

DIGITAL AND INFORMATION LITERACIES (Student and faculty literacies; research skills; data discovery, management, and analysis skills; information visualization skills; partnerships for literacy programs; evaluation of student digital competencies; information evaluation)

EVALUATING TECHNOLOGY-BASED INSTRUCTIONAL INNOVATIONS (Tools and methods to gather data; data analysis techniques; qualitative vs. quantitative data; evaluation project design; using findings to change curricular practice; scholarship of teaching and learning; articulating results to stakeholders; just-in-time evaluation of innovations). Here is my bibliographical overview on Big Data (scroll down to “Research literature”): https://blog.stcloudstate.edu/ims/2017/11/07/irdl-proposal/

EVOLUTION OF THE TEACHING AND LEARNING SUPPORT PROFESSION (Professional skills for T&L support; increasing emphasis on instructional design; delineating the skills, knowledge, business acumen, and political savvy for success; role of inter-institutional communities of practices and consortia; career-oriented professional development planning)

FACULTY DEVELOPMENT (Incentivizing faculty innovation; new roles for faculty and those who support them; evidence of impact on student learning/engagement of faculty development programs; faculty development intersections with learning analytics; engagement with student success)

GAMIFICATION OF LEARNING (Gamification designs for course activities; adaptive approaches to gamification; alternate reality games; simulations; technological implementation options for faculty)

INSTRUCTIONAL DESIGN (Skills and competencies for designers; integration of technology into the profession; role of data in design; evolution of the design profession (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims/2017/10/04/instructional-design-3/); effective leadership and collaboration with faculty)

INTEGRATED PLANNING AND ADVISING FOR STUDENT SUCCESS (Change management and campus leadership; collaboration across units; integration of technology systems and data; dashboard design; data visualization (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims?s=data+visualization); counseling and coaching advising transformation; student success analytics)

LEARNING ANALYTICS (Leveraging open data standards; privacy and ethics; both faculty and student facing reports; implementing; learning analytics to transform other services; course design implications)

LEARNING SPACE DESIGNS (Makerspaces; funding; faculty development; learning designs across disciplines; supporting integrated campus planning; ROI; accessibility/UDL; rating of classroom designs)

MICRO-CREDENTIALING AND DIGITAL BADGING (Design of badging hierarchies; stackable credentials; certificates; role of open standards; ways to publish digital badges; approaches to meta-data; implications for the transcript; personalized learning transcripts and blockchain technology (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims?s=blockchain))

MOBILE LEARNING (Curricular use of mobile devices (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims/2015/09/25/mc218-remodel/); innovative curricular apps; approaches to use in the classroom; technology integration into learning spaces; BYOD issues and opportunities)

MULTI-DIMENSIONAL TECHNOLOGIES (Virtual, augmented, mixed, and immersive reality; video walls; integration with learning spaces; scalability, affordability, and accessibility; use of mobile devices; multi-dimensional printing and artifact creation)

NEXT-GENERATION DIGITAL LEARNING ENVIRONMENTS AND LMS SERVICES (Open standards; learning environments architectures (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims/2017/03/28/digital-learning/); social learning environments; customization and personalization; OER integration; intersections with learning modalities such as adaptive, online, etc.; LMS evaluation, integration and support)

ONLINE AND BLENDED TEACHING AND LEARNING (Flipped course models; leveraging MOOCs in online learning; course development models; intersections with analytics; humanization of online courses; student engagement)

OPEN EDUCATION (Resources, textbooks, content; quality and editorial issues; faculty development; intersections with student success/access; analytics; licensing; affordability; business models; accessibility and sustainability)

PRIVACY AND SECURITY (Formulation of policies on privacy and data protection; increased sharing of data via open standards for internal and external purposes; increased use of cloud-based and third party options; education of faculty, students, and administrators)

WORKING WITH EMERGING LEARNING TECHNOLOGY (Scalability and diffusion; effective piloting practices; investments; faculty development; funding; evaluation methods and rubrics; interoperability; data-driven decision-making)

+++++++++++
learning and teaching in this IMS blog
https://blog.stcloudstate.edu/ims?s=teaching+and+learning

Reproducibility Librarian

Reproducibility Librarian? Yes, That Should Be Your Next Job

https://www.jove.com/blog/2017/10/27/reproducibility-librarian-yes-that-should-be-your-next-job/
Vicky Steeves (@VickySteeves) is the first Research Data Management and Reproducibility Librarian.
Reproducibility is made much more challenging by computers and by the dominance of the closed-source operating systems and analysis software researchers use. Ben Marwick wrote a great piece called ‘How computers broke science – and what we can do to fix it’ which details a bit of the problem. Basically, computational environments affect the outcome of analyses (Gronenschild et al. (2012) showed that the same data and analyses gave different results between two versions of macOS), and they are exceptionally hard to reproduce, especially when the license terms don’t allow it. Additionally, programs encode data incorrectly and studies make erroneous conclusions as a result; e.g., Microsoft Excel encodes genes as dates, which affects 1/5 of published data in leading genome journals.
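The Excel failure mode is easy to demonstrate. The sketch below is illustrative only — it mimics spreadsheet autodetection in plain Python and is not Excel’s actual logic: a naive importer that coerces anything date-like into a date silently corrupts gene symbols such as SEPT2 and MARCH1.

```python
# Illustrative sketch of spreadsheet autodetection, not Excel's actual logic.
import re
from datetime import date

MONTHS = {"JAN": 1, "FEB": 2, "MAR": 3, "MARCH": 3, "APR": 4, "MAY": 5,
          "JUN": 6, "JUL": 7, "AUG": 8, "SEP": 9, "SEPT": 9,
          "OCT": 10, "NOV": 11, "DEC": 12}

def naive_import(cell):
    """Coerce anything date-like into a date, as a spreadsheet might."""
    m = re.fullmatch(r"([A-Z]+)(\d{1,2})", cell.upper())
    if m and m.group(1) in MONTHS:
        return date(2017, MONTHS[m.group(1)], int(m.group(2)))
    return cell  # leave everything else untouched

genes = ["SEPT2", "MARCH1", "TP53"]
converted = [naive_import(g) for g in genes]
# SEPT2 and MARCH1 are silently turned into dates; only TP53 survives.
```

Opening such a file, saving it, and re-exporting is enough to lose the original gene symbols for good, which is why the audits of genome journals found the error so widespread.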
Technology to capture computational environments, workflow, provenance, data, and code is hugely impactful for reproducibility. It’s been the focus of my work, in supporting an open source tool called ReproZip, which packages all computational dependencies, data, and applications in a single distributable package that others can reproduce across different systems. There are other tools that fix parts of this problem: Kepler and VisTrails for workflow/provenance, Packrat for saving the specific R packages in use at the time a script is run so that updates to dependencies won’t break it, Pex for generating executable Python environments, and o2r for executable papers (including data, text, and code in one).
There is also a ReproZip plugin for Jupyter notebooks, and a user interface has been added to make the tool friendlier to folks not comfortable on the command line.

I would also recommend going to conferences:

++++++++++++++++++++++++
more on big data in an academic library in this IMS blog
academic library collection data visualization

https://blog.stcloudstate.edu/ims/2017/10/26/software-carpentry-workshop/

https://blog.stcloudstate.edu/ims?s=data+library

more on library positions in this IMS blog:
https://blog.stcloudstate.edu/ims?s=big+data+library
https://blog.stcloudstate.edu/ims/2016/06/14/technology-requirements-samples/

on university library future:
https://blog.stcloudstate.edu/ims/2014/12/10/unviersity-library-future/

librarian versus information specialist

 

LITA guides

https://rowman.com/Action/SERIES/RL/LITA

Topics for consideration include:

  • Tools for big data
  • Developing in-house technology expertise
  • Budgeting for technology
  • Writing a technology plan
  • K-12 technology
  • Applications of agile development for libraries
  • Grant writing for library technology
  • Security for library systems

Questions or comments can be sent to Marta Deyrup, LITA Acquisitions Editor.

Proposals can be submitted to the Acquisitions editor using this link.

code4lib 2018

Code4Lib, February 2018

http://2018.code4lib.org/

2018 Preconference Voting

10. The Virtualized Library: A Librarian’s Introduction to Docker and Virtual Machines
This session will introduce two major types of virtualization, virtual machines using tools like VirtualBox and Vagrant, and containers using Docker. The relative strengths and drawbacks of the two approaches will be discussed along with plenty of hands-on time. Though geared towards integrating these tools into a development workflow, the workshop should be useful for anyone interested in creating stable and reproducible computing environments, and examples will focus on library-specific tools like Archivematica and EZPaarse. With virtualization taking a lot of the pain out of installing and distributing software, alleviating many cross-platform issues, and becoming increasingly common in library and industry practices, now is a great time to get your feet wet.

(One three-hour session)

11. Digital Empathy: Creating Safe Spaces Online
User research is often focused on measures of the usability of online spaces. We look at search traffic, run card sorting and usability testing activities, and track how users navigate our spaces. Those results inform design decisions through the lens of information architecture. This is important, but doesn’t encompass everything a user needs in a space.

This workshop will focus on the other component of user experience design and user research: how to create spaces where users feel safe. Users bring their anxieties and stressors with them to our online spaces, but informed design choices can help to ameliorate that stress. This will ultimately lead to a more positive interaction between your institution and your users.

The presenters will discuss the theory behind empathetic design, delve deeply into using ethnographic research methods – including an opportunity for attendees to practice those ethnographic skills with student participants – and finish with the practical application of these results to ongoing and future projects.

(One three-hour session)

14. ARIA Basics: Making Your Web Content Sing Accessibility

https://dequeuniversity.com/assets/html/jquery-summit/html5/slides/landmarks.html
Are you a web developer, or do you create web content? Do you add dynamic elements to your pages? If so, you should be concerned with making those dynamic elements accessible and usable to as many people as possible. One of the most powerful tools currently available for making web pages accessible is ARIA, the Accessible Rich Internet Applications specification. This workshop will teach you the basics for leveraging the full power of ARIA to make great accessible web pages. Through several hands-on exercises, participants will come to understand the purpose and power of ARIA and how to apply it for a variety of different dynamic web elements. Topics will include semantic HTML, ARIA landmarks and roles, expanding/collapsing content, and modal dialogs. Participants will also be taught some basic use of the screen reader NVDA for use in accessibility testing. Finally, the lessons will also emphasize learning how to keep on learning as HTML, JavaScript, and ARIA continue to evolve and expand.

Participants will need a basic background in HTML, CSS, and some JavaScript.

(One three-hour session)

18. Learning and Teaching Tech
Tech workshops pose two unique problems: finding skilled instructors for that content, and teaching that content well. Library-hosted workshops are often a primary educational resource for solo learners, and many librarians utilize these workshops as a primary outreach platform. Tackling these two issues together often makes the most sense for our limited resources. Whether for a programming language or a software tool, learning tech in order to teach tech can be one of the best motivations for picking up that skill or tool, but it is equally important to learn how to teach and present tech well.

This hands-on workshop will guide participants through developing their own learning plan, reviewing essential pedagogy for teaching tech, and crafting a workshop of their choice. Each participant will leave with an actionable learning schedule, a prioritized list of resources to investigate, and an outline of a workshop they would like to teach.

(Two three-hour sessions)

23. Introduction to Omeka S
Omeka S represents a complete rewrite of Omeka Classic (aka the Omeka 2.x series), adhering to our fundamental principles of encouraging use of metadata standards, easy web publishing, and sharing cultural history. New objectives in Omeka S include multisite functionality and increased interaction with other systems. This workshop will compare and contrast Omeka S with Omeka Classic to highlight our emphasis on 1) modern metadata standards, 2) interoperability with other systems including Linked Open Data, 3) use of modern web standards, and 4) web publishing to meet the goals of medium- to large-sized institutions.

In this workshop we will walk through Omeka S Item creation, with emphasis on LoD principles. We will also look at the features of Omeka S that ease metadata input and facilitate project-defined usage and workflows. In accordance with our commitment to interoperability, we will describe how the API for Omeka S can be deployed for data exchange and sharing between many systems. We will also describe how Omeka S promotes multiple site creation from one installation, in the interest of easy publishing with many objects in many contexts, and simplifying the work of IT departments.

(One three-hour session)

24. Getting started with static website generators
Have you been curious about static website generators? Have you been wondering who Jekyll and Hugo are? Then this workshop is for you.

My note: https://opensource.com/article/17/5/hugo-vs-jekyll

But this article isn’t about setting up a domain name and hosting for your website. It’s for the step after that, the actual making of that site. The typical choice for a lot of people would be to use something like WordPress. It’s a one-click install on most hosting providers, and there’s a gigantic market of plugins and themes available to choose from, depending on the type of site you’re trying to build. But not only is WordPress a bit overkill for most websites, it also gives you a dynamically generated site with a lot of moving parts. If you don’t keep all of those pieces up to date, they can pose a significant security risk and your site could get hijacked.

The alternative would be to have a static website, with nothing dynamically generated on the server side. Just good old HTML and CSS (and perhaps a bit of JavaScript for flair). The downside to that option has been that you’ve been relegated to coding the whole thing by hand yourself. It’s doable, but you just want a place to share your work. You shouldn’t have to know all the idiosyncrasies of low-level web design (and the monumental headache of cross-browser compatibility) to do that.

Static website generators are tools used to build a website made up only of HTML, CSS, and JavaScript. Static websites, unlike dynamic sites built with tools like Drupal or WordPress, do not use databases or server-side scripting languages. Static websites have a number of benefits over dynamic sites, including reduced security vulnerabilities, simpler long-term maintenance, and easier preservation.
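The core idea of a static site generator fits in a few lines. The sketch below is a hypothetical minimal generator, not Jekyll or Hugo: it merges each page’s content into a shared HTML template and writes plain files that any web server — or GitHub Pages — can host.

```python
# Minimal static site generator sketch: one shared template, per-page content.
from pathlib import Path
from string import Template

TEMPLATE = Template(
    "<!DOCTYPE html><html><head><title>$title</title></head>"
    "<body><h1>$title</h1>$body</body></html>"
)

def build_site(pages, out_dir):
    """Render each (title, body) pair into a standalone HTML file."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    written = []
    for slug, (title, body) in pages.items():
        html = TEMPLATE.substitute(title=title, body=body)
        (out / f"{slug}.html").write_text(html)
        written.append(f"{slug}.html")
    return written

files = build_site(
    {"index": ("Home", "<p>Welcome.</p>"),
     "about": ("About", "<p>A static page.</p>")},
    "site",
)
```

Real generators add Markdown conversion, themes, and incremental rebuilds on top, but the output is the same: plain files with no database and no server-side code to patch.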

In this hands-on workshop, we’ll start by exploring static website generators, their components, some of the different options available, and their benefits and disadvantages. Then, we’ll work on making our own sites, and for those that would like to, get them online with GitHub pages. Familiarity with HTML, git, and command line basics will be helpful but are not required.

(One three-hour session)

26. Using Digital Media for Research and Instruction
To use digital media effectively in both research and instruction, you need to go beyond just the playback of media files. You need to be able to stream the media, divide that stream into different segments, provide descriptive analysis of each segment, order, re-order and compare different segments from the same or different streams and create web sites that can show the result of your analysis. In this workshop, we will use Omeka and several plugins for working with digital media, to show the potential of video streaming, segmentation and descriptive analysis for research and instruction.

(One three-hour session)

28. Spark in the Dark 101 https://zeppelin.apache.org/
This is an introductory session on Apache Spark, a framework for large-scale data processing (https://spark.apache.org/). We will introduce high-level concepts around Spark, including how Spark execution works and its relationship to other technologies for working with Big Data. Following this introduction to the theory and background, we will walk workshop participants through hands-on usage of spark-shell, Zeppelin notebooks, and Spark SQL for processing library data. The workshop will wrap up with use cases and demos for leveraging Spark within cultural heritage institutions and information organizations, connecting the building blocks learned to current projects in the real world.
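The heart of “how Spark execution works” is lazy evaluation: transformations such as map and filter only record a pipeline, and nothing runs until an action is called. The toy class below sketches that idea in plain Python; it is not the PySpark API, although the real RDD methods carry the same names.

```python
# Toy RDD: map/filter only record the pipeline; collect() actually runs it.
class ToyRDD:
    def __init__(self, data, ops=()):
        self.data, self.ops = data, ops

    def map(self, fn):      # transformation: nothing computed yet
        return ToyRDD(self.data, self.ops + (("map", fn),))

    def filter(self, fn):   # transformation: nothing computed yet
        return ToyRDD(self.data, self.ops + (("filter", fn),))

    def collect(self):      # action: the whole pipeline runs now
        items = self.data
        for kind, fn in self.ops:
            if kind == "map":
                items = [fn(x) for x in items]
            else:
                items = [x for x in items if fn(x)]
        return items

records = ToyRDD(["MARC 245 title", "MARC 100 author", "EAD finding aid"])
marc = records.filter(lambda r: r.startswith("MARC")).map(str.upper)
result = marc.collect()  # nothing ran until this call
```

In real Spark the same deferral is what lets the engine plan work across a cluster before touching any data.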

(One three-hour session)

29. Introduction to Spotlight https://github.com/projectblacklight/spotlight
http://www.spotlighttechnology.com/4-OpenSource.htm
Spotlight is an open source application that extends the digital library ecosystem by providing a means for institutions to reuse digital content in easy-to-produce, attractive, and scholarly-oriented websites. Librarians, curators, and other content experts can build Spotlight exhibits to showcase digital collections using a self-service workflow for selection, arrangement, curation, and presentation.

This workshop will introduce the main features of Spotlight and present examples of Spotlight-built exhibits from the community of adopters. We’ll also describe the technical requirements for adopting Spotlight and highlight the potential to customize and extend Spotlight’s capabilities for their own needs while contributing to its growth as an open source project.

(One three-hour session)

31. Getting Started Visualizing your IoT Data in Tableau https://www.tableau.com/
The Internet of Things is a rising trend in library research. IoT sensors can be used for space assessment, service design, and environmental monitoring. IoT tools create lots of data that can be overwhelming and hard to interpret. Tableau Public (https://public.tableau.com/en-us/s/) is a data visualization tool that allows you to explore this information quickly and intuitively to find new insights.

This full-day workshop will teach you the basics of building your own IoT sensor using a Raspberry Pi (https://www.raspberrypi.org/) in order to gather, manipulate, and visualize your data.
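Before any Tableau visualization, the Pi needs to log readings in a format Tableau can open — typically CSV. The sketch below fakes sensor values with random numbers; the field names and five-minute interval are illustrative assumptions, not the workshop’s materials.

```python
# Log fake sensor readings to a CSV file that Tableau Public can import.
import csv
import random
from datetime import datetime, timedelta

def log_readings(path, n):
    """Write n timestamped temperature/humidity rows at 5-minute intervals."""
    start = datetime(2018, 2, 1, 9, 0)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "temperature_c", "humidity_pct"])
        for i in range(n):
            ts = start + timedelta(minutes=5 * i)
            writer.writerow([ts.isoformat(),
                             round(random.uniform(19, 24), 1),
                             round(random.uniform(30, 50), 1)])

log_readings("readings.csv", 12)  # header plus 12 data rows
```

On an actual Pi, the two `random.uniform` calls would be replaced by reads from a sensor library; the CSV structure stays the same.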

All are welcome, but some familiarity with Python is recommended.

(Two three-hour sessions)

32. Enabling Social Media Research and Archiving
Social media data represents a tremendous opportunity for memory institutions of all kinds, be they large academic research libraries or small community archives. Researchers from a broad swath of disciplines have a great deal of interest in working with social media content, but they often lack access to datasets or the technical skills needed to create them. Further, it is clear that social media is already a crucial part of the historical record in areas ranging from events in your local community to national elections. But attempts to build archives of social media data are largely nascent. This workshop will be both an introduction to collecting data from the APIs of social media platforms, as well as a discussion of the roles of libraries and archives in that collecting.

Assuming no prior experience, the workshop will begin with an explanation of how APIs operate. We will then focus specifically on the Twitter API, as Twitter is of significant interest to researchers and hosts an important segment of discourse. Through a combination of hands-on and demos, we will gain experience with a number of tools that support collecting social media data (e.g., Twarc, Social Feed Manager, DocNow, Twurl, and TAGS), as well as tools that enable sharing social media datasets (e.g., Hydrator, TweetSets, and the Tweet ID Catalog).

The workshop will then turn to a discussion of how to build a successful program enabling social media collecting at your institution. This might cover a variety of topics including outreach to campus researchers, collection development strategies, the relationship between social media archiving and web archiving, and how to get involved with the social media archiving community. This discussion will be framed by a focus on ethical considerations of social media data, including privacy and responsible data sharing.

Time permitting, we will provide a sampling of some approaches to social media data analysis, including Twarc Utils and Jupyter Notebooks.
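Under the hood, most of these collection tools share one pattern: cursor-based paging through an API until no further page is returned. The sketch below uses a stubbed fetch function — the real Twitter API requires authentication and has its own parameter names, so `fetch_page` here is a hypothetical stand-in.

```python
# Generic cursor-paging collector; fetch_page stands in for a real API call.
def collect_all(fetch_page):
    """Request pages until the API stops returning a next cursor."""
    tweets, cursor = [], None
    while True:
        page = fetch_page(cursor)
        tweets.extend(page["statuses"])
        cursor = page.get("next_cursor")
        if cursor is None:
            return tweets

# A stubbed three-page "API" for demonstration:
PAGES = {None: {"statuses": ["t1", "t2"], "next_cursor": "a"},
         "a": {"statuses": ["t3"], "next_cursor": "b"},
         "b": {"statuses": ["t4"]}}

collected = collect_all(lambda cursor: PAGES[cursor])
```

Tools like Twarc wrap exactly this loop, adding authentication, rate-limit handling, and archiving of the full JSON rather than just the text.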

(One three-hour session)

scsu library position proposal

Please email completed forms to librarydeansoffice@stcloudstate.edu no later than noon on Thursday, October 5.

According to the email below, library faculty are asked to provide their feedback regarding the qualifications for a possible faculty line at the library.

  1. In the fall of 2013, during a faculty meeting attended by the then library dean and during a discussion of an article provided by the dean, it was established that leading academic libraries in this country are seeking to break the mold of the “library degree” and to find fresh ideas for the reinvention of the academic library by hiring faculty with more diverse (degree-wise) backgrounds.
  2. Is this still the case at the SCSU library? The “democratic” search for the answer to this question does not yield productive results, considering that the majority of the library faculty are “reference” librarians who “democratically” overturn the votes of those who want this library brought up to 21st-century standards, and who instead seek more “reference” bodies for duties that even those same reference librarians have recognized as obsolete.
    It seems that the majority of the SCSU library faculty are “purists” who resist professionals with broader backgrounds (other than library, or even “reference,” skills).
    In addition, most of the current SCSU librarians are opposed to a second degree as a means of acquiring further qualifications, treating it as merely another diploma. There is a certain attitude of stagnation / intellectual incest, where new ideas are not generated and old ideas are dressed in “new attire” to look innovative and/or 21st century.
    Last but not least, a consistent complaint about workforce shortages (the attrition politics of the university’s reorganization contribute to the power of such complaints) fuels the requests for reference librarians; instead of looking for new ideas, new approaches, and new work responsibilities, the library reorganization conversation deteriorates into squabbles for positions among different departments.
    Most importantly, the narrow-sightedness of being stuck in traditional work descriptions prevents most of the librarians from seeing potential allies and disruptors. E.g., the insistence on the supremacy of “information literacy” leads SCSU librarians to the erroneous conclusion of the exceptionality of information literacy and the disregard of multi[meta] literacies, thus depriving the entire campus of necessary 21st-century skills such as visual literacy, media literacy, technology literacy, etc.
    Simultaneously, as mentioned above about potential allies and disruptors, the SCSU librarians insist on their “domain,” and if they are not capable of leading meta-literacies instruction, they will also not allow and/or support others to do so.
    Considering the observations above, the following qualifications must be considered:
  3. According to the information in this blog post:
    https://blog.stcloudstate.edu/ims/2016/06/14/technology-requirements-samples/
    for the past year and a half, academic libraries have been hiring specialists with the following qualifications and for the following positions (bolded and/or in red). Here are some highlights:
    Positions
    digital humanities
    Librarian and Instructional Technology Liaison

library Specialist: Data Visualization & Collections Analytics

Qualifications

Advanced degree required, preferably in education, educational technology, instructional design, or MLS with an emphasis in instruction and assessment.

Programming skills – Demonstrated experience with one or more metadata and scripting languages (e.g., Dublin Core, XSLT, Java, JavaScript, Python, or PHP)
Data visualization skills
multi [ meta] literacy skills

Data curation, helping students working with data
Experience with website creation and design in a CMS environment and accessibility and compliance issues
Demonstrated a high degree of facility with technologies and systems germane to the 21st century library, and be well versed in the issues surrounding scholarly communications and compliance issues (e.g. author identifiers, data sharing software, repositories, among others)

Bilingual

Provides and develops awareness and knowledge related to digital scholarship and research lifecycle for librarians and staff.

Experience developing for, and supporting, common open-source library applications such as Omeka, ArchivesSpace, and DSpace

 

Responsibilities
Establishing best practices for digital humanities labs, networks, and services

Assessing, evaluating, and peer reviewing DH projects and librarians
Actively promote TIGER or GRIC related activities through social networks and other platforms as needed.
Coordinates the transmission of online workshops through Google Hangouts
Script metadata transformations and digital object processing using BASH, Python, and XSLT

Liaison consults with faculty and students in a wide range of disciplines on best practices for teaching and using data/statistical software tools such as R, SPSS, Stata, and MATLAB.

 

In response to the form attached to the Friday, September 29, email regarding St. Cloud State University Library Position Request Form:

 

  1. Title
    Digital Initiatives Librarian
  2. Responsibilities:
    TBD, but generally:
    – works with faculty across campus on promoting digital projects and other 21st-century projects. Works with the English Department faculty on positioning the SCSU library as an equal participant in the digital humanities initiatives on campus
  • Works with the Visualization lab to establish the library as the leading unit on campus in interpretation of big data
  • Works with academic technology services on promoting library faculty as the leading force in the pedagogical use of academic technologies.
  1. Quantitative data justification
    This is a moot requirement for an innovative and useful library position. It can apply to a traditional request, such as another “reference” librarian, but there cannot be a quantitative data justification for an innovative position, as explained to Keith Ewing in 2015. In order to accumulate such data, the position must be functioning for at least six months.
  2. Qualitative justification: Please provide qualitative explanation that supports need for this position.
    Numerous 21st-century academic tendencies are currently scattered across campus and are the subject of political/power battles rather than a venue for campus collaboration and cooperation. Such a position can seek to establish the library as the natural hub for “sandbox” activities across campus. It can seek to redirect digital initiatives on this campus away from administrators’ political gains and move the generation and accomplishment of such initiatives to their rightful owners and primary stakeholders: faculty and students.
    Currently, no additional facilities or resources are required. Existing facilities and resources, such as the visualization lab and open-source and free applications, can be used to generate the momentum of faculty working together toward a common goal, such as, e.g., digital humanities.

social media algorithms

How do algorithms impact our browsing behavior and browsing history?
What is the connection between social media algorithms and fake news?
Are there topic-detection algorithms, just as there are community-detection ones?
How can I change the content of a [Google] search return? Can I?

Larson, S. (2016, July 8). What is an Algorithm and How Does it Affect You? The Daily Dot. Retrieved from https://www.dailydot.com/debug/what-is-an-algorithm/
Berg, P. (2016, June 30). How Do Social Media Algorithms Affect You | Forge and Smith. Retrieved September 19, 2017, from https://forgeandsmith.com/how-do-social-media-algorithms-affect-you/
Oremus, W., & Chotiner, I. (2016, January 3). Who Controls Your Facebook Feed. Slate. Retrieved from http://www.slate.com/articles/technology/cover_story/2016/01/how_facebook_s_news_feed_algorithm_works.html
Lehrman, R. A. (2013, August 11). The new age of algorithms: How it affects the way we live. Christian Science Monitor. Retrieved from https://www.csmonitor.com/USA/Society/2013/0811/The-new-age-of-algorithms-How-it-affects-the-way-we-live
Johnson, C. (2017, March 10). How algorithms affect our way of life. Deseret News. Retrieved from https://www.deseretnews.com/article/865675141/How-algorithms-affect-our-way-of-life.html
Understanding algorithms and their impact on human life goes far beyond basic digital literacy, some experts said.
An example could be the recent outcry over Facebook’s news algorithm, which enhances the so-called “filter bubble” of information.
personalized search (https://en.wikipedia.org/wiki/Personalized_search)
Kounine, A. (2016, August 24). How your personal data is used in personalization and advertising. Retrieved September 19, 2017, from https://www.tastehit.com/blog/personal-data-in-personalization-and-advertising/
Hotchkiss, G. (2007, March 9). The Pros & Cons Of Personalized Search. Retrieved September 19, 2017, from http://searchengineland.com/the-pros-cons-of-personalized-search-10697
Magid, L. (2012). How (and why) To Turn Off Google’s Personalized Search Results. Forbes. Retrieved from https://www.forbes.com/sites/larrymagid/2012/01/13/how-and-why-to-turn-off-googles-personalized-search-results/#53a30be838f2
Nelson, P. (n.d.). Big Data, Personalization and the No-Search of Tomorrow. Retrieved September 19, 2017, from https://www.searchtechnologies.com/blog/big-data-search-personalization

gender

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. doi:10.1177/1461444815608807

http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dkeh%26AN%3d121748152%26site%3dehost-live%26scope%3dsite

community detection algorithms:

Bedi, P., & Sharma, C. (2016). Community detection in social networks. WIREs Data Mining & Knowledge Discovery, 6(3), 115–135.

http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dllf%26AN%3d114513548%26site%3dehost-live%26scope%3dsite

Cruz, J. D., Bothorel, C., & Poulet, F. (2014). Community Detection and Visualization in Social Networks: Integrating Structural and Semantic Information. ACM Transactions on Intelligent Systems & Technology, 5(1), 1–26. doi:10.1145/2542182.2542193

http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3daph%26AN%3d95584126%26site%3dehost-live%26scope%3dsite

Bai, X., Yang, P., & Shi, X. (2017). An overlapping community detection algorithm based on density peaks. Neurocomputing, 226, 7–15. doi:10.1016/j.neucom.2016.11.019

http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dkeh%26AN%3d120321022%26site%3dehost-live%26scope%3dsite
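The three articles above survey community-detection methods; to make the idea concrete, here is a minimal, illustrative sketch of one classic approach, label propagation, in plain Python. The deterministic tie-breaking is a simplification (published versions typically visit nodes in random order), and the graph is invented for the example:

```python
from collections import Counter

def label_propagation(adj):
    """Crude community detection by label propagation: every node starts
    with its own label and repeatedly adopts the most common label among
    its neighbors until nothing changes. Ties keep the current label when
    possible, otherwise take the largest candidate label."""
    labels = {node: node for node in adj}
    changed = True
    while changed:
        changed = False
        for node in sorted(adj):
            counts = Counter(labels[nb] for nb in adj[node])
            best = max(counts.values())
            candidates = {lab for lab, c in counts.items() if c == best}
            new = labels[node] if labels[node] in candidates else max(candidates)
            if new != labels[node]:
                labels[node] = new
                changed = True
    return labels

# Two 4-cliques joined by a single bridge edge (3-4): the algorithm
# recovers the two cliques as communities.
adj = {
    0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4},
    4: {3, 5, 6, 7}, 5: {4, 6, 7}, 6: {4, 5, 7}, 7: {4, 5, 6},
}
communities = {}
for node, lab in label_propagation(adj).items():
    communities.setdefault(lab, set()).add(node)
```

On graphs with ambiguous structure (e.g., two triangles joined by a bridge), deterministic tie-breaking can merge communities, which is one reason the literature above studies more robust formulations.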

topic-detection algorithms:

Zeng, J., & Zhang, S. (2009). Incorporating topic transition in topic detection and tracking algorithms. Expert Systems with Applications, 36(1), 227–232. doi:10.1016/j.eswa.2007.09.013

http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dkeh%26AN%3d34892957%26site%3dehost-live%26scope%3dsite

topic detection and tracking (TDT) algorithms based on topic models, such as LDA, pLSI (https://en.wikipedia.org/wiki/Probabilistic_latent_semantic_analysis), etc.

Zhou, E., Zhong, N., & Li, Y. (2014). Extracting news blog hot topics based on the W2T methodology. World Wide Web, 17(3), 377–404. doi:10.1007/s11280-013-0207-7

http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dkeh%26AN%3d94609674%26site%3dehost-live%26scope%3dsite

The W2T (Wisdom Web of Things) methodology considers the information organization and management from the perspective of Web services, which contributes to a deep understanding of online phenomena such as users’ behaviors and comments in e-commerce platforms and online social networks.  (https://link.springer.com/chapter/10.1007/978-3-319-44198-6_10)
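Published TDT systems rely on probabilistic topic models such as LDA or pLSI; as a far cruder illustration of the underlying idea (grouping documents whose term distributions overlap), here is a single-pass sketch with an invented toy corpus and an arbitrary similarity threshold:

```python
import math
from collections import Counter

def tf(text):
    """Raw term-frequency vector for a short text."""
    return Counter(w.lower().strip(".,") for w in text.split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def detect_topics(docs, threshold=0.3):
    """Single-pass greedy grouping: each document joins the first topic
    whose centroid it resembles, otherwise it seeds a new topic."""
    topics = []  # list of (centroid term counts, member indices)
    for i, doc in enumerate(docs):
        vec = tf(doc)
        for centroid, members in topics:
            if cosine(vec, centroid) >= threshold:
                centroid.update(vec)
                members.append(i)
                break
        else:
            topics.append((Counter(vec), [i]))
    return [members for _, members in topics]

docs = [
    "the election results dominate the news feed",
    "election news and polling results",
    "new basketball season starts tonight",
    "basketball playoffs season schedule",
]
```

Real topic models go much further: they infer latent topic distributions per document and can track how topics transition over time, which is exactly what the Zeng & Zhang paper above addresses.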

ethics of algorithms

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 2053951716679679. https://doi.org/10.1177/2053951716679679

journalism

Malyarov, N. (2016, October 18). Journalism in the age of algorithms, platforms and newsfeeds | News | FIPP.com. Retrieved September 19, 2017, from http://www.fipp.com/news/features/journalism-in-the-age-of-algorithms-platforms-newsfeeds

+++++++++++++++++
https://blog.stcloudstate.edu/ims?s=algorithm
more on algorithms in this IMS blog

see also

back to school discussion

Bryan Alexander’s (BA) Future Trends forum of Sept. 7

Are you seeing enrollments change? Which technologies hold the most promise? Will your campus become politically active? What collaborations might power up teaching and learning?

  • the big technological issues for the next year?
    robotics? automation in education? big data / analytics?

organizational transformation. David Stone (Penn State): centralization vs. decentralization; technology is shifting everywhere, even the registrar. BA: where should the IT department sit, under the CFO or under an academic department?

difference between undergrads and grad students and how to address it; CETL joining the center for academic technologies.

faculty role in developing courses and materials: how to share these materials and make them more usable; who should be maintaining them; life cycle; compensation for developing materials. These are, in essence, the issues of the OER (Open Educational Resources) initiative in MN.

BA: OER and open access to research have very similar models and issues; open-access scholarship has a lot of impact on campus finances, including library and faculty budgets.

Amanda Major is with the Division of Digital Learning, part of Academic Affairs at UCF: are there trends in competency-based learning, assessing quality courses and programs, personalized adaptive learning, and utilizing data analytics for retention and student success? BA: CBL continues to grow at state universities and community colleges.

BA for group discussions: what are the technological changes happening this coming year, not only internally on campus but globally, and how might they affect us? Amazon Dash buttons, electric cars for the university fleet, newer devices on campus.

David Stone: students are price-sensitive, yet colleges and universities can charge whatever they want, and textbooks keep rising in price.

http://hechingerreport.org/ next week

++++++++++++++++++
more on future trends in this IMS blog

https://blog.stcloudstate.edu/ims/2017/05/30/missionu-on-bryan-alexanders-future-trends/

online teaching evaluation

Tobin, T. J., Mandernach, B. J., & Taylor, A. H. (2015). Evaluating Online Teaching: Implementing Best Practices (1st ed.). San Francisco, CA: Jossey-Bass.
p. 5: measurable faculty competencies for online teaching:
  • attend to unique challenges of distance learning
  • Be familiar with unique learning needs
  • Achieve mastery of course content, structure, and organization
  • Respond to student inquiries
  • Provide detailed feedback
  • Communicate effectively
  • Promote a safe learning environment
  • Monitor student progress
  • Communicate course goals
  • Provide evidence of teaching presence.

Best practices include:

  • Making interactions challenging yet supportive for students
  • Asking learners to be active participants in the learning process
  • Acknowledging variety in the ways that students learn best
  • Providing timely and constructive feedback

Evaluation principles

  • Instructor knowledge
  • Method of instruction
  • Instructor-student rapport
  • Teaching behaviors
  • Enthusiastic teaching
  • Concern for teaching
  • Overall

p. 8: The American Association for Higher Education’s nine principles of good practice for assessing student learning (1996) hold equally in the F2F and online environments:

The assessment of student learning begins with educational values.

assessment is most effective when it reflects an understanding of learning as multidimensional, integrated and revealed in performance over time

assessment works best when the programs it seeks to improve have clear, explicitly stated purposes.

Assessment requires attention to outcomes but also and equally to the experiences that lead to those outcomes.

Assessment works best when it is ongoing, not episodic

Assessment fosters wider improvement when representatives from across the educational community are involved

Assessment makes a difference when it begins with issues of use and illuminates questions that people really care about.

Assessment is most likely to lead to improvements when it is part of the large set of conditions that promote change.

Through assessment, educators meet responsibilities to students and to the public.

p. 9: Most of the online teaching evaluation instruments in use today were created to evaluate content design rather than teaching practices.

p. 29: stakeholders for the evaluation of online teaching:

  • faculty members with online teaching experience
  • campus faculty members as a means of establishing equitable evaluation across modes of teaching
  • contingent faculty members teaching online
  • department or college administrators
  • members of faculty unions or representative governing organizations
  • administrative support specialists
  • distance learning administrators
  • technology specialists
  • LMS administrators
  • Faculty development and training specialists
  • Institutional assessment and effectiveness specialists
  • Students

Sample student rating questions

University resources

Rate the effectiveness of the online library for locating course materials.

Based on your experience,

p. 148: Checklist for Online Interactive Learning (COIL)

p. 150: Quality Online Course Initiative (QOCI)

p. 151: QM (Quality Matters) Rubric

p. 154: The Online Instructor Evaluation System (OIES)

p. 163: Data analytics: moving beyond student learning

  • # of announcements posted per module
  • # of contributions to the asynchronous discussion boards
  • Quality of the contributions
  • Timeliness of posting student grades
  • Timeliness of student feedback
  • Quality of instructional supplements
  • Quality of feedback on student work
  • Frequency of logins
p. 180: understanding big data:
  • reliability
  • validity
  • factor structure
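Several of the instructor-activity metrics listed above (announcements per module, discussion contributions, login frequency) are simple counts over LMS event logs. A minimal sketch, with a hypothetical event-record format invented for the example, might look like:

```python
from collections import Counter

# Hypothetical LMS event records: (user, event_type, module); a real
# LMS export would carry timestamps and far richer fields.
events = [
    ("instructor", "announcement", "module1"),
    ("instructor", "announcement", "module2"),
    ("instructor", "discussion_post", "module1"),
    ("instructor", "login", None),
    ("instructor", "login", None),
    ("studentA", "discussion_post", "module1"),
]

def instructor_metrics(events, instructor="instructor"):
    """Tally a few of the activity metrics from the list above."""
    counts = Counter(etype for user, etype, _ in events if user == instructor)
    return {
        "announcements": counts["announcement"],
        "discussion_contributions": counts["discussion_post"],
        "login_frequency": counts["login"],
    }

metrics = instructor_metrics(events)
```

Note that such counts capture quantity only; the qualitative items in the list (quality of contributions, quality of feedback) still require human judgment.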

p. 187: A holistic evaluation plan should include both formative evaluation, in which observations and ratings are undertaken with the purpose of improving teaching and learning, and summative evaluation, in which observations and ratings are used to make personnel decisions, such as granting promotion and tenure, remediation, or asking contingent faculty to teach again.

p. 195: separating teaching behaviors from content design


+++++++++++++++++
more on online teaching in this IMS blog
https://blog.stcloudstate.edu/ims?s=online+teaching

Large-scale visualization

The future of collaboration: Large-scale visualization

 http://usblogs.pwc.com/emerging-technology/the-future-of-collaboration-large-scale-visualization/

More data doesn’t automatically lead to better decisions. A shortage of skilled data scientists has hindered progress towards translation of information into actionable business insights. In addition, traditionally dense spreadsheets and linear slideshows are ineffective to present discoveries when dealing with Big Data’s dynamic nature. We need to evolve how we capture, analyze and communicate data.

Large-scale visualization platforms have several advantages over traditional presentation methods. They blur the line between the presenter and audience to increase the level of interactivity and collaboration. They also offer simultaneous views of both macro and micro perspectives, multi-user collaboration and real-time data interaction, and a limitless number of visualization possibilities – critical capabilities for rapidly understanding today’s large data sets.

Visualization walls enable presenters to target people’s preferred learning methods, thus creating a more effective communication tool. The human brain has an amazing ability to quickly glean insights from patterns – and great visualizations make for more efficient storytellers.

Grant: Visualizing Digital Scholarship in Libraries and Learning Spaces
Award amount: $40,000
Funder: Andrew W. Mellon Foundation
Lead institution: North Carolina State University Libraries
Due date: 13 August 2017
Notification date: 15 September 2017
Website: https://immersivescholar.org
Contact: immersivescholar@ncsu.edu

Project Description

NC State University, funded by the Andrew W. Mellon Foundation, invites proposals from institutions interested in participating in a new project for Visualizing Digital Scholarship in Libraries and Learning Spaces. The grant aims to 1) build a community of practice of scholars and librarians who work in large-scale multimedia to help visually immersive scholarly work enter the research lifecycle; and 2) overcome technical and resource barriers that limit the number of scholars and libraries who may produce digital scholarship for visualization environments and the impact of generated knowledge. Libraries and museums have made significant strides in pioneering the use of large-scale visualization technologies for research and learning. However, the utilization, scale, and impact of visualization environments and the scholarship created within them have not reached their fullest potential. A logical next step in the provision of technology-rich, visual academic spaces is to develop best practices and collaborative frameworks that can benefit individual institutions by building economies of scale among collaborators.

The project contains four major elements:

  1. An initial meeting and priority setting workshop that brings together librarians, scholars, and technologists working in large-scale, library and museum-based visualization environments.
  2. Scholars-in-residence at NC State over a multi-year period who pursue open source creative projects, working in collaboration with our librarians and faculty, with the potential to address the articulated limitations.
  3. Funding for modest, competitive block grants to other institutions working on similar challenges for creating, disseminating, validating, and preserving digital scholarship created in and for large-scale visual environments.
  4. A culminating symposium that brings together representatives from the scholars-in-residence and block grant recipient institutions to share and assess results, organize ways of preserving and disseminating digital products produced, and build on the methods, templates, and tools developed for future projects.

Work Summary
This call solicits proposals for block grants from library or museum systems that have visualization installations. Block grant recipients can utilize funds for ideas ranging from creating open source scholarly content for visualization environments to developing tools and templates to enhance sharing of visualization work. An advisory panel will select four institutions to receive awards of up to $40,000. Block grant recipients will also participate in the initial priority setting workshop and the culminating symposium. Participating in a block grant proposal does not disqualify an individual from later applying for one of the grant-supported scholar-in-residence appointments.
Applicants will provide a statement of work that describes the contributions that their organization will make toward the goals of the grant. Applicants will also provide a budget and budget justification.
Activities that can be funded through block grants include, but are not limited to:

  • Commissioning work by a visualization expert
  • Hosting a visiting scholar, artist, or technologist residency
  • Software development or adaptation
  • Development of templates and methodologies for sharing and scaling content utilizing open source software
  • Student or staff labor for content or software development or adaptation
  • Curricula and reusable learning objects for digital scholarship and visualization courses
  • Travel (if necessary) to the initial project meeting and culminating workshop
  • User research on universal design for visualization spaces

Funding for operational expenditures, such as equipment, is not allowed for any grant participant.

Application
Send an application to immersivescholar@ncsu.edu by the end of the day on 13 August 2017 that includes the following:

  • Statement of work (no more than 1000 words) of the project idea your organization plans to develop, its relationship to the overall goals of the grant, and the challenges to be addressed.
  • List the names and contact information for each of the participants in the funded project, including a brief description of their current role, background, expertise, interests, and what they can contribute.
  • Project timeline.
  • Budget table with projected expenditures.
  • Budget narrative detailing the proposed expenditures

Selection and Notification Process
An advisory panel made up of scholars, librarians, and technologists with experience and expertise in large-scale visualization and/or visual scholarship will review and rank proposals. The project leaders are especially keen to receive proposals that develop best practices and collaborative frameworks that can benefit individual institutions by building a community of practice and economies of scale among collaborators.

Awardees will be selected based on:

  • the ability of their proposal to successfully address one or both of the identified problems;
  • the creativity of the proposed activities;
  • relevant demonstrated experience partnering with scholars or students on visualization projects;
  • whether the proposal is extensible;
  • feasibility of the work within the proposed time-frame and budget;
  • whether the project work improves or expands access to large-scale visual environments for users; and
  • the participant’s ability to expand content development and sharing among the network of institutions with large-scale visual environments.

Awardees will be required to send a representative to an initial meeting of the project cohort in Fall 2017.

Awardees will be notified by 15 September 2017.

If you have any questions, please contact immersivescholar@ncsu.edu.

–Mike Nutt, Director of Visualization Services, Digital Library Initiatives, NCSU Libraries
919.513.0651 http://www.lib.ncsu.edu/do/visualization

 

qualitative method research

Cohort 7


Qualitative Method Research

quote

Data treatment and analysis

Because the questionnaire data comprised both Likert scales and open questions, they were analyzed quantitatively and qualitatively. Textual data (open responses) were qualitatively analyzed by coding: each segment (e.g. a group of words) was assigned to a semantic reference category, as systematically and rigorously as possible. For example, “Using an iPad in class really motivates me to learn” was assigned to the category “positive impact on motivation.” The qualitative analysis was performed using an adapted version of the approaches developed by L’Écuyer (1990) and Huberman and Miles (1991, 1994). Thus, we adopted a content analysis approach using QDAMiner software, which is widely used in qualitative research (see Fielding, 2012; Karsenti, Komis, Depover, & Collin, 2011). For the quantitative analysis, we used SPSS 22.0 software to conduct descriptive and inferential statistics. We also conducted inferential statistics to further explore the iPad’s role in teaching and learning, along with its motivational effect. The results will be presented in a subsequent report (Fievez, & Karsenti, 2013)

Fievez, A., & Karsenti, T. (2013). The iPad in Education: uses, benefits and challenges. A survey of 6057 students and 302 teachers in Quebec, Canada (p. 51). Canada Research Chair in Technologies in Education. Retrieved from https://www.academia.edu/5366978/The_iPad_in_Education_uses_benefits_and_challenges._A_survey_of_6057_students_and_302_teachers_in_Quebec_Canada

unquote
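The coding step described in the quote (assigning each segment to a semantic reference category) is normally done by trained human coders in software such as QDAMiner; purely as an illustration of the mechanics, a crude keyword-dictionary coder could look like this (the codebook below is invented):

```python
# Hypothetical keyword dictionaries standing in for a human codebook;
# real qualitative coding is far more nuanced than substring matching.
CODEBOOK = {
    "positive impact on motivation": ["motivates", "motivating", "motivated"],
    "technical difficulties": ["crashes", "freezes", "battery", "slow"],
}

def code_segment(segment):
    """Assign a textual segment to every category whose keywords it contains."""
    text = segment.lower()
    hits = [category for category, keywords in CODEBOOK.items()
            if any(kw in text for kw in keywords)]
    return hits or ["uncoded"]
```

Running it on the quote’s own example, `code_segment("Using an iPad in class really motivates me to learn")` lands the segment in the “positive impact on motivation” category.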

The 20th-century notion of conducting qualitative research through an oral interview and then processing the results manually triggered, in the second half of the century, sometimes condescending attitudes from researchers in the exact sciences.
The reason was the advent of computing power in the second half of the 20th century, which allowed the exact sciences to claim “scientific,” “data-based” results.
One of the statistical packages, SPSS, is today widely known and considered a magnificent tool for building solid, statistically based argumentation, which further perpetuates the perceived superiority of quantitative over qualitative methods.
At the same time, qualitative researchers continue to lag behind, mostly due to the inertia of their approach: qualitative analysis continues to be processed in the olden ways. While there is nothing wrong with the “olden” ways, harnessing computational power can streamline the process and even present options that the “human eye” sometimes misses.
Below are some suggestions you may consider when you embark on the path of qualitative research.
The Use of Qualitative Content Analysis in Case Study Research
Florian Kohlbacher
http://www.qualitative-research.net/index.php/fqs/article/view/75/153

an excellent guide to the structure of qualitative research

Palys, T., & Atchison, C. (2012). Qualitative Research in the Digital Era: Obstacles and Opportunities. International Journal Of Qualitative Methods, 11(4), 352-367.
http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dkeh%26AN%3d89171709%26site%3dehost-live%26scope%3dsite
Palys and Atchison (2012) present a compelling case for bringing your qualitative research to the level of quantitative research by using modern tools for qualitative analysis.
1. The authors correctly promote NVivo as the “Jaguar” of qualitative research tools. Be aware, however, of the existence of other “Geo Metro” tools, which, for your research, might achieve the same result (see the bottom of this blog entry).
2. The authors promote a new approach to Chapter 2 of a doctoral dissertation, namely OCR-ing PDF articles (as of 2017, most of your literature is in PDF or another electronic textual format) through applications such as:
Abbyy Fine Reader, https://www.abbyy.com/en-us/finereader/
OmniPage,  http://www.nuance.com/for-individuals/by-product/omnipage/index.htm
Readiris http://www.irislink.com/EN-US/c1462/Readiris-16-for-Windows—OCR-Software.aspx
The text from the articles is then processed through NVivo or related programs (see the bottom of this blog entry). As the authors propose: “This is immediately useful for literature review and proposal writing, and continues through the research design, data gathering, and analysis stages— where NVivo’s flexibility for many different sources of data (including audio, video, graphic, and text) are well known—of writing for publication” (p. 353).
In other words, you can try to wrap your head around a huge amount of textual information on your own, or you can approach the task in parallel by processing the same text with a tool.
 +++++++++++++++++++++++++++++
Here are some suggestions for Computer Assisted/Aided Qualitative Data Analysis Software (CAQDAS), for small- and large-community applications:

– RQDA (the small one): http://rqda.r-forge.r-project.org/ (see Metin Caliskan’s tutorials on YouTube); one active developer.
– GATE (the large one): http://gate.ac.uk/ | https://gate.ac.uk/download/

text mining: https://en.wikipedia.org/wiki/Text_mining
Text mining, also referred to as text data mining, roughly equivalent to text analytics, is the process of deriving high-quality information from text. High-quality information is typically derived through the devising of patterns and trends through means such as statistical pattern learning. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output.
https://ischool.syr.edu/infospace/2013/04/23/what-is-text-mining/
Qualitative data is descriptive data that cannot be measured in numbers and often includes qualities of appearance like color, texture, and textual description. Quantitative data is numerical, structured data that can be measured. However, there is often slippage between the two categories: a photograph, for example, might traditionally be considered “qualitative data,” but it can be broken down to the level of pixels, which can be measured.
A word of caution: text mining doesn’t generate new facts and is not an end in and of itself. The process is most useful when the data it generates can be further analyzed by a domain expert, who can bring additional knowledge for a more complete picture. Still, text mining creates new relationships and hypotheses for experts to explore further.
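One way text mining “creates new relationships” is by counting which terms co-occur, so that a domain expert can then judge whether a frequent pairing is meaningful. A minimal sketch over an invented two-document corpus:

```python
from collections import Counter
from itertools import combinations

def cooccurrences(corpus):
    """Count how often two distinct terms appear in the same document."""
    pairs = Counter()
    for doc in corpus:
        terms = sorted(set(w.lower().strip(".,") for w in doc.split()))
        pairs.update(combinations(terms, 2))  # pairs come out alphabetically ordered
    return pairs

corpus = [
    "algorithms shape the news feed",
    "news algorithms and the filter bubble",
]
pairs = cooccurrences(corpus)
```

Here “algorithms” and “news” co-occur in both documents, a (trivial) relationship the counts surface; at scale, the same idea underlies the pattern-finding the definition above describes.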

quick and easy:

intermediate:

advanced:

http://tidytextmining.com/

Introduction to GATE Developer  https://youtu.be/o5uhMF15vsA


 

use of RapidMiner:

https://rapidminer.com/pricing/

– Coding Analysis Toolkit (CAT) from University of Pittsburgh and University of Massachusetts
– Raven’s Eye is an online natural language analysis tool
– ATLAS.TI
– XSight

– QDA Miner: http://provalisresearch.com/products/qualitative-data-analysis-software/

There is also a free version called QDA Miner Lite with limited functionalities: http://provalisresearch.com/products/qualitative-data-analysis-software/freeware/

– MAXQDA

–  NVivo

– SPSS Text Analytics

– Kwalitan

– Transana (include video transcribing capability)

– NUD*IST https://www.qsrinternational.com/

(Cited from: https://www.researchgate.net/post/Are_there_any_open-source_alternatives_to_Nvivo [accessed April 1, 2017].)

– OdinText

IBM Watson Conversation
IBM Watson Text to Speech
Google Translate API
MeTA
LingPipe
NLP4J
Timbl
Colibri Core
CRF++
Frog
Ucto
– CRFsuite

– FoLiA
PyNLPl
openNLP
NLP Compromise
MALLET
(Cited from: https://www.g2crowd.com/products/nvivo/competitors/alternatives [accessed April 1, 2017].)
+++++++++++++++++++++++++=
http://www.socresonline.org.uk/3/3/4.html
Christine A. Barry (1998) ‘Choosing Qualitative Data Analysis Software: Atlas/ti and Nudist Compared’
Sociological Research Online, vol. 3, no. 3, <http://www.socresonline.org.uk/3/3/4.html>

Pros and Cons of Computer Assisted Qualitative Data Analysis Software

+++++++++++++++++++++++++
more on quantitative research:

Asamoah, D. A., Sharda, R., Hassan Zadeh, A., & Kalgotra, P. (2017). Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course. Decision Sciences Journal of Innovative Education, 15(2), 161–190. https://doi.org/10.1111/dsji.12125
++++++++++++++++++++++++
literature on quantitative research:
Borgman, C. L. (2015). Big Data, Little Data, No Data: Scholarship in the Networked World. MIT Press. https://mplus.mnpals.net/vufind/Record/ebr4_1006438
St. Cloud State University MC Main Collection – 2nd floor AZ195 .B66 2015
p. 161 Data scholarship in the Humanities
p. 166 When Are Data?
Philip Chen, C. L., & Zhang, C.-Y. (2014). Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Information Sciences, 275(Supplement C), 314–347. https://doi.org/10.1016/j.ins.2014.01.015
