
Large-scale visualization

The future of collaboration: Large-scale visualization

 http://usblogs.pwc.com/emerging-technology/the-future-of-collaboration-large-scale-visualization/

More data doesn’t automatically lead to better decisions. A shortage of skilled data scientists has hindered progress in translating information into actionable business insights. In addition, traditional dense spreadsheets and linear slideshows are ineffective for presenting discoveries when dealing with Big Data’s dynamic nature. We need to evolve how we capture, analyze, and communicate data.

Large-scale visualization platforms have several advantages over traditional presentation methods. They blur the line between the presenter and audience to increase the level of interactivity and collaboration. They also offer simultaneous views of both macro and micro perspectives, multi-user collaboration and real-time data interaction, and a limitless number of visualization possibilities – critical capabilities for rapidly understanding today’s large data sets.

Visualization walls enable presenters to target people’s preferred learning methods, creating a more effective communication tool. The human brain has an amazing ability to quickly glean insights from patterns – and great visualizations make for more efficient storytelling.

Grant: Visualizing Digital Scholarship in Libraries and Learning Spaces
Award amount: $40,000
Funder: Andrew W. Mellon Foundation
Lead institution: North Carolina State University Libraries
Due date: 13 August 2017
Notification date: 15 September 2017
Website: https://immersivescholar.org
Contact: immersivescholar@ncsu.edu

Project Description

NC State University, funded by the Andrew W. Mellon Foundation, invites proposals from institutions interested in participating in a new project for Visualizing Digital Scholarship in Libraries and Learning Spaces. The grant aims to 1) build a community of practice of scholars and librarians who work in large-scale multimedia to help visually immersive scholarly work enter the research lifecycle; and 2) overcome technical and resource barriers that limit the number of scholars and libraries who may produce digital scholarship for visualization environments and the impact of generated knowledge. Libraries and museums have made significant strides in pioneering the use of large-scale visualization technologies for research and learning. However, the utilization, scale, and impact of visualization environments and the scholarship created within them have not reached their fullest potential. A logical next step in the provision of technology-rich, visual academic spaces is to develop best practices and collaborative frameworks that can benefit individual institutions by building economies of scale among collaborators.

The project contains four major elements:

  1. An initial meeting and priority setting workshop that brings together librarians, scholars, and technologists working in large-scale, library and museum-based visualization environments.
  2. Scholars-in-residence at NC State over a multi-year period who pursue open source creative projects, working in collaboration with our librarians and faculty, with the potential to address the articulated limitations.
  3. Funding for modest, competitive block grants to other institutions working on similar challenges for creating, disseminating, validating, and preserving digital scholarship created in and for large-scale visual environments.
  4. A culminating symposium that brings together representatives from the scholars-in-residence and block grant recipient institutions to share and assess results, organize ways of preserving and disseminating digital products produced, and build on the methods, templates, and tools developed for future projects.

Work Summary
This call solicits proposals for block grants from library or museum systems that have visualization installations. Block grant recipients can utilize funds for ideas ranging from creating open source scholarly content for visualization environments to developing tools and templates to enhance sharing of visualization work. An advisory panel will select four institutions to receive awards of up to $40,000. Block grant recipients will also participate in the initial priority setting workshop and the culminating symposium. Participating in a block grant proposal does not disqualify an individual from later applying for one of the grant-supported scholar-in-residence appointments.
Applicants will provide a statement of work that describes the contributions that their organization will make toward the goals of the grant. Applicants will also provide a budget and budget justification.
Activities that can be funded through block grants include, but are not limited to:

  • Commissioning work by a visualization expert
  • Hosting a visiting scholar, artist, or technologist residency
  • Software development or adaptation
  • Development of templates and methodologies for sharing and scaling content utilizing open source software
  • Student or staff labor for content or software development or adaptation
  • Curricula and reusable learning objects for digital scholarship and visualization courses
  • Travel (if necessary) to the initial project meeting and culminating workshop
  • User research on universal design for visualization spaces

Funding for operational expenditures, such as equipment, is not allowed for any grant participant.

Application
Send an application to immersivescholar@ncsu.edu by the end of the day on 13 August 2017 that includes the following:

  • A statement of work (no more than 1,000 words) describing the project idea your organization plans to develop, its relationship to the overall goals of the grant, and the challenges to be addressed.
  • A list of names and contact information for each participant in the funded project, including a brief description of their current role, background, expertise, interests, and what they can contribute.
  • A project timeline.
  • A budget table with projected expenditures.
  • A budget narrative detailing the proposed expenditures.

Selection and Notification Process
An advisory panel made up of scholars, librarians, and technologists with experience and expertise in large-scale visualization and/or visual scholarship will review and rank proposals. The project leaders are especially keen to receive proposals that develop best practices and collaborative frameworks that can benefit individual institutions by building a community of practice and economies of scale among collaborators.

Awardees will be selected based on:

  • the ability of their proposal to successfully address one or both of the identified problems;
  • the creativity of the proposed activities;
  • relevant demonstrated experience partnering with scholars or students on visualization projects;
  • whether the proposal is extensible;
  • feasibility of the work within the proposed time-frame and budget;
  • whether the project work improves or expands access to large-scale visual environments for users; and
  • the participant’s ability to expand content development and sharing among the network of institutions with large-scale visual environments.

Awardees will be required to send a representative to an initial meeting of the project cohort in Fall 2017.

Awardees will be notified by 15 September 2017.

If you have any questions, please contact immersivescholar@ncsu.edu.

–Mike Nutt, Director of Visualization Services, Digital Library Initiatives, NCSU Libraries
919.513.0651 http://www.lib.ncsu.edu/do/visualization


intro to stat modeling

Introduction to Statistical Modelling (bibliography)

These are the books available at the SCSU library with their call #s:

Graybill, F. A. (1961). An introduction to linear statistical models. New York: McGraw-Hill. HA29 .G75

Dobson, A. J. (1983). Introduction to statistical modelling. London ; New York: Chapman and Hall. QA276 .D59 1983

Janke, S. J., & Tinsley, F. (2005). Introduction to linear models and statistical inference. Hoboken, NJ: Wiley. QA279 .J36 2005

++++++++++++++++++
resources from the Internet:

visuals (quick reference to terms and issues)

consider this short video:
https://blog.stcloudstate.edu/ims/2017/07/06/misleading-graphs/

++++++++++++++
more on quantitative and qualitative research in this IMS blog
https://blog.stcloudstate.edu/ims?s=quantitative
https://blog.stcloudstate.edu/ims?s=qualitative+research

next gen digital learning environment

Updating the Next Generation Digital Learning Environment for Better Student Learning Outcomes

A learning management system (LMS) is never the solution to every problem in education. Edtech is just one part of the whole learning ecosystem and student experience.

Therefore, the next generation digital learning environment (NGDLE), as envisioned by EDUCAUSE in 2015 …  Looking at the NGDLE requirements from an LMS perspective, I view the NGDLE as being about five areas: interoperability; personalization; analytics, advising, and learning assessment; collaboration; accessibility and universal design.

Interoperability

  • Content can easily be exchanged between systems.
  • Users are able to leverage the tools they love, including discipline-specific apps.
  • Learning data is available to trusted systems and people who need it.
  • The learning environment is “future proof” so that it can adapt and extend as the ecosystem evolves.

Personalization

  • The learning environment reflects individual preferences.
  • Departments, divisions, and institutions can be autonomous.
  • Instructors teach the way they want and are not constrained by the software design.
  • There are clear, individual learning paths.
  • Students have choice in activity, expression, and engagement.

Analytics, Advising, and Learning Assessment

  • Learning analytics helps identify at-risk students, track course progress, and shape adaptive learning pathways.
  • The learning environment enables integrated planning and assessment of student performance.
  • More data is made available, with greater context around the data.
  • The learning environment supports platform and data standards.

Collaboration

  • Individual spaces persist after courses and after graduation.
  • Learners are encouraged as creators and consumers.
  • Courses include public and private spaces.

Accessibility and Universal Design

  • Accessibility is part of the design of the learning experience.
  • The learning environment enables adaptive learning and supports different types of materials.
  • Learning design includes measurement rubrics and quality control.

The core analogy used in the NGDLE paper is that each component of the learning environment is a Lego brick:

  • The days of the LMS as a “walled garden” app that does everything are over.
  • Today many kinds of amazing learning and collaboration tools (Lego bricks) should be accessible to educators.
  • We have standards that let these tools (including an LMS) talk to each other. That is, all bricks share some properties that let them fit together.
  • Students and teachers sign in once to this “ecosystem of bricks.”
  • The bricks share results and data.
  • These bricks fit together; they can be interchanged and swapped at will, with confidence that the learning experience will continue uninterrupted.

Any “next-gen” attempt to completely rework the pedagogical model and introduce a “mash-up of whatever” to fulfil this model would fall victim to the same criticisms levied at the LMS today: there is too little time and training to expect faculty to figure out the nuances of implementation on their own.

The Lego metaphor works only if we’re talking about “old school” Lego design — two-, three-, and four-post bricks that neatly fit together. Modern edtech is a lot more like modern Lego. There are wheels and rocket launchers and belts and all kinds of amazing pieces that work well with each other, but only when they are configured properly. A user cannot simply stick together different pieces and assume they will work harmoniously to create an environment through which each student can be successful.

As the NGDLE paper states: “Despite the high percentages of LMS adoption, relatively few instructors use its more advanced features — just 41% of faculty surveyed report using the LMS ‘to promote interaction outside the classroom.'”

But this is what the next generation LMS is good at: being a central nervous system — or learning hub — through which a variety of learning activities and tools are used. This is also where the LMS needs to go: bringing together and making sense of all the amazing innovations happening around it. This is much harder to do, perhaps even impossible, if all the pieces involved are just bricks without anything to orchestrate them or to weave them together into a meaningful, personal experience for achieving well-defined learning outcomes.

  • Making a commitment to build easy, flexible, and smart technology
  • Working with colleges and universities to remove barriers to adopting new tools in the ecosystem
  • Standardizing the vetting of accessibility compliance (the Strategic Nonvisual Access Partner Program from the National Federation of the Blind is a great start)
  • Advancing standards for data exchange while protecting individual privacy
  • Building integrated components that work with the institutions using them — learning quickly about what is and is not working well and applying those lessons to the next generation of interoperability standards
  • Letting people use the tools they love [SIC] and providing more ways for nontechnical individuals (including students) to easily integrate new features into learning activities

My note: something that just refuses to be accepted at SCSU
Technologists are often very focused on the technology, but the reality is that the more deeply and closely we understand the pedagogy and the people in the institutions — students, faculty, instructional support staff, administrators — the better suited we are to actually making the tech work for them.

++++++++++++++++++++++

Under the Hood of a Next Generation Digital Learning Environment in Progress

The challenge is that although 85 percent of faculty use a campus learning management system (LMS), a recent Blackboard report found that, out of 70,000 courses across 927 North American institutions, 53 percent of LMS usage was classified as supplemental (content-heavy, low interaction) and 24 percent as complementary (one-way communication via content/announcements/gradebook). Only 11 percent were characterized as social, 10 percent as evaluative (heavy use of assessment), and 2 percent as holistic (balanced use of all previous). Our FYE course required innovating beyond the supplemental course-level LMS to create a more holistic cohort-wide NGDLE in order to fully support the teaching, learning, and student success missions of the program. The key design goals for our NGDLE were to:

  • Create a common platform that could deliver a standard curriculum and achieve parity in all course sections using existing systems and tools and readily available content
  • Capture, store, and analyze any generated learner data to support learning assessment, continuous program improvement, and research
  • Develop reports and actionable analytics for administrators, advisors, instructors, and students

++++++++++++
more on LMS in this blog
https://blog.stcloudstate.edu/ims?s=LMS

more on learning outcomes in this IMS blog
https://blog.stcloudstate.edu/ims?s=learning+outcomes

disruptive technologies higher ed

The top 5 disruptive technologies in higher ed

By Leigh M. and Thomas Goldrick, June 5th, 2017
The Internet of Things (IoT), augmented reality, and advancements in online learning have changed the way universities reach prospective students, engage with their current student body, and provide them the resources they need.
Online Learning
Despite online learning’s successes, many still believe that it lacks the interaction of its in-person counterpart. However, innovations in pedagogical strategy and technology are helping make it much more engaging.

Competency-based Education

Competency-based education (CBE) recognizes that all students enter a program with different skills and proficiencies and that each moves at a different rate. We now possess the technology to better measure these differences and design adaptive learning programs accordingly. These programs aim to increase student engagement, as time is spent expanding on what the students already know rather than having them relearn familiar material.

The Internet of Things

The Internet of Things has opened up a whole new world of possibilities in higher education. The increased connectivity between devices and “everyday things” means better data tracking and analytics, and improved communication between student, professor, and institution, often without ever saying a word. IoT is making it easier for students to learn when, how, and where they want, while providing professors support to create a more flexible and connected learning environment.

Virtual/Augmented Reality

Virtual and augmented reality technologies have begun to take Higher Ed into the realm of what used to be considered science fiction.

More often than not, they require significant planning and investment in the infrastructure needed to support them.

Artificial Intelligence

An A.I. professor’s assistant or an online learning platform that adapts to each student’s specific needs: having artificial intelligence that learns and improves as it aids in the learning process could have a far-reaching effect on higher education, both online and in-person.

+++++++++++++++++++++
more on disruptive technologies in this IMS blog
https://blog.stcloudstate.edu/ims?s=disruptive+technologies

industry 4.0 and IOT

The Internet of Things will power the Fourth Industrial Revolution. Here’s how

https://medium.com/world-economic-forum/the-internet-of-things-will-power-the-fourth-industrial-revolution-heres-how-39932f03df1

By 2020 more than 50 billion things, ranging from cranes to coffee machines, will be connected to the internet. That means a lot of data will be created — too much data, in fact, to be manageable or to be kept forever affordably.

One by-product of more devices creating more data is that they are speaking lots of different programming languages. Machines are still using languages from the 1970s and 80s as well as the new languages of today. In short, applications need to have data translated for them — by an IoT babelfish, if you will — before they can make sense of the information.
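As a rough illustration of that translation problem, here is a minimal Python sketch of the kind of normalization layer the article alludes to; the device formats, field names, and values are entirely hypothetical:

```python
import json

# Hypothetical payloads: two devices reporting the same reading in different formats.
LEGACY_RECORD = "CRANE-07,sensor,temp_f,154.4"        # legacy comma-delimited device
MODERN_RECORD = '{"device": "coffee-01", "temperature_c": 68.0}'  # JSON-speaking device

def translate(raw: str) -> dict:
    """Normalize heterogeneous device messages into one common schema."""
    if raw.lstrip().startswith("{"):
        msg = json.loads(raw)
        return {"device": msg["device"], "temp_c": msg["temperature_c"]}
    device, _kind, _field, value = raw.split(",")
    return {"device": device, "temp_c": round((float(value) - 32) * 5 / 9, 1)}

for raw in (LEGACY_RECORD, MODERN_RECORD):
    print(translate(raw))  # both normalize to {'device': ..., 'temp_c': 68.0}
```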

Then there are analytics and data storage.

Security becomes even more important as there is little human interaction in the flow of data from device to datacentre — so-called machine-to-machine communication.


+++++++++++++++++++
more on IOT in this IMS blog
https://blog.stcloudstate.edu/ims?s=iot

more on industry 4.0 in this IMS blog
https://blog.stcloudstate.edu/ims?s=industrial+revolution

library website content strategy

Developing a Website Content Strategy

Instructor: Shoshana Mayden, Dates: July 3-28, 2017
http://libraryjuiceacademy.com/049-content-strategy.php

Shoshana Mayden is a content strategist with the University of Arizona Libraries in Tucson, Arizona. She advises on issues related to website content and contributes to developing a content strategy across multiple channels for the library. She works with content managers and library stakeholders to re-write, re-think, and re-organize content for the main library website, as well as to develop workflows related to the lifecycle of content. She is also a copy editor for Weave: the Journal of Library User Experience.

++++++++++++++++++++++++

Information Architecture: Designing Navigation for Library Websites

Instructor: Laura-Edythe Coleman

http://libraryjuiceacademy.com/046-designing-navigation.php

Information architecture is an essential component of the user-centered design of information spaces, especially websites. Website navigation is a key design device that helps users search and browse library websites and information systems. The design of website navigation can be simple or complex, flat or deep. In all cases, website navigation should take into account information architecture (IA) best practices, common user tasks in the library domain, user research, analytics, and information-seeking models.

Laura-Edythe Coleman is a museum informaticist: her focus is the point of convergence of museums, information, people, and technology. Knowing that societies need museums for creating and sustaining cultural memory, she strives to help communities co-create heritage collections with museums. She holds a PhD in Information Science, a Master’s of Library and Information Science, and a Bachelor’s of Fine Arts. She can be reached via Twitter: @lauraedythe, website: http://www.lauraedythe.com, or by email: lauraedythecoleman@gmail.com

+++++++++++++++++++++++++

Content Strategy for Library Websites from Rebecca Blakiston
http://www.academia.edu/12370418/Content_Strategy_for_Library_Websites 
https://steadfastlibrarian.wordpress.com/2012/06/27/211/
https://connect.library.utoronto.ca/download/attachments/25199917/2013_-__-_DevelopingaContentStrategyforanAcademicLibraryWebs%5Bretrieved-2015-11-17%5D.pdf
https://www.researchgate.net/publication/271758861_Developing_a_Content_Strategy_for_an_Academic_Library_Website
+++++++++++++++++++++
Other resources: 
http://guiseppegetto.com/ux-content-strategy-library/

+++++++++++++++++
more on library websites in this IMS blog
https://www.lib.umich.edu/blog-tags/web-content-strategy
https://blog.stcloudstate.edu/ims?s=library+web+page

bibliometrics altmetrics

International Benchmarks for Academic Library Use of Bibliometrics & Altmetrics, 2016-17

ID: 3807768. Report, August 2016, 115 pages. Primary Research Group.

http://www.researchandmarkets.com/publication/min3qqb/3807768

The report gives detailed data on the use of various bibliometric and altmetric tools such as Google Scholar, Web of Science, Scimago, and Plum Analytics.

The survey covers 20 predominantly research universities in the USA, continental Europe, the UK, Canada, and Australia/New Zealand. Among the survey participants are Carnegie Mellon, Cambridge University, Universitat Politècnica de Catalunya, the University at Albany, the University of Melbourne, Florida State University, the University of Alberta, and Victoria University of Wellington.

– 50% of the institutions sampled help their researchers to obtain a Thomson Reuters ResearcherID.

ResearcherID provides a solution to the author ambiguity problem within the scholarly research community. Each member is assigned a unique identifier to enable researchers to manage their publication lists, track their times-cited counts and h-index, identify potential collaborators, and avoid author misidentification. In addition, your ResearcherID information integrates with the Web of Science and is ORCID compliant, allowing you to claim and showcase your publications from a single account. Search the registry to find collaborators, review publication lists, and explore how research is used around the world!
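For reference, the h-index that ResearcherID tracks is simple to compute: it is the largest h such that the author has h papers with at least h citations each. A minimal Python sketch (the citation counts are made up):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still supports a larger h
        else:
            break
    return h

# Six papers with made-up citation counts -> h-index of 4
print(h_index([10, 8, 5, 4, 3, 0]))  # prints 4
```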

– Just 5% of those surveyed use Facebook Insights in their altmetrics efforts.


++++++++++++++
more on altmetrics in this IMS blog
https://blog.stcloudstate.edu/ims?s=altmetrics

industry 4.0

A Strategist’s Guide to Industry 4.0. Global businesses are about to integrate their operations into a seamless digital whole, and thereby change the world.

https://www.strategy-business.com/article/A-Strategists-Guide-to-Industry-4.0
Industrial revolutions are momentous events. By most reckonings, there have been only three. The first was triggered in the 1700s by the commercial steam engine and the mechanical loom. The harnessing of electricity and mass production sparked the second, around the start of the 20th century. The computer set the third in motion after World War II.
Henning Kagermann, the head of the German National Academy of Science and Engineering (Acatech), did exactly that in 2011, when he used the term Industrie 4.0 to describe a proposed government-sponsored industrial initiative.
The term Industry 4.0 refers to the combination of several major innovations in digital technology.
These technologies include advanced robotics and artificial intelligence; sophisticated sensors; cloud computing; the Internet of Things; data capture and analytics; digital fabrication (including 3D printing); software-as-a-service and other new marketing models; smartphones and other mobile devices; platforms that use algorithms to direct motor vehicles (including navigation tools, ride-sharing apps, delivery and ride services, and autonomous vehicles); and the embedding of all these elements in an interoperable global value chain, shared by many companies from many countries.
Companies that embrace Industry 4.0 are beginning to track everything they produce from cradle to grave, sending out upgrades for complex products after they are sold (in the same way that software has come to be updated). These companies are learning mass customization: the ability to make products in batches of one as inexpensively as they could make a mass-produced product in the 20th century, while fully tailoring the product to the specifications of the purchaser.

[chart: adoption of Industry 4.0 by sector]

Three aspects of digitization form the heart of an Industry 4.0 approach.

• The full digitization of a company’s operations
• The redesign of products and services
• Closer interaction with customers

Making Industry 4.0 work requires major shifts in organizational practices and structures. These shifts include new forms of IT architecture and data management, new approaches to regulatory and tax compliance, new organizational structures, and — most importantly — a new digitally oriented culture, which must embrace data analytics as a core enterprise capability.

As Klaus Schwab put it in his recent book The Fourth Industrial Revolution (World Economic Forum, 2016), “Contrary to the previous industrial revolutions, this one is evolving at an exponential rather than linear pace.… It is not only changing the ‘what’ and the ‘how’ of doing things, but also ‘who’ we are.”

This great integrating force is gaining strength at a time of political fragmentation — when many governments are considering making international trade more difficult. It may indeed become harder to move people and products across some national borders. But Industry 4.0 could overcome those barriers by enabling companies to transfer just their intellectual property, including their software, while letting each nation maintain its own manufacturing networks.
+++++++++++++++++++++++++++
more on the Internet of Things in this IMS blog
https://blog.stcloudstate.edu/ims?s=internet+of+things

also Digital Learning

https://blog.stcloudstate.edu/ims/2017/03/28/digital-learning/

qualitative method research

Cohort 7


Qualitative Method Research

quote

Data treatment and analysis

Because the questionnaire data comprised both Likert scales and open questions, they were analyzed quantitatively and qualitatively. Textual data (open responses) were qualitatively analyzed by coding: each segment (e.g. a group of words) was assigned to a semantic reference category, as systematically and rigorously as possible. For example, “Using an iPad in class really motivates me to learn” was assigned to the category “positive impact on motivation.” The qualitative analysis was performed using an adapted version of the approaches developed by L’Écuyer (1990) and Huberman and Miles (1991, 1994). Thus, we adopted a content analysis approach using QDAMiner software, which is widely used in qualitative research (see Fielding, 2012; Karsenti, Komis, Depover, & Collin, 2011). For the quantitative analysis, we used SPSS 22.0 software to conduct descriptive and inferential statistics. We also conducted inferential statistics to further explore the iPad’s role in teaching and learning, along with its motivational effect. The results will be presented in a subsequent report (Fievez, & Karsenti, 2013)

Fievez, A., & Karsenti, T. (2013). The iPad in Education: uses, benefits and challenges. A survey of 6057 students and 302 teachers in Quebec, Canada (p. 51). Canada Research Chair in Technologies in Education. Retrieved from https://www.academia.edu/5366978/The_iPad_in_Education_uses_benefits_and_challenges._A_survey_of_6057_students_and_302_teachers_in_Quebec_Canada

unquote

The 20th-century practice of conducting qualitative research through an oral interview and then processing the results manually triggered, in the second half of the 20th century, [sometimes] condescending attitudes from researchers in the exact sciences.
The reason was the advent of computing power in the second half of the 20th century, which allowed the exact sciences to claim “scientific” and “data-based” results.
One statistical package, SPSS, is today widely known and considered a magnificent tool for building solid, statistically based argumentation, which further perpetuates the perceived superiority of quantitative over qualitative methods.
At the same time, qualitative researchers continue to lag behind, mostly due to the inertia of their approach to qualitative analysis, which continues to be done in the olden ways. While there is nothing wrong with the “olden” ways, harnessing computational power can streamline that process and even surface patterns that the “human eye” sometimes misses.
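To make that concrete, here is a minimal, hypothetical Python sketch of the kind of auto-coding step that packages such as QDA Miner or NVivo streamline at scale: each text segment is assigned to a semantic reference category by keyword matching. The codebook and responses are invented for illustration:

```python
# Hypothetical codebook mapping semantic reference categories to keywords.
CODEBOOK = {
    "positive impact on motivation": ["motivates", "engaging", "excited"],
    "technical difficulties": ["crashes", "slow", "freezes"],
}

def code_segment(segment: str) -> list[str]:
    """Assign a text segment to every category whose keywords it contains."""
    text = segment.lower()
    matches = [cat for cat, kws in CODEBOOK.items() if any(k in text for k in kws)]
    return matches or ["uncoded"]

responses = [
    "Using an iPad in class really motivates me to learn",
    "The app crashes every time I open the quiz",
]
for r in responses:
    print(code_segment(r), "<-", r)
```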
Below are some suggestions you may consider when you embark on the path of qualitative research.
The Use of Qualitative Content Analysis in Case Study Research
Florian Kohlbacher
http://www.qualitative-research.net/index.php/fqs/article/view/75/153

an excellent guide to the structure of qualitative research

Palys, T., & Atchison, C. (2012). Qualitative Research in the Digital Era: Obstacles and Opportunities. International Journal Of Qualitative Methods, 11(4), 352-367.
http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dkeh%26AN%3d89171709%26site%3dehost-live%26scope%3dsite
Palys and Atchison (2012) present a compelling case to bring your qualitative research to the level of the quantitative research by using modern tools for qualitative analysis.
1. The authors correctly promote NVivo as the “Jaguar” of qualitative research tools. Be aware, however, of the existence of other “Geo Metro” tools, which, for your research, might achieve the same result (see the bottom of this blog entry).
2. The authors promote a new approach to Chapter 2 of the doctoral dissertation, namely OCR-ing PDF articles (as of 2017, most of your literature is in PDF or another electronic text format) through applications such as:
ABBYY FineReader, https://www.abbyy.com/en-us/finereader/
OmniPage, http://www.nuance.com/for-individuals/by-product/omnipage/index.htm
Readiris, http://www.irislink.com/EN-US/c1462/Readiris-16-for-Windows—OCR-Software.aspx
The text from the articles is then processed through NVivo or related programs (see the bottom of this blog entry). As the authors propose: “This is immediately useful for literature review and proposal writing, and continues through the research design, data gathering, and analysis stages— where NVivo’s flexibility for many different sources of data (including audio, video, graphic, and text) are well known—of writing for publication” (p. 353).
In other words, you can try to wrap your head around a huge amount of textual information manually, but you can also approach the task in parallel by processing the same text with a tool.
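A minimal sketch of that OCR step, assuming the pdf2image and pytesseract Python packages (plus the poppler and tesseract system tools) are installed; the filenames are placeholders:

```python
from pdf2image import convert_from_path  # requires the poppler system library
import pytesseract                       # requires the tesseract binary

# Render each page of a (placeholder) scanned article to an image, OCR it,
# and save the recovered plain text for import into NVivo or a similar tool.
pages = convert_from_path("article.pdf")
text = "\n".join(pytesseract.image_to_string(page) for page in pages)

with open("article.txt", "w", encoding="utf-8") as out:
    out.write(text)
```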
 +++++++++++++++++++++++++++++
Here are some suggestions for Computer-Assisted/Aided Qualitative Data Analysis Software (CAQDAS), for both small and large community applications:

– RQDA (the small one): http://rqda.r-forge.r-project.org/ (see Metin Caliskan’s tutorials on YouTube); one active developer.
– GATE (the large one): http://gate.ac.uk/ | https://gate.ac.uk/download/

text mining: https://en.wikipedia.org/wiki/Text_mining
Text mining, also referred to as text data mining, roughly equivalent to text analytics, is the process of deriving high-quality information from text. High-quality information is typically derived through the devising of patterns and trends through means such as statistical pattern learning. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output.
https://ischool.syr.edu/infospace/2013/04/23/what-is-text-mining/
Qualitative data is descriptive data that cannot be measured in numbers and often includes qualities of appearance like color, texture, and textual description. Quantitative data is numerical, structured data that can be measured. However, there is often slippage between qualitative and quantitative categories. For example, a photograph might traditionally be considered “qualitative data,” but when you break it down to the level of pixels, it can be measured.
A word of caution: text mining doesn’t generate new facts and is not an end in and of itself. The process is most useful when the data it generates can be further analyzed by a domain expert, who can bring additional knowledge for a more complete picture. Still, text mining creates new relationships and hypotheses for experts to explore further.
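To ground the definition, here is a minimal Python sketch of the first “structuring the input text” step: tokenize a few documents, drop stopwords, and count terms to surface a crude pattern. Real pipelines (GATE, RapidMiner, tidytext) go much further; the sample documents and stopword list are invented:

```python
import re
from collections import Counter

# Invented sample documents and stopword list.
docs = [
    "Large-scale visualization platforms support collaboration and analytics.",
    "Learning analytics helps identify at-risk students.",
    "Altmetrics complement bibliometrics in research analytics.",
]
stopwords = {"and", "in", "the", "a", "of", "helps"}

# Structure the input: lowercase, tokenize, drop stopwords.
tokens = [t for d in docs for t in re.findall(r"[a-z][a-z\-]*", d.lower())
          if t not in stopwords]

# A crude first "pattern": the most frequent remaining terms.
print(Counter(tokens).most_common(3))  # e.g., [('analytics', 3), ...]
```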

quick and easy:

intermediate:

advanced:

http://tidytextmining.com/

Introduction to GATE Developer  https://youtu.be/o5uhMF15vsA



use of RapidMiner:

https://rapidminer.com/pricing/

– Coding Analysis Toolkit (CAT) from University of Pittsburgh and University of Massachusetts
– Raven’s Eye is an online natural language analysis tool
– ATLAS.TI

– QDA Miner: http://provalisresearch.com/products/qualitative-data-analysis-software/

There is also a free version called QDA Miner Lite with limited functionalities: http://provalisresearch.com/products/qualitative-data-analysis-software/freeware/

– MAXQDA

–  NVivo

– SPSS Text Analytics

– Kwalitan

– Transana (include video transcribing capability)

– XSight

– NUD*IST: https://www.qsrinternational.com/

(Cited from: https://www.researchgate.net/post/Are_there_any_open-source_alternatives_to_Nvivo [accessed Apr 1, 2017].)

– OdinText

– IBM Watson Conversation
– IBM Watson Text to Speech
– Google Translate API
– MeTA
– LingPipe
– NLP4J
– Timbl
– Colibri Core
– CRF++
– Frog
– Ucto
– CRFsuite
– FoLiA
– PyNLPl
– openNLP
– NLP Compromise
– MALLET
Cited from: https://www.g2crowd.com/products/nvivo/competitors/alternatives [accessed April 1, 2017].
+++++++++++++++++++++++++
Christine A. Barry (1998) ‘Choosing Qualitative Data Analysis Software: Atlas/ti and Nudist Compared’, Sociological Research Online, vol. 3, no. 3, http://www.socresonline.org.uk/3/3/4.html

Pros and Cons of Computer Assisted Qualitative Data Analysis Software

+++++++++++++++++++++++++
more on quantitative research:

Asamoah, D. A., Sharda, R., Hassan Zadeh, A., & Kalgotra, P. (2017). Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course. Decision Sciences Journal of Innovative Education, 15(2), 161–190. https://doi.org/10.1111/dsji.12125
++++++++++++++++++++++++
literature on quantitative research:
Borgman, C. L. (2015). Big Data, Little Data, No Data: Scholarship in the Networked World. MIT Press. https://mplus.mnpals.net/vufind/Record/ebr4_1006438
St. Cloud State University MC Main Collection – 2nd floor AZ195 .B66 2015
p. 161 Data scholarship in the Humanities
p. 166 When Are Data?
Philip Chen, C. L., & Zhang, C.-Y. (2014). Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Information Sciences, 275(Supplement C), 314–347. https://doi.org/10.1016/j.ins.2014.01.015

digital badges and micro credentials

per Tom Hergert (thank you)

AECT-OTP Webinar: Digital Badges and Micro-Credentials for the Workplace

Time: Mar 27, 2017 1:00 PM Central Time (US and Canada)

Learn how to implement digital badges in learning environments. Digital badges and micro-credentials offer an entirely new way of recognizing achievements, knowledge, skills, experiences, and competencies that can be earned in formal and informal learning environments. They are an opportunity to recognize such achievements through credible organizations that can be integrated in traditional educational programs but can also represent experience in informal contexts or community engagement.  Three guiding questions will be discussed in this webinar: (1) digital badges’ impact on learning and assessment, (2) digital badges within instructional design and technological frameworks, and (3) the importance of stakeholders for the implementation of digital badges.

Dirk Ifenthaler is Professor and Chair of Learning, Design and Technology at the University of Mannheim, Germany, and Adjunct Professor at Curtin University, Australia. His previous roles include Professor and Director, Centre for Research in Digital Learning at Deakin University, Australia, Manager of Applied Research and Learning Analytics at Open Universities, Australia, and Professor for Applied Teaching and Learning Research at the University of Potsdam, Germany. He was a 2012 Fulbright Scholar-in-Residence at the Jeannine Rainbolt College of Education at the University of Oklahoma, USA.

Directions to connect via Zoom Meeting:
Join from PC, Mac, Linux, iOS or Android: https://zoom.us/j/8128701328
Or iPhone one-tap (US Toll):  +14086380968,8128701328# or +16465588656,8128701328#
Or Telephone:
Dial: +1 408 638 0968 (US Toll) or +1 646 558 8656 (US Toll)
Meeting ID: 812 870 1328
International numbers available: https://zoom.us/zoomconference?m=EedT5hShl1ELe6DRYI58-DeQm_hO10Cp

+++++++++++++++++++++++++++++
Notes from the webinar
http://www.springer.com/education+%26+language/learning+%26+instruction/journal/10758

Technology, Knowledge and Learning

 and

14th International Conference on Cognition and Exploratory Learning in Digital Age (CELDA 2017), 18–20 October, Vilamoura, Algarve, Portugal

http://celda-conf.org/

learning is a process, not a product.

Each student learns differently and assessment is not linear. Learning for different students can be a longer or shorter path.

[slide: representation graph]

assessment comes before badges

what are credentials:
how well I can show my credentials: can I find them, can I translate them? Key elements: issuer, earner, achievement description, date issued.

Digital badges have the potential to become an alternative credentialing system, linking directly via metadata to validating evidence of educational achievements.
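As an illustration, a badge’s metadata might look roughly like the following Python dictionary, loosely modeled on the Open Badges vocabulary; all names, addresses, and URLs are hypothetical:

```python
# All names, addresses, and URLs below are hypothetical.
badge_assertion = {
    "issuer": {
        "name": "Example University Library",
        "url": "https://library.example.edu",
    },
    "earner": "student@example.edu",                  # who earned it
    "achievement": {
        "name": "Data Visualization Fundamentals",
        "description": "Built and presented an interactive visualization.",
    },
    "issuedOn": "2017-03-27",                         # date issued
    "evidence": "https://portfolio.example.edu/viz-final",  # validating evidence
}
```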

A digital badge is not an assessment; it is the ability to demonstrate the assessment.
Badges are a motivational mechanism, supporting alternative forms of assessment, a way to credential learning, a means of charting learning pathways, and a support for self-reflection and planning.
