Searching for "quantitative"

netnography

Xu Zhang. (2017). The Quality of Virtual Communities: A Case Study of Chinese Overseas Students in WeChat Groups. Global Studies Journal, 10(3), 19–26. https://doi.org/10.18848/1835-4432/CGP/v10i03/19-26
pp. 23–24.
“Netnography” has been developed for online community researchers. The term combines “net” and “ethnography”: it builds on traditional ethnography and adds qualitative analysis of the interactive content that virtual community members produce online. The aim of netnographic research is to study the subculture, interactive processes, and collective behaviors of online communities (Kozinets 2009). Following the development of Internet technology, the web-based method has become more convenient and cost-effective for data collection. Members of virtual groups create a large volume of interactive texts, pictures, network expressions, and other original information over time, which provides an extremely rich database for researchers. Moreover, from the data collection point of view, this online observation method does not interfere with the research process, an advantage over questionnaires and quantitative modeling (Moisander and Valtonen 2006). Additionally, Kozinets (2009) pointed out that netnography emphasizes the research context: observers focus not only on the text of communications but also on the characteristics of language, history, meaning, and communication types, and even parse fonts, symbols, images, and photo data. Such content is significant in social communication and is called a “cultural artifact.” On the other hand, netnography is based on traditional ethnography as a methodology, and therefore inherits the research processes of the ethnographic method. Kozinets (2009) reinterpreted these procedures for netnography as follows: first, determine the research target and understand its cultural characteristics; second, collect and analyze information; third, ensure the credibility of interpretation; fourth, pay attention to research ethics; lastly, obtain respondents’ feedback. To adapt my research to these guidelines, I organized my research process as: 1. to target Plymouth Chinese overseas students and to explain Chinese guanxi; 2. to collect and analyze data through the existing WeChat group created by the Plymouth Chinese Students and Scholars Association (CSSA); 3. to confirm the identity of key influencers in this virtual group; 4. to obtain as much feedback from respondents as possible.
https://en.wikipedia.org/wiki/Netnography

What is Netnography from Harrison Hayes, LLC
https://nsuworks.nova.edu/tqr/vol15/iss5/13/

suggestions for academic writing

these are suggestions from Google Groups with doctoral cohorts 6, 7, 8, 9 from the Ed leadership program

How to find a book from InterLibrary Loan: find book ILL

Citing someone else’s citation?:

http://library.northampton.ac.uk/liberation/ref/adv_harvard_else.php

http://guides.is.uwa.edu.au/c.php?g=380288&p=3109460
use them sparingly:
http://www.apastyle.org/learn/faqs/cite-another-source.aspx
Please take a look at “Paraphrasing Sources” in
http://www.roanestate.edu/owl/usingsources_mla.html
It gives you a good idea of how paraphrasing will distance you from the possibility of plagiarizing.
An example of resolution in this peer-reviewed journal article:
https://doi.org/10.19173/irrodl.v17i5.2566
Ungerer, L. M. (2016). Digital Curation as a Core Competency in Current Learning and Literacy: A Higher Education Perspective. The International Review of Research in Open and Distributed Learning, 17(5). https://doi.org/10.19173/irrodl.v17i5.2566
Dunaway (2011) suggests that learning landscapes in a digital age are networked, social, and technological. Since people commonly create and share information by collecting, filtering, and customizing digital content, educators should provide students opportunities to master these skills (Mills, 2013). In enhancing critical thinking, we have to investigate pedagogical models that consider students’ digital realities (Mihailidis & Cohen, 2013). November (as cited in Sharma & Deschaine, 2016), however, warns that although the Web fulfils a pivotal role in societal media, students often are not guided on how to critically deal with the information that they access on the Web. Sharma and Deschaine (2016) further point out the potential for personalizing teaching and incorporating authentic material when educators themselves digitally curate resources by means of Web 2.0 tools.
p. 24. Communities of practice. Lave and Wenger’s (as cited in Weller, 2011) concept of situated learning and Wenger’s (as cited in Weller, 2011) idea of communities of practice highlight the importance of apprenticeship and the social role in learning.
criteria to publish a paper

Originality: Does the paper contain new and significant information adequate to justify publication?

Relationship to Literature: Does the paper demonstrate an adequate understanding of the relevant literature in the field and cite an appropriate range of literature sources? Is any significant work ignored?

Methodology: Is the paper’s argument built on an appropriate base of theory, concepts, or other ideas? Has the research or equivalent intellectual work on which the paper is based been well designed? Are the methods employed appropriate?

Results: Are results presented clearly and analyzed appropriately? Do the conclusions adequately tie together the other elements of the paper?

Implications for research, practice and/or society: Does the paper identify clearly any implications for research, practice and/or society? Does the paper bridge the gap between theory and practice? How can the research be used in practice (economic and commercial impact), in teaching, to influence public policy, in research (contributing to the body of knowledge)? What is the impact upon society (influencing public attitudes, affecting quality of life)? Are these implications consistent with the findings and conclusions of the paper?

Quality of Communication: Does the paper clearly express its case, measured against the technical language of the field and the expected knowledge of the journal’s readership? Has attention been paid to the clarity of expression and readability, such as sentence structure, jargon use, acronyms, etc.?

mixed method research

http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3deric%26AN%3dEJ971947%26site%3dehost-live%26scope%3dsite

Stanton, K. V., & Liew, C. L. (2011). Open Access Theses in Institutional Repositories: An Exploratory Study of the Perceptions of Doctoral Students. Information Research: An International Electronic Journal, 16(4).

We examine doctoral students’ awareness of and attitudes to open access forms of publication. Levels of awareness of open access and the concept of institutional repositories, publishing behaviour, and perceptions of benefits and risks of open access publishing were explored. Method: Qualitative and quantitative data were collected through interviews with eight doctoral students enrolled in a range of disciplines in a New Zealand university and a self-completion Web survey of 251 students. Analysis: Interview data were analysed thematically, then evaluated against a theoretical framework. The interview data were then used to inform the design of the survey tool. Survey responses were analysed as a single set, then by discipline, using SurveyMonkey’s online toolkit and Excel. Results: While awareness of open access and repository archiving is still low, the majority of interview and survey respondents were found to be supportive of the concept of open access. The perceived benefits of enhanced exposure and potential for sharing outweigh the perceived risks. The majority of respondents were supportive of an existing mandatory thesis submission policy. Conclusions: Low levels of awareness of the university repository remain an issue, and could be addressed by further investigating the effectiveness of different communication channels for promotion.

PLEASE NOTE:

the researchers first use the qualitative approach: by interviewing participants and analyzing their responses thematically, they build the survey.
Then they administer the survey (the quantitative approach).

How do you intend to use a mixed method? Please share

paraphrasing quotes

statement of the problem

Problem statement – Wikipedia

 
Metaphors: A Problem Statement is like… 
metaphor — a novel or poetic linguistic expression where one or more words for a concept are used outside their normal conventional meaning to express a similar concept (Aristotle)

The DNA of the research
A snapshot of the research
The foundation of the research
The heart of the research
A “taste” of the research
A blueprint for the study
Here is a good exercise for your writing of the problem statement:
Chapter 3
several documents, which can be helpful in two different ways:
– check your structure and methodology
– borrow verbiage
http://education.nova.edu/Resources/uploads/app/35/files/arc_doc/writing_chpt3_quantitative_research_methods.pdf 
http://education.nova.edu/Resources/uploads/app/35/files/arc_doc/writing_chpt3_qualitative_research_methods.pdf
http://www.trinitydc.edu/sps/files/2010/09/APA-6-BGS-Quantitative-Research-Paper-August-2014.pdf

digital object identifier, or DOI

A digital object identifier (DOI) is a unique alphanumeric string assigned by a registration agency (the International DOI Foundation) to identify content and provide a persistent link to its location on the Internet. The publisher assigns a DOI when your article is published and made available electronically.

Why do we need it?

2010 changes to APA for electronic materials: digital object identifier (DOI). If a DOI is available, you no longer include a URL. Example: Author, A. A. (date). Title of article. Title of Journal, volume(number), page numbers. doi:xx.xxxxxxx

http://www.stcloudstate.edu/writeplace/_files/documents/working-with-sources/apa-electronic-material-citations.pdf
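Because DOIs share a predictable shape (the prefix 10., a registrant number, a slash, and a suffix), they can be pulled out of citation text programmatically. Here is a minimal Python sketch using a heuristic pattern based on Crossref’s recommended regular expression; real-world DOIs can be messier, so treat it as an approximation:

```python
import re

# Heuristic DOI pattern (Crossref-style): "10." + 4-9 digit registrant
# code, a slash, then the suffix characters commonly allowed in DOIs.
DOI_PATTERN = re.compile(r'\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+\b')

text = ("Ungerer, L. M. (2016). Digital curation as a core competency. "
        "https://doi.org/10.19173/irrodl.v17i5.2566")

match = DOI_PATTERN.search(text)
print(match.group())  # 10.19173/irrodl.v17i5.2566
```

This is handy for checking whether each entry in a reference list already carries a DOI before hunting one down manually.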

Mendeley (vs Zotero and/or RefWorks)

https://www.brighttalk.com/webcast/11355/226845?utm_campaign=Mendeley%20Webinars%202&utm_campaignPK=271205324&utm_term=OP28019&utm_content=271205712&utm_source=99&BID=799935188&utm_medium=email&SIS_ID=46360

Online Writing Tools: Four Online Tools for Writing

social media and altmetrics

According to Sugimoto et al. (2016), the use of social media platforms by researchers is high, ranging from 75 to 80% in large-scale surveys (Rowlands et al., 2011; Tenopir et al., 2013; Van Eperen & Marincola, 2011).
There is one more reason: as much as you may want to dwell on the fact that you are practitioners and research is not the most important part of your job, to a great degree you may also be judged by the scientific output of your office and/or institution.
In that sense, both social media and altmetrics might suddenly become extremely important to understand and apply.
In short, altmetrics (alternative metrics) measure the impact your scientific output has on the community. You and your teachers present, publish, and create work which might not be formally presented and published, but which may be widely reflected through, e.g., social media, and thus have an impact on the community.
How such impact is measured, if measured at all, can greatly influence the money flow to your institution.
For more information, read the entire article:
Sugimoto, C. R., Work, S., Larivière, V., & Haustein, S. (2016). Scholarly use of social media and altmetrics: a review of the literature. Retrieved from https://arxiv.org/abs/1608.08112
related information:
In the comments section on this blog entry,
I left notes to
Thelwall, M., & Wilson, P. (2016). Mendeley readership altmetrics for medical articles: An analysis of 45 fields. Journal of the Association for Information Science and Technology, 67(8), 1962–1972. https://doi.org/10.1002/asi.23501
Todd Tetzlaff is using Mendeley and he might be the only one to benefit … 🙂
Here is some food for thought from the article above:
Doctoral students and junior researchers are the largest reader group in Mendeley (Haustein & Larivière, 2014; Jeng et al., 2015; Zahedi, Costas, & Wouters, 2014a).
Studies have also provided evidence of high rates of blogging among certain subpopulations: for example, approximately one-third of German university staff (Pscheida et al., 2013) and one-fifth of UK doctoral students use blogs (Carpenter et al., 2012).
Social data sharing platforms provide an infrastructure to share various types of scholarly objects—including datasets, software code, figures, presentation slides and videos—and for users to interact with these objects (e.g., comment on, favorite, like, and reuse). Platforms such as Figshare and SlideShare disseminate scholars’ various types of research outputs such as datasets, figures, infographics, documents, videos, posters, or presentation slides (Enis, 2013) and display views, likes, and shares by other users (Mas-Bleda et al., 2014).
Frequently mentioned social platforms in scholarly communication research include research-specific tools such as Mendeley, Zotero, CiteULike, BibSonomy, and Connotea (now defunct) as well as general tools such as Delicious and Digg (Hammond, Hannay, Lund, & Scott, 2005; Hull, Pettifer, & Kell, 2008; Priem & Hemminger, 2010; Reher & Haustein, 2010).
qualitative research
“The focus group interviews were analysed based on the principles of interpretative phenomenology”
 
1. What is interpretative phenomenology?
Here is an excellent article on ResearchGate:
 
https://www.researchgate.net/publication/263767248_A_practical_guide_to_using_Interpretative_Phenomenological_Analysis_in_qualitative_research_psychology
 
and a discussion from the psychologists regarding the weaknesses when using IPA (Interpretative phenomenological analysis)

https://thepsychologist.bps.org.uk/volume-24/edition-10/methods-interpretative-phenomenological-analysis

2. What is Constant Comparative Method?

http://www.qualres.org/HomeCons-3824.html

Nvivo shareware

https://blog.stcloudstate.edu/ims/2017/01/11/nvivo-shareware/

Qualitative and Quantitative research in lame terms
podcast:
https://itunes.apple.com/us/podcast/how-scientific-method-works/id278981407?i=1000331586170&mt=2
If you are not a podcast fan, I understand. The link above is a pain in the behind to make work if you are not familiar with using podcasts.
Here is an easier way to find it:
1. Open your cell phone and find the podcast icon, which is pre-installed but which you might not have ever used [yet].
2. In the app, use the search option and type “stuff you should know”.
3. The podcast will pop up. Scroll and find “How the Scientific Method Works,” or search for it within the show.
Once you can play it on the phone, you have to find time to listen to it.
I listen to podcasts when I have to do unpleasant chores, such as: 1. walking to work; 2. washing the dishes; 3. flying long hours (very rarely); 4. driving in the car.
There are a bunch of other situations when you may be strapped for time and, instead of feeling disgruntled and stressed, you can deliver mental [junk] food to your brain.
Earbuds help me: 1. forget the unpleasant task; 2. utilize time; 3. learn cool stuff.
Here are the podcasts I subscribe to, besides “Stuff You Should Know”:
TED Radio Hour
TED Talks Education
NPR Fresh Air
BBC History
and a bunch of others, which I erase if I have not listened for a year; and if, perusing the top chart, something piques my interest, I try it.
If I did not manage to convince you to try podcasts, that is totally fine; do not feel obligated.
However, this podcast, you can listen to on your computer, if you don’t want to download on your phone.
It is a one-hour show by two geeks who try to make funny (and they do) a dry matter such as quantitative vs. qualitative research, which you want to internalize:
1. Around minute 12, they talk about inductive versus deductive reasoning to introduce qualitative versus quantitative approaches. It is good to listen to their musings, since your dissertation goes through inductive and deductive processes, and understanding them can help you better control your dissertation writing.
2. Scientific method. Hypothesis etc (around min 17).
While this is an Ed.D. program, not a Ph.D., and we do not delve into the philosophy of science, dissertation theory, etc., the more you know about this process, the better control you have over your dissertation.
3. Methods and how you prove (Chapter 3) is discussed around min 35
4. dependent and independent variables and how do you do your research in general (min ~45)
In short: listen, and please do share your thoughts below. You do not have to be kind to this offering; actually, be as critical as possible, so you can help me decide whether I should offer it to the next cohort. Thank you in advance for your feedback.

coding ethics unpredictability

Franken-algorithms: the deadly consequences of unpredictable code

Thu 30 Aug 2018

https://www.theguardian.com/technology/2018/aug/29/coding-algorithms-frankenalgos-program-danger

Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again”, they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and inured to real time monitoring as they can be – are in principle predictable and interrogable, these ones are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call these “frankenalgos” – though Mary Shelley couldn’t have made this up.

Twenty years ago, George Dyson anticipated much of what is happening today in his classic book Darwin Among the Machines. The problem, he tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable – and that what is predictable can be controlled. Both assumptions turn out to be wrong. “It’s proceeding on its own, in little bits and pieces,” he says. “What I was obsessed with 20 years ago that has completely taken over the world today are multicellular, metazoan digital organisms, the same way we see in biology, where you have all these pieces of code running on people’s iPhones, and collectively it acts like one multicellular organism. “There’s this old law called Ashby’s law that says a control system has to be as complex as the system it’s controlling, and we’re running into that at full speed now, with this huge push to build self-driving cars where the software has to have a complete model of everything, and almost by definition we’re not going to understand it. Because any model that we understand is gonna do the thing like run into a fire truck ’cause we forgot to put in the fire truck.”

Walsh believes this makes it more, not less, important that the public learn about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect. When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting: “I would suggest the problem is that algorithm now means any large, complex decision making software system and the larger environment in which it is embedded, which makes them even more unpredictable.” A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in tech, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs. Where there are choices to be made, that’s where ethics comes in.

Our existing system of tort law, which requires proof of intention or negligence, will need to be rethought. A dog is not held legally responsible for biting you; its owner might be, but only if the dog’s action is thought foreseeable.

model-based programming, in which machines do most of the coding work and are able to test as they go.

As we wait for a technological answer to the problem of soaring algorithmic entanglement, there are precautions we can take. Paul Wilmott, a British expert in quantitative analysis and vocal critic of high frequency trading on the stock market, wryly suggests “learning to shoot, make jam and knit.”

The venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine’s Hippocratic oath, to instruct computing professionals to do no harm and consider the wider impacts of their work.

+++++++++++
more on coding in this IMS blog
https://blog.stcloudstate.edu/ims?s=coding

Measuring Learning Outcomes of New Library Initiatives

International Conference on Qualitative and Quantitative Methods in Libraries 2018 (QQML2018)

conf@qqml.net

Where: Cultural Centre Of Chania
ΠΝΕΥΜΑΤΙΚΟ ΚΕΝΤΡΟ ΧΑΝΙΩΝ

https://goo.gl/maps/8KcyxTurBAL2

also live broadcast at https://www.facebook.com/InforMediaServices/videos/1542057332571425/

When: May 24, 12:30PM-2:30PM (local time; 4:30AM-6:30AM, Chicago Central)

Programme QQML2018-23pgopv

Live broadcasts from some of the sessions:

Here is a link to Sebastian Bock’s presentation:
https://drive.google.com/file/d/1jSOyNXQuqgGTrhHIapq0uxAXQAvkC6Qb/view

Information literacy skills and college students from Jade Geary

Session 1:
http://qqml.org/wp-content/uploads/2017/09/SESSION-Miltenoff.pdf

Session Title: Measuring Learning Outcomes of New Library Initiatives. Coordinator: Professor Plamen Miltenoff, Ph.D., MLIS, St. Cloud State University, USA. Contact: pmiltenoff@stcloudstate.edu. Scope & rationale: The advent of new technologies, such as virtual/augmented/mixed reality, and new pedagogical concepts, such as gaming and gamification, steers academic libraries into uncharted territories. There is not yet sufficient compiled research, and respectively proof, to justify financial and workforce investment in such endeavors. On the other hand, dwindling resources for education press administrations to demand justification for new endeavors. As has been established already, technology does not teach; teachers do, and a growing body of literature questions the impact of educational technology on educational outcomes. This session seeks to bring together presentations and discussion of both qualitative and quantitative research related to new pedagogical and technological endeavors in academic libraries as part of education on campus. By experimenting with new technologies such as Video 360 degrees and new pedagogical approaches such as gaming and gamification, does the library improve learning? By experimenting with new technologies and pedagogical approaches, does the library help campus faculty adopt these methods and improve their teaching? How can results be measured and demonstrated?

Conference program

http://qqml.org/wp-content/uploads/2017/09/7.5.2018-programme_final.pdf

More information and bibliography:

https://www.academia.edu/Documents/in/Videogame_and_Virtual_World_Technologies_Serious_Games_applications_in_Education_and_Training

https://www.academia.edu/Documents/in/Measurement_and_evaluation_in_education

Social Media:
https://www.facebook.com/QQML-International-Conference-575508262589919/


publish metrics ranking and citation info

EdTech Research – Where to Publish, How to Share (Part 2): Journal Metrics, Rankings and Citation Information

EdTech Research – Where to Publish, How to Share (Part 1): Journal Overview

electronic journals

International Review of Research in Open and Distributed Learning (IRRODL)

Publisher / Organization: Athabasca University Press

Year founded: 2000

Description: The International Review of Research in Open and Distributed Learning disseminates original research, theory, and best practice in open and distributed learning worldwide.

First Monday

Publisher / Organization: The University of Illinois at Chicago- University Library

Year founded: 1996

Description: First Monday is among the very first open access journals in the EdTech field. The journal’s subject matter encompasses the full range of Internet issues, including educational technologies, social media and web search. Contributors are urged via author guidelines to use simple explanations and less complex sentences and to be mindful that a large proportion of their readers are not part of academia and do not have English as a first language.

URL: http://firstmonday.org/

International Journal of Educational Technology in Higher Education(ETHE)

Publisher / Organization: Springer (from 2013)

Academic Management: University of Catalonia (UOC)

Year founded: 2004

Description: This journal aims to: provide a vehicle for scholarly presentation and exchange of information between professionals, researchers and practitioners in the technology-enhanced education field; contribute to the advancement of scientific knowledge regarding the use of technology and computers in higher education; and inform readers about the latest developments in the application of information technologies (ITs) in higher education learning, training, research and management.

URL: https://educationaltechnologyjournal.springeropen.com/

Online Learning (formerly JOLT / JALN)

Publisher / Organization: Online Learning Consortium

Year founded: 1997

Description: Online Learning promotes the development and dissemination of new knowledge at the intersection of pedagogy, emerging technology, policy, and practice in online environments. The journal has been published for over 20 years as the Journal of Asynchronous Learning Networks (JALN) and recently merged with the Journal of Online Learning and Teaching (JOLT).

URL: https://olj.onlinelearningconsortium.org/

Journal of Educational Technology & Society

Publisher / Organization: International Forum of Educational Technology & Society

Year founded:1998

Description: Educational Technology & Society seeks academic articles on the issues affecting the developers of educational systems and educators who implement and manage these systems. Articles should discuss the perspectives of both communities – the programmers and the instructors. The journal is currently still accepting submissions for ongoing special issues, but will cease publication in the future as the editors feel that the field of EdTech is saturated with high quality publications.

URL: http://www.ds.unipi.gr/et&s/index.php

Australasian Journal of Educational Technology

Publisher / Organization: Ascilite (Organization) & PKP Publishing Services Network

Year founded: 1985

Description: The Australasian Journal of Educational Technology aims to promote research and scholarship on the integration of technology in tertiary education, promote effective practice, and inform policy. The goal is to advance understanding of educational technology in post-school education settings, including higher and further education, lifelong learning, and training.

URL: https://ajet.org.au/index.php/AJET

Print Journals

The Internet and Higher Education

Publisher / Organization: Elsevier Ltd.

YEAR FOUNDED: 1998

DESCRIPTION: The Internet and Higher Education is devoted to addressing contemporary issues and future developments related to online learning, teaching, and administration on the Internet in post-secondary settings. Articles should significantly address innovative deployments of Internet technology in instruction and report on research to demonstrate the effects of information technology on instruction in various contexts in higher education.

URL: https://www.journals.elsevier.com/the-internet-and-higher-education

British Journal of Educational Technology

Publisher / Organization: British Educational Research Association (BERA)

YEAR FOUNDED: 1970

DESCRIPTION: The journal publishes theoretical perspectives, methodological developments and empirical research that demonstrate whether and how applications of instructional/educational technology systems, networks, tools and resources lead to improvements in formal and non-formal education at all levels, from early years through to higher, technical and vocational education, professional development and corporate training.

LINK: http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1467-8535

Computers & Education

Publisher / Organization: Elsevier Ltd.

Year founded: 1976

Description: Computers & Education aims to increase knowledge and understanding of ways in which digital technology can enhance education, through the publication of high quality research, which extends theory and practice.

URL: https://www.journals.elsevier.com/computers-and-education/

Tech Trends

Publisher / Organization: Springer US

Year founded: 1985

Description: TechTrends targets professionals in the educational communication and technology field. It provides a vehicle that fosters the exchange of important and current information among professional practitioners. Among the topics addressed are the management of media and programs, the application of educational technology principles and techniques to instructional programs, and corporate and military training.

URL: https://link.springer.com/journal/11528

International Journal on E-Learning (IJEL)

Year founded: 2002

Description: Advances in technology and the growth of e-learning provide educators and trainers with unique opportunities to enhance learning and teaching in corporate, government, healthcare, and higher education. IJEL serves as a forum to facilitate the international exchange of information on the current research, development, and practice of e-learning in these sectors.

Led by an Editorial Review Board of leaders in the field of e-Learning, the Journal is designed for the following audiences: researchers, developers, and practitioners in corporate, government, healthcare, and higher education. IJEL is a peer-reviewed journal.

URL: http://www.aace.org/pubs/ijel/

Journal of Computers in Mathematics and Science Teaching (JCMST)

Year founded: 1981

Description: JCMST is a highly respected scholarly journal which offers an in-depth forum for the interchange of information in the fields of science, mathematics, and computer science. JCMST is the only periodical devoted specifically to using information technology in the teaching of mathematics and science.

URL: https://www.aace.org/pubs/jcmst/

Just as researchers build a reputation over time that can be depicted (in part) through quantitative measures such as the h-index and i10-index, journals are also compared based on the number of citations they receive.
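For readers curious how these two author-level metrics are actually computed, here is a minimal Python sketch (the citation counts are invented for illustration): the h-index is the largest h such that at least h papers have h or more citations each, and the i10-index is simply the number of papers with at least 10 citations.

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank      # this paper still supports a higher h
        else:
            break         # remaining papers have too few citations
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)

# A hypothetical author with five papers:
papers = [10, 8, 5, 4, 3]
print(h_index(papers))    # 4
print(i10_index(papers))  # 1
```

The same counting logic underlies the journal-level comparisons discussed in Part 2 below, just aggregated over a journal’s articles instead of an author’s.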

Journal of Interactive Learning Research (JILR)

Year founded: 1997

Description: The Journal of Interactive Learning Research (JILR) publishes papers related to the underlying theory, design, implementation, effectiveness, and impact on education and training of the following interactive learning environments: authoring systems, cognitive tools for learning computer-assisted language learning computer-based assessment systems, computer-based training computer-mediated communications, computer-supported collaborative learning distributed learning environments, electronic performance support systems interactive learning environments, interactive multimedia systems interactive simulations and games, intelligent agents on the Internet intelligent tutoring systems, microworlds, virtual reality based learning systems.

URL: http://learntechlib.org/j/JILR/

Journal of Educational Multimedia and Hypermedia (JEMH)

Year founded: 1996

Description: JEMH is designed to provide a multi-disciplinary forum to present and discuss research, development and applications of multimedia and hypermedia in education. It contributes to the advancement of the theory and practice of learning and teaching in environments that integrate images, sound, text, and data.

URL: https://www.aace.org/pubs/jemh/

Journal of Technology and Teacher Education (JTATE)

Publisher / Organization: Society for Information Technology and Teacher Education (SITE)

Year founded: 1997

Description: JTATE serves as a forum for the exchange of knowledge about the use of information technology in teacher education. Journal content covers preservice and inservice teacher education, graduate programs in areas such as curriculum and instruction, educational administration, staff development, instructional technology, and educational computing.

URL: https://www.aace.org/pubs/jtate/

Journal on Online Learning Research (JOLR)

Publisher / Organization: Association for the Advancement of Computing in Education (AACE)

Year founded: 2015

Description: The Journal of Online Learning Research (JOLR) is a peer-reviewed, international journal devoted to the theoretical, empirical, and pragmatic understanding of technologies and their impact on pedagogy and policy in primary and secondary (K-12) online and blended environments. JOLR focuses on publishing manuscripts that address online learning, catering particularly to educators who research, practice, design, and/or administer in primary and secondary schooling in online settings. However, the journal also serves those educators who have chosen to blend online learning tools and strategies in their face-to-face classrooms.

URL: https://www.aace.org/pubs/jolr/

 

++++++++++++++
part 2

The most commonly used index to measure the relative importance of journals is the annual Journal Citation Reports (JCR). This report is published by Clarivate Analytics (previously Thomson Reuters).

SCImago

SCImago Journal Rank (SJR indicator) measures the influence of journals based on the number of citations the articles in the journal receive and the importance or prestige of the journals where such citations come from. The SJR indicator is a free journal metric which uses an algorithm similar to PageRank and provides an open access alternative to the journal impact factor in the Web of Science Journal Citation Report. The portal draws from the information contained in the Scopus database (Elsevier B.V.).
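For intuition only, here is a minimal power-iteration sketch of a PageRank-style prestige score; the real SJR computation adds refinements this toy omits (a three-year citation window, normalization by article counts, handling of self-citations), so treat it as an illustration of the idea, not the SJR formula.

```python
def prestige(links, damping=0.85, iters=50):
    """links[j] lists the journals that journal j cites.
    Returns one prestige score per journal: citations arriving
    from high-prestige journals count for more."""
    n = len(links)
    score = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - damping) / n] * n  # baseline "teleport" share
        for j, targets in enumerate(links):
            if targets:
                share = damping * score[j] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # journal that cites nothing: spread its weight evenly
                for t in range(n):
                    new[t] += damping * score[j] / n
        score = new
    return score

# Toy network of 3 journals: journals 1 and 2 both cite journal 0,
# so journal 0 ends up with the highest prestige.
scores = prestige([[1], [0], [0]])
```

The scores sum to 1, so each can be read as a share of the total prestige circulating in the citation network.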

Google Scholar Journal Rank

Introduced by Google in 2004, Google Scholar is a freely accessible search engine that indexes the full text or metadata of scholarly publications across an array of publishing formats and disciplines.

Scopus Journal Metrics

Introduced by Elsevier in 2004, Scopus is an abstract and citation database that covers nearly 18,000 titles from more than 5,000 publishers. It offers journal metrics that go beyond just journals to include most serial titles, including supplements, special issues and conference proceedings. Scopus offers useful information such as the total number of citations, the total number of articles published, and the percent of articles cited.

Anne-Wil Harzing:

“Citations are not just a reflection of the impact that a particular piece of academic work has generated. Citations can be used to tell stories about academics, journals and fields of research, but they can also be used to distort stories.”

Harzing, A.-W. (2013). The publish or perish book: Your guide to effective and responsible citation analysis. http://harzing.com/popbook/index.htm

ResearchGate

ResearchGate is a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators. The community was founded in May 2008. Today it has over 14 million members.

Google Scholar

Google Scholar allows users to search for digital or physical copies of articles, whether online or in libraries. It indexes “full-text journal articles, technical reports, preprints, theses, books, and other documents, including selected Web pages that are deemed to be ‘scholarly.’” It comprises an estimated 160 million documents.

Academia.edu

Academia.edu is a social-networking platform for academics to share research papers. You can upload your own work and follow the updates of your peers. Founded in 2008, the network currently has 59 million users and some 20 million uploaded documents.

ORCID

ORCID (Open Researcher and Contributor ID) is a nonproprietary alphanumeric code that uniquely identifies scientific and other academic authors and contributors. It provides a persistent identity for people, similar to the digital object identifiers (DOIs) that identify content-related entities on digital networks. The organization offers an open and independent registry intended to be the de facto standard for contributor identification in research and academic publishing.

SCOPUS

The Scopus Author Identifier assigns a unique number to groups of documents written by the same author via an algorithm that matches authorship based on certain criteria. If a document cannot be confidently matched with an author identifier, it is grouped separately. In this case, you may see more than one entry for the same author.

 

+++++++++++++++++
more on metrics in this IMS blog

https://blog.stcloudstate.edu/ims?s=metrics

digital humanities

7 Things You Should Know About Digital Humanities

Published:   Briefs, Case Studies, Papers, Reports  

https://library.educause.edu/resources/2017/11/7-things-you-should-know-about-digital-humanities

Lippincott, J., Spiro, L., Rugg, A., Sipher, J., & Well, C. (2017). Seven Things You Should Know About Digital Humanities (ELI 7 Things You Should Know). Retrieved from https://library.educause.edu/~/media/files/library/2017/11/eli7150.pdf

definition

The term “digital humanities” can refer to research and instruction that is about information technology or that uses IT. By applying technologies in new ways, the tools and methodologies of digital humanities open new avenues of inquiry and scholarly production. Digital humanities applies computational capabilities to humanistic questions, offering new pathways for scholars to conduct research and to create and publish scholarship. Digital humanities provides promising new channels for learners and will continue to influence the ways in which we think about and evolve technology toward better and more humanistic ends.

As defined by Johanna Drucker and colleagues at UCLA, the digital humanities is “work at the intersection of digital technology and humanities disciplines.” An EDUCAUSE/CNI working group framed the digital humanities as “the application and/or development of digital tools and resources to enable researchers to address questions and perform new types of analyses in the humanities disciplines,” and the NEH Office of Digital Humanities says digital humanities projects “explore how to harness new technology for humanities research as well as those that study digital culture from a humanistic perspective.” Beyond blending the digital with the humanities, there is an intentionality about combining the two that defines the field.

digital humanities can include

  • creating digital texts or data sets;
  • cleaning, organizing, and tagging those data sets;
  • applying computer-based methodologies to analyze them;
  • and making claims and creating visualizations that explain new findings from those analyses.

Scholars might reflect on

  • how the digital form of the data is organized,
  • how analysis is conducted/reproduced, and
  • how claims visualized in digital form may embody assumptions or biases.

Digital humanities can enrich pedagogy as well, such as when a student uses visualized data to study voter patterns or conducts data-driven analyses of works of literature.

Digital humanities usually involves work by teams in collaborative spaces or centers. Team members might include

  • researchers and faculty from multiple disciplines,
  • graduate students,
  • librarians,
  • instructional technologists,
  • data scientists and preservation experts,
  • technologists with expertise in critical computing and computing methods, and
  • undergraduates.

projects:

downsides

  • While some disciplinary associations, including the Modern Language Association and the American Historical Association, have developed guidelines for evaluating digital projects, many institutions have yet to define how work in digital humanities fits into considerations for tenure and promotion.
  • Because large projects are often developed with external funding that is not readily replaced by institutional funds when the grant ends, sustainability is a concern. Doing digital humanities well requires access to expertise in methodologies and tools such as GIS, modeling, programming, and data visualization that can be expensive for a single institution to obtain.
  • Resistance to learning new technologies can be another roadblock, as can the propensity of many humanists to resist working in teams. While some institutions have recognized the need for institutional infrastructure (computation and storage, equipment, software, and expertise), many have not yet incorporated such support into ongoing budgets.

Digital humanities offers opportunities for undergraduate involvement in research, providing students with workplace skills such as data management, visualization, coding, and modeling. Digital humanities provides new insights into policy-making in areas such as social media and demographics, and new means of engaging with popular culture and understanding past cultures. Evolution in this area will continue to build connections between the humanities and other disciplines, cross-pollinating research and education in areas like medicine and environmental studies. Insights about digital humanities itself will drive innovation in pedagogy and expand our conceptualization of classrooms and labs.

++++++++++++
more on digital humanities in this IMS blog
https://blog.stcloudstate.edu/ims?s=digital+humanities

Cohort 8 research and write dissertation

When writing your dissertation…

Please have an FAQ-kind of list of the Google Group postings regarding resources and information on research and writing of Chapter 2

digital resource sets available through MnPALS Plus

https://blog.stcloudstate.edu/ims/2017/10/21/digital-resource-sets-available-through-mnpals-plus/ 

+++++++++++++++++++++++++

[how to] write chapter 2

You were reminded to look at dissertations of your peers from previous cohorts and use their dissertations as a “template”: http://repository.stcloudstate.edu/do/discipline_browser/articles?discipline_key=1230

You also were reminded to use the documents in Google Drive: e.g. https://drive.google.com/open?id=0B7IvS0UYhpxFVTNyRUFtNl93blE

Please have also materials, which might help you organize your thoughts and expedite your Chapter 2 writing….

Do you agree with (did you use) the following observations:

The purpose of the review of the literature is to prove that no one has studied the gap in the knowledge outlined in Chapter 1. The subjects in the Review of Literature should have been introduced in the Background of the Problem in Chapter 1. Chapter 2 is not a textbook of subject matter loosely related to the subject of the study.  Every research study that is mentioned should in some way bear upon the gap in the knowledge, and each study that is mentioned should end with the comment that the study did not collect data about the specific gap in the knowledge of the study as outlined in Chapter 1.

The review should be laid out in major sections introduced by organizational generalizations. An organizational generalization can be a subheading so long as the last sentence of the previous section introduces the reader to what the next section will contain.  The purpose of this chapter is to cite major conclusions, findings, and methodological issues related to the gap in the knowledge from Chapter 1. It is written for knowledgeable peers from easily retrievable sources of the most recent issue possible.

Empirical literature published within the previous 5 years or less is reviewed to prove no mention of the specific gap in the knowledge that is the subject of the dissertation is in the body of knowledge. Common sense should prevail. Often, to provide a history of the research, it is necessary to cite studies older than 5 years. The object is to acquaint the reader with existing studies relative to the gap in the knowledge and describe who has done the work, when and where the research was completed, and what approaches were used for the methodology, instrumentation, statistical analyses, or all of these subjects.

If very little literature exists, the wise student will write, in effect, a several-paragraph book report by citing the purpose of the study, the methodology, the findings, and the conclusions.  If there is an abundance of studies, cite only the most recent studies.  Firmly establish the need for the study.  Defend the methods and procedures by pointing out other relevant studies that implemented similar methodologies. It should be frequently pointed out to the reader why a particular study did not match the exact purpose of the dissertation.

The Review of Literature ends with a Conclusion that clearly states that, based on the review of the literature, the gap in the knowledge that is the subject of the study has not been studied.  Remember that a “summary” is different from a “conclusion.”  A Summary, the final main section, introduces the next chapter.

from http://dissertationwriting.com/wp/writing-literature-review/

Here is the template from a different school (than SCSU):

http://semo.edu/education/images/EduLead_DissertGuide_2007.pdf 

+++++++++++++++++

When conducting qualitative research, how many people should be interviewed? Is there a minimum or a maximum?

Here is my take on it:

Simple question, not so simple answer.

It depends.

Generally, the number of respondents depends on the type of qualitative inquiry: case study methodology, phenomenological study, ethnographic study, or ethnomethodology. However, a rule of thumb is for scholars to reach the saturation point: the point at which no fresh information is uncovered in response to an issue that is of interest to the researcher.

If your qualitative method is designed to meet rigor and trustworthiness, thick, rich data is important. To achieve these principles you would need at least 12 interviews, ensuring your participants are the holders of knowledge in the area you intend to investigate. In grounded theory you could start with 12 and interview more if your data is not rich enough.

In IPA (interpretative phenomenological analysis) the norm tends to be 6 interviews.

You may check the sample sizes in peer-reviewed qualitative publications in your field to find out about popular practice. It all depends on the research problem, the choice of a specific qualitative approach, and the theoretical framework, so the answer to your question will vary from a few to a few dozen.

How many interviews are needed in a qualitative research?

There are different views in the literature, and no one agrees on an exact number. Here I review some of the most cited references. Based on Creswell (2014), it is estimated that 16 participants will provide rich and detailed data. A couple of researchers agree that 10–15 in-depth interviews are sufficient (Guest, Bunce & Johnson 2006; Baker & Edwards 2012).

Your methodological choices need to reflect your ontological position and understanding of knowledge production, and that’s also where you can argue a strong case for smaller qualitative studies. This is not only a problem for certain subjects; it’s a problem in certain departments or journals across the board of social science research, as it’s a question of academic culture.

here more serious literature and research (in case you need to cite in Chapter 3)

Sample Size and Saturation in PhD Studies Using Qualitative Interviews

http://www.qualitative-research.net/index.php/fqs/article/view/1428/3027

https://researcholic.wordpress.com/2015/03/20/sample_size_interviews/

Gaskell, George (2000). Individual and group interviewing. In Martin W. Bauer & George Gaskell (Eds.), Qualitative researching with text, image and sound: A practical handbook (pp. 38-56). London: SAGE Publications.

Lieberson, Stanley (1991). Small N’s and big conclusions. Social Forces, 70, 307-320. (http://www.jstor.org/pss/2580241)

Savolainen, Jukka (1994). The rationality of drawing big conclusions based on small samples. Social Forces, 72, 1217-1224. (http://www.jstor.org/pss/2580299)

Small, M. (2009). How many cases do I need? On science and the logic of case selection in field-based research. Ethnography, 10(1), 5-38.

Williams, M. (2000). Interpretivism and generalisation. Sociology, 34(2), 209-224.

http://james-ramsden.com/semi-structured-interviews-how-many-interviews-is-enough/

+++++++++++++++++

how to start your writing process

If you are a Pinterest user, you are welcome to just subscribe to the board:

https://www.pinterest.com/aidedza/doctoral-cohort/

otherwise, I am mirroring the information also in the IMS blog:

https://blog.stcloudstate.edu/ims/2017/08/13/analytical-essay/ 

+++++++++++++++++++++++++++

APA citing of “unusual” resources

https://blog.stcloudstate.edu/ims/2017/08/06/apa-citation/

+++++++++++++++++++++++

statistical modeling: your guide to Chapter 3

working on your dissertation, namely Chapter 3, you probably are consulting with the materials in this shared folder:

https://drive.google.com/drive/folders/0B7IvS0UYhpxFVTNyRUFtNl93blE?usp=sharing

In it, there is a subfolder, called “stats related materials”
https://drive.google.com/open?id=0B7IvS0UYhpxFcVg3aWxCX0RVams

where you have several documents from the Graduate school and myself to start building your understanding and vocabulary regarding your quantitative, qualitative or mixed method research.

It has been agreed that before you go to the Statistical Center (Randy Kolb), it is wise to be prepared and understand the terminology as well as the basics of the research methods.

Please have an additional list of materials available through the SCSU library and the Internet. They can help you further with building a robust foundation to lead your research:

https://blog.stcloudstate.edu/ims/2017/07/10/intro-to-stat-modeling/

In this blog entry, I shared with you:

  1. Books on intro to stat modeling available at the library. I understand the major pain that borrowing books from the SCSU library can constitute, but you can use the titles and the authors to check whether you can borrow them from your local public library
  2. I also sought and shared with you “visual” explanations of the basics terms and concepts. Once you start looking at those, you should be able to further research (e.g. YouTube) and find suitable sources for your learning style.

I (and the future cohorts) will deeply appreciate it if you remember to share those “suitable sources for your learning style,” either in this Google Group thread and/or in the comments section of the blog entry: https://blog.stcloudstate.edu/ims/2017/07/10/intro-to-stat-modeling.  Your Facebook group page is also a good place to discuss among yourselves best practices to learn and use research methods for your Chapter 3.

++++++++++++++++
search for sources

Google just posted on their Facebook profile a nifty short video on Google Search
https://blog.stcloudstate.edu/ims/2017/06/26/google-search/

Watching the video, you may remember the same #BooleanSearch techniques from our BI (bibliography instruction) session of last semester.

Considering the preponderance of information in 2017: your Chapter 2 is NOT ONLY about finding information regarding your topic.
Your Chapter 2 is about proving your extensive research of the existing literature.

The techniques presented in the short video will arm you with methods to dig deeper and look further.

If you would like to do a decent job exploring all corners of the vast area called Internet, please consider other search engines similar to Google Scholar:

Semantic Scholar (from the Allen Institute for AI); Microsoft Academic Search; Academicindex.net; ProQuest Dialog; Quetzal; arXiv;

https://www.google.com/; https://scholar.google.com/ (3 min); http://academic.research.microsoft.com/; http://www.dialog.com/; http://www.quetzal-search.info; http://www.arXiv.org; http://www.journalogy.com/
More about such search engines in the following blog entries:

https://blog.stcloudstate.edu/ims/2017/01/19/digital-literacy-for-glst-495/

and

https://blog.stcloudstate.edu/ims/2017/05/01/history-becker/

Let me know if more info is needed and/or you need help embarking on the “deep” search

+++++++++++++++++

tips for writing and proofreading

please have several infographics to help you with your writing habits (organization) and proofreading, posted in the IMS blog:

https://blog.stcloudstate.edu/ims/2017/06/11/writing-first-draft/
https://blog.stcloudstate.edu/ims/2017/06/11/prewriting-strategies/ 

https://blog.stcloudstate.edu/ims/2017/06/11/essay-checklist/

++++++++++++++

letter – request copyright permission

Here are several samples on mastering such letter:

https://registrar.stanford.edu/students/dissertation-and-thesis-submission/preparing-engineer-theses-paper-submission/sample-3

http://www.iup.edu/graduatestudies/resources-for-current-students/research/thesis-dissertation-information/before-starting-your-research/copyright-permission-instructions-and-sample-letter/

https://brocku.ca/webfm_send/25032

 

+++++++++++++++++

 

 

 

IRDL proposal

Applications for the 2018 Institute will be accepted between December 1, 2017 and January 27, 2018. Scholars accepted to the program will be notified in early March 2018.

Title:

Learning to Harness Big Data in an Academic Library

Abstract (200)

Research on Big Data per se, as well as on the importance and organization of the process of Big Data collection and analysis, is well underway. The complexity of the process comprising “Big Data,” however, deprives organizations of a ubiquitous “blueprint.” The planning, structuring, administration, and execution of the process of adopting Big Data in an organization, be it a corporate or an educational one, remain elusive. No less elusive is the adoption of Big Data practices among libraries themselves. Seeking the commonalities and differences in the adoption of Big Data practices among libraries may be a suitable start to help libraries transition to the adoption of Big Data and restructure organizational and daily activities based on Big Data decisions.
Introduction to the problem. Limitations

The redefinition of humanities scholarship has received major attention in higher education. The advent of digital humanities challenges aspects of academic librarianship. Data literacy is a critical need for digital humanities in academia. The March 2016 Library Juice Academy Webinar led by John Russel exemplifies the efforts to help librarians become versed in obtaining programming skills, and respectively, handling data. Those are first steps on a rather long path of building a robust infrastructure to collect, analyze, and interpret data intelligently, so it can be utilized to restructure daily and strategic activities. Since the phenomenon of Big Data is young, there is a lack of blueprints on the organization of such infrastructure. A collection and sharing of best practices is an efficient approach to establishing a feasible plan for setting a library infrastructure for collection, analysis, and implementation of Big Data.
Limitations. This research can only organize the results from the responses of librarians and research into how libraries present themselves to the world in this arena. It may be able to make some rudimentary recommendations. However, based on each library’s specific goals and tasks, further research and work will be needed.

 

 

Research Literature

“Big data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it…”
– Dan Ariely, 2013  https://www.asist.org/publications/bulletin/aprilmay-2017/big-datas-impact-on-privacy-for-librarians-and-information-professionals/

Big Data is becoming an omnipresent term. It is widespread among different disciplines in academia (De Mauro, Greco, & Grimaldi, 2016). This leads to “inconsistency in meanings and necessity for formal definitions” (De Mauro et al., 2016, p. 122). Similarly to De Mauro et al. (2016), Hashem, Yaqoob, Anuar, Mokhtar, Gani and Ullah Khan (2015) seek standardization of definitions. The main connected “themes” of this phenomenon must be identified, and the connections to Library Science must be sought. A prerequisite for a comprehensive definition is the identification of Big Data methods. Bughin, Chui, and Manyika (2011), Chen et al. (2012), and De Mauro et al. (2015) single out the methods needed to complete the process of building a comprehensive definition.

In conjunction with identifying the methods, volume, velocity, and variety, as defined by Laney (2001), are the three properties of Big Data accepted across the literature. Daniel (2015) defines three stages in Big Data: collection, analysis, and visualization. According to Daniel (2015), Big Data in higher education “connotes the interpretation of a wide range of administrative and operational data” (p. 910) and, according to Hilbert (2013), as cited in Daniel (2015), Big Data “delivers a cost-effective prospect to improve decision making” (p. 911).

The importance of understanding the process of Big Data analytics is well understood in academic libraries. Examples of such “administrative and operational” use for cost-effective improvement of decision making are the Finch & Flenner (2016) and Eaton (2017) case studies of the use of data visualization to assess an academic library collection and restructure the acquisition process. Sugimoto, Ding & Thelwall (2012) call for a discussion of Big Data for libraries. According to the 2017 NMC Horizon Report, “Big Data has become a major focus of academic and research libraries due to the rapid evolution of data mining technologies and the proliferation of data sources like mobile devices and social media” (Adams, Becker, et al., 2017, p. 38).

Power (2014) elaborates on the complexity of Big Data in regard to decision-making and offers ideas for organizations on building a system to deal with Big Data. As explained by Boyd and Crawford (2012) and cited in De Mauro et al (2016), there is a danger of a new digital divide among organizations with different access and ability to process data. Moreover, Big Data impacts current organizational entities in their ability to reconsider their structure and organization. The complexity of institutions’ performance under the impact of Big Data is further complicated by the change of human behavior, because, arguably, Big Data affects human behavior itself (Schroeder, 2014).

De Mauro et al. (2015) touch on the impact of Big Data on libraries. The reorganization of academic libraries considering Big Data and the handling of Big Data by libraries is in close conjunction with the reorganization of the entire campus and the handling of Big Data by the educational institution. In addition to the disruption posed by the Big Data phenomenon, higher education is facing global changes of an economic, technological, social, and educational character. Daniel (2015) uses a chart to illustrate the complexity of these global trends. Parallel to the Big Data developments in America and Asia, the European Union is offering access to an EU open data portal (https://data.europa.eu/euodp/home ). Moreover, the Association of European Research Libraries expects under the H2020 program to increase “the digitization of cultural heritage, digital preservation, research data sharing, open access policies and the interoperability of research infrastructures” (Reilly, 2013).

The challenges posed by Big Data to human and social behavior (Schroeder, 2014) are no less significant than the impact of Big Data on learning. Cohen, Dolan, Dunlap, Hellerstein, & Welton (2009) propose a road map for “more conservative organizations” (p. 1492) to overcome their reservations and/or inability to handle Big Data and adopt a practical approach to the complexity of Big Data. Two Chinese researchers describe deep learning as the “set of machine learning techniques that learn multiple levels of representation in deep architectures” (Chen & Lin, 2014, p. 515). Deep learning requires “new ways of thinking and transformative solutions” (Chen & Lin, 2014, p. 523). Another pair of researchers from China present a broad overview of the various societal, business, and administrative applications of Big Data, including a detailed account and definitions of the processes and tools accompanying Big Data analytics. The American counterparts of these Chinese researchers are of the same opinion when it comes to the need to “think about the core principles and concepts that underline the techniques, and also the systematic thinking” (Provost and Fawcett, 2013, p. 58). De Mauro, Greco, and Grimaldi (2016), similarly to Provost and Fawcett (2013), draw attention to the urgent necessity to train new types of specialists to work with such data. As early as 2012, Davenport and Patil (2012), as cited in De Mauro et al. (2016), envisioned hybrid specialists able to manage both technological knowledge and academic research. Similarly, Provost and Fawcett (2013) mention the efforts of “academic institutions scrambling to put together programs to train data scientists” (p. 51). Further, Asomoah, Sharda, Zadeh & Kalgotra (2017) share a specific plan for the design and delivery of a big data analytics course.
At the same time, librarians working with data acknowledge the shortcomings in the profession, since librarians “are practitioners first and generally do not view usability as a primary job responsibility, usually lack the depth of research skills needed to carry out a fully valid” data-based research (Emanuel, 2013, p. 207).

Borgman (2015) devotes an entire book to data and scholarly research and goes beyond the already well-established facts regarding the importance of Big Data, the implications of Big Data, and the technical, societal, and educational impact and complications posed by Big Data. Borgman elucidates the importance of knowledge infrastructure and the necessity to understand the complexity of building such infrastructure in order to be able to take advantage of Big Data. In a similar fashion, a team of Chinese scholars draws attention to the complexity of data mining and Big Data and the necessity to approach the issue in an organized fashion (Wu, Zhu, Wu, & Ding, 2014).

Bruns (2013) shifts the conversation from the “macro” architecture of Big Data, the focus of Borgman (2015) and Wu et al. (2014), and ponders the influx of unprecedented opportunities for the humanities in academia with the advent of Big Data. Does the seemingly ubiquitous omnipresence of Big Data mean for the humanities a “railroading” into “scientificity”? How will research and publishing change with the advent of Big Data across academic disciplines?

Reyes (2015) shares her “skinny” approach to Big Data in education. She presents a comprehensive structure for educational institutions to shift “traditional” analytics to “learner-centered” analytics (p. 75) and identifies the participants in the Big Data process in the organization. The model is applicable for library use.

Being a new and uncharted territory, Big Data and Big Data analytics can pose ethical issues. Willis (2013) focuses on Big Data application in education, namely the ethical questions for higher education administrators and the expectations of Big Data analytics to predict students’ success. Daries, Reich, Waldo, Young, and Whittinghill (2014) discuss rather similar issues regarding the balance between data and student privacy regulations. The privacy issues accompanying data are also discussed by Tene and Polonetsky (2013).

Privacy issues are habitually connected to security and surveillance issues. Andrejevic and Gates (2014) point out that in decision making “generated by data mining, the focus is not on particular individuals but on aggregate outcomes” (p. 195). Van Dijck (2014) goes into further detail regarding the perils that data and metadata pose to society, in particular to the privacy of citizens. Bail (2014) addresses the same issue regarding the impact of Big Data on societal matters, but underlines the leading role of cultural sociologists and their theories in the correct application of Big Data.

Library organizations have been traditional proponents of core democratic values such as the protection of privacy and the elucidation of related ethical questions (Miltenoff & Hauptman, 2005). In recent books about Big Data and libraries, ethical issues are an important part of the discussion (Weiss, 2018), and library blogs discuss these issues as well (Harper & Oltmann, 2017). An academic library’s role is to educate its patrons about those values. Sugimoto et al. (2012) reflect on the need for a discussion about Big Data in Library and Information Science. They clearly draw attention to the library “tradition of organizing, managing, retrieving, collecting, describing, and preserving information” (p. 1), as well as to library and information science being “a historically interdisciplinary and collaborative field, absorbing the knowledge of multiple domains and bringing the tools, techniques, and theories” (p. 1). Sugimoto et al. (2012) sought a wide discussion among the library profession regarding the implications of Big Data for the profession, mirroring similar activities in other fields (e.g., Wixom et al., 2014). A current Andrew W. Mellon Foundation grant for Visualizing Digital Scholarship in Libraries seeks an opportunity to view “both macro and micro perspectives, multi-user collaboration and real-time data interaction, and a limitless number of visualization possibilities – critical capabilities for rapidly understanding today’s large data sets” (Hwangbo, 2014).

The importance of the library with its traditional roles, as described by Sugimoto et al. (2012), may continue, considering the Big Data platform proposed by Wu, Wu, Khabsa, Williams, Chen, Huang, Tuarob, Choudhury, Ororbia, Mitra, and Giles (2014). Such platforms will continue to emerge and improve, with librarians as the ultimate drivers of these platforms and as mediators between patrons and the data the platforms generate.

Every library needs to find its place within its larger organization and in society with regard to this very new and very powerful phenomenon called Big Data. Libraries may not have the trained staff to lead the process of organizing and building the complex mechanism of this new knowledge architecture, but librarians must educate and train themselves to be worthy participants in this new establishment.

 

Method

 

The study will be cleared by the SCSU IRB.
The survey will collect responses from the library population regarding its readiness to use Big Data and its actual use of Big Data. The survey URL will be sent to (academic?) libraries around the world.

Data will be processed through SPSS. Open-ended results will be processed manually. The preliminary research design presupposes a mixed-methods approach.
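To illustrate the intended split between machine-processed quantitative items and manually processed qualitative text, here is a minimal sketch in Python/pandas, standing in for the SPSS workflow; the column names and mock responses are hypothetical placeholders, not the actual instrument:

```python
# Illustrative sketch only: Python/pandas stands in for the SPSS workflow,
# and all column names and responses below are hypothetical placeholders.
import io
import pandas as pd

# A tiny mock export standing in for the real survey download.
csv_data = """respondent,q1_uses_big_data,q2_tool,comments
1,Yes,SPSS,We are just starting out
2,No,,
3,Yes,R,Looking into visualization
"""
df = pd.read_csv(io.StringIO(csv_data))

# Closed-ended (quantitative) items go on to statistical analysis;
# open-ended text is set aside for manual qualitative processing.
closed = df[["respondent", "q1_uses_big_data", "q2_tool"]]
open_ended = df[["respondent", "comments"]].dropna(subset=["comments"])

counts = closed["q1_uses_big_data"].value_counts()
print(counts.to_dict())  # {'Yes': 2, 'No': 1}
```

The same separation would be performed by whichever tool ends up hosting the survey; the point is only that closed- and open-ended responses follow different analysis paths.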

The study will include closed-ended survey questions and open-ended questions. The first part of the study (closed-ended, quantitative questions) will be completed through an online survey. Participants will be asked to complete the survey using a link they receive through e-mail.

Mixed methods research was defined by Johnson and Onwuegbuzie (2004) as “the class of research where the researcher mixes or combines quantitative and qualitative research techniques, methods, approaches, concepts, or language into a single study” (p. 17). Quantitative and qualitative methods can be combined when used to complement each other, because the two can measure different aspects of the research questions (Sale, Lohfeld, & Brazil, 2002).

 

Sampling design

 

  • Online survey of 10-15 questions, with 3-5 demographic questions and the rest regarding the use of tools.
  • 1-2 open-ended questions at the end of the survey to probe for a follow-up mixed-methods approach (an opportunity for a qualitative study).
  • Data analysis techniques: survey results will be exported to SPSS and analyzed accordingly. The final survey design will determine the appropriate statistical approach.
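Since the final survey design will determine the statistical approach, the following is only a hedged sketch of one plausible first step: crossing a demographic item with a usage item in a contingency table. Column names, categories, and mock data are hypothetical, and pandas/SciPy stand in for SPSS:

```python
# Hedged sketch: the final survey design will determine the actual tests.
# Column names, categories, and responses are hypothetical placeholders,
# and pandas/SciPy stand in for the SPSS analysis named in the proposal.
import pandas as pd
from scipy.stats import chi2_contingency

# Mock responses: library type (a demographic item) vs. reported Big Data use.
df = pd.DataFrame({
    "library_type": ["academic"] * 4 + ["public"] * 4,
    "uses_big_data": ["yes", "yes", "yes", "no", "no", "no", "no", "yes"],
})

# A contingency table with a chi-square test is one plausible way to relate
# a demographic item to a closed-ended usage item.
table = pd.crosstab(df["library_type"], df["uses_big_data"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
```

With a real return sample, the choice between chi-square, t-tests, or nonparametric alternatives would follow from the measurement levels of the final items and the size of the return.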

 

Project Schedule

 

Complete literature review and identify areas of interest – two months

Prepare and test the instrument (survey) – one month

IRB and other details – one month

Generate a list of potential libraries for survey distribution – one month

Contact libraries; follow up and contact again if necessary (in case of a low response rate) – one month

Collect and analyze data – two months

Write up data findings – one month

Complete the manuscript – one month

Proofreading and other details – one month

 

Significance of the work 

While it has been widely acknowledged that Big Data (and its handling) is changing higher education (https://blog.stcloudstate.edu/ims?s=big+data) as well as academic libraries (https://blog.stcloudstate.edu/ims/2016/03/29/analytics-in-education/), it remains nebulous how Big Data is handled in the academic library and, correspondingly, how that handling relates to the handling of Big Data on campus. Moreover, the visualization of Big Data between units on campus remains a work in progress, along with any policymaking based on the analysis of such data (hence the need for comprehensive visualization).

 

This research will aim to gain an understanding of: (a) how librarians are handling Big Data; (b) how they are relating their Big Data output to the campus output of Big Data; and (c) how librarians in particular and campus administration in general are tuning their practices based on the analysis.

Based on the survey returns (if there is a statistically significant return), this research might consider juxtaposing practices from academic libraries with practices from special libraries (especially corporate libraries) and from public and school libraries.

 

 

References:

 

Adams Becker, S., Cummins, M., Davis, A., Freeman, A., Giesinger Hall, C., Ananthanarayanan, V., … Wolfson, N. (2017). NMC Horizon Report: 2017 Library Edition.

Andrejevic, M., & Gates, K. (2014). Big Data Surveillance: Introduction. Surveillance & Society, 12(2), 185–196.

Asamoah, D. A., Sharda, R., Hassan Zadeh, A., & Kalgotra, P. (2017). Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course. Decision Sciences Journal of Innovative Education, 15(2), 161–190. https://doi.org/10.1111/dsji.12125

Bail, C. A. (2014). The cultural environment: measuring culture with big data. Theory and Society, 43(3–4), 465–482. https://doi.org/10.1007/s11186-014-9216-5

Borgman, C. L. (2015). Big Data, Little Data, No Data: Scholarship in the Networked World. MIT Press.

Bruns, A. (2013). Faster than the speed of print: Reconciling ‘big data’ social media analysis and academic scholarship. First Monday, 18(10). Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/4879

Bughin, J., Chui, M., & Manyika, J. (2010). Clouds, big data, and smart assets: Ten tech-enabled business trends to watch. McKinsey Quarterly, 56(1), 75–86.

Chen, X. W., & Lin, X. (2014). Big Data Deep Learning: Challenges and Perspectives. IEEE Access, 2, 514–525. https://doi.org/10.1109/ACCESS.2014.2325029

Cohen, J., Dolan, B., Dunlap, M., Hellerstein, J. M., & Welton, C. (2009). MAD Skills: New Analysis Practices for Big Data. Proc. VLDB Endow., 2(2), 1481–1492. https://doi.org/10.14778/1687553.1687576

Daniel, B. (2015). Big Data and analytics in higher education: Opportunities and challenges. British Journal of Educational Technology, 46(5), 904–920. https://doi.org/10.1111/bjet.12230

Daries, J. P., Reich, J., Waldo, J., Young, E. M., Whittinghill, J., Ho, A. D., … Chuang, I. (2014). Privacy, Anonymity, and Big Data in the Social Sciences. Commun. ACM, 57(9), 56–63. https://doi.org/10.1145/2643132

De Mauro, A. D., Greco, M., & Grimaldi, M. (2016). A formal definition of Big Data based on its essential features. Library Review, 65(3), 122–135. https://doi.org/10.1108/LR-06-2015-0061

De Mauro, A., Greco, M., & Grimaldi, M. (2015). What is big data? A consensual definition and a review of key research topics. AIP Conference Proceedings, 1644(1), 97–104. https://doi.org/10.1063/1.4907823

Dumbill, E. (2012). Making Sense of Big Data. Big Data, 1(1), 1–2. https://doi.org/10.1089/big.2012.1503

Eaton, M. (2017). Seeing Library Data: A Prototype Data Visualization Application for Librarians. Publications and Research. Retrieved from http://academicworks.cuny.edu/kb_pubs/115

Emanuel, J. (2013). Usability testing in libraries: methods, limitations, and implications. OCLC Systems & Services: International Digital Library Perspectives, 29(4), 204–217. https://doi.org/10.1108/OCLC-02-2013-0009

Graham, M., & Shelton, T. (2013). Geography and the future of big data, big data and the future of geography. Dialogues in Human Geography, 3(3), 255–261. https://doi.org/10.1177/2043820613513121

Harper, L., & Oltmann, S. (2017, April 2). Big Data’s Impact on Privacy for Librarians and Information Professionals. Retrieved November 7, 2017, from https://www.asist.org/publications/bulletin/aprilmay-2017/big-datas-impact-on-privacy-for-librarians-and-information-professionals/

Hashem, I. A. T., Yaqoob, I., Anuar, N. B., Mokhtar, S., Gani, A., & Ullah Khan, S. (2015). The rise of “big data” on cloud computing: Review and open research issues. Information Systems, 47(Supplement C), 98–115. https://doi.org/10.1016/j.is.2014.07.006

Hwangbo, H. (2014, October 22). The future of collaboration: Large-scale visualization. Retrieved November 7, 2017, from http://usblogs.pwc.com/emerging-technology/the-future-of-collaboration-large-scale-visualization/

Laney, D. (2001, February 6). 3D Data Management: Controlling Data Volume, Velocity, and Variety. META Group.

Miltenoff, P., & Hauptman, R. (2005). Ethical dilemmas in libraries: an international perspective. The Electronic Library, 23(6), 664–670. https://doi.org/10.1108/02640470510635746

Philip Chen, C. L., & Zhang, C.-Y. (2014). Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Information Sciences, 275(Supplement C), 314–347. https://doi.org/10.1016/j.ins.2014.01.015

Power, D. J. (2014). Using ‘Big Data’ for analytics and decision support. Journal of Decision Systems, 23(2), 222–228. https://doi.org/10.1080/12460125.2014.888848

Provost, F., & Fawcett, T. (2013). Data Science and its Relationship to Big Data and Data-Driven Decision Making. Big Data, 1(1), 51–59. https://doi.org/10.1089/big.2013.1508

Reilly, S. (2013, December 12). What does Horizon 2020 mean for research libraries? Retrieved November 7, 2017, from http://libereurope.eu/blog/2013/12/12/what-does-horizon-2020-mean-for-research-libraries/

Reyes, J. (2015). The skinny on big data in education: Learning analytics simplified. TechTrends: Linking Research & Practice to Improve Learning, 59(2), 75–80. https://doi.org/10.1007/s11528-015-0842-1

Schroeder, R. (2014). Big Data and the brave new world of social media research. Big Data & Society, 1(2), 2053951714563194. https://doi.org/10.1177/2053951714563194

Sugimoto, C. R., Ding, Y., & Thelwall, M. (2012). Library and information science in the big data era: Funding, projects, and future [a panel proposal]. Proceedings of the American Society for Information Science and Technology, 49(1), 1–3. https://doi.org/10.1002/meet.14504901187

Tene, O., & Polonetsky, J. (2012). Big Data for All: Privacy and User Control in the Age of Analytics. Northwestern Journal of Technology and Intellectual Property, 11, [xxvii]-274.

van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society; Newcastle upon Tyne, 12(2), 197–208.

Waller, M. A., & Fawcett, S. E. (2013). Data Science, Predictive Analytics, and Big Data: A Revolution That Will Transform Supply Chain Design and Management. Journal of Business Logistics, 34(2), 77–84. https://doi.org/10.1111/jbl.12010

Weiss, A. (2018). Big Data Shocks: An Introduction to Big Data for Librarians and Information Professionals. Rowman & Littlefield Publishers. Retrieved from https://rowman.com/ISBN/9781538103227/Big-Data-Shocks-An-Introduction-to-Big-Data-for-Librarians-and-Information-Professionals

West, D. M. (2012). Big data for education: Data mining, data analytics, and web dashboards. Governance Studies at Brookings, 4, 1–0.

Willis, J. (2013). Ethics, Big Data, and Analytics: A Model for Application. Educause Review Online. Retrieved from https://docs.lib.purdue.edu/idcpubs/1

Wixom, B., Ariyachandra, T., Douglas, D. E., Goul, M., Gupta, B., Iyer, L. S., … Turetken, O. (2014). The current state of business intelligence in academia: The arrival of big data. CAIS, 34, 1.

Wu, X., Zhu, X., Wu, G. Q., & Ding, W. (2014). Data mining with big data. IEEE Transactions on Knowledge and Data Engineering, 26(1), 97–107. https://doi.org/10.1109/TKDE.2013.109

Wu, Z., Wu, J., Khabsa, M., Williams, K., Chen, H. H., Huang, W., … Giles, C. L. (2014). Towards building a scholarly big data platform: Challenges, lessons and opportunities. In IEEE/ACM Joint Conference on Digital Libraries (pp. 117–126). https://doi.org/10.1109/JCDL.2014.6970157

 

+++++++++++++++++
more on big data





Key Issues in Teaching and Learning Survey

The EDUCAUSE Learning Initiative has just launched its 2018 Key Issues in Teaching and Learning Survey, so vote today: http://www.tinyurl.com/ki2018.

Each year, the ELI surveys the teaching and learning community in order to discover the key issues and themes in teaching and learning. These top issues provide the thematic foundation or basis for all of our conversations, courses, and publications for the coming year. Longitudinally they also provide the way to track the evolving discourse in the teaching and learning space. More information about this annual survey can be found at https://www.educause.edu/eli/initiatives/key-issues-in-teaching-and-learning.

ACADEMIC TRANSFORMATION (Holistic models supporting student success, leadership competencies for academic transformation, partnerships and collaborations across campus, IT transformation, academic transformation that is broad, strategic, and institutional in scope)

ACCESSIBILITY AND UNIVERSAL DESIGN FOR LEARNING (Supporting and educating the academic community in effective practice; intersections with instructional delivery modes; compliance issues)

ADAPTIVE TEACHING AND LEARNING (Digital courseware; adaptive technology; implications for course design and the instructor’s role; adaptive approaches that are not technology-based; integration with LMS; use of data to improve learner outcomes)

COMPETENCY-BASED EDUCATION AND NEW METHODS FOR THE ASSESSMENT OF STUDENT LEARNING (Developing collaborative cultures of assessment that bring together faculty, instructional designers, accreditation coordinators, and technical support personnel, real world experience credit)

DIGITAL AND INFORMATION LITERACIES (Student and faculty literacies; research skills; data discovery, management, and analysis skills; information visualization skills; partnerships for literacy programs; evaluation of student digital competencies; information evaluation)

EVALUATING TECHNOLOGY-BASED INSTRUCTIONAL INNOVATIONS (Tools and methods to gather data; data analysis techniques; qualitative vs. quantitative data; evaluation project design; using findings to change curricular practice; scholarship of teaching and learning; articulating results to stakeholders; just-in-time evaluation of innovations). Here is my bibliographical overview on Big Data (scroll down to “Research literature”: https://blog.stcloudstate.edu/ims/2017/11/07/irdl-proposal/).

EVOLUTION OF THE TEACHING AND LEARNING SUPPORT PROFESSION (Professional skills for T&L support; increasing emphasis on instructional design; delineating the skills, knowledge, business acumen, and political savvy for success; role of inter-institutional communities of practices and consortia; career-oriented professional development planning)

FACULTY DEVELOPMENT (Incentivizing faculty innovation; new roles for faculty and those who support them; evidence of impact on student learning/engagement of faculty development programs; faculty development intersections with learning analytics; engagement with student success)

GAMIFICATION OF LEARNING (Gamification designs for course activities; adaptive approaches to gamification; alternate reality games; simulations; technological implementation options for faculty)

INSTRUCTIONAL DESIGN (Skills and competencies for designers; integration of technology into the profession; role of data in design; evolution of the design profession (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims/2017/10/04/instructional-design-3/); effective leadership and collaboration with faculty)

INTEGRATED PLANNING AND ADVISING FOR STUDENT SUCCESS (Change management and campus leadership; collaboration across units; integration of technology systems and data; dashboard design; data visualization (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims?s=data+visualization); counseling and coaching advising transformation; student success analytics)

LEARNING ANALYTICS (Leveraging open data standards; privacy and ethics; both faculty and student facing reports; implementing; learning analytics to transform other services; course design implications)

LEARNING SPACE DESIGNS (Makerspaces; funding; faculty development; learning designs across disciplines; supporting integrated campus planning; ROI; accessibility/UDL; rating of classroom designs)

MICRO-CREDENTIALING AND DIGITAL BADGING (Design of badging hierarchies; stackable credentials; certificates; role of open standards; ways to publish digital badges; approaches to meta-data; implications for the transcript; personalized learning transcripts and blockchain technology (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims?s=blockchain))

MOBILE LEARNING (Curricular use of mobile devices (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims/2015/09/25/mc218-remodel/); innovative curricular apps; approaches to use in the classroom; technology integration into learning spaces; BYOD issues and opportunities)

MULTI-DIMENSIONAL TECHNOLOGIES (Virtual, augmented, mixed, and immersive reality; video walls; integration with learning spaces; scalability, affordability, and accessibility; use of mobile devices; multi-dimensional printing and artifact creation)

NEXT-GENERATION DIGITAL LEARNING ENVIRONMENTS AND LMS SERVICES (Open standards; learning environments architectures (here previous blog postings on this issue: https://blog.stcloudstate.edu/ims/2017/03/28/digital-learning/); social learning environments; customization and personalization; OER integration; intersections with learning modalities such as adaptive, online, etc.; LMS evaluation, integration and support)

ONLINE AND BLENDED TEACHING AND LEARNING (Flipped course models; leveraging MOOCs in online learning; course development models; intersections with analytics; humanization of online courses; student engagement)

OPEN EDUCATION (Resources, textbooks, content; quality and editorial issues; faculty development; intersections with student success/access; analytics; licensing; affordability; business models; accessibility and sustainability)

PRIVACY AND SECURITY (Formulation of policies on privacy and data protection; increased sharing of data via open standards for internal and external purposes; increased use of cloud-based and third party options; education of faculty, students, and administrators)

WORKING WITH EMERGING LEARNING TECHNOLOGY (Scalability and diffusion; effective piloting practices; investments; faculty development; funding; evaluation methods and rubrics; interoperability; data-driven decision-making)

+++++++++++
learning and teaching in this IMS blog
https://blog.stcloudstate.edu/ims?s=teaching+and+learning
