
Bibliographical data analysis with Zotero and NVivo

Bibliographic Analysis for Graduate Students, EDAD 518, Fri/Sat, May 15/16, 2020

This session is not about qualitative research (QR) alone, but rather about a modern, 21st-century approach to the analysis for your literature review in Chapter 2.

However, the computational approach to qualitative research is not much different from the computational approach to quantitative research; you need to be versed in both. Thus, familiarity with NVivo for qualitative research and with SPSS for quantitative research should be pursued by any doctoral student.

Qualitative Research

Here is a short presentation on the basics:

http://blog.stcloudstate.edu/ims/2019/03/25/qualitative-analysis-basics/

Further, if you wish to expand your knowledge on qualitative research (QR), explore this IMS blog:

http://blog.stcloudstate.edu/ims?s=qualitative+research

Workshop on computational practices for QR:

http://blog.stcloudstate.edu/ims/2017/04/01/qualitative-method-research/

Here is a library instruction session for your course
http://blog.stcloudstate.edu/ims/2020/01/24/digital-literacy-edad-828/

Once you complete the overview of the resources above, please make sure you have Zotero working on your computer; we will be reviewing the Zotero features before we move to NVivo.

Here are materials on Zotero collected in the IMS blog:
http://blog.stcloudstate.edu/ims?s=zotero

Of those materials, you might want to cover at least:

https://youtu.be/ktLPpGeP9ic

Familiarity with Zotero is a prerequisite for successful work with NVivo, so even if you are already working with Zotero, please try to expand your knowledge using the materials above.

NVivo

http://blog.stcloudstate.edu/ims/2017/01/11/nvivo-shareware/

Please use this link to install NVivo on your computer. Even if we were not in quarantine and you were able to use the licensed NVivo software on campus, you would most probably have used the shareware version anyway, for the convenience of working on your dissertation from home. The shareware is fully functional for 14 days, so calculate the time you will be using it and mind the date of installation and your subsequent work.

For the purpose of this workshop, please install NVivo on your computer early in the morning on Saturday, May 16, so we can work together in NVivo during the day and you can continue using the software for the next two weeks.

Please familiarize yourself with the two articles assigned in the EDAD 815 D2L course content “Practice Research Articles”:

Brosky, D. (2011). Micropolitics in the School: Teacher Leaders’ Use of Political Skill and Influence Tactics. International Journal of Educational Leadership Preparation, 6(1). https://eric.ed.gov/?id=EJ972880

Tooms, A. K., Kretovics, M. A., & Smialek, C. A. (2007). Principals’ perceptions of politics. International Journal of Leadership in Education, 10(1), 89–100. https://doi.org/10.1080/13603120600950901

It is very important to be familiar with the articles when we start working with nVivo.

++++++++++++++++

How to use Zotero

http://blog.stcloudstate.edu/ims/2020/01/27/zotero-workshop/

++++++++++++++++

How to use NVivo for bibliographic analysis

The following guideline is based on this document:

Bibliographical data analysis using Nvivo

whereas the snapshots are replaced with ones from NVivo 12, which we will be using in our course and for our dissertations.

Concept of bibliographic data

Bibliographic data is an organized collection of references to published literature, including journal and magazine articles, newspaper articles, conference proceedings, reports, and government and legal publications. Bibliographical data is important for writing the literature review of a research project. This data is usually saved and organized in databases like Mendeley or EndNote. NVivo provides the option to import bibliographical data from these databases directly: one can import an EndNote library or a Mendeley library into NVivo. Similar to interview transcripts, one can represent and analyze bibliographical data in NVivo. To start with bibliographical data representation, this section previews the processing of a literature review in NVivo.

Importing bibliographical data

Bibliographic data is imported from Mendeley, EndNote, and other such databases or applications supported by NVivo. Bibliographical data here refers to material in the form of articles, journal papers, or conference proceedings. Fields common to all of these records are the author’s name and the year of publication; accordingly, NVivo imports and arranges these records with titles built from the author’s name and year of publication. The process of importing bibliographical data is presented in the figures below.
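For a concrete sense of what NVivo receives at import, here is a minimal sketch, assuming the record was exported from Zotero in RIS format; the parsing code is purely illustrative and is not NVivo's actual import routine:

```python
# Minimal sketch: parse one RIS-style bibliographic record and build the
# "Author (Year)" label under which NVivo arranges imported records.
# The record reuses the Brosky (2011) article assigned for this course;
# the parser is illustrative only.
ris_record = """TY  - JOUR
AU  - Brosky, D.
PY  - 2011
TI  - Micropolitics in the School: Teacher Leaders' Use of Political Skill and Influence Tactics
JO  - International Journal of Educational Leadership Preparation
ER  - """

fields = {}
for line in ris_record.splitlines():
    if "  - " in line:
        tag, value = line.split("  - ", 1)
        fields[tag.strip()] = value.strip()

print(f"{fields['AU']} ({fields['PY']})")  # -> Brosky, D. (2011)
```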

Figure: import Zotero data into NVivo

Figure: select the appropriate data from the external folder

Figures: steps 1-3, create a record in NVivo

Coding strategies for literature review

Coding is a process of identifying important parts or patterns in the sources and organizing them into theme nodes. In a literature review, the sources are typically PDF files, so a literature review in NVivo requires grouping information from PDF files into theme nodes. Nodes do not directly create content for the literature review; they simply organize ideas to help frame it. Nodes can be created on the basis of the theme of a study, its results, its major findings, or any other important information. After creating nodes, code the information from each article into its respective nodes.

NVivo allows coding the articles for preparing a literature review. Articles contain a tremendous amount of text and information in the form of graphs and, more importantly, come as PDFs. Since NVivo does not allow editing PDF files, apply manual coding for a literature review. There are two strategies for coding articles in NVivo:

  1. Code the text of PDF files into a new Node.
  2. Code the text of PDF files into an existing Node. The procedure of manual coding for a literature review is similar to that for interview transcripts.

Add Node to Cases

Case nodes for articles are created according to the author’s name or the year of publication.

For example, when there are multiple articles by the same author with different information, create a case node with that author’s name and attach all of the articles to it. For instance, in the figure below, five articles by the same author, Mr. Toppings, have been selected together and grouped into a case node. Prepare case nodes like this to effortlessly search information by different authors’ opinions when writing the empirical review in the literature.

NVivo queries for literature review

Apart from coding themes, evidence, authors, or opinions in different articles, run different queries based on the aim of the study. NVivo contains different types of search tools that help to find information in and across articles. For the purpose of a literature review, this section presents a brief overview of word frequency search, text search, and coding query in NVivo.

Word frequency

Word frequency in NVivo allows searching for the most common words in the articles. In a literature review, use word frequency to search for a word and find what different authors have stated about it in their articles. Run word frequency on all types of sources, and filter out words which are not useful for writing the literature review.

For example, run a word frequency query limited to the 100 most frequent words. This will help in assessing whether any of these words provide new information for the literature review (figure below).

Figure: word frequency query and search results

Figure: word frequency query saved
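NVivo computes such a query through its GUI; purely to make concrete what a word-frequency query does, here is a minimal Python sketch, assuming the article text has already been extracted to plain-text files (the folder name and the stop-word list are illustrative only):

```python
# Minimal sketch of a word-frequency query over extracted article text.
# Assumes the PDFs were already converted to .txt files in "articles/";
# the folder name and stop-word list are illustrative only.
import re
from collections import Counter
from pathlib import Path

STOP_WORDS = {"the", "and", "of", "to", "a", "in", "is", "that", "for", "with"}

def word_frequency(folder: str, limit: int = 100) -> list[tuple[str, int]]:
    counts = Counter()
    for txt_file in Path(folder).glob("*.txt"):
        words = re.findall(r"[a-z']+", txt_file.read_text(encoding="utf-8").lower())
        counts.update(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return counts.most_common(limit)  # e.g., the 100 most frequent words

if __name__ == "__main__":
    for word, count in word_frequency("articles", limit=100):
        print(word, count)
```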

Text search

Text search is a more elaborate tool than word frequency search in NVivo. It allows NVivo to search for a particular phrase or expression in the articles. NVivo also gives the opportunity to make a node out of a text search if a particular word, phrase, or expression is found useful for the literature review.

For example, conduct a text search query to find the word “scaffolding” in the articles. NVivo will return all the words, phrases, and expressions related to this word across all the articles (Figures 8 & 9). The difference between text search and word frequency is that text search also generates the texts, sentences, and phrases related to the queried word, rather than mere counts.

Figure: Query Text Search
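Again, NVivo does this internally; purely as an illustration of what a text search returns, the following sketch finds a query word (plus its stemmed variants) and prints the sentence around each hit. The folder layout and the query are hypothetical:

```python
# Minimal sketch of a text-search query: find a word (plus stemmed variants,
# e.g., "scaffold" also matches "scaffolding") and return the sentence
# around each hit. Folder layout and query word are hypothetical.
import re
from pathlib import Path

def text_search(folder: str, query: str) -> list[tuple[str, str]]:
    pattern = re.compile(rf"[^.]*\b{re.escape(query)}\w*\b[^.]*\.", re.IGNORECASE)
    hits = []
    for txt_file in Path(folder).glob("*.txt"):
        for sentence in pattern.findall(txt_file.read_text(encoding="utf-8")):
            hits.append((txt_file.name, sentence.strip()))
    return hits

if __name__ == "__main__":
    for source, sentence in text_search("articles", "scaffold"):
        print(f"{source}: {sentence}")
```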

Coding query

Apart from text search and word frequency search, NVivo also provides the option of a coding query. In a literature review, a coding query helps to find the intersection between two nodes. As mentioned previously, nodes contain the information from the articles, and two nodes may contain similar sets of information. A coding query condenses this information into a two-way table representing the intersection between the selected nodes.

For example, in the figure below, the researcher has searched the intersection between three nodes (academic, psychological, and social) on the basis of three attributes (quantitative, qualitative, and mixed research). This coding query is performed to find which of the selected theme nodes have all types of attributes. The coding matrix in the figure below shows that the academic node has all three research attributes (quantitative, qualitative, and mixed), whereas the psychological node has only two (quantitative and mixed).

In this way, a coding query helps researchers generate the intersection between two or more theme nodes, which also simplifies the patterns in the qualitative data for writing the literature review.
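For intuition about what the coding matrix computes, here is a minimal sketch that cross-tabulates theme nodes against attributes; the node names and sample codings are made up to mirror the example above:

```python
# Minimal sketch of a coding-matrix query: count how many coded excerpts
# fall at the intersection of each theme node and each research attribute.
# Node names and the sample codings are made up to mirror the example above.
from collections import defaultdict

codings = [  # (theme node, research-type attribute) per coded excerpt
    ("academic", "quantitative"), ("academic", "qualitative"), ("academic", "mixed"),
    ("psychological", "quantitative"), ("psychological", "mixed"),
    ("social", "qualitative"),
]

matrix = defaultdict(lambda: defaultdict(int))
for node, attribute in codings:
    matrix[node][attribute] += 1

attributes = ["quantitative", "qualitative", "mixed"]
print("node".ljust(15), *(a.ljust(12) for a in attributes))
for node in sorted(matrix):
    print(node.ljust(15), *(str(matrix[node][a]).ljust(12) for a in attributes))
```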

+++++++++++++++++++

Please do not hesitate to contact me before, during, or after our workshop with ANY questions or suggestions about your Chapter 2 and, particularly, about your literature review:

Plamen Miltenoff, Ph.D., MLIS

Professor | 320-308-3072 | pmiltenoff@stcloudstate.edu | http://web.stcloudstate.edu/pmiltenoff/faculty/ | schedule a meeting: https://doodle.com/digitalliteracy | Zoom, Google Hangouts, Skype, FaceTime, WhatsApp, WeChat, and Facebook Messenger are only some of the platforms through which I can desktop-share with you; if you have a preferred platform, I can meet you there as well.

++++++++++++++
more on NVivo in this IMS blog
http://blog.stcloudstate.edu/ims?s=nvivo

more on Zotero in this IMS blog
http://blog.stcloudstate.edu/ims?s=zotero

Remote UX Work: Guidelines and Resources

https://www.nngroup.com/articles/remote-ux/

capture qualitative insights from video recordings and think-aloud narration from users: https://lookback.io/, https://app.dscout.com/sign_in, https://userbrain.net/

capture quantitative metrics such as time spent and success rate:   https://konceptapp.com/

Many platforms have both qualitative and quantitative capabilities, such as UserZoom and UserTesting.

Tips for Remote Facilitating and Presenting:

  • Turn on your camera
  • Enable connection
  • Create ground rules
  • Assign homework
  • Adapt the structure

Tools for Remote Facilitating and Presenting

  • Presenting UX work: Zoom, GoToMeeting, and Google Hangouts Meet
  • Generative workshop activities: Google Draw, Microsoft Visio, Sketch, MURAL, and Miro
  • Evaluative workshop activities: MURAL or Miro. Alternatively, use survey tools such as SurveyMonkey or CrowdSignal, or live polling apps such as Poll Everywhere that you can insert directly into your slides.

Remote Collaboration and Brainstorming

  • Consider both synchronous and asynchronous methods
  • Enable mutual participation
  • Respect schedules
  • Keep tools simple

Whiteboards: https://miro.com/ and https://mural.co/

Learning analytics, student satisfaction, and student performance

Learning analytics, student satisfaction, and student performance at the UK Open University

https://www.tonybates.ca/2018/05/11/11025/
Rienties and his team linked 151 modules (courses) and 111,256 students with students’ behaviour, satisfaction and performance at the Open University UK, using multiple regression models.
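For readers unfamiliar with the method, a multiple regression fits an outcome (e.g., performance) as a weighted combination of several predictors (e.g., behaviour and satisfaction). The minimal sketch below uses synthetic data and made-up variable names, not the study's actual dataset or code:

```python
# Minimal sketch of multiple regression via ordinary least squares.
# The data is synthetic; variable names only echo the study's themes.
import numpy as np

rng = np.random.default_rng(0)
n = 500
vle_activity = rng.normal(50, 15, n)    # e.g., clicks in the virtual learning environment
satisfaction = rng.normal(4.0, 0.5, n)  # e.g., course evaluation score
performance = 0.4 * vle_activity + 0.1 * satisfaction + rng.normal(0, 5, n)

# Solve for intercept and coefficients with the least-squares solver
X = np.column_stack([np.ones(n), vle_activity, satisfaction])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
print("intercept, behaviour coef, satisfaction coef:", beta)
```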

There is little correlation between student course evaluations and student performance

The design of the course matters

Student feedback on the quality of a course is really important, but it is more useful as a conversation between students and instructors/designers than as a quantitative ranking of the quality of a course. In fact, using learner satisfaction as a way to rank teaching is highly misleading. Learner satisfaction encompasses a very wide range of factors as well as the teaching of a particular course.

This research provides quantitative evidence of the importance of learning design in online and distance teaching. Good design leads to better learning outcomes. We need a shift in the power balance between university and college subject experts and learning designers, resulting in the latter being treated as at least equals in the teaching process.

+++++++++
more on learning analytics in this IMS blog
http://blog.stcloudstate.edu/ims?s=learning+analytics

Data driven design

Valuing data over design instinct puts metrics over users

Benek Lisefski August 13, 2019

https://modus.medium.com/data-driven-design-is-killing-our-instincts-d448d141653d

Overreliance on data to drive design decisions can be just as harmful as ignoring it. Data only tells one kind of story. But your project goals are often more complex than that. Goals can’t always be objectively measured.

Data-driven design is about using information gleaned from both quantitative and qualitative sources to inform how you make decisions for a set of users. Some common tools used to collect data include user surveys, A/B testing, site usage and analytics, consumer research, support logs, and discovery calls. 

Designers justified their value through their innate talent for creative ideas and artistic execution. Those whose instincts reliably produced success became rock stars.

In today’s data-driven world, that instinct is less necessary and holds less power. But make no mistake, there’s still a place for it.

Data is good at measuring things that are easy to measure. Some goals are less tangible, but that doesn’t make them less important.

Data has become an authoritarian who has fired the other advisors who may have tempered his ill will. A designer’s instinct would ask, “Do people actually enjoy using this?” or “How do these tactics reflect on our reputation and brand?”

Digital interface design is going through a bland period of sameness.

Data is only as good as the questions you ask

When to use data vs. when to use instinct

Deciding between two or three options? This is where data shines. Nothing is more decisive than an A/B test to compare potential solutions and see which one actually performs better. Make sure you’re measuring long-term value metrics and not just views and clicks.
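As a concrete illustration of letting the data decide between two options, here is a minimal sketch of a two-proportion z-test for an A/B comparison; the conversion counts are made up:

```python
# Minimal two-proportion z-test for an A/B comparison (counts are made up).
from math import erf, sqrt

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal-CDF tail x2

p = ab_test_p_value(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"p-value: {p:.4f}")  # small p suggests a real difference, not noise
```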

Sweating product quality and aesthetics? Turn to your instinct. The overall feeling of quality is a collection of hundreds of micro-decisions, maintained consistency, and execution with accuracy. Each one of those decisions isn’t worth validating on its own. Your users aren’t design experts, so their feedback will be too subjective and variable. Trust your design senses when finessing the details.

Unsure about user behavior? Use data rather than asking for opinions. When asked what they will do, customers tell you what they think you want to hear. Instead, trust what they actually do when they think nobody’s looking.

Building brand and reputation? Data can’t easily measure this. But we all know trustworthiness is as important as clicks (and sometimes they’re opposing goals). When building long-term reputation, trust your instinct to guide you to what’s appealing, even if it sometimes contradicts short-term data trends. You have to play the long game here.

+++++++++
more on big data in this IMS blog
http://blog.stcloudstate.edu/ims?s=big+data

Literature on Digital Humanities

Burdick, A. (2012). Digital humanities. Cambridge, MA: MIT Press.

https://mnpals-scs.primo.exlibrisgroup.com/discovery/fulldisplay?docid=alma990078472690104318&context=L&vid=01MNPALS_SCS:SCS&search_scope=MyInst_and_CI&tab=Everything&lang=en

Digital humanities is born of the encounter between traditional humanities and computational methods.

p. 5. From Humanism to Humanities
While the foundations of humanistic inquiry and the liberal arts can be traced back in the West to the medieval trivium and quadrivium, the modern humanities and human sciences are rooted in the Renaissance shift from a medieval, church-dominated, theocratic worldview to a human-centered one. The gradual transformation of early humanism into the disciplines that make up the humanities today was profoundly shaped by the editorial practices involved in the recovery of the corpus of works from classical antiquity.

P. 6. The shift from humanism to the institutionally sanctioned disciplinary practices and protocols that we associate with the humanities today is best described as a gradual process of subdivision and specialization.
P. 7. Text-based disciplines and studies (classics, literature, philosophy, the history of ideas) make up, from the very start, the core of both the humanities and the great books curricula instituted in the 1920s and 1930s.
P. 10. Transmedia modes of argumentation
In the 21st century, we communicate in media significantly more varied, extensible, and multiplicative than linear text. From scalable databases to information visualizations, from video lectures to multi-user virtual platforms, serious content and rigorous argumentation take shape across multiple platforms and media. The best digital humanities pedagogy and research projects train students both in “reading” and “writing” this emergent rhetoric and in understanding how it reshapes and remodels humanistic knowledge. This means developing critically informed literacies expansive enough to include graphic design, visual narrative, time-based media, and the development of interfaces (rather than the rote acceptance of them as off-the-shelf products).
P. 11. The visual becomes ever more fundamental to the digital humanities, in ways that complement, enhance, and sometimes are in tension with the textual.
There is no either/or, no simple interchangeability between language and the visual, no strict subordination of the one to the other. Words are themselves visual, but other kinds of visual constructs do different things. The question is how to use each to its best effect and to devise meaningful “intertwinglings,” to use Theodor Nelson’s ludic neologism.
P. 11. The suite of expressive forms now encompasses the use of sound, motion graphics, animation, screen capture, video, audio, and the appropriation and remixing of the code that underlies game engines. This expanded range of communicative tools requires those engaged in digital humanities work to familiarize themselves with issues, discussions, and debates in design fields, especially communication and interaction design. Like their print predecessors, the formats and conventions of screen environments can become naturalized all too quickly, with the result that the thinking that informed their design goes unperceived.

p. 13.

For digital humanists, design is a creative practice harnessing cultural, social, economic, and technological constraints in order to bring systems and objects into the world. Design in dialogue with research is simply a technique, but when used to pose and frame questions about knowledge, design becomes an intellectual method. Digital humanities is a production-based endeavor in which theoretical issues get tested in the design of implementations, and implementations are loci of theoretical reflection and elaboration.
Digital humanists have much to learn from communication and media design about how to juxtapose and integrate words and images, create hierarchies of reading, forge pathways of understanding, deploy grids and templates to best effect, and develop navigational schemata that guide and produce meaningful interactions.
P. 15. The field of digital humanities may see the emergence of polymaths who can “do it all”: who can research, write, shoot, edit, code, model, design, network, and dialogue with users. But there is also ample room for specialization and, particularly, for collaboration.
P. 16. Computational activities in digital humanities.
The foundational layer, computation, relies on principles that are, on the surface, at odds with humanistic methods.
P. 17. The second level involves processing in a way that conforms to computational capacities, and these were explored in the first generation of digital scholarship: stylometrics, concordance development, and indexing.
P. 17.
Curation, analysis, editing, modeling.
Curation, analysis, editing, and modeling comprise fundamental activities at the core of digital humanities. Involving archives, collections, repositories, and other aggregations of materials, curation is the selection and organization of materials in an interpretive framework, argument, or exhibit.
P. 18. Analysis refers to the processing of text or data: statistical and quantitative methods of analysis have brought close readings of texts (stylometrics and genre analysis, collation, comparisons of versions for author attribution or usage patterns) into dialogue with distant reading (the crunching of large quantities of information across a corpus of textual data or its metadata).
Editing has been revived with the advent of digital media and the Web, and continues to be an integral activity in textual as well as time-based formats.
P. 18. Modeling highlights the notion of content models: shapes of argument expressed in information structures and their design. A digital project is always an expression of assumptions about knowledge, usually domain-specific knowledge given an explicit form by the model in which it is designed.
P. 19. Each of these areas of activity (curation, analysis, editing, and modeling) is supported by the basic building blocks of digital activity. But they also depend upon networks and infrastructure that are cultural and institutional as well as technical. Servers, software, and systems administration are key elements of any project design.
P. 30. Digital media are not more “evolved” than print media, nor are books obsolete; but the multiplicity of media and the very processes of mediation and remediation in the formation of cultural knowledge and humanistic inquiry require close attention. Toggling between distant and close, macro and micro, and surface and depth becomes the norm. Here, we focus on the importance of visualization to the digital humanities before moving on to other, though often related, genres and methods such as locative investigation, thick mapping, animated archives, database documentaries, platform studies, and emerging practices like cultural analytics, data mining, and humanities gaming.
P. 35. Fluid textuality refers to the mutability of texts in variants and versions, whether these are produced through authorial changes, editing, transcription, translation, or print production.

Cultural analytics, aggregation, and data mining.
The field of cultural analytics has emerged over the past few years, utilizing tools of high-end computational analysis and data visualization to dissect large-scale cultural data sets. Cultural analytics does not analyze cultural artifacts, but operates on the level of digital models of these materials in aggregate. Again, the point is not to pit “close” hermeneutic reading against “distant” data mapping, but rather to appreciate the synergistic possibilities and tensions that exist between a hyper-localized, deep analysis and a macrocosmic view.

p. 42.

Data mining is a term that covers a host of techniques for analyzing digital material by “parameterizing” some feature of information and extracting it. This means that any element of a file or collection of files that can be given explicit specifications, or parameters, can be extracted from those files for analysis.
Understanding the rhetoric of graphics is another essential skill, therefore, in working at a scale where individual objects are lost in the mass of processed information and data. To date, much humanities data mining has merely involved counting. Much more sophisticated statistical methods and use of probability will be needed for humanists to absorb the lessons of the social sciences into their methods.
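As a toy illustration of “parameterizing” a feature and extracting it for counting, the sketch below treats the publication year as the explicit parameter; the corpus folder and files are hypothetical:

```python
# Toy sketch of data mining by "parameterization": the publication year is
# given an explicit specification (a four-digit pattern) and extracted from
# every file for aggregate counting. The corpus folder is hypothetical.
import re
from collections import Counter
from pathlib import Path

year_pattern = re.compile(r"\b(1[89]\d\d|20\d\d)\b")  # the explicit parameter

years = Counter()
for doc in Path("corpus").glob("*.txt"):
    years.update(year_pattern.findall(doc.read_text(encoding="utf-8")))

for year, count in years.most_common(10):
    print(year, count)
```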
P. 42. Visualization and data design
Currently, visualization in the humanities uses techniques drawn largely from the social sciences, business applications, and the natural sciences, all of which require self-conscious criticality in their adoption. Such visual displays, including graphs and charts, may present themselves as objective or even unmediated views of reality, rather than as rhetorical constructs.

+++++++++++++++++++++++++++
Warwick, C., Terras, M., & Nyhan, J. (2012). Digital humanities in practice. London: Facet Publishing in association with UCL Centre for Digital Humanities.

https://mnpals-scs.primo.exlibrisgroup.com/discovery/fulldisplay?docid=alma990078423690104318&context=L&vid=01MNPALS_SCS:SCS&search_scope=MyInst_and_CI&tab=Everything&lang=en

 

Russian Influence Operations on Twitter


Summary: This short paper lays out an attempt to measure how much activity from Russian state-operated accounts released in the dataset made available by Twitter in October 2018 was targeted at the United Kingdom. Finding UK-related Tweets is not an easy task. By applying a combination of geographic inference, keyword analysis, and classification by algorithm, we identified UK-related Tweets sent by these accounts and subjected them to further qualitative and quantitative analytic techniques.

We find:

  • There were three phases in Russian influence operations: under-the-radar account building, minor Brexit vote visibility, and larger-scale visibility during the London terror attacks.

  • Russian influence operations linked to the UK were most visible when discussing Islam. Tweets discussing Islam over the period of terror attacks between March and June 2017 were retweeted 25 times more often than their other messages.

  • The most widely followed and visible troll account, @TEN_GOP, shared 109 Tweets related to the UK. Of these, 60 percent were related to Islam.

  • The topology of tweet activity underlines the vulnerability of social media users to disinformation in the wake of a tragedy or outrage.

  • Focus on the UK was a minor part of wider influence operations in this data. Of the nine million Tweets released by Twitter, 3.1 million were in English (34 percent). Of these 3.1 million, we estimate 83 thousand were in some way linked to the UK (2.7%). Those Tweets were shared 222 thousand times. It is plausible we are therefore seeing how the UK was caught up in Russian operations against the US.

  • Influence operations captured in this data show attempts to falsely amplify other news sources and to take part in conversations around Islam, and rarely show attempts to spread ‘fake news’ or influence at an electoral level.

On 17 October 2018, Twitter released data about 9 million tweets from 3,841 blocked accounts affiliated with the Internet Research Agency (IRA) – a Russian organisation founded in 2013 and based in St Petersburg, accused of using social media platforms to push pro-Kremlin propaganda and influence nation states beyond their borders, as well as being tasked with spreading pro-Kremlin messaging in Russia. It is one of the first major datasets linked to state-operated accounts engaging in influence operations released by a social media platform.

Conclusion

This report outlines the ways in which accounts linked to the Russian Internet Research Agency (IRA) carried out influence operations on social media and the ways their operations intersected with the UK. The UK plays a reasonably small part in the wider context of this data. We see two possible explanations: either influence operations were primarily targeted at the US and British Twitter users were impacted as collateral, or this dataset is limited to US-focused operations where events in the UK were highlighted in an attempt to impact the US public, rather than a concerted effort against the UK. It is plausible that such efforts also existed but are not reflected in this dataset.

Nevertheless, the data offers a highly useful window into how Russian influence operations are carried out, as well as highlighting the moments when we might be most vulnerable to them. Between 2011 and 2016, these state-operated accounts were camouflaged. Through manual and automated methods, they were able to quietly build up the trappings of an active and well-followed Twitter account before eventually pivoting into attempts to influence the wider Twitter ecosystem. Their methods included engaging in unrelated and innocuous topics of conversation, often through automated methods, and sharing and engaging with other, more mainstream sources of news.

Although this data shows levels of electoral and party-political influence operations to be relatively low, the day of the Brexit referendum results showed how messaging originating from Russian state-controlled accounts might come to be visible: on June 24th 2016, we believe UK Twitter users discussing the Brexit vote would have encountered messages originating from these accounts.

As early as 2014, however, influence operations began taking part in conversations around Islam, and these accounts came to the fore during the three months of terror attacks that took place between March and June 2017. In the immediate wake of these attacks, messages related to Islam and circulated by Russian state-operated Twitter accounts were widely shared, and would likely have been visible in the UK.

The dataset released by Twitter begins to answer some questions about attempts by a foreign state to interfere in British affairs online. It is notable that overt political or electoral interference is poorly represented in this dataset: rather, we see attempts at stirring societal division, particularly around Islam in the UK, as the messages that resonated the most over the period.

What is perhaps most interesting about this moment is its portrayal of when we as social media users are most vulnerable to the kinds of messages circulated by those looking to influence us. In the immediate aftermath of terror attacks, the data suggests, social media users were more receptive to this kind of messaging than at any other time.

It is clear that hostile states have identified the growth of online news and social media as a weak spot, and that significant effort has gone into attempting to exploit new media to influence its users. Understanding the ways in which these platforms have been used to spread division is an important first step to fighting it. Nevertheless, it is clear that this dataset provides just one window into the ways in which foreign states have attempted to use online platforms as part of wider information warfare and influence campaigns. We hope that other platforms will follow Twitter’s lead and release similar datasets and encourage their users to proactively tackle those who would abuse their platforms.

+++++++++++
more on cybersecurity in this IMS blog
http://blog.stcloudstate.edu/ims?s=cybersecurity

netnography

Xu Zhang. (2017). The Quality of Virtual Communities: A Case Study of Chinese Overseas Students in WeChat Groups. Global Studies Journal, 10(3), 19–26. https://doi.org/10.18848/1835-4432/CGP/v10i03/19-26
p. 23-24.
“Netnography” has been developed for online community researchers. It is “net” plus “ethnography”: it builds on traditional ethnography and combines it with qualitative analysis of the interactive content created by virtual community members. The aim of doing netnographic research is to study the subculture, interactive processes, and characteristics of collective behavior of online communities (Kozinets 2009). Following the development of Internet technology, the web-based method is more convenient and cost-effective in data collection. Members of virtual groups create a large number of interactive texts, pictures, network expressions, and other original information over time, which provides an extremely rich database to researchers. Moreover, from the data collection’s point of view, this online observation method will not interfere with the whole research process, which is better than questionnaires and quantitative modeling (Moisander and Valtonen 2006). Additionally, Kozinets (2009) also pointed out that netnography emphasizes the research background: observers not only focus on the text of communications but also need to pay attention to the characteristics of language, history, meaning, and communication types, and even parse fonts, symbols, images, and photo data. These contents of study are significant in social communication and are called “cultural artifacts.” On the other hand, netnography is based on traditional ethnography as a methodology; therefore it inherits the research processes of the ethnographic method. Kozinets (2009) reinterpreted these procedures for netnography as follows: first, determine the research target and understand its cultural characteristics; second, collect and analyze information; third, ensure the credibility of interpretation; fourth, pay attention to research ethics; lastly, obtain respondents’ feedback. To adapt my research to these guidelines, I set my research process as: 1. target Plymouth Chinese overseas students and explain the Chinese guanxi; 2. collect and analyze data through the existing WeChat group created by the Plymouth Chinese Students and Scholars Association (CSSA); 3. confirm the identity of key influencers in this virtual group; 4. get feedback from respondents as much as possible.
https://en.wikipedia.org/wiki/Netnography

What is Netnography from Harrison Hayes, LLC
https://nsuworks.nova.edu/tqr/vol15/iss5/13/

suggestions for academic writing

These are suggestions from Google Groups with doctoral cohorts 6, 7, 8, and 9 from the Ed Leadership program.

How to find a book from InterLibrary Loan: find book ILL

Citing someone else’s citation:

http://library.northampton.ac.uk/liberation/ref/adv_harvard_else.php

http://guides.is.uwa.edu.au/c.php?g=380288&p=3109460
use them sparingly:
http://www.apastyle.org/learn/faqs/cite-another-source.aspx
Please take a look at “Paraphrasing sources” in
http://www.roanestate.edu/owl/usingsources_mla.html
It gives you a good idea how paraphrasing will distance you from the possibility of plagiarizing.
An example of resolution by this peer-reviewed journal article:
https://doi.org/10.19173/irrodl.v17i5.2566
Ungerer, L. M. (2016). Digital Curation as a Core Competency in Current Learning and Literacy: A Higher Education Perspective. The International Review of Research in Open and Distributed Learning, 17(5). https://doi.org/10.19173/irrodl.v17i5.2566
Dunaway (2011) suggests that learning landscapes in a digital age are networked, social, and technological. Since people commonly create and share information by collecting, filtering, and customizing digital content, educators should provide students opportunities to master these skills (Mills, 2013). In enhancing critical thinking, we have to investigate pedagogical models that consider students’ digital realities (Mihailidis & Cohen, 2013). November (as cited in Sharma & Deschaine, 2016), however warns that although the Web fulfils a pivotal role in societal media, students often are not guided on how to critically deal with the information that they access on the Web. Sharma and Deschaine (2016) further point out the potential for personalizing teaching and incorporating authentic material when educators themselves digitally curate resources by means of Web 2.0 tools.
p. 24. Communities of practice. Lave and Wenger’s (as cited in Weller, 2011) concept of situated learning and Wenger’s (as cited in Weller, 2011) idea of communities of practice highlight the importance of apprenticeship and the social role in learning.
criteria to publish a paper

Originality: Does the paper contain new and significant information adequate to justify publication?

Relationship to Literature: Does the paper demonstrate an adequate understanding of the relevant literature in the field and cite an appropriate range of literature sources? Is any significant work ignored?

Methodology: Is the paper’s argument built on an appropriate base of theory, concepts, or other ideas? Has the research or equivalent intellectual work on which the paper is based been well designed? Are the methods employed appropriate?

Results: Are results presented clearly and analyzed appropriately? Do the conclusions adequately tie together the other elements of the paper?

Implications for research, practice and/or society: Does the paper identify clearly any implications for research, practice and/or society? Does the paper bridge the gap between theory and practice? How can the research be used in practice (economic and commercial impact), in teaching, to influence public policy, in research (contributing to the body of knowledge)? What is the impact upon society (influencing public attitudes, affecting quality of life)? Are these implications consistent with the findings and conclusions of the paper?

Quality of Communication: Does the paper clearly express its case, measured against the technical language of the field and the expected knowledge of the journal’s readership? Has attention been paid to the clarity of expression and readability, such as sentence structure, jargon use, acronyms, etc.?

mixed method research

http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3deric%26AN%3dEJ971947%26site%3dehost-live%26scope%3dsite

Stanton, K. V., & Liew, C. L. (2011). Open Access Theses in Institutional Repositories: An Exploratory Study of the Perceptions of Doctoral Students. Information Research: An International Electronic Journal, 16(4).

We examine doctoral students’ awareness of and attitudes to open access forms of publication. Levels of awareness of open access and the concept of institutional repositories, publishing behaviour, and perceptions of benefits and risks of open access publishing were explored. Method: Qualitative and quantitative data were collected through interviews with eight doctoral students enrolled in a range of disciplines in a New Zealand university and a self-completion Web survey of 251 students. Analysis: Interview data were analysed thematically, then evaluated against a theoretical framework. The interview data were then used to inform the design of the survey tool. Survey responses were analysed as a single set, then by discipline, using SurveyMonkey’s online toolkit and Excel. Results: While awareness of open access and repository archiving is still low, the majority of interview and survey respondents were found to be supportive of the concept of open access. The perceived benefits of enhanced exposure and potential for sharing outweigh the perceived risks. The majority of respondents were supportive of an existing mandatory thesis submission policy. Conclusions: Low levels of awareness of the university repository remain an issue, and could be addressed by further investigating the effectiveness of different communication channels for promotion.

PLEASE NOTE:

The researchers use the qualitative approach: by interviewing participants and analyzing their responses thematically, they build the survey.
Then they administer the survey (the quantitative approach).

How do you intend to use a mixed method? Please share

paraphrasing quotes

statement of the problem

Problem statement – Wikipedia

 
Metaphors: A Problem Statement is like… 
metaphor: a novel or poetic linguistic expression where one or more words for a concept are used outside normal conventional meaning to express a similar concept (Aristotle)

  • The DNA of the research
  • A snapshot of the research
  • The foundation of the research
  • The heart of the research
  • A “taste” of the research
  • A blueprint for the study
 
 
 
Here is a good exercise for writing your problem statement:
Chapter 3
Several documents, which can be helpful in two different ways:
– check your structure and methodology
– borrow verbiage
http://education.nova.edu/Resources/uploads/app/35/files/arc_doc/writing_chpt3_quantitative_research_methods.pdf 
http://education.nova.edu/Resources/uploads/app/35/files/arc_doc/writing_chpt3_qualitative_research_methods.pdf
http://www.trinitydc.edu/sps/files/2010/09/APA-6-BGS-Quantitative-Research-Paper-August-2014.pdf

digital object identifier, or DOI

A digital object identifier (DOI) is a unique alphanumeric string assigned by a registration agency (the International DOI Foundation) to identify content and provide a persistent link to its location on the Internet. The publisher assigns a DOI when your article is published and made available electronically.

Why do we need it?

2010 changes to APA for electronic materials: digital object identifier (DOI). If a DOI is available, you no longer include a URL. Example: Author, A. A. (date). Title of article. Title of Journal, volume(number), page numbers. doi: xx.xxxxxxx

http://www.stcloudstate.edu/writeplace/_files/documents/working-with-sources/apa-electronic-material-citations.pdf

Mendeley (vs Zotero and/or RefWorks)

https://www.brighttalk.com/webcast/11355/226845?utm_campaign=Mendeley%20Webinars%202&utm_campaignPK=271205324&utm_term=OP28019&utm_content=271205712&utm_source=99&BID=799935188&utm_medium=email&SIS_ID=46360

Online Writing Tools: Four Online Tools for Writing

social media and altmetrics

According to Sugimoto et al. (2016), the use of social media platforms by researchers is high, ranging from 75 to 80% in large-scale surveys (Rowlands et al., 2011; Tenopir et al., 2013; Van Eperen & Marincola, 2011).
There is one more reason: as much as you want to dwell on the fact that you are practitioners and research is not the most important part of your job, to a great degree you may also be judged by the scientific output of your office and/or institution.
In that sense, both social media and altmetrics might suddenly become extremely important to understand and apply.
In short, altmetrics (alternative metrics) measure the impact your scientific output has on the community. Your teachers and you present, publish, and create work, which might not be formally presented and published, but may be widely reflected through, e.g., social media, and thus have impact on the community.
How such impact is measured, if measured at all, can greatly influence the money flow to your institution.
For more information, read the entire article:
Sugimoto, C. R., Work, S., Larivière, V., & Haustein, S. (2016). Scholarly use of social media and altmetrics: a review of the literature. Retrieved from https://arxiv.org/abs/1608.08112
related information:
In the comments section on this blog entry, I left notes on:
Thelwall, M., & Wilson, P. (2016). Mendeley readership altmetrics for medical articles: An analysis of 45 fields. Journal of the Association for Information Science and Technology, 67(8), 1962–1972. https://doi.org/10.1002/asi.23501
Todd Tetzlaff is using Mendeley and he might be the only one to benefit … 🙂
Here is some food for thought from the article above:
Doctoral students and junior researchers are the largest reader group in Mendeley (Haustein & Larivière, 2014; Jeng et al., 2015; Zahedi, Costas, & Wouters, 2014a).
Studies have also provided evidence of high rates of blogging among certain subpopulations: for example, approximately one-third of German university staff (Pscheida et al., 2013) and one-fifth of UK doctoral students use blogs (Carpenter et al., 2012).
Social data sharing platforms provide an infrastructure to share various types of scholarly objects, including datasets, software code, figures, presentation slides, and videos, and for users to interact with these objects (e.g., comment on, favorite, like, and reuse). Platforms such as Figshare and SlideShare disseminate scholars’ various types of research outputs such as datasets, figures, infographics, documents, videos, posters, or presentation slides (Enis, 2013) and display views, likes, and shares by other users (Mas-Bleda et al., 2014).
Frequently mentioned social platforms in scholarly communication research include research-specific tools such as Mendeley, Zotero, CiteULike, BibSonomy, and Connotea (now defunct) as well as general tools such as Delicious and Digg (Hammond, Hannay, Lund, & Scott, 2005; Hull, Pettifer, & Kell, 2008; Priem & Hemminger, 2010; Reher & Haustein, 2010).
qualitative research
“The focus group interviews were analysed based on the principles of interpretative phenomenology”
 
1. What is interpretative phenomenology?
Here is an excellent article on ResearchGate:
 
https://www.researchgate.net/publication/263767248_A_practical_guide_to_using_Interpretative_Phenomenological_Analysis_in_qualitative_research_psychology
 
and a discussion from the psychologists regarding the weaknesses when using IPA (Interpretative phenomenological analysis)

https://thepsychologist.bps.org.uk/volume-24/edition-10/methods-interpretative-phenomenological-analysis

2. What is Constant Comparative Method?

http://www.qualres.org/HomeCons-3824.html

NVivo shareware

http://blog.stcloudstate.edu/ims/2017/01/11/nvivo-shareware/

Qualitative and quantitative research in layman’s terms
podcast:
https://itunes.apple.com/us/podcast/how-scientific-method-works/id278981407?i=1000331586170&mt=2
If you are not podcast fans, I understand. The link above is a pain in the behind to make work if you are not familiar with using podcasts.
Here is an easier way to find it:
1. Open your cell phone and find the podcast icon, which is pre-installed, but you might have not ever used it [yet].
2. In the app, use the search option and type “stuff you should know”.
3. The podcast will pop up. Scroll and find “How the scientific method works,” and/or search for it if you can.
Once you can play it on the phone, you have to find time to listen to it.
I listen to podcasts when I have to do unpleasant chores such as: 1. walking to work, 2. washing the dishes, 3. flying long hours (very rarely), 4. driving in the car.
There are a bunch of other situations when you may be strapped, and instead of feeling disgruntled and stressed, you can deliver the mental [junk] food to your brain.
Earbuds help me: 1. forget the unpleasant task, 2. utilize the time, 3. learn cool stuff.
Here are the podcasts I am subscribed to, besides “stuff you should know”:
TED Radio Hour
TED Talks Education
NPR Fresh Air
BBC History
and a bunch of others which, if I have not listened to them for a year, I erase; and if I peruse the top charts and something piques my interest, I give it a try.
If I did not manage to convince you to try podcasts, that is totally fine; do not feel obligated.
However, you can listen to this particular podcast on your computer, if you don’t want to download it on your phone.
It is a one-hour show by two geeks who are trying to make funny (and they do) a dry matter such as quantitative vs. qualitative research, which you want to internalize:
1. Around minute 12, they talk about inductive versus deductive reasoning to introduce qualitative versus quantitative research. It is good to listen to their musings, since your dissertation goes through inductive and deductive processes, and understanding them can help you better control your dissertation writing.
2. The scientific method, hypothesis, etc. (around min 17).
While this is not a Ph.D. but an Ed.D. program and we do not delve into the philosophy of science, dissertation theory, etc., the more you know about this process, the better control you have over your dissertation.
3. Methods and how you prove (Chapter 3) are discussed around min 35.
4. Dependent and independent variables, and how you do your research in general (min ~45).
In short: listen, and please do share your thoughts below. You do not have to be kind to this source offering. Actually, be as critical as possible, so you can help me decide if I should offer it to the next cohort; thank you in advance for your feedback.

 

 

coding ethics unpredictability

Franken-algorithms: the deadly consequences of unpredictable code

The Guardian, Thu 30 Aug 2018

https://www.theguardian.com/technology/2018/aug/29/coding-algorithms-frankenalgos-program-danger

Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again”, they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and inured to real time monitoring as they can be – are in principle predictable and interrogable, these ones are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call these “frankenalgos” – though Mary Shelley couldn’t have made this up.

Twenty years ago, George Dyson anticipated much of what is happening today in his classic book Darwin Among the Machines. The problem, he tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable, and that what is predictable can be controlled. Both assumptions turn out to be wrong.

“It’s proceeding on its own, in little bits and pieces,” he says. “What I was obsessed with 20 years ago that has completely taken over the world today are multicellular, metazoan digital organisms, the same way we see in biology, where you have all these pieces of code running on people’s iPhones, and collectively it acts like one multicellular organism.

“There’s this old law called Ashby’s law that says a control system has to be as complex as the system it’s controlling, and we’re running into that at full speed now, with this huge push to build self-driving cars where the software has to have a complete model of everything, and almost by definition we’re not going to understand it. Because any model that we understand is gonna do the thing like run into a fire truck ’cause we forgot to put in the fire truck.”

Walsh believes this makes it more, not less, important that the public learn about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect. When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting: “I would suggest the problem is that algorithm now means any large, complex decision making software system and the larger environment in which it is embedded, which makes them even more unpredictable.” A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in tech, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs. Where there are choices to be made, that’s where ethics comes in.

Our existing system of tort law, which requires proof of intention or negligence, will need to be rethought. A dog is not held legally responsible for biting you; its owner might be, but only if the dog’s action is thought foreseeable.

model-based programming, in which machines do most of the coding work and are able to test as they go.

As we wait for a technological answer to the problem of soaring algorithmic entanglement, there are precautions we can take. Paul Wilmott, a British expert in quantitative analysis and vocal critic of high frequency trading on the stock market, wryly suggests “learning to shoot, make jam and knit.”

The venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine’s Hippocratic oath, to instruct computing professionals to do no harm and consider the wider impacts of their work.

+++++++++++
more on coding in this IMS blog
http://blog.stcloudstate.edu/ims?s=coding

Measuring Learning Outcomes of New Library Initiatives

International Conference on Qualitative and Quantitative Methods in Libraries 2018 (QQML2018)

conf@qqml.net

Where: Cultural Centre Of Chania
ΠΝΕΥΜΑΤΙΚΟ ΚΕΝΤΡΟ ΧΑΝΙΩΝ

https://goo.gl/maps/8KcyxTurBAL2

also live broadcast at https://www.facebook.com/InforMediaServices/videos/1542057332571425/


When: May 24, 12:30PM-2:30PM (local time; 4:30AM-6:30AM, Chicago Central)

Programme QQML2018-23pgopv

Live broadcasts from some of the sessions:

#QQML2018 Sebastian Bock w @Springer Nature about citation #metrics and beyond


Here is a link to Sebastian Bock’s presentation:
https://drive.google.com/file/d/1jSOyNXQuqgGTrhHIapq0uxAXQAvkC6Qb/view


#qqml2018 after two hurricanes presenting


#qqml2018 Carla Fulgham hashtags


Information literacy skills and college students from Jade Geary

Session 1:
http://qqml.org/wp-content/uploads/2017/09/SESSION-Miltenoff.pdf

Session Title: Measuring Learning Outcomes of New Library Initiatives
Coordinator: Professor Plamen Miltenoff, Ph.D., MLIS, St. Cloud State University, USA
Contact: pmiltenoff@stcloudstate.edu
Scope & rationale: The advent of new technologies, such as virtual/augmented/mixed reality, and new pedagogical concepts, such as gaming and gamification, steers academic libraries into uncharted territories. There is not yet sufficiently compiled research and, respectively, proof to justify financial and workforce investment in such endeavors. On the other hand, dwindling resources for education press administration to demand justification for new endeavors. As has been established already, technology does not teach; teachers do, and a growing body of literature questions the impact of educational technology on educational outcomes. This session seeks to bring together presentations and discussion, both qualitative and quantitative research, related to new pedagogical and technological endeavors in academic libraries as part of education on campus. By experimenting with new technologies such as 360-degree video and new pedagogical approaches such as gaming and gamification, does the library improve learning? By experimenting with new technologies and pedagogical approaches, does the library help campus faculty adopt these methods and improve their teaching? How can results be measured and demonstrated?

Conference program

http://qqml.org/wp-content/uploads/2017/09/7.5.2018-programme_final.pdf

More information and bibliography:

https://www.academia.edu/Documents/in/Videogame_and_Virtual_World_Technologies_Serious_Games_applications_in_Education_and_Training

https://www.academia.edu/Documents/in/Measurement_and_evaluation_in_education

Social Media:
https://www.facebook.com/QQML-International-Conference-575508262589919/

 

 

 
