Tuesday, June 16, 2020 from 1:00 PM to 2:00 PM (CDT)
This event will be an expert panel considering research in and about VR. The panelists are Sam Reno, Géraldine Perriguey, Anthony Chaston, PhD, and Evelien Ydo, all of whom have presented for the research track before (biographies below; see the EDVR YouTube channel for their previous presentations). The event will be highly interactive, and the audience is welcome to introduce topics and questions for the panel to discuss. At the end of the event there will also be some time to network.
The Educators in VR Research Team features researchers from across the spectrum of VR/AR/XR research and development, coming together to share their knowledge, techniques, and research and learn from each other. Join us to discuss the possibilities and potential of research in VR. We host regular meetups and workshops for discussion and learning.
Please also find materials that might help you organize your thoughts and expedite your Chapter 2 writing.
Do you agree with (did you use) the following observations?
The purpose of the review of the literature is to prove that no one has studied the gap in the knowledge outlined in Chapter 1. The subjects in the Review of Literature should have been introduced in the Background of the Problem in Chapter 1. Chapter 2 is not a textbook of subject matter loosely related to the subject of the study. Every research study that is mentioned should in some way bear upon the gap in the knowledge, and each study that is mentioned should end with the comment that the study did not collect data about the specific gap in the knowledge of the study as outlined in Chapter 1.
The review should be laid out in major sections introduced by organizational generalizations. An organizational generalization can be a subheading so long as the last sentence of the previous section introduces the reader to what the next section will contain. The purpose of this chapter is to cite major conclusions, findings, and methodological issues related to the gap in the knowledge from Chapter 1. It is written for knowledgeable peers from easily retrievable sources of the most recent issue possible.
Empirical literature published within the previous five years is reviewed to prove that no mention of the specific gap in the knowledge that is the subject of the dissertation appears in the body of knowledge. Common sense should prevail. Often, to provide a history of the research, it is necessary to cite studies older than five years. The object is to acquaint the reader with existing studies relative to the gap in the knowledge and to describe who has done the work, when and where the research was completed, and what approaches were used for the methodology, instrumentation, statistical analyses, or all of these subjects.
If very little literature exists, the wise student will write, in effect, a several-paragraph book report by citing the purpose of the study, the methodology, the findings, and the conclusions. If there is an abundance of studies, cite only the most recent studies. Firmly establish the need for the study. Defend the methods and procedures by pointing out other relevant studies that implemented similar methodologies. It should be frequently pointed out to the reader why a particular study did not match the exact purpose of the dissertation.
The Review of Literature ends with a Conclusion that clearly states that, based on the review of the literature, the gap in the knowledge that is the subject of the study has not been studied. Remember that a “summary” is different from a “conclusion.” A Summary, the final main section, introduces the next chapter.
When collecting qualitative data, how many people should be interviewed? Is there a minimum or a maximum?
Here is my take on it:
Simple question, not so simple answer.
Generally, the number of respondents depends on the type of qualitative inquiry: case study methodology, phenomenological study, ethnographic study, or ethnomethodology. However, a rule of thumb is for scholars to reach the saturation point, that is, the point at which no fresh information is uncovered in response to an issue that is of interest to the researcher.
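As a rough illustration of the saturation idea (not drawn from any cited methodology), the sketch below tracks how many new themes each successive interview contributes; the interview sets and theme names are invented for the example.

```python
# Hypothetical illustration of reaching saturation: count how many NEW
# themes (codes) each successive interview contributes. All data invented.
interviews = [
    {"cost", "training", "privacy"},      # interview 1
    {"training", "motivation"},           # interview 2
    {"privacy", "motivation", "access"},  # interview 3
    {"cost", "access"},                   # interview 4: nothing new
    {"training", "privacy"},              # interview 5: nothing new
]

seen = set()
new_per_interview = []
for themes in interviews:
    fresh = themes - seen                 # themes not heard before
    new_per_interview.append(len(fresh))
    seen |= themes

print(new_per_interview)  # new themes contributed by each interview
```

Saturation is suggested once several consecutive interviews contribute nothing new; in practice the judgment is the researcher's, not an arithmetic rule.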
If your qualitative method is designed to meet rigor and trustworthiness, thick, rich data is important. To achieve these principles, you would need at least 12 interviews, ensuring your participants are the holders of knowledge in the area you intend to investigate. In grounded theory you could start with 12 and interview more if your data is not rich enough.
In IPA (interpretative phenomenological analysis), the norm tends to be six interviews.
You may check sample sizes in peer-reviewed qualitative publications in your field to find out about popular practice. It all depends on the research problem, the choice of specific qualitative approach, and the theoretical framework, so the answer to your question will vary from a few to a few dozen.
How many interviews are needed in qualitative research?
There are different views in the literature, and no one agrees on an exact number. Here I review some of the most-cited references. Based on Creswell (2014), it is estimated that 16 participants will provide rich and detailed data. Several researchers agree that 10–15 in-depth interviews are sufficient (Guest, Bunce, & Johnson, 2006; Baker & Edwards, 2012).
Your methodological choices need to reflect your ontological position and understanding of knowledge production, and that is also where you can argue a strong case for smaller qualitative studies, as you say. This is not only a problem for certain subjects; I think it is a problem in certain departments or journals across the board of social science research, as it is a question of academic culture.
Here is more serious literature and research (in case you need to cite it in Chapter 3):
Sample Size and Saturation in PhD Studies Using Qualitative Interviews
Gaskell, George (2000). Individual and Group Interviewing. In Martin W. Bauer & George Gaskell (Eds.), Qualitative Researching With Text, Image and Sound. A Practical Handbook (pp. 38-56). London: SAGE Publications.
Books on intro to stat modeling are available at the library. I understand the major pain that borrowing books from the SCSU library can constitute, but you can use the titles and authors to see if you can borrow the books from your local public library.
I also sought and shared with you “visual” explanations of the basic terms and concepts. Once you start looking at those, you should be able to research further (e.g., on YouTube) and find suitable sources for your learning style.
I (and future cohorts) will deeply appreciate it if you remember to share those “suitable sources for your learning style,” either in this Google Group thread and/or in the comments section of the blog entry: http://blog.stcloudstate.edu/ims/2017/07/10/intro-to-stat-modeling. Your Facebook group page is also a good place to discuss among ourselves best practices for learning and using research methods for your Chapter 3.
Watching the video, you may recognize the same #BooleanSearch techniques from our BI (bibliographic instruction) session last semester.
Considering the preponderance of information in 2017: your Chapter 2 is NOT ONLY about finding information regarding your topic.
Your Chapter 2 is about proving your extensive research of the existing literature.
The techniques presented in the short video will arm you with methods to dig deeper and look further.
If you would like to do a decent job exploring all corners of the vast area called the Internet, please consider other search engines similar to Google Scholar:
Because the questionnaire data comprised both Likert scales and open questions, they were analyzed both quantitatively and qualitatively. Textual data (open responses) were qualitatively analyzed by coding: each segment (e.g., a group of words) was assigned to a semantic reference category, as systematically and rigorously as possible. For example, “Using an iPad in class really motivates me to learn” was assigned to the category “positive impact on motivation.” The qualitative analysis was performed using an adapted version of the approaches developed by L’Écuyer (1990) and Huberman and Miles (1991, 1994). Thus, we adopted a content analysis approach using QDAMiner software, which is widely used in qualitative research (see Fielding, 2012; Karsenti, Komis, Depover, & Collin, 2011). For the quantitative analysis, we used SPSS 22.0 software to conduct descriptive and inferential statistics. We also conducted inferential statistics to further explore the iPad’s role in teaching and learning, along with its motivational effect. The results will be presented in a subsequent report (Fievez & Karsenti, 2013).
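As a hedged sketch of the coding step described above, the snippet below assigns open responses to semantic reference categories by keyword matching. The category names and keyword lists are invented for illustration; QDAMiner and a human coder do far more nuanced work than this first-pass automation.

```python
# Minimal keyword-based coding of open survey responses into semantic
# categories. Categories and keyword lists are invented for illustration.
CATEGORIES = {
    "positive impact on motivation": ["motivates", "motivation", "engaging"],
    "technical difficulties": ["crash", "battery", "connection"],
}

def code_segment(segment: str) -> list[str]:
    """Return every category whose keywords appear in the segment."""
    text = segment.lower()
    return [cat for cat, words in CATEGORIES.items()
            if any(w in text for w in words)]

print(code_segment("Using an iPad in class really motivates me to learn"))
```

In a real content analysis, such automated tagging would only suggest candidate codes for a researcher to confirm or reject.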
The 20th-century practice of conducting qualitative research through oral interviews and then processing the results manually triggered, in the second half of the century, sometimes condescending attitudes from researchers in the exact sciences.
The reason was the advent of computing power in the second half of the 20th century, which allowed the exact sciences to claim “scientific” and “data-based” results.
One statistical package, SPSS, is today widely known and considered a magnificent tool for building solid, statistically based argumentation, which further perpetuates the perceived superiority of quantitative over qualitative methods.
At the same time, qualitative researchers continue to lag behind, mostly due to the inertia of their approach to qualitative analysis. Qualitative analysis continues to be processed in the olden ways. While there is nothing wrong with the “olden” ways, harnessing computational power can streamline the process and even present options that the “human eye” sometimes misses.
Below are some suggestions you may consider when you embark on the path of qualitative research.
Palys and Atchison (2012) present a compelling case for bringing your qualitative research to the level of quantitative research by using modern tools for qualitative analysis.
1. The authors rightly promote NVivo as the “Jaguar” of qualitative analysis tools. Be aware, however, of the existence of other “Geo Metro” tools, which, for your research, might achieve the same result (see the bottom of this blog entry).
text mining: https://en.wikipedia.org/wiki/Text_mining Text mining, also referred to as text data mining, roughly equivalent to text analytics, is the process of deriving high-quality information from text. High-quality information is typically derived through the devising of patterns and trends through means such as statistical pattern learning. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output. https://ischool.syr.edu/infospace/2013/04/23/what-is-text-mining/
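The structuring-then-pattern-deriving process can be sketched minimally, here as simple term-frequency mining with the Python standard library; the stopword list and sample sentence are invented for the example, and real pipelines add parsing, linguistic features, and database storage as described above.

```python
# A tiny text-mining sketch: structure raw text into tokens, then derive a
# simple pattern (the most frequent terms). Stopwords and sample invented.
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "a", "to", "in", "is"}

def top_terms(text: str, n: int = 3) -> list[tuple[str, int]]:
    tokens = re.findall(r"[a-z]+", text.lower())            # structure the input
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return counts.most_common(n)                            # derive a pattern

sample = "Big data in the library: the library collects data, and data grows."
print(top_terms(sample))
```

Even this toy example shows the evaluation step falling to the human: the counts suggest a theme, but a domain expert decides whether it matters.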
Qualitative data is descriptive data that cannot be measured in numbers and often includes qualities of appearance like color, texture, and textual description. Quantitative data is numerical, structured data that can be measured. However, there is often slippage between qualitative and quantitative categories. For example, a photograph might traditionally be considered “qualitative data,” but when you break it down to the level of pixels, it can be measured.
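To illustrate that slippage, the sketch below treats a "photograph" as a grid of grayscale pixel intensities and computes a measurable, quantitative summary; the pixel values are invented for the example.

```python
# Quantifying "qualitative" data: a tiny grayscale image represented as a
# grid of pixel intensities (0 = black, 255 = white). Values invented.
image = [
    [ 10,  40,  40],
    [200, 230, 120],
    [ 90,  60, 255],
]

pixels = [p for row in image for p in row]
brightness = sum(pixels) / len(pixels)   # a measurable, quantitative summary
print(f"mean brightness: {brightness:.1f}")
```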
A word of caution: text mining does not generate new facts and is not an end in and of itself. The process is most useful when the data it generates can be further analyzed by a domain expert, who can bring additional knowledge for a more complete picture. Still, text mining creates new relationships and hypotheses for experts to explore further.
Pros and Cons of Computer Assisted Qualitative Data Analysis Software
more on quantitative research:
Asamoah, D. A., Sharda, R., Hassan Zadeh, A., & Kalgotra, P. (2017). Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course. Decision Sciences Journal of Innovative Education, 15(2), 161–190. https://doi.org/10.1111/dsji.12125
literature on quantitative research:
St. Cloud State University MC Main Collection – 2nd floor
AZ195 .B66 2015
p. 161 Data scholarship in the Humanities
p. 166 When Are Data?
Philip Chen, C. L., & Zhang, C.-Y. (2014). Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Information Sciences, 275(Supplement C), 314–347. https://doi.org/10.1016/j.ins.2014.01.015
Applications for the 2018 Institute will be accepted between December 1, 2017 and January 27, 2018. Scholars accepted to the program will be notified in early March 2018.
Learning to Harness Big Data in an Academic Library
Research on Big Data per se, as well as on the importance and organization of the process of Big Data collection and analysis, is well underway. The complexity of the process comprising “Big Data,” however, deprives organizations of a ubiquitous “blueprint.” The planning, structuring, administration, and execution of the process of adopting Big Data in an organization, whether corporate or educational, remain elusive. No less elusive is the adoption of Big Data practices among libraries themselves. Seeking the commonalities and differences in the adoption of Big Data practices among libraries may be a suitable start to help libraries transition to adopting Big Data and to restructuring organizational and daily activities based on Big Data decisions.
The redefinition of humanities scholarship has received major attention in higher education. The advent of digital humanities challenges aspects of academic librarianship. Data literacy is a critical need for digital humanities in academia. The March 2016 Library Juice Academy Webinar led by John Russel exemplifies the efforts to help librarians become versed in obtaining programming skills, and respectively, handling data. Those are first steps on a rather long path of building a robust infrastructure to collect, analyze, and interpret data intelligently, so it can be utilized to restructure daily and strategic activities. Since the phenomenon of Big Data is young, there is a lack of blueprints on the organization of such infrastructure. A collection and sharing of best practices is an efficient approach to establishing a feasible plan for setting a library infrastructure for collection, analysis, and implementation of Big Data.
Limitations. This research can only organize the results from the responses of librarians and research into how libraries present themselves to the world in this arena. It may be able to make some rudimentary recommendations. However, based on each library’s specific goals and tasks, further research and work will be needed.
Big Data is becoming an omnipresent term. It is widespread among different disciplines in academia (De Mauro, Greco, & Grimaldi, 2016). This leads to “inconsistency in meanings and necessity for formal definitions” (De Mauro et al., 2016, p. 122). Similarly to De Mauro et al. (2016), Hashem, Yaqoob, Anuar, Mokhtar, Gani, and Ullah Khan (2015) seek standardization of definitions. The main connected “themes” of this phenomenon must be identified, and the connections to Library Science must be sought. A prerequisite for a comprehensive definition is the identification of Big Data methods. Bughin, Chui, and Manyika (2011), Chen et al. (2012), and De Mauro et al. (2015) single out the methods needed to complete the process of building a comprehensive definition.
In conjunction with identifying the methods, the properties must be established: volume, velocity, and variety, as defined by Laney (2001), are the three properties of Big Data accepted across the literature. Daniel (2015) defines three stages of Big Data: collection, analysis, and visualization. According to Daniel (2015), Big Data in higher education “connotes the interpretation of a wide range of administrative and operational data” (p. 910), and according to Hilbert (2013), as cited in Daniel (2015), Big Data “delivers a cost-effective prospect to improve decision making” (p. 911).
The importance of understanding the process of Big Data analytics is well understood in academic libraries. Examples of such “administrative and operational” use for the cost-effective improvement of decision making are the Finch and Flenner (2016) and Eaton (2017) case studies of the use of data visualization to assess an academic library collection and restructure the acquisition process. Sugimoto, Ding, and Thelwall (2012) call for a discussion of Big Data for libraries. According to the 2017 NMC Horizon Report, “Big Data has become a major focus of academic and research libraries due to the rapid evolution of data mining technologies and the proliferation of data sources like mobile devices and social media” (Adams Becker et al., 2017, p. 38).
Power (2014) elaborates on the complexity of Big Data in regard to decision-making and offers ideas for organizations on building a system to deal with Big Data. As explained by Boyd and Crawford (2012) and cited in De Mauro et al (2016), there is a danger of a new digital divide among organizations with different access and ability to process data. Moreover, Big Data impacts current organizational entities in their ability to reconsider their structure and organization. The complexity of institutions’ performance under the impact of Big Data is further complicated by the change of human behavior, because, arguably, Big Data affects human behavior itself (Schroeder, 2014).
De Mauro et al. (2015) touch on the impact of Big Data on libraries. The reorganization of academic libraries in light of Big Data, and the handling of Big Data by libraries, is in close conjunction with the reorganization of the entire campus and the handling of Big Data by the educational institution. In addition to the disruption posed by the Big Data phenomenon, higher education is facing global changes of an economic, technological, social, and educational character. Daniel (2015) uses a chart to illustrate the complexity of these global trends. Parallel to the Big Data developments in America and Asia, the European Union is offering access to an EU open data portal (https://data.europa.eu/euodp/home). Moreover, the Association of European Research Libraries expects, under the H2020 program, to increase “the digitization of cultural heritage, digital preservation, research data sharing, open access policies and the interoperability of research infrastructures” (Reilly, 2013).
The challenges posed by Big Data to human and social behavior (Schroeder, 2014) are no less significant than the impact of Big Data on learning. Cohen, Dolan, Dunlap, Hellerstein, and Welton (2009) propose a road map for “more conservative organizations” (p. 1492) to overcome their reservations and/or inability to handle Big Data and adopt a practical approach to its complexity. Two Chinese researchers describe deep learning as the “set of machine learning techniques that learn multiple levels of representation in deep architectures” (Chen & Lin, 2014, p. 515). Deep learning requires “new ways of thinking and transformative solutions” (Chen & Lin, 2014, p. 523). Another pair of researchers from China presents a broad overview of the various societal, business, and administrative applications of Big Data, including a detailed account and definitions of the processes and tools accompanying Big Data analytics. Their American counterparts share this opinion regarding the need to “think about the core principles and concepts that underline the techniques, and also the systematic thinking” (Provost & Fawcett, 2013, p. 58). De Mauro, Greco, and Grimaldi (2016), similarly to Provost and Fawcett (2013), draw attention to the urgent necessity of training new types of specialists to work with such data. As early as 2012, Davenport and Patil (2012), as cited in De Mauro et al. (2016), envisioned hybrid specialists able to manage both technological knowledge and academic research. Similarly, Provost and Fawcett (2013) mention the efforts of “academic institutions scrambling to put together programs to train data scientists” (p. 51). Further, Asamoah, Sharda, Zadeh, and Kalgotra (2017) share a specific plan for the design and delivery of a big data analytics course.
At the same time, librarians working with data acknowledge the shortcomings in the profession: librarians “are practitioners first and generally do not view usability as a primary job responsibility, usually lack the depth of research skills needed to carry out a fully valid” piece of data-based research (Emanuel, 2013, p. 207).
Borgman (2015) devotes an entire book to data and scholarly research and goes beyond the already well-established facts regarding the importance of Big Data, its implications, and the technical, societal, and educational impact and complications it poses. Borgman elucidates the importance of knowledge infrastructure and the necessity of understanding the importance and complexity of building such infrastructure in order to be able to take advantage of Big Data. In a similar fashion, a team of Chinese scholars draws attention to the complexity of data mining and Big Data and the necessity to approach the issue in an organized fashion (Wu, Zhu, Wu, & Ding, 2014).
Bruns (2013) shifts the conversation from the “macro” architecture of Big Data, the focus of Borgman (2015) and Wu et al. (2014), and ponders the influx of unprecedented opportunities for the humanities in academia with the advent of Big Data. Does the seeming omnipresence of Big Data mean for the humanities a “railroading” into “scientificity”? How will research and publishing change with the advent of Big Data across academic disciplines?
Reyes (2015) shares her “skinny” approach to Big Data in education. She presents a comprehensive structure for educational institutions to shift “traditional” analytics to “learner-centered” analytics (p. 75) and identifies the participants in the Big Data process in the organization. The model is applicable for library use.
Being new and uncharted territory, Big Data and Big Data analytics can pose ethical issues. Willis (2013) focuses on Big Data applications in education, namely the ethical questions for higher education administrators and the expectation that Big Data analytics can predict students’ success. Daries, Reich, Waldo, Young, and Whittinghill (2014) discuss rather similar issues regarding the balance between data and student privacy regulations. The privacy issues accompanying data are also discussed by Tene and Polonetsky (2012).
Privacy issues are habitually connected to security and surveillance issues. Andrejevic and Gates (2014) point out that in decision making “generated by data mining, the focus is not on particular individuals but on aggregate outcomes” (p. 195). Van Dijck (2014) goes into further detail regarding the perils posed by metadata and data to society, in particular to the privacy of citizens. Bail (2014) addresses the same issue regarding the impact of Big Data on societal issues, but underlines the leading role of cultural sociologists and their theories in the correct application of Big Data.
Library organizations have been traditional proponents of core democratic values such as the protection of privacy and the elucidation of related ethical questions (Miltenoff & Hauptman, 2005). In recent books about Big Data and libraries, ethical issues are an important part of the discussion (Weiss, 2018). Library blogs also discuss these issues (Harper & Oltmann, 2017). An academic library’s role is to educate its patrons about those values. Sugimoto et al. (2012) reflect on the need for a discussion about Big Data in Library and Information Science. They clearly draw attention to the library “tradition of organizing, managing, retrieving, collecting, describing, and preserving information” (p. 1), as well as to library and information science being “a historically interdisciplinary and collaborative field, absorbing the knowledge of multiple domains and bringing the tools, techniques, and theories” (p. 1). Sugimoto et al. (2012) sought a wide discussion among the library profession regarding the implications of Big Data for the profession, no differently from the activities in other fields (e.g., Wixom, Ariyachandra, Douglas, Goul, Gupta, Iyer, Kulkarni, Mooney, Phillips-Wren, & Turetken, 2014). A current Andrew Mellon Foundation grant for Visualizing Digital Scholarship in Libraries seeks an opportunity to view “both macro and micro perspectives, multi-user collaboration and real-time data interaction, and a limitless number of visualization possibilities – critical capabilities for rapidly understanding today’s large data sets” (Hwangbo, 2014).
The importance of the library with its traditional roles, as described by Sugimoto et al (2012) may continue, considering the Big Data platform proposed by Wu, Wu, Khabsa, Williams, Chen, Huang, Tuarob, Choudhury, Ororbia, Mitra, & Giles (2014). Such platforms will continue to emerge and be improved, with librarians as the ultimate drivers of such platforms and as the mediators between the patrons and the data generated by such platforms.
Every library needs to find its place in the large organization and in society in regard to this very new and very powerful phenomenon called Big Data. Libraries might not have the trained staff to become a leader in the process of organizing and building the complex mechanism of this new knowledge architecture, but librarians must educate and train themselves to be worthy participants in this new establishment.
The study will be cleared by the SCSU IRB.
The survey will collect responses from the library population regarding its readiness to use Big Data and its actual use of Big Data. The survey URL will be sent to (academic?) libraries around the world.
Data will be processed through SPSS. Open-ended results will be processed manually. The preliminary research design presupposes a mixed-methods approach.
The study will include closed-ended survey questions and open-ended questions. The first part of the study (closed-ended, quantitative questions) will be completed through an online survey. Participants will be asked to complete the survey using a link they receive through e-mail.
Mixed methods research was defined by Johnson and Onwuegbuzie (2004) as “the class of research where the researcher mixes or combines quantitative and qualitative research techniques, methods, approaches, concepts, or language into a single study” (p. 17). Quantitative and qualitative methods can be combined if they are used to complement each other, because the methods can measure different aspects of the research questions (Sale, Lohfeld, & Brazil, 2002).
An online survey of 10–15 questions, with 3–5 demographic questions and the rest regarding the use of tools.
1–2 open-ended questions at the end of the survey to probe for a follow-up mixed-methods approach (an opportunity for a qualitative study).
Data analysis techniques: survey results will be exported to SPSS and analyzed accordingly. The final survey design will determine the appropriate statistical approach.
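As a hedged illustration of the descriptive statistics planned (the actual analysis is slated for SPSS), the sketch below computes summary statistics for one invented Likert-scale item using the Python standard library.

```python
# Illustrative descriptive statistics for one Likert-scale survey item
# (1 = strongly disagree ... 5 = strongly agree). Responses are invented;
# the study's actual analysis is planned in SPSS.
import statistics

responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

mean = statistics.mean(responses)
median = statistics.median(responses)
stdev = statistics.stdev(responses)          # sample standard deviation

print(f"n={len(responses)} mean={mean:.2f} median={median} stdev={stdev:.2f}")
```

Inferential statistics (e.g., group comparisons across the demographic items) would follow once the final survey design is fixed.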
Complete literature review and identify areas of interest – two months
Prepare and test instrument (survey) – one month
IRB and other details – one month
Generate a list of potential libraries to distribute the survey – one month
Contact libraries; follow up and contact again, if necessary (low turnaround) – one month
Collect and analyze data – two months
Write out data findings – one month
Complete manuscript – one month
Proofreading and other details – one month
Significance of the work
While it has been widely acknowledged that Big Data (and its handling) is changing higher education (http://blog.stcloudstate.edu/ims?s=big+data) as well as academic libraries (http://blog.stcloudstate.edu/ims/2016/03/29/analytics-in-education/), it remains nebulous how Big Data is handled in the academic library and, respectively, how it is related to the handling of Big Data on campus. Moreover, the visualization of Big Data between units on campus remains in progress, along with any policymaking based on the analysis of such data (hence the need for comprehensive visualization).
This research will aim to gain an understanding of (a) how librarians are handling Big Data; (b) how they relate their Big Data output to the campus output of Big Data; and (c) how librarians in particular and campus administration in general are tuning their practices based on the analysis.
Based on the survey returns (if there is a statistically significant return), this research might consider juxtaposing practices from academic libraries with practices from special libraries (especially corporate libraries) and from public and school libraries.
Adams Becker, S., Cummins, M., Davis, A., Freeman, A., Giesinger Hall, C., Ananthanarayanan, V., … Wolfson, N. (2017). NMC Horizon Report: 2017 Library Edition.
Andrejevic, M., & Gates, K. (2014). Big Data Surveillance: Introduction. Surveillance & Society, 12(2), 185–196.
Asamoah, D. A., Sharda, R., Hassan Zadeh, A., & Kalgotra, P. (2017). Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course. Decision Sciences Journal of Innovative Education, 15(2), 161–190. https://doi.org/10.1111/dsji.12125
Cohen, J., Dolan, B., Dunlap, M., Hellerstein, J. M., & Welton, C. (2009). MAD Skills: New Analysis Practices for Big Data. Proc. VLDB Endow., 2(2), 1481–1492. https://doi.org/10.14778/1687553.1687576
Daniel, B. (2015). Big Data and analytics in higher education: Opportunities and challenges. British Journal of Educational Technology, 46(5), 904–920. https://doi.org/10.1111/bjet.12230
Daries, J. P., Reich, J., Waldo, J., Young, E. M., Whittinghill, J., Ho, A. D., … Chuang, I. (2014). Privacy, Anonymity, and Big Data in the Social Sciences. Commun. ACM, 57(9), 56–63. https://doi.org/10.1145/2643132
De Mauro, A., Greco, M., & Grimaldi, M. (2015). What is big data? A consensual definition and a review of key research topics. AIP Conference Proceedings, 1644(1), 97–104. https://doi.org/10.1063/1.4907823
Emanuel, J. (2013). Usability testing in libraries: methods, limitations, and implications. OCLC Systems & Services: International Digital Library Perspectives, 29(4), 204–217. https://doi.org/10.1108/OCLC-02-2013-0009
Hashem, I. A. T., Yaqoob, I., Anuar, N. B., Mokhtar, S., Gani, A., & Ullah Khan, S. (2015). The rise of “big data” on cloud computing: Review and open research issues. Information Systems, 47(Supplement C), 98–115. https://doi.org/10.1016/j.is.2014.07.006
Philip Chen, C. L., & Zhang, C.-Y. (2014). Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Information Sciences, 275(Supplement C), 314–347. https://doi.org/10.1016/j.ins.2014.01.015
Sugimoto, C. R., Ding, Y., & Thelwall, M. (2012). Library and information science in the big data era: Funding, projects, and future [a panel proposal]. Proceedings of the American Society for Information Science and Technology, 49(1), 1–3. https://doi.org/10.1002/meet.14504901187
Tene, O., & Polonetsky, J. (2012). Big Data for All: Privacy and User Control in the Age of Analytics. Northwestern Journal of Technology and Intellectual Property, 11, [xxvii]-274.
van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society; Newcastle upon Tyne, 12(2), 197–208.
Waller, M. A., & Fawcett, S. E. (2013). Data Science, Predictive Analytics, and Big Data: A Revolution That Will Transform Supply Chain Design and Management. Journal of Business Logistics, 34(2), 77–84. https://doi.org/10.1111/jbl.12010
Wu, Z., Wu, J., Khabsa, M., Williams, K., Chen, H. H., Huang, W., … Giles, C. L. (2014). Towards building a scholarly big data platform: Challenges, lessons and opportunities. In IEEE/ACM Joint Conference on Digital Libraries (pp. 117–126). https://doi.org/10.1109/JCDL.2014.6970157
Document analysis is a systematic procedure for reviewing or evaluating documents—both printed and electronic (computer-based and Internet-transmitted) material. Like other analytical methods in qualitative research, document analysis requires that data be examined and interpreted in order to elicit meaning, gain understanding, and develop empirical knowledge (Corbin & Strauss, 2008; see also Rapley, 2007).
Document analysis is often used in combination with other qualitative research methods as a means of triangulation—‘the combination of methodologies in the study of the same phenomenon’ (Denzin, 1970, p. 291).
The qualitative researcher is expected to draw upon multiple (at least two) sources of evidence; that is, to seek convergence and corroboration through the use of different data sources and methods. Apart from documents, such sources include interviews, participant or non-participant observation, and physical artifacts (Yin, 1994). By triangulating data, the researcher attempts to provide ‘a confluence of evidence that breeds credibility’ (Eisner, 1991, p. 110). By examining information collected through different methods, the researcher can corroborate findings across data sets and thus reduce the impact of potential biases that can exist in a single study. According to Patton (1990), triangulation helps the researcher guard against the accusation that a study’s findings are simply an artifact of a single method, a single source, or a single investigator’s bias. Mixed-method studies (which combine quantitative and qualitative research techniques) sometimes include document analysis. Here is an example: In their large-scale, three-year evaluation of regional educational service agencies (RESAs), Rossman and Wilson (1985) combined quantitative and qualitative methods—surveys (to collect quantitative data) and open-ended, semi-structured interviews with reviews of documents (as the primary sources of qualitative data). The document reviews were designed to identify the agencies that played a role in supporting school improvement programs.
Document analysis is a form of qualitative research in which documents are interpreted by the researcher to give voice and meaning around an assessment topic. Analyzing documents incorporates coding content into themes similar to how focus group or interview transcripts are analyzed. A rubric can also be used to grade or score a document. There are three primary types of documents:
• Public Records: The official, ongoing records of an organization’s activities. Examples include student transcripts, mission statements, annual reports, policy manuals, student handbooks, strategic plans, and syllabi.
• Personal Documents: First-person accounts of an individual’s actions, experiences, and beliefs. Examples include calendars, e-mails, scrapbooks, blogs, Facebook posts, duty logs, incident reports, reflections/journals, and newspapers.
• Physical Evidence: Physical objects found within the study setting (often called artifacts). Examples include flyers, posters, agendas, handbooks, and training materials.
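The coding step described above can be sketched mechanically. The codebook, indicator phrases, and excerpts below are hypothetical and purely illustrative; real qualitative coding is interpretive and iterative, but a small script shows how coded themes are tallied across a document set:

```python
# Minimal sketch of tallying coded themes across documents.
# The codebook and excerpts are invented for illustration only.
from collections import Counter

# Hypothetical codebook: theme -> indicator phrases.
CODEBOOK = {
    "engagement": ["participate", "attend", "join"],
    "access": ["open hours", "online", "remote"],
}

def code_document(text, codebook=CODEBOOK):
    """Tag a document excerpt with every theme whose indicators appear."""
    text = text.lower()
    return sorted(theme for theme, cues in codebook.items()
                  if any(cue in text for cue in cues))

def theme_frequencies(documents):
    """Count how often each theme is coded across a set of documents."""
    counts = Counter()
    for doc in documents:
        counts.update(code_document(doc))
    return counts

excerpts = [
    "Students can join workshops during open hours.",
    "Remote users participate online.",
]
print(theme_frequencies(excerpts))
```

In practice a researcher would refine the codebook as new themes emerge from the documents, rather than fixing it in advance as this sketch does.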
As with all research, how you collect and analyse the data should depend on what you want to find out. Since you haven’t told us that, it is difficult to give you any precise advice. However, one really important matter in using documents as sources, whatever the overall aim of your research, is that data from documents are very different from data from speech events such as interviews or overheard conversations. So the first analytic question you need to ask with regard to documents is ‘how are these data shaped by documentary production?’ Something which differentiates nearly all data from documents from speech data is that those who compose documents know what comes at the end while still being able to alter the beginning, which gives far more opportunity to consider how the recipient of the utterances will view the provider; i.e. more artful self-presentation. Apart from this, however, analysing the way documentary practice shapes your data will depend on what these documents are: for example, your question might turn out to be ‘How are news stories produced?’ if you are using news reports, or ‘What does this bureaucracy consider relevant information (and what not relevant, and what unmentionable)?’ if you are using completed proformas or internal reports from some organisation.
An analysis technique is just like a hardware tool: choosing the right one depends on where and with what you are working. For a nail you should use a hammer, and there are lots of types of hammers to choose from, depending on the type of nail.
So, in order to suggest the better technique, it is important to know the objectives you intend to reach and the theoretical framework you are using. Perhaps, after that, we could tell you whether you should use content analysis, discourse analysis, or grounded theory (and which type of it, as, like the hammer, there are several types of GTs).
Holland, B. (2020). Emerging Technology and Today’s Libraries. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 1-33). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch001
The purpose of this chapter is to examine emerging technology and today’s libraries. New technologies stand out first and foremost, given that they will end up revolutionizing every industry in an age where digital transformation plays a major role. Major trends will define technological disruption. The next generation of communication, core computing, and integration technologies will adopt new architectures. Major technological, economic, and environmental changes have generated interest in smart cities. Sensing technologies have made IoT possible, but also provide the data required for AI algorithms and models, often in real time, to make intelligent business and operational decisions. Smart cities consume different types of electronic internet of things (IoT) sensors to collect data and then use these data to manage assets and resources efficiently. This includes data collected from citizens, devices, and assets that are processed and analyzed to monitor and manage schools, libraries, hospitals, and other community services.
Makori, E. O. (2020). Blockchain Applications and Trends That Promote Information Management. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 34-51). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch002
The blockchain revolutionary paradigm is a new and emerging digital innovation that organizations have no choice but to embrace and implement in order to sustain and manage service delivery to customers. From a disruptive to a sustaining perspective, blockchain practices have transformed the information management environment with innovative products and services. Blockchain-based applications and innovations provide information management professionals and practitioners with robust and secure opportunities to transform corporate affairs and social responsibilities of organizations through accountability, integrity, and transparency; information governance; data and information security; as well as the digital internet of things.
Hahn, J. (2020). Student Engagement and Smart Spaces: Library Browsing and Internet of Things Technology. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 52-70). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch003
The purpose of this chapter is to provide evidence-based findings on student engagement within smart library spaces. The focus of smart libraries includes spaces that are enhanced with the internet of things (IoT) infrastructure and library collection maps accessed through a library-designed mobile application. The analysis herein explored IoT-based browsing within an undergraduate library collection. The open stacks and mobile infrastructure provided several years (2016-2019) of user-generated smart building data on browsing and selecting items in open stacks. The methods of analysis used in this chapter include transactional analysis and data visualization of IoT infrastructure logs. By analyzing server logs from the computing infrastructure that powers the IoT services, it is possible to infer in greater detail than heretofore possible the specifics of the way library collections are a target of undergraduate student engagement.
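The transactional log analysis described above can be illustrated with a short script. The log-line format, event names, and call numbers below are invented for illustration; the chapter's actual IoT infrastructure logs will differ:

```python
# Sketch of transactional analysis over browsing logs (hypothetical format):
# each line is "<timestamp> <event> <call-number>", where event is
# "browse" (item handled in the stacks) or "select" (item checked out).
import re
from collections import Counter

LINE = re.compile(r"^(?P<ts>\S+)\s+(?P<event>browse|select)\s+(?P<item>\S+)")

def parse_events(lines):
    """Yield (event, item) pairs from well-formed log lines, skipping the rest."""
    for line in lines:
        m = LINE.match(line)
        if m:
            yield m.group("event"), m.group("item")

def engagement_summary(lines):
    """Count browse and select events per call-number range."""
    browses, selects = Counter(), Counter()
    for event, item in parse_events(lines):
        (browses if event == "browse" else selects)[item] += 1
    return {item: {"browsed": browses[item], "selected": selects[item]}
            for item in browses.keys() | selects.keys()}

sample_log = [
    "2019-03-01T10:02:11 browse QA76",
    "2019-03-01T10:03:40 select QA76",
    "2019-03-01T10:05:02 browse PN1995",
    "malformed line that is skipped",
]
print(engagement_summary(sample_log))
```

A browse count with no matching selection, as for PN1995 here, is the kind of signal such an analysis can surface: engagement with a collection area that does not show up in circulation statistics.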
Treskon, M. (2020). Providing an Environment for Authentic Learning Experiences. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 71-86). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch004
The Loyola Notre Dame Library provides authentic learning environments for undergraduate students by serving as “client” for senior capstone projects. Through the creative application of IoT technologies such as Arduinos and Raspberry Pis in a library setting, the students gain valuable experience working through software design methodology and create software in response to a real-world challenge. Although these proof-of-concept projects could be implemented, the library is primarily interested in furthering the research, teaching, and learning missions of the two universities it supports. Whether the library gets a product that is worth implementing is not a requirement; it is a “bonus.”
Rashid, M., Nazeer, I., Gupta, S. K., & Khanam, Z. (2020). Internet of Things: Architecture, Challenges, and Future Directions. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 87-104). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch005
The internet of things (IoT) is a computing paradigm that has changed our daily livelihood and functioning. IoT focuses on the interconnection of all sensor-based devices, like smart meters, coffee machines, and cell phones, enabling these devices to exchange data with each other during human interactions. With easy connectivity among humans and devices, the speed of data generation is multiplying, increasing exponentially in volume, and becoming more complex in nature. In this chapter, the authors outline the architecture of IoT for handling various issues and challenges in real-world problems and cover various areas where IoT is used in real applications. The authors believe that this chapter will act as a guide for researchers in IoT to create a technical revolution for future generations.
Martin, L. (2020). Cloud Computing, Smart Technology, and Library Automation. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 105-123). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch006
As technology continues to change, the landscape of the work of librarians and libraries continues to adapt and adopt innovations that support their services. Technology also continues to be an essential tool for disseminating, retrieving, storing, and accessing resources and information. Cloud computing is an essential component employed to carry out these tasks. The concept of cloud computing has long been a tool utilized in libraries. Many libraries use OCLC to catalog, manage, and share resources, along with WorldCat and other library applications that are cloud-based services. Cloud computing services are used in the library automation process. Using cloud-based services can streamline library services, minimize cost, and reduce the need for designated space for servers, software, or other hardware to perform library operations. Cloud computing systems within the library consolidate, unify, and optimize library operations such as acquisitions, cataloging, circulation, discovery, and retrieval of information.
Owusu-Ansah, S. (2020). Developing a Digital Engagement Strategy for Ghanaian University Libraries: An Exploratory Study. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 124-139). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch007
This study represents a framework that digital libraries can leverage to increase usage and visibility. The adopted qualitative research aims to examine a digital engagement strategy for the libraries of the University of Ghana (UG). Data were collected from participants (digital librarians) who are key stakeholders of digital library service provision in the University of Ghana Library System (UGLS). The chapter reveals that digital library services included rare collections, e-journals, e-databases, e-books, microfilms, e-theses, e-newspapers, and e-past questions. Additionally, the research revealed that digital library service patronage could be enhanced through outreach programmes, open access, exhibitions, social media, and conferences. Digital librarians recommend that, to optimize digital library services, literacy programmes/instructions, social media platforms, IT equipment, software, and websites must be deployed. In conclusion, a digital engagement strategy (DES) helps the UGLS foster new relationships, connect with new audiences, and establish a new or improved brand identity.
Nambobi, M., Ssemwogerere, R., & Ramadhan, B. K. (2020). Implementation of Autonomous Library Assistants Using RFID Technology. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 140-150). IGI Global. http://doi:10.4018/978-1-7998-4742-7.ch008
This is an interesting time to innovate around disruptive technologies like the internet of things (IoT), machine learning, and blockchain. Autonomous assistants (IoT) are electro-mechanical systems that perform prescribed tasks automatically, with no human intervention, through self-learning and adaptation to changing environments. This means that, by acknowledging autonomy, the system has to perceive environments, actuate movement, and perform tasks with a high degree of autonomy: the ability to make its own decisions in a given environment. It is important to note that autonomous IoT using radio frequency identification (RFID) technology is used in educational sectors to boost the research arena, improve customer service, and ease book identification and traceability of items in the library. This chapter discusses the role, importance, critical tools, applicability, and challenges of autonomous IoT in the library using RFID technology.
Priya, A., & Sahana, S. K. (2020). Processor Scheduling in High-Performance Computing (HPC) Environment. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 151-179). IGI Global. http://doi:10.4018/978-1-7998-4742-7.ch009
Processor scheduling is one of the thrust areas in the field of computer science. Future technologies use a huge amount of processing for the execution of their tasks, as in large games, programming software, and the field of quantum computing. In real time, many complex problems are solved by GPU programming. The primary concern of scheduling is to reduce time complexity and manpower. Several traditional techniques exist for processor scheduling, but their performance is reduced when it comes to processing huge numbers of tasks. Most scheduling problems are NP-hard in nature. GPU scheduling is another complex issue, as it runs thousands of threads in parallel, which need to be scheduled efficiently. For such large-scale scheduling problems, the performance of state-of-the-art algorithms is very poor. It is observed that evolutionary and genetic-based algorithms exhibit better performance for large-scale combinatorial and internet of things (IoT) problems.
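The evolutionary approach mentioned above can be illustrated with a deliberately small toy: a genetic-style search that assigns tasks to processors to minimize the makespan (the finish time of the busiest processor). This is our own minimal sketch of the general technique, not the chapter's algorithm, and the task durations are invented:

```python
# Toy genetic-style search for task-to-processor assignment (makespan
# minimization). Chromosome = list where index i holds the processor
# assigned to task i. This sketch illustrates the idea only.
import random

def makespan(assignment, durations, n_procs):
    """Finish time of the most loaded processor under this assignment."""
    loads = [0.0] * n_procs
    for task, proc in enumerate(assignment):
        loads[proc] += durations[task]
    return max(loads)

def evolve(durations, n_procs, pop_size=30, generations=200, seed=0):
    rng = random.Random(seed)
    n = len(durations)
    # Random initial population of assignments.
    pop = [[rng.randrange(n_procs) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: makespan(a, durations, n_procs))
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # occasional point mutation
                child[rng.randrange(n)] = rng.randrange(n_procs)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda a: makespan(a, durations, n_procs))
    return best, makespan(best, durations, n_procs)

tasks = [5, 3, 8, 2, 7, 4, 6, 1]   # total work 36, so 3 processors bound makespan at 12
best, span = evolve(tasks, n_procs=3)
print(best, span)
```

Real GPU or HPC schedulers face far larger search spaces and richer constraints (dependencies, memory, thread divergence), which is precisely why the heuristic, population-based methods the chapter surveys become attractive there.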
Librarians are beginning to offer virtual reality (VR) services in libraries. This chapter reviews how libraries are currently using virtual reality for both consumption and creation purposes. Virtual reality tools will be compared and contrasted, and recommendations will be given for purchasing and circulating headsets and VR equipment. Google Tour Creator and a smartphone or 360-degree camera can be used to create a virtual tour of the library and other virtual reality content. These new library services will be discussed along with practical advice and best practices for incorporating virtual reality into the library for instructional and entertainment purposes.
Heffernan, K. L., & Chartier, S. (2020). Augmented Reality Gamifies the Library: A Ride Through the Technological Frontier. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 194-210). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch011
Two librarians at a University in New Hampshire attempted to integrate gamification and mobile technologies into the exploration of, and orientation to, the library’s services and resources. From augmented reality to virtual escape rooms and finally an in-house app created by undergraduate, campus-based, game design students, the library team learned much about the triumphs and challenges that come with attempting to utilize new technologies to reach users in the 21st century. This chapter is a narrative describing years of various attempts, innovation, and iteration, which have led to the library team being on the verge of introducing an app that could revolutionize campus discovery and engagement.
Miltenoff, P. (2020). Video 360 and Augmented Reality: Visualization to Help Educators Enter the Era of eXtended Reality. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 211-225). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch012
The advent of all types of eXtended Reality (XR)—VR, AR, MR—raises serious questions, both technological and pedagogical. The setup of campus services around XR is only the prelude to the more complex and expensive project of creating learning content using XR. In 2018, the authors started a limited proof-of-concept augmented reality (AR) project for a library tour. Building on their previous research and experience creating a virtual reality (VR) library tour, they sought a scalable introduction of XR services and content for the campus community. The AR library tour aimed to start us toward a matrix for similar services for the entire campus. They also explored the attitudes of students, faculty, and staff toward this new technology and its incorporation in education, as well as its potential and limitations toward the creation of a “smart” library.
XR Storytellers: Learners Making Immersive Stories
Thursday, May 7, 2020 from 1:30 PM to 2:30 PM (CDT)
Our team will share lessons learned in collaborating to create immersive experiences that accelerate STEM education. Find out how students achieve classroom learning objectives by designing AR experiences. Watch a demonstration of how an immersive scientific story is co-created by students and teachers in a virtual learning environment. Explore novel techniques for supporting learners to demonstrate understanding and share knowledge using spatial technologies and storytelling principles. We invite guests to share their questions and perspectives on the possibilities and limitations of XR storytelling to facilitate relational connections to curriculum and instruction.
PRESENTERS: Sarah Cassidy | Janelle LaVoie | Quincy Wang | Poh Tan
We are a team of VR learners from the University of Saskatchewan and Simon Fraser University in Canada. Our research explores innovative uses of immersive technology for STEM education and pro-social change.
The event requires no registration, and is virtual only, free, and open to the public. Platform access is required, so please install one of the above platforms to attend the International Summit. You may attend in 2D on a desktop or laptop computer with headphones and a microphone (USB gaming headset recommended), or with a VR device such as the Oculus Go, Quest, or Rift, the Vive, and other mobile and tethered devices. Please note the specifications and requirements of each platform.
Charlie Fink, author, columnist for Forbes magazine, and Adjunct Faculty member of Chapman University, will be presenting “Setting the Table for the Next Decade in XR,” discussing the future of this innovative and immersive technology, at the 2020 Educators in VR International Summit. He will be speaking in AltspaceVR on Tuesday, February 18 at 1:00 PM EST /
This workshop with Dr. Sarah Jones will focus on developing a relevant and new literacy for virtual reality, including the core competencies and skills needed to develop and understand how to become an engaged user of the technology in a meaningful way. The workshop will develop into research for a forthcoming book on Uncovering a Literacy for VR due to be published in 2020.
Sarah is listed as one of the top 15 global influencers within virtual reality. After nearly a decade in television news, Sarah began working in universities focusing on future media, future technology and future education. Sarah holds a PhD in Immersive Storytelling and has published extensively on virtual and augmented reality, whilst continuing to make and create immersive experiences. She has advised the UK Government on Immersive Technologies and delivers keynotes and speaks at conferences across the world on imagining future technology. Sarah is committed to diversifying the media and technology industries and regularly champions initiatives to support this agenda.
Currently there are limited ways to connect 3D VR environments to physical objects in the real world whilst simultaneously conducting communication and collaboration between remote users. Within the context of a solar power plant, the performance metrics of the site are invaluable for environmental engineers who are remotely located. Often two or more remotely located engineers need to communicate and collaborate on solving a problem. If a solar panel component is damaged, the repair often needs to be undertaken on-site, thereby incurring additional expenses. Such communication is classified as inter-cognitive or intra-cognitive: inter-cognitive communication, where information transfer occurs between two cognitive entities with different cognitive capabilities (e.g., between a human and an artificially cognitive system); and intra-cognitive communication, where information transfer occurs between two cognitive entities with equivalent cognitive capabilities (e.g., between two humans) [Baranyi and Csapo, 2010]. Currently, non-VR solutions offer a comprehensive analysis of solar plant data, and a regular PC with a monitor currently has advantages over 3D VR. For example, sensors can be monitored using dedicated software such as EPEVER or via a web browser, as exemplified by the comprehensive service provided by Elseta. But when multiple users are able to collaborate remotely within a three-dimensional virtual simulation, the opportunities for communication, training, and academic education will be profound.
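The remote-monitoring scenario above can be sketched with a few lines of code: flag panels whose output falls well below the array median, so that remotely located engineers have a concrete fault to triage. The data model, panel ids, and threshold here are our assumptions for illustration; this does not use the EPEVER or Elseta APIs:

```python
# Hypothetical sketch: flag solar panels that underperform relative to
# the rest of the array (a possible damaged-component signal).
from statistics import median

def flag_underperformers(readings_watts, tolerance=0.8):
    """Return ids of panels producing under `tolerance` x the array median."""
    baseline = median(readings_watts.values())
    return sorted(pid for pid, watts in readings_watts.items()
                  if watts < tolerance * baseline)

readings = {"P1": 310, "P2": 305, "P3": 120, "P4": 298}
print(flag_underperformers(readings))
```

In the collaborative VR setting the chapter envisions, an alert like this would be the trigger for an inter-cognitive exchange (system to engineer) followed by an intra-cognitive one (engineer to engineer) inside the shared simulation.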
Michael Vallance Ed.D. is a researcher in the Department of Media Architecture, Future University Hakodate, Japan. He has been involved in educational technology design, implementation, research and consultancy for over twenty years, working closely with Higher Education Institutes, schools and media companies in the UK, Singapore, Malaysia and Japan. His 3D virtual world design and tele-robotics research has been recognized and funded by the UK Prime Minister’s Initiative (PMI2) and the Japan Advanced Institute of Science and Technology (JAIST). He has been awarded by the United States Army for his research on the collaborative programming of robots in a 3D virtual world.
Augmented reality lenses are popular among young people thanks to Snapchat’s invention. Businesses are losing money by not fully using social media targeting young people (14-25). In this presentation, Dominique Wu will show how businesses can generate more leads through Spark AR (Facebook AR/Instagram AR) and Snapchat AR Lens, and how to create strategic Snapchat and Instagram AR campaigns.
Dominique Wu is an XR social media strategist and expert in UX/UI design. She has her own YouTube and Apple Podcast show called “XReality: Digital Transformation,” covering the technology and techniques of incorporating XR and AR into social media, marketing, and integration into enterprise solutions.
Mark Christian, EVP, Strategy and Corporate Development, GIGXR
Mixed reality devices like the HoloLens are transforming education now. Mark Christian will discuss how the technology is not about edge use cases or proofs of concept (POCs), but real, usable products already in universities, transforming the way we teach and learn. Christian will talk about the products of GIGXR, the story of how they were developed, and what the research is saying about their efficacy. It is time to move to adoption of XR technology in education. Learn how one team has made this a reality.
As CEO of forward-thinking virtual reality and software companies, Mark Christian employs asymmetric approaches to rapid, global market adoption, hiring, diversity and revenue. He prides himself on unconventional approaches to building technology companies.
Virtual reality is an effective medium to impart education to students only if it is done right. Whether VR is considered a gimmick or not comes down to the way the software applications are designed and developed by the developers, not to hardware limitations. I will give insight into VR development for educational content specifically designed for students of lower secondary school. I will also provide insights into game development in the Unity3D game engine.
Game developer and VR developer with over 3 years of experience in game development, and developer of Zombie Shooter, winner of various national awards in the gaming and entertainment category, Avinash Gyawali is the developer of EDVR, an immersive voice-controlled VR experience specially designed for children of age 10-18 years.
Virtual Reality Technologies for Learning Designers
Virtual Reality (VR) is a computer-generated experience that simulates presence in real or imagined environments (Kerrebrock, Brengman, & Willems, 2017). VR promotes contextualized learning, authentic experiences, critical thinking, and problem-solving opportunities. Despite the great potential and popularity of this technology, the latest two editions of the Educause Horizon Report (2018, 2019) have argued that VR remains “elusive” in terms of mainstream adoption. The reasons are varied, including expense and the lack of empirical evidence for its effectiveness in education. More importantly, examples of successful VR implementations for instructors who lack technical skills are still scarce. Margherita Berti will discuss a range of easy-to-use educational VR tools, examples of VR-based activities, and the learning theories and instructional design principles utilized for their development.
Margherita Berti is a doctoral candidate in Second Language Acquisition and Teaching (SLAT) and Educational Technology at the University of Arizona. Her research specialization resides at the intersection of virtual reality, the teaching of culture, and curriculum and content development for foreign language education.
There is a barrier between an author and the readers of his/her books. The author’s journey ends, and the reader’s begins. But what if, as an author/trainer, you could use gamification and augmented reality (AR) to interact with and coach your readers as part of their learning journey? Attend this session with Amanda Fox to learn how the book Teachingland leverages augmented reality tools such as Metaverse to connect with readers beyond the text.
Amanda Fox is the Creative Director of STEAMPunksEdu and author of Teachingland: A Teacher’s Survival Guide to the Classroom Apocalypse and Zom-Be A Design Thinker. Check her out on the Virtual Reality Podcast, or connect with her on Twitter @AmandaFoxSTEM.
Christian Jonathan Angel Rueda specializes in the didactic use of virtual reality/virtual worlds to learn the fundamentals of design. He shares the development of a course that includes recreating, in the three-dimensional environment, works using the fundamentals learned in class; a demonstration of all the works developed throughout the semester, using design foundation knowledge to present them creatively; and a final project class scenario connected with the scenes of the students who showed their work throughout the semester.
Christian Jonathan Angel Rueda is a research professor at the Autonomous University of Queretaro in Mexico. With a PhD in educational technology, Christian has published several papers on the intersection of education, pedagogy, and three-dimensional immersive digital environments. He is also an edtech, virtual reality, and social media consultant at Eco Onis.
How can we bridge the gap between eLearning and XR? Richard Van Tilborg discusses combining brain insights with new technologies, along with training and education cases realised with the CoVince platform: journeys which start on your mobile and continue in VR, the possibility to earn from your creations, and a central distribution place for learning and data.
Richard Van Tilborg works with the CoVince platform, a VR platform offering training and educational programs for central distribution of learning and data. He is an author and speaker focusing on computers and education in virtual reality-based tasks for delivering feedback.