Holland, B. (2020). Emerging Technology and Today’s Libraries. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 1-33). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch001
The purpose of this chapter is to examine emerging technology and today’s libraries. New technologies stand out first and foremost because they will end up revolutionizing every industry in an age where digital transformation plays a major role. Major trends will define technological disruption. The next generation of communication, core computing, and integration technologies will adopt new architectures. Major technological, economic, and environmental changes have generated interest in smart cities. Sensing technologies have made IoT possible, and they also provide the data that AI algorithms and models require, often in real time, to make intelligent business and operational decisions. Smart cities use many types of electronic internet of things (IoT) sensors to collect data and then use these data to manage assets and resources efficiently. This includes data collected from citizens, devices, and assets that are processed and analyzed to monitor and manage schools, libraries, hospitals, and other community services.
Makori, E. O. (2020). Blockchain Applications and Trends That Promote Information Management. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 34-51). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch002
The revolutionary blockchain paradigm is a new and emerging digital innovation that organizations have no choice but to embrace and implement in order to sustain and manage service delivery to their customers. From a disruptive as well as a sustaining perspective, blockchain practices have transformed the information management environment with innovative products and services. Blockchain-based applications and innovations provide information management professionals and practitioners with robust and secure opportunities to transform the corporate affairs and social responsibilities of organizations through accountability, integrity, and transparency; information governance; data and information security; and the digital internet of things.
Hahn, J. (2020). Student Engagement and Smart Spaces: Library Browsing and Internet of Things Technology. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 52-70). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch003
The purpose of this chapter is to provide evidence-based findings on student engagement within smart library spaces. The focus of smart libraries includes spaces that are enhanced with the internet of things (IoT) infrastructure and library collection maps accessed through a library-designed mobile application. The analysis herein explored IoT-based browsing within an undergraduate library collection. The open stacks and mobile infrastructure provided several years (2016-2019) of user-generated smart building data on browsing and selecting items in open stacks. The methods of analysis used in this chapter include transactional analysis and data visualization of IoT infrastructure logs. By analyzing server logs from the computing infrastructure that powers the IoT services, it is possible to infer in greater detail than heretofore possible the specifics of the way library collections are a target of undergraduate student engagement.
Treskon, M. (2020). Providing an Environment for Authentic Learning Experiences. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 71-86). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch004
The Loyola Notre Dame Library provides authentic learning environments for undergraduate students by serving as “client” for senior capstone projects. Through the creative application of IoT technologies such as Arduinos and Raspberry Pis in a library setting, the students gain valuable experience working through software design methodology and create software in response to a real-world challenge. Although these proof-of-concept projects could be implemented, the library is primarily interested in furthering the research, teaching, and learning missions of the two universities it supports. Whether the library gets a product that is worth implementing is not a requirement; it is a “bonus.”
Rashid, M., Nazeer, I., Gupta, S. K., & Khanam, Z. (2020). Internet of Things: Architecture, Challenges, and Future Directions. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 87-104). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch005
The internet of things (IoT) is a computing paradigm that has changed our daily lives and routines. IoT focuses on the interconnection of sensor-based devices like smart meters, coffee machines, and cell phones, enabling these devices to exchange data with each other during human interactions. With easy connectivity among humans and devices, data generation is accelerating, growing exponentially in volume, and becoming more complex in nature. In this chapter, the authors outline the architecture of IoT for handling various issues and challenges in real-world problems and cover various areas where IoT is used in real applications. The authors believe that this chapter will act as a guide for researchers in IoT to create a technical revolution for future generations.
Martin, L. (2020). Cloud Computing, Smart Technology, and Library Automation. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 105-123). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch006
As technology continues to change, librarians and libraries continue to adapt and to adopt innovations that support their services. Technology also remains an essential tool for disseminating, retrieving, storing, and accessing resources and information. Cloud computing is an essential component employed to carry out these tasks, and the concept has long been a tool utilized in libraries. Many libraries use cloud-based services such as OCLC’s WorldCat and other library applications to catalog, manage, and share resources. Cloud computing services are used in the library automation process. Using cloud-based services can streamline library services, minimize cost, and reduce the need for designated space for servers, software, or other hardware to perform library operations. Cloud computing systems consolidate, unify, and optimize library operations such as acquisitions, cataloging, circulation, discovery, and retrieval of information.
Owusu-Ansah, S. (2020). Developing a Digital Engagement Strategy for Ghanaian University Libraries: An Exploratory Study. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 124-139). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch007
This study presents a framework that digital libraries can leverage to increase usage and visibility. The qualitative research examines a digital engagement strategy (DES) for the libraries of the University of Ghana (UG). Data were collected from participants (digital librarians) who are key stakeholders of digital library service provision in the University of Ghana Library System (UGLS). The chapter reveals that digital library services included rare collections, e-journals, e-databases, e-books, microfilms, e-theses, e-newspapers, and e-past questions. Additionally, the research revealed that digital library service patronage could be enhanced through outreach programmes, open access, exhibitions, social media, and conferences. Digital librarians recommend that, to optimize digital library services, literacy programmes/instruction, social media platforms, IT equipment, software, and websites be deployed. In conclusion, a DES helps the UGLS foster new relationships, connect with new audiences, and establish a new or improved brand identity.
Nambobi, M., Ssemwogerere, R., & Ramadhan, B. K. (2020). Implementation of Autonomous Library Assistants Using RFID Technology. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 140-150). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch008
This is an interesting time to innovate around disruptive technologies like the internet of things (IoT), machine learning, and blockchain. Autonomous assistants (IoT) are electro-mechanical systems that perform prescribed tasks automatically, with no human intervention, through self-learning and adaptation to changing environments. Autonomy means the system has to perceive its environment, actuate movement, and perform tasks with a high degree of independence: the ability to make its own decisions in a given environment. It is important to note that autonomous IoT using radio frequency identification (RFID) technology is used in educational sectors to boost research, improve customer service, and ease book identification and the traceability of items in the library. This chapter discusses the role, importance, critical tools, applicability, and challenges of autonomous IoT in the library using RFID technology.
Priya, A., & Sahana, S. K. (2020). Processor Scheduling in High-Performance Computing (HPC) Environment. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 151-179). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch009
Processor scheduling is one of the thrust areas in the field of computer science. Future technologies require a huge amount of processing to execute their tasks, as in large games, programming software, and the field of quantum computing. In real time, many complex problems are solved by GPU programming. The primary concern of scheduling is to reduce time complexity and manpower. Several traditional techniques exist for processor scheduling, but their performance degrades when it comes to huge processing workloads, and most scheduling problems are NP-hard in nature. GPU scheduling is another complex issue, as a GPU runs thousands of threads in parallel that need to be scheduled efficiently. For such large-scale scheduling problems, the performance of state-of-the-art algorithms is very poor. It is observed that evolutionary and genetic-based algorithms exhibit better performance for large-scale combinatorial and internet of things (IoT) problems.
Librarians are beginning to offer virtual reality (VR) services in libraries. This chapter reviews how libraries are currently using virtual reality for both consumption and creation purposes. Virtual reality tools will be compared and contrasted, and recommendations will be given for purchasing and circulating headsets and VR equipment. Google Tour Creator and a smartphone or 360-degree camera can be used to create a virtual tour of the library and other virtual reality content. These new library services will be discussed along with practical advice and best practices for incorporating virtual reality into the library for instructional and entertainment purposes.
Heffernan, K. L., & Chartier, S. (2020). Augmented Reality Gamifies the Library: A Ride Through the Technological Frontier. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 194-210). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch011
Two librarians at a University in New Hampshire attempted to integrate gamification and mobile technologies into the exploration of, and orientation to, the library’s services and resources. From augmented reality to virtual escape rooms and finally an in-house app created by undergraduate, campus-based, game design students, the library team learned much about the triumphs and challenges that come with attempting to utilize new technologies to reach users in the 21st century. This chapter is a narrative describing years of various attempts, innovation, and iteration, which have led to the library team being on the verge of introducing an app that could revolutionize campus discovery and engagement.
Miltenoff, P. (2020). Video 360 and Augmented Reality: Visualization to Help Educators Enter the Era of eXtended Reality. In Holland, B. (Ed.), Emerging Trends and Impacts of the Internet of Things in Libraries (pp. 211-225). IGI Global. https://doi.org/10.4018/978-1-7998-4742-7.ch012
The advent of all types of eXtended Reality (XR)—VR, AR, and MR—raises serious questions, both technological and pedagogical. The setup of campus services around XR is only the prelude to the more complex and expensive project of creating learning content using XR. In 2018, the authors started a limited proof-of-concept augmented reality (AR) project for a library tour. Building on their previous research and experience creating a virtual reality (VR) library tour, they sought a scalable introduction of XR services and content for the campus community. The AR library tour aimed to serve as a template for similar services for the entire campus. The authors also explored the attitudes of students, faculty, and staff toward this new technology and its incorporation in education, as well as its potential and limitations in creating a “smart” library.
President Donald Trump on Monday directed federal agencies to improve the nation’s artificial intelligence abilities — and help people whose jobs are displaced by the automation it enables.
It’s good for the US government to focus on AI, said Daniel Castro, chief executive of the Center for Data Innovation, a technology-focused think tank that supports the initiative.
Silicon Valley has been investing heavily in AI in recent years, but the path hasn’t always been an easy one. In October, for instance, Google withdrew from competition for a $10 billion Pentagon cloud computing contract, saying it might conflict with its principles for ethical use of AI.
Red Hat, although not quite a household name, is an undeniably significant company, with lots of fingers in lots of pies, especially when it comes to cloud computing and the Linux ecosystem.
The gem in its crown is arguably the platform-as-a-service (PaaS) provider OpenShift, which directly competes with the Salesforce-owned Heroku and Google App Engine. It also owns and develops Red Hat Enterprise Linux (RHEL), which is employed across several commercial settings, including workstations, servers, and supercomputers.
Red Hat is an enthusiastic contributor to several major Linux projects, playing a role in developing LibreOffice and GNOME, as well as the Linux kernel itself.
PROGRAM FEES: $2,300 | STARTS ON: November 28, 2018 | 2 months, online | 6-8 hours per week
A Digital Revolution Is Underway.
In a rapidly expanding digital marketplace, legacy companies without a clear digital transformation strategy are being left behind. How can we stay on top of rapid—and sometimes radical—change? How can we position our organizations to take advantage of new technologies? How can we track and combat the security threats facing all of us as we are swept forward into the future?
Who is this Program for?
Professionals in traditional companies poised to implement strategic change, as well as entrepreneurs seeking to harness the opportunities afforded by new technologies, will learn the fundamentals of digital transformation and secure the necessary tools to navigate their enterprise to a digital platform.
Participants come from a wide range of industries and include C-suite executives, business consultants, corporate attorneys, risk officers, marketing, R&D, and innovation enablers.
Your Learning Journey
This online program takes you through the fundamentals of digital technologies transforming our world today. Led by MIT faculty at the forefront of data science, participants will learn the history and application of transformative technologies such as blockchain, artificial intelligence, cloud computing, IoT, and cybersecurity as well as the implications of employing—or ignoring—digitalization.
For Google’s corporate parent, Alphabet, the opportunities in the world’s largest internet market may be too good to resist. And the full scope of the company’s interest in China now appears to be broader than just internet search.
The latest hint came from Waymo, the driverless-car company that was spun out of Google in 2016. Chinese media noticed this week that the business had quietly registered a Shanghai subsidiary in May, suggesting that it wants a piece of an industry that the Chinese government has made a priority.
Unlike Google, Apple runs its own app store in China, heeding government directives about the kinds of apps that can be available to Chinese users. Microsoft and Amazon offer cloud computing services, working with local partners and following strict controls on how customers’ data is stored.
Baidu, maker of the country’s leading search engine, has made its autonomous-vehicle software platform available to dozens of local and foreign companies. SAIC Motor, China’s largest carmaker, is working with the e-commerce titan Alibaba. BMW and Daimler have received permission in China to test their own self-driving vehicles.
Applications for the 2018 Institute will be accepted between December 1, 2017 and January 27, 2018. Scholars accepted to the program will be notified in early March 2018.
Learning to Harness Big Data in an Academic Library
Research on Big Data per se, as well as on the importance and organization of the process of Big Data collection and analysis, is well underway. The complexity of the process comprising “Big Data,” however, deprives organizations of a ubiquitous “blueprint.” The planning, structuring, administration, and execution of the process of adopting Big Data in an organization, be it a corporate or an educational one, remain elusive. No less elusive is the adoption of Big Data practices among libraries themselves. Seeking the commonalities and differences in the adoption of Big Data practices among libraries may be a suitable start to help libraries transition to the adoption of Big Data and to the restructuring of organizational and daily activities based on Big Data decisions.
Introduction to the problem
The redefinition of humanities scholarship has received major attention in higher education. The advent of digital humanities challenges aspects of academic librarianship. Data literacy is a critical need for digital humanities in academia. The March 2016 Library Juice Academy Webinar led by John Russel exemplifies the efforts to help librarians become versed in obtaining programming skills, and respectively, handling data. Those are first steps on a rather long path of building a robust infrastructure to collect, analyze, and interpret data intelligently, so it can be utilized to restructure daily and strategic activities. Since the phenomenon of Big Data is young, there is a lack of blueprints on the organization of such infrastructure. A collection and sharing of best practices is an efficient approach to establishing a feasible plan for setting a library infrastructure for collection, analysis, and implementation of Big Data.
Limitations
This research can only organize the results from the responses of librarians and from research into how libraries present themselves to the world in this arena. It may be able to make some rudimentary recommendations. However, based on each library’s specific goals and tasks, further research and work will be needed.
Big Data is becoming an omnipresent term. It is widespread among different disciplines in academia (De Mauro, Greco, & Grimaldi, 2016). This leads to “inconsistency in meanings and necessity for formal definitions” (De Mauro et al., 2016, p. 122). Similarly to De Mauro et al. (2016), Hashem, Yaqoob, Anuar, Mokhtar, Gani, and Ullah Khan (2015) seek standardization of definitions. The main connected “themes” of this phenomenon must be identified and the connections to Library Science must be sought. A prerequisite for a comprehensive definition is the identification of Big Data methods. Bughin, Chui, and Manyika (2011), Chen et al. (2012), and De Mauro et al. (2015) single out the methods needed to complete the process of building a comprehensive definition.
In conjunction with identifying the methods, volume, velocity, and variety, as defined by Laney (2001), are the three properties of Big Data accepted across the literature. Daniel (2015) defines three stages in Big Data: collection, analysis, and visualization. According to Daniel (2015), Big Data in higher education “connotes the interpretation of a wide range of administrative and operational data” (p. 910), and according to Hilbert (2013), as cited in Daniel (2015), Big Data “delivers a cost-effective prospect to improve decision making” (p. 911).
The importance of understanding the process of Big Data analytics is well understood in academic libraries. Examples of such “administrative and operational” use for cost-effective improvement of decision making are the Finch and Flenner (2016) and Eaton (2017) case studies of the use of data visualization to assess an academic library collection and restructure the acquisition process. Sugimoto, Ding, and Thelwall (2012) call for a discussion of Big Data for libraries. According to the 2017 NMC Horizon Report, “Big Data has become a major focus of academic and research libraries due to the rapid evolution of data mining technologies and the proliferation of data sources like mobile devices and social media” (Adams Becker et al., 2017, p. 38).
Power (2014) elaborates on the complexity of Big Data in regard to decision making and offers ideas for organizations on building a system to deal with Big Data. As explained by Boyd and Crawford (2012), cited in De Mauro et al. (2016), there is a danger of a new digital divide among organizations with different access to data and different abilities to process it. Moreover, Big Data forces current organizational entities to reconsider their structure and organization. The complexity of institutions’ performance under the impact of Big Data is further complicated by changes in human behavior, because, arguably, Big Data affects human behavior itself (Schroeder, 2014).
De Mauro et al. (2015) touch on the impact of Big Data on libraries. The reorganization of academic libraries in light of Big Data, and the handling of Big Data by libraries, is closely tied to the reorganization of the entire campus and the handling of Big Data by the educational institution. In addition to the disruption posed by the Big Data phenomenon, higher education is facing global changes of an economic, technological, social, and educational character. Daniel (2015) uses a chart to illustrate the complexity of these global trends. Parallel to the Big Data developments in America and Asia, the European Union is offering access to an EU open data portal (https://data.europa.eu/euodp/home). Moreover, the Association of European Research Libraries expects, under the H2020 program, to increase “the digitization of cultural heritage, digital preservation, research data sharing, open access policies and the interoperability of research infrastructures” (Reilly, 2013).
The challenges posed by Big Data to human and social behavior (Schroeder, 2014) are no less significant than the impact of Big Data on learning. Cohen, Dolan, Dunlap, Hellerstein, and Welton (2009) propose a road map for “more conservative organizations” (p. 1492) to overcome their reservations and/or inability to handle Big Data and to adopt a practical approach to its complexity. Two Chinese researchers define deep learning as the “set of machine learning techniques that learn multiple levels of representation in deep architectures” (Chen & Lin, 2014, p. 515). Deep learning requires “new ways of thinking and transformative solutions” (Chen & Lin, 2014, p. 523). Another pair of researchers from China presents a broad overview of the various societal, business, and administrative applications of Big Data, including a detailed account and definitions of the processes and tools accompanying Big Data analytics. Their American counterparts are of the same opinion when it comes to the need to “think about the core principles and concepts that underline the techniques, and also the systematic thinking” (Provost & Fawcett, 2013, p. 58). De Mauro, Greco, and Grimaldi (2016), similarly to Provost and Fawcett (2013), draw attention to the urgent necessity to train new types of specialists to work with such data. As early as 2012, Davenport and Patil (2012), as cited in De Mauro et al. (2016), envisioned hybrid specialists able to manage both technological knowledge and academic research. Similarly, Provost and Fawcett (2013) mention the efforts of “academic institutions scrambling to put together programs to train data scientists” (p. 51). Further, Asomoah, Sharda, Zadeh, and Kalgotra (2017) share a specific plan for the design and delivery of a big data analytics course.
At the same time, librarians working with data acknowledge shortcomings in the profession, since librarians “are practitioners first and generally do not view usability as a primary job responsibility, usually lack the depth of research skills needed to carry out a fully valid” data-based research study (Emanuel, 2013, p. 207).
Borgman (2015) devotes an entire book to data and scholarly research and goes beyond the already well-established facts regarding the importance of Big Data, its implications, and the technical, societal, and educational impact and complications it poses. Borgman elucidates the importance of knowledge infrastructure and the necessity of understanding the importance and complexity of building such infrastructure in order to be able to take advantage of Big Data. In a similar fashion, a team of Chinese scholars draws attention to the complexity of data mining and Big Data and the necessity of approaching the issue in an organized fashion (Wu, Zhu, Wu, & Ding, 2014).
Bruns (2013) shifts the conversation from the “macro” architecture of Big Data, the focus of Borgman (2015) and Wu et al. (2014), and ponders the influx of unprecedented opportunities for the humanities in academia with the advent of Big Data. Does the seemingly ubiquitous omnipresence of Big Data mean for the humanities a “railroading” into “scientificity”? How will research and publishing change with the advent of Big Data across academic disciplines?
Reyes (2015) shares her “skinny” approach to Big Data in education. She presents a comprehensive structure for educational institutions to shift “traditional” analytics to “learner-centered” analytics (p. 75) and identifies the participants in the Big Data process in the organization. The model is applicable for library use.
Being new and uncharted territory, Big Data and Big Data analytics can pose ethical issues. Willis (2013) focuses on Big Data applications in education, namely the ethical questions for higher education administrators and the expectation that Big Data analytics will predict students’ success. Daries, Reich, Waldo, Young, and Whittinghill (2014) discuss rather similar issues regarding the balance between data and student privacy regulations. The privacy issues accompanying data are also discussed by Tene and Polonetsky (2013).
Privacy issues are habitually connected to security and surveillance issues. Andrejevic and Gates (2014) point out that in decision making “generated by data mining, the focus is not on particular individuals but on aggregate outcomes” (p. 195). Van Dijck (2014) goes into further detail regarding the perils posed by metadata and data to society, in particular to the privacy of citizens. Bail (2014) addresses the same issue regarding the impact of Big Data on societal issues, but underlines the leading role of cultural sociologists and their theories in the correct application of Big Data.
Library organizations have been traditional proponents of core democratic values such as the protection of privacy and the elucidation of related ethical questions (Miltenoff & Hauptman, 2005). In recent books about Big Data and libraries, ethical issues are an important part of the discussion (Weiss, 2018). Library blogs also discuss these issues (Harper & Oltmann, 2017). An academic library’s role is to educate its patrons about those values. Sugimoto et al. (2012) reflect on the need for discussion about Big Data in Library and Information Science. They clearly draw attention to the library “tradition of organizing, managing, retrieving, collecting, describing, and preserving information” (p. 1) as well as to library and information science being “a historically interdisciplinary and collaborative field, absorbing the knowledge of multiple domains and bringing the tools, techniques, and theories” (p. 1). Sugimoto et al. (2012) sought a wide discussion among the library profession regarding the implications of Big Data for the profession, no differently from the activities in other fields (e.g., Wixom, Ariyachandra, Douglas, Goul, Gupta, Iyer, Kulkarni, Mooney, Phillips-Wren, & Turetken, 2014). A current Andrew Mellon Foundation grant for Visualizing Digital Scholarship in Libraries seeks an opportunity to view “both macro and micro perspectives, multi-user collaboration and real-time data interaction, and a limitless number of visualization possibilities – critical capabilities for rapidly understanding today’s large data sets” (Hwangbo, 2014).
The importance of the library with its traditional roles, as described by Sugimoto et al. (2012), may continue, considering the Big Data platform proposed by Wu, Wu, Khabsa, Williams, Chen, Huang, Tuarob, Choudhury, Ororbia, Mitra, and Giles (2014). Such platforms will continue to emerge and be improved, with librarians as the ultimate drivers of such platforms and as the mediators between the patrons and the data generated by such platforms.
Every library needs to find its place in the large organization and in society in regard to this very new and very powerful phenomenon called Big Data. Libraries might not have the trained staff to become a leader in the process of organizing and building the complex mechanism of this new knowledge architecture, but librarians must educate and train themselves to be worthy participants in this new establishment.
The study will be cleared by the SCSU IRB.
The survey will collect responses from the library population about its readiness to use Big Data and its current use of Big Data. The survey URL will be sent to (academic?) libraries around the world.
Data will be processed through SPSS. Open-ended results will be processed manually. The preliminary research design presupposes a mixed-methods approach.
The study will include closed-ended survey questions and open-ended questions. The first part of the study (closed-ended, quantitative questions) will be completed through an online survey. Participants will be asked to complete the survey using a link they receive through e-mail.
Mixed methods research was defined by Johnson and Onwuegbuzie (2004) as “the class of research where the researcher mixes or combines quantitative and qualitative research techniques, methods, approaches, concepts, or language into a single study” (p. 17). Quantitative and qualitative methods can be combined, if used to complement each other, because the methods can measure different aspects of the research questions (Sale, Lohfeld, & Brazil, 2002).
Online survey of 10-15 question, with 3-5 demographic and the rest regarding the use of tools.
1-2 open-ended questions at the end of the survey to probe for follow-up mixed method approach (an opportunity for qualitative study)
Data analysis techniques: survey results will be exported to SPSS and analyzed accordingly. The final survey design will determine the appropriate statistical approach.
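The study plans to run the analysis in SPSS, but the descriptive steps can be illustrated in plain Python. This is a hypothetical sketch only: the library types, the 1-5 usage scale, and all sample values are invented for the example.

```python
# Hypothetical illustration of the planned descriptive analysis.
# The actual study will use SPSS; column names and values are invented.
from collections import Counter
from statistics import mean

# Invented sample of closed-ended responses:
# (library type, self-reported Big Data tool use on a 1-5 scale)
responses = [
    ("academic", 4),
    ("academic", 2),
    ("public", 1),
    ("special", 5),
]

# Frequency of respondents per library type (akin to SPSS Frequencies)
type_counts = Counter(lib_type for lib_type, _ in responses)

# Central tendency of the usage scale (akin to SPSS Descriptives)
usage_mean = mean(score for _, score in responses)

print(type_counts["academic"])  # number of academic-library respondents
print(usage_mean)               # mean reported usage score
```

The same frequency and central-tendency output would drive the choice of further statistical tests once the final survey design is fixed.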
Complete literature review and identify areas of interest – two months
Prepare and test instrument (survey) – one month
IRB and other details – one month
Generate a list of potential libraries for survey distribution – one month
Contact libraries; follow up and contact again if necessary (low turnaround) – one month
Collect and analyze data – two months
Write up data findings – one month
Complete manuscript – one month
Proofreading and other details – one month
Significance of the work
While it has been widely acknowledged that Big Data (and its handling) is changing higher education (http://blog.stcloudstate.edu/ims?s=big+data) as well as academic libraries (http://blog.stcloudstate.edu/ims/2016/03/29/analytics-in-education/), it remains nebulous how Big Data is handled in the academic library and, respectively, how it is related to the handling of Big Data on campus. Moreover, the visualization of Big Data between units on campus remains in progress, along with any policymaking based on the analysis of such data (hence the need for comprehensive visualization).
This research aims to gain an understanding of: a. how librarians are handling Big Data; b. how they relate their Big Data output to the campus output of Big Data; and c. how librarians in particular and campus administration in general are tuning their practices based on the analysis.
Based on the survey returns (if there is a statistically significant return), this research might consider juxtaposing the practices of academic libraries with those of special libraries (especially corporate libraries), public libraries, and school libraries.
Adams Becker, S., Cummins, M., Davis, A., Freeman, A., Giesinger Hall, C., Ananthanarayanan, V., … Wolfson, N. (2017). NMC Horizon Report: 2017 Library Edition.
Andrejevic, M., & Gates, K. (2014). Big Data Surveillance: Introduction. Surveillance & Society, 12(2), 185–196.
Asamoah, D. A., Sharda, R., Hassan Zadeh, A., & Kalgotra, P. (2017). Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course. Decision Sciences Journal of Innovative Education, 15(2), 161–190. https://doi.org/10.1111/dsji.12125
Cohen, J., Dolan, B., Dunlap, M., Hellerstein, J. M., & Welton, C. (2009). MAD Skills: New Analysis Practices for Big Data. Proc. VLDB Endow., 2(2), 1481–1492. https://doi.org/10.14778/1687553.1687576
Daniel, B. (2015). Big Data and analytics in higher education: Opportunities and challenges. British Journal of Educational Technology, 46(5), 904–920. https://doi.org/10.1111/bjet.12230
Daries, J. P., Reich, J., Waldo, J., Young, E. M., Whittinghill, J., Ho, A. D., … Chuang, I. (2014). Privacy, Anonymity, and Big Data in the Social Sciences. Commun. ACM, 57(9), 56–63. https://doi.org/10.1145/2643132
De Mauro, A., Greco, M., & Grimaldi, M. (2015). What is big data? A consensual definition and a review of key research topics. AIP Conference Proceedings, 1644(1), 97–104. https://doi.org/10.1063/1.4907823
Emanuel, J. (2013). Usability testing in libraries: methods, limitations, and implications. OCLC Systems & Services: International Digital Library Perspectives, 29(4), 204–217. https://doi.org/10.1108/OCLC-02-2013-0009
Hashem, I. A. T., Yaqoob, I., Anuar, N. B., Mokhtar, S., Gani, A., & Ullah Khan, S. (2015). The rise of “big data” on cloud computing: Review and open research issues. Information Systems, 47(Supplement C), 98–115. https://doi.org/10.1016/j.is.2014.07.006
Philip Chen, C. L., & Zhang, C.-Y. (2014). Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Information Sciences, 275(Supplement C), 314–347. https://doi.org/10.1016/j.ins.2014.01.015
Sugimoto, C. R., Ding, Y., & Thelwall, M. (2012). Library and information science in the big data era: Funding, projects, and future [a panel proposal]. Proceedings of the American Society for Information Science and Technology, 49(1), 1–3. https://doi.org/10.1002/meet.14504901187
Tene, O., & Polonetsky, J. (2012). Big Data for All: Privacy and User Control in the Age of Analytics. Northwestern Journal of Technology and Intellectual Property, 11, [xxvii]-274.
van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society; Newcastle upon Tyne, 12(2), 197–208.
Waller, M. A., & Fawcett, S. E. (2013). Data Science, Predictive Analytics, and Big Data: A Revolution That Will Transform Supply Chain Design and Management. Journal of Business Logistics, 34(2), 77–84. https://doi.org/10.1111/jbl.12010
Wu, Z., Wu, J., Khabsa, M., Williams, K., Chen, H. H., Huang, W., … Giles, C. L. (2014). Towards building a scholarly big data platform: Challenges, lessons and opportunities. In IEEE/ACM Joint Conference on Digital Libraries (pp. 117–126). https://doi.org/10.1109/JCDL.2014.6970157
Minecraft for Higher Ed? Try it. Pros, Cons, Recommendations?
Description: Why Minecraft, the online video game? How can Minecraft improve learning in higher education? We’ll begin with a live demo in which all can participate (see “Minecraft for Free”). We’ll review “Examples, Not Rumors” of successful adaptations and uses of Minecraft for teaching and learning in higher education, especially those submitted in advance. And we’ll try to extract from these activities a few recommendations, questions, and requests regarding Minecraft in higher education.
These affordances develop both the social and cognitive abilities of students.
Nebel, S., Schneider, S., Beege, M., Kolda, F., Mackiewicz, V., & Rey, G. (2017). You cannot do this alone! Increasing task interdependence in cooperative educational videogames to encourage collaboration. Educational Technology Research & Development, 65(4), 993-1014. doi:10.1007/s11423-017-9511-8
Abrams, S. S., & Rowsell, J. (2017). Emotionally Crafted Experiences: Layering Literacies in Minecraft. Reading Teacher, 70(4), 501-506.
Nebel, S., Schneider, S., & Rey, G. D. (2016). Mining Learning and Crafting Scientific Experiments: A Literature Review on the Use of Minecraft in Education and Research. Journal of Educational Technology & Society, 19(2), 355–366. Retrieved from http://www.jstor.org/stable/jeductechsoci.19.2.355
Cipollone, M., Schifter, C. C., & Moffat, R. A. (2014). Minecraft as a Creative Tool: A Case Study. International Journal Of Game-Based Learning, 4(2), 1-14.
Uusi-Mäkelä, M., & Uusi-Mäkelä, M. (2014). Immersive Language Learning with Games: Finding Flow in MinecraftEdu. EdMedia: World Conference on Educational Media and Technology (Vol. 2014). Association for the Advancement of Computing in Education (AACE). Retrieved from https://www.learntechlib.org/noaccess/148409/
Birt, J., & Hovorka, D. (2014). Effect of mixed media visualization on learner perceptions and outcomes. In 25th Australasian Conference on Information Systems (pp. 1–10). Retrieved from http://epublications.bond.edu.au/fsd_papers/74
Mobile computing, cloud computing, and data-rich repositories have altered ideas about where and how learning takes place.
Designers can find themselves filling a variety of roles. They might design large, complex systems or work with faculty and departments to develop courses and curricula. They might migrate traditional resources to mobile or adaptive platforms. They might help administrators understand the value and potential of new learning strategies and tools. Today’s instructional designer might work with subject-matter experts, coders, graphic designers, and others. Moreover, the work of an instructional designer increasingly continues throughout the duration of a course rather than taking place upfront.
Given the expanding role and landscape of technology—as well as the growing body of knowledge about learning and about educational activities and assessments—dedicated instructional designers are increasingly common and often take a stronger role.
Competency-based learning allows students to progress at their own pace and finish assignments, courses, and degree plans as time and skills permit. Data provided by analytics systems can help instructional designers predict which pedagogical approaches might be most effective and tailor learning experiences accordingly. The use of mobile learning continues to grow, enabling new kinds of learning experiences.
In some contexts, instructional designers might work more directly with students, teaching them lifelong learning skills. Students might begin coursework by choosing from a menu of options, creating their own path through content, making choices about learning options, being more hands-on, and selecting best approaches for demonstrating mastery. Educational models that feature adaptive and personalized learning will increasingly be a focus of instructional design.
Instructional designers bring a cross-disciplinary approach to their work, showing faculty how learning activities used in particular subject areas might be effective in others. In this way, instructional designers can cultivate a measure of consistency across courses and disciplines in how educational strategies and techniques are incorporated.
Please join me September 20 for a free webinar where Dr. Sheryl Abshire, CTO of Calcasieu Parish SD and a recognized leader in K-12 technology, shares her insights on the top strategies, best practices and most valuable ideas that can reduce IT departmental costs and increase efficiencies.
What: New Ways to Measure & Leverage the Value of IT When: 09/20 @ 2:00 PM ET | 11:00 AM PT
Listen in and learn how to:
· Use data you already collect to justify needs and resources
· Create a new value proposition for IT
· Measure the strategic use of IT in the district
· Determine if your current technology is making the difference you expected
https://www.schooldude.com/ Tech support costs in K-12 increased by 50% in the last four years, from 14% to 21% of the technology budget. One half of school technology leaders said that, although their school board understands that technology relates to district overall goals, it is not as supportive financially. Worse, 8% felt that the school board does not believe technology is important to their district’s overall goals.
The Harvard Business Review report Driving Digital Transformation (2015) surveyed digital leaders; it identified driving innovation as their most important role, along with breaking down internal silos.
virtualization; data deluge; energy and green IT; complex resource tracking; consumerization and social software; unified communications; mobile and wireless; system density; mashups and portals; cloud computing
What is a quick recovery?
Action plan: 1. Focus on virtualization and green IT for immediate cost and flexibility benefits. 2. Look at storage virtualization, deduplication, and thin provisioning. 3. Evaluate web social software to transform interactions. 4. Exploit mashups and cloud-based services to address immediate user needs. 5. Link UC to collaboration and enterprise applications to support growth initiatives. 6. Begin to track weak signals and subtle patterns – from everywhere.
Managing the upkeep and replacement of a growing number of devices.
Perception gap (what we are doing).
Agentless network discovery mechanism: scanning of devices on the network. Optimize hardware and software usage; improve the planning and budgeting process with status reporting.
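The notes above mention an agentless discovery mechanism that scans devices on the network. A minimal sketch of the general idea, probing hosts with a TCP connect attempt instead of installing an agent on each device, might look like the following; the host list and port are invented for illustration, and a real product would also use ARP, SNMP, and similar protocols.

```python
# Hypothetical sketch of agentless discovery via a TCP connect sweep.
# Hosts and port are invented; real tools combine several protocols.
import socket

def is_reachable(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

def discover(hosts, port=80):
    """Return the subset of hosts that answered on the given port."""
    return [h for h in hosts if is_reachable(h, port)]
```

An inventory system would run such a sweep on a schedule and feed the results into the status reports used for planning and budgeting.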
MDM (mobile device management): supports both BYOD and school devices; control app distribution across the network, supervise device usage, and remotely manage device policy.
Helpdesk: a complete ticket-to-close helpdesk solution.
Technology facilitators: spend time at assigned schools; talk to teachers to figure out what they know about technology, then work with the principal to customize workshops (PLCs) that build on teachers’ existing skill sets, rather than placing a technology facilitator at every school. Help schools grow their own.
Industrial revolutions are momentous events. By most reckonings, there have been only three. The first was triggered in the 1700s by the commercial steam engine and the mechanical loom. The harnessing of electricity and mass production sparked the second, around the start of the 20th century. The computer set the third in motion after World War II.
Henning Kagermann, the head of the German National Academy of Science and Engineering (Acatech), did exactly that in 2011, when he used the term Industrie 4.0 to describe a proposed government-sponsored industrial initiative.
The term Industry 4.0 refers to the combination of several major innovations in digital technology.
These technologies include advanced robotics and artificial intelligence; sophisticated sensors; cloud computing; the Internet of Things; data capture and analytics; digital fabrication (including 3D printing); software-as-a-service and other new marketing models; smartphones and other mobile devices; platforms that use algorithms to direct motor vehicles (including navigation tools, ride-sharing apps, delivery and ride services, and autonomous vehicles); and the embedding of all these elements in an interoperable global value chain, shared by many companies from many countries.
Companies that embrace Industry 4.0 are beginning to track everything they produce from cradle to grave, sending out upgrades for complex products after they are sold (in the same way that software has come to be updated). These companies are learning mass customization: the ability to make products in batches of one as inexpensively as they could make a mass-produced product in the 20th century, while fully tailoring the product to the specifications of the purchaser.
Three aspects of digitization form the heart of an Industry 4.0 approach.
• The full digitization of a company’s operations
• The redesign of products and services
• Closer interaction with customers
Making Industry 4.0 work requires major shifts in organizational practices and structures. These shifts include new forms of IT architecture and data management, new approaches to regulatory and tax compliance, new organizational structures, and — most importantly — a new digitally oriented culture, which must embrace data analytics as a core enterprise capability.
As Klaus Schwab put it in his recent book The Fourth Industrial Revolution (World Economic Forum, 2016): “Contrary to the previous industrial revolutions, this one is evolving at an exponential rather than linear pace.… It is not only changing the ‘what’ and the ‘how’ of doing things, but also ‘who’ we are.”
This great integrating force is gaining strength at a time of political fragmentation — when many governments are considering making international trade more difficult. It may indeed become harder to move people and products across some national borders. But Industry 4.0 could overcome those barriers by enabling companies to transfer just their intellectual property, including their software, while letting each nation maintain its own manufacturing networks.
More on the Internet of Things in this IMS blog: http://blog.stcloudstate.edu/ims?s=internet+of+things