A report from ISACA, a nonprofit association focused on knowledge and practices for information systems: the 2017 State of Cyber Security Study surveyed IT security leaders around the globe on security issues, the emerging threat landscape, workforce challenges and more.
· 53 percent of survey respondents reported a year-over-year increase in cyber attacks;
· 62 percent experienced ransomware in 2016, but only 53 percent have a formal process in place to address a ransomware attack;
· 78 percent reported malicious attacks aimed at impairing an organization’s operations or user data;
· Only 31 percent said they routinely test their security controls, while 13 percent never test them; and
· 16 percent do not have an incident response plan.
· 65 percent of organizations now employ a chief information security officer, up from 50 percent in 2016, yet still struggle to fill open cyber security positions;
· 48 percent of respondents don’t feel comfortable with their staff’s ability to address complex cyber security issues;
· More than half say cyber security professionals “lack an ability to understand the business”;
· One in four organizations allot less than $1,000 per cyber security team member for training; and
· About half of the organizations surveyed will see an increase in their cyber security budget, down from 61 percent in 2016.
++++++++++++++++++++++++++
IoT to Represent More Than Half of Connected Device Landscape by 2021
The analysis comes from Cisco’s recent Visual Networking Index for the 2016-2021 forecast period.
· IP video traffic will increase from 73 percent of all internet consumer traffic in 2016 to 82 percent in 2021 (with live streaming accounting for 13 percent);
· Virtual and augmented reality traffic is expected to increase 20-fold during the forecast period at a compound annual growth rate of 82 percent; and
· Internet video surveillance traffic is anticipated to grow during the forecast period, comprising 3.4 percent of all internet traffic.
The privacy concerns such use might raise: as universities implement systems that integrate wearables, they will encounter this hurdle and will have to implement policies to address it.
5. Research
Laboratories are often required to be completely controlled spaces with considerations made for climate, light, and sometimes even biometric data inside the lab.
Call For Chapters: Responsible Analytics and Data Mining in Education: Global Perspectives on Quality, Support, and Decision-Making
SUBMIT A 1-2 PAGE CHAPTER PROPOSAL
Deadline – June 1, 2017
Title: Responsible Analytics and Data Mining in Education: Global Perspectives on Quality, Support, and Decision-Making
Synopsis:
Due to rapid advancements in our ability to collect, process, and analyze massive amounts of data, it is now possible for educators at all levels to gain new insights into how people learn. According to Bainbridge et al. (2015), using simple learning analytics models, educators now have the tools to identify, with up to 80% accuracy, which students are at the greatest risk of failure before classes even begin. As we consider the enormous potential of data analytics and data mining in education, we must also recognize a myriad of emerging issues and potential consequences, intentional and unintentional, that must be addressed if these tools are to be implemented responsibly. For example:
· Who collects and controls the data?
· Is it accessible to all stakeholders?
· How are the data being used, and is there a possibility for abuse?
· How do we assess data quality?
· Who determines which data to trust and use?
· What happens when the data analysis yields flawed results?
· How do we ensure due process when data-driven errors are uncovered?
· What policies are in place to address errors?
· Is there a plan for handling data breaches?
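The “simple learning analytics models” mentioned above are often logistic-regression-style risk scores. The following is a minimal sketch of that idea only; the features, weights, and 0.5 threshold are invented for illustration and are not taken from Bainbridge et al. (2015).

```python
from math import exp

def risk_score(gpa, prior_failures, logins_per_week):
    """Toy logistic risk model. The weights below are hypothetical,
    chosen only to show the shape of such a model."""
    z = 2.0 - 1.1 * gpa + 0.9 * prior_failures - 0.15 * logins_per_week
    return 1 / (1 + exp(-z))  # estimated probability of failure, 0..1

# Hypothetical students: (GPA, prior course failures, LMS logins per week)
students = {
    "s001": (3.6, 0, 12),
    "s002": (1.9, 2, 2),
}
at_risk = [sid for sid, f in students.items() if risk_score(*f) >= 0.5]
print(at_risk)  # → ['s002']
```

A real model would learn its weights from historical outcomes, which is precisely where the questions above about data control, quality, and flawed results come into play.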
This book, published by Routledge Taylor & Francis Group, will provide insights and support for policy makers, administrators, faculty, and IT personnel on issues pertaining to the responsible use of data analytics and data mining in education.
Important Dates:
· June 1, 2017 – Chapter proposal submission deadline
· July 15, 2017 – Proposal decision notification
· October 15, 2017 – Full chapter submission deadline
· December 1, 2017 – Full chapter decision notification
· January 15, 2018 – Full chapter revisions due
++++++++++++++++++
more on data mining in this IMS blog https://blog.stcloudstate.edu/ims?s=data+mining
It’s the first time mobile learning ranked as the highest priority in the survey. The No. 2 priority is broadband and network capacity, which ranked first last year, and the No. 3 priority is cybersecurity and privacy, with 62 percent of respondents rating them more important than last year.
Understaffing remains a key issue for technology departments in school systems.
Single sign-on (SSO) is the most implemented interoperability initiative
More than one-third of IT leaders expressed no interest in bring your own device (BYOD) initiatives, up from 20 percent in 2014.
Interest in open educational resources (OER) is high
Education technology experience is common among IT leaders
Strong academic backgrounds are also prevalent among IT leaders.
Lack of diversity continues to be an issue for school district technology leaders.
CoSN is a nonprofit association for school system technology leaders. To read or download the full IT leadership survey, visit this CoSN site.
Because the questionnaire data comprised both Likert scales and open questions, they were analyzed quantitatively and qualitatively. Textual data (open responses) were qualitatively analyzed by coding: each segment (e.g. a group of words) was assigned to a semantic reference category, as systematically and rigorously as possible. For example, “Using an iPad in class really motivates me to learn” was assigned to the category “positive impact on motivation.” The qualitative analysis was performed using an adapted version of the approaches developed by L’Écuyer (1990) and Huberman and Miles (1991, 1994). Thus, we adopted a content analysis approach using QDA Miner software, which is widely used in qualitative research (see Fielding, 2012; Karsenti, Komis, Depover, & Collin, 2011). For the quantitative analysis, we used SPSS 22.0 software to conduct descriptive and inferential statistics. We also conducted inferential statistics to further explore the iPad’s role in teaching and learning, along with its motivational effect. The results will be presented in a subsequent report (Fievez & Karsenti, 2013).
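The coding step described above, mapping a text segment to a semantic reference category, is done by human coders in tools like QDA Miner; the sketch below only illustrates the mapping idea with a hypothetical keyword codebook (one category is taken from the example in the text, the other is invented).

```python
# Hypothetical codebook: category -> trigger keywords.
# A real codebook is developed and validated by the researchers themselves.
CODEBOOK = {
    "positive impact on motivation": ["motivates", "motivating", "motivation"],
    "distraction": ["distract", "off-task"],
}

def code_segment(segment):
    """Assign a text segment to every semantic category whose keywords match."""
    text = segment.lower()
    return [cat for cat, keywords in CODEBOOK.items()
            if any(kw in text for kw in keywords)]

print(code_segment("Using an iPad in class really motivates me to learn"))
# → ['positive impact on motivation']
```

Keyword matching is far cruder than what a trained coder does, but it shows why software can help apply a codebook systematically across hundreds of responses.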
The 20th-century practice of conducting qualitative research through oral interviews and then processing the results manually triggered, in the second half of the century, sometimes condescending attitudes from researchers in the exact sciences.
The reason was the advent of computing power in the second half of the 20th century, which allowed exact sciences to claim “scientific” and “data-based” results.
One statistical package, SPSS, is today widely known and considered a magnificent tool for building solid, statistically based argumentation, which further perpetuates the perceived superiority of quantitative over qualitative methods.
At the same time, qualitative researchers continue to lag behind, mostly due to the inertia of their approach to qualitative analysis, which continues to be done in the “olden” ways. While there is nothing wrong with the “olden” ways, harnessing computational power can streamline the process and even surface options that the “human eye” sometimes misses.
Below are some suggestions you may consider when you embark on the path of qualitative research.
An excellent guide to the structure of qualitative research:
Palys, T., & Atchison, C. (2012). Qualitative research in the digital era: Obstacles and opportunities. International Journal of Qualitative Methods, 11(4), 352-367.
Palys and Atchison (2012) present a compelling case to bring your qualitative research to the level of the quantitative research by using modern tools for qualitative analysis.
1. The authors correctly promote NVivo as the “jaguar” of qualitative research tools. Be aware, however, of the existence of other “Geo Metro” tools which, for your research, might achieve the same result (see the bottom of this blog entry).
2. The authors promote a new approach to Chapter 2 of the doctoral dissertation, namely OCR-ing PDF articles (as of 2017, most of your literature is in PDF or other electronic text formats) through applications such as:
Abbyy Fine Reader, https://www.abbyy.com/en-us/finereader/
OmniPage, http://www.nuance.com/for-individuals/by-product/omnipage/index.htm
Readiris, http://www.irislink.com/EN-US/c1462/Readiris-16-for-Windows—OCR-Software.aspx
The text from the articles is then processed through NVivo or related programs (see the bottom of this blog entry). As the authors propose: “This is immediately useful for literature review and proposal writing, and continues through the research design, data gathering, and analysis stages— where NVivo’s flexibility for many different sources of data (including audio, video, graphic, and text) are well known—of writing for publication” (p. 353).
In other words, you can try to wrap your head around a huge amount of textual information on your own, but you can also approach the task in parallel by processing the same text with a tool.
+++++++++++++++++++++++++++++
Here are some suggestions for Computer Assisted/Aided Qualitative Data Analysis Software (CAQDAS), for both small and large community applications:
text mining: https://en.wikipedia.org/wiki/Text_mining Text mining, also referred to as text data mining, roughly equivalent to text analytics, is the process of deriving high-quality information from text. High-quality information is typically derived through the devising of patterns and trends through means such as statistical pattern learning. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output. https://ischool.syr.edu/infospace/2013/04/23/what-is-text-mining/
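The pipeline described above (structuring the input text, deriving patterns, then evaluating the output) can be sketched in a few lines. This is a minimal illustration using only the Python standard library; the stopword list and sample documents are invented for the example.

```python
import re
from collections import Counter
from itertools import combinations

# Hypothetical stopword list; real systems use much larger ones.
STOPWORDS = {"the", "of", "and", "a", "to", "in", "is", "it", "from"}

def structure(text):
    """The 'structuring' step: parse raw text into lowercase tokens,
    removing stopwords (a crude stand-in for linguistic processing)."""
    return [t for t in re.findall(r"[a-z']+", text.lower())
            if t not in STOPWORDS]

def derive_patterns(docs):
    """Derive simple patterns: term frequencies and within-document
    term co-occurrence counts."""
    freq, cooc = Counter(), Counter()
    for doc in docs:
        tokens = structure(doc)
        freq.update(tokens)
        cooc.update(combinations(sorted(set(tokens)), 2))
    return freq, cooc

docs = ["Text mining derives information from text.",
        "Text analytics derives patterns from data."]
freq, cooc = derive_patterns(docs)
print(freq.most_common(3))
```

The final evaluation and interpretation step is left to the researcher, which is exactly the point made in the “word of caution” below about needing a domain expert.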
Qualitative data is descriptive data that cannot be measured in numbers and often includes qualities of appearance like color, texture, and textual description. Quantitative data is numerical, structured data that can be measured. However, there is often slippage between qualitative and quantitative categories. For example, a photograph might traditionally be considered “qualitative data,” but when you break it down to the level of pixels, it becomes measurable.
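The photograph example can be made concrete: once an image is reduced to pixel intensities, it is just numbers you can compute with. A tiny sketch, using an invented 2×3 grayscale “photograph”:

```python
# A 2x3 grayscale "photograph" as pixel intensities (0-255).
# As an image, it is qualitative; as pixels, it is measurable data.
image = [[12, 200, 34],
         [90, 255, 18]]

pixels = [p for row in image for p in row]  # flatten to one list of numbers
mean_brightness = sum(pixels) / len(pixels)
print(mean_brightness)  # → 101.5
```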
A word of caution: text mining doesn’t generate new facts and is not an end in and of itself. The process is most useful when the data it generates can be further analyzed by a domain expert, who can bring additional knowledge for a more complete picture. Still, text mining creates new relationships and hypotheses for experts to explore further.
Pros and Cons of Computer Assisted Qualitative Data Analysis Software
+++++++++++++++++++++++++
more on quantitative research:
Asamoah, D. A., Sharda, R., Hassan Zadeh, A., & Kalgotra, P. (2017). Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course. Decision Sciences Journal of Innovative Education, 15(2), 161–190. https://doi.org/10.1111/dsji.12125
++++++++++++++++++++++++
literature on quantitative research:
St. Cloud State University MC Main Collection – 2nd floor
AZ195 .B66 2015
p. 161 Data scholarship in the Humanities
p. 166 When Are Data?
Philip Chen, C. L., & Zhang, C.-Y. (2014). Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Information Sciences, 275(Supplement C), 314–347. https://doi.org/10.1016/j.ins.2014.01.015
The data are based on a survey of more than 500 scholars drawn from more than 50 major research universities in the USA, Canada, the UK, Australia, New Zealand and Ireland. Data is broken out by various criteria, such as type of university, scholar’s country, gender, political views, academic subject specialty, academic title and other criteria.
50.69% of respondents are currently collaborating or coordinating research with scholars or other researchers from other universities or colleges outside of their country.
Web-based meetings were most common in the Engineering, Mathematics, Computer Science, Physics, Chemistry and other Science and Technology fields, 33.70%, and least common in the Literature and Languages fields, 2.92%.
7.72% of respondents routinely use Adobe Connect to communicate with scholars at other locations.
87.52% of respondents have co-authored a journal article with one or more other authors. Co-authorship was most common in Australia/New Zealand, 96.77%, followed by Canada, 93.10%, and the UK/Ireland, 89.83%. It was least common in the USA, 85.07%.
At the invitation of Adobe Education, I attended the Educause Annual Conference this year and did a quick series of interviews about the education work that Adobe is doing. A huge highlight for me was reconnecting with futurist Bryan Alexander, whom I’d interviewed in 2012 as a part of my Future of Education series, and whose work and voice I’ve continued to really appreciate.