Call For Chapters: Responsible Analytics and Data Mining in Education: Global Perspectives on Quality, Support, and Decision-Making
SUBMIT A 1-2 PAGE CHAPTER PROPOSAL
Deadline – June 1, 2017
Title: Responsible Analytics and Data Mining in Education: Global Perspectives on Quality, Support, and Decision-Making
Due to rapid advancements in our ability to collect, process, and analyze massive amounts of data, it is now possible for educators at all levels to gain new insights into how people learn. According to Bainbridge et al. (2015), using simple learning analytics models, educators now have the tools to identify, with up to 80% accuracy, which students are at the greatest risk of failure before classes even begin. As we consider the enormous potential of data analytics and data mining in education, we must also recognize a myriad of emerging issues and potential consequences, intentional and unintentional, that must be addressed if we are to implement them responsibly. For example:
· Who collects and controls the data?
· Is it accessible to all stakeholders?
· How are the data being used, and is there a possibility for abuse?
· How do we assess data quality?
· Who determines which data to trust and use?
· What happens when the data analysis yields flawed results?
· How do we ensure due process when data-driven errors are uncovered?
· What policies are in place to address errors?
· Is there a plan for handling data breaches?
This book, published by Routledge Taylor & Francis Group, will provide insights and support for policy makers, administrators, faculty, and IT personnel on issues pertaining to the responsible use of data analytics and data mining in education.
· June 1, 2017 – Chapter proposal submission deadline
· July 15, 2017 – Proposal decision notification
· October 15, 2017 – Full chapter submission deadline
· December 1, 2017 – Full chapter decision notification
· January 15, 2018 – Full chapter revisions due
More on data mining on this IMS blog: http://blog.stcloudstate.edu/ims?s=data+mining
The growing use of data mining software in online education has great potential to support student success by identifying and reaching out to struggling students and streamlining the path to graduation. This can be a challenge for institutions that are using a variety of technology systems that are not integrated with each other. As institutions implement learning management systems, degree planning technologies, early alert systems, and tutor scheduling that promote increased interactions among various stakeholders, there is a need for centralized aggregation of these data to provide students with holistic support that improves learning outcomes. Join us to hear from an institutional exemplar who is building solutions that integrate student data across platforms. Then work with peers to address challenges and develop solutions of your own.
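The centralized aggregation described above can be sketched in a few lines: merge per-student records from several (hypothetical) systems into one holistic view, keyed by student ID. The system names, field names, and records below are illustrative assumptions, not any vendor's actual schema.

```python
# A minimal sketch of cross-platform student data aggregation.
# All system names, field names, and records are hypothetical.

lms_activity = {           # from a learning management system
    "s001": {"logins": 42, "avg_quiz": 0.81},
    "s002": {"logins": 3,  "avg_quiz": 0.55},
}
early_alerts = {           # from an early-alert system
    "s002": {"alerts": 2},
    "s003": {"alerts": 1},
}

def aggregate(*sources):
    """Merge per-student records from several systems into one view."""
    merged = {}
    for source in sources:
        for student_id, fields in source.items():
            merged.setdefault(student_id, {}).update(fields)
    return merged

profiles = aggregate(lms_activity, early_alerts)
# "s002" now combines LMS activity with its early-alert count
print(profiles["s002"])  # {'logins': 3, 'avg_quiz': 0.55, 'alerts': 2}
```

In practice the hard part is not the merge but reconciling identifiers and semantics across systems that were never designed to talk to each other.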
W3Schools – Fantastic set of interactive tutorials for learning different languages. Their SQL tutorial is second to none. You’ll learn how to manipulate data in MySQL, SQL Server, Access, Oracle, Sybase, DB2 and other database systems.
Treasure Data – The best way to learn is to work towards a goal. That’s what this helpful blog series is all about. You’ll learn SQL from scratch by following along with a simple, but common, data analysis scenario.
10 Queries – This course is recommended for the intermediate SQL-er who wants to brush up on their skills. It’s a series of 10 challenges coupled with forums and external videos to help you improve your SQL knowledge and understanding of the underlying principles.
TryR – Created by Code School, this interactive online tutorial system is designed to step you through R for statistics and data modeling. As you work through their seven modules, you’ll earn badges that mark your progress and help you stay on track.
Leada – If you’re a complete R novice, try Leada’s introduction to R. In their 1 hour 30 min course, they’ll cover installation, basic usage, common functions, data structures, and data types. They’ll even set you up with your own development environment in RStudio.
Advanced R – Once you’ve mastered the basics of R, bookmark this page. It’s a fantastically comprehensive style guide to using R. We should all strive to write beautiful code, and this resource (based on Google’s R style guide) is your key to that ideal.
Swirl – Learn R in R – a radical idea certainly. But that’s exactly what Swirl does. They’ll interactively teach you how to program in R and do some basic data science at your own pace. Right in the R console.
Python for beginners – The Python website actually has a pretty comprehensive and easy-to-follow set of tutorials. You can learn everything from installation to complex analyses. It also gives you access to the Python community, who will be happy to answer your questions.
PythonSpot – A complete list of Python tutorials to take you from zero to Python hero. There are tutorials for beginners, intermediate and advanced learners.
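For a quick way to practice both the SQL and Python skills these tutorials cover, Python’s built-in sqlite3 module speaks standard SQL, so you can try queries without installing a database server. The table and data below are made up for illustration.

```python
import sqlite3

# Practice basic SQL in-memory: no server setup needed.
# The "students" table and its rows are invented for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT, grade REAL)")
conn.executemany("INSERT INTO students VALUES (?, ?)",
                 [("Ada", 92.0), ("Ben", 67.5), ("Cleo", 78.0)])

# A SELECT with a WHERE filter and ORDER BY, as covered in most SQL tutorials
rows = conn.execute(
    "SELECT name, grade FROM students WHERE grade >= 70 ORDER BY grade DESC"
).fetchall()
print(rows)  # [('Ada', 92.0), ('Cleo', 78.0)]
```

The same statements run largely unchanged against MySQL, SQL Server, or the other systems the W3Schools tutorial covers.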
Read all about it: data mining books
Data Jujitsu: The Art of Turning Data into Product – This free book by DJ Patil gives you a brief introduction to the complexity of data problems and how to approach them. He gives nice, understandable examples that cover the most important thought processes of data mining. It’s a great book for beginners but still interesting to the data mining expert. Plus, it’s free!
Data Mining: Concepts and Techniques – The third (and most recent) edition will give you an understanding of the theory and practice of discovering patterns in large data sets. Each chapter is a stand-alone guide to a particular topic, making it a good resource if you’re not into reading in sequence or you want to know about a particular topic.
Mining of Massive Datasets – Based on the Stanford Computer Science course, this book is often cited by data scientists as one of the most helpful resources around. It’s designed at the undergraduate level with no formal prerequisites. It’s the next best thing to actually going to Stanford!
Hadoop: The Definitive Guide – As a data scientist, you will undoubtedly be asked about Hadoop. So you’d better know how it works. This comprehensive guide will teach you how to build and maintain reliable, scalable, distributed systems with Apache Hadoop. Make sure you get the most recent edition to keep up with this fast-changing technology.
Online learning: data mining webinars and courses
DataCamp – Learn data mining from the comfort of your home with DataCamp’s online courses. They have free courses on R, Statistics, Data Manipulation, Dynamic Reporting, Large Data Sets and much more.
Coursera – Coursera brings the best university courses straight to your computer. Their online classes will teach you the fundamentals of interpreting data, performing analyses and communicating insights. They have topics for beginners and advanced learners in Data Analysis, Machine Learning, Probability and Statistics and more.
Udemy – With a range of free and paid data mining courses, you’re sure to find something you like on Udemy, no matter your level. There are 395 courses in the area of data mining alone! All their courses are created by other Udemy users, meaning quality can fluctuate, so make sure you read the reviews.
CodeSchool – These courses are handily organized into “Paths” based on the technology you want to learn. You can do everything from build a foundation in Git to take control of a data layer in SQL. Their engaging online videos will take you step-by-step through each lesson and their challenges will let you practice what you’ve learned in a controlled environment.
Udacity – Master a new skill or programming language with Udacity’s unique series of online courses and projects. Each class is developed by a Silicon Valley tech giant, so you know that what you’re learning will be directly applicable to the real world.
Treehouse – Learn from experts in web design, coding, business and more. The video tutorials from Treehouse will teach you the basics and their quizzes and coding challenges will ensure the information sticks. And their UI is pretty easy on the eyes.
Learn from the best: top data miners to follow
John Foreman – Chief Data Scientist at MailChimp and author of Data Smart, John is worth a follow for his witty yet pointed tweets on data science.
DJ Patil – Author and Chief Data Scientist at The White House OSTP, DJ tweets everything you’ve ever wanted to know about data in politics.
Nate Silver – He’s Editor-in-Chief of FiveThirtyEight, a blog that uses data to analyze news stories in Politics, Sports, and Current Events.
Andrew Ng – As Chief Scientist at Baidu, Andrew is responsible for some of the most groundbreaking developments in machine learning and data science.
Bernard Marr – He might know pretty much everything there is to know about Big Data.
Gregory Piatetsky – He’s the editor of the popular data science site KDnuggets, the leading newsletter on data mining and knowledge discovery.
Christian Rudder – As the co-founder of OkCupid, Christian has access to one of the most unique datasets on the planet, and he uses it to give fascinating insight into human nature, love, and relationships.
Dean Abbott – He’s contributed to a number of data blogs and authored his own book on Applied Predictive Analytics. At the moment, Dean is Chief Data Scientist at SmarterHQ.
Practice what you’ve learned: data mining competitions
Kaggle – This is the ultimate data mining competition. The world’s biggest corporations offer big prizes for solving their toughest data problems.
Stack Overflow – The best way to learn is to teach. Stack Overflow offers the perfect forum for you to prove your data mining know-how by answering fellow enthusiasts’ questions.
TunedIT – With a live leaderboard and interactive participation, TunedIT offers a great platform to flex your data mining muscles.
DrivenData – You can find a number of nonprofit data mining challenges on DrivenData. All of your mining efforts will go towards a good cause.
Quora – Another great site to answer questions on just about everything. There are plenty of curious data lovers on there asking for help with data mining and data science.
Meet your fellow data miner: social networks, groups and meetups
Facebook – As with many social media platforms, Facebook is a great place to meet and interact with people who have similar interests. There are a number of very active data mining groups you can join.
LinkedIn – If you’re looking for data mining experts in a particular field, look no further than LinkedIn. There are hundreds of data mining groups ranging from the generic to the hyper-specific. In short, there’s sure to be something for everyone.
Meetup – Want to meet your fellow data miners in person? Attend a meetup! Just search for data mining in your city and you’re sure to find an awesome group near you.
By enabling the development and creation of big data for non-commercial use only, the European Commission has come up with a half-baked policy. Startups will be discouraged from mining in Europe, and it will be impossible for companies to grow out of universities in the EU.
Text and data mining is a critically important means of uncovering patterns of intellectual practice and usage that have the potential to illuminate facets and perspectives in research and scholarship that might otherwise go unnoted. At the same time, challenges exist in terms of project management and support, licensing and other necessary protections.
Confirmed speakers include: Audrey McCulloch, Executive Director, ALPSP; Michael Levine-Clark, Dean of Libraries, University of Denver; Ellen Finnie, Head, Scholarly Communications and Collections Strategies, Massachusetts Institute of Technology; and Jeremy Frey, Professor of Physical Chemistry, Head of Computational Systems Chemistry, University of Southampton, UK.
Audrey McCulloch, Chief Executive, Association of Learned Professional and Society Publishers (ALPSP) and Director of the Publishers Licensing Society
Text and Data Mining: Library Opportunities and Challenges Michael Levine-Clark, Dean and Director of Libraries, University of Denver
As scholars engage with text and data mining (TDM), libraries have struggled to provide support for projects that are unpredictable and tremendously varied. While TDM can be considered a fair use, in many cases contracts need to be renegotiated and special data sets created by the vendor. The unique nature of TDM projects makes it difficult to plan for them, and often the library and scholar have to figure them out as they go along. This session will explore strategies for libraries to effectively manage TDM, often in partnership with other units on campus, and will offer suggestions to improve the process for all.
Michael Levine-Clark, the Dean and Director of the University of Denver Libraries, is the recipient of the 2015 HARRASSOWITZ Leadership in Library Acquisitions Award. He writes and speaks regularly on strategies for improving academic library collection development practices, including the use of e-books in academic libraries, the development of demand-driven acquisition models, and implications of discovery tool implementation.
Library licensing approaches in text and data mining access for researchers at MIT Ellen Finnie, Head, Scholarly Communications & Collections Strategy, MIT Libraries
This talk will address the challenges and successes that the MIT libraries have experienced in providing enabling services that deliver TDM access to MIT researchers, including:
· emphasizing TDM in negotiating contracts for scholarly resources
· defining requirements for licenses for TDM access
· working with information providers to negotiate licenses that work for our researchers
· addressing challenges and retooling to address barriers to success
· offering educational guides and workshops
· managing current needs vs. the long-term goal: TDM as a reader’s right
Ellen Finnie is Head, Scholarly Communications & Collections Strategy in the MIT Libraries. She leads the MIT Libraries’ scholarly communications and collections strategy in support of the Libraries’ and MIT’s objectives, including in particular efforts to influence models of scholarly publishing and communication in ways that increase the impact and reach of MIT’s research and scholarship and which promote open, sustainable publishing and access models. She leads outreach efforts to faculty in support of scholarly publication reform and open access activities at MIT, and acts as the Libraries’ chief resource for copyright issues and for content licensing policy and negotiations. In that role, she is involved in negotiating licenses to include text/data mining rights and coordinating researcher access to TDM services for licensed scholarly resources. She has written and spoken widely on digital acquisitions, repositories, licensing, and open access.
Jeremy Frey, Professor of Physical Chemistry, Head of Computational Systems Chemistry, University of Southampton, UK
Text and Data Mining (TDM) facilitates the discovery, selection, structuring, and analysis of large numbers of documents/sets of data, enabling the visualization of results in new ways to support innovation and the development of new knowledge. In both academia and commercial contexts, TDM is increasingly recognized as a means to extract, re-use and leverage additional value from published information, by linking concepts, addressing specific questions, and creating efficiencies. But TDM in practice is not straightforward. TDM methodology and use are fast changing but are not yet matched by the development of enabling policies.
This webinar provides a review of where we are today with TDM, as seen from the perspective of the researcher, library, and licensing-publisher communities.
Data analytics can have a real-time impact on curriculum structure, instruction delivery and student learning, permitting change and improvement. It can also provide insight into important trends that affect present and future resource needs.
Big Data: Traditionally described as high-volume, high-velocity and high-variety information.
Learning or Data Analytics: The measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs.
Educational Data Mining: The techniques, tools and research designed for automatically extracting meaning from large repositories of data generated by or related to people’s learning activities in educational settings.
Predictive Analytics: Algorithms that help analysts predict behavior or events based on data.
Predictive Modeling: The process of creating, testing and validating a model to best predict the probability of an outcome.
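The predictive modeling workflow just defined (create, test and validate a model) can be sketched with nothing but the standard library. The synthetic "hours studied → exam score" data and the one-variable linear model below are illustrative assumptions, not a recommendation for a production analytics pipeline.

```python
import random

# Predictive modeling in miniature: fit a one-variable linear model on
# training data, then validate it on held-out data. Data is synthetic.
random.seed(0)
data = [(h, 50 + 5 * h + random.gauss(0, 2)) for h in range(20)]
train, test = data[:15], data[15:]          # simple train/validation split

# Create: ordinary least squares for y = a + b*x
n = len(train)
mean_x = sum(x for x, _ in train) / n
mean_y = sum(y for _, y in train) / n
b = (sum((x - mean_x) * (y - mean_y) for x, y in train)
     / sum((x - mean_x) ** 2 for x, _ in train))
a = mean_y - b * mean_x

# Test/validate: mean absolute error on the held-out points
mae = sum(abs((a + b * x) - y) for x, y in test) / len(test)
print(round(b, 2), round(mae, 2))
```

The recovered slope lands near the true value of 5, and the held-out error stays close to the noise level, which is exactly what validation is meant to confirm before a model is trusted for prediction.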
Data analytics, or the measurement, collection, analysis and reporting of data, is driving decision-making in many institutions. However, because of the unique nature of each district’s or college’s data needs, many are building their own solutions.
For example, in 2014 the nonprofit company inBloom, Inc., backed by $100 million from the Gates Foundation and the Carnegie Foundation for the Advancement of Teaching, closed its doors amid controversy regarding its plan to store, clean and aggregate a range of student information for states and districts and then make the data available to district-approved third parties to develop tools and dashboards so the data could be used by classroom educators.
Tips for Student Data Privacy
Know the Laws and Regulations
There are many regulations on the books intended to protect student privacy and safety: the Family Educational Rights and Privacy Act (FERPA), the Protection of Pupil Rights Amendment (PPRA), the Children’s Internet Protection Act (CIPA), the Children’s Online Privacy Protection Act (COPPA) and the Health Insurance Portability and Accountability Act (HIPAA), as well as state, district and community laws. Because technology changes so rapidly, it is unlikely laws and regulations will keep pace with new data protection needs. Establish a committee to ascertain your institution’s level of understanding of and compliance with these laws, along with additional safeguard measures.
Make a Checklist
Your institution’s privacy policies should cover security, user safety, communications, social media, access, identification rules, and intrusion detection and prevention.
Communicate, Communicate, Communicate
Students, staff, faculty and parents all need to know their rights and responsibilities regarding data privacy. Convey your technology plans, policies and requirements and then assess and re-communicate those throughout each year.
“Anything-as-a-Service” or “X-as-a-Service” solutions can help K-12 and higher education institutions cope with big data by offering storage, analytics capabilities and more. These include:
• Infrastructure-as-a-Service (IaaS): Providers offer cloud-based storage, similar to a campus storage area network (SAN)
• Platform-as-a-Service (PaaS): Opens up application platforms — as opposed to the applications themselves — so others can build their own applications using underlying operating systems, data models and databases, plus pre-built application components and interfaces
• Software-as-a-Service (SaaS): The hosting of applications in the cloud
• Big-Data-as-a-Service (BDaaS): Mix all the above together, upscale the amount of data involved by an enormous amount and you’ve got BDaaS
Use accurate data correctly
Define goals and develop metrics
Eliminate silos, integrate data
Remember, intelligence is the goal
Maintain a robust, supportive enterprise infrastructure
Prioritize student privacy
Develop bullet-proof data governance guidelines
Create a culture of collaboration and sharing, not compliance
For all the data and feedback they provide, student information systems interfere with learning.
“School isn’t about learning. It’s about doing well.”
The singular focus on grades that these systems encourage turns learning into a competitive, zero-sum game for students.
The parallel to these online grade systems in K-12 is the big data movement in higher ed. Big data must be about assisting teaching, not determining it, and instructors must be very aware and navigate very carefully in this nebulous area of assisting versus determining.
This article about quantifying the management of teaching and learning in K-12 reminds me of the big hopes placed on technocrats to govern countries and economies in the 1970s, when the advent of the computer was celebrated as the solution to all our problems. Haven’t we, as a civilization, learned anything from that lesson?
How do algorithms impact our browsing behavior and browsing history? What is the connection between social media algorithms and fake news? Are there topic-detection algorithms, as there are community-detection ones?
How can I change the content of a [Google] search return? Can I?
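Community detection, at least, is well established. One simple family of such algorithms is label propagation: every node repeatedly adopts the label most common among its neighbors, so densely connected groups converge on a shared label. The toy graph below (two triangles joined by one weak bridge) and the deterministic tie-breaking rule are illustrative choices for this sketch, not the canonical algorithm used by any particular library.

```python
from collections import Counter

# Toy graph: two triangles connected by a single "bridge" edge.
edges = [("a", "b"), ("b", "c"), ("a", "c"),   # triangle 1
         ("d", "e"), ("e", "f"), ("d", "f"),   # triangle 2
         ("c", "d")]                           # weak bridge between them
neighbors = {}
for u, v in edges:
    neighbors.setdefault(u, set()).add(v)
    neighbors.setdefault(v, set()).add(u)

labels = {node: node for node in neighbors}    # start: each node is its own community
for _ in range(10):                            # a few fixed sweeps suffice here
    for node in sorted(neighbors):
        counts = Counter(labels[n] for n in sorted(neighbors[node]))
        best = max(counts.values())
        top = [lab for lab, c in counts.items() if c == best]
        if labels[node] not in top:            # keep current label on ties,
            labels[node] = max(top)            # otherwise break ties deterministically

communities = {}
for node, lab in labels.items():
    communities.setdefault(lab, set()).add(node)
print(sorted(sorted(c) for c in communities.values()))
# [['a', 'b', 'c'], ['d', 'e', 'f']] -- the two triangles, despite the bridge
```

The same idea, run over follower graphs or share networks, is part of how platforms cluster users and content, which is one concrete link between algorithms and what ends up in a feed.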
CRUZ, J. D., BOTHOREL, C., & POULET, F. (2014). Community Detection and Visualization in Social Networks: Integrating Structural and Semantic Information. ACM Transactions On Intelligent Systems & Technology, 5(1), 1-26. doi:10.1145/2542182.2542193
The W2T (Wisdom Web of Things) methodology considers the information organization and management from the perspective of Web services, which contributes to a deep understanding of online phenomena such as users’ behaviors and comments in e-commerce platforms and online social networks. (https://link.springer.com/chapter/10.1007/978-3-319-44198-6_10)
ethics of algorithm
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 2053951716679679. https://doi.org/10.1177/2053951716679679
Because the questionnaire data comprised both Likert scales and open questions, they were analyzed quantitatively and qualitatively. Textual data (open responses) were qualitatively analyzed by coding: each segment (e.g. a group of words) was assigned to a semantic reference category, as systematically and rigorously as possible. For example, “Using an iPad in class really motivates me to learn” was assigned to the category “positive impact on motivation.” The qualitative analysis was performed using an adapted version of the approaches developed by L’Écuyer (1990) and Huberman and Miles (1991, 1994). Thus, we adopted a content analysis approach using QDA Miner software, which is widely used in qualitative research (see Fielding, 2012; Karsenti, Komis, Depover, & Collin, 2011). For the quantitative analysis, we used SPSS 22.0 software to conduct descriptive and inferential statistics. We also conducted inferential statistics to further explore the iPad’s role in teaching and learning, along with its motivational effect. The results will be presented in a subsequent report (Fievez & Karsenti, 2013).
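The coding step described above can be approximated in a few lines: assign each open response to a semantic category by keyword matching. Real qualitative coding in tools like QDA Miner is far more careful and context-sensitive; the categories and keyword lists below are purely illustrative.

```python
# A naive keyword-based coder for open questionnaire responses.
# Categories and keywords are invented for illustration only.
categories = {
    "positive impact on motivation": ["motivates", "motivating", "want to learn"],
    "distraction": ["distracted", "off-task", "games"],
}

def code_segment(segment):
    """Return the list of categories whose keywords appear in the segment."""
    text = segment.lower()
    return [cat for cat, keywords in categories.items()
            if any(kw in text for kw in keywords)]

print(code_segment("Using an iPad in class really motivates me to learn"))
# ['positive impact on motivation']
```

Even a crude coder like this can serve as a first pass over hundreds of responses, leaving the researcher to verify and refine the assignments rather than start from scratch.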
The 20th-century practice of conducting qualitative research through oral interviews and then manually processing the results triggered, in the second half of the century, sometimes condescending attitudes from researchers in the exact sciences.
The reason was the advent of computing power in the second half of the 20th century, which allowed the exact sciences to claim “scientific,” data-based results.
One such statistical package, SPSS, is widely known today and considered a magnificent tool for building solid, statistically based argumentation, which further perpetuates the perceived superiority of quantitative over qualitative methods.
At the same time, qualitative researchers continue to lag behind, mostly due to the inertia of their approach: qualitative analysis continues to be done in the old ways. While there is nothing wrong with the old ways, harnessing computational power can streamline the process and even surface patterns that the human eye sometimes misses.
Below are some suggestions you may consider when you embark on the path of qualitative research.
Palys and Atchison (2012) present a compelling case to bring your qualitative research to the level of the quantitative research by using modern tools for qualitative analysis.
1. The authors correctly promote NVivo as the “Jaguar” of qualitative research tools. Be aware, however, of the existence of other “Geo Metro” tools which, for your research, might achieve the same result (see the bottom of this blog entry).
text mining: https://en.wikipedia.org/wiki/Text_mining Text mining, also referred to as text data mining, roughly equivalent to text analytics, is the process of deriving high-quality information from text. High-quality information is typically derived through the devising of patterns and trends through means such as statistical pattern learning. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output. https://ischool.syr.edu/infospace/2013/04/23/what-is-text-mining/
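As a toy illustration of that pipeline (structuring input text, then deriving patterns from the structured data), the sketch below tokenizes two sentences, drops stopwords, and counts term frequencies. The stopword list and the two example documents are arbitrary choices for the demonstration.

```python
import re
from collections import Counter

# Stage 1: structure raw text (tokenize, lowercase, drop stopwords).
# Stage 2: derive a simple pattern (term frequency) from the result.
STOPWORDS = {"the", "of", "and", "to", "is", "from", "a", "in"}

def structure(text):
    """Turn raw text into a list of content-bearing tokens."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

docs = [
    "Text mining derives high-quality information from text.",
    "Patterns and trends are derived through statistical pattern learning.",
]
freq = Counter(tok for doc in docs for tok in structure(doc))
print(freq.most_common(3))
```

Real text mining layers parsing, linguistic features, and a database on top of this, but the shape of the pipeline (structure first, then find patterns, then interpret) stays the same.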
Qualitative data is descriptive data that cannot be measured in numbers and often includes qualities of appearance like color, texture, and textual description. Quantitative data is numerical, structured data that can be measured. However, there is often slippage between qualitative and quantitative categories. For example, a photograph might traditionally be considered “qualitative data,” but when you break it down to the level of pixels, it can be measured quantitatively.
A word of caution: text mining doesn’t generate new facts and is not an end in and of itself. The process is most useful when the data it generates can be further analyzed by a domain expert, who can bring additional knowledge to form a more complete picture. Still, text mining creates new relationships and hypotheses for experts to explore further.