Microcredentials and Digital Badges in Higher Education
November 27 – 29, 2018 Savannah, GA
Badging programs are rapidly gaining momentum in higher education – join us to learn how to get your badging efforts off the ground.
Key Considerations: Assessment of Competencies
During this session, you will learn how to ask the right questions and evaluate if badges are a good fit within your unique institutional context, including determining ROI on badging efforts. You’ll also learn how to assess the competencies behind digital badges.
Key Technology Considerations
This session will allow for greater understanding of Open Badges standards, the variety of technology software and platforms, and the portability of badges. We will also explore emerging trends in the digital badging space and discuss campus considerations.
Key Financial Considerations
During this hour, we will take a closer look at answering key financial questions surrounding badges:
- What does the business model look like behind existing institutional badging initiatives?
- Are these money-makers for an institution? Is there revenue potential?
- Where does funding for these efforts come from?
Partnering with Industry
Badging can be a catalyst for partnerships between higher education and industry. In this session, you will have the opportunity to learn more about strategies for collaborating with industry in the development of badges and how badges align with employer expectations.
Branding and Marketing Badges
Now that we have a better idea of the “why” and “what” of badges, how do we market their value to external and internal stakeholders? You’ll see examples of how other institutions are designing and marketing their badges.
Alongside your peers and our expert instructors, you will have the opportunity to brainstorm ideas, get feedback, ask questions, and get answers.
Next Steps and the Road Ahead: Where Badging in Higher Ed is Going
More and more institutions are getting into the badging game. We'll use this time to discuss the far-reaching considerations in the world of badging, engage in forward thinking, and explore where badging in higher education is headed.
more on microcredentialing in this IMS blog
The future of collaboration: Large-scale visualization
Henry Hwangbo http://usblogs.pwc.com/emerging-technology/the-future-of-collaboration-large-scale-visualization/
More data doesn't automatically lead to better decisions. A shortage of skilled data scientists has hindered progress in translating information into actionable business insights. In addition, traditionally dense spreadsheets and linear slideshows are ineffective for presenting discoveries when dealing with Big Data's dynamic nature. We need to evolve how we capture, analyze, and communicate data.
Large-scale visualization platforms have several advantages over traditional presentation methods. They blur the line between the presenter and audience to increase the level of interactivity and collaboration. They also offer simultaneous views of both macro and micro perspectives, multi-user collaboration and real-time data interaction, and a limitless number of visualization possibilities – critical capabilities for rapidly understanding today’s large data sets.
Visualization walls enable presenters to target people’s preferred learning methods, thus creating a more effective communication tool. The human brain has an amazing ability to quickly glean insights from patterns – and great visualizations make for more efficient storytellers.
Grant: Visualizing Digital Scholarship in Libraries and Learning Spaces
Award amount: $40,000
Funder: Andrew W. Mellon Foundation
Lead institution: North Carolina State University Libraries
Due date: 13 August 2017
Notification date: 15 September 2017
NC State University, funded by the Andrew W. Mellon Foundation, invites proposals from institutions interested in participating in a new project for Visualizing Digital Scholarship in Libraries and Learning Spaces. The grant aims to 1) build a community of practice of scholars and librarians who work in large-scale multimedia to help visually immersive scholarly work enter the research lifecycle; and 2) overcome technical and resource barriers that limit the number of scholars and libraries who may produce digital scholarship for visualization environments and the impact of generated knowledge. Libraries and museums have made significant strides in pioneering the use of large-scale visualization technologies for research and learning. However, the utilization, scale, and impact of visualization environments and the scholarship created within them have not reached their fullest potential. A logical next step in the provision of technology-rich, visual academic spaces is to develop best practices and collaborative frameworks that can benefit individual institutions by building economies of scale among collaborators.
The project contains four major elements:
- An initial meeting and priority setting workshop that brings together librarians, scholars, and technologists working in large-scale, library and museum-based visualization environments.
- Scholars-in-residence at NC State over a multi-year period who pursue open source creative projects, working in collaboration with our librarians and faculty, with the potential to address the articulated limitations.
- Funding for modest, competitive block grants to other institutions working on similar challenges for creating, disseminating, validating, and preserving digital scholarship created in and for large-scale visual environments.
- A culminating symposium that brings together representatives from the scholars-in-residence and block grant recipient institutions to share and assess results, organize ways of preserving and disseminating digital products produced, and build on the methods, templates, and tools developed for future projects.
This call solicits proposals for block grants from library or museum systems that have visualization installations. Block grant recipients can utilize funds for ideas ranging from creating open source scholarly content for visualization environments to developing tools and templates to enhance sharing of visualization work. An advisory panel will select four institutions to receive awards of up to $40,000. Block grant recipients will also participate in the initial priority setting workshop and the culminating symposium. Participating in a block grant proposal does not disqualify an individual from later applying for one of the grant-supported scholar-in-residence appointments.
Applicants will provide a statement of work that describes the contributions that their organization will make toward the goals of the grant. Applicants will also provide a budget and budget justification.
Activities that can be funded through block grants include, but are not limited to:
- Commissioning work by a visualization expert
- Hosting a visiting scholar, artist, or technologist residency
- Software development or adaptation
- Development of templates and methodologies for sharing and scaling content utilizing open source software
- Student or staff labor for content or software development or adaptation
- Curricula and reusable learning objects for digital scholarship and visualization courses
- Travel (if necessary) to the initial project meeting and culminating workshop
- User research on universal design for visualization spaces
Funding for operational expenditures, such as equipment, is not allowed for any grant participant.
Send an application to firstname.lastname@example.org by the end of the day on 13 August 2017 that includes the following:
- Statement of work (no more than 1000 words) of the project idea your organization plans to develop, its relationship to the overall goals of the grant, and the challenges to be addressed.
- A list of names and contact information for each participant in the funded project, including a brief description of their current role, background, expertise, interests, and what they can contribute.
- Project timeline.
- Budget table with projected expenditures.
- Budget narrative detailing the proposed expenditures.
Selection and Notification Process
An advisory panel made up of scholars, librarians, and technologists with experience and expertise in large-scale visualization and/or visual scholarship will review and rank proposals. The project leaders are especially keen to receive proposals that develop best practices and collaborative frameworks that can benefit individual institutions by building a community of practice and economies of scale among collaborators.
Awardees will be selected based on:
- the ability of their proposal to successfully address one or both of the identified problems;
- the creativity of the proposed activities;
- relevant demonstrated experience partnering with scholars or students on visualization projects;
- whether the proposal is extensible;
- feasibility of the work within the proposed time-frame and budget;
- whether the project work improves or expands access to large-scale visual environments for users; and
- the participant’s ability to expand content development and sharing among the network of institutions with large-scale visual environments.
Awardees will be required to send a representative to an initial meeting of the project cohort in Fall 2017.
Awardees will be notified by 15 September 2017.
If you have any questions, please contact email@example.com.
–Mike Nutt Director of Visualization Services Digital Library Initiatives, NCSU Libraries
Updating the Next Generation Digital Learning Environment for Better Student Learning Outcomes
Monday, July 3, 2017
A learning management system (LMS) is never the solution to every problem in education. Edtech is just one part of the whole learning ecosystem and student experience.
Therefore, the next generation digital learning environment (NGDLE), as envisioned by EDUCAUSE in 2015 … Looking at the NGDLE requirements from an LMS perspective, I view the NGDLE as being about five areas: interoperability; personalization; analytics, advising, and learning assessment; collaboration; accessibility and universal design.
Interoperability
- Content can easily be exchanged between systems.
- Users are able to leverage the tools they love, including discipline-specific apps.
- Learning data is available to trusted systems and people who need it.
- The learning environment is “future proof” so that it can adapt and extend as the ecosystem evolves.
Personalization
- The learning environment reflects individual preferences.
- Departments, divisions, and institutions can be autonomous.
- Instructors teach the way they want and are not constrained by the software design.
- There are clear, individual learning paths.
- Students have choice in activity, expression, and engagement.
Analytics, Advising, and Learning Assessment
- Learning analytics helps to identify at-risk students, course progress, and adaptive learning pathways.
- The learning environment enables integrated planning and assessment of student performance.
- More data is made available, with greater context around the data.
- The learning environment supports platform and data standards.
Collaboration
- Individual spaces persist after courses and after graduation.
- Learners are encouraged as creators and consumers.
- Courses include public and private spaces.
Accessibility and Universal Design
- Accessibility is part of the design of the learning experience.
- The learning environment enables adaptive learning and supports different types of materials.
- Learning design includes measurement rubrics and quality control.
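The analytics capabilities listed above — identifying at-risk students from learning data — can be sketched in a few lines. This is a toy illustration only; the record fields (`logins_last_week`, `avg_grade`) and thresholds are invented, and real learning-analytics systems draw on far richer LMS data.

```python
# Flag potentially at-risk students from simple LMS activity records.
# Field names and thresholds are invented for illustration only.

def flag_at_risk(records, min_logins=2, min_grade=70):
    """Return names of students whose activity or grades fall below thresholds."""
    at_risk = []
    for r in records:
        if r["logins_last_week"] < min_logins or r["avg_grade"] < min_grade:
            at_risk.append(r["name"])
    return at_risk

students = [
    {"name": "A", "logins_last_week": 5, "avg_grade": 88},
    {"name": "B", "logins_last_week": 1, "avg_grade": 91},
    {"name": "C", "logins_last_week": 4, "avg_grade": 62},
]
print(flag_at_risk(students))  # → ['B', 'C']
```

The point is not the rule itself but the pipeline: learner data flows out of the environment to "trusted systems and people who need it," where even simple rules can surface students needing advising.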
The core analogy used in the NGDLE paper is that each component of the learning environment is a Lego brick:
- The days of the LMS as a “walled garden” app that does everything are over.
- Today many kinds of amazing learning and collaboration tools (Lego bricks) should be accessible to educators.
- We have standards that let these tools (including an LMS) talk to each other. That is, all bricks share some properties that let them fit together.
- Students and teachers sign in once to this “ecosystem of bricks.”
- The bricks share results and data.
- These bricks fit together; they can be interchanged and swapped at will, with confidence that the learning experience will continue uninterrupted.
Any “next-gen” attempt to completely rework the pedagogical model and introduce a “mash-up of whatever” to fulfill this model would fall victim to the same criticisms leveled at the LMS today: there is too little time and training to expect faculty to figure out the nuances of implementation on their own.
The Lego metaphor works only if we’re talking about “old school” Lego design — bricks of two, three, and four-post pieces that neatly fit together. Modern edtech is a lot more like the modern Lego. There are wheels and rocket launchers and belts and all kinds of amazing pieces that work well with each other, but only when they are configured properly. A user cannot simply stick together different pieces and assume they will work harmoniously in creating an environment through which each student can be successful.
As the NGDLE paper states: “Despite the high percentages of LMS adoption, relatively few instructors use its more advanced features — just 41% of faculty surveyed report using the LMS ‘to promote interaction outside the classroom.'”
But this is what the next generation LMS is good at: being a central nervous system — or learning hub — through which a variety of learning activities and tools are used. This is also where the LMS needs to go: bringing together and making sense of all the amazing innovations happening around it. This is much harder to do, perhaps even impossible, if all the pieces involved are just bricks without anything to orchestrate them or to weave them together into a meaningful, personal experience for achieving well-defined learning outcomes.
- Making a commitment to build easy, flexible, and smart technology
- Working with colleges and universities to remove barriers to adopting new tools in the ecosystem
- Standardizing the vetting of accessibility compliance (the Strategic Nonvisual Access Partner Program from the National Federation of the Blind is a great start)
- Advancing standards for data exchange while protecting individual privacy
- Building integrated components that work with the institutions using them — learning quickly about what is and is not working well and applying those lessons to the next generation of interoperability standards
- Letting people use the tools they love [SIC] and providing more ways for nontechnical individuals (including students) to easily integrate new features into learning activities
My note: something just refused to be accepted at SCSU
Technologists are often very focused on the technology, but the reality is that the more deeply and closely we understand the pedagogy and the people in the institutions — students, faculty, instructional support staff, administrators — the better suited we are to actually making the tech work for them.
Under the Hood of a Next Generation Digital Learning Environment in Progress
Monday, July 31, 2017
The challenge is that although 85 percent of faculty use a campus learning management system (LMS), a recent Blackboard report found that, out of 70,000 courses across 927 North American institutions, 53 percent of LMS usage was classified as supplemental (content-heavy, low interaction) and 24 percent as complementary (one-way communication via content/announcements/gradebook). Only 11 percent were characterized as social, 10 percent as evaluative (heavy use of assessment), and 2 percent as holistic (balanced use of all of the previous). Our FYE course required innovating beyond the supplemental course-level LMS to create a more holistic cohort-wide NGDLE in order to fully support the teaching, learning, and student success missions of the program. The key design goals for our NGDLE were to:
- Create a common platform that could deliver a standard curriculum and achieve parity in all course sections using existing systems and tools and readily available content
- Capture, store, and analyze any generated learner data to support learning assessment, continuous program improvement, and research
- Develop reports and actionable analytics for administrators, advisors, instructors, and students
more on LMS in this blog
more on learning outcomes in this IMS blog
International Benchmarks for Academic Library Use of Bibliometrics & Altmetrics, 2016-17
ID: 3807768 Report August 2016 115 pages Primary Research Group
The report gives detailed data on the use of various bibliometric and altmetric tools such as Google Scholar, Web of Science, Scimago, and Plum Analytics.
The survey covers 20 predominantly research universities in the USA, continental Europe, the UK, Canada, and Australia/New Zealand. Among the survey participants are: Carnegie Mellon, Cambridge University, Universitat Politècnica de Catalunya, the University at Albany, the University of Melbourne, Florida State University, the University of Alberta, and Victoria University of Wellington.
– 50% of the institutions sampled help their researchers to obtain a Thomson Reuters ResearcherID.
ResearcherID provides a solution to the author ambiguity problem within the scholarly research community. Each member is assigned a unique identifier to enable researchers to manage their publication lists, track their times-cited counts and h-index, identify potential collaborators, and avoid author misidentification. In addition, your ResearcherID information integrates with the Web of Science and is ORCID compliant, allowing you to claim and showcase your publications from a single account. Search the registry to find collaborators, review publication lists, and explore how research is used around the world!
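The ORCID identifiers mentioned above tackle the same author-ambiguity problem; an ORCID iD's final character is a check digit computed with the ISO 7064 MOD 11-2 algorithm that ORCID documents publicly. A small sketch of that checksum:

```python
# Validate the check digit of an ORCID iD (ISO 7064 MOD 11-2),
# following ORCID's published algorithm.

def orcid_checksum(orcid_id: str) -> str:
    """Compute the expected check character for an iD like '0000-0002-1825-0097'
    (the last character is the check digit; 10 is written as 'X')."""
    digits = orcid_id.replace("-", "")[:-1]  # all but the check character
    total = 0
    for ch in digits:
        total = (total + int(ch)) * 2
    remainder = total % 11
    result = (12 - remainder) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid_id: str) -> bool:
    return orcid_checksum(orcid_id) == orcid_id[-1]

print(is_valid_orcid("0000-0002-1825-0097"))  # → True (ORCID's sample iD)
```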
– Just 5% of those surveyed use Facebook Insights in their altmetrics efforts.
more on altmetrics in this IMS blog
Instagram positioned itself as the third most popular social media app and the best social media app of 2016.
Twitter saw a decrease in users over the past year and even the death of its beloved six-second video-clip sharing app, Vine.
In an article entitled ‘Why Vine Died,’ Casey Newton reported the following: “Former executives say that a major competitive challenge emerged in the form of Instagram, which introduced 15-second video clips in June 2013.”
Instagram remained stable thanks to the introduction of new features like stories and video channels, the resources of its parent company, Facebook, and the introduction of ads that look very similar to the posts in a user’s feed.
In addition to a total logo redesign, Instagram shifted its focus from just pictures, to longer video (from 15 sec. to one minute) and direct messaging features, such as group posts and disappearing video. Explore Channels in Discover let people discover new photo and video content based on interests. Instagram Stories added a new element to the Instagram experience showing highlights from friends, celebrities and businesses one follows without interfering with their feed. Instagram also caters to business needs through its Instagram for Business platform that allows for instant contact, detailed analytics and easy-to-follow linked content.
Most recently, Instagram released live video in its stories feature. Users can start a live stream in their Instagram story and view comments and feedback from their viewers in real time! This feature is similar to apps like musical.ly and live.ly, which has over 80 million users, 62% of whom are under 21.
#StudentVoices #MillennialMondays #WhatToWatch
#MillennialMondays is a new series that aims to discuss relevant topics on careers and business from a millennial perspective.
more on instagram in this IMS blog
Learn data mining languages: R, Python and SQL
– Fantastic set of interactive tutorials for learning different languages. Their SQL tutorial is second to none. You’ll learn how to manipulate data in MySQL, SQL Server, Access, Oracle, Sybase, DB2 and other database systems.
– The best way to learn is to work towards a goal. That’s what this helpful blog series is all about. You’ll learn SQL from scratch by following along with a simple, but common, data analysis scenario.
– This course is recommended for the intermediate SQL-er who wants to brush up on their skills. It’s a series of 10 challenges coupled with forums and external videos to help you improve your SQL knowledge and understanding of the underlying principles.
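As a taste of the data manipulation those SQL tutorials cover, here is a minimal sketch run through Python's built-in sqlite3 module; the `sales` table and its rows are invented for illustration.

```python
import sqlite3

# A toy GROUP BY / aggregation query of the kind SQL tutorials teach.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100.0), ("south", 250.0), ("north", 50.0)],
)
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # → [('north', 150.0), ('south', 250.0)]
conn.close()
```

The same SELECT/GROUP BY pattern carries over to MySQL, SQL Server, Oracle, and the other systems the tutorials mention; only the connection setup differs.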
– Created by Code School, this interactive online tutorial system is designed to step you through R for statistics and data modeling. As you work through their seven modules, you’ll earn badges to track your progress helping you to stay on track.
– If you’re a complete R novice, try Lead’s introduction to R. In their 1 hour 30 min course, they’ll cover installation, basic usage, common functions, data structures, and data types. They’ll even set you up with your own development environment in RStudio.
– Once you’ve mastered the basics of R, bookmark this page. It’s a fantastically comprehensive style guide to using R. We should all strive to write beautiful code, and this resource (based on Google’s R style guide) is your key to that ideal.
– Learn R in R – a radical idea certainly. But that’s exactly what Swirl does. They’ll interactively teach you how to program in R and do some basic data science at your own pace. Right in the R console.
Python for beginners
– The Python website actually has a pretty comprehensive and easy-to-follow set of tutorials. You can learn everything from installation to complex analyses. It also gives you access to the Python community, who will be happy to answer your questions.
– A complete list of Python tutorials to take you from zero to Python hero. There are tutorials for beginners, intermediate and advanced learners.
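For a sense of the beginner material those Python tutorials start with, here is a minimal example of summarizing a small data set with only the standard library (the scores are made up):

```python
from collections import Counter
from statistics import mean, median

# Basic descriptive statistics and frequency counts -- typical
# first steps in any introductory Python data tutorial.
scores = [72, 85, 85, 90, 64, 85, 90]

print(mean(scores))                    # → 81.57142857142857
print(median(scores))                  # → 85
print(Counter(scores).most_common(1))  # → [(85, 3)]
```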
Read all about it: data mining books
Data Jujitsu: The Art of Turning Data into Product
– This free book by DJ Patil gives you a brief introduction to the complexity of data problems and how to approach them. He gives nice, understandable examples that cover the most important thought processes of data mining. It’s a great book for beginners but still interesting to the data mining expert. Plus, it’s free!
Data Mining: Concepts and Techniques
– The third (and most recent) edition will give you an understanding of the theory and practice of discovering patterns in large data sets. Each chapter is a stand-alone guide to a particular topic, making it a good resource if you’re not into reading in sequence or you want to know about a particular topic.
Mining of Massive Datasets
– Based on the Stanford Computer Science course, this book is often cited by data scientists as one of the most helpful resources around. It’s designed at the undergraduate level with no formal prerequisites. It’s the next best thing to actually going to Stanford!
Hadoop: The Definitive Guide
– As a data scientist, you will undoubtedly be asked about Hadoop. So you’d better know how it works. This comprehensive guide will teach you how to build and maintain reliable, scalable, distributed systems with Apache Hadoop. Make sure you get the most recent edition to keep up with this fast-changing technology.
Online learning: data mining webinars and courses
– Learn data mining from the comfort of your home with DataCamp’s online courses. They have free courses on R, Statistics, Data Manipulation, Dynamic Reporting, Large Data Sets and much more.
– Coursera brings you all the best university courses straight to your computer. Their online classes will teach you the fundamentals of interpreting data, performing analyses, and communicating insights. They have topics for beginners and advanced learners in Data Analysis, Machine Learning, Probability and Statistics, and more.
– With a range of free and paid data mining courses, you’re sure to find something you like on Udemy no matter your level. There are 395 in the area of data mining! All their courses are uploaded by other Udemy users, meaning quality can fluctuate, so make sure you read the reviews.
– These courses are handily organized into “Paths” based on the technology you want to learn. You can do everything from build a foundation in Git to take control of a data layer in SQL. Their engaging online videos will take you step-by-step through each lesson and their challenges will let you practice what you’ve learned in a controlled environment.
– Master a new skill or programming language with Udacity’s unique series of online courses and projects. Each class is developed by a Silicon Valley tech giant, so you know that what you’re learning will be directly applicable to the real world.
– Learn from experts in web design, coding, business and more. The video tutorials from Treehouse will teach you the basics and their quizzes and coding challenges will ensure the information sticks. And their UI is pretty easy on the eyes.
Learn from the best: top data miners to follow
– Chief Data Scientist at MailChimp and author of Data Smart, John is worth a follow for his witty yet poignant tweets on data science.
– Author and Chief Data Scientist at The White House OSTP, DJ tweets everything you’ve ever wanted to know about data in politics.
– He’s Editor-in-Chief of FiveThirtyEight, a blog that uses data to analyze news stories in Politics, Sports, and Current Events.
– As the Chief Data Scientist at Baidu, Andrew is responsible for some of the most groundbreaking developments in Machine Learning and Data Science.
– He might know pretty much everything there is to know about Big Data.
– He’s the author of the popular data science blog KDnuggets, the leading newsletter on data mining and knowledge discovery.
– As the co-founder of OkCupid, Christian has access to one of the most unique datasets on the planet, and he uses it to give fascinating insight into human nature, love, and relationships.
– He’s contributed to a number of data blogs and authored his own book on applied predictive analytics. At the moment, Dean is Chief Data Scientist at SmarterHQ.
Practice what you’ve learned: data mining competitions
– This is the ultimate data mining competition. The world’s biggest corporations offer big prizes for solving their toughest data problems.
– The best way to learn is to teach. Stack Overflow offers the perfect forum for you to prove your data mining know-how by answering fellow enthusiasts’ questions.
– With a live leaderboard and interactive participation, TunedIT offers a great platform to flex your data mining muscles.
– You can find a number of nonprofit data mining challenges on DataDriven. All of your mining efforts will go towards a good cause.
– Another great site to answer questions on just about everything. There are plenty of curious data lovers on there asking for help with data mining and data science.
Meet your fellow data miner: social networks, groups and meetups
– As with many social media platforms, Facebook is a great place to meet and interact with people who have similar interests. There are a number of very active data mining groups you can join.
– If you’re looking for data mining experts in a particular field, look no further than LinkedIn. There are hundreds of data mining groups ranging from the generic to the hyper-specific. In short, there’s sure to be something for everyone.
– Want to meet your fellow data miners in person? Attend a meetup! Just search for data mining in your city and you’re sure to find an awesome group near you.
8 fantastic examples of data storytelling
Data storytelling is the realization of great data visualization. We’re seeing data that’s been analyzed well and presented in a way that someone who’s never even heard of data science can get it.
Google’s Cole Nussbaumer provides a friendly reminder of what data storytelling actually is: straightforward, strategic, elegant, and simple.
more on text and data mining in this IMS blog
Researchers use an app to predict GPA based on smartphone use
Dartmouth College and the University of Texas at Austin have developed an app that tracks smartphone activity to predict a student’s grade point average to within 0.17 of a point.
More on Big Data in education in this blog:
Why Girls Tend to Get Better Grades Than Boys Do
New research shows that girls are ahead in every subject, including math and science. Do today’s grading methods skew in their favor?
The latest data from the Pew Research Center uses U.S. Census Bureau data to show that in 2012, 71 percent of female high school graduates went on to college, compared to 61 percent of their male counterparts. In 1994 the figures were 63 and 61 percent, respectively.
Girls succeed over boys in school because they are more apt to plan ahead, set academic goals, and put effort into achieving those goals.
The weaker sex
For all the data and feedback they provide, student information systems interfere with learning.
“School isn’t about learning. It’s about doing well.”
The singular focus on grades that these systems encourage turns learning into a competitive, zero-sum game for students.
The parallel with online grade systems in K12 is the Big Data movement in higher ed. Big Data must be about assisting teaching, not determining it, and instructors must be well aware of this distinction and navigate very carefully in the nebulous area of assisting versus determining.
This article about quantifying the management of teaching and learning in K12 reminds me of the big hopes placed on technocrats governing countries and economies in the 1970s, when the advent of computers was celebrated as the solution to all our problems. Haven’t we, as a civilization, learned anything from that lesson?