Searching for "data visualization"

Literature on Digital Humanities

Burdick, A., Drucker, J., Lunenfeld, P., Presner, T., & Schnapp, J. (2012). Digital_Humanities. Cambridge, MA: MIT Press.

https://mnpals-scs.primo.exlibrisgroup.com/discovery/fulldisplay?docid=alma990078472690104318&context=L&vid=01MNPALS_SCS:SCS&search_scope=MyInst_and_CI&tab=Everything&lang=en

Digital humanities is born of the encounter between traditional humanities and computational methods.

p. 5. From Humanism to Humanities
While the foundations of humanistic inquiry and the liberal arts can be traced back in the West to the medieval trivium and quadrivium, the modern human sciences are rooted in the Renaissance shift from a medieval, church-dominated, theocratic worldview to a human-centered one. The gradual transformation of early humanism into the disciplines that make up the humanities today was profoundly shaped by the editorial practices involved in the recovery of the corpus of works from classical antiquity.

P. 6. The shift from humanism to the institutionally sanctioned disciplinary practices and protocols that we associate with the humanities today is best described as a gradual process of subdivision and specialization.
P. 7. Text-based disciplines and studies (classics, literature, philosophy, the history of ideas) make up, from the very start, the core of both the humanities and the great books curricula instituted in the 1920s and 1930s.
P. 10. Transmedia modes of argumentation
In the 21st century, we communicate in media significantly more varied, extensible, and multiplicative than linear text. From scalable databases to information visualizations, from video lectures to multi-user virtual platforms, serious content and rigorous argumentation take shape across multiple platforms and media. The best digital humanities pedagogy and research projects train students both in “reading” and “writing” these emergent rhetorics and in understanding how they reshape and remodel humanistic knowledge. This means developing critically informed literacies expansive enough to include graphic design, visual narrative, time-based media, and the development of interfaces (rather than the rote acceptance of them as off-the-shelf products).
P. 11. The visual becomes ever more fundamental to the digital humanities, in ways that complement, enhance, and sometimes are in tension with the textual.
There is no either/or, no simple interchangeability between language and the visual, no strict subordination of the one to the other. Words are themselves visual, but other kinds of visual constructs do different things. The question is how to use each to its best effect and to devise meaningful intertwinglings, to use Theodor Nelson’s ludic neologism.
P. 11. The suite of expressive forms now encompasses the use of sound, motion graphics, animation, screen capture, video, audio, and the appropriation and remixing of code that underlies game engines. This expanded range of communicative tools requires those who are engaged in digital humanities work to familiarize themselves with issues, discussions, and debates in design fields, especially communication and interaction design. Like their print predecessors, the formats and conventions of screen environments can become naturalized all too quickly, with the result that the thinking that informed their design goes unperceived.

p. 13.

For digital humanists, design is a creative practice harnessing cultural, social, economic, and technological constraints in order to bring systems and objects into the world. Design in dialogue with research is simply a technique, but when used to pose and frame questions about knowledge, design becomes an intellectual method. Digital humanities is a production-based endeavor in which theoretical issues get tested in the design of implementations, and implementations are loci of theoretical reflection and elaboration.
Digital humanists have much to learn from communication and media design about how to juxtapose and integrate words and images, create hierarchies of reading, forge pathways of understanding, deploy grids and templates to best effect, and develop navigational schemata that guide and produce meaningful interactions.
P. 15. The field of digital humanities may see the emergence of polymaths who can “do it all”: who can research, write, shoot, edit, code, model, design, network, and dialogue with users. But there is also ample room for specialization and, particularly, for collaboration.
P. 16. Computational activities in digital humanities.
The foundational layer, computation, relies on principles that are, on the surface, at odds with humanistic methods.
P. 17. The second level involves processing in a way that conforms to computational capacities, and this was explored in the first generation of digital scholarship: stylometrics, concordance development, and indexing.
P. 17.
Curation, analysis, editing, modeling.
Curation, analysis, editing, and modeling comprise fundamental activities at the core of digital humanities. Involving archives, collections, repositories, and other aggregations of materials, curation is the selection and organization of materials in an interpretive framework, argument, or exhibit.
P. 18. Analysis refers to the processing of text or data: statistical and quantitative methods of analysis have brought close readings of texts (stylometrics and genre analysis, collation, comparisons of versions for author attribution or usage patterns) into dialogue with distant reading (the crunching of large quantities of information across a corpus of textual data or its metadata).
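
A minimal sketch of the kind of stylometric comparison described above, assuming two plain-text files (the file names are hypothetical placeholders); it compares relative frequencies of common function words, a standard signal in author-attribution work:

    # Stylometric sketch: compare relative function-word frequencies in two texts.
    from collections import Counter
    import re

    FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "was", "it", "for"]

    def profile(path):
        with open(path, encoding="utf-8") as f:
            words = re.findall(r"[a-z']+", f.read().lower())
        counts = Counter(words)
        total = len(words) or 1
        # relative frequency per 1,000 words for each function word
        return {w: 1000 * counts[w] / total for w in FUNCTION_WORDS}

    def distance(p, q):
        # simple Manhattan distance between two stylistic profiles
        return sum(abs(p[w] - q[w]) for w in FUNCTION_WORDS)

    a, b = profile("text_a.txt"), profile("text_b.txt")
    print("stylistic distance:", round(distance(a, b), 2))

Smaller distances suggest more similar stylistic habits; real attribution studies use many more features and proper statistical testing.
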
Editing has been revived with the advent of digital media and the web and continues to be an integral activity in textual as well as time-based formats.
P. 18. Modeling highlights the notion of content models: shapes of argument expressed in information structures and their design. A digital project is always an expression of assumptions about knowledge, usually domain-specific knowledge given explicit form by the model in which it is designed.
P. 19. Each of these areas of activity (curation, analysis, editing, and modeling) is supported by the basic building blocks of digital activity. But they also depend upon networks and infrastructure that are cultural and institutional as well as technical. Servers, software, and systems administration are key elements of any project design.
P. 30. Digital media are not more “evolved” than print media, nor are books obsolete; but the multiplicity of media and the very processes of mediation and remediation in the formation of cultural knowledge and humanistic inquiry require close attention. Toggling between distant and close, macro and micro, and surface and depth becomes the norm. Here, we focus on the importance of visualization to the digital humanities before moving on to other, though often related, genres and methods such as locative investigation, thick mapping, animated archives, database documentaries, platform studies, and emerging practices like cultural analytics, data mining, and humanities gaming.
P. 35. Fluid textuality refers to the mutability of texts in their variants and versions, whether these are produced through authorial changes, editing, transcription, translation, or print production.

Cultural analytics, aggregation, and data mining.
The field of cultural analytics has emerged over the past few years, utilizing tools of high-end computational analysis and data visualization to dissect large-scale cultural data sets. Cultural analytics does not analyze cultural artifacts, but operates on the level of digital models of these materials in aggregate. Again, the point is not to pit “close” hermeneutic reading against “distant” data mapping, but rather to appreciate the synergistic possibilities and tensions that exist between a hyper-localized, deep analysis and a macrocosmic view.

p. 42.

Data mining is a term that covers a host of techniques for analyzing digital material by “parameterizing” some feature of information and extracting it. This means that any element of a file or collection of files that can be given explicit specifications, or parameters, can be extracted from those files for analysis.
Understanding the rhetoric of graphics is another essential skill, therefore, in working at a scale where individual objects are lost in the mass of processed information and data. To date, much humanities data mining has merely involved counting. Much more sophisticated statistical methods and use of probability will be needed for humanists to absorb the lessons of the social sciences into their methods.
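
A minimal sketch of the step from bare counting toward probability, per the passage above: it “parameterizes” one feature (occurrences of a chosen term), counts it in two subcorpora, and applies a chi-squared test. The corpus paths and the term are hypothetical placeholders, and SciPy is assumed to be available:

    # Count a parameterized feature in two subcorpora, then test whether the
    # difference in frequency is larger than chance alone would predict.
    import re
    from scipy.stats import chi2_contingency

    def count_term(path, term):
        with open(path, encoding="utf-8") as f:
            text = f.read().lower()
        hits = len(re.findall(r"\b%s\b" % re.escape(term), text))
        total = len(re.findall(r"\w+", text))
        return hits, total - hits

    a_hits, a_rest = count_term("corpus_a.txt", "liberty")
    b_hits, b_rest = count_term("corpus_b.txt", "liberty")

    # 2x2 contingency table: the term vs. all other tokens, corpus A vs. corpus B
    chi2, p, dof, expected = chi2_contingency([[a_hits, a_rest], [b_hits, b_rest]])
    print("chi2 = %.2f, p = %.4f" % (chi2, p))
    # A small p-value suggests the term's frequency differs between the
    # corpora more than chance alone would predict.
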
P. 42. Visualization and data design
Currently, visualization in the humanities uses techniques drawn largely from the social sciences, business applications, and the natural sciences, all of which require self-conscious criticality in their adoption. Such visual displays, including graphs and charts, may present themselves as objective or even unmediated views of reality, rather than as rhetorical constructs.
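
A small sketch of that rhetorical point, assuming matplotlib is available: the same made-up numbers plotted with a zero-based and a truncated y-axis tell two different visual stories.

    # The same data rendered two ways: a design choice (truncating the
    # y-axis) changes the rhetoric of an apparently "objective" chart.
    import matplotlib.pyplot as plt

    years = ["2015", "2016", "2017", "2018"]
    values = [96, 97, 98, 99]  # illustrative numbers only

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    ax1.bar(years, values)
    ax1.set_ylim(0, 100)
    ax1.set_title("Axis from zero: change looks modest")
    ax2.bar(years, values)
    ax2.set_ylim(95, 100)
    ax2.set_title("Truncated axis: change looks dramatic")
    plt.tight_layout()
    plt.show()
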

+++++++++++++++++++++++++++
Warwick, C., Terras, M., & Nyhan, J. (2012). Digital humanities in practice. London: Facet Publishing in association with UCL Centre for Digital Humanities.

https://mnpals-scs.primo.exlibrisgroup.com/discovery/fulldisplay?docid=alma990078423690104318&context=L&vid=01MNPALS_SCS:SCS&search_scope=MyInst_and_CI&tab=Everything&lang=en

 

Design social media images

How to Easily Design Social Media Images: 4 Free Tools

October 3, 2018 https://www.socialmediaexaminer.com/social-media-images-free-tools

Preview Text Styles With One Touch via Adobe Spark

  • Adobe Spark is part of Adobe’s suite of creative products, bringing social media image and video creation to the web.

Remove the Adobe Spark watermark with a paid Adobe Spark plan or Creative Cloud subscription, both starting at $9.99 a month.

  • Design Basic Social Media Images Quickly With Pablo

    Pablo by Buffer is a no-frills online image editor that lets you make basic social media images in seconds. So while it doesn’t have some of the features of other image editors on this list, it works in a pinch. This tool is free to use without registration, making it perfect for when you or your team needs to create a quick image. My note: not available on mobile yet, desktop only.

  • Design Automatically Resizable Social Media Images With Snappa

    Snappa is a user-friendly online image maker that has templates for every social media network. In addition to social post templates, it offers banner, story, and infographic templates. This makes Snappa your one-stop shop for creating all sorts of social media content.

  • Add Simple Data Visualization Charts to Social Media Images in Canva

    Canva is a free online image editor with a huge library of free templates and royalty-free images. The app has built-in templates for all of the major social networks, and you can even post directly to your social media accounts from the app.

+++++++++++
more on social media images in this IMS blog
http://blog.stcloudstate.edu/ims?s=social+media+images

ELI 2018 Key Issues Teaching Learning

Key Issues in Teaching and Learning

https://www.educause.edu/eli/initiatives/key-issues-in-teaching-and-learning

A roster of results since 2011 is here.

ELI 2018 key issues

1. Academic Transformation

2. Accessibility and UDL

3. Faculty Development

4. Privacy and Security

5. Digital and Information Literacies

https://cdn.nmc.org/media/2017-nmc-strategic-brief-digital-literacy-in-higher-education-II.pdf
Three Models of Digital Literacy: Universal, Creative, Literacy Across Disciplines

United States digital literacy frameworks tend to focus on educational policy details and personal empowerment, the latter encouraging learners to become more effective students, better creators, smarter information consumers, and more influential members of their community.

National policies are vitally important in European digital literacy work, unsurprising for a continent well populated with nation-states and struggling to redefine itself, while still trying to grow economies in the wake of the 2008 financial crisis and subsequent financial pressures.

African digital literacy is more business-oriented.

Middle Eastern nations offer yet another variation, with a strong focus on media literacy. As with other regions, this can be a response to countries with strong state influence or control over local media. It can also represent a drive to produce more locally-sourced content, as opposed to consuming material from abroad, which may elicit criticism of neocolonialism or religious challenges.

p. 14 Digital literacy for Humanities: What does it mean to be digitally literate in history, literature, or philosophy? Creativity in these disciplines often involves textuality, given the large role writing plays in them, as, for example, in the Folger Shakespeare Library’s instructor’s guide. In the digital realm, this can include web-based writing through social media, along with the creation of multimedia projects through posters, presentations, and video. Information literacy remains a key part of digital literacy in the humanities. The digital humanities movement has not seen much connection with digital literacy, unfortunately, but their alignment seems likely, given the turn toward using digital technologies to explore humanities questions. That development could then foster a spread of other technologies and approaches to the rest of the humanities, including mapping, data visualization, text mining, web-based digital archives, and “distant reading” (working with very large bodies of texts). The digital humanities’ emphasis on making projects may also increase

Digital Literacy for Business: Digital literacy in this world is focused on manipulation of data, from spreadsheets to more advanced modeling software, leading up to degrees in management information systems. Management classes unsurprisingly focus on how to organize people working on and with digital tools.

Digital Literacy for Computer Science: Naturally, coding appears as a central competency within this discipline. Other aspects of the digital world feature prominently, including hardware and network architecture. Some courses housed within the computer science discipline offer a deeper examination of the impact of computing on society and politics, along with how to use digital tools. Media production plays a minor role here, beyond publications (posters, videos), as many institutions assign multimedia to other departments. Looking forward to a future when automation has become both more widespread and powerful, developing artificial intelligence projects will potentially play a role in computer science literacy.

6. Integrated Planning and Advising Systems for Student Success (iPASS)

7. Instructional Design

8. Online and Blended Learning

In traditional instruction, students’ first contact with new ideas happens in class, usually through direct instruction from the professor; after exposure to the basics, students are turned out of the classroom to tackle the most difficult tasks in learning — those that involve application, analysis, synthesis, and creativity — in their individual spaces. Flipped learning reverses this, by moving first contact with new concepts to the individual space and using the newly-expanded time in class for students to pursue difficult, higher-level tasks together, with the instructor as a guide.

Let’s take a look at some of the myths about flipped learning and try to find the facts.

Myth: Flipped learning is predicated on recording videos for students to watch before class.

Fact: Flipped learning does not require video. Although many real-life implementations of flipped learning use video, there’s nothing that says video must be used. In fact, one of the earliest instances of flipped learning — Eric Mazur’s peer instruction concept, used in Harvard physics classes — uses no video but rather an online text outfitted with social annotation software. And one of the most successful public instances of flipped learning, an edX course on numerical methods designed by Lorena Barba of George Washington University, uses precisely one video. Video is simply not necessary for flipped learning, and many alternatives to video can lead to effective flipped learning environments [http://rtalbert.org/flipped-learning-without-video/].

Myth: Flipped learning replaces face-to-face teaching.

Fact: Flipped learning optimizes face-to-face teaching. Flipped learning may (but does not always) replace lectures in class, but this is not to say that it replaces teaching. Teaching and “telling” are not the same thing.

Myth: Flipped learning has no evidence to back up its effectiveness.

Fact: Flipped learning research is growing at an exponential pace and has been since at least 2014. That research — 131 peer-reviewed articles in the first half of 2017 alone — includes results from primary, secondary, and postsecondary education in nearly every discipline, most showing significant improvements in student learning, motivation, and critical thinking skills.

Myth: Flipped learning is a fad.

Fact: Flipped learning has been with us in the form defined here for nearly 20 years.

Myth: People have been doing flipped learning for centuries.

Fact: Flipped learning is not just a rebranding of old techniques. The basic concept of students doing individually active work to encounter new ideas that are then built upon in class is almost as old as the university itself. So flipped learning is, in a real sense, a modern means of returning higher education to its roots. Even so, flipped learning is different from these time-honored techniques.

Myth: Students and professors prefer lecture over flipped learning.

Fact: Students and professors embrace flipped learning once they understand the benefits. It’s true that professors often enjoy their lectures, and students often enjoy being lectured to. But the question is not who “enjoys” what, but rather what helps students learn best. They know what the research says about the effectiveness of active learning.

Assertion: Flipped learning provides a platform for implementing active learning in a way that works powerfully for students.

9. Evaluating Technology-based Instructional Innovations

Transitioning to an ROI lens requires three fundamental shifts:
What is the total cost of my innovation, including both new spending and the use of existing resources?

What’s the unit I should measure that connects cost with a change in performance?

How might the expected change in student performance also support a more sustainable financial model?

The Exposure Approach: we don’t provide a way for participants to determine if they learned anything new or now have the confidence or competence to apply what they learned.

The Exemplar Approach: from ‘show and tell’ for adults to show, tell, do and learn.

The Tutorial Approach: Getting a group that can meet at the same time and place can be challenging. That is why many faculty report a preference for self-paced professional development. Build in simple self-assessment checks. We can add prompts that invite people to engage in some sort of follow-up activity with a colleague. We can also add an elective option for faculty in a tutorial to actually create or do something with what they learned and then submit it for direct or narrative feedback.

The Course Approach: In a non-credit format, these have the benefits of a more structured and lengthy learning experience, even if they are just three- to five-week short courses that meet online or in person once every week or two. They can involve badges, portfolios, peer assessment, self-assessment, or one-on-one feedback from a facilitator.

The Academy Approach: like the course approach, this is one that tends to be a deeper and more extended experience. People might gather in a cohort over a year or longer. Assessment through coaching and mentoring, the use of portfolios, peer feedback, and much more can be easily incorporated to add a rich assessment element to such longer-term professional development programs.

The Mentoring Approach: The mentors often don’t set specific learning goals with the mentee. Instead, it is often a set of structured meetings, but also someone to whom mentees can turn with questions and tips along the way.

The Coaching Approach: A mentorship tends to be a broader type of relationship with a person. A coaching relationship tends to be more focused upon specific goals, tasks, or outcomes.

The Peer Approach: This can be done on a 1:1 basis or in small groups, where those who are teaching the same courses are able to compare notes on curricula and teaching models. They might give each other feedback on how to teach certain concepts, how to write syllabi, how to handle certain teaching and learning challenges, and much more. Faculty might sit in on each other’s courses, observe, and give feedback afterward.

The Self-Directed Approach: a self-assessment strategy such as setting goals and creating simple checklists and rubrics to monitor our progress. Or, we invite feedback from colleagues, often in a narrative and/or informal format. We might also create a portfolio of our work, or engage in some sort of learning journal that documents our thoughts, experiments, experiences, and learning along the way.

The Buffet Approach:

10. Open Education

Figure 1. A Model for Networked Education (Credit: Image by Catherine Cronin, building on interpretations of Balancing Privacy and Openness; CC BY-SA)

11. Learning Analytics

12. Adaptive Teaching and Learning

13. Working with Emerging Technology

In 2014, administrators at Central Piedmont Community College (CPCC) in Charlotte, North Carolina, began talks with members of the North Carolina State Board of Community Colleges and North Carolina Community College System (NCCCS) leadership about starting a CBE program.

Building on an existing project at CPCC for identifying the elements of a digital learning environment (DLE), which was itself influenced by the EDUCAUSE publication The Next Generation Digital Learning Environment: A Report on Research, the committee reached consensus on a DLE concept and a shared lexicon: the “Digital Learning Environment Operational Definitions.”

Figure 1. NC-CBE Digital Learning Environment

Mapping 1968

Mapping 1968, Conflict and Change

An Opportunity for Interdisciplinary Research 

When:  Friday, September 28, 8:30am-3:00pm
Where: Wilson Research Collaboration Studio, Wilson Library
Cost: Free; advanced registration is required

1968 was one of the most turbulent years of the 20th century.  2018 marks the 50th anniversary of that year’s landmark political, social and cultural events–events that continue to influence our world today.

Focusing on the importance of this 50-year anniversary, we are calling out to all faculty, staff, students, and community partners to participate in the workshop ‘Mapping 1968, Conflict and Change’. This all-day event is designed to bring people together into working groups based on common themes.  Bring your talent and curiosity to apply an interdisciplinary approach to further explore the spatial context of these historic and/or current events. Learn new skills on mapping techniques that can be applied to any time in history. To complement the expertise that you bring to the workshop, working groups will also have the support of library, mapping, and data science experts to help gather, create, and organize the spatial components of a given topic.

To learn more and to register for the workshop, go here

Workshop sponsors: Institute for Advanced Studies (IAS), U-Spatial, Liberal Arts Technologies & Innovation Services (LATIS), Digital Arts, Science & Humanities (DASH), and UMN Libraries.

#mapping1968 #interdisciplinaryresearch

Posted by Plamen Miltenoff on Friday, September 28, 2018

https://www.goodreads.com/book/show/5114403-early-thematic-mapping-in-the-history-of-cartography – symbolization methods, cartographers and statisticians.

Kevin Ehrman-Solberg ehrma046@umn.edu PPT on Mapping Prejudice. https://www.mappingprejudice.org/

Hennepin County scanned the deeds, ran OCR, and used a Python script to search; the data is open source (covenant data). A local historian found microfiches with the language from the initial deeds, e.g., eugenics flavor: “Aryan,” truncated search terms. (A sketch of such a search script follows below.)

covenance: https://www.dictionary.com/browse/convenance
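
A sketch of the kind of search script mentioned in the notes above; this is not the Mapping Prejudice project’s actual code, and the phrase list and folder name are illustrative placeholders:

    # Flag OCR'd deed text files that contain restrictive-covenant language.
    import os
    import re

    PHRASES = [
        r"shall not be .{0,40}(sold|leased|occupied)",
        r"aryan",
        r"caucasian race",
    ]
    pattern = re.compile("|".join(PHRASES), re.IGNORECASE | re.DOTALL)

    for name in sorted(os.listdir("deeds_ocr")):
        if not name.endswith(".txt"):
            continue
        with open(os.path.join("deeds_ocr", name), encoding="utf-8", errors="ignore") as f:
            match = pattern.search(f.read())
        if match:
            print(name, "->", match.group(0)[:60])

Because OCR output is noisy, a real pipeline would also need fuzzy matching and human review of every hit.
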

Dan Milz, Public Affairs: geo-referencing; teaching a class on environmental planning; spatial analysis. dmilz@umn.edu @dcmlz

Chris, ancient historian. A Tale of a Mediterranean City: Mapping the History of Premodern Carthage and Tunis.
College of Liberal Arts

From archives to special resources; archaeological data into GIS layers. ESRI (https://www.esri.com/en-us/home); how interactive is ESRI?

Mapping for six months. Finding the maps in the archaeological and history reports was time consuming; once that data was sorted out, exciting.

#mapping1968 #digitalhumanities

Posted by InforMedia Services on Friday, September 28, 2018

Kate Carlson, U-Spatial Story Maps, An Intro

Patterns we wouldn’t see if we did not bring the data up spatially. Interactivity and data visualization; digital humanities.

Making an argument, asking questions, crowdsourcing, archival and resource accessibility; Civitates Orbis Terrarum: http://historic-cities.huji.ac.il/mapmakers/braun_hogenberg.html

storymaps.arcgis.com/en/gallery (https://storymaps.arcgis.com/en/gallery/#s=0): cloud-based mapping software, ArcGIS Online. Organizational account for the U, 600 users; over 700 story maps created within the U, some of them not active. Share all kinds of data: archive data on a spreadsheet, but also a whole set of data within the software; so add your own data or use the ArcGIS data, and use templates. Web maps go into the story map app. Living Atlas: a curated set of data, hundreds of data sets, from satellite images to different contents; 846 layers of data, imagery. Besides the org account, one can create maps within the free account with limited access. Data browser to use my own data; Data Enrichment to characterize my data. Census data from 2018 and before.
Make a plan, create a storyboard, write for the web (short and precise, not as writing for a journal), cartographic style, copyright, citing the materials, choosing the right map scale for each page. Online learning materials, some only through an org account; ESRI Academy has a course catalogue: Mapping 101, Desktop GIS 101, Collector 101, Imagery 101, SQL 101, Story Maps 101.

Awards for UMN undergrad and grad students, $1000

history, anthropology, political science,

Melinda Kernik, Spatial Data Curator, kerni016@umn.edu; Jenny McBurney, jmcburney@umn.edu

z.umn.edu/1968resources https://docs.google.com/presentation/d/1QpdYKA1Rgzd_Nsd4Rr8ed1cJDAX1zeG7J3exRO6BHV0/edit#slide=id.g436145dc5b_0_23

data2.nhgis.org/main

#mapping1968

Posted by InforMedia Services on Friday, September 28, 2018

University Digital Conservancy

civil rights information from the U (migrants blog)

DASH: Digital Arts, Sciences & Humanities. Text mining, data visualization.

data repository for the U (DRUM)

Ben Wiggins, DASH director: https://dash.umn.edu/

Jennifer Gunn
+++++++++++++++++++++++++

The “Mapping 1968, Conflict and Change” planning committee is very pleased with the amount of interest and the wonderful attendance at Friday’s gathering. Thank you for attending and actively participating in this interdisciplinary workshop!
To re-cap and learn more about your thoughts and expectations of the workshop, we would be grateful if you can take a few moments to complete the workshop evaluation. Please complete the evaluation even if you were unable to attend last Friday; there are questions regarding continued communication and the possibility for future events of this kind.
 
Below is a list of presented workshop resources:
Best Regards-
Kate

U-Spatial | Spatial Technology Consultant
Research Computing, Office of the Vice President for Research
University of Minnesota
Office Address
Blegen Hall 420
Mailing Address
Geography
Room 414 SocSci
7163A

++++++++++++++
more on GIS in this IMS blog
http://blog.stcloudstate.edu/ims?s=GIS

digital humanities

7 Things You Should Know About Digital Humanities


https://library.educause.edu/resources/2017/11/7-things-you-should-know-about-digital-humanities

Lippincott, J., Spiro, L., Rugg, A., Sipher, J., & Well, C. (2017). Seven Things You Should Know About Digital Humanities (ELI 7 Things You Should Know). Retrieved from https://library.educause.edu/~/media/files/library/2017/11/eli7150.pdf

definition

The term “digital humanities” can refer to research and instruction that is about information technology or that uses IT. By applying technologies in new ways, the tools and methodologies of digital humanities open new avenues of inquiry and scholarly production. Digital humanities applies computational capabilities to humanistic questions, offering new pathways for scholars to conduct research and to create and publish scholarship. Digital humanities provides promising new channels for learners and will continue to influence the ways in which we think about and evolve technology toward better and more humanistic ends.

As defined by Johanna Drucker and colleagues at UCLA, the digital humanities is “work at the intersection of digital technology and humanities disciplines.” An EDUCAUSE/CNI working group framed the digital humanities as “the application and/or development of digital tools and resources to enable researchers to address questions and perform new types of analyses in the humanities disciplines,” and the NEH Office of Digital Humanities says digital humanities “explore how to harness new technology for humanities research as well as those that study digital culture from a humanistic perspective.” Beyond blending the digital with the humanities, there is an intentionality about combining the two that defines it.

digital humanities can include (see the sketch after this list)

  • creating digital texts or data sets;
  • cleaning, organizing, and tagging those data sets;
  • applying computer-based methodologies to analyze them;
  • and making claims and creating visualizations that explain new findings from those analyses.
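
A minimal sketch of the first two bullets above (creating, cleaning, and tagging a small data set), assuming a folder of raw text files with a hypothetical name:

    # Build a small, cleaned, tagged corpus and save it as JSON.
    import json
    import os
    import re

    def clean(raw):
        raw = re.sub(r"<[^>]+>", " ", raw)       # strip stray HTML tags
        return re.sub(r"\s+", " ", raw).strip()  # normalize whitespace

    dataset = []
    for name in sorted(os.listdir("raw_texts")):
        if name.endswith(".txt"):
            with open(os.path.join("raw_texts", name), encoding="utf-8") as f:
                text = clean(f.read())
            dataset.append({
                "id": name,
                "tags": [],  # e.g., genre, period, author, added by hand
                "word_count": len(text.split()),
                "text": text,
            })

    with open("corpus.json", "w", encoding="utf-8") as f:
        json.dump(dataset, f, indent=2)

The resulting JSON file can then feed the analysis and visualization steps in the remaining bullets.
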

Scholars might reflect on

  • how the digital form of the data is organized,
  • how analysis is conducted/reproduced, and
  • how claims visualized in digital form may embody assumptions or biases.

Digital humanities can enrich pedagogy as well, such as when a student uses visualized data to study voter patterns or conducts data-driven analyses of works of literature.

Digital humanities usually involves work by teams in collaborative spaces or centers. Team members might include

  • researchers and faculty from multiple disciplines,
  • graduate students,
  • librarians,
  • instructional technologists,
  • data scientists and preservation experts,
  • technologists with expertise in critical computing and computing methods, and undergraduates.

projects:

downsides

  • While some disciplinary associations, including the Modern Language Association and the American Historical Association, have developed guidelines for evaluating digital projects, many institutions have yet to define how work in digital humanities fits into considerations for tenure and promotion.
  • Because large projects are often developed with external funding that is not readily replaced by institutional funds when the grant ends, sustainability is a concern. Doing digital humanities well requires access to expertise in methodologies and tools such as GIS, modeling, programming, and data visualization that can be expensive for a single institution to obtain.
  • Resistance to learning new technologies can be another roadblock, as can the propensity of many humanists to resist working in teams. While some institutions have recognized the need for institutional infrastructure (computation and storage, equipment, software, and expertise), many have not yet incorporated such support into ongoing budgets.

Opportunities for undergraduate involvement in research, providing students with workplace skills such as data management, visualization, coding, and modeling. Digital humanities provides new insights into policy-making in areas such as social media, demographics, and new means of engaging with popular culture and understanding past cultures. Evolution in this area will continue to build connections between the humanities and other disciplines, cross-pollinating research and education in areas like medicine and environmental studies. Insights about digital humanities itself will drive innovation in pedagogy and expand our conceptualization of classrooms and labs.

++++++++++++
more on digital humanities in this IMS blog
http://blog.stcloudstate.edu/ims?s=digital+humanities

topics for IM260

proposed topics for IM 260 class

  • Media literacy. Differentiated instruction. Media literacy guide.
    Fake news as part of media literacy. Visual literacy as part of media literacy. Media literacy as part of digital citizenship.
  • Web design / web development
    the roles of HTML5, CSS, JavaScript, PHP, Bootstrap, jQuery, React, and other scripting languages and libraries. Heat maps and other usability issues; website content strategy. The Model-View-Controller (MVC) design pattern
  • Social media for institutional use. Digital curation. Social media algorithms. Etiquette and ethics. Mastodon
    I hosted a LITA webinar in the fall of 2016 (four weeks); I can accommodate any information from that webinar for the use of the IM students
  • OER and instructional designer’s assistance to book creators.
    I can cover both the “library part” (“free” OER, copyright issues etc) and the support / creative part of an OER book / textbook
  • Big Data. Data visualization. Large-scale visualization. Text encoding. Analytics, data mining. Unizin. Python and R in academia.
    I can introduce the students to the large idea of Big Data and its importance in light of the upcoming IoT, but also departmentalize its importance for academia, business, etc. From infographics to heavy-duty visualization (Primo X-Services API, JSON, Flask). A minimal Flask/JSON sketch follows this list.
  • Net Neutrality, Digital Darwinism, the Internet economy, and the role of the professional in such an environment
    I can introduce students to the issues, if not familiar and / or lead a discussion on a rather controversial topic
  • Digital assessment. Digital Assessment literacy.
    I can introduce students to tools, how to evaluate and select tools and their pedagogical implications
  • Wikipedia
    a hands-on exercise on working with Wikipedia. After the session, students will be able to create Wikipedia entries thus knowing intimately the process of Wikipedia and its information.
  • Effective presentations. Tools, methods, concepts and theories (cognitive load). Presentations in the era of VR, AR and mixed reality. Unity.
    I can facilitate a discussion among experts (your students) on selection of tools and their didactically sound use to convey information. I can supplement the discussion with my own findings and conclusions.
  • eConferencing. Tools and methods
    I can facilitate a discussion among your students on selection of tools and comparison. Discussion about their future and their place in an increasingly online learning environment
  • Digital Storytelling. Immersive Storytelling. The Moth. Twine. Transmedia Storytelling
    I am teaching a LIB 490/590 Digital Storytelling class. I can adapt any information from that class to the use of IM students
  • VR, AR, Mixed Reality.
    besides Mark Gill, I can facilitate a discussion, which goes beyond hardware and brands, but expand on the implications for academia and corporate education / world
  • IoT , Arduino, Raspberry PI. Industry 4.0
  • Instructional design. ID2ID
    I can facilitate a discussion based on the Educause suggestions about the profession’s development
  • Microcredentialing in academia and corporate world. Blockchain
  • IT in K12. How to evaluate, prioritize, select. Obsolete trends in 21st-century schools. K12 mobile learning
  • Podcasting: past, present, future. Beautiful Audio Editor.
    a definition of podcasting and delineation of similar activities; advantages and disadvantages.
  • Digital, Blended (Hybrid), Online teaching and learning: facilitation. Methods and techniques. Proctoring. Online students’ expectations. Faculty support. Asynch. Blended Synchronous Learning Environment
  • Gender, race and age in education. Digital divide. Xennials, Millennials and Gen Z. generational approach to teaching and learning. Young vs old Millennials. Millennial employees.
  • Privacy, [cyber]security, surveillance. K12 cyberincidents. Hackers.
  • Gaming and gamification. Appsmashing. Gradecraft
  • Lecture capture, course capture.
  • Bibliometrics, altmetrics
  • Technology and cheating, academic dishonesty, plagiarism, copyright.
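
As promised in the Big Data topic above, a minimal Flask/JSON sketch: a tiny endpoint that serves counts for a front-end chart. The route name and the data are illustrative placeholders; a real version would query a database or an API such as Primo:

    # Minimal Flask app serving counts as JSON for a visualization front end.
    from flask import Flask, jsonify

    app = Flask(__name__)

    CHECKOUTS_BY_MONTH = {"Jan": 120, "Feb": 98, "Mar": 143}  # stand-in data

    @app.route("/api/checkouts")
    def checkouts():
        return jsonify(CHECKOUTS_BY_MONTH)

    if __name__ == "__main__":
        app.run(debug=True)
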

IRDL proposal

Applications for the 2018 Institute will be accepted between December 1, 2017 and January 27, 2018. Scholars accepted to the program will be notified in early March 2018.

Title:

Learning to Harness Big Data in an Academic Library

Abstract (200)

Research on Big Data per se, as well as on the importance and organization of the process of Big Data collection and analysis, is well underway. The complexity of the process comprising “Big Data,” however, deprives organizations of a ubiquitous “blueprint.” The planning, structuring, administration, and execution of the process of adopting Big Data in an organization, be it a corporate or an educational one, remain elusive. No less elusive is the adoption of Big Data practices among libraries themselves. Seeking the commonalities and differences in the adoption of Big Data practices among libraries may be a suitable start to help libraries transition to the adoption of Big Data and restructure organizational and daily activities based on Big Data decisions.
Introduction to the problem. Limitations

The redefinition of humanities scholarship has received major attention in higher education. The advent of digital humanities challenges aspects of academic librarianship. Data literacy is a critical need for digital humanities in academia. The March 2016 Library Juice Academy webinar led by John Russel exemplifies the efforts to help librarians become versed in programming skills and, respectively, in handling data. Those are first steps on a rather long path of building a robust infrastructure to collect, analyze, and interpret data intelligently, so it can be utilized to restructure daily and strategic activities. Since the phenomenon of Big Data is young, there is a lack of blueprints for the organization of such infrastructure. A collection and sharing of best practices is an efficient approach to establishing a feasible plan for setting up a library infrastructure for the collection, analysis, and implementation of Big Data.
Limitations. This research can only organize the results from the responses of librarians and research into how libraries present themselves to the world in this arena. It may be able to make some rudimentary recommendations. However, based on each library’s specific goals and tasks, further research and work will be needed.

 

 

Research Literature

“Big data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it…”
– Dan Ariely, 2013  https://www.asist.org/publications/bulletin/aprilmay-2017/big-datas-impact-on-privacy-for-librarians-and-information-professionals/

Big Data is becoming an omnipresent term. It is widespread among different disciplines in academia (De Mauro, Greco, & Grimaldi, 2016). This leads to “inconsistency in meanings and necessity for formal definitions” (De Mauro et al., 2016, p. 122). Similarly to De Mauro et al. (2016), Hashem, Yaqoob, Anuar, Mokhtar, Gani, and Ullah Khan (2015) seek standardization of definitions. The main connected “themes” of this phenomenon must be identified and the connections to library science must be sought. A prerequisite for a comprehensive definition is the identification of Big Data methods. Bughin, Chui, and Manyika (2010), Chen et al. (2012), and De Mauro et al. (2015) single out the methods needed to complete the process of building a comprehensive definition.

In conjunction with identifying the methods, volume, velocity, and variety, as defined by Laney (2001), are the three properties of Big Data accepted across the literature. Daniel (2015) defines three stages in Big Data: collection, analysis, and visualization. According to Daniel (2015), Big Data in higher education “connotes the interpretation of a wide range of administrative and operational data” (p. 910), and according to Hilbert (2013), as cited in Daniel (2015), Big Data “delivers a cost-effective prospect to improve decision making” (p. 911).

The importance of understanding the process of Big Data analytics is well understood in academic libraries. An example of such “administrative and operational” use for cost-effective improvement of decision making are the Finch & Flenner (2016) and Eaton (2017) case studies of the use of data visualization to assess an academic library collection and restructure the acquisition process. Sugimoto, Ding & Thelwall (2012) call for the discussion of Big Data for libraries. According to the 2017 NMC Horizon Report “Big Data has become a major focus of academic and research libraries due to the rapid evolution of data mining technologies and the proliferation of data sources like mobile devices and social media” (Adams, Becker, et al., 2017, p. 38).

Power (2014) elaborates on the complexity of Big Data in regard to decision-making and offers ideas for organizations on building a system to deal with Big Data. As explained by Boyd and Crawford (2012) and cited in De Mauro et al (2016), there is a danger of a new digital divide among organizations with different access and ability to process data. Moreover, Big Data impacts current organizational entities in their ability to reconsider their structure and organization. The complexity of institutions’ performance under the impact of Big Data is further complicated by the change of human behavior, because, arguably, Big Data affects human behavior itself (Schroeder, 2014).

De Mauro et al. (2015) touch on the impact of Big Data on libraries. The reorganization of academic libraries considering Big Data and the handling of Big Data by libraries is in close conjunction with the reorganization of the entire campus and the handling of Big Data by the educational institution. In addition to the disruption posed by the Big Data phenomenon, higher education is facing global changes of economic, technological, social, and educational character. Daniel (2015) uses a chart to illustrate the complexity of these global trends. Parallel to the Big Data developments in America and Asia, the European Union is offering access to an EU open data portal (https://data.europa.eu/euodp/home). Moreover, the Association of European Research Libraries expects under the H2020 program to increase “the digitization of cultural heritage, digital preservation, research data sharing, open access policies and the interoperability of research infrastructures” (Reilly, 2013).

The challenges posed by Big Data to human and social behavior (Schroeder, 2014) are no less significant than the impact of Big Data on learning. Cohen, Dolan, Dunlap, Hellerstein, and Welton (2009) propose a road map for “more conservative organizations” (p. 1492) to overcome their reservations and/or inability to handle Big Data and adopt a practical approach to the complexity of Big Data. Two Chinese researchers assert deep learning as the “set of machine learning techniques that learn multiple levels of representation in deep architectures” (Chen & Lin, 2014, p. 515). Deep learning requires “new ways of thinking and transformative solutions” (Chen & Lin, 2014, p. 523). Another pair of researchers from China present a broad overview of the various societal, business, and administrative applications of Big Data, including a detailed account and definitions of the processes and tools accompanying Big Data analytics (Philip Chen & Zhang, 2014). The American counterparts of these Chinese researchers are of the same opinion when it comes to “think[ing] about the core principles and concepts that underline the techniques, and also the systematic thinking” (Provost & Fawcett, 2013, p. 58). De Mauro, Greco, and Grimaldi (2016), similarly to Provost and Fawcett (2013), draw attention to the urgent necessity to train new types of specialists to work with such data. As early as 2012, Davenport and Patil (2012), as cited in De Mauro et al. (2016), envisioned hybrid specialists able to manage both technological knowledge and academic research. Similarly, Provost and Fawcett (2013) mention the efforts of “academic institutions scrambling to put together programs to train data scientists” (p. 51). Further, Asamoah, Sharda, Hassan Zadeh, and Kalgotra (2017) share a specific plan for the design and delivery of a big data analytics course. At the same time, librarians working with data acknowledge the shortcomings in the profession, since librarians “are practitioners first and generally do not view usability as a primary job responsibility, usually lack the depth of research skills needed to carry out a fully valid” data-based research (Emanuel, 2013, p. 207).

Borgman (2015) devotes an entire book to data and scholarly research and goes beyond the already well-established facts regarding the importance of Big Data, the implications of Big Data, and the technical, societal, and educational impact and complications posed by Big Data. Borgman elucidates the importance of knowledge infrastructure and the necessity to understand the importance and complexity of building such infrastructure in order to be able to take advantage of Big Data. In a similar fashion, a team of Chinese scholars draws attention to the complexity of data mining and Big Data and the necessity to approach the issue in an organized fashion (Wu, Zhu, Wu, & Ding, 2014).

Bruns (2013) shifts the conversation from the “macro” architecture of Big Data, as focused on by Borgman (2015) and Wu et al. (2014), and ponders the influx of and unprecedented opportunities for the humanities in academia with the advent of Big Data. Does the seemingly ubiquitous omnipresence of Big Data mean for the humanities a “railroading” into “scientificity”? How will research and publishing change with the advent of Big Data across academic disciplines?

Reyes (2015) shares her “skinny” approach to Big Data in education. She presents a comprehensive structure for educational institutions to shift “traditional” analytics to “learner-centered” analytics (p. 75) and identifies the participants in the Big Data process in the organization. The model is applicable for library use.

Being a new and uncharted territory, Big Data and Big Data analytics can pose ethical issues. Willis (2013) focuses on Big Data application in education, namely the ethical questions for higher education administrators and the expectations of Big Data analytics to predict students’ success. Daries, Reich, Waldo, Young, and Whittinghill (2014) discuss rather similar issues regarding the balance between data and student privacy regulations. The privacy issues accompanying data are also discussed by Tene and Polonetsky (2013).

Privacy issues are habitually connected to security and surveillance issues. Andrejevic and Gates (2014) point out that in decision making “generated by data mining, the focus is not on particular individuals but on aggregate outcomes” (p. 195). Van Dijck (2014) goes into further detail regarding the perils posed by metadata and data to society, in particular to the privacy of citizens. Bail (2014) addresses the same issue regarding the impact of Big Data on societal issues, but underlines the leading roles of cultural sociologists and their theories for the correct application of Big Data.

Library organizations have been traditional proponents of core democratic values such as protection of privacy and elucidation of related ethical questions (Miltenoff & Hauptman, 2005). In recent books about Big Data and libraries, ethical issues are an important part of the discussion (Weiss, 2018). Library blogs also discuss these issues (Harper & Oltmann, 2017). An academic library’s role is to educate its patrons about those values. Sugimoto et al. (2012) reflect on the need for discussion about Big Data in library and information science. They clearly draw attention to the library “tradition of organizing, managing, retrieving, collecting, describing, and preserving information” (p. 1) as well as library and information science being “a historically interdisciplinary and collaborative field, absorbing the knowledge of multiple domains and bringing the tools, techniques, and theories” (p. 1). Sugimoto et al. (2012) sought a wide discussion among the library profession regarding the implications of Big Data for the profession, no differently from the activities in other fields (e.g., Wixom, Ariyachandra, Douglas, Goul, Gupta, Iyer, Kulkarni, Mooney, Phillips-Wren, & Turetken, 2014). A current Andrew W. Mellon Foundation grant for Visualizing Digital Scholarship in Libraries seeks an opportunity to view “both macro and micro perspectives, multi-user collaboration and real-time data interaction, and a limitless number of visualization possibilities – critical capabilities for rapidly understanding today’s large data sets” (Hwangbo, 2014).

The importance of the library with its traditional roles, as described by Sugimoto et al (2012) may continue, considering the Big Data platform proposed by Wu, Wu, Khabsa, Williams, Chen, Huang, Tuarob, Choudhury, Ororbia, Mitra, & Giles (2014). Such platforms will continue to emerge and be improved, with librarians as the ultimate drivers of such platforms and as the mediators between the patrons and the data generated by such platforms.

Every library needs to find its place in the large organization and in society in regard to this very new and very powerful phenomenon called Big Data. Libraries might not have the trained staff to become a leader in the process of organizing and building the complex mechanism of this new knowledge architecture, but librarians must educate and train themselves to be worthy participants in this new establishment.

 

Method

 

The study will be cleared by the SCSU IRB.
The survey will collect responses from the library population regarding its readiness to use Big Data and its actual use of Big Data. The survey URL will be sent to (academic?) libraries around the world.

Data will be processed through SPSS. Open-ended results will be processed manually. The preliminary research design presupposes a mixed-methods approach.

The study will include the use of closed-ended survey response questions and open-ended questions. The first part of the study (closed-ended, quantitative questions) will be completed online through an online survey. Participants will be asked to complete the survey using a link they receive through e-mail.

Mixed methods research was defined by Johnson and Onwuegbuzie (2004) as “the class of research where the researcher mixes or combines quantitative and qualitative research techniques, methods, approaches, concepts, or language into a single study” (p. 17). Quantitative and qualitative methods can be combined if used to complement each other, because the methods can measure different aspects of the research questions (Sale, Lohfeld, & Brazil, 2002).

 

Sampling design

 

  • Online survey of 10-15 questions, with 3-5 demographic questions and the rest regarding the use of tools.
  • 1-2 open-ended questions at the end of the survey to probe for a follow-up mixed-methods approach (an opportunity for a qualitative study).
  • Data analysis techniques: survey results will be exported to SPSS and analyzed accordingly. The final survey design will determine the appropriate statistical approach. (A minimal sketch follows this list.)
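
The plan above names SPSS; purely as an illustrative alternative, here is a minimal pandas sketch of the analysis step, assuming the survey results are exported to a CSV file with one column per question (the file and column names are hypothetical):

    # Summarize closed-ended survey responses exported to CSV.
    import pandas as pd

    df = pd.read_csv("survey_results.csv")

    # frequency tables for closed-ended questions
    for col in ["library_type", "uses_big_data", "staff_trained"]:
        print(df[col].value_counts(dropna=False), "\n")

    # cross-tabulate use of Big Data against library type
    print(pd.crosstab(df["library_type"], df["uses_big_data"], margins=True))
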

 

Project Schedule

 

Complete literature review and identify areas of interest – two months

Prepare and test instrument (survey) – one month

IRB and other details – one month

Generate a list of potential libraries to distribute survey – one month

Contact libraries. Follow up and contact again, if necessary (low turnaround) – one month

Collect, analyze data – two months

Write out data findings – one month

Complete manuscript – one month

Proofreading and other details – one month

 

Significance of the work 

While it has been widely acknowledged that Big Data (and its handling) is changing higher education (http://blog.stcloudstate.edu/ims?s=big+data) as well as academic libraries (http://blog.stcloudstate.edu/ims/2016/03/29/analytics-in-education/), it remains nebulous how Big Data is handled in the academic library and, respectively, how it is related to the handling of Big Data on campus. Moreover, the visualization of Big Data between units on campus remains in progress, along with any policymaking based on the analysis of such data (hence the need for comprehensive visualization).

 

This research will aim to gain an understanding of: a. how librarians are handling Big Data; b. how they are relating their Big Data output to the campus output of Big Data; and c. how librarians in particular and campus administration in general are tuning their practices based on the analysis.

Based on the survey returns (if there is a statistically significant return), this research might consider juxtaposing the practices from academic libraries, to practices from special libraries (especially corporate libraries), public and school libraries.

 

 

References:

 

Adams Becker, S., Cummins, M., Davis, A., Freeman, A., Giesinger Hall, C., Ananthanarayanan, V., … Wolfson, N. (2017). NMC Horizon Report: 2017 Library Edition.

Andrejevic, M., & Gates, K. (2014). Big Data Surveillance: Introduction. Surveillance & Society, 12(2), 185–196.

Asamoah, D. A., Sharda, R., Hassan Zadeh, A., & Kalgotra, P. (2017). Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course. Decision Sciences Journal of Innovative Education, 15(2), 161–190. https://doi.org/10.1111/dsji.12125

Bail, C. A. (2014). The cultural environment: measuring culture with big data. Theory and Society, 43(3–4), 465–482. https://doi.org/10.1007/s11186-014-9216-5

Borgman, C. L. (2015). Big Data, Little Data, No Data: Scholarship in the Networked World. MIT Press.

Bruns, A. (2013). Faster than the speed of print: Reconciling ‘big data’ social media analysis and academic scholarship. First Monday, 18(10). Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/4879

Bughin, J., Chui, M., & Manyika, J. (2010). Clouds, big data, and smart assets: Ten tech-enabled business trends to watch. McKinsey Quarterly, 56(1), 75–86.

Chen, X. W., & Lin, X. (2014). Big Data Deep Learning: Challenges and Perspectives. IEEE Access, 2, 514–525. https://doi.org/10.1109/ACCESS.2014.2325029

Cohen, J., Dolan, B., Dunlap, M., Hellerstein, J. M., & Welton, C. (2009). MAD Skills: New Analysis Practices for Big Data. Proc. VLDB Endow., 2(2), 1481–1492. https://doi.org/10.14778/1687553.1687576

Daniel, B. (2015). Big Data and analytics in higher education: Opportunities and challenges. British Journal of Educational Technology, 46(5), 904–920. https://doi.org/10.1111/bjet.12230

Daries, J. P., Reich, J., Waldo, J., Young, E. M., Whittinghill, J., Ho, A. D., … Chuang, I. (2014). Privacy, Anonymity, and Big Data in the Social Sciences. Commun. ACM, 57(9), 56–63. https://doi.org/10.1145/2643132

De Mauro, A. D., Greco, M., & Grimaldi, M. (2016). A formal definition of Big Data based on its essential features. Library Review, 65(3), 122–135. https://doi.org/10.1108/LR-06-2015-0061

De Mauro, A., Greco, M., & Grimaldi, M. (2015). What is big data? A consensual definition and a review of key research topics. AIP Conference Proceedings, 1644(1), 97–104. https://doi.org/10.1063/1.4907823

Dumbill, E. (2012). Making Sense of Big Data. Big Data, 1(1), 1–2. https://doi.org/10.1089/big.2012.1503

Eaton, M. (2017). Seeing Library Data: A Prototype Data Visualization Application for Librarians. Publications and Research. Retrieved from http://academicworks.cuny.edu/kb_pubs/115

Emanuel, J. (2013). Usability testing in libraries: methods, limitations, and implications. OCLC Systems & Services: International Digital Library Perspectives, 29(4), 204–217. https://doi.org/10.1108/OCLC-02-2013-0009

Graham, M., & Shelton, T. (2013). Geography and the future of big data, big data and the future of geography. Dialogues in Human Geography, 3(3), 255–261. https://doi.org/10.1177/2043820613513121

Harper, L., & Oltmann, S. (2017, April 2). Big Data’s Impact on Privacy for Librarians and Information Professionals. Retrieved November 7, 2017, from https://www.asist.org/publications/bulletin/aprilmay-2017/big-datas-impact-on-privacy-for-librarians-and-information-professionals/

Hashem, I. A. T., Yaqoob, I., Anuar, N. B., Mokhtar, S., Gani, A., & Ullah Khan, S. (2015). The rise of “big data” on cloud computing: Review and open research issues. Information Systems, 47(Supplement C), 98–115. https://doi.org/10.1016/j.is.2014.07.006

Hwangbo, H. (2014, October 22). The future of collaboration: Large-scale visualization. Retrieved November 7, 2017, from http://usblogs.pwc.com/emerging-technology/the-future-of-collaboration-large-scale-visualization/

Laney, D. (2001, February 6). 3D Data Management: Controlling Data Volume, Velocity, and Variety. META Group Research Note.

Miltenoff, P., & Hauptman, R. (2005). Ethical dilemmas in libraries: an international perspective. The Electronic Library, 23(6), 664–670. https://doi.org/10.1108/02640470510635746

Philip Chen, C. L., & Zhang, C.-Y. (2014). Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Information Sciences, 275(Supplement C), 314–347. https://doi.org/10.1016/j.ins.2014.01.015

Power, D. J. (2014). Using ‘Big Data’ for analytics and decision support. Journal of Decision Systems, 23(2), 222–228. https://doi.org/10.1080/12460125.2014.888848

Provost, F., & Fawcett, T. (2013). Data Science and its Relationship to Big Data and Data-Driven Decision Making. Big Data, 1(1), 51–59. https://doi.org/10.1089/big.2013.1508

Reilly, S. (2013, December 12). What does Horizon 2020 mean for research libraries? Retrieved November 7, 2017, from http://libereurope.eu/blog/2013/12/12/what-does-horizon-2020-mean-for-research-libraries/

Reyes, J. (2015). The skinny on big data in education: Learning analytics simplified. TechTrends: Linking Research & Practice to Improve Learning, 59(2), 75–80. https://doi.org/10.1007/s11528-015-0842-1

Schroeder, R. (2014). Big Data and the brave new world of social media research. Big Data & Society, 1(2), 2053951714563194. https://doi.org/10.1177/2053951714563194

Sugimoto, C. R., Ding, Y., & Thelwall, M. (2012). Library and information science in the big data era: Funding, projects, and future [a panel proposal]. Proceedings of the American Society for Information Science and Technology, 49(1), 1–3. https://doi.org/10.1002/meet.14504901187

Tene, O., & Polonetsky, J. (2012). Big Data for All: Privacy and User Control in the Age of Analytics. Northwestern Journal of Technology and Intellectual Property, 11(5), 239–273.

van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society, 12(2), 197–208.

Waller, M. A., & Fawcett, S. E. (2013). Data Science, Predictive Analytics, and Big Data: A Revolution That Will Transform Supply Chain Design and Management. Journal of Business Logistics, 34(2), 77–84. https://doi.org/10.1111/jbl.12010

Weiss, A. (2018). Big Data Shocks: An Introduction to Big Data for Librarians and Information Professionals. Rowman & Littlefield Publishers. Retrieved from https://rowman.com/ISBN/9781538103227/Big-Data-Shocks-An-Introduction-to-Big-Data-for-Librarians-and-Information-Professionals

West, D. M. (2012). Big data for education: Data mining, data analytics, and web dashboards. Governance Studies at Brookings, 4, 1–10.

Willis, J. (2013). Ethics, Big Data, and Analytics: A Model for Application. Educause Review Online. Retrieved from https://docs.lib.purdue.edu/idcpubs/1

Wixom, B., Ariyachandra, T., Douglas, D. E., Goul, M., Gupta, B., Iyer, L. S., … Turetken, O. (2014). The current state of business intelligence in academia: The arrival of big data. Communications of the Association for Information Systems, 34, 1.

Wu, X., Zhu, X., Wu, G. Q., & Ding, W. (2014). Data mining with big data. IEEE Transactions on Knowledge and Data Engineering, 26(1), 97–107. https://doi.org/10.1109/TKDE.2013.109

Wu, Z., Wu, J., Khabsa, M., Williams, K., Chen, H. H., Huang, W., … Giles, C. L. (2014). Towards building a scholarly big data platform: Challenges, lessons and opportunities. In IEEE/ACM Joint Conference on Digital Libraries (pp. 117–126). https://doi.org/10.1109/JCDL.2014.6970157


+++++++++++++++++
more on big data

Key Issues in Teaching and Learning Survey

The EDUCAUSE Learning Initiative has just launched its 2018 Key Issues in Teaching and Learning Survey, so vote today: http://www.tinyurl.com/ki2018.

Each year, the ELI surveys the teaching and learning community in order to discover the key issues and themes in teaching and learning. These top issues provide the thematic foundation for all of our conversations, courses, and publications for the coming year. Longitudinally, they also provide a way to track the evolving discourse in the teaching and learning space. More information about this annual survey can be found at https://www.educause.edu/eli/initiatives/key-issues-in-teaching-and-learning.

ACADEMIC TRANSFORMATION (Holistic models supporting student success, leadership competencies for academic transformation, partnerships and collaborations across campus, IT transformation, academic transformation that is broad, strategic, and institutional in scope)

ACCESSIBILITY AND UNIVERSAL DESIGN FOR LEARNING (Supporting and educating the academic community in effective practice; intersections with instructional delivery modes; compliance issues)

ADAPTIVE TEACHING AND LEARNING (Digital courseware; adaptive technology; implications for course design and the instructor’s role; adaptive approaches that are not technology-based; integration with LMS; use of data to improve learner outcomes)

COMPETENCY-BASED EDUCATION AND NEW METHODS FOR THE ASSESSMENT OF STUDENT LEARNING (Developing collaborative cultures of assessment that bring together faculty, instructional designers, accreditation coordinators, and technical support personnel; real-world experience credit)

DIGITAL AND INFORMATION LITERACIES (Student and faculty literacies; research skills; data discovery, management, and analysis skills; information visualization skills; partnerships for literacy programs; evaluation of student digital competencies; information evaluation)

EVALUATING TECHNOLOGY-BASED INSTRUCTIONAL INNOVATIONS (Tools and methods to gather data; data analysis techniques; qualitative vs. quantitative data; evaluation project design; using findings to change curricular practice; scholarship of teaching and learning; articulating results to stakeholders; just-in-time evaluation of innovations). Here is my bibliographical overview on Big Data (scroll down to “Research literature”): http://blog.stcloudstate.edu/ims/2017/11/07/irdl-proposal/

EVOLUTION OF THE TEACHING AND LEARNING SUPPORT PROFESSION (Professional skills for T&L support; increasing emphasis on instructional design; delineating the skills, knowledge, business acumen, and political savvy for success; role of inter-institutional communities of practices and consortia; career-oriented professional development planning)

FACULTY DEVELOPMENT (Incentivizing faculty innovation; new roles for faculty and those who support them; evidence of impact on student learning/engagement of faculty development programs; faculty development intersections with learning analytics; engagement with student success)

GAMIFICATION OF LEARNING (Gamification designs for course activities; adaptive approaches to gamification; alternate reality games; simulations; technological implementation options for faculty)

INSTRUCTIONAL DESIGN (Skills and competencies for designers; integration of technology into the profession; role of data in design; evolution of the design profession (here previous blog postings on this issue: http://blog.stcloudstate.edu/ims/2017/10/04/instructional-design-3/); effective leadership and collaboration with faculty)

INTEGRATED PLANNING AND ADVISING FOR STUDENT SUCCESS (Change management and campus leadership; collaboration across units; integration of technology systems and data; dashboard design; data visualization (here previous blog postings on this issue: http://blog.stcloudstate.edu/ims?s=data+visualization); counseling and coaching advising transformation; student success analytics)

LEARNING ANALYTICS (Leveraging open data standards; privacy and ethics; both faculty- and student-facing reports; implementing learning analytics to transform other services; course design implications)

LEARNING SPACE DESIGNS (Makerspaces; funding; faculty development; learning designs across disciplines; supporting integrated campus planning; ROI; accessibility/UDL; rating of classroom designs)

MICRO-CREDENTIALING AND DIGITAL BADGING (Design of badging hierarchies; stackable credentials; certificates; role of open standards; ways to publish digital badges; approaches to meta-data; implications for the transcript; personalized learning transcripts and blockchain technology (here previous blog postings on this issue: http://blog.stcloudstate.edu/ims?s=blockchain))

MOBILE LEARNING (Curricular use of mobile devices (here previous blog postings on this issue: http://blog.stcloudstate.edu/ims/2015/09/25/mc218-remodel/); innovative curricular apps; approaches to use in the classroom; technology integration into learning spaces; BYOD issues and opportunities)

MULTI-DIMENSIONAL TECHNOLOGIES (Virtual, augmented, mixed, and immersive reality; video walls; integration with learning spaces; scalability, affordability, and accessibility; use of mobile devices; multi-dimensional printing and artifact creation)

NEXT-GENERATION DIGITAL LEARNING ENVIRONMENTS AND LMS SERVICES (Open standards; learning environment architectures (here previous blog postings on this issue: http://blog.stcloudstate.edu/ims/2017/03/28/digital-learning/); social learning environments; customization and personalization; OER integration; intersections with learning modalities such as adaptive, online, etc.; LMS evaluation, integration and support)

ONLINE AND BLENDED TEACHING AND LEARNING (Flipped course models; leveraging MOOCs in online learning; course development models; intersections with analytics; humanization of online courses; student engagement)

OPEN EDUCATION (Resources, textbooks, content; quality and editorial issues; faculty development; intersections with student success/access; analytics; licensing; affordability; business models; accessibility and sustainability)

PRIVACY AND SECURITY (Formulation of policies on privacy and data protection; increased sharing of data via open standards for internal and external purposes; increased use of cloud-based and third party options; education of faculty, students, and administrators)

WORKING WITH EMERGING LEARNING TECHNOLOGY (Scalability and diffusion; effective piloting practices; investments; faculty development; funding; evaluation methods and rubrics; interoperability; data-driven decision-making)

+++++++++++
learning and teaching in this IMS blog
http://blog.stcloudstate.edu/ims?s=teaching+and+learning

Reproducibility Librarian

Reproducibility Librarian? Yes, That Should Be Your Next Job

https://www.jove.com/blog/2017/10/27/reproducibility-librarian-yes-that-should-be-your-next-job/
Vicky Steeves (@VickySteeves) is the first Research Data Management and Reproducibility Librarian.
Reproducibility is made much more challenging by computers, and by the dominance of the closed-source operating systems and analysis software researchers use. Ben Marwick wrote a great piece called ‘How computers broke science – and what we can do to fix it’ which lays out part of the problem. Basically, computational environments affect the outcome of analyses (Gronenschild et al. (2012) showed that the same data and analyses gave different results between two versions of macOS), and those environments are exceptionally hard to reproduce, especially when license terms don’t allow it. Additionally, programs encode data incorrectly and studies draw erroneous conclusions as a result; for example, Microsoft Excel encodes gene names as dates, which affects about one-fifth of published data in leading genome journals.
Technologies to capture computational environments, workflow, provenance, data, and code are hugely impactful for reproducibility. This has been the focus of my work, in supporting an open source tool called ReproZip, which packages all computational dependencies, data, and applications in a single distributable package that others can reproduce across different systems. Other tools fix parts of this problem: Kepler and VisTrails for workflow/provenance; Packrat for saving the specific R packages in use when a script is run, so that updates to dependencies won’t break it; Pex for generating executable Python environments; and o2r for executable papers (including data, text, and code in one).
ReproZip also offers a plugin for Jupyter notebooks, and a user interface that makes it friendlier to folks not comfortable on the command line.
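My note: a minimal sketch of what the ReproZip workflow looks like, driven from Python via subprocess. It assumes the standard reprozip/reprounzip command-line tools described in the ReproZip documentation; analysis.py is a placeholder for your own script.

import subprocess

# Trace a run of the analysis so ReproZip records files, dependencies, and environment.
subprocess.run(["reprozip", "trace", "python", "analysis.py"], check=True)  # analysis.py is hypothetical

# Pack everything the trace captured into one distributable .rpz bundle.
subprocess.run(["reprozip", "pack", "analysis.rpz"], check=True)

# On another machine: unpack the bundle into a Docker environment and rerun it.
subprocess.run(["reprounzip", "docker", "setup", "analysis.rpz", "analysis_dir"], check=True)
subprocess.run(["reprounzip", "docker", "run", "analysis_dir"], check=True)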

I would also recommend going to conferences:

++++++++++++++++++++++++
more on big data in an academic library in this IMS blog
academic library collection data visualization

http://blog.stcloudstate.edu/ims/2017/10/26/software-carpentry-workshop/

http://blog.stcloudstate.edu/ims?s=data+library

more on library positions in this IMS blog:
http://blog.stcloudstate.edu/ims?s=big+data+library
http://blog.stcloudstate.edu/ims/2016/06/14/technology-requirements-samples/

on university library future:
http://blog.stcloudstate.edu/ims/2014/12/10/unviersity-library-future/

librarian versus information specialist


code4lib 2018

Code4Lib, February 2018

http://2018.code4lib.org/

2018 Preconference Voting

10. The Virtualized Library: A Librarian’s Introduction to Docker and Virtual Machines
This session will introduce two major types of virtualization, virtual machines using tools like VirtualBox and Vagrant, and containers using Docker. The relative strengths and drawbacks of the two approaches will be discussed along with plenty of hands-on time. Though geared towards integrating these tools into a development workflow, the workshop should be useful for anyone interested in creating stable and reproducible computing environments, and examples will focus on library-specific tools like Archivematica and EZPaarse. With virtualization taking a lot of the pain out of installing and distributing software, alleviating many cross-platform issues, and becoming increasingly common in library and industry practices, now is a great time to get your feet wet.

(One three-hour session)
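My note: to give a flavor of the container half of this workshop, here is a minimal sketch using the Docker SDK for Python (pip install docker) against a local Docker daemon; the image and command are illustrative stand-ins for library tools like Archivematica or EZPaarse.

import docker

client = docker.from_env()  # connects to the local Docker daemon

# Run a short-lived container and capture its output; a real workflow would
# substitute an image for Archivematica, EZPaarse, or another library tool.
output = client.containers.run("ubuntu:20.04", "echo hello from a container", remove=True)
print(output.decode())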

11. Digital Empathy: Creating Safe Spaces Online
User research is often focused on measures of the usability of online spaces. We look at search traffic, run card sorting and usability testing activities, and track how users navigate our spaces. Those results inform design decisions through the lens of information architecture. This is important, but doesn’t encompass everything a user needs in a space.

This workshop will focus on the other component of user experience design and user research: how to create spaces where users feel safe. Users bring their anxieties and stressors with them to our online spaces, but informed design choices can help to ameliorate that stress. This will ultimately lead to a more positive interaction between your institution and your users.

The presenters will discuss the theory behind empathetic design, delve deeply into using ethnographic research methods – including an opportunity for attendees to practice those ethnographic skills with student participants – and finish with the practical application of these results to ongoing and future projects.

(One three-hour session)

14. ARIA Basics: Making Your Web Content Sing Accessibility

https://dequeuniversity.com/assets/html/jquery-summit/html5/slides/landmarks.html
Are you a web developer, or do you create web content? Do you add dynamic elements to your pages? If so, you should be concerned with making those dynamic elements accessible and usable to as many people as possible. One of the most powerful tools currently available for making web pages accessible is ARIA, the Accessible Rich Internet Applications specification. This workshop will teach you the basics of leveraging the full power of ARIA to make great accessible web pages. Through several hands-on exercises, participants will come to understand the purpose and power of ARIA and how to apply it to a variety of different dynamic web elements. Topics will include semantic HTML, ARIA landmarks and roles, expanding/collapsing content, and modal dialogs. Participants will also be taught some basic use of the screen reader NVDA for accessibility testing. Finally, the lessons will also emphasize learning how to keep on learning as HTML, JavaScript, and ARIA continue to evolve and expand.

Participants will need a basic background in HTML, CSS, and some JavaScript.

(One three-hour session)

18. Learning and Teaching Tech
Tech workshops pose two unique problems: finding skilled instructors for that content, and instructing that content well. Library hosted workshops are often a primary educational resource for solo learners, and many librarians utilize these workshops as a primary outreach platform. Tackling these two issues together often makes the most sense for our limited resources. Whether a programming language or software tool, learning tech to teach tech can be one of the best motivations for learning that tech skill or tool, but equally important is to learn how to teach and present tech well.

This hands-on workshop will guide participants through developing their own learning plan, reviewing essential pedagogy for teaching tech, and crafting a workshop of their choice. Each participant will leave with an actionable learning schedule, a prioritized list of resources to investigate, and an outline of a workshop they would like to teach.

(Two three-hour sessions)

23. Introduction to Omeka S
Omeka S represents a complete rewrite of Omeka Classic (aka the Omeka 2.x series), adhering to our fundamental principles of encouraging use of metadata standards, easy web publishing, and sharing cultural history. New objectives in Omeka S include multisite functionality and increased interaction with other systems. This workshop will compare and contrast Omeka S with Omeka Classic to highlight our emphasis on 1) modern metadata standards, 2) interoperability with other systems including Linked Open Data, 3) use of modern web standards, and 4) web publishing to meet the goals of medium- to large-sized institutions.

In this workshop we will walk through Omeka S Item creation, with emphasis on LoD principles. We will also look at the features of Omeka S that ease metadata input and facilitate project-defined usage and workflows. In accordance with our commitment to interoperability, we will describe how the API for Omeka S can be deployed for data exchange and sharing between many systems. We will also describe how Omeka S promotes multiple site creation from one installation, in the interest of easy publishing with many objects in many contexts, and simplifying the work of IT departments.

(One three-hour session)
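My note: a minimal sketch of talking to the Omeka S REST API mentioned above, assuming the standard /api/items endpoint and key_identity/key_credential authentication from the Omeka S documentation; the base URL and keys are placeholders.

import requests

BASE = "https://example.org/omeka-s"  # placeholder installation URL
params = {
    "key_identity": "YOUR_KEY_IDENTITY",      # placeholder API credentials
    "key_credential": "YOUR_KEY_CREDENTIAL",
    "per_page": 10,
}

resp = requests.get(f"{BASE}/api/items", params=params, timeout=30)
resp.raise_for_status()

# Items come back as JSON-LD; dcterms:title holds the Dublin Core title values.
for item in resp.json():
    titles = item.get("dcterms:title", [])
    print(item["o:id"], titles[0]["@value"] if titles else "(untitled)")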

24. Getting started with static website generators
Have you been curious about static website generators? Have you been wondering who Jekyll and Hugo are? Then this workshop is for you.

My note: https://opensource.com/article/17/5/hugo-vs-jekyll

But this article isn’t about setting up a domain name and hosting for your website. It’s for the step after that, the actual making of that site. The typical choice for a lot of people would be to use something like WordPress. It’s a one-click install on most hosting providers, and there’s a gigantic market of plugins and themes available to choose from, depending on the type of site you’re trying to build. But not only is WordPress a bit overkill for most websites, it also gives you a dynamically generated site with a lot of moving parts. If you don’t keep all of those pieces up to date, they can pose a significant security risk and your site could get hijacked.

The alternative would be to have a static website, with nothing dynamically generated on the server side. Just good old HTML and CSS (and perhaps a bit of Javascript for flair). The downside to that option has been that you’ve been relegated to coding the whole thing by hand yourself. It’s doable, but you just want a place to share your work. You shouldn’t have to know all the idiosyncrasies of low-level web design (and the monumental headache of cross-browser compatibility) to do that.

Static website generators are tools used to build a website made up only of HTML, CSS, and JavaScript. Static websites, unlike dynamic sites built with tools like Drupal or WordPress, do not use databases or server-side scripting languages. Static websites have a number of benefits over dynamic sites, including reduced security vulnerabilities, simpler long-term maintenance, and easier preservation.

In this hands-on workshop, we’ll start by exploring static website generators, their components, some of the different options available, and their benefits and disadvantages. Then, we’ll work on making our own sites, and for those that would like to, get them online with GitHub pages. Familiarity with HTML, git, and command line basics will be helpful but are not required.

(One three-hour session)
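My note: to make concrete what a static site generator actually does, here is a toy sketch in Python. Real generators like Jekyll and Hugo add templating, themes, and asset pipelines, but the core loop is this: read source files in, write plain HTML out. The content/ and site/ folder names are assumptions.

import html
from pathlib import Path

TEMPLATE = (
    "<!doctype html><html><head><title>{title}</title></head>"
    "<body><h1>{title}</h1><pre>{body}</pre></body></html>"
)

src, out = Path("content"), Path("site")  # assumed input/output folders
out.mkdir(exist_ok=True)

for page in src.glob("*.md"):
    text = page.read_text(encoding="utf-8")
    title, _, body = text.partition("\n")  # first line = title, rest = body
    (out / f"{page.stem}.html").write_text(
        TEMPLATE.format(title=html.escape(title), body=html.escape(body)),
        encoding="utf-8",
    )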

26. Using Digital Media for Research and Instruction
To use digital media effectively in both research and instruction, you need to go beyond just the playback of media files. You need to be able to stream the media, divide that stream into different segments, provide descriptive analysis of each segment, order, re-order and compare different segments from the same or different streams and create web sites that can show the result of your analysis. In this workshop, we will use Omeka and several plugins for working with digital media, to show the potential of video streaming, segmentation and descriptive analysis for research and instruction.

(One three-hour session)

28. Spark in the Dark 101 https://zeppelin.apache.org/
This is an introductory session on Apache Spark, a framework for large-scale data processing (https://spark.apache.org/). We will introduce high-level concepts around Spark, including how Spark execution works and its relationship to other technologies for working with Big Data. Following this introduction to the theory and background, we will walk workshop participants through hands-on usage of spark-shell, Zeppelin notebooks, and Spark SQL for processing library data. The workshop will wrap up with use cases and demos for leveraging Spark within cultural heritage institutions and information organizations, connecting the building blocks learned to current projects in the real world.

(One three-hour session)
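My note: a minimal PySpark sketch in the spirit of this session, loading a hypothetical circulation.csv (columns branch, checkouts) and running a Spark SQL aggregation over it.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("library-data-demo").getOrCreate()

# circulation.csv is a placeholder for whatever library data you have on hand.
df = spark.read.csv("circulation.csv", header=True, inferSchema=True)
df.createOrReplaceTempView("circulation")

# Spark SQL: total checkouts per branch, largest first.
spark.sql(
    "SELECT branch, SUM(checkouts) AS total "
    "FROM circulation GROUP BY branch ORDER BY total DESC"
).show()

spark.stop()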

29. Introduction to Spotlight https://github.com/projectblacklight/spotlight
http://www.spotlighttechnology.com/4-OpenSource.htm
Spotlight is an open source application that extends the digital library ecosystem by providing a means for institutions to reuse digital content in easy-to-produce, attractive, and scholarly-oriented websites. Librarians, curators, and other content experts can build Spotlight exhibits to showcase digital collections using a self-service workflow for selection, arrangement, curation, and presentation.

This workshop will introduce the main features of Spotlight and present examples of Spotlight-built exhibits from the community of adopters. We’ll also describe the technical requirements for adopting Spotlight and highlight the potential for adopters to customize and extend Spotlight’s capabilities for their own needs while contributing to its growth as an open source project.

(One three-hour session)

31. Getting Started Visualizing your IoT Data in Tableau https://www.tableau.com/
The Internet of Things is a rising trend in library research. IoT sensors can be used for space assessment, service design, and environmental monitoring. IoT tools create lots of data that can be overwhelming and hard to interpret. Tableau Public (https://public.tableau.com/en-us/s/) is a data visualization tool that allows you to explore this information quickly and intuitively to find new insights.

This full-day workshop will teach you the basics of building your own IoT sensor using a Raspberry Pi (https://www.raspberrypi.org/) in order to gather, manipulate, and visualize your data.

All are welcome, but some familiarity with Python is recommended.

(Two three-hour sessions)
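My note: a minimal sketch of the data-gathering step, assuming a Raspberry Pi where the CPU temperature at /sys/class/thermal/thermal_zone0/temp stands in for whatever sensor you attach; the resulting CSV can be opened directly in Tableau Public.

import csv
import time
from datetime import datetime

def read_temp_c():
    # The Pi exposes CPU temperature in millidegrees Celsius at this path.
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0

with open("sensor_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "temperature_c"])
    for _ in range(60):  # one reading per minute for an hour
        writer.writerow([datetime.now().isoformat(), read_temp_c()])
        f.flush()
        time.sleep(60)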

32. Enabling Social Media Research and Archiving
Social media data represents a tremendous opportunity for memory institutions of all kinds, be they large academic research libraries or small community archives. Researchers from a broad swath of disciplines have a great deal of interest in working with social media content, but they often lack access to datasets or the technical skills needed to create them. Further, it is clear that social media is already a crucial part of the historical record in areas ranging from events in your local community to national elections. But attempts to build archives of social media data are largely nascent. This workshop will be both an introduction to collecting data from the APIs of social media platforms, as well as a discussion of the roles of libraries and archives in that collecting.

Assuming no prior experience, the workshop will begin with an explanation of how APIs operate. We will then focus specifically on the Twitter API, as Twitter is of significant interest to researchers and hosts an important segment of discourse. Through a combination of hands-on and demos, we will gain experience with a number of tools that support collecting social media data (e.g., Twarc, Social Feed Manager, DocNow, Twurl, and TAGS), as well as tools that enable sharing social media datasets (e.g., Hydrator, TweetSets, and the Tweet ID Catalog).

The workshop will then turn to a discussion of how to build a successful program enabling social media collecting at your institution. This might cover a variety of topics including outreach to campus researchers, collection development strategies, the relationship between social media archiving and web archiving, and how to get involved with the social media archiving community. This discussion will be framed by a focus on ethical considerations of social media data, including privacy and responsible data sharing.

Time permitting, we will provide a sampling of some approaches to social media data analysis, including Twarc Utils and Jupyter Notebooks.

(One three-hour session)
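My note: a minimal sketch of collecting tweets with twarc, one of the tools named above, assuming twarc 1.x and your own Twitter API credentials; the keys and the search term are placeholders. Writing out only the tweet IDs mirrors the responsible-sharing practice the workshop describes, since ID datasets can be re-"hydrated" later.

from twarc import Twarc

# Placeholder credentials; obtain real ones from the Twitter developer portal.
t = Twarc("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

# search() yields tweets as JSON dicts; save just the IDs for later sharing.
with open("tweet_ids.txt", "w") as f:
    for tweet in t.search("#code4lib"):
        f.write(tweet["id_str"] + "\n")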
