The term “digital humanities” can refer to research and instruction that is about information technology or that uses IT. By applying technologies in new ways, the tools and methodologies of digital humanities open new avenues of inquiry and scholarly production. Digital humanities applies computational capabilities to humanistic questions, offering new pathways for scholars to conduct research and to create and publish scholarship. Digital humanities provides promising new channels for learners and will continue to influence the ways in which we think about and evolve technology toward better and more humanistic ends.
As defined by Johanna Drucker and colleagues at UCLA, the digital humanities is “work at the intersection of digital technology and humanities disciplines.” An EDUCAUSE/CNI working group framed the digital humanities as “the application and/or development of digital tools and resources to enable researchers to address questions and perform new types of analyses in the humanities disciplines,” and the NEH Office of Digital Humanities supports projects that “explore how to harness new technology for humanities research as well as those that study digital culture from a humanistic perspective.” Beyond blending the digital with the humanities, there is an intentionality about combining the two that defines the field.
Digital humanities can include:
creating digital texts or data sets;
cleaning, organizing, and tagging those data sets;
applying computer-based methodologies to analyze them;
and making claims and creating visualizations that explain new findings from those analyses.
Scholars might reflect on
how the digital form of the data is organized,
how analysis is conducted/reproduced, and
how claims visualized in digital form may embody assumptions or biases.
Digital humanities can enrich pedagogy as well, such as when a student uses visualized data to study voter patterns or conducts data-driven analyses of works of literature.
Digital humanities usually involves work by teams in collaborative spaces or centers. Team members might include
researchers and faculty from multiple disciplines,
graduate students,
librarians,
instructional technologists,
data scientists and preservation experts,
technologists with expertise in critical computing and computing methods, and undergraduates.
Although some disciplinary associations, including the Modern Language Association and the American Historical Association, have developed guidelines for evaluating digital projects, many institutions have yet to define how work in digital humanities fits into considerations for tenure and promotion.
Because large projects are often developed with external funding that is not readily replaced by institutional funds when the grant ends, sustainability is a concern. Doing digital humanities well requires access to expertise in methodologies and tools such as GIS, modeling, programming, and data visualization, which can be expensive for a single institution to obtain.
Resistance to learning new technologies can be another roadblock, as can the propensity of many humanists to resist working in teams. While some institutions have recognized the need for institutional infrastructure (computation and storage, equipment, software, and expertise), many have not yet incorporated such support into ongoing budgets.
Digital humanities creates opportunities for undergraduate involvement in research, providing students with workplace skills such as data management, visualization, coding, and modeling. It offers new insights into policy-making in areas such as social media and demographics, and new means of engaging with popular culture and understanding past cultures. Evolution in this area will continue to build connections between the humanities and other disciplines, cross-pollinating research and education in areas like medicine and environmental studies. Insights about digital humanities itself will drive innovation in pedagogy and expand our conceptualization of classrooms and labs.
Please find below additional materials, which might help you organize your thoughts and expedite your Chapter 2 writing.
Do you agree with (did you use) the following observations:
The purpose of the review of the literature is to prove that no one has studied the gap in the knowledge outlined in Chapter 1. The subjects in the Review of Literature should have been introduced in the Background of the Problem in Chapter 1. Chapter 2 is not a textbook of subject matter loosely related to the subject of the study. Every research study that is mentioned should in some way bear upon the gap in the knowledge, and each study that is mentioned should end with the comment that the study did not collect data about the specific gap in the knowledge of the study as outlined in Chapter 1.
The review should be laid out in major sections introduced by organizational generalizations. An organizational generalization can be a subheading so long as the last sentence of the previous section introduces the reader to what the next section will contain. The purpose of this chapter is to cite major conclusions, findings, and methodological issues related to the gap in the knowledge from Chapter 1. It is written for knowledgeable peers from easily retrievable sources of the most recent issue possible.
Empirical literature published within the previous 5 years or less is reviewed to prove no mention of the specific gap in the knowledge that is the subject of the dissertation is in the body of knowledge. Common sense should prevail. Often, to provide a history of the research, it is necessary to cite studies older than 5 years. The object is to acquaint the reader with existing studies relative to the gap in the knowledge and describe who has done the work, when and where the research was completed, and what approaches were used for the methodology, instrumentation, statistical analyses, or all of these subjects.
If very little literature exists, the wise student will write, in effect, a several-paragraph book report by citing the purpose of the study, the methodology, the findings, and the conclusions. If there is an abundance of studies, cite only the most recent studies. Firmly establish the need for the study. Defend the methods and procedures by pointing out other relevant studies that implemented similar methodologies. It should be frequently pointed out to the reader why a particular study did not match the exact purpose of the dissertation.
The Review of Literature ends with a Conclusion that clearly states that, based on the review of the literature, the gap in the knowledge that is the subject of the study has not been studied. Remember that a “summary” is different from a “conclusion.” A Summary, the final main section, introduces the next chapter.
When conducting qualitative research, how many people should be interviewed? Is there a minimum or a maximum?
Here is my take on it:
Simple question, not so simple answer.
It depends.
Generally, the number of respondents depends on the type of qualitative inquiry: case study methodology, phenomenological study, ethnographic study, or ethnomethodology. However, a rule of thumb is for scholars to reach the saturation point, that is, the point at which no fresh information is uncovered in response to an issue that is of interest to the researcher.
If your qualitative method is designed to meet rigor and trustworthiness, thick, rich data are important. To achieve these principles you would need at least 12 interviews, ensuring your participants are the holders of knowledge in the area you intend to investigate. In grounded theory you could start with 12 and interview more if your data are not rich enough.
In IPA (interpretative phenomenological analysis) the norm tends to be six interviews.
You may check the sample sizes in peer-reviewed qualitative publications in your field to find out about popular practice. It all depends on the research problem, the choice of specific qualitative approach, and the theoretical framework, so the answer to your question will vary from a few to a few dozen.
How many interviews are needed in a qualitative research?
There are different views in the literature and no agreed-upon exact number. Here I review some of the most frequently cited references. Based on Creswell (2014), it is estimated that 16 participants will provide rich and detailed data. A couple of researchers agree that 10–15 in-depth interviews are sufficient (Guest, Bunce & Johnson 2006; Baker & Edwards 2012).
Your methodological choices need to reflect your ontological position and understanding of knowledge production, and that’s also where you can argue a strong case for smaller qualitative studies, as you say. This is not only a problem for certain subjects; I think it’s a problem in certain departments or journals across the board of social science research, as it’s a question of academic culture.
Here is more serious literature and research (in case you need to cite it in Chapter 3):
Sample Size and Saturation in PhD Studies Using Qualitative Interviews
Gaskell, George (2000). “Individual and Group Interviewing.” In Martin W. Bauer & George Gaskell (Eds.), Qualitative Researching With Text, Image and Sound: A Practical Handbook (pp. 38-56). London: SAGE Publications.
Savolainen, Jukka (1994). “The Rationality of Drawing Big Conclusions Based on Small Samples.” Social Forces 72: 1217-1224. (http://www.jstor.org/pss/2580299)
Small, M. (2009). “How Many Cases Do I Need? On Science and the Logic of Case Selection in Field-Based Research.” Ethnography 10(1): 5-38.
Williams, M. (2000). “Interpretivism and Generalisation.” Sociology 34(2): 209-224.
There you have several documents from the Graduate School and from me to start building your understanding and vocabulary regarding your quantitative, qualitative, or mixed-method research.
It has been agreed that before you go to the Statistical Center (Randy Kolb), it is wise to be prepared and to understand the terminology as well as the basics of the research methods.
Please find an additional list of materials available through the SCSU library and the Internet. They can help you further with building a robust foundation for your research:
Books on introductory statistical modeling are available at the library. I understand the major pain that borrowing books from the SCSU library can constitute, but you can use the titles and the authors and see if you can borrow the books from your local public library.
I also sought and shared with you “visual” explanations of the basic terms and concepts. Once you start looking at those, you should be able to research further (e.g., on YouTube) and find suitable sources for your learning style.
I (and the future cohorts) will deeply appreciate it if you remember to share those “suitable sources for your learning style,” either in this Google Group thread and/or in the comments section of the blog entry: https://blog.stcloudstate.edu/ims/2017/07/10/intro-to-stat-modeling. Your Facebook group page is also a good place to discuss among ourselves best practices for learning and using research methods for your Chapter 3.
Watching the video, you may remember the same #BooleanSearch techniques from our BI (bibliographic instruction) session of last semester.
Considering the preponderance of information in 2017, your Chapter 2 is NOT ONLY about finding information regarding your topic.
Your Chapter 2 is about proving your extensive research of the existing literature.
The techniques presented in the short video will arm you with methods to dig deeper and look further.
If you would like to do a decent job exploring all corners of the vast area called the Internet, please consider other search engines similar to Google Scholar:
Slavenka described what the East Germans called die Qual der Wahl (the agony of choice).
When communism fell, Poland had Solidarity and Lech Walesa, Czechoslovakia had Václav Havel, Hungary had Fidesz, Bulgaria had Zhelyu Zhelev—and Yugoslavia had no democratic opposition at all. My note: little did she know about the Bulgarian opposition.
A few years before the breakup of Yugoslavia, the political landscape was already filled with communists-turned-nationalists (like Slobodan Milosevic and Franjo Tudjman). Nationalism became the only political “alternative” in Yugoslavia, leading us directly to the wars in Croatia, Bosnia, and Kosovo.
Yes, my generation lived too well, and obviously we mistook freedom and democracy for the freedom of shopping in the West. And as in a medieval morality play, we had to pay for that in the three wars to follow: our children fought those wars; they were killed, and their limbs were severed.
Chun, L. (2017). Discipline and power: knowledge of China in political science. Critical Asian Studies, 49(4), 501-522. doi:10.1080/14672715.2017.1362321
p. 501 – is political science “softer” than the other soft social sciences?
thus… political science “may never live up to its lofty ambition of scientific explanation and prediction. Indeed, like other social sciences, it can be no more than a ‘science in formation’ permanently seeking to surmount obstacles to objectivity.”
p. 502 disciplinary parochialism
The fetishes of pure observation, raw experience, unambiguous rationality, and one-way causality were formative influences in the genesis of the social sciences. The “unfortunate positivism” of such impulses, along with the illusion of a value-free science, converged to produce a behavioral revolution in the interwar period. Behaviorism was then followed, through an epistemological twist, by boldly optimistic leaps to an “end of ideology” and ultimately to a claimed “end of history” itself.
p. 503
Early positivism was openly underpinned by a European condescension toward Asians’ “ignorance and prejudice.” Behind similar depictions lay a comprehensive Eurocentric social and political philosophy.
This is illustrated by the discipline’s view of China through the grand narrative of modernization.
p. 504
Robert McNamara famously reiterated that if World War I was a chemist’s war and World War II a physicist’s, Vietnam “might well have to be considered the social scientists’ war.”
Although China nominally remains a communist state, it has doubtlessly changed color without a color revolution.
p. 505
In the fixed disciplinary eye, “China” is too specific to produce anything generalizable beyond descriptive and self-contained narratives. The area studies approach, in contrast to disciplinary approaches, is all about cultural, historical, and ethnographic specificities.
If first-hand information contradicts theoretical conclusions, redress is sought only at the former end (my note – ha ha ha, such an elegant but scathing criticism of [Western] academia).
The catch [is] that Chinese otherness is in essence not a matter of cultural difference (hence limitations of criticizing Eurocentrism and Orientalism) and does not merely reproduce itself by inertia.
Given a long omitted self-critical rethinking of the discipline’s parochial base, calling for cross-fertilizing alone would be fruitless or even lead only to a one-way colonization of seemingly particularistic histories by an illusive universal science.
p. 506
political culture, once a key concept of political science’s hope for unified theorization, has turned out to be no answer
Long after its heyday, modernization theory – now with its new face of globalization – remains a primary signifier and legitimating benchmark. To those who use it to gauge developments since 1945, private property and liberal democracy are permanent, unquestioned norms that are to be globally homogenized.
Moreover, since modernity is assumed to be a liberal capitalist condition, the revolutionary nationalism of an oppressed people remaking itself into a new historical subject noncompliant with capitalism cannot be modernizational.
p. 507
Political scientists and historical sociologists… saw the communists in power as formidable modernizers, but distinguished the Maoist model from the Stalinist in economic management and campaign politics.
Their analyses showed how organic connections between top-down mobilization and bottom-up participation cultivated an active citizenry and high-intensity politics. My note: I disagree here with the author, since such a statement can be arbitrary from a historical point of view; indeed, for a short period of time, such an “organic connection” can produce positive results, but once calcified (as it has been in China for the past six to seven decades), it turns stagnant.
p. 510
the state’s altered support base is essentially a matter of class power, involving both adaptive cultivation of new economic elites and iron-fist approaches to protest and dissent. By the same weight of historical logic, the party’s internal decay, loss of its founding ideological vision and commitment, and collusion with capital will do more than any outside force ever could do to destroy the regime.
That the Party stays in power is not primarily because the country’s economy continues to grow, but is more attributable to a residual social reliance on its credentials and organizational capacities accumulated in earlier revolutionary and socialist struggles. This historical promise has so far worked to the extent that cracks within the leadership are more or less held in check, resentment against local wrongs are insulated from central intentions, and social policies in one way or another respond to common outcries, consultative deliberations, and pressure groups.
p. 511
The word “madness” has indeed been freely employed to describe nations and societies judged inept at modern reason, as found in contemporary academic publications on episodes of PRC history. My note: I agree with this. The deconstructionists (Jacques Derrida, Tzvetan Todorov) linguistically prove the inability of Western cultures to understand and explain other cultures. In this case, Lin Chun is right: the inability of Western political scientists to comprehend complex foreign societal problems, and their juxtaposing of those problems against their own “schemes,” prompts the same Western researchers to pronounce those societies “mad.”
p. 513
This is the best and worst of times for the globalization of knowledge. In one scenario, an eventual completion of the political science parameters can now seal both knowledge, sophisticatedly canalized, and ideology, universally uncontested – even if the two are never separable in the foundation of political science. In another scenario, causes and effects no longer rule out atypical polities, but the differences are presented as culturally incompatible. In either case, the trick remains to let anomalies make the norms validate preexisting disciplinary sanctions.
p. 514
Overcoming outmoded rigidities will nurture a robust scholarship committed to universally resonant theories.
What is a shell and what does it do? It is a language close to the computer: fast.
What is “bash”? Basic commands: cd, ls.
The shell’s job is to be a translator between you and the binary code: the middleman. There are several types of shells, with slight differences; the one natively installed on Mac and Unix is bash, the Bourne-again shell.
Bash commands: cd (change directory); ls (list); ls -F. If a command is unclear: man ls (the manual for ls); a colon in the lower left corner tells you that you can scroll; q to escape. Also try ls -ltr.
“Arguments” go colloquially by different names: options, flags, parameters.
cd .. moves up one directory; pwd prints the current working directory; cd data_shell/ goes down one directory.
cd ~ brings me all the way up to the home directory; ~ is $HOME (a universally defined variable).
The default behavior of cd is to bring you to the home directory.
The core shell commands accept the same kinds of shell options (letters).
$ du -h . gives the sizes of the files; Ctrl-C to stop.
$ clear clears the entire screen; scroll up to go back to previous commands.
$ man history; $ history lists previous commands; ! followed by a history number re-runs that command (e.g., to re-run pwd); $ history | grep history (piping).
$ cat <filename> prints the file to standard output.
$ cat ../
+++++++++++++++
How to edit and delete files.
To create a new folder: $ mkdir (make directory).
Text editors: nano and vim (UNIX text editors). $ nano draft.txt; Ctrl-O (save), Ctrl-X (exit).
$ vim; press the Esc key, then in the command line type :wq (write and quit) or just :q.
$ mv draft.txt ../data (move files).
To remove: $ rm; for a directory, $ rm -r thesis/; see $ man rm.
Copy files: $ cp. $ touch touches a file, creating it if it is new.
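Pulling the file-management commands above together, a minimal end-to-end session might look like the sketch below; the file and directory names (scratch, draft.txt, backup.txt) are made up for illustration.

```shell
mkdir -p scratch            # make a scratch directory so nothing real is touched
cd scratch
touch draft.txt             # create an empty file (or update its timestamp)
cp draft.txt backup.txt     # copy a file
mkdir -p data
mv draft.txt data/          # move draft.txt into data/
ls data | grep draft        # piping: filter the ls listing through grep
rm backup.txt               # remove a file
cd ..                       # move back up one directory
```

The same pattern — a command, optional flags, then arguments — holds for every command shown in the notes above.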
C and C++: the instructor uses them for scripting purposes in microbiology. Libraries and packages alongside Python can extend its functionality: numpy and scipy (numeric and scientific Python). Python for academic libraries?
Exiting Python: quit(). Python expects opening and closing parentheses.
A new terminal is needed after installation (Anaconda 5.0.1).
Python 3 is a complete redesign, not only an update.
Python is object-oriented, and I can define the objects.
Python libraries create their own types of objects (which we model); pandas’ tabular type is called a DataFrame.
A method is applied to data that already exists; it belongs to the object, which is the difference from a standalone function.
data.info() is a method, called with parentheses; it does not take any arguments,
whereas
data.columns is an attribute, accessed without parentheses.
print(data.T) transposes the data: not easy in Excel, but very easy in Python.
data = pandas.read_csv('/Users/plamen_local/Desktop/data/gapminder_gdp_oceania.csv', index_col='country')
data.loc['Australia'].plot()
plt.xticks(rotation=10)
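The gapminder CSV from the session lives on the instructor's machine, so the sketch below substitutes a tiny inline DataFrame (the numbers are illustrative) to show the attribute/method distinction and the transpose without needing the file:

```python
import pandas as pd

# A stand-in for the gapminder_gdp_oceania.csv data: same shape of table,
# invented values, with 'country' as the index as in the workshop example.
data = pd.DataFrame(
    {"gdpPercap_2002": [30687.75, 23189.80],
     "gdpPercap_2007": [34435.37, 25185.01]},
    index=pd.Index(["Australia", "New Zealand"], name="country"),
)

print(data.columns)          # attribute: accessed without parentheses
data.info()                  # method: called with parentheses, no arguments
print(data.T)                # transpose rows and columns
row = data.loc["Australia"]  # label-based row selection, as in the notes
```

From here, `row.plot()` would draw the same kind of line chart as the workshop example, once matplotlib is available.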
ggplot2 is the most well-known plotting library.
xelatex is a PDF engine. reST (reStructuredText) is a markup language like Markdown. Google what the best PDF engine to use with Jupyter is.
“For” loops: any computer language will have the concept of a “for” loop. In Python: 1. whenever we create a “for” loop, that line must end with a single colon;
2. indentation: any “if” statement inside the “for” loop gets indented further.
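The two rules above in action (the numbers here are just sample data):

```python
totals = []
for n in [1, 2, 3, 4]:      # rule 1: the "for" line ends with a single colon
    if n % 2 == 0:          # rule 2: the "if" inside the loop is indented,
        totals.append(n * n)  #         and its body is indented further still
print(totals)
```

Only the even numbers are squared and collected, so the loop prints [4, 16].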
An introduction to digital badges and a brief history
Simply put, a digital badge is an indicator of accomplishment or skill that can be displayed, accessed, and verified online. These badges can be earned in a wide variety of environments, an increasing number of which are online.
The anatomy of digital badges
In addition to the image-based design we think of as a digital badge, badges carry metadata that communicate details of the badge to anyone wishing to verify it or to learn more about the context of the achievement it signifies.
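As a toy illustration of what such metadata might look like, here is a sketch loosely inspired by the Open Badges assertion format; every field name and value below is invented for illustration, not taken from any specific badging platform.

```python
# Hypothetical badge metadata: the issuer, criteria, and evidence URL
# are placeholders, not real records.
badge_assertion = {
    "name": "Intro to Data Visualization",
    "issuer": "Example University Library",
    "issuedOn": "2017-07-10",
    "criteria": "Completed three hands-on visualization workshops",
    "evidence": "https://example.org/portfolio/badge-evidence",
}

# A verifier can inspect these fields programmatically rather than
# trusting the badge image alone.
print(badge_assertion["issuer"], badge_assertion["issuedOn"])
```

It is this machine-readable layer, not the image, that makes a badge verifiable online.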
The many functions of digital badges
Just like their real-world counterparts, digital badges serve a wide variety of purposes depending on the issuing body and the individual. For the most part, badges’ functions can be bucketed into one of five categories.
Badges are issued by individual organizations that set the criteria for what constitutes earning a badge. They are most often issued through an online credential or badging platform.
Criticism of digital badges
There are various arguments to be made against the implementation of digital badges, including the common issuance of seemingly “meaningless” badges.
The future of digital badges
With the rise of online education and the increasing availability of high quality massive open online courses, there will be an increasing need for verifiable digital badges and digital credentials.
“When I hear the word ‘culture,’ I reach for my revolver.”
Kultur, he explains (along with Bildung, or education), denoted in pre-unification Germany those qualities that the intellectuals and professionals of the small, isolated German middle class claimed for themselves in response to the disdain of the minor German nobles who employed them: intellectual achievement, of course, but also simple virtues like authenticity, honesty, and sincerity.
German courtiers, by contrast, according to the possessors of Kultur, had acquired “civilization” from their French tutors: manners, social polish, the cultivation of appearances. As the German middle class asserted itself in the nineteenth century, the particular virtues of Kultur became an important ingredient in national self-definition. The inferior values of “civilization” were no longer attributed to an erstwhile French-educated German nobility, but to the French themselves and to the West in general.
By 1914, the contrast between Kultur and Zivilisation had taken on a more aggressively nationalist tone. During World War I German patriotic propaganda vaunted the superiority of Germany’s supposedly rooted, organic, spiritual Kultur over the allegedly effete, shallow, cosmopolitan, materialist, Jewish-influenced “civilization” of Western Europe. Martin’s book shows how vigorously the Nazis applied this traditional construct.
Goebbels and Hitler were as obsessed with movies as American adolescents are today with social media.
Music was a realm that Germans felt particularly qualified to dominate. But first the German national musical scene had to be properly organized. In November 1933 Goebbels offered Richard Strauss the leadership of a Reich Music Chamber.
Goebbels organized in Düsseldorf in 1938 a presentation of “degenerate music” following the better-known 1937 exhibition of “degenerate art.”
As with music, the Nazis were able to attract writers outside the immediate orbit of the Nazi and Fascist parties by endorsing conservative literary styles against modernism, by mitigating copyright and royalty problems, and by offering sybaritic visits to Germany and public attention.
Painting and sculpture, curiously, do not figure in this account of the cultural fields that the Nazis and Fascists tried to reorganize “inter-nationally,” perhaps because they had not previously been organized on liberal democratic lines. Picasso and Kandinsky painted quietly in private and Jean Bazaine organized an exhibition with fellow modernists in 1941. Nazi cultural officials thought “degenerate” art appropriate for France.
Science would have made an interesting case study, a contrary one. Germany dominated the world of science before 1933. Germans won fifteen Nobel Prizes in physics, chemistry, and physiology or medicine between 1918 and 1933, more than any other nation. Far from capitalizing on this major soft power asset, Hitler destroyed it by imposing ideological conformity and expelling Jewish scientists such as the talented nuclear physicist Lise Meitner. The soft power of science is fragile, as Americans may yet find out.
American soft power thrived mostly through the profit motive and by offering popular entertainment to the young.
+++++++++++++++
The Original Axis of Evil
THE ANATOMY OF FASCISM By Robert O. Paxton. 321 pp. New York: Alfred A. Knopf. $26.
fascism — unlike Communism, socialism, capitalism or conservatism — is a smear word more often used to brand one’s foes than it is a descriptor used to shed light on them.
World War I and the Bolshevik Revolution of 1917 contributed mightily to the advent of fascism. The war generated acute economic malaise, national humiliation and legions of restive veterans and unemployed youths who could be harnessed politically. The Bolshevik Revolution, but one symptom of the frustration with the old order, made conservative elites in Italy and Germany so fearful of Communism that anything — even fascism — came to seem preferable to a Marxist overthrow.
Paxton debunks the consoling fiction that Mussolini and Hitler seized power. Rather, conservative elites desperate to subdue leftist populist movements ”normalized” the fascists by inviting them to share power. It was the mob that flocked to fascism, but the elites who elevated it.
Fascist movements and regimes are different from military dictatorships and authoritarian regimes. They seek not to exclude, but rather to enlist, the masses. They often collapse the distinction between the public and private sphere (eliminating the latter). In the words of Robert Ley, the head of the Nazi Labor Office, the only private individual who existed in Nazi Germany was someone asleep.
It was this need to keep citizens intoxicated by fascism’s dynamism that made Mussolini and Hitler see war as both desirable and necessary. “War is to men,” Mussolini insisted, “as maternity is to women.”
For every official American attempt to link Islamic terrorism to fascism, there is an anti-Bush protest that applies the fascist label to Washington’s nationalist rhetoric, assault on civil liberties and warmaking.
10. The Virtualized Library: A Librarian’s Introduction to Docker and Virtual Machines
This session will introduce two major types of virtualization, virtual machines using tools like VirtualBox and Vagrant, and containers using Docker. The relative strengths and drawbacks of the two approaches will be discussed along with plenty of hands-on time. Though geared towards integrating these tools into a development workflow, the workshop should be useful for anyone interested in creating stable and reproducible computing environments, and examples will focus on library-specific tools like Archivematica and EZPaarse. With virtualization taking a lot of the pain out of installing and distributing software, alleviating many cross-platform issues, and becoming increasingly common in library and industry practices, now is a great time to get your feet wet.
(One three-hour session)
11. Digital Empathy: Creating Safe Spaces Online
User research is often focused on measures of the usability of online spaces. We look at search traffic, run card sorting and usability testing activities, and track how users navigate our spaces. Those results inform design decisions through the lens of information architecture. This is important, but doesn’t encompass everything a user needs in a space.
This workshop will focus on the other component of user experience design and user research: how to create spaces where users feel safe. Users bring their anxieties and stressors with them to our online spaces, but informed design choices can help to ameliorate that stress. This will ultimately lead to a more positive interaction between your institution and your users.
The presenters will discuss the theory behind empathetic design, delve deeply into using ethnographic research methods – including an opportunity for attendees to practice those ethnographic skills with student participants – and finish with the practical application of these results to ongoing and future projects.
(One three-hour session)
14. ARIA Basics: Making Your Web Content Sing Accessibility
https://dequeuniversity.com/assets/html/jquery-summit/html5/slides/landmarks.html
Are you a web developer or create web content? Do you add dynamic elements to your pages? If so, you should be concerned with making those dynamic elements accessible and usable to as many as possible. One of the most powerful tools currently available for making web pages accessible is ARIA, the Accessible Rich Internet Applications specification. This workshop will teach you the basics for leveraging the full power of ARIA to make great accessible web pages. Through several hands-on exercises, participants will come to understand the purpose and power of ARIA and how to apply it for a variety of different dynamic web elements. Topics will include semantic HTML, ARIA landmarks and roles, expanding/collapsing content, and modal dialog. Participants will also be taught some basic use of the screen reader NVDA for use in accessibility testing. Finally, the lessons will also emphasize learning how to keep on learning as HTML, JavaScript, and ARIA continue to evolve and expand.
Participants will need a basic background in HTML, CSS, and some JavaScript.
(One three-hour session)
18. Learning and Teaching Tech
Tech workshops pose two unique problems: finding skilled instructors for that content, and instructing that content well. Library hosted workshops are often a primary educational resource for solo learners, and many librarians utilize these workshops as a primary outreach platform. Tackling these two issues together often makes the most sense for our limited resources. Whether a programming language or software tool, learning tech to teach tech can be one of the best motivations for learning that tech skill or tool, but equally important is to learn how to teach and present tech well.
This hands-on workshop will guide participants through developing their own learning plan, reviewing essential pedagogy for teaching tech, and crafting a workshop of their choice. Each participant will leave with an actionable learning schedule, a prioritized list of resources to investigate, and an outline of a workshop they would like to teach.
(Two three-hour sessions)
23. Introduction to Omeka S
Omeka S represents a complete rewrite of Omeka Classic (aka the Omeka 2.x series), adhering to our fundamental principles of encouraging the use of metadata standards, easy web publishing, and sharing cultural history. New objectives in Omeka S include multisite functionality and increased interaction with other systems. This workshop will compare and contrast Omeka S with Omeka Classic to highlight our emphasis on 1) modern metadata standards, 2) interoperability with other systems, including Linked Open Data, 3) use of modern web standards, and 4) web publishing that meets the goals of medium- to large-sized institutions.
In this workshop we will walk through Omeka S Item creation, with emphasis on LoD principles. We will also look at the features of Omeka S that ease metadata input and facilitate project-defined usage and workflows. In accordance with our commitment to interoperability, we will describe how the API for Omeka S can be deployed for data exchange and sharing between many systems. We will also describe how Omeka S promotes multiple site creation from one installation, in the interest of easy publishing with many objects in many contexts, and simplifying the work of IT departments.
(One three-hour session)
24. Getting started with static website generators
Have you been curious about static website generators? Have you been wondering who Jekyll and Hugo are? Then this workshop is for you!
But this workshop isn’t about setting up a domain name and hosting for your website. It’s about the step after that: the actual making of the site. The typical choice for a lot of people would be to use something like WordPress. It’s a one-click install on most hosting providers, and there’s a gigantic market of plugins and themes to choose from, depending on the type of site you’re trying to build. But not only is WordPress a bit of overkill for most websites, it also gives you a dynamically generated site with a lot of moving parts. If you don’t keep all of those pieces up to date, they can pose a significant security risk, and your site could get hijacked.
The alternative would be to have a static website, with nothing dynamically generated on the server side. Just good old HTML and CSS (and perhaps a bit of JavaScript for flair). The downside to that option has been that you’re relegated to coding the whole thing by hand. It’s doable, but if you just want a place to share your work, you shouldn’t have to know all the idiosyncrasies of low-level web design (and the monumental headache of cross-browser compatibility) to do it.
Static website generators are tools used to build a website made up only of HTML, CSS, and JavaScript. Static websites, unlike dynamic sites built with tools like Drupal or WordPress, do not use databases or server-side scripting languages. Static websites have a number of benefits over dynamic sites, including reduced security vulnerabilities, simpler long-term maintenance, and easier preservation.
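To make the idea concrete, here is a minimal sketch of what a static website generator does, written in plain stdlib Python with invented page data. Real generators like Jekyll and Hugo add Markdown parsing, themes, and full templating languages on top of this core loop of content + template → static HTML:

```python
# Minimal static-site-generator sketch (illustrative only).
# Pages are plain data; the template turns each one into a standalone HTML file.
from string import Template

PAGE_TEMPLATE = Template(
    "<html><head><title>$title</title></head>"
    "<body><h1>$title</h1><p>$body</p></body></html>"
)

def render_site(pages):
    """Render a dict of {slug: {title, body}} into {filename: html_string}."""
    return {
        f"{slug}.html": PAGE_TEMPLATE.substitute(title=page["title"], body=page["body"])
        for slug, page in pages.items()
    }

# Hypothetical site content; a real generator would read these from files.
site = render_site({
    "index": {"title": "Home", "body": "Welcome to my static site."},
    "about": {"title": "About", "body": "Built without a database."},
})
```

The output is exactly what you would upload to any plain web host or GitHub Pages: nothing runs on the server, so there is nothing to patch or hijack.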
In this hands-on workshop, we’ll start by exploring static website generators: their components, some of the different options available, and their benefits and disadvantages. Then we’ll work on making our own sites and, for those who would like to, put them online with GitHub Pages. Familiarity with HTML, git, and command-line basics will be helpful but is not required.
(One three-hour session)
26. Using Digital Media for Research and Instruction
To use digital media effectively in both research and instruction, you need to go beyond mere playback of media files. You need to be able to stream the media, divide that stream into segments, provide descriptive analysis of each segment, order, re-order, and compare segments from the same or different streams, and create websites that present the results of your analysis. In this workshop, we will use Omeka and several plugins for working with digital media to show the potential of video streaming, segmentation, and descriptive analysis for research and instruction.
(One three-hour session)
28. Spark in the Dark 101 https://zeppelin.apache.org/
This is an introductory session on Apache Spark, a framework for large-scale data processing (https://spark.apache.org/). We will introduce high-level concepts around Spark, including how Spark execution works and its relationship to other technologies for working with Big Data. Following this introduction to the theory and background, we will walk workshop participants through hands-on usage of spark-shell, Zeppelin notebooks, and Spark SQL for processing library data. The workshop will wrap up with use cases and demos for leveraging Spark within cultural heritage institutions and information organizations, connecting the building blocks learned to current projects in the real world.
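A rough illustration of Spark’s execution model, in plain Python rather than PySpark (the class name and sample titles are invented): transformations such as map and filter build up a lazy pipeline, and nothing is computed until an action like collect forces evaluation.

```python
# Toy imitation of Spark's lazy transformations (NOT the PySpark API).
class MiniRDD:
    def __init__(self, data):
        self._data = data          # an iterable; may itself be lazy

    def map(self, fn):             # transformation: returns a new lazy dataset
        return MiniRDD(fn(x) for x in self._data)

    def filter(self, pred):        # transformation: also lazy, nothing runs yet
        return MiniRDD(x for x in self._data if pred(x))

    def collect(self):             # action: forces the whole pipeline to run
        return list(self._data)

# Invented sample "library data"
titles = MiniRDD(["MARC Records", "Linked Data", "Data Curation"])
result = titles.map(str.lower).filter(lambda t: "data" in t).collect()
# result == ["linked data", "data curation"]
```

In real Spark the same pattern applies, but the pipeline is distributed across a cluster and optimized before execution, which is what makes it viable for very large datasets.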
(One three-hour session)
29. Introduction to Spotlight https://github.com/projectblacklight/spotlight http://www.spotlighttechnology.com/4-OpenSource.htm
Spotlight is an open source application that extends the digital library ecosystem by providing a means for institutions to reuse digital content in easy-to-produce, attractive, and scholarly-oriented websites. Librarians, curators, and other content experts can build Spotlight exhibits to showcase digital collections using a self-service workflow for selection, arrangement, curation, and presentation.
This workshop will introduce the main features of Spotlight and present examples of Spotlight-built exhibits from the community of adopters. We’ll also describe the technical requirements for adopting Spotlight and highlight the potential for institutions to customize and extend Spotlight’s capabilities for their own needs while contributing to its growth as an open source project.
(One three-hour session)
31. Getting Started Visualizing your IoT Data in Tableau https://www.tableau.com/
The Internet of Things is a rising trend in library research. IoT sensors can be used for space assessment, service design, and environmental monitoring. IoT tools create lots of data that can be overwhelming and hard to interpret. Tableau Public (https://public.tableau.com/en-us/s/) is a data visualization tool that allows you to explore this information quickly and intuitively to find new insights.
This full-day workshop will teach you the basics of building your own IoT sensor using a Raspberry Pi (https://www.raspberrypi.org/) in order to gather, manipulate, and visualize your data.
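As a taste of the "gather and shape" step, the sketch below simulates sensor readings and writes them to CSV, a format Tableau Public can import directly. The field names are our own choices, not anything Tableau requires, and on a real Raspberry Pi the simulated function would be replaced by an actual sensor read:

```python
# Simulated IoT data collection written out as CSV for Tableau import.
import csv
import io
import random
from datetime import datetime, timedelta

def simulated_readings(n, start):
    """Yield (timestamp, temperature_c) rows, one per minute (fake sensor)."""
    for i in range(n):
        yield (start + timedelta(minutes=i)).isoformat(), round(20 + random.random() * 5, 2)

buf = io.StringIO()                     # stands in for a file on disk
writer = csv.writer(buf)
writer.writerow(["timestamp", "temperature_c"])   # header row Tableau will use
for row in simulated_readings(3, datetime(2018, 1, 1)):
    writer.writerow(row)

csv_text = buf.getvalue()
```

Once the CSV exists, Tableau Public handles the visualization side: you point it at the file, and the header row becomes the fields you drag onto shelves.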
All are welcome, but some familiarity with Python is recommended.
(Two three-hour sessions)
32. Enabling Social Media Research and Archiving
Social media data represents a tremendous opportunity for memory institutions of all kinds, be they large academic research libraries or small community archives. Researchers from a broad swath of disciplines have a great deal of interest in working with social media content, but they often lack access to datasets or the technical skills needed to create them. Further, it is clear that social media is already a crucial part of the historical record in areas ranging from events in your local community to national elections, yet attempts to build archives of social media data are largely nascent. This workshop will be both an introduction to collecting data from the APIs of social media platforms and a discussion of the roles of libraries and archives in that collecting.
Assuming no prior experience, the workshop will begin with an explanation of how APIs operate. We will then focus specifically on the Twitter API, as Twitter is of significant interest to researchers and hosts an important segment of discourse. Through a combination of hands-on exercises and demos, we will gain experience with a number of tools that support collecting social media data (e.g., Twarc, Social Feed Manager, DocNow, Twurl, and TAGS), as well as tools that enable sharing social media datasets (e.g., Hydrator, TweetSets, and the Tweet ID Catalog).
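One idea behind tools like Hydrator, TweetSets, and the Tweet ID Catalog is that Twitter datasets are typically shared as lists of tweet IDs ("dehydrated") and re-fetched from the API ("rehydrated") by the recipient. A minimal sketch of that round trip, with fabricated sample tweets and a plain dictionary standing in for the API lookup:

```python
# Sketch of tweet-ID dataset sharing; tweets and the "API" are fabricated.
def dehydrate(tweets):
    """Reduce full tweet objects to a shareable list of ID strings."""
    return [t["id_str"] for t in tweets]

def rehydrate(tweet_ids, lookup):
    """Re-fetch full tweets for each ID; `lookup` stands in for an API call.
    IDs missing from the lookup (e.g. deleted tweets) are simply skipped."""
    return [lookup[tid] for tid in tweet_ids if tid in lookup]

collected = [
    {"id_str": "101", "text": "example tweet one"},
    {"id_str": "102", "text": "example tweet two"},
]
ids = dehydrate(collected)               # this ID list is what gets published
api = {t["id_str"]: t for t in collected}
restored = rehydrate(ids, api)           # a researcher rebuilds the dataset
```

Note the skipped-ID behavior in rehydrate: it mirrors a real property of this sharing model, in that tweets deleted after collection cannot be recovered, which is itself an ethics and preservation topic the workshop discussion touches on.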
The workshop will then turn to a discussion of how to build a successful program enabling social media collecting at your institution. This might cover a variety of topics including outreach to campus researchers, collection development strategies, the relationship between social media archiving and web archiving, and how to get involved with the social media archiving community. This discussion will be framed by a focus on ethical considerations of social media data, including privacy and responsible data sharing.
Time permitting, we will provide a sampling of some approaches to social media data analysis, including Twarc Utils and Jupyter Notebooks.
A blockchain is a database or digital ledger. The data in the ledger is arranged in batches known as blocks, with each block storing data about a specific transaction. The blocks are linked together using cryptographic validation to form an unbroken and unbreakable chain, hence the name blockchain. As it relates to Bitcoin, the blocks are monetary units, and the chain includes information about all past transactions of that monetary unit.
Importantly, the database (i.e., the series of blocks) is duplicated thousands of times across a network of computers, meaning that it has no one central repository. This not only means that the records are truly public, but also that there is no centralized version of the data for a hacker to corrupt. In order to make changes to the ledger, consensus between all members of the group must be obtained, further adding to the system’s security.
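The linking described above can be sketched in a few lines: each block stores a hash of its predecessor, so tampering with any past block invalidates every later link. This toy ledger (all names and transactions invented) omits the distributed consensus that real blockchains layer on top:

```python
# Toy blockchain: blocks chained by SHA-256 hashes of their predecessors.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    """Append a block that records the hash of the current last block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def is_valid(chain):
    """Verify every block's stored prev_hash matches its predecessor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
add_block(ledger, {"from": "alice", "to": "bob", "amount": 5})
add_block(ledger, {"from": "bob", "to": "carol", "amount": 2})
assert is_valid(ledger)
ledger[0]["data"]["amount"] = 500        # tamper with history...
assert not is_valid(ledger)              # ...and the chain no longer validates
```

In a real network, thousands of nodes each hold a copy of this chain and must agree before a block is accepted, which is why no single hacked copy can silently rewrite the record.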
1. Blockchain for the Future of Credentialing
With today’s technologies, graduates and prospective employers must go through a tedious process to obtain student transcripts or diplomas, and this complexity is compounded when these credentials are spread across multiple institutions. Not only that, but these transcripts can take days or weeks to produce and send, and usually require that a small fee be paid to the institution.
Blockchain could be a key enabler here: it could facilitate student ownership of this data and allow students to instantly produce secure and comprehensive credentials for any institution or employer requesting them, including information about a student’s performance on standardized tests, degree requirements, extracurricular activities, and other learning activities.
Blockchain could play a major role in Competency-Based Education (CBE) programs and micro-credentialing, which are becoming ever more popular across universities and internal business training programs.
Various companies are currently working on such a system of record. One of the best known is Blockcerts, an open standard created by the MIT Media Lab, which the institute hopes will help drive the adoption of blockchain credentialing.
Imagine the role that LinkedIn or a similar platform could play in the distribution of such content. Beyond verification of university records, LinkedIn could become a platform for sharing verified work histories and resumes as well, making the job application process far simpler.
2. Blockchain’s Financial Implications and Student Debt
How could blockchain influence student finances? For starters, financial aid and grants could be tied to student success. Instead of students and universities having to send regular progress reports on a recipient’s performance, automatic updates to a student’s digital record would ensure that benchmarks were being met, and would open up new opportunities for institutions looking to offer merit-based grants.
Electronic tuition payments and money transfers could also simplify the tuition process. This is an especially appealing option for international students, as Bitcoin’s interchangeable nature and lack of special fees for international transfers make it a simpler and more cost-effective payment method.
How do algorithms shape our browsing behavior and browsing history? What is the connection between social media algorithms and fake news? Are there topic-detection algorithms just as there are community-detection ones?
How can I change the content of a [Google] search return? Can I?
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346. doi:10.1177/1461444815608807
Cruz, J. D., Bothorel, C., & Poulet, F. (2014). Community detection and visualization in social networks: Integrating structural and semantic information. ACM Transactions on Intelligent Systems and Technology, 5(1), 1-26. doi:10.1145/2542182.2542193
Bai, X., Yang, P., & Shi, X. (2017). An overlapping community detection algorithm based on density peaks. Neurocomputing, 226, 7-15. doi:10.1016/j.neucom.2016.11.019
Zeng, J., & Zhang, S. (2009). Incorporating topic transition in topic detection and tracking algorithms. Expert Systems With Applications, 36(1), 227-232. doi:10.1016/j.eswa.2007.09.013
Zhou, E., Zhong, N., & Li, Y. (2014). Extracting news blog hot topics based on the W2T Methodology. World Wide Web, 17(3), 377-404. doi:10.1007/s11280-013-0207-7
The W2T (Wisdom Web of Things) methodology considers the information organization and management from the perspective of Web services, which contributes to a deep understanding of online phenomena such as users’ behaviors and comments in e-commerce platforms and online social networks. (https://link.springer.com/chapter/10.1007/978-3-319-44198-6_10)
Ethics of algorithms
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). doi:10.1177/2053951716679679