Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.
But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list.
Facial recognition technology has always been controversial. It makes people nervous about Big Brother. It has a tendency to deliver false matches for certain groups, like people of color. And some facial recognition products used by the police — including Clearview’s — haven’t been vetted by independent experts.
Clearview deployed current and former Republican officials to approach police forces, offering free trials and annual licenses for as little as $2,000. Mr. Schwartz tapped his political connections to help make government officials aware of the tool, according to Mr. Ton-That.
“We have no data to suggest this tool is accurate,” said Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology, who has studied the government’s use of facial recognition. “The larger the database, the larger the risk of misidentification because of the doppelgänger effect. They’re talking about a massive database of random people they’ve found on the internet.”
Part of the problem stems from a lack of oversight. There has been no real public input into adoption of Clearview’s software, and the company’s ability to safeguard data hasn’t been tested in practice. Clearview itself remained highly secretive until late 2019.
The software also appears to explicitly violate policies at Facebook and elsewhere against collecting users’ images en masse.
While there’s underlying code that could theoretically be used for augmented reality glasses able to identify people on the street, Mr. Ton-That said there were no plans for such a design.
Facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.
People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses.
The data broker industry is almost entirely unregulated; there’s only one law — passed in Vermont in 2018 — that requires data brokers to register and explain in broad terms what kind of data they collect.
The upside for businesses is that this new, “anonymized” video no longer gives away the exact identity of a customer—which, Perry says, means companies using D-ID can “eliminate the need for consent” and analyze the footage for business and marketing purposes. A store might, for example, feed video of a happy-looking white woman to an algorithm that can surface the most effective ad for her in real time.
Three leading European privacy experts who spoke to MIT Technology Review voiced their concerns about D-ID’s technology and its intentions. All say that, in their opinion, D-ID actually violates GDPR.
As Norwegian Refugee Council research found, 70 percent of Syrian refugees lack basic identification and documents showing ownership of property.
The global passport
Host nations certainly have a share in the damage, as they face problems accessing vital information about newcomers. When dealing with an undocumented refugee, the immigration service cannot obtain information about his or her health status, family ties or criminal record, or verify any other vital data that would help it make a decision. Needless to say, this may lead to the designation of refugee status being exploited by economic migrants, fugitives or even the war criminals who caused the mass displacement to begin with.
Another important issue is data security. Refugees’ personal identities are carefully re-established with the support of clever biometric systems set up by the U.N. Refugee Agency (UNHCR). UNHCR registers millions of refugees and maintains those records in a database. But the evidence suggests that centralized systems like this could be prone to attacks. As a report on UNHCR’s site notes, Aadhaar — India’s massive biometric database and the largest national database of people in the world — has suffered serious breaches, and last year, allegations were made that access was for sale on the internet for as little as $8.
Finland, a country with a population of 5.5 million, cannot boast huge numbers of refugees. For 2018, it set a quota of 750 people, mainly from Syria and the Democratic Republic of Congo. That’s far fewer than neighboring Sweden, which promised to take in 3,400. Nevertheless, the country sets a global example of the effective use of technology in immigration policy: It’s using blockchain to help newcomers get on their feet faster.
The system, developed by the Helsinki-based startup MONI, maintains a full analogue of a bank account for every one of its participants.
Speaking at the World Economic Forum in Davos in January 2018, the billionaire investor and philanthropist George Soros revealed that his organizations already use blockchain in immigration policies.
In 2017, Accenture and Microsoft Corp. teamed up to build a digital ID network using blockchain technology, as part of a U.N.-supported project to provide legal identification to 1.1 billion people worldwide with no official documents.
a Memorandum of Understanding (MOU) with blockchain platform IOTA to explore how the technology could increase efficiency.
In Media Manipulation and Disinformation Online, Marwick and Lewis (2017) of the Data & Society Research Institute described the agents of media manipulation, their modus operandi, their motivators, and how they’ve taken advantage of the vulnerability of online media. The researchers described the manipulators as right-wing extremists (RWE), also known as the alt-right, who run the gamut from sexists (including male sexual conquest communities) to white nationalists to anti-immigration activists, and even those who rebuke the RWE label but whose actions confer such classification. These manipulators rally behind shared beliefs on online forums, blogs, podcasts, and social media through anonymous pranks or ruinous trolling, usurping participatory-culture methods (networking, humor, mentorship) for harassment, and competitive cyber brigades that earn status by escalating bullying, such as sharing a target’s private information.
Marwick and Lewis reported on how RWE groups have taken advantage of certain media tactics to gain viewers’ attention such as novelty and sensationalism, as well as their interactions with the public via social media, to manipulate it for their agenda. For instance, YouTube provides any individual with a portal and potential revenue to contribute to the media ecosystem. The researchers shared the example of the use of YouTube by conspiracy theorists, which can be used as fodder for extremist networks as conspiracies generally focus on loss of control of important ideals, health, and safety.
One tactic they’re using is to package their hate in a way that appeals to millennials. They use attention hacking, such as hate speech that is later recanted as trickster trolling, to increase their status, all the while gaining the media’s attention for further propagation.
SHARED MODUS OPERANDI
Marwick and Lewis reported the following shared tactics various RWE groups use for online exploits:
Ambiguity of persona or ideology,
Baiting a single or community target’s emotions,
Bots for amplification of propaganda that appears to come legitimately from a real person,
“…Embeddedness in Internet culture… (p. 28),”
Exploitation of young male rebelliousness,
Hate speech and offensive language (under the guise of First Amendment protections),
Irony to cloak ideology and/or skewer intended targets,
Memes for stickiness of propaganda,
Mentorship in argumentation, marketing strategies, and subversive literature in their communities of interest,
Networked and agile groups,
“…Permanent warfare… (p.12)” call to action,
Pseudo scholarship to deceive readers,
“…Quasi moral arguments… (p. 7)”
Shocking images for filtering network membership,
“Trading stories up the chain… (p. 38)” from low-level news outlets to mainstream, and
Trolling others with asocial behavior.
Teenagers in Veles, Macedonia profited around $16,000 per month via Google’s AdSense from Facebook post engagements.
1. Using a blockchain for automatic recognition and transfer of credits
The decline in first-time, first-year student enrollments is having a real financial impact on a number of institutions across the United States and focusing on transfer students (a pool of prospects twice as large) has become an important strategy for many. But credit articulation presents a real challenge for institutions bringing in students from community colleges. While setting standardized articulation requirements across the nation presents a high hurdle, blockchain-supported initiatives may hold great promise for university and city education systems looking to streamline educational mobility in their communities.
2. Blockchains for tracking intellectual property and rewarding use and re-use of that property
If researchers were able to publish openly and accurately assess the use of their resources, the access-prohibitive costs of academic book and journal publications could be circumvented, whether for research- or teaching-oriented outputs. Accurately tracking the sharing of knowledge without restrictions has transformative potential for open-education models.
3. Using verified sovereign identities for student identification within educational organizations
The data footprint of higher education institutions is enormous. With FERPA regulations as well as local and international requirements for the storage and distribution of Personally Identifiable Information (PII), maintaining this data in various institutional silos magnifies the risk associated with a data breach. Using sovereign identities to limit the proliferation of personal data promotes better data hygiene and data lifecycle management and could realize significant efficiency gains at the institutional level.
4. Using a blockchain as a lifelong learning passport
Educational institutions and private businesses partner with online course delivery giants to extend the reach of their educational services and priorities. Traditional educational routes are becoming less common, and in this expanding world of providers, the need for verifiable credentials from a number of sources is growing. Producing a form of digitally “verifiable CVs” would limit credential fraud and significantly reduce organizational workload in credential verification.
5. Using blockchains to permanently secure certificates
The open source solution Blockcerts already enables signed certificates to be posted to a blockchain and supports the verification of those certificates by third parties.
When an institution issues official transcripts, obtaining copies can be expensive and burdensome for graduates. But student-owned digital transcripts put the power of secure verification in the hands of learners, eliminating the need for lengthy and costly transcripts to further their professional or educational pursuits. An early mover, Central New Mexico Community College, debuted digital diplomas on the blockchain in December of 2017.
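The verification model behind blockchain-anchored credentials can be sketched in miniature: an issuer publishes a hash of the certificate to a chain, and any third party later recomputes the hash and compares. This is a deliberately simplified illustration, not the actual Blockcerts protocol (which uses Merkle proofs and a defined JSON-LD normalization); the field names here are hypothetical.

```python
import hashlib
import json

def certificate_hash(cert: dict) -> str:
    """Canonical SHA-256 hash of a certificate's JSON payload.
    (Simplified: real standards define a stricter normalization.)"""
    canonical = json.dumps(cert, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(cert: dict, anchored_hash: str) -> bool:
    """A verifier recomputes the hash and compares it with the value
    anchored on-chain; any edit to the certificate makes it fail."""
    return certificate_hash(cert) == anchored_hash

# Hypothetical certificate payload
cert = {"name": "A. Student", "degree": "BSc", "issued": "2017-12-01"}
anchor = certificate_hash(cert)   # the issuer writes this to the chain

assert verify(cert, anchor)                            # untampered: passes
assert not verify({**cert, "degree": "PhD"}, anchor)   # tampered: fails
```

The key design point is that only the fixed-size hash goes on-chain; the certificate itself stays with the learner, which is what makes the transcript student-owned.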
6. Using blockchains to verify multi-step accreditation
As different accreditors recognize different forms of credentials and a growing diversity of educational providers issue credentials, checking the ‘pedigree’ of a qualification can be laborious. Turning certification verification from a multi-stage research effort into a single-click process would save organizations and institutions many thousands of labor hours.
We spend a lot of time debating the characteristics of generations—are baby boomers really selfish and entitled, are millennials really narcissists, and the latest, has the next generation (whatever it is going to be called) already been ruined by cellphones? Many academics—and many consultants—argue that generations are distinct and that organizations, educators, and even parents need to accommodate them. These classifications are often met with resistance from those they supposedly represent, as most people dislike being represented by overgeneralizations, and these disputes only fuel the debate around this contentious topic.
In short, the science shows that generations are not a thing.
It is important to be clear what not a thing means. It does not mean that people today are the same as people 80 years ago or that anything else is static. Times change and so do people. However, the idea that distinct generations capture and represent these changes is unsupported.
What is a generation? Those who promote the concept define it as a group of people who are roughly the same age and who were influenced by a set of significant events. These experiences supposedly create commonalities, making those in the group more similar to each other and more different from other groups now and from groups of the same age in the past.
In line with the definition, there is a commonly held perception that people growing up around the same time and in the same place must have some sort of universally shared set of experiences and characteristics. It helps that the idea of generations intuitively makes sense. But the science does not support it. In fact, most of the research findings showing distinct generations are explained by other causes, have serious scientific flaws, or both.
Numerous books, articles, and pundits have claimed that millennials are much more narcissistic than young people in the past.
on average, millennials are no more narcissistic now than Xers or boomers were when they were in their 20s, and one study has even found they might be less so than generations past. While millennials today may be more narcissistic than Xers or boomers are today, that is because young people are pretty narcissistic regardless of when they are young. This too is an age effect.
Final example. Research shows that millennials joining the Army now show more pride in their service than boomers or Xers did when they joined 20-plus years ago. Is this a generational effect? Nope. Everyone in the military now shows more pride on average than 20 years ago because of 9/11. The terrorist attack increased military pride across the board. This is known as a period effect and it doesn’t have anything to do with generations.
Another problem—identifying true generational effects is methodologically very hard. The only way to do it would be to collect data from multiple longitudinal panels. Individuals in the first panel would be measured at the start of the study and then in subsequent years with new panels added every year thereafter, allowing assessment of whether people were changing because they were getting older (age effects), because of what was happening around them (period effects), or because of their generation (cohort effects). Unfortunately, such data sets pretty much do not exist. Thus, we’re never really able to determine why a change occurred.
According to one national-culture model, people from the United States are, on average, relatively individualistic, indulgent, and uncomfortable with hierarchical order. My note: Richard Nisbett sides with Hofstede and Minkov: http://blog.stcloudstate.edu/ims/2016/06/14/cultural-differences/
Conversely, people from China are generally group-oriented, restrained, and comfortable with hierarchy. However, these countries are so large and diverse that they each have millions of individuals who are more similar to the “averages” of the other country than to their own.
Given these design and data issues, it is not surprising that researchers have tried a variety of different statistical techniques to massage (aka torture) the data in an attempt to find generational differences. Studies showing generational differences have used statistical techniques like analysis of variance (ANOVA) and cross-temporal meta-analysis (CTMA), neither of which is capable of actually attributing the differences to generations.
The statistical challenge derives from the problem we have already raised—generations (i.e., cohorts) are defined by age and period. As such, mathematically separating age, period, and cohort effects is very difficult because they are inherently confounded with one another. Their linear dependency creates what is known as an identification problem, and unless one has access to multiple longitudinal panels like I described above, it is impossible to statistically isolate the unique effect of any one factor.
Are some millennials narcissistic? Are some boomers selfish? Sure, but there are many who are not and whose profiles mirror other generations.
First, relying on flawed generational science leads to poor advice and bad decisions. An analogy: Women live longer than men, on average. Why? They engage in fewer risky behaviors, take better care of themselves, and have two X chromosomes, giving them backups in case of mutations. But if you are a man and you go to the doctor and ask how to live longer, she doesn’t tell you, “Be a woman.” She says eat better, exercise, and don’t do stupid stuff. Knowing the why guides the recommendation.
Now imagine you are a manager trying to retain your supposedly job-hopping, commitment-averse millennial employees and you know that Xers and boomers are less likely to leave their jobs. If you are that manager, you wouldn’t tell your millennial employees to “be a boomer” or “grow older” (nor would you decide to hire boomers or Xers rather than millennials—remember that individuals vary within populations). Instead, you should focus on addressing benefits, work conditions, and other factors that are reasons for leaving.
Second, this focus on generational distinctions wastes resources. Take the millennials-as-commitment-averse-job-hoppers stereotype. Based on this belief, consultants sell businesses on how to recruit and retain this mercurial generation. But are all (or even most) millennials job-hopping commitment avoiders? Survey research shows that millennials and Xers at the same point in their careers are equally likely to stay with their current employer for five or more years (22 percent v. 21.8 percent). It makes no sense for organizations to spend time and money changing HR policies when employees are just as likely to stick around today as they were 15 years ago.
Third, generations perpetuate stereotyping. Ask millennials if they are narcissistic job-hoppers and most of them will rightly be offended. Treat boomers like materialistic achievement seekers and see how it affects their work quality and commitment. We finally are starting to recognize that those within any specific group of people are varied individuals, and we should remember those same principles in this context too. We are (mostly) past it being acceptable to stereotype and discriminate against women, minorities, and the disabled. Why is it OK to do so to millennials or boomers?
The solutions are fairly straightforward, albeit challenging, to implement. To start, we need to focus on the why when talking about whether groups of people differ. The reasons why any generation should be different have only been generally discussed, and the theoretical mechanism that supposedly creates generations has not been fully fleshed out.
Next, we need to quit using these nonsensical generations labels, because they don’t mean anything. The start and end years are somewhat arbitrary anyway. The original conceptualization of social generations started with a biological generational interval of about 20 years, which historians, sociologists and demographers (for one example, see Strauss and Howe, 1991) then retrofitted with various significant historical events that defined the period.
The problem with this is twofold. First, such events do not occur in nice, neat 20-year intervals. Second, not everyone agrees on what the key events were for each generation, so the start and end dates also move around depending on what people think they were. One review found that start and end dates for boomers, Xers, and millennials varied by as many as nine years, and often four to five, depending on the study and the researcher. As with the statistical problem, how can distinct generations be a thing if simply defining when they start and when they end varies so much from study to study?
In the end, the core scientific problem is that the pop press, consultants, and even some academics who are committed to generations don’t focus on the whys. They have a vested interest in selling the whats (Generation Me has reportedly sold more than 115,000 copies; Google “generations consultants” and see how many firms are dedicated to promulgating these distinctions), but without the science behind them, any prescriptions are worthless or even harmful.
David Costanza is an associate professor of organizational sciences at George Washington University and a senior consortium fellow for the U.S. Army Research Institute. He researches, teaches, and consults in the areas of generations, leadership, culture, and organizational performance.
Publisher / Organization: The University of Illinois at Chicago- University Library
Year founded: 1996
Description: First Monday is among the very first open access journals in the EdTech field. The journal’s subject matter encompasses the full range of Internet issues, including educational technologies, social media and web search. Contributors are urged via author guidelines to use simple explanations and less complex sentences and to be mindful that a large proportion of their readers are not part of academia and do not have English as a first language.
Academic Management: University of Catalonia (UOC)
Year founded: 2004
Description: This journal aims to: provide a vehicle for scholarly presentation and exchange of information between professionals, researchers and practitioners in the technology-enhanced education field; contribute to the advancement of scientific knowledge regarding the use of technology and computers in higher education; and inform readers about the latest developments in the application of information technologies (ITs) in higher education learning, training, research and management.
Description: Online Learning promotes the development and dissemination of new knowledge at the intersection of pedagogy, emerging technology, policy, and practice in online environments. The journal has been published for over 20 years as the Journal of Asynchronous Learning Networks (JALN) and recently merged with the Journal of Online Learning and Teaching (JOLT).
Publisher / Organization: International Forum of Educational Technology & Society
Description: Educational Technology & Society seeks academic articles on the issues affecting the developers of educational systems and educators who implement and manage these systems. Articles should discuss the perspectives of both communities – the programmers and the instructors. The journal is currently still accepting submissions for ongoing special issues, but will cease publication in the future as the editors feel that the field of EdTech is saturated with high quality publications.
Description: The Australasian Journal of Educational Technology aims to promote research and scholarship on the integration of technology in tertiary education, promote effective practice, and inform policy. The goal is to advance understanding of educational technology in post-school education settings, including higher and further education, lifelong learning, and training.
DESCRIPTION: The Internet and Higher Education is devoted to addressing contemporary issues and future developments related to online learning, teaching, and administration on the Internet in post-secondary settings. Articles should significantly address innovative deployments of Internet technology in instruction and report on research to demonstrate the effects of information technology on instruction in various contexts in higher education.
Publisher / Organization: British Educational Research Association (BERA)
YEAR FOUNDED: 1970
DESCRIPTION: The journal publishes theoretical perspectives, methodological developments and empirical research that demonstrate whether and how applications of instructional/educational technology systems, networks, tools and resources lead to improvements in formal and non-formal education at all levels, from early years through to higher, technical and vocational education, professional development and corporate training.
Description: Computers & Education aims to increase knowledge and understanding of ways in which digital technology can enhance education, through the publication of high quality research, which extends theory and practice.
Description: TechTrends targets professionals in the educational communication and technology field. It provides a vehicle that fosters the exchange of important and current information among professional practitioners. Among the topics addressed are the management of media and programs, the application of educational technology principles and techniques to instructional programs, and corporate and military training.
Description: Advances in technology and the growth of e-learning provide educators and trainers with unique opportunities to enhance learning and teaching in corporate, government, healthcare, and higher education. IJEL serves as a forum to facilitate the international exchange of information on the current research, development, and practice of e-learning in these sectors.
Led by an Editorial Review Board of leaders in the field of e-Learning, the Journal is designed for the following audiences: researchers, developers, and practitioners in corporate, government, healthcare, and higher education. IJEL is a peer-reviewed journal.
Description: JCMST is a highly respected scholarly journal which offers an in-depth forum for the interchange of information in the fields of science, mathematics, and computer science. JCMST is the only periodical devoted specifically to using information technology in the teaching of mathematics and science.
Just as researchers build reputations over time that can be depicted (in part) through quantitative measures such as the h-index and i10-index, journals are also compared based on the number of citations they receive.
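The h-index mentioned above is straightforward to compute: an author has index h if h of his or her papers have at least h citations each. A minimal sketch, with an invented citation list for illustration:

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers
    with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank      # this paper still clears the threshold
        else:
            break         # all later papers have fewer citations
    return h

# Hypothetical author with five papers:
print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers with ≥ 4 citations)
```

The i10-index used by Google Scholar is even simpler: the number of papers with at least 10 citations.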
Description: The Journal of Interactive Learning Research (JILR) publishes papers related to the underlying theory, design, implementation, effectiveness, and impact on education and training of the following interactive learning environments: authoring systems, cognitive tools for learning, computer-assisted language learning, computer-based assessment systems, computer-based training, computer-mediated communications, computer-supported collaborative learning, distributed learning environments, electronic performance support systems, interactive learning environments, interactive multimedia systems, interactive simulations and games, intelligent agents on the Internet, intelligent tutoring systems, microworlds, and virtual-reality-based learning systems.
Description: JEMH is designed to provide a multi-disciplinary forum to present and discuss research, development and applications of multimedia and hypermedia in education. It contributes to the advancement of the theory and practice of learning and teaching in environments that integrate images, sound, text, and data.
Publisher / Organization: Society for Information Technology and Teacher Education (SITE)
Year founded: 1997
Description: JTATE serves as a forum for the exchange of knowledge about the use of information technology in teacher education. Journal content covers preservice and inservice teacher education, graduate programs in areas such as curriculum and instruction, educational administration, staff development, instructional technology, and educational computing.
Publisher / Organization: Association for the Advancement of Computing in Education (AACE)
YEAR FOUNDED: 2015
DESCRIPTION: The Journal of Online Learning Research (JOLR) is a peer-reviewed, international journal devoted to the theoretical, empirical, and pragmatic understanding of technologies and their impact on pedagogy and policy in primary and secondary (K-12) online and blended environments. JOLR is focused on publishing manuscripts that address online learning, catering particularly to educators who research, practice, design, and/or administer in primary and secondary schooling in online settings. However, the journal also serves those educators who have chosen to blend online learning tools and strategies into their face-to-face classrooms.
SCImago Journal Rank (SJR indicator) measures the influence of journals based on the number of citations the articles in the journal receive and the importance or prestige of the journals where such citations come from. The SJR indicator is a free journal metric which uses an algorithm similar to PageRank and provides an open access alternative to the journal impact factor in the Web of Science Journal Citation Report. The portal draws from the information contained in the Scopus database (Elsevier B.V.).
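The PageRank-style idea behind SJR (a citation from a prestigious journal counts for more than one from an obscure journal) can be illustrated with a basic power iteration over a journal citation matrix. This is a simplified sketch of generic PageRank, not Scimago's actual SJR computation, which adds normalizations of its own; the toy three-journal network is invented.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9):
    """Power-iteration PageRank on a citation adjacency matrix.
    adj[i][j] = 1 means journal i cites journal j."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0   # avoid division by zero for journals citing nothing
    M = A / out           # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    while True:
        # each journal keeps a baseline share plus rank flowing in via citations
        r_new = (1 - damping) / n + damping * (M.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Toy network: journal 0 cites 1, 1 cites 2, 2 cites 0 (a symmetric cycle,
# so all three end up with equal prestige).
print(pagerank([[0, 1, 0], [0, 0, 1], [1, 0, 0]]))
```

Iterating until the scores stop changing is what makes prestige recursive: a journal is important because important journals cite it.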
Introduced by Elsevier in 2004, Scopus is an abstract and citation database that covers nearly 18,000 titles from more than 5,000 publishers. It offers journal metrics that go beyond just journals to include most serial titles, including supplements, special issues and conference proceedings. Scopus offers useful information such as the total number of citations, the total number of articles published, and the percent of articles cited.
“Citations are not just a reflection of the impact that a particular piece of academic work has generated. Citations can be used to tell stories about academics, journals and fields of research, but they can also be used to distort stories”.
ResearchGate is a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators. The community was founded in May 2008. Today it has over 14 million members.
Google Scholar allows users to search for digital or physical copies of articles, whether online or in libraries. It indexes “full-text journal articles, technical reports, preprints, theses, books, and other documents, including selected Web pages that are deemed to be ‘scholarly.’” It comprises an estimated 160 million documents.
Academia.edu is a social-networking platform for academics to share research papers. You can upload your own work and follow the updates of your peers. Founded in 2008, the network currently has 59 million users and hosts 20 million documents.
ORCID (Open Researcher and Contributor ID) is a nonproprietary alphanumeric code that uniquely identifies scientific and other academic authors and contributors. It provides a persistent identity for humans, similar to the digital object identifiers (DOIs) that identify content-related entities on digital networks. The organization offers an open and independent registry intended to be the de facto standard for contributor identification in research and academic publishing.
The Scopus Author Identifier assigns a unique number to groups of documents written by the same author via an algorithm that matches authorship based on certain criteria. If a document cannot be confidently matched with an author identifier, it is grouped separately. In this case, you may see more than one entry for the same author.
more on metrics in this iMS blog
Taylor, C. (2017). Our evolving agenda. Philosophy & Social Criticism, 43(3), 274-275. doi:10.1177/0191453716680433
Neo-Kantian ethics, for its part, tends to separate issues of the good life from what it considers the central questions of justice.
The reigning neo-liberal ideology, and the order it lauds, is meant to produce a maximization of wealth, and hence of means to fulfil our goals, without asking in what ways our frenetic attempts to increase GNP run counter to some of our most important goals: solidarity, the ability to discern and pursue a truly meaningful and fulfilling life, in keeping with our endowment and inclinations. We are either induced to neglect these in favour of playing our part in increasing GNP and/or we never pause to consider questions about what kind of life is best for us and, above all, what we owe to each other in this department.
One of the central issues that arises in this context is that of democracy. After 1945, and then 1989, and then again in 2011 with the Arab Spring, we had the sense that democracy was on the march in history. But not only have many of the new departures been disappointing – Russia, Turkey, Egypt – but democracy is beginning to decay in its historic heartlands, where it has been operative for more than a century.
Inequalities are growing; in fact, democracy has been sacrificed to the supposed path of more rapid growth, as defined by neo-liberalism. This has led to a sense of impotence among non-elites, which has meant a drop in electoral participation, which in turn increases the power of money in politics, which leads to an intensified sense of impotence, and so on.
Taylor, C. (1998, October). The Dynamics of Democratic Exclusion. Journal of Democracy. p. 143.
Liberal democracy is a great philosophy of inclusion. It is rule of the people, by the people, and for the people, and today the “people” is taken to mean everybody, without the unspoken restrictions that formerly excluded peasants, women, or slaves. Contemporary liberal democracy offers the spectacle of the most inclusive politics in human history. Yet there is also something in the dynamic of democracy that pushes toward exclusion. This was allowed full rein in earlier democracies, as among the ancient republics, but today is a cause of great malaise.
The basic mode of legitimation of democratic states implies that they are founded on popular sovereignty. Now, for the people to be sovereign, it needs to form an entity and have a personality. This need can be expressed in the following way: The people is supposed to rule; this means that its members make up a decision-making unit, a body that takes joint decisions through a consensus, or at least a majority vote, of agents who are deemed equal and autonomous. It is not “democratic” for some citizens to be under the control of others. This might facilitate decision making, but it is not democratically legitimate.
In other words, a modern democratic state demands a “people” with a strong collective identity. Democracy obliges us to show much more solidarity and much more commitment to one another in our joint political project than was demanded by the hierarchical and authoritarian societies of yesteryear.
Thinkers in the civic humanist tradition, from Aristotle through Hannah Arendt, have noted that free societies require a higher level of commitment and participation than despotic or authoritarian ones. Citizens have to do for themselves, as it were, what the rulers would otherwise do for them. But this will happen only if these citizens feel a strong bond of identification with their political community, and hence with their fellow citizens.
successive waves of immigrants were perceived by many U.S. citizens of longer standing as a threat to democracy and the American way of life. This was the fate of the Irish beginning in the 1840s, and later in the century of immigrants from Southern and Eastern Europe. And of course, the long-established black population, when it was given citizen rights for the first time after the Civil War, was effectively excluded from voting through much of the Old South up until the civil rights legislation of the 1960s.
Multiculturalism and Postmodernism
For although conservatives often lump “postmodernists” and “multiculturalists” together with “liberals,” nothing could be less fair. In fact, the “postmodernists” themselves attack the unfortunate liberals with much greater gusto than they direct against the conservatives.
the two do have something in common, and so the targets partly converge. The discourse of the victim-accuser is ultimately rooted in certain philosophical sources that the postmodernists share with procedural liberalism—in particular, a commitment to negative liberty and/or a hostility to the Herder-Humboldt model of the associative bond. That is why policies framed in the language of “postmodernism” usually share certain properties with the policies of their procedural liberal enemies.
The struggle to redefine our political life in order to counteract the dangers and temptations of democratic exclusion will only intensify in the next century (My note: 21st century). There are no easy solutions, no universal formulas for success in this struggle. But at least we can try to avoid falling into the shadow or illusory ways of thinking. This means, first, that we must understand the drive to exclusion (as well as the vocation of inclusion) that democratic politics contains; and second, that we must fight free of some of the powerful philosophical illusions of our age. This essay is an attempt to push our thought a little ahead in both these directions.
Taylor, C., et al. (1994). Multiculturalism: Examining the Politics of Recognition.
Taylor, C. A. (1996). Theorizing Practice and Practicing Theory: Toward a Constructive Analysis of Scientific Rhetorics. Communication Theory (10503293), 6(4), 374-387.
Taylor, C., & Jennings, I. (2005). The Immanent Counter-Enlightenment: Christianity and Morality. South African Journal Of Philosophy, 24(3), 224-239.
a passage from Paul Bénichou’s famous work Morales du grand siècle: ‘Humankind represses its misery whenever it can; and at the same time forgets that humiliating morality by which it had condemned life, and in doing so had made a virtue of necessity.’ In this version, the latent humanist morality succeeds in establishing itself, and in so doing helps to throw the theological-ascetic code onto the scrap heap. On this view, it is as if the humanist morality had always been there, waiting for the chance to overthrow its oppressive predecessor.
The relationship was something like the following: As long as one lived in the enchanted world, where the weather-bells chimed, one felt oneself to be in a world full of threats, vulnerable to black magic in all its forms. In this world God was for most believers the source of a positive power, which was able to defeat the powers of evil. God was the chief source of counter-, or white, magic. He was the final guarantor that good would triumph in this world of manifold spirits and powers. For those completely absorbed in this world, it was practically impossible not to believe in God. Not to believe would mean devoting oneself to the devil. A small minority of truly remarkable – or perhaps truly desperate – people did indeed do this. But for the vast majority there was no question whether one believed in God or not – the positive force was as real a fact as the threats it counteracted. The question of belief was a question of trust and membership rather than one of the acceptance of particular doctrines. In this sense they were closer to the context of the gospels.
Applications for the 2018 Institute will be accepted between December 1, 2017 and January 27, 2018. Scholars accepted to the program will be notified in early March 2018.
Learning to Harness Big Data in an Academic Library
Research on Big Data per se, as well as on the importance and organization of the process of Big Data collection and analysis, is well underway. The complexity of the process comprising “Big Data,” however, deprives organizations of a ubiquitous “blueprint.” The planning, structuring, administration, and execution of the process of adopting Big Data in an organization, be it a corporate or an educational one, remain elusive. No less elusive is the adoption of Big Data practices among libraries themselves. Seeking the commonalities and differences in the adoption of Big Data practices among libraries may be a suitable start to help libraries transition to the adoption of Big Data and restructure organizational and daily activities based on Big Data decisions.
The redefinition of humanities scholarship has received major attention in higher education. The advent of digital humanities challenges aspects of academic librarianship. Data literacy is a critical need for digital humanities in academia. The March 2016 Library Juice Academy Webinar led by John Russel exemplifies the efforts to help librarians become versed in obtaining programming skills, and respectively, handling data. Those are first steps on a rather long path of building a robust infrastructure to collect, analyze, and interpret data intelligently, so it can be utilized to restructure daily and strategic activities. Since the phenomenon of Big Data is young, there is a lack of blueprints on the organization of such infrastructure. A collection and sharing of best practices is an efficient approach to establishing a feasible plan for setting a library infrastructure for collection, analysis, and implementation of Big Data.
Limitations. This research can only organize the results from the responses of librarians and research into how libraries present themselves to the world in this arena. It may be able to make some rudimentary recommendations. However, based on each library’s specific goals and tasks, further research and work will be needed.
Big Data is becoming an omnipresent term. It is widespread among different disciplines in academia (De Mauro, Greco, & Grimaldi, 2016). This leads to “inconsistency in meanings and necessity for formal definitions” (De Mauro et al, 2016, p. 122). Similarly to De Mauro et al (2016), Hashem, Yaqoob, Anuar, Mokhtar, Gani, and Ullah Khan (2015) seek standardization of definitions. The main connected “themes” of this phenomenon must be identified, and the connections to Library Science must be sought. A prerequisite for a comprehensive definition is the identification of Big Data methods. Bughin, Chui, and Manyika (2011), Chen et al (2012), and De Mauro et al (2015) single out the methods needed to complete the process of building a comprehensive definition.
In conjunction with identifying the methods, volume, velocity, and variety, as defined by Laney (2001), are the three properties of Big Data accepted across the literature. Daniel (2015) defines three stages in Big Data: collection, analysis, and visualization. According to Daniel (2015), Big Data in higher education “connotes the interpretation of a wide range of administrative and operational data” (p. 910), and according to Hilbert (2013), as cited in Daniel (2015), Big Data “delivers a cost-effective prospect to improve decision making” (p. 911).
The importance of understanding the process of Big Data analytics is well understood in academic libraries. Examples of such “administrative and operational” use for cost-effective improvement of decision making are the Finch and Flenner (2016) and Eaton (2017) case studies of the use of data visualization to assess an academic library collection and restructure the acquisition process. Sugimoto, Ding, and Thelwall (2012) call for a discussion of Big Data for libraries. According to the 2017 NMC Horizon Report, “Big Data has become a major focus of academic and research libraries due to the rapid evolution of data mining technologies and the proliferation of data sources like mobile devices and social media” (Adams Becker et al., 2017, p. 38).
Power (2014) elaborates on the complexity of Big Data in regard to decision-making and offers ideas for organizations on building a system to deal with Big Data. As explained by Boyd and Crawford (2012) and cited in De Mauro et al (2016), there is a danger of a new digital divide among organizations with different access and ability to process data. Moreover, Big Data impacts current organizational entities in their ability to reconsider their structure and organization. The complexity of institutions’ performance under the impact of Big Data is further complicated by the change of human behavior, because, arguably, Big Data affects human behavior itself (Schroeder, 2014).
De Mauro et al (2015) touch on the impact of Big Data on libraries. The reorganization of academic libraries in light of Big Data, and the handling of Big Data by libraries, is in close conjunction with the reorganization of the entire campus and the handling of Big Data by the educational institution. In addition to the disruption posed by the Big Data phenomenon, higher education is facing global changes of an economic, technological, social, and educational character. Daniel (2015) uses a chart to illustrate the complexity of these global trends. Parallel to the Big Data developments in America and Asia, the European Union is offering access to an EU open data portal (https://data.europa.eu/euodp/home ). Moreover, the Association of European Research Libraries expects, under the H2020 program, to increase “the digitization of cultural heritage, digital preservation, research data sharing, open access policies and the interoperability of research infrastructures” (Reilly, 2013).
The challenges posed by Big Data to human and social behavior (Schroeder, 2014) are no less significant than the impact of Big Data on learning. Cohen, Dolan, Dunlap, Hellerstein, and Welton (2009) propose a road map for “more conservative organizations” (p. 1492) to overcome their reservations and/or inability to handle Big Data and adopt a practical approach to its complexity. Two Chinese researchers define deep learning as the “set of machine learning techniques that learn multiple levels of representation in deep architectures” (Chen & Lin, 2014, p. 515). Deep learning requires “new ways of thinking and transformative solutions” (Chen & Lin, 2014, p. 523). Another pair of researchers from China presents a broad overview of the various societal, business, and administrative applications of Big Data, including a detailed account and definitions of the processes and tools accompanying Big Data analytics. Their American counterparts are of the same opinion when it comes to thinking “about the core principles and concepts that underline the techniques, and also the systematic thinking” (Provost & Fawcett, 2013, p. 58). De Mauro, Greco, and Grimaldi (2016), similarly to Provost and Fawcett (2013), draw attention to the urgent necessity to train new types of specialists to work with such data. As early as 2012, Davenport and Patil (2012), as cited in De Mauro et al (2016), envisioned hybrid specialists able to manage both technological knowledge and academic research. Similarly, Provost and Fawcett (2013) mention the efforts of “academic institutions scrambling to put together programs to train data scientists” (p. 51). Further, Asamoah, Sharda, Zadeh, and Kalgotra (2017) share a specific plan for the design and delivery of a big data analytics course.
At the same time, librarians working with data acknowledge the shortcomings in the profession, since librarians “are practitioners first and generally do not view usability as a primary job responsibility, usually lack the depth of research skills needed to carry out a fully valid” data-based study (Emanuel, 2013, p. 207).
Borgman (2015) devotes an entire book to data and scholarly research and goes beyond the already well-established facts regarding the importance of Big Data, the implications of Big Data, and the technical, societal, and educational impact and complications posed by Big Data. Borgman elucidates the importance of knowledge infrastructure and the necessity to understand the complexity of building such infrastructure in order to be able to take advantage of Big Data. In a similar fashion, a team of Chinese scholars draws attention to the complexity of data mining and Big Data and the necessity to approach the issue in an organized fashion (Wu, Zhu, Wu, & Ding, 2014).
Bruns (2013) shifts the conversation away from the “macro” architecture of Big Data addressed by Borgman (2015) and Wu et al (2014) and ponders the influx of unprecedented opportunities for the humanities in academia with the advent of Big Data. Does the seeming omnipresence of Big Data mean for the humanities a “railroading” into “scientificity”? How will research and publishing change with the advent of Big Data across academic disciplines?
Reyes (2015) shares her “skinny” approach to Big Data in education. She presents a comprehensive structure for educational institutions to shift “traditional” analytics to “learner-centered” analytics (p. 75) and identifies the participants in the Big Data process in the organization. The model is applicable for library use.
Being new and uncharted territory, Big Data and Big Data analytics can pose ethical issues. Willis (2013) focuses on Big Data application in education, namely the ethical questions for higher education administrators and the expectations of Big Data analytics to predict students’ success. Daries, Reich, Waldo, Young, and Whittinghill (2014) discuss rather similar issues regarding the balance between data and student privacy regulations. The privacy issues accompanying data are also discussed by Tene and Polonetsky (2012).
Privacy issues are habitually connected to security and surveillance issues. Andrejevic and Gates (2014) point out that in decision making “generated by data mining, the focus is not on particular individuals but on aggregate outcomes” (p. 195). Van Dijck (2014) goes into further detail regarding the perils posed by metadata and data to society, in particular to the privacy of citizens. Bail (2014) addresses the same issue regarding the impact of Big Data on societal issues, but underlines the leading role of cultural sociologists and their theories for the correct application of Big Data.
Library organizations have been traditional proponents of core democratic values such as protection of privacy and elucidation of related ethical questions (Miltenoff & Hauptman, 2005). In recent books about Big Data and libraries, ethical issues are an important part of the discussion (Weiss, 2018). Library blogs also discuss these issues (Harper & Oltmann, 2017). An academic library’s role is to educate its patrons about those values. Sugimoto et al (2012) reflect on the need for discussion about Big Data in Library and Information Science. They clearly draw attention to the library “tradition of organizing, managing, retrieving, collecting, describing, and preserving information” (p. 1), as well as to library and information science being “a historically interdisciplinary and collaborative field, absorbing the knowledge of multiple domains and bringing the tools, techniques, and theories” (p. 1). Sugimoto et al (2012) sought a wide discussion among the library profession regarding the implications of Big Data on the profession, no differently from the activities in other fields (e.g., Wixom, Ariyachandra, Douglas, Goul, Gupta, Iyer, Kulkarni, Mooney, Phillips-Wren, & Turetken, 2014). A current Andrew W. Mellon Foundation grant for Visualizing Digital Scholarship in Libraries seeks an opportunity to view “both macro and micro perspectives, multi-user collaboration and real-time data interaction, and a limitless number of visualization possibilities – critical capabilities for rapidly understanding today’s large data sets” (Hwangbo, 2014).
The importance of the library, with its traditional roles as described by Sugimoto et al (2012), may continue, considering the Big Data platform proposed by Wu, Wu, Khabsa, Williams, Chen, Huang, Tuarob, Choudhury, Ororbia, Mitra, and Giles (2014). Such platforms will continue to emerge and be improved, with librarians as the ultimate drivers of such platforms and as the mediators between the patrons and the data generated by such platforms.
Every library needs to find its place in the large organization and in society in regard to this very new and very powerful phenomenon called Big Data. Libraries might not have the trained staff to become a leader in the process of organizing and building the complex mechanism of this new knowledge architecture, but librarians must educate and train themselves to be worthy participants in this new establishment.
The study will be cleared by the SCSU IRB.
The survey will collect responses from the library population regarding its readiness to use Big Data and its current use of Big Data. The survey URL will be sent to (academic?) libraries around the world.
Data will be processed through SPSS. Open-ended results will be processed manually. The preliminary research design presupposes a mixed-methods approach.
The study will include closed-ended survey questions and open-ended questions. The first part of the study (closed-ended, quantitative questions) will be completed through an online survey. Participants will be asked to complete the survey using a link they receive through e-mail.
Mixed methods research was defined by Johnson and Onwuegbuzie (2004) as “the class of research where the researcher mixes or combines quantitative and qualitative research techniques, methods, approaches, concepts, or language into a single study” (Johnson & Onwuegbuzie, 2004, p. 17). Quantitative and qualitative methods can be combined, if used to complement each other, because the methods can measure different aspects of the research questions (Sale, Lohfeld, & Brazil, 2002).
Online survey of 10-15 questions, with 3-5 demographic questions and the rest regarding the use of tools.
1-2 open-ended questions at the end of the survey to probe for a follow-up mixed-methods approach (an opportunity for a qualitative study)
Data analysis techniques: survey results will be exported to SPSS and analyzed accordingly. The final survey design will determine the appropriate statistical approach.
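The descriptive pass planned for SPSS can be prototyped in a few lines of Python while the instrument is still being tested. This is a hedged sketch only: the item names ("uses_big_data", "readiness") and sample responses are hypothetical placeholders, not the final survey design.

```python
# Prototype of the planned descriptive analysis; actual processing is in SPSS.
# Item names and responses below are invented placeholders.
from collections import Counter

responses = [
    {"role": "academic", "uses_big_data": "yes", "readiness": 4},
    {"role": "academic", "uses_big_data": "no",  "readiness": 2},
    {"role": "public",   "uses_big_data": "no",  "readiness": 3},
]

# Frequency table for a closed-ended item.
use_counts = Counter(r["uses_big_data"] for r in responses)

# Mean of a Likert-style readiness item (1-5 scale).
mean_readiness = sum(r["readiness"] for r in responses) / len(responses)
```

A dry run like this helps confirm that each closed-ended item maps cleanly onto a frequency table or scale mean before the data is exported to SPSS.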
Complete literature review and identify areas of interest – two months
Prepare and test instrument (survey) – one month
IRB and other details – one month
Generate a list of potential libraries to distribute survey – one month
Contact libraries. Follow up and contact again, if necessary (low turnaround) – one month
Collect, analyze data – two months
Write out data findings – one month
Complete manuscript – one month
Proofreading and other details – one month
Significance of the work
While it has been widely acknowledged that Big Data (and its handling) is changing higher education (http://blog.stcloudstate.edu/ims?s=big+data) as well as academic libraries (http://blog.stcloudstate.edu/ims/2016/03/29/analytics-in-education/), it remains nebulous how Big Data is handled in the academic library and, respectively, how it is related to the handling of Big Data on campus. Moreover, the visualization of Big Data between units on campus remains in progress, along with any policymaking based on the analysis of such data (hence the need for comprehensive visualization).
This research will aim to gain an understanding of: a. how librarians are handling Big Data; b. how they are relating their Big Data output to the campus output of Big Data; and c. how librarians in particular and campus administration in general are tuning their practices based on the analysis.
Based on the survey returns (if there is a statistically significant return), this research might consider juxtaposing the practices of academic libraries with practices of special libraries (especially corporate libraries), public libraries, and school libraries.
Adams Becker, S., Cummins, M., Davis, A., Freeman, A., Giesinger Hall, C., Ananthanarayanan, V., … Wolfson, N. (2017). NMC Horizon Report: 2017 Library Edition.
Andrejevic, M., & Gates, K. (2014). Big Data Surveillance: Introduction. Surveillance & Society, 12(2), 185–196.
Asamoah, D. A., Sharda, R., Hassan Zadeh, A., & Kalgotra, P. (2017). Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course. Decision Sciences Journal of Innovative Education, 15(2), 161–190. https://doi.org/10.1111/dsji.12125
Cohen, J., Dolan, B., Dunlap, M., Hellerstein, J. M., & Welton, C. (2009). MAD Skills: New Analysis Practices for Big Data. Proc. VLDB Endow., 2(2), 1481–1492. https://doi.org/10.14778/1687553.1687576
Daniel, B. (2015). Big Data and analytics in higher education: Opportunities and challenges. British Journal of Educational Technology, 46(5), 904–920. https://doi.org/10.1111/bjet.12230
Daries, J. P., Reich, J., Waldo, J., Young, E. M., Whittinghill, J., Ho, A. D., … Chuang, I. (2014). Privacy, Anonymity, and Big Data in the Social Sciences. Commun. ACM, 57(9), 56–63. https://doi.org/10.1145/2643132
De Mauro, A., Greco, M., & Grimaldi, M. (2015). What is big data? A consensual definition and a review of key research topics. AIP Conference Proceedings, 1644(1), 97–104. https://doi.org/10.1063/1.4907823
Emanuel, J. (2013). Usability testing in libraries: methods, limitations, and implications. OCLC Systems & Services: International Digital Library Perspectives, 29(4), 204–217. https://doi.org/10.1108/OCLC-02-2013-0009
Hashem, I. A. T., Yaqoob, I., Anuar, N. B., Mokhtar, S., Gani, A., & Ullah Khan, S. (2015). The rise of “big data” on cloud computing: Review and open research issues. Information Systems, 47(Supplement C), 98–115. https://doi.org/10.1016/j.is.2014.07.006
Philip Chen, C. L., & Zhang, C.-Y. (2014). Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Information Sciences, 275(Supplement C), 314–347. https://doi.org/10.1016/j.ins.2014.01.015
Sugimoto, C. R., Ding, Y., & Thelwall, M. (2012). Library and information science in the big data era: Funding, projects, and future [a panel proposal]. Proceedings of the American Society for Information Science and Technology, 49(1), 1–3. https://doi.org/10.1002/meet.14504901187
Tene, O., & Polonetsky, J. (2012). Big Data for All: Privacy and User Control in the Age of Analytics. Northwestern Journal of Technology and Intellectual Property, 11, [xxvii]-274.
van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society; Newcastle upon Tyne, 12(2), 197–208.
Waller, M. A., & Fawcett, S. E. (2013). Data Science, Predictive Analytics, and Big Data: A Revolution That Will Transform Supply Chain Design and Management. Journal of Business Logistics, 34(2), 77–84. https://doi.org/10.1111/jbl.12010
Wu, Z., Wu, J., Khabsa, M., Williams, K., Chen, H. H., Huang, W., … Giles, C. L. (2014). Towards building a scholarly big data platform: Challenges, lessons and opportunities. In IEEE/ACM Joint Conference on Digital Libraries (pp. 117–126). https://doi.org/10.1109/JCDL.2014.6970157
The 21st Century Skills of the Academic Librarian in Bulgaria
Plamen Miltenoff, PhD, MLIS, http://web.stcloudstate.edu/pmiltenoff/faculty/
My experience and connections with library organizations and professionals from Moldova, Bulgaria, and Austria, as well as my 17+ years working at the St. Cloud State University library, provide me with an opportunity for comparison and, consequently, a proposal for collaborative practices with Bulgarian academic librarians.
The role of the academic librarian in the educational process is different/limited in Bulgaria compared to the United States. During a collaboration on gamifying library instruction (http://web.stcloudstate.edu/pmiltenoff/bi/), the NBU librarians demonstrated their propensity to shift their campus role close to the campus role of American librarians, yet in general the Bulgarian library guild remains traditional in their view of their responsibilities toward the educational process on campus.
This proposal aims to establish regular discussions among Bulgarian and American (and possibly other nations’) library professionals to determine a framework for librarians’ responsibilities. Are academic librarians faculty members or staff? Do they have teaching or service (or both) responsibilities? Which 20th-century academic librarian responsibilities are to be preserved? Which updated? Which 21st-century responsibilities are to be gained? What is the relationship between academic librarians and faculty? What is expected from an academic librarian to ensure learning happens? To benefit faculty’s teaching?
A comparison of academic library structures, job descriptions, models and discourses can lead to deep[er] analysis of existing structures and possible reorganizations to improve the role of the library in particular and the efficiency of the educational institution in general.
Comparisons of topics and syllabi: is multiliteracies the successor of information literacy? Is the academic library the hub for technological innovations (e.g., makerspaces, 3D printing, virtual reality/augmented reality), and if not, what is the academic library’s role in the process?
Other relevant topics / issues are expected to transpire during such discourse.
The project is organized as a collaboration of synchronous and asynchronous character over the span of one academic year. Three synchronous sessions each semester (six sessions for the entire year) will provide a forum, through e-conferencing tools (e.g., Adobe Connect, WebEx, Skype, Google Hangouts), for live discussions and planning. Weekly asynchronous dialog through social media (e.g., a blog, Facebook Group, Google Group) will provide the platform/hub/forum for the daily, detailed preparation for the monthly synchronous meetings.
The most valuable feedback in the weekly asynchronous discussions will be voted on by participants, and the three best weekly contributions will be awarded badges. At the end of the academic year, the three contributors with the largest collections of badges will be awarded the cost of registration fee, travel, and lodging for an important European conference regarding libraries and education.
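The badge mechanics described above reduce to a simple two-level tally, sketched below under stated assumptions: weekly votes award badges to the top three contributions, and year-end badge totals pick the three conference awardees. Names and vote counts are invented for illustration.

```python
# Sketch of the proposed badge tally; participant names and votes are invented.
from collections import Counter

def weekly_badges(votes, top_n=3):
    """votes: dict contributor -> vote count for one week; returns badge winners."""
    return [name for name, _ in Counter(votes).most_common(top_n)]

badges = Counter()
weeks = [
    {"Ana": 5, "Boris": 3, "Maria": 2, "Ivan": 1},
    {"Boris": 4, "Ivan": 4, "Ana": 2, "Maria": 1},
]
for week in weeks:
    badges.update(weekly_badges(week))  # one badge per weekly top-3 finish

# Year-end: the three contributors with the most badges win conference support.
awardees = [name for name, _ in badges.most_common(3)]
```

A real implementation would also need a tie-breaking rule for equal badge counts, which the proposal leaves open.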
The experience and lessons from the process will be summed up, published, and presented at local (Bulgarian), regional (Balkan), and international (European, U.S.) educational conferences and events. Similar cross-cultural experiences and studies will be researched for comparison, and future collaboration will be sought.
The use of synchronous tools will provide technological and didactic practice for academic librarians, an experience they can later apply in their service to the campus community.
The same applies to the asynchronous tools/social media.
The practice and experience of using social media for institutional purposes can help librarians develop pertinent outreach to current and incoming students (Millennials and Gen Y).
The use of social media will provide transparency and participatory governing of the process.
The lessons from such an endeavor aim to bring closer collaboration and understanding between academic librarians and campus faculty. Such collaboration can be measured, as can its impact on improved teaching and improved learning. The measurements should convince university administration to further support the continuous process of cross-cultural collaboration among academic librarians.