Overreliance on data to drive design decisions can be just as harmful as ignoring it. Data only tells one kind of story. But your project goals are often more complex than that. Goals can’t always be objectively measured.
Data-driven design is about using information gleaned from both quantitative and qualitative sources to inform how you make decisions for a set of users. Some common tools used to collect data include user surveys, A/B testing, site usage and analytics, consumer research, support logs, and discovery calls.
Designers once justified their value through their innate talent for creative ideas and artistic execution. Those whose instincts reliably produced success became rock stars.
In today’s data-driven world, that instinct is less necessary and holds less power. But make no mistake, there’s still a place for it.
Data is good at measuring things that are easy to measure. Some goals are less tangible, but that doesn’t make them less important.
Data has become an authoritarian, firing the other advisors who might have tempered its excesses. A designer’s instinct would ask, “Do people actually enjoy using this?” or “How do these tactics reflect on our reputation and brand?”
Deciding between two or three options? This is where data shines. Nothing is more decisive than an A/B test to compare potential solutions and see which one actually performs better. Make sure you’re measuring long-term value metrics and not just views and clicks.
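The comparison an A/B test performs can be sketched as a two-proportion z-test. This is a minimal illustration with made-up conversion counts, not a recommendation for any particular testing tool:

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))        # two-tailed p-value
    return z, p_value

# Hypothetical numbers: 120/2400 conversions for A, 156/2400 for B.
z, p = ab_test_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Note that a significant click-through difference says nothing by itself about the long-term value metrics the paragraph above warns you to measure.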
Sweating product quality and aesthetics? Turn to your instinct. The overall feeling of quality is a collection of hundreds of micro-decisions, maintained consistency, and execution with accuracy. Each one of those decisions isn’t worth validating on its own. Your users aren’t design experts, so their feedback will be too subjective and variable. Trust your design senses when finessing the details.
Unsure about user behavior? Use data rather than asking for opinions. When asked what they’ll do, customers will say what they think you want to hear. Instead, trust what they actually do when they think nobody’s looking.
Building brand and reputation? Data can’t easily measure this. But we all know trustworthiness is as important as clicks (and sometimes they’re opposing goals). When building long-term reputation, trust your instinct to guide you to what’s appealing, even if it sometimes contradicts short-term data trends. You have to play the long game here.
Digital humanities is born of the encounter between traditional humanities and computational methods.
P. 5. From Humanism to Humanities
While the foundations of humanistic inquiry and the liberal arts can be traced back in the West to the medieval trivium and quadrivium, the modern human sciences are rooted in the Renaissance shift from a medieval, church-dominated, theocratic worldview to a human-centered one. The gradual transformation of early humanism into the disciplines that make up the humanities today was profoundly shaped by the editorial practices involved in the recovery of the corpus of works from classical antiquity.
P. 6. The shift from humanism to the institutionally sanctioned disciplinary practices and protocols that we associate with the humanities today is best described as a gradual process of subdivision and specialization.
P. 7. Text-based disciplines and studies (classics, literature, philosophy, the history of ideas) make up, from the very start, the core of both the humanities and the great books curricula instituted in the 1920s and 1930s.
P. 10. Transmedia modes of argumentation
In the 21st century, we communicate in media significantly more varied, extensible, and multiplicative than linear text. From scalable databases to information visualizations, from video lectures to multi-user virtual platforms, serious content and rigorous argumentation take shape across multiple platforms and media. The best digital humanities pedagogy and research projects train students both in “reading” and “writing” this emergent rhetoric and in understanding how these reshape and remodel humanistic knowledge. This means developing critically informed literacies expansive enough to include graphic design, visual narrative, time-based media, and the development of interfaces (rather than the rote acceptance of them as off-the-shelf products).
P. 11. The visual becomes ever more fundamental to the digital humanities, in ways that complement, enhance, and sometimes are in tension with the textual.
There is no either/or, no simple interchangeability between language and the visual, no strict subordination of the one to the other. Words are themselves visual, but other kinds of visual constructs do different things. The question is how to use each to its best effect and to devise meaningful “intertwinglings,” to use Theodor Nelson’s ludic neologism.
P. 11. The suite of expressive forms now encompasses the use of sound, motion graphics, animation, screen capture, video, audio, and the appropriation and remixing of code that underlies game engines. This expanded range of communicative tools requires those engaged in digital humanities work to familiarize themselves with issues, discussions, and debates in design fields, especially communication and interaction design. Like their print predecessors, the formal conventions of screen environments can become naturalized all too quickly, with the result that the thinking that informed their design goes unperceived.
For digital humanists, design is a creative practice harnessing cultural, social, economic, and technological constraints in order to bring systems and objects into the world. Design in dialogue with research is simply a technique, but when used to pose and frame questions about knowledge, design becomes an intellectual method. Digital humanities is a production-based endeavor in which theoretical issues get tested in the design of implementations, and implementations are loci of theoretical reflection and elaboration.
Digital humanists have much to learn from communication and media design about how to juxtapose and integrate words and images, create hierarchies of reading, forge pathways of understanding, deploy grids and templates to best effect, and develop navigational schemata that guide and produce meaningful interactions.
P. 15. The field of digital humanities may see the emergence of polymaths who can “do it all”: who can research, write, shoot, edit, code, model, design, network, and dialogue with users. But there is also ample room for specialization and, particularly, for collaboration.
P. 16. Computational activities in digital humanities.
The foundational layer, computation, relies on principles that are, on the surface, at odds with humanistic methods.
P. 17. The second level involves processing in ways that conform to computational capacities, and these were explored in the first generation of digital scholarship: stylometrics, concordance development, and indexing.
Curation, analysis, editing, modeling.
Curation, analysis, editing, and modeling comprise fundamental activities at the core of digital humanities. Involving archives, collections, repositories, and other aggregations of materials, curation is the selection and organization of materials in an interpretive framework, argument, or exhibit.
P. 18. Analysis refers to the processing of text or data: statistical and quantitative methods of analysis have brought close readings of texts (stylometrics and genre analysis, correlation, comparisons of versions for author attribution or usage patterns) into dialogue with distant reading (the crunching of large quantities of information across a corpus of textual data or its metadata).
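The “crunching” behind distant reading can be sketched in a few lines: nothing is read, only counted. The corpus snippets below are placeholders for real documents:

```python
import re
from collections import Counter

def word_frequencies(texts):
    """Aggregate word counts across a corpus -- the simplest form
    of distant reading, operating on the corpus as a whole."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    return counts

corpus = [
    "Call me Ishmael. Some years ago...",
    "It was the best of times, it was the worst of times.",
]
print(word_frequencies(corpus).most_common(3))
```

Real stylometric work would go further (normalizing by text length, comparing distributions of function words across authors), but the underlying move is the same: replacing reading with counting.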
Editing has been revived with the advent of digital media and the web, and it continues to be an integral activity in textual as well as time-based formats.
P. 18. Modeling highlights the notion of content models: shapes of argument expressed in information structures and their design. A digital project is always an expression of assumptions about knowledge: usually domain-specific knowledge given explicit form by the model in which it is designed.
P. 19. Each of these areas of activity (curation, analysis, editing, and modeling) is supported by the basic building blocks of digital activity. But they also depend upon networks and infrastructure that are cultural and institutional as well as technical. Servers, software, and systems administration are key elements of any project design.
P. 30. Digital media are not more “evolved” than print media, nor are books obsolete; but the multiplicity of media and the very processes of mediation and remediation in the formation of cultural knowledge and humanistic inquiry require close attention. Toggling between distant and close, macro and micro, and surface and depth becomes the norm. Here, we focus on the importance of visualization to the digital humanities before moving on to other, though often related, genres and methods such as locative investigation, thick mapping, animated archives, database documentaries, platform studies, and emerging practices like cultural analytics, data mining, and humanities gaming.
P. 35. Fluid textuality refers to the mutability of texts in the form of variants and versions, whether these are produced through authorial changes, editing, transcription, translation, or print production.
Cultural analytics, aggregation, and data mining.
The field of cultural analytics has emerged over the past few years, utilizing tools of high-end computational analysis and data visualization to dissect large-scale cultural data sets. Cultural analytics does not analyze cultural artifacts directly, but operates on the level of digital models of these materials in aggregate. Again, the point is not to pit “close” hermeneutic reading against “distant” data mapping, but rather to appreciate the synergistic possibilities and tensions that exist between a hyper-localized, deep analysis and a macrocosmic view.
Data mining is a term that covers a host of techniques for analyzing digital material by “parameterizing” some feature of information and extracting it. This means that any element of a file or collection of files that can be given explicit specifications, or parameters, can be extracted from those files for analysis.
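“Parameterizing” a feature can be made concrete with a small sketch: here the chosen parameter is mean sentence length in words, and the document names and texts are hypothetical stand-ins for files read from disk:

```python
import re

def mean_sentence_length(text):
    """One 'parameterized' feature: mean sentence length in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    return sum(len(s.split()) for s in sentences) / len(sentences)

# Extract the parameter across a (hypothetical) collection of documents:
collection = {
    "doc_a.txt": "Short sentences. Very short. Yes.",
    "doc_b.txt": "This author writes much longer and more meandering sentences indeed.",
}
for name, text in collection.items():
    print(name, mean_sentence_length(text))
```

Any feature that can be specified this explicitly (word length, punctuation density, named entities) can be extracted the same way and compared across a collection.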
Understanding the rhetoric of graphics is another essential skill, therefore, in working at a scale where individual objects are lost in the mass of processed information and data. To date, much humanities data mining has merely involved counting. Much more sophisticated statistical methods and uses of probability will be needed for humanists to absorb the lessons of the social sciences into their methods.
P. 42. Visualization and data design
Currently, visualization in the humanities uses techniques drawn largely from the social sciences, business applications, and the natural sciences, all of which require self-conscious criticality in their adoption. Such visual displays, including graphs and charts, may present themselves as objective or even unmediated views of reality, rather than as rhetorical constructs.
Warwick, C., Terras, M., & Nyhan, J. (2012). Digital humanities in practice. London: Facet Publishing in association with UCL Centre for Digital Humanities.
Summary: This short paper lays out an attempt to measure how much activity from Russian state-operated accounts released in the dataset made available by Twitter in October 2018 was targeted at the United Kingdom. Finding UK-related Tweets is not an easy task. By applying a combination of geographic inference, keyword analysis and classification by algorithm, we identified UK-related Tweets sent by these accounts and subjected them to further qualitative and quantitative analytic techniques.
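The keyword-analysis step can be illustrated with a crude filter. The keyword list below is invented for illustration; it is not the report's actual list, and the real study combined keywords with geographic inference and algorithmic classification:

```python
# Hypothetical keyword list -- NOT the one used in the report.
UK_KEYWORDS = {"uk", "britain", "british", "brexit", "london", "westminster"}

def is_uk_related(tweet_text):
    """Flag a tweet as UK-related if any token matches the keyword list.
    A deliberately crude first-pass classifier."""
    tokens = {t.strip("#@.,!?'\"").lower() for t in tweet_text.split()}
    return bool(tokens & UK_KEYWORDS)

tweets = ["Big news on #Brexit today", "Election rally in Ohio"]
print([t for t in tweets if is_uk_related(t)])
```

A filter like this over-matches (a tweet mentioning "London, Ontario") and under-matches (UK topics with no keyword), which is exactly why the report layered several methods.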
There were three phases in Russian influence operations: under-the-radar account building, minor Brexit vote visibility, and larger-scale visibility during the London terror attacks.
Russian influence operations linked to the UK were most visible when discussing Islam. Tweets discussing Islam over the period of terror attacks between March and June 2017 were retweeted 25 times more often than their other messages.
The most widely-followed and visible troll account, @TEN_GOP, shared 109 Tweets related to the UK. Of these, 60 percent were related to Islam.
The topology of tweet activity underlines the vulnerability of social media users to disinformation in the wake of a tragedy or outrage.
Focus on the UK was a minor part of wider influence operations in this data. Of the nine million Tweets released by Twitter, 3.1 million were in English (34 percent). Of these 3.1 million, we estimate 83 thousand were in some way linked to the UK (2.7%). Those Tweets were shared 222 thousand times. It is plausible we are therefore seeing how the UK was caught up in Russian operations against the US.
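The proportions quoted above are easy to reproduce from the report's own figures:

```python
# Figures as stated in the report.
total_tweets = 9_000_000
english_tweets = 3_100_000
uk_linked = 83_000

english_share = english_tweets / total_tweets   # share of all tweets in English
uk_share = uk_linked / english_tweets           # UK-linked share of English tweets

print(f"English share: {english_share:.0%}")    # matches the reported 34%
print(f"UK-linked share: {uk_share:.1%}")       # matches the reported 2.7%
```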
Influence operations captured in this data show attempts to falsely amplify other news sources and to take part in conversations around Islam, and rarely show attempts to spread ‘fake news’ or influence at an electoral level.
On 17 October 2018, Twitter released data about 9 million tweets from 3,841 blocked accounts affiliated with the Internet Research Agency (IRA) – a Russian organisation founded in 2013 and based in St Petersburg, accused of using social media platforms to push pro-Kremlin propaganda and influence nation states beyond their borders, as well as being tasked with spreading pro-Kremlin messaging in Russia. It is one of the first major datasets linked to state-operated accounts engaging in influence operations released by a social media platform.
This report outlines the ways in which accounts linked to the Russian Internet Research Agency (IRA) carried out influence operations on social media and the ways their operations intersected with the UK. The UK plays a reasonably small part in the wider context of this data. We see two possible explanations: either influence operations were primarily targeted at the US and British Twitter users were impacted as collateral, or this dataset is limited to US-focused operations where events in the UK were highlighted in an attempt to impact the US public, rather than a concerted effort against the UK. It is plausible that such efforts also existed but are not reflected in this dataset. Nevertheless, the data offers a highly useful window into how Russian influence operations are carried out, as well as highlighting the moments when we might be most vulnerable to them.
Between 2011 and 2016, these state-operated accounts were camouflaged. Through manual and automated methods, they were able to quietly build up the trappings of an active and well-followed Twitter account before eventually pivoting into attempts to influence the wider Twitter ecosystem. Their methods included engaging in unrelated and innocuous topics of conversation, often through automated methods, and through sharing and engaging with other, more mainstream sources of news.
Although this data shows levels of electoral and party-political influence operations to be relatively low, the day of the Brexit referendum results showed how messaging originating from Russian state-controlled accounts might come to be visible: on June 24th 2016, we believe UK Twitter users discussing the Brexit vote would have encountered messages originating from these accounts.
As early as 2014, however, influence operations began taking part in conversations around Islam, and these accounts came to the fore during the three months of terror attacks that took place between March and June 2017.
In the immediate wake of these attacks, messages related to Islam and circulated by Russian state-operated Twitter accounts were widely shared, and would likely have been visible in the UK. The dataset released by Twitter begins to answer some questions about attempts by a foreign state to interfere in British affairs online. It is notable that overt political or electoral interference is poorly represented in this dataset: rather, we see attempts at stirring societal division, particularly around Islam in the UK, as the messages that resonated the most over the period. What is perhaps most interesting about this moment is its portrayal of when we as social media users are most vulnerable to the kinds of messages circulated by those looking to influence us. In the immediate aftermath of terror attacks, the data suggests, social media users were more receptive to this kind of messaging than at any other time.
It is clear that hostile states have identified the growth of online news and social media as a weak spot, and that significant effort has gone into attempting to exploit new media to influence its users. Understanding the ways in which these platforms have been used to spread division is an important first step to fighting it. Nevertheless, it is clear that this dataset provides just one window into the ways in which foreign states have attempted to use online platforms as part of wider information warfare and influence campaigns. We hope that other platforms will follow Twitter’s lead and release similar datasets and encourage their users to proactively tackle those who would abuse their platforms.
“Netnography” has been developed for online community researchers. It combines “net” with “ethnography”: it is based on traditional ethnography, combined with qualitative analysis of the online interactive content created by virtual community members. The aim of netnographic research is to study the subculture, interactive processes, and characteristics of collective behavior of online communities (Kozinets 2009). With the development of Internet technology, the web-based method has become more convenient and cost-effective for data collection. Members of virtual groups create a large number of interactive texts, pictures, network expressions, and other original information over time, which provides an extremely rich database for researchers. Moreover, from the data-collection point of view, this online observation method does not interfere with the research process, which is an advantage over questionnaires and quantitative modeling (Moisander and Valtonen 2006). Additionally, Kozinets (2009) also pointed out that netnography emphasizes the research background: observers not only focus on the text of communications but also need to pay attention to the characteristics of language, history, meaning, and communication types, and even parse fonts, symbols, images, and photo data. These objects of study are significant in social communication and are called “cultural artifacts.” On the other hand, netnography is based on traditional ethnography as a methodology; therefore it inherits the research processes of the ethnographic method. Kozinets (2009) reinterpreted these procedures for netnography as follows: first, determine the research target and understand its cultural characteristics; second, collect and analyze information; third, ensure the credibility of interpretation; fourth, pay attention to research ethics; and lastly, obtain respondents’ feedback. To adapt my research to these guidelines, I structure my research process as: 1. To focus on Plymouth Chinese overseas students and to explain Chinese guanxi; 2. To collect and analyze data through the existing WeChat group created by the Plymouth Chinese Students and Scholars Association (CSSA); 3. To confirm the identity of key influencers in this virtual group; 4. To obtain as much feedback from respondents as possible.
Ungerer, L. M. (2016). Digital Curation as a Core Competency in Current Learning and Literacy: A Higher Education Perspective. The International Review of Research in Open and Distributed Learning, 17(5). https://doi.org/10.19173/irrodl.v17i5.2566
Dunaway (2011) suggests that learning landscapes in a digital age are networked, social, and technological. Since people commonly create and share information by collecting, filtering, and customizing digital content, educators should provide students opportunities to master these skills (Mills, 2013). In enhancing critical thinking, we have to investigate pedagogical models that consider students’ digital realities (Mihailidis & Cohen, 2013). November (as cited in Sharma & Deschaine, 2016), however, warns that although the Web fulfils a pivotal role in societal media, students often are not guided on how to critically deal with the information that they access on the Web. Sharma and Deschaine (2016) further point out the potential for personalizing teaching and incorporating authentic material when educators themselves digitally curate resources by means of Web 2.0 tools.
p. 24. Communities of practice. Lave and Wenger’s (as cited in Weller, 2011) concept of situated learning and Wenger’s (as cited in Weller, 2011) idea of communities of practice highlight the importance of apprenticeship and the social role in learning.
Criteria to publish a paper
Originality: Does the paper contain new and significant information adequate to justify publication?
Relationship to Literature: Does the paper demonstrate an adequate understanding of the relevant literature in the field and cite an appropriate range of literature sources? Is any significant work ignored?
Methodology: Is the paper’s argument built on an appropriate base of theory, concepts, or other ideas? Has the research or equivalent intellectual work on which the paper is based been well designed? Are the methods employed appropriate?
Results: Are results presented clearly and analyzed appropriately? Do the conclusions adequately tie together the other elements of the paper?
Implications for research, practice and/or society: Does the paper identify clearly any implications for research, practice and/or society? Does the paper bridge the gap between theory and practice? How can the research be used in practice (economic and commercial impact), in teaching, to influence public policy, in research (contributing to the body of knowledge)? What is the impact upon society (influencing public attitudes, affecting quality of life)? Are these implications consistent with the findings and conclusions of the paper?
Quality of Communication: Does the paper clearly express its case, measured against the technical language of the field and the expected knowledge of the journal’s readership? Has attention been paid to the clarity of expression and readability, such as sentence structure, jargon use, acronyms, etc.?
Stanton, K. V., & Liew, C. L. (2011). Open Access Theses in Institutional Repositories: An Exploratory Study of the Perceptions of Doctoral Students. Information Research: An International Electronic Journal, 16(4).
We examine doctoral students’ awareness of and attitudes to open access forms of publication. Levels of awareness of open access and the concept of institutional repositories, publishing behaviour and perceptions of benefits and risks of open access publishing were explored. Method: Qualitative and quantitative data were collected through interviews with eight doctoral students enrolled in a range of disciplines in a New Zealand university and a self-completion Web survey of 251 students. Analysis: Interview data were analysed thematically, then evaluated against a theoretical framework. The interview data were then used to inform the design of the survey tool. Survey responses were analysed as a single set, then by discipline, using SurveyMonkey’s online toolkit and Excel. Results: While awareness of open access and repository archiving is still low, the majority of interview and survey respondents were found to be supportive of the concept of open access. The perceived benefits of enhanced exposure and potential for sharing outweigh the perceived risks. The majority of respondents were supportive of an existing mandatory thesis submission policy. Conclusions: Low awareness of the university repository remains an issue, and could be addressed by further investigating the effectiveness of different communication channels for promotion.
the researchers use the qualitative approach: by interviewing participants and analyzing their responses thematically, they build the survey.
They then administer the survey (the quantitative approach).
How do you intend to use a mixed method? Please share
Metaphors: A Problem Statement is like… (metaphor: a novel or poetic linguistic expression where one or more words for a concept are used outside normal conventional meaning to express a similar concept; Aristotle)
- The DNA of the research
- A snapshot of the research
- The foundation of the research
- The heart of the research
- A “taste” of the research
- A blueprint for the study
A digital object identifier (DOI) is a unique alphanumeric string assigned by a registration agency (the International DOI Foundation) to identify content and provide a persistent link to its location on the Internet. The publisher assigns a DOI when your article is published and made available electronically.
Why do we need it?
2010 changes to APA for electronic materials: digital object identifier (DOI). If a DOI is available, you no longer include a URL. Example: Author, A. A. (date). Title of article. Title of Journal, volume(number), page numbers. doi:xx.xxxxxxx
According to Sugimoto et al. (2016), the use of social media platforms by researchers is high, ranging from 75 to 80% in large-scale surveys (Rowlands et al., 2011; Tenopir et al., 2013; Van Eperen & Marincola, 2011).
There is one more reason: as much as you may want to dwell on the fact that you are practitioners and research is not the most important part of your job, to a great degree you may also be judged by the scientific output of your office and/or institution.
In that sense, both social media and altmetrics might suddenly become extremely important to understand and apply.
In short, altmetrics (alternative metrics) measure the impact your scientific output has on the community. You and your teachers present, publish, and create work which might not be formally presented or published, but which may be widely reflected through, e.g., social media, and thus have an impact on the community.
How such impact is measured, if it is measured at all, can greatly influence the flow of money to your institution.
Thelwall, M., & Wilson, P. (2016). Mendeley readership altmetrics for medical articles: An analysis of 45 fields. Journal of the Association for Information Science and Technology, 67(8), 1962–1972. https://doi.org/10.1002/asi.23501
Todd Tetzlaff is using Mendeley and he might be the only one to benefit … 🙂
Here is some food for thought from the article above:
Doctoral students and junior researchers are the largest reader group in Mendeley (Haustein & Larivière, 2014; Jeng et al., 2015; Zahedi, Costas, & Wouters, 2014a).
Studies have also provided evidence of high rates of blogging among certain subpopulations: for example, approximately one-third of German university staff (Pscheida et al., 2013) and one-fifth of UK doctoral students use blogs (Carpenter et al., 2012).
Social data sharing platforms provide an infrastructure to share various types of scholarly objects, including datasets, software code, figures, presentation slides and videos, and for users to interact with these objects (e.g., comment on, favorite, like, and reuse). Platforms such as Figshare and SlideShare disseminate scholars’ various types of research outputs such as datasets, figures, infographics, documents, videos, posters, or presentation slides (Enis, 2013) and display views, likes, and shares by other users (Mas-Bleda et al., 2014).
Frequently mentioned social platforms in scholarly communication research include research-specific tools such as Mendeley, Zotero, CiteULike, BibSonomy, and Connotea (now defunct) as well as general tools such as Delicious and Digg (Hammond, Hannay, Lund, & Scott, 2005; Hull, Pettifer, & Kell, 2008; Priem & Hemminger, 2010; Reher & Haustein, 2010).
“The focus group interviews were analysed based on the principles of interpretative phenomenology”
If you are not podcast fans, I understand. The link above is a pain in the behind to make work if you are not familiar with using podcasts.
Here is an easier way to find it:
1. Open your cell phone and find the podcast icon, which is pre-installed but which you might not have ever used [yet].
2. In the app, use the search option and type “stuff you should know.”
3. The podcast will pop up. Scroll to find “How the Scientific Method Works,” or search for it within the show.
Once you can play it on the phone, you have to find time to listen to it.
I listen to podcasts when I have to do unpleasant chores such as: 1. walking to work; 2. washing the dishes; 3. flying long hours (very rarely); 4. driving in the car.
There are a bunch of other situations when you may be strapped for time, and instead of feeling disgruntled and stressed, you can deliver mental [junk] food to your brain.
Earbuds help me: 1. forget the unpleasant task; 2. utilize the time; 3. learn cool stuff.
Here are the podcasts I subscribe to, besides “Stuff You Should Know”:
TED Radio Hour
TED Talks Education
NPR Fresh Air
and a bunch of others; if I don’t listen to one for a year, I erase it, and if something in the top charts piques my interest, I give it a try.
If I did not manage to convince you to try podcasts, that is totally fine; do not feel obligated.
However, you can listen to this particular podcast on your computer if you don’t want to download it to your phone.
It is a one-hour show by two geeks who are trying to make funny (and they do) a dry matter such as quantitative vs. qualitative, which you want to internalize:
1. Around minute 12, they talk about inductive versus deductive reasoning to introduce qualitative versus quantitative approaches. It is good to listen to their musings, since your dissertation goes through inductive and deductive processes, and understanding them can help you better control your dissertation writing.
2. The scientific method, hypotheses, etc. (around min 17).
While this is not a Ph.D. but an Ed.D., and we do not delve into the philosophy of science, the more you know about this process, the better control you have over your dissertation.
3. Methods and how you prove your case (Chapter 3) are discussed around min 35.
4. Dependent and independent variables, and how you do your research in general (around min 45).
In short, listen and please share your thoughts below. You do not have to be kind to this source offering. Actually, be as critical as possible so you can help me decide whether I should offer it to the next cohort. Thank you in advance for your feedback.
Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again”, they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and inured to real time monitoring as they can be – are in principle predictable and interrogable, these ones are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call these “frankenalgos” – though Mary Shelley couldn’t have made this up.
Twenty years ago, George Dyson anticipated much of what is happening today in his classic book Darwin Among the Machines. The problem, he tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable, and that what is predictable can be controlled. Both assumptions turn out to be wrong.
“It’s proceeding on its own, in little bits and pieces,” he says. “What I was obsessed with 20 years ago that has completely taken over the world today are multicellular, metazoan digital organisms, the same way we see in biology, where you have all these pieces of code running on people’s iPhones, and collectively it acts like one multicellular organism.
“There’s this old law called Ashby’s law that says a control system has to be as complex as the system it’s controlling, and we’re running into that at full speed now, with this huge push to build self-driving cars where the software has to have a complete model of everything, and almost by definition we’re not going to understand it. Because any model that we understand is gonna do the thing like run into a fire truck ’cause we forgot to put in the fire truck.”
Walsh believes this makes it more, not less, important that the public learn about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect. When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting: “I would suggest the problem is that algorithm now means any large, complex decision making software system and the larger environment in which it is embedded, which makes them even more unpredictable.” A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in tech, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs. Where there are choices to be made, that’s where ethics comes in.
Our existing system of tort law, which requires proof of intention or negligence, will need to be rethought. A dog is not held legally responsible for biting you; its owner might be, but only if the dog’s action is thought foreseeable.
As we wait for a technological answer to the problem of soaring algorithmic entanglement, there are precautions we can take. Paul Wilmott, a British expert in quantitative analysis and vocal critic of high frequency trading on the stock market, wryly suggests “learning to shoot, make jam and knit”.
The venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine’s Hippocratic oath, to instruct computing professionals to do no harm and consider the wider impacts of their work.
Session Title: Measuring Learning Outcomes of New Library Initiatives Coordinator: Professor Plamen Miltenoff, Ph.D., MLIS, St. Cloud State University, USA Contact: email@example.com Scope & rationale: The advent of new technologies, such as virtual/augmented/mixed reality, and new pedagogical concepts, such as gaming and gamification, steers academic libraries into uncharted territory. There is not yet sufficient compiled research, and hence proof, to justify financial and workforce investment in such endeavors. On the other hand, dwindling resources for education press administrations to demand justification for new endeavors. As has been established already, technology does not teach; teachers do, and a growing body of literature questions the impact of educational technology on educational outcomes. This session seeks to bring together presentations and discussion of both qualitative and quantitative research related to new pedagogical and technological endeavors in academic libraries as part of education on campus. By experimenting with new technologies such as 360-degree video and new pedagogical approaches such as gaming and gamification, does the library improve learning? By experimenting with new technologies and pedagogical approaches, does the library help campus faculty adopt these methods and improve their teaching? How can results be measured and demonstrated?
Publisher / Organization: The University of Illinois at Chicago- University Library
Year founded: 1996
Description: First Monday is among the very first open access journals in the EdTech field. The journal’s subject matter encompasses the full range of Internet issues, including educational technologies, social media and web search. Contributors are urged via author guidelines to use simple explanations and less complex sentences and to be mindful that a large proportion of their readers are not part of academia and do not have English as a first language.
Academic Management: University of Catalonia (UOC)
Year founded: 2004
Description: This journal aims to: provide a vehicle for scholarly presentation and exchange of information between professionals, researchers and practitioners in the technology-enhanced education field; contribute to the advancement of scientific knowledge regarding the use of technology and computers in higher education; and inform readers about the latest developments in the application of information technologies (ITs) in higher education learning, training, research and management.
Description: Online Learning promotes the development and dissemination of new knowledge at the intersection of pedagogy, emerging technology, policy, and practice in online environments. The journal has been published for over 20 years as the Journal of Asynchronous Learning Networks (JALN) and recently merged with the Journal of Online Learning and Teaching (JOLT).
Publisher / Organization: International Forum of Educational Technology & Society
Description: Educational Technology & Society seeks academic articles on the issues affecting the developers of educational systems and educators who implement and manage these systems. Articles should discuss the perspectives of both communities – the programmers and the instructors. The journal is currently still accepting submissions for ongoing special issues, but will cease publication in the future, as the editors feel that the field of EdTech is saturated with high-quality publications.
Description: The Australasian Journal of Educational Technology aims to promote research and scholarship on the integration of technology in tertiary education, promote effective practice, and inform policy. The goal is to advance understanding of educational technology in post-school education settings, including higher and further education, lifelong learning, and training.
Description: The Internet and Higher Education is devoted to addressing contemporary issues and future developments related to online learning, teaching, and administration on the Internet in post-secondary settings. Articles should significantly address innovative deployments of Internet technology in instruction and report on research to demonstrate the effects of information technology on instruction in various contexts in higher education.
Publisher / Organization: British Educational Research Association (BERA)
Year founded: 1970
Description: The journal publishes theoretical perspectives, methodological developments and empirical research that demonstrate whether and how applications of instructional/educational technology systems, networks, tools and resources lead to improvements in formal and non-formal education at all levels, from early years through to higher, technical and vocational education, professional development and corporate training.
Description: Computers & Education aims to increase knowledge and understanding of the ways in which digital technology can enhance education through the publication of high-quality research that extends theory and practice.
Description: TechTrends targets professionals in the educational communication and technology field. It provides a vehicle that fosters the exchange of important and current information among professional practitioners. Among the topics addressed are the management of media and programs, the application of educational technology principles and techniques to instructional programs, and corporate and military training.
Description: Advances in technology and the growth of e-learning provide educators and trainers with unique opportunities to enhance learning and teaching in corporate, government, healthcare, and higher education. IJEL serves as a forum to facilitate the international exchange of information on the current research, development, and practice of e-learning in these sectors.
Led by an Editorial Review Board of leaders in the field of e-Learning, the Journal is designed for the following audiences: researchers, developers, and practitioners in corporate, government, healthcare, and higher education. IJEL is a peer-reviewed journal.
Description: JCMST is a highly respected scholarly journal which offers an in-depth forum for the interchange of information in the fields of science, mathematics, and computer science. JCMST is the only periodical devoted specifically to using information technology in the teaching of mathematics and science.
Just as researchers build reputations over time that can be depicted (in part) through quantitative measures such as the h-index and i10-index, journals are also compared based on the number of citations they receive.
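For readers unfamiliar with these author-level measures, they are simple to compute from a list of per-paper citation counts. The sketch below is illustrative only; the function names and sample numbers are invented, not drawn from any particular database.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the rank-th best paper still has >= rank citations
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations (Google Scholar's i10)."""
    return sum(1 for c in citations if c >= 10)
```

For example, an author with papers cited 10, 8, 5, 4 and 3 times has an h-index of 4 (four papers with at least four citations each) and an i10-index of 1.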
Description: The Journal of Interactive Learning Research (JILR) publishes papers related to the underlying theory, design, implementation, effectiveness, and impact on education and training of the following interactive learning environments: authoring systems, cognitive tools for learning, computer-assisted language learning, computer-based assessment systems, computer-based training, computer-mediated communications, computer-supported collaborative learning, distributed learning environments, electronic performance support systems, interactive learning environments, interactive multimedia systems, interactive simulations and games, intelligent agents on the Internet, intelligent tutoring systems, microworlds, and virtual reality based learning systems.
Description: JEMH is designed to provide a multi-disciplinary forum to present and discuss research, development and applications of multimedia and hypermedia in education. It contributes to the advancement of the theory and practice of learning and teaching in environments that integrate images, sound, text, and data.
Publisher / Organization: Society for Information Technology and Teacher Education (SITE)
Year founded: 1997
Description: JTATE serves as a forum for the exchange of knowledge about the use of information technology in teacher education. Journal content covers preservice and inservice teacher education, graduate programs in areas such as curriculum and instruction, educational administration, staff development, instructional technology, and educational computing.
Publisher / Organization: Association for the Advancement of Computing in Education (AACE)
Year founded: 2015
Description: The Journal of Online Learning Research (JOLR) is a peer-reviewed, international journal devoted to the theoretical, empirical, and pragmatic understanding of technologies and their impact on pedagogy and policy in primary and secondary (K-12) online and blended environments. JOLR focuses on manuscripts that address online learning, catering particularly to educators who research, practice, design, and/or administer K-12 schooling in online settings. The journal also serves educators who blend online learning tools and strategies into their face-to-face classrooms.
SCImago Journal Rank (SJR indicator) measures the influence of journals based on the number of citations the articles in the journal receive and the importance or prestige of the journals where such citations come from. The SJR indicator is a free journal metric which uses an algorithm similar to PageRank and provides an open access alternative to the journal impact factor in the Web of Science Journal Citation Report. The portal draws from the information contained in the Scopus database (Elsevier B.V.).
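To give a feel for what a PageRank-like metric does, here is a toy power-iteration sketch over an invented three-journal citation graph. This is only a rough illustration of the general technique, not SCImago's actual SJR algorithm, and the journal names and link structure are made up.

```python
# links[j] lists the journals that journal j cites (invented data).
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

def pagerank(links, damping=0.85, iters=50):
    """Basic PageRank by power iteration: a journal is important
    if it is cited by other important journals."""
    n = len(links)
    rank = {j: 1.0 / n for j in links}
    for _ in range(iters):
        new = {j: (1 - damping) / n for j in links}
        for j, cited in links.items():
            share = rank[j] / len(cited)  # split j's rank among citations
            for target in cited:
                new[target] += damping * share
        rank = new
    return rank
```

Here journal C, cited by both A and B, ends up with the highest score, which captures the “prestige weighting” idea: where a citation comes from matters, not just how many there are.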
Introduced by Elsevier in 2004, Scopus is an abstract and citation database that covers nearly 18,000 titles from more than 5,000 publishers. It offers journal metrics that go beyond just journals to include most serial titles, including supplements, special issues and conference proceedings. Scopus offers useful information such as the total number of citations, the total number of articles published, and the percent of articles cited.
“Citations are not just a reflection of the impact that a particular piece of academic work has generated. Citations can be used to tell stories about academics, journals and fields of research, but they can also be used to distort stories”.
ResearchGate is a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators. The community was founded in May 2008. Today it has over 14 million members.
Google Scholar allows users to search for digital or physical copies of articles, whether online or in libraries. It indexes “full-text journal articles, technical reports, preprints, theses, books, and other documents, including selected Web pages that are deemed to be ‘scholarly’”. It comprises an estimated 160 million documents.
Academia.edu is a social-networking platform for academics to share research papers. You can upload your own work and follow the updates of your peers. Founded in 2008, the network currently has 59 million users and hosts 20 million documents.
ORCID (Open Researcher and Contributor ID) is a nonproprietary alphanumeric code used to uniquely identify scientific and other academic authors and contributors. It provides a persistent identity for people, analogous to the digital object identifiers (DOIs) that identify content-related entities on digital networks. The organization offers an open and independent registry intended to be the de facto standard for contributor identification in research and academic publishing.
The Scopus Author Identifier assigns a unique number to groups of documents written by the same author via an algorithm that matches authorship based on certain criteria. If a document cannot be confidently matched with an author identifier, it is grouped separately. In this case, you may see more than one entry for the same author.
More on metrics in this iMS blog.
Writing clear and neutral survey questions is much more difficult than it might seem. We spend *a lot* of time thinking about the phrasing and ordering of our survey questions. The second video in our Methods 101 series tackles the many ways writing survey questions can go wrong, and the steps you can take to avoid these pitfalls.
The term “digital humanities” can refer to research and instruction that is about information technology or that uses IT. By applying technologies in new ways, the tools and methodologies of digital humanities open new avenues of inquiry and scholarly production. Digital humanities applies computational capabilities to humanistic questions, offering new pathways for scholars to conduct research and to create and publish scholarship. Digital humanities provides promising new channels for learners and will continue to influence the ways in which we think about and evolve technology toward better and more humanistic ends.
As defined by Johanna Drucker and colleagues at UCLA, the digital humanities is “work at the intersection of digital technology and humanities disciplines.” An EDUCAUSE/CNI working group framed the digital humanities as “the application and/or development of digital tools and resources to enable researchers to address questions and perform new types of analyses in the humanities disciplines,” and the NEH Office of Digital Humanities says digital humanities “explore how to harness new technology for humanities research as well as those that study digital culture from a humanistic perspective.” Beyond blending the digital with the humanities, there is an intentionality about combining the two that defines it.
Digital humanities can include
creating digital texts or data sets;
cleaning, organizing, and tagging those data sets;
applying computer-based methodologies to analyze them;
and making claims and creating visualizations that explain new findings from those analyses.
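To make the “applying computer-based methodologies” step above concrete, here is a deliberately minimal sketch of one such analysis, a word-frequency count over a digitized text. The function name and sample sentence are hypothetical; real digital humanities projects use much richer toolchains for tokenization, tagging, and visualization.

```python
import re
from collections import Counter

def word_frequencies(text, top=5):
    """Tokenize a text and return its most common words --
    a minimal example of a computational reading of a source."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top)
```

Applied to a digitized novel, for instance, such a count is a first step toward the data-driven analyses of literature mentioned below; the interesting scholarly work lies in interpreting what the resulting patterns do and do not show.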
Scholars might reflect on
how the digital form of the data is organized,
how analysis is conducted/reproduced, and
how claims visualized in digital form may embody assumptions or biases.
Digital humanities can enrich pedagogy as well, such as when a student uses visualized data to study voter patterns or conducts data-driven analyses of works of literature.
Digital humanities usually involves work by teams in collaborative spaces or centers. Team members might include
researchers and faculty from multiple disciplines,
data scientists and preservation experts,
technologists with expertise in critical computing and computing methods, and undergraduates
Although some disciplinary associations, including the Modern Language Association and the American Historical Association, have developed guidelines for evaluating digital projects, many institutions have yet to define how work in digital humanities fits into considerations for tenure and promotion.
Because large projects are often developed with external funding that is not readily replaced by institutional funds when the grant ends, sustainability is a concern. Doing digital humanities well requires access to expertise in methodologies and tools such as GIS, modeling, programming, and data visualization, which can be expensive for a single institution to obtain.
Resistance to learning new technologies can be another roadblock, as can the propensity of many humanists to resist working in teams. While some institutions have recognized the need for institutional infrastructure (computation and storage, equipment, software, and expertise), many have not yet incorporated such support into ongoing budgets.
Digital humanities creates opportunities for undergraduate involvement in research, providing students with workplace skills such as data management, visualization, coding, and modeling. It also provides new insights into policy-making in areas such as social media and demographics, and new means of engaging with popular culture and understanding past cultures. Evolution in this area will continue to build connections between the humanities and other disciplines, cross-pollinating research and education in areas like medicine and environmental studies. Insights about digital humanities itself will drive innovation in pedagogy and expand our conceptualization of classrooms and labs.