It is important to note that bot accounts do not always clearly identify themselves as such in their profiles, and any bot classification system inevitably carries some risk of error. The Botometer system has been documented and validated in an array of academic publications, and researchers from the Center conducted a number of independent validation measures of its results.
Combine the superfast computational capacities of Big Compute with the oceans of specific personal information comprising Big Data, and fertile ground for computational propaganda emerges. That’s how the small AI programs called bots can be unleashed into cyberspace to target and deliver misinformation to exactly the people who will be most vulnerable to it. These messages can be refined over and over again based on how well they perform (again, in terms of clicks, likes, and so on). Worst of all, all of this can be done semiautonomously, allowing the targeted propaganda (like fake news stories or faked images) to spread like viruses through the communities most vulnerable to it.
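The refine-by-performance loop described above resembles what is known as "multi-armed bandit" selection: keep showing the variant that gets the most clicks, while occasionally trying others. The sketch below is a toy illustration only; the variant names, counts, and epsilon value are invented, and real influence operations are far more elaborate.

```python
import random

def pick_variant(stats, epsilon=0.1):
    """Choose a message variant: usually the best performer so far,
    occasionally a random one to keep exploring."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    # click-through rate = clicks / impressions (guard against divide-by-zero)
    return max(stats, key=lambda v: stats[v]["clicks"] / max(stats[v]["shown"], 1))

def record(stats, variant, clicked):
    """Update performance counts after a message variant is shown."""
    stats[variant]["shown"] += 1
    stats[variant]["clicks"] += int(clicked)

# Three hypothetical message variants, tracked by impressions and clicks.
stats = {v: {"shown": 0, "clicks": 0} for v in ("A", "B", "C")}
record(stats, "A", clicked=True)
record(stats, "B", clicked=False)
print(pick_variant(stats, epsilon=0.0))  # with no exploration, exploits "A"
```

The `epsilon` parameter controls the trade-off between exploiting the current best message and exploring alternatives, which is why the refinement can run semiautonomously once engagement data flows in.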
According to Bolsover and Howard, viewing computational propaganda only from a technical perspective would be a grave mistake. As they explain, seeing it just in terms of variables and algorithms “plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it.”
Computational propaganda is a new thing. People just invented it. And they did so by realizing possibilities emerging from the intersection of new technologies (Big Compute, Big Data) and new behaviors those technologies allowed (social media). But the emphasis on behavior can’t be lost.
People are not machines. We do things for a whole lot of reasons, including emotions of loss, anger, fear, and longing. To combat computational propaganda’s potentially dangerous effects on democracy in a digital age, we will need to focus on both its how and its why.
We needed to create more study, learning, and research space in the library. Put simply, our library space was cramped. It was a nice-looking building but not terribly “user-friendly.”
Additionally, the building itself was one of the oldest on campus…
Finally, we wanted to create a more visionary learning space. We wanted to define what impactful spaces for our students would be, and examine how the academic library can support both emerging academic trends and social formation on campus.
We’ve created “living rooms” in the library: spaces with couches, softer seating, fireplaces—where students can go and plop down. That “plopping down” is important. The library has become a place where students go with some intentionality to rest, to check their phone, to read.
We’ve tried to create interesting “spots.” We have nicely appointed, contemporary-in-feel study spaces, with glass whiteboards and glassed walls. People can see in, people can see out; today’s students like to be seen, and they like to see in. This was very important in our focus groups. Also, on a practical level, students like to be able to see into study spaces to see if they’re occupied.
Special Collections used to be intimidating for a first or second-year student. We wanted an experience in which from the moment the student arrives, there are no barriers to exploration. We wanted to send the message that this is a place for inquiry and discovery, a place to learn more. There are no doors—just an open entrance to the wing.
The key with the Great Books Room is that it is glassed. Students can look in and see others deliberating about great books around an oval table, or participating in mentor-led discussions. And they see that this is a part of the experience they can have at college.
Vine, R. (2018). Realigning liaison with university priorities: Observations from ARL Liaison Institutes 2015–18. College & Research Libraries News, 79(8). https://doi.org/10.5860/crln.79.8.420
Rita Vine is head of faculty and student engagement at the University of Toronto Libraries, email: email@example.com. In 2017–18, she was visiting program officer for the Reimagining Library Liaison initiative at the Association of Research Libraries.
The overarching goal of the institutes is to acknowledge a library’s primary traditional services (instruction, collections, reference) while challenging conventional thinking about what is needed for the future and how best to provide it. Exercises are designed to help librarians move from “what’s in it for the library” to “what’s in it for the university.”
Top ten observations
1. Liaison librarians would benefit from greater exposure to institutional research priorities at their university.
2. Liaisons find it easiest to engage in classroom support and access to library resources. Research engagement is harder. Moving into new areas of engagement is challenging when faculty continue to see librarians as buyers of content or helpers of students. Liaisons experience little pressure from individual faculty to venture into new areas that have not typically been associated with libraries. If asked to engage in new areas, some liaisons find it intimidating to step outside of familiar roles to probe and advocate for new capabilities and services that faculty may not be ready to discuss, or which liaisons may not yet fully understand.
3. Liaisons are both eager and anxious about shifting their roles from service to engagement. Anxiety manifests itself in feeling inexpert or untrained in technical areas.
The need for training in many different and complex technical skills, like data numeracy, publishing practices, and research data management, contributes to this anxiety.
4. Many liaisons’ professional identity and value system revolves around disciplinarity, service, and openness, and less around outreach and impact.
5. Some liaisons see outreach and engagement as equivalent to advocacy, library “flag-waving,” and sometimes “not my job.” My note: as in “a library degree is no less than a Ph.D.; it is like a physician’s degree.”
6. Finding time, space, and motivation to undertake deeper outreach is daunting to many liaisons. Liaisons were very reluctant to identify any current activities that could be terminated or reimagined in order to make time for new forms of engagement. Particularly in institutions where librarians enjoy faculty status, finding time to engage in personal research concerned liaisons more than finding time for outreach.
7. Liaisons want to deepen their relationships with faculty, but are unclear about ways to do this beyond sending an email and waiting.
8. Many liaisons are unclear about how their work intersects with that of functional specialists, and may need prompting to see opportunities for collaboration with them.
9. While liaisons place considerable value on traditional library services, they have difficulty articulating the value of those services when they put themselves in the shoes of their users. Groups struggled to find value in aspects of traditional services, but had little appetite for serious reconsideration of services that may have lost all or most of their value relative to the time and energy expended to deliver them.
10. For liaisons, teaming with others raises concerns about how teamwork translates into merit, promotion, and other tangible rewards. Liaisons wonder how the need for increased teaming and collaboration will impact their reward structure. My note: I read between the lines of this particular point: it is up to the administrator to become a leader!!! A leader can alleviate such individualistic concerns and raise the individuals to a team.
Here are three recommendations for research libraries to consider to help their workforce move to a robust engagement and impact model.
Foster more frequent and deeper communication between librarians and faculty to understand their research and teaching challenges. Many liaisons will not take even modest communications risks, such as engaging in conversations with faculty in areas where they feel inexpert, without strong but supportive management interventions (as per my note above).
Find ways to help librarians use internal teaming and collaborations to solve university challenges. My note: Chris Kvaal, thank you for introducing me to the “hundred squirrels in one room” allegory. To find ways to help librarians use internal teaming, librarians must be open to the mere idea of teaming.
Increase liaison activity with non-departmentalized units on campus, which are often drivers of institutional initiatives and university priorities. Units such as institutional research services, teaching centers, and senior university offices can connect the library to high-level institutional projects and provide opportunities to engage more liaisons and functional specialists in these areas.
United States digital literacy frameworks tend to focus on educational policy details and personal empowerment, the latter encouraging learners to become more effective students, better creators, smarter information consumers, and more influential members of their community.
National policies are vitally important in European digital literacy work, which is unsurprising for a continent well populated with nation-states, struggling to redefine itself while still trying to grow economies in the wake of the 2008 financial crisis and subsequent financial pressures.
African digital literacy is more business-oriented.
Middle Eastern nations offer yet another variation, with a strong focus on media literacy. As with other regions, this can be a response to countries with strong state influence or control over local media. It can also represent a drive to produce more locally-sourced content, as opposed to consuming material from abroad, which may elicit criticism of neocolonialism or religious challenges.
p. 14. Digital Literacy for the Humanities: What does it mean to be digitally literate in history, literature, or philosophy? Creativity in these disciplines often involves textuality, given the large role writing plays in them, as, for example, in the Folger Shakespeare Library’s instructor’s guide. In the digital realm, this can include web-based writing through social media, along with the creation of multimedia projects through posters, presentations, and video. Information literacy remains a key part of digital literacy in the humanities. The digital humanities movement has not seen much connection with digital literacy, unfortunately, but their alignment seems likely, given the turn toward using digital technologies to explore humanities questions. That development could then foster a spread of other technologies and approaches to the rest of the humanities, including mapping, data visualization, text mining, web-based digital archives, and “distant reading” (working with very large bodies of texts). The digital humanities’ emphasis on making projects may also increase
Digital Literacy for Business: Digital literacy in this world is focused on manipulation of data, from spreadsheets to more advanced modeling software, leading up to degrees in management information systems. Management classes unsurprisingly focus on how to organize people working on and with digital tools.
Digital Literacy for Computer Science: Naturally, coding appears as a central competency within this discipline. Other aspects of the digital world feature prominently, including hardware and network architecture. Some courses housed within the computer science discipline offer a deeper examination of the impact of computing on society and politics, along with how to use digital tools. Media production plays a minor role here, beyond publications (posters, videos), as many institutions assign multimedia to other departments. Looking forward to a future when automation has become both more widespread and powerful, developing artificial intelligence projects will potentially play a role in computer science literacy.
In traditional instruction, students’ first contact with new ideas happens in class, usually through direct instruction from the professor; after exposure to the basics, students are turned out of the classroom to tackle the most difficult tasks in learning — those that involve application, analysis, synthesis, and creativity — in their individual spaces. Flipped learning reverses this, by moving first contact with new concepts to the individual space and using the newly-expanded time in class for students to pursue difficult, higher-level tasks together, with the instructor as a guide.
Let’s take a look at some of the myths about flipped learning and try to find the facts.
Myth: Flipped learning is predicated on recording videos for students to watch before class.
Fact: Flipped learning does not require video. Although many real-life implementations of flipped learning use video, there’s nothing that says video must be used. In fact, one of the earliest instances of flipped learning — Eric Mazur’s peer instruction concept, used in Harvard physics classes — uses no video but rather an online text outfitted with social annotation software. And one of the most successful public instances of flipped learning, an edX course on numerical methods designed by Lorena Barba of George Washington University, uses precisely one video. Video is simply not necessary for flipped learning, and many alternatives to video can lead to effective flipped learning environments [http://rtalbert.org/flipped-learning-without-video/].
Myth: Flipped learning replaces face-to-face teaching.
Fact: Flipped learning optimizes face-to-face teaching. Flipped learning may (but does not always) replace lectures in class, but this is not to say that it replaces teaching. Teaching and “telling” are not the same thing.
Myth: Flipped learning has no evidence to back up its effectiveness.
Fact: Flipped learning research is growing at an exponential pace and has been since at least 2014. That research — 131 peer-reviewed articles in the first half of 2017 alone — includes results from primary, secondary, and postsecondary education in nearly every discipline, most showing significant improvements in student learning, motivation, and critical thinking skills.
Myth: Flipped learning is a fad.
Fact: Flipped learning has been with us in the form defined here for nearly 20 years.
Myth: People have been doing flipped learning for centuries.
Fact: Flipped learning is not just a rebranding of old techniques. The basic concept of students doing individually active work to encounter new ideas that are then built upon in class is almost as old as the university itself. So flipped learning is, in a real sense, a modern means of returning higher education to its roots. Even so, flipped learning is different from these time-honored techniques.
Myth: Students and professors prefer lecture over flipped learning.
Fact: Students and professors embrace flipped learning once they understand the benefits. It’s true that professors often enjoy their lectures, and students often enjoy being lectured to. But the question is not who “enjoys” what, but rather what helps students learn best, and both groups come around once they know what the research says about the effectiveness of active learning.
Assertion: Flipped learning provides a platform for implementing active learning in a way that works powerfully for students.
The Exposure Approach: We don’t provide a way for participants to determine whether they learned anything new or now have the confidence or competence to apply what they learned.
The Exemplar Approach: moving from “show and tell” for adults to show, tell, do, and learn.
The Tutorial Approach: Getting a group that can meet at the same time and place can be challenging. That is why many faculty report a preference for self-paced professional development. We can build in simple self-assessment checks. We can add prompts that invite people to engage in some sort of follow-up activity with a colleague. We can also add an elective option for faculty in a tutorial to actually create or do something with what they learned and then submit it for direct or narrative feedback.
The Course Approach: In a non-credit format, these have the benefit of a more structured and lengthy learning experience, even if they are just three- to five-week short courses that meet online or in person once every week or two. They can involve badges, portfolios, peer assessment, self-assessment, or one-on-one feedback from a facilitator.
The Academy Approach: Like the course approach, this tends to be a deeper and more extended experience. People might gather in a cohort over a year or longer. Assessment through coaching and mentoring, the use of portfolios, peer feedback, and much more can easily be incorporated to add a rich assessment element to such longer-term professional development programs.
The Mentoring Approach: Mentors often don’t set specific learning goals with the mentee. Instead, mentoring is often a set of structured meetings, with the mentor also being someone to whom mentees can turn with questions and tips along the way.
The Coaching Approach: Mentoring tends to be a broader type of relationship with a person, while a coaching relationship tends to be more focused upon specific goals, tasks, or outcomes.
The Peer Approach: This can be done on a 1:1 basis or in small groups, where those who are teaching the same courses are able to compare notes on curricula and teaching models. They might give each other feedback on how to teach certain concepts, how to write syllabi, how to handle certain teaching and learning challenges, and much more. Faculty might sit in on each other’s courses, observe, and give feedback afterward.
The Self-Directed Approach: We can use a self-assessment strategy such as setting goals and creating simple checklists and rubrics to monitor our progress. Or we can invite feedback from colleagues, often in a narrative and/or informal format. We might also create a portfolio of our work, or engage in some sort of learning journal that documents our thoughts, experiments, experiences, and learning along the way.
In 2014, administrators at Central Piedmont Community College (CPCC) in Charlotte, North Carolina, began talks with members of the North Carolina State Board of Community Colleges and North Carolina Community College System (NCCCS) leadership about starting a CBE program.
Building on an existing project at CPCC for identifying the elements of a digital learning environment (DLE), which was itself influenced by the EDUCAUSE publication The Next Generation Digital Learning Environment: A Report on Research, the committee reached consensus on a DLE concept and a shared lexicon: the “Digital Learning Environment Operational Definitions.”
Ungerer, L. M. (2016). Digital Curation as a Core Competency in Current Learning and Literacy: A Higher Education Perspective. The International Review of Research in Open and Distributed Learning, 17(5). https://doi.org/10.19173/irrodl.v17i5.2566
Dunaway (2011) suggests that learning landscapes in a digital age are networked, social, and technological. Since people commonly create and share information by collecting, filtering, and customizing digital content, educators should provide students opportunities to master these skills (Mills, 2013). In enhancing critical thinking, we have to investigate pedagogical models that consider students’ digital realities (Mihailidis & Cohen, 2013). November (as cited in Sharma & Deschaine, 2016), however, warns that although the Web fulfils a pivotal role in societal media, students often are not guided on how to critically deal with the information that they access on the Web. Sharma and Deschaine (2016) further point out the potential for personalizing teaching and incorporating authentic material when educators themselves digitally curate resources by means of Web 2.0 tools.
p. 24. Communities of practice. Lave and Wenger’s (as cited in Weller, 2011) concept of situated learning and Wenger’s (as cited in Weller, 2011) idea of communities of practice highlight the importance of apprenticeship and the social role in learning.
Criteria to publish a paper
Originality: Does the paper contain new and significant information adequate to justify publication?
Relationship to Literature: Does the paper demonstrate an adequate understanding of the relevant literature in the field and cite an appropriate range of literature sources? Is any significant work ignored?
Methodology: Is the paper’s argument built on an appropriate base of theory, concepts, or other ideas? Has the research or equivalent intellectual work on which the paper is based been well designed? Are the methods employed appropriate?
Results: Are results presented clearly and analyzed appropriately? Do the conclusions adequately tie together the other elements of the paper?
Implications for research, practice and/or society: Does the paper identify clearly any implications for research, practice and/or society? Does the paper bridge the gap between theory and practice? How can the research be used in practice (economic and commercial impact), in teaching, to influence public policy, in research (contributing to the body of knowledge)? What is the impact upon society (influencing public attitudes, affecting quality of life)? Are these implications consistent with the findings and conclusions of the paper?
Quality of Communication: Does the paper clearly express its case, measured against the technical language of the field and the expected knowledge of the journal’s readership? Has attention been paid to the clarity of expression and readability, such as sentence structure, jargon use, acronyms, etc.?
Stanton, K. V., & Liew, C. L. (2011). Open Access Theses in Institutional Repositories: An Exploratory Study of the Perceptions of Doctoral Students. Information Research: An International Electronic Journal, 16(4).
We examine doctoral students’ awareness of and attitudes to open access forms of publication. Levels of awareness of open access and the concept of institutional repositories, publishing behaviour, and perceptions of benefits and risks of open access publishing were explored. Method: Qualitative and quantitative data were collected through interviews with eight doctoral students enrolled in a range of disciplines in a New Zealand university and a self-completion Web survey of 251 students. Analysis: Interview data were analysed thematically, then evaluated against a theoretical framework. The interview data were then used to inform the design of the survey tool. Survey responses were analysed as a single set, then by discipline, using SurveyMonkey’s online toolkit and Excel. Results: While awareness of open access and repository archiving is still low, the majority of interview and survey respondents were found to be supportive of the concept of open access. The perceived benefits of enhanced exposure and potential for sharing outweigh the perceived risks. The majority of respondents were supportive of an existing mandatory thesis submission policy. Conclusions: Low levels of awareness of the university repository remain an issue, and could be addressed by further investigating the effectiveness of different communication channels for promotion.
The researchers first use the qualitative approach: by interviewing participants and analyzing their responses thematically, they build the survey.
They then administer the survey (the quantitative approach).
How do you intend to use mixed methods? Please share.
Metaphors: A Problem Statement is like… (a metaphor, per Aristotle, is a novel or poetic linguistic expression where one or more words for a concept are used outside their normal conventional meaning to express a similar concept):
- The DNA of the research
- A snapshot of the research
- The foundation of the research
- The heart of the research
- A “taste” of the research
- A blueprint for the study
A digital object identifier (DOI) is a unique alphanumeric string assigned by a registration agency (the International DOI Foundation) to identify content and provide a persistent link to its location on the Internet. The publisher assigns a DOI when your article is published and made available electronically.
Why do we need it?
2010 changes to APA for electronic materials, digital object identifier (DOI): if a DOI is available, you no longer include a URL. Example: Author, A. A. (date). Title of article. Title of Journal, volume(number), page numbers. doi: xx.xxxxxxx
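As a small illustration of that rule, the pieces of the example reference can be assembled programmatically. The function name and the journal-article-only formatting below are my own sketch, not an official APA tool; it simply appends the DOI when one is supplied.

```python
def apa_reference(author, year, title, journal, volume, issue, pages, doi=None):
    """Assemble an APA (6th ed.) style journal reference string.
    Per the 2010 guidance, a DOI is included when available instead of a URL."""
    ref = f"{author} ({year}). {title}. {journal}, {volume}({issue}), {pages}."
    if doi:
        ref += f" doi: {doi}"
    return ref

# Using the Ungerer (2016) article cited earlier in these notes:
print(apa_reference("Ungerer, L. M.", 2016,
                    "Digital curation as a core competency in current learning and literacy",
                    "The International Review of Research in Open and Distributed Learning",
                    17, 5, "1-20", doi="10.19173/irrodl.v17i5.2566"))
```

Note that the page range here is a placeholder; the pattern matters, not the specific values.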
According to Sugimoto et al. (2016), the use of social media platforms by researchers is high, ranging from 75 to 80% in large-scale surveys (Rowlands et al., 2011; Tenopir et al., 2013; Van Eperen & Marincola, 2011).
There is one more reason: as much as you want to dwell on the fact that you are practitioners and research is not the most important part of your job, to a great degree you may also be judged by the scientific output of your office and/or institution.
In that sense, both social media and altmetrics might suddenly become extremely important to understand and apply.
In short, altmetrics (alternative metrics) measure the impact your scientific output has on the community. Your teachers and you present, publish, and create work; even work that is not formally presented or published may be widely reflected through, e.g., social media, and thus have an impact on the community.
How such impact is measured, if it is measured at all, can greatly influence the money flow to your institution.
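As a purely hypothetical sketch of what "measuring" might mean here, mention counts from different sources can be rolled up into a single weighted number. The weight values below are invented for illustration; they are not any real altmetrics provider's formula.

```python
# Hypothetical weights: no real altmetrics provider publishes exactly these.
WEIGHTS = {"tweets": 0.25, "blog_posts": 5.0, "news_stories": 8.0,
           "mendeley_readers": 1.0}

def toy_altmetric_score(mentions):
    """Combine raw mention counts into one weighted score.
    Mention types without a defined weight are ignored rather than rejected."""
    return sum(WEIGHTS.get(kind, 0) * count for kind, count in mentions.items())

# An invented article with 40 tweets, 2 blog posts, and 55 Mendeley readers.
article = {"tweets": 40, "blog_posts": 2, "mendeley_readers": 55}
print(toy_altmetric_score(article))  # 40*0.25 + 2*5.0 + 55*1.0 = 75.0
```

The point of the sketch is that the choice of weights, not the counting itself, is where such scores become contestable.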
Thelwall, M., & Wilson, P. (2016). Mendeley readership altmetrics for medical articles: An analysis of 45 fields. Journal of the Association for Information Science and Technology, 67(8), 1962–1972. https://doi.org/10.1002/asi.23501
Todd Tetzlaff is using Mendeley and he might be the only one to benefit … 🙂
Here is some food for thought from the article above:
Doctoral students and junior researchers are the largest reader group in Mendeley (Haustein & Larivière, 2014; Jeng et al., 2015; Zahedi, Costas, & Wouters, 2014a).
Studies have also provided evidence of high rates of blogging among certain subpopulations: for example, approximately one-third of German university staff (Pscheida et al., 2013) and one-fifth of UK doctoral students use blogs (Carpenter et al., 2012).
Social data sharing platforms provide an infrastructure to share various types of scholarly objects (including datasets, software code, figures, presentation slides, and videos) and for users to interact with these objects (e.g., comment on, favorite, like, and reuse). Platforms such as Figshare and SlideShare disseminate scholars’ various types of research outputs, such as datasets, figures, infographics, documents, videos, posters, or presentation slides (Enis, 2013), and display views, likes, and shares by other users (Mas-Bleda et al., 2014).
Frequently mentioned social platforms in scholarly communication research include research-specific tools such as Mendeley, Zotero, CiteULike, BibSonomy, and Connotea (now defunct), as well as general tools such as Delicious and Digg (Hammond, Hannay, Lund, & Scott, 2005; Hull, Pettifer, & Kell, 2008; Priem & Hemminger, 2010; Reher & Haustein, 2010).
“The focus group interviews were analysed based on the principles of interpretative phenomenology”
If you are not podcast fans, I understand. The link above is a pain in the behind to make work if you are not familiar with using podcasts.
Here is an easier way to find it:
1. Open your cell phone and find the podcast app, which is pre-installed but which you might not have ever used [yet].
2. In the app, use the search option and type “stuff you should know”
3. The podcast will pop up. Scroll and find “How the Scientific Method Works,” and/or search for it if you can.
Once you can play it on the phone, you have to find time to listen to it.
I listen to podcasts when I have to do unpleasant chores, such as: 1. walking to work, 2. washing the dishes, 3. flying long hours (very rarely), 4. driving in the car.
There are a bunch of other situations when you may be strapped for time; instead of feeling disgruntled and stressed, you can deliver mental [junk] food to your brain.
Earbuds help me: 1. forget the unpleasant task, 2. utilize the time, 3. learn cool stuff.
Here are the podcasts I am subscribed to, besides “Stuff You Should Know”:
TED Radio Hour
TED Talks Education
NPR Fresh Air
and a bunch of others which, if I don’t listen to for a year, I erase; and if I peruse the top charts and something piques my interest, I try it.
If I did not manage to convince you to try podcasts, that is totally fine; do not feel obligated.
However, you can listen to this particular podcast on your computer if you don’t want to download it to your phone.
It is a one-hour show by two geeks who are trying to make funny (and they do) a dry matter such as quantitative vs. qualitative, which you want to internalize:
1. Around minute 12, they talk about inductive versus deductive reasoning to introduce you to qualitative versus quantitative. It is good to listen to their musings, since your dissertation goes through inductive and deductive processes, and understanding them can help you better control your dissertation writing.
2. Scientific method, hypothesis, etc. (around minute 17).
While this is not a Ph.D. but an Ed.D., and we do not delve into the philosophy of science, dissertations, etc., the more you know about this process, the better control you have over your dissertation.
3. Methods and how you prove your claims (Chapter 3) are discussed around minute 35.
4. Dependent and independent variables, and how you do your research in general (around minute 45).
In short: listen, and please do share your thoughts below. You do not have to be kind to this source offering. Actually, be as critical as possible, so you can help me decide whether I should offer it to the next cohort. Thank you in advance for your feedback.
Three Things Teachers Need to Spot—and Stop—Plagiarism
SPONSORED CONTENT FROM PLAGIARISMCHECK
My note: I firmly disagree with the corporate push to mechanize plagiarism detection. Plagiarism is about teaching both faculty and students, and this industry, under that same cover, is trying to make a profit by mechanizing plagiarism detection, not by teaching about plagiarism.
Plagiarism-detection software can address the most pressing needs of classroom educators faced with assessing students’ written work. Here’s how:
1. Teachers Need More Time
The Challenge: The larger the class, the longer it takes to review each written assignment: checking grammar, style, originality of ideas, etc. This is especially important when screening for plagiarism.
My note: this is NOT true. If the teacher is still lingering in the old habits of lecturing, this could be true. However, when a teacher gets into the habit of reviewing papers, s/he can detect, as early as the first several paragraphs, the discrepancies between copied-and-pasted material and the student’s own work.
In addition, if the teacher applies group work in her/his class, s/he can have students proofread each other’s work, thus actively teaching them about plagiarism, punctuation, etc.
2. Evidence Must Be Reliable
The Challenge: When identifying plagiarism, teachers need to be confident in their assessment. Accusing students of academic dishonesty is a weighty claim; it can lead to their suspension or even expulsion from school.
My note: another myth perpetuated by an industry searching for profit. Instead of looking at plagiarism as grounds for punitive action, an educator will look at it as education and prevention. Prevention of plagiarism will never be successful if the focus, as in this article, is on “suspension,” “expulsion,” etc. The goal of the teacher is NOT to catch the student, but to work with the student and understand the complexity of plagiarism.
3. Tools Must Be Easy to Use
My note: right, the goal is to make the teacher think as little as possible.
My note: PlagiarismCheck is the same as Turnitin and all the other tools that seek profit, not education. Considering that plagiarism is a moving target (http://blog.stcloudstate.edu/ims/2016/01/10/plagiarism-or-collaboration/) and that it is first a concept and only secondly an action, the attempt to extract profit from the mechanization of this process is no less corrupt than the attempt to focus on the profit (of education) rather than on education (itself).