University of Toronto Libraries opened a User Research and Usability (UX) Lab in September 2017, the first space of its kind on campus. The UX Lab is open to students, staff, and faculty by appointment or during weekly drop-in hours.
In this 90-minute webinar, our presenter will discuss:
The rationale behind building a physical usability lab and why a physical space isn’t always needed (or recommended)
Experience with community building efforts
How to raise awareness of UX as a service to staff and the University community at large
The evolution of the lab’s services
Next steps
Presenter: Lisa Gayhart, User Experience Librarian, University of Toronto Libraries. Thursday, November 15, 2018, 1:00 – 2:30 pm Central Time
The Research Grants Programme is intended for early career scholars of all nationalities and in any field of the Humanities. They must have a particular interest in cultural heritage and take a digital humanities approach. Applicants must hold a PhD, with no more than 7 years of experience after its completion. With duly justified exceptions, their projects must be hosted by an institution (a university, a research centre, a library lab, or a museum) working in one of the European Union member states.
!*!*!*!*! — this article was pitched by Mark Vargas, then dean of LRS, in the fall of 2013, and discussed at an LRS faculty meeting the same year — !*!*!*!
New Roles for New Times: Transforming Liaison Roles in Research Libraries
(p. 4) Building strong relationships with faculty and other campus professionals, and establishing collaborative partnerships within and across institutions, are necessary building blocks to librarians’ success. In a traditional liaison model, librarians use their subject knowledge to select books and journals and teach guest lectures.
“Liaisons cannot be experts themselves in each new capability, but knowing when to call in a colleague, or how to describe appropriate expert capabilities to faculty, will be key to the new liaison role.
six trends in the development of new roles for library liaisons
user engagement is a driving factor
what users do (research, teaching, and learning) rather than on what librarians do (collections, reference, library instruction). In addition, an ALA-accredited master’s degree in library science is no longer strictly required.
In a networked world, local collections as ends in themselves make learning fragmentary and incomplete. (p. 5)
A multi-institutional approach is the only one that now makes sense.
Scholars already collaborate; libraries need to make it easier for them to do so.
but they also advise and collaborate on issues of copyright, scholarly communication, data management, knowledge management, and information literacy. The base level of knowledge that a liaison must possess is much broader than familiarity with a reference collection or facility with online searching; instead, they must constantly keep up with evolving pedagogies and research methods, rapidly developing tools, technologies, and ever-changing policies that facilitate and inform teaching, learning, and research in their assigned disciplines.
In many research libraries, programmatic efforts with information literacy have been too narrowly defined. It is not unusual for libraries to focus on freshman writing programs and a series of “one-shot” or invited guest lectures in individual courses. While many librarians have become excellent teachers, traditional one-shot, in-person instructional sessions can vary in quality depending on the training librarians have received in this arena; and they neither scale well nor do they necessarily address broader curricular goals. Librarians at many institutions are now focusing on collaborating with faculty to develop thoughtful assignments and provide online instructional materials that are built into key courses within a curriculum and provide scaffolding to help students develop library research skills over the course of their academic careers.
And many libraries stated that they lack instructional designers and/or educational technologists on their staff, limiting the development of interactive online learning modules and tutorials. (my note: or just ignore the desire by units such as IMS to help).
(p. 7). This move away from supervision allows the librarians to focus on their liaison responsibilities rather than on the day-to-day operations of a library and its attendant personnel needs.
(1) effectively support teaching, learning, and research; (2) identify opportunities for further development of tools and services; and (3) connect students, staff, and faculty to deeper expertise when needed.
At many institutions, therefore, the conversation has focused on how to supplement and support the liaison model with other staff.
the hybrid exists within the liaison structure, where liaisons also devote a portion of their time (e.g., 20% or more) to an additional area of expertise, for example digital humanities and scholarly communication, and may work with liaisons across all disciplinary areas. (my note: and at the SCSU library, the librarians firmly opposed the request for a second master’s degree)
functional specialists who do not have liaison assignments to specific academic departments but instead serve as “superliaisons” to other librarians and to the entire campus. Current specialist areas of expertise include copyright, geographic information systems (GIS), media production and integration, distributed education or e-learning, data management, emerging technologies, user experience, instructional design, and bioinformatics. (everything in italics is currently done by IMS faculty).
divided into five areas of functional specialization: information resources and collections management; information literacy, instruction, and curriculum development; discovery and access; archival and special collections; scholarly communication and the research enterprise.
E-Scholarship Collaborative, a Research Support Services Collaborative (p. 8).
p. 9. managing alerts and feeds, personal archiving, and using social networking for teaching and professional development
p. 10. new initiatives in humanistic research and teaching are changing the nature and frequency of partnerships between faculty and the Libraries. In particular, cross-disciplinary Humanities Laboratories (http://fhi.duke.edu/labs), supported by the John Hope Franklin Humanities Institute and the Andrew W. Mellon Foundation-funded Humanities Writ Large project, have allowed liaisons to partner with faculty to develop and curate new forms of scholarship.
consultations on a range of topics, such as how to use social media to effectively communicate academic research and how to mark up historical texts using the Text Encoding Initiative (TEI) guidelines
p. 11. Media literacy, and facilitating the integration of media into courses, is an area in which research libraries can play a lead role at their institutions. (my note: yet still suppressed or outright denied to IMS to conduct such efforts)
Purdue Academic Course Transformation, or IMPACT (http://www.lib.purdue.edu/infolit/impact). The program’s purpose is to make foundational courses at Purdue more student-centered and participatory. Librarians are key members of interdepartmental teams that “work with Purdue instructors to redesign courses by applying evidence-based educational practices” and offer “learning solutions” that help students engage with and critically evaluate information. (my note: as offered by Keith and myself to Miguel, the vice provost for undergrads, who left; then offered to First Year Experience faculty, but ignored by Christine Metzo; then offered again to Glenn Davis, who bounced it back to Christine Metzo).
p. 15. The NCSU Libraries Fellows Program offers new librarians a two-year appointment during which they develop expertise in a functional area and contribute to an innovative initiative of strategic importance. NCSU Libraries typically have four to six fellows at a time, bringing in people with needed skills and working to find ongoing positions when they have a particularly good match. Purdue Libraries have experimented with offering two-year visiting assistant professor positions. And the University of Minnesota has hired a second CLIR fellow for a two-year digital humanities project; the first CLIR fellow now holds an ongoing position as a curator in Archives and Special Collections. The CLIR Fellowship is a postdoctoral program that hires recent PhD graduates (non-librarians), allowing them to explore alternative careers and allowing the libraries to benefit from their discipline-specific expertise.
The brain is actually three brains: the ancient reptilian brain, the limbic brain, and the cortical brain. This article will focus on the limbic brain, because it may be most important to successfully using interactive video or web-based video. The limbic brain monitors the external world and the internal body, taking in information through the senses as well as body temperature and blood pressure, among others. It is the limbic brain that generates and interprets facial expressions and handles emotions, while the cortical brain handles symbolic activities such as language as well as action and strategizing. The two interact when an emotion is sent from the limbic to the cortical brain and generates a conscious thought; in response to a feeling of fear (limbic), you ask, “what should I do?” (cortical).
Direct eye contact and deciphering body language are also important for sending and picking up clues about social context.
The loss of social cues is important because it may affect the quality of the content of the presentation (by not allowing timely feedback or questions) but also because students may feel less engaged and become frustrated with the interaction, and subsequently lower their assessment of the class and the instructor (Reeves & Nass, 1996). Fortunately, faculty can provide such social cues verbally, once they are aware of the importance of helping students use these new media.
Attachment theory also supports the importance of physical and emotional connections.
As many a struggling teacher knows, students are often impervious to learning new concepts. They may replay the new information for a test, but after time passes, they revert to the earlier (and likely wrong) information. This is referred to as the “power of mental models.” As explained in Marchese (2000), when we view a tree, it is not as if we see the tree in our head, as in photography.
The coping strategies of the two hemispheres are fundamentally different. The left hemisphere’s job is to create a belief system or model and to fold new experiences into that belief system. If confronted with some new information that doesn’t fit the model, it relies on Freudian defense mechanisms to deny, repress or confabulate – anything to preserve the status quo. The right hemisphere’s strategy is to play “Devil’s Advocate,” to question the status quo and look for global inconsistencies. When the anomalous information reaches a certain threshold, the right hemisphere decides that it is time to force a complete revision of the entire model and start from scratch (Ramachandran & Blakeslee, 1998, p. 136).
While much hemispheric-based research has been repudiated as an oversimplification (Gackenbach, 1999), the above description of how new information eventually overwhelms an old world view may be the result of multiple brain functions – some of which work to preserve our models and others to alter – that help us both maintain and change as needed.
Self-talk is “the root of empathy, understanding, cooperation, and rules that allow us to be successful social beings. Any sense of moral behavior requires thought before action” (Ratey, 2001, p. 255).
Healy (1999) argues that based on what we know about brain development in children, new computer media may be responsible for developing brains that are largely different from the brains of adults. This is because “many brain connections have become specialized for . . . media” (p. 133); in this view, a brain formed by language and reading is different from a brain formed by hypermedia. Different media lead to different synaptic connections being laid down and reinforced, creating different brains in youngsters raised on fast-paced, visually-stimulating computer applications and video games. “Newer technologies emphasize rapid processing of visual symbols . . . and deemphasize traditional verbal learning . . . and the linear, analytic thought process . . . [making it] more difficult to deal with abstract verbal reasoning” (Healy, 1999, p. 142).
Please also have materials that might help you organize your thoughts and expedite your Chapter 2 writing.
Do you agree with (did you use) the following observations:
The purpose of the review of the literature is to prove that no one has studied the gap in the knowledge outlined in Chapter 1. The subjects in the Review of Literature should have been introduced in the Background of the Problem in Chapter 1. Chapter 2 is not a textbook of subject matter loosely related to the subject of the study. Every research study that is mentioned should in some way bear upon the gap in the knowledge, and each study that is mentioned should end with the comment that the study did not collect data about the specific gap in the knowledge of the study as outlined in Chapter 1.
The review should be laid out in major sections introduced by organizational generalizations. An organizational generalization can be a subheading so long as the last sentence of the previous section introduces the reader to what the next section will contain. The purpose of this chapter is to cite major conclusions, findings, and methodological issues related to the gap in the knowledge from Chapter 1. It is written for knowledgeable peers from easily retrievable sources of the most recent issue possible.
Empirical literature published within the previous 5 years or less is reviewed to prove no mention of the specific gap in the knowledge that is the subject of the dissertation is in the body of knowledge. Common sense should prevail. Often, to provide a history of the research, it is necessary to cite studies older than 5 years. The object is to acquaint the reader with existing studies relative to the gap in the knowledge and describe who has done the work, when and where the research was completed, and what approaches were used for the methodology, instrumentation, statistical analyses, or all of these subjects.
If very little literature exists, the wise student will write, in effect, a several-paragraph book report by citing the purpose of the study, the methodology, the findings, and the conclusions. If there is an abundance of studies, cite only the most recent studies. Firmly establish the need for the study. Defend the methods and procedures by pointing out other relevant studies that implemented similar methodologies. It should be frequently pointed out to the reader why a particular study did not match the exact purpose of the dissertation.
The Review of Literature ends with a Conclusion that clearly states that, based on the review of the literature, the gap in the knowledge that is the subject of the study has not been studied. Remember that a “summary” is different from a “conclusion.” A Summary, the final main section, introduces the next chapter.
When conducting qualitative research, how many people should be interviewed? Is there a minimum or a maximum?
Here is my take on it:
Simple question, not so simple answer.
It depends.
Generally, the number of respondents depends on the type of qualitative inquiry: case study methodology, phenomenological study, ethnographic study, or ethnomethodology. However, a rule of thumb is for scholars to reach the saturation point: the point at which no fresh information is uncovered in response to an issue that is of interest to the researcher.
If your qualitative method is designed to meet rigor and trustworthiness, thick, rich data are important. To achieve this, you would need at least 12 interviews, ensuring your participants are the holders of knowledge in the area you intend to investigate. In grounded theory you could start with 12 and interview more if your data are not rich enough.
In IPA the norm tends to be 6 interviews.
You may check the sample sizes in peer-reviewed qualitative publications in your field to find out about popular practice. It all depends on the research problem and the choice of a specific qualitative approach and theoretical framework, so the answer to your question will vary from a few to a few dozen.
How many interviews are needed in a qualitative research?
There are different views in the literature, and no one agrees on an exact number. Here I review some of the most cited references. Based on Creswell (2014), it is estimated that 16 participants will provide rich and detailed data. A couple of researchers agree that 10–15 in-depth interviews are sufficient (Guest, Bunce & Johnson, 2006; Baker & Edwards, 2012).
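The saturation rule of thumb discussed above can be sketched as a toy computation: track which codes each new interview contributes and stop once an interview adds nothing new. The interview data and code names below are invented for illustration; in a real study, saturation is a researcher's judgment, not a script's.

```python
# Toy illustration of thematic saturation: stop interviewing once a
# new transcript contributes no codes we have not already seen.
def interviews_to_saturation(coded_interviews):
    """coded_interviews: list of sets of codes, one set per interview.
    Returns the number of interviews conducted up to and including the
    first interview that adds no new codes (the saturation point)."""
    seen = set()
    for i, codes in enumerate(coded_interviews, start=1):
        if seen and not (codes - seen):  # no fresh codes: saturation reached
            return i
        seen |= codes
    return len(coded_interviews)  # never saturated within this sample

# Hypothetical codes extracted from five interviews:
data = [
    {"motivation", "access"},
    {"motivation", "cost"},
    {"access", "cost", "training"},
    {"support"},
    {"motivation"},  # nothing new: saturation at interview 5
]
print(interviews_to_saturation(data))  # → 5
```

Note that this only formalizes the bookkeeping; deciding that a code is genuinely "nothing new" is itself an interpretive act.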
your methodological choices need to reflect your ontological position and understanding of knowledge production, and that’s also where you can argue a strong case for smaller qualitative studies, as you say. This is not only a problem for certain subjects, I think it’s a problem in certain departments or journals across the board of social science research, as it’s a question of academic culture.
Here is more serious literature and research (in case you need to cite it in Chapter 3):
Sample Size and Saturation in PhD Studies Using Qualitative Interviews
Gaskell, G. (2000). Individual and group interviewing. In M. W. Bauer & G. Gaskell (Eds.), Qualitative researching with text, image and sound: A practical handbook (pp. 38-56). London: SAGE Publications.
Savolainen, J. (1994). The rationality of drawing big conclusions based on small samples. Social Forces, 72, 1217-1224. (http://www.jstor.org/pss/2580299)
Small, M. (2009). “How many cases do I need?” On science and the logic of case selection in field-based research. Ethnography, 10(1), 5-38.
Williams, M. (2000). Interpretivism and generalisation. Sociology, 34(2), 209-224.
where you have several documents from the Graduate school and myself to start building your understanding and vocabulary regarding your quantitative, qualitative or mixed method research.
It has been agreed that before you go to the Statistical Center (Randy Kolb), it is wise to be prepared and understand the terminology as well as the basics of the research methods.
Here is an additional list of materials available through the SCSU library and the Internet. They can help you further with building a robust foundation for your research:
Books on intro to statistical modeling are available at the library. I understand what a major pain borrowing books from the SCSU library can be, but you can use the titles and authors to see whether you can borrow them from your local public library.
I also sought and shared with you “visual” explanations of the basics terms and concepts. Once you start looking at those, you should be able to further research (e.g. YouTube) and find suitable sources for your learning style.
I (and the future cohorts) will deeply appreciate it if you remember to share those “suitable sources for your learning style,” either in this Google Group thread and/or in the comments section of the blog entry: https://blog.stcloudstate.edu/ims/2017/07/10/intro-to-stat-modeling. Your Facebook group page is also a good place to discuss among ourselves best practices for learning and using research methods for your Chapter 3.
Watching the video, you may remember the same #BooleanSearch techniques from our BI (bibliography instruction) session of last semester.
Considering the preponderance of information in 2017: your Chapter 2 is NOT ONLY about finding information regarding your topic.
Your Chapter 2 is about proving your extensive research of the existing literature.
The techniques presented in the short video will arm you with methods to dig deeper and look further.
If you would like to do a decent job exploring all corners of the vast area called Internet, please consider other search engines similar to Google Scholar:
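To make the Boolean search techniques concrete, here is a toy sketch of how AND narrows, OR broadens, and NOT excludes when filtering a result set. The document titles and the query are invented, and real search engines are of course far more sophisticated than whole-word matching.

```python
# Toy Boolean search over a small set of document titles.
def matches(text, all_of=(), any_of=(), none_of=()):
    """AND: every word in all_of; OR: at least one word in any_of
    (if given); NOT: no word in none_of."""
    words = set(text.lower().split())
    return (all(w in words for w in all_of)
            and (not any_of or any(w in words for w in any_of))
            and not any(w in words for w in none_of))

docs = [
    "qualitative research methods in education",
    "quantitative research design",
    "qualitative data analysis software review",
]

# Query: qualitative AND research NOT quantitative
hits = [d for d in docs
        if matches(d, all_of=("qualitative", "research"),
                   none_of=("quantitative",))]
print(hits)  # → ['qualitative research methods in education']
```

The same logic is what Google Scholar and library databases apply when you combine terms with AND, OR, and NOT (or the minus sign).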
Because the questionnaire data comprised both Likert scales and open questions, they were analyzed quantitatively and qualitatively. Textual data (open responses) were qualitatively analyzed by coding: each segment (e.g. a group of words) was assigned to a semantic reference category, as systematically and rigorously as possible. For example, “Using an iPad in class really motivates me to learn” was assigned to the category “positive impact on motivation.” The qualitative analysis was performed using an adapted version of the approaches developed by L’Écuyer (1990) and Huberman and Miles (1991, 1994). Thus, we adopted a content analysis approach using QDAMiner software, which is widely used in qualitative research (see Fielding, 2012; Karsenti, Komis, Depover, & Collin, 2011). For the quantitative analysis, we used SPSS 22.0 software to conduct descriptive and inferential statistics. We also conducted inferential statistics to further explore the iPad’s role in teaching and learning, along with its motivational effect. The results will be presented in a subsequent report (Fievez, & Karsenti, 2013)
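The coding step described above (assigning each textual segment to a semantic reference category) can be sketched with a simple keyword codebook. The categories and cue words below are invented for illustration; real content analysis in tools like QDA Miner is iterative and human-driven, and a keyword lookup is only a rough first pass.

```python
# Minimal keyword-based coding of open questionnaire responses.
# The codebook (categories and cue words) is a hypothetical example.
CODEBOOK = {
    "positive impact on motivation": ["motivates", "motivated", "engaging"],
    "technical difficulties": ["crashed", "slow", "battery"],
}

def code_segment(segment):
    """Return every category whose cue words appear in the segment,
    or ['uncoded'] if none match."""
    seg = segment.lower()
    hits = [cat for cat, cues in CODEBOOK.items()
            if any(cue in seg for cue in cues)]
    return hits or ["uncoded"]

print(code_segment("Using an iPad in class really motivates me to learn"))
# → ['positive impact on motivation']
```

A segment can legitimately receive several codes, which is why the function returns a list rather than a single category.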
The 20th-century notion of conducting qualitative research through oral interviews and then processing the results manually triggered, in the second half of the 20th century, [sometimes] condescending attitudes among researchers from the exact sciences.
The reason was the advent of computing power in the second half of the 20th century, which allowed exact sciences to claim “scientific” and “data-based” results.
One statistical package, SPSS, is today widely known and considered a magnificent tool for building solid, statistically based argumentation, which further perpetuates the perceived superiority of quantitative over qualitative methods.
At the same time, qualitative researchers continue to lag behind, mostly due to the inertia of their approach to qualitative analysis, which continues to be done in the olden ways. While there is nothing wrong with the “olden” ways, harnessing computational power can streamline the process and even surface options that the “human eye” sometimes misses.
Below are some suggestions you may consider when you embark on the path of qualitative research.
an excellent guide to the structure of qualitative research
Palys, T., & Atchison, C. (2012). Qualitative Research in the Digital Era: Obstacles and Opportunities. International Journal Of Qualitative Methods, 11(4), 352-367.
Palys and Atchison (2012) present a compelling case to bring your qualitative research to the level of the quantitative research by using modern tools for qualitative analysis.
1. The authors correctly promote NVivo as the “jaguar” of qualitative research tools. Be aware, however, of the existence of other “Geo Metro” tools, which, for your research, might achieve the same result (see the bottom of this blog entry).
2. The authors promote a new approach to the Chapter 2 of a doctoral dissertation, namely OCR-ing PDF articles (as of 2017, most of your literature is in PDF or another electronic text format) through applications such as
Abbyy Fine Reader, https://www.abbyy.com/en-us/finereader/
OmniPage, http://www.nuance.com/for-individuals/by-product/omnipage/index.htm
Readiris, http://www.irislink.com/EN-US/c1462/Readiris-16-for-Windows—OCR-Software.aspx
The text from the articles is then processed through NVivo or related programs (see the bottom of this blog entry). As the authors propose: “This is immediately useful for literature review and proposal writing, and continues through the research design, data gathering, and analysis stages—where NVivo’s flexibility for many different sources of data (including audio, video, graphic, and text) are well known—of writing for publication” (p. 353).
In other words, you can try to wrap your head around a huge amount of textual information yourself, but you can also, in parallel, process the same text with a tool.
+++++++++++++++++++++++++++++
Here are some suggestions for Computer Assisted/Aided Qualitative Data Analysis Software (CAQDAS), for both small- and large-community applications:
text mining: https://en.wikipedia.org/wiki/Text_mining Text mining, also referred to as text data mining, roughly equivalent to text analytics, is the process of deriving high-quality information from text. High-quality information is typically derived through the devising of patterns and trends through means such as statistical pattern learning. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output. https://ischool.syr.edu/infospace/2013/04/23/what-is-text-mining/
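The “structuring the input text” step in the definition above can be sketched in a few lines: tokenize the raw text, drop function words, and derive a simple pattern such as the most frequent remaining terms. The sample sentences are invented, and serious text mining adds stemming, phrase detection, and statistical modeling on top of this.

```python
import re
from collections import Counter

# A tiny stop-word list; real text mining pipelines use much larger ones.
STOP = {"the", "a", "of", "is", "and", "to", "in", "from"}

def top_terms(texts, n=3):
    """Tokenize the input texts, remove stop words, and return the n
    most frequent terms with their counts."""
    tokens = []
    for t in texts:
        tokens += [w for w in re.findall(r"[a-z]+", t.lower())
                   if w not in STOP]
    return Counter(tokens).most_common(n)

corpus = [
    "Text mining derives patterns from text.",
    "Mining structured text supports analysis.",
]
print(top_terms(corpus, 2))  # → [('text', 3), ('mining', 2)]
```

Even this toy frequency count illustrates the pipeline the definition describes: structure the input, derive a pattern, then leave interpretation to a human.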
Qualitative data is descriptive data that cannot be measured in numbers and often includes qualities of appearance like color, texture, and textual description. Quantitative data is numerical, structured data that can be measured. However, there is often slippage between the qualitative and quantitative categories. For example, a photograph might traditionally be considered “qualitative data,” but when you break it down to the level of pixels, it becomes something that can be measured.
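The photograph example can be made concrete: a grayscale image is just a grid of numbers, so a nominally “qualitative” artifact yields measurable values. The 3x3 pixel grid below is invented purely for illustration.

```python
# A 3x3 grayscale "photograph" as pixel intensities (0 = black, 255 = white).
pixels = [
    [12, 40, 200],
    [35, 90, 210],
    [20, 60, 190],
]

# Flatten the grid and compute a quantitative summary: mean brightness.
flat = [v for row in pixels for v in row]
mean_brightness = sum(flat) / len(flat)
print(round(mean_brightness, 1))  # → 95.2
```

The moment the image is represented this way, descriptive statistics, comparisons, and inferential tests all become available, which is exactly the slippage between categories described above.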
A word of caution: text mining doesn’t generate new facts and is not an end in and of itself. The process is most useful when the data it generates can be further analyzed by a domain expert, who can bring additional knowledge for a more complete picture. Still, text mining creates new relationships and hypotheses for experts to explore further.
Pros and Cons of Computer Assisted Qualitative Data Analysis Software
+++++++++++++++++++++++++
more on quantitative research:
Asamoah, D. A., Sharda, R., Hassan Zadeh, A., & Kalgotra, P. (2017). Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course. Decision Sciences Journal of Innovative Education, 15(2), 161–190. https://doi.org/10.1111/dsji.12125
++++++++++++++++++++++++
literature on quantitative research:
St. Cloud State University MC Main Collection – 2nd floor
AZ195 .B66 2015
p. 161 Data scholarship in the Humanities
p. 166 When Are Data?
Philip Chen, C. L., & Zhang, C.-Y. (2014). Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Information Sciences, 275(Supplement C), 314–347. https://doi.org/10.1016/j.ins.2014.01.015
Shortly: Limitations are influences that the researcher cannot control. They are the shortcomings, conditions, or influences that cannot be controlled by the researcher and that place restrictions on your methodology and conclusions. Any limitations that might influence the results should be mentioned. Delimitations are choices made by the researcher, which should also be mentioned; they describe the boundaries that you have set for the study. Assumptions are accepted as true, or at least plausible, by researchers and peers who will read your dissertation or thesis.