The brain is actually three brains: the ancient reptilian brain, the limbic brain, and the cortical brain. This article will focus on the limbic brain, because it may be most important to successfully using interactive video or web-based video. The limbic brain monitors the external world and the internal body, taking in information through the senses and tracking internal signals such as body temperature and blood pressure. It is the limbic brain that generates and interprets facial expressions and handles emotions, while the cortical brain handles symbolic activities such as language as well as action and strategizing. The two interact when an emotion is sent from the limbic to the cortical brain and generates a conscious thought; in response to a feeling of fear (limbic), you ask, “what should I do?” (cortical).
Direct eye contact and the ability to decipher body language are also important for sending and picking up cues about social context.
The loss of social cues is important because it may affect the quality of the content of the presentation (by not allowing timely feedback or questions) but also because students may feel less engaged and become frustrated with the interaction, and subsequently lower their assessment of the class and the instructor (Reeves & Nass, 1996). Fortunately, faculty can provide such social cues verbally, once they are aware of the importance of helping students use these new media.
Attachment theory also supports the importance of physical and emotional connections.
As many a struggling teacher knows, students are often impervious to learning new concepts. They may reproduce the new information for a test, but after time passes, they revert to the earlier (and likely wrong) information. This is referred to as the “power of mental models.” As explained in Marchese (2000), when we view a tree, it is not as if we see the tree in our head, as in a photograph.
The coping strategies of the two hemispheres are fundamentally different. The left hemisphere’s job is to create a belief system or model and to fold new experiences into that belief system. If confronted with some new information that doesn’t fit the model, it relies on Freudian defense mechanisms to deny, repress or confabulate – anything to preserve the status quo. The right hemisphere’s strategy is to play “Devil’s Advocate,” to question the status quo and look for global inconsistencies. When the anomalous information reaches a certain threshold, the right hemisphere decides that it is time to force a complete revision of the entire model and start from scratch (Ramachandran & Blakeslee, 1998, p. 136).
While much hemisphere-based research has been repudiated as an oversimplification (Gackenbach, 1999), the above description of how new information eventually overwhelms an old worldview may reflect multiple brain functions, some working to preserve our models and others to alter them, that help us both maintain and change as needed.
Self-talk is “the root of empathy, understanding, cooperation, and rules that allow us to be successful social beings. Any sense of moral behavior requires thought before action” (Ratey, 2001, p. 255).
Healy (1999) argues that based on what we know about brain development in children, new computer media may be responsible for developing brains that are largely different from the brains of adults. This is because “many brain connections have become specialized for . . . media” (p. 133); in this view, a brain formed by language and reading is different from a brain formed by hypermedia. Different media lead to different synaptic connections being laid down and reinforced, creating different brains in youngsters raised on fast-paced, visually-stimulating computer applications and video games. “Newer technologies emphasize rapid processing of visual symbols . . . and deemphasize traditional verbal learning . . . and the linear, analytic thought process . . . [making it] more difficult to deal with abstract verbal reasoning” (Healy, 1999, p. 142).
Please also find materials that might help you organize your thoughts and expedite your Chapter 2 writing….
Do you agree with (did you use) the following observations:
The purpose of the review of the literature is to prove that no one has studied the gap in the knowledge outlined in Chapter 1. The subjects in the Review of Literature should have been introduced in the Background of the Problem in Chapter 1. Chapter 2 is not a textbook of subject matter loosely related to the subject of the study. Every research study that is mentioned should in some way bear upon the gap in the knowledge, and each study that is mentioned should end with the comment that the study did not collect data about the specific gap in the knowledge of the study as outlined in Chapter 1.
The review should be laid out in major sections introduced by organizational generalizations. An organizational generalization can be a subheading so long as the last sentence of the previous section introduces the reader to what the next section will contain. The purpose of this chapter is to cite major conclusions, findings, and methodological issues related to the gap in the knowledge from Chapter 1. It is written for knowledgeable peers from easily retrievable sources of the most recent issue possible.
Empirical literature published within the previous 5 years is reviewed to prove that no mention of the specific gap in the knowledge that is the subject of the dissertation exists in the body of knowledge. Common sense should prevail. Often, to provide a history of the research, it is necessary to cite studies older than 5 years. The object is to acquaint the reader with existing studies relative to the gap in the knowledge and describe who has done the work, when and where the research was completed, and what approaches were used for the methodology, instrumentation, statistical analyses, or all of these subjects.
If very little literature exists, the wise student will write, in effect, a several-paragraph book report by citing the purpose of the study, the methodology, the findings, and the conclusions. If there is an abundance of studies, cite only the most recent studies. Firmly establish the need for the study. Defend the methods and procedures by pointing out other relevant studies that implemented similar methodologies. It should be frequently pointed out to the reader why a particular study did not match the exact purpose of the dissertation.
The Review of Literature ends with a Conclusion that clearly states that, based on the review of the literature, the gap in the knowledge that is the subject of the study has not been studied. Remember that a “summary” is different from a “conclusion.” A Summary, the final main section, introduces the next chapter.
When conducting qualitative research, how many people should be interviewed? Is there a minimum or a maximum?
Here is my take on it:
Simple question, not so simple answer.
Generally, the number of respondents depends on the type of qualitative inquiry: case study methodology, phenomenological study, ethnographic study, or ethnomethodology. However, a rule of thumb is for scholars to reach the saturation point: the point at which no fresh information is uncovered about the issue of interest to the researcher.
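The saturation rule of thumb above can even be sketched computationally: track how many new codes each successive interview contributes, and treat the data as saturated once several interviews in a row add nothing new. This is only an illustrative sketch with invented interview codes, not a substitute for the researcher's judgment:

```python
def saturation_point(interview_codes, window=2):
    """Return the 1-based index of the last interview that contributed new
    codes, once `window` consecutive interviews add nothing new.
    Returns None if saturation is never reached."""
    seen = set()
    no_new_streak = 0
    for i, codes in enumerate(interview_codes, start=1):
        new = set(codes) - seen
        seen |= set(codes)
        if new:
            no_new_streak = 0
        else:
            no_new_streak += 1
            if no_new_streak == window:
                return i - window  # last interview that added a new code
    return None

# Hypothetical coded interviews: each list holds the themes found in one interview.
interviews = [
    ["cost", "access"],
    ["access", "motivation"],
    ["cost", "trust"],
    ["trust", "access"],      # nothing new
    ["motivation", "cost"],   # nothing new -> saturated after interview 3
]
print(saturation_point(interviews))  # -> 3
```

In practice, of course, saturation is judged qualitatively while coding, not computed; the sketch just makes the “no fresh information” criterion concrete.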
If your qualitative method is designed to meet rigor and trustworthiness, thick, rich data is important. To achieve these principles you would need at least 12 interviews, ensuring your participants are the holders of knowledge in the area you intend to investigate. In grounded theory you could start with 12 and interview more if your data is not rich enough.
In IPA the norm tends to be 6 interviews.
You may check the sample sizes in peer-reviewed qualitative publications in your field to find out about common practice. It all depends on the research problem, the choice of specific qualitative approach, and the theoretical framework, so the answer to your question will vary from a few to a few dozen.
How many interviews are needed in a qualitative research?
There are different views in the literature, and no exact number is agreed upon. Here I review some of the most-cited references. Based on Creswell (2014), it is estimated that 16 participants will provide rich and detailed data. Several researchers agree that 10–15 in-depth interviews are sufficient (Guest, Bunce & Johnson, 2006; Baker & Edwards, 2012).
Your methodological choices need to reflect your ontological position and understanding of knowledge production, and that’s also where you can argue a strong case for smaller qualitative studies, as you say. This is not only a problem for certain subjects; I think it’s a problem in certain departments or journals across the board of social science research, as it’s a question of academic culture.
Here is more serious literature and research (in case you need to cite it in Chapter 3):
Sample Size and Saturation in PhD Studies Using Qualitative Interviews
Gaskell, George (2000). Individual and Group Interviewing. In Martin W. Bauer & George Gaskell (Eds.), Qualitative Researching With Text, Image and Sound. A Practical Handbook (pp. 38-56). London: SAGE Publications.
Books on intro to stat modeling are available at the library. I understand the major pain that borrowing books from the SCSU library can constitute, but you can take the titles and the authors and see if you can borrow the books from your local public library.
I also sought and shared with you “visual” explanations of the basic terms and concepts. Once you start looking at those, you should be able to research further (e.g., on YouTube) and find suitable sources for your learning style.
I (and the future cohorts) will deeply appreciate it if you remember to share those “suitable sources for your learning style,” either in this Google Group thread and/or in the comments section of the blog entry: http://blog.stcloudstate.edu/ims/2017/07/10/intro-to-stat-modeling. Your Facebook group page is also a good place to discuss among ourselves best practices for learning and using research methods for your Chapter 3.
Watching the video, you may remember the same #BooleanSearch techniques from our BI (bibliography instruction) session of last semester.
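The Boolean search techniques mentioned above (AND, OR, NOT) can be illustrated with a toy filter over document titles. This is a hypothetical sketch of the logic only, not any real search engine’s or library database’s API:

```python
def boolean_search(documents, must=(), should=(), must_not=()):
    """Toy Boolean search: every `must` term must appear (AND), at least one
    `should` term must appear if any are given (OR), and no `must_not` term
    may appear (NOT). Matching is case-insensitive substring match."""
    hits = []
    for doc in documents:
        text = doc.lower()
        if any(term.lower() in text for term in must_not):
            continue  # NOT: exclude the document outright
        if not all(term.lower() in text for term in must):
            continue  # AND: every required term must be present
        if should and not any(term.lower() in text for term in should):
            continue  # OR: at least one optional term must be present
        hits.append(doc)
    return hits

docs = [
    "Qualitative research methods in education",
    "Quantitative analysis of learning outcomes",
    "Mixed methods research design",
]
print(boolean_search(docs, must=["research"], must_not=["quantitative"]))
```

The same AND/OR/NOT reasoning is what you type into Google Scholar or a library database as `research AND methods NOT quantitative`.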
Considering the preponderance of information in 2017: your Chapter 2 is NOT ONLY about finding information regarding your topic.
Your Chapter 2 is about proving your extensive research of the existing literature.
The techniques presented in the short video will arm you with methods to dig deeper and look further.
If you would like to do a decent job exploring all corners of the vast area called Internet, please consider other search engines similar to Google Scholar:
Because the questionnaire data comprised both Likert scales and open questions, they were analyzed quantitatively and qualitatively. Textual data (open responses) were qualitatively analyzed by coding: each segment (e.g., a group of words) was assigned to a semantic reference category, as systematically and rigorously as possible. For example, “Using an iPad in class really motivates me to learn” was assigned to the category “positive impact on motivation.” The qualitative analysis was performed using an adapted version of the approaches developed by L’Écuyer (1990) and Huberman and Miles (1991, 1994). Thus, we adopted a content analysis approach using QDAMiner software, which is widely used in qualitative research (see Fielding, 2012; Karsenti, Komis, Depover, & Collin, 2011). For the quantitative analysis, we used SPSS 22.0 software to conduct descriptive and inferential statistics. We also conducted inferential statistics to further explore the iPad’s role in teaching and learning, along with its motivational effect. The results will be presented in a subsequent report (Fievez & Karsenti, 2013).
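The two analysis steps described above (coding open responses into semantic categories, and running descriptive statistics on Likert items) can be sketched in plain Python. This is not the QDAMiner/SPSS workflow the authors used; the keyword codebook below is an invented toy, since real qualitative coding is human-driven and far richer:

```python
import statistics

# Hypothetical codebook: keyword fragment -> semantic reference category.
CODEBOOK = {
    "motivat": "positive impact on motivation",
    "distract": "negative impact on attention",
}

def code_segment(segment):
    """Assign a text segment to the first matching semantic category."""
    text = segment.lower()
    for keyword, category in CODEBOOK.items():
        if keyword in text:
            return category
    return "uncoded"

# Qualitative step: code the open responses.
responses = [
    "Using an iPad in class really motivates me to learn",
    "The iPad distracts me during lectures",
]
print([code_segment(r) for r in responses])

# Quantitative step: descriptive statistics on Likert-scale items (1-5).
likert = [4, 5, 3, 4, 2, 5, 4]
print(statistics.mean(likert), statistics.stdev(likert))
```

The point of the sketch is only to show that the “mixed” in mixed methods is two distinct pipelines run over the same questionnaire: categorical coding for the open text, numeric summaries for the scales.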
The 20th-century practice of conducting qualitative research through oral interviews and then processing the results manually triggered, in the second half of the century, sometimes condescending attitudes from researchers in the exact sciences.
The reason was the advent of computing power in the second half of the 20th century, which allowed exact sciences to claim “scientific” and “data-based” results.
One such statistical package, SPSS, is today widely known and considered a magnificent tool for building solid, statistically based argumentation, which further perpetuates the perceived superiority of quantitative over qualitative methods.
At the same time, qualitative researchers continue to lag behind, mostly due to the inertia of their approach to qualitative analysis, which continues to be processed in the olden ways. While there is nothing wrong with the “olden” ways, harnessing computational power can streamline the process and even surface options that the “human eye” sometimes misses.
Below are some suggestions you may consider when you embark on the path of qualitative research.
Palys and Atchison (2012) present a compelling case to bring your qualitative research to the level of the quantitative research by using modern tools for qualitative analysis.
1. The authors rightly promote NVivo as the “Jaguar” of qualitative research tools. Be aware, however, of the existence of other “Geo Metro” tools, which, for your research, might achieve the same result (see the bottom of this blog entry).
Text mining: https://en.wikipedia.org/wiki/Text_mining. Text mining, also referred to as text data mining and roughly equivalent to text analytics, is the process of deriving high-quality information from text. High-quality information is typically derived by devising patterns and trends through means such as statistical pattern learning. Text mining usually involves structuring the input text (usually parsing, along with the addition of some derived linguistic features, the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluating and interpreting the output. See also: https://ischool.syr.edu/infospace/2013/04/23/what-is-text-mining/
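The structure-then-derive-patterns pipeline described above can be shown in miniature: tokenize raw text, drop a few stopwords, and surface the most frequent terms. A minimal stdlib sketch (real text-mining systems add parsing, linguistic features, and a database layer):

```python
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "a", "to", "is", "in", "from"}

def top_terms(text, n=3):
    """Structure raw text into tokens, then derive a simple frequency pattern."""
    tokens = re.findall(r"[a-z']+", text.lower())
    tokens = [t for t in tokens if t not in STOPWORDS]
    return Counter(tokens).most_common(n)

sample = ("Text mining derives high-quality information from text. "
          "Mining patterns in text requires structuring the input text first.")
print(top_terms(sample))
```

Even this crude frequency count illustrates the caution below: the output is a pattern for a domain expert to interpret, not a new fact in itself.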
Qualitative data is descriptive data that cannot be measured in numbers and often includes qualities of appearance like color, texture, and textual description. Quantitative data is numerical, structured data that can be measured. However, there is often slippage between qualitative and quantitative categories. For example, a photograph might traditionally be considered “qualitative data,” but when you break it down to the level of pixels, it can be measured.
A word of caution: text mining doesn’t generate new facts and is not an end in and of itself. The process is most useful when the data it generates can be further analyzed by a domain expert, who can bring additional knowledge for a more complete picture. Still, text mining creates new relationships and hypotheses for experts to explore further.
more on quantitative research:
Asamoah, D. A., Sharda, R., Hassan Zadeh, A., & Kalgotra, P. (2017). Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course. Decision Sciences Journal of Innovative Education, 15(2), 161–190. https://doi.org/10.1111/dsji.12125
literature on quantitative research:
St. Cloud State University MC Main Collection – 2nd floor
AZ195 .B66 2015
p. 161 Data scholarship in the Humanities
p. 166 When Are Data?
Philip Chen, C. L., & Zhang, C.-Y. (2014). Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Information Sciences, 275(Supplement C), 314–347. https://doi.org/10.1016/j.ins.2014.01.015
Shortly: Limitations are influences that the researcher cannot control. They are the shortcomings, conditions, or influences beyond the researcher’s control that place restrictions on your methodology and conclusions. Any limitations that might influence the results should be mentioned. Delimitations are choices made by the researcher, which should also be mentioned. They describe the boundaries that you have set for the study. Assumptions are accepted as true, or at least plausible, by researchers and peers who will read your dissertation or thesis.
What Does Recent Pedagogical Research Tell Us About eLearning Good Practice?
Many instructors indicate that they want their elearning teaching approaches to be evidence-based. Indeed, there are rich and varied sources of research on elearning good practices available in scholarly journals and government reports. However, few of us have time to keep up with these publications. In this session Christina Petersen will do some of that work for you. She will summarize findings from recent government and university reports that review over 1,000 online learning studies. Additionally, she will summarize findings from newly published articles in pedagogical journals with important information about good practices in online education. These practices address evidence-based methods for promoting student engagement in online courses, good practices for video production, and other topics related to online teaching. We will discuss the importance of all of these findings for your teaching.
Christina Petersen is an Education Program Specialist in the Center for Educational Innovation at the University of Minnesota, where she partners with faculty and departments to help create and redesign courses and curriculum to promote maximal student learning. She facilitates a monthly Pedagogical Innovations Journal Club at the CEI. She has a PhD in Pharmacology, and her teaching experience includes undergraduate courses in Pharmacology and graduate courses in Higher Education pedagogy. Her teaching interests include integrating active learning into science courses, teaching in active learning classrooms, and evidence-based teaching practice. She is co-author of a soon-to-be-released book from Stylus, “A Guide to Teaching in Active Learning Classrooms.”
Discussion on the EDUCAUSE Blended and Online Learning Group’s listserv
I head an instructional design unit, and we’ve been noticing that instructors with no experience in online teaching seem to struggle to teach in a blended environment. They get easily confused about 1) how to decide which content is best suited for in class and which should go online, and 2) how to bridge the two modalities to create a seamless and rich learning environment.
Oregon State University has a hybrid course design program that is a partnership between OSU’s Ecampus and our Center for Teaching and Learning. You can find quite a few resources here: http://ctl.oregonstate.edu/hybrid-learning
Shannon Riggs Director, Course Development and Training Oregon State University Ecampus 4943 Valley Library Corvallis, OR 97331-4504 541.737.2613
You might find my recent book The Blended Course Design Workbook: A Practical Guide to be a helpful resource. Each chapter has a literature review of the relevant research as well as activities to guide faculty through the various components of blended course design. You can read the first chapter on the fundamentals of blended teaching and learning at the publisher website. The book also has a companion website with additional resources here: http://www.bcdworkbook.com.
Katie Linder Research Director Extended Campus, Oregon State University 4943 The Valley Library Corvallis, Oregon 97331 Phone 541-737-4629 | Fax 541-737-2734 Email: email@example.com Twitter: @ECResearchUnit & @RIA_podcast Check out the Research in Action podcast: ecampus.oregonstate.edu/podcast
Roberts, C. (2010). The Dissertation Journey: A Practical and Comprehensive Guide to Planning, Writing, and Defending Your Dissertation. Thousand Oaks, CA: Corwin.
Purpose and scope
We talked about “themes” and the need to be careful when breaking them into “subthemes”: if you do a historical overview, avoid chunking it into “dates” and rather keep the thematic relation. Make sure that the themes relate to your topic; that’s why it is good to keep your title (even if preliminary), outline (even if in progress), thesis (even if under work), etc., on the first page of your Chapter 2 manuscript/draft.
Focus the purpose of your study more precisely.
Avoid postponing finalizing the title, the thesis, the outline.