Searching for "deep learning"

NLP and ACL

NLP – natural language processing; ACL – Association for Computational Linguistics (ACL 2019)

Major trends in NLP: a review of 20 years of ACL research

Janna Lipenkova, July 23, 2019

https://www.linkedin.com/pulse/major-trends-nlp-review-20-years-acl-research-janna-lipenkova

The 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019)

 Data: working around the bottlenecks

Large data is inherently noisy. In general, the more “democratic” the production channel, the dirtier the data – which means that more effort has to be spent on cleaning it. For example, data from social media will require a longer cleaning pipeline. Among other things, you will need to deal with extravagances of self-expression like smileys and irregular punctuation, which are normally absent in more formal settings such as scientific papers or legal contracts.

The other major challenge is the labeled data bottleneck. On the one hand, it is being addressed with crowd-sourcing and Training Data as a Service (TDaaS). On the other hand, a range of automatic workarounds for the creation of annotated datasets have also been suggested in the machine learning community.

Algorithms: a chain of disruptions in Deep Learning

Neural Networks are the workhorse of Deep Learning (cf. Goldberg and Hirst (2017) for an introduction to the basic architectures in the NLP context). Convolutional Neural Networks have seen increasing use in the past years, whereas the popularity of the traditional Recurrent Neural Network (RNN) is dropping. This is due, on the one hand, to the availability of more efficient RNN-based architectures such as the LSTM and the GRU. On the other hand, a new and rather disruptive mechanism for sequential processing – attention – was introduced shortly after the sequence-to-sequence (seq2seq) model of Sutskever et al. (2014), in the neural translation model of Bahdanau et al. (2015).
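
To make the attention idea concrete, here is a minimal NumPy sketch of dot-product attention over a toy encoder. The dimensions and random matrices are illustrative stand-ins only, not part of the article or of any particular published model:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_attention(query, keys, values):
    """Return an attention-weighted mix of `values`.

    query:  (d,)   e.g., a decoder state at one time step
    keys:   (T, d) e.g., the encoder states
    values: (T, d) often identical to the keys
    """
    scores = keys @ query / np.sqrt(query.shape[0])  # similarity of the query to each input position
    weights = softmax(scores)                        # attention distribution over the T positions
    context = weights @ values                       # weighted summary of the inputs
    return context, weights

rng = np.random.default_rng(0)
T, d = 5, 8                             # 5 encoder steps, 8-dimensional states (toy sizes)
keys = values = rng.normal(size=(T, d))
query = rng.normal(size=d)

context, weights = dot_product_attention(query, keys, values)
print("attention weights:", np.round(weights, 3))  # non-negative, sums to 1
```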

Consolidating various NLP tasks

The three “global” NLP development curves are syntax, semantics and context awareness; the third curve – awareness of a larger context – has already become one of the main drivers behind new Deep Learning algorithms.

A note on multilingual research

Think of different languages as different lenses through which we view the same world – they share many properties, a fact that is fully accommodated by modern learning algorithms with their increasing power for abstraction and generalization.

Spurred by the global AI hype, the NLP field is exploding with new approaches and disruptive improvements. There is a shift towards modeling meaning and context dependence, probably the most universal and challenging fact of human language. The generalisation power of modern algorithms allows for efficient scaling across different tasks, languages and datasets, thus significantly speeding up the ROI cycle of NLP developments and allowing for a flexible and efficient integration of NLP into individual business scenarios.

Policy for Artificial Intelligence

Law is Code: Making Policy for Artificial Intelligence

Jules Polonetsky and Omer Tene, January 16, 2019

https://www.ourworld.co/law-is-code-making-policy-for-artificial-intelligence/

Twenty years have passed since renowned Harvard Professor Larry Lessig coined the phrase “Code is Law”, suggesting that in the digital age, computer code regulates behavior much like legislative code traditionally did.  These days, the computer code that powers artificial intelligence (AI) is a salient example of Lessig’s statement.

  • Good AI requires sound data.  One of the principles,  some would say the organizing principle, of privacy and data protection frameworks is data minimization.  Data protection laws require organizations to limit data collection to the extent strictly necessary and retain data only so long as it is needed for its stated goal. 
  • Preventing discrimination – intentional or not.
    When is a distinction between groups permissible or even merited, and when is it untoward? How should organizations address historically entrenched inequalities that are embedded in data? New mathematical theories such as “fairness through awareness” enable sophisticated modeling to guarantee statistical parity between groups (a minimal parity check is sketched after this list).
  • Assuring explainability – technological due process.  In privacy and freedom of information frameworks alike, transparency has traditionally been a bulwark against unfairness and discrimination.  As Justice Brandeis once wrote, “Sunlight is the best of disinfectants.”
  • Deep learning means that iterative computer programs derive conclusions for reasons that may not be evident even after forensic inquiry. 
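
The “fairness through awareness” line of work (Dwork et al., 2012) formalizes notions such as statistical parity. As a rough, hypothetical illustration (not the authors’ construction), a minimal check of statistical parity for a binary decision rule can be written in a few lines of Python:

```python
import numpy as np

def statistical_parity_gap(y_pred, group):
    """Difference in positive-decision rates between two groups.

    y_pred: array of 0/1 model decisions
    group:  array of 0/1 group membership (e.g., a protected attribute)
    A gap near 0 means the decision rate is roughly group-blind.
    """
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

# Synthetic decisions, deliberately skewed against group 0.
rng = np.random.default_rng(42)
group = rng.integers(0, 2, size=1000)
y_pred = (rng.random(1000) < (0.3 + 0.2 * group)).astype(int)

print(f"statistical parity gap: {statistical_parity_gap(y_pred, group):+.3f}")
```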

Yet even with code as law and a rising need for law in code, policymakers do not need to become mathematicians, engineers and coders.  Instead, institutions must develop and enhance their technical toolbox by hiring experts and consulting with top academics, industry researchers and civil society voices.  Responsible AI requires access to not only lawyers, ethicists and philosophers but also to technical leaders and subject matter experts to ensure an appropriate balance between economic and scientific benefits to society on the one hand and individual rights and freedoms on the other hand.

+++++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=artificial+intelligence

shaping the future of AI

Shaping the Future of A.I.

Daniel Burrus

https://www.linkedin.com/pulse/shaping-future-ai-daniel-burrus/

Way back in 1983, I identified A.I. as one of 20 exponential technologies that would increasingly drive economic growth for decades to come.

Artificial intelligence applies to computing systems designed to perform tasks usually reserved for human intelligence using logic, if-then rules, decision trees and machine learning to recognize patterns from vast amounts of data, provide insights, predict outcomes and make complex decisions. A.I. can be applied to pattern recognition, object classification, language translation, data translation, logistical modeling and predictive modeling, to name a few. It’s important to understand that all A.I. relies on vast amounts of quality data and advanced analytics technology. The quality of the data used will determine the reliability of the A.I. output.

Machine learning is a subset of A.I. that utilizes advanced statistical techniques to enable computing systems to improve at tasks with experience over time. Chatbots like Amazon’s Alexa, Apple’s Siri, or any of the others from companies like Google and Microsoft all get better every year thanks to all of the use we give them and the machine learning that takes place in the background.

Deep learning is a subset of machine learning that uses advanced algorithms to enable an A.I. system to train itself to perform tasks by exposing multi-layered neural networks to vast amounts of data, then using what has been learned to recognize new patterns contained in the data. Learning can be human-supervised learning, unsupervised learning and/or reinforcement learning, as Google used with DeepMind to learn how to beat humans at the complex game Go. Reinforcement learning will drive some of the biggest breakthroughs.
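
As a toy illustration of what “exposing multi-layered neural networks to data” means in practice (an invented example, far removed from the vast datasets the article describes), the following sketch trains a two-layer network on the classic XOR problem with plain gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a tiny pattern-recognition task that no single-layer model can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two weight layers = one hidden layer: "multi-layered" in miniature.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: compute predictions from the current weights.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error w.r.t. each weight.
    err = (p - y) * p * (1 - p)
    hidden_err = (err @ W2.T) * h * (1 - h)
    # "Training itself": nudge every weight to reduce the error.
    W2 -= lr * (h.T @ err);        b2 -= lr * err.sum(axis=0)
    W1 -= lr * (X.T @ hidden_err); b1 -= lr * hidden_err.sum(axis=0)

print(np.round(p.ravel(), 2))  # approaches [0, 1, 1, 0]
```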

Autonomous computing uses advanced A.I. tools such as deep learning to enable systems to be self-governing and capable of acting according to situational data without human command. A.I. autonomy includes perception, high-speed analytics, machine-to-machine communications and movement. For example, autonomous vehicles use all of these in real time to successfully pilot a vehicle without a human driver.

Augmented thinking: Over the next five years and beyond, A.I. will become increasingly embedded at the chip level into objects, processes, products and services, and humans will augment their personal problem-solving and decision-making abilities with the insights A.I. provides to get to a better answer faster.

Technology is not good or evil; it is how we as humans apply it. Since we can’t stop the increasing power of A.I., I want us to direct its future, putting it to the best possible use for humans.

++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=artifical+intelligence

more on deep learning in this IMS blog
https://blog.stcloudstate.edu/ims?s=deep+learning

IRDL proposal

Applications for the 2018 Institute will be accepted between December 1, 2017 and January 27, 2018. Scholars accepted to the program will be notified in early March 2018.

Title:

Learning to Harness Big Data in an Academic Library

Abstract (200)

Research on Big Data per se, as well as on the importance and organization of the process of Big Data collection and analysis, is well underway. The complexity of the process comprising “Big Data,” however, deprives organizations of a ubiquitous “blueprint.” The planning, structuring, administration and execution of the process of adopting Big Data in an organization, be it a corporate or an educational one, remains elusive. No less elusive is the adoption of Big Data practices among libraries themselves. Seeking the commonalities and differences in the adoption of Big Data practices among libraries may be a suitable start to help libraries transition to Big Data adoption and to restructuring organizational and daily activities based on Big Data decisions.
Introduction to the problem. Limitations

The redefinition of humanities scholarship has received major attention in higher education. The advent of the digital humanities challenges aspects of academic librarianship. Data literacy is a critical need for the digital humanities in academia. The March 2016 Library Juice Academy webinar led by John Russel exemplifies the efforts to help librarians become versed in obtaining programming skills and, respectively, in handling data. Those are first steps on a rather long path toward building a robust infrastructure to collect, analyze, and interpret data intelligently, so it can be utilized to restructure daily and strategic activities. Since the phenomenon of Big Data is young, there is a lack of blueprints for the organization of such infrastructure. A collection and sharing of best practices is an efficient approach to establishing a feasible plan for setting up a library infrastructure for the collection, analysis, and implementation of Big Data.
Limitations. This research can only organize the results from librarians’ responses and from research into how libraries present themselves to the world in this arena. It may be able to make some rudimentary recommendations. However, based on each library’s specific goals and tasks, further research and work will be needed.

 

 

Research Literature

“Big data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it…”
– Dan Ariely, 2013  https://www.asist.org/publications/bulletin/aprilmay-2017/big-datas-impact-on-privacy-for-librarians-and-information-professionals/

Big Data is becoming an omnipresent term. It is widespread among different disciplines in academia (De Mauro, Greco, & Grimaldi, 2016). This leads to “inconsistency in meanings and necessity for formal definitions” (De Mauro et al., 2016, p. 122). Similarly to De Mauro et al. (2016), Hashem, Yaqoob, Anuar, Mokhtar, Gani and Ullah Khan (2015) seek standardization of definitions. The main connected “themes” of this phenomenon must be identified, and the connections to Library Science must be sought. A prerequisite for a comprehensive definition is the identification of Big Data methods. Bughin, Chui, and Manyika (2010), Chen et al. (2012) and De Mauro et al. (2015) single out the methods needed to complete the process of building a comprehensive definition.

In conjunction with identifying the methods, volume, velocity, and variety, as defined by Laney (2001), are the three properties of Big Data accepted across the literature. Daniel (2015) defines three stages in Big Data: collection, analysis, and visualization. According to Daniel (2015), Big Data in higher education “connotes the interpretation of a wide range of administrative and operational data” (p. 910), and according to Hilbert (2013), as cited in Daniel (2015), Big Data “delivers a cost-effective prospect to improve decision making” (p. 911).

The importance of understanding the process of Big Data analytics is well understood in academic libraries. Examples of such “administrative and operational” use for cost-effective improvement of decision making are the Finch & Flenner (2016) and Eaton (2017) case studies of the use of data visualization to assess an academic library collection and restructure the acquisition process. Sugimoto, Ding & Thelwall (2012) call for a discussion of Big Data for libraries. According to the 2017 NMC Horizon Report, “Big Data has become a major focus of academic and research libraries due to the rapid evolution of data mining technologies and the proliferation of data sources like mobile devices and social media” (Adams Becker et al., 2017, p. 38).

Power (2014) elaborates on the complexity of Big Data in regard to decision-making and offers ideas for organizations on building a system to deal with Big Data. As explained by Boyd and Crawford (2012) and cited in De Mauro et al (2016), there is a danger of a new digital divide among organizations with different access and ability to process data. Moreover, Big Data impacts current organizational entities in their ability to reconsider their structure and organization. The complexity of institutions’ performance under the impact of Big Data is further complicated by the change of human behavior, because, arguably, Big Data affects human behavior itself (Schroeder, 2014).

De Mauro et al. (2015) touch on the impact of Big Data on libraries. The reorganization of academic libraries in light of Big Data, and the handling of Big Data by libraries, is in close conjunction with the reorganization of the entire campus and the handling of Big Data by the educational institution. In addition to the disruption posed by the Big Data phenomenon, higher education is facing global changes of an economic, technological, social, and educational character. Daniel (2015) uses a chart to illustrate the complexity of these global trends. Parallel to the Big Data developments in America and Asia, the European Union is offering access to an EU open data portal (https://data.europa.eu/euodp/home). Moreover, the Association of European Research Libraries expects, under the H2020 program, to increase “the digitization of cultural heritage, digital preservation, research data sharing, open access policies and the interoperability of research infrastructures” (Reilly, 2013).

The challenges posed by Big Data to human and social behavior (Schroeder, 2014) are no less significant than the impact of Big Data on learning. Cohen, Dolan, Dunlap, Hellerstein, & Welton (2009) propose a road map for “more conservative organizations” (p. 1492) to overcome their reservations and/or inability to handle Big Data and adopt a practical approach to the complexity of Big Data. Two Chinese researchers define deep learning as the “set of machine learning techniques that learn multiple levels of representation in deep architectures” (Chen & Lin, 2014, p. 515). Deep learning requires “new ways of thinking and transformative solutions” (Chen & Lin, 2014, p. 523). Another pair of researchers from China present a broad overview of the various societal, business and administrative applications of Big Data, including a detailed account and definitions of the processes and tools accompanying Big Data analytics (Philip Chen & Zhang, 2014). Their American counterparts are of the same opinion when it comes to thinking “about the core principles and concepts that underline the techniques, and also the systematic thinking” (Provost & Fawcett, 2013, p. 58). De Mauro, Greco, and Grimaldi (2016), similarly to Provost and Fawcett (2013), draw attention to the urgent necessity to train new types of specialists to work with such data. As early as 2012, Davenport and Patil (2012), as cited in De Mauro et al. (2016), envisioned hybrid specialists able to manage both technological knowledge and academic research. Similarly, Provost and Fawcett (2013) mention the efforts of “academic institutions scrambling to put together programs to train data scientists” (p. 51). Further, Asamoah, Sharda, Hassan Zadeh & Kalgotra (2017) share a specific plan for the design and delivery of a Big Data analytics course. At the same time, librarians working with data acknowledge the shortcomings in the profession, since librarians “are practitioners first and generally do not view usability as a primary job responsibility, usually lack the depth of research skills needed to carry out a fully valid” data-based research (Emanuel, 2013, p. 207).

Borgman (2015) devotes an entire book to data and scholarly research and goes beyond the already well-established facts regarding the importance of Big Data, the implications of Big Data, and the technical, societal, and educational impact and complications posed by Big Data. Borgman elucidates the importance of knowledge infrastructure and the necessity to understand the importance and complexity of building such infrastructure, in order to be able to take advantage of Big Data. In a similar fashion, a team of Chinese scholars draws attention to the complexity of data mining and Big Data and the necessity to approach the issue in an organized fashion (Wu, Zhu, Wu, & Ding, 2014).

Bruns (2013) shifts the conversation from the “macro” architecture of Big Data, as focused on by Borgman (2015) and Wu et al. (2014), and ponders the influx of, and unprecedented opportunities for, the humanities in academia with the advent of Big Data. Does the seeming omnipresence of Big Data mean for the humanities a “railroading” into “scientificity”? How will research and publishing change with the advent of Big Data across academic disciplines?

Reyes (2015) shares her “skinny” approach to Big Data in education. She presents a comprehensive structure for educational institutions to shift “traditional” analytics to “learner-centered” analytics (p. 75) and identifies the participants in the Big Data process in the organization. The model is applicable for library use.

Being new and uncharted territory, Big Data and Big Data analytics can pose ethical issues. Willis (2013) focuses on Big Data application in education, namely the ethical questions for higher education administrators and the expectations of Big Data analytics to predict students’ success. Daries, Reich, Waldo, Young, and Whittinghill (2014) discuss rather similar issues regarding the balance between data and student privacy regulations. The privacy issues accompanying data are also discussed by Tene and Polonetsky (2012).

Privacy issues are habitually connected to security and surveillance issues. Andrejevic and Gates (2014) point out that in decision making “generated by data mining, the focus is not on particular individuals but on aggregate outcomes” (p. 195). Van Dijck (2014) goes into further detail regarding the perils posed by metadata and data to society, in particular to the privacy of citizens. Bail (2014) addresses the same issue regarding the impact of Big Data on societal issues, but underlines the leading role of cultural sociologists and their theories in the correct application of Big Data.

Library organizations have been traditional proponents of core democratic values such as the protection of privacy and the elucidation of related ethical questions (Miltenoff & Hauptman, 2005). In recent books about Big Data and libraries, ethical issues are an important part of the discussion (Weiss, 2018). Library blogs also discuss these issues (Harper & Oltmann, 2017). An academic library’s role is to educate its patrons about those values. Sugimoto et al. (2012) reflect on the need for discussion about Big Data in Library and Information Science. They clearly draw attention to the library “tradition of organizing, managing, retrieving, collecting, describing, and preserving information” (p. 1) as well as to library and information science being “a historically interdisciplinary and collaborative field, absorbing the knowledge of multiple domains and bringing the tools, techniques, and theories” (p. 1). Sugimoto et al. (2012) sought a wide discussion among the library profession regarding the implications of Big Data for the profession, no differently from the activities in other fields (e.g., Wixom et al., 2014). A current Andrew W. Mellon Foundation grant for Visualizing Digital Scholarship in Libraries seeks an opportunity to view “both macro and micro perspectives, multi-user collaboration and real-time data interaction, and a limitless number of visualization possibilities – critical capabilities for rapidly understanding today’s large data sets” (Hwangbo, 2014).

The importance of the library with its traditional roles, as described by Sugimoto et al (2012) may continue, considering the Big Data platform proposed by Wu, Wu, Khabsa, Williams, Chen, Huang, Tuarob, Choudhury, Ororbia, Mitra, & Giles (2014). Such platforms will continue to emerge and be improved, with librarians as the ultimate drivers of such platforms and as the mediators between the patrons and the data generated by such platforms.

Every library needs to find its place in the large organization and in society in regard to this very new and very powerful phenomenon called Big Data. Libraries might not have the trained staff to become a leader in the process of organizing and building the complex mechanism of this new knowledge architecture, but librarians must educate and train themselves to be worthy participants in this new establishment.

 

Method

 

The study will be cleared by the SCSU IRB.
The survey will collect responses from the library population regarding its readiness to use, and its actual use of, Big Data. The survey URL will be sent to (academic?) libraries around the world.

Data will be processed through SPSS. Open-ended results will be processed manually. The preliminary research design presupposes a mixed-method approach.

The study will include the use of closed-ended survey response questions and open-ended questions. The first part of the study (closed-ended, quantitative questions) will be completed through an online survey. Participants will be asked to complete the survey using a link they receive through e-mail.

Mixed methods research was defined by Johnson and Onwuegbuzie (2004) as “the class of research where the researcher mixes or combines quantitative and qualitative research techniques, methods, approaches, concepts, or language into a single study” (p. 17). Quantitative and qualitative methods can be combined if used to complement each other, because the methods can measure different aspects of the research questions (Sale, Lohfeld, & Brazil, 2002).

 

Sampling design

 

  • Online survey of 10-15 questions, with 3-5 demographic questions and the rest regarding the use of tools.
  • 1-2 open-ended questions at the end of the survey to probe for a follow-up mixed-method approach (an opportunity for a qualitative study).
  • Data analysis techniques: survey results will be exported to SPSS and analyzed accordingly; the final survey design will determine the appropriate statistical approach (a toy illustration of this step follows this list).
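
The proposal specifies SPSS; purely as a language-neutral illustration of this quantitative step (with invented column names and synthetic responses standing in for the actual export), a first descriptive pass might look like this in Python:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the exported survey file; the real column
# names would come from the final instrument.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "library_type": rng.choice(["academic", "public", "special"], size=120),
    "uses_big_data": rng.integers(1, 6, size=120),   # Likert 1=never .. 5=daily
    "staff_trained": rng.integers(1, 6, size=120),   # Likert 1=none .. 5=extensive
})

# Descriptive statistics per demographic group; open-ended answers
# would be coded manually, per the mixed-method design.
summary = (df.groupby("library_type")[["uses_big_data", "staff_trained"]]
             .agg(["mean", "std", "count"]))
print(summary.round(2))
```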

 

Project Schedule

 

Complete literature review and identify areas of interest – two months

Prepare and test instrument (survey) – one month

IRB and other details – one month

Generate a list of potential libraries to distribute the survey – one month

Contact libraries; follow up and contact again if necessary (in case of a low response rate) – one month

Collect and analyze data – two months

Write out data findings – one month

Complete manuscript – one month

Proofreading and other details – one month

 

Significance of the work 

While it has been widely acknowledged that Big Data (and its handling) is changing higher education (https://blog.stcloudstate.edu/ims?s=big+data) as well as academic libraries (https://blog.stcloudstate.edu/ims/2016/03/29/analytics-in-education/), it remains nebulous how Big Data is handled in the academic library and, respectively, how it is related to the handling of Big Data on campus. Moreover, the visualization of Big Data between units on campus remains in progress, along with any policymaking based on the analysis of such data (hence the need for comprehensive visualization).

 

This research will aim to gain an understanding of: a. how librarians are handling Big Data; b. how they are relating their Big Data output to the campus output of Big Data; and c. how librarians in particular and campus administration in general are tuning their practices based on the analysis.

Based on the survey returns (if there is a statistically significant return), this research might consider juxtaposing the practices of academic libraries with practices of special libraries (especially corporate libraries), public libraries, and school libraries.

 

 

References:

 

Adams Becker, S., Cummins, M., Davis, A., Freeman, A., Giesinger Hall, C., Ananthanarayanan, V., … Wolfson, N. (2017). NMC Horizon Report: 2017 Library Edition.

Andrejevic, M., & Gates, K. (2014). Big Data Surveillance: Introduction. Surveillance & Society, 12(2), 185–196.

Asamoah, D. A., Sharda, R., Hassan Zadeh, A., & Kalgotra, P. (2017). Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course. Decision Sciences Journal of Innovative Education, 15(2), 161–190. https://doi.org/10.1111/dsji.12125

Bail, C. A. (2014). The cultural environment: measuring culture with big data. Theory and Society, 43(3–4), 465–482. https://doi.org/10.1007/s11186-014-9216-5

Borgman, C. L. (2015). Big Data, Little Data, No Data: Scholarship in the Networked World. MIT Press.

Bruns, A. (2013). Faster than the speed of print: Reconciling ‘big data’ social media analysis and academic scholarship. First Monday, 18(10). Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/4879

Bughin, J., Chui, M., & Manyika, J. (2010). Clouds, big data, and smart assets: Ten tech-enabled business trends to watch. McKinsey Quarterly, 56(1), 75–86.

Chen, X. W., & Lin, X. (2014). Big Data Deep Learning: Challenges and Perspectives. IEEE Access, 2, 514–525. https://doi.org/10.1109/ACCESS.2014.2325029

Cohen, J., Dolan, B., Dunlap, M., Hellerstein, J. M., & Welton, C. (2009). MAD Skills: New Analysis Practices for Big Data. Proc. VLDB Endow., 2(2), 1481–1492. https://doi.org/10.14778/1687553.1687576

Daniel, B. (2015). Big Data and analytics in higher education: Opportunities and challenges. British Journal of Educational Technology, 46(5), 904–920. https://doi.org/10.1111/bjet.12230

Daries, J. P., Reich, J., Waldo, J., Young, E. M., Whittinghill, J., Ho, A. D., … Chuang, I. (2014). Privacy, Anonymity, and Big Data in the Social Sciences. Commun. ACM, 57(9), 56–63. https://doi.org/10.1145/2643132

De Mauro, A., Greco, M., & Grimaldi, M. (2016). A formal definition of Big Data based on its essential features. Library Review, 65(3), 122–135. https://doi.org/10.1108/LR-06-2015-0061

De Mauro, A., Greco, M., & Grimaldi, M. (2015). What is big data? A consensual definition and a review of key research topics. AIP Conference Proceedings, 1644(1), 97–104. https://doi.org/10.1063/1.4907823

Dumbill, E. (2012). Making Sense of Big Data. Big Data, 1(1), 1–2. https://doi.org/10.1089/big.2012.1503

Eaton, M. (2017). Seeing Library Data: A Prototype Data Visualization Application for Librarians. Publications and Research. Retrieved from http://academicworks.cuny.edu/kb_pubs/115

Emanuel, J. (2013). Usability testing in libraries: methods, limitations, and implications. OCLC Systems & Services: International Digital Library Perspectives, 29(4), 204–217. https://doi.org/10.1108/OCLC-02-2013-0009

Graham, M., & Shelton, T. (2013). Geography and the future of big data, big data and the future of geography. Dialogues in Human Geography, 3(3), 255–261. https://doi.org/10.1177/2043820613513121

Harper, L., & Oltmann, S. (2017, April 2). Big Data’s Impact on Privacy for Librarians and Information Professionals. Retrieved November 7, 2017, from https://www.asist.org/publications/bulletin/aprilmay-2017/big-datas-impact-on-privacy-for-librarians-and-information-professionals/

Hashem, I. A. T., Yaqoob, I., Anuar, N. B., Mokhtar, S., Gani, A., & Ullah Khan, S. (2015). The rise of “big data” on cloud computing: Review and open research issues. Information Systems, 47(Supplement C), 98–115. https://doi.org/10.1016/j.is.2014.07.006

Hwangbo, H. (2014, October 22). The future of collaboration: Large-scale visualization. Retrieved November 7, 2017, from http://usblogs.pwc.com/emerging-technology/the-future-of-collaboration-large-scale-visualization/

Laney, D. (2001, February 6). 3D Data Management: Controlling Data Volume, Velocity, and Variety.

Miltenoff, P., & Hauptman, R. (2005). Ethical dilemmas in libraries: an international perspective. The Electronic Library, 23(6), 664–670. https://doi.org/10.1108/02640470510635746

Philip Chen, C. L., & Zhang, C.-Y. (2014). Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Information Sciences, 275(Supplement C), 314–347. https://doi.org/10.1016/j.ins.2014.01.015

Power, D. J. (2014). Using ‘Big Data’ for analytics and decision support. Journal of Decision Systems, 23(2), 222–228. https://doi.org/10.1080/12460125.2014.888848

Provost, F., & Fawcett, T. (2013). Data Science and its Relationship to Big Data and Data-Driven Decision Making. Big Data, 1(1), 51–59. https://doi.org/10.1089/big.2013.1508

Reilly, S. (2013, December 12). What does Horizon 2020 mean for research libraries? Retrieved November 7, 2017, from http://libereurope.eu/blog/2013/12/12/what-does-horizon-2020-mean-for-research-libraries/

Reyes, J. (2015). The skinny on big data in education: Learning analytics simplified. TechTrends: Linking Research & Practice to Improve Learning, 59(2), 75–80. https://doi.org/10.1007/s11528-015-0842-1

Schroeder, R. (2014). Big Data and the brave new world of social media research. Big Data & Society, 1(2), 2053951714563194. https://doi.org/10.1177/2053951714563194

Sugimoto, C. R., Ding, Y., & Thelwall, M. (2012). Library and information science in the big data era: Funding, projects, and future [a panel proposal]. Proceedings of the American Society for Information Science and Technology, 49(1), 1–3. https://doi.org/10.1002/meet.14504901187

Tene, O., & Polonetsky, J. (2012). Big Data for All: Privacy and User Control in the Age of Analytics. Northwestern Journal of Technology and Intellectual Property, 11, [xxvii]-274.

van Dijck, J. (2014). Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology. Surveillance & Society; Newcastle upon Tyne, 12(2), 197–208.

Waller, M. A., & Fawcett, S. E. (2013). Data Science, Predictive Analytics, and Big Data: A Revolution That Will Transform Supply Chain Design and Management. Journal of Business Logistics, 34(2), 77–84. https://doi.org/10.1111/jbl.12010

Weiss, A. (2018). Big Data Shocks: An Introduction to Big Data for Librarians and Information Professionals. Rowman & Littlefield Publishers. Retrieved from https://rowman.com/ISBN/9781538103227/Big-Data-Shocks-An-Introduction-to-Big-Data-for-Librarians-and-Information-Professionals

West, D. M. (2012). Big data for education: Data mining, data analytics, and web dashboards. Governance Studies at Brookings, 4, 1–0.

Willis, J. (2013). Ethics, Big Data, and Analytics: A Model for Application. Educause Review Online. Retrieved from https://docs.lib.purdue.edu/idcpubs/1

Wixom, B., Ariyachandra, T., Douglas, D. E., Goul, M., Gupta, B., Iyer, L. S., … Turetken, O. (2014). The current state of business intelligence in academia: The arrival of big data. CAIS, 34, 1.

Wu, X., Zhu, X., Wu, G. Q., & Ding, W. (2014). Data mining with big data. IEEE Transactions on Knowledge and Data Engineering, 26(1), 97–107. https://doi.org/10.1109/TKDE.2013.109

Wu, Z., Wu, J., Khabsa, M., Williams, K., Chen, H. H., Huang, W., … Giles, C. L. (2014). Towards building a scholarly big data platform: Challenges, lessons and opportunities. In IEEE/ACM Joint Conference on Digital Libraries (pp. 117–126). https://doi.org/10.1109/JCDL.2014.6970157

 

+++++++++++++++++
more on big data





Virtual Augmented Mixed Reality

11 Ed Tech Trends to Watch in 2017
Five higher ed leaders analyze the hottest trends in education technology this year.

http://pdf.101com.com/CampusTech/2017/701921020/CAM_1702DG.pdf

new forms of human-computer interaction (HCI) such as augmented reality (AR), virtual reality (VR) and mixed reality (MR).
p. 21
combining AR/VR/MR with cognitive computing and artificial intelligence (AI) technologies (such as machine learning, deep learning, natural language processing and chatbots).
Some thought-provoking questions include:
  • Will remote workers be able to be seen and interacted with via their holograms (i.e., attending their meetings virtually)? What would this mean for remote learners?
  • Will our smartphones increasingly allow us to see information overlaid on the real world? (Think Pokémon Go, but putting that sort of technology into a vast array of different applications, many of which could be educational in nature)
  • How do/will these new forms of HCI impact how we design our learning spaces?
  • Will students be able to pick their preferred learning setting (i.e., studying by a brook or stream or in a virtual Starbucks-like atmosphere)?
  • Will more devices/platforms be developed that combine the power of AI with VR/AR/MR-related experiences? For example, will students be able to issue a verbal question or command to be able to see and experience walking around ancient Rome?
  • Will there be many new types of learning experiences, like what Microsoft was able to achieve in its collaboration with Case Western Reserve University [OH]? Its HoloLens product transforms the way human anatomy can be taught.

p. 22 Extensive costs for VR design and development drive the need for collaborative efforts.

Case Western Reserve University demonstrates a collaboration with the Cleveland Clinic and Microsoft to create active multi-dimensional learning using holography.

the development of more affordable high-quality virtual reality solutions.

An AR game developed by the Salzburg University of Applied Sciences [Austria] (http://www.fh-salzburg.ac.at/en/) teaches about sustainability, the environment and living green.
Whether using AR for a gamified course or to acclimate new students to campus, the trend will continue into 2017.

++++++++++++++++++++++++++++++

15 Tech Tool Favorites From ISTE 2016

list of resources that can help educators find what they need

Google Expeditions
This virtual reality field trip tool works in conjunction with Google Cardboard and has just been officially released. The app allows teachers to guide students through an exploration of 200 (and growing) historical sites and natural resources in an immersive, three-dimensional experience. The app only works on Android devices and is free.

Flippity
This app works in conjunction with Google Sheets and allows teachers to easily make a Jeopardy-style game.

Google Science Journal
This Android app allows users to do science experiments with mobile phones. Students can use sensors in the phone or connect external sensors to collect data, but can also take notes on observations, analyze and annotate within the app.

Google Cast
This simple app solves issues of disparate devices in the classroom. When students download the app, they can project from their devices onto the screen at the front of the room easily. “You don’t have to have specific hardware, you just have to have Wi-Fi,”

Constitute
This site hosts a database of constitutions from around the world. Anything digitally available has been aggregated here. It is searchable by topic and will pull out specific excerpts related to search terms like “freedom of speech.”

YouTube
A database of YouTube channels by subject to help educators with discoverability (hint: subjects are arranged by tab along the bottom of the document).

Zygote Body
This freemium tool has a lot of functionality in the free version, allowing students to view different parts of human anatomy and dig into how various body systems work.
Pixlr
This app has less power than Photoshop, but is free and fairly sophisticated. It works directly with Google accounts, so students can store files there.

Build With Chrome
This extension to the Chrome browser lets kids play with digital blocks like Legos. Based on the computer’s IP address, the software assigns users a plot of land on which to build nearby. There’s a Build Academy to learn how to use the various tools within the program, but then students can make whatever they want.

Google CS First
Built on Scratch’s programming language, this easy tool gives step-by-step instructions to get started and is great for the hesitant teacher who is just beginning to dip a toe into coding.

Also mentioned: several posters about Google Apps For Education that are available to anyone for free.

+++++++++++++++
More on VR in this IMS blog
https://blog.stcloudstate.edu/ims?s=virtual+reality

AI

The Deep Mind of Demis Hassabis

In the race to recruit the best AI talent, Google scored a coup by getting the team led by a former video game guru and chess prodigy

https://medium.com/backchannel/the-deep-mind-of-demis-hassabis-156112890d8a

the only path to developing really powerful AI would be to use this unstructured information. It’s also called unsupervised learning— you just give it data and it learns by itself what to do with it, what the structure is, what the insights are.

One of the people you work with at Google is Geoff Hinton, a pioneer of neural networks. Has his work been crucial to yours?

Sure. He had this big paper in 2006 that rejuvenated this whole area. And he introduced this idea of deep neural networks—Deep Learning. The other big thing that we have here is reinforcement learning, which we think is equally important. A lot of what Deep Mind has done so far is combining those two promising areas of research together in a really fundamental way. And that’s resulted in the Atari game player, which really is the first demonstration of an agent that goes from pixels to action, as we call it.
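
The “pixels to action” Atari player combined deep neural networks with reinforcement learning (the DQN line of work). Stripping away the deep network, the core reinforcement-learning update can be shown on a toy corridor world; everything below is an invented illustration, not DeepMind’s code:

```python
import numpy as np

# Toy corridor: states 0..4, reward only for reaching the right end (state 4).
# Actions: 0 = left, 1 = right. A tabular Q function stands in for the deep
# network that DQN uses to approximate Q from raw pixels.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: mostly exploit the current Q, sometimes explore
        # (ties between equal Q-values are broken at random).
        if rng.random() < epsilon:
            a = int(rng.integers(n_actions))
        else:
            a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
        s2, r = step(s, a)
        # The core update: move Q(s, a) toward reward + discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(np.round(Q, 2))  # "go right" (column 1) dominates in every state
```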

Do student evaluations measure teaching effectiveness?

Assistant Professor in MIS

Higher education institutions use course evaluations for a variety of purposes. They factor into retention analysis for adjuncts, tenure approval or rejection for full-time professors, and even salary bonuses and raises. But are the results of course evaluations an objective measure of high-quality scholarship in the classroom?

—————————-

  • Daniel Williams

    Associate Professor of Molecular Biology at Winston-Salem State University

    I feel they measure student satisfaction, more like a customer service survey, than they do teaching effectiveness. Teachers students think are easy get higher scores than tough ones, though the students may have learned less from the former.

  • Muvaffak GOZAYDIN

    Founder at Global Digital University

    How can you measure teachers’ effectiveness?
    That is, how much students learn?
    If there is a method to measure how much we learn, I would appreciate learning it.

  • Michael Tomlinson

    Senior Director at TEQSA

    From what I recall, the research indicates that student evaluations have some value as a proxy and rough indicator of teacher effectiveness. We would expect that bad teachers will often get bad ratings, and good teachers will often get good ratings. Ratings for individual teachers should always be put in context, IMHO, for precisely the reasons that Daniel outlines.

    Aggregated ratings for teachers in departments or institutions can even out some of these factors, especially if you combine consideration with other indicators, such as progress rates. The hardest indicators, however, are drop-out rates and completion rates. When students vote with their feet, this can flag significant problems. We have to bear in mind that students often drop out for personal reasons, but if your college’s drop-out rate is higher than your peers’, this is worth investigating.

  • Rina Sahay

    Technical educator looking for a new opportunity or career direction

    I agree with what Michael says – to a point. Unfortunately student evaluations have also been used as a venue for disgruntled students, acting alone or in concert – a popularity contest of sorts. Even more unfortunately college administrations (especially for-profits) tend to rate Instructor effectiveness on the basis of student evaluations.

    IMHO, student evaluation questions need to be carefully crafted in order to be as objective as possible in order to eliminate the possibility of responses of an unprofessional nature. To clarify – a question like “Would you recommend this teacher to other students?” has the greatest potential for counter-productivity.

  • Robert Whipple

    Chair, English Department at Creighton University

    No.

  • Dr. Virginia Stead, Ed.D.

    2013-2015 Peter Lang Publishing, Inc. (New York) Founding Book Series Editor: Higher Education Theory, Policy, & Praxis

    This is not a Cartesian question in that the answer is neither yes nor no; it’s not about flipping a coin. One element that may make it more likely that student achievement is a result of teacher effectiveness is the comparison of cumulative or summative student achievement against incoming achievement levels. Another variable is the extent to which individual students are sufficiently resourced (such as having enough food, safety, shelter, sleep, learning materials) to benefit from the teacher’s beneficence.

  • Barbara Celia

    Assistant Clinical Professor at Drexel University

    Depends on how the evaluation tool is developed. However, overall I do not believe they are effective in measuring teacher effectiveness.

  • Sri Yogamalar

    Lecturer at MUSC, Malaysia

    Overall, I think students are the best judge of a teacher’s effective pedagogy methods. Although there may be students with different learning difficulties (as there usually is in a class), their understanding of the concepts/principles and application of the subject matter in exam questions, etc. depends on how the teacher imparts such knowledge in a rather simplified and easy manner to enhance analytical and critical thinking in them. Of course, there are students too who give a bad review of a teacher’s teaching mode out of spite just because the said teacher has reprimanded him/her in class for being late, for example, or for even being rude. In such a case, it would not be a true reflection of the teacher’s method of teaching. A teacher tries his/her best to educate and inculcate values by imparting the required knowledge and ensuring a 2-way teaching-learning process. It is the students who will be the best judge to evaluate and assess the success of the efforts undertaken by the teacher because it is they who are supposed to benefit at the end of the teaching exercise.

  • Paul S Hickman

    Member of the Council of Trustees & Distinguished Mentor at Warnborough College, Ireland & UK

    No! No!

  • Bonnie Fox

    Higher Education Copywriter

    In some cases, I think evaluations (and negative ones in particular) can offer a good perspective on the course, especially if an instructor is willing to review them with an open mind. Of course, there are always the students who nitpick and, as Rina said, use the eval as a chance to vent. But when an entire class complains about how an instructor has handled a course (as I once saw happen with a tutoring student whose fellow classmates were in agreement about the problems in the course), I think it should be taken seriously. But I also agree with Daniel about how evaluations should be viewed like a customer service survey for student satisfaction. Evals are only useful up to a point.

    I definitely agree about the way evaluations are worded, though, to make sure that it’s easier to recognize the useful information and weed out the whining.

  • Pierre HENON

    university teacher (professeur agrege)

    I am a director of studies, and students in continuing education evaluate teaching effectiveness. Because I am in an ISO process, I must take those measurements into account. It can be very difficult sometimes because the number of students does not reach the level required for the sample to be valid (in a statistical sense). But in the meantime, I believe in the utility of such measurements. The hard job for me is when I have to discuss with the teacher who is under the required score.

  • Maria Persson

    Senior Tutor – CeTTL – Student Learning & Digital/Technology Coach (U of W – Faculty of Education)

    I’m currently ‘filling in’ as the administrator in our Teaching Development Unit – Appraisals, and I have come to appreciate that the evaluation tool of choice is only that – a tool. How the tool is used in terms of the objective for collecting ‘teaching effectiveness’ information, the question types developed to gain insight, and then how that info is acted upon to inform future teaching and learning will in many ways denote the quality of the teaching itself!

    Student voice is not just about keeping our jobs, ‘bums on seats’ or ‘talking with their feet’ (all part of it of course) but should be about whether or not we really care about learning. Student voice in the form of evaluating teachers’ effectiveness is critically essential if we want our teaching to model learning that affects positive change – Thomas More’s educational utopia comes to mind…

  • David Shallenberger

    Consultant and Professor of International Education

    Alas, I think they are weak indicators of teaching effectiveness, yet they are used often as the most important indicators of the same. And in the pursuit of high response rate, they are too often given the last day of class, when they cannot measure anything significant — before the learning has “sunk in.” Ask better questions, and ask the questions after students have had a chance to reflect on the learning.

  • Cathryn McCormack

    Lecturer (Teaching and Learning), and Belly Dance teacher

    I’m just wrapping up a very large project at my university that looked at policy, processes, systems and the instrument for collecting student feedback (taking a break from writing the report to write this comment). One thing that has struck me very clearly is that we need to reconceptualise SETs. de Vellis, in Scale Development, talks about how a scale generally has a higher validity if the respondent is asked to talk about their own experiences.

    Yet here we are asking students not only to comment on, but to evaluate, their teachers. What we really want students to do in class is concentrate on their learning – not on what the teacher is doing. If they are focusing on what the teacher is doing, then something is not going right. The way we ask now seems even crazier when we consider that the most sophisticated conception of teaching is to help students learn. So why aren’t we asking students about their learning?

    The standard format has something to do with it – it’s extremely difficult to ask interesting questions on learning when the wording must align with a 5 point Likert response scale. Despite our best efforts, I do not believe it is possible to prepare a truly student centred and learning centred questionnaire using this format.

    An alternate format I came across and really liked is the Modified PLEQ (Devlin 2002, An Improved Questionnaire for Gathering Student Perceptions of Teaching and Learning), but no commercial evaluation software (which we are required to purchase) can do it. A few overarching questions set the scene for the nature of the class, but the general question format goes: In [choose from drop-down list] my learning was [helped/hindered] when [fill in the blank] because [fill in the blank]. The drop-down list would include options such as lectures, seminars/tutorials, a private study situation, preparing essays, labs, field trips, etc. After completing one question the student has the option to fill in another … and another … and another … for as long as they want.

    Think about what information we could actually get on student learning if we we started asking like this! No teacher ratings, all learning. The only number that would emerge would be the #helped and the #hindered.

  • Hans Tilstra

    Senior Coordinator, Learning and Teaching

    Keep in mind “Goodhart’s Law” – When a measure becomes a target, it ceases to be a good measure.

    For example, if youth unemployment figures become the main measure, governments may be tempted to go for the low hanging fruit, the short term (eg. a work for the dole stick to steer unemployed people into study or the army).

  • robert easterbrook

    Education Management Professional

    Nope.

  • John Stanbury

    Professor at Singapore Institute of Management

    I totally agree with most of the comments here. I find student evaluations to be virtually meaningless as measures of a teacher’s effectiveness. They are measures of student perception, NOT of learning. Yet university administrators, e.g., deans and department chairs, persist in using them to evaluate faculty performance in the classroom, to the point where many instructors have had their careers torn apart. It’s an absolute disgrace! But no one seems to care! That’s the sick thing about it!

  • Simon Young

    Programme Coordinator, Pharmacy

    Satisfaction cannot be simply correlated with teaching quality. The evidence is that students are most “satisfied” with courses that support a surface learning approach – what the student “needs to know” to pass the course. Where material and delivery is challenging, this generates less crowd approval but, conversely, is more likely to be “good teaching” as this supports deep learning.

    Our challenge is to achieve deep learning and still generate rave satisfaction reviews. If any reader has the magic recipe, I would be pleased to learn of it.

  • Laura Gabiger

    Professor at Johnson & Wales University

    Maybe it is about time we started calling it what it is and got Michelin to develop the star rating system for our universities.

    Nevertheless I appreciate everyone’s thoughtful comments. Muvaffak, I agree with you about the importance and also the difficulty of measuring student learning. Cathryn, thank you for taking a break from your project to give us an overview.

    My story: the best professor and mentor in my life (I spent a total of 21 years as a student in higher education), the professor from whom I learned indispensable and enduring habits of thought that have become more important with each passing year, was one whom the other graduate students in my first term told me–almost unanimously– to avoid at all costs.

  • Dr. Pedro L. Martinez

    Former Provost and Vice Chancellor for Academic Affairs at Winston Salem State University & President of HigherEd SC.

    I am not sure that course evaluations based on one snapshot measure “teacher effectiveness.” For various reasons, some ineffective teachers get good ratings by pandering to the lowest level of intellectual laziness. However, consistently looking at comments and some other measures may yield indicators of teachers who are unprepared, do not provide feedback, do not adhere to a syllabus of record, and do not respect students in general. I think part of that information depends on how the questions are crafted.

    I believe that a self-evaluation of the instructor over the course of a semester could yield invaluable information. Using a camera and other devices, ask the instructor to take snapshots of the teaching/learning in the classroom over a period of time and then ask for a self-evaluation. For the novice teacher, that information could be evaluated by senior faculty to help the junior faculty member improve his/her delivery. Many instructors are experts in their field but lack exposure to different methods of instructional delivery. I would like to see a taxonomy or scale that measures the instructor’s ability, using lecture as the base of instruction and moving up to levels of problem-based learning, service learning, and undergraduate research, by gauging the different pedagogies (pedagogy, andragogy, heutagogy, paragogy, etc.) that engage students in active learning.

  • Steve Charlier

    Assistant Professor at Quinnipiac University

    I wanted to piggyback on Cathryn’s comment above, and align myself with how many of you seem to feel about student evaluations. The quantitative part of student evals is problematic, for all of the reasons mentioned already. But the open-ended feedback that is (usually) a part of student evaluations is where I believe some real value can be gained, both for administrative purposes and for instructor development.

    When allowed to speak freely, what are students saying? Are they lamenting a particular aspect of the course/instructor? Is that one area coloring their response across all questions? These are all important considerations, and provide a much richer source of information for all involved.

    Sadly, the quantitative data is what most folks gravitate to, simply because it’s standardized and “easy”. I don’t believe that student evaluations are a complete waste of time, but I do think that we tend to focus on the wrong information. And, of course, this ignores the issues of timing and participation rates that are probably another conversation altogether!

  • robert easterbrook

    Education Management Professional

    ‘What the Student Does: teaching for enhanced learning’ by John Biggs in Higher Education Research & Development, Vol. 18, No. 1, 1999.

    “The deep approach refers to activities that are appropriate to handling the task so that an appropriate outcome is achieved. The surface approach is therefore to be discouraged, the deep approach encouraged – and that is my working definition of good teaching. Learning is thus a way of interacting with the world. As we learn, our conceptions of phenomena change, and we see the world differently. The acquisition of information in itself does not bring about such a change, but the way we structure that information and think with it does. Thus, education is about conceptual change, not just the acquisition of information.” (p. 60)

    This is the approach higher education is trying to adapt to at the moment, as far as I’m aware.

  • Cindy Kenkel

    Northwest Missouri State University

    My Human Resource students will focus on this issue in a class debate “Should student evaluation data significantly impact faculty tenure and promotion decisions?” One side will argue “yes, it provides credible data that should be one of the most important elements” and the other group will argue against this based on much of what has been said above. They will say student evaluations are basically a popularity contest and faculty may actually be dumbing down their classes in order to get higher ratings.

    To what extent is student data used in faculty tenure and promotion decisions at your institutions?

  • yasir hayat

    Faculty member at Institute of Management Sciences, Peshawar

    NO

  • joe othman

    Associate Professor at Institute of Education, IIUM

    Agree with Pierre: when the number of students responding is not what is expected, then what?

  • joe othman

    Associate Professor at Institute of Education, IIUM

    Cindy: it is used in promotion decisions at my university, but only as a small percentage of the total points. Yet this issue is still a thorny one for some faculty.

  • Sonu Sarda

    Lecturer at University of Southern Queensland

    How open are we? Is learning about the delivery of a subject only, or about building soft skills as well? If we as teachers are facilitating learning in a conducive manner, would it not lead to at least an average TE, and thus indicate our teaching effectiveness at the base level? Indeed, a qualitative approach would be far better if we intend to accomplish the actual purpose of TE, i.e. reflection for continual improvement. More and more classrooms are becoming learner-centered, and to accomplish this the learners’ say is vital.
    Some students using these as platforms for personal whims need not be a concern, since the TEs are averaged out. Last but not least, TEs are like dynamite and must be handled by experts. They are one means of assessing the gaps, if any, between teaching and learning strategies. They must not be used for performance evaluation; if they are, then all the other factors, such as the number of students, absenteeism, and pass rates (HD and D rates) over a period of at least three terms, must also be included alongside.

  • Dvora Perets

    Teaching colleague at Ben Gurion University of the Negev

    I implement a semester-long self-evaluation process in all my mathematics courses. Students get 3 points (out of 100) for anonymously filling in an online questionnaire every week. They rate (1-5) their personal class experience (I was bored – I was fascinated; I understood nothing – I understood everything; the tutorial sessions didn’t/did help; I visited the lecturer’s/TA’s office hours; I spent X hours on self-learning this week). They can also add verbal comments.
    I started it 10 years ago when I built a new special course, to help me “hear” the students (80-100 in each class) and to better adjust myself and the content to my new students. I used to publish a weekly response to the verbal comments, accepting some and rejecting others, while making sure to explain and justify any decision of mine.
    Not only did it help me improve my teaching and the course, but it turned out that it actually created a very solid perception of me as a caring teacher. I always was a very caring teacher (some of my colleagues accuse me of being over-caring…), but it seems that “forcing” my students to give feedback throughout the semester kind of brought it out into the open.

    I am still using semester-long feedback in all my courses, and I consider both quantitative and qualitative responses. It helps me see that the majority of students understand me in class. I ignore those who choose “I understood nothing” – obviously, if they were indeed understanding nothing, they would not have come to class… (they can choose “I didn’t participate” or “I don’t want to answer”).
    I ignore all verbal comments that aim to “punish” me, and I change things when I think students are right.
    Finally, being a math lecturer for non-major students is extremely hard, both academically and emotionally. Most students are not willing to do what is needed in order to understand the abstract/complicated concepts and processes.
    Only a few (“courageous”) students will attribute their lack of understanding to the fact that they did not attend all classes, or that they weren’t really focused on learning (probably spending a lot of time on Facebook during class…), or that they didn’t go over class notes at home and come to office hours when they didn’t understand something, etc.
    I am encouraged by the fact that about 2/3 of the students who attend classes state they “understood enough” or above (3-5) all semester long. This is especially important as only 40-50% of the students fill in the formal end-of-semester SE, and I bet you can guess how the majority of them will rate my performance. Students fill in the SE before the final exam, but (again) you can guess how two midterms with about 24% failures will influence their evaluation of my teaching.
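
    A weekly rating stream like Dvora’s is straightforward to aggregate so that dips in understanding show up early. A minimal sketch, assuming a simple (week, rating) layout; the field choices and numbers here are invented:

    ```python
    # Sketch: average the weekly 1-5 "I understood..." ratings, skipping
    # opt-outs, so a drop in any week stands out. Invented toy data.
    from collections import defaultdict

    responses = [  # (week, rating) with None for "I don't want to answer"
        (1, 4), (1, 5), (1, 3), (2, 4), (2, 2), (2, None), (3, 5), (3, 4),
    ]

    weekly = defaultdict(list)
    for week, rating in responses:
        if rating is not None:
            weekly[week].append(rating)

    for week in sorted(weekly):
        ratings = weekly[week]
        print(f"week {week}: mean {sum(ratings) / len(ratings):.1f} "
              f"({len(ratings)} responses)")
    ```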

  • Michael Tomlinson

    Senior Director at TEQSA

    I think it’s important to avoid defensive responses to the question. Most participants have assumed that we are talking about individual teachers being assessed through questionnaires, and I share everyone’s reservations about that. I entirely agree that deep learning is what we need to go for, but given the huge amounts of public money that are poured into our institutions, we need to have some way of evaluating whether what we are doing is effective or whether it isn’t.

    I’m not impressed by institutions that are obsessed only with evaluation by numbers. However, there is some merit in monitoring aggregated statistics over time and detecting statistically significant variations. If average satisfaction rates in Engineering have gone down every year for five years shouldn’t we try and find out why? If satisfaction rates in Architecture have gone up every year for five years wouldn’t it be interesting to know if they have been doing something to bring that about that might be worthwhile? It might turn out to be a statistical artifact, but we need to inquire into it, and bring the same arts of critical inquiry to bear on the evidence that we use in our scholarship and research.

    But I always encourage faculties and institutions to supplement this by actually getting groups of students together and talking to them about their student experience as well. Qualitative responses can be more valuable than quantitative surveys. We might actually learn something!
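
    A minimal sketch of the aggregate monitoring Michael describes, assuming yearly mean satisfaction ratings per program are already available (the program, years, numbers, and significance threshold below are invented for illustration):

    ```python
    # Sketch: test whether a program's mean satisfaction rating shows a
    # statistically significant multi-year trend. All numbers are invented.
    from scipy import stats

    years = [2009, 2010, 2011, 2012, 2013]
    engineering_means = [4.1, 3.9, 3.8, 3.6, 3.4]   # yearly mean rating, 1-5 scale

    result = stats.linregress(years, engineering_means)
    print(f"slope: {result.slope:+.2f}/year, p = {result.pvalue:.3f}")
    if result.pvalue < 0.05:
        print("Unlikely to be a statistical artifact -- worth finding out why.")
    else:
        print("Could still be noise; keep monitoring.")
    ```

    With only five data points such a test has little power, so in practice one would also look at response counts and confidence intervals before reading anything into the trend.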

  • Aleardo Manacero

    Associate Professor at UNESP – São Paulo State University

    Like everyone here, I also think that these evaluation forms do not truly measure teaching effectiveness. This is quite a hard thing to evaluate, since the effect of learning will be felt several years later, while students are performing their job duties.

    Besides that, some observations made by students are interesting for our own growth. I usually get these through informal talks with the class or even with some students.

    In another direction, some of the previous comments address deep/surface learning, basically stating that deep learning is the right way to go. I have to disagree with this for some of the content that has to be taught. In my case (teaching computer science majors) it is important, for example, that every student has surface knowledge about operating systems design, but those who are going to work as database analysts do not need to know the deep concepts involved (the same is true of database concepts for a network analyst…). So surface learning also has its relevance in professional formation.

  • George Christodoulides

    Senior Consultant and Lecturer at University of Nicosia

    The usefulness of student evaluations, like all similar surveys, is closely linked to the particular questions students are asked to answer. There are the objective/factual questions, such as “Does he start class on time?” or “Does he speak clearly?”, and the very personal questions, such as “Does he give fair grades?”. The effectiveness of a teacher could be more appropriately linked to suitably phrased questions, such as “Has he motivated you to learn?” and “How much have you benefited from the course?”. The responses to these questions could also be further assessed by comparing the final grades given in that particular course with the performance of the class in the other courses they have taken during that semester. So, for assessing teacher effectiveness, one needs to ask relevant questions and perform the appropriate evaluations.
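
    One hedged way to make this grade-comparison idea concrete: correlate the item responses with how each student did in this course relative to their other courses. The numbers below are fabricated, and Pearson correlation is just one simple choice of association measure:

    ```python
    # Sketch: correlate a "motivated you to learn" item (1-5) with each
    # student's grade in this course minus their mean grade in other courses
    # the same semester. Toy numbers, purely illustrative.
    from scipy.stats import pearsonr

    motivation = [5, 4, 4, 3, 5, 2, 4, 3, 5, 4]
    grade_delta = [0.5, 0.2, 0.3, -0.1, 0.6, -0.4, 0.1, 0.0, 0.4, 0.2]

    r, p = pearsonr(motivation, grade_delta)
    print(f"r = {r:.2f}, p = {p:.3f}")  # a positive r would support the item
    ```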

  • Laura Gabiger

    Professor at Johnson & Wales University

    Michael has an excellent point that some accountability of institutions and programs is appropriate, and that aggregated data or qualitative results can be useful in assessing whether the teaching in a particular program is accomplishing what it sets out to do. Many outcomes studies are set up to measure the learning in an aggregated way.

    We may want to remember that our present conventions of teaching evaluation had their roots in the 1970s (in California, if I remember correctly), partly as a response to a system in which faculty, both individually and collectively, were accountable to no one. I recall my student days when a professor in a large public research institution would consider it an intrusion and a personal affront to be asked to supply a course syllabus.

    As the air continues to leak out of the USA’s higher education bubble, as the enrollments drop and the number of empty seats rises, it seems inevitable that institutions will feel the pressure to offer anything to make the students perceive their experience as positive. It may be too hard to make learning–often one of the most uncomfortable experiences in life–the priority. Faculty respond defensively because we are continually put in the position of defending ourselves, often by poorly-designed quantitative instruments that address every kind of feel-good hotel concierge aspect of classroom management while overlooking learning.

  • Sethuraman Jambunatha

    Dean (I & E) at Vinayaka Mission

    The evaluation of faculty by the students is welcome. The statistics can be looked at with a certain degree of objectivity. An instructor strict with his/her students may be ranked low in spite of being an asset to the department. A ‘free-lance’ teacher, easy-going with students, may be placed higher despite being a poor teacher. At any rate, the HoD’s duty is to observe the quality of all teachers, and his objective evaluation is final. The parents’ feedback is also to be taken. Actually, teaching is a multi-dimensional task, and student evaluation is just one coordinate.

  • Edwin Herman

    Associate Professor at University of Wisconsin, Stevens Point

    Student evaluations are a terrible tool for measuring teacher effectiveness. They do measure student satisfaction, and to some extent they measure student *perception* of teacher effectiveness. But the effectiveness of a teaching method or of an instructor is poorly correlated with student satisfaction: while there are positive linkages between the two concepts, students are generally MORE satisfied by an easy course that makes them feel good than by a hard course that makes them really think and work (and learn).

    Students like things that are flashy, and things that are easy, more than they like things that require a lot of work or force them to rethink their core values. Certainly there are students who value a challenge, but even those students may not recognize which teacher gave them the better course.

    Student evaluations can be used effectively to help identify very poor teaching. But they are useless for distinguishing between adequate and good teaching practices.

  • Cesar Granados

    Former Administrative Vice-Rector at Universidad Nacional de San Cristóbal de Huamanga

    César S. Granados
    Retired Professor from The National University of San Cristóbal de Huamanga
    Ayacucho, PERÚ

    Since teaching effectiveness is a function of teacher competencies, an effective teacher is able to use existing competencies to achieve the desired student results; but a student’s performance mainly depends on his commitment to achieving those competencies.

  • Steve Kaczmarek

    Professor at Columbus State Community College

    The student evaluations I’ve seen are more like customer satisfaction surveys, and in this respect, there is less helpful information for the instructor to improve his or her craft and instead more feedback about whether or not the student liked the experience. Shouldn’t their learning and/or improving skills be at least as important? I’m not arguing that these concepts are mutually exclusive, but the evaluations are often written to privilege one over the other.

    There are other problems. Using the same evaluation tool for very different kinds of courses (lecture versus workshop, for instance) doesn’t make a lot of sense. Evaluation language is often vague and puzzling in what it rewards (one evaluation form asks “Was the instructor enthusiastic?” Would an instructor bursting with smiles and enthusiasm but who is disorganized and otherwise less effective be privileged over one who is low-key but nonetheless covers the material effectively?). The “halo effect” can distort findings, where, among other things, more attractive instructors can get higher marks.

    Given how many times I’ve heard from students about someone being their favorite instructor because he or she was easy, I question the criteria students may use when evaluating. Instructors are also told that evaluations are for their benefit to improve teaching ability, but then chairs and administrators use them in promotion and hiring decisions.

    I think if the evaluation tool is sound, it can be useful in helping instructors. But, lastly, I think of my own experiences as a student, where I may have disliked or even resented some instructors because they challenged me or pushed me out of my comfort zone to learn new skills or paradigms. I may have evaluated them poorly at the time, only to learn a few years later, with greater maturity, that they not only taught me well but taught me something invaluable, perhaps more so than the instructors I liked. In this respect, it would be fairer to those instructors for me to fill out an evaluation a few years later to accurately describe their teaching.

  • Diane Halm

    Adjunct Professor of Writing at Niagara University

    Wow, there are so many valid points raised; so many considerations. In general, I tend to agree with those who believe it gauges student satisfaction more than learning, though there is a correlation between the two. After 13 years as an adjunct at a relatively small, private college, I have found that engagement really is what many students long for. It seems far less about the final grades earned and more about the tools they’ve acquired. It should be mentioned that I teach developmental level composition, and while almost no student earns an A, most feel they have learned much:)

  • Nira Hativa

    Former director, center for the advancement of teaching at Tel Aviv University

    Student ratings of instruction (SRI) do not measure teaching effectiveness but rather student satisfaction with instruction (as some previous comments on this list suggest). However, there is substantial research evidence for relationships between SRIs and some agreed-upon measures of good teaching and of student learning. This research is summarized in much detail in my recent book:
    Student Ratings of Instruction: A Practical Approach to Designing, Operating, and Reporting (220 pp.) https://www.createspace.com/4065544
    ISBN-13: 978-1481054331

  • robert easterbrook

    Education Management Professional

    Learning is not about what the teacher does, it is about what the learner does.

    Do not confuse the two.

    Learning is what the learner does with what the teacher teaches.

    If you think that learning is all about what the teacher does, then the SRI will mislead and deceive.

  • Sami Samra

    Associate Professor at Notre Dame University – Louaize

    Evaluation, in all its forms, is a complex exercise that needs both knowledge and skill. Further, evaluation can best be achieved through a variety of instruments. We know all of this as teachers. The question is how knowledgeable our students are regarding the teaching/learning process. Moreover, how knowledgeable are our administrators in translating information collected from questionnaires (some of which are of questionable validity) into plausible data-based decisions? I agree that students should have a say in how their courses are being conducted. But to use their feedback, quantitatively, to evaluate university professors… I fear that I must hold a very skeptical stand towards such evaluation.

  • Top Contributor

    Quite an interesting topic, and I’m reminded of the ancient proverb, “Parts is not parts.” OK, maybe that was McDonalds. This conversation would make a very thoughtful manuscript.

    Courses is not courses. Which course will be more popular, “Contemporary Music” or “General Chemistry?”

    Search any university using the following keywords “really easy course [university].” Those who teach these courses are experts at what they do, and what they do is valuable, however the workload for the student is minimal.

    The major issues: (1) popularity is inversely proportional to workload; and (2) the composition of the questions appearing on course and professor evaluations (CAPEs).

    “What grade do you expect in this class? Instructor explains course material well? Lectures hold your attention?”

    If Sally gets to listen to Nickelback in class and then next period learn quantum mechanics, which course does one suppose best held her attention?

    A person about to receive a C- in General Chemistry is probably receiving that C- because s/he was never able to understand the material for lack of striving, and probably hates the subject. That person is very likely to have never visited the professor during office hours for help. Logically one might expect low approval ratings from such a scenario.

    A person about to receive an A in General Chemistry is getting that A because s/he worked his/her tail off. S/he was able to comprehend mostly everything the professor said, and most probably liked the course. Even more, s/he probably visited the professor during office hours several times for feedback.

    One might argue that the laws of statistics will work in favor of reality; however, that’s untrue when only 20% of students respond to CAPEs. Those who respond either love the professor or hate the professor; there’s usually no middle ground. Add this to internet anonymity, and the problem is compounded. I am aware of multiple studies conducted by universities indicating high correlation between written CAPEs and electronic CAPEs, however I’d like to bring up one point.

    Think of the last time you raised your voice to a customer service rep on the phone. Would you have raised your voice to that rep in person?

    There’s not enough space to comment on all the variables involved in CAPE numerical responses. As of last term I stopped paying attention to the numbers and focused exclusively on the comments. There’s a lot of truth in most of the comments.

    I would like to see the following experiment performed. Take a group of 10,000 students. Record their CAPE responses prior to receiving their final grade. Three weeks later, have them re-CAPE. One year later, have them re-CAPE again. Two years. Three years. Finally, have them re-CAPE after getting a job.

    Many students don’t know what a professor did for them until semesters or years down the road. They’re likely to realize how good of a teacher the professor was by their performance in future courses in the same subject requiring cumulative mastery.

    Do I think student evaluations measure teaching effectiveness? CAPEs is not CAPEs.
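
    The 20% response-rate worry above is easy to make concrete with a toy simulation (all distributions invented): when mostly the delighted and the angry respond, the reported mean drifts away from the class’s true mean.

    ```python
    # Toy nonresponse-bias simulation: the class holds mostly moderate views,
    # but students with strong feelings respond far more often.
    import random

    random.seed(42)
    true_ratings = [random.choice([2, 3, 3, 3, 4, 4, 5]) for _ in range(500)]

    def responds(rating):
        # Invented response propensities: extremes respond at 60%, others at 10%.
        return random.random() < (0.6 if rating in (1, 2, 5) else 0.1)

    respondents = [r for r in true_ratings if responds(r)]

    print(f"class mean:    {sum(true_ratings) / len(true_ratings):.2f}")
    print(f"survey mean:   {sum(respondents) / len(respondents):.2f}")
    print(f"response rate: {len(respondents) / len(true_ratings):.0%}")
    ```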

  • Anne Gardner

    Senior Lecturer at University of Technology Sydney

    No, of course they don’t.

  • Christa van Staden

    Owner of AREND.co, a professional learning community for educators

    No, it does not. Effectiveness in the classroom should be measured by students’ results, the teacher’s attitude towards students, and the quality of the teacher’s preparation. I worked with a man who told a story about the different hats and learning, and I thought that was a new way of looking at learning. To my utmost shock, my colleague, who sat in because he had to say something, told me that it was exactly the same – same jokes, etc. – when he did the course five years ago. For real – nothing changed: no new technology, no new insights. No learning happened over a period of five years, nothing? And he is rated very highly – head of a new wing. Who rated him? How? And why did it not affect his teaching at all?

  • Mat Jizat Abdol

    Chief Executive at Institut Sains @ Teknologi Darul Takzim ( INSTEDT)

    If we are looking for quality, we have to get information about our performance in the lecture room. There are six elements normally in practice: (1) the teaching plan for lecture contents; (2) teaching delivery; (3) fair and systematic evaluation of students’ work; (4) whether the teaching follows the semester plan; (5) whether the lecturer follows the timetable and is always on time for lecture hours; and lastly (6) the relationship between lecturer and students.

  • orlando mcallister

    Department Head – Communications/Mathematics

    Do we need to be reminded that educators were students at one time or another? So why not have students evaluate the performance of a teacher? After all, the students are contributing to their own investment in what is significant for survival, and in whether it is effective for career development – to attain their full potential as sentient human beings working towards the greater good of humanity; anything else falls short of human progress on a tiny rotating planet cycling through the solar system with destination unknown! Welcome to the ‘Twilight Zone.’

    Would you rather educate a student to make the wise decision to accept 10 gallons of water in a desert, or to accept a $1 million check that further creates mirages and illusory dreams of success?

  • Stephen Robertson

    Lecturer at Edinburgh Napier University

    I think what my students say about me is important. I’m most interested in the comments they make and have used these to pilot other ideas or adjust my approach.

    I’ve had to learn to not beat myself up about a few bad comments or get carried away with a few good ones.

    I also use the assessment results to see if the adjustments made have had the intended impact. I use the VLE logs as well to see how engaged the students are with the materials and what tools they use and when.

    I find the balance keeps me grounded. I want my students to do well and have fun. The dashboard on your car has multiple measures. Why should teaching be different? Like the car I listen for strange noises and look out the window to make sure I’m still on the road.

  • Allan Sheppard

    Lecturer/tutor/PhD student at Griffith University

    I think that most student evaluations are only reaction measures and not true evaluations of learning outcomes or teaching effectiveness – and evaluations are often tainted if the student gets a lower mark than anticipated.
    I think these types of evaluation are only indicative – they should not really be used to measure teacher/teaching effectiveness, and should not be allowed to affect teachers’ careers.
    I note Stephen’s point about multiple measures – unfortunately most evaluations are quick and dirty, and certainly do not provide multiple measures.

  • Allan Sheppard

    Lecturer/tutor/PhD student at Griffith University

    Interestingly, most student evaluations are anonymous – so the student can say what he/she likes and not have to face scrutiny.

  • Olga Kuznetsova

    No, students’ evaluations cannot fully measure teaching effectiveness.
    However, for the relationship to be mutually beneficial, you have to accept their judgement on the matter. Unfortunately, a unique teacher for all categories (types) of students does not exist in our dynamic world.

  • Penny Paliadelis

    Professor, Executive Dean, Faculty of Health, Federation University Australia

    Student evaluations are merely popularity contests; they tempt academics to ‘dumb down’ the content in order to be liked and evaluated positively. This is a dangerous and slippery slope that can result in graduates being ill-prepared for the professions and industries they seek to enter.

  • Robson Chiambiro (MBA, MSc, MEd.)

    PRINCE 2 Registered Practitioner at Higher Colleges of Technology

    In my opinion the student–teacher evaluations measure popularity, as others suggested, but the problem is that some of the questions and intentions of the assessment are not fulfilled due to wrong questioning. I have never seen in these instruments a question asking students about their expectations of the teacher and the course as such. To me that is more important than asking whether the student likes the teaching style, which students do not know anyway. Teachers who give a test just before the evaluation are likely to get lower ratings than those who give tests soon after the evaluation.

  • Chris Garbett

    Principal Lecturer Leeds Metropolitan University

    I agree with other contributors. The evaluations are akin to a satisfaction survey. Personally, if, for example, I stay at a hotel, I only fill in the satisfaction survey if something is wrong. If the service is as I expect, I don’t bother with the survey.

    I feel also that students rate courses or modules on a popularity basis. A module on a course may be enjoyable, or fun, but not necessarily better taught than another module with less entertaining subject matter.

    Unfortunately, everyone seems to think that student evaluations are the main criterion by which to judge a course.

  • Steve Benton

    Senior Research Officer, The IDEA Center

    First of all, it would help if we stop referring to them as “student” or “course” evaluations. Students are not qualified to evaluate. That is what administrators are paid to do. However, students are qualified to provide feedback to instructors and administrators about their perceptions of what occurred in the class and of how much they believe they learned. How can that not be valuable information, especially for developmental purposes about how to teach more effectively? Evaluation is not an event that happens at the end of a course–it is an ongoing process that requires multiple indicators of effectiveness (e.g., student ratings of the course, peer evaluations, administrator evaluations, course design, student products). By triangulating that combination of evidence, administrators and faculty can then make informed judgments and evaluate.

  • Eytan Fichman

    Lecturer at Hanoi Architectural University

    The student/teacher relationship around the subject matter is a ‘triangle.’ The character of the triangle has a lot to do with a student’s reception of the material and the teacher.

    The Student:
    The well-prepared student and the intrinsically motivated student can more readily thrive in the relationship. If s/he is thriving s/he may be more inclined to rate the teacher highly. The poorly prepared student or the student who requires motivation from ‘outside’ is much less likely to thrive and more likely to rate a teacher poorly.

    The Teacher:
    The well-prepared teacher and the intrinsically motivated teacher can more readily thrive in the relationship. If s/he is thriving students may be more inclined to rate the teacher highly. The poorly prepared teacher or the teacher who requires motivation from ‘outside’ is much less likely to thrive and more likely to achieve poor teacher ratings.

    The Subject Matter:
    The content and form of the subject matter are crucial, especially in their relation to the student and teacher.

  • Daniel Goeckner

    Education Professional

    Student evaluations do not measure teaching effectiveness. I have been told I walk on water and am the worst teacher ever. The major difference was the level of student participation. The more they participated the better I was.

    What I use them for is a learning tool. I take the comments apart looking for snippets that I can use to improve my teaching.

    I have been involved in a portfolio program for the past two years. One consistent finding: the better the measured outcomes, the worse the student reviews.

    • Dr. Pedro L. Martinez

      Former Provost and Vice Chancellor for Academic Affairs at Winston Salem State University & President of HigherEd SC.

      Steve,
      Have you ever been part of a tenure or promotion committee evaluation process? In my 35 years of experience, faculty members do not operate in the ideal, smooth, linear trajectory you have described. On the contrary, committees partition evaluations into categories and look at student course evaluations as the evidence of an instructor’s ability to teach. However, faculty can choose which evaluations they submit and what comments they want to include as part of the record. I have never seen “negative comments” submitted as evidence of “ineffective teaching”. The five-point scale is used, and whenever a score falls below 3.50 it becomes a great concern for our colleagues!

    • Sethuraman Jambunatha

      Dean (I & E) at Vinayaka Mission

      There are many other ways of assessing faculty by the peer group. There can be a weekly seminar in which faculty members are expected to present, with other faculty members and students as the audience. This measures how much interest a faculty member has in some chosen areas. The Chair (HoD) can talk to some selected students (chosen as representing the highly motivated / average / take-it-easy) and reach a decision for the tenure track. As I said earlier, the students’ evaluation can be one of many aspects. In my own experience, evaluation by other (senior) faculty is many times detrimental to the progress of junior faculty. But one may ask: the HoD is the senior-most. One thing is clear: the ‘Chair’ must have some ‘vision’ and transcend discrimination and partisan feelings. In India we say: “(Sar)Panch me Parameshwar rahtha hai”, meaning: in the position of Judge, God dwells (sits). Think of Becket and King Henry II. As archbishop, Rev. Thomas Becket was a completely changed person, fully submerged in the divine order. So the Chair is supreme. Students’ evaluation is just

    • Susan Wright

      Assistant Professor at Clarkson University

      Amazing how things work…I’m actually in the process of framing out a research project related to this very question. Does anyone have any suggestions for specific papers I should look at i.e. literature related to the topic?

      With respect to your question, I believe the answer depends on the questions that get asked.

    • Sarah Lowengard

      Researcher, Writer, Editor, Consultant (history, technology, art, sciences)

      I fall on the “no” side too.

      The school-derived questionnaires nearly always ask the wrong questions, for one.

      I’ve always thought students should wait some years (3-20) before providing feedback, because the final day of class is too recent to do a good assessment.

    • Jeremy Wickins

      Open University Coursework Consultant, Research Methods

      I’m quite late to the topic here, and much of what I think has been said by others. There is a difference between the qualitative and quantitative aspects of student evaluations – I am always fascinated to find out what my students (and peers, of course, though that is a different topic) do/do not think I am doing well so I can learn and adapt my teaching. For this reason, I prefer a more continuous student evaluation than the questionnaire at the end of the course – if I need to adapt to a particular group, I need the information sooner rather than later.

      However, the quantitative side means nothing unless it is tied back to hard data on how the students did in their assessments – an unpopular teacher can still be a *good* teacher of the subject at hand! And the subject matter counts a lot – merely teaching an unpopular but compulsory subject (public law, for instance!) tends to make the teacher initially unpopular in the minds of students – a type of shooting the messenger.

      Teaching isn’t a beauty contest – these metrics need to be used in the right way, and combined with other data if they are to say anything about the teaching.

    • Dr. James R. Martin

      Professor Emeritus

      I wrote a paper about this issue a few years ago. Briefly, the thrust of my argument is that student opinions should not be used as the basis for evaluating teaching effectiveness because these aggregated opinions are invalid measures of quality teaching, provide no empirical evidence in this regard, are incomparable across different courses and different faculty members, promote faculty gaming and competition, tend to distract all participants and observers from the learning mission of the university, and ensure the sub-optimization and further decline of the higher education system. Using student opinions to evaluate, compare and subsequently rank faculty members represents a severe form of a problem Deming referred to as a deadly disease of western-style management. The theme of the alternative approach is that learning on a program-wide basis should be the primary consideration in the evaluation of teaching effectiveness. Emphasis should shift from student opinion surveys to the development and assessment of program-wide learning outcomes. To achieve this shift in emphasis, the university performance measurement system needs to be redesigned to motivate faculty members to become part of an integrated learning development and assessment team, rather than a group of independent contractors competing for individual rewards.

      Martin, J. R. 1998. Evaluating faculty based on student opinions: Problems, implications and recommendations from Deming’s theory of management perspective. Issues in Accounting Education (November): 1079-1094. http://maaw.info/ArticleSummaries/ArtSumMartinSet98.htm

    • Joseph Lennox, Ph.D.

      There appears to be general agreement that the answer to the proposed question is “No.”

      The next logical step in the discussion would appear to be, “How would you effectively measure teacher effectiveness?”

      With large enrollment classes, one avenue is here:

      http://www.insidehighered.com/views/2013/10/11/way-produce-more-information-about-instructors-effectiveness-essay

      So, how should teacher effectiveness be measured?

    • Ron Melchers

      Professor of Criminology, University of Ottawa

      To inform this discussion, I would highly recommend this research review done for the Higher Education Quality Council of Ontario. It’s a pretty balanced and well-informed treatment of student course (and teacher) evaluations: http://www.heqco.ca/SiteCollectionDocuments/Student%20Course%20Evaluations_Research,%20Models%20and%20Trends.pdf

    • Ron Melchers

      Professor of Criminology, University of Ottawa

      Just to add my own two cents (two and a half Canadian cents at this point), I think students have much of value to tell us about their experience in our courses and classes, information that we can use to improve their learning and become more effective teachers. They are also able to inform academic administrators of the degree to which teachers fulfill their basic duties and perform the elementary tasks they are assigned. They have far less to tell us about the value of what they’re learning to their future, their professions … and they are perhaps not the best qualified to identify effective learning and teaching techniques and methods. Those sorts of things are better assessed by knowledgeable, expert professional and academic peers.

    • Barbara Celia

      Assistant Clinical Professor at Drexel University

      Thank you, Ron. A great deal of info but worth reading and analyzing.

    • Prof. Ravindra Kumar Raghuvanshi

      Member of academic committees of some universities & Retd. Prof., Dept. of Botany, University of Rajasthan, Jaipur

      The student rating system may not necessarily be a reliable method to assess teaching effectiveness, because it depends upon individual grasping/understanding power, intelligence, and study tendency. A teacher may do his/her job well, but how many students understand it well? That is reflected invariably in the marks they obtain.

POD conference 2013, Pittsburgh

http://podnetwork.org/event/pod-2013/

Conference program available in PDF and ePub format, so I can have it on my laptop and on my mobile device; this removes the need to constantly carry and pull out a paper stack.

It is the only conference I know of with 6 AM yoga. Strong spirit in a strong body. LRS & CETL must find space and instructors and offer meditation + yoga opportunities for SCSU students to disconnect.

1:00 – 5:00 PM excursion to Carnegie Mellon – Learning Spaces. LRS interest in Learning Commons.

From the pre-conference workshops, Thurs, Nov 7, 8:30AM – 12:00PM:
Linda Shadiow, Connecting Reflection and Growth: Engaging Faculty Stories.
This workshop seems attractive to me, since it coincides with my firm conviction that SCSU faculty must share “best practices” as part of the effort to engage them in learning new technologies.

Kenyon, Kimberly et al, Risky Business: Strategic Planning and Your Center.
This workshop might be attractive for Lalita and Mark Vargas, since strategic planning is under consideration right now at LRS, and CETL might also benefit from such ideas.

roundtables, Thurs, Nov. 7, 1:30-2:45PM

Measuring the Promise in Learner-Centered Syllabi
Michael Palmer, Laura Alexander, Dorothe Bach, and Adriana Streifer, University of Virginia

Effective Faculty Practices: Student-Centered Pedagogy and Learning Outcomes
Laura Palucki Blake, UCLA

Laura is the assistant director: http://gseis.ucla.edu/people/paluckiblake
Freshmen are surveyed at three time points; faculty are also surveyed every 3 years. Can link this data: faculty practices and student learning.
Triangulating research findings. Student-centered pedagogy: which teaching practices are effective in promoting student-centered learning practices?
No statistical differences in student learning outcomes between part-time and full-time faculty. The literature says otherwise, but Laura did not find any statistical difference (see the t-test sketch after these notes).
http://ow.ly/i/3EL77
Discussion is big; small-group work is big with faculty.
In terms of discussions, there is a huge difference between doing discussion and doing it well.
This is self-report data, so it can be biased.
There are gender differences: women are more likely to use class discussions; same for cooperative learning and student presentations. The gender differences hold within disciplines, also in STEM fields.
Students’ evaluations of each other’s work and cooperative learning are closer gender-wise.
The more student-centered the pedagogy, the less disengagement from school work.
Goal: understand on a national level what students are exposed to.
lpblake@hmc.edu
http://www.heri.ucla.edu/
Wabash national study data.
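
The part-time vs. full-time comparison above is, at bottom, a two-sample test. A minimal sketch with fabricated outcome scores; Welch’s t-test is just one reasonable choice:

```python
# Sketch of the part-time vs full-time comparison: a two-sample t-test on a
# learning-outcome score. All scores below are fabricated for illustration.
from scipy import stats

full_time = [74, 81, 69, 77, 85, 72, 79, 80]   # outcome scores, full-time sections
part_time = [73, 78, 70, 76, 84, 71, 80, 77]   # outcome scores, part-time sections

t, p = stats.ttest_ind(full_time, part_time, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")  # a large p-value means no detectable difference
```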

ePublishing: Emerging Scholarship and the Changing Role of CTLs
Laura Cruz, Andrew Adams, and Robert Crow, Western Carolina University
LORs are in Kentucky.
CETL does at least professional development, resources, ePortfolios, LORs, FLCs.
Teaching Times at Penn.
Model 2: organized around instructional technology. More and more CETLs are merging into a combined comprehensive center; about 9 staff are paid by IT and 11 by the academic center. Because of finance cuts, this is the model predicted since the 90s. Why not IT? Because IT says how to use a tool, not how to use it effectively. Think outside of technology; technogogy is not the same as technology. Teacher-scholar model: research, service, teaching.
http://ow.ly/i/3EMJl
eGallery and other electronic ways to recognize productivity. Stats and survey software does NOT reside with grad studies but with CETL, so CETL can help faculty from a glimmer of an idea to presentation and publication. Research Support Specialist.
How and where does it fit into faculty development? Neutrality: should CETLs be advocates for institutional, organizational change? Do CETLs encourage faculty to take on innovation and risk (change the culture of higher ed)? Tenure and promotion: do we advocate that e-publication should count, e.g. that a blog will count toward tenure?
A national publication: http://www.sparc.arl.org/resources/authors/addendum
We demonstrate that it is good scholarship; scholarship of teaching will be good teaching.
OER? Open educational resources. Should CETLs host and participate in those? Do we participate in creating resources designed to replace textbooks? Caroline has a state-wide grant to support faculty developing learning resources.
Open access is controversial: the right to publish and republish. http://www.sparc.arl.org/
40% of all scholarly articles are owned by 3 publishers.
Academic social media: academic.edu and electronic journals.
CETL is the comprehensive center, the hub where people go, so CETL can direct them and/or get stakeholders together to make things happen.
The lesson from this session for me is that Lalita and Keith Ewing must work together much more closely.

Evaluating the quality of MOOCs: Is there room for improvement?
Erping Zhu, University of Michigan; Danilo Baylen, University of West Georgia
Reflection on “taking” a MOOC and the seven principles: how to design and teach a MOOC using the seven principles.
MOOCs have a lot of issues, but that is not the focus; the focus is on instructional design. Both presenters are instructional designers. Danilo is taking a MOOC in library and information science.
Second principle: what is a good graduate education?
About half had completed a course. After the 3rd week, motivation dissipates.
Erping’s experience: the Provost makes a quick decision. The CETL was charged with MOOCs at U of Michigan. Securing Digital Democracy. http://www.mooc-list.com/university-entity/university-michigan
Danilo is a librarian. His MOOC class had a blog; one gets a certificate at the end. Different from an online class is the badge system to get you involved in the course. The MOOC instructors also involved grad students to monitor the others. The production team is not usually as transparent as at Coursera. Sustainability: a 10-week module; need to do reflections and give feedback to peers; 7 assignments are too much for a full-time professional.
http://www.amazon.com/Library-2-0-Guide-Participatory-Service/dp/1573872970
http://tametheweb.com/category/hyperlibmooc/
http://tametheweb.com/2013/10/20/hyperlibmooc-library-2-013-presentation-links/

First principle: contact between faculty and students. Not in a MOOC: video is the only source that provides a sense of connection; the casual comments the instructor makes addressing the students provide this sense. Quick response. Take the collaboration and cooperation in the MOOC environment and bring it into F2F and campus teaching. Feedback on quizzes was not helpful for improvement, since it is automated; the students on the discussion board were the ones who helped. From an instructional design point of view, how can MOOC design be improved?
Group exercise: we were split into groups and rotated sheets among each other, logging responses on 7 sheets of paper; then each group had to choose the best of the logged responses. The responses will be on the POD site.

Per Keith’s request

“Why Students Avoid Risking Engagement with Innovative Instructional Methods
Donna Ellis, University of Waterloo”

Excerpt From: Professional and Organizational Development Network in Higher Education. “POD Network 2013 Conference Program, Pittsburgh PA 11/7 to 11/10.” iBooks.
This material may be protected by copyright.

A quantitative study. The difficulty of group work. Various questions from the audience: is the time of class (early morning) a reason for increased student disengagement? Students’ perceptions.

The teacher did explain why the research was being done, and this might have increased the negative perception. Summary of key barriers:

Risk of negative consequences

Perceived lack of control

Contravention of perceived norms

(Fishbein and Ajzen, 2010)

Discussion: how can faculty design and deliver the course to minimize the barriers? Our table thought that there are a lot of unknown parameters to decide, and that it is good to hear from the instructor, not only the researcher. How to deal with dysfunctional group-member behaviors? Reflections from the faculty member: how to respond to the data? Some of the barriers frustrated him. Outlines for the assignments were only part of what he had done to mitigate. What are we asking students on course evaluations? There is a lot more than only negative feedback. The instructor needs more training in conflict resolution and how to run group work.

http://ow.ly/i/3Fjqt

http://ow.ly/i/3Fjpq

 

CRLT Players

Friday, Nov 8, 10:30 AM – 12:00 PM
William Penn Ballroom
7 into 15

CRLT Players, University of Michigan”

Excerpt From: Professional and Organizational Development Network in Higher Education. “POD Network 2013 Conference Program, Pittsburgh PA 11/7 to 11/10.” iBooks.
This material may be protected by copyright.

It is a burlesque and theater approach to engaging students and faculty in a conversation: 10 plays in 30 minutes.

Discusses different topics from the plays and seeks solutions as a team: how to deal with international students (the Harvard lady suggested “safe places” for students), how to deal with technology or the lack of it (I missed the next one while writing these notes), and how to reward faculty for innovative things. To encourage innovation, they received a letter from the provost stating that if they fail, it is not used in their annual evaluation.

No videotaping of this performance, because the power is in the conversation. Is there a franchise, i.e., training people to do this? An NSF grant allowed that, but now others just pick up the idea. Do they sell scripts? One can have conversations about strategies: how to collaborate with the theater department, where to start with these short vignettes. If they come to a campus and bring a performance, do they also do the follow-up?
Is anger or hostility a reaction during or after these presentations, and how to handle it? Hostility can be productive; make sure it is framed as productive. Getting difficult things out there is what the theater is trying to do, in a distanced way. This is not a morality play.
How do they develop the work and come up with issues? Faculty bring issues, followed by interviews; a draft is created, and the theater identifies the problem and addresses the issue. Preview performances are held with stakeholders, who confirm. There are more than sufficient ideas, so the organizers can choose what they see as most pertinent.
Other ways to follow up: http://ow.ly/i/3FpI4 http://ow.ly/i/3FpJy
ECRC committee: I went to their meeting instead of lunch to see if I can participate in next year's activities. ECRC is the acronym for the tech committee. The web site is one of the tasks of this committee: a WordPress site; how the groups work, how forms work, how to connect with people, and, most importantly, how to start communicating through the web site and cut the listserv. An attempt to centralize all info on the website rather than leaving it scattered across universities.
What is BRL? Google Apps and WikiPODia as a wiki for another year, until they figure out whether it can be incorporated into the web site. Reconceptualize how to work in the process. Two groups in ECRC: WikiPODia and the web page, and then social media with Amy? An ECRC liaison in every POD committee to understand how to set up the committee's web presence. Blackboard Collaborate for meetings, and this is what the liaison explains to committee members. Tinyurl.com/ECRC2013
Designing Online Discussions For Student Engagement And Deep Learning
Friday, Nov 8, 2:15 PM – 3:30 PM, Roundtable
Parkview East
Danilo M Baylen, University of West Georgia”
It must be an asynchronous discussion.
What is the purpose and format of the discussion? Assessment: how is the online discussion supporting the purpose of the curriculum and the students' learning?
About five discussions per semester altogether; they became part of the class culture.
Format of the assignment
Asynchronous discussion list: a series of questions or a case study. Is the format a sequence of responses, or does it invite a discussion?
A checklist, which stifles a creative discussion, or leave it more free?
Purpose: must be part of the syllabus, and it must be clear.
Meeting learning objectives.
Duration.
Interactivity: responses to other students. A list of six different options for how they can reply. What format the interactivity takes is an important issue, for which there is no textbook.
Assessment: initial postings are critical, since they give an idea of what to work on. How many points as part of the bigger picture? Yet it is the groundwork for the assignment, which gets the most points.
Metacognitive, not evaluative: give students examples from the previous class of what a good discussion is, and explain to students how to evaluate a good discussion entry.
How the question is worded: use the threaded discussion for students to reflect on how they think, rather than only to assess whether they read the chapter. The research about online discussion is very varied.
What is the baseline?
Must an online course be fully set up before the semester starts, or not?
Reflection at the end of the semester.
Stephen Brookfield's Critical Incident Questionnaire.
Meet the ISTE and QM standards.
Is the reflection on the content or on the process?
Students reflect on their own reflections.
What have you learned about yourself as an online learner? Look for consistencies in both negative and positive reflections.
“Connecting and Learning with Integrative ePortfolios: The Teaching Center’s Role
Friday, Nov 8, 3:45 PM – 5:00 PM, Roundtable
Assess critical thinking
There is a workshop at the presenters' institutions on how to organize.
More claims than actual evidence, so data is being sought.
Main issues:
Programmatic eportfolio: not student presentation portfolios, but an academic portfolio.
Eportfolio forum:
http://ncepr.org
Look at the image of the green copy:
1. Integration and reflection
2. Social media – in community with other students, faculty, organizations
3. Resume builder
The eportfolio is part of the assessment and of the conversation on campus. Some departments use eportfolios extensively but are not happy. A programmatic academic eportfolio to collect data.
They use the Sakai open portfolio system.
Twelve departments, with six more in the second year. To speak the same language, they developed a guideline and conceptual framework (see snapshot of handout).
Curriculum mapping (see the grid on the handout) took much longer than expected.
Faculty were overwhelmed by the quantity of responses from students when filling out the grid. http://ow.ly/i/3FBL3 http://ow.ly/i/3FBMP
The role of the CETL: the provost at Kevin's institution charged the CETL with the portfolio effort.
The big argument of the CETL director with the provost is that the portfolio exists not only to collect data for assessment and accreditation but to provide a meaningful experience for the students. The EDUCAUSE Horizon report; learning analytics; scandalous headlines of students suing law schools; bad conclusions drawn from big data. The things that matter to students must be in the portfolio, so they get used to using it. Pre-reflection entries by the students shortened the advising sessions, since the advisor can see them ahead of time. The advisers will be the focal point; the advising portfolio is becoming central.
The portfolio must be used by faculty, not only by students.
What's the buy-in for students? The presentation portfolio serves marketing purposes. Google Sites, so that when students leave the institution they can “take” the portfolio with them, as well as go multimedia. Attempts failed because platforms that can be customized were not used. Digital identity. As a CETL director who is not a technology expert, how to learn from the faculty; and that was very…
Documenting and learning with eportfolios.
Faculty demonstrate reflections to students and how to enter them into the portfolio, using rubrics. Faculty are already using tools, but now connecting them with reflections.
STAR: situation, task, action, response.
Writing skills differentiate students, but even good writers got better at reflection.
How does one polish a portfolio before bringing it to an employer? Students work with career services to polish and proofread.
How much is the university responsible for an individual portfolio? How many levels of proofreading?
Poor student work reflects poor faculty attention.
Teaching Online and Its Impact on Face-to-Face Teaching
Friday, Nov 8, 3:45 PM – 5:00 PM, 35-Minute Research Session B
Greene & Franklin
Lorna Kearns, University of Pittsburgh
http://wikipodia.podnetwork.org/pod-2013-conference/presentations-2013/lkearns

Groups Inform Pedagogies
Friday, Nov 8, 3:45 PM – 5:00 PM, 35-Minute Research Session A
Carnegie III
Rhett McDaniel and Derek Bruff, Vanderbilt University

Freedom to Breathe: A Discussion about Prioritizing Your Center’s Work
Andy Goodman and Susan Shadle, Boise State University

Connecting, Risking, and Learning: A Panel Conversation about Social Media
Michelle Rodems, University of Louisville.  Conference C 9:00 AM – 10:15 AM
The use of social media in higher education

Panel of CETL directors and faculty. The guy from Notre Dame uses WordPress the same way I use it: collect questions, and after the third one create a blog entry, then answer the next questions with the URL of the blog entry. NspireD is the name of the blog.

The Ohio State UCAT guy is a Twitter guy. A program coordinator manages WordPress and the web site, intersecting with Facebook and Twitter. The platforms are integrated, so he did not need to know the technicalities. The graduate consultants do the setup; the coordinator tried to understand how the platforms mesh together. They can be used as conversation starters or to broadcast and share info. Use of hashtags: how to use them appropriately on Twitter and Facebook to streamline.

The SCSU problem: if we don't build it, they will not come; a Tim Burton version of Field of Dreams.

Rachel, CETL assistant director at the University of Michigan: she is out there personally and likes it. The web page is very static; Drupal serves as the content management system, so the blog is part of the web page, with entries twice a week. One of the staff people is an editor and writes blog posts, which are vetted by a second CETL staff member. The blog is auto-pushed to Twitter; screencasts go to a YouTube channel. Comments on the blog are minimal from faculty and staff; what about students? About 1,000 followers on Twitter. What do the analytics say? Hits on the home page, but no idea how much time is spent reading; people spend more time when using the tags. The blog is a less formal way to share information; a lot of material is recycled in December and August.
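
Mechanically, the auto-push from blog to Twitter mentioned above is just polling the blog's feed and posting anything new. Below is a minimal sketch in Python; the panel did not describe their actual setup, so the feed URL and the post_to_twitter() helper are hypothetical placeholders (a real version would call the platform's API through an authenticated client).

    import time
    import feedparser  # third-party feed parser: pip install feedparser

    FEED_URL = "https://example.edu/cetl/blog/rss"  # hypothetical feed URL
    seen = set()  # IDs of entries that have already been pushed

    def post_to_twitter(text):
        # Placeholder: printing keeps the sketch self-contained and runnable.
        print("Would tweet:", text)

    def push_new_entries():
        for entry in feedparser.parse(FEED_URL).entries:
            key = entry.get("id", entry.link)  # fall back to the link as an ID
            if key not in seen:
                seen.add(key)
                post_to_twitter(f"{entry.title} {entry.link}")

    while True:
        push_new_entries()
        time.sleep(30 * 60)  # poll every 30 minutes, a low-effort schedule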

Does anybody subscribe, and do you promote RSS?

The separate blog for a workshop requires interaction, and that has been a success.

For faculty development, the University of Michigan is using a blog; they recruited 50 faculty to follow it. A team of three used WordPress for a semester, followed by a survey and a focus group. A huge success: between 6 and 30 comments, and a community with no other space on campus.

How are you using social media to promote connections? Elevate the voices of others on campus by interviewing faculty. At the University of Michigan there was no interest in learning about what other faculty are doing, so they scrapped that initiative but started video narration about faculty who innovate: videotaped and edited (not high-quality video), tagged, and posted to the blog. This approach created more connection, because it is not text only.

What have been the obstacles and/or failures, and what have you learned?

Convincing the administration that the CETL can do it, and that it does not have to be the same quality as the web page and the printed material. Changing the mindset. No assessment, since nothing else was working and they were ready for a radical step such as a blog.

Same with Twitter: taking the risk to experiment with hashtags. Tweets cannot be pre-approved. It takes time to build an audience; one month will not have an impact. Start with the notion that you are building a repository, not a forum.

One of the panelists has a Google spreadsheet with information on all CETL social media sites. There are resources on how to deal with negative outcomes of using social media. On working with librarians, the Notre Dame panelist said they will offer you twenty sources. No, no, no, he said: give me your best three.

 

At the University of Michigan it is mostly grad students who write guest blog posts; almost no faculty.

Have you considered giving them more than a guest post, but without a facilitator? Let faculty do a blog post once a semester. It is not moderated, but rather guided toward what makes a good blog post. The interview-based approach is unique and does not show up elsewhere.

Institutional background is important in these decisions.

How often should the WordPress page be refreshed? How long can one person be the voice? It takes a lot of journalistic skill. Use the draft option to stagger publishing when several ideas come at once.

The mind shift for the CETL is to lower the standards and make it more informal; a blog post can always be fixed later. To avoid the false perception among faculty that this is not scholarly, it needs references. So: casual tone plus references.

The blog “from the students' perspective” is repurposed.

Risking Together: Cultivating Connection and Learning for Faculty Teaching Online
Michaella Thornton, Christopher Grabau, and Jerod Quinn, Saint Louis University
Oliver 9-11:15 AM

Space Matters! and Is There a Simple Formula to Understand and Improve Student Motivation
Kathleen Kane and Leslie A. Lopez, University of Hawaii at Manoa
Riverboat 9:00 AM – 10:15 AM

The Risks and Rewards of Becoming a Campus Change Agent
Dr. Adrianna Kezar, University of Southern California
William Penn Ballroom 10:30 AM – 12:00 PM

Branch campuses, students abroad, doing more with less, competition from for-profit institutions.

Students work more, but this is a good reflection of learning success.

The provost might ask to consolidate professional development opportunities for faculty and students, instead of faculty only.

Is the administration genuine, understanding, transparent? Administration is more about persuading than listening. Respect: do not assume that faculty will not accept it. If faculty are to make sacrifices, what will faculty see the administration sacrifice on its side? Leading from the middle means a collective vision for the future. Multilevel leadership: top-down efforts do not work, and bottom-up efforts are fragile. Managing up is less preferred than powering up. It is difficult to tell the administration that they miss or misunderstand the technology issue.

Four frames; the goal is multi-frame leadership: http://www.tnellen.com/ted/tc/bolman.html. Very much like Jim Collins' Good to Great: the right people on the bus, properly trained. http://www.afa1976.org/Portals/0/documents/Essentials/Creating%20Organizational%20Learning%20and%20Change.pdf

How to build a coalition: different perspectives; acknowledge the inherent conflict.

The Delphi project

 

It Takes a Campus: Promoting Information Literacy through Collaboration
Karla Fribley and Karen St. Clair, Emerson College
Oakmont 1:45 PM – 3:00 PM

Most of the attendees and both presenters were librarians

The presenters played a sketch to involve the participants.

Definition: what is IL (information literacy)? https://mobile.twitter.com/search/?q=%23POD13&s=hash

http://ow.ly/i/3G00e/original

Information literacy: collaborative work with faculty to design student learning outcomes for information literacy.

Guiding principles: backward course design.

Where do they see students struggle with research?

A survey question to students (what is most difficult for you?), with answers displayed as a Wordle.

http://ow.ly/i/3G0l6/original

Self-reflection: ow.ly/i/3G0UH

Curriculum mapping to identify which courses are the strategic ones for instilling non-credit information literacy.

ACRL Assessment in Action.

 

Risky Business: Supporting Institutional Data Gathering in Faculty Development Centers
Meghan Burke and Tom Pusateri, Kennesaw State University
Oliver 1:45 PM – 3:00 PM Roundtable

Exploring Issues of Perceptual Bias and International Faculty
Shivanthi Anandan, Drexel University.
Heinz 3:15 PM – 4:30 PM Roundtable

Why do we need it, and why only regarding international faculty? Lisa Wolf-Wendel.

Susan Twombly: pointers for hiring and retention. Performance covers both teaching and living. Salary effect: salary issues, not only the pay rate. An FLC of all tenure-track faculty without citizenship; they are worried about their tenure. Funding agencies: very few will fund you if you are not a citizen.

Diane Schafer: perceptual biases, graffiti. Cathryn Ross.

 

Averting Death by PowerPoint! From Killer Professors to Killer Presenters
Christy Price, Dalton State College
Riverboat 4:45 PM – 6:00 PM

How to create effective mini-lectures: a checklist for action planning.

Engage, and leave the lecture out. The reason some cannot move away from lecturing is that they treat lecture as performance art.

Make lectures mini. How long should a mini-lecture be? About 22 minutes, roughly the listener's age in minutes.

Emotional appeal, empathy.

Evoke positive emotions with humor. Always mixed-methods research, since the narrative matters: Berk, R. (2000); Sousa (2011).

Ethical obligations and emotional appeal.

Acknowledge the opposition.

Enhance memory processing with visuals and multimedia.

Use guided practice by minimizing note-taking.

Presentation Zen is a book one needs to read: http://www.barnesandnoble.com/w/presentation-zen-garr-reynolds/1100391495?ean=9780321525659

Enhance memory processing by creating mystery.

Address relevance.

 

http://advanceyourslides.com/2011/01/28/the-5-most-memorable-concepts-from-nancy-duartes-new-book-resonate/
Death by PowerPoint:  Nancy Duarte: The secret structure of great talks
http://www.ted.com/talks/nancy_duarte_the_secret_structure_of_great_talks.html

http://www.gobookee.org/get_book.php?u=aHR0cDovL3d3dy5vcGVuaXNibi5jb20vZG93bmxvYWQvMDQ3MDYzMjAxMS5wZGYKVGl0bGU6IFJlc29uYXRlOiBQcmVzZW50IFZpc3VhbCBTdG9yaWVzIFRoYXQgVHJhbnNmb3JtIC4uLg==

Engage faculty by showing them how their presentation is now, and how it can be.

Process with clickers.

Sunday morning session

Vygotsky's zone of proximal development and the flipped mindset. http://t.co/vCI8TOJ7J2. Cool tweets at #pod13.

IDAS process: Baudler, Boyd, Stromle (2013).

I – identify the issue

D – debrief the situation

A – analyze what happened

S – strategize solutions and opportunities for growth and future success

 

Formative Assessment

7 Smart, Fast Ways to Do Formative Assessment

Within these methods you’ll find close to 40 tools and tricks for finding out what your students know while they’re still learning.

edutopia.org/article/7-smart-fast-ways-do-formative-assessment

Entry and exit slips

Exit slips can take lots of forms beyond the old-school pencil and scrap paper. Whether you're assessing at the bottom of Bloom's taxonomy or the top, you can use tools like Padlet or Poll Everywhere, or measure progress toward attainment or retention of essential content or standards with tools like Google Classroom's Question tool, Google Forms with Flubaroo, and Edulastic.
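
Since form tools like Google Forms can export responses as a CSV spreadsheet, tallying exit slips is easy to automate. A minimal sketch in Python; the file name and question column below are hypothetical assumptions, not tied to any particular tool.

    import csv
    from collections import Counter

    def tally_exit_slips(path, question):
        # Count how often each answer to `question` appears in the CSV export.
        with open(path, newline="", encoding="utf-8") as f:
            return Counter(row[question].strip().lower()
                           for row in csv.DictReader(f) if row.get(question))

    counts = tally_exit_slips("exit_slips.csv", "What was the muddiest point today?")
    for answer, n in counts.most_common(5):
        print(f"{n:3d}  {answer}")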

Low-stakes quizzes and polls: If you want to find out whether your students really know as much as you think they know, use polls and quizzes created with Socrative or Quizlet, or in-class games and tools like Quizalize, Kahoot, FlipQuiz, Gimkit, Plickers, and Flippity.

Dipsticks: So-called alternative formative assessments are meant to be as easy and quick as checking the oil in your car, so they’re sometimes referred to as dipsticks. These can be things like asking students to:

  • write a letter explaining a key idea to a friend,
  • draw a sketch to visually represent new knowledge, or
  • do a think, pair, share exercise with a partner.

Interview assessments: If you want to dig a little deeper into students’ understanding of content, try discussion-based assessment methods. Casual chats with students in the classroom can help them feel at ease even as you get a sense of what they know, and you may find that five-minute interview assessments work well.

TAG feedback: tell a peer something they did well, ask a thoughtful question, give a suggestion.

Flipgrid, Explain Everything, or Seesaw

Methods that incorporate art: Consider using visual art or photography or videography as an assessment tool. Whether students draw, create a collage, or sculpt, you may find that the assessment helps them synthesize their learning.

Misconceptions and errors: Sometimes it’s helpful to see if students understand why something is incorrect or why a concept is hard. Ask students to explain the “muddiest point” in the lesson—the place where things got confusing or particularly difficult or where they still lack clarity. Or do a misconception check: present a common misconception and ask students whether they agree or disagree, and why.

Self-assessment: Don’t forget to consult the experts—the kids. Often you can give your rubric to your students and have them spot their strengths and weaknesses.

critical news literacy session

Critical news literacy session for social policy analysis course

Katie Querna, Thursday, 11AM, Stewart Hall

Posted to the Higher Ed Learning Collective:

https://www.theguardian.com/world/2022/feb/21/dumb-and-lazy-the-flawed-films-of-ukrainian-attacks-made-by-russias-fake-factory

https://english.elpais.com/science-tech/2022-02-24/the-war-in-ukraine-via-tiktok-how-ordinary-citizens-are-recording-russian-troops.html

+++++ please cover this information at home and bring your ideas and questions to class +++++

Most students can’t tell fake news from real news, study shows
Read more: https://blog.stcloudstate.edu/ims/2017/03/28/fake-news-3/

Module 1 (video to introduce students to the readings and expected tasks)

  1. Fake News / Misinformation / Disinformation
    1. Definitions
      1. Fake news, alternative facts
        https://blog.stcloudstate.edu/ims?s=fake+news
        https://blog.stcloudstate.edu/ims?s=alternative+facts
      2. Misinformation vs disinformation
        https://blog.stcloudstate.edu/ims/2018/02/18/fake-news-disinformation-propaganda/

        1. Propaganda
        2. Conspiracy theories
          1. Bots, trolls
            https://blog.stcloudstate.edu/ims/2017/11/22/bots-trolls-and-fake-news/
            https://blog.stcloudstate.edu/ims/2020/04/30/fake-social-media-accounts-and-politicians/
            https://blog.stcloudstate.edu/ims/2020/01/20/bots-and-disinformation/
        3. Clickbait
          Filter bubbles, echo chambers
          (8 min) video explains filter bubbles
          https://www.ted.com/talks/eli_pariser_beware_online_filter

+++++ thank you for covering this information at home. Please don’t forget to bring your questions and ideas to class +++++

Why are we here today?
We need to look deeper into the current 21st-century state of information and disinformation and determine how such awareness can help policy analysis.
How do we make up our minds about news and information? Where do we get our info? Whom do we believe, and whom do we mistrust?

What do you understand by the following three items, and what is their place in our efforts to analyze policies?
“critical thinking,” https://blog.stcloudstate.edu/ims/2014/05/11/the-5-step-model-to-teach-students-critical-thinking-skills/

“media literacy,” “Media Literacy now considers digital citizenship as part of media literacy — not the other way around”
https://blog.stcloudstate.edu/ims/2020/01/07/k12-media-literacy/

“critical [news] literacy”
https://youtu.be/i2WyIkK9IOg

How do these three items assist a better analysis of policies?

Class assignment:
Share a topic that is close to your heart.
Please feel welcome to use the following resources, and/or contribute your own, to determine the sources and their potential bias.

library spot fake news

fake news resources

fake news and video

Feel free also to use the following guidelines when establishing the veracity of information:

Here is a short (4 min) video introducing you to the well-known basics for evaluation of academic literature:
https://youtu.be/qUd_gf2ypk4

  1. ACCURACY
    1. Does the author cite reliable sources?
    2. How does the information compare with that in other works on the topic?
    3. Can you determine if the information has gone through peer-review?
    4. Are there factual, spelling, typographical, or grammatical errors?
  2. AUDIENCE
    1. Who do you think the authors are trying to reach?
    2. Is the language, vocabulary, style and tone appropriate for intended audience?
    3. What are the audience demographics? (age, educational level, etc.)
    4. Are the authors targeting a particular group or segment of society?
  3. AUTHORITY
    1. Who wrote the information found in the article or on the site?
    2. What are the author’s credentials/qualifications for this particular topic?
    3. Is the author affiliated with a particular organization or institution?
    4. What does that affiliation suggest about the author?
  4. CURRENCY
    1. Is the content current?
    2. Does the date of the information directly affect the accuracy or usefulness of the information?
  5. OBJECTIVITY/BIAS
    1. What is the author’s or website’s point of view?
    2. Is the point of view subtle or explicit?
    3. Is the information presented as fact or opinion?
    4. If opinion, is the opinion supported by credible data or informed argument?
    5. Is the information one-sided?
    6. Are alternate views represented?
    7. Does the point of view affect how you view the information?
  6. PURPOSE
    1. What is the author’s purpose or objective, to explain, provide new information or news, entertain, persuade or sell?
    2. Does the purpose affect how you view the information presented?

In 2021, however, all the suggestions above may not be sufficient to distinguish a reliable source of information, even if the article made it through the peer-review process. In time, you should learn to evaluate the research methods of the authors and decide if they are reliable; the same applies to the research findings and conclusions. As an illustration, the checklist can even be turned into a rough numeric score, as in the sketch below.
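
A minimal sketch in Python of that numeric turn: each of the six criteria above gets a 0–2 score. The scale and the cutoffs are illustrative assumptions only, not an established instrument.

    # Hypothetical rubric built on the six criteria above; the 0-2 scale and
    # the cutoffs are illustrative assumptions, not an established instrument.
    CRITERIA = ("accuracy", "audience", "authority", "currency", "objectivity", "purpose")

    def evaluate_source(scores):
        """scores maps each criterion to 0 (fails), 1 (mixed), or 2 (strong)."""
        for name in CRITERIA:
            if scores.get(name, 0) not in (0, 1, 2):
                raise ValueError("each criterion must be scored 0, 1, or 2")
        total = sum(scores.get(name, 0) for name in CRITERIA)  # maximum is 12
        if total >= 10:
            return "likely reliable"
        if total >= 6:
            return "use with caution; verify against other sources"
        return "not reliable enough for policy analysis"

    # Example: a current, authoritative, well-sourced article with a clear
    # point of view scores 9 of 12.
    print(evaluate_source({"accuracy": 2, "audience": 1, "authority": 2,
                           "currency": 2, "objectivity": 1, "purpose": 1}))
    # prints: use with caution; verify against other sources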

++++++++++++++++++++
Additional topics and ideas to explore at home:
civil disobedience

https://blog.stcloudstate.edu/ims/2014/09/30/disruptive-technologies-from-swarming-to-mesh-networking/
https://blog.stcloudstate.edu/ims/2019/08/30/tik-tok-students-and-teachers/
https://news.softpedia.com/news/Venezuela-Blocks-Walkie-Talkie-App-Zello-Amid-Protests-428583.shtml
http://www.businessinsider.com/yo-updates-on-israel-missile-attacks-2014-7

https://blog.stcloudstate.edu/ims/2016/11/14/internet-freedom/
https://blog.stcloudstate.edu/ims/2016/08/31/police-to-block-social-media/
https://blog.stcloudstate.edu/ims/2016/04/04/technology-and-activism/
