Searching for "apa style"

globalization economy democracy

Caldwell, C. (2017, April). Sending Jobs Overseas. CRB, 27(2).

http://www.claremont.org/crb/article/sending-jobs-overseas/ 

https://en.wikipedia.org/wiki/Claremont_Institute

Caldwell’s book review of
Baldwin, Richard E. The Great Convergence: Information Technology and the New Globalization. Cambridge, Massachusetts: The Belknap Press of Harvard University Press, 2016. not at SCSU library, available through ILL (https://mplus.mnpals.net/vufind/Record/008770850/Hold?item_id=MSU50008770850000010&id=008770850&hashKey=cff0a018a46178d4d3208ac449d86c4e#tabnav)

Globalization’s cheerleaders, from Columbia University economist Jagdish Bhagwati to New York Times columnist Thomas Friedman, made arguments from classical economics: by buying manufactured products from people overseas who made them cheaper than we did, the United States could get rich concentrating on product design, marketing, and other lucrative services. That turned out to be a mostly inaccurate description of how globalism would work in the developed world, as mainstream politicians everywhere are now discovering.

Certain skeptics, including polymath author Edward Luttwak and Harvard economist Dani Rodrik, put forward a better account. In his 1998 book Turbo-Capitalism, Luttwak gave what is still the most succinct and accurate reading of the new system’s economic consequences. “It enriches industrializing poor countries, impoverishes the semi-affluent majority in rich countries, and greatly adds to the incomes of the top 1 percent on both sides who are managing the arbitrage.”

In The Great Convergence, Richard Baldwin, an economist at the Graduate Institute in Geneva, gives us an idea why, over the past generation, globalization’s benefits have been so hard to explain and its damage so hard to diagnose.

We have had “globalization,” in the sense of far-flung trade, for centuries now.

But around 1990, the cost of sharing information at a distance fell dramatically. Workers on complex projects no longer had to cluster in the same factory, mill town, or even country. Other factors entered in. Tariffs fell. The rise of “Global English” as a common language of business reduced the cost of moving information (albeit at an exorbitant cost in culture). “Containerization” (the use of standard-sized shipping containers across road, rail, and sea transport) made packing and shipping predictable and helped break the world’s powerful longshoremen’s unions. Active “pro-business” political reforms did the rest.

Far-flung “global value chains” replaced assembly lines. Corporations came to do some of the work of governments, because in the free-trade climate imposed by the U.S., they could play governments off against one another. Globalization is not about nations anymore. It is not about products. And the most recent elections showed that it has not been about people for a long time. No, it is about tasks.

This means a windfall for what used to be called the Third World. More than 600 million people have been pulled out of dire poverty. They can get richer by building parts of things.

The competition that globalization has created for manufacturing has driven the value-added in manufacturing down close to what we would think of as zilch. The lucrative work is in the design and the P.R.—the brainy, high-paying stuff that we still get to do.

But only a tiny fraction of people in any society is equipped to do lucrative brainwork. In all Western societies, the new formula for prosperity is inconsistent with the old formula for democracy.

One of these platitudes is that all nations gain from trade. Baldwin singles out Harvard professor and former George W. Bush Administration economic adviser Gregory Mankiw, who urged passage of the Obama Administration mega-trade deals TPP and Transatlantic Trade and Investment Partnership (TTIP) on the grounds that America should “work in those industries in which we have an advantage compared with other nations, and we should import from abroad those goods that can be produced more cheaply there.”

That was a solid argument 200 years ago, when the British economist David Ricardo developed modern doctrines of trade. In practical terms, it is not always solid today. What has changed is the new mobility of knowledge. But knowledge is a special commodity. It can be reused. Several people can use it at the same time. It causes people to cluster in groups, and tends to grow where those groups have already clustered.

When surgeries involved opening the patient up like a lobster or a peapod, the doctor had to be in physical contact with a patient. New arthroscopic processes require the surgeon to guide cutting and cauterizing tools by computer. That computer did not have to be in the same room. And if it did not, why did it have to be in the same country? In 2001, a doctor in New York performed surgery on a patient in Strasbourg. In a similar way, the foreman on the American factory floor could now coordinate production processes in Mexico. Each step of the production process could now be isolated, and then offshored. This process, Baldwin writes, “broke up Team America by eroding American labor’s quasi-monopoly on using American firms’ know-how.”

To explain why the idea that all nations win from trade isn’t true any longer, Baldwin returns to his teamwork metaphor. In the old Ricardian world that most policymakers still inhabit, the international economy could be thought of as a professional sports league. Trading goods and services resembled trading players from one team to another. Neither team would carry out the deal unless it believed it to be in its own interests. Nowadays, trade is more like an arrangement by which the manager of the better team is allowed to coach the lousier one in his spare time.

Consider Vietnam, which does low-level assembly of wire harnesses for Honda. This does not mean Vietnam has industrialized, but nations like it no longer have to.

In the work of Thomas Friedman and other boosters you find value chains described as kaleidoscopic, complex, operating in a dozen different countries. Those are rare. There is less to “global value chains” than meets the eye. Most of them, Baldwin shows, are actually regional value chains. As noted, they exist on the periphery of the United States, Europe, or Japan. In this, offshoring resembles the elaborate international transactions that Florentine bankers under the Medicis engaged in for the sole purpose of avoiding church strictures on moneylending.

One way of describing outsourcing is as a verdict on the pay structure that had arisen in the West by the 1970s: on trade unions, prevailing-wage laws, defined-benefit pension plans, long vacations, and, more generally, the power workers had accumulated against their bosses.

In 1993, during the first month of his presidency, Bill Clinton outlined some of the promise of a world in which “the average 18-year-old today will change jobs seven times in a lifetime.” How could anyone ever have believed in, tolerated, or even wished for such a thing? A person cannot productively invest the resources of his only life if he’s going to be told every five years that everything he once thought solid has melted into air.

The more so since globalization undermines democracy, in the ways we have noted. Global value chains are extraordinarily delicate. They are vulnerable to shocks. Terrorists have discovered this. In order to work, free-trade systems must be frictionless and immune to interruption, forever. This means a program of intellectual property protection, zero tariffs, and cross-border traffic in everything, including migrants. This can be assured only in a system that is veto-proof and non-consultative—in short, undemocratic.

Sheltered from democracy, the economy of the free trade system becomes more and more a private space.

+#+#+#+#+#+#+#+#+#+#+#+#+#+#+#+#+#+#+#+

Caldwell, C. (2014, November). Twilight of Democracy. CRB, 14(4).

Caldwell’s book review of
Fukuyama, Francis. The Origins of Political Order: From Prehuman Times to the French Revolution. New York: Farrar, Straus and Giroux, 2011. SCSU Library: https://mplus.mnpals.net/vufind/Record/007359076  Call Number: JC11 .F85 2011

http://www.claremont.org/crb/article/twilight-of-democracy/

Fukuyama’s first volume opened with China’s mandarin bureaucracy rather than the democracy of ancient Athens, shifting the methods of political science away from specifically Western intellectual genealogies and towards anthropology. Nepotism and favor-swapping are man’s basic political motivations, as Fukuyama sees it. Disciplining those impulses leads to effective government, but “repatrimonialization”—the capture of government by private interests—threatens whenever vigilance is relaxed. Fukuyama’s new volume, which describes political order since the French Revolution, extends his thinking on repatrimonialization, from the undermining of meritocratic bureaucracy in Han China through the sale of offices under France’s Henri IV to the looting of foreign aid in post-colonial Zaire. Fukuyama is convinced that the United States is on a similar path of institutional decay.

Political philosophy asks which government is best for man. Political science asks which government is best for government. Political decline, Fukuyama insists, is not the same thing as civilizational collapse.

Fukuyama is not the first to remark that wars can spur government efficiency—even if front-line soldiers are the last to benefit from it.

Relative to the smooth-running systems of northwestern Europe, American bureaucracy has been a dud, riddled with corruption from the start and resistant to reform. Patronage—favors for individual cronies and supporters—has thrived.

Clientelism is an ambiguous phenomenon: it is bread and circuses, it is race politics, it is doing favors for special classes of people. Clientelism is both more democratic and more systemically corrupting than the occasional nepotistic appointment.

Fukuyama explains why modern mass liberal democracy has developed on clientelistic lines in the U.S. and meritocratic ones in Europe. In Europe, democracy, when it came, had to adapt itself to longstanding pre-democratic institutions, and to governing elites that insisted on established codes and habits. Where strong states precede democracy (as in Germany), bureaucracies are efficient and uncorrupt. Where democracy precedes strong states (as in the United States but also Greece and Italy), government can be viewed by the public as a piñata.

Fukuyama contrasts the painstaking Japanese development of Taiwan a century ago with the mess that the U.S. Congress, “eager to impose American models of government on a society they only dimly understood,” was then making of the Philippines. It is not surprising that Fukuyama was one of the most eloquent conservative critics of the U.S. invasion of Iraq from the very beginning.

What distinguishes once-colonized Vietnam and China and uncolonized Japan and Korea from these Third World basket cases is that the East Asian lands “all possess competent, high-capacity states,” in contrast to sub-Saharan Africa, which “did not possess strong state-level institutions.”

Fukuyama does not think ethnic homogeneity is a prerequisite for successful politics

the United States “suffers from the problem of political decay in a more acute form than other democratic political systems.” It has kept the peace in a stagnant economy only by dragooning women into the workplace and showering the working and middle classes with credit.

public-sector unions have colluded with the Democratic Party to make government employment more rewarding for those who do it and less responsive to the public at large. In this sense, government is too big. But he also believes that cutting taxes on the rich in hopes of spurring economic growth has been a fool’s errand, and that the beneficiaries of deregulation, financial and otherwise, have grown to the point where they have escaped bureaucratic control altogether. In this sense, government is not big enough.

Washington, as Fukuyama sees it, is a patchwork of impotence and omnipotence—effective where it insists on its prerogatives, ineffective where it has been bought out. The unpredictable results of democratic oversight have led Americans to seek guidance in exactly the wrong place: the courts, which have both exceeded and misinterpreted their constitutional responsibilities. The almost daily insistence of courts that they are liberating people by removing discretion from them gives American society a Soviet cast.

“Effective modern states,” he writes, “are built around technical expertise, competence, and autonomy.”

http://librev.com/index.php/2013-03-30-08-56-39/discussion/culture/3234-gartziya-i-problemite-na-klientelistkata-darzhava

#+#+#+#+#+#+#+#+#+#+#+#+#+#+#+#+#+#+#+#+#+

Williams, J. (2017, May). The Dumb Politics of Elite Condescension. NYT

https://mobile.nytimes.com/2017/05/27/opinion/sunday/the-dumb-politics-of-elite-condescension.html

the sociologists Richard Sennett and Jonathan Cobb call the “hidden injuries of class.” These are dramatized by a recent employment study, in which the sociologists Lauren A. Rivera and Andras Tilcsik sent 316 law firms résumés with identical and impressive work and academic credentials, but different cues about social class. The study found that men who listed hobbies like sailing and listening to classical music had a callback rate 12 times higher than those of men who signaled working-class origins, by mentioning country music, for example.

Politically, the biggest “hidden injury” is the hollowing out of the middle class in advanced industrialized countries. For two generations after World War II, working-class whites in the United States enjoyed a middle-class standard of living, only to lose it in recent decades.

The college-for-all experiment did not work. Two-thirds of Americans are not college graduates. We need to continue to make college more accessible, but we also need to improve the economic prospects of Americans without college degrees.

the United States has a well-documented dearth of workers qualified for middle-skill jobs that pay $40,000 or more a year and require some postsecondary education but not a college degree. A 2014 report by Accenture, Burning Glass Technologies and Harvard Business School found that a lack of adequate middle-skills talent affects the productivity of “47 percent of manufacturing companies, 35 percent of health care and social assistance companies, and 21 percent of retail companies.”

Skillful, a partnership among the Markle Foundation, LinkedIn and Colorado, is one initiative pointing the way. Skillful helps provide marketable skills for job seekers without college degrees and connects them with employers in need of middle-skilled workers in information technology, advanced manufacturing and health care. For more information, see my other IMS blog entries, such as https://blog.stcloudstate.edu/ims/2017/01/11/credly-badges-on-canvas/

document analysis methodology

document analysis – literature on the methodology

  • Bowen, G. A. (2009). Document Analysis as a Qualitative Research Method. Qualitative Research Journal, 9(2), 27–40.
    https://www.academia.edu/8434566/Document_Analysis_as_a_Qualitative_Research_Method
    Document analysis is a systematic procedure for reviewing or evaluating documents—both printed and electronic (computer-based and Internet-transmitted) material. Like other analytical methods in qualitative research, document analysis requires that data be examined and interpreted in order to elicit meaning, gain understanding, and develop empirical knowledge (Corbin & Strauss, 2008; see also Rapley, 2007).
    Document analysis is often used in combination with other qualitative research methods as a means of triangulation—‘the combination of methodologies in the study of the same phenomenon’ (Denzin, 1970, p. 291).
    The qualitative researcher is expected to draw upon multiple (at least two) sources of evidence; that is, to seek convergence and corroboration through the use of different data sources and methods. Apart from documents, such sources include interviews, participant or non-participant observation, and physical artifacts (Yin, 1994). By triangulating data, the researcher attempts to provide ‘a confluence of evidence that breeds credibility’ (Eisner, 1991, p. 110). By examining information collected through different methods, the researcher can corroborate findings across data sets and thus reduce the impact of potential biases that can exist in a single study. According to Patton (1990), triangulation helps the researcher guard against the accusation that a study’s findings are simply an artifact of a single method, a single source, or a single investigator’s bias. [A minimal sketch of this convergence check appears after the list below.] Mixed-method studies (which combine quantitative and qualitative research techniques) sometimes include document analysis. Here is an example: In their large-scale, three-year evaluation of regional educational service agencies (RESAs), Rossman and Wilson (1985) combined quantitative and qualitative methods—surveys (to collect quantitative data) and open-ended, semi-structured interviews with reviews of documents (as the primary sources of qualitative data). The document reviews were designed to identify the agencies that played a role in supporting school improvement programs.
  • Bowen, G. A. (2009). Document Analysis as a Qualitative Research Method. Qualitative Research Journal, 9(2), 27–40. doi: 10.3316/QRJ0902027
    http://www.emeraldinsight.com/action/showCitFormats?doi=10.3316%2FQRJ0902027
  • Document Review and Analysis
    https://www.bcps.org/offices/lis/researchcourse/develop_docreview.html
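To make Bowen’s convergence point concrete, here is a minimal sketch, in Python with invented theme names and counts, of checking which coded themes are corroborated across two data sources (say, interview transcripts and documents). It only illustrates the logic of triangulation as convergence-checking; it is not a procedure Bowen prescribes.

```python
# Minimal, hypothetical sketch of triangulation as convergence-checking:
# themes coded independently from two sources are compared to see which
# findings are corroborated by both. All theme names and counts are invented.

interview_themes = {"workload": 14, "mentoring": 9, "technology barriers": 6}
document_themes = {"workload": 11, "technology barriers": 4, "funding": 8}

corroborated = sorted(set(interview_themes) & set(document_themes))
single_source = sorted(set(interview_themes) ^ set(document_themes))

print("Corroborated across both sources:", corroborated)
print("Found in only one source (needs follow-up):", single_source)
```

Themes that surface in only one source are not necessarily wrong, but they are the ones a researcher would want to pursue further before treating them as findings.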

Qualitative

  • Semiotics (studies the life of signs in society; seeks to understand the underlying messages in visual texts; forms basis for interpretive analysis)
  • Discourse Analysis (concerned with production of meaning through talk and texts; how people use language)
  • Interpretative Analysis (captures hidden meaning and ambiguity; looks at how messages are encoded or hidden; acutely aware of who the audience is)
  • Conversation Analysis (concerned with structures of talk in interaction and achievement of interaction)
  • Grounded Theory (inductive and interpretative; developing novel theoretical ideas based on the data)

Document Analysis
Document analysis is a form of qualitative research in which documents are interpreted by the researcher to give voice and meaning around an assessment topic. Analyzing documents incorporates coding content into themes similar to how focus group or interview transcripts are analyzed. A rubric can also be used to grade or score a document. There are three primary types of documents:

• Public Records: The official, ongoing records of an organization’s activities. Examples include student transcripts, mission statements, annual reports, policy manuals, student handbooks, strategic plans, and syllabi.

• Personal Documents: First-person accounts of an individual’s actions, experiences, and beliefs. Examples include calendars, e-mails, scrapbooks, blogs, Facebook posts, duty logs, incident reports, reflections/journals, and newspapers.

• Physical Evidence: Physical objects found within the study setting (often called artifacts). Examples include flyers, posters, agendas, handbooks, and training materials.

As with all research, how you collect and analyse the data should depend on what you want to find out. Since you haven’t told us that, it is difficult to give you any precise advice. However, one really important matter in using documents as sources, whatever the overall aim of your research, is that data from documents are very different from data from speech events such as interviews, or overheard conversations. So the first analytic question you need to ask with regard to documents is ‘how are these data shaped by documentary production?’ Something which differentiates nearly all data from documents from speech data is that those who compose documents know what comes at the end while still able to alter the beginning, which gives far more opportunity for consideration of how the recipient of the utterances will view the provider; i.e., for more artful self-presentation. Apart from this, however, analysing the way documentary practice shapes your data will depend on what these documents are: for example, your question might turn out to be ‘How are news stories produced?’ if you are using news reports, or ‘What does this bureaucracy consider relevant information (and what not relevant, and what unmentionable)?’ if you are using completed proformas or internal reports from some organisation.

An analysis technique is just like a hardware tool. Choosing the right one depends on where and with what you are working. For a nail you should use a hammer, and there are lots of types of hammers to choose from, depending on the type of nail.

So, in order to recommend the better technique, it is important to know the objectives you intend to reach and the theoretical framework you are using. Perhaps, after that, we could tell you whether you should use content analysis, discourse analysis, or grounded theory (and which type of it since, like the hammer, there are several types of GTs).

written after Bowen (2009), but well chewed and digested.

1. Introduction: Qualitative vs. Quantitative Research?

excellent guide to the structure of qualitative research

++++++++++++++++
more on qualitative research in this IMS blog
https://blog.stcloudstate.edu/ims?s=qualitative+research

K12 mobile learning

CoSN Survey: Mobile Learning Top Priority for K–12 IT Leaders

By Richard Chang 04/04/17

https://thejournal.com/articles/2017/04/04/cosn-survey-mobile-learning-top-priority-for-k12-it-leaders.aspx

Mobile learning is the top priority for K–12 IT leaders, according to the fifth annual K–12 IT Leadership Survey published by the Consortium for School Networking (CoSN).

It’s the first time mobile learning ranked as the highest priority in the survey. The No. 2 priority is broadband and network capacity, which ranked first last year, and the No. 3 priority is cybersecurity and privacy, with 62 percent of respondents rating them more important than last year.

  • Understaffing remains a key issue for technology departments in school systems.
  • Single sign-on (SSO) is the most implemented interoperability initiative
  • More than one-third of IT leaders expressed no interest in bring your own device (BYOD) initiatives, up from 20 percent in 2014.
  • Interest in open educational resources (OER) is high
  • Education technology experience is common among IT leaders
  • Strong academic backgrounds are also prevalent among IT leaders.
  • Lack of diversity continues to be an issue for school district technology leaders.

CoSN is a nonprofit association for school system technology leaders. To read or download the full IT leadership survey, visit this CoSN site.

+++++++++++++++++++
more on mobile learning in this IMS blog
https://blog.stcloudstate.edu/ims?s=mobile+learning

qualitative method research

Cohort 7

By miltenoff | View this Toon at ToonDoo

Qualitative Method Research

quote

Data treatment and analysis

Because the questionnaire data comprised both Likert scales and open questions, they were analyzed quantitatively and qualitatively. Textual data (open responses) were qualitatively analyzed by coding: each segment (e.g. a group of words) was assigned to a semantic reference category, as systematically and rigorously as possible. For example, “Using an iPad in class really motivates me to learn” was assigned to the category “positive impact on motivation.” The qualitative analysis was performed using an adapted version of the approaches developed by L’Écuyer (1990) and Huberman and Miles (1991, 1994). Thus, we adopted a content analysis approach using QDAMiner software, which is widely used in qualitative research (see Fielding, 2012; Karsenti, Komis, Depover, & Collin, 2011). For the quantitative analysis, we used SPSS 22.0 software to conduct descriptive and inferential statistics. We also conducted inferential statistics to further explore the iPad’s role in teaching and learning, along with its motivational effect. The results will be presented in a subsequent report (Fievez, & Karsenti, 2013)

Fievez, A., & Karsenti, T. (2013). The iPad in Education: uses, benefits and challenges. A survey of 6057 students and 302 teachers in Quebec, Canada (p. 51). Canada Research Chair in Technologies in Education. Retrieved from https://www.academia.edu/5366978/The_iPad_in_Education_uses_benefits_and_challenges._A_survey_of_6057_students_and_302_teachers_in_Quebec_Canada

unquote
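As a rough illustration of the coding step described in the quote above (assigning each segment of an open response to a semantic reference category), here is a minimal Python sketch that uses simple keyword rules. The categories, keywords, and sample responses are hypothetical; real coding in QDA Miner or NVivo is iterative and human-checked, so this only shows the mechanical idea of a first pass.

```python
# Hypothetical sketch: rule-based first-pass coding of open survey responses
# into semantic categories (e.g. "positive impact on motivation"), echoing
# the segment-to-category assignment described by Fievez & Karsenti (2013).
# Categories, keywords, and responses are invented for illustration only.

categories = {
    "positive impact on motivation": ["motivates", "motivating", "engaged", "fun"],
    "distraction": ["distracting", "off-task", "games in class"],
    "technical problems": ["crashes", "wifi", "battery", "slow"],
}

responses = [
    "Using an iPad in class really motivates me to learn",
    "The app crashes and the wifi is slow",
    "I find the device distracting during lectures",
]

def code_response(text):
    """Return every category whose keywords appear in the response."""
    text = text.lower()
    hits = [cat for cat, words in categories.items()
            if any(w in text for w in words)]
    return hits or ["uncoded"]  # flag anything unmatched for human review

for r in responses:
    print(code_response(r), "<-", r)
```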

The 20th-century practice of conducting qualitative research through an oral interview and then processing the results manually triggered, in the second half of the 20th century, sometimes condescending attitudes from researchers in the exact sciences.
The reason was the advent of computing power in the second half of the 20th century, which allowed the exact sciences to claim “scientific” and “data-based” results.
One statistical package, SPSS, is today widely known and considered a magnificent tool for building solid, statistically based argumentation, which further perpetuates the perceived superiority of quantitative over qualitative methods.
At the same time, qualitative researchers continue to lag behind, mostly due to the inertia of their approach to qualitative analysis. Qualitative analysis continues to be processed in the olden ways. While there is nothing wrong with the “olden” ways, harnessing computational power can streamline the process and even present options which the “human eye” sometimes misses.
Below are some suggestions you may consider when you embark on the path of qualitative research.
The Use of Qualitative Content Analysis in Case Study Research
Florian Kohlbacher
http://www.qualitative-research.net/index.php/fqs/article/view/75/153

excellent guide to the structure of qualitative research

Palys, T., & Atchison, C. (2012). Qualitative Research in the Digital Era: Obstacles and Opportunities. International Journal Of Qualitative Methods, 11(4), 352-367.
http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dkeh%26AN%3d89171709%26site%3dehost-live%26scope%3dsite
Palys and Atchison (2012) present a compelling case for bringing your qualitative research to the level of quantitative research by using modern tools for qualitative analysis.
1. The authors correctly promote NVivo as the “jaguar” of qualitative research tools. Be aware, however, of the existence of other “Geo Metro” tools, which, for your research, might achieve the same result (see the bottom of this blog entry).
2. The authors promote a new approach to Chapter 2 of the doctoral dissertation, namely OCR-ing PDF articles (most of your literature as of 2017 is in either PDF or electronic textual format) through applications such as
Abbyy Fine Reader, https://www.abbyy.com/en-us/finereader/
OmniPage,  http://www.nuance.com/for-individuals/by-product/omnipage/index.htm
Readiris http://www.irislink.com/EN-US/c1462/Readiris-16-for-Windows—OCR-Software.aspx
The text from the articles is processed either through NVIVO or related programs (see bottom of this blog entry). As the authors propose: ” This is immediately useful for literature review and proposal writing, and continues through the research design, data gathering, and analysis stages— where NVivo’s flexibility for many different sources of data (including audio, video, graphic, and text) are well known—of writing for publication” (p. 353).
In other words, you can try to wrap your head around a huge amount of textual information on your own, but you can also approach the task through a parallel process of running the same text through a tool.
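As one sketch of that parallel, tool-driven pass, the snippet below converts a folder of PDF articles to plain text with the poppler pdftotext command-line tool, producing files that can then be loaded into NVivo, QDA Miner, or a script for coding. The folder names are placeholders, and the assumption is that pdftotext is installed; for scanned, image-only PDFs, an OCR product such as ABBYY FineReader, OmniPage, or Readiris would fill this role instead.

```python
# Hypothetical pipeline sketch: convert a folder of PDF articles to plain
# text so the literature can be processed by a tool alongside manual reading.
# Assumes the poppler "pdftotext" CLI is installed and on the PATH.

import subprocess
from pathlib import Path

pdf_dir = Path("literature_pdfs")   # placeholder folder of collected articles
txt_dir = Path("literature_txt")    # placeholder output folder
txt_dir.mkdir(exist_ok=True)

for pdf in sorted(pdf_dir.glob("*.pdf")):
    out = txt_dir / (pdf.stem + ".txt")
    subprocess.run(["pdftotext", str(pdf), str(out)], check=True)
    print("converted:", pdf.name, "->", out.name)
```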
 +++++++++++++++++++++++++++++
Here are some suggestions for Computer Assisted / Aided Qualitative Data Analysis Software (CAQDAS) for small and large community applications:

– RQDA (the small one): http://rqda.r-forge.r-project.org/ (see Metin Caliskan’s tutorials on YouTube); one active developer.
– GATE (the large one): http://gate.ac.uk/ | https://gate.ac.uk/download/

text mining: https://en.wikipedia.org/wiki/Text_mining
Text mining, also referred to as text data mining, roughly equivalent to text analytics, is the process of deriving high-quality information from text. High-quality information is typically derived through the devising of patterns and trends through means such as statistical pattern learning. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output.
https://ischool.syr.edu/infospace/2013/04/23/what-is-text-mining/
Qualitative data is descriptive data that cannot be measured in numbers and often includes qualities of appearance like color, texture, and textual description. Quantitative data is numerical, structured data that can be measured. However, there is often slippage between qualitative and quantitative categories. For example, a photograph might traditionally be considered “qualitative data,” but when you break it down to the level of pixels, it can be measured.
A word of caution: text mining doesn’t generate new facts and is not an end in and of itself. The process is most useful when the data it generates can be further analyzed by a domain expert, who can bring additional knowledge for a more complete picture. Still, text mining creates new relationships and hypotheses for experts to explore further.
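To make the definition above concrete, here is a minimal, standard-library-only Python sketch of one early text-mining step: structuring raw text into term counts and surfacing terms shared across documents as candidate patterns. The toy documents and stop-word list are invented; real pipelines add stemming, proper stop-word lists, and, as the caution above says, review by a domain expert.

```python
# Minimal text-mining sketch (standard library only): structure raw text
# into term counts, then surface terms shared across documents as candidate
# "patterns" for an expert to examine. Toy documents are invented.

import re
from collections import Counter

docs = {
    "doc1": "Mobile learning is the top priority for K-12 IT leaders.",
    "doc2": "IT leaders rank broadband and mobile learning as priorities.",
    "doc3": "Qualitative researchers code interview text into themes.",
}

stopwords = {"is", "the", "for", "and", "as", "into"}

def terms(text):
    """Lowercase, tokenize, and drop stop words."""
    return [w for w in re.findall(r"[a-z0-9-]+", text.lower()) if w not in stopwords]

term_counts = {name: Counter(terms(text)) for name, text in docs.items()}

# Terms appearing in more than one document hint at cross-document patterns.
doc_frequency = Counter(t for counts in term_counts.values() for t in counts)
shared_terms = sorted(t for t, n in doc_frequency.items() if n > 1)

print("Per-document term counts:", term_counts)
print("Terms shared across documents:", shared_terms)
```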

quick and easy:

intermediate:

advanced:

http://tidytextmining.com/

Introduction to GATE Developer  https://youtu.be/o5uhMF15vsA


 

use of RapidMiner:

https://rapidminer.com/pricing/

– Coding Analysis Toolkit (CAT) from University of Pittsburgh and University of Massachusetts
– Raven’s Eye is an online natural language ANALYSIS tool based
– ATLAS.TI
– XSIGTH

– QDA Miner: http://provalisresearch.com/products/qualitative-data-analysis-software/

There is also a free version called QDA Miner Lite with limited functionalities: http://provalisresearch.com/products/qualitative-data-analysis-software/freeware/

– MAXQDA

–  NVivo

– SPSS Text Analytics

– Kwalitan

– Transana (include video transcribing capability)

– XSight

– NUD*IST https://www.qsrinternational.com/

(Cited from: https://www.researchgate.net/post/Are_there_any_open-source_alternatives_to_Nvivo [accessed Apr 1, 2017].)

– OdinText

– IBM Watson Conversation
– IBM Watson Text to Speech
– Google Translate API
– MeTA
– LingPipe
– NLP4J
– Timbl
– Colibri Core
– CRF++
– Frog
– Ucto
– CRFsuite

– FoLiA
– PyNLPl
– openNLP
– NLP Compromise
– MALLET
(Cited from: https://www.g2crowd.com/products/nvivo/competitors/alternatives [accessed April 1, 2017].)
+++++++++++++++++++++++++=
http://www.socresonline.org.uk/3/3/4.html
Christine A. Barry (1998) ‘Choosing Qualitative Data Analysis Software: Atlas/ti and Nudist Compared’
Sociological Research Online, vol. 3, no. 3, <http://www.socresonline.org.uk/3/3/4.html>

Pros and Cons of Computer Assisted Qualitative Data Analysis Software

+++++++++++++++++++++++++
more on quantitative research:

Asamoah, D. A., Sharda, R., Hassan Zadeh, A., & Kalgotra, P. (2017). Preparing a Data Scientist: A Pedagogic Experience in Designing a Big Data Analytics Course. Decision Sciences Journal of Innovative Education, 15(2), 161–190. https://doi.org/10.1111/dsji.12125
++++++++++++++++++++++++
literature on quantitative research:
Borgman, C. L. (2015). Big Data, Little Data, No Data: Scholarship in the Networked World. MIT Press. https://mplus.mnpals.net/vufind/Record/ebr4_1006438
St. Cloud State University MC Main Collection – 2nd floor AZ195 .B66 2015
p. 161 Data scholarship in the Humanities
p. 166 When Are Data?
Philip Chen, C. L., & Zhang, C.-Y. (2014). Data-intensive applications, challenges, techniques and technologies: A survey on Big Data. Information Sciences, 275(Supplement C), 314–347. https://doi.org/10.1016/j.ins.2014.01.015

digital learning

The Disruption of Digital Learning: Ten Things We Have Learned

Featured in: Leadership & Management    https://www.linkedin.com/pulse/disruption-digital-learning-ten-things-we-have-learned-josh-bersin

meetings with Chief Learning Officers, talent management leaders, and vendors of next generation learning tools.

The corporate L&D industry is over $140 billion in size, and it crosses over into the $300 billion marketplace for college degrees, professional development, and secondary education around the world.

Digital Learning does not mean learning on your phone, it means “bringing learning to where employees are.” In other words, this new era is not only a shift in tools, it’s a shift toward employee-centric design. Shifting from “instructional design” to “experience design” and using design thinking are key here.

The Evolution of Corporate Training

1) The traditional LMS is no longer the center of corporate learning, and it’s starting to go away.

LMS platforms were designed around the traditional content model, using a 17-year-old standard called SCORM. SCORM is a technology developed in the 1980s, originally intended to help companies track training records from their CD-ROM based training programs.

the paradigm that we built was focused on the idea of a “course catalog,” an artifact that makes sense for formal education, but no longer feels relevant for much of our learning today.

He is not saying the $4 billion LMS market is dead, but the center of action has moved (i.e., their cheese has been moved). Today’s LMS is much more of a compliance management system, serving as a platform for record-keeping, and this function can now be replaced by new technologies.

We have come from a world of CD ROMs to online courseware (early 2000s) to an explosion of video and instructional content (YouTube and MOOCs in the last five years), to a new world of always-on, machine-curated content of all shapes and sizes. The LMS, which was largely architected in the early 2000s, simply has not kept up effectively.

2) The emergence of the X-API makes everything we do part of learning.

In the days of SCORM (the technology developed by Boeing in the 1980s to track CD-ROMs) we could only really track what you did in a traditional or e-learning course. Today all these other activities are trackable using the X-API (also called Tin Can or the Experience API). So just like Google and Facebook can track your activities on websites and your browser can track your clicks on your PC or phone, the X-API lets products like the learning record store keep track of all your digital activities at work.
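For illustration, here is roughly what a single xAPI (Tin Can) statement looks like and how it might be posted to a Learning Record Store. The actor/verb/object structure follows the public xAPI specification; the LRS endpoint, credentials, and activity IDs below are placeholders, not a real service.

```python
# Hypothetical sketch: sending one xAPI ("Tin Can") statement to an LRS.
# The endpoint URL, credentials, and activity IDs are placeholders.
# The actor / verb / object structure follows the xAPI specification.

import requests  # third-party HTTP library

statement = {
    "actor": {"mbox": "mailto:jane.doe@example.com", "name": "Jane Doe"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/safety-micro-lesson-01",
        "definition": {"name": {"en-US": "Safety micro-lesson 01"}},
    },
}

resp = requests.post(
    "https://lrs.example.com/xapi/statements",   # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_key", "lrs_secret"),              # placeholder credentials
)
print(resp.status_code, resp.text)
```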

Evolution of Learning Technology Standards

3) As content grows in volume, it is falling into two categories: micro-learning and macro-learning.

MicroLearning vs. MacroLearning
Understanding Macro vs. Micro Learning

4) Work Has Changed, Driving The Need for Continuous Learning

Why is all the micro learning content so important? Quite simply because the way we work has radically changed. We spend an inordinate amount of time looking for information at work, and we are constantly bombarded by distractions, messages, and emails.

The Overwhelmed Employee
Too Much Time Searching

Employees spend 1% of their time learning

5) Spaced Learning Has Arrived

If we consider the new world of content (micro and macro), how do we build an architecture that teaches people what to use when? Can we make it easier and avoid all this searching?

“spaced learning.”

Neurological research has proved that we don’t learn well through “binge education” like a course. We learn by being exposed to new skills and ideas over time, with spacing and questioning in between. Studies have shown that students who cram for final exams lose much of their memory within a few weeks, yet students who learn slowly with continuous reinforcement can capture skills and knowledge for decades.

Ebbinghaus forgetting curve
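The Ebbinghaus forgetting curve is commonly modeled as exponential decay, R = e^(-t/S), where t is the time since the last exposure and S is a memory “stability” that grows with each spaced review. The Python sketch below uses invented parameter values purely to illustrate why spaced reviews beat one-shot cramming; it is not a claim about the exact numbers.

```python
# Illustrative sketch of an Ebbinghaus-style forgetting curve:
# retention R = exp(-t / S), where S ("stability") is assumed to grow
# with each spaced review. All parameter values are invented.

import math

def retention(days_since_review, stability):
    return math.exp(-days_since_review / stability)

# One-off "binge" study: a single exposure with low stability.
binge = retention(days_since_review=30, stability=5)

# Spaced study: each review is assumed to roughly double stability.
stability = 5
review_days = [1, 3, 7, 14]      # days on which the material is re-studied
last_review = 0
for day in review_days:
    stability *= 2               # assumed boost from each successful recall
    last_review = day

spaced = retention(days_since_review=30 - last_review, stability=stability)

print(f"Retention at day 30 after cramming once:  {binge:.2f}")
print(f"Retention at day 30 after spaced reviews: {spaced:.2f}")
```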

Spaced Learning: Repetition, Spacing, Questioning

6) A New Learning Architecture Has Emerged: With New Vendors To Consider

One of the keys to digital learning is building a new learning architecture. This means using the LMS as a “player” but not the “center,” and looking at a range of new tools and systems to bring content together.
The New Learning Landscape

On the upper left is a relatively new breed of vendors, including companies like Degreed, EdCast, Pathgather, Jam, Fuse, and others, that serve as “learning experience” platforms. They aggregate, curate, and add intelligence to content, without specifically storing content or authoring in any way. In a sense they develop a “learning experience,” and they are all modeled after magazine-like interfaces that enable users to browse, read, consume, and rate content.

The second category is the “program experience platforms” or “learning delivery systems.” These companies, which include vendors like NovoEd, EdX, Intrepid, Everwise, and many others (including many LMS vendors), help you build a traditional learning “program” in an open and easy way. They offer pathways, chapters, social features, and features for assessment, scoring, and instructor interaction. While many of these features belong in an LMS, these systems are built in a modern cloud architecture, and they are effective for programs like sales training, executive development, onboarding, and more. In many ways you can consider them “open MOOC platforms” that let you build your own MOOCs.

The third category at the top I call “micro-learning platforms” or “adaptive learning platforms.” These are systems that operate more like intelligent, learning-centric content management systems that help you take lots of content, arrange it into micro-learning pathways and programs, and serve it up to learners at just the right time. Qstream, for example, has focused initially on sales training – and clients tell me it is useful at using spaced learning to help sales people stay up to speed (they are also entering the market for management development). Axonify is a fast-growing vendor that serves many markets, including safety training and compliance training, where people are reminded of important practices on a regular basis, and learning is assessed and tracked. Vendors in this category, again, offer LMS-like functionality, but in a way that tends to be far more useful and modern than traditional LMS systems. And I expect many others to enter this space.

Perhaps the most exciting part of tools today is the growth of AI and machine-learning systems, as well as the huge potential for virtual reality.

A Digital Learning Architecture

7) Traditional Coaching, Training, and Culture of Learning Has Not Gone Away

The importance of culture and management

8) A New Business Model for Learning

The days of spending millions of dollars on learning platforms are starting to come to an end. We do have to make strategic decisions about what vendors to select, but given the rapid and immature state of the market, I would warn against spending too much money on any one vendor at a time. The market has yet to shake out, and many of these vendors could go out of business, be acquired, or simply become irrelevant in 3-5 years.

9) The Impact of Microsoft, Google, Facebook, and Slack Is Coming

The newest versions of Microsoft Teams, Google Hangouts and Google Drive, Workplace by Facebook, Slack, and other enterprise IT products now give employees the opportunity to share content, view videos, and find context-relevant documents in the flow of their daily work.

We can imagine that Microsoft’s acquisition of LinkedIn will result in some integration of Lynda.com content in the flow of work. (Imagine if you are trying to build a spreadsheet and a relevant Lynda course opens up). This is an example of “delivering learning to where people are.”

New work environments will be learning environments

10) A new set of skills and capabilities in L&D

It’s no longer enough to consider yourself a “trainer” or “instructional designer” by career. While instructional design continues to play a role, we now need L&D to focus on “experience design,” “design thinking,” the development of “employee journey maps,” and much more experimental, data-driven, solutions in the flow of work.

Almost all the companies are now teaching themselves design thinking; they are using MVP (minimal viable product) approaches to new solutions, and they are focusing on understanding and addressing the “employee experience,” rather than just injecting new training programs into the company.
New Capabilities Needed

+++++++++++++++++++
more on elearning in this IMS blog
https://blog.stcloudstate.edu/ims?s=elearning

compensation for online

Compensation for creation of online courses

++++++++++++++++++++

I absolutely echo Kimber’s notion that a team approach to course development can actually take longer, even when one of the team members is an instructional designer. Perhaps because faculty members are used to controlling all aspects of their course development and delivery, the division of labor concept may feel too foreign to them. An issue that is similar in nature and referred to as ‘unbundling the faculty role’ is discussed at length in the development of competency-based education (CBE) courses and it is not typically a concept that faculty embrace.

Robin

+++++++++++++++++++

I will also confirm that the team approach to course development can take longer.  Indeed it does in my experience.  It requires much more “back and forth”, negotiating of who is doing what, ensuring that the overall approach is congruent, etc.  That’s not to say that it’s not a worthwhile endeavor in some cases where it makes pedagogical sense (in our case we are designing courses for 18-22 year-old campus-based learners and 22+ year-old fully online learners at the same time), but if time/cost savings is the goal, you will be sorely disappointed, in my experience.  The “divide and conquer” approach requires a LOT of coordination and oversight.  Without that you will likely have a cobbled together, hodgepodge of a course that doesn’t meet expectations.

Best, Carine  Director, Office of Instructional Design & Academic Technology Ottawa University 1001 S. Cedar St. * Ottawa, KS 66067 carine.ullom@ottawa.edu * 785-248-2510

++++++++++++++++++++

Breaking up a course and coming up with a cohesive design and approach could make the design process longer. At SSC, we generally work with our faculty over the course of a semester for each course. When we’ve worked with teams, we have not seen a shortened timeline.
The length of time it takes to develop a course depends on the content. Are there videos? If so, they have to be created, which is time-consuming, plus they either need to have a transcript created or they need subtitles. Both of those can be time-consuming. PowerPoint slides take time, plus they need more content to make them relevant. We are working with our faculty to use the Universal Design for Learning model, which means we’re challenging them to create the content to benefit the most learners.
I have a very small team whose sole focus is course design and it takes us 3-4 weeks to design a course and it’s our full-time job!

Linda
Linda C. Morosko, MA Director, eStarkState Division of Student Success 330-494-6170 ext. 4973 lmorosko@starkstate.edu

+++++++++++++++++++++++++

Kelvin, we also use the 8-week development cycle, but do occasionally have to lengthen that cycle for particularly complex courses or in rare cases when the SME has had medical emergencies or other major life disruptions.  I would be surprised if multiple faculty working on a course could develop it any more quickly than a single faculty member, though, because of the additional time required for them to agree and the dispersed sense of responsibility. Interesting idea.

-Kimber

Dr. Kimberly D. Barnett Gibson, Assistant Vice President for Academic Affairs and Online Learning Our Lady of the Lake University 411 SW 24th Street San Antonio, TX 78207 Kgibson@ollusa.edu 210.431.5574 BlackBoard IM kimberly.gibson  https://www.pinterest.com/drkdbgavpol@drkimberTweets

++++++++++++++++++++++++

Hello everyone. As a follow-up to the current thread, how long do you typically give a course developer to develop a master course for your institution? We currently use an eight-week model, but some faculty have indicated that that is not enough time for them, although we have teams of 2 to 4 faculty developing such content. Our current assumption is that with teams, there can be divisions of labor that can reduce the total amount of time needed during the course development process.

Kelvin Bentley, PhD Vice President of Academic Affairs, TCC Connect Campus Tarrant County College District

+++++++++++++++++++++++++++

At Berkeley College, full-time faculty may develop online courses in conjunction with an instructional designer.   The course is used as a master template for other sections to be assigned from. Once the course has been scheduled and taught, the faculty member receives a stipend.  The faculty member would receive their normal pay to teach the developed course as part of their semester course load, with no additional royalties assigned for it or any additional sections that may be provided to students.

Regards, Gina   Gina Okun Assistant Dean, Online Berkeley College  64 East Midland Avenue, Suite 2, Paramus, NJ 07652 (973)405-2111  x6309 gina-okun@berkeleycollege.edu

+++++++++++++++++++++++++

We operate with nearly all adjunct faculty where >70% of enrollment credits are online.
With one exception that I can recall, the development contract includes the college’s outright ownership, with no royalty rights. One of the issues with a royalty based arrangement would be what to do when the course is revised (which happens nearly every term, to one degree or another). At what point does the course begin to take on the character of another person’s input?
What do you do if the course is adapted for a shorter summer term, or a between-term intensive? What if new media tools or a different LMS are used? Is the royalty arrangement based on the syllabus or the course content itself? What happens if the textbook goes out of print, or an Open resource becomes available? What happens if students evaluate the course poorly?
I’m not in position to set this policy — I’m only reporting it. I like the idea of a royalty arrangement but it seems like it could get pretty messy. It isn’t as if you are licensing a song or an image where the original product doesn’t change. Courses, the modes of delivery, and the means of communication change all the time. Seems like it would be hard to define what constitutes “the course” after a certain amount of time.

Steve Covello Rich Media Specialist/Instructional Designer/Online Instructor Chalk & Wire e-Portfolio Administrator Granite State College 603-513-1346 Video chat: https://appear.in/id.team  Scheduling: http://meetme.so/stevecovello

++++++++++++++++++++++++++++

I’ve worked with many institutions that have used Subject Matter Experts (SMEs) to develop or provide the online course content. Most often, the institutions also provide a resource in the form of an Instructional Designer (ID) to take the content and create the actual course environment.

The SME is paid on a contract basis for provision of the content. This is a one-time payment, and the institution then owns the course content (other than integrated published materials such as text books, licensed online lab products, etc.). The SME may be an existing faculty member at the institution or not, or the SME may go on to teach the course at the institution. In any event, whoever teaches the course would be paid the standard faculty rate for the course. If the course requires revisions to the extent that a person will need to be engaged for content updates, then that can be a negotiated contract. Typically it is some fraction of the original development cost. No royalties are involved.

Hap Aziz, Ed.D. @digitalhap http:hapaziz.wordpress.com

++++++++++++++++++++

Within SUNY, there is some variance regarding whether a stipend is paid for development or not. In either case, since we are unionized there is policy regarding IP. IP resides with the faculty developer unless both parties agree in writing in the form of a contract to assign or share rights.

Policy statement: http://uupinfo.org/reports/reportpdf/IntellectualPropertyUpdated2016.pdf

Thank you for your feedback on this issue. Our institution does not provide a royalty as we consider course development a fee-for-service arrangement. We pay teams of 2-4 faculty $1000 each to develop master course shells for our high-enrollment courses. Instead of a royalty fee, I think an institution can simply provide course developers the perk of first right of refusal to teach the course when it is offered, as well as providing course developers with the first option to make revisions to the course shell over time.

Kelvin

Kelvin Bentley, Ph.D. Vice President of Academic Affairs, TCC Connect Campus Tarrant County College District

Once upon a time, and several positions ago, we set up a google doc for capturing all kinds of data points across institutions, like this. I’m sure it’s far out of date, but may still have some ideas or info in there – and could possibly be dusted off and oiled up for re-use… I present the Blend-Online Data Collector. This tab is for course development payment.

Kind regards,

Clark

Clark Shah-Nelson

Assistant Dean, Instructional Design and Technology
University of Maryland School of Social Work—Twitter … LinkedIn—voice/SMS: (646) 535-7272, fax: 270.514.0112

Hi Jenn,

Just want to clarify…you say faculty “sign over all intellectual property rights of the course to the college,” but later in the email say “Faculty own all intellectual property and can take it with them to teach at another institution,” so is your policy changing to the former? Or is it the latter, and that is what you are asking about?

I’ll send details on our policy directly to your email account.

Best,

Ellen

On Tue, Dec 6, 2016 at 9:43 AM, Jennifer Stevens <jennifer_stevens@emerson.edu> wrote:

Hello all,

I am tasked with finding out what the going rate is for the following model:
We pay an adjunct faculty member (“teaching faculty”) a set amount in order to develop an online course and sign over all intellectual property rights of the course to the college.
Is anyone doing this? I’ve heard of models that include royalties, but I personally don’t know of any that offer straight payment for IP. I know this can be a touchy subject, so feel free to respond directly to me and I will return and post a range of payment rates with no other identifying data.
For some comparison, we are currently paying full time faculty a $5000 stipend to spend a semester developing their very first online class, and then they get paid to teach the class. Subsequent online class developments are unpaid. Emerson owns the course description and course shell and is allowed to show the course to future faculty who will teach the online course. Faculty own all intellectual property and can take it with them to teach at another institution. More info: http://www.emerson.edu/itg/online-emerson/frequently-asked-questions
I asked this on another list, but wanted to get Blend_Online’s opinion as well. Thanks for any pointers!
Jenn Stevens
Director | Instructional Technology Group | 403A Walker Building  |  Emerson College  |  120 Boylston St  |  Boston MA 02116  |  (617) 824-3093

Ellen M. Murphy

Director of Program Development
Graduate Professional Studies

Brandeis University Rabb School

781-736-8737

++++++++++++++++
more on compensation for online courses in this IMS blog:
https://blog.stcloudstate.edu/ims?s=online+compensation

Plagiarism Past, Present, and Future

Plagiarism: Past, Present, and Future

https://www.linkedin.com/pulse/plagiarism-past-present-future-josh-howell

The proper solution to plagiarism in our nation’s schools is education and vigilance. Students should understand the role of academic integrity in their own work, and be held accountable when they are not in accordance with academic policies and honor codes. Self-plagiarism, incorrect citations, missing citations, and even word-for-word copying must be taught to students on a regular basis. Updates to both MLA and APA are ongoing as well; therefore, even graduates must stay current with how their citation methods change over time.

My response to this LinkedIn entry:
Here is most of the information I have collected on plagiarism, academic integrity, and academic dishonesty. I also added Josh’s LinkedIn entry:
https://blog.stcloudstate.edu/ims?s=plagiarism
My firm conviction through the years is that for-profits such as Turnitin are a smokescreen: opportunists trying to bank on the lack of an organized approach toward educating students and ourselves about the increasingly nebulous areas of plagiarism (due to the increasing digitization of our work). It is in their interest to use scare tactics and try to convince us that computerization is the answer. Anyone who has proofread papers for more than two semesters can easily detect the change of style, the lack of punctuation, and other small but significant details in the writing process. Since the instructor has to read the paper for content anyhow, it is preposterous to seek a multi-thousand-dollar software license to replace the instructor.
The literature shows that the predominant percentage of students committing plagiarism do so due to a lack of proper explanation and education. In that sense, I support Josh’s choice of words: education and vigilance. My only addition is that the vigilance must be human-based, not machine-based. Higher admin shouldn’t squander finances on purchasing more licenses and cutting faculty positions, but invest in well-rounded and capable faculty.

+++++++++++++++++++++
more on plagiarism in this IMS blog:
https://blog.stcloudstate.edu/ims?s=plagiarism

NMC on digital literacy

NMC Releases Horizon Project Strategic Brief on Digital Literacy

Anaheim, California (October 25, 2016) — The New Media Consortium (NMC) has released Digital Literacy: An NMC Horizon Project Strategic Brief in conjunction with the 2016 EDUCAUSE Annual Conference.

This project was launched because there is a lack of consensus across the field about how to define digital literacy and implement effective programs. A survey was disseminated throughout the NMC community of higher education leaders and practitioners to understand how digital literacy initiatives are impacting their campuses. The NMC’s research examines the current landscape to illuminate multiple models of digital literacy — universal literacy, creative literacy, and literacy across disciplines — around which dedicated programs can proliferate a spectrum of skills and competencies.

p. 8-10 examples across US universities on digital literacy organization

p. 12 Where does support for digital literacy come from at your institution? Individual people

nmc-definition-of-digital-literacy

p. 13. campus libraries must be deeply embedded in course curriculum. While libraries have always supported academic institutions, librarians can play a more critical role in the development of digital literacy skills. Historically, these types of programs have been implemented in “one-off” segments, which are experienced apart from a student’s normal studies and often delivered in a one-size-fits-all method. However, an increasing number of academic libraries are supporting a more integrated approach that delivers continuous skill development and assessment over time to both students and faculty. This requires deeper involvement with departments and agreeing on common definitions of what capacities should be achieved, and the most effective pedagogical method. Librarians are tasked with broadening their role in the co-design of curriculum and improving their instruction techniques to work alongside faculty toward the common goal of training students to be savvy digital researchers. University of Arizona Libraries, for example, found that a key step in this transition required collaborating on a common instructional philosophy.

nmc-improving-of-digital-literacy
+++++++++++++++

more on digital literacy in this IMS blog:

https://blog.stcloudstate.edu/ims?s=digital+literacy


teaching with technology

Boulder Faculty Teaching with Technology Report
Sarah Wise, Education Researcher; Megan Meyer, Research Assistant; March 8, 2016

http://www.colorado.edu/assett/sites/default/files/attached-files/final-fac-survey-full-report.pdf

Faculty perceive undergraduates to be less proficient with digital literacy skills. One-third think
their students do not find or organize digital information very well. The majority (52%) think
they lack skill in validating digital information.
My note: for the SCSU librarians, "digital literacy" is a fancy word for information literacy. Digital literacy, as used in this report, is a much greater area, encompassing a much broader set of skills.
Faculty do not prefer to teach online (57%) or in a hybrid format (where some sessions occur online, 32%). One-third of faculty reported no experience with these least popular course types.
My note: pay attention to the questions asked; these are questions I have been asking Mike Penrod about for years, hoping to work with faculty on them. Questions which are snubbed by CETL, while the dominance of D2L and MnSCU-mandated tools is established.

Table 5. Do you use these in-class technologies for teaching undergraduates? Which are the Top 3 in-class technologies you would like to learn or use more? (n = 442)

Top 3 | use in most of my classes | have used in some classes | tried, but do not use | N/A: no experience

in-class activities, problems (via worksheets, tablets, laptops, simulations, beSocratic, etc.) 52% 33% 30% 6% 30%
in-class question, discussion tools (e.g. Twitter, TodaysMeet, aka “backchannel communication”) 47% 8% 13% 11% 68%
using online resources to find high quality curricular materials 37% 48% 31% 3% 18%
iClickers 24% 23% 16% 9% 52%
other presentation tool (Prezi, Google presentation, Slide Carnival, etc.) 23% 14% 21% 15% 51%
whiteboard / blackboard 20% 58% 23% 6% 14%
Powerpoint or Keynote 20% 74% 16% 4% 5%
document camera / overhead projector 15% 28% 20% 14% 38%

 

Table 6. Do you have undergraduates use these assignment technology tools? Which are your Top 3 assignment technology tools to learn about or use more? (n = 432)

Top 3 | use in most of my classes | have used in some classes | tried, but do not use | N/A: no experience using

collaborative reading and discussion tools (e.g. VoiceThread, NB, NotaBene, Highlighter, beSocratic) 43% 3% 10% 10% 77%
collaborative project, writing, editing tools (wikis, PBWorks, Weebly, Google Drive, Dropbox, Zotero) 38% 16% 29% 12% 43%
online practice problems / quizzes with instant feedback 36% 22% 22% 8% 47%
online discussions (D2L, Today’s Meet, etc) 31% 33% 21% 15% 30%
individual written assignment, presentation and project tools (blogs, assignment submission, Powerpoint, Prezi, Adobe Creative Suite, etc.) 31% 43% 28% 7% 22%

research tools (Chinook, pubMed, Google Scholar, Mendeley, Zotero, Evernote) 30% 33% 32% 8% 27%
online practice (problems, quizzes, simulations, games, CAPA, Pearson Mastering, etc.) 27% 20% 21% 7% 52%
data analysis tools (SPSS, R, Latex, Excel, NVivo, MATLAB, etc.) 24% 9% 23% 6% 62%
readings (online textbooks, articles, e-books) 21% 68% 23% 1% 8%

Table 7. Do you use any of these online tools in your teaching? Which are the Top 3 online tools you would like to learn about or use more? (n = 437)

Top 3 | use in most of my classes | have used in some classes | tried, but do not use | N/A: no experience using

videos/animations produced for my course (online lectures, Lecture Capture, Camtasia, Vimeo) 38% 14% 21% 11% 54%
chat-based office hours or meetings (D2L chat, Google Hangouts, texting, tutoring portals, etc.) 36% 4% 9% 10% 76%
simulations, PhET, educational games 27% 7% 17% 6% 70%
videoconferencing-based office hours or meetings (Zoom, Skype, Continuing Education’s Composition hub, etc.) 26% 4% 13% 11% 72%
alternative to D2L (moodle, Google Site, wordpress course website) 23% 11% 10% 13% 66%
D2L course platform 23% 81% 7% 4% 8%
online tutorials and trainings (OIT tutorials, Lynda.com videos) 21% 4% 16% 13% 68%
D2L as a portal to other learning tools (homework websites, videos, simulations, Nota Bene/NB, Voice Thread, etc.) 21% 28% 18% 11% 42%

videos/animations produced elsewhere 19% 40% 36% 2% 22%

In both large and small classes, the most common responses faculty make to digital distraction are to discuss why it is a problem and to limit or ban phones in class.
My note: this completely defies BYOD and turns it into empty talk / lip service.

Quite a number of other faculty (n = 18) reported putting the onus on themselves to plan engaging and busy class sessions to preclude distraction, for example:

“If my students are more interested in their laptops than my course material, I need to make my curriculum more interesting.”

“I have not found this to be a problem. When the teaching and learning are both engaged/engaging, device problems tend to disappear.”

The most common complaint related to students and technology was their lack of common technological skills, including D2L and Google, and the need to take class time to teach these skills (n = 14). Two commented that digital skills in today’s students were lower than in their students 10 years ago.

Table 9. Which of the following are the most effective types of learning opportunities about teaching, for you? Choose your Top 2-3. (n = 473)

Count           Percentage

meeting 1:1 with an expert 296 63%
hour-long workshop 240 51%
contact an expert on-call (phone, email, etc) 155 33%
faculty learning community (meeting across a semester, e.g. ASSETT’s Hybrid/Online Course Design Seminar) 116 25%
expert hands-on support for course redesign (e.g. OIT’s Academic Design Team) 114 24%
opportunity to apply for grant funding with expert support, for a project I design (e.g. ASSETT’s Development Awards) 97 21%

half-day or day-long workshop 98 21%
other 40 8%
multi-day retreats / institutes 30 6%

Faculty indicated that the best times for them to attend teaching professional development across the year are before the semester, early in the semester, and during the summer. They were split among all options for meeting across one week, but preferred afternoon sessions to mornings. Only 8% of respondents (n = 40) indicated they would not likely attend any professional development session (Table 10).

+++++++++++++++++++++++++++

Teaching Through Technology
http://www.maine.edu/pdf/T4FinalYear1ReportCRE.pdf

Table T1: Faculty beliefs about using digital technologies in teaching

Count Column N%
Technology is a significant barrier to teaching and learning. 1 0.2%
Technology can have a place in teaching, but often detracts from teaching and learning. 76 18.3%
Technology has a place in teaching, and usually enhances the teaching learning process. 233 56.0%
Technology greatly enhances the teaching learning process. 106 25.5%

Table T2: Faculty beliefs about the impact of technology on courses

Count Column N%
Makes a more effective course 302 72.6%
Makes no difference in the effectiveness of a course 42 10.1%
Makes a less effective course 7 1.7%
Has an unknown impact 65 15.6%

Table T3: Faculty use of common technologies (most frequently selected categories shaded)

Once a month or less A few hours a month A few hours a week An hour a day Several hours a day
Count % Count % Count % Count % Count %
Computer 19 4.8% 15 3.8% 46 11.5% 37 9.3% 282 70.7%
Smart Phone 220 60.6% 42 11.6% 32 8.8% 45 12.4% 24 6.6%
Office Software 31 7.8% 19 4.8% 41 10.3% 82 20.6% 226 56.6%
Email 1 0.2% 19 4.6% 53 12.8% 98 23.7% 243 58.7%
Social Networking 243 68.8% 40 11.3% 40 11.3% 23 6.5% 7 2.0%
Video/Sound Media 105 27.6% 96 25.2% 95 24.9% 53 13.9% 32 8.4%

Table T9: One sample t-test for influence of technology on approaches to grading and assessment

Test Value = 50
t df Sig. (2-tailed) Mean Difference 95% Confidence Interval of the Difference
Lower Upper
In class tests and quizzes -4.369 78 .000 -9.74684 -14.1886 -5.3051
Online tests and quizzes 5.624 69 .000 14.77143 9.5313 20.0115
Ungraded  assessments 1.176 66 .244 2.17910 -1.5208 5.8790
Formative assessment 5.534 70 .000 9.56338 6.1169 13.0099
Short essays, papers, lab reports, etc. 2.876 70 .005 5.45070 1.6702 9.2312
Extended essays and major projects or performances 1.931 69 .058 3.67143 -.1219 7.4648
Collaborative learning projects .000 73 1.000 .00000 -4.9819 4.9819
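My note: for anyone reading Tables T9 and T14 who is not used to one-sample t-tests, here is a minimal Python sketch with made-up data (not the report's), assuming the report's items are scored on a 0-100 scale with 50 as the neutral midpoint, so a positive mean difference means faculty report technology increased that activity.

```python
# Hypothetical sketch of the one-sample t-tests in Tables T9/T14:
# responses are tested against the neutral test value of 50.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# made-up responses for one item, e.g. "Online tests and quizzes"
responses = rng.normal(loc=64, scale=22, size=70)

test_value = 50
t_stat, p_value = stats.ttest_1samp(responses, popmean=test_value)

n = responses.size
mean_diff = responses.mean() - test_value
se = responses.std(ddof=1) / np.sqrt(n)
# 95% confidence interval of the mean difference (df passed positionally)
ci_low, ci_high = stats.t.interval(0.95, n - 1, loc=mean_diff, scale=se)

print(f"t({n - 1}) = {t_stat:.3f}, p = {p_value:.3f}")
print(f"mean difference = {mean_diff:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

The report's tables list exactly these quantities: t, df, the two-tailed significance, the mean difference from 50, and its 95% confidence interval.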

Table T10: Rate the degree to which your role as a faculty member and teacher has changed as a result of increased use of technology

Strongly Disagree | Disagree | Somewhat Disagree | Somewhat Agree | Agree | Strongly Agree (Count, % for each)

shifting from the role of content expert to one of learning facilitator 12 9.2% 22 16.9% 14 10.8% 37 28.5% 29 22.3% 16 12.3%
your primary role is to provide content for students 14 10.9% 13 10.1% 28 21.7% 29 22.5% 25 19.4% 20 15.5%
your identification with your University is increased 23 18.3% 26 20.6% 42 33.3% 20 15.9% 12 9.5% 3 2.4%
you have less ownership of your course content 26 20.2% 39 30.2% 24 18.6% 21 16.3% 14 10.9% 5 3.9%
your role as a teacher is strengthened 13 10.1% 12 9.3% 26 20.2% 37 28.7% 29 22.5% 12 9.3%
your overall control over your course(s) is diminished 23 17.7% 44 33.8% 30 23.1% 20 15.4% 7 5.4% 6 4.6%

Table T14: One sample t-test for influence of technology on faculty time spent on specific teaching activities

Test Value = 50
t df Sig. (2-tailed) Mean Difference 95% Confidence Interval of the Difference
Lower Upper
Lecturing -7.381 88 .000 -12.04494 -15.2879 -8.8020
Preparing course materials 9.246 96 .000 16.85567 13.2370 20.4744
Identifying course materials 8.111 85 .000 13.80233 10.4191 17.1856
Grading / assessing 5.221 87 .000 10.48864 6.4959 14.4813
Course design 12.962 94 .000 21.55789 18.2558 24.8600
Increasing access to materials for all types of learners 8.632 86 .000 16.12644 12.4126 19.8403
Reading student discussion posts 10.102 79 .000 21.98750 17.6553 26.3197
Email to/with students 15.809 93 .000 26.62766 23.2830 29.9724

++++++++++++++++++++++++++

Study of Faculty and Information Technology, 2014

http://net.educause.edu/ir/library/pdf/ers1407/ers1407.pdf

Although the LMS is pervasive in higher education, 15% of faculty said that they do not use the LMS at all. Survey demographics suggest these nonusers are part of the more mature faculty ranks, with a tenure status, more than 10 years of teaching experience, and a full-professor standing. The vast majority of faculty use the LMS to conduct or support their teaching activities, but only three in five LMS users (60%) said it is critical to their teaching. The ways in which faculty typically use the LMS are presented in figure 8. Pushing out information such as a syllabus or other handout is the most common use of the LMS (58%), which is a basic functionality of the first-generation systems that emerged in the late 1990s, and it remains one of the core features of any LMS. Many institutions preload the LMS with basic course content (58%), up about 12% since 2011, and this base gives instructors a prepopulated platform from which to build their courses. Preloading basic content does not appear to preclude faculty from making the LMS part of their daily digital habit; a small majority of faculty (56%) reported using the LMS daily, and another 37% use it weekly.

+++++++++++++++++++++++++++++

Digital Literacy, Engagement, and Digital Identity Development

https://www.insidehighered.com/blogs/student-affairs-and-technology/digital-literacy-engagement-and-digital-identity-development

++++++++++++++++

more on digital literacy in this IMS blog

https://blog.stcloudstate.edu/ims?s=digital+literacy


TurnitIn

We know that many of you have been interested in exploring Turnitin in the past, so we are excited to bring you an exclusive standardized price and more information on the roll out of Feedback Studio, replacing the Turnitin you have previously seen. We would like to share some exciting accessibility updates, how Feedback Studio can help faculty deliver formative feedback to students and help students become writers. Starting today thru December 31st non-integrated Feedback Studio will be $2.50 and integrated Feedback Studio will be $3 for new customers! Confused by the name? Don’t be! Turnitin is new and improved! Check out this video to learn about Feedback Studio!

Meet your exclusive Turnitin Team!

Ariel Ream – Account Executive, Indianapolis aream@turnitin.com – 317.650.2795
Juliessa Rivera – Relationship Manager, Oakland jrivera@iparadigms.com – 510.764.7698

Juan Valladares – Account Representative, Oakland
jvalladares@turnitin.com – 510.764.7552
To learn more, please join us for a WebEx on September 21st. We will be offering free 30 day pilots to anyone who attends!
Turnitin Webinar
Wednesday, September 21, 2016
11:00 am | Central Daylight Time (Chicago) | 1 hr
Meeting number (access code): 632 474 162
https://mnscu.webex.com/mnscu/j.php?MTID=mebaec2ae9d1d25e6774d16717719008d

+++++++++++++++++++

my notes from the webinar

I am prejudiced against TI and I am not hiding it; that does not mean that I am wrong.
For me, TurnitIn (TI) is an anti-pedagogical “surfer,” using the hype of “technology” to ride the wave of overworked faculty who hope to streamline an increasing workload with technology instead of working on pedagogical resolutions of issues that are not that new.

Lo and behold, Juan, the TI presenter, is trying to dazzle me with stuff which has not dazzled me in a long time.
WCAG 2.0 AA standards of the W3C and Section 508 of the Rehabilitation Act.
The sales pitch: 79% of students believe in feedback, but only 50%+ receive it. His source is TurnitIn surveys from 2012 to 2016 (in a very, very small font size; ashamed of it?).
It seems to me very much like “massaged” data.
Testimonials: one professor and one student. Ha, the apex of qualitative research…

Next sales pitch: TurnitIn Feedback Studio, no longer the old Classic. It assesses originality. Drag-and-drop macro-style notes. Pushing rubrics, but we are still fighting for rubrics in D2L. “If we have a large number of adjuncts.” Ha, another gem: “I know that you guys are IT folks.” So the IT folks are the Trojan horse to get the faculty on board. Put comments on…
This presentation is structured dangerously askew: IT people but no faculty. If faculty were present, they would object that they ARE capable of doing the same things that are proposed to be automated.
Moreover, why do I have to pay for another expensive piece of software if we have already paid Microsoft? MS Word can do everything that has been presented so far. Between MS Word and D2L, it becomes redundant.
And why the heck am I interested in middle school and high school?

TI was sued for illegal collection of papers; papers are stored in their database without the consent of the students who wrote them. TI goes to “great lengths to protect the identity of the students,” but still collects their work [illegally?]

November 10 – 30 day free trial

Otherwise, $3 per student. Which prompts the question again: between Google, MS Word, and D2L (which we already pay for heftily), why pay another exorbitant price?

D2L integration: a version which does not work; LTI.
“A small price to pay for such a beauty” – it does not matter how quick and easy the integration is; it is a redundancy which can already be resolved with existing tools, some of which we are paying a hefty price for.

https://d2l.custhelp.com/app/answers/detail/a_id/1668/
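For context on what the “LTI” integration in the link above actually involves technically, here is a hypothetical sketch (placeholder URL, key, and parameters; generic LTI 1.1, not D2L's or TurnitIn's actual configuration): the LMS signs a form POST with a shared OAuth 1.0 consumer key/secret, and the tool verifies that signature before trusting the launch.

```python
# Hypothetical sketch of a generic LTI 1.1 launch. All values below are
# placeholders, not real credentials or endpoints. Requires: pip install oauthlib
from urllib.parse import urlencode
from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

launch_url = "https://tool.example.com/lti/launch"  # placeholder tool endpoint
params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course-123-assignment-7",
    "user_id": "student-42",
    "roles": "Learner",
}

# The LMS holds the consumer key/secret shared with the tool provider.
client = Client("consumer_key", client_secret="shared_secret",
                signature_type=SIGNATURE_TYPE_BODY)
uri, headers, body = client.sign(
    launch_url,
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# `body` now carries the original parameters plus oauth_* fields, including
# oauth_signature; the browser posts this form to the tool, which recomputes
# the signature with the same shared secret before trusting the launch.
print(body)
```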

Play recording (1 hr 4 min 19 sec)
https://mnscu.webex.com/mnscu/ldr.php?RCID=a9b182b4ca8c4d74060f0fd29d6a5b5c
