Combine the superfast computational capacities of Big Compute with the oceans of specific personal information comprising Big Data — and the fertile ground for computational propaganda emerges. That’s how the small AI programs called bots can be unleashed into cyberspace to target and deliver misinformation exactly to the people who will be most vulnerable to it. These messages can be refined over and over again based on how well they perform (again in terms of clicks, likes and so on). Worst of all, this can be done semiautonomously, allowing the targeted propaganda (like fake news stories or faked images) to spread like a virus through the communities most vulnerable to it.
According to Bolsover and Howard, viewing computational propaganda only from a technical perspective would be a grave mistake. As they explain, seeing it just in terms of variables and algorithms “plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it.”
Computational propaganda is a new thing. People just invented it. And they did so by realizing possibilities emerging from the intersection of new technologies (Big Compute, Big Data) and new behaviors those technologies allowed (social media). But the emphasis on behavior can’t be lost.
People are not machines. We do things for a whole lot of reasons, including emotions of loss, anger, fear and longing. To combat computational propaganda’s potentially dangerous effects on democracy in a digital age, we will need to focus on both its how and its why.
the Center for the Advanced Study of Technology Leadership in Education – CASTLE
If a school’s reputation and pride are built on decades or centuries of “this is how we’ve always done things here,” resistance from staff, parents, and alumni to significant changes may be fierce. In such institutions, heads of school may have to steer carefully between deeply ingrained habits and the need to modernize the information tools with which students and faculty work.
Too often, when navigating faculty or parental resistance, school leaders and technology staff make reassurances that things will not have to change much in the classroom or that slow baby steps are OK. Unfortunately, this results in a different problem, which is that schools have now invested significant money, time, and energy into digital technologies but are using them sparingly and seeing little impact. In such schools, replicative uses of technology are quite common, but transformative uses that leverage the unique affordances of technology are quite rare.
Many schools fail to proceed further because they don’t have a collective vision of what more transformative uses of technology might look like, nor do they have a shared understanding of and commitment to what it will take to get to such a place. As a result, faculty instruction and the learning experiences of students change little or not at all.
These schools have taken the time to involve all stakeholders—including students—in substantive conversations about what digital tools will allow them to do differently compared with previous analog practices. Their visions promote the potential of computing devices to facilitate all of those elements we now think of as essential 21st-century capacities: confidence, curiosity, enthusiasm, passion, critical thinking, problem-solving, and self-direction. Technology doesn’t simply support traditional teaching—it transforms it for deeper thinking and gives students more agency over their own learning.
Another prevalent issue preventing technology change in schools is fear—fear of change, of the unknown, of letting go of what we know best, of being learners again. But it’s also a fear of letting kids have wide access to the Internet with the possibility of cyberbullying, access to inappropriate material, and exposure to online predators or even excessive advertising. Fears, of course, need to be surfaced and addressed.
The fear drives some schools to ban cellphones, disallow students and faculty from using Facebook, and lock down Internet filters so tightly that useful websites are inaccessible. They prohibit the use of Twitter and YouTube, and they block blogs. Some educators see these types of responses as principled stands against the shortcomings and hassles of digital technologies. Others see them as rejections of the dehumanization of the education process by soulless machines. Often, however, it’s just schools clinging to the past and elevating what is comfortable or familiar over the potential of technology to help them better deliver on their school missions.
Heads of school don’t have to be skilled users themselves to be effective technology leaders, but they do have to exercise appropriate oversight and convey the message—repeatedly—that frequent, meaningful technology use in school is both important and expected. Nostalgia aside, there is no foreseeable future in which the primacy of printed text is not superseded by electronic text and multimedia. When nearly all information is digital or online, multi-modal and multimedia, accessed by mobile devices that fit in our pockets, the question should not be whether schools prepare students for a digital learning landscape, but rather how.
Many educators aren’t necessarily afraid of technology, but they are so accustomed to heavily teacher-directed classrooms that they are leery about giving up control—and can’t see the value in doing so.
Although most of us recognize that mobile computers connected to the Internet may be the most powerful learning devices yet invented—and that youth are learning in powerful ways at home with these technologies—allowing students to have greater autonomy and ownership of the learning process can still seem daunting and questionable.
The “beyond” is particularly important. When we give students some voice in and choice about what and how they learn, we honor basic human needs for autonomy, we enhance students’ interest and engagement, and we truly actualize our missions of preparing lifelong learners.
The goal of instructional transformation is to empower students, not to disempower teachers. While instructor unfamiliarity with digital technologies, inquiry- or problem-based teaching techniques, or deeper learning strategies may result in some initial discomfort, these challenges can be overcome with robust support.
A few workshops here and there rarely result in large-scale changes in implementation.
Teacher-driven “unconferences” or “edcamps,” at which educators propose and facilitate discussion topics, can be powerful mechanisms for fostering professional dialogue and learning. Similarly, some schools offer voluntary “Tech Tuesdays” or “appy hours” to foster digital learning among interested faculty.
In addition to existing IT support, technology integration staff, or librarians/media specialists, some schools have student technology teams that are on call for assistance when needed.
A few middle schools and high schools go even further and assign teachers their own individual student technology mentors. These student-teacher pairings last all school year and comprise the first line of support for educators’ technology questions.
As teachers, heads of school, counselors, coaches, and librarians, we all now have the ability to participate in ongoing, virtual, global communities of practice.
Whether formal or informal, the focus of technology-related professional learning should be on student learning, not on the tools or devices. Independent school educators should always ask, “Technology for the purpose of what?” when considering the inclusion of digital technologies into learning activities. Technology never should be implemented just for technology’s sake.
Want to learn basic computer programming skills specifically tailored for academia?
Please consider a FREE two-day workshop on either Python or R.
Python is a simple programming language that is easy for both beginners and experienced programmers to learn, and that emphasizes readability. At the same time, it comes with lots of modules and packages to add to your programs when you need more sophistication. Whether you need to perform data analysis, create graphs, or develop a network application, or just want a nice calculator that remembers all your formulas and constants, Python can do it with elegance. https://www.python.org/about/
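To give a flavor of that “calculator that remembers your formulas and constants” idea, here is a minimal sketch (not part of the workshop materials; the constants and function names are illustrative only). Saved as an ordinary Python file, these definitions can be reused in any future session.

```python
import math

# Constants you want Python to "remember" for you (illustrative values).
G = 9.81  # gravitational acceleration in m/s^2

def kinetic_energy(mass_kg, velocity_ms):
    """Kinetic energy in joules: E = 1/2 * m * v^2."""
    return 0.5 * mass_kg * velocity_ms ** 2

def circle_area(radius):
    """Area of a circle with the given radius."""
    return math.pi * radius ** 2

print(kinetic_energy(2.0, 3.0))    # 9.0 joules
print(round(circle_area(1.0), 4))  # 3.1416
```

Running the file prints the results; importing it from another script makes `G`, `kinetic_energy`, and `circle_area` available there as well.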
R (often used with the RStudio IDE) is a language and environment for statistical computing and graphics. R provides a wide variety of statistical and graphical techniques, and it can produce well-designed, publication-quality plots, including mathematical symbols and formulae. https://www.r-project.org/about.html
Both software packages are free and run on Microsoft Windows, macOS, and GNU/Linux.
Besides seamless installation on your personal computer, you can also access both programs in SCSU computer labs or via SCSU AppsAnywhere.
When we hear a ding or little ditty alerting us to a new text, email or Facebook post, cells in our brains likely release dopamine, one of the chemical transmitters in the brain’s reward circuitry. That dopamine makes us feel pleasure, says David Greenfield, a psychologist and assistant clinical professor of psychiatry at the University of Connecticut.
“It’s a spectrum disorder,” says Dr. Anna Lembke, a psychiatrist at Stanford University who studies addiction. “There are mild, moderate and extreme forms.” And for many people, there’s no problem at all.
Signs you might be experiencing problematic use, Lembke says, include these:
Interacting with the device keeps you up late or otherwise interferes with your sleep.
It reduces the time you have to be with friends or family.
It interferes with your ability to finish work or homework.
It causes you to be rude, even subconsciously. “For instance,” Lembke asks, “are you in the middle of having a conversation with someone and just dropping down and scrolling through your phone?” That’s a bad sign.
It’s squelching your creativity. “I think that’s really what people don’t realize with their smartphone usage,” Lembke says. “It can really deprive you of a kind of seamless flow of creative thought that generates from your own brain.”
Consider a digital detox one day a week
Tiffany Shlain, a San Francisco Bay Area filmmaker, and her family power down all their devices every Friday evening for a 24-hour period.
“It’s something we look forward to each week,” Shlain says. She and her husband, Ken Goldberg, a professor in the field of robotics at the University of California, Berkeley, are very tech savvy.
A recent study of high school students, published in the journal Emotion, found that too much time spent on digital devices is linked to lower self-esteem and a decrease in well-being.
Common Sense Media recently partnered with the Center for Humane Technology, which supports the development of ethical technological tools, to lay out a fierce call for regulation and awareness about the health issues surrounding tech addiction.
To support educators making such decisions, Common Sense Media is taking their “Truth about Tech” campaign to schools through an upgraded version of their current Digital Citizenship curriculum. The new updates will include more information on subjects such as:
Creating a healthy media balance and digital wellness;
Concerns about the rise of hate speech in schools that go beyond talking about cyberbullying; and
Fake news, media literacy, and curating your own content.
What Does ‘Tech Addiction’ Mean?
In a recent NPR report, writer Anya Kamenetz notes that clinicians are debating whether technology overuse is best categorized as a bad habit, a symptom of other mental struggles (such as depression or anxiety), or an addiction.
Dr. Jenny Radesky, a developmental-behavioral pediatrician at the American Academy of Pediatrics, notes that though she has seen solid evidence linking heavy media usage to problems with sleep and obesity, she hesitates to call the usage “addiction.”
Dr. Robert Lustig, an endocrinologist who studies hormones at the University of Southern California, disagreed, arguing that parents have to see the overuse of technology as an addiction.
The brain is actually three brains: the ancient reptilian brain, the limbic brain, and the cortical brain. This article will focus on the limbic brain, because it may be most important to successfully using interactive video or web-based video. The limbic brain monitors the external world and the internal body, taking in information through the senses and tracking body temperature and blood pressure, among other internal states. It is the limbic brain that generates and interprets facial expressions and handles emotions, while the cortical brain handles symbolic activities such as language as well as action and strategizing. The two interact when an emotion is sent from the limbic to the cortical brain and generates a conscious thought; in response to a feeling of fear (limbic), you ask, “what should I do?” (cortical).
Direct eye contact and the ability to decipher body language are also important for sending and picking up cues about social context.
The loss of social cues matters not only because it may affect the quality of the content of the presentation (by not allowing timely feedback or questions) but also because students may feel less engaged and become frustrated with the interaction, and subsequently lower their assessment of the class and the instructor (Reeves & Nass, 1996). Fortunately, faculty can provide such social cues verbally, once they are aware of the importance of helping students use these new media.
Attachment theory also supports the importance of physical and emotional connections.
As many a struggling teacher knows, students are often impervious to learning new concepts. They may replay the new information for a test, but after time passes, they revert to the earlier (and likely wrong) information. This is referred to as the “power of mental models.” As explained in Marchese (2000), when we view a tree, it is not as if we see the tree in our head, as in a photograph; rather, what we perceive is shaped by the mental models we already hold.
The coping strategies of the two hemispheres are fundamentally different. The left hemisphere’s job is to create a belief system or model and to fold new experiences into that belief system. If confronted with some new information that doesn’t fit the model, it relies on Freudian defense mechanisms to deny, repress or confabulate – anything to preserve the status quo. The right hemisphere’s strategy is to play “Devil’s Advocate,” to question the status quo and look for global inconsistencies. When the anomalous information reaches a certain threshold, the right hemisphere decides that it is time to force a complete revision of the entire model and start from scratch (Ramachandran & Blakeslee, 1998, p. 136).
While much hemispheric-based research has been repudiated as an oversimplification (Gackenbach, 1999), the above description of how new information eventually overwhelms an old world view may be the result of multiple brain functions – some of which work to preserve our models and others to alter – that help us both maintain and change as needed.
Self-talk is “the root of empathy, understanding, cooperation, and rules that allow us to be successful social beings. Any sense of moral behavior requires thought before action” (Ratey, 2001, p. 255).
Healy (1999) argues that based on what we know about brain development in children, new computer media may be responsible for developing brains that are largely different from the brains of adults. This is because “many brain connections have become specialized for . . . media” (p. 133); in this view, a brain formed by language and reading is different from a brain formed by hypermedia. Different media lead to different synaptic connections being laid down and reinforced, creating different brains in youngsters raised on fast-paced, visually-stimulating computer applications and video games. “Newer technologies emphasize rapid processing of visual symbols . . . and deemphasize traditional verbal learning . . . and the linear, analytic thought process . . . [making it] more difficult to deal with abstract verbal reasoning” (Healy, 1999, p. 142).
“They do plan,” said a senior Obama-administration official. “They’re not stupid at all. But the idea that they have this all perfectly planned and that Putin is an amazing chess player—that’s not quite it. He knows where he wants to end up, he plans the first few moves, and then he figures out the rest later. People ask if he plays chess or checkers. It’s neither: He plays blackjack. He has a higher acceptance of risk. Think about it. The election interference—that was pretty risky, what he did. If Hillary Clinton had won, there would’ve been hell to pay.”
Even the manner of the Russian attack was risky. The fact that the Russians didn’t really bother hiding their fingerprints is a testament to the change in Russia’s intent toward the U.S., Robert Hannigan, a former head of the Government Communications Headquarters, the British analogue to the National Security Agency, said at the Aspen Forum. “The brazen recklessness of it … the fact that they don’t seem to care that it’s attributed to them very publicly, is the biggest change.”
EdSurge’s CEO, Betsy Corcoran, argued that 2017 was a year when educators and schools were trying to take control of their technology choices. “We have said from the time we started writing the newsletters that not every piece of technology will work for every student, or for every school or every classroom,” she said. “It’s all about asking the right questions to figure out if there is a piece of technology that will support learning goals. What we’re starting to really see across schools, districts and teachers is people really owning those questions. They’re saying, ‘What do I want to do with my classroom? With my kids? And what are the technologies that will support me?’”
Another discussion participant asked whether colleges and universities are starting to accept cryptocurrencies like Bitcoin, or experimenting with the blockchain technology that drives those systems. Johnson said most of the hype around universities’ blockchain experiments has centered on storing and managing credentials.