Eureka: a machine learning tool and brainstorming engine. Give it an initial idea and it returns similar ideas. Like Google, you refine the idea so the machine can understand it better, building a collection of ideas to translate into course design or other projects.
Influencers and micro-influencers, both before and during execution.
A machine can construct a book with the help of a person: a "bionic book," machine and person working hand in hand. The instructor provides keywords and phrases from lecture notes and presentation materials; from there the machine offers recommendations and suggestions, informed by the instructor's own experience. The instructor then identifies content to include and exclude, and constructs the book.
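The keyword step in that workflow can be pictured as a simple frequency count over lecture notes. Everything below (the stopword list, the function name) is an illustrative sketch, not part of any actual "bionic book" product:

```python
import re
from collections import Counter

# A toy stopword list for illustration only; a real system would use a
# much larger list or a proper NLP library.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for"}

def extract_keywords(text: str, top_n: int = 5) -> list[str]:
    """Return the top_n most frequent non-stopword terms in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]
```

A real recommendation engine would go well beyond raw frequency (TF-IDF, embeddings), but the input/output shape is the same: lecture text in, candidate keywords out for the instructor to accept or reject.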
Design may be the least interesting part of the book for the faculty.
A multiple-choice quiz may be the least interesting part, and faculty might want to do much deeper assessment.
Use these machine learning techniques to build assessment, and to do so more effectively; InQuizitive is one machine learning-based example.
Student engagement and similar prompts.
Presence in the classroom: a pre-service teachers class. How to immerse them and let them practice classroom management skills?
First class: a marriage between VR and the use of AI. In a headset environment, an algorithm reacts to how teachers interact with the virtual kids: a series of variables, and the opportunity to respond to the behavior presented, building classroom management skills. This allows simulations and environments otherwise impossible to create; there are apps for these types of interactions.
facilitation, reflection and research
AI for a more human experience: allow the faculty more time to be human, more free time to contemplate.
Jason: Won’t the use of AI still reduce the amount of faculty needed?
Christina Dumeng: @Jason–I think it will most likely increase the amount of students per instructor.
Andrew Cole (UW-Whitewater): I wonder if instead of reducing faculty, these types of platforms (e.g., analytic capabilities) might require instructors to also become experts in the various technology platforms.
Dirk Morrison: Also wonder what the implications of AI for informal, self-directed learning?
Kate Borowske: The context that you’re presenting this in, as “your own jazz band,” is brilliant. These tools presented as a “partner” in the “band” seems as though it might be less threatening to faculty. Sort of gamifies parts of course design…?
Dirk Morrison: Move from teacher-centric to student-centric? Recommender systems, AI-based tutoring?
Andrew Cole (UW-Whitewater): The course with the bot TA must have been 100-level right? It would be interesting to see if those results replicate in 300, 400 level courses
Jumping on board a new industry trend with insufficient planning can result in your initiative failing to achieve its objective and, in the worst case, even hindering the learning process. So which hot topics should you treat with care?
1. Virtual Reality, or VR
Ultimately, the key question to consider when adopting anything new is whether it will help you achieve the desired outcome. VR shouldn’t be incorporated into learning just because it’s a common buzzword. Before you decide to give it a go, consider how it’s going to help your learner, and whether it’s truly the most effective or efficient way to meet the learning goal.
If you're considering introducing an interactive element to your learning, don't let this deter you; just ensure that it's relevant to the content and will aid the learning process.
3. Artificial Intelligence, or AI
If you are confident that a trend is going to yield better results for your learners, the ROI you see may well justify the upfront resources it requires.
Again, it all comes down to whether a trend is going to deliver in terms of achieving an objective.
The theory behind microlearning makes a lot of sense: organizing content into sections so that learning fits easily with modern-day attention spans and learners' busy lifestyles is not a bad thing. The worry is that the buzzword "microlearning" has grown legs of its own, and the industry is losing sight of its founding principles.
TWO YEARS AGO, Alison Darcy built a robot to help out the depressed. As a clinical research psychologist at Stanford University, she knew that one powerful way to help people suffering from depression or anxiety is cognitive behavioral therapy, or C.B.T. It’s a form of treatment in which a therapist teaches patients simple techniques that help them break negative patterns of thinking.
In a study with 70 young adults, Darcy found that after two weeks of interacting with the bot, the test subjects had lower incidences of depression and anxiety. They were impressed, and even touched, by the software’s attentiveness.
Many tell Darcy that it’s easier to talk to a bot than a human; they don’t feel judged.
Darcy argues this is a glimpse of our rapidly arriving future, where talking software is increasingly able to help us manage our emotions. There will be A.I.s that detect our feelings, possibly better than we can. "I think you'll see robots for weight loss, and robots for being more effective communicators," she says. It may feel odd at first.
RECENT HISTORY HAS seen a rapid change in at least one human attitude toward machines: We’ve grown accustomed to talking to them. Millions now tell Alexa or Siri or Google Assistant to play music, take memos, put something on their calendar or tell a terrible joke.
One reason botmakers are embracing artificiality is that the Turing Test turns out to be incredibly difficult to pass. Human conversation is full of idioms, metaphors and implied knowledge: Recognizing that the expression “It’s raining cats and dogs” isn’t actually about cats and dogs, for example, surpasses the reach of chatbots.
Conversational bots thus could bring on a new wave of unemployment — or “readjustment,” to use the bloodless term of economics. Service workers, sales agents, telemarketers — it’s not hard to imagine how millions of jobs that require social interaction, whether on the phone or online, could eventually be eliminated by code.
One person who bought a Jibo was Erin Partridge, an art therapist in Alameda, Calif., who works with the elderly. When she took Jibo on visits, her patients loved it.
For some technology critics, including Sherry Turkle, who does research on the psychology of tech at M.I.T., this raises ethical concerns. “People are hard-wired with sort of Darwinian vulnerabilities, Darwinian buttons,” she told me. “And these Darwinian buttons are pushed by this technology.” That is, programmers are manipulating our emotions when they create objects that inquire after our needs.
The precursor to today’s bots, Joseph Weizenbaum’s ELIZA, was created at M.I.T. in 1966. ELIZA was a pretty crude set of prompts, but by simply asking people about their feelings, it drew them into deep conversations.
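The pattern-and-reflect trick Weizenbaum used can be sketched in a few lines. The rules below are hypothetical stand-ins for illustration, not ELIZA's actual script:

```python
import re

# Reflect first-person words back at the speaker, ELIZA-style.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# (pattern, response template) pairs -- illustrative, not Weizenbaum's script.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*", "Please tell me more."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the bot can echo the user's words back."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    """Return the first rule's response whose pattern matches the input."""
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower().strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please tell me more."
```

For example, `respond("I feel sad about my job")` echoes the feeling back as a question. The whole effect rests on pronoun reflection and open-ended prompts, which is exactly why people found ELIZA's "attentiveness" so compelling despite there being no understanding underneath.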