Posts Tagged ‘Deep learning’
Artificial Intelligence, Machine Learning, Deep Learning
In the Age of AI
In The Age Of A.I. (2019) — This just aired last night and it’s absolutely fantastic. It presents a great look at AI, and it also talks about automation, wealth inequality, data-mining and surveillance.
by u/srsly_its_so_ez in Documentaries
13 min 40 sec = WeChat
15 min = data is the new oil and China is the new Saudi Arabia
18 min 30 sec = social credit and facial recognition
++++++++++
more on deep learning in this IMS blog
https://blog.stcloudstate.edu/ims?s=deep+learning
AI Ethics, Policy and Governance
Please tune in live on Monday Oct 28 and Tuesday Oct 29! This is our fall conference on AI Ethics, Policy and Governance @StanfordHAI, lead by HAI Associate Directors @robreich @Susan_Athey and Deputy Director @MPSellitto https://t.co/8fClgbm6ZY
— Fei-Fei Li (@drfeifei) October 26, 2019
https://hai.stanford.edu/?sf111258978=1
++++++++++
more on ethics in this IMS blog
https://blog.stcloudstate.edu/ims?s=ethics
Policy for Artificial Intelligence
Law is Code: Making Policy for Artificial Intelligence
Jules Polonetsky and Omer Tene January 16, 2019
https://www.ourworld.co/law-is-code-making-policy-for-artificial-intelligence/
Twenty years have passed since renowned Harvard Professor Larry Lessig coined the phrase “Code is Law”, suggesting that in the digital age, computer code regulates behavior much like legislative code traditionally did. These days, the computer code that powers artificial intelligence (AI) is a salient example of Lessig’s statement.
- Good AI requires sound data. One of the principles, some would say the organizing principle, of privacy and data protection frameworks is data minimization. Data protection laws require organizations to limit data collection to the extent strictly necessary and retain data only so long as it is needed for its stated goal.
- Preventing discrimination – intentional or not.
When is a distinction between groups permissible or even merited and when is it untoward? How should organizations address historically entrenched inequalities that are embedded in data? New mathematical theories such as “fairness through awareness” enable sophisticated modeling to guarantee statistical parity between groups (a minimal sketch of measuring statistical parity appears after this list).
- Assuring explainability – technological due process. In privacy and freedom of information frameworks alike, transparency has traditionally been a bulwark against unfairness and discrimination. As Justice Brandeis once wrote, “Sunlight is the best of disinfectants.”
- Deep learning means that iterative computer programs derive conclusions for reasons that may not be evident even after forensic inquiry.
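To make the “statistical parity” idea above a bit more concrete, here is a minimal sketch in Python; the decision values, group labels and function name are invented purely for illustration and do not come from the article:

```python
# Minimal sketch with invented data: the "statistical parity difference"
# between two groups for a binary decision (e.g. approve/deny).
def statistical_parity_difference(decisions, groups, group_a, group_b):
    """Positive-decision rate of group_a minus that of group_b; 0 means parity."""
    def rate(g):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(members) / len(members)
    return rate(group_a) - rate(group_b)

# Hypothetical decisions: 1 = approved, 0 = denied.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_difference(decisions, groups, "A", "B"))
# 0.75 - 0.25 = 0.5 -> far from zero, so these decisions lack statistical parity
```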
Yet even with code as law and a rising need for law in code, policymakers do not need to become mathematicians, engineers and coders. Instead, institutions must develop and enhance their technical toolbox by hiring experts and consulting with top academics, industry researchers and civil society voices. Responsible AI requires access to not only lawyers, ethicists and philosophers but also to technical leaders and subject matter experts to ensure an appropriate balance between economic and scientific benefits to society on the one hand and individual rights and freedoms on the other hand.
+++++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=artificial+intelligence
shaping the future of AI
Shaping the Future of A.I.
Daniel Burrus
Way back in 1983, I identified A.I. as one of 20 exponential technologies that would increasingly drive economic growth for decades to come.
Artificial intelligence applies to computing systems designed to perform tasks usually reserved for human intelligence using logic, if-then rules, decision trees and machine learning to recognize patterns from vast amounts of data, provide insights, predict outcomes and make complex decisions. A.I. can be applied to pattern recognition, object classification, language translation, data translation, logistical modeling and predictive modeling, to name a few. It’s important to understand that all A.I. relies on vast amounts of quality data and advanced analytics technology. The quality of the data used will determine the reliability of the A.I. output.
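The “if-then rules and decision trees” mentioned above can be made concrete with a few lines of scikit-learn; the feature names and numbers below are invented purely for illustration:

```python
# Toy sketch: a small decision tree learns if-then rules from (invented) data.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features: [hours_of_use, error_rate]; label: 1 = needs maintenance
X = [[10, 0.01], [200, 0.05], [500, 0.20], [800, 0.35], [50, 0.02], [650, 0.30]]
y = [0, 0, 1, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["hours_of_use", "error_rate"]))
print(tree.predict([[300, 0.25]]))  # classify a new, unseen example
```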
Machine learning is a subset of A.I. that utilizes advanced statistical techniques to enable computing systems to improve at tasks with experience over time. Chatbots like Amazon’s Alexa, Apple’s Siri, or any of the others from companies like Google and Microsoft all get better every year thanks to all of the use we give them and the machine learning that takes place in the background.
Deep learning is a subset of machine learning that uses advanced algorithms to enable an A.I. system to train itself to perform tasks by exposing multi-layered neural networks to vast amounts of data, then using what has been learned to recognize new patterns contained in the data. Learning can be Human Supervised Learning, Unsupervised Learning and/or Reinforcement Learning, as Google’s DeepMind used to learn how to beat humans at the complex game of Go. Reinforcement learning will drive some of the biggest breakthroughs.
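As a rough sketch of what “exposing multi-layered neural networks to data” means in practice, here is a tiny two-layer network written from scratch with NumPy and trained by gradient descent on the classic XOR toy problem; this is a generic illustration, not anything from Burrus’s article, and the layer sizes and learning rate are arbitrary:

```python
import numpy as np

# Tiny two-layer neural network trained on XOR -- a classic toy example
# of supervised learning with a multi-layered network.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # forward pass through both layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent update of both layers
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3))  # approaches [0, 1, 1, 0] as training proceeds
```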
Autonomous computing uses advanced A.I. tools such as deep learning to enable systems to be self-governing and capable of acting according to situational data without human command. A.I. autonomy includes perception, high-speed analytics, machine-to-machine communications and movement. For example, autonomous vehicles use all of these in real time to successfully pilot a vehicle without a human driver.
Augmented thinking: Over the next five years and beyond, A.I. will become increasingly embedded at the chip level into objects, processes, products and services, and humans will augment their personal problem-solving and decision-making abilities with the insights A.I. provides to get to a better answer faster.
Technology is not good or evil; it is how we as humans apply it that matters. Since we can’t stop the increasing power of A.I., I want us to direct its future, putting it to the best possible use for humans.
++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=artificial+intelligence
more on deep learning in this IMS blog
https://blog.stcloudstate.edu/ims?s=deep+learning
deep learning revolution
How deep learning―from Google Translate to driverless cars to personal cognitive assistants―is changing our lives and transforming every sector of the economy.
The deep learning revolution has brought us driverless cars, the greatly improved Google Translate, fluent conversations with Siri and Alexa, and enormous profits from automated trading on the New York Stock Exchange. Deep learning networks can play poker better than professional poker players and defeat a world champion at Go. In this book, Terry Sejnowski explains how deep learning went from being an arcane academic field to a disruptive technology in the information economy.
Sejnowski played an important role in the founding of deep learning, as one of a small group of researchers in the 1980s who challenged the prevailing logic-and-symbol based version of AI. The new version of AI Sejnowski and others developed, which became deep learning, is fueled instead by data. Deep networks learn from data in the same way that babies experience the world, starting with fresh eyes and gradually acquiring the skills needed to navigate novel environments. Learning algorithms extract information from raw data; information can be used to create knowledge; knowledge underlies understanding; understanding leads to wisdom. Someday a driverless car will know the road better than you do and drive with more skill; a deep learning network will diagnose your illness; a personal cognitive assistant will augment your puny human brain. It took nature many millions of years to evolve human intelligence; AI is on a trajectory measured in decades. Sejnowski prepares us for a deep learning future.
A pioneering scientist explains ‘deep learning’
Artificial intelligence meets human intelligence
Buzzwords like “deep learning” and “neural networks” are everywhere, but so much of the popular understanding is misguided, says Terrence Sejnowski, a computational neuroscientist at the Salk Institute for Biological Studies.
Sejnowski, a pioneer in the study of learning algorithms, is the author of The Deep Learning Revolution (out next week from MIT Press). He argues that the hype about killer AI or robots making us obsolete ignores exciting possibilities happening in the fields of computer science and neuroscience, and what can happen when artificial intelligence meets human intelligence.
Machine learning is a very large field and goes way back. Originally, people were calling it “pattern recognition,” but the algorithms became much broader and much more sophisticated mathematically. Within machine learning are neural networks inspired by the brain, and then deep learning. Deep learning algorithms have a particular architecture with many layers that flow through the network. So basically, deep learning is one part of machine learning and machine learning is one part of AI.
December 2012 at the NIPS meeting, which is the biggest AI conference. There, [computer scientist] Geoff Hinton and two of his graduate students showed you could take a very large dataset called ImageNet, with 10,000 categories and 10 million images, and reduce the classification error by 20 percent using deep learning. Traditionally on that dataset, error decreases by less than 1 percent in one year. In one year, 20 years of research was bypassed. That really opened the floodgates.
The inspiration for deep learning really comes from neuroscience.
AlphaGo, the program that beat the Go champion, included not just a model of the cortex, but also a model of a part of the brain called the basal ganglia, which is important for making a sequence of decisions to meet a goal. There’s an algorithm there called temporal differences, developed back in the ’80s by Richard Sutton, that, when coupled with deep learning, is capable of very sophisticated plays that no human has ever seen before.
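For readers curious about the temporal-difference algorithm mentioned above, here is a minimal, generic TD(0) value-update sketch on an invented three-state chain; it illustrates only the core update rule and has nothing to do with AlphaGo’s actual code:

```python
# Minimal TD(0) sketch on an invented 3-state chain: s0 -> s1 -> s2 (terminal).
# A reward of +1 is received on reaching the terminal state.
states = [0, 1, 2]
V = {s: 0.0 for s in states}    # value estimates, initialized to zero
alpha, gamma = 0.1, 0.9         # learning rate and discount factor

for episode in range(500):
    s = 0
    while s != 2:
        s_next = s + 1
        reward = 1.0 if s_next == 2 else 0.0
        # the temporal-difference update: move V(s) toward r + gamma * V(s')
        V[s] += alpha * (reward + gamma * V[s_next] - V[s])
        s = s_next

print(V)  # V[1] approaches 1.0 and V[0] approaches gamma * 1.0 = 0.9
```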
There’s a convergence occurring between AI and human intelligence. As we learn more and more about how the brain works, that’s going to reflect back in AI. But at the same time, researchers are actually creating a whole theory of learning that can be applied to understanding the brain, allowing us to analyze the thousands of neurons and how their activities are coming out. So there’s a feedback loop between neuroscience and AI.
POD 2017
2016 POD Network Conference
http://podnetwork.org/content/uploads/2016-POD-Program-Final.pdf
https://guidebook.com/g/pod2016
Studying Connections between Student Well-Being, Performance, and Active Learning
Amy Godert, Cornell University; Teresa Pettit, Cornell University

Treasure in the Sierra Madre? Digital Badges and Educational Development
Chris Clark, University of Notre Dame; G. Alex Ambrose, University of Notre Dame; Gwynn Mettetal, Indiana University South Bend; David Pedersen, Embry-Riddle Aeronautical University; Roberta (Robin) Sullivan, University at Buffalo, State University of New York

Learning and Teaching Centers: The Missing Link in Data Analytics
Denise Drane, Northwestern University; Susanna Calkins, Northwestern University

Identifying and Supporting the Needs of International Faculty
Deborah DeZure, Michigan State University; Cindi Leverich, Michigan State University

Online Discussions for Engaged and Meaningful Student Learning
Danilo M. Baylen, University of West Georgia; Cheryl Fulghum, Haywood Community College

Why Consider Online Asynchronous Educational Development?
Christopher Price, SUNY Center for Professional Development

Online, On-Demand Faculty Professional Development for Your Campus
Roberta (Robin) Sullivan, University at Buffalo, State University of New York; Cherie van Putten, Binghamton University, State University of New York; Chris Price, State University of New York
The Tools of Engagement Project (http://suny.edu/toep) is an online faculty development model that encourages instructors to explore and reflect on innovative and creative uses of freely-available online educational technologies to increase student engagement and learning. TOEP is not traditional professional development but instead provides access to resources for instructors to explore at their own pace through a set of hands-on discovery activities. TOEP facilitates a learning community where participants learn from each other and share ideas. This poster will demonstrate how you can implement TOEP at your campus by either adopting your own version or joining the existing project.

Video Captioning 101: Establishing High Standards With Limited Resources
Stacy Grooters, Boston College; Christina Mirshekari, Boston College; Kimberly Humphrey, Boston College
Recent legal challenges have alerted institutions to the importance of ensuring that video content for instruction is properly captioned. However, merely meeting minimum legal standards can still fall significantly short of the best practices defined by disability rights organizations and the principles of Universal Design for Learning. Drawing from data gathered through a year-long pilot to investigate the costs and labor required to establish “in-house” captioning support at Boston College, this hands-on session seeks to give participants the tools and information they need to set a high bar for captioning initiatives at their own institutions.
Sessions on mindfulness
52 Cognitive Neuroscience Applications for Teaching and Learning (BoF)
53 Contemplative Practices (BoF) Facilitators: Penelope Wong, Berea College; Carl S. Moore, University of the District of Columbia
79 The Art of Mindfulness: Transforming Faculty Development by Being Present Ursula Sorensen, Utah Valley University
93 Impacting Learning through Understanding of Work Life Balance Deanna Arbuckle, Walden University
113 Classroom Mindfulness Practices to Increase Attention, Creativity, and Deep Engagement Michael Sweet, Northeastern University
132 Measuring the Impacts of Mindfulness Practices in the Classroom Kelsey Bitting, Northeastern University; Michael Sweet, Northeastern University
+++++++++
more on POD conferences in this IMS blog
https://blog.stcloudstate.edu/ims?s=pod+conference
Deep learning and Wearables
RE.WORK Deep Learning Summit, Boston
Internet of Things Summit, Boston 2015
May 28, 2015 – May 29, 2015
Hyatt Regency Boston, Boston, Massachusetts, USA
– See more at: https://www.crunchbase.com/event/internet-of-things-summit-boston-2015-2015528