Technology is a branch of moral philosophy, not of science
The process of making technology is design
Design is a branch of moral philosophy, not of science.
System design reflects the designer's values and the cultural context.
A Bulgarian professor of Byzantine history: anything less than 200 years old is politics, not history.
Access, privacy, equity – the values of the professional organization ARLD.
This is how bad design makes it out into the world: not due to malicious intent, but with no intent at all.
Our expertise, our service ethic, and our values remain our greatest strengths. But for us to have the impact we seek in the lives of our users, we must encode our services and our values into the software.
Design interprets the world to create useful objects. Ethical design closes the loop, imagining how those objects will affect the world.
"A good science fiction story should be able to predict not the automobile, but the traffic jam." – Frederik Pohl
"The designer's social and moral judgment must be brought into play long before she begins to design." – Victor Papanek
"We need to fear the consequences of our work more than we love the cleverness of our ideas." – Mike Monteiro
Qualitative and quantitative data – librarians love data: usage, ILL, course reserves – QQML.
IDEO: "The goal of design research isn't to collect data; it is to synthesize information and provide insight and guidance that leads to action."
Google Analytics: the trade-off. Beyond the privacy concerns, sometimes data and analytics are the only thing we can see.
Frank Chimero: "Remove a person's humanity and she is just a curiosity, a pinpoint on a map, a line in a list, an entry in a database. A person turns into a granular bit of information."
"By designing for yourself or your team, you are potentially building discrimination right into your product." – Erika Hall
What is relevance? The relevance-ranking algorithm – relevant for whom (which patron)? Crummy searches.
Reckless associations – made by humans or computers – can do very real harm, especially when they appear in supposedly neutral environments.
Donna Lanclos and Andrew Asher: ethnography should be core to the business of the library.
Technology as an information ecology: we co-evolve with it. Be prepared to keep asking questions to see the effect of our design choices.
Ethnography of the library: touch-point tours – ask a student to give a tour to the librarians, or to draw a map of the library, to get a sense of what spaces they use and what matters to them. "Ethnography-ish."
Q from the audience: if instructors warn against Google and Wikipedia and steer students to the library and its databases, how do you now warn about the perils of database bias? A: put out fires as they arise, and, systematically, try to build it into existing initiatives – the biannual magazine, as many places as we can.
Because of technological advances and the sheer amount of data now available about billions of other people, discretion no longer suffices to protect your privacy. Computer algorithms and network analyses can now infer, with a sufficiently high degree of accuracy, a wide range of things about you that you may have never disclosed, including your moods, your political beliefs, your sexual orientation and your health.
There is no longer such a thing as individually “opting out” of our privacy-compromised world.
In 2017, the newspaper The Australian published an article, based on a leaked document from Facebook, revealing that the company had told advertisers that it could predict when younger users, including teenagers, were feeling “insecure,” “worthless” or otherwise in need of a “confidence boost.” Facebook was apparently able to draw these inferences by monitoring photos, posts and other social media data.
In 2017, academic researchers, armed with data from more than 40,000 Instagram photos, used machine-learning tools to accurately identify signs of depression in a group of 166 Instagram users. Their computer models turned out to be better predictors of depression than humans who were asked to rate whether photos were happy or sad and so forth.
Computational inference can also be a tool of social control. The Chinese government, having gathered biometric data on its citizens, is trying to use big data and artificial intelligence to single out “threats” to Communist rule, including the country’s Uighurs, a mostly Muslim ethnic group.
Zeynep Tufekci and Seth Stephens-Davidowitz: Privacy is over
The type of data: Wikipedia. The dangers of learning from Wikipedia, and how individuals can organize to mitigate some of those dangers. Wikidata, algorithms.
IBM Watson, an AI system, uses Wikipedia, with algorithms making sense of its text.
YouTube debunks conspiracy-theory videos by linking to Wikipedia.
Semantic relatedness: Word2Vec.
How do these algorithms work? They start from a large body of unstructured text and pick up which specific words occur together.
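The idea behind semantic relatedness can be sketched without any ML library: count which words appear near each other in a corpus, then compare those context counts with cosine similarity. (This is a toy count-based sketch; Word2Vec itself learns dense vectors by training a small neural network, but the "words are similar if they keep similar company" intuition is the same. The corpus here is made up.)

```python
from collections import Counter
from math import sqrt

def cooccurrence_vectors(corpus, window=2):
    """For each word, count the words appearing within `window` positions of it."""
    vectors = {}
    for sentence in corpus:
        for i, word in enumerate(sentence):
            ctx = vectors.setdefault(word, Counter())
            for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
                if j != i:
                    ctx[sentence[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "stocks fell on wall street".split(),
]
vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" share contexts ("the", "sat", "on"), so they score
# higher than "cat" and "street".
print(cosine(vecs["cat"], vecs["dog"]) > cosine(vecs["cat"], vecs["street"]))  # → True
```

The same mechanism is what imports a corpus's biases: if a word only ever appears in certain company, the vectors encode that association.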
Lots of AI learns about the world from Wikipedia. The neutral-point-of-view policy: Wikipedia asks editors to present viewpoints as proportionally as possible. Wikipedia biases: 1. gender bias (only 20–30% are women).
ConceptNet: debiasing along different demographic dimensions.
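One common debiasing technique (used in work on word embeddings, and in the same spirit as ConceptNet Numberbatch's debiasing) is to identify a bias direction in the vector space and project it out of other words' vectors. A minimal sketch with hypothetical 2-D toy vectors, where the x-axis is constructed to be the gender direction:

```python
def subtract_projection(v, direction):
    """Remove the component of v that lies along a (bias) direction."""
    norm_sq = sum(d * d for d in direction)
    scale = sum(a * b for a, b in zip(v, direction)) / norm_sq
    return [a - scale * d for a, d in zip(v, direction)]

# Toy 2-D embeddings (hypothetical): x-axis encodes gender by construction.
he, she = [1.0, 0.0], [-1.0, 0.0]
nurse = [-0.6, 0.8]   # a profession vector skewed toward "she"

bias_direction = [a - b for a, b in zip(he, she)]   # [2.0, 0.0]
debiased = subtract_projection(nurse, bias_direction)
print(debiased)  # → [0.0, 0.8]: the gendered component is gone
```

Real systems debias hundreds of dimensions at once and must choose which word pairs define each demographic direction – itself a value-laden design decision.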
Citation analysis also gives an idea of biases: the localness of sources cited in spatial articles; structural biases.
Geolocation on Twitter by county: prediction favors people living in urban areas. Facebook wants to push more local news.
Danger (bias) #3: Wikipedia search results vs. the Wikipedia knowledge panel.
Collective action against tech: Reddit; boycotts of Facebook and Instagram.
Data labor: the primary resources these companies have are posts, images, reviews, etc.
Boycott vs. data strike (making your data unavailable to train future algorithms). GDPR in the EU – covering all historical data – is like the CA Consumer Privacy Act. One can do a data strike without a boycott. General boycott vs. a homogeneous one (a group with a shared identity).
Wikipedia's spam policy obstructs new editors, which hits communities such as women hardest.
How to access social media data at different levels; methods and methodological concerns; ethical concerns; legal concerns.
TweetDeck for advanced Twitter searches. Quotes and likes are relevant but not enough; sometimes a screenshot is needed.
Social listening platforms: Crimson Hexagon, Parse.ly, Sysomos – not yet academic platforms; they offer tools to set up queries and visualizations, but the algorithms, data samples, etc. are hard to inspect. Open-source tools: from Urbana-Champaign, the Social Media Macroscope, with SMILE (Social Media Intelligence and Learning Environment) to collect data from Twitter and Reddit; within the platform you can query Twitter and create trend and sentiment analyses. Voxgov (a subscription service for analyzing political social media).
Graduate-level and faculty research: accessing social media data at large scale via web scraping and APIs – the Twitter APIs, JavaScript, Python, etc. Gnip Firehose API ($); Web Scraper Chrome plugin (an easy tool; Python- and R-based alternatives exist); Twint (Twitter scraper).
Facepager (open source), if you are not a Python or R coder: structure and download the data sets.
TAGS: archiving to Google Sheets via the Twitter API. Anything older than 7 days is not available, so harvest every week.
Social Feed Manager (George Washington University) – Justin Littman, with Stanford. It must be installed on a server, but allows much more.
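The weekly-harvest workaround for the 7-day search window boils down to merging each run into a persistent archive, deduplicated by tweet ID, since overlapping harvests return the same tweets twice. A minimal sketch with stub data standing in for two overlapping weekly API calls (the dict shapes are simplified, not the real Twitter payload):

```python
def merge_harvest(archive, new_tweets):
    """Merge a fresh harvest into the archive, deduplicating by tweet ID.

    Search APIs that only reach back ~7 days force repeated harvests;
    overlapping runs return the same tweets, so we key on ID and keep
    the first copy seen.
    """
    for tweet in new_tweets:
        archive.setdefault(tweet["id"], tweet)
    return archive

# Stub data for two overlapping weekly runs.
week1 = [{"id": 1, "text": "first"}, {"id": 2, "text": "second"}]
week2 = [{"id": 2, "text": "second"}, {"id": 3, "text": "third"}]

archive = {}
merge_harvest(archive, week1)
merge_harvest(archive, week2)
print(sorted(archive))  # → [1, 2, 3]
```

Tools like TAGS and Social Feed Manager do essentially this, plus scheduling, authentication, and storage.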
Legal concerns: copyright (the info is public, but use beyond that is still copyrighted). The fair-use argument is strong, but you cannot publish the data; you can analyze it under fair use. Contracts supersede copyright (terms of service/use); licensed data is available through the library.
Methods: sampling concerns (Tufekci, 2014, on big questions for social media research). Social media data is a good sample for studying social media itself, but for other fields? Not according to her. Hashtag studies: self-selection bias. Twitter as a "model organism": over-represented in academic studies.
Methodological concerns: scope of access – lack of historical data; mechanics and context of the platform – retweets are not necessarily endorsements.
Ethical concerns: with public info, IRBs require no informed consent; the right to be forgotten; anonymized data is often still traceable.
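Why "anonymized" data stays traceable: even with names removed, combinations of ordinary attributes (ZIP code, birth year, gender) are often unique. A standard way to check is k-anonymity – the size of the smallest group sharing the same quasi-identifiers; k = 1 means someone is unique and potentially re-identifiable. A minimal sketch with made-up records:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest group size over the quasi-identifier combinations.

    k = 1 means at least one person is unique on those attributes alone,
    i.e. potentially re-identifiable even with names removed.
    """
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# "Anonymized" records: no names, but ZIP + birth year + gender remain.
records = [
    {"zip": "55414", "birth_year": 1985, "gender": "F"},
    {"zip": "55414", "birth_year": 1985, "gender": "F"},
    {"zip": "55455", "birth_year": 1990, "gender": "M"},  # unique → k = 1
]
print(k_anonymity(records, ["zip", "birth_year", "gender"]))  # → 1
```

Social media data adds timestamps, locations, and writing style to the quasi-identifier list, which makes unique combinations far more likely, not less.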
Table discussion: digital humanities and journalism are interested, but that is too narrow. Tools are still difficult to find and operate. The context of the visuals. How to spread this across a variety of majors and classes. Controversial events are more likely to be deleted.
Takedowns, lies, and corrosion – what is a librarian to do? Trolls, takedowns.
Development-kit circulation: familiarity with the Oculus Rift resulted in less reservation. Also a downturn.
An experience station; clean up the free apps.
Question: spherical video / 360° video.
Safety issues: policies? Instructional perspective: curating. WI people: user testing. Touch controllers are more intuitive than the Xbox controller. Retail Oculus Rift.
Apps: Sketchfab, a 3D model viewer (.obj or .stl files); Medium, Tilt Brush.
The College of Liberal Arts at the U has its own VR and 3D-printing setup.
Penn State (Paul, a librarian; kinesiology and anatomy programs), Information Sciences and Technology: an immersive-experiences lab for 360° video.
CALIPHA; part of it is XRLibraries. Libraries equal education. Content provider LifeLiqe: a STEM library of AR and VR objects. https://www.lifeliqe.com/
Many educational institutions maintain their own data centers. “We need to minimize the amount of work we do to keep systems up and running, and spend more energy innovating on things that matter to people.”
what’s the difference between machine learning (ML) and artificial intelligence (AI)?
Jeff Olson: That’s actually the setup for a joke going around the data science community. The punchline? If it’s written in Python or R, it’s machine learning. If it’s written in PowerPoint, it’s AI.
machine learning is in practical use in a lot of places, whereas AI conjures up all these fantastic thoughts in people.
What is serverless architecture, and why are you excited about it?
Instead of having a machine running all the time, you just run the code necessary to do what you want—there is no persisting server or container. There is only this fleeting moment when the code is being executed. It’s called Function as a Service, and AWS pioneered it with a service called AWS Lambda. It allows an organization to scale up without planning ahead.
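What "only the code runs" looks like in practice: a minimal sketch of AWS Lambda's Python handler convention. The event shape below is a simplified, hypothetical API Gateway-style payload; in a real deployment the platform invokes `handler` per request and bills only for execution time.

```python
# handler.py — a minimal Function-as-a-Service handler (AWS Lambda-style).
# No server runs between requests: the platform calls handler() once per
# event, then the execution environment may disappear.

def handler(event, context=None):
    """Echo a greeting; event is a simplified API Gateway-style payload."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

# Local smoke test; in AWS you would deploy this and wire API Gateway to it.
print(handler({"queryStringParameters": {"name": "library"}}))
# → {'statusCode': 200, 'body': 'hello, library'}
```

The function keeps no state between calls, which is exactly what lets the platform scale it from zero to thousands of concurrent executions without capacity planning.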
How do you think machine learning and Function as a Service will impact higher education in general?
The radical nature of this innovation will make a lot of systems that were built five or 10 years ago obsolete. Once an organization comes to grips with Function as a Service (FaaS) as a concept, it’s a pretty simple step for that institution to stop doing its own plumbing. FaaS will help accelerate innovation in education because of the API economy.
If the campus IT department will no longer be taking care of the plumbing, what will its role be?
I think IT will be curating the inter-operation of services, some developed locally but most purchased from the API economy.
As a result, you write far less code and have fewer security risks, so you can innovate faster. A succinct machine-learning algorithm with fewer than 500 lines of code can now replace an application that might have required millions of lines of code. Second, it scales. If you happen to have a gigantic spike in traffic, it deals with it effortlessly. If you have very little traffic, you incur a negligible cost.
In 2018 we witnessed a clash of titans as government and tech companies collided on privacy issues around collecting, culling and using personal data. From GDPR to Facebook scandals, many tech CEOs were defending big data, its use, and how they’re safeguarding the public.
1. Companies will face increased pressure about the data AI-embedded services use.
2. Public concern will lead to AI regulations. But we must understand this tech too.
In 2018, the National Science Foundation invested $100 million in AI research, with special support in 2019 for developing principles for safe, robust and trustworthy AI; addressing issues of bias, fairness and transparency of algorithmic intelligence; developing deeper understanding of human-AI interaction and user education; and developing insights about the influences of AI on people and society.
This investment was dwarfed by DARPA—an agency of the Department of Defense—and its multi-year investment of more than $2 billion in new and existing programs under the “AI Next” campaign. A key area of the campaign includes pioneering the next generation of AI algorithms and applications, such as “explainability” and common sense reasoning.
Submissions are invited for the IOLUG Spring 2019 Conference, to be held May 10th in Indianapolis, IN. Submissions are welcomed from all types of libraries and on topics related to the theme of data in libraries.
Libraries and librarians work with data every day, with a variety of applications – circulation, gate counts, reference questions, and so on. The mass collection of user data has made headlines many times in the past few years. Analytics and privacy have, understandably, become important issues both globally and locally. In addition to being aware of the data ecosystem in which we work, libraries can play a pivotal role in educating user communities about data and all of its implications, both favorable and unfavorable.
The Conference Planning Committee is seeking proposals on topics related to data in libraries, including but not limited to:
Using tools/resources to find and leverage data to solve problems and expand knowledge,
Data policies and procedures,
Harvesting, organizing, and presenting data,
Data-driven decision making,
Data in collection development,
Using data to measure outcomes, not just uses,
Using data to better reach and serve your communities,
Whether the NYC police angle is true or not (it’s being hotly disputed), Facebook and Google are thinking along lines that follow the whims of the Chinese Government.
SenseTime and Megvii won’t just be worth $5 billion; they will be worth many times that in the future, because facial-recognition data harvesting of everything is the future of consumerism and capitalism – and in some places (think Asia), the central tenet of social order.
China has already ‘won’ the trade war, because it’s winning the race to innovation. America doesn’t regulate Amazon, Microsoft, Google or Facebook properly, which stunts innovation and ethics in technology; the West is now forced to copy China just to keep up.
Eureka: a machine-learning brainstorming engine. Give it an initial idea and it returns similar ideas. Like Google: refine the idea so the machine can understand it better; create a collection of ideas to translate into course design or other uses.
Influencers and micro-influencers, before and during the execution.
A machine can construct a book with the help of a person – a "bionic book," machine and person working hand in hand. Provide keywords and phrases from lecture notes and presentation materials; from there, recommendations and suggestions based on one's own experience; then identify included and excluded content; then the instructor can construct the book.
Design may be the least interesting part of the book for the faculty.
A multiple-choice quiz may be the least interesting part, and faculty might want to do much deeper assessment.
Use these machine-learning techniques to build assessment, and to do so more effectively. InQuizitive is the machine-learning example here.
Student engagement and similar prompts.
Presence in the classroom: a class for pre-service teachers – how to immerse them and let them practice classroom-management skills.
First class: a marriage between VR and AI. In a headset environment, an algorithm reacts to how teachers interact with the virtual kids: a series of variables, and the opportunity to respond to the presented behavior. Classroom-management skills; simulations and environments otherwise impossible to create; apps for these types of interactions.
facilitation, reflection and research
AI for a more human experience: allow the faculty more time to be human, more free time to contemplate.
Jason: Won’t the use of AI still reduce the amount of faculty needed?
Christina Dumeng: @Jason–I think it will most likely increase the amount of students per instructor.
Andrew Cole (UW-Whitewater): I wonder if instead of reducing faculty, these types of platforms (e.g., analytic capabilities) might require instructors to also become experts in the various technology platforms.
Dirk Morrison: Also wonder what the implications of AI are for informal, self-directed learning?
Kate Borowske: The context that you’re presenting this in, as “your own jazz band,” is brilliant. These tools presented as a “partner” in the “band” seems as though it might be less threatening to faculty. Sort of gamifies parts of course design…?
Dirk Morrison: Move from teacher-centric to student-centric? Recommender systems, AI-based tutoring?
Andrew Cole (UW-Whitewater): The course with the bot TA must have been 100-level right? It would be interesting to see if those results replicate in 300, 400 level courses
Both jazz and classical art forms require not only music literacy but also that the musician be at the top of their game in technical proficiency, tonal quality and, in the case of the jazz idiom, creativity. Jazz masters like John Coltrane would practice six to nine hours a day, often cutting his practice short only because his inner lower lip would be bleeding from the friction of his mouthpiece against his gums and teeth. His ability to compose and create new styles and directions for jazz was legendary. With few exceptions, such as Wes Montgomery or Chet Baker, if you couldn’t read music, you couldn’t play jazz.
Besides the decline of music literacy and participation, there has also been a decline in the quality of music, demonstrated scientifically by Joan Serra, a postdoctoral scholar at the Artificial Intelligence Research Institute of the Spanish National Research Council in Barcelona. Serra and his colleagues ran 500,000 pieces of music recorded between 1955 and 2010 through a complex set of algorithms examining three aspects of those songs:
1. Timbre – sound color, texture and tone quality
2. Pitch – harmonic content of the piece, including its chords, melody, and tonal arrangements
3. Loudness – volume variance adding richness and depth
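The loudness dimension is the easiest of the three to make concrete: measure the RMS level of a signal block by block in decibels, then look at how much those levels vary. (This is a toy sketch of the general idea with synthetic signals, not Serra's actual method.) A dynamically varied track shows high loudness variance; a uniformly loud, heavily compressed one shows almost none:

```python
from math import sin, pi, log10

def block_loudness_db(signal, block=100):
    """RMS level of each block of samples, in dB relative to full scale."""
    levels = []
    for start in range(0, len(signal) - block + 1, block):
        chunk = signal[start:start + block]
        rms = (sum(x * x for x in chunk) / block) ** 0.5
        levels.append(20 * log10(max(rms, 1e-12)))  # guard against log(0)
    return levels

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

# A dynamic signal (quiet passage, then loud) vs. a uniformly loud one.
dynamic = [0.1 * sin(2 * pi * i / 50) for i in range(500)] + \
          [0.9 * sin(2 * pi * i / 50) for i in range(500)]
flat = [0.9 * sin(2 * pi * i / 50) for i in range(1000)]

print(variance(block_loudness_db(dynamic)) > variance(block_loudness_db(flat)))  # → True
```

The "loudness war" finding is essentially this comparison at scale: newer recordings look more like `flat` than `dynamic`.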
In an interview, Billy Joel was asked what has made him a standout. He responded that his ability to read and compose music made him unique in the music industry – which, as he explained, is troubling for the industry when being musically literate makes you stand out. An astonishing amount of today’s popular music is written by two people: Lukasz Gottwald of the United States and Max Martin of Sweden, who between them are responsible for dozens of songs in the top 100 charts. You can credit Max and Dr. Luke for most of the hits of these stars:
Katy Perry, Britney Spears, Kelly Clarkson, Taylor Swift, Jessie J., KE$HA, Miley Cyrus, Avril Lavigne, Maroon 5, Taio Cruz, Ellie Goulding, NSYNC, Backstreet Boys, Ariana Grande, Justin Timberlake, Nicki Minaj, Celine Dion, Bon Jovi, Usher, Adam Lambert, Justin Bieber, Domino, Pink, Pitbull, One Direction, Flo Rida, Paris Hilton, The Veronicas, R. Kelly, Zebrahead
Way back in 1983, I identified A.I. as one of 20 exponential technologies that would increasingly drive economic growth for decades to come.
Artificial intelligence applies to computing systems designed to perform tasks usually reserved for human intelligence using logic, if-then rules, decision trees and machine learning to recognize patterns from vast amounts of data, provide insights, predict outcomes and make complex decisions. A.I. can be applied to pattern recognition, object classification, language translation, data translation, logistical modeling and predictive modeling, to name a few. It’s important to understand that all A.I. relies on vast amounts of quality data and advanced analytics technology. The quality of the data used will determine the reliability of the A.I. output.
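The "logic, if-then rules, decision trees" end of that spectrum can be shown in a few lines: a hand-written decision tree, as opposed to one learned from data. The loan-screening rules and thresholds below are entirely hypothetical, chosen only to illustrate the structure:

```python
def classify_loan(applicant):
    """A hand-written decision tree (hypothetical thresholds): the
    'if-then rules' end of the A.I. spectrum. Machine learning would
    instead derive rules like these from historical data."""
    if applicant["income"] >= 50_000:
        if applicant["debt_ratio"] < 0.4:
            return "approve"
        return "review"
    if applicant["credit_score"] >= 700:
        return "review"
    return "decline"

print(classify_loan({"income": 60_000, "debt_ratio": 0.2, "credit_score": 680}))
# → approve
```

Note how the reliability point from the paragraph applies even here: the tree is only as good as the thresholds, and learned trees are only as good as the data behind them.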
Machine learning is a subset of A.I. that utilizes advanced statistical techniques to enable computing systems to improve at tasks with experience over time. Chatbots like Amazon’s Alexa, Apple’s Siri, or any of the others from companies like Google and Microsoft all get better every year thanks to all of the use we give them and the machine learning that takes place in the background.
Deep learning is a subset of machine learning that uses advanced algorithms to enable an A.I. system to train itself to perform tasks by exposing multi-layered neural networks to vast amounts of data, then using what has been learned to recognize new patterns contained in the data. Learning can be human-supervised learning, unsupervised learning and/or reinforcement learning, like Google used with DeepMind to learn how to beat humans at the complex game Go. Reinforcement learning will drive some of the biggest breakthroughs.
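Reinforcement learning in miniature: tabular Q-learning (the classic algorithm; DeepMind's systems combine this family of ideas with deep networks) on a five-state corridor where the agent starts at one end and is rewarded for reaching the other. The environment and parameters here are made up for illustration:

```python
import random

def q_learn(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a 1-D corridor: start at state 0,
    reward 1.0 for reaching the last state. Actions: 0 = left, 1 = right."""
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
            a = random.randrange(2) if random.random() < eps else q[s].index(max(q[s]))
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: nudge toward reward plus discounted best future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

random.seed(0)
q = q_learn()
# After training, "right" should dominate in every non-terminal state.
print(all(state[1] > state[0] for state in q[:-1]))
```

No one tells the agent the rules; it discovers the go-right policy purely from delayed reward, which is the core idea scaled up enormously in systems like the Go-playing agents.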
Autonomous computing uses advanced A.I. tools such as deep learning to enable systems to be self-governing and capable of acting according to situational data without human command. A.I. autonomy includes perception, high-speed analytics, machine-to-machine communications and movement. For example, autonomous vehicles use all of these in real time to successfully pilot a vehicle without a human driver.
Augmented thinking: Over the next five years and beyond, A.I. will become increasingly embedded at the chip level into objects, processes, products and services, and humans will augment their personal problem-solving and decision-making abilities with the insights A.I. provides to get to a better answer faster.
Technology is not good or evil; it is how we as humans apply it. Since we can’t stop the increasing power of A.I., I want us to direct its future, putting it to the best possible use for humans.