Searching for "algorithms"

social media algorithms

How do algorithms impact our browsing behavior? Our browsing history?
What is the connection between social media algorithms and fake news?
Are there topic-detection algorithms, as there are community-detection ones?
How can I change the content of [Google] search results? Can I?

Larson, S. (2016, July 8). What is an Algorithm and How Does it Affect You? The Daily Dot. Retrieved from
Berg, P. (2016, June 30). How Do Social Media Algorithms Affect You | Forge and Smith. Retrieved September 19, 2017, from
Oremus, W., & Chotiner, I. (2016, January 3). Who Controls Your Facebook Feed. Slate. Retrieved from
Lehrman, R. A. (2013, August 11). The new age of algorithms: How it affects the way we live. Christian Science Monitor. Retrieved from
Johnson, C. (2017, March 10). How algorithms affect our way of life. Deseret News. Retrieved from
Understanding algorithms and their impact on human life goes far beyond basic digital literacy, some experts said.
An example could be the recent outcry over Facebook’s news algorithm, which enhances the so-called “filter bubble” of information.
personalized search
Kounine, A. (2016, August 24). How your personal data is used in personalization and advertising. Retrieved September 19, 2017, from
Hotchkiss, G. (2007, March 9). The Pros & Cons Of Personalized Search. Retrieved September 19, 2017, from
Magid, L. (2012). How (and why) To Turn Off Google’s Personalized Search Results. Forbes. Retrieved from
Nelson, P. (n.d.). Big Data, Personalization and the No-Search of Tomorrow. Retrieved September 19, 2017, from


Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346. doi:10.1177/1461444815608807

community detection algorithms:

Bedi, P., & Sharma, C. (2016). Community detection in social networks. WIREs Data Mining & Knowledge Discovery, 6(3), 115-135.

Cruz, J. D., Bothorel, C., & Poulet, F. (2014). Community detection and visualization in social networks: Integrating structural and semantic information. ACM Transactions on Intelligent Systems & Technology, 5(1), 1-26. doi:10.1145/2542182.2542193

Bai, X., Yang, P., & Shi, X. (2017). An overlapping community detection algorithm based on density peaks. Neurocomputing, 226, 7-15. doi:10.1016/j.neucom.2016.11.019
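As a minimal, hedged illustration of what these papers study (not the specific density-peaks or semantic methods they propose), here is a stdlib-only sketch of one divisive step in the spirit of Girvan-Newman: count how many shortest paths cross each edge, remove the most "between" edge, and read off the resulting components as communities. The toy graph, two triangles joined by a bridge, is a hypothetical example.

```python
from collections import deque

def bfs_path(adj, src, dst):
    """Return one shortest path from src to dst via BFS parents."""
    parent = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]

def split_once(adj):
    """Remove the edge crossed by the most BFS shortest paths; return components."""
    counts = {}
    nodes = sorted(adj)
    for i, s in enumerate(nodes):
        for t in nodes[i + 1:]:
            path = bfs_path(adj, s, t)
            for u, v in zip(path, path[1:]):
                counts[frozenset((u, v))] = counts.get(frozenset((u, v)), 0) + 1
    u, v = max(counts, key=counts.get)           # the "bridge" edge
    adj = {n: [m for m in nbrs if {n, m} != {u, v}] for n, nbrs in adj.items()}
    # connected components of the cut graph = communities
    seen, comps = set(), []
    for n in nodes:
        if n not in seen:
            comp, q = set(), deque([n])
            while q:
                x = q.popleft()
                if x in comp:
                    continue
                comp.add(x)
                q.extend(adj[x])
            seen |= comp
            comps.append(comp)
    return comps

# two triangles joined by a single bridge edge (2, 3)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(split_once(adj))  # the two triangles emerge as communities
```

Real implementations (e.g., in networkx) use Brandes' algorithm for exact edge betweenness and iterate the removal; this single step is only meant to show the core idea.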

topic-detection algorithms:

Zeng, J., & Zhang, S. (2009). Incorporating topic transition in topic detection and tracking algorithms. Expert Systems With Applications, 36(1), 227-232. doi:10.1016/j.eswa.2007.09.013

topic detection and tracking (TDT) algorithms based on topic models, such as LDA, pLSI, etc.
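To make the topic-model mechanics behind such TDT systems concrete, here is a toy collapsed Gibbs sampler for LDA, stdlib-only and heavily simplified. The four-document, two-topic corpus is hypothetical; real TDT pipelines use optimized libraries (e.g., gensim or MALLET) and far larger corpora.

```python
import random
from collections import defaultdict

# toy corpus: two sports docs, two finance docs
docs = [
    "goal match team player league goal team".split(),
    "league player team match win goal".split(),
    "stock market bank trade price market".split(),
    "bank price trade market stock profit".split(),
]
K, alpha, beta = 2, 0.1, 0.01           # topics, doc-topic and topic-word priors
vocab = sorted({w for d in docs for w in d})
V = len(vocab)
rng = random.Random(0)

# z[d][i] = topic of word i in doc d; count tables drive the Gibbs updates
z = [[rng.randrange(K) for _ in d] for d in docs]
ndk = [[0] * K for _ in docs]            # doc-topic counts
nkw = [defaultdict(int) for _ in range(K)]  # topic-word counts
nk = [0] * K                             # words per topic
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        t = z[d][i]
        ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1

for _ in range(200):                     # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]                  # remove current assignment
            ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
            # resample topic proportional to (doc-topic) * (topic-word) weight
            weights = [(ndk[d][k] + alpha) * (nkw[k][w] + beta) / (nk[k] + V * beta)
                       for k in range(K)]
            t = rng.choices(range(K), weights)[0]
            z[d][i] = t                  # restore counts
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1

for d in range(len(docs)):
    print(f"doc {d} topic mix: {ndk[d]}")
```

After enough sweeps, each document's topic mix should concentrate on one of the two topics, which is the per-document signal a TDT system would track over time.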

Zhou, E., Zhong, N., & Li, Y. (2014). Extracting news blog hot topics based on the W2T methodology. World Wide Web, 17(3), 377-404. doi:10.1007/s11280-013-0207-7

The W2T (Wisdom Web of Things) methodology considers information organization and management from the perspective of Web services, which contributes to a deeper understanding of online phenomena such as users’ behaviors and comments on e-commerce platforms and online social networks.

ethics of algorithms

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 2053951716679679.


Malyarov, N. (2016, October 18). Journalism in the age of algorithms, platforms and newsfeeds. Retrieved September 19, 2017, from

more on algorithms in this IMS blog

see also

Education and Ethics

4 Ways AI Education and Ethics Will Disrupt Society in 2019

By Tara Chklovski     Jan 28, 2019

In 2018 we witnessed a clash of titans as government and tech companies collided on privacy issues around collecting, culling and using personal data. From GDPR to Facebook scandals, many tech CEOs were defending big data, its use, and how they’re safeguarding the public.

Meanwhile, the public was amazed at technological advances like Boston Dynamics’ Atlas robot doing parkour, while simultaneously being outraged at the thought of our data no longer being ours and Alexa listening in on all our conversations.

1. Companies will face increased pressure about the data AI-embedded services use.

2. Public concern will lead to AI regulations. But we must understand this tech too.

In 2018, the National Science Foundation invested $100 million in AI research, with special support in 2019 for developing principles for safe, robust and trustworthy AI; addressing issues of bias, fairness and transparency of algorithmic intelligence; developing deeper understanding of human-AI interaction and user education; and developing insights about the influences of AI on people and society.

This investment was dwarfed by DARPA—an agency of the Department of Defense—and its multi-year investment of more than $2 billion in new and existing programs under the “AI Next” campaign. A key area of the campaign includes pioneering the next generation of AI algorithms and applications, such as “explainability” and common sense reasoning.

Federally funded initiatives, as well as corporate efforts (such as Google’s “What If” tool) will lead to the rise of explainable AI and interpretable AI, whereby the AI actually explains the logic behind its decision making to humans. But the next step from there would be for the AI regulators and policymakers themselves to learn about how these technologies actually work. This is an overlooked step right now that Richard Danzig, former Secretary of the U.S. Navy advises us to consider, as we create “humans-in-the-loop” systems, which require people to sign off on important AI decisions.

3. More companies will make AI a strategic initiative in corporate social responsibility.

Google invested $25 million in AI for Good, and Microsoft added AI for Humanitarian Action to its prior commitment. While these are positive steps, the tech industry continues to have a diversity problem.

4. Funding for AI literacy and public education will skyrocket.

Ryan Calo from the University of Washington explains that it matters how we talk about technologies that we don’t fully understand.




Tackling Data in Libraries

Tackling Data in Libraries: Opportunities and Challenges in Serving User Communities

Submit proposals at

Deadline is Friday, March 1, 2019

Submissions are invited for the IOLUG Spring 2019 Conference, to be held May 10th in Indianapolis, IN. Submissions are welcomed from all types of libraries and on topics related to the theme of data in libraries.

Libraries and librarians work with data every day, with a variety of applications – circulation, gate counts, reference questions, and so on. The mass collection of user data has made headlines many times in the past few years. Analytics and privacy have, understandably, become important issues both globally and locally. In addition to being aware of the data ecosystem in which we work, libraries can play a pivotal role in educating user communities about data and all of its implications, both favorable and unfavorable.

The Conference Planning Committee is seeking proposals on topics related to data in libraries, including but not limited to:

  • Using tools/resources to find and leverage data to solve problems and expand knowledge,
  • Data policies and procedures,
  • Harvesting, organizing, and presenting data,
  • Data-driven decision making,
  • Learning analytics,
  • Metadata/linked data,
  • Data in collection development,
  • Using data to measure outcomes, not just uses,
  • Using data to better reach and serve your communities,
  • Libraries as data collectors,
  • Big data in libraries,
  • Privacy,
  • Social justice/Community Engagement,
  • Algorithms,
  • Storytelling,
  • Libraries as positive stewards of user data.

Facial Recognition issues

Chinese Facial Recognition Will Take over the World in 2019

Michael K. Spencer Jan 14, 2018
The best facial recognition startups are in China, by a long shot. Because their software is less biased, it is being adopted globally; one example, according to the South China Morning Post, is its reported use in 2019 by the New York Police Department.
A mass-surveillance state of real-time data harvesting is coming. Facebook already rates and profiles us.

The Tech Wars come down to an AI-War

Whether the NYC police angle is true or not (it’s being hotly disputed), Facebook and Google are thinking along lines that follow the whims of the Chinese Government.

SenseTime and Megvii won’t just be worth $5 billion; they will be worth many times that in the future, because facial-recognition data harvesting of everything is the future of consumerism and capitalism, and in some places (think Asia) the central tenet of social order.

China has already ‘won’ the trade war because it is winning the race to innovation. America doesn’t regulate Amazon, Microsoft, Google or Facebook properly, and that stunts both innovation and ethics in technology; the West is now forced to copy China just to keep up.

more about facial recognition in schools

music literacy

The Tragic Decline of Music Literacy (and Quality)

Jon Henschen | August 16, 2018

Both the jazz and classical art forms require not only music literacy but also musicians at the top of their game in technical proficiency, tonal quality and, in the case of the jazz idiom, creativity. Jazz masters like John Coltrane would practice six to nine hours a day, often cutting practice short only because his inner lower lip would be bleeding from the friction of his mouthpiece against his gums and teeth. His ability to compose and create new styles and directions for jazz was legendary. With few exceptions, such as Wes Montgomery or Chet Baker, if you couldn’t read music, you couldn’t play jazz.


can you read music?

Besides the decline of music literacy and participation, there has also been a decline in the quality of music itself, documented scientifically by Joan Serra, a postdoctoral scholar at the Artificial Intelligence Research Institute of the Spanish National Research Council in Barcelona. Serra and his colleagues looked at 500,000 pieces of music recorded between 1955 and 2010, running the songs through a complex set of algorithms examining three aspects of those songs:

1. Timbre: sound color, texture and tone quality

2. Pitch: harmonic content of the piece, including its chords, melody, and tonal arrangements

3. Loudness: volume variance adding richness and depth
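The simplest of the three measures to sketch is loudness variance. The stdlib-only example below frames a signal, takes the RMS loudness of each frame, and computes the variance across frames; the synthetic swelling tone is a stand-in for real audio, and Serra et al.'s actual pipeline is far more elaborate than this.

```python
import math

def frame_rms(signal, frame_len):
    """RMS loudness of consecutive, non-overlapping frames."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    return [math.sqrt(sum(x * x for x in f) / len(f)) for f in frames]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# 1 second of a 440 Hz tone whose amplitude ramps from 0.1 to 1.0
sr = 8000
signal = [(0.1 + 0.9 * t / sr) * math.sin(2 * math.pi * 440 * t / sr)
          for t in range(sr)]

loud = frame_rms(signal, 400)            # 50 ms frames
print(f"loudness variance: {variance(loud):.4f}")
```

A flat, heavily compressed track would score near zero here; a dynamically rich one scores higher, which is the intuition behind the study's loudness finding.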

In an interview, Billy Joel was asked what has made him a standout. He responded that his ability to read and compose music made him unique in the music industry, which, as he explained, is troubling for the industry when being musically literate makes you stand out. An astonishing amount of today’s popular music is written by two people: Lukasz Gottwald of the United States and Max Martin from Sweden, who are both responsible for dozens of songs in the top 100 charts. You can credit Max and Dr. Luke for most of the hits of these stars:

Katy Perry, Britney Spears, Kelly Clarkson, Taylor Swift, Jessie J., KE$HA, Miley Cyrus, Avril Lavigne, Maroon 5, Taio Cruz, Ellie Goulding, NSYNC, Backstreet Boys, Ariana Grande, Justin Timberlake, Nicki Minaj, Celine Dion, Bon Jovi, Usher, Adam Lambert, Justin Bieber, Domino, Pink, Pitbull, One Direction, Flo Rida, Paris Hilton, The Veronicas, R. Kelly, Zebrahead

more on metaliteracies in this IMS blog

shaping the future of AI

Shaping the Future of A.I.

Daniel Burrus

Way back in 1983, I identified A.I. as one of 20 exponential technologies that would increasingly drive economic growth for decades to come.

Artificial intelligence applies to computing systems designed to perform tasks usually reserved for human intelligence using logic, if-then rules, decision trees and machine learning to recognize patterns from vast amounts of data, provide insights, predict outcomes and make complex decisions. A.I. can be applied to pattern recognition, object classification, language translation, data translation, logistical modeling and predictive modeling, to name a few. It’s important to understand that all A.I. relies on vast amounts of quality data and advanced analytics technology. The quality of the data used will determine the reliability of the A.I. output.

Machine learning is a subset of A.I. that utilizes advanced statistical techniques to enable computing systems to improve at tasks with experience over time. Chatbots like Amazon’s Alexa, Apple’s Siri, or any of the others from companies like Google and Microsoft all get better every year thanks to all of the use we give them and the machine learning that takes place in the background.

Deep learning is a subset of machine learning that uses advanced algorithms to enable an A.I. system to train itself to perform tasks by exposing multi-layered neural networks to vast amounts of data, then using what has been learned to recognize new patterns contained in the data. Learning can be supervised, unsupervised and/or reinforcement learning, the approach Google’s DeepMind used to learn how to beat humans at the complex game Go. Reinforcement learning will drive some of the biggest breakthroughs.

Autonomous computing uses advanced A.I. tools such as deep learning to enable systems to be self-governing and capable of acting according to situational data without human command. A.I. autonomy includes perception, high-speed analytics, machine-to-machine communications and movement. For example, autonomous vehicles use all of these in real time to successfully pilot a vehicle without a human driver.

Augmented thinking: Over the next five years and beyond, A.I. will become increasingly embedded at the chip level into objects, processes, products and services, and humans will augment their personal problem-solving and decision-making abilities with the insights A.I. provides to get to a better answer faster.

Technology is not good or evil, it is how we as humans apply it. Since we can’t stop the increasing power of A.I., I want us to direct its future, putting it to the best possible use for humans. 

more on AI in this IMS blog

more on deep learning in this IMS blog

Does AI favor tyranny

Why Technology Favors Tyranny

Artificial intelligence could erase many practical advantages of democracy, and erode the ideals of liberty and equality. It will further concentrate power among a small elite if we don’t take steps to stop it.


Ordinary people may not understand artificial intelligence and biotechnology in any detail, but they can sense that the future is passing them by. In 1938 the common man’s condition in the Soviet Union, Germany, or the United States may have been grim, but he was constantly told that he was the most important thing in the world, and that he was the future (provided, of course, that he was an “ordinary man,” rather than, say, a Jew or a woman).

In 2018 the common person feels increasingly irrelevant. Lots of mysterious terms are bandied about excitedly in TED Talks, at government think tanks, and at high-tech conferences—globalization, blockchain, genetic engineering, AI, machine learning—and common people, both men and women, may well suspect that none of these terms is about them.

Fears of machines pushing people out of the job market are, of course, nothing new, and in the past such fears proved to be unfounded. But artificial intelligence is different from the old machines. In the past, machines competed with humans mainly in manual skills. Now they are beginning to compete with us in cognitive skills.

Israel is a leader in the field of surveillance technology, and has created in the occupied West Bank a working prototype for a total-surveillance regime. Already today whenever Palestinians make a phone call, post something on Facebook, or travel from one city to another, they are likely to be monitored by Israeli microphones, cameras, drones, or spy software. Algorithms analyze the gathered data, helping the Israeli security forces pinpoint and neutralize what they consider to be potential threats.

The conflict between democracy and dictatorship is actually a conflict between two different data-processing systems. AI may swing the advantage toward the latter.

As we rely more on Google for answers, our ability to locate information independently diminishes. Already today, “truth” is defined by the top results of a Google search. This process has likewise affected our physical abilities, such as navigating space.

So what should we do?

For starters, we need to place a much higher priority on understanding how the human mind works—particularly how our own wisdom and compassion can be cultivated.

more on SCSU student philosophy club in this IMS blog

deep learning revolution

Sejnowski, T. J. (2018). The Deep Learning Revolution. Cambridge, MA: The MIT Press.

How deep learning―from Google Translate to driverless cars to personal cognitive assistants―is changing our lives and transforming every sector of the economy.

The deep learning revolution has brought us driverless cars, the greatly improved Google Translate, fluent conversations with Siri and Alexa, and enormous profits from automated trading on the New York Stock Exchange. Deep learning networks can play poker better than professional poker players and defeat a world champion at Go. In this book, Terry Sejnowski explains how deep learning went from being an arcane academic field to a disruptive technology in the information economy.

Sejnowski played an important role in the founding of deep learning, as one of a small group of researchers in the 1980s who challenged the prevailing logic-and-symbol based version of AI. The new version of AI Sejnowski and others developed, which became deep learning, is fueled instead by data. Deep networks learn from data in the same way that babies experience the world, starting with fresh eyes and gradually acquiring the skills needed to navigate novel environments. Learning algorithms extract information from raw data; information can be used to create knowledge; knowledge underlies understanding; understanding leads to wisdom. Someday a driverless car will know the road better than you do and drive with more skill; a deep learning network will diagnose your illness; a personal cognitive assistant will augment your puny human brain. It took nature many millions of years to evolve human intelligence; AI is on a trajectory measured in decades. Sejnowski prepares us for a deep learning future.

A pioneering scientist explains ‘deep learning’

Artificial intelligence meets human intelligence

neural networks

Buzzwords like “deep learning” and “neural networks” are everywhere, but so much of the popular understanding is misguided, says Terrence Sejnowski, a computational neuroscientist at the Salk Institute for Biological Studies.

Sejnowski, a pioneer in the study of learning algorithms, is the author of The Deep Learning Revolution (out next week from MIT Press). He argues that the hype about killer AI or robots making us obsolete ignores exciting possibilities happening in the fields of computer science and neuroscience, and what can happen when artificial intelligence meets human intelligence.

Machine learning is a very large field and goes way back. Originally, people were calling it “pattern recognition,” but the algorithms became much broader and much more sophisticated mathematically. Within machine learning are neural networks inspired by the brain, and then deep learning. Deep learning algorithms have a particular architecture with many layers that flow through the network. So basically, deep learning is one part of machine learning and machine learning is one part of AI.

In December 2012, at the NIPS meeting, which is the biggest AI conference, [computer scientist] Geoff Hinton and two of his graduate students showed you could take a very large dataset called ImageNet, with 10,000 categories and 10 million images, and reduce the classification error by 20 percent using deep learning. Traditionally on that dataset, error decreases by less than 1 percent in one year. In one year, 20 years of research was bypassed. That really opened the floodgates.

The inspiration for deep learning really comes from neuroscience.

AlphaGo, the program that beat the Go champion, included not just a model of the cortex, but also a model of a part of the brain called the basal ganglia, which is important for making a sequence of decisions to meet a goal. There’s an algorithm there called temporal differences, developed back in the ‘80s by Richard Sutton, that, when coupled with deep learning, is capable of very sophisticated plays that no human has ever seen before.
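The temporal-difference idea is simple enough to show in a few lines. Below is a hedged, tabular TD(0) sketch on the classic five-state random walk (a standard textbook example, not AlphaGo's actual setup): start in the middle, step left or right at random, reward 1 for exiting right and 0 for exiting left. The value estimates converge toward the true win probabilities 1/6, 2/6, ..., 5/6. AlphaGo couples this kind of bootstrapped value update with deep neural networks instead of a table.

```python
import random

rng = random.Random(0)
N = 5                        # non-terminal states 0..4; stepping past either end terminates
V = [0.5] * N                # value estimates, initialized to 0.5
alpha, episodes = 0.1, 5000

for _ in range(episodes):
    s = N // 2               # start in the middle
    while True:
        s2 = s + rng.choice((-1, 1))
        if s2 < 0:                                   # exit left: reward 0
            V[s] += alpha * (0.0 - V[s]); break
        if s2 >= N:                                  # exit right: reward 1
            V[s] += alpha * (1.0 - V[s]); break
        V[s] += alpha * (V[s2] - V[s])               # TD(0) bootstrap from successor
        s = s2

print([round(v, 2) for v in V])   # approaches [1/6, 2/6, 3/6, 4/6, 5/6]
```

The key line is the bootstrap update: each state's value moves toward the value of the state that follows it, so information about the terminal rewards propagates backward without waiting for episodes to finish.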

there’s a convergence occurring between AI and human intelligence. As we learn more and more about how the brain works, that’s going to reflect back in AI. But at the same time, they’re actually creating a whole theory of learning that can be applied to understanding the brain and allowing us to analyze the thousands of neurons and how their activities are coming out. So there’s this feedback loop between neuroscience and AI


AI and ethics

Live Facebook discussion at SCSU VizLab on ethics and technology:

Join our discussion on #technology and #ethics. share your opinions, suggestions, ideas

Posted by InforMedia Services on Thursday, November 1, 2018

Heard on Marketplace this morning (Oct. 22, 2018): ethics of artificial intelligence with John Havens of the Institute of Electrical and Electronics Engineers, which has developed a new ethics certification process for AI:

Ethics and AI

***** The student club, the Philosophical Society, has now been recognized by SCSU as a student organization ***

Could it be the case that a random decision is still better than a predetermined one designed to minimize harm?

Similar ethical considerations are also raised:

in this sitcom (full movie)

This TED talk:



IoT (Internet of Things), Industry 4.0, Big Data, BlockChain,

IoT (Internet of Things), Industry 4.0, Big Data, BlockChain, Privacy, Security, Surveillance

peer-reviewed literature;

Keyword search: ethic* + Internet of Things = 31

Baldini, G., Botterman, M., Neisse, R., & Tallacchini, M. (2018). Ethical design in the Internet of Things. Science & Engineering Ethics, 24(3), 905–925.

Berman, F., & Cerf, V. G. (2017). Social and ethical behavior in the Internet of Things. Communications of the ACM, 60(2), 6–7.

Murdock, G. (2018). Media materialities: For a moral economy of machines. Journal of Communication, 68(2), 359–368.

Carrier, J. G. (2018). Moral economy: What’s in a name. Anthropological Theory, 18(1), 18–35.

Kernaghan, K. (2014). Digital dilemmas: Values, ethics and information technology. Canadian Public Administration, 57(2), 295–317.

Koucheryavy, Y., Kirichek, R., Glushakov, R., & Pirmagomedov, R. (2017). Quo vadis, humanity? Ethics on the last mile toward cybernetic organism. Russian Journal of Communication, 9(3), 287–293.

Keyword search: ethic+ + autonomous vehicles = 46

Cerf, V. G. (2017). A brittle and fragile future. Communications of the ACM, 60(7), 7.

Fleetwood, J. (2017). Public health, ethics, and autonomous vehicles. American Journal of Public Health, 107(4), 532–537.

Harris, J. (2018). Who owns my autonomous vehicle? Ethics and responsibility in artificial and human intelligence. Cambridge Quarterly of Healthcare Ethics, 27(4), 599–609.

Keeling, G. (2018). Legal necessity, Pareto efficiency & justified killing in autonomous vehicle collisions. Ethical Theory & Moral Practice, 21(2), 413–427.

Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for crashes of autonomous vehicles: An ethical analysis. Science & Engineering Ethics, 21(3), 619–630.

Getha-Taylor, H. (2017). The problem with automated ethics. Public Integrity, 19(4), 299–300.

Keyword search: ethic* + artificial intelligence = 349

Etzioni, A., & Etzioni, O. (2017). Incorporating ethics into artificial intelligence. Journal of Ethics, 21(4), 403–418.

Köse, U. (2018). Are we safe enough in the future of artificial intelligence? A discussion on machine ethics and artificial intelligence safety. BRAIN: Broad Research in Artificial Intelligence & Neuroscience, 9(2), 184–197. Retrieved from


2018 CTS Transportation Research Conference

Keynote presentations will explore the future of driving and the evolution and potential of automated vehicle technologies.


more on AI in this IMS blog

AI and autonomous cars as ALA discussion topic

and privacy concerns

the call of the German scientists on ethics and AI

AI in the race for world dominance

the intellectual dark web

Nuance: A Love Story. My affair with the intellectual dark web

Meghan Daum Aug 24

the standard set of middle-class Democratic Party values: Public safety nets were a force for good, corporate greed was a real threat, civil and reproductive rights were paramount.

I remember how good it felt to stand with my friends in our matching college sweatshirts shouting “never again!” and “my body, my choice!”

(hey, why shouldn’t Sarah Palin call herself a feminist?) brought angry letters from liberals as well as conservatives.

We would all go to the mat for women’s rights, gay rights, or pretty much any rights other than gun rights. We lived, for the most part, in big cities in blue states.

When Barack Obama came into the picture, we loved him with the delirium of crushed-out teenagers, perhaps less for his policies than for being the kind of person who also listens to NPR. We loved Hillary Clinton with the fraught resignation of a daughter’s love for her mother. We loved her even if we didn’t like her. We were liberals, after all. We were family.

Words like “mansplaining” and “gaslighting” were suddenly in heavy rotation, often invoked with such elasticity as to render them nearly meaningless. Similarly, the term “woke,” which originated in black activism, was now being used to draw a bright line between those on the right side of things and those on the wrong side of things.

From the Black Guys on Bloggingheads, YouTube’s algorithms bounced me along a path of similarly unapologetic thought criminals: the neuroscientist Sam Harris and his Waking Up podcast; Christina Hoff Sommers, aka “The Factual Feminist”; the comedian turned YouTube interviewer Dave Rubin; the counter-extremist activist Maajid Nawaz; and a cantankerous and then little-known Canadian psychology professor named Jordan Peterson, who railed against authoritarianism on both the left and right but reserved special disdain for postmodernism, which he believed was eroding rational thought on campuses and elsewhere.

the sudden national obsession with female endangerment on college campuses struck me much the same way it had in the early 1990s: well-intended but ultimately infantilizing to women and essentially unfeminist.

Weinstein and his wife, the evolutionary biologist Heather Heying, who also taught at Evergreen, would eventually leave the school and go on to become core members of the “intellectual dark web.”

Weinstein talked about intellectual “feebleness” in academia and in the media, about the demise of nuance, about still considering himself a progressive despite his feeling that the far left was no better at offering practical solutions to the world’s problems than the far right.

an American Enterprise Institute video of Sommers, the Factual Feminist, in conversation with the scholar and social critic Camille Paglia — “My generation fought for the freedom for women to risk getting raped!” I watched yet another video in which Paglia sat by herself and expounded volcanically about the patriarchal history of art (she was all for it).

the brothers sat down together for a two-hour, 47-minute interview on the Rubin Report,

James Baldwin’s line, “I love America more than any other country in the world, and, exactly for this reason, I insist on the right to criticize her perpetually

Jordan Peterson’s Twelve Rules for Life: An Antidote to Chaos is a sort of New and Improved Testament for the purpose-lacking young person (often but not always male) for whom tough-love directives like “clean up your room!” go down a lot easier when dispensed with a Jungian, evo-psych panache.

Quillette, a new online magazine that billed itself as “a platform for free thought”

the more honest we are about what we think, the more we’re alone with our thoughts. Just as you can’t fight Trumpism with tribalism, you can’t fight tribalism with a tribe.

