Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement on aims, ethics, safety or best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again”, they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know with any degree of certainty what its rules and parameters are. At that point we can’t be certain how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and resistant to real-time monitoring as they can be – are in principle predictable and interrogable, these are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call them “frankenalgos” – though Mary Shelley couldn’t have made this up.
Twenty years ago, George Dyson anticipated much of what is happening today in his classic book Darwin Among the Machines. The problem, he tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable – and that what is predictable can be controlled. Both assumptions turn out to be wrong.

“It’s proceeding on its own, in little bits and pieces,” he says. “What I was obsessed with 20 years ago that has completely taken over the world today are multicellular, metazoan digital organisms, the same way we see in biology, where you have all these pieces of code running on people’s iPhones, and collectively it acts like one multicellular organism.

“There’s this old law called Ashby’s law that says a control system has to be as complex as the system it’s controlling, and we’re running into that at full speed now, with this huge push to build self-driving cars where the software has to have a complete model of everything, and almost by definition we’re not going to understand it. Because any model that we understand is gonna do the thing like run into a fire truck ’cause we forgot to put in the fire truck.”
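The “old law” Dyson cites is Ashby’s law of requisite variety: a regulator can only hold a system steady if it can muster at least as many distinct responses as the environment has distinct disturbances. A toy simulation (my illustration, not Dyson’s or the article’s; the numbers are arbitrary) makes the point:

```python
# Toy illustration of Ashby's law of requisite variety (hypothetical
# example, not from the article). A regulator can cancel a disturbance
# only if it has a matching response; with fewer responses than
# disturbances, some disturbances must slip through.

import random

def failure_rate(num_disturbances, num_responses, trials=10_000):
    failures = 0
    for _ in range(trials):
        d = random.randrange(num_disturbances)   # environment acts
        # best possible regulator: match the disturbance if it can
        r = d if d < num_responses else random.randrange(num_responses)
        if r != d:                               # unmatched -> system perturbed
            failures += 1
    return failures / trials

print(failure_rate(10, 10))  # ~0.0: requisite variety, full control
print(failure_rate(10, 4))   # ~0.6: too little variety, 6 in 10 get through
```

Even when the regulator plays perfectly, six in ten disturbances get through once it has only four responses to the environment’s ten: no cleverness compensates for missing variety, which is Dyson’s point about self-driving software.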
Walsh believes this makes it more, not less, important that the public learn about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect. When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting: “I would suggest the problem is that algorithm now means any large, complex decision-making software system and the larger environment in which it is embedded, which makes them even more unpredictable.” A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in tech, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs. Where there are choices to be made, that’s where ethics comes in.
Our existing system of tort law, which requires proof of intention or negligence, will need to be rethought. A dog is not held legally responsible for biting you; its owner might be, but only if the dog’s action is thought foreseeable.
Model-based programming, in which machines do most of the coding work and are able to test as they go, is one proposed way forward.
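One can sketch the flavour of this in a few lines (a toy example of my own; the article names the approach but not an implementation). The human writes a declarative model of what counts as correct, the implementation – standing in here for machine-generated code – is checked against that model on every call, and randomly generated inputs provide the “testing as it goes”:

```python
# Minimal sketch of the model-based idea (hypothetical example). The human
# supplies the model: a declarative statement of what a correct result is.

import random

def satisfies_model(xs, ys):
    """Model for sorting: output is ordered and a permutation of the input."""
    ordered = all(a <= b for a, b in zip(ys, ys[1:]))
    permutation = sorted(xs) == sorted(ys)
    return ordered and permutation

def machine_sort(xs):
    """Stand-in for machine-generated code; any implementation is
    acceptable so long as it satisfies the model."""
    return sorted(xs)

def checked_sort(xs):
    """'Test as you go': every call is verified against the model."""
    ys = machine_sort(xs)
    assert satisfies_model(xs, ys), "model violated"
    return ys

if __name__ == "__main__":
    for _ in range(1_000):  # machine-generated random inputs
        data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        checked_sort(data)
    print("implementation conforms to the model on 1,000 random inputs")
```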
As we wait for a technological answer to the problem of soaring algorithmic entanglement, there are precautions we can take. Paul Wilmott, a British expert in quantitative analysis and vocal critic of high-frequency trading on the stock market, wryly suggests “learning to shoot, make jam and knit”.
The venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine’s Hippocratic oath, to instruct computing professionals to do no harm and consider the wider impacts of their work.
My note: it is NOT about creating masses of programmers and driving salaries down, as the author claims; it is about fostering a technology-literate generation. A doctor who knows how to code will be a better doctor in the era of IoT; a philosopher who knows how to code will be a better philosopher in the era of digital humanities.
This course will introduce students to text encoding according to the Text Encoding Initiative (TEI) Guidelines. Why should you care about text encoding or the TEI Guidelines? The creation of digital scholarly texts is a core part of the digital humanities, and many digital humanities grants and publications require encoding texts in accordance with the TEI Guidelines. Students in this course will learn about the use cases for text encoding and get a basic introduction to the principles of scholarly editing before moving on to learning some XML basics and creating a small-scale TEI project using the XML editor oXygen. We will not cover processing TEI (beyond the very basics), and students interested in learning XSLT and/or XQuery should turn to the LJA courses offered on those subjects. This course is intended as a follow-up to the Introduction to Digital Humanities for Librarians course, but there are no prerequisites, and the course is open to all interested.
Objectives:
– A basic understanding of digital scholarly editing as an academic activity.
– Knowledge of standard TEI elements for encoding poetry and prose (a small sketch follows this list).
– Some engagement with more complex encoding practices, such as working with manuscripts.
– An understanding of how librarians have participated in text encoding.
– Deeper engagement with digital humanities practices.
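To give a flavour of the second objective, here is a small sketch (my own toy example, not course material) that builds a minimal TEI encoding of two verse lines using Python’s standard library. In the course itself you would author such XML by hand in oXygen; this fragment illustrates the shape of TEI verse markup rather than a schema-valid document:

```python
# Sketch of TEI verse encoding built with Python's standard library
# (toy example; <teiHeader>, <lg> and <l> are standard TEI elements,
# but a real file needs fuller header metadata to validate).

import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"
ET.register_namespace("", TEI_NS)

def el(parent, tag, text=None):
    node = ET.SubElement(parent, f"{{{TEI_NS}}}{tag}")
    if text:
        node.text = text
    return node

tei = ET.Element(f"{{{TEI_NS}}}TEI")

# Every TEI document carries a header with bibliographic metadata.
title_stmt = el(el(el(tei, "teiHeader"), "fileDesc"), "titleStmt")
el(title_stmt, "title", "Blake, 'The Tyger' (excerpt)")

# Verse is encoded as line groups (<lg>) containing metrical lines (<l>).
div = el(el(el(tei, "text"), "body"), "div")
stanza = el(div, "lg")
stanza.set("type", "stanza")
el(stanza, "l", "Tyger Tyger, burning bright,")
el(stanza, "l", "In the forests of the night;")

ET.indent(tei)  # pretty-print; requires Python 3.9+
print(ET.tostring(tei, encoding="unicode"))
```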
John Russell is the Associate Director of the Center for Humanities and Information at Pennsylvania State University. He has been actively involved in digital humanities projects, primarily related to text encoding, and has taught courses and workshops on digital humanities methods, including “Introduction to Digital Humanities for Librarians.”
In Kentucky, mining veteran Rusty Justice decided that code could replace coal. He cofounded Bit Source, a code shop that builds its workforce by retraining coal miners as programmers. Enthusiasm is sky high: Justice got 950 applications for his first 11 positions. Miners, it turns out, are accustomed to deep focus, team play, and working with complex engineering tech. “Coal miners are really technology workers who get dirty,” Justice says.
The whole problem is rooted in the abuse of the key term “language”. Applied to human languages, the term refers to “the system of words or signs that people use to express thoughts and feelings to each other” (Merriam-Webster), while applied to programming languages it means “a formal system of signs and symbols including rules for the formation and transformation of admissible expressions” (Merriam-Webster). To equate foreign languages with programming languages reduces learning a foreign language to the mere acquisition of a set of tokens or words that are semantically and syntactically glued together. It fundamentally ignores the societal, cultural and historical aspects of human languages.