Although brain-computer interfaces (BCIs) are at the heart of neurotech, the field is more broadly defined as technology able to collect, interpret, infer or modify information generated by any part of the nervous system.
Neurotech comes in different forms: some invasive, some not. Invasive brain-computer interfaces involve placing microelectrodes or other neurotech materials directly onto the brain, or even embedding them into the neural tissue. The idea is to sense or modulate neural activity directly.
Noninvasive neurotech is also used for pain management. Together with Boston Scientific, IBM researchers are applying machine learning, the internet of things, and neurotech to improve chronic pain therapy.
As a new and emerging technology, neurotech challenges corporations, researchers and individuals to reaffirm our commitment to responsible innovation. It is essential to establish guardrails so that these technologies lead to beneficial long-term outcomes at the company, national and international levels. We need to ensure that researchers and manufacturers of neurotech, as well as policymakers and consumers, approach it responsibly and ethically.
Last year, Australia’s Chief Scientist Alan Finkel suggested that we in Australia should become “human custodians”. This would mean being leaders in technological development, ethics, and human rights.
A recent report from the Australian Council of Learned Academies (ACOLA) brought together experts from scientific and technical fields as well as the humanities, arts and social sciences to examine key issues arising from artificial intelligence.
A similar vision drives Stanford University’s Institute for Human-Centered Artificial Intelligence. The institute brings together researchers from the humanities, education, law, medicine, business and STEM to study and develop “human-centred” AI technologies.
Meanwhile, across the Atlantic, the Future of Humanity Institute at the University of Oxford similarly investigates “big-picture questions” to ensure “a long and flourishing future for humanity”.
The IT sector is also wrestling with the ethical issues raised by rapid technological advancement. Microsoft’s Brad Smith and Harry Shum wrote in their 2018 book The Future Computed that one of their “most important conclusions” was that the humanities and social sciences have a crucial role to play in confronting the challenges raised by AI.
Without training in ethics, human rights and social justice, the people who develop the technologies that will shape our future could make poor decisions.
This is the terrain of digital ethics, which I define simply as “doing the right thing at the intersection of technology innovation and accepted social values.”
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, written by Cathy O’Neil and published in 2016, continues to be relevant and illuminating. O’Neil’s book revolves around her insight that “algorithms are opinions embedded in code,” in distinct contrast to the belief that algorithms are based on, and produce, indisputable facts.
Also worth reading are Safiya Umoja Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism and Shoshana Zuboff’s The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.
In 2018 we witnessed a clash of titans as governments and tech companies collided over privacy issues around collecting, culling and using personal data. From GDPR to the Facebook scandals, many tech CEOs found themselves defending big data, how it is used, and how they safeguard the public.
1. Companies will face increased pressure about the data AI-embedded services use.
2. Public concern will lead to AI regulations. But we must understand this tech too.
In 2018, the National Science Foundation invested $100 million in AI research, with special support in 2019 for developing principles for safe, robust and trustworthy AI; addressing issues of bias, fairness and transparency of algorithmic intelligence; developing deeper understanding of human-AI interaction and user education; and developing insights about the influences of AI on people and society.
This investment was dwarfed by DARPA, an agency of the US Department of Defense, and its multi-year investment of more than $2 billion in new and existing programs under the “AI Next” campaign. A key area of the campaign is pioneering the next generation of AI algorithms and applications, such as “explainability” and common-sense reasoning.
Federally funded initiatives, as well as corporate efforts (such as Google’s “What If” tool), will lead to the rise of explainable and interpretable AI, whereby the AI explains the logic behind its decision-making to humans. The next step from there would be for AI regulators and policymakers themselves to learn how these technologies actually work. That step is overlooked right now, and Richard Danzig, former Secretary of the U.S. Navy, advises us to consider it as we create “humans-in-the-loop” systems, which require people to sign off on important AI decisions.
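To make the “humans-in-the-loop” idea concrete, here is a minimal sketch in Python of a sign-off gate that auto-applies only low-impact, high-confidence model decisions and routes everything else to a human reviewer. All names (Decision, ReviewQueue, decide_or_escalate) and thresholds are hypothetical illustrations, not drawn from any system described above.

```python
# Minimal sketch of a human-in-the-loop gate: decisions the model is
# uncertain about, or that carry high impact, are routed to a person
# for sign-off instead of being executed automatically.
# All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Decision:
    subject: str        # e.g. a loan application ID
    action: str         # what the model proposes to do
    confidence: float   # model confidence in the range 0..1
    impact: str         # "low" or "high"


@dataclass
class ReviewQueue:
    pending: List[Decision] = field(default_factory=list)

    def escalate(self, decision: Decision) -> None:
        # Park the decision until a human reviewer approves or rejects it.
        self.pending.append(decision)


def decide_or_escalate(decision: Decision, queue: ReviewQueue,
                       min_confidence: float = 0.9) -> Tuple[str, Decision]:
    """Auto-apply only low-impact, high-confidence decisions."""
    if decision.impact == "high" or decision.confidence < min_confidence:
        queue.escalate(decision)
        return ("awaiting human sign-off", decision)
    return ("auto-applied", decision)


if __name__ == "__main__":
    queue = ReviewQueue()
    proposals = [
        Decision("loan-001", "approve", confidence=0.97, impact="low"),
        Decision("loan-002", "deny", confidence=0.72, impact="high"),
    ]
    for p in proposals:
        status, d = decide_or_escalate(p, queue)
        print(f"{d.subject}: {d.action} -> {status}")
    print(f"{len(queue.pending)} decision(s) waiting for a human reviewer")
```

The design choice in the sketch is the important part: the automated path is the exception, gated by explicit impact and confidence criteria, and everything else defaults to human review.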
3. More companies will make AI a strategic initiative in corporate social responsibility.
“Formulating a product, you better know about ethics and understand legal frameworks.”
These days a growing number of people are concerned with bringing more talk of ethics into technology. One question is whether that will bring change to data-science curricula.
Following major data breaches and privacy scandals at tech companies like Facebook, universities including Stanford, the University of Texas and Harvard have all added ethics courses into computer science degree programs to address tech’s “ethical dark side,” the New York Times has reported.
As more colleges and universities consider incorporating humanities courses into technical degree programs, some are asking what kind of ethics should be taught.
Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again”, they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and inured to real time monitoring as they can be – are in principle predictable and interrogable, these ones are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call these “frankenalgos” – though Mary Shelley couldn’t have made this up.
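As a toy illustration of that shift (entirely hypothetical, in Python): a fixed rule’s behaviour can be read straight off its source code, while even the simplest “learned” rule depends on whatever data it happened to see, so inspecting the code no longer tells you exactly what it will do.

```python
# Illustrative contrast between a fixed rule and a "learned" rule.
# Nothing here models a real system; it only shows why learned behaviour
# cannot be read off the source code alone.

import random


def fixed_rule(x: float) -> bool:
    # The decision boundary is visible in the source: always 0.5.
    return x > 0.5


def learn_threshold(samples):
    # A deliberately trivial "learner": the boundary is the mean of the data.
    return sum(samples) / len(samples)


if __name__ == "__main__":
    # A different environment (different data) yields a different rule.
    observed = [random.random() for _ in range(1000)]
    threshold = learn_threshold(observed)
    learned_rule = lambda x: x > threshold

    print(f"fixed boundary: 0.5, learned boundary: {threshold:.3f}")
    # Whether the two rules agree near the boundary depends on the data,
    # not on anything we could have read from the code in advance.
    print("agree on x = 0.51?", fixed_rule(0.51) == learned_rule(0.51))
```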
Twenty years ago, George Dyson anticipated much of what is happening today in his classic book Darwin Among the Machines. The problem, he tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable – and that what is predictable can be controlled. Both assumptions turn out to be wrong.

“It’s proceeding on its own, in little bits and pieces,” he says. “What I was obsessed with 20 years ago that has completely taken over the world today are multicellular, metazoan digital organisms, the same way we see in biology, where you have all these pieces of code running on people’s iPhones, and collectively it acts like one multicellular organism.

“There’s this old law called Ashby’s law that says a control system has to be as complex as the system it’s controlling, and we’re running into that at full speed now, with this huge push to build self-driving cars where the software has to have a complete model of everything, and almost by definition we’re not going to understand it. Because any model that we understand is gonna do the thing like run into a fire truck ’cause we forgot to put in the fire truck.”
Walsh believes this makes it more, not less, important that the public learn about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect. When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting: “I would suggest the problem is that algorithm now means any large, complex decision making software system and the larger environment in which it is embedded, which makes them even more unpredictable.” A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in tech, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs. Where there are choices to be made, that’s where ethics comes in.
Our existing system of tort law, which requires proof of intention or negligence, will need to be rethought. A dog is not held legally responsible for biting you; its owner might be, but only if the dog’s action is thought foreseeable.
One proposed technological answer is model-based programming, in which machines do most of the coding work and are able to test as they go.
As we wait for a technological answer to the problem of soaring algorithmic entanglement, there are precautions we can take. Paul Wilmott, a British expert in quantitative analysis and vocal critic of high-frequency trading on the stock market, wryly suggests “learning to shoot, make jam and knit.”
The venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine’s Hippocratic oath, to instruct computing professionals to do no harm and consider the wider impacts of their work.