Europe to lead human-centric Artificial Intelligence: we invite the industry, research institutes and public authorities to test ethics guidelines for trustworthy AI drafted by a group of experts.
4 Ways AI Education and Ethics Will Disrupt Society in 2019
In 2018 we witnessed a clash of titans as government and tech companies collided on privacy issues around collecting, culling and using personal data. From GDPR to Facebook scandals, many tech CEOs were defending big data, its use, and how they’re safeguarding the public.
Meanwhile, the public was amazed at technological advances like Boston Dynamics' Atlas robot doing parkour, while simultaneously being outraged at the thought of our data no longer being ours and Alexa listening in on all our conversations.
1. Companies will face increased pressure about the data AI-embedded services use.
2. Public concern will lead to AI regulations. But we must understand this tech too.
In 2018, the National Science Foundation invested $100 million in AI research, with special support in 2019 for developing principles for safe, robust and trustworthy AI; addressing issues of bias, fairness and transparency of algorithmic intelligence; developing deeper understanding of human-AI interaction and user education; and developing insights about the influences of AI on people and society.
This investment was dwarfed by DARPA—an agency of the Department of Defense—and its multi-year investment of more than $2 billion in new and existing programs under the “AI Next” campaign. A key area of the campaign includes pioneering the next generation of AI algorithms and applications, such as “explainability” and common sense reasoning.
Federally funded initiatives, as well as corporate efforts (such as Google’s What-If Tool), will lead to the rise of explainable AI and interpretable AI, whereby the AI actually explains the logic behind its decision making to humans. But the next step from there would be for the AI regulators and policymakers themselves to learn how these technologies actually work. This is an overlooked step right now that Richard Danzig, former Secretary of the U.S. Navy, advises us to consider as we create “humans-in-the-loop” systems, which require people to sign off on important AI decisions.
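To make the idea of explainable AI concrete, here is a minimal sketch of how a simple scoring model can report the reasoning behind a decision to a human reviewer. This is an illustration of the general concept only; the feature names, weights, and threshold are hypothetical and do not come from Google's What-If Tool or any real system.

```python
# Sketch: a linear scoring model whose per-feature contributions
# (weight * value) can be shown to a human alongside the decision.
# All names and numbers here are made up for illustration.

def explain_decision(weights, features, threshold=0.5):
    """Return the decision plus ranked per-feature contributions."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "deny"
    # Sort so the reviewer sees the most influential factors first.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, ranked

weights = {"income": 0.4, "debt": -0.6, "years_employed": 0.3}
applicant = {"income": 1.2, "debt": 0.5, "years_employed": 1.0}

decision, ranked = explain_decision(weights, applicant)
print(decision)                        # which way the model decided
for name, contrib in ranked:
    print(f"{name}: {contrib:+.2f}")   # why, feature by feature
```

A "human-in-the-loop" workflow would surface this ranked list to the person who must sign off, rather than only the final approve/deny label.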
3. More companies will make AI a strategic initiative in corporate social responsibility.
Google invested $25 million in AI for Good and Microsoft added an AI for Humanitarian Action to its prior commitment. While these are positive steps, the tech industry continues to have a diversity problem.
4. Funding for AI literacy and public education will skyrocket.
Ryan Calo from the University of Washington explains that it matters how we talk about technologies that we don’t fully understand.
How Much Artificial Intelligence Should There Be in the Classroom?
We can build robot teachers, or even robot teaching assistants. But should we?
The Chinese government has declared a national goal of surpassing the U.S. in AI technology by the year 2030, so there is almost a Sputnik-like push for the tech going on right now in China. At the same time, China is also facing a shortage of qualified teachers in many rural areas, and there’s a huge demand for high-quality language teachers and tutors throughout the country.
more on AI in this IMS blog
Trump creates American AI Initiative to boost research, train displaced workers
The order is designed to protect American technology, national security, privacy, and values when it comes to artificial intelligence.
STEPHEN SHANKLAND, SEAN KEANE FEBRUARY 11, 2019
President Donald Trump on Monday directed federal agencies to improve the nation’s artificial intelligence abilities — and help people whose jobs are displaced by the automation it enables.
It’s good for the US government to focus on AI, said Daniel Castro, chief executive of the Center for Data Innovation, a technology-focused think tank that supports the initiative.
Silicon Valley has been investing heavily in AI in recent years, but the path hasn’t always been an easy one. In October, for instance, Google withdrew from competition for a $10 billion Pentagon cloud computing contract, saying it might conflict with its principles for ethical use of AI.
Trump this week is also reportedly expected to sign an executive order banning Chinese telecom equipment from US wireless networks by the end of February.
more on AI in this IMS blog
Law is Code: Making Policy for Artificial Intelligence
Jules Polonetsky and Omer Tene January 16, 2019
Twenty years have passed since renowned Harvard Professor Larry Lessig coined the phrase “Code is Law”, suggesting that in the digital age, computer code regulates behavior much like legislative code traditionally did. These days, the computer code that powers artificial intelligence (AI) is a salient example of Lessig’s statement.
- Good AI requires sound data. One of the principles, some would say the organizing principle, of privacy and data protection frameworks is data minimization. Data protection laws require organizations to limit data collection to the extent strictly necessary and retain data only so long as it is needed for its stated goal.
- Preventing discrimination – intentional or not.
When is a distinction between groups permissible or even merited and when is it untoward? How should organizations address historically entrenched inequalities that are embedded in data? New mathematical theories such as “fairness through awareness” enable sophisticated modeling to guarantee statistical parity between groups.
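To show what "statistical parity between groups" means in practice, here is a minimal sketch of the check itself: do two groups receive favorable outcomes at similar rates? The group data and the 0.1 disparity tolerance below are hypothetical illustrations, not part of any cited fairness framework.

```python
# Sketch: statistical parity difference between two groups'
# favorable-outcome rates. Data and tolerance are made up.

def statistical_parity_difference(outcomes_a, outcomes_b):
    """Positive-outcome rate of group A minus that of group B."""
    rate_a = sum(outcomes_a) / len(outcomes_a)
    rate_b = sum(outcomes_b) / len(outcomes_b)
    return rate_a - rate_b

# 1 = favorable decision (e.g. loan approved), 0 = unfavorable.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 6 of 8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3 of 8 approved

diff = statistical_parity_difference(group_a, group_b)
print(f"statistical parity difference: {diff:.3f}")
if abs(diff) > 0.1:  # hypothetical tolerance
    print("disparity exceeds tolerance; audit the model")
```

A difference near zero indicates the groups fare similarly; "fairness through awareness" approaches go further, modeling which distinctions between individuals are actually relevant to the task.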
- Assuring explainability – technological due process. In privacy and freedom of information frameworks alike, transparency has traditionally been a bulwark against unfairness and discrimination. As Justice Brandeis once wrote, “Sunlight is the best of disinfectants.”
- Deep learning means that iterative computer programs derive conclusions for reasons that may not be evident even after forensic inquiry.
Yet even with code as law and a rising need for law in code, policymakers do not need to become mathematicians, engineers and coders. Instead, institutions must develop and enhance their technical toolbox by hiring experts and consulting with top academics, industry researchers and civil society voices. Responsible AI requires access to not only lawyers, ethicists and philosophers but also to technical leaders and subject matter experts to ensure an appropriate balance between economic and scientific benefits to society on the one hand and individual rights and freedoms on the other hand.
more on AI in this IMS blog
Chinese Facial Recognition Will Take over the World in 2019
Michael K. Spencer Jan 14, 2018
The best facial recognition startups are in China, by a long shot. Because their software is said to be less biased, it is being adopted globally. This was evidenced in 2019 by the New York Police Department, for example, according to the South China Morning Post.
The mass surveillance state of data harvesting in real-time is coming. Facebook already rates and profiles us.
The Tech Wars come down to an AI-War
Whether the NYC police angle is true or not (it’s being hotly disputed), Facebook and Google are thinking along lines that follow the whims of the Chinese Government.
SenseTime and Megvii won’t just be worth $5 billion; they will be worth many times that in the future. This is because facial-recognition data harvesting of everything is the future of consumerism and capitalism, and in some places the central tenet of social order (think Asia).
China has already ‘won’ the trade war, because it’s winning the race to innovation. America doesn’t regulate Amazon, Microsoft, Google or Facebook properly, which stunts innovation and ethics in technology; the West is now forced to copy China just to keep up.
more about facial recognition in schools
Why Technology Favors Tyranny
Artificial intelligence could erase many practical advantages of democracy, and erode the ideals of liberty and equality. It will further concentrate power among a small elite if we don’t take steps to stop it.
YUVAL NOAH HARARI OCTOBER 2018 ISSUE
Ordinary people may not understand artificial intelligence and biotechnology in any detail, but they can sense that the future is passing them by. In 1938 the common man’s condition in the Soviet Union, Germany, or the United States may have been grim, but he was constantly told that he was the most important thing in the world, and that he was the future (provided, of course, that he was an “ordinary man,” rather than, say, a Jew or a woman).
In 2018 the common person feels increasingly irrelevant. Lots of mysterious terms are bandied about excitedly in TED Talks, at government think tanks, and at high-tech conferences—globalization, blockchain, genetic engineering, AI, machine learning—and common people, both men and women, may well suspect that none of these terms is about them.
Fears of machines pushing people out of the job market are, of course, nothing new, and in the past such fears proved to be unfounded. But artificial intelligence is different from the old machines. In the past, machines competed with humans mainly in manual skills. Now they are beginning to compete with us in cognitive skills.
Israel is a leader in the field of surveillance technology, and has created in the occupied West Bank a working prototype for a total-surveillance regime. Already today whenever Palestinians make a phone call, post something on Facebook, or travel from one city to another, they are likely to be monitored by Israeli microphones, cameras, drones, or spy software. Algorithms analyze the gathered data, helping the Israeli security forces pinpoint and neutralize what they consider to be potential threats.
The conflict between democracy and dictatorship is actually a conflict between two different data-processing systems. AI may swing the advantage toward the latter.
As we rely more on Google for answers, our ability to locate information independently diminishes. Already today, “truth” is defined by the top results of a Google search. This process has likewise affected our physical abilities, such as navigating space.
So what should we do?
For starters, we need to place a much higher priority on understanding how the human mind works—particularly how our own wisdom and compassion can be cultivated.
more on SCSU student philosophy club in this IMS blog