Posts Tagged ‘AI artificial intelligence’

surveillance technology and education

https://www.edsurge.com/news/2019-06-10-is-school-surveillance-going-too-far-privacy-leaders-urge-a-slow-down

New York’s Lockport City School District is using public funds from a Smart Schools bond to help pay for a reported $3.8 million security system that uses facial recognition technology to identify individuals who don’t belong on campus.

The Lockport case has drawn the attention of national media, the ire of many parents, and criticism from the New York Civil Liberties Union, among other privacy groups.

the Future of Privacy Forum (FPF), a nonprofit think tank based in Washington, D.C., published an animated video that illustrates the possible harm that surveillance technology can cause to children and the steps schools should take before making any decisions, such as identifying specific goals for the technology and establishing who will have access to the data and for how long.

A few days later, the nonprofit Center for Democracy and Technology, in partnership with New York University’s Brennan Center for Justice, released a brief examining the same topic.

My note: the same considerations were relayed to the SCSU SOE dean in regard to the purchase of Promethean and its installation in the SOE building without discussion with the faculty who work with technology. This information was also shared with the dean: http://blog.stcloudstate.edu/ims/2018/10/31/students-data-privacy/

++++++++++++
more on surveillance in education in this IMS blog
http://blog.stcloudstate.edu/ims?s=surveillance+education

AI in education

https://www.edsurge.com/news/2019-01-23-how-much-artificial-intelligence-should-there-be-in-the-classroom

a two-day conference about artificial intelligence in education organized by a company called Squirrel AI.

He believes that having AI-driven tutors or instructors will help each student get the individual approach they need.

the Chinese government has declared a national goal of surpassing the U.S. in AI technology by the year 2030, so there is almost a Sputnik-like push for the tech going on right now in China.

++++++++++++++++++
more on AI in education in this IMS blog
http://blog.stcloudstate.edu/ims?s=Artificial+Intelligence+and+education

AI deep learning

Machine learning for sensors

June 3, 2019

https://phys.org/news/2019-06-machine-sensors.html

Researchers at the Fraunhofer Institute for Microelectronic Circuits and Systems IMS have developed AIfES, an artificial intelligence (AI) concept for microcontrollers and sensors that contains a completely configurable artificial neural network. AIfES is a platform-independent machine learning library which can be used to realize self-learning microelectronics requiring no connection to a cloud or to high-performance computers. The sensor-related AI system recognizes handwriting and gestures, enabling, for example, gesture-controlled input when the library is running on a wearable.

AIfES is a machine learning library programmed in C that can run on microcontrollers, but also on other platforms such as PCs, Raspberry Pi and Android.

+++++++++++++++++
more about machine learning in this IMS blog
http://blog.stcloudstate.edu/ims?s=machine+learning

Education and Ethics

4 Ways AI Education and Ethics Will Disrupt Society in 2019

By Tara Chklovski     Jan 28, 2019

https://www.edsurge.com/news/2019-01-28-4-ways-ai-education-and-ethics-will-disrupt-society-in-2019

In 2018 we witnessed a clash of titans as government and tech companies collided on privacy issues around collecting, culling and using personal data. From GDPR to Facebook scandals, many tech CEOs were defending big data, its use, and how they’re safeguarding the public.

Meanwhile, the public was amazed at technological advances like Boston Dynamics’ Atlas robot doing parkour, while simultaneously being outraged at the thought of our data no longer being ours and Alexa listening in on all our conversations.

1. Companies will face increased pressure about the data AI-embedded services use.

2. Public concern will lead to AI regulations. But we must understand this tech too.

In 2018, the National Science Foundation invested $100 million in AI research, with special support in 2019 for developing principles for safe, robust and trustworthy AI; addressing issues of bias, fairness and transparency of algorithmic intelligence; developing deeper understanding of human-AI interaction and user education; and developing insights about the influences of AI on people and society.

This investment was dwarfed by DARPA—an agency of the Department of Defense—and its multi-year investment of more than $2 billion in new and existing programs under the “AI Next” campaign. A key area of the campaign includes pioneering the next generation of AI algorithms and applications, such as “explainability” and common sense reasoning.

Federally funded initiatives, as well as corporate efforts (such as Google’s “What If” tool), will lead to the rise of explainable AI and interpretable AI, whereby the AI actually explains the logic behind its decision making to humans. But the next step from there would be for the AI regulators and policymakers themselves to learn about how these technologies actually work. This is an overlooked step right now that Richard Danzig, former Secretary of the U.S. Navy, advises us to consider as we create “humans-in-the-loop” systems, which require people to sign off on important AI decisions.

3. More companies will make AI a strategic initiative in corporate social responsibility.

Google invested $25 million in AI for Good, and Microsoft added an AI for Humanitarian Action program to its prior commitment. While these are positive steps, the tech industry continues to have a diversity problem.

4. Funding for AI literacy and public education will skyrocket.

Ryan Calo from the University of Washington explains that it matters how we talk about technologies that we don’t fully understand.


AI in the classroom

How Much Artificial Intelligence Should There Be in the Classroom?

By Betsy Corcoran and Jeffrey R. Young     Jan 23, 2019

https://www.edsurge.com/news/2019-01-23-how-much-artificial-intelligence-should-there-be-in-the-classroom

We can build robot teachers, or even robot teaching assistants. But should we?

the Chinese government has declared a national goal of surpassing the U.S. in AI technology by the year 2030, so there is almost a Sputnik-like push for the tech going on right now in China. At the same time, China is also facing a shortage of qualified teachers in many rural areas, and there’s a huge demand for high-quality language teachers and tutors throughout the country.

+++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artificial+intelligence

American AI Initiative

Trump creates American AI Initiative to boost research, train displaced workers

The order is designed to protect American technology, national security, privacy, and values when it comes to artificial intelligence.

STEPHEN SHANKLAND, SEAN KEANE FEBRUARY 11, 2019

https://www.cnet.com/news/trump-to-create-american-ai-initiative-with-executive-order/

President Donald Trump on Monday directed federal agencies to improve the nation’s artificial intelligence abilities — and help people whose jobs are displaced by the automation it enables.

It’s good for the US government to focus on AI, said Daniel Castro, chief executive of the Center for Data Innovation, a technology-focused think tank that supports the initiative.

Silicon Valley has been investing heavily in AI in recent years, but the path hasn’t always been an easy one. In October, for instance, Google withdrew from competition for a $10 billion Pentagon cloud computing contract, saying it might conflict with its principles for ethical use of AI.

Trump this week is also reportedly expected to sign an executive order banning Chinese telecom equipment from US wireless networks by the end of February.

++++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artificial+intelligence

Policy for Artificial Intelligence

Law is Code: Making Policy for Artificial Intelligence

Jules Polonetsky and Omer Tene January 16, 2019

https://www.ourworld.co/law-is-code-making-policy-for-artificial-intelligence/

Twenty years have passed since renowned Harvard Professor Larry Lessig coined the phrase “Code is Law”, suggesting that in the digital age, computer code regulates behavior much like legislative code traditionally did.  These days, the computer code that powers artificial intelligence (AI) is a salient example of Lessig’s statement.

  • Good AI requires sound data.  One of the principles,  some would say the organizing principle, of privacy and data protection frameworks is data minimization.  Data protection laws require organizations to limit data collection to the extent strictly necessary and retain data only so long as it is needed for its stated goal. 
  • Preventing discrimination – intentional or not.
    When is a distinction between groups permissible or even merited and when is it untoward?  How should organizations address historically entrenched inequalities that are embedded in data?  New mathematical theories such as “fairness through awareness” enable sophisticated modeling to guarantee statistical parity between groups.
  • Assuring explainability – technological due process.  In privacy and freedom of information frameworks alike, transparency has traditionally been a bulwark against unfairness and discrimination.  As Justice Brandeis once wrote, “Sunlight is the best of disinfectants.”
  • Deep learning means that iterative computer programs derive conclusions for reasons that may not be evident even after forensic inquiry. 

Yet even with code as law and a rising need for law in code, policymakers do not need to become mathematicians, engineers and coders.  Instead, institutions must develop and enhance their technical toolbox by hiring experts and consulting with top academics, industry researchers and civil society voices.  Responsible AI requires access to not only lawyers, ethicists and philosophers but also to technical leaders and subject matter experts to ensure an appropriate balance between economic and scientific benefits to society on the one hand and individual rights and freedoms on the other hand.

+++++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artificial+intelligence

Facial Recognition issues

Chinese Facial Recognition Will Take over the World in 2019

Michael K. Spencer Jan 14, 2018
https://medium.com/futuresin/chinese-facial-recognition-will-take-over-the-world-in-2019-520754a7f966
The best facial recognition startups are in China, by a long shot. Because their software is less biased, it is being adopted globally. This is evidenced in 2019 by the New York Police Department, for example, according to the South China Morning Post.
The mass surveillance state of data harvesting in real-time is coming. Facebook already rates and profiles us.

The Tech Wars come down to an AI-War

Whether the NYC police angle is true or not (it’s being hotly disputed), Facebook and Google are thinking along lines that follow the whims of the Chinese Government.

SenseTime and Megvii won’t just be worth $5 billion; they will be worth many times that in the future. This is because facial-recognition data harvesting of everything is the future of consumerism and capitalism, and in some places the central tenet of social order (think Asia).

China has already ‘won’ the trade war, because it is winning the race to innovation. America doesn’t regulate Amazon, Microsoft, Google or Facebook properly, which stunts innovation and ethics in technology; the West is now forced to copy China just to keep up.

+++++++++++++
more about facial recognition in schools
http://blog.stcloudstate.edu/ims/2019/02/02/facial-recognition-technology-in-schools/
