
Digital Destruction of Democracy

The Digital Destruction of Democracy

ANYA SCHIFFRIN JANUARY 21, 2019

https://prospect.org/article/digital-destruction-democracy

Anya Schiffrin is an adjunct faculty member at the School of International and Public Affairs at Columbia University. She worked in Hanoi from 1997 to 1999 as the bureau chief of Dow Jones Newswires.
Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics
By Yochai Benkler, Robert Faris, & Hal Roberts
Oxford University Press
Benkler, a Harvard law professor and well-known theorist of the digital age, and his colleagues have produced an authoritative tome that includes multiple taxonomies and literature reviews as well as visualizations of the flow of disinformation.
The book covers clickbait fabricators and white supremacist and alt-right trolls, and offers a history of the scholarship on propaganda, reminding the reader that much of the discussion began in the 1930s.
Benkler’s optimistic 2007 book, The Wealth of Networks, predicted that the Internet would bring people together and transform the way information is created and spread. Today, Benkler is far less sanguine and has become one of the foremost researchers of disinformation networks.
Fox News, Breitbart, The Daily Caller, InfoWars, and Zero Hedge
As a result, mainstream journalists repeat and amplify the falsehoods even as they debunk them.
There is no clear line, they argue, between Russian propaganda, Breitbart lies, and the Trump victory. They add that Fox News is probably more influential than Facebook.
After George Soros gave a speech in January 2018 calling for regulation of the social media platforms, Facebook hired a Republican opposition research firm to shovel dirt at him.
The European Union has not yet tried to regulate disinformation (although it does have codes of practice for the platforms), instead focusing on taxation, competition regulation, and privacy protection. But Germany has strengthened its regulations on online hate speech, including the liability of the social media platforms.
disclosure of the sources of online political advertising. It’s a bit toothless because, just as with offshore bank accounts, it may be possible to register which U.S. entity is paying for online political advertising, but it’s impossible to know whether that entity is getting its funds from overseas. Even the Honest Ads bill was too much for Facebook to take.

++++++++++++
more on the issues of the digital world and democracy in this IMS blog
https://blog.stcloudstate.edu/ims/2019/02/19/facebook-digital-gangsters/

Facebook Digital Gangsters

Facebook labelled ‘digital gangsters’ by report on fake news

Company broke privacy and competition law and should be regulated urgently, say MPs

http://www.theguardian.com/technology/2019/feb/18/facebook-fake-news-investigation-report-regulation-privacy-law-dcms

https://abcn.ws/2NjmNoZ

See also: https://blog.stcloudstate.edu/ims/2019/02/20/digital-destruction-of-democracy/

+++++++++++++
more on Facebook in this IMS blog
https://blog.stcloudstate.edu/ims?s=facebook

Education and Ethics

4 Ways AI Education and Ethics Will Disrupt Society in 2019

By Tara Chklovski     Jan 28, 2019

https://www.edsurge.com/news/2019-01-28-4-ways-ai-education-and-ethics-will-disrupt-society-in-2019

In 2018 we witnessed a clash of titans as government and tech companies collided on privacy issues around collecting, culling and using personal data. From GDPR to Facebook scandals, many tech CEOs were defending big data, its use, and how they’re safeguarding the public.

Meanwhile, the public was amazed at technological advances like Boston Dynamics’ Atlas robot doing parkour, while simultaneously being outraged at the thought of our data no longer being ours and Alexa listening in on all our conversations.

1. Companies will face increased pressure about the data AI-embedded services use.

2. Public concern will lead to AI regulations. But we must understand this tech too.

In 2018, the National Science Foundation invested $100 million in AI research, with special support in 2019 for developing principles for safe, robust and trustworthy AI; addressing issues of bias, fairness and transparency of algorithmic intelligence; developing deeper understanding of human-AI interaction and user education; and developing insights about the influences of AI on people and society.

This investment was dwarfed by DARPA—an agency of the Department of Defense—and its multi-year investment of more than $2 billion in new and existing programs under the “AI Next” campaign. A key area of the campaign includes pioneering the next generation of AI algorithms and applications, such as “explainability” and common sense reasoning.

Federally funded initiatives, as well as corporate efforts (such as Google’s “What-If” tool), will lead to the rise of explainable AI and interpretable AI, whereby the AI actually explains the logic behind its decision-making to humans. But the next step from there would be for the AI regulators and policymakers themselves to learn how these technologies actually work. This is an overlooked step that Richard Danzig, former Secretary of the U.S. Navy, advises us to consider as we create “humans-in-the-loop” systems, which require people to sign off on important AI decisions.
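The idea behind explainable AI can be illustrated with a deliberately simple sketch: a linear scoring model that reports each feature’s contribution alongside its decision, so a human reviewer can see why it decided as it did. This is not the What-If tool or any real product; the feature names, weights, and threshold below are all hypothetical.

```python
# Minimal sketch of "explainable AI": a linear scorer that reports
# each feature's contribution alongside its decision.
# Feature names, weights, and threshold are hypothetical illustrations.

WEIGHTS = {"income": 0.4, "debt": -0.7, "years_employed": 0.3}
THRESHOLD = 0.5

def score_with_explanation(applicant):
    # Each feature's contribution = its weight times its value.
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total >= THRESHOLD else "deny"
    # The explanation lists features sorted by the size of their influence.
    explanation = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, total, explanation

decision, total, explanation = score_with_explanation(
    {"income": 2.0, "debt": 1.0, "years_employed": 1.0}
)
print(decision)
for name, contrib in explanation:
    print(f"{name}: {contrib:+.2f}")
```

Real deep-learning systems are far harder to explain than this linear toy, which is exactly why “explainability” is a research campaign rather than a solved problem.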

3. More companies will make AI a strategic initiative in corporate social responsibility.

Google invested $25 million in AI for Good and Microsoft added an AI for Humanitarian Action to its prior commitment. While these are positive steps, the tech industry continues to have a diversity problem

4. Funding for AI literacy and public education will skyrocket.

Ryan Calo from the University of Washington explains that it matters how we talk about technologies that we don’t fully understand.


Russia disconnect Internet

Russia ‘successfully tests’ its unplugged internet

24 December 2019

https://www.bbc.com/news/technology-50902496

“Increasingly, authoritarian countries which want to control what citizens see are looking at what Iran and China have already done.

“It means people will not have access to dialogue about what is going on in their own country, they will be kept within their own bubble.”

a “sovereign Runet”?

In Iran, the National Information Network allows access to web services while policing all content on the network and limiting external information. It is run by the state-owned Telecommunication Company of Iran.

One of the benefits of effectively turning all internet access into a government-controlled walled garden, is that virtual private networks (VPNs), often used to circumvent blocks, would not work.

Another example of this is the so-called Great Firewall of China. It blocks access to many foreign internet services, which in turn has helped several domestic tech giants establish themselves.
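The blunt mechanism behind this kind of national filtering can be sketched as a gateway consulting a blocklist before forwarding a request. This is a simplification of how such systems actually work (real deployments combine DNS tampering, IP blocking, and deep packet inspection), and the domains below are invented examples, not an actual blocklist.

```python
# Sketch of the blunt mechanism behind national internet filtering:
# a gateway checks each hostname against a blocklist before forwarding.
# Domains here are invented examples, not an actual blocklist.

BLOCKED_DOMAINS = {"example-foreign-service.com", "another-blocked-site.org"}

def gateway_allows(hostname):
    # Block a listed domain and any of its subdomains.
    parts = hostname.lower().split(".")
    return not any(".".join(parts[i:]) in BLOCKED_DOMAINS
                   for i in range(len(parts)))

print(gateway_allows("news.example-foreign-service.com"))  # blocked: subdomain of a listed domain
print(gateway_allows("domestic-service.ru"))               # allowed: not on the list
```

When all traffic must pass through state-controlled gateways, VPN endpoints can simply be added to the same blocklist, which is why a “walled garden” defeats the usual circumvention tools.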

Russia already has tech champions of its own, such as Yandex and Mail.Ru, but other local firms might also benefit.

The country plans to create its own Wikipedia and politicians have passed a bill that bans the sale of smartphones that do not have Russian software pre-installed.

++++++++++++++++++++++++

Russia Is Considering An Experiment To Disconnect From The Internet

February 11, 2019, 4:50 PM ET  SASHA INGBER

https://www.npr.org/2019/02/11/693538900/russia-is-considering-an-experiment-to-disconnect-from-the-internet

Russia is considering a plan to temporarily disconnect from the Internet as a way to gauge how the country’s cyberdefenses would fare in the face of foreign aggression, according to Russian media.

It was introduced after the White House published its 2018 National Security Strategy, which attributed cyberattacks on the United States to Russia, China, Iran and North Korea.

Russia’s Communications Ministry also simulated a switching-off exercise of global Internet services in 2014, according to Russian outlet RT.

Russia’s State Duma will meet Tuesday to consider the bill, according to RIA Novosti.

Roskomnadzor has also exerted pressure on Google to remove certain sites on Russian searches.

Director of National Intelligence Dan Coats told Congress last month that Russia, as well as other foreign actors, will increasingly use cyber operations to “threaten both minds and machines in an expanding number of ways—to steal information, to influence our citizens, or to disrupt critical infrastructure.”

My note: in the past, US actions prompted other countries to consider the same:
Germany – https://blog.stcloudstate.edu/ims/2014/07/01/privacy-and-surveillance-obama-advisor-john-podesta-every-country-has-a-history-of-going-over-the-line/

++++++++++++
more on cybersecurity in this IMS blog
https://blog.stcloudstate.edu/ims?s=cybersecurity

more on surveillance in this IMS blog
https://blog.stcloudstate.edu/ims?s=surveillance

American AI Initiative

Trump creates American AI Initiative to boost research, train displaced workers

The order is designed to protect American technology, national security, privacy, and values when it comes to artificial intelligence.

STEPHEN SHANKLAND, SEAN KEANE FEBRUARY 11, 2019

https://www.cnet.com/news/trump-to-create-american-ai-initiative-with-executive-order/

President Donald Trump on Monday directed federal agencies to improve the nation’s artificial intelligence abilities — and help people whose jobs are displaced by the automation it enables.

It’s good for the US government to focus on AI, said Daniel Castro, chief executive of the Center for Data Innovation, a technology-focused think tank that supports the initiative.

Silicon Valley has been investing heavily in AI in recent years, but the path hasn’t always been an easy one. In October, for instance, Google withdrew from competition for a $10 billion Pentagon cloud computing contract, saying it might conflict with its principles for ethical use of AI.

Trump this week is also reportedly expected to sign an executive order banning Chinese telecom equipment from US wireless networks by the end of February.

++++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=artificial+intelligence

Policy for Artificial Intelligence

Law is Code: Making Policy for Artificial Intelligence

Jules Polonetsky and Omer Tene January 16, 2019

https://www.ourworld.co/law-is-code-making-policy-for-artificial-intelligence/

Twenty years have passed since renowned Harvard Professor Larry Lessig coined the phrase “Code is Law”, suggesting that in the digital age, computer code regulates behavior much like legislative code traditionally did.  These days, the computer code that powers artificial intelligence (AI) is a salient example of Lessig’s statement.

  • Good AI requires sound data.  One of the principles,  some would say the organizing principle, of privacy and data protection frameworks is data minimization.  Data protection laws require organizations to limit data collection to the extent strictly necessary and retain data only so long as it is needed for its stated goal. 
  • Preventing discrimination – intentional or not.
    When is a distinction between groups permissible or even merited and when is it untoward?  How should organizations address historically entrenched inequalities that are embedded in data?  New mathematical theories such as “fairness through awareness” enable sophisticated modeling to guarantee statistical parity between groups.
  • Assuring explainability – technological due process.  In privacy and freedom of information frameworks alike, transparency has traditionally been a bulwark against unfairness and discrimination.  As Justice Brandeis once wrote, “Sunlight is the best of disinfectants.”
  • Deep learning means that iterative computer programs derive conclusions for reasons that may not be evident even after forensic inquiry. 
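The “statistical parity” mentioned in the fairness bullet above has a simple operational form that can be checked directly: compare the rate of favorable outcomes across groups. The sketch below (with made-up decision data) computes that difference; it illustrates the parity metric itself, not the more sophisticated “fairness through awareness” modeling.

```python
# Sketch of checking statistical parity between two groups:
# positive-outcome rates should be (nearly) equal across groups.
# Decision data below is made up for illustration.

def positive_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(group_a, group_b):
    # 0.0 means perfect parity; a large absolute difference
    # (a common rule of thumb is > 0.1) signals disparity.
    return positive_rate(group_a) - positive_rate(group_b)

# 1 = favorable decision (e.g., loan approved), 0 = unfavorable.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # positive rate 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # positive rate 0.375
diff = statistical_parity_difference(group_a, group_b)
print(f"parity difference: {diff:.3f}")  # 0.250 here: a disparity worth investigating
```

Metrics like this make the policy question concrete: the hard part is deciding, as the bullet asks, when a measured difference between groups is merited and when it is untoward.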

Yet even with code as law and a rising need for law in code, policymakers do not need to become mathematicians, engineers and coders.  Instead, institutions must develop and enhance their technical toolbox by hiring experts and consulting with top academics, industry researchers and civil society voices.  Responsible AI requires access to not only lawyers, ethicists and philosophers but also to technical leaders and subject matter experts to ensure an appropriate balance between economic and scientific benefits to society on the one hand and individual rights and freedoms on the other hand.

+++++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=artificial+intelligence

Your Brain Off Facebook

This Is Your Brain Off Facebook

Planning on quitting the social platform? A major new study offers a glimpse of what unplugging might do for your life. (Spoiler: It’s not so bad.)

Benedict Carey, Jan 30, 2019


So what happens if you actually do quit? A new study, the most comprehensive to date, offers a preview.

Well before news broke that Facebook had shared users’ data without consent, scientists and habitual users debated how the platform had changed the experience of daily life.

the use of Facebook and other social media is linked to mental distress, especially in adolescents.

Others have likened habitual Facebook use to a mental disorder, comparing it to drug addiction and even publishing magnetic-resonance images of what Facebook addiction “looks like in the brain.”

When Facebook has published its own analyses to test such claims, the company has been roundly criticized.

For abstainers, breaking up with Facebook freed up about an hour a day, on average, and more than twice that for the heaviest users.

research led by Ethan Kross, a professor of psychology at the University of Michigan, has found that high levels of passive browsing on social media predict lowered moods, compared to more active engagement.
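“Predict” in findings like Kross’s typically means a statistical association, such as a negative correlation between passive-browsing time and mood. As a purely illustrative sketch, with fabricated numbers that are not the study’s data, computing such a correlation looks like this:

```python
import math

# Illustrative only: fabricated numbers, not data from Kross's research.
# Pearson correlation between daily passive-browsing minutes and a mood score.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

passive_minutes = [10, 30, 45, 60, 90, 120]  # hypothetical
mood_scores     = [8,  7,  7,  5,  4,  3]    # hypothetical (10 = best mood)

r = pearson(passive_minutes, mood_scores)
print(f"r = {r:.2f}")  # strongly negative: more passive browsing, lower mood
```

A correlation like this is an association, not proof of causation, which is why the experimental deactivation study described above is notable.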

++++++++++++
more on Facebook in this IMS blog
https://blog.stcloudstate.edu/ims?s=facebook

Tackling Data in Libraries

Tackling Data in Libraries: Opportunities and Challenges in Serving User Communities

Submit proposals at http://www.iolug.org

Deadline is Friday, March 1, 2019

Submissions are invited for the IOLUG Spring 2019 Conference, to be held May 10th in Indianapolis, IN. Submissions are welcomed from all types of libraries and on topics related to the theme of data in libraries.

Libraries and librarians work with data every day, with a variety of applications – circulation, gate counts, reference questions, and so on. The mass collection of user data has made headlines many times in the past few years. Analytics and privacy have, understandably, become important issues both globally and locally. In addition to being aware of the data ecosystem in which we work, libraries can play a pivotal role in educating user communities about data and all of its implications, both favorable and unfavorable.

The Conference Planning Committee is seeking proposals on topics related to data in libraries, including but not limited to:

  • Using tools/resources to find and leverage data to solve problems and expand knowledge,
  • Data policies and procedures,
  • Harvesting, organizing, and presenting data,
  • Data-driven decision making,
  • Learning analytics,
  • Metadata/linked data,
  • Data in collection development,
  • Using data to measure outcomes, not just uses,
  • Using data to better reach and serve your communities,
  • Libraries as data collectors,
  • Big data in libraries,
  • Privacy,
  • Social justice/Community Engagement,
  • Algorithms,
  • Storytelling (https://web.stcloudstate.edu/pmiltenoff/lib490/),
  • Libraries as positive stewards of user data.

Facial Recognition Technology in schools

With Safety in Mind, Schools Turn to Facial Recognition Technology. But at What Cost?

By Emily Tate     Jan 31, 2019

https://www.edsurge.com/news/2019-01-31-with-safety-in-mind-schools-turn-to-facial-recognition-technology-but-at-what-cost

SAFR (Secure, Accurate Facial Recognition)

violent deaths in schools have stayed relatively constant over the last 30 years, according to data from the National Center for Education Statistics. But then there’s the emotive reality, which is that every time another event like Sandy Hook or Parkland occurs, many educators and students feel they are in peril when they go to school.

RealNetworks, a Seattle-based software company that was popular in the 1990s for its audio and video streaming services but has since expanded to offer other tools, including SAFR (Secure, Accurate Facial Recognition), its AI-supported facial recognition software.

After installing new security cameras, purchasing a few Apple devices and upgrading the school’s Wi-Fi, St. Therese was looking at a $24,000 technology tab.

The software is programmed to allow authorized users into the building with a smile.
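Facial recognition systems of this kind generally work by reducing each face to a numerical “embedding” vector and comparing it against enrolled (authorized) embeddings. The sketch below illustrates that matching step with made-up vectors and a made-up threshold; it is not SAFR’s actual algorithm.

```python
import math

# Illustrative sketch of the matching step in facial recognition:
# a face is reduced to an embedding vector, then compared against
# enrolled (authorized) embeddings by Euclidean distance.
# Vectors and the threshold are made up for illustration.

MATCH_THRESHOLD = 0.6  # hypothetical: smaller distance = more similar

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_authorized(probe, enrolled):
    # Authorized if the probe is close enough to any enrolled embedding.
    return any(euclidean(probe, e) <= MATCH_THRESHOLD for e in enrolled)

enrolled = [[0.1, 0.9, 0.3], [0.7, 0.2, 0.5]]
print(is_authorized([0.12, 0.88, 0.31], enrolled))  # close to the first entry
print(is_authorized([0.9, 0.9, 0.9], enrolled))     # matches nothing
```

The threshold choice is where the privacy and safety trade-off lives: lower it and authorized people get locked out; raise it and strangers get waved through.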

“Facial recognition isn’t a panacea. It is just a tool,” says Collins, who focuses on education privacy issues.

Another part of the problem with tools like SAFR is that they provide a false sense of security.

++++++++++++++
more on surveillance in this IMS blog
https://blog.stcloudstate.edu/ims?s=surveillance

more on privacy in this IMS blog
https://blog.stcloudstate.edu/ims?s=privacy
