Archive of ‘information technology’ category
Ethermap is a new tool that simplifies the process of collaboratively creating online maps. Unlike Google’s My Maps, Google Earth, or ESRI’s mapping tools, Ethermap doesn’t require user registration.
To invite others to work on your Ethermap with you, you simply have to give them the link to your map.
Google Maps & Earth – More Than Just Social Studies.
more on Polly Google in this IMS blog
Computer-generated humans and disinformation campaigns could soon take over political debate.
Bots will dominate political debate, experts warn
Last year, researchers at Oxford University found that 70 countries had political disinformation campaigns over two years.
Perhaps the most notable of such campaigns was that initiated by a Russian propaganda group to influence the 2016 US election result.
The US Federal Communications Commission hosted a period in 2017 where the public could comment on its plans to repeal net neutrality. Harvard Kennedy School lecturer Bruce Schneier notes that while the agency received 22 million comments, many of them were made by fake identities.
Schneier argues that the escalating prevalence of computer-generated personas could “starve” people of democracy
more on deepfake in this IMS blog
UB op-ed: How to move from digital literacy to digital fluency
The Secretive Company That Might End Privacy as We Know It: It’s taken 3 billion images from the internet to build an AI-driven database that allows US law enforcement agencies to identify any stranger.
Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.
But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list.
Facial recognition technology has always been controversial. It makes people nervous about Big Brother. It has a tendency to deliver false matches for certain groups, like people of color. And some facial recognition products used by the police — including Clearview’s — haven’t been vetted by independent experts.
Clearview deployed current and former Republican officials to approach police forces, offering free trials and annual licenses for as little as $2,000. Mr. Schwartz tapped his political connections to help make government officials aware of the tool, according to Mr. Ton-That.
“We have no data to suggest this tool is accurate,” said Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology, who has studied the government’s use of facial recognition. “The larger the database, the larger the risk of misidentification because of the doppelgänger effect. They’re talking about a massive database of random people they’ve found on the internet.”
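Garvie’s “doppelgänger effect” can be made concrete with basic probability: even a very accurate matcher, searched against billions of faces, will return false matches. A minimal sketch (the per-comparison false match rate below is an illustrative assumption, not a published figure for Clearview or any real system):

```python
# Probability of at least one false match when searching one face
# against a database of N images, assuming an independent
# per-comparison false match rate p (illustrative numbers only).

def false_match_probability(n_images: int, p: float) -> float:
    """P(at least one false match) = 1 - (1 - p)^N."""
    return 1.0 - (1.0 - p) ** n_images

# Even a matcher wrong only once per million comparisons becomes
# near-certain to misidentify someone at internet scale:
p = 1e-6
for n in (1_000, 1_000_000, 3_000_000_000):
    print(f"N = {n:>13,}: {false_match_probability(n, p):.4f}")
```

The independence assumption is a simplification, but the direction of the effect is exactly Garvie’s point: the larger the database, the larger the risk of misidentification.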
Law enforcement is using a facial recognition app with huge privacy issues: Clearview AI’s software can find matches in billions of internet images.
Part of the problem stems from a lack of oversight. There has been no real public input into adoption of Clearview’s software, and the company’s ability to safeguard data hasn’t been tested in practice. Clearview itself remained highly secretive until late 2019.
The software also appears to explicitly violate policies at Facebook and elsewhere against collecting users’ images en masse.
while there’s underlying code that could theoretically power augmented reality glasses able to identify people on the street, Ton-That said there were no plans for such a design.
Banning Facial Recognition Isn’t Enough
In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program ahead of a new statewide law declaring it illegal. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.
facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.
People can be identified at a distance by their heart beat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses.
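The MAC address point is easy to demonstrate: a MAC is a 48-bit hardware identifier whose first three octets (the OUI) even reveal the device’s vendor, so a broadcast MAC both singles out a device and hints at what it is. A minimal sketch with a made-up example address (modern phones randomize MACs while scanning precisely because of this kind of tracking):

```python
# A MAC address is written as six hex octets. The first three form
# the OUI (Organizationally Unique Identifier), assigned to a
# vendor; the last three are device-specific.

def split_mac(mac: str) -> tuple[str, str]:
    """Return the (vendor OUI, device-specific part) of a MAC address."""
    octets = mac.lower().replace("-", ":").split(":")
    if len(octets) != 6 or not all(len(o) == 2 for o in octets):
        raise ValueError(f"not a valid MAC address: {mac!r}")
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, device = split_mac("3C-22-FB-1A-2B-3C")  # hypothetical address
print(oui, device)  # -> 3c:22:fb 1a:2b:3c
```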
China, for example, uses multiple identification technologies to support its surveillance state.
There is a huge — and almost entirely unregulated — data broker industry in the United States that trades on our information.
This is why many companies buy license plate data from states. It’s also why companies like Google are buying health records, and part of the reason Google bought the company Fitbit, along with all of its data.
The data broker industry is almost entirely unregulated; there’s only one law — passed in Vermont in 2018 — that requires data brokers to register and explain in broad terms what kind of data they collect.
The Secretive Company That Might End Privacy as We Know It
on social credit system in this IMS blog
PISA scores were recently released, and results of the international test revealed that only 14 percent of U.S. students were able to reliably distinguish between fact and opinion.
according to Pew Research Center, 68 percent of American adults get their news from social media—platforms where opinion is often presented as fact. While Facebook and other social media outlets have pledged to tackle fake news, the results are lackluster.
Even on seemingly serious websites, credibility is not a given. When I was in middle and high school, we were taught that we could trust .org websites. Now, with the practice of astroturfing, responsible consumers of information must dig deeper and go further to verify the legitimacy of information. https://www.merriam-webster.com/dictionary/astroturfing
Experiences like these, where students are challenged to consider the validity of information and sort what’s real from what’s fake, would better prepare them not only to be savvier consumers of news, but also to someday digest contradictory information to make complicated decisions about their own health care, finances or civic engagement.
freely available resources to help educators teach how to vet information and think critically about real-world topics.
more fake news in this IMS blog
the service can be used for a variety of functions at schools and colleges, including verifying credentials, tracking donations and payments, or handling other student records.
a K-6 educational app called SpoonRead
Blockchain is a decentralized system where every record is linked and transparent, and any alterations leave a trail that supposedly can’t be hidden.
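The “linked and transparent” property comes from each record storing a hash of the record before it, so altering anything breaks every later link. A minimal hash-chain sketch (a generic illustration of the idea, not any specific blockchain product; the student-records example is hypothetical):

```python
import hashlib
import json

def make_block(record: dict, prev_hash: str) -> dict:
    """Create a block whose hash covers both the record and the
    hash of the previous block, chaining them together."""
    payload = json.dumps({"record": record, "prev": prev_hash},
                         sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any tampered record breaks the chain."""
    prev = "0" * 64  # genesis value
    for block in chain:
        payload = json.dumps({"record": block["record"], "prev": prev},
                             sort_keys=True)
        if (block["prev"] != prev or
                hashlib.sha256(payload.encode()).hexdigest() != block["hash"]):
            return False
        prev = block["hash"]
    return True

# Hypothetical student records, chained in order:
chain, prev = [], "0" * 64
for rec in ({"student": "A", "credential": "BSc"},
            {"student": "B", "donation": 500}):
    block = make_block(rec, prev)
    chain.append(block)
    prev = block["hash"]

print(verify(chain))                        # True
chain[0]["record"]["credential"] = "PhD"    # tamper with an old record
print(verify(chain))                        # False
```

Changing the first record invalidates its own hash and, through the `prev` links, everything after it — that is the “trail that supposedly can’t be hidden.”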
Some have questioned whether there is a need for blockchain in student records, considering that other kinds of encryption techniques already exist to protect and verify things like credentials.
more on blockchain in education in this IMS blog
Digital Badging and Microcredentialing
short link to this blog entry: http://bit.ly/convocation2020
for backchanneling, please join us on Zoom: https://minnstate.zoom.us/my/badge or 9107443388
if you want to review the Zoom recording, pls click here:
Presenters: Kannan Sivaprakasam & Plamen Miltenoff
1. Share your ideas and practice of badge distribution and/or microcredentialing
2. What is a digital badge/microcredentialing?
3. How to create and award D2L digital badges for your class?
4. How to motivate students to earn digital badges?
5. How it aligns with COSE’s strategic plan 2022/Husky Compact?
What we hope to achieve
• Create a community of digital badgers
• Catalyze professional development opportunities for faculty/staff
Literature and additional information:
- digital badge/microcredentialing
- Badgr, Credly, etc.
- credential transparency
- game-based learning:
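Under the hood, platforms like Badgr and Credly make a badge portable and verifiable by publishing it as an Open Badges assertion — a small JSON document. A minimal hosted-assertion sketch (every URL, the badge name, and the recipient below are hypothetical placeholders):

```python
import json

# Sketch of an Open Badges 2.0 assertion, the JSON record a platform
# hosts so that anyone can verify who earned which badge and when.
# All identifiers below are made up for illustration.
assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://example.edu/assertions/123",              # hypothetical URL
    "recipient": {"type": "email", "hashed": False,
                  "identity": "student@example.edu"},        # hypothetical
    "badge": "https://example.edu/badges/chem-lab-safety",   # BadgeClass URL
    "verification": {"type": "hosted"},
    "issuedOn": "2020-01-09T00:00:00Z",
}
print(json.dumps(assertion, indent=2))
```

Because the assertion lives at a stable URL controlled by the issuer, a badge pasted into a portfolio or LinkedIn profile can be checked against the source — the “credential transparency” piece above.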
At #convocation2020, Dr. Kannan Sivaprakasam and Dr. Plamen Miltenoff discussed achievements from the @minnstateedu innovation grant #badges #digitalbadges and #microcredentials. For more info, please visit http://bit.ly/convocation2020. @scsu_soe @scsualumni @scsusopa @scsucla @scsucareer @scsumasscomm @scsustudentgovernment @scsucose @scsu_chemistry_club
Facebook’s new ban targets videos that are manipulated to make it appear someone said words they didn’t actually say.
more on deep fake in this IMS blog
more on facebook in this IMS blog