Searching for "technology ethics"

Library and information science and ethics

literature:

https://www.ifla.org/g/faife/professional-codes-of-ethics-for-librarians/ and, alongside IFLA, it is worth keeping UNESCO in mind: https://en.unesco.org/themes/ethics-science-and-technology/ethics-education
https://www.essaysauce.com/education-essays/ethics-library-information-science-lis-professionals/

https://www.uniassignment.com/essay-samples/information-technology/ethics-and-professionalism-in-library-information-technology-essay.php

https://www.springer.com/journal/10676 (for me, it would be interesting to see/hear how your students "differentiate" themselves from IT specialists. Do they find the articles in this journal interesting? If not, why not? How do they see their future profession, and so on.) If it turns out that most of them do differentiate themselves, have them discuss this article: https://utpjournals.press/doi/abs/10.3138/jelis-62-4-2020-0106?journalCode=jelis
https://www.i-c-i-e.org/
https://www.academia.edu/4896164/Information_Ethics_II_Towards_a_Unified_Taxonomy

https://urresearch.rochester.edu/fileDownloadForInstitutionalItem.action?itemId=8334&itemFileId=17570
https://www.amazon.com/Studies-Library-Information-Science-Ethics/dp/0786433671
five categories: intellectual freedom, privacy, intellectual property, professional ethics, and intercultural information ethics

https://en.m.wikibooks.org/wiki/Introduction_to_Library_and_Information_Science/Ethics_and_Values_in_the_Information_Professions (a syllabus; how does your class differ from this one? Does it resemble it?)
https://www.loc.gov/item/webcast-3363 (this video, over an hour long, is interesting when compared with the previous syllabus)
https://repository.arizona.edu/bitstream/handle/10150/105520/fallislibraryhitech.pdf?sequence=1 (an overview of the theories and methodology)
http://article.sapub.org/10.5923.j.library.20180701.01.html and https://www.researchgate.net/publication/228428173_Where_is_Information_Ethics_in_Iranian_Library_and_Information_Science_Publications_and_Services
(these two are interesting because they are case studies, and in terms of mentality, resources, etc., we are closer to Nigeria and Iran than to the USA). They can be set against this article: https://divine-noise-attack.info/Case-Studies-In-Library-And-Information-Science-Ethics%7CKathrine-A.-Henderson.cgi and this one as well: https://cdr.lib.unc.edu/concern/masters_papers/g732dd51k
https://www.semanticscholar.org/paper/Ethical-Aspects-of-Library-and-Information-Science-Rubin-Froehlich/e251e778686c9c9e6f9ce6440cf5c99191563874
collection development, censorship, privacy, reference services, copyright, administrative concerns, information access, technology-related issues, and problems with conflicting loyalties

https://urresearch.rochester.edu/fileDownloadForInstitutionalItem.action?itemId=8334&itemFileId=17570

https://www.ideals.illinois.edu/bitstream/handle/2142/106536/Contribution_368_final.pdf?sequence=1&isAllowed=y

https://www.barnesandnoble.com/w/case-studies-in-library-and-information-science-ethics-elizabeth-a-buchanan/1101364048 this is a book; I can try to get hold of it

https://uni-mysore.ac.in/assets/downloads2011/PHD-syllabus/Library-Information-Science.pdf In India, ethics is treated as part of research methods. This could be useful if you are seriously considering a master's program.
https://simmonslis.libguides.com/AppliedEthics_and_LAMs/applied_ethics a libguide (I mentioned this platform to you)

ethics computers brain

+++The Ethical Challenges of Connecting Our Brains to Computers (r/tech)

https://www.scientificamerican.com/article/the-ethical-challenges-of-connecting-our-brains-to-computers/

Although brain-computer interfaces (BCIs) are at the heart of neurotech, the field is defined more broadly as technology able to collect, interpret, infer from, or modify information generated by any part of the nervous system.

Neurotech comes in different types: some invasive, some not. Invasive brain-computer interfaces involve placing microelectrodes or other kinds of neurotech materials directly onto the brain, or even embedding them into the neural tissue. The idea is to directly sense or modulate neural activity.

Noninvasive neurotech is also used for pain management. Together with Boston Scientific, IBM researchers are applying machine learning, the internet of things, and neurotech to improve chronic pain therapy.

As a new, emerging technology, neurotech challenges corporations, researchers and individuals to reaffirm our commitment to responsible innovation. It is essential to put guardrails in place so that these technologies lead to beneficial long-term outcomes at the company, national and international levels. We need to ensure that researchers and manufacturers of neurotech, as well as policymakers and consumers, approach it responsibly and ethically.

++++++++++
more on ethics in this IMS blog
https://blog.stcloudstate.edu/ims?s=ethics

digital ethics

O’Brien, J. (2020). Digital Ethics in Higher Education: 2020. Educause Review. https://er.educause.edu/articles/2020/5/digital-ethics-in-higher-education-2020

digital ethics, which I define simply as “doing the right thing at the intersection of technology innovation and accepted social values.”
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, written by Cathy O’Neil in early 2016, continues to be relevant and illuminating. O’Neil’s book revolves around her insight that “algorithms are opinions embedded in code,” in distinct contrast to the belief that algorithms are based on—and produce—indisputable facts.
Safiya Umoja Noble’s book Algorithms of Oppression: How Search Engines Reinforce Racism
The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power

+++++++++++++++++

International Dialogue on “The Ethics of Digitalisation” Kicks Off in Berlin | Berkman Klein Center. (2020, August 20). [Harvard University]. Berkman Klein Center. https://cyber.harvard.edu/story/2020-08/international-dialogue-ethics-digitalisation-kicks-berlin

+++++++++++++++++
more on ethics in this IMS blog
https://blog.stcloudstate.edu/ims?s=ethics

Education and Ethics

4 Ways AI Education and Ethics Will Disrupt Society in 2019

By Tara Chklovski     Jan 28, 2019

https://www.edsurge.com/news/2019-01-28-4-ways-ai-education-and-ethics-will-disrupt-society-in-2019

In 2018 we witnessed a clash of titans as government and tech companies collided on privacy issues around collecting, culling and using personal data. From GDPR to Facebook scandals, many tech CEOs were defending big data, its use, and how they’re safeguarding the public.

Meanwhile, the public was amazed at technological advances like Boston Dynamics' Atlas robot doing parkour, while simultaneously being outraged at the thought of our data no longer being ours and Alexa listening in on all our conversations.

1. Companies will face increased pressure about the data AI-embedded services use.

2. Public concern will lead to AI regulations. But we must understand this tech too.

In 2018, the National Science Foundation invested $100 million in AI research, with special support in 2019 for developing principles for safe, robust and trustworthy AI; addressing issues of bias, fairness and transparency of algorithmic intelligence; developing deeper understanding of human-AI interaction and user education; and developing insights about the influences of AI on people and society.

This investment was dwarfed by DARPA—an agency of the Department of Defense—and its multi-year investment of more than $2 billion in new and existing programs under the “AI Next” campaign. A key area of the campaign includes pioneering the next generation of AI algorithms and applications, such as “explainability” and common sense reasoning.

Federally funded initiatives, as well as corporate efforts (such as Google’s “What If” tool), will lead to the rise of explainable AI and interpretable AI, whereby the AI actually explains the logic behind its decision making to humans. But the next step from there would be for the AI regulators and policymakers themselves to learn how these technologies actually work. This is an overlooked step right now that Richard Danzig, former Secretary of the U.S. Navy, advises us to consider as we create “humans-in-the-loop” systems, which require people to sign off on important AI decisions.
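
For illustration only (a generic sketch, not Google’s What-If tool or any specific system named in the article): “explainable” or interpretable AI can be as simple as a model whose decision logic can be printed and read by a human. The data, feature names and task below are invented.

```python
# A minimal sketch of interpretable ML: train a small decision tree and
# print the rules it learned. Feature names and data are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical loan-screening features: income (k$), debt ratio, years employed
X = rng.uniform(low=[20, 0.0, 0], high=[150, 1.0, 30], size=(500, 3))
# Hypothetical rule generating the labels (1 = approve)
y = ((X[:, 0] > 60) & (X[:, 1] < 0.5)).astype(int)

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The model can "explain the logic behind its decision making":
print(export_text(model, feature_names=["income", "debt_ratio", "years_employed"]))
```

Reading the printed tree is exactly the kind of inspection that regulators and policymakers would need to be able to do before signing off on “humans-in-the-loop” decisions.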

3. More companies will make AI a strategic initiative in corporate social responsibility.

Google invested $25 million in AI for Good, and Microsoft added an AI for Humanitarian Action program to its prior commitment. While these are positive steps, the tech industry continues to have a diversity problem.

4. Funding for AI literacy and public education will skyrocket.

Ryan Calo from the University of Washington explains that it matters how we talk about technologies that we don’t fully understand.


ethics and exact sciences

University Data Science Programs Turn to Ethics and the Humanities

By Sydney Johnson     Jan 11, 2019

https://www.edsurge.com/news/2019-01-11-university-data-science-programs-turn-to-ethics-and-the-humanities

“Formulating a product, you better know about ethics and understand legal frameworks.”

These days a growing number of people are concerned with bringing more talk of ethics into technology. One question is whether that will bring change to data-science curricula.

Following major data breaches and privacy scandals at tech companies like Facebook, universities including Stanford, the University of Texas and Harvard have all added ethics courses into computer science degree programs to address tech’s “ethical dark side,” the New York Times has reported.

As more colleges and universities consider incorporating humanities courses into technical degree programs, some are asking what kind of ethics should be taught.


++++++++++++++
more on ethics in this IMS blog
https://blog.stcloudstate.edu/ims?s=ethics

AI and ethics

Live Facebook discussion at SCSU VizLab on ethics and technology:

Heard on Marketplace this morning (Oct. 22, 2018): ethics of artificial intelligence with John Havens of the Institute of Electrical and Electronics Engineers, which has developed a new ethics certification process for AI: https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf

Ethics and AI

***** The student club, the Philosophical Society, has now been recognized by SCSU as a student organization ***

https://ed.ted.com/lessons/the-ethical-dilemma-of-self-driving-cars-patrick-lin

Could it be the case that a random decision is still better than a predetermined one designed to minimize harm?

Similar ethical considerations are also raised:

in this sitcom

https://www.theatlantic.com/sponsored/hpe-2018/the-ethics-of-ai/1865/ (full movie)

This TED talk:

https://blog.stcloudstate.edu/ims/2017/09/19/social-media-algorithms/

https://blog.stcloudstate.edu/ims/2018/10/02/social-media-monopoly/

+++++++++++++++++++
IoT (Internet of Things), Industry 4.0, Big Data, BlockChain, Privacy, Security, Surveillance

https://blog.stcloudstate.edu/ims?s=internet+of+things

peer-reviewed literature:

Keyword search: ethic* + Internet of Things = 31 results

Baldini, G., Botterman, M., Neisse, R., & Tallacchini, M. (2018). Ethical Design in the Internet of Things. Science & Engineering Ethics, 24(3), 905–925. https://doi-org.libproxy.stcloudstate.edu/10.1007/s11948-016-9754-5

Berman, F., & Cerf, V. G. (2017). Social and Ethical Behavior in the Internet of Things. Communications of the ACM, 60(2), 6–7. https://doi-org.libproxy.stcloudstate.edu/10.1145/3036698

Murdock, G. (2018). Media Materialties: For A Moral Economy of Machines. Journal of Communication, 68(2), 359–368. https://doi-org.libproxy.stcloudstate.edu/10.1093/joc/jqx023

Carrier, J. G. (2018). Moral economy: What’s in a name. Anthropological Theory, 18(1), 18–35. https://doi-org.libproxy.stcloudstate.edu/10.1177/1463499617735259

Kernaghan, K. (2014). Digital dilemmas: Values, ethics and information technology. Canadian Public Administration, 57(2), 295–317. https://doi-org.libproxy.stcloudstate.edu/10.1111/capa.12069

Koucheryavy, Y., Kirichek, R., Glushakov, R., & Pirmagomedov, R. (2017). Quo vadis, humanity? Ethics on the last mile toward cybernetic organism. Russian Journal of Communication, 9(3), 287–293. https://doi-org.libproxy.stcloudstate.edu/10.1080/19409419.2017.1376561

Keyword search: ethic* + autonomous vehicles = 46 results

Cerf, V. G. (2017). A Brittle and Fragile Future. Communications of the ACM, 60(7), 7. https://doi-org.libproxy.stcloudstate.edu/10.1145/3102112

Fleetwood, J. (2017). Public Health, Ethics, and Autonomous Vehicles. American Journal of Public Health, 107(4), 532–537. https://doi-org.libproxy.stcloudstate.edu/10.2105/AJPH.2016.303628

Harris, J. (2018). Who Owns My Autonomous Vehicle? Ethics and Responsibility in Artificial and Human Intelligence. Cambridge Quarterly of Healthcare Ethics, 27(4), 599–609. https://doi-org.libproxy.stcloudstate.edu/10.1017/S0963180118000038

Keeling, G. (2018). Legal Necessity, Pareto Efficiency & Justified Killing in Autonomous Vehicle Collisions. Ethical Theory & Moral Practice, 21(2), 413–427. https://doi-org.libproxy.stcloudstate.edu/10.1007/s10677-018-9887-5

Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Science & Engineering Ethics, 21(3), 619–630. https://doi-org.libproxy.stcloudstate.edu/10.1007/s11948-014-9565-5

Getha-Taylor, H. (2017). The Problem with Automated Ethics. Public Integrity, 19(4), 299–300. https://doi-org.libproxy.stcloudstate.edu/10.1080/10999922.2016.1250575

Keyword search: ethic* + artificial intelligence = 349 results

Etzioni, A., & Etzioni, O. (2017). Incorporating Ethics into Artificial Intelligence. Journal of Ethics, 21(4), 403–418. https://doi-org.libproxy.stcloudstate.edu/10.1007/s10892-017-9252-2

Köse, U. (2018). Are We Safe Enough in the Future of Artificial Intelligence? A Discussion on Machine Ethics and Artificial Intelligence Safety. BRAIN: Broad Research in Artificial Intelligence & Neuroscience, 9(2), 184–197. Retrieved from http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3daph%26AN%3d129943455%26site%3dehost-live%26scope%3dsite

++++++++++++++++
http://www.cts.umn.edu/events/conference/2018

2018 CTS Transportation Research Conference

Keynote presentations will explore the future of driving and the evolution and potential of automated vehicle technologies.

+++++++++++++++++++
https://blog.stcloudstate.edu/ims/2016/02/26/philosophy-and-technology/

+++++++++++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims/2018/09/07/limbic-thought-artificial-intelligence/

AI and autonomous cars as ALA discussion topic
https://blog.stcloudstate.edu/ims/2018/01/11/ai-autonomous-cars-libraries/

and privacy concerns
https://blog.stcloudstate.edu/ims/2018/09/14/ai-for-education/

the call of the German scientists on ethics and AI
https://blog.stcloudstate.edu/ims/2018/09/01/ethics-and-ai/

AI in the race for world dominance
https://blog.stcloudstate.edu/ims/2018/04/21/ai-china-education/

coding ethics unpredictability

Franken-algorithms: the deadly consequences of unpredictable code

Thu 30 Aug 2018

https://www.theguardian.com/technology/2018/aug/29/coding-algorithms-frankenalgos-program-danger

Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again”, they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and inured to real time monitoring as they can be – are in principle predictable and interrogable, these ones are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call these “frankenalgos” – though Mary Shelley couldn’t have made this up.
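
The contrast the article draws can be made concrete with a toy sketch (mine, not the article's): a fixed rule whose behaviour can be read straight off the code, versus a learned model whose effective rule is whatever fell out of the training data. All names and data below are invented for illustration.

```python
# A toy contrast: a "dumb" fixed algorithm versus a learning one.
# The fixed rule is fully inspectable; the learned rule lives in fitted
# coefficients and shifts whenever the training data shifts.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fixed_rule(speed_kmh: float) -> bool:
    """Predictable and interrogable: brake if speed exceeds 50 km/h."""
    return speed_kmh > 50

rng = np.random.default_rng(1)
speeds = rng.uniform(0, 120, size=(200, 1))
# Noisy labels: the "environment" the model learns from
labels = (speeds[:, 0] + rng.normal(0, 15, 200) > 50).astype(int)

learned_rule = LogisticRegression().fit(speeds, labels)

# The learned threshold is implicit in the coefficients, written nowhere:
print("fixed rule at 55 km/h:", fixed_rule(55))
print("learned rule at 55 km/h:", bool(learned_rule.predict([[55]])[0]))
print("learned coefficients:", learned_rule.coef_, learned_rule.intercept_)
```

Retrain the second model on different data and its decision boundary moves, which is the sense in which "we no longer know to any degree of certainty what its rules and parameters are."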

Twenty years ago, George Dyson anticipated much of what is happening today in his classic book Darwin Among the Machines. The problem, he tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable – and that what is predictable can be controlled. Both assumptions turn out to be wrong.

“It’s proceeding on its own, in little bits and pieces,” he says. “What I was obsessed with 20 years ago that has completely taken over the world today are multicellular, metazoan digital organisms, the same way we see in biology, where you have all these pieces of code running on people’s iPhones, and collectively it acts like one multicellular organism.

“There’s this old law called Ashby’s law that says a control system has to be as complex as the system it’s controlling, and we’re running into that at full speed now, with this huge push to build self-driving cars where the software has to have a complete model of everything, and almost by definition we’re not going to understand it. Because any model that we understand is gonna do the thing like run into a fire truck ’cause we forgot to put in the fire truck.”

Walsh believes this makes it more, not less, important that the public learn about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect. When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting: “I would suggest the problem is that algorithm now means any large, complex decision making software system and the larger environment in which it is embedded, which makes them even more unpredictable.” A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in tech, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs. Where there are choices to be made, that’s where ethics comes in.

our existing system of tort law, which requires proof of intention or negligence, will need to be rethought. A dog is not held legally responsible for biting you; its owner might be, but only if the dog’s action is thought foreseeable.

model-based programming, in which machines do most of the coding work and are able to test as they go.

As we wait for a technological answer to the problem of soaring algorithmic entanglement, there are precautions we can take. Paul Wilmott, a British expert in quantitative analysis and vocal critic of high frequency trading on the stock market, wryly suggests “learning to shoot, make jam and knit.”

The venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine’s Hippocratic oath, to instruct computing professionals to do no harm and consider the wider impacts of their work.

+++++++++++
more on coding in this IMS blog
https://blog.stcloudstate.edu/ims?s=coding

Research and Ethics: If Facebook can tweak our emotions and make us vote, what else can it do?

If Facebook can tweak our emotions and make us vote, what else can it do?

http://www.businessinsider.com/facebook-calls-experiment-innovative-2014-7#ixzz36PtsxVfL

Google’s chief executive has expressed concern that we don’t trust big companies with our data – but may be dismayed at Facebook’s latest venture into manipulation

Please consider the information on Power, Privacy, and the Internet and details on ethics and big data in this IMS blog entry: https://blog.stcloudstate.edu/ims/2014/07/01/privacy-and-surveillance-obama-advisor-john-podesta-every-country-has-a-history-of-going-over-the-line/

important information:
Please consider the SCSU Research Ethics and the IRB (Institutional Review Board) document:
http://www.stcloudstate.edu/graduatestudies/current/culmProject/documents/ResearchEthicsandQualitative–IRBPresentationforGradStudentsv2.2011.pdf
For more information, please contact the SCSU Institutional Review Board : http://www.stcloudstate.edu/irb/default.asp

The Facebook Conundrum: Where Ethics and Science Collide

http://blogs.kqed.org/mindshift/2014/07/the-facebook-conundrum-where-ethics-and-science-collide

The field of learning analytics isn’t just about advancing the understanding of learning. It’s also being applied in efforts to try to influence and predict student behavior.

Learning analytics has yet to demonstrate its big beneficial breakthrough, its “penicillin,” in the words of Reich. Nor has there been a big ethical failure to creep lots of people out.

“There’s a difference,” Pistilli says, “between what we can do and what we should do.”
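
As a concrete (and entirely hypothetical) illustration of the “predict student behavior” side of learning analytics, the sketch below shows the kind of model such systems build; whether it should then be used to nudge students is exactly the “can do vs. should do” question Pistilli raises. The features, data and threshold are invented.

```python
# Hypothetical learning-analytics sketch: predict whether a student is
# "at risk" from engagement features. Invented data for illustration only;
# real deployments raise the consent and intervention questions above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 300
logins_per_week = rng.poisson(5, n)
assignments_submitted = rng.integers(0, 10, n)
forum_posts = rng.poisson(2, n)

X = np.column_stack([logins_per_week, assignments_submitted, forum_posts])
# Invented ground truth: low engagement -> at risk
at_risk = (logins_per_week + assignments_submitted < 8).astype(int)

model = LogisticRegression().fit(X, at_risk)
new_student = [[2, 3, 0]]  # hypothetical engagement record
print("predicted at-risk probability:", model.predict_proba(new_student)[0, 1])
```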

IM 690 VR and AR lab part 2

IM 690 Virtual Reality and Augmented Reality. short link: http://bit.ly/IM690lab

IM 690 lab plan for March 3, MC 205:  Oculus Go and Quest

Readings:

  1. TAM: Technology Acceptance Model
    Read Venkatesh and Davis and sum up the importance of their model for instructional designers working with VR technologies and creating materials for users of VR technologies (a sketch of the model's core relationships appears after this reading list).
  2. UTAUT: using the theory to learn well with VR and to design a good acceptance model for end users: https://blog.stcloudstate.edu/ims/2020/02/20/utaut/
    Watch both parts of Victoria Bolotina's presentation at the Global VR conference. How is she applying UTAUT in her research?
    Read Bracq et al. (2019): how do they apply UTAUT to their VR nursing training?
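
For reference, one common way to sketch the original TAM as structural equations (a rough sketch only; the β coefficients, error terms ε and external variables X are placeholders, not values from Venkatesh and Davis):

```latex
% Core TAM relationships (sketch): PEOU = perceived ease of use,
% PU = perceived usefulness, A = attitude toward using,
% BI = behavioral intention to use, X = external variables.
\[
\begin{aligned}
PU &= \beta_1\,PEOU + \beta_2\,X + \varepsilon_1 \\
A  &= \beta_3\,PU + \beta_4\,PEOU + \varepsilon_2 \\
BI &= \beta_5\,A + \beta_6\,PU + \varepsilon_3 \\
\text{Actual use} &= \beta_7\,BI + \varepsilon_4
\end{aligned}
\]
```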

Lab work (continue):

revision from last week:
How to shoot and edit 360 videos: Ben Claremont
https://www.youtube.com/channel/UCAjSHLRJcDfhDSu7WRpOu-w
and
https://www.youtube.com/channel/UCUFJyy31hGam1uPZMqcjL_A

  1. Oculus Quest as VR advanced level
    1. Using the controllers
    2. Confirm Guardian
    3. Using the menu

Oculus Quest main

    1. Watching 360 video in YouTube
      1. Switch between 2D and 360 VR
        1. Play a game

Climbing

Racketball

Beat Saber (Instagram post by @beatsaber: “Hell yeah, @naysy is the ultimate Beat Saber queen! 💃 #VR #VirtualReality #BeatSaber #PanicAtTheDisco”)

Practice interactivity (space station)

    1. Broadcast your experience (Facebook Live)
  1. Additional (advanced) features of Oculus Quest
    1. https://engagevr.io/
    2. https://sidequestvr.com/#/setup-howto

Interactivity: communication and working collaboratively with Altspace VR

https://account.altvr.com/

setting up your avatar

joining a space and collaborating and communicating with other users

  1. Assignment: Group work
    1. Find one F2F and one online peer to form a group.
      Based on the questions/directions given before you started watching the videos:
      – Does this particular technology fit in the instructional design (ID) frames and theories covered so far?
      – If so, how does it fit in those frames and theories?
      – Which models and ideas from the videos seem possible for you to replicate?
      Exchange thoughts with your peers and make a plan to create a similar educational product.
    2. Post your writing in the following D2L Discussions thread
  2. Augmented Reality with HoloLens (watch videos at the computer station)
    1. Start and turn off; go through menu

      https://youtu.be/VX3O650comM
    2. Learn gestures and voice commands
  1. Augmented Reality with Merge Cube
    1. 3D apps and software packages and their compatibility with AR
  2. Augmented Reality with a smartphone
  3. Samsung Gear 360 video camera
    1. If all other goggles and devices are busy, please feel welcome to use the camera to practice and/or work toward your final project
    2. Memory (microSD) card and data transfer – does your phone take a memory card compatible with the camera?
    3. Upload 360 images and videos to your YouTube and FB accounts
  4. Issues with XR
    1. Ethics
      1. empathy
        Peter Rubin “Future Presence”
        https://blog.stcloudstate.edu/ims/2019/03/25/peter-rubin-future-presence/

+++++++++++++

Enhance your XR instructional design with other tools: https://blog.stcloudstate.edu/ims/2020/02/07/crs-loop/

https://aframe.io/

https://framevr.io/

https://learn.framevr.io/ (free learning resources for Frame)

https://hubs.mozilla.com/#/

https://sketchfab.com/ WebXR technology

https://mixedreality.mozilla.org/hello-webxr/

https://studio.gometa.io/landing

+++++++++++
Plamen Miltenoff, Ph.D., MLIS
Professor
320-308-3072
pmiltenoff@stcloudstate.edu
http://web.stcloudstate.edu/pmiltenoff/faculty/
schedule a meeting: https://doodle.com/digitalliteracy
find my office: https://youtu.be/QAng6b_FJqs

Education and New Developments 2019

International Conference on Education and New Developments 2019
27 to 29 June 2020 – Zagreb, Croatia
http://www.end-educationconference.org/

  • In TEACHERS AND STUDENTS: Teachers and Staff training and education; Educational quality and standards; Curriculum and Pedagogy; Vocational education and Counselling; Ubiquitous and lifelong learning; Training programmes and professional guidance; Teaching and learning relationship; Student affairs (learning, experiences and diversity); Extra-curricular activities; Assessment and measurements in Education.
    • In PROJECTS AND TRENDS: Pedagogic innovations; Challenges and transformations in Education; Technology in teaching and learning; Distance Education and eLearning; Global and sustainable developments for Education; New learning and teaching models; Multicultural and (inter)cultural communications; Inclusive and Special Education; Rural and indigenous Education; Educational projects.
    • In TEACHING AND LEARNING: Critical Thinking; Educational foundations; Research and development methodologies; Early childhood and Primary Education; Secondary Education; Higher Education; Science and technology Education; Literacy, languages and Linguistics (TESL/TEFL); Health Education; Religious Education; Sports Education.
    • In ORGANIZATIONAL ISSUES: Educational policy and leadership; Human Resources development; Educational environment; Business, Administration, and Management in Education; Economics in Education; Institutional accreditations and rankings; International Education and Exchange programmes; Equity, social justice and social change; Ethics and values; Organizational learning and change; Corporate Education.

= Types of Contributions =
All submissions are subject to a blind-review refereeing process and are divided into these categories:
– Oral Presentations
– Posters
– Workshops
– Virtual presentations
– Company Presentation
Companies can also showcase their products or services in the conference exhibition area by contacting the secretariat or publicity email (provided below).

= Conference Date and Location =
END 2020 will be held in Zagreb, Croatia (Hotel Dubrovnik) and will take place from 27 to 29 June 2020.

= Contacts =
Conference email: secretariat@end-educationconference.org
Publicity email: publicity@end-educationconference.org
