/r/Open_Science
Open science is an increasingly important discussion topic. This subreddit collates the latest information on topics such as open access, open data, open education, open peer review, and open source.
Anyone is able to post links, images, videos, and text to this feed. They will also be displayed on Mastodon and Twitter.
Our idea of "science" spans from the natural sciences to the humanities and everything in between.
Information overload has dramatically shaped how we interact with content, from news to academic research. Drawing on personal experiences in AI, digital media, and research personalization, I explore ways we can shift from overwhelming quantity to meaningful quality in my essay, “A Tribute to Information Overload.”
How do you manage information overload in your research or daily life? I'd love to hear your thoughts and experiences. Let’s start a conversation!
🎧 Hi everyone! I just published an essay exploring how the music industry’s digital evolution—from physical formats to streaming—parallels what’s happening in research today.
We’re facing fragmentation, information overload, and the limits of PDFs. Could a new platform emerge to transform how research is shared, discovered, and discussed? I’d love to hear your thoughts!
Read the full essay here: https://medium.com/@n.nanas/0cdc6e6ee671
Hi all - posting this here in case anybody is interested in participating in an open science research project - about 25 different countries are testing the same survey on health behaviours. We are collecting the Australian data, so if you are over 18 and an Australian citizen, we would love your input. My post history has the details (I don't want to share the link here in case it is not allowed). Thank you!
Hello Open Science community! 👋
Today, I’m thrilled to announce Akanaba.org: a platform in the making, built on the belief that research should be fair, rewarding, collaborative, and above all, innovative. Akanaba is designed to help researchers:
The vision is to drive research innovation and revolutionize how researchers collaborate, share knowledge, and advance science.
In the coming days, I’ll also be sharing an essay titled “Is PDF the MP3 of Research?”, exploring the parallels between the music and research industries as we shift from ownership to access. Stay tuned for that!
I’m Nikolaos Nanas, an AI specialist and innovator with over two decades of experience in AI, web personalization, and research publishing, and I am excited to hear your thoughts and spark interesting discussions about the future of Open Science.
Hello Open Science community,
I'm excited to share a project that I believe aligns closely with the values and goals of open science. We're developing Ideosphere (https://ideosphere.io), a subscription-based funding platform for scientific research that aims to make the funding process more transparent, accessible, and aligned with open science principles.
Key features:
We believe this model can help address some of the challenges in traditional research funding, such as:
We're in the early stages and would love to hear from the open science community:
Your insights would be invaluable as we develop this platform. Feel free to check out our website or share your thoughts here.
Let's discuss how we can work together to make scientific research more open, accessible, and sustainably funded!
Hi! I wanted to share an Open Science Hardware tool we just released publicly. It's a low-cost, high-performance insect monitor that you can build yourself with off-the-shelf parts! We have dozens of deployments here in Panama, so we know it can withstand really harsh environments.
Once it has collected your data, you can run the custom open-source AI programs we made to detect all the insects (a modified YOLO) and try to identify what they are (a modified BioCLIP).
All the info and documentation for making your own is right here: https://digital-naturalism-laboratories.github.io/Mothbox/
For its sixth edition we are hosting SOSC, a school for young (data) scientists that is meant to provide an overview of best practices and new cloud tools that can help with the daily tasks of a data scientist, making heavy use of live hands-on sessions.
One of the recent program updates was the inclusion of workflow management tools, and, well, we got the impression that it is difficult to select one technology that is intuitive and powerful enough, and that fits into a one-day activity.
There are also a lot of alternatives out there; how would you choose? What is your experience?
We looked at MLflow, Argo Workflows (Kubeflow Pipelines), Dagster, et al., each with its own pros and cons.
P.S. registrations are open until Oct 5 :) https://agenda.infn.it/event/40829/
Schubert et al. (2024) reveal the successes and challenges faced by organizations in adhering to reforestation best practices. While many acknowledge the importance of measurable goals and community involvement, only a few provide detailed monitoring and long-term plans. Only 38% of organizations in the study report quantitative measures of the benefits to local communities.
https://groundtruth.app/evaluating-global-tree-growing-efforts-achievements-and-challenges/
Hi there,
I'm working on a platform that promotes people's work in the field of open source.
As most of this is done on GitHub, I was wondering: what platforms are used for publishing open science work?
I'm very new to open science, so I would love some advice.
Thanks!
Hi, I'm struggling to understand the meaning behind the question:
"Outline the data utility: to whom will it be useful?" (FAIR Data Management Plan HORIZON 2020).
If it is just to say that the data is A) useful to the researchers for the purposes of the research project, and B) useful to academics/the public interested in the topic, it seems too trivial/bureaucratic/annoying as a question.
Is there perhaps a deeper meaning I am missing? Is there a way to answer the question in a surprising/non-trivial way?
I would love to see a platform in which researchers can share conclusions that they have come to based on the research, along with the chain of evidence that led them there.
Like a meta-study, but more navigable. Each conclusion could be backed up by quotes and links to the underlying studies. Ideally it would be auto-updating and incorporate new research as it comes out.
Does a thing like this exist?
RSpace is an all-in-one ELN, sample manager, and Research Data Management (RDM) platform that integrates with many other data tools. RSpace is designed to act as a central data hub and pipeline for large academic institutions that want to support open science and FAIR data principles. RSpace already has good open APIs, but to encourage the data community to build even more integrations and allow better flow of data, RSpace is now fully open source. Learn more here: https://github.com/rspace-os
There is this option if you open the menu of your paper published on TechRxiv.
Does it mean it counts citations? Or do they transfer to the final accepted paper? Does anyone have experience with this?
How does TechRxiv handle citations of your preprint that need to be transferred to the accepted paper, and how does it link the two papers?
I have this open source project which I use to generate openly accessible formal proof data for Hilbert systems, and I have once briefly presented it on Reddit to the open source community.
The few times I have conversed with people about it, it seemed to me that they did not really get what I am doing there or why, despite my thinking that I had pretty much written it all out. I get that people tend to believe that mathematics is all about numbers, but the objects of study in proof theory are formal proofs and their systems. People tend to shy away from it because it can look intimidating at first.
But it's my impression that formal proofs in Hilbert systems are pretty easy to grasp, since they are built on very basic concepts, and what they accomplish is actually pretty cool: proofs become algorithms that derive mathematical theorems from very few axioms/definitions, so that a machine can easily verify them. A project about building databases of such proofs is Metamath, but it does not focus on size/complexity/simplicity, and it covers only very few systems, mostly a formulation of ZFC.
Finding proofs in Hilbert systems is hard, but looking at the short ones and their incredible elegance (in a world/system that feels kind of random because it is so vast and complex) gives me great satisfaction. It essentially shows how powerful, in epistemic terms, a few small statements (or even a single one) can be. It also builds some foundations in complexity theory; for example, focusing on propositional systems ties into the NP vs. coNP problem.
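To make the "very basic concepts" point concrete, here is the classic textbook derivation of A → A in a standard implicational Hilbert system (my own illustrative example, not drawn from the project's database), using only two axiom schemas and modus ponens:

```latex
% Axiom schemas:
%   (A1)  P -> (Q -> P)
%   (A2)  (P -> (Q -> R)) -> ((P -> Q) -> (P -> R))
% Inference rule: modus ponens (MP) -- from P and P -> Q, infer Q.
\begin{align*}
1.\;& A \to ((A \to A) \to A)
    && \text{(A1) with } P := A,\; Q := A \to A\\
2.\;& \bigl(A \to ((A \to A) \to A)\bigr) \to \bigl((A \to (A \to A)) \to (A \to A)\bigr)
    && \text{(A2) with } P := A,\; Q := A \to A,\; R := A\\
3.\;& (A \to (A \to A)) \to (A \to A)
    && \text{MP on 1, 2}\\
4.\;& A \to (A \to A)
    && \text{(A1) with } P := A,\; Q := A\\
5.\;& A \to A
    && \text{MP on 4, 3}
\end{align*}
```

Every line is either an axiom instance or follows mechanically from two earlier lines, which is exactly what makes such proofs trivially machine-checkable: a verifier only has to pattern-match axiom schemas and check MP applications.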
Yet, so far, I have not been able to ignite similar excitement about the topic in anyone else.
I would like to present the topic in different ways and possibly answer meaningful questions about what this is all about and how it works. But from my perspective it is all so goddamn straightforward that I need other people's perspectives to guide me.
Which aspects should I address, and which questions do you think would, once answered, help motivate other nerdy/techy people to take an interest or even participate in this research?
Note that the project has a discussion forum, so if you think you can contribute a good idea or question, you can also do it there (with better layout, file uploads, a higher character limit, etc.).
tl;dr: Sign the pledge for diamond open access (DOA) publishing at freeourknowledge.org to help reduce the dominance of for-profit publishers and boost journals that charge no fees.
The current academic publishing system prioritizes profit over free knowledge and scientific quality, and we call for direct action by researchers to improve our publishing system. We are a small team of researchers from different fields in cognitive science, and we have formed the Committee for Collective Action in Science to organize researchers and encourage them to resist the perverse incentives in the pressure to publish.
Commercial publishing has led to a corruption of the core scientific process itself, as in the case of (rapid) open-access publishers (e.g., MDPI, Frontiers; see Bloudoff-Indelicato, 2015), where it is increasingly reported that peer-review processes were shallow or flawed, or that expert reviews were ignored, so as to ensure rapid publishing at high volume in order to collect article processing fees. As a consequence, public resources are funneled into profit margins for the academic publishing industry estimated to be as high as 40%-50% (Van Noorden, 2013), greatly exceeding what is expected in healthy competitive markets. Globally, between 2015 and 2018, authors paid an estimated $1.06 billion in fees in order to provide open access to their work (Butler et al., 2023). This stifles scientific advancement and goes against the public interest. Of course, academics rely on the publishers to disseminate information and advance their careers. Ultimately, this leads to a collective-action problem in which individual researchers are incentivized to act against their own and their community’s best interest.
For these reasons we have proposed the Diamond Initiative. Diamond Open Access refers to a publishing model in which authors are not charged for making their work publicly available to all readers. Researchers are invited to contribute to this initiative by pledging to publish at least one scholarly work through a diamond open access agreement within a five-year period when a critical mass is reached. By doing this, participants contribute to a more inclusive and accessible knowledge-sharing environment and promote alternative community-led and university-led publishers.
The pledge's activation is contingent on reaching a threshold of 500 signatories, which will demonstrate that researchers can find the solidarity to change the status quo. We also offer assistance to those who pledge in finding a suitable and reputable DOA journal to publish in. Sign the pledge here, or sign up for our newsletter here.