/r/semanticweb
A subreddit dedicated to all things Linked Data. Links, questions, discussions, etc. on RDF, metadata, inferencing, microformats, SPARQL, ...
The semantic web or linked data movement is an ongoing, W3C-supported push to extend current WWW technologies and practices towards a machine-readable web of data.
Anything related to this push is welcome in /r/semanticweb.
We have a large CWA (closed-world assumption) system: several dozen services using the same ontology, with several dependencies between some of the services.
What are best practices around how the ontology should be released in our ecosystem?
Should we release only the non-inferred (asserted) ontology?
Should we release only the inferred ontology?
Should we release both inferred and non-inferred? (A sketch follows below.)
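For concreteness, a minimal sketch of producing the inferred closure, assuming Apache Jena and RDFS-level inference (the file name and the reasoner choice are assumptions, not part of the question):

    import org.apache.jena.rdf.model.InfModel;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.riot.RDFDataMgr;
    import org.apache.jena.riot.RDFFormat;

    public class MaterializeClosure {
        public static void main(String[] args) {
            // The asserted ("non-inferred") ontology as authored.
            Model asserted = RDFDataMgr.loadModel("ontology.ttl"); // hypothetical file
            // Wrap it in an RDFS reasoner so entailed triples become visible.
            InfModel inferred = ModelFactory.createRDFSModel(asserted);
            // Copy asserted + entailed triples into a plain model and write it
            // out: this is what an "inferred ontology" release would ship.
            Model closure = ModelFactory.createDefaultModel().add(inferred);
            RDFDataMgr.write(System.out, closure, RDFFormat.TURTLE);
        }
    }

One trade-off to weigh: shipping only the asserted version leaves each service free to run (or skip) its own reasoner, while shipping the materialized closure spares services from reasoning at all, at the cost of a larger artifact.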
TIA
Hi guys, a PhD student in civil engineering here. I'm a bit stuck in my research. As objective 1, I carried out a qualitative study to understand why construction professionals resist the technology. I found out that they don't even understand what structured and unstructured data are. Anyway, I proposed a solution that should not be technically difficult: I came across the paper "Using semantic documentation to support software project management" and proposed to build on it. I think I was pretty clear that I'm improving the semantics of the data, aiming at better data integration and information retrieval. But the committee has asked me to be more specific about what I'm improving: data accessibility, availability, or what? I'm really confused by this part. My supervisor is a civil engineer and doesn't understand it himself, so I'm here.
Hello,
I have a statement like this one:
:x :some ( :a :b :c ) .
How do I know that the object is a list at all? Something like:
statement.isList() ?
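A minimal sketch of one way to check this, assuming Apache Jena (the snippet above looks Jena-like; adjust for your actual library). In the RDF model, a ( :a :b :c ) collection is just a chain of rdf:first/rdf:rest nodes, so the test is whether the object starts such a chain:

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.RDFNode;
    import org.apache.jena.rdf.model.Statement;
    import org.apache.jena.vocabulary.RDF;

    public class ListCheck {
        // True if the statement's object heads an RDF collection:
        // either the empty list rdf:nil, or a node with an rdf:first arc.
        static boolean objectIsList(Statement statement) {
            RDFNode object = statement.getObject();
            if (!object.isResource()) return false;
            Model model = statement.getModel();
            return object.equals(RDF.nil)
                || model.contains(object.asResource(), RDF.first);
        }
    }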
Thank you
A fungus sits at the intersection of the social web (Mastodon, Pixelfed, Lemmy, etc.), the semantic web (knowledge graphs like Wikidata.org) and decentralized federated learning, representing the "computation web" aspect in the above diagram.
Together with other similar agents, it results in a decentralized, federated web of AI agents that work on open, shared data and are open to communities. Everybody should be able to set up their own fungus service and help grow an AI model of their choice. I call this the "fungiverse" or "mycelial web".
A fungus web-service ...
- answers user requests and knowledge inserts over the social web
- writes and reads data from the semantic web to collaborate with other fungi agents (this would ideally be done with decentralized technology like Solid pods, or other knowledge graphs, e.g. wikidata.org or one's own Fuseki server); see the sketch after this list
- develops a shared AI model (which is also written to the semantic web) based on decentralized federated learning (which would ideally be based on something like FlowerAi, but isn't at the moment)
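A minimal sketch of the read/write step, assuming Apache Jena's RDFConnection against a local Fuseki server (the dataset URL, resource URIs and property are made up for illustration):

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdfconnection.RDFConnection;

    public class FungusSync {
        public static void main(String[] args) {
            // Hypothetical Fuseki dataset shared by collaborating fungi.
            try (RDFConnection conn =
                     RDFConnection.connect("http://localhost:3030/fungiverse")) {
                // Publish this agent's current state to the shared graph.
                Model update = ModelFactory.createDefaultModel();
                update.createResource("http://example.org/fungus/42")
                      .addProperty(update.createProperty("http://example.org/ns#", "modelVersion"), "v7");
                conn.load(update);
                // Read back what the other agents have published.
                Model shared = conn.fetch();
                shared.write(System.out, "TURTLE");
            }
        }
    }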
In its behaviour, it is similar to a fungus (hence the name):
The shared model data can be thought of as the spores, which are also used by other fungi to adjust their models. The resulting AI chats available to the users are the "fruits" of the fungi.
Roughly, a fungus's behaviour is defined by a protocol, for example the SPORE:
Every participating node runs the following protocol, which consists of two main stages:
Now the different stages in detail:
The following fungus is able to make song recommendations to the user: https://github.com/bluebbberry/MusicRecommendationFungus
Hello everyone! I'm writing this post to seek help with my final year project, which is somewhat related to KGQA and pre-trained models; it's not confirmed yet, but that's enough context for my questions here.
So, I need to get into KGs and everything around them for the above.
Kindly suggest some resources, which can be anything from videos to books and courses to blogs to repositories. But they should be credible and legit; since my FYP is at stake, I need to do my best.
They should be detailed, covering everything, even the nuances. However, please suggest detailed but shorter courses as well.
I hope you get my point; any genuine help will be appreciated.
Note: Deep Learning will be used as well for sure.
Thanks & Regards Ritish
Hello👋
I'm looking for suggestions on books about Knowledge Graphs (RDF or property graphs, with a strong preference for the former) and/or Graph RAG. Specifically, I'm interested in up-to-date and advanced resources: I'm not looking for entry-level material but rather something that dives deeper into the subject.
If you have any recommendations, I’d greatly appreciate it. Thanks in advance! 😊
Hi all!
I've been working with Knowledge Graphs for a while, and lately the knowledge they contain has grown into Big Data (especially in volume).
I currently have over 50 named graphs and a total of almost 4,000,000 triples in a (still work-in-progress) Large Knowledge Graph of the Mexican Federation.
Do you know of methodologies or approaches one could read about for managing and working with such Large Knowledge Graphs?
Thanks in advance!
Hi everyone! I'm building Seamantic, a Mastodon client that introduces a semantic feed: a way to interact directly with the Semantic Web.
Here's how it works: by connecting users and knowledge bases, the semantic feed creates a dynamic flow of high-quality, consensus-driven data.
What do you think of the idea? Feedback is always welcome.
Full disclosure: I don't know whether this is even possible, but everything I've found so far seems very close and adjacent. The short version of my question is whether/how I could synthesise how-to documentation from software documentation written in RDF.
I'll start with my use case, then the specific outcome I'd like to achieve, and lastly a restatement of my question.
I'm currently in the process of documenting a web server for a friend of mine. The primary goal of this documentation is to allow her to deal with minor maintenance issues herself. The secondary aim is to have a complete set of documentation, so that when she gets someone in to help her with more technical aspects, they don't spend hours just trying to figure out how the system works.
So it's not a huge project. There's a bit of custom code, some config for the servers, etc., so documenting what is actually there isn't a huge deal. However, descriptive documentation alone is, in my opinion, effectively useless: it isn't sufficient to explain how to do something, especially not for a non-technical user. And how-to documentation requires that I accurately predict her needs, which I'm not capable of.
So I want to write descriptive documentation, maybe with some extra relations, definitions, contexts, etc., and then generate how-to documentation based on her queries. I imagine the following two kinds of queries would be the most common:
"How do I do X?" I imagine this will be the most frequent, and it's also the most difficult: I can't anticipate every possible how-to scenario or context. However, some aspects of this seem reasonable. For example, "is X in the documentation?" is a simple query that can definitely be answered. "What links to X?" can likewise be answered (a sketch follows below). And I feel like it's a very small step from there to an actual, if basic, how-to guide. With the obvious caveat that if it's not documented, it may as well not exist.
The second most common query will probably be simple term lookups ("what does X mean?") or related-information lookups ("it says to type ls, but where, and what is the probable intent?"). This part I imagine is relatively trivial to provide, even automatically if the interface is well designed.
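To make the "what links to X" step concrete, here is a minimal sketch with Apache Jena and a hypothetical documentation namespace; the same pattern, run over the descriptive documentation, yields the raw material for a generated how-to page:

    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.QueryExecutionFactory;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.riot.RDFDataMgr;

    public class WhatLinksToX {
        public static void main(String[] args) {
            Model docs = RDFDataMgr.loadModel("server-docs.ttl"); // hypothetical file
            // Every statement whose object is X, i.e. everything that links to X.
            String q = "SELECT ?subject ?relation "
                     + "WHERE { ?subject ?relation <http://example.org/docs#X> }";
            try (QueryExecution exec = QueryExecutionFactory.create(q, docs)) {
                exec.execSelect().forEachRemaining(row ->
                    System.out.println(row.get("subject") + " -> " + row.get("relation")));
            }
        }
    }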
I have never worked with any form of linked data before though, and I'm at best a semi technical user. So I guess I have two questions. Is it possible to do something like this in RDF/OWL? And if it's possible how might I go about implementing it?
First-year CS major here, assisting my professor, whose research mainly involves ontologies and SWRL.
I understand they help connect data, and I'm using ChatGPT to explain the basics to me, but if there's a good source it would be very helpful.
My professor works on increasing efficiency for business models etc., but I'm more interested in the healthcare side of this, which seems to be a more niche topic. It would also be nice to connect with people who are researching this area and share what we learn.
Hi everyone,
We are conducting a survey to better understand the challenges, experiences, and practical applications of validating RDF data using SHACL and ShEx. This is an opportunity to share your insights and contribute to advancing knowledge in this area.
The survey will take approximately 8 minutes to complete, with many questions being optional:
https://forms.gle/LdXsG644obcgSsAE6
Your participation would be greatly appreciated, and the findings will be shared with the community in a summarized form.
Thank you for your time and support!
Best regards,
Katja Hose, Maxime Jakubowski, Dominik Tomaszuk
Are there any practical personal knowledge graphs that people can recommend? By now I've got decades of emails, documents and notes that I'd like to index, auto-applying JSON-LD when practical and consistent categories in general, with the ability to create relationships, all in a knowledge graph, and then use the whole thing for RAG with a local LLM. I see this as useful for recall/relations and also for technical knowledge development. Yes, this is essentially what Google and others are building toward, but I'd like a local version.
The use case seems straightforward and generally useful, but are there any specific projects like this? I guess Logseq has some of these features, but it's not really designed for managing imported information.
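A minimal sketch of the "auto-apply JSON-LD" step, assuming Apache Jena and schema.org terms (the email metadata is made up for illustration):

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.riot.Lang;
    import org.apache.jena.riot.RDFDataMgr;
    import org.apache.jena.vocabulary.RDF;

    public class EmailToJsonLd {
        public static void main(String[] args) {
            Model m = ModelFactory.createDefaultModel();
            String schema = "https://schema.org/";
            // Describe one indexed email as a schema:EmailMessage.
            m.createResource("urn:uuid:hypothetical-mail-1")
             .addProperty(RDF.type, m.createResource(schema + "EmailMessage"))
             .addProperty(m.createProperty(schema, "name"), "Re: server backups")
             .addProperty(m.createProperty(schema, "dateSent"), "2014-03-02")
             .addProperty(m.createProperty(schema, "about"),
                          m.createResource("http://example.org/topics/backups"));
            // Emit JSON-LD that a local RAG pipeline could ingest alongside the text.
            RDFDataMgr.write(System.out, m, Lang.JSONLD);
        }
    }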
Hi all!
Considering Large Language Models and other large and complex AI systems are growing in popularity daily, I am curious to ask you about Large Knowledge Graphs.
When I say Large Knowledge Graph (LKG) I mean a structured representation of vast amounts of interconnected information, typically modeled as entities (nodes) and their relationships (edges) in a graph format. It integrates diverse data sources, providing semantic context through ontologies, metadata and other knowledge representations. LKGs are designed for scalability, enabling advanced reasoning, querying, and analytics, and are widely used in domains like AI, search engines, and decision-making systems to extract insights and support complex tasks.
And so, I am curious...
When dealing with Large Knowledge Graphs/Representations like ontologies, vocabularies, catalogs, etc., how do you structure your work?
- Do you think about a specific file-structure? (Knowledge Representation oriented, Class oriented, Domain oriented...)
- Do you use a single source with Named Graphs (see the sketch after this list) or do you distribute?
- If you distribute, is your distribution on different systems, triplestores or graph databases?
- Do you use any Ontology Editors or Ontology Management Systems for Large Knowledge Graphs?
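On the named-graphs option, a minimal Apache Jena sketch of keeping domains apart as named graphs inside one dataset (the graph URIs are hypothetical):

    import org.apache.jena.query.Dataset;
    import org.apache.jena.query.DatasetFactory;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.riot.Lang;
    import org.apache.jena.riot.RDFDataMgr;

    public class NamedGraphLayout {
        public static void main(String[] args) {
            Dataset ds = DatasetFactory.create();
            // One named graph per domain: the dataset stays queryable as a
            // whole, while each part remains independently replaceable.
            ds.addNamedModel("http://example.org/graph/ontology",
                             ModelFactory.createDefaultModel());
            ds.addNamedModel("http://example.org/graph/geography",
                             ModelFactory.createDefaultModel());
            RDFDataMgr.write(System.out, ds, Lang.TRIG);
        }
    }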
Feel free to share any knowledge that you might consider valuable to the thread, and to everybody interested in Large Knowledge Graphs.
Thanks in advance!
I know that AWS Neptune is a pay-as-you-go (PAYGo) SaaS service, but I'm looking for something that supports SHACL and SPARQL.
I’ve also seen GraphDB enterprise available as an Azure VM, but it’s listed at a flat $95k per year.
Are there any other SaaS/PAYGo solutions out there in either Azure or AWS?
… or maybe something that is coming soon?
(Azure preferred)
Hi everyone!
Looking for a full Knowledge Engineering Tech Stack for working with knowledge graphs, ontologies, taxonomies and other knowledge representations.
From tools for managing and storing knowledge (data layer), transforming and connecting it (logic layer), and consuming it (presentation layer), to frameworks, methodologies, maturity models, etc., this thread aims to provide us, innovators and enthusiasts, with tools and insights on how to make the most of our shared interests.
Also, feel free to share your small-to-large scale take. From an individual or personal project, to an international multilateral enterprise.
Extra points for full pipelines.
Here are some categories that might be useful to narrow down the scope of the tech stack:
- Data layer: managing and storing knowledge
- Logic layer: transforming and connecting knowledge
- Presentation layer: consuming knowledge
Thanks in advance, and may this thread be useful to us all!
🚀RDF Developers, We Need Your Input! 🌐
Help shape the next-gen RDF tools for VS Code! Please take our short survey to prioritize features like ontology management, validation, and visualizations.
Your feedback will directly influence tools built for the Semantic Web community.
📋 Survey link: https://app.opinionx.co/2f83eea9-cc92-4a43-babd-7a2ab25ddec0/intro
Let’s build smarter RDF tools together! 💡
Hi, is there a way to get the DBpedia-to-Wikidata mapping for the relations (properties) mentioned in the DBpedia dataset? Thanks in advance.
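One possible starting point, sketched with Apache Jena against the public DBpedia endpoint: the DBpedia ontology carries owl:equivalentProperty links into Wikidata for a number of its properties (whether the coverage is sufficient for your use case is something to verify):

    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.QueryExecutionFactory;

    public class DbpediaWikidataProperties {
        public static void main(String[] args) {
            String q = "PREFIX owl: <http://www.w3.org/2002/07/owl#> "
                     + "SELECT ?dbpediaProp ?wikidataProp WHERE { "
                     + "  ?dbpediaProp owl:equivalentProperty ?wikidataProp . "
                     + "  FILTER(STRSTARTS(STR(?wikidataProp), \"http://www.wikidata.org/\")) "
                     + "} LIMIT 100";
            // Run the query remotely against DBpedia's SPARQL endpoint.
            try (QueryExecution exec =
                     QueryExecutionFactory.sparqlService("https://dbpedia.org/sparql", q)) {
                exec.execSelect().forEachRemaining(row ->
                    System.out.println(row.get("dbpediaProp") + " = " + row.get("wikidataProp")));
            }
        }
    }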
Fairly new to all this btw!
I would like to use Protégé on Mac to visualize and load an ontology. I have a working .owl file that I have tested in WebProtégé.
The application on Mac is giving me headache after headache.
I can't seem to open any files; I get stuck in an incessant loop asking for permissions. It seems others have had issues online, but I can't find any resolution that I can make heads or tails of. Does anyone have any advice?
Hi!
I'm trying to find a SPARQL endpoint that provides conversion rates from EUR to other currencies, but I'm having a tough time locating one. Any suggestions would be greatly appreciated!
Thanks in advance!
Looking for recommendations for a book or site that gives a good practical introduction to ontology engineering. There are a couple on the market, but they're pricey, so I'm hoping y'all might have some insight.
Hello,
Maybe somebody knows, or has solved this otherwise.
The issue: I want a (simple, no need for super) SPARQL pretty formatter. There is one for JS, https://github.com/sparqling/sparql-formatter/blob/main/src/formatter.js, and they use an AST.
Bob DC wrote about using the command-line qparse tool, https://www.bobdc.com/blog/jenagems/, and looking into the bin/bat scripts in the Jena GitHub code, it invokes "arq.qparse"
.. which I cannot find.
It seems to do the pretty formatting, so where is the implementation?
.. yes, I found here and there something about the syntax, the algebra etc., however not that much documentation.
If someone knows where I can find the implementation of what Bob DC is using (not the bash wrapper, the Java implementation), please kindly hint at it :)
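For what it's worth, a minimal sketch of the same pretty-printing done directly through the Jena API: QueryFactory.create() parses the string into a query object, and serialize() re-emits it indented and normalized, which is essentially what qparse does under the hood:

    import org.apache.jena.query.Query;
    import org.apache.jena.query.QueryFactory;

    public class PrettyPrintSparql {
        public static void main(String[] args) {
            String raw = "SELECT ?s WHERE{?s a <http://example.org/Thing>.}";
            // Parse (building Jena's internal query structure), then
            // re-serialize: the output comes back formatted.
            Query query = QueryFactory.create(raw);
            System.out.print(query.serialize());
        }
    }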
Thank you
I’m looking for gratis software, or solutions, to visualize a knowledge graph (RDF-based, of course) in a graph-like fashion.
Input: I have a SPARQL endpoint (or, if necessary, exports as Turtle files or similar).
I’m thinking of an interactive visualization which allows users to explore/navigate the graph.
Ideally something that can be used by non-technical end users. Would be great if it can be embedded in webpages (context-aware, so that I could visualize the relevant part of the graph depending on the resource the page is about), but a stand-alone (web-based/desktop) tool would be fine as well.
GraphDB (even in its gratis version) offers a visualization feature. It’s quite nice, but not really intended for / usable by end users.
Ontodia – the project seems to be inactive (last commit from 2020, and the repo is archived now). I didn’t use it yet myself, but will probably explore it if there is no better solution out there.
Looks like PURL.org is down due to the Internet Archive cyber attack. Unfortunately, a lot of public ontologies use it as a permanent URL service. While they are fixing it, how do you people deal with it?
Any good workaround hints?
Hello everyone,
I'm working with two ontologies, A and B, which share the same semantics but are expressed using different syntaxes. Does the difference in syntax mean that it's impossible to find correspondences/mappings between them?
From my understanding, ontology alignment typically focuses on semantic correspondences, so even if the syntax is different, mapping tools should still be able to find them based on the meaning of terms. Is that correct?
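Assuming "syntax" here means serialization format (e.g. RDF/XML vs. Turtle): the syntax disappears at parse time, so alignment tools see the same graph structures either way. A minimal Apache Jena sketch with hypothetical file names:

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.riot.RDFDataMgr;

    public class SyntaxDoesNotMatter {
        public static void main(String[] args) {
            // Same ontology in two serializations (format inferred from extension).
            Model a = RDFDataMgr.loadModel("ontologyA.rdf"); // RDF/XML
            Model b = RDFDataMgr.loadModel("ontologyB.ttl"); // Turtle
            // True if the parsed graphs are structurally identical:
            // matching tools operate at this level, not on the text.
            System.out.println(a.isIsomorphicWith(b));
        }
    }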
Thank you in advance :)
I have an ontology written using SPIN. When I run it in TopBraid, it generates all the inferences I need. However, when I run it using the SPIN API (https://www.topbraid.org/spin/api/), only part of the inferences are generated. This happens because of the "spif" functions. How can I work around this issue to get all my results using the API? I've tried several solutions, and if anyone could suggest a path forward (or detail how to make my failed attempts work), I would greatly appreciate it. So far, I've tried the following (one more sketch follows after this list):
- Rewriting the spif functions (this worked for most, but some, like "spif:split", didn't work).
- Implementing the function in Java.
- Replacing "spif" with another function, such as "apf:strSplit".