/r/semanticweb
A subreddit dedicated to all things Linked Data. Links, questions, discussions, etc. on RDF, metadata, inferencing, microformats, SPARQL, ...
The semantic web or linked data movement is an ongoing, W3C-supported push to extend current WWW technologies and practices toward a machine-readable web of data.
Anything related to this push is welcome in /r/semanticweb.
Are there any practical personal knowledge graphs that people can recommend? By now I've got decades of emails, documents, and notes that I'd like to index, auto-tag with JSON-LD where practical, and organize under consistent categories, with the ability to create relationships between items, all in a knowledge graph, and then use the whole thing for RAG with a local LLM. I would see this as useful for recall/relations and also for technical knowledge development. Yes, this is essentially what Google and others are building toward, but I'd like a local version.
The use case seems straightforward and generally useful, but are there any specific projects like this? I guess Logseq has some of these features, but it's not really designed for managing imported information.
Hi all!
Considering Large Language Models and other large and complex AI systems are growing in popularity daily, I am curious to ask you about Large Knowledge Graphs.
When I say Large Knowledge Graph (LKG) I mean a structured representation of vast amounts of interconnected information, typically modeled as entities (nodes) and their relationships (edges) in a graph format. It integrates diverse data sources, providing semantic context through ontologies, metadata and other knowledge representations. LKGs are designed for scalability, enabling advanced reasoning, querying, and analytics, and are widely used in domains like AI, search engines, and decision-making systems to extract insights and support complex tasks.
And so, I am curious...
When dealing with large knowledge graphs/representations like ontologies, vocabularies, catalogs, etc., how do you structure your work?
- Do you think about a specific file-structure? (Knowledge Representation oriented, Class oriented, Domain oriented...)
- Do you use a single source with Named Graphs or do you distribute?
- If you distribute, is your distribution on different systems, triplestores or graph databases?
- Do you use any ontology editors or ontology management systems for Large Knowledge Graphs?
Feel free to share any knowledge that you might consider valuable to the thread, and to everybody interested in Large Knowledge Graphs.
Thanks in advance!
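On the named-graphs question above, a hedged sketch of what querying a single store partitioned into named graphs might look like. The graph IRIs and data shape here are made up for illustration, not from any real deployment:

```sparql
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Hypothetical layout: one triplestore, one named graph per source,
# e.g. the ontology in one graph and the instance data in another.
SELECT ?entity ?label
WHERE {
  GRAPH <http://example.org/graphs/schema> {
    ?class a owl:Class .
  }
  GRAPH <http://example.org/graphs/instances> {
    ?entity a ?class ;
            rdfs:label ?label .
  }
}
```

The trade-off is roughly this: named graphs keep cross-source joins cheap and transactional, while distributing across stores forces federation (SERVICE clauses) but scales each dataset independently.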
I know that AWS Neptune is a pay-as-you-go (PAYGo) SaaS offering, but I’m looking for something that supports SHACL and SPARQL.
I’ve also seen GraphDB enterprise available as an Azure VM, but it’s listed at a flat $95k per year.
Are there any other SaaS/PAYGo solutions out there in either Azure or AWS?
… or maybe something that is coming soon?
(azure preferred)
Hi everyone!
Looking for a full Knowledge Engineering Tech Stack for working with knowledge graphs, ontologies, taxonomies and other knowledge representations.
From tools for managing and storing (data layer), transforming and connecting (logic layer), and consuming knowledge (presentation) to frameworks, methodologies, maturity models, etc. this thread aims to provide us, innovators and enthusiasts, with tools and insights on how to make the most of our shared interests.
Also, feel free to share your take at any scale, from an individual or personal project to an international multilateral enterprise.
Extra points for full pipelines.
Here are some categories that might be useful to narrow down the scope of the tech stack:
Thanks in advance, and may this thread be useful to us all!
🚀RDF Developers, We Need Your Input! 🌐
Help shape the next-gen RDF tools for VS Code! Please take our
short survey to prioritize features like ontology management, validation, and visualizations.
Your feedback will directly influence tools built for the SemanticWeb community.
📋 Survey link:
[ https://app.opinionx.co/2f83eea9-cc92-4a43-babd-7a2ab25ddec0/intro ]
Let’s build smarter RDF tools together! 💡
Hi, is there a way to get the DBpedia-Wikidata mapping for the relations mentioned in the DBpedia dataset? Thanks in advance.
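One hedged starting point, assuming the DBpedia ontology still publishes owl:equivalentProperty links to Wikidata (worth verifying against the current endpoint):

```sparql
# Sketch: list DBpedia ontology properties that declare an equivalent
# Wikidata property. Intended for https://dbpedia.org/sparql.
PREFIX owl: <http://www.w3.org/2002/07/owl#>

SELECT ?dbpProp ?wdProp
WHERE {
  ?dbpProp owl:equivalentProperty ?wdProp .
  FILTER(STRSTARTS(STR(?wdProp), "http://www.wikidata.org/"))
}
```

The same pattern with owl:equivalentClass should surface the class mappings, if those are what you're after.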
Fairly new to all this btw!
I would like to use Protégé on Mac to visualize and load up an ontology. I have a working .owl file that I have tested in WebProtégé.
The application on Mac is giving me headache after headache.
I can't seem to open any files; I get stuck in an incessant loop asking for permissions. It seems others have had issues online, but I can't find any resolution that I can make heads or tails of. Does anyone have any advice?
Hi!
I'm trying to find a SPARQL endpoint that provides conversion rates from EUR to other currencies, but I'm having a tough time locating one. Any suggestions would be greatly appreciated!
Thanks in advance!
Looking for recommendations for a book or site that gives a good practical introduction to ontology engineering. There are a couple on the market, but they’re pricey, so I’m hoping y’all might have some insight.
Hello,
maybe somebody knows this or has solved it some other way.
The issue: I want a simple (no need for anything fancy) SPARQL pretty-formatter. There is one for JS, https://github.com/sparqling/sparql-formatter/blob/main/src/formatter.js, and it uses an AST.
Bob DC wrote about using the command-line qparse (https://www.bobdc.com/blog/jenagems/), and looking into the bin/bat scripts for it in the Jena GitHub code, it invokes "arq.qparse"
... which I cannot find.
It seems to do the pretty formatting, but where is the implementation?
Yes, I found bits and pieces about the syntax, the algebra, etc., but not much documentation.
If someone knows where I can find the implementation of what Bob DC is using (not the bash script, the Java implementation), please kindly hint at it :)
Thank you!
I’m looking for gratis software, or solutions, to visualize a knowledge graph (RDF-based, of course) in a graph-like fashion.
Input: I have a SPARQL endpoint (or, if necessary, exports as Turtle files or similar).
I’m thinking of an interactive visualization which allows users to explore/navigate the graph.
Ideally something that can be used by non-technical end users. Would be great if it can be embedded in webpages (context-aware, so that I could visualize the relevant part of the graph depending on the resource the page is about), but a stand-alone (web-based/desktop) tool would be fine as well.
GraphDB (even in its gratis version) offers a visualization feature. It’s quite nice, but not really intended for / usable by end users.
Ontodia – the project seems to be inactive (last commit in 2020, and the repo is now archived). I haven't used it myself yet, but will probably explore it if there is no better solution out there.
Looks like PURL.org is down due to the Internet Archive cyberattack. Unfortunately a lot of public ontologies use it as a permanent URL service. While they fix it, how do you deal with it?
Any good workaround hints?
Hello everyone,
I’m working with two ontologies, A and B, which share the same semantics but are expressed using different syntaxes. Does the difference in syntax mean that it’s impossible to find correspondence/mappings between them?
From my understanding, ontology alignment typically focuses on semantic correspondences, so even if the syntax is different, mapping tools should still be able to find them based on the meaning of terms. Is that correct?
Thank you in advance :)
I have an ontology written using SPIN. When I run it in TopBraid, it generates all the inferences I need. However, when I run it using the SPIN API (https://www.topbraid.org/spin/api/), only part of the inferences are generated. This happens because of the "spif" functions. How can I work around this issue to get all my results using the API? I've tried several solutions, and if anyone could suggest a path forward (or detail how to make my failed attempts work), I would greatly appreciate it. So far, I’ve tried:
- Rewriting the spif functions (this worked for most, but some, like "spif:split", didn’t work).
- Implementing the function in Java.
- Replacing "spif" with another function, such as "apf:strSplit".
I am looking forward to testing this out! People usually try to do lists with a SPARQL group_concat() which is yucky.
The main feature for Jena 5.2.0 is an implementation of CDT "Composite Datatype Literals", an experimental idea for dealing with maps and lists in RDF and SPARQL while remaining within RDF 1.1.
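For anyone who hasn't hit it: the GROUP_CONCAT workaround for lists mentioned above looks roughly like this (prefix and data shape are illustrative):

```sparql
# Collapse a set of members into one delimited string, losing
# ordering and datatypes in the process - hence "yucky".
PREFIX ex: <http://example.org/>

SELECT ?playlist (GROUP_CONCAT(?track; SEPARATOR=", ") AS ?tracks)
WHERE {
  ?playlist ex:hasTrack ?track .
}
GROUP BY ?playlist
```

A first-class list/map datatype would let the values stay structured instead of being flattened into a string the client has to re-parse.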
Happy Monday! Take a few minutes to watch my second video describing/defining ontology. Good ontologies are languages, graphs, and models.
https://youtu.be/w9s16ICbndg?si=nptNSzR5NkY8A2ac
Thanks so much to those who liked and shared the first video. More of that is welcome as I get my channel up and running. And also many thanks to those who gave feedback to help me get better; I have a lot of improving to do, and I'm eager to learn. If you have suggestions on things I could do better, or topics in philosophy and AI you'd like to see covered, let me know.
This video is the first in a series to help understand what ontologies are in the context of philosophy and the semantic web. I hope you enjoy it!
I would like to know if it’s feasible to apply a semantic data model such as a thesaurus/taxonomy in SharePoint in order to improve document search.
For example, I build a taxonomy of topics and apply it to all my documents, and if a user enters a topic or one of its synonyms, it returns the set of documents with that topic.
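If the taxonomy were modeled in SKOS, the synonym expansion you describe can be sketched as a SPARQL query; the document vocabulary (ex:hasTopic) and the search term are illustrative:

```sparql
# Sketch: find documents tagged with the concept whose preferred OR
# alternative (synonym) label matches what the user typed.
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX ex:   <http://example.org/>   # hypothetical document vocabulary

SELECT ?doc
WHERE {
  ?concept skos:prefLabel|skos:altLabel "budget"@en .
  ?doc ex:hasTopic ?concept .
}
```

The point is that synonyms live on the concept (skos:altLabel), so documents only need one tag each and the search layer does the expansion.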
I heard about Microsoft Syntex:
https://learn.microsoft.com/fr-fr/microsoft-365/syntex/skos-format-reference
Any feedback would be amazing. Thanks a lot!
https://new.mta.info/article/mta-open-data-challenge
This would be a fun opportunity to express some NYC data using an ontology.
I did something similar a few years ago, but I used the Wikidata ontology. I wouldn't use that ontology again. But I think gist or CCO would be good candidates for this current challenge.
Hi all, I'm new to the semantic web and ontology. I'm doing a university project on building a job-seeking website, but I have several questions.
Sorry for my dumb questions and my bad English. Any help would be greatly appreciated, as I'm trying to learn and apply this new knowledge to my project!
Hi. I don't know whether this is the right place to share this, but I have a list of approx. 7k actor names and I need to find their birthdays (at least the year they were born). I've tried SPARQL but couldn't wrap my head around it. I can scrape some sites with the list at hand, but that would take a lot of time/effort. I mean, even half of the records with birth dates would be OK for me. I'd be thankful if someone could come up with a feasible approach.
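One hedged approach, since Wikidata stores dates of birth as P569: batch the names through the Wikidata Query Service with VALUES, matching on labels. Ambiguous names will still need manual review, and very large batches should be split up:

```sparql
# Sketch for https://query.wikidata.org : look up birth dates for a
# small batch of actor names at a time.
PREFIX wd:   <http://www.wikidata.org/entity/>
PREFIX wdt:  <http://www.wikidata.org/prop/direct/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?name ?person ?birthDate
WHERE {
  VALUES ?name { "Tom Hanks"@en "Meryl Streep"@en }   # your list, batched
  ?person rdfs:label ?name ;
          wdt:P106 wd:Q33999 ;    # occupation: actor, to cut ambiguity
          wdt:P569 ?birthDate .   # date of birth
}
```

A few hundred names per VALUES block, scripted over the 7k list, should finish in minutes rather than the days that scraping would take.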
Hi everyone,
Given the excitement around the first benchmark, on the best RDF triplestore/graph database, I've decided to perform another benchmark.
This benchmark will focus on different ontology development environment tools for high-impact, large-scale projects. I would love to get your recommendations on this one too.
If you have any experience with tools like Protégé, TopBraid, Stardog Studio/Designer, LinkedDataHub, Metaphactory, AtomicServer, or others, please share your thoughts! Pros, cons, and specific use cases are all appreciated.
Thanks in advance!
Some considerations:
Hi, I searched for an incident/accident ontology on W3C and in published research papers, and could only find an incident-management ontology, not a complete domain ontology under BFO or ISO 15926.
There seems to be a W3C-recommended road accident ontology, but nothing more. Can you help point me to something you know of?
Hey! We are Atman Labs -- our mission is to emulate human experts in software.
We are building the next generation of systems that can structure and reason about knowledge like human experts do to solve complex tasks. These systems will semantically understand every piece of content on the internet like humans do, extracting and joining entities and relationships across text, images, and videos to create web-scale knowledge graphs.
These knowledge graphs will not only fulfill the vision of the Semantic Web, but when combined with reinforcement learning, they will also serve as the foundation for a new type of search engine that is deeply knowledgeable, proactive and emulates interacting with a human expert to solve complex goals.
If you are excited by working on the frontier of knowledge graphs, graph embeddings/GNNs, computer vision/NLP, and are bold enough to want to disrupt Google, apply to our founding team!
Hi everyone, I'm planning to load databases like Wikidata and work with them, but I have some questions about the type of computer I need. I'm referring to things like memory size, processor, and so on. What specs should the computer have?
Thanks in advance.
Hey all! Hope this is the right place to ask. I'm currently looking for a repository of inconsistent OWL ontologies. Does anyone have a lead on where to find or create one? Any help would be greatly appreciated!
I am trying to assign each Wikidata item a main type, e.g. Barack Obama - person, orange juice - beverage, jacket - clothing, tree - plant, Mona Lisa - painting, etc. I assume that, to do this in a meaningful way for all ~100M Wikidata items, it would need 1000-2000 types. I want to use a pre-existing ontology/taxonomy for this, such as DBpedia (700-800 types; very good, but I feel some important types are missing), schema.org (600-700 types; also quite good, but some types are missing and many exist that are irrelevant for my use case), or SUMO. Am I missing any other existing classification system worth considering? There are of course many domain-specific systems, but those are not useful for my purpose.
Thanks!
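Whatever target taxonomy you pick, the mapping step will likely involve walking each item's instance-of/subclass-of chain on Wikidata. A hedged sketch against the Wikidata Query Service:

```sparql
# Sketch: for one item, list its instance-of classes and their full
# superclass chain, which you can then map onto your ~1-2k target types.
PREFIX wd:  <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

SELECT DISTINCT ?class
WHERE {
  wd:Q76 wdt:P31/wdt:P279* ?class .   # Q76 = Barack Obama
}
```

For 100M items you would run this over a local dump rather than the public endpoint, but the P31/P279* path is the core of the mapping either way.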
Does anyone have a list of important conferences in the semantic web area? What I have:
Are there any free online references that treat syntax and semantics as separate chapters? For example, chapter 2.2 is syntax and then chapter 2.3 is semantics. I am currently writing my MSc thesis and this would be really helpful.
Thank you in advance :)