/r/semanticweb
A subreddit dedicated to all things Linked Data. Links, questions, discussions, etc. on RDF, metadata, inferencing, microformats, SPARQL, ...
The semantic web or linked data movement is an ongoing, W3C-supported push to extend current WWW technologies and practices towards a machine-readable web of data.
Anything related to this push is welcome in /r/semanticweb.
Hi!
I'm trying to find a SPARQL endpoint that provides conversion rates from EUR to other currencies, but I'm having a tough time locating one. Any suggestions would be greatly appreciated!
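For reference, the shape of the query I want to run looks roughly like this; the ex: vocabulary is made up, since I haven't found a real endpoint yet:

    PREFIX ex: <http://example.org/currency#>   # made-up vocabulary, just to show the shape

    SELECT ?currency ?rate ?date WHERE {
      ?obs ex:baseCurrency   "EUR" ;
           ex:targetCurrency ?currency ;
           ex:rate           ?rate ;
           ex:date           ?date .
    }
    ORDER BY ?currency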
Thanks in advance!
Looking for recommendations for a book or site that gives a good practical introduction to ontology engineering. There are a couple on the market, but they're pricey, so I'm hoping y'all might have some insight.
Hello,
maybe somebody knows this, or has solved it some other way.
The issue: I want a (simple, nothing fancy needed) SPARQL pretty formatter. There is one for JS, https://github.com/sparqling/sparql-formatter/blob/main/src/formatter.js, and it uses an AST.
Bob DC wrote about using the command-line uparse (https://www.bobdc.com/blog/jenagems/), and looking into the Jena code on GitHub (its bin/bat scripts), it uses "arq.parse"
... which I cannot find.
It seems to do the pretty formatting, but where is the implementation?
Yes, I found bits here and there about the syntax, the algebra, etc., but not that much documentation.
If someone knows where I can find the implementation of what Bob DC is using (not the bash script, the Java implementation), please kindly point me to it :)
thank you
I’m looking for gratis software, or solutions, to visualize a knowledge graph (RDF-based, of course) in a graph-like fashion.
Input: I have a SPARQL endpoint (or, if necessary, exports as Turtle files or similar).
I’m thinking of an interactive visualization which allows users to explore/navigate the graph.
Ideally something that can be used by non-technical end users. Would be great if it can be embedded in webpages (context-aware, so that I could visualize the relevant part of the graph depending on the resource the page is about), but a stand-alone (web-based/desktop) tool would be fine as well.
GraphDB (even in its gratis version) offers a visualization feature. It’s quite nice, but not really intended for / usable by end users.
Ontodia – the project seems to be inactive (last commit from 2020, and the repo is archived now). I haven't used it myself yet, but I will probably explore it if there is no better solution out there.
Looks like PURL.org is down due to the Internet Archive cyber attack. Unfortunately, a lot of public ontologies use it for permanent URLs. While they are fixing it, how do you all deal with it?
Any good workaround hints?
Hello everyone,
I’m working with two ontologies, A and B, which share the same semantics but are expressed using different syntaxes. Does the difference in syntax mean that it’s impossible to find correspondence/mappings between them?
From my understanding, ontology alignment typically focuses on semantic correspondences, so even if the syntax is different, mapping tools should still be able to find them based on the meaning of terms. Is that correct?
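To make concrete what I mean by "same semantics, different syntax", here is a deliberately tiny Turtle illustration (made-up ex: namespace); both spellings below yield the identical triple once parsed:

    @prefix owl: <http://www.w3.org/2002/07/owl#> .
    @prefix ex:  <http://example.org/onto#> .

    # Spelling 1: prefixed names and the "a" keyword
    ex:Employee a owl:Class .

    # Spelling 2: absolute IRIs, rdf:type written out
    <http://example.org/onto#Employee>
        <http://www.w3.org/1999/02/22-rdf-syntax-ns#type>
        <http://www.w3.org/2002/07/owl#Class> .

My assumption is that alignment tools parse everything into a common model first, so even different serialization formats (RDF/XML vs Turtle) shouldn't matter - is that right?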
Thank you in advance :)
I have an ontology written using SPIN. When I run it in TopBraid, it generates all the inferences I need. However, when I run it using the SPIN API (https://www.topbraid.org/spin/api/), only part of the inferences are generated. This happens because of the "spif" functions. How can I work around this issue to get all my results using the API? I've tried several solutions, and if anyone could suggest a path forward (or detail how to make my failed attempts work), I would greatly appreciate it. So far, I’ve tried:
Rewriting the spif functions (this worked for most, but some, like "spif:split," didn’t work).
Implementing the function in Java.
Replacing "spif" with another function, such as "apf:strSplit" (rough sketch of that attempt below).
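For reference, the apf:strSplit attempt looks roughly like this (assuming Jena's ARQ property-function library is available underneath, and that I have the calling convention right):

    PREFIX apf: <http://jena.apache.org/ARQ/property#>

    SELECT ?part WHERE {
      VALUES ?csv { "a,b,c" }
      ?part apf:strSplit (?csv ",") .   # should bind ?part once per token: "a", "b", "c"
    }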
I am looking forward to testing this out! People usually try to do lists with a SPARQL group_concat(), which is yucky.
The main feature for Jena 5.2.0 is an implementation of CDT "Composite Datatype Literals", an experimental idea for dealing with maps and lists in RDF and SPARQL while remaining within RDF 1.1.
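For contrast, the kind of GROUP_CONCAT workaround mentioned above looks something like this (made-up ex: vocabulary):

    PREFIX ex: <http://example.org/>

    SELECT ?person (GROUP_CONCAT(?email; SEPARATOR=", ") AS ?emails) WHERE {
      ?person ex:email ?email .
    }
    GROUP BY ?person

The result is a single delimited string per person, which then has to be re-split downstream; a real list or map datatype avoids that round-trip.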
Happy Monday! Take a few minutes to watch my second video describing/defining ontology. Good ontologies are languages, graphs, and models.
https://youtu.be/w9s16ICbndg?si=nptNSzR5NkY8A2ac
Thanks so much to those who liked and shared the first video. More of that is welcome as I get my channel up and running. And also many thanks to those who gave feedback to help me get better; I have a lot of improving to do, and I'm eager to learn. If you have suggestions on things I could do better, or topics in philosophy and AI you'd like to see covered, let me know.
This video is the first in a series to help understand what ontologies are in the context of philosophy and the semantic web. I hope you enjoy it!
I would like to know whether it's feasible to apply a semantic data model such as a thesaurus/taxonomy in SharePoint in order to improve document search.
For example, I build a taxonomy of topics and apply it to all my documents, and if a user enters a topic or one of its synonyms, it returns the set of documents with that topic.
I heard about Microsoft Syntex:
https://learn.microsoft.com/fr-fr/microsoft-365/syntex/skos-format-reference
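To make the idea concrete, the kind of SKOS record I have in mind looks like this (made-up namespace; the linked Syntex page describes the exact format it expects):

    @prefix skos: <http://www.w3.org/2004/02/skos/core#> .
    @prefix ex:   <http://example.org/taxonomy/> .   # made-up namespace

    ex:InvoiceProcessing a skos:Concept ;
        skos:prefLabel "Invoice processing"@en ;
        skos:altLabel  "Billing"@en, "Facturation"@fr ;   # synonyms to be matched in search
        skos:broader   ex:Finance .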
Any feedback would be amazing. Thanks a lot!
https://new.mta.info/article/mta-open-data-challenge
This would be a fun opportunity to express some NYC data using an ontology.
I did something similar a few years ago, but I used the Wikidata ontology. I wouldn't use that ontology again. But I think gist or CCO would be good candidates for this current challenge.
Hi all, I'm new to the semantic web and ontologies. I'm doing a university project about building a job-seeking website, but I have several questions.
Sorry for the dumb questions and my bad English; any help would be greatly appreciated, as I'm trying to learn and apply this new knowledge to my project!
Hi. I don't know whether this is the right place to share this, but I have a list of approx. 7k actor names and I need to find their birthdays (at least the year they were born). I've tried SPARQL but couldn't wrap my head around it. I could scrape some sites with the list at hand, but that would take a lot of time/effort. I mean, even half of the records with birth dates would be OK for me. I'd be thankful if someone could come up with a feasible approach.
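One feasible route, since most well-known actors are in Wikidata: batch the names into a VALUES block on the public query service and match by label. A rough sketch (the two names are just placeholders):

    # Sketch for https://query.wikidata.org/ (wd:, wdt: and rdfs: are predefined there).
    # Feed the 7k names in chunks of a few hundred via the VALUES block.
    SELECT ?name ?person ?dob WHERE {
      VALUES ?name { "Tom Hanks"@en "Meryl Streep"@en }   # placeholders for your list
      ?person rdfs:label ?name ;
              wdt:P106  wd:Q33999 ;   # occupation: actor, to skip namesakes
              wdt:P569  ?dob .        # date of birth
    }

Exact label matching will miss some spellings and transliterations, so coverage won't be 100%, but that seems acceptable for this case.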
Hi everyone,
Given the excitement about the first benchmark on the best RDF triplestore/graph database, I've decided to perform another benchmark.
This benchmark will focus on different ontology development environment tools for high-impact, large-scale projects. I would love to get your recommendations on this one too.
If you have any experience with tools like Protégé, TopBraid, Stardog Studio/Designer, LinkedDataHub, Metaphactory, AtomicServer, or others, please share your thoughts! Pros, cons, and specific use cases are all appreciated.
Thanks in advance!
Some considerations:
Hi, I searched for an incident/accident ontology on W3C and in published research papers, and could only find an incident management ontology, but not a complete domain ontology under BFO or ISO 15926.
There seems to be a W3C-recommended road accident ontology, but nothing more. Can you help point me to something you know of?
Hey! We are Atman Labs -- our mission is to emulate human experts in software.
We are building the next generation of systems that can structure and reason about knowledge like human experts do to solve complex tasks. These systems will semantically understand every piece of content on the internet like humans do, extracting and joining entities and relationships across text, images, and videos to create web-scale knowledge graphs.
These knowledge graphs will not only fulfill the vision of the Semantic Web, but when combined with reinforcement learning, they will also serve as the foundation for a new type of search engine that is deeply knowledgeable, proactive and emulates interacting with a human expert to solve complex goals.
If you are excited by working on the frontier of knowledge graphs, graph embeddings/GNNs, computer vision/NLP, and are bold enough to want to disrupt Google, apply to our founding team!
Hi everyone, I'm planning to load datasets like Wikidata and work with them, but I have some questions about the kind of computer I need. I'm referring to things like memory size, processor, etc. What specs should the computer have?
Thanks in advance.
Hey all! Hope this is the right place to ask. I'm currently looking for a repository of inconsistent OWL ontologies. Does anyone have a lead on where to find one, or how to create one? Any help would be greatly appreciated!
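If it comes down to generating them, the smallest kind of test case looks like this (made-up namespace): an individual asserted into two classes that are declared disjoint.

    @prefix owl: <http://www.w3.org/2002/07/owl#> .
    @prefix ex:  <http://example.org/inconsistent#> .

    ex:Cat a owl:Class .
    ex:Dog a owl:Class ;
        owl:disjointWith ex:Cat .

    # Asserting an individual into both disjoint classes makes the ontology inconsistent.
    ex:Felix a ex:Cat, ex:Dog .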
I am trying to assign each Wikidata item a main type, e.g. Barack Obama - Person, orange juice - Beverage, jacket - Clothing, tree - Plant, Mona Lisa - Painting, etc. I assume that to do this in a meaningful way for all ~100M Wikidata items, it would need 1000-2000 types. I want to use a pre-existing ontology/taxonomy for this, such as DBpedia (700-800 types; very good, but I feel that some important types are missing), schema.org (600-700 types; also quite good, but some types are missing and many exist that are irrelevant for my use case), or SUMO. Am I missing any other existing classification system worth considering? There are of course many domain-specific systems, but those are not useful for my purpose.
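The raw data side seems manageable; for a handful of items the instance-of/subclass-of information can be pulled from the query service like this (for all ~100M items I would work from the dumps instead; Q76 and Q12418 should be Barack Obama and the Mona Lisa):

    SELECT ?item ?class ?superclass WHERE {
      VALUES ?item { wd:Q76 wd:Q12418 }
      ?item wdt:P31 ?class .                       # instance of
      OPTIONAL { ?class wdt:P279+ ?superclass . }  # subclass-of chain, for mapping to coarser buckets
    }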
Thanks!
Does anyone have a list of important conferences in the semantic web area? What I have:
Is there any free online bibliography where Syntax and Semantics are treated as separate chapters? For example, chapter 2.2 is syntax and chapter 2.3 is semantics. I am currently writing my MSc thesis and this would be really helpful.
Thank you in advance :)
Hi everyone,
I'm currently performing a benchmark on different RDF store options for high-impact, large-scale projects, and would love to get your recommendations.
If you have any experience with tools like MarkLogic, Virtuoso, Apache Jena, GraphDB, Amazon Neptune, Stardog, AllegroGraph, Blazegraph, or others, please share your thoughts! Pros, cons, and specific use cases are all appreciated.
UPDATE: Based on your amazing comments, here are some considerations:
Thanks in advance!
I want to be able to use the same union of classes as a domain in multiple places. Ideally, I would like to encapsulate it as one object or URI. How would one go about this?
eg.
ex:SomeProperty rdfs:domain [ owl:unionOf (ex:Class1 ex:Class2) ] .
vs.
ex:SomeProperty rdfs:domain ex:UnionOfClass1AndClass2 .
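One way to get the second form is to mint a named class and tie it to the union with owl:equivalentClass, then reuse that URI everywhere; a minimal sketch with made-up names:

    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix ex:   <http://example.org/ns#> .

    # Give the union its own URI once...
    ex:Class1OrClass2 a owl:Class ;
        owl:equivalentClass [ owl:unionOf ( ex:Class1 ex:Class2 ) ] .

    # ...then reuse it wherever a domain (or range) needs it.
    ex:SomeProperty  rdfs:domain ex:Class1OrClass2 .
    ex:OtherProperty rdfs:domain ex:Class1OrClass2 .

With owl:equivalentClass the named class and the anonymous union are interchangeable for a reasoner; a one-directional rdfs:subClassOf would not give that.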
I’m currently in need of a platform similar to the Open Metadata Registry. Unfortunately, I can’t access their instructions, and I’m having trouble finding comparable tools.
Are there any other platforms out there that offer similar features for metadata management, especially in terms of interoperability and standardization across systems?
Hi folks, I'm an information architect who came up through libraries. I've been asked to create a repository for a really complex data/systems architecture, and I keep feeling like a knowledge graph is what I need to build, but I've never made one. A lot of the resources online are by companies that mostly describe how their expensive platform does it for you, but I need to know whether it will meet my needs and whether I can build one with my company's existing tools or not. Any suggestions on how I can understand this stuff better?