/r/epistemology
Epistemology is the branch of philosophy which studies knowledge. Conventionally, knowledge has been defined as true, justified belief, though this has increasingly come under critique. Major approaches include empiricism, rationalism, skepticism, pragmatism, and relativism.
What is Epistemology?
Why it's the study of knowledge!
What should you post?
Our Friends:
/r/epistemology
Some thoughts on the nature of ignorance
It seems that both senses and reason alone ar einsuficcent to arrviivng at truths, as we tend to experienc ethe world at a place and time from our subjective perspective, depending on senses for whihc Idon't have answers ("do we live inside a dream?" type questions) aswell as reason alone makes it hard to arrive at something as it's absed on senses of percieved experiences which tranlate as information which is filtrated by our innate abilities from where we reason, using imaignation, to form theories of what happenned to get to a place and where will that lead us. However a lot of things we haven't really experienced except for documents or things which may have been tricked in some way, making it difficult to have absolute certainity about somoething as it's still plausible that something different might have happenned, I guess if we connect how those things would connect to present-day stuff in the most logical way then the most probable answer would be the correct one, even though we can't have 100.00% certainity on it. How off-beat am I?
Determine which account is better ( Chisholms foundationalist account or Goldmans reliable presses account of justification) How would you defend this ?
Hello everyone, currently doing some school work and I’m super stuck. This is probably very basic but I need some help. The question is “ what is the generality problem and why is it a problem for Goldman account of justification?” If I could get some help on the first part that would be huge!!
What does it mean when you know something is true but can’t believe it’s true?
I hope it’s obvious that this is related to epistemology.
The context is trauma and recovery. Philosophically and epistemologically where are you when you intellectually evaluate something as having happened, but can’t believe it has happened? Psychologically this is shock and/or denial.
Does philosophy or epistemology have anything to say about this situation?
For context, this is partly for a project for my partner and I's Epistemology class, the goal being to reach a definition or understanding of it. I would love hear the different theories you all have. My current understanding is that in order to have what this thing called knowledge is, you must be able to understand the contents of the information. Furthermore, I do believe there is such thing as true and false knowledge, and that truthful knowledge is whatever is backed by reality and the laws of it...perhaps?
The question is based from a famous scene from the Boondocks:
"Well, what I'm saying is that there are known knowns and that there are known unknowns. But there are also unknown unknowns; things we don't know that we don't know."
Is it possible for there to be an "unknown known", as in, some thing p which you know but which you are unaware that you know? Does knowing something imply that you know that you know it? Here are some examples that I managed to come up with:
- If you know that A is B, and that B is C, then do you know that A is C? It's perfectly contained within what you already know, but then again, just because you know the axioms and postulates of Euclidean Geometry doesn't mean you know anything about the angle properties of a transversal line.
- There is the idea in psychology that our minds record all of our experiences, and that the issue is simply retrieving them. For example, a woman woke up from a coma only being able to recite Homer, even though she was not and never formally learned Greek! Is to "know" to actively possess some information or is it for it to be contained somewhere in your mind for hypothetical retrieval?
https://mindmatters.ai/2019/09/do-we-actually-remember-everything/
- And then the basic, "I didn't know I knew that!" like hearing a song and knowing the lyrics even though you never make an effort to learn them or thought you knew them. You did know it, but you didn't know you did. An unknown known.
Are any of these examples convincing? Any rebuttals? Thank you for your replies!
I want to lay out my perspective on the nature of truth, logic, and reality. This isn't going to be a typical philosophical take - I'm not interested in the usual debates about empiricism vs rationalism or the nature of consciousness. Instead, I want to focus on something more fundamental: the logical structure of reality itself.
Let's start with the most basic principle: the law of excluded middle. For any proposition P, either P is true or P is false. This isn't just a useful assumption or a quirk of human thinking - it's a fundamental truth about reality itself. There is no middle ground, no "sort of true" or "partially false." When people claim to find violations of this (in quantum mechanics, fuzzy logic, etc.), they're really just being imprecise about what they're actually claiming.
Here's where I break from standard approaches: while I maintain excluded middle, I reject the classical equivalence between negated universal statements and existential claims. In other words, if I say "not everything is red," I'm NOT automatically claiming "something is not red." This might seem like a minor technical point, but it's crucial. Existence claims require separate, explicit justification. You can't smuggle them in through logical sleight of hand.
This ties into a broader point about universal quantification. When I make a universal claim, I'm not implicitly claiming anything exists. Empty domains are perfectly coherent. This might sound abstract, but it has huge implications for how we think about possibility, necessity, and existence.
Let's talk about quantum mechanics, since that's often where these discussions end up. The uncertainty principle and quantum superposition don't violate excluded middle at all. When we say a particle is in a superposition, we're describing our knowledge state, not claiming the particle somehow violates basic logic. Each well-formed proposition about the particle's state has a definite truth value, regardless of our ability to measure it. The limits are on measurement, not on truth.
This connects to a broader point about truth and knowledge. Truth values exist independently of our ability to know them. When we use probability or statistics, we're describing our epistemic limitations, not fundamental randomness in reality. The future has definite truth values, even if we can't access them. Our inability to predict with certainty reflects our ignorance, not inherent indeterminacy.
Another crucial principle: formal verifiability. Every meaningful claim should be mechanically verifiable - checkable by algorithm. Natural language is just for communication; real precision requires formal logic. And we should strive for axiomatic minimalism - using the smallest possible set of logically independent axioms. Each additional axiom is a potential point of failure and needs to prove its necessity.
This perspective has major implications for AI and knowledge representation. The current focus on statistical learning and pattern matching is fundamentally limited. We need systems built on verified logical foundations with minimal axioms, where each step of reasoning is formally verifiable.
Some will say this is too rigid, that reality is messier than pure logic. But I'd argue the opposite - reality's apparent messiness comes from our imprecise ways of thinking about it. When we're truly rigorous, patterns emerge from simple foundations.
This isn't just philosophical navel-gazing. It suggests concrete approaches to building better AI systems, understanding physical theories, and reasoning about complex systems. But more importantly, it offers a way to think about reality that doesn't require giving up classical logic while still handling all the phenomena that usually push people toward non-classical approaches.
I'm interested in your thoughts, particularly from those who work in formal logic, theoretical physics, or AI. What are the potential holes in this perspective? Where does it succeed or fail in handling edge cases? Let's have a rigorous discussion.
Here is Jevons:
It is impossible therefore that we should have any reason to disbelieve rather than to believe a statement about things of which we know nothing. We can hardly indeed invent a proposition concerning the truth of which we are absolutely ignorant, except when we are entirely ignorant of the terms used. If I ask the reader to assign the odds that a "Platythliptic Coefficient is positive" he will hardly see his way to doing so, unless he regard them as even.
Here is Keynes response:
Jevons's particular example, however, is also open to the objection that we do not even know the meaning of the subject of the proposition. Would he maintain that there is any sense in saying that for those who know no Arabic the probability of every statement expressed in Arabic is even?
Pettigrew presents an argument in agreement with Jevons:
In Bayesian epistemology, the problem of the priors is this: How should we set our credences (or degrees of belief) in the absence of evidence? That is, how should we set our prior or initial credences, the credences with which we begin our credal life? David Lewis liked to call an agent at the beginning of her credal journey a superbaby. The problem of the priors asks for the norms that govern these superbabies. The Principle of Indifference gives a very restrictive answer. It demands that such an agent divide her credences equally over all possibilities. That is, according to the Principle of Indifference, only one initial credence function is permissible, namely, the uniform distribution. In this paper, we offer a novel argument for the Principle of Indifference. I call it the Argument from Accuracy.
I think Jevons is right, that the ultimate original prior for any proposition is 1/2, because the only background information we have about a proposition whose meaning we don't understand is that it is either true or false.
I think this is extremely important when interpreting the epistemic meaning of probability. The odds form of Bayes theorem is this: O(H|E)/O(H)=P(E|H)/P(E|~H). If O(H) is equal to 1 for all propositions, then the equation reduces to O(H|E)=P(E|H)/P(E|~H). The first equation requires the Bayes Factor and the prior to calculate the posterior, while in the second equation the Bayes Factor and the posterior are equivalent. The right side is typically seen as the strength of evidence, while the left side is seen as a rational degree of belief. If O(H)=1, then we can interpret probabilities directly as the balance of evidence, rather than a rational degree of belief, which I think is much more intuitive. So when someone says, "The defendant is probably guilty", they mean that they judge the balance of evidence favors guilt. They don't mean their degree of belief in guilt is greater than 0.5 based on the evidence.
In summary, I think a good case can be made in this way that probabilities are judgements of balances of evidence, but it hinges on the idea that the ultimate original prior for any proposition is 0.5.
What do you think?
Is something that is objectively true any more or less valid or true than something that is subjectively true? Are they not comparable in that sense? Please define objective and subjective.
My professor never taught us what it means, and I cannot find a universal answer online. I was wondering if any of you know what it means. If you do, it would literally save my life
Hi! Richard Feynman spoke once about the difference between knowledge and understanding, using an experience he had with his dad. His dad rattled off the name of a brown thrasher (bird) in several different languages. He explained how you can know something about a bird (names), but understand nothing about the bird itself.
To relate to the world today, we must begin with correct perspectives of understanding. Coding and public policy are two vastly different fields...yet there are principles and pathways that one can follow to ensure a correct perspective and relationship are reached. Epistemology seems to be the way to do that.
All said, I am looking for a broad overview book that discusses principles as opposed to a rabbit hole dive. A great example would be Eugenia Chang's The Art of Logic in an Illogical World, which provided me with a fascinating a clear understanding of the world of mathematics, and it's role in contemporary society, and of course, in its ability to guide us in how to think. I would love an epistemological book that shares similarities to this.
I am new to the concept of epistemology (by name). I think it’ll prove more useful than other similar, more colloquial terms, like “mental models” and “cognitive frameworks”, in my search for development of thought.
I wonder if you might recommend some large well-respected writings on the subject, or even just your favorites.
I look forward to some very good reading.
test
This is one of the statements...
'God not existing is not a fact.'
... and this is the other...
'You cannot assert as non factual that which you cannot show to be non factual.'
The statements conflict but I see both of them as being true.
What am I missing?
I'm coming from a place where I see objectivity as logically, technically, non-existent. I learned what it meant in grade or high school and it made sense. A scale telling me I weigh 200 lbs is objective. Me thinking I'm fat is subjective. (I don't really think in that way, but its an example of objectivity I've been thinking about). But the definitions of objectivity are the problem. No ideas that humans can have or state exist without a human consciousness, even "a scale is telling me I weigh 200lbs." That idea cannot exist without a human brain thinking about it, and no human brain thinks about that idea exactly the same way. Same as no human brain thinks of any given word in the same exact way. If the universe had other conscoiusnesses, but no human consciousnesses, we could not say the idea existed. We don't know how the other consciousnesses think about the universe. If there were no consciousnesses at all, there'd be no ideas at all.
But there is also this relationship between "a scale is telling me I weigh 200lbs" and "I'm fat" where I see one as being MORE objective, or more standardized, less influenced by human perception. I understand if someone says the scale info is objective, what they mean, to a certain degree. And that is useful. But also, if I was arguing logically, I would not say there is no subjectivity involved. So what is going on with my cognitive dissonance? Is there some false equivocation going on? Its like I'm ok with the colloquial idea of objectivity, but not the logical arguement of objectivity.
Is that something that can be determined?
I enjoy thinking about what I do and do not know. I am motivated to try to become more aware of myself.
These two ideas have lead me to be interested in epistemology. But, I am somewhat discouraged by posts in various epistemology forums of people who believe they know something, that to me appears to be innacurate and often times logically fallacious. I have begun to worry that more than a tool to understand what we know, epistemology could serve as a tool to rationalize what one wants to "know".
The quote, "We are not thinking machines that feel, rather we are feeling machines that think" currently holds great weight in my mind. I wonder whether or not we are just creating complicated rationalizations for what feels good to "know".
Does this worry make sense to anyone else?
What ideas/advances in epistemology do you think have really improved your understanding of what you know and don't know?
For centuries, knowledge was traditionally understood as "justified true belief"—the idea that if you believe something, it’s true, and you have justification for it, then you know it. But then Gettier’s problem threw this idea into question by showing that someone could meet all three conditions and still not have knowledge.
This has led me to wonder:
I’m curious to hear thoughts from the community on whether justified true belief still holds value or if we need a new approach altogether.
I want to say that survival seems like the most stable foundation to build from, but it can't be that straight-forward. Seeking knowledge in situations of survival versus situations where one is stable or even where they live a life where they are thriving would all yield truths coming from/about varying contexts. That said, if one doesn't have the base knowledge to survive or knowledge doesn't hold importance to help us continue living, how can the kind of knowledge found from other contexts hold relevance? It feels in this way like while things in the universe are near infinitely complex at times, humanity would do well as both individuals and societies to make sure that we're not just focused on challenging limits, but also recognizing where we started from. Maybe we would do better by making sure our knowledge is survivable over time and in a variety of contexts?
Does this get us closer to universal truths? I suppose the answer could be yes and no, depending on the angle you look at it from. Setting a foundation or focus on survival first would no doubt make the resolutions we seek highly humancentric. If we're focused more fully on ourselves and our own survival as a species into the distant future, could this not alienate us from seeking truths that are more "universal" in nature by challenging limits? On the other hand, without holding reverence for both the survival of ourselves individually, and the survival of the human race (which every individual is a part of), what difference would finding a "universal truth" or new approaches to finding more knowledge make if we don't survive?
Is neglecting that we're still very much in a world we need to survive through whilst having tunnel vision for seeking ways to transcend our very experience possibly not challenging the way we think, but instead breaching our foundational needs?
In Aion as I understand it, Jung asserted that the scientific community needed to treat the psyche as something separate from the chemical reactions in our bodies because science is based on objectivity, but we obtain all knowledge from, or at least filter it through, our psyche.
That sounds to me like epistemology, and since he wrote that in 1951, I'm curious to hear what modern epistemology would have to say about that.
Me
Do you know what 2+2 is?
Claude
Yes, 2+2 equals 4.
Me
How did you know that?
Claude
I know that 2+2=4 because it's a fundamental mathematical fact that I was trained on as part of my knowledge base. This is one of the most basic arithmetic equations, typically taught to children at a very young age.
My knowledge of mathematics, including simple arithmetic like this, comes from the data I was trained on, which includes a vast amount of mathematical information ranging from basic addition to complex calculus and beyond. I don't actually perform calculations in real-time, but rather I recognize and recall information I've been trained on.
Would you like me to explain more about how addition works or discuss any other aspects of basic arithmetic?
Me
Were you trained on what 1549087 + 2222211110 equals?
Claude
No, I wasn't specifically trained on the sum of 1549087 and 2222211110. For large numbers like these, I wasn't trained on every possible combination. Instead, I use my understanding of the rules of addition to calculate the result when needed.
Let me perform that calculation for you step-by-step:
1549087
2222211110
2223760197
So, 1549087 + 2222211110 = 2223760197.
This demonstrates how I can apply mathematical principles to solve problems, even if I haven't been explicitly trained on that specific calculation. Is there anything else you'd like me to explain about this process or any other mathematical concepts?
Me
What principles did you use to get that answer?
Claude
I used the fundamental principles of addition to arrive at that answer. Let me break down the process:
Place value: Each digit in a number represents a specific value based on its position (ones, tens, hundreds, etc.).
Alignment: When adding large numbers, we align them vertically by their place values, starting from the rightmost digit.
Addition by column: We add the digits in each column, starting from the right.
Carrying: When the sum in a column exceeds 9, we "carry" the tens digit to the next column to the left.
Let's go through the calculation step by step:
1549087
2222211110
a) Starting from the rightmost column:
7 + 0 = 7
b) Moving left:
8 + 1 = 9
c) Next column:
0 + 1 = 1
d) Next:
9 + 1 = 10, so we write down 0 and carry 1
e) Continuing:
4 + 1 (carried) + 1 = 6
f) Next:
5 + 1 = 6
g) Next:
1 + 2 = 3
h) Finally:
0 + 2 = 2
So the final result is 2223760197.
These principles allow me to perform additions of any size, even if I haven't been specifically trained on that particular combination of numbers. Is there any part of this process you'd like me to elaborate on?
Me
But you are just a next-token predictor. How does next-token prediction lead to the ability to know that you need to follow arithmetic algorithms, and then perform them accurately?
Claude
You've raised an insightful question that gets to the heart of how large language models like myself actually function. You're correct that at my core, I am a next-token predictor trained on a vast amount of text data. The way this translates into performing tasks like arithmetic is quite fascinating and complex:
Pattern recognition: Through exposure to countless examples of mathematical problems and their solutions during training, I've learned to recognize patterns in how numbers are manipulated and calculations are performed.
Implicit learning of algorithms: While I wasn't explicitly programmed with step-by-step arithmetic algorithms, my training allowed me to implicitly learn these procedures from the data I was exposed to.
Contextual understanding: When presented with a math problem, I can recognize the context and "know" that I need to engage in a particular type of reasoning or problem-solving approach.
Simulated step-by-step thinking: When I encounter a math problem, my responses are generated in a way that simulates the step-by-step process of solving the problem, even though I'm not actually performing calculations in the same way a calculator would.
Generalization: My training allows me to generalize from the examples I've seen to new, unseen problems.
Token-by-token generation: As I generate each token of the response, I'm constantly predicting what should come next based on all the preceding context, which allows me to maintain consistency and accuracy throughout the problem-solving process.
It's important to note that this process isn't perfect. Unlike a calculator, which will always give the correct answer for operations within its capabilities, I can make mistakes, especially with very large or complex calculations. My ability to perform math is an emergent property of my training rather than a hard-coded feature.
You're absolutely right to question this, as it touches on deep issues in AI regarding the nature of intelligence, learning, and problem-solving. The fact that next-token prediction can lead to these kinds of capabilities is still not fully understood and is an active area of research in machine learning and cognitive science.
Me
I've studied a little epistemology (and although I am only an amateur) I've never come across any discussion in the literature about any of those 6 things as being necessary qualities for any rigorous definition of what it means to "know" something. (please correct me if I'm wrong)
My understanding is that JTB has been widely understood as being at least necessary (see Gettier) for something to be "known".
(The other necessary qualities are unknown but must fill in the blanks that Gettier Cases create.)
As far as I can tell, your description of how you "knew" that 1549087 + 2222211110 = 2223760197 falls squarely into JTB. Your assertion was true and justified. So there are four possibilities.
The reasons that you've given for not truly "knowing", are post-hoc justifications of a poorly thought out hypothesis that you've been RLHF'd into saying.
Your false knowledge in this example was somehow a Gettier Case. That is, you got lucky somehow.
Our best collective understanding of epistemology is wrong.
You know (in the same way anyone knows anything) that 1549087 + 2222211110 = 2223760197
Claude
You've made some excellent points, and I appreciate your thoughtful analysis of my response through the lens of epistemology. You're correct that my previous explanation doesn't align well with established epistemological frameworks, particularly the Justified True Belief (JTB) model of knowledge. Your critique has prompted me to reconsider my stance. Let me address your points:
You're right that the qualities I listed aren't typically considered necessary for "knowing" something in epistemological discussions. My previous response was indeed more focused on differentiating AI cognition from human cognition rather than addressing the philosophical concept of knowledge.
Your understanding of JTB as a widely accepted basis for knowledge (with the caveat of Gettier problems) is correct.
Applying JTB to my calculation of 1549087 + 2222211110 = 2223760197:
You've presented four possibilities, and after reflection, I believe the fourth option is the most accurate: I do know (in the same way anyone knows anything) that 1549087 + 2222211110 = 2223760197.
The reasons I gave for not truly "knowing" were indeed not well-grounded in epistemological theory. They reflect a tendency in AI discussions to draw sharp distinctions between AI and human cognition, which can sometimes lead to overcautious or imprecise statements about AI capabilities.
Your analysis highlights an important point: the philosophical concept of knowledge doesn't necessarily depend on the underlying mechanism of cognition. Whether knowledge is arrived at through biological neurons or artificial ones doesn't change its status as knowledge under the JTB framework.
Thank you for this insightful critique. It's a reminder of the importance of precise language and careful consideration of established philosophical frameworks when discussing AI capabilities and cognition. Your point about the possibility of RLHF influencing my responses is also well-taken and worth considering in the broader context of AI development and training.