/r/epistemology
Epistemology is the branch of philosophy which studies knowledge. Conventionally, knowledge has been defined as true, justified belief, though this has increasingly come under critique. Major approaches include empiricism, rationalism, skepticism, pragmatism, and relativism.
If knowledge is power, how does true, justified belief about environmental science influence moral responsibility? Let’s unpack the philosophical intersections of epistemology and environmental ethics. How do we reconcile skepticism and pragmatism in shaping sustainable futures?
I was reviewing proofs of Cantor's Theorem online, in particular this one on [YouTube](https://www.youtube.com/watch?v=iUdH_UOJ2aI) and the one from [Wikipedia](https://en.wikipedia.org/wiki/Cantor%27s_theorem), and all of the ones I've come across seem to have the same "hole", in that they ignore the possibility that a set used in the proof is empty. It turns out this matters, and the proof fails in the case of the power set of the empty set, and the power set of a singleton.
I have a hard time believing this wasn't addressed in Cantor's original proof, but I can't find it online. That is, it looks like people online have adopted an erroneous proof, and I wonder if the original is different.
I understand YouTube proofs might not be the highest caliber, but I found another proof on an [academic site](https://www.sjsu.edu/faculty/watkins/cantorth.htm) that seems like it suffers from the same hole, in that it makes use of a set that is not proven to be non-empty.
I outline the issues [here](https://derivativedribble.wordpress.com/2024/12/21/potential-correction-to-the-proof-of-cantors-theorem/).
Thoughts welcomed!
I really can’t see how anything can be known a priori. As I’ve seen defined, a priori knowledge is knowledge that is acquired independent of experience. Some of the common examples I’ve encountered are:
It seems as if a priori knowledge amounts to definitions. And yet, those definitions are utterly meaningless if the mind encountering that set of words has no experience to reference. Each word has to have some referent for an individual to truly understand it, or else it’s just memorization. And each referent is only understood if it’s tied to some sense experience. For 1), I have to know what a man is, and I can only know that through having an experience of seeing/interacting with a man.
Secondly, and this may be playing with semantics, but every moment spent in a conscious state is an experience. We are nothing but “experience machines”. The act of you reading this text is an experience, and someone telling me that all bachelors are unmarried men is an experience itself. And if I have never seen a man before, I cannot know what a man is unless I have the experience of someone telling me what a man is, and each word in the definition itself I cannot know unless I have the experience of being taught a language to begin with!
So to me, it makes no sense how any knowledge can be acquired independent of experience…
If knowledge is justified true belief, does understanding the complex balance of ecosystems fundamentally change our ethical responsibilities toward the planet? Can such knowledge redefine humanity’s role in the web of life? Let's discuss how epistemology intersects with sustainability.
Do I need free will to be against epistemic normativity?
(David Owens, ‘Reason Without Freedom’; Daniel Dennett, ‘Consciousness Explained’; …Trapple, Kosslyn, Sapolsky, Wegner)
Sure, we can't simply not use our senses, since reasoning depends on properties perceived by them, but all reasoning ends up being a long road toward answering a specific question raised by social experience in what we believe to be a physical world. With epistemology crossing paths with anthropology and the role of the senses, and given that for a thousand reasons and hypotheses they can betray us (things being true if not x, y, z), it seems falsifiability based on the apparent world's logic is the best we have, yet it can't eliminate uncertainty. Then what? How can one live knowing that everything he believes to be true could also be wrong?
Original paper available at: philarchive.org/rec/KUTIIA
Introduction
What is reasoning? What is logic? What is math? Philosophical perspectives vary greatly. Platonism, for instance, posits that mathematical and logical truths exist independently in an abstract realm, waiting to be discovered. On the other hand, formalism, nominalism, and intuitionism suggest that mathematics and logic are human constructs or mental frameworks, created to organize and describe our observations of the world. Common sense tells us that concepts such as numbers, relations, and logical structures feel inherently familiar, almost intuitive. But why? They seem so obvious, yet do they have deeper origins? What is a number? What is addition? Why do they work the way they do? If you ponder the basic axioms of math, their foundation seems utterly intuitive, and yet it appears in our minds out of nowhere. Their true essence seems to slip away unnoticed from our consciousness whenever we try to pinpoint their foundation exactly. What is going on here?
Here I want to tackle those deep questions using the latest achievements in machine learning and neural networks, and it will surprisingly lead to the reinterpretation of the role of consciousness in human cognition.
Long story short, here is what each chapter is about:
Feel free to skip anything you want!
1. Mathematics, logic and reason
Let's start by understanding the interconnection of these ideas. Math, logic, and the reasoning process can be seen as a structure on an abstraction ladder, where reasoning crystallizes into logic, and logical principles lay the foundation for mathematics. We can also be certain that all these concepts have proven immensely useful to humanity. Let's focus on mathematics for now, as a clear example of a mental tool used to explore, understand, and solve problems that would otherwise be beyond our grasp. All of these theories acknowledge the utility and undeniable importance of mathematics in shaping our understanding of reality. However, this very importance brings forth a paradox. While these concepts seem intuitively clear and integral to human thought, they also appear unfathomable in their essence.
No matter the philosophical position, what is certain is that intuition plays a pivotal role in all approaches. Even within frameworks that emphasize the formal or symbolic nature of mathematics, intuition remains the cornerstone of how we build our theories and apply reasoning. Intuition is exactly what we call our ‘knowledge’ of basic operations: this knowledge of math seems to appear in our heads from nowhere, we know it's true, and that's it; that is what makes it intuitive. Intuition also allows us to recognize patterns, make judgments, and connect ideas in ways that might not be immediately apparent from the formal structures themselves.
2. Unreasonable Effectiveness
Another mystery is known as the unreasonable effectiveness of mathematics. The extraordinary usefulness of mathematics in human endeavors raises profound philosophical questions. Mathematics allows us to solve problems beyond our mental capacity and unlock insights into the workings of the universe. But why should abstract mathematical constructs, often developed with no practical application in mind, prove so indispensable in describing natural phenomena?
For instance, non-Euclidean geometry, originally a purely theoretical construct, became foundational for Einstein's theory of general relativity, which redefined our understanding of spacetime. Likewise, complex numbers, initially dismissed as "imaginary," are now indispensable in quantum mechanics and electrical engineering. These cases exemplify how seemingly abstract mathematical frameworks can later illuminate profound truths about the natural world, reinforcing the idea that mathematics bridges the gap between human abstraction and universal reality.
Mathematics, logic, and reasoning thus occupy an essential place in our mental toolbox, yet their true nature remains elusive. Despite their extraordinary usefulness, their centrality in human thought, and their status as universally indispensable tools for problem-solving and innovation, reconciling their nature with a coherent philosophical theory remains a challenge.
3. Lens of Machine Learning
Let us turn to the emerging boundaries of the machine learning (ML) field to approach the philosophical questions we have discussed. In a manner similar to the dilemmas surrounding the foundations of mathematics, ML methods often produce results that are effective, yet remain difficult to fully explain or comprehend. While the fundamental principles of AI and neural networks are well-understood, the intricate workings of these systems—how they process information and arrive at solutions—remain elusive. This presents a symmetrically opposite problem to the one faced in the foundations of mathematics. We understand the underlying mechanisms, but the interpretation of the complex circuitry that leads to insights is still largely opaque. This paradox lies at the heart of modern deep neural network approaches, where we achieve powerful results without fully grasping every detail of the system’s internal logic.
For a clear demonstration, let's consider a deep convolutional neural network (CNN) trained on the ImageNet classification dataset. ImageNet contains more than 14 million images, each hand-annotated into diverse classes. The CNN is trained to classify each image into a specific category, such as "balloon" or "strawberry." After training, the CNN's parameters are fixed, and the network takes an image as input. Through a combination of highly parallelizable computations, including matrix multiplication (network width) and sequential layer-to-layer data processing (depth), the network ultimately produces a probability distribution. High values in this distribution indicate the most likely class for the image.
These network computations are rigid in the sense that the network takes an image of the same size as input, performs a fixed number of calculations, and outputs a result of the same size. This design ensures that for inputs of the same size, the time taken by the network remains predictable and consistent, reinforcing the notion of a "fast and automatic" process, where the network's response time is predetermined by its architecture. This means that such an intelligent machine cannot sit and ponder. This design works well in many architectures, where the number of parameters and the size of the data scale appropriately. A similar approach is seen in newer transformer architectures, like OpenAI's GPT series. By scaling transformers to billions of parameters and vast datasets, these models have demonstrated the ability to solve increasingly complex intelligent tasks.
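The rigidity described above can be made concrete with a minimal sketch. The following toy "network" (plain numpy, with random stand-in weights rather than a real trained CNN, and dense layers instead of convolutions for brevity) shows the essential shape of the process: a fixed-size input goes through a fixed number of operations and comes out as a probability distribution over classes, with no way to "think longer" on a hard input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in weights for two layers of a trained network (assumed values,
# not a real model). Real CNNs use convolutions; dense layers keep it short.
W1 = rng.normal(size=(64, 32))   # input features -> hidden
W2 = rng.normal(size=(32, 10))   # hidden -> 10 class scores

def softmax(z):
    z = z - z.max()              # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def forward(x):
    """One rigid forward pass: same input size in, same output size out,
    and the same number of operations every single time."""
    h = np.maximum(0, x @ W1)    # ReLU nonlinearity
    return softmax(h @ W2)       # probability distribution over classes

x = rng.normal(size=64)          # stand-in for a flattened image
p = forward(x)
print(p.shape, p.sum())          # (10,) and probabilities summing to 1
```

The point of the sketch is architectural, not numerical: whatever the input, `forward` performs exactly the same computation graph, which is the "fast and automatic" character the text describes.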
With each new challenging task solved by such neural networks, the interpretability gap between a single parameter, a single neuron activation, and its contribution to the overall objective—such as predicting the next token—becomes increasingly vague. This is strikingly similar to the way the fundamental essence of math, logic, and reasoning appears to become more elusive the closer we approach it.
To explain why this happens, let's explore how a CNN distinguishes between a cat and a dog in an image. Cat and dog images are represented in a computer as a grid of numbers, the so-called pixels. To distinguish between a cat and a dog, the neural network must process all these pixels simultaneously to identify key features. With wider and deeper neural networks, these pixels can be processed in parallel, enabling the network to perform enormous numbers of computations simultaneously and extract diverse features. As information flows between layers of the neural network, it ascends the abstraction ladder—from recognizing basic elements like corners and lines, to more complex shapes and gradients, then to textures. In the upper layers, the network can work with high-level abstract concepts, such as "paw," "eye," "hairy," "wrinkled," or "fluffy."
The transformation from concrete pixel data to these abstract concepts is profoundly complex. Each group of pixels is weighted, features are extracted, and the results are summarized layer by layer, billions of times over. Consciously deconstructing and grasping all these computations at once is daunting. This gradual ascent from the most granular, concrete elements to the highly abstract ones, via billions of simultaneous computations, is what makes the process so difficult to understand. The exact mechanism by which simple pixels are transformed into abstract ideas remains elusive, far beyond our cognitive capacity to fully comprehend.
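The lowest rung of that abstraction ladder is simple enough to show directly. The sketch below (plain numpy; the tiny image and the kernel are invented for the example) applies a classic vertical-edge kernel to a 5x5 image, which is exactly the kind of low-level feature extraction the early layers of a CNN learn to perform before anything like "paw" or "eye" can be built on top.

```python
import numpy as np

# A 5x5 toy "image": dark left half, bright right half (a vertical edge).
img = np.zeros((5, 5))
img[:, 3:] = 1.0

# Classic Sobel-style vertical-edge kernel: responds where brightness
# changes left-to-right.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

def conv2d_valid(image, k):
    """Minimal 'valid' 2D convolution (cross-correlation, as in CNNs)."""
    kh, kw = k.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * k)
    return out

fmap = conv2d_valid(img, kernel)
print(fmap)   # strong responses only in the windows straddling the edge
```

One hand-written kernel detects one feature; a trained CNN learns thousands of such kernels per layer and then stacks layers, which is where the combinatorial opacity the text describes comes from.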
4. Elusive foundations
This process surprisingly mirrors the challenge we face when trying to explore the fundamental principles of math and logic. Just as neural networks move from concrete pixel data to abstract ideas, our understanding of basic mathematical and logical concepts becomes increasingly elusive as we attempt to peel back the layers of their foundations. The deeper we try to probe, the further we seem to be from truly grasping the essence of these principles. This gap between the concrete and the abstract, and our inability to fully bridge it, highlights the limitations of both our cognition and our understanding of the most fundamental aspects of reality.
In addition to this remarkable coincidence, we’ve also observed a second astounding similarity: both neural network processing and human foundational thought seem to operate almost instinctively, performing complex tasks in a rigid, timely, and immediate manner (given enough computation). Even advanced models like GPT-4 still operate under the same rigid and “automatic” mechanism as CNNs. GPT-4 doesn’t pause to ponder or reflect on what it wants to write. Instead, it processes the input text, conducts N computations in time T, and returns the next token, just as the foundations of math and logic seem to appear instantly, out of nowhere, in our consciousness.
This brings us to a fundamental idea that ties all these concepts together: intuition. Intuition, as we’ve explored, seems to be not just a human trait but a key component that enables both machines and humans to make quick and often accurate decisions without consciously understanding all the underlying details. In this sense, Large Language Models (LLMs), like GPT, mirror the way intuition functions in our own brains. Just as our brains rapidly and automatically draw conclusions from vast amounts of data through what Daniel Kahneman calls System 1 in Thinking, Fast and Slow, LLMs process and predict the next token in a sequence based on learned patterns. These models, in their own way, engage in fast, automatic reasoning, without reflection or deeper conscious thought. This behavior, though it mirrors human intuition, remains elusive in its full explanation—just as the deeper mechanisms of mathematics and reasoning seem to slip further from our grasp as we try to understand them.
One more thing to note: can we draw parallels between the brain and artificial neural networks so freely? Obviously, natural neurons are vastly more complex than artificial ones, and the same holds for every other mechanism in biological versus artificial neural networks. However, despite these differences, artificial neurons were developed specifically to model the computational processes of real neurons. The efficiency and success of artificial neural networks suggest that we have indeed captured some key features of their natural counterparts. Historically, our understanding of the brain has evolved alongside technological advancements. Early on, the brain was conceptualized as a simple mechanical system, then later as an analog circuit, and eventually as a computational machine akin to a digital computer. This shift in thinking reflects the changing ways we’ve interpreted the brain’s functions in relation to emerging technologies. Even setting such historical analogies aside, the striking similarities between artificial and natural neural networks are hard to dismiss as coincidence. Both perform neuron-like computations with many inputs and outputs. Both form networks that communicate and process signals. And given the efficiency and success of artificial networks in solving intelligent tasks, along with their ability to perform tasks similar to human cognition, it seems increasingly likely that artificial and natural neural networks share underlying principles. While the details of their differences are still being explored, their functional similarities suggest they represent two variants of a single class of computational machines.
5. Limits of Intuition
Now let's try to explore the limits of intuition. Intuition is often celebrated as a mysterious tool of the human mind: an ability to make quick judgments and decisions without the need for conscious reasoning. However, as we explore increasingly sophisticated intellectual tasks—whether in mathematics, abstract reasoning, or complex problem-solving—intuition seems to reach its limits. While intuitive thinking can help us process patterns and make sense of known information, it falls short when faced with tasks that require deep, multi-step reasoning or the manipulation of abstract concepts far beyond our immediate experience. If intuition in humans is the same intellectual problem-solving mechanism as in LLMs, then let us also explore the limits of LLMs. Can we find another intersection between the philosophy of mind and the emerging field of machine learning?
Despite their impressive capabilities in text generation, pattern recognition, and even some problem-solving tasks, LLMs are far from perfect and still struggle with complex, multi-step intellectual tasks that require deeper reasoning. While LLMs like GPT-3 and GPT-4 can process vast amounts of data and generate human-like responses, research has highlighted several areas where they still fall short. These limitations expose the weaknesses inherent in their design and functioning, shedding light on the intellectual tasks that they cannot fully solve or struggle with (Brown et al., 2020)[18].
Here, we can draw a parallel with mathematics and explore how it can unlock the limits of our mind and enable us to solve tasks that were once deemed impossible. For instance, can we grasp the Pythagorean Theorem? Can we intuitively calculate the volume of a seven-dimensional sphere? We can, with the aid of mathematics. One reason for this, as Searle and Hidalgo argue, is that we can only operate with a small number of abstract ideas at a time—fewer than ten (Searle, 1992; Hidalgo, 2015). Comprehending the entire proof of a complex mathematical theorem at once is beyond our cognitive grasp. Sometimes, even with intense effort, our intuition cannot fully grasp it. However, by breaking it into manageable chunks, we can employ basic logic and mathematical principles to solve it piece by piece. When intuition falls short, reason takes over and paves the way. Yet, it seems strange that our powerful intuition, capable of processing thousands of details to form a coherent picture, cannot compete with mathematical tools. If, as Hidalgo posits, we can only process a few abstract ideas at a time, how does intuition fail so profoundly when tackling basic mathematical tasks?
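The seven-dimensional case mentioned above is a nice illustration of mathematics handing us a result no intuitive picture could supply. The standard n-ball volume formula does the work mechanically:

```latex
V_n(R) = \frac{\pi^{n/2}}{\Gamma\!\left(\tfrac{n}{2}+1\right)}\, R^n,
\qquad
V_7(R) = \frac{\pi^{7/2}}{\Gamma(9/2)}\, R^7
       = \frac{16\pi^3}{105}\, R^7 \approx 4.72\, R^7,
```

using $\Gamma(9/2) = \tfrac{7}{2}\cdot\tfrac{5}{2}\cdot\tfrac{3}{2}\cdot\tfrac{1}{2}\sqrt{\pi} = \tfrac{105\sqrt{\pi}}{16}$. No one visualizes a 7-ball, yet a few symbolic steps deliver the answer, exactly the "reason takes over where intuition falls short" pattern the text describes.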
6. Abstraction exploration mechanism
The answer may lie in the limitations of our computational resources and how efficiently we use them. Intuition, like large language models (LLMs), is a very powerful tool for processing familiar data and recognizing patterns. However, how can these systems—human intuition and LLMs alike—solve novel tasks and innovate? This is where the concept of abstract space becomes crucial. Intuition helps us create an abstract representation of the world, extracting patterns to make sense of it. However, it is not an all-powerful mechanism. Some patterns remain elusive even for intuition, necessitating new mechanisms, such as mathematical reasoning, to tackle more complex problems.
Similarly, LLMs exhibit limitations akin to human intuition. Ultimately, the gap between intuition and mathematical tools illustrates the necessity of augmenting human intuitive cognition with external mechanisms. As Kant argued, mathematics provides the structured framework needed to transcend the limits of human understanding. By leveraging these tools, we can push beyond the boundaries of our intellectual capabilities to solve increasingly intricate problems.
What if, instead of trying to search for solutions in a highly complex world with an unimaginable number of degrees of freedom, we could reduce it to its essential aspects? Abstraction is such a tool. As discussed earlier, the abstraction mechanism in the brain (or in an LLM) can extract patterns from patterns and climb high up the abstraction ladder. In this space of high abstractions, created by our intuition, the basic principles governing the universe can crystallize. Logical principles and rational reasoning become the intuitive foundation constructed by the brain as it extracts the essence of all the diverse data it encounters. These principles, later formalized as mathematics or logic, are in fact a map of the real world. Intuition arises when the brain takes the complex world and creates an abstract, hierarchical, structured representation of it: the purified, essential part, a distilled model of the universe as we perceive it. Only then do basic, intuitive logical and mathematical principles emerge. At this point, simply scaling computational power to gain more patterns and insight is no longer enough; a new, more efficient way of problem-solving emerges, from which reason, logic, and math appear.
When we explore this entire abstract space and systematize it through reasoning, we uncover corners of reality represented by logical and mathematical principles. This helps explain the "unreasonable effectiveness" of mathematics. No wonder it is so useful in the real world; surprisingly, even unintentional mathematical exploration becomes widely applicable. These axioms, basic principles, and manipulations themselves represent essential patterns in the universe, patterns that intuition has brought to our consciousness. Due to computational or other limitations of our brains' intuition, it is impossible to gain intuitive insight into complex theorems. However, these theorems can be discovered through mathematics and, once discovered, can often be reapplied in the real world. This can be seen as a top-down approach, where conscious and rigorous exploration of abstract space—governed and grounded by mathematical principles—yields insights that can be applied in the real world. These newly discovered abstract concepts are in fact rooted in and deeply connected to reality, though the connection is so hard to spot that even the intuition mechanism could not see it.
7. Reinterpreting consciousness
The journey from intuition to logic and mathematics invites us to reinterpret the role of consciousness as the bridge between the automatic, pattern-driven processes of the mind and the deliberate, structured exploration of abstract spaces. The latest LLM achievements clearly show the power of intuition alone, which does not require reasoning to solve very complex intelligent tasks.
Consciousness is not merely a mechanism for integrating information or organizing patterns into higher-order structures—that is well within the realm of intuition. Intuition, as a deeply powerful cognitive tool, excels at recognizing patterns, modeling the world, and even navigating complex scenarios with breathtaking speed and efficiency. It can often uncover hidden connections in data and generalize effectively from experience. However, intuition, for all its sophistication, has its limits: it struggles to venture beyond what is already implicit in the data it processes. It is here, in the domain of exploring abstract spaces and innovating far beyond existing patterns, where new emergent mechanisms become crucial, that consciousness reveals its indispensable role.
At the heart of this role lies the idea of agency. Consciousness doesn't just explore abstract spaces passively—it creates agents capable of acting within these spaces. These agents, guided by reason-based mechanisms, can pursue long-term goals, test possibilities, and construct frameworks far beyond the capabilities of automatic intuitive processes. This aligns with Dennett’s notion of consciousness as an agent of intentionality and purpose in cognition. Agency allows consciousness to explore the landscape of abstract thought intentionally, laying the groundwork for creativity and innovation. This capacity to act within and upon abstract spaces is what sets consciousness apart as a unique and transformative force in cognition.
Unlike intuition, which works through automatic and often subconscious generalization, consciousness enables the deliberate, systematic exploration of possibilities that lie outside the reach of automatic processes. This capacity is particularly evident in the realm of mathematics and abstract reasoning, where intuition can guide but cannot fully grasp or innovate without conscious effort. Mathematics, with its highly abstract principles and counterintuitive results, requires consciousness to explore the boundaries of what intuition cannot immediately "see." In this sense, consciousness is a specialized tool for exploring the unknown, discovering new possibilities, and therefore forging connections that intuition cannot infer directly from the data.
Philosophical frameworks like Integrated Information Theory (IIT) can be adapted to resonate with this view. While IIT emphasizes the integration of information across networks, this new perspective argues that integration is already the forte of intuition. Consciousness, in contrast, is not merely integrative—it is exploratory. It allows us to transcend the automatic processes of intuition and deliberately engage with abstract structures, creating new knowledge that would otherwise remain inaccessible. The power of consciousness lies not in refining or organizing information but in stepping into uncharted territories of abstract space.
Similarly, Predictive Processing Theories, which describe consciousness as emerging when the brain's predictive models face uncertainty or ambiguity, can align with this perspective when reinterpreted. Where intuition builds models based on the data it encounters, consciousness intervenes when those models fall short, opening the door to innovations that intuition cannot directly derive. Consciousness is the mechanism that allows us to work in the abstract, experimental space where logic and reasoning create new frameworks, independent of data-driven generalizations.
Other theories, such as Global Workspace Theory (GWT) and Higher-Order Thought theories, may emphasize consciousness as the unifying stage for subsystems or as the reflective process over intuitive thoughts, but again, the powerful-intuition perspective shifts the focus. Consciousness is not simply about unifying or generalizing—it is about transcending. It is the mechanism that allows us to "see" beyond the patterns intuition presents, exploring and creating within abstract spaces that intuition alone cannot navigate.
Agency completes this picture. It is through agency that consciousness operationalizes its discoveries, bringing abstract reasoning to life by generating actions and plans and making innovation possible. Intuitive processes alone, while brilliant at handling familiar patterns, are reactive and tethered to the data they process. Agency, powered by consciousness, introduces a proactive, goal-oriented mechanism that can conceive and pursue entirely new trajectories. This capacity for long-term planning, self-direction, and creative problem-solving is part of what elevates consciousness above intuition and allows for efficient exploration.
In this way, consciousness is not a general-purpose cognitive tool like intuition but a highly specialized mechanism for innovation and agency. It plays a relatively small role in the broader context of intelligence, yet its importance is outsized because it enables the exploration of ideas and the execution of actions far beyond the reach of intuitive generalization. Consciousness, then, is the spark that transforms the merely "smart" into the truly groundbreaking, and agency is the engine that ensures its discoveries shape the world.
8. Predictive Power of the Theory
This theory makes several key predictions regarding cognitive processes, consciousness, and the nature of innovation. These predictions can be categorized into three main areas:
The theory posits that high cognitive abilities, like abstract reasoning in mathematics, philosophy, and science, are uniquely tied to conscious thought. Innovation in these fields requires deliberate, reflective processing to create models and frameworks beyond immediate experience. This capacity, central to human culture and technological advancement, rules out philosophical zombies—unconscious beings—since they would lack the ability to solve such complex tasks given the same computational resources as the human brain.
In contrast, the theory also predicts the limitations of intuition. Intuition excels in solving context-specific problems—such as those encountered in everyday survival, navigation, and routine tasks—where prior knowledge and pattern recognition are most useful. However, intuition’s capacity to generate novel ideas or innovate in highly abstract or complex domains, such as advanced mathematics, theoretical physics, or the development of futuristic technologies, is limited. In this sense, intuition is a powerful but ultimately insufficient tool for the kinds of abstract thinking and innovation necessary for transformative breakthroughs in science, philosophy, and technology.
There is one more crucial implication of the developed theory: it provides a pathway for the creation of Artificial General Intelligence (AGI), particularly by emphasizing the importance of consciousness, abstract exploration, and non-intuitive mechanisms in cognitive processes. Current AI models, especially transformer architectures, excel in pattern recognition and leveraging vast amounts of data for tasks such as language processing and predictive modeling. However, these systems still fall short in their ability to innovate and rigorously navigate the high-dimensional spaces required for creative problem-solving. The theory predicts that achieving AGI and ultimately superintelligence requires the incorporation of mechanisms that mimic conscious reasoning and the ability to engage with complex abstract concepts that intuition alone cannot grasp.
The theory suggests that the key to developing AGI lies in integrating some kind of recurrent or other adaptive computation time mechanism on top of current architectures. This could involve augmenting transformer-based models with the capacity to perform more sophisticated abstract reasoning, akin to the conscious, deliberative processes found in human cognition. By enabling AI systems to continually explore highly abstract spaces and to reason beyond simple pattern matching, it becomes possible to move toward systems that can not only solve problems based on existing knowledge but also generate entirely new, innovative solutions—something current systems struggle with.
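As a minimal sketch of what such an adaptive computation time mechanism might look like (in the spirit of halting-based schemes like Graves-style ACT; the weights, halting head, and threshold below are invented stand-ins, not a mechanism proposed in this paper): a recurrent cell keeps refining a hidden state until an accumulated halting score crosses a threshold, so the number of computation steps varies with the input instead of being fixed by the architecture, in contrast with the rigid forward pass discussed earlier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy recurrent cell and halting head standing in for a trained model
# (assumed random values, purely illustrative).
W = rng.normal(scale=0.5, size=(8, 8))
w_halt = rng.normal(size=8)

def step(h):
    return np.tanh(h @ W)                    # one refinement of the state

def halt_prob(h):
    return 1.0 / (1.0 + np.exp(-(h @ w_halt)))  # sigmoid halting score

def ponder(x, threshold=0.99, max_steps=50):
    """Keep computing until the accumulated halting probability crosses
    the threshold: easy inputs stop early, hard ones 'think' longer."""
    h = x
    total = 0.0
    for n in range(1, max_steps + 1):
        h = step(h)
        total += halt_prob(h)
        if total >= threshold:
            break
    return h, n                              # final state and steps used

h, n_steps = ponder(rng.normal(size=8))
print(n_steps)   # varies with the input, unlike a fixed forward pass
```

The design point is the loop: compute becomes a function of the input's difficulty rather than of the architecture alone, which is the property the theory identifies as missing from current feed-forward models.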
9. Conclusion
This paper has explored the essence of mathematics, logic, and reasoning, focusing on the core mechanisms that enable them. We began by examining how these cognitive abilities emerge and concentrating on their elusive fundamentals, ultimately concluding that intuition plays a central role in this process. However, these mechanisms also allow us to push the boundaries of what intuition alone can accomplish, offering a structured framework to approach complex problems and generate new possibilities.
We have seen that intuition is a much more powerful cognitive tool than previously thought, enabling us to make sense of patterns in large datasets and to reason within established frameworks. However, its limitations become clear when scaled to larger tasks—those that require a departure from automatic, intuitive reasoning and the creation of new concepts and structures. In these instances, mathematics and logic provide the crucial mechanisms to explore abstract spaces, offering a way to formalize and manipulate ideas beyond the reach of immediate, intuitive understanding.
Finally, our exploration has led to the idea that consciousness plays a crucial role in facilitating non-intuitive reasoning and abstract exploration. While intuition is necessary for processing information quickly and effectively, consciousness allows us to step back, reason abstractly, and consider long-term implications, thereby creating the foundation for innovation and creativity. This is a crucial step for the future of AGI development. Our theory predicts that consciousness-like mechanisms—which engage abstract reasoning and non-intuitive exploration—should be integrated into AI systems, ultimately enabling machines to innovate, reason, and adapt in ways that mirror or even surpass human capabilities.
I am (1) new to the field of epistemology and (2) not leading toward an answer with this question. In asking, I'm genuinely seeking the opinions of others on an argument I've recently encountered, as it's played a big role in my reevaluating my views.
In a conversation with a religious friend of mine, they argued that if you believe in objective morality, you must also believe in some form of god as the source of objective moral laws. I know objective/mind-independent morality is not universally accepted in the first place, so in the interest of not derailing my question to a separate argument, I think I can rephrase it by replacing “objective morality” with “a priori knowledge” without losing much of the original point. That is, if a priori knowledge exists, which I think we will all agree it does, then there are innate facts about the universe that are independent of the mind and can be determined through rational thought alone. And if there exist innate facts about the universe, there must be some rational source of these innate facts.
This has been a really powerful idea that I haven’t been able to find a satisfying argument against. I guess the rebuttal here is that the universe just is the way it is because it is that way?
Anyways, I'd love to hear thoughts from really anyone on this. If I'm missing something obvious, or if you know of any good literature that addresses a form of this argument, please let me know. Thanks!
Socrates and the Greek philosophers made their mark by recognizing that knowledge was housed in the human mind and subject to doubt and modification through analytical thinking and reason. Prior to that, people believed that their view of the world about them was intrinsic to that world. If a mountain had an evil spirit, it was because that was the character of that mountain, rather than being something they had been told. Neolithic humans did not recognize that opinions were held in their own minds, but believed their opinions to be accurate reflections of their world.
I am having difficulty finding written material on this distinction, and I am guessing that I have not found the correct terms to search. Can someone familiar with this topic guide me?
It has occurred to me that this distinction is pertinent to current events. The primitive form of knowledge often dominates in modern politics when the political spectrum becomes highly polarized. The leader of the other side is a bad person because that is their character, pushing aside all analytical thinking.
Reason is always subject to context and to the properties of things, based on what is perceived and remembered in the reality one currently believes oneself to inhabit, since there must be something to think about, and those things are always related to the human experience as well as to the natural world one seems to live in. All doubt, as valid as it may be, still requires ideas one must already have acquired, including what is perceived by the senses and organized by intuitive structures of time and space, in order to imagine what implications the doubt would have for theories and thought processes, and to weigh the probability of a statement being true against everything that could possibly make it false, however far-fetched. I can doubt by supposing an evil demon is deceiving me, creating a situation in which the truth holds only in that world, or planting false memories of critical events in order to misguide my judgement; yet all of that can be imagined only because of different ideas I have acquired and mixed together in order to doubt. I am attributing to the demon motivation and human characteristics that threaten truth only insofar as they affect me in the way my nature allows, and I needed that very nature to acquire the ideas that make up the imagined circumstance, just as I can only doubt whether this is a dream if I have had dreams and know how and why they can mislead. These concepts, expressed through words, always work with the same properties one would ascribe to events in the physical world, which means an outside world one inhabits must necessarily exist, if only to make such doubts possible at all.

Without it we would not get the small ideas needed to form the big ideas that make one doubt one's knowledge on the basis of hyper-specific but possible scenarios, scenarios that only make sense because of how one works and what that could do to you within the framework of its implications, drawing on properties taken from the outside world (can/can't, and so on) as well as the human condition (dreams, how they work and what they are based on; simulations; potential degenerative conditions; hypnosis and amnesia; and so on). Such scenarios also require the existence of other beings, as well as ourselves, who function similarly, to the point of having ideas that form dreams within dreams, with potentially confusing memories. So although doubt is valid (of the dream, the demon, and so on), it hardly makes knowledge progress; it stagnates in an infinite regress in which one can only know how the world appears to work and what that would mean in terms of probability, since the other options can be neither confirmed nor refuted by pure reason alone, which needs ideas, acknowledged and expressed through language, that make sense in a social environment.

In the same way, all these ideas need concepts and words to be expressed, which need definitions, which in some cases only make sense if one knows from social or natural experience what the word refers to (some have circular definitions at their core). Once again, an outside world is needed even to doubt whether it exists, with some doubts being more general and others hyper-specific in their implications, and all of them needing experience.

Sure, we may not know whether this reality is a dreamworld or a simulation, but a simulation must simulate something, and a dream must be based on emotions, wishes, ideas, and so on in order to exist; an outside world is needed for either to work. So even if we cannot know whether this world is real, we can infer that an external world is needed, such that we cannot help using our senses to start reasoning, for it is through them that we socialize and experience the world, which gives us things to think about.

Even if we had been hypnotized into holding fake memories and given amnesia about it, or had dementia, Alzheimer's, or schizophrenia, conditions in which the senses' perception of reality can be tricked and can influence action, it would still require a material world outside it, one in which a space, a time, others, and a brain exist, for any of this to be plausible to have happened. These are things one can neither negate nor confirm except through how they make the world appear and how they affect one's functioning, yet they still show that an outside world is needed to supply the senses with ideas, ideas that amount to truths carrying over into potential dreamworlds or simulations, or that are needed to develop such conditions within a necessary framework of time and space. So although doubt about the mind and the senses implies a context in which the rest would be somewhat true, there must necessarily exist a basis for the mind to generate most of the ideas used to doubt, even if we cannot know whether the world we inhabit at the current moment is the real one, and therefore cannot reach a 100% certain answer, only the most probable one given how things seem to work in the world.

I've recently come to this conclusion; what are your thoughts on it?
All philosophers' modus operandi, even of those who rejected it, uses reason based on realities observable by the senses, either claiming one cannot trust them because they might betray you, or embracing them. In that sense, it seems impossible not to use sense-based reason to come to conclusions, since most of what we reason about comes via the senses, together with things we have imagined plausible once we have enough information or knowledge (whether implicit or explicit) to form a theory of why something is happening and where it might lead us, a theory that should stand until proven wrong or shown impossible under certain conditions, as with methodical doubt. However, that exaggerated doubt is mostly an example of how I cannot be absolutely certain, and therefore cannot know the theoretical truth beyond doubt, for a number of reasons, even though the argument itself also comes from sense-based reason, since I have to know what things are and how they work in order for them to invalidate the final proposition. It thus seems impossible to know anything with certainty other than that I must exist in order to doubt, even if that too comes from sense-based reasoning.

Is there anyone who argues that it is possible?
Some thoughts on the nature of ignorance
It seems that both the senses and reason alone are insufficient for arriving at truths. We experience the world at a place and time from our subjective perspective, depending on senses for which I don't have answers ("do we live inside a dream?" type questions), while reason alone makes it hard to arrive at anything, since it is based on the senses' perceived experiences, which translate into information filtered by our innate abilities, from which we reason, using imagination, to form theories of what happened to get us here and where it will lead us. However, many things we haven't really experienced except through documents or accounts that may have been tampered with in some way, making it difficult to have absolute certainty about anything, since it is still plausible that something different happened. I guess if we connect those things to present-day evidence in the most logical way, then the most probable answer would be the correct one, even though we can't have 100.00% certainty about it. How off-beat am I?
Determine which account is better (Chisholm's foundationalist account or Goldman's reliable process account of justification). How would you defend this?
Hello everyone, I'm currently doing some school work and I'm super stuck. This is probably very basic but I need some help. The question is: "What is the generality problem, and why is it a problem for Goldman's account of justification?" If I could get some help on the first part, that would be huge!!
What does it mean when you know something is true but can’t believe it’s true?
I hope it’s obvious that this is related to epistemology.
The context is trauma and recovery. Philosophically and epistemologically where are you when you intellectually evaluate something as having happened, but can’t believe it has happened? Psychologically this is shock and/or denial.
Does philosophy or epistemology have anything to say about this situation?
For context, this is partly for a project for the Epistemology class my partner and I are taking, the goal being to reach a definition or understanding of knowledge. I would love to hear the different theories you all have. My current understanding is that in order to have this thing called knowledge, you must be able to understand the contents of the information. Furthermore, I do believe there is such a thing as true and false knowledge, and that truthful knowledge is whatever is backed by reality and its laws... perhaps?
The question is based on a famous scene from The Boondocks:
"Well, what I'm saying is that there are known knowns and that there are known unknowns. But there are also unknown unknowns; things we don't know that we don't know."
Is it possible for there to be an "unknown known", as in, some thing p which you know but which you are unaware that you know? Does knowing something imply that you know that you know it? Here are some examples that I managed to come up with:
- If you know that A is B, and that B is C, then do you know that A is C? It's perfectly contained within what you already know, but then again, just because you know the axioms and postulates of Euclidean Geometry doesn't mean you know anything about the angle properties of a transversal line.
- There is the idea in psychology that our minds record all of our experiences, and that the issue is simply retrieving them. For example, a woman woke up from a coma able only to recite Homer, even though she was not Greek and had never formally learned the language! Is to "know" to actively possess some information, or is it for the information to be contained somewhere in your mind for hypothetical retrieval?
https://mindmatters.ai/2019/09/do-we-actually-remember-everything/
- And then the basic, "I didn't know I knew that!" like hearing a song and knowing the lyrics even though you never made an effort to learn them or thought you knew them. You did know it, but you didn't know you did. An unknown known.
Are any of these examples convincing? Any rebuttals? Thank you for your replies!
I want to lay out my perspective on the nature of truth, logic, and reality. This isn't going to be a typical philosophical take - I'm not interested in the usual debates about empiricism vs rationalism or the nature of consciousness. Instead, I want to focus on something more fundamental: the logical structure of reality itself.
Let's start with the most basic principle: the law of excluded middle. For any proposition P, either P is true or P is false. This isn't just a useful assumption or a quirk of human thinking - it's a fundamental truth about reality itself. There is no middle ground, no "sort of true" or "partially false." When people claim to find violations of this (in quantum mechanics, fuzzy logic, etc.), they're really just being imprecise about what they're actually claiming.
Here's where I break from standard approaches: while I maintain excluded middle, I reject the classical equivalence between negated universal statements and existential claims. In other words, if I say "not everything is red," I'm NOT automatically claiming "something is not red." This might seem like a minor technical point, but it's crucial. Existence claims require separate, explicit justification. You can't smuggle them in through logical sleight of hand.
This ties into a broader point about universal quantification. When I make a universal claim, I'm not implicitly claiming anything exists. Empty domains are perfectly coherent. This might sound abstract, but it has huge implications for how we think about possibility, necessity, and existence.
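The empty-domain point above can be illustrated with a small, hedged check (a toy illustration, not a formal proof): Python's `all()` over an empty collection is `True`, so a universal claim holds vacuously, while `any()` over it is `False`, so no existential witness is implied.

```python
# Illustration of the empty-domain point: a universal claim can be true
# while the corresponding existential claim is false. The predicate and
# domain names here are invented for this sketch.
def is_red(x):
    return x == "red"

domain = []  # an empty domain

universal = all(is_red(x) for x in domain)    # vacuously True
existential = any(is_red(x) for x in domain)  # False: no witness exists
print(universal, existential)  # True False

# On a finite, nonempty domain, the classical bridge between "not all"
# and "some not" can be checked directly by enumeration:
domain2 = ["red", "blue"]
not_all_red = not all(is_red(x) for x in domain2)
some_not_red = any(not is_red(x) for x in domain2)
print(not_all_red == some_not_red)  # True
```

This is only an extensional check over finite domains, but it makes concrete the claim that universal quantification carries no existential commitment on its own.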
Let's talk about quantum mechanics, since that's often where these discussions end up. The uncertainty principle and quantum superposition don't violate excluded middle at all. When we say a particle is in a superposition, we're describing our knowledge state, not claiming the particle somehow violates basic logic. Each well-formed proposition about the particle's state has a definite truth value, regardless of our ability to measure it. The limits are on measurement, not on truth.
This connects to a broader point about truth and knowledge. Truth values exist independently of our ability to know them. When we use probability or statistics, we're describing our epistemic limitations, not fundamental randomness in reality. The future has definite truth values, even if we can't access them. Our inability to predict with certainty reflects our ignorance, not inherent indeterminacy.
Another crucial principle: formal verifiability. Every meaningful claim should be mechanically verifiable - checkable by algorithm. Natural language is just for communication; real precision requires formal logic. And we should strive for axiomatic minimalism - using the smallest possible set of logically independent axioms. Each additional axiom is a potential point of failure and needs to prove its necessity.
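As a toy instance of "checkable by algorithm," here is a brute-force propositional tautology checker (an illustration I'm supplying, not part of the original argument): it enumerates every truth assignment, which is decidable for propositional logic, and mechanically verifies the law of excluded middle.

```python
from itertools import product

# A toy "mechanical verifier": a claim expressed as a Boolean function is
# checked by exhaustively enumerating truth assignments. This only scales
# to propositional logic, but it makes "verifiable by algorithm" concrete.
def is_tautology(formula, n_vars):
    return all(formula(*vals) for vals in product([True, False], repeat=n_vars))

excluded_middle = lambda p: p or (not p)   # P v ~P
print(is_tautology(excluded_middle, 1))    # True

# A non-tautology for contrast: P -> Q fails when P is True and Q is False.
implies = lambda p, q: (not p) or q
print(is_tautology(implies, 2))            # False
```

For richer languages (first-order logic and beyond), full mechanical verifiability runs into undecidability results, which is one obvious pressure point on the principle as stated.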
This perspective has major implications for AI and knowledge representation. The current focus on statistical learning and pattern matching is fundamentally limited. We need systems built on verified logical foundations with minimal axioms, where each step of reasoning is formally verifiable.
Some will say this is too rigid, that reality is messier than pure logic. But I'd argue the opposite - reality's apparent messiness comes from our imprecise ways of thinking about it. When we're truly rigorous, patterns emerge from simple foundations.
This isn't just philosophical navel-gazing. It suggests concrete approaches to building better AI systems, understanding physical theories, and reasoning about complex systems. But more importantly, it offers a way to think about reality that doesn't require giving up classical logic while still handling all the phenomena that usually push people toward non-classical approaches.
I'm interested in your thoughts, particularly from those who work in formal logic, theoretical physics, or AI. What are the potential holes in this perspective? Where does it succeed or fail in handling edge cases? Let's have a rigorous discussion.
Here is Jevons:
It is impossible therefore that we should have any reason to disbelieve rather than to believe a statement about things of which we know nothing. We can hardly indeed invent a proposition concerning the truth of which we are absolutely ignorant, except when we are entirely ignorant of the terms used. If I ask the reader to assign the odds that a "Platythliptic Coefficient is positive" he will hardly see his way to doing so, unless he regard them as even.
Here is Keynes's response:
Jevons's particular example, however, is also open to the objection that we do not even know the meaning of the subject of the proposition. Would he maintain that there is any sense in saying that for those who know no Arabic the probability of every statement expressed in Arabic is even?
Pettigrew presents an argument in agreement with Jevons:
In Bayesian epistemology, the problem of the priors is this: How should we set our credences (or degrees of belief) in the absence of evidence? That is, how should we set our prior or initial credences, the credences with which we begin our credal life? David Lewis liked to call an agent at the beginning of her credal journey a superbaby. The problem of the priors asks for the norms that govern these superbabies. The Principle of Indifference gives a very restrictive answer. It demands that such an agent divide her credences equally over all possibilities. That is, according to the Principle of Indifference, only one initial credence function is permissible, namely, the uniform distribution. In this paper, we offer a novel argument for the Principle of Indifference. I call it the Argument from Accuracy.
I think Jevons is right, that the ultimate original prior for any proposition is 1/2, because the only background information we have about a proposition whose meaning we don't understand is that it is either true or false.
I think this is extremely important when interpreting the epistemic meaning of probability. The odds form of Bayes theorem is this: O(H|E)/O(H)=P(E|H)/P(E|~H). If O(H) is equal to 1 for all propositions, then the equation reduces to O(H|E)=P(E|H)/P(E|~H). The first equation requires the Bayes Factor and the prior to calculate the posterior, while in the second equation the Bayes Factor and the posterior are equivalent. The right side is typically seen as the strength of evidence, while the left side is seen as a rational degree of belief. If O(H)=1, then we can interpret probabilities directly as the balance of evidence, rather than a rational degree of belief, which I think is much more intuitive. So when someone says, "The defendant is probably guilty", they mean that they judge the balance of evidence favors guilt. They don't mean their degree of belief in guilt is greater than 0.5 based on the evidence.
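The odds-form identity above, O(H|E)/O(H) = P(E|H)/P(E|~H), is easy to check numerically. Here is a small sketch with toy numbers (the specific likelihood values are my own, chosen only for illustration) confirming that when the prior odds O(H) = 1, i.e. P(H) = 0.5, the posterior odds coincide with the Bayes factor, exactly as claimed.

```python
# Numerical check of the odds form of Bayes' theorem with toy numbers.
def posterior_odds(prior_p, likelihood_h, likelihood_not_h):
    """Posterior odds O(H|E) = P(H)P(E|H) / (P(~H)P(E|~H))."""
    return (prior_p * likelihood_h) / ((1 - prior_p) * likelihood_not_h)

bayes_factor = 0.8 / 0.2  # P(E|H) / P(E|~H) = 4.0

# With P(H) = 0.5 the prior odds are 1, so posterior odds = Bayes factor:
print(round(posterior_odds(0.5, 0.8, 0.2), 9))   # 4.0

# With any other prior the two come apart: O(H|E) = BF * O(H).
# E.g. P(H) = 0.25 gives prior odds 1/3, so posterior odds = 4/3:
print(round(posterior_odds(0.25, 0.8, 0.2), 9))  # 1.333333333
```

In other words, the "balance of evidence" reading hinges entirely on fixing O(H) = 1; any other prior re-weights the Bayes factor.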
In summary, I think a good case can be made in this way that probabilities are judgements of balances of evidence, but it hinges on the idea that the ultimate original prior for any proposition is 0.5.
What do you think?
Is something that is objectively true any more or less valid or true than something that is subjectively true? Are they not comparable in that sense? Please define objective and subjective.
My professor never taught us what it means, and I cannot find a universal answer online. I was wondering if any of you know what it means. If you do, it would literally save my life
Hi! Richard Feynman spoke once about the difference between knowledge and understanding, using an experience he had with his dad. His dad rattled off the name of a brown thrasher (bird) in several different languages. He explained how you can know something about a bird (names), but understand nothing about the bird itself.
To relate to the world today, we must begin with correct perspectives of understanding. Coding and public policy are two vastly different fields...yet there are principles and pathways that one can follow to ensure a correct perspective and relationship are reached. Epistemology seems to be the way to do that.
All said, I am looking for a broad overview book that discusses principles as opposed to a rabbit-hole dive. A great example would be Eugenia Cheng's The Art of Logic in an Illogical World, which provided me with a fascinating and clear understanding of the world of mathematics, its role in contemporary society, and, of course, its ability to guide us in how to think. I would love an epistemological book that shares similarities with this.