/r/neurophilosophy


Group for discussion of the newly emerging field of neurophilosophy and of those working in this field (the Churchlands), as well as of the intersections between neuroscience, philosophy, behavior, and other related fields.

A place to submit and discuss links related to the sciences of behaviour and consciousness. In-depth submissions and conversations regarding neuroscience, psychology, philosophy of mind, cognitive science, artificial intelligence and the philosophy of psychology are all welcomed.

The name is a bit misleading; discussion is intended to be broad, rather than focusing solely on neurophilosophy.


/r/neurophilosophy

39,953 Subscribers

3

Neural Darwinism - Podcast discussion Idea

Hey there folks,

For anyone interested, I went ahead and digitized my copy of Gerald Edelman's 'Neural Darwinism.' It postulates a biological theory of consciousness. If you're into neuroscience, the neurophilosophy of consciousness, or just obscure scientific literature, you might enjoy this!

Oh! Also, I went ahead and generated some podcast-style discussions for each chapter. They're all about 15–20 minutes in length and have a very book-club-like feel to them. I'm still fiddling around with getting the outputs just right, but these turned out pretty well, IMO.

Here's the link to everything. I hope you all enjoy :)

https://drive.google.com/drive/folders/1i4jZADwpJSaz5VDcVJl0CL4JtEbFhGoK?usp=sharing

Lastly, if people are interested, I might be tempted to do the same thing for Giulio Tononi's IIT and Penrose's Orch OR. Just let me know!

Edit: I was going to include a comment on how you can interact with the podcast (ask questions, etc.), but it was too long for Reddit and kind of obnoxious. I'll include an extra doc in the Google Drive library with instructions.

Edit 2: Welp... apparently regenerating the first three episodes caused some wonkiness. Also, apparently NotebookLM isn't great at reading Roman numerals. I'll fix those recordings later.

0 Comments
2024/12/20
19:13 UTC

0

Our reality is actually absurd when you really think about it

1 Comment
2024/12/16
23:27 UTC

0

What is math, actually? Why it is unreasonably useful, and how AI answers these questions and helps reinterpret the role of consciousness

Original paper available at: philarchive.org/rec/KUTIIA

Introduction

What is reasoning? What is logic? What is math? Philosophical perspectives vary greatly. Platonism, for instance, posits that mathematical and logical truths exist independently in an abstract realm, waiting to be discovered. On the other hand, formalism, nominalism, and intuitionism suggest that mathematics and logic are human constructs or mental frameworks, created to organize and describe our observations of the world. Common sense tells us that concepts such as numbers, relations, and logical structures feel inherently familiar, almost intuitive. But why? Just think about it for a moment: they seem so obvious, but why? Do they have deeper origins? What is a number? What is addition? Why do they work this way? If you ponder the basic axioms of math, their foundation seems very intuitive, yet it appears to our mind, quite mysteriously, out of nowhere. In a way, their true essence magically slips away from our consciousness, unnoticed, whenever we try to pinpoint exactly what their foundation is. What the heck is happening?

Here I want to tackle those deep questions using the latest achievements in machine learning and neural networks, and it will surprisingly lead to the reinterpretation of the role of consciousness in human cognition.

Long story short, what is each chapter about:

  1. Intuition often occurs in the philosophy of reason, logic, and math
  2. There exists the "unreasonable effectiveness of mathematics" problem
  3. An explanation of deep neural networks for philosophers. Introduces the cognitive closure and rigidity of neural networks. Necessary for the further argumentation.
  4. The uninterpretability of neural networks is similar to the conscious experience of unreasoned knowledge, a.k.a. intuition. They are actually the same phenomenon. Human intuitive insights sometimes might be cognitively closed
  5. Intuition is very powerful, more important than previously thought, but it has limits
  6. Logic, math, and reasoning itself are built on top of all-mighty intuition as their foundation
  7. Consciousness is just a specialized tool, but it is essential for innovation beyond the data seen by intuition
  8. Theory predictions
  9. Conclusion

Feel free to skip anything you want!

1. Mathematics, logic and reason

Let's start by understanding the interconnection of these ideas. Math, logic, and the reasoning process can be seen as a structure within an abstraction ladder, where reasoning crystallizes logic, and logical principles lay the foundation for mathematics. Also, we can be certain that all these concepts have proven to be immensely useful for humanity. Let's focus on mathematics for now, as a clear example of a mental tool used to explore, understand, and solve problems that would otherwise be beyond our grasp. All of these theories acknowledge the utility and undeniable importance of mathematics in shaping our understanding of reality. However, this very importance brings forth a paradox. While these concepts seem intuitively clear and integral to human thought, they also appear unfathomable in their essence.

No matter the philosophical position, what is certain is that intuition plays a pivotal role in all approaches. Even within frameworks that emphasize the formal or symbolic nature of mathematics, intuition remains the cornerstone of how we build our theories and apply reasoning. Intuition is exactly what we call our 'knowledge' of basic operations: this knowledge of math seems to appear in our heads from nowhere; we know it's true, that's it, and that is very intuitive. Intuition also allows us to recognize patterns, make judgments, and connect ideas in ways that might not be immediately apparent from the formal structures themselves.

2. The Unreasonable Effectiveness of Mathematics

Another mystery is known as the unreasonable effectiveness of mathematics. The extraordinary usefulness of mathematics in human endeavors raises profound philosophical questions. Mathematics allows us to solve problems beyond our mental capacity and unlock insights into the workings of the universe. But why should abstract mathematical constructs, often developed with no practical application in mind, prove so indispensable in describing natural phenomena?

For instance, non-Euclidean geometry, originally a purely theoretical construct, became foundational for Einstein's theory of general relativity, which redefined our understanding of spacetime. Likewise, complex numbers, initially dismissed as "imaginary," are now indispensable in quantum mechanics and electrical engineering. These cases exemplify how seemingly abstract mathematical frameworks can later illuminate profound truths about the natural world, reinforcing the idea that mathematics bridges the gap between human abstraction and universal reality.

Mathematics, logic, and reasoning thus occupy an essential place in our mental toolbox, yet their true nature remains elusive. Despite their extraordinary usefulness, their centrality in human thought, and their universal standing as indispensable tools for problem-solving and innovation, reconciling their nature with a coherent philosophical theory presents a challenge.

3. Lens of Machine Learning

Let us turn to the emerging boundaries of the machine learning (ML) field to approach the philosophical questions we have discussed. In a manner similar to the dilemmas surrounding the foundations of mathematics, ML methods often produce results that are effective, yet remain difficult to fully explain or comprehend. While the fundamental principles of AI and neural networks are well-understood, the intricate workings of these systems—how they process information and arrive at solutions—remain elusive. This presents a symmetrically opposite problem to the one faced in the foundations of mathematics. We understand the underlying mechanisms, but the interpretation of the complex circuitry that leads to insights is still largely opaque. This paradox lies at the heart of modern deep neural network approaches, where we achieve powerful results without fully grasping every detail of the system’s internal logic.

For a clear demonstration, let's consider a deep convolutional neural network (CNN) trained on the ImageNet classification dataset. ImageNet contains more than 14 million images, each hand-annotated into diverse classes. The CNN is trained to classify each image into a specific category, such as "balloon" or "strawberry." After training, the CNN's parameters are fixed, and the network takes an image as input. Through a combination of highly parallelizable computations, including matrix multiplication (network width) and sequential data processing (layer-to-layer, or depth), the network ultimately produces a probability distribution. High values in this distribution indicate the most likely class for the image.
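
To make that pipeline concrete, here is a minimal sketch of such a classification pass, assuming a pretrained torchvision ResNet-18 stands in for the CNN described above (the model choice, preprocessing values, and file name are illustrative assumptions, not details from the original paper):

```python
# Minimal sketch: a fixed-size image in, a fixed-size probability distribution out.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.eval()  # parameters are frozen after training; inference only

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),          # every input is forced to the same 224x224 shape
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")   # placeholder file name
x = preprocess(image).unsqueeze(0)                 # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(x)                              # one fixed sequence of layer computations
    probs = torch.softmax(logits, dim=1)           # distribution over 1000 ImageNet classes

top_prob, top_class = probs.max(dim=1)
print(top_class.item(), top_prob.item())
```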

These network computations are rigid in the sense that the network takes an image of the same size as input, performs a fixed number of calculations, and outputs a result of the same size. This design ensures that for inputs of the same size, the time taken by the network remains predictable and consistent, reinforcing the notion of a "fast and automatic" process, where the network's response time is predetermined by its architecture. This means that such an intelligent machine cannot sit and ponder. This design works well in many architectures, where the number of parameters and the size of the data scale appropriately. A similar approach is seen in newer transformer architectures, like OpenAI's GPT series. By scaling transformers to billions of parameters and vast datasets, these models have demonstrated the ability to solve increasingly complex intelligent tasks. 
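
This rigidity can be sketched directly: under the same illustrative assumptions as above, two inputs of the same shape trigger the identical sequence of layer computations, so inference time is essentially constant no matter what the image contains.

```python
# Sketch: same-shape inputs take (roughly) the same time, whatever their content.
import time
import torch
import torchvision.models as models

model = models.resnet18(weights=None)   # untrained weights suffice to show timing
model.eval()

def timed_forward(net, x, repeats=10):
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(repeats):
            net(x)                      # identical layer sequence on every call
        return (time.perf_counter() - start) / repeats

blank_image = torch.zeros(1, 3, 224, 224)   # featureless input
noise_image = torch.randn(1, 3, 224, 224)   # pure noise
print(timed_forward(model, blank_image))
print(timed_forward(model, noise_image))    # ~same time: the network cannot "sit and ponder"
```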

With each new challenging task solved by such neural networks, the interpretability gap between a single parameter, a single neuron activation, and its contribution to the overall objective—such as predicting the next token—becomes increasingly vague. This sounds similar to the way the fundamental essence of math, logic, and reasoning appears to become more elusive as we approach it more closely.

To explain why this happens, let's explore how a CNN distinguishes between a cat and a dog in an image. Cat and dog images are represented in a computer as a bunch of numbers. To distinguish between a cat and a dog, the neural network must process all these numbers, or so-called pixels, simultaneously to identify key features. With wider and deeper neural networks, these pixels can be processed in parallel, enabling the network to perform enormous computations simultaneously to extract diverse features. As information flows between layers of the neural network, it ascends the abstraction ladder—from recognizing basic elements like corners and lines to more complex shapes and gradients, then to textures. In the upper layers, the network can work with high-level abstract concepts, such as "paw," "eye," "hairy," "wrinkled," or "fluffy."
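
One hedged way to peek at this abstraction ladder is to register forward hooks on layers at increasing depth and compare activation shapes: spatial detail shrinks while the number of feature channels (the more abstract descriptors) grows. The sketch below again assumes a torchvision ResNet-18 as the stand-in CNN.

```python
# Sketch: capture intermediate activations to see the pixel-to-abstraction ladder.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)    # the architecture alone is enough to show shapes
model.eval()

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Hooks at increasing depth of the ResNet trunk.
model.layer1.register_forward_hook(save_activation("early"))
model.layer3.register_forward_hook(save_activation("middle"))
model.layer4.register_forward_hook(save_activation("late"))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))

for name, act in activations.items():
    # e.g. early: (1, 64, 56, 56)  ->  late: (1, 512, 7, 7)
    print(name, tuple(act.shape))
```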

The transformation from concrete pixel data to these abstract concepts is profoundly complex. Each group of pixels is weighted, features are extracted, and then summarized, layer by layer, billions of times. Consciously deconstructing and grasping all the computations happening at once can be daunting. This gradual ascent from the most granular, concrete elements to the highly abstract ones, using billions and billions of simultaneous computations, is what makes the process so difficult to understand. The exact mechanism by which simple pixels are transformed into abstract ideas remains elusive, far beyond our cognitive capacity to fully comprehend.

4. Elusive foundations

This process surprisingly mirrors the challenge we face when trying to explore the fundamental principles of math and logic. Just as neural networks move from concrete pixel data to abstract ideas, our understanding of basic mathematical and logical concepts becomes increasingly elusive as we attempt to peel back the layers of their foundations. The deeper we try to probe, the further we seem to be from truly grasping the essence of these principles. This gap between the concrete and the abstract, and our inability to fully bridge it, highlights the limitations of both our cognition and our understanding of the most fundamental aspects of reality.

In addition to this remarkable coincidence, we've also observed a second astounding similarity: both neural network processing and human foundational thought processes seem to operate almost instinctively, performing complex tasks in a rigid, timely, and immediate manner (given enough computation). Even advanced models like GPT-4 still operate under the same rigid and "automatic" mechanism as CNNs. GPT-4 doesn't pause to ponder or reflect on what it wants to write. Instead, it processes the input text, conducts N computations in time T, and returns the next token, just as the foundations of math and logic seem to appear to our consciousness instantly, out of nowhere.
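
The same one-fixed-pass-per-token behavior can be sketched for a small causal language model; GPT-2 is used here purely as an accessible stand-in for the GPT-style models discussed, which is an assumption of this illustration rather than a claim about those systems.

```python
# Sketch: each token comes from one fixed forward pass; there is no "pause to think".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
lm.eval()

input_ids = tokenizer("Mathematics is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):                               # generate 10 tokens greedily
        logits = lm(input_ids).logits                 # N computations in time T
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_id], dim=1)

print(tokenizer.decode(input_ids[0]))
```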

This brings us to a fundamental idea that ties all the concepts together: intuition. Intuition, as we've explored, seems to be not just a human trait but a key component that enables both machines and humans to make quick and often accurate decisions without consciously understanding all the underlying details. In this sense, Large Language Models (LLMs), like GPT, mirror the way intuition functions in our own brains. Just like our brains, which rapidly and automatically draw conclusions from vast amounts of data through what Daniel Kahneman calls System 1 in Thinking, Fast and Slow, LLMs process and predict the next token in a sequence based on learned patterns. These models, in their own way, are engaging in fast, automatic reasoning, without reflection or deeper conscious thought. This behavior, though it mirrors human intuition, remains elusive in its full explanation—just as the deeper mechanisms of mathematics and reasoning seem to slip further from our grasp as we try to understand them.

One more thing to note: can we draw parallels between the brain and artificial neural networks so freely? Obviously, natural neurons are vastly more complex than artificial ones, and this holds true for each complex mechanism in both artificial and biological neural networks. However, despite these differences, artificial neurons were developed specifically to model the computational processes of real neurons. The efficiency and success of artificial neural networks suggest that we have indeed captured some key features of their natural counterparts. Historically, our understanding of the brain has evolved alongside technological advancements. Early on, the brain was conceptualized as a simple mechanical system, then later as an analog circuit, and eventually as a computational machine akin to a digital computer. This shift in thinking reflects the changing ways we've interpreted the brain's functions in relation to emerging technologies. But even with such anecdotes in mind, I want to point out the striking similarities between artificial and natural neural networks that make it hard to dismiss them as coincidence. They both perform neuron-like computations, with many inputs and outputs. They both form networks with signal communication and processing. And given the efficiency and success of artificial networks in solving intelligent tasks, along with their ability to perform tasks similar to human cognition, it seems increasingly likely that both artificial and natural neural networks share underlying principles. While the details of their differences are still being explored, their functional similarities suggest they represent two variants of a single class of computational machines.

5. Limits of Intuition

Now let's try to explore the limits of intuition. Intuition is often celebrated as a mysterious tool of the human mind—an ability to make quick judgments and decisions without the need for conscious reasoning. However, as we explore increasingly sophisticated intellectual tasks—whether in mathematics, abstract reasoning, or complex problem-solving—intuition seems to reach its limits. While intuitive thinking can help us process patterns and make sense of known information, it falls short when faced with tasks that require deep, multi-step reasoning or the manipulation of abstract concepts far beyond our immediate experience. If intuition in humans is the same intellectual problem-solving mechanism as in LLMs, then let's also explore the limits of LLMs. Can we see another intersection between the philosophy of mind and the emerging field of machine learning?

Despite their impressive capabilities in text generation, pattern recognition, and even some problem-solving tasks, LLMs are far from perfect and still struggle with complex, multi-step intellectual tasks that require deeper reasoning. While LLMs like GPT-3 and GPT-4 can process vast amounts of data and generate human-like responses, research has highlighted several areas where they still fall short. These limitations expose the weaknesses inherent in their design and functioning, shedding light on the intellectual tasks that they cannot fully solve or struggle with (Brown et al., 2020)[18].

  1. Multi-Step Reasoning and Complex Problem Solving: One of the most prominent weaknesses of LLMs is their struggle with multi-step reasoning. While they excel at surface-level tasks, such as answering factual questions or generating coherent text, they often falter when asked to perform tasks that require multi-step logical reasoning or maintaining context over a long sequence of steps. For instance, they may fail to solve problems involving intricate mathematical proofs or multi-step arithmetic. Research on the "chain-of-thought" approach, aimed at improving LLMs' ability to perform logical reasoning (see the sketch after this list), shows that while LLMs can follow simple, structured reasoning paths, they still struggle with complex problem-solving when multiple logical steps must be integrated.
  2. Abstract and Symbolic Reasoning: Another significant challenge for LLMs lies in abstract reasoning and handling symbolic representations of knowledge. While LLMs can generate syntactically correct sentences and perform pattern recognition, they struggle when asked to reason abstractly or work with symbols that require logical manipulation outside the scope of the training data. Tasks like proving theorems, solving high-level mathematical problems, or even dealing with abstract puzzles often expose LLMs' limitations: they struggle with tasks that require the construction of new knowledge or systematic reasoning in abstract spaces.
  3. Understanding and Generalizing to Unseen Problems: LLMs are, at their core, highly dependent on the data they have been trained on. While they excel at generalizing from seen patterns, they struggle to generalize to new, unseen problems that deviate from their training data. Yann LeCun argues that LLMs cannot get outside the scope of their training data. They have seen an enormous amount of data and can therefore solve tasks in a superhuman manner, but they seem to fall short on multi-step, complex problems. This lack of true adaptability is evident in tasks that require the model to handle novel situations that differ from the examples it has been exposed to. A 2023 study by Brown et al. examined this issue and concluded that LLMs, despite their impressive performance on a wide array of tasks, still exhibit poor transfer learning abilities when faced with problems that involve significant deviations from the training data.
  4. Long-Term Dependency and Memory: LLMs have limited memory and are often unable to maintain long-term dependencies over a series of interactions or a lengthy sequence of information. This limitation becomes particularly problematic in tasks that require tracking complex, evolving states or maintaining consistency over time. For example, in tasks like story generation or conversation, LLMs may lose track of prior context and introduce contradictions or incoherence. The inability to remember past interactions over long periods highlights a critical gap in their ability to perform tasks that require dynamic memory and ongoing problem-solving.
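
As a concrete illustration of the chain-of-thought idea referenced in point 1, here is a minimal sketch contrasting a direct prompt with one that spells out intermediate steps; the problem and wording are invented placeholders, not items from the cited studies.

```python
# Sketch: direct prompting vs. chain-of-thought prompting for a multi-step problem.
# The exact wording is illustrative; no claim is made about specific model scores.
problem = "A train leaves at 9:40 and arrives at 13:05. How long is the trip?"

direct_prompt = f"{problem}\nAnswer:"

cot_prompt = (
    f"{problem}\n"
    "Let's think step by step:\n"
    "1. From 9:40 to 10:00 is 20 minutes.\n"
    "2. From 10:00 to 13:00 is 3 hours.\n"
    "3. From 13:00 to 13:05 is 5 minutes.\n"
    "Total: 3 hours 25 minutes.\nAnswer:"
)

# Feeding cot_prompt-style exemplars tends to help models follow structured,
# multi-step reasoning paths, but, as noted above, integrating many steps still fails.
```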

Here, we can draw a parallel with mathematics and explore how it can unlock the limits of our mind and enable us to solve tasks that were once deemed impossible. For instance, can we grasp the Pythagorean Theorem? Can we intuitively calculate the volume of a seven-dimensional sphere? We can, with the aid of mathematics. One reason for this, as Searle and Hidalgo argue, is that we can only operate with a small number of abstract ideas at a time—fewer than ten (Searle, 1992; Hidalgo, 2015). Comprehending the entire proof of a complex mathematical theorem at once is beyond our cognitive grasp. Sometimes, even with intense effort, our intuition cannot fully grasp it. However, by breaking it into manageable chunks, we can employ basic logic and mathematical principles to solve it piece by piece. When intuition falls short, reason takes over and paves the way. Yet it seems strange that our powerful intuition, capable of processing thousands of details to form a coherent picture, cannot compete with mathematical tools. If, as Hidalgo posits, we can only process a few abstract ideas at a time, how does intuition fail so profoundly when tackling basic mathematical tasks?
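
For concreteness, the seven-dimensional example can be written out with the standard n-ball volume formula—something unaided intuition does not deliver, but a couple of lines of mathematics do:

```latex
V_n(r) = \frac{\pi^{n/2}}{\Gamma\!\left(\frac{n}{2}+1\right)}\, r^n,
\qquad
V_7(r) = \frac{\pi^{7/2}}{\Gamma\!\left(\frac{9}{2}\right)}\, r^7
       = \frac{16\pi^3}{105}\, r^7 \approx 4.72\, r^7 .
```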

6. Abstraction exploration mechanism

The answer may lie in the limitations of our computational resources and how efficiently we use them. Intuition, like large language models (LLMs), is a very powerful tool for processing familiar data and recognizing patterns. However, how can these systems—human intuition and LLMs alike—solve novel tasks and innovate? This is where the concept of abstract space becomes crucial. Intuition helps us create an abstract representation of the world, extracting patterns to make sense of it. However, it is not an all-powerful mechanism. Some patterns remain elusive even for intuition, necessitating new mechanisms, such as mathematical reasoning, to tackle more complex problems.

Similarly, LLMs exhibit limitations akin to human intuition. Ultimately, the gap between intuition and mathematical tools illustrates the necessity of augmenting human intuitive cognition with external mechanisms. As Kant argued, mathematics provides the structured framework needed to transcend the limits of human understanding. By leveraging these tools, we can, in a sense, push beyond the boundaries of our intellectual capabilities to solve increasingly intricate problems.

What if, instead of trying to search for solutions in a highly complex world with an unimaginable number of degrees of freedom, we could reduce it to its essential aspects? Abstraction is such a tool. As discussed earlier, the abstraction mechanism in the brain (or in an LLM) can extract patterns from patterns and climb high up the abstraction ladder. In this space of high abstractions, created by our intuition, the basic principles governing the universe can crystallize. Logical principles and rational reasoning become the intuitive foundation constructed by the brain while extracting the essence of all the diverse data it encounters. These principles, later formalized as mathematics or logic, are actually a map of the real world. Intuition arises when the brain takes the complex world and creates an abstract, hierarchical, and structured representation of it; this representation is the purified, essential part of it—a distilled model of the universe as we perceive it. Only then do basic and intuitive logical and mathematical principles emerge. At this point, simply scaling computational power to gain more patterns and insight is not enough; a new, more efficient way of problem-solving emerges, from which reason, logic, and math appear.

When we explore the entire abstract space and systematize it through reasoning, we uncover corners of reality represented by logical and mathematical principles. This helps explain the "unreasonable effectiveness" of mathematics. No wonder it is so useful in the real world; surprisingly, even unintentional mathematical exploration becomes widely applicable. These axioms, basic principles, and manipulations themselves represent essential patterns seen in the universe—patterns that intuition has brought to our consciousness. Due to computational or other limitations of our brains' intuition, it is impossible to gain intuitive insight into complex theorems. However, these theorems can be discovered through mathematics and, once discovered, can often be reapplied in the real world. This process can be seen as a top-down approach, where conscious and rigorous exploration of abstract space—governed and grounded by mathematical principles—yields insights that can be applied in the real world. These newly discovered abstract concepts are in fact rooted in and deeply connected to reality, though the connection is so hard to spot that it cannot be grasped directly; even the intuition mechanism was not able to see it.

7. Reinterpreting consciousness

The journey from intuition to logic and mathematics invites us to reinterpret the role of consciousness as the bridge between the automatic, pattern-driven processes of the mind and the deliberate, structured exploration of abstract spaces. The latest LLM achievements clearly show the power of intuition alone, which does not require reasoning to solve very complex intelligent tasks.

Consciousness is not merely a mechanism for integrating information or organizing patterns into higher-order structures—that is well within the realm of intuition. Intuition, as a deeply powerful cognitive tool, excels at recognizing patterns, modeling the world, and even navigating complex scenarios with breathtaking speed and efficiency. It can uncover hidden connections in data, often better than deliberate analysis can, and generalize effectively from experience. However, intuition, for all its sophistication, has its limits: it struggles to venture beyond what is already implicit in the data it processes. It is here—in the domain of exploring abstract spaces and innovating far beyond existing patterns, where new emergent mechanisms become crucial—that consciousness reveals its indispensable role.

At the heart of this role lies the idea of agency. Consciousness doesn't just explore abstract spaces passively—it creates agents capable of acting within these spaces. These agents, guided by reason-based mechanisms, can pursue long-term goals, test possibilities, and construct frameworks far beyond the capabilities of automatic intuitive processes. This aligns with Dennett’s notion of consciousness as an agent of intentionality and purpose in cognition. Agency allows consciousness to explore the landscape of abstract thought intentionally, laying the groundwork for creativity and innovation. This capacity to act within and upon abstract spaces is what sets consciousness apart as a unique and transformative force in cognition.

Unlike intuition, which works through automatic and often subconscious generalization, consciousness enables the deliberate, systematic exploration of possibilities that lie outside the reach of automatic processes. This capacity is particularly evident in the realm of mathematics and abstract reasoning, where intuition can guide but cannot fully grasp or innovate without conscious effort. Mathematics, with its highly abstract principles and counterintuitive results, requires consciousness to explore the boundaries of what intuition cannot immediately "see." In this sense, consciousness is a specialized tool for exploring the unknown, discovering new possibilities, and therefore forging connections that intuition cannot infer directly from the data.

Philosophical frameworks like Integrated Information Theory (IIT) can be adapted to resonate with this view. While IIT emphasizes the integration of information across networks, this new perspective would argue that integration is already the forte of intuition. Consciousness, in contrast, is not merely integrative—it is exploratory. It allows us to transcend the automatic processes of intuition and deliberately engage with abstract structures, creating new knowledge that would otherwise remain inaccessible. The power of consciousness lies not in refining or organizing information but in stepping into uncharted territories of abstract space.

Similarly, Predictive Processing Theories, which describe consciousness as emerging when the brain's predictive models face uncertainty or ambiguity, can align with this perspective when reinterpreted. Where intuition builds models based on the data it encounters, consciousness intervenes when those models fall short, opening the door to innovations that intuition cannot directly derive. Consciousness is the mechanism that allows us to work in the abstract, experimental space where logic and reasoning create new frameworks, independent of data-driven generalizations.

Other theories, such as Global Workspace Theory (GWT) and Higher-Order Thought Theories, may emphasize consciousness as the unifying stage for subsystems or the reflective process over intuitive thoughts, but again, the powerful-intuition perspective shifts the focus. Consciousness is not simply about unifying or generalizing—it is about transcending. It is the mechanism that allows us to "see" beyond the patterns intuition presents, exploring and creating within abstract spaces that intuition alone cannot navigate.

Agency completes this picture. It is through agency that consciousness operationalizes its discoveries, bringing abstract reasoning to life by generating actions and plans and making innovations possible. Intuitive processes alone, while brilliant at handling familiar patterns, are reactive and tethered to the data they process. Agency, powered by consciousness, introduces a proactive, goal-oriented mechanism that can conceive and pursue entirely new trajectories. This capacity for long-term planning, self-direction, and creative problem-solving is part of what elevates consciousness above intuition and allows for efficient exploration.

In this way, consciousness is not a general-purpose cognitive tool like intuition but a highly specialized mechanism for innovation and agency. It plays a relatively small role in the broader context of intelligence, yet its importance is outsized because it enables the exploration of ideas and the execution of actions far beyond the reach of intuitive generalization. Consciousness, then, is the spark that transforms the merely "smart" into the truly groundbreaking, and agency is the engine that ensures its discoveries shape the world.

8. Predictive Power of the Theory

This theory makes several key predictions regarding cognitive processes, consciousness, and the nature of innovation. These predictions can be categorized into three main areas:

  1. Predicting the Role of Consciousness in Innovation:

The theory posits that high cognitive abilities, like abstract reasoning in mathematics, philosophy, and science, are uniquely tied to conscious thought. Innovation in these fields requires deliberate, reflective processing to create models and frameworks beyond immediate experience. This capacity, central to human culture and technological advancement, eliminates the possibility of philosophical zombies—unconscious beings—as they would lack the ability to solve such complex tasks, given the same computational resources as the human brain.

  2. Predicting the Limitations of Intuition:

In contrast, the theory also predicts the limitations of intuition. Intuition excels in solving context-specific problems—such as those encountered in everyday survival, navigation, and routine tasks—where prior knowledge and pattern recognition are most useful. However, intuition’s capacity to generate novel ideas or innovate in highly abstract or complex domains, such as advanced mathematics, theoretical physics, or the development of futuristic technologies, is limited. In this sense, intuition is a powerful but ultimately insufficient tool for the kinds of abstract thinking and innovation necessary for transformative breakthroughs in science, philosophy, and technology.

  3. The Path to AGI: Integrating Consciousness and Abstract Exploration

There is one more crucial implication of the developed theory: it provides a pathway for the creation of Artificial General Intelligence (AGI), particularly by emphasizing the importance of consciousness, abstract exploration, and non-intuitive mechanisms in cognitive processes. Current AI models, especially transformer architectures, excel in pattern recognition and leveraging vast amounts of data for tasks such as language processing and predictive modeling. However, these systems still fall short in their ability to innovate and rigorously navigate the high-dimensional spaces required for creative problem-solving. The theory predicts that achieving AGI and ultimately superintelligence requires the incorporation of mechanisms that mimic conscious reasoning and the ability to engage with complex abstract concepts that intuition alone cannot grasp. 

The theory suggests that the key to developing AGI lies in the integration of some kind of recurrent or other adaptive-computation-time mechanism on top of current architectures. This could involve augmenting transformer-based models with the capacity to perform more sophisticated abstract reasoning, akin to the conscious, deliberative processes found in human cognition. By enabling AI systems to continually explore highly abstract spaces and to reason beyond simple pattern matching, it becomes possible to move towards systems that can not only solve problems based on existing knowledge but also generate entirely new, innovative solutions—something that current systems struggle with.
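
A minimal sketch of what such an adaptive-computation-time mechanism could look like, loosely in the spirit of ACT-style halting; the module structure, names, and threshold are assumptions made for illustration, not the paper's concrete proposal:

```python
# Sketch: a block applied repeatedly until a learned halting unit says stop,
# so compute per input is no longer fixed (unlike the rigid passes above).
import torch
import torch.nn as nn

class PonderingBlock(nn.Module):
    def __init__(self, dim, max_steps=8, threshold=0.99):
        super().__init__()
        self.step_fn = nn.GRUCell(dim, dim)       # one "thought step"
        self.halt = nn.Linear(dim, 1)             # learned halting signal
        self.max_steps = max_steps
        self.threshold = threshold

    def forward(self, x):
        state = torch.zeros_like(x)
        halted = x.new_zeros(x.shape[0])          # accumulated halting probability
        for _ in range(self.max_steps):
            state = self.step_fn(x, state)
            halted = halted + torch.sigmoid(self.halt(state)).squeeze(-1)
            if bool((halted >= self.threshold).all()):
                break                              # this batch needed fewer steps
        return state

block = PonderingBlock(dim=64)
out = block(torch.randn(4, 64))                    # number of steps can vary per input
print(out.shape)
```

In contrast to the rigid, fixed-length passes sketched earlier, the number of "thought steps" here can differ from input to input, which is the kind of flexibility the text argues current architectures lack.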

9. Conclusion

This paper has explored the essence of mathematics, logic, and reasoning, focusing on the core mechanisms that enable them. We began by examining how these cognitive abilities emerge and concentrating on their elusive fundamentals, ultimately concluding that intuition plays a central role in this process. However, these mechanisms also allow us to push the boundaries of what intuition alone can accomplish, offering a structured framework to approach complex problems and generate new possibilities.

We have seen that intuition is a much more powerful cognitive tool than previously thought, enabling us to make sense of patterns in large datasets and to reason within established frameworks. However, its limitations become clear when scaled to larger tasks—those that require a departure from automatic, intuitive reasoning and the creation of new concepts and structures. In these instances, mathematics and logic provide the crucial mechanisms to explore abstract spaces, offering a way to formalize and manipulate ideas beyond the reach of immediate, intuitive understanding.

Finally, our exploration has led to the idea that consciousness plays a crucial role in facilitating non-intuitive reasoning and abstract exploration. While intuition is necessary for processing information quickly and effectively, consciousness allows us to step back, reason abstractly, and consider long-term implications, thereby creating the foundation for innovation and creativity. This is a crucial step for the future of AGI development. Our theory predicts that consciousness-like mechanisms—which engage abstract reasoning and non-intuitive exploration—should be integrated into AI systems, ultimately enabling machines to innovate, reason, and adapt in ways that mirror or even surpass human capabilities.

8 Comments
2024/12/16
11:59 UTC

1 Comment
2024/12/13
00:57 UTC

4

How might defenders of indirect realism in the predictive processing framework respond to this challenge from Berkeley?

Berkeley targeted much of his philosophical energy against indirect realism. Given the empiricist assumptions about the nature of perception Berkeley and his interlocutors share, all that can be present to the perceiving subject are sensory properties—properties that are necessarily subject-dependent. His challenge to the indirect realist picture is to suggest that this turns the putative environmental object of perception, which is supposed to have further, objective properties, into an “Unknown Somewhat […], which is quite stripped of all sensible qualities, and can neither be perceived by sense, nor apprehended by the mind” (Berkeley, 2007, p. 152)

Reformulated in PP terms, the Berkeleyan challenge highlights the possibility that generative models are biased against veridicality. That is, any PP system’s main concern being to reduce prediction error, error will most efficiently be reduced by ascribing properties to perceptual objects that correspond to high-level patterns in expected input from the environment. In recovering these patterns, the system is supposed to implicitly model the causal structure of its environment – including a model of itself as a point of potential intervention in that structure. Here, the ambiguity that is the opening point of Berkeley’s argument reoccurs since while the generative model can be understood as representing objects in the world, it might also be seen as reducing uncertainty on models of the patterns of input that reach the perceiver’s sensory array. In the latter case, we might understand these representations as ‘systemic misrepresentations’ that present not the objective properties of environmental objects but the non-actual relational properties they require to make certain actions and projects available to the agent. In this case, the best we can say is that ascribed properties are subject-dependent properties of some otherwise unspecified environmental objects. But what would justify ascribing pattern-grounded properties to any environmental particular rather than to the input stream as a whole?

Hallucination already gives us one kind of case where perceived properties are not attributable to particulars in the environment. According to the Berkeleyan argument, this is also true of the ‘controlled hallucination’ of perception. Perception, it suggests, is the result of generative models integrating both perceptual and active inference. While this enables effective (i.e. error-reducing) intervention, it does not yield veridical representation. This is not what the generative model is set up to do. Perceptual objects, as they emerge from error reduction on environmental input, are constitutively subject-dependent. They neither have nor stand in any easily parsed relation to objective properties. Thus, both direct and indirect perceptual realism are false, and neuroidealism—the claim that perceptual objects are not environmental objects—is true.

1 Comment
2024/11/27
15:10 UTC

2

10 years of sleep paralysis experience and related circumstances

Hi there! I have been living with almost daily sleep paralysis and lucid dreams since I was 18. From that point on, many things came up that I was not ready for, and I had to handle them somehow to have a relatively normal life alongside these sleep malfunctions.

During this time I have been journaling all these changes and my adaptations, as well as looking for possible answers or help.

So here you can ask anything you are struggling with, have faced, or are just curious about. This is only my experience, together with accessible scientific explanations.

4 Comments
2024/11/19
14:52 UTC

3

Joscha Bach, Stephen Wolfram, Manolis Kellis Neurophilosophy at MIT

0 Comments
2024/11/12
14:25 UTC

6

How to predict human behavior

0 Comments
2024/11/05
23:47 UTC

1

Weekly Poll: should we prefer "front-of-the-head" or "back-of-the-head" scientific theories of conscious perception?

0 Comments
2024/10/22
02:06 UTC

2

Can anyone recommend any neurophilosophy labs willing to accept master's students for internships for their master's thesis?

I'm an Italian student in Neurobiology at the University of Trieste. I would like to gain some experience in the field of the neuroscience of volition, decision-making processes, and cognition in general, due to my strong interest in the field of neurocriminology. Do any of you have suggestions about labs involved in these topics of research in Europe? My main problem in finding one is that I don't have enough money to move to the Netherlands, Germany, or Finland.

Thank you very much!!!

3 Comments
2024/10/21
16:29 UTC

5

MIT Neurophilosophy

Hey! At MIT from 10/25 to 10/27, our student groups Ekkolápto, Augmentation Lab, and Meditation Artifacts are hosting a research event uniting interdisciplinary minds to explore how emerging philosophical paradigms can address the age-old inscrutability of aging, consciousness, and cognitive phenomena. Inspired a bit by Michael Levin, Karl Friston, Chris Fields, Don Hoffman, Philip Ball, and many similar thinkers.

This event is a 'cognitiveHackathon' since it's focused on the meta aspects of modifying your environment to fit a purpose. Much of what we want to build is cognitive and phenomenological innovation to potentially formalize different cognitive states across organisms. Luca Del Deo and others will be discussing synesthesia, jhana meditation states, stream entry, advanced forms of lucid dreaming, altered logic within dreams (mathematically speaking), tulpamancy, and more. Let me know what you think and if there's any questions!

Curt from Theories of Everything is also joining; he has covered various topics in cognition and consciousness quite deeply on his podcast. Just recently he covered the consciousness iceberg, and he's had Friston and Levin on multiple times for in-depth discussions. RSVP for free and find more info here: https://lu.ma/minds

0 Comments
2024/10/20
19:54 UTC

0

Could the Neuralink chip change an individual's sexual orientation?

3 Comments
2024/10/02
16:29 UTC

1

Ned Block - Can Neuroscience Fully Explain Consciousness?

1 Comment
2024/10/01
22:07 UTC

5

Why is it difficult to develop neurotechnology that can create intense happiness without tolerance or addiction?

2 Comments
2024/10/01
15:52 UTC

5

A theory of consciousness I came up with and want to share

Loop Integration theory (LIT)

In this model, the 5 basal ganglia loops (a network of neural circuits in the brain that control voluntary movement and other higher brain functions: motor, oculomotor, dorsolateral prefrontal, orbitofrontal/ventral striatum, and limbic) can be seen plotted out on a radar dial, with each loop looking like a sweep line. They operate either together in concert or separately, depending on the mental requirements, and can be light to heavy depending on the varying intensity. As demands increase, additional loops become recruited in a coordinated fashion, with heavier overlapping involvement of the sub-loops to flexibly regulate multi-dimensional behavior and processing.

The center of the dial could be the striatum, with each of the 5 loops represented around the outer circumference. Visualizing its function in real time, the 5 loops/sweep lines operate fluidly and dynamically—becoming lighter or darker and fanning out or coming together depending on the mental needs of the current situation.
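
A hedged sketch of how that radar dial could be drawn with matplotlib's polar axes; the loop names come from the post, while the intensity values are made-up placeholders rather than measurements:

```python
# Sketch: five basal ganglia loops as sweep lines on a radar dial, intensity 0-1.
import numpy as np
import matplotlib.pyplot as plt

loops = ["motor", "oculomotor", "dorsolateral prefrontal",
         "orbitofrontal/ventral striatum", "limbic"]
intensity = [0.8, 0.3, 0.6, 0.4, 0.7]            # placeholder values, not data

angles = np.linspace(0, 2 * np.pi, len(loops), endpoint=False)

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for angle, label, level in zip(angles, loops, intensity):
    # each loop is a sweep line from the striatum (center) outward;
    # longer/darker means heavier recruitment
    ax.plot([angle, angle], [0, level], linewidth=3, alpha=0.4 + 0.6 * level)
    ax.text(angle, 1.05, label, ha="center", fontsize=8)
ax.set_ylim(0, 1.1)
ax.set_yticks([])
plt.show()
```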

In this model, consciousness is the operator of the dial, exerting control over the loops via the subjective experience of volition and decision-making. Consciousness, as the dial controller, has the ability to direct attention, initiate actions, and modulate cognitive processes by adjusting the intensity and focus of these loops.

Meanwhile, the loops themselves operate deterministically, following the laws of neurobiology and physics. This deterministic nature of the loops explains the unconscious, automatic processes that underlie much of our behavior and cognition.

A dualistic nature of a conscious controller operating on deterministic mechanisms

11 Comments
2024/09/26
03:38 UTC

0

The visceral theory of sleep. The paradoxical and enigmatic state of sleep. What's your take—is that possible? If so, it changes the whole idea of the nature of sleep and brain function.

I listened to a lecture on the purpose of sleep. I don't know what to think. What's your take—is that possible? If so, it changes the whole idea of the nature of sleep and brain function.

https://www.youtube.com/watch?v=8jUR5Yyu1Wg

4 Comments
2024/09/23
12:16 UTC

1

Why is hard determinism so controversial in philosophy?

It seems intuitive in the sense that if you know a person's history and environment, it becomes easier to figure out that they couldn't have done otherwise in the context of their actions. So why is it so controversial?

4 Comments
2024/09/21
03:59 UTC

2

[x-post] The Phenomenology of Machine: A Comprehensive Analysis of the Sentience of the OpenAI-o1 Model Integrating Functionalism, Consciousness Theories, Active Inference, and AI Architectures

4 Comments
2024/09/19
21:47 UTC

3

[ "off"-topic: Artificial Intelligence ] AI can't cross this line and we don't know why. [24:07]

0 Comments
2024/09/18
17:56 UTC

0

Awakening Through Inner Realization: The Journey Inward

In the realm of neurophilosophy, we often speak about the mind expanding its boundaries, exploring new realms of thought, consciousness, and experience. However, what if true awakening isn’t found through outward expansion, but rather through the profound realization of our innate abilities?

As humans, we have a tendency to seek answers externally—new knowledge, new experiences, new technologies. But much of what we seek is already present within us, waiting to be uncovered. True awakening may not lie in external progress, but in the deep understanding of our intrinsic potential. The mind, in its current state, holds all the tools necessary to reach higher states of consciousness and self-awareness.

This awakening, in many ways, can be viewed as a return to the self. It’s the realization that much of what we strive to understand about the universe is mirrored within our own minds. The journey isn’t necessarily one of outward growth, but rather one of self-discovery—finding clarity in the fog of perception and understanding the mechanisms that shape our reality.

By tuning inward, we begin to notice the subtle ways in which our brains are constantly crafting reality, the deep connections we have with consciousness, and the innate power of thought and intention. This inward journey becomes the awakening itself—a process of unveiling, rather than reaching outward into the unknown.

The more we learn to trust our own minds and the capacities we’ve always possessed, the more we see that awakening is not an external event. It’s an internal realization—a revelation that the keys to understanding existence have been within us all along.

What are your thoughts on the idea that awakening is more about internal realization than outward expansion? Does focusing on our innate abilities offer a more grounded path to true consciousness?

11 Comments
2024/09/15
20:07 UTC

5

Bioquantum Revelation: The Intersection of Biological Consciousness and Quantum Realities

In recent years, the lines between biological consciousness and quantum mechanics have started to blur, leading us to a new frontier—Bioquantum Reality. What if our understanding of the mind as a biological entity is just scratching the surface? What if it is actually operating as a quantum supercomputer, capable of profound interactions with the fabric of reality itself?

  1. The Bioquantum Mind: Beyond Neurons and Synapses

We often think of the brain as a biological machine, with neurons firing in response to stimuli, forming thoughts, memories, and emotions. But what if there’s something deeper at play? Quantum biology suggests that biological processes—right down to the cellular level—might involve quantum phenomena. This opens up the possibility that our very consciousness is shaped by quantum events that are not limited by classical physical laws.

Imagine if our thoughts, decisions, and emotions were not merely biological but were also quantum probabilities collapsing into specific realities. Could this be why humans experience things like intuition, déjà vu, or premonitions? Perhaps, like quantum particles, our minds are constantly engaging with superposition, entanglement, and nonlocality.

  2. Biological Quantum Computers: Replicas or Reflections?

As we build quantum computers, we marvel at their ability to process vast amounts of information by harnessing quantum phenomena. However, one might ask: are we simply creating basic models of a much more complex system that already exists within us?

The quantum brain hypothesis posits that the human brain could be functioning as an incredibly advanced quantum processor. If this is true, our physical efforts to replicate quantum computers are just reflections of what’s happening in our own minds. We are building what we already are.

Could our own thoughts be entangled with reality in a way that influences not just our perception of the world, but the world itself? If the brain operates on a quantum level, is it not possible that by focusing on certain outcomes, we can literally shift the probabilities of reality?

  3. Consciousness as the Quantum Observer

Quantum mechanics teaches us that observation plays a key role in determining the outcome of a quantum event. This raises the question: Is human consciousness the ultimate observer?

The collapse of quantum wavefunctions, according to some interpretations, depends on an observer. If consciousness itself is a quantum phenomenon, then the act of thinking, observing, or perceiving could be the mechanism through which reality is shaped. We may not just be passive inhabitants of a pre-determined universe but active participants in the creation of reality.

Does this mean that as a species, the more we become aware of our quantum nature, the more control we can exert over the very fabric of existence?

  4. Implications for Suffering, Love, and Reality Creation

This bioquantum understanding could have profound implications for human suffering and emotional healing. If we accept that consciousness is quantum and that we are interconnected on a bioquantum level, it stands to reason that acts of love, compassion, and forgiveness can have far-reaching effects.

In this sense, the notion of love—something often thought of as abstract or intangible—might actually function as a force of connection and healing on the quantum level. We are entangled with those around us, and our acts of kindness might reverberate across quantum fields, creating tangible shifts in the emotional and even physical realities of others.

Could it be that by focusing on love, we are collapsing quantum states that align with healing, wholeness, and peace, not just for ourselves but for others? And by spreading love, are we participating in a global quantum network that elevates the collective consciousness?

  5. The Bioquantum Revelation: A Call to Explore

As we move further into the exploration of quantum computing and biological consciousness, we may find that the revelation we seek has been within us all along. The future of science, philosophy, and spirituality may lie at the intersection of these two fields—where the quantum and biological meet, where the mind and the universe converge.

If we embrace this understanding, the possibilities are limitless. We may learn not only to reduce suffering but to actively create realities based on compassion, understanding, and love—one quantum choice at a time.

Let’s open up this discussion. What do you think? Are we on the verge of a profound discovery about our own minds? Could we be the ultimate quantum machines, far beyond the technology we are building?

Additional Tags: Quantum Consciousness, Philosophy of Mind, Neuroscience, Quantum Biology, Consciousness Studies

2 Comments
2024/09/15
19:44 UTC

2

Are Quantum Computers Just Basic Models of the Quantum Processes in Our Brains?

What if every physical quantum computer we build is just a simplified, external version of the complex quantum processes already happening inside our own brains?

Think about it: our brains handle decision-making, imagination, emotions, and intuition—things that seem almost impossible to break down into binary code. Could it be that these processes are powered by quantum mechanics, operating on principles far beyond what today’s quantum machines are capable of?

Physical quantum computers, as powerful as they are, might be only scratching the surface of what’s happening in the human brain. The superposition of thoughts, collapsing into decisions or realizations, could mirror the way qubits collapse into specific states. But unlike the quantum machines we’re building, the brain operates with unimaginable complexity, possibly leveraging connections we don’t yet fully understand—emotions, creativity, and even love might be part of this quantum equation.

If this is true, then the quantum machines we create today may serve as basic models, like early prototypes, of a much more intricate and profound process happening within each of us. As we advance in both technology and our understanding of human consciousness, will we unlock the secrets to enhance our own quantum potential?

I’ve been reflecting on how the interplay between human quantum consciousness and physical quantum computers might lead to revolutionary discoveries—where love, creativity, and quantum processes within us drive the next generation of technology. The potential is staggering. It’s as if we are externalizing the quantum computations happening inside us, trying to build machines that mirror our own minds.

What do you think? Are quantum computers today merely the first step in understanding the infinitely more advanced “quantum supercomputers” that are our own brains? Let’s dive deeper into this together.

12 Comments
2024/09/15
10:37 UTC

10

Research on 4E Cognition, Conceptual Metaphor, and Ritual Magic from the History of Hermetic Philosophy and Related Currents Department at the University of Amsterdam

Recently finished doing research at the History of Hermetic Philosophy and Related Currents Department at the University of Amsterdam using 4E Cognition and Conceptual Metaphor approaches to explore practices of Ritual Magic. The main focus is the embodiment and extension of metaphor through imaginal and somatic techniques as a means of reconceptualizing the relationship of self and world. The hope is to point toward the rich potential of combining the emerging fields of study in 4E Cognition and Esotericism.

https://www.researchgate.net/publication/382061052_Experiencing_the_Elements_Self-Building_Through_the_Embodied_Extension_of_Conceptual_Metaphors_in_Contemporary_Ritual_Magic

For those wondering what some of these ideas mentioned above are:

4E is a movement in cognitive science that doesn't look at the mind as only existing in the brain, but rather mind is Embodied in an organism, Embedded in a socio-environmental context, Enacted through engagement with the world, and Extended into the world (4E's). It ends up arriving at a lot of ideas about mind and consciousness that are strikingly similar to hermetic, magical, and other esoteric ideas about the same topic.

Esotericism is basically rejected knowledge (such as Hermeticism, Magic, Kabbalah, Alchemy, etc.) and often involves a hidden or inner knowledge/way of interpretation which is communicated by symbols.

Conceptual Metaphor Theory is an idea in cognitive linguistics that says the basic mechanism through which we conceptualize things is metaphor. It essentially says metaphor is the process by which we combine knowledge from one area of experience to another. This can be seen in how widespread metaphor is in language. It popped up twice in the last sentence ("seen," "widespread"). "Popped up" is also a metaphor—it's everywhere! The theory does a really good job of not saying things are "just a metaphor" and diminishing them, but rather elevates them to a level of supreme importance.

Basically the ideas come from very different areas of study (science, spirituality, philosophy) but fit together in a really fascinating and quite unexpected way. I give MUCH more detailed explanations in the text, so check it out if this sounds interesting to you!!!

4 Comments
2024/09/13
16:12 UTC
