/r/cogsci

The interdisciplinary study of the mind and intelligence, embracing philosophy, psychology, artificial intelligence, neuroscience, linguistics, and anthropology.

A community for those who are interested in the mind, brain, language and artificial intelligence.

Posting rules:

  • This is not a self-help sub. Posts must be about cognitive science. Occasional threads of general interest (discussion of careers in Cog Sci, for example) may be allowed.

  • Currently, calls for participation in scientific studies are allowed. See our policy on that here.

  • All posts must be about cognitive science. Pseudoscience, claims not backed by peer-reviewed science, and the like are not allowed.

  • All decisions on posts, bans, etc. are at the discretion of the moderators. All such decisions are final, and appeals (and especially complaints) will likely be ignored.


Want to know more? Take a look at our reading list here. If you have any suggestions for further inclusions, post them here.


AskScience Science ScienceNet
Psych CogSci Neuro
CogLing IOPsych PsychSci
BehEcon Music Cog NooTropics
NeuroPsych MathPsych Psychopharm
Linguistics PsychoPath Academic Psych
NeuroPhilosophy CogNeuro Multilink

/r/cogsci

117,362 Subscribers

2

Any suggestions for materials regarding eye movement data analysis in EyeLink?

Hi,

I am a second-year PhD student in vision science. I have started collecting eye movement data while participants complete a hazard perception test (video reaction time). I want to analyse differences in eye movements between groups, especially as they relate to driving skill.

As this is the first time I'm using an eye tracker, it would be really helpful if researchers here could suggest materials for learning the analysis methods.

Also, do you have any suggestions for specific parameters that would be worth checking in the eye movement data?
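In case a concrete starting point helps: fixation count and duration, saccade amplitude and rate, and time to first fixation on the hazard region are parameters commonly compared between driver groups, and most of them fall out of a basic event-detection pass. Below is a minimal sketch of velocity-threshold (I-VT) saccade detection; the sampling rate, threshold value, and fake gaze trace are assumptions for illustration, not EyeLink specifics:

```python
import numpy as np

FS = 1000.0   # sampling rate in Hz (assumed; use your EyeLink recording rate)
VT = 30.0     # saccade velocity threshold in deg/s (a common default)

def saccade_mask(x, y):
    """True where gaze speed exceeds the threshold (saccade samples);
    x, y are gaze positions in degrees of visual angle."""
    vx = np.gradient(x) * FS
    vy = np.gradient(y) * FS
    return np.hypot(vx, vy) > VT

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=2000)) * 0.01   # fake gaze trace (degrees)
y = np.cumsum(rng.normal(size=2000)) * 0.01
mask = saccade_mask(x, y)
print(f"{(~mask).mean():.0%} of samples classified as fixation")
```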

2 Comments
2024/05/03
16:05 UTC

7

What Master's degree did you get in combination with a CogSci BS?

I am currently a 3rd-year Cognitive Sciences major at UCI and have focused my coursework primarily on psych/neuro as well as computation. I am curious about what job opportunities different Master's programs can open up when combined with this Cognitive Sciences background.

In particular, I am interested in AI/ML, data analytics, and (pretty different) product engineering - but I'm curious about what paths are out there that I might not even know are possible. I'd love to hear people's stories of what they did with their CogSci degree, what skills they found necessary to develop outside of their degree, and any other tips they'd like to share :)

Thanks!

1 Comment
2024/05/02
21:23 UTC

4

Fans of Dual N-Back?

What's your anecdotal experience with it? Worth it? Better WM?

1 Comment
2024/05/02
03:39 UTC

5

Is a CogSci Degree worth it in 2024?

I'm a community college student interested in transferring to a UC school for cognitive science. I decided to work towards transferring with this major because I am interested in UX. However, the job market for UX seems to be in shambles, and I don't know if it will get any better in the coming years. It's definitely not too late for me to switch majors since I'm only 17, but I feel like I've already made solid progress towards transferring with this major. I was wondering what other fields I could get into with CogSci. The main thing I look for in a career is job security, which is why I've been considering nursing, but then I'd be giving up on my dream of going to either UCSD or UCLA. I'm wondering what other CogSci students/alumni think, and wanted to ask whether you feel you made the right choice in majoring in CogSci.

9 Comments
2024/05/01
21:58 UTC

2

Why Do Books Have a Powerful Impact on the Mind?

Have you ever experienced the transformative power of books on your mind? Reading a book often leads us to adopt a new perspective, influencing how we navigate life and make decisions. This influence is significant, as it molds our thoughts and beliefs.

How does this happen?
Does this mean we could become anyone, any person, just by influencing ourselves in the right way?
And should we therefore be selective in our reading choices, to align them with the life we aspire to lead?

7 Comments
2024/05/01
18:54 UTC

4

An undergraduate cogsci journal that accepts submissions from all over the world (other than the Canadian Undergraduate Journal of CogSci)

I want to send an article on consciousness and AI, with a philosophical tone, to a journal. Can you suggest any journals that accept articles from anywhere? I should mention that I don't have a budget for this, so it should also be free of charge. Do I want too much? Anyway, let me know if you know of such a journal. Lots of love 💗

0 Comments
2024/04/29
19:16 UTC

1

Extended Digit Span Test and IQ chart

Hello everyone!

I've become interested in digit span in the past week. Looked up WAIS manuals and everything. A couple of days ago someone linked an extended digit span test (this one: digit span test) which had completely absurd "norms". So I showed how the people from the website likely calculated them and explained why they were bound to be absurd (here: I explain stuff here). As you can see by clicking on the link, I focused on forward digit span due to lack of time.

I also proposed the use of a log-normal distribution, for various reasons, and the results seem to be much better. However, we can't know how much better without additional data.

So, I'd like to know if anyone is up for trying this test (up to two attempts) and sharing their results. Please include any IQ scores on properly normed tests, such as the RAPM, the WAIS-III, the WAIS-IV, etc.

Here's my best guess as to what the norms for the forward section should be. By "IQ" I mean "if IQ were defined as the result of mapping rarity via the inverse of the CDF of a normal distribution with mean 100 and standard deviation 15, then...".

Score  Rarity (1 in N)  IQ
5      1                66.9
8      1                89.6
11     3                105.0
14     8                116.7
17     25               126.1
20     85               134.0
23     300              140.8
26     1,100            146.7
29     3,800            152.0
32     13,000           156.8
35     43,000           161.1
38     140,000          165.1
41     440,000          168.8
44     1,300,000        172.2
47     4,000,000        175.4
50     11,000,000       178.4
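If you want to reproduce the mapping, here's a minimal scipy sketch of the rarity-to-IQ conversion defined above (rarity is read as "1 in N" from the upper tail; the lowest rows of the table use rounded rarities, so they won't match exactly):

```python
from scipy.stats import norm

def rarity_to_iq(n, mean=100.0, sd=15.0):
    """Map a 1-in-n rarity (upper tail) to IQ via the inverse CDF
    of a normal(mean, sd), as defined in the post. Requires n > 1."""
    return norm.isf(1.0 / n, loc=mean, scale=sd)

for n in (300, 11_000_000):
    # compare with the corresponding rows of the table above
    print(f"1 in {n:,}: IQ ~ {rarity_to_iq(n):.1f}")
```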

Thanks in advance!

3 Comments
2024/04/28
19:32 UTC

0

Can you keep a relationship when you have cognitive problems, or will she get irritated and look for other options sooner?

My biggest problems are logical thinking, visuospatial memory, and overall memory, and I probably have dyspraxia. I don't have problems in the language/writing area.

But in every relationship I've had... they were always treating me like a baby, or like they were my teacher. They would get mad at me because I couldn't remember things when they told me about their days or hobbies.

'I have already told you this / you asked me that before'

They would say things like 'don't do that / don't say that', basically controlling everything I was saying. Or I would make basic errors like forgetting where I parked my car or forgetting roads I take every day. The same happened when something broke in the house: I didn't know how to fix even simple problems.

She would not feel safe even when I was driving, even though I'm a good driver... She would be suspicious about everything I said, or nagging.

Disrespect aside, I think this happens a lot in ADHD cases; I think women are repulsed by someone they have to take care of... like a baby.

2 Comments
2024/04/28
16:51 UTC

2

Cognitive Science UC Berkeley vs. UCLA

Hi all, I just got accepted to Berkeley and UCLA for Cognitive Science. My goals after undergrad are still unclear, but I definitely want to get a Master's (so I know I'll want research opportunities), and I'm also interested in a career in UI/UX. I don't care as much about school reputation, but I'd like to know more about the resources at both places.

1 Comment
2024/04/27
04:15 UTC

1

Conscious experience

Conscious experience is nothing but prediction error. Change my mind.

17 Comments
2024/04/26
13:24 UTC

2

Anyone else here in graduate school for cognitive science?

Hi,

I'm currently doing my masters in Cognitive Science and was wondering if there's anyone else on this subreddit that's in the same boat. Just looking to see if there's anyone out there that wants to chat about cogsci topics and their thoughts about the field?

1 Comment
2024/04/25
13:51 UTC

5

Backpropagation through space, time, and the brain

Paper: https://arxiv.org/abs/2403.16933

Abstract:

Effective learning in neuronal networks requires the adaptation of individual synapses given their relative contribution to solving a task. However, physical neuronal systems -- whether biological or artificial -- are constrained by spatio-temporal locality. How such networks can perform efficient credit assignment remains, to a large extent, an open question. In Machine Learning, the answer is almost universally given by the error backpropagation algorithm, through both space (BP) and time (BPTT). However, BP(TT) is well-known to rely on biologically implausible assumptions, in particular with respect to spatiotemporal (non-)locality, while forward-propagation models such as real-time recurrent learning (RTRL) suffer from prohibitive memory constraints. We introduce Generalized Latent Equilibrium (GLE), a computational framework for fully local spatio-temporal credit assignment in physical, dynamical networks of neurons. We start by defining an energy based on neuron-local mismatches, from which we derive both neuronal dynamics via stationarity and parameter dynamics via gradient descent. The resulting dynamics can be interpreted as a real-time, biologically plausible approximation of BPTT in deep cortical networks with continuous-time neuronal dynamics and continuously active, local synaptic plasticity. In particular, GLE exploits the ability of biological neurons to phase-shift their output rate with respect to their membrane potential, which is essential in both directions of information propagation. For the forward computation, it enables the mapping of time-continuous inputs to neuronal space, performing an effective spatiotemporal convolution. For the backward computation, it permits the temporal inversion of feedback signals, which consequently approximate the adjoint states necessary for useful parameter updates.
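To give a concrete feel for the recipe "energy from neuron-local mismatches, neuronal dynamics via stationarity, parameter dynamics via gradient descent", here is a toy numpy sketch in the predictive-coding family that GLE generalizes. This is emphatically not the paper's GLE (no temporal dynamics, no phase shifts); it only illustrates the generic pattern of local credit assignment:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=4), rng.normal(size=2)   # input and target
W1 = rng.normal(0, 0.5, (8, 4))                 # input -> hidden
W2 = rng.normal(0, 0.5, (2, 8))                 # hidden -> output
phi = np.tanh
dphi = lambda u: 1.0 - np.tanh(u) ** 2

# energy E = 0.5*|e1|^2 + 0.5*|e2|^2, built from neuron-local mismatches
u = W1 @ x                                      # hidden potentials
for _ in range(200):                            # relax u toward stationarity of E
    e1 = u - W1 @ x                             # local mismatch at hidden layer
    e2 = y - W2 @ phi(u)                        # local mismatch at output layer
    u -= 0.1 * (e1 - dphi(u) * (W2.T @ e2))     # gradient of E w.r.t. u

# parameter dynamics: gradient descent on E, using only pre-synaptic
# rates and post-synaptic errors (i.e., fully local quantities)
W1 += 0.01 * np.outer(e1, x)
W2 += 0.01 * np.outer(e2, phi(u))
```

At stationarity, the hidden mismatch e1 equals the error backpropagated through W2, which is how schemes in this family approximate BP with purely local updates.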

0 Comments
2024/04/24
08:30 UTC

3

AI: Words vs. Concepts

Dear All,

I hope all is well.

If I may, does the success of OpenAI ChatGPT et al. amount to an unequivocal assertion of the reach of presentations (symbolic language; parroting, if it makes you happy :) while making that which is presented, along with its representations (i.e., theory and models) almost disposable?

This question takes on added significance (surely, for me) in the light of:

'Presentations/words for the purpose of calculation/communication are always needed, but it is a serious mistake to confuse the arbitrary formulations of such presentations/words with the objective concept itself or to arbitrarily enshrine one choice of presentation/verbalization as the theory, thereby obscuring even the existence of the invariant mathematical content' (see p. 194 in https://drive.google.com/file/d/1tX4Z_FN7FvIYDES_DuPWHChysM-ZhcDX/view?usp=sharing).

Of course, ordinary people know full well that different words are used to refer to one concept/thing and vice-versa, and have no trouble dealing with it all in their non-trivial everyday lives; people are familiar with difficulty and have the procedural knowledge of domestication needed to 'master stimuli' (Freud's phrase). It's the enlightened housed in ivory towers who seem to find it all one Jamesian blooming, buzzing confusion in which we all are supposedly suspended, which plausibly has something to do with their virtuous act of paying bills ;)

In this context, I must hasten to note that along with Professor F. William Lawvere's functorial semantics (http://www.tac.mta.ca/tac/reprints/articles/5/tr5.pdf), which recognized that a theory of a mathematical category of particulars can be construed as a category* (e.g., a theory of cats is a cat**; see Figs. 3 & 5, https://philpapers.org/rec/VENFSF; see also 'Geometry provides its own foundations', https://conceptualmathematics.wordpress.com/wp-content/uploads/2013/02/axiomatizationeducation.pdf), equivalent findings can be found in Bastiani & Ehresmann sketch theory (http://www.numdam.org/item/CTGDC_1972__13_2_104_0.pdf) and Grothendieck's definition of descent (https://conceptualmathematics.wordpress.com/wp-content/uploads/2013/02/lawvereinterview.pdf, p. 15).

Your time permitting, please correct my (plausibly) mistaken understanding!

Thanking you, Yours truly, posina venkata rayudu /\

*I vividly remember reading in the writings of my guru Professor F. William Lawvere that 'thinking of a motion of a thing as a thing' (https://zenodo.org/records/7633972; e.g., we treat direction, speed, etc., characterizing a motion of a thing as things in calculating other characteristics, such as acceleration, of the motion of the thing) is what set the then science in motion to arrive at what/where it is. Thinking of a theory of things as a thing (a theory of a category of particulars is a category, as in functorial semantics/sketch theory/descent) launches science into a stable orbit of the sensible-and-reasonable, a pairing characterized by compatibility, or so I think, and, as such, is a significant intellectual milestone in scientific progress (on par with that of Newtonian mechanism in physics and Darwinian evolution in biology): a festival waiting to be celebrated by making it common sense for the enlightened. In the spirit of giving a glimpse of the perspective: independently of Professor F. William Lawvere repeatedly pointing out the parallels between mathematical knowing and ordinary cognition (e.g., https://www.math.union.edu/~niefiels/13conference/Web/Abstracts/Lawvere.pdf), Professor Alison Gopnik put forward the 'theory theory' of concepts (a never-enough immunization against the Fregean virus: concepts are sets of properties; see p. 380 in https://drive.google.com/file/d/1f6EYx3Y_mXzSeaiuGuz5f6kZthDfEJJe/view?usp=sharing). Unfortunately, with Fodor, the then resident-jester of cogsci, calling it 'the best-kept secret', the fruits of Professor Gopnik's intellectual struggles to make sense of how we conceptualize don't seem to have risen above the ambient noise enough to make the kinship between math and the mundane salient for cogsci to see, seek guidance, and build on the parallels.

**I'd like to thank my good friend Dr. Salk (https://www.irma.ac.in/faculty-research/faculty-members/449; I don't know why they are all dressed like members of a cult ;) for this succinct summation of my discourse on functorial semantics that was, in compliance with my wont, not destined to end ;)

P.S. If you believe that particulars make us wiser, à la William James, then some or all of the above may be of questionable value.

P.P.S. I had to address the distinction between statistical and mathematical for the first time in the Lipton lab, when I proposed to model neuronal death. Aren't there already many models of death (e.g., exponential curves depicting population declines based on observations of how many died and when they died)? In response, I said that, unlike the statistical models of death we have, I would like to develop a mathematical theory/model of neuronal death in terms of the underlying/mediating biophysical processes/mechanisms (diffusion, energy, pumps, etc.; see our Apoptosis vs. Necrosis SfN abstract, https://conceptualmathematics.substack.com/p/shapes-of-figures; I'm sorry I couldn't find the full unpublished manuscript). All of this is to spell out my understanding of the distinction, statistical vs. mathematical, so that you may, your time permitting, correct my (plausibly mistaken) understanding /\

0 Comments
2024/04/23
16:18 UTC

5

Looking for someone to study with

Hi guys,

I am a 56 yo with a lot of time on my hands, and i have a life-long interest in strong AI (a.k.a. "AGI" nowadays). i always approached the study of AI from the perspective of cognitive science and machine learning, and i was wondering if there's anyone here who'd be both interested in this dual approach to AI and have enough time on their hands: someone that i could talk to on a (more-or-less) regular basis, someone to share ideas with (i currently feel totally isolated in my pursuit, with absolutely no one to talk to).

13 Comments
2024/04/23
14:11 UTC

1

Want to work in computational neuroscience

I want to work as a computational neuroscientist. I currently hold a bachelor's in electrical engineering and a master's in marketing and brand management. I really want to work in computational neuroscience, but I don't know where to start. Can you guys help me out, please?

#computationalneuroscience #neuroscience

4 Comments
2024/04/21
13:31 UTC

4

What should I familiarize myself with if I'm trying to get into computational neuroscience?

I'm currently an undergrad double majoring in cognitive science and computer science, and I was thinking of adding an emphasis or a minor, currently choosing between applied math and physics. Overall, I just wanted to hear what people with experience would recommend.

Before anyone tells me what I'm doing is not worth it: it's okay. Whether these minors/emphases or additional majors actually help me in my academic journey is not really important to me. I just really like this topic and learning different things in this area of study. I'm not really set on adding a minor; maybe an emphasis. I was looking into adding a computational modeling emphasis to my cognitive science major. I'm not too worried about graduating as soon as I can, I just wanna learn as much as I can.

My original major was cognitive science, and I just added the computer science major last semester. I'm also in a cognitive mechanics lab, so I guess I could also just ask my PI. But I wanna know what other people think! I want to go to grad school for comp neuro, so what topics should I look into for that? :p

16 Comments
2024/04/19
23:06 UTC

4

Survey on Formal argumentation and Behavioral economics

I'm writing my Master's thesis in Cognitive Science on formal argumentation and how well it matches the function of human reasoning. In this quick survey, you are asked to evaluate interesting argumentation scenarios and judge whether an argument is acceptable. Thank you in advance to those who participate and make the future of social AI possible. https://people.cs.umu.se/~tkampik/argsurvey/Webappsurvey.html

0 Comments
2024/04/19
11:24 UTC

3

AI Consciousness is Inevitable: A Theoretical Computer Science Perspective

Paper: https://arxiv.org/abs/2403.17101

Abstract:

We look at consciousness through the lens of Theoretical Computer Science, a branch of mathematics that studies computation under resource limitations. From this perspective, we develop a formal machine model for consciousness. The model is inspired by Alan Turing's simple yet powerful model of computation and Bernard Baars' theater model of consciousness. Though extremely simple, the model aligns at a high level with many of the major scientific theories of human and animal consciousness, supporting our claim that machine consciousness is inevitable.

20 Comments
2024/04/18
15:50 UTC

0

Asking Neuroscientist Kevin Mitchell if Free Will exists... A new clip from my podcast I thought this community would enjoy. (If you'd like to see new academic interviews coming soon please consider subscribing, thanks!)

7 Comments
2024/04/17
11:27 UTC

0

Getting Kids Off Social Media Won't Fix Adolescent Mental Health

0 Comments
2024/04/16
22:18 UTC

1

How do we cope with small chunks of misread/misunderstood information? (example below)

Hi everyone.

I'm familiar with the research on how people can mentally correct or fill in the gaps in otherwise understandable texts. However, this recent post made me wonder: How exactly is it that we can misread individual words while still grasping the overall meaning of the sentence?

Is it the exact same thing as mentally correcting typos? This seemed slightly different from that, since here the typo leads to another meaningful (albeit inappropriate for the context) abbreviation. The unscientific consensus in the comments seems to be that many people misread the abbreviation but still understood the sentence fine.

1 Comment
2024/04/16
19:40 UTC

3

Natural language instructions induce compositional generalization in networks of neurons

Paper: https://www.nature.com/articles/s41593-024-01607-5

Preprint: https://www.biorxiv.org/content/10.1101/2022.02.22.481293

Code: https://github.com/ReidarRiveland/Instruct-RNN/

Video: https://www.youtube.com/watch?v=miEwuSz7Pts

Abstract:

A fundamental human cognitive feat is to interpret linguistic instructions in order to perform novel tasks without explicit task experience. Yet, the neural computations that might be used to accomplish this remain poorly understood. We use advances in natural language processing to create a neural model of generalization based on linguistic instructions. Models are trained on a set of common psychophysical tasks, and receive instructions embedded by a pretrained language model. Our best models can perform a previously unseen task with an average performance of 83% correct based solely on linguistic instructions (that is, zero-shot learning). We found that language scaffolds sensorimotor representations such that activity for interrelated tasks shares a common geometry with the semantic representations of instructions, allowing language to cue the proper composition of practiced skills in unseen settings. We show how this model generates a linguistic description of a novel task it has identified using only motor feedback, which can subsequently guide a partner model to perform the task. Our models offer several experimentally testable predictions outlining how linguistic information must be represented to facilitate flexible and general cognition in the human brain.
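For readers who want the gist in code, here is a skeletal numpy illustration of the setup the abstract describes; all sizes and the embedding stub are assumptions for illustration (the authors' real implementation is in the linked Instruct-RNN repo). An instruction is embedded by a pretrained language model, linearly projected, and fed as a constant context input to a recurrent sensorimotor network alongside the task stimuli:

```python
import hashlib
import numpy as np

rng = np.random.default_rng(1)
D_EMB, D_CTX, D_HID, D_IN, D_OUT = 768, 64, 128, 16, 8   # assumed sizes

def embed_instruction(text: str) -> np.ndarray:
    """Stand-in for a pretrained language model's sentence embedding."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % 2**32
    return np.random.default_rng(seed).normal(size=D_EMB)

W_proj = rng.normal(0, 0.1, (D_CTX, D_EMB))   # projects the instruction embedding
W_in = rng.normal(0, 0.1, (D_HID, D_IN + D_CTX))
W_rec = rng.normal(0, 0.1, (D_HID, D_HID))
W_out = rng.normal(0, 0.1, (D_OUT, D_HID))

def run_trial(instruction: str, stimuli: np.ndarray) -> np.ndarray:
    ctx = W_proj @ embed_instruction(instruction)   # fixed context per trial
    h = np.zeros(D_HID)
    out = []
    for s_t in stimuli:                             # simple rate RNN over time
        h = np.tanh(W_in @ np.concatenate([s_t, ctx]) + W_rec @ h)
        out.append(W_out @ h)
    return np.array(out)

resp = run_trial("respond opposite to the stimulus direction",
                 rng.normal(size=(50, D_IN)))
print(resp.shape)   # (50, 8)
```

Zero-shot generalization in this framing amounts to handing a trained network the embedding of an instruction it has never seen.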

0 Comments
2024/04/16
07:00 UTC

5

Master's programs that are still accepting applicants with a low GPA in the current cycle

I applied to a couple of unis for a master's, hoping to work in Neuro-AI under a professor there and get a good GPA plus research experience, so I can eventually apply for a PhD in neuroscience, since my undergrad GPA is very low and I don't have a formal background in neuroscience. I had high hopes for a uni in Canada, since my prospective supervisor there perfectly matched my research interests, personally knew the prof I currently work under, and had approved my application, but unfortunately I was informed today that the department rejected my application because of my GPA.

I am currently waiting on decisions for two more master's programs: Cognitive Science at Trento, and Brain and Cognitive Sciences at UPF Barcelona.

What are some other master's programs at unis with comp neuro researchers in Europe (not the US, since it's very expensive) where I can still apply in the current cycle?

My stats

6.62/10 GPA, B.Tech Electronics and Telecommunication, Tier 2 uni in India (19-23)

My Research Experience

1 yr of research exp under a prof at an Ivy League uni

1 yr on a funded project with an Italian uni where I did my thesis too

2 yrs of research experience in my uni's AI research center

6 months at a healthcare startup

Pubs- 3 under review

This is probably the last time I will be applying: if I don't get in this time, it's solely because of my GPA, which can't be changed, so there's no point in trying again next year.

1 Comment
2024/04/15
17:25 UTC

3

Question about TFR using Morlet wavelets

I'm writing a methods section. I analyzed some EEG data with time-frequency methods. I did this using Morlet wavelets (specifically with the mne.time_frequency morlet tools).

I just want to double-check that I know what I did. Basically, for a given frequency, the package defines a Morlet wavelet representing that frequency. Then the package goes through an EEG time series (t = 0 to end) and, at each position, defines a window and takes the dot product between the signal and the defined wavelet. Is this right? Also, can this be said to be a "sliding window" approach and/or "convolving the time series with the wavelet"?

Also, this dot product is taken between the wavelet and the actual signal, right? I'm not taking some dot product with the output of an FFT somehow, correct? I saw the quote below in a paper and it confused me:

Time–frequency measures were computed by multiplying the fast Fourier transformed (FFT) power spectrum of single-trial EEG data with the FFT power spectrum of a set of complex Morlet wavelets and taking the inverse FFT.
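For what it's worth, the two descriptions coincide by the convolution theorem. Sliding the wavelet along the signal and taking dot products is a convolution (up to a time-flip of the wavelet, which for a Morlet only conjugates the output, so the power is unchanged), and a convolution can equivalently be computed by multiplying the complex FFTs of the signal and the wavelet and inverse-transforming; the quote's "power spectrum" is presumably loose wording, since multiplying power spectra would discard phase. A toy numpy check of the identity (not MNE's actual implementation):

```python
import numpy as np

fs, f0, n_cycles = 250.0, 10.0, 7          # sampling rate (Hz), wavelet freq, cycles
t = np.arange(-1.0, 1.0, 1.0 / fs)
sigma = n_cycles / (2 * np.pi * f0)        # Gaussian width of the Morlet
wavelet = np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma**2))

sig = np.random.default_rng(0).standard_normal(1000)   # stand-in EEG channel

# 1) time-domain view: dot product at every shift == convolution
direct = np.convolve(sig, wavelet, mode="full")

# 2) frequency-domain view: multiply complex spectra, then inverse FFT
n = len(sig) + len(wavelet) - 1
via_fft = np.fft.ifft(np.fft.fft(sig, n) * np.fft.fft(wavelet, n))

print(np.allclose(direct, via_fft))        # True
power = np.abs(direct) ** 2                # TFR power at this frequency
```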

Thanks

1 Comment
2024/04/12
15:29 UTC

8

UCSD or UCLA CogSci

I'm interested in going into HCI or UX research/design. I could go to UCSD and do cogsci with a specialization in design and interaction, OR go to UCLA and do cogsci with a specialization in computing.

I think UCSD has a more well-rounded cogsci program and offers a better learning experience. UCLA offers the prestige and the better social life, but their cogsci leans more towards bio. I am also a transfer student, so I want to enjoy this 2-year college experience to the fullest.

Which one would you guys choose?

5 Comments
2024/04/08
17:17 UTC

4

I’m planning to pursue a double major in data science and cognitive science. Should I go to UCLA or Cal?

I’ve been accepted to both UCLA and Cal for cognitive science! I am beyond excited, but also extremely anxious to make my choice. At UCLA, I have been accepted to the Pre-college of Cognitive Science, which is basically just Cognitive Science, but undeclared. At Cal, I’ve been accepted into the College of Letters and Science, which basically is the same thing as an undeclared major at UCLA, except you have more freedom to choose your major.

I’m having a really hard time choosing between the two schools, so I thought that I’d come to reddit for some advice!

At UCLA, I would have the freedom to explore the neuroscience and psych aspects of cognitive science more. There is also a computer science emphasis route for the cognitive science major that I would most likely take if I went to UCLA. I would like to take classes related to machine learning/AI, but I've heard that at both UCLA and Cal, cog sci majors do not get priority when registering for these classes. An additional plus for me is that UCLA is close to home (2 hours away), and there is a 4-year housing guarantee.

At Berkeley, the cog sci major is still flexible, but it seems like there is a bit more of an emphasis on computer science (which is one of the best programs in the nation). The data science program at Cal is also world renowned. My main concern is the extreme "cutthroat" stigma/attitude concerning academics at Cal. I'm genuinely scared that if I go to Cal, my GPA will be ruined and I'll be at the bottom of my class. Obviously, college should NOT be easy (and UCLA is also probably super challenging - I just have some kind of weird complex towards Cal), but I'm intimidated by the rigor of the classes and the academic excellence of my possible classmates. Also, Cal only has a 1-year housing guarantee, so I would have to find housing on my own after my first year.

I love the locations and “vibes” of both schools! I am now just mainly concerned about the actual curriculum of each school, and which one would best set me up for success early in my career. I would really appreciate any advice that anyone could give to me, especially as you all are in the field that I’m aspiring to join!

2 Comments
2024/04/08
04:10 UTC

3

How do I find books on how to learn that are based on solid research? I've read some, but... the replication crisis...

There are many books out there on how to learn, some based on the personal experiences of successful people (I immediately cross those out), and some based on research, mostly from the field of cognitive science.

For example, I just finished reading Make It Stick. It's a good book, but it was published in 2014, which means it's missing at least a decade of recent research, and it was written before the replication crisis was in such sharp focus. The book cites heavily from Thinking, Fast and Slow, which has had some chapters completely discredited, while further analysis showed most other chapters rest on a very shaky empirical basis.

I am certain that if Make It Stick were written today, it wouldn't reference Thinking, Fast and Slow.

I have searched far and wide, but books that aren't huge bestsellers don't get the kind of in-depth analysis that would tell me what I'm getting myself into.

Tl;dr: How do I find recently published, trustworthy books on learning/studying that have minimal errors and whose underlying science is mostly sound (or whose authors, at the very least, warn when it's uncertain)?

I would like to know how to check this myself, but any suggestions for good books on this topic are also welcome!

3 Comments
2024/04/07
18:38 UTC
