/r/compsci
Computer Science Theory and Application. We share and discuss any content that computer scientists find interesting. People from all walks of life welcome, including hackers, hobbyists, professionals, and academics.
Self-posts and Q&A threads are welcome, but we prefer high quality posts focused directly on graduate level CS material. We discourage most posts about introductory material, how to study CS, or about careers. For those topics, please consider one of the subreddits in the sidebar instead.
Read the original free Structure and Interpretation of Computer Programs (or see the online conversion of SICP).
If you are new to Computer Science please read our FAQ before posting. A list of book recommendations from our community for various topics can be found here.
It's been a while since we last did this, so I would like you to name at least one CS-related blog you visit regularly. Optionally, add a few keywords on why it's interesting.
Starting with
https://pete.akeo.ie/ - Security Researcher
https://avikdas.com/ - Computer scientist; 3D graphics, dynamic programming, scalability, and more
https://karpathy.ai/ - Machine Learning
Hi guys, sorry if this seems like a stupid question. I was going through this part in Crafting Interpreters, and I came across this side note:
Yes, we need to define a syntax to use for the rules that define our syntax. Should we specify that metasyntax too? What notation do we use for it? It’s languages all the way down!
But this would lead to an infinite regress of sorts, with each meta^n language defined by a meta^(n+1) language. I read on Wikipedia that BNF can be used to describe its own syntax; is that why we don't have this infinite regress in practice?
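For what it's worth, the usual illustration of that Wikipedia claim is a BNF grammar for BNF itself. The sketch below is a simplified textbook-style rendition (rule names are my own), not an official specification:

```
<syntax>     ::= <rule> | <rule> <syntax>
<rule>       ::= "<" <name> ">" "::=" <expression>
<expression> ::= <term> | <term> "|" <expression>
<term>       ::= <factor> | <factor> <term>
<factor>     ::= "<" <name> ">" | <quoted-literal>
```

Every construct used on the right-hand side is itself defined by one of the rules, so no extra meta-level is needed.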
Where can I publish any of those papers?
Hi guys,
I'm looking for resources explaining the inner workings of the following video codecs: H.264, H.265, VP9, AV1, VVC.
I need something more detailed than the articles you can find by googling "H264 technical explanation"; I already understand the concepts of I/P-frames, DCT, transform blocks, etc. (It doesn't help that many of the articles seem copy-pasted or AI-generated, or just cover how much bandwidth each codec saves.)
However, the official specifications for these codecs are really overwhelming (the ITU-T H.264 spec is 844 pages); I'm looking for something in between in terms of technical depth.
Thanks for any replies; resources covering just one of the codecs listed above are fine too.
Hello everyone, I have a question regarding the choice of algorithm for a combinatorial optimization problem I am facing. Sorry for the long post; I want to describe my problem as clearly as possible.
I am trying to solve a combinatorial optimization problem. I don't have the exact number of parameters yet, but the estimate is around 15 to 20. Each parameter has between 2 and 4 valid options (a major chunk, ~50%, will likely have 2 options).

The main difficulty is that the cost evaluation for each candidate solution is very expensive, so I can only afford a total of 100-125 evaluations (I have access to a cluster, so I can run 20-25 evaluations in parallel).

Given the nature of my problem, I don't need to land on the global optimum, i.e. the combination that minimizes my cost function; a result that is a good improvement over the solution I currently have is a win for me. (If I miraculously find the global optimum, that solution is of course favored, even at somewhat higher compute cost.)

I don't want to bias the reader with my opinion, but the current idea is to use a genetic algorithm with a population size of 20-25 run for 4-5 generations, using tournament selection and a mutation rate on the higher end to encourage exploration of the search space. The exact GA parameters are not decided yet; I am quite inexperienced in this field, so is there a principled way to come up with these numbers?
If any folks here have experience with combinatorial optimization problems, I would love to hear your thoughts on using a genetic algorithm for this, or pointers to alternative algorithms suited to the problem described above. I am working with a smaller toy version of my original problem, so I have some freedom to experiment with different algorithm choices and their parameters.
PS: From my understanding, simulated annealing is inherently sequential, so I have ruled it out. Also, this is my first time dealing with a problem of this scale, so any advice is appreciated.
PPS: I can't divulge more details of the problem as they are confidential. Thanks for understanding.
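To make the setup concrete, here is a minimal sketch of the GA described above (population 25, 4 generations, tournament selection, high mutation rate, 125 evaluations total). The parameter counts and the cost function are placeholders, since the real problem is confidential; the two `cost(...)` maps are the parts you would farm out to the cluster.

```python
import random

# Hypothetical setup: 16 parameters, each with 2-4 valid options,
# roughly half of them binary, matching the numbers in the post.
random.seed(0)
NUM_PARAMS = 16
OPTIONS = [random.choice([2, 2, 3, 4]) for _ in range(NUM_PARAMS)]

def cost(solution):
    # Placeholder for the expensive evaluation; here a toy objective
    # that rewards picking the highest option index for each parameter.
    return -sum(solution)

def random_solution():
    return [random.randrange(n) for n in OPTIONS]

def tournament(pop, scores, k=3):
    # Pick k random contenders, keep the one with the lowest cost.
    contenders = random.sample(range(len(pop)), k)
    return pop[min(contenders, key=lambda i: scores[i])]

def mutate(solution, rate=0.3):
    # Higher-than-usual mutation rate to push exploration, per the post.
    return [random.randrange(OPTIONS[i]) if random.random() < rate else g
            for i, g in enumerate(solution)]

def crossover(a, b):
    # Uniform crossover: each parameter inherited from a random parent.
    return [random.choice(pair) for pair in zip(a, b)]

POP_SIZE, GENERATIONS = 25, 4           # 25 * (1 + 4) = 125 evaluations
population = [random_solution() for _ in range(POP_SIZE)]
scores = [cost(s) for s in population]  # parallelize this map on the cluster

for _ in range(GENERATIONS):
    population = [mutate(crossover(tournament(population, scores),
                                   tournament(population, scores)))
                  for _ in range(POP_SIZE)]
    scores = [cost(s) for s in population]

best_cost, best_solution = min(zip(scores, population))
print("best cost:", best_cost)
```

With only ~125 evaluations the budget is tight, so elitism (carrying the best individual into the next generation unevaluated) is a cheap tweak worth considering.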
I've recently found that I really enjoy theoretical computer science, even though my degree is more like an applied mathematics degree. I love working on advanced algorithms, I really enjoy things like complexity theory, and I'm planning to take other theoretical classes soon, like graph theory, advanced algorithms, and maybe even cryptography. I want to focus the rest of my degree on theoretical computer science and either get a CS master's with a focus on theory or a mathematics master's with a focus on discrete maths/computer science. I'm only in my second year, so I really haven't paid attention to the job market and have no idea what kinds of jobs are out there.
Most jobs I hear related to computer science are either:
Software engineer/developer: sounds like a nightmare to me. I actually don't like coding that much. I enjoy the algorithmic problem solving part and coding is just a tool for me to work on problems I enjoy. I know people who work as software engineers and it just sounds like a boring desk job.
Data scientist: I don't mind probability theory, but I don't like statistics (idk if that makes sense lol), and from what I've seen, machine learning doesn't really excite me either.
Jobs in IT, web development etc which all sound kinda tedious to me.
Now a lot of people will probably suggest a PhD and going into academia. Even though I'd consider getting a PhD, I just can't see myself working in academia. It's more of a personality thing, really; I don't see myself fitting into that type of environment. My ideal job is a research position in industry that is heavily theoretical, somewhere between mathematics and computer science. I just don't know if that exists. Do you have any advice? Do any of you work on theoretical computer science outside of academia? I would appreciate any advice, and sorry for the long rant; I'm just kind of lost at the moment.
Hey everyone, I posted here a few weeks ago about the start of my YouTube channel on LLVM and compilers. I just uploaded a new video on compiler system design; I hope you all enjoy it! https://youtu.be/hCaBjH5cV5Q?si=njm0iA0h_vBz0MFO
I’m excited to share our recent work on steering large language models using Concept Activation Vectors (CAVs). This technique allows us to adjust the behavior of LLMs to act like domain experts (like Python or French) and even manipulate their refusal and language-switching capabilities. If you’re into AI interpretability or LLM safety, you might find our experiments and findings intriguing.
📄 Highlights:
We’ve already expanded on the safety concept activation vector (SCAV) idea introduced earlier this year and observed some cool (and strange) phenomena, especially around language and task steering.
💡 Interested in how this works? Check out our full write-up on LessWrong. Would love your thoughts and feedback!
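For readers new to the idea, here is a toy pure-Python sketch of the basic construction behind concept activation vectors: a difference-of-means direction that is added to an activation to steer it. All numbers, dimensions, and the steering coefficient below are made up for illustration; real implementations operate on transformer hidden states, not 3-vectors.

```python
# Toy CAV: mean activation over examples *with* the concept, minus the
# mean over examples *without* it, gives a concept direction; adding a
# scaled copy of that direction to a new activation "steers" it.

concept_acts = [[1.0, 2.0, 0.0], [3.0, 2.0, 0.0]]   # activations w/ concept
baseline_acts = [[0.0, 0.0, 1.0], [0.0, 2.0, 1.0]]  # activations w/o concept

def mean_vector(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

# CAV = mean(concept) - mean(baseline)
cav = [c - b for c, b in zip(mean_vector(concept_acts),
                             mean_vector(baseline_acts))]

def steer(activation, direction, alpha=1.0):
    # Shift an activation along the concept direction by strength alpha.
    return [a + alpha * d for a, d in zip(activation, direction)]

hidden = [0.5, 0.5, 0.5]
print(cav)                          # the learned concept direction
print(steer(hidden, cav, alpha=2.0))
```

The interesting (and strange) phenomena in the write-up come from where in the network this shift is applied and how alpha is chosen.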
Hi there,
I've created a video here where I talk about the t-test, a statistical method used to determine whether there is a significant difference between the means of two groups.
I hope it may be of use to some of you out there. Feedback is more than welcome! :)
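As a quick companion to the video's topic, here is a small stdlib-only sketch of the two-sample t-statistic with pooled variance (the classic Student's version); the sample data is made up for illustration.

```python
import math
from statistics import mean, variance

def two_sample_t(a, b):
    """Student's two-sample t-statistic with pooled variance."""
    n1, n2 = len(a), len(b)
    # Pooled sample variance: each group's variance weighted by its
    # degrees of freedom.
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2  # statistic and degrees of freedom

group_a = [1, 2, 3, 4, 5]
group_b = [3, 4, 5, 6, 7]
t, df = two_sample_t(group_a, group_b)
print(t, df)  # t = -2.0 with 8 degrees of freedom
```

To turn the statistic into a p-value you would look it up in a t-distribution with `df` degrees of freedom (e.g. via `scipy.stats`).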
I'm currently in university studying computer science, and I've found myself thinking a lot about where the field of CS is going to go. The last few decades have seen basically exponential growth in computers and technology, and we're still seeing rapid development of new applications.
I have this irrational worry that I keep coming back to: when, if ever, will we see CS start to plateau? I know this is incredibly short-sighted of me and is because I just don't know enough about the field yet to imagine what comes next.
Which is why I'm asking here, I guess. We're constantly hearing thousands of voices about AI/LLMs and whether they will be the unraveling of software engineering (personally, I don't think it's all doom and gloom, but there are certainly times when the loudest voices get to you). So I'm trying to find areas of computer science that will continue to see effort poured into them, or nascent fields with the potential to grow over the course of my career. I'd appreciate answers beyond AI/ML, because I know that's the hottest new thing right now.
I know I've rambled a bit in the post, so thank you in advance if you've read this far and even more so if you answer!
Hi everyone, I am working on building a knowledge graph, and for that I want to store the data in a database with an Apache 2, BSD 3-Clause, or MIT license. I also want to store some extra metadata with the nodes and edges. Currently I have JanusGraph, Dgraph, and Memgraph in mind. Please suggest which one I should choose; keep in mind that I would like to take this to production as well. Thanks a lot.
Aside from CLRS
Hello everyone, I'm a master's student, soon to become a computer engineer. After a long journey searching for the right project idea for my degree, I knew I wanted to focus on something related to operating systems, low-level programming, or networking. However, I was unsure about the exact direction, especially since I now lean more toward software-oriented work.

Recently, I came across an interesting theme: "Low-Latency Kernel Bypass Framework for High-Performance Networking." I'm considering pursuing this idea, but I have a few concerns. Is it feasible to complete within one year? And would this project be a case of reinventing the wheel, given that some existing tools already perform similar tasks?

If you have better project ideas, please feel free to share them here! Thank you!!
Hello guys, I'm working on a RAG architecture for my CS grad project,
so I want to know which topics or fields ChatGPT and other popular LLMs are particularly bad at, where they can't give accurate answers.
Hi,
I'm learning it from the following public e-book (Principles of Programming Languages, by Smith, Palmer and Grant):
http://pl.cs.jhu.edu/pl/book
But, I'd like to read and learn more from different sources.
Recommendations?
Thanks!
What are the areas where we can do research and try to solve the problems of logistic regression?
To start off, none of my friends who program have ever read a book; they used courses such as DataCamp or codecamp instead. But then I thought: how could a book even come close to something like DataCamp? I mean, DataCamp is so much more hands-on than books, gives really good examples, and has quizzes.
Explanations of containers usually mention that IT teams traditionally hosted virtual machines, and that Docker containers simplified things using cgroups & namespaces. I understand how Docker simplifies the process, but why did people create VMs instead of manually creating namespaces & cgroups for each application?
I'm looking to learn more about systems design and software design: things like event-driven architecture and AWS features like SQS, SNS, Lambdas, Step Functions, etc. There are plenty of books, but I don't know which are actually good, and they're all a bit dry. I'm wondering if there are any alternatives, like games, that would be more interesting while still being informative/useful.
I'm writing this computer science course on abstractions where we start with the question: Are you a bunch of cells, atoms, or a human - or all of the above?
The idea is to show that we use abstractions to manage complex systems. This is possible in math (where we have a line as an abstraction of multiple points and a plane as an abstraction of multiple lines) and the same is the case with computer science.
I was curious whether reality is "aware" of these abstractions or whether it operates only at a very fundamental level. There is this theory that everything, even the real world, is based on computation. So I was just curious: does reality operate on some abstractions, or is that just how we observe it?
I started watching Ben Eater's breadboard computer series, where he builds an 8-bit computer from scratch. When it came to instructions, because the word size is 8 bits, the instruction was divided into 4 bits for the opcode and 4 bits for the operand (an address, for example). For example, LDA 14 loads the value at memory address 14 into register A. I did some research and saw that there are architectures with fixed-size instructions (like early MIPS, I believe), and in those architectures it is not always possible to address the whole memory in one instruction, because you need to leave some space for the opcode and some other reserved bits. In that situation, you would need to load the address into a register in two steps and then reference that register. Did I understand this part correctly?
In more modern architectures, instructions may not be fixed size; in x64 an instruction can be up to 15 bytes long. I'm trying to wrap my head around how this works with the fetch-decode-execute cycle. Coming back to the 8-bit computer, we are able to fetch a whole instruction in one clock cycle because the whole instruction fits in 8 bits, but how does this work in x64, where the word size is 64 bits but an instruction can be much bigger than that?
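The opcode/operand split from the first paragraph can be sketched with a bit of Python bit-twiddling. The layout matches the 8-bit design described above (high nibble opcode, low nibble operand), but the specific opcode value for LDA here is made up:

```python
# In an 8-bit instruction word with a 4-bit opcode and a 4-bit operand,
# only 16 memory addresses (0-15) are directly addressable.

LDA = 0x1  # hypothetical opcode value for "load A from address"

def encode(opcode, operand):
    assert 0 <= opcode < 16 and 0 <= operand < 16
    return (opcode << 4) | operand  # high nibble: opcode, low nibble: operand

def decode(word):
    return word >> 4, word & 0x0F

instr = encode(LDA, 14)   # LDA 14
print(f"{instr:08b}")     # 00011110
print(decode(instr))      # (1, 14)
```

The `assert` makes the limitation concrete: an operand of 16 or more simply doesn't fit, which is exactly why a fixed-size ISA with a larger address space has to build full addresses in a register across multiple instructions.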
These questions seem easy to just Google but I wasn't able to find a satisfying answer.
PLSSSSS