/r/compsci
Computer Science Theory and Application. We share and discuss any content that computer scientists find interesting. People from all walks of life are welcome, including hackers, hobbyists, professionals, and academics.
Welcome, Computer Science researchers, students, professionals, and enthusiasts!
Self-posts and Q&A threads are welcome, but we prefer high quality posts focused directly on graduate level CS material. We discourage most posts about introductory material, how to study CS, or about careers. For those topics, please consider one of the subreddits in the sidebar instead.
Read the original free Structure and Interpretation of Computer Programs (or see the online conversion of SICP).
If you are new to Computer Science please read our FAQ before posting. A list of book recommendations from our community for various topics can be found here.
Hi there,
I've created a video here where I talk about L1 and L2 regularization, two techniques that help prevent overfitting, and explore the differences between them.
I hope it may be of use to some of you out there. Feedback is more than welcome! :)
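For anyone who wants the one-line summary before watching, these are the standard penalized objectives being compared (textbook definitions, not taken from the video), where $L(w)$ is the unregularized loss and $\lambda$ controls the penalty strength:
\[
  J_{\mathrm{L1}}(w) = L(w) + \lambda \sum_i |w_i|, \qquad
  J_{\mathrm{L2}}(w) = L(w) + \lambda \sum_i w_i^2 .
\]
The L1 penalty tends to drive individual weights exactly to zero (sparse models), while the L2 penalty shrinks all weights smoothly toward zero.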
I'm self-studying the book, and it has a huge number of exercises. I don't know which ones are important, so that I can focus on solving those.
Hi, I'm reading Design Patterns: Elements of Reusable Object-Oriented Software. Chapter 2 is a case study on designing a document editor, and it has been incredibly illuminating. I was wondering if there exists a similar source of design case studies for other software, such as a media player, an image editor, or something like MS Paint. Thank you.
Maybe this is a dumb question, but is it generally looked down upon to use AI editors (such as Cursor) during an internship? Is there an unwritten rule not to?
Francis Bacon saw human history as one long, often repetitive cycle of waxing and waning intelligence. In his analysis of history, mankind's knowledge didn't grow smoothly over time but rather moved through grand revolutions: golden ages where the mind flourished, followed by dark, stagnant periods that erased all progress. The Greeks, the Romans, and then the Renaissance each had their time in the sun, but each was also followed by an era where knowledge hit a plateau or even regressed. Think about the destruction of the Library of Alexandria and the purge of intellectuals. Will AI lead to another decline? https://onepercentrule.substack.com/p/ai-and-overcoming-the-threat-of-intelligence
Hi, I've been really loving all CoreDumped videos, especially as someone getting into programming without a college degree.
That channel has been invaluable to me, and I want more videos like this.
Does anyone else have similar suggestions for computer science channels?
So I'm working on a board and trying to make a reaction speed test.
The board I'm working with has an RTC (real-time clock); from that I can use seconds, minutes, and hours.
On the other hand, the board has a free-running 16-bit clock at 1 MHz.
My current approach is to count clock cycles. I do that by comparing the current value of the free-running clock with its value when first read: if they are equal, a cycle has completed, so CountCycle++; if the current value is less, an overflow occurred and the clock wrapped back to 0, so CountCycle++ as well.
Then I convert CountCycle to ms by dividing the number of clock cycles by 45 (rough math; I was fried at this point).
I was debugging the code and the answers (in ms) were not realistic at all. Is the math wrong, or is my way of counting cycles wrong? Personally I feel it is the latter and that I am skipping clock cycles while checking whether the button is pressed. If so, what suggestions do you have?
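One common pattern, sketched below in C under the assumption of a hypothetical read_timer16() that returns the current value of the free-running 16-bit counter (substitute your board's actual register access): take a timestamp when the stimulus appears, take another when the button is pressed, and let unsigned 16-bit subtraction handle the wraparound. Since the counter ticks at 1 MHz, one tick is 1 µs, so the conversion to milliseconds is ticks / 1000 rather than / 45.

    #include <stdint.h>

    /* Hypothetical register read; replace with your board's timer register access. */
    extern uint16_t read_timer16(void);

    /* Elapsed microseconds since `start`, valid for intervals under 65.536 ms.
       Unsigned 16-bit subtraction wraps modulo 2^16, so a single counter
       overflow between the two reads is handled automatically. */
    uint16_t elapsed_us(uint16_t start)
    {
        return (uint16_t)(read_timer16() - start);   /* 1 MHz => 1 tick = 1 µs */
    }

    uint16_t elapsed_ms(uint16_t start)
    {
        return elapsed_us(start) / 1000;             /* 1000 ticks per millisecond */
    }

Since human reaction times are usually well above 65 ms, you'd either count overflows in a timer interrupt to extend the range or combine this with the RTC's seconds; and yes, a polling loop can miss counter wraps whenever other work (like checking the button) delays the comparison.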
Feel free to ask any question I’ll do my best to answer.
I was wondering what makes it so hard for Windows to implement fork. I read somewhere it's because Windows is more thread-based than process-based.
But what makes it harder to implement copy-on-write and allow the system to support fork?
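For reference, this is the POSIX behavior being discussed: fork() duplicates the entire calling process (address space, file descriptors, and so on), and modern Unix kernels make it cheap with copy-on-write pages. A minimal sketch of the semantics that Windows' CreateProcess/thread-centric model doesn't expose directly through the public Win32 API:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();              /* duplicate the calling process */
        if (pid == 0) {
            /* child: starts with a copy-on-write copy of the parent's memory */
            printf("child sees its own copy of everything\n");
        } else if (pid > 0) {
            printf("parent: child pid is %d\n", (int)pid);
        } else {
            perror("fork");              /* fork failed */
            return 1;
        }
        return 0;
    }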
I've been searching for similar topics and found this one, but the reviews at GoodReads discouraged me. What do you think? There is another one called Distributed Systems from Maarten van Steen, which has better reviews.
These models, no matter how many parameters they boast, can stumble when faced with nuance. They can’t reason beyond the boundaries of statistical correlations. Can they genuinely understand? Can they infer from first principles? When tasked with generating a text, a picture, or an insight, are they merely performing a magic trick, or, as it appears, approximating the complex nuance of human-like creativity?
https://onepercentrule.substack.com/p/the-birth-adolescence-and-now-awkward
I have to prove that the maximum number of mincuts in a graph is nC2. Now, I know Karger's algorithm has success probability at least 1/nC2. And P[success of Karger's algorithm] = P[output cut is a mincut] = (#mincuts)/(#all cuts). How do we then get that bound?
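In case it helps, here is the usual way to finish the argument (a sketch of the standard reasoning, not necessarily the one your course intends). For each fixed mincut $C_i$, Karger's algorithm outputs exactly $C_i$ with probability at least $1/\binom{n}{2}$. The events $\{\text{output} = C_i\}$ for distinct mincuts are mutually exclusive, so their probabilities sum to at most 1:
\[
  1 \;\ge\; \sum_{i=1}^{k} \Pr[\text{output} = C_i] \;\ge\; \frac{k}{\binom{n}{2}}
  \quad\Longrightarrow\quad k \le \binom{n}{2},
\]
where $k$ is the number of distinct mincuts. Note that the step $\Pr[\text{output is a mincut}] = \#\text{mincuts}/\#\text{all cuts}$ isn't needed and doesn't hold in general, since Karger's output distribution is not uniform over cuts.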
Hey Reddit!
I have been working on this project for a long time (almost a year now).
I am 16 years old, and I built this as a project for my college application (I'm looking to pursue CS).
It is called Tidal, and it is my own programming language written in Rust.
https://tidal.pranavv.site <= You can find everything on this page, including the Github Repo and Documentation, and Downloads.
It is a simple programming language, with a syntax that I like to call - "Javathon" 😅; it resembles a mix between JavaScript and Python.
Please do check it out, and let me know what you think!
(Also, this is not an ad; I want to hear your criticism of this project. One more thing: if you don't mind, please star the GitHub repo, it will help me with my college application! Thanks a lot! 💖)
Kinda noobish, I know, but most of the stuff I've done has been little utility scripts that execute once and close. Obviously, most programs (Chrome, explorer.exe, Word, GarageBand, LibreOffice, etc.) keep running until you tell them to close. What are some different approaches to making this happen? I've seen a couple of patterns:
Example 1:
int main() {
    while (true) {
        doStuff();
        sleep(amount);
    }
    return 0;
}
Example 2:
int main() {
    while (enterLoop()) {
        doStuff();
    }
    return 0;
}
Are these essentially the only two options for making a program persistent, or are there other patterns too? As I understand it, these are both "event loops". However, by running in a loop like this, the program essentially relies on polling for events rather than directly reacting to them. Is there a way to be event-driven without relying on polling (i.e., have events pushed to the program)?
I'm assuming a single-threaded program, as I'm trying to just build up my understanding of programming patterns/designs from the ground up (I know that in the past, they relied on emulating multithreaded behavior with a single thread).
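Yes: the usual alternative, still single-threaded, is to block inside the OS until an event arrives instead of polling on a timer. Below is a sketch using POSIX select() with standard input as the only event source (GUI frameworks do the same thing conceptually with their message queues, e.g. a blocking GetMessage() loop on Windows):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/select.h>

    int main(void)
    {
        for (;;) {
            fd_set readable;
            FD_ZERO(&readable);
            FD_SET(STDIN_FILENO, &readable);

            /* Block until stdin has data: the process sleeps in the kernel,
               consuming no CPU, unlike a poll-and-sleep loop. */
            if (select(STDIN_FILENO + 1, &readable, NULL, NULL, NULL) < 0) {
                perror("select");
                return 1;
            }

            char buf[256];
            ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
            if (n <= 0)
                break;                           /* EOF or error: treat as the "quit" event */
            fwrite(buf, 1, (size_t)n, stdout);   /* handle the event */
        }
        return 0;
    }

The program uses no CPU while it waits; epoll, kqueue, and IOCP are the scaled-up versions of the same idea.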
(Intro) No clue why this started, but I've seen a lot of overhype around AI, and YouTubers have started making videos about how CS is now a dead-end choice for a career. (I don't think so, since there is a lot happening behind the scenes of any program/AI/automation.)
It seems programming and computers overall have been going in this direction since they were built: handling more and more complex tasks with more and more ease at the surface level, making things more "human" and logical to operate.
(Skip to here for main idea)
(Think about how alien ships are often portrayed as very basic and empty inside when it comes to controls, even though the ship itself can defy physics and do crazy things. They're usually controlled by very direct, instinctual controls paired with some sort of automation system that the crew can communicate with or feed information into, simple enough that even a kid would understand it. That's because once you reach such a high level of technology, there is too much to keep track of (similar to how we've moved past writing in binary or machine code because there is too much to keep track of), so we seal those layers off, make them completely break-proof in terms of software and hardware, and let the pilots, who are often also the engineers, monitor what they need through a super simple human/alien interface, changing large or small aspects of the complex multilayered system with only a few touches of a button.
This is kind of similar to how secure and complex iPhones were when they came out, and how we could do a lot that other phones couldn't, simply because Apple created a UI that anyone could use and gave people access to a bunch of otherwise complex things at the push of a button. Then engineers turned it into an art form through jailbreaking and modding those closed, complex systems, giving regular people customization that Apple didn't originally offer.
I think the same will happen with all of comp sci: we will have super complex platforms and programs that can be designed and produced by anyone, not just companies like Apple, but whose internals will be somewhat too complex for most people to understand. There will be engineers who can go in and edit, monitor, and modify those things, and those people will be the new computer scientists, while people who build programs on top of the already-available advanced platforms will be more like companies sketching ideas on whiteboards, since anyone can do it.)
What are your thoughts?
Hey All,
I recently started my Master of AI program, and something caught my attention while planning my first semester: there’s a core option course called Introduction to Quantum Computing. At first, it sounded pretty interesting, but then I started wondering if studying this course is even worth it without access to an actual quantum computer.
I’ll be honest—I don’t fully understand quantum computing. The idea of qubits being 1 and 0 at the same time feels like Schrödinger's cat to me (both dead and alive). It’s fascinating, but also feels super abstract and disconnected from practical applications unless you’re in a very niche field.
Since I’m aiming to specialize in AI, and quantum computing doesn’t seem directly relevant to what I want to do, I’m leaning toward skipping this course. But before I finalize my choice, I’m curious:
Is studying quantum computing actually worth it if you don’t have access to a quantum computer? Or is it just something to file under "cool theoretical knowledge"?
Would love to hear your thoughts, especially if you’ve taken a similar course or work in this area!
Hi, everyone!
I'm a newbie currently learning data structures and algorithms in C, but my next step would be Network Programming.
I found a used copy of Tanenbaum's Computer Networks (4th edition) and it's really cheap (8€). But it seems pretty old to me (2003), so I'm curious how relevant it is today and whether I'll miss much if I buy it instead of the 5th edition.
Thanks in advance!
He believes that if used correctly, classical systems can be used to model complex systems, including quantum systems. This is because natural phenomena tend to have structures that can be learned by classical machine learning systems. He believes that this method can be used to search possibilities efficiently, potentially getting around some of the inefficiencies of traditional methods.
He acknowledges that this is a controversial take, but he has spoken to top quantum computer scientists about it, including Professor Zinger and David Deutsch. He believes that this is a promising area of research and that classical systems may be able to model a lot more complex systems than we previously thought. https://www.youtube.com/watch?v=nQKmVhLIGcs
TYNET 2.0 is here to empower female innovators across the globe. Organized by the RAIT ACM-W Student Chapter, this 24-hour international hackathon is a unique platform to tackle real-world challenges, showcase your coding skills, and drive positive change in tech.
Exclusively for Women: A supportive environment to empower female talent in computing.
Innovative Domains: Work on AI/ML, FinTech, Healthcare, Education, Environment, and Social Good.
Exciting Rounds: Compete online in Round 1, and the top 15 teams advance to the on-site hackathon at RAIT!
Team Size: 2 to 4 participants per team.
Round 1 (Online): PPT Submission (Nov 21 – Dec 10, 2024).
Round 2 (Offline): Hackathon Kickoff (Jan 10 – 11, 2025).
Women aged 16+ from any branch or year are welcome!
#Hackathon #WomenInTech #TYNET2024 #Empowerment #Innovation
We have a constant upper bound 'b' on the sum of 'n' positive arbitrary-size input integers, on a system with an 'm'-bit word size (usually m = 32 bits per word).
To represent 'b', we need to store it across w = ceil(log_{2^m}(b)) words, i.e. the number of m-bit words needed to hold all the bits of b (the log of b to the base 2^m, rounded up to the nearest whole number).
Then each positive arbitrary-size input integer can be represented with 'w' words, and because 'w' is constant (it depends only on the constant 'b'), this summation has runtime complexity
O(n * w) = O(n)
Quick example:
m = 32
b = 11692013098647223345629478661730264157247460343808
⇒ w = ceil(log_2^32(11692013098647223345629478661730264157247460343808)) = 6
sum implementation pseudocode:
input = [the 'n' positive input integers, each represented with 6 words, word 1 = least significant]
sum = allocate 6 words, initialized to 0
for each value in input:
    for i from 1 to 6:
        word_i = i'th word of value
        add word_i to the i'th word of sum
        // carry the overflow bit into the (i+1)'th word of sum as needed
return sum
end
sum runtime complexity: O(n * 6) = O(n)
prove me wrong
edit: positive integers, no negatives, thanks u/neilmoore
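For concreteness, here is a minimal C sketch of the fixed-width accumulation described above, under the same assumptions (m = 32, w = 6, words stored least-significant first); the function and macro names are mine, not from the post:

    #include <stdint.h>

    #define W 6   /* words per number, fixed by the constant bound b */

    /* sum += value, both stored as W 32-bit words, least-significant word first.
       Each input costs O(W) = O(1) word additions, so summing n inputs is O(n). */
    void add_fixed_width(uint32_t sum[W], const uint32_t value[W])
    {
        uint64_t carry = 0;
        for (int i = 0; i < W; i++) {
            uint64_t t = (uint64_t)sum[i] + value[i] + carry;
            sum[i] = (uint32_t)t;    /* low 32 bits stay in this word */
            carry  = t >> 32;        /* overflow bit moves into the next word */
        }
        /* a carry out of the top word cannot occur while the running sum stays <= b */
    }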
PKE (Precision Knowledge Editing) is an open-source method to improve the safety of LLMs by reducing toxic content generation without impacting their general performance. It works by identifying "toxic hotspots" in the model using neuron weight tracking and activation pathway tracing, and modifying them through a custom loss function.
If you're curious about the methodology and results, there's a published paper detailing the approach and experimental findings. It includes comparisons with existing techniques like Detoxifying Instance Neuron Modification (DINM) and showcases PKE's significant improvements in reducing the Attack Success Rate (ASR).
The GitHub repo features a Jupyter Notebook that provides a hands-on demo of applying PKE to models like Meta-Llama-3-8B-Instruct: https://github.com/HydroXai/Enhancing-Safety-in-Large-Language-Models
If you're interested in AI safety, I'd really appreciate your thoughts and suggestions. Are there similar methods out there, and how could this method be improved and used at scale?
I was tasked with discussing reflexive closure in relation to computer science. In a discrete mathematics context, the reflexive closure of a relation is basically the smallest relation containing it in which every element is related to itself. But I just can't find any explanation of its uses in the real world and its applications in computer science. If you could explain, or drop an article or link, that would be a big help. Thank you.
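One concrete place it shows up (a sketch with names of my own choosing, not from any particular source): relations are often stored as boolean adjacency matrices, e.g. reachability between states or dependencies between modules, and taking the reflexive closure just means forcing the diagonal to true, i.e. making every element related to itself. That is the first step of reflexive-transitive closure computations (as in Warshall's algorithm), which is how tools answer "can X reach Y?" questions.

    #include <stdbool.h>

    #define N 4   /* number of elements in the underlying set */

    /* Reflexive closure of a relation stored as an N x N boolean matrix:
       keep every existing pair and additionally relate each element to itself. */
    void reflexive_closure(bool rel[N][N])
    {
        for (int i = 0; i < N; i++)
            rel[i][i] = true;   /* add the pair (i, i) */
    }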
I am trying to understand different language models. What is the primary difference between Claude and ChatGPT? When would you use one model over the other?