
Computer Science Theory and Application. We share and discuss any content that computer scientists find interesting. People from all walks of life welcome, including hackers, hobbyists, professionals, and academics.

Welcome Computer Science researchers, students, professionals, and enthusiasts!



Self-posts and Q&A threads are welcome, but we prefer high quality posts focused directly on graduate level CS material. We discourage most posts about introductory material, how to study CS, or about careers. For those topics, please consider one of the subreddits in the sidebar instead.

Want to study CS or learn programming?

Read the original free Structure and Interpretation of Computer Programs (or see the online conversion of SICP).

Related subreddits

Other topics are likely better suited for:

Other online communities:

If you are new to Computer Science please read our FAQ before posting. A list of book recommendations from our community for various topics can be found here.


3,032,397 Subscribers


Is it advisable for me to learn C++ as a beginner over Java? (I wanna develop Audio Plugins)

I want to develop my first VST plugin, and the JUCE framework I'd have to use only works with C++. However, a lot of people have suggested that I learn Java first. I'm a beginner at programming and also a professional music producer. Which language do you guys recommend learning first, and why?

12:19 UTC


Adversarial Attacks and Adversarial Training in Reinforcement Learning

11:29 UTC


Are tree-based data structures really that useful?

I'm not a CS major but I've done a little bit of low-level programming which got me into the topic of data structures.

I can think of two main uses for trees:

1- Searching data: Binary search trees can be searched in O(log(N)) on average. However, it seems that hash tables are better for plain lookups in every case (O(1) on average). That's not to mention that hash tables often have a native implementation in most programming languages.

2- Recursively navigating the file system, but this is also possible to achieve without trees.

Another disadvantage of trees is that they are complicated. They're often memory-inefficient, and the typical tree implementation scatters nodes all over the heap, which hurts cache performance.

So it's not clear to me where trees would shine, it often seems that the best thing to do is to avoid using them.

Any inputs?
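One place trees do shine is ordered operations: range queries, nearest-key lookups, and sorted iteration, none of which a hash table supports without scanning every key. A minimal sketch using the JDK's TreeMap (a red-black tree); the event data here is invented for illustration:

```java
import java.util.TreeMap;

public class TreeDemo {
    // A small ordered index: timestamp -> event name.
    static TreeMap<Integer, String> sampleEvents() {
        TreeMap<Integer, String> events = new TreeMap<>();
        events.put(10, "boot");
        events.put(25, "login");
        events.put(40, "crash");
        events.put(55, "reboot");
        return events;
    }

    public static void main(String[] args) {
        TreeMap<Integer, String> events = sampleEvents();

        // Range query: all events between t=20 and t=50, in key order.
        // A hash table cannot answer this without scanning every key.
        System.out.println(events.subMap(20, 50)); // {25=login, 40=crash}

        // Nearest-key queries, also impossible with hashing alone.
        System.out.println(events.floorKey(39));   // 25 (greatest key <= 39)
        System.out.println(events.ceilingKey(41)); // 55 (smallest key >= 41)
    }
}
```

Databases index on B-trees rather than hash tables for exactly this reason: range scans and ordered traversal come for free.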

17:32 UTC


How do I go about creating a simple Version Control System?

I am a second-semester Computer Engineering student, and I intend to create a simple version control system that implements basic functionality: a staging area, committing, branches, and checking out previous commits.

I think this could be a basic/intermediate project for us, given that we have a span of 8-12 weeks for the project. Could anyone help me out with required libraries and useful resources?
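A common starting point is Git's core idea: a content-addressed object store, where an object's ID is the hash of its content. A hedged sketch in plain Java (class and method names are invented for illustration; real Git also hashes a "blob <size>\0" header before the content, which this omits):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

public class TinyVcs {
    // Content addressing: an object's ID is the SHA-1 of its content,
    // so identical content is stored once and the ID doubles as an
    // integrity check. Commits and trees can then be objects that
    // refer to other objects by ID.
    private final Map<String, byte[]> objects = new HashMap<>();

    static String sha1Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e); // SHA-1 is available on every JVM
        }
    }

    // "Stage" some content into the object store; returns its object ID.
    String store(String content) {
        byte[] data = content.getBytes(StandardCharsets.UTF_8);
        String id = sha1Hex(data);
        objects.put(id, data);
        return id;
    }

    String retrieve(String id) {
        return new String(objects.get(id), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        TinyVcs vcs = new TinyVcs();
        String id = vcs.store("hello");
        System.out.println(id);               // aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
        System.out.println(vcs.retrieve(id)); // hello
    }
}
```

From there, a commit is just another stored object listing file IDs plus a parent-commit ID, and a branch is a named pointer to a commit ID.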

16:16 UTC


What is W in Karp's Partition problem reduction to Max-Cut?

I'm currently taking an algorithms class at university, and I'm kind of confused about the reduction of the Partition problem to Max-Cut.


This is what I understand so far:

To reduce an instance of the Partition problem to Max-Cut, we need to construct a graph G

  • For each integer in the Partition problem's set, we create a vertex whose value is that integer
  • We connect each pair of these vertices with an edge (u,v) whose weight is the product of the values of u and v
  • If there is a partition of the integers into 2 subsets with equal sum, then the sum of the weights of the edges in the cut will be at least W

I've tried some partitions and saw that the partitions are equal to the max-cut.

I don't really get why the value of W is 1/4 of the square of the sum of the integers. Could someone please explain it to me?
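One way to see where W comes from (writing T for the sum of all the integers and s for the sum on one side of the cut): every edge crossing the cut joins one vertex from each side, so the cut weight factors as a product of the two side-sums.

```latex
\mathrm{cut}(S) \;=\; \sum_{i \in S}\sum_{j \notin S} a_i a_j
\;=\; \Big(\sum_{i \in S} a_i\Big)\Big(\sum_{j \notin S} a_j\Big)
\;=\; s\,(T - s)
```

The quadratic s(T - s) is maximized at s = T/2, where it equals T^2/4. So the max cut reaches W = T^2/4, one quarter of the square of the sum, exactly when the integers admit an equal-sum partition.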

14:43 UTC


Am I a theorist?

I read book after book (73 related to CS so far, and I've started another one), but I don't feel like applying any of it.

Does anyone know this problem? What would be a way out?

Edit: [finding]: New knowledge stimulates my reward system (The question now would be whether everyone can achieve that).
Analysis paralysis

11:53 UTC


Buchi Automata

For an infinite binary word, call a "segment" each of the alternating maximal contiguous sub-words of identical symbols, 0's or 1's. For instance, the word 00111010110001... has segments 00, 111, 0, 1, 0, 11, 000, 1, .... Consider the following ω-language: A = {w | w has segments of even length only}. Does this Büchi automaton recognise A?
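Since the automaton image itself isn't reproduced here, a small sketch just to make the segment definition concrete (method names are illustrative); it splits a finite prefix into its maximal runs:

```java
import java.util.ArrayList;
import java.util.List;

public class Segments {
    // Split a (prefix of a) binary word into its maximal runs of
    // identical symbols -- the "segments" from the question.
    static List<String> segments(String w) {
        List<String> out = new ArrayList<>();
        int i = 0;
        while (i < w.length()) {
            int j = i;
            // Extend the run while the symbol stays the same.
            while (j < w.length() && w.charAt(j) == w.charAt(i)) j++;
            out.add(w.substring(i, j));
            i = j;
        }
        return out;
    }

    public static void main(String[] args) {
        // The example word from the question.
        System.out.println(segments("00111010110001"));
        // [00, 111, 0, 1, 0, 11, 000, 1]
    }
}
```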


11:10 UTC


Books/resources on computational thinking

Recently I came across interaction combinators, courtesy of the HVM, which has started to make me wonder what the hell computation even means. You create nodes with certain edges and then annihilate them until there's nothing left to be done and bang, your computation is complete. Like what? Turing machines have hurt my brain real good. Any resources to delve deeper into this aspect of compsci? Sorry, but I don't even know what category it falls under. Searching "computational thinking" shows me ML/AI stuff, which is not what I want.


10:29 UTC


Help me understand the proof sketch of this proposition from the book Algorithms, 4th Edition by Robert Sedgewick and Kevin Wayne


In the resizing array implementation of Stack (given below), the average number of array accesses for any sequence of operations starting from an empty data structure is constant in the worst case.

Proof sketch:

For each push() that causes the array to grow (say from size N to size 2N), consider the N/2 - 1 push() operations that most recently caused the stack size to grow to k, for k from N/2 + 2 to N. Averaging the 4N array accesses to grow the array with N/2 array accesses (one for each push), we get an average cost of 9 array accesses per operation.
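The arithmetic behind the final "9", spelled out (taking the book's count of 4N array accesses per resize as given):

```latex
\frac{\overbrace{4N}^{\text{resize}} \;+\; \overbrace{(N/2)\cdot 1}^{\text{one access per push}}}{N/2}
\;=\; 8 + 1 \;=\; 9
```

The resize cost 4N is spread over the N/2 pushes since the previous resize, contributing 8 accesses each on average, plus the 1 ordinary access each push makes itself.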

Resizing array implementation of Stack:

import java.util.Iterator;

public class ResizingArrayStack<Item> implements Iterable<Item> {
    private Item[] a = (Item[]) new Object[1]; // stack items
    private int N = 0;                         // number of items

    public boolean isEmpty() { return N == 0; }

    public int size() { return N; }

    private void resize(int max) {
        // Move stack to a new array of size max.
        Item[] temp = (Item[]) new Object[max];
        for (int i = 0; i < N; i++)
            temp[i] = a[i];
        a = temp;
    }

    public void push(Item item) {
        // Add item to top of stack.
        if (N == a.length) resize(2*a.length);
        a[N++] = item;
    }

    public Item pop() {
        // Remove item from top of stack.
        Item item = a[--N];
        a[N] = null; // avoid loitering
        if (N > 0 && N == a.length/4) resize(a.length/2);
        return item;
    }

    public Iterator<Item> iterator() { return new ReverseArrayIterator(); }

    private class ReverseArrayIterator implements Iterator<Item> {
        // Support LIFO iteration.
        private int i = N;
        public boolean hasNext() { return i > 0; }
        public Item next() { return a[--i]; }
        public void remove() { }
    }
}

I can't understand the significance of the bolded part in the proof sketch.

17:11 UTC


Havoc c2 :--- Failed to execute assembly or initialize the clr

10:24 UTC


Thoughts on the new language Bend?

Just saw the Fireship video on the Bend programming language:

and the github repo:

Where would we use it, or is it just another language that's going to be forgotten after a year?

21:43 UTC


Is it proved that NP-complete problems can/cannot be solved in polynomial space?

18:58 UTC


A Visual Guide to the K-Means Clustering Algorithm. 👥

TL;DR: K-Means clustering groups data points into clusters based on their similarities, making it useful for applications like customer segmentation, image segmentation, and document clustering. By minimizing the variance within each cluster, K-Means helps reveal hidden patterns and relationships in the data.

K-Means Clustering Visual Guide
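For readers who prefer code to pictures, a minimal 1-D sketch of the assign/update loop described above (Lloyd's algorithm). Taking the first k points as initial centroids is an assumption made here for determinism; real implementations use random or k-means++ seeding:

```java
import java.util.Arrays;

public class KMeans1D {
    // Minimal 1-D K-Means: assign each point to its nearest centroid,
    // recompute each centroid as its cluster's mean, and repeat.
    static double[] kmeans(double[] points, int k, int iters) {
        double[] centroids = Arrays.copyOf(points, k); // first-k init
        int[] label = new int[points.length];
        for (int it = 0; it < iters; it++) {
            // Assignment step: nearest centroid for each point.
            for (int p = 0; p < points.length; p++) {
                int best = 0;
                for (int c = 1; c < k; c++)
                    if (Math.abs(points[p] - centroids[c])
                            < Math.abs(points[p] - centroids[best])) best = c;
                label[p] = best;
            }
            // Update step: move each centroid to the mean of its cluster.
            double[] sum = new double[k];
            int[] count = new int[k];
            for (int p = 0; p < points.length; p++) {
                sum[label[p]] += points[p];
                count[label[p]]++;
            }
            for (int c = 0; c < k; c++)
                if (count[c] > 0) centroids[c] = sum[c] / count[c];
        }
        Arrays.sort(centroids); // canonical order for easy comparison
        return centroids;
    }

    public static void main(String[] args) {
        double[] data = {1, 2, 3, 10, 11, 12};
        System.out.println(Arrays.toString(kmeans(data, 2, 10)));
        // [2.0, 11.0]
    }
}
```

Each iteration can only lower the within-cluster variance, which is why the loop converges to the two obvious groups here.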


14:06 UTC


floating point arithmetic "as fast or faster" than integer arithmetic

TIL the Lua interpreter uses doubles as integers. The documentation claims: "Moreover, most modern CPUs do floating-point arithmetic as fast as (or even faster than) integer arithmetic."

I am a bit dubious. I would think modern CPUs have more ALUs than FP units, so integers should win in any case once we saturate?

Also I would expect there's logic before you come to the same-exponent special case where it becomes integer math, and there's logic after that (do we need to change exponent?).

So, is their claim true?

Edit: To clarify: this is NOT about Lua, or whether it matters in that context. This is about the specific claim about "modern CPUs".
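Independent of the speed question, the correctness half of Lua's doubles-as-integers choice is easy to check: an IEEE-754 double represents every integer exactly up to 2^53, after which gaps of 2 (then 4, 8, ...) appear. A quick sketch:

```java
public class DoubleInts {
    public static void main(String[] args) {
        // Classic Lua (<= 5.2) stores all numbers as IEEE-754 doubles,
        // which represent integers exactly only up to 2^53.
        double max = Math.pow(2, 53);           // 9007199254740992
        System.out.println(max == max + 1);     // true: 2^53 + 1 rounds back to 2^53
        System.out.println(max - 1 == max - 2); // false: still exact below 2^53
    }
}
```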

10:22 UTC


temp-cleaner: an app to automatically clean up temporary files and ignored items from git repositories in your system by analyzing .gitignore files

temp-cleaner is a tiny C++20 app designed to automatically clean up temporary files and ignored items from Git repositories on your system by analyzing .gitignore files. You can pass two arguments to its executable: the first is the directory through which the search is performed (including all its subdirectories), while the second is the name of a configuration file containing paths to be ignored during the search.

The app also supports relative paths with * and ** written in .gitignore files, implemented via regex patterns.
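To illustrate the glob-to-regex idea (a simplified, hypothetical sketch, not the app's actual code; real .gitignore matching has more rules, such as negation, anchoring, and trailing slashes): `**` crosses directory separators while `*` does not.

```java
import java.util.regex.Pattern;

public class GlobToRegex {
    // Simplified gitignore-style glob -> regex translation.
    static Pattern compile(String glob) {
        StringBuilder re = new StringBuilder();
        for (int i = 0; i < glob.length(); i++) {
            char c = glob.charAt(i);
            if (c == '*') {
                if (i + 1 < glob.length() && glob.charAt(i + 1) == '*') {
                    re.append(".*");   // ** matches across '/'
                    i++;
                } else {
                    re.append("[^/]*"); // * stops at '/'
                }
            } else if ("\\.[]{}()+-^$|?".indexOf(c) >= 0) {
                re.append('\\').append(c); // escape regex metacharacters
            } else {
                re.append(c);
            }
        }
        return Pattern.compile(re.toString());
    }

    public static void main(String[] args) {
        System.out.println(compile("*.tmp").matcher("x.tmp").matches());          // true
        System.out.println(compile("*.tmp").matcher("a/x.tmp").matches());        // false
        System.out.println(compile("build/**").matcher("build/a/b.o").matches()); // true
    }
}
```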

Github repository: https://github.com/JustWhit3/temp-cleaner

08:42 UTC


Good books/courses on Computer Science theory

Hey, hope you guys are fine. I am gonna be on vacation in a few days, and after that my college (or, as you Americans call it, "high school") is going to start. So I have decided to self-learn some CS theory, as I want to get a head start. Also because I'm a huge nerd 🤓. So do you guys have any recommendations for courses and books on CS theory? I just want the resources to not feel like I'm reading Greek. I also want to be a game developer, so what theory should I learn?

05:06 UTC


If a zeno machine solves the halting problem for a turing machine, what machine solves a zeno machine's halting problem?

And what if you feed the machine, from the class of machines that solves a ZM's halting problem, its own description?

Will it also be unsolvable? What does this tell us about the halting problem? I would think it shows that HP says something about the constraints on what a machine can compute, given the way we defined its computing. Is this correct?

00:39 UTC


Why is consumer software more often closed source?

It seems counterintuitive to me. I would expect that open source projects would be dedicating more resources to creating consumer products which actually benefit users while enterprise software would be closed source since companies often require more tailored solutions.

What I personally see is open source operating systems like Linux used mostly in embedded systems, backend infrastructure, etc. Infrastructure software like web frameworks, build tools, databases, etc are all used in the backend by developers for companies. Even with VPNs, consumers use closed source software like NordVPN or ExpressVPN while enterprise solutions often involve open source software like WireGuard or OpenVPN.

Why is the software domain like this? It seems counterintuitive to me, even when taking financial incentives into account. Am I blinded by some kind of survivorship bias?

23:21 UTC


Did the definition of AI change?

Hello. I know this might be an odd question.

But I first learned about the concept of AI around 2016 (when I was 12) and relearned it in 2019 ish. I'm a Comp Sci major right now and only have brushed the very basics of AI as it is not within my concentration.

The first few times, AI was defined to me as something similar to just the simulation of intelligence. So this included essentially anything that used neural networks and algorithms. That is very broad and of course does not literally mean it's going to be on the level of human intelligence. Sometimes the programs are very simplistic, made just to do simple things like play chess. When it was redefined to me in class in 2019, it was made to seem even broader and included things like video game enemies that were not being directly controlled by a person.

This year I've been seeing a lot of threads, videos, and forums talk about AI and argue that none of these things fall under the AI definition and that we haven't truly made AI yet. I am also in a data science class that very basically overviews "AI" and states that no neural network falls under this definition. When I dig into where they are coming from, they usually argue something like "Well, neural networks don't actually know what these words mean or what they are doing". And I'm like, of course, but AI is a simulation of intelligence, not literal intelligence. Coming from when I was younger, taking lower-level comp sci classes and watching MIT OpenCourseWare, this definition is completely different. To me it formally ranged from simple predictive programs with tiny data sets to something as advanced as self-driving cars.

I am having a hard time adjusting because this new one seems almost sci-fi and completely subjective, not something that even has a purpose in having a meaning, because it "doesn't exist yet". At least the old AI definition I knew had some meaning that mattered in society: it said that something was automated and functioned based on a well-developed algorithm (usually neural networks). This new AI meaning (literal human intelligence) would rely on a society that had advanced machines that completely mimicked human brains. That is obviously completely fantastical right now, and thus doesn't actually have a meaning as a word any more than Skynet does. Am I missing something?

Edit: Going by the comments, it's pretty clear to me now that this is philosophical with no hard definition.

I was getting really frustrated because every time it's presented to me in academia, it's as a black-and-white definition, leaving no room for philosophical understanding, and I've gotten points wrong on tests for calling things AI or not AI. That prevented me from understanding what people are talking about when they talk about it. It's silly to even put this kind of question on a test as a true-or-false question next to hard math, with no nuance whatsoever. I would not have been able to guess, based on how it's been presented to me, that it is not a strict technical term at all.

06:43 UTC


What projects should I do to really hammer in OS and computer architecture knowledge?

Practically all of what I know about both of these topics is textbook stuff, all theoretical. But putting it into practice sounds really complicated, as a single OS involves many aspects, and so does general computer architecture. Any project, or list of projects that get increasingly more complex, to learn both of those things?

02:35 UTC


Singular Value Decomposition (SVD) Explained

Hi there,

I've created a video here where I explain how the singular value decomposition (SVD) works and its applications in machine learning.

I hope it may be of use to some of you out there. Feedback is more than welcome! :)

19:13 UTC


Computer sci help

Is there anyone in here who can help me understand Automata, Languages and Computation ASAP? Please.

00:12 UTC


comp sci themed grad party games

idk if this is the right place to ask but does anyone have any ideas for comp sci themed grad party games? my older brother just graduated with his bachelor’s and his surprise grad party is this coming sunday so my parents put me in charge of planning and i’m blanking on what kinds of games to plan.

if anyone has any ideas even vague ones please comment!! these were some of the rules my parents gave me:

  • cash prize
  • not too time consuming or confusing
  • easy set up/can buy on amazon
  • try to add alcohol in a way
  • brain teasers/riddles esque
  • simple and fun but on theme

also if anyone was wondering, everyone that would be playing the games are college age so like 18-27 ish

21:50 UTC


Econet LAN Party Mega Post

Less than 1 week to go until this year's Econet LAN party! Bring your machine along and plug in. Last year we got 57 machines and several remote connections - how many can we do this year?

Book here.

If you have any BBC or Archimedes machines with an Econet interface, please bring them along. We’d be very interested to see any other machines too, so if you have a rare System rack or an Atom, you’ll definitely attract our attention.

Can't be there in-person? Follow this thread for more information on joining our network remotely!

We're also hosting an auction on the day of Econet/Acorn/BBC-related items. More info here.

As part of the event, we’re going to have some short talks with an Econet theme, exploring the past and present of Econet, as well as some TNMOC exhibits.


Doors open 9:30am on both days. Clear up by 5pm on Sunday. The room will be locked overnight, so your equipment will be secure. We will provide chairs and tables, a nearby mains socket and an Econet point(s). Please bring your own Econet cable and mains leads. To avoid confusion, we recommend you label your kit.


Admission is £15/day or £25 for both days. The museum will be open to normal visitors too, and you are welcome to look around. Lunch is included in the ticket price.

Getting there

Sat-Nav postcode is MK3 6DS. Parking is available very close to the room. There is an electric car charging point on site, although this is a short walk from TNMOC’s building. Bletchley train station is a 5-10 minute walk away with two direct trains per hour to London and Birmingham.

We'd love to see you there! 🖥️

16:28 UTC


Modulo 255 vs 256 checksum in the Fletcher's checksum

Fletcher's checksum is a checksum algorithm that computes a checksum over a block of data using two sums: one is the simple sum of the data (modular arithmetic), and the other is the sum of the running totals of the first sum.

Assuming the data is processed in 1-byte words, where d_i is the i-th byte:

s1 = (s1 + di) mod 255

s2 = (s2 + s1) mod 255


Is there any particular reason 255 is chosen instead of 256? Or, if choosing 256, why not omit the modulo component altogether and just let the bytes overflow, taking the final result as the checksum? Obviously, using 256 would be computationally faster, but ChatGPT says that 255 provides a better distribution of checksum values for an arbitrary block of data. I am having trouble understanding that last part, however, or finding relevant theory.
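For reference, a minimal Fletcher-16 sketch with the mod-255 sums. The intuition usually given for 255: reducing modulo 2^8 - 1 is ones'-complement (end-around-carry) arithmetic, so the carry out of the top bit folds back into the sum, whereas mod 256 silently discards every carry, letting errors in high-order bits cancel undetected.

```java
public class Fletcher16 {
    // Fletcher-16: two running sums, both taken modulo 255.
    static int fletcher16(byte[] data) {
        int s1 = 0, s2 = 0;
        for (byte b : data) {
            s1 = (s1 + (b & 0xFF)) % 255; // simple sum of the bytes
            s2 = (s2 + s1) % 255;         // sum of the running totals
        }
        return (s2 << 8) | s1;
    }

    public static void main(String[] args) {
        // Standard test vector: "abcde" -> 0xC8F0.
        System.out.println(Integer.toHexString(fletcher16("abcde".getBytes())));
        // c8f0
    }
}
```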

15:04 UTC


Dear CS theorists, which of the following complexity books would you prefer and why: Arora-Barak, Goldreich, or Moore-Mertens?

Dear CS theorists,

I am interested in doing research in combinatorics and TCS for my PhD, especially in the fields of extremal combinatorics and algorithms. I am about to take a course on computational complexity next semester and the professor said that he probably would follow Arora-Barak.

I have one or two TCS friends, and they told me that they prefer Goldreich to Arora-Barak, which they say contains some errors. Also, from the table of contents, it seems that Moore-Mertens covers some material from physics that is related to TCS.

So I was wondering: for people here who have experience in TCS, which of the three books would you pick, and why?

Arora-Barak: Computational Complexity: A Modern Approach

Goldreich: Computational Complexity: A Conceptual Perspective

Moore-Mertens: The Nature of Computation

Thank you very much!

12:43 UTC
