/r/computerscience
The hot spot for CS on reddit.
Welcome to /r/ComputerScience!
We're glad you're here.
This subreddit is dedicated to discussion of Computer Science topics including algorithms, computation, theory of languages, theory of programming, some software engineering, AI, cryptography, information theory, and computer architecture.
This is the first time I'm using ICs, and I'm trying to make a D-latch, but for some reason the LEDs seem to flicker every time I start the simulation. I already checked the schematic and I couldn't find any circular dependency. What's wrong with my D-latch?
Picture:
https://www.reddit.com/r/PictureReference/comments/1ihenwa/nand_latch/
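Without seeing the schematic it's hard to say for sure, but flickering in a simulator usually means the feedback loop never settles: the simulator keeps re-evaluating the gates and the outputs keep toggling. A minimal sketch of what "settling" means for a NAND-based D latch (the wiring here is the textbook one, not necessarily the poster's):

```python
# Minimal sketch: repeatedly evaluate a NAND-based D latch until its
# outputs stop changing. If they never do, a simulator shows flicker.
def nand(a, b):
    return 0 if (a and b) else 1

def settle(d, enable, q=0, qbar=1, max_iters=20):
    for _ in range(max_iters):
        a = nand(d, enable)            # active-low set
        b = nand(a, enable)            # active-low reset
        new_q, new_qbar = nand(a, qbar), nand(b, q)
        if (new_q, new_qbar) == (q, qbar):
            return q, qbar             # stable: no flicker
        q, qbar = new_q, new_qbar
    raise RuntimeError("never settled: this is what flicker looks like")

print(settle(d=1, enable=1))  # (1, 0): latch is transparent, Q follows D
print(settle(d=0, enable=1))  # (0, 1)
```

If the real circuit toggles forever, the usual suspects are an enable line wired into the wrong gate or an output fed back into the wrong input.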
Q1
The Turing Complete game says S, R = 0 is an error. But why?
I tried creating a NAND latch in Logisim and in the Turing Complete game, and it seems fine? I don't see any contradictions.
If I assume the top NAND gate has inputs 0, 0 or 0, 1, either way it's going to produce 1, and that 1 goes to the bottom NAND gate, so its inputs become 0, 1, which produces 1, which goes back to the top NAND gate, so its inputs become 0, 1, which produces 1, which goes to the bottom NAND, and so on...
Q2
Why does the Turing Complete game say S, R = 0 is an error,
but in Logisim S, R = 1 is an error (there is a red rectangle)?
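The short answer is that the two tools use opposite conventions: a NAND latch has active-low inputs, so S = R = 0 means "set and reset asserted at the same time", while Logisim's built-in latch component is active-high, where S = R = 1 is the same forbidden state. The trace above is right that nothing oscillates; the problem is that both outputs go to 1, so Q and Q-bar stop being complements, and releasing both inputs simultaneously gives an unpredictable result. A small sketch that enumerates all four input combinations of the cross-coupled NAND pair:

```python
# Enumerate a cross-coupled NAND (active-low SR) latch.
def nand(a, b):
    return 0 if (a and b) else 1

def settle(s, r, q=0, qbar=1, max_iters=10):
    for _ in range(max_iters):
        new_q, new_qbar = nand(s, qbar), nand(r, q)
        if (new_q, new_qbar) == (q, qbar):
            return q, qbar
        q, qbar = new_q, new_qbar
    return None  # would oscillate

for s in (0, 1):
    for r in (0, 1):
        print(f"S={s} R={r} -> Q, Q-bar = {settle(s, r)}")
# S=0 R=0 gives (1, 1): stable, but Q == Q-bar, which is the "error" state.
```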
Hello, I am not a fellow CS cadet. But my partner that I love very much is!
Valentine’s Day is coming up and I want to get him something related to computer science. He truly enjoys coding and programming, he does it in his free time. He talks about all of his side projects (I never understand a thing he is talking about lol).
He enjoys open source (like, a lot). He codes on OpenBSD and talks about Unix. If there are any awesome gift ideas, let me know :)
I am in a sense looking for a passion in CS/applied math, to then undertake some research in that field. Many times, I have weirdly "convinced myself" that this new subject was my passion. I ended up changing my mind, and while still finding the subject interesting, the fire and love I had for it always ends up subsiding. This was less of a problem during my undergraduate degree, but now that I am going into my masters next semester, I have to choose a few specialisations. I tend to be an "all in" type of person, especially in my studies. Breadth is essential, but I want to start focusing on depth in a subject I really like.
My thought processes are very cyclical and go something like this:
I really want to escape this cycle, as it is mentally exhausting. I am also aware that maybe my relationship with certain academic interests is not realistic or healthy.
The fields I'm interested in do tend to share common characteristics, though. For example, I started off being interested in computational linear algebra, then probability and statistics, then algorithms, and now I am torn between cryptography and numerical methods / CG and computational geometry. So maybe I'm not that crazy?
What doesn't waver over time, though, is my desire to do research.
Any help is greatly appreciated.
I would like to show how conditional probability can be used to achieve a better false positive rate (for the same size and number of hash function calls, k) than a standard Bloom filter. Here's the link to a small gist I made on GitHub to illustrate the idea in working code: https://gist.github.com/ross39/e8260da17672538e6833cb968c776793
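For readers who want the baseline being improved upon, this is the standard Bloom filter false-positive estimate (the conditional-probability construction itself is in the gist, not reproduced here):

```python
# Standard Bloom filter false-positive estimate:
# p ≈ (1 - e^(-k*n/m))^k for m bits, n inserted items, k hash functions.
import math

def bloom_fp_rate(m_bits, n_items, k_hashes):
    return (1 - math.exp(-k_hashes * n_items / m_bits)) ** k_hashes

# 10 bits per item with the near-optimal k = (m/n) * ln 2 ≈ 7:
print(bloom_fp_rate(m_bits=10_000, n_items=1_000, k_hashes=7))  # ~0.008
```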
Just wondering: do you have to write a 0 in the 128s place when converting from denary to binary? For example, 127 = 01111111.
Or do you just write 1111111?
Sorry if you didn't understand; English is my second language.
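Short answer: the leading zero changes nothing about the value; it only matters when you're writing a fixed-width quantity such as an 8-bit byte, where padding to 8 digits is the convention. A quick check:

```python
n = 127
print(bin(n))            # 0b1111111 - no leading zeros, same value
print(format(n, '08b'))  # 01111111  - padded to a fixed width of 8 bits
print(int('01111111', 2) == int('1111111', 2))  # True
```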
Great book so far, but a question I have on pg 169 of the first edition (apparently it's on page 238 of the newer edition):
In the bottom circuit, he shows us adding a "Clear" input that sets Q to 0, regardless of any other input. This makes sense, but isn't there a scenario where you could set Clear to 1, Clock to 1, and Data to 1, and end up with Q being 0, but ALSO Q-bar being 0? And that would present a conundrum, since Q and Q-bar are supposed to be complements of one another?
This seems to play out when toying with the diagram at https://www.codehiddenlanguage.com/Chapter17/: setting Clear, Clock, and Data to 1 shows Q and Q-bar both as 0.
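The observation is correct: with Clear, Clock, and Data all 1, the set path and the clear path are asserted simultaneously, which is exactly the forbidden both-inputs-active state of the underlying latch, so Q and Q-bar are momentarily both 0 rather than complements. A rough gate-level reconstruction (my wiring guess, not a copy of the book's diagram):

```python
# NOR-based level-triggered D latch with Clear as an extra input on the
# Q-side NOR gate. Rough reconstruction, not Petzold's exact schematic.
def nor(*inputs):
    return 0 if any(inputs) else 1

def settle(data, clock, clear, q=0, qbar=1, max_iters=10):
    for _ in range(max_iters):
        set_ = data and clock
        reset = (not data) and clock
        new_q = nor(reset, clear, qbar)
        new_qbar = nor(set_, q)
        if (new_q, new_qbar) == (q, qbar):
            return q, qbar
        q, qbar = new_q, new_qbar
    return None

print(settle(data=1, clock=1, clear=0))  # (1, 0): normal write
print(settle(data=1, clock=1, clear=1))  # (0, 0): Q and Q-bar both low
```

Real parts avoid this either by gating Clear so it overrides Data, or by documenting the combination as disallowed, the same way S = R = active is disallowed on a bare latch.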
I came up with a new sorting algorithm that runs in O(n log ln n) time worst case. It uses natural runs, like Timsort, to run in O(n) time best case. I tested it, and it's faster than most stuff out there without needing any customized hardware; it just uses a mildly clever new data structure.
Now what? How do I get it out there? Is there a way to monetize something like this beyond a YouTube video with ads?
When we write software, we create an executable so people can use it. Regardless of the technology or language used to create a program, the executable produced is a binary file. Why should we use secure programming practices, given that we decide what the executable does and it cannot be changed by the clients?
For example, C++ classes provide access specifiers. Why should I bother creating a private variable if the client cannot access it anyway, nor can they access the code base? One valid argument is that it allows a clear setup of resources and gives the product a logical structure. But those advantages limit themselves to the production side. How does it affect the client side?
Reverse engineering the binary cannot be a valid argument, as a lot of secure programming practices do not directly deal with it.
Thoughts?
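One production-side benefit that does reach the client: access specifiers let the compiler funnel every mutation of a piece of state through code that checks its invariants, so bugs that would otherwise ship inside the binary get caught during development. The post is about C++, but here is the same idea sketched in Python (the names and the example invariant are made up):

```python
# Hypothetical example: the balance >= 0 invariant is guarded in exactly
# one place, so no other code in a large project can silently break it.
class Account:
    def __init__(self, balance=0):
        self._balance = balance   # private by convention; C++ would enforce it

    @property
    def balance(self):
        return self._balance      # read-only from the outside

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

acct = Account(100)
acct.withdraw(30)
print(acct.balance)    # 70
# acct.balance = -5    # AttributeError: the invariant can't be bypassed
```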
Just made a new sorting algorithm called bogogu bogugo sort. Let me know what you think and add suggestions below. Maybe someone can do a simulation of this; if you could, that'd be really cool 😎 (there's a rough sketch after the steps below).
It starts off normal with finding a pivot (e.g., a single 5 [the only 5 among all the numbers]).
Basically we do quicksort at the beginning, dividing the group of numbers into two: a bigger half and a smaller half.
We then pick the smallest number of the bigger half and the biggest number of the smaller half (e.g., 6 and 4).
We add them up, then divide by 2: (4+6)/2 = 5.
We add the pivot to that: 5+5 = 10.
We find the averages of both halves to see if either of them equals 10. If they don't, we restart everything, with random numbers this time; if 10 does equal one of the averages, we sort one single number, then repeat until fully done.
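A literal-minded simulation of those steps, as requested (this is my reading of them; where the description is ambiguous, the code picks one interpretation):

```python
# Bogogu bogugo sort, simulated. The restart step keeps it bogosort-grade.
import random

def bogogu_bogugo_sort(nums):
    out = []
    while nums:
        pivot = random.choice(nums)                      # step 1: pick a pivot
        smaller = [x for x in nums if x < pivot]
        bigger = [x for x in nums if x > pivot]          # step 2: split in two
        if smaller and bigger:
            midpoint = (max(smaller) + min(bigger)) / 2  # steps 3-4
            target = pivot + midpoint                    # step 5
            averages = (sum(smaller) / len(smaller),
                        sum(bigger) / len(bigger))
            if target not in averages:
                random.shuffle(nums)                     # "restart everything"
                continue
        out.append(min(nums))                            # "sort one number"
        nums.remove(min(nums))
    return out

print(bogogu_bogugo_sort([5, 3, 8, 1, 9, 4, 6]))
```

It does terminate eventually, because sooner or later the random pivot is the minimum or maximum and one half comes up empty, but the expected number of restarts is what earns the name.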
Thanks guys.
Hypothetically, if you broke RSA encryption today, what would be the most damage you could do if you were trying to create havoc? And how much money could you get if you wanted to make the most money with this?
Hey everyone!
I'm looking to deepen my understanding of computer hardware—how different components are made and their functions. I want to dive into concepts like threads, kernels, and other low-level system operations to gain a more comprehensive view of how computers work.
For context, I’m a computer science major with several years of programming experience and a basic understanding of hardware, but I’d like to take my knowledge to the next level. I’ve watched numerous YouTube videos on these topics, but I still struggle to fully grasp some of the concepts.
Are there any good books or guides that explain these topics in depth? I’d really appreciate any recommendations!
I'm interested in the Category Theory course by Bartosz Milewski (https://www.youtube.com/playlist?list=PLbgaMIhjbmEnaH_LTkxLI7FMa2HsnawM_), and I'm looking for a study partner. We'd watch roughly two lectures a week, exchange notes and questions, etc. Anyone interested, DM me.
About me: Master's student in CS.
title
Yesterday, DeepSeek R1 demonstrated the untapped potential of advancing computer science to build better algorithms for Artificial Intelligence. This breakthrough made it crystal clear: Artificial Intelligence progress doesn’t come from just throwing more compute at problems for marginal improvements.
Computer Science is a deeply mathematical discipline, and there are likely endless computational solutions that far outshine today's state-of-the-art algorithms in efficiency and performance.
NVIDIA's 17% stock drop in a single day reflects a market realisation: while hardware is important, it is not the key factor that drives Artificial Intelligence innovation. True innovation comes from mastering the mathematics in Computer Science that drives smarter, faster, and more scalable algorithms.
Let's embrace this shift by focusing on advancing foundational CS and algorithmic research; the possibilities for Artificial Intelligence (and beyond) are limitless.
Therefore its implementations would be data structures also? For example, could we describe quick-find as an algorithm or a data structure?
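Quick-find is actually a nice example of why the line blurs. As a sketch: the array is the data structure, and quick-find names the algorithms (find/union) that operate on it:

```python
# Quick-find, the eager variant of union-find: the ids array is the data
# structure; find and union are the algorithms defined over it.
class QuickFind:
    def __init__(self, n):
        self.ids = list(range(n))    # ids[i] = component id of element i

    def find(self, p):               # O(1)
        return self.ids[p]

    def connected(self, p, q):
        return self.find(p) == self.find(q)

    def union(self, p, q):           # O(n): relabel p's whole component
        pid, qid = self.find(p), self.find(q)
        if pid != qid:
            self.ids = [qid if i == pid else i for i in self.ids]

uf = QuickFind(5)
uf.union(0, 1)
uf.union(1, 2)
print(uf.connected(0, 2))  # True
print(uf.connected(0, 3))  # False
```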
https://learntocodetogether.com/we-need-math-for-coding/
Yes, I actually have a better sense of how HTTPS works from grinding through some of the math behind it. So I can say that if we care about the details of something and want to understand it deeper than the conceptual level, math is not always the answer, perhaps, but sometimes it can definitely help.
In the past few days, I have had time to reflect on what kind of math I have to use in practice when writing technical implementations. Nothing too fancy, just some basic math and fundamentals, but it's the cumulative effort of a couple of years of writing software plus recent exposure to some interesting new concepts.
I hope I can get some feedback on this post, and I'm glad if you find it useful! 😇😇
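As one taste of the kind of math behind HTTPS the article is gesturing at: key agreement is modular arithmetic. A toy Diffie-Hellman exchange (tiny, insecure numbers purely for illustration; real TLS uses large groups or elliptic curves):

```python
# Toy Diffie-Hellman: both sides end up with the same shared secret
# without ever transmitting it.
p, g = 23, 5                     # small public prime and generator

a_secret, b_secret = 6, 15       # each side's private value
A = pow(g, a_secret, p)          # Alice sends g^a mod p
B = pow(g, b_secret, p)          # Bob sends g^b mod p

shared_a = pow(B, a_secret, p)   # Alice computes (g^b)^a mod p
shared_b = pow(A, b_secret, p)   # Bob computes (g^a)^b mod p
print(shared_a, shared_b, shared_a == shared_b)  # 2 2 True
```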
Everyone who gets into computer science is quickly introduced to sorting algorithms like Quick Sort, Merge Sort, Heap Sort, etc. But these algorithms all assume that we can only compare two elements at a time, and while that is almost always the case, especially in computer science, there are scenarios where the assumption doesn't hold.
For example, imagine someone wants to sort their horses by speed. While they cannot measure the horses' speeds precisely, they can race up to three horses at a time and determine their relative ranking in that race. The goal would be to minimize the number of races needed to sort all the horses.
I've never heard anything about this topic, but certainly some people have, so I'm curious what research exists on it, whether there are any known sorting algorithms designed for scenarios like this, and how they work.
Btw, I used three horses as an example, but the question is for n-element comparisons, though I believe much bigger n's would be too complex to handle, since an n-element comparison has n! possible outcomes.
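That n! observation is exactly the right handle. Assuming each race of k horses yields a full ranking of those k, one race distinguishes at most k! outcomes, so r races distinguish at most (k!)^r orderings; sorting must distinguish all n! orderings, which gives the lower bound r >= log(n!) / log(k!). A quick calculation:

```python
# Information-theoretic lower bound on races: r >= log(n!) / log(k!),
# since each k-horse race reveals at most log(k!) "bits" of ordering.
import math

def min_races_lower_bound(n, k):
    return math.ceil(math.lgamma(n + 1) / math.lgamma(k + 1))  # lgamma(x+1) = ln(x!)

print(min_races_lower_bound(25, 3))  # 3-horse races for 25 horses
print(min_races_lower_bound(25, 2))  # ordinary pairwise comparisons, for contrast
```

For k = 2 this is the familiar Omega(n log n) comparison bound, and standard algorithms generalize in the natural way, e.g. a merge sort that does k-way merges.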
I don't know why.
I sat down this morning and whipped up an esoteric language that translates pairs built from three distinct zero-width Unicode characters into Brainfuck tokens. Then a minimal Brainfuck interpreter executes the result.
Here's how you implement "Hello World!" in PhantomFuck: HelloWorld.phf
Good luck seeing the code. But do take a look at the file size; that was unintentional, but it's a good indication that this thing is cursed.
Repo: Github
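The repo has the real character set and mapping; purely to make the pairs-of-zero-width scheme concrete, here is a guess at what a decoder looks like (the three characters and the pair-to-token table below are invented for illustration):

```python
# Hypothetical PhantomFuck-style decoder: 3 zero-width characters give
# 9 ordered pairs, more than enough for Brainfuck's 8 commands.
ZW = ["\u200b", "\u200c", "\u200d"]   # assumed zero-width character set

PAIR_TO_BF = {
    (0, 0): ">", (0, 1): "<", (0, 2): "+",
    (1, 0): "-", (1, 1): ".", (1, 2): ",",
    (2, 0): "[", (2, 1): "]",
}

def decode(source):
    idx = [ZW.index(c) for c in source if c in ZW]  # visible chars ignored
    pairs = zip(idx[0::2], idx[1::2])
    return "".join(PAIR_TO_BF.get(p, "") for p in pairs)

print(decode("\u200b\u200d\u200c\u200c"))  # "+." under this made-up table
```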
For a long time I thought SSL/TLS added security at the transport layer, to ensure transmitted data integrity, data confidentiality, and server authentication using digital certificates. However, upon a careful look at the OSI model, it actually operates at the presentation layer. Why would the security be added at the presentation layer instead of the transport layer, where the data is actually being moved from point A to point B?
Our computers mostly run on the principles of digital electronics. They use voltage levels to represent binary states, using different circuit components like transistors, diodes, etc.
From a theoretical point of view, I was curious: what difference would it make if we tried to do the same using magnetic fields, i.e., treating the north pole and south pole as analogous to the two binary states? Here the magnetic field is an arbitrary choice; it could be anything in general.
Taking these two types of computers, one using electronics and the other magnetic fields, how can I formulate a conceptual framework that captures the method of implementation given a particular hardware/substrate I am using to do my computations? Can we develop properties of each computer along the lines of "representation", "modeling", and "substrate dependence"?
If my guess is correct, there should be a categorical difference between the two: based on the implementation method, one of the computers should be more effective for some operations than the other, and vice versa. Is this a sensible question or am I just hallucinating?
I don't want to date myself, but stuff like unit testing just didn't exist when I was studying CS. However, I was thinking about recursion the other day and was wondering how modern languages (or IDEs) catch problems like the base case (or multiple cases) never being reached. Will today's development platforms warn you that your recursion is headed for infinity, or that you've just written 100 lines of code that will never be reached? Back in my day we could only speculate about the latter, or sic an intern on it. But for the base case? First you'd have to know a solvable input (e.g., foo(x), x = ?) and trap for stack overflow. How are such gotchas avoided in 2025?
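For what it's worth, mainstream platforms still don't prove termination statically (that's undecidable in general); some IDE inspections flag trivially self-recursive calls, but the usual safety net is a runtime one. Python's version of that backstop, as one concrete example:

```python
# Runtime backstop, not static analysis: Python counts stack frames and
# raises RecursionError once the limit (default 1000) is exceeded.
import sys

def no_base_case(x):
    return no_base_case(x + 1)   # base case forgotten entirely

try:
    no_base_case(0)
except RecursionError as e:
    print("caught:", e)

print("frame limit:", sys.getrecursionlimit())
```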
I am less than a year away from getting my Bachelor's in CS, but some of the information is hard for me to understand. I'm doing okay in school, but some of the material I'm struggling to comprehend. Did anyone else experience this? Was some of the algorithmic, abstract, hypothetical information that you learned difficult to grasp? Did it come with time, or did you just not have to use it?
I don't know how to more fully comprehend algorithms, networking, and operating systems.
Any advice? Nothing specific, btw, just the general idea. Maybe some YouTube videos? Help! 🥹😅