/r/computerscience
The hot spot for CS on reddit.
Welcome to /r/ComputerScience!
We're glad you're here.
This subreddit is dedicated to discussion of Computer Science topics including algorithms, computation, theory of languages, theory of programming, some software engineering, AI, cryptography, information theory, and computer architecture.
For more detailed descriptions of these rules, please visit the rules page
I've experimented with a little script which outputs all Unicode characters in specified character ranges (because not all code-point values from 0x00000000 to 0xFFFFFFFF are accepted as Unicode).
Surprisingly, I found no reliable information giving a full list of character ranges (most of them didn't list emoticons).
The fullest list I've found so far is this one, with 209 character-range entries (most websites give 140-150 entries):
https://www.unicodepedia.com/groups/
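For experimenting, note that valid code points only go up to 0x10FFFF, and many values in that range are unassigned or reserved for surrogates. A small Python sketch that walks a range and keeps only assigned characters, using the standard unicodedata module (which reflects whatever Unicode version your Python ships with):

```python
import unicodedata

def assigned_chars(start, end):
    """Yield assigned Unicode characters in [start, end]."""
    for cp in range(start, end + 1):
        ch = chr(cp)
        try:
            # unicodedata.name() raises ValueError for surrogates
            # and unassigned code points, so this filters them out.
            unicodedata.name(ch)
        except ValueError:
            continue
        yield ch

# Basic Latin uppercase letters:
print("".join(assigned_chars(0x41, 0x5A)))  # ABCDEFGHIJKLMNOPQRSTUVWXYZ
```

Running this over 0x0000-0x10FFFF gives you the definitive set of assigned characters for your Python's Unicode version, without needing a third-party range list.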
I would like to know whether a peer-to-peer connection can be established without any third-party software or code — just something non-intrusive.
For example, someone has a file he wants to send to someone else, the fastest way, peer to peer over the public internet. How can he do it without downloading anything extra to perform it? I mean that the receiving peer doesn't need anything either to get it.
Another question:
How can someone in a peer-to-peer contribution network connect to the nearest peer? Does the network need a data centre with a database holding all the geolocation data, calculating the nearest peer with a formula or machine learning?
The closest peer is the one with the lowest ping.
The geolocation data is there firsthand because it's a peer-to-peer contribution network; the contributors must share it to reduce latency.
Is there a central hypervisor that assigns tasks centrally, or is it done some other way?
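On the "lowest ping" point: a peer doesn't strictly need a central geolocation database; it can simply measure latency to a handful of candidate peers directly. A minimal sketch (the peer hostnames are hypothetical) that times a TCP connect as a rough RTT estimate:

```python
import socket
import time

def rtt_ms(host, port=443, timeout=2.0):
    """Rough round-trip estimate: time how long a TCP connect takes.
    Unreachable peers count as infinitely far away."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000
    except OSError:
        return float("inf")

# Hypothetical candidate list, e.g. handed out by a tracker:
# peers = ["peer1.example.net", "peer2.example.net"]
# closest = min(peers, key=rtt_ms)
```

In practice, P2P systems like BitTorrent get candidate lists from a tracker or DHT and then prefer the peers that respond fastest, rather than computing geographic distance centrally.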
So when you have a website or app with lots of traffic, it creates lots of data. What do you do with that data besides recommendations, ML training, and selling it? What other applications can the data have?
Hey everyone, I had this idea: what if we could replace some real-time calculations in engines or graphics with precomputed memory lookups or approximations? It’s kind of like how supercomputers simulate weather or physics—they don’t calculate every tiny detail; they use approximations that are “close enough.” Imagine applying this to graphics engines: instead of recalculating the same physics or light interactions over and over, you’d use a memory-efficient table of precomputed values or patterns. It could potentially revolutionize performance by cutting down on computational overhead! What do you think? Could this redefine how we optimize devices and engines? Let’s discuss!
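This trade of memory for computation is a classic technique; the textbook example is a precomputed lookup table for an expensive function. A minimal sketch (the table size here is an arbitrary choice):

```python
import math

# Precompute sin() at evenly spaced points once, then answer
# queries with a cheap table lookup instead of a fresh calculation.
TABLE_SIZE = 4096
SIN_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def fast_sin(x):
    """Approximate sin(x) via nearest-entry table lookup."""
    idx = int(x / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return SIN_TABLE[idx]

print(abs(fast_sin(1.0) - math.sin(1.0)))  # small approximation error
```

Game and graphics engines have used exactly this kind of table (and coarser approximations like baked lightmaps) for decades, so the main open question is usually where the memory cost and the "close enough" error are acceptable.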
One of them is suspected to be the name of a character-generation method in the Computer Graphics subject.
If someone here actually knows the right answer, please let me know: I have been trying to find the correct spelling, and some searches say Starburst while others say Starbust. The study material my teacher gave me uses both spellings.
I am a bit overwhelmed with UML Activity Diagrams. I have to prepare a presentation about it for my lecture. While looking for a source, I realised that different sources have different numbers of elements and notations.
Is there any official documentation/listing of the elements and notation that officially appear in a UML 2 Activity Diagram?
I'm writing a story in which the antagonist has placed a computer program on a series of beanstalks that will, essentially, end the world. He also has watchdog programs on the system to ensure no one tampers with the base program. For story reasons these programs must be disabled in a specific sequence. Only, the protagonists don't know what that sequence is. I need them to narrow down the sequence to one of two reversed sets. But I'm having trouble figuring out how they might narrow it down in such a way. Any help is greatly appreciated.
Hey everybody,
I am curious about what caused accumulator- and stack-based architectures to lose the battle against register-based architectures.
Thanks so much!
I am trying to learn about thread safety and reentrancy. Most documents say that these two concepts are orthogonal, and that a function can be neither, either, or both.
Digging around the web, in this StackOverflow question, the following code is given as an example of reentrant but not thread-safe:
int t;

void swap(int *x, int *y)
{
    int s;

    s = t;    /* save the global */
    t = *x;
    *x = *y;
    *y = t;   /* the global is used as scratch space for the swap */
    t = s;    /* restore the global */
}
Someone pointed out that this code is not reentrant, but the poster claimed that:
The assumption is that if the function gets interrupted (at any point), it's only to be called again, and we wait until it completes before continuing the original call. If anything else happens, then it's basically multithreading, and this function is not thread-safe. Suppose the function does ABCD, we only accept things like AB_ABCD_CD, or A_ABCD_BCD, or even A__AB_ABCD_CD__BCD. As you can check, example 3 would work fine under these assumptions, so it is reentrant
This code was taken from the Wikipedia page, but I noticed that it was deleted recently.
Looking at Wikipedia talk:
The code in #Reentrant but not thread-safe is not reentrant unless it is running on a uniprocessor with interrupts disabled.
but other user argued that:
When a program is running on a single thread (whether on a uniprocessor or multiprocessor) with interrupts enabled, the reentrancy is nested. That means, if a function is interrupted and reentered, the interrupted process (the outer one) has to wait for the reentered process (the inner one). In that case, "s=tmp" and "tmp=s" recover "tmp" to the previous value. So I think this example is reentrant.
But finally other user mentioned that:
No, reentrant does not mean recursive. When a process is interrupted while running a function and a second process runs the same function, an interrupt or system call in the second process could allow the first process to continue running before the second process has finished running that function.
So who is right? I cannot picture a situation where a process is interrupted and the function reentered, yet everything still runs in a single-threaded environment.
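For intuition, here is a Python sketch of the same code (lists stand in for pointers), with a hook that simulates the strictly nested, run-to-completion reentry the poster describes — the inner call finishes before the outer call resumes. Under that model, the save/restore of the global makes it work out:

```python
t = 0  # the shared global, as in the C example

def swap(x, y, interrupt=None):
    """Mirror of the C swap. `interrupt`, if given, is invoked
    mid-function to simulate a strictly nested reentrant call."""
    global t
    s = t          # save the global
    t = x[0]
    if interrupt:
        interrupt()  # nested call runs to completion right here
    x[0] = y[0]
    y[0] = t       # t was restored by the inner call, so this is safe
    t = s          # restore the global

a, b = [1], [2]
c, d = [3], [4]
# Nested reentry: inner swap(c, d) completes before the outer resumes.
swap(a, b, interrupt=lambda: swap(c, d))
print(a, b, c, d)  # [2] [1] [4] [3]
```

Under true multithreading the interleaving is arbitrary: another thread can clobber the shared global between the save and the restore, which is exactly why the function is not thread-safe even if you accept the nested-reentry argument.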
I came across this game and am wondering how many collectable items are in it. I used the dev tools for this analysis and got 59. The results can be found in this spreadsheet. I'd be interested to hear what other techniques y'all would use, and whether I missed anything.
I've been considering some ideas for free educational YouTube videos that nobody's done before.
I had the idea of doing algorithms on paper with no computer assistance. I know from experience (25+ years as a professional) that the most important part of algorithms is understanding the process, the path, and their applications.
So I thought of the idea of teaching it without computers at all. Showing how to perform the operations (on limited datasets of course) with pen and paper. And finish up with practice problems and solutions. This can give some rote practice to help create an intuitive understanding of computer science.
This also has the added benefit of being programming language agnostic.
Wanted to validate this idea and see if this is something people would find value in.
So what do you think? Is this something you (or people you know) would watch?
Every time I do something like copy a 100GB file onto a USB stick I'm amazed that in the end it's a bit-by-bit exact copy. And 100 gigabytes are about 800 billion individual 0/1 values. I'm no expert, but I imagine there's some clever error correction that I'm not aware of. If I had to code that, I'd use file hashes. For example: cut the data to be transmitted into feasible sizes, hash each 100MB as it's transmitted, and compare the hash (the digest, is that what it's called?) of the 100MB on the computer with the digest of the 100MB on the USB stick or wherever it's copied to. If they're the same, continue with the next chunk; if not, overwrite that data with a new transmission from the source. You could instead do a single hash check after the copying, but if it fails you have to repeat the whole action.
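The chunked-hash scheme described above is easy to sketch. SHA-256 is just one reasonable digest choice here; the 100 MB chunk size is from the post:

```python
import hashlib

CHUNK = 100 * 1024 * 1024  # 100 MB chunks, as in the scheme above

def chunk_hashes(path, chunk_size=CHUNK):
    """Return the SHA-256 digest of each fixed-size chunk of a file."""
    digests = []
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                break
            digests.append(hashlib.sha256(block).hexdigest())
    return digests

# Verify a copy chunk by chunk; re-copy only the chunks that differ:
# mismatched = [i for i, (a, b)
#               in enumerate(zip(chunk_hashes(src), chunk_hashes(dst)))
#               if a != b]
```

In reality the transport and hardware already do per-piece integrity checks — TCP has a per-segment checksum, Ethernet frames carry a CRC, and flash storage applies its own ECC — which is why naive copies come out as reliable as they do.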
But I don't think error correction is standard when downloading files from the internet, so is it all accurate enough to download gigabytes from the internet and be assured that most probably every single bit of the billions of bits has been transmitted correctly? And as it goes through the internet, there's much more hardware and physical distance that the data has to pass through.
I'm still amazed at how accurate computers are. I intuitively feel like there should be a process of data literally decaying. For example, in a very hot CPU, shouldn't there be lots and lots of bits failing to keep the same value? Such tiny physical components keeping values, at 90-100C, receiving and changing signals in microseconds. I guess there's some even more ingenious error correction going on. Or are errors acceptable? I've heard of error rates as a real-time statistic for CPUs. But that means the errors get detected, and probably corrected. I'm a bit confused.
Edit: 100GB is 800 billion bits, not just 8 billion. And sorry for assuming that online connections have no error correction just because I as a user don't see it ...
I'm doing research on content delivery networks for my end-of-semester presentation and I want to explore how services like Spotify and Netflix use CDNs. I recently found the Netflix tech blog, so could anyone point me to relevant articles that might help? Ty
So basically, some years ago I watched a nice video which claimed to present an algorithm capable of solving a currently unsolved problem (I don't remember which one) with the lowest time complexity possible. If I remember correctly, what the algorithm did was basically type random characters on a line, run them, and repeat until, by chance, it had written a second algorithm capable of solving the problem (so yes, of course, it was a sort of informative joke video); and because time complexity only measures the time taken in the limit, it was technically very low for this one.
I know this isn't a very specific description, but does it ring a bell for anyone? I can't find the algorithm's name anywhere, but I know it's somewhat well known.
By history book, I mean trends in Software Engineering for a particular era, etc. It would be cool if there were "war stories" about different issues being resolved. An example would be how a specific startup scaled up to x amount of users, but older than recent posts on that topic — think early 2000s.
I've been looking into transformers a bit and I came across rotary positional embedding (RoPE). They say it's better than absolute and relative positional embedding techniques in terms of flexibility and compute cost. My question: since it rotates each token's embedding by an angle theta times the token's position in the encoding, is it not possible for an embedding to be rotated so that it has a closer meaning to a completely unrelated word?
What I mean is: let's say we have the word "dog" as the first word in a sequence and the word "house" as the hundredth. Is there not an axis of rotation where "house" maps close (not exactly, but close) to "dog"? After all, "house" gets rotated more dramatically than "dog" because of its position farther along the sequence. Wouldn't this cause the model to think these two words are more related than they actually are?
Please correct me if I'm wrong.
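One reassuring property can be sketched in plain Python (2-D case, with an arbitrary per-position angle): RoPE rotates the query and the key each by their own position, so their dot product — the attention score — depends only on the relative offset between the two positions, not on how far along the sequence the pair sits:

```python
import math

def rot2(v, theta):
    """Rotate a 2-D vector by angle theta: the core RoPE operation,
    applied per pair of embedding dimensions."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

q, k = (1.0, 2.0), (0.5, -1.0)
theta = 0.1  # per-position rotation step (arbitrary for the demo)

# Score for positions (3, 7) equals the score for positions (10, 14):
s1 = dot(rot2(q, 3 * theta), rot2(k, 7 * theta))
s2 = dot(rot2(q, 10 * theta), rot2(k, 14 * theta))
print(abs(s1 - s2))  # ~0: the score depends only on the offset 4
```

So the absolute rotation of "house" at position 100 is invisible to attention; what matters is the rotation by the offset between the two tokens, which is the same wherever the pair appears.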
I'm hosting an online conference that invites professors from distinguished computer science universities to speak about their areas of expertise and help attendees cultivate a passion for computer science.
Although focused on students in Appalachia, it's free and open to anyone interested in computer science — students, educators, or just curious minds!
Details and registration are in the form below. Hope to see you there! Feel free to PM me if you have questions.
So I know absolutely nothing about computers. I understand to some degree how numbers and characters work with binary bits. But my understanding is that everything comes down to 0s and 1s?
How does something like say...a while loop look in 0s and 1s in a code? Trying to conceptually bridge the gap between the simplest human language functions and binary digits. How do you get from A to B?
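To make that bridge concrete: a compiler (or interpreter) turns a while loop into a comparison plus a conditional jump instruction, and each instruction is stored as binary opcodes and operands. You can peek at one low-level form with Python's bytecode disassembler — bytecode rather than raw machine code, but the same idea of numbered binary instructions:

```python
import dis

def count_down(n):
    # "while" compiles to: compare, conditionally jump past the body,
    # run the body, then jump back up to the comparison.
    while n > 0:
        n -= 1
    return n

dis.dis(count_down)  # each printed line is an opcode stored as bytes
```

On real hardware it is the same shape: a CMP instruction and a conditional branch (e.g. x86 `jle`), each encoded as a specific bit pattern the CPU decodes. The loop exists only as "jump backwards unless the condition fails."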
A single-precision FP number has 23 bits of mantissa. How can a square root be calculated using the floating-point representation? I need to use Verilog, so there are no built-in functions.
For example, 100 is represented as 1.100100 x 2^6 (in binary).
Here, the mantissa will be stored as 100100 followed by 17 zeroes. Can I use the Newton-Raphson method for this, and if yes, which set of bits can be used to calculate the square root? Remember, I need to use the floating-point representation only.
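Newton-Raphson works fine for this. The usual decomposition: sqrt(m x 2^e) = sqrt(m) x 2^(e/2), so you halve the unbiased exponent (shifting the significand once when the exponent is odd) and iterate only on the significand. A software sketch of the iteration itself — Python rather than Verilog, operating on the whole float instead of raw bit fields:

```python
import math

def nr_sqrt(x, iterations=6):
    """Newton-Raphson for sqrt: y_{k+1} = (y_k + x / y_k) / 2.
    The initial guess halves the exponent of x, mirroring the
    exponent-halving you'd do on the FP bit fields in hardware."""
    m, e = math.frexp(x)          # x = m * 2**e with 0.5 <= m < 1
    y = math.ldexp(1.0, e // 2)   # initial guess ~ sqrt(2**e)
    for _ in range(iterations):
        y = 0.5 * (y + x / y)
    return y

print(nr_sqrt(100.0))  # ~10.0
```

In Verilog you would run the same iteration in fixed point on the 24-bit significand (the implicit leading 1 plus the 23 stored bits). Since each step needs a division, hardware often prefers the reciprocal-square-root iteration y_{k+1} = y_k * (3 - x * y_k^2) / 2, which uses only multiplies, then multiplies the result by x at the end.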
In the Ender's Game books, there is a game the children play that adapts to their interests in a way that reveals things about their character, their motives, their grit, their battle-readiness, etc. It psychoanalyzes them through their use of the game, and adapts to every player. It makes more sense if you have read the Ender's Game books (which I recommend!!), but I wonder if there is a way to make this game in real life. Could you code an Artificial Intelligence to adapt a game as a person plays it? Would the Artificial Intelligence need to rewrite its own code? Is that dangerous? Why hasn't it been attempted? How difficult would it be? I am only learning how to code now, and I am sure there are some incredibly obvious answers as to why this is entirely impossible and stupid, but... it's too cool to give up on.
Hello good folks, I need your help about this problem.
Let's say I have a boolean expression in conjunctive normal form (CNF) that uses n variables, independent of each other, with no negations in the clauses. Checking the satisfiability of this boolean expression is trivial because of the lack of negation, but I need to find a satisfying truth assignment with the minimum number of true values.
So for example, given a set of 6 boolean variables {a, b, c, d, e, f} and this CNF:
(a ∨ b) ∧ (a ∨ c ∨ f) ∧ (d ∨ e)
the satisfying assignments with minimum trues are either {a = true, d = true} or {a = true, e = true}.
So far my ideas are:
I also feel like this is quite similar to another kind of set-related problem, but I can't quite find the right keywords for it.
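The keyword you may be after is the minimum hitting set problem (closely related to set cover): treat each clause as a set of variables, and look for the smallest set of variables that intersects every clause — setting exactly those to true satisfies the monotone CNF. It is NP-hard in general, but brute force is fine for small instances. A sketch using the example above:

```python
from itertools import combinations

def min_true_assignment(variables, clauses):
    """Smallest set of variables that hits (intersects) every clause.
    Brute force over subsets by increasing size: exponential in the
    number of variables, so only suitable for small instances."""
    for k in range(len(variables) + 1):
        for chosen in combinations(variables, k):
            s = set(chosen)
            if all(s & set(clause) for clause in clauses):
                return s
    return set()  # unreachable if every clause is non-empty

clauses = [{"a", "b"}, {"a", "c", "f"}, {"d", "e"}]
print(min_true_assignment("abcdef", clauses))  # e.g. {'a', 'd'}
```

For larger instances you would reach for the standard set-cover tools: the greedy ln(n)-approximation (repeatedly pick the variable hitting the most unhit clauses) or an ILP/SAT solver for exact minima.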
I am a game dev effectively working on multiple games at once because I am only ever burnt out of one of them at a time.
One of them is a multiplayer warioware-like where each player plays one at a time. The device is meant to be passed around between players, so the order of who plays what minigame should be unpredictable. The task is this:
Given a list of M items, each repeated N times to form a list M*N in length, randomize the list in such a way that no item repeats consecutively. So, [1 3 2 1 2 3] is acceptable, [1 2 2 3 1 3] is not, [1 1 2 2 3 3] is extremely not.
The game will have M players play N microgames each.
My current thought process is to simply randomize the list, then repeatedly comb over the list and in each pass, if it sees an item that's the same as the one before it, swap it with the one that comes next, effectively inserting it between the two. But this... feels inefficient. And I haven't yet proven to myself that this will always terminate.
Another option I played around with was to populate the list one item at a time, randomly choosing from anything that wasn't the last one chosen. This sounds like it works, but I haven't figured out how to prevent the case where multiple copies of the same item are left at the end.
I wonder if there's something I'm missing. A more efficient one-pass way to remove adjacent duplicates, or a way to construct the list and entirely prevent the issue.
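One way to construct the list and prevent the issue outright, rather than repair it afterwards: always place a random pick from the items with the highest remaining count, excluding whatever was just placed. Keeping the most abundant item from piling up is exactly what prevents being stranded with several copies of one item at the end (this is the standard greedy for "reorganize string"-type problems). A sketch:

```python
import random

def no_adjacent_shuffle(items, n):
    """Arrange each of `items` exactly n times with no two equal
    items adjacent. Greedy: among items other than the last placed,
    pick randomly from those with the highest remaining count."""
    counts = {item: n for item in items}
    result, last = [], None
    while counts:
        pool = [i for i in counts if i != last]
        if not pool:
            raise ValueError("no valid arrangement from this state")
        top = max(counts[i] for i in pool)
        pick = random.choice([i for i in pool if counts[i] == top])
        result.append(pick)
        counts[pick] -= 1
        if counts[pick] == 0:
            del counts[pick]
        last = pick
    return result

seq = no_adjacent_shuffle([1, 2, 3], 4)
assert all(a != b for a, b in zip(seq, seq[1:]))
```

This is a single pass of O(M*N) placements and, with M players each appearing N times (M >= 2), a valid arrangement always exists, so the greedy never gets stuck. The downside versus a plain shuffle is that it is not uniform over all valid orderings, which likely doesn't matter for passing a device around.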
A quote jumped out at me from this August 2024 Quanta Magazine article: How Base 3 Computing Beats Binary
"Surprisingly, if you allow a base to be any real number, and not just an integer, then the most efficient computational base is the irrational number e."
Would it even be hypothetically possible to architect a computer using base e?
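The quote comes from "radix economy": representing N in base b takes about log_b N digits, and the hardware cost is modeled as (number of distinct symbols) x (number of digits) = b * ln N / ln b, which as a continuous function of b is minimized at b = e. A quick check of that cost function (the cost model itself is the assumption doing all the work here):

```python
import math

def radix_economy(b, N):
    """Cost of representing N in base b under the classic model:
    symbols-per-digit times digits, in continuous form b*ln(N)/ln(b)."""
    return b * math.log(N) / math.log(b)

N = 10**6
for b in [2, math.e, 3, 4, 10]:
    print(b, radix_economy(b, N))
# e is the minimum; 3 beats 2 because 3 is closer to e.
```

Since e is irrational, a literal base-e machine would need non-integer "digits"; the practical takeaway of the argument is only that ternary edges out binary under this particular cost model.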
I know it connects my devices to the Internet, but how? Is there a mini computer in there telling it what to do? And if so, what is it telling it?