/r/computerscience
The hot spot for CS on reddit.
Welcome to /r/ComputerScience!
We're glad you're here.
This subreddit is dedicated to discussion of Computer Science topics including algorithms, computation, theory of languages, theory of programming, some software engineering, AI, cryptography, information theory, and computer architecture.
For more detailed descriptions of these rules, please visit the rules page
Hi folks, I hope you’re doing well.
I am a student currently studying Computer Science at university.
I studied only superficially during the first third of the curriculum.
I regret not taking everything seriously from the beginning because I have now become passionate and interested in computer science as a field beyond getting a qualification for a job.
In the first few modules I crammed and retained very little knowledge. I have been more diligent with my more recent work and plan on continuing to do so.
How can I overcome the knowledge gaps I created?
I am also working part time so going back to each of those subjects is going to be challenging.
How would you deal with this situation if you were me?
I have a seminar tomorrow but can't decide on a topic.
The seminar should be on a recent trend; the specific topic doesn't matter.
Guys, please help.
Hey guys, I already asked this question in r/MLQuestions but I figured I'd try fellow compsci colleagues here as well. Hope I'm not breaking rule number 9, but I think it's interesting enough to be asked here too.
I'm working on a classifier for detecting the topic or category of a web page based on an analysis of its source and possibly its URL. Practically, it means I have an annotated dataset containing the URL, the scraped source code, and the category of the page. I'm probably going with XGBoost, Random Forest, and similar methods, and comparing the results later to evaluate accuracy.
Which features do you think I could extract and would actually be good for classification of the website into a predefined category?
Some good recommendations I got were using bag-of-words or more sophisticated methods like TF-IDF or even BERT, but perhaps you guys here would have more ideas about what could work well. I thought utilizing the tags, styles, or scripts on the site could be interesting, but I can't really figure out how exactly; perhaps someone here has an idea.
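In case it helps make the baseline concrete, here is a rough sketch (my own illustration, not something from the thread) of the TF-IDF route with a tree ensemble. The tiny inline dataset, the `page_to_text` helper, and the example.com URLs are all made up purely for demonstration; swap in your own data loading.

```python
# A minimal TF-IDF + Random Forest baseline sketch for web page categorization.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier

def page_to_text(url: str, html: str) -> str:
    """Crude feature text: strip tags, then append URL path tokens."""
    visible = re.sub(r"<[^>]+>", " ", html)          # BeautifulSoup would be more robust
    url_tokens = re.sub(r"[/\-_.:]", " ", url)
    return f"{visible} {url_tokens}"

# Toy stand-in for the annotated dataset: (url, html, category)
pages = [
    ("https://example.com/sports/match-report", "<h1>Match report</h1><p>The home side won 2-1.</p>", "sports"),
    ("https://example.com/tech/new-gpu",        "<h1>New GPU launched</h1><p>Twice the throughput.</p>", "tech"),
    ("https://example.com/sports/transfers",    "<p>The striker signed a new contract.</p>", "sports"),
    ("https://example.com/tech/compilers",      "<p>The release improves build times.</p>", "tech"),
]
texts  = [page_to_text(u, h) for u, h, _ in pages]
labels = [c for _, _, c in pages]

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf",   RandomForestClassifier(n_estimators=200, random_state=0)),
])
model.fit(texts, labels)

print(model.predict([page_to_text("https://example.com/sports/standings",
                                  "<p>The league table after round 10.</p>")]))
```

Tag names, inline styles, or the presence of certain scripts could be added the same way: turn them into tokens and let the vectorizer count them.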
Thanks a lot and have a nice start into the week.
The problem is I feel like everything becomes an include relationship if you think about it long enough. From what I know, include means the function relies on another function, whereas extend means it is simply optional.
If you look at the attached image I made, I need help understanding why "set deadline" and "modify exercise" would or would not be an extend relationship.
And if so, what kinds of programs could it implement?
In neural networks, a neuron typically holds a value between 0 and 1, especially in layers where activation functions like the sigmoid are used. This range allows the network to model probabilities or degrees of confidence in binary classifications.
A value between 0 and 1 is a float. A percent/ratio/probability can instead be stored as an integer in the range 0 to 100. Although both usually take 4 bytes, integers are generally more space-efficient and faster to process, as they require less storage and computational overhead.
Do you think changing this representation could give any significant performance gain? I see companies like Google using nuclear power plants to power their servers for these transformers. What are your thoughts?
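For what it's worth, here is a toy sketch of the idea (my own illustration, not how any particular framework does it): mapping activations in [0, 1] onto 8-bit integers and back, which is roughly what post-training quantization does with proper calibration of scales.

```python
# Toy quantization sketch: store [0, 1] activations as uint8 instead of float32.
import numpy as np

activations = np.random.rand(1_000_000).astype(np.float32)   # values in [0, 1)

# quantize: map [0, 1] onto the 0..255 range of uint8
quantized = np.round(activations * 255).astype(np.uint8)

# dequantize when a float is needed again
restored = quantized.astype(np.float32) / 255

print("float32 size:", activations.nbytes, "bytes")                 # ~4 MB
print("uint8   size:", quantized.nbytes, "bytes")                   # ~1 MB
print("max rounding error:", np.abs(activations - restored).max())  # <= 1/510
```

The memory saving is real (4x here), but whether it speeds anything up depends on the hardware having fast integer paths and on the accuracy loss being acceptable.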
My professor hasn't been great at teaching this semester, and I've really struggled with this subject. I've looked online quite a bit, but haven't found any good resources for studying.
MISD (Multiple Instruction streams Single Data stream)
SISD (Single Instruction stream Single Data stream)
I understand SISD: Fetch one instruction->decode->fetch data for it->execute
But when I read about MISD, it says: fetch multiple instructions -> decode them -> fetch data (the same data for all instructions) -> then, instead of executing all instructions in parallel, execute them sequentially one by one.
So isn't MISD something similar to SISD, since in the end it executes sequentially? Correct me if I am wrong.
I am confused by this naming.
I know the basics of them, but as I never went to uni I never practiced this well enough.
I never understood it really well, so I want to start from scratch. Is there a really good book with very good examples that will teach me all of computer networks? I want to understand it top to bottom.
Thanks in advance!
How would you determine (mathematically) whether a filter is linear or not? For example,
h(x, y) = 5f(x, y) - f(x-1, y) + 2f(x+1, y) + 8f(x, y-1) - 2f(x, y+1)
Is h linear in this case?
What I know is that to call a function linear, we should show it is homogeneous and additive.
So I tried to show it is homogeneous as follows: for some constants a and k, if h(ax, ay) = a^k h(x, y), then it is homogeneous. But I got stuck at h(ax, ay) = 5f(ax, ay) - f(ax-1, ay) + 2f(ax+1, ay) + 8f(ax, ay-1) - 2f(ax, ay+1) and I don't know how to continue.
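One common way to set the check up (a sketch of the framing only, not a worked answer) is to treat the filter as an operator T acting on the whole image f, and to test additivity and homogeneity with respect to f rather than with respect to the coordinates (x, y):

```latex
% Linearity of a filter is about how it treats the input image f,
% so the check is phrased over arbitrary images f_1, f_2 and scalars a, b:
\[
  T[f](x, y) = 5f(x,y) - f(x-1,y) + 2f(x+1,y) + 8f(x,y-1) - 2f(x,y+1)
\]
\[
  T[a f_1 + b f_2](x, y) \overset{?}{=} a\,T[f_1](x, y) + b\,T[f_2](x, y)
  \quad \text{for all images } f_1, f_2 \text{ and scalars } a, b.
\]
```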
Is main() a global or a local function?
Suggest resources for studying CS core topics and C++ in depth!
Hi! My interviews are coming up quite soon and I really want to revise the CS core topics inside and out. Kindly suggest resources for studying the topics (mainly OS, DBMS, CN, OOP), as well as C++ in depth (I know C++ syntactically well enough to practise DSA and CP).
(x_priv · G) + (y_priv · G) = (x_priv + y_priv) · G
where G is the generator point,
x_priv and y_priv are integers from the finite field F_p (p being the curve order),
and the + between the scalars is addition mod p.
If this is true, is the following also true?
Bob can generate an Ethereum (ECDSA) keypair and share his public key with Alice,
and Alice can generate a keypair and share her public key with Bob.
They can generate a unified public key by adding (EC point addition) those two public keys, and from it
generate a valid Ethereum account address.
While they keep their private keys secret, the wallet address is watch-only: no single individual can sign transactions
and move assets from that address. The only way to reconstruct the private key for that wallet (account) address
is to add Bob's and Alice's private keys (integer addition in F_p).
Is this a known fact? I want to build a centralized system,
but without custody of users' wallets, so the idea was to generate deposit addresses
whose private keys can only be constructed when both the user and my centralized system
agree to reveal their private keys to each other.
Please tell me: can this work, is it already implemented somewhere, or is it wrong?
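Below is a hedged sketch (my own illustration, not audited code) that checks the identity numerically on secp256k1 with plain affine point arithmetic; the key generation with Python's `secrets` module is purely for demonstration and none of this addresses the operational risks of splitting keys this way.

```python
# Checking (x_priv*G) + (y_priv*G) == ((x_priv + y_priv) mod n)*G on secp256k1.
import secrets

# Standard secp256k1 parameters: field prime, group order, generator point.
P = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(A, B):
    """Affine point addition on y^2 = x^3 + 7 over F_P (None = point at infinity)."""
    if A is None: return B
    if B is None: return A
    (x1, y1), (x2, y2) = A, B
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                      # A + (-A) = infinity
    if A == B:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, P) % P     # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P        # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, A):
    """Double-and-add scalar multiplication k*A."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, A)
        A = ec_add(A, A)
        k >>= 1
    return R

# Bob and Alice each pick a private scalar and derive a public point.
x_priv = secrets.randbelow(N - 1) + 1
y_priv = secrets.randbelow(N - 1) + 1
X_pub, Y_pub = ec_mul(x_priv, G), ec_mul(y_priv, G)

# The combined public key equals the public key of the summed private scalars.
assert ec_add(X_pub, Y_pub) == ec_mul((x_priv + y_priv) % N, G)
print("(x_priv*G) + (y_priv*G) == ((x_priv + y_priv) mod n)*G holds")
```

Note that the scalar addition is modulo the group order n, which is what makes the combined key a valid private key again.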
I'm a first-year bachelor's student in CS, and it's really tough for me right now; I feel like I'm lost in discrete math. Does anyone know a way to get better in that course?
As it is now, I have no idea how to program, and I do not understand the Java programming language enough to do anything on my own beyond trivial objects with print statements and if statements.
I had trouble coming to this conclusion earlier because I had already made an effort to learn to program through the typical 'intro to Java' courses and tutorials such as 'learning Godot engine', even though it felt as though I was just copying code with no explanation.
I think I am relatively OK at following language-independent descriptions of algorithms and their exercises through videos and on paper, but when I ask certain questions about an algorithm, the answer is eventually that it will make sense once I actually code, which is when things go south.
Hey guys
I've searched a lot but couldn't find an answer to what the difference is between CSAPP 3rd edition and the 3rd global edition.
Thanks in advance!
Hello all,
Okay, this will sound like an incredibly dumb question. In my almost 2 decades (context) of software engineering, there is one thing I have long struggled with: which direction to make an arrow point in my notes, impromptu explanatory diagrams, and formal documentation.
There are cases where this is obvious: diagrams that show the flow of information, including classic flow charts (does anyone use these though?) or network diagrams where directionality has a clearly defined meaning.
However, if you say "A abstracts B" you might just as well say "B refines A". The same goes for "A depends on B" versus "B is referenced by A".
Or, even more abstractly, when you are relating concepts, each of those relations may be different within a single diagram! This happens more in personal notes and mind mapping.
I'm wondering if there's a general, perhaps obnoxiously/delightfully abstract, answer to this dilemma.
Thank you!
Bestieboots.
If making an algorithm to beat humans at 4X games (like Civ 6) were as big a deal as making a chess engine to beat humans was in the 1900s, could it be done? The disagreement is this: one side says an algorithm of that complexity could be built if it carried the significance the chess engines did in the '60s-'90s, despite the difference in complexity; the other says it is simply not feasible. The first side's reasoning for why such an algorithm hasn't been made yet is that the problem is not significant enough to receive the resources needed to be "solved," whereas a machine beating a grandmaster in the '90s was a big enough deal to receive those resources; the second side holds that it is just too complex a problem to compute.
I'm a junior in college, and I'm having a lot of technical interviews. I'm not comfortable with my DS/algo knowledge. Are there any good resources you would recommend in this area? LeetCode is helpful, but I want a bit better foundational knowledge (I took intro DS and algos online, didn't pay much attention, and it's starting to show).
The understanding I have about this question is this:
When I compile code, the OS loads the compiler program for that code into main memory.
Then the compiler program is executed, and the code it is supposed to compile gets translated into the necessary format using the CPU.
Meaning, OS executable code (already present in RAM) runs on the CPU and schedules the compiler; then the CPU executes the compilation process as instructed by the compiler executable file.
I understand other processes might get a chance to execute in between the compilation, and I/O interrupts might happen.
Now, I could be totally wrong here; the picture I have of this process may be entirely wrong. In that case, please enlighten me by providing a clearer picture.
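As a tiny sanity check of that mental model (assuming gcc is installed; the file name here is made up), the compiler really is just another executable the OS loads, schedules, and runs like any program you launch yourself:

```python
# Illustration: launching the compiler as an ordinary OS process.
import subprocess
import pathlib

pathlib.Path("hello.c").write_text(
    '#include <stdio.h>\nint main(void) { puts("hi"); return 0; }\n'
)

# The kernel loads the gcc executable into memory and schedules it on the CPU
# like any other process; this call simply waits for it to finish.
result = subprocess.run(["gcc", "hello.c", "-o", "hello"],
                        capture_output=True, text=True)
print("compiler exit status:", result.returncode)
```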
I am a bit confused about analysing recursive functions. Consider the function definitions given below for fun1() and fun2():
```c
int fun1(int n) {
    if (n <= 0) return 1;
    else return fun1(n/2) + n;
}

int fun2(int n) {
    if (n <= 0) return 1;
    else return fun2(n/2) + fun3(n);  // fun3(n) runs in O(n) time
}
```
I have some questions about the above code:
My reference suggests that to analyse fun1(), we would use the recurrence relation T(n) = T(n/2) + C, and not T(n) = T(n/2) + n. Why is it so? How is fun2 different from fun1?
Is the order of growth of fun1() different from that of its return value? The reference uses T(n) = T(n/2) + n to compute the latter.
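For reference (standard results, stated without deciding which recurrence applies to which function), the two recurrences unroll very differently:

```latex
% T(n) = T(n/2) + c: constant work per level, about log2(n) levels.
\[
  T(n) = T(n/2) + c = c \cdot \log_2 n + \Theta(1) = \Theta(\log n)
\]
% T(n) = T(n/2) + n: per-level work forms a geometric series dominated by the top term.
\[
  T(n) = T(n/2) + n = n + \tfrac{n}{2} + \tfrac{n}{4} + \dots \le 2n = \Theta(n)
\]
```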
I had this lingering thought while waiting in traffic. It's nothing serious, but I just want to know. I know that Google Maps is able to take real-time traffic data into account in its pathfinding, along with average speed and road conditions.
What I want to know is if they estimate the traffic of a given section of road depending on day and hour. If they do, do they take it into account in their pathfinding? How do/would they optimize it?
As an example: Let's say there's two paths to choose from and each path contains two sections:
At timestep t=0: The first path has both sections of the road estimated to take around 5 units of time.
The second path has the first section take around 5 units as well. However, the second section is a bit more congested and is estimated to take around 10 units of time.
At timestep t=5: Let's say the first section of both paths doesn't fluctuate, and that if you had taken either path at t=0, you would have cleared it by now.
However, the second sections do fluctuate: the second section of the first path starts to enter its rush hour and gives an ETA of 7 units of time.
On the other hand, the second section of the second path has just finished its rush hour and the road is basically empty. Now it has an ETA of 4 units of time.
Would Google's algorithm have taken the first path (the shortest path at t=0) or the second path (the true shortest path)?
Note: let's say that these paths fork out so you can't just switch paths mid journey without making the trip longer.
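I can't speak for Google's actual system, but a routing engine that models time of day only needs edge weights that depend on arrival time. Here is a hedged sketch with the toy numbers from the example above (the node names and cost functions are made up), in which the search ends up preferring the second path:

```python
# Time-dependent Dijkstra sketch: each edge's cost is a function of the time
# you arrive at its tail, so later rush hours are taken into account.
import heapq

def tt_first_A(t):  return 5                    # path A, section 1
def tt_second_A(t): return 5 if t < 5 else 7    # path A, section 2: rush hour from t=5
def tt_first_B(t):  return 5                    # path B, section 1
def tt_second_B(t): return 10 if t < 5 else 4   # path B, section 2: clears after t=5

# graph[node] = list of (neighbour, travel_time_function)
graph = {
    "start": [("midA", tt_first_A), ("midB", tt_first_B)],
    "midA":  [("end", tt_second_A)],
    "midB":  [("end", tt_second_B)],
    "end":   [],
}

def time_dependent_dijkstra(graph, source, target, depart=0):
    """Earliest-arrival search: edge weight depends on when you reach the edge."""
    best = {source: depart}
    pq = [(depart, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue
        for v, cost_fn in graph[u]:
            arrival = t + cost_fn(t)
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(pq, (arrival, v))
    return float("inf")

print(time_dependent_dijkstra(graph, "start", "end"))  # 9: arrives via the second path
```

With only the t=0 snapshot, the first path (5 + 5 = 10) looks better; with time-dependent costs, the second path (5, then 4 after t=5) wins.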
Hi. I've heard that Michael Sipser Introduction to the Theory of Computation is a great book. But it's been 12 years since its last edition (the third). Maybe some discoveries have been made and it's no longer up to date? I'm afraid of learning something that is no longer true.
Hello guys, I recently saw a video about a guy who created an OS from scratch to play Tetris, and I wanted to give it a try. However, I don’t know where to start. I know OS development is difficult, but I want to give it a shot. Does anyone have good resources, like books or courses? I’d prefer something focused on writing an OS in ARM assembly for the Raspberry Pi. Thank you!
When I asked this question before, I was usually told I did not need to go as far as the physical magnets and voltage lights on a board to understand binary and coding data types, but I began to feel very stuck studying intro to algorithms. Right now, I don't truly understand data and memory. When I imagine the lowest level, I just imagine a large array of blinking lights. Even though I have watched intro-to-(language) tutorials, I never truly understood the idea of addresses, references, container data types, and defining relationships between data.
I don't really understand how we define characters and numbers. I roughly assume we assign a symbol to a certain set of bits/blinking lights in the computer, yet I don't really understand how we code the symbol itself into the computer. How do we implement the visual character 'r' or '2'?
Moving on to data types such as integers, I don’t understand how we code inequality, for example. How do we code that int 1 is “bigger” than int 2?
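Concretely (using Python here only to peek at the values; the encodings themselves are fixed by standards like ASCII/Unicode), characters are just agreed-upon numbers, and "bigger than" is not stored anywhere as a fact about the data; it is computed on demand by a compare instruction:

```python
# 'r' and '2' are agreed-upon numbers; the bit pattern is what memory holds,
# and the glyph you see is drawn later by a font renderer.
print(ord('r'), bin(ord('r')))   # 114  0b1110010  (code point for 'r')
print(ord('2'), bin(ord('2')))   # 50   0b110010   (the *character* '2', not the number 2)

# Integer comparison: the CPU's compare instruction effectively subtracts the two
# bit patterns and inspects the result's sign/flags; nothing "bigger" is stored.
a, b = 2, 1
print(a > b)                      # True
```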
Any sort of relationship or command is also hard to understand, such as if statements. How do we tell the computer that if this set of lights is on, then we must execute this line of code? Is an 'if' statement also stored in memory as some sort of object when instantiated, the same way a data type such as 'int 2' is, even though statements in programming are commands? Do statements also have addresses?
Another issue is the idea of references, and pointers as a type of reference.
Data such as 'int 2', or maybe an array element array[1] = 2, or a node in a linked list, has an address, which is not the actual element such as '2' but some assortment of symbols such as '@a3fbk'. Does an address itself take up memory? Where are addresses stored?
Why do we need an address alongside what should presumably be a set of binary code/a pattern of lights inside the computer?
When studying data structures, I never understand why some are better or worse for memory, because I have no concrete idea of what memory is, or what data is.
Where could I start?
Hey everyone. I want to learn more about how to make basic computers, with stuff like toggles and bitshifts, and logic gates.
One of my undergrad courses was Digital Logic, and I fell in love with the stuff we covered like logic gates, kmaps, multiplexers, and the like. But since it’s an engineering degree, we didn’t get too deep into it.
Combine that with me accidentally diving down the YouTube rabbit hole of people who've made their own computer components, and some Factorio videos that blew me away with what people created, and I just really need to learn more.
It doesn’t really help that I don’t know enough about the subject to even know what to google.
So I’m hoping you all have some digital resource I can really sink my teeth into. Honestly an online textbook on advanced digital logic would be close to what I’m looking for.
Don't worry about how complex the material may be. Thanks in advance for any help.
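To give a flavour of what those resources build up to (this is my own toy sketch, not from any specific book), here is a half adder expressed entirely in terms of a single NAND primitive, the same game many "build a computer from scratch" materials play:

```python
# Building a half adder out of NAND gates alone.
def NAND(a, b): return 1 - (a & b)

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    """Adds two 1-bit numbers: returns (sum, carry)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))
```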
Fixing the Y2K bug is estimated to have cost 500 billion dollars.
The reason was a decision that was made >50 years ago. The engineers back then decided to use 6 bytes instead of 8 to store dates.
I am asking myself: was this the better solution cost-wise, even considering the high cost later on?
They started off on punch cards, then tubes, later they used magnetic core memory and eventually flip flops. Many of the basic programs have not changed since the beginning of the widely spread use of computers.
They used billions of punch cards, and I assume they saved a lot of money by only using 6 bytes. Considering that one byte of magnetic core memory cost 1 USD in '50s money, 2 extra bytes are a huge cost factor. Especially if you factor in that these $2 could have gone into other investments: at an interest rate of 5%, those $2 become $22.93 fifty years later. The same goes for all the memory types used throughout history; even in the '80s, one TB still cost just shy of a billion USD.
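The compounding in that paragraph, spelled out (illustrative arithmetic only):

```python
# $2 of memory saved per record, hypothetically invested at 5% per year for 50 years.
saved = 2.00
print(round(saved * 1.05 ** 50, 2))   # 22.93
```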
So they saved a lot of money during this time and every dollar not spent was available for investments.
Also, the point in time when they fixed the "bug" mattered, as there were more programmers available in the '90s than ever before. Even though the people fixing the programs in the '90s earned a lot of money, it would have been much more expensive in the '80s or '70s.
Are there estimates of the money saved by using 2 bytes less for 50 years?
Q. How can I, as a user, ensure that whatever optimisations I've made to my code are actually taking effect, and that the portion of code I want kept in cache is behaving as intended? What are the tools/ways with which I can measure this and say so with confidence?
Q. By monitoring total time of execution?
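Total execution time is one signal, but it can hide what you care about; on Linux, hardware counters (e.g. `perf stat -e cache-references,cache-misses ./prog`) answer the cache question directly. As a rough, hedged illustration of the timing approach alone (a toy example, not your code), you can at least compare two variants on identical data:

```python
# Compare two traversal orders of the same data and time each one.
import time

N = 2000
matrix = [[i * N + j for j in range(N)] for i in range(N)]

def sum_row_major(m):
    # visits elements in the order the rows are laid out
    total = 0
    for row in m:
        for v in row:
            total += v
    return total

def sum_col_major(m):
    # jumps between rows on every access
    total = 0
    for j in range(N):
        for i in range(N):
            total += m[i][j]
    return total

for fn in (sum_row_major, sum_col_major):
    t0 = time.perf_counter()
    fn(matrix)
    print(f"{fn.__name__}: {time.perf_counter() - t0:.3f} s")
```

A timing difference suggests the change matters, but only counters (cache misses, instructions retired) tell you *why*, which is what "is my code actually staying in cache" really asks.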