/r/computerscience


Welcome to /r/ComputerScience!

We're glad you're here.

This subreddit is dedicated to Computer Science topics such as algorithms, computation, theory of languages, theory of programming, some software engineering, AI, cryptography, information theory, computer architecture, etc.

Rules

  1. Content must be on-topic
  2. Be civil
  3. No career, major or courses advice
  4. No advertising
  5. No joke submissions
  6. No laptop/desktop purchase advice
  7. No tech/programming support
  8. No homework, exams, projects etc.
  9. No asking for ideas

For more detailed descriptions of these rules, please visit the rules page


Credits

  • Header image is found here.
  • Subreddit logo is under an open source license from lessonhacker.com, found here


/r/computerscience

380,085 Subscribers

1

Playlist or resources to fully understand Computer Networks.

I am a second year computer science student. I need good resources for learning computer networks.

0 Comments
2024/03/27
18:58 UTC

2

What are some of the 100% FREE resources to learn IT related topics on the internet?

I'm unemployed at the moment and trying to gather a list of free resources/games/courses/etc. to help me explore more of the field, figure out where I want to go after starting in IT support, and learn as much as I can without having to spend the $ I don't have.

PS: For reference, I'm currently studying for the A+ 220-1102 exam with the goal of breaking into the tech field.

2 Comments
2024/03/27
17:55 UTC

1

Search Idea

I have an old mini computer device (MK Android 809 III). I am looking for ideas for what I can do with it (troll ideas could be good too). If someone could help me, that would be great. Thx

0 Comments
2024/03/27
12:35 UTC

14

In formal academic algorithmic pseudocode, why 1-index & arbitrary variable names?

For someone relatively new to their formal compsci journey, these seem to add unnecessary confusion.

1-idx vs 0-idx seems to be an odd choice, given it has impacts on edge cases.

The use of "i", "j", "k", etc. is what I really struggle with. It's fine if, e.g., there's just a single variable, i, which is semantically used as an iterator variable. But, e.g., I was looking through my prof's pseudocode for QuickSort, and they use "k" and "l" for the left and right pointers during the pivot algorithm.

The point of pseudocode (as I understand it) is to abstract away the particulars of a machine and focus on the steps. But this adds more confusion for me, preventing focus. E.g., setting a pointer that is inherently on the right to lowercase "l" (which is already difficult to differentiate from 1 or uppercase I) seems convoluted, particularly when you ALSO have a left pointer called something else!
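To make the complaint concrete, here is a hypothetical Lomuto-style partition (an assumption; the prof's actual pseudocode isn't shown), written 0-indexed, once with the opaque single-letter names described above and once with descriptive ones; the logic is identical:

    def partition_opaque(A, k, l):
        # k = left bound, l = right bound (inclusive), 0-indexed
        p = A[l]
        i = k - 1
        for j in range(k, l):
            if A[j] <= p:
                i += 1
                A[i], A[j] = A[j], A[i]
        A[i + 1], A[l] = A[l], A[i + 1]
        return i + 1

    def partition_descriptive(items, left, right):
        # Same Lomuto partition, 0-indexed, with self-documenting names.
        pivot = items[right]
        boundary = left - 1  # last index of the <=-pivot region
        for cursor in range(left, right):
            if items[cursor] <= pivot:
                boundary += 1
                items[boundary], items[cursor] = items[cursor], items[boundary]
        items[boundary + 1], items[right] = items[right], items[boundary + 1]
        return boundary + 1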

14 Comments
2024/03/27
12:11 UTC

9

computer science audiobooks?

I spend all day at work standing with headphones in. I need good audiobooks or audio lectures about the basics/fundamentals of computer science.

thanks in advance

1 Comment
2024/03/27
08:05 UTC

30

Distributed Systems 101!

Hi everyone, I just finished writing my first blog post!

I am working towards a master's degree in computer science, and the idea is to document some of my learnings on my blog. The first article is a 101 introduction to one of my favorite areas (and the one I want to pursue the master's in), Distributed Systems. Feel free to take a look at it!

Any feedback is much appreciated.

https://leodalcegio.dev/distributed-systems-101-based-on-understanding-distributed-systems

7 Comments
2024/03/26
17:47 UTC

50

Why is a database faster than a spreadsheet?

I have googled a little to find an answer to this, and the things I understood are:

How the data is stored in memory/on disk: a database stores data in B-trees, hashes, heaps, etc. But how are spreadsheets stored on disk?

Easy manipulation of data, since a database stores data in a structured way (at least the structured DBs), with each column having a specific type (int, string, timestamp, etc.), whereas a spreadsheet has a unique-cell mechanism: each cell is independent and can hold any number, string, or formula.

I have read about some differences on the internet; could somebody help me reach a conclusion with enough explanation?
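One way to see the core of it (a toy illustration, not how any particular database or spreadsheet engine is actually implemented): a typed, sorted column can be probed with binary search, much like a B-tree index, while untyped cells force a full scan with a per-cell type check:

    import bisect

    # Typed, sorted column: every value is known to be an int and kept in
    # order, so a lookup is a binary search -- O(log n), like a B-tree probe.
    sorted_ids = list(range(0, 1_000_000, 2))

    def indexed_lookup(key):
        i = bisect.bisect_left(sorted_ids, key)
        return i < len(sorted_ids) and sorted_ids[i] == key

    # Spreadsheet-style cells: any cell may hold any type, so a lookup has
    # to scan every cell and type-check it -- O(n).
    cells = [str(x) if x % 3 else x for x in range(0, 1_000_000, 2)]

    def scan_lookup(key):
        return any(isinstance(cell, int) and cell == key for cell in cells)

    assert indexed_lookup(123456) == scan_lookup(123456)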

17 Comments
2024/03/26
12:33 UTC

7

What influences compilation times the most?

I'm not a coder, but I've compiled a lot of code from git repos over the years, and I've noticed that some projects are very slow to compile and/or have very large code bases, both of which I wouldn't have expected considering what the program does. I don't have concrete examples; it's just something I've noticed and thought was weird. Some GUI libraries, maybe? Is it just my baseless expectation that ffmpeg should be more complex to compile than a GUI?

What characteristics of a given piece of code influence compilation time the most, all else being equal? What kind of complexity is hardest for a compiler to deal with? Thoughts about the matter in general?

3 Comments
2024/03/26
11:30 UTC

9

Stupid Question regarding lossless image compression

This is a really stupid question as I just started learning computer science: how does run length encoding work if it incorporates decimal numbers and computers use a binary numeral system? Thank you in advance!
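For what it's worth, the counts in run-length encoding are not stored as decimal digits at all; they are integers, and integers are stored in binary like everything else. A minimal byte-oriented sketch (one hypothetical format among many):

    def rle_encode(data: bytes) -> bytes:
        # Each run becomes two bytes: (count, value). The "decimal" count
        # is just an integer stored in binary; 0x09 is the same number
        # whether you print it as 9 or as 1001.
        out = bytearray()
        i = 0
        while i < len(data):
            run = 1
            while i + run < len(data) and data[i + run] == data[i] and run < 255:
                run += 1
            out += bytes([run, data[i]])
            i += run
        return bytes(out)

    def rle_decode(encoded: bytes) -> bytes:
        out = bytearray()
        for count, value in zip(encoded[::2], encoded[1::2]):
            out += bytes([value]) * count
        return bytes(out)

    assert rle_decode(rle_encode(b"\x00\x00\x00\xff\xff")) == b"\x00\x00\x00\xff\xff"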

10 Comments
2024/03/26
03:05 UTC

27

My solution for config hell

I worked as a full-stack developer for several years, and as a hobbyist developer, and ran into several recurring annoyances.

  • Glue code is annoying to write; much of webdev is hooking together preexisting systems.
  • The interesting code is in the backend, but the frontend often takes longer to develop.
  • Configuration is a hassle. Little compares to the pain of "It works on my machine".
  • Documentation often wildly under- or over-specifies how to get a tool running.
  • I'll write a quick script, then when I need it again a month later, I forget where it is, or how to run it.

I knew there had to be a better way to develop in 2024. By meta-tagging algorithms with the type of their inputs and outputs, we obtain a machine-understandable description of how to run them. In the process of adding these tags, we can also index the algorithms by keyword for easy searching later. With a little bit of theoretical computer science, this tagging also allows us to quickly find the sequence of algorithms required for a larger process.

Fundamentally, these tags are types and the algorithms serve as casts.
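To illustrate the idea (my own sketch with hypothetical names, not Ontolog's actual design or syntax): register each algorithm with the type it consumes and the type it produces, then discover a pipeline with a breadth-first search over types:

    from collections import defaultdict, deque

    registry = defaultdict(list)  # input type -> [(output type, function)]

    def cast(src, dst):
        def register(fn):
            registry[src].append((dst, fn))
            return fn
        return register

    @cast("csv_path", "rows")
    def load_csv(path): ...

    @cast("rows", "report_html")
    def render_report(rows): ...

    def find_chain(src, dst):
        # BFS over types: the shortest sequence of "casts" from src to dst.
        queue, seen = deque([(src, [])]), {src}
        while queue:
            t, chain = queue.popleft()
            if t == dst:
                return chain
            for nxt, fn in registry[t]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, chain + [fn]))
        return None

    # find_chain("csv_path", "report_html") -> [load_csv, render_report]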

Ontolog is a programming language I'm developing that treats all computations as typecasts. This change in perspective largely solves the above problems. For a more philosophical tutorial/demo, check out ontolog.dev.

Beyond the language, Ontolog will be an open-source no-code interface for developers to distribute their algorithms to nontechnical consumers. Ontolog is platform-independent, config-free, fast, and great for orchestrating large processes.

Possible use cases include:

  • Scientific computing
  • 3D Design
  • Report/document automation
  • Smart home automation

We are looking to write this code and to start working with devs! Let us know you're interested by joining the mailing list!

We are also looking for interesting problems to code ourselves to stress-test the system. If you're working on an interesting problem, let us know!

If you know a similar tool or think there's a fundamental flaw, we'd like to know too!

10 Comments
2024/03/24
18:40 UTC

33

Consequences of P=NP ?

Is there a list of precise consequences of P=NP somewhere, or a list of hypotheses which entail P=NP?

For instance, IF SAT is solvable by trial and error by guessing only O(log(n)) bits of the solution and deducing/computing (in polytime) the rest, THEN P=NP. And the converse is also true, by taking 0 guesses and then the P=NP algorithm.
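For reference, the reason the example works: if only c·log₂(n) bits are guessed, the entire guess space has size

    2^{c \log_2 n} = n^c,

which is polynomial, so enumerating every possible guess and running the polytime deduction on each is itself a polynomial-time algorithm.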

34 Comments
2024/03/23
23:04 UTC

23

Good exercises for concurrency or OS software design exercises

I got The Little Book of Semaphores; any other recommendations? Ideally with exercises that involve designing concurrency primitives (a lock, but abortable; a matcher that assigns threads into groups of fixed size; write a barrier; etc.). Thank you!

9 Comments
2024/03/23
01:00 UTC

45

How does Anticheat implementation in Games work?

I'm not entirely sure if this is the right place to ask, but I'm really curious about how Game Anticheats like BattleEye or EasyAnticheat are integrated into games.

I'm curious since there are games using the same Anticheat but with vastly different results.

For example, the game "Planetside 2" has the BattleEye Anticheat, yet it seems to have a major issue with cheaters running rampant right now. While its Anticheat seems to not work at all and the devs literally ban each hacker manually by hand, "Rainbow 6 Siege" has the same Anticheat but handles those hackers much more effectively, or at least detects and bans them automatically.

Therefore I'm wondering: why is there such a difference with the same Anticheat?

How does the Anticheat implementation work? Is the game's dev team responsible for improving the Anticheat, or is that the responsibility of the BattleEye team?

Does the Anticheat have something like an API, where the game devs have to integrate the Anticheat components into the game, and depending on how much work they are willing to put into it, the Anticheat works better with the game or not?

6 Comments
2024/03/22
10:32 UTC

28

What are books that show how concepts evolved through time? For example, similar to "The Design and Evolution of C++" by Bjarne Stroustrup

I enjoy reading about how something evolved through time, so if you have any suggestions, please write them.

Thanks in advance!

8 Comments
2024/03/21
11:17 UTC

4

Is it bad practice to learn more than one domain in computer science?

For example, game development, ethical hacking, and web development? I like to learn anything related to programming. What advice can you give me?

21 Comments
2024/03/21
04:45 UTC

0

nodes and edges in graph algorithms

Hi,

Most of the time I have seen that graph algorithm is introduced using a pictorial representation as one shown in Figure #1 below.

In an actual implementation, I think each node stands for the coordinates of a point and each edge is the shortest possible path between two points.

Do you think I'm thinking along the right lines?

Are graph search algorithms the most important sub-category of graph algorithms? Could you please help me?

Figure #1
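Not quite, for what it's worth: in most implementations a node is just an identifier and an edge is just a stored pair; coordinates only appear in geometric or spatial graphs. And search/traversal is certainly one of the foundational subcategories, since many other graph algorithms are built on top of it. A minimal adjacency-list sketch:

    # Adjacency-list representation: nodes are plain labels, no coordinates.
    graph = {
        "A": ["B", "C"],
        "B": ["D"],
        "C": ["D"],
        "D": [],
    }

    def bfs(graph, start):
        # Breadth-first search, one of the basic graph *search* algorithms.
        visited, order, queue = {start}, [], [start]
        while queue:
            node = queue.pop(0)
            order.append(node)
            for neighbor in graph[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)
        return order

    # bfs(graph, "A") -> ['A', 'B', 'C', 'D']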

12 Comments
2024/03/20
06:18 UTC

9

Does running server gear 24/7 continuously increase its lifespan?

Hey everyone, can hardware theoretically last longer if it's kept under voltage?

If I have old server hardware, is it going to last longer if the server is up 24/7? Or is it going to last longer if I turn it on only when needed?

7 Comments
2024/03/19
15:11 UTC

8

Variant of point set coverage problem

I have the following problem, for which I am searching for resources/algorithms.

Given two point sets P1, P2 in 2D space, I want to find distinct subsets S1, …, Sn and T1, …, Tn of P1/P2 such that the union T1 ∪ … ∪ Tn = P2 and the pairs of subsets Si, Ti are equal except for a rotation/translation. I further have the restriction that these subsets must be contained within a fixed region in space, e.g. described by a convex polygon G which can be shifted/rotated arbitrarily. My goal is to find an algorithm that is able to do that and ideally minimizes the number of subsets n.

In which category does this problem belong? Are there similar problems you know of? If so, are there established algorithms?

My current solution is a tree search, which works okay if the point sets are on a rectangular grid.

Footnote: I can assume that the union S1 ∪ … ∪ Sn is a proper subset of P1.
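Stated a bit more formally (my reading of the description, so treat the notation as an interpretation):

    Find S_1, ..., S_n \subseteq P_1 and T_1, ..., T_n \subseteq P_2,
    rigid motions g_i(x) = R_i x + t_i, and placed copies h_i(G), such that
        \bigcup_{i=1}^{n} T_i = P_2,
        g_i(S_i) = T_i        for all i,
        S_i \subseteq h_i(G)  for all i,
    minimizing n.

Minimizing the number of covering pieces is what makes this feel related to geometric set cover (NP-hard in general) combined with point-set pattern matching under rigid motions; those two literatures seem like natural places to look.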

6 Comments
2024/03/18
16:02 UTC

9

Is there a formula/representation for this pattern? For example, so I can represent the output as a function of the input, like O = 2i + 2

INPUT : A B C D

OUTPUT : A B C D AB ABC ABCD BC BCD CD

INPUT : X Y Z

OUTPUT : X Y Z XY XYZ YZ

Edit: added missing CD
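Based on the two examples, the output looks like exactly the contiguous substrings of the input (in a different listing order), of which a length-n input has n(n+1)/2: 4·5/2 = 10 for ABCD and 3·4/2 = 6 for XYZ. A small generator, assuming that reading of the pattern:

    def contiguous_substrings(s):
        # All contiguous substrings; there are n*(n+1)//2 of them.
        return [s[i:j] for i in range(len(s)) for j in range(i + 1, len(s) + 1)]

    # contiguous_substrings("ABCD") ->
    # ['A', 'AB', 'ABC', 'ABCD', 'B', 'BC', 'BCD', 'C', 'CD', 'D']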

3 Comments
2024/03/17
19:43 UTC

2

Name Resolution

Which one scales better at large geographic area, recursive or iterative name resolution?

1 Comment
2024/03/17
17:19 UTC

8

How do you map an image matrix into 2D vectors containing x and y coordinates, and rotate them?

OK, I've been studying 3Blue1Brown's videos on how matrices work, and I've been looking at visual kernels videos on how an image can be translated to 2D space by imagining its pixels as points in 2D space. I just have one more curiosity: how are we able to apply a 2D rotation matrix to, say, a simple 3x3 black-and-white image??? The 2D rotation matrix is 2x2 and the image has a 3x3 matrix. But that 3x3 matrix only specifies the intensity of the white color, not the vector space.

So then I guess in my head what would essentially happen is:

  1. There is a way to map each value of that 3x3 intensity matrix to 2D vector space to draw on the computer screen.

  2. Once that is figured out, there is a way to also rotate each of those points individually with the rotation matrix??

Are those assumptions correct?

Any sources or videos where I can study more of this? Thanks
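Both assumptions are roughly right (a minimal sketch of one common convention, not the only one): each pixel's (row, column) index is mapped to an (x, y) position centered on the image, the 2x2 rotation matrix is applied to that position, and the intensity rides along as data attached to the point:

    import math

    image = [[0, 255, 0],
             [0, 255, 0],
             [0, 255, 0]]  # 3x3 intensity matrix (a vertical bar)

    def rotate_points(image, theta):
        # 1. Map each pixel (r, c) to a coordinate (x, y) centered on the image.
        # 2. Apply the 2x2 rotation matrix [[cos, -sin], [sin, cos]] to (x, y).
        # The intensity is carried with the point; it is not part of the vector.
        cy = (len(image) - 1) / 2
        cx = (len(image[0]) - 1) / 2
        cos_t, sin_t = math.cos(theta), math.sin(theta)
        points = []
        for r, row in enumerate(image):
            for c, intensity in enumerate(row):
                x, y = c - cx, cy - r  # flip r so +y points up
                points.append((cos_t * x - sin_t * y,
                               sin_t * x + cos_t * y,
                               intensity))
        return points

    # rotate_points(image, math.pi / 2) turns the vertical bar horizontal.

Rendering the rotated points back into a pixel grid then requires resampling/interpolation, which is its own topic.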

4 Comments
2024/03/17
03:50 UTC

5

What could a PC user do with 1 exaflop of processing power?

Imagine what video games would look like if a GPU had exascale computing power.

Are there any applications that could utilize such a powerful computer?

In the year 2000, the most powerful supercomputer in the world had 1 teraflop of processing power. Today, the Nvidia RTX 4090 has around 82 teraflops.

I'd imagine that consumer computers will (eventually) reach 1 exaflop within a few decades.
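For scale, using the post's own numbers:

    1 exaflop = 10^18 FLOPS = 1,000,000 teraflops
    1,000,000 / 82 ≈ 12,200

so a consumer exaflop machine would be roughly twelve thousand RTX 4090s' worth of compute, or a million times the year-2000 supercomputer.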

32 Comments
2024/03/14
16:32 UTC

205

The Realities of the AI Developer

Ever since Turing, the idea of human-like machine intelligence has floated around, and with recent advancements in computing power, neural networks have grown ever deeper. GPT techniques have allowed pushing the Deep Neural Network (DNN) to heights never before seen. And now, with the announcement of Devin, the AI Software Engineer, the nails in the coffin of the profession that nearly all CS majors end up in are finally being hammered in. I hope to convince you of the opposite: many of the companies behind the push aren't the humanists they claim to be, and their claims are exaggerated or even completely fabricated.

Neural networks have been around for quite some time; the idea dates back to the 1950s. But the fundamental concept hasn't really evolved since. Sure, the techniques have gotten more clever, but essentially we're still relying on the fundamental perceptron design. Which, if it works, it works. And it does quite well as a statistical learning technique. However, there is an important distinction: while these are called neural networks, they are massively different from human brains, both in operation and in fundamental structure. You don't have to take my word for it, though.

In 2021, Dr. Melanie Mitchell published a paper titled Why AI is Harder Than We Think, in a direct challenge to the AI researchers of today. While I wholly endorse reading the paper in its entirety, Dr. Mitchell brought up four fallacies that AI researchers fall for (and as a consequence, the population that listens to them).

  1. Narrow intelligence is on a continuum with general intelligence
  2. Easy things are easy and hard things are hard
  3. The lure of wishful mnemonics
  4. Intelligence is all in the brain

Dr. Mitchell further argues that AI springs are common, and each is followed by an AI winter. We are presently in an AI spring, and an AI winter is coming. It's powerful logic that's mostly been ignored by the larger AI community, who insist that AGI is right around the corner.

But self-driving cars are still not a reality, and they were promised as early as 2015. Elon Musk promised us they were going to take over. Ten years later, we're still waiting, and trillions of dollars have been spent on what seems to be the new fusion. So maybe it's time we stop listening to billionaires and start listening to real computer scientists, like Dr. Mitchell.

But surely these companies have made some very great advancements in the field of AI? And it's true, but it has come at a massive cost. The electricity demands are massive, at a time when power consumption should be optimized, on the dangerous precipice of a warming planet. The datasets used by OpenAI, Google, and Microsoft were hardly ethically obtained. And so we're left with a troubling realization: it certainly seems like this is just the new business craze (remember NFTs? Blockchain?) instead of a truly humanist pursuit. And those that cheer them on, hoping for the tech utopia of their dreams, are engaging in just another form of wishful thinking.

So no, AGI isn't upon us, no matter how many times the company that tried to trademark "GPT" tries to will it to be through marketing materials. An Apple-esque bar graph is hardly scientific, and these companies are not ethical. They are out to make money, and they have a vested interest in making the grandiose claims you see plague the news.

Thank you for coming to my TED talk, I'll have a peanut butter and jelly sandwich. And no, I'll make it myself, the AI keeps confusing the marmite with the jelly.

23 Comments
2024/03/14
16:10 UTC

20

How do you think quantum computing will change everyday computing? What effects could it have on keeping data secure, solving complex problems efficiently, and advancing artificial intelligence?

15 Comments
2024/03/14
14:24 UTC
