/r/AskComputerScience


Ask Computer Science Questions And Get Answers! This subreddit is intended for questions about topics that might be taught by a computer science department at a university.


Before posting please read our rules.

This subreddit is intended for questions about topics that might be taught by a computer science department at a university. Questions do not have to be expert-level - it's perfectly fine to ask beginner questions. We all start somewhere. All we ask is that your questions be on-topic.

If you have questions that aren't directly about computer science, here are some other subreddits that can likely offer you more help:

If your post is off-topic, or violates one of our other rules, your post may be removed.

/r/AskComputerScience

85,527 Subscribers

1

How does the BIOS access, read, and transfer an entire OS from ROM to RAM quickly enough that this beats just keeping the OS in the slower-to-access ROM?

Hi everybody,

A bit confused about something: how does the BIOS access, read, and transfer an entire OS from ROM to RAM quickly enough that this beats just keeping the OS in the slower-to-access ROM?

Thanks so much!

1 Comment
2024/12/01
05:09 UTC

0

Is pursuing a CS degree still worth it given the rapid advancements in AI?

I'm currently finishing my first semester at university, studying Computer Science. During our programming course, we were allowed to use AI tools like ChatGPT to solve exam problems. Our professor mentioned that learning the syntax of programming languages might soon become unnecessary because AI can now generate the required code snippets to solve specific problems.

This raises a question for me (we discussed it in class with our professor): if the role of a programmer is shifting towards breaking down complex problems into simpler parts and then asking AI to generate the code, what is the added value of getting a CS degree? It seems like, in the future, programming could be reduced to just describing a problem in natural language and letting AI handle the coding. Essentially, the AI becomes the "programming language" anyone can use. If you do not know a topic, you could simply ask the AI to explain it (for example, in addition to taking classes and reading books, I study with ChatGPT, which almost always manages to explain the topic differently or even better, as if it were a tutor).

If this is the case, what makes a CS graduate valuable when anyone with a good idea and access to AI tools can potentially develop high-quality software without deep knowledge of programming languages? Is the core skill set of a computer scientist changing, and if so, how should we prepare for this evolving landscape?

27 Comments
2024/11/30
17:59 UTC

0

CS Fundamentals Help!

Hi everyone! I'm a first-year CS student, and we've got a problem that I've been working on for a week and I'm still stuck on. I've reached out to some friends and they're stuck, too.

The project is to use object-oriented programming in Python to create a hint generator for our Wordle object (class Wordle). The first half of the assignment is to create a method that takes a feedback pattern and returns a set of all possible 'completions' for that pattern based on hard and soft constraints. For our program, hard constraints (green Wordle letters) are uppercase letters, and soft (yellow) constraints are represented with lowercase letters. For example, if the target word is "steal" and the user's guess is "scant", the feedback string would read "S.a.t", where 'a' and 't' are soft constraints.

The method, expandPattern, which takes arguments (self, pattern, softCnt, match=''), would return a set {'SAT..', 'SA.T.', 'ST.A.', 'ST..A', 'S.TA.', 'S.T.A', ...and so on}. softCnt is a dictionary of soft constraints and the number of times each occurs in the string. In the above example, expandPattern would be called on an object of class Wordle with the following arguments: w.expandPattern('S.a.t', {'a': 1, 't': 1})

I can’t figure out how to collect these results in a set using a recursive method, without storing it in the arguments passed to the method. I’m also struggling with the exact conditions so that it works with doubled letters, multiple soft constraints, etc.

If you have a solution, please educate me. I’d much prefer to learn how to solve this problem than receive the grade, but in a world where I can get a decent grade and also learn how to do it is the superior position.
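The "collect results without an accumulator argument" part has a standard shape: the base case returns a one-element set, and each recursive level unions the sets its calls return. Below is a minimal sketch of that pattern only; it assumes a pattern whose open slots are all '.', and it deliberately ignores the assignment's extra rules (the lowercase markers in the pattern and Wordle's "not in its original position" rule), so `expand_pattern` is an illustrative name, not the assignment's exact method:

```python
def expand_pattern(pattern, soft_cnt):
    """Place every soft-constraint letter into the '.' slots of `pattern`,
    returning the set of all completions. Results are collected purely via
    return values: each call unions the sets from its recursive calls, so
    no accumulator argument is threaded through."""
    if not soft_cnt:                      # base case: nothing left to place
        return {pattern}
    # Pick one soft letter; copy-and-decrement its count for the recursion.
    letter = next(iter(soft_cnt))
    remaining = dict(soft_cnt)
    remaining[letter] -= 1
    if remaining[letter] == 0:
        del remaining[letter]
    results = set()
    for i, ch in enumerate(pattern):
        if ch == '.':                     # try the letter in every open slot
            placed = pattern[:i] + letter.upper() + pattern[i + 1:]
            results |= expand_pattern(placed, remaining)
    return results
```

Because the results live in a set, doubled letters dedupe automatically: placing two a's in slots (1, 3) or (3, 1) yields the same string, so only one copy survives.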

2 Comments
2024/11/30
17:47 UTC

1

good resources to learn OS

I have a test on Monday on operating systems, more specifically about main memory management, virtual memory, and cache memory.

1 Comment
2024/11/30
03:55 UTC

1

What software and video games are Turing Complete?

Does anyone know which software and video games are Turing Complete like Microsoft Excel or Conway's Game of Life?

7 Comments
2024/11/30
03:21 UTC

1

Where is my misconception of the pumping lemma for context free languages?

I've been trying to understand the pumping lemma, and I decided that instead of applying it to a language that isn't a CFL, I would apply it to a language that I already know is context-free, to see if I can apply it properly.

I understand that the language L = {a^n b^n | n >= 0}, where Σ = {a, b}, is context free. However when I apply the pumping lemma, I seem to incorrectly conclude that it is not context-free. Clearly, I must be misapplying the lemma. Here is my approach to it:

  1. I select a string w ∈ L such that |w| >= p, where p is the pumping constant.
  2. I choose w = a^p b^p and decompose it as w = uvwxy, where |vwx| <= p and |vx| >= 1.
  3. As I understand it, I can choose any vwx in w, and as long as its length is at most p, I can pump its v and x to see if the resulting string remains in the language.
  4. I choose a vwx that consists entirely of a's: vwx = a^p (meaning I chose my y to be the rest of the string: b^p).
  5. Pumping v and x increases the number of a's, breaking the balance between a's and b's. This seems to imply L is not context-free.

Clearly, this is incorrect because L is known to be context free. Where am I misunderstanding or misapplying the pumping lemma? Thanks in advance
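The quantifier order in steps 3-4 is where the trouble usually hides: the lemma only guarantees that *some* decomposition pumps, so picking one decomposition that fails proves nothing. A brute-force sketch makes this concrete (illustrative p = 3, and using m for the inner "w" of u v w x y to avoid the name clash with the whole string):

```python
def in_L(s):
    """Membership test for L = { a^n b^n : n >= 0 }."""
    n = len(s) // 2
    return len(s) % 2 == 0 and s == 'a' * n + 'b' * n

def find_pumpable_split(w, p):
    """Search every split w = u v m x y with |vmx| <= p and |vx| >= 1;
    return one where u v^i m x^i y stays in L for i = 0..3, else None."""
    for start in range(len(w) + 1):
        for vlen in range(p + 1):
            for mlen in range(p + 1 - vlen):
                for xlen in range(p + 1 - vlen - mlen):
                    if vlen + xlen == 0:
                        continue
                    u = w[:start]
                    v = w[start:start + vlen]
                    m = w[start + vlen:start + vlen + mlen]
                    x = w[start + vlen + mlen:start + vlen + mlen + xlen]
                    y = w[start + vlen + mlen + xlen:]
                    if all(in_L(u + v * i + m + x * i + y) for i in range(4)):
                        return (u, v, m, x, y)
    return None

# For a^p b^p there IS a decomposition that pumps: one with v made of a's
# and x made of b's straddling the middle, e.g. v = 'a', x = 'b'.
print(find_pumpable_split('aaabbb', 3))
```

A decomposition picked entirely inside the a's fails, as the post observes, but the lemma never promised that particular one would work.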

2 Comments
2024/11/29
23:09 UTC

0

Where Can I Find a Bank of 3-UNSAT Problems?

I’m interested in some algorithms related to SAT solving / UNSAT solving. I’m specifically looking for a bank of problems that are unsatisfiable & contain 3 variables. I already have a bank of problems for 3-SAT, but there are not many of these problems which are actually unsatisfiable (i.e. like 90% of these are satisfiable).

0 Comments
2024/11/29
21:23 UTC

2

Red-Black Trees. Why can't a 3-node tree fulfill the conditions for RBT?

Challenge from instructor in Coursera Algorithms course.

see screenshot: https://imgur.com/a/4SKpX5y

If I color node #2 red, then it appears to me that all 3 conditions are met.

  1. root and leaves are Black
  2. Every Red node has two Black children.
  3. The Black height from any node is well defined and consistent.

I don't know what I am missing.

(in comparison, here is a different slide from his lecture with a RBT https://imgur.com/a/eUJW3S2)

8 Comments
2024/11/29
18:26 UTC

2

A question concerning skiena's algorithm course

In lecture 2 (Asymptotic Notation), slides 1 and 2, we are asked to find counterexamples to the proposed algorithms for the knapsack problem, but I don't understand the algorithms. There are 3 of them:

1 Put the elements of S in the knapsack in left to right order if they fit, i.e. the first-fit algorithm?

2 Put the elements of S in the knapsack from smallest to largest, i.e. the best-fit algorithm?

3 Put the elements of S in the knapsack from largest to smallest?

What does he mean by "in left to right order"?

Do 2 and 3 mean that we should add the elements in that sorted order until we reach the target number?
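Assuming the variant Skiena has in mind (pick a subset of S that exactly fills capacity T), "in left to right order" just means scanning S in the order given and taking each element that still fits. All three heuristics share that greedy loop and differ only in the pre-sort, so one helper defeats them all. A sketch with small hand-picked counterexamples:

```python
def first_fit(items, capacity):
    """Scan items left to right, taking each one that still fits."""
    total, chosen = 0, []
    for x in items:
        if total + x <= capacity:
            total += x
            chosen.append(x)
    return chosen

# Heuristic 1, left-to-right order: S = [1, 2], target T = 2.
# It grabs 1 first, then 2 no longer fits, yet {2} fills T exactly.
print(first_fit([1, 2], 2))                          # [1]

# Heuristic 2, smallest to largest: sorting [1, 2] ascending changes
# nothing, so the same counterexample applies.
print(first_fit(sorted([1, 2]), 2))                  # [1]

# Heuristic 3, largest to smallest: S = [3, 2, 2], T = 4.
# It grabs 3, then neither 2 fits, yet {2, 2} fills T exactly.
print(first_fit(sorted([3, 2, 2], reverse=True), 4)) # [3]
```

So "monotonically until the target" is the right reading: keep adding in the sorted order whenever an element fits, and the exercise is to show each ordering can still miss an exact fill.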

0 Comments
2024/11/29
09:46 UTC

1

Need help in making a distinction between flat sequences and trees

I'm struggling with the ProseMirror docs part about documents:

A ProseMirror document is a node, which holds a fragment containing zero or more child nodes. This is a lot like the browser DOM, in that it is recursive and tree-shaped. But it differs from the DOM in the way it stores inline content.

...

Whereas in ProseMirror, the inline content is modeled as a flat sequence, with the markup attached as metadata to the nodes:

# Flat sequence representation
flat_sequence = {
    "type": "paragraph",
    "content": [
        { "text": "This is", "marks": [] },
        { "text": "strong text with", "marks": ["strong"] },
        { "text": "emphasis", "marks": ["strong", "em"] }
    ]
}

# Tree structure representation
tree_structure = {
    "type": "paragraph",
    "content": [
        {
            "type": "text",
            "text": "This is"
        },
        {
            "type": "strong",
            "content": [
                {
                    "type": "text",
                    "text": "strong text with"
                },
                {
                    "type": "em",
                    "content": [
                        {
                            "type": "text",
                            "text": "emphasis"
                        }
                    ]
                }
            ]
        }
    ]
}

I find it hard to see the difference. Considering P is a set (set-theoretically speaking) with subsets that contain arbitrary values (text strings) and type values (strong, em, etc.), it's hard for me to grasp the "flatness" of the first example as opposed to the "treeness" of the second.

Maybe I have trouble because I see both as graphs. If I had to guess, I'd say the flat example's vertices are only 1 edge away from the parent, but the tree example's vertices are N edges away from the parent. I think my problem is this: how does the flat example model more complex DOMs?
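The "1 edge away" guess is exactly right, and nesting depth makes it measurable: in the flat model, marks are metadata on the text nodes and never add levels, so inline content stays at a fixed depth no matter how many marks combine. A sketch over the two dicts above:

```python
def depth(node):
    """Nesting depth of a dict node whose children live under 'content'."""
    children = node.get("content", [])
    if not children:
        return 1
    return 1 + max(depth(child) for child in children)

flat = {"type": "paragraph", "content": [
    {"text": "This is", "marks": []},
    {"text": "strong text with", "marks": ["strong"]},
    {"text": "emphasis", "marks": ["strong", "em"]},
]}
tree = {"type": "paragraph", "content": [
    {"type": "text", "text": "This is"},
    {"type": "strong", "content": [
        {"type": "text", "text": "strong text with"},
        {"type": "em", "content": [{"type": "text", "text": "emphasis"}]},
    ]},
]}

print(depth(flat), depth(tree))    # 2 4
```

More complex styling never deepens the flat model: a run with five overlapping marks is still one entry whose `marks` list has five strings, which is why positions in inline content reduce to simple character offsets.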

1 Comment
2024/11/29
08:15 UTC

1

Machine Scheduling problem - How can I improve a backtracking exhaustive search algorithm?

I have a backtracking algorithm that exhaustively searches for an optimal solution to the "machine scheduling of tasks" problem (basically, there are m machines and n tasks; each task has a duration, and every machine is identical).

I tried implementing some optimisations, such as sorting the tasks vector in descending order before use, and adding a lower bound to prune bad branches instead of continuing to explore them. However, my code still struggles with some larger inputs. This is expected, of course, as it's an exhaustive search algorithm, but I'm struggling with the theory of what kinds of optimisations I could use to improve my search or my backtracking while keeping the optimal result. I do know about algorithms such as Graham's or LPT; however, they won't usually find an optimal solution, so they aren't ideal.

I will put the relevant bits of my algorithm here (do note that it was translated to English using GPT), but only what's necessary, as the rules say less code is better. There are two functions: scheduling and OptimalSolution.

#include <limits.h>  /* INT_MAX */
#include <math.h>    /* ceil, fmax */
#include <string.h>  /* memcpy */

/* Schedule assignment function (mallocSafe and sortIndirectly are helpers not shown) */
void scheduling(int m, int n, int d[], int current_task, int loads[],
                int schedule[], int optimal_schedule[], int *sorted_tasks,
                int current_max_load, int *best_makespan)
{
    if (current_task == n)
    {
        if (current_max_load < *best_makespan)
        {
            *best_makespan = current_max_load;
            memcpy(optimal_schedule, schedule, n * sizeof(int));
        }
        return;
    }

    // Compute remaining total duration and max task duration
    int remaining_time = 0;
    int longest_remaining_task = 0;
    for (int i = current_task; i < n; i++)
    {
        int task_duration = d[sorted_tasks[i]];
        remaining_time += task_duration;
        if (task_duration > longest_remaining_task)
            longest_remaining_task = task_duration;
    }

    // Calculate total assigned time
    int total_assigned_time = 0;
    for (int i = 0; i < m; i++)
        total_assigned_time += loads[i];

    /*This approach ensures that the lower bound is a conservative estimate, accounting for the worst-case scenario
    where the load is as evenly distributed as possible while still considering the longest task and current maximum load.*/
    double average_load = (remaining_time + total_assigned_time) / (double)m;
    int lower_bound = (int)ceil(fmax(current_max_load, fmax(average_load, longest_remaining_task)));

    if (lower_bound >= *best_makespan)
    {
        return; // Prune this branch
    }

    int current_task_duration = d[sorted_tasks[current_task]];

    // Assign tasks to machines
    for (int i = 0; i < m; i++)
    {
        if (i > 0 && loads[i] == loads[i - 1])
            continue; // Skip symmetric states
        // Prune if assignment exceeds current best makespan
        if (loads[i] + current_task_duration >= *best_makespan)
        {
            continue; // Prune this branch
        }

        // Assign task to machine
        schedule[sorted_tasks[current_task]] = i;
        loads[i] += current_task_duration;
        int new_max_load = loads[i] > current_max_load ? loads[i] : current_max_load;

        // Recursive call
        scheduling(m, n, d, current_task + 1, loads, schedule,
                   optimal_schedule, sorted_tasks, new_max_load, best_makespan);

        // Undo assignment (backtrack)
        loads[i] -= current_task_duration;
    }
}


// Function to allocate scheduling
int OptimalSolution(int m, int n, int d[], int optimal_schedule[])
{
    int best_makespan = INT_MAX;

    int *loads = mallocSafe(m * sizeof(int));
    int *schedule = mallocSafe(n * sizeof(int));
    int *sorted_tasks = mallocSafe(n * sizeof(int));

    // Initialize machine loads to zero
    for (int i = 0; i < m; i++)
        loads[i] = 0;

    // Sort tasks in descending order of duration
    sortIndirectly(n, d, sorted_tasks);

    scheduling(m, n, d, 0, loads, schedule, optimal_schedule, sorted_tasks, 0, &best_makespan);
    free(loads);
    free(schedule);
    free(sorted_tasks);
    return best_makespan;
}

1 Comment
2024/11/28
20:53 UTC

2

Direct-mapped cache tag field

I'm currently learning about cache systems at my university and I'm confused about the tag field in direct-mapped caching. As the tag is meant to distinguish between different blocks in main memory that map to the same block in cache, does the tag field in the memory address increment by 1 every (number of blocks in cache) blocks?

Furthermore, all the resources I could find online say that tag field size = total address bits - block (index) field - offset field. That's fair enough, but to calculate the minimum size the tag field could be, would it be: minimum tag field size = log2(main memory blocks / cache blocks)? After all, main memory blocks / cache blocks = the number of main memory blocks assigned to a specific cache block (each of which therefore needs a unique representation in the tag field).
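Both intuitions check out: blocks that share a cache line are spaced one cache-size apart in memory, so the tag does step by 1 once per full pass through the cache, and the two formulas give the same count. A quick sketch with hypothetical sizes (16-bit addresses, 16-byte blocks, 64-line cache; any power-of-two numbers work the same way):

```python
from math import log2

def tag_bits(address_bits, cache_blocks, block_bytes):
    """Direct-mapped: tag = address bits - index bits - offset bits."""
    index_bits = int(log2(cache_blocks))   # selects the cache line
    offset_bits = int(log2(block_bytes))   # selects the byte in the block
    return address_bits - index_bits - offset_bits

memory_blocks = 2 ** 16 // 16              # 4096 blocks of main memory
print(tag_bits(16, 64, 16))                # 6
print(int(log2(memory_blocks // 64)))      # 6, the same answer
```

The agreement is no accident: address bits = log2(memory blocks) + offset bits, so subtracting index and offset bits is algebraically identical to log2(memory blocks / cache blocks).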

1 Comment
2024/11/28
06:02 UTC

3

Designing a leveled, multi-parent/child, tree-like graph for a DB.

This is probably a bad example, but basically, in Digimon, the evolution chain is varied. Any Digimon of a lower tier can become multiple kinds of higher-tier Digimon, and a higher-tier one can originate from multiple lower-tier ones. Thus, it would be ideal to organize such a relationship in a graph. However, such a chain can only go from a lower tier to a higher tier, so it has certain characteristics of a tree. As such, is it possible to find an optimized graph-storage scheme with these layered characteristics? As a real-life example, think about symbolic linking or resource sharing over a network, except with co-ownership, like a shared pointer. That said, the concept of depth is unclear, as certain paths can be shorter from a given root node, and there is more than one potential root.
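What's described is a layered DAG rather than a tree: edges may fan in and fan out freely, and the only tree-like restriction is that every edge goes strictly up a tier. One minimal storage sketch (the Digimon names are just illustrative data) is an edge list plus a tier map, which also dissolves the depth ambiguity, since "depth" becomes a per-node tier rather than a path length from some root:

```python
# Edge list plus a tier map: many parents and many children are allowed.
tier = {"Agumon": 1, "Gabumon": 1, "Greymon": 2, "Tyrannomon": 2}
evolves_to = {
    ("Agumon", "Greymon"),
    ("Agumon", "Tyrannomon"),
    ("Gabumon", "Greymon"),        # Greymon has two parents
}

# Integrity check for the layered property: edges go strictly up a tier.
assert all(tier[a] < tier[b] for a, b in evolves_to)

def parents_of(name):
    """Walk the edge list backwards; cheap for small data, indexable later."""
    return {a for a, b in evolves_to if b == name}

print(sorted(parents_of("Greymon")))     # ['Agumon', 'Gabumon']
```

In a relational DB this is simply a two-column edge table plus a tier column on the node table, with the tier check enforced as a constraint or at insert time.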

0 Comments
2024/11/28
05:15 UTC

0

In depth C Course?

Looking for a good C Lang course that covers lot of the features of C with data structures and algorithms

2 Comments
2024/11/28
02:54 UTC

2

If the start state of a PDA is also an accept state, does the machine accept the empty string?

I was trying to make a PDA for a language that accepts the empty string. To allow for this, I simply made a start state that is also an accept state, figuring that it would non-deterministically accept. However, my prof marked me down, saying that it doesn't. Am I wrong?

6 Comments
2024/11/26
00:33 UTC

2

Z3(or any other SMT solver) as a tool to verify requirements?

Hi all,

I have been reading about Z3 recently, and all of the examples I have stumbled upon so far are of people analyzing source code with it. I was wondering: is it possible to use it to analyze requirements before any code is written? That is, the requirements state that when this and that are true, this other thing will happen; we express them as boolean expressions and find whether there are cases where the relationship does not hold.

Is this even a feasible approach? Does anybody have resources or experience on the topic? I am specifically interested in how one translates a requirement into Z3.
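This is a recognized use of SMT solvers (requirements consistency checking), and it needs no source code at all, only the boolean formulas. Here is the idea in miniature with two hypothetical requirements, done by brute-force enumeration so the sketch is self-contained; Z3 performs the same search symbolically and scales to formulas where enumeration can't:

```python
from itertools import product

# R1: if the door is open and the alarm is armed, the siren sounds.
# R2: if the alarm is armed, the siren stays silent.
def r1(door_open, armed, siren):
    return not (door_open and armed) or siren

def r2(door_open, armed, siren):
    return not armed or not siren

# Is there ANY assignment satisfying both rules in the scenario
# "door open, alarm armed"?  If not, the requirements conflict.
witnesses = [w for w in product([False, True], repeat=3)
             if w[0] and w[1] and r1(*w) and r2(*w)]
print(witnesses)    # [] means R1 and R2 contradict each other here
```

In z3py the same probe is `Solver()` with the requirements and scenario `add`ed, followed by `check()`; `unsat` flags the conflict, and for a satisfiable case `model()` hands you a concrete witness, which is exactly the "find the case where the relationship fails" workflow you describe.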

4 Comments
2024/11/25
20:29 UTC

2

Multi Region Replication: Ordering Issues and Conflicts

I'm trying to understand how conflicts and ordering issues are handled in a multi-region replication setup. Here's the scenario:

  • Let's assume we have two leaders, A and B, which are fully synced.
  • Two writes, wa and wb, occur at leader B, one after the other.

My questions:

  1. If wa reaches leader A before wb, how does leader A detect that there is a conflict?
  2. If wb reaches leader A before wa, what happens in this case? How is the ordering resolved?

Would appreciate any insights into how such scenarios are typically handled in distributed systems!

Is multi-region replication used in any high-scale scenarios, or is leaderless the de facto standard?
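One common mechanism behind both questions (a sketch of the general technique, not what any particular database does): attach a version vector to each write, so ordering and conflict detection reduce to comparing vectors. Writes wa then wb at the same leader are totally ordered by B's counter, however they arrive; only writes from different leaders can be truly concurrent and need resolution.

```python
def dominates(v1, v2):
    """True if v1 has seen everything v2 has."""
    return all(v1.get(k, 0) >= n for k, n in v2.items())

def concurrent(v1, v2):
    """Writes conflict iff neither version vector dominates the other."""
    return not dominates(v1, v2) and not dominates(v2, v1)

# wa then wb, both at leader B: wb's vector dominates wa's, so even if
# wb arrives at A first, comparing vectors recovers the order.
wa, wb = {"B": 1}, {"B": 2}
assert dominates(wb, wa) and not concurrent(wa, wb)

# A write made independently at leader A is concurrent with wa: neither
# dominates, so A detects a genuine conflict and must resolve it
# (last-writer-wins, a CRDT merge, or surfacing it to the application).
wc = {"A": 1}
assert concurrent(wa, wc)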

1 Comment
2024/11/25
16:15 UTC

2

Binary Search Trees: Are the leaves nodes or NULL pointers in the parent node?

From the lecture: "Let's say this is 19, and let's say here it has to be greater than 20, less than 25; let's say 23, 21, 24, greater than 20 but less than or equal to 25. Let's say 29, 31, and then let's say nil, nil, nil, nil, nil, nil, nil, nil; those are going to be the leaves. I will soon stop writing these leaves, but for now I will write these leaves."

I am doing the CU-Boulder Coursera course on Trees and Graphs.

It is confusing why he is referring to the leaves as NIL.

Are the leaves nodes or NULL pointers in the parent node?

see screenshot here https://imgur.com/a/fkkQ4OK

Edit: Unfortunately, the course as with Coursera courses is abandoned and instructors do not reply in the discussion forums. Hence I am posting here. Thanks to all for your help.
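The lecture is using the CLRS-style convention, and the two views can be written side by side (a sketch with a few of the keys from the slide; `Node` and `NIL` are illustrative names):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

# View 1: children are plain null pointers; the bottom key-holding nodes
# (19, 23, ...) are what most people call "the leaves".
# View 2 (the lecturer's): every missing child is an explicit NIL
# sentinel node holding no key; then THOSE are the leaves, and every
# key-holding node counts as internal.
NIL = Node(None)
root = Node(20,
            Node(19, NIL, NIL),
            Node(25, Node(23, NIL, NIL), Node(29, NIL, NIL)))

assert root.left.left is NIL     # view 2: the leaf is the sentinel
assert root.left.key == 19       # view 1: 19 is a leaf (no real children)
```

Both describe the same tree; the sentinel view just makes statements like "all leaves are black" uniform in red-black tree proofs, which is why the slides say NIL.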

7 Comments
2024/11/25
13:07 UTC

13

How to study computer science further after graduation?

I have a Bachelor's in Engineering in Computer Science Degree from my state school and a Masters in IT Management from Western Governor's University. I have a fulltime software engineering job that is work from home. I'm not seeking further degrees or qualifications for employment reasons (would like a PhD in comp sci when I get more settled)

I want to know the best courses / books / well-formulated projects that can provide problem sets and train me in traditional comp sci topics: AI, ML, computer graphics, database technologies (and cross-listed math topics), compilers, system design, low-level systems programming.

Basically, I want to know how the entire stack works, top to bottom. I have watched plenty of videos, but I want to have worked with the science and do as much as I can myself, because that's how I learn best.

15 Comments
2024/11/24
17:42 UTC

2

Seeking Guidance on Tackling Heuristic Problems in Optimization

Hello everyone,

I’m currently exploring ways to approach heuristic algorithms for optimization problems. Specifically, I'm trying to understand how to adapt methodologies like graph coloring to scenarios where constraints and objectives differ significantly. For instance, I recently worked on a reduction task where constraints included:

  1. Ensuring two special elements had distinct assignments while avoiding direct conflicts.
  2. Adding base constraints to ensure all elements participated under certain conditions.
  3. Managing isolated elements dynamically to maintain problem feasibility.

The implementation leveraged a reduction from Graph Coloring. Key steps included:

  • Introducing initial constraints (edges) for specific problem requirements.
  • Dynamically adjusting constraints to handle edge cases, such as isolated nodes.
  • Standardizing input/output to align with a target problem’s format for verification.

While I understand the mechanics of such reductions, I'm curious about broader strategies. When dealing with variations of these types of problems:

  • How do you decide between heuristic approaches like simulated annealing, local search, or a completely new algorithm design?
  • Are there best practices for identifying and managing edge cases dynamically within such transformations?
  • What indicators suggest you might need to redefine your problem as a maximization or minimization task to suit algorithmic strengths?

I’d greatly appreciate insights, especially any general frameworks or practical examples that have helped you approach similar challenges. Looking forward to your thoughts!

0 Comments
2024/11/23
17:06 UTC

3

Scaling LB

For making highly scalable, highly available applications, application servers are put behind a load balancer, and the LB distributes traffic between them.

Let's say the load balancer itself is reaching its peak traffic. Then what? How is traffic handled in that scenario?

4 Comments
2024/11/23
06:49 UTC

0

What language and format is practical for sharing large prime numbers (in the service of pseudorandom number generation)?

In order to generate pseudo-random numbers, I sometimes replicate easy code for calculating very large prime numbers. Typically my code for generating pseudo-random numbers takes up much more digital space than any single prime number.

However, I don't just want to write my code once and enjoy it. I want to share my code over the Internet with other coders who write in different styles, but who recognize the same mathematical limits to number representation.

Main question:

If I want to calculate and share very large prime numbers on the Internet, I have to use some language (possibly MATLAB, Python, R, Java, LISP, etc.) and digital files of some fixed size (e.g. 1 terabyte, 1 petabyte, etc.). This is for educational purposes; I don't plan to learn cryptography. I am not trying to break new ground; I have in the past replicated standard techniques such as Mersenne Twisters.

https://en.wikipedia.org/wiki/Mersenne_Twister

What are the current best practices for calculating and sharing very large prime numbers on the Internet? Is it worthwhile to delve into very specialized skills, such as CUDA? Are all the cool kids using Mathematica?
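On the sharing side specifically, no exotic file sizes are needed: structured primes compress to almost nothing, and arbitrary primes travel fine as plain-text digits that every language you list can parse. A sketch in Python, whose integers are arbitrary-precision out of the box:

```python
# A Mersenne prime 2**p - 1 is determined entirely by its exponent, so
# sharing "127" shares the whole number, whatever language receives it.
p = 127                    # 2**127 - 1 is a known Mersenne prime
m = 2 ** p - 1
print(len(str(m)))         # 39 decimal digits

# For arbitrary primes, the usual interchange format is just the digits
# (decimal or hex) in a plain text file; round-tripping through a string
# is lossless.
assert int(str(m)) == m
```

Even record-scale primes are only tens of megabytes as decimal text, so petabyte-class files never come into it; the expensive part is finding and certifying primality, not storing or sharing the result.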

Thanks in advance.

2 Comments
2024/11/22
22:48 UTC

2

Dashboard Building Difficulty

Alrighty folks, got a difficulty-level question. This isn't a HW question, promise--just a curiosity one based on some trading I've started doing.

I want to create a web app that, on the home page, shows 8-12 widgets. Each widget updates every, say, 5 minutes by pulling the current price of different cryptocurrencies. The trick is that the cryptocurrencies are currently housed on different exchanges, not a single one--meaning each widget might (theoretically) have to pull the info from a different website. It wouldn't need to pull any proprietary data or data locked behind a login screen or anything, just prices displayed publicly on the exchange's home-page. Basically, I want it to save me from having to click through 15 different tabs every 5 minutes to see prices.

I want to do this on a publicly available website so I can share w/ friends and discord members, etc.

How difficult would this be/what sort of platform could this be built on/is there a no-code or low-code way to build this?

8 Comments
2024/11/22
19:48 UTC

0

MOOS-IvP

I'm looking for someone who understands MOOS-IvP, specifically inter-vehicle communication and share. If you have any knowledge of the topic, please DM me.

0 Comments
2024/11/22
08:14 UTC

8

How does BlueSky work?

Just watched a video of BlueSky's CEO talking about how users can just take their data and leave, how everything is open source, how there's "no algorithm", and how developers can contribute. This seems very different from any other kind of social media platform, and either it's all BS, or there's some cool stuff going on under the hood there.

7 Comments
2024/11/22
04:03 UTC

3

Is there a name for a cyber attack where, instead of brute forcing passwords, you brute force emails/usernames?

For example, someone wants to break into numerous social media accounts at the same time. They know the most common password, so instead of trying various passwords of a specific account, they use the single most common password to log into dozens or even hundreds of random users' accounts.

This was partially inspired by https://xkcd.com/792/ where Black Hat notes that password reuse is an exploitable vulnerability

2 Comments
2024/11/21
18:31 UTC

1

Want to know if my method for calculating floating point on 8/16-bit CPUs works

So I was trying to find a way for an 8- or 16-bit CPU to process 32/64-bit floating point. Online I found a bunch of methods that convert to fixed point and such, and I thought that was unnecessary. Example of my method for 32-bit floating point on a 16-bit CPU:

  1. Take the lower 16 bits of the mantissa and treat it as a normal 16-bit integer; perform the math on it, then store it.
  2. Take the upper half and isolate the rest of the mantissa via an AND operation, then perform the math on it and store it.
  3. Isolate the exponent with AND again, modify it if needed, then store it.
  4. Isolate the sign bit and do the operations on it.

What I want to ask is: would this give the same result, or are the conversions to fixed point necessary?
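One detail steps 1-2 gloss over (an illustration of a specific gap, not a verdict on the whole scheme): the two mantissa halves are not independent, because adding the low halves can carry into the high half, the standard multi-word-arithmetic concern. There is also exponent alignment, which must shift the mantissas before any mantissa math, which is what the fixed-point-style methods are really doing. The carry issue in a nutshell:

```python
# Low 16 bits of two mantissas being added on a 16-bit CPU:
lo_a, lo_b = 0xFFFF, 0x0001
lo_sum = lo_a + lo_b        # 0x10000, which doesn't fit in 16 bits
carry = lo_sum >> 16        # must be fed into the upper-half operation
assert (lo_sum & 0xFFFF) == 0 and carry == 1
```

Real 16-bit CPUs handle this with add-with-carry instructions, so processing the halves in sequence (low, then high plus carry) is fine; processing them in isolation, as written, is not.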

6 Comments
2024/11/21
06:33 UTC

1

Automata theory question: how to make a pushdown automaton that accepts words made of 3 parts, where the 3rd part must contain the inverse of the first part?

So it must accept words that look like this: J^(-1)KL, where J, L ∈ {0,3}^(*) and K ∈ {a, b}^(*), |J| > 0, |K| > 0, |L| > 0, and L must be a substring of J (anywhere), but they do not have to be the same.

My issue is this: let's say we have w = "300a03003". I push the "300" onto the stack, then read in the "a".

But then there is a problem. If I read in the rest, "03003", and try to match it against the "003" in the stack, the first "0" matches, but then in order to check the second characters ("3" in the input, "0" in the stack), I have to pop the first "0" from the stack.
Right at this comparison it will fail, since the second letters do not match, and I have already popped the "0" from the stack, so the automaton no longer stores the initial J part.

I feel like I could do this with 2 stacks, but not with only 1.

Any ideas?

12 Comments
2024/11/19
20:51 UTC

2

DMA works like this?

we have different ways to handle data transfer between memory and I/O devices

Polling: the CPU constantly checks (somewhere... the interface? the I/O device? where?) whether any transfer needs to be done; when there is one, the CPU just does the transfer and keeps going (working, and checking whether there are interrupts).

Interrupts: the PIC sends an IRQ to signal an interrupt that wants to take place; the CPU finishes the instruction it is executing and handles the interrupt (depending on the flag: if IF = 1 it handles the interrupt, if IF = 0 it ignores it). If an NMI interrupts, the CPU always takes the interrupt, because it is something important. Is this only for I/O devices? Could an NMI be something non-I/O-related?

DMA: a controller that allows the CPU to keep working while the DMA handles the transfer (data transfer = interrupt?) so that the CPU doesn't lose time. The CPU sends something to the DMA which I don't know what it is; I suspect it sends "instructions on what to do" to the DMA's control area, as well as the amount of data that needs to be transferred between devices (it works as a counter: when it reaches 0, everything has been passed), the addresses of both sides (who gives the DMA this information? when?), and the direction of the data (from point A to B, or from B to A).

At some point the device sends a DMA-REQ ("I'm ready to transfer"?); at some point the DMA sends the device a DMA-ACK ("OK, got your message", or "transfer started"?); at some point the DMA tells the device whether it is going to read or write (I believed it was the other way around).

At the end of everything, the DMA sends the CPU an IRQ telling it that the transfer is done, so it shouldn't worry.

As you can see, I barely understand what's happening (I'm not an EE or CS student, just in a related field, so I only need to know this at a shallow level). If you could correct my understanding and provide a timeline of when all of this happens, I would appreciate it; please keep it simple, and try to use the technical words I used, as I don't know many.

3 Comments
2024/11/19
20:06 UTC

1

Binary search question

code: https://imgur.com/a/P3aLSxf
Question: https://imgur.com/a/zcvo38Y

Could somebody explain the answer to this question in relation to the code?

The answer provided was the one with the check mark, but I'm sort of confused: at the beginning of the algorithm, we don't know what L[b] and L[e] refer to, so they can't be known, meaning the loop invariant is false. Apart from that, I do think the loop invariant becomes true after the 1st iteration.

Any help is appreciated.
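Without seeing the exact slide, the usual resolution goes like this (a sketch, not the course's code): the invariant is the conditional statement "if the key is anywhere in L, it lies in L[b..e]", and the initialization b = 0, e = len(L) - 1 makes that true before the loop body ever runs, because L[b..e] is then the whole list. Nothing about L[b] or L[e] needs to be "known"; the claim is about the range, not those two elements.

```python
def binary_search(L, key):
    b, e = 0, len(L) - 1                 # invariant established right here
    while b <= e:
        # Invariant: if key is anywhere in L, it is inside L[b..e],
        # i.e. it is in neither discarded region.
        assert key not in L[:b] and key not in L[e + 1:]
        mid = (b + e) // 2
        if L[mid] == key:
            return mid
        if L[mid] < key:
            b = mid + 1                  # discard left half, invariant kept
        else:
            e = mid - 1                  # discard right half, invariant kept
    return -1

print(binary_search([1, 3, 5, 7], 5))    # 2
print(binary_search([1, 3, 5, 7], 4))    # -1
```

So the checked answer is saying the invariant holds at every iteration boundary, including iteration 0, because initialization (not the first pass through the body) establishes it.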

6 Comments
2024/11/19
16:08 UTC
