/r/compsci
Computer Science Theory and Application. We share and discuss any content that computer scientists find interesting. People from all walks of life welcome, including hackers, hobbyists, professionals, and academics.
Self-posts and Q&A threads are welcome, but we prefer high quality posts focused directly on graduate level CS material. We discourage most posts about introductory material, how to study CS, or about careers. For those topics, please consider one of the subreddits in the sidebar instead.
Read the original Structure and Interpretation of Computer Programs for free (or see the online conversion of SICP).
If you are new to Computer Science please read our FAQ before posting. A list of book recommendations from our community for various topics can be found here.
T(n) = 27T(n/3) + Θ(n^3 / lg n)
Ans: a = 27, b = 3, k = 3, p = -1
log_b(a) = log_3(27) = 3 = k
Case 2 is satisfied with p = -1, so
T(n) = Θ(n^3 log log n)
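As a check, the case analysis behind this answer can be written out explicitly (extended master theorem, the log_b(a) = k, p = -1 subcase):

```latex
T(n) = a\,T(n/b) + \Theta\!\left(n^{k}\log^{p} n\right),
\qquad a = 27,\ b = 3,\ k = 3,\ p = -1.

\log_b a = \log_3 27 = 3 = k,\quad p = -1
\quad\Longrightarrow\quad
T(n) = \Theta\!\left(n^{k}\log\log n\right) = \Theta\!\left(n^{3}\log\log n\right).
```

Had p been greater than -1, the same case would instead give Θ(n^k log^(p+1) n); the p = -1 boundary is exactly where the log log factor appears.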
Hi, I'm interested in learning computer science. I just want to know whether it requires only basic math or advanced math as well.
I’m taking my first course on operating systems and I’m quite interested in knowing how we went from a bunch of engineers tinkering at Bell labs to where we are today. I’d love to know more about the people involved and the stages of development and changes that OS technology went through over time.
I saw her here and I thought she might be a clerk or a secretary at Bell Labs, but I was very wrong. She is highly decorated. She worked on troff and GNU plotutils. She was one of the earliest people to hold a degree in CS, and probably one of the earliest women in CS as well. Does anyone know more about her beyond what is written on the web?
PS: One of the few women higher-ups at Bell Labs worked on troff, please don't blame me for thinking she was a secretary.
I need a survey or other reference for learning how the Fourier transform and Fourier series are used across theoretical computer science, from basic to advanced. For example, in coding theory, with ε-biased codes, moving to the Fourier basis is helpful on many occasions. Does a survey or other reference on this topic exist?
I find the Reddit debates about whether Computer Science belongs to the realms of science, engineering, or mathematics quite fascinating. Recently, I came across a research paper that I believe provides a very good summary of this topic. I'm curious to hear your thoughts and opinions on this matter, so please feel free to share your perspective. What are your views on this topic?
"We conclude that distinct positions taken in regard to these questions emanate from distinct sets of received beliefs or paradigms within the discipline:
- The rationalist paradigm, which was common among theoretical computer scientists, defines computer science as a branch of mathematics, treats programs on a par with mathematical objects, and seeks certain, a priori knowledge about their ‘correctness’ by means of deductive reasoning.
- The technocratic paradigm, promulgated mainly by software engineers and has come to dominate much of the discipline, defines computer science as an engineering discipline, treats programs as mere data, and seeks probable, a posteriori knowledge about their reliability empirically using testing suites.
- The scientific paradigm, prevalent in the branches of artificial intelligence, defines computer science as a natural (empirical) science, takes programs to be entities on a par with mental processes, and seeks a priori and a posteriori knowledge about them by combining formal deduction and scientific experimentation.
We demonstrate evidence corroborating the tenets of the scientific paradigm, in particular the claim that program-processes are on a par with mental processes. We conclude with a discussion of the influence that the technocratic paradigm has been having on computer science."
References
Amnon H. Eden. (2007). Three Paradigms of Computer Science. https://link.springer.com/article/10.1007/s11023-007-9060-8
A curated reading list for the adversarial perspective in deep reinforcement learning.
https://github.com/EzgiKorkmaz/adversarial-reinforcement-learning
Hi fellow computer scientists,
After some research, I came to the conclusion that both Super-Resolution (SR) and Compressed Sensing (CS) are used for MRI reconstruction. However, I'm wondering whether super-resolution is suited to MRI reconstruction when the goal is to accelerate image acquisition or reduce scan time while maintaining image quality.
I've seen some really good papers that support super-resolution as a technique for MRI acceleration, but I have not yet found a general opinion about this. Is it possible/practical to leverage MRI images reconstructed from undersampled k-space data, and then use super-resolution algorithms to enhance the spatial resolution and generate higher-resolution MRI images?
Won't that imply acquisition acceleration, since we deliberately collect fewer data points in k-space than what would be required for a fully sampled image?
Thank you!
What are your favorite books on those topics? I don't mean textbooks for learning (like Introduction to Algorithms). I've been reading a wide range of literature, from Kafka to philosophy like Nietzsche, and science (such as Gödel, Escher, Bach). So, I'm hoping there are some book enthusiasts here who can recommend interesting books in the fields of computer science, math, and technology. They can be novels, biographies, or non-fiction books.
Thank you!
What are your top 5 books, or which book or books would you recommend to all novice software engineers?
My favorite programming language (Julia) has a standard iteration protocol that must be satisfied by iterators (if you satisfy the protocol, you are an iterator). Sadly, the protocol seems to have been designed without due care (it was apparently redesigned in a hurry before the v1 release), so it has some issues (in some cases hurting the performance of composed iterators in particular). Thus I want to provide a separate and improved interface for iterators (call it IteratorsV2 if you will) as a user package.
One thing in particular that crossed my mind is this: I suppose it would be good to think about supporting different execution policies upfront. For example, a user should be able to choose whether the iteration will be executed in parallel or sequentially. "In parallel" should presumably be further subdivided into threaded execution, GPU execution, etc.
Could someone point me to any possible existing languages that have/support something like this? If there are no examples, why is that?
C++, for example, supports something similar: its execution policies. However, they're limited (user-created execution policies are not allowed) and, it seems to me, unwieldy (the execution policy has to be passed explicitly to each function that does iteration).
I think it would be preferable to:
- somehow enable users to provide their own custom execution policies
- have the execution policy as part of the type of the iterator, so it wouldn't be necessary to pass the execution policy explicitly to every single foreach or mapreduce call.
I'm not sure that this idea of mine is good, though.
I'd also appreciate references to relevant papers or books.
The software development landscape is witnessing a paradigm shift. Developers, in their quest for innovation, often find themselves entangled in the web of routine operational tasks. Low-code platforms are emerging as a beacon of hope, offering a fresh perspective and a transformative approach to the challenges faced by developers. I've delved deep into this topic in my recent article and here are some insights:
Component abstraction in low-code: One of the standout features of low-code platforms is their focus on component abstraction. It allows for a more straightforward expression of business logic, benefiting not just developers but also business analysts and managers. The platforms are designed to be easily understandable, bridging the knowledge gap between different stakeholders.
Reusability and flexibility: Low-code platforms emphasize the reusability of components. This means that developers can easily integrate new functionalities into existing systems without having to build from scratch. This is a significant departure from traditional platforms, where each new functionality often requires extensive coding.
LCDP vs. Code-First Platforms: It's essential to differentiate between Low-Code Development Platforms (LCDPs) like Mendix, AINSYS, Pega, and Appian, and more flexible systems such as WordPress. LCDPs are built around a core of builder elements and related functions, focusing on efficiency and reusability, rather than mere flexibility.
Operational efficiency: One of the most compelling promises of LCDPs is their potential to eliminate or significantly reduce routine tasks. It allows developers to channel their energy into more creative and complex tasks, thereby adding more value to the project.
User-friendly interface: LCDPs are designed to be so user-friendly that even non-developers can participate in the development process. This inclusivity helps keep the final product's quality high, as flawed logic or unclear naming can be quickly detected and corrected.
Streamlining requirement collection: Platforms like AINSYS have built-in requirement-management frameworks that facilitate better communication between developers, analysts, and stakeholders. This makes requirements collection and documentation more efficient, ultimately improving the quality of the software.
For professionals in the IT sector, what are your thoughts on the rise of low-code platforms? Do you see them as a viable solution for the challenges currently facing the software development industry? I would appreciate your insights.
The array shuffle algorithm (Fisher-Yates) is implemented like this:
void shuffle(int *array, int size) {
    /* randint(lo, hi) is assumed to return a uniform integer in [lo, hi], inclusive */
    for (int i = size - 1; i >= 0; --i) {
        int a = randint(0, i);  /* any index up to and including i */
        swap(&array[i], &array[a]);
    }
}
Whereas I would imagine it would be implemented like this:
void shuffle(int *array, int size) {
    for (int i = 0; i != size; ++i) {
        /* with inclusive bounds this must be randint(0, size - 1);
           randint(0, size) would read array[size] out of bounds */
        int a = randint(0, size - 1);
        int b = randint(0, size - 1);
        swap(&array[a], &array[b]);
    }
}
So I have a few questions. Assuming both algorithms operate under true random generators, is one's quality better? Is mine flawed?
I also understand that I double the calls to randint, but in my defense, randint is probably just a couple of CPU instructions: since I always range-reduce to the same [0, size) range, I can precompute the Montgomery multiplication values ahead of time and avoid the modulo, which is the slow part of randint.
Anyway, would mine be okay? Is one faster or better than the other? Why is the second one literally not mentioned anywhere?
The article explores how code generation and code integrity tools work together as a powerful combination for staying ahead and using AI coding assistant tools smartly: Code Integrity Supercharges Code Generation
Code generation tools enable you to code faster. However, they can also create new problems for development teams, like introducing hidden bugs and reducing developers' familiarity with, understanding of, and responsibility for the code.
Code integrity tools verify that the code fits the intent or spec, improve code coverage, improve code quality, and help developers get familiar with the code.