/r/compsci
Computer Science Theory and Application. We share and discuss any content that computer scientists find interesting. People from all walks of life welcome, including hackers, hobbyists, professionals, and academics.
Self-posts and Q&A threads are welcome, but we prefer high-quality posts focused directly on graduate-level CS material. We discourage most posts about introductory material, how to study CS, or careers. For those topics, please consider one of the subreddits in the sidebar instead.
Read the original free Structure and Interpretation of Computer Programs (or see the online conversion of SICP).
If you are new to Computer Science please read our FAQ before posting. A list of book recommendations from our community for various topics can be found here.
Instead of thinking and responding inside a neural black box, could an AI perform calculations using a logic machine after it has understood the meaning?
I need to create an algorithm to crop an image to dimensions that satisfy a couple of different conditions at the same time. For example, the eye level has to be at 60-70% of the image's height and the head should occupy 40-50% of the image height, while keeping a 1:1 ratio and keeping the resolution above 1000px.
(The eye level and face dimensions are already calculated)
What kind of algorithm could find a solution efficiently, if one exists?
I tried brute-forcing (60% eye level + 40% face height, 61% eye level + 41% face height, ...), but as you can imagine it's very slow. I need some help.
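Rather than scanning percentage combinations, each condition can be turned into an interval on the crop's side length and top edge, and the intervals intersected directly. A minimal sketch, assuming eye level is measured from the top of the image (function and parameter names are illustrative):

```python
# Sketch: solve the crop constraints by interval intersection
# instead of brute force. Assumes eye_y and head_h are in pixels,
# measured from the top of the image.

def find_crop(img_w, img_h, eye_y, head_h,
              eye_frac=(0.60, 0.70), head_frac=(0.40, 0.50),
              min_side=1000):
    """Return (top, side) for a 1:1 crop, or None if infeasible.

    The left edge is unconstrained here; any left in
    [0, img_w - side] keeps the 1:1 ratio.
    """
    # Head must occupy 40-50% of the crop -> side in [head_h/0.5, head_h/0.4]
    lo = max(head_h / head_frac[1], min_side)
    hi = min(head_h / head_frac[0], img_w, img_h)
    if lo > hi:
        return None
    side = lo  # smallest feasible side; other values in [lo, hi] could be tried

    # Eye level must sit at 60-70% of the crop height from the top
    top_lo = max(eye_y - eye_frac[1] * side, 0)
    top_hi = min(eye_y - eye_frac[0] * side, img_h - side)
    if top_lo > top_hi:
        return None
    return int(top_lo), int(side)
```

For example, a 3000x3000 image with the eyes at y=1200 and a 450px head yields a 1000px square crop starting at y=500. This runs in constant time, with no percentage scan.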
This was a question in my university's Operating Systems course that generated a fair amount of disagreement about the correct answer. I'm curious what you all think the correct answer is.
Which of the following features is not necessarily a "standard" feature for most of today's OSs?
It has been 15 days since I started learning app dev and I already feel lost. There is just soooo much in this field. When I try to not copy the tutorials and implement a few ideas by myself, I encounter a pile of problems that I just can't get my head around. So if you know about some courses, websites, or YT channels that will help me get the hang of everything, do suggest them. Currently I am interested in sexy fluid animations in my apps and integrating APIs, and these are concepts that are so scattered on the internet that it takes time to find them. If you have resources that can help, do share them.
Need some project ideas for a new fullstack project. If you have any ideas, let me know.
Hi there,
I've created a video here where I explain how cross-validation works and why it is useful.
I hope it may be of use to some of you out there. Feedback is more than welcome! :)
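For readers who prefer code to video, the splitting step of k-fold cross-validation can be sketched in a few lines of plain Python (no ML libraries assumed; the function name is illustrative):

```python
# Minimal k-fold cross-validation split: partition range(n) into k
# disjoint test folds, each paired with the remaining training indices.
def k_fold_indices(n, k):
    """Return a list of k (train_indices, test_indices) pairs."""
    fold_size = n // k
    folds = []
    for i in range(k):
        start = i * fold_size
        end = start + fold_size if i < k - 1 else n  # last fold takes the rest
        test = list(range(start, end))
        train = list(range(0, start)) + list(range(end, n))
        folds.append((train, test))
    return folds
```

You would then train and evaluate a model once per pair and average the k scores; shuffling the indices first is usual when the data is ordered.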
There is a problem of generating a random permutation of [1..N]; for simplicity, N is a power of 2. One way is to use a permutation function F_key(x) that depends on a key and generates a permutation either in the recurrent form {F_key(seed), F_key(seed)^2, F_key(seed)^3, ...} in the case of LCGs, LFSRs, and the like, or in the form {F_key(0), F_key(1), F_key(2), ...} using various cryptographic primitives. This avoids storing the whole permutation in memory on the one hand, and gives a random key as an identifier to pass around on the other.
If we fix a specific permutation function F, then iterating over all possible keys, with every F_key generating a single permutation, gives us a subset of the group of all permutations S_n. I'd like to discuss the landscape, the pros and cons. Is there theoretical analysis among different families, such as polynomial-arithmetic-based functions, substitutions, xorshifts, and various others, that can answer the following questions:
So, I want to discuss: are there definite answers, or is it still ongoing research? I'm hoping to find a family that is easy to adjust for a specific length N and that generates a big enough subset of S_n, with "most" keys producing different permutations. I think that could be useful for others as well.
Thanks for your input.
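One concrete family worth having on the table in such a discussion is the Feistel network: for a domain of size 2^(2m) it yields a keyed bijection for *any* round function, so every key gives a valid permutation without storing it. A toy sketch (the round function here is an arbitrary multiplicative mix, not a vetted cipher, and the constants are illustrative):

```python
# Keyed permutation of [0, 2^(2*half_bits)) via a small Feistel network,
# one concrete instance of the F_key(x) idea from the post.
def feistel_permute(x, key, half_bits=8, rounds=4):
    """Bijectively map x in [0, 2^(2*half_bits)) to the same range."""
    mask = (1 << half_bits) - 1
    left, right = x >> half_bits, x & mask
    for r in range(rounds):
        # Any deterministic round function keeps the map a bijection;
        # this one is just a cheap integer mix keyed by (key, round).
        f = (right * 2654435761 + key + r) & mask
        left, right = right, left ^ f
    return (left << half_bits) | right
```

Bijectivity holds regardless of the round function because each round `(L, R) -> (R, L xor F(R))` is invertible. How large a subgroup of S_n these networks generate as the key varies, and how many keys collide on the same permutation, is exactly the kind of question the families differ on.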
We can still represent any number in binary (1, 10, 11, 100, 101, 110, 111, 1000, etc.), so how come PCs only ever use 2^n sizes?
This is probably a bit off topic but was not sure where else to post.
Here in Aus it seems like the only difference between Software Engineering and CompSci degrees is that you spend a year studying random engineering things. So why does this degree exist?
My best guess would be that historically computer development was an engineering area and that the idea of a "programmer" or "computer scientist" was not a thing until later. Is this right?
Edit: just as a little note, I was not throwing shade at Software Engineering or Software Engineers. My question stems from the two universities I have attended here in Aus: QUT and Deakin. At both the SE degree is just the first year of an Engineering degree and then a copy of the IT/CS degree.
I now know that SE does specialise in different stuff than IT and CS.
Hello. I'm a grad student who got into MS CS at UMich and JHU for Fall '24.
I find computational neuroscience very interesting, but it's relatively new to me. UMich Ann Arbor is ranked highly for CS, whereas JHU is ranked higher for neuroscience. Considering this, which university would you prefer for computational neuroscience, and why?
Another question: how good is this concentration? I understand it's highly research-oriented, but I'm not inclined towards a PhD after graduation. I would like to get an AI/ML engineer (since it's closely related), research scientist, or applied scientist role. I will be taking AI and ML courses, and in addition I thought I'd get into the computational neuroscience concentration. What job opportunities can I expect with this, and is it worth it? Will it open doors to AI opportunities at big tech too?
I was recently working on a project written in P4, an open-source programming language, with a working example of an IPv4 router. The code was working fine. However, I need some help modifying it to route IPv6 and to filter out TCP/UDP packets that do not have a specific port. I'll be adding screenshots of the working P4 code written for IPv4.
TL;DR: Embedding models are pre-trained using contrastive learning. Hierarchical clustering is then used to carve the embedding space to recognize different individuals.
Here is a visual guide covering the technical details: How Apple Uses ML
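A toy sketch of the clustering half of that pipeline, with hand-made 2-D points standing in for contrastive embeddings (pure Python, single linkage; not Apple's actual implementation):

```python
# Agglomerative (hierarchical) clustering: repeatedly merge the two
# closest clusters until n_clusters remain. Each final cluster would
# correspond to one recognized individual.

def agglomerative(points, n_clusters):
    """Single-linkage agglomerative clustering down to n_clusters."""
    clusters = [[i] for i in range(len(points))]

    def dist(a, b):  # squared Euclidean distance between two points
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def link(c1, c2):  # single linkage: closest pair across clusters
        return min(dist(points[i], points[j]) for i in c1 for j in c2)

    while len(clusters) > n_clusters:
        a, b = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda p: link(clusters[p[0]], clusters[p[1]]))
        clusters[a].extend(clusters[b])
        del clusters[b]
    return clusters

# Two synthetic "individuals": embeddings near (0, 0) and near (5, 5)
emb = [(0.0, 0.1), (0.1, 0.0), (0.2, 0.1), (5.0, 5.1), (5.1, 4.9)]
groups = agglomerative(emb, 2)
```

The appeal of the hierarchical approach for this use case is that the number of individuals is not known in advance: the merge tree can instead be cut at a distance threshold.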
What is the fastest model architecture that supports inpainting/outpainting with reasonable quality?
Does anyone know if there is an inpainting/outpainting pipeline with SDXL Turbo?
Is it possible to recreate a logical short-term memory (like the human one that takes part in major intelligence workloads) by recording every single neuron's KV state from an AI like a log, and then retrieving it by fetching?
An AI's memory capacity shouldn't depend on neurons naturally re-creating a reflex that occurred before. It should be managed by a local memory unit (and hence could be read and written), because a memory unit has unique, near-perfect attributes for memorizing things (i.e., it won't forget, and retrieval/fetching is fast, with high read speed). Using such a unit as logical memory for an AI could significantly improve reasoning quality, speed, etc.
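As a toy illustration of the idea above, here is an external, writable memory log that never forgets and retrieves by exact key; this is a sketch of the concept, not any real system's design:

```python
# External memory unit an agent could write observations to and read
# back later, independent of what the network's weights retain.
class MemoryLog:
    def __init__(self):
        self._store = {}

    def write(self, key, value):
        self._store[key] = value  # persists until overwritten; O(1) update

    def read(self, key, default=None):
        return self._store.get(key, default)  # fast exact-key retrieval
```

The hard part the post is gesturing at is not the store itself but deciding *what* to write and *which* key to fetch, which is where the neural side would still be needed.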
Greetings /r/compsci,
For those with a keen interest in the convergence of computer science and advanced artificial intelligence technologies, today’s AI Roundtable Twitter Space event is not to be missed. Tau's CTO, Ohad Asor, will join a select group of AI experts to discuss the future and implications of decentralized AI systems.
Why This is Important:
Event Details:
This session promises rich discussions on the theoretical and practical aspects of decentralized AI, offering valuable insights for students, researchers, and professionals in computer science.
Share this with anyone passionate about the future of compsci and AI. Your engagement can help shape the conversation around these pivotal technologies.
Hope to see you there!
Cheers, The Tau Team
In my experience, people don't learn about synchronization at school, and it's often seen as an advanced topic of sorts, mostly for people doing multithreaded work. Even engineers working with MT don't really think that much about it; with luck they will care about locking/unlocking the mutexes in the right order. But it's not just MT: synchronization problems are everywhere. I just encountered incorrect behaviour because dbus messages were exchanged in a different order than expected, and a friend just told me that this is a common problem in microservices architectures.
How do you reason about synchronization? I found out about order theory; is that a good framework to model such problems, or is something else needed?
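One concrete order-theoretic tool for exactly this kind of reasoning is the vector clock: it makes Lamport's happens-before relation a computable partial order, so "these two events are concurrent" (neither ordered before the other) becomes a checkable property rather than an assumption. A minimal sketch:

```python
# Vector clocks: each process keeps one counter per process. On a local
# event, increment your own slot; on receive, merge with the sender's
# clock. Comparing clocks then gives the happens-before partial order.

def merge(a, b):
    """Combine two vector clocks (element-wise max)."""
    return [max(x, y) for x, y in zip(a, b)]

def happens_before(a, b):
    """True iff event with clock a causally precedes event with clock b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

# Process 0 sends a message after its 2nd event; process 1 receives it
send_clock = [2, 0]
recv_clock = merge([0, 1], send_clock)  # process 1's clock after receive
```

The dbus and microservice bugs mentioned above are cases where code implicitly assumed a total order on events that the partial order does not actually guarantee.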
So, I started DSA in C++. I stumbled upon a course on DSA (from GFG; a friend gave it to me, yes pirated, since it's too costly), and it's long. Now, when it came to web development, I was quite the note-taking pro, but it's a whole different story tackling DSA.
Videos just 7-10 minutes long are taking me a solid hour or two to digest fully! I've tried speeding up the lectures to 1.5x, but here's the kicker: I end up pausing every few seconds just to jot down notes. It's a never-ending cycle of rewind, pause, write, repeat. And I get it, notes are essential, but man, it's slowing me down big time.
I am an undergraduate student in statistics and mathematics, and recently I saw a problem where you have to prove that for a given algorithm A, no better algorithm B exists; that is, algorithm A has time complexity no worse than every algorithm B. But how does one go about proving something like this? I don't have any idea. In maths, whenever you have to prove that such a thing doesn't exist, you basically use proof by contradiction. But how do you attack such a problem for algorithms? If you can refer me to any paper where such a thing has been proven, or any method or approach computer science generally uses to tackle this type of problem, that would be useful.
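A classic worked instance of such a proof is the comparison-sorting lower bound, and it illustrates the general strategy: instead of arguing about one algorithm, bound *every* algorithm in a restricted model. Any comparison-based sort is a binary decision tree whose leaves are the possible output orders, so it needs at least $n!$ leaves, and a binary tree of height $h$ has at most $2^h$ leaves:

```latex
% Decision-tree lower bound for comparison sorting:
% h worst-case comparisons can distinguish at most 2^h outcomes,
% but all n! input orders must be distinguished.
\[
2^{h} \ge n! \quad\Longrightarrow\quad h \ge \log_2 (n!) = \Omega(n \log n).
\]
```

Hence no comparison-based algorithm beats $O(n \log n)$ comparisons in the worst case; mergesort matches the bound, so it is optimal in that model. Most non-existence proofs in algorithms follow this pattern: fix a model of computation, count the information any algorithm in that model must extract, and derive a bound that holds for all of them (adversary arguments are the other common technique).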
I am a last-year student of an engineering degree, and I must do a bachelor's thesis. While I am not betting my whole future on getting a job in motorsport, it has been a dream of mine for quite a while, mainly in formula or endurance racing. So, I was thinking about doing something related to this field. My idea was to do something with data analysis and use it for a prediction system, or something along those lines. Any ideas? Any help is appreciated!
Hi guys,
I recently watched a video clip of Terry Davis where he mentioned that TempleOS doesn't use common executable formats like ELF or PE. Which format does it use and what information is present in that format (symbol tables, etc.)?
Also, are there any guidelines for executable format design?
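Not TempleOS-specific, but as a feel for what an executable format's header typically records, here is a sketch that reads the identification bytes of an ELF file (the magic number, word size, and byte order; field offsets follow the ELF specification):

```python
# Peek at the first bytes of an executable to see header metadata,
# using ELF as the familiar example of executable format design.

def read_elf_ident(path):
    """Return (is_elf, bitness, endianness) from a file's e_ident bytes."""
    with open(path, "rb") as f:
        ident = f.read(16)  # e_ident is the first 16 bytes of an ELF file
    is_elf = ident[:4] == b"\x7fELF"          # magic number
    bitness = {1: 32, 2: 64}.get(ident[4])    # EI_CLASS
    endian = {1: "little", 2: "big"}.get(ident[5])  # EI_DATA
    return is_elf, bitness, endian
```

Beyond identification, a format like ELF then describes where code and data segments should be loaded, entry point, symbol tables, and relocation info; those are the kinds of decisions any new executable format (TempleOS's included) has to make.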