/r/SoftwareEngineering
Software engineering is the process of analyzing user needs and designing, constructing, and testing end user applications that will satisfy these needs through the use of software programming languages. It is the application of engineering principles to software development.
I'm struggling to understand the differences between these things. People seem to use them interchangeably. Below is what I think so far.
Requirements: Written from the client's perspective. Establishes what functionality is required in a list.
Specifications: Translation of the requirements using more technical language. Another list.
Scope: Paragraphs stating what is included in the project (isn't that what requirements do?) and what isn't. Used to establish boundaries.
Any help would be appreciated.
I am looking to improve our operations as a software agency.
How do you collect requirements and change requests so that you can estimate them? These usually live in a document that precedes the SOW.
How do you track changes to these requirements and to the scope?
Hi
I studied DDD in college and from then on I became very interested in the topic. I don't have difficulty with concepts such as layered software, repositories, factories, etc., but I do have difficulty designing the domain model.
I decided to start reading Eric Evans' book, the blue book. I'm almost halfway through the book, but I think it's very difficult to read and it's not helping me. My professor talked about another book, Implementing Domain Driven Design by Vaughn Vernon, which is easier to read, with more examples.
I would like to know tips or resources to know how to improve in this aspect.
Although I know that many programmers are not completely in favor of DDD, I think that knowing how to design a good domain model is important for any object-oriented architecture.
Thank you
Hey
I'm trying to convince my team to focus on code quality. I've found 1k-line files and 300-line methods, we don't have automated tests of any kind, and I'm really worried about it. I want some papers or statistics to support my proposal.
Thanks for your help.
Hey folks. At the company I work for, we're running a migration to move data from an unstructured schema in a legacy database to a structured one in a new database.
Our stack is basically Node.js + Prisma ORM.
We have an endpoint that fetches 200 projects from the legacy database, runs a set of transformations (to make the data adhere to the new structure), then saves them into our new database. During the transformation, we also query other databases.
The whole process for a batch of 200 projects takes around 30s, and considering we have 100M+ projects to migrate, at that pace it'd take about 173 days to complete this migration even if the script ran 24/7. So we need to optimize this.
We have 2 ways to optimize this:
Previously, we fetched the 200 projects and then iterated over each one, running the transformations. Literally this:
for (const project of projects) {
await runTransformation(project);
}
Then, we tried parallelizing it with Promise.allSettled:
Promise.allSettled(projects.map(runTransformation))
But that quickly leads to a "Maximum SQL connections reached" error. I don't think increasing our pool size is a reliable approach - our max is 100; I tried with a batch of 30 and still got this error.
Wonder if you guys have any ideas on how to approach that?
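One common middle ground between the fully sequential loop and the unbounded Promise.allSettled is to cap concurrency below the connection pool size. Libraries like p-limit package this up; a minimal hand-rolled sketch (assuming runTransformation returns a promise — mapWithConcurrency is a hypothetical helper, not part of the poster's code) could look like:

```javascript
// Sketch: run transformations with a fixed concurrency cap so the
// number of simultaneous DB connections stays bounded.
async function mapWithConcurrency(items, worker, limit) {
  const results = new Array(items.length);
  let next = 0;
  // Start `limit` runners; each repeatedly claims the next unprocessed index.
  // Node is single-threaded, so `next++` between awaits is not a data race.
  const runners = Array.from({ length: Math.min(limit, items.length) }, async () => {
    while (next < items.length) {
      const i = next++;
      try {
        results[i] = { status: 'fulfilled', value: await worker(items[i]) };
      } catch (err) {
        results[i] = { status: 'rejected', reason: err };
      }
    }
  });
  await Promise.all(runners);
  return results; // same shape as Promise.allSettled
}

// Usage: at most 10 transformations in flight at once, e.g.
// await mapWithConcurrency(projects, runTransformation, 10);
```

Picking a limit comfortably below the pool maximum (and remembering that one transformation may hold several connections if it queries multiple databases) is usually what keeps the "max connections" error away.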
I've been reading up on unit testing, and I was reminded of the existence of TDD. I could never for the life of me take it seriously. But apparently it has a swarm of supporters who I struggle to believe actually adhere to it as much as they say they do. I'm not even sure if people follow it properly, because in TDD you're supposed to write your test, then code to pass the test, ONE TEST AT A TIME.
But even if I write all my tests before implementation, imo TDD is, at best, just as good as implementing before testing.
Tests will inevitably depend on implementation. The first thing you learn about unit testing is edge cases. That's not a "behavior" or "interface", that's an implementation detail.
Unless the problem you're solving is so simple that you can see ahead of time what the code will look like, you'll inevitably refactor your code as you write it. This means renaming, changing arguments, and deleting or creating methods and classes. That means rewriting your tests as well. That's wasted time.
I think there's value in testing, but doing it backwards makes no sense to me.
I am completely rewording this post.
I am working in a programming language that directly supports the fundamentals of OO programming, or at least some of them. The language permits method calls on objects. It permits putting a method or other data directly on an object, dynamically. There is native support for single inheritance from object to object.
The language provides its entire self, as could be used for coding procedures to run at runtime, also for one-time execution when modules are loaded. So, the entire dynamic language is available as the static language, so to speak. As a result, programmers can build up objects as artifacts of programming. They can call procedures at "compile" time just as their procedures can call other procedures at "runtime".
In OO programming, it is common to set up a pattern of some kind, that can be applied to make objects at runtime that share some common behavior.
In classic OO programming, like in Smalltalk or Ruby, this pattern finds its specification or realization in the terms of the language as a class. In Smalltalk, a class is realized as in fact an object that belongs to another class. A class is conceptually distinguished from an object that is an instance of the class. The class and the instance are expected to exhibit very different behaviors. A class has class methods and instance methods and it is the latter that inform the instances.
In the Self language, the usual way, more or less, to establish in a program, a pattern out of which instances can be made at runtime, is with a pair of objects, one of which provides the shared instance behavior, and the other is a prototype, and it can be asked at runtime to clone itself, and the clone is the new object. If I understand the Self culture correctly (it's a bit hard to research), these two objects have distinct pathnames, but ending in the same name. The programmer has to use one of these names when adding methods, and the other when obtaining an instance.
Back to my project. I said the underlying language in which I am working, and in which users of my little library will also be working, provides basic OO functionality. I just want to add a slightly higher-level way for programmers to define behavior and command the creation of instances. I don't want the programmers to have to use two names, as they do in Self. I want to use one reference for the thing to which programmers can add methods and demand an instance. And I want to provide support for it to be possible to ask an instance to clone itself; it will return a shallow copy of itself.
Are readability and maintainability better served, in your opinion, if there is just one operation name that is to be applied either to an instance that can clone itself or to a parent object that will provide shared behavior, to get a new instance or clone? Or different names depending on whether the receiver is a parent object (e.g., "new") or not (e.g., "clone")? Is it confusing and deceptive to say we are going to ask the shared-behavior object for a "clone" and, instead of giving us a shallow copy of itself with all the method references duplicated, it gives us an object that inherits the methods?
Original post:
Title:
In object-oriented programming, for most engineering benefit, is it better for client code to have to distinguish between cloning a prototype and instantiating a class?
Original body:
or should I be able to pass an object that could be either a prototype or a class to the client code, so the client code could request the same operation in either event, and the result would be a clone or an instance?
And if so, should the operation be called "clone", "new", "create", "make", or something else?
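The "one name" option can be sketched in any prototype-friendly language. In JavaScript, for instance (all names here — make, parent, greet — are illustrative, not from the post):

```javascript
// A single operation, `make`, that can be sent to either the
// shared-behavior parent or any instance it produced.
const parent = {
  greet() { return 'hi'; },              // shared behavior
  make(props = {}) {
    // Delegation semantics: the result *inherits* from the receiver.
    return Object.assign(Object.create(this), props);
  },
};

const a = parent.make({ x: 1 });         // an "instance" of parent
// a.greet() works via inheritance; a.x is an own property.

const b = a.make();                      // "cloning" an instance
// This is the trade-off the question raises: b sees x through
// delegation rather than owning a shallow copy of it. A true shallow
// copy would instead be something like:
//   Object.assign(Object.create(Object.getPrototypeOf(this)), this)
```

Under the uniform name, the same call yields an inheriting child in both cases, which is exactly the potentially "deceptive" behavior asked about; splitting the names (new vs. clone) lets each carry its own copy-vs-delegate contract.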
FYI: Approved by mods
Hi all,
I’m a Computer Science student conducting a study aimed at examining the transformation in productivity and well-being among software developers as they transitioned from office-based work to remote environments during the COVID-19 pandemic, and how these aspects have continued to evolve to the present day.
If you have worked or currently work remotely in any of the following periods, then please do complete the survey!
Pre-March 2020
March 2020 - August 2021
August 2021- Present day
The survey should take 5-15 minutes depending on which of these time periods you choose to answer for.
The survey can be found here: https://york.qualtrics.com/jfe/form/SV_9oCDIpBZCSZgzWK
Thank you!
Hi, I'm writing some software in Python 3 to plot signals and data on a graph. I'm using several libraries to help, including Pandas, NumPy, and Matplotlib.
I started by using the MVC design pattern to handle the GUI, however I would like to know if there are any other patterns that I could incorporate to do things like convert from Pandas Data Frame to NumPy nd-array, or just allow the program to interpret different data structures and display them in a consistent fashion.
I also am not sure how I would handle an undefined number of datasets simultaneously with the MVC design pattern.
Currently, I can only plot one graph using Matplotlib, but I'd like that to be unlimited. I'd like to be able to display a set of data in tabular format, select two columns to plot as the x and y axes of a Matplotlib graph, then do the same with another set of data (or generate data with a math function), and perform some math operation on the displayed graphs to output a new, final set of data (also displayed as a graph).
Apologies if my description is somewhat vague, but I was hoping someone might be able to spot some design patterns that could be useful to make the software more modular/reusable/efficient.
Any suggestions would be much appreciated, even if they are asking for more specifics on what I'm trying to achieve.
Thanks for reading :)
When companies ask for the time complexity of a function, they normally expect O(something), which by interpretation is really Θ(something). Big O (an upper bound) is not always big Θ (a tight bound).
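For reference, the standard asymptotic definitions make the distinction above precise:

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c\,g(n) \text{ for all } n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c\,g(n) \ge 0 \text{ for all } n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
```

So, for example, it is technically true (if unhelpful) to say insertion sort is O(n³), since O only promises an upper bound; Θ(n²) for the worst case is the tight claim interviewers usually mean.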
🚀 Hi there! If you are a software developer we would love to hear your story! 🚀
My peers and I are investigating how the job of software development affects physical health. If you are a software developer, we would love to hear your story! Please consider contacting me through direct message, or directly book a convenient 30-minute call through this calendly link: https://calendly.com/jbiggio/research-interview
Your contribution will help to better understand the matter as part of a university project! Thank you for your time!
I’m planning on delivering a tech talk to my team on the pitfalls of explicitly returning nulls in production code, as opposed to using optionals where the language supports it or throwing exceptions when the value is expected to be present.
To make sure I’m not presenting an overly biased view, and to avoid getting blind-sided if someone raises a point I hadn’t considered, I want to hear examples of times you would actually prefer to explicitly return null.
Edit: Since some were curious and I neglected to specify, our team works predominantly in Java so we do have the Optional interface available to us. I have also worked with Go a bit and tbf I did like the ability to have multiple return values in the case of errors etc. I also don’t mind how Swift/Kotlin handle optionals and unwrapping them, I believe they handle it in a similar way.
Summary was created with Recall