/r/SoftwareEngineering
Software engineering is the process of analyzing user needs and designing, constructing, and testing end user applications that will satisfy these needs through the use of software programming languages. It is the application of engineering principles to software development.
đź‘‹ Hey everyone!
I’m building a free directory of self-published books by indie authors. In my experience, indie books often provide more practical, real-world insights than those from big publishers like O’Reilly or Packt.
So far, I’ve curated 50+ books, which you can explore here: https://indieverse.dev
If you’ve come across any other great self-published books or courses, feel free to share them in the comments. I’d love to include them in the directory and make it an even better resource for the community! 🚀
Would you say software engineering is as math-intensive as CS or data science? And if you had to rank them, which is the least and which is the most math-intensive?
24-year-old software developer (turning 25 in 16 days) who feels lost in his career. 3 years of experience at a small company, graduated from college 2 years ago.
I have a job at a small company; my main activities are user technical support, administrative tasks, and software development, mostly with NodeJS.
The thing is that... I feel kind of lost along the way, if that's a way to put it (sorry for my bad English, my native language is Spanish).
I don't feel passionate about the projects I'm working on right now, not because they aren't interesting, but because I feel kind of depressed about my capabilities on them. I feel dumb, incapable of producing something of quality.
I want to do some personal projects to build up my portfolio and for personal use in my day-to-day, but I can't seem to find the courage or motivation to do them. And when I'm one step away from starting those projects, I get lost, not knowing where to begin. I also feel kind of bad thinking about how AI is gaining ground in the industry; I find myself thinking, what's the point?
So here I am, asking for advice.
How can I start again? What can I do to improve my skills in software development? Am I asking vague questions? I hope not...
This talk was given by Adam Wathan back in 2017 at Laracon US, a Laravel convention. My senior showed me this concept, which I believe is quite powerful. I know it's from a Laravel convention, but the concept can be applied in any other framework. It simplifies controllers, even though it may create more of them. I'd like to hear your thoughts.
Anyway, here's the link to the video: CRUDdy by Design
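For anyone who has not watched the talk, here is a rough, condensed sketch of the idea as I understand it. The route, class, and column names are hypothetical and it assumes a recent Laravel version, so treat it as an illustration rather than code from the talk: a custom "activate" action becomes its own resource whose controller only uses the standard CRUD action names.

<?php

use App\Models\User;
use Illuminate\Support\Facades\Route;

// Before: a custom, non-CRUD verb bolted onto the user controller.
// Route::post('/users/{user}/activate', [UserController::class, 'activate']);

// After: "activation" is its own resource, handled with standard CRUD actions.
Route::post('/users/{user}/activation', [UserActivationController::class, 'store']);
Route::delete('/users/{user}/activation', [UserActivationController::class, 'destroy']);

// Controller kept in the same snippet only for readability.
class UserActivationController
{
    // Activating a user = creating an activation.
    public function store(User $user)
    {
        $user->update(['active' => true]);

        return back();
    }

    // Deactivating a user = deleting the activation.
    public function destroy(User $user)
    {
        $user->update(['active' => false]);

        return back();
    }
}

The trade-off mentioned above is visible even in this sketch: two extra routes and an extra controller, but each controller stays tiny and predictable.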
Incomplete Meta Tags are Unprofessional
TL;DR: Incomplete or null meta tags break functionality and user experience.
When you leave meta tags unfinished, such as {user_name} or {product_name}, they often sneak into your final output. Imagine sending an email that says, "Hi {user_name}, your order for {product_name} is ready."
It screams unprofessionalism and confuses users.
Null values worsen things by causing crashes or silent failures, leading to bad user experiences or broken processes.
You can avoid this by asserting completeness before rendering or sending.
When your code finds an incomplete meta tag or a null value, stop the process immediately and throw an exception.
<?php
$emailBody = "Hello {user_name},
your order for {product_name} is confirmed.";
// You forget to make the replacements
sendEmail($emailBody);
<?php
$emailBody = "Hello {user_name},
your order for {product_name} is confirmed.";
if (strpos($emailBody, '{') !== false) {
    throw new Exception(
        "Incomplete meta tags found in email body."
    );
}
sendEmail($emailBody);
Detection: [X] Automatic
You can detect this smell with automated tests or linters that scan for unfinished placeholders ({} or similar patterns).
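As a concrete illustration of such a check, here is a small sketch using a plain regular expression; the {snake_case} placeholder pattern and the function name are my own assumptions, so adapt them to whatever your templates actually use.

<?php

// Fail fast if any unreplaced {placeholder} survives into the rendered output.
// The pattern assumes placeholders look like {user_name}; adjust as needed.
function assertNoUnresolvedPlaceholders(string $rendered): void
{
    if (preg_match_all('/\{[a-z0-9_]+\}/i', $rendered, $matches)) {
        throw new RuntimeException(
            'Unresolved placeholders found: ' . implode(', ', $matches[0])
        );
    }
}

assertNoUnresolvedPlaceholders("Hello Jane, your order for a desk lamp is confirmed."); // passes silently
assertNoUnresolvedPlaceholders("Hello {user_name}, ...");                               // throws RuntimeException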
Level: [X] Beginner
Your system must maintain a one-to-one mapping when representing user data with placeholders.
You break this mapping if your {user_name} placeholder exists but lacks a corresponding real name.
This causes errors, confusion, and a loss of trust in your application.
Ensuring bijection compliance avoids these issues.
AI tools sometimes introduce this smell when generating templates with placeholders but fail to substitute real data.
You must validate and complete all placeholders before using the output.
AI tools like linters or email rendering validators can detect unfinished meta tags if you configure them correctly.
Use these tools to automate meta-tag detection and reduce human error.
Remember: AI Assistants make lots of mistakes
| Without Proper Instructions | With Specific Instructions |
| --- | --- |
| ChatGPT | ChatGPT |
| Claude | Claude |
| Perplexity | Perplexity |
| Copilot | Copilot |
| Gemini | Gemini |
Incomplete meta tags are more than just sloppy—they're harmful. Validate tags, assert completeness, and throw exceptions when needed.
Handling meta tags carefully prevents errors and ensures a professional experience.
Code Smell 139 - Business Code in the User Interface
Code Smell 97 - Error Messages Without Empathy
Null: The Billion Dollar Mistake
Code Smells are my opinion.
Photo by Tomas Martinez on Unsplash
The best error message is the one that never shows up.
Thomas Fuchs
Software Engineering Great Quotes
This article is part of the CodeSmell Series.
I just read this article, and it really made me think about all the hate Scrum gets. A lot of the problems people have with it seem to come down to how it’s being used (or misused). Like, it’s not supposed to be about micromanaging or cramming too much into a sprint—it’s about empowering teams and delivering value.
The article does a good job of breaking down how Scrum can go off the rails and what it’s actually meant to do. Honestly, it gave me a fresh perspective.
Curious to hear how others feel about this—is it a broken system, or are we just doing it wrong?
The 5 Whys technique is a simple problem-solving method used to identify the root cause of an issue by repeatedly asking "Why?"—typically five times or until the underlying cause is found. Sakichi Toyoda, founder of Toyota Industries, developed the 5 Whys technique in the 1930s. It is part of the Toyota Production System.
Starting with the problem, each "why" digs deeper into the contributing factors, moving from surface symptoms to the root cause. For example, if a machine breaks down, asking "Why?" might reveal that it wasn’t maintained properly, which might be traced back to a lack of a maintenance schedule. The technique helps teams focus on fixing the core issue rather than just addressing symptoms.
I don’t use 5 Whys nearly as much as I should since it irritates stakeholders, but every time I have, the results have been excellent. What has been your experience? Do you use similar techniques to find and fix core issues rather than address symptoms?
I am trying to get my head around event streaming, streams, and how best to organize them.
Of course the answer is "it depends", but here is a "theoretical" example:
Most important criteria: reliability and speed
Most important fact: all endpoints produce data irregularly, but the fastest endpoints produce data every 20 milliseconds
Let's assume we have the following:
300 Devices with some protocol - Wind-Sensor-Data (id, wind speed, wind direction, etc.)
300 Devices with some protocol - Temperature-Sensor-Data (id, temperature, temperature-unit, humidity, etc.)
300 Devices with some protocol - Light-Sensor-Data (id, status, consumption, etc.)
300 Rooms containing the 300 temperature and 300 light sensors - Room-Data (id, door-status, window-status, ac-status, etc.)
For simplicity let’s say we have the following scenario:
PointService1: gets data from Wind-Sensors 1-100, Temperature-Sensors 1-100, Light-Sensors 1-100, and Rooms 1-100 and produces that data to a stream/streams.
Then ControlService & StationService & LoggerService consume that data (all consumers need the same data).
PointService2: gets data from Wind-Sensors 101-200, Temperature-Sensors 101-200, Light-Sensors 101-200, and Rooms 101-200 and produces that data to a stream/streams.
Then the same ControlService & StationService & LoggerService consume that data (all consumers need the same data).
PointService3: gets data from Wind-Sensors 201-300, Temperature-Sensors 201-300, Light-Sensors 201-300, and Rooms 201-300 and produces that data to a stream/streams.
Then the same ControlService & StationService & LoggerService consume that data (all consumers need the same data).
Considerations:
Considering that Redis, for example, can handle up to 2^32 keys (4'294'967'296), I most likely won't run into any limitations when creating streams for every wind, temperature, light, room, etc., if I want to.
Considering I can read from multiple streams, I can bundle less important streams into a single thread if I want to save resources.
Considering the number of devices/rooms per PointService won't be dynamic, but an additional PointService with additional devices might be added at some point.
Questions:
Do I create one stream for all device/room data and differentiate by the content (StreamEntry) sent (1 stream)?
Do I create one stream per PointService (1-3) and differentiate by the content (3 streams)?
Do I create one stream per endpoint type (Wind, Temperature, Light, Room) and differentiate by the content (4 streams)?
Do I create one stream per device/room (1200 streams)?
More importantly, what if I want to stream set points back to all the devices via the PointServices (1-3) (considering the system load and the stream/filter work on the consumer side)?
One stream per PointService?
* Note: Each message or entry in the stream is represented by the StreamEntry type. Each stream entry contains a unique ID and an array of name/value pairs.
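Not part of the original question, but to make one of the options concrete: below is a minimal sketch of option 3 (one stream per endpoint type), with one consumer group per consuming service so ControlService, StationService, and LoggerService each independently see every entry. The StreamEntry note suggests a different client library; purely to keep the code examples in this thread in one language, the sketch uses the phpredis extension, and stream, group, and field names are made up (signatures assume a recent phpredis 5.x).

<?php

// Sketch only: one Redis stream per endpoint type, shared by every PointService.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// --- Producer side (e.g. PointService1) ---------------------------------
// The device id travels in the entry fields, so all PointServices write to
// the same four streams and consumers can still filter per device.
$redis->xAdd('stream:temperature', '*', [
    'device_id'   => 'temp-042',
    'temperature' => '21.5',
    'unit'        => 'C',
    'humidity'    => '40',
]);

// --- Consumer side (e.g. ControlService) --------------------------------
$streams = ['stream:wind', 'stream:temperature', 'stream:light', 'stream:room'];

// One consumer group per consuming service, so ControlService, StationService,
// and LoggerService each receive every entry independently. Creating a group
// that already exists raises BUSYGROUP, so real code would guard against that.
foreach ($streams as $stream) {
    $redis->xGroup('CREATE', $stream, 'control-service', '0', true); // true = MKSTREAM
}

while (true) {
    $entries = $redis->xReadGroup(
        'control-service',
        'control-worker-1',
        array_fill_keys($streams, '>'), // '>' = only entries this group has not seen yet
        100,                            // at most 100 entries per stream per call
        20                              // block for up to 20 ms, the fastest producer interval
    );

    if (!$entries) {
        continue;
    }

    foreach ($entries as $stream => $messages) {
        foreach ($messages as $id => $fields) {
            // ... process $fields for this $stream ...
            $redis->xAck($stream, 'control-service', [$id]);
        }
    }
}

For the set-point question, one option along the same lines would be an additional stream per PointService (say, a hypothetical setpoints:point-service-1) that each PointService consumes with its own group and forwards to its devices, but that is only one of several reasonable layouts.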
Functional requirements define the “what” of software. Non-functional requirements, or NFRs, define how well it should accomplish its tasks. They describe the software's operational capabilities and constraints, including availability, performance, security, reliability, scalability, data integrity, etc. How do you approach eliciting, understanding, and documenting non-functional requirements? Do you use frameworks like TOGAF (The Open Group Architecture Framework), the NFR Framework, ISO/IEC 25010:2023, IEEE 29148-2018, or others (Volere, FURPS+, etc.) to help with this process? Do you use any tools to help? My experience has been that NFRs, while critical to success, are often neglected. Has that been your experience?
I am solving a problem where I have a very large dataset of unstructured data. It would usually be accessed frequently to get customer info and to analyze trends across different groups. I need to make this access optimal.
Real-time analytics is not a requirement. We would usually query and validate data across weeks or months. What are the best ways to access data from databases to compute queries optimally?
Goal-Oriented Requirements Engineering (GORE) is an approach to requirements engineering that focuses on identifying, analyzing, and refining stakeholders' goals into detailed system requirements. Please tell me about your experiences using GORE in your projects—what methodologies (e.g., KAOS, i*, GRL) and tools (e.g., OpenOME, jUCMNav, Enterprise Architect) have you used, and how effective have they been in aligning requirements with stakeholders' objectives? Did using GORE improve the clarity of requirements and overall project success?
I have been thinking about something I have always read: to compute the composite availability when depending on two parallel services, we multiply their availabilities. E.g. Composite Cloud Availability | Google Cloud Blog
I understand this comes from probability theory, where assuming two services are independent:
A = SLA of service A
B = SLA of service B
P(A and B) = P(A) * P(B)
However, besides assuming independence, this treats SLAs like probabilities, which they are not.
Instead, to me what would make sense is:
A = SLA of service A
B = SLA of service B
DA = Maximum % of downtime over a month of A = (100 - A)
DB = Maximum % of downtime over a month of B = (100 - B)
Worst-case availability over a month when depending on both A and B (i.e., if their downtimes never overlap) = 100 - DA - DB = 100 - (100 - A) - (100 - B) = A + B - 100
For example:
Example 1
99.41 * 99.71 / 100 = 99.121711
vs
99.41 + 99.71 - 100 = 99.12
Example 2
75.41 * 98.71 / 100 = 74.437211
vs
75.41 + 98.71 - 100 = 74.12
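Restating the two calculations above in general form (my notation: write the availabilities as fractions a = A/100 and b = B/100), the gap between multiplying and adding works out to exactly the product of the two downtimes:

\[
  a b - (a + b - 1) = (1 - a)(1 - b) \ge 0
\]

So multiplying always gives a result at least as high as adding, and the two only coincide when at least one downtime, (1 - a) or (1 - b), is close to zero, which matches the GeoGebra observation that follows.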
I see that the results are similar, but not the same. Playing with GeoGebra I can see they are only similar when at least one of the availabilities is very high.
[Plot: SLA B = 99.99; X axis is the availability of A; availability X*B (red) vs X+B-100 (green)]
[Plot: SLA B = 95.3; X axis is the availability of A; availability X*B (red) vs X+B-100 (green)]
Why do we multiply instead of doing it as I suggest? Is there something I am missing? Or is it simply done like this for simplicity?
I thought of this idea for a data structure, and I'm not sure if it's actually useful or just a fun thought experiment. It's a linked list where each node has an extra pointer called prev_median. This pointer points back to the median node of the list as it was when the current node became the median.
The idea is to use these prev_median pointers to perform something like a binary search on the list, which would make search operations logarithmic in a sorted list. It does add memory overhead since every node has an extra pointer, but it keeps the list dynamic and easy to grow like a normal linked list.
Insertion and deletion are a bit more complex because you need to update the median pointers, but they should still be efficient. I thought it might be useful in situations like leaderboards, log files, or datasets where quick search and dynamic growth are both important.
Do you think something like this could have any real-world use cases, or is it just me trying to reinvent skip lists in a less elegant way? Would love to hear your thoughts...
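For reference, a bare-bones sketch of what such a node could look like (my own guess at the shape, in PHP 8 syntax, not a full implementation; how to keep prev_median correct on insertion and deletion, and how exactly a search would walk those pointers, is left open, since that is the crux of the idea):

<?php

// Hypothetical node shape for the "median pointer" linked-list idea above.
// Only the structure is sketched; the update and search logic is the open question.
class MedianListNode
{
    public mixed $value;
    public ?MedianListNode $next = null;

    // Points to the node that was the median of the list at the moment
    // this node itself became the median, as described in the post.
    public ?MedianListNode $prevMedian = null;

    public function __construct(mixed $value)
    {
        $this->value = $value;
    }
}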
We're working on documenting an FDA De Novo pre-market submission, one requirement of which is a software requirements specification (SRS) document. We're creating this new for the filing, for already existing software. Until now we've been working from a design control matrix (DCM) as our source of truth. No one on our small team is very experienced with writing SRS.
So far I understand that the SRS normally has a highly abstracted list of functional requirements, which the DCM would derive from, the DCM being responsible for defining more explicit and verifiable requirements. Then of course there's the (also required) software design specification (SDS) which goes into implementation details.
The FDA though seems to be asking for very well defined requirements within the SRS. The following comes from their guidance in this document:
The software requirements specification document should contain a written definition of the software functions. It is not possible to validate software without predetermined and documented software requirements. Typical software requirements specify the following:
- All software system inputs;
- All software system outputs;
- All functions that the software system will perform;
- All performance requirements that the software will meet, (e.g., data throughput, reliability, and timing);
- The definition of all external and user interfaces, as well as any internal software-to-system interfaces;
- How users will interact with the system;
- What constitutes an error and how errors should be handled;
- Required response times;
- The intended operating environment for the software, if this is a design constraint (e.g., hardware platform, operating system);
- All ranges, limits, defaults, and specific values that the software will accept; and
- All safety related requirements, specifications, features, or functions that will be implemented in software.
This leads me to believe that they expect the SRS to be much more granular than it normally would be. Reading this, I would think that if I were documenting a requirement for (say) user authentication, I would need to explicitly define all expected API responses, their status codes, their bodies, and also constraints on both the user and password request (input) fields, and potentially even details on the method by which the authentication happens. It also sounds like it would need to be more exhaustive than normal, covering all functions of the software, not just the broad requirements.
That's fine if that's the case; it just doesn't line up with my initial understanding of the SRS as an abstract document of functional requirements, normally intended to be written before any work has started. I feel many of these details will depend on our specific implementation choices, which would belong in the SDS instead.
What I'm thinking of doing so far is exactly what I've described above: very detailed requirements, with references to relevant design outputs where applicable for traceability. With that in mind, any input would be hugely appreciated.
Wrote down a useful revelation I had. Here is the full write up. ———
Software is short lived. The world of software moves fast and even great code quickly goes out of date. This is a problem because the constant change would at times rob me of my job satisfaction. There is something inherently comforting in knowing your work lasts.
The planting
This normally was not top of mind for me. I thought I was satisfied with my day to day work. But that was called into question when I had to plant a tree. The work was not as cognitively taxing as writing software. But the air was hot and humid and the actual digging was slow and laborious. The planting directions that came with the tree were specific on the dimensions of the hole and the composition of the soil mix. Getting the hole to meet the specs was more taxing than I care to admit.
I was not alone in this endeavor. I had my spouse there to complement my failing cognitive abilities as my physical energy waned. She would keep the soil mixture precise and keep me on track to finish before dusk. It was hard work but probably good for my body to move after sitting at a desk all day. Upon completion of the hole, I triumphantly picked up this thin arborvitae from the grass and stuck it into the ground with the zest of an explorer planting his flag into a newly discovered land. We straightened the trunk and layered the earth back over the root bulb. A job well done.
The epiphany
As I stood back with my spouse admiring our work a rush of satisfaction ran over me. It was unexpected. I took a moment to reflect on why I was feeling this way. I realized this tree could be there for the next 50 years. I can look out at it every day and watch it grow tall. My friends and family will probably play in it. It will be in the backdrop of our lives for a long time. That thought was satisfying.
The Change
I can’t plant a physical tree every day. But how can I get this feeling more, especially from my work? I try to focus on things that will last. The software probably won’t, but the trust I build with a customer after solving their problem can. The relationship that can be born out of that trust can persist as long as I hold up my end. Teaching another engineer to solve a problem is rewarding. But knowing that problem can be gone from their life forever is a type of tree. I try to focus on the lasting outcomes I can provide instead of the fleeting software changes. So plant trees that last, they are there if you look. Your mental health may thank you.
SWEBOK V4.0 is the newest edition of the internationally acclaimed Software Engineering Body of Knowledge. This guide, crafted by top experts and rigorously reviewed by industry professionals, is designed to be a dynamic and evolving resource. It has been made available for public review and feedback, maintaining its 20-year tradition as the definitive and most trusted reference for software engineering professionals.
https://ieeecs-media.computer.org/media/education/swebok/swebok-v4.pdf