/r/SoftwareEngineering
When I started using a language where exceptions were the primary intended way to handle errors (C#), I thought they were great. No more if statements scattered around the code after every function call.
I understand these days the idea is to use an error object instead of a simple integer result code, but there's no reason you couldn't return a struct in C and do the same thing.
I don't understand why people don't like exceptions. I shudder when I think about going back to result validation after every function call. Why do people want to go back to cluttering up the code?
Also, how often are people doing fine-grained error recovery? I mean actual recovery, not just using a default value if a request or some other operation failed. The vast majority of the time (I'd say 95%+), your line of processing is dead and the error should just propagate up the chain to a higher-level handler.
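To make the contrast concrete, here is a minimal sketch of the two styles (written in PHP just for illustration, with hypothetical functions): with result objects every caller has to check and forward the failure, while with exceptions the failure travels straight to one handler further up the chain.

// Style 1: result objects checked after every call.
function loadUserResult(int $id): array {
    // Pretend this talks to a database and fails.
    return ['ok' => false, 'value' => null, 'error' => 'connection refused'];
}

function renderProfileResult(int $id): array {
    $user = loadUserResult($id);
    if (!$user['ok']) {
        // Every caller has to repeat this check-and-forward step.
        return $user;
    }
    return ['ok' => true, 'value' => 'Profile of ' . $user['value'], 'error' => null];
}

// Style 2: exceptions; intermediate callers need no error plumbing.
function loadUser(int $id): string {
    throw new RuntimeException('connection refused');
}

function renderProfile(int $id): string {
    return 'Profile of ' . loadUser($id);
}

try {
    echo renderProfile(42);
} catch (RuntimeException $e) {
    error_log($e->getMessage()); // the single higher-level handler
}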
I am a software engineer with 3-4 years of experience, and I feel that I struggle with gathering and clarifying requirements when talking to clients, colleagues, or stakeholders. I find it difficult to ask the right questions and fully understand the project scope without explicit instructions. However, when someone provides clear directions, I have no issues implementing the solution.
Can anyone provide actionable advice on how I can improve my requirement-gathering skills, particularly in the context of client communication and user story creation? Additionally, are there any books, videos, or other resources you would recommend to help me enhance this aspect of my career?
Hello! I have to write an assignment in college using primary sources to explain how writing/communication skills are important in the field I want to work in (Software engineering). I thought why not make a survey and see if a couple of kind people would like to give it a quick go? It's 10 questions and doesn't collect emails or anything like that. Here is the link to the survey if anybody would like to fill it out: https://forms.gle/hvG5w4bSshPXkDkf9
Thank you guys! (If you have any questions feel free to ask away, I'll try to respond pretty quickly)
(I did get permission from the mods to post this by the way)
So I have been thinking about the people in the industry who create software requirement specifications, dataflow diagrams, user flow diagrams, module-driven designs, functional and non-functional requirements, and then define user personas and whatnot.
Do MNCs, startups, and other enterprise companies in the IT industry actually follow this process before developing software for a client or a product?
A few months ago, I learned about best practices in software engineering and various design patterns in university. Concepts like cohesion and coupling, the Single Responsibility Principle, etc., were emphasized repeatedly.
Currently, I’m practicing by creating class diagrams for hypothetical programs, and I’ve come across a question I’m not sure how to answer.
Let’s say there’s a certain value that needs to be computed, and depending on the situation, there are different algorithms to calculate this value. In most cases, I only need two values: int a and int b. So, the method signature in the interface would look like this:
int calculateValue(int a, int b)
Based on the specific algorithm, these two values would be processed in some way. However, let’s say there’s one special case where the algorithm also needs a third parameter: int c.
Of course, I could modify the interface method signature to this:
int calculateValue(int a, int b, int c)
But in doing so, I’d be passing the parameter c to all classes implementing the interface, even when they don’t need it. This feels wrong because, in our course, we were taught that only the necessary parameters should be passed to a function or method: nothing more, nothing less. So, is it justifiable to pass the third parameter to all the classes that don’t actually need it?
Moreover, what if I extend the program later, and a new algorithm requires an additional field for its calculations? Changing the interface method signature again would violate the Open-Closed Principle.
Or is the issue more fundamental, and do I need to completely rethink my design approach?
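To make my question more concrete, one direction I've considered is a parameter object, so the interface signature stays stable even if a new algorithm needs extra data (rough PHP 8 sketch, all names made up):

class CalculationInput {
    public function __construct(
        public int $a,
        public int $b,
        public ?int $c = null  // only some algorithms read this
    ) {}
}

interface ValueCalculator {
    public function calculateValue(CalculationInput $input): int;
}

class SumCalculator implements ValueCalculator {
    public function calculateValue(CalculationInput $input): int {
        return $input->a + $input->b; // ignores c entirely
    }
}

class WeightedCalculator implements ValueCalculator {
    public function calculateValue(CalculationInput $input): int {
        return $input->a + $input->b * ($input->c ?? 1); // the special case
    }
}

But I'm not sure whether this just hides the same problem, since implementations still receive fields they ignore.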
Thank you in advance for your help!
TLDR: is sbom_with_vex_as_cyclone_dx_json too long?
I named a variable in our code sbom_with_vex_as_cyclone_dx_json.
Someone in the code review said that I should just call it sbom_json, which I find confusing, since then I do not know whether the file itself is in the CycloneDX or SPDX format and whether it contains the VEX information or not.
He said that a variable name should never be longer than 4 words.
In the appendix of the book Clean Code (page 405) I also found quite a long variable name: LEAP_YEAR_AGGREGATE_DAYS_TO_END_OF_PRECEDING_MONTH
I personally learned in university that this is acceptable, since it is better to be descriptive, and only in older languages like Fortran was the length of a variable name even a practical constraint.
The same thing with this variable of mine:
maximum_character_length_of_dependency_track_description_field=255
I could have used 255 directly but I wanted to save the information why I am using this number somewhere and I did not want to use a comment.
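To show what I mean, here are the three options side by side (illustrative snippet only, not the real code):

// Option 1: magic number; the reason for 255 is lost.
$description = substr($description, 0, 255);

// Option 2: shorter name plus a comment carrying the context.
$max_description_length = 255; // Dependency-Track limits the description field to 255 characters
$description = substr($description, 0, $max_description_length);

// Option 3: the long name; the context lives in the identifier itself.
$maximum_character_length_of_dependency_track_description_field = 255;
$description = substr($description, 0, $maximum_character_length_of_dependency_track_description_field);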
I can understand that it is painful to read, but you do not have to type it out if you use IntelliSense or copy-paste. I want to force the reader to take their time here when they try to read the variable name, because it is complicated.
I just merged my code without applying his feedback.
What do you think about it? Am I the a××h×le?
Do you guys/gals who are doing any form of micro-service architecture plan and report at the granularity of the service?
I have been in several projects recently where the work items (Jira) ultimately span half a dozen or more services.
For some reason this seems like it takes all the hardship of "systems integration" and places it onto individual developers. To complete the ticket the developer might have open changes in 6 or 7 services. In order to raise a "Pull request" they have to raise 6 or 7. Rather than monitor one pipeline and merge incoming changes to one build/deploy branch they have to monitor 6 or 7. When the work is accepted they have to fight and merge all 6 or 7 in the correct order, while there are another 2 teams all trying to do the same in "master".
It would seem more practical to try to split the work items on a per-service basis. While that is practically impossible to achieve completely, it still seems worth aiming for the premise of "single service = single developer" per "SOW".
What are your thoughts? Is this not one of the mainstay advantages of micro-service architecture, that each service is small enough for a single developer to work within? Encapsulating, dividing and isolating complexity is what makes that possible, and it then facilitates parallel development across services to achieve a "SOW complete".
I suppose the downsides are going to be in designing your micro-services and architecture to easily facilitate this. Work items coming in from upstream will need to be broken down by seniors into a set of service tickets and those service tickets sequenced such that the feature branches can be advanced and sync up for releases.
At my company we struggle to share technical knowledge between different projects. I personally believe there's a heavy element of company culture involved, but I'm curious how other companies incentivise knowledge sharing and what tools can be helpful: internal forums, communication tools such as Zoom or MS Teams, an internal Stack Overflow? What do you use in your company that you feel works well? Thank you.
So in my job I have to document all 8 current projects by the end of the year. They are all functional and there is information about them, but it's mostly scattered and redundant, like a bunch of digital post-it notes.
My team uses Confluence, so I have to use it as well. My question is: are there any methodologies/frameworks/design patterns I could follow to do it? I need to pitch a format for the docs soon so it can be approved and I can start working on them.
(I volunteered for this, so I'm not exactly having a bad time; this needed to be done eventually, but I want to do it right. This is not a case of abuse of power or anything of the sort.)
A question for test engineers: how do you design test cases? Assume you have a functional requirement like: the system shall send an email to the customer as purchase confirmation.
What is your approach? Any material to study? Thanks.
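For example, for that one requirement, would you end up decomposing it into cases like this (rough PHPUnit-style skeleton; all names are made up)?

use PHPUnit\Framework\TestCase;

final class PurchaseConfirmationEmailTest extends TestCase
{
    public function testEmailIsSentToTheCustomerAddressAfterASuccessfulPurchase(): void
    {
        // happy path: one purchase, one confirmation email to the right address
        $this->markTestIncomplete('sketch only');
    }

    public function testEmailContainsTheOrderNumberAndPurchasedItems(): void
    {
        // content check: the email actually confirms this purchase
        $this->markTestIncomplete('sketch only');
    }

    public function testNoEmailIsSentWhenThePaymentFails(): void
    {
        // negative case derived from the same requirement
        $this->markTestIncomplete('sketch only');
    }
}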
I'm currently developing a code review process for a client and had a question about code review standards and checklists. If you've done code reviews in the past, I'd love to hear your thoughts. Specifically, do you think the following checklists are sufficient:
Or should the client consider creating their own custom code review checklist?
How does your team handle this? What checklist do you use?
Hi my colleague and I are having a debate about something and I wanted to get other opinions.
Suppose I have a class Foo. And in this class there is some hash like this (this is php but whatever):
private const PRODUCT_CODE_TO_THUMBNAIL = [
'abc' => 'p1.jpg',
'def' => 'p2.jpg',
'ghi' => 'p3.jpg',
];
Then elsewhere in the code this hash is used to, say, create a response that has a list of products in it with the appropriate thumbnail. E.g. some JSON like:
{
    "products": [
        {
            "product": "abc",
            "thumbnail": "p1.jpg"
        }
    ]
}
Okay, now let's say we've got a unit test class FooTest, and we want to have a test that makes sure that the thumbnail in a response is always the appropriate one for the product. E.g. we'd want to make sure product 'abc' never ends up with a thumbnail other than 'p1.jpg'.
Question: is it better to:
1. Make PRODUCT_CODE_TO_THUMBNAIL accessible from FooTest, so both the code and the test are using the same source of truth, or...
2. Give FooTest its own copy of PRODUCT_CODE_TO_THUMBNAIL and use that as the expected value.
My colleague does not like having two sources of truth like in option 2. But I like option 2 for the following reason:
Let's say somebody changes a thumbnail value in PRODUCT_CODE_TO_THUMBNAIL to an incorrect value. If both are using the same source of truth, this would not get caught and the test would fail to do its job. So by giving FooTest its own copy, we are basically taking a snapshot of the 'source of truth' as it is today. If it ever changes (either on purpose or by accident), we will catch it. If it was by accident, the test did its job. If on purpose, it just means we have to update the test.
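Concretely, option 2 would look something like this (PHPUnit-style sketch; Foo::productsResponse() is a made-up accessor for the response shown above):

use PHPUnit\Framework\TestCase;

final class FooTest extends TestCase
{
    // Deliberate duplicate of the mapping: a snapshot of today's source of truth.
    private const EXPECTED_THUMBNAILS = [
        'abc' => 'p1.jpg',
        'def' => 'p2.jpg',
        'ghi' => 'p3.jpg',
    ];

    public function testEveryProductGetsItsExpectedThumbnail(): void
    {
        $response = (new Foo())->productsResponse();

        foreach ($response['products'] as $product) {
            $this->assertSame(
                self::EXPECTED_THUMBNAILS[$product['product']],
                $product['thumbnail'],
                "Wrong thumbnail for product {$product['product']}"
            );
        }
    }
}

With option 1, the same test would effectively be comparing the constant against itself.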
I suppose it could matter how often that value might be expected to change. If it happens often, then having to update the unit test might become a hassle. But in my particular case, it would not be expected to change often, if ever even.
I am preparing a presentation for my team about the importance of keeping a low amount of work in progress. An important reason to keep your work in progress low is to keep low lead and cycle times for your tickets. Currently we have a lead time of about 158 days and a cycle time of 103 days. Intuitively this seems very high, but I can't find any "recommended" values for these metrics. What would be a good lead & cycle time? I assume it will also depend on the type of project. But let's say that we have a cloud product that is in production and we do some bug fixes and some improvements. We're working with three teams of 5 developers on it.
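(The link itself comes from Little's Law: average cycle time = average WIP / average throughput. So with throughput held constant, cycle time scales directly with WIP; as purely illustrative arithmetic, cutting WIP by a third would bring our ~103-day cycle time down to roughly 70 days, whatever the "recommended" absolute value turns out to be.)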
What would be a good cycle and lead time according to you and is there any literature you can recommend?
Are there any resources out there for averages of cost per line of code? I've heard some numbers, but without any context. I would like to understand how we compare to the industry.
Edit: Thanks to those who've posted already. For some context I'm not intending to use this information raw but was interested if it even existed. Yes I'm aware that SLOCs are not a good way of measuring developer or team performance, but I understand that this kind of thing used to be measured. I was hoping that there is some of this data recorded somewhere in studies or journals. Just looking for links or books thanks
Some context about me: I've been a software developer for 2 decades
Recently, I faced a reality that left me shocked. We started exploring what Allure Test Ops can do and how it could be integrated into our development process so that this tool moves from the category of "Testers' Spellbook" to the category of "Just another tool alongside GitLab / Jira / etc., which everyone uses daily." Btw, I really like the tool itself (not an ad). I've watched many YouTube videos with ideas on how to rethink the separation between manual and automated testing to make something more natural, and Allure contributes to this to the fullest. So, what surprised me?
Test cases are linked to tickets, but not to requirements! To explain my pain, let me ask first: what quality are we concerned about? From what I see in the market, one thing is obvious: ticket quality (!!!). All integrations are built on the idea that everything strives to be linked specifically to a Jira ticket, as if it were the source of knowledge about the product, though it isn't. When working on a product, what primarily concerns us is the quality of meeting the product's requirements. It's the requirements that capture expectations, and "success" is precisely hitting your client's expectations. So, what is the role of the ticket then?
In my view, features, bugs, and any other types of issues that one might encounter are like the diff between the old state of requirements and the new state of requirements (as in Git), or a discovered non-compliance with current requirements. It turns out that by changing or validating requirements, we create tickets, and moreover, by keeping requirements up-to-date, we can generate tickets semi-automatically as a consequence of changes/validations of expectations. Even though Requirements Management tools (such as Requirement Yogi) have long existed, I hardly see any integrations with them (except perhaps from Jira).
It seems that development is doomed to "bad requirements" simply because the process starts with a derivative component of them - tickets. We only fully realize the sum total of the requirements when we rewrite the product's specification, which, generally speaking, resembles reverse engineering of something you already had access to - absolute madness.
Why do we focus so much on tickets but not on requirements?
Does static analysis have to be done on the same platform that software compilation is targeting? I have software that is intended to compile on rhel9, but (for reasons) I am interested in scanning that software on a rhel7 machine, is that a valid static analysis scan? I can use the bdf or compile command json that compilation on rhel9 yields, I can also set the SA tool to use the same version of GCC that would be used in the rhel9 machine. My question is, do you lose validity in your SA scan if you aren’t doing it in the same environment that the software would be compiled in (but choosing the same compiler tool chain). Thanks for any insight!!