/r/softwarearchitecture
Dive into discussions on designing, structuring, and optimizing software systems. Share insights on architectural patterns, best practices, and real-world experiences.
Hi everyone! I recently started learning about Domain-Driven Design and am trying to model a registration workflow for an imaginary event hosting platform. I'm considering two different options. The first, very dogmatic option is as follows:
I distinguish between four bounded contexts involved here. The registration starts in the Platform Management Context, which represents the frontend and takes care of authentication. It then gets posted to the Activity Context, which checks whether the event even exists and does other validation on the activity. Next it travels to the Membership Context, which checks whether the user is authorized to register for the event. Finally it ends up at the Registration Context, where the information gets stored in the database. Also see the picture below.
The other option is to just access the tables from the other contexts directly in the Registration Context and do all the checks within one query to the database.
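To make the first option concrete, this is roughly how I picture the Registration Context talking to the others, via narrow interfaces rather than their tables (just a sketch, all names made up):

import java.util.UUID;

// Made-up ports into the other contexts; Registration never touches their tables.
interface ActivityContext {
    boolean activityExists(UUID eventId);
}

interface MembershipContext {
    boolean mayRegister(UUID userId, UUID eventId);
}

interface RegistrationStore {
    void save(Registration registration);
}

record Registration(UUID userId, UUID eventId) {}

class RegistrationService {
    private final ActivityContext activities;
    private final MembershipContext memberships;
    private final RegistrationStore store;

    RegistrationService(ActivityContext activities, MembershipContext memberships, RegistrationStore store) {
        this.activities = activities;
        this.memberships = memberships;
        this.store = store;
    }

    void register(UUID userId, UUID eventId) {
        if (!activities.activityExists(eventId))
            throw new IllegalArgumentException("unknown event");
        if (!memberships.mayRegister(userId, eventId))
            throw new IllegalStateException("user may not register for this event");
        store.save(new Registration(userId, eventId)); // Registration writes only its own data
    }
}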
Some pros/cons I've been able to identify: the first option ensures each bounded context is only responsible for its own data access, promoting separation of concerns, which is ideal for larger applications. It does, however, put more stress on the database connection by making more requests. The second option seems more efficient and easier to implement, which makes it a sensible starting point.
My main question is: do the benefits of implementing the first option outweigh its efficiency issues? And what would be the preferred option 'in the real world'?
Of course, this is all very framework and infrastructure dependent as well, so I would like to restrict the problem to a conceptual perspective only (if that’s even possible).
I would love to hear from people who have experience with implementing DDD in production, thanks!
I’m working on a large enterprise project where we have Angular for the front end. We are implementing a BFF for the web API that will interact with other API services that are private in the Azure network.
Question: What are your thoughts and opinions on using a well-defined API Response schema for responses from the BFF back to the web client (Angular)?
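For reference, by "well-defined schema" I mean something like a uniform envelope (a minimal sketch; field names are made up):

// A minimal sketch of a uniform BFF response envelope; field names are made up.
public record ApiResponse<T>(
        boolean success,     // quick flag for the Angular client
        T data,              // payload on success, null on failure
        String errorCode,    // machine-readable error identifier
        String errorMessage  // human-readable detail for logging/toasts
) {
    public static <T> ApiResponse<T> ok(T data) {
        return new ApiResponse<>(true, data, null, null);
    }

    public static <T> ApiResponse<T> fail(String code, String message) {
        return new ApiResponse<>(false, null, code, message);
    }
}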
I wrote a 300+ page book that arranges architectural patterns into a kind of inheritance hierarchy.
Download (52 MB): PDF EPUB DOCX
The trouble is that the major publishers rejected the book because of its free license, so I can rely only on peer-to-peer promotion. Please check the book out and share it with your friends if you like it. If you don't, I'll be glad to hear your ideas for improvement.
The guide below explores end-to-end (E2E) software testing, emphasizing its importance in validating complete code functionality and integration. It shows how E2E testing simulates real-world user scenarios, contrasting it with unit and integration testing, which focus on isolated parts of the code: End-to-End Software Testing: Overcoming Challenges
I'm studying the software architectures of scaled services to understand them as a product manager. It would be great if anyone knows of any resources on how Amazon Pay works internally.
Long-time lurker. I've been on since Kevin Rose kicked it off when he renamed Digg to Reddit /s. I wanted some thoughts on an integration package I created that bypasses the SaaS and infra-heavy orchestration models.
I had an idea in early 2023: if I scaled down an integration server to something the size of a postage stamp, I could solve the Saga Pattern by turning the problem inside out. I experimented until I landed on a pattern that puts the database in the middle, with stateless integration servers at the edge.
You just install the package on any microservice and point to a database. It's NPM over Terraform.
The approach felt novel enough that I decided to re-implement Temporal.io from the ground up (servers, clients, everything) using this approach. It took me about 9 months of late-night sessions after the kids were asleep, but I’m happy with the outcome and hopeful that my serverless, router-based approach proves useful to someone. Here's a 1 minute video showing the side-by-side.
For now, I’m putting out a TypeScript beta and will implement other languages and databases once I’ve heard some feedback. The long-term goal is to provide infrastructure simplicity, with an Operational Database at the center and NPM packages punching above their weight at the edges.
It's services without the platform tax.
In the domain layer, do I only have a basic repository with persistence methods like delete, add, update, and so on?
In the app layer with CQRS, for commands I understand only using domain-specific repositories, because usually it's as simple as creating, updating, or deleting, which can be defined in the domain repository (although in some places I've also read that you should separate persistence logic from the domain, and maybe have a separate repository layer).
For queries, however, I don't need to communicate with the domain repository, right? Would it not make more sense to leave all the read operations to the application, and not even have simple read methods such as getOneById() in the domain repository?
And if that's so, how would you structure your project and lay out the directories in a way that makes sense and is understandable? (I understand every project is different and it 'depends', but there are usually still some templates you follow when structuring your projects.)
For CQRS queries, as far as I know there are "queries", which are basically request DTOs that carry the information needed to query data, and "query handlers", which orchestrate the query logic like a use case. But how do you go about defining database querying methods for complex reads, and the DTOs/read models they return? Where do you keep all that in your structure, and how do you go about it?
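For example, something like this is what I picture (a rough sketch of what I mean, all names made up):

import java.util.List;

// The query is just a request dto carrying the criteria.
record GetUserByCriteriaQuery(String email, String role) {}

// The read model is shaped for the view, not for the domain.
record UserReadModel(String id, String email, String displayName) {}

// Query-side port; the implementation in infrastructure can use raw SQL,
// an ORM projection, a read replica, whatever reads fastest.
interface UserReadDao {
    List<UserReadModel> findByCriteria(GetUserByCriteriaQuery query);
}

// The handler orchestrates like a use case and never touches the domain repository.
class GetUserByCriteriaQueryHandler {
    private final UserReadDao dao;

    GetUserByCriteriaQueryHandler(UserReadDao dao) {
        this.dao = dao;
    }

    List<UserReadModel> handle(GetUserByCriteriaQuery query) {
        return dao.findByCriteria(query);
    }
}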
I would like some guidance on how this is done conventionally.
Thanks in advance!
I work on an Angular application on the job. It's on Angular 16 and communicates with a Spring Boot app on the backend via a gRPC API for all requests/responses.
The application loads a large amount of data, and this data can change every few minutes. When a change happens, users currently have to hit a reload button on the main component to refresh the data shown on the UI.
The downstream systems that own the data can send notifications when data changes. I'm thinking I could put a caching layer in between that caches the data relevant to the app and also subscribes to the change notifications, so that my UI can keep refreshing without the reload button.
I think I will continue to use gRPC for the initial load and then start a WebSocket connection with the caching layer, maybe?
My questions for making this work:
How does the UI communicate with the backend? A hybrid of gRPC for the bulk initial load and then WebSockets for realtime updates (see the sketch below for the push side I'm imagining)? Or just WebSockets overall? Anything else?
What technology or data store can I use for the intermediate caching layer to serve the realtime updates to the UI?
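For the push side, something like this is what I'm imagining (a rough Spring sketch; it assumes Spring's STOMP/WebSocket broker support is configured, and ChangeEvent is a made-up type):

import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Service;

// Made-up payload describing what changed downstream.
record ChangeEvent(String entityId, String payload) {}

@Service
public class ChangePushService {
    private final SimpMessagingTemplate template;

    public ChangePushService(SimpMessagingTemplate template) {
        this.template = template;
    }

    // Called by whatever subscribes to the downstream change notifications
    // (message listener, polling loop, etc.).
    public void onDataChanged(ChangeEvent event) {
        // Push only the delta; the client merges it into the data loaded via gRPC.
        template.convertAndSend("/topic/data-updates", event);
    }
}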
Thank you!
Do you use sequential INT primary keys in your database to do joins and get better performance, etc.?
Or do you usually use only your domain-generated IDs for joins, database indexes, maybe even foreign keys, etc.?
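For context, the middle ground I keep running into looks like this (a minimal JPA sketch, assuming a relational database; the entity is made up): an internal sequential PK for joins and foreign keys, plus a domain-generated ID for anything that leaves the service.

import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import java.util.UUID;

@Entity
public class CustomerOrder {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id; // internal surrogate PK: used for joins, FKs, and indexes

    @Column(nullable = false, unique = true, updatable = false)
    private UUID publicId = UUID.randomUUID(); // domain ID: the only one exposed in APIs/events

    protected CustomerOrder() {} // required by JPA

    public UUID getPublicId() { return publicId; }
}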
This is something that I usually go for:
.
└── Cqrs/
    ├── Command/
    │   └── ...
    └── Query/
        └── User/
            └── GetUserByCriteriaQuery/
                ├── GetUserByCriteriaQuery.php
                └── GetUserByCriteriaQueryHandler.php
But how about something like a GetUserByCriteriaRepositoryInterface.php / GetUserByCriteriaQueryInterface.php? How would you structure placements like these in your applications?
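For example, would something like this placement make sense (just my guess, with the implementation living under an Infrastructure/ directory)?

.
└── Cqrs/
    └── Query/
        └── User/
            └── GetUserByCriteriaQuery/
                ├── GetUserByCriteriaQuery.php
                ├── GetUserByCriteriaQueryHandler.php
                └── GetUserByCriteriaRepositoryInterface.php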
(I think it's fine to reuse the same app-level repository in more than one query/command handler, right? It's not like queries/commands, which are handled by one handler only.)
Thanks in advance!
Over the last decades, the tech stack of web applications has become increasingly complex. Sometimes that's necessary, but often it is just the "default approach" and far from necessary.
I've never done much technical writing, but coming out of another overly complex project I thought I'd give it a try and channel my frustration into something useful. So I did a little writeup:
The article offers you an alternative to the "default": a "starting point" with the potential to grow. On a more technical level, it shows you how to build web applications across multiple domain teams using Modular Monoliths, SSR, Micro Frontends, HTMX, and Tailwind CSS (demo code on GitHub).
Hope you find this somehow useful =)
Please let me know your thoughts about the current "default". Did we overshoot the reasonable complexity threshold just because we can?
So far I have a simple .NET site hosted on a small web server. I'm looking for the simplest way to allow users to authenticate. If I use OAuth and let them sign in with existing Gmail/Facebook/etc. accounts, I assume I still need a database to track the users. Are there any free/cheap third-party services I can swap in to let users sign up without having to host a bunch of new services?
I'm trying to plan out a list of core hosts/services for spinning up new sites as cheaply as possible, and auth/DB always seems to get me into expensive territory, which is never practical with such a small user base for now.
I am a big fan of schema-first / contract-first design, where I'd write an OpenAPI spec in YAML and then use code generators to generate server and client code to get end-to-end type safety. It's a great workflow because it not only decouples the frontend and backend teams but also forces developers to think about how the API will be consumed early in the design process. It can be a huge pain at times, though.
Here are my pain points surrounding schema-first design:
It’s tempting to just go with grpc or GraphQL at this point, but sending Json over http is just so easy and well-supported in every language and platform. Is there a simple Json RPC that treats schema first design as the first citizen?
To clarify, I am picturing a function-like API using POST requests as the underlying transfer "protocol". To build code generators for OpenAPI + RESTful APIs, you have to think about URL parameters, query parameters, headers, body, content type, HTTP verbs, data validation, etc. If the new JSON RPC spec only supported POST requests without URL or query parameters, I think we could have a spec that is not only easy for devs to write but also makes the tooling around it easier to build. This RPC would still work with all the familiar tools like Postman or curl, since it's just a POST request under the hood. Is anyone interested in this theoretical new schema-first JSON RPC?
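To illustrate the shape I have in mind, here's a toy dispatcher (plain JDK, all names made up): every method is a POST to /rpc/<method> with a JSON body, and codegen would produce the typed stubs around it.

import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.UnaryOperator;

public class RpcServer {
    // method name -> handler taking/returning raw JSON; generated code would wrap
    // these with typed request/response classes derived from the schema.
    private static final Map<String, UnaryOperator<String>> methods = new ConcurrentHashMap<>();

    public static void main(String[] args) throws IOException {
        methods.put("getUser", json -> "{\"id\":\"42\",\"name\":\"demo\"}"); // stub handler

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/rpc/", exchange -> {
            String method = exchange.getRequestURI().getPath().substring("/rpc/".length());
            var handler = methods.get(method);
            String body = new String(exchange.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);
            String reply = handler == null ? "{\"error\":\"unknown method\"}" : handler.apply(body);
            byte[] bytes = reply.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(handler == null ? 404 : 200, bytes.length);
            exchange.getResponseBody().write(bytes);
            exchange.close();
        });
        server.start();
    }
}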
*just deleted my other post about this lol - reposting for clarity
I'm going to be starting as a Jr. Software Architect and I want to have a super strong start. When you were first starting out, what were the best (or worst) things you did for yourself?
SOA - many services per process.
MSA - single service per process.
'Process' in containerised platforms usually means a pod per service.
MSA yields horizontal scalability: multiple instances per service.
SOA groups related services and minimises overheads (debatable?).
Massive sweeping oversimplifications, I know.
Anyway. A number of times I have come across "costing" issues with, in particular, Java-based service architectures. There are basic minima in terms of memory to get something like a Spring Boot application up and running with full security. It is not trivial compared to other languages.
In those circumstances, one proposal was to utilise a "process management" application instead of raw Java applications. Said management app would facilitate running multiple microservices in one JVM, configurable at deployment time (or possibly at runtime via k8s mutations à la FluxCD et al.).
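A toy version of such a launcher (plain Java, made-up class names; a real one would need per-service classloaders and port de-confliction) might look like:

import java.util.List;

public class ServiceRunner {
    public static void main(String[] args) throws Exception {
        // In practice this list would come from a deployment descriptor / config file.
        List<String> mains = List.of(
                "com.example.auth.AuthApplication",
                "com.example.notify.NotificationApplication");

        for (String className : mains) {
            var main = Class.forName(className).getMethod("main", String[].class);
            Thread t = new Thread(() -> {
                try {
                    main.invoke(null, (Object) new String[0]); // boot the service in this JVM
                } catch (ReflectiveOperationException e) {
                    throw new RuntimeException(className + " failed to start", e);
                }
            }, className);
            t.start();
        }
        // All services now share one heap/JIT/GC; the per-JVM memory floor is paid once.
    }
}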
This means that "minor environments" can launch two or three VMs, processes, or pods but run 20 services.
This was considered for non-production environments only, to save on cost, especially if the platform is "on-prem" and finite.
To me it seems like a nice way to extend the "Horizontal scalability" both ways. Not only can you have 4 pods for the auth service but you can allocate 0.25 pods to the notification service by running it with 3 other services in-pod.
However, isn't this just the "application server" pattern repeating itself? Back to Tomcat and JBoss?
Anyway, a question: in these discussions someone mentioned an existing application management tool for Java, and I can't recall its name or even find it by googling. I'm not sure if it was smart enough to interact with deployment descriptors or if you just configured it with a config file to load the service A, B, and F jars.
My interest is mostly in minimising "test" environment size and allowing test harnesses spanning large groups of services to be run locally on developers' workstations/laptops.
I'm finally working on building real products that might be of use to others. I want to write clean, very organized code that is maintainable and scalable. I want to learn how to structure files and best practices for working with microservices, design systems, DB schemas, and much more.
I'm learning hexagonal architecture and tried to implement a hotel booking system just to understand the pieces of the architecture. Here's the code in the domain layer; "persistence" here means a port, which I define as an interface, with the implementation living in the infrastructure layer.
public interface BillingService {
    void createBill();
}

// implementation
public class GenericBillingService implements BillingService {
    private final BillingPersistence billingPersistence; // port, implemented in infrastructure

    public GenericBillingService(BillingPersistence billingPersistence) {
        this.billingPersistence = billingPersistence;
    }

    @Override
    public void createBill() {
        // do stuff
        billingPersistence.save(new PaymentBill());
    }
}

public interface ReservationService {
    void reserve(UUID hotelId);
}

// implementation
public class GenericReservationService implements ReservationService {
    private final HotelPersistence hotelPersistence; // port, implemented in infrastructure

    public GenericReservationService(HotelPersistence hotelPersistence) {
        this.hotelPersistence = hotelPersistence;
    }

    @Override
    public void reserve(UUID hotelId) {
        Hotel hotel = hotelPersistence.findById(hotelId)
                .orElseThrow(NotFoundException::new);
        // reserve a room and persist the updated aggregate
        hotel.reserve();
        hotelPersistence.save(hotel);
    }
}

public interface BookingService {
    void book(UUID id);
}

// implementation
public class GenericBookingService implements BookingService {
    private final ReservationService reservationService;
    private final BillingService billingService;

    public GenericBookingService(ReservationService reservationService, BillingService billingService) {
        this.reservationService = reservationService;
        this.billingService = billingService;
    }

    @Override
    public void book(UUID id) {
        reservationService.reserve(id);
        billingService.createBill();
    }
}
I defined three different domain services: BillingService, ReservationService, and BookingService. I think I defined the first two correctly, but BookingService calls the other two domain services, and I'm not sure whether letting one domain service call another is bad practice.
Another possible way is to let ReservationService use the BillingPersistence port and have access to the billing domain. However, I want it to have the single-responsibility property and be reusable, so I think it's better to separate the ideas of billing and reservation.
I am trying to find an open source API management solution. Most products do have an open source version, but I am not sure whether any company actually uses that version instead of the enterprise-grade one.