
tech writers

This is our blog for technology lovers! Here, Softplayers and other specialists share knowledge that is fundamental to the development of this community.

And those who don't want to be leaders, what do they do? Discover the specialist career
Tech Writers September 26, 2022

And those who don't want to be leaders, what do they do? Discover the specialist career

It is common to associate career progression with leadership positions, but changes in the job market are making other career formats increasingly visible. A specialist career, for example, is a possibility that makes sense in many areas, especially in the world of technology.

Being a leader, whether of a team or an organization, is a complex role that requires a range of skills, among them people management. People-oriented skills bring a considerable challenge, and not everyone identifies with them. The fact is that leadership is not the only way to grow in your career, act strategically, and make an impact on the business. So, in this article we will discuss one of the career formats we have at Softplan, the Y-shaped career, and explore some aspects of the specialist career. Keep reading to check it out!

Beyond leadership: the Y-shaped career and the possibility of being a specialist

Over time it became clear that not everyone identifies with the path that leads to management, and that being good technically is not always an indication that someone should follow the leadership path. A specialist career can be much more interesting for many people. Today we have more flexible paths, so that career progression is a choice aligned with what each person really identifies with. Below we will talk about a specific format used here at Softplan, the Y-shaped career, and how it can help you become a specialist.

How does the Y-shaped career work?

To provide this career autonomy to Softplayers, Softplan adopts the Y-shaped career. And what does this mean? It means that our Softplayers can develop to act as leaders and direct their careers towards management positions, or focus on technical work and develop as specialists (Image 1). Cool, huh? The specialist career is as relevant as the management career, in addition to representing a strategic role for the business.
The specialist has the advantage of focus, of investing in the vertical pursuit of knowledge; that is, they become a technical reference. The specialist is expected to carry out complex analyses, lead technical projects, analyze scopes and recognize patterns of needs, guide the choice of tools, be strategic in solving problems, and contribute to technological and product transformation and innovation, among other functions. They are professionals with the potential to bring innovation and consolidated experience, and to participate actively in team training and in strategic decisions at different levels (team, unit, corporation).

An example of the Y-shaped career in practice

To make it clearer how the Y-shaped career works, and how the different degrees of specialization fit into it, let's look at a practical example. For the Software Developer career, the structure would look like this: In Image 2 it is visible that the leadership and specialist tracks overlap, and at times one surpasses the other. In other words, it is possible to reach strategic positions and progress financially along both paths.

As mentioned earlier, the specialist is expected to handle a variety of activities, acting as an internal consultant and technical reference. However, in addition to technical skills, other skills must be developed, the so-called soft skills. The technical specialist needs to develop communication and negotiation skills, for example. After all, at various times it will be necessary to engage the team in identifying and solving problems, adopting new practices, tools and/or technologies, etc. Furthermore, the technical knowledge developed and acquired by the specialist is expected to be shared, and here again several soft skills will be required. In other words, it is a complex and challenging role, just like the leadership role, although each has its specificities.
Other career planning tools at Softplan

The Y-shaped career is just one of the career planning tools at Softplan. We also have other initiatives that bring even more robustness to this very important structure. Some of them are:

Individual Development Plan (PDI): an important tool to guide and monitor the development of the skills identified as necessary for career progression and development;
360º performance assessment: provides a rich network of feedback that opens up diverse possibilities for developing and recognizing skills;
Internal Selection Process (PROSIN): provides career autonomy, making it possible to move between areas and/or positions.

Regardless of your choice, career direction is essential to achieving your professional goals. After all, if we don't know where we want to go, any path will do, right?!¹

¹ Sentence adapted from Alice in Wonderland.

Did you like learning a little more about the specialist career and its main implications? Check out more content like this on our blog! Want to be our next Tech Writer? Check out our openings on the Careers page!

What is Asynchronous Programming and how to use it?
Tech Writers September 19, 2022

What is Asynchronous Programming and how to use it?

Asynchronous programming is a way to avoid delays or waiting times when executing a program. When we execute something synchronously, the process can block while waiting for some piece of code to finish, which may block the program as a whole until that step completes. async and await are the keywords used in C# for asynchronous programming. In this content we will explain what asynchronous programming is and how to use it.

When to use asynchronous programming?

We can use asynchronous programming whenever we have a procedure that can be independent, such as:

Reading and writing a file;
Calls to third-party resources;
Independent logic that can be separated from the main thread's execution.

Return types of async methods

void: when we use this return type in an async method, we are assuming it will be executed on a parallel thread, but it cannot be awaited. In other words, we cannot use await on it to wait for its result. This concept is called "fire and forget";
Task: corresponds to a void return, but one that is awaitable. In other words, we can use await to wait for its execution, even though the method returns no value;
Task<T>: this return type is also awaitable, but it carries a generic parameter indicating the type of result we are waiting for, T being any type we want.

In practice, the OS schedules our code on a thread that executes step by step, procedurally, that is, synchronously. When we work in an asynchronous format, we can have several executions in flight (threads) without blocking the main thread, or the others if we so choose. This way, we can work in parallel. Observe a synchronous call: Asynchronous call: Sometimes we need the result of a call made asynchronously in order to continue our operation. In these cases, we can use the await operator.
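As a minimal sketch of these three return types (the method names and values here are illustrative, not from the original article):

```csharp
using System;
using System.Threading.Tasks;

public static class ReturnTypesDemo
{
    // async void: "fire and forget" - callers cannot await it.
    public static async void LogInBackground(string message)
    {
        await Task.Delay(100); // simulate slow I/O
        Console.WriteLine(message);
    }

    // async Task: awaitable, but produces no value.
    public static async Task SaveFileAsync()
    {
        await Task.Delay(100); // simulate writing a file
    }

    // async Task<T>: awaitable AND produces a value of type T.
    public static async Task<int> CountRecordsAsync()
    {
        await Task.Delay(100); // simulate a third-party call
        return 42;
    }

    public static async Task Main()
    {
        await SaveFileAsync();                  // wait for completion
        int total = await CountRecordsAsync();  // wait for a result
        Console.WriteLine(total);               // prints 42
    }
}
```

Note that only the Task and Task<T> variants can appear after an await; the async void method can only be invoked and forgotten.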
The await operator is needed when we require a result mid-process in order to continue, making our procedure wait for the return of what we are calling, all without blocking the main thread, which means the application does not hang. It is important to remember that, for the developer, using async/await can look a lot like the synchronous format. Under the hood, however, that is not exactly how it works. Below, I present examples of how to use async/await both sequentially and in a parallelized way.

Just to illustrate, we have Get methods that fetch data from the JSONPlaceholder public API (https://jsonplaceholder.typicode.com), which returns collections of JSON objects to simulate a mass of data:

In the first endpoint we execute the methods synchronously. Even though they are async, we call them with .Result so that they execute synchronously.

In the second example, we execute the methods asynchronously but wait for each one with await. In theory, this works in a similar way to the synchronous execution, with the difference that each execution gets a new thread, even though the others wait for it to finish.

In the third endpoint we have an optimization of the asynchronous concept. Comparing it with the previous one, we can see that at the beginning of the method we trigger the Get calls and assign the resulting tasks to variables. In this scenario, a different thread can be fired for each call, and this is possible because we do not need their values yet. When we do need those values for something, they must be ready, so we use await: if the thread running a Get method has not yet finished, the system waits for it (or for them).

Thus, we conclude that using async/await goes beyond simply writing it in the code: methods that contain async/await are not necessarily being executed in parallel.
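Since the original code listings were images, here is a minimal sketch of the sequential and the parallelized patterns described above (the GetUsersAsync/GetPostsAsync names are illustrative assumptions, not taken from the article):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class ParallelAwaitDemo
{
    private static readonly HttpClient Client = new HttpClient();

    private static Task<string> GetUsersAsync() =>
        Client.GetStringAsync("https://jsonplaceholder.typicode.com/users");

    private static Task<string> GetPostsAsync() =>
        Client.GetStringAsync("https://jsonplaceholder.typicode.com/posts");

    // Sequential: the second call only starts after the first finishes.
    public static async Task<int> SequentialAsync()
    {
        string users = await GetUsersAsync();
        string posts = await GetPostsAsync();
        return users.Length + posts.Length;
    }

    // Parallelized: both calls are started first, and each task is
    // awaited only at the point where its value is actually needed.
    public static async Task<int> ParallelizedAsync()
    {
        Task<string> usersTask = GetUsersAsync();
        Task<string> postsTask = GetPostsAsync();

        string users = await usersTask;
        string posts = await postsTask;
        return users.Length + posts.Length;
    }
}
```

An equivalent way to wait for both tasks at once is await Task.WhenAll(usersTask, postsTask); the key point is that the calls are triggered before any await, so they run concurrently.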
To obtain this result, we must think carefully about how we structure its use and how we want the processes to behave. Generally speaking, well-structured async/await processes save us time because we can execute "n" processes asynchronously. Did you like learning a little more about asynchronous programming and its main implications? Check out more content like this on our blog! Want to be our next Tech Writer? Check out our openings on the Careers page!

What is privacy by design and how is it applied in the development of products and services?
Tech Writers September 05, 2022

What is privacy by design and how is it applied in the development of products and services?

Privacy by Design, the idea of building privacy in from the moment of conception, is directly linked to the protection and privacy of individuals. The term gained strength in Brazil with the arrival of the General Data Protection Law (LGPD). Basically, Privacy by Design is understood as the application of technical measures to guarantee and protect user privacy from the moment a product or service that involves the collection of personal data is designed.

Given the importance of the topic for those who work in Information Technology (IT), in this article we will address how the concept is incorporated into the development of products and services, the pillars that form it, and its relationship with the LGPD. Read on!

What is Privacy by Design?

For Bioni (2019, p. 84), Privacy by Design "is the idea that the protection of personal data should guide the design of a product or service, which should be embedded with technologies that facilitate the control and protection of personal data." But anyone who thinks the term emerged recently is mistaken! The Privacy by Design methodology was created in the 1990s and gained greater visibility with the creation of data protection regulations. The pioneer on the subject, Ann Cavoukian, former Information and Privacy Commissioner of Ontario, Canada, established principles to serve as a basis for its application. The concept highlights two important points: the importance of implementing privacy settings by default; and the need to apply proactive measures and ensure transparency with the data subject about the purpose of collecting personal data.

"Whatever system is involved, Privacy by Design requires you to build it from the ground up, with privacy as the default setting." - Ann Cavoukian.

Integrating privacy measures at the beginning of a project makes it possible to identify potential problems at an early stage.
This way, future negative consequences can be avoided.

The 7 pillars of Privacy by Design

To understand how the Privacy by Design concept is applied, it is necessary to know the 7 pillars that form it. Let's discuss each of them below.

Proactive, not reactive: the aim is to think about possible problems in advance and prevent them from happening, looking for solutions and ensuring that, by the time a given product or service is implemented, the possible risks have already been addressed.

Privacy by default: this principle establishes that the protection of personal data occurs automatically in any process of a given product or service. This ensures that users do not need to worry about protecting their own privacy, as the product or process was created with security in mind.

Privacy embedded into the design: user privacy should in no way be treated as an add-on, but rather as part of what is being developed and implemented.

Full functionality: also called "positive-sum, not zero-sum", it establishes that all functionality must be complete and protected, generating benefits for both the data subject and the company.

End-to-end security: it is necessary to think about data privacy at every stage. Protection is thus guaranteed throughout the entire data life cycle: at the time of collection, during processing and storage, and through to disposal.

Visibility and transparency: this can be considered one of the most important pillars; transparency must be guaranteed to data subjects, so that they are always informed about the purpose for which their personal data is used.

Respect for user privacy: the product or service must be centered on the user, and all functionality must aim to guarantee the security of personal data.

What is Privacy by Design in the LGPD?

The LGPD does not directly mention the term Privacy by Design in its text. However, the concept is directly related to the provisions of Article 46: "Art. 46.
Processing agents shall adopt security, technical and administrative measures capable of protecting personal data from unauthorized access and from accidental or unlawful destruction, loss, alteration, communication or any form of inappropriate or unlawful processing. (...) § 2 The measures referred to in the caput of this article shall be observed from the design phase of the product or service through to its execution."

Thus, we can understand that the Privacy by Design concept corresponds to the application of security measures to protect personal data. From the very beginning of the design of a product or service, privacy must be considered, thereby ensuring compliance with the provisions of the article. Furthermore, adopting measures that ensure data privacy from the design stage can be seen as a demonstration that the company complies with the LGPD, a precaution that helps avoid fines and security incidents involving personal data.

What is the difference between Privacy by Design and Privacy by Default?

We can say that Privacy by Default is part of, and directly linked to, Privacy by Design. This is because one of the ways to guarantee privacy from the moment of creation is for the product or service to reach the user with all the measures that guarantee data protection already in place. On this point, Pinheiros (2018) notes:

"We can say that Privacy by Default is a result of Privacy by Design. In other words, it is the idea that the product or service is launched and received by the user with all the safeguards that were designed during its development. The principle of data protection by default means recognizing the minimum necessary in relation to the data (for the purposes of the intended processing), prohibiting the data from exceeding such purposes." (PINHEIROS, 2018, p. 399).
In other words, when the product or service is launched to the public, security and data protection settings must be applied as the standard, in such a way that only strictly necessary data is collected. Furthermore, users must be given the autonomy to voluntarily enable privacy-related settings and functionality if they so wish.

Conclusion

In short, the famous phrase coined by the British mathematician Clive Humby, "data is the new oil", becomes ever more real, given that companies use data as a source of revenue, directly or indirectly. It therefore becomes increasingly necessary to create regulations that protect data and give data subjects autonomy over their information. It is up to companies to implement measures to ensure that their products and services comply with these regulations, guaranteeing data subjects' right to privacy.

It is also worth highlighting that the application of Privacy by Design can be seen as a competitive differentiator. After all, companies that adopt measures to guarantee user privacy reinforce their commitment to, and concern for, users' well-being, and customer trust is strengthened through the transparency adopted. The implementation of Privacy by Design therefore not only guarantees compliance with legislation, but also strengthens users' trust.

Did you like learning a little more about Privacy by Design and its main implications? Check out more content like this on our blog! Want to be our next Tech Writer? Check out our openings on the Careers page!

Product Discovery and its importance in product development
Tech Writers August 15, 2022

Product Discovery and its importance in product development

If you are entering or already part of the product world, at some point you must have come across the term product discovery. The concept has become widely known over the last few years, especially after the boom of product culture in companies across the globe. In this increasingly competitive world, one of the main challenges for companies is to build products that launch successfully and have a long life, being loved by customers. However, it is not rare for time, money and a lot of energy to be invested in creating products that do not arouse consumer interest; they end up not being used and not achieving the desired results. To develop relevant products that actually captivate the public, knowledge of the discovery process is essential. So, in this article I will explain product discovery: what it is and how to apply it in the day-to-day life of a digital product. I will also bring tips on how to create better products, more aligned with the desires of their users. Shall we?

So, what is product discovery?

Product discovery is nothing more than a set of practices related to understanding (discovering) our users' needs. In the product discovery process, we care about deeply understanding the problem before thinking about a solution. Applying discovery means planning and carrying out a study (conducted by the business's product and UX teams) of the users' pain points. This work can be done on an existing product or on something new. Finding the why, investigating, discovering opportunities and, finally, finding solutions that generate value and are viable for the company is our biggest challenge.

How to run a discovery?

The first thing to say on this subject is that there is no recipe for a good discovery. Each product team applies the tools and activities that make the most sense at that moment.
On the other hand, teams should always follow a plan, which helps in this process. We can mention the following important steps:

Alignment of expectations (understanding the company's current situation and the product we want to deliver);
Research (together with the UX team) to understand users' pain points (problems);
Ideation of hypotheses to be validated (this is the time to generate as many hypotheses as possible and align them with the team through group dynamics);
Validation of hypotheses (this is the time to expose a prototype, as close as possible to the product version, to the user);
Refinement, which means creating a roadmap and establishing an MVP (minimum viable product) aligned with the company's strategies.

When to do a discovery?

Discovery is essential when launching a new product, but it is not limited to that. When we have a new feature, we can also assess the need for a discovery; these activities can happen at any stage of a product's life cycle. We should evaluate the following conditions: Is the value we will deliver high? Do we have a clear understanding of the objectives? Do we have resources available (time and money)? After answering these questions, we gain perspective on the actions we should take. We must always keep in mind: the less implementation effort needed to validate a hypothesis, the better. This way, we can run different tests to get a clearer picture of what our user needs. It is not enough to simply build new features and hope for good results; we must have a mindset of always testing our hypotheses, to understand whether or not they make sense to our users.

What areas are involved in the discovery process?

Responsibility for these activities lies with product management (PM) and UX. However, this does not mean there is no engineering collaboration: engineering signals the technical feasibility of the solution we are proposing.
In this way, collaboration between teams greatly improves the process, with everyone contributing to finding the best solution (feasible, desirable and viable) for the product we are working on.

Can a discovery be right or wrong?

We cannot say that there are errors, but rather that there may have been gaps in perception during the project. With each discovery, we learn and mature, which leads to greater assertiveness in understanding users' real problems. The important thing is to seek an unbiased perception, understanding that we must collect as much information as possible from our users before making any decision based on "guesswork". Discovery allows us to evolve the actions we build throughout our product delivery journey.

Finally, a very important point to clarify is the role of the product manager. They must strengthen the product culture in the company, always aligning the organization's strategy and the product's purpose with everyone involved (engineering, sales, marketing, among others). It is the product manager's role to ensure a very clear vision of the business and of the value the product generates for our users.

I hope I was able to clear up some of your doubts about the topic! For those interested in the subject, I recommend some bibliographical references that can help you learn more:

CAGAN, Martin. Inspired: How to create technology products that customers love;
TORRES, Joaquim. Software product management: How to increase your software's chances of success;
Do you know the importance of the discovery process? - PM3 courses.

Did you like learning a little more about product discovery and how it works? Check out more content like this on our blog! Want to be our next Tech Writer?

How to promote accelerated learning within an organization?
Tech Writers August 01, 2022

How to promote accelerated learning within an organization?

We live in a time of significant transformations in society, whether driven by human evolution or by the recent Covid-19 pandemic. This changing scenario requires an effort to adapt and a refined strategic vision. In this context, the capacity for accelerated learning is very useful, especially in the job market. At its core, accelerated learning is about cognitive flexibility, which in turn is linked to a person's capacity for continuous learning, allowing adaptation to different scenarios. In contexts of technological evolution, cognitive flexibility is required even more, because the speed at which digital concepts, learning techniques, tools and procedures change is even greater than in previously analog contexts.

As a technology company, business changes and the evolution of technical knowledge are recurring variables at Softplan, and they impact the entire structure, from processes to the products supplied. Promoting accelerated learning is also linked to leadership vision: it demands that managers understand the importance of continuous learning for the sustainability of their business. In today's content we will tell you about one of our experiences in accelerating learning. Stay with us!

What are the challenges of accelerated learning?

Developing SAJ (Justice Automation System) requires knowledge not only of technology, but also of client institutions' processes and current legislation. Its development therefore requires creating, maintaining and disseminating highly specialized knowledge. In addition to the particularities of Softplan's business, the advancement of technology across different segments of the economy directly impacts the demand for qualified tech talent. For these reasons, in 2019 we started the "Base Academy" project, an example of action focused on accelerated learning in a corporate context.
We created an internship program model in which the training itself also served as a natural selection process, retaining only professionals truly capable of serving customers and already adapted to the unit's culture. We have already held 4 editions, with more than 900 applicants and more than 60 people trained in this model!

How do we run the "Base Academy" project?

The "Base Academy" project was designed to solve the challenges of disseminating specialized knowledge in the Justice Unit's support team, focusing on the intelligence needed to serve Softplan's customers. Its main characteristics were:

Programming of training phases: we organized how knowledge would be transferred on technology (the SAJ product and its configuration), business rules (judicial processes), and work processes and tools. This stage was carried out in incremental waves of learning, with practical activities (hands-on work during the field monitoring stage).

Adaptable content: content that can be adapted to the specific knowledge demands of the service operation teams, allowing a certain flexibility in the modeling of each class.

Selection process based on behavioral profile: the selection process for the project was based on behavioral profile. In other words, our criteria went beyond selecting by undergraduate courses related to the Justice business.

Qualified instructors and related support materials: we sought to allocate employees with advanced specialized knowledge and extensive customer service experience as instructors. Furthermore, we provided materials, databases, knowledge bases, product documentation and practices fully correlated to the reality of support work.

Proximity to the work routine: we created a "shadow" period for project participants.
During this period, the interns followed the actual work routine, with mentors assigned for day-to-day guidance.

Continuous monitoring: we promoted constant monitoring of the interns' learning, including theoretical and practical assessments, technical evaluation, constant feedback, and surveys with mentors. Furthermore, we mapped the evolution of technical knowledge through self-assessment and by collecting productivity data for each intern in the post-hiring period.

Creation of the project's brand: we created a brand for the project, promoting the interns' sense of belonging to an innovative, high-quality career development initiative.

What did we learn from the project?

Running this project taught us many lessons. We realized that, although there is no single formula for accelerated learning, some strategies and actions can enhance the process. The main conclusions about the relationship between the results generated by the "Base Academy" and the Knowledge Management practices referenced in this study are:

Creation of a model for replicating specialized knowledge to new employees, fully scalable in terms of selection, planning, execution and monitoring strategy. This confirms the importance of an incremental training program connected to the knowledge the business operation requires;

Application of a practice-centered learning model (experimentation with processes and systems).
This optimizes learning for adults (reinforcing the soundness of choosing methodologies based on andragogy);

Strategic value of the Knowledge Management team's work in designing the methodology, planning, logistics and execution of the program, implementing the multilevel governance practices recommended in the Corporate Network University model;

Acceleration of learning thanks to the availability of specialists from different areas of the Justice vertical as instructors for the training topics in the program, reinforcing the importance of organizational learning centered on daily work practice;

Positive financial impact, justifying the investment in hiring interns through a training model that improves the assertiveness of subsequent CLT hires, and demonstrating that good Knowledge Management practices translate into quantitative returns for the organization (reduced overall costs and indicators of increased productivity).

Our experience reinforcing Knowledge Management processes highlights the positive impact of a solid strategy of investment in intellectual capital. In short: to obtain better results, investing in people is crucial!

Want to know more about it? I recommend:

https://justicadigital.com/unisoft-gestao-do-conhecimento/
https://www.linkedin.com/pulse/acelera%C3%A7%C3%A3o-da-aprendizagem-na-unidade-de-justi%C3%A7a-softplan-prado/?trk=public_profile_article_view

Did you like learning a little more about accelerated learning and how we apply it at Softplan? Check out more content like this on our blog! Want to be our next Tech Writer?

“Right Size” Microservices – Part I
Tech Writers July 18, 2022

“Right Size” Microservices – Part I

A difficult question to answer when working with microservices concerns the appropriate "size" of the applications that make up the ecosystem. To cite a few examples and highlight the importance of the subject, services of inadequate granularity can result in:

Increased maintenance costs and team rework;
Harm to non-functional requirements such as scalability, elasticity and availability;
A worse performance impact from the decentralized architecture;
Accidental complexity in monitoring and detecting application failures.

Determining the appropriate granularity is a difficult task, and you probably won't succeed on your first try. In this article, I will bring some insights into possible scenarios that justify decomposing an application into smaller microservices, to help you with this question. Check it out!

Granular or modular microservices?

To better understand the justifications that will guide us during the refinement process, we must clarify the conceptual difference between the terms "granularity" and "modularity". Although closely related, they deal with different architectural aspects. Modularity, in our context, concerns the internal organization of an application into separate components with high cohesion and low coupling. Ideally, every application (especially a monolithic one) should have this concern, being developed in a flexible modular format that facilitates eventual decomposition. The absence of modularity will lead the project to the dangerous, and almost certainly irreversible, big ball of mud (or even worse: the distributed big ball of mud). Figure 1 – Example of cohesive modules with a dependency

Granularity, on the other hand, concerns the size of our modules and/or services. In a distributed architecture, problems with granularity are much more common than problems with modularity.
The central point of this discussion is that a modular architecture makes it much easier to break a centralized service down into more refined microservices, being almost a prerequisite for an architectural decomposition with less effort and controlled risk. The old admonition to refactor a problematic piece of code before changing its behavior can also, in due proportion, serve as guidance here. Therefore, it is a wise choice to restructure a service into a flexible modular format before applying the guidelines we will cover below.

Tip: simple tools such as NetArchTest (.NET) and ArchUnit (Java) can be used by the architect to guarantee the modularity of an application, following the concept of fitness functions and evolutionary architectures!

Microservices disintegration criteria

After all, what criteria would justify breaking a service into smaller applications? They are:

- Scope and functionality;
- High-volatility code;
- Scalability and throughput;
- Fault tolerance;
- Security;
- Extensibility.

Below, we will explain each of these topics in more detail.

1 – Scope and functionality

This is the most common justification for breaking the granularity of a service. A microservice aims to have high cohesion: it must do one thing and do it very well. The subjective nature of this criterion, however, can lead to mistaken architectural decisions. As “single responsibility” ends up depending on each person's individual assessment and interpretation, it is very difficult to say precisely when this recommendation applies. Look at the image:

Figure 2 – Service decomposition with good cohesion

In the example above, the functionalities are closely related within the same business context (notification). Evaluating from a cohesion point of view alone, it is likely that we do not have a good justification for an architectural decomposition.
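Returning to the fitness-function tip above: tools like NetArchTest and ArchUnit automate dependency rules against real packages. As a library-free sketch of the same idea (the module and class names are hypothetical, and the check is deliberately crude — it only inspects declared fields), one could fail the build when a forbidden cross-module dependency appears:

```java
import java.lang.reflect.Field;

public class ModularityFitness {

    // Hypothetical type belonging to the "history" module.
    static class HistoryEntry {}

    // Hypothetical "titles" module class: by our rule, it must not
    // reference history types directly.
    static class TitlesService {
        String currentTitleId; // fine: String is not a history type
    }

    // Crude stand-in for a package rule: returns true if any declared
    // field of cls has a type whose simple name starts with the
    // forbidden prefix.
    static boolean dependsOn(Class<?> cls, String forbiddenPrefix) {
        for (Field f : cls.getDeclaredFields()) {
            if (f.getType().getSimpleName().startsWith(forbiddenPrefix)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Fitness function: fail loudly if titles depends on history.
        if (dependsOn(TitlesService.class, "History")) {
            throw new AssertionError("titles module must not depend on history");
        }
        System.out.println("modularity rule holds");
    }
}
```

The real tools go much further (method signatures, inheritance, whole package trees, layered-architecture rules), but the principle is the same: the modularity rule runs as an automated test, so architectural erosion is caught as soon as it happens.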
Figure 3 – Service handling unrelated functionalities

Now, consider a single service that manages the user profile and is also responsible for handling a comment section. Clearly, we are talking about different business contexts, and it would be easier to accept a decision based on this justification. To reinforce: this criterion, by itself, often does not justify breaking a service; generally, it is applied in conjunction with other criteria, reinforcing the decision.

2 – High-volatility code

The speed at which source code changes is a great guideline on which to base the decomposition of an application. Imagine a financial securities service where the history module receives new implementations every week, while the securities payable and receivable modules change every six months.

Figure 4 – Separating high-volatility code

In this situation, architectural decomposition can be a wise decision to reduce the scope of testing before each release. Furthermore, it will also increase agility and keep deployment risk controlled, ensuring that the securities service is no longer affected by frequent changes to the history service's logic.

3 – Scalability and throughput

Very similar to the previous item, the throughput of a service can be a great justification for breaking the application apart. Different levels of demand, in different functionalities, may require the service to scale in different and independent ways. Keeping the application centralized can directly impact the capacity and costs of the architecture in terms of scalability and elasticity. Depending on the business context, this criterion alone may be sufficient to justify your decision.

Figure 5 – Service with different request levels in tpm (transactions per minute)

4 – Fault tolerance

The term fault tolerance describes the ability of an application to continue operating even when a certain part of it stops working. Let's consider the previous example.
Imagine a scenario where the history service, because it integrates with several third-party applications outside our architecture, tends to fail quite frequently, to the point of restarting the entire financial securities service and causing unavailability. In this case, an understandable decision would be to separate the problematic routine into an isolated service, in order to keep our application functional despite possible catastrophic failures in the history service.

Figure 6 – Separating a problematic routine to improve fault tolerance

5 – Security

Consider the example illustrated in the figure below. In it, a service that handles a user's basic information (address, telephone number, name, etc.) also needs to manage sensitive credit card data. This information may have different requirements regarding access and protection. Breaking the service, in this case, can help to:

- Further restrict access to the code whose security criteria are more stringent;
- Prevent the less restricted code from being impacted by accidental complexity from other modules.

Figure 7 – Services with different access and security criteria

6 – Extensibility

An extensible solution can have new functionality added easily as the business context grows. This ability can also be a strong motivator to segregate an application. Imagine that a company has a centralized service to manage payment methods and wants to support new ones. Of course, it would be possible to consolidate all of this into a single service. However, with each new inclusion, the scope of testing would become more and more complex, increasing the risk of each release and, with it, the cost of new modifications. Therefore, one way to mitigate this problem would be to separate each payment method into an exclusive service.
This would allow new services to come up and extend the current functionality without impacting the code in production or increasing the scope of testing, thus keeping the risk of new implementations under control.

Figure 8 – Segregation of services to allow extensibility

Conclusion

It is unlikely that an architect will get the granularity right the first time. Requirements change, and, as we learn from our telemetry and feedback tools, the architectural decomposition of an application ends up being a natural process within a modern, evolving system. Fortunately, we have guidelines based on practical experimentation to serve as guides in this endeavor:

- Ensure that your service has a flexible, modular internal structure;
- Calmly evaluate the criteria that justify an architectural decomposition;
- Consider each trade-off based on the needs of your business context.

Finally, for readers who want to know more about the topic, I recommend reading the book on which this article was based: Software Architecture: The Hard Parts, a deep and practical approach to this and many other challenges faced in the modern architecture of complex systems. Later, in part II of this article, we will discuss the criteria that can lead an architect to unify distinct services into a centralized application. Stay tuned!

Did you enjoy learning a little more about microservices? Tell me here in the comments! Check out more content like this on our Blog! Want to be our next Tech Writer?