
tech writers

This is our blog for technology lovers! Here, softplayers and other specialists share knowledge that is fundamental for the development of this community.

Techniques for migrating your monolithic system to microservices
Tech Writers October

Introduction

This article explores some techniques for migrating a monolithic system to microservices, based mainly on Sam Newman's book "Monolith to Microservices". Although I try to consolidate the main lessons from the book in this review, I recommend that, after reading, you go deeper into the subject through the original work.

Understanding the context better

First, we need to understand the concepts involved: what a monolithic system is, its advantages and disadvantages, and what a microservices architecture is, with its own advantages and disadvantages.

Monolithic systems

The word monolith describes a monument or work made from a single block of stone. A monolithic system, in turn, is a single system, of which there are at least three types: the single-process monolith (the most common), the distributed monolith, and third-party black-box systems. Sam Newman treats the monolith as a unit of deployment: all of the system's functionality must be deployed together.

Monolithic systems have the following advantages:
- Quickly getting a PoC or MVP live to validate a business or product;
- A simpler development environment;
- Simpler testing, since it is possible to test the application end to end in a single place;
- Little communication over the network;
- A simpler deployment topology.

The model also has its disadvantages:
- It is difficult to scale;
- Because everything is built on a single codebase, if something breaks, the whole system can become unavailable;
- Maintenance becomes harder as the codebase grows;
- There is little flexibility in the choice of programming languages for the project.

Martin Fowler puts it this way: monolithic applications can be successful, but they become frustrating, especially as more applications are deployed to the cloud. Change cycles are tied together: a change made to a small part of the application requires the entire monolith to be rebuilt and redeployed. Over time it becomes increasingly difficult to keep a good modular structure, which makes it harder to ensure that changes affect only one module. Scaling requires scaling the entire application rather than only the parts that need more resources.

Microservices

Sam Newman defines microservices as "services that can be deployed independently and are modeled around a business domain". The way they communicate is also important: they talk to each other over the network. A microservices architecture is therefore based on several microservices collaborating with one another.

One of the advantages is that microservices are technology agnostic: they expose their functionality through endpoints to be consumed over the network, as in any distributed system. Another advantage, and a defining characteristic, is independent deployment. You can update (change and deploy) one service without having to update the rest of the system; in other words, you no longer need to orchestrate deliveries from multiple teams so that everything ships in a single package. For this to be true, however, we must guarantee low coupling. Unlike monolithic systems, microservices can also scale independently, which helps with the proper distribution of resources, cost reduction and more. Furthermore, since each service encapsulates a domain and exposes its functionality through endpoints, it can be reused by several applications. These are some of the advantages, but the list does not stop there.
We can work on services in parallel, bringing in more developers to solve a problem without them getting in each other's way, we reduce the cognitive load of business knowledge, and much more. Of course, I have been selling a dream full of advantages, but not everything is rosy: the model has its challenges. Among them is the communication between services, that is, the network; we have to worry, for example, about latency and failed packet delivery. Another challenge is consolidating and reading logs for debugging. There is also the challenge of dealing with different stacks across different services.

One discussion that always comes up is the size of a microservice. This is one of the lessons from Sam Newman's book: do not worry about size or the number of lines of code, worry about the business. The book focuses on understanding the business domain well, and for that reason it suggests the use of DDD (Domain-Driven Design). I will not go into the merits of DDD, since it is not the focus of this article, but it is certainly worth studying the topic in more depth.

Techniques for migration

Before any migration, it is important to know the reason for it. Have a clear vision of what you want to achieve with the change, as it may require a significant investment. Furthermore, it is easy to lose sight of the goal during a transition like this; it is not just the sunk cost fallacy, people genuinely forget why they are doing the work. Not least because many of the benefits of a microservices architecture can be achieved in other ways. To give just one example: the desire to increase team autonomy is frequent, as autonomy increases confidence, motivation, freedom and much more. How can you achieve it? One possible path is to make different teams responsible for different parts of the system; in this case, a modular monolith helps a lot. Another way is to identify the people with the deepest knowledge of certain parts of the system and empower them to make decisions about those parts. Yet another way to increase autonomy is to adopt self-service approaches for provisioning machines and environments and for accessing logs.

Finally, it is important to know when not to adopt microservices: when the domain is still unclear; in startups; in software installed and managed by customers; and when there is no good reason!

But let's say you are convinced that the best path forward really is to migrate your monolithic system to microservices. So let's talk about some techniques.

Strangler fig application

This is a technique often seen in system rewrites. The pattern, first described by Martin Fowler, is inspired by a type of fig tree that germinates in the upper branches of other trees. The existing tree serves as a support for the new fig; if the fig reaches its final stages, the original tree dies and rots away, leaving only the new fig. Applied to software, the idea is that the new system is supported by the existing one, allowing the two to coexist, and that, when the new system is ready, it replaces the old one. The pattern is based on three steps:
- Identify the parts of the current system you want to migrate; one way to do this is with a cost-benefit analysis;
- Implement the functionality in your new microservice;
- Divert calls from the monolithic system to the new microservice.
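To illustrate the third step above, the sketch below shows one possible way to divert calls: a small HTTP facade placed in front of the monolith that routes an already-migrated path to the new service and forwards everything else unchanged. This is only a minimal sketch, not the book's implementation; the hostnames, ports and the /bookkeeping path are illustrative assumptions, and only simple GET requests are handled for brevity.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical strangler-fig facade: requests for the already-migrated
// /bookkeeping path go to the new microservice; everything else still goes
// to the monolith. In a real setup this role is usually played by a reverse
// proxy or API gateway rather than hand-written code.
public class StranglerFacade {

    private static final String MONOLITH = "http://localhost:8080";
    private static final String NEW_SERVICE = "http://localhost:9090";
    private static final HttpClient client = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        HttpServer facade = HttpServer.create(new InetSocketAddress(8000), 0);
        facade.createContext("/", exchange -> {
            String path = exchange.getRequestURI().getPath();
            // Divert only the functionality that has already been extracted.
            String target = path.startsWith("/bookkeeping") ? NEW_SERVICE : MONOLITH;
            HttpRequest request = HttpRequest.newBuilder(URI.create(target + path)).build();
            try {
                HttpResponse<byte[]> response =
                        client.send(request, HttpResponse.BodyHandlers.ofByteArray());
                exchange.sendResponseHeaders(response.statusCode(), response.body().length);
                exchange.getResponseBody().write(response.body());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                exchange.sendResponseHeaders(502, -1);
            } finally {
                exchange.close();
            }
        });
        facade.start();
    }
}
```

Because the facade owns the routing decision, switching a path back to the monolith (a rollback) is just a configuration change.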
Diagram based on figure 3.1 from Sam Newman's book.

Note that, until calls are diverted to it, the new functionality exists in the production environment but is not technically live. This gives you time to work on the new functionality until you are satisfied with the implementation and with how the service is run. One of the key ideas here is to separate the concept of deployment from release: the fact that the functionality is in the production environment does not mean that customers are actually using it, and that gives you the freedom to test new functionality in production before it goes live. Another important point is that this gradual migration approach makes it much easier to roll back.

UI Composition

The previous technique focused on a server-side migration, but the user interface also offers great opportunities. Imagine functionality partly served by the monolith and partly served by the new microservices architecture.

One strategy widely used in migrations is page-based composition. Instead of migrating everything at once, pages are updated one by one, which means that, during the transition, users get different experiences depending on whether they are using new or old parts of the system or site. Another strategy is widget composition. In one of the cases in Sam Newman's book, the idea was to pick a part of a website that posed interesting challenges but was not the most prominent part of the system: put something out there, learn from the experience and make sure that, if something went wrong, it would not affect the core of the system. The example presented is a travel site that decided to deploy a single widget displaying the top ten travel destinations computed by the new system. Here, the Edge-Side Includes (ESI) technique was used with Apache: a template is defined in the web page and a web server fills in the content. UI composition allows you to separate distinct UI modules that could represent a search form, a map, and so on. The figure below illustrates the idea.

Figure based on figures 3.20 and 3.21 from Sam Newman's book.

Another path, rather than composing multiple pages, is that of single-page applications, which offer a richer interface by running everything in a single pane; here, the idea is again a composition of widgets. There were attempts at common formats, one of them being the Web Components specification, but the time this standard took to gain traction led people to look for alternatives. An alternative that has gained a lot of traction in the world of JavaScript frameworks such as Vue, Angular and React is Micro Frontends. The term gained strength in 2016, proposing for the frontend the same ideas that microservices brought to the backend. The principles that support micro frontends are:
- Be technology agnostic;
- Keep each service isolated and self-contained;
- Agree on naming conventions for local storage, cookies and the like to avoid conflicts;
- Prefer native browser features over custom APIs;
- Build a site that is resilient and usable even when there are problems loading JavaScript code.

Branch by abstraction

Earlier, we talked about the strangler fig pattern, which intercepts calls at the perimeter of the monolithic system. But what if the functionality we want to extract is too deeply rooted? We want to make changes without causing problems for the system, much less for the developers working on that codebase.
This leads us to want to make incremental changes while avoiding disruption. Normally, we develop code in branches and, when the changes are ready, we merge them into the main branch. The longer a branch lives, the harder the merge becomes. The idea here, then, is to develop code incrementally, without major disruption and without a long-lived branch. Branch by abstraction lets you change existing code so that old and new implementations can coexist safely. The pattern has five steps:
- Create an abstraction for the functionality to be replaced;
- Change the existing callers to use the new abstraction;
- Create a new implementation of the abstraction with the reworked functionality, which will call the new microservice;
- Switch the abstraction so that the new implementation is used;
- Clean up the abstraction and remove the old implementation.

Below, I try to illustrate what each of these steps would look like. Imagine a tax system that deals with the NF-e (electronic invoice), the financial titles it generates, bookkeeping, adjustments, transfers, returns and, finally, the tax assessment. The first step is to select the functionality to be replaced and create an abstraction for it. One of Sam Newman's lessons is to start with features that have a low number of inputs and outputs; by that criterion, the NF-e and the assessment (apuração) would be the last to be chosen. In this example, I will choose bookkeeping to start decomposing our monolith into microservices. A code sketch of the five steps appears after the conclusion.

Step 1: Create the abstraction. Step 2: Use the abstraction. Step 3: Create another implementation. Step 4: Switch implementations. Step 5: Clean up.

Conclusion

In this article, I have covered a few techniques for decomposing your monolithic system, but there are many others. In his book, Sam Newman presents further techniques, such as parallel run, decorating collaborator and change data capture, not to mention the database-related decompositions, which were not even touched on in this article. It is important to know that, in a real project, a mix of techniques will usually be needed to successfully decompose a monolithic system into microservices. Another great lesson is that you do not need a migration like this to obtain many of the benefits of a microservices architecture, so be clear about why you are doing the work. Remember that only what is measured gets managed, so equip yourself with indicators to track the project's progress and success against the defined objectives. Finally, understand that it is necessary to involve people in a migration like this; they are fundamental to the success of the project.
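The code sketch referred to above, showing what the five branch-by-abstraction steps could look like for the bookkeeping example. All class and method names here are hypothetical; the point is only how the abstraction lets the old and the new implementation coexist until the final switch and cleanup.

```java
// Step 1: create an abstraction for the functionality to be replaced.
interface Bookkeeping {
    void record(Invoice invoice);
}

// Step 2: the existing monolithic logic sits behind the abstraction and
// every caller is changed to depend on Bookkeeping instead of this class.
class LegacyBookkeeping implements Bookkeeping {
    @Override
    public void record(Invoice invoice) {
        // existing in-process bookkeeping logic, kept as-is
    }
}

// Step 3: a second implementation that delegates to the new microservice.
class RemoteBookkeeping implements Bookkeeping {
    @Override
    public void record(Invoice invoice) {
        // e.g. send the invoice to the bookkeeping service over HTTP
    }
}

// Step 4: a switch (feature toggle, configuration flag, etc.) decides which
// implementation is used. Step 5: once the new service is trusted, delete
// LegacyBookkeeping and the toggle, leaving only RemoteBookkeeping.
class BookkeepingFactory {
    static Bookkeeping create(boolean useNewService) {
        return useNewService ? new RemoteBookkeeping() : new LegacyBookkeeping();
    }
}

class Invoice { /* fields omitted for brevity */ }
```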

Proud Tech: An event for technology lovers
Tech Writers October

We are a technology company created in 1990 to develop management software for Public Management, the Construction Industry and Justice. The dream that began with a conversation between three friends became reality and is now lived by almost 2,000 softplayers. With a robust team eager to innovate, we bring more efficiency to public and private organizations, including city halls, large construction companies and the largest Court of Justice in the world!

In this context, our development professionals have become specialists in their fields. However, they faced the challenge of communication between teams: if the team that develops solutions for Public Management had a problem, it was very likely that someone from another team could help, but, as the teams were distant and focused on their own operations, that exchange rarely happened. The solution was "Proud Dev", an event created by the developers themselves with the aim of connecting people and spreading the experiences, problems and solutions of each area. On July 2, 2019, our devs met and, through talks and technical debates, strengthened the community that makes the company happen. The event was run with great dedication and a desire to continue; the pride of developing with purpose grew, and expectations for a 2020 edition were high, but the pandemic caused by COVID-19 put this and other plans on hold. At that time, our teams were 100% focused on ensuring that customers could keep working and adapt to the remote work model.

In 2021, we are making up for lost time! With a technical committee fully dedicated to planning tracks, schedules and talks, the event now also opens its doors to other technology professionals and to guests invited by softplayers. The purpose of this year's edition is to bring development, product and business together to share knowledge, problems and everyday solutions in practice. Completely online and free, you can be part of this story! To participate, keep an eye on our social networks for content about the event, which will feature our experts and names such as Rodrigo Branas, Elemar Junior and Klaus Wuestefeld. Don't miss this opportunity to develop and connect!

Technology and different generations
Tech Writers October

https://open.spotify.com/episode/5leslsZoPVvcfucIXPga9C?go=1&sp_cid=65617f9945b6dafa4bd6df3d4f755a20&utm_source=embed_player_p&utm_medium=desktop

What is the impact of technology on different generations? In our first chat, we invited two people from very different generations to debate the impact of technology on society. One of them is Moacir Marafon, one of the Founding Partners and Chairman of the Board of Softplan. As a counterpoint, we also welcomed Thiago Mathias Simon, a Junior Developer who, at only 19 years old, joined the company through a training program that resulted from a partnership between Softplan, Senai SC and ACATE. The host of the conversation is Guilherme Brasil, who from now on will lead monthly debates to broaden your view of how today's technology has been transforming people's lives. The conversation was very productive, and in this article we cover its main points.

Each guest's relationship with technology

Each of the guests experienced technology in a different way, since they were born in completely different generations. Marafon had his first contact with technology in an introduction to computing course during his Civil Engineering degree. When he graduated, he bought a programmable Casio calculator with 1,700 bytes of memory, on which he wrote programs and stored his calculations. As Marafon went deeper into the world of technology, he fell in love with it and wanted to explore it even further. Soon after, he went to work for the State Government, where he found a minicomputer with a commercial Basic programming language. Self-taught, Marafon grew as a programmer and became fascinated by the field, so he decided to return to the Federal University of Santa Catarina and was part of the first graduate class of the Computer Science program. When he finished, his dream of starting a technology business grew stronger, and he ended up meeting two partners with whom he founded Softplan.

Unlike Marafon, Thiago was born into a digital context in which there were already computers at home, so he lived close to technology from a very early age. At nine years old, for example, Thiago was already taking a computer operator course in distance-learning format. In high school, he started a technical course in Systems Development, his first real contact with programming. Today, at only 19, he works at Softplan. It is fair to say that technology has been present at every moment of his life; Thiago says that, nowadays, it is hard to imagine going anywhere without a phone at your side.

Does the generational difference interfere with the advance of technology?

During the conversation, we had the privilege of hearing different generations' perspectives on this question. Marafon, who falls at the beginning of Generation X (born between 1960 and 1980), described striking characteristics of his time. He entered the job market during one of Brazil's worst economic crises, when people had great difficulty working in the fields they had graduated in. For him, Generation X is characterized by placing great value on employment and stability; consequently, taking risks was discouraged, since change could be dangerous. Even so, Marafon believes his generation can bring those characteristics to the table and, together with younger generations, produce successful technological creations.
Thiago, who is part of Generation Z (born between 1995 and 2010), believes that the generational difference does interfere with the advance of technology, but not necessarily in a negative way. He sees technology as indispensable: a company with no online presence ends up having difficulty communicating and connecting with its audience, especially young people. Technology evolves precisely because of generational differences, with each individual bringing distinct, innovative ideas, born of their own experience, that complement one another. Discussing this topic and the speed of technological progress, Guilherme, the podcast host, summed up his opinion in the following sentence: "This coexistence of several generations puts a brake on the already exorbitant speed of technological evolution."

Regional differences in the use of technology

The number of people connecting to the internet in Brazil keeps growing. For a country with visible social inequalities, this is welcome news. There are certainly still areas with no internet access, but, in general, the country has been extending access to the digital world at a fast pace. The TIC Domicílios survey, carried out by the Regional Center for Studies on the Development of the Information Society (Cetic.br), shows that internet use in Brazil grew in 2020, rising from 74% to 81% of the population, which represents 152 million people. The increase is significant, but the 19% who still have no access cannot be forgotten. It is therefore the responsibility of governments to bring the digital world closer to the whole of society. The Chairman of Softplan's Board gave, as an example, his own experience with regional differences in access to technology. His relatives live in the countryside of Santa Catarina, where the problem of internet access is now being overcome: children learn to use smartphones and computers from a very early age, a positive development that results from the democratization of internet access.

The richness of different generations working together

When different generations work together, there are clear benefits for the company, and it can also drive advances in technology. Looking at different perspectives is essential, because there is always another way of doing something or of seeing the solution to a problem. Debate between generations is very productive: younger professionals are almost always associated with creativity, energy, agility and practicality, and combining those traits with an older, more experienced generation makes it hard for the result to go wrong. Did you enjoy the content? Listen to the full conversation, where we discuss other questions related to what we covered in this article.

Benefits of automated functional testing
Tech Writers October

Things change quickly in the IT industry. New technologies and new versions appear almost daily and, to keep up with this technological revolution without losing the excellence of the products delivered, companies are investing heavily in software testing. The wide variety of devices, networks and platforms demands hard human work from technology companies that want to improve the user experience through testing. These tests are manual and repetitive and, unfortunately, can be compromised by short deadlines and exhausting workdays. This is one of the reasons the test automation industry emerged.

It is clear that more companies every day are looking for quality in their products and software, and we can gain agility through the use of automated scripts. More than ever, we now have a wide range of test automation tools that help us produce increasingly robust and efficient code.

The term automation is broad and carries different meanings: some people think of unit testing, others think of functional regression test scripts, and there are also performance, stress and security tests. In the end they are all scripts; the difference lies in the objective of the test being performed. The main goal of automated testing is to reduce manual testing effort as much as possible with a minimum set of scripts, generating more credibility, optimizing human labor, speeding up execution and freeing the team to focus on the more strategic issues of the project. Automated functional tests, in turn, work like a robot that simulates human actions: opening the browser or the mobile application, interacting with the screens, entering values into fields, clicking buttons and comparing the results with what is expected.

The main benefits of automated tests are:
- Automated tests take less time to run than manual tests. Manual tests can be slow, especially when there are numerous deployments. Testers must read the procedure, understand the scenarios, perform a manual action such as typing a command or pressing a button, and record the results. All of these steps can be replaced by automated tests, allowing the test cycle to be completed in far less time.
- Automated tests are less prone to errors than manual tests. Fatigue, stress, everyday pressure and repetitive work lead to human error. An automated test eliminates this possibility and brings credibility to the verification of expected results, since the robot follows the defined steps and does not skip executions.
- Automated tests can be executed without any user interaction. Another advantage is the possibility of scheduling script execution to be triggered automatically, with no need for human action. This allows, for example, a daily run of a given set of scenarios, and even testing the system's behavior at alternative times and days, such as Saturdays and Sundays.
- Automated tests can run in parallel. While a human tester can only do one thing at a time, automation bots can run multiple tests at the same time. With a well-architected suite we can check different functionalities, in different browsers and operating systems, all executed in parallel and in isolation.
- Automated tests can produce well-designed test reports. Automated tests can do more than the testing itself; they can also automate things that are normally done manually after testing is complete, such as creating a report that indicates everything that passed, failed or was not executed. Furthermore, automated tests can capture evidence such as screenshots and videos in real time.

Of the benefits above, I consider two to be the most important: efficiency and precision. The efficiency of automated test scripts is the master key to adding value to the manual testing process.
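To make this concrete, here is a minimal sketch of an automated functional test written in Java with Selenium WebDriver and JUnit. The URL, element ids and expected message are illustrative assumptions (and a ChromeDriver binary is assumed to be available), but the structure (open the browser, fill in fields, click a button, compare the result with the expectation) is exactly what the robot described above does.

```java
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical functional test: the page, element ids and expected text are
// invented for the example; only the Selenium/JUnit structure is the point.
class LoginFunctionalTest {

    @Test
    void validUserCanLogIn() {
        WebDriver driver = new ChromeDriver();          // opens a real browser
        try {
            driver.get("https://example.com/login");    // navigate to the page
            driver.findElement(By.id("username")).sendKeys("demo.user"); // fill in the fields
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("login-button")).click();           // click the button
            String message = driver.findElement(By.id("welcome")).getText();
            assertTrue(message.contains("Welcome"));    // compare with the expected result
        } finally {
            driver.quit();                              // always close the browser
        }
    }
}
```

A test like this can then be scheduled in a CI pipeline to run daily, in parallel across browsers, and to publish the pass/fail report and screenshots mentioned above.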

How to start programming in Java: choose the right JDK and IDE
Tech Writers September 20, 2021

Despite being on the market since the 1990s, Java has many facets and approaches, and it keeps updating and adapting to new scenarios. Furthermore, many companies use Java as their main development stack, which makes the job market full of opportunities for those who master the language.

First of all, you will need a working environment installed on your machine. The basics are a JDK (Java Development Kit) and an IDE (Integrated Development Environment). This is also where you start to feel how broad Java is. There are basically two types of JDK: Oracle's enterprise JDK and the OpenJDK. Since Sun Microsystems was acquired by Oracle in January 2010, Oracle has given Java a more closed, corporate profile, in line with its values as a company; for non-commercial use, Oracle's JDK is free only in its latest version. Thanks to the large global Java community, however, we also have the OpenJDK, an open counterpart of the main JDK project that keeps all the principles of free software.

Which one should you choose? It depends on your needs. If you are starting out in the Java world, I believe the best option is the latest stable version of the OpenJDK. It has the largest number of features and the largest number of people interested in it and discussing the problems around it. Moreover, as it is developed by the community, the community itself supports its use, which is a much easier path than learning completely alone.

The JDK

In practical terms, to install the OpenJDK, you can access this link and download the correct version for your operating system. If you choose Oracle's JDK, access this link and download the correct version for your operating system. After downloading the JDK installer, install it on your system like any normal program. On some systems and installers, Java is automatically configured for use. To confirm this, open a terminal and type java -version. If the installation was configured correctly, you should see output showing the version number you installed. If you get no response, or a response such as "command not found", you need to add the "bin" directory inside the Java installation directory to your operating system's PATH. This varies greatly from system to system, so in this specific case I suggest searching for your exact situation in an internet search engine.

The IDE

Once you have configured your JDK, you can write Java artifacts directly in text editors and compile and run them from the command line. However, to make working with Java easier, there are tools called IDEs (Integrated Development Environments) that bundle tools and functionality intended for use with programming languages. IDEs tend to be flexible and accept different configurations and plugins to meet different needs and different programming languages. In the case of Java specifically, the most traditional IDE of all is Eclipse. Eclipse already comes with a series of basic tools configured for use, such as code debugging (line-by-line execution, real-time inspection of variable values and simulation of code snippets) and source version control (such as Git, SVN, VSS, etc.). Eclipse can be downloaded here. In some of its distributions, it is even possible to download a JDK that Eclipse configures automatically. It is worth mentioning that Eclipse itself has several versions. During installation, it asks what you intend to use it for. The most complete distribution, and the one I recommend installing, is the one for Enterprise Java and web developers, as it allows several different types of development with Java. Obviously, if your goal is something very specific within Java (such as development for embedded devices), this version will not suit you and another version of Eclipse will need to be downloaded.

However, don't get hung up on Eclipse. Versatile editors such as Visual Studio Code are growing in the development community: they allow you to develop well in more than one language, requiring only a greater configuration effort at the beginning of your projects, and they offer great flexibility in more complex projects that involve several different programming languages.

The Workspace

Installing an IDE on your operating system works like installing any other program. On first launch, it usually asks for a directory that will be your workspace. The workspace, like the IDE's internal settings, tends to be a personal choice for organizing all the artifacts and projects you will develop. Think about it and create your own methodology, as there are few things more confusing than a disorganized work environment.

Considerations

Once you have the IDE installed, simply create a new project, or clone a remote repository, and create your artifacts. The doors of Java development are now open; just explore! It is worth mentioning that this article is a small snippet of the big world of software development. For more interesting articles about the development world, keep an eye on this blog. See you!

Sources and suggestions: How to install an Open JDK on Windows.
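As a final sanity check that the JDK, the PATH and your IDE are all working, you can create and run the classic first program; save it as HelloWorld.java inside a project in your workspace.

```java
// The traditional first Java artifact: compiling and running it confirms
// that the JDK installation and your IDE (or the command line) are set up.
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, Java!");
    }
}
```

From a terminal, javac HelloWorld.java followed by java HelloWorld should print the message; on recent JDKs, java HelloWorld.java runs the single file directly. If both the IDE and the command line work, your environment is ready.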

Technical software debt: dental floss or root canal?
Tech Writers September 06, 2021

Technical debt is an issue that is gaining more and more attention. This is due to the growing need for companies to quickly adapt their software products to changing customer, business or market needs. If this aspect of product quality is neglected, the risk and effort involved in adapting and evolving the software tend to grow indefinitely; worse, they can quickly make the operation and the business unsustainable. When teams and organizations realize this, questions automatically arise: What should we do about our technical debt? When are we going to tackle this problem? How much effort is it worth investing in it? Should we plan and include items in our backlog with the aim of reducing this debt? Which part of the system do I improve first?

Kent Beck, one of the creators of XP, TDD and the agile manifesto, often explains the possible approaches with a metaphor that I really like. According to him, there are two ways to reduce the technical debt of software: one of them is like flossing your teeth; the other is like root canal treatment.

Floss

In the first way, the development team makes refactoring the code a daily habit. Just like flossing, an agile, disciplined and well-oriented team knows that this act of "hygiene" needs to be carried out constantly, little by little, as it goes about its other activities to evolve or fix the system. For example, before changing the behavior of a small software component that has no test coverage, some tests can be created to pin down the existing behavior, and the code to be changed can first be refactored so that the work to be carried out afterwards is easier and less risky. As a result, the quality of the code improves gradually and steadily, with technical debt being reduced consistently and at low risk. With these good habits as part of the team's culture, it is possible to take care of the internal "health" of the software, just as flossing helps us look after our oral health.

If we are not in the habit of flossing, over time we run the risk of developing serious problems and oral diseases, until the day the pain prevents us from carrying out our daily activities. That is when we go to the dentist for a root canal.

Root canal treatment

In the second way, we try to solve a large problem, which grew over time, in a more sudden, invasive and risky manner. The treatment is painful and, like any surgery, involves risks for the patient. In software, the equivalent is when the team decides to take major, one-off actions to reduce technical debt. Expensive and risky projects that aim to improve the internal quality of the software easily become long undertakings whose benefits and completion are difficult to measure. A big risk in this "root canal" approach to software arises when one of the debts acquired is precisely the lack of automated tests: by reworking the code without the protection of automated regression tests, the team risks unintentionally changing the software's existing behavior; in other words, breaking what already worked.

Another problem with this approach is the uncertainty of the return on investment. The payoff for having low technical debt in one part of the system is that it will probably be easier, cheaper and less risky to change that part of the system in the future.
When we make a major effort to reduce the technical debt of one part of the system, we cannot be sure that we will ever change those same points again. It may take months or even years to see a return on such refactorings, since the return depends on the need to change, evolve or fix those points.

Personal opinion

The type of improvement may even have been poorly chosen, because we do not know exactly what changes will come in the future. For example, we can refactor a class and encapsulate part of its behavior using the Strategy design pattern, aiming to make it easier to implement "probable" new variations of that behavior (a small illustrative sketch appears at the end of this article). If, over time, the changes made to the class never require that specific kind of variation, we will have to live with the complexity introduced by the pattern without enjoying its benefits.

Personally, I prefer the first option. If improvements are made right before we make the desired changes, in addition to getting immediate feedback, we will know exactly which point of the code is involved and what kind of change will be made, and therefore which code improvements will make that work easier. Furthermore, since there is a need to change a certain point in the code today, there is a good chance that the same point will be changed again in the future. That is no coincidence; it is part of the nature of software development.

Still using the metaphor, however, sometimes more incisive actions really are necessary to cure serious health conditions (ours and the software's). In the case of software, one aspect of technical debt in which it is worth investing in larger, focused actions is improving the coverage and effectiveness of automated tests. In many cases, the risk of this type of improvement is lower and the result is protection for making future improvements more safely. Even so, what brings the best return to the organization is fostering a culture of continuous improvement of code quality, in which teams know they can act on technical debt daily, covering legacy code with tests and refactoring just before and while carrying out their activities.

If you liked this article, I am sure you will also like the recommendations below.
- Refactoring – Not on the backlog!, by Ron Jeffries. The title is somewhat radical, but it serves to highlight the differences between the approaches; the article has a creative explanation using another metaphor, with drawings;
- Opportunistic Refactoring, by Martin Fowler;
- What Is Clean Code?, by Robert C. Martin (Uncle Bob).
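The small sketch referred to in the Strategy example above. The domain and the names are invented for illustration; the point is that the extracted interface only pays off if new variations of the behavior actually appear.

```java
// Hypothetical Strategy refactoring: the varying part of Checkout's behavior
// is extracted behind an interface so new variations can be added without
// touching the calling code. If no further variations ever appear, this
// indirection is pure cost, which is exactly the risk discussed above.
interface DiscountPolicy {
    double apply(double amount);
}

class NoDiscount implements DiscountPolicy {
    @Override
    public double apply(double amount) {
        return amount;
    }
}

class SeasonalDiscount implements DiscountPolicy {
    @Override
    public double apply(double amount) {
        return amount * 0.9; // 10% off
    }
}

class Checkout {
    private final DiscountPolicy discountPolicy;

    Checkout(DiscountPolicy discountPolicy) {
        this.discountPolicy = discountPolicy;
    }

    double total(double amount) {
        return discountPolicy.apply(amount);
    }
}
```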