
Tech Writers

This is our blog for technology lovers! Here, Softplayers and other specialists share knowledge that is fundamental to the development of this community.

Observability: What is it, pillars and how to implement domain-oriented observability
Tech Writers March 16, 2023


In recent years we have seen a great expansion of web systems, the emergence of hybrid applications, and applications that increasingly consume software as a service (SaaS). This growth brings a new challenge to DevOps, infrastructure, and development teams: monitoring networks alone is no longer enough to guarantee the health and security of these systems. This is where observability comes in. Observability helps teams track and identify problems using software tools, and in this article we will show you how to implement it using the Domain-Oriented Observability approach, which makes the many calls to logging services and analytics frameworks in your systems less technical and less verbose. Check it out!

What is observability in IT?

IT observability (also simply called observability) is the ability to monitor, track, analyze, and diagnose a system using software tools. With it, you can set up constant monitoring and observation of a system in order to better understand how it behaves, especially in cloud architectures. This concept is widely applied by DevOps, infrastructure, and development teams, since it is well established in software engineering that it benefits the software and makes problem solving easier.

Domain-oriented observability in practice: value moments

Large products focused on high-level metrics analysis, such as Mixpanel, work with the concept of "value moments", which indicates which events in a given product are worth instrumenting. These value moments vary from product to product; software aimed at electronic signature solutions, such as 1Doc, may consider the signing of a contract a "value moment". However, the value moment that makes sense for your business does not necessarily make sense for your users, because the value of your business comes from the balance between two forces: intention and expectation. If the intention is to make signing a contract easier, and that is exactly what your users expect, you have reached the right balance. A mismatch between these two forces is a loss of opportunity and, consequently, of value. Thanks to high-level metrics, this mismatch is not a lost cause: with them, it is possible to recover and maintain the value of your business according to the value moments identified by your product analysts. From here, your role as a developer is to check the technical feasibility and implement the capture of these metrics so that the business team can work with the data.

How to implement domain-oriented observability?

From now on, let's move on to the practical part and learn how to implement domain-oriented observability. To make this easier to follow, imagine a small task management system. This system registers scheduled tasks and executes them according to the schedule; however, depending on user needs, it may sometimes be necessary to run one of these tasks ahead of time, manually. To meet this need for "early execution", the structure below was created:

TaskManager: class responsible for executing a given task based on its code (the "use case" class);
TaskRetriever: class responsible for abstracting the retrieval of tasks from the database and returning domain objects (the "repository" class);
Task: class that represents a "task" in the system (the "domain entity").
See the example below:

public class TaskManager {

    private static final boolean TASK_PROCESSED = true;
    private static final boolean TASK_NOT_PROCESSED = false;

    private TaskRetriever taskRetriever;

    public TaskManager(TaskRetriever taskRetriever) {
        this.taskRetriever = taskRetriever;
    }

    public boolean executeTaskByCode(Integer taskCode) {
        Task task = taskRetriever.retrieveByCode(taskCode);
        if (task == null) {
            return TASK_NOT_PROCESSED;
        }
        try {
            task.startProcess();
            return TASK_PROCESSED;
        } catch (TaskInterruptedException e) {
            return TASK_NOT_PROCESSED;
        }
    }
}

The code above may not be the best example, but it expresses our domain logic well. Now, let's apply observability to the executeTaskByCode method. To do this, imagine two libraries in our project:

Log: a generic logging library, useful for troubleshooting activities by developers;
Analytics: a generic event library that turns user interactions with a given feature into metrics.

public boolean executeTaskByCode(Integer taskCode) {
    Task task = taskRetriever.retrieveByCode(taskCode);
    if (task == null) {
        Log.warn("Task %d does not exist, so its processing was not started.", taskCode);
        return TASK_NOT_PROCESSED;
    }
    try {
        Log.info("Processing of task %d has started.", taskCode);
        Analytics.registerEvent("task_started", task);
        task.startProcess();
        Log.info("Processing of task %d has finished.", taskCode);
        Analytics.registerEvent("task_finished", task);
        return TASK_PROCESSED;
    } catch (TaskInterruptedException e) {
        Log.error(e, String.format("Processing of task %d was interrupted.", taskCode));
        Analytics.registerEvent("task_interrupted", task);
        return TASK_NOT_PROCESSED;
    }
}

Now, in addition to executing the business rule previously expressed by the code, we are also dealing with several logging calls and analytics about the use of this feature. Looking at it not from an observability instrumentation point of view, but from a technical one, the maintainability of this code has clearly dropped. First, if this implementation is crucial to the business, it should be guaranteed with unit tests. Furthermore, the business rule, which was clearly expressed before, is now obscured by the use of these libraries. Scenarios like this are common in the most diverse systems and, in general, an "observability-oriented code" and a "domain-oriented code" do not seem to fit well together. So, is there a solution? Let's take a closer look below.

Solution for the observability case

Thinking about the readability of the code, our instinct is to create small methods that abstract this confusing content away from executeTaskByCode, isolating the domain-focused code from the analytics-focused code. In this case, however, the observability we introduced is a business requirement, so even though it is "analytics-oriented code", it is still "domain-oriented code". In other words, not all domain-oriented code is observability-oriented and not all observability-oriented code is domain-oriented, but in some cases, like the one presented here, there is an intersection between them. Finally, we also strongly recommend extracting the "magic" strings into constants, as this makes the code more pleasant to read and makes it easier to understand what each one represents. Perhaps introducing some enums is also worthwhile to abstract the event tracking identifiers, such as task_started and task_finished, but we will not go deeper into this subject, as it is not the focus here.
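As a rough illustration of that last idea (the enum and names below are hypothetical, not part of the original example), such an abstraction could look like this:

```java
// Hypothetical sketch: centralizes the event names used by the Analytics calls,
// so the tracking identifiers are no longer "magic" strings spread across the code.
public enum TaskEvent {
    TASK_STARTED("task_started"),
    TASK_FINISHED("task_finished"),
    TASK_INTERRUPTED("task_interrupted");

    private final String eventName;

    TaskEvent(String eventName) {
        this.eventName = eventName;
    }

    public String getEventName() {
        return eventName;
    }
}
```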
public boolean executeTaskByCode(Integer taskCode) {
    Task task = taskRetriever.retrieveByCode(taskCode);
    if (task == null) {
        recordThatTaskDoesNotExist(taskCode);
        return TASK_NOT_PROCESSED;
    }
    try {
        recordThatTaskWasStarted(task);
        task.startProcess();
        recordThatTaskWasFinished(task);
        return TASK_PROCESSED;
    } catch (TaskInterruptedException e) {
        recordThatTaskWasInterrupted(task, e);
        return TASK_NOT_PROCESSED;
    }
}

private void recordThatTaskDoesNotExist(Integer taskCode) {
    Log.warn(TASK_DOES_NOT_EXIST_MESSAGE, taskCode);
}

private void recordThatTaskWasStarted(Task task) {
    Log.info(TASK_STARTED_MESSAGE, task.getCode());
    Analytics.registerEvent(TASK_STARTED, task);
}

private void recordThatTaskWasFinished(Task task) {
    Log.info(TASK_FINISHED_MESSAGE, task.getCode());
    Analytics.registerEvent(TASK_FINISHED, task);
}

private void recordThatTaskWasInterrupted(Task task, TaskInterruptedException e) {
    Log.error(e, String.format(TASK_INTERRUPTED_MESSAGE, task.getCode()));
    Analytics.registerEvent(TASK_INTERRUPTED, task);
}

This is a good start, with the domain code well written again; that is, of course, if you consider that your "domain code" is just the executeTaskByCode method. Looking at our class, it does not take long to notice that we have made a trade. If we extract several metrics methods from the original method that do not fit the main purpose of the TaskManager class, we are just sweeping the problem under the carpet. When something like this happens, it usually means a new class is trying to emerge. Therefore, perhaps the simplest solution is to split this class in two: one to deal with metrics and another to process tasks. In other words, our proposal is to create a new class specifically responsible for the application's analytics and logs. This is also a good solution because segregating the original responsibilities and encapsulating the metrics functions in a new class, together with the dependency injection this introduces, favors the testability of TaskManager, which holds our domain rules. We can reinforce this idea by remembering that Java is an object-oriented language and that the testability of a class that uses static methods is reduced when those methods modify state external to themselves, and log libraries generally do exactly that. This way, the resulting TaskManager would look like the following:

public class TaskManager {

    private static final boolean TASK_PROCESSED = true;
    private static final boolean TASK_NOT_PROCESSED = false;

    private TaskRetriever taskRetriever;
    private TaskMetrics taskMetrics;

    public TaskManager(TaskRetriever taskRetriever, TaskMetrics taskMetrics) {
        this.taskRetriever = taskRetriever;
        this.taskMetrics = taskMetrics;
    }

    public boolean executeTaskByCode(Integer taskCode) {
        Task task = taskRetriever.retrieveByCode(taskCode);
        if (task == null) {
            taskMetrics.recordThatTaskDoesNotExist(taskCode);
            return TASK_NOT_PROCESSED;
        }
        try {
            taskMetrics.recordThatTaskWasStarted(task);
            task.startProcess();
            taskMetrics.recordThatTaskWasFinished(task);
            return TASK_PROCESSED;
        } catch (TaskInterruptedException e) {
            taskMetrics.recordThatTaskWasInterrupted(task, e);
            return TASK_NOT_PROCESSED;
        }
    }
}

The process of segregating the TaskManager class and encapsulating the metrics is called Domain-Oriented Observability, and the new class generated is our much-coveted Domain Probe. The name of this design pattern could not be more appropriate, since our class literally acts as a probe inside a class that previously lacked any metrics instrumentation.
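The article does not list the extracted class itself; a minimal sketch of what this Domain Probe could look like, assuming the same hypothetical Log and Analytics libraries used above and reusing the hypothetical TaskEvent enum sketched earlier, would be:

```java
// Hypothetical sketch of the Domain Probe: it concentrates every logging and
// analytics call related to task processing, so TaskManager stays domain-only.
public class TaskMetrics {

    private static final String TASK_DOES_NOT_EXIST_MESSAGE =
            "Task %d does not exist, so its processing was not started.";
    private static final String TASK_STARTED_MESSAGE = "Processing of task %d has started.";
    private static final String TASK_FINISHED_MESSAGE = "Processing of task %d has finished.";
    private static final String TASK_INTERRUPTED_MESSAGE = "Processing of task %d was interrupted.";

    public void recordThatTaskDoesNotExist(Integer taskCode) {
        Log.warn(TASK_DOES_NOT_EXIST_MESSAGE, taskCode);
    }

    public void recordThatTaskWasStarted(Task task) {
        Log.info(TASK_STARTED_MESSAGE, task.getCode());
        Analytics.registerEvent(TaskEvent.TASK_STARTED.getEventName(), task);
    }

    public void recordThatTaskWasFinished(Task task) {
        Log.info(TASK_FINISHED_MESSAGE, task.getCode());
        Analytics.registerEvent(TaskEvent.TASK_FINISHED.getEventName(), task);
    }

    public void recordThatTaskWasInterrupted(Task task, TaskInterruptedException e) {
        Log.error(e, String.format(TASK_INTERRUPTED_MESSAGE, task.getCode()));
        Analytics.registerEvent(TaskEvent.TASK_INTERRUPTED.getEventName(), task);
    }
}
```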
How to test domain-oriented observability?

Before actually testing observability, let's go back to the first version of our class and try to imagine a test scenario.

public class TaskManager {

    private static final boolean TASK_PROCESSED = true;
    private static final boolean TASK_NOT_PROCESSED = false;

    private TaskRetriever taskRetriever;

    public TaskManager(TaskRetriever taskRetriever) {
        this.taskRetriever = taskRetriever;
    }

    public boolean executeTaskByCode(Integer taskCode) {
        Task task = taskRetriever.retrieveByCode(taskCode);
        if (task == null) {
            return TASK_NOT_PROCESSED;
        }
        try {
            task.startProcess();
            return TASK_PROCESSED;
        } catch (TaskInterruptedException e) {
            return TASK_NOT_PROCESSED;
        }
    }
}

If you are used to doing this kind of analysis, you will notice some scenarios: either there is no task with the informed code, and the method returns FALSE; or there is a task and its processing completes, returning TRUE; or there is a task and its processing is interrupted, returning FALSE. For simplicity, we will use only the third scenario as an example. Below, we can see what the implementation of this test class would look like.

public class TaskManagerTest {

    private static final Integer TASK_CODE = 1;

    private TaskManager taskManager;
    private TaskRetriever taskRetriever;

    @BeforeEach
    public void setUp() {
        this.taskRetriever = Mockito.mock(TaskRetriever.class);
        this.taskManager = new TaskManager(taskRetriever);
    }

    @Test
    public void shouldReturnFalse_inCaseAProcessingErrorOccurs_whenThereIsATaskWithTheInformedCode() throws TaskInterruptedException {
        Task task = createTaskWithEmbeddedException();
        Mockito.when(taskRetriever.retrieveByCode(TASK_CODE)).thenReturn(task);

        Boolean wasExecuted = taskManager.executeTaskByCode(TASK_CODE);

        assertFalse(wasExecuted);
    }

    private Task createTaskWithEmbeddedException() throws TaskInterruptedException {
        Task task = Mockito.spy(new Task(TASK_CODE));
        doThrow(new TaskInterruptedException()).when(task).startProcess();
        return task;
    }
}

Following the GWT naming pattern (Given - When - Then), we can express our business rule in the test. It is worth mentioning that the original article "Brazilianizes" the GWT writing into a "DCQ" order (Should - Case - When), which is mirrored in the test name above: "should return false" is equivalent to "then returns false"; "in case a processing error occurs" is equivalent to "when a processing error occurs"; and "when there is a task with the informed code" represents the same as "given an existing task with the informed code". From this, when we re-implement our observability, our TaskManager class goes back to looking like this:

public class TaskManager {

    private static final boolean TASK_PROCESSED = true;
    private static final boolean TASK_NOT_PROCESSED = false;

    private TaskRetriever taskRetriever;
    private TaskMetrics taskMetrics;

    public TaskManager(TaskRetriever taskRetriever, TaskMetrics taskMetrics) {
        this.taskRetriever = taskRetriever;
        this.taskMetrics = taskMetrics;
    }

    public boolean executeTaskByCode(Integer taskCode) {
        Task task = taskRetriever.retrieveByCode(taskCode);
        if (task == null) {
            taskMetrics.recordThatTaskDoesNotExist(taskCode);
            return TASK_NOT_PROCESSED;
        }
        try {
            taskMetrics.recordThatTaskWasStarted(task);
            task.startProcess();
            taskMetrics.recordThatTaskWasFinished(task);
            return TASK_PROCESSED;
        } catch (TaskInterruptedException e) {
            taskMetrics.recordThatTaskWasInterrupted(task, e);
            return TASK_NOT_PROCESSED;
        }
    }
}

It is important to remember that no behavior was changed by adding observability. Therefore, the test written earlier continues to fulfill its role even though it is now outdated. At most, what would happen in this case is a compilation error, which already serves as a warning that this class now has a new dependency.
Since this is an extension of our original business rule, nothing is fairer than extending the tests as well, ensuring the correct invocations of our instrumenter. See the following example:

public class TaskManagerTest {

    private static final Integer TASK_CODE = 1;

    private TaskManager taskManager;
    private TaskRetriever taskRetriever;
    private TaskMetrics taskMetrics;

    @BeforeEach
    public void setUp() {
        this.taskRetriever = Mockito.mock(TaskRetriever.class);
        this.taskMetrics = Mockito.mock(TaskMetrics.class);
        this.taskManager = new TaskManager(taskRetriever, taskMetrics);
    }

    @Test
    public void shouldReturnFalse_inCaseAProcessingErrorOccurs_whenThereIsATaskWithTheInformedCode() throws TaskInterruptedException {
        Task task = createTaskWithEmbeddedException();
        Mockito.when(taskRetriever.retrieveByCode(TASK_CODE)).thenReturn(task);

        Boolean wasExecuted = taskManager.executeTaskByCode(TASK_CODE);

        Mockito.verify(taskMetrics, times(1)).recordThatTaskWasStarted(any());
        Mockito.verify(taskMetrics, times(1)).recordThatTaskWasInterrupted(any(), any());
        Mockito.verifyNoMoreInteractions(taskMetrics);
        assertFalse(wasExecuted);
    }

    private Task createTaskWithEmbeddedException() throws TaskInterruptedException {
        Task task = Mockito.spy(new Task(TASK_CODE));
        doThrow(new TaskInterruptedException()).when(task).startProcess();
        return task;
    }
}

Taking advantage of the dependency on an instrumenter inside our TaskManager, we can also inject a mock and check only the number of invocations of each method. In the test above, we verify that the methods recordThatTaskWasStarted and recordThatTaskWasInterrupted were invoked, and then we ensure that no further interactions happen with our instrumentation class. So, whenever a new metric appears, a refactoring is done, or the business rule changes, we will have tests that guarantee what the business expects, or expected.

Author's Opinion

This article is, in large part, a rereading of the Domain-Oriented Observability study written by Pete Hodgson in 2019, and it also gathers the views of several other authors on the subject, including the personal opinion of its author, Felipe Luan Cipriani, a tech writer invited by the Softplan group. When I read the landmark article "Domain-Oriented Observability" for the first time, I was not struck by anything revealing, as I already knew the method. However, after a few conversations with close colleagues and a few more attempts to understand the article in full, I realized how much I had underestimated it. Domain Probe is not about encapsulation, segregation or dependency injection (although these are all elements that make it up), but about the importance of metrics and their relevance to the business. And although the Domain Probe design pattern has similarities with a Facade, it is concerned with the essence of every system: the domain. Therefore, it has its value. This is an essential design pattern to know and apply wherever there are metrics tools inside a domain that were not designed to be easy to read, interpret or maintain. After all, developers spend more time reading code than writing it. Furthermore, this is a design pattern with great flexibility in terms of granularity. In other words, you can create anything from one Domain Probe per domain class, a more "specific" approach, to a single "generic" Domain Probe. There is no wrong approach, just different ones. Another way of implementing domain-oriented observability is through events; in that scenario, the design pattern at play is the Observer, and its approach is equally interesting, deserving an article dedicated just to it. Finally, I thank you, dear reader, for your time and interest.

Caching: what it is and how it works
Tech Writers February 27, 2023


Nowadays, when we use systems to manage information, we need increasingly accurate and fast data, and this creates a paradox between quantity and speed. System users look for faster information to optimize the time of their processes. However, speed is also tied to the amount of information, and these two needs end up conflicting: the greater the amount of information required, the more the processing speed or availability of that information drops. Given this scenario, how can we reconcile such distinct and coupled demands?

What is caching?

Caching is a technique for storing application data in an intermediate layer, implemented in hardware or software. It provides faster access to certain information than going directly to the application's database. While a database focuses on managing information, the cache focuses on reading that data, without worrying about other operations, since that job is already assigned elsewhere. This means faster, more direct access, saving time and improving performance when reading this data. Caching is an important feature for applications, as it increases speed and performance in obtaining data. However, there is a big "it depends" in this solution, because not every application really needs caching. Before applying this concept, we must always analyze the application and the real need for a cache.

Should I use caching in my application?

It is always important to ask this question to avoid unnecessary work. Here are some questions worth considering: Do I really need to improve the performance of my application? Are my users dissatisfied with its performance? Could I achieve the performance I want some other way, for example by optimizing queries, creating database indexes, or changing the data structure? These and other questions help us validate whether we really need caching in our application.

Where can I use caching?

1 - Systems focused on data reading: systems where the amount of reads is significantly greater than the amount of writes. Here, caching can be a way to scale the application.
2 - Tolerance to stale data: systems containing data that does not need to reflect its most current version and that can be updated in the database and in the application without being refreshed for a certain period of time. In these cases, the cache can store a specific value.
3 - Data that does not change frequently: for systems where data is rarely updated, caching can store it for a certain time and serve it according to the application's needs.

Caching events

We can classify cache accesses into two types of event: hit and miss. Hit: the data being fetched is in the cache when the application performs the query, producing a faster and lighter response. Miss: the data fetched by the application is not in the cache, so the application has to fetch it directly from the database, making an additional call.

Main caching strategies

There are a few strategies commonly used to implement caching. The most common are:

Cache-aside pattern: in this read strategy, the application has access to both stores: the cache and the database. The application first fetches the data from the cache and returns it; however, if the data is not found in the cache, the application fetches it directly from the database, returns it, and then inserts it into the cache.
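A minimal sketch of this cache-aside read path in Java (the Cache, ProductDatabase, and Product types below are hypothetical, invented only for illustration):

```java
import java.util.Optional;

// Hypothetical types, invented only to illustrate the cache-aside flow.
record Product(String id, String name) {}

interface Cache {
    Optional<Product> get(String key);
    void put(String key, Product value);
}

interface ProductDatabase {
    Product findById(String id);
}

public class ProductService {

    private final Cache cache;
    private final ProductDatabase database;

    public ProductService(Cache cache, ProductDatabase database) {
        this.cache = cache;
        this.database = database;
    }

    // Cache-aside: the application talks to both the cache and the database.
    public Product findProduct(String id) {
        // 1. Try the cache first; a "hit" ends here.
        Optional<Product> cached = cache.get(id);
        if (cached.isPresent()) {
            return cached.get();
        }
        // 2. On a "miss", fetch directly from the database...
        Product product = database.findById(id);
        // 3. ...and insert it into the cache for subsequent reads.
        if (product != null) {
            cache.put(id, product);
        }
        return product;
    }
}
```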
Read-through / Lazy loading: in this read strategy, the application does not have access to the database, only to the cache. It requests the data from the cache and, if found, the data is returned to the application. If the cache cannot find the information, the cache itself goes to the database, finds the requested data, inserts it into the cache store, and only then returns it to the application. Read-through is very similar to cache-aside; the difference is that in this strategy the application always fetches from the cache store.

Write-through: in this write strategy, the cache can hold all of the database's data and must always be current. Every time the application writes some data, it first stores it in the cache and then stores it in the database. This way, the cache always has the most recent version of the information, acting as a replica of the database, and the application always reads from the cache only. Write-through is aimed at systems where reads can never be stale. On the other hand, since it stores everything, there may be data that is stored but never read.

Write-back (write-behind): in this write strategy, data is not immediately written to the database; the application keeps writing to the cache and, after a certain time, the cache synchronizes with the database, working on batches of information to be inserted. One of the problems with this strategy is that, if the cache fails, data that has not yet been synchronized is lost. Furthermore, if some operation is performed directly on the database, that operation may not see the most recent version of the data.

Caching implementation

To apply the caching strategies presented above, we must choose a caching implementation. There are several alternatives and, as long as the cache storage is faster to access than the database, it already adds value. Cached data is generally stored as key/value pairs. Below we present three common types of caching implementation.

In-memory: in-memory caching stores the cached data in the service's own memory. This format is the fastest, since access is as direct as reading a simple application variable. Despite being the fastest and easiest to implement, it also comes with some problems: because the data lives in the service's memory, if the service crashes, the cached data is lost with it. This implementation is recommended for single or smaller services that have little data to keep in the cache.

Remote: in this implementation, cached data is stored in an external service, such as Redis or Memcached, not tied to the application service. In this format, the data becomes resilient to application failures, as it is decoupled from the application and has its own service. This model is more robust and recommended when you have data coming from several different or larger services.
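As a rough sketch of the in-memory, key/value idea described above (simplified on purpose: fixed TTL, no size limit or eviction policy), the hypothetical Cache interface from the previous example could be implemented like this:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Simplified in-memory implementation of the hypothetical Cache interface above.
// Entries live in the service's own memory and expire after a fixed TTL.
public class InMemoryCache implements Cache {

    private record Entry(Product value, long expiresAtMillis) {}

    private final Map<String, Entry> store = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public InMemoryCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    @Override
    public Optional<Product> get(String key) {
        Entry entry = store.get(key);
        if (entry == null || entry.expiresAtMillis() < System.currentTimeMillis()) {
            store.remove(key); // expired or absent: behaves as a cache miss
            return Optional.empty();
        }
        return Optional.of(entry.value());
    }

    @Override
    public void put(String key, Product value) {
        store.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }
}
```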
Remote distributed: both implementation models mentioned above struggle when it comes to scalability. The amount of data that can be stored in each of them is limited, because both depend on a single server, whether local or external. When we need to manage larger amounts of data, we can use the remote and distributed implementation model, where the cache is spread across multiple instances of the cache server. If the current capacity is insufficient, new clusters can be added, giving us a way to scale the amount of data that can be stored and making this caching service easier to grow.

Conclusion

Caching is an easy concept to understand, its implementation is also simple, and the results appear even with small applications, regardless of the strategy and model used. But it will not always be the savior of an application, and not every application necessarily needs this strategy.

Understand the Hackathon: What is it and why is it important
Tech Writers February 14, 2023


Both for people looking for challenges in the market and for companies that need out-of-the-box alternatives, the Hackathon is a great option. A Hackathon can contribute to innovative solutions, as it promotes rich discussions and a lot of growth among professionals. If you still don't know what a Hackathon is about, stay with me until the end of this article.

What is a Hackathon?

The term Hackathon comes from the combination of the words hack (programming) and marathon. Hackathons are events that bring together people interested in working on specific problems, developing solutions quickly and in an original way. Although the word Hackathon originates from programming, these events are not aimed only at programmers (at least not anymore). Nowadays, they count on the contribution of other professionals, mainly innovation specialists, managers, designers, developers and users. Hackathons discuss ideas and projects to be developed, always considering the opinions, resources and knowledge of the participating professionals. In summary, we can say that the importance of these events comes from everything they enable, among which I highlight three points: the encouragement of production and collective work; the integration of several fronts in the development of the project; and innovative solutions. For these and other reasons, a Hackathon is an excellent way to encourage innovation in companies.

Who can participate in Hackathons?

Generally, the event brings together professionals who have some relationship with technology, innovation and the development of new solutions, but there are no restrictions on participants, as these events are conducive to great learning. A fun fact about Hackathons: internal and external participation is quite common, that is, professionals from inside and outside the companies take part, which is a very positive characteristic for gathering knowledge and generating good results. In fact, the good results of Hackathons are the reason why professionals in areas such as Human Resources use them as a way of training and developing employees from all sectors of organizations.

Planning a Hackathon

There is no standard for planning a Hackathon. Planning guidelines are at the discretion of each company or project; just remember that everything must be aligned with the objectives. Even so, it is worth highlighting that some planning pillars do exist.

Unit testing: How to identify the effectiveness of a test?
Tech Writers January 17, 2023


A unit test is a type of automated test that checks a specific part of the code quickly and in isolation. The purpose of a unit test is to ensure that the code works correctly and does what is expected of it. In other words, it checks a small part of the code, quickly, and in an isolated way. Developers write unit tests during development, with the aim of identifying possible problems or errors quickly and accurately. They are also useful for ensuring that the code continues to work correctly even after changes or refactorings. Unit tests are usually combined with other types of tests, such as integration tests and system tests, to ensure that the application works properly. In this article, we will help you understand which parameters to use to identify whether a unit test is good or not.

Principles of effective unit testing

More than checking whether the code "runs" or not, it is important to ensure that the expected behavior of the application is carried out without errors, achieving the final business objective. Therefore, it is worth reiterating that automated tests do not evaluate only the code, but the expected behavior of that business domain. Nowadays, the importance of automated testing is widely recognized in software projects as a way to ensure scalability and quality. Vladimir Khorikov, in his book Unit Testing: Principles, Practices and Patterns, proposes that it is necessary to think beyond simply writing tests and to pay special attention to their quality, so that the costs involved in designing and maintaining them are as low as possible. According to the author, this need arose because chasing a unit test coverage goal in projects ends up generating a large number of tests which, in some cases, do not identify possible bugs in the system and consume a significant part of the development time. Below, we present the main indicators to consider when evaluating whether a unit test is effective.

4 main indicators of an effective unit test

Have you ever worked on a project where, with every change you make, several tests fail and you can't figure out why? Have you ever had to deal with tests that were difficult to understand and took a considerable amount of time to analyze? These are signs that it is necessary to rethink the testing of a project. Next, we present the four main points, according to Vladimir Khorikov, that help us recognize good unit tests.

Protection against regressions

A regression is a bug in the software, and tests must be able to identify regressions. The greater the amount of code in an application, the more exposed it is to potential problems. To ensure this protection, tests must execute as much code as possible, increasing the chance of revealing a regression. Furthermore, it is necessary to prioritize business domain code and more complex code, avoiding tests of trivial application behavior, for example methods that only pass values to object properties. As an example of trivial behavior, consider a test that checks a method that simply assigns a string value to the Name property of a User object; the framework itself performs this assignment. This type of test should be avoided in favor of tests of code that is more complex or really important for the business. Fig 1. Unit test of code considered "trivial".
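As a rough illustration of this kind of trivial test (the User class and test below are hypothetical, not taken from the book):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical class under test: a simple property with no logic of its own.
class User {
    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

public class UserTest {

    // "Trivial" test: it only re-states the assignment done by the setter,
    // so it is very unlikely to ever reveal a regression.
    @Test
    public void shouldAssignNameToUser() {
        User user = new User();

        user.setName("John Doe");

        assertEquals("John Doe", user.getName());
    }
}
```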
Resistance to refactoring

Refactoring means changing existing code without changing the application's behavior. We often come across projects with high test coverage which initially meets our objectives, but where, with each refactoring, with each minimal improvement, tests fail. In these cases, it is likely that at some point these failures will turn the tests into a burden, which is far from their purpose. To guarantee this resistance, the test must not be coupled to the implemented code: it should focus on "what" the application should do, not on "how". Consider, for example, a test that verifies that a specific SQL expression is sent in order to return a user with a given ID. The test is capable of identifying bugs, but there are other SQL expressions that can produce the same result; a change in the code already leads to test failures even when the application returns the same user, that is, keeps the same behavior. This type of test should be avoided. Fig 2. Test coupled to the implemented code (Source: Vladimir Khorikov, Unit Testing: Principles, Practices and Patterns, 2020)

Fast feedback

Fast feedback is one of the basic properties of any unit test. The faster the response, the more time you have to deal with the activities that matter. Long-running tests also make the Continuous Integration pipeline more costly, since in most cases there is a step to run the project's tests; the consequence is a delay in deploying the application and an increase in costs. There is no exact amount of time considered good or bad, but if test execution time is making development costly, it is a sign of poorly constructed tests.

Easy maintenance

Maintenance is something that must always be considered if we want to guarantee the scalability of our applications. Easy-to-maintain tests have two characteristics: they are easy to read, since ensuring the readability of the code for developers and business experts allows a faster understanding of the test objective and reduces maintenance costs (tests with fewer lines tend to be easier to understand); and they are easy to execute, which requires an infrastructure for tests so that their dependencies (for example, databases and external APIs) remain operational.

As we have seen, it is important not only to know how to write tests but also to refactor them in order to improve quality and provide more security to our applications. Either way, it is necessary to evaluate them against these four characteristics. This ensures the development of tests that are low cost, easy to maintain and that fulfill their role within the application. This article was written based on chapter 4 of the book "Unit Testing: Principles, Practices and Patterns".

Do you really know how Knowledge Management impacts your business?
Tech Writers November 07, 2022


Knowledge management has become an essential tool for companies that want to remain competitive and keep growing. This is because today's technological acceleration requires constant learning in order to keep up with the job market. The main role of this kind of management is to promote the right channels to transfer relevant knowledge to customers, employees, partners and suppliers of your business. In this article, we will explore this topic by discussing the applicability of knowledge management tools and strategies in the operation of your business, whether in the public or private sector.

Knowledge management: multichannel training processes and tools

We are living through a unique moment as human beings, in which we have far more information available than capacity to absorb everything we have access to. The revolution in learning models was boosted by technological evolution and here we are: lost in an ocean of available knowledge that is very poorly consumed. It is important that company management increasingly realizes how essential it is to keep teams well trained and ready for the challenges of this new scenario, which is fast, uncertain, complex, and demands a great capacity for continuous adaptation. Within the concepts of knowledge management, we find some initiatives that can support us in the challenges imposed by this moment. These initiatives can also eliminate geographic barriers and democratize access to knowledge, ensuring its reach at any level of the business operation. We can mention some of these practices, which together form a very interesting combination: learning trails; practical simulations; implementation of distance learning (EAD) platforms, also known as LMS (Learning Management System); gamification; knowledge repositories; communities of practice; immersions using new technologies; and the always "favorite" certification program. Recent research shows that up to 40% of people who do not receive a good training program tend to leave the company within the first year. Therefore, it is important to always ask yourself: how is your onboarding? Is the company really prepared to welcome new employees and new customers? We are all at the mercy of high turnover, caused not only by the rapid change resulting from the pandemic, but also by the absence of a solid strategy for transferring and internalizing knowledge within companies. Therefore, the focus is to generate knowledge and implement technologies capable of qualifying teams, developing corporate education and consolidating an organizational culture based on knowledge. And, speaking of ways to apply good knowledge management, we have an interesting article about LMS platforms for you: click here to read it! Another tip for knowledge management processes that works well here at Softplan is the Eva People tool, which focuses on automating communication for the onboarding journey. Today we use this tool not only for Softplan employees in the Public Sector Unit, but also in new training for our clients. Click here to find out more!

Mapping and reformulating strategic and operational processes

Despite promoting productivity and supporting the monitoring of business operations, processes play a role that is often underestimated. They are, however, essential: they represent a structured way of recording knowledge, the "know-how" that actually makes your service or product reach your customers or users.
Capturing the step-by-step of each activity, in order to document the most productive way of carrying out the work, is the hard task of making explicit what we call tacit knowledge. For this reason, process mapping is a useful technique for capturing the best practices and critical activities of your business. Through it, you achieve alignment between teams, easier understanding for new employees, and a standard of quality and assertiveness in procedures. All of this content is essential to a good onboarding strategy, as mentioned above, and it is also a fertile repository for knowledge recycling actions. Such knowledge will sometimes be represented in the form of flowcharts, which graphically describe an existing process or a proposed new process. But the development of procedures, manuals, scripts, task lists, work instructions and even demonstration videos made available in an LMS system (for distance learning) can provide greater detail on how each area of the business performs its work. It is important to highlight that a company's operational and strategic processes cannot be separated from its knowledge flow: knowledge constitutes process inputs, is used during processing, and is generated as process outputs. The flow of knowledge is therefore inherent to the process. Read more about it in the 2007 article by Yoo, Suh and Kim, "Knowledge flow-based business process redesign": https://www.emeraldinsight.com/doi/pdfplus/10.1108/13673270710752144

A knowledge base specialized in your business

Is your support team always flooded with questions? Does your service team notice that users, even after training, remain very dependent? Do customers complain about features simply because they don't know them? If you answered yes to these questions, you already understand that it is necessary to provide quality information to your users, clients and consumers. A properly structured and populated knowledge base will free up your customer service, relationship and technical teams, eliminating routine and repetitive questions from your queue. Your customers will also be more satisfied with the different service channels available, as they will be able to resolve their queries with greater autonomy and speed. In addition to the positive effect on customer perception, the knowledge base also allows your employees to focus on more complex or strategic activities, since simple queries are resolved in another channel. It is worth highlighting that the big challenge is not just structuring a knowledge base and choosing the best platform for consuming it, but keeping this content up to date, coherent, with quality and relevance for your customers. Connect the maintenance of your knowledge base to the release cycle of your products and services. The process needs to be at the root! At Softplan we make extensive use of the knowledge base for the Justice segment. Discover SAJ Ajuda, implemented with Zendesk software and serving customers in Brazil and Colombia: https://sajajuda.softplan.com.br/hc/pt-br

Communities of practice and the power of networking

A community of practice aims to bring together a group of individuals who meet periodically around a common interest in learning and in applying what they have learned. In this context, this collective learning is the cradle of improvements in processes and systems, of innovations in the business model and, especially, of a much broader view of the entire context in which a product or service is developed and sold.
It is worth emphasizing that this concept values voluntary participation. Therefore, it is important that companies provide a favorable environment for discussing new ideas in a natural, safe and respectful way, without prior judgment. In addition, a minimum amount of time must be set aside for innovation. Identify the most productive period within your team and reserve agendas for this open space. Whoever has a topic tells the others, and then everyone participates. Nothing mandatory, but encouraged! This exchange of knowledge happens naturally between groups or co-workers with certain affinities, whether during a coffee break or lunch, in the corridors or in these people's moments of fun. Who has never left a happy hour with an idea they hadn't even imagined? Participants in these communities benefit from quick and informal communication resources such as WhatsApp, e-mail, discussion forums and platforms such as Slack. But be careful not to lose track of the conversation history: any sharing and collaboration channel is welcome, but it is crucial that people are able to find the information easily after the discussion. A minimum organization of themes is essential so that the idea does not get lost. The exchange that occurs in communities of practice allows organizational knowledge to expand: it stops being merely individual and starts to crystallize in the organization's knowledge network, with all its dynamism and volatility. An important note: communities of practice also take place on external platforms, with no direct connection to companies. Search for content groups on LinkedIn and you will discover a series of possibilities for contacts and knowledge!

A bank of ideas for the innovation funnel

This involves associating a repository with a structured process to capture, evaluate and monitor new ideas generated in different work groups, or even in individual initiatives by employees, especially for implementation in new products or services. It is the evolution of the concept of "suggestion boxes": a systematic way to collect and select actions to improve operations, suggestions for new processes and products, applications of new controls and services, ideas raised through competitions, as well as reports of experiences that can promote a revolution in the business model. Associate the collection of ideas with development processes such as brainstorming, design thinking and prototyping. What matters is to encourage it! This practice can be used by employees according to the most appropriate strategy for the company, often associated with solving specific problems in each ideation cycle and linked to gamification and recognition actions for the best ideas. A good way to stimulate participation is to hold rounds launching challenges on specific topics and to allow people from different teams to work on the process, even when the topic is not directly linked to their role. This is also an excellent way to identify talent and promote employees' career development. Oh, and yes, your customers can participate too! Bringing them into these discussions can raise proposals that had not yet been imagined, and when the customer is included in the process, the chance of loyalty also increases.
Here is a tip with tools to collect ideas: https://aprendeai.com/criatividade/ferramentas-de-criatividade-e-geracao-de-ideias/

Gamification to attract the hidden players in your operation

The current generation of employees in your company, and in its clients' teams, grew up under the influence of different types of games during childhood and adolescence. The current size of the games market reveals the potential of game elements in the technological revolution and also in the dissemination of knowledge. Gamifying learning means transforming the training process into a journey with a sense of belonging and clearly defined stimuli. In this way, it is possible to provide, through game elements, new strategies for creating, collecting, disseminating and absorbing new knowledge. This technique can help people develop skills and competencies aimed at the objectives of their business and at serving their customers. Some games were developed specifically to promote Organizational Intelligence (OI) and Knowledge Management (KM) processes in organizations. The game "The Corporate Machine", for example, has the goal of dominating the market your company occupies; in other words, it teaches strategy in a playful way. Gamification as a process is, in fact, an important ally for the development of organizational intelligence, as well as for the construction and management of knowledge within organizations. Discover the game "The Corporate Machine" here: https://g.co/kgs/R3Ng58

Yes, management by indicators is part of knowledge management

Who has never heard the saying: what you don't measure, you can't manage? Remember that measuring involves knowing the processes, their metrics and the expected results. A vast amount of knowledge is generated from process performance indicators, because they: provide data for the manager to act at each stage of the process, identify gaps and promote improvements; allow a better understanding of the scenario and greater predictability for future actions; provide greater accuracy in the manager's decision-making, whether preventively or reactively to solve problems; allow the creation of dashboards containing all the information available in real time for actions with the appropriate timing; highlight the results achieved, based on facts and objectively; and feed artificial intelligence and machine learning systems. Therefore, indicators are part of the flow of knowledge, as they make it possible to understand the context of processes, products and services, as well as to drive actions towards the improvements and innovations your business needs. Furthermore, knowledge dissemination processes also have their own indicators, which feed back into management processes, not only to monitor results but to keep evolving strategic actions. At Softplan we use the Appus Skills tool to diagnose technical knowledge and map the gaps to be addressed in our training. I recommend reading the blog of our partner Appus: https://www.appus.com/blog/people-analytics/o-que-e-people-analytics/

Ah, knowledge! Some companies from different segments have already added an area or position related to knowledge management to their structures. The challenge of this area or professional is the process of structuring, expanding and maintaining the company's intellectual capital. Various strategies, platforms and procedures, such as those mentioned in this post, can promote a profound cultural change regarding the value of knowledge within your business.
Reflect on everything we have mentioned and answer: what would your company be without the dissemination and evolution of the knowledge it requires? As you reflect, remember that: you do not lose the knowledge you have when you share it; it multiplies! Everyone has their own way of "learning more easily", and that's okay; when thinking about your knowledge management solutions, consider the different types of "learners" and create strategies that fit their particularities. And, finally, knowledge does not depreciate with use; unlike other assets whose value derives from use or consumption, it is exponential!

Code Quality: Learn how to review code, measure its quality, and get tool tips!
Tech Writers October


Developing quality code is essential to reduce the number of bugs in software, make maintenance easier and retain more users. When people use software, they expect it to solve their problems and optimize their routine; if the software has many problems, they tend to lose confidence in it and switch to a competitor. So today, here at Softplan, we will explain what quality code is, how to review your code, and which tools can help. Check it out!

What is code quality?

In general, code quality does not have an exact definition. Each development team establishes its own, which usually boils down to a combination of common factors: maintainability, testability, readability and security. In other words, code can be considered quality code if it is readable, easy to maintain, safe and simple to test. Writing quality code, or clean code, involves a series of practices and standards recommended during development. These practices are defined at the beginning of the project and, by following them, combined with the use of manual or automatic tools to measure code quality, it is possible to build a more reliable product.

Why is code quality so important?

Implementing quality code matters mainly for the following reasons: it improves maintainability and makes future improvements easier; it reduces the risk of errors and bugs; it speeds up code deliveries; it helps with customer retention; it improves the user experience (UX) with the final product; and it ensures greater data security. To elaborate: quality code makes future maintenance and improvements simpler and quicker to implement, keeping technical debt small, so there is no need to spend hours looking for a bug in spaghetti code. Another benefit is delivering to the customer faster, with few or no bugs, since the team will not spend so many hours trying to understand overly complex code. Code quality also guarantees a better user experience, avoiding bugs and performance problems in the final product. Furthermore, it has a direct impact on how secure the software is; even small defects, such as not validating user input, can leave an application exposed to bad actors and larger attacks. With these benefits, customers increasingly trust the product delivered to them, which helps retention. That said, software quality does not refer only to clean code, but to a set of factors that must be adopted, such as documentation, good practices, management and monitoring.

What are the characteristics of quality source code?

For source code to be considered quality code, it must be easy to understand, easy to maintain and must fulfill its functions efficiently. The main characteristics of quality source code are: Readability: the code must be easy for the development team to read and understand, with clear syntax, no confusing abbreviations and a consistent coding style; Simplicity: it must be simple and concise in order to reduce errors and bugs; Modularity: the code needs to be divided into independent modules or functions, each with a clear and well-defined responsibility.
This makes code easier to test, debug and maintain; Documentation: it is important to document the code properly, to help other people on the team understand what each part of the code does; Testability: it must be easy to test and have a comprehensive set of unit tests to ensure it works correctly; Reliability: it must be reliable and handle errors and exceptions properly, returning clear and informative errors when they occur. Furthermore, to ensure code quality it is important to have a continuous process of reviews and improvements.

So what is bad code?

Code quality is related to its level of complexity and readability. When code is very complex, spaghetti-like, it ends up being difficult to understand and maintain, and is therefore considered low quality. If the code is poorly organized, lacks documentation, does not follow good programming practices, and has poorly structured comments (or no comments at all) to aid understanding, it is a sign that it needs to be revised. Code can also end up being bad if it has not been tested, as this increases the chances of bugs and errors. Complex code, in addition to having a major impact on the team's productivity, generates high support costs. It can also lead to system errors and incompatibilities which, if not dealt with quickly, can make the end user lose confidence in the product.

How to implement code quality in everyday life?

There are some points you can follow to improve the quality of your code, and they make all the difference: Think twice before writing the code and, once it is done, think about whether it is in the most readable form possible; Follow best practices such as SOLID (and take the opportunity to also learn about the Singleton pattern); Use code quality tools like SonarQube; if you are using an IDE such as Eclipse or IntelliJ, you can also use the Sonar plugin (SonarLint); Do not create validations that already exist in libraries, such as isEmpty or isNull; many open source libraries provide methods for this, and one example is the Apache Commons library, a real Swiss army knife for developers (see the sketch after this section); Before creating a class, check whether another one with the same functionality already exists; Talk to your immediate leader or tech leader about the importance of establishing coding standards to improve the software as a whole.

The importance of documentation in code quality

When a team hands a project over to another team, the absence of supporting documentation for the code is always a warning sign. If the previous development team does not provide any documentation, it suggests the project was executed with a flawed approach from the start. The code architecture and the "big picture" will likely be missing, which can easily result in project failure. With good documentation available, a new developer joining the project can go through an onboarding process that takes a few days instead of one or two weeks, with a significant reduction in development costs.
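As a small illustration of the point above about reusing library validations (a hedged sketch; the CustomerValidator class and its rule are made up for this example), Apache Commons Lang already ships null/empty checks such as StringUtils.isBlank:

```java
import org.apache.commons.lang3.StringUtils;

public class CustomerValidator {

    // Instead of hand-writing null/empty/whitespace checks, reuse the ones
    // already provided (and tested) by Apache Commons Lang.
    public void validateName(String name) {
        if (StringUtils.isBlank(name)) {
            throw new IllegalArgumentException("Customer name must not be empty.");
        }
    }
}
```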
How to measure code quality?

It is possible to evaluate the quality of code in different ways, for example with a code review policy, through tools, and/or with feedback from users. The main ways to measure the quality of source code are: Static code analysis: this technique examines the source code without executing it, in order to identify common problems such as redundant code, unused variables, and so on; Code metrics: metrics are used to evaluate the code and may include code size, complexity, cohesion index, among others; Unit tests: by testing each isolated part of the code, it is possible to identify logic errors, syntax errors, security flaws, etc.; Code review: implement a code review culture, in which the team's developers review other professionals' code, which helps identify problems not detected by tools; Feedback from users: with feedback from the users of your software, it is possible to create bug reports and evaluate system response time, ease of use, and more. In addition to these points, there are tools that perform a complete analysis of new code pushed to repositories.

Tool tips for code quality

Some tools can help you ensure the quality of your source code. The best known is SonarQube, which checks almost everything (code quality, formatting, variable declarations, exception handling) and integrates with CI/CD pipelines through a single-line command. ESLint is a static code analysis tool for JavaScript; it can check whether the code follows style rules and good practices. With CodeClimate, another static analysis tool, you can get feedback on code complexity, duplication, test coverage and vulnerabilities. Postman is a handy tool for testing the functionality of APIs, while CodeFactor gives you information such as test coverage, complexity and code duplication. In addition to these, GitHub Actions, a workflow automation platform, can be used to run code quality checks on a GitHub repository.

How to review the code?

To perform a good code review, it is important to understand the context in which the code is used: its objective, its overall architecture and its target audience. Some factors are fundamental in the review, such as checking whether the code is consistent with other related code, whether it is readable, whether it has documentation and whether it avoids redundancy. It is also important to test the code to check for errors and confirm it works as expected, and to leave comments pointing out improvements and bugs found. After these steps, the ideal is to propose solutions to the problems encountered and ask for the opinion of other people on the team.

Did you like learning a little more about code quality and its main implications? Check out more content like this on our Tech Writers blog! And if you want to be part of our team, follow our openings on the Careers page!