
Tech Writers

This is our blog for technology lovers! Here, Softplayers and other specialists share knowledge that is fundamental to the development of this community.

Singleton: complete guide to understanding this controversy once and for all!
Tech Writers July 04, 2022


Singleton is a software design pattern. Design patterns, as I discussed in my article on Strategy, are nothing more than a catalog of solutions to common problems. Each design pattern has pros and cons: implementation benefits and costs. In the specific case of the Singleton, critics consider these "costs" too high. In this article, I will explain what a Singleton is, what "problems" it can bring to your code, and what the alternatives to it are. Keep reading!

What is a Singleton?

To understand the difficulties that the Singleton design pattern brings, we first need to define what a "Singleton" is. After all, it is often confusion about the concept itself that turns some people against the pattern. We can start with the following statement: a single instance of a given object during the lifetime of a request is not a Singleton. A Singleton, by definition, is precisely a globally accessible single instance in the project. Following the descriptions given by the GoF (Gang of Four) in the book "Design Patterns", we can list three basic characteristics that define what we need to create a Singleton:

- It must have a single instance during the lifetime of the application;
- It must not be possible to instantiate it through its constructor, which should preferably have private visibility;
- Its instance must be available globally in the project.

These definitions are very important, because failing to understand what a Singleton represents can lead people to misunderstand the accusation that it is an anti-pattern.

Anti-pattern? Is that something you eat?

Anti-patterns, contrary to what the name suggests at first glance, are not necessarily the opposite of a design pattern. So-called "anti-patterns", in short, are common responses to recurring problems that are usually ineffective and run a high risk of being counterproductive. Pay attention to the detail: they are "common responses" to "common problems". In other words, being an anti-pattern does not nullify the fact that the Singleton is still a design pattern. Anti-patterns have more to do with the wrong use of a correct solution than with the solution itself being something bad ─ although that can also happen.

The Singleton in practice

The Singleton, as already explained, provides the developer with a single instance of an object with global scope. Below is an example of what its implementation looks like in the Java programming language:

```java
public class Singleton {

    private static Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance;
    }
}
```

Note: although the name "Singleton" appears in the example's class name, you do not need to mention the design pattern in the name of your class; it was done here only to illustrate the structure. The Singleton can be a viable solution if you are in a situation where these two needs arise together:

- You need only one instance of an "object X";
- You need to access this instance from anywhere in your application.

If either of the two items above is not part of your needs, there is probably another way out of your problem. If you don't need a single instance, your object's constructor doesn't need to be private: just instantiate your object wherever you need to use it. If you don't need to access your instance from anywhere, you have a limited scope; in this case, an alternative is to identify "from where" your object needs to be accessed. One way to do this is to create a private static field that stores your instance and pass this object along, through dependency injection, wherever it is used.
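One implementation note before moving on: the lazy initialization in getInstance() above is not thread-safe, since two threads can pass the null check at the same time and create two instances. Below is a minimal sketch of a common remedy in Java, double-checked locking (the class name is illustrative and not part of the article's original code):

```java
public class ThreadSafeSingleton {

    // volatile guarantees that a fully constructed instance is visible to all threads
    private static volatile ThreadSafeSingleton instance;

    private ThreadSafeSingleton() {}

    public static ThreadSafeSingleton getInstance() {
        if (instance == null) {                          // first check: lock-free fast path
            synchronized (ThreadSafeSingleton.class) {
                if (instance == null) {                  // second check: now under the lock
                    instance = new ThreadSafeSingleton();
                }
            }
        }
        return instance;
    }
}
```

In Java, an enum with a single constant or the initialization-on-demand holder idiom gives the same guarantee with less ceremony; the point is simply that, as we will see below, a Singleton needs language-specific care to be considered thread-safe.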
The topic "dependency injection" and its connection with the Singleton will be discussed later in this article.

Benefits of Singleton

Despite some people's resistance, Singletons have very interesting benefits. Check out some of them:

- It is extremely easy to access: acting as a "global scope" variable, its access is exceptionally simple. There is no need to carry your object instance all over your system;
- It guarantees that there is only a single instance: regardless of the object or the reason, the Singleton will guarantee that there is only a single instance within your project. This way, logic flaws involving this rule are avoided.

Something interesting to comment on, directly connected to the benefits mentioned above, is its versatility to act as a mutex (mutual exclusion) key. In multi-threaded environments, where two or more operations can compete, the "key" instance remains one, which makes execution control easier. Of course, the Singleton itself is not free from race conditions. Therefore, it needs appropriate treatment, depending on the language in which it is being used, to be considered a thread-safe solution, as in the double-checked locking sketch above. If you have never heard of mutexes (also called "semaphores" or "locks") but are interested in the subject, I recommend reading this article.

Singleton costs

With a brief search, we can find several articles by different authors commenting on the costs involved in implementing a Singleton. Among the most discussed, the ones I consider most interesting to address are:

- It breaks SOLID principles;
- The "lack of traceability" factor;
- It makes tests difficult to implement;
- It sacrifices transparency for convenience.

To make it easier to digest what each problem represents, I will explore each of these factors in the following topics.

It breaks SOLID principles

The first and most common complaint about the Singleton is that it breaks SOLID principles; to be more exact, the Single Responsibility Principle (SRP). The following snippet is responsible for the problem:

```java
public static Singleton getInstance() {
    if (instance == null) {
        instance = new Singleton();
    }
    return instance;
}
```

This happens because, in addition to controlling the object's life cycle (its internal instance), the Singleton also provides access to it. Adding these two factors together, we end up with two responsibilities. But what should be done to avoid breaking the SRP? In short, the responsibility for populating the instance should be delegated to another class, which is exactly one of the proposals of another design pattern, known as Monostate. If you want to know more about the SRP and understand why it is important, I recommend reading our article Discover one of the most important SOLID principles: Single Responsibility Principle, written by my colleague Pacifique Mukuna.

Lack of traceability

Suppose you are in a situation where you need to create and control the user instances of a given system, and only one user can be logged in at a time. One possibility would be to create a "manager" class for this logged-in user instance. There are several possible ways to proceed with the development, and creating a Singleton to serve as the "manager class" is one of them.
With it, you would not need to worry about which parts of the system will consume the logged-in user: just retrieve the instance of the manager class, which is accessible from anywhere in the project. Now we can reflect: "in this system, which objects can change the logged-in user?" It seems easy: "the manager" would probably be the first answer. Reflecting a little more on the solution, we can observe a very important detail about this design: the manager class is accessible from anywhere in the system, so the solution implies that the user can be changed from anywhere as well. Going a little further in this exercise, we can forget the "user" class and replace it with a "generic object X" that is constantly modified by the objects that call it. Note that the more our manager object is used, the more common the "lack of traceability" factor becomes. The issue here is more philosophical than practical. The fact is that, no matter how justified its use, we can conclude that there are two absolute and intrinsic issues in this design pattern:

- You cannot guarantee that the properties of your object will not change when they should not change. In this regard, global access is the factor that makes it very difficult to prevent misuse of the Singleton;
- As a consequence of the previous factor, if there is an undue change in the properties of your object, it is extremely complex to identify the point at which the change is taking place, mainly in large applications where operations that modify object properties are common.

It makes tests difficult to implement

On this point, I want you to pay attention: I am not saying that the Singleton is difficult to test, but that it makes implementing tests difficult, generally in the code that consumes it. To keep this explanation short, take into account that, to understand why the Singleton makes test implementation difficult, you need some grounding in automated tests, in particular unit tests and how they are implemented. In short, a unit test rests on the following ideas:

- Test a class in isolation: if all parts of a given system work independently, there should be no problem when they are all together (this is a debatable subject and depends on the developer's intention);
- Test independently: in addition to testing the class in isolation, each test must be absolutely independent of the others. Regardless of execution order, all tests must pass;
- Test quickly: as a result of the previous points, unit tests have the peculiarity of being small in scope and, therefore, quick to execute.

Unit tests are the base of what is known in software engineering as the "test pyramid". In order of priority, they are the ones that should exist in greatest abundance in projects. To understand the problem that a Singleton can present during the execution of a unit test, we can pick up the idea described above of a "manager class" for the logged-in user. Let's call it RegistroUsuario.
Furthermore, let us assume that the system has a service with the following check:

```java
public class Servico {

    public boolean usuarioPodeCadastrarNovosClientes() {
        // Gets the instance of the RegistroUsuario Singleton
        RegistroUsuario registro = RegistroUsuario.getInstance();

        // Stores the user currently logged into the system
        Usuario usuarioLogado = registro.getUsuarioLogado();

        // Indicates whether the logged-in user has the permission
        return usuarioLogado != null && usuarioEhAdmin(usuarioLogado);
    }

    private boolean usuarioEhAdmin(Usuario usuario) {
        return "ADMIN".equals(usuario.getPermissao());
    }
}
```

The service above is relatively simple and only checks whether the user currently logged into the system has permission to register new customers. The method must return true if there is a logged-in user and that user has the "ADMIN" permission. If you are used to writing tests, you will easily identify three possible scenarios: the user is ADMIN; the user is not ADMIN; or there is no user logged in. We can create a unit test class to automate these checks, as in the example below:

```java
public class ServicoTest {

    private Usuario usuarioAdmin = new Usuario("ADMIN");
    private Usuario usuarioComum = new Usuario("COMUM");
    private Servico servico;

    @Before
    public void setUp() {
        this.servico = new Servico();
    }

    @Test // When there is no user logged in.
    public void teste01() {
        RegistroUsuario.getInstance().setUsuarioLogado(null);
        Assert.assertFalse(servico.usuarioPodeCadastrarNovosClientes());
    }

    @Test // When a user is logged in, but does not have the permission.
    public void teste02() {
        RegistroUsuario.getInstance().setUsuarioLogado(usuarioComum);
        Assert.assertFalse(servico.usuarioPodeCadastrarNovosClientes());
    }

    @Test // When a user is logged in and has the permission.
    public void teste03() {
        RegistroUsuario.getInstance().setUsuarioLogado(usuarioAdmin);
        Assert.assertTrue(servico.usuarioPodeCadastrarNovosClientes());
    }
}
```

It turns out that unit tests, in most setups, run in parallel precisely because they are independent. It is exactly at this point that the Singleton becomes a problem. Parallel execution makes the tests modify the Singleton concurrently: when teste01, for example, asserts by invoking the service method, it is quite possible that another test, such as teste03, has already modified the Singleton's value again, which would cause a false negative in teste01's assert.
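The timing diagram can be approximated by the timeline below (the interleaving is illustrative; the names are those of the test class above):

```
time ───────────────────────────────────────────────────────►

worker thread A (teste01)            worker thread B (teste03)
-------------------------            -------------------------
setUsuarioLogado(null)
                                     setUsuarioLogado(usuarioAdmin)
usuarioPodeCadastrarNovosClientes()
  reads usuarioAdmin, returns true
assertFalse(true) fails: a false negative caused by shared global state
```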
Returning to the second item listed in the definition of a unit test: tests must be independent. Therefore, in some cases it is extremely difficult to unit test code that consumes a Singleton, especially if the Singleton is directly connected to the return value of the method you are testing.

Sacrifices transparency for convenience

In the article "Singletons are pathological liars", Misko Hevery gives good context, through tests, for what the problem is here: Singletons, among many other issues, can make the analysis and discovery of dependency chains exceptionally difficult. The example used by Hevery can be considered somewhat extreme. And perhaps it really is, because it is a very particular case of an event that occurred to him while he was developing at Google. But his main point remains valid: the Singleton is nothing more than global state. Global state allows your objects to secretly get hold of things that are not declared in their interfaces. As a result, Singletons turn their interfaces into pathological liars. Thus, we can interpret the phrase "sacrifices transparency for convenience" as something common to global-scope variables, since there are no explicit dependencies in the interface of the code that consumes the Singleton. Imagine, for example, that the fazAlgumaCoisa() method of an object A calls the fazOutraCoisa() method of an object B without knowing that fazOutraCoisa() uses the Singleton. Once again, this makes dependency-chain analysis exceptionally complex. Hevery, in particular, gave a talk at Google Tech Talks in 2008, which I highly recommend if you understand English and are interested in the topic. In it, he goes deeper into why he considers Singletons a bad practice.

Singleton versus Monostate

The Monostate pattern, or simply "Monostate", is a design pattern proposed by Robert C. Martin in his 2002 article Singleton and Monostate as a "clean code" alternative to the Singleton. This design pattern proposes storing a single instance of an object and providing global access to that instance, just like the Singleton, but with some small differences:

- Its constructor must be public;
- Its methods cannot be static;
- It must have a private static property to store the instance of the desired object.

```java
public class Monostate {

    private static Object objetoInstancia;
    // ...other properties

    public Monostate() {}

    public void setObjetoInstancia(Object objeto) {
        Monostate.objetoInstancia = objeto;
    }

    public Object getObjetoInstancia() {
        return Monostate.objetoInstancia;
    }

    // ...other methods
}
```

Observing the structure of its implementation, we notice that the Monostate, besides needing to be instantiated wherever it is used, does not control the life cycle of its object instance; this control must be implemented by the code that consumes it. This brings three main advantages to those who use it:

- It does not break SOLID's Single Responsibility Principle and, consequently, we keep that principle's benefits;
- Because it needs to be instantiated in order to consume the single instance of the object, which is private to the Monostate, it can be considered a more transparent solution than the Singleton;
- Even if there is contention between calls to the Monostate's getter, there is no internal control that creates an instance for you. It is therefore harder for the problem of unintentionally created instances to occur; if it does occur, it is probably related to the invocation of the setter in the code that consumes it.

In terms of comparison, there is not much more to look at here. It is often said that Monostate and Singleton are two sides of the same coin, apart from the category each falls into: Monostate is a behavioral design pattern, while Singleton is creational. Still, it is worth knowing which is the better fit for each situation. Are you curious? Want to know more about Monostate? Read Uncle Bob's full article on the subject: SINGLETON and MONOSTATE.
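To make the "two sides of the same coin" idea concrete, here is a minimal sketch, assuming a Monostate variant of the RegistroUsuario used earlier (the class is illustrative, not from the original article): two freely created instances observe the same state.

```java
public class RegistroUsuarioMonostate {

    // The state is static and shared; the instances themselves are cheap and disposable.
    private static Usuario usuarioLogado;

    public RegistroUsuarioMonostate() {} // public constructor: anyone may instantiate it

    public void setUsuarioLogado(Usuario usuario) {
        RegistroUsuarioMonostate.usuarioLogado = usuario;
    }

    public Usuario getUsuarioLogado() {
        return RegistroUsuarioMonostate.usuarioLogado;
    }

    public static void main(String[] args) {
        RegistroUsuarioMonostate a = new RegistroUsuarioMonostate();
        RegistroUsuarioMonostate b = new RegistroUsuarioMonostate();

        a.setUsuarioLogado(new Usuario("ADMIN"));

        // Two different instances, one shared state: prints ADMIN
        System.out.println(b.getUsuarioLogado().getPermissao());
    }
}
```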
Singleton versus Dependency Injection

While one of the biggest premises of the Singleton is its convenience, transparency is the key to dependency injection. The logic behind this topic is very simple: if a class or method requires a certain object to perform its operations, this object should be injected as a dependency. By rewriting the usuarioPodeCadastrarNovosClientes() method of the service described above, we can, instead of retrieving the user from a Singleton, make explicit the dependency that the method has on a Usuario. This is also known as "passing the object by reference".

```java
public class Servico {

    public boolean usuarioPodeCadastrarNovosClientes(Usuario usuario) {
        return usuario != null && usuarioEhAdmin(usuario);
    }

    private boolean usuarioEhAdmin(Usuario usuario) {
        return "ADMIN".equals(usuario.getPermissao());
    }
}
```

The service does not need to worry about where the user comes from. That is a concern of the client ─ whoever consumes the service. With this small change, the service became:

- Transparent: it is clear what its dependencies are and how it handles them;
- Easy to test: without the Singleton, our problem with running tests in parallel no longer exists.

Finally, dependency injection also prompts reflection in whoever is writing the code, bringing very interesting insights. One example is to ask whether our usuarioPodeCadastrarNovosClientes method really needs the whole Usuario object, or whether the permission String alone would be enough. In fact, does our service really need a method to carry out this check at all? What if the Usuario object itself had an internal method to validate this rule? Questions like these are entirely pertinent, and they occur naturally as soon as the dependencies of that piece of code are explicit.

Author's opinion

As mentioned at the beginning of this article, every design pattern has implementation costs and benefits (trade-offs). It is important to know the "good side" and the "bad side" before judging or defending an idea. I wrote this article, in large part, because everywhere I searched I found new information with very few explanations to justify what was being said. This article, therefore, serves as study material consolidating all this information in one place. Most authors take a side. However, on some occasions there is no way around it: there are situations where you need a single instance of an object, and that instance needs to have global scope. But more important than needing to use a resource is understanding how that resource works.

Special thanks

Writing this text was special for me, and many people helped me achieve this result, so I would like to give some thanks. I would first like to thank my wife for taking the time to read and re-read versions of this text. Even though she is not from the field, she was a very important part of it and one of my biggest motivations to keep writing. I would also like to thank my colleagues Fernando Costa Leite and Felipe Pereira Maragno for giving me valuable feedback during the development of the article, and Fabio Domingues and Francisco Hillesheim for supporting the writing of the advantages that the Singleton brings to the developer, as well as adding to the idea of a thread-safe solution. Last but not least, I would like to thank Cleidir Cristiano Back, Vinicius Roggia Gomes, and Renato Mendes Viegas for reviewing the final result: you were fundamental in refining the article. Did you enjoy learning a little more about the Singleton controversy? Tell me your opinion here in the comments! Check out more content like this on our Blog. Want to be our next Tech Writer?

SaaS platform: what you still don't know about it!
Tech Writers June 20, 2022


You certainly agree that cloud computing is becoming vital for companies. In this context, a huge share of its popularization in the market is due to SaaS platforms, short for Software as a Service. To give you an idea, 80% of organizations plan to have all their systems in SaaS by 2025, and 38% already run their applications almost entirely on the model; the data is from DevSquad. Considering that this segment is expected to reach US$ 143.7 billion in 2022, according to Gartner, it is essential that every technology professional be aware of its particularities, challenges, and trends. This is what we will cover in this article. Follow along!

What do you still need to know about the SaaS platform?

Thanks to the rapid and growing expansion of cloud computing, it is currently more practical, cheaper, and faster for SaaS developers to deploy their applications than it would be in traditional, on-premises software development. As you already know, all cloud technologies run on underlying systems. However, SaaS platforms specifically concern business applications that operate via the cloud. Currently, practically all essential tools for companies are made available by SaaS software providers, so much so that these applications can be found in different formats, scopes, and orientations. Despite this variability, SaaS systems can almost always be grouped into three specific categories. Discover how each of them is defined and what their market purposes are:

Collaborative SaaS

In the case of collaborative SaaS platforms, the purpose is to help teams of professionals work together, whether in the same corporate environment or remotely. So whenever you come across a cloud system aimed at exchanging messages, sharing files and documents, video conferencing, process integration, and the like, it can be considered a collaboration SaaS. Again, the examples are broad: they include video-calling apps like Zoom, web project-management solutions like Trello, and so on.

Technical SaaS

Finally, SaaS systems in the technical category are those aimed at carrying out, improving, and managing technical processes. The examples are web software that require technical skills to be used. This involves applications such as Google Analytics, used by marketing professionals, Adobe's online suite, used by designers, Sienge, for managers in the construction segment, etc.

Main features of SaaS

The biggest advantage of SaaS platforms is certainly the possibility of running everything via the web. In other words, users do not need to install the systems locally and run them on their own devices, which reduces initial hardware investments. In addition, providers are responsible for all application performance, availability, and security demands. More than practicality, this also generates savings on issues such as licensing and support. Beyond these benefits, SaaS projects and products have other interesting features that justify their current popularity:

Multi-tenant architecture

The distribution of SaaS platforms is based on a multi-tenant architecture shared by all customers. This means that users run the same source code and automatically receive new features whenever they are added (see the sketch after this section).

Easy customization

Even with a multi-tenant architecture, the vast majority of SaaS systems are customizable. The intention is to better meet the demands of each user, with no customization affecting the underlying infrastructure.
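To make the multi-tenant idea concrete, here is a minimal sketch with hypothetical names, where an in-memory list stands in for the shared datastore: all tenants run the same code, and every query is scoped by a tenant identifier.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.stream.Collectors;

// One record type shared by every customer (tenant) of the platform.
record Invoice(String tenantId, String number) {}

class InvoiceRepository {

    // Stands in for a shared, multi-tenant datastore.
    private final List<Invoice> table = new CopyOnWriteArrayList<>();

    void save(Invoice invoice) {
        table.add(invoice);
    }

    // Every read is scoped by tenantId: all customers run the same source code,
    // but each one only ever sees its own rows.
    List<Invoice> findByTenant(String tenantId) {
        return table.stream()
                .filter(invoice -> invoice.tenantId().equals(tenantId))
                .collect(Collectors.toList());
    }
}
```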
Easy access

Beyond access at any time, from any place, and on any device connected to the cloud, SaaS still offers the same degree of functionality as traditional software, but at much lower costs.

SaaS takes advantage of the consumer web

The ease we mentioned above also means that organizations can access applications and start using them immediately. This is because access is direct via the web, with the system already installed and configured.

SaaS challenges

Going beyond the code, benefits, and possibilities of SaaS platforms, it is important to highlight that they also pose certain challenges. As with any technology, there are relevant points of attention for practitioners, which include:

Issues beyond the customer's control

Even though applications can be customized, this is only possible to a certain extent: functionalities and resources are limited by the system architecture. Sometimes these tools may not completely meet each company's specific needs.

Increasing lifetime value

Customers do not purchase SaaS software once. Subscriptions must be renewed for features to remain available. Therefore, vendors must keep users engaged and interested in the value of the features.

Reducing the churn rate

Increasing the average time users invest in the product is directly related to decreasing the churn rate. Innovations emerge all the time on the internet and, if you don't keep up with them, the customer may not keep purchasing your SaaS every month or year.

Increasing the average ticket

Scalability is inherent to SaaS platforms: consumers can access more or fewer functions according to their demand. In this sense, more than increasing LTV and reducing churn, it is necessary to increase the average ticket with more useful and attractive extensions.

SaaS market trends for the future

Now that you know the features and opportunities offered by SaaS platforms, it's time to turn your eyes to the (already very near) future. After all, this is a market that evolves quickly and is full of new solutions. To stand out to consumers and remain relevant amid so many innovations, it is essential to pay attention to trends in the segment. Among those already setting new directions in the area, the following stand out:

Transaction-based payment models

As you saw previously, SaaS platforms are generally paid for via subscription models. In other words, the challenge is to keep customers' monthly and annual payments going for as long as possible. As new systems and resources keep emerging, more and more companies in the area seek to stand out by offering flexibility to consumers. This means that subscription formats are being replaced by pay-per-use, in which users pay according to their demand: the charge reflects the specific time of use and the level of resources used. In fact, tying larger payments to more robust tools increases the possibilities for better pricing.

WhatsApp and SaaS

Unlike the United States and European countries, Brazil has WhatsApp among its main means of communication. According to a survey released by Valor Investe, 80% of the population uses the app to communicate with brands. Aware of this particular reality of the national market, more and more organizations recognize the need to adapt their channels and approaches to this preference of Brazilians.
In this sense, SaaS platforms already offer a growing volume of functionality integrated with the application. This includes customer service, sales, prospecting, and help desk tools, among other similar resources. If you've made it this far, it's probably because you know that all this knowledge about SaaS platforms will add to your career. Softplan values professionals with this profile and works to ensure they reach their best version. Do you want to be the next Tech Writer on our team and help us transform people's lives?

Design methodologies, methods & processes!
Tech Writers June 06, 2022


Every day, we come across activities made up of stages. Often, we don't see them this way because they are part of our routine. But stop and think for a moment about the act of washing dishes. That's right, washing dishes! Personally, I have a very specific order for choosing the items I will wash, soap, rinse, and place on a rack to drain. My way of carrying out this household task is very different, for example, from that of someone who has a dishwasher. I once knew someone who pre-washed their dishes before even putting them in the machine. Note how this is a simple example of a mundane routine in which we find different processes to perform the same action. With this simple metaphor, I want to make you reflect on how we can achieve a goal through different processes. In this content, we will talk about design processes, their stages, and their particularities.

Washing the dishes: a mundane routine in which we find different processes to perform the same activity.

The concept of design processes

Processes, in my view, are strategies for emptying the mind and getting organized. They are steps we follow to set up stages and tools, so as to leave the environment around us prepared to solve a problem or carry out an activity. Organizing our steps generates clarity about what we need to do. Clear design processes lead to simple and smooth paths for resolving issues. Appropriate methods and techniques guide the designer in discovering the best way to solve problems and, in our case, in developing and improving a product. The process should be a guide for seeing where the design can go.

Design methods

Design methods are what guarantee the effectiveness of the process and the delivery of design quality. Setting aside the separation of design areas, be it industrial, product, or graphic design, when we look at the process we will always come across the "basics that work". Thus, "we can make a division of stages formed by information, observations and reflections that result from an investigation" (GERHARDT and SILVEIRA, 2009, p. 76). In this way, we can transform this knowledge into a "written account of what the researcher hears, sees, experiences and thinks during the course of a qualitative study" (BOGDAN and BIKLEN, 1991, p. 150).

Method steps

The first step is aligned with an understanding very similar to the briefing techniques common in the marketing area: we perform an alignment and look at data collection techniques, the popular Discovery of the product world. We then move to the discovery stage, where we seek, through research, the important questions regarding the product. In this way, we weigh relevant data against all the internally constructed hypotheses. Soon after comes the analysis and ideation stage. This phase is one of the most challenging, because we need to rationalize everything that was discovered in the information collected and, from this, devise the paths we will follow to solve the problem raised. The design, or idea-drawing, stage consists of the visual development that materializes the ideas, with the aim of making them tangible and testable: the famous prototype. In testing, we validate the discovery together with the personas built around the initial problem. Here, we observe opinions, behaviors, and the understanding of the solution we built. Finally, we arrive at monitoring everything that was validated and launched, to see the engagement with what was produced. This stage could very well be seen as final.
However, it feeds the generation of insights for the next cycles of the developed solution. In summary, the design process is: understand; research; design; analyze; follow up. During your design journey you will come across different process models that encompass these same steps. What matters, in the end, is that this journey be simple, direct, and structured.

Adapting to the design methodology

A clear and complete design process can provide orderly guidance, optimizing and simplifying the result. The process determines the design steps, while the design requires specific methods and supporting strategies. Methods determine the measurements and effects of the design and must be adapted and changed according to specific procedures. This way, problems are resolved efficiently and creatively. At this point, it is essential to remember that UX (user experience, that is, the study of the user's experience) is not a direct, pre-determined process. It requires constant adaptation. In the process of finding the ideal product and design, you need to move back and forth until you find the right fit. Therefore, we must adapt to the problems we are trying to solve together with other areas and teams.

Design process standards

The standards that make up design methods are an important part of the methodology, which is a much broader concept than a process. Methodologies comprise a set of principles and guidelines of best practice for applying the methods and processes related to a discipline. Design methodology is an area of knowledge studied by designers to research and test how a project result can be obtained; through it, it is possible to establish, in advance, a set of procedures, rules, and techniques for reaching a planned goal. A method is an isolated way of doing something; it relates to the project stages, while a methodology is a set of methods. Methodology is the entire execution of the project: it is the science that deals with the study of methods, techniques, and tools, their applications and definitions. Methodologies are thus composed of instruments for ordering, organizing, and logically supporting development, and for organizing the resolution of theoretical and practical problems. The methodology embraces the process, serving the designer with methods and tools. The dialogue between methodology and design is what allowed design to become teachable, learnable, and communicable. This is because design theory and methodology are developed from hypotheses and assumptions that aim to improve methods, rules, and criteria.

Conclusion

Knowing the basic theory can give us the power to apply it, and not just be a passive participant in projects. Every designer needs to be aware of the depth of this knowledge and its applicability in everyday life. This foundation makes us understand our daily actions so that, even in failure, the support of the methodology gives us a way to recover through some process. It is our responsibility as designers to go beyond the operational, so that our action within companies is active, and not merely replicative, without understanding the reasons why our processes exist. Bonsiepe et al. (1984, p. 34) point out that design methodology should not be confused with a book of cake recipes: cake recipes reliably lead to a certain result, while design techniques have only a certain "probability of success". Know how to use every tool at your disposal to deliver top-notch work, valuing what you do, even if it's doing the dishes.
Did you enjoy learning a little more about methodologies, methods, and processes in design? Check out more content like this on our Blog! Want to be our next Tech Writer? Check out our vacancies on the Career page!

Are distributed applications and coupling in software engineering worth it?
Tech Writers May 16, 2022


Nowadays, it is very common to see monolithic applications being migrated to, or even being born in, a distributed architecture, whether composed of coarse-grained domain services or fine-grained microservices. I will not go into, at this moment, the details and risks involved in a possible, and likely, premature decomposition. However, it is undeniable that distributed applications are an absolute trend in modern solution architecture. Part of this movement was encouraged by large technology companies. It was also driven by tools such as:

- Application containerization;
- IaC (Infrastructure as Code);
- Cloud computing.

These, in turn, solve problems that not long ago were insurmountable obstacles for any proposal of a distributed nature.

Are distributed applications worth it?

To find the answer, we need to start with a question: why is this architectural style so attractive to companies? It is perfectly understandable that developers and architects feel intrigued by the technical challenges implicit in this kind of approach. But, from a business perspective, what justifies adding this complexity to a software solution? Before discussing the advantages, it is worth mentioning that a distributed architecture should only be applied to systems whose complexity is very high, that is, those that are difficult to evolve and maintain in a monolithic architecture. It should not be adopted merely for the implicit technical challenges that usually motivate the technical team. It's not the focus of this article, but I leave a good reference on the topic here. That said, let's assume that a considerable analysis of the trade-offs has been carried out and that there is a real need and maturity that justify this type of approach. Some of the main advantages you can obtain are:

- Application maintainability;
- More testable code;
- Scalability and elasticity;
- Greater fault tolerance;
- Availability of services.

All of this allows the solution to respond quickly to business changes. Thus, it is possible to considerably improve the product's time-to-market, providing great competitive advantages.

The great villain of distributed applications!

To preserve these advantages, it is essential that the team is aware of the importance of protecting one of the most fundamental characteristics of a distributed architecture: independence between services. When two or more services in your ecosystem are coupled and have their independence compromised, the advantages provided by this style of architecture begin to be outweighed by its complexities. This directly affects the team's ability to respond quickly to business changes, and the competitive advantage ends up being lost.

Types of coupling in software engineering

I will briefly explain two of the main types of coupling in software engineering: static coupling and dynamic coupling.

Static coupling

This is the coupling that "ties" two software components together, causing changes in one element to affect all of its dependents. The greater the efferent coupling of your service, the greater its instability and the greater the chance of it being impacted by external changes. When we think about this definition, the most obvious example is the packages or libraries that our solution depends on. However, every element necessary for the service to start up also falls into this category, including databases and message brokers. This type of coupling is one of the main reasons why a domain-services-oriented approach (where it is common to use the database as an application integration point) is less flexible than a microservices architecture with independent databases. A consequence of this coupling is the almost inevitable possibility that a change at the level of the centralized database impacts more than one service in the ecosystem.

Dynamic coupling

This is the coupling that occurs during communication between two services. A practical example: suppose we have two services that communicate synchronously. They will be dynamically coupled during the execution of this communication. If the service being consumed fails, or even faces a performance problem, this momentary coupling will also affect the consumer's characteristics. Scaling a service does not guarantee efficiency if it is dynamically coupled to inefficient services. In this scenario, it might make more sense to accept static coupling with a message broker and communicate asynchronously, avoiding dynamic coupling between the applications. But, as with everything in software engineering, the answer always depends on the context and the real needs of the business.
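A minimal sketch of the two styles of integration, with hypothetical service names and endpoints (the MessageBroker interface stands in for a real client library such as Kafka's or RabbitMQ's):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class PedidoService {

    private final HttpClient http = HttpClient.newHttpClient();

    // Synchronous call: for as long as it runs, PedidoService is dynamically
    // coupled to the billing service; its latency and failures become ours.
    String cobrarSincrono(String pedidoId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://faturamento.local/cobrancas/" + pedidoId))
                .timeout(Duration.ofSeconds(2)) // bounds the window of dynamic coupling
                .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    // Asynchronous alternative: we accept static coupling to a message broker
    // in exchange for removing the dynamic coupling to whoever consumes the event.
    void cobrarAssincrono(MessageBroker broker, String pedidoId) {
        broker.publish("pedidos.cobranca-solicitada", pedidoId);
    }

    // Minimal abstraction standing in for a real broker client (Kafka, RabbitMQ, etc.).
    interface MessageBroker {
        void publish(String topic, String message);
    }
}
```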
Conclusion

Static and dynamic coupling are inevitable in a distributed architecture. Instead of fighting them, we must manage them. The team must be aware of the importance of independence between services and of the impact that each type of coupling can have on the architectural characteristics of the solution. Weighing the trade-offs and documenting the whys are a fundamental part of the work required to keep a software architecture evolutionary and flexible. In this way, the competitive advantages that justify this interesting architectural style are preserved. Did you enjoy learning a little more about distributed applications and coupling in software engineering? Check out more content like this on our Blog! Want to be our next Tech Writer? Check out our vacancies on the Career page!

Agile Software Development in the Civil Construction segment at Softplan
Tech Writers May 02, 2022


In 2016, Softplan's Civil Construction segment made the strategic decision to change its business model, replacing the sale of perpetual software use licenses (LUs) with distribution under the Software as a Service (SaaS) model. At that time, it was identified that it would be necessary to create and improve internal skills in our area responsible for solutions for the construction industry. In a traditional usage-license model, it is normal for the customer to wait months to receive a new version of their system. A SaaS system releases software updates constantly, whether to correct flaws and vulnerabilities or to provide new functions linked to technological or business innovations. In the SaaS model, with recurring subscription payments, the customer does not need to make large initial investments to acquire software licenses. While this makes it easier for a customer to come on board, it also makes it easier for them to leave, because the client has not disbursed a large financial amount that still needs to generate Return on Investment (ROI). Therefore, investing in the software solution is a way to facilitate customer retention, as it focuses on customer satisfaction. We identified two major objectives: frequent software releases, and customer satisfaction guaranteed by the software's quality. To achieve these objectives, some changes would be necessary:

- Code improvements;
- Improvements in documentation;
- Modernization of software engineering practices;
- Improvement of the processes that governed the development of Sienge, a specialized civil construction system.

To promote these improvements, Softplan's Civil Construction segment invested in Agile Software Development. In this content, we will explain what Agile Software Development is, how we carried out the process, the results obtained, and much more. Read on!

Agile Software Development

Agile Software Development corresponds to behaviors, processes, and practices used in the creation of software that aim to be lighter, or more agile, than "traditional" processes. Most process frameworks for agile software development consider the delivery of small increments of the system in short, time-boxed periods. Each period comprises all the stages of the software life cycle necessary to deliver these increments: planning, analysis, programming, testing, and documentation. The objective of the period itself is to carry out these gradual additions to the system. Agile Software Development is an empirical process: short deliveries aim to enable short cycles of feedback, learning, and adaptation (of requirements, solution design, and planning). In this way, the solution being developed emerges during development from these cycles. Another characteristic is more direct and constant communication and collaboration between development and the business/client in understanding, validating, and developing the solution.

Agile software development practices

Agile software development practices have evolved since the 1990s, initially identified as "lightweight" methods, as opposed to traditional, "heavyweight" methods. In the spring of 2000, leaders of the Extreme Programming (XP) community met to discuss the practices of this methodology and the relationship between XP and other "lightweight" methods, which included Scrum, DSDM, Adaptive Software Development, Crystal, Feature-Driven Development, Pragmatic Programming, and others.
They were discussed together because they were understood as an alternative to cumbersome methods. From then on, Robert Cecil Martin decided to organize a meeting of people interested in lightweight methods. The event took place in February 2001, in Utah, in the United States. There, 17 programming professionals and technology experts came together and created what we now know as the Agile Manifesto. Agile methods share some characteristics, such as iterative development. They are best suited when a product's requirements change frequently. They are also recommended for projects with small teams, since, as size increases, face-to-face communication becomes more difficult. Furthermore, some key success factors must be guaranteed:

- An organizational culture that supports the methods;
- Trust in people;
- Autonomy and support for decisions made by the technical teams;
- An environment that facilitates communication between members.

Software development for Civil Construction at Softplan in 2016

In 2016, the software solution produced by Softplan's Civil Construction segment, Sienge, released versions every 45 days to deliver new features, plus smaller versions to fix bugs. The software development life cycle followed a more traditional, waterfall-style approach. The development teams were large, with an average of 20 members, and were made up of the specialized roles below:

- Development Manager: responsible for the functional management of the teams and the technical/evolutionary management of the product;
- Development Coordinator: responsible for assisting the development manager in the functional management of the teams;
- Requirements Analyst: responsible for writing detailed requirements specifications for the features to be implemented;
- Programmer: responsible for coding the system;
- Tester: responsible for conducting tests in their different layers.

At the time, most tests were conducted manually, though there was good automated test coverage in the system's GUI layer. The evolution of the product was guided mainly by requests from customers and prospects.

The Jano project

The first attempt to adopt agility in the area of solutions for the construction industry occurred in 2010, with the Scrum framework. On that occasion, the small teams that developed system functionalities, such as the Engineering and Supplies teams, were unified to be more compliant with the Scrum proposal. The PO role was introduced in that 2010 adoption of Scrum, but ended up being forgotten over time. In practice, the development coordinator, together with the client, decided the product's functionalities. As previously stated, the teams had specialized roles, and there was friction between the roles of requirements analyst and programmer: the programmers complained about having no autonomy over the specification of functionalities, since they received the complete, ready-made Use Case from the requirements analysts. Aiming to minimize this friction, among other objectives, the Jano project replaced Use Cases with User Stories.

The HAT project

In 2016, a new agility initiative was launched, with full sponsorship from the unit's management. This project was called HAT. Its success would be crucial to the success of the change in the unit's business model, from the sale of perpetual licenses (LUs) to the SaaS model. Thus, an agility committee was formed.
This committee was mainly composed of technical software development professionals, but it even included the area's operations director, who participated actively in the discussions. The group acted as a coach for the adoption of agility. The large teams were divided, adapting to the size standards suggested by agile frameworks. Each team became responsible for a main Sienge module and also for some secondary modules.

Non-functional and functional teams

Two non-functional teams were also established to support the others: the infrastructure and architecture teams. Each functional team now had a Scrum Master, who was also a member of the agility committee. It was believed that the Scrum Master role would be temporary: its objective was to teach the team to be agile, making its own existence dispensable in the future. Development coordinators began to play the role of Product Owner, migrating from a functional management position to a more technical one. The specialized roles of Requirements Analyst, Programmer, and Tester ceased to exist.

T-shaped professional

Based on the technology guideline of ensuring Sienge's permanence as a SaaS system, the role of Software Developer began to adopt a profile known in the market as the "T-shaped professional": someone expected to show advanced performance in one area of competence and basic or average performance in complementary areas. The areas of competence established for Sienge's Software Developer were the following:

- Analysis and design: the analytical capacity to understand the need (requirement) to be implemented in the system;
- Programming: coding the software that will meet the requirements raised by analysis and design;
- Quality/Testing: the correct adoption of the different layers of the testing pyramid when delivering each feature;
- Teams and processes: knowledge and skills that enhance teamwork (conducting ceremonies, facilitation techniques, structured feedback techniques) and mastery of frameworks that aid organization and productivity (Scrum, Kanban, XP, etc.);
- DevOps: technical knowledge related to the DevOps movement, plus concern for the non-functional aspects of the code to be produced: security, capacity, high availability, disaster recovery, etc.

This change in the developer profile was explained to all members of the technology area. Specialized profiles would no longer exist, and people would need to pursue the knowledge expected for the "T profile". The biggest challenge, in general, fell on those who did not program. There were then two options: the professional could learn to program, counting on the company's help, or choose to leave. Both cases occurred, but the majority chose to remain in the organization and learn programming.

Horizontalization of the unit's technology hierarchy

During the HAT project, the Development Manager stopped conducting the functional management of employees and focused on technical management and product evolution. Consequently, there was a flattening, or horizontalization, of the unit's technology hierarchy: where there used to be a director, a manager, coordinators, and teams, only the teams remained, directly linked to the director. Flattening the hierarchy gave the development teams more autonomy. Among the changes, everyone started to rely much more on peer feedback, and not just on management feedback.
This team autonomy can be observed in different situations:

- Hiring: there is an interview phase in which the candidate is interviewed by the team they will join. The management of the Civil Construction segment believes that team building should begin with the opportunity for the team to choose its new member, so the team has the autonomy and the power to veto or approve the candidate.
- Terminations: since 2017, the company has terminated three employees. Of these three dismissals, the team itself decided on two, as it no longer saw cooperation from the dismissed professionals.
- Choice of processes and methodologies: teams have autonomy and freedom in choosing the development methodologies they adopt. Some teams follow the Scrum framework more faithfully, some opt for Scrumban, and some work in a continuous-flow model characteristic of Kanban. Even the iteration length is not standardized and varies between teams: some adopt one-week iterations, others two weeks.

Based on a 180° (peer) evaluation process, developers drive the salary reviews of their colleagues. This process, called Apex Dev, will be the subject of another article. Follow the blog to check it out!

The results obtained with technological innovation in civil construction

In addition to the adoption of agility practices in reshaping the teams, technical practices have also evolved. The union of process and technical practices made possible a major change in the frequency of Sienge releases. Since 2017, Sienge has released new versions almost daily. In 2020, a year with 253 business days, 207 versions of Sienge were released. Each of these versions is applied to the entire Sienge datacenter, generating updates for thousands of customers, and is also made available for on-premise customers to apply. The new SaaS-based business model achieved its strategic objectives, and the transition from a traditional culture to an agile culture achieved its objective of supporting this new business model. Did you enjoy learning about the adoption of agility in Softplan's Civil Construction segment? Check out more content like this on our Blog! Want to be our next Tech Writer? Check out our vacancies on the Career page!

Discover one of the most important SOLID principles: Single Responsibility Principle
Tech Writers April 21, 2022


One of the most important SOLID principles is the Single Responsibility Principle (SRP). According to Robert C. Martin (Uncle Bob) in his book Clean Code, a function should have only one responsibility. While reading the book, I came across a case where the author applied refactoring techniques to legacy code, leaving a function as lean as possible. For the author, that function, after being refactored, came to perform just one action (in the book, the renderPageWithSetupsAndTeardowns example). The point is that, analyzing the function, I identified that it performs three actions:

- it checks whether the page is a test page;
- if it is a test page, it includes setups and teardowns;
- it returns the HTML of the page.

Levels of abstraction

From this moment on, the author begins to discuss the "levels of abstraction" behind the Single Responsibility Principle (SRP). He states that: "A function only does one thing if all the instructions within it are at the same level of abstraction." I confess that, even after reading this sentence several times, I didn't understand how that function did only one thing. Until I decided to get my hands dirty. That's when it all made sense! To explore the SRP a little further, let's use a fictitious example. Consider that, in one of the classes of our system, there is a function responsible for processing a student's approval. Approval processing consists of updating the student's data and sending an email notifying the student that they have been approved. Below, the code that represents this function:
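Since the code appears only as images in the original post, here is a hypothetical reconstruction in Java matching the description (all names and types are illustrative); it shows both the function as first described and, for reference, the decomposition the article arrives at below:

```java
class Aluno {
    private final String nome;
    private final String email;
    private boolean aprovado;

    Aluno(String nome, String email) { this.nome = nome; this.email = email; }

    String getNome() { return nome; }
    String getEmail() { return email; }
    void setAprovado(boolean aprovado) { this.aprovado = aprovado; }
}

record Email(String destinatario, String assunto, String corpo) {}

interface AlunoRepositorio { void salvar(Aluno aluno); }

interface ServidorDeEmail { void enviar(Email email); }

class AprovacaoService {

    private final AlunoRepositorio repositorio;
    private final ServidorDeEmail servidorDeEmail;

    AprovacaoService(AlunoRepositorio repositorio, ServidorDeEmail servidorDeEmail) {
        this.repositorio = repositorio;
        this.servidorDeEmail = servidorDeEmail;
    }

    // BEFORE: the function as described in the text, its actions marked by comments.
    public void processarAprovacao(Aluno aluno) {
        // Updates the student's data
        aluno.setAprovado(true);
        repositorio.salvar(aluno);

        // Assembles the e-mail content (a lower level of abstraction: whether the
        // body opens with "Good morning" or "Hello" is a detail)
        String corpo = "Good morning, " + aluno.getNome() + "! You have been approved.";

        // Assembles and sends the e-mail
        servidorDeEmail.enviar(new Email(aluno.getEmail(), "Approval", corpo));
    }

    // AFTER: each level of abstraction extracted into its own function.
    public void processarAprovacaoRefatorada(Aluno aluno) {
        atualizarDadosDoAluno(aluno);
        enviarEmailDeAprovacao(aluno);
    }

    private void atualizarDadosDoAluno(Aluno aluno) {
        aluno.setAprovado(true);
        repositorio.salvar(aluno);
    }

    private void enviarEmailDeAprovacao(Aluno aluno) {
        servidorDeEmail.enviar(montarEmailDeAprovacao(aluno));
    }

    private Email montarEmailDeAprovacao(Aluno aluno) {
        return new Email(aluno.getEmail(), "Approval", montarCorpoDoEmail(aluno));
    }

    private String montarCorpoDoEmail(Aluno aluno) {
        return "Good morning, " + aluno.getNome() + "! You have been approved.";
    }
}
```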
As we can see, the function above (the BEFORE version) performs more than one action; the comments mark them out. At this point, it becomes very obvious that our function does more than one thing. However, the question is: can you perceive the different levels of abstraction it contains? The purpose of this function is to process approval, and "approval processing consists of updating the student's data and sending an email notifying the student that they have been approved". In other words, sending the email is part of the function's responsibilities. However, putting the phrase "Good morning" in the body of the email is a detail the function does not need to know. Or rather, it does not need to know whether the email should start with "Good morning" or "Hello". That is at a lower level of abstraction.

Separation of levels

The idea, now, is to separate each level of abstraction into a new function. Applying the extract-method technique, we can extract the update of our student's data into another function. Applying the same technique, we can also extract the sending of the email through its different levels of abstraction. Note that there is one function to assemble the e-mail, setting its recipients and content, and another function just to assemble that content; this is because the two "tasks" are at different levels of abstraction. With this, our main function is reduced to the two calls of the AFTER version. Now, if I ask you "how many things does this function do?", you will probably answer: "two things: it updates the student's data and sends the email". If that is your answer, you are not wrong, but you are not right either. I'll explain why. Remember our function's requirements? "Approval processing consists of updating the student's data and sending an email notifying the student that they have been approved." In other words, updating the data and sending the email are at the same level of abstraction. Therefore, we can conclude that our function does only one thing: it processes the student's approval. Did you enjoy learning a little more about the Single Responsibility Principle? Check out more content like this on our Blog! Want to be our next Tech Writer?