

This is our blog for technology lovers! Here, Softplayers and other specialists share knowledge that is fundamental for the development of this community.

.Net ThreadPool Exhaustion
Tech Writers March 25, 2025


More than once in my career I have come across this scenario: a .Net application frequently showing high response times. This high latency can have several causes, such as slow access to an external resource (a database or an API, for example), CPU usage hitting 100%, disk access overload, among others. I want to add another possibility to that list, one that is often overlooked: ThreadPool exhaustion. I will briefly present how the .Net ThreadPool works, show code examples where exhaustion can happen and, finally, demonstrate how to avoid the problem.

The .Net ThreadPool

The .Net Task-based asynchronous programming model is well known by the development community, but I believe its implementation details are poorly understood - and it is in the details where the danger lies, as the saying goes. Behind the .Net Task execution mechanism there is a scheduler, responsible, as its name suggests, for scheduling the execution of Tasks. Unless explicitly changed, the default .Net scheduler is the ThreadPoolTaskScheduler, which, as the name also implies, uses the default .Net ThreadPool to perform its work. The ThreadPool manages a pool of threads, to which it assigns the Tasks it receives using a queue. Tasks wait in this queue until there is a free thread in the pool, and only then begin processing. By default, the minimum number of threads in the pool is equal to the number of logical processors on the host. And here is the important detail: when there are more Tasks to be executed than threads in the pool, the ThreadPool can either wait for a thread to become free or create more threads. If it chooses to create a new thread and the current number of threads in the pool is equal to or greater than the configured minimum, this growth takes between 1 and 2 seconds for each new thread added to the pool. Note: starting with .Net 6, improvements were introduced that allow the ThreadPool to grow faster, but the main idea remains the same.

Let's look at an example to make this clearer. Suppose a computer has 4 logical cores; the minimum ThreadPool value will be 4. If all incoming Tasks finish their work quickly, the pool may even keep fewer than the minimum of 4 active threads. Now imagine that 4 slightly longer-running Tasks arrive simultaneously, using all the threads in the pool. When the next Task arrives in the queue, it will need to wait between 1 and 2 seconds until a new thread is added to the pool before it can be dequeued and start processing. If this new Task is also long-running, the following Tasks will wait in the queue again and will have to "pay the toll" of 1 to 2 seconds before they can start executing. If new long-running Tasks keep arriving for some time, every new task reaching the ThreadPool queue will feel slow to the clients of this process. This scenario is called ThreadPool exhaustion or ThreadPool starvation. It lasts until Tasks finish their work and start returning threads to the pool, allowing the queue of pending Tasks to shrink, or until the pool grows enough to meet the current demand. That may take several seconds, depending on the load, and only then does the slowdown stop.
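These defaults are easy to inspect at runtime. The snippet below is not from the original article; it is a minimal illustration using the standard System.Threading.ThreadPool API to print the values discussed above:

```csharp
using System;
using System.Threading;

class ThreadPoolDefaults
{
    static void Main()
    {
        // The default minimum worker-thread count matches the number of logical processors.
        ThreadPool.GetMinThreads(out int minWorker, out int minIo);
        ThreadPool.GetMaxThreads(out int maxWorker, out int maxIo);

        Console.WriteLine($"Logical processors : {Environment.ProcessorCount}");
        Console.WriteLine($"Min worker threads : {minWorker} (min I/O completion: {minIo})");
        Console.WriteLine($"Max worker threads : {maxWorker} (max I/O completion: {maxIo})");

        // Available on .NET Core 3.0 and later: current pool size and queued work items.
        Console.WriteLine($"Current pool size  : {ThreadPool.ThreadCount}");
        Console.WriteLine($"Pending work items : {ThreadPool.PendingWorkItemCount}");
    }
}
```

Watching ThreadCount and PendingWorkItemCount grow under load is a quick way to observe the queueing behavior described above.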
Synchronous vs. Asynchronous Code

An important distinction must now be made about types of long-running work. They can generally be classified into two types: CPU/GPU-bound, such as executing complex calculations, or I/O-bound, such as accessing databases or making network calls. For CPU-bound tasks, apart from algorithm optimizations, there is not much that can be done: you need enough processors to meet demand. For I/O-bound tasks, however, it is possible to free the processor to serve other requests while waiting for the I/O operation to complete. And that is exactly what the ThreadPool does when asynchronous I/O APIs are used. In this case, even if the specific task still takes a long time, the thread is returned to the pool and can serve another Task in the queue. When the I/O operation completes, the Task is requeued and then continues executing. However, it is important to note that there are still synchronous I/O APIs, which block the thread and prevent it from being returned to the pool. These APIs - and any other kind of call that blocks a thread - compromise the proper functioning of the ThreadPool and may cause it to become exhausted under sufficiently large and/or long loads. We can say, then, that the ThreadPool - and by extension ASP.NET Core/Kestrel, designed to operate asynchronously - is optimized for executing tasks of low computational complexity with asynchronous, I/O-bound loads. In this scenario, a small number of threads is capable of processing a very high number of tasks/requests efficiently.

Blocking Threads with ASP.NET Core

Let's look at some code examples that block thread pool threads, using ASP.NET Core 8. Note: these are simple examples and are not intended to represent any particular practice, recommendation, or style, except for the points related specifically to the ThreadPool demonstration. To keep the behavior identical between examples, each one runs a SQL Server query that simulates a workload taking 1 second to return, using the WAITFOR DELAY statement. To generate load and demonstrate the practical effects of each example, we will use siege, a free command-line utility designed for this purpose. In all examples, a load of 120 concurrent accesses will be simulated for 1 minute, with a random delay of up to 200 milliseconds between requests. These numbers are enough to demonstrate the effects on the ThreadPool without causing timeouts when accessing the database.

Synchronous Version

Let's start with a completely synchronous implementation. The DbCall action is synchronous and uses the synchronous ExecuteNonQuery method of DbCommand/SqlCommand, so it blocks the thread until the database returns.
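The original post shows this code as an image; the sketch below is only an approximation of it, under stated assumptions: the controller name, route, and connection-string key are illustrative, while the DbCall action name, the synchronous ExecuteNonQuery call, and the WAITFOR DELAY workload come from the article.

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Configuration;

[ApiController]
[Route("[controller]")]
public class LoadTestController : ControllerBase // controller name is an assumption
{
    private readonly string _connectionString;

    public LoadTestController(IConfiguration configuration) =>
        _connectionString = configuration.GetConnectionString("Default")!; // key name is illustrative

    // Fully synchronous action: the ThreadPool thread is blocked for the whole database call.
    [HttpGet]
    public IActionResult DbCall()
    {
        using var connection = new SqlConnection(_connectionString);
        using var command = new SqlCommand("WAITFOR DELAY '00:00:01'", connection);

        connection.Open();
        command.ExecuteNonQuery(); // blocks the thread for roughly 1 second

        return Ok();
    }
}
```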
Running the load simulation with siege, we achieved a rate of 27 requests per second (Transaction rate) and an average response time (Response time) of around 4 seconds, with the longest request (Longest transaction) taking more than 16 seconds - a very poor result.

Asynchronous Version – Attempt 1

Let's now use an asynchronous action (returning Task), but still call the synchronous ExecuteNonQuery method. Running the same load scenario as before, the result was even worse: a request rate of 14 per second (compared to 27 for the completely synchronous version) and an average response time of more than 7 seconds (compared to 4 for the previous one).

Asynchronous Version – Attempt 2

The next version exemplifies a common - and not recommended - attempt to turn a synchronous I/O call (in our case, ExecuteNonQuery) into an "asynchronous API" by wrapping it in Task.Run. The simulation shows a result close to the synchronous version: a request rate of 24 per second, an average response time of more than 4 seconds, and the longest request taking more than 14 seconds to return.

Asynchronous Version – Attempt 3

Now the variation known as "sync over async", where we use asynchronous methods, such as ExecuteNonQueryAsync in this example, but call the .Wait() method of the returned Task. Both .Wait() and the .Result property of a Task have the same behavior: they block the executing thread! Running our simulation, the result is also bad: a rate of 32 requests per second, an average time of more than 3 seconds, and requests taking up to 25 seconds to return. Not surprisingly, using .Wait() or .Result on a Task is discouraged in asynchronous code.

Problem Solution

Finally, let's look at the code written to work in the most efficient way, using asynchronous APIs and applying async/await correctly, following Microsoft's recommendation: an asynchronous action in which the ExecuteNonQueryAsync call is awaited.
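Again, the article presents this code as an image; here is a minimal sketch of it, shown as if it replaced the synchronous action in the same illustrative controller used above (the same assumptions apply).

```csharp
// Asynchronous action: while SQL Server processes the command, the thread returns to the
// pool and can serve other requests; the action resumes when the I/O completes.
[HttpGet]
public async Task<IActionResult> DbCall()
{
    using var connection = new SqlConnection(_connectionString);
    using var command = new SqlCommand("WAITFOR DELAY '00:00:01'", connection);

    await connection.OpenAsync();
    await command.ExecuteNonQueryAsync();

    return Ok();
}
```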
The simulation result speaks for itself: a request rate of 88 per second, an average response time of 1.23 seconds, and a maximum of about 3 seconds per request - numbers roughly 3 times better than any previous option. The table below summarizes the results of the different versions for easier comparison.

Code Version     Request Rate (/s)   Average Time (s)   Max Time (s)
Synchronous      27.38               4.14               16.93
Asynchronous 1   14.33               7.94               14.03
Asynchronous 2   24.90               4.57               14.80
Asynchronous 3   32.43               3.52               25.03
Solution         88.91               1.23               3.18

Workaround

It is worth mentioning that the ThreadPool can be configured with a minimum number of threads greater than the default (the number of logical processors). With that, it is able to increase the number of threads quickly, without paying the "toll" of 1 to 2 seconds. There are at least 3 ways to do this: by dynamic configuration, using the runtimeconfig.json file; by project configuration, setting the ThreadPoolMinThreads property; or by code, calling the ThreadPool.SetMinThreads method. This should be seen as a temporary measure while the code is not adjusted as shown above, or adopted only after prior testing confirms that it brings benefits without performance side effects, as recommended by Microsoft.

Conclusion

ThreadPool exhaustion is an implementation detail that can have unexpected consequences, and it can be difficult to detect if we consider that .Net has several ways of obtaining the same result, even in its best-known APIs - I believe motivated by years of evolution of the language and ASP.NET, always aiming at backward compatibility. When we talk about operating at increasing rates or volumes, such as going from dozens to hundreds of requests, it is essential to know the latest practices and recommendations. Furthermore, knowing one or another implementation detail can make a difference in avoiding scale problems or diagnosing them more quickly. In a future article, we will explore how to diagnose ThreadPool exhaustion and identify the source of the problem in code from a running process.

What is UX Writing and everything you need to know to create amazing experiences
Tech Writers February 11, 2025


What is UX Writing and how does it positively impact a business's product? See best practices, responsibilities, methodologies, and much more!

UX Writing involves creating valuable content in interfaces and digital products, including texts, based on the user experience, that is, aiming to deliver the best experience to the public. This practice is related to marketing, design, and information architecture concepts, and aims to delight and offer value through informative pieces. An example of UX Writing is when you access an online learning platform or an application that, as soon as you log in, walks the user through each step with an objective tutorial. Below is an example from Softplan's ProJuris ADV application, which shows a clean and friendly interface before the user decides whether to create an account or log in, highlighting some of the things that can be done in the application.

User experience has become increasingly important for attracting, converting and retaining customers. Aspects such as the agility of your website navigation, scannability, intuitive browsing, and even the colors chosen for the design of the pages directly affect users' decisions on any digital platform. When we talk about digital platforms with UX Writing, we can take Gestor Obras as an example: on the first page of the system, it shows a practical tutorial on how it works, and as you click where indicated, it shows the next steps and the functions of each part of the system. Another example of UX Writing that provides direct and objective information is in Sienge, which shows in an image some advantages of using the system, in addition to the direct communication of the CTA "Request a Demonstration", moving away from the common "Learn More" and calling for a very objective action.

Therefore, if your company is not yet constantly optimizing its communication channels with users, especially its website, it is time to review some choices. After all, the user is the one who uses your product or service. To achieve this, not only is the website design crucial when it comes to optimization, but customer service and clear, objective communication on the brand's channels are also essential to create a stronger connection with consumers. To get an idea of how much value user experience adds, a survey conducted by Foundever revealed that 80% of customers consider the experience much more valuable than the products and services themselves. When executed efficiently, the practice of UX Writing becomes a significant competitive advantage in a market that is increasingly rigorous about quality, with users who demand the best digital products.

What are the main characteristics of UX Writing?

UX Writing consists of some important characteristics so that it can be executed correctly and consistently. It is important to keep them in mind for better creations that will truly impact the user experience positively.

Clarity and Objectivity: the content must be clear and direct, facilitating quick understanding by the user.
Consistency: language and tone should be consistent across all user touchpoints, creating a cohesive experience.
Empathy: understanding and anticipating users' needs and expectations to create texts that really help them.
Focus on Action: guide users on what to do next using clear calls to action (CTAs).
Brevity: use as few words as possible without sacrificing clarity, respecting users' time and attention.
Scannability: structure the text so that it is easy to read and skim quickly, using headings, subheadings, lists and short paragraphs.
Accessibility: ensure that content is accessible to all users, including those with some type of disability, through simple and inclusive language.
Visual Orientation: integrate text harmoniously with the visual elements of the interface, contributing to a pleasant and intuitive user experience.
Personalization: adapt content to the user's context and preferences, offering a more relevant and personalized experience.
Brand Tone and Voice: reflect the brand's personality and values in all texts, strengthening the identity and connection with the public.

Examples of the application of UX Writing

It is easy to confuse UX Writing with other writing strategies. Therefore, we will demonstrate how to apply UX Writing to your website or digital applications.

Personalization
Want to see an example of UX Writing with personalization? Spotify is a streaming service that, as you use it, personalizes the songs recommended to you, based on similar songs you usually listen to. In addition, at the end of each year, the platform gives each user an annual summary of what they listened to most throughout the period, as well as which artists, podcasts and genres. All of this is done in objective and clear language so that the user can understand the entire summary, with no room for doubt. Photo: Reproduction/Spotify

Objective and Clear Texts
For an application, it is essential that the texts are very objective. That way, user error rates will certainly be much lower, and navigation will be more intuitive. Good and bad example of an action button with UX Writing applied. Source: Adobe

Anticipate errors
We have already talked about the importance of giving the user a good experience, and this includes anticipating any possibility of future errors. In the example below, we can see a form being filled out where the email address is not filled in correctly, and the application tells the user to see the message next to the field filled in with an error, on the left side. Source: Adobe

Differences between Copywriting, UX Writing and Tech Writing

Although related, copywriting, UX Writing and Tech Writing strategies have their differences. Let's see the main ones across a few dimensions.

Objective
Copywriting: the objective is to persuade the reader to take a specific action, such as purchasing a product, signing up for a newsletter, or clicking on a link. It is focused on conversions and sales.
UX Writing: facilitates user interaction with a digital product or service, making the experience more intuitive, pleasant and efficient. With UX Writing, the user is guided through the interface and in completing tasks.
Tech Writing: the goal is to explain clearly and precisely how to use complex products or technologies, focusing on detailed and informative instructions.

Approach
Copywriting: uses persuasion and rhetoric techniques to capture the reader's attention and motivate them to take some action. The tone is more emotional and appealing.
UX Writing: adopts a functional and informative approach, prioritizing clarity, simplicity and usefulness. The tone is objective, focused on guiding and helping the user.
Tech Writing: focuses on detail and accuracy, providing step-by-step instructions and technical explanations. The tone is technical and informative, with clear and objective language.
Application Location
Copywriting: found in marketing materials such as advertisements, promotional emails, sales pages, blog posts, and social media content.
UX Writing: present in digital interfaces, such as applications, websites, e-commerce, dashboards, and any point of user interaction with the system. Examples include buttons, error messages, instructions, and navigation menus.
Tech Writing: appears in user manuals, installation guides, software documentation, FAQs, tutorials, and knowledge bases.

Success Metrics
Copywriting: measured by conversion metrics such as click-through rate (CTR), conversion rate, sales volume, and return on investment (ROI).
UX Writing: measured by usability and user satisfaction, such as reduced error rates, task completion time, user retention, and positive feedback on the user experience.
Tech Writing: measured by the clarity and effectiveness of documentation, such as the number of support tickets, user feedback, time to find information, and ease of use of the documentation.

Collaboration
Copywriting: collaborates with marketing, sales and branding teams.
UX Writing: works with UX/UI designers, developers, user experience researchers, and product managers to integrate writing into product design and functionality.
Tech Writing: collaborates with engineers, developers, product managers, and support teams to ensure documentation is accurate and useful.

Ultimately, while copywriting seeks to persuade and convert, UX writing aims to facilitate and guide, and tech writing focuses on explaining and instructing. Each strategy uses writing as its main tool, but with different focuses and applications, which complement each other at some point in the user's journey.

How to apply UX Writing to Products to add value?

Now that you understand what UX Writing is, you can understand how to apply the strategy. When we talk about UX Writing and Product, these terms must go hand in hand in the creation and constant optimization of a product. Here we can even talk about the "Product Writer", a professional totally focused on working on Products, seeking improvements, researching and understanding the users' point of view about a given product, and defining writing solutions. So, we must understand how UX Writing adds value to digital products in different ways, contributing significantly to the user experience and, consequently, to the success of the product. Let's look at some practices that can be implemented in digital products.

1. Clarity in Error and Success Messages
Error Messages: should be clear and specific, informing the user what went wrong and how to correct the problem. For example, "Password must be at least 8 characters long" is more useful than "Password error".
Success Messages: clear confirmations that inform the user that the action was completed successfully. For example, "Your purchase was successful!"

2. Onboarding
Instructions and User Guides: provide step-by-step tutorials and guides for new users, helping them become familiar with the product.
Tooltips and Pop-ups: contextual instructions that appear at the right time to guide the user without interrupting their experience.

3. Effective Calls to Action (CTAs)
Buttons and Links: use clear and direct action verbs, such as "Buy Now", "Sign Up" or "Learn More". Avoid vague terms like "Click Here".
Visual Hierarchy: ensure CTAs are visually highlighted to guide user attention.
4. Improved Navigation
Menus and Labels: use familiar and intuitive terminology in menus and labels. For example, "Account" instead of "User Profile".
Breadcrumbs: implement breadcrumbs to help users understand where they are in the site navigation and how to return to previous pages.

5. Microcopy
Forms: provide clear, concise instructions for each input field. For example, "Enter your email" instead of just "Email".
Immediate Feedback: provide instant feedback when filling out forms, such as marking correct fields with a green checkmark.

6. Accessibility Adjustments
Alt Text: add helpful descriptions to images, graphics, and icons to improve accessibility.
Plain Language: avoid jargon and complex technical terms, making content accessible to all users, including those with cognitive disabilities.

7. Consistency in Tone of Voice
Style Manual: develop and adhere to a style manual that defines the brand voice and tone, ensuring consistent communication across all platforms.
Regular Review: regularly review and update content to maintain consistency and relevance.

8. Educational Content
FAQs and Documentation: create and maintain FAQ sections and help documentation that are clear, detailed, and easy to navigate.
Tutorial Videos and Tips: integrate videos and quick tips that help users better understand and use a product's features.

9. Testing and Iteration
A/B Testing: perform A/B tests to evaluate the effectiveness of different versions of microcopy, CTAs, and error messages.
User Feedback: collect and analyze user feedback to identify areas for improvement and adjust content as needed.

Conclusion

Notice how many of these actions are very simple and will greatly help a product deliver a good user experience, with greater efficiency and satisfaction. Your product can end up creating more connection with your users, encouraging loyalty and, in this way, creating a network of consumers who will organically evangelize about your product and how worthwhile it is. Finally, don't waste time.

Angular: Why you should consider this front-end framework for your company
Tech Writers February 02, 2024


A fear for every team is choosing a tool that will quickly become obsolete. If you have been developing applications for a few years, you have probably already experienced this. Choosing good tools is therefore a task that involves responsibility, as it can guide the project (and the company) to success or to a sea of problems and expenses. In this article, we will understand the uses and benefits of the Angular framework.

Choosing a front-end framework is no different and also involves research and study. Choosing a "stack", as we call it in this world, is fundamental both for the present and for the future. However, some questions will arise in the midst of this choice: Will we find qualified professionals to deal with this framework? Will we be able to keep up with the pace of updates? Is there a well-defined plan for the direction the framework is going? Is there an engaged community (including large companies supporting it)? All of these questions should be answered before starting any project, as neglecting this screening can lead to devastating scenarios for the product, and consequently for the company and its profits.

Motivations for using a framework

Perhaps the most direct answer is that sometimes it is good not to keep reinventing the wheel. Routine problems such as handling routes for a web application, controlling dependencies, or generating bundles optimized for publication in production all already have good solutions developed. Choosing a framework that gives you this set of tools is therefore perfect for gaining productivity and solidity in the development of an application, and for keeping it up to date with best practices. Beyond these direct motivations, I can also mention:
The ease of finding tools that integrate with the framework
The pursuit of quality software, integrated with tests and other tools that make the development process mature
Many situations and problems have already been solved (because there are a lot of people working with the technology)

Motivations for using the Angular framework:
Built with Typescript, one of the most popular languages at the moment
MVC architecture
Inversion of control and dependency injection
Modularization (with lazy-load option)
Good libraries for integration
A large and engaged community, with 1835 contributors in the official repository
Officially supported and maintained by the Google team

The solidity of Angular

Currently, we can clearly state that the framework is stable, receiving frequent updates due to its open-source nature. It is maintained by the Google team, which always seeks to make the roadmap of what is to come as clear as possible, which is very good. Furthermore, the Angular community is very active and engaged: it is difficult to have a problem that has not already been solved. One of the concerns of every developer is drastic changes to a tool. Anyone who lived through the change from V1 to V2 of Angular knows this pain; the change was practically total. However, the framework was correctly rebuilt on Typescript, which brought robustness and another reason for its adoption: with Typescript, we have possibilities that Javascript alone cannot offer, such as strong typing, IDE integration that makes developers' lives easier, error detection at development time, and much more. Currently, the framework is at version 17 and has been gaining more and more maturity and solidity, with the addition of innovative features such as the recently launched defer.
Easy upgrade

The framework provides guidance for every upgrade through the website https://update.angular.io; this resource helps a lot when updating your project.

Complete CLI

Angular ships with a complete CLI. When you install its package, the CLI is ready to create new projects, generate components, run tests, generate the final bundle and keep your application up to date. To create your first project, simply open your terminal and run: ng new my-project

Solid interface designs

If you need a design library for your application that provides ready-to-use components such as alerts, modal windows, snackbar notices, tables and cards, one of the most popular options is Angular Material. A good reason to pair your software with it is that it is maintained by Google, so whenever the framework advances a version, Material usually follows the update. In addition to Material, there are other options in the community, such as PrimeNG, which brings a very interesting (and large) set of components.

Nx library support

Angular has full support for the Nx project, which makes it possible to scale your project in a very consistent way, mainly guaranteeing caching and advanced possibilities for maintaining and scaling your application locally or in your CI environment. Here are some specific examples of how Nx can be used to improve an Angular project:
You can create an Angular library that can be reused across multiple projects.
You can create a monorepo that contains all your Angular projects, which makes cross-team collaboration easier.
You can automate common development tasks like running tests and deploying your projects.

Tests (unit and E2E)

In addition to Karma and Protractor, which were born with the framework, you are now free to use popular projects like Jest, Vitest and Cypress.

State with Redux

One of the libraries most used by the community is the NgRx Store, which provides reactive state management for Angular applications, inspired by Redux.

Brazilian GDEs

In Brazil we currently have Angular GDEs (Google Developer Experts), which is important for our country and also for generating Angular content in Portuguese, bringing always-updated news and insights to our community straight from the Google team:
Loiane Groner
William Grasel
Alvaro Camillo Neto

Large companies using and supporting

Perhaps the most notable is Google, the official maintainer of the framework. Other examples include Checklist Fácil and PicPay.

Want to know more?

Interested in starting with Angular? Visit https://angular.dev/, the latest documentation for the framework, which includes tutorials, a playground and good, well-explained documentation. Good code!

Architectural Model: how to choose the ideal one for your project
Tech Writers January 17, 2024


What is an Architectural Model and why is it important?

Basically, an architectural model is the abstract structure on which your application will be implemented. "The software architecture of a program or computer system is the structure or structures of the system that encompasses the software components, the externally visible properties of those components, and the relationships between them." (Bass, Clements, & Kazman, Software Architecture in Practice)

To define the model that will best suit your project, we need to know well the company's short, medium and long-term strategies, the software's non-functional and architectural requirements, as well as the user growth curve over time and the volume of requests. Besides the points mentioned throughout this article, there are still others to take into account when deciding which architectural model to apply. As examples, we can list: security concerns; data storage; lock-ins; total volume of users; volume of simultaneous users; TPS (transactions per second); availability plan/SLA; legal requirements; availability on one or more types of platform; integrations.

The survey of the architecture, the ARs (architectural requirements), AVs (architectural variables), FRs (functional requirements), NFRs (non-functional requirements) and the criteria that define each of these items directly influence the choice of the correct model. The choice of architectural model can impact the entire life cycle of the application, so this is a subject we must treat with great attention. The use of MVPs (especially those that do not go into production) can greatly help with this task. They give a unique opportunity to make mistakes, adjust, make mistakes again, prove concepts, adjust and make mistakes as many times as necessary so that, in the end, the software has its architecture in the most correct version, thus bringing the true gains of this choice.

How the architectural models are divided

It is worth making clear that, like many definitions in the software world, what architectural models are and which ones exist can vary. Therefore, in this article I have divided them into four large groups: monolithic, semi-monolithic (or modular monolith), distributed monolith (or microlith) and microcomponentized.

Monolithic

A model in which all components form a single application or executable, integrated into a single source code. In this case, everything is developed, deployed and scaled as a single unit. Figure 1 – Example of a Monolithic Model.

Pros
Simplicity: as the application is treated as a single, cohesive unit, it becomes simpler, since all parts are contained in a single source code.
Greater adherence to Design Patterns: given that we have a single source code, another facilitating factor is that the design patterns themselves (Design Patterns, 01/2000) were written in times of monolith dominance, making their application even more natural.
Greater performance: due to low latency in communication, monoliths tend to have good performance, even when using older technologies.
Lower resource consumption: low complexity, simplicity and lower communication overhead between layers favor lower resource consumption.
Easier troubleshooting: creating development and debug environments is easier in monoliths, as the components share the same processes. Another factor we can take into account is that monoliths have fewer external failure points, simplifying the search for errors.
Cons
Limited team size: breakdowns related to continuous integration and merge conflicts happen more regularly in monoliths, creating difficulties for parallel work in large teams.
Scalability: scalability may be limited in certain aspects. Even with easy vertical scalability, horizontal scalability can often become a problem that affects the growth of the application.
Availability windows: normally, deploying a monolith means swapping executables, which requires a window of unavailability during which users cannot access the application; this does not happen with other architectural models that can use deployment techniques such as blue-green or work with images or pods.
Single technology: low technological diversity can become an impediment to the growth of the application, for example by serving only one type of operating system, or by not fully meeting new features requested by customers because the technology lacks the capacity to solve complex problems.
Greater cost of compilation and execution: large monoliths generally take a long time to compile and run locally, consuming more development time.

When to Use
Low scalability and availability: if the application has a limited scale where, for example, the number of users is low or high availability is not mandatory, the monolithic model is a good solution.
Desktop applications: the monolithic model is highly recommended for desktop applications.
Low-seniority teams: monolithic models, due to their simplicity and the locality of components, enable low-seniority teams to work with better performance.
Limited resources: for a limited infrastructure with scarce resources.

Semi-monolithic (or Modular Monolith)

A model in which applications are composed of parts of monolithic structures. In this case, the combination tries to balance the simplicity of the monolithic model and the flexibility of the microcomponentized model. Currently, this architectural model is often confused with microservices. Figure 2 – Example of a Semi-monolithic Model.

Pros
It brings benefits of both the monolithic and microcomponentized models: it is possible to keep parts as monolithic structures and only microcomponentize the components that really need it.
Technological diversity: possibility of using different technological approaches.
Diversified infrastructure: this model can be developed to use both On-Premise and Cloud infrastructure, favoring migration between the two.
Supports larger teams: the segmentation of components allows several teams to work in parallel, each within its own scope.
Technical specialties: due to segmentation, the team's hard skills are put to better use, such as frontend, UX, backend, QA, architects, etc.

Cons
Standardization: due to the large number of components that can appear in a semi-monolithic model, standardization (or the lack of it) can become a major problem.
Complexity: the complexity inherent to this type of model also tends to increase with new features. New capabilities such as messaging, caching, integrations, transaction control and testing, among others, can add even more complexity to the model.
Budget: in models that support the use of different technologies with large teams, more specialist professionals with a higher level of seniority are needed, often resulting in greater spending on personnel.
Complex troubleshooting: the complexity of the model and the diversity of technologies make troubleshooting the application increasingly difficult.
This is due to the large number of failure points (including those external to the application) that come to exist and the communication between them.

When to Use
Accepted in various scenarios: it is a flexible model that can meet various scenarios, although not always in the best way.
Little definition: in projects that have uncertainties, or that do not yet have all their requirements defined, this model is the most suitable.
Medium and large teams: as mentioned, dividing components into several groups facilitates parallel work in medium and large teams. Typically, groups have their own code repositories, which makes parallel work more agile.
Diverse seniority: this model benefits from teams with this profile, as semi-monolithic software presents varied challenges, both in the frontend and backend layers and in infrastructure matters (IaC – Infrastructure as Code).
Infrastructure: this model is more applicable to a Cloud-based or hybrid infrastructure. It allows, for example, gradual adoption between On-Premise and Cloud, facilitating adaptation and minimizing operational impacts.

Distributed Monolith

This is a "modern" model that has also been implemented and confused with the microcomponentized/microservices model. "You shouldn't start a new project with microservices, even if you're sure your application will be big enough to make it worthwhile." (Fowler, Martin. 2015) In summary, in this architectural model the software is designed on the basis of the monolithic model, but implemented according to the microcomponentized model. Currently, many consider it an antipattern. Figure 3 – Example of a Distributed Monolith Model.

It is not worth listing pro features (I am not sure there are any), but it is still worth mentioning characteristics that count against it: this architectural model brings together the negative points of the two styles with which it is confused. In it, services are highly coupled and also carry various types of complexity: operational, testability, deployment, communication and infrastructure. The high coupling, especially between backend services, brings serious difficulties in deployment, not to mention the significant increase in points of failure in the software.

Microcomponentized

A software model in which all components are segmented into small, completely decoupled parts. Among microcomponents, we can mention: microfrontends, microdatabases, microvirtualizations, microservices, microbatches, BFFs and APIs. Figure 4 – Example of a Microcomponentized Model.

"A microservice is a service-oriented application component that is tightly scoped, strongly encapsulated, loosely coupled, independently deployable, and independently scalable" (Gartner, n.d.). Opinions converge in saying that every microservice that worked was first a monolith that became too big to be maintained and reached the common point of having to be broken apart.

Pros
Scalability: scalability in this model becomes quite flexible. Depending on the need, components are scaled individually.
Agile development: teams can work independently on each component, facilitating continuous deployment and accelerating the development cycle.
Resilience: if a component fails, it does not necessarily affect the entire application. This improves the overall resilience of the system. It is important to note that there are approaches for avoiding single points of failure to prevent this type of problem.
Diversified technology: each component can be developed using different technologies, allowing the best tool to be chosen for each specific task. It also takes advantage of the existing skills of each team.
Ease of maintenance: changes to one component do not automatically affect the others, facilitating maintenance and continuous updating.
Decoupling: components are independent of each other, which means that changes to one service do not automatically affect others, facilitating maintenance.

Cons
Cost: high cost across all aspects of this model (input, output, requests, storage, tools, security, availability, among others).
Size: microcomponentized software tends to be larger in essence; not only the application itself, but the entire ecosystem around it, from commit to the production environment.
Operational complexity: there is an exponential increase in complexity in this model. Designing good architectural components so that this complexity is managed is very important, as is choosing and managing logging, APM and continuous monitoring tools well. Managing many microservices can be complex, and additional effort is required to monitor, orchestrate, and keep services running.
Latency: communication between microservices can become complex, especially in distributed systems, requiring appropriate communication and API management strategies.
Network overhead: network traffic between microservices can increase, especially compared to monolithic architectures, which can affect performance.
Consistency across transactions: ensuring consistency in operations involving multiple microservices can be challenging, especially when it comes to distributed transactions.
Testability: testing interactions between microservices can be more complex than testing a monolithic application, requiring efficient testing strategies.
Infrastructure: you may need to invest in robust infrastructure to support the execution of multiple microservices, including container orchestration tools and monitoring systems.
Technical dispersion: at this point, we can say that a kind of "reverse" Conway's Law acts, as teams, technologies and tools tend toward dispersion and segregation. In teams, each person becomes aware of only a small part of a larger whole; with technologies and tools, each developer uses the framework or tools that suit them best.
Domain-Driven Design: to increase the chances of success of this model, teams must have knowledge of DDD.

When to Use
Volumetrics: the microservices/microcomponents architecture has proven to be effective in high-volume systems, that is, those that need to deal with large amounts of transactions, data and users.
Availability: one of the main reasons for adopting this type of architecture is availability. When well constructed, software that adopts microcomponentization does not tend to fail as a whole when small parts have problems; other components continue to operate while the problematic component recovers.
Scalability: if different parts of your application have different scalability requirements, microservices can be useful. You can scale only the services that need the most resources, rather than scaling the entire application.
Team size: for small teams, this model can be a problem, given the overhead of configurations, boilerplate, environments, tests, integrations, and input and output processes.
Resilience > Performance: in cases of uncertainty about, for example, the volume of requests and how high it can get, such as large e-commerce sites in periods of heavy access (Black Friday), where the software needs to remain resilient even if median performance suffers.

Comparative Checklist

Figure 5 – Checklist comparison between the models.

Conclusion

In summary, the choice of the architectural model is crucial to the success of the project, requiring a careful analysis of needs and goals. Each architectural model has its advantages and disadvantages, and we must guide the decision by aligning it with the specific requirements of the project. By considering company strategies, requirements and architectural surveys, it is possible to make a decision that will positively impact the application's life cycle. The work (and support) of the architecture team is extremely important. It is also very important that management and related areas provide support by allowing time to collect this entire range of information. Still in doubt? At first, start with the modular semi-monolith/monolith. Likewise, pay close attention to database modeling.

References
Gartner. (n.d.). Microservice. Retrieved from https://www.gartner.com/en/information-technology/glossary/microservice
Gamma, E., Helm, R., Johnson, R., & Vlissides, J. (1994). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley.
Bass, L., Clements, P., & Kazman, R. (2013). Software Architecture in Practice (3rd ed.). Addison-Wesley.
Microservices Architecture (12/2023). Retrieved from https://microservices.io/
Fowler, S. J. (2017). Production-Ready Microservices. Novatec.
ArchExpert Training. (n.d.). Premium Content. Retrieved from https://one.archoffice.tech/
Monolith First (06/2015). Retrieved from https://martinfowler.com/bliki/MonolithFirst.html
Microservices. Accessed on 01/2024.

GraphQL in dotNET applications
Tech Writers January 15, 2024


In this article I will talk about GraphQL with a focus on dotNet applications. I will show how the inherent problems of REST motivated the creation of GraphQL, present the basic concepts of the specification of this language, introduce the Hot Chocolate library, which is one of the many libraries that implement the GraphQL specification, and finally show a small example of using this library in a dotNet application.

REST

Before we talk about GraphQL, it is necessary to talk about REST. The term was coined by Roy Thomas Fielding (2000) in his doctoral thesis. In this work, Fielding presents REST as an architectural style for web applications defined by five constraints:
Client-server: the user interface must be separated from the system components that process and store data.
Stateless: the client does not need to be aware of the server's state, nor does the server need to be aware of the client's state.
Cache: when possible, the server application must indicate to the client application that data can be cached.
Layered system: the application must be built by stacking layers that add functionality to each other.
Uniform interface: the application's resources must be made available in a uniform manner, so that, once you learn how to access one resource, you automatically know how to access the others. According to Fielding's work, this is one of the central characteristics that distinguish REST from other architectural styles. However, the author himself states that it degrades the efficiency of the application, as resources are not made available in a way that meets the specific needs of a given application.

What REST looks like in practice

In Figure 1 you can see part of Microsoft's OneDrive API, where the uniformity of access to resources is visible: to obtain data, you simply send a GET request to an endpoint that starts with the term drive and is followed by the name of the resource and its ID. The same logic applies to creating resources (POST), modifying resources (PUT) and removing them (DELETE). Looking at the Google Drive documentation, we can see the typical return of a REST API. That documentation shows the large volume of data that a single REST request can bring. Despite being large, a client application may still need to make extra requests to obtain more data about the owner of a file, for example. Considering the constraints defined by Fielding and the examples shown, it is easy to see two problems inherent to REST: the first is traffic of data the consumer does not need, and the second is the possible need to make several requests to obtain the data necessary to build a web page.

Understanding GraphQL

GraphQL emerged in 2012 at Facebook as a solution to the problems found in the REST style. In 2015, the language became open source, and in 2018 the GraphQL Foundation was created, which became responsible for the specification of the technology. It is important to highlight that GraphQL is not a library or tool. Like SQL, GraphQL is a language for querying and manipulating data: while we use SQL against the database, GraphQL is used against APIs. Table 1 shows an SQL expression to retrieve an order number and customer name from a database.
Similarly, Table 2 shows a GraphQL expression to obtain the same data from an API that supports GraphQL. In these examples, we can see two major advantages of GraphQL over REST. The first is that GraphQL allows the consumer to ask only for the data they need to build their web page. The second is that the consumer can fetch order and customer data in a single call. Table 1: Example of a select in a relational database. Table 2: Example of a GraphQL expression.

I consider it interesting to mention two more characteristics of a GraphQL API. The first is the existence of a single endpoint: unlike REST, where an endpoint is created for each resource, in a GraphQL API all queries and mutations are sent to the same endpoint. The second is the fact that a GraphQL API only uses the POST verb. This is yet another difference from REST, where different HTTP verbs are used depending on the intention of the request. So, while in a REST API we use the GET, POST, PUT and DELETE verbs, in a GraphQL API we use the POST verb to get, create, change and remove data.

Schema Definition Language

Let's now talk a little about SDL (Schema Definition Language). When using a relational database, it is first necessary to define the database schema, that is, the tables, columns and relationships. Something similar happens with GraphQL: the API needs to define a schema so that consumers can query the data. To create this schema, SDL is used. The official GraphQL website has a section dedicated to SDL, where you can find a complete description of the language for creating GraphQL schemas. In this text, I will present only the basic syntax.

In Figure 2 you can see part of a GraphQL schema created using Apollo. The schema begins with the definition of two fundamental types: Query and Mutation. In the Query type we define all the queries that our API will have; in our example, consumers will be able to search for customers, products and orders. The Mutation type defines which data manipulation operations will be available to the consumer. In the example presented, the consumer will be able to create, change and remove customers and products; for orders, he can create an order, add an item, cancel and close it. In addition to the Query and Mutation types, you can see the Customer and Product types. In both, there are ID, String and Float properties. These three types, together with the Int and Boolean types, are called scalar types. The schema also shows the definition of an enum called OrderStatus. Figure 3 shows the definition of Input types, which are used to provide input data for queries and mutations.

I think it is important to point out that the way the schema is created varies depending on the library you choose. When using the Apollo library for javascript, the schema definition can be done through a string passed as a parameter to the gql function or through a file (generally called schema.graphql). However, when using libraries such as Hot Chocolate for dotNet, the schema definition is done by creating classes and configuring services in the application. Therefore, the way a GraphQL schema is created can vary greatly depending on the language and library chosen.
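To make the Hot Chocolate approach concrete, here is a minimal sketch (not taken from the article; it assumes an ASP.NET Core project with the HotChocolate.AspNetCore package, and the Customer type and its fields are illustrative) in which the schema is derived from plain C# classes and the GraphQL server is registered as a service:

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddGraphQLServer()       // Hot Chocolate GraphQL server
    .AddQueryType<Query>();   // the schema's Query type is inferred from this class

var app = builder.Build();
app.MapGraphQL();             // single endpoint, /graphql by default
app.Run();

// Each public method becomes a field of the Query type (GetCustomers -> "customers")
// and acts as its resolver.
public class Query
{
    public IEnumerable<Customer> GetCustomers() => new[]
    {
        new Customer(1, "Alice"),
        new Customer(2, "Bob"),
    };
}

public record Customer(int Id, string Name);
```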
Basic elements of the GraphQL language

As mentioned earlier, GraphQL is a language and therefore has a syntax. You can find the complete guide to the syntax on the official GraphQL website; here, I will describe only its basic elements. Data is fetched through queries, which must begin with the keyword query followed by the name of the query. If it has parameters, you open parentheses and, inside them, place the name of each parameter followed by its value, using a colon (:) to separate the parameter name from its value; then you close the parentheses. Next, you open braces ({) and place the names of the fields you want inside them, closing the braces (}) when the list of fields is complete. Table 3 shows a simple example of a query. Table 3: Example of a query.

There are scenarios where the query parameters can be complex. When a parameter is complex, that is, it is an object with one or more fields, braces must be opened immediately after the colon. Within the braces, you place each field of the object and its respective value, again separated by a colon (see Table 4). There are also scenarios where the query fields can be complex. In these cases, you open braces right after the field name and, inside them, place the names of the object's fields (see Table 5). Table 4: Example of a query. Table 5: Example of a query.

The rules described so far also apply to mutations; however, these must start with the keyword mutation instead of query. It is interesting to note that there are other elements in the GraphQL syntax, but the elements described so far are sufficient to execute most queries and mutations.
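Since every operation travels as a POST to the single endpoint, sending such a query from a client can be as simple as the hypothetical console sketch below. This is not from the article: the URL and the customers field match the earlier Hot Chocolate sketch, and the JSON body with a "query" property follows the usual GraphQL-over-HTTP convention.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;

using var http = new HttpClient { BaseAddress = new Uri("https://localhost:5001") };

// The query is plain text; it is always sent with the POST verb to the single /graphql endpoint.
var payload = new { query = "query getCustomers { customers { id name } }" };

var response = await http.PostAsJsonAsync("/graphql", payload);
Console.WriteLine(await response.Content.ReadAsStringAsync());
```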
Being a language, GraphQL needs to be implemented by some application or library. For our API to support queries and mutations, we generally need a library. Of course, we could implement the language specification on our own, but that would be very unproductive. The "Code" section of the GraphQL.org website lists libraries that implement GraphQL for the most varied languages. For the dotNet world, for example, there are the "GraphQL for .NET" and "Hot Chocolate" libraries, among others.

When talking about GraphQL implementations, it is necessary to talk about the concept of resolvers. A resolver is a function that is triggered by the library that implements GraphQL and is responsible for effectively fetching the data requested by the query. The same occurs with mutations: when the library receives a request to execute a mutation, it identifies the resolver that will execute the changes in the database (insert, update and delete). Note, then, that in most libraries, searches and changes to data are carried out by your own code; the libraries that implement GraphQL are responsible for interpreting the query/mutation sent by the caller and discovering the appropriate function to resolve it. To see an example of a simple API that uses Hot Chocolate, visit my GitHub.

To sum it all up, GraphQL is a language created by Facebook with the aim of overcoming the problems inherent to REST. The language provides a simple syntax for obtaining data from an API as well as changing its data. It is implemented by a wide variety of libraries for the most diverse languages, allowing the developer to create a GraphQL API using their favorite language.

References
"GraphQL." Wikipedia, 9 June 2022, en.wikipedia.org/wiki/GraphQL. Accessed on 6 Nov. 2023.
The GraphQL Foundation. "GraphQL: A Query Language for APIs." Graphql.org, 2012, graphql.org/.
Fielding, Roy Thomas. "Fielding Dissertation: CHAPTER 5: Representational State Transfer (REST)." Ics.uci.edu, 2000, ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm. Accessed on 6 Nov.

Multi-brand design systems: what they are and main benefits
Tech Writers December 01, 2023

Multi-brand design systems: what they are and main benefits

What are multi-brand design systems

Multi-brand design systems are systems with attributes that make them flexible enough to be used across different contexts, visual patterns and interface designs. They are developed for cases in which a single library must serve products from different brands. Generally, this type of design system is also independent of frameworks, platforms and technologies; these are called tech-agnostic design systems. Currently, the most popular agnostic design system is Lightning, developed by Salesforce, which also created the concept.

Benefits

In addition to being a single source of truth, a multi-brand design system shares the cost of operation, making work truly collaborative between teams. According to Volkswagen Group designers, the implementation of GroupUI brought the following results:

Increased agility, efficiency and cost reduction are some of the benefits of multi-brand design systems.

Scalability

Built around the concept of design tokens, these systems allow the same library to be replicated across different products, regardless of the framework in which each product is developed, while still allowing each product to use its own visual standards. Another very relevant point is the sharing of characteristics such as good practices, responsiveness, accessibility, performance, UX and ergonomics.

Use in different technologies

Currently, it is common to find in design systems, even those that serve a single brand, separate libraries for web, iOS and Android products. This is due to the different specifications for desktop and mobile browsers, as well as for devices with native operating systems, such as those from Apple and Google. By working independently of these technologies, it is possible to instantiate the same design system in different component libraries to meet these particularities.

Gain in efficiency

According to data released by UX and design system leaders at the Volkswagen Group, in the presentation Multibrand Design System within the Volkswagen group & its brands, there is a large increase in agility, productivity and efficiency when working with the multi-brand concept.

Operational efficiency with the use of multi-brand design systems. (Source: YouTube)

Comparing the effort required by a product without a design system, by one that has its own design system, and by one that adopts the multi-brand methodology, it is possible to notice an incremental and considerable reduction in UI (interface design) and development effort. This frees up resources for activities that were previously consumed by designing and implementing interfaces, enabling a way of working that is more oriented towards user experience and discovery.

Standardization

A detailed and well-specified design system becomes a single source of truth. When shared within the organization, in addition to making the teams' work much easier, it enables consistent standardization, avoiding the need to repeat the same discussions, discoveries and definitions, which become ready to be reused as the design system evolves.

Easy customization

According to experts, the main characteristic of a multi-brand design system is flexibility. In this context, being customizable means allowing each product to apply its own visual design decisions. To make this possible, design token libraries are created; they can be easily duplicated and customized, generating distinct visual patterns for each brand and product, as the sketch below illustrates.
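As a minimal sketch of that idea, the snippet below defines the same set of token names for three hypothetical brands; the token names, brand names and values are assumptions for illustration, not tokens from any real design system.

```ts
// The token names stay the same across every product; only the values change per brand.
type BrandTokens = {
  "color-brand-primary": string;
  "color-brand-secondary": string;
  "font-family-base": string;
};

const brandA: BrandTokens = {
  "color-brand-primary": "#0050b3",
  "color-brand-secondary": "#f5a623",
  "font-family-base": "'Roboto', sans-serif",
};

const brandB: BrandTokens = {
  "color-brand-primary": "#1d7a46",
  "color-brand-secondary": "#e0e0e0",
  "font-family-base": "'Open Sans', sans-serif",
};

const brandC: BrandTokens = {
  "color-brand-primary": "#8e24aa",
  "color-brand-secondary": "#ffd54f",
  "font-family-base": "'Inter', sans-serif",
};

// Components reference only the token names; swapping the token set re-skins the whole UI.
function applyTokens(tokens: BrandTokens): void {
  for (const [name, value] of Object.entries(tokens)) {
    document.documentElement.style.setProperty(`--${name}`, value);
  }
}

applyTokens(brandA); // the same component library rendered with brand A's visual identity
```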
Design tokens can be interpreted as variables that carry style attributes, such as a brand color: by changing the value carried by the variable, the change is reflected everywhere that color is displayed in the interface. In the example above, we have brand color and typography specifications for three different design systems: the token names remain the same across all products, while the value carried by each variable is different in each case. The same applies to any other visual attribute, such as typography, spacing, borders, shadows and even animations.

Structure of multi-brand design systems

According to Brad Frost, one of the most influential design system consultants today and author of the book Atomic Design, it is recommended that multi-brand design systems have three layers:

Three-level structure of a design system. (Source: Brad Frost)

Tech-agnostic (1st layer)

The agnostic level of a design system is the basis for the others; therefore, it includes only HTML, CSS and JavaScript code, with the aim of rendering components in the browser. This layer is extremely important in the long term, as it allows the future reuse of the design system. For example, in the current scenario React can be said to be the most popular library, but this was not always the case and it is not known which technology will be the next to stand out. For this reason, it is important to have a base layer that can be applied to new technologies without having to start a new design system from scratch. In this first layer, designers and developers build the design system components in a workshop environment, documented in a tool such as Figma or Zeroheight. The result of this work is a set of components rendered in the browser, built with the awareness that the framework adopted today may not be the same as the one adopted in the future.

Tech-specific (2nd layer)

The technology-specific level is where there is already a dependency on some technology and/or platform and where there is the opportunity to create a design system layer for all products that use the same technology. A good example of this type of design system is Bayon DS, which serves the SAJ products and can also be used to develop any other product built with React.

Prod-specific (3rd layer)

The third layer is where everything becomes very specific and all the effort is directed at a particular product. At this level, documentation can be created for very particular standards that only apply to that context. Following the Atomic Design concept, this layer creates components with greater complexity and less flexibility, such as pages and templates, in order to generate product patterns. In the third layer, individual applications consume the version specific to the selected technology via package managers such as npm and yarn.

How we are putting these new concepts into practice

A few months ago, after the announcement of the Inner Source initiative, we began studying a way to transform Bayon so that it could "receive" this new concept. Personally, I began in-depth research into the topics discussed in this article.

Web components and Stencil

Through recurring meetings with representatives of the Softplan group's companies, the possibility of developing a library of web components is being discussed (a minimal component sketch follows below).
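As a sketch of what such a component could look like, assuming hypothetical tag, prop and token names (this is not Bayon DS code), a Stencil component can consume the design tokens above as CSS custom properties:

```tsx
// ds-button.tsx: a minimal Stencil web component whose visual decisions come from tokens.
import { Component, Prop, h } from '@stencil/core';

@Component({
  tag: 'ds-button',
  shadow: true,
  styles: `
    button {
      /* design tokens (CSS custom properties) supply the brand-specific values,
         so each product re-skins the component without touching its code */
      background: var(--color-brand-primary, #0050b3);
      font-family: var(--font-family-base, sans-serif);
      color: #fff;
      border: none;
      padding: 0.5rem 1rem;
    }
  `,
})
export class DsButton {
  /** Text rendered inside the button. */
  @Prop() label: string;

  render() {
    return <button type="button">{this.label}</button>;
  }
}
```

Once compiled, the resulting ds-button tag can be used in plain HTML, React, Angular or any other framework, which is precisely what makes the first layer tech-agnostic.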
In this library, each visual attribute or design decision is applied through design tokens, allowing the full customization needed to guarantee that each component presents the visual characteristics of the corresponding product.

Web components are a set of APIs that allow the creation of custom, reusable and encapsulated HTML tags for use in web pages and applications. They have many advantages, such as compatibility with applications that do or do not use frameworks, compatibility with all major browsers, and the availability of open source libraries that reduce the cost of operation. In addition to this technology, Stencil.js is also used: an open source compiler that shares concepts found in the most popular frameworks, further simplifying the development of components and the developers' learning curve.

References

Multibrand Design System within the Volkswagen group & its brands
Design tokens — What are they & how will they help you?
Design Systems Should be JavaScript Framework Agnostic
Creating multi-brand design systems
Managing technology-agnostic design systems
Salesforce Lightning DS
Multibrand Design System within the Volkswagen group & its brands (Video)
Creating multi-brand design systems (Video)
Atomic Design by Brad Frost - An Event Apart Austin 2015
Webcomponents.org
MDN Web Docs
Stenciljs
Creating Web Components with StencilJS (YouTube)
Building web components with Stencil JS