How to Manage Risk

In the previous section, you learned that one of your primary responsibilities as a professional programmer is to create value. We looked at a number of ways you can create value, both with and without writing code. In this section, you'll learn about a second major component of your job, which is to manage risk.

Before we jump into the details of how to manage risk, we need to take a step back and answer a fundamental question: what is risk?

Risk in software development is the probability that uncertain events will occur and have a negative impact on an organization or other entity. In essence, risk is the probability of bad things happening. Shipping buggy code, lack of communication, changes to consumer behavior, and missed deadlines due to poor project planning are all kinds of risk that can harm your organization. Risk comes in all shapes and sizes, and each one has the potential to cause harm in one form or another.

As software engineers, we're focused on building scalable software systems, but what many developers lose sight of is that we're also responsible for keeping those systems up and running and ensuring a seamless experience for our customers. Software has become a critical component of everyday life, and so managing risk has become a critical component of software engineering.

Risk is always there. That's a fact. So, it's better to understand it and embrace it rather than ignore it. The more senior you become in your career, the better you will get at identifying and planning for risk. Managing risk is one of the most important skills you will use in your software career.

It's important to note that some risk is acceptable. Senior software engineers learn how to manage it, not eliminate it. It takes time for junior engineers to learn how to identify different areas of risk. And as with most things in software engineering, there are trade-offs. Senior engineers learn when to allow low-probability or low-impact risks into the system because it allows them to move quickly.

Before we get into details about major risks involved in software engineering, let's look at the different kinds of risks you may encounter.

Types of Risk

Unfortunately, because a lot of risk management comes down to experience, you won't learn everything there is to know about it in this section. What you will learn, however, are the different types of risks, so that you are aware of what to look out for in your day-to-day role.

  • Technical Risks

    • Poor code quality

    • Poor technical decisions that prevent you from adapting to changing requirements in the future

    • Lack of documentation and knowledge sharing

    • Poor technology choices

    • Poor code performance

  • Scheduling Risks

    • Poor project estimation

    • Requirements that are not finalized and keep changing

    • Lack of visibility into the work that is in progress and what has been completed

    • Project scope creep

  • Operational Risks

    • Poor communication

    • Lack of proper training

    • Lack of effective processes or procedures

    • No separation of concerns, or checks and balances

    • Data breach due to poor security practices

  • External Risks

    • Sudden market changes

    • Increasing competition

    • Government regulations

    • Changes to consumer behaviors

    • Weather and natural disasters (yes, really)

This is by no means a comprehensive list of every risk involved in software engineering. There are many additional categories and subcategories of things that can go wrong while keeping software systems running. Luckily, you're not the only one responsible for managing these risks. Risk management is a team effort, but that doesn't mean you're off the hook when it comes to doing your part.

Contributors to Technical Risk

Let's look at some of the major contributors to technical risk that you should look out for in your day-to-day role.

Overengineering vs. Underengineering

Effective engineering is about shipping software quickly while preserving your ability to make additional changes quickly in the future. The goal is to move fast without putting yourself in a situation you'll later regret. In essence, we need to build software that meets the current requirements for our customers but leaves enough flexibility to easily extend the code to handle additional requirements in the future.

Seems easy, right?

The longer you work as a professional programmer, the more you will come to realize that good code approximates the complexity of the problem at hand. Good code is not needlessly complex, but not overly simple either. The best engineers are able to design and build solutions that match the complexity of the problems they're solving.

However, a lot of software engineers early in their career don't have enough experience to know how to match their solutions to the complexity of the problem, so they end up either underengineering or overengineering their solutions. There's no simple answer as to how to avoid these situations, unfortunately, but just being aware of each one is a step in the right direction. Over time you'll naturally gain an understanding of when a solution is being under- or overengineered. In the meantime, let's look at each one a little deeper so you can better identify each situation.

Underengineering

When a developer underengineers a solution, they are not doing enough forward thinking while designing it. Although they may be focused on solving the immediate problem at hand, they may be losing sight of a better long-term solution. This tends to be a common trait among developers who are just learning how to write code, because most of their energy is spent on getting the program to work. Once they come to a working solution, they move on to the next task. That can cause problems in the future.

Just because a piece of code works and compiles without errors doesn't mean it's ready to ship. There may be better ways to solve a problem that allow for more functionality in the future. While the original solution solves the problem right now, the code may need to be significantly refactored when it needs to handle additional use cases in the future.

Underengineered solutions often contradict the Don't Repeat Yourself (DRY) principle, a common guideline software engineers use to structure their code so the same logic is not repeated in different parts of the codebase. It encourages programmers to structure their programs so a piece of logic can be written once and reused in multiple places throughout the codebase.

When you follow the DRY principle, you can often add functionality to your code with little effort, because you only need to change one part of the codebase when updating logic. Conversely, when updating logic that is repeated throughout the codebase, you risk missing a block of code. This increases the possibility of introducing bugs during refactoring and may lower the quality of the codebase over time.

A common rule of thumb: if you notice yourself copying and pasting blocks of code throughout your codebase, that could be a sign that you need to consolidate your logic so you're not repeating it. It's a simple technique that goes a long way toward reducing the risk involved in making future changes to logic.
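As a minimal sketch (the names and the discount rule here are invented for illustration), consolidating repeated logic into one helper means a future change to that rule happens in exactly one place:

```python
# Before: the same discount rule copy-pasted into two functions.
# Changing the rule would require finding and editing every copy.

DISCOUNT_RATE = 0.10  # the single place the business rule lives

def discounted_total(prices):
    """Apply the standard discount to a collection of prices."""
    subtotal = sum(prices)
    return subtotal * (1 - DISCOUNT_RATE)

def checkout_total(cart_prices):
    # Reuses the shared rule instead of repeating the math inline.
    return discounted_total(cart_prices)

def invoice_total(line_item_prices):
    # Same rule, second caller; no duplicated logic to drift out of sync.
    return discounted_total(line_item_prices)

print(checkout_total([19.99, 5.00]))  # 22.491
```

If the discount rule ever changes, only discounted_total needs to be updated, which is exactly the kind of low-effort future change the DRY principle is meant to enable.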

Underengineered solutions also sometimes contradict the Single Responsibility principle, which states that modules, classes, and functions should each have only one responsibility over a program's functionality. If you find yourself writing a class or a method that's doing multiple things, such as calculating values, transforming data, and storing it in a data store, then you may want to rethink how your solution should be designed.

Underengineered solutions tend to try to do everything in a single class or function, when they really should be broken up into multiple pieces that each handle a separate task. Solutions that contradict the Single Responsibility principle tend to be difficult to extend and often need to be refactored when new functionality needs to be added. Just like the DRY principle, following the Single Responsibility principle is a simple technique that will reduce the risk of needing to rework the code in the future.
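As a hypothetical sketch (the names are made up for illustration), here is that same work with the calculation, transformation, and storage responsibilities pulled apart so each can change and be tested on its own:

```python
import json

def average(values):
    """Calculation: compute a summary statistic."""
    return sum(values) / len(values)

def to_report(metric_name, value):
    """Transformation: shape the result for downstream consumers."""
    return {"metric": metric_name, "value": round(value, 2)}

class FileStore:
    """Storage: persist a report. Could later be swapped for a database
    without touching the calculation or transformation code."""

    def __init__(self, path):
        self.path = path

    def save(self, report):
        with open(self.path, "w") as f:
            json.dump(report, f)

# Each piece has one responsibility, so adding a new metric, a new report
# format, or a new storage backend only touches one part of the code.
store = FileStore("report.json")
store.save(to_report("avg_latency_ms", average([120, 95, 143])))
```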

Overengineering

On the other end of the spectrum, overengineering is the act of designing an overly complex solution to a problem when a simpler solution would do the job just as well. Software engineers often fall into this trap by adding unnecessary complexity to the system just in case it will be needed in the future. In essence, it's the act of solving one problem while optimizing for other requirements that don't, and may never, exist. When developers overengineer solutions, they're often thinking about theoretical scenarios that could come up in the future but are never guaranteed to happen, which leads to extra time and energy spent writing, testing, and debugging code that isn't required.

When your system ends up with overengineered code and logic, it becomes harder for your teammates to read, understand, and modify. Developers will need to work around the complexity in order to add enhancements or fix bugs.

Plus, overengineering a solution directly contradicts the Keep It Simple, Stupid (KISS) principle, which argues that most systems work best if they are kept simple rather than made complicated. If you strive to write code a junior engineer will be able to understand and modify, you're probably in good shape. If you add unnecessary abstractions or try to be clever with your solutions, you're probably not thinking about the risk of later developers modifying your code without fully understanding what it's doing.
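A hedged before-and-after sketch (entirely hypothetical) of what an unnecessary abstraction often looks like next to the simple version that actually meets the requirement:

```python
from abc import ABC, abstractmethod

# Overengineered: a strategy hierarchy and factory for a requirement
# that only ever needed one behavior.
class GreetingStrategy(ABC):
    @abstractmethod
    def greet(self, name: str) -> str: ...

class EnglishGreetingStrategy(GreetingStrategy):
    def greet(self, name: str) -> str:
        return f"Hello, {name}!"

class GreeterFactory:
    def create(self) -> GreetingStrategy:
        return EnglishGreetingStrategy()

# KISS: a junior engineer can read, test, and change this in seconds.
def greet(name: str) -> str:
    return f"Hello, {name}!"

assert GreeterFactory().create().greet("Ada") == greet("Ada")
```

The abstract version is not wrong; it simply pays for flexibility that nothing in the current requirements asks for.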

The production lifetime of the code you write will likely be years, and you and other developers will eventually need to revisit that code and modify it to add new functionality. Code that is less complex will always be easier for future developers to understand and refactor than code that is more complex.

From a risk perspective, overengineering a solution may hinder your team's ability to move quickly in a different direction in the future. Complexity often adds rigidity to code, because it is harder to refactor or modify when the business priorities change. Your goal should be to write clean and concise code, but not so clever that it constrains your ability to move and adapt in the future.

If possible, strive for the Goldilocks Principle: just the right amount of engineering and nothing more. Unfortunately, that comes with experience, and it's easier said than done.

Large Rewrites vs. Incremental Refactoring

As software developers, our job is never done. There is always more work to do on the codebase, whether that's adding new features, cleaning up technical debt, improving performance, or maintaining a legacy system. At some point in your career, you'll be faced with the decision to continue adding to an existing codebase or to rewrite the system from scratch in a new project.

Both paths involve significant risks that are worth understanding before making any major decisions. When deciding whether to refactor a legacy codebase or rewrite it from scratch, you should take a number of factors into account, such as the type of application you're dealing with, your team's capabilities, the available resources, future hiring plans, and your organization's general appetite for risk.

Fortunately (or unfortunately), the decision is most likely not yours to make. The most senior engineers on your team will probably be the ones to make the decision along with your manager, because they will be the ones with the most experience and will understand the implications better than you will.

That shouldn't stop you from contributing to discussions and lending your opinion, however, so let's look at some of the risks involved in both paths.

Refactoring

If you choose to refactor a legacy system, you will be making incremental changes to the codebase to clean it up over time in order to get it to a more manageable state. The goal is to improve the internal structure of the code without altering the external behavior of the system.

Pros

  • Doesn't divert resources away from legacy systems.

  • Improvements can be isolated to specific parts of the codebase in order to limit the risk of introducing breaking changes.

  • Always an option; you can refactor as much or as little as you want, as resources allow.

  • Any codebase or architecture can be refactored incrementally.

Cons

  • Limits you to working within the constraints of the legacy system.

  • While refactoring improves the code, it sometimes cannot fix underlying architectural issues.

  • Often difficult and complex to untangle the web of legacy code.

  • May require writing new automated tests prior to being able to refactor the business logic.

  • Refactoring maintains the status quo, so it's difficult to introduce new features or functionality.

  • Requires discipline to manage the complexity. The application will be in a transitional state as individual parts of the codebase are refactored.

Rewriting

The big rewrite happens when you start from scratch with a new codebase. It may sound enticing and straightforward, but the amount of work is almost always underestimated. This is often done concurrently with changing to a new platform, such as moving from on-premises servers to the cloud or moving to a new chip architecture as hardware is upgraded.

Pros

  • Enables foundational changes to a part of the system, often introducing new capabilities thanks to new technologies or design decisions.

  • Eliminates the need to retrofit old code to meet new use cases because you can build for them without any technical debt.

  • Engineers are able to set new coding standards with a clean codebase.

Cons

  • Always takes longer than anticipated, eating up resources for other projects and increasing the possibility that management will abandon the project.

  • Not guaranteed to solve all problems that plagued the legacy system. Sometimes those are due to systemic or cultural processes rather than the technology or codebase.

  • Complex migration periods as you phase out the legacy system.

  • Duplicates work during the transition period: one team builds the new system while another continues to maintain the legacy system.

  • Requirements for the new system are a moving target as the legacy system still needs to be maintained and upgraded. New functionality may need to be implemented in both codebases.

Every codebase is unique, and every business has different competing priorities, so the decision to refactor or rewrite an application is not a one-size-fits-all problem. You and your team will need to weigh the pros and cons and determine the risks involved in either choice before making a decision.

Bypassing Processes

In the previous section, we discussed the importance of adding or improving processes, and how they add value to an organization. Processes give you guardrails that enable consistency and allow teams and organizations to scale and pass down business knowledge.

But not all processes are created equal, and sometimes processes can feel like they're getting in your way. A lot of developers don't want to deal with the "red tape" that processes add to the software development lifecycle, and most would rather just write more code instead of getting slowed down by seemingly unnecessary processes. Eventually, a developer may cut corners and break protocol.

Here are a few examples where developers sometimes bypass processes:

  • They may merge code to the main branch without a proper code review because they don't want to wait for feedback, leading to a bug that could have been easily caught.

  • They may elect not to use proper naming conventions because they don't want to take the time to search the docs to find the correct way to name an environment variable, leading to a broken deployment because the code expected the variable to use a certain naming convention.

  • They may do some work without creating a proper ticket in the bug tracking system, leading to changes that are hard to audit and track down.

  • They may commit an inefficient SQL query without running an EXPLAIN on it because they think it's a harmless query, leading to a slowdown in database performance (a small sketch of that check follows this list).
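As a minimal, hedged sketch of that last check, here is what asking the database for a query plan can look like. This uses Python's built-in sqlite3 module and an invented orders table purely for illustration; production databases have their own EXPLAIN variants, but the habit of inspecting the plan before shipping the query is the same:

```python
import sqlite3

# Throwaway in-memory database with a hypothetical orders table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

query = "SELECT total FROM orders WHERE customer_id = 42"

# EXPLAIN QUERY PLAN reveals whether the query will use the index or
# fall back to a full table scan, before the query ever reaches production.
for row in conn.execute("EXPLAIN QUERY PLAN " + query):
    print(row)
# A plan mentioning the index (rather than "SCAN orders") is what you want to see.
```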

Yes, some processes can be frustrating, and it may feel like they're just slowing you down unnecessarily, but processes exist for a reason. When you bypass processes, whether it's on purpose or by mistake, you're introducing more risk that something in the system may fail.

Next time you find yourself frustrated and wondering why you have to follow a process, ask yourself why the process is there in the first place. What could go wrong if it wasn't followed? Hopefully, that'll help you understand and appreciate a little extra red tape here and there if it means saving you from making a catastrophic mistake.

Software Dependencies

Almost every codebase leverages third-party libraries and external dependencies to provide some part of its functionality. Why reinvent the wheel and build a library from scratch when you can use an open-source package that solves the problem better than you ever could? Add the fact that thousands of other developers use the library and consistently file bugs and contribute fixes to it so it improves over time, and it sounds like a no-brainer, right?

Most of the time, utilizing third-party libraries saves you time and money because you won't need to implement and maintain a solution yourself. But be careful, because there is a hidden cost to any third-party library you pull into your codebase. Every time you add a new dependency, you're introducing new areas of risk, because your system now relies on someone else's code in order to function properly.

Sure, you might be able to view the source code and gain confidence that the software does what it claims to do, but that's not the only kind of dependency risk you should be worried about.

Here are some other examples of dependency risks:

  • Security risks. The third-party code that you add to your system may introduce new attack vectors that you are unaware of. Hackers often exploit known vulnerabilities in specific widely used libraries.

  • Upgrade risks. Third-party code changes over time as its maintainers add new features and apply bug fixes. A new version may introduce breaking changes that cause your own code to break after upgrading, forcing you to drop everything to fix the bugs that were introduced into your system.

  • Dependency graph risks. You may be able to read the source code of your third-party dependencies, but those libraries may rely on their own dependencies, and those dependencies rely on their own dependencies, and so on. This creates a brittle dependency graph that can easily break your codebase. In some cases, it may be hard to remove or upgrade dependencies that have known bugs, because the library in question is a dependency of another library you installed, so you're at the mercy of your dependencies to fix the issues for you.

  • Supply chain risks. Supply chain attacks are becoming more common in the software industry. They occur when someone uses a third-party software vendor to gain access to your system. When you install third-party libraries into your codebase, you are granting that code access to your system. If an attacker is able to compromise a third-party library that has been installed on your system, they'll be able to access your data and possibly your infrastructure. Sometimes hackers will target little-known but critical libraries that are deep down in the dependency graph, making supply chain attacks difficult to prevent and mitigate.

Hopefully that gives you a good understanding of how introducing third-party libraries into your codebase also introduces added risk. Next time you're searching for a third-party library, ask yourself if it's really needed. If the code is open source and relatively small, it may be better to study how it works and build your own similar solution. This is not always feasible, however, since some third-party libraries can contain tens of thousands of lines of code.


Mitigations for Technical Risk

Divide and Conquer

People often compare the ability to program a computer to having superhuman powers. Sure, it may seem like that at times when you see programs do things that are seemingly impossible or futuristic, but programmers are only human. There's a limit to how much the human brain can comprehend at any given time, and we often find that limit when learning a new codebase or managing a large project at work.

Some projects are so complex that they cannot be built or fully understood by a single individual. To complete these projects, a team of developers needs to work together to build individual components that fit together into a complete system.

Large software projects are inherently risky. They take up huge chunks of the engineering organization's time and resources in an effort to build something that no one fully understands and that no one knows for certain will succeed in the end.

It's impossible to completely eliminate the risk involved in these large initiatives, but there is a useful tool for managing the complexity: decomposition.

Decomposition involves breaking down the problem into smaller and smaller pieces until each piece can be comprehended and completed on its own. When decomposing large-scale projects, look for patterns or common components of work within the requirements and group them to create boundaries around related tasks. Doing this will help you expose natural hierarchies that simplify complex systems.

Breaking down tasks into smaller, more manageable chunks has the added benefit of exposing relationships and dependencies between the tasks. You may find that one task has to be completed before another can begin, or you may be able to identify tasks that can be worked on in parallel by you and your team members so that the team can move quickly. Sometimes, things need to be built in a specific order, so use this technique to expose critical dependencies and identify risks that may delay the project or prevent your team from meeting its deadline.
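To make that concrete, here is a minimal sketch (the task names are invented) using Python's standard-library graphlib to surface which decomposed tasks block others and which can proceed in parallel:

```python
from graphlib import TopologicalSorter

# Hypothetical decomposition of a project: task -> set of tasks it depends on.
tasks = {
    "design schema": set(),
    "write migration": {"design schema"},
    "build API": {"design schema"},
    "build UI": {"build API"},
    "load test": {"build API", "write migration"},
}

ts = TopologicalSorter(tasks)
ts.prepare()

# Each batch contains tasks whose dependencies are already satisfied,
# so tasks within a batch can be worked on in parallel by different people.
while ts.is_active():
    ready = ts.get_ready()
    print("Can start now:", ", ".join(sorted(ready)))
    for task in ready:
        ts.done(task)
```

Printing the batches in dependency order also doubles as a quick sanity check: graphlib raises a CycleError if the decomposition contains a circular dependency.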

When to use decomposition:

  • When dealing with large projects: In an agile development shop, you would break down long-term initiatives into medium-term epics, which are further broken down into short-term user stories. You can then help prioritize which user stories should be worked on first.

  • When refactoring large pieces of a codebase: Big changes equal big risk. Break up the changes into small pieces and refactor them piece by piece over time. There's much less risk in deploying incremental changes to a production environment than in deploying one large change.

  • When dealing with quarterly or annual goals: You may have a few main short-term and long-term priorities, but what you need to do to meet those goals may not be obvious. Breaking them down into smaller subgoals will help you work backwards and figure out a plan of action.

No matter how large a task or project is, decomposing the problem is all about breaking down the requirements into smaller puzzle pieces. While this allows you to organize the pieces so they are easier to understand, what it really comes down to is managing risk by planning ahead.

Planning Ahead

Every battle is won before it's ever fought. (Sun Tzu)

Part of our strategy is getting the programmers to think everything through before they go to the coding phase. Writing the design documents is crucial, because a lot of simplification comes when you see problems expressed as algorithms. (Bill Gates)

Whether you're assigned a ticket to work on or you're able to choose which ticket to pull in next, taking a little time to put a plan together can go a long way toward reducing wasted time and effort spent coding the wrong solution. Depending on how much information is in the ticket, it may be a straightforward change that's been thought through already, which is great! But there will be times when you don't have quite enough information from the ticket, and you'll need to do some research and planning before writing any code.

A common habit among junior programmers is that they'll begin writing code as soon as they pull a ticket from the backlog. They may not fully understand the problem or may not have a complete grasp of the codebase, so they start making small changes here and there to see if they can come up with a solution that works. While you might find a good solution now and then, there's a good chance some other programmer on your team had a different implementation in mind, and often theirs is better because they understand the problem or the codebase better.

Coding without a plan is a mistake that telegraphs your inexperience to your manager and the rest of your team. It often results in a lot of rework because you don't fully think through the problem and have to change direction before coming up with the final solution. You should be deliberate with most changes and not settle on the first solution that comes to mind, because there are oftentimes better ways to solve a problem.

An easy technique you can use to reduce the risk of rework is to plan out your work ahead of time. In fact, this is a technique you'll be using quite a bit as a professional programmer. You'll find that this will improve your decision-making skills because you can work through different scenarios and eliminate ones that are insufficient or that would be difficult to maintain or extend in the future. Planning gives you an opportunity to iterate on your solution before writing code, rather than having to rewrite large chunks of code. After all, it's faster and cheaper to refactor an idea on paper than it is to refactor code that's already been written.

Rewriting code is expensive; it can cost hundreds or thousands of dollars when you haven't considered the consequences or side effects of a solution before implementing it. You may have to toss out code you spent all day writing because a coworker pointed out an edge case you didn't think about ahead of time. It's frustrating when you have to throw out work, especially after spending a long time on the solution. Planning ahead hedges against the risk of having to toss out code.

Planning ahead also helps you see the bigger picture of the problem you're trying to solve, because it forces you to think about important decisions upfront, when it's cheap to iterate toward better solutions. Over time, the software industry has adopted different tools and frameworks for writing down and structuring this planning. Design documents are the most common tool used for planning out the technical details of a software project.

A design document is a template or worksheet that helps you think about how your solution will meet a set of technical requirements. The technical requirements describe what the end result should be, and your design document describes how your solution will meet those requirements. A thorough design document should contain everything you and the other developers need to write the code to satisfy the project's requirements. Once you've spent the time thinking through the solution, the design document will serve as a guide for you and the other programmers throughout the life of the project.

Not only does a design document serve as your guideline, but it allows your teammates to evaluate and peer review your ideas before you spend valuable time and money implementing your solution. It's nearly impossible to know every side effect and dependency of the code you're writing, so the more kinks you can iron out in the design, the easier your code will come together when it's time to write it. Spending time compiling your ideas in a design document forces you to think through the architecture and how it will integrate with other parts of your codebase, as well as helping you gather valuable feedback before spending developer-hours implementing the wrong solution or even solving the wrong problem.

A primary purpose of using design documents is to come to a consensus on a solution before implementing it, which helps avoid costly disagreements in the future. By getting all parties on board with a solution, you can be sure that what you deliver is what was agreed upon. And if the stakeholders come to you and try to increase the scope of the project or change direction, you can point to the design document and show them what they agreed to at the beginning of the project.

It may feel like more work up front, but it's much cheaper to change the design of a solution during the planning period than it is to make an expensive change once the code has already been written.

Code Reviews

While this one may seem obvious to most people, the number of teams that ship code to production without a proper code review process is probably higher than you'd think. Under tight deadlines and in stressful or even lax work environments, it is easy to skip the code review process altogether, and that introduces the risk that you'll ship buggy code to production. It's especially common on small teams or newer projects, because things change so quickly as you're building out a minimum viable solution.

While it may allow you to ship code faster, skipping code reviews comes at the expense of code quality. Adding a second pair of eyes to peer review your code increases the quality of your work because your coworkers might catch bugs that you didn't even know existed. In addition to catching syntactic errors or nitpicking coding standards, your coworkers may also catch potentially dangerous errors in the logic itself. Code you were confident worked one way may work a completely different way if there's a misplaced operator or parenthesis. Your coworkers may also have more knowledge about a specific part of the codebase that you're changing and can help identify unforeseen consequences of the changes you're proposing.

There's no doubt that code reviews can be frustrating. You may think you've done a good job coming up with a good solution to the problem, and you're probably proud of the code you've written, but your teammates may pick apart your code and ask for changes. They'll ask questions about why you built something the way you did and suggest edits that you may not think are correct.

It's easy to get defensive when it feels like you're being attacked, but it's important to remember that they're not criticizing you personally. You're all part of the same team, and it's everyone's responsibility to ship quality code. Try to keep in mind that they're just trying to help you make improvements to your code.

Plus, there are plenty of benefits to code reviews that you may not realize, such as:

  • When you review other people's code, it helps you learn the codebase.

  • The codebase is constantly changing, so it also helps you stay up-to-date with the modifications being made.

  • You'll be exposed to new techniques and patterns from the code that your coworkers write, and it will help you write better code.

  • Your coworkers may offer advice on a better way of solving a problem, helping you learn and grow as an engineer.

  • Requiring one or two pre-merge code review approvals adds checks and balances to reduce the risk of shipping buggy code.

  • Having your code reviewed forces you to tie up any loose ends and make sure your code works and has been tested before submitting it for peer review. Just knowing that your coworkers will catch bugs means you'll spend extra effort making sure your code works properly.

  • Code reviews give developers a chance to enforce consistency within the codebase, from patterns to naming conventions and syntax.

  • Code reviews help catch critical mistakes that are often overlooked or misunderstood by the author.

  • Your coworkers will help ensure your code meets the project requirements as well as your organization's coding standards.

  • Your coworkers may find performance issues in your code and suggest ways to improve the efficiency of your algorithms.

  • Likewise, your coworkers may find security issues in your code that could compromise your business's credibility or, worse, your customers' data.

The list above is by no means exhaustive, and there are many more benefits to reviewing code before merging it into the main branch. While the process can feel like a burden and extra overhead to programmers just starting their careers, the benefits outweigh the costs in the long run, given the number of issues caught during the development phase instead of slipping through to the staging and production environments.

Code reviews are all about managing and reducing the risk involved in shipping defective code. Just like authors, researchers, and students need to have their writing peer reviewed, so do programmers. We're not able to catch every mistake, especially when we're deep in the weeds trying to get our code to compile correctly. Having other developers double-check your work benefits everyone in the long run.


Static Code Analysis

Static code analysis is the act of analyzing a codebase without actually executing its code. The technique is gaining popularity among software organizations, and many teams are adopting tools to help standardize their code and find vulnerabilities within it.

There is an entire industry dedicated to automating static code analysis so that you can focus on what you do best: building value for your customers. Some of the more advanced static code analysis tools will scan your software dependency graph for vulnerabilities and alert you to any libraries that you should upgrade or replace due to security issues. They often use proprietary or open-source databases, maintained by security researchers, to track known software vulnerabilities.
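Conceptually, the dependency-scanning part of those tools boils down to comparing what is installed against a database of known-bad versions. Here is a toy sketch of that idea in Python; the advisory data is invented, and real tools rely on curated vulnerability databases rather than a hardcoded dictionary:

```python
from importlib import metadata

# Hypothetical advisory data: package name -> versions with known issues.
KNOWN_VULNERABLE = {
    "examplelib": {"1.0.0", "1.0.1"},
}

def scan_installed():
    """Report installed distributions whose versions appear in the advisories."""
    findings = []
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if dist.version in KNOWN_VULNERABLE.get(name, set()):
            findings.append((name, dist.version))
    return findings

for name, version in scan_installed():
    print(f"Vulnerable dependency: {name}=={version}; an upgrade is recommended")
```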

Here are a few examples of popular static code analysis tools:

  • SonarCloud helps you quantify code coverage and identify security vulnerabilities, duplicate code, and code smells.

  • Snyk helps you find and automatically fix security vulnerabilities in your code, open-source dependencies, and infrastructure code so you can focus on building.

  • GitHub's Dependabot helps you keep your dependencies up-to-date by automatically opening pull requests against your GitHub repositories to install updates.

If your team doesn't already use static code analysis to aid in finding and fixing vulnerabilities, consider suggesting that they try out some tools. You'd be surprised at what vulnerabilities may be lurking in your codebase, and you can leverage these tools to harden your systems and build more reliable software.


Automated Tests

In the previous section, you learned how an automated test suite can provide immense value to your team. Automated testing is so important that it's worth mentioning again, because it doubles as a way to manage and reduce the risk of introducing defects when making changes to existing code. A team with sufficient automated test coverage across its codebase can catch bugs proactively, faster and more cheaply, before code changes hit production.

Building good habits like writing unit and functional tests when you commit new code is one of the best things you can do as a junior programmer. If your team doesn't already have a test suite or a continuous integration system in place, use that as an opportunity to suggest one and implement it yourself. It's a lot of work up front, but it's a long-term investment that will bring improvements to developer productivity for years to come.
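As a minimal, hypothetical sketch of that habit using Python's built-in unittest module, a new piece of logic ships together with tests that pin down its expected behavior, including an edge case:

```python
import unittest

def apply_discount(total, rate):
    """Business logic under test: discount a total by a fractional rate."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(total * (1 - rate), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 0.1), 90.0)

    def test_zero_rate_leaves_total_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0.0), 59.99)

    def test_invalid_rate_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 1.5)

if __name__ == "__main__":
    unittest.main()
```

Run on every change as part of a continuous integration pipeline, tests like these become the safety net that lets you refactor the surrounding code later without silently changing its behavior.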

Here are examples of how automated testing can help you and your team:

  • Automated tests lead to increased productivity, because you can make changes to parts of the codebase with confidence that you're not breaking existing functionality.

  • Feedback loops are faster, because you can run the tests locally or on your continuous integration server as you're making changes. There's no need to deploy your code to hosted environments to make sure it's working properly.

  • The overall software development life cycle can be shortened because you can make changes and write new tests to ensure the code is working properly.

  • You're able to reduce the risk of introducing new defects because you can write test code that checks for specific edge cases and then run those tests over and over again.

  • Automated tests allow you to focus on feature development and building for scale, rather than tracking down and fixing bugs introduced into the system when you make changes to legacy code.

If your team already has a continuous integration system in place, that's great. All you have to do then is build the habit of adding new tests with every change you make. You'll be surprised at how quickly your test suite grows, and pretty soon you'll have good coverage over the business-critical components of your system. The more test cases you can cover, the lower the probability of introducing regression issues into your codebase. And lowering the probability of introducing breaking changes lowers the risk when refactoring or making changes to the system.

Postmortems

Failure is only the opportunity more intelligently to begin again. (Henry Ford)

This whole section has been about managing and reducing risk, but an unfortunate fact of life is that it's nearly impossible to completely eliminate all risk involved in writing software. With any moderately complex software, things will go wrong at some point. And sometimes things will go very wrong. Failure is inevitable, and at some point, you'll be pulled into an incident. When these incidents happen, it's important to use them as learning experiences and take the time to reflect on the preceding events in order to better understand how and why they happened. In doing so, you'll be able to learn from your mistakes and make any appropriate changes to prevent them from happening again in the future.

The best thing you can do in the aftermath of an incident is to capture and document what happened leading up to, during, and after the incident so that you can reflect, learn, and share that knowledge with others within your organization. This process is known as a postmortem.

An incident postmortem should bring people together to discuss and document the details of an incident:

  • What was the timeline of events leading up to and during the incident?

  • What was the ultimate root cause?

  • What was the impact on the customers and the organization?

  • What actions were taken to mitigate the failures and get the system back to a stable condition?

  • What steps, if any, should be taken to prevent the same thing from happening again?

If you and your team are able to set aside time to put together a root-cause analysis after a major operational incident, then you're setting yourself up for the opportunity to improve yourself, your teammates, and your team's software development processes. When you learn from your mistakes, you're able to reduce the risk of making those same mistakes in the future, but it takes time and effort to assess the impact and damage after the dust has settled. A postmortem is a useful framework for sharing knowledge and learning from incidents. Its ultimate purpose is to help organizations turn negative events into forward progress.

Postmortems can be difficult, however, especially if one highlights a mistake or oversight you personally made. You or one of your colleagues may be embarrassed or nervous to share details within your organization. Successful postmortems should be blameless and focus on finding a solution to prevent the root cause from happening again, not on pointing fingers and assigning criticism.

Your goal should be to bring people together in a constructive and collaborative environment that allows everyone to contribute to the progress and evolution of the organization. Postmortems are designed to build trust among team members, across teams, and even with customers. Some companies choose to publish their postmortems publicly in order to show their customers transparency and rebuild confidence in their products.

You don't need to wait for an incident to reflect and learn from your past, however. There's another framework, called the retrospective, commonly used by many modern software companies.

Retrospectives

While progress may seem linear from the outside, behind the curtain it is sometimes a chaotic and sloppy process to get where you're trying to go. Things rarely go according to plan, and you need to learn to adapt to changing requirements and external influences.

As professional software developers, it is our job to master the art and science of delivering quality software on time and within the project requirements. To do this, we often reflect on our current processes and continually improve the way we deliver software. This act of continuous reflection and improvement is enabled by a framework called a retrospective.

The idea behind the retrospective was originally published in 2001 as the twelfth and last of the principles behind the Agile Manifesto, which states that:

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Retrospectives are meant to offer a framework for your team to evaluate itself and devise a plan to address any areas of improvement for the future. By reviewing and analyzing our past projects, we can determine which processes worked well and identify where we can improve the ones that broke down. This allows teams to add, modify, or remove processes in order to become more productive on the next development cycle.

Retrospectives are designed to involve the whole team and to encourage everyone to be honest and offer insights and opinions on what went wrong and how it can be improved. When teams identify areas of improvement and take action to improve going forward, they're taking proactive measures to reduce the different types of risk we discussed at the beginning of this section.

Remember, the ultimate goal is to manage and, if possible, eliminate risks that would prevent you and your team from delivering high-quality software on time and on budget.


How to Deliver Better Results

We all want to write great code and feel like we're contributing to the success of our team, but it takes more than just writing clean code or finding the perfect abstraction. Even as an individual contributor, there will be things you need to manage, such as your time and productivity. You're directly responsible for making sure you're using your time wisely and keeping your output high, but that's easier said than done. Some days, you may feel like you're getting a lot of work completed, while other days, you'll feel completely stuck and not sure what to do next.

Delivering results is all about finding your personal groove that'll allow you to churn through tasks and ship some actual code on a regular basis. That doesn't mean you should lose sight of producing quality work, however. Your first focus should always be on quality code. If the code isn't up to your team's standards, then you should absolutely spend additional time cleaning it up so it's ready for production. There's no point in moving quickly if you're shipping half-finished code that's full of bugs; you'll just be shifting the burden onto the rest of your team to find, fix, and maintain the defects in your work.

To be a productive software engineer, you should strive to continuously move forward and make progress toward building value and managing the risks involved in shipping code. So, let's dive in and look at what you can do to increase your productivity and deliver better results.
