How To Write Self-Documented Code With Low Cognitive Complexity

Anton Yarkov · Published in Better Programming · 7 min read · Jul 11, 2021


Photo by Chris Ried on Unsplash

In this article, I will share practical, straightforward advice to stop the holy wars and arguments about code quality and about the necessity of refactoring, simplifying, and adding comments or documentation to the code. While I will refer to specific commercial tools in the second half of the article, I want to say that I’m not affiliated with the tools’ authors. The tools are available under free community licenses as well as commercial ones.

The goal of this article is not the tools themselves but the useful metrics that will allow your team to write self-documented code, produce better software, and improve programmers’ lives.

Self-Documented Code

I frequently get the answer “Go read the code” when I ask developers to provide documentation or explain their code. I’m sure I’m not alone. Many developers feel their code is self-documented by default. Not many people understand that creating self-documented code is a complicated design task.

Why is that? Let’s take a look at the way we read code:

  1. First, we try to figure out the aim of the code: WHAT the task and the goal were (real experts also dig into WHY).
  2. Next, knowing WHAT, we read the code to understand HOW the author achieved it.

While it’s possible to do this the other way around, it’s tough in any production solution. Production code tends to be complex due to additional requirements to integrate with other system components like monitoring, logging, or security. It must also be resilient, scalable, and configurable, support multiple platforms and versions, etc.

Some people claim that SQL and HTML answer both HOW and WHAT at the same time. I will disregard this comment here and concentrate on general-purpose languages.

Reading in this reverse direction, software engineers have to figure out the purpose of the code, WHAT it mainly does, and (finally) WHAT it is missing. The result is usually called a mental model. No matter how simple or complex the code is, there is always some mental model underlying it (even a bad one). It might be a domain model or another way to express the thinking process. There are many concrete rules to follow to make your code cleaner, more readable, and more understandable.

As we know, many books have been written on this topic. But to sum it all up, there is only one way to write self-documented code: the developer should write the code so that it uncovers the mental model, expressing the important parts of the model while hiding unnecessary implementation details. Very frequently, developers focus on implementation details like frameworks, databases, protocols, and languages instead, making the model difficult to understand.
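To make this concrete, here is a tiny Java sketch. The shipping domain and every name in it are invented purely for illustration. The first method exposes the HOW (raw arithmetic and count checks), while the second names the WHAT of the mental model and hides the details behind intention-revealing helpers:

```java
// Hypothetical example: the Order domain and all names are made up for illustration.
class ShippingPolicy {
    static final long FREE_SHIPPING_THRESHOLD_CENTS = 5_000;

    record Order(long totalCents, int itemCount) {}

    // Implementation-centric: the reader has to reverse-engineer the business rule.
    boolean check(Order o) {
        return o.totalCents() >= 5_000 && o.itemCount() > 0;
    }

    // Model-centric: the same rule, but the concepts of the mental model are named explicitly.
    boolean isEligibleForFreeShipping(Order order) {
        return meetsMinimumSpend(order) && hasItemsToShip(order);
    }

    private boolean meetsMinimumSpend(Order order) {
        return order.totalCents() >= FREE_SHIPPING_THRESHOLD_CENTS;
    }

    private boolean hasItemsToShip(Order order) {
        return order.itemCount() > 0;
    }
}
```

Both versions behave identically; the second simply makes the WHAT readable without forcing the reader through the HOW.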

Questions like HOW and WHAT are orthogonal because there are several ways to achieve the same goal. Imagine climbers analyzing the best way to reach a mountain peak by different paths. They consider various aspects, summing up their own experience and common knowledge about the mountain relief, weather and air conditions, the time of year, the group’s readiness level, etc. Finally, they select the optimal path to climb. The optimal path doesn’t explain all these aspects, but it allows the group to put the flag on the peak.

image via freepik

As I see it, the mental model makes explicit how strongly self-documented code depends on the author’s design skills, which are what make the code more readable.

The mental model answers the WHAT, while the code tells us the HOW.

Measuring the Readability of the Code

Frederick Brooks, in his famous paper No Silver Bullet — Essence and Accidents of Software Engineering, specified two types of complexity:

  • Essential complexity — caused by the problem being solved; nothing can remove it
  • Accidental complexity — caused by the problems engineers create themselves; it can be fixed

Many years have passed, but we still cannot measure it precisely. The well-known metric, Cyclomatic Complexity (invented in 1976), tightly correlates with the number of lines of code. While it’s an excellent way to estimate how many test cases are needed for full coverage, it is not a good way to measure how hard code is to understand. Here is the problem showcase:
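Since the original screenshots are not reproduced here, the Java sketch below is modeled on the sumOfPrimes/getWords example from SonarSource’s Cognitive Complexity white paper. Both methods score a Cyclomatic Complexity of 4, yet they are far from equally readable; the Cognitive Complexity values in the comments are the ones reported for that example:

```java
class ComplexityShowcase {

    // Nested loops, a labeled continue, deep nesting.
    // Cyclomatic Complexity: 4. Cognitive Complexity: 7.
    static int sumOfPrimes(int max) {
        int total = 0;
        OUT:
        for (int i = 2; i <= max; ++i) {
            for (int j = 2; j < i; ++j) {
                if (i % j == 0) {
                    continue OUT;   // skip non-primes
                }
            }
            total += i;
        }
        return total;
    }

    // A flat switch with no nesting.
    // Cyclomatic Complexity: 4. Cognitive Complexity: 1.
    static String getWords(int number) {
        switch (number) {
            case 1:  return "one";
            case 2:  return "a couple";
            case 3:  return "a few";
            default: return "lots";
        }
    }
}
```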

As you can see, Cyclomatic Complexity reports the same number for both snippets. However, from the developer’s viewpoint, they are not equally complex. The first one is harder to read and understand. We may believe the code is finding the sum of prime numbers, a well-known problem, but an experienced developer will never trust that it solves the task until they check the following:

  • Does the method’s name clearly state what the code does?
  • Does the code achieve the mission?
  • Does the code miss some use cases, and what are they? (i.e., what are the limitations of the code?)

Now, imagine how much harder it is to understand something more specific, from a domain that is not familiar to everyone. SonarSource released the Cognitive Complexity metric in 2017, and not many people know about it. However, I believe this is groundbreaking work that deserves wide adoption. As we can see, it works perfectly for the example described above, clearly separating the two snippets.

You can find all the details in their paper and on YouTube. The metric is based on three rules, illustrated in the sketch after the list:

  1. Ignore structures that allow multiple statements to be shorthanded into one.
  2. Increment (add one) for each break in the linear flow of the code: loop structures (for, while, do-while), conditionals (if, #if, #ifdef), and ternary operators.
  3. Increment when flow-breaking structures are nested.
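Here is a minimal, invented sketch of how those increments accumulate. Each comment marks which rule fires and how much it adds; the method’s total Cognitive Complexity comes out to 6:

```java
class CognitiveComplexityDemo {
    static int countNegatives(int[][] rows) {
        int negatives = 0;
        for (int[] row : rows) {          // +1  rule 2: break in the linear flow (loop)
            for (int value : row) {       // +2  rule 2 (+1) plus rule 3: nested one level (+1)
                if (value < 0) {          // +3  rule 2 (+1) plus rule 3: nested two levels (+2)
                    negatives++;
                }
            }
        }
        return negatives;                 // total: 1 + 2 + 3 = 6
    }
}
```

The deeper a flow-breaking structure is nested, the more it costs, which matches a reader’s intuition about how hard the code is to follow.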

You can find this metric in the static code analysis tools produced by SonarSource (SonarQube, SonarCloud, and the freely available SonarLint IDE extension). SonarQube is available in a free Community Edition.

In SonarCloud, look under Project -> Issues -> Rules -> Cognitive Complexity.

It is easy to find the full report with a line-by-line explanation of how the penalties are assigned.

The default thresholds for code quality are:

Cognitive Complexity

  • 15 (most of the languages)
  • 25 (C-family languages)

Cyclomatic Complexity

  • 10 (all languages)

It’s essential to track both the Cyclomatic and Cognitive Complexity thresholds, since either metric might be larger than the other. Let’s take a look at a simple production example. To find it, go to SonarCloud -> Measures -> select the Complexity filter.

On the left side, you can find the total complexity measurement for a group of files (a folder). Here, one number is roughly double the other: 134 against 64. You can see the file-by-file differences as well.

The LoggerHelper file isn’t so bad in terms of Cyclomatic Complexity, but there is room to improve its Cognitive Complexity. For other files, we see the opposite picture: Cyclomatic Complexity is higher than the Cognitive one.
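To see why the two metrics can diverge, consider this invented sketch. Deeply nested conditionals inflate Cognitive Complexity faster than Cyclomatic Complexity, while flat, switch-like code (such as getWords above) does the opposite:

```java
class NestingPenaltyDemo {
    // Cyclomatic Complexity: 4 (one increment per condition, regardless of nesting).
    // Cognitive Complexity: 6 (each nested `if` costs more than the previous one).
    static String classify(User user) {
        if (user != null) {                // +1
            if (user.isActive()) {         // +2 (nested)
                if (user.isAdmin()) {      // +3 (nested deeper)
                    return "active admin";
                }
                return "active user";
            }
            return "inactive user";
        }
        return "unknown";
    }

    interface User {
        boolean isActive();
        boolean isAdmin();
    }
}
```

Flattening the nesting with early returns would lower the Cognitive Complexity without changing the Cyclomatic one, which is exactly the kind of refactoring the metric is designed to encourage.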

Outcomes

It looks like we finally have a way to measure code complexity. I wish more tools implemented it, but we can already start using it quickly and straightforwardly. The Cognitive Complexity metric still doesn’t tell us how well the code expresses the mental model, but it is already excellent data for moving toward better software. Using these metrics, you can build a transparent dialogue between development and business about the resources and roadmaps needed for better code and product quality:

  • Measure cognitive complexity across your codebase to assess how hard it is to onboard new developers, implement and deliver changes, etc.
  • Use measurable goals when planning your development cycles and any code-improvement activities, such as refactorings.
  • Prioritize improvements for the most critical parts of your codebase.
  • Identify the places that should be covered with additional documentation.
  • Stop the arguments, holy wars, conflicts, and stress with colleagues over code quality.
  • Make your colleagues’ lives more fruitful (everyone wants to finish their tasks as quickly as possible and then spend time with friends and family).

I hope I’ve given you some exciting food for thought and shown how Cognitive Complexity can be used in your daily work.
