Escaping Technical Debt

By Osman Shoukry (@oshoukry) & Kris Young (@thehybridform)

The Visit

On October 6th we had Michael Feathers (@mfeathers), author of Working Effectively With Legacy Code, visit our facility.  The visit had two objectives.  The first was to give tech talks to our engineers about legacy code.  The second was to train selected key individuals in the organization in techniques and skills for dealing with legacy code.

Mr. Feathers graciously agreed to give a recorded community talk about Escaping Technical Debt.


Key Takeaways

  • Tech debt

Technical debt is a metaphor for a system's resistance to change.  The larger the debt, the higher the resistance.  When change is introduced, the time spent looking for where to make the change is one example of tech debt.  Another is when the system breaks in unexpected ways.

  • It takes a community to mitigate technical debt

Tech debt affects everybody, from engineers to product owners to the CEO.  Mitigating tech debt requires everyone’s support and involvement.  As engineers, we are responsible for how to mitigate technical debt.  Product owners should have input on the tech debt mitigation effort.  The benefits of tech debt cleanup should be visible to everyone.  Don’t surprise your peers or management with a sudden change in productivity.  They will bring you ideas to help pay down the debt in more enabling ways for the future… Involve them!

  • Don’t pay the dead

Code that doesn’t change, even if it is in production, doesn’t collect debt.  Dead debt is any part of the system that doesn’t change, including its bugs and poor design.  Engineers often get wrapped up cleaning code that isn’t changing.  Resist the urge to refactor unchanging parts of the system.  If it isn’t changing, it isn’t technical debt; it is dead debt.  Walk away.

  • Size your debt

Target the most complex changing code, ordered by frequency of change.  Build a dashboard showing the amount of changing code by frequency.  These are the most expensive tech debt hot spots, and they should be the focus of the tech debt cleanup effort.
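The ranking above can be sketched as a small function.  This is an illustrative sketch only: the data shape, field names, and the churn-times-complexity weighting are assumptions, not something Mr. Feathers prescribed.

```javascript
// Hypothetical sketch: rank tech-debt hot spots by churn x complexity.
// The input shape and the scoring formula are illustrative assumptions.
function rankHotSpots(files) {
  return files
    .map(f => ({ ...f, score: f.changesLastQuarter * f.cyclomaticComplexity }))
    .sort((a, b) => b.score - a.score);
}

const report = rankHotSpots([
  { path: 'billing/invoice.js', changesLastQuarter: 42, cyclomaticComplexity: 30 },
  { path: 'legacy/export.js',   changesLastQuarter: 1,  cyclomaticComplexity: 80 },
  { path: 'auth/session.js',    changesLastQuarter: 25, cyclomaticComplexity: 12 },
]);
console.log(report[0].path); // the most expensive hot spot
```

Note how a very complex but rarely touched file ranks last: that is the "dead debt" from the previous takeaway, and it falls out of scope automatically.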

  • Seal the leak

Technical debt should be paid down immediately.  Simple code is easy to make more complex, and easy to make simpler.  However, as the code becomes more complex, the balance tips in favor of adding complexity rather than removing it.  No matter how complex the code is, it is always going to be easier to add complexity than to remove it.  Inexperienced engineers fail to see the initial complexity that is added to the system.  In turn, they follow the path of least resistance, making the code more complex.

To seal the leak, first identify the most complex and frequently changing code.  Second, give explicit ownership for that code to the most seasoned engineers.  Third, let the seasoned engineers publish the eventual design objectives.  And finally, owners should review all changes to the code they own.

  • Slow down to go fast

Clean code takes time and disciplined effort.  Adequate time is needed to name classes, methods and variables.  Poorly named methods and classes will create confusion.  Confusion will lead to mixing roles and responsibilities.


Finally, low tech debt will yield high dividends with compound interest…

“Don’t do half assed, just do half”

The InBetweeners’ Journey

By Amy Brandt, Sarah Breen, Kerri Gardner and Allison Liedtke
Mentors: Alan Chen and Dillon Eng

Our Product: Account Performance Monitor

Our product is a dashboard specifically created to help the Digital Advertising Advocates (Specialists) manage their accounts. When the page loads, a specialist can select their name, which loads their data from the database. The specialist can then filter by any combination they choose (including none at all) of the following filters: customer, package, service type, and search engine. The main feature of the dashboard is a graph that tracks the average spend-to-target ratio for the last three months. In addition, it tracks the current underspend for all of that specialist’s accounts and the current streak of days with no underspend.


Why It Is Useful

This dashboard is useful to the specialists because it is faster and more efficient than the Tableau dashboard they currently use. In addition, our dashboard is more focused and more user friendly. The graph shows the specialist’s average spend-to-target ratio: the amount of money spent divided by the target budget. The closer the line is to one, the closer they are to spending their target amount for that month. The dashboard also shows the underspend amount, which helps the specialist know when they need to spend more on a certain account. The specialist can easily switch between the filters without hassle, and the information is updated almost immediately.
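The two numbers described above reduce to simple arithmetic. Here is a minimal sketch; the field names (`spent`, `target`) are assumptions for illustration, not the real schema.

```javascript
// Illustrative: compute the spend-to-target ratio and the underspend
// for one account. Field names are assumed, not the actual schema.
function spendToTarget(account) {
  const ratio = account.spent / account.target;               // 1.0 means on target
  const underspend = Math.max(account.target - account.spent, 0); // 0 when on/over target
  return { ratio, underspend };
}

const { ratio, underspend } = spendToTarget({ spent: 900, target: 1000 });
console.log(ratio, underspend); // 0.9 100
```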


How We Built It

To create our product, we started by making the UI using a combination of HTML, CSS, JavaScript, jQuery, and D3 (a JavaScript graphing library that we used to build our graph). Most of us were familiar with these languages, so this part was not as challenging to complete. At first we used mock data to create our UI, but once we had the majority of the parts functioning, we switched to making a server and database connection. For this we used Node.js, a server-side JavaScript runtime that allowed us to create a full-stack web application; using the mssql package on npm, we were able to get data from the database and then send it to our UI, all in JavaScript. Once the data was sent to the UI, we used a filter that we had created with Crossfilter (a JavaScript filtering library) to narrow down the information being shown based on what the specialist picked in the UI.

These technologies were new to us, so implementing them in our project proved more challenging and took longer to complete; however, once we had them up and running, we were able to move on to the testing phase. To test our business logic we used a testing library called Jasmine to make sure all of our filters were working correctly and that the data being shown was accurate. Finally, after testing was complete, we refactored our code to make it cleaner and more efficient.
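The any-combination filtering described above can be sketched in plain JavaScript. This is a simplified stand-in for the team's Crossfilter code, not their implementation; the row and filter field names are illustrative.

```javascript
// Simplified stand-in for the Crossfilter logic: narrow account rows by
// any combination of filters. An absent (null/undefined) filter matches
// everything, so selecting no filters returns the full dataset.
function applyFilters(rows, filters) {
  return rows.filter(row =>
    Object.entries(filters).every(
      ([key, value]) => value == null || row[key] === value
    )
  );
}

const rows = [
  { customer: 'Acme', packageName: 'Gold',   searchEngine: 'Google' },
  { customer: 'Acme', packageName: 'Silver', searchEngine: 'Bing' },
  { customer: 'Zed',  packageName: 'Gold',   searchEngine: 'Google' },
];

// No filters selected: everything passes.
console.log(applyFilters(rows, {}).length); // 3
// Filter by customer and package together.
console.log(applyFilters(rows, { customer: 'Acme', packageName: 'Gold' }).length); // 1
```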

How It Works

Our application is composed of three main layers: the client, the server, and the database. The client and server are tied together over WebSockets, using event-based communication that enables our app to receive information and update on demand.

The main interaction between layers occurs when a specialist’s name is selected from the search box. Once a name is selected, an event is triggered that checks that the text entered is a specialist’s name. If it is, the client-side socket emits the text entered in the search box. Our server-side socket receives this emitted message, inserts the specialist’s name into the query, and sends the query to the database to retrieve all of that specialist’s information. The server contains an event-based function that runs once the database has gathered the requested information and returned it to the server-side socket. This function emits another socket message, sending the specialist’s information as a list of JSON objects. The client-side socket receives this message and passes the data into the Crossfilter function, which parses the received JSON objects into a series of JavaScript objects of arrays. These objects are consumed by the UI JavaScript files and used to populate the filter boxes, graph, underspend, and streak.

Once all of the specialist’s information is on the client side, no more communication between the layers is necessary until another specialist’s name is selected. Instead of querying the database for a new dataset each time different filters are selected, each select box notifies the Crossfilter whenever its state changes. The Crossfilter then uses these selections to create a dataset specific to that unique combination and passes the new dataset to the dashboard. Because all of this processing is done on the client side, the dashboard’s response to a filter selection appears nearly instantaneous.

Our Experience


From our internship experience this summer, we have taken away many insights and a tremendous amount of useful knowledge. First of all, we all gained experience in technology and programming. This experience will really benefit us as we head off to our first year of college and begin to consider where we want to work in the future. Especially important was our exposure to both the backend and frontend sections of our project, which gave us a well-rounded understanding of how our project was built. In addition, we learned a lot about teamwork and how a team runs at a software company. For example, we learned how to act within a team and that it is often necessary to split up tasks individually or between partners. We also learned about agile development and the benefit of daily standups and weekly sprints. While creating our project, we saw the advantage of breaking a big project down into small, achievable goals. This is valuable because it meant our project was always evolving, and it was helpful to complete one small, focused task, get it working, and then move on to another. Our final takeaway is that one learns from one’s mistakes. Throughout the internship we each had different struggles to work through. These struggles, although difficult at the time, taught us a lot about programming and ultimately made our project stronger.

Besides the experience we gained in programming and software development, we gained insight into Cobalt as a whole and useful advice for the future. Leaving this internship, we have a good sense of what Cobalt does and how our project can affect the company. In addition, every Friday the female interns attended a lunch where we would meet with a Cobalt employee and discuss their career. This was very insightful because we received helpful advice on things like interviews, following your passion, and thriving in a male-dominated field. In a nutshell, working at Cobalt was a very enjoyable experience for all four of us, and we gained an immense amount of knowledge.

Creating an Interactive Wallboard for JIRA

By Ethan Goldman-Kirst – Team Epic (Justin Emge, Jun Fan, Ethan Goldman-Kirst)

Over the last few weeks, our team of three interns has been creating a plugin for displaying progress and work at a high level in JIRA. We have already deployed our free, open-source software internally at Cobalt and are in the process of publishing it to the Atlassian Marketplace. Throughout our time here, we have met with potential customers, adapting the plugin to meet their needs. In the next month, we will continue to receive feedback and improve our software so we can leave with a successful and widely used product.

Overview of the Plugin

The plugin is built for JIRA and consists of a live-updating table of projects. The projects shown are all those you have permission to see (everyone at Cobalt can see all projects in the company). We envision two main use cases: 1.) as a tool for product managers, and 2.) as a wallboard displayed on the TVs around the office to show work in progress in real time. A fullscreen mode assists with the latter, making the page easily navigable on a touch screen. The main feature of the plugin is the unique ability to see detailed information about each epic in a project, including a list of all its stories. Epics and stories are displayed in an old-fashioned style using post-it notes, replicating how Agile boards were done in the past.


Publishing to the Atlassian Marketplace

On Monday of this week, August 6th, we began the process of publishing the app to the Atlassian Marketplace. It will hopefully be available to everyone at the end of the three-to-five-day review process. The publishing process has been quite a lot of work in itself. We had to consult a designer to make our icon, and we gained experience in Photoshop while creating the banner seen above. At the end of the week, as long as our plugin is approved, you can find it on the Atlassian Marketplace by searching for Epic Work View for JIRA. We would appreciate any questions or feedback.
