Improve Your Thinking


We’ve all worked with them. There is always that one developer who is called a “senior” developer. They’ll tell you that they have 12 years of experience, but after you watch them work for a while, you realize that they really have one year of experience, repeated 12 times. While this has been something that nearly every developer has noticed in their career, few take the time to really quantify what it means to be a senior developer. It’s not really about the number of years of experience you have, but rather about how the years of experience force you to think about your code in a wider context.

If you look at the history of manufacturing and industrial development all the way up to the age of software, you’ll realize that the core principle is all about leverage. In the most basic sense, your work as a developer (or as someone working on the average industrial project, for that matter) is centered on making it so that someone else can use your work to be more effective at their job.

Essentially, in most development jobs, you don’t DO THE WORK, you simply make it so that the people doing the work can do so more effectively. This understanding of your job is probably not the one you were taught in school, but is nevertheless the one that really matters to your employer. You either do the work yourself or you make the work more efficient – regardless of what they tell you, they aren’t hiring you for anything else. If you understand this, it’s not as difficult to advance in your job or to get a better job elsewhere. If you don’t understand it, getting a better job or advancing in your current job is impossible, regardless of how long you’ve been in the industry, because you will be making less than optimal decisions.

The way you think as a developer determines how far along you really are. There are plenty of beginner developers who have the mindset of a senior developer (without some of the required practice), just as there are plenty of senior developers with a very shallow understanding of what they are actually doing. While hiring managers often view a senior developer as someone with years of experience, that is really a secondary proxy for the kind of leverage that a senior developer can (and should) be able to provide. If you can provide this sort of value as a junior or mid-level developer, you will be well ahead of the other people at your level. Ultimately, it all comes down to an understanding of how developers can provide leverage to their employers.

Episode Breakdown

You are considering pulling in a third-party component to handle part of the application’s functionality, assuming the component works well and has a reasonable cost.

A bad approach would be “I don’t want to learn how to use that component – I’ll just write my own”. The problems with this are that most things are more complicated than you’d think and that ongoing maintenance can be a real pain. A better approach would be to directly integrate the new component into your code and deal with the consequences. While this gives you the advantage of using the new component, you are still facing some risk if the component changes, has a major security issue, or has a bug.

The best approach would be to integrate the new component in a way that decouples its implementation from the rest of the application. This lets you get the benefit of using the component, but limits the number of problems the component can cause for the rest of the application. Notice that as the developer’s approach improves, they look first toward reducing costs (by using a component someone else has built) and then toward limiting systemic risk.
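To make the decoupling concrete, here is a minimal Python sketch. The vendor SDK is entirely hypothetical (VendorGeocoder just stands in for whatever component you pull in); the point is that application code depends only on a small interface you own, and the vendor lives behind a single adapter.

```python
# Minimal sketch of decoupling a third-party component behind an interface
# you own. "VendorGeocoder" is a stand-in for a hypothetical vendor SDK.
from typing import Protocol


class Geocoder(Protocol):
    """The only contract the rest of the application knows about."""

    def lookup(self, address: str) -> tuple[float, float]: ...


class VendorGeocoder:
    """Stand-in for a third-party SDK with its own quirky API."""

    def geocode_v2(self, query: str) -> dict:
        return {"lat": 35.05, "lng": -85.31, "confidence": 0.9}


class VendorGeocoderAdapter:
    """Adapts the vendor SDK to the Geocoder interface.

    If the vendor changes its API, gets replaced, or has a security issue,
    only this adapter has to change.
    """

    def __init__(self, client: VendorGeocoder) -> None:
        self._client = client

    def lookup(self, address: str) -> tuple[float, float]:
        result = self._client.geocode_v2(address)
        return (result["lat"], result["lng"])


def plan_delivery(geocoder: Geocoder, address: str) -> tuple[float, float]:
    # Application code depends on the interface, never on the vendor.
    return geocoder.lookup(address)


if __name__ == "__main__":
    print(plan_delivery(VendorGeocoderAdapter(VendorGeocoder()), "423 Main St"))
```

Swapping vendors later means writing one new adapter; plan_delivery and everything like it stays untouched.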

A critical error has occurred in production and it is due to an entirely preventable problem. This happens in even the best-run organizations, but handling it poorly can mean that an otherwise effective team wastes half its time fighting fires.

A bad approach would be to simply spot patch the issue with no further inquiry for an extended period of time. Even if you can quickly fix a problem that occurs every day, this is still a lot of overhead. A better approach is to make the problem faster to fix, either by providing diagnostic information or by surfacing functionality in the app that allows a junior developer (or non-developer) to fix the issue. While this reduces the interruptions to the senior developers (or developers in general), there is still a cost to fixing the issues, and a risk of creating larger issues through mistakes.

The best approach is to actually take the time to fix the issue in your system if you can, or to explore ways to automatically mitigate the issue. While this seems obvious, actually getting to the root of some problems can take extensive investigation and major system changes to fix. As the developer’s approach and thought process improves, they will first try to take the pressure off their team and then will further improve by preventing problems from occurring in order to get rid of the rest of the costs of a problem.
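As a rough sketch of the intermediate step (better diagnostics while you work toward a root-cause fix), here is a hypothetical Python example: the invoice scenario, the MAX_ATTEMPTS threshold, and the remediation text are all placeholders, but the idea is to record what happened and what to do about it so someone other than a senior developer can act.

```python
# Minimal sketch: when a known failure happens, log enough context (and the
# remediation step) that it can be handled without archaeology.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("billing")

MAX_ATTEMPTS = 3


class InvoiceStuckError(Exception):
    pass


def reprocess_invoice(invoice_id: str, attempts: int) -> None:
    if attempts >= MAX_ATTEMPTS:
        # Say what happened, what it affects, and what to do about it,
        # not just that an exception occurred.
        logger.error(
            "Invoice %s still failing after %d attempts. "
            "Remediation: use 'Requeue invoice' in the admin panel.",
            invoice_id, attempts,
        )
        raise InvoiceStuckError(invoice_id)
    # ... normal processing would go here ...


if __name__ == "__main__":
    try:
        reprocess_invoice("inv_1001", attempts=3)
    except InvoiceStuckError:
        pass
```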

A new version of your framework comes out, with major breaking changes. This happens every so often and can cause substantial development disruption. The longer you put off the upgrade, the worse it gets.

There are two bad approaches here. The first is NEVER upgrading and the second is IMMEDIATELY upgrading. The first means that you never get the advantages to be had from upgrading, while the latter means that you are constantly fighting churn and bugs in early versions of the upgraded framework. A better approach is to upgrade after waiting for a while, once others have done so successfully. This avoids wasting time on early bugs and undocumented breaking changes, but means that you can’t take advantage of major upgrades until many of your competitors already have.

The best approach is a full set of integration and unit tests, along with solid manual testing procedures so that you can safely upgrade earlier in the cycle. As a bonus, these practices also make other large scale refactorings easier to pull off. As the developer uses better approaches, they realize that framework upgrades are an excellent test of unit test coverage and other testing processes. They will use this to their advantage so that other operations are more easily accomplished.
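A minimal sketch of what such a test might look like, using Flask and pytest purely as an example stack (the create_app factory and the /health route are illustrative, not from the episode): the value is a cheap end-to-end check that fails immediately when an upgraded framework changes routing, startup, or serialization behavior.

```python
# Minimal smoke test: exercise a critical path end to end so a framework
# upgrade that breaks it fails in CI rather than in production.
import pytest
from flask import Flask, jsonify


def create_app() -> Flask:
    # Stand-in for your real application factory.
    app = Flask(__name__)

    @app.get("/health")
    def health():
        return jsonify(status="ok")

    return app


@pytest.fixture()
def client():
    return create_app().test_client()


def test_health_endpoint_still_responds(client):
    # Cheap check that startup, routing, and JSON serialization all work.
    response = client.get("/health")
    assert response.status_code == 200
    assert response.get_json()["status"] == "ok"
```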

A single process within your system is having major performance issues, and it is similar to other processes in the system, so the same problem is likely to surface elsewhere. While this is common in most software companies to varying degrees, the way you handle it can have dire business consequences.

A bad approach is to simply throw more resources at the problem. While this can work for small things, it isn’t a sustainable long-term strategy, both due to cost and due to the limits of how far you can scale. A better approach is to improve performance by making your code more efficient, so that vertical scaling takes you further. While this gives you more room to scale without increasing costs, it only works for a while before you hit some kind of serious limit.

The best approach is to improve performance by altering your code so that it can scale horizontally and run in parallel. This allows you to scale much further (though not forever), and it can make dynamic scaling easier, which can save a lot of money. Notice that as the developer uses better approaches, the amount of time they have before they run into a scaling issue increases. The idea is to stop fixing problems in the short term and fix them for the longer term instead.
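Here is a minimal sketch of that restructuring in Python, with score_record standing in for whatever the expensive step actually is: once each item is a pure function of its input with no shared state, the same work can be spread across processes today and across machines later.

```python
# Minimal sketch of making work parallelizable: each item is processed
# independently, so the work can be fanned out rather than scaled up.
from concurrent.futures import ProcessPoolExecutor


def score_record(record: dict) -> dict:
    # Pure function of its input: no shared state, safe to run anywhere.
    return {"id": record["id"], "score": record["value"] * 2}


def score_all(records: list[dict]) -> list[dict]:
    with ProcessPoolExecutor() as pool:
        return list(pool.map(score_record, records, chunksize=100))


if __name__ == "__main__":
    data = [{"id": i, "value": i} for i in range(1_000)]
    print(len(score_all(data)))
```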

New developers on your team consistently have trouble getting to the point of being productive. Your onboarding process is a window into how well your development team works in general. If it takes weeks to even get going, you have a problem.

A bad approach to this is to simply require a day or two of a senior developer’s time to help a new hire get their environment working. While this may be the way you do it in the short term, this doesn’t scale in the long term. In particular, you usually hire when you need help, and doing it this way means that getting help slows you down considerably in the short term. A better approach is to document the steps required to set up a new user, and have the new hire follow them. This keeps a senior developer from being tied up getting things working for a new hire, but it still is going to cause a lot of interruptions when setting up a new environment.

The best approach is to have a repeatable, automated process for setting up an environment for a new user. While this takes a lot of time to get going, it also means that you can quickly spin up new environments. This also means that it’s easier to keep environment configuration synchronized, which can get rid of a lot of uncertainty in the entire development process by eliminating variables. Extra points if you do the same on your server environments.

As the developer uses smarter tactics, they reduce the workload placed on existing developers when new ones are hired, which allows the team to scale. Eventually, they will shift from vertical team-scaling strategies (documented onboarding) to horizontal ones (automated environment creation) to allow the team to scale more easily and consistently.
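A minimal sketch of what an automated bootstrap might look like, assuming a hypothetical stack that needs git, Docker, and a requirements-dev.txt file; a real version would match your own stack and might be better expressed as a container or dev-container definition.

```python
# Minimal sketch of a repeatable environment bootstrap: one script a new
# hire runs instead of following a wiki page. Tools and steps are examples.
import shutil
import subprocess
import sys

REQUIRED_TOOLS = ["git", "docker"]


def missing_tools() -> list[str]:
    """Return the required tools that are not on PATH."""
    return [tool for tool in REQUIRED_TOOLS if shutil.which(tool) is None]


def bootstrap() -> int:
    missing = missing_tools()
    if missing:
        print(f"Install these tools first: {', '.join(missing)}")
        return 1
    # Idempotent steps: safe to re-run whenever an environment drifts.
    subprocess.run(["docker", "compose", "up", "-d", "--build"], check=True)
    subprocess.run([sys.executable, "-m", "pip", "install", "-r",
                    "requirements-dev.txt"], check=True)
    print("Environment ready.")
    return 0


if __name__ == "__main__":
    raise SystemExit(bootstrap())
```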

Regular after-hours failures occur on certain processes. While this is something that will probably happen at some level in most organizations, it’s a really bad sign when it happens frequently. Not only does this destroy work/life balance for the team, but the fatigue and distraction that it causes can often result in even more errors, along with employee turnover.

A bad approach to this is to simply have an on-call routine. While this may be necessary in the short term, in the long term it makes working for your company a negative experience, and you will lose the more capable members of your development team to somewhere that doesn’t suck. A better approach is to use enterprise application patterns to implement retries and other mitigations for after-hours problems so that you reduce their frequency. This is a good first step and will help reduce the number of after-hours support issues.

The best approach is to actually do a root cause analysis on the issues you are facing, fix them where possible, and make it possible for non-development staff to handle them where it isn’t. The idea here is to eliminate the problem to the degree possible and to hand what remains to lower-cost resources. Better practices mean that you deal with after-hours support interruptions by minimizing them as thoroughly as possible. When you can’t prevent them entirely, you do what you can to lower the cost of dealing with them. Odds are good that the cheapest way of dealing with a problem doesn’t require much development involvement.
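For the retry pattern mentioned above, here is a minimal Python sketch; the attempt counts, the backoff parameters, and the fetch_exchange_rates example are hypothetical, and a production version would likely lean on an existing resilience library rather than hand-rolled code.

```python
# Minimal sketch of retry with exponential backoff and jitter, which absorbs
# transient after-hours failures instead of paging someone.
import random
import time
from functools import wraps


def retry(attempts: int = 4, base_delay: float = 0.5):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return func(*args, **kwargs)
                except ConnectionError:
                    if attempt == attempts:
                        raise  # out of retries; let real monitoring see it
                    # Exponential backoff plus jitter to avoid thundering herds.
                    delay = base_delay * (2 ** (attempt - 1))
                    time.sleep(delay + random.random() * 0.1)
        return wrapper
    return decorator


@retry(attempts=4)
def fetch_exchange_rates() -> dict:
    # Stand-in for a call to a flaky upstream service.
    raise ConnectionError("upstream unavailable")
```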

Support personnel routinely interrupt developers to ask the same questions over and over again. While this can be an indicator of poor support training, it’s often a better signal of deeper process issues that need to be resolved.

A bad approach to this is to segregate the development team so that support can’t interrupt them. Unless your hiring and training processes for support personnel are deeply flawed, they should only be approaching development when they have no other options. Segregating development from support simply means that support has no support. A better approach is to document the problem and how to fix it each time that support contacts development (possibly in a wiki). While this should reduce the frequency with which development is tasked with support issues, it doesn’t solve all problems. It also means that any problems that do occur will do so in a naturally interruptive manner.

The best approach is to build automated ways that support can use to resolve the issues they encounter, even if those issues are otherwise too complicated to handle on their own. While this approach requires more development resources, it also tends to free up development resources from random interruptions.

Support is a first class user of your application and dealing with support issues is a large part of your role as a developer (even if no one told you that). As your approach improves, you should realize that your support personnel are your first line of defense against larger application problems and you should do your best to support them. If your support team is adequately supported, your users will be happy, making it easier for you to continue doing your job.
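As a rough sketch of a self-service remediation tool, here is a hypothetical Python example: the approved actions, the requeue_order fix, and the audit log file are placeholders, but the shape is a narrowly scoped, logged operation support can run without interrupting a developer.

```python
# Minimal sketch of a self-service support action: narrowly scoped,
# audited, and runnable by support staff rather than by developers.
import json
import time

ALLOWED_ACTIONS = {"requeue_order", "resend_receipt"}


def run_support_action(action: str, target_id: str, requested_by: str) -> str:
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"'{action}' is not an approved self-service action")

    # Every action is logged so development can spot recurring problems
    # and fix the root cause later.
    audit_entry = {"ts": time.time(), "action": action,
                   "target": target_id, "by": requested_by}
    with open("support_actions.log", "a", encoding="utf-8") as log:
        log.write(json.dumps(audit_entry) + "\n")

    if action == "requeue_order":
        # Stand-in for the real fix (e.g., resetting a stuck status flag).
        return f"Order {target_id} requeued"
    return f"Receipt for {target_id} resent"


if __name__ == "__main__":
    print(run_support_action("requeue_order", "ord_42", "support_amy"))
```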

One of your API endpoints is getting attacked and the attack is degrading system performance across the board. As your application gets attention on the internet, this becomes more likely. If your API responses take more effort to generate than the requests that trigger them take to send, it’s just a matter of time before an attack degrades system performance through that endpoint.

A bad approach here is to simply block access to the offending endpoint while the attack is occurring. While this is a short term fix, it cannot be a long term fix because it allows third parties to control whether your application is available to paying clients or not. A better approach is to defer workloads like this and handle them in a background process so that performance degradation is not as immediately obvious to your users. This makes it harder to accomplish a denial of service attack on your systems. Nevertheless, an attacker can substantially raise your costs in this manner, even if your users don’t notice.

The best approach is to make sure that only paying, authenticated users can kick off a process that costs resources, especially if there is an asymmetry between the effort required to start such a process and the effort required to fulfill the request. A denial of service attack where the attackers are paying is essentially just a scaling problem. As developers use better strategies, many security issues related to scaling will be more properly understood as potential revenue opportunities. Many attacks where people are “misusing” your system are nothing more than an unaddressed business opportunity.
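Here is a minimal sketch of that asymmetry fix in Python; the in-process queue, the PAYING_ACCOUNTS lookup, and request_report are all hypothetical simplifications of a real auth check and job queue, but they show the ordering: reject cheaply before any expensive work is enqueued.

```python
# Minimal sketch: the cheap check (authentication) happens before the
# expensive work is even enqueued, so unauthenticated traffic costs little.
import queue

work_queue: queue.Queue = queue.Queue()
PAYING_ACCOUNTS = {"acct_123"}  # stand-in for a real account lookup


def request_report(api_key: str | None, account_id: str, params: dict) -> dict:
    # Reject before spending anything: no auth, no expensive work.
    if api_key is None or account_id not in PAYING_ACCOUNTS:
        return {"status": 401, "error": "authentication required"}

    # Defer the expensive part to a background worker so bursts of
    # legitimate requests degrade gracefully instead of blocking the API.
    work_queue.put(("generate_report", account_id, params))
    return {"status": 202, "message": "report queued"}


if __name__ == "__main__":
    print(request_report(None, "acct_999", {}))          # rejected cheaply
    print(request_report("key", "acct_123", {"q": 1}))   # queued for a worker
```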

Regular development actions (such as builds, PRs, deployments, etc.) take an excessive amount of time and configuration. While automatic build/deploy systems are a huge productivity boost, they do take a lot of effort to set up properly. Frequently, these processes are set up and left in a “good enough” state that allows developers to do their work, while being cumbersome.

A bad solution to this is simply to add documentation. While this can help reduce the headaches for new developers, it doesn’t really solve the issue. If there is a lot of manual configuration required every time you create a feature branch, do a pull request, or deploy code, it becomes a burden for the development team to get it right. Additionally, it makes it harder to change, as you have to communicate every change to the team and make sure they follow new guidelines.

A better solution is to have sensible defaults set up in your systems that do as much as possible to make sure that developers follow standards, with automatic checks to catch them if they screw up. While this is better than the first option, it doesn’t entirely remove the burden from the development team and requires training after every change to the process.

The best solution is to either use built-in tools or build your own tooling that applies sensible default settings to your development process automatically. The ideal solution to developer configuration headaches is to make sure that they aren’t a problem for the average developer.

While external-facing software is probably what you are being paid to develop, don’t neglect developer-facing software that makes the rest of your process easier. Remember that development is all about creating leverage over real-world processes. If you create leverage over important development processes, that can translate into a lot of real-world leverage.
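One small example of that kind of developer-facing tooling, sketched in Python: a branch-naming check that could run as a pre-commit hook or CI step, so the convention is enforced by tooling rather than by memory. The naming pattern itself is just an illustration.

```python
# Minimal sketch of a convention check that makes the default path the
# correct path: run it as a pre-commit hook or an early CI step.
import re
import subprocess
import sys

BRANCH_PATTERN = re.compile(r"^(feature|bugfix|hotfix)/[a-z0-9._-]+$")


def current_branch() -> str:
    return subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()


def main() -> int:
    branch = current_branch()
    if branch in ("main", "master") or BRANCH_PATTERN.match(branch):
        return 0
    print(f"Branch '{branch}' does not match feature/<name>, bugfix/<name>, "
          "or hotfix/<name>.")
    return 1


if __name__ == "__main__":
    raise SystemExit(main())
```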

Tricks of the Trade

These tips don’t just apply to your role as a developer. As with everything you do, there are many approaches you can take, so create a process for assessing how you address various situations. You can look at situations in your own life, outside of work, and apply the same concepts, even if not the same implementations discussed in the episode. You want to act on information, not react to situations, but that takes time and practice. You can start by doing retrospectives on your own life: how did something play out, and what could you have done better?

