When To Write Bad Code


Have you ever had this situation before? You have a problem to solve and no idea how to solve it. You want to sit down and do it “the right way”, but “the right way” involves writing tests, designing objects and generally working out something that’s far more complex than you need to get a working prototype. And so nothing gets done.

I’ve been there myself. I recently needed to prototype something. As I sat down to work on it, I had absolutely no idea how I was going to write the component I was working on. And so, I started working – without a plan, without writing tests, without designing an architecture, and without really knowing how the component was going to end up.

You know what? The component came out working, but when I was done it was ugly. Totally ugly. The code was bad. But I had a solution, and a solution that worked.

People who teach others how to write software too often forget that we’re responsible for showing the right way, but the right way can’t come at the expense of experimentation. Developers are paid to solve problems, not to write code; our code is our expression of a solution. But most bosses who aren’t technical don’t care about coding standards and test plans. They care about problems being solved.

I advised an individual this week that it was therefore okay to write bad code if it meant getting something down in the code editor and ready for demonstration to someone else. I pointed out to him that getting something done beat getting nothing done, and that code was infinitely refactorable. He could clean it up once it was working.

Telling people they have to write perfect code from the beginning is like telling an author that they can’t have any mistakes in their rough draft. This would be absurd to most of us, yet we seem to think that writing code that can be thrown away or refactoring it after the fact is somehow a bad thing. Refactoring and rewriting code is just a part of our jobs.

This is not an excuse to release bad code. This does not absolve a developer of the need to come up with good solutions that also adhere to developer best practices. What this does is free developers from the belief that they have to be perfect from the very first line of code. It’s okay to write code not knowing where it will end up and to fix it as the next step.

This was part of my goal in writing Mastering Object Oriented PHP. Rather than espousing a particular design philosophy, I highlight the best practices that worked for me over time, in an easy-to-use way. These are the best practices that I use every day. If you’re struggling with object-oriented PHP, Mastering Object Oriented PHP is a great resource for you to be able to understand the best practices, and also have the freedom to experiment. Pick up a copy today.

When the code is working and you’re ready to move on to the next phase, then you can work on making the code pretty, readable, well-documented and tested. While this flies in the face of concepts like test-driven development, I believe sometimes it’s necessary for developers to simply get the problem solved and worry about the details later. But this does NOT mean that developers can push bad code into a repository. Nothing lives longer than temporary code; see to it that your finished code is always good.
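To make the idea concrete, here is a minimal sketch of the first-draft-then-refactor cycle. (The shopping-cart example and every name in it are invented for illustration, and it’s in Python rather than PHP; nothing here comes from the actual prototype discussed above.)

```python
# First draft: ugly but working -- duplicated logic, a magic number, no tests.
def total_first_draft(items):
    t = 0
    for i in items:
        if i["type"] == "book":
            t = t + i["price"] - i["price"] * 0.1  # books get 10% off
        else:
            t = t + i["price"]
    return t

# Second pass: the same behavior, refactored once the prototype proved out.
BOOK_DISCOUNT = 0.10

def discounted_price(item):
    """Price of one item after any category discount."""
    if item["type"] == "book":
        return item["price"] * (1 - BOOK_DISCOUNT)
    return item["price"]

def total(items):
    """Cart total, built from small, individually testable pieces."""
    return sum(discounted_price(i) for i in items)
```

Both versions compute the same totals; the point is that the ugly version was enough to prove the idea, and the cleanup came as the next step, after it worked.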

My prototype was rewritten, this time in the right way, with the components and architecture required to make it into good code. Nobody ever has to know that the first draft was ugly and horrible; they only care that the finished product is great. It’s our little secret.

This is an excerpt of the material that I send out each and every week to my mailing list through the Developer Weekly series. Sign up at the bottom of this post and never miss an issue!

Brandon Savage is the author of Mastering Object Oriented PHP and Practical Design Patterns in PHP

Posted on 1/29/2013 at 7:00 am
Categories: Best Practices, Object-Oriented Development, Software Development

Anthony Wright (@wrightmanthony) wrote at 1/29/2013 9:06 am:

Great post. I wish I worked at a company that had the same “refactoring and rewriting code is just a part of our jobs” philosophy. We constantly write bad code due to the rushed nature of our deadlines, with the boss saying that it can get cleaned up later, but then there’s never time later to clean it up. Now we are getting to the point with our clients where they are bringing larger projects with multiple phases, so I’m hoping that when they see that such poor code is more costly to maintain, they’ll see the need to refactor rushed code when the project is done.

Ben wrote at 1/29/2013 9:29 am:

Shouldn’t you just form a good plan first rather than hammering away at the keyboard looking for a solution? I guess if coding is part of your problem solving process, then whatever.

Shane wrote at 1/29/2013 10:23 am:

Great post! When I first started, I needed everything to be perfect before it went into production – logically, structurally, and aesthetically. After years of getting emergency requests from clients, I’ve moved to this process: 1) Duct tape it 2) Wait for at least a month of production use 3) Refactor. This lets me implement quickly, and gives the client time to renege on or adjust their request (which happens often).

georg (@twitter.com/kinggeorge11) wrote at 1/29/2013 10:28 am:

Great post – and if it was just a prototype to throw away afterwards, I am totally fine with experimenting and bringing your design ideas to some code. But if it is code which gets refactored afterwards, it should be done with tests (in fact TDD is not about writing perfect code from the start – the opposite: do it in iterations and refactor – because if it is a “prototype” of “ugly” code which needs to be refactored into production code, it could break in many places, and tests could avoid this. If there is a problem but no solution, at least I know what the outcome should be…)

So I agree and disagree a bit ;-) thx for sharing!
Best, georg

John J. Locke (@lockedownweb) wrote at 1/29/2013 11:53 am:

The philosophy that refactoring and cleaning up the code is part of the process makes this whole thing work. The title had me completely in a different state of mind, I’m glad I read the whole piece before passing judgement.

Jason McCreary wrote at 1/29/2013 1:42 pm:

I actually proposed a similar topic for a conference talk. I like the author analogy. I’ll have to borrow that… Good, quick article.

Hector H Alpizar (@chaq686) wrote at 1/29/2013 7:25 pm:

It looks like an interesting post. I didn’t read it all, but I saw the title and I thought “I could relate to that”. The thing is that when I received your e-mail, I was working. And I’ll probably stay at work till midnight.

Also, I read your e-mailed tips, and I think you should have them on audio, because I like to listen to podcasts while I’m working.


Thomas Eyde wrote at 1/30/2013 2:42 am:

The intro seemed to go in the wrong direction, but the overall article is good. It’s easy to miss the refactoring point, though.

This is what some people would call doing a spike. It’s meant for experimentation and learning, but is never supposed to end up in production.

I have two concerns:

1. Some people think it’s up to the customer / boss to decide when or whether to do refactoring. I have great problems with that, as this is a technical decision and hence our responsibility. Refactoring is an integrated part of writing code.

2. To refactor without tests is risky. When the spike is done and we know what to do, it would be more efficient to start over, write the tests, and implement TDD-style.

Fenn wrote at 1/30/2013 4:23 am:

I agree with the idea, prototypes are meant to show some points, then to be thrown away.
However, in the industry, you’re not often allowed to do that. You can explain that “yeah, it’s working, but it’s a bad piece that needs to be rewritten”, and non-technical people can answer “but I see it works, that’s great, we go on to the next step now”.
So, before even considering doing things that way, what you need to do is get allocated time to be able to follow this path – and explain that to your management, your sales people, and your clients. That can be a nightmare.
That’s when TDD can save your ass, you write code which is not in perfect state, as TDD is meant to lead to a lot of refactoring, but at least you can work with what you have.
Of course, it’s better to get things done, and “nothing done” is the worst state, but “not over-thinking” doesn’t mean you can’t afford to use a bit of methodology to avoid the ugliest things ;)

Steve Folly wrote at 1/30/2013 7:06 am:

Brilliant article. People can get obsessed with writing perfect code from the start, which makes progress to a finished solution painful. Understanding that “just hacking a solution” can be perfectly valid in some circumstances is important. However, once your customer sees a “working” solution, they may be unwilling to sponsor the refactoring exercise to tidy up the code. In my experience, customers only see the short term view that this solution works; they don’t care about the state of the code. In that case, you *must* allow time for refactoring in version 2 (and 3, and 4…) – and perhaps not in an obvious way, lest the customer see this as an unnecessary task.

Jose Junior wrote at 1/30/2013 7:33 am:

Lovely! You just took a stone off the top of my head!
Thank you for this post. ;]

Ian Thomas (@ianmthomasuk) wrote at 1/30/2013 7:38 am:

I agree, but I don’t think you can stress enough

“But this does NOT mean that developers can push bad code into a repository.”

What you’re describing is building quick, disposable prototypes to help understand the problem and potential solutions. As soon as that code gets committed with the intent of “cleaning it up when I have time”, it will almost certainly end up getting released and still be in use five years later, when you still haven’t found the time to clean it up.

Frode wrote at 1/30/2013 7:46 am:

Be equipped for the journey. To pick up the newspaper on a Sunday morning, slippers and a kimono are perfect. To cross the South Pole, you need some hundred pounds of equipment. Doing one trip with the equipment of the other is a very bad idea.

If the purpose of the journey changes, you have to go home and change equipment as well.

So, yes, for a prototype, go quick, go bad. For production, go good. For reusable modules, go perfect (i.e. never put anything on top of a faulty foundation).

Garry M wrote at 1/30/2013 9:05 am:

One responder hit the problem with this approach straight on the head. Just about every business user I’ve EVER dealt with, upon seeing a working dirty prototype, declared the development process over and moved us on to the next project. I have seen this a dozen or more times in my career (including with major Fortune 500 clients), and then have had to “make it work” with that prototype for the remainder of my time at that company.

In one case, the boss of a small IT consulting firm mandated a database done in Access for rapid prototyping – “We’ll put it into SQL Server after we demo it to [major multi-national client].” You guessed it. The client saw the web site working, paid her for hours done to date, and then looked baffled when their IT Department refused to install or support the product on their servers due to the Access back-end. We lost the contract, and my boss lost her best talent as we all left to go to a company that appreciated good coding practices from the start.

While I agree that throw-away proofs of concept and prototypes have a place in development – that place should be kept inside the productive area of the IT Department. As far as a Business User (or non-technical IT manager) should be concerned, upon completion of a down and dirty prototype or proof of concept, you “have created some internal code that proves what you’re asking for is going to work quite well, now we need to make that code work with the web site/product/whatever”. The moment a BizUser sees their idea running, they don’t care how dirty or malformed it is behind the scenes – they’re ready to use it and want it NOW.

Dion wrote at 1/30/2013 10:38 am:

Here is my thought.
For juniors – it might be OK to accept this.
For experts – very bad.

The title of the article should be rephrased as “When To Write Bad Code at my company”, since you have the option to recode (possibly refactor) your bad code. I have been working as a contractor/perm over decades, and this kind of implementation wasn’t accepted at any of the companies I worked for. I don’t think it will happen in the future either. As Ben wrote at 1/29/2013 9:29 am:

“Shouldn’t you just form a good plan first rather than hammering away at the keyboard looking for a solution? I guess if coding is part of your problem solving process, then whatever.”

Plan what you want to do before jumping into the code. Who knows – the entire project may be changed after you code it; they may even drop it.

giantism_strikes wrote at 1/30/2013 11:23 am:

I could not disagree more with this article. Bad code is bad code.

A large problem with software developers these days is that we tend to focus on implementation. Implementation should be the last thing you are concerned with. A good design that allows for swapping out implementations of a particular area should be the focus.

If you are not using a prototyping tool, then your solution should allow for swapping out of mock objects/services/data with their real implementations. The design of the system should remain pretty consistent as you begin your true implementations. This allows for implementations to be improved upon in the future without affecting the rest of the system.

Scott A. Tovey wrote at 1/30/2013 11:36 am:

It is not surprising that so many people have a real problem with this concept. TDD, for what it’s worth, is quite useless if you start out not knowing the first step in solving a problem.

TDD requires both the knowledge and experience of having tackled a given problem before, so that one knows pretty much what steps and parameters are necessary for the job. When tackling something you’ve not come across in the past, it is quite difficult to know exactly what steps to take.

For those who are such omniscient geniuses that they know everything about everything, such individuals have absolutely no excuse not to use TDD from the get-go on every single project, nor do they have an excuse for nothing getting done.

Why does it seem to be so hard for some people to understand that prototyping is used to find the parameters necessary to utilize TDD? And those who argue that employers and customers don’t allow refactoring also make the mistake of showing a proof of concept as a working model.

A proof of concept is just that – a proof of concept, not a production ready product. If your boss or customer has a problem understanding that fact, then it is your fault for not telling them from the get-go that they did not hire God to do the job.

If you cannot present a proof of concept without that dirty code becoming the production code due to the idiocy of those for whom you work, then utilize prototyping to gather the information needed to set up a proper TDD environment. That way, you see a working prototype and proof of concept, but your boss or customer sees an unfinished product that has yet to pass its tests. This process allows you to both prototype and refactor along the way.

Dan Sutton wrote at 1/30/2013 12:06 pm:

Sorry, but this article reads like a justification for not being bright enough to get it right. It’s not enough that something works: it also has to be maintainable and extensible, and the only way to achieve this is to write it properly.

It’s OK to be a mediocre programmer: it’s not OK to justify it and say that this is how it should be.

Shane wrote at 1/30/2013 1:01 pm:

@Dan As long as you don’t confuse popular design patterns and good code. If it disturbs you that not everyone uses the same paradigms, you should probably switch to Rails.

Jasmine (@jasmine2501) wrote at 1/30/2013 2:35 pm:

Anyone who’s worked in the real world of programming for a while knows that this is what we do. The comments from people saying this is making excuses or doing it wrong, are simply inexperienced viewpoints. It is impossible to do something perfectly the first time. IMPOSSIBLE, unless it’s trivial. If it’s a trivial solution then yes, pick the right pattern, write the correct code, and go with it… but we are talking about problems **for which there is no known solution**, and if you think people approach that situation and get it right the first time, you are simply showing your inexperience.

My second point is this: if your process isn’t being followed by the business, that’s YOUR fault. Fix it by documenting the process and making people work the process along with you, from step 1 to the end. They will understand eventually if you point it out. Simply say “deploying the prototype as if it was finished will result in {whatever}” and then FOLLOW-UP on that claim. That’s what most people don’t seem to do – you complain and whine when management wants to cause a problem in the future, but you don’t ever go back and show them you were right. If you do this, things will get better – but you need to be perfect with it, and you need to accept that a cultural change like that takes years. It is like parenting – you must provide a consistent process to the business people or they won’t learn it, and you need to make sure they are aware of the consequences of their decisions, GOOD and bad!

ANonymous Coward wrote at 1/30/2013 4:29 pm:

When to write bad code … like never?

IMO/IME it’s programmers for whom writing good code doesn’t come naturally who use various excuses to write bad code. Once you get used to writing your code properly from the first version, it’s faster and more reliable to just always write good code.

I never could understand the justification that under time/deadline pressure you have to write bad code, or give up unit testing. That’s precisely the right recipe for being late.

Dan Sutton wrote at 1/30/2013 4:55 pm:

No – I don’t care about that. As long as the code is logical and constructed properly – and uses a decent algorithm – then I’m fine with it…

Ahwan Radi (@twitter.com/ehwenrad) wrote at 1/31/2013 2:24 am:

Thank you. I have been there too, a lot. And I always chose “my application should work” over “my application should have beautiful code”.

giantism_strikes wrote at 1/31/2013 8:33 am:

@Jasmine – Your argument is bad and you should feel bad.

My background is this: BS in CS, MS in CS, and 7 years of programming being my only source of income. During this time I have written government tax software, written medical research software, worked with the FDA on creating digital models for medical research, and worked in the DoD. In a few weeks, I am switching to R&D of surgical equipment. The point is that I have had plenty of real world experience across a wide range of businesses.

A good programmer can look at a problem and see how design patterns can be used. “Just getting it done” leads to horrible code libraries, non-reusable code, and difficult maintenance.

Gates VP wrote at 2/1/2013 3:29 am:

You say the following: “… I started working – without a plan, without writing tests, without designing an architecture, and without really knowing how the component was going to end up…”

And then in the following sentence you say that you “got it working”.

If you had no tests, how did you know it worked?

Mario Gleichmann (@mariogleichmann) wrote at 2/1/2013 10:56 am:

There’s an ‘old’ saying – it goes something like this:

1st – make it work
2nd – make it right
3rd – make it fast

Scott A. Tovey wrote at 2/2/2013 4:11 pm:

@Gates VP

“If you had no tests, how did you know it worked?”

That’s the difference between those who are book smart, and those of us who can look at a problem and think our way through to a solution.

Not everything that is written on paper works. Not everything that is written on paper is accurate.

You can start working on a procedure that in your mind should work, but it doesn’t. How do you know that? Because you do have an idea of what the output should be. One almost always knows what the desired output is; one just does not always know exactly how to get there. That is what this article is about: a situation where a problem was not previously encountered and therefore none of the previously written code applied.

That’s when you just start writing code, one function at a time. It’s a step-by-step process to determine the solution to the problem. Planning can’t help you because you just don’t know where to start. And since you’re the only one on the project, you have no one with whom to have a back-and-forth discussion to get the ideas out.

I find it more difficult to try and plan out every nuance regarding a program. But if I sit down and just start coding, the process of doing it turns into a step-by-step, inspiration-after-inspiration resolution to the problem.

I had a class where I was supposed to do a UML and all that nifty planning you talk about. After trying to figure it out that way for way too long, I just sat down and did the work and then fished out the UML from there.

The mindset that I hear from so-called professional coders is no different from the mindset of those who criticized and mocked Henry Ford because of his ignorance of history. The man thought the Revolution happened in 1812.

OK, so the guy only had an 8th-grade education from the establishment’s point of view. As if that is more important than the progress he made with the automobile, creating the middle class by turning his employees into potential customers, and the many other things he accomplished.

Certainly the man had a lot of negatives, like his insistence on taking credit for other people’s work and ideas and the unfair and cruel way he treated his only son Edsel Ford. But the things the establishment criticized him for were not, in the grand scheme of things, all that important.

So the revolution happened in 1776. But seriously, are you going to keep a man in the poor house because he mistakenly thinks it occurred in 1812? Do you hire a programmer for his knowledge of history or his ability to code?

Gates VP wrote at 2/4/2013 3:03 pm:

@Scott Tovey

“One almost always knows what the desired output is…”

You nailed it. The OP talked about starting without a plan and without tests.

But if you know the outputs and (ostensibly) the inputs, then you do have a test. And if you know how the inputs are coming and you know where the outputs are going, then that is a plan.

So clearly, the whole premise here is flawed. The OP knew what he wanted out of this system. Sure, the underlying structures may have been crummy, but this is almost universally true. This is why UML is often just “crap”. The most you get out of UML is a reasonable tracking of the data that needs to exist at some point in time.

The first run of code is always about making that code “work” and pass all of the basic tests. Future runs of the code center around handling exceptions, improving performance or cleaning up the basic API to access that component.

I think the OP has some warped image of “the right way”. The premise that there is some “right way” involving tremendous amounts of planning and no code has been disproven for years.

We are always going to build software in these little increments of inputs and outputs. And those inputs/outputs are tests and they are plans. The code you write is a design and it is an architecture.

Scott A. Tovey wrote at 2/8/2013 12:43 am:

@Gates VP

Ahhh. So my natural tendency to run as if a pack of wild dogs were chasing me – whenever I hear new-fangled, ambiguous, nondescript terms that didn’t exist 10 years ago and are put forth as the greatest programming secret ever discovered – is a good thing?

It seems to me that Test Driven Development, a term I just recently started reading about – like in the last couple of months or so – should be a natural occurrence when one is debugging code and wondering why Al Gore won’t get his rhythm out of the way so the thing will work.

Sorry, that just popped into my head and shot out my fingers.
I couldn’t help it, the humor made me do it.

At any rate, several years of cruel hardship has taught me that those who speak of mysteries that they cannot explain plainly are either lying through their teeth or, or… No no, that’s it; they’re lying through their teeth.

So, until I get a good break down of what Test Driven Development is all about, a term by the way that was not spoken in any of the programming classes I took over the past few years, I will dismiss it as egotistical stupidity put forth by insecure programmers who have an uncontrollable urge to be seen as far more intelligent than they actually are.

Note: I don’t mind new terms if they clarify a theory or come with clarifying explanations. But it seems like every time I turn around there is this new term or phrase that is describing something that is already defined, and put forth as being better than the predecessor when it almost invariably turns out to not be any better than the predecessor, and brings into the mix more complications that tend to break good clean working code.

It’s one of the reasons I stopped pursuing programming. After several years of health issues, and getting chest pains whenever I’m stressed out, I just don’t need that kind of frustration.

Fenn wrote at 2/8/2013 3:45 pm:

@Scott A. Tovey:

No offense intended, but with your comment you’re the one who wants to look smart… and isn’t doing very well.

TDD is a well-defined methodology which is mostly, but not only, associated with agile methodologies. Though agile is a lot about management, TDD is all about writing software, in a very practical way. It’s been around in software engineering for years. If someone claims to be a software engineer (you didn’t, I know) without having at least heard about TDD and without having had the curiosity to check for himself, that’s very sad. It’s like pretending to be a computer enthusiast and never having heard about Linux…

That has nothing to do with insecure programmers: like design patterns, it’s a tool which solves some kinds of problems and has to be adapted to one’s situation. And nobody’s forced to use it. Of course, it’s not a magical thing that will ensure that everything will be perfect the first time! We all know there is no silver bullet.

The base concept is quite simple: write a unit test, make it pass, repeat.

That looks easy and unproductive at first glance, I know, but there is more. The first benefit is that you need to think through your tests and always be aware of what you want to achieve and what the edge cases are. When you code something, you start by stating what you want to do exactly, you constrain it, then you do it. It helps you not get lost, especially beginners, who tend to write code that almost makes coffee but in the end fails to meet the initial requirements and business rules.

While doing so, you must stay aware of mostly three things: DRY (don’t repeat yourself), SRP (the single responsibility principle), and dependencies. TDD is all about constant refactoring and abstracting. If one of your methods or objects grows too much, you stop, you break it up, and then you make all your tests green again. If you’re repeating yourself (and you will spot that easily), do the same. And if you encounter a dependency, abstract it away (e.g. behind an interface), mock it up, and ensure that you inject the dependency in a way that makes your test independent of the real implementation. That is one of the biggest benefits of TDD: it leads to a design with minimal coupling as you code. You’re almost coding and designing at the same time. ;)

To understand it better through practical examples, I’d recommend some reading, e.g. Erik Dietrich’s blog (examples are C# but should be understandable for any decent developer), especially the three-post mini-series “TDD for breaking problems apart” ( http://www.daedtech.com/tdd-for-breaking-problems-apart-3-finishing-up ), or Mark Seemann’s blog (yeah, still .NET – it’s currently my working environment, but with a bit of searching you’ll find examples in your favorite language easily, I think. TDD is well implanted in open source communities and amongst technical writers).

And for anybody who doesn’t think it would suit him or can’t find the benefits, I’m fine with that. I’m not a fundamentalist about it, so no need to troll me.
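The write-a-test, make-it-pass, mock-the-dependency loop described above can be sketched in a few lines. (A sketch in Python rather than PHP; the `greeting_for` function and its clock dependency are invented purely for illustration.)

```python
import unittest
from unittest import mock

def greeting_for(clock):
    """Return a greeting based on the injected clock dependency."""
    # The clock is passed in rather than read globally, so tests can mock it.
    return "Good morning" if clock.current_hour() < 12 else "Good afternoon"

class GreetingTest(unittest.TestCase):
    # Each test was written first (red), then greeting_for was made to pass
    # it (green); refactoring happens with the suite kept green.
    def test_morning(self):
        clock = mock.Mock()
        clock.current_hour.return_value = 9  # mocked: no real time involved
        self.assertEqual(greeting_for(clock), "Good morning")

    def test_afternoon(self):
        clock = mock.Mock()
        clock.current_hour.return_value = 15
        self.assertEqual(greeting_for(clock), "Good afternoon")

if __name__ == "__main__":
    unittest.main()
```

Because the dependency is injected and mocked, the tests stay independent of any real implementation – the low-coupling benefit the comment describes.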

Fenn wrote at 2/8/2013 3:52 pm:

Ps: Forgot to provide the URL for Mark Seemann’s blog: http://blog.ploeh.dk/
And sorry for the typos in the previous comment (brrr, like “your” instead of “you’re” – I made my own eyes bleed); it’s late and I had a tough week, and I have two software deliveries to prepare for next Tuesday ^^”

ChadF wrote at 2/17/2013 2:09 am:

If something is written as a prototype (hence “prototype”, not “version 1.0”), then by definition it should be considered throwaway code before it is even started. If some or most of it happens to be well designed from the start (perhaps it was a variation of something specific the programmer had past experience with), then that’s great (but don’t assume that from the beginning). And if management is even close to worth the [probably overpaid] salary they get, then they will account for this and allocate time and resources accordingly. It would be illogical even from management’s/the company’s point of view to waste valuable time having the developers exhaustively plan out a project in the proof of concept stage for something that may not even be accepted. After the contract is awarded, project approved, or whatever the prototype was meant to justify, THEN all those extra steps should be taken (and with luck, the lessons learned building the “quick and dirty” prototype can go toward creating a well designed framework for the real thing).


Wow.. I detect a hint of egotism from your comments, including the one where you rattle off all those fancy (yet ultimately meaningless) degrees. I’ve seen plenty of programmers fresh out of college who weren’t of any significant programming value until they had some experience in the real world (and not that contrived classroom stuff). I don’t care if someone has three PhDs.. all that truly proves is that they are good at school; whether they are any good beyond that depends on the individual.

I too have worked as a developer for the government.. and in that time I have seen the existence of monolithic “projects to replace them all” (too Lord of the Rings-ee?) in an attempt to get rid of all the [working] legacy systems. But all they really did was spend _years_ trying to “design the universal framework”, while taking money from the budgets of projects actually used, until eventually its budget was cut too, and nothing really came out of it (except maybe a few prototypes that only worked in a limited scope). And I heard they had some damn smart people working on that project.. too bad their whole project implementation had been doomed from the start. I’ve also seen the same unrealistic “one pass” coding expectations when dealing with contracting companies.. where the contractors are only allowed to work on things that have been approved by the contract managers (due to legal requirements) and were expected to get their tasks done by certain deadlines with no allocation for going back to fix/refactor all the “just make it work” code. Since the government was paying for work to be done, the company didn’t want to eat the costs of having to pay their employees to do code cleanup – only that the next cycle of tasks was assigned for the next deadline (so the company could get their next batch of billable hours paid by the government). This basically continues until portions of the project become so unmaintainable, due to all the patchwork on patchwork over the years, that a major rewrite HAS to be done just to be able to [realistically] incorporate any new requirements. Luckily, I wasn’t one of the contractors and could write my code without one hand [always] tied behind my back.

7 years of experience, huh? Let me know when you’ve had 20+ years and have had to deal with the futility of some of these work environments (and, as Dilbert would put it, “the numbing effect”).

giantism_strikes wrote at 2/18/2013 3:54 pm:


No egotism, just a jerk. The comment was that anyone with any experience knows better. The point of listing my credentials was to show that I do have experience. If you think 7 years in programming is not any real experience, then I feel sorry for how long it took you to grow your skills. I’ve been in a variety of environments, including absolute crap where we practiced Defect-Driven Design (the only way we learned the requirements was when someone filed a defect about missing functionality). The point is that bad code is bad code. If you cannot abstract what needs to happen, then you have to rewrite the same code over and over and over and over…

ChadF wrote at 2/18/2013 9:29 pm:


Yes, they [should] know better, but they also know reality. If a developer is expected to get something done by (or close to) a given deadline, then they have to do what is required, including writing less than ideal code, if keeping their job and/or not being passed over for promotion is a concern.

At around 7 years of experience, I expect I was still under the misconception (or delusion) that ideal environments are realistic [to exist or be used]. But trying to do things the “right way” (i.e. avoid unverified assumptions, think ahead, have a clear plan/direction) while fighting to improve the system eventually wears one out. Sure, you can continue to do your own best despite everyone and everything else, but eventually you have to accept that you can’t [always] force the system to improve (unless you create your own startup company or something). The stress of “fighting [too many of] the good fights” is not worth it.

And even experienced developers must accept that they don’t know everything [in their field], and that when starting something radically different from all their past experience, some [blind] experimentation and trial and error is required to develop a valid understanding. One cannot create a proper design [up front] without understanding the domain it is for. So the choice is either to do a lot of reading (over a long period) and still have only “classroom experience” to show for it (all theory, little practice), or to dive in, experiment, and learn much more quickly (with a clearer understanding). In an ideal world developers would be given adequate time to experiment and learn independent of the “real work”, but reality (the employer’s goals, budget constraints, clueless management) often makes it “on the job” learning instead.

In all, what I took from the original article was “don’t write garbage code for real products if you can avoid it (prototypes not being a real product), but also know your limitations and practicality”.

And for the record.. as early as 2-3 years of [salaried] experience, I was probably better than most of my co-workers with 5, 10, or 15 (or more) years of experience. I was terrible at a lot of things (so not too much ego there), but _very_ good at programming, as it is mostly intuition for me.

Oh, and the other reason I don’t blindly put value on someone “just because” they have some degree: while formalized education is a great way to obtain a large amount of [good] information quickly, it also means that knowledge was spoon-fed to the student within the mindset (or “box”) of the teacher/school/whatever. In cases of pure, unbiased experimentation, one may make many more mistakes (say that five times fast), but in the end [can] have a better appreciation for the subtleties of what was learned and think outside that “box”.

Hikari wrote at 3/8/2013 11:56 pm:

I see your point. And for sure, if the alternative is getting nothing, having anything is better.

But let’s be serious about this serious matter. Software engineering is rarely about developing something until it’s finished, delivering it, and then forgetting about it along with its users. We, or another engineer, will need to change and maintain it.

It’s okay to have something that doesn’t follow best practices, standards, and patterns, if that’s the best we can do NOW. But ASAP we MUST improve its quality, and by that I mean really DO it.

And there are some things we can do, even without enough time or knowledge, that make it easier to improve software quality after it’s been built, and that don’t increase development time:

* Use design patterns, always! If you’re going to access some data source, instead of scattering SQL everywhere, at least create a DAO class and put all data access code in it as you go.
* Use encapsulation! Really, keep responsibilities in their place and spend a minimum of time thinking about each operation’s interface. Internal code can be bad, but if the interfaces are good, later you’ll be able to refactor as needed without breaking anything else.
* Instead of building one “big” piece of software, create a separate project and try out small pieces of features. As you get a feel for how each one works, remake it inside the main project. Or create a Subversion branch, mess with the code, see what has changed, then create another branch and remake it. This way you won’t mix bad code into good, stable code, but will still get the advantages of the repository.
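To make the first two points concrete, here is a minimal sketch of the DAO-plus-encapsulation idea in PHP: all SQL lives inside one class, and the rest of the application only ever sees its methods. The `UserDao` class, the `users` table, and the use of SQLite are my own illustrative assumptions, not anything from Hikari’s project.

```php
<?php
// Hypothetical DAO: every bit of SQL is confined to this class,
// so callers never touch the database directly. If the internals
// are ugly, they can be refactored later without breaking callers.
class UserDao
{
    private PDO $pdo;

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
        // Prototype shortcut: create the table on the fly.
        $this->pdo->exec(
            'CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)'
        );
    }

    public function save(string $name): int
    {
        $stmt = $this->pdo->prepare('INSERT INTO users (name) VALUES (:name)');
        $stmt->execute([':name' => $name]);
        return (int) $this->pdo->lastInsertId();
    }

    public function findName(int $id): ?string
    {
        $stmt = $this->pdo->prepare('SELECT name FROM users WHERE id = :id');
        $stmt->execute([':id' => $id]);
        $name = $stmt->fetchColumn();
        return $name === false ? null : $name;
    }
}

// Usage: the rest of the code sees only save()/findName(), never SQL.
$dao = new UserDao(new PDO('sqlite::memory:'));
$id  = $dao->save('Ada');
echo $dao->findName($id), "\n"; // prints "Ada"
```

Even in throwaway code, this costs almost nothing up front: the interface (`save`, `findName`) is the part you think about for a minute, and the messy internals stay free to change.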

If you really don’t know how to do it, develop a prototype instead of pretending to develop production-quality software. Present it to the user/customer as a demonstration, and explain that you didn’t have enough time, that you made it just to have something to show, and that it is NOT good.

Developing a prototype takes less effort than developing quality software, and even less than TRYING and failing to do so. Prototyping really reduces the resources consumed, and helps us better understand how the real thing should be!

In my last prototype I created a very big form with hundreds of fields. It didn’t have any DTO, the HTML was entirely static, and it was really unusable. But it was enough to present to the customer, to let them see it working and get a feel for how it could be, and for me to test a form layout that would fit all those fields and try out a validation scheme.

And now I’m remaking it: adding PHP, settling on field names, reading them from POST, setting them on a DTO, and putting them back into the input values. And the cool thing is that most of the HTML markup and all of the CSS styles are being reused; they were good enough!
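The POST-to-DTO round trip described above can be sketched roughly like this. The `ContactDto` class, the field names, and the `renderField()` helper are invented for illustration; Hikari’s actual form has hundreds of fields, not two.

```php
<?php
// Hypothetical DTO: submitted values are pulled out of $_POST once,
// into one typed object, instead of being read ad hoc all over the page.
class ContactDto
{
    public string $name = '';
    public string $email = '';

    public static function fromArray(array $post): self
    {
        $dto = new self();
        $dto->name  = trim($post['name'] ?? '');
        $dto->email = trim($post['email'] ?? '');
        return $dto;
    }
}

// Put a submitted value back into its input, escaped so the existing
// static HTML/CSS can be reused safely.
function renderField(string $field, string $value): string
{
    return sprintf(
        '<input name="%s" value="%s">',
        htmlspecialchars($field, ENT_QUOTES),
        htmlspecialchars($value, ENT_QUOTES)
    );
}

// In the real page this would be ContactDto::fromArray($_POST).
$dto = ContactDto::fromArray(['name' => 'Ada', 'email' => 'ada@example.com']);
echo renderField('name', $dto->name), "\n"; // prints <input name="name" value="Ada">
```

The point of the DTO step is exactly what the comment says: the prototype’s markup survives unchanged, and only the value-plumbing is new.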


Copyright © 2024 by Brandon Savage. All rights reserved.