Refactoring gets really bad reviews, but from where I’m sitting as a hobby programmer in relative ignorance it seems like it should be easier, because you could potentially reuse a lot of code. Can someone break it down for me?
I’m thinking of a situation where the code is ugly but still legible here. I completely understand that actual reverse engineering is harder than coding on a blank slate.
Rebuild from scratch gets a bad reputation sometimes because it’s the go-to response of a junior programmer with a little experience. They know the system could be done better, and it seems like the fastest way to get there is to throw out everything.
What often happens next is the realization that the existing system was handling far more edge cases than it initially appeared. You often discover these edge cases when the new system is deployed and someone complains about their use case breaking. As you fix each one, the new system starts to look worse than the old one while supporting half its features.
This often leads people to prefer refactors over rewrites, but refactors can take a lot longer than expected and never quite shed what made the old system bad. Budget cuts can leave the whole project in a halfway state that's worse than if it had been left alone.
There are no easy answers, and the industry has not solved this problem.
The reverse is also sometimes true, and that's when a rewrite is justifiable.
I've worked with many systems that piled up a ton of edge-case handling for things that are no longer possible, which makes the code way harder to follow than it should be.
I've had successful rewrites that used less than a tenth of the code, delivered more features, and were significantly more reliable, while completely eliminating many of the edge cases by design.
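As a made-up illustration of the kind of dead edge-case handling I mean (the order/pricing names and "retired importer" backstory are hypothetical):

```python
from dataclasses import dataclass
from typing import List, Optional, Union

@dataclass
class Item:
    price: Union[str, float]  # old "v1" payloads stored cents as a string

@dataclass
class Order:
    items: Optional[List[Item]]  # a long-retired importer could leave this as None

def total_price_old(order: Order) -> float:
    # Guard for the retired importer: nothing produces None anymore,
    # but the branch (and its tests) live on.
    if order.items is None:
        return 0.0
    total = 0.0
    for item in order.items:
        # Branch for the retired string-in-cents format.
        if isinstance(item.price, str):
            total += int(item.price) / 100
        else:
            total += item.price
    return total

def total_price_new(order: Order) -> float:
    # With a single validated input format guaranteed upstream,
    # the same calculation collapses to one line.
    return sum(item.price for item in order.items)
```

Multiply that pattern across every reader in the codebase and the impossible cases end up dominating the code you actually have to maintain.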
My current team has a great solution to this, which is to rebuild in parallel. Build the new system alongside the old one, including the reporting and integrations. You'll find the edge cases pretty quickly.
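A minimal sketch of what that parallel run can look like, assuming both the legacy and rewritten systems can be wrapped as plain functions over the same request (the names here are made up for illustration):

```python
import logging

logger = logging.getLogger("parallel_run")

def shadow_compare(old_impl, new_impl):
    """Serve the old implementation's result, run the new one on the
    same input, and log any divergence between the two."""
    def handler(request):
        old_result = old_impl(request)
        try:
            new_result = new_impl(request)
            if new_result != old_result:
                # Each mismatch is an edge case the rewrite hasn't covered yet.
                logger.warning("divergence for %r: old=%r new=%r",
                               request, old_result, new_result)
        except Exception:
            logger.exception("new system failed for %r", request)
        # The old system stays authoritative until the divergence log goes quiet.
        return old_result
    return handler

# Usage (hypothetical functions): handle = shadow_compare(legacy_pricing, rewritten_pricing)
```

The point is that the old system keeps serving traffic while the new one is exercised against real inputs, so the edge cases surface as log entries instead of user complaints.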