@requiem Maybe it gives people the illusion of understanding, without the discomfort of genuine understanding.
@requiem Following fads and a desire to implement best practices, but without the requisite capability to really implement them? "We're supposed to have tests. I have written tests. The tests pass. 👍" without any other metrics to verify test coverage or whether the tests make sense.
@requiem It really depends on the context. Why are you fixing tests in the first place?
Is it because you made a change to production code before writing a test to confirm that change? If so, let's learn the lesson here: always write your tests first, make sure they fail, alter production code to make those tests pass, then remove (as in, just delete them) the old tests that are no longer relevant. This ensures good test coverage throughout the process.
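A minimal Rust sketch of that red/green loop, with entirely hypothetical names (nothing here comes from a real codebase):

```rust
// Step 1: write the tests first and run them against a stub that fails.
// Step 2: fill in the production code until they pass.
fn parse_port(s: &str) -> Option<u16> {
    s.trim().parse().ok()
}

#[cfg(test)]
mod tests {
    use super::parse_port;

    #[test]
    fn accepts_plain_numbers() {
        assert_eq!(parse_port(" 8080 "), Some(8080));
    }

    #[test]
    fn rejects_garbage() {
        assert_eq!(parse_port("not a port"), None);
    }

    // Step 3: a test pinning behaviour the module no longer promises
    // gets deleted outright, not contorted until it passes again.
}
```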
Remember that tests are a scaffolding. They're not directly contributing to the intended deliverables, so they are in some sense completely disposable.
On the other hand, did you make a change to production code that is intended to preserve the current interface? If so, then why are you fixing the tests? The tests are there to ensure the interface integrity. It is the production code that is wrong if the invariants of the interface are not being held up.
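To make that concrete, here's a hedged sketch (a hypothetical Counter type, invented for illustration) of a test that encodes an interface invariant. If a later change makes it fail, that failure is the test doing its job:

```rust
// Hypothetical interface: a counter that promises it never goes negative.
pub struct Counter {
    value: i64,
}

impl Counter {
    pub fn new() -> Self {
        Counter { value: 0 }
    }

    pub fn increment(&mut self) {
        self.value += 1;
    }

    // Invariant: decrementing saturates at zero rather than going negative.
    pub fn decrement(&mut self) {
        self.value = (self.value - 1).max(0);
    }

    pub fn value(&self) -> i64 {
        self.value
    }
}

#[cfg(test)]
mod tests {
    use super::Counter;

    // This test encodes the interface invariant. If a refactor breaks it,
    // it's the production code that needs fixing, not the test.
    #[test]
    fn decrement_never_goes_below_zero() {
        let mut c = Counter::new();
        c.decrement();
        assert_eq!(c.value(), 0);
    }
}
```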
Too many people jump to the conclusion that unit testing and integration testing are bunk because they don't think critically about why they're maintaining a test suite to begin with.
@requiem I've fallen into that trap too. Even blogged about it: https://felix.plesoianu.ro/blog/unit-testing-and-code-clarity.html
@cstanhope @requiem I cry every time I see someone associate testing with fads, especially in this era where people are legitimately using software that impacts people's lives in a more material manner, even if not in a life-dependent way.
@vertigo @requiem Sorry, I didn't mean to imply testing is only a fad, just that there are aspects of faddishness surrounding it that people get caught up in. I certainly use tests and testing extensively, especially with FPGA work. But doing testing right can be much more difficult and resource-intensive than many orgs are willing to admit.
@cstanhope @requiem I get that. I certainly think that people do unit testing and integration testing wrong, especially if it's imposed from on high by upper management.
However, I find code reviews overwhelmingly more resource-intensive than unit testing, by at least an order of magnitude.
Case in point: my UART driver at work took a year to finally land, and even then, with some protest from my code reviewer. I actually had to get management approval to land it, overriding my reviewer, because we had a deadline to meet. My reviewer wanted perfection; I wanted something that actually worked.
(To be fair to my reviewer, I was still learning Rust, and there was much I didn't get right about how to design the driver at first. I learned a lot through those exchanges.)
Regarding heavyweight testing practices, I save those for integration tests. Unit testing is intended to be lightweight and, if I can be allowed to say it, almost fluffy in nature. If unit testing becomes painful, there's definitely something wrong. If the interface to a module needs to change, then change the tests or write new ones accordingly, and don't be afraid to throw away old, irrelevant tests. It's your version control system's job to remember that those tests were important once upon a time. 😏
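For what "fluffy" means in practice, a hedged sketch (hypothetical helper, invented for illustration): a unit test that costs seconds to write and nothing to throw away later:

```rust
// Hypothetical helper: a one-line XOR checksum. A light unit test pins
// down its obvious properties and nothing more.
fn xor_checksum(bytes: &[u8]) -> u8 {
    bytes.iter().fold(0, |acc, b| acc ^ b)
}

#[cfg(test)]
mod tests {
    use super::xor_checksum;

    #[test]
    fn empty_slice_is_zero() {
        assert_eq!(xor_checksum(&[]), 0);
    }

    #[test]
    fn duplicate_bytes_cancel_out() {
        // XOR is its own inverse, so the repeated 0xAB contributes nothing.
        assert_eq!(xor_checksum(&[0xAB, 0x0F, 0xAB]), 0x0F);
    }
}
```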
@requiem @cstanhope web dev is where test-driven development actually got its start.
@requiem @vertigo @cstanhope what we recognise as test-driven development in computer software is virtually unheard of in other engineering disciplines. Processes like HAZOP and squad checks are more like intensive, comprehensive code reviews.
HAZOP is kind of equivalent to devising the tests, I guess, but the process is waaaay more involved than how unit tests seem to be created. For example, in the safeguarding PLCs of a process control system, every single potential execution path is enumerated and examined, even if it seems nonsensical.
Typical computer software is too complex to expect 100% coverage, I guess, but the unit tests I see sometimes seem to miss wide swaths of potential scenarios because those scenarios don't fit the expected use case.
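As a toy nod to that kind of exhaustiveness (hypothetical function, purely illustrative): when the input space is small enough, a test really can enumerate all of it, which is roughly the discipline the PLC process demands and typical unit suites skip:

```rust
// Hypothetical function: doubles a byte, capping at 255 instead of wrapping.
fn saturating_double(x: u8) -> u8 {
    x.saturating_mul(2)
}

#[cfg(test)]
mod tests {
    use super::saturating_double;

    // All 256 inputs are enumerated, nonsensical-looking ones included,
    // rather than just the handful that fit an expected use case.
    #[test]
    fn every_u8_input_is_checked() {
        for x in 0u8..=255 {
            let expected = (x as u16 * 2).min(255) as u8;
            assert_eq!(saturating_double(x), expected);
        }
    }
}
```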