How does the dead code elimination work when using args to Printf? If static strings are fed into an empty function, I can imagine it does nothing. However, I have less of a firm grip on this:
dlg.Printf("the thing.Something is %+v", thing.Something())
since surely golang will still call Something, and still call Printf
And don't misunderstand me: it's a grave pet peeve of mine to do action inside the args to Printf but I can tell you it's a very common [anti-]pattern so I'd presume your users will want a story for how that interacts with the "disappears completely" claim
In short: You're right - Go will still evaluate argument expressions.
In hindsight, I should've made that more obvious; not calling this out initially was an expert blind spot on my part, not an intent to mislead.
What disappears is the logging work (formatting, interface/slice plumbing, I/O) and the call itself.
Go cannot eliminate calls to functions inside Printf's arguments because they could produce side effects.
Eliminating function calls like this would be very expensive to do and would clash with Go's focus on fast compilation times.
I'll add a section to the README that explains this.
Thanks for pointing it out.
This is a thing I had a little "are you sure?" itch in my brain about, but you've expressed it so clearly in four lines of comment, and this is me showing gratitude for the comment and the untangling of my brain.
ETA: Presumably if there was at least an option for passing a closure that would provide a way to address this? Since the closure will only uselessly compile, and not actually uselessly run?
I usually leave them as uncommitted changes, then git checkout to undo them. And using something like `git add -p` to skip Printfs if I need to commit other changes.
Just one thought: I don't think your stack traces need to include the lines inside Printf itself and it adds to the noise a bit. Maybe just go up to the line where the call to Printf is made?
It's a bit finicky to do so, but I'm currently in an exploratory phase of providing an even more fine-grained stack tracing feature, and filtering out dlg itself is definitely on my todo list.
it doesn't appear to truly be zero cost if you log variables that can't be eliminated. The only way (I believe) to implement zero cost is some type of macro system which go does not support.
> it doesn't appear to truly be zero cost if you log variables that can't be eliminated.
I'd say it is fair to call it zero cost if the costs you are seeing are due to the way you are using it. If the values being logged are values you are already computing and storing, constants, or some mix of the two (concatenated via its Printf function), then by my understanding (caveat: I've never actually used Go myself) all the logging code should be stripped out as dead code at the link stage.
Obviously if you are performing extra computations with intermediate values to produce the log messages, your code there might produce something that is not reliably detected as eliminable dead code so there would be cost there.
> The only way (I believe) to implement zero cost is some type of macro system which go does not support.
That looks to be effectively what it is doing, just at the link stage instead of in a preprocessor. Where C & friends would drop the code inside #ifdef when the relevant value is not set (so it won't be compiled at all), this should throw it out later in the process if DLG_STACKTRACE isn't set at compile time.
So there will always be a compile-time cost (unlike with a preprocessor macro, something will be produced then thrown away and the analysis of what to throw away will take longer with more code present), but not a run-time cost if the logging is not enabled, assuming you don't have any logic in the trace lines beside the calls to this module.
The problem is that calling anything zero cost is inherently wrong. Nothing in life has no cost. I know I am being pedantic, but I think a more accurate description is "zero production runtime cost," which is also how I interpret Rust's "zero cost abstraction." In that case too, I find the terminology grating because there are very obviously costs to using Rust and its abstractions.
One obvious cost to code that is removed in production is that there is now a divergence between what your source code says it does and what it actually does. I now need to keep in mind the context the program is running in. There is a cost to that. It might be worth it, but it isn't zero.
Yeah I want a log package where the log statements that don't log also don't have their arguments evaluated. Half of my gc pressure is little strings created to call the log package that don't get logged.
So what you want is slog, found in the standard library? The doc.go file found in the package even goes into detail about what you (seem to) describe and how it avoids that problem.
Basically it says to pass in objects, not prepared strings, so the formatted output is only computed if the message is actually logged. That object can be a struct implementing the LogValuer interface to do any required work/produce formatted output.
> And passing pointers to strings rather than strings, hmmm.
Are you referring to the "URL" example? That isn't a case of passing pointers to strings, that is passing a URL (fmt.Stringer) where the String method has a pointer receiver.
To demonstrate why the pointer would be needed in that case, consider:
Which is a relatively recent addition to the Go standard library, based on the collective experience with logging over the years. Compare with the original log standard library: https://pkg.go.dev/log and you can see the changes in even just the time since Go was developed in logging best practices.
Slog, once you understand the interfaces involved in stuff, has performed most every trick I could desire, up to and including things like subloggers with their own priorities for outputs that override the global settings.
I'd like to see some slog integration in this package, though I say that without a design in mind. As it stands, this is exactly what it says on the tin; debugging-only output that you fully intend to just disappear out of the production binary.
The trick it uses is a fairly straightforward use of Go's tag-based compilation features, which I do not mean as a criticism, but to point out that any Go programmer who likes the looks of this but wants to adapt it to their own task will find it easy to simply take the technique directly and adapt it in not very many lines of code.
I do write tests but I'm not a fan of aiming for 100% coverage because it's incredibly tedious to achieve, and the gains in my experience are marginal.
That being said, even with 100% coverage your code may still contain bugs.
If you've never had to debug code with extensive tests then hats off to you.
When you write a test you see the cute little "debug test" button show up beside it in your editor. If you press it, your logpoints spill the logging information you seek without having to modify the code.
I expect that is what the parent is really asking about: What does this meaningfully offer over the "built-in" debug logging?
Crazy that I get downvoted for a perfectly valid question, go HN!
I'm not saying that tests are perfect, but if you do find a bug, you write a new test and fix it. You don't add print statements that you expect to keep around any longer than it takes to write the test and fix the code.
Maybe this product is for people who don't write tests, but even in this codebase, there is a pretty well done set of tests and zero dogfood usage that I noticed.
> but if you do find a bug, you write a new test and fix it.
Maybe there's a misunderstanding here but this library is not meant to replace tests.
It's for initially finding the bug for which you then write a test case (or not).
If you're not sure how logging debug information can help you find bugs then I'm not sure what to say. Especially if you've been programming for four decades, I would think it would be obvious how logging information can help you find logic or data errors in your code/inputs/outputs.
I think you're getting downvoted because it's not a valid question - debug logging and unit testing are two separate things, and the OP didn't say anywhere that he prefers this to unit tests. Your question presupposes a dichotomy that doesn't exist.
This is only mildly offtopic, but I wonder if the following is true:
Software developers who...
1. Experienced actual instruction on software development;
2. Learned to write software in C/C++/FORTRAN/Pascal/whatever; or
3. Spent a lot of time in more niche situations like system programming, embedded programming, enterprise systems, etc.
...are familiar with using debuggers as part of their workflow to inspect internal state and diagnose. In contrast, developers who...
1. Were entirely self-taught;
2. Grew up on interpreted languages (JS, Python, PHP, Perl, etc); and
3. Generally tend to write smaller, self-contained projects
...tend to just add debug logging and re-run the program.
I definitely have used debuggers in the past, but as someone who almost exclusively has written in interpreted languages (or languages where the write-compile-run loop is fast, e.g. Golang), my habit is to add debug logging to more places to validate internal state as the program runs, rather than attaching a debugger and inspecting specific things manually.
Either way, I think it's two separate approaches and I think in different circumstances there are definitely benefits to one or the other. In most of my Python scripts it's faster to add a `print(f"variable is {variable}")` than it would be to attach a debugger; also for a small script that's only going to run a few times, or run very occasionally, writing unit tests is often overkill because often the entire script is effectively a unit.
I don’t know if you’re trying to make assumptions about me or you, but I am those 3 things.
It is definitely different approaches and both can achieve the same goals, but having started with your approach and then learned IDEs, debugging, and writing tests, I can from my own experience say that tests and debuggers are the way to go when it comes to building long-term stable applications, especially in shared environments.
As for speed to attach a debugger? Lol, it is one button click in a proper IDE and the speed and utility of figuring out the solution is faster. What I am hearing from you is that you’re unwilling to put the time into learning/using the tools. Or maybe you’re just stuck in your own headspace, many developers get that way, it is why we have editor wars and whatnot.
Use something like stacked git[1], and then it's just one "stg pop" and poof, they're gone.
[1] https://stacked-git.github.io/
I rarely use branches, my local work area is about 50-100 commits on top of master.
I pick/reorder/edit with “git rebase -i”
I’ll prefix the commit summary with a word that helps me keep straight debug changes from ones that will go toward a MR.
There is way too much going on in this code whose feature essentially boils down to
Writing a stack trace is fully unrelated to Printf(), and should be separated.
You're absolutely right, it annoys me as well.
https://cs.opensource.google/go/x/exp/+/645b1fa8:slog/doc.go...
https://github.com/negrel/assert
My favored approach is a context aware level logger with structured logs going to an aggregator like Splunk.
Because it is context aware, custom filtering logic is available in a middleware.
"100% coverage" is just a measure of line coverage and is highly useless to reason about.
For testing to become meaningful, you should have test data covering way more than that for the relevant parts.
Maybe I'm just old, maybe it's just a different approach, but I built dlg for myself because it fits my workflow.
> What does this meaningfully offer over the "built-in" debug logging?
For you, it most likely doesn't offer anything meaningful.
For devs who use fmt.Printf a lot, it maybe does.
If you write tests, you don't need a specialized library for debug logging.
Run the test, and set a breakpoint in your debugger.