
Fix broken references

This commit is contained in:
Armin
2021-03-11 12:40:37 +01:00
parent ac8cf39b91
commit f7902eebee
4 changed files with 6 additions and 6 deletions

View File

@@ -16,7 +16,7 @@ go run . \
The benchmark works by spawning a new child process for the given number of `-runs` and every unique combination of parameters. The child reports its results to the parent process, which then combines all the results into a CSV file. The hope is that using a new child process for every config/run prevents scheduler, GC and other runtime state from building up and becoming a source of error.
-Workloads are defined in the [workloads.go](./workloads.go) file. For now the workloads are designed to be **pathological**, i.e. they try to show the worst performance impact the profiler might have on applications that are not doing anything useful other than stressing the profiler. The numbers are not intended to scare you away from profiling in production, but to guide you towards universally **safe profiling rates** as a starting point.
+Workloads are defined in the [workloads_chan.go](./workload_chan.go) and [workloads_mutex.go](./workload_mutex.go) files. For now the workloads are designed to be **pathological**, i.e. they try to show the worst performance impact the profiler might have on applications that are not doing anything useful other than stressing the profiler. The numbers are not intended to scare you away from profiling in production, but to guide you towards universally **safe profiling rates** as a starting point.
The CSV files are visualized using the [analysis.ipynb](./analysis.ipynb) notebook that's included in this directory.
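As a rough illustration of the harness structure described above (spawn a fresh child process per run, collect the children's results into CSV), here is a minimal sketch. It is not the actual harness from this directory; the `-child` flag and the printed metric are invented for the example:

```go
package main

import (
	"encoding/csv"
	"flag"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	child := flag.Bool("child", false, "run one workload iteration and print its result")
	runs := flag.Int("runs", 3, "number of child processes to spawn")
	flag.Parse()

	if *child {
		// Placeholder workload: a real harness would run the chan/mutex
		// workloads here and report a measured metric instead.
		fmt.Println("123.45")
		return
	}

	w := csv.NewWriter(os.Stdout)
	defer w.Flush()
	w.Write([]string{"run", "result"})

	for i := 0; i < *runs; i++ {
		// A fresh child process per run keeps scheduler, GC and other
		// runtime state from one run from leaking into the next.
		out, err := exec.Command(os.Args[0], "-child").Output()
		if err != nil {
			panic(err)
		}
		w.Write([]string{fmt.Sprint(i), strings.TrimSpace(string(out))})
	}
}
```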

View File

@@ -74,7 +74,7 @@ Block durations are aggregated over the lifetime of the program (while the profi
```
pprof.Lookup("block").WriteTo(myFile, 0)
```
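For context, a minimal end-to-end sketch of enabling the block profiler and writing such a snapshot might look like the following; the file name and the 100ms channel wait are arbitrary choices for the example:

```go
package main

import (
	"os"
	"runtime"
	"runtime/pprof"
	"time"
)

func main() {
	// Record every blocking event (rate 1). Real applications should use a
	// much higher rate to keep the overhead negligible.
	runtime.SetBlockProfileRate(1)

	// Generate a blocking event to record: wait on a channel for ~100ms.
	ch := make(chan struct{})
	go func() { time.Sleep(100 * time.Millisecond); close(ch) }()
	<-ch

	// Write the pprof-formatted block profile to a file.
	f, err := os.Create("block.pb.gz")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	pprof.Lookup("block").WriteTo(f, 0)
}
```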
-Alternatively you can use [github.com/pkg/profile](https://pkg.go.dev/github.com/pkg/profile) for convenience, or [net/http/pprof](net/http/pprof) to expose profiling via HTTP, or use a [continuous profiler](https://www.datadoghq.com/product/code-profiling/) to collect the data automatically in production.
+Alternatively you can use [github.com/pkg/profile](https://pkg.go.dev/github.com/pkg/profile) for convenience, or [net/http/pprof](https://golang.org/pkg/net/http/pprof/) to expose profiling via HTTP, or use a [continuous profiler](https://www.datadoghq.com/product/code-profiling/) to collect the data automatically in production.
Last but not least you can use the [`runtime.BlockProfile`](https://golang.org/pkg/runtime/#BlockProfile) API to get the same information in a structured format.
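A small sketch of what using `runtime.BlockProfile` directly might look like; the call only succeeds once the slice is large enough to hold all records, hence the grow-and-retry loop. The mutex contention at the top is just there so the example has something to report:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

func main() {
	runtime.SetBlockProfileRate(1)

	// Produce a little mutex contention so the profile is not empty.
	var mu sync.Mutex
	mu.Lock()
	go func() { time.Sleep(50 * time.Millisecond); mu.Unlock() }()
	mu.Lock()

	// BlockProfile returns ok=false if the slice is too small, so grow it
	// until all records fit.
	records := make([]runtime.BlockProfileRecord, 64)
	for {
		n, ok := runtime.BlockProfile(records)
		if ok {
			records = records[:n]
			break
		}
		records = make([]runtime.BlockProfileRecord, 2*len(records))
	}

	for _, r := range records {
		fmt.Printf("count=%d cycles=%d stack=%v\n", r.Count, r.Cycles, r.Stack())
	}
}
```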
@@ -123,7 +123,7 @@ Anyway, what does all of this mean in terms of overhead for your application? It
That being said, the benchmark results below (see [Methodology](./bench/)) should give you an idea of the **theoretical worst case** overhead block profiling could have. The graph `chan(cap=0)` shows that setting `blockprofilerate` from `1` to `1000` on a [workload](./bench/workload_chan.go) that consists entirely of sending tiny messages across unbuffered channels decreases throughput significantly. Using a buffered channel as in graph `chan(cap=128)` greatly reduces the problem, to the point that it probably won't matter for real applications that don't spend all of their time on channel communication overheads.
-It's also interesting to note that I was unable to see significant overheads for [`mutex`](.bench/workload_mutex.go) based workloads. I believe this is because mutexes employ spin locks before parking a goroutine when there is contention. If somebody has a good idea for a workload that exhibits high non-spinning mutex contention in Go, please let me know!
+It's also interesting to note that I was unable to see significant overheads for [`mutex`](./bench/workload_mutex.go) based workloads. I believe this is because mutexes employ spin locks before parking a goroutine when there is contention. If somebody has a good idea for a workload that exhibits high non-spinning mutex contention in Go, please let me know!
Anyway, please remember that the graphs below show workloads that were specifically designed to trigger the worst block profiling overhead you can imagine. Real applications will usually see no significant overhead, especially when using a `blockprofilerate` >= `10000` (10µs).
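In code, that starting point amounts to something like the snippet below. The value is a suggestion derived from the benchmark numbers above, not an official recommendation; tune it for your own workload:

```go
package main

import "runtime"

func main() {
	// The rate is in nanoseconds: sample on average one blocking event per
	// 10µs spent blocked. Per the benchmarks above this is a reasonable
	// starting point for production use.
	runtime.SetBlockProfileRate(10000)

	// ... rest of the application ...
}
```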

cpu.md
View File

@@ -41,7 +41,7 @@ The various ways one can record CPU profiles in Go are listed below.
```
go tool pprof -http=:6061 benchmark.cpu.pb.gz
```
-2. The [net/http/pprof](net/http/pprof) package allows you to set up HTTP endpoints that can start/stop the CPU profiler via HTTP requests on demand and return the resulting pprof data file. You can pass the URL of such an endpoint directly to the pprof tool.
+2. The [net/http/pprof](https://golang.org/pkg/net/http/pprof/) package allows you to set up HTTP endpoints that can start/stop the CPU profiler via HTTP requests on demand and return the resulting pprof data file. You can pass the URL of such an endpoint directly to the pprof tool.
```
go tool pprof -http=:6061 http://localhost:6060/debug/pprof/profile?seconds=30
```
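For reference, exposing those endpoints is typically just a side-effect import plus an HTTP listener, roughly like this (the address is chosen to match the URL above):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// Serve the profiling endpoints, e.g.
	// http://localhost:6060/debug/pprof/profile?seconds=30 for a 30s CPU profile.
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```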

View File

@@ -1,6 +1,6 @@
# block-net
-This [program](./main.go) explores the [question](https://twitter.com/rogpeppe/status/1359202847708037124) of whether network i/o (e.g. waiting on socket read/write operations) shows up in the [block profiler](../block.md) or not.
+This [program](./main.go) explores the [question](https://twitter.com/rogpeppe/status/1359202847708037124) of whether network i/o (e.g. waiting on socket read/write operations) shows up in the [block profiler](/block.md) or not.
The program does the following:
@@ -22,5 +22,5 @@ However, as you can see below, the block profiler [captures](./block.pb.gz) only
![block-net](./block-net.png)
-This means that the [block profiler](../block.md) is generally not able to give a good idea about goroutines that are waiting on network i/o.
+This means that the [block profiler](/block.md) is generally not able to give a good idea about goroutines that are waiting on network i/o.
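To make the finding concrete, here is a rough sketch of the kind of experiment the program runs (an assumed reconstruction, not the repo's actual main.go): a goroutine blocks on a socket read while block profiling is enabled, and the resulting profile contains no sample for that wait:

```go
package main

import (
	"net"
	"os"
	"runtime"
	"runtime/pprof"
	"time"
)

func main() {
	runtime.SetBlockProfileRate(1)

	ln, err := net.Listen("tcp", "localhost:0")
	if err != nil {
		panic(err)
	}
	go func() {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		// This read waits on network i/o via the netpoller rather than on a
		// channel or mutex, so the block profiler does not record it.
		buf := make([]byte, 1)
		conn.Read(buf)
	}()

	// Connect but never send anything, so the reader stays blocked.
	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	time.Sleep(2 * time.Second)

	// The resulting profile shows channel/mutex waits but not the socket read.
	f, err := os.Create("block.pb.gz")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	pprof.Lookup("block").WriteTo(f, 0)
}
```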