* Add randomness pool mode for V4 UUID
Adds an optional randomness pool mode for Random (Version 4)
UUID generation. The pool contains random bytes read from
the random number generator on demand in batches. Enabling
the pool may improve the UUID generation throughput
significantly.
Since the pool is stored on the Go heap, this feature may
be a bad fit for security-sensitive applications. That's
why it's implemented as an opt-in feature.
* Document thread-safety aspects
Zero allocation on failed parses by using a non-pointer error.
related #69
name old time/op new time/op delta
ParseBadLength-16 15.4ns ± 0% 3.5ns ± 0% ~ (p=1.000 n=1+1)
name old alloc/op new alloc/op delta
ParseBadLength-16 8.00B ± 0% 0.00B ~ (p=1.000 n=1+1)
name old allocs/op new allocs/op delta
ParseBadLength-16 1.00 ± 0% 0.00 ~ (p=1.000 n=1+1)
* Add benchmarks for different kinds of invalid UUIDs
Also add a test case for too-short UUIDs to ensure behavior doesn’t
change.
* Use a custom error type for invalid lengths, replacing `fmt.Errorf`
This significantly improves the speed of failed parses due to wrong
lengths. Previously the `fmt.Errorf` call dominated, making this the
most expensive error and more expensive than successfully parsing:
BenchmarkParse-4 29226529 36.1 ns/op
BenchmarkParseBadLength-4 6923106 174 ns/op
BenchmarkParseLen32Truncated-4 26641954 38.1 ns/op
BenchmarkParseLen36Corrupted-4 19405598 59.5 ns/op
When the formatting is not required up front and is instead done
on demand, the failure itself is much faster:
BenchmarkParse-4 29641700 36.3 ns/op
BenchmarkParseBadLength-4 58602537 20.0 ns/op
BenchmarkParseLen32Truncated-4 30664791 43.6 ns/op
BenchmarkParseLen36Corrupted-4 18882410 61.9 ns/op
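The shape of that change can be sketched as follows; the type and field names, and the accepted lengths shown, are illustrative rather than the library's exact definitions. The key point is that the failure path returns a plain value instead of calling `fmt.Errorf`, so the message is only formatted if `Error()` is actually invoked:

```go
package main

import "fmt"

// invalidLengthError is a sketch of the non-pointer error type: the
// hot failure path just returns this value, deferring all formatting
// until Error() is called.
type invalidLengthError struct{ n int }

func (e invalidLengthError) Error() string {
	return fmt.Sprintf("invalid UUID length: %d", e.n)
}

// parseLen stands in for the length check at the top of Parse; the
// accepted lengths here are an illustrative subset.
func parseLen(s string) error {
	switch len(s) {
	case 36, 32: // canonical and dashless encodings
		return nil
	default:
		return invalidLengthError{len(s)}
	}
}

func main() {
	fmt.Println(parseLen("too-short")) // invalid UUID length: 9
}
```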
Making xtob take two byte arguments avoids a lot of slicing, which
makes the Parse function faster. In addition, because so much slicing
is avoided, duplicating the parse logic in ParseBytes made that
function slightly faster than Parse (<1ns).
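The two-byte signature can be sketched like this, with a lookup table mapping ASCII hex digits to nibble values (the table-plus-sentinel approach shown is one common way to implement it):

```go
package main

import "fmt"

// xvalues maps ASCII bytes to their hex nibble value; 255 marks
// invalid input.
var xvalues [256]byte

func init() {
	for i := range xvalues {
		xvalues[i] = 255
	}
	for c := byte('0'); c <= '9'; c++ {
		xvalues[c] = c - '0'
	}
	for c := byte('a'); c <= 'f'; c++ {
		xvalues[c] = c - 'a' + 10
	}
	for c := byte('A'); c <= 'F'; c++ {
		xvalues[c] = c - 'A' + 10
	}
}

// xtob converts two hex characters into one byte. Taking the two
// bytes directly, rather than a 2-byte slice, means callers index
// the input instead of slicing it.
func xtob(x1, x2 byte) (byte, bool) {
	b1 := xvalues[x1]
	b2 := xvalues[x2]
	return (b1 << 4) | b2, b1 != 255 && b2 != 255
}

func main() {
	b, ok := xtob('f', 'a')
	fmt.Printf("%#x %v\n", b, ok) // 0xfa true
}
```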
The BenchmarkParseBytesNative function has been removed (parseBytes was
identical to ParseBytes), and a new benchmark,
BenchmarkParseBytesUnsafe, has been added to benchmark the old way of
parsing []byte (which is slightly slower than Parse, and thus than the
new ParseBytes implementation).
benchmark old ns/op new ns/op delta
BenchmarkUUID_MarshalJSON-4 685 667 -2.63%
BenchmarkUUID_UnmarshalJSON-4 1145 1162 +1.48%
BenchmarkParse-4 61.6 56.5 -8.28%
BenchmarkParseBytes-4 65.7 55.9 -14.92%
BenchmarkParseBytesCopy-4 121 115 -4.96%
BenchmarkNew-4 1665 1643 -1.32%
BenchmarkUUID_String-4 112 113 +0.89%
BenchmarkUUID_URN-4 117 119 +1.71%
benchmark old allocs new allocs delta
BenchmarkUUID_MarshalJSON-4 4 4 +0.00%
BenchmarkUUID_UnmarshalJSON-4 2 2 +0.00%
BenchmarkParse-4 0 0 +0.00%
BenchmarkParseBytes-4 0 0 +0.00%
BenchmarkParseBytesCopy-4 1 1 +0.00%
BenchmarkNew-4 1 1 +0.00%
BenchmarkUUID_String-4 1 1 +0.00%
BenchmarkUUID_URN-4 1 1 +0.00%
benchmark old bytes new bytes delta
BenchmarkUUID_MarshalJSON-4 248 248 +0.00%
BenchmarkUUID_UnmarshalJSON-4 248 248 +0.00%
BenchmarkParse-4 0 0 +0.00%
BenchmarkParseBytes-4 0 0 +0.00%
BenchmarkParseBytesCopy-4 48 48 +0.00%
BenchmarkNew-4 16 16 +0.00%
BenchmarkUUID_String-4 48 48 +0.00%
BenchmarkUUID_URN-4 48 48 +0.00%
The MustParse and MustNewUUID functions have been removed, since they
can be replaced with the new Must function:
uuid.Must(uuid.Parse(s))
uuid.Must(uuid.NewUUID())
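The wrapper itself is a one-liner over any `(UUID, error)` pair. A minimal self-contained sketch, with a stand-in `UUID` type and a hypothetical `fakeParse` in place of the library's parse functions:

```go
package main

import (
	"errors"
	"fmt"
)

// UUID is a stand-in for the library's UUID type in this sketch.
type UUID [16]byte

// Must wraps any call returning (UUID, error) and panics on error,
// mirroring the library's uuid.Must helper.
func Must(u UUID, err error) UUID {
	if err != nil {
		panic(err)
	}
	return u
}

// fakeParse is a hypothetical parser used only to exercise Must.
func fakeParse(ok bool) (UUID, error) {
	if !ok {
		return UUID{}, errors.New("invalid UUID")
	}
	return UUID{0x01}, nil
}

func main() {
	u := Must(fakeParse(true)) // uuid.Must(uuid.Parse(s)) in real code
	fmt.Println(u[0])
}
```

Because Must takes the result pair directly, one helper covers every constructor or parser instead of needing a MustXxx variant per function.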
This also fixes a bug in the UnmarshalJSON method that prevented the
json.go file from compiling.
A common helper function is used for UUID.String(), UUID.URN(), and
UUID.MarshalJSON(). Any performance hit in UUID.String() and
UUID.URN() appears to be negligible. The benefit to UUID.MarshalJSON()
is several hundred nanoseconds (23% faster) and 2 fewer allocations
(21% fewer bytes).
Some redundant checks are removed from the UUID.UnmarshalJSON() method.
The "encoding/json".Unmarshaler interface specifies that
implementations can assume the input is valid JSON. This allows one to
assume that (1) the input is not empty and (2) if index 0 is a quote,
then the content is a JSON string and the last index holds the
terminating quote. The second point is not completely explicit in the
documentation, but it holds in practice (and it is safe to assume;
errors will be caught).
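The check this reasoning permits can be sketched as follows; `unquote` is a hypothetical helper name, showing how a single leading-quote test replaces several bounds checks:

```go
package main

import (
	"errors"
	"fmt"
)

// unquote relies on encoding/json handing Unmarshaler valid JSON:
// a leading quote implies a JSON string whose final byte is the
// matching closing quote, so the content is simply the bytes between.
func unquote(data []byte) ([]byte, error) {
	if len(data) < 2 || data[0] != '"' {
		return nil, errors.New("invalid UUID in JSON: not a string")
	}
	return data[1 : len(data)-1], nil
}

func main() {
	s, err := unquote([]byte(`"f47ac10b-58cc-0372-8567-0e02b2c3d479"`))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(s))
}
```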
A single array is used as a buffer, and "encoding/hex".Encode()
hex-encodes directly into its final destination. Loops are avoided
because they are simple enough to unroll manually.